As the race to develop more powerful artificial intelligence services like ChatGPT accelerates, some regulators are relying on old laws to control a technology that could upend the way societies and businesses operate.
The European Union is at the forefront of drafting new AI rules that could set the global benchmark to address privacy and safety concerns that have arisen with the rapid advances in the generative AI technology behind OpenAI’s ChatGPT.
The agency will begin examining other generative AI tools more broadly, a source close to Garante, Italy's data protection authority, told Reuters. Data protection authorities in France and Spain also launched probes in April into OpenAI's compliance with privacy laws.
BRING IN THE EXPERTS
Generative AI models have become well known for making mistakes, or “hallucinations”, spewing out misinformation with uncanny certainty.
He cited the U.S. Federal Trade Commission’s (FTC) investigation of algorithms for discriminatory practices under existing regulatory powers.
‘THINKING CREATIVELY’
French data regulator CNIL has started “thinking creatively” about how existing laws might apply to AI, according to Bertrand Pailhes, its technology lead. For example, in France discrimination claims are usually handled by the Defenseur des Droits (Defender of Rights). However, that body’s lack of expertise in AI bias has prompted CNIL to take the lead on the issue, he said.