In an exclusive report, Time.com has revealed that OpenAI, a frontrunner in artificial intelligence, has been quietly lobbying within the European Union to soften key aspects of the AI Act – widely regarded as the world’s most comprehensive AI legislation.
OpenAI’s lobbying has been described as a bid to alleviate its regulatory burden.
The Contradictory Face of OpenAI
Despite OpenAI CEO Sam Altman’s public crusade for global AI regulation, a different story has emerged behind closed doors.
In a series of undisclosed manoeuvres, OpenAI has reportedly sought changes to the EU’s AI Act that would ease its own regulatory constraints.
Documents obtained by Time.com through freedom of information requests from the European Commission indicate that OpenAI successfully secured several amendments to the final EU law.
This move calls into question the company’s public stance on AI regulation.
‘High Risk’ AI – A Matter of Debate
OpenAI has insisted that its general-purpose AI systems, such as GPT-3 and the image generator DALL-E 2, should not be classified as “high risk,” a label that would impose rigorous legal requirements including transparency, traceability, and human oversight.
This position aligns OpenAI with tech behemoths Microsoft and Google, both of which have previously lobbied EU officials to lessen the Act’s regulatory pressure on major AI providers.
The fruits of OpenAI’s lobbying labours appear evident in the AI Act’s final draft.
Language in earlier drafts suggesting that general-purpose AI systems should be inherently high risk was conspicuously absent from the approved law. Instead, it requires providers of “foundation models” to meet a more limited set of requirements.
A White Paper Raises Eyebrows
A seven-page document titled “OpenAI White Paper on the European Union’s Artificial Intelligence Act” outlined the details of OpenAI’s lobbying efforts.
In this white paper, OpenAI contested an amendment to the AI Act that would have classified generative AI systems as “high risk” if they produced text or imagery capable of falsely appearing human-generated and authentic.
Balancing Public Benefits and Private Interests
The lobbying activities of OpenAI, laid bare by Time.com, have sparked a debate on the role of big tech firms in shaping AI legislation.
While OpenAI insists it supports the EU’s goal of ensuring the safe construction, deployment, and use of AI tools, critics argue that the company’s lobbying is more about guarding its financial interests than promoting public benefit.
Speaking on the matter, Sarah Chander, an advisor at European Digital Rights, who reviewed the OpenAI White Paper at TIME’s request, said:
“The document shows that OpenAI, like many Big Tech companies, have used the argument of utility and public benefit of AI to mask their financial interest in watering down the regulation.”
In a statement to TIME, an OpenAI spokesperson defended the company’s actions:
“At the request of policymakers in the EU, in September 2022 we provided an overview of our approach to deploying systems like GPT-3 safely, and commented on the then-draft of the [AI Act] based on that experience. We continue to engage with policymakers and support the EU’s goal of ensuring AI tools are built, deployed and used safely now and in the future.”
OpenAI’s lobbying efforts offer a fascinating insight into the company’s influence over regulatory matters and raise key questions about the intersection of public and private interests in the rapidly evolving AI landscape.