European Union Unveils Rules for Powerful A.I. Systems

European Union officials unveiled new rules on Thursday to regulate artificial intelligence. Makers of the most powerful A.I. systems will have to improve transparency, limit copyright violations and protect public safety.

The rules, which are not enforceable until next year, come during an intense debate in Brussels about how aggressively to regulate a new technology seen by many leaders as crucial to future economic success in the face of competition with the United States and China. Some critics accused regulators of watering down the rules to win industry support.

The guidelines apply only to a small number of tech companies like OpenAI, Microsoft and Google that make so-called general-purpose A.I. These systems underpin services like ChatGPT, and can analyze enormous amounts of data, learn on their own and perform some human tasks.

The so-called code of practice represents some of the first concrete details about how E.U. regulators plan to enforce a law, called the A.I. Act, that was passed last year. Rules for general-purpose A.I. systems take effect on Aug. 2, though E.U. regulators will not be able to impose penalties for noncompliance until August 2026, according to the European Commission, the executive branch of the 27-nation bloc.

The European Commission said the code of practice was meant to help companies comply with the A.I. Act. Companies that agree to the voluntary code would benefit from a “reduced administrative burden and increased legal certainty,” the commission said. Officials said those that did not sign would still have to prove compliance with the A.I. Act through other means, which could be more costly and time-consuming.

It was not immediately clear which companies would join the code of practice. Google and OpenAI said they were reviewing the final text. Microsoft declined to comment. Meta, which had signaled it would not agree, did not have an immediate comment. Amazon and Mistral, a leading A.I. company in France, did not respond to a request for comment.

CCIA Europe, a tech industry trade group representing companies including Amazon, Google and Meta, said the code of practice “imposes a disproportionate burden on A.I. providers.”

Under the guidelines, tech companies will have to provide detailed breakdowns of the content used to train their algorithms, something long sought by media publishers concerned that their intellectual property is being used to train the A.I. systems. Other rules would require the companies to conduct risk assessments of how their services could be misused, for example to create biological weapons that pose a risk to public safety.

(The New York Times has sued OpenAI and its partner, Microsoft, claiming copyright infringement of news content related to A.I. systems. The two companies have denied the suit’s claims.)

What is less clear is how the law will address issues like the spread of misinformation and harmful content. This week, Grok, a chatbot created by Elon Musk’s artificial intelligence company, xAI, shared several antisemitic comments on X, including praise of Hitler.

Henna Virkkunen, the European Commission’s executive vice president for tech sovereignty, security and democracy, said the policy was “an important step in making the most advanced A.I. models available in Europe not only innovative but also safe and transparent.”

Nick Moës, executive director of the Future Society, a civil society group focused on A.I. policy, said tech companies won major concessions. “The lobbying they did to change the code really resulted in them determining what is OK to do,” he said.

The guidelines introduced on Thursday are just one part of a sprawling law that will take full effect in the coming years. The act was intended to prevent the most harmful effects of artificial intelligence, but European officials have more recently been weighing the consequences of regulating such a fast-moving and competitive technology.

Leaders across the continent are increasingly worried about Europe’s economic position against the United States and China. Europe has long struggled to produce large tech companies, making it dependent on services from foreign corporations. Tensions with the Trump administration over tariffs and trade have intensified the debate.

Groups representing many European businesses have urged policymakers to delay enforcement of the A.I. Act, saying the regulation threatens to slow innovation while putting their companies at a disadvantage against foreign competitors.

“Regulation should not be the best export product from the E.U.,” said Aura Salla, a member of the European Parliament from Finland who was previously a top lobbyist for Meta in Brussels. “It’s hurting our own companies.”
