The world’s first comprehensive artificial intelligence (AI) regulation, which will take effect across the European Union (EU) on August 1, is expected to raise assessment and compliance costs for Chinese tech companies operating in the bloc’s 27 member states, according to industry experts. Passed by the European Parliament in March and approved by the Council of the EU in May, the Artificial Intelligence Act (AI Act) aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while also promoting innovation and establishing Europe as a technology leader.
Some Chinese AI companies are already expecting to spend more time and money complying with the new EU rules, amid concerns that over-regulation could stifle innovation.
Hong Kong-based Dayta AI, a global retail analytics software provider, said it will increase research and development spending to meet the EU regulation’s “compliance and assessment requirements.” The company’s co-founder and CEO, Patrick Tu, said its “security and testing costs” will rise by about 20 to 40 percent as a result, owing to “additional documentation, audits [and] certain technical measures.”

A catchy artificial intelligence-related slogan on display at the Google booth at the Hannover Messe 2024 trade fair in Germany, April 22, 2024. Photo: Bloomberg

The passage and implementation of the EU’s new rules reflect a global race to create AI guardrails amid a boom in generative AI (GenAI) services since OpenAI released ChatGPT in November 2022. GenAI refers to algorithms that can be used to create new content, such as voice, code, images, text, simulations and videos, in response to short prompts.
“The EU institutions may give people the impression of over-regulation,” said Tanguy van Overstraten, a partner at Linklaters and head of the firm’s Technology, Media and Telecommunications (TMT) group in Brussels. “What the EU is trying to do with AI law is create an environment of trust.”
The AI Act sets out obligations for technologies based on the extent of their potential risks and impacts. The regulation consists of 12 main titles covering, among other things, prohibited practices, high-risk systems, transparency obligations, governance, post-market monitoring, information sharing and market surveillance.
The regulation also requires member states to establish so-called regulatory sandboxes and real-world testing at national level. However, the rule does not apply to AI systems or models (including their outputs) that have been specifically developed and operated solely for the purposes of scientific research and development.
The European Union took an early lead in the global race to develop guardrails for artificial intelligence. Photo: Shutterstock
“If a company wants to test [an AI application] in the real world, they can benefit from a so-called sandbox that can last for up to 12 months, during which they can test their systems to a certain extent,” Linklaters’ van Overstraten said.
Failure to comply with rules banning certain AI practices could result in administrative fines of up to 35 million euros (US$38 million) or up to 7% of the violating company’s total worldwide annual turnover for the previous financial year, whichever is greater.
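For illustration only, the short Python sketch below shows how the “whichever is greater” penalty ceiling described above works in practice; the function name and the sample turnover figure are hypothetical, while the 35 million euro cap and 7 percent rate are the figures reported for the Act.

```python
def fine_ceiling_eur(worldwide_annual_turnover_eur: float) -> float:
    """Return the maximum administrative fine for banned AI practices:
    the greater of 35 million euros or 7% of the violating company's
    total worldwide annual turnover for the previous financial year."""
    FIXED_CAP_EUR = 35_000_000  # 35 million euro fixed ceiling
    TURNOVER_RATE = 0.07        # 7% of worldwide annual turnover
    return max(FIXED_CAP_EUR, TURNOVER_RATE * worldwide_annual_turnover_eur)

# Hypothetical example: with 2 billion euros in turnover, 7% of turnover
# (140 million euros) exceeds the fixed cap, so it sets the ceiling.
print(fine_ceiling_eur(2_000_000_000))  # 140000000.0
```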
“EU regulations on the quality, relevance and representativeness of training data require us to be even more careful in choosing our data sources,” said Dayta AI’s Tu.
“Our focus on data quality will ultimately improve the performance and fairness of our solution,” he added.
Tu said the AI Act takes a comprehensive, user-rights-focused approach that “imposes strict restrictions on the use of personal data.” By comparison, “China and Hong Kong’s rules appear to be more focused on enabling technological advances and aligning them with the strategic priorities of their governments,” he said.
On August 15 last year, Beijing put new GenAI regulations into effect, stipulating that GenAI service providers must “adhere to core socialist values” and not generate content that “incites subversion of state power or subversion of the socialist system, endangers national security and interests, tarnishes the country’s image, incites secession from the state, undermines national unity and social stability, or promotes terrorism, extremism, national hatred, ethnic discrimination, violence, obscenity, or pornography.”
More generally, AI models and chatbots should not generate “false and harmful information.”
“Chinese regulations require companies and products to adhere to socialist values and ensure that AI output is not perceived as harmful to political or social stability,” said Alex Roberts, a partner at Linklaters in Shanghai and head of the firm’s China TMT group. “For multinationals who are unfamiliar with these concepts, this could cause confusion among compliance officers.”
He added that China’s regulations are so far focused only on GenAI and are “viewed as more of a state- and government-led rulebook,” while the EU’s AI Act is “focused on user rights.”
Still, Roberts said the key principles of EU and Chinese AI regulation are “very similar,” including being “transparent to customers, protecting data, being accountable to stakeholders, and providing direction and guidance on products.”
The European Union’s comprehensive artificial intelligence rules, which come into force on August 1, will serve as a blueprint for the world as more governments seek to regulate the technology. Photo: Shutterstock

Beijing is also pushing for a comprehensive AI law. China’s cabinet, the State Council, included the effort in its annual legislative plans for 2023 and 2024, but no bill has yet been proposed. Other jurisdictions in Asia are also working on AI regulations. South Korea, for example, drafted the Act on the Promotion of AI Industry and the Framework for Establishing Trustworthy AI last year. The proposed regulations are still under review.
“Currently, some governments in the [Asia-Pacific] region are working on their own AI legislation, drawing heavily from EU regulations on data and AI,” said Linklaters’ Roberts. “Firms could consider lobbying local government stakeholders to increase harmonization and consistency of rules across markets.”