Europe Moves Forward with AI Regulation

European lawmakers today voted overwhelmingly in favor of the landmark AI regulation known as the EU AI Act. While the act does not yet have the force of law, the lopsided vote indicates it soon will in the European Union. Companies would still be free to use AI in the United States, which so far lacks consensus on whether AI represents a risk or opportunity.

A draft of the AI Act passed by a large margin today, with 499 members of the European Parliament voting in favor, 28 voting against, and 93 abstaining. A final vote could be taken later this year after negotiations among members of Parliament, the EU Commission, and the EU Council.

First proposed in April 2021, the EU AI Act would restrict how companies can use AI in their products; require AI to be implemented in a safe, legal, ethical, and transparent manner; force companies to get prior approval for certain AI use cases; and require companies to monitor their AI products.

The AI law would rank different AI uses by the risk they pose and require that companies meet safety standards before the AI could be exposed to customers. Uses with minimal risk, such as spam filters or video games, could continue to operate as they always have and would be exempt from transparency requirements.

The screws begin to tighten with AI said to have “limited risk,” a category that includes chatbots such as OpenAI’s ChatGPT or Google’s Bard. Under the proposed law, users must be informed that they are interacting with a chatbot.
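
To make the disclosure duty concrete, here is a minimal sketch of what compliance might look like in practice. The helper name and notice wording are illustrative assumptions, not language from the Act:

```python
def disclose_ai(reply: str, already_disclosed: bool = False) -> str:
    """Prefix a chatbot reply with an AI disclosure on first contact.

    Hypothetical helper: the Act requires users to be informed they are
    talking to an AI, but does not prescribe this wording or mechanism.
    """
    if already_disclosed:
        return reply
    return "Notice: you are interacting with an AI chatbot.\n" + reply

print(disclose_ai("Hello! How can I help you today?"))
```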

Organizations would need to conduct impact assessments and audits on so-called high-risk AI systems, a category that includes self-driving cars as well as decision-support systems in education, immigration, and employment. The EU would track high-risk AI use cases in a central database.
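
One way to picture such a registry entry is sketched below. The record fields are assumptions for illustration; the Act’s actual schema is not reproduced here:

```python
from dataclasses import dataclass

@dataclass
class HighRiskRegistration:
    """Hypothetical entry in the EU's central high-risk AI database."""
    provider: str                  # fictional examples throughout
    system_name: str
    intended_purpose: str
    impact_assessment_done: bool
    last_audit: str                # ISO date of the most recent audit

entry = HighRiskRegistration(
    provider="Acme AI",
    system_name="resume-ranker",
    intended_purpose="employment decision support",
    impact_assessment_done=True,
    last_audit="2023-06-14",
)
print(entry)
```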

AI deemed to carry an “unacceptable” risk would never be allowed in the EU, even with audits and regulation. Examples of this type of forbidden AI include real-time biometric monitoring and social scoring systems. Failing to adhere to the regulation could bring fines equal to 6% or 7% of a company’s revenue.
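
Taken together, the tiers described above form a simple taxonomy, and the penalty clause is simple arithmetic. The sketch below restates both; the tier names follow the article, while the example mapping and the revenue figure are illustrative assumptions:

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"            # e.g., spam filters, video games
    LIMITED = "limited"            # e.g., chatbots; transparency duties apply
    HIGH = "high"                  # e.g., hiring or immigration decision support
    UNACCEPTABLE = "unacceptable"  # e.g., social scoring; banned outright

# Illustrative mapping of the examples named in this article; a real
# classification would follow the Act's annexes, not this dictionary.
EXAMPLES = {
    "spam filter": RiskTier.MINIMAL,
    "chatbot": RiskTier.LIMITED,
    "employment screening": RiskTier.HIGH,
    "social scoring": RiskTier.UNACCEPTABLE,
}

def max_fine(annual_revenue: float, rate: float = 0.06) -> float:
    """Penalty arithmetic: the article cites fines of 6% or 7% of revenue."""
    return annual_revenue * rate

# A hypothetical company with 10B in revenue would owe 600M at the 6% rate.
print(f"{max_fine(10_000_000_000):,.0f}")
```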

Today’s vote bolsters the notion that AI is out of control and needs to be reined in. Several prominent AI developers have recently called for a ban or a pause on AI research, including Geoffrey Hinton and Yoshua Bengio, who helped popularize modern neural networks and who signed a statement from the Center for AI Safety calling for AI to be treated as a global risk.

Hinton, who left his job at Google this spring so he could speak more freely about the threat of AI, compared AI to nuclear weapons. “I’m just a scientist who suddenly realized that these things are getting smarter than us,” Hinton told CNN’s Jake Tapper May 3. “…[W]e should worry seriously about how we stop these things getting control over us.”

However, not all AI researchers or computer scientists share that point of view. Yann LeCun, who heads AI research at Facebook parent Meta, and who joined Hinton and Bengio in winning the 2018 Turing Award for their collective work on neural networks, has been outspoken in his belief that this is not the right time to regulate AI.

LeCun said today on Twitter that he believes “premature regulation would stifle innovation,” specifically in reference to the new EU AI Act.

“At a general level, AI is intrinsically good because the effect of AI is to make people smarter,” LeCun said this week at the VivaTech conference in Paris, France. “You can think of AI as an amplifier of human intelligence. When people are smarter, better things happen. People are more productive, happier.”

“Now there’s no question that bad actors can use it for bad things,” LeCun continued. “And then it’s a question of whether there are more good actors than bad actors.”

Just as the EU’s General Data Protection Regulation (GDPR) formed the basis for many data privacy laws in other countries and American states, such as California, the proposed EU AI Act would set the path forward for AI regulation around the world, says business transformation expert Kamales Lardi.

“EU’s Act could become a global standard, with influence on how AI impacts our lives and how it could be regulated globally,” she says. “However, there are limitations in the Act…Regulation should focus on striking an intelligent balance between innovation and wrongful application of technology. The act is also inflexible and doesn’t take into account the exponential rate of AI development, which in a year or two could look very different from today.”

Ulrik Stig Hansen, co-founder and president of the London-based AI firm Encord, says now is not the right time to regulate AI.

“We’ve heard of too big to regulate, but what about too early?” he tells Datanami. “In classic EU fashion, they’re seeking to regulate a new technology that few businesses or consumers have adopted, and few people are, in the grand scheme of things, even developing at this point.”

Since we don’t yet have a firm grasp of the risks inherent in AI systems, it’s premature to write laws regulating AI, he says.

“A more sensible approach could be for relevant industry bodies to regulate AI like they would other technology,” he says. “AI as a medical device is an excellent example of that where it is subject to FDA approval or CE marking. This is in line with what we’re seeing in the UK, which has adopted a more pragmatic pro-innovation approach and passed responsibility to existing regulators in the sectors where AI is applied.”

While the US does not have an AI regulation in the works at the moment, the federal government is taking steps to guide organizations towards ethical use of AI. In January, the National Institute of Standards and Technology (NIST) published the AI Risk Management Framework, which guides organizations through the process of mapping, measuring, managing, and governing AI systems.
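
NIST describes those four activities, Map, Measure, Manage, and Govern, as the framework’s core functions. A minimal sketch of how a team might turn them into a review checklist appears below; the function names come from the RMF itself, while the one-line goals and the workflow are paraphrased assumptions:

```python
# The four core functions are from NIST AI RMF 1.0; the goal text and
# the review routine are illustrative paraphrases, not NIST language.
RMF_FUNCTIONS = {
    "Govern": "establish policies, roles, and accountability for AI risk",
    "Map": "identify the system's context, purpose, and potential impacts",
    "Measure": "assess and track identified risks with metrics and tests",
    "Manage": "prioritize, respond to, and monitor risks over time",
}

def review(system_name: str) -> None:
    """Print a review checklist for one AI system."""
    print(f"AI RMF review: {system_name}")
    for function, goal in RMF_FUNCTIONS.items():
        print(f"  [{function}] {goal}")

review("resume-screening model")
```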

The RMF has several things going for it, AI legal expert and BNH.ai co-founder Andrew Burt told Datanami earlier this year, including the potential to become a legal standard recognized by multiple parties. More importantly, it retains the flexibility to adapt to fast-changing AI technology, something that the EU AI Act lacks, he said.

Related Items:

AI Researchers Issue Warning: Treat AI Risks as Global Priority

NIST Puts AI Risk Management on the Map with New Framework

Europe’s New AI Act Puts Ethics In the Spotlight
