EUROPEAN UNION’S MOVES TO REGULATE ARTIFICIAL INTELLIGENCE

European Union lawmakers this month voted on an Act to regulate artificial intelligence (AI), one of the first significant attempts to rein in the fast-developing sector and a move that would put the industry’s biggest players under supervision. Other countries are also scrambling to design rules. Though industry leaders like OpenAI boss Sam Altman warn that some approaches are too onerous, the risks of pandering to special pleading exceed the dangers of stifling a new technology.

European Union lawmakers have agreed changes to the draft AI rules, including a ban on the use of the technology in biometric surveillance and a mandate that generative AI systems like the ChatGPT chatbot disclose AI-generated content.

AI is developing so rapidly that there is not even a consensus on whether it needs specific regulation. As venture capitalist Marc Andreessen notes, AI models are made up of code and algorithms just like other computer programmes. What distinguishes them, however, is that they do not mechanically follow instructions set by humans but draw conclusions independently. For instance, AI technology empowers photo-generating tools to create pictures that look like real people. These capabilities are also the reasons AI might create big problems.

AI’s ability to learn from freely available data also poses big questions about violations of copyright law. And the potential for the technology to rapidly displace large numbers of jobs understandably makes politicians nervous.

Despite these challenges, governments have agreed to some general principles. A 2019 agreement brokered by the Organisation for Economic Co-operation and Development specified that AI should be transparent, robust, accountable and secure. Beyond these generalities, however, there’s widespread disagreement over what counts as AI, what regulators should try to solve for, and to what extent they need to enforce their objectives.

Policymakers are divided into different camps. At one end of the spectrum are the EU, China and Canada, which are trying to construct a new regulatory architecture. At the other end are India and the United Kingdom; the UK’s white paper in April effectively said AI doesn’t require any special regulation beyond a set of principles similar to those articulated by the OECD.

President Joe Biden has proposed an AI Bill of Rights while Congress is still debating the need for targeted rules. Such divergence suggests the world is unlikely to see a global AI regulator.

Because AI models can replicate tasks done by humans, it is harder to spot which is which. The models’ ability to make apparently independent decisions also blurs lines of responsibility and makes it harder for their designers to monitor how the technology is used.

The EU’s proposed law sorts AI applications into four buckets. A minority of “unacceptable risk” uses of AI, such as real-time facial recognition for surveillance of citizens, will be banned. The majority will be deemed limited or low-risk and subject to minimal oversight. AI systems that could be used to influence voters and the outcome of elections, and systems used by social media platforms with more than 45 million users, are labeled “high-risk”.

General-purpose AI systems like OpenAI’s ChatGPT, which underpin many of these applications, would be held accountable for those risks.

The EU law will require “high-risk” applications to reveal content generated by the technology and to publish summaries of the copyrighted data used to train them; companies that make inadequate disclosures face fines worth up to 7% of their total revenue.

That seems excessive for applications like ChatGPT, which are mainly used for summarising documents and helping to write code. The exacting standards could also discourage smaller companies and non-profit organisations from developing general-purpose AI systems, limiting innovation and leaving the field to industry behemoths like Microsoft-backed OpenAI or Google owner Alphabet. That in turn could stifle positive uses of AI in areas such as drug development or semiconductor design, or ensure that countries with a lighter regulatory touch reap the benefits of innovation.

There is currently a contest between regulators and big technology firms. Watchdogs in Brussels and the United States are engaged in a belated attempt to limit the power of giants like Alphabet, Microsoft and Facebook owner Meta Platforms. And even the EU’s proposed regulation effectively lets AI practitioners mark their own homework. Given the speed of innovation and the risk of ChatGPT and other generative AI models spawning more problematic applications, the threat of too much regulation is less daunting than the alternative.

Among other changes, European Union lawmakers want any company using generative tools to disclose the copyrighted material used to train its systems, and want companies working on “high-risk” applications to conduct a fundamental rights impact assessment and evaluate the environmental impact. Systems like ChatGPT would have to disclose that content was AI-generated, help distinguish so-called deep-fake images from real ones, and ensure safeguards against illegal content.

OpenAI CEO Sam Altman said during a speech in Paris that the creator of ChatGPT wants “to make sure it is able to comply” in discussions with European regulators.

Altman also said on Twitter that the company has no plans to leave Europe, reversing an earlier threat to leave the region if complying with upcoming laws on artificial intelligence became too hard. “We are excited to continue to operate here and of course have no plans to leave.”
