How AI is being regulated

Background

Former US rear admiral and computer programmer Grace Hopper, no stranger to trying to get things done in large bureaucracies, famously said that it is easier to ask for forgiveness than for permission. The implication was that, with so many rules and policies, this was the only way to get things done.

Over the last 15 years, this has been the approach modern tech companies have taken. Rather than try to adhere to all existing regulations when introducing a new technology or service, they ignore the rules, ask for special dispensation, and, once they have reached a certain size and level of success, apologize for their misdemeanours. They then work with lawmakers to ‘help’ make appropriate rules for their industry, which they naturally try to make as favourable as possible to themselves. Recently, AI companies have appeared to follow a similar playbook, with OpenAI CEO Sam Altman suggesting a focus on directly regulating AI language models to avert existential risk should an AI eventually become self-aware, rather than, for example, on robust consumer protections against the everyday impacts of such technologies.

While it might sound hard to believe that the average lawmaker or government would accept this type of approach from tech companies, there are sound arguments for granting new technologies some leeway. It is true that existing rules aren’t always appropriate for regulating new technologies, and overregulating emerging sectors risks strangling them at birth, allowing the benefits (and tax revenues) to accrue to more lenient countries.

Existing Regulations

Citizens and governments are now far more sceptical of tech companies like Google and Facebook after seeing how they treated the average person’s private data over the last decade. This has led to increasingly strong data privacy requirements, most famously in Europe, which has adopted the strict GDPR framework to manage the collection and storage of individuals’ personal data.

As many uses of AI will also involve personal or confidential data, laws such as the GDPR will apply to them. Similarly, because AI takes so many forms, other existing legislation, such as anti-discrimination, consumer protection, or employment law, will also have an impact. For example, if a company uses AI to help with its hiring processes, but candidates are discriminated against because the AI was trained on biased data, existing anti-discrimination laws would still apply, and a specific AI law wouldn’t be necessary.

Similarly, as noted in a previous article on this site, Ethical Considerations in AI Adoption, there is an ongoing battle over the legality of training AI models on copyrighted images, and the same logic extends to the scraping of publicly available text, such as news articles or even Twitter posts.

Some sectors that AI is affecting are naturally more regulated to begin with, such as self-driving cars, which have to meet strict road safety standards and address industry concerns. This technology has seemed to be on the verge of mainstream adoption for at least five years; however, while trials have been carried out globally, rollouts at the time of writing have remained limited.

Different Global Approaches

Seeing the potential of AI to improve productivity, most countries have given it as much support as possible while cautiously reviewing their existing regulations.

Artificial intelligence has been at the heart of China’s recent industrial policy: the country has set the goal of becoming the global AI leader by 2030 and has given significant state support to R&D. In tandem with these investments, however, it introduced a law in August 2023 setting out restrictions on AI-generated content. Notably, the most serious restrictions focus on “public-facing” content, leaving companies that use AI internally with significant freedom.

Similarly, Saudi Arabia has also set a 2030 target to realize its AI ambitions, which it is folding into its ongoing Vision 2030 plans to diversify its economy. In March 2024, the New York Times reported that Saudi Arabia planned to invest USD 40 billion in AI. While the country has yet to lay out comprehensive legislation specifically targeting AI, at the end of 2023 its Saudi Data and AI Authority (SDAIA) laid out detailed personal data protection policies, bringing it closer to the EU’s GDPR model.

With US companies so far leading the AI revolution, the country is approaching regulation from multiple angles. In late 2023, Joe Biden issued an executive order on Safe, Secure, and Trustworthy Artificial Intelligence, tasking a range of government agencies with preparing guidance and reports on generative AI, focused on data privacy, national security, and the establishment of industry standards. Partly because of how quickly AI has developed over the past half year, no substantial legislation has yet been adopted, but several agencies have already submitted their initial reports. These include guidelines for AI in the US workplace, such as not relying exclusively on such systems for employment decisions.

Separately, the FTC (Federal Trade Commission) opened an investigation into OpenAI in mid-2023, citing concerns about user privacy after a data leak. Similarly, in January 2024, the FTC demanded information on investments and partnerships from five companies involved in generative AI or the provision of cloud services, citing anti-competition concerns. The FTC has also noted that the nature of generative AI may stifle competition and create monopolies, as it requires large datasets (some of which may not be publicly available to competitors), extensive IT infrastructure, and access to the world’s limited pool of top AI engineers.

The UK has followed what it describes as a ‘pro-innovation’ approach, mostly focused on investment and on adapting existing data privacy and cybersecurity rules to fit AI.

Globally, most countries have gone down similar paths to the above examples: setting up task forces and repurposing existing data protection legislation, but generally stopping short of any sweeping new policies. The exception is the European Union (EU), which has already passed significant legislation.

The EU Approach

In March 2024, the EU passed the Artificial Intelligence Act, which, in a similar way to the GDPR, may form a template for other countries’ approaches. Like the GDPR, it includes provisions for EU residents to lodge complaints and a right to explanations of certain types of AI-connected decisions.

Broadly speaking, the law divides AI into different categories of risk, each regulated differently:

  1. Banned or heavily restricted uses cover areas that threaten human rights, such as scraping online data to use in facial recognition databases, monitoring people’s emotions, or attempting to ‘predict’ who is likely to commit a crime.

  2. High-risk uses include AI or robots performing tasks that potentially threaten life, such as surgery, or that have a large impact on lives, such as law enforcement or immigration applications; these are restricted and subject to strict transparency requirements. Notably, general-purpose AI models, which underlie many services such as ChatGPT, would need to share details of the content they were trained on, which may be challenging to implement.

  3. Limited-risk uses cover areas such as chatbots, AI-generated text, and deepfakes or artificial videos. In many situations, companies will be required to make it explicitly clear to consumers that they are interacting with a chatbot, or that such videos are AI generated and not ‘real.’

  4. Minimal- or no-risk uses, which face little regulation, include AI used in computer games or spam filters.

Critics of the EU’s policy have pointed to the length and complexity of the law, and argued that some of its restrictions, particularly the transparency requirements, may lead US companies simply to avoid offering their services in the EU, which would in turn hurt European companies that wish to use or build upon these foundational services. However, despite the EU being the first mover, it is important not to overstate the impact of the law, which, while passed, is not expected to come fully into force until mid-2026, and may itself be further shaped by the rapid pace of new AI technology.

It is also important not to oversimplify this as certain countries being more open or more restrictive towards AI, because AI takes so many forms. For example, some countries may want to heavily regulate generative AI to stop it creating political or sexual content, yet be very open to using AI for facial recognition matching to help their police forces, something the EU’s legislation appears keen to restrict.

For firms using or developing AI products in the future, the key is to realize that while most governments want AI to succeed, the paths they take to get there will differ, so it is important to stay up to date with new regulations.
