Last week, the European Union released a draft of strict regulations governing the creation and use of artificial intelligence (AI). The 108-page document aims to ban or restrict multiple “unacceptable” uses of AI, which the European Commission defines as any AI system that poses a clear threat to the safety, livelihoods, and rights of people. This covers the use of AI in a range of activities, including hiring decisions, bank lending, school enrollment selections, court decisions, and facial recognition in public places. Although the EU’s proposal faces a long road before it becomes law, as it must be approved by both the European Council and the European Parliament, it has profound implications for big tech and national governments around the world.

Providers of AI systems, which include massive companies like Amazon and Facebook as well as smaller yet competitive outfits like DeepMind and FAIR, will have to deal with a bit of a curveball. For AI activities labeled “high-risk,” like the ones mentioned earlier, these companies will have to provide extensive documentation to regulators about how their systems work, as well as “show a proper level of human oversight” in both how the system is designed and how it is put to use. For example, a credit-scoring AI system for bank loans would have to prove its accuracy and fairness, keep records of all activity, and remain under human monitoring at all times. Failure to do so could result in a fine of up to 6% of global turnover.

Similar to the EU’s General Data Protection Regulation, which became the global standard for big tech regulation in 2018, this draft will have major consequences for national governments worldwide. Foreign governments often end up adopting the EU’s rules so their firms can compete and comply with the de facto global standard. The U.S., China, and France are among many countries likely to see this as an obstacle. For example, the U.S. currently uses AI in criminal justice and in the allocation of public services like income support. China uses AI for social scoring systems that track the trustworthiness of people and businesses. France has already integrated AI into its security apparatus. Should the EU’s AI regulations go into effect, these countries will likely be forced to modify or even ban their own preexisting technologies.
