
Global AI Regulation and What Lies Ahead

18 Jul

Artificial Intelligence (AI) has become a focal point of extensive government regulation worldwide. While AI offers numerous advantages, such as enhanced productivity and cost-effectiveness, it also comes with inherent risks and challenges. Bias and discrimination in AI systems can lead to unjust outcomes, and the technology's reliance on vast amounts of personal data raises concerns about privacy and data security.

As a result, governments across the globe are actively introducing regulations to ensure the safe, responsible, and ethical development and use of AI. These regulations encompass various aspects, ranging from data privacy and security to algorithmic transparency and accountability.

This article delves into the distinctive AI regulatory approaches taken by the United States, the European Union (EU), Canada, China, Brazil, and India. Each jurisdiction seeks to strike a balance between economic progress, societal welfare, and public interests while promoting innovation.


European Union: The Introduction of the Artificial Intelligence Act (AIA)

The European Union unveiled the Artificial Intelligence Act (AIA) on April 21, 2021. This legislation proposes a risk-based approach to govern the use of AI in both public and private sectors, sorting AI applications into three tiers: unacceptable risk, high risk, and applications that are not explicitly banned. Applications posing unacceptable risk, such as systems that threaten people's safety, livelihoods, or rights, are prohibited outright, while high-risk applications in sensitive sectors like health must pass rigorous safety and efficacy checks by regulators. The AIA is currently undergoing review in the European Parliament.

Notably, the EU's approach encompasses all automated technologies, not just specific areas of concern. Its definition of AI systems covers a broad array of automated decision-making tools, even those not traditionally considered AI.


Canada: Introducing the Artificial Intelligence and Data Act (AIDA)

In June 2022, the Canadian Parliament introduced a draft regulatory framework for AI, employing a modified risk-based strategy. The legislation, Bill C-27, comprises three pillars, one of which, the Artificial Intelligence and Data Act (AIDA), focuses specifically on AI. Canada's AI regulations aim to standardize how private companies design and develop AI across provinces and territories.

Unlike the EU, Canada's modified risk-based approach does not outright ban automated decision-making tools in critical areas. Instead, the AIDA requires developers of high-risk AI systems to create mitigation plans that reduce risk and increase transparency, and these plans must comply with anti-discrimination laws governing AI's use within social, business, and political systems.


United States: AI Bill of Rights and State Initiatives

To date, the United States has not passed comprehensive federal legislation governing AI applications. Instead, the Biden Administration and the National Institute of Standards and Technology (NIST) have released broad AI guidance to promote safe AI use. Additionally, various state and city governments are implementing their own AI regulations and task forces that target specific use cases rather than regulating AI technology holistically.

At the federal level, the White House's Blueprint for an AI Bill of Rights, released in October 2022, addresses concerns about AI misuse and provides recommendations for safe AI usage in both public and private sectors. Though not legally binding, it advocates key safety strategies such as enhanced data privacy, protection against algorithmic discrimination, and guidelines for prioritizing secure and effective AI tools. The blueprint serves as guidance for lawmakers at all levels of government when considering AI regulation.

NIST, an agency within the Department of Commerce responsible for developing technology standards, has also published guidelines for managing AI bias and monitors the integration of AI tools across the federal government.

In 2022, fifteen states and localities proposed or enacted AI-related legislation. Some bills focus on regulating AI tools in the private sector, while others set standards for AI use in the public sector. New York City introduced one of the first AI laws in the U.S. (Local Law 144), which took effect in January 2023 and aims to prevent AI bias in employment practices. Meanwhile, states like Colorado and Vermont established task forces to study AI applications such as facial recognition at the state level.


China: Focus on Algorithm Transparency and AI Industry Development

China has set an ambitious goal for its private AI industry to reach an annual revenue of $154 billion by 2030. While the country has not yet implemented comprehensive AI technology rules, its Internet Information Service Algorithmic Recommendation Management Provisions, which took effect in March 2022, govern how private companies use online algorithms for consumer marketing. The law requires companies to inform users when AI is used for marketing purposes and prohibits using customers' personal data to offer the same product at different prices. However, it does not extend to the Chinese government's use of AI.

In September 2022, Shanghai became the first provincial-level government in China to pass a law targeting private-sector AI development. The Shanghai Regulations on Promoting the Development of the AI Industry provide a framework for companies in the region to develop and govern their AI products.


Brazil: Draft Regulation on AI

Brazil is in the process of crafting its first law to regulate AI. On December 1, 2022, a commission of legal experts convened by the Brazilian Senate presented a report featuring studies on AI regulation, including a draft AI bill. The draft will serve as a starting point for further deliberations in the Senate concerning new AI legislation. According to the commission's rapporteur, AI regulation will rest on three main pillars: safeguarding the rights of individuals affected by an AI system, classifying risk levels, and implementing governance measures for companies that provide or operate AI systems.


India: NITI Aayog’s Draft Regulation

India currently lacks a specific regulatory framework for AI systems. Nevertheless, working papers published by NITI Aayog, the Indian government's public policy think tank, in 2020, 2021, and 2022 indicate the government's intention to pursue AI regulation. The central proposal involves establishing a supervisory authority responsible for setting principles for responsible AI, providing guidelines and standards, and coordinating with sectoral authorities.


Future Steps for Global Regulation

Artificial Intelligence holds immense promise, driving global growth and shaping the future of innovation. Given the potential for misuse, however, it is crucial to implement regulations that safeguard consumers.

The various approaches discussed in this article offer insights into how policymakers worldwide are tackling specific AI-related issues and the technology as a whole. The EU adopts a broad approach that regulates all automated decision-making tools, delineating where they can and cannot be used. In contrast, the U.S. provides voluntary recommendations and standards at the federal level, with states and cities pursuing targeted regulations to address specific harms. Canada's modified risk-based approach regulates all AI tools while allowing companies to devise their own risk-mitigation strategies in certain areas. China seeks to increase transparency for consumers while striving to become a global powerhouse in AI standards.

To comply with evolving regulations, companies must establish comprehensive positions on AI ethics and governance. Policymakers, in turn, must focus on protecting consumers from legitimate harms while remaining mindful of the impact stricter regulatory regimes may have on AI innovation.
