Introduction
Artificial Intelligence (AI) has become a transformative force, revolutionizing industries, governance, and daily life. However, its rapid advancement has also raised significant ethical, legal, and human rights concerns, particularly in the areas of privacy, misinformation, and surveillance. In response, various authorities have introduced regulations to manage AI’s risks while fostering innovation. Among these, the European Union (EU) and China stand out for their contrasting approaches to AI regulation, especially concerning “deep fakes” and surveillance technology, with each striking a different balance among innovation, control, and human rights.
The EU AI Act
The EU AI Act (Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence), published in the Official Journal on 12 July 2024 and in force since 1 August 2024, represents a landmark regulatory framework aimed at ensuring AI is developed and used responsibly across Europe. A key point of discussion has been the Act’s extraterritorial reach, which holds companies outside the EU accountable when their AI systems affect EU residents or markets.
The Act categorizes AI systems into four risk levels: unacceptable, high, limited, and minimal. Unacceptable-risk AI, such as systems used for social scoring (of which China’s social credit system is the most cited example), is prohibited outright because of its potential harm to fundamental rights. High-risk AI systems, such as biometric identification used in public spaces or automated recruitment tools, are subject to strict requirements. Limited-risk applications, like customer-service chatbots, pose less harm and face only light transparency obligations, while minimal-risk systems, such as video recommendations on streaming platforms, have negligible impact on consumer safety and are left largely unregulated. This tiered approach allows the EU to impose the strictest rules on the applications deemed most potentially harmful, particularly those involving deep fakes and surveillance.

It is also crucial to recognize that the AI Act is primarily an instrument of EU consumer law: it focuses on the use of AI in consumer products and services rather than solely addressing societal harms or surveillance, and this broad focus extends its protections to a wide range of AI applications. The Act is complemented by the revised Product Liability Directive, which brings AI-based products within EU liability rules and gives consumers greater protection when such systems cause harm.
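To make the tiered structure concrete, here is a minimal sketch in Python that models the four risk levels and their broad regulatory consequences as a simple lookup. The example systems and obligation summaries are simplified illustrations drawn from the discussion above, not legal text; real classification turns on the Act’s prohibition list and Annex III, not on a dictionary.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict conformity obligations
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # largely unregulated

# Illustrative mapping only -- actual classification requires legal
# analysis of the Act's prohibitions and Annex III, not a lookup table.
EXAMPLE_SYSTEMS = {
    "social scoring of individuals": RiskTier.UNACCEPTABLE,
    "biometric identification in public spaces": RiskTier.HIGH,
    "automated recruitment screening": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "streaming video recommendations": RiskTier.MINIMAL,
}

def obligations(tier: RiskTier) -> str:
    """Summarize the regulatory consequence of each tier (simplified)."""
    return {
        RiskTier.UNACCEPTABLE: "prohibited from the EU market",
        RiskTier.HIGH: "conformity assessment, documentation, human oversight",
        RiskTier.LIMITED: "disclosure and transparency duties",
        RiskTier.MINIMAL: "no specific obligations under the Act",
    }[tier]

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.value} risk -> {obligations(tier)}")
```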
Deep fakes are AI-generated synthetic media that mimic real people’s voices and appearances, and they pose significant challenges in terms of misinformation, defamation, and privacy violations. Rather than classifying deep fakes as high-risk outright, the EU AI Act subjects them to specific transparency obligations (Article 50), precisely because of their potential to disrupt democratic processes, spread false information, and manipulate public opinion. For example, deep fakes can fabricate political statements that mislead voters or interfere with elections, letting misinformation spread rapidly through the electoral process. Under the Act, anyone deploying an AI system that generates deep fakes must clearly label the synthetic content to distinguish it from real media, ensuring that users and viewers know they are interacting with or consuming AI-generated material. Providers of the underlying models are additionally expected to maintain documentation of their systems’ design and training data to facilitate oversight and accountability.
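What machine-readable disclosure might look like in practice is sketched below: a minimal Python example that embeds an “AI-generated” label in a PNG’s metadata using the Pillow library. This is a simplified stand-in for real provenance standards such as C2PA content credentials; the metadata key names (ai_generated, generator) are hypothetical illustrations, not labels prescribed by the Act.

```python
# A minimal sketch of machine-readable disclosure for synthetic media,
# assuming Pillow is installed (pip install Pillow). The metadata keys
# used here are illustrative, not mandated by the EU AI Act.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_synthetic(src_path: str, dst_path: str, generator: str) -> None:
    """Copy an image, embedding a disclosure label in its PNG metadata."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("generator", generator)
    img.save(dst_path, pnginfo=meta)

def is_labeled_synthetic(path: str) -> bool:
    """Check whether a PNG carries the disclosure label."""
    return Image.open(path).text.get("ai_generated") == "true"

if __name__ == "__main__":
    # "face.png" and "demo-model" are placeholder names for illustration.
    label_as_synthetic("face.png", "face_labeled.png", generator="demo-model")
    print(is_labeled_synthetic("face_labeled.png"))  # True
```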
Surveillance technology, particularly AI-driven biometric identification and analysis, is another high-risk category under the EU AI Act. The regulation prohibits the use of real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, except in narrowly defined circumstances such as preventing an imminent terrorist threat. The Act also mandates rigorous data protection measures, ensuring that individuals’ biometric data is handled with the utmost care and privacy safeguards.
The EU’s approach reflects its broader commitment to human rights and data protection, as enshrined in the General Data Protection Regulation (GDPR), which sparked similar controversies over how to balance individual rights against innovation. By limiting the use of AI in surveillance, the EU aims to prevent the erosion of civil liberties while still allowing AI to be deployed in a controlled, lawful manner.
China’s Approach to AI Regulation
China has taken a different approach to AI regulation, one closely aligned with its broader governance goals of maintaining social stability, national security, and economic development. While China has not yet introduced a comprehensive AI law equivalent to the EU AI Act, it has implemented a series of regulations and guidelines targeting specific applications, including deep fakes and surveillance technologies, that raise significant human rights concerns. The two regimes also differ in focus, which is one reason their penalty structures differ. The EU AI Act primarily regulates the design and operation of AI systems, aiming to ensure consumer safety and to address issues such as bias and transparency; this makes the EU’s approach consumer-oriented, regulating the technology itself rather than its users. China’s regulations, by contrast, focus on end-users and on how AI technology is applied, particularly when it comes to controlling content and ensuring public order.
China’s approach to regulating deep fakes is encapsulated in its Provisions on the Administration of Deep Synthesis of Internet Information Services, introduced by the Cyberspace Administration of China (CAC) and in effect since January 2023. These provisions require AI-generated content, including deep fakes, to be conspicuously labeled, and they prohibit using deep synthesis technology to produce content that violates the law, distorts history, or undermines national security. China has also imposed strict penalties, including fines and criminal charges, on individuals and organizations that use deep fake technology to create misleading or harmful content. These penalties reflect China’s broader strategy of controlling information dissemination to protect public security and social order; the emphasis on labelling and punishing misuse shows that China treats deep fakes not merely as a technological issue but as a matter of state control and national stability.

Surveillance technology plays a central role in China’s AI strategy, particularly in maintaining public order and security. Unlike the EU, which restricts AI-driven surveillance, China has embraced these technologies, integrating them into its broader social governance framework. The Social Credit System and the country’s extensive network of surveillance cameras equipped with facial recognition are prime examples of AI used for monitoring and controlling the population: these systems track individuals’ behavior, enforce social norms, and ensure compliance with government policies.
While the Chinese government has introduced some rules on the responsible use of surveillance data, these measures focus primarily on preventing data breaches and unauthorized access rather than on protecting individual rights such as privacy. The regulatory emphasis has also been shifting from AI-specific oversight toward more comprehensive data protection law: the Personal Information Protection Law (PIPL) and the Network Data Security Management Regulations govern how personal information, including data collected by surveillance technologies, is handled and protected. These laws are part of China’s evolving legal framework for managing the vast amounts of data generated by AI systems and surveillance networks, but the government’s priority remains ensuring that AI surveillance technologies are effective in achieving their intended security objectives. China’s AI regulation also reflects its broader geopolitical strategy, particularly its competition with the U.S. for technological dominance. As Emmie Hine notes, China sees AI not only as a tool for national security but also as a means of asserting global influence; its regulatory approach emphasizes state control and supports its ambition to lead in AI while balancing domestic stability against international power dynamics.
Contrasting Philosophies
The EU and China’s differing approaches to AI regulation reflect fundamentally distinct philosophical foundations. The EU’s framework is grounded in principles of transparency, accountability, and human rights protection. The AI Act builds on the EU’s precautionary legal tradition, as expressed in instruments such as the GDPR and the EU Charter of Fundamental Rights, and reflects a cautious attitude toward AI deployment, particularly where civil liberties may be affected. The EU’s emphasis on transparency and oversight subjects AI technologies, especially deep fakes and surveillance, to stringent scrutiny. At the same time, the AI Act, much like the GDPR, is not only about protecting individual rights: it also serves consumer protection and economic goals, supporting the competitiveness of the EU’s digital economy while mitigating risk and preventing the misuse of AI technologies.
In contrast, China’s AI regulation is rooted in a state-centric approach that prioritizes national security, social stability, and economic growth over human rights (though critics such as Human Rights Watch have argued that the EU AI Act, too, fails to adequately protect individuals). China sees AI as a tool to reinforce state governance, where maintaining social order takes precedence over individual rights. The country’s techno-nationalist philosophy positions AI at the core of its public policy and security strategy, with technologies like AI-driven facial recognition and the Social Credit System integral to social governance. This emphasis on using AI to enhance state control diverges sharply from the EU’s rights-based regulatory model, in which AI surveillance is tightly regulated to safeguard human rights.
These philosophical differences are starkly evident in the regulation of surveillance technology. The EU limits the use of AI-driven biometric surveillance to prevent the erosion of privacy, while China actively promotes AI as a tool for governance, prioritizing its role in maintaining public security and social stability over human rights concerns.
Conclusion
As AI continues to evolve, the regulatory frameworks governing its use will play a critical role in shaping its impact on human rights. The EU AI Act and China’s AI regulations offer two distinct models for managing the risks and opportunities presented by AI, particularly in the areas of deep fakes and surveillance technology.
The EU’s approach, with its emphasis on transparency and individual rights, seeks to mitigate the potential harms of AI while fostering innovation in a controlled environment. In contrast, China’s regulatory strategy reflects its broader goals of maintaining state control and social stability, even if it means expanding the use of AI in surveillance and information management at the cost of individual human rights.