
The Geopolitics of Artificial Intelligence Regulation
Artificial intelligence (AI) has the potential to be what Edison said of electricity: "it is a field of fields... that holds the secret to reorganizing the life of the world". At every level of complexity, AI is a technology with a radical impact on the global economy and on security. AI solutions can be integrated into industrial robots as well as military ones; they can serve as cyber defenders or as cyber attackers far more agile than human experts; they can analyze information to uncover threats, but also to violate individuals' rights and privacy, to discriminate, or to amplify social and political polarization. More than any other technology, AI illustrates the dual nature of emerging technologies and the need to cooperate on, regulate and sustainably adopt them while societies maintain their resilience and values.
Artificial intelligence research has a long history, but only a few of its peaks have entered the public consciousness, such as IBM's Deep Blue, which beat Garry Kasparov at chess in 1997, or IBM's Watson, which won the US general-knowledge quiz show Jeopardy! in 2011. With the launch of ChatGPT for natural language and of generative image models such as Midjourney as services for ordinary users, public debate has exploded over the impact of AI on work, the economy, warfare and the survival prospects of the human species. As if to compensate for the frivolity of earlier discussions, the new awareness of AI's potential that comes from direct interaction with near-magical services has generated a kind of hysteria about the dangers (but also the promise) of the technology. Strong warnings from high-profile figures such as Elon Musk, and from a great many digital technology leaders and researchers, have begun to establish in the public mind the firm need for AI regulation. Unfortunately, that regulation has, perhaps inevitably, become entangled with the geopolitical competition between major powers, which risks diverting it from the goal of ensuring the collective good of humanity.
Is it though?
In all this debate, the concept of AI is used loosely. The average person thinks of Skynet from the Terminator movies, but we have no prospect at this point of creating an AI that is truly conscious and parallels human intelligence. Rather, we are generating algorithms optimized to trick users, a kind of artificial competence that passes the Turing test (named after the mathematician Alan Turing): the user cannot tell that they are not talking to a human. The media and the fiction we consume, together with the human tendency to anthropomorphize everything from animals to weather phenomena, have trained us to expect real AI, an impression maintained for advertising purposes and by those working in the field. Machine learning can produce very complex AI models with emergent, that is, unplanned, qualities, but I do not think a complicated or even complex system automatically possesses the quality that self-aware beings have. Professional and amateur philosophers have taken ChatGPT by storm to see whether OpenAI has created some kind of Galatea, but it is clear that the definitions of intelligence we have worked with to date no longer match the realities of our technical capabilities. It resembles the future imagined by Star Trek in the 1960s, in which mobile devices with very high computing power were placed on the same plane of technological advancement as the warp drive that lets humanity travel across the galaxy. Half a century later, everyone has a smartphone, but we no longer have the ability to reach the Moon, at least for now.
Therefore, I believe that the regulation of artificial intelligence should cover all systems capable of autonomous decision-making and content generation, not just hypothetical systems that could be considered truly alive. At the same time, the strong anxiety we see today is not a purely rational reaction to the technological question of true AI. It is also a manifestation of the eschatological obsession of Western civilization (no other civilization is as preoccupied with the idea of collective decline and ruin), and of the anxiety of modern man, who has ceded his autonomy to complex systems that self-regulate with little human input in their daily functioning beyond management and supervision. More and more industrial control systems for critical infrastructure have been heavily digitalized and self-regulate on the basis of SCADA sensors, and newer generations of systems are even further removed from human intent in their operation, whether smart grids, traffic management systems, the synchronization of databases and cloud services across the Internet, or content moderation on social media.
Transatlantic regulation of AI
The main geopolitical axis of cooperation on AI regulation is transatlantic, between the US and the European Union. Within the EU-US Trade and Technology Council (TTC), launched in 2021 following the EU-US summit, one of the ten working groups is dedicated to AI and the ethical and value dimensions of its implementation.
The EU has also had a clear role in coordinating Member States on AI, through the Artificial Intelligence Act (COM/2021/206 final, Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts), the 2021 Coordinated Plan on AI, and various documents proposing general principles for ethical AI, voluntary codes of conduct, recommendations and other forms of non-coercive governance, prioritizing action with partners such as the US. The EU's High-Level Expert Group on Artificial Intelligence has already published ethics guidelines intended as a basis for regulating the wide range of applications that may incorporate artificial intelligence. The Europeans use the "trustworthy AI" formula to encapsulate dimensions of regulation ranging from avoiding discrimination, to not using AI for censorship, to banning weapons that make autonomous trigger-pulling decisions.
The EU approach has an important geopolitical dimension, as the EU aims to become, according to COM/2021/206, "a global leader in safe, trusted and ethical artificial intelligence", because only "joint action at EU level can protect the EU's digital sovereignty and use regulatory power and tools to shape global rules and standards". The EU believes that early regulation of AI can produce a "Brussels effect" in the field, whereby the need to adapt to the European market forces widespread adoption of European regulatory preferences, as has happened with many industrial, health and safety standards.
The Trump Administration was reluctant to support international cooperation and regulatory efforts on AI, because it assessed the issue primarily from a security perspective and wanted to avoid discouraging innovation with an overly restrictive framework; the Biden Administration has taken a different approach, one that prioritizes the risk management of new technologies and makes transatlantic cooperation and coordination on AI possible. International cooperation is one of the six strategic pillars of the US National AI Initiative, and the US think tank sector published proposed regulatory principles as early as 2019 (e.g. the Brookings Institution). Even the US Department of Defense has issued principles governing the use of AI. The final report of the US National Security Commission on Artificial Intelligence has a chapter dedicated to cooperation with partners, particularly the EU.
Both parties' support for the OECD Principles on AI, which emphasize trust, respect for human rights and democratic processes, is further evidence of this inclination towards cooperation. Although the EU and the US are competitors in AI on the commercial and research side (a claim that overstates European performance in the field), both stress the need to cooperate with allies and partners to establish a global technological order favorable to themselves and to democratic aspirations across all areas of new technologies.
Global competition
China is widely seen as the United States' biggest competitor in AI, with some expert reports estimating that Chinese investment in the field exceeds that of the US. The transatlantic discourse on AI regulation is heavily laden with value judgments about preventing digital authoritarianism and upholding liberal and democratic values, naturally targeting China and its partners, which seek to create a multipolar order in which they are no longer structurally dependent on the West. The Americans, in particular, have linked their efforts in digital technologies, including AI, to the alliance of democracies against global autocracies announced by Joe Biden. China has no framework of AI partnerships comparable to the transatlantic one; its partners have sovereigntist leanings and do not want their experimentation and commercial approaches constrained by others. To the extent that cooperation exists, it takes place within the framework of Chinese strategic initiatives, particularly economic ones such as the Digital Silk Road under the Belt and Road Initiative; other potential venues include the BRI Space Information Corridor for space cooperation. Chinese preferences in the field are promoted through comprehensive commercial partnerships, in which China's partners procure AI-enabled products and services, including through government-to-government transactions and projects. The Europeans and Americans lack the capacity to mobilize resources to promote the rapid adoption of their preferred standards, and they are intellectually (though perhaps not practically or dogmatically) committed to a liberal, competitive governance framework in which they are automatically disadvantaged compared with China's strong ties to Chinese companies in the field, whether private or state-owned. In this sense, the noises Westerners are making about ethical and trustworthy AI may be an attempt to disadvantage China and limit competition: China will either find itself excluded from third-party markets that adopt Western standards, or will have to compromise on its vision to meet their requirements. A similar phenomenon is occurring in the regulation of international critical infrastructure projects, where the G20 Principles for Quality Infrastructure Investment adopted at Osaka and the Blue Dot Network created by the Americans, Australians and Japanese restrict China's leeway in comprehensive infrastructure partnerships with third countries.
Conclusion
AI is often presented as a radical paradigm shifter. For the moment, that shift has not taken place, and in the area of regulation the AI issue has folded into the camps and rifts already formed between the West and the group of developing countries coordinated, if not led, by China. Despite major lingering differences between the EU and the US over the taxation of US tech companies and their access to European citizens' data, the AI issue is weighty enough to have become a priority in the transatlantic relationship, which is perhaps the most tightly connected emerging-technology regulatory bloc in the world. The need for regulation is real, given the many applications of AI and their sensitive or dual-use nature, but AI is also a geopolitical battleground between a relatively declining West and a rising East that sees emerging technologies in general, and AI in particular, as an equalizer of forces against the West.