Robots and Empire(s)
Few concepts have ever been as tightly bound to the notion of technological advancement, and of the future in general, as artificial intelligence. The idea of highly intelligent, even sentient robots permeating various facets of human activity and society has been a staple of science fiction since the early twentieth century. The term “robot” itself was introduced to English, and to the world, by Czech playwright Karel Čapek in 1920 (from the Czech “robota”, meaning drudgery or forced labour). Robots and artificial intelligence were then developed most prominently in the works of authors such as Isaac Asimov (one of whose novels lent its title to this article), Arthur C. Clarke and Philip K. Dick, while Mary Shelley’s Frankenstein is often cited as an earlier example of an artificial being in fiction. The concept has also featured heavily in blockbuster science fiction films and TV series, either as the main theme or as part of a technologically advanced future: prominent examples include 2001: A Space Odyssey, Aliens, Star Wars, Terminator, The Matrix and, more recently, Interstellar and Ex Machina.
Not every X is Y
Historically speaking, artificial intelligence has been studied since the fifties, more specifically since 1956, when both the term and the field itself are thought to have been born at a seminal workshop at Dartmouth College attended by pioneers such as Claude Shannon, John McCarthy and Marvin Minsky. The earliest conceptual roots of AI, however, date back to antiquity and legend: here we can recall the tale of Talos, the giant bronze automaton said to guard the shores of Crete against pirates. Today, AI is intimately linked to a much wider range of fields, its implications extending far beyond the confines of the technology where it originated as an independent field of research. Among others, AI now touches upon the philosophy of mind and the nature of consciousness itself (can consciousness be simulated, and would a machine be aware and have conscious experiences as a human does?), ethics (how far should we go in developing AI, and can AI be attributed moral agency?), neuroscience, linguistics, psychology, free will, and even security, where figures such as Stephen Hawking, Bill Gates and Elon Musk have warned that AI could become intelligent enough to pose a threat to humanity.
A clarification is due, however, when referring to AI. In the vast majority of communications on artificial intelligence outside of scientific sources, the term “artificial intelligence” is conflated with a certain type of AI, namely strong artificial intelligence, also called artificial general intelligence: a machine with all the cognitive capabilities of a human being and thus indistinguishable from a human in terms of personhood and intellectual capacity. This is what science fiction typically portrays, as discussed above, and it is one of the primary goals of AI research. Creating an artificial person, the actualisation of strong artificial intelligence, is widely regarded as one of the greatest challenges facing science and philosophy, and arguably the magnum opus that mankind pursues. Nonetheless, the consensus is that such an event is very unlikely to come about in the near future: not only is more research needed to further our understanding of human consciousness, the brain and the extent to which they can be simulated, but it is widely thought that the complexity of the human brain is such that simulating one lies well beyond our current technological capabilities.
At the other end lies weak or narrow artificial intelligence (or simply artificial intelligence), i.e. AI applied to a very specific set of tasks, lacking self-awareness and broader cognitive capabilities, and unable to function beyond its designed purpose. All applications of AI today are forms of weak AI; IBM’s Deep Blue, Apple’s Siri and Microsoft’s Cortana serve as examples. Weak AI is in widespread use today, in domains ranging from stock market analysis to video games, telecommunications and medical diagnosis. One way to distinguish between strong and weak AI is to consider an application designed to play the game of battleships: weak AI can play the game and win, but it cannot understand the game itself, or the concept of a game in general, or how it is a simplified abstraction of real-world naval battles for the purpose of entertainment. Strong AI, on the other hand, would be assumed capable of understanding the nature and purpose of the game.
One of the most important milestones came, perhaps surprisingly, in 2016, when Google’s AlphaGo program faced professional player Lee Sedol in a series of five matches of go, a millennia-old Chinese board game with much simpler rules than chess, which it predates, but often thought to be far more complex in strategic depth, given its vast board (19x19 intersections, compared to the 8x8 squares of chess) and its additive rather than subtractive nature (the board starts empty and is gradually filled with pieces called ‘stones’, whereas in chess the board starts pre-arranged and pieces are gradually removed). The complexity of go is such that the best go software prior to AlphaGo could only be compared to intermediate players; go programs only seldom won against masters, and each time they needed a handicap on the human player’s part to do so. Yet AlphaGo shocked the world by playing on equal footing with, and winning four out of five matches against, Lee, one of the world’s top professional go players. This victory came after another surprising series of wins, most notably against the European go champion in October 2015. What was even more intriguing, though, was the software’s playing style, which go experts described as original and beautiful in an alien way, with many moves that seemed odd and erratic at first but made much more sense in hindsight, betraying the extensive computational abilities afforded by its AI algorithms and hardware (far more advanced than what Deep Blue used to defeat Garry Kasparov).
That the AI not only managed to win against some of the world’s best players, but did so in a completely new style, unique in the game’s more than 3,000-year history, is what brings us to the topic at hand. The fact that AI brought a new perspective to go strategy and tactics leads us to wonder what we might gain from employing similar techniques in geopolitical analysis, and what advantages machine learning, neural networks and artificial intelligence may bring when applied to geopolitics and geostrategy. Several things must be considered in order to ascertain the usefulness of AI in this particular activity.
To start off, analysing information requires, at its very basis, the capacity to determine what input is needed, to sift through a wide variety of sources according to their credibility and relevance to the issue at hand, and to detect the emerging patterns that help build the picture. It is worth noting the vast quantity of information at our disposal today. In theory, this should improve the quality of analysis; in reality, the sheer volume of available data is of limited use, given its varying degrees of credibility and our limited cognitive capacity for absorbing and processing it. The latter may actually make it harder to notice patterns and trends among the many threads of geopolitical activity. Specialists call this “information overload” or “data smog”, and it has become one of the traits associated with the so-called Information Age; in data mining and analytics, the phenomenon goes by the name of “big data”. This is where a particular type of AI algorithm called “machine learning”, and especially a subtype called “deep learning”, can be of great help: machine learning is extremely useful for dealing with large datasets and for spotting patterns and trends within them.
To be sure, machine learning is more than just statistics, though it is related to them. Machine learning is, in fact, the ability of a program to learn without being explicitly programmed to, exploiting data so as to learn from it, improve its models and thus yield more accurate results. To improve its gameplay, Google’s AlphaGo made extensive use of a particular implementation of machine learning, called artificial neural networks, which aims to emulate the behaviour of biological neurons; it simulated hundreds of thousands of games against itself in order to devise the optimal moves.
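The contrast with explicit programming can be made concrete with a minimal sketch: a single artificial neuron that learns the logical AND function purely from labelled examples, by gradient descent. Nothing here reflects AlphaGo’s actual architecture; the dataset, learning rate and number of passes are chosen purely for illustration.

```python
import math
import random

def train_neuron(data, epochs=2000, lr=0.5, seed=0):
    """Fit a single logistic neuron y = sigmoid(w1*x1 + w2*x2 + b)
    by gradient descent: the rule is never written down by hand,
    the weights are adjusted from labelled examples alone."""
    rng = random.Random(seed)
    w1, w2, b = rng.random(), rng.random(), rng.random()
    for _ in range(epochs):
        for (x1, x2), target in data:
            y = 1.0 / (1.0 + math.exp(-(w1 * x1 + w2 * x2 + b)))
            err = y - target  # gradient of the log-loss w.r.t. the pre-activation
            w1 -= lr * err * x1
            w2 -= lr * err * x2
            b -= lr * err
    return w1, w2, b

def predict(params, x1, x2):
    """Probability that the learned neuron assigns to the positive class."""
    w1, w2, b = params
    return 1.0 / (1.0 + math.exp(-(w1 * x1 + w2 * x2 + b)))

# Toy dataset: the label is 1 only when both inputs are 1 (logical AND).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
params = train_neuron(data)
```

After training, `predict(params, 1, 1)` is close to 1 and the other three inputs score close to 0, even though no AND rule was ever coded: the behaviour was extracted from the data. Real networks such as AlphaGo’s stack millions of such units, but the learning principle is the same.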
Sifting through mounds of data and finding patterns are tasks that artificial intelligence could be developed to handle, and it would be of great value to analysts seeking to discern meaning in the events that unfold, as well as to policymakers in need of more input when deciding the best course of conduct to pursue. AI is known to have been used in predicting electoral outcomes and market behaviour, and it could be used to better understand the ebb and flow of key events around the world and integrate them into our comprehension of their causality. Using AI to analyse and predict such events as uprisings, terrorist attacks, crime rates, fragile states or even the chances of a military clash, as well as market and consumer tendencies, may yield valuable insight for public policy and decision making. For example, we may conceive of an AI application that processes data on violent clashes in a geopolitical hotspot and estimates the likelihood of the conflict spilling over into neighbouring areas, and whether sending military reinforcements would be beneficial or deleterious to a state’s interests in the region. This would enable a decision maker to prepare various solutions to best meet such an outcome. AI bots that highlight the drivers and tendencies of economic agents could help policymakers decide on the best mix of public policies to prevent or manage an economic crisis, or to attract foreign direct investment, for instance, along with their assorted socioeconomic consequences.
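As a very rough sketch of how such a spillover estimate might be structured, consider the toy model below. Every feature name and weight in it is invented for illustration and drawn from no real system; in practice the weights would be learned from historical conflict data rather than set by hand.

```python
import math

# Hypothetical, hand-set weights for illustration only.
# A real system would learn these from historical data on past conflicts.
WEIGHTS = {
    "border_clashes_per_month": 0.08,
    "refugee_flow_thousands": 0.01,
    "shared_ethnic_ties": 1.2,        # 1 if armed groups span the border, else 0
    "neighbour_state_fragility": 0.9, # index between 0 and 1
}
BIAS = -3.0

def spillover_probability(features):
    """Logistic score: estimated probability that a conflict
    spreads into a neighbouring area, given the feature values."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

calm = {"border_clashes_per_month": 1, "refugee_flow_thousands": 5,
        "shared_ethnic_ties": 0, "neighbour_state_fragility": 0.2}
tense = {"border_clashes_per_month": 20, "refugee_flow_thousands": 150,
         "shared_ethnic_ties": 1, "neighbour_state_fragility": 0.8}
```

Here `spillover_probability(calm)` stays low while `spillover_probability(tense)` is high, which is the kind of ranked, quantified output a decision maker could weigh alongside other inputs; the model itself is deliberately simplistic.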
The neutral arbiter
One reason AI can play the part of a neutral arbiter is that machine programs, albeit devoid of the higher intellectual capabilities of humans, also lack several of the cognitive biases and preconceptions that often lead people astray when detecting and interpreting relations of cause and effect, or when forecasting future trends; with greater computational power, AI programs can also overcome some errors inherent in their initial programming. The use of AI applications that absorb large amounts of data and pore over it for patterns and relations between entities can therefore produce unusual yet valid points of view and alternatives that might normally elude human thought; in other words, it can give us an “outsider’s view” of human affairs. The caveat, however, is that machines and bots are also subject to errors and mistakes that humans would normally steer clear of. When considering the uses of AI in, say, airport security or terrorist profiling, one must be very cautious about the pitfalls of applying AI to these tasks and the potentially nefarious effects of both false positives and false negatives (such as someone’s rights being violated, or a terrorist eluding security checks). And since AI relies, at its very basis, on certain rules programmed into it, there is also the considerable danger that an opponent could learn how an AI is programmed and manipulate it for their own benefit.
Also of interest is the use of AI in its own cybernetic realm, where it can be employed in cybersecurity and cyberwarfare. Research is currently being carried out to explore the possibilities of integrating AI into the security framework of a company; the main areas where artificial intelligence can be enlisted are identifying threats, assessing risk and remediating security gaps. With the accusations of cyber interference in last year’s US elections surrounding Donald Trump’s victory and the early term of his presidency, the geopolitical consequences of employing software that can spot potential threats in a timely manner (or, on the flip side, perpetrate cyberattacks against a target) are such that it can affect diplomatic relations, for better or worse, as well as warfare itself.
Another noteworthy capability that research in artificial intelligence has to offer is natural language processing, or NLP for short. NLP is a subfield of AI centred on the ability of machines to understand and process human language, and on the interaction between machine language and natural language. The benefits of applying NLP to geopolitics would lie in using NLP-enabled software to interpret political statements. The advantages come in two kinds: the first involves finding political biases or contradictions with previous statements; the second would focus on the potential effect of statements by important actors (e.g. a statement by the US President on the stock market, or the national bank’s macroeconomic forecast) on the behaviour of political and economic agents.
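As a toy illustration of the first kind of use, the sketch below retrieves, from a small invented archive, the earlier statement most similar to a new one using bag-of-words cosine similarity, so that an analyst could then compare the pair for contradictions. Real NLP systems use far richer representations, such as word embeddings and neural networks; every statement and function here is hypothetical.

```python
import math
from collections import Counter

def bag_of_words(text):
    """Lowercase word counts: the simplest possible text representation."""
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    """Cosine of the angle between two word-count vectors (0 to 1)."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def most_similar(statement, archive):
    """Return the archived statement closest in wording to the new one."""
    query = bag_of_words(statement)
    return max(archive, key=lambda s: cosine_similarity(query, bag_of_words(s)))

# Invented archive of a speaker's earlier statements.
archive = [
    "we will raise tariffs on imported steel",
    "the central bank should keep interest rates low",
]
match = most_similar("interest rates must stay low for now", archive)
```

The new remark about interest rates is paired with the earlier interest-rate statement rather than the tariff one. Detecting an actual contradiction or bias within such a pair is the harder step, and is where the neural approaches mentioned above come in.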
In light of all the above, one final question we should ask ourselves is what would happen if the use of artificial intelligence as an aid in understanding geopolitical and economic phenomena were to lead to its application in the elaboration of geostrategies. It is clear that, for the moment, artificial intelligence cannot replace or wholly exceed human thought and action, and much effort would indeed be necessary to create a competent and viable artificially intelligent tool to aid in the various facets of analysing political and economic behaviour. Yet the impressive progress made in deep learning, neural networks and natural language processing, among other areas, does raise the possibility that replicating these successes in geopolitics and geoeconomics would yield a great many benefits, both for analysts seeking to comprehend the phenomena that unfold, and for policymakers and decision makers who need the output of such analysis to decide on the course of action most likely to ensure the desired outcome.