"Franchise" is a short story by Isaac Asimov that first appeared in a science fiction magazine in 1955. It depicts a United States transformed into an "electronic democracy" in which the world's most advanced computer, Multivac, selects a single person to answer a number of questions and then uses the answers to determine the outcome of a vote, making an actual election unnecessary.
While we have not yet reached this disturbing future, the role of artificial intelligence and data science in democratic elections is becoming increasingly important. The election campaigns of Barack Obama and Donald Trump, the Synthetic Party of Denmark, and the massive data theft from the Macron campaign are good examples.
One of the first successful examples of using big data and social networking analytics techniques to refine an election bid was Barack Obama’s 2012 US presidential campaign. This campaign and many others that followed used traditional survey methods supplemented with social media analysis.
These analytical techniques provide low-cost and near real-time methods of measuring voter opinion. Natural Language Processing (NLP) techniques such as sentiment analysis are often used to analyze messages in tweets, blogs and other online posts and measure whether the opinions expressed are positive or negative regarding a particular politician or election message.
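To make the idea concrete, here is a minimal lexicon-based sentiment scorer, a toy illustration of the kind of NLP technique described above. The word lists are hypothetical stand-ins for a real sentiment lexicon such as VADER or SentiWordNet, and real campaign tooling would be far more sophisticated.

```python
# Toy lexicon-based sentiment scoring: count positive vs. negative words.
# The lexicons below are invented for illustration only.
POSITIVE = {"great", "win", "support", "love", "strong"}
NEGATIVE = {"bad", "lose", "corrupt", "hate", "weak"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]; positive values indicate favorable posts."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return (pos - neg) / total if total else 0.0

posts = [
    "Great rally tonight, strong support for the candidate!",
    "Another corrupt promise, bad policy we all lose with.",
]
for post in posts:
    print(round(sentiment_score(post), 2), post)
```

Averaging such scores over thousands of posts per day is what gives campaigns the low-cost, near real-time opinion signal the paragraph above describes.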
The main problem with this approach is sample bias, as the most active social media users are often young, tech-savvy and not representative of the population as a whole. This bias limits the ability to accurately predict election results, although the techniques are very useful for studying voting trends and opinions.
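One standard statistical correction for this kind of sample bias is post-stratification: reweighting responses so that each demographic group counts in proportion to its share of the population. The sketch below uses invented numbers purely to show the mechanics.

```python
# Post-stratification sketch: reweight a biased online sample so each age
# group counts according to its population share. All figures are hypothetical.

# Fraction of each age group in the population (e.g. from a census)
population_share = {"18-29": 0.20, "30-49": 0.35, "50+": 0.45}

# An online sample that over-represents young users: (age group, 1 = supports candidate)
sample = [
    ("18-29", 1), ("18-29", 1), ("18-29", 0), ("18-29", 1), ("18-29", 1),
    ("30-49", 0), ("30-49", 1), ("30-49", 0),
    ("50+", 0), ("50+", 0),
]

def weighted_support(sample, population_share):
    n = len(sample)
    counts = {}
    for group, _ in sample:
        counts[group] = counts.get(group, 0) + 1
    total = 0.0
    for group, vote in sample:
        # weight = population share / sample share for the respondent's group
        weight = population_share[group] / (counts[group] / n)
        total += weight * vote
    return total / n

raw = sum(v for _, v in sample) / len(sample)
print(f"raw support: {raw:.2f}, reweighted: {weighted_support(sample, population_share):.2f}")
```

In this toy data the raw sample suggests 50% support, but reweighting toward the true age distribution drops the estimate to about 28%, illustrating how badly an unadjusted social media sample can mislead.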
The 2016 Trump campaign
While sentiment analysis of social media can be unsettling in itself, it is even more disturbing when the same techniques are used to influence opinions and voting results. One of the most famous examples is Donald Trump's campaign for the US presidency in 2016. Big data and psychographic profiling had a lot to do with a victory that traditional polls could not predict.
Trump’s example was not a case of mass manipulation. Instead, individual voters received different messages based on predictions about their susceptibility to different arguments. They often received information that was biased, incomplete and sometimes contradicted other statements from the same candidate. The Trump campaign contracted Cambridge Analytica for this effort, the same company that was sued and forced to shut down after it was caught harvesting data from millions of Facebook users. Cambridge Analytica’s approach was based on psychometric methods developed by Dr. Michal Kosinski, who showed that a comprehensive user profile can be built by analyzing a small number of social media likes.
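The idea behind scoring personality traits from likes can be sketched very simply. The pages and weights below are invented for illustration; the published research fit regression models to like matrices from millions of users, which is far beyond this toy.

```python
# Toy psychographic scoring from page likes. Hypothetical "learned" weights:
# how much liking each page shifts a personality-trait score.
trait_weights = {
    "openness":     {"ModernArtPage": 0.8, "SciFiBooks": 0.6, "NascarFans": -0.3},
    "extraversion": {"PartyPlanning": 0.9, "ChessClub": -0.4, "NascarFans": 0.2},
}

def trait_scores(likes, trait_weights):
    """Sum the weights of a user's liked pages for each trait."""
    return {
        trait: sum(weights.get(page, 0.0) for page in likes)
        for trait, weights in trait_weights.items()
    }

user_likes = ["ModernArtPage", "ChessClub"]
print(trait_scores(user_likes, trait_weights))
```

A campaign armed with such scores can then send, say, high-openness users one framing of an issue and low-openness users another, which is exactly the micro-targeting described above.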
The problem with this approach is not the technology used, but how campaigns secretly use it for psychological manipulation of vulnerable voters by directly appealing to their emotions and deliberately spreading fake news via bots. This happened during Emmanuel Macron’s bid for the French presidency in 2017, when his campaign suffered a massive email theft just two days before the election. A large number of bots were then deployed to distribute alleged evidence of crimes described in the emails, which were later found to be false.
Political action and government
Another worrying thought is the possibility of a government driven by artificial intelligence (AI).
Denmark’s most recent general election saw the emergence of a new political party, the Synthetic Party, led by an AI chatbot named Leader Lars that sought a seat in the country’s parliament. Of course, there are real people behind the chatbot, notably the MindFuture Foundation. Leader Lars was trained on the manifestos of Denmark’s fringe political parties going back to 1970, with the aim of developing a platform that appeals to the roughly 20% of the country’s population who never vote.
While the Synthetic Party may have outlandish ideas, such as a universal basic income of nearly $15,000 a month, it has stimulated debate about the potential for an AI-driven government. Could a well-trained and well-equipped AI application really govern humans?
We are currently seeing one AI breakthrough after another at lightning speed, particularly in natural language processing, following the introduction of a new, simple network architecture: the Transformer. These models are giant artificial neural networks trained to generate text, but they can easily be adapted to many other tasks. They learn the general structure of human language and develop an understanding of the world through what they have ‘read’.
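The core operation of the Transformer, scaled dot-product self-attention, fits in a few lines. This is a minimal sketch with random weight matrices; in a trained model these matrices are learned, and real systems stack many attention layers with multiple heads.

```python
import numpy as np

# Minimal scaled dot-product self-attention, the building block of the
# Transformer architecture. Weights are random here; training would learn them.

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model). Return contextualized token vectors."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)                  # pairwise token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ v                               # weighted mix of value vectors

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                              # 4 tokens, 8-dim embeddings
x = rng.normal(size=(seq_len, d_model))
w_q, w_k, w_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)
```

Each output row is a mixture of all token representations, weighted by how relevant the model judges each other token to be; this is what lets these networks pick up long-range structure in language.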
One of the most advanced and impressive examples is called ChatGPT, developed by OpenAI. It is a chatbot capable of coherently answering almost any question in natural language. It can generate text and perform complicated tasks, such as writing entire computer programs, with just a few instructions from the user.
Immune to corruption, but opaque
The use of AI applications in government has several advantages. Their ability to process data and knowledge for decision making is far superior to that of a human being. Theoretically, they would also be immune to the influence of corruption and would have no personal interests.
At the moment, chatbots can only respond to the information someone gives them. They cannot really think spontaneously or take initiative. Today’s AI systems are better viewed as answering machines – oracles – that can respond to “what do you think would happen if…” questions, rather than agents that can take action or exert control.
There are many scientific studies on the potential problems and dangers of this type of intelligence based on large neural networks. A fundamental problem is their lack of transparency – they don’t explain how they arrived at a decision. These systems are like black boxes – something goes in and something comes out – but we can’t see what’s going on inside the box.
We must not forget that there are people behind these machines who can consciously or unconsciously introduce certain biases through the learning texts they use to train the systems. Moreover, as many ChatGPT users have learned, AI chatbots can also spew out incorrect information and bad advice.
Recent advances in technology give us a glimpse of future AI capabilities that could potentially “reign”, though for now not without essential human oversight. The debate should quickly shift from technological questions to ethical and social ones.