With Terminator: Dark Fate now released worldwide, guest writer Nikolas Kairinos, CEO and Founder of Fountech, looks at a key question: how is AI presented in film, and how does this differ from reality?

Artificial intelligence (AI) and Hollywood have a long and complicated relationship. And while it’s easy to think that AI is a new addition to the silver screen, its presence in popular films in fact goes back more than half a century.

One of the earliest films that comes to mind is Metropolis, released back in 1927. Some might be more familiar, however, with cult classics like 2001: A Space Odyssey, which came some four decades later and was followed by Blade Runner and The Terminator in the 1980s.

AI has been a popular theme for production companies for a long time now, and this trend is only growing stronger. Between 2010 and 2018, there was a massive 144% increase in the number of AI-themed films released when compared to the previous decade.

We must thank popular cinema for bringing awareness to this technology, which many people might otherwise not be familiar with. But we also cannot ignore the fact that this trend has also had some notable repercussions on our understanding of the practical capabilities of AI.

According to recent research by Fountech, one in four UK adults think that AI could be responsible for the end of humankind. And while this view might lean more towards the extreme (the majority, or 62%, of UK adults actually believe AI will do more good than harm to the world), it still gives us some food for thought: what are the dangers of misrepresenting technology in pop culture?

Below I outline some of the more common misconceptions from famous films that we ought to shatter.

What mistakes do popular films make about AI?

Let’s first consider the two main ways that AI is presented in film. The first is AI in the form of a cyborg, or a robot with human-like or super-human abilities that can either assist or harm mankind. I, Robot and Transformers are two such examples.

There is a huge caveat to the accuracy of this depiction: namely, this interpretation has very little meaningful connection with the way that we – both as consumers and businesses – would use the term ‘artificial intelligence’ today. Indeed, while the idea of cyborgs might lend itself to exciting action films, this is far from the direction that AI development has taken, or will take, in the years to come.

The second interpretation of AI is more in line with how it is used today (but still not entirely accurate). Specifically, this is AI in the form of non-physical computer programmes applying human-like intelligence and decision-making to complicated, laborious and data-intensive processes, as in The Matrix. A common everyday example is Amazon recommending products to users based on their previous online behaviour.

The real-life applications of AI

Delving a bit deeper, what specifically does Hollywood get wrong? One major theme is the idea that AI can function without humans, or indeed overtake us.

This is an idea portrayed in 2001: A Space Odyssey through HAL (the Heuristically Programmed Algorithmic Computer). In the film, which takes place predominantly within the Discovery One spacecraft, the machine quickly begins to “think” for itself and take its own course without the involvement of the human crew – despite having originally been created to control the systems of the spacecraft.

You might be wondering what is wrong with this picture, given how advanced and futuristic AI is – after all, it’s able to drive our cars without us having to control the wheel. The reality is that we’re a long way away from the technology being completely independent; indeed, even the most intelligent AI programmes still feed off human input.

Take IBM’s Watson, for example, which is renowned for taking on two of the all-time most successful Jeopardy! champions and beating them in front of millions of TV viewers. And while this is no doubt an impressive feat, the reality is that it is reflective of a generation of human-machine interactions which rely on direct human command and a massive supply of information.

Let’s consider how AI is used within two fields: healthcare and finance.

AI in healthcare

In healthcare, AI is used to identify potential symptoms and treatments in patients. And while this process might seem self-governing, in reality it relies on the expertise of doctors, who supply both medical knowledge and patient history. The AI’s natural-language processing (NLP) abilities are thus used to enhance doctors’ capacity to offer accurate and effective medical assistance.

On a basic level, NLP is the sub-field of AI that is focused on enabling computers to understand and process human languages. It is through this ability that AI can understand human input and extract data from it, before then analysing it to solve real-life problems. By inputting patient data, therefore, the AI can use NLP to identify potential symptoms and treatments, with the doctor thereafter deciding on the best course of action.
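To make the idea concrete, here is a deliberately minimal sketch of extracting symptoms from a free-text patient note. Production clinical NLP uses trained language models and handles negation, synonyms and context; this toy version simply matches words against a hand-made symptom vocabulary, which is an assumption for illustration only.

```python
# Toy symptom extractor: matches note words against a small vocabulary.
# Real clinical NLP is far more sophisticated than this sketch.

SYMPTOM_VOCAB = {"fever", "cough", "fatigue", "headache", "nausea"}

def extract_symptoms(note):
    """Return the vocabulary symptoms mentioned in a patient note."""
    # Normalise: lowercase each word and strip trailing punctuation.
    words = {w.strip(".,;:").lower() for w in note.split()}
    return sorted(SYMPTOM_VOCAB & words)

note = "Patient reports persistent cough and a mild fever."
print(extract_symptoms(note))  # ['cough', 'fever']
```

The doctor still decides what the extracted symptoms mean – the machine only accelerates the reading.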

In other cases, Watson’s vision recognition is being used to help doctors read scans such as X-rays and MRIs to better narrow the focus of a potential ailment. This entails the machine processing the raw visual input by quickly and accurately recognising and categorising different objects – after which it is able to offer an indication of how best to proceed.

AI in finance

Over in the finance industry, IBM’s Watson is being used to offer financial guidance and help companies manage risk. Again, this relies on human direction and data.

Finance professionals can ask the AI a question and receive an answer based on quite a simple function: upon receiving the question, the machine sifts through, processes and analyses huge stores of data within the organisation’s database and offers an accurate response. At first glance, this seems like a task that could easily be handled by a professional, and in reality, it is. However, there is one major benefit to delegating it to AI – the machine can sift through all this information vastly faster than any human can. This human-AI collaboration makes the task much more cost-effective, in terms of both time and resource.
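The sift-and-answer pattern described above can be illustrated with a small sketch. The records, field names and question here are all hypothetical stand-ins for an organisation's real database; the point is simply that a machine can filter and aggregate many records in one pass.

```python
# Illustrative sketch of answering a risk question by scanning records.
# All data below is hypothetical.

transactions = [
    {"client": "Acme Ltd",  "amount": 120_000, "risk": "high"},
    {"client": "Beta plc",  "amount": 45_000,  "risk": "low"},
    {"client": "Gamma Co",  "amount": 300_000, "risk": "high"},
]

def exposure(risk_level, records):
    """Total exposure to clients flagged at a given risk level,
    plus the list of matching clients."""
    matches = [r for r in records if r["risk"] == risk_level]
    total = sum(r["amount"] for r in matches)
    return total, [r["client"] for r in matches]

total, clients = exposure("high", transactions)
print(total, clients)  # 420000 ['Acme Ltd', 'Gamma Co']
```

A person could do this by hand for three records; the machine's advantage appears when there are millions.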


Major films that present AI in a dramatic light have no doubt served as a powerful illustration of its potential. To that end, Hollywood has played a starring role in bringing AI into the public consciousness, making people aware of the way interconnected technologies and huge volumes of data can translate into machine learning (ML) and AI toolsets.

But the underlying message when examining the relationship between AI and the silver screen is to take everything with a hefty pinch of salt. Yes, AI’s capabilities are immense, but in reality they are applied to far more mundane (though no less important) matters – they are used by organisations to save time and money, and deliver hitherto unthinkable products and services.

Nick Kairinos is the CEO and co-founder of both Prospex and Fountech. Prospex is a sales and marketing solution that delivers AI-powered leads. Developed in partnership with LOMi and Fountech, a leading AI development company, Prospex applies sophisticated AI technology to provide qualified, hyper-personalized and cost-effective leads for small businesses through to large corporates.