
How intelligent is AI?

By Matthew Ford
29 August 2023

Artificial Intelligence is a very hot topic at the moment, with Google Trends data showing an explosion in AI search queries over the past 12 months.

No wonder global interest in Artificial Intelligence is so high, with new AI-powered services being launched every week.

There’s a huge amount of enthusiasm around AI and also some fear about how it might impact people’s work.

But how intelligent is the AI running these new services and how intelligent might it get? Those are the questions we’ll seek to answer, but first we need to address how we define AI and how we measure intelligence.

How do we define AI?

Defining AI is actually quite difficult, with the label covering services as diverse as spam filters and self-driving cars.

But AI systems share two qualities that set them apart: they are both autonomous and adaptive. Autonomous in the sense of requiring minimal user guidance to function, and adaptive in that they are able to learn and improve.

How do we measure intelligence?

Next we need to ask how we measure intelligence. For humans, intelligence is usually measured through the tests we sit at school or university. There is no school or university for AI, so instead we have the Turing Test, devised by Alan Turing, the father of computer science.

The Turing Test is fairly simple. If a panel of judges, in conversation with a computer, cannot distinguish it from a real human, then the computer has passed the test.

There is currently some debate about whether ChatGPT has passed the Turing Test, and similar claims were made for Google’s LaMDA in 2022.

The issue for AI like ChatGPT is that the Turing Test does not reveal whether AI has true comprehension or cognition. What it really proves is that AI can imitate human speech in a convincing way.

The Chinese Room 

To underline this difference between being intelligent and acting in an intelligent way, John Searle posited the Chinese Room thought experiment. 

Searle imagined a person who speaks no Mandarin locked in a room with a large manual explaining how to respond to notes slipped to them in Mandarin.

The person outside the room slipping in the notes would have the impression that the person inside can speak Mandarin, when in fact they are just following the manual: responding in Mandarin without speaking or understanding a word of it.

In a similar way, current AI can behave in a way that makes it seem intelligent but that does not mean that it actually is intelligent, in the sense of having understanding or comprehension.
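Searle’s rule book can be sketched in a few lines of code. The example below is a toy illustration, not anyone’s real system: a hypothetical lookup table maps incoming notes to canned replies, so the responder appears fluent while comprehending nothing.

```python
# A toy "Chinese Room": a rule book mapping incoming notes to replies.
# The responder follows the rules mechanically and understands nothing.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Mandarin?" -> "Of course."
}

def respond(note: str) -> str:
    """Look the note up in the manual; no comprehension involved."""
    return RULE_BOOK.get(note, "请再说一遍。")  # "Please say that again."

print(respond("你会说中文吗？"))  # replies fluently, understands nothing
```

The point of the sketch is that the program’s behaviour is indistinguishable from understanding only as long as you never look inside the room.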

Chess moves are easy

Despite not possessing understanding or comprehension, machines find some challenges easy that we long thought were hard. Take chess, long considered a true measure of intelligence.

The chess computer Deep Blue beat the reigning world champion Garry Kasparov way back in 1997. Stockfish, the highest-rated chess engine in 2023, has won against every human it has played.

This is because chess is a zero-sum game of perfect information, and that is where computers excel: they can search billions of positions a second to find the strongest moves.
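The search idea behind chess engines can be shown on a much smaller game. The sketch below applies minimax to a Nim-style subtraction game (take 1–3 stones, whoever takes the last stone wins); it is a simplified illustration of the principle, while real engines add pruning, evaluation functions and enormous scale.

```python
from functools import lru_cache

# Minimax on a toy zero-sum game of perfect information: a pile of
# stones, players alternately take 1-3, taking the last stone wins.
@lru_cache(maxsize=None)
def best_move(stones: int) -> tuple[int, int]:
    """Return (score, move): score is +1 if the player to move can force a win."""
    for take in (1, 2, 3):
        if take > stones:
            break
        if take == stones:
            return (1, take)           # taking the last stone wins outright
        score, _ = best_move(stones - take)
        if score == -1:                # this reply leaves the opponent losing
            return (1, take)
    return (-1, 1)                     # every move hands the opponent a win

print(best_move(10))  # → (1, 2): take 2, leaving a losing pile of 8
```

With 10 stones the search proves that taking 2 forces a win, because any pile that is a multiple of 4 is lost for the player to move. Chess engines walk the same kind of game tree, just unimaginably larger.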

Moving chess pieces is hard

However, for AI, what is significantly harder than winning a game of chess is actually moving the chess pieces.

That’s because moving any object requires a level of innate intelligence that animals have developed through millions of years of evolution. The intelligence required is not just using your sight to locate an object but also moving many muscles in sequence and, critically, judging the amount of force to apply.

This is why we’re able to pick up an egg without cracking it. But humans still get this wrong sometimes. We’ve all been handed an object that was much heavier than we expected!

This type of less venerated intelligence has simply been much more difficult for AI to get to grips with (excuse the pun). Nonetheless, progress in AI is now being used to improve robot grippers by applying the correct force to different objects. This could lead to major breakthroughs in advanced robotics in the near future.

All AI is narrow

These impressive advancements aside, all current AI is still classified as ‘narrow’, as opposed to ‘general’. Narrow means AI that is focused on handling one task.

ChatGPT may seem intelligent, for example, but to go back to Searle’s experiment, it is a narrow AI chatbot that is very good at recognising patterns of speech. Based on its training data, it can predict which word should come after another, but it does not actually understand or comprehend the conversations it is having. This is why AI chatbots so frequently ‘hallucinate’, producing confident but inaccurate responses.
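Next-word prediction itself is easy to demonstrate. The sketch below builds a crude bigram predictor from a tiny made-up corpus; real language models are vastly more sophisticated, but the task is the same, and nothing in it involves a model of truth or meaning.

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus, then always emit
# the most frequent successor: next-word prediction at its crudest.
corpus = "the cat sat on the mat and the cat slept".split()

successors = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    successors[word][nxt] += 1

def predict(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    return successors[word].most_common(1)[0][0]

print(predict("the"))  # → cat ("cat" follows "the" twice, "mat" once)
```

Notice that the predictor would cheerfully complete a false sentence as readily as a true one; accuracy is simply not part of the objective.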

Is AI artistic?

Other narrow AI like Midjourney and Dall-E 2 can produce images at astonishing speeds, based on a user’s prompt. But applying the term ‘art’ to them is difficult.

I feel like Tolstoy would have a lot to say about so-called AI art, were he alive today. This was, after all, a man who confidently dismissed the works of Wagner and Shakespeare in his polemic ‘What is Art?’

Tolstoy argued that in his time art ‘ceased to be sincere and became artificial…’ He outlined four common markers of bad art, which included ‘borrowing’ and ‘imitation’. These are two almost universal markers of so-called AI art, where users typically prompt AI to produce images in the style of other artists or artistic movements. The same goes for the output of written work by chatbots.

Ultimately, Tolstoy defines true art as being achieved when an artist is sincerely compelled to ‘infect’ others with feeling and emotion, where the artist ‘experiences the feeling he conveys’.

This is something that it is impossible for narrow AI to achieve, given that it cannot experience feeling. And, for Tolstoy, if a work of art cannot infect others with feeling then it is dismissed as counterfeit art.

So while AI can produce interesting images and aesthetics, if we are to follow Tolstoy’s lead, it cannot produce true art.

AGI

The real pinnacle for AI development though is the idea of Artificial General Intelligence, or AGI. The concept of AGI is an artificial intelligence that can handle any intellectual task and apply its learning to other tasks. This would be an AI that is not just behaving intelligently but actually is intelligent and, potentially, self-conscious.

This really is getting into the realms of science fiction, but the creation of AGI may come within our lifetimes. In a recent survey of 356 AI experts, half believed there was a 50% chance of AI reaching human-level intelligence by 2061.

So is AI intelligent? 

In conclusion, what we now have is increasingly sophisticated narrow AI that can behave in an intelligent way.

This narrow AI is useful for making tasks more efficient. It can speed up your writing process, make your video editing workflow easier, provide you with interesting data insights and summarise large amounts of information for you. It can help you discover music you never knew existed, point you in the direction of a film there’s a good chance you’ll like and even help keep your inbox free of spam. One day it might even drive your car.

And yet there are many things that AI cannot do. It cannot actually comprehend or understand the tasks it is performing. And it cannot produce sincere works of art that convey emotions that it simply does not have.

But if we do ever manage to achieve Artificial General Intelligence then our world will be set to radically change, with machines powered by AI that could even exceed human level intelligence.

That’s a prospect that excites and terrifies people in equal measure.

If you’d like to hear more about the future of AI, please join SEC Newgate for a breakfast panel discussion in London on the 5th of September. Details on how to attend are here.