If your weather app tells you to wear a t-shirt when it’s 28 degrees outside and a scarf when it’s 2 degrees, is that artificial intelligence? If you believe the exuberant offers and heady promises made in the IT market, the answer would have to be yes. At the moment, even a simple set of rules can be hyped as an amazing piece of software from the future. The “AI” stamp grabs the attention of customers and investors, yet all the noise about features and functions has two unpleasant effects: it drowns out news of genuine progress in AI, which often happens quietly and without fanfare, and it distracts from the capabilities AI applications could actually provide.
This ultimately gives the impression that AI is just another one of those hyped-up phenomena that analysts and IT service providers make a big song and dance about every so often. Fears are already growing that AI is entering a new winter in which only a handful of specialist researchers retain any interest in the subject, while businesses turn away in disappointment.
This is wrong, though: we are only just beginning to recognise the potential of AI. It is not some kind of magic wand, but more of a screwdriver – a useful tool for certain jobs.
There can be no AI without data – that much is straightforward. Data is the prerequisite that enables applications to play to their strengths: identifying patterns and making predictions. But this does not mean that everything based on data is automatically AI – that would bring us back to the example at the start, with the t-shirt and the scarf. Intelligence presupposes the capacity to learn – in the case of AI, machine learning. Applications that rigidly derive conclusions from a set of rules can achieve spectacular results; that is how nearly all of the software around us works. But we can only start talking about “intelligence” when the software has the ability to improve itself.
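The distinction can be made concrete with a toy sketch. The names and thresholds below are purely hypothetical, not taken from any real product: the first function is a fixed rule of the weather-app kind, while the second class adjusts its own threshold from labelled feedback – the minimal property that separates a learning system from a rule book.

```python
def rule_based_advice(temp_c: float) -> str:
    # Fixed rule: a hand-coded threshold, no learning anywhere.
    return "t-shirt" if temp_c >= 20 else "scarf"


class LearnedThreshold:
    """Toy learner: nudges its threshold whenever feedback disagrees."""

    def __init__(self, threshold: float = 15.0, lr: float = 0.5):
        self.threshold = threshold
        self.lr = lr

    def advise(self, temp_c: float) -> str:
        return "t-shirt" if temp_c >= self.threshold else "scarf"

    def learn(self, temp_c: float, correct: str) -> None:
        # Only update when the current rule got it wrong.
        if self.advise(temp_c) != correct:
            if correct == "t-shirt":
                # Threshold was too high: move it down towards temp_c.
                self.threshold -= self.lr * (self.threshold - temp_c)
            else:
                # Threshold was too low: move it up towards temp_c.
                self.threshold += self.lr * (temp_c - self.threshold)
```

The rule-based function will give the same answer forever; the learner changes its behaviour as feedback arrives – which is the whole point of the distinction.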
This kind of intelligence has little to do with the cognitive abilities we ascribe to ourselves as humans. When we talk about AI today, what we really mean is “weak” AI: systems capable of solving concrete application problems – is that a stop sign? How likely is this machine to fail? To complete these tasks, applications use strategies that appear intelligent: they operate with probabilities and improve their performance over time in use. But – and this is a big but – this only works within the very narrowly defined scenarios for which they were built. Strong AI, which could match the full breadth of human abilities, remains a topic for philosophical debate. We won’t be discussing last night’s football results with our robot vacuum cleaners any time soon.
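“Operating with probabilities and improving with use” can itself be illustrated with a deliberately simple sketch – a hypothetical estimator, invented here for illustration, that answers the machine-failure question above and sharpens its estimate as observations accumulate:

```python
class FailureEstimator:
    """Toy weak-AI-style estimator: tracks how likely a machine is to fail."""

    def __init__(self):
        self.failures = 0
        self.observations = 0

    def observe(self, failed: bool) -> None:
        # Each run of the machine is one observation.
        self.observations += 1
        self.failures += int(failed)

    def failure_probability(self) -> float:
        # Laplace smoothing keeps the estimate sensible with little data:
        # with no observations at all it answers 0.5, not 0 or an error.
        return (self.failures + 1) / (self.observations + 2)
```

Note how narrow the scenario is: the estimator answers exactly one question about exactly one machine, and nothing else – which is precisely the “weak” in weak AI.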
In the cold light of day, much of what is called AI reveals itself to be a combination of traditional software and slick marketing. One example is robotic process automation (RPA), in which software robots imitate human interaction with applications, for example making entries in an ERP system or testing software. What looks like intelligent behaviour is not AI at all – the word “robotic” alone is not enough. It is merely after-the-fact integration at the user-interface level. As a software technique it is of questionable value, because any structural change to the underlying interface forces the automation to be rebuilt, but it is quick to implement. Another example can be found in chatbots. The vast majority of dialogue-based applications in use simply reel off the scripts they are given. If a bot detects the terms “service” and “number” in an inquiry, it will answer “Our service can be reached on 0800 XXXXXX” – even if the frustrated customer has actually written: “I have a number of complaints about the service I have received.”
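The chatbot failure described above is easy to reproduce. The following is a hypothetical sketch of such a scripted bot – the function name and fallback reply are invented for illustration – showing how a bare keyword rule fires on both the genuine question and the complaint:

```python
def scripted_bot(message: str) -> str:
    text = message.lower()
    # Rigid keyword rule: fires whenever both words appear,
    # with no notion of what the customer actually means.
    if "service" in text and "number" in text:
        return "Our service can be reached on 0800 XXXXXX"
    return "Sorry, I did not understand your request."
```

Both “What is your service number?” and “I have a number of complaints about the service I have received” trigger the same canned reply – no probabilities, no learning, just pattern matching on two words.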
Let us be clear, though: RPA does provide useful services, chatbots have a role to play across a range of service processes, and there are dialogue systems with learning capabilities that can conduct nuanced conversations in their fields. In most cases, however, there is no AI under the hood – just the craft of classic software engineering. This is not an attempt to belittle these applications, disparage their significance or play down the complexity of their development. But we do need a serious debate about the opportunities and limitations of AI; only then can the people responsible for choosing a tool make the right decisions.
There are enough problems that the proven methods and processes of AI can help to solve. Which ones these are should be decided more by the IT experts working for suppliers and consultants, and less by the marketing departments. Otherwise, AI will find itself on the way down before it has even had its time to shine.