Is this the year we unravel the AI conundrum?
Having worked with machine learning (ML) for almost a decade, I have found it increasingly challenging over the last few years – since its rebranding as AI – to provide balanced and constructive advice. ML was typically applied in a highly targeted way: it solved specific problems, offered a clear means of assessing the challenges and potential benefits, and was generally understood as a range of different techniques that could be used to tackle a wide set of use cases.
In contrast, AI has been positioned as a cure-all for every problem going, with expectations seemingly justified by the superficially intelligent outputs of LLMs (such as ChatGPT, Claude and Gemini) in response to almost every conceivable prompt, while also sweeping up all the demonstrable successes of ML to help make its case. This belief, however, is proving harder and harder to sustain as we discover the limitations of “gen AI” and are forced to fall back on our previous approach. We can’t make a catch-all statement about whether AI is good or bad without assessing the appropriate and successful use cases. While these certainly exist – document summarisation, broad internet research and data transformation are great examples, driving automation of business tasks that are incredibly difficult for traditional software – there may not be as many as we’ve been led to believe.
Our conclusion is that it’s simply not possible to make good business decisions about AI without some understanding of what the term is actually being used to refer to. We have to remember that intelligence isn’t in the model; it’s in the suitability of the model’s output to the problem we’re trying to solve.