Beating the odds

According to research by Gartner, the harsh reality is that nine in ten Artificial Intelligence (AI) deployments fail. To be more specific, eight in ten never make it off the drawing board, and 60% of those that do flounder and fail to deliver the expected benefits.

In this article, Adrian Harvey, CEO at Elephants Don’t Forget, examines the commonalities behind why the majority of AI deployments fail, and explains how our deployment methodology allows us to guarantee success every single time.

Gartner’s research highlighted four commonalities as to why AI deployments fail:

1. They didn’t define a narrow business problem.
2. They didn’t have the right team.
3. They didn’t have enough high-quality data.
4. They didn’t confront bias head-on.

I was recently invited to address the Financial Conduct Authority’s (FCA) AI community and share with them the lessons we have learned over the past nine years as we have grown to become a global market leader in our AI niche. What, pray, gives me the right to think this audience – or indeed the many PhDs and technical specialists within the FCA AI community – would have anything to learn from our journey?

Simply this: we have found a way not only to buck the trend and beat the rather stark and unfavourable odds exposed in Gartner’s research, but also, for the past couple of years, to financially underwrite every single deployment of our AI. In other words: we guarantee our AI will work every single time.

What really interested the clever folks at the FCA was what we had done – and what we do – to be able to deploy AI so consistently and effectively in household-name firms at a rate of one per week. Hopefully, our learnings to date will resonate with this readership and perhaps help users and suppliers beat the Gartner odds too.

When we started the journey, we didn’t have access to the Gartner research but, curiously, we arrived at almost the same set of conclusions. It is worth exploring the details behind these (seemingly obvious) conclusions to harvest the real lessons:

1. The client must articulate specific objectives

We spend a great deal of time and effort working with the client to establish the specific business problem that they wish to solve. Making something “better” or “easier” is a great macro-objective, but we have learned it is largely useless as an objective for AI deployment.

We need clients to be able and willing to articulate specifically which KPI(s) they wish to improve and by how much – and, importantly, to face facts about what will happen if those targets are not met. This creates the right “focus” and clarity: because success is pre-defined, it removes any possible future misunderstanding about what it looks like.

2. Everybody must be continually involved in the process

Many will interpret this fail incorrectly and believe it relates to the technical skills of the team(s) on the client and supplier sides. It doesn’t. It relates to what we refer to as “getting the right folks in the room” – none of whom will be wearing a technical hat.

What we have learned is that the functional leaders who own the P&L, who will benefit from the improved KPI(s), and who are responsible for the employees that will be using and interacting with our AI, absolutely must be involved before, during and – perhaps most importantly – after the deployment. We “soft contract” with these functional leaders to give us one hour of their time each month to draw (operationally expert) conclusions from our data and help us to continually refine the AI as it relates to their environment.

3. Accumulate accurate and relevant data to add real value

The “lack of data” issue is perhaps the most obvious failure, given that our AI (and all AI) needs accurate data to operate.

In our world, we never deploy with the AI “switched on”. Instead, we deploy in what one of our customers once referred to as “dumb elephant mode” (the clue is in the name: Elephants Don’t Forget) – a label that has stuck. The AI remains turned off until we have harvested enough accurate and relevant data for it to add value, which usually takes at least 40 days.

4. Objectively address the issue of bias

Finally, to the issue of bias. What Gartner refers to is “coder bias”, where the software itself is tainted by the bias of those who wrote it – for example, where the cultural assumptions of a coder in one country permeate the software and undermine its effectiveness when deployed in another.

Our AI doesn’t draw conclusions, so we can reliably and accurately claim there is no bias in the AI itself; there may, however, be bias in the content provided by the client.

The only other learning worthy of sharing is communication, and I was surprised to see it didn’t appear within the Gartner findings. We know that if you do not win the hearts and minds of the employees, particularly first-line supervisory management and Team Leaders, then the deployment will be hard work and could very well fail, even if the other four material points above have been addressed.

This fifth “fail” is vitally important: we won’t actually deploy unless the client has done a sufficiently good job of communicating what, when, where, why, and how the AI will be used.
