THREE LAWS FOR DRIVING ROI OF ARTIFICIAL INTELLIGENCE PROJECTS
Alan Turing formulated the Turing Test in 1950. Seventy years later, machines have yet to pass it. In the attempt, though, especially over the last decade, Artificial Intelligence has become a household name. Significant efforts by companies like Amazon, Apple, Google, and others have changed everyone’s day-to-day experiences, as well as our collective imagination.
At enterprises, however, AI is still met with skepticism. For those same 70 years, enterprise IT engineers have relied on one law – if the system is not accurate, it must be broken. AI, by contrast, comes with an implicit guarantee of less than 100% accuracy. There are other complications, too: the costs and benefits of AI projects are neither linear nor certain.
We have six years’ experience helping teams at Fortune 500 companies and government agencies; debating cost curves and ROI; and balancing hard, measurable incremental gains against pie-in-the-sky promises of disruption. Over this time, we have distilled the fundamental laws of AI ROI that every practitioner should know.
Let’s start putting things together. Every business makes investments to get returns. For AI, the investment is typically constant over time – it is the cost of the data science team or of an external vendor, and most vendors price based on the time and complexity of the solution. Returns, by contrast, pick up at their own pace and then accelerate. The following graph depicts typical Cost-Reward curves for an AI implementation. Each dot is the launch of a new use case.
On the sidebar, you will find the model that drives these curves. Key assumptions are toward the end of the article.
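The article’s actual model lives in the sidebar and is not reproduced here. As a rough illustration of the shape these curves take, here is a toy sketch under the stated assumptions (constant spend, periodic use-case launches, returns ramping toward a plateau scaled by accuracy and applicability); every parameter name and value below is an illustrative assumption, not a figure from the model.

```python
import math

# Toy cost-reward model (illustrative assumptions only).

def cumulative_cost(month, monthly_cost=100.0):
    """Costs are linear in time: a constant team/vendor spend rate."""
    return monthly_cost * month

def use_case_return(age_months, peak=60.0, ramp=6.0,
                    accuracy=0.8, applicability=0.5):
    """Monthly return of one use case, ramping toward a plateau that is
    scaled by both accuracy and applicability."""
    if age_months <= 0:
        return 0.0
    plateau = peak * accuracy * applicability
    return plateau * (1 - math.exp(-age_months / ramp))

def cumulative_return(month, launch_interval=4, **use_case_kwargs):
    """Total returns through `month`, with a new use case launched every
    `launch_interval` months (each launch is a dot on the curve)."""
    total = 0.0
    for m in range(1, month + 1):
        for launch in range(0, m, launch_interval):
            total += use_case_return(m - launch, **use_case_kwargs)
    return total

def breakeven_month(horizon=120, **kwargs):
    """First month where cumulative returns cover cumulative costs."""
    for m in range(1, horizon + 1):
        if cumulative_return(m, **kwargs) >= cumulative_cost(m):
            return m
    return None
```

Playing with the knobs reproduces the qualitative behavior discussed below: raising `accuracy`, raising `applicability`, or shortening `launch_interval` (faster deployment) all pull the breakeven month earlier.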
A few powerful truths emerge from these curves.
Law III – Data Scientists Are Not the Bottleneck For ROI
You must hire smart data scientists; they will do a world of good for your organization. Intelligent data scientists accelerate development: in our curves, the dots come closer together, and the blue line picks up faster. They also improve both the applicability and the accuracy of your use cases, all of which brings breakeven closer.
Below we compare the curves in the base case and in the case where deployment takes 20% less time due to smarter data scientists. You can see the ROI move.
In our experience, things are rarely that simple. Typically, data scientists are not the bottleneck for deployment; management processes are. In other words, smarter data scientists may not be able to drive ROI unless the larger team matches their brilliance.
There is one more thing – smarter data scientists cost more. There was recently an article about how even non-profits are paying north of a million dollars for the best talent.
The takeaway is this – you must hire smart data scientists, either directly in your organization or by working with smart vendors. However, if you feel that you are behind in that race, don’t beat yourself up trying to catch up.
Law II – Accuracy of Your System Is More Important Than You Think
In real enterprises, there is more to accuracy than the obvious point that more accurate systems yield higher ROI.
An AI system will save you money or time on a process if two things hold: a) the AI system’s use cases cover the particular problem, and b) the AI diagnoses the problem correctly. We call these factors applicability and accuracy, respectively. Applicability is driven by a combination of factors typically outside the control of a data science team; accuracy is more straightforward.
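The two factors multiply: a task generates savings only if it is both covered and handled correctly. As a back-of-the-envelope illustration (all numbers here are made up for the example):

```python
def expected_saving(num_tasks, applicability, accuracy, value_per_task):
    """Only tasks the system covers (applicability) AND handles correctly
    (accuracy) generate savings, so the two factors multiply."""
    return num_tasks * applicability * accuracy * value_per_task

# Hypothetical example: 10,000 support tickets, 50% coverage,
# 80% accuracy, $5 saved per correctly handled ticket.
savings = expected_saving(10_000, 0.50, 0.80, 5.0)  # → 20000.0
```

Note how a ten-point gain in either factor moves the result by the same proportion, which is why accuracy gains flow straight through to the reward curve.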
Back to our curves: the more accurate the system, the faster the blue line picks up, and it keeps bearing fruit long after the costs are paid. Not only that, the gains compound across use cases. See what happens when accuracy goes from 60% to 90%.
The Virtuous Cycle of AI
There is a lot more to accuracy. Accurate systems attract traffic. They provide an incentive for the user not to try alternative methods to get their job done. So, a virtuous cycle picks up – accurate systems reward the user, driving more traffic to the AI system, which trains it better, in turn making the system more accurate.
How can we drive this accuracy? It’s easier than you think.
Our first suggestion is to use AI solutions designed for your business requirements rather than general-purpose AI products. A bespoke AI solution can be configured to your enterprise’s needs better than any off-the-shelf product, and because such solutions are specifically trained, they are more accurate. Further, responsible AI solution vendors deploy over your private cloud or on-premise infrastructure, ensuring that not a single byte of data leaves your secure firewall.
In the same vein, work with vendors or system integrators that have already deployed successful AI solutions. Successful customer references are rarer than you might expect.
Another suggestion is to use interactive, multi-dimensional interfaces. Such interfaces give the user multiple ways to interact with the AI’s output, and a reward for doing so. For example, consider ActuateBots, the next evolution of the now-ubiquitous chatbot. They are built on the principle that a user asks a question because they need some task completed. In addition to answering the question, ActuateBots guess the user’s intent and offer multiple options to complete the task with a single click. See the video below. Getting the intent right across three to five offered options is much easier than getting it right on the first shot.
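A quick way to see why offering several candidate actions helps: if we treat each of k offered options as an independent guess with single-shot hit rate p (a simplifying assumption for illustration, not a claim about how ActuateBots actually works), the chance that at least one option matches the user’s intent is 1 − (1 − p)^k:

```python
def top_k_hit_rate(p_single, k):
    """Probability that at least one of k independent guesses is correct,
    given a single-shot hit rate of p_single (simplifying assumption)."""
    return 1 - (1 - p_single) ** k

# A 60%-accurate intent guesser offering 4 options covers the
# right intent roughly 97% of the time: 1 - 0.4**4 ≈ 0.974.
```

Even under weaker assumptions (the guesses are never truly independent), the direction of the effect holds: a modest per-guess accuracy becomes a high effective accuracy when the user gets a few options and a single click.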
Then, of course, there is the quality of your team, or of the vendor or system integrator you work with. Practical, experience-driven approaches almost always lead to higher accuracy than theoretical ones.
Law I – Avoiding Data Preparation Costs Will Boost Your ROI
You can realize substantial gains in your ROI curve if your data scientists can bypass the costs of data preparation. The IBM vs. MD Anderson episode is the best-known cautionary example. IBM’s Watson promised a future where a cognitive computing system would expedite clinical decision-making around the globe and match patients to clinical trials. There was one hiccup – MD Anderson first had to appropriately tag all of its oncology data using expensive subject-matter experts. It did so, spending $60 million over three years.
And then nothing worked. Watson required further annotation to iterate on its models, which meant further costs with no promise of results. MD Anderson walked away, in a much-publicized break-up.
On the other hand, there are technologies, including Calibrated Quantum Mesh (CQM), that do not need to be spoon-fed annotated data to understand, process, and learn from natural language.
Similarly, for structured data, it is best to use Deep Learning only when hundreds of thousands of records are available. Manufacturing data just so a model can train properly is the worst possible thing to do. The sidebar has more information about how much data Deep Learning needs.
These kinds of smart decisions fundamentally change our curves. Below we depict the scenarios with no data preparation costs. Please see the model for further details.
Key assumptions in our model
Now that you’ve seen what our model can do, here’s a bit of behind-the-scenes action. How did we arrive at these insights? Using simple arithmetic.
- The first thing was to realize that AI projects never work linearly. The outcomes are almost always below expectations initially, and they improve only over time. Every savvy business person dealing with AI takes a long-term outlook across multiple use cases and plans for this uncertainty. This is a key assumption in the model.
- Another critical assumption is that the cost of implementing AI is directly proportional to time. The costs are essentially the data science team, AI vendors/consultants, or both. In the AI world, products don’t mean much; solutions do, and they are typically priced in proportion to the complexity of the problem.
There are other mathematical assumptions detailed in the model.
A lot of first-timers think that the same AI strategy can fit every use case.
After talking to hundreds of companies, and working with dozens of them, we have found that one size doesn’t fit all in AI. Our channel and integration partners agree, but many first-timers miss this. It all comes down to basic math. If you don’t believe it, try out the model yourself.
Write to us, and we can send you a password to test the assumptions yourself. You can always set up a meeting to learn more.