It would be very, very helpful to know what the future holds for artificial intelligence in business. Unfortunately, it is also very, very hard to predict.
With this topic, our extrapolation heuristics may not work well. We tend to extrapolate linearly, expecting the pace of past progress to continue unchanged. That is unlikely to work with AI.
Consider the fun example of Joshua Browder, a 19-year-old who built a chatbot to help people fight parking tickets. It took him about three months to develop, and in the first 21 months, the application helped people win 160,000 of 250,000 cases — a 64% success rate.
The temptation to extrapolate from this is strong, leading to thoughts like:
- With another three months of effort, the rest of the cases could be won.
- This success is just in London and New York, but the same thing could be done in every other municipality.
- It works for parking tickets, so let’s apply the same approach to more important contexts.
Although appealing, the reality is that extrapolating the future success and progress of AI is not that straightforward.
Technology has a long history of being pleasantly (and unpleasantly) nonlinear. For example, growth can be exponential, as it is with computing power or network effects. Or growth can be punctuated, with periods of rapid growth interspersed with relatively dormant periods.
The growth of AI in business is likely to similarly defy smooth, linear progression. It is difficult to build off of what has already happened to reliably determine what is likely to develop.
Why is this particularly difficult for AI? More so than the laws of robotics, three other laws may be important for AI in business:
The Pareto Principle: People focus on solving the AI problems with the most potential benefit first. This is entirely rational; it simply makes sense to invest effort where the benefits are greatest. The Pareto Principle holds that roughly 80% of the effects come from 20% of the causes. By focusing on a few scenarios (the 20% of causes), AI solutions can address the majority (~80%) of the effects.
In the case of the parking tickets, the results of Freedom of Information Act requests indicated that appeals courts dismissed most parking tickets for any of just 12 reasons. As a result, the chatbot focused first on these 12 reasons. But to continue to progress, the application will need to incorporate more and more reasons, each contributing less to the total number of successful appeals.
The takeaway: For AI in business, benefits from incremental improvement may come at a diminishing rate.
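To make the diminishing-returns pattern concrete, here is a purely illustrative Python sketch. The numbers are hypothetical assumptions, not data from the chatbot: it assumes dismissal-reason frequencies follow a Zipf-like long tail (reason k occurs with weight proportional to 1/k, over an assumed 60 possible reasons) and shows how much of the caseload the most common reasons cover.

```python
# Hypothetical illustration of the Pareto pattern: assume each dismissal
# reason k accounts for a share of cases proportional to 1/k (a Zipf-like
# long tail over an assumed 60 possible reasons).
def coverage(num_reasons, total_reasons=60):
    """Fraction of all cases covered by the most common `num_reasons` reasons."""
    weights = [1 / k for k in range(1, total_reasons + 1)]
    return sum(weights[:num_reasons]) / sum(weights)

# Each additional reason contributes less than the one before it.
for n in (6, 12, 24, 48):
    print(f"top {n:2d} reasons cover {coverage(n):.0%} of cases")
```

Under this toy distribution, the first dozen reasons cover roughly two-thirds of the cases, while each reason added after that contributes progressively less.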
The Ninety-Ninety Rule: In estimating the time to completion of a software product, Tom Cargill of Bell Labs wryly observed that “The first 90% of the code accounts for the first 90% of the development time. The remaining 10% of the code accounts for the other 90% of the development time.” Those who still manage software release dates probably don’t find this as humorous as I do now. The underlying problem is that the coding required in the remaining 10% is typically much more difficult than the code in the first 90%.
It isn’t just that there are diminishing benefits to each increase in AI scope. The effort associated with the remaining tasks will likely also increase. One of the hallmarks of modern approaches to AI is the emphasis on machine learning and data rather than rule-based approaches. But as the AI solution comes to handle the bulk of the cases, the remaining data, which covers the more complex edge cases and corner conditions, grows sparser and sparser, even with the quantities of data available now. Developing good solutions to these rarer cases may require considerably more effort than for the more common cases. In the case of the parking ticket application, the further the application progresses down the “long tail” of dismissal reasons, the less data is available to train it.
The takeaway: For AI in business, the effort required may increase disproportionately as the solution improves.
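The schedule arithmetic behind Cargill’s quip can be written out directly. This is a sketch with made-up numbers, not an estimate for any real project: if each “90%” of the work consumes 90% of the originally planned time, the project lands at 180% of plan.

```python
# Ninety-Ninety Rule arithmetic with hypothetical numbers: a project planned
# for 10 months of development time.
planned_months = 10
first_90_percent = 0.9 * planned_months   # the first 90% of the code: 9 months
last_10_percent = 0.9 * planned_months    # the remaining 10%: another 9 months
actual_months = first_90_percent + last_10_percent

print(f"planned: {planned_months} months, "
      f"actual: {actual_months:.0f} months "
      f"({actual_months / planned_months:.0%} of plan)")
```

The point is not the specific numbers but the shape: the last, hardest fraction of the work can consume as much time as everything that came before it.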
Hofstadter’s law: In Gödel, Escher, Bach: An Eternal Golden Braid, Douglas Hofstadter offers a rule for estimating the time required for a complex task: “It always takes longer than you expect, even when you take into account Hofstadter’s Law.” Management is about planning and allocating resources according to a plan. Managing AI development is fundamentally difficult when budgets and resources must be committed against deliverables whose timelines resist realistic estimation.
Parking tickets are a relatively straightforward legal scenario. When Joshua Browder applied the concept in a more complex scenario, refugee asylum in the UK, it took much longer than he expected, even after building on his experiences with transportation delay claims and mortgage payment protection insurance scenarios. Fortunately, he is working without angry bosses.
The takeaway: For AI in business, working within the constraints of goals, timelines, budgets, and expectations will be difficult as projects move beyond experimentation into more complex scenarios.
There is no shortage of guesses about the future of AI and business. But these three laws make me think that the AI nonlinearity may be asymptotic rather than exponential: every step forward benefits less, requires more effort, and demands more time. The result for artificial intelligence is Zeno’s Dichotomy Paradox, where progress continues, and we get closer and closer all the time, but we never quite reach the point of achieving an artificially intelligent business.
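Zeno’s Dichotomy can be sketched numerically. As a toy model for illustration only, not a claim about any real project, suppose each development phase closes half of whatever gap remains between the current solution and a “complete” one:

```python
# Toy model of Zeno-style asymptotic progress: every phase closes half of
# the remaining gap, so progress approaches 1.0 but never reaches it.
progress = 0.0
trajectory = []
for phase in range(10):
    progress += (1.0 - progress) / 2  # close half of the remaining gap
    trajectory.append(progress)

print(", ".join(f"{p:.3f}" for p in trajectory[:5]))  # 0.500, 0.750, 0.875, ...
print(f"after 10 phases: {progress:.4f} (still short of 1.0)")
```

Each phase delivers real progress, and each delivers half as much as the one before. No finite number of phases reaches 1.0.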
MIT Sloan Management Review