AI lessons from Sherlock Holmes – The risks of letting machine-learning decide what is irrelevant
Sherlock Holmes’ crime-solving algorithm has been hugely successful – at least measured by the number of film and TV adaptations of Sir Arthur Conan Doyle’s detective novels. The Sherlock Algorithm states: “Once you eliminate the impossible, whatever remains – no matter how improbable – must be the truth.”
In our current age of machine learning and AI, could we use Sherlock’s model as a general problem-solving algorithm? Let’s take a look!
For machine learning, you need data sets for training and validation. So our first challenge would be to collect a data set containing a list of impossibles. Even if we managed that step, we would face an even bigger challenge with the list of improbables.
The underlying statistical methods of machine-learning algorithms classify something that hardly ever happens – an event far out in the six-sigma tail of the distribution – as irrelevant. So when such an event does occur, the trained model treats it as an outlier and ignores it – the exact opposite of what Sherlock would do!
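To make this concrete, here is a minimal sketch of the kind of preprocessing step that causes the problem. The data and threshold are invented for illustration: a model is fitted to routine historical measurements, and any new observation more than three standard deviations from the mean is discarded as noise – including the rare, genuine event Sherlock would treasure.

```python
import statistics

# Hypothetical routine measurements used to "train" a simple statistical model.
history = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 9.9, 10.0, 10.3, 9.7]

mean = statistics.mean(history)
stdev = statistics.stdev(history)

def is_outlier(x, threshold=3.0):
    """Flag anything more than `threshold` standard deviations from the mean."""
    return abs(x - mean) / stdev > threshold

# A rare but genuine event arrives - the improbable clue Sherlock would keep.
rare_event = 12.5

print(is_outlier(rare_event))  # True: the pipeline discards it as noise
```

A routine reading such as 10.1 passes the filter, while the improbable-but-real 12.5 is silently dropped before any downstream model ever sees it.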
So, unfortunately, it looks like the Sherlock Algorithm cannot be adapted to the sphere of machine learning. But what this thought experiment does show is that we should be careful when drawing conclusions from, or basing decisions on, machine-learned algorithms.
When the improbable is ignored, some problems may never be solved.
About the author
Menno Huijben is a Senior Executive at Sofigate and a concept owner of Business Technology Transformations and Data Leadership.
Menno is interested in the realm of decision-making in business, especially where a data-driven mindset meets intuition and experience. His motto is “Don’t forget the Human Factor!”