Explainability

Last week I started getting lots of stories about Kendrick Lamar and SZA in my Google Now news feed on my phone. I thought to myself “why all of a sudden does Google think I’m interested in Kendrick Lamar and SZA?”

Then I recalled sending a text message to my son about the new Kendrick/SZA song from the Black Panther film and thought “Google saw that text message and added Kendrick to my interests.” I don’t know if that is actually the case, but the fact that I thought it is really the point here.

That whole “why did I get this recommendation” line of thinking is what the machine learning industry calls Explainability. It’s a very human reaction, and I bet all of us have it, maybe multiple times a day now.

I like this bit I saw on a blog post on the topic today:

Explainability is about trust. It’s important to know why our self-driving car decided to slam on the brakes, or maybe in the future why the IRS auto-audit bots decide it’s your turn. Good or bad, it’s important to have visibility into how these decisions were made, so that we can bring the human expectation more in line with how the algorithm actually behaves.

What I want on my phone, on my computer, in Alexa, and everywhere that machine learning touches me, is a “why” button I can push (or speak) to find out why I got a recommendation. I want to know what source data was used to make it, what algorithm produced it, and how confident the system is in it.
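To make that concrete, here is a minimal sketch in Python of what the answer behind a “why” button might contain. Everything in it is hypothetical: the field names, the example signals, and the confidence number are illustrations of the idea, not how Google Now or any real recommender actually works.

```python
from dataclasses import dataclass

@dataclass
class RecommendationExplanation:
    """Hypothetical payload a 'why' button might return for one recommendation."""
    item: str                   # what was recommended
    source_signals: list[str]   # the data the system says it drew on
    algorithm: str              # which model or ranker produced the recommendation
    confidence: float           # the system's confidence in it, 0.0 to 1.0

# Illustrative example, using the Kendrick Lamar story from this post
why = RecommendationExplanation(
    item="News story: Kendrick Lamar and SZA",
    source_signals=[
        "recent activity: Black Panther soundtrack",
        "inferred interest: hip-hop",
    ],
    algorithm="topic-affinity ranker",
    confidence=0.82,
)

print(f"Recommended by {why.algorithm} (confidence {why.confidence:.0%}) "
      f"based on: {', '.join(why.source_signals)}")
```

The point of the sketch is that the explanation is a first-class object: the source data, the algorithm, and the confidence travel with the recommendation itself, so the “why” is always one button press away.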

This is coming. I have no doubt about it. And the companies that offer it to us will build the trust that will be critical to remaining relevant in the age of machine learning.
