At 51:50 he talks more about how pushing for understandability alters design choices. For example, Raft has four message types where a competing algorithm has ten. Additionally, every part of the algorithm must be motivated by something; there is less extraneous stuff. (#)
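For concreteness, the four message types are presumably the two RPCs of the Raft paper plus their replies. A minimal sketch, assuming the field names from the paper (illustrative, not a full implementation):

```python
# Raft's four core message types as dataclasses. Field names follow the
# Raft paper; this is a sketch for illustration only.
from dataclasses import dataclass, field

@dataclass
class RequestVote:          # candidate -> peers
    term: int
    candidate_id: str
    last_log_index: int
    last_log_term: int

@dataclass
class RequestVoteReply:     # peer -> candidate
    term: int
    vote_granted: bool

@dataclass
class AppendEntries:        # leader -> followers (doubles as the heartbeat)
    term: int
    leader_id: str
    prev_log_index: int
    prev_log_term: int
    entries: list = field(default_factory=list)  # empty list = heartbeat
    leader_commit: int = 0

@dataclass
class AppendEntriesReply:   # follower -> leader
    term: int
    success: bool
```

Note how reusing `AppendEntries` with an empty entry list as the heartbeat is itself an example of trimming extraneous message types.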
"As a first step, you might consider autoscaling based on multiple custom metrics. This is possible to do, but I don't advise it for two reasons. Most important, I think, is that a multi-metric autoscale policy makes communication about its behavior difficult to reason about. "Why did the group scale?" is a very important question, one which should be answerable without elaborate deduction." (#)
Hard to reason about because being easy to reason about was a distant goal behind "making it work well enough to ship". (#)
One way to make things understandable is to create tools whose explicit aim is to help people understand things: "These models will be combined with state-of-the-art human-computer interface techniques capable of translating models into understandable and useful explanation dialogues for the end user." (#)
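As a rough sketch of what such a translation tool could look like (the model, feature names, and weights below are all invented): render a linear model's decision as ranked, human-readable reasons.

```python
# Toy "explanation dialogue": turn a linear model's prediction into a
# sentence that leads with the factors that mattered most.
weights = {"income": 2.0, "debt": -3.5, "years_at_job": 0.8}

def explain(features: dict) -> str:
    contributions = {name: weights[name] * value for name, value in features.items()}
    score = sum(contributions.values())
    verdict = "approve" if score > 0 else "deny"
    # Sort by absolute contribution so the explanation leads with what mattered most.
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    reasons = ", ".join(f"{name} ({c:+.1f})" for name, c in ranked)
    return f"Decision: {verdict}. Largest factors: {reasons}."

print(explain({"income": 1.2, "debt": 0.9, "years_at_job": 3.0}))
```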
"People must be able to correct and understand curation decisions" (#)
When your dataset isn't that big, doing something simpler is often more interpretable and works just as well, since a more complex model would risk overfitting. (#)
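One way to check this claim on your own data: cross-validate a simple, interpretable model against a more flexible one; if the scores are close, the simpler model wins. A sketch with scikit-learn on synthetic "small" data:

```python
# Compare a simple model against a flexible one on a small dataset via
# cross-validation. Close scores argue for the interpretable model.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=120, n_features=15, random_state=0)  # "small" data

models = [
    ("logistic regression", LogisticRegression(max_iter=1000)),
    ("random forest", RandomForestClassifier(n_estimators=200, random_state=0)),
]
for name, model in models:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```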
"When you've got algorithms weighing hundreds of factors over a huge data set, you can't really know why they come to a particular decision or whether it really makes sense" (#)
At 53, the software shows where it would like to take a peek, and where it actually decided to take a peek. (#)
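A guess at the mechanics behind such a display (this is a sketch, not the actual software shown): a hard-attention model scores candidate "peek" locations, then commits to one, and showing both side by side makes its choice legible.

```python
# Toy version: print the model's desire over candidate peek locations
# alongside the location it actually committed to.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.normal(size=5)                      # raw preference per location
want = np.exp(scores) / np.exp(scores).sum()     # softmax: where it *wants* to peek
chose = rng.choice(len(scores), p=want)          # sampled: where it *actually* peeked

for i, p in enumerate(want):
    marker = "<- chosen" if i == chose else ""
    print(f"location {i}: desire {p:.2f} {marker}")
```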
At 24:40 the robot has a self-image that it hones over time, based on its experiences, and the researchers can see what that self-image looks like, to see how it is thinking about itself. (#)
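A loose sketch of what a learnable "self-image" might be (the body matrix, noise model, and training setup here are all invented): a model fit to the robot's own action-sensation experience, which can then be inspected directly.

```python
# The robot fits a model predicting the sensory consequences of its own
# motor commands; inspecting that model is inspecting its "self-image".
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
motor_commands = rng.uniform(-1, 1, size=(200, 2))   # its accumulated experiences
true_arm = np.array([[0.7, 0.1], [-0.2, 0.9]])       # the body it doesn't know it has
sensor_readings = motor_commands @ true_arm + rng.normal(scale=0.05, size=(200, 2))

self_model = LinearRegression().fit(motor_commands, sensor_readings)
print("robot's self-image (learned body matrix):")
print(self_model.coef_.T.round(2))  # honed over experience; approximates true_arm
```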
It will understand our rules, communicate its desires, be legible to our eyes and minds. (#)
Caruana may have brought clarity to his own project, but his solution only underscored the fact that explainability is a kaleidoscopic problem. The explanation a doctor needs from a machine isn't the same as the one a fighter pilot might need or the one an N.S.A. analyst sniffing out a financial fraud might need. Different details will matter, and different technical means will be needed for finding them. You couldn't, for example, simply use Caruana's techniques on facial data, because they don't apply to image recognition. There may, in other words, eventually have to be as many approaches to explainability as there are approaches to machine learning itself. (#)