Google researchers developed a way to peer inside the minds of deep-learning systems, and the results are delightfully weird.

What they did: The team built a tool that combines several techniques to provide people with a clearer idea of how neural networks make decisions. Applied to image classification, it lets a person visualize how the network develops its understanding of what is, for instance, a kitten or a Labrador. The visualizations, above, are ... strange.
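
For a sense of how such visualizations come about, here is a minimal, illustrative sketch of one common ingredient, feature visualization by gradient ascent: starting from noise and nudging an image until a chosen class score goes up. This is not Google's exact tool, and the model choice and parameters are assumptions for the example.

```python
import torch
import torchvision.models as models

# Illustrative choice of a pretrained ImageNet classifier.
model = models.resnet18(weights="DEFAULT").eval()

# Start from random noise and ascend the gradient of one class logit,
# so the image drifts toward whatever the network "thinks" that class
# looks like -- the source of the strange-looking pictures.
image = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

target_class = 208  # "Labrador retriever" in the ImageNet label set

for step in range(200):
    optimizer.zero_grad()
    logits = model(image)
    loss = -logits[0, target_class]  # maximize the class score
    loss.backward()
    optimizer.step()

# `image` now approximates the network's internal notion of the class.
# Real interpretability tools add regularization and image transformations
# to keep the result from dissolving into high-frequency noise.
```
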

Why it matters: Deep learning is powerful—but opaque. That’s a problem if you want it to, say, drive a car for you. So being able to visualize decisions behind image recognition could help reveal why an autonomous vehicle has made a serious error. Plus, humans tend to want to know why a decision was made, even if it was correct.

But: Not everyone thinks machines need to explain themselves. In a recent debate, Yann LeCun, who leads Facebook’s AI research, argued that we should simply focus on their behavior. After all, we can’t always explain the decisions humans make either.