What's Up in Interpretability

Latest Articles

Novel Architecture Makes Neural Networks More Understandable

September 11, 2024

By tapping into a decades-old mathematical principle, researchers are hoping that Kolmogorov-Arnold networks will facilitate scientific discovery.

Tiny Language Models Come of Age

October 5, 2023

To better understand how neural networks learn to simulate writing, researchers trained simpler versions on synthetic children’s stories.

Q&A

The Computer Scientist Peering Inside AI’s Black Boxes

April 27, 2023

Cynthia Rudin wants machine learning models, which are responsible for increasingly important decisions, to show their work.

Researchers Gain New Understanding From Simple AI

April 14, 2022

Language processing programs are notoriously hard to interpret, but smaller versions can provide important insights into how they work.

Computer Scientists Prove Why Bigger Neural Networks Do Better

February 10, 2022

Two researchers show that neural networks need far more parameters than previously thought if they are to remember their data well.

A New Link to an Old Model Could Crack the Mystery of Deep Learning

October 11, 2021

To help them explain the shocking success of deep neural networks, researchers are turning to older but better-understood models of machine learning.

Foundations Built for a General Theory of Neural Networks

January 31, 2019

Neural networks can be as unpredictable as they are powerful. Now mathematicians are beginning to reveal how a neural network’s form will influence its function.

Q&A

A New Approach to Understanding How Machines Think

January 10, 2019

Neural networks are famously incomprehensible, so Been Kim is developing a “translator for humans.”

New Theory Cracks Open the Black Box of Deep Learning

September 21, 2017

A new idea is helping to explain the puzzling success of today’s artificial-intelligence algorithms — and might also explain how human brains learn.
