What's Up in Neural Networks
Latest Articles
Why Do Researchers Care About Small Language Models?
Larger models can pull off greater feats, but the accessibility and efficiency of smaller models make them attractive tools.
The Physicist Working to Build Science-Literate AI
By training machine learning models on enough examples of basic science, Miles Cranmer hopes to accelerate the pace of scientific discovery.
Chatbot Software Begins to Face Fundamental Limitations
Recent results show that large language models struggle with compositional tasks, suggesting a hard limit to their abilities.
Can AI Models Show Us How People Learn? Impossible Languages Point a Way.
Certain grammatical rules never appear in any known language. By constructing artificial languages that have these rules, linguists can use neural networks to explore how people learn.
The Year in Computer Science
Researchers got a better look at the thoughts of chatbots, amateurs learned exactly how complicated simple systems can be, and quantum computers passed an essential milestone.
Debate May Help AI Models Converge on Truth
How do we know if a large language model is lying? Letting AI systems argue with each other may help expose the truth.
The Computer Scientist Who Builds Big Pictures From Small Details
To better understand machine learning algorithms, Lenka Zdeborová treats them like physical materials.
How ‘Embeddings’ Encode What Words Mean — Sort Of
Machines work with words by embedding their relationships with other words in a string of numbers.
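The idea behind embeddings can be illustrated with a toy sketch: each word becomes a vector of numbers, and words with related meanings end up with vectors that point in similar directions. The tiny three-dimensional vectors below are hand-made for illustration; real models learn embeddings with hundreds of dimensions from large text corpora.

```python
import math

# Toy 3-dimensional "embeddings" (hypothetical values for illustration only;
# real embeddings are learned, not hand-assigned).
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.2, 0.8],
    "apple": [0.1, 0.1, 0.9],
}

def cosine_similarity(u, v):
    """Measure how closely two word vectors point in the same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Related words score higher than unrelated ones.
print(cosine_similarity(embeddings["king"], embeddings["queen"]))
print(cosine_similarity(embeddings["king"], embeddings["apple"]))
```

Here "king" and "queen" come out more similar to each other than either is to "apple", which is the basic property embeddings are trained to capture.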
Novel Architecture Makes Neural Networks More Understandable
By tapping into a decades-old mathematical principle, researchers are hoping that Kolmogorov-Arnold networks will facilitate scientific discovery.