What's Up in Artificial Intelligence
Latest Articles
To Make Language Models Work Better, Researchers Sidestep Language
We insist that large language models repeatedly translate their mathematical processes into words. There may be a better way.
Where Does Meaning Live in a Sentence? Math Might Tell Us.
The mathematician Tai-Danae Bradley is using category theory to try to understand both human and AI-generated language.
Intelligence Evolved at Least Twice in Vertebrate Animals
Complex neural pathways likely arose independently in birds and mammals, suggesting that vertebrates evolved intelligence multiple times.
Why Do Researchers Care About Small Language Models?
Larger models can pull off greater feats, but the accessibility and efficiency of smaller models make them attractive tools.
The Physicist Working to Build Science-Literate AI
By training machine learning models on enough examples of basic science, Miles Cranmer hopes to accelerate the pace of scientific discovery.
The Poetry Fan Who Taught an LLM to Read and Write DNA
By treating DNA as a language, Brian Hie’s “ChatGPT for genomes” could pick up patterns that humans can’t see, accelerating biological design.
Chatbot Software Begins to Face Fundamental Limitations
Recent results show that large language models struggle with compositional tasks, suggesting a hard limit to their abilities.
Can AI Models Show Us How People Learn? Impossible Languages Point a Way.
Certain grammatical rules never appear in any known language. By constructing artificial languages that follow these rules and training neural networks on them, linguists can explore how people learn.
The Year in Computer Science
Researchers got a better look at the thoughts of chatbots, amateurs learned exactly how complicated simple systems can be, and quantum computers passed an essential milestone.