What's up in Natural Language Processing
Latest Articles
The Poetry Fan Who Taught an LLM to Read and Write DNA
By treating DNA as a language, Brian Hie’s “ChatGPT for genomes” could pick up patterns that humans can’t see, accelerating biological design.
Chatbot Software Begins to Face Fundamental Limitations
Recent results show that large language models struggle with compositional tasks, suggesting a hard limit to their abilities.
Can AI Models Show Us How People Learn? Impossible Languages Point a Way.
Certain grammatical rules never appear in any known language. By constructing artificial languages that have these rules, linguists can use neural networks to explore how people learn.
Debate May Help AI Models Converge on Truth
How do we know if a large language model is lying? Letting AI systems argue with each other may help expose the truth.
The Computer Scientist Who Builds Big Pictures From Small Details
To better understand machine learning algorithms, Lenka Zdeborová treats them like physical materials.
How ‘Embeddings’ Encode What Words Mean — Sort Of
Machines work with words by embedding their relationships with other words in a string of numbers.
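The idea in this teaser can be sketched in a few lines: if each word is a string of numbers (a vector), then "relationships with other words" become geometry, and related words point in similar directions. The tiny hand-made vectors below are invented for illustration only; real embeddings have hundreds of dimensions and are learned from text.

```python
import math

# Toy 3-dimensional "embeddings" — hand-made for illustration, not learned.
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.2, 0.1],
    "apple": [0.1, 0.1, 0.9],
}

def cosine_similarity(u, v):
    """How closely two word vectors point in the same direction (1.0 = identical)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Related words end up more similar than unrelated ones.
print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # higher
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # lower
```

With learned embeddings (e.g. from word2vec or a modern language model), the same similarity computation surfaces semantic relationships that were never explicitly programmed in.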
Will AI Ever Have Common Sense?
Common sense has been viewed as one of the hardest challenges in AI. That said, GPT-4 has acquired what some believe is an impressive sense of humanity. How is this possible? Listen to this week’s “The Joy of Why” with co-host Steven Strogatz.
What Is Machine Learning?
Neural networks and other forms of machine learning ultimately learn by trial and error, one improvement at a time.
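"Trial and error, one improvement at a time" can be made concrete with the simplest possible case: gradient descent on a single weight. This is a minimal sketch with invented data following the rule y = 2x; real neural networks do the same nudge-the-weights loop over millions of parameters.

```python
# Invented toy data following the hidden rule y = 2 * x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0              # initial (wrong) guess for the weight
learning_rate = 0.01

for step in range(200):
    # Trial: measure how wrong the current guess is (mean-squared-error gradient).
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    # Error correction: nudge the weight slightly in the direction that reduces error.
    w -= learning_rate * grad

print(round(w, 3))   # converges toward the true weight, 2.0
```

Each pass makes the guess a little less wrong; after a couple of hundred small corrections the weight has effectively learned the rule from examples alone.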
Game Theory Can Make AI More Correct and Efficient
Researchers are drawing on ideas from game theory to improve large language models and make them more consistent.