What's Up in Neural Networks
Latest Articles
AI Needs Enormous Computing Power. Could Light-Based Chips Help?
Optical neural networks, which use photons instead of electrons, have advantages over traditional systems. They also face major obstacles.
Game Theory Can Make AI More Correct and Efficient
Researchers are drawing on ideas from game theory to improve large language models and make them more consistent.
Does AI Know What an Apple Is? She Aims to Find Out.
The computer scientist Ellie Pavlick is translating philosophical concepts such as “meaning” into concrete, testable ideas.
AI Starts to Sift Through String Theory’s Near-Endless Possibilities
Using machine learning, string theorists are finally showing how microscopic configurations of extra dimensions translate into sets of elementary particles — though not yet those of our universe.
How Do Machines ‘Grok’ Data?
By apparently overtraining them, researchers have seen neural networks discover novel solutions to problems.
How Chain-of-Thought Reasoning Helps Neural Networks Compute
Large language models do better at solving problems when they show their work. Researchers are beginning to understand why.
How Selective Forgetting Can Help AI Learn Better
Erasing key information during training results in machine learning models that can learn new languages faster and more easily.
How Quickly Do Large Language Models Learn Unexpected Skills?
A new study suggests that so-called emergent abilities actually develop gradually and predictably, depending on how you measure them.
New Theory Suggests Chatbots Can Understand Text
Far from being “stochastic parrots,” the biggest large language models seem to learn enough skills to understand the words they’re processing.