What's Up in Natural Language Processing

Latest Articles

How Quickly Do Large Language Models Learn Unexpected Skills?

February 13, 2024

A new study suggests that so-called emergent abilities actually develop gradually and predictably, depending on how you measure them.

New Theory Suggests Chatbots Can Understand Text

January 22, 2024

Far from being “stochastic parrots,” the biggest large language models seem to learn enough skills to understand the words they’re processing.

Tiny Language Models Come of Age

October 5, 2023

To better understand how neural networks learn to simulate writing, researchers trained simpler versions on synthetic children’s stories.

Q&A

Why Mathematical Proof Is a Social Compact

August 31, 2023

Number theorist Andrew Granville on what mathematics really is — and why objectivity is never quite within reach.

Secret Messages Can Hide in AI-Generated Media

May 18, 2023

In steganography, an ordinary message masks the presence of a secret communication. Humans can never do it perfectly, but a new study shows it’s possible for machines.

Chatbots Don’t Know What Stuff Isn’t

May 12, 2023

Today’s language models are more sophisticated than ever, but they still struggle with the concept of negation. That’s unlikely to change anytime soon.

A New Approach to Computation Reimagines Artificial Intelligence

April 13, 2023

By imbuing enormous vectors with semantic meaning, we can get machines to reason more abstractly — and efficiently — than before.

The Unpredictable Abilities Emerging From Large AI Models

March 16, 2023

Large language models like ChatGPT are now big enough that they’ve started to display startling, unpredictable behaviors.

To Teach Computers Math, Researchers Merge AI Approaches

February 15, 2023

Large language models still struggle with basic reasoning tasks. Two new papers that apply machine learning to math provide a blueprint for how that could change.
