AI’s Black Boxes Just Got a Little Less Mysterious

One of the weirder, more unnerving things about today’s leading artificial intelligence systems is that nobody — not even the people who build them — really knows how the systems work.

That’s because large language models, the type of A.I. systems that power ChatGPT and other popular chatbots, are not programmed line by line by human engineers, as conventional computer programs are.

Instead, these systems essentially learn on their own, by ingesting massive amounts of data and identifying patterns and relationships in language, then using that knowledge to predict the next words in a sequence.
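
To make that "predict the next word" step concrete, here is a minimal sketch in Python using the small, open-source GPT-2 model through the Hugging Face "transformers" library. It is an illustration of the general mechanism, not the code behind ChatGPT or Claude, and the prompt is just an example.

```python
# A minimal sketch of next-word prediction with a small open-source model.
# This illustrates the general mechanism, not any specific commercial system.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Which American city has the best food? The answer is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits      # a score for every word in the vocabulary

next_token_id = int(logits[0, -1].argmax())   # the highest-scoring next token
print(tokenizer.decode([next_token_id]))      # the model's predicted continuation
```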

One consequence of building A.I. systems this way is that it’s difficult to reverse-engineer them or to fix problems by identifying specific bugs in the code. Right now, if a user types “Which American city has the best food?” and a chatbot responds with “Tokyo,” there’s no real way of understanding why the model made that error, or why the next person who asks may receive a different answer.

And when large language models do misbehave or go off the rails, nobody can really explain why. (I encountered this problem last year, when a Bing chatbot acted in an unhinged way during an interaction with me, and not even top executives at Microsoft could tell me with any certainty what had gone wrong.)

The inscrutability of large language models is not just an annoyance but a major reason some researchers fear that powerful A.I. systems could eventually become a threat to humanity.

After all, if we can’t understand what’s happening inside these models, how will we know if they can be used to create novel bioweapons, spread political propaganda or write malicious computer code for cyberattacks? If powerful A.I. systems start to disobey or deceive us, how can we stop them if we can’t understand what’s causing that behavior in the first place?


To address these problems, a small subfield of A.I. research known as “mechanistic interpretability” has spent years trying to peer inside the guts of A.I. language models. The work has been slow going, and progress has been incremental.

There has also been growing resistance to the idea that A.I. systems pose much risk at all. Last week, two senior safety researchers at OpenAI, the maker of ChatGPT, left the company amid a conflict with executives over whether the company was doing enough to make its products safe.

But this week, a team of researchers at the A.I. company Anthropic announced what they called a major breakthrough — one they hope will give us the ability to understand more about how A.I. language models actually work, and to possibly prevent them from becoming harmful.

The team summarized its findings this week in a blog post called “Mapping the Mind of a Large Language Model.”

The researchers looked inside one of Anthropic’s A.I. models — Claude 3 Sonnet, a version of the company’s Claude 3 language model — and used a technique known as “dictionary learning” to uncover patterns in how combinations of neurons, the mathematical units inside the A.I. model, were activated when Claude was prompted to talk about certain topics. They identified roughly 10 million of these patterns, which they call “features.”
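
To give a rough sense of what dictionary learning means here, the sketch below trains a toy sparse autoencoder in PyTorch: a small network that re-expresses each bundle of neuron activations as a sparse combination of a much larger set of candidate "features." The layer sizes, variable names and training loop are illustrative assumptions, not Anthropic's actual setup, and random numbers stand in for real model activations.

```python
# Toy sketch of dictionary learning with a sparse autoencoder (illustrative
# sizes and names; not Anthropic's code). Each activation vector is
# re-expressed as a sparse combination of learned "features."
import torch
import torch.nn as nn

d_model, n_features = 512, 8192            # assumed sizes for illustration

encoder = nn.Linear(d_model, n_features)   # maps activations -> feature strengths
decoder = nn.Linear(n_features, d_model)   # reconstructs activations from features
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3
)

def train_step(activations, l1_weight=1e-3):
    """One step: reconstruct the activations while keeping features sparse."""
    features = torch.relu(encoder(activations))   # most entries end up near zero
    reconstruction = decoder(features)
    loss = ((reconstruction - activations) ** 2).mean() + l1_weight * features.abs().mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# In practice the activations would be recorded from a real language model;
# here random data stands in for them.
for _ in range(100):
    batch = torch.randn(64, d_model)
    train_step(batch)
```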

They also found that manually turning certain features on or off could change how the A.I. system behaved, or could even get the system to break its own rules.

For example, they discovered that if they forced a feature linked to the concept of sycophancy to activate more strongly, Claude would respond with flowery, over-the-top praise for the user, including in situations where flattery was inappropriate.
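
In broad strokes, that kind of intervention amounts to clamping a feature's strength before the activations are reconstructed and passed back into the model. The hypothetical sketch below continues the toy autoencoder above; the feature index and multiplier are invented for illustration and do not correspond to Anthropic's real sycophancy feature.

```python
# Hypothetical sketch of "feature steering," continuing the toy autoencoder
# above (uses its encoder, decoder and d_model). The index and multiplier
# are made up for illustration.
import torch

SYCOPHANCY_FEATURE = 1234    # assumed index of a feature of interest
BOOST = 10.0                 # how strongly to force the feature on

def steer(activations):
    features = torch.relu(encoder(activations))    # feature strengths
    features[:, SYCOPHANCY_FEATURE] *= BOOST       # clamp the chosen feature up
    return decoder(features)                       # modified activations fed back to the model

steered = steer(torch.randn(1, d_model))
```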

Chris Olah, who led the Anthropic interpretability research team, said in an interview that these findings could allow A.I. companies to control their models more effectively.

“We’re discovering features that may shed light on concerns about bias, safety risks and autonomy,” he said. “I’m feeling really excited that we might be able to turn these controversial questions that people argue about into things we can actually have more productive discourse on.”

Other researchers have found similar phenomena in small- and medium-size language models. But Anthropic’s team is among the first to apply these techniques to a full-size model.

Jacob Andreas, an associate professor of computer science at M.I.T., who reviewed a summary of Anthropic’s research, characterized it as a hopeful sign that large-scale interpretability might be possible.

“In the same way that understanding basic things about how people work has helped us cure diseases, understanding how these models work will both let us recognize when things are about to go wrong and let us build better tools for controlling them,” he said.

Mr. Olah, the Anthropic research leader, cautioned that while the new findings represent important progress, A.I. interpretability is still far from a solved problem.


For starters, he said, the largest A.I. models likely contain billions of features representing distinct concepts — many more than the 10 million or so features that Anthropic’s team claims to have discovered. Finding them all would require massive amounts of computing power and would be too costly for all but the richest A.I. companies to attempt.

Even if researchers were to identify every feature in a large A.I. model, they would still need more information to understand the full inner workings of the model. There is also no guarantee that A.I. companies would act to make their systems safer.

Still, Mr. Olah said, even prying open these A.I. black boxes a little bit could allow companies, regulators and the general public to feel more confident that these systems can be controlled.

“There are lots of other challenges ahead of us, but the thing that seemed scariest no longer seems like a roadblock,” he said.
