Four Takeaways on the Race to Amass Data for A.I.

Online data has long been a valuable commodity. For years, Meta and Google have used data to target their online advertising. Netflix and Spotify have used it to recommend more movies and music. Political candidates have turned to data to learn which groups of voters to train their sights on.

Over the last 18 months, it has become increasingly clear that digital data is also crucial in the development of artificial intelligence. Here’s what to know.

The success of A.I. depends on data. That’s because A.I. models become more accurate and more humanlike with more data.

In the same way that a student learns by reading more books, essays and other information, large language models — the systems that are the basis of chatbots — also become more accurate and more powerful if they are fed more data.

Some large language models, such as OpenAI’s GPT-3, released in 2020, were trained on hundreds of billions of “tokens,” which are essentially words or pieces of words. More recent large language models were trained on more than three trillion tokens.

Tech companies are using up publicly available online data to develop their A.I. models faster than new data is being produced. According to one prediction, high-quality digital data will be exhausted by 2026.

In the race for more data, OpenAI, Google and Meta are turning to new tools, changing their terms of service and engaging in internal debates.


At OpenAI, researchers created a program in 2021 that converted the audio of YouTube videos into text and then fed the transcripts into one of its A.I. models, going against YouTube’s terms of service, people with knowledge of the matter said.

(The New York Times has sued OpenAI and Microsoft for using copyrighted news articles without permission for A.I. development. OpenAI and Microsoft have said they used news articles in transformative ways that did not violate copyright law.)

Google, which owns YouTube, also used YouTube data to develop its A.I. models, wading into a legal gray area of copyright, people with knowledge of the action said. And Google revised its privacy policy last year so it could use publicly available material to develop more of its A.I. products.

At Meta, executives and lawyers last year debated how to get more data for A.I. development and discussed buying a major publisher like Simon & Schuster. In private meetings, they weighed the possibility of putting copyrighted works into their A.I. model, even if it meant they would be sued later, according to recordings of the meetings, which were obtained by The Times.

OpenAI, Google and other companies are exploring using their A.I. to create more data. The result would be what is known as “synthetic” data. The idea is that A.I. models generate new text that can then be used to build better A.I.

Synthetic data is risky because A.I. models can make errors. Relying on such data can compound those mistakes.

