At the first of two Google I/O keynotes this week, Google announced LaMDA 2, the successor to LaMDA, the AI system the company showcased at Google I/O 2021. Short for Language Model for Dialogue Applications, LaMDA 2 can break down complex topics into simple, digestible explanations and steps, as well as generate suggestions in response to questions, Google says.
LaMDA 2, an AI system designed for “dialogue apps”, can understand millions of topics and generate “natural conversations” that never take the same path twice, according to Google. Like most AI systems, LaMDA 2 learns the probability of words appearing in a body of text – usually a sentence – from a vast number of text examples. The examples come in the form of documents in training datasets, which contain terabytes to petabytes of data pulled from social media, Wikipedia, books, software hosting platforms like GitHub, and other sources on the public web.
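To illustrate the statistical idea described above – estimating the probability of a word given the words around it from example text – here is a minimal bigram model sketch in Python. This is only a toy: systems like LaMDA use large neural networks trained on far more data, not raw counts, and the corpus and function names here are invented for illustration.

```python
from collections import Counter, defaultdict

# Tiny stand-in corpus; real training datasets span terabytes to petabytes.
corpus = [
    "dogs smell with their noses",
    "dogs smell far better than humans",
    "humans smell with their noses too",
]

# Count how often each word follows each preceding word (bigram counts).
bigram_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        bigram_counts[prev][nxt] += 1

def next_word_probability(prev: str, nxt: str) -> float:
    """Estimate P(nxt | prev) from the bigram counts."""
    total = sum(bigram_counts[prev].values())
    return bigram_counts[prev][nxt] / total if total else 0.0

print(next_word_probability("dogs", "smell"))  # 1.0: "dogs" is always followed by "smell"
print(next_word_probability("smell", "with"))  # ~0.67: 2 of 3 occurrences of "smell" precede "with"
```

Modern dialogue models replace these counts with learned neural network weights and condition on much longer contexts, but the underlying objective – predicting likely next words from examples – is the same.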
During an onstage segment of the keynote, Google CEO Sundar Pichai walked through a demo where a user asked LaMDA 2 to describe the Mariana Trench in a series of questions. The model answered questions about what creatures might live in the trench and others about topics it hadn’t been explicitly trained to answer, such as submarines and bioluminescence. In another demo, LaMDA 2 provided tips on planting a vegetable garden, offering a list of tasks and sub-tasks related to the location of the garden and what could be planted in it, such as tomatoes, lettuce, or garlic.
However, the jury is out on LaMDA 2’s accuracy in all tasks, given that the system’s predecessor sometimes gave false “facts” in internal testing – in one case, repeatedly offering false information about Mount Everest. Pichai acknowledged that models like LaMDA 2 aren’t perfect, but highlighted the sophistication of the technology’s high-level capabilities while promising that work is underway to fill in the gaps.
“These experiments show the potential for language models to help us one day with things like planning, learning about the world, and more,” Pichai said.
Alongside LaMDA 2, Google unveiled AI Test Kitchen, an interactive hub for AI demonstrations powered by models like LaMDA 2. Available as an app, AI Test Kitchen lets users interact with the models in restricted ways, such as exploring a particular topic with LaMDA 2 (e.g. dogs) and going deeper into subtopics of that topic (how dogs smell). Google says it will continue to add “other emerging areas of AI” to AI Test Kitchen, both in natural language processing and beyond.
“Each [demo in AI Test Kitchen is] meant to give you a taste of what it could be like to have LaMDA in your hands and use it for things you care about,” Pichai said. “These are not products – they are quick sketches that allow us to explore what [models like LaMDA 2] can do with you.”
AI Test Kitchen will roll out in the United States in the coming months, but will not be widely available. Google hasn’t fully decided how it will offer access, but according to The Verge, the company plans to reach out to select academics, researchers, and policymakers to begin with.