“What is surprising about these large language models is how well they know how the world works just by reading all the things they can find,” says Chris Manning, a professor at Stanford University who specializes in artificial intelligence and language.
But GPT and its ilk are essentially very talented statistical parrots: they learn to recreate the word patterns and grammar found in language. As a result, they can also spout nonsense, wildly inaccurate facts, and hateful language drawn from the dark corners of the web.
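The “statistical parrot” idea can be made concrete with a toy sketch. The snippet below is not how GPT works (GPT uses a transformer neural network, not a lookup table); it is a minimal bigram Markov chain, the simplest possible model that, like GPT, only learns which words tend to follow which, with no grounding in what the words mean. All names and the tiny corpus here are illustrative.

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Build a bigram table: map each word to the words observed following it."""
    words = text.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=8, seed=0):
    """Walk the table, picking a random observed successor at each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = model.get(out[-1])
        if not successors:
            break  # dead end: no word was ever seen after this one
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
model = train_bigram_model(corpus)
print(generate(model, "the"))
```

Every sentence such a model produces is locally plausible (each word pair occurred in the training text) yet the model has no idea whether the whole is true or sensible, which is exactly the criticism leveled at large language models, scaled down to a dozen words.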
Amnon Shashua, a professor of computer science at the Hebrew University of Jerusalem, is the co-founder of another startup building a bilingual AI language model based on this approach. He knows a thing or two about marketing AI: he sold his last company, Mobileye, which pioneered the use of artificial intelligence to help cars detect objects on the road, to Intel Corporation in 2017 for $15.3 billion.
Shashua’s new company, AI21, which came out of stealth last week, has developed an AI algorithm, called Jurassic-1, that demonstrates remarkable language skills in both English and Hebrew.
In demos, Jurassic-1 can compose paragraphs of text on a given topic, dream up catchy headlines for blog posts, write simple bits of computer code, and more. Shashua says the model is more sophisticated than GPT-3, and he believes future versions of Jurassic may be able to build a kind of genuine understanding of the world from the information they collect.
Other efforts to recreate GPT-3 reflect the diversity of languages found in the world, and on the internet. In April, researchers at Huawei, the Chinese tech giant, posted details of a GPT-like Chinese language model called PanGu-alpha (written as PanGu-α). In May, Naver, the South Korean search giant, said it had developed its own language model, called HyperCLOVA, that “speaks” Korean.
Jie Tang, a professor at Tsinghua University, leads a team at the Beijing Academy of Artificial Intelligence that developed another Chinese language model, called Wudao (meaning “enlightenment”), with help from government and industry.
Wudao’s model is much larger than any other, which means its simulated neural network is spread across more cloud computers. Increasing the size of the neural network was a key factor in making GPT-2 and GPT-3 more capable. Wudao can also work with both images and text, and Tang has set up a company to commercialize it. “We think this could be a cornerstone of all AI,” Tang says.
This enthusiasm appears justified by the capabilities of these new AI programs, but the race to commercialize such language models may also move more quickly than efforts to add guardrails or limit misuse.
Perhaps the most pressing concern about AI language models is how they might be misused. Because the models can produce compelling text on a topic, some people fear they could easily be used to generate fake reviews, spam, or fake news.
“I would be surprised if disinformation operators don’t at least invest serious energy in trying these models,” says Micah Musser, a research analyst at Georgetown University who has studied how language models can spread misinformation.
Musser says research suggests that AI will not be able to reliably detect misinformation generated by AI: there is unlikely to be enough information in a tweet for a machine to judge whether it was written by a machine.
More problematic biases may be lurking within these giant language models, too. Research shows that language models trained on Chinese internet content will reflect the censorship that shaped that content. The programs also inevitably capture and reproduce subtle and explicit biases about race, gender, and age in the language they consume, including hateful statements and ideas.
Likewise, these large language models may fail in surprising or unexpected ways, adds Percy Liang, another professor of computer science at Stanford University and the principal investigator at a new center dedicated to studying the potential of powerful general-purpose AI models such as GPT-3.