
Fun AI apps are everywhere now. But a safety "reckoning" is coming

If you've spent much time on Twitter lately, you may have seen a viral black-and-white image of Jar Jar Binks at the Nuremberg Trials, or a courtroom sketch of Snoop Dogg being sued by Snoopy. These surreal creations are the products of Dall-E Mini, a popular web app that creates images on demand. Type in a prompt, and it will rapidly generate a handful of cartoonish images depicting whatever you've asked for.

More than 200,000 people now use Dall-E Mini every day, its creator says, and the number keeps growing. A Twitter account called "Weird Dall-E Generations," created in February, has more than 890,000 followers as of publication. One of its most popular tweets to date is a response to the prompt "CCTV footage of Jesus Christ stealing a bicycle."

If Dall-E Mini seems revolutionary, it's only a crude imitation of what's possible with more powerful tools. As the "Mini" in its name suggests, the tool is effectively a copycat version of Dall-E, a much more powerful text-to-image tool created by one of the world's most advanced artificial intelligence labs.

That lab, OpenAI, boasts online about the ability of the real Dall-E to generate photorealistic images. But OpenAI has not released Dall-E publicly, citing concerns that it "may be used to generate a variety of deceptive or other harmful content." It isn't the only image-generation tool its creator has kept behind closed doors: Google is restricting access to its own similarly powerful tool, called Imagen, while it studies the tool's risks and limitations.

According to both Google and OpenAI, the risks of text-to-image tools include their potential to supercharge bullying and harassment, to generate images that reproduce racist or gender stereotypes, and to spread misinformation. The tools could even erode public trust in real photographs of reality.

Text can be even riskier than images. OpenAI and Google have each developed their own synthetic text generators that chatbots can be built on, and both have chosen not to release them widely to the public, fearing they could be used to manufacture misinformation or to facilitate bullying.

Read more: How AI Will Completely Change Our Lives Over the Next 20 Years

Google and OpenAI have long described themselves as committed to the safe development of AI, pointing, among other things, to their decisions to restrict these potentially dangerous tools to select groups of users, at least for now. But that hasn't stopped them from publicly hyping the tools, announcing their capabilities, and describing how they were built. That has inspired a wave of copycats with fewer ethical qualms. Increasingly, tools pioneered inside Google and OpenAI are being imitated by knock-off apps that circulate ever more widely online, adding to the sense that the character of the public internet is at stake.

"The platform makes it easy to create and share different types of technology without a strong background in computer science," said computer scientist and Google. Margaret Mitchell, a former co-leader of Ethical, said. Artificial intelligence team. "By the end of 2022, the public's understanding of this technology and everything it can do will change radically."

The imitation effect

The rise of Dall-E Mini is just one example of the "imitation effect," a term used by defense analysts to describe how adversaries take inspiration from one another in military research and development. "The imitation effect is when you see that a capability has been demonstrated and realize, oh, that's possible," says Trey Herr, director of the Atlantic Council's Cyber Statecraft Initiative. "What we're seeing with Dall-E Mini right now is that it's possible to recreate a system that can produce these outputs, based on what we know Dall-E is capable of. That greatly reduces the uncertainty, so if I have the resources and technical chops to train a system in that direction, I can be confident of getting there."

That's exactly what happened with Boris Dayma, a machine-learning researcher based in Houston, Texas. When he saw OpenAI's descriptions online of what Dall-E could do, he was inspired to create Dall-E Mini. "I was like, oh, that's super cool," Dayma told TIME. "I wanted to do the same."

"Large groups like Google and OpenAI have to show that they are at the forefront of AI, so what's going on as soon as possible? We'll talk about what we can do, "says Dayma. "[OpenAI] has published a paper that contains a lot of very interesting details about how they created [Dall-E]. They didn't provide the code, but many. We provided important elements. Without the papers they published, we wouldn't have been able to develop the program. "

In June, Dall-E Mini's creators announced that the tool would be renamed Craiyon, in response to a request from OpenAI "to avoid confusion."

Advocates of restraint, like Mitchell, say that accessible image- and text-generation tools inevitably open up a world of creative opportunity, but also a Pandora's box of terrible applications: depicting people in dangerous situations, for example, or creating armies of hate-speech bots to relentlessly bully vulnerable people online.

Read more: Artificial Intelligence Helped Write This Play. It May Contain Racism

Dayma, however, is convinced the danger is negligible, because the images Dall-E Mini produces are far from photorealistic. "In a way, that's a big advantage," he says. "It lets people discover the technology without the risks."

Some other imitation projects carry far greater risks. In June, a program called GPT-4chan appeared: a text generator, or chatbot, trained on text from 4chan, a forum notorious for racism, sexism, and homophobia. Predictably, every new sentence it generated sounded similarly toxic.

Like Dall-E Mini, the tool was created by an independent programmer but was inspired by research at OpenAI. Its name, GPT-4chan, is a nod to GPT-3, OpenAI's flagship text generator. Unlike its copycat, GPT-3 was trained on text scraped from a wide swath of the internet, and its creator, OpenAI, grants only select users access to it.

A new frontier for online safety

In June, after GPT-4chan's racist and bigoted text outputs drew widespread criticism online, the app was removed from Hugging Face, the website that hosted it, for violating the site's terms of use.

Hugging Face lets users access machine-learning-based apps from a web browser, and the platform has become a trusted home for open-source AI apps, including Dall-E Mini.

Clement Delangue, the CEO of Hugging Face, told TIME that business is booming in what he described as a new era of computing, driven by machine learning.

But the controversy over GPT-4chan also signaled a new challenge for the world of online safety. The last online revolution, social media, made millionaires of platform CEOs and put them in the position of deciding what content is (and is not) acceptable online. Questionable decisions have since tarnished their once-lustrous reputations. Now, smaller machine-learning platforms like Hugging Face, with far fewer resources, are becoming a new kind of gatekeeper. As open-source machine-learning tools like Dall-E Mini and GPT-4chan spread online, it will fall to host platforms like Hugging Face to set the limits of what is tolerated.

Delangue says Hugging Face is ready for that gatekeeping role. "We're very excited because we think there is a lot of potential to have a positive impact on the world," he says. "But that means not making the mistakes that a lot of the older players, like the social networks, made: thinking that technology is value-neutral, and removing yourself from the ethical discussions."

Still, like the early social media CEOs before him, Delangue hints at a preference for light-touch content moderation. He says the site's current policy is to politely ask creators to fix their models, and to remove a model altogether only as an "extreme" last resort.

Hugging Face does, however, encourage creators to be transparent about their tools' limitations and biases, informed by the latest research into AI harms. Mitchell, the former Google AI ethicist, now works on these problems at Hugging Face, helping the platform envision what a new content-moderation paradigm for machine learning might look like.

"Obviously, trying to balance all these ideas about open source and public sharing of very powerful technologies with what a malicious attacker can do and what misuse looks like. , There is art there, "says Mitchell. She is in her position as an independent machine learning researcher, not as an employee of Hugging Face. Part of her role is "to shape the AI ​​so that the worst actors and scary scenarios that can be easily predicted do not occur."

Mitchell imagines a worst-case scenario in which a group of schoolchildren trains a text generator like GPT-4chan to bully a classmate via texts, direct messages, Twitter, Facebook, and WhatsApp, to the point where the victim decides to end their own life. "There's going to be a reckoning," Mitchell says. "We know this is going to happen. It's foreseeable. But there's such a breathless fandom around AI and the latest technologies that it's about to happen, and it's already happening."

The danger of AI hype

That "breathtaking fandom" caused controversy this month yet yet. Encapsulated in an AI project. In early June, Google engineer Blake Lemoine claimed that one of the company's chatbots, called LaMDA, based on the company's synthetic text generation software, was perceptual. Google rejected his claim and took him on leave. Around the same time, Ilya Sutskever, a senior executive at OpenAI, suggested on Twitterthat the computer brain is beginning to imitate the human brain. "Psychology should become more and more applicable as AI gets smarter," he said.

In a statement, Google spokesperson Brian Gabriel said the company is "taking a restrained, careful approach with LaMDA to better consider valid concerns about fairness and factuality." OpenAI declined to comment.

For some experts, the debate over LaMDA's possible sentience was a distraction. Rather than arguing over whether chatbots have feelings, they say, AI's most influential players should be rushing to educate people about the technology's potential harms.

"This may be a moment to better educate the general public about what this technology is really doing," said the University of Washington, which is studying machine learning technology. Emily Bender, a professor of linguistics, says. "Or maybe it's a moment when more and more people are embraced and participate in hype." Vendors, even the term "artificial intelligence," are far from "intelligent," or actually conscious technology. He added that it was an incorrect name because it was used to represent it.

Still, Bender says, image generators like Dall-E Mini may have the capacity to teach the public about the limits of AI. It's easy to fool people with a chatbot, because humans tend to look for meaning in language wherever it comes from, she says. Our eyes are harder to deceive: the images Dall-E Mini produces look weird and glitchy, and are certainly far from photorealistic. "People playing with Dall-E Mini don't believe these images are from the real world," Bender says.

For all the AI hype coming out of big companies, crude tools like Dall-E Mini show just how far the technology has to go. Type in "CEO," and Dall-E Mini spits out nine images of a white man in a suit. Type in "female," and every image depicts a white woman. The results reflect the biases in the data that both Dall-E Mini and OpenAI's Dall-E were trained on: images scraped from the internet. That data inevitably includes racist, sexist, and other problematic stereotypes, as well as large quantities of pornography and violence. Even when researchers painstakingly filter out the worst content (as both Dayma and OpenAI say they have done), subtler biases inevitably remain.

Read more: Why Timnit Gebru Isn't Waiting for Big Tech to Solve AI's Problems

Impressive as the technology is, these kinds of fundamental flaws still plague many areas of machine learning, and they are a central reason Google and OpenAI are declining to publicly release their image- and text-generation tools. "The big AI labs have a responsibility to cut it out with the hype and be very clear about what they've actually built," Bender says. "And I'm seeing the opposite."

Billy Perrigo (billy.perrigo@time.com)