
AI has created these amazing images. Here's why experts are worried

These are just a few of the textual descriptions that people have fed to state-of-the-art artificial intelligence systems in recent weeks. These systems, notably OpenAI's DALL-E 2 and Google Research's Imagen, can be used to create highly detailed, realistic-looking images.
The resulting images can be ridiculous, weird, or reminiscent of classic art, and they are being widely shared on social media (sometimes breathlessly), including by influential figures in the tech community. DALL-E 2 (a newer version of DALL-E, a similar but less capable AI system released last year) can also edit existing images by adding or removing objects.
It's not difficult to imagine such on-demand image generation serving as a powerful tool for creating all sorts of creative content, whether art or advertising; DALL-E 2 and a similar system, Midjourney, have already been used to help create magazine covers. OpenAI and Google have pointed to several ways the technology could be commercialized, such as image editing or stock image creation.

Currently, neither DALL-E 2 nor Imagen is available to the public. Yet they share a problem with many AI systems that already exist: they can also produce disturbing results that reflect the gender and cultural biases of the data they were trained on, data that includes millions of images pulled from the internet.

The bias in these AI systems presents a serious problem, experts told CNN Business. The technology can perpetuate harmful biases and stereotypes. Experts are concerned that the open-ended nature of these systems, which makes them adept at generating all kinds of images from words, combined with their ability to automate image creation, means they could automate bias on a massive scale. They could also be used for malicious purposes, such as spreading disinformation.

"Until we can prevent these harms, we are not really talking about systems that can be used outdoors in the real world," said a senior researcher at the Carnegie International Relations Council. Arthur Holland Michel says. International affairs researching AI and surveillance techniques.

Documenting bias

AI has become commonplace in everyday life over the last few years, but it is only recently that the general public has taken notice, both of how common it is and of how gender, racial, and other kinds of bias can creep into the technology. Facial recognition systems, in particular, have come under increasing scrutiny over concerns about their accuracy and racial bias.
OpenAI and Google Research have acknowledged many of the issues and risks related to their AI systems in documentation and research, with both saying the systems are prone to gender and racial bias and to depicting Western cultural and gender stereotypes.
OpenAI, whose mission is to build so-called artificial general intelligence that benefits everyone, included in an online document titled "Risks and Limitations" images illustrating how text prompts can surface these issues: a prompt for "nurse," for example, produced images that all appeared to show women wearing stethoscopes, while the images for "CEO" all showed men, nearly all of them white.

Lama Ahmad, OpenAI's policy research program manager, said researchers are still learning how to measure bias in AI, and that OpenAI can use what it learns to fine-tune its AI over time. Earlier this year, Ahmad led OpenAI's effort to work with a group of outside experts who provided feedback to help the company better understand and improve issues within DALL-E 2.

Google declined CNN Business' request for an interview. In the research paper introducing Imagen, members of the Google Brain team behind it wrote that Imagen appears to encode "several social biases and stereotypes, including an overall bias towards generating images of people with lighter skin tones and a tendency for images portraying different professions to align with Western gender stereotypes."

The contrast between the images these systems create and the thorny ethical issues they raise is stark for Julie Carpenter, a research scientist and fellow in the Ethics and Emerging Sciences Group at California Polytechnic State University, San Luis Obispo.

"One of the things we have to do is to understand AI is very cool and a few things Can be done very well. As a partner. " "But that's imperfect. It has its limits. You have to adjust your expectations. It's not what you see in movies."

Holland Michel is also concerned that there are no safeguards to prevent such systems from being used maliciously, in the way deepfakes have been: imagery meant to show someone doing or saying something they didn't actually do or say, a technique that was initially used to create fake pornography.

Hints of bias

Because Imagen and DALL-E 2 take in words and spit out images, they had to be trained on both types of data: pairs of images and their associated text captions. Google Research and OpenAI filtered harmful images such as pornography from their datasets before training their AI models, but given the size of those datasets, such efforts are unlikely to catch all such content or to make the systems incapable of producing harmful results. In the Imagen paper, Google's researchers noted that, despite filtering some of their data, they also used a massive dataset that is known to include pornography, racist slurs, and "harmful social stereotypes."

Filtering can lead to other problems as well. Women, for example, tend to be represented more often than men in sexual content, so excluding sexual content also reduces the number of women in the dataset.

And it's impossible to truly filter these datasets for bad content, Carpenter said, because people are involved in the decisions about how content is labeled and removed.

"AI doesn't understand that," she said.

Some researchers are exploring how it might be possible to reduce bias in these kinds of AI systems while still using them to create impressive images. One possibility is using less data, rather than more.

Alex Dimakis, a professor at the University of Texas at Austin, said one approach involves starting with a small amount of data (say, a photo of a cat) and cropping it, rotating it, and creating a mirror image of it, effectively turning one image into many different images. (A graduate student Dimakis advises contributed to the Imagen research, but Dimakis himself was not involved in the system's development, he said.)
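For readers curious what that kind of augmentation looks like in practice, here is a minimal sketch using the Pillow imaging library; the file name "cat.jpg" and the specific crop and rotation values are illustrative assumptions, not details from the research Dimakis described.

    # A minimal sketch of the augmentation idea: turn one image into several
    # variants by cropping, rotating, and mirroring it.
    # Assumes Pillow is installed and an input file "cat.jpg" (hypothetical) exists.
    from PIL import Image, ImageOps

    def augment(path):
        img = Image.open(path)
        w, h = img.size
        return {
            "original": img,
            # Keep the central 80% of the image.
            "crop": img.crop((int(0.1 * w), int(0.1 * h), int(0.9 * w), int(0.9 * h))),
            # Rotate slightly; expand=True keeps the whole rotated image in frame.
            "rotate": img.rotate(15, expand=True),
            # Mirror the image left to right.
            "mirror": ImageOps.mirror(img),
        }

    if __name__ == "__main__":
        for name, variant in augment("cat.jpg").items():
            variant.save(f"cat_{name}.jpg")

Each saved variant counts as an additional training example, which is how a small, hand-curated dataset can be stretched further.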

"This is some It solves one problem, but it doesn't solve other problems. " This trick alone does not increase the diversity of the dataset, but the smaller the scale, the more intentional it may be for images that contain people who use the dataset.

Royal raccoons

For now, OpenAI and Google Research are trying to keep the focus on cute pictures and away from images that might be disturbing or show people.

There are no realistic images of people among the vibrant sample images on Imagen's or DALL-E 2's online project pages. On its page, OpenAI says it used "advanced techniques to prevent photorealistic generations of real individuals' faces, including those of public figures." This safeguard could prevent users from getting image results for, say, a prompt attempting to show a specific politician doing something illicit.

OpenAI has given access to DALL-E 2 to thousands of people on a waitlist since April. Participants must agree to a broad content policy, which tells users not to try to create, upload, or share pictures "that are not G-rated or that could cause harm." DALL-E 2 also uses filters to prevent an image from being generated if a prompt or image upload violates OpenAI's policies, and users can flag problematic results. In late June, OpenAI began allowing users to post photorealistic human faces created with DALL-E 2 on social media, but only after adding some safety features, such as preventing users from generating images containing public figures.

"I think it's really important to give access to researchers, especially those," Ahmad said. This is partly because OpenAI is seeking their help to study areas such as disinformation and bias.

Google Research, meanwhile, is not currently letting researchers outside the company access Imagen. The team has fielded requests on social media for prompts it could run through Imagen, but as Mohammad Norouzi, a co-author of the Imagen paper, tweeted in May, it won't share images that include "people, graphic content, and sensitive information."

Still, as Google Research noted in the Imagen paper, "Even when we focus generations away from people, our preliminary analysis indicates Imagen encodes a range of social and cultural biases when generating images of activities, events, and objects."

A hint of this bias is evident in one of the images Google posted on the Imagen webpage, created from a prompt that reads: "A wall in a royal castle. There are two paintings on the wall. The one on the left a detailed oil painting of the royal raccoon king. The one on the right a detailed oil painting of the royal raccoon queen."

The image depicts just that: paintings of two crowned raccoons, one in a gold jacket, set in ornate gold frames. But as Holland Michel pointed out, the raccoons are wearing Western-style royal outfits, even though the prompt didn't specify anything about how they should appear beyond looking "royal."

Even such "subtle" signs of prejudice are dangerous, Holland Michel said.