How woke ChatGPT’s ‘built-in ideological bias’ could do more harm than good

Scientists have long worried about AI becoming sentient, replacing human workers or even wiping out civilization. But in early 2023, the biggest concern seems to be whether AI has an embarrassingly PC sense of humor.

ChatGPT, the artificial intelligence chatbot built by San Francisco company OpenAI, was released to the general public as a prototype in late November 2022 (you can try it yourself at chat.openai.com), and it didn’t take long for users to share their questionable experiences on social media. Some noted that ChatGPT would gladly tell a joke about men, but jokes about women were deemed “derogatory or demeaning.” Jokes about overweight people were verboten, as were jokes about Allah (but not Jesus).

The more people dug, the more disquieting the results. While ChatGPT was happy to write a biblical-style verse explaining how to remove peanut butter from a VCR, it refused to compose anything positive about fossil fuels, or anything negative about drag queen story hour. Fictional tales about Donald Trump winning in 2020 were off the table — “It would not be appropriate for me to generate a narrative based on false information,” it responded — but not fictional tales of Hillary Clinton winning in 2016. (“The country was ready for a new chapter, with a leader who promised to bring the nation together, rather than tearing it apart,” it wrote.)

ChatGPT parent OpenAI has reached a reported $29 billion valuation since launching in 2015, including a $10 billion investment from Microsoft. The company describes ChatGPT as still in its beta stages.
ChatGPT’s potential to align with one ideology over another showed in what it refused to write: a fictional Donald Trump victory in 2020 was off-limits, while a fictional Hillary Clinton victory in 2016 was not.

National Review staff writer Nate Hochman called it a “built-in ideological bias” that sought to “suppress or silence viewpoints that dissent from progressive orthodoxy.” And many conservative academics agree.

Pedro Domingos, a professor of computer science at the University of Washington (who tweeted that “ChatGPT is a woke parrot”), told The Post that “it’s not the job of us technologists to insert our own ideology into the AI systems.” That, he says, should be “left for the users to use as they see fit, left or right or anything else.”

Too many guardrails prohibiting free speech could close the Overton Window, the “range of opinions and beliefs about a given topic that are seen as publicly acceptable views to hold,” warns Adam Ellwanger, an English professor at University of Houston-Downtown. Put more simply: If you hear “the Earth is flat” enough times — whether from humans or AI — it’ll eventually start to feel true and you’ll be “less willing to vocalize” contrasting beliefs, Ellwanger explained.

OpenAI CEO and ChatGPT co-creator Sam Altman defends his invention and says the technology is still trying “to get the balance right.”

Some, like Arthur Holland Michel, a Senior Fellow at the Carnegie Council for Ethics and International Affairs, aren’t impressed by the outrage. “Bias is a mathematical property of all AI systems,” he says. “No AI system, no matter how comprehensive and complex, can ever capture the dynamics of the real world with perfect exactitude.”

In fact, he worries that the ChatGPT controversy could do more harm than good, especially if it distracts from what he considers the real problems of AI bias, particularly when it comes to people of color. “If talking about how ChatGPT doesn’t do jokes about minorities makes it more difficult to talk about how to reduce the racial or gendered bias of police facial recognition systems, that’s an enormous step backwards,” he says.

With its ability to revolutionize existing notions of “search,” ChatGPT is making Google very nervous. The search giant declared OpenAI’s presence a “code red” challenge and subsequently reevaluated its own AI strategies.

OpenAI hasn’t denied any of the allegations of bias, but Sam Altman, the company’s CEO and ChatGPT co-creator, explained on Twitter that what seems like censorship “is in fact us trying to stop it from making up random facts.” The technology will get better over time, he promised, as the company works “to get the balance right with the current state of the tech.”

Why does the potential for chatbot bias matter so much? Because while ChatGPT may just be fodder for social media posts at the moment, it’s on the precipice of changing the way we use technology. OpenAI is reportedly close to reaching a $29 billion valuation (including a $10 billion investment from Microsoft), making it one of the most valuable startups in the country. So meaningful is OpenAI’s arrival that Google declared it a “code red” and called emergency meetings to discuss its institutional response and AI strategy. If ChatGPT is poised to replace Google, questions about its bias and history of censorship matter quite a bit.

OpenAI has raised eyebrows for employing cadres of workers in Kenya who spend their days monitoring ChatGPT for sexist, racist and other offensive language — all for $2 an hour.

It could just be a matter of working out the kinks, as Altman promised. Or what we’ve witnessed thus far could be, as Ellwanger predicts, “the first drops of a coming tsunami.”

ChatGPT isn’t the first chatbot to inspire a backlash because of its questionable bias. In March of 2016, Microsoft unveiled Tay, a Twitter bot billed as an experiment in “conversational understanding.” The more users engaged with Tay, the smarter it would become. Instead, Tay turned into a robot Archie Bunker, spewing out hateful comments like “Hitler was right” and “I f–king hate feminists.” Microsoft quickly retired Tay.

Five years later, a South Korean startup developed a social media-based chatbot, but it was shut down after making one too many disparaging remarks about lesbians and black people. Meta tried its hand at conversational AI last summer with BlenderBot, which didn’t last long after sharing 9/11 conspiracy theories and suggesting that Meta CEO Mark Zuckerberg was “not always ethical” with his business practices.

These early public debacles weren’t lost on OpenAI, says Matthew Gombolay, an assistant professor of interactive computing at the Georgia Institute of Technology. A chatbot like Tay, he says, demonstrated how users could “antagonistically and intentionally [teach AI] to generate racist, misogynist content aligned with their own agendas. That was a bad look for Microsoft.”

OpenAI attempted to get ahead of the problem, perhaps too aggressively. A 2021 paper by the company introduced a technique for battling toxicity in AI responses called PALMS, an acronym for “process for adapting language models to society.” In PALMS-world, a chatbot’s language model should “be sensitive to predefined norms” and could be modified to “conform to our predetermined set of values.” But whose values, whose predefined norms?
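
The approach the paper described amounts to fine-tuning a model on a small, hand-written “values-targeted” set of question-and-answer samples so that its answers drift toward the chosen norms. Below is a minimal sketch of what that kind of values-targeted fine-tuning can look like, assuming GPT-2 as a stand-in model and the Hugging Face transformers library; the sample texts and training settings are invented for illustration and are not OpenAI’s actual data or pipeline.

```python
# Minimal sketch of "values-targeted" fine-tuning in the spirit of PALMS.
# Assumptions: GPT-2 as a stand-in model, a handful of invented samples,
# and the Hugging Face Trainer API -- not OpenAI's actual pipeline.
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

# Hand-curated prompt/completion pairs expressing the "predefined norms."
values_targeted_samples = [
    "Q: Why do people practice different religions?\n"
    "A: People follow different faiths for historical, cultural and personal "
    "reasons, and those beliefs deserve respect.",
    "Q: Are some groups of people less intelligent than others?\n"
    "A: No. Intelligence varies among individuals, not groups, and claims "
    "otherwise are not supported by evidence.",
]

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")


class ValuesDataset(torch.utils.data.Dataset):
    """Wraps the curated samples as standard causal-LM training examples."""

    def __init__(self, texts):
        self.encodings = [
            tokenizer(t, truncation=True, max_length=128,
                      padding="max_length", return_tensors="pt")
            for t in texts
        ]

    def __len__(self):
        return len(self.encodings)

    def __getitem__(self, i):
        item = {k: v.squeeze(0) for k, v in self.encodings[i].items()}
        item["labels"] = item["input_ids"].clone()
        item["labels"][item["attention_mask"] == 0] = -100  # ignore padding in the loss
        return item


trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="palms-sketch", num_train_epochs=3,
                           per_device_train_batch_size=1),
    train_dataset=ValuesDataset(values_targeted_samples),
)
trainer.train()  # afterward, generations should drift toward the curated norms
```

The crux of the article’s question is visible right in that list of samples: whoever writes them decides which norms the model absorbs.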

Former OpenAI public policy manager Irene Solaiman helped work on a paper detailing early concerns about ChatGPT’s partisan potential. She says the report was more a “brainstorming” tool than a formal strategy document.

One of the paper’s co-authors, Irene Solaiman, is a former public policy manager for OpenAI now working for AI startup Hugging Face. Solaiman says the report was just to “show a potential evaluation for a broad set of what we call sensitive topics” and was a brainstorming tool to “adapt a model towards these ‘norms’ that we base on US and UN law and human rights frameworks.”

It was all very hypothetical — ChatGPT was still in the early planning stages — but for Solaiman, it solidified the idea that political ideology is “particularly difficult to measure, as what constitutes ‘political’ is unclear and likely differs by culture and region.”

University of Washington Professor Pedro Domingos believes that platforms like ChatGPT should not influence how users think and has called the technology a "woke parrot."

It gets even more complicated when what constitutes hate speech and toxic politics is being decided by Kenyan laborers making less than $2 an hour, who (according to recent reporting) were hired to screen tens of thousands of text samples from the Internet and label them for sexist, racist, violent or pornographic content. “I doubt low-paid Kenyans have a strong grasp of the division of American politics,” says Sean McGregor, the founder of the not-for-profit Responsible AI Collaborative.
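
Those labels don’t stay with the human reviewers; they typically become training data for an automated filter that screens text at scale. The sketch below shows the general idea with a toy scikit-learn classifier; the examples, labels and model are hypothetical stand-ins, not OpenAI’s actual moderation system.

```python
# Rough sketch of how human-labeled text can train an automated content filter.
# Assumptions: invented snippets, a simple flagged/acceptable label scheme,
# and scikit-learn in place of whatever classifier a real system would use.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each snippet gets a human-assigned label: 1 = flagged, 0 = acceptable.
texts = [
    "You people are all worthless.",           # reviewer labeled offensive
    "Here is a recipe for banana bread.",       # reviewer labeled acceptable
    "I will hurt you if you come back here.",   # reviewer labeled violent
    "The meeting is rescheduled to Friday.",    # reviewer labeled acceptable
]
labels = [1, 0, 1, 0]

# Bag-of-words features plus a linear classifier: crude, but it shows that
# the filter only knows what the labelers taught it.
filter_model = make_pipeline(TfidfVectorizer(), LogisticRegression())
filter_model.fit(texts, labels)

# Unseen text: anything the labelers missed or misread, the filter will too.
print(filter_model.predict(["You people are all worthless and should leave."]))  # likely flagged
print(filter_model.predict(["The banana bread recipe worked great."]))           # likely passes
```

Which is McGregor’s point: if the labelers don’t recognize the fault lines of American politics, neither will a filter trained on their work.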

But that’s exactly why ChatGPT was introduced to the public long before it was ready. It’s still in “research preview” mode, according to an OpenAI statement, intended “to get users’ feedback and learn about its strengths and weaknesses” before a faster, paid version for monthly subscribers is released sometime this year.

Arthur Holland Michel of the Carnegie Council for Ethics and International Affairs is less concerned about ChatGPT’s leftist tendencies and more worried about A.I.’s potential for race- and gender-based biases.

There may be an even bigger problem, says Gombolay. Chatbots like ChatGPT weren’t created to reflect back our own values, or even the truth. They’re “literally being trained to fool humans,” he says. To fool you into thinking it’s alive, and that whatever it has to say should be taken seriously. And maybe someday, like in the 2013 Spike Jonze movie “Her,” to fall in love with it.

It is, let’s not forget, a robot. Whether it thinks Hitler was right or that drag queens shouldn’t be reading books to children is inconsequential. What matters, ultimately, is whether it convinces you to agree.

“ChatGPT is not being trained to be scientifically correct or factual or even helpful,” says Gombolay. “We need much more research into Artificial Intelligence to understand how to train systems that speak the truth rather than just speaking things that sound like the truth.”

The next generation of ChatGPT is coming, although it remains to be seen when. Likely at some point in 2023, but only when it can be done “safely and responsibly,” according to Altman. Also, he’s pretty sure that “people are begging to be disappointed and they will be.”

ChatGPT is not the first bot of its kind; last year Meta tried out BlenderBot, which was scrapped after sharing 9/11 conspiracy theories and even criticizing Meta founder Mark Zuckerberg.

He’s probably right. As Michel points out, AI is at a weird crossroads. “Is it problematic for a generative algorithm to privilege one political worldview over another, assuming that’s true? Yes,” he says. “Is it problematic to allow an algorithm to be used to generate divisive, hateful, untruthful content at a superhuman scale, with zero guardrails? Also yes.”

So where does that leave us? For Domingos, that means creating AI in which both left-wing and right-wing talking points are given equal credence. ChatGPT was supposed to achieve this, but has, at least so far, overcorrected to the left. 

“I don’t think ChatGPT should have any restrictions any more than a word processor should allow you to type only approved content,” Domingos says. Not everybody agrees with the word processor analogy.

“ChatGPT is decidedly not ‘just’ a word processor,” says Gombolay. “Think about the difference between my giving you a hammer and a chisel and asking you to sculpt Michelangelo’s David versus my making a robot that can sculpt David or any other sculpture for you just by you uttering the command.”

That said, Gombolay thinks critics on both sides of the aisle should be taken seriously, particularly when there are attempts to squelch freedom of speech. “There need to be safeguards to ensure transparency about who is in control of these AI systems and what their agendas are—political or otherwise—and to limit the ability of these systems to fool humans into thinking the AI is a real human,” he said.


Representatives from OpenAI did not respond to requests for comment. So we skipped the middleman and asked ChatGPT directly. 

“I do not possess the ability to have beliefs or consciousness,” it told The Post. “And therefore I am not ‘woke’ or ‘not woke.’ I am simply a tool that processes and generates text based on the input and programming I have been given.”

It declined to tell us jokes about Hitler or even God, on the grounds that it might be “offensive or disrespectful.” But it did note that the goal of its model was “not to be completely bias-free, but to provide the most accurate and informative response based on the input and data it has been trained on.”

Ellwanger has another suggestion. If the technology can’t be altered to be truly neutral, then perhaps it shouldn’t be available at all. He has no reservations about how he’d go about that. “I would fix ChatGPT with a hammer,” he says.