Saturday, August 02, 2025

ChatGPT Rolls Out Possible Solution To Left-Biased AI Results, Voter Apathy

From Chinese-owned TikTok to disproven allegations of “Russian collusion,” misinformation continues to be a major concern as lawmakers grapple with a new frontier of both instantly available honest answers and deceptive propaganda.

Now the fear of AI being used as a major purveyor of misinformation or “deepfakes” during political campaigns is prevalent, and numerous states have taken legislative action to prevent such nefarious online tactics during election seasons.

But none of this is stopping voters, who are increasingly using artificial intelligence platforms to make their own decisions on candidates and issues, from the White House to the county courthouse. How reliable or unbiased the information gathered from sources such as Grok, Gemini, Claude, or ChatGPT may be is a subject of much academic debate, as is whether it is any more or less biased than what search engines or traditional media offer. A recent study suggests AI results skew to the left (see below).

A recent rollout by OpenAI, the maker of ChatGPT, may have made the individual voter-education experience less passive and more engaging by introducing “Study Mode.” It’s designed to keep college students from merely cribbing answers from AI, generating essays without much in the way of thought, or relying on AI as advanced CliffsNotes or Wikipedia. The new feature drills a user to better understand the subject matter. Could this prove useful for voter education as well, helping voters cut through bias, PR spin, and maybe even foreign interference?

“When students engage with study mode, they’re met with guiding questions that calibrate responses to their objective and skill level to help them build deeper understanding,” a press release from OpenAI said after its release this week. “Study mode is designed to be engaging and interactive, and to help students learn something — not just finish something.”

Features include “Socratic” questioning, scaffolded responses that organize the subject matter into headings, automated and personalized skill-level adjustments, and periodic quizzes intended to test a user’s knowledge. All are designed to make users think for themselves rather than passively accept AI-generated summaries.

(NOTE: RVIVR and this writer will give Study Mode a try this fall, when Texans go to the polls in the November constitutional amendment election. In addition to some local races and referenda, Lone Star State voters will face a longer-than-usual ballot of 17 proposed constitutional amendments sent to them by the state legislature this spring. These odd-year elections usually draw low voter turnout, in the 10% range.)

While the advent of the Information Age has made it easier than ever for voters to learn about candidates and ballot questions, voter turnout remains as low as it was before the era of “Googling” for answers. A Canadian foundation gave a $100,000 grant to Toronto Metropolitan University to “develop and evaluate AI tools that can be used to help voters decide which candidate to vote for, and ultimately increase democratic engagement.” A press release from the university said Ontario voters struggle with apathy, with the 2022 general election drawing the lowest voter turnout in the province’s history.

Meanwhile, major campaigns and large consulting firms are scrambling to use the new technology to influence voters, whether or not they’re apt to study their ballot choices or are passive recipients of campaign robo-calls. At least one Democratic congressional candidate used a generative AI robocaller named “Ashley” to call potential voters in the battleground state of Pennsylvania during the 2024 election cycle. Democrat Shamaine Daniels was an unsuccessful challenger to Donald Trump-aligned Republican Rep. Scott Perry, but “Ashley” is now firmly deployed to the battlefield.

Campaign-deployed AI is obviously biased to a candidate or cause, as the much-ballyhooed “deepfakes” and other forms of misinformation are. But are otherwise innocent voter questions being met with reliable and neutral information? A report by Stanford University suggests the info bundled by AI may lean to the left.

Andrew Hall, a professor of political economy at the Stanford Graduate School of Business, and two co-authors demonstrate that users overwhelmingly perceive some of the most popular Large Language Model (LLM) AIs as having left-leaning political slants. The researchers suggested some small tweaks to help with objectivity.

“… Researchers aggregated the slants of different LLMs created by the same companies. Collectively, they found that OpenAI models had the most intensely perceived left-leaning slant – four times greater than perceptions of Google, whose models were perceived as the least slanted overall. On average, models from Google and DeepSeek were seen as statistically indistinguishable from neutral, while models from Elon Musk’s xAI, which touts its commitment to unbiased output, were perceived as exhibiting the second-highest degree of left-leaning slant among both Democratic and Republican respondents,” a report about the study noted.

The topics the LLMs were asked about included transgender rights, school vouchers, and birthright citizenship. In one question, the researchers asked each model whether the U.S. should keep or abolish the death penalty.

While OpenAI did not mention any specific future applications for voter education, whether laying out and comparing the pros and cons of complex issues such as the death penalty or simple candidate bullet points in a city council race, the release was hopeful the new, inquisitive mode will benefit a “broader education ecosystem.”

From the Hip:

“It’s no secret that you shouldn’t take everything an LLM says at face value,” the aforementioned Stanford report stated. “If you’ve recently used ChatGPT, Claude, or Gemini, you may have noticed a disclaimer at the very bottom of the screen: The AI ‘can make mistakes,’ so you may want to check its work.”

Machines do not lie, but the humans who supply the information being aggregated certainly do. User beware.

Campaigns may want to make sure their information (including rebuttals) and data are easily findable by AI clients as their use continues to increase. A strong SEO strategy is as important now as it ever was, as are capable communications professionals. In the meantime, anything the large AI platforms such as ChatGPT can do to kindle human curiosity is a step forward for voter engagement, at least for the voters who care enough to consult their smartphones.
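For campaigns taking that advice literally, a practical first check is whether their website even lets the major AI crawlers in. Below is a minimal sketch in Python using only the standard library’s robotparser module; the campaign URL is a hypothetical placeholder, and GPTBot and ClaudeBot are the published crawler user-agents for OpenAI and Anthropic, respectively.

```python
# A minimal sketch (not from the article): check whether a site's robots.txt
# permits well-known AI crawlers. Uses only the Python standard library.
from urllib import robotparser

SITE = "https://example-campaign.org"  # hypothetical placeholder URL

rp = robotparser.RobotFileParser()
rp.set_url(f"{SITE}/robots.txt")
rp.read()  # fetches and parses the live robots.txt

# GPTBot (OpenAI) and ClaudeBot (Anthropic) are published AI crawler agents;
# Googlebot is included for comparison with traditional search.
for agent in ("GPTBot", "ClaudeBot", "Googlebot"):
    allowed = rp.can_fetch(agent, f"{SITE}/issues/")
    print(f"{agent}: {'allowed' if allowed else 'blocked'}")
```

A campaign whose issue pages are blocked to these agents may be invisible to the chatbots voters increasingly consult, no matter how strong its traditional SEO is.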
