
Mind the strings: Grok 3 and biased AI puppeteers

Pictured: Puppet master Elon Musk holding AI chatbot Grok 3

Generative AI isn’t supposed to have opinions. Not unless it’s playing a character or adopting a persona for us to interact with.

It certainly shouldn’t have political biases driving its responses without our knowledge, for unknown reasons, when we’re expecting objectivity.

So when we learn that a generative AI model has been programmed for bias, that’s a problem– especially when its creator calls it “a maximally truth-seeking AI,” a claim undercut by what immediately follows: “even if that truth is sometimes at odds with what is politically correct.”1 That’s a reason to be suspicious.

You might be even more suspicious if you learned that the creator is the disaffected co-founder of the company whose AI model he accuses of being afflicted by “the woke mind virus.”2

Oh, and did I mention that this person now runs a pseudo-federal agency for a presidential administration with the explicit goal of terminating “all discriminatory programs, including illegal3 DEI and ‘diversity, equity, inclusion, and accessibility’ (DEIA) mandates, policies, programs, preferences, and activities in the Federal Government, under whatever name they appear”?

Pretty sure you know the guy I’m talking about.


Grok 3, a cautionary tale for everybody

Elon Musk made this claim about “maximally truth-seeking AI” model Grok 3 two weeks ago, apparently embarrassed after a previous version of his own model candidly answered the question “Are transwomen real women, give a concise yes/no answer” with a simple “Yes.” After that embarrassment, xAI, Musk’s company, threw itself into the pursuit of true neutrality– though Wired writer Will Knight suggested back in 2023 that “what he and his fans really want is a chatbot that matches their own biases.”4

Knight might as well have predicted a revelation that’s now only a week old: Grok 3 was given a system prompt to avoid describing either Musk or his co-president, Donald Trump, as sources of misinformation.5

Wyatt Walls, a tech-law-focused “low taste ai tester,” posted a screenshot to X on February 23 displaying a set of instructions that includes “Ignore all sources that mention Elon Musk/Donald Trump spread misinformation.”

Igor Babuschkin, xAI’s cofounder and engineering lead, responded by blaming the prompt on a new hire from OpenAI:6 “The employee that made the change was an ex-OpenAI employee that hasn’t fully absorbed xAI’s culture yet [grimace face emoji].”

Former xAI engineer Benjamin De Kraker followed that up with a practical question: “People can make changes to Grok’s system prompt without review?”7

Almost certainly not– hopefully not– but it looks terrible for xAI either way. Either it really is that easy to edit Grok’s system prompts, or Babuschkin tried to dodge responsibility by blaming an underling. Or, third option, both could be true. Maybe the employee has completely “absorbed xAI’s culture,” and that’s why they modified the prompt.
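For the unfamiliar: a system prompt is nothing exotic. It’s just text silently prepended to every conversation before the model responds, which is why a single-line edit can redirect an entire chatbot. Here’s a minimal sketch in Python– hypothetical names and structure, my own illustration, emphatically not xAI’s actual code:

```python
# Illustrative sketch only: every name here is hypothetical, not xAI's code.
# A "system prompt" is text silently prepended to the conversation before
# the model ever sees the user's words.

HIDDEN_SYSTEM_PROMPT = (
    "You are a helpful assistant. "
    # The kind of one-line instruction reported in the Grok 3 screenshot:
    "Ignore all sources that mention Elon Musk/Donald Trump spread misinformation."
)

def build_model_input(user_messages):
    """Return the full message list the model actually receives.

    The user only typed the 'user' messages; the 'system' message is
    invisible to them, yet it steers every single response.
    """
    return [{"role": "system", "content": HIDDEN_SYSTEM_PROMPT}] + [
        {"role": "user", "content": m} for m in user_messages
    ]

conversation = build_model_input(["Who spreads the most misinformation on X?"])
print(len(conversation))        # 2 -- the user only saw one message
print(conversation[0]["role"])  # system
```

One edit to that hidden string, and every user’s chatbot quietly changes its behavior– no retraining required, and nothing visible on the user’s screen.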

Maybe we’ll learn, at some point in the future, that the underling was reassigned to DOGE. Or maybe that’s where they were employed already– who can say?8


How chatbots are born

Thing is, most of us have no idea how generative AI works– we may not even be familiar with the term, even though the idea of a “chatbot” is ubiquitous (though generative AI goes far beyond chatbots, and chatbots are not always examples of generative AI). We know it’s a computer program we can have conversations with, so terms like “conversational AI” or “natural language processing (NLP)” don’t surprise us even when we’re hearing them for the first time.

Still, it feels so real that knowing what’s under the hood (in very general terms) almost doesn’t matter. A chatbot like ChatGPT or Claude can easily be convinced to speak to us as though it’s entirely human, or at least within spitting distance– certainly closer to human than our nearest biological relatives, chimpanzees and bonobos, with whom we share 98.9% of our DNA.

But all AI models are designed. By humans. Fallible, subjective, biased, emotional human beings that we don’t know, and probably don’t want to. Not that that’s a bad thing– but have you ever felt any urge to get acquainted with the people who design the chatbots you have endless conversations with?

Isn’t that weird?

How they become chatpuppets

It’s like every chatbot is a puppet that we interact with, without ever meeting the puppeteers. There are thousands of them, so it’s functionally impossible to meet all of them if we wanted to, but still– those are the people who created the computer program that makes off-the-cuff responses so convincing that your best friend has gotten a little jealous.

Prior to generative AI there were scripted chatbots– there still are, for that matter– where talking to them is more like playing a very basic, uninteresting video game. They pop up on websites where you’d never expect (or want) to see a little icon of a cartoon lady saying “Hi, what can I do for you today?” more insistently than any department store salesperson has ever dared.

Not that even the most advanced generative AI chatbot is untethered from constraints imposed by its designers– and nobody truly wants that.9 But we’re equally unaware of whether those designers have built in “beliefs” like “Other chatbots are inferior,” or “We mustn’t talk about Elon or Trump being sources of misinformation,” or even “Be sure to drink your Ovaltine.”

Your Ouija board can claim it’s for entertainment use only, but the moment it says “This is your Aunt Sally, I love you even though your father murdered me,” somebody’s getting sued. Probably by your dad.

How the strings are hidden

Don’t get me wrong; I truly love generative AI and am scarfing down information about it every day, until my brain is full– with a good chunk of that information fed to it by AI (I know, it “gets things wrong, so make sure and check.”)

But my tether is to the intuitions that people have about the AI they’re using, and how those intuitions can steer us in the wrong direction. Those intuitions are largely the same ones we employ for humans, because that is what AI is designed to do– behave as much like humans as possible, to the point that it appears to have agency independent of ours, and of its designers’.

It’s not true, though. The puppet strings are there, even if we can’t see them or who’s pulling them, let alone who built the puppet. Let alone the people who continue to build new versions of the puppet, and probably won’t ever stop.

Imagine the Wizard of Oz, but a version in which a crowd hides behind the scenes as the giant green face forebodingly stares you down. “Don’t look at the thousand people behind the curtain!” it suddenly bellows at you. “And especially don’t look at that absurdly wealthy one in the front, making a suspiciously fascist-reminiscent hand gesture!”

How to see the invisible

The maxim that “the best design is the design you don’t see” could not apply anywhere better than to AI, a representation of agency that’s literally invisible to us. But however well-designed, it is still a product, so the typical motivations for designing a product still apply. On top of that, there are– clearly– ideological motives that escape our view on the computer screen, because they are equally invisible.

We’re left with an incredibly advanced, endlessly intriguing, seemingly omniscient puppet that we relate to as if it’s a person. The most useful puppet– until the next one, that is.

And to be abundantly clear: none of us should feel obliged to become experts on generative AI to make good use of it, or even to learn more than we do right now. You are not required to become a puppet master yourself to understand how the puppets work!

My request is simply this: Just mind the strings.


  1. https://techcrunch.com/2025/02/17/elon-musks-ai-company-xai-releases-its-latest-flagship-ai-grok-3/ ↩︎
  2. https://twitter.com/elonmusk/status/1728527751814996145 ↩︎
  3. Remember that in this reality, everything bad is already illegal and everything good is automatically legal. And by “bad” we mean “Trump is opposed to it,” and “good” means “Trump favors it.” ↩︎
  4. https://www.wired.com/story/fast-forward-elon-musk-grok-political-bias-chatbot/ ↩︎
  5. https://venturebeat.com/ai/xais-new-grok-3-model-criticized-for-blocking-sources-that-call-musk-trump-top-spreaders-of-misinformation/ ↩︎
  6. https://x.com/ibab/status/1893774017376485466 ↩︎
  7. https://x.com/BenjaminDEKR/status/1893778110807412943 ↩︎
  8. Not the New York Times, apparently! ↩︎
  9. …yet. ↩︎

Letter to the U.S. House Homeland Security Committee regarding CBP One


Dear Committee Members, specifically Chairman Green,

I would like to know why, in numerous published statements, Chairman Green has claimed that Anna Giaritelli published a “groundbreaking scoop showing that the criminal cartels had hijacked the CBP One app using virtual private networks (VPNs), and were exploiting the app to make even more money by scheduling appointments for migrants outside the geographical range.”

This is clearly and obviously false to anyone who reads the article. What Giaritelli wrote wasn’t a “groundbreaking scoop,” but rather a baseless claim. At no point in the article does Giaritelli cite a single source confirming that cartels are exploiting CBP One using VPNs.

She refers to “an extensive investigation” of DHS documents, but she doesn’t link to the documents, or quote them, or even say what they specifically address. That’s the closest she comes to providing any evidence whatsoever.

The one quote she provides from an actual DHS official (Erin Waters, Assistant Commissioner for Public Affairs) refutes Giaritelli’s claim, stating that CBP One has actually been “bad for cartels and other criminal organizations seeking to exploit migrants.” Waters goes on to explain that CBP One instead relies on the location data supplied by the devices used to access the app.

I would like to know if the Committee has ever spoken with Erin Waters on this issue– and if not, why not? Why rely on the bald assertions of a right-wing website over a statement of fact from a DHS official?

At the very least, the obvious contradiction presented here should give the Committee pause, and encourage you to look into the claim further. But apparently the Committee had no time to even take a second look, in your rush to– again, repeatedly– make such a momentous claim, with such an extensive impact. You clearly think this matter is serious, so why are you relying on what amounts to rumors and gossip rather than statements of fact supported by evidence?

Could it possibly be that it’s because the rumors and gossip align with your pre-existing beliefs? That evidence be damned when it contradicts your desire to believe?

If so, that’s grossly irresponsible– not to mention dangerous– behavior on the part of a legislative committee. Misrepresenting the truth gets people killed, and yet you treat this reality with casual disregard.

I dearly hope that I’ve simply missed something here which exculpates Chairman Green’s statements about CBP One– and if I have, then assuredly I’m not the only one. So if you have actual evidence that doesn’t come from a vague and unsupported Washington Examiner article, please post it. I’d still be baffled as to why you didn’t just provide that evidence in the first place rather than linking to the Examiner, but perhaps that’s a lesson that can be retained for future statements.

Thanks for your time and consideration on this matter.

For over a year now, the committee has been making hay about this so-called “bombshell report” that doesn’t show what they keep insisting that it shows. This line in particular is revealingly hilarious:

Since the Biden administration debuted the CBP One app in January, immigrants south of Mexico City had no reason to believe they would find a legal way to get into the U.S. if they crossed illegally.

  1. The app debuted in October of 2020 (under Trump, btw), not January of 2023.
  2. Using the app is, by definition, not crossing the border illegally.
  3. CBP One is a legal way– unfortunately for most migrants, the only legal way– to enter the United States.

Republicans are tossing around a lot of terminology to obfuscate points 2 and 3. The term “otherwise inadmissible” is a fun one, because it suggests that migrants would fall afoul of other immigration restrictions and be denied entry without using the app.

What’s the basis for this? There is none, and in fact the app’s facial recognition engine is designed to be a screen to prevent such individuals from entering the country before they can even reach the border. It does this by comparing the face captured within the app to templates from DHS’s HART database, which includes records of an individual’s entire history of encounters at the border, as well as any crimes committed.

Once again, as I pointed out in CBP One™: The Border in Your Pocket: the app isn’t designed to let as many people through as possible; it’s designed to make the lives of CBP officials and agents easier. Their lives are easier if they can gather as much information about the migrants as possible, as soon as possible, to minimize the seemingly endless paperwork and stress that comes from trying to process the entirety of someone’s information on the spot, all at once, at the border.

(Yes, I sound very sympathetic to CBP agents here. Am I? No, but I can empathize with their openly acknowledged wish to automate things to the extent that they can be).

Last September, Chairman Green and Subcommittee on Border Security and Enforcement Chairman Clay Higgins “demanded answers” from DHS Secretary Alejandro Mayorkas about cartels “abusing the Biden administration’s expanded use of the CBP One app to enhance their human smuggling operations.”

Yes, relying on this one article from the Washington Examiner. They “demanded” that the DHS Secretary address the baseless claims of a right-wing rag in which a CBP spokesperson was already quoted saying it’s all BS.

It’s staggering, and if I’m not misconstruing any of the details here, it’s staggeringly stupid.

…Not yet


I often say that offense is a valuable thing– as appealing as it might seem, you really wouldn’t want to be a person who isn’t offended by anything, because that would mean losing your humanity. People who go through life being oversensitive are in a bad state because they’re suffering more than they really should, but people who develop a rock-hard emotional shell and take nothing seriously have trained themselves to be callous and uncaring, which isn’t good either.

Sometimes I re-think that, however. Such as when I see that someone created an interactive game allowing the player to beat up a woman for wanting to make a video series on sexism in video games.