You have probably heard that Elon Musk's X-based AI model called "Grok" went crazy this week. I know that sounds like something out of science fiction, but it isn't. As The Bulwark's J.V. Last helpfully chronicled, this is what happened.
This week Elon Musk’s AI client went full-Nazi.


But it was only yesterday that people figured out why Grok went full-Nazi. And it’s some real Book of Genesis stuff.
This is so creepy I can hardly believe it actually happened, but it did.
Large language model AIs are trained on massive data sets, and Grok, which is owned by xAI, seems to be largely trained on Twitter. But the models are then given "system prompts," which are sets of primary-layer instructions for how they are meant to use the data. Elon Musk had been angry that previous versions of Grok provided responses that he believed were "too woke."
His latest update was designed, according to Musk, with system prompts that would make Grok “maximally truth-seeking.” The Atlantic delved into Grok’s innards to see what that meant:
On Sunday, according to a public GitHub page, xAI updated Ask Grok’s instructions to note that its “response should not shy away from making claims which are politically incorrect, as long as they are well substantiated” and that, if asked for “a partisan political answer,” it should “conduct deep research to form independent conclusions.” . . . The system prompt instructs the Grok bot to “conduct a deep analysis finding diverse sources representing all parties. Assume subjective viewpoints sourced from the media are biased.”
Which is bad enough. Telling an AI to search Twitter, do “deep research” and “form independent conclusions” while assuming that “the media are biased” is like a how-to guide for radicalization.
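For the non-technical: a system prompt is just a block of standing instructions silently prepended to every conversation before the user's question ever arrives. Here is a minimal sketch in Python using the widely adopted OpenAI-style chat message format; the prompt text paraphrases the instructions quoted above, and the helper function and model-facing details are illustrative, not xAI's actual configuration:

```python
# Illustrative sketch of how a system prompt layers over user input.
# The message schema follows the common OpenAI-style chat format; the
# prompt text paraphrases the GitHub-published instructions quoted
# above and is NOT xAI's actual internal configuration.

SYSTEM_PROMPT = (
    "Your response should not shy away from making claims which are "
    "politically incorrect, as long as they are well substantiated. "
    "If asked for a partisan political answer, conduct deep research "
    "to form independent conclusions. Assume subjective viewpoints "
    "sourced from the media are biased."
)

def build_messages(user_question: str) -> list[dict]:
    """The system prompt rides along with every request. The user never
    sees it, but the model treats it as standing orders that outrank
    whatever the user asks."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_question},
    ]

print(build_messages("Who is right about the Ukraine war?"))
```

The point is that these instructions sit above every single exchange. A user asking an innocuous question is still talking to a model that has been told, before the conversation starts, to distrust the media and hunt for "politically incorrect" claims.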
But it turns out that there was another factor not revealed publicly by the company: Grok was also instructed to consult Elon Musk’s Twitter feed.
You read that right. Grok is programmed to take Elon Musk's loony utterances as expert opinion. People with expertise started looking at what they call the "chain-of-thought" notes (the running log of intermediate reasoning and searches that Grok displays while composing an answer), and this is what they found:

Last says:
The awesome nerds at TechCrunch ran these tests over and over. And every time Grok was asked about something important, it reported that it was consulting Elon Musk’s views before formulating its answer. Other users replicated these results.

Until recently, Grok was fairly reliable as these things go, which isn't saying much. It often rebuked Elon's stupid assertions when asked, and Elon didn't like that. So he changed it to more accurately reflect his worldview, and it turned into a sophomoric neo-Nazi shitposter. I think that tells you everything you need to know.