Our tech overlords decree it

Palantir CEO Alex Karp spoke with CNBC on Thursday about Iran, the Pentagon’s fight with Anthropic (over killer robots), and tech giants’ vision for America’s A.I.-fueled future.
Karp warns that the technology is incredibly disruptive to the economic and political lives of a large swath of mostly female and Democratic voters, transferring their power to vocationally trained, mostly male voters (quoted in The Ink):
The one thing that I think even now is underestimated by all actors in industry, including in Silicon Valley, is how disruptive these technologies are. If you are going to disrupt the economic and therefore political power significantly of one party space — highly educated, often female voters, who vote mostly Democrat — and military and working-class people who do not feel supported, and you feel like that… you believe that’s going to work out politically, you’re in an insane asylum.
You cannot… this technology disrupts humanities-trained — largely Democratic — voters, and makes their economic power less. And increases the economic power of vocationally trained, working-class, often male, working-class voters. And so these disruptions are gonna disrupt every aspect of our society. And to make this work, we have to come to an agreement of what it is we’re going to do with the technology; how are we gonna explain to people who are likely gonna have less good, and less interesting jobs from their perspective.
If Karp has ideas about how “we” come to such an agreement, he isn’t saying. These technologies are “dangerous societally,” Karp warns, and the only justification for the mass dislocation they bring that might sell politically is that if we don’t control them militarily, our adversaries will. And that would threaten “our ability to be American.”
While the technologies control friend and foe alike?
Meanwhile, OpenAI’s Sam Altman told BlackRock’s U.S. Infrastructure Summit on Wednesday that “One of the most important things in the future is that we make intelligence, to borrow an old phrase from the energy industry that didn’t quite work: ‘Too cheap to meter.’” But someone will meter it, to be sure, and sell the sum of human intelligence (which the A.I.s appropriated for free) back to you. That is, they’ll privatize knowledge like everything else.
What kind of world is that, where intelligence becomes a utility? The Ink asks:
Maybe it’s the one that political scientists Stacie E. Goddard and Abraham Newman have described as “neoroyalism”: a post-everything world vision that Donald Trump and his oligarchic enablers seem to share, under which a new class of kingly rulers own everything and extract their wealth from the rest of humanity, who simply rent whatever they use as they work piecemeal in gig employment. Or they’re warfighters, sacrificing for that kingly vision of nations, whoever’s “rule of law” they happen to fall under.
And subsidized and guaranteed by you, the nuevo poor taxpayer, suggests Gizmodo:
The way Altman is talking, suggesting that intelligence could be a utility, it’s hard not to recall previous comments from him and OpenAI CFO Sarah Friar calling on the federal government to essentially guarantee their investments. Friar said she expects a federal “backstop” to guarantee the company will be able to finance its massive and rapidly expanding data center infrastructure. Altman echoed the comments in a separate appearance, stating, “Given the magnitude of what I expect AI’s economic impact to look like, I do think the government ends up as the insurer of last resort.”
The execs later walked back the suggestion that the government treat them as “too big to fail,” but it seems like Altman is once again dabbling in that suggestion, albeit less directly. In framing intelligence as a “utility,” he tacitly acknowledges that it will need to be subsidized by the government, the way other utilities are. He’s just seemingly left out that particular part of his roadmap to the future.
All hail our new tech overlords!