I've accidentally backed myself into a career as a content creator and need your help figuring out what to do next. Please take this short survey, even if you don't subscribe! bit.ly/4s4drzz
i want one that says PRIMCETON
watch here! open.substack.com/live-stream/...
going live on substack w/ @lioneltrolling.bsky.social at 2
yeah exactly. one reason he keeps talking about grok's basedness is that it's the only area in which grok is plausibly on the cutting edge. unfortunately probably not an entirely terrible recruiting strategy, lots of ai guys would love to work on the cutting edge of racism, but not good enough.
this report cites "burnout" and "musk's erratic behavior" but that's always been true of his management style—the big difference between an xai worker in 2026 and a spacex worker in 2016 is that there's somewhere else the xai worker can go that will pay them as much to do basically the same thing
the conditions that made elon successful at tesla and spacex and starlink (zirp, nonexistent sectoral competition, not addicted to ketamine) no longer hold true. its hard to be a failure at $700b net worth but i dont see xai keeping pace
watching a bunch of oai and anthro employees on twitter confront the reality of power/law/bureaucracy/politics in the wake of the DoD stuff last week is interesting. i dont think frontier lab workers have a great sense of 'the real world' and their predictions w/r/t it should be treated w/ skepticism
important to read max on the silicon valley political divide between awful nerds and evil nerds
funny memory about this story: Balaji reached out privately to say how much the boys at a16z loved it, after which they all spent the next few years letting Twitter drive them completely insane www.nytimes.com/2018/08/15/m...
Cry "Havoc!" and let slop the dogs of war
write this piece!!
not for nothing but the insistence that graham platner is "the new fetterman" feels like a misunderstanding of both platner and fetterman
Yes they really sincerely believe they're raising gifted, precocious children. This colors everything from how they think about "personas" (which may or may not exist) to who they hire to what contracts they seek
The best thing I’ve read on the Anthropic dispute by far 👇
The saint of bright doors!
What this does to military A.I. capabilities is beyond the brief of this newsletter, except to say that I think it’s “bad” for Grok, the pedophile mechahitler A.I., to be involved with weapons in really any way. What I am interested in, here, is what this reveals about the state of politics in Silicon Valley. In a sentence, I think what’s happening is (1) basic (i.e. normal) cutthroat competition between rival firms for government contracts, which is both driving and being driven by (2) an open and ongoing political-ideological dispute between two factions of Silicon Valley capital, which is in turn informing and being informed by (3) an almost religious disagreement about the nature of the god being built on the computer.
To start, it seems quite obvious that the Tech Right--a bloc of right-wing, Trump-aligned executives, investors, podcasters, Twitter personalities, firms, and companies, among them Palantir’s Joe Lonsdale and Alex Karp, Anduril’s Palmer Luckey, and, of course, xAI’s Elon Musk--with its extensive links to the administration, has been exerting behind-the-scenes pressure on Hegseth and the Pentagon to sever ties with or otherwise punish Anthropic. It was a Palantir executive, after all, who snitched on Anthropic to the D.o.D., and Hegseth’s speech in January about “objectively truthful AI capabilities” was a close echo of Musk’s ramblings about his “maximally truth-seeking” model Grok. The Tech Right’s contempt for Anthropic is first and foremost financial in nature. Musk, obviously, would like xAI to be first in line for any government contracts. (Indeed, Hegseth announced a deal with xAI this week to use Grok under the Pentagon’s preferred “all lawful use” terms.) And I suspect Palantir, Anthropic client though it may be, has the same existential fear of Claude as McKinsey or Salesforce or any other consultancy or software-as-a-service provider. If Anthropic is aggressively courting the D.o.D. to contract directly, and if Claude is as good as everyone thinks, what does Palantir’s future as a data-analytics-in-camo platform actually look like?
This doesn’t necessarily separate him from any other Silicon Valley liberal. But I think it’s good to attend to the valence of his liberalism. Amodei, like most of the Anthropic executives and many people in A.I. in general, has long been associated with the worlds of Bay Area Rationalism and Effective Altruism--wonkily utilitarian philosophical and philanthropic practices focused on self-described rationalist inquiry and self-improvement. Bay Area Rationalism is a loose and diverse movement, containing a host of political perspectives, but it’s always had a particular concern with moral philosophy as it relates to the expected development of artificial superintelligence. To be a Rationalist liberal democrat (small-L small-D), e.g., might mean orienting your liberal democrat-ness toward its practical applications around the eschatological scenario of hard-takeoff A.G.I.
I don’t mean to suggest that Amodei’s commitments to liberal democracy are inauthentic. More that, as far as he is concerned, the stakes of this commitment go well beyond his own moral or ethical culpability. The decisions he makes now, and his consistent fidelity to his espoused beliefs, could mean the difference between a benevolent computer god and a wrathful one.

Helen Toner (@hlntnr): “One thing the Pentagon is very likely underestimating: how much Anthropic cares about what *future Claudes* will make of this situation. Because of how Claude is trained, what principles/values/priorities the company demonstrate here could shape its ‘character’ for a long time.”

Quoting Andrew Curran (@AndrewCurran_): “Update on the meeting; according to Axios Defense Secretary Pete Hegseth gave Dario Amodei until Friday night to give the military unfettered access to Claude or face the consequences, which may even include invoking the Defense Production Act to force the training of a WarClaude” (Feb 25, 2026)

And this has placed him, and Anthropic, on a collision course with the Tech Right. Musk, too, believes he is bringing superintelligence into existence at xAI. But for him the urgent imp
one way of seeing anthropic vs. the pentagon is as a fissure between the two silicon valley tribes most enthusiastic about ai: "rationalists" and "accelerationists"
maxread.substack.com/p/what-anthr...
probably the best racist writer of racist horror since hp lovecraft
interested to read when it’s out!
this is all very on point and it reminds me how many protests were going on and bubbling up around Cambridge MIT Google etc and the Tech Workers Coalition circa 2018/19, project maven. not mentioned here but the misogyny, the Epstein stuff, mit media lab, was also mixed into this around then
Dying at the idea that the Pentagon gave Dario Amodei a hypo that was essentially the dril woke sniper tweet
This hadn’t gone exactly how they’d have liked. Bloomberg recently reported that in December “a senior US defense official posed a hypothetical scenario” to Amodei: What if a nuclear-armed intercontinental ballistic missile were hurtling towards the US with only 90 seconds to spare, and Anthropic’s AI were the only way to trigger a missile response to save the country, but the company’s safeguards wouldn’t allow it, the senior official mused in a December phone call. “Call me,” was how Pentagon officials interpreted Amodei’s answer, according to another senior defense official briefed on the discussion, who described being astounded by the billionaire’s response. LOL. Our beautiful generals have many medals and some even have battlefield experience, but I can say with some confidence that they have never engaged with a Rationalist online and are deeply unprepared for what it means to pick a fight with one.
[Dario Amodei Bane voice] “Oh, you think stupid elaborate hypotheticals are your ally. But you merely adopted elaborate and weirdly specific hypothetical scenarios; I was born in them, molded by them. I didn’t see a normal argument until I was already a man, by then it was nothing to me but BLINDING!”
one of the funniest things to keep coming out of the anthropic reporting is that the pentagon was trying to convince amodei by proposing elaborate hypothetical scenarios. buddy do you think an EFFECTIVE ALTRUIST has never considered a bizarrely specific and elaborate hypothetical scenario??
Here’s a dumb little blog about the experience of witnessing Wemby in person: defector.com/victor-wemba...
nets game is packed and energy is completely dead. only solution: bring back the Brooklyn knight. The people want him back
RIP tren-addled crypto kick streamer gold chain zuck 2024-2026
still hits
king !!