
Peter Wildeford

@peterwildeford

Globally ranked top 20 forecaster, former data scientist. As seen on TV! The Daily Show, Good Morning America. Protecting liberty and prosperity in the age of superintelligence.

2,842 Followers
200 Following
471 Posts
Joined 11.04.2023

Latest posts by Peter Wildeford @peterwildeford

πŸ”΄Rep Johnson (SD)
πŸ”΅Rep Liccardo (CA)
πŸ”΄Rep Kiley (CA)
πŸ”΅Rep Lieu (CA)
πŸ”΄Rep Mace (SC)
πŸ”΅Rep Moulton (MA)
πŸ”΄Rep Moran (TX)
πŸ”΅Rep Sherman (CA)
πŸ”΄Rep Paulina Luna (FL)
πŸ”΅Rep Tokuda (HI)
πŸ”΄Rep Perry (PA)
πŸ”΅Rep Whitesides (CA)

06.03.2026 14:44 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

πŸ”΄Sen Lee (UT)
πŸ”΅Sen Sanders (VT)
πŸ”΄Sen Lummis (WY)
πŸ”΅Sen Schumer (NY)
πŸ”΄Rep Biggs (AZ)
πŸ”΅Rep Beyer (VA)
πŸ”΄Rep Burleson (MO)
πŸ”΅Rep Casten (IL)
πŸ”΄Rep Crane (AZ)
πŸ”΅Rep Foster (IL)
πŸ”΄Rep Dunn (FL)
πŸ”΅Rep Krishnamoorthi (IL)
(continued)

06.03.2026 14:44 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

30 current members of Congress have publicly discussed AGI, AI superintelligence, AI loss of control, recursive self-improvement, or the Singularity:

πŸ”΄Sen Banks (IN)
πŸ”΅Sen Blumenthal (CT)
πŸ”΄Sen Blackburn (TN)
πŸ”΅Sen Hickenlooper (CO)
πŸ”΄Sen Hawley (MO)
πŸ”΅Sen Murphy (CT)
(continued)

06.03.2026 14:44 πŸ‘ 5 πŸ” 0 πŸ’¬ 1 πŸ“Œ 1

Agree that's key. It's obviously harder to measure, but it seems to be increasing at a roughly similar rate from a lower base.

01.03.2026 20:20 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

x.com/sama/status/...

01.03.2026 03:14 πŸ‘ 5 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

But people should know what the "red lines" rest on, which is just "trust us bro". Nothing else.

01.03.2026 03:14 πŸ‘ 10 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0

I expect Sam Altman has a much better relationship with the Pentagon, so maybe this will work. I certainly wish him and OpenAI luck and I hope they can de-escalate the situation.

01.03.2026 03:14 πŸ‘ 6 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

And recall that this is the same Pentagon that just went nuclear, 0 to 60, in declaring Anthropic a supply chain risk, despite this previously being a Cold War national security designation normally used only for Chinese and Soviet companies.

01.03.2026 03:14 πŸ‘ 8 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0

To emphasize - OpenAI's "red lines" are just held together by trust that the Pentagon won't screw OpenAI over on this.

01.03.2026 03:14 πŸ‘ 8 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

This is one way to go about it, and it's OpenAI's right to decide how to do business. But this is a lot less reassuring than what OpenAI had originally been saying. They had said that their approach was more ironclad than Anthropic's, and it's just... not.

01.03.2026 03:14 πŸ‘ 9 πŸ” 1 πŸ’¬ 2 πŸ“Œ 0

I asked Sam Altman this question, and the way I interpreted his reply was that they are going to use the "deployment architecture and safety stack" and they expect the Pentagon to be good people and not push back. And if they do push back, then OpenAI would decide what to do.

01.03.2026 03:14 πŸ‘ 8 πŸ” 0 πŸ’¬ 2 πŸ“Œ 0

The Pentagon can just say "we both know your model can do this, you should remove that safeguard". And then OpenAI would have to comply or be sued.

01.03.2026 03:14 πŸ‘ 8 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

The way OpenAI bridges this is by saying the protections live in this "deployment architecture and safety stack" rather than the contract language. But if this contract says "all lawful purposes" and your safety stack prevents a lawful purpose, you're in breach of contract.

01.03.2026 03:14 πŸ‘ 10 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

OpenAI is trying to claim simultaneously that (a) their contract with the Pentagon allows for "all lawful purposes" and (b) also that their red lines are fully protected.

01.03.2026 03:14 πŸ‘ 20 πŸ” 2 πŸ’¬ 1 πŸ“Œ 1
The Pentagon's War on Anthropic: The Pentagon has a legitimate principle, and a terrible strategy for enforcing it

The Pentagon has a legitimate principle that private companies shouldn't hold moral vetoes over military doctrine.

But they agreed to the contract. And now they're using unprecedented + disproportionate coercion. This should trouble everyone.

My latest - peterwildeford.substack.com/p/the-pentag...

27.02.2026 14:28 πŸ‘ 10 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0
Cold War lessons for the AI era

Are there Cold War lessons to learn for AI?

We had very fierce competition with the Soviets and did not trust them at all, but we were still able to make mutually verified treaties.

In Politico today I'm quoted saying we should do the same with China β†’ www.politico.com/newsletters/...

26.02.2026 22:42 πŸ‘ 5 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Adversaries can tamper with or poison leading US models. There can also be risks from insider threats, including potentially the AIs themselves.

Dave Banerjee at IAPS has a roadmap for how to defend → www.iaps.ai/research/ai-...

25.02.2026 16:26 πŸ‘ 3 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

AI is a real thing

23.02.2026 23:57 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Yeah, I agree - that's a good point.

Maybe you'd see major progress on CAIS's Remote Labor Index or OpenAI's "OpenAI Proof Q&A"?

23.02.2026 22:20 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Agree - probably measurement noise (in both estimates)

23.02.2026 20:54 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

16. Joseph Sifakis (Turing Award '07)
17. John C. Mather (Physics '06)
18. Frank Wilczek (Physics '04)
19. Joseph Stiglitz (Economics '01)
20. Andrew Yao (Turing Award '00)

*The Turing Award is equivalent to the CS Nobel

23.02.2026 20:48 πŸ‘ 3 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0

7. Giorgio Parisi (Physics '21)
8. Jennifer Doudna (Chemistry '20)
9. Yoshua Bengio (Turing Award '18*)
10. Beatrice Fihn (Peace '17)
11. Oliver Hart (Economics '16)
12. Juan Manuel Santos (Peace '16)
13. Ahmet Üzümcü (Peace '13)
14. Jean Jouzel (Peace '07)
15. Riccardo Valentini (Peace '07)

23.02.2026 20:48 πŸ‘ 3 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0

20 Nobel Prize winners have warned that we may someday lose human control over advanced AI systems

1. Geoffrey Hinton (Physics '24)
2. John Hopfield (Physics '24)
3. Demis Hassabis (Chemistry '24)
4. Daron Acemoglu (Economics '24)
5. Ben Bernanke (Economics '22)
6. Maria Ressa (Peace '21)

23.02.2026 20:48 πŸ‘ 8 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0

The infamous METR graph is going vertical.

Current trends suggested ~8-9h time horizons, but instead we're seeing ~14.5h time horizons!

Based on this, I would project ~2-3.5 workweek time horizons by end of year (!!). That could have significant implications for the economy.

20.02.2026 19:52 πŸ‘ 40 πŸ” 3 πŸ’¬ 4 πŸ“Œ 3
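For a rough sense of the arithmetic behind that projection, here is a minimal sketch assuming the time horizon keeps growing exponentially. The doubling times below are illustrative assumptions I chose to match the "going vertical" claim, not figures from METR or the post.

```python
# Hedged sketch: extrapolating a METR-style task time horizon under an
# assumed exponential trend. Doubling times here are assumptions, not
# published METR estimates.

def project_horizon(current_hours: float, doubling_months: float,
                    months_ahead: float) -> float:
    """Project a time horizon forward assuming exponential growth."""
    return current_hours * 2 ** (months_ahead / doubling_months)

current_hours = 14.5       # ~14.5h horizon observed (per the post)
months_to_year_end = 10.0  # late Feb -> end of December, roughly

# Doubling times faster than the classic ~7 months (assumed values):
for doubling in (3.0, 4.0):
    hours = project_horizon(current_hours, doubling, months_to_year_end)
    weeks = hours / 40  # assuming a 40h workweek
    print(f"doubling every {doubling:.0f} months -> "
          f"~{hours:.0f}h (~{weeks:.1f} workweeks)")
```

Under these assumptions the projection lands at roughly 82h (~2 workweeks) for a 4-month doubling time and roughly 147h (~3.7 workweeks) for a 3-month doubling time, which brackets the ~2-3.5 workweek range in the post.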

πŸ”΄Rep Kiley (CA)
πŸ”΅Rep Moulton (MA)
πŸ”΄Rep Mace (SC)
πŸ”΅Rep Sherman (CA)
πŸ”΄Rep Moran (TX)
πŸ”΅Rep Tokuda (HI)
πŸ”΄Rep Paulina Luna (FL)
πŸ”΅Rep Whitesides (CA)
πŸ”΄Rep Perry (PA)

18.02.2026 19:09 πŸ‘ 4 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

πŸ”΅Sen Schumer (NY)
πŸ”΄Rep Burleson (MO)
πŸ”΅Rep Beyer (VA)
πŸ”΄Rep Crane (AZ)
πŸ”΅Rep Foster (IL)
πŸ”΄Rep Dunn (FL)
πŸ”΅Rep Krishnamoorthi (IL)
πŸ”΄Rep Johnson (SD)
πŸ”΅Rep Lieu (CA)
(continued)

18.02.2026 19:09 πŸ‘ 4 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

27 current members of Congress have publicly discussed AGI, superintelligence, AI loss of control, or the Singularity:

πŸ”΄Sen Blackburn (TN)
πŸ”΅Sen Blumenthal (CT)
πŸ”΄Sen Hawley (MO)
πŸ”΅Sen Hickenlooper (CO)
πŸ”΄Sen Lee (UT)
πŸ”΅Sen Murphy (CT)
πŸ”΄Sen Lummis (WY)
πŸ”΅Sen Sanders (VT)
πŸ”΄Rep Biggs (AZ)
(continued)

18.02.2026 19:09 πŸ‘ 12 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Thanks!

06.02.2026 23:04 πŸ‘ 3 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

And in order to do that, we will need to build the verification technology to confirm compliance with such a deal. That technology needs to be built now so that it is ready in time. This will give us important optionality for the future.

We need a second button, if only to have another option.

06.02.2026 23:04 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

And in that case, we may want to make a deal with China to mutually slow down the race to superintelligence so we can proceed with more foresight.

06.02.2026 23:04 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0