
Epoch AI

@epochai

We are a research institute investigating the trajectory of AI for the benefit of society. epoch.ai

1,262 Followers · 20 Following · 1,285 Posts · Joined 22.11.2024

Latest posts by Epoch AI @epochai

We have now updated these numbers! Read more here:
x.com/Jsevillamol/...

06.03.2026 21:59 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Microsoft’s recent $68 billion in physical asset additions were driven by AI-related purchases

Microsoft acquired $68 billion of physical assets during the second half of 2025, not including replacements for retired goods — nearly as much as in the entire prior year. Their financial filings suggest that acquisitions were dominated by spending on new data centers. IT equipment, including GPUs and servers, contributed 57% of the growth, while buildings made up 39%.

For more details on our analysis, check out our latest data insight by Isabel Juniewicz: epoch.ai/data-insigh...

06.03.2026 18:49 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Hyperscaler capex has quadrupled since GPT-4’s release, nearing half a trillion dollars in 2025

The combined capital expenditures of Alphabet, Amazon, Meta, Microsoft, and Oracle have been growing at an average of 72% per year, driven by investments in AI infrastructure, and neared half a trillion dollars in 2025. If this trend continues, they will spend $770 billion in 2026.

Our new data insight focuses on PP&E, allowing us to break down physical asset acquisitions into finer categories. It follows our prior work on capex, a related but distinct metric, which found that spending has been growing rapidly for hyperscalers like Microsoft. epoch.ai/data-insigh...

06.03.2026 18:49 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Post image

Microsoft added $68B in physical assets in the second half of 2025 — almost as much as the entire prior fiscal year.

57% was IT equipment (GPUs, servers). 39% was buildings, dominated by data centers.

06.03.2026 18:49 πŸ‘ 10 πŸ” 2 πŸ’¬ 2 πŸ“Œ 0
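The split above can be sanity-checked with back-of-envelope arithmetic, assuming the stated percentages apply to the full $68B total:

```python
# Back-of-envelope breakdown of Microsoft's H2 2025 physical asset additions,
# assuming the stated shares (57% IT, 39% buildings) apply to the full $68B.
total_b = 68
it_equipment_b = 0.57 * total_b   # GPUs, servers, other IT equipment
buildings_b = 0.39 * total_b      # dominated by data centers
other_b = total_b - it_equipment_b - buildings_b
print(f"IT: ${it_equipment_b:.1f}B, buildings: ${buildings_b:.1f}B, other: ${other_b:.1f}B")
# IT: $38.8B, buildings: $26.5B, other: $2.7B
```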
FrontierMath

FrontierMath is an AI benchmark consisting of extremely challenging math problems, including open research problems that remain unsolved by mathematicians.

Check out our website for more results and commentary about FrontierMath overall!

epoch.ai/frontiermath

05.03.2026 18:33 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Finiteness Problem for Diophantine Equations

We also evaluated GPT-5.4 Pro on FrontierMath: Open Problems. It did not solve any problems. It made some novel observations on one problem, but of a form that the author had anticipated and characterized as relatively uninteresting. More here:

epoch.ai/frontiermat...

05.03.2026 18:33 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 1

Across all runs ever, 42% (20/48) of the problems in Tier 4 have now been solved at least once.

05.03.2026 18:33 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Post image

We ran GPT-5.4 (xhigh) an additional ten times on Tier 4 to get a pass@10 score. This was 38%. In one of these runs, it solved another problem no model had solved before. This problem was by Bartosz NaskrΔ™cki.

05.03.2026 18:33 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
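For readers unfamiliar with the metric, pass@10 here means a problem counts as solved if any of the ten runs solves it. A minimal sketch of the computation; the problem IDs and run results below are invented for illustration:

```python
# pass@k over repeated runs: a problem is "solved" if at least one run got it.
def pass_at_k(runs: list[set[str]], problems: list[str]) -> float:
    solved = {p for p in problems if any(p in run for run in runs)}
    return len(solved) / len(problems)

problems = [f"p{i}" for i in range(10)]
runs = [{"p0", "p1"}, {"p1", "p2"}, {"p3"}]  # three hypothetical runs
print(pass_at_k(runs, problems))  # 0.4: four of ten problems solved at least once
```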

GPT-5.4 Pro solved one Tier 4 problem that no model had solved before. In a preliminary analysis, it appeared to have found a preprint from 2011 which let it shortcut much of the intended work. The problem author was unaware of this preprint.

05.03.2026 18:33 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

On Tiers 1–3 GPT-5.4 Pro solved 52% of the non-held-out set and 42% of the held-out set. On Tier 4, GPT-5.4 Pro solved 25% of the non-held-out set and 55% of the held-out set. Neither of these differences is statistically significant.

05.03.2026 18:33 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

FrontierMath was funded by OpenAI, which has exclusive access to all 290 problems in Tiers 1–3, solutions to 237 of those problems, 28 of the 48 problems in Tier 4, and solutions to those 28 problems. Epoch holds out the rest.

05.03.2026 18:33 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 2
Post image

GPT-5.4 set a new record on FrontierMath, our benchmark of extremely challenging math problems! We had pre-release access to evaluate the model. On Tiers 1–3, GPT-5.4 Pro scored 50%. On Tier 4 it scored 38%.

See thread for commentary and additional experiments.

05.03.2026 18:33 πŸ‘ 11 πŸ” 3 πŸ’¬ 1 πŸ“Œ 1
Data on AI Companies

Our database of AI company data, covering revenue, funding, staff, and compute for many of the key players in frontier AI.

Check out our website for more data on AI company finances!

epoch.ai/data/ai-com...

27.02.2026 21:37 πŸ‘ 3 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Post image

OpenAI’s recent funding round nearly triples the amount they have raised so far. The Information has reported that OpenAI projects a $157B cash burn through 2028. This round, combined with $40B cash on hand, essentially matches that projection.

27.02.2026 21:37 πŸ‘ 18 πŸ” 1 πŸ’¬ 2 πŸ“Œ 1
Careers

Explore Epoch AI’s career opportunities, apply to open positions, and help shape the future of AI.

These roles are fully remote and we can hire in many countries, although we prefer significant overlap with North American time zones. Applications are reviewed on a rolling basis, so apply soon! epoch.ai/careers

27.02.2026 16:57 πŸ‘ 3 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0

We’re looking for candidates for both roles who are up-to-date with AI trends, adept at managing multiple detailed workstreams, and excited about contributing to our mission: improving our understanding of the future of AI.

27.02.2026 16:57 πŸ‘ 3 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0
Post image

Our team is expanding! We’re hiring for a Researcher to work with our Senior Researcher, JS Denain, on pitching and developing new projects, and a Special Projects Associate to provide operational support for making great new benchmarks.

27.02.2026 16:57 πŸ‘ 7 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0

For more details on this analysis, see our website: epoch.ai/data-insigh...

26.02.2026 19:19 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Each company defines "capex" differently on earnings calls. Some include finance leases; some don't. So to build a consistent measure, we went directly to companies' financial filings and identified cash spending and new finance leases using standardized regulatory tags.

26.02.2026 19:19 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
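A rough sketch of that standardization step: per filing, sum the cash PP&E purchases and the new finance leases. The US-GAAP tag names below are standard ones, though whether these are the exact tags Epoch used is an assumption here, and the hardcoded figures are invented; a real pipeline would pull them from SEC EDGAR.

```python
# Standardized capex measure: cash PP&E spending plus new finance leases,
# identified via US-GAAP XBRL tags rather than each company's "capex" wording.
# Tag choice is an assumption; figures below are illustrative, in $B.
US_GAAP_TAGS = [
    "PaymentsToAcquirePropertyPlantAndEquipment",                  # cash capex
    "RightOfUseAssetObtainedInExchangeForFinanceLeaseLiability",   # new finance leases
]

def consistent_capex(filing_facts: dict[str, float]) -> float:
    """Sum the standardized components, treating missing tags as zero."""
    return sum(filing_facts.get(tag, 0.0) for tag in US_GAAP_TAGS)

print(consistent_capex({
    "PaymentsToAcquirePropertyPlantAndEquipment": 30.0,
    "RightOfUseAssetObtainedInExchangeForFinanceLeaseLiability": 4.0,
}))  # 34.0
```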

Company statements and analyst projections also anticipate continued rapid growth in capital expenditures in 2026, though slower than this trend extrapolation.

26.02.2026 19:19 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Post image

Driven by investments in AI, hyperscaler capital expenditures have grown 70% per year since the release of GPT-4, nearing half a trillion dollars in total during 2025.

If this trend continues, Alphabet, Amazon, Meta, Microsoft and Oracle will spend $770 billion on capex in 2026.

26.02.2026 19:19 πŸ‘ 5 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0
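The extrapolation above is simple compound growth. A sketch, assuming a ~$450B base for 2025 (the post says "nearing half a trillion"; the exact base figure is an assumption here) and 70% annual growth:

```python
# Trend extrapolation under constant compound growth.
# Base of $450B for 2025 is an assumed reading of "nearing half a trillion".
def extrapolate(base_billion: float, growth_rate: float, years: int) -> float:
    """Project spending forward at a constant annual growth rate."""
    return base_billion * (1 + growth_rate) ** years

projected_2026 = extrapolate(450, 0.70, 1)
print(f"Projected 2026 capex: ${projected_2026:.0f}B")  # ~$765B, close to the $770B above
```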
The least understood driver of AI progress
An opinionated guide to “algorithmic progress” and why it matters

This week's Gradient Update was written by Anson Ho.

All Gradient Updates are informal and opinionated analyses that represent the views of individual authors, not Epoch AI as a whole.

You can read the full post here: epoch.ai/gradient-up...

26.02.2026 17:01 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Epoch AI (@epochai.bsky.social)

AI training compute efficiency has improved extremely fast: each year, you need several times less training compute to reach the same capability. But AI architectures/algorithms haven’t changed *that* much in recent years. So where do these efficiency improvements come from? 🧡

The implications could be bigger still. In scenarios like AI 2027 and Situational Awareness, automating AI R&D hugely accelerates software progress and hence capability growth.

The big open question is whether this acceleration can actually happen, which we discuss here:
bsky.app/profile/epo...

26.02.2026 17:01 πŸ‘ 2 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0
What Happened With Bio Anchors? ...

This has huge implications. For instance, it explains how DeepSeek caught up with o1 with less compute in a matter of months. The specific rate of AI software progress might even shift your AGI timelines by over a decade!
www.astralcodexten.com/p/what-happ...

26.02.2026 17:01 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Post image

In fact, AI software progress seems to be extremely fast. Each year, you need several times less training compute to reach the same level of capabilities:

26.02.2026 17:01 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
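To see how quickly this compounds: if the compute needed for a fixed capability falls by a constant factor each year, the requirement shrinks geometrically. The 3x-per-year factor below is a placeholder for illustration, not Epoch's estimate:

```python
# Geometric shrinkage of compute requirements under a constant yearly
# efficiency factor (the 3x figure is an illustrative placeholder).
def compute_needed(initial_flop: float, yearly_factor: float, years: int) -> float:
    """Compute needed to match a fixed capability after `years` of progress."""
    return initial_flop / yearly_factor ** years

print(compute_needed(1e25, 3.0, 2))  # a 1e25-FLOP capability for ~1.1e24 FLOP two years later
```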
Post image

This improvement also lets researchers push AI capabilities with the same training compute. So if software progress is fast enough, we could reach much greater capabilities without scaling training.

That’s why GPT-5 outperformed GPT-4.5 with less training compute.

26.02.2026 17:01 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Post image

One way is to say that better AI software reduces the compute needed to reach the same capability.

For example, imagine a curve relating a measure of capabilities to log(training compute). After making an algorithmic innovation, the curve shifts to the left, saving compute:

26.02.2026 17:01 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
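That leftward shift can be made concrete: on a log10(compute) axis, shifting the curve left by d means reaching the same capability with 10^d times less compute. A toy sketch, where the sigmoid capability curve and the shift size are invented for illustration:

```python
import math

# Toy capability curve over log10(training compute); a leftward shift of the
# curve by d log10-units means the same capability at 10**d times less compute.
# The sigmoid form, its midpoint (1e24 FLOP), and the 0.5 shift are assumptions.
def capability(log10_compute: float, shift: float = 0.0) -> float:
    return 1 / (1 + math.exp(-(log10_compute + shift - 24)))

before = capability(24.0)            # baseline curve at 1e24 FLOP
after = capability(23.5, shift=0.5)  # shifted curve: ~3.2x less compute
print(before == after)  # True: same capability reached with less compute
```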

There are many ways to improve algorithms and data. For example, you could change model architectures, build better RL environments, and improve training recipes.

But how do you concretize what makes some AI software better than others?

26.02.2026 17:01 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Post image

Developing more powerful AI isn’t just about scaling compute. It’s also about improving algorithms and data quality, which let you build better models with the same compute.

We call this “AI software progress” — here’s everything you need to know about it: 🧡

26.02.2026 17:01 πŸ‘ 9 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0
The least understood driver of AI progress
An opinionated guide to “algorithmic progress” and why it matters

This week's Gradient Update was written by Anson Ho.

All Gradient Updates are informal and opinionated analyses that represent the views of individual authors, not Epoch AI as a whole.

You can read the full post here: epoch.ai/gradient-up...

26.02.2026 16:32 πŸ‘ 3 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0