79% of companies paying for OpenAI also pay for Anthropic.
Not choosing. Hedging.
The model isn't the bottleneck for most of these teams. Getting everyone aligned on what to build and how to spec it out is. No subscription fixes that.
Claude and ChatGPT are powerful. But they only do what we ask and see what we give them. Continuous, system-level context that feeds product decisions feels like a gap.
I'm curious how others are thinking about this, or if anyone has wired up a solution in this arena that they like.
What we don't really have yet is an always-on product agent that continuously ingests company context, tracks patterns over time, and proactively suggests high-leverage projects and priorities based on what's actually happening across the business.
They paste it into an LLM or pull from an MCP connection, ask for insights, and then repeat the process a week or a month later.
It works, but it's episodic and dependent on one person holding everything together.
Huge step forward, but there's still a structural gap.
Today, the synthesis layer is manual. A PM gathers context from Slack threads, sales calls, churn reports, dashboards, roadmap docs, and data warehouse queries.
LLMs have dramatically improved how product managers work.
You can synthesize ten customer interviews in seconds. You can analyze hundreds of support tickets and extract clear themes. Drafting PRDs, refining strategy, even pressure-testing ideas is faster than ever.
The same has always been true. The difference is the judgment of a human dev in the loop.
Today, a coding agent doesn't slow down to ask clarifying questions.
It ships something plausible.
If the spec was right: review and merge. If the spec was vague: two hours of debugging instead of 20 minutes writing it down up front.
Adding people to a late project just makes it more late.
Planning cycles have become shorter.
There is no moat. Build anyway.
Exactly. In this new world, itβs all that matters.
A well-defined feature with 1 engineer will ship faster than a vague one with 3.
Product leaders: the best allocation strategy is better requirements. Not more people.
The most expensive allocation mistake is not putting the wrong person on a task.
It is putting the right person on a task that was never defined well.
Senior engineers are fast, but they often get stuck translating vague requests into actual work. Fix the input.
Beginners use AI to avoid writing complex code.
Experts use AI to avoid reading documentation.
Masters use AI to create entire production features.
The goal isn't to let the machine do the thinking. It is to let the machine do the lower level work that steals your time.
In startups, you either win or you learn with everything you do.
Had a blast at @AITinkerers Seattle last week. There's nothing quite like being in a room full of people who are shipping and building innovative things.
Huge thanks to the community for the energy and the great conversations around Spec Driven Development. It's an exciting time to be building!
Biggest drag with AI coding on a team is review. The agent can write code fast, but it can't explain the decisions it made. If the spec and constraints aren't written down, code review turns into detective work.
AI coding workflow:
Start session
Paste in context
Realize context is missing important info
Update context
Kick off coding
Realize you forgot a constraint
Update context again
Restart coding
Validate locally, realize you missed a use case
Update context again
Finally ship something
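The loop above mostly exists because missing context surfaces mid-session instead of up front. A minimal sketch of front-loading that check before kicking off the agent; the spec format and all names here are illustrative, not any particular tool's API:

```python
# Hypothetical pre-flight check: surface spec gaps before the coding
# session starts, instead of discovering them two restarts in.
REQUIRED_CONTEXT = ["goal", "constraints", "edge_cases", "done_criteria"]

def preflight(spec: dict) -> list[str]:
    """Return the context sections still missing or empty in the spec."""
    return [key for key in REQUIRED_CONTEXT if not spec.get(key)]

spec = {"goal": "add rate limiting", "constraints": "no new deps"}
missing = preflight(spec)
if missing:
    # Cheaper to fill these in now than after a restarted session.
    print(f"Spec incomplete, missing: {missing}")
```

Nothing clever, just a forcing function: the agent doesn't start until the checklist is empty.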
Love how Boris Cherny runs 5 Claudes with a CLAUDE.md that learns from every mistake. He's dogfooding his own product and it shows. The real unlock isn't a better model, it's building the system around it.
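For anyone who hasn't tried this pattern: a sketch of what a self-updating CLAUDE.md could look like. This structure is hypothetical, not Cherny's actual file; the idea is just that every mistake becomes a written rule the agent sees on the next session.

```markdown
# CLAUDE.md (hypothetical example)

## Conventions
- Use the existing query helpers; never write raw SQL in handlers.
- Run the test suite before declaring a task done.

## Lessons from past mistakes
- Agent once regenerated the migrations folder: never touch `migrations/`.
- Agent missed a rate-limit edge case: list edge cases in the spec first.
```

Each new entry costs one line; each prevented repeat saves a session.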
Why does any PR from more than a week ago look like an ancient manuscript?
AI coding in 2026:
Step 1: Generate 400 lines in 60 seconds
Step 2: Spend 45 minutes reviewing, testing and reworking
Step 3: Write post about how "AI made engineering 10x faster"
How shipping product feels at times
AI coding is rarely blocked by generation.
Itβs blocked by clarity and review.
If the spec is vague, the agent still ships code; you just pay for it later.
Project managers when the "spec" is actually just a chat transcript.
Happy circle back day to all those who celebrate.
AI coding tools are creating more technical debt, not less.
Why? Junior engineers are jumping straight to solutions without proper specs.
Your AI is only as good as your requirements. Garbage in, garbage out.
Agile is dead and AI killed it.
Scrum was built for human handoffs and 2-week sprints. AI agents work in 2-hour cycles and don't need standups.
Your 'junior developer' can code for 48 hours straight, but only if you give it the right context and specs upfront.
Software Engineers are transitioning into Product Engineers.
If you know how to use AI tools, coding has become more straightforward than ever. The real challenge now lies not in execution but in design and architecture choices.
Clear ownership is so important for teams, even when you are tiny. Simply identifying who owns a given decision can speed up cycle time tremendously.