Real-time feedback on relative effort makes the hardest-working team members work less.
If you fall into a collective intelligence rabbit hole, there's like a ton of such counterintuitive findings.
www.youtube.com/watch?v=njor...
A non-deterministic product may produce different outputs with each use, which makes the traditional means of testing and debugging less useful.
"It works well on my machine" will become even more of a joke than it is right now.
It feels like the difference between the deterministic and stochastic nature of tools should be the central piece of discussions about AI. And it's a side table at best. A very small side table.
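A minimal sketch of why stochastic outputs break traditional testing, with a hypothetical `generate` function standing in for a model call (the function and its outputs are assumptions for illustration): exact-match assertions fail intermittently, so tests have to shift toward invariants that hold for every possible output.

```python
import random

def generate(prompt: str) -> str:
    """Stand-in for a stochastic model call (hypothetical)."""
    return random.choice(["4", "four", "The answer is 4."])

# Deterministic-style test: fails intermittently, because output varies per call.
# assert generate("2 + 2 = ?") == "4"

# Property-style test: assert an invariant that must hold for *every* output.
for _ in range(100):
    assert "4" in generate("2 + 2 = ?").lower().replace("four", "4")
```

The same shift applies to debugging: "works on my machine" stops meaning anything when the same input on the same machine can produce a different output.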
Tech industry: With AI, investor loyalty is dead.
Me: You must be kidding. Investor loyalty? Is that even a thing? They'd throw a startup under the bus to marginally improve their chances of going supernova.
pawelbrodzinski.substack.com/p/is-vc-broken
If you wonder why we won't unleash fully autonomous AI agents anytime soon, it's not because of technical capabilities, or lack thereof. It's because of psychology 101: brodzinski.com/2025/10/no-t...
"This private-market funding round is about four times larger than the biggest IPO ever."
Is anybody doing the math on this?
Save for the circular economy where money goes back to Nvidia and Amazon in infrastructure bills, that is.
share.google/w7T1WCnPBtOJ...
"We've grown from 20 to 150 people in the last year."
"Today we're introducing a set of new company values at Lovable."
Anton Osika, Lovable.
With that growth speed, their 130 new hires ARE the new culture.
1000 words about the recent "Substack post that crashed the stock market" drama.
"* Producing code faster than we can understand it paints us into a corner
* The fastest way to understand code is to write it"
@jasongorman.bsky.social
"When we are under cognitive load, carrying too much in active memory, our executive function degrades."
@ourfounder.bsky.social
Do the math.
The most important question a product manager should keep asking, especially in the AI era: Should we build it?
My guest article at Ibrahim Bashir's Run the Business: runthebusiness.substack.com/p/invalidati...
We could discuss whether OpenAI is worth $300B or $500B, but they at least have the revenue and growth we can challenge.
The new wave of AI startups? No product, thus no traction, thus no growth, thus no growth curve, thus no possible challenge. Welcome to the fantasyland!
bit.ly/4rrYcjT
That kinda assumes some of the customers don't have brains of their own.
I've seen Disney just released a fresh Star Wars movie trailer. Where the hell do I see the movie this weekend? Said no one :)
OK, almost no one. I'm not that optimistic about the human race ;)
I empathize with the problem, of course. I've had many discussions arguing why we need to refactor/rewrite already.
Also, I understand that contexts differ. We work with more corpo clients, too, and some prototypes are deemed to be "successful" no matter what. Then I don't treat them as such.
Yes, and in my world, most prototypes don't end up being "working" (not in business terms).
If a prototype has an 80% chance of being discarded, I would rather go with @tastapod.com's spike & stabilize than XP from the outset.
The former gives me a cheaper option for spike & abandon.
And "good enough," considered only from a perspective from my desk (and to hell with everyone downstream), was always dysfunctional, AI or not.
AI simply makes it so much easier.
Interesting take on "good enough."
I'd still aim for good enough, but my perspective is more compound: what's good enough for *the whole team*, *the client*, and *the time horizon* we consider.
A prototype will have different "good enoughness" than a validated product that we change.
Having said that, I increasingly see the opposite take. As Claude can do more, I see devs changing their stance from "human review is mandatory" (half a year ago) to "YOLO, we can make guardrails just good enough."
The interesting part is, of course, how it would work for them in the long run.
I'd agree with the sentiment, but count that as just one voice. I wrote about it half a year back, and so far I think it aged well: pawelbrodzinski.substack.com/p/vibe-codin...
Oh, and in case you asked, I'm rather fond of the results.
;-)
OH: So I got this code to review. Looked like AI-generated. I asked questions about it, but the developer had no clue. If I have to talk about the code with GPT, I would very much prefer to remove the proxy.
Given:
- your strong opinions on the topic
- the fact that the poll was run across your followers
- psychological tendency to stay in bubbles/echo chambers
I wouldn't draw too many conclusions from this.
It's not to criticize. It's just that the context matters.
Q: "I wonder how much money OpenAI has lost in electricity costs from people saying 'please' and 'thank you' to their models."
A: "Tens of millions of dollars."
Burning fossil fuel through being nice. Oh, well...
techcrunch.com/2025/04/20/y...
If we consider dropping the idea altogether after a discovery phase a failure, then at least half of well-run discovery phases should fail.
pawelbrodzinski.substack.com/p/50-of-disc...
It sounds a hell of a lot like that fastest designer could have been a modern AI tool :D
The Starbucks Test
1. Check the revenues and profits of a SaaS
2. Assume it's Starbucks
3. Figure out the valuation of the company
Surprisingly, SaaS companies pass the test, i.e., their valuations are sensible, only *after* the so-called SaaSpocalypse (or SaaS culling).
substack.com/home/post/p-...
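The three steps of the test, sketched with illustrative numbers (the Starbucks-like earnings multiple and the sample SaaS financials are assumptions for the sake of the example, not real data):

```python
# Hypothetical Starbucks-like multiple: a mature, profitable consumer
# business trading at roughly 25x earnings (illustrative figure only).
STARBUCKS_LIKE_PE = 25

def starbucks_test(annual_profit: float) -> float:
    """Steps 2-3: value the SaaS as if it were Starbucks."""
    return annual_profit * STARBUCKS_LIKE_PE

# Step 1: a sample SaaS with $40M in annual profit (made-up numbers).
valuation = starbucks_test(40e6)
print(f"${valuation / 1e9:.1f}B")  # → $1.0B
```

If the market cap sits far above what the Starbucks-style multiple implies, the valuation is pricing in growth the numbers don't yet show.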
Stumbled upon a profile of a person I've known for almost 2 decades.
He used to be an agile/lean thought leader.
Then, a crypto advocate.
Now, it's all about AI careers.
Most curious: the past "careers" are suspiciously gone from his current profile.
Go figure.
And if we look at things purely from an autonomy perspective, there's no reason to bet on us giving AI agents the keys and letting them drive as they wish.
We can't get agents to care or to be aligned with our goals. Thus, there can't be full autonomy.
brodzinski.com/2025/10/no-t...
So much this.
"We might get 80% of the way and think we're one-fifth away from full autonomy, but the long and checkered history of AI research is littered with the discarded bones of approaches that got us 'most of the way'. Close, but no cigar."
"Software architecture and design is replete with woolly concepts - what exactly is a 'responsibility', for example? How could we instruct a computer to recognise when a function or a class has more than one reason to change?"
by @jasongorman.bsky.social
codemanship.wordpress.com/2026/02/18/1...
Digital product isn't a pizza. Not everyone has to like it.
Enough people liking it enough to pay enough for it is, well, enough.