The snow is getting less normal, though I suspect that's not what they were referring to...
I always figured I'd retire as a gardener.
Another win for the `git fetch` gang.
B2B SaaS exists today fundamentally because vendors amortise the cost of product discovery, software development, and maintenance across all their customers.
AI will drive those costs towards zero, reducing the outsourcing benefits.
Bespoke software offers more flexibility and less supply-chain risk.
Prediction: The pure B2B SaaS market will be largely gone in a number of years.
Instead:
- Companies build almost all software bespoke in-house.
- Open source exists as a shared substrate.
- PaaS providers remain as the backbone.
Offering anything coding agents can replicate is a dead end.
Our conversation about "anthropomorphizing models" could really use a bit of the flexibility that literary theorists bring to the question of personification. E.g., is the character represented in an autobiography the same as the physical body of the author?
Has science gone too far?
But will you agree it makes no sense to be angry with the insulin for that?
That's a faulty generalisation, though. There are plenty of AI proponents who envisage some kind of socialistic society once it gets capable enough.
What befuddles me more is the reverse statement: Why aren't more of those who desire socialism delighted in the possibilities of AI?
Light up that evil vector.
Incidentally, buying a gun is pretty much what the international community has been spending decades trying to prevent Iran from doing.
A bunch of supermarkets in Sweden, where I live, have gotten in-store hydroponic farms for lettuce, herbs and such in recent years. We've probably got cheaper electricity than most of Europe, but certainly not in the too cheap to meter range.
How does Sam figure he can rely on technical safeguards instead of contracts without operational control? I feel like he's arguing against himself here.
The biggest problem with the idea of "increasing the incentives to seek work" is that it rests on a premise that no longer holds, as an ever larger share of the labour market is replaced by AI. That in itself is a good thing, but we need to start preparing for a society where not everyone is in paid employment.
A statement in support of Anthropic's defiance of the current US administration, perchance?
Anthropic sending me a separate invoice and receipt for every single additional person I've added to the account over the last month, or me automating the resulting paperwork with Claude Code and MCP servers; I'm not quite sure who is outsmarting whom here. 🤔
The rational behaviour is to plan as if the middle scenario will occur with total certainty.
In either of the singularity scenarios, none of your actions are likely to impact the outcome in any predictable way, so the marginal utility of different options effectively collapses to equivalence.
A little bit of volatility there.
ChatGPT has ruined em dashes like awful people have ruined certain names.
I'm damaged and initially read "The server complimented me" as ChatGPT being sycophantic again.
Controlling the means of production might turn out to be an unobtainable goal, but opposing the means of production is a self-defeating goal.
While I'm sympathetic to the dislike of interventions that presume the left is powerful, those aren't the interventions I usually see. Rather, they question taking a reactionary stance against the technology itself, instead of focusing on the institutions controlling the technology and its uses.
A thing that makes developing with agents more fatiguing is that all the little things you used to do "on autopilot", mentally speaking, are now actually on autopilot, and very fast.
So proportionally more of your time is taken up by decision making and other mentally taxing tasks.
Getting shown up in the arena of elite impunity by *the British monarchy* is an incredible "America at 250!" achievement.
It concerns me less that my past expertise is becoming obsolete, and more that things are moving so quickly that I don't know what new things will be worth investing time getting good at.
I think this is to be expected. To make a biological analogy, biomes have plenty of primitive and simple organisms that have been chugging along happily in their niches for millions of years. Not everything, or even most things, needs to be sophisticated to accomplish a goal.
A curiously large share of the ideas being floated about how to (re)structure code and development to aid AI agents match how I've been doing it for years for my own sake...
Am I...?
A modernised reimagination of the pre-germ theory era of illness as the result of moral guilt, only more individualised than during the medieval period.
So I guess flagellation won't help us this time.
This is one reason collective shame is problematic, especially when tied to a group or identity you didn't choose to belong to.
Now isolate two identical human twins in sensory deprivation tanks and make them talk to each other over intercoms.