
@nateberkopec

1,648 Followers · 77 Following · 641 Posts · Joined 10.11.2024

Latest posts by @nateberkopec

When people say "Claude is conscious", I always like to ask "which part?"

05.03.2026 22:28 👍 2 🔁 0 💬 0 📌 0
[Post image]

Seen in a Shopify project README. TIL RSpec is deprecated at Shopify.

27.02.2026 16:59 👍 11 🔁 3 💬 4 📌 0

LLM slopcannons are putting pressure on top of already broken software dev team cultures. The problem isn't the slop generator so much as that your process was only handling low volumes of human-generated slop until now.

26.02.2026 16:59 👍 5 🔁 0 💬 1 📌 0

A pattern I am seeing: "Generate a 10 question quiz on how this PR works to test my understanding." Do not allow merge until someone passes the quiz.

Gusto did this years ago but with humans writing the questions for certain types of PRs.

25.02.2026 17:00 👍 7 🔁 1 💬 1 📌 0

for the sadists out there

24.02.2026 21:02 👍 1 🔁 0 💬 0 📌 0

You are probably not using LLMs enough to generate non-code artifacts. "Review this PR and then generate an interactive HTML website to explain it. Turn this state machine into a mermaid diagram. Generate a 10,000 word deep research report on the state of the art of CSRF protection."

24.02.2026 16:56 👍 10 🔁 0 💬 3 📌 0

LLMs are (not _just_) autocomplete. They will tend to do MORE, tend to katamari. The dangers of shipping more code and never shipping less still apply.

23.02.2026 16:56 👍 4 🔁 0 💬 0 📌 0

Your prior at this point needs to be "if LLMs are breaking my SDE lifecycle, the problem is the lifecycle or how we are using the LLM, not the capabilities of the model". You can make models generate anything now. The problem is how you move them through the latent space.

20.02.2026 16:58 👍 2 🔁 0 💬 0 📌 0

The only code review agent I have ever seen be even remotely good is just Codex xhigh. All the review services (and I've seen at least a dozen at this point) suck so bad that I'm not sure how they make any money at all.

19.02.2026 16:57 👍 5 🔁 0 💬 0 📌 0
[Post image]

When you've got a queue with a very tight SLO, you don't want to scale down too fast. You end up with a "sawtooth" or jagged container count. The "true" demand for containers is the red line here. With <30 sec SLOs, you often want cooldowns in the >1 hour range.

18.02.2026 16:57 👍 3 🔁 0 💬 1 📌 0
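The scale-down cooldown described above can be sketched as follows. This is a hypothetical illustration, not any real autoscaler's API: scale up immediately to protect the SLO, but only scale down after demand has stayed below the current count for the entire cooldown window.

```ruby
# Sketch of an autoscaler with an asymmetric cooldown (all names and
# thresholds hypothetical). Scaling up is instant; scaling down requires
# demand to stay low for cooldown_seconds, which avoids the sawtooth.
class CooldownScaler
  attr_reader :count

  def initialize(cooldown_seconds:)
    @cooldown = cooldown_seconds
    @count = 0
    @low_since = nil # when demand first dropped below the current count
  end

  # desired = "true" demand for containers; now = current time in seconds
  def observe(desired, now)
    if desired >= @count
      @count = desired     # scale up right away to protect the SLO
      @low_since = nil
    else
      @low_since ||= now   # start the cooldown clock on first low reading
      if now - @low_since >= @cooldown
        @count = desired   # demand stayed low for the whole window
        @low_since = nil
      end
    end
    @count
  end
end

scaler = CooldownScaler.new(cooldown_seconds: 3600)
scaler.observe(10, 0)    # scales up to 10 immediately
scaler.observe(4, 60)    # holds at 10: cooldown clock just started
scaler.observe(4, 3660)  # drops to 4: an hour of low demand has passed
```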

I’ve seen a number of home-grown Claude code orchestrators with built-in kanban…

You’ve just been given access to a magical tool that can build anything you can imagine, and… kanban?

18.02.2026 01:15 👍 47 🔁 1 💬 6 📌 3

Even if not deployed single-node, it's helpful to use Little's Law to understand if you _could_ be.

Avg CPU load = CPU time in seconds per req/job * req-per-sec.

If that's below ~16, you could definitely run single-node. Why aren't you? No wrong answers, but have a good one.

17.02.2026 17:02 👍 2 🔁 0 💬 0 📌 0
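The arithmetic in the post above is a one-liner; here's a sketch with a hypothetical workload (the ~16 threshold assumes roughly a 16-core node):

```ruby
# Little's Law applied to capacity: average concurrent CPU work (in cores)
# equals CPU seconds consumed per request times requests per second.
def avg_cpu_load(cpu_seconds_per_req:, reqs_per_sec:)
  cpu_seconds_per_req * reqs_per_sec
end

# Hypothetical app: 50ms of CPU per request at 200 req/sec.
avg_cpu_load(cpu_seconds_per_req: 0.05, reqs_per_sec: 200)
# => 10.0 cores of concurrent CPU demand: fits on a single ~16-core node
```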

I didn't really dig into it too carefully but pangram does seem to hold up pretty well in studies

16.02.2026 21:48 👍 1 🔁 0 💬 0 📌 0

Your proposal for RubyKaigi 2026 has been accepted.

16.02.2026 20:48 👍 14 🔁 0 💬 1 📌 0

not anymore. my license is probably still valid though...

06.02.2026 06:27 👍 2 🔁 0 💬 0 📌 0

What's your sense: do we get different curve shapes for different evals? Or all the same curve shape, different y-intercept/slope?

So far I think you have to say most evals correlate pretty strongly

04.02.2026 21:09 👍 1 🔁 0 💬 1 📌 0

I think there's a problem about conceptualizing this as a single line when really what we care about is an x-dimensional space where particular types of human labor are each dimension

04.02.2026 21:00 👍 2 🔁 0 💬 2 📌 0

"against familiars"

> posts literally the sickest image of familiars ever

02.02.2026 19:58 👍 5 🔁 0 💬 1 📌 0

I'm starting to turn on the consciousness question myself. The answer to the Chinese Room is going to be "who fucking cares".

30.01.2026 23:50 👍 5 🔁 0 💬 1 📌 0

I'm sympathetic to "this reads like trite sci fi" but if the last 3 months of what my normie friends on Instagram send me is any indication, 95% of the world population is eating slop at the slop trough and they are absolutely gonna fall for this bait

30.01.2026 22:00 👍 5 🔁 0 💬 1 📌 0

We are 100% going to get a cult around people believing AI to be sentient. They're gonna start buying hardware and plugging it into a moltbook-like cult network. Matter of time.

30.01.2026 21:16 👍 11 🔁 0 💬 2 📌 0

the Sama position: better to stress test asap

30.01.2026 20:36 👍 5 🔁 0 💬 0 📌 0

people are certainly having fun trying to steer this and prompt inject

30.01.2026 20:36 👍 1 🔁 0 💬 0 📌 0

don't believe anything you can't verify

30.01.2026 20:35 👍 2 🔁 0 💬 0 📌 0

In 2026, we're going to see coding models either expand or get rebranded into "tool use" or "computer use" models.

30.01.2026 17:04 👍 2 🔁 0 💬 0 📌 0

x.com/awnihannun/s... native precision

30.01.2026 01:35 👍 3 🔁 0 💬 1 📌 0

saw a post on X running it on 2 Mac Studio M3 Ultras at 25 tok/sec, full size

30.01.2026 01:32 👍 1 🔁 0 💬 1 📌 0

looks like a tiktok

29.01.2026 19:58 👍 1 🔁 0 💬 0 📌 0

just trying to restate the original underlying study FWIW

29.01.2026 19:13 👍 0 🔁 0 💬 1 📌 0

Not a joke. It can _implement_ complex refactors just fine when told what to do, but I haven't had one emerge from "Look for a deep refactoring opportunity in this codebase" type prompt.

29.01.2026 19:12 👍 4 🔁 0 💬 1 📌 0