
Daniel Farina

@danfarina

543 Followers · 2,500 Following · 3,680 Posts · Joined 16.11.2023

Latest posts by Daniel Farina @danfarina

I’m puzzled by this; the responses seemed fine. Like, one guy is saying his dumb purchase was a few thousand dollars on a desk. Unless he makes that mistake every week, so what? Look at 90% of F-150 sales.

07.03.2026 15:10 👍 3 🔁 0 💬 0 📌 0

We don’t talk enough about how morally depraved the tech industry turned out to be. Every single ounce of their self-regarding statements of values was an outright lie.

07.03.2026 05:03 👍 4356 🔁 922 💬 104 📌 69

Sir, can I interest you in a discussion on land value tax?

05.03.2026 22:15 👍 1 🔁 0 💬 1 📌 0

Turn them all into dispatchable data centers, EZ.

05.03.2026 21:25 👍 1 🔁 0 💬 1 📌 0

What's kinda fun is to work out how fast a gasoline fill-up is in terms of megawatts. You have to apply a few corrections, to taste, for how much is lost to, say, heat rather than useful work, but it is measured in megawatts: about 20 MW gross and maybe 4-6 MW net (over the few minutes).

05.03.2026 20:10 👍 3 🔁 0 💬 1 📌 0
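The fill-up arithmetic above can be sanity-checked in a few lines. The pump rate, energy density, and efficiency here are my own ballpark assumptions, not figures from the post:

```python
# Back-of-the-envelope: power delivered by a gasoline pump.
# All constants are assumed, not from the post.
ENERGY_DENSITY_MJ_PER_L = 34.2   # rough heating value of gasoline
PUMP_RATE_L_PER_MIN = 40.0       # a brisk pump
ENGINE_EFFICIENCY = 0.25         # rough fraction turned into useful work

liters_per_second = PUMP_RATE_L_PER_MIN / 60.0
gross_mw = liters_per_second * ENERGY_DENSITY_MJ_PER_L   # MJ/s == MW
net_mw = gross_mw * ENGINE_EFFICIENCY

print(f"gross: {gross_mw:.1f} MW, net: {net_mw:.1f} MW")
```

A slower pump or a different efficiency estimate shifts the numbers, which is the "corrections to taste" part; these assumptions land near the post's 20 MW gross / 4-6 MW net.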

“No matter what happens, keep this in mind: It’s the same old thing, from one end of the world to the other. It fills the history books, ancient and modern, and the cities, and the houses too. Nothing new at all.”

05.03.2026 06:05 👍 2 🔁 0 💬 0 📌 0

But so what? I don’t expect 100% of the text I read out of Wikipedia to be relevant to my task either. My contrary take is that LLMs will help with critical thinking: now using a GRE word means nothing. You can’t lean on a proxy for the quality or accuracy of an idea.

05.03.2026 00:16 👍 2 🔁 0 💬 0 📌 0

Chart interfaces are an area where skipping the dive into the manual almost always helps, unless you are writing a charting library. Then you have to be more careful: there’s presumably something about it not yet invented, and its unorthodoxy will likely result in some unhelpful model output.

05.03.2026 00:16 👍 1 🔁 0 💬 1 📌 0

The hype people are setting the bar way too high and saying weird stuff, but automating away three days of drudgery per annum is a big deal.

04.03.2026 20:22 👍 1 🔁 0 💬 0 📌 0

I’m still harvesting slight year-over-year productivity gains from mobile broadband and wearables. If LLMs spare three days of drudgery a year for the next ten years, that’s a 1% increase in productivity from one family of techniques. That’s stellar. Yet so modest in any given year.

04.03.2026 20:19 👍 3 🔁 0 💬 2 📌 0
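A quick check of the "three days ≈ 1%" arithmetic; the 250 working days per year is my assumption, since the post doesn't state its divisor:

```python
# Sanity check on "three days of drudgery a year ≈ 1% productivity".
days_saved = 3
working_days = 250   # assumed; not stated in the post
fraction = days_saved / working_days
print(f"{fraction:.1%} of a working year")  # prints 1.2%
```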
Lucille Ball and the Mystery of Radio Teeth: A Dentist’s Perspective Did Lucille Ball really hear radio through her fillings? A dentist explains galvanic currents, bone conduction, and the science behind the famous myth.

www.medboundtimes.com/dentistry/lu...

04.03.2026 20:11 👍 2 🔁 0 💬 1 📌 0

The "median voter" theory of Democratic renewal really needs to have an account for what must be done about right wing propaganda's impact on that median voter. Via @brianbeutler.bsky.social:

www.offmessage.net/p/jasmine-cr...

04.03.2026 16:40 👍 303 🔁 116 💬 24 📌 4

kind of crazy that if you have enough money and don't like what you see in the media, you can just buy up every film studio, news station, and social media app and change it

04.03.2026 02:51 👍 16509 🔁 2660 💬 266 📌 135

“A rat’s anus?”

04.03.2026 01:50 👍 12 🔁 0 💬 1 📌 0

I mean, I’m using Claude code too and it’s very clearly like that!

03.03.2026 22:54 👍 1 🔁 0 💬 0 📌 0

My thought is that accumulation of productivity will be gradual: fewer design errors, longer-lasting designs from deeper research. But programming interfaces exposed to the greater public will change and become more programmable, since a critical thinker without time for the manual can compose what they want.

03.03.2026 16:56 👍 1 🔁 0 💬 0 📌 0

When it comes to programs with long continuity and high levels of assurance, I’ve found AI more useful for prototyping. That’s still a significant research function: studying several bigger, more complete prototypes to assess impact in the materialized text.

03.03.2026 15:15 👍 1 🔁 0 💬 1 📌 0

This is not a production program, stakes are low, but it allows us to more thoroughly research our problems. We can move with a greater volume of data and research.

03.03.2026 15:15 👍 1 🔁 0 💬 1 📌 0

That’s true for us too, though: we have many adjacent areas of software engineering where diving into the manual dominates. Even some that exist in our area of specialty, for example whipping up a quick script to navigate /proc on Linux and sum some counters.

03.03.2026 15:11 👍 1 🔁 0 💬 1 📌 0
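A minimal sketch of the kind of throwaway /proc script meant here. The choice of counters (summing the jiffy columns of the per-CPU lines in /proc/stat) and the sample text are my own; the post doesn't specify which counters. On a real Linux box you'd read open("/proc/stat") instead of the embedded sample:

```python
# Toy /proc reader: sum the jiffy columns of each cpuN line out of a
# /proc/stat-shaped dump. SAMPLE is fabricated for illustration.
SAMPLE = """\
cpu  10132153 290696 3084719 46828483 16683 0 25195 0 0 0
cpu0 1393280 32966 572056 13343292 6130 0 17875 0 0 0
cpu1 1340122 25410 491542 13230843 4504 0 2325 0 0 0
ctxt 1990885082
btime 1418183276
"""

def cpu_jiffy_totals(stat_text: str) -> dict[str, int]:
    """Sum the numeric columns for each per-CPU ("cpuN") line."""
    totals = {}
    for line in stat_text.splitlines():
        fields = line.split()
        # Skip the aggregate "cpu" line and non-CPU lines like "ctxt".
        if fields and fields[0].startswith("cpu") and fields[0] != "cpu":
            totals[fields[0]] = sum(int(f) for f in fields[1:])
    return totals

totals = cpu_jiffy_totals(SAMPLE)
print(totals)
```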

Yeah I saw it as it went by but didn’t zoom in on the kv cache part.

03.03.2026 06:02 👍 1 🔁 0 💬 0 📌 0

Now that’s a cool experiment. Any idea how it’s able to do that without materializing tokens first to realize it?

03.03.2026 05:25 👍 4 🔁 0 💬 1 📌 0

You can even approach this from an information-theoretical point of view: the largest open-weight models are about a terabyte; Wikipedia is maybe 150 GB. No wonder it's so good at knowing most of the spells in Harry Potter, you know?

03.03.2026 05:04 👍 0 🔁 0 💬 0 📌 0
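The sizes above, run through the obvious division; both figures are the post's own ballparks:

```python
# Rough size comparison from the post's ballpark figures.
model_bytes = 1.0e12       # "about a terabyte" of open weights
wikipedia_bytes = 150.0e9  # "maybe 150GB" for Wikipedia
ratio = model_bytes / wikipedia_bytes
print(f"model is ~{ratio:.1f}x the Wikipedia figure")  # ~6.7x
```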

Anyway, I do not use the Wikipedia analogy completely lightly: it’s a non-consensual version, where my works were basically turned into statistical slaw without my permission and occupy some space. But it’s effectively crowdsourced, a bit like Wikipedia.

03.03.2026 04:55 👍 1 🔁 0 💬 1 📌 0

Other fun tricks: getting better results by having it draw a diagram in ASCII first, where you suspect the translation to ASCII will prove accurate, and then it can easily spot the problem. There are many variants of this, for when you suspect an intermediate representation will help it enter the right space.

03.03.2026 04:53 👍 1 🔁 0 💬 1 📌 0

…but if you mention “statistics” or “regression” it’ll start making a python program with regurgitated scipy to whip up some regressions. Neat! But you had to suspect it would not find the right latent space for the job.

03.03.2026 04:45 👍 2 🔁 0 💬 1 📌 0

Other times, unless you are strategic, it’ll fail to connect disciplines. If you ask “do you think X about this data” having not put the word “statistics” at any point in the debug dump, it’ll just kind of vibe some words out.

03.03.2026 04:45 👍 1 🔁 0 💬 1 📌 0

I was just debugging something today that yielded a number of preposterous suggestions that are clearly something like regression to the mean / attraction to the training set. That’s okay; it’s just very obvious when it’s happening, and when to ignore it or what to put in to correct it.

03.03.2026 04:39 👍 1 🔁 0 💬 1 📌 0

Well, I also do much of my coding (you could call it keyboarding) by AI, but it’s absolutely routine that I find a number of critical defects in it. I know to look for them because they’re a common type of defect. Try getting an AI to one-shot a concurrent algorithm that isn’t straight from a textbook.

03.03.2026 04:37 👍 5 🔁 0 💬 1 📌 0

I’m a little puzzled by this. I’m not a social scientist; I write software. LLMs are great, I use them a lot, but it’s very clearly wandering around in its Wikipedia-like latent space. This falls far short of being a good idea to deploy to production. Are our arts so different?

03.03.2026 04:29 👍 21 🔁 0 💬 3 📌 0

“We’re on a mission from God”

02.03.2026 13:09 👍 2 🔁 0 💬 1 📌 0