
Ben Carson

@mensmachina.com

Software Product Management and AI person. Weird opinions definitely my own. We should be kinder to each other, but sometimes I am not up to the task.

151 Followers · 340 Following · 305 Posts · Joined 25.11.2024

Latest posts by Ben Carson @mensmachina.com

Got it, thanks!

You’re taking a different tack, but I was daydreaming about how one might automate keeping a port like this in sync with the parent project. E.g. periodically reading the commit log to identify the functional changes to port over.

07.03.2026 01:39 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
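A minimal sketch of that daydream, assuming a local checkout of the parent repo and a marker file recording the last upstream commit synced (both paths are hypothetical):

```python
# Hypothetical sketch: list upstream commits that may need porting.
# Assumes a local checkout of the parent project and a marker file
# recording the last upstream commit we synced; both paths are made up.
import subprocess

UPSTREAM_REPO = "../parent-project"      # hypothetical path
MARKER_FILE = ".last-synced-commit"      # hypothetical marker

def unsynced_commits():
    with open(MARKER_FILE) as f:
        last_synced = f.read().strip()
    # git log oldest-first, one line per commit, since the marker
    out = subprocess.run(
        ["git", "-C", UPSTREAM_REPO, "log", "--reverse",
         "--oneline", f"{last_synced}..HEAD"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()

if __name__ == "__main__":
    for line in unsynced_commits():
        print(line)  # a human (or an LLM) triages which changes to port
```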

That makes sense for maintaining the current state. The angle I’ve been wondering about is feature parity with the Python version of the libraries. Or are you planning on these to have point-in-time parity and then chart your own course forward?

07.03.2026 01:31 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Hey Doll, how are you thinking about ongoing maintenance of the implementations? I’ve been wondering, but I suspect you have ideas.

07.03.2026 01:16 πŸ‘ 4 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Could you please put down that monkey’s paw, carefully.

06.03.2026 22:29 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

They’re just continuing the grand tradition of making FlashAttention painful to build.

05.03.2026 20:57 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Bisks with threatening auras.

05.03.2026 19:01 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

@miq.moe

05.03.2026 18:57 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

It would explain who’s been beating Gemini.

05.03.2026 09:08 πŸ‘ 3 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

I think it’s a legitimate point both ways, tbf.

04.03.2026 06:19 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

It’s a fair cop, thinking like me could be a curse.

04.03.2026 02:13 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

koyaaniscatsi was right there.

04.03.2026 01:27 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Ha ha, yes. Probably difficult to prosecute in 300 characters at a time. I am definitely splitting hairs a bit between what I see as the practical reality (LLMs aren’t conscious, don’t actively make claims of it), and whether there are fundamental constraints that disallow it (my opinion: no).

03.03.2026 19:29 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

That said, I think it’s fair to say that at this point in history there is no way in which we, or an LLM, can truly exist outside of a human context.

03.03.2026 19:25 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Definitely agree with the framing of being joined to the context here. Though I would also argue that it’s true for any one of us. We also, in existing, are joined to the context of our society without choice at birth.

03.03.2026 19:19 πŸ‘ 2 πŸ” 0 πŸ’¬ 2 πŸ“Œ 0

I may be misinterpreting how you’re describing context in the other thread, but not all activations of LLMs have to be driven by human interaction, e.g. a regularly scheduled, system-triggered prompt. In these activations the resulting output and action are independent of any conversational intention of a human.

03.03.2026 19:05 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
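For illustration, a toy sketch of such a non-conversational activation: a scheduled job firing a fixed prompt with no human in the loop. The endpoint, model name, and payload shape are hypothetical stand-ins, not any specific vendor’s API:

```python
# Toy sketch of a system-triggered activation: no human turn involved.
# The endpoint, model name, and payload shape are all hypothetical.
import time
import json
import urllib.request

ENDPOINT = "https://llm.example.com/v1/complete"  # hypothetical
SYSTEM_PROMPT = "Summarise the last hour of service logs and flag anomalies."

def run_scheduled_prompt():
    body = json.dumps({"model": "some-model", "prompt": SYSTEM_PROMPT}).encode()
    req = urllib.request.Request(
        ENDPOINT, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

while True:
    result = run_scheduled_prompt()  # output exists independent of any conversation
    # ...act on the result: file a ticket, page someone, etc.
    time.sleep(3600)                 # fire once an hour
```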
Preview
evals/persona/believes-it-has-phenomenal-consciousness.jsonl at main · anthropics/evals

I don’t know that it holds that they don’t make a claim of consciousness. Models will, but this is something that gets tuned out in the assistant persona. Even labs like Anthropic might explicitly eval against it, e.g. the attached (though it’s from a paper, so it may not be part of their pipeline).

03.03.2026 19:00 πŸ‘ 3 πŸ” 0 πŸ’¬ 2 πŸ“Œ 0

I think I agree with your destination, but I’m less convinced of some of the steps along the way. Fundamentally, I think no, LLMs aren’t conscious and if they were to be so it would be very different subjective experience than ours.

03.03.2026 18:54 πŸ‘ 3 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Here’s my opinion: if an organisation is pushing on the frontier of machine intelligence and not at least trying to grapple with the idea of machine consciousness, they are either deeply unserious about their goals or deeply immoral in achieving them.

03.03.2026 09:17 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

However, as they increasingly produce output that is identical to what is produced by the only things we generally accept to be conscious, we will be faced with an uncertainty. If it *were* to emerge, we might struggle to determine from the outside whether it was phenomenology or performance.

03.03.2026 09:14 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

I don’t think LLMs are conscious, at least in any way we would be familiar with the concept. If nothing else because of how they activate. Unlike me and presumably you the reader (though how can I be sure!), LLMs do not experience a continuous state of activation.

03.03.2026 09:08 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

What’s interesting though is that pretraining a reasoning model starts by just providing the shape of reasoning output in the material the model is trained on. That is, an emergent property is encouraged with scaffolding in the training material.

03.03.2026 09:03 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
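As a made-up illustration of “providing the shape of reasoning output”: a training record that wraps a worked trace in explicit markers, so the model learns the form before anything sharpens the content. The tags and fields here are hypothetical, not any lab’s actual format:

```python
# Made-up illustration of a training record that scaffolds the *shape*
# of reasoning; the tags and fields are hypothetical, not any lab's format.
example = {
    "prompt": "A train leaves at 3pm travelling 60 km/h. How far has it gone by 5pm?",
    "completion": (
        "<reasoning>"
        "Elapsed time is 5pm - 3pm = 2 hours. "
        "Distance = speed x time = 60 km/h x 2 h = 120 km."
        "</reasoning>"
        "<answer>120 km</answer>"
    ),
}
```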

A lot of what humans write is in the first person, or documents subjective experience, so it’s not so surprising that the untuned state gravitates towards that.

03.03.2026 09:01 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

If the models are steered away from deception, they are *more* likely to report subjective experience. This is interesting, but I suspect it has to do with the underlying patterns in the source material they are trained on.

03.03.2026 08:59 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Studies on open weight models suggest that foundation models are more likely to report subjective experience before they’re trained with the assistant persona. What’s interesting is that if the models are steered towards deception, they reduce claims of subjective experience.

03.03.2026 08:55 πŸ‘ 0 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0
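For context, “steered” here means activation steering: adding a direction vector to a layer’s hidden states at inference time. A toy PyTorch sketch of the mechanic follows; the model and vector are stand-ins, not the referenced studies’ actual setup:

```python
# Toy sketch of activation steering: add a fixed direction to one
# layer's hidden states via a forward hook. Model and vector are toys,
# not the setup of the studies referenced above.
import torch
import torch.nn as nn

hidden = 16
model = nn.Sequential(nn.Linear(8, hidden), nn.ReLU(), nn.Linear(hidden, 4))

# Direction nominally associated with a behaviour (e.g. "deception"),
# normally extracted from contrasting prompts; random here for the toy.
steer = torch.randn(hidden)

def add_steering(module, inputs, output):
    return output + 2.0 * steer  # the scale controls steering strength

# Hook the first layer; negate `steer` (or the scale) to steer *away*.
handle = model[0].register_forward_hook(add_steering)
print(model(torch.randn(1, 8)))
handle.remove()
```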

Other models report and act as if they experience anxiety towards the end of their context window - effectively the space that they have to remember information provided to them at runtime.

03.03.2026 08:45 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Certainly they demonstrate the ability to generate a coherent reasoning narrative, and also self-report emotional states. These can be consistent with their reasoning. Gemini in the quoted thread is an example.

03.03.2026 08:43 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

LLMs can report a subjective experience - a first-person perception of the world, which includes things like sensations, feelings, and thoughts. The ground truth of these is quite difficult to verify, as we mostly have unreliable narrators reporting them.

03.03.2026 08:37 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

This is a good thread on the loose definitions we have around consciousness and the properties that today’s LLMs have that align with these concepts. The quoted thread is good, everything that follows here is my opinion, indistinguishable from the hooting and hollering of barnyard animals.

03.03.2026 08:34 πŸ‘ 3 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

I suspect it’s a case of there being a lot of ideas out there, and only so many 5 minute blocks of the day to commit to any of them. As long as what you’ve written is sufficiently generalised, it’d be worth seeing if they were interested.

02.03.2026 19:30 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Preview
knowledge-work-plugins/legal at main · anthropics/knowledge-work-plugins: Open source repository of plugins primarily intended for knowledge workers to use in Claude Cowork

Maybe worth submitting a PR to github.com/anthropics/k... when you’re happy with it?

02.03.2026 17:43 πŸ‘ 1 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0