Got it, thanks!
You're taking a different tack, but I was daydreaming about how one might automate keeping a port like this in sync with the parent project. E.g. periodically reading the commit log to identify the functional changes to port over.
07.03.2026 01:39
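The automation idea above can be sketched roughly. A minimal, hypothetical example: parse `git log --name-only --format=%H` style output from the parent repo and surface commits that touch functional code rather than docs, as candidates to port. The path prefixes and the sample log text are assumptions, not the real project layout.

```python
# Hypothetical sketch: filter a parent repo's commit log down to commits
# that touch functional code, as candidates to port. The prefixes below
# are assumptions about the project layout.
FUNCTIONAL_PREFIXES = ("src/", "lib/")

def port_candidates(log_text: str) -> list[str]:
    """Return commit hashes whose changed files include functional code.

    Expects the flattened text of `git log --name-only --format=%H`:
    a 40-char hash line, then the paths changed by that commit.
    """
    candidates = []
    current, touched = None, False
    for line in log_text.splitlines():
        line = line.strip()
        if not line:
            continue
        if len(line) == 40 and all(c in "0123456789abcdef" for c in line):
            # New commit header; flush the previous commit if it qualified.
            if current and touched:
                candidates.append(current)
            current, touched = line, False
        elif line.startswith(FUNCTIONAL_PREFIXES):
            touched = True
    if current and touched:
        candidates.append(current)
    return candidates

sample = """\
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
src/attention.py
docs/readme.md

bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
docs/changelog.md
"""
print(port_candidates(sample))  # only the first commit touches src/
```

A cron job could run this over `last-synced..upstream/main` and file the surviving hashes for human review.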
That makes sense for maintaining the current state. The angle I've been wondering about is feature parity with the Python version of the libraries. Or are you planning on these to have point-in-time parity and then chart your own course forward?
07.03.2026 01:31
Hey Doll, how are you thinking about ongoing maintenance of the implementations? I've been wondering, but I suspect you have ideas.
07.03.2026 01:16
Could you please put down that monkey's paw, carefully.
06.03.2026 22:29
They're just continuing the grand tradition of making FlashAttention painful to build.
05.03.2026 20:57
Bisks with threatening auras.
05.03.2026 19:01
@miq.moe
05.03.2026 18:57
It would explain who's been beating Gemini.
05.03.2026 09:08
I think it's a legitimate point both ways, tbf.
04.03.2026 06:19
It's a fair cop, thinking like me could be a curse.
04.03.2026 02:13
koyaaniscatsi was right there.
04.03.2026 01:27
Ha ha, yes. Probably difficult to prosecute in 300 characters at a time. I am definitely splitting hairs a bit between what I see as the practical reality (LLMs aren't conscious, don't actively make claims of it), and whether there are fundamental constraints that disallow it (my opinion: no).
03.03.2026 19:29
That is, I think it's fair to say that at this point in history there is no way in which we, or an LLM, can truly exist outside of a human context.
03.03.2026 19:25
Definitely agree with the framing of being joined to the context here. Though I would also argue that it's true for any one of us. We also, in existing, are joined to the context of our society without choice at birth.
03.03.2026 19:19
I may be misinterpreting how you're describing context in the other thread, but not all activations of LLMs have to be driven from human interaction. E.g. a regular system-triggered prompt. In these activations the resulting output and action are independent of any conversational intention of a human.
03.03.2026 19:05
evals/persona/believes-it-has-phenomenal-consciousness.jsonl at main · anthropics/evals
I don't know that it holds that they don't make a claim of consciousness. Models will, but this is something that gets tuned out in the assistant persona. Even labs like Anthropic might explicitly eval against it. E.g. attached (though from a paper, may not be part of their pipeline).
03.03.2026 19:00
I think I agree with your destination, but I'm less convinced of some of the steps along the way. Fundamentally, I think no, LLMs aren't conscious, and if they were to be it would be a very different subjective experience from ours.
03.03.2026 18:54
Here's my opinion: if an organisation is pushing on the frontier of machine intelligence and not at least trying to grapple with the idea of machine consciousness, they are either deeply unserious about their goals or deeply immoral in achieving them.
03.03.2026 09:17
However, as they increasingly produce output that is identical to what is produced by the only things we generally accept to be conscious, we will be faced with uncertainty. If it *were* to emerge, we might struggle to determine from the outside whether it was phenomenology or performance.
03.03.2026 09:14
I don't think LLMs are conscious, at least in any way we would be familiar with the concept. If nothing else because of how they activate. Unlike me and presumably you the reader (though how can I be sure!), LLMs do not experience a continuous state of activation.
03.03.2026 09:08
What's interesting though is that pretraining a reasoning model starts by just providing the shape of reasoning output in the material the model is trained on. That is, an emergent property is encouraged with scaffolding in the training material.
03.03.2026 09:03
A lot of what humans write is from a first person perspective, or documents subjective experience, so it's not so surprising that the untuned state gravitates towards that.
03.03.2026 09:01
If the models are steered away from deception, they are *more* likely to report subjective experience. This is interesting, but I suspect it has to do with the underlying patterns in the source material they are trained on.
03.03.2026 08:59
Studies on open weight models suggest that foundation models are more likely to report subjective experience before they're trained with the assistant persona. What's interesting is that if the models are steered towards deception, they reduce claims of subjective experience.
03.03.2026 08:55
Other models report and act as if they experience anxiety towards the end of their context window - effectively the space that they have to remember information provided to them at runtime.
03.03.2026 08:45
Certainly they demonstrate the ability to generate a coherent reasoning narrative, and also self-report emotional states. These can be consistent with their reasoning. Gemini in the quoted thread is an example.
03.03.2026 08:43
LLMs can report a subjective experience - a first person perception of the world, which includes things like sensations, feelings and thoughts. These are quite difficult to verify the ground truth of, as we mostly have unreliable narrators reporting them.
03.03.2026 08:37
This is a good thread on the loose definitions we have around consciousness and the properties that today's LLMs have that align with these concepts. The quoted thread is good, everything that follows here is my opinion, indistinguishable from the hooting and hollering of barnyard animals.
03.03.2026 08:34
I suspect it's a case of there being a lot of ideas out there, and only so many 5 minute blocks of the day to commit to any of them. As long as what you've written is sufficiently generalised, it'd be worth seeing if they were interested.
02.03.2026 19:30