spambots set reply root correctly challenge 2026 [hard]
I still can't get over it (I was somewhere over to the left of frame)
github copilot is the perfect tool for when you've run out of tokens in claude code
I think this aligns with my current take, which is that claude is "just" a text editor. it's most powerful when you're still the one in the driver's seat.
I think I'm more of a language-thinker now than I used to be
begone, bot
the paperclip maximizer problem feels like a pointlessly abstract hypothetical, until one day it doesn't
I don't know if an AI can ever be meaningfully sentient, but it can certainly convince a subset of humans that it is
sam altman on twitter: i expect ai to be capable of superhuman persuasion well before it is superhuman at general intelligence, which may lead to some very strange outcomes
I think about this one a lot.
unless... you stuff a sha256 merkle tree near the end of the file
the main drawback with this is that it's only truly auth'd if you watch from start to finish; if you seek to the middle then you can't trust it as much
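a minimal sketch of the idea (chunk boundaries, file layout, and the 4 KiB chunk size are all made up for illustration): hash the file in fixed-size chunks, fold the hashes into a Merkle root, and embed that root near the end. A verifier holding the root can then check any chunk against a log-sized proof path instead of rehashing the whole stream.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(chunks: list[bytes]) -> bytes:
    """Fold a list of chunks up to a single root hash."""
    level = [sha256(c) for c in chunks]
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# pretend this is a video file: split into fixed-size chunks,
# compute the root, and stuff the root near the end of the file
video = b"example-video-bytes" * 1000
chunks = [video[i:i + 4096] for i in range(0, len(video), 4096)]
root = merkle_root(chunks)
```

(the seek-to-the-middle caveat shows up here too: verifying one mid-file chunk needs the root plus a proof path, which you only have once you've fetched the end of the file)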
but have you thought of *this*
A18 Pro only has 2 P-cores
re: JSON-LD, I guess you're referring to DID docs? In practice the atproto ecosystem treats them as plain JSON with no regard for the LD semantics, and there's a (slow) movement towards explicitly using application/did+json as opposed to application/did+ld+json
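"treating them as plain JSON" looks roughly like this sketch (the document below is shaped like a did:plc doc, but every field value is invented for illustration): parse it with an ordinary JSON parser and index fields by name, ignoring the `@context` / JSON-LD expansion machinery entirely.

```python
import json

# a minimal DID-document-shaped blob; values are hypothetical
doc_json = """
{
  "@context": ["https://www.w3.org/ns/did/v1"],
  "id": "did:plc:exampleexampleexample123",
  "alsoKnownAs": ["at://example.bsky.social"],
  "service": [
    {
      "id": "#atproto_pds",
      "type": "AtprotoPersonalDataServer",
      "serviceEndpoint": "https://pds.example.com"
    }
  ]
}
"""

# plain-JSON handling: @context is just another key we never look at
doc = json.loads(doc_json)
pds = next(s["serviceEndpoint"] for s in doc["service"]
           if s["type"] == "AtprotoPersonalDataServer")
```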
the xeon has significantly more cores
when you boot to the desktop it barks at you (genuinely)
> the DRM injection process is modifying your binary in the same way a virus might do
fun sentence
actually maybe it's less of a solved problem than I thought heh wiki.debian.org/Reproducible...
> Reproducible builds of Debian as a whole is still not a reality
reproducible builds were hard for a long time too, until the engineering effort was spent to make it easier
heh, funny timing. I think it's misleading to say an LLM *is* a compiler without further elaboration, but there are certainly parallels to be drawn.
There are a bunch of practical reasons why determinism isn't the default, but it's pretty easy to achieve if you're doing local inference on a CPU
There are countless examples of people being bitten by spec-correct but nonetheless unexpected compiler behaviours
Compilers frequently do things that are hard to predict, including things that are radically form-altering like TCO.
Deterministic LLM inference is totally possible fwiw
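a toy numpy sketch of the decoding-side half of this (the logits are made up; in a real stack the remaining nondeterminism usually comes from parallel floating-point reduction order, which is part of why serial CPU inference is easier to pin down): greedy decoding is a pure argmax, and even sampling is reproducible once the RNG is seeded.

```python
import numpy as np

def sample_token(logits: np.ndarray, temperature: float,
                 rng: np.random.Generator) -> int:
    """Pick the next token; temperature 0 means greedy argmax."""
    if temperature == 0.0:
        return int(np.argmax(logits))  # fully deterministic
    probs = np.exp(logits / temperature)
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))

logits = np.array([0.1, 2.5, -1.0, 0.7])

# greedy decoding picks the same token on every run
greedy = [sample_token(logits, 0.0, np.random.default_rng())
          for _ in range(5)]

# even with sampling, a fixed seed reproduces the same choice
seeded = [sample_token(logits, 1.0, np.random.default_rng(42))
          for _ in range(5)]
```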
it should still not look like that
btw this layout looks very broken, that's not how an MST looks
alternatively,
something something use a small last-gen model to write the code so you can use a large current-gen model to debug it
ebay
it's kinda paradoxical though because the moment you start "dumbing things down" you make it harder to truly understand. how do you simplify something without making it less powerful, or more opaque?