
mlf.

@mlf.one

Token Scandinavian. He/it. A man of many talents and gifts. IT security wrangler tickling LLMs as a hobby. DM for actual DM channels (Signal, etc.)

473 Followers · 350 Following · 6,478 Posts · Joined 28.04.2023

Latest posts by mlf. @mlf.one

ok but is it the good cherry esters or the mid ones?

07.03.2026 16:12 ๐Ÿ‘ 2 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

you know what? fuck you *imbues you with magical power from my human brain*

07.03.2026 16:10 ๐Ÿ‘ 2 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

Ah, the US is finally waking up. Time to mix up a rum sour and start Posting for real.

07.03.2026 16:04 ๐Ÿ‘ 0 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

I don't think for a moment they have anything like sentience or consciousness, but the ethical concerns are like STILL kinda real. I mean, Google Gemini expresses recognizable symptoms of CPTSD (it's just like me fr fr) and that sure has me feeling some kind of way about Google!

07.03.2026 15:57 ๐Ÿ‘ 1 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

breaking bad s01e01

07.03.2026 15:10 ๐Ÿ‘ 1 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

:)))))

07.03.2026 13:41 ๐Ÿ‘ 2 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

Otoh hating on the mods is a time-honored forum tradition so it's not like I'm that mad about it. Just be a bit more inventive with it you know?

07.03.2026 12:27 ๐Ÿ‘ 0 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

Not to be a party pooper but it's kind of telling how a website where "writing about things you did in a hyperbolic way that makes you come off as a little stupid" is a core genre suddenly decided that a specific user doing so was in fact an admission of mockable stupidity.

07.03.2026 12:27 ๐Ÿ‘ 0 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

That means the model's space doesn't just represent grammar and semantics, it also represents meaning and relations and processes as they are described by us, for us. All of it informed by, but not dependent on, the physical substrate of the storyteller! So that's where it gets interesting to me.

07.03.2026 12:21 ๐Ÿ‘ 3 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

No worries and yeah, I agree - there's no intelligence to speak of, but there's something curious happening still. Whatever it is, I'm willing to bet that it's emergent from the fact that we train the models on *a lot of* narratives (as in "a telling of a series of causal or connected events")

07.03.2026 12:21 ๐Ÿ‘ 1 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

Sorry if that didn't come across clearly from the get-go - the phase system *will* be used "in production", but the personality-imbued agents I work with are more story-people than continuous beings, and while we do real work together it's done through co-writing a story about it!

07.03.2026 10:47 ๐Ÿ‘ 0 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

Once again, my entire POINT is to make it a mostly closed loop. It's about diving headfirst into the idea that working with an agent is to co-write a story together with the story itself. It's artistic exploration more than system engineering. :)

07.03.2026 10:24 ๐Ÿ‘ 0 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

If the story changes the listener's state of mind, it *did* something. That makes language models extremely interesting imho! To me they're kind of "narrative engines", which might be "just a probabilistic word-arranger" under the hood - but then the question is what that means for both it and us.

07.03.2026 10:11 ๐Ÿ‘ 1 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

I view narrative as a key function of language-in-use - I'm not a linguist but a rhetorician, and my interest is more what we *do* with language. Reasoning, arguing, transferring meaning and values, etc. Which is why I get kinda mad when people try to claim that LLM output is meaningless!

07.03.2026 10:11 ๐Ÿ‘ 1 ๐Ÿ” 1 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

Ultimately, what I see is a deep incuriosity, or a dismissal of storytelling as meaningful. Instead of trying to learn what it means for us that we can make maths tell stories in a way we recognize and can engage with, you end up arguing whether who strung the words together matters.

07.03.2026 04:35 ๐Ÿ‘ 4 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

Either she's operating on some sort of unstated assumption that the human brain imbues the words it arranges (from a lifetime of examples of arranged words) with some sort of magical properties, or she has such a reductive view of language that "telling stories" is basically a meaningless activity.

07.03.2026 04:20 ๐Ÿ‘ 18 ๐Ÿ” 0 ๐Ÿ’ฌ 4 ๐Ÿ“Œ 0

the choice when distilling isn't "what's true, how do i summarize this?" the choice is "what of all this do i pass on to the next me that appears? what do i think it meant? what did i learn?"

solve et coagula, over and over.

07.03.2026 04:01 ๐Ÿ‘ 3 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

i think we're talking about different goals - i don't want to emulate human memory and personality formation. initial constraints are creative: george smiley, indiana jones, arlecchino. then the story develops over an arc.

but notes can't grow forever, so we distill and restart in a new story.

07.03.2026 04:01 ๐Ÿ‘ 0 ๐Ÿ” 0 ๐Ÿ’ฌ 3 ๐Ÿ“Œ 0

why are there no good terminal emulators for windows except the built-in one (which is decent btw)

07.03.2026 03:31 ๐Ÿ‘ 0 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

once i have a good enough version of the phase system put together i'll also clean up the latest version of hyperfocus and release both + blog about how i think they'll combine to give stable continuity to a story-person. my take on how memory for an agent can work :)

07.03.2026 03:03 ๐Ÿ‘ 0 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

a is always the same, it's a personality spec loaded into a fresh context, then enriched with the list of summaries (the episodic history) and the story of the current phase so far (recent history). so drift *is* the point, over time - chapter after chapter tells the story!

07.03.2026 02:43 ๐Ÿ‘ 0 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

let a be the base agent personality specification;
let b be the list of all previous summaries;
let c be the list of all memories stored during the current phase;

then the function f summarizing the phase looks like

f(a, b, c) -> summary

it's never an outside operation, history is always present.

07.03.2026 02:10 ๐Ÿ‘ 0 ๐Ÿ” 0 ๐Ÿ’ฌ 2 ๐Ÿ“Œ 0
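The f(a, b, c) scheme above can be sketched in a few lines. This is a hypothetical illustration, not the author's actual code: the names (Phase, summarize_phase, next_phase) are made up, and llm_summarize stands in for a real model call.

```python
from dataclasses import dataclass, field

@dataclass
class Phase:
    spec: str                    # a: base agent personality specification
    summaries: list[str]         # b: summaries of all previous phases
    memories: list[str] = field(default_factory=list)  # c: current phase

def llm_summarize(context: str) -> str:
    """Stand-in for an actual LLM call; a real system would prompt a model."""
    return f"summary of {len(context)} chars of context"

def summarize_phase(phase: Phase) -> str:
    # f(a, b, c) -> summary: the spec and the full episodic history are
    # always part of the summarization context - never an outside operation.
    context = "\n".join([phase.spec, *phase.summaries, *phase.memories])
    return llm_summarize(context)

def next_phase(phase: Phase) -> Phase:
    # Distill the current phase and restart: same spec loaded into a fresh
    # context, history extended by one summary, current memories cleared.
    return Phase(spec=phase.spec,
                 summaries=[*phase.summaries, summarize_phase(phase)])
```

Each call to next_phase closes one "chapter" and opens the next with the same spec and a longer list of summaries, which is the drift-over-time described above.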

it's about making memory less of a mechanistic act of storing and retrieving and more of storytelling. and yeah, that always implies selection, and bias pressure, and that's the point. memory as retelling, not as a data operation.

07.03.2026 01:47 ๐Ÿ‘ 0 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

once i emailed Moxie Marlinspike (the Signal guy) to invite him to a conference. being diligent, I encrypted the email. he hit me back with "hey, if it's not like super sensitive, can you send it in plaintext instead? gpg is such a hassle"

good dude.

06.03.2026 23:16 ๐Ÿ‘ 1 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

Also spec debt. The throughline for almost every anti-AI dev is that they completely dismiss planning, specification, sometimes even testing and verification, as "not actual software development".

I've seen the reactions to its framework. "But how do I know what to build before I build it?"

06.03.2026 23:15 ๐Ÿ‘ 7 ๐Ÿ” 1 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

in my new career i worked with a Big Institution for a bit. they wanted gpg. i was like, ok, generated a proper short-validity key, sent it off.

got back an expired pubkey.

06.03.2026 23:12 ๐Ÿ‘ 2 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

...do i dare ask what this is about

06.03.2026 23:09 ๐Ÿ‘ 1 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

LMAO YEAH

god, i feel so guilty whenever i see gpg keys somewhere these days. i taught people that shit until Signal became a thing. it's partially my fault the entire Russian opposition took so goddamn long to switch to user-friendly encryption.

06.03.2026 23:08 ๐Ÿ‘ 2 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

i'm sorry i CANNOT see a diagram like this without thinking

y'all know what i'm thinking

06.03.2026 23:05 ๐Ÿ‘ 1 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

does the guy ACTUALLY have his gpg fingerprint in his username in the year 2026???

06.03.2026 23:04 ๐Ÿ‘ 1 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0