Guardrails don’t fail in code—they fail in adoption: if people don’t use the governed workflow, the guardrails never fire. Reliability collapses back to whatever the probabilistic generator does that day—which, by definition, won’t always be what you want. (3/3)
20.01.2026 20:32
👍 0
🔁 0
💬 0
📌 0
LLMs are probabilistic generative engines that need deterministic guardrails (automated constraints plus selective human oversight) to function reliably. If the flow isn’t rethought, the “AI gains” are mostly illusion: higher throughput of the same waste, plus new failure modes. (2/3)
20.01.2026 20:32
👍 0
🔁 0
💬 1
📌 0
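A minimal sketch of what "deterministic guardrails around a probabilistic generator" could look like in practice. Everything here is illustrative: `generate` stands in for any LLM call, and the validator names and escalation string are made up for the example. The point is the shape: deterministic checks gate the output, with retries and a human-escalation path when the checks keep failing.

```python
import re

def guarded_generate(generate, prompt, validators, max_retries=2):
    """Wrap a probabilistic generator with deterministic checks.

    `generate` is any callable taking a prompt and returning text
    (a stand-in for an LLM call). `validators` is a list of
    (name, predicate) pairs; an output must pass all of them.
    Failing outputs are retried, then escalated to a human.
    """
    for _ in range(max_retries + 1):
        output = generate(prompt)
        failures = [name for name, ok in validators if not ok(output)]
        if not failures:
            return output, "accepted"
    # Guardrails fired every time: hand off to selective human oversight.
    return output, "escalate_to_human: " + ", ".join(failures)

# Example checks: non-empty answer, no leaked placeholder text.
validators = [
    ("non_empty", lambda s: bool(s.strip())),
    ("no_placeholder", lambda s: not re.search(r"\bTODO\b", s)),
]
```

The wrapper is only as good as its adoption: if callers bypass `guarded_generate` and call the model directly, none of the checks ever fire.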
Seen this movie before in ICT: automating the current mess just makes more mess faster. LLMs won’t improve processes by themselves, because they’re not digital employees: www.juicebox.com.au/insights/ai-... (1/3)
20.01.2026 20:32
👍 1
🔁 0
💬 1
📌 0
Summer was fun and inspiring... let's get that new academic year started! 💪
31.08.2025 19:24
👍 4
🔁 0
💬 0
📌 0
“Bluesky may support a more interpretive, reflective mode of science communication.”
“Interactions on Bluesky were an order of magnitude higher than on X.”
“Bluesky may well become the next X for scientific discussion and will persist in the long term.”
31.08.2025 12:29
👍 1
🔁 0
💬 0
📌 0
Research posts on Bluesky are more original — and get better engagement
Bluesky posts about science garner more likes and reposts than similar ones on X.
Research confirms: Bluesky is a better home for academia! Posts about science on Bluesky are more original and get significantly more engagement than on X, according to a large-scale analysis reported by Nature.
A promising shift for science communication! www.nature.com/articles/d41...
31.08.2025 12:26
👍 1
🔁 0
💬 1
📌 0
Compliance is no longer optional… and change is a certainty. Very nice summary of today’s cybersecurity reality.
30.08.2025 11:32
👍 3
🔁 0
💬 0
📌 0
I was putting together a PowerPoint slide with the headlines of major cyber incidents in our region from just the past two months, and I’ve already run out of space.
21.08.2025 15:15
👍 0
🔁 0
💬 0
📌 0
Apple Patches CVE-2025-43300 Zero-Day in iOS, iPadOS, and macOS Exploited in Targeted Attacks
Apple patches CVE-2025-43300 zero-day in iOS, iPadOS, and macOS after active exploitation reports.
This has been a wild summer. Now a zero-day on Apple OSes, CVSS score 7.8, "this issue may have been exploited in an extremely sophisticated attack against specific targeted individuals". Lovely.
thehackernews.com/2025/08/appl...
21.08.2025 15:13
👍 0
🔁 0
💬 1
📌 0
The criticism of Orange for downplaying the risks of its incident, as noted in the article, is striking - especially since Orange also sells cybersecurity services to others. Hard to square that with the transparency, containment and mitigation NIS2 is supposed to drive.
21.08.2025 11:39
👍 1
🔁 0
💬 1
📌 0
It's a stochastic parrot. But this may challenge us to rethink what understanding really implies. If a purely statistical model consistently produces coherent, useful, context-aware answers… at what point do we just treat that (at least functionally) as understanding? And does it not have value?
19.08.2025 17:55
👍 0
🔁 0
💬 0
📌 0
Surprise surprise, commercial generic models aren’t the definitive solution to all problems. Specialized, open, and well-tuned beats big and generic. Curious to see how this shift will play out at the edge...
16.08.2025 21:56
👍 0
🔁 0
💬 0
📌 0
Fact is, LLMs learn only by predicting the next word—yet (some) reasoning and abstraction emerge. Maybe that’s a clue: language isn’t just a tool for thought, but a big part of how we perceive (and structure) the world. Not the whole story, but maybe more central than we guessed?
12.08.2025 22:59
👍 1
🔁 0
💬 0
📌 0
I had the Courier in those days... still remember flashing it to support X2 and getting the amazing 56k throughput (which was "almost as much as a digital 64k line!"). With the iconic "oink-oink" handshake noises at the start of each session...
12.08.2025 19:19
👍 1
🔁 0
💬 1
📌 0
That may sound blunt, but given the ongoing instability, it’s clear MITRE’s reliability as a cornerstone partner is compromised. That doesn’t mean they’ve lost all relevance, but sole dependence on MITRE for stewardship of key frameworks has now become risky.
10.08.2025 13:37
👍 2
🔁 0
💬 0
📌 0
"MITRE just terminated senior leadership at the Center for Threat-Informed Defense in a cost cutting exercise. It appears, at this early stage, MITRE can no longer be relied upon to be the steward of the back bone frameworks for global threat-informed cyber security."
10.08.2025 13:27
👍 2
🔁 0
💬 3
📌 0
We're closer to anthropomorphic misinterpretations than to Rise of the Machines.
10.08.2025 13:08
👍 1
🔁 0
💬 0
📌 0
Carnegie-Mellon research seems to confirm: LLMs often fail at long-horizon reasoning, social interactions, and tasks requiring genuine understanding. They perform noticeably better when tasks align with their training data, not when abstract or novel reasoning is required. arxiv.org/pdf/2412.14161
10.08.2025 13:06
👍 0
🔁 0
💬 1
📌 0
To me, this reads like we’re recreating Atlas Shrugged, but with venture capital and blockchains. Now it’s upgraded with crypto, AGI, network states, and Substack manifestos.
08.08.2025 09:59
👍 1
🔁 0
💬 0
📌 0
Exciting to see PHP adding one of my favourite Clojure features: function piping/threading. A big step forward for anyone with a soft spot for functional programming! Now let's get this finalized in JavaScript as well 👍
08.08.2025 09:31
👍 1
🔁 0
💬 0
📌 0
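The piping/threading idea from the post can be sketched in a few lines. This is not the PHP proposal itself, just an illustrative stand-in: a `pipe` helper (a hypothetical name) that threads a value through a chain of functions left to right, conceptually like Clojure's `->` macro.

```python
from functools import reduce

def pipe(value, *fns):
    """Thread `value` through fns left to right, Clojure-`->` style."""
    return reduce(lambda acc, fn: fn(acc), fns, value)

# Each step's output becomes the next step's input:
words = pipe("  Piping In PHP  ", str.strip, str.lower, lambda s: s.split())
```

Reading the chain top to bottom beats nesting the calls inside out, which is the appeal of a native pipe operator.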
Updating Our Community on the Cyber Incident | Office of Public Affairs
Just a gentle reminder that universities - even top-tier ones with significant operating budgets - are not safe from cyber threats and incidents. If it seems that "nothing ever happens" in your academic environment, that's because people are constantly hard at work to avoid it. bit.ly/3HhpibK
08.08.2025 09:21
👍 0
🔁 0
💬 0
📌 0
"Quite simply, 'LLMs are doing reasoning' is the 'look, my dog is smiling' of technology." Provocative article that certainly puts the AI/LLM hype into some sobering perspective. Though IMHO even then the Stochastic Parrot can have some value - when recognized as such.
05.08.2025 21:16
👍 2
🔁 0
💬 1
📌 0
It does have one great feature though - it'll still have security patches after October 14, 2025.
23.07.2025 22:22
👍 0
🔁 0
💬 0
📌 0
Discomforting conclusion: the global incident around shocking zero-day CVE-2025-53770 that took everyone by surprise… wasn’t surprising at all. Just a remix of a familiar pattern. And more like it are coming? Not surprises, but inevitabilities.
23.07.2025 22:18
👍 0
🔁 0
💬 0
📌 0
Nothing new here. The real value of AI is in augmenting humans, not replacing them. LLMs require a human in the loop and verified tracebacks to be reliable — and that takes significant manual effort. No automated method today can guarantee accuracy in high-stakes use.
23.07.2025 20:47
👍 0
🔁 0
💬 1
📌 0
Language requirements at Universiteit Gent for ZAP members
It’s not so much a fit of madness as the application of the legal requirements in the Codex Hoger Onderwijs, under which the institution can also expect oversight, and nobody benefits from complications. See e.g. www.ugent.be/nl/jobs/taal...
23.07.2025 20:32
👍 0
🔁 0
💬 0
📌 0
Edge computing on steroids: LLMs run locally on regular laptops, thanks to the rise of NPU-accelerated PCs. With AI hardware baked into new chips, running models like LLaMA or Mistral offline will become a private, practical way to handle your data.
23.07.2025 14:41
👍 1
🔁 0
💬 0
📌 0
Education tops the list with 31% of its public-facing assets vulnerable. This inherently open, dynamic environment demands understanding of asset ownership, purpose, context, and attacker perspective to manage exposure. That’s exactly what #NIS2 is pushing for. Security starts with context.
23.07.2025 14:14
👍 2
🔁 0
💬 0
📌 0