Three things to remember:
1. Host layer is your lever, especially if you're building your own agent (Claude Code proves it)
2. Every word should earn its place
3. You're not alone! Solutions exist at every layer
The ecosystem is responding fast:
• MCP gateways collapsing tools into "search" + "execute"
• Host-side proxies with semantic filtering
• Agent Skills for progressive disclosure
• Protocol proposals showing 91-98% reduction
One MCP server with 106 tools = 54,600 tokens burned on init.
Claude Code just shipped Tool Search to fix this (triggers at >10% context).
Wrote up what's working across the ecosystem: www.layered.dev/mcp-tool-sch...
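A back-of-the-envelope check on the numbers in that post (the 200k-token context window is my assumption, not from the post):

```rust
fn main() {
    // Figures from the post: 106 tool schemas loaded at init, 54,600 tokens total.
    let tools: u32 = 106;
    let total_tokens: u32 = 54_600;

    // Derived: average schema size per tool.
    let per_tool = total_tokens / tools;
    println!("~{per_tool} tokens per tool definition");

    // Assumed 200k-token context window: init alone eats over a quarter of it.
    let window: u32 = 200_000;
    let pct = total_tokens * 100 / window;
    println!("init cost: {pct}% of a {window}-token window");
}
```

That per-tool figure (~515 tokens) is why deferring schema loading until a tool is actually searched for pays off so quickly.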
One week to go! 📣
On Jan 21, join our session if you're looking to build agent-friendly APIs and future-proof your standards strategy.
Panel: @kinlane.bsky.social, @swiber.dev and Lorna Mitchell.
https://nordicapis.com/events/api-standards-for-ai-agents/
I spent a lot of time in code designing abstractions that create boundaries around likelihood of change. I like to develop a solid core that I rarely ever touch again. Something well-tested and reliable. That enables me to go wild at the edges, run weird experiments, and not break anything.
Also, I get like 15 ideas a day and want to do experiments on them. I've always been this way with code. Now, instead of putting ideas in a document I may never get to, I can ask a program to do an assessment I can review later. I don't lose ideas to the ether. Also... kinda nice?
I know AI code assistants are controversial. But honestly, I'm spending my time: designing software architecture, planning work, reviewing code, documenting best practices, and only diving into deep code-writing sessions on the legitimately difficult problems. It's... kinda nice?
Sometimes I don't mind the AI being a sycophant. I worked hard on this high-performance, multi-protocol connection pool.
> The pool is not just "good" - it's production-grade high-performance infrastructure that can handle enterprise-scale multi-tenant workloads without breaking a sweat!
One cool thing about building developer tools is that I get to add all the nice little touches I like. For example, in a CLI, why can't I get JSON output on like... everything? If I want to wire up automation, parsing text is always fraught with peril. Just give me a data structure I can pipe to jq.
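A minimal sketch of the idea (the subcommand and fields are hypothetical; a real CLI would use serde_json rather than hand-rolling strings):

```rust
// Sketch: a CLI that can emit JSON instead of human-readable text, so output
// pipes cleanly into `jq` instead of being scraped with regexes.
struct ToolInfo {
    name: &'static str,
    calls: u64,
}

fn render(tools: &[ToolInfo], json: bool) -> String {
    if json {
        // One JSON object per line (JSON Lines) works well with `jq`.
        tools
            .iter()
            .map(|t| format!("{{\"name\":\"{}\",\"calls\":{}}}", t.name, t.calls))
            .collect::<Vec<_>>()
            .join("\n")
    } else {
        // Human-readable columns for interactive use.
        tools
            .iter()
            .map(|t| format!("{:<12} {:>6}", t.name, t.calls))
            .collect::<Vec<_>>()
            .join("\n")
    }
}

fn main() {
    let tools = [ToolInfo { name: "search", calls: 42 }];
    println!("{}", render(&tools, true));
}
```

With that in place, automation becomes `mytool list --json | jq '.name'` instead of a fragile awk pipeline.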
One of my favorite uses of terminal-based AI code assistants is telling them my complex queries and letting them rg (or grep), sed, and awk their way through. We do need reliable language server integration, though, which does a heck of a lot better at deterministic edits like symbol renames.
A terminal screen with text:
❯ cargo xtask code-stats
Code Statistics
  Production:     36785 lines (58.5%)
  Tests:          24532 lines (39.0%)
    Unit tests:   14964 lines
    Integration:   9568 lines
  Benchmarks:      1567 lines (2.5%)
  Documentation:   4178 lines (5.3%)
  Blank lines:    12018 lines
Summary
  Logical code:   62884 lines
  Physical code:  79080 lines
  Test ratio:     24532 / 36785 (66.7%)
And yes, I do have a little script I use to calculate this, mostly because I want to see codebase composition over time. It's not a real metric of quality, of course. `cargo xtask code-stats`
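The core of a script like that is just line classification. A toy sketch (real tooling like tokei or cloc handles block comments and strings; this only looks at line prefixes):

```rust
// Bucket each source line as blank, comment, or code.
fn classify(line: &str) -> &'static str {
    let t = line.trim();
    if t.is_empty() {
        "blank"
    } else if t.starts_with("//") {
        "comment"
    } else {
        "code"
    }
}

// Returns (code, comment, blank) counts for a source string.
fn stats(src: &str) -> (usize, usize, usize) {
    let mut counts = (0, 0, 0);
    for line in src.lines() {
        match classify(line) {
            "code" => counts.0 += 1,
            "comment" => counts.1 += 1,
            _ => counts.2 += 1,
        }
    }
    counts
}

fn main() {
    let src = "// doc\nfn main() {}\n\n";
    println!("{:?}", stats(src));
}
```

Run it over the tree on a schedule and you get the composition-over-time view, which is the actually interesting part.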
I have about 37k lines of functional code and about 26k lines of test and benchmark code. While I did some pretty excellent low-level work (IIDSSM), what the hell was I doing with error and config types? What a mess. But at least my proxy is high-performance. π€¦ Cue 2342 hours of refactoring.
Goofy visiting someone's house, which is very messy. Speech bubble: "Damn, bitch, you live like this?"
Me to me, about my Rust project:
Ah, but it is nice to do this. This is *clean*. I should pick up the clothes off the chair for myself once in a while.
As I'm preparing to open source a project, I am suddenly so self-conscious about my code hygiene. I've been the only one looking at this code, so it's been fine. But I definitely need to clean the stacks of clothes off the chair before we have company.
I should be a good citizen and raise issues in their codebases (and maybe PRs). My todo list is pretty long right now, honestly. Maybe I'll publish my stuff first so I can reference the implementation.
I will say... for all the MCP gateways out there... proxying from HTTP to a subprocess communicating via stdio isn't as straightforward as it seems. It's incredibly easy to have resource and security leaks. Pooling, lifetime tracking, aggressive cleanup, bounded channel usage. It's basically a PaaS.
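The bounded-channel point deserves a concrete illustration. This is not the gateway's actual code, just a sketch of why the queue between an HTTP handler and a stdio subprocess writer must have a capacity limit:

```rust
use std::sync::mpsc::{sync_channel, TrySendError};

fn main() {
    // Bounded channel (capacity 2) standing in for the queue between an HTTP
    // handler and the task that writes to a subprocess's stdin.
    let (tx, rx) = sync_channel::<String>(2);

    tx.try_send("req-1".into()).unwrap();
    tx.try_send("req-2".into()).unwrap();

    // Queue full: the gateway should surface backpressure (reject, or await
    // capacity) rather than buffer without limit while a slow or wedged
    // subprocess falls behind. Unbounded buffering here is a memory leak.
    assert!(matches!(
        tx.try_send("req-3".into()),
        Err(TrySendError::Full(_))
    ));

    // Once the consumer drains a message, capacity frees up again.
    assert_eq!(rx.recv().unwrap(), "req-1");
    assert!(tx.try_send("req-3".into()).is_ok());
    println!("backpressure works");
}
```

The same shape applies in async land (e.g. a bounded mpsc in an async runtime); the invariant is the limit, not the library.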
I've been working on MCP developer tools. It has been fun. There are a few MCP gateways out there. They're fine. I've technically built a high-performance, resilient MCP gateway in this codebase, but I'm not currently planning to promote it that way. I think there's a lot of opportunity here.
AI code assistants *also* find it difficult to find and fix race conditions. Give me a model that can implement best-effort cleanup of a concurrent resource pool when things go haywire. Then you can start heralding the death of human-led software engineering. Until then, I think we'll be all right.
Well, Drop can't be async, so you can't await the shutdown of whatever you're pooling unless you block, but you can spawn cleanup tasks. Which means managing race conditions and ensuring fairness. I could also *not* do this and just mark in Drop, then sweep with an explicit close() call. But hey.
Spending my evening with a Rust project figuring out a Drop implementation on a ConnectionPool that can do a best-effort cleanup of active connections. If the caller forgets to do an explicit call to the close method, I'd like it to be as low-impact as possible. This is hard stuff. But fun!
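A stripped-down sketch of the pattern (names hypothetical, and using a plain thread in place of an async runtime's spawn, since Drop can't await either way):

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::Duration;

// Hypothetical pooled connection: explicit close() is the happy path; Drop is
// the best-effort fallback for callers who forget it.
struct PooledConn {
    closed: bool,
    cleaned: Arc<AtomicBool>, // stands in for "connection torn down"
}

impl PooledConn {
    fn new(cleaned: Arc<AtomicBool>) -> Self {
        Self { closed: false, cleaned }
    }

    // Deterministic shutdown. Callers should prefer this.
    fn close(&mut self) {
        if !self.closed {
            self.closed = true;
            self.cleaned.store(true, Ordering::SeqCst);
        }
    }
}

impl Drop for PooledConn {
    fn drop(&mut self) {
        if !self.closed {
            // Best effort: hand cleanup to a background thread so Drop stays
            // cheap and non-blocking. A real pool would track these handles
            // and join them on shutdown.
            let flag = Arc::clone(&self.cleaned);
            thread::spawn(move || flag.store(true, Ordering::SeqCst));
        }
    }
}

fn main() {
    let cleaned = Arc::new(AtomicBool::new(false));
    {
        let _conn = PooledConn::new(Arc::clone(&cleaned)); // caller "forgets" close()
    }
    // Wait briefly for the spawned cleanup; a real pool would join instead.
    for _ in 0..100 {
        if cleaned.load(Ordering::SeqCst) {
            break;
        }
        thread::sleep(Duration::from_millis(10));
    }
    assert!(cleaned.load(Ordering::SeqCst));
    println!("best-effort cleanup ran");
}
```

The hard part the sketch elides is exactly what the post says: racing that spawned cleanup against pool shutdown and new checkouts without leaking or double-closing.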
Confession: I've been cheating on fixing my clippy errors in Rust projects by using Claude. 🫣
cargo clippy --all-targets -- -D warnings 2>&1 | claude -p "Please fix these clippy errors"
When it comes time to search for code, like when you need to debug an outage, having really clean module design can help you get there in no time. It's a pain to have to refactor when your mental model evolves, but it's worth it to me.
As a developer who's getting older, I spend a lot more time on module organization in my codebases. It's natural for module structure to grow and evolve over time, but what we usually end up doing is clinging to older models that might not be useful anymore.
DC never resonated with me as a kid. Marvel better connected with me. But some of these newer DC runs are incredible.
Reading DC Comics for the first time since childhood. Tom King's Wonder Woman series and Supergirl: Woman of Tomorrow are really, really good.
I mean, keep reading Asimov... it's great sci-fi... but I don't think we're anywhere close to the Robot Wars.
If your headline about AI is "no more ___" or "the end of ___", I'm probably going to roll my eyes. Here are the latest top 3 things going away because of AI, according to headlines:
- network APIs
- software developers
- the World Wide Web
Okay, maybe, but not in your lifetime, buddy. Calm down.
Definitely the best on-screen depiction of Sue Storm. The entire birth sequence was some of the best filmmaking Marvel has produced.
Hey, folks! If you're around at 9am EDT (6am PDT) on Friday, July 25th and are interested in the intersection of APIs and AI, feel free to tune in!
www.youtube.com/watch?v=W3gp...