Comprehensive context creates maintenance burden. Staleness creeps in, accuracy drops, no one owns it. Curated context can be kept current, validated, owned. IDP is your friend here.
Teams ask "how do I give Claude context?" Instinct: dump repos, link all Confluence, create massive CLAUDE.md. Wrong. More comprehensive ≠ better. Quality over quantity.
Set up Skyscanner Claude Plugin Marketplace - central repo with CODEOWNERS, provisioned via managed Claude config. The place is absolutely buzzing!
Infrastructure work isn't flashy: building knowledge bases, auto-generating docs with quality gates, creating skills. It isn't free either. But that's what makes coding agents work in the enterprise.
Layering LLM on LLM to compensate for missing context degrades output. Text gets duller. The breakthrough wasn't better agents - it was simpler agents with better context.
Built "bee" - declarative YAML layer on AWS Strands. Define agents, tools, prompts, workflows in YAML. Boilerplate dropped ~70%.
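For flavour, here's roughly what a declarative agent definition could look like - field names here are hypothetical, not bee's actual schema:

```yaml
# Illustrative sketch only - bee's real schema may differ.
agents:
  - name: onboarding-guide
    model: claude-sonnet
    prompt: |
      Guide new engineers through the codebase. Do not generate code.
    tools:
      - confluence-search
      - github-read

workflows:
  - name: onboard
    steps:
      - agent: onboarding-guide
        input: "{{ticket}}"
```

The point isn't the exact fields - it's that agents become reviewable config instead of hand-rolled boilerplate.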
Debugging speed improved measurably once context existed. MTTR dropped. Claude could query actual architectural decisions, ownership chains. Context infrastructure shows up in metrics.
A team built an onboarding agent that doesn't generate code - it guides learning. MCPs pull context from Confluence, GitHub, Jira. Generates bespoke onboarding plan. Simple AND practical.
We seconded a junior engineer to a new team. With curated docs in the knowledge graph, they ramped up fast - even in an unfamiliar language. Curation over comprehensiveness works.
The echo chamber risk: AI generates docs → docs feed AI → AI generates more docs. Without human validation, you get AI slop multiplying. Design for humans-in-the-loop from day one.
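A minimal sketch of what that gate can mean in practice - the types and rule are made up for illustration, not our actual pipeline: AI-generated docs simply don't enter the knowledge base until a named human signs off.

```go
package main

import "fmt"

// Doc is a knowledge-base entry; fields are illustrative.
type Doc struct {
	Title       string
	AIGenerated bool
	ApprovedBy  string // empty until a human signs off
}

// Ingestable enforces the human-in-the-loop gate: AI-generated
// docs only enter the knowledge base after a human approves them.
func Ingestable(d Doc) bool {
	if d.AIGenerated && d.ApprovedBy == "" {
		return false
	}
	return true
}

func main() {
	docs := []Doc{
		{Title: "payments runbook", AIGenerated: true},
		{Title: "payments runbook", AIGenerated: true, ApprovedBy: "alice"},
	}
	for _, d := range docs {
		fmt.Println(d.Title, Ingestable(d))
	}
}
```

One boolean on the write path is enough to break the AI→docs→AI loop.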
Curation beats comprehensiveness. We give Claude 50 standards that matter (curated, owned, current) not 5,000 Confluence pages (stale, unowned, noisy). Signal over noise wins every time.
We rolled out Claude Code to hundreds of engineers. Output quality varies wildly. Same model, different results. The differentiator isn't the tool - it's how well we give it Skyscanner's knowledge.
Curated knowledge beats comprehensive. Don't ingest everything. Tie docs to your IDP (e.g. Backstage). Quality over quantity. Ownership enables accountability.
Your competitive advantage in AI: not which model you use, but how effectively you feed any model your org's context. It really is all about building the right infra, not flashy demos.
Context engineering has four dimensions (Curate, Persist, Isolate, Compress). Most teams only do Curate (stuff everything in prompts). The other three are why your context doesn't scale.
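To make the other three concrete, here's a toy sketch - names, thresholds, and structure are all made up, not Anthropic's framework code. Curate filters to what's relevant, Compress caps what survives, Isolate builds a per-task context instead of one shared blob (Persist would write it to a store between sessions, omitted here):

```go
package main

import (
	"fmt"
	"strings"
)

// Note is a unit of organizational knowledge (illustrative).
type Note struct {
	Topic string
	Body  string
}

// curate: keep only notes relevant to the task at hand.
func curate(notes []Note, topic string) []Note {
	var out []Note
	for _, n := range notes {
		if n.Topic == topic {
			out = append(out, n)
		}
	}
	return out
}

// compress: cap each note so the context window isn't flooded.
func compress(notes []Note, maxLen int) []Note {
	for i, n := range notes {
		if len(n.Body) > maxLen {
			notes[i].Body = n.Body[:maxLen] + "…"
		}
	}
	return notes
}

// buildContext isolates a per-task context from the shared pool.
func buildContext(notes []Note, topic string, maxLen int) string {
	sel := compress(curate(notes, topic), maxLen)
	var b strings.Builder
	for _, n := range sel {
		b.WriteString(n.Body + "\n")
	}
	return b.String()
}

func main() {
	notes := []Note{
		{"payments", "Use the ledger service for all balance writes because it is the source of truth"},
		{"search", "Rankers live in the search-core repo"},
	}
	fmt.Print(buildContext(notes, "payments", 40))
}
```

Stuffing everything into the prompt is only the first function of the four.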
Spec-Driven Development + context engineering is the closest thing to autonomy right now. Engineer prompts a spec, agent queries the knowledge base for standards, generates code/tests/docs, validates, human reviews. Then we track the number of human interventions.
In another 6 months, frontier models will shuffle again. What won't commoditise? Your standards, processes, patterns, accumulated decisions
I managed to exhaust limits on standard team subscriptions for Copilot, Claude and Cursor (all 3 of them!) for 2 months in a row 🫠
At Anthropic's Builder Summit, they introduced context engineering: Curate, Persist, Isolate, Compress. This isn't prompt engineering - it's infrastructure for feeding models organizational knowledge at scale.
Local-first architecture pattern: distribute knowledge as Docker images, teams run locally. No central service to maintain. No on-call burden
Built a cool Go pipeline for GraphRAG - Confluence/GitHub docs to Neo4j to an MCP server. The trick isn't just another KB though - it's the smart "pick only those docs" step in the middle, aka the governance.
Your engineers use the same models and agents yet get wildly different results. The differentiator isn't the model - it's how well you feed it your org's context
Four of those five weeks we spent in the US, traveling from DC to Orlando to Yosemite to San Francisco to New York and then home. It was our first time and the set of emotions and experiences was just unforgettable 😅
I’ve been working here for 5.5 years now and this summer, first time in my life, I had uninterrupted 5 weeks of rest. It was a great time to spend with the family and see the world.
I got my first part time job when I was 16. It was the rise of the Internet and by mid-twenties I was running a successful local consultancy. When moving to London, I wrapped up with my partners on Friday evening and started here on Monday. It was the same with @skyscannerofficial.bsky.social
Thanks for seeing this through end-to-end. You’re awesome and your new place is lucky to have you 🫡
While folks are saying that this place is like Twitter from 2019, it also highlights the gap in tech social media for 5 freaking years, just think about it!