[New Episode] 2g. VCS: SourceForge
retrotech.outsider.dev/episodes/2g
The Next Two Years of Software Engineering
addyosmani.com/blog/next-tw...
I liked that it isn't a sensational, one-sided prediction but offers a balanced view grounded in the current situation.
[New Post] [Book] Platform Engineering: Core Principles of Platform Management Spanning Development and Operations
blog.outsider.ne.kr/1781
[New Post] [Book] Hypermedia Systems
blog.outsider.ne.kr/1780
A good book.
www.datadoghq.com/state-of-con...
Datadog's report on container/serverless trends, analyzed from customer data:
- Most workloads use less than 50% of requested memory and less than 25% of requested CPU.
- Two-thirds of organizations using Kubernetes use HPA.
- Only 20% of Deployments using HPA use custom metrics.
- Karpenter adoption has surpassed Cluster Autoscaler.
- Arm usage is expanding.
- GPU adoption is increasing.
- AI workloads are starting to appear among the popular container workload categories.
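The report's note that only 20% of HPA-enabled Deployments use custom metrics is worth unpacking: scaling on something like request rate requires a metrics adapter exposing the metric to the Kubernetes metrics API. A minimal sketch of such an HPA (the Deployment name `web` and the metric name `requests_per_second` are illustrative assumptions):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web          # hypothetical target Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Pods       # custom per-pod metric, served via a metrics adapter
      pods:
        metric:
          name: requests_per_second
        target:
          type: AverageValue
          averageValue: "100"   # scale out above ~100 req/s per pod
```

Compare this with the default CPU-utilization metric most of the other 80% rely on.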
blog.cloudflare.com/async-quic-a...
Cloudflare has open-sourced tokio-quiche, a library that combines quiche, the QUIC implementation they built six years ago, with the Rust async runtime Tokio, making it easy to support HTTP/3 and QUIC.
All presentation videos from the "Karrot SRE Meetup" held on September 19 have been uploaded to YouTube.
www.youtube.com/playlist?lis...
You can find the summaries here: secondb.ai?channel=%EB%...
[New Episode] 2e. VCS: Visual SourceSafe
retrotech.outsider.dev/episodes/2e
[New Episode] 2d. VCS: ClearCase
retrotech.outsider.dev/episodes/2d
clickhouse.com/blog/llm-obs...
ClickHouse tested whether LLMs can identify the cause of service disruptions, using Claude 4 Sonnet, GPT o3, GPT-4.1, Gemini Pro, and GPT-5 across four scenarios, and summarized the investigation process. Quite interesting, as this is a current area of interest of mine.
Linear sent me down a local-first rabbit hole
bytemash.net/posts/i-went...
These days I'm considering switching our issue tracker to Linear... It's interesting not only as a product but also from an engineering perspective.
OpenAI and Anthropic handle significant traffic even just as web services, so how do they manage it so well, even with rate limits? It's a pity there is so little public information about their serving infrastructure.
Perhaps because I work in infrastructure, I'm more curious about how the LLM models are served than about the models themselves (maybe because I don't understand the models at all).
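On the rate-limit question: one common building block for API rate limiting is a token bucket, which allows short bursts while enforcing a sustained rate. A minimal sketch of the general technique (this is a textbook illustration, not how OpenAI or Anthropic actually implement it; the demo rates are deliberately tiny to make the behavior visible):

```python
import time

class TokenBucket:
    """Token-bucket limiter: sustained `rate` requests/sec, bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity    # start full
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens accrued since the last check, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Demo: a burst of 10 requests against a bucket holding 5 tokens
# that refills very slowly (0.01 tokens/sec).
bucket = TokenBucket(rate=0.01, capacity=5)
results = [bucket.allow() for _ in range(10)]
# The first 5 requests consume the burst capacity; the rest are rejected
# until tokens refill.
```

In production this shape typically lives in a shared store (e.g. Redis) keyed per API key, so limits hold across many serving instances.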