If you've ever had to delete and recreate an entire Cognito app client just to rotate a secret, you know how overdue this was.
#AWS #CloudSecurity
4/4
Why this matters:
✅ Custom secrets = easier migration from other auth systems
✅ On-demand rotation via the AddUserPoolClientSecret API
✅ Fits actual compliance and security requirements
Not just a nice-to-have: this was a real gap.
3/4
How Cognito secret rotation works now:
1. Add a second secret (up to 2 active per client)
2. Update your apps to use the new one
3. Delete the old secret
No downtime. No big bang credential swaps. Works via Console, CLI, SDKs, and CloudFormation.
2/4
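The three steps above can be sketched in boto3. Hedged: AddUserPoolClientSecret is the API named in the announcement, but the snake_case method spellings, the delete counterpart, and the SecretId field below are assumptions — check the SDK docs before relying on them.

```python
# Sketch of zero-downtime client-secret rotation, assuming boto3 exposes the
# new Cognito APIs with its usual snake_case naming. AddUserPoolClientSecret
# comes from the announcement; delete_user_pool_client_secret and SecretId
# are assumed spellings.

def rotate_client_secret(cognito, user_pool_id, client_id, old_secret_id):
    # 1. Add a second secret (up to 2 can be active per app client).
    new_secret = cognito.add_user_pool_client_secret(
        UserPoolId=user_pool_id,
        ClientId=client_id,
    )
    # 2. Roll the new secret out to every app that uses this client.
    # 3. Only then delete the old secret -- no window where nothing works.
    cognito.delete_user_pool_client_secret(
        UserPoolId=user_pool_id,
        ClientId=client_id,
        SecretId=old_secret_id,
    )
    return new_secret
```

Because both secrets stay valid between steps 1 and 3, apps can be migrated gradually instead of in one big-bang swap.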
Amazon Cognito now lets you bring your own client secrets and rotate them without downtime.
Before: one secret, no rotation, delete the whole client to change it.
Now: proper lifecycle management. Finally.
1/4
What it monitors:
• Kernel issues & process limits
• VPC CNI/network failures
• Storage I/O & throughput limits
• Container runtime failures
• GPU/accelerator errors (NVIDIA, AWS Neuron)
All open source. All transparent.
#CloudEngineering #DevSecOps
5/5
Key features:
🔹 Runs as a DaemonSet on every node
🔹 Monitors: kernel, networking, storage, runtime, GPU
🔹 Auto-detects 5+ issue categories
🔹 Integrates with EKS auto-repair
🔹 Open source = full transparency
Check it out on GitHub!
#OpenSource #Kubernetes #AWS
4/5
Now with the open source EKS Node Monitoring Agent:
✅ Automated detection across 5 categories
✅ Kernel, network, storage, runtime, GPU monitoring
✅ NodeConditions for auto-repair integration
✅ Full visibility into detection logic
#CloudComputing #DevOps
3/5
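Since the agent surfaces its findings as NodeConditions on the node object, you can filter for unhealthy ones yourself. A minimal sketch — the condition type names below are illustrative, not the agent's exact ones (check the GitHub repo for those):

```python
# Sketch: filter node conditions that signal trouble. Applies the usual
# Kubernetes convention: "Ready" should be True, and any other condition
# that is True flags an active problem. Condition names like KernelDeadlock
# are illustrative examples, not necessarily what the agent sets.

def problem_conditions(conditions):
    problems = []
    for cond in conditions:
        if cond["type"] == "Ready":
            if cond["status"] != "True":
                problems.append(cond["type"])  # node not ready at all
        elif cond["status"] == "True":
            problems.append(cond["type"])      # fault/pressure condition active
    return problems
```

In practice you would feed this the `status.conditions` list from `kubectl get node <name> -o json` — no SSH required.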
Before this agent:
❌ Manual node health checks
❌ SSH + dmesg guesswork
❌ Stale GPU errors going unnoticed for hours
❌ Manual monitoring across kernel, network, storage, runtime
#DevOps #CloudNative
2/5
Big news for #Kubernetes users: the Amazon EKS Node Monitoring Agent is now OPEN SOURCE!
No more SSHing into nodes to debug pod failures. The EKS Node Monitoring Agent is now available on GitHub for everyone.
#AWS #Kubernetes
1/5
One reframe:
Stop: "how I built it"
Start: "how I designed, owned, and drove impact"
Been through an Amazon loop? What surprised you most?
#AWS #SoftwareEngineering #TechCareers
6/6
Common mistakes:
❌ Tasks instead of ownership
❌ Vague stories, no data
❌ Dodging failure questions
❌ Wrong communication level
5/6
Prep that works:
- 12-16 stories remixable across LPs
- Map each to 3-5 LPs + different angles
- STAR + Impact
- Concrete metrics, tradeoffs, "what I'd change"
4/6
The Bar Raiser:
- Outside the hiring team
- Digs into shallow/polished stories
- Ensures you meet the bar for your level
You can't fake this one
3/6
Each interviewer covers 2-3 LPs.
10-12 LPs tested across the panel.
Even technical rounds go deep on behavior:
→ Metrics
→ Alternatives
→ Consequences
→ Scope at your level, not just skills
2/6
Amazon loop interviews ≠ "just more interviews."
3-5 hours. Back-to-back. Every round tests Leadership Principles, even technical ones.
Here's what actually happens 🧵
1/6
What's the most complex scheduling problem you've solved on AWS?
#AWS #Serverless #CloudComputing
8/8
More tips:
✅ Clean up one-time schedules after execution (they count against quotas)
✅ Batch small tasks to maximize free tier
✅ Templated targets for Lambda/SQS/SNS, universal for everything else
7/8
Pro tips:
✅ Flexible time windows for non-critical tasks
✅ DLQs on every schedule, not just critical ones
✅ Monitor InvocationAttempts vs SuccessfulInvocationAttempts ratio
6/8
Built-in resilience:
✅ At-least-once delivery guarantee
✅ 185 retry attempts over 24h w/ exponential backoff
✅ DLQ integration for anything that still fails
No more custom retry logic from scratch
5/8
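The built-ins above map to two fields on the schedule's Target. A sketch of the create_schedule request as a plain dict (pass it to boto3's `client("scheduler").create_schedule(**req)`; every ARN here is a placeholder):

```python
# Sketch: the resilience settings above wired into a single schedule request.
# Field names follow the EventBridge Scheduler CreateSchedule API; ARNs and
# names are placeholders.

def resilient_schedule(name, expression, target_arn, role_arn, dlq_arn):
    return {
        "Name": name,
        "ScheduleExpression": expression,
        "FlexibleTimeWindow": {"Mode": "OFF"},  # exact timing; FLEXIBLE for non-critical
        "Target": {
            "Arn": target_arn,
            "RoleArn": role_arn,
            # Built-in retries: up to 185 attempts within 24h, with backoff.
            "RetryPolicy": {
                "MaximumRetryAttempts": 185,
                "MaximumEventAgeInSeconds": 86400,
            },
            # Anything still failing lands here instead of vanishing.
            "DeadLetterConfig": {"Arn": dlq_arn},
        },
    }
```

No custom retry loop, no hand-rolled DLQ plumbing — the service owns both.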
Universal target support:
✅ 270+ AWS services, 6K+ API operations
✅ Templated targets for Lambda, SQS, SNS, Step Functions
✅ Universal targets for any supported AWS API
4/8
3 schedule types out of the box:
✅ rate-based for regular intervals
✅ cron-based with timezone + DST handling
✅ one-time for precise single executions
3/8
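The three types correspond to three expression syntaxes on the same create_schedule call. A quick reference — the rate()/cron()/at() syntax is the service's, the concrete times are just examples:

```python
# The three schedule-expression styles EventBridge Scheduler accepts.
SCHEDULE_EXPRESSIONS = {
    "rate": "rate(5 minutes)",             # recurring, fixed interval
    "cron": "cron(0 9 ? * MON-FRI *)",     # calendar-based: 09:00 Mon-Fri;
                                           # pair with ScheduleExpressionTimezone
                                           # for DST-aware runs
    "one_time": "at(2026-03-01T09:00:00)", # runs exactly once
}

def schedule_kind(expression):
    """Classify an expression string by its prefix."""
    for prefix in ("rate(", "cron(", "at("):
        if expression.startswith(prefix):
            return prefix[:-1]
    raise ValueError(f"not a valid schedule expression: {expression}")
```

Note the cron format has six fields (minutes, hours, day-of-month, month, day-of-week, year), not the classic five.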
The old way:
❌ Max 5 TPS
❌ 20 targets
❌ 300 rules per account
❌ Custom retry logic
EventBridge Scheduler:
✅ Thousands of TPS
✅ 270+ services, 6K+ API ops
✅ 10M schedules per region
✅ 185 retries over 24h built-in
2/8
Stop struggling with AWS scheduling at scale
Still using cron jobs and custom polling loops?
There's a better way: Amazon EventBridge Scheduler
Here's why you should switch 🧵
#AWS #Serverless
1/8
2026 default stack:
✅ CloudFront + TLS 1.3 + HTTP/2/3 + HTTP→HTTPS redirect
✅ ACM for auto cert management
✅ ALB/NLB with TLS listeners
✅ PrivateLink/VPC endpoints for private traffic
No excuses left
#cloud #security #softwareengineering
5/5
Still valid pushback:
• Microsecond-latency trading systems
• Constrained IoT/embedded devices
• Legacy systems where retrofitting TLS has real cost
Otherwise? Encrypt everything
4/5
Edge/CDN termination puts the handshake close to the user
Crypto offloaded from your origin
Actual CPU cost? ~2% at peak. Noise on modern hardware
3/5
The real overhead was never CPU
It was handshake round-trips
TLS 1.3 cuts those in half. 0-RTT resumption for repeat clients
HTTP/2+3 means you rarely pay that cost anymore
2/5
"HTTPS everywhere is unnecessary overhead" is a 2010 argument
In 2026, it's mostly a myth
Here's why 🧵
1/5
How to enable nested virtualization on EC2:
New instance → Advanced details → Nested virtualization → Enable
Existing instance → Stop → Actions → Instance settings → Change CPU options → Enable
That's it.
#AWS #DevOps #CloudComputing
4/4
Before you enable it โ know the gotchas:
⚠️ Windows: VSM auto-disabled
⚠️ No hibernate/resume support
⚠️ Windows capped at 192 vCPUs (no m8i.96xl)
⚠️ Latency-sensitive workloads: still use bare metal
⚠️ Security inside nested VMs = your responsibility
3/4