
Matteo Depascale

@matteodepascale

AWS Cloud Architect | Daily tech content creator 🚀 | DevOps Expert | AWS Community Builder | Helping tech professionals master cloud. Blog: 🔗 https://cloudnature.net. Disclaimer: Opinions expressed are solely my own.

527 Followers · 306 Following · 719 Posts · Joined 10.11.2024

Latest posts by Matteo Depascale @matteodepascale

If you've ever had to delete and recreate an entire Cognito app client just to rotate a secret, you know how overdue this was.

#AWS #CloudSecurity
4/4

02.03.2026 15:00 👍 0 🔁 0 💬 0 📌 0

Why this matters:

→ Custom secrets = easier migration from other auth systems
→ On-demand rotation via the AddUserPoolClientSecret API
→ Fits actual compliance and security requirements

Not just a nice-to-have: this was a real gap.
3/4

02.03.2026 15:00 👍 0 🔁 0 💬 1 📌 0

How Cognito secret rotation works now:

1. Add a second secret (up to 2 active per client)
2. Update your apps to use the new one
3. Delete the old secret

No downtime. No big bang credential swaps. Works via Console, CLI, SDKs, and CloudFormation.
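The three-step flow above can be sketched locally. This is a plain-Python model of the dual-secret overlap window, not the Cognito API; no AWS calls are made, and all class and method names here are illustrative (the only real API name from this thread is AddUserPoolClientSecret).

```python
# Local sketch of the dual-secret rotation window described above.
# Names are illustrative only, not Cognito's API.

class AppClient:
    """Models an app client that may hold up to two active secrets."""

    MAX_ACTIVE_SECRETS = 2

    def __init__(self, initial_secret: str):
        self.secrets = [initial_secret]  # oldest first

    def add_secret(self, new_secret: str) -> None:
        # Step 1: add a second secret alongside the old one.
        if len(self.secrets) >= self.MAX_ACTIVE_SECRETS:
            raise RuntimeError("at most two active secrets per client")
        self.secrets.append(new_secret)

    def authenticate(self, presented: str) -> bool:
        # During the overlap window BOTH secrets authenticate,
        # so apps migrate gradually (step 2) with no downtime.
        return presented in self.secrets

    def remove_secret(self, old_secret: str) -> None:
        # Step 3: retire the old secret once every app uses the new one.
        self.secrets.remove(old_secret)


client = AppClient("old-secret")
client.add_secret("new-secret")           # overlap window opens
assert client.authenticate("old-secret")  # old apps still work
assert client.authenticate("new-secret")  # migrated apps work too
client.remove_secret("old-secret")        # rotation complete
assert not client.authenticate("old-secret")
```

The point of the overlap: there is never a moment when only the not-yet-deployed secret is valid.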
2/4

02.03.2026 15:00 👍 0 🔁 0 💬 1 📌 0

Amazon Cognito now lets you bring your own client secrets and rotate them without downtime.

Before: one secret, no rotation, delete the whole client to change it.
Now: proper lifecycle management. Finally.
1/4

02.03.2026 15:00 👍 1 🔁 0 💬 1 📌 0

What it monitors:
• Kernel issues & process limits
• VPC CNI/network failures
• Storage I/O & throughput limits
• Container runtime failures
• GPU/accelerator errors (NVIDIA, AWS Neuron)

All open source. All transparent.

#CloudEngineering #DevSecOps
5/5

27.02.2026 14:00 👍 1 🔁 0 💬 0 📌 0

Key features:
🔹 Runs as a DaemonSet on every node
🔹 Monitors: Kernel, Networking, Storage, Runtime, GPU
🔹 Auto-detects 5+ issue categories
🔹 Integrates with EKS auto-repair
🔹 Open source = full transparency

Check it out on GitHub!

#OpenSource #Kubernetes #AWS
4/5

27.02.2026 14:00 👍 2 🔁 0 💬 1 📌 0

Now with the open source EKS Node Monitoring Agent:
✅ Automated detection across 5 categories
✅ Kernel, network, storage, runtime, GPU monitoring
✅ NodeConditions for auto-repair integration
✅ Full visibility into detection logic

#CloudComputing #DevOps
3/5

27.02.2026 14:00 👍 1 🔁 0 💬 1 📌 0

Before this agent:
โŒ Manual node health checks
โŒ SSH + dmesg guesswork
โŒ Stale GPU errors going unnoticed for hours
โŒ Manual monitoring across kernel, network, storage, runtime

#DevOps #CloudNative
2/5

27.02.2026 14:00 👍 1 🔁 0 💬 1 📌 0

🚀 Big news for #Kubernetes users! Amazon EKS Node Monitoring Agent is now OPEN SOURCE!

No more SSHing into nodes to debug pod failures. The EKS Node Monitoring Agent is now available on GitHub for everyone.

#AWS #Kubernetes
1/5

27.02.2026 14:00 👍 0 🔁 0 💬 1 📌 0

One reframe:

Stop: "how I built it"
Start: "how I designed, owned, and drove impact"

Been through an Amazon loop? What surprised you most?

#AWS #SoftwareEngineering #TechCareers
6/6

26.02.2026 13:00 👍 0 🔁 0 💬 0 📌 0

Common mistakes:

โŒ Tasks instead of ownership
โŒ Vague stories, no data
โŒ Dodging failure questions
โŒ Wrong communication level
5/6

26.02.2026 13:00 👍 0 🔁 0 💬 1 📌 0

Prep that works:

- 12-16 stories remixable across LPs
- Map each to 3-5 LPs + different angles
- STAR + Impact
- Concrete metrics, tradeoffs, "what I'd change"
4/6

26.02.2026 13:00 👍 0 🔁 0 💬 1 📌 0

The Bar Raiser:

- Outside the hiring team
- Probes past shallow or over-polished stories
- Ensures you meet the bar for your level

You can't fake this one
3/6

26.02.2026 13:00 👍 0 🔁 0 💬 1 📌 0

Each interviewer covers 2-3 LPs.
10-12 LPs tested across the panel.

Even technical rounds go deep on behavior:
→ Metrics
→ Alternatives
→ Consequences
→ Scope at your level, not just skills
2/6

26.02.2026 13:00 👍 0 🔁 0 💬 1 📌 0

Amazon loop interviews ≠ "just more interviews."

3-5 hours. Back-to-back. Every round tests Leadership Principles, even technical ones.

Here's what actually happens 🧵
1/6

26.02.2026 13:00 👍 0 🔁 0 💬 1 📌 0

What's the most complex scheduling problem you've solved on AWS?

#AWS #Serverless #CloudComputing
8/8

25.02.2026 13:00 👍 0 🔁 0 💬 0 📌 0

More tips:

→ Clean up one-time schedules after execution (they count against quotas)
→ Batch small tasks to maximize free tier
→ Templated targets for Lambda/SQS/SNS, universal for everything else
7/8

25.02.2026 13:00 👍 0 🔁 0 💬 1 📌 0

Pro tips:

→ Flexible time windows for non-critical tasks
→ DLQs on every schedule, not just critical ones
→ Monitor the InvocationAttempts vs SuccessfulInvocationAttempts ratio
6/8

25.02.2026 13:00 👍 0 🔁 0 💬 1 📌 0

Built-in resilience:

→ At-least-once delivery guarantee
→ 185 retry attempts over 24h w/ exponential backoff
→ DLQ integration for anything that still fails

No more custom retry logic from scratch
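For intuition, here is the shape of the capped exponential backoff you'd otherwise hand-roll. The base and cap values below are made up for illustration; the real EventBridge Scheduler retry policy (185 attempts over 24h, per this thread) is configured on the schedule, not reimplemented by you.

```python
# Illustrative capped exponential backoff (made-up base/cap values),
# showing the retry curve the scheduler provides built-in.

def backoff_delays(attempts: int, base: float = 1.0, cap: float = 900.0):
    """Delay (seconds) before each retry: doubles until hitting the cap."""
    return [min(cap, base * 2 ** i) for i in range(attempts)]

delays = backoff_delays(12)
print(delays[:5])  # [1.0, 2.0, 4.0, 8.0, 16.0]
print(delays[-1])  # 900.0 (capped)
```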
5/8

25.02.2026 13:00 👍 0 🔁 0 💬 1 📌 0

Universal target support:

→ 270+ AWS services, 6K+ API operations
→ Templated targets for Lambda, SQS, SNS, Step Functions
→ Universal targets for any supported AWS API
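Universal targets are addressed by a service/action ARN. To the best of my knowledge the convention is `arn:aws:scheduler:::aws-sdk:<service>:<apiAction>` (camelCase action name); verify against the EventBridge Scheduler docs before relying on it.

```python
# Sketch of the universal-target ARN convention (verify against docs).

def universal_target_arn(service: str, api_action: str) -> str:
    return f"arn:aws:scheduler:::aws-sdk:{service}:{api_action}"

print(universal_target_arn("sqs", "sendMessage"))
# arn:aws:scheduler:::aws-sdk:sqs:sendMessage
```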
4/8

25.02.2026 13:00 👍 0 🔁 0 💬 1 📌 0

3 schedule types out of the box:

→ rate-based for regular intervals
→ cron-based with timezone + DST handling
→ one-time for precise single executions
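The three types map to three ScheduleExpression formats. The formats below match the documented EventBridge Scheduler syntax; the specific values are just examples (the timezone is set separately via ScheduleExpressionTimezone).

```python
# The three EventBridge Scheduler expression formats.

def rate(value: int, unit: str) -> str:
    return f"rate({value} {unit})"   # regular intervals

def cron(fields: str) -> str:
    return f"cron({fields})"         # six fields, ? for unused day field

def one_time(iso_ts: str) -> str:
    return f"at({iso_ts})"           # fires exactly once

print(rate(15, "minutes"))              # rate(15 minutes)
print(cron("0 9 * * ? *"))              # cron(0 9 * * ? *) -> 09:00 daily
print(one_time("2026-03-01T00:00:00"))  # at(2026-03-01T00:00:00)
```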
3/8

25.02.2026 13:00 👍 0 🔁 0 💬 1 📌 0

The old way:

โŒ Max 5 TPS
โŒ 20 targets
โŒ 300 rules per account
โŒ Custom retry logic

EventBridge Scheduler:

✅ Thousands of TPS
✅ 270+ services, 6K+ API ops
✅ 10M schedules per region
✅ 185 retries over 24h built-in
2/8

25.02.2026 13:00 👍 0 🔁 0 💬 1 📌 0

Stop struggling with AWS scheduling at scale

Still using cron jobs and custom polling loops?

There's a better way → Amazon EventBridge Scheduler

Here's why you should switch 🧵

#AWS #Serverless
1/8

25.02.2026 13:00 👍 1 🔁 0 💬 1 📌 0

2026 default stack:
→ CloudFront + TLS 1.3 + HTTP/2/3 + HTTP→HTTPS redirect
→ ACM for auto cert management
→ ALB/NLB with TLS listeners
→ PrivateLink/VPC endpoints for private traffic

No excuses left

#cloud #security #softwareengineering
5/5

24.02.2026 13:00 👍 1 🔁 0 💬 0 📌 0

Still valid pushback:
→ Microsecond-latency trading systems
→ Constrained IoT/embedded devices
→ Legacy systems where retrofitting TLS has real cost

Otherwise? Encrypt everything
4/5

24.02.2026 13:00 👍 0 🔁 0 💬 1 📌 0

Edge/CDN termination puts the handshake close to the user

Crypto offloaded from your origin

Actual CPU cost? ~2% at peak. Noise on modern hardware
3/5

24.02.2026 13:00 👍 0 🔁 0 💬 1 📌 0

The real overhead was never CPU

It was handshake round-trips

TLS 1.3 cuts those in half. 0-RTT resumption for repeat clients

HTTP/2+3 means you rarely pay that cost anymore
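To make "default to modern TLS" concrete, here's Python's stdlib ssl module pinning TLS 1.3 as the floor (requires OpenSSL 1.1.1+; the context setup itself is standard library API).

```python
# Pin TLS 1.3 as the minimum: refuse TLS 1.2 and below on this context.
import ssl

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

print(ctx.minimum_version.name)  # TLSv1_3
```

Clients and servers built on such a context get the shorter 1.3 handshake (and optionally 0-RTT resumption) for free.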
2/5

24.02.2026 13:00 👍 0 🔁 0 💬 1 📌 0

"HTTPS everywhere is unnecessary overhead" is a 2010 argument

In 2026, it's mostly a myth

Here's why 🧵
1/5

24.02.2026 13:00 👍 0 🔁 0 💬 1 📌 0

How to enable nested virtualization on EC2:

New instance → Advanced details → Nested virtualization → Enable

Existing instance → Stop → Actions → Instance settings → Change CPU options → Enable

That's it.

#AWS #DevOps #CloudComputing
4/4

23.02.2026 14:00 👍 0 🔁 0 💬 0 📌 0

Before you enable it โ€” know the gotchas:
โš ๏ธ Windows: VSM auto-disabled
โš ๏ธ No hibernate/resume support
โš ๏ธ Windows capped at 192 vCPUs (no m8i.96xl)
โš ๏ธ Latency-sensitive workloads โ†’ still use bare metal
โš ๏ธ Security inside nested VMs = your responsibility
3/4

23.02.2026 14:00 👍 0 🔁 0 💬 1 📌 0