Amazon ElastiCache: The SAA-C03 Study Guide

Amazon ElastiCache is a fully managed, in-memory caching service that supports high-performance use cases. By storing frequently accessed data in RAM rather than on disk, ElastiCache reduces pressure on primary databases (like RDS or DynamoDB) and lowers application latency to sub-millisecond levels.

Real-World Analogy

Imagine a busy chef in a restaurant. The RDS Database is the walk-in freezer in the basement (reliable but slow to access). ElastiCache is the small prep table right next to the stove. The chef keeps the most popular ingredients (frequently accessed data) on that table to speed up cooking times significantly.

Core Engines: Redis vs. Memcached

The most common SAA-C03 questions involve choosing between the two available engines. Use the table below to distinguish them:

| Feature | Redis (OSS & Serverless) | Memcached |
| --- | --- | --- |
| Data Types | Complex (Lists, Sets, Hashes, Geospatial) | Simple (Strings, Objects) |
| Persistence | Yes (AOF and Snapshots) | No (purely ephemeral) |
| High Availability | Multi-AZ with Auto-Failover | No (if a node fails, data is lost) |
| Scaling | Horizontal (Sharding) and Vertical | Horizontal (add/remove nodes) |
| Use Case | Leaderboards, Pub/Sub, Session Store | Simple web caching, database offloading |
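The leaderboard use case relies on Redis sorted sets (ZADD to upsert a score, ZREVRANGE to read the top ranks). Since a live cluster isn't assumed here, the sketch below simulates that pattern in plain Python with a dict standing in for the cache node; with redis-py you would issue the equivalent ZADD/ZREVRANGE commands instead.

```python
# Sketch of the sorted-set leaderboard pattern Redis enables.
# A plain dict stands in for the Redis node; the function names
# mirror the ZADD / ZREVRANGE commands they imitate.

def zadd(board: dict, member: str, score: float) -> None:
    """Upsert a member's score (mirrors ZADD)."""
    board[member] = score

def zrevrange(board: dict, start: int, stop: int) -> list:
    """Return members ranked by score, highest first (mirrors ZREVRANGE start stop)."""
    ranked = sorted(board.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[start:stop + 1]  # ZREVRANGE's stop index is inclusive

board = {}
zadd(board, "alice", 3200)
zadd(board, "bob", 4100)
zadd(board, "carol", 2800)
top_two = zrevrange(board, 0, 1)  # the two highest scorers
```

Memcached cannot express this pattern natively: it stores opaque values only, so ranking would have to be recomputed in the application on every read.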

Caching Strategies

1. Lazy Loading (Cache-Aside)

The application only loads data into the cache when there is a “cache miss.” If data isn’t in the cache, the app fetches it from the DB and then writes it to the cache for next time.

  • Pros: Only requested data is cached; handles node failures gracefully.
  • Cons: Cache miss penalty on first request; potential for stale data.
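The flow above can be sketched in a few lines. This is a minimal, self-contained illustration: a dict with expiry timestamps stands in for ElastiCache, and `db_query` is a hypothetical placeholder for the slow RDS round trip. In production you would swap these for redis-py `GET`/`SETEX` calls and a real database query.

```python
import time

CACHE: dict = {}       # stand-in for the ElastiCache node
TTL_SECONDS = 300      # how long a cached entry stays fresh

def db_query(key: str) -> str:
    # Hypothetical stand-in for the expensive RDS round trip.
    return f"row-for-{key}"

def get(key: str) -> str:
    entry = CACHE.get(key)
    if entry is not None and entry[1] > time.time():
        return entry[0]                               # cache hit
    value = db_query(key)                             # cache miss: go to the DB
    CACHE[key] = (value, time.time() + TTL_SECONDS)   # populate for next time
    return value
```

Note the TTL: expiring entries bounds how stale a lazily loaded value can get, which addresses the staleness con listed above.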

2. Write-Through

The application updates the cache immediately whenever it writes to the database.

  • Pros: Data in cache is never stale.
  • Cons: Write latency is higher; most data might never be read (wasted memory).
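For contrast, here is the same kind of sketch for write-through. Both stores are dicts here; in a real deployment the cache write would be a `SET` against ElastiCache and the database write an SQL statement against RDS, performed together on every update.

```python
CACHE: dict = {}      # stand-in for the ElastiCache node
DATABASE: dict = {}   # stand-in for the RDS table

def write_through(key: str, value: str) -> None:
    DATABASE[key] = value   # 1. durable write to the source of truth
    CACHE[key] = value      # 2. immediately refresh the cache as well

def read(key: str) -> str:
    # Reads can trust the cache, because every write refreshed it.
    return CACHE.get(key, DATABASE.get(key))
```

The extra write per update is the latency cost called out above; the payoff is that a read never observes a stale cached value.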

Decision Matrix / If–Then Guide

| If the requirement is… | Then choose… |
| --- | --- |
| Sub-millisecond latency for a SQL DB | ElastiCache (Lazy Loading) |
| Multi-Region replication / disaster recovery | ElastiCache for Redis (Global Datastore) |
| Simple, horizontally scaled caching | ElastiCache for Memcached |
| Storing user session state with persistence | ElastiCache for Redis |

Exam Tips and Gotchas

  • Multi-AZ: Only Redis supports Multi-AZ with automatic failover. Memcached does not support replication; if a node dies, you start with an empty cache.
  • Encryption: ElastiCache supports encryption at rest (KMS) and in-transit (TLS). Always look for these in “highly secure” requirements.
  • Redis Auth: You can require a password (token) before allowing clients to execute commands.
  • Global Datastore: Used for low-latency reads across regions. One primary region provides writes, and secondary regions receive updates.
  • Scaling: Redis Cluster Mode allows you to scale beyond the limits of a single primary node by partitioning data across multiple shards.
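The partitioning in that last tip is deterministic: the Redis Cluster specification maps every key to one of 16,384 hash slots using CRC16 (XMODEM variant) mod 16384, and each shard owns a range of slots. The sketch below reproduces that mapping; it omits the `{...}` hash-tag rule the full spec defines for co-locating related keys.

```python
def crc16_xmodem(data: bytes) -> int:
    # CRC16-CCITT (XMODEM), the checksum Redis Cluster uses for key slots.
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    # Every key lands in one of 16,384 slots; shards own slot ranges,
    # so adding shards redistributes slots rather than rehashing everything.
    return crc16_xmodem(key.encode()) % 16384
```

Because clients can compute the slot locally, cluster-aware clients route each command straight to the shard that owns the key.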

Topics Covered

Summary of key subtopics in this guide:

  • Redis vs. Memcached architectural differences.
  • Caching strategies (Lazy Loading vs. Write-Through).
  • High Availability and Multi-AZ failover mechanisms.
  • Security features (Encryption and Redis AUTH).
  • Performance optimization and Global Datastores.

ElastiCache Architecture & Ecosystem

[Diagram: EC2 / Lambda → (1) check ElastiCache (Redis / Memcached); (2) on a miss, fall back to RDS / Aurora]

Standard Cache-Aside Pattern: the app queries ElastiCache first to avoid expensive DB hits.

Performance

Sub-millisecond Latency

By moving data from disk-based RDS to memory-based ElastiCache, you eliminate I/O wait times.

Use Case: Real-time gaming leaderboards or financial ticker data.

Security

VPC & IAM

Deploy inside a Private Subnet. Use Security Groups to restrict access to the application tier only. Enable At-Rest Encryption.

Cost

Reserved Nodes

For predictable workloads, use Reserved Nodes to save up to 70% over On-Demand pricing. Use Serverless for unpredictable traffic.

Production Use Case: Session Store

In a load-balanced environment, user sessions can be stored in ElastiCache for Redis. This makes the application tier stateless, allowing EC2 instances to be terminated or scaled by Auto Scaling without losing user login data.
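A minimal sketch of that pattern, under the same simplifying assumption as earlier examples: the `SESSIONS` dict stands in for a shared ElastiCache for Redis node reachable by every EC2 instance. With redis-py you would use `SETEX`/`GET` so entries expire server-side instead of being checked in application code.

```python
import time
import uuid

SESSIONS: dict = {}   # stand-in for the shared Redis session store
SESSION_TTL = 3600    # one hour, in seconds

def create_session(user_id: str) -> str:
    # Issue an opaque token; any instance behind the load balancer
    # can later resolve it, because the store is external to the fleet.
    token = uuid.uuid4().hex
    SESSIONS[token] = (user_id, time.time() + SESSION_TTL)
    return token

def get_user(token: str):
    entry = SESSIONS.get(token)
    if entry is None or entry[1] < time.time():
        return None          # unknown or expired: force re-login
    return entry[0]
```

Because no session lives on any one EC2 instance, Auto Scaling can terminate or replace instances freely without logging users out.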
