If you come from the worlds of Python, Ruby, or Node, you likely have a distinct muscle memory when starting a new project. You spin up the app server, you set up the database, and — almost automatically — you spin up a Redis instance.
It’s the default answer to “Where do I put my temporary data?”
When I started working with Elixir, I brought that same baggage with me. I spent hours configuring connection pools, handling network errors, and worrying about serialization overhead. Then I realised I was ignoring a superpower built directly into the language.
I’m talking about ETS (Erlang Term Storage).
After ditching Redis for native caching in a recent project, the results were startling enough that I had to write them down. We are talking about a roughly 9x performance improvement and a 40% drop in CPU usage.
Here is why you might want to delete {:redix, ...} from your mix.exs file.
The “Zero-Config” Reality Check
Let’s look at the barrier to entry. If you want to use Redis, you are committing to infrastructure.
The Redis Tax:
- Install Redis (Homebrew, Apt, or Docker).
- Configure the redis-server.
- Add a dependency like Redix.
- Set up a supervision tree for connection pools.
- Handle connection errors (what happens when the network blips?).
The ETS Alternative:
ETS is already running. It is part of the Erlang VM (BEAM). It is the standard library.
Here is the entire setup configuration:
```elixir
# That's it. You're done.
:ets.new(:my_cache, [:named_table, :public, :set])
```
There are no extra processes to monitor, no configuration files to inject, and no network ports to open.
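With that table in place, reads and writes are plain function calls. Here is a minimal sketch (the key and value are illustrative):

```elixir
# Write a tuple into the table; the first element is the key
:ets.insert(:my_cache, {:user_count, 42})

# Read it back; lookup returns a list of matching tuples
case :ets.lookup(:my_cache, :user_count) do
  [{:user_count, value}] -> value
  [] -> nil
end
```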
The Speed of “No Network”
I ran the standard benchmarks that have been circulating in the community — specifically looking at 1,000 lookups for both systems. The difference isn’t just marginal; it’s architectural.
- Redis: 446.06ms
- ETS: 49.93ms
ETS was roughly 9x faster.
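For context, here is a sketch of how such a comparison might be set up with Benchee. This is not the exact benchmark that produced the numbers above; it assumes the benchee and redix hex packages and a Redis server on localhost:

```elixir
# Assumes {:benchee, "~> 1.0"} and {:redix, "~> 1.0"} in mix.exs
:ets.new(:bench_cache, [:named_table, :public, :set])
:ets.insert(:bench_cache, {"user:123", %{name: "Ada"}})

{:ok, conn} = Redix.start_link()
Redix.command!(conn, ["SET", "user:123", "Ada"])

Benchee.run(%{
  "ETS lookup" => fn -> :ets.lookup(:bench_cache, "user:123") end,
  "Redis GET"  => fn -> Redix.command!(conn, ["GET", "user:123"]) end
})
```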
Why is the gap so wide?
It comes down to physics and the cost of serialization. When you fetch data from Redis, your app has to serialize the data, send it over the network (even localhost has overhead), wait for the single-threaded Redis instance to process it, receive the packet, and deserialize it back into Elixir terms.
ETS, on the other hand, offers direct memory access.
There is no “shipping” of data. There is no serialization. You are simply reading memory that your VM already owns.
The Hidden CPU Cost
Speed isn’t the only metric. Serialization eats CPU cycles. In one production case study, a team found that simply deserializing JSON from Redis was consuming nearly a third of their CPU capacity.
By switching to ETS, their CPU usage dropped from 31.87% to 11.92%. Because ETS data lives in memory as native terms, the Garbage Collector has less work to do, and your CPU can focus on actual business logic.
“But What About Expiration?”
A common critique is that Redis gives you TTL (Time To Live) out of the box, while ETS is just a storage engine.
While true, implementing a TTL wrapper in Elixir is trivial. Here is a robust pattern that handles freshness checks effortlessly:
```elixir
defmodule SimpleCache do
  def get(key) do
    case :ets.lookup(:my_cache, key) do
      # Entry exists: only return it if it hasn't expired yet
      [{^key, val, expiry}] ->
        if expiry > :os.system_time(:second), do: val, else: nil

      [] ->
        nil
    end
  end

  def put(key, value, ttl) do
    expiry = :os.system_time(:second) + ttl
    :ets.insert(:my_cache, {key, value, expiry})
  end
end
```
With about 12 lines of code, we’ve replicated the core feature you needed from Redis, without the infrastructure overhead.
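One gap in that sketch: expired entries are returned as nil but never physically removed. A small GenServer can sweep them periodically. Here is one hedged way to do it (the module name, table name, and interval are illustrative):

```elixir
defmodule CacheJanitor do
  use GenServer

  @sweep_interval :timer.minutes(1)

  def start_link(_opts), do: GenServer.start_link(__MODULE__, nil)

  @impl true
  def init(nil) do
    schedule_sweep()
    {:ok, nil}
  end

  @impl true
  def handle_info(:sweep, state) do
    now = :os.system_time(:second)

    # Delete every {key, value, expiry} row whose expiry is in the past
    :ets.select_delete(:my_cache, [
      {{:_, :_, :"$1"}, [{:<, :"$1", now}], [true]}
    ])

    schedule_sweep()
    {:noreply, state}
  end

  defp schedule_sweep, do: Process.send_after(self(), :sweep, @sweep_interval)
end
```

Hang it under your application's supervision tree and stale rows are reclaimed without any caller ever paying the cleanup cost.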
The “Distributed” Myth: Killing Redis Cluster
The most common counter-argument against ETS is: “But ETS is local to the node! I need a cluster!”
This is where the complexity curve usually spikes. With Redis, scaling means moving to Redis Cluster — managing slots, sharding, rebalancing, and sentinels.
In Elixir, distribution is a first-class citizen. You can build a “native” cluster alternative that is often faster and drastically simpler to operate using Consistent Hashing.
The “Smart Client” Architecture
In a Redis Cluster, if you hit the wrong node, it redirects you. In Elixir, we can determine exactly which node holds the data before we make the call.
- Hash the Key: Use a consistent hash (like libring) to map a key (e.g., user:123) to a specific Elixir node in your cluster.
- Direct Access: If the node is self(), read from local ETS (microseconds). If the node is remote, send a direct message to that node to read its ETS table (milliseconds).
```elixir
def get_distributed(key) do
  node = ConsistentHash.node_for(key)

  if node == Node.self() do
    # Local speed!
    :ets.lookup(:local_cache, key)
  else
    # RPC call to sibling node
    :erpc.call(node, :ets, :lookup, [:local_cache, key])
  end
end
```
This approach eliminates the need for external load balancers or complex Redis Sentinel setups. You utilize the nodes you are already paying for.
When You Actually DO Need Redis
I am not saying Redis is dead. It is still the right tool for specific jobs. You should stick with Redis if:
- You have a Polyglot Stack: Your Node.js frontend and Elixir backend need to read the exact same session data.
- Persistence is Non-Negotiable: You need the cache to survive a full application restart.
- Strict Consistency: Every node in your cluster must see a data change the exact millisecond it happens.
The Hybrid Approach
If you are on the fence, you can have your cake and eat it too. Use ETS as an L1 (Level 1) cache and Redis as an L2 fallback.
Check ETS first. If it’s missing, hit Redis, and populate ETS on the way back. This gives you the blistering speed of direct memory access for hot keys while keeping the distributed reliability of Redis as a backup.
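A hedged sketch of that read-through pattern, assuming a Redix connection registered under the name :redix and string values (both assumptions, not a fixed API):

```elixir
defmodule HybridCache do
  # L1 = local ETS table, L2 = Redis via a named Redix connection
  def get(key) do
    case :ets.lookup(:l1_cache, key) do
      [{^key, value}] ->
        value

      [] ->
        # L1 miss: fall back to Redis, then warm the local cache
        case Redix.command(:redix, ["GET", key]) do
          {:ok, nil} ->
            nil

          {:ok, value} ->
            :ets.insert(:l1_cache, {key, value})
            value

          {:error, _reason} ->
            nil
        end
    end
  end
end
```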

Final Thoughts
We often over-engineer our stacks because we are used to languages that need external help to be performant. Elixir is different. The platform provides the building blocks that other languages outsource to infrastructure.
For your next Elixir project, try this: Skip the Redis installation. Use ETS.
You get 9x faster reads, 40% less CPU usage, and one less service to wake you up at 3 AM when it crashes.

