Redis Caching Strategies for Fintech Applications

Redis Is Not Just a Cache

Most developers think of Redis as "that thing that makes database queries faster." In fintech, Redis is infrastructure. It handles session management for thousands of concurrent traders, enforces rate limits on API endpoints that talk to payment processors, powers real-time leaderboards for copy-trading platforms, and manages distributed locks that prevent double-processing of financial transactions.

I've been running Redis in production across multiple fintech platforms for years, and the difference between teams that use Redis well and teams that use it poorly often comes down to understanding its data structures. Redis gives you sorted sets, streams, hyperloglogs, and pub/sub — and each one solves a specific class of fintech problems elegantly.

Session Management for Trading Platforms

Trading platforms have a unique session requirement: a user might be logged in on a web dashboard, a mobile app, and a desktop terminal simultaneously. You need to track all active sessions, enforce maximum concurrent sessions per license tier, and invalidate specific sessions without disrupting others.

php
class RedisSessionManager
{
    private Redis $redis;
    private int $maxSessionsTTL = 86400; // 24 hours

    public function __construct(Redis $redis)
    {
        $this->redis = $redis;
    }

    public function createSession(int $userId, string $deviceType, string $ip): string
    {
        $sessionId = bin2hex(random_bytes(32));
        $key = "sessions:user:{$userId}";

        $sessionData = json_encode([
            'session_id' => $sessionId,
            'device_type' => $deviceType,
            'ip_address' => $ip,
            'created_at' => time(),
            'last_active' => time(),
        ]);

        // Store session in a hash, keyed by session ID
        $this->redis->hSet($key, $sessionId, $sessionData);
        $this->redis->expire($key, $this->maxSessionsTTL);

        // Check concurrent session limits
        $this->enforceSessionLimits($userId);

        return $sessionId;
    }

    public function enforceSessionLimits(int $userId): void
    {
        $key = "sessions:user:{$userId}";
        $sessions = $this->redis->hGetAll($key);
        $maxSessions = $this->getMaxSessions($userId); // per-license-tier limit, lookup not shown

        if (count($sessions) > $maxSessions) {
            // Sort by last_active, remove oldest
            $decoded = array_map(fn($s) => json_decode($s, true), $sessions);
            usort($decoded, fn($a, $b) => $a['last_active'] <=> $b['last_active']);

            $toRemove = array_slice($decoded, 0, count($decoded) - $maxSessions);
            foreach ($toRemove as $session) {
                $this->redis->hDel($key, $session['session_id']);
            }
        }
    }

    public function heartbeat(int $userId, string $sessionId): bool
    {
        $key = "sessions:user:{$userId}";
        $data = $this->redis->hGet($key, $sessionId);

        if (!$data) {
            return false; // Session expired or was evicted
        }

        $session = json_decode($data, true);
        $session['last_active'] = time();
        $this->redis->hSet($key, $sessionId, json_encode($session));

        return true;
    }
}

The hash-per-user approach is critical here. Using individual keys for each session creates a key explosion problem at scale: with 10,000 active users averaging 2 sessions each, you'd have 20,000 keys to manage. Hashes keep it to 10,000 keys with much better memory efficiency. The tradeoff is that hash fields can't carry their own TTLs, so the whole hash expires together and stale individual sessions are pruned by enforceSessionLimits and the heartbeat instead.

Rate Limiting for Payment Endpoints

When your platform talks to PSPs (Payment Service Providers) and banking APIs, rate limiting isn't optional — it's contractual. Exceed the PSP's rate limit and you get blocked. Hit a banking API too hard and you get fined. I use a sliding window approach that's more accurate than fixed windows.
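The boundary-burst problem with fixed windows is easy to see in a toy simulation (plain Python, no Redis; the function names and numbers are purely illustrative). A fixed window resets at the boundary, so a client can fire a full quota just before it and another full quota just after:

```python
def fixed_window_allowed(timestamps, t, limit, window):
    """Count prior requests in the same fixed bucket as time t."""
    bucket = int(t // window)
    return sum(1 for ts in timestamps if int(ts // window) == bucket) < limit

def sliding_window_allowed(timestamps, t, limit, window):
    """Count prior requests in the trailing `window` seconds before t."""
    return sum(1 for ts in timestamps if t - window < ts <= t) < limit

# 30 requests fired just before the minute boundary
history = [59.0 + i * 0.001 for i in range(30)]
t = 60.5

# The fixed window reset at t=60, so the earlier burst is invisible
print(fixed_window_allowed(history, t, limit=30, window=60))    # True

# The sliding window still sees all 30 requests from the last 60 seconds
print(sliding_window_allowed(history, t, limit=30, window=60))  # False
```

With a 30-per-minute PSP limit, the fixed window would let 60 requests through in roughly two seconds around the boundary; the sliding window never exceeds 30 in any trailing 60-second span.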

php
class SlidingWindowRateLimiter
{
    private Redis $redis;

    public function __construct(Redis $redis)
    {
        $this->redis = $redis;
    }

    /**
     * Check if the action is allowed under the rate limit.
     *
     * @param string $key      Unique identifier (e.g., "psp:deposit:user:123")
     * @param int    $limit    Max requests allowed in the window
     * @param int    $windowSeconds  Window size in seconds
     * @return array{allowed: bool, remaining: int, retry_after: int|null}
     */
    public function attempt(string $key, int $limit, int $windowSeconds): array
    {
        $now = microtime(true);
        $windowStart = $now - $windowSeconds;

        $pipe = $this->redis->pipeline();

        // Remove entries outside the current window
        $pipe->zRemRangeByScore($key, '-inf', (string) $windowStart);

        // Count entries in the current window
        $pipe->zCard($key);

        // Add current request (optimistically), remembering its unique member name
        $member = $now . ':' . mt_rand();
        $pipe->zAdd($key, $now, $member);

        // Set expiry on the sorted set
        $pipe->expire($key, $windowSeconds);

        $results = $pipe->exec();
        $currentCount = $results[1];

        if ($currentCount >= $limit) {
            // Over limit: remove the optimistic entry by member name, which is
            // safe even if a concurrent request has since added a newer entry
            $this->redis->zRem($key, $member);

            // Calculate when the earliest entry in the window expires
            $earliest = $this->redis->zRange($key, 0, 0, true);
            $retryAfter = !empty($earliest)
                ? (int) ceil(reset($earliest) + $windowSeconds - $now)
                : $windowSeconds;

            return [
                'allowed' => false,
                'remaining' => 0,
                'retry_after' => $retryAfter,
            ];
        }

        return [
            'allowed' => true,
            'remaining' => $limit - $currentCount - 1,
            'retry_after' => null,
        ];
    }
}

In practice, I wrap this in a middleware that applies different rate limits based on the endpoint category:

php
class PspRateLimitMiddleware
{
    private array $limits = [
        'deposit'    => ['limit' => 30, 'window' => 60],    // 30 per minute
        'withdrawal' => ['limit' => 10, 'window' => 60],    // 10 per minute
        'kyc_check'  => ['limit' => 5,  'window' => 60],    // 5 per minute
        'card_token'  => ['limit' => 20, 'window' => 60],   // 20 per minute
    ];

    public function handle(Request $request, Closure $next, string $category): Response
    {
        $config = $this->limits[$category] ?? ['limit' => 60, 'window' => 60];
        $key = "ratelimit:{$category}:user:{$request->user()->id}";

        $result = app(SlidingWindowRateLimiter::class)->attempt(
            $key,
            $config['limit'],
            $config['window']
        );

        if (!$result['allowed']) {
            return response()->json([
                'error' => 'Rate limit exceeded',
                'retry_after' => $result['retry_after'],
            ], 429)->withHeaders([
                'X-RateLimit-Remaining' => 0,
                'Retry-After' => $result['retry_after'],
            ]);
        }

        return $next($request)->withHeaders([
            'X-RateLimit-Remaining' => $result['remaining'],
        ]);
    }
}

Real-Time Leaderboards for Copy Trading

Copy-trading and social trading platforms need leaderboards that update in real time. Traders are ranked by return percentage, profit, or a composite score. Redis sorted sets handle this beautifully: they maintain ordering automatically, rank lookups run in O(log(N)), and range reads run in O(log(N)+M) for M returned entries.

python
import redis
import json
from datetime import datetime

class TradingLeaderboard:
    def __init__(self, redis_client: redis.Redis):
        self.r = redis_client

    def update_trader_score(self, trader_id: int, score: float,
                            period: str = "monthly") -> None:
        """Update a trader's score on the leaderboard."""
        # Note: the date suffix here is monthly; weekly/daily periods would
        # need their own strftime format
        key = f"leaderboard:{period}:{datetime.now().strftime('%Y-%m')}"

        # ZADD with the score — Redis keeps the set sorted
        self.r.zadd(key, {str(trader_id): score})

        # Store trader metadata for display
        meta_key = f"trader:meta:{trader_id}"
        if not self.r.exists(meta_key):
            # Populated from DB on first encounter; cached for 1 hour
            trader_data = self._fetch_trader_from_db(trader_id)
            self.r.setex(meta_key, 3600, json.dumps(trader_data))

    def get_top_traders(self, period: str = "monthly",
                        offset: int = 0, count: int = 50) -> list[dict]:
        """Get top traders with pagination."""
        key = f"leaderboard:{period}:{datetime.now().strftime('%Y-%m')}"

        # ZREVRANGE returns highest scores first
        results = self.r.zrevrange(key, offset, offset + count - 1,
                                    withscores=True)

        traders = []
        for rank, (trader_id, score) in enumerate(results, start=offset + 1):
            meta = self.r.get(f"trader:meta:{trader_id.decode()}")
            trader_info = json.loads(meta) if meta else {}
            traders.append({
                "rank": rank,
                "trader_id": int(trader_id),
                "score": round(score, 2),
                "name": trader_info.get("name", "Unknown"),
                "avatar": trader_info.get("avatar"),
                "total_followers": trader_info.get("followers", 0),
            })

        return traders

    def get_trader_rank(self, trader_id: int,
                        period: str = "monthly") -> dict | None:
        """Get a specific trader's rank and score."""
        key = f"leaderboard:{period}:{datetime.now().strftime('%Y-%m')}"

        rank = self.r.zrevrank(key, str(trader_id))
        if rank is None:
            return None

        score = self.r.zscore(key, str(trader_id))
        total = self.r.zcard(key)

        return {
            "rank": rank + 1,
            "score": round(score, 2),
            "total_traders": total,
            "percentile": round((1 - rank / total) * 100, 1) if total else 0,
        }

The key insight is that ZREVRANK is an O(log(N)) operation and ZREVRANGE is O(log(N)+M) for a page of M entries. Whether you have 100 traders or 100,000, a 50-row page comes back in well under a millisecond. Compare that to a SQL ORDER BY score DESC query hitting the database on every page load.
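The logarithmic rank lookup comes from the skiplist backing Redis sorted sets. A rough stand-in using Python's bisect over a sorted score list shows the same idea (illustrative only; real ZREVRANK also disambiguates equal scores by member name):

```python
import bisect

# Scores kept sorted ascending, mirroring the order a sorted set maintains
scores = sorted([7.3, 12.5, 33.1, 48.0, 48.0, 91.2])

def rev_rank(score):
    """0-based rank from the top for a given score, like ZREVRANK."""
    # bisect_right counts scores <= score in O(log N); rank is what's above
    return len(scores) - bisect.bisect_right(scores, score)

print(rev_rank(91.2))  # 0 (top trader)
print(rev_rank(7.3))   # 5 (bottom of six)
```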

Distributed Locking for Transaction Processing

In financial systems, processing a transaction twice is catastrophic. When you have multiple workers pulling from a queue, a distributed lock keeps two workers off the same transaction at once; paired with an idempotency check it gets you effectively-once processing, since a lock alone cannot offer a strict exactly-once guarantee.

php
class RedisDistributedLock
{
    private Redis $redis;

    public function __construct(Redis $redis)
    {
        $this->redis = $redis;
    }

    public function acquire(string $resource, int $ttlMs = 5000): ?string
    {
        $token = bin2hex(random_bytes(16));
        $key = "lock:{$resource}";

        // SET with NX (only if not exists) and PX (millisecond expiry)
        $acquired = $this->redis->set($key, $token, ['NX', 'PX' => $ttlMs]);

        return $acquired ? $token : null;
    }

    public function release(string $resource, string $token): bool
    {
        $key = "lock:{$resource}";

        // Lua script ensures atomic check-and-delete
        $script = <<<'LUA'
            if redis.call("GET", KEYS[1]) == ARGV[1] then
                return redis.call("DEL", KEYS[1])
            else
                return 0
            end
        LUA;

        return (bool) $this->redis->eval($script, [$key, $token], 1);
    }
}

// Usage in a withdrawal processor
class WithdrawalProcessor
{
    public function process(Withdrawal $withdrawal): void
    {
        $lock = app(RedisDistributedLock::class);
        $resource = "withdrawal:{$withdrawal->id}";

        $token = $lock->acquire($resource, ttlMs: 30000);

        if (!$token) {
            Log::warning('Could not acquire lock for withdrawal', [
                'withdrawal_id' => $withdrawal->id,
            ]);
            // Re-queue for later processing
            ProcessWithdrawal::dispatch($withdrawal)->delay(now()->addSeconds(5));
            return;
        }

        try {
            // Idempotency check
            $withdrawal->refresh();
            if ($withdrawal->status !== 'pending') {
                return;
            }

            $this->callPspApi($withdrawal);
            $withdrawal->update(['status' => 'processing']);
        } finally {
            $lock->release($resource, $token);
        }
    }
}

The Lua script in the release method is essential. Without it, you risk a race condition: Worker A's lock expires, Worker B acquires it, then Worker A deletes Worker B's lock. The Lua script atomically checks that the token matches before deleting.
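The race is easy to reproduce with an in-memory stand-in for Redis (a plain dict here; all names are illustrative):

```python
import secrets

store = {}  # stands in for Redis: lock key -> token

def acquire(key):
    """SET key token NX: succeed only if nobody holds the lock."""
    if key not in store:
        token = secrets.token_hex(8)
        store[key] = token
        return token
    return None

def release_unsafe(key):
    """Plain DEL: deletes whoever's lock happens to be there."""
    store.pop(key, None)

def release_safe(key, token):
    """Mirrors the Lua script: delete only if the token still matches."""
    if store.get(key) == token:
        del store[key]
        return True
    return False

# Worker A acquires, its TTL "expires", then Worker B acquires the lock
token_a = acquire("withdrawal:42")
store.pop("withdrawal:42")          # simulate TTL expiry
token_b = acquire("withdrawal:42")

# A's stale release is a no-op with the safe variant; B keeps its lock
print(release_safe("withdrawal:42", token_a))  # False
print(store["withdrawal:42"] == token_b)       # True
```

Had Worker A called release_unsafe instead, it would have silently deleted Worker B's lock and reopened the door to double-processing.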

Queue Optimization with Redis Streams

For high-throughput transaction processing, I've moved from Laravel's default Redis list-based queues to Redis Streams. Streams give you consumer groups, message acknowledgment, and dead-letter handling that lists can't provide.

php
class StreamQueueProducer
{
    public function dispatch(string $stream, array $payload): string
    {
        $redis = app('redis')->connection('queue');

        // XADD appends to the stream with auto-generated ID
        $messageId = $redis->xadd($stream, '*', [
            'payload' => json_encode($payload),
            'dispatched_at' => microtime(true),
        ]);

        return $messageId;
    }
}

class StreamQueueConsumer
{
    public function consume(string $stream, string $group,
                            string $consumer, int $batchSize = 10): void
    {
        $redis = app('redis')->connection('queue');

        // Create consumer group if it doesn't exist
        try {
            $redis->xgroup('CREATE', $stream, $group, '0', true);
        } catch (\Exception $e) {
            // Group already exists
        }

        while (true) {
            // Read new messages for this consumer
            $messages = $redis->xreadgroup(
                $group, $consumer,
                [$stream => '>'],
                $batchSize,
                2000 // Block for 2 seconds if no messages
            );

            if (empty($messages)) {
                // Also reclaim pending messages that were never acknowledged
                // (implemented with XAUTOCLAIM; not shown here)
                $this->reclaimStale($redis, $stream, $group, $consumer);
                continue;
            }

            foreach ($messages[$stream] ?? [] as $messageId => $data) {
                try {
                    $payload = json_decode($data['payload'], true);
                    $this->processMessage($payload);

                    // Acknowledge successful processing
                    $redis->xack($stream, $group, $messageId);
                } catch (\Exception $e) {
                    Log::error('Stream message processing failed', [
                        'stream' => $stream,
                        'message_id' => $messageId,
                        'error' => $e->getMessage(),
                    ]);
                    // Message remains pending — will be reclaimed
                }
            }
        }
    }
}

Memory Management

Redis is in-memory, and memory is finite. In fintech, you're often caching large datasets — account balances, trade history, instrument prices. Set explicit TTLs on everything and use memory policies wisely.

bash
# redis.conf for fintech workloads
maxmemory 4gb
maxmemory-policy allkeys-lfu

# LFU (Least Frequently Used) is better than LRU for fintech
# because frequently accessed data (like active trader sessions
# and popular instrument prices) should stay cached, while
# rarely accessed historical data can be evicted.

Monitor your memory usage and key distribution:

bash
# Check memory usage by key pattern
redis-cli --bigkeys

# Monitor real-time commands hitting Redis
# (MONITOR is expensive; don't leave it running against production)
redis-cli MONITOR | head -100

# Check memory for a specific key
redis-cli MEMORY USAGE "leaderboard:monthly:2026-01"

Key Takeaways

  1. Use Redis hashes for session management instead of individual keys. Hashes give you per-user session grouping with far better memory efficiency at scale.
  2. Implement sliding window rate limiting for PSP and banking API calls. Fixed windows have burst problems at window boundaries; sorted-set-based sliding windows give you accurate, smooth rate enforcement.
  3. Sorted sets are purpose-built for leaderboards. They maintain order automatically and support rank lookups in O(log(N)) — no need to hit the database on every leaderboard view.
  4. Always use Lua scripts for distributed lock release. The check-and-delete must be atomic, or you risk one worker releasing another worker's lock.
  5. Consider Redis Streams over lists for financial transaction queues. Consumer groups, acknowledgment, and dead-letter handling give you the reliability guarantees that financial processing demands.
  6. Set explicit TTLs and use LFU eviction policy. LFU is superior to LRU for fintech because it preserves frequently-accessed active data while evicting rarely-touched historical entries.
