Why WebSockets Matter in Trading
Traders live in real time. A 5-second delay on a position update can mean the difference between cutting a loss and watching it grow. Introducing broker (IB) partners want to see commissions tick up as their clients trade. Risk managers need instant exposure visibility.
HTTP polling doesn't cut it at scale. Polling 10,000 clients every 2 seconds means 300,000 requests per minute — most returning "no changes." WebSockets flip the model: push updates only when something changes.
Architecture
[MT4/MT5 CPlugin]
        │
        ▼
[Redis Streams] ◄── [Trade Events]
        │
        ▼
[Go WebSocket Gateway]
        │
        ├──▶ Client A (portfolio update)
        ├──▶ Client B (position closed)
        ├──▶ IB Partner (commission earned)
        └──▶ Risk Manager (exposure change)
The WebSocket gateway is a separate service (we use Go) that maintains persistent connections and routes messages to the right clients.
Connection Management
Each authenticated user gets a dedicated connection:
type Hub struct {
	clients    sync.Map // userID -> *Client
	register   chan *Client
	unregister chan *Client
}

type Client struct {
	userID   int
	userType string // "client", "ib", "admin"
	conn     *websocket.Conn
	send     chan []byte
	subs     map[string]bool // subscribed channels
}
func (h *Hub) Run() {
	for {
		select {
		case client := <-h.register:
			h.clients.Store(client.userID, client)
			log.Printf("Connected: user %d (%s)", client.userID, client.userType)
		case client := <-h.unregister:
			// LoadAndDelete guards against a double unregister
			// closing the send channel twice
			if _, ok := h.clients.LoadAndDelete(client.userID); ok {
				close(client.send)
			}
		}
	}
}
Channel-Based Subscriptions
Not every user needs every update. Use channels:
// Channels a trading client subscribes to
func (c *Client) subscribeTrader() {
	c.subs[fmt.Sprintf("portfolio:%d", c.userID)] = true
	c.subs[fmt.Sprintf("orders:%d", c.userID)] = true
	c.subs["prices:forex"] = true // Price feed
}

// Channels an IB subscribes to
func (c *Client) subscribeIB() {
	c.subs[fmt.Sprintf("commissions:%d", c.userID)] = true
	c.subs[fmt.Sprintf("network:%d", c.userID)] = true
}
When a trade closes, publish to the relevant channels:
func (h *Hub) PublishTradeClose(trade TradeEvent) {
	// Notify the client
	h.sendToChannel(
		fmt.Sprintf("portfolio:%d", trade.ClientID),
		TradeCloseMessage{
			Symbol: trade.Symbol,
			Profit: trade.Profit,
			Volume: trade.Volume,
		},
	)

	// Notify all ancestor IBs
	for _, ib := range trade.AncestorIBs {
		h.sendToChannel(
			fmt.Sprintf("commissions:%d", ib.ID),
			CommissionMessage{
				ClientName: trade.ClientName,
				Amount:     ib.Commission,
				Symbol:     trade.Symbol,
			},
		)
	}
}
Heartbeat and Reconnection
Connections drop. Networks flap. The client must handle reconnection gracefully:
class TradingSocket {
  constructor(token) {
    this.token = token;
    this.reconnectAttempts = 0;
    this.maxReconnectDelay = 30000;
    this.connect();
  }

  connect() {
    this.ws = new WebSocket(`wss://ws.platform.com?token=${this.token}`);

    this.ws.onopen = () => {
      this.reconnectAttempts = 0;
      this.startHeartbeat();
    };

    this.ws.onclose = () => {
      this.stopHeartbeat();
      this.reconnect();
    };

    this.ws.onmessage = (event) => {
      const data = JSON.parse(event.data);
      this.handleMessage(data);
    };
  }

  reconnect() {
    // Exponential backoff, capped at maxReconnectDelay
    const delay = Math.min(
      1000 * Math.pow(2, this.reconnectAttempts),
      this.maxReconnectDelay
    );
    this.reconnectAttempts++;
    setTimeout(() => this.connect(), delay);
  }

  startHeartbeat() {
    this.heartbeat = setInterval(() => {
      if (this.ws.readyState === WebSocket.OPEN) {
        this.ws.send(JSON.stringify({ type: 'ping' }));
      }
    }, 30000);
  }

  stopHeartbeat() {
    clearInterval(this.heartbeat);
  }
}
On the server side, terminate connections that miss three consecutive heartbeats — they're likely dead even though the TCP socket hasn't closed yet.
Scaling Beyond One Server
A single Go instance handles 10-15K concurrent WebSocket connections comfortably. Beyond that, you need horizontal scaling:
[Load Balancer (sticky sessions)]
        │
        ├──▶ WS Server 1 (10K connections)
        ├──▶ WS Server 2 (10K connections)
        └──▶ WS Server 3 (10K connections)

[Redis Pub/Sub connects all servers]
When a trade event needs to reach a client, publish to Redis. All WS servers receive the message, and the one holding that client's connection delivers it. The local fan-out each server performs looks like this:
func (h *Hub) sendToChannel(channel string, msg interface{}) {
	data, err := json.Marshal(msg)
	if err != nil {
		log.Printf("marshal failed for channel %s: %v", channel, err)
		return
	}
	// Deliver to every local client subscribed to this channel
	h.clients.Range(func(key, value interface{}) bool {
		client := value.(*Client)
		if client.subs[channel] {
			select {
			case client.send <- data:
			default:
				// Buffer full — drop this update rather than block the hub
			}
		}
		return true
	})
}
Message Compression
Portfolio updates can be chatty. Use delta updates instead of full snapshots:
// Instead of sending full portfolio every time
{ "type": "portfolio_full", "positions": [...50 positions...] }
// Send only what changed
{ "type": "position_update", "id": 12345, "profit": 245.50, "price": 1.0842 }
This reduces bandwidth by 90%+ and keeps the connection responsive even on mobile networks.
Key Takeaways
- Separate WebSocket gateway from your main application — use Go for performance
- Channel-based subscriptions ensure users only receive relevant updates
- Exponential backoff reconnection on the client side handles network instability
- Redis Pub/Sub bridges multiple WebSocket servers for horizontal scaling
- Delta updates over full snapshots reduce bandwidth dramatically
- Heartbeat monitoring catches dead connections that TCP hasn't noticed
Real-time data delivery is a competitive differentiator for trading platforms. Clients notice when their portfolio updates instantly versus with a delay — and they choose platforms accordingly.
