# Load Balance Mideye for High Availability & RADIUS
This guide covers the network and Mideye Server requirements for placing a load balancer between your VPN/RADIUS clients and multiple Mideye Server nodes. It applies to any load balancer (F5, HAProxy, Citrix ADC, etc.) — for a Citrix ADC step-by-step walkthrough, see Citrix ADC RADIUS Load Balancing.
## Why load balance Mideye Server?

| Goal | How load balancing helps |
|---|---|
| High availability | If one Mideye Server goes down, the remaining nodes continue to serve authentication requests |
| Load distribution | Spread RADIUS traffic across multiple nodes |
| Zero-downtime upgrades | Drain one node, upgrade it, and return it to the pool |
## Architecture overview

Two rules are critical for this architecture to work:
- Session persistence — challenge-response packets must stick to the same Mideye node
- Symmetric return path — RADIUS responses must travel back through the load balancer
Both are explained in detail below.
## Rule 1: Session persistence (sticky sessions)

### Why it is required

Mideye Server uses the RADIUS Access-Challenge mechanism for multi-factor authentication. A typical MFA login involves multiple round-trips: the VPN forwards the user’s credentials in an Access-Request, Mideye replies with an Access-Challenge (for example, prompting for an OTP), and the VPN sends a follow-up Access-Request carrying the user’s response. Both Access-Requests must reach the same Mideye node. The challenge state (which OTP was generated, which user is mid-authentication) exists only in the memory of the node that created it. If the follow-up request lands on Mideye B instead of Mideye A, Mideye B has no knowledge of the challenge and the authentication fails.
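The dependence on node-local challenge state can be sketched as follows (a minimal simulation with hypothetical node names and a hard-coded OTP, not Mideye’s actual implementation):

```python
# Sketch: why an MFA transaction is pinned to one node.
# Each node keeps its challenge state only in its own memory.

class MideyeNode:
    def __init__(self, name):
        self.name = name
        self.pending = {}  # session id -> expected OTP (in-memory only)

    def access_request(self, session, otp=None):
        if otp is None:
            self.pending[session] = "123456"  # issue a challenge
            return "Access-Challenge"
        expected = self.pending.pop(session, None)
        if expected is None:
            return "Access-Reject"  # no challenge state on this node
        return "Access-Accept" if otp == expected else "Access-Reject"

node_a, node_b = MideyeNode("A"), MideyeNode("B")
node_a.access_request("s1")                   # challenge issued on node A
print(node_a.access_request("s1", "123456"))  # Access-Accept (same node)
node_a.access_request("s2")                   # challenge issued on node A
print(node_b.access_request("s2", "123456"))  # Access-Reject (state lives on A)
```

The last call fails exactly as described above: node B never saw the challenge, so the otherwise correct OTP is rejected.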
### How to configure it

Configure source-IP persistence (also called “sticky sessions” or “session affinity”) on the load balancer:
| Setting | Recommended value | Notes |
|---|---|---|
| Persistence type | Source IP | Ensures all packets from the same VPN arrive at the same Mideye node |
| Persistence timeout | 120 seconds (minimum) | Must exceed the longest MFA timeout (Touch Accept wait = 25 s, plus user think time) |
Source-IP persistence works because the VPN’s IP address remains constant across all RADIUS packets in a single authentication transaction.
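The behaviour of a persistence table can be sketched like this (a simplified model with hypothetical node names; real load balancers implement this internally, with their own entry management):

```python
# Sketch: a source-IP persistence table. A new source IP is assigned a
# backend node; every further packet from that IP within the timeout
# goes to the same node, and each packet refreshes the entry.

NODES = ["mideye-a", "mideye-b"]  # hypothetical backend nodes
TIMEOUT = 120                     # seconds; must exceed the longest MFA exchange

_table = {}   # source IP -> (node, expiry timestamp)
_next = [0]   # round-robin counter for new entries

def pick_node(src_ip, now):
    node, expiry = _table.get(src_ip, (None, 0.0))
    if node is None or now > expiry:
        node = NODES[_next[0] % len(NODES)]  # assign a fresh backend
        _next[0] += 1
    _table[src_ip] = (node, now + TIMEOUT)   # refresh on every packet
    return node

# All packets of one MFA transaction share the VPN's source IP and
# arrive within the timeout, so they all land on the same node:
first = pick_node("10.0.0.10", now=0)
print(pick_node("10.0.0.10", now=30) == first)  # True
```

This is why the timeout matters: if it were shorter than the user’s OTP entry time, the entry could expire mid-transaction and the follow-up request could be reassigned.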
## Rule 2: Symmetric return path

### Why it is required

RADIUS uses UDP. The VPN sends its Access-Request to the load balancer’s VIP address (e.g. 10.0.0.50). When the VPN receives a response, it checks the source IP — it expects the response to come from 10.0.0.50, the same address it sent the request to.
If Mideye Server responds directly to the VPN instead of routing through the load balancer, the response arrives with Mideye’s own IP as the source. The VPN silently drops the packet because the source IP does not match.
In the correct flow, the response returns through the load balancer, so the VPN sees the VIP as the source. In the broken flow, Mideye responds directly to the VPN (asymmetric routing) and the response is discarded.
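The VPN’s source check can be sketched in a few lines (the VIP 10.0.0.50 is from the example above; the direct Mideye address is a hypothetical illustration):

```python
# Sketch: UDP response validation as the VPN client performs it.
# A response is only accepted if its source IP matches the address
# the request was sent to (the load balancer VIP).

VIP = "10.0.0.50"  # address the VPN sent the Access-Request to

def vpn_accepts(response_src_ip):
    return response_src_ip == VIP

print(vpn_accepts("10.0.0.50"))  # True  — response came back via the LB
print(vpn_accepts("10.0.1.21"))  # False — Mideye replied directly; dropped
```

Note that the drop is silent: from the VPN’s point of view the request simply times out, which is why this failure mode is hard to spot without a packet capture.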
### How to fix it

Ensure that the load balancer replaces the source IP when forwarding requests to Mideye, so Mideye’s response is routed back through the load balancer:
| Method | Description |
|---|---|
| SNAT / source NAT (recommended) | The LB rewrites the source IP to its own internal IP (e.g. SNIP). Mideye responds to that IP, which routes back through the LB. This is the default behavior on most load balancers. |
| Do NOT enable “Use Source IP” / USIP / transparent mode | If enabled, the LB preserves the VPN’s original IP. Mideye may then route the response directly to the VPN, bypassing the LB. Keep this disabled. |
| Static route on Mideye | If SNAT is not available, add a static route on each Mideye Server so that traffic destined for the VPN’s subnet is routed via the load balancer’s internal IP. |
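The SNAT method above can be sketched as a simple packet rewrite (all addresses are hypothetical examples):

```python
# Sketch: how SNAT keeps the return path symmetric. The LB rewrites the
# packet's source address to its own internal IP (SNIP), so Mideye
# addresses its reply to the LB, never directly to the VPN.

SNIP = "10.0.1.5"       # LB internal IP (hypothetical)
VPN_IP = "192.0.2.10"   # VPN source IP (hypothetical)

def forward_with_snat(packet):
    # packet: {"src": ..., "dst": ...}; the LB remembers the original
    # source so it can NAT the reply back to the VPN.
    return {"src": SNIP, "dst": packet["dst"], "orig_src": packet["src"]}

fwd = forward_with_snat({"src": VPN_IP, "dst": "10.0.1.21"})
print(fwd["src"])  # 10.0.1.5 — Mideye replies here, i.e. back through the LB
```

With USIP/transparent mode, `fwd["src"]` would remain the VPN’s IP, and Mideye would route its reply straight to the VPN, producing exactly the broken flow described above.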
### Verify with a packet capture

If authentication fails silently, capture RADIUS traffic on the Mideye Server:

```shell
# Linux — watch RADIUS traffic on port 1812
tcpdump -i any udp port 1812 -nn
```

You should see:
- Requests arriving from the load balancer’s internal IP (not from the VPN’s IP)
- Responses going to the load balancer’s internal IP (not directly to the VPN)
If you see Mideye responding directly to the VPN IP, the return path is broken.
## Mideye Server configuration

### RADIUS shared secrets

When a load balancer sits between the VPN and Mideye, configure the shared secret in Mideye using the load balancer’s internal IP (the IP that Mideye sees as the source), not the VPN’s IP.
| Setting | Value |
|---|---|
| Source IP in RADIUS Shared Secrets | The load balancer’s internal/SNIP address (the IP that Mideye sees in incoming packets) |
| Shared Secret | Must match the secret configured on the VPN/NAS for the LB VIP |
If you have multiple Mideye nodes, the shared secret must be identical on all nodes. In Mideye Server 6 with a shared database, see Encrypted RADIUS Shared Secrets for keystore requirements.
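The reason mismatched secrets fail silently follows from how RADIUS authenticates responses. Per RFC 2865, the Response Authenticator is MD5 over the response header, the Request Authenticator, the attributes, and the shared secret; the NAS recomputes it and discards any response that fails the check. A sketch of that computation:

```python
# Sketch: RFC 2865 Response Authenticator =
#   MD5(Code + ID + Length + RequestAuthenticator + Attributes + Secret)
# A node configured with the wrong secret produces a response the
# NAS cannot validate, so authentication fails with no error message.
import hashlib
import struct

def response_authenticator(code, ident, attrs, request_auth, secret):
    length = 20 + len(attrs)                    # 20-byte header + attributes
    header = struct.pack("!BBH", code, ident, length)
    return hashlib.md5(header + request_auth + attrs + secret).digest()

req_auth = bytes(16)  # placeholder 16-byte Request Authenticator
good = response_authenticator(2, 1, b"", req_auth, b"s3cret")
bad = response_authenticator(2, 1, b"", req_auth, b"other")
print(good == bad)  # False — a node with a different secret fails validation
```

This is why every node behind the load balancer must hold the identical secret: any node answering with a different secret looks, to the NAS, like a forged response.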
### RADIUS client configuration

Create a RADIUS Client entry that matches the load balancer:
| Setting | Value |
|---|---|
| NAS IP Address | The load balancer’s internal IP (or leave blank if identifying by source IP) |
| Authentication Server | The RADIUS server profile (typically listening on port 1812) |
### Client identification

By default, Identify RADIUS Client By Source IP is enabled in RADIUS Server settings. This means Mideye identifies the RADIUS client based on the source IP of the incoming packet — which will be the load balancer’s IP, not the VPN’s IP. This is correct behavior when a load balancer is in the path.
If you need to distinguish multiple VPNs behind the same load balancer, you can:
- Disable “Identify RADIUS Client By Source IP” and use NAS-IP-Address (attribute 4) or NAS-Identifier (attribute 32) instead
- Use separate RADIUS listener ports (one per VPN) and configure a RADIUS server entry per port — see RADIUS Servers — multiple login points behind same source IP
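For the attribute-based option, RADIUS attributes are encoded as type/length/value triples (type 1 byte, length 1 byte including the 2-byte header, then the value). A sketch of extracting NAS-Identifier (attribute 32) from an attribute payload, with a made-up identifier value:

```python
# Sketch: parsing RADIUS attribute TLVs to read NAS-Identifier (32),
# which lets the server tell VPNs apart even when all packets arrive
# from the same load balancer source IP.

NAS_IP_ADDRESS = 4
NAS_IDENTIFIER = 32

def parse_attrs(data):
    attrs, i = {}, 0
    while i + 2 <= len(data):
        atype, alen = data[i], data[i + 1]   # type, total length
        attrs[atype] = data[i + 2:i + alen]  # value bytes
        i += alen
    return attrs

# One attribute: NAS-Identifier = "vpn-a" (hypothetical identifier)
payload = bytes([NAS_IDENTIFIER, 2 + 5]) + b"vpn-a"
print(parse_attrs(payload)[NAS_IDENTIFIER])  # b'vpn-a'
```

Whether the VPN actually sends NAS-Identifier (and with what value) depends on the NAS vendor, so check the VPN’s RADIUS settings before relying on it.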
## High availability with shared database

Load balancing distributes traffic, but for a fully redundant Mideye deployment you also need a shared database so that configuration changes propagate to all nodes.
See Shared Database Clusters for the complete setup, including:
- Connecting multiple Mideye Servers to the same database
- Designating a Cluster Leader for scheduled database cleanup
- Copying the keystore and keystore password for encrypted RADIUS shared secrets (Mideye Server 6)
- Manual steps required when adding RADIUS servers or certificates
### Cluster leader requirement

Only one Mideye node in the cluster should run scheduled database cleanup jobs. This node is referred to as the Cluster Leader. All other nodes must set `cluster-leader: false` in their `application-prod.yml`:

```yaml
application:
  cluster-leader: false
```

Default path to `application-prod.yml`:

- Linux: `/opt/mideyeserver6/config/application-prod.yml`
- Windows: `C:\Program Files (x86)\Mideye Server 6\config\application-prod.yml`
See Cluster Settings for the full configuration reference.
## Health monitoring

Configure the load balancer to probe each Mideye node and remove unhealthy nodes from the pool automatically.
| Method | How it works |
|---|---|
| RADIUS health check (recommended) | Send a RADIUS Access-Request with a test user. Expect response code 3 (Access-Reject) to confirm the RADIUS service is running. An Access-Reject proves the full RADIUS stack is operational. |
| HTTP health check | Probe the Health Check API at /management/health. Returns HTTP 200 when the server and database are healthy. This checks the web service but does not directly verify RADIUS. |
| TCP port check | Verify that UDP port 1812 is open. Basic availability check only — does not confirm RADIUS is functional. |
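The logic behind the recommended RADIUS health check can be sketched as follows (the first byte of a RADIUS packet is the code; 2 = Access-Accept, 3 = Access-Reject, 11 = Access-Challenge — any of them proves the RADIUS stack answered, while a timeout means the node should leave the pool):

```python
# Sketch: classifying a RADIUS health-probe result. For a probe with a
# deliberately invalid test user, code 3 (Access-Reject) is the expected
# healthy answer; silence (timeout) marks the node as down.

HEALTHY_CODES = {2, 3, 11}  # Accept, Reject, Challenge — RADIUS is alive

def radius_service_up(response):
    if not response:               # no reply within the probe timeout
        return False
    return response[0] in HEALTHY_CODES

# Minimal Access-Reject: code 3, id 1, length 20, 16-byte authenticator
reject = bytes([3, 1, 0, 20]) + bytes(16)
print(radius_service_up(reject))  # True
print(radius_service_up(None))    # False
```

In a real monitor the probe would be a UDP Access-Request sent to port 1812 with the configured shared secret; the sketch only shows how the reply is interpreted.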
## Load balancer requirements checklist

Use this checklist when configuring any load balancer for Mideye RADIUS traffic:
- Source-IP persistence enabled with timeout ≥ 120 seconds
- SNAT / source NAT enabled (load balancer replaces source IP when forwarding to Mideye)
- “Use Source IP” / USIP / transparent mode disabled
- RADIUS shared secret in Mideye matches the VPN’s secret and uses the LB internal IP as source
- RADIUS client entry in Mideye configured for the LB internal IP
- All Mideye nodes share the same database — see Shared Database Clusters
- All Mideye nodes have the same keystore and keystore password (Mideye Server 6)
- One node designated as Cluster Leader, all others set to `cluster-leader: false`
- Health monitor configured on the load balancer to detect failed Mideye nodes
- Return path verified with packet capture — responses go through the LB, not directly to VPN
## Troubleshooting

| Symptom | Likely cause | Check |
|---|---|---|
| User enters OTP but authentication fails | Session persistence not configured — OTP response went to a different Mideye node | Verify source-IP persistence is enabled on the LB. Check Mideye logs on both nodes — if the Access-Challenge is on node A but the OTP attempt is on node B, persistence is broken. |
| Authentication times out (no accept or reject in VPN) | Asymmetric return path — Mideye responds directly to VPN, bypassing the LB | Run tcpdump on Mideye — if responses go directly to the VPN IP instead of the LB, fix the routing. Enable SNAT or add a static route. |
| “Unknown RADIUS client” in Mideye log | Shared secret source IP mismatch | The shared secret in Mideye must use the LB’s internal IP (the IP Mideye sees), not the VPN’s IP. |
| Authentication works on one node but not the other | Keystore or shared secret mismatch between nodes | In Mideye Server 6, copy keystore.pfx and key-store-password from the first node to all others. See Encrypted RADIUS Shared Secrets. |
| “maximum pending requests” rejections | LB sends all traffic to one node | Verify the LB is distributing traffic. Check that all backend services show as UP. |
## Vendor-specific guides

| Load balancer | Guide |
|---|---|
| Citrix ADC (NetScaler) | Citrix ADC RADIUS Load Balancing — full CLI and GUI walkthrough |
## Related links

- Shared Database Clusters — multi-server clustering with shared database
- RADIUS Shared Secrets — configure source IP and shared secret
- RADIUS Clients — NAS IP Address and client identification
- RADIUS Servers — listener profiles and “Identify RADIUS Client By Source IP”
- Networking Requirements — full port and protocol reference
- Server Monitoring — Health Check API for load balancer probes
- Support Center — contact Mideye support