
Load Balance Mideye for High Availability & RADIUS

This guide covers the network and Mideye Server requirements for placing a load balancer between your VPN/RADIUS clients and multiple Mideye Server nodes. It applies to any load balancer (F5, HAProxy, Citrix ADC, etc.) — for a Citrix ADC step-by-step walkthrough, see Citrix ADC RADIUS Load Balancing.


| Goal | How load balancing helps |
| --- | --- |
| High availability | If one Mideye Server goes down, the remaining nodes continue to serve authentication requests |
| Load distribution | Spread RADIUS traffic across multiple nodes |
| Zero-downtime upgrades | Drain one node, upgrade it, and return it to the pool |

*Architecture: the VPN/NAS (RADIUS client) sends RADIUS traffic to the load balancer's VIP on port 1812; the load balancer distributes requests across the Mideye cluster (Mideye Server A and Mideye Server B), which use a shared database.*

Two rules are critical for this architecture to work:

  1. Session persistence — challenge-response packets must stick to the same Mideye node
  2. Symmetric return path — RADIUS responses must travel back through the load balancer

Both are explained in detail below.


Rule 1: Session persistence (sticky sessions)


Mideye Server uses the RADIUS Access-Challenge mechanism for multi-factor authentication. A typical MFA login involves multiple round-trips:

1. User → VPN/Firewall: Login (username + password)
2. VPN → Load Balancer: Access-Request
3. Load Balancer → Mideye Server A: Forward (sticky session)
4. Mideye Server A: Validate password, generate OTP
5. Mideye Server A → Load Balancer: Access-Challenge ("Enter OTP")
6. Load Balancer → VPN: Access-Challenge
7. VPN → User: Prompt for OTP
8. User → VPN: Enter OTP
9. VPN → Load Balancer: Access-Request (with OTP)
10. Load Balancer → Mideye Server A: Forward (same node — sticky!)
11. Mideye Server A: Verify OTP ✓
12. Mideye Server A → Load Balancer: Access-Accept
13. Load Balancer → VPN: Access-Accept → Access granted

Steps 3 and 10 must reach the same Mideye node. The challenge state (which OTP was generated, which user is mid-authentication) exists only in the memory of the node that created it. If step 10 lands on Mideye B instead of Mideye A, Mideye B has no knowledge of the challenge and the authentication fails.

Configure source-IP persistence (also called “sticky sessions” or “session affinity”) on the load balancer:

| Setting | Recommended value | Notes |
| --- | --- | --- |
| Persistence type | Source IP | Ensures all packets from the same VPN arrive at the same Mideye node |
| Persistence timeout | 120 seconds (minimum) | Must exceed the longest MFA timeout (Touch Accept wait = 25 s, plus user think time) |

Source-IP persistence works because the VPN’s IP address remains constant across all RADIUS packets in a single authentication transaction.
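To illustrate why stickiness matters, here is a minimal, hypothetical sketch (not Mideye code; node names, the VPN address, and the OTP are placeholders) of a source-IP persistence table in front of two nodes that hold challenge state only in local memory:

```python
import time

NODES = ["mideye-a", "mideye-b"]
PERSISTENCE_TIMEOUT = 120  # seconds, per the recommendation above

persistence = {}                           # source IP -> (node, last-seen time)
challenge_state = {n: {} for n in NODES}   # per-node, in-memory OTP state
rr = 0                                     # round-robin counter for new sources

def pick_node(src_ip):
    """Sticky selection: reuse the node bound to this source IP while the
    persistence entry is fresh, otherwise pick the next node round-robin."""
    global rr
    entry = persistence.get(src_ip)
    now = time.monotonic()
    if entry and now - entry[1] < PERSISTENCE_TIMEOUT:
        node = entry[0]
    else:
        node = NODES[rr % len(NODES)]
        rr += 1
    persistence[src_ip] = (node, now)
    return node

# First round-trip: password validated, OTP challenge created on one node.
node = pick_node("192.0.2.10")             # VPN's source IP (example address)
challenge_state[node]["alice"] = "483920"  # OTP exists only in this node's memory

# Second round-trip: the OTP answer must land on the same node.
node2 = pick_node("192.0.2.10")
assert node2 == node                                     # sticky: same node
assert challenge_state[node2].get("alice") == "483920"   # state found, auth can succeed
```

Without the persistence table, the second request could round-robin to the other node, whose `challenge_state` has no entry for the user, and the authentication would fail.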


Rule 2: Symmetric return path

RADIUS uses UDP. The VPN sends its Access-Request to the load balancer’s VIP address (e.g. 10.0.0.50). When the VPN receives a response, it checks the source IP — it expects the response to come from 10.0.0.50, the same address it sent the request to.

If Mideye Server responds directly to the VPN instead of routing through the load balancer, the response arrives with Mideye’s own IP as the source. The VPN silently drops the packet because the source IP does not match.

Correct flow — response returns through the LB:

1. VPN → LB (VIP 10.0.0.50): Access-Request
2. LB → Mideye A (10.0.1.10): Forward (src = LB internal IP)
3. Mideye A → LB internal IP: Access-Accept
4. LB → VPN: Access-Accept from 10.0.0.50 ✓ (the VPN expects replies from 10.0.0.50)

Broken flow — Mideye responds directly to VPN (asymmetric routing):

1. VPN → LB (VIP 10.0.0.50): Access-Request
2. LB → Mideye A (10.0.1.10): Forward (src = VPN IP — transparent)
3. Mideye A → VPN: Access-Accept from 10.0.1.10 ✗ DROPPED (the VPN expects replies from 10.0.0.50)

Ensure that the load balancer replaces the source IP when forwarding requests to Mideye, so Mideye’s response is routed back through the load balancer:

| Method | Description |
| --- | --- |
| SNAT / source NAT (recommended) | The LB rewrites the source IP to its own internal IP (e.g. SNIP). Mideye responds to that IP, which routes back through the LB. This is the default behavior on most load balancers. |
| Do NOT enable “Use Source IP” / USIP / transparent mode | If enabled, the LB preserves the VPN’s original IP. Mideye may then route the response directly to the VPN, bypassing the LB. Keep this disabled. |
| Static route on Mideye | If SNAT is not available, add a static route on each Mideye Server so that traffic destined for the VPN’s subnet is routed via the load balancer’s internal IP. |
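As a rough illustration of why SNAT keeps the return path symmetric, the following sketch (all addresses are example values; this models packet addressing only, not a real network stack) walks through the VPN's source-IP check in both modes:

```python
VIP = "10.0.0.50"          # address the VPN sends requests to (example)
LB_INTERNAL = "10.0.0.51"  # LB's internal/SNIP address (example)
VPN_IP = "203.0.113.7"     # VPN's own address (example)
MIDEYE_A = "10.0.1.10"     # backend Mideye node (example)

def lb_forward(packet, snat=True):
    """Forward a request to the backend. With SNAT the LB rewrites the
    source, so the backend replies to the LB instead of the VPN."""
    src = LB_INTERNAL if snat else packet["src"]  # transparent mode keeps VPN IP
    return {"src": src, "dst": MIDEYE_A, "payload": packet["payload"]}

def mideye_reply(request):
    # The server simply replies to whatever source address it saw.
    return {"src": MIDEYE_A, "dst": request["src"], "payload": "Access-Accept"}

def vpn_accepts(reply):
    # UDP RADIUS clients drop replies whose source != the VIP they queried.
    return reply["src"] == VIP

request = {"src": VPN_IP, "dst": VIP, "payload": "Access-Request"}

# With SNAT: reply returns to the LB, which forwards it from the VIP.
good = mideye_reply(lb_forward(request, snat=True))
assert good["dst"] == LB_INTERNAL           # routes back through the LB
assert vpn_accepts({"src": VIP, "dst": VPN_IP, "payload": good["payload"]})

# Without SNAT (transparent mode): Mideye replies straight to the VPN.
bad = mideye_reply(lb_forward(request, snat=False))
assert bad["dst"] == VPN_IP                 # bypasses the LB
assert not vpn_accepts(bad)                 # dropped: source is 10.0.1.10, not the VIP
```

The same reasoning applies to the static-route workaround: the route forces the reply's first hop back to the LB so it can be re-sourced from the VIP.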

If authentication fails silently, capture RADIUS traffic on the Mideye Server:

```shell
# Linux — watch RADIUS traffic on port 1812
tcpdump -i any udp port 1812 -nn
```

You should see:

  • Requests arriving from the load balancer’s internal IP (not from the VPN’s IP)
  • Responses going to the load balancer’s internal IP (not directly to the VPN)

If you see Mideye responding directly to the VPN IP, the return path is broken.


RADIUS shared secret

When a load balancer sits between the VPN and Mideye, configure the shared secret in Mideye using the load balancer’s internal IP (the IP that Mideye sees as the source), not the VPN’s IP.

| Setting | Value |
| --- | --- |
| Source IP in RADIUS Shared Secrets | The load balancer’s internal/SNIP address (the IP that Mideye sees in incoming packets) |
| Shared Secret | Must match the secret configured on the VPN/NAS for the LB VIP |

If you have multiple Mideye nodes, the shared secret must be identical on all nodes. In Mideye Server 6 with a shared database, see Encrypted RADIUS Shared Secrets for keystore requirements.

Create a RADIUS Client entry that matches the load balancer:

| Setting | Value |
| --- | --- |
| NAS IP Address | The load balancer’s internal IP (or leave blank if identifying by source IP) |
| Authentication Server | The RADIUS server profile (typically port 1812) |

By default, Identify RADIUS Client By Source IP is enabled in RADIUS Server settings. This means Mideye identifies the RADIUS client based on the source IP of the incoming packet — which will be the load balancer’s IP, not the VPN’s IP. This is correct behavior when a load balancer is in the path.

Distinguishing multiple VPNs behind the same load balancer requires additional configuration, since all requests arrive with the load balancer’s source IP rather than each VPN’s own IP.


Shared database

Load balancing distributes traffic, but for a fully redundant Mideye deployment you also need a shared database so that configuration changes propagate to all nodes.

See Shared Database Clusters for the complete setup, including:

  • Connecting multiple Mideye Servers to the same database
  • Designating a Cluster Leader for scheduled database cleanup
  • Copying the keystore and keystore password for encrypted RADIUS shared secrets (Mideye Server 6)
  • Manual steps required when adding RADIUS servers or certificates

Only one Mideye node in the cluster should run scheduled database cleanup jobs. This node is referred to as the Cluster Leader. All other nodes must set cluster-leader: false in their application-prod.yml:

```yaml
application:
  cluster-leader: false
```

Default path to application-prod.yml:

  • Linux: /opt/mideyeserver6/config/application-prod.yml
  • Windows: C:\Program Files (x86)\Mideye Server 6\config\application-prod.yml

See Cluster Settings for the full configuration reference.


Health monitoring

Configure the load balancer to probe each Mideye node and remove unhealthy nodes from the pool automatically.

| Method | How it works |
| --- | --- |
| RADIUS health check (recommended) | Send a RADIUS Access-Request with a test user and expect response code 3 (Access-Reject). An Access-Reject proves the full RADIUS stack is operational. |
| HTTP health check | Probe the Health Check API at /management/health. Returns HTTP 200 when the server and database are healthy. This checks the web service but does not directly verify RADIUS. |
| TCP port check | Verify that UDP port 1812 is open. Basic availability check only — does not confirm RADIUS is functional. |
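For load balancers that support scripted monitors, the probe packet can be built per RFC 2865. The sketch below is a hypothetical illustration of the packet format such a monitor would send (the test username, password, and shared secret are placeholders, not values from this guide):

```python
import hashlib
import os
import struct

def build_access_request(username: str, password: str, secret: bytes) -> bytes:
    """Build a minimal RADIUS Access-Request (RFC 2865).
    A healthy server answers with Access-Accept (2) or Access-Reject (3)."""
    authenticator = os.urandom(16)  # 16-byte Request Authenticator

    # Attribute: User-Name (type 1)
    user_attr = bytes([1, 2 + len(username)]) + username.encode()

    # Attribute: User-Password (type 2), obfuscated per RFC 2865 §5.2:
    # pad to 16 bytes, then XOR with MD5(secret + authenticator)
    padded = password.encode().ljust(16, b"\x00")
    mask = hashlib.md5(secret + authenticator).digest()
    hidden = bytes(p ^ m for p, m in zip(padded, mask))
    pass_attr = bytes([2, 2 + len(hidden)]) + hidden

    attrs = user_attr + pass_attr
    length = 20 + len(attrs)  # 20-byte header + attributes
    header = struct.pack("!BBH", 1, 1, length)  # code=1 (Access-Request), id=1
    return header + authenticator + attrs

packet = build_access_request("healthcheck", "dummy", b"sharedsecret")
assert packet[0] == 1                                       # code: Access-Request
assert len(packet) == struct.unpack("!H", packet[2:4])[0]   # length field matches
```

A monitor would send this packet over UDP to port 1812 and, as described in the table above, treat response code 3 (Access-Reject) for the unknown test user as proof that the RADIUS service is up.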

Configuration checklist

Use this checklist when configuring any load balancer for Mideye RADIUS traffic:

  • Source-IP persistence enabled with timeout ≥ 120 seconds
  • SNAT / source NAT enabled (load balancer replaces source IP when forwarding to Mideye)
  • “Use Source IP” / USIP / transparent mode disabled
  • RADIUS shared secret in Mideye matches the VPN’s secret and uses the LB internal IP as source
  • RADIUS client entry in Mideye configured for the LB internal IP
  • All Mideye nodes share the same database — see Shared Database Clusters
  • All Mideye nodes have the same keystore and keystore password (Mideye Server 6)
  • One node designated as Cluster Leader, all others set to cluster-leader: false
  • Health monitor configured on the load balancer to detect failed Mideye nodes
  • Return path verified with packet capture — responses go through the LB, not directly to VPN

Troubleshooting

| Symptom | Likely cause | Check |
| --- | --- | --- |
| User enters OTP but authentication fails | Session persistence not configured — OTP response went to a different Mideye node | Verify source-IP persistence is enabled on the LB. Check Mideye logs on both nodes — if the Access-Challenge is on node A but the OTP attempt is on node B, persistence is broken. |
| Authentication times out (no accept or reject in VPN) | Asymmetric return path — Mideye responds directly to the VPN, bypassing the LB | Run tcpdump on Mideye — if responses go directly to the VPN IP instead of the LB, fix the routing. Enable SNAT or add a static route. |
| “Unknown RADIUS client” in Mideye log | Shared secret source IP mismatch | The shared secret in Mideye must use the LB’s internal IP (the IP Mideye sees), not the VPN’s IP. |
| Authentication works on one node but not the other | Keystore or shared secret mismatch between nodes | In Mideye Server 6, copy keystore.pfx and key-store-password from the first node to all others. See Encrypted RADIUS Shared Secrets. |
| “maximum pending requests” rejections | LB sends all traffic to one node | Verify the LB is distributing traffic and that all backend services show as UP. |

Related guides

| Load balancer | Guide |
| --- | --- |
| Citrix ADC (NetScaler) | Citrix ADC RADIUS Load Balancing — full CLI and GUI walkthrough |