During the 2025 holiday season, I traveled to India to visit family. Like any good engineer, I expected to stay connected to my homelab infrastructure back in America - accessing my Kubernetes clusters, using exit nodes for secure browsing, and managing various services. What I didn't expect was just how painful that experience would become once international network conditions left me relying on Tailscale's default relay infrastructure.
The Problem: DERP Across Oceans
Tailscale typically establishes direct peer-to-peer connections between devices using NAT traversal. When direct connections fail (due to restrictive firewalls, CGNAT, or other network conditions), traffic falls back to DERP (Designated Encrypted Relay for Packets) servers - Tailscale’s managed relay infrastructure that also assists with NAT traversal and connection establishment.
DERP servers work reliably, but they’re shared infrastructure - serving all Tailscale users who need relay assistance. They’re optimized for availability and broad coverage, not raw throughput for individual connections. When you’re in Delhi, India and trying to connect to infrastructure in Chicago, your traffic routes through a DERP server—sharing capacity with other users and subject to throughput limits that ensure fair access for everyone.
The real problem became apparent when I ran iperf3 tests. Sending all my traffic through DERP, across an ocean, resulted in severely throttled throughput averaging 2.2 Mbits/sec:
iperf3 TCP throughput test from Delhi to Chicago over DERP. The wildly variable sender bitrate reflects DERP’s QoS shaping the connection. The receiver total (31.5 MB over 120 seconds) tells the real story: ~2.2 Mbits/sec sustained.
The connection was barely usable for anything beyond basic SSH - despite my ISP connection testing at 30-40 Mbps to international destinations under normal conditions.
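For anyone reproducing this kind of measurement, the numbers above came from a standard long-running iperf3 session. A minimal sketch, assuming an iperf3 server is already listening on the Chicago node (`chicago-node` is a placeholder for its Tailscale IP or MagicDNS name):

```shell
# On the Chicago node: start an iperf3 server (listens on TCP 5201 by default).
iperf3 -s

# From the laptop in Delhi: run a 120-second TCP throughput test.
# Add --json for machine-readable interval and summary statistics.
iperf3 -c chicago-node -t 120
```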
Why Direct Connections Failed
India’s residential networks are notoriously difficult for peer-to-peer connectivity. Carrier-grade NAT (CGNAT), strict firewalls, and asymmetric routing meant Tailscale couldn’t establish direct connections between my laptop in Delhi and my infrastructure in Chicago. Every connection attempt fell back to DERP relay.
That's exactly what DERP is designed for - a reliable fallback when direct connections aren't possible. For occasional relay traffic, it works great. But for sustained high-bandwidth connections across oceans, DERP's fair-use throughput limits on shared capacity become the bottleneck.
The core issue wasn’t the network path—it was that DERP’s shared capacity couldn’t deliver the throughput I needed for productive work.
The Solution: Peer Relays
Peer relays, introduced in October 2025, offer an alternative: instead of using Tailscale’s managed DERP infrastructure, you can designate your own nodes as high-throughput traffic relays within your tailnet.
The key difference is capacity. Where DERP servers are shared and rate-limited, peer relays are dedicated to your tailnet - no competing for bandwidth, no QoS throttling. You get the full capacity of whatever node you designate as a relay.
How Peer Relays Work
Peer relays function similarly to DERP servers but run on your own infrastructure:
- A node with good connectivity is configured as a peer relay
- When direct connections fail between other nodes, Tailscale routes through your peer relay instead of DERP
- Traffic remains end-to-end encrypted via WireGuard—the relay only sees encrypted packets
- Tailscale maintains preference hierarchy: direct connection > peer relay > DERP
The critical advantage is dedicated relay capacity. For bandwidth-intensive use cases like file transfers, media streaming, or in my case, managing remote infrastructure, peer relays remove the throughput ceiling that shared DERP infrastructure imposes.
Setting Up Peer Relays
The relay server requires a single UDP port to be accessible. For nodes behind NAT, this typically means port forwarding on your router or configuring security groups in cloud environments.
For my setup, I deployed a peer relay on one of my homelab nodes in Chicago: a residential fiber connection that serves as the hub for my distributed infrastructure. This network hosts my Kubernetes clusters, exit nodes, and various services. Since the peer relay runs on the same network as the services I'm trying to reach, traffic from India now routes through infrastructure I control rather than public DERP servers.
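For reference, a sketch of the relay-side setup. The flag name and port below are assumptions to verify against `tailscale set --help` on your client version, and your tailnet policy file may additionally need a grant permitting nodes to use the relay - check the peer relay documentation for the exact policy syntax:

```shell
# On the relay node: designate it as a peer relay listening on UDP 40000.
# (Port choice is arbitrary; flag name per the peer relay docs.)
sudo tailscale set --relay-server-port=40000

# The chosen UDP port must be reachable from outside. Behind NAT, forward
# it on the router; on a Linux host running ufw, also open it locally:
sudo ufw allow 40000/udp
```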
The Results: 12.5x Improvement
After configuring peer relays, I ran the same iperf3 tests. The difference was dramatic:
Same test with peer relays enabled. The first few seconds show TCP slow start ramping up (normal behavior), then throughput stabilizes at 30-50 Mbits/sec—nearly saturating the available bandwidth.
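To sanity-check which path a connection is actually taking, the standard Tailscale CLI tools report per-peer connectivity. The exact relay annotation in the output varies by client version, and `chicago-node` is again a placeholder hostname:

```shell
# Lists peers and whether each connection is direct, relayed, or idle.
tailscale status

# Sends disco pings to the peer and reports the path each reply took
# (a direct endpoint, a DERP region, or a relay).
tailscale ping chicago-node
```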
Results comparison:
| Metric | Without Peer Relays (DERP) | With Peer Relays |
|---|---|---|
| Average Throughput | 2.2 Mbits/sec | 27.5 Mbits/sec |
| Total Transfer (120s) | 31.5 MB | 394 MB |
| Throughput Variability | High (QoS shaping) | Low |
| Connection Stability | Throttled | Consistent |
That’s a 12.5x improvement in throughput, plus dramatically more stable connections. The peer relay test showed consistent 30-50 Mbits/sec intervals with no sustained dropout periods.
Why This Matters
DERP does its job well—it provides reliable relay connectivity for millions of Tailscale users. But “reliable” and “high-throughput” are different goals. Peer relays give you dedicated capacity when you need sustained bandwidth that shared infrastructure can’t provide.
Other Use Cases
- Large file transfers - Move data between devices without DERP throughput limits
- Media streaming - Stream video or audio between tailnet devices smoothly
- Enterprise environments - Keep relay traffic within your network perimeter
- Performance-critical applications - Guarantee relay performance by controlling the relay infrastructure
Conclusion
Peer relays transformed my holiday connectivity from barely usable to genuinely productive. Instead of waiting 30-plus seconds for kubectl commands to complete or giving up on transferring files to my homelab, I could work almost as efficiently as if I were back home.
If you’ve noticed DERP-relayed connections underperforming for bandwidth-intensive tasks, peer relays are worth exploring. The setup is minimal—designate a node, open a UDP port—and you get dedicated relay capacity that isn’t subject to shared infrastructure limits.
DERP remains the reliable fallback for the majority of relay scenarios. But when you need high throughput, peer relays fill that gap.
