FRRouting (FRR) is no longer a niche open-source curiosity. It is the routing stack inside SONiC — the open NOS running inside hyperscaler AI clusters at Meta, Microsoft, Google, and Alibaba. It is the BGP implementation pushing routes all the way to individual GPU servers in 100,000-node AI factories. It is what MANET researchers at defense labs use to test Babel, OSPF, and IS-IS convergence under adversarial conditions. And it is the protocol suite that enterprise networking teams evaluate when they want to move off expensive proprietary vendor stacks.
FRR supports more routing protocols than most commercial routers: BGP (with full RFC compliance), OSPF (v2 and v3), IS-IS (Level 1 and Level 2), Babel, RIP, LDP, PIM, BFD, SRv6, SR-MPLS, EIGRP, NHRP, PBR, and VRRP. All open-source, all containerizable, all deployable in a NetPilot cloud lab from a plain-English prompt in under two minutes.
## The problem with FRR labs today
The existing FRR lab tooling is good but requires significant setup overhead:
| Tool | Approach | Setup overhead |
|---|---|---|
| netlab + containerlab | YAML topology + FRR containers | 1-2 hours (Linux host, Docker, netlab install, FRR image) |
| GitHub repos (e.g., clab-bgp-frr) | Clone + customize | 30-60 min per lab, skills required |
| GNS3 / EVE-NG | Import FRR appliance | GUI setup, performance limits |
| Manual containerlab | Write YAML + configs by hand | Protocol expertise required to configure each daemon |
| NetPilot | Describe in plain English → AI deploys | Zero (cloud-hosted, AI-configured) |
For engineers who just want to try FRR's EVPN implementation, test whether Babel outperforms OSPF in a lossy mesh, or evaluate SRv6 for an AI cluster fabric — the setup tax of existing tools is a real barrier. NetPilot removes it.
**The shift:** Ask the agent what you want to build — "an FRR IS-IS lab with two Level-1 areas and one Level-2 backbone" — and it deploys a working lab with real FRR containers in under two minutes. Real vtysh access, real daemon configs, real protocol behavior.
## BGP — the workhorse
FRR's BGP implementation is one of the most RFC-compliant stacks on the market and the default BGP daemon in SONiC, Cumulus, and DENT.
Ask the agent:
"Build a 4-router eBGP lab with FRR. AS 65001 (R1) peers with AS 65002 (R2), AS 65003 (R3), and AS 65004 (R4). R1 advertises 10.1.0.0/24. Configure BGP communities and route policies so R2 accepts all prefixes, R3 accepts only prefixes with community 65001:100, and R4 rejects everything. Verify convergence."
The agent configures all four FRR daemons, sets the policy statements, deploys to cloud ContainerLab, and confirms the expected accept/reject behavior — all without you touching a bgpd.conf file.
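For reference, the policy on R3 boils down to a community-list plus an inbound route-map in bgpd. A hand-written sketch of what that looks like (the neighbor address and route-map names here are illustrative assumptions, not necessarily what the agent generates):

```
! R3 (AS 65003): accept only prefixes carrying community 65001:100
bgp community-list standard FROM-65001 permit 65001:100
!
route-map R1-IN permit 10
 match community FROM-65001
!
route-map R1-IN deny 20
!
router bgp 65003
 neighbor 10.0.13.1 remote-as 65001
 address-family ipv4 unicast
  neighbor 10.0.13.1 route-map R1-IN in
 exit-address-family
```

The explicit deny 20 entry is optional — an unmatched prefix falls through to an implicit deny at the end of the route-map — but stating it makes the intent obvious in show output.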
**Direct CLI always available.** Ask the agent to check policy state across all routers, or SSH in and run vtysh directly:

```
vtysh
show bgp summary
show bgp neighbors 10.0.0.1 policy
show bgp ipv4 unicast community 65001:100
```

Related prompt: BGP Route Reflector Cluster
## OSPF — the enterprise backbone
FRR implements OSPFv2 (RFC 2328) and OSPFv3 (RFC 5340) with full multi-area, NSSA, stub, and totally-stubby support. For open networking teams replacing Cisco OSPF with FRR, a quick interop lab is the first validation step.
Ask the agent:
"Build a 6-router FRR OSPF lab with 3 areas: area 0 (backbone, 2 ABRs), area 1 (stub, 2 routers), area 2 (NSSA, 2 routers). Advertise a /24 from each leaf area and verify that routes summarize correctly at the ABRs. Confirm reachability end-to-end."
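The ABR side of a lab like this is a short ospfd config. A sketch for the ABR facing stub area 1 (router-id, interface prefixes, and the summary range are assumptions for illustration; an NSSA ABR would use `area 0.0.0.2 nssa` instead of the stub line):

```
! ABR between the backbone and stub area 1
router ospf
 router-id 1.1.1.1
 network 10.0.0.0/24 area 0.0.0.0
 network 10.1.0.0/24 area 0.0.0.1
 area 0.0.0.1 stub
 area 0.0.0.1 range 10.1.0.0/22
```

The `area ... range` statement is what produces the summarized /22 at the ABR — the verification step in the prompt above checks exactly that.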
Direct CLI available for OSPF database inspection:

```
vtysh
show ip ospf neighbor
show ip ospf database
show ip ospf route
```

## IS-IS — carrier and hyperscaler fabric standard
IS-IS is the IGP of choice for carrier backbones, large enterprise fabrics, and hyperscaler spine-leaf networks. FRR's IS-IS implementation supports Level-1, Level-2, and L1/L2 routers, plus SR-MPLS adjacency-SIDs for segment routing.
Ask the agent:
"Build a 6-router FRR IS-IS network with 2 Level-2 routers as the backbone and 4 Level-1 routers in 2 separate areas. Configure Segment Routing adjacency-SIDs on the backbone links. Advertise loopback /32s from each router and verify the SR-MPLS label database."
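A backbone router in this topology reduces to a few lines of isisd config. A sketch for one Level-2 router (the NET, interface name, and SID index are assumptions; with segment routing enabled, FRR allocates adjacency-SIDs on IS-IS adjacencies automatically):

```
! L2 backbone router sketch
interface eth1
 ip router isis CORE
 isis circuit-type level-2-only
!
router isis CORE
 net 49.0002.0000.0000.0001.00
 is-type level-2-only
 segment-routing on
 segment-routing global-block 16000 23999
 segment-routing prefix 10.0.0.1/32 index 1
```

The Level-1 routers differ only in `is-type level-1` and their area-specific NET, which is what keeps them from flooding L2 LSPs into the areas.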
Related blog: BGP Convergence Research Lab
## Babel — mesh networks and MANET research
Babel is the routing protocol purpose-built for lossy, dynamic mesh networks — the default choice for community mesh operators (Althea, Guifi.net), wireless mesh research (MANET), and tactical network experiments. FRR's Babel implementation supports the full RFC 8966 feature set, including link-cost-based metrics, feasibility conditions, and triggered updates.
The Navy research teams we see in our labs use Babel consistently alongside OSPF for protocol comparison studies in mesh topologies.
Ask the agent:
"Build a mesh network research lab with 8 FRR routers running both OSPF and Babel simultaneously on the same topology. Inject 15% correlated packet loss on two links between R3-R4 and R5-R6. Compare how each protocol responds to the impairment — show neighbor state and route tables for both protocols after the loss is applied."
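Running both IGPs side by side is straightforward in FRR because ospfd and babeld are separate daemons sharing the same zebra RIB. A sketch of one mesh node's config (interface name and prefix are assumptions):

```
! One router from the mesh, both protocols on the same link
router ospf
 network 10.0.34.0/24 area 0.0.0.0
!
router babel
 network eth1
```

Route selection between the two protocols then falls to administrative distance in zebra, which is itself a useful variable in comparison studies.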
Direct CLI for deep Babel inspection:

```
vtysh
show babel neighbor
show babel route
show babel interface
```

Related blog: AI-Powered MANET Research Labs
## EVPN/VXLAN — open networking and SONiC fabrics
FRR's EVPN implementation is the BGP control plane used in SONiC, Cumulus Linux, and DENT for VXLAN-based fabrics. ipSpace.net published FRR EVPN/VXLAN with IPv6 next-hops in April 2026 — a signal that the practitioner community is actively exploring FRR EVPN at the edges of the spec.
Ask the agent:
"Build a 4-node FRR EVPN/VXLAN leaf-spine fabric: 2 FRR spines (AS 65000) and 2 FRR leaves (AS 65001, AS 65002). eBGP underlay between spines and leaves. BGP EVPN address family for the overlay. Configure VNI 10100 (VLAN 100) on both leaves with distributed anycast gateway 192.168.100.1/24 and anycast MAC 0000.0000.5a5a. Place one host on each leaf and verify Type-2 and Type-3 route exchange plus host-to-host reachability."
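On the FRR side, the leaf's EVPN control plane is a short bgpd config; the VXLAN device and bridge for VNI 10100, and the anycast-gateway SVI, live on the Linux side of the container. A sketch for Leaf1 (spine neighbor addresses are assumptions):

```
! Leaf1 (AS 65001) — eBGP underlay plus EVPN overlay
router bgp 65001
 neighbor 10.0.1.0 remote-as 65000
 neighbor 10.0.2.0 remote-as 65000
 !
 address-family l2vpn evpn
  neighbor 10.0.1.0 activate
  neighbor 10.0.2.0 activate
  advertise-all-vni
 exit-address-family
```

`advertise-all-vni` is what turns locally configured VNIs into Type-2/Type-3 EVPN routes — the route exchange the prompt asks the agent to verify.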
For multi-vendor EVPN interop (FRR alongside Cisco or Arista), see Debugging Cisco-Juniper EVPN Interop Issues.
## SRv6 — AI cluster fabrics and carrier next-gen
SRv6 (Segment Routing over IPv6) is the next-generation forwarding plane for both hyperscaler AI backend networks and carrier next-gen IP cores. FRR 10.3+ supports SRv6 uSID with static SID allocation — the same feature set Alibaba, Cisco, Microsoft, and Nvidia are using in production AI backend fabrics.
Ask the agent:
"Build a 4-node FRR SRv6 lab: 2 provider-edge routers (PE1, PE2) and 2 provider-core routers (P1, P2). Configure IS-IS as the underlay with SRv6 Locator blocks on each router. Set up an L3VPN between PE1 and PE2 using SRv6 BGP as the signaling. Verify end-to-end VPN reachability and show the SRv6 SID table on each router."
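The PE-side configuration ties an SRv6 locator to the BGP VPN signaling. A sketch for PE1 (the AS number, locator prefix, VRF name, and RD/RT values are all assumptions for illustration):

```
! PE1 sketch — SRv6 locator plus L3VPN over BGP
segment-routing
 srv6
  locators
   locator MAIN
    prefix fc00:0:1::/48
!
router bgp 65001
 segment-routing srv6
  locator MAIN
!
router bgp 65001 vrf CUST
 address-family ipv4 unicast
  sid vpn export auto
  rd vpn export 65001:10
  rt vpn both 65001:10
  export vpn
  import vpn
 exit-address-family
```

`sid vpn export auto` allocates the per-VRF SID out of the locator block, which is what shows up in the SID table the prompt asks to inspect.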
Related blog: Scale Testing a 100-Node Network Fabric — uses FRR at 100-node scale for convergence testing.
## Why FRR specifically benefits from AI-built cloud labs
Three reasons FRR labs in particular benefit from the NetPilot model:
1. Multi-daemon configuration complexity. FRR isn't a monolithic binary — it's a suite of daemons (bgpd, ospfd, isisd, babeld, ldpd, etc.), each with its own config file and its own interaction with zebra. Getting a multi-protocol lab running correctly from scratch takes significant FRR expertise. The agent handles the daemon coordination, config file generation, and startup ordering.
2. Image sourcing is already solved. With physical Cisco or Juniper gear, you need licenses and images. With FRR, the image is free (Docker Hub: frrouting/frr) but sourcing, versioning, and building it into ContainerLab still takes time. NetPilot includes FRR natively on every plan — no image sourcing.
3. Protocol comparison needs reproducibility. For the research and open-networking teams using FRR, a key workflow is comparing FRR's behavior against commercial vendor implementations (Cisco IOL, Arista cEOS, Nokia SR Linux) in the same topology. A plain-English prompt handles the heterogeneous topology; NetPilot deploys all vendor containers together.
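The daemon coordination in point 1 starts with the daemons file, which controls what FRR launches at startup. A minimal sketch for a BGP + OSPF + Babel lab (the exact set of enabled daemons depends on the topology):

```
# /etc/frr/daemons
zebra=yes
bgpd=yes
ospfd=yes
babeld=yes
isisd=no
ldpd=no
```

Enabling only the daemons a lab needs keeps per-container memory down, which matters once topologies reach dozens of nodes.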
## FRR adoption landscape
For context on why this matters at enterprise scale:
- SONiC — the open NOS now running in major hyperscaler fabrics — uses FRR as its routing process. The 650 Group projects $8B in SONiC-driven switching revenue by 2027.
- BGP all the way to GPU servers — Meta, Nvidia, Microsoft, and Google have publicly described their AI cluster fabrics as pure Layer-3 BGP topologies using open software stacks, with FRR at the routing layer.
- ipSpace.net published three FRR-specific posts in early 2026, including a netlab integration guide and an EVPN/IPv6 interop post — a reliable signal that the professional network engineering community is actively building with FRR.
- MANET and tactical research — defense research labs and academic groups studying mesh and adversarial network conditions use FRR because it ships Babel natively and runs well in containerized environments.
## FAQ
### What routing protocols does FRR support?
FRR supports BGP (v4 and multi-protocol), OSPF (v2 and v3), IS-IS (Level 1 and Level 2), Babel, RIP (v1 and v2 plus RIPng), PIM, LDP, BFD, SRv6, SR-MPLS, EIGRP (alpha), NHRP, PBR, and VRRP. It is the most comprehensive open-source routing protocol suite available.
### Is FRR production-grade?
Yes. FRR is in production at hyperscalers including Meta, Microsoft, and Google (as the routing plane in SONiC and similar open stacks). Cumulus Linux and DENT ship FRR as their routing daemon. The FRRouting GitHub repository has 4000+ stars and active releases.
### How does NetPilot's FRR support compare to self-hosted netlab or containerlab?
netlab and containerlab are excellent tools that give you deep control and run locally. NetPilot is cloud-hosted (zero setup), AI-configured (describe the topology in English instead of writing YAML and configs), and includes validation orchestration. The tradeoff: less control, faster iteration. For engineers who want a working FRR lab in 2 minutes without installing anything, NetPilot is the fastest path. For teams that need a version-controlled, Git-managed lab-as-code workflow, netlab/containerlab are the right choice.
### Can I mix FRR with commercial vendor images in the same lab?
Yes. NetPilot natively supports FRR alongside Cisco IOL, Arista cEOS, Juniper cRPD, Nokia SR Linux, Palo Alto PAN-OS, and Fortinet FortiGate. Multi-vendor labs with FRR are a core use case — particularly for teams migrating from proprietary stacks to open networking, or for EVPN interop validation.
### How do I access FRR's CLI in a NetPilot lab?
Every FRR container exposes a full vtysh shell — the unified CLI for all FRR daemons. SSH into the container from the browser or your terminal and run any FRR command: show bgp summary, show ip ospf neighbor, show babel route, etc. Full direct CLI access is always available alongside the AI agent workflow.
### Can I pin a specific FRR version for reproducible research?
Yes via enterprise BYOI (bring your own image). Upload any frrouting/frr:X.Y.Z container image and NetPilot deploys that exact version in your lab. Contact sales for enterprise plan details.
Copy-paste ready: Browse FRR-specific prompts in the research and routing directories of our example library.
Running open networking or MANET research with FRR? The Network Research Lab hub covers the full enterprise research workflow — contact sales for dedicated environments and FRR version-pinning via BYOI.