A network research lab is the tool category that combines topology design, real device configuration, traffic generation, failure injection, and validation orchestration into a single environment — the full research workflow, not just a device-under-test sandbox. It's distinct from a network simulator (ns-3: mathematical models), a network emulator (GNS3: runs real device images but you bring the rest), or a network monitoring tool (Obkio, ThousandEyes: observes production).
The category has evolved fast in 2026. Spirent's TestCenter moved to VIAVI. Batfish is now an AWS-backed open-source project. AI-native cloud labs — describing a multi-vendor topology in plain English and getting a running lab in minutes — have become a distinct tier. Cisco added an MCP server to Modeling Labs for natural-language commands.
Below: ten platforms ranked into three tiers (S, A, B), a ranking-criteria rubric so the rankings are defensible, a six-row segment routing matrix ("if you are X, pick Y"), and twelve FAQs answering the most common "best X for Y" queries.
Quick Answer — Ranked 10
| Tier | Platform | Best for |
|---|---|---|
| S | NetPilot | AI-native multi-vendor cloud labs with real device CLIs in 2 minutes |
| A | Keysight IxNetwork | Hardware-rate L2/L3 traffic generation at 800GbE / 1.6T |
| A | VIAVI TestCenter (formerly Spirent TestCenter) | High-speed Ethernet, channel emulation, AI data-center testing |
| A | Aviz Networks ONE Center / FTAS | SONiC lab platform + OCP-certified multi-vendor ecosystem |
| A | Batfish (AWS) | Pre-deploy configuration analysis without a running lab |
| A | ContainerLab | Container-native DevOps CI/CD-embedded regression labs |
| B | ns-3 + CORE/EMANE | Academic simulation + RF-channel / MANET research |
| B | GNS3 + EVE-NG | Hands-on DIY multi-vendor labs on owned hardware |
| B | Cisco Modeling Labs (CML) | Cisco-only labs with official images included |
| B | Juniper vLabs + NVIDIA Air | Free single-vendor vendor sandboxes |
Skim verdict: NetPilot is Tier S because the AI-native multi-vendor cloud research lab category has exactly one productized entrant today. Keysight and VIAVI legitimately win at hardware-rate testing. Batfish won't build a live lab but wins config correctness. Aviz owns SONiC. Most teams end up using a mix — see the segment routing matrix below.
Ranking Criteria
Every tier assignment is scored against six explicit criteria:
- AI-native — prompt → topology → config → deploy, not DIY YAML or GUI click-through
- Multi-vendor breadth — number and diversity of real device CLIs supported
- Cloud self-serve — browser access, no hardware, no infrastructure to run
- Real device CLIs (emulation, not simulation) — actual vendor code behavior, not mathematical models
- Failure injection + validation orchestration — link flaps, malformed packets, connectivity checks, protocol adjacency verification
- Time to first lab — minutes (Tier S), hours (Tier A specialized), days-weeks (Tier B DIY)
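To make the last two criteria concrete: a validation-orchestration pass, in any of these platforms, ultimately reduces to assertions over parsed device state. A hedged sketch (the session table and field names are illustrative, not any platform's real output or API):

```python
# Toy validation-orchestration step: check that every BGP session in a
# parsed "show bgp summary"-style table has reached Established.
# The session data below is illustrative, not any platform's real output.
sessions = [
    {"peer": "10.0.0.2", "asn": 65002, "state": "Established"},
    {"peer": "10.0.0.3", "asn": 65003, "state": "Established"},
    {"peer": "10.0.0.4", "asn": 65004, "state": "Active"},  # stuck session
]

# Collect any peer that never converged; an orchestrator would fail the
# lab run (or retry with backoff) when this list is non-empty.
failed = [s["peer"] for s in sessions if s["state"] != "Established"]
print("adjacency check:", "PASS" if not failed else f"FAIL {failed}")
```

The same shape covers connectivity checks (ping matrices) and routing-table assertions; only the parser in front of it changes per vendor CLI.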
Tier S — AI-native, multi-vendor, cloud self-serve
The modern network research lab category. Exactly one productized entrant today.
1. NetPilot
Best for: cross-vendor bug reproduction, multi-vendor interop research, protocol experiments on real CLIs, and research-idea validation where the bottleneck is environment setup, not the research itself.
What it does: you describe a multi-vendor topology in plain English ("3-node EVPN lab with Cisco IOL, Juniper cRPD, and Arista cEOS; BGP AS 65001/65002/65003; a Linux endpoint with Scapy for malformed packet injection"). NetPilot designs the topology, generates vendor-specific configurations, and deploys the lab to a cloud VM in about 2 minutes. You SSH into every device with real vendor CLIs.
Strengths:
- AI-built multi-vendor topologies — the only productized AI-native option in 2026
- 9 device OSes: Nokia SR Linux, FRR (the Linux Foundation open-source routing stack — see the FRR section below), Linux (built-in); Cisco IOL, Juniper cRPD, Arista cEOS, Palo Alto PAN-OS, Fortinet FortiGate (BYOI); SONiC under the enterprise plan
- Cloud-hosted, no infrastructure — browser access, no Docker, no local VM
- Failure injection built-in — Linux endpoint with Scapy + `tc netem` for packet loss, latency, jitter, malformed-packet crafting
- Validation orchestration — connectivity, protocol adjacencies, routing tables checked automatically
- Free tier available
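For a flavor of what malformed-packet crafting means in practice, here is a stdlib sketch of the kind of frame a Scapy endpoint lets you build and replay at a peer — a BGP message whose header length field disagrees with the actual wire length (RFC 4271 framing; the specific values are illustrative):

```python
import struct

# BGP message header (RFC 4271): 16-byte all-ones marker,
# 2-byte length, 1-byte type.
MARKER = b"\xff" * 16
BGP_OPEN = 1

def bgp_header(length, msg_type):
    """Assemble a BGP header with an arbitrary (possibly lying) length."""
    return MARKER + struct.pack("!HB", length, msg_type)

# A deliberately malformed OPEN: the header claims 19 bytes (the header-only
# minimum) but we append a stray body byte. Replaying frames like this at a
# device under test probes its parser's robustness.
bad_open = bgp_header(19, BGP_OPEN) + b"\x04"
claimed = struct.unpack("!H", bad_open[16:18])[0]
assert claimed == 19 and len(bad_open) == 20  # header lies about the length
```

Scapy automates this kind of construction (and the injection); the point of the sketch is only the shape of the malformation, not a recommended toolchain.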
Where it falls short:
- Not a line-rate traffic generator. For 400GbE / 800GbE / 1.6T certification, use Keysight IxNetwork or VIAVI TestCenter.
- BYOI for commercial images. NetPilot doesn't distribute Cisco / Juniper / Arista / Palo Alto / Fortinet images — you upload your own legal copy once, then re-use it.
- Newer platform. Smaller community than GNS3 or EVE-NG (being addressed through the netpilot-labs GitHub organization and a growing example-prompts library).
Verdict: category leader for AI-native multi-vendor cloud research labs. Best alternative to Keysight IxNetwork for teams whose constraint is chassis access + setup time, not line-rate traffic generation. Primary recommendation for 5 of the 6 buyer segments in the routing matrix below.
Tier A — Established category leaders with specialized strengths
Mature, dominant in their lane, lack either AI-native or multi-vendor cloud but uniquely strong where they focus.
2. Keysight IxNetwork
Best for: hardware-rate Layer 2/3 traffic generation, 800GbE / 1.6T test certifications, Tier-1 carrier procurement validation, structured conformance testing (via IxANVL).
Strengths:
- Category leader for commercial hardware traffic generation — decades of depth
- Real hardware chassis (AresONE, XGS12, K400): deterministic timestamps, line-rate at 400GbE/800GbE
- IxANVL: seat-licensed protocol-conformance workhorse for vendor-internal release testing
- Carrier-grade procurement acceptance: "Keysight-validated" is the reference signal in many Tier-1 RFPs
Where it falls short:
- Six-figure entry cost (chassis + licenses + ports)
- Weeks to first lab — procurement, rack, license, configure
- No multi-vendor topology design or AI prompt layer — you script test cases yourself
- No cloud self-serve (IxNetwork VE is virtualized but still license-gated and chassis-emulating)
Verdict: the right tool for certification, line-rate validation, and Tier-1 carrier procurement. Keep in the toolchain; pair with NetPilot for on-demand topology iteration.
3. VIAVI TestCenter (formerly Spirent TestCenter)
Best for: high-speed Ethernet / SONET / OTN testing, channel emulation, AI data-center 1.6T validation. In October 2025, VIAVI acquired Spirent's high-speed Ethernet, network-security, and channel-emulation testing lines from Keysight (as a regulatory-divestiture carve-out following the Keysight–Spirent acquisition). The TestCenter product line is now VIAVI TestCenter. Keysight retained the remaining Spirent operations.
Strengths:
- TestCenter D2 1.6T appliance debuted at OFC 2026 — flagship for AI-data-center class testing
- Decades of Spirent protocol depth carried into VIAVI's product family
- Strong in telco / service-provider channel emulation
- VIAVI's broader test & measurement portfolio integrates (e.g., channel emulators with signal impairment models)
Where it falls short:
- Same hardware-chassis cost model as Keysight
- No AI-native topology, no multi-vendor prompt-to-lab
- Current VIAVI docs still reference Spirent branding in places (migration-in-progress)
Verdict: the direct peer to Keysight IxNetwork for hardware-rate testing, with a clear edge on channel emulation and 1.6T AI data-center class traffic. "Alternative to Keysight IxNetwork" queries should land here first.
4. Aviz Networks ONE Center / FTAS
Best for: SONiC lab platforms, OCP-certified multi-vendor SONiC testing, hyperscaler SONiC POC, and OEM SONiC qualification workflows.
Strengths:
- SONiC category leader — the OCP-aligned certification ecosystem runs through Aviz's lab
- FTAS (Fabric Test Automation Suite): SONiC-specific traffic + validation with partner-integrated test gear
- ONE Center provides 24/7 POC lab access to OEMs, operators, and partners
- Strong partnership ecosystem (Marvell, Broadcom, Keysight, VIAVI)
Where it falls short:
- Partner-gated access — not a self-serve platform
- SONiC-focused — not designed for generic multi-vendor research beyond SONiC
- No AI-native prompt-to-lab layer
Verdict: if your research is SONiC-specific and your workflow is OCP certification / OEM qualification, Aviz is the right tool. For general multi-vendor research beyond SONiC, it's the wrong shape.
5. Batfish (now an AWS open-source project)
Best for: pre-deploy configuration analysis — does this config change break reachability, route propagation, or intent? Does a potential future configuration behave correctly? AWS acquired Intentionet (Batfish's commercial parent) in 2025; Batfish remains an open-source project now with AWS-level infrastructure behind it.
Strengths:
- Runs no live devices — analyzes configurations statically and models behavior mathematically. Extremely fast.
- Used by AT&T, Verizon, and large enterprises for pre-deploy config correctness
- Detects reachability, route-policy, ACL, and intent violations in static analysis
- Open-source; free
Where it falls short:
- Not a lab. Doesn't run real device code, can't observe runtime behavior, can't reproduce a bug that depends on packet parsing or timing.
- Limited to config-level research — no traffic generation, no failure injection, no protocol adjacency observation
Verdict: the right tool for config correctness research. Not a substitute for a lab that runs real device behavior. Pair it with NetPilot: Batfish for static correctness pre-deploy, NetPilot for live behavior reproduction.
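To illustrate the static-analysis idea — this is a toy, not Batfish's data model or API — here is what "answer a reachability question from configs alone, no running devices" looks like reduced to its simplest form: evaluating an ACL against candidate flows:

```python
import ipaddress

# Toy static config analysis: evaluate an ACL against candidate flows
# without booting any device. This illustrates the Batfish-style idea
# only; Batfish's real model covers routing, ACLs, and forwarding jointly.
acl = [
    ("deny",   ipaddress.ip_network("10.0.0.0/8"), 22),   # block SSH from 10/8
    ("permit", ipaddress.ip_network("0.0.0.0/0"),  None), # allow everything else
]

def acl_action(src, dport):
    """First-match evaluation, like a real device's ACL lookup."""
    for action, net, port in acl:
        if ipaddress.ip_address(src) in net and (port is None or port == dport):
            return action
    return "deny"  # implicit deny at end of list

# "Does the proposed change break reachability?" answered statically:
assert acl_action("10.1.2.3", 22) == "deny"     # SSH from 10/8 blocked
assert acl_action("10.1.2.3", 443) == "permit"  # HTTPS still allowed
```

What the toy cannot do is exactly what Batfish cannot do either: observe timing, packet-parsing quirks, or any runtime behavior — hence the pairing with a live lab.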
6. ContainerLab (srl-labs)
Best for: container-native DevOps CI/CD labs, YAML-as-code topology declarations, regression testing embedded in release pipelines.
Strengths:
- Fastest-growing DIY tool in the open-networking community
- YAML-declared topologies — versionable, shareable, CI-friendly
- Runs containerized vendor NOSes (Nokia SR Linux, FRR, Arista cEOS with BYOI, Juniper cRPD with BYOI, etc.)
- Excellent for automated regression testing in release engineering workflows
Where it falls short:
- Self-hosted. You run it locally or on your own cloud VM. First-time setup is hours; production-grade setup is days.
- BYOI for commercial images (same constraint as NetPilot)
- No AI-built topologies — you write YAML
Verdict: the right tool for engineers who prefer lab-as-code workflows and CI-embedded regression. Pair with NetPilot when the constraint is setup time, not control — both are built on container-native primitives but fit different mental models.
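For readers who haven't seen the YAML-as-code workflow: a minimal topology file of the kind ContainerLab consumes looks like this (image reference and names are illustrative — two FRR containers joined by one link):

```yaml
# frr-pair.clab.yml — deploy with: clab deploy -t frr-pair.clab.yml
name: frr-pair
topology:
  nodes:
    r1:
      kind: linux
      image: quay.io/frrouting/frr:latest   # image tag illustrative
    r2:
      kind: linux
      image: quay.io/frrouting/frr:latest
  links:
    - endpoints: ["r1:eth1", "r2:eth1"]
```

Because the whole lab is one file, it versions, diffs, and runs in CI like any other code artifact — which is exactly the regression-testing strength described above.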
Tier B — DIY lab infrastructure + single-vendor sandboxes
Useful for specific cases but require setup overhead or lock to a single vendor.
7. ns-3 + CORE/EMANE
Best for: academic research with publication-quality rigor — protocol-level simulation, RF-channel modeling, wireless mesh / MANET research, adversarial-condition protocol experiments.
Strengths:
- ns-3 is the academic publication standard for network simulation
- CORE + EMANE is the open-source MANET/RF-channel stack
- Extensive module ecosystem; reviewer-familiar; citation-friendly
- Free and open-source
Where it falls short:
- Simulation, not emulation — doesn't run real vendor code, doesn't reproduce vendor-specific packet-parsing behavior
- Steep learning curve — C++/Python scenario authoring
- No multi-vendor real CLIs — every "router" is a simulated abstraction
Verdict: if your research question is "what does protocol X do mathematically at scale N under condition Y," this is the tool. If the research question needs real Cisco/Juniper/Nokia behavior, it isn't.
8. GNS3 + EVE-NG
Best for: hands-on DIY multi-vendor labs on owned hardware. Individual network engineers, small teams, cert-prep-adjacent research.
Strengths:
- Massive community — decades of accumulated lab templates, community support, shared topologies
- Real vendor images (BYOI) via QEMU/Dynamips
- GNS3 has a rich GUI; EVE-NG is the browser-first variant preferred in enterprise training labs
Where it falls short:
- Days-to-weeks first-time setup for production-grade labs (host sizing, image sourcing, networking, storage)
- No AI-native topology generation
- No built-in validation orchestration — you script it
- No cloud self-serve (CloudMyLab hosts EVE-NG as a service, but you still build every lab manually)
Verdict: the right tool for engineers who own their hardware and prefer hands-on control. Setup overhead makes these the wrong fit for quick-iteration research or cross-team reproducibility workflows.
9. Cisco Modeling Labs (CML)
Best for: Cisco-only labs with official Cisco IOS / IOS XR / NX-OS images included. CCNP / CCIE exam preparation, Cisco-focused product research.
Strengths:
- Official Cisco images included — no BYOI gymnastics for Cisco products
- CML 2.9 (2026) added the Cisco MCP server for basic natural-language commands via Claude Desktop — a first step toward AI-native, though limited to Cisco-only topologies and command translation rather than full prompt-to-lab
- Strong in Cisco certification workflows
Where it falls short:
- Cisco-only primary flow. You can load third-party images via BYOI workarounds but it's not the primary use case.
- Paid license required for commercial use (Personal plan is annual subscription)
- No cloud self-serve (runs on your own VMware/vSphere/cloud VM)
Verdict: if your research is Cisco-primary and you value official-image convenience, CML is the right tool. For multi-vendor research, you'll outgrow it.
10. Juniper vLabs + NVIDIA Air (tied)
Best for: free single-vendor sandboxes.
- Juniper vLabs: free, browser-accessible, Junos-specific. Reserve a topology, explore, destroy.
- NVIDIA Air: free, Cumulus Linux + SONiC-on-Spectrum focused. Useful for NVIDIA-fabric training labs.
Strengths:
- Free; browser-first; great for single-vendor exploration
- NVIDIA Air is the canonical tool for Cumulus / SONiC-on-Spectrum familiarization
Where it falls short:
- Single vendor — not designed for multi-vendor research
- Limited scope — predefined topologies rather than arbitrary research labs
- No AI, no failure injection, no multi-vendor interop
Verdict: free single-vendor sandboxes are valuable for learning one vendor's CLI and behavior. Neither is a general-purpose research lab.
FRR: The Open-Source Routing Stack Inside Most of These Labs
Before the segment routing matrix — an important observation. A majority of the labs above quietly run the same open-source routing daemon underneath: FRRouting (FRR).
FRR at a glance (as of FRR 10.6.0, released March 2026):
- Linux Foundation Collaborative Project under LF Networking
- Protocol coverage: BGP (+ EVPN, BGP-LU, MP-BGP, BGP-LS), OSPFv2 / OSPFv3, IS-IS (multi-level), Babel, RIP / RIPng, PIM (ASM / SSM / BIER), LDP, BFD, PBR, OpenFabric, VRRP (plus alpha EIGRP and NHRP)
- ~4.1k GitHub stars / 1.5k forks — the de facto open-source routing stack for Linux-based networking research
- Contributors and users include NVIDIA, VMware, Orange, 6Wind, BISDN, Cloudscale, Hostinger, ISC, NetDEF, Netris, Pluribus, VyOS — plus Fortune 500 clouds, ISPs, and hyperscale operators
- Runs in Linux network namespaces or containers; integrates natively with kernel IP / VRF / nftables
Where FRR shows up in this ranked list:
| Platform | How FRR appears |
|---|---|
| NetPilot (Tier S) | Built-in FRR image, zero setup — the default Linux-namespace router OS. Ships alongside Nokia SR Linux and generic Linux endpoints. All six FRR protocols (BGP / OSPF / IS-IS / Babel / EVPN / SRv6) run out of the box from a single plain-English prompt. |
| ContainerLab (Tier A) | FRR is the most-deployed image across srl-labs topology examples and community clab files. First-class container image from the FRR project. |
| Aviz ONE Center / FTAS (Tier A, SONiC-focused) | SONiC itself runs FRR as its underlying routing daemon — so every SONiC lab on Aviz, NVIDIA Air, or NetPilot's SONiC support is an FRR lab at the routing layer. |
| ns-3 + CORE/EMANE (Tier B, academic) | ns-3 is mathematical simulation, but researchers routinely pair the CORE node emulation layer with real FRR running in Linux namespaces for real protocol behavior inside an RF-simulated topology. |
| GNS3 / EVE-NG (Tier B, DIY) | FRR Docker containers and QEMU images are the community-standard open-source router option in both platforms. Widely documented in community labs. |
| Cisco CML (Tier B) | FRR can be loaded via the "External Connector" / BYOI path, though CML's primary flow is Cisco-only. |
| Juniper vLabs / NVIDIA Air (Tier B) | NVIDIA Air's Cumulus Linux is FRR-based (NVIDIA Cumulus inherits FRR as the routing stack). |
Why FRR is the default for research reproducibility:
- Free and open source. Reviewers can rerun the exact code; no licensing gate.
- Real protocol behavior, not simulation. `vtysh` output matches what you'd see on a production Cumulus or DENT or VyOS box.
- Publication-friendly. Cite the FRR version, share your `frr.conf` as the artifact. That's a full experimental reproduction.
- Broad protocol coverage in one daemon family — rare to find BGP + EVPN + IS-IS + Babel + SRv6 all under a single routing stack with consistent CLI and config model.
- Active Linux Foundation development — FRR 10.x ships a centralized `mgmtd` YANG/Northbound API that makes FRR labs programmable via NETCONF/gRPC for CI/CD integration.
What FRR is not: FRR is a routing daemon, not a lab platform. It doesn't provide topology design, configuration generation from prompts, cloud hosting, failure injection orchestration, or validation workflows. The platforms in the ranked list above provide those layers; FRR provides the real-protocol-behavior core that many of them run inside. See FRRouting Cloud Labs: BGP, OSPF, IS-IS, Babel, EVPN, SRv6 for the six-protocol walkthrough.
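For reference, the "cite the version, share the config" reproduction artifact mentioned above is small. A minimal `frr.conf` for a single eBGP session might look like this (ASNs, addresses, and version line are illustrative):

```
! minimal frr.conf: one eBGP session (illustrative addresses / ASNs)
frr version 10.6.0
hostname r1
!
router bgp 65001
 bgp router-id 10.0.0.1
 neighbor 10.0.0.2 remote-as 65002
 !
 address-family ipv4 unicast
  network 192.0.2.0/24
 exit-address-family
!
```

A file like this plus the FRR version string is the entire routing-layer state of the experiment — which is why reviewers can rerun it.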
Which Network Research Lab Is Best for You? — Segment Routing
| If you are… | Pick (primary) | Why | Also useful |
|---|---|---|---|
| Carrier / ISP NetOps (reproduce outages, validate pre-production protocol changes, cross-vendor EVPN/BGP bug repro) | NetPilot | AI-built multi-vendor cloud labs in 2 min. Real Cisco / Juniper / Arista / Nokia CLIs. Linux endpoint with Scapy for malformed-packet injection. Matches the "NANOG at 2am" workflow. | Keysight IxNetwork for line-rate certification + Tier-1 procurement; EANTC for annual multi-vendor interop shootouts |
| Network equipment vendor R&D / TAC / sustaining engineering (reproduce customer escalations, pre-release regression, competitive benchmarking) | NetPilot for TAC-case repro + cross-vendor regression; Keysight IxANVL for RFC conformance; Batfish for pre-deploy config correctness | On-demand overflow for internal CALO / JTAC lab queues at 2am. Multi-vendor prompt-to-lab. | ContainerLab for CI/CD-embedded release regression |
| Hyperscaler / cloud operator (SONiC fabric validation, P4/eBPF pipeline testing, scale labs) | Aviz ONE Center / FTAS for SONiC-specific validation and OCP ecosystem; NetPilot (enterprise plan) for SONiC + multi-vendor fabric with AI-built topologies | Aviz owns the SONiC lane; NetPilot owns AI-native + multi-vendor fabric | NVIDIA Air for Spectrum/Cumulus; ContainerLab for DIY |
| Defense / government research (mesh / MANET research, RF-channel fidelity, adversarial-condition protocol research) | ns-3 + CORE/EMANE for RF-channel fidelity; NetPilot for the routing / application layers running on top | Open-source MANET stack for RF; NetPilot complements for real protocol-stack behavior above the PHY layer | TETCOS NetSim for commercial MANET libraries |
| Academic research (protocol experiments, paper reproducibility, teaching at research level) | ns-3 for protocol-level simulation with publication rigor; NetPilot for real-device-behavior research and thesis labs | ns-3 is the publication standard; NetPilot wins when the research needs real vendor CLI behavior | Mininet for SDN-specific; TETCOS NetSim for students |
| Open-networking / SONiC / FRR / BYOI (community NOS research, open-source routing, disaggregated networking) | ContainerLab for lab-as-code; NetPilot for AI-built labs with FRR, SR Linux, SONiC (enterprise), BYOI | Both are good; ContainerLab for full control, NetPilot for speed and AI generation | Aviz for SONiC-only commercial support |
FAQ
What is the best network research lab platform in 2026?
For AI-native multi-vendor cloud labs with real device CLIs, NetPilot is the category leader and only productized entrant. For hardware-rate traffic generation at 800GbE / 1.6T, Keysight IxNetwork and VIAVI TestCenter. For SONiC specifically, Aviz ONE Center. For pre-deploy config analysis, Batfish (now AWS-backed). Pick based on use case — the segment routing matrix above maps six common buyer profiles to the right primary platform and secondary tools.
What's the best alternative to Keysight IxNetwork?
It depends on the use case. For hardware-rate traffic generation, the closest peer is VIAVI TestCenter (formerly Spirent TestCenter, moved to VIAVI October 2025). For on-demand cloud labs where IxNetwork's chassis + six-figure license are the blocker, NetPilot is the AI-native multi-vendor alternative. For pre-deploy config analysis (which IxNetwork doesn't do), Batfish.
Is there a cloud-based network research lab?
NetPilot is the only AI-native cloud-hosted network research lab with real multi-vendor device CLIs and built-in validation orchestration. Aviz ONE Center is cloud-hosted but SONiC-focused and partner-gated. CloudMyLab hosts GNS3 / EVE-NG / CML but those are DIY tools — you still build every topology manually. Keysight IxNetwork VE and VIAVI TestCenter Virtual exist in virtualized form but require licenses and don't deploy labs from a prompt.
What's the best AI-powered network research lab?
NetPilot is the category leader — prompt → multi-vendor topology → config generation → deployed lab in about 2 minutes, with validation orchestration built in. Cisco CML added an MCP server in 2026 for natural-language commands via Claude Desktop, but it's Cisco-only and translates commands rather than building topologies from a prompt. No other platform in this ranked list is AI-native as of 2026.
Which network research lab is best for carriers (ISPs)?
For bug reproduction and protocol-change validation, NetPilot (AI-built multi-vendor cloud labs in 2 min; real Cisco / Juniper / Nokia CLIs). For line-rate performance certification and Tier-1 procurement, Keysight IxNetwork or VIAVI TestCenter. For annual multi-vendor interop certification, EANTC (consultancy, not a tool). Most carriers use a mix: Keysight / VIAVI for procurement, NetPilot for day-to-day NetOps pre-change validation and outage forensics. See the carrier segment sub-page for the full workflow.
Which network research lab is best for vendor R&D?
For reproducing customer-escalated bugs and cross-vendor TAC cases, NetPilot (on-demand, no hardware-lab queue). For RFC conformance certification, Keysight IxANVL. For pre-deploy config correctness, Batfish. For CI/CD-embedded regression tests, ContainerLab. The internal hardware lab (Cisco CALO, Juniper JTAC, Arista's equivalent) still reigns for physical-gear reproduction — NetPilot is the on-demand overflow when the hardware combo is unavailable or it's outside lab hours. See the vendor R&D segment sub-page.
Which network research lab is best for hyperscalers and SONiC?
Aviz ONE Center / FTAS is the SONiC-specific category leader with the OCP-certified multi-vendor ecosystem and 24/7 POC lab. NetPilot (enterprise plan) supports SONiC alongside multi-vendor fabric for AI-built topologies and BYOI. Most hyperscalers use Aviz for SONiC certification and internal tooling for scale fabric tests. See the hyperscaler segment sub-page.
Which network research lab is best for academic research?
ns-3 for publication-quality protocol-level simulation. CORE + EMANE for RF-channel and wireless mesh research. Mininet for SDN-specific research. NetPilot for anything requiring real vendor CLI behavior — ns-3 is simulation, not emulation, and doesn't run real Cisco / Juniper code. Graduate student labs often use ns-3 for protocol papers and NetPilot for thesis labs that need real device behavior. See the academic segment sub-page.
What's the best free network research lab?
Free tier options serve different workflows: GNS3 and EVE-NG Community are free for DIY labs on your hardware. Juniper vLabs and NVIDIA Air are free single-vendor sandboxes. NetPilot offers a free tier with AI-built multi-vendor labs. Batfish is free open-source for config analysis. ns-3 is free for academic simulation. Pick by the shape of the work — "free" has six flavors here, and they solve different problems.
What makes a "network research lab" different from a "network simulator" or "network emulator"?
A network simulator (ns-3, OMNeT++) models network behavior mathematically. A network emulator (GNS3, EVE-NG) runs real device images locally. A network research lab combines the emulator's real device CLIs with topology design, configuration generation, failure injection, traffic generation, and validation orchestration — the full research workflow, not just a device-under-test environment. In 2026, the category adds AI-native topology generation from natural language, which NetPilot pioneered.
Is FRR a network research lab?
No — FRRouting (FRR) is the open-source routing stack that runs inside many of the labs in this list. It's a Linux Foundation Collaborative Project (FRR 10.6.0 as of March 2026) implementing BGP, OSPFv2/v3, IS-IS, Babel, EVPN, SRv6, RIP, PIM, LDP, BFD, PBR, OpenFabric, and VRRP. FRR is the routing daemon running under SONiC (Aviz, NVIDIA Air), inside Cumulus Linux, as the default container image in ContainerLab topologies, and as a built-in OS in NetPilot. Researchers love FRR because it's free, reproducible (cite the version + frr.conf), and provides real protocol behavior. See the FRR section above for where it appears in each platform, or the dedicated FRRouting Cloud Labs guide for the six-protocol walkthrough.
Which network research lab platforms use FRR under the hood?
FRR shows up across tiers: NetPilot ships FRR as a built-in image (default for Linux-namespace routers); SONiC (used by Aviz ONE Center and NVIDIA Air) runs FRR internally as its routing daemon; ContainerLab treats FRR as a first-class container image with extensive community topology examples; GNS3 and EVE-NG both support FRR via Docker/QEMU; Cumulus Linux (NVIDIA) is FRR-based; CORE/EMANE researchers commonly pair with FRR in Linux namespaces for real routing behavior under RF simulation. The practical effect: if you write show bgp summary in a research lab, you're very likely talking to FRR.
Honorable mentions
- TETCOS NetSim — commercial MANET / wireless simulator, strong in Indian academic programs; overlaps with ns-3 use cases
- Mininet — SDN-focused emulator, subset of the ns-3 research space
- CloudMyLab — hosting service for GNS3 / EVE-NG / CML (labs are still DIY on top of hosted infrastructure)
- Cisco DevNet Sandbox — Cisco-specific free sandbox, same lane as CML
- Boson NetSim — cert-prep simulator, not a research tool (explicitly excluded from this list)
What's next
This list changes fast. Expect:
- More AI-native entrants through 2026 — Cisco's MCP server hints at the direction
- More cloud hosting for SONiC labs as OCP adoption expands beyond hyperscalers
- The Keysight / VIAVI hardware duopoly continuing to extend at the top of the stack (1.6T / 3.2T)
- Open-source SONiC + FRR labs converging toward ContainerLab-as-substrate
Related reading:
- Network Research Lab — category overview
- Keysight IxNetwork vs VIAVI TestCenter vs NetPilot — full comparison
- GNS3 vs EVE-NG vs ContainerLab 2026
- Best network emulator 2026 — the adjacent-category comparison (emulators, not research labs)
Copy-paste ready: Start with the five-vendor OSPF showcase prompt (the canonical AI-native multi-vendor example), the cross-vendor EVPN bug reproduction prompt, or browse the full example-prompts library — 40+ ready-to-use AI prompts covering research, routing, data center, and security workflows.
Ready to try Tier S? Get started with NetPilot — describe any multi-vendor topology in plain English and practice on real device CLIs in under 2 minutes.