Mobile ad-hoc network (MANET) and mesh network research has been stuck with the same tool set for over a decade: ns-3 for discrete-event simulation, CORE/EMANE for emulation with RF modeling, meshnet-lab and MeshNetSimulator for Linux-namespace mesh experiments, commercial NetSim for structured studies. Each has strengths, each has real users, and none of them let you describe a mesh topology in plain English and get a running multi-vendor lab in two minutes. This is a survey of the MANET research tool landscape plus a walkthrough of the AI-built cloud lab pattern that has emerged as a complement to existing tools — especially for reproducibility.
The existing MANET research tool landscape
Let's start with an honest survey. Each of these has real users in defense research, academic networking, and community mesh network projects.
ns-3
The gold standard for discrete-event network simulation in academic publications. Open-source, highly extensible, C++/Python-based. Used in thousands of peer-reviewed papers for performance modeling of 5G, 6G, wireless, and protocol behavior.
Strengths: Unmatched for mathematical protocol modeling, huge research community, citation-friendly. Limitations: Not real device emulation: you write protocol models in C++ or use provided modules; you do not run vendor NOS code. Steep learning curve. Not designed for day-to-day engineering workflows.
CORE / EMANE
The Naval Research Laboratory's Common Open Research Emulator (CORE) with the Extendable Mobile Ad-hoc Network Emulator (EMANE) for RF modeling. Open-source, Linux-based. Used extensively in defense research programs for tactical network experimentation.
Strengths: Real Linux-network-namespace devices, integrates with EMANE for wireless channel simulation, strong in the DoD research community, used in published papers. Limitations: Self-hosted Linux setup required, not cloud-hosted, non-AI, limited vendor NOS integration (Linux routers only, no Cisco/Juniper/Arista unless you DIY), GUI-based workflow.
NetSim (TETCOS)
Commercial network simulator with strong MANET and wireless modeling. Used in some defense and academic programs.
Strengths: Polished GUI, commercial support, canned wireless models (MANET, VANET, IoT, LTE). Limitations: Commercial licensing cost, not open-source, not cloud-hosted, not multi-vendor in the commercial-router sense.
meshnet-lab and MeshNetSimulator
Lightweight open-source MANET emulation using Linux namespaces (meshnet-lab) or a simple simulator framework (MeshNetSimulator). Used by the community mesh network scene (projects adjacent to Althea and Guifi.net).
Strengths: Free, lightweight, runs on a single Linux host, good for sketching routing protocol ideas. Limitations: Very minimal — no GUI, no multi-vendor, limited to Linux namespace routers, requires manual setup.
NetPilot (AI-built cloud lab)
Cloud-native multi-vendor lab with AI-built topology design. New addition to the landscape, targeting the reproducibility and iteration-speed gap.
Strengths: Describe any mesh topology in plain English, runs real FRR routing protocols (OSPF, IS-IS, BGP, Babel, RIP, PIM), cloud-hosted (no self-hosted infrastructure), reproducible across teams (same prompt = same lab), multi-vendor (FRR plus Cisco, Juniper, Arista, Nokia, Palo Alto, Fortinet on enterprise plan). Limitations: Requires internet, not designed for mathematical modeling (ns-3's strength), no built-in RF channel simulation at the ns-3 / EMANE level of fidelity (failure injection via tc/netem is functional but abstract compared to EMANE's RF models).
The honest verdict: each tool has a role. For paper-level mathematical modeling, ns-3. For tactical RF simulation, CORE/EMANE. For everyday research iteration and reproducibility, AI-built cloud labs.
What AI-built cloud labs add to MANET research
Three things existing MANET tools don't give you together:
- Time from idea to running experiment. Traditional setup for a 10-node mesh experiment: 4-8 hours on the first attempt. NetPilot: 2 minutes.
- Reproducibility across teams. Share the prompt; anyone gets the same lab. No "works on my Linux host" problems.
- Multi-vendor routing protocol comparison. Compare FRR Babel against a Cisco OSPF implementation in the same topology without sourcing Cisco gear.
That's the slot. Not "replaces ns-3" — "makes iteration fast and reproducible so you can try more ideas."
Walkthrough: a mesh routing protocol comparison experiment
A common research question: how do different routing protocols converge under packet loss in a mesh topology? Let's set up the experiment.
Step 1: Describe the lab
Copy into NetPilot:
Build a mesh network research lab with 8 FRR routers arranged as a mesh topology (each router connected to 3-4 others). Enable BGP, OSPF, IS-IS, and Babel on every router. Add traffic sources at 2 nodes that send flows to 2 other nodes. Include a Linux control node with tc/netem available for link impairment. Configure FRR so each routing protocol's adjacencies are independent — I want to compare them side by side on the same topology.
Step 2: Baseline convergence check
Ask the agent:
"Check routing state across all 8 routers — OSPF neighbors, OSPF/IS-IS/Babel route tables, and BGP summary. Flag any router where adjacencies aren't full."
The agent runs the right vtysh-wrapped command per protocol on every one of the 8 routers in parallel and returns a consolidated table: neighbor state, route count per protocol per router, and anomalies highlighted. In a mixed-vendor lab the agent also handles the vendor translation automatically (Cisco's show ip bgp summary vs Juniper's show bgp summary vs Arista's equivalent).
Direct CLI is always available too. SSH into any router, run vtysh, and check by hand with show ip ospf neighbor, show ip route ospf, show ip route babel, show bgp summary if you want to drill in or verify manually. Many researchers mix both — agent to scan the whole lab fast, CLI to inspect one specific router when something looks off.
# Example: if you want to verify by hand on one router
vtysh
show ip ospf neighbor
show ip route ospf
show ip route isis
show ip route babel
show bgp summary
Step 3: Inject packet loss on specific links
Ask the agent:
"Inject 20% packet loss on the link between R2 and R3."
The agent figures out which interface on the control node corresponds to that link, applies tc netem under the hood, and confirms the impairment is active. No need to map router names to interface names by hand.
Direct CLI is always available for researchers who want to customize beyond what the agent exposes — for example, applying correlated bursty loss with a custom distribution, or combining loss with delay jitter:
# On the control node — if you want a custom impairment model
tc qdisc add dev eth1 root netem loss 20% 25% delay 50ms 10ms
Step 4: Measure and compare
Ask the agent:
"Measure convergence time per protocol after I applied the impairment — record time-to-full-reconvergence for OSPF, IS-IS, Babel, and BGP."
The agent polls each router's routing table until stability is reached and returns a per-protocol comparison. For publication-grade statistical rigor, you'll want to script the measurement directly (see Step 6) so you can run 30+ trials and compute confidence intervals.
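For the scripted version, the measurement reduces to: poll each router's route table on a fixed interval, record (timestamp, route-set) snapshots, and report the first moment after the impairment from which the routes never change again. A minimal sketch of that computation, assuming you have already collected snapshots (e.g. by polling `vtysh -c 'show ip route ospf'` over SSH); the function and variable names are illustrative, not a NetPilot API:

```python
def convergence_time(snapshots, impairment_t):
    """Time-to-reconvergence from periodic route-table snapshots.

    snapshots: list of (timestamp_seconds, iterable_of_routes), sorted by time.
    Returns seconds from impairment_t to the first snapshot after which the
    route set never changes again, or None if the table is still flapping
    (or too few post-impairment samples to call it stable).
    """
    after = [(t, frozenset(r)) for t, r in snapshots if t >= impairment_t]
    if not after:
        return None
    final = after[-1][1]
    conv_t = None
    # Walk backwards to find the earliest snapshot matching the final state
    # with no later deviation.
    for t, routes in reversed(after):
        if routes == final:
            conv_t = t
        else:
            break
    # Require at least two matching samples so a single trailing snapshot
    # is not mistaken for stability.
    if sum(1 for t, _ in after if t >= conv_t) < 2:
        return None
    return conv_t - impairment_t
```

Run this per protocol (one snapshot series for OSPF, one for IS-IS, and so on) across 30+ trials, then compute confidence intervals over the resulting samples.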
Step 5: Iterate
Change the topology (mesh degree, hop count, asymmetric links), change the impairment (bursty loss, latency variation, link flapping), change the protocol mix. Tell the agent what to change; redeploy or modify in place. Each iteration is 2-5 minutes instead of hours.
Step 6: Save as reproducible artifact
Commit the prompt and any custom measurement/impairment scripts to your research repo. Anyone with the same prompt can reproduce the exact lab. This is what reproducibility means in practice — not "here's my config, hope you can rebuild it" but "paste this prompt, optionally run my scripts, reproduce my figures."
Research reproducibility patterns
Three patterns that matter for publishable or defense-program-milestone research:
Deterministic topology
AI-built labs with the same prompt produce the same topology, same configs, same behavior. This is the baseline for reproducibility — the artifact is the prompt, not a tarball of 47 configs and a README.
Version-pinned protocol stacks
For publication-grade work, pin the exact FRR version (or Cisco IOS image version) via the enterprise BYOI capability. Reviewers can reproduce your exact setup even if upstream FRR has moved on.
Scripted impairments
tc netem-based impairments are deterministic and scriptable. Bursty loss, correlated loss, delay variation, duplication — all reproducible via committed scripts. Contrast this with EMANE's RF models, which are more realistic but less deterministic.
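One way to keep impairments reproducible is to generate the tc netem command lines from a small committed script rather than typing them ad hoc. A sketch, assuming standard netem options (loss percentage with optional correlation for burstiness, delay with optional jitter); interface names are placeholders for whatever the control node exposes:

```python
def netem_cmd(dev, loss_pct=None, loss_corr=None, delay_ms=None, jitter_ms=None):
    """Build a 'tc qdisc replace' command string for one link.

    loss_corr adds a correlation percentage after the loss rate, which is
    how netem expresses bursty (correlated) loss. Using 'replace' instead
    of 'add' makes the script safe to re-run.
    """
    parts = [f"tc qdisc replace dev {dev} root netem"]
    if loss_pct is not None:
        parts.append(f"loss {loss_pct}%")
        if loss_corr is not None:
            parts.append(f"{loss_corr}%")
    if delay_ms is not None:
        parts.append(f"delay {delay_ms}ms")
        if jitter_ms is not None:
            parts.append(f"{jitter_ms}ms")
    return " ".join(parts)

# Example: 20% bursty loss (25% correlation) plus 50ms +/- 10ms delay on eth1
print(netem_cmd("eth1", loss_pct=20, loss_corr=25, delay_ms=50, jitter_ms=10))
```

Commit the script alongside the lab prompt; the emitted lines are what actually ran, so the impairment is part of the reproducible artifact.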
Automation via REST API
For large experimental sweeps (say, comparing 6 protocol combinations across 10 topologies under 5 impairment conditions = 300 experiments), use NetPilot's REST API to programmatically deploy labs, run tests, collect results, tear down.
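The sweep itself is just a Cartesian product over the experimental factors. A sketch of the driver loop, where the deploy/run/teardown calls are hypothetical placeholders (check the actual NetPilot REST API documentation for real endpoints); only the grid enumeration is concrete here:

```python
import itertools

# Illustrative factor levels: 6 protocol combinations x 10 topologies
# x 5 impairment conditions = 300 experiments.
PROTOCOLS = ["ospf", "isis", "bgp", "babel", "ospf+babel", "isis+bgp"]
TOPOLOGIES = [f"mesh-deg{d}" for d in range(2, 12)]
IMPAIRMENTS = ["loss10", "loss20", "burst", "jitter", "flap"]

grid = list(itertools.product(PROTOCOLS, TOPOLOGIES, IMPAIRMENTS))

for proto, topo, imp in grid:
    # Hypothetical API calls -- substitute the real client/endpoints:
    # lab = deploy_lab(prompt_for(proto, topo))
    # result = run_trial(lab, impairment=imp)
    # collect(result)
    # teardown(lab)
    pass
```

Because each experiment is identified by its (protocol, topology, impairment) tuple, failed trials can be retried individually and the whole sweep can be resumed partway through.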
Defense research application patterns
Defense research labs have historically relied on CORE/EMANE for MANET work. AI-built cloud labs add value as a complementary tool for:
- Rapid prototyping of new routing protocol ideas before committing to ns-3 implementation
- Multi-vendor interop where the experiment needs commercial router NOS behavior alongside open-source FRR
- Cross-team reproducibility — share a prompt, not a lab setup document
- Program milestone deliverables — attach the prompt to the deliverable so reviewers can reproduce findings
The tactical RF fidelity of EMANE still wins when the research question is "how does my protocol behave under realistic RF channel models." When the question is "how does my protocol behave under abstract packet loss and topology change," AI-built cloud labs ship faster.
FAQ
What's the difference between MANET simulation and emulation?
Simulation (ns-3, NetSim) uses mathematical models of protocol behavior — the "routers" are software models, not real implementations. Emulation (CORE, EMANE, meshnet-lab, NetPilot) runs real routing protocol code (FRR, Cisco, Juniper, Arista) in isolated environments — the behavior matches real devices. Simulations scale larger (thousands of virtual nodes); emulations are more faithful to production behavior.
Can I use NetPilot for tactical radio network research?
For the networking layer (routing protocols, packet forwarding, network-layer failure injection) — yes. For the physical radio layer (RF propagation, MIMO, signal-level modeling) — no. Pair NetPilot with EMANE or ns-3 when you need both layers.
Which routing protocols does FRR support that are relevant for mesh research?
FRR supports OSPF (v2 and v3), IS-IS, BGP, RIP (v1 and v2 plus RIPng), EIGRP, Babel, PIM, LDP, and PBR. For MANET research specifically, Babel is the most commonly studied protocol because it's designed for lossy networks. FRR is supported on all NetPilot plans.
How do I compare Babel to OSPF in a mesh under packet loss?
Deploy a NetPilot lab with both protocols enabled (FRR supports running them simultaneously). Inject packet loss via tc netem on the Linux control node. Measure convergence time and route-flap behavior from the CLI. Iterate on loss rates, topology degree, and mobility events. All impairment scripts are committable to your research repo.
Is my research reproducible if I use a cloud-hosted lab?
Yes, provided the prompt and any supporting scripts are committed. A NetPilot prompt deterministically produces the same topology and configs. Version-pin vendor images via the enterprise plan's BYOI capability for full reproducibility. Attach the prompt to your paper or program deliverable.
Copy-paste ready: The mesh network experiments prompt is the template for mesh topology research labs.
Running defense or academic network research? The Network Research Lab hub covers the workflow end to end — contact sales for enterprise plans with BYOI, FFRDC-friendly terms, and dedicated research environments.