Maverick: Architecting a Sovereign Network from the Mud Up
How I built a zero-dependency LoRaWAN Network Server that runs on a $15 Raspberry Pi in the middle of a Nicaraguan cattle ranch—and why the industry has been solving the wrong problem for a decade.
Prologue: The Day the Cloud Died
There is a particular kind of silence that exists only in remote places. Not the absence of noise—the presence of signal. The hum of solar panels at dawn. The crack of hooves on dry earth. The tick of a temperature sensor transmitting its 47th reading of the day from a pasture so far from the nearest cell tower that even Starlink laughs.
I was standing in that silence eight months ago, watching a LoRa node transmit soil moisture data to a gateway that had exactly zero ways to reach the cloud. The infrastructure had failed—again. Not because the hardware was bad. Not because the firmware was buggy. Because somewhere between that gateway and the distant data center, a TCP connection timed out, an MQTT broker choked, a container orchestrator decided to restart a pod, and the entire observation chain collapsed under the weight of its own fragility.
That was the moment I stopped asking "how do I make this connect better?" and started asking a far more dangerous question: "What if it didn't need to?"
What if the edge didn't just receive and forward—what if it decided?
This is the story of Maverick. Not the story of how I built another LoRaWAN Network Server. The story of how I architecturally rebelled against the prevailing cloud-first dogma and bet an entire project on the premise that the future of IoT isn't in the cloud.
It's a story about hardware constraints, Rust's memory model, hexagonal architecture done right, and why SQLite—yes, that SQLite—is the most underrated edge computing technology on the planet.
Buckle up. We're going deep.
I. Discovery: The Problem with "Cloud-Baggage"
The Weight of Abstractions
Let's talk about what most LoRaWAN deployments actually look like in production. You have a gateway—a decent piece of hardware, usually based on a Semtech chip and running some variant of Linux. That gateway connects to a Network Server. That Network Server connects to an Application Server. That Application Server connects to your cloud backend. Your cloud backend connects to your dashboard. Your dashboard connects to your on-call engineer at 3 AM when the MQTT QoS level 0 packet that was supposed to trigger the irrigation valve decided to evaporate into the digital ether.
Each one of those connections is a failure point. Each abstraction layer is a new attack surface for latency, downtime, and operational complexity. And here's the dirty secret nobody tells you when you're deploying in remote, infrastructure-poor environments: the cloud isn't more reliable than the edge. It's just differently unreliable.
I spent three years running ChirpStack in production. ChirpStack is the gold standard of open-source LoRaWAN Network Servers. It's well-engineered, actively maintained, and deployable via Docker. It's also a multi-service stack that depends on an MQTT broker for gateway traffic, Redis for session state, PostgreSQL for persistence, and a few gigabytes of RAM to run the whole assembly comfortably. That's not a criticism—it's a product of solving for a different problem than the one I needed to solve.
My problem wasn't "how do I run a scalable, cloud-native LoRaWAN infrastructure?" My problem was "how do I keep my sensor network operational when the internet drops for 72 hours during the rainy season in a region where the power grid is a suggestion and the cell towers are dreams?"
The Infrastructure Tax
Let me put numbers on this. A production-grade ChirpStack deployment, even a minimal one, requires:
- Compute: Minimum 2 vCPUs, preferably 4. That's $20-40/month on a decent cloud provider.
- Database: PostgreSQL instance. Managed Aurora starts at $50/month for the smallest viable configuration.
- Message Broker: RabbitMQ or similar. Another $20-30/month for a managed instance.
- Redis: Session caching. $10-15/month.
- Load Balancer: Redundancy requires at least two instances. $15-20/month.
- Bandwidth: LoRa payloads aren't large, but you're aggregating hundreds of devices. $10-20/month in data transfer.
You're looking at $125-175/month minimum just to have a Network Server that depends on connectivity to function. Now add the gateway costs, the sensors, the solar infrastructure, the cellular backhaul. Suddenly your "affordable IoT deployment" has a monthly operational cost that makes CFOs cry.
But the money is almost secondary. The real tax is complexity. Every dependency is a thread you have to pull when things go wrong. And in a remote deployment, things always go wrong. Power fluctuations corrupt database indexes. Network hiccups cause message broker disconnections. Container restarts introduce race conditions in session state. The system that's supposed to be simple becomes a Rube Goldberg machine of failure modes.
The Revelation
I didn't set out to build a competitor to ChirpStack. I set out to solve a specific operational problem: how do I maintain local decision-making capability when all upstream dependencies are unavailable?
The answer, it turns out, requires questioning every assumption the industry has made about what a Network Server "needs" to be.
Because here's what I realized: The LoRaWAN specification doesn't require cloud connectivity. The MAC commands don't require cloud connectivity. Even the forward-looking features like Class C (continuous listening) and OTA updates don't fundamentally require the cloud. What the cloud provides is convenience—centralized data aggregation, remote management, elastic scalability. But convenience isn't capability.
And in the field, surrounded by cattle and dust and unreliable power, capability is the only thing that matters.
II. Market Research: ChirpStack vs. The Industry
A. What ChirpStack Gets Right
Before I criticize, let me be precise: ChirpStack is an excellent piece of engineering. The team has maintained it for years, the documentation is thorough, the community is active, and the architecture—microservices with clear separation of concerns—is appropriate for its design goals.
ChirpStack's strengths:
- Comprehensive protocol support: Full LoRaWAN 1.0.x and 1.1 support across all regional parameters.
- Scalable architecture: The microservice approach means you can horizontally scale components independently.
- Active ecosystem: Hundreds of integrations, pre-built dashboards, community contributions.
- Production maturity: Years of battle-testing in commercial deployments worldwide.
These are not trivial achievements. The ChirpStack team has solved real problems for thousands of deployments.
B. The Gaps Nobody Talks About
But there are gaps. And they're the gaps that matter most when you're operating in environments ChirpStack was never designed for.
Gap 1: Offline Operation is an Afterthought
ChirpStack's architecture assumes connectivity as the baseline state. Yes, the components can run locally. Yes, you can deploy the entire stack on-premises. But the design patterns—session state in Redis, metadata in PostgreSQL, asynchronous messaging through the broker—are all oriented around a connected world.
When connectivity drops, ChirpStack doesn't fail gracefully. It accumulates messages in queues, eventually fills them, and starts applying backpressure. The gateway keeps receiving frames, but there's nowhere for them to go. The Network Server can't process them because it can't reach the Application Server to validate device credentials or retrieve application configurations.
I've watched this happen. The queue fills. The gateway's internal buffer overflows. Frames start dropping. By the time connectivity returns, you've lost hours of sensor data and have no way to recover it.
Gap 2: Resource Requirements are Inverted
ChirpStack is designed for servers, not for edge devices. The architecture that makes sense for a cloud deployment—separate services for Network Server, Application Server, Gateway Bridge, etc.—makes far less sense when your "server" is a Raspberry Pi 4 with 4GB of RAM running off a solar battery in a remote location.
The memory footprint alone is prohibitive. Running the full ChirpStack stack on a Pi 4 is technically possible but leaves almost no headroom for your actual application logic. And forget about running it on a Pi Zero 2 W or a custom embedded board with 512MB of RAM.
Gap 3: Data Locality is Architectural Debt
In a cloud-native deployment, all data flows through central databases. This is great for aggregation and analysis. It's terrible for local decision-making. If you want your edge node to autonomously trigger an irrigation valve based on soil moisture readings, you need that data accessible locally—not in a PostgreSQL instance that's 500 miles away.
ChirpStack's API-first design makes remote access trivial. It makes local access... possible, but not natural. You're always reaching out to a central data store, even when you're trying to do something local.
Gap 4: Operational Complexity at the Edge
Docker Compose is fine for development and acceptable for controlled server environments. It's an operational nightmare for remote deployments. Updates require pulling new images over potentially expensive cellular connections. Logs are spread across multiple containers. Health monitoring requires additional tooling. Rollbacks are non-trivial.
For an edge deployment, you want a single binary, minimal attack surface, and rock-solid local state management.
C. The Competitive Landscape
To be thorough, I evaluated the alternatives:
- The Things Network (TTN): Cloud-only. Not a competitor to what I needed.
- Helium Network: Protocol-level differentiator (LongFi) but still cloud-dependent for network operations.
- Senet: Commercial, proprietary, not relevant to open-source edge computing.
- Loriot: Closer to my use case, but still an architecture that assumes connectivity.
- DIY with embedded C: Possible, but reinvents too many wheels and maintainability becomes a nightmare.
The honest conclusion: nobody is building for the disconnected edge. The industry has optimized for the cloud-connected deployment and treated offline operation as a failure mode to be minimized rather than a primary use case to be enabled.
This is the gap Maverick was built to fill.
III. The Engineering Bet: Why Rust, Why Now
A. The Language Choice That Should Have Been Obvious
I started Maverick's exploration in Python. Prototype faster, validate assumptions quicker, iterate on data models without fighting a compiler. This is the right approach for exploratory work, and I don't apologize for it.
But the prototype revealed the problem with prototypes: they paper over the hard parts. When I started thinking seriously about production requirements—deterministic memory usage, no garbage collection pauses, guaranteed binary compatibility across embedded targets—Python became a liability, not an asset.
Rust wasn't my first choice for this project. It was my only choice once I enumerated the requirements honestly.
Requirement 1: Deterministic Performance
LoRaWAN frame processing has real-time constraints. The gateway is sending uplink frames via UDP at potentially hundreds per minute. Each frame needs to be processed, MIC-verified, routed to the correct device session, and stored. In a Class A device (battery-powered, listen windows only after uplink), your downlink opportunity is time-boxed. Miss the window, wait for the next uplink.
Garbage collection pauses vary by runtime: modern Go keeps them sub-millisecond, but a JVM or interpreter under memory pressure can stall for 10-50ms. That sounds small. In a system where your receive window is 1-2 seconds total, a 50ms pause is 2.5-5% of your available time, per frame. Under load, this compounds.
Rust's ownership model and lack of garbage collection provide predictable, sub-millisecond response times. The "fearless concurrency" isn't marketing; it's a real property of the language, and it matters for embedded systems.
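To make the time-boxing concrete, here's a minimal sketch of the scheduling decision. The one-second RX1 delay is the LoRaWAN default (RECEIVE_DELAY1); the margin value and function names are illustrative, not Maverick's actual code:

```rust
use std::time::{Duration, Instant};

// RECEIVE_DELAY1 from the LoRaWAN regional parameters: the RX1 window
// opens one second after the end of the uplink in most regions.
const RX1_DELAY: Duration = Duration::from_secs(1);
// Slack we insist on having left before committing a downlink to the
// gateway (an illustrative value, not from the spec).
const SCHEDULING_MARGIN: Duration = Duration::from_millis(100);

/// Decide whether a downlink can still make the RX1 window.
/// `uplink_done` is the instant the uplink finished arriving.
fn can_make_rx1(uplink_done: Instant) -> bool {
    let deadline = uplink_done + RX1_DELAY - SCHEDULING_MARGIN;
    Instant::now() < deadline
}

fn main() {
    let uplink_done = Instant::now();
    // ... MIC verification, session lookup, payload handling happen here ...
    if can_make_rx1(uplink_done) {
        println!("queue downlink for RX1");
    } else {
        println!("missed RX1; fall back to RX2 or wait for the next uplink");
    }
}
```

A 50ms garbage collection pause anywhere between the uplink arriving and that deadline check is time you never get back.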
Requirement 2: Memory Safety Without Runtime Overhead
LoRaWAN frame processing involves parsing variable-length binary payloads from untrusted radios, which makes it a natural fuzzing target. A bad actor can send malformed frames that trigger buffer overflows in C code or memory exhaustion in interpreted languages.
Rust's type system enforces memory safety at compile time. Use-after-free is a compile error. Data races are compile errors. Out-of-bounds access becomes a controlled panic instead of silent memory corruption. This isn't security through obscurity; it's a guarantee enforced by the compiler, assuming the compiler is correct (and it usually is).
For a system that will be deployed in adversarial network environments, this matters enormously.
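Here's what defensive parsing looks like in practice. This is a simplified sketch of the PHYPayload framing (MHDR, MACPayload, MIC), not Maverick's real decoder; the point is that a short or malformed buffer is rejected by construction rather than read out of bounds:

```rust
/// Minimal, illustrative PHYPayload framing: MHDR (1 byte) | MACPayload | MIC (4 bytes).
#[derive(Debug)]
struct RawFrame<'a> {
    mhdr: u8,
    mac_payload: &'a [u8],
    mic: [u8; 4],
}

#[derive(Debug)]
enum ParseError {
    TooShort,
}

fn parse_frame(buf: &[u8]) -> Result<RawFrame<'_>, ParseError> {
    // 1 byte of MHDR plus 4 bytes of MIC is the absolute minimum; anything
    // shorter is rejected instead of read out of bounds.
    if buf.len() < 5 {
        return Err(ParseError::TooShort);
    }
    let (body, mic_bytes) = buf.split_at(buf.len() - 4);
    let mut mic = [0u8; 4];
    mic.copy_from_slice(mic_bytes);
    Ok(RawFrame { mhdr: body[0], mac_payload: &body[1..], mic })
}

fn main() {
    // A malformed two-byte "frame" fails cleanly rather than crashing.
    assert!(parse_frame(&[0x40, 0x01]).is_err());
    // Minimal well-formed shape: 1-byte MHDR, empty MACPayload, 4-byte MIC.
    let frame = parse_frame(&[0x40, 0xAA, 0xBB, 0xCC, 0xDD]).unwrap();
    println!("{frame:?}");
}
```

The slice indexing that would be an out-of-bounds read in C is either provably safe here or panics deterministically. No silent corruption, no undefined behavior.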
Requirement 3: Single Binary Distribution
This is the killer feature for edge deployments. Rust compiles to a statically linked native binary with no runtime dependency. No JVM. No Python interpreter. No Node.js. Just the binary and the operating system.
A minimal Maverick deployment is:
- One maverick binary (~8MB for a stripped release build)
- One SQLite database file (the data)
- One configuration file (optional; settings can also come from environment variables)
That's it. Update via scp, restart the service. No container runtime. No image registry. No Docker daemon.
Requirement 4: The Embedded Ecosystem is Ready
Rust's embedded ecosystem has matured dramatically in the last three years. embedded-hal provides hardware abstraction. svd2rust generates peripheral bindings from vendor SVD files. The cortex-m crates supply the Cortex-M runtime. And critically, rusqlite with its bundled feature compiles SQLite from source with zero external dependencies.
This last point is subtle but important. Most languages bind SQLite as a system library. That means a dependency on the system package manager, potential version mismatches, and ABI compatibility concerns. rusqlite's bundled feature compiles the SQLite sources directly into your binary. No system library required.
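For the curious, the bundled setup looks something like this. The table schema is illustrative, not Maverick's actual schema:

```rust
// Cargo.toml:
//   [dependencies]
//   rusqlite = { version = "0.31", features = ["bundled"] }
//
// With "bundled", the SQLite C sources are compiled into the binary by the
// build script; no system libsqlite3 is needed on the target.
use rusqlite::Connection;

fn main() -> rusqlite::Result<()> {
    let conn = Connection::open("maverick.db")?;
    // WAL mode: readers don't block the writer, and write patterns are
    // friendlier to the flash storage on a Pi.
    conn.pragma_update(None, "journal_mode", "WAL")?;
    conn.execute_batch(
        "CREATE TABLE IF NOT EXISTS uplinks (
             dev_eui  TEXT NOT NULL,
             fcnt     INTEGER NOT NULL,
             payload  BLOB NOT NULL,
             rx_at    INTEGER NOT NULL
         );",
    )?;
    Ok(())
}
```

Cross-compile that for ARM and you have the entire persistence layer inside one file you can scp to the device.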
Why Now?
Three converging trends made this project viable now:
1. Rust's embedded story stabilized: The no_std ecosystem, async on embedded, and peripheral access APIs have reached a maturity threshold that makes ambitious projects feasible.
2. Hardware got cheap enough: The Raspberry Pi Zero 2 W ($15) has 512MB of RAM and a quad-core ARM processor. For a single-binary LoRaWAN Network Server that doesn't need to run 47 microservices, this is overkill. In the best possible way.
3. The LoRaWAN protocol matured: LoRaWAN 1.0.4 is stable. Regional parameters are well-documented. The ambiguity in earlier specifications has been resolved, making a from-scratch implementation tractable.
The engineering bet was this: Rust on embedded Linux with bundled SQLite is the right substrate for a sovereign, offline-first LoRaWAN Network Server. Eight months and 47,000 lines of code later, I'm more confident in that bet than when I made it.
IV. Strategic Decisions: Deep Dive into rusqlite vs. libSQL and Hexagonal Architecture
A. The Database Decision: Why rusqlite Won
This was the most debated architectural choice in Maverick's development. The original plan was libSQL—Turso's open-source fork of SQLite with a focus on embedded use cases and cloud sync capabilities. libSQL has compelling features: WASM compilation, HTTP-based replication, cloud-native thinking.
In practice, for Maverick's Phase 1 requirements, libSQL introduced unnecessary complexity.
The Sync Problem
libSQL's killer feature is replication—sync your embedded SQLite to a remote libSQL instance over HTTP. This is brilliant for use cases where you want local-first with cloud fallback.
It's overkill when:
- Your edge node needs to operate autonomously for weeks without connectivity
- Your "sync" is better handled by batched exports when connectivity is available
- Your sync protocol needs to be custom (for business reasons, not technical ones)
libSQL's replication is designed around Turso's cloud offering. The protocol is well-designed, but it couples you to their implementation. For a project that prizes sovereignty, this felt like replacing one cloud dependency with another.
rusqlite: The Pragmatic Choice
rusqlite with the bundled feature gives us:
- Zero external dependencies (SQLite compiled from source)
- Full SQLite 3 semantics (ACID transactions, WAL mode, FTS5 for search)
- Mature, stable codebase (years of production use)
- Excellent performance (SQLite is faster than most people assume for read-heavy workloads)
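A minimal sketch of what those transactional guarantees buy for frame storage, reusing the illustrative uplinks table from earlier: either every frame in a batch lands, or none do.

```rust
use rusqlite::{params, Connection};

/// Store a batch of uplinks atomically. Table and field names are
/// illustrative, not Maverick's actual schema.
fn store_batch(
    conn: &mut Connection,
    frames: &[(String, i64, Vec<u8>)],
) -> rusqlite::Result<()> {
    let tx = conn.transaction()?;
    {
        // prepare_cached avoids re-parsing the SQL on every hot-path insert.
        let mut stmt = tx.prepare_cached(
            "INSERT INTO uplinks (dev_eui, fcnt, payload, rx_at)
             VALUES (?1, ?2, ?3, strftime('%s','now'))",
        )?;
        for (dev_eui, fcnt, payload) in frames {
            stmt.execute(params![dev_eui, fcnt, payload])?;
        }
    }
    // Commit is the ACID boundary: a power cut before this line loses the
    // batch cleanly; a power cut after it loses nothing.
    tx.commit()
}

fn main() -> rusqlite::Result<()> {
    let mut conn = Connection::open_in_memory()?;
    conn.execute_batch(
        "CREATE TABLE uplinks (dev_eui TEXT, fcnt INTEGER, payload BLOB, rx_at INTEGER);",
    )?;
    store_batch(&mut conn, &[("70B3D57ED0000001".into(), 42, vec![0x01, 0x02])])?;
    Ok(())
}
```

On a solar-powered deployment where brownouts are routine, that commit boundary is the difference between a recoverable hiccup and a corrupted store.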
The tradeoff: We lose the cloud sync story. We gain operational simplicity and full data sovereignty. For Phase 1, this is the right call.
For Phase 2, when we build sync contracts, we'll implement them as application-level protocols over the existing database. The data model doesn't change. The sync mechanism becomes an adapter, not a core concern.
This is the hexagonal architecture making itself useful.
B. Hexagonal Architecture: Ports, Adapters, and the Preservation of Core
I studied Alistair Cockburn's hexagonal architecture around 2019 and thought I understood it. I was wrong. Understanding hexagonal architecture requires failing with the alternative first.
The Trap of Layered Architecture
Most IoT projects fall into layered architecture: driver code at the bottom, business logic in the middle, API handlers at the top. This works until you need to change your storage engine, or your network protocol, or your device firmware interface.
In a layered architecture, your business logic is coupled to your infrastructure choices. Testing requires test databases. Protocol changes ripple through business logic. New hardware platforms require rewrites.
Maverick went through two prototypes before I accepted this truth. The first prototype was Python with asyncio, SQLite via SQLAlchemy, and direct gateway integration. When I wanted to add an HTTP API adapter, I had to change the database queries in multiple places. When I wanted to test the MAC command processing, I had to mock the entire SQLAlchemy session. It was a mess.
The Hexagonal Invariant
Hexagonal architecture enforces one rule above all others: the core (business logic) has zero dependencies on the periphery (infrastructure, interfaces, external systems).
In practice, this means:
The Core defines Ports: The core exposes interfaces (traits in Rust) for everything it needs from the outside world. "I need to store a device session." "I need to send a downlink frame." "I need to log an event."
The Periphery implements Adapters: Adapters are implementations of those ports. A rusqlite adapter implements session storage. A UDP adapter implements gateway communication. A tracing adapter implements logging.

The Runtime wires them together: At startup, the application assembles the core with the appropriate adapters. For production: SQLite + UDP + tracing. For testing: in-memory stores + mock network + no-op logs.
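Here's a compressed sketch of the pattern. The trait, the replay check, and the adapter names are all illustrative, but the shape is the real thing: the core is generic over its ports, and any implementation, in-memory or SQLite-backed, plugs in without the core knowing.

```rust
/// Port: defined in the core crate. The core names what it needs;
/// it never names SQLite or UDP.
trait SessionStore {
    fn save(&mut self, dev_eui: [u8; 8], fcnt_up: u32) -> Result<(), StoreError>;
    fn load(&self, dev_eui: [u8; 8]) -> Result<Option<u32>, StoreError>;
}

#[derive(Debug)]
struct StoreError;

/// Core logic, generic over the port: static dispatch, testable with any impl.
fn record_uplink<S: SessionStore>(
    store: &mut S,
    dev_eui: [u8; 8],
    fcnt: u32,
) -> Result<(), StoreError> {
    // The frame-counter replay check lives in the core, independent of storage.
    if let Some(last) = store.load(dev_eui)? {
        if fcnt <= last {
            return Err(StoreError); // replayed or stale frame
        }
    }
    store.save(dev_eui, fcnt)
}

use std::collections::HashMap;

/// Adapter: an in-memory implementation, the kind the test runtime assembles.
#[derive(Default)]
struct InMemoryStore(HashMap<[u8; 8], u32>);

impl SessionStore for InMemoryStore {
    fn save(&mut self, dev_eui: [u8; 8], fcnt_up: u32) -> Result<(), StoreError> {
        self.0.insert(dev_eui, fcnt_up);
        Ok(())
    }
    fn load(&self, dev_eui: [u8; 8]) -> Result<Option<u32>, StoreError> {
        Ok(self.0.get(&dev_eui).copied())
    }
}

fn main() {
    let mut store = InMemoryStore::default();
    let dev = [0x70, 0xB3, 0xD5, 0x7E, 0xD0, 0x00, 0x00, 0x01];
    assert!(record_uplink(&mut store, dev, 1).is_ok());
    assert!(record_uplink(&mut store, dev, 1).is_err()); // replay rejected
}
```

Swap InMemoryStore for a rusqlite-backed adapter and record_uplink doesn't change by a single byte. That's the hexagonal invariant in practice.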
Why This Matters for Maverick
The LoRaWAN specification is complex. MAC commands have 47 different types. Regional parameters vary across 8+ bands. Device state machines have subtle transitions. This complexity belongs in the core.
But the core shouldn't know or care whether device sessions are stored in SQLite, PostgreSQL, or a CSV file. It shouldn't know whether frames arrive via UDP, HTTP, or WebSocket. It shouldn't care whether you're running on a Raspberry Pi or a server in AWS.
Hexagonal architecture ensures that:
- The core is testable without infrastructure
- The adapters are swappable without core changes
- The system degrades gracefully when adapters fail (the core handles errors it can recover from)
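A standalone illustration of the first point, with hypothetical names: a downlink port and a test double that records what the core tried to send. No sockets, no database, no async runtime.

```rust
/// Port the core uses to request downlinks (illustrative name).
trait DownlinkPort {
    fn send(&mut self, dev_eui: [u8; 8], payload: &[u8]);
}

/// Core logic under test: acknowledge a confirmed uplink.
fn ack_confirmed_uplink<D: DownlinkPort>(port: &mut D, dev_eui: [u8; 8]) {
    port.send(dev_eui, &[]); // conceptually, an empty frame carrying the ACK bit
}

/// Test double: records calls instead of touching a gateway.
#[derive(Default)]
struct RecordingPort(Vec<([u8; 8], Vec<u8>)>);

impl DownlinkPort for RecordingPort {
    fn send(&mut self, dev_eui: [u8; 8], payload: &[u8]) {
        self.0.push((dev_eui, payload.to_vec()));
    }
}

fn main() {
    let mut port = RecordingPort::default();
    ack_confirmed_uplink(&mut port, [1; 8]);
    assert_eq!(port.0.len(), 1); // the core asked for exactly one downlink
}
```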
The Code Structure
Maverick's directory structure reflects this:
maverick/
├── core/ # Domain logic, zero dependencies
│ ├── lorawan/ # LoRaWAN protocol implementation
│ ├── devices/ # Device state management
│ ├── sessions/ # Session lifecycle
│ └── ports/ # Trait definitions
├── adapters/ # Infrastructure implementations
│ ├── storage/ # rusqlite implementation
│ ├── gateway/ # UDP/GWMP implementation
│ ├── api/ # HTTP REST adapter
│ └── runtime/ # Application assembly
└── main.rs # Binary entry point
The core crate depends on nothing outside itself. The adapters crate depends on core and external crates. The binary depends on everything and orchestrates assembly.
The Embedded Constraint
Hexagonal architecture is straightforward in memory-rich environments. In embedded contexts, you need to be careful about dynamic dispatch (boxed trait objects mean vtable indirection and heap allocation) and stack usage (deep call chains can exhaust stack limits).
For Maverick, I made deliberate choices:
- A no_std-compatible core for eventual bare-metal deployment
- Static dispatch by default (generics, not trait objects)
- Stack size analysis via stack-sizes and .cargo/config.toml profiling
- Bounded data structures (circular buffers, fixed-size arrays) for predictable memory usage
The result: Maverick's core uses under 50KB of RAM in steady-state. The UDP receiver has a 4KB buffer. The database connection is one heap allocation. Everything else is stack or static memory.
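The bounded-structures point deserves a sketch. A fixed-capacity ring buffer makes memory usage a compile-time decision (illustrative code, not Maverick's actual buffer):

```rust
/// Fixed-capacity ring buffer: capacity is a const generic, so memory use
/// is decided at compile time and never grows at runtime.
struct Ring<T, const N: usize> {
    buf: [Option<T>; N],
    head: usize,
    len: usize,
}

impl<T, const N: usize> Ring<T, N> {
    fn new() -> Self {
        Self { buf: std::array::from_fn(|_| None), head: 0, len: 0 }
    }

    /// Insert an item; once full, silently overwrite the oldest entry
    /// rather than allocating.
    fn push(&mut self, item: T) {
        let tail = (self.head + self.len) % N;
        self.buf[tail] = Some(item);
        if self.len == N {
            self.head = (self.head + 1) % N; // drop the oldest
        } else {
            self.len += 1;
        }
    }
}

fn main() {
    // Room for 64 pending 32-byte frames, all of it on the stack.
    let mut pending: Ring<[u8; 32], 64> = Ring::new();
    pending.push([0u8; 32]);
    // No heap growth, no matter how many frames arrive.
}
```

Dropping the oldest frame under overload is a deliberate policy: on a constrained node, bounded loss beats unbounded memory.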
V. Future: The Edge-Kernel Vision
A. What We're Building Toward
Maverick v1.0 is a LoRaWAN Network Server that runs on the edge. It's functional, tested, and deployable. But the v1.0 release is not the destination—it's the foundation of something larger.
I call it the Edge-Kernel Vision: the idea that edge nodes should function as autonomous computing entities, capable of operating independently of central infrastructure while remaining capable of seamless integration when connectivity permits.
B. Phase 2: Sync-Ready Contracts
The immediate next phase is preparing Maverick for synchronization without coupling to any specific sync protocol. The contracts (ports) are already defined in the core:
- SessionRepository::export() returns a serializable device session
- FrameStore::export() returns buffered frames with metadata
- ConfigRepository::export() returns the current configuration
The adapters implement these for local storage. When we build sync, we build adapters that serialize these exports and transmit them over whatever transport is available (HTTP, MQTT, LoRa itself for mesh scenarios).
This is the architectural payoff: adding sync capability requires adding an adapter, not modifying the core.
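Sketched in code, with illustrative types and assuming serde and serde_json as dependencies, the contract looks like this:

```rust
// Cargo.toml (assumed): serde = { version = "1", features = ["derive"] },
// serde_json = "1"
use serde::Serialize;

/// Illustrative export shape; the real session carries more state.
#[derive(Serialize)]
struct ExportedSession {
    dev_eui: String,
    fcnt_up: u32,
    fcnt_down: u32,
}

/// The port the core already defines. Transports come later, as adapters.
trait SessionRepository {
    fn export(&self) -> Vec<ExportedSession>;
}

/// A future sync adapter depends only on the port, never on the database.
fn sync_over_any_transport<R: SessionRepository>(repo: &R) -> serde_json::Result<String> {
    serde_json::to_string(&repo.export())
}

/// Stand-in repository for demonstration.
struct Dummy;
impl SessionRepository for Dummy {
    fn export(&self) -> Vec<ExportedSession> {
        vec![ExportedSession {
            dev_eui: "70B3D57ED0000001".into(),
            fcnt_up: 42,
            fcnt_down: 7,
        }]
    }
}

fn main() {
    // Whatever carries this string (HTTP, MQTT, LoRa mesh) is an adapter concern.
    println!("{}", sync_over_any_transport(&Dummy).unwrap());
}
```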
C. Phase 3: AI-Native Orchestration
The "AI-Native" part of Maverick's description isn't marketing. It's a specific technical vision.
LoRaWAN networks are currently managed by humans: configure devices, monitor dashboards, trigger actions manually. This doesn't scale. A ranch with 500 sensors and 50 actuators, across 20 pastures, generating millions of readings per month, cannot be managed by human attention alone.
The edge node—not the cloud—should run the inference. Soil moisture is low and rain forecast is dry and irrigation system is functional → trigger irrigation. These rules are currently implemented as cloud functions. They should run locally, with local data, with sub-second latency, without depending on a round-trip to a distant server.
Maverick's architecture is designed to support this. The core exposes a decision interface: "Given the current device state and recent events, what actions should be taken?" AI models can be loaded as adapters implementing that interface. The infrastructure doesn't change. The AI layer plugs in.
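A sketch of that decision interface, with hypothetical names. The rule-based adapter below encodes the irrigation rule from the paragraph above; an ML-backed adapter would implement the same trait:

```rust
/// What the core hands to a decision engine (illustrative fields).
#[derive(Debug)]
struct Observation {
    soil_moisture_pct: f32,
    rain_expected: bool,
    irrigation_ok: bool,
}

#[derive(Debug, PartialEq)]
enum Action {
    Irrigate { minutes: u32 },
    Hold,
}

/// The decision port: given current state, what should be done?
trait DecisionEngine {
    fn decide(&self, obs: &Observation) -> Action;
}

/// Today's adapter: plain threshold rules. Thresholds are illustrative.
struct ThresholdRules;

impl DecisionEngine for ThresholdRules {
    fn decide(&self, obs: &Observation) -> Action {
        if obs.soil_moisture_pct < 20.0 && !obs.rain_expected && obs.irrigation_ok {
            Action::Irrigate { minutes: 15 }
        } else {
            Action::Hold
        }
    }
}

fn main() {
    let obs = Observation {
        soil_moisture_pct: 12.5,
        rain_expected: false,
        irrigation_ok: true,
    };
    assert_eq!(ThresholdRules.decide(&obs), Action::Irrigate { minutes: 15 });
}
```

When a model replaces the rules, only the adapter changes; the core keeps calling decide with local data at local latency.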
D. The Sovereign Network Thesis
Here's the broader thesis that drives Maverick:
The cloud is a tool, not a cathedral. We've been conditioned to think of cloud computing as the default, the natural state, the way computing is supposed to work. This is a contingent historical outcome, not a technical inevitability.
For many IoT use cases—especially in agriculture, logistics, and infrastructure monitoring in developing regions—the cloud introduces fragility, cost, and latency that the application cannot tolerate. The right architecture is local-first, cloud-optional.
Maverick is an implementation of that thesis. It's not anti-cloud. It's pro-local-sovereignty. Use the cloud for what it's good at: long-term data aggregation, cross-fleet analytics, global dashboards. Don't use it for what it's bad at: deterministic low-latency control, operation during outages, data locality for critical decisions.
E. The Technical Roadmap
To close, here's where we're heading:
v1.1 (Q3 2026): Multi-gateway support. Maverick can currently handle one gateway's worth of traffic. The next release adds gateway arbitration for deployments with multiple concentrators.
v1.2 (Q4 2026): Class C support. Class A (battery-optimized; receive windows open only briefly after each uplink) is the baseline. Class C (continuous receive, higher power consumption) enables downlink-heavy applications like smart lighting and HVAC control.
v2.0 (2027): Mesh networking. LoRa's long-range capabilities make point-to-point links possible. With directional antennas and appropriate scheduling, we can build linear repeaters that extend range without infrastructure.
v3.0 (2027-2028): AI adapter integration. TensorFlow Lite or ONNX runtime embedded in Maverick, with model storage in SQLite, inference at the edge, and human-in-the-loop approval for high-stakes actions.
Epilogue: The Sound of Silence
Eight months ago, I stood in a Nicaraguan pasture watching sensor data vanish into a cloud that wasn't there. Today, Maverick runs on a Raspberry Pi Zero 2 W bolted to a junction box, powered by a 20W solar panel, connected to a LoRa gateway that has no backhaul except a cellular modem that activates once per hour to sync timestamps.
That Pi has processed 2.3 million uplink frames. It has made 847 autonomous decisions about irrigation scheduling based on soil moisture, rainfall prediction, and evapotranspiration models. It has never once asked the cloud for permission.
When the cellular modem activates, it transmits operational telemetry: frame counts, decision logs, storage metrics. Not sensor data—metadata about decisions. The sensor data stays local until explicitly requested. The privacy implications alone are worth a separate post.
The cloud isn't the network. The network is the network. And sometimes, the most resilient network is the one you own completely, running on hardware that costs less than a monthly cloud bill, in a place where the only connectivity is the line-of-sight radio waves bouncing off the hills.
Maverick isn't for everyone. It's not even for most people. But for those of us building in the spaces where infrastructure is expensive, connectivity is unreliable, and decisions must be made now—it's the architecture we needed all along.
The cloud can wait.
Antony Giomar is a systems architect and infrastructure engineer building resilient IoT systems for the real world. He writes about distributed systems, embedded Rust, and the intersection of agriculture and technology. This post was composed in a single sitting with excessive coffee and zero tolerance for mediocrity.
Tags: #LoRaWAN #Rust #EdgeComputing #IoT #Architecture #EmbeddedSystems #Soberana
Interested in the project? Maverick's code is open source and lives on GitHub. Pull requests are welcome, particularly if you want to help with Class C support.