Inside the LynxTrac agent: lightweight, powerful, and fast
One binary covers monitoring, remote access, log shipping, and deployments. Here’s how we kept it under 15 MB on disk and well under 1% CPU in steady state — and why that matters.
Design constraints
We started with four non-negotiables:
- One process per endpoint. Not a bundle of agents with a supervisor. One PID.
- No runtime dependencies. Ship a static binary; don’t require a particular Python or .NET.
- Outbound-only. Agent never listens on an inbound port.
- Sub-1% CPU in steady state. On the machines we manage, every 1% matters.
These constraints ruled out a lot of common patterns: no Electron, no bundled JVM, no Python-with-pip-install-this.
Language choice
Go, for three reasons: static linking out of the box, a solid standard library, and predictable memory behavior. We considered Rust — a great tool, but with the dependencies we needed, the binary would have pushed past 20 MB. The performance difference doesn’t matter for our workload.
What the agent actually does
- Metric collection. Procfs on Linux, WMI on Windows, sysctl on macOS. Sub-second poll intervals, configurable.
- Log shipping. Tails named files and journald; batches to the relay with backoff on network issues.
- Session brokering. Accepts session requests from the relay, opens pty or RDP-equivalent streams.
- Script runner. Executes operator-authorized scripts with output capture and timeout enforcement.
- Self-update. Signed binary upgrades, staged per policy.
How we kept the size down
- Single static binary. No shared libraries.
- No embedded UI. The agent has no GUI. Zero.
- Shared internal frameworks. Metric poll, log tail, and session brokering all use the same serialization layer.
- Strip debug symbols in release builds.
- No bundled interpreters. Scripts run via the host’s existing shell.
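Several of these points map directly onto standard Go build flags. A representative release build line (the module path is hypothetical):

```sh
# -s -w drop the symbol table and DWARF debug info;
# -trimpath removes local filesystem paths from the binary;
# CGO_ENABLED=0 forces a fully static build with no libc dependency.
CGO_ENABLED=0 go build -trimpath -ldflags="-s -w" -o lynxtrac-agent ./cmd/agent
```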
How we kept CPU down
- Event-driven everywhere. No busy loops. Log tailing uses inotify / kqueue / ReadDirectoryChangesW.
- Batched writes. Metrics accumulate for 200ms before sending, so we don’t syscall 60 times a second.
- Compression. Logs and metrics are gzip-compressed before shipping; CPU cost is minimal, bandwidth cost drops 90%.
- Idle backoff. If no one is connected and nothing is changing, the agent is effectively silent.
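The batching and compression points combine naturally: samples accumulate in a buffer for the 200ms window, then ship as a single gzip payload, so the steady-state cost is one timer wakeup and one write per window instead of a syscall per sample. A sketch of that pattern; the type and field names are illustrative, not LynxTrac internals:

```go
package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"time"
)

// batcher accumulates metric lines and flushes them as one
// gzip-compressed payload per window.
type batcher struct {
	buf bytes.Buffer
}

func (b *batcher) add(line string) {
	b.buf.WriteString(line)
	b.buf.WriteByte('\n')
}

// flush gzips the accumulated window and resets the buffer.
func (b *batcher) flush() []byte {
	var out bytes.Buffer
	zw := gzip.NewWriter(&out)
	zw.Write(b.buf.Bytes())
	zw.Close()
	b.buf.Reset()
	return out.Bytes()
}

func main() {
	b := &batcher{}
	ticker := time.NewTicker(200 * time.Millisecond)
	defer ticker.Stop()

	// Simulate 60 samples arriving inside one 200ms window.
	for i := 0; i < 60; i++ {
		b.add(fmt.Sprintf("cpu.load 0.3 sample=%d", i))
	}
	<-ticker.C // window elapses: one compressed send, not 60 writes
	payload := b.flush()
	fmt.Printf("60 samples -> one %d-byte gzip payload\n", len(payload))
}
```

Metric lines are highly repetitive, which is why gzip earns its keep here: the compressed window is a small fraction of the raw bytes.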
Steady state numbers
On a typical 4-core, 8 GB Linux VM:
- CPU: 0.3% (occasional 1-2% spikes during metric flush)
- RSS: 38 MB
- Disk I/O: negligible
- Network: 1-4 KB/s inbound (heartbeat), 10-50 KB/s outbound (metrics + logs)
An active shell session adds roughly 2 MB of RSS per session, plus bandwidth proportional to user activity.
What we sacrificed
- Rich local buffering. If the relay is unreachable for an hour, we drop old metrics rather than consume unbounded disk.
- Plugins. You can’t extend the agent itself. You can, however, run arbitrary scripts via the operator surface.
- Local dashboards. There’s no “localhost:8080” on the agent. Everything goes through the relay.
These are deliberate trade-offs. You can’t be tiny, always-on, and a Swiss-army knife all at once.
Try it yourself
LynxTrac is free forever for 2 servers — no credit card, no sales call. Start in under 2 minutes →