Show HN: Linnix – eBPF observability that predicts failures before they happen
Linnix
eBPF-powered Linux observability with AI incident detection
Linnix captures every process fork, exec, and exit with lightweight CPU/memory telemetry using eBPF. It works standalone with a built-in rules engine, or you can add AI for natural-language insights.
✨ NEW: linnix-3b model now available! Download the 2.1GB quantized model from Releases or use the automated setup script.
🎯 Why Linnix?
Traditional monitoring tells you "CPU is high". Linnix tells you WHY and WHAT TO DO.
- ⚡ Near-Zero Overhead: <1% CPU usage with eBPF probes (vs 5-15% for traditional agents)
- 🧠 AI-Powered (Optional): Natural language insights - "Fork storm in cron job. Add rate limit to /etc/cron.d/backup"
- 🎯 Works Without AI: Built-in rules engine detects incidents out-of-the-box
- 💰 Cost-Effective: 60-80% cheaper than Datadog or Dynatrace, runs on your infrastructure
- 🔓 Open Source: Apache-2.0 license, no vendor lock-in, BYO LLM (or none)
- 🚀 Production-Ready: Battle-tested on multi-node clusters, kernel 5.8+
📊 How We Compare
| Feature | Linnix (OSS) | Prometheus + Grafana | Datadog | Elastic APM |
|---|---|---|---|---|
| Setup Time | 5 minutes | 2-3 hours | 30 minutes | 1-2 hours |
| CPU Overhead | <1% (eBPF) | 2-5% (exporters) | 5-15% (agent) | 10-20% (APM) |
| Instrumentation | Zero | Manual exporters | Agent install | Code changes |
| AI Insights | ✅ Built-in | ❌ No | ❌ No | ❌ No |
| Incident Detection | ✅ Auto | ❌ Manual rules | ✅ ML (paid) | ✅ ML (paid) |
| Cost (10 nodes) | $0 | ~$50/mo hosting | ~$1,500/mo | ~$1,000/mo |
| Data Privacy | ✅ Your infra | ✅ Your infra | ❌ Vendor cloud | ✅ Self-host option |
| BYO LLM | ✅ Any model | N/A | ❌ No | ❌ No |
Bottom line: we're Prometheus for the process lifecycle, plus an AI reasoning layer. Use both!
💡 Note: AI is optional! Linnix works out-of-the-box with its built-in rules engine for detecting fork storms, CPU spikes, and runaway processes. Add AI later for natural language explanations.
🎯 One-Command Setup (New!)
```bash
# Complete eBPF monitoring with AI - ready in 5 minutes
git clone https://github.com/linnix-os/linnix.git && cd linnix
./setup-llm.sh

# Then open: http://localhost:8080 (Web Dashboard)
```
What you get instantly:
- ✅ Web Dashboard: Real-time visualization at http://localhost:8080
- ✅ eBPF Monitoring: Every process event captured with <1% overhead
- ✅ AI Insights: 3B model analyzes incidents every 30 seconds
- ✅ Live Metrics: Process tree, CPU usage, system overview
- ✅ Zero Config: Works out of the box, all data local
👀 What You'll See
After running ./setup-llm.sh, you'll have:
- Web Dashboard (http://localhost:8080) - Beautiful real-time UI
- API Access (http://localhost:3000) - REST endpoints for integration
- AI Analysis - Automatic incident detection with explanations
- Live Events - Real-time process monitoring stream
Quick Health Check:
```bash
curl http://localhost:3000/healthz        # eBPF daemon
curl http://localhost:8090/health         # AI model
curl http://localhost:3000/insights | jq  # Get AI insights
```
What it does:
- Downloads TinyLlama model (800MB) or linnix-3b (2.1GB)
- Starts cognitod (eBPF daemon) + llama-server (AI inference)
- Runs health checks
- Ready for AI insights in < 5 minutes!
🐳 Docker without AI (Rules Engine Only)
```bash
git clone https://github.com/linnix-os/linnix.git && cd linnix
docker-compose up -d

# Stream live process events
curl -N http://localhost:3000/stream

# Get incident alerts from rules engine
curl http://localhost:3000/insights | jq
```
✅ No AI/LLM required | ✅ No Rust toolchain required | ✅ Works on any Linux | ✅ < 1% CPU overhead
📦 From Source
```bash
# 1. Install cognitod
curl -sfL https://raw.githubusercontent.com/linnix-os/linnix/main/scripts/install.sh | sh

# 2. Start monitoring
sudo systemctl start cognitod

# 3. Stream live events
linnix-cli stream

# 4. Get AI insights
export LLM_ENDPOINT="http://localhost:8090/v1/chat/completions"
export LLM_MODEL="linnix-3b-distilled"
linnix-reasoner --insights
```
🏗️ Architecture
```
┌──────────────────────────────────────────────────────────────┐
│                     Kernel Space (eBPF)                      │
├──────────────────────────────────────────────────────────────┤
│     fork hook → exec hook → exit hook → CPU/mem sampling     │
└────────────────────────┬─────────────────────────────────────┘
                         │ Perf buffers
                         ▼
┌──────────────────────────────────────────────────────────────┐
│                    User Space (cognitod)                     │
├──────────────────────────────────────────────────────────────┤
│  • Event processing        • Process tree tracking           │
│  • State management        • Rules engine                    │
│  • HTTP/SSE API            • Prometheus metrics              │
└────────────────────────┬─────────────────────────────────────┘
                         │
         ┌───────────────┼───────────────┐
         │               │               │
         ▼               ▼               ▼
    ┌─────────┐     ┌──────────┐    ┌─────────────┐
    │   CLI   │     │ Reasoner │    │ Prometheus  │
    │ Stream  │     │    AI    │    │   Grafana   │
    └─────────┘     └──────────┘    └─────────────┘
```
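To make the fan-out concrete, here is each consumer path from the diagram exercised from a shell (all three commands appear in the sections below):

```bash
# The three consumer paths from the diagram
linnix-cli stream                      # CLI: live event stream
linnix-reasoner --insights             # Reasoner: AI analysis of recent events
curl -s http://localhost:3000/metrics  # Prometheus: scrape cognitod's metrics endpoint
```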
📊 Features
Linnix provides comprehensive eBPF-based monitoring with optional AI-powered incident detection:
- ✅ eBPF monitoring - Kernel-level process lifecycle tracking
- ✅ Real-time event streaming - SSE endpoints for live data
- ✅ Process tree tracking - Full ancestry and lineage graphs
- ✅ CPU/memory telemetry - Lightweight resource monitoring
- ✅ Local rules engine - Detects fork storms, CPU spikes, runaway processes (no AI needed)
- ✅ Prometheus integration - Standard metrics export (scrape-config sketch after this list)
- ✅ LLM inference (optional) - Bring your own model for natural language insights (OpenAI, local, etc.)
- ✅ Training examples - 50+ curated incident samples included
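The Prometheus integration needs nothing more than a standard scrape job pointed at cognitod. A minimal sketch (the job name is illustrative; merge the snippet into your own prometheus.yml):

```bash
# cognitod exports Prometheus metrics at /metrics on port 3000; verify with:
curl -s http://localhost:3000/metrics | head

# Then add a scrape job like this to your prometheus.yml (job name illustrative):
#   scrape_configs:
#     - job_name: 'linnix'
#       static_configs:
#         - targets: ['localhost:3000']
```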
🚀 Installation
Docker (Recommended)
```bash
docker run -d \
  --name cognitod \
  --privileged \
  --pid=host \
  --network=host \
  -v /sys/kernel/btf:/sys/kernel/btf:ro \
  -v /sys/kernel/debug:/sys/kernel/debug:ro \
  linnixos/cognitod:latest
```
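A quick way to confirm the container is healthy (ports assume `--network=host` as above):

```bash
# Confirm the container started and the API is answering
docker logs --tail 20 cognitod
curl -s http://localhost:3000/healthz
```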
From Package
Ubuntu/Debian:
```bash
wget https://github.com/linnix-os/linnix/releases/latest/download/cognitod_amd64.deb
sudo dpkg -i cognitod_amd64.deb
sudo systemctl start cognitod
```
RHEL/CentOS:
```bash
wget https://github.com/linnix-os/linnix/releases/latest/download/cognitod.rpm
sudo rpm -i cognitod.rpm
sudo systemctl start cognitod
```
From Source
```bash
# Install Rust (if not already installed)
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# Clone repository
git clone https://github.com/linnix-os/linnix.git
cd linnix

# Build eBPF programs
cargo xtask build-ebpf

# Build and install
cargo build --release
sudo cp target/release/cognitod /usr/local/bin/
sudo cp target/release/linnix-cli /usr/local/bin/
sudo cp target/release/linnix-reasoner /usr/local/bin/
```
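A quick smoke test of the freshly installed binaries (assuming the CLI supports the usual `--help` flag):

```bash
# Smoke-test the installed binaries
cognitod --version
linnix-cli --help
```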
📚 Documentation
- GitHub Releases - Release notes and downloads
- Hugging Face Model - AI model documentation
- PERFORMANCE.md - Proving the <1% CPU overhead claim
- ROADMAP.md - Future development plans
- COMPARISON.md - Detailed Prometheus/Datadog/Elastic trade-offs
- HOW_IT_WORKS.md - eBPF probes, BTF offsets, and AI loop internals
- FAQ.md - Kernel support, overhead, and privacy answers
Full documentation: GitHub docs/
🔌 API Endpoints
Cognitod exposes a REST API on port 3000:
- `GET /health` - Health check
- `GET /metrics` - Prometheus metrics
- `GET /processes` - All live processes
- `GET /graph/:pid` - Process ancestry graph
- `GET /stream` - Server-sent events (real-time)
- `GET /insights` - AI-generated insights
- `GET /alerts` - Active alerts from rules engine
For API examples, see cognitod/examples/.
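As one example, the SSE stream can be consumed with standard tools. This sketch assumes each event arrives as a JSON payload on a `data:` line (the usual SSE framing); check your `/stream` output to confirm:

```bash
# Follow the live event stream and print each JSON payload.
# Assumes standard SSE framing (one "data: {...}" line per event).
curl -sN http://localhost:3000/stream \
  | grep --line-buffered '^data:' \
  | sed -u 's/^data: //' \
  | jq -c .
```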
🤖 LLM Integration
Linnix works with any OpenAI-compatible LLM endpoint:
🎁 Demo Model (Included)
We provide a distilled 3B model optimized for CPU inference:
```bash
# Download demo model (2.1GB)
wget https://github.com/linnix-os/linnix/releases/download/v0.1.0/linnix-3b-distilled-q5_k_m.gguf

# Serve with llama.cpp
./serve_distilled_model.sh  # Starts on port 8090

# Or manually:
llama-server -m linnix-3b-distilled-q5_k_m.gguf \
  --port 8090 --ctx-size 4096 -t 8

# Test the model
export LLM_ENDPOINT="http://localhost:8090/v1/chat/completions"
export LLM_MODEL="linnix-3b-distilled"
linnix-reasoner --insights
```
Performance: 12.78 tok/s on CPU (no GPU required!)
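Since llama-server speaks the OpenAI chat-completions protocol, you can sanity-check the endpoint directly before wiring up the reasoner:

```bash
# Sanity-check the model endpoint with a direct OpenAI-style request
curl -s http://localhost:8090/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{"model": "linnix-3b-distilled", "messages": [{"role": "user", "content": "ping"}]}' \
  | jq -r '.choices[0].message.content'
```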
Bring Your Own Model
```bash
# Option 1: Local model with llama.cpp
./llama-server -m qwen2.5-7b-instruct-q5_k_m.gguf --port 8090

# Option 2: vLLM
vllm serve Qwen/Qwen2.5-7B-Instruct --port 8090

# Option 3: Ollama (`ollama serve` takes no model argument; run the
# model instead, then use Ollama's OpenAI-compatible API on port 11434)
ollama run qwen2.5:7b

# Configure endpoint
export LLM_ENDPOINT="http://localhost:8090/v1/chat/completions"
export LLM_MODEL="qwen2.5-7b"

# Get insights
linnix-reasoner --insights
```
You can also use commercial APIs (OpenAI, Anthropic, etc.) by pointing to their endpoints.
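For example, pointing at OpenAI's hosted endpoint. How linnix-reasoner passes credentials isn't covered here; the `OPENAI_API_KEY` variable below is an assumption, so check the reasoner's docs:

```bash
# Hypothetical: use OpenAI's hosted endpoint instead of a local model.
export LLM_ENDPOINT="https://api.openai.com/v1/chat/completions"
export LLM_MODEL="gpt-4o-mini"
export OPENAI_API_KEY="sk-..."   # assumed credential mechanism
linnix-reasoner --insights
```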
🔧 Configuration
Create /etc/linnix/linnix.toml:
```toml
[runtime]
offline = false  # Set true to disable external HTTP calls

[telemetry]
sample_interval_ms = 1000  # CPU/memory sampling frequency

[rules]
enabled = true
config_path = "/etc/linnix/rules.yaml"

[api]
bind_address = "127.0.0.1:3000"

[llm]
endpoint = "http://localhost:8090/v1/chat/completions"
model = "qwen2.5-7b"
timeout_secs = 120
```
🎓 Examples
Stream events in real-time
```bash
# CLI streaming
linnix-cli stream

# Or use curl with SSE
curl -N http://localhost:3000/stream
```
Get process tree
```bash
# For a specific PID
curl http://localhost:3000/graph/1234 | jq .

# All processes
curl http://localhost:3000/processes | jq .
```
Detect incidents with AI
```bash
# Get AI-generated insights
linnix-reasoner --insights

# Output:
# {
#   "summary": "System experiencing high CPU due to fork storm...",
#   "risks": ["cpu_spin", "fork_storm"]
# }
```
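To keep an eye on insights continuously, a simple poll loop works. The `.summary` field matches the sample output above, and the 30-second interval mirrors the analysis cadence mentioned earlier:

```bash
# Poll the insights endpoint every 30 seconds and print each summary
while true; do
  curl -s http://localhost:3000/insights | jq -r '.summary // empty'
  sleep 30
done
```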
Configure custom rules
Edit /etc/linnix/rules.yaml:
```yaml
rules:
  - name: fork_storm
    condition: "forks_per_sec > 100"
    severity: critical
    actions:
      - alert
      - log
  - name: cpu_spike
    condition: "process.cpu_percent > 95 AND duration > 60"
    severity: warning
    actions:
      - alert
```
🤝 Contributing
We love contributions! Here's how to get started:
1. Fork the repository
2. Create a feature branch (`git checkout -b feat/amazing-feature`)
3. Make your changes
4. Run tests (`cargo test --workspace`)
5. Commit (`git commit -m 'Add amazing feature'`)
6. Push (`git push origin feat/amazing-feature`)
7. Open a Pull Request
See CONTRIBUTING.md for detailed guidelines.
Development Setup
```bash
# Clone repo
git clone https://github.com/linnix-os/linnix.git
cd linnix

# Install dependencies
cargo build --workspace

# Build eBPF programs
cargo xtask build-ebpf

# Run tests
cargo test --workspace

# Run clippy
cargo clippy --all-targets -- -D warnings
```
🐛 Bug Reports
Found a bug? Please open an issue with:
- Your OS and kernel version
- Cognitod version (`cognitod --version`)
- Steps to reproduce
- Expected vs actual behavior
📝 License
Linnix is licensed under the Apache License 2.0.
See LICENSE for details.
Third-Party Licenses
Linnix uses several open source libraries. See THIRD_PARTY_LICENSES for details.
eBPF Code
The eBPF programs in linnix-ai-ebpf/linnix-ai-ebpf-ebpf/ are dual-licensed under GPL-2.0 OR MIT (eBPF programs must be GPL-compatible).
🌟 Star History
If you find Linnix useful, please star the repo! It helps us grow the community.
💬 Community
- Discord: coming soon
- Twitter: @linnix_os
- Blog & Discussions: github.com/linnix-os/linnix/discussions
💖 Show Your Support
If Linnix helps you catch production incidents, add this badge to your README:
[](https://github.com/linnix-os/linnix)
🙏 Acknowledgments
Linnix is built on the shoulders of giants. Special thanks to the eBPF community for making kernel observability accessible!
📖 Citations
If you use Linnix in research, please cite:
```bibtex
@software{linnix2025,
  author = {Shah, Parth},
  title  = {Linnix: eBPF-powered Linux observability with AI},
  year   = {2025},
  url    = {https://github.com/linnix-os/linnix}
}
```
Made with ❤️ by the Linnix team