February 27, 2026 | Reading Time: 13 minutes 37 seconds
Introduction: The Kernel as Infrastructure
For decades, observability meant adding code to your applications. You instrumented your services with metrics libraries, sprinkled tracing SDKs through your call paths, and configured log shippers on every host. Each layer of visibility came with a cost: dependency management, performance overhead, code changes that needed to be deployed and maintained. Miss an endpoint and you had a blind spot. Upgrade a library and you risked breaking your telemetry pipeline.
eBPF changes this equation fundamentally. Instead of instrumenting applications, you instrument the kernel. Every system call, every network packet, every file access, every process execution passes through the Linux kernel — and eBPF lets you observe and act on these events without modifying the applications that generate them. Zero code changes. Zero SDK dependencies. Zero deployment coordination.
This is not a minor improvement. It is a different category of capability. When your monitoring layer operates at the kernel level, you see everything — including the things that applications choose not to log, the network connections that bypass your service mesh, and the processes that your container runtime does not know about.
The eBPF ecosystem has matured rapidly through 2025 and into 2026. What was once a collection of research projects and specialized tools has become production infrastructure at scale. Cilium handles networking for major cloud providers. Falco provides runtime security for Kubernetes clusters worldwide. Tetragon enforces security policies directly in the kernel. And tools like Coroot deliver full-stack observability — metrics, logs, traces, and continuous profiling — from a single eBPF-based agent that requires zero application changes.
This article explains what eBPF is, why it matters for security and observability, and how to adopt it in practice.
What eBPF Actually Is
eBPF stands for extended Berkeley Packet Filter, though the name is mostly historical at this point. Modern eBPF has moved far beyond packet filtering.
At its core, eBPF is a virtual machine inside the Linux kernel that runs sandboxed programs in response to kernel events. These programs can observe system calls, network traffic, file operations, process scheduling, and essentially any kernel-level activity — all without modifying the kernel itself or requiring kernel modules.
The Safety Model
What makes eBPF practical is its safety model. Before any eBPF program runs, the kernel verifier checks it exhaustively:
- No unbounded loops: Programs must terminate. The verifier rejects programs that could run indefinitely.
- No invalid memory access: Every pointer dereference is validated against known bounds; programs with out-of-bounds reads or writes are rejected at load time.
- Stack size limits: Programs have a fixed stack size (512 bytes), preventing stack exhaustion.
- Helper function access: Programs can only call pre-approved kernel helper functions, not arbitrary kernel code.
- Privilege requirements: Loading eBPF programs requires appropriate capabilities (typically `CAP_BPF` or root).
This verification happens at load time, not runtime. Once a program passes the verifier, it runs at near-native speed with no runtime safety checks. The result is kernel-level observability with negligible performance overhead — typically less than 1-2% CPU impact for comprehensive monitoring.
Attachment Points
eBPF programs attach to specific kernel events called hooks or attachment points:
| Hook Type | Use Case | Example |
|---|---|---|
| kprobes | Trace any kernel function | Monitor sys_open to track file access |
| tracepoints | Stable kernel trace events | Track process creation via sched_process_exec |
| XDP | Network packet processing | Drop malicious packets before they reach the network stack |
| TC | Traffic control | Apply network policies at the container level |
| LSM | Linux Security Module hooks | Enforce security policies on file operations |
| uprobe | User-space function tracing | Profile specific application functions |
| perf events | CPU performance counters | Continuous CPU and memory profiling |
The breadth of attachment points is what makes eBPF so powerful. A single eBPF-based agent can simultaneously monitor network traffic, file access, process execution, DNS resolution, and system call patterns — providing a unified view that would traditionally require five or six separate tools.
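The hook types above map directly onto bpftrace one-liners. A few illustrative examples (all require root; the kprobe target `do_sys_openat2` is the open path on kernels 5.6 and later, so the exact function name depends on your kernel):

```shell
# kprobe: count file opens per process by probing the kernel's open path
sudo bpftrace -e 'kprobe:do_sys_openat2 { @opens[comm] = count(); }'

# tracepoint: log every process execution via a stable trace event
sudo bpftrace -e 'tracepoint:sched:sched_process_exec { printf("%s (pid %d)\n", comm, pid); }'

# uprobe: trace a user-space function in a specific binary
sudo bpftrace -e 'uprobe:/usr/bin/bash:readline { printf("bash readline in pid %d\n", pid); }'
```

Each one-liner compiles to an eBPF program, passes the verifier, and attaches to its hook, which is the same mechanism the larger agents use under the hood.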
eBPF for Observability: Seeing Everything
Traditional observability has three pillars: metrics, logs, and traces. eBPF enables all three without instrumentation, and adds a fourth — continuous profiling — that is impractical with application-level approaches.
Zero-Instrumentation Service Maps
When eBPF monitors network connections at the kernel level, it sees every TCP and UDP connection between services — including connections that bypass your service mesh, sidecar proxies, or application-level instrumentation. This enables automatic service discovery and dependency mapping.
Tools like Coroot use this capability to generate live topology maps showing all service dependencies. No code changes needed. No sidecar containers. No configuration per service. Deploy the agent, and within minutes you see which services communicate with which others, the latency of each connection, and the error rates across every path.
This is particularly valuable for:
- Legacy applications that cannot be instrumented without significant effort
- Third-party services where you do not control the code
- Polyglot environments where different services use different languages and frameworks
- Debugging production issues where unknown dependencies cause cascading failures
Protocol-Aware Metrics
eBPF does not just see network connections — it understands protocols. By parsing packet headers at the kernel level, eBPF agents can extract application-layer metrics without any application awareness:
HTTP/HTTPS: Request method, path, status code, latency — equivalent to what you would get from an access log, but captured at the kernel level for every service automatically.
Database protocols: PostgreSQL, MySQL, Redis, and MongoDB wire protocols are parsed to extract query latency, error rates, and connection counts. This means you get database performance metrics without installing any database monitoring agent or modifying connection strings.
gRPC: Method-level latency and error tracking for gRPC services, captured without modifying the gRPC framework configuration.
DNS: Resolution latency and failure rates for every DNS lookup, helping identify DNS-related performance issues that are notoriously difficult to debug with application-level tools.
Kafka: Producer and consumer lag measurements captured at the protocol level, providing broker-independent visibility into message pipeline performance.
Continuous Profiling
Perhaps the most underappreciated capability of eBPF-based observability is continuous profiling. Traditional profiling requires attaching a profiler to a specific process, running it for a period, and analyzing the output. This is too disruptive and resource-intensive for production use.
eBPF-based profiling works differently. It attaches to perf events and samples CPU stack traces at fixed intervals across all processes on a host. The overhead is minimal — typically less than 1% CPU — making it feasible to run continuously in production.
The practical value is significant. When a service experiences a latency spike, you do not need to reproduce the issue with a profiler attached. The profiling data is already there, captured as flame graphs that show exactly where CPU time was spent during the incident. This turns performance debugging from a reactive investigation into a retrospective analysis.
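A minimal sketch of this approach using bpftrace's `profile` probe, which attaches to perf events exactly as described above (requires root; the 99 Hz sampling rate and 10-second window are arbitrary choices for illustration):

```shell
# Sample kernel and user stacks on all CPUs at 99 Hz for 10 seconds,
# then print the most frequent stacks: the raw material for a flame graph
sudo timeout 10 bpftrace -e 'profile:hz:99 { @samples[kstack, ustack, comm] = count(); }'
```

Production profilers such as Parca or Coroot run the same kind of sampling continuously and render the aggregated stacks as flame graphs.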
eBPF for Security: The Kernel as First Responder
Observability is only half the story. eBPF is equally transformative for runtime security, and the convergence of observability and security into a single kernel-level agent is one of the most important architectural shifts in modern infrastructure.
Runtime Threat Detection
Traditional security monitoring relies on log analysis — examining application logs, audit logs, and system logs for indicators of compromise. This approach has fundamental limitations: attackers can modify or suppress logs, applications may not log security-relevant events, and log shipping introduces latency between an event and its detection.
eBPF-based security monitoring operates at a different level. By hooking into system calls and kernel events, it observes activities that no log can suppress:
- Process execution: Every process spawned on a system, including those launched by container breakout exploits
- File access: Every file opened, read, written, or deleted, including access to sensitive paths like `/etc/shadow` or cryptographic keys
- Network connections: Every outbound connection, including those that bypass application-level network policies
- Privilege escalation: System calls that modify process capabilities, user IDs, or security contexts
- Kernel module loading: Attempts to load kernel modules, which can indicate rootkit installation
Policy Enforcement
Detection is valuable, but prevention is better. eBPF LSM (Linux Security Module) hooks enable security policies to be enforced directly in the kernel, blocking unauthorized actions before they take effect.
Tetragon, developed by the Cilium team, is the leading tool in this space. It provides:
Process execution policies: Define which binaries are allowed to execute in a container. If a shell spawns inside a container that should only run a Go binary, Tetragon can block the execution and generate an alert.
Network policies: Enforce which destinations a pod can connect to at the kernel level, bypassing potential container runtime vulnerabilities.
File access policies: Restrict which files and directories a process can access, providing defense-in-depth beyond filesystem permissions.
Capability restrictions: Limit which Linux capabilities a process can exercise, even if the container runtime grants them.
The enforcement happens in the kernel, which means it cannot be bypassed by application-level exploits. An attacker who gains code execution inside a container still cannot perform actions that the eBPF policy blocks, because the policy is enforced before the system call completes.
Network Security
Cilium, the most widely deployed eBPF-based networking tool, has redefined how network security works in Kubernetes environments. Traditional network policies operate at the IP address and port level. Cilium's eBPF-based policies operate at the identity and API level:
- Identity-based policies: Policies reference Kubernetes labels and service identities rather than IP addresses, eliminating the need to track pod IP allocations
- L7 filtering: HTTP, gRPC, and Kafka-aware policies that can restrict access to specific API endpoints, not just ports
- Transparent encryption: WireGuard-based encryption between nodes, implemented in the kernel via eBPF without requiring application changes
- Bandwidth management: Per-pod bandwidth limits enforced at the kernel level
The eBPF Tool Ecosystem
The eBPF ecosystem has consolidated around several key projects, each addressing a specific domain.
Networking and Security
| Tool | Purpose | Maintainer |
|---|---|---|
| Cilium | Kubernetes networking, network policy, service mesh | Isovalent (Cisco) |
| Tetragon | Runtime security enforcement | Isovalent (Cisco) |
| Falco | Runtime threat detection | Sysdig / CNCF |
| Calico eBPF | Kubernetes networking with eBPF datapath | Tigera |
Observability
| Tool | Purpose | Maintainer |
|---|---|---|
| Coroot | Full-stack observability (metrics, logs, traces, profiling) | Coroot |
| Hubble | Network observability for Cilium | Isovalent (Cisco) |
| Pixie | Kubernetes observability | New Relic / CNCF |
| Parca | Continuous profiling | Polar Signals |
| Grafana Beyla | Auto-instrumentation for HTTP and gRPC | Grafana Labs |
Tracing and Debugging
| Tool | Purpose | Maintainer |
|---|---|---|
| bpftrace | High-level tracing language for eBPF | IO Visor |
| BCC | BPF Compiler Collection toolkit | IO Visor |
| bpftool | eBPF program management utility | Linux kernel |
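For a first look at what is already running on a host, `bpftool` can enumerate loaded programs and maps (requires root):

```shell
# List all eBPF programs currently loaded in the kernel
sudo bpftool prog show

# List eBPF maps, the shared state between programs and user space
sudo bpftool map show
```

On a node running Cilium or an observability agent, the output typically shows dozens of programs attached to XDP, TC, and tracing hooks.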
Practical Adoption: Getting Started
Adopting eBPF does not require deep kernel knowledge. Modern eBPF tools abstract away the complexity and present familiar interfaces — dashboards, alerts, and policy definitions.
Starting with Observability
The lowest-risk entry point is observability. Deploy an eBPF-based agent like Coroot or Grafana Beyla alongside your existing monitoring stack. The agent requires no application changes — it runs as a privileged container or DaemonSet and immediately begins collecting metrics.
For Kubernetes environments:
```shell
# Deploy Coroot with Helm
helm repo add coroot https://coroot.github.io/helm-charts
helm repo update coroot
helm install -n coroot --create-namespace coroot-operator coroot/coroot-operator
helm install -n coroot coroot coroot/coroot-ce

# Access the dashboard
kubectl port-forward -n coroot service/coroot-coroot 8080:8080
```
Within minutes, you will see a service map, latency metrics for HTTP and database connections, and resource utilization data — all captured without any instrumentation changes to your applications.
Adding Security Enforcement
Once observability is in place, the natural next step is security enforcement. Tetragon provides a graduated path:
Phase 1: Audit mode. Deploy Tetragon with policies in audit mode. It logs policy violations without blocking them, giving you time to understand your application behavior and refine policies before enforcement.
Phase 2: Alert mode. Connect Tetragon events to your alerting system. Receive notifications when suspicious activity occurs — unexpected processes, unauthorized network connections, sensitive file access.
Phase 3: Enforcement mode. Enable enforcement on policies that have been validated in audit mode. Start with the most critical restrictions — container breakout prevention, for example — and gradually expand coverage.
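During the audit and alert phases, Tetragon's `tetra` CLI can stream the raw events directly from the agent. Assuming the quickstart install in the `kube-system` namespace (adjust if you installed elsewhere):

```shell
# Stream Tetragon events from the running DaemonSet in compact form
kubectl exec -ti -n kube-system ds/tetragon -c tetragon -- tetra getevents -o compact
```

Watching this stream while exercising your applications is the fastest way to see what a candidate policy would have matched before you enable enforcement.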
Kernel Requirements
eBPF capabilities depend on the Linux kernel version. For modern eBPF observability and security tools, you need:
| Feature | Minimum Kernel | Recommended |
|---|---|---|
| Basic eBPF | 4.4 | 5.10+ |
| BTF support | 5.2 | 5.10+ |
| LSM hooks | 5.7 | 5.15+ |
| BPF tokens | 6.9 | 6.9+ |
| Ring buffer | 5.8 | 5.10+ |
Most cloud provider managed Kubernetes services (EKS, GKE, AKS) run kernels that support all modern eBPF features. On-premises deployments should target kernel 5.10 or later for the best compatibility.
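Before rolling out, it is worth checking the two properties that matter most in practice: the kernel version and BTF availability. A quick sketch:

```shell
# 1. Kernel version: most modern eBPF tools want 5.10 or later
uname -r

# 2. BTF support (needed by CO-RE-based tools): kernels built with
#    CONFIG_DEBUG_INFO_BTF expose their type information here
if [ -e /sys/kernel/btf/vmlinux ]; then
  echo "BTF: available"
else
  echo "BTF: missing"
fi
```

If BTF is missing, many agents fall back to shipping per-kernel type information, but a BTF-enabled kernel is the path of least friction.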
Performance Considerations
eBPF monitoring adds minimal overhead, but "minimal" is not "zero":
- CPU overhead: Typically 1-2% for comprehensive monitoring (network, process, file, profiling)
- Memory usage: 50-200 MB per node for the agent, depending on cardinality
- Network overhead: Metrics and events are shipped to a central server; bandwidth usage depends on cluster size and activity
- Storage: ClickHouse (used by Coroot) or Prometheus (used by many tools) requires storage proportional to the number of services and retention period
For most environments, these overheads are negligible compared to the visibility gained. However, high-frequency trading systems, real-time audio/video processing, and other latency-critical workloads should benchmark eBPF tools carefully before production deployment.
The Convergence of Security and Observability
The most significant trend in the eBPF ecosystem is the convergence of security and observability into unified platforms. Historically, these were separate disciplines with separate tools, separate teams, and separate budgets. eBPF erases the technical boundary.
When a single kernel-level agent captures network connections, process execution, file access, and system call patterns, the same data serves both purposes:
- Observability: "Service A's latency to the database increased by 200ms after the last deployment"
- Security: "An unexpected process spawned in service A's container and made an outbound connection to an unknown IP"
Both observations come from the same eBPF data source. The difference is in how the data is analyzed and what actions are triggered. This convergence reduces agent sprawl (one agent instead of three or four), eliminates data duplication, and enables correlation that was previously impossible — like linking a security event to its performance impact in real time.
Tools like Coroot already embody this convergence, providing observability dashboards alongside SLO tracking and anomaly detection. Cilium and Tetragon together provide networking, observability, and security enforcement from a single platform. Expect this convergence to accelerate as the ecosystem matures.
Conclusion: The New Infrastructure Layer
eBPF has moved from a Linux kernel feature to a foundational infrastructure layer. It is the technology behind the networking in most major cloud providers' Kubernetes offerings. It powers the observability platforms that replaced traditional APM agents. It enforces the security policies that protect containers at runtime.
For engineering teams, the practical takeaway is straightforward: if you are running Linux workloads — especially in Kubernetes — eBPF-based tools should be part of your infrastructure stack. The observability you gain without any code changes is remarkable. The security enforcement you can add without application modifications is transformative. And the convergence of both capabilities into unified platforms simplifies operations in ways that separate tool stacks never could.
Start with observability. Add security enforcement gradually. Let the kernel do the work that you have been asking your applications to do. The results will be worth it.