Line-rate packet capture for Java. DPDK and Napatech SmartNIC support. Full L2–L7 dissection. Inline, capture, and transmit modes. The foundation that powers ExaViewer, ExaCapture, ExaVolume, and the Vantage Platform.
Create a backend, open a channel, fork a task worker. The same pattern works across DPDK, Napatech, and libpcap — swap the backend class, keep everything else.
Net.activateLicense() searches for a commercial license
and falls back to the community edition automatically.
Switch from PcapBackend to DpdkBackend
or NtapiBackend with a single line change.
channel.acquire() blocks until a packet is available and
returns a zero-copy view into native memory.
channel.release() returns it to the pool — no GC, no allocation.
shutdownAfter(Duration) cleanly winds down all workers,
drains in-flight packets, and closes the backend. The session tree
tracks exactly what's blocking at every step.
Net.activateLicense(); // Commercial or community

// Swap to DpdkBackend or NtapiBackend — same API
try (Net net = new PcapBackend()) {
    PacketChannel channel = net.packetChannel("hello-channel");

    Capture capture = net.capture("hello-capture", "en0")
            .filter(PacketFilter.all())
            .assignTo(channel)
            .apply();

    System.out.println("Capturing on: " + capture.getPort());

    try (TaskExecutor executor = net.executor("packet-task")) {
        executor
                .fork(channel, this::processPackets)
                .shutdownAfter(Duration.ofSeconds(10))
                .awaitCompletion();
    }

    System.out.printf("Done: %d packets%n",
            capture.metrics().packetsAssigned());
}

// --- Task worker ---
long count; // Packet counter used by the worker below

void processPackets(PacketChannel channel)
        throws SessionShutdownException, InterruptedException {
    while (channel.isActive()) {
        Packet packet = channel.acquire(); // Blocks
        System.out.printf("Packet #%d len=%-6d ts=%s%n",
                count++,
                packet.captureLength(),
                packet.timestampInfo());
        channel.release(packet); // Returns to pool
    }
}
A single Net session can run capture-only, inline IDS,
and traffic generation simultaneously — each with its own channel
pool, worker threads, and protocol stack. This is the full Showcase
pattern used in production deployments.
try (Net net = new DpdkBackend()) {

    // Each channel type gets its own pool
    PacketChannel[] capChannels = net.packetChannels("capture-channel", 4);
    PacketChannel[] idsChannels = net.packetChannels("inline-ids-channel", 8);
    PacketChannel[] genChannels = net.packetChannels("traffic-gen-channel", 4);
    ProtocolChannel[] tcpChannels = net.protocolChannels("tcp-channel", 15, TcpSegment.class);
    TokenChannel tcpTokens = net.tokenChannel("analysis-tokens", TcpToken.class);

    // Capture-only: en0 → TCP only → 4 capture workers
    Capture capture = net.capture("tcp-capture", "en0")
            .filter(PacketFilter.tcp())
            .assignTo(capChannels)
            .apply();

    // Inline IDS: en1 → inspect → forward/drop → en0
    Inline inline = net.inline("inline-ids", "en1")
            .filter(PacketFilter.all())
            .assignTo(idsChannels)
            .txEnable(true)
            .txPorts("en0")
            .txImmediately()
            .apply();

    // Traffic generation: empty buffers → generate → transmit
    Transmit transmit = net.transmit("traffic-gen")
            .assignTo(genChannels)
            .txEnable(true).txPort("en0").txImmediately()
            .apply();

    // TCP reassembly with protocol stack + token stream
    ProtocolStack stack = new ProtocolStack()
            .enableIpReassembly()
            .enableTcpReassembly();

    Capture tcpReassembled = net.capture("tcp-reassembled", "en0")
            .filter(PacketFilter.tcp())
            .assignTo(tcpChannels) // Receives TcpSegment objects
            .assignTo(tcpTokens)   // Tokens go here
            .protocol(stack)
            .apply();

    // Fork all workers — 4 + 8 + 4 + 15 + 1 = 32 total
    try (TaskExecutor executor = net.executor("packet-tasks")) {
        executor
                .onTaskException(this::handleErrors)
                .maxRestarts(3).restartDelay(Duration.ofSeconds(1))
                .fork(capChannels, this::capturedPackets)               // 4 workers
                .fork(idsChannels, this::intrusionDetection)            // 8 workers
                .fork(genChannels, "en0", "en1", this::generateTraffic) // 4 workers
                .fork(tcpChannels, this::processTcpStreams)             // 15 workers
                .fork(tcpTokens, this::analyzeTcpTokens)                // 1 worker
                .shutdownAfter(Duration.ofMinutes(5))
                .awaitCompletion(); // 32 workers total
    }
}
PacketChannel for capture or transmit. ProtocolChannel<T> for reassembled protocol objects like TcpSegment. TokenChannel<T> for analysis event tokens. Each has its own acquire/release lifecycle.
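The shared acquire/release lifecycle can be sketched with a minimal pooled-channel analogue. This is not the library's implementation — just the contract all three channel types follow: a fixed set of preallocated objects, a blocking acquire, and a release that returns the object for reuse.

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal stand-in for the acquire/release contract shared by
// PacketChannel, ProtocolChannel<T>, and TokenChannel<T>.
class PooledChannel<T> {
    private final BlockingQueue<T> pool = new LinkedBlockingQueue<>();

    // All objects exist before the channel is used; nothing is
    // allocated on the hot path after this point.
    PooledChannel(List<T> preallocated) {
        pool.addAll(preallocated);
    }

    // Blocks until an object is free, then hands it out.
    T acquire() {
        try {
            return pool.take();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException("interrupted", e);
        }
    }

    // Returns the object to the pool for the next acquire().
    void release(T obj) {
        pool.add(obj);
    }

    int available() { return pool.size(); }
}
```

Every acquire must be paired with a release, or the pool eventually drains and acquire blocks forever — the same discipline the worker loops above follow.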
Inline mode receives packets before transmission. Set packet.tx().setTxEnabled(false) to drop. Override the TX port per-packet. Inject timestamps, CRCs, or custom headers using mbuf-style memory segments.
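The inspect-then-verdict step can be illustrated with a toy stand-in. `InlinePacket` and the one-byte "signature" here are hypothetical; the real API flips the flag via packet.tx().setTxEnabled(false) on a native packet view.

```java
// Sketch of an inline IDS verdict: every packet defaults to forward,
// and inspection may flip a per-packet TX flag to drop it.
class Verdict {
    static final class InlinePacket {
        final byte[] payload;
        boolean txEnabled = true; // Inline default: forward
        InlinePacket(byte[] payload) { this.payload = payload; }
    }

    // Drop anything whose first byte matches a (toy) bad signature.
    static void inspect(InlinePacket p) {
        if (p.payload.length > 0 && p.payload[0] == (byte) 0xEE) {
            p.txEnabled = false; // Analogue of packet.tx().setTxEnabled(false)
        }
    }
}
```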
Transmit channels supply empty buffers from a memory pool. Write raw bytes, set the capture length, release — packet transmits. Prefill mode cycles pre-built packets for line-rate retransmission with zero work per packet.
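The "zero work per packet" claim of prefill mode comes from cycling pre-built buffers by index. A minimal sketch, with `PrefillRing` as a hypothetical stand-in for the library's Transmit machinery:

```java
// Prefill-style retransmission: packets are built once, up front,
// then cycled forever. Per-packet cost is one index read and one
// increment — no copy, no allocation.
class PrefillRing {
    private final byte[][] prebuilt;
    private int next;

    PrefillRing(byte[][] prebuilt) { this.prebuilt = prebuilt; }

    // Hands out the next pre-built packet in round-robin order.
    byte[] nextPacket() {
        byte[] p = prebuilt[next];
        next = (next + 1) % prebuilt.length;
        return p;
    }
}
```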
onTaskException() with TaskRecovery.RESTART_DELAYED or SHUTDOWN_GROUP. Configure max restarts and delay. Workers recover without restarting the entire session or losing other channels.
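The RESTART_DELAYED idea — rerun a failing worker up to a budget, pausing between attempts — can be sketched as a small supervisor loop. This is illustrative only; the library's TaskExecutor applies the policy per worker group, not per call.

```java
import java.time.Duration;

// Restart-with-delay supervisor: retries a failing task up to
// maxRestarts times, then escalates (the SHUTDOWN_GROUP analogue).
class RestartPolicy {
    static int runWithRestarts(Runnable task, int maxRestarts, Duration delay) {
        int restarts = 0;
        while (true) {
            try {
                task.run();
                return restarts; // Clean completion: report restarts used
            } catch (RuntimeException e) {
                if (restarts >= maxRestarts) throw e; // Budget exhausted: escalate
                restarts++;
                try {
                    Thread.sleep(delay.toMillis()); // Back off before retrying
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw e;
                }
            }
        }
    }
}
```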
Token workers receive lightweight 16-byte analysis events out-of-band — completely independent of the packet path. The same token feeds ExaViewer's live UI, pcapng sidecar storage, and ML pipelines simultaneously. Subscribe to only what you need; unsubscribed packs cost exactly zero.
void analyzeTcpTokens(TokenChannel<TcpToken> channel)
        throws SessionShutdownException, InterruptedException {
    while (channel.isActive()) {
        TcpToken token = channel.acquire();
        TcpStream stream = token.tcpStream();
        State state = stateMap.computeIfAbsent(
                stream.flowKey(), k -> new State());

        switch (token.tokenType()) {
            case STREAM_SYN -> state.handleConnectionStart(stream);
            case STREAM_FIN -> {
                state.handleConnectionEnd(stream);
                state.markForEviction();
            }
            case STREAM_TIMEOUT -> stateMap.remove(stream.flowKey());
            case SEGMENT_OUT_OF_ORDER -> state.detectAnomaly(stream, "OOO");
            case WINDOW_RESIZE -> state.monitorPerformance(stream);
            case RETRANSMIT -> state.detectLoss(stream);
            case FAST_RETRANSMIT -> state.detectCongestion(stream);
            // Not interested — prune automatically
            default -> channel.disable(token.tokenType());
        }
        channel.release(token);
    }
}
channel.disable(tokenType) tells the backend to stop emitting that token type to this specific channel. Zero serialization overhead for events you're not consuming.
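Under the hood this kind of pruning reduces to one bit per token type: disabling a type clears its bit, and the emit path is a single AND test. A minimal sketch — the bit assignments here are invented for illustration, not the backend's real token IDs:

```java
// One bit per token type; emission checks a single mask.
class TokenMask {
    static final long STREAM_SYN    = 1L << 0;
    static final long STREAM_FIN    = 1L << 1;
    static final long WINDOW_RESIZE = 1L << 2;

    private long enabled = ~0L; // All token types on by default

    // Equivalent of channel.disable(tokenType): clear the type's bit.
    void disable(long type) { enabled &= ~type; }

    // Emit path: one AND and one compare, nothing serialized
    // for types the consumer has switched off.
    boolean shouldEmit(long type) { return (enabled & type) != 0; }
}
```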
TokenChannel<TcpToken> gives you typed access to TCP-specific fields — stream.flowKey(), token.frameNumber(), state machine events — without casting or instanceof checks.
Severity and LOD level are encoded in the token ID — routers filter by criticality with a bitmask, no deserialization needed. PACK_ML = 0x3000 and PACK_USER = 0xF000 are reserved for ML inference and custom analyzers.
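Pack routing by bitmask can be sketched as follows. PACK_ML = 0x3000 and PACK_USER = 0xF000 come from the text; the 0xF000 pack mask (and the assumption that the pack lives in the top nibble of the ID) is illustrative.

```java
// Router-side pack filtering: no deserialization, just an AND
// against the token ID and a compare.
class TokenId {
    static final int PACK_MASK = 0xF000; // Assumed: pack occupies the top nibble
    static final int PACK_ML   = 0x3000; // Reserved for ML inference (from the text)
    static final int PACK_USER = 0xF000; // Reserved for custom analyzers (from the text)

    // True when the token belongs to the given pack.
    static boolean isPack(int tokenId, int pack) {
        return (tokenId & PACK_MASK) == pack;
    }
}
```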
Tokens write inline to pcapng sidecars as they're emitted. Open a historical file months later in ExaViewer — flow markers, alerts, and anomalies are already there. No re-analysis required.
Kernel bypass via DPDK. Napatech SmartNIC offload. NUMA-aware multi-queue distribution. Hash(5-tuple) keeps flows on the same ProcessorTree — no cross-thread synchronization.
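The flow-pinning property — Hash(5-tuple) modulo the queue count — guarantees that every packet of a flow lands on the same worker, so per-flow state never crosses threads. A minimal sketch; the field layout of the key is illustrative:

```java
// 5-tuple flow key; equal tuples hash identically, so a flow is
// always dispatched to the same queue (and the same ProcessorTree).
record FlowKey(int srcIp, int dstIp, int srcPort, int dstPort, int proto) {
    int queue(int numQueues) {
        // floorMod guards against negative hashCode values
        return Math.floorMod(hashCode(), numQueues);
    }
}
```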
Ethernet, IP, TCP, UDP, TLS, HTTP/1–3, QUIC, DNS, GTP, and 90+ more. IP and TCP reassembly. Stream objects, segment objects, and application metadata all available as typed channels.
Pre-bound packet views, CAS pool acquire/release, VarHandle field access. No new in the packet path. No exceptions. No callbacks. No locks. No map lookups in routing.
Direct read/write to ExaVolume with block-level efficiency. RDMA-striped writes across network nodes directly from adapters to SSD targets. Sidecar and index generation included.
Napatech SmartNIC offload for capture, timestamping, and filtering. FPGA-ready via Forge integration. Backend-specific capabilities surfaced through the same Net API.
Hierarchical session tree with live state inspection. See exactly which worker is blocked, on which operation, for how long. Frozen state preserved post-shutdown for post-mortem debugging.
800 Gbps requires every decision in the packet path to be measurable in nanoseconds. These are non-negotiable rules that apply to every processor in the stack.
Packet arrival (800 Gbps path):

  1. packet = viewPool.allocate()          // CAS on free list — no allocation
  2. packet.dataMemory().bind(nativeData)  // Reference assignment only
  3. packet.descriptor().bind(nativeDesc)  // Reference assignment only
  4. processPacket(packet)                 // Zero-allocation processing
  5. emit tokens out-of-band               // Independent of packet path
  6. viewPool.release(packet)              // CAS back to free list

Rules — enforced across every processor:

  No new          Rebind, pool, reuse. GC never sees a packet.
  No exceptions   Errors increment counters. Stacks never unwind.
  No locks        Single-threaded ProcessorTree. Hash(5-tuple) distribution.
  No callbacks    Direct method calls only. Lambda dispatch not used.
  Bitmask prune   Disabled features vanish from the instruction stream.
  Pre-binding     VarHandle field access. All objects allocated before first packet.
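The CAS free list behind viewPool.allocate()/release() can be sketched as a Treiber stack of preallocated, intrusively linked views. This is a simplified model, not the library's pool: after the pre-binding phase nothing is allocated, and both hot-path operations are a single compare-and-set loop.

```java
import java.util.concurrent.atomic.AtomicReference;

// Lock-free free list (Treiber stack) over preallocated views.
class ViewPool<T> {
    static final class View<T> {
        final T data;
        View<T> next; // Intrusive free-list link — no wrapper nodes
        View(T data) { this.data = data; }
    }

    private final AtomicReference<View<T>> top = new AtomicReference<>();

    // Pre-binding phase: every view exists before the first packet.
    void add(View<T> v) { release(v); }

    // Hot path: pop the free list with one CAS loop — no allocation, no lock.
    View<T> allocate() {
        View<T> head;
        do {
            head = top.get();
            if (head == null) return null; // Pool exhausted
        } while (!top.compareAndSet(head, head.next));
        return head;
    }

    // Hot path: push the view back with one CAS loop.
    void release(View<T> v) {
        View<T> head;
        do {
            head = top.get();
            v.next = head;
        } while (!top.compareAndSet(head, v));
    }
}
```

Because views are recycled by reference, the same objects circulate forever — the GC never observes a packet on the hot path, matching the "No new" rule above.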
Configure your license or talk to our engineering team.