Write a Linux packet sniffer from scratch with Raw socket and BPF

Chris Bao
Published in Level Up Coding · 14 min read · Apr 24, 2022


Background

When we talk about network packet sniffers, famous and popular tools such as tcpdump come to mind. I have shown how to capture network packets with such tools in my previous articles. But have you ever thought about writing a packet sniffer from scratch, without depending on any third-party libraries? We need to dig deep into the operating system and find the weapons needed to build this tool. Sounds complex, right? In this article, let us do exactly that. After reading it, you will find that it is not as difficult as you might think.

Note that different operating system kernels have different internal network implementations. This article will focus on the Linux platform.

Note: this article was originally published on my personal blog here. Thanks!

Introduction

First, we need to review how tcpdump is implemented. According to the official documentation, tcpdump is built on the library libpcap, which was developed based on remarkable research done at Berkeley; for details, you can refer to this paper.

As you know, different operating systems have different internal implementations of network stacks. libpcap covers all of these differences and provides a system-independent interface for user-level packet capture. But this article focuses on the Linux platform, so how does libpcap work on a Linux system? According to some documents, it turns out that libpcap uses the PF_PACKET socket to capture packets on a network interface.

So the next question is: what is the PF_PACKET socket?

PF_PACKET socket

In my previous article, we mentioned that the socket interface is TCP/IP’s window on the world. In most modern systems incorporating TCP/IP, the socket interface is the only way applications can use the TCP/IP suite of protocols.


That is correct. This time, let's dig deeper into sockets by examining the system call executed when we create a new socket:
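For reference, the prototype of the socket system call is:

#include <sys/types.h>
#include <sys/socket.h>

int socket(int domain, int type, int protocol);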

When you create a socket with the above system call, you have to specify which domain (or protocol family) you want to use with that socket as the first argument. The most commonly used family is PF_INET, which is for communication based on IPv4 protocols (when you create a TCP server, you use this family). Moreover, you have to specify a type for your socket as the second argument, and the possible values depend on the family you specified. For example, for the PF_INET family, the values for type include SOCK_STREAM (for TCP) and SOCK_DGRAM (for UDP). For other details about the socket system call, you can refer to the socket(2) man page.

In the man page, you can find one potential value for the domain argument as follows:

AF_PACKET    Low-level packet interface

Note: AF_PACKET and PF_PACKET are the same. Historically it was called PF_PACKET and was later renamed AF_PACKET. PF means protocol family, and AF means address family. In this article, I use PF_PACKET.

Unlike a PF_INET socket, which hands you data that has already gone through TCP/IP processing, a PF_PACKET socket gives you the raw Ethernet frame, bypassing the usual upper-layer handling of the TCP/IP stack. It might sound a little crazy, but that is it: any packet received is passed directly to the application.

For a better understanding of the PF_PACKET socket, let us go deeper and roughly trace the path a received packet takes from the network interface to the application level.

As shown in the image above, when the network interface card (NIC) receives a packet, the packet is handled by the driver. The driver internally maintains a structure called a ring buffer and writes the packet into pre-allocated kernel memory (the ring buffer) with direct memory access (DMA). The packet is placed inside a structure called sk_buff, one of the most important structures in the kernel network subsystem.

After entering kernel space, the packet goes through the protocol stack layer by layer, such as IP processing and TCP/UDP processing, and finally reaches the application via the socket interface. You already know this familiar path very well.

But for a PF_PACKET socket, the packet in sk_buff is cloned; the clone skips the protocol stack and goes directly to the application. The kernel needs the clone operation because one copy is consumed by the PF_PACKET socket, while the other goes through the usual protocol stack.

In future articles, I’ll demonstrate more about Linux kernel network internals.

Next, let us see how to create a PF_PACKET socket at the code level. For brevity, I omit some code and show only the essential part. You can refer to this Github repo for the details.

Please make sure to include the system header files <sys/socket.h> and <sys/types.h>.
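A minimal sketch of the socket creation looks like this (the exact code in the repo may differ; <arpa/inet.h> and <linux/if_ether.h> are also needed here for htons and ETH_P_ALL):

#include <sys/types.h>
#include <sys/socket.h>
#include <arpa/inet.h>      /* htons */
#include <linux/if_ether.h> /* ETH_P_ALL */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* SOCK_RAW with PF_PACKET delivers raw Ethernet frames;
     * ETH_P_ALL asks for packets of every protocol. */
    int sock = socket(PF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (sock == -1) {
        perror("socket");   /* creating a PF_PACKET socket requires root or CAP_NET_RAW */
        exit(EXIT_FAILURE);
    }
    /* ... the rest of the sniffer goes here ... */
    return 0;
}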

Bind to one network interface

Without additional settings, the sniffer captures all the packets received on all the network devices. As the next step, let us try to bind the sniffer to a specific network device.

First, you can use the ifconfig command to list all the available network interfaces on your machine. A network interface is a software interface to the networking hardware.

For example, the following output shows the information of the network interface eth0:

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 192.168.230.49 netmask 255.255.240.0 broadcast 192.168.239.255
inet6 fe80::215:5dff:fefb:e31f prefixlen 64 scopeid 0x20<link>
ether 00:15:5d:fb:e3:1f txqueuelen 1000 (Ethernet)
RX packets 260 bytes 87732 (87.7 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 178 bytes 29393 (29.3 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

Let’s bind the sniffer to eth0 as follows:
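Here is a minimal sketch using the SO_BINDTODEVICE socket option, assuming sock is the PF_PACKET socket created above (the repo's actual code may differ):

#include <string.h>

/* Inside main(), after the socket() call above. */
const char *ifname = "eth0";
/* Bind the raw socket to a single interface; packets arriving on
 * other interfaces will no longer be delivered to this socket. */
if (setsockopt(sock, SOL_SOCKET, SO_BINDTODEVICE,
               ifname, strlen(ifname) + 1) == -1) {
    perror("setsockopt(SO_BINDTODEVICE)");
}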

We do it by calling the setsockopt system call. I leave its detailed usage to you.

Now the sniffer only captures network packets received on the specified network card.

Non-promiscuous and promiscuous mode

By default, each network card minds its own business and reads only the frames directed to it. That is, the network card discards all the packets that do not contain its own MAC address; this is called non-promiscuous mode.

Next, let us make the sniffer work in promiscuous mode, so that it retrieves all the packets it sees, even the ones that are not addressed to its host.

To set a network interface to promiscuous mode, all we have to do is issue the ioctl() system call on an open socket bound to that interface.

ioctl stands for I/O control; it manipulates the underlying device parameters of special files. ioctl takes three arguments:

  • The first argument must be an open file descriptor. We use the socket file descriptor bound to the network interface in our case.
  • The second argument is a device-dependent request code. You can see we call ioctl twice (see the sketch after this list): the first call uses the request code SIOCGIFFLAGS to get the current flags, and the second call uses the request code SIOCSIFFLAGS to set the new flags. Do not be fooled by these two constants, which are spelled almost alike.
  • The third argument is for returning information to the requesting process.
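Here is a sketch of the two calls, reusing the sock and the interface name "eth0" from the earlier snippets:

#include <net/if.h>      /* struct ifreq, IFF_PROMISC, IFNAMSIZ */
#include <sys/ioctl.h>   /* ioctl, SIOCGIFFLAGS, SIOCSIFFLAGS */
#include <string.h>

/* Inside main(), after the bind step above. */
struct ifreq ethreq;
memset(&ethreq, 0, sizeof(ethreq));
strncpy(ethreq.ifr_name, "eth0", IFNAMSIZ - 1);

/* First call: read the current interface flags into ethreq. */
if (ioctl(sock, SIOCGIFFLAGS, &ethreq) == -1) {
    perror("ioctl(SIOCGIFFLAGS)");
}

/* Second call: add the promiscuous bit and write the flags back. */
ethreq.ifr_flags |= IFF_PROMISC;
if (ioctl(sock, SIOCSIFFLAGS, &ethreq) == -1) {
    perror("ioctl(SIOCSIFFLAGS)");
}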

Now the sniffer can retrieve all the data packets received on the network card, no matter to which host the packets are addressed.

Packet filtering with BPF

So far the sniffer captures all the network packets received on the network card. But a powerful network sniffer like tcpdump should provide packet-filtering functionality. For instance, the sniffer might capture only TCP segments (and skip UDP), or only the packets coming from a specific source IP address. Next, let's explore how to do that.

Background of BPF

Berkeley Packet Filter (BPF) is the essential underlying technology for packet capture in Unix-like operating systems. Searching for BPF online yields very confusing results. It turns out that BPF keeps evolving, and there are several associated concepts, such as BPF, cBPF, eBPF, and LSF. So let us examine those concepts along the timeline:

  • In 1992, BPF was first introduced in the BSD Unix system for filtering unwanted network packets. The proposal came from researchers at Lawrence Berkeley Laboratory, who also developed libpcap and tcpdump.
  • In 1997, the Linux Socket Filter (LSF) was developed based on BPF and introduced in Linux kernel version 2.1.75. Note that LSF and BPF have some distinct differences, but in the Linux context, when we speak of BPF or LSF, we mean the same packet-filtering mechanism in the Linux kernel. We'll examine the detailed theory and design of BPF in the following sections.
  • Originally, BPF was designed as a network packet filter. But in 2013, BPF was extended widely, and it can now be used for non-networking purposes such as performance analysis and troubleshooting. Nowadays the extended version is called eBPF, and the original, now-obsolete version has been renamed classic BPF (cBPF). Note that what we examine in this article is cBPF; eBPF is outside the scope of this article. eBPF is one of the hottest technologies in today's software world, and I'll talk about it in the future.

Where to place BPF

The first question to answer is: where should we place the filter?

The best solution is to put the filter as early in the path as possible. Copying a large amount of data from kernel space to user space produces a huge overhead, which can hurt system performance a lot. So BPF is a kernel feature: the filter should be triggered immediately when a packet arrives at the network interface. As the original BPF paper says, "To minimize memory traffic, the major bottleneck in most modern systems, the packet should be filtered 'in place' (e.g., where the network interface DMA engine put it) rather than copied to some other kernel buffer before filtering."

Let's verify this behavior by examining the kernel source code (note that the kernel code shown in this article is based on version 2.6, which contains the cBPF implementation):
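The relevant part of packet_create, abridged and lightly paraphrased from net/packet/af_packet.c (the exact code differs between kernel versions; error handling and unrelated fields are omitted):

static int packet_create(struct net *net, struct socket *sock, int protocol)
{
    struct sock *sk;
    struct packet_sock *po;
    /* ... allocation and initialization omitted ... */

    po->prot_hook.func = packet_rcv;          /* the hook run for every received packet */
    if (sock->type == SOCK_PACKET)
        po->prot_hook.func = packet_rcv_spkt;

    if (protocol) {
        po->prot_hook.type = protocol;
        dev_add_pack(&po->prot_hook);         /* register the hook with the network stack */
    }
    /* ... */
    return 0;
}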

The packet_create function handles socket creation when the application calls the socket system call. It attaches the hook function packet_rcv to the socket (the prot_hook.func assignment) and registers it with the network stack (the dev_add_pack call). The hook function executes whenever a packet is received.

The following code block shows the hook function packet_rcv:
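Again abridged and paraphrased to the lines relevant to filtering (the full function lives in net/packet/af_packet.c):

static int packet_rcv(struct sk_buff *skb, struct net_device *dev,
                      struct packet_type *pt, struct net_device *orig_dev)
{
    struct sock *sk = pt->af_packet_priv;
    unsigned int snaplen, res;
    /* ... */
    snaplen = skb->len;

    res = run_filter(skb, sk, snaplen);       /* run the attached BPF filter */
    if (!res)
        goto drop_n_restore;                  /* filter returned 0: drop the packet */
    if (snaplen > res)
        snaplen = res;                        /* keep only as many bytes as the filter asked for */
    /* ... clone the skb ... */
    __skb_queue_tail(&sk->sk_receive_queue, skb);   /* hand it to the PF_PACKET socket */
    /* ... */
}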

The packet_rcv function calls run_filter, which is just the BPF logic part (for now, you can regard it as a black box; we'll examine the details in the next section). Based on the return value of run_filter, the packet is either filtered out or put into the receive queue.

So far, you can see that BPF (the packet filtering) works inside kernel space, but the packet sniffer is a user-space application. The next question is: how do we pass the filtering rules from user space to the filtering handler in kernel space?

To answer this question, we have to understand BPF itself. It's the right time to appreciate this great piece of work.

BPF machine

As I mentioned above, BPF was introduced in the original paper written by researchers at Berkeley. Based on my own experience, I strongly recommend reading this great paper. At first I found it hard to read, so I turned to other related documents and tried to understand BPF from them. But most documents cover only one portion of the entire system, making it difficult to piece all the information together. Finally, I read the original paper and connected all the parts. As the saying goes, sometimes taking time is actually a shortcut.

Virtual CPU

A packet filter is simply a boolean-valued function on a packet. If the value of the function is true the kernel copies the packet for the application; if it is false the packet is ignored.

In order to be as flexible as possible and not limit applications to a set of predefined conditions, BPF is actually implemented as a register-based virtual machine (for the difference between stack-based and register-based virtual machines, you can refer to this article) running a user-defined program.

You can regard BPF as a virtual CPU consisting of an accumulator, an index register (x), a scratch memory store, and an implicit program counter. If you're not familiar with these concepts, here are some brief explanations:

  • An accumulator is a type of register included in a CPU. It acts as a temporary storage location holding an intermediate value in mathematical and logical calculations. For example, in the operation of “1+2+3”, the accumulator would hold the value 1, then the value 3, then the value 6. The benefit of an accumulator is that it does not need to be explicitly referenced.
  • An index register in a computer’s CPU is a processor register or assigned memory location used for modifying operand addresses during the run of a program.
  • A program counter is a CPU register in the computer processor which has the address of the next instruction to be executed from memory.

In the BPF machine, the accumulator is used for arithmetic operations, while the index register provides offsets into the packet or the scratch memory areas.

Instruction set and addressing modes

Like a physical CPU, BPF provides a small set of arithmetic, logical, and jump instructions, shown below; these instructions run on the BPF virtual machine (or virtual CPU):

instruction set

The first column, opcodes, lists the BPF instructions written in an assembly-language style. For example, ld, ldh and ldb copy the indicated value into the accumulator; ldx copies the indicated value into the index register; jeq jumps to the target instruction if the accumulator equals the indicated value; and ret returns the indicated value. You can check the detailed functionality of the instruction set in the paper.

This assembly-like style is more readable to humans. But when we develop an application (like the sniffer written in this article), we use binary code directly as the BPF instructions. This binary format is called BPF bytecode. I'll examine the way to convert the assembly language to bytecode later.

The second column addr modes lists the addressing modes allowed for each instruction. The semantics of the addressing modes are listed in the following table:

address mode

For instance, [k] means the data at byte offset k in the packet, and #k means the literal value k. You can read the paper to check the meaning of the other addressing modes.

Example BPF program

Now let’s try to understand the following small BPF program based on the knowledge above:
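The listing below is a reconstruction in the same assembly-like style the paper uses; it is exactly what tcpdump -d ip prints (more on that later), and the value returned on the accept branch is simply the snapshot length, which varies between tcpdump versions:

(000) ldh      [12]
(001) jeq      #0x800           jt 2    jf 3
(002) ret      #262144
(003) ret      #0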

A BPF program consists of an array of BPF instructions. For example, the above BPF program contains four instructions.

The first instruction, ldh, loads a half-word (16-bit) value into the accumulator from offset 12 in the Ethernet frame. According to the Ethernet frame format shown below, this value is the Ethernet type field, which indicates which protocol is encapsulated in the frame's payload (for example, 0x0806 for ARP, 0x0800 for IPv4, and 0x86DD for IPv6).

ethernet frame

The second instruction, jeq, compares the accumulator (which currently holds the Ethernet type field) to 0x800 (which stands for IPv4). If the comparison fails, zero is returned and the packet is rejected; if it succeeds, a non-zero value is returned and the packet is accepted. So this small BPF program filters out everything except IP packets. You can find other example BPF programs in the original paper. Go read it, and you will feel the flexibility of BPF as well as the beauty of its design.

Kernel implementation of BPF

Next, let's examine how the kernel implements BPF. As mentioned above, the hook function packet_rcv calls run_filter to handle the filtering logic. run_filter is defined as follows:
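Abridged and lightly paraphrased from net/packet/af_packet.c in the 2.6 series:

static inline unsigned int run_filter(struct sk_buff *skb, struct sock *sk,
                                      unsigned int res)
{
    struct sk_filter *filter;

    rcu_read_lock_bh();
    filter = rcu_dereference(sk->sk_filter);   /* the filter attached to this socket */
    if (filter != NULL)
        res = sk_run_filter(skb, filter->insns, filter->len);
    rcu_read_unlock_bh();

    return res;
}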

You can find that the real filtering logic is inside sk_run_filter:
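Below is a heavily abridged and paraphrased skeleton of sk_run_filter from net/core/filter.c; only a few of its many opcode cases are shown:

unsigned int sk_run_filter(struct sk_buff *skb,
                           struct sock_filter *filter, int flen)
{
    u32 A = 0;                  /* the accumulator */
    u32 X = 0;                  /* the index register */
    u32 mem[BPF_MEMWORDS];      /* the scratch memory store */
    int pc;

    /* The loop index pc acts as the implicit program counter. */
    for (pc = 0; pc < flen; pc++) {
        const struct sock_filter *fentry = &filter[pc];

        switch (fentry->code) {
        case BPF_ALU|BPF_ADD|BPF_X:
            A += X;
            continue;
        case BPF_JMP|BPF_JEQ|BPF_K:
            pc += (A == fentry->k) ? fentry->jt : fentry->jf;
            continue;
        case BPF_RET|BPF_K:
            return fentry->k;   /* the filter's verdict */
        /* ... dozens of other load/store/ALU/jump cases elided ... */
        }
    }
    return 0;
}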

As we mentioned, sk_run_filter is simply a boolean-valued function on a packet. It maintains the accumulator, the index register, and so on as local variables, and processes the array of BPF filter instructions in a for loop, where each instruction updates the local variables. In this way, it simulates a virtual CPU. Interesting, right?

BPF JIT

Since each network packet must go through the filtering function, the filter can become the performance bottleneck of the entire system.

A just-in-time (JIT) compiler was introduced into the kernel in 2011 to speed up BPF bytecode execution.

  • What is a JIT compiler? A JIT compiler runs after the program has started and compiles the code(usually bytecode or some type of VM instructions) on the fly(or just in time) into a form that’s usually faster, typically the host CPU’s native instruction set. This is in contrast to a traditional compiler that compiles all the code to machine language before the program is first run.

In the BPF case, the JIT compiler translates BPF bytecode directly into the host system's assembly code, which improves performance a lot. I'll not go into the details of the JIT in this article; you can refer to the kernel code.

Set BPF in sniffer

Next, let's add BPF to our packet sniffer. As mentioned above, at the application level the BPF instructions are expressed in bytecode format, using the following data structures:
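They are defined in the kernel header <linux/filter.h> and look like this (comments mine):

struct sock_filter {            /* one BPF instruction (a "filter block") */
    __u16 code;                 /* opcode */
    __u8  jt;                   /* jump offset if true */
    __u8  jf;                   /* jump offset if false */
    __u32 k;                    /* generic multiuse field */
};

struct sock_fprog {             /* the whole program, as required by SO_ATTACH_FILTER */
    unsigned short len;         /* number of instructions */
    struct sock_filter *filter; /* pointer to the instruction array */
};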

How can we convert the BPF assembly language into bytecode? There are two options. First, there is a small helper tool called bpf_asm (provided along with the Linux kernel source), which you can regard as the BPF assembly-language interpreter; however, it is not recommended for application developers.

Second, we can use tcpdump, which provides the converting functionality. You can find the following information from the tcpdump man page:

  • -d: Dump the compiled packet-matching code in a human-readable form to standard output and stop.
  • -dd: Dump packet-matching code as a C program fragment.
  • -ddd: Dump packet-matching code as decimal numbers (preceded with a count).

tcpdump ip means we want to capture all the IP packets. With the options -d, -dd and -ddd, the output is as follows:
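The output looks like this (the accept value 262144 is just tcpdump's snapshot length and may differ on your system; -ddd prints the same four instructions as plain decimal numbers preceded by the count):

$ tcpdump -d ip
(000) ldh      [12]
(001) jeq      #0x800           jt 2    jf 3
(002) ret      #262144
(003) ret      #0

$ tcpdump -dd ip
{ 0x28, 0, 0, 0x0000000c },
{ 0x15, 0, 1, 0x00000800 },
{ 0x6, 0, 0, 0x00040000 },
{ 0x6, 0, 0, 0x00000000 },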

The -d option prints the BPF instructions in assembly language (the same as the example BPF program shown above), and -dd prints the bytecode as a C program fragment. So tcpdump is the most convenient tool when you want to get BPF bytecode.

The BPF filter bytecode (wrapped in the structure sock_fprog) can be passed to the kernel through the setsockopt system call as follows:
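A sketch of attaching the filter generated above, assuming sock is the PF_PACKET socket from the earlier snippets:

#include <linux/filter.h>

/* Inside main(): the bytecode printed by `tcpdump -dd ip`, pasted in as a C array. */
struct sock_filter ip_filter[] = {
    { 0x28, 0, 0, 0x0000000c },
    { 0x15, 0, 1, 0x00000800 },
    { 0x06, 0, 0, 0x00040000 },
    { 0x06, 0, 0, 0x00000000 },
};

struct sock_fprog bpf = {
    .len    = sizeof(ip_filter) / sizeof(ip_filter[0]),
    .filter = ip_filter,
};

/* Hand the program to the kernel; it will run for every packet on this socket. */
if (setsockopt(sock, SOL_SOCKET, SO_ATTACH_FILTER, &bpf, sizeof(bpf)) == -1) {
    perror("setsockopt(SO_ATTACH_FILTER)");
}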

The setsockopt system call triggers two kernel functions: sock_setsockopt and sk_attach_filter (I'll not show the details of these two functions), which bind the filter to the socket. Then the run_filter kernel function (mentioned above) gets the filter from the socket and executes it on each packet.

So far, every piece is connected and the puzzle of BPF is solved. The BPF machine allows user-space applications to inject customized BPF programs straight into the kernel. Once loaded and verified, BPF programs execute in kernel context. These programs operate inside kernel memory space with access to the internal kernel state available to them; for the cBPF machine, that is the network packet data. This power has been extended in eBPF, which can be used in many other kinds of applications. As someone put it, "In some ways, eBPF does to the kernel what JavaScript does to websites: it allows all sorts of new applications to be created." In the future, I plan to examine eBPF in depth.

Process the packet

We examined the kernel-level BPF filtering theory at length in the sections above. But for our tiny sniffer, the last step is to process the network packets:

  • First, the recvfrom system call reads a packet from the socket. We put the call in a while loop to keep reading incoming packets.
  • Then, we print the source and destination MAC addresses in the packet (the packet we got is a raw Ethernet frame at layer 2, right?). And if the Ethernet frame contains an IPv4 packet, we also print the source and destination IP addresses; see the sketch after this list. To understand more, you can study the header formats of the various network protocols. I will not cover them in detail here.
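Here is a sketch of that receive loop, assuming sock is the PF_PACKET socket prepared above (the code in the repo may differ in details):

#include <stdio.h>
#include <sys/socket.h>
#include <net/ethernet.h>   /* struct ether_header, ETHERTYPE_IP */
#include <netinet/ip.h>     /* struct iphdr */
#include <arpa/inet.h>      /* inet_ntop, ntohs */

/* Inside main(), after the socket setup above. */
unsigned char buf[65536];

while (1) {
    ssize_t n = recvfrom(sock, buf, sizeof(buf), 0, NULL, NULL);
    if (n < (ssize_t)sizeof(struct ether_header))
        continue;   /* error or frame too short to carry an Ethernet header */

    /* The buffer starts with the raw Ethernet header. */
    struct ether_header *eth = (struct ether_header *)buf;
    printf("src MAC: %02x:%02x:%02x:%02x:%02x:%02x -> "
           "dst MAC: %02x:%02x:%02x:%02x:%02x:%02x\n",
           eth->ether_shost[0], eth->ether_shost[1], eth->ether_shost[2],
           eth->ether_shost[3], eth->ether_shost[4], eth->ether_shost[5],
           eth->ether_dhost[0], eth->ether_dhost[1], eth->ether_dhost[2],
           eth->ether_dhost[3], eth->ether_dhost[4], eth->ether_dhost[5]);

    /* If the payload is IPv4, print the source and destination addresses. */
    if (ntohs(eth->ether_type) == ETHERTYPE_IP) {
        struct iphdr *ip = (struct iphdr *)(buf + sizeof(struct ether_header));
        char src[INET_ADDRSTRLEN], dst[INET_ADDRSTRLEN];
        inet_ntop(AF_INET, &ip->saddr, src, sizeof(src));
        inet_ntop(AF_INET, &ip->daddr, dst, sizeof(dst));
        printf("src IP: %s -> dst IP: %s\n", src, dst);
    }
}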

Summary

This article examined what the PF_PACKET socket is, how it works, and why the application can get raw Ethernet frames through it. Furthermore, we discussed how to bind the sniffer to one specific network interface and how to make the sniffer work in promiscuous mode. Moreover, we examined how to add filters to our sniffer. First, we analyzed why the filter should run inside kernel space instead of application space. Then, we discussed the design and implementation of the BPF machine in detail, based on the original paper, and reviewed the kernel source code to understand how the BPF virtual machine is implemented. As mentioned above, the original BPF (cBPF) has now been extended to eBPF, but an understanding of the BPF virtual machine is very helpful for eBPF as well.


I'm a technology enthusiast who appreciates open source for the deep insight it provides into how things work. https://organicprogrammer.com/