spacewander
Turning Rainbow into Bridge - How Nginx Proxies UDP "Connections"

As you know, UDP is not connection-oriented like TCP. However, there are times when we need to send multiple UDP packets to a fixed address to complete a single request. To let the server know that these packets constitute the same session, the client binds a local port before sending, so that the network stack can group the packets by their five-tuple (protocol, client IP, client port, server IP, server port). We usually call this arrangement a UDP "connection".

But this raises a new problem. Unlike TCP, which has a handshake to open a connection and a teardown to close it, a UDP connection simply means using a fixed client port. As the server, you know where a UDP connection should terminate, because you and the client have agreed on a protocol in advance. But when a proxy sits in the middle, how does it decide which UDP packets belong to which connection? After all, without a handshake and teardown as delimiters, an intermediary does not know where to put the period on a session.

We'll see how Nginx handles this problem in the following experiments.

Experiments

For the next few experiments, I'll be using a fixed client. This client establishes a UDP "connection" to the address Nginx is listening on, and then sends 100 UDP packets.

// save it as main.go, and run it like `go run main.go`
package main

import (
    "fmt"
    "net"
    "os"
)

func main() {
    conn, err := net.Dial("udp", "127.0.0.1:1994")
    if err != nil {
        fmt.Printf("Dial err %v", err)
        os.Exit(-1)
    }
    defer conn.Close()

    msg := "H"
    for i := 0; i < 100; i++ {
        if _, err = conn.Write([]byte(msg)); err != nil {
            fmt.Printf("Write err %v", err)
            os.Exit(-1)
        }
    }
}

Basic configuration

The following is the basic Nginx configuration used in the experiments. Subsequent experiments will build on this base.

In this configuration, Nginx will have four worker processes listening on port 1994 and proxying to port 1995. Error logs will be sent to stderr, and access logs will be sent to stdout.

worker_processes 4;
daemon off;
error_log /dev/stderr warn;

events { worker_connections 10240; }

stream {
    log_format basic '[$time_local] '
                 'received: $bytes_received '
                 '$session_time';

    server {
        listen 1994 udp;
        access_log /dev/stdout basic;
        preread_by_lua_block {
            ngx.log(ngx.ERR, ngx.worker.id(), " ", ngx.var.remote_port)
        }
        proxy_pass 127.0.0.1:1995;
        proxy_timeout 10s;
    }

    server {
        listen 1995 udp;
        return "data";
    }
}


The output is as follows.

2023/01/27 18:00:59 [error] 6996#6996: *2 stream [lua] preread_by_lua(nginx.conf:48):2: 1 51933 while prereading client data, udp client: 127.0.0.1, server: 0.0.0.0:1994
2023/01/27 18:00:59 [error] 6995#6995: *4 stream [lua] preread_by_lua(nginx.conf:48):2: 0 51933 while prereading client data, udp client: 127.0.0.1, server: 0.0.0.0:1994
2023/01/27 18:00:59 [error] 6997#6997: *1 stream [lua] preread_by_lua(nginx.conf:48):2: 2 51933 while prereading client data, udp client: 127.0.0.1, server: 0.0.0.0:1994
2023/01/27 18:00:59 [error] 6998#6998: *3 stream [lua] preread_by_lua(nginx.conf:48):2: 3 51933 while prereading client data, udp client: 127.0.0.1, server: 0.0.0.0:1994
[27/Jan/2023:18:01:09 +0800] received: 28 10.010
[27/Jan/2023:18:01:09 +0800] received: 27 10.010
[27/Jan/2023:18:01:09 +0800] received: 23 10.010
[27/Jan/2023:18:01:09 +0800] received: 22 10.010

As you can see, the 100 UDP packets are spread across the worker processes. Nginx does not treat 100 packets from the same address as the same session, since each worker process reads UDP data from the shared socket independently.

reuseport

To have Nginx proxy UDP connections, you need to specify reuseport when you listen:

    ...
    server {
        listen 1994 udp reuseport;
        access_log /dev/stdout basic;

Now all UDP packets will fall on the same process and be counted as one session:

2023/01/27 18:02:39 [error] 7191#7191: *1 stream [lua] preread_by_lua(nginx.conf:48):2: 3 55453 while prereading client data, udp client: 127.0.0.1, server: 0.0.0.0:1994
[27/Jan/2023:18:02:49 +0800] received: 100 10.010

When multiple processes listen on the same address with reuseport set, Linux decides which process receives a packet based on a hash of the five-tuple. This way, all packets of the same UDP connection land on one process.
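To make the hashing idea concrete, here is a toy simulation of how a reuseport-style dispatch pins a flow to one of n listening processes. The real Linux hash is internal to the kernel; the FNV hash and the `workerFor` helper below are purely illustrative stand-ins:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// workerFor simulates reuseport-style dispatch: hash the five-tuple and
// take it modulo the number of workers, so the same five-tuple always
// lands on the same worker. (Illustrative only; not the kernel's hash.)
func workerFor(proto, clientIP string, clientPort int, serverIP string, serverPort int, n int) int {
	h := fnv.New32a()
	fmt.Fprintf(h, "%s|%s|%d|%s|%d", proto, clientIP, clientPort, serverIP, serverPort)
	return int(h.Sum32() % uint32(n))
}

func main() {
	// All packets of one UDP "connection" share a five-tuple, so they all
	// map to the same worker; a different client port may pick another one.
	fmt.Println(workerFor("udp", "127.0.0.1", 55453, "127.0.0.1", 1994, 4))
	fmt.Println(workerFor("udp", "127.0.0.1", 55454, "127.0.0.1", 1994, 4))
}
```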

By the way, if you print the client address of the accepted UDP connection on the server on port 1995 (i.e., the address Nginx uses to communicate with the upstream), you will see that the address is the same for the whole session. In other words, when Nginx proxies to an upstream, it uses a single UDP connection to carry the entire session by default.

proxy_xxx directives

As the reader may have noticed, the start time of the UDP session recorded in the error log and the end time recorded in the access log are exactly 10 seconds apart. This interval corresponds to the proxy_timeout 10s; in the configuration. Since UDP connections have no teardown, Nginx by default determines when a session terminates via a per-session timeout. The default is 10 minutes; I set it to 10 seconds only out of impatience.

Besides the timeout, what other conditions does Nginx rely on to determine session termination? Please read on.

        ...
        proxy_timeout 10s;
        proxy_responses 1;

After adding proxy_responses 1, the output looks like this.

2023/01/27 18:07:35 [error] 7552#7552: *1 stream [lua] preread_by_lua(nginx.conf:48):2: 2 36308 while prereading client data, udp client: 127.0.0.1, server: 0.0.0.0:1994
[27/Jan/2023:18:07:35 +0800] received: 62 0.003
2023/01/27 18:07:35 [error] 7552#7552: *65 stream [lua] preread_by_lua(nginx.conf:48):2: 2 36308 while prereading client data, udp client: 127.0.0.1, server: 0.0.0.0:1994
[27/Jan/2023:18:07:35 +0800] received: 9 0.000
2023/01/27 18:07:35 [error] 7552#7552: *76 stream [lua] preread_by_lua(nginx.conf:48):2: 2 36308 while prereading client data, udp client: 127.0.0.1, server: 0.0.0.0:1994
[27/Jan/2023:18:07:35 +0800] received: 7 0.000
2023/01/27 18:07:35 [error] 7552#7552: *85 stream [lua] preread_by_lua(nginx.conf:48):2: 2 36308 while prereading client data, udp client: 127.0.0.1, server: 0.0.0.0:1994
[27/Jan/2023:18:07:35 +0800] received: 3 0.000
2023/01/27 18:07:35 [error] 7552#7552: *90 stream [lua] preread_by_lua(nginx.conf:48):2: 2 36308 while prereading client data, udp client: 127.0.0.1, server: 0.0.0.0:1994
[27/Jan/2023:18:07:35 +0800] received: 19 0.000

We see that Nginx no longer passively waits for the timeout, but terminates the session as soon as it receives the expected packet from the upstream. proxy_timeout and proxy_responses combine with "or" semantics: whichever condition is met first ends the session.

As opposed to proxy_responses, there is a proxy_requests.

        ...
        proxy_timeout 10s;
        proxy_responses 1;
        proxy_requests 50;

After configuring proxy_requests 50, we see that each session settles at exactly 50 UDP packets.

2023/01/27 18:08:55 [error] 7730#7730: *1 stream [lua] preread_by_lua(nginx.conf:48):2: 0 49881 while prereading client data, udp client: 127.0.0.1, server: 0.0.0.0:1994
2023/01/27 18:08:55 [error] 7730#7730: *11 stream [lua] preread_by_lua(nginx.conf:48):2: 0 49881 while prereading client data, udp client: 127.0.0.1, server: 0.0.0.0:1994
[27/Jan/2023:18:08:55 +0800] received: 50 0.002
[27/Jan/2023:18:08:55 +0800] received: 50 0.001

Note that the number of upstream UDP responses required for a session to terminate is proxy_requests * proxy_responses. In the example above, if we change proxy_responses to 2, the session takes 10 seconds to terminate: every 50 request packets now require 100 response packets before the session can close, but each request packet receives only one response, so we end up waiting for the timeout.
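The arithmetic behind that observation can be spelled out in a few lines. This is just the bookkeeping described above, not Nginx's actual counter code:

```go
package main

import "fmt"

func main() {
	// With proxy_requests 50 and proxy_responses 2, Nginx waits for
	// proxy_requests * proxy_responses upstream packets before closing.
	proxyRequests, proxyResponses := 50, 2
	needed := proxyRequests * proxyResponses

	// The `return "data"` upstream sends exactly one reply per request
	// packet, so only this many responses ever arrive:
	available := proxyRequests * 1

	// Since available < needed, the response counter never fills up and
	// the session can only end via proxy_timeout.
	fmt.Println(needed, available, available < needed)
}
```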

Dynamic Proxy

Most of the time, the number of packets in a UDP request is not fixed; we may have to determine the number of packets in a session from a length field at the beginning, or decide when to end the current session by whether a packet carries an EOF flag in its header. Nginx's proxy_* directives currently only accept fixed values and cannot be set dynamically with variables.
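As an illustration of the "length field at the beginning" idea, here is a hypothetical framing rule: the first packet of a session carries a 2-byte big-endian count of the packets that make up the session. The `packetsInSession` helper and the wire format are invented for this sketch; the point is that this per-session boundary is exactly what a fixed proxy_requests value cannot express:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// packetsInSession reads a hypothetical 2-byte big-endian length header
// from the first packet of a session, giving the session's packet count.
func packetsInSession(firstPacket []byte) (int, error) {
	if len(firstPacket) < 2 {
		return 0, fmt.Errorf("packet too short for length header")
	}
	return int(binary.BigEndian.Uint16(firstPacket[:2])), nil
}

func main() {
	// 0x0032 == 50: this session should consist of 50 packets.
	n, err := packetsInSession([]byte{0x00, 0x32, 'H'})
	if err != nil {
		panic(err)
	}
	fmt.Println(n)
}
```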

proxy_requests and proxy_responses actually just set counters on the UDP session. So in theory we could patch Nginx to expose an API for dynamically adjusting the current session's counters, enabling context-aware determination of UDP request boundaries. But is there a solution that does not require modifying Nginx?

Let's think about it another way. Could we read all the client data via Lua and forward it to the upstream through a cosocket at the Lua level? Implementing the upstream proxy in Lua is an imaginative idea, but unfortunately it doesn't work at the moment.

Instead of the previous preread_by_lua_block, use the following code.

        content_by_lua_block {
            local sock = ngx.req.socket()
            while true do
                local data, err = sock:receive()

                if not data then
                    if err and err ~= "no more data" then
                        ngx.log(ngx.ERR, err)
                    end
                    return
                end
                ngx.log(ngx.WARN, "message received: ", data)
            end
        }
        proxy_timeout 10s;
        proxy_responses 1;
        proxy_requests 50;

We will see output like this:

2023/01/27 18:17:56 [warn] 8645#8645: *1 stream [lua] content_by_lua(nginx.conf:59):12: message received: H, udp client: 127.0.0.1, server: 0.0.0.0:1994
[27/Jan/2023:18:17:56 +0800] received: 1 0.000
...

Under UDP, ngx.req.socket:receive currently only reads the first packet, so even with a while true loop we won't get all the client's packets. Moreover, since content_by_lua overrides the proxy_* directives, Nginx skips the proxy logic entirely and assumes the current session consists of a single packet. If we change content_by_lua to preread_by_lua, the proxy_* directives take effect again, but we still cannot proxy at the Lua level, because Lua never sees the rest of the client's packets.

Summary

If Nginx proxies a single-packet UDP-based protocol like DNS, then listen ... udp is sufficient. However, if you need to proxy UDP-based protocols that span multiple packets, you also need to add reuseport. In addition, Nginx does not yet support dynamically setting the size of each UDP session, so it cannot accurately delimit arbitrary UDP sessions. The features Nginx offers when proxying UDP are therefore most useful when individual sessions don't need attention, such as rate limiting.
