I was asked to debug another weird issue on our network. Apparently every now and then a connection going through CloudFlare would time out with an HTTP 522 error.
CC BY 2.0 image by Chris Combe
A 522 error on CloudFlare indicates a connection issue between our edge server and the origin server. Most often the blame is on the origin side: the origin server is slow, offline, or suffering high packet loss. Less often the problem is on our side.
In the case I was debugging it was neither. The internet connectivity between CloudFlare and the origin was perfect. No packet loss, flat latency. So why did we see a 522 error?
The root cause of this issue was pretty complex. After a lot of debugging we identified an important symptom: sometimes, once in thousands of runs, our test program failed to establish a connection between two daemons on the same machine. To be precise, an NGINX instance was trying to establish a TCP connection to our internal acceleration service on localhost. This failed with a timeout error.
Once we knew what to look for we were able to reproduce this with good old netcat. After a couple of dozen runs this is what we saw:
$ nc 127.0.0.1 5000 -v
nc: connect to 127.0.0.1 port 5000 (tcp) failed: Connection timed out
The view from strace:
socket(PF_INET, SOCK_STREAM, IPPROTO_TCP) = 3
connect(3, {sa_family=AF_INET, sin_port=htons(5000), sin_addr=inet_addr("127.0.0.1")}, 16) = -110 ETIMEDOUT
netcat calls connect() to establish a connection to localhost. This takes a long time and eventually fails with an ETIMEDOUT error. tcpdump confirms that connect() did send SYN packets over loopback but never received any SYN+ACKs:
$ sudo tcpdump -ni lo port 5000 -ttttt -S
00:00:02.405887 IP 127.0.0.12.59220 > 127.0.0.1.5000: Flags [S], seq 220451580, win 43690, options [mss 65495,sackOK,TS val 15971607 ecr 0,nop,wscale 7], length 0
00:00:03.406625 IP 127.0.0.12.59220 > 127.0.0.1.5000: Flags [S], seq 220451580, win 43690, options [mss 65495,sackOK,TS val 15971857 ecr 0,nop,wscale 7], length 0
... 5 more ...
Hold on. What just happened here?
Well, we called connect() to localhost and it timed out. The SYN packets went off over loopback but were never answered.
Loopback congestion
The first thought is about network stability. Maybe the SYN packets were lost? A little-known fact is that it's not possible to have any packet loss or congestion on the loopback interface. The loopback works magically: when an application sends packets to it, they are delivered to the appropriate target immediately, still within the send syscall handling. There is no buffering over loopback. Calling send over loopback triggers iptables and the network stack delivery mechanisms, and delivers the packet straight to the receive queue of the target application. Assuming the target application has some space in its buffers, packet loss over loopback is not possible.
Maybe the listening application misbehaved?
Under normal circumstances connections to localhost are not supposed to time out. There is one corner case when this may happen though: when the listening application does not call accept() fast enough.
When that happens, the listening socket's accept queue fills up, and the default behavior is to drop new SYN packets. The intention is to cause push-back, to slow down the rate of incoming connections. The peers will eventually re-send their SYN packets, and hopefully by then the accept queue will have drained. This behavior is controlled by the tcp_abort_on_overflow sysctl (when set, the kernel answers with a RST instead of silently dropping the SYN).
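To make this concrete, here is a minimal sketch of the overflow scenario (my illustration, not code from this incident): a listener that binds with a tiny backlog and never calls accept(). The port number and file name are made up for the demo:
// accept_overflow.go - a sketch of a listener that never accepts. With a
// backlog of 1, a couple of connections fill the accept queue and further
// SYN packets are dropped, leaving new clients hanging in connect().
package main

import (
	"log"
	"syscall"
	"time"
)

func main() {
	fd, err := syscall.Socket(syscall.AF_INET, syscall.SOCK_STREAM, syscall.IPPROTO_TCP)
	if err != nil {
		log.Fatal(err)
	}
	// Hypothetical port, chosen only for this sketch.
	addr := &syscall.SockaddrInet4{Port: 5001, Addr: [4]byte{127, 0, 0, 1}}
	if err := syscall.Bind(fd, addr); err != nil {
		log.Fatal(err)
	}
	// Tiny backlog: the kernel will queue only a couple of completed
	// connections before it starts dropping new SYNs.
	if err := syscall.Listen(fd, 1); err != nil {
		log.Fatal(err)
	}
	// Never call accept() - the accept queue stays full forever.
	time.Sleep(time.Hour)
}
Point a handful of nc 127.0.0.1 5001 clients at it: the first couple get queued, the rest hang in connect() and eventually time out - the overflow failure mode described above.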
But this accept queue overflow did not happen in our case. Our listening application had an empty accept queue. We checked this with the ss command:
$ ss -n4lt 'sport = :5000'
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 *:5000 *:*
For a listening socket the Send-Q column shows the backlog / accept queue size given to the listen() syscall - 128 in our case. The Recv-Q column reports the number of outstanding connections in the accept queue - zero.
The problem
To recap: we are establishing connections to localhost. Most of them work fine, but sometimes the connect() syscall times out. The SYN packets are sent over loopback, and because it's loopback they are delivered to the listening socket. The listening socket's accept queue is empty, yet we see no SYN+ACKs.
Further investigation revealed something peculiar. We noticed hundreds of CLOSE_WAIT sockets:
$ ss -n4t | head
State Recv-Q Send-Q Local Address:Port Peer Address:Port
CLOSE-WAIT 1 0 127.0.0.1:5000 127.0.0.1:36599
CLOSE-WAIT 1 0 127.0.0.1:5000 127.0.0.1:36467
CLOSE-WAIT 1 0 127.0.0.1:5000 127.0.0.1:36154
CLOSE-WAIT 1 0 127.0.0.1:5000 127.0.0.1:36412
CLOSE-WAIT 1 0 127.0.0.1:5000 127.0.0.1:36536
...
What is CLOSE_WAIT anyway?
CC BY 2.0 image by DaveBleasdale
Citing the Red Hat docs:
CLOSE_WAIT - Indicates that the server has received the first FIN signal from the client and the connection is in the process of being closed. This means the socket is waiting for the application to execute close(). A socket can be in CLOSE_WAIT state indefinitely until the application closes it. Faulty scenarios would be like a file descriptor leak: server not executing close() on sockets leading to pile up of CLOSE_WAIT sockets.
This makes sense. Indeed, we were able to confirm the listening application leaks sockets. Hurray, good progress!
The leaking sockets don't explain everything though.
By default a Linux process can open up to 1,024 file descriptors. If our application had run out of file descriptors, the accept() syscall would return the EMFILE error. If the application then mishandled this error, incoming SYN packets could end up being lost: failed accept() calls do not dequeue connections from the accept queue, so the queue would never drain and would eventually overflow. An overflowing accept queue results in dropped SYN packets and failing connection attempts.
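For contrast, a well-behaved accept loop keeps draining the queue and backs off when it hits the descriptor limit. A rough Go sketch of that pattern (illustrative only, not our service's code):
// accept_loop.go - a sketch of an accept loop that backs off on EMFILE/ENFILE
// instead of letting the accept queue overflow (illustrative only).
package main

import (
	"errors"
	"log"
	"net"
	"syscall"
	"time"
)

func main() {
	ln, err := net.Listen("tcp4", "127.0.0.1:5000")
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			// Out of file descriptors: sleep briefly and retry, so the
			// accept queue keeps draining once descriptors are freed.
			if errors.Is(err, syscall.EMFILE) || errors.Is(err, syscall.ENFILE) {
				time.Sleep(100 * time.Millisecond)
				continue
			}
			log.Fatal(err)
		}
		go func(c net.Conn) {
			defer c.Close() // always close - the one thing a leaky listener forgets
			// ... handle the connection ...
		}(conn)
	}
}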
But this is not what happened here. Our application hadn't run out of file descriptors. This can be verified by counting the file descriptors in the /proc/<pid>/fd directory:
$ ls /proc/`pidof listener`/fd | wc -l
517
517 file descriptors are comfortably far from the 1,024 file descriptor limit. Also, we earlier showed with ss that the accept queue is empty. So why did our connections time out?
What really happens
The root cause of the problem is definitely our application leaking sockets. The symptom though - the connections timing out - is still unexplained.
Time to raise the curtain of doubt. Here is what happens.
The listening application leaks sockets; they are stuck in the CLOSE_WAIT TCP state forever. These sockets look like (127.0.0.1:5000, 127.0.0.1:some-port). The client socket at the other end of the connection is (127.0.0.1:some-port, 127.0.0.1:5000), and is properly closed and cleaned up.
When the client application quits, the (127.0.0.1:some-port, 127.0.0.1:5000) socket enters the FIN_WAIT_1 state and then quickly transitions to FIN_WAIT_2. The FIN_WAIT_2 state should move on to TIME_WAIT once the client receives a FIN packet, but this never happens. The FIN_WAIT_2 state eventually times out. On Linux this takes 60 seconds, controlled by the net.ipv4.tcp_fin_timeout sysctl.
This is where the problem starts. The (127.0.0.1:5000, 127.0.0.1:some-port) socket is still in the CLOSE_WAIT state, while (127.0.0.1:some-port, 127.0.0.1:5000) has been cleaned up and is ready to be reused. When the kernel reuses that port pair for a new connection the result is a total mess. One end of the new connection won't be able to advance from the SYN_SENT state, while the other end is stuck in CLOSE_WAIT. The SYN_SENT socket will eventually give up, failing with ETIMEDOUT.
How to reproduce
It all starts with a listening application that leaks sockets and forgets to call close(). This kind of bug does happen in complex applications. Example buggy code is available here. When you run it nothing will happen initially; ss will show the usual listening socket:
$ go build listener.go && ./listener &
$ ss -n4tpl 'sport = :5000'
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 *:5000 *:* users:(("listener",81425,3))
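The linked listener.go is not reproduced here, but the essence of such a leaky listener is easy to sketch (my approximation; the actual example may differ): accept connections and simply never close them:
// A sketch of a deliberately leaky listener: it accepts connections and then
// forgets about them, never calling Close(). When a peer sends its FIN, the
// server-side socket gets stuck in CLOSE_WAIT forever.
package main

import (
	"log"
	"net"
)

func main() {
	ln, err := net.Listen("tcp4", ":5000")
	if err != nil {
		log.Fatal(err)
	}
	var leaked []net.Conn // keep references so the GC never closes the fds for us
	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Fatal(err)
		}
		// Bug: no conn.Close(), no reads, no writes - just leak the socket.
		leaked = append(leaked, conn)
	}
}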
Then we have a client application. The client behaves correctly - it establishes a connection, and after a while it closes it. We can demonstrate this with nc:
$ nc -4 localhost 5000 &
$ ss -n4tp '( dport = :5000 or sport = :5000 )'
State Recv-Q Send-Q Local Address:Port Peer Address:Port
ESTAB 0 0 127.0.0.1:5000 127.0.0.1:36613 users:(("listener",81425,5))
ESTAB 0 0 127.0.0.1:36613 127.0.0.1:5000 users:(("nc",81456,3))
As you can see above, ss shows two TCP sockets representing the two ends of the TCP connection. The client one is (127.0.0.1:36613, 127.0.0.1:5000), the server one is (127.0.0.1:5000, 127.0.0.1:36613).
The next step is to gracefully close the client connection:
$ kill `pidof nc`
Now the sockets enter the TCP cleanup stages: FIN_WAIT_2 for the client socket, and CLOSE_WAIT for the server one (if you want to read more about these TCP states, here's a recommended read):
$ ss -n4tp
State Recv-Q Send-Q Local Address:Port Peer Address:Port
CLOSE-WAIT 1 0 127.0.0.1:5000 127.0.0.1:36613 users:(("listener",81425,5))
FIN-WAIT-2 0 0 127.0.0.1:36613 127.0.0.1:5000
After a while FIN_WAIT_2 will expire:
$ ss -n4tp
State Recv-Q Send-Q Local Address:Port Peer Address:Port
CLOSE-WAIT 1 0 127.0.0.1:5000 127.0.0.1:36613 users:(("listener",81425,5))
But the CLOSE_WAIT socket stays around! Since the listener program still holds the leaked file descriptor, the kernel is not allowed to clean the socket up - it would only move on to LAST_ACK once the application called close(). So it is stuck in CLOSE_WAIT indefinitely. This stray CLOSE_WAIT socket would not be a problem if the same port pair were never reused. Unfortunately, eventually it is, and that causes the problem.
To see this we need to launch hundreds of nc instances and hope the kernel assigns the colliding port number to one of them. The affected nc will be stuck in connect() for a while:
$ nc -v -4 localhost 5000 -w0
...
We can use ss to confirm that the ports indeed collide:
SYN-SENT 0 1 127.0.0.1:36613 127.0.0.1:5000 users:(("nc",89908,3))
CLOSE-WAIT 1 0 127.0.0.1:5000 127.0.0.1:36613 users:(("listener",81425,5))
In our example the kernel allocated the source address (127.0.0.1:36613) to the nc process. This TCP flow is fine for a connection going towards the listener application. But the listener will not be able to allocate the flow in the reverse direction, since (127.0.0.1:5000, 127.0.0.1:36613) is still in use by the previous connection stuck in the CLOSE_WAIT state.
The kernel gets confused. The client side keeps retransmitting SYN packets, but the listening side never responds since its TCP socket is stuck in the CLOSE_WAIT state. Eventually our affected netcat dies with an unhappy ETIMEDOUT error message:
...
nc: connect to localhost port 5000 (tcp) failed: Connection timed out
If you want to reproduce this weird scenario consider running this script. It will greatly increase the probability of netcat hitting the conflicting socket:
$ for i in `seq 500`; do nc -v -4 -s 127.0.0.1 localhost 5000 -w0; done
A little-known fact is that the source port automatically assigned by the kernel is incremental - unless you select the source IP manually, in which case the source port is random. This bash script will therefore create a minefield of CLOSE_WAIT sockets randomly distributed across the ephemeral port range.
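The same trick can be sketched in Go (my illustration): explicitly binding the source IP, with port 0, before connecting goes through the kernel's bind() path, which picks random ephemeral source ports rather than incremental ones:
// dialer.go - a sketch of the nc -s trick: bind the source IP (port 0) before
// dialing so the kernel picks random ephemeral source ports, scattering the
// leaked CLOSE_WAIT sockets across the port range.
package main

import (
	"log"
	"net"
)

func main() {
	d := net.Dialer{
		LocalAddr: &net.TCPAddr{IP: net.ParseIP("127.0.0.1"), Port: 0},
	}
	for i := 0; i < 500; i++ {
		conn, err := d.Dial("tcp4", "127.0.0.1:5000")
		if err != nil {
			log.Println("dial:", err) // an ETIMEDOUT here means we hit a stray CLOSE_WAIT
			continue
		}
		// Close immediately: this side is cleaned up once FIN_WAIT_2 expires,
		// while the leaky listener's side stays in CLOSE_WAIT forever.
		conn.Close()
	}
}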
Final words
If there's a moral to this story, it's to watch out for CLOSE_WAIT sockets. Their presence indicates leaked sockets, and with leaked sockets some incoming connections may time out. The presence of many FIN_WAIT_2 sockets, on the other hand, says the problem is not on the current machine but on the remote end of the connection.
Furthermore, this bug shows that it is possible for the states of the two ends of a TCP connection to be at odds, even if the connection is over the loopback interface.
It seems that the design decisions made by the BSD Socket API have unexpected, long-lasting consequences. If you think about it: why exactly can a socket automatically expire the FIN_WAIT_2 state, but can't move off from CLOSE_WAIT after some grace time? This is very confusing... And it should be! The original TCP specification does not allow an automatic state transition out of FIN_WAIT_2! According to the spec, FIN_WAIT_2 is supposed to remain until the application on the other side cleans up.
Let me leave you with the tcp(7) manpage describing the tcp_fin_timeout setting:
tcp_fin_timeout (integer; default: 60)
This specifies how many seconds to wait for a final FIN packet
before the socket is forcibly closed. This is strictly a
violation of the TCP specification, but required to prevent
denial-of-service attacks.
I think now we understand why automatically closing FIN_WAIT_2 is strictly speaking a violation of the TCP specification.
Do you enjoy playing with low level networking bits? Are you interested in dealing with some of the largest DDoS attacks ever seen?
If so you should definitely have a look at the open positions in our London, San Francisco, Singapore, Champaign (IL) and Austin (TX) offices!