I have a client and a server running on the same Linux machine, with a TCP connection between them. I've observed that when I kill the client, the kernel/OS sends an RST packet exactly 2 seconds after the client is killed. My question is: which kernel parameter or socket option governs this timer (2 seconds)?
An RST isn't ordinarily sent between peers in a normal connection termination; a FIN is. When you kill the client, a FIN is sent on the connection to indicate to the server that the client won't be sending any more data.
But the server is apparently not paying attention to the FIN it receives when the client is killed (i.e. it would need to attempt a recv on the socket and react appropriately to the end-of-file indication it gets -- usually that means closing its own socket). Instead, the server later attempts to send data to the client over the now-closed connection, and that is what elicits the RST packet.
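A minimal sketch of that recv-based handling, assuming connfd is an already-accepted connection descriptor (the echo logic is just a placeholder for real server work):

    #include <stdio.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    void serve_connection(int connfd)
    {
        char buf[4096];

        for (;;) {
            ssize_t n = recv(connfd, buf, sizeof(buf), 0);
            if (n > 0) {
                /* normal data: echo it back (placeholder) */
                send(connfd, buf, (size_t)n, 0);
            } else if (n == 0) {
                /* peer sent FIN (e.g. client was killed): end-of-file */
                close(connfd);   /* complete our side of the termination */
                return;
            } else {
                perror("recv");
                close(connfd);
                return;
            }
        }
    }

A server written this way sees the client's death almost immediately and never provokes an RST.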
RST means (roughly) "there is no active connection available to receive the data you're sending; it's pointless to send more."
And so the timing of that RST is likely based on when the server next attempts to send to the client, not on any kernel/OS configuration setting. If the server never sends and never closes, the connection will just sit there idle indefinitely, and no RST will be sent.
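To illustrate, here's a sketch assuming connfd still refers to the connection to the killed client. The first send() into the half-closed connection typically succeeds at the API level (it is that segment which elicits the RST from the dead client's kernel); a subsequent send() then fails, because the RST has marked the connection dead:

    #include <signal.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    void poke_dead_peer(int connfd)
    {
        /* ignore SIGPIPE so send() returns an error instead of
         * killing the process */
        signal(SIGPIPE, SIG_IGN);

        /* usually succeeds: data is queued and transmitted; the
         * peer's kernel answers with an RST */
        if (send(connfd, "ping", 4, 0) < 0)
            perror("first send");

        sleep(1); /* give the RST time to come back */

        /* fails: expected errno is EPIPE ("Broken pipe") or
         * ECONNRESET, now that the RST has arrived */
        if (send(connfd, "ping", 4, 0) < 0)
            perror("second send");
    }

Watching this with tcpdump should show the RST appearing only in response to the first send, however long after the client died that happens.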
As mentioned in UNIX Network Programming, Volume 1 (section "Generic Socket Options"), if a client process is killed, TCP will send a FIN across the connection.