So TCP keepalive is different from something like nginx/apache keepalive.
TCP keepalive sends probes on an idle connection to check the other end is still alive, so a dead connection gets detected and torn down instead of sitting there forever. The other reason to tune it: the general rule of thumb is you want frequent TCP keepalives when you're behind a NAT device, so it doesn't lose the mapping from client to the NATed server behind it — the probes count as traffic and keep the mapping fresh. We run ad servers that handle somewhere around 40 million connections per day per server, and our keepalive settings look like:
"net.ipv4.tcp_keepalive_intvl" => 2,
"net.ipv4.tcp_keepalive_probes" => 3,
"net.ipv4.tcp_keepalive_time" => 5,
I still feel 5 seconds for keepalive time is too high, given the nature of our business: if we don't return an ad in 50ms, the client times out. So I'll probably drop that to 1. I've just been lowering the value slowly so I don't cause any major issues. I wouldn't recommend the same values, since every use case is different.
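Those three sysctls can also be set per-socket instead of system-wide. A minimal sketch, assuming Linux (the `TCP_KEEPIDLE`/`TCP_KEEPINTVL`/`TCP_KEEPCNT` options are Linux-specific), with the same values as above; `enable_keepalive` and `detection_time` are illustrative names, not a real API:

```python
import socket

def enable_keepalive(sock, idle=5, interval=2, probes=3):
    """Turn on TCP keepalive for one socket (Linux-specific options).

    idle     -> tcp_keepalive_time:  seconds of idle before the first probe
    interval -> tcp_keepalive_intvl: seconds between unanswered probes
    probes   -> tcp_keepalive_probes: unanswered probes before giving up
    """
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, probes)

def detection_time(idle=5, interval=2, probes=3):
    """Worst-case seconds to declare a silent peer dead:
    idle time before probing starts, plus one interval per probe."""
    return idle + interval * probes
```

With the values above a dead peer is detected in at most 5 + 2 * 3 = 11 seconds, which gives a feel for why dropping `tcp_keepalive_time` to 1 tightens things further.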
So as I said, it's very different from nginx/apache keepalive. That's persistent connections: the client connects once and reuses the same connection for later requests, which cuts the latency between client and host since there's no new TCP handshake per request.
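A self-contained sketch of that HTTP-style keepalive, using only the Python standard library. The local throwaway server here is purely hypothetical, just to make the example runnable; the point is that three requests go over one TCP connection:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"   # HTTP/1.1 keeps connections open by default

    def do_GET(self):
        body = b"ad"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))  # lets the client reuse the connection
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):  # silence per-request logging
        pass

# Hypothetical local server on an ephemeral port, just for the demo.
server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
statuses, sockets = [], []
for _ in range(3):                  # three requests over one connection
    conn.request("GET", "/")
    resp = conn.getresponse()
    resp.read()                     # drain the body so the connection can be reused
    statuses.append(resp.status)
    sockets.append(conn.sock)       # same socket object each time => connection reused

conn.close()
server.shutdown()
```

If the server closed the connection after each response (HTTP/1.0 behavior, or `Connection: close`), the client would pay a fresh TCP handshake per request — which is exactly the latency this kind of keepalive avoids.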
Chances are, if you aren't running out of TCP ports, changing your TCP keepalive won't change anything you're seeing with timeouts.