Hey ServerFault friends,
I'm trying to set up Squid as a transparent proxy for all egress traffic, as a proof of concept in my environment.
I've got one question, and one problem (that I know of!).
The question:
I've got iptables set up to redirect ports 80 and 443 to 3129 and 3130 respectively, and http_access works when I don't specify transparent or intercept on the port directives, which I thought you needed to do?
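(For reference, this is what I understand the intercepted port lines are supposed to look like when iptables is doing the NAT redirect; the cert paths are mine, and whether the intercept flag is actually required here is exactly what I'm unsure about:
http_port 3128
http_port 3129 intercept
https_port 3130 intercept ssl-bump cert=/etc/squid/ssl/squid.crt key=/etc/squid/ssl/squid.key
)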
The problem:
Dealing with the HTTPS intercepts. To my understanding, the best bet is to have the Squid proxy act as a man-in-the-middle: terminate the connection, inspect the request, and generate a new request to the actual destination server, which I believe I've done below?
The squid configuration:
visible_hostname squid.my.test.tech
#cache deny all
# Log format and rotation
logformat squid %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %ssl::>sni %Sh/%<a %mt
logfile_rotate 10
debug_options rotate=10 ALL,1 11,2
sslproxy_cert_error allow all
# Handling HTTP requests
http_port 3128
http_port 3129
acl allowed_http_sites dstdomain "/etc/squid/whitelist.txt"
acl blocked_http_sites dstdomain "/etc/squid/blacklist.txt"
http_access allow allowed_http_sites
http_access deny blocked_http_sites
# Handling HTTPS requests
sslcrtd_program /usr/local/squid/libexec/security_file_certgen -s /var/lib/ssl_db -M 4MB
https_port 3130 transparent ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=16MB cert=/etc/squid/ssl/squid.crt key=/etc/squid/ssl/squid.key
acl allowed_https_sites ssl::server_name "/etc/squid/whitelist.txt"
http_access allow allowed_http_sites
http_access deny blocked_http_sites
always_direct allow all
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump all
#Drop anything not explicitly permitted
http_access deny all
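(Before starting Squid I also created the certificate database that sslcrtd_program points at; I believe this is the right incantation for this helper, though the squid user/group is an assumption about what the effective user resolves to on this build:
/usr/local/squid/libexec/security_file_certgen -c -s /var/lib/ssl_db -M 4MB
chown -R squid:squid /var/lib/ssl_db
)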
In the whitelist are just .google.com and .cern.ch, and in the blacklist is .foxnews.com.
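For clarity, both are plain one-pattern-per-line dstdomain files; whitelist.txt is:
.google.com
.cern.ch
and blacklist.txt is:
.foxnews.com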
iptables is handling the prerouting:
-A PREROUTING -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 3129
-A PREROUTING -p tcp -m tcp --dport 443 -j REDIRECT --to-ports 3130
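(Those lines are from the nat table; entered by hand on the proxy box they'd be roughly:
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 3129
iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports 3130
which assumes the workstation's traffic is actually routed through this box so it hits the PREROUTING chain.)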
I've generated a self-signed cert and stored it in the directory listed above (/etc/squid/ssl), copied it to /etc/pki/ca-trust/source/anchors, and ran sudo update-ca-trust extract to update the CA list for the local host. Then I copied the cert down to my workstation and added it as a trusted authority in Firefox (it's a proof of concept; when I go bigger picture we'll look at proper certificate-authority signing, no worries!).
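(The self-signed cert was generated roughly like this; the key size, lifetime, and subject are arbitrary for the PoC:
openssl req -new -newkey rsa:2048 -sha256 -days 365 -nodes -x509 \
  -keyout /etc/squid/ssl/squid.key -out /etc/squid/ssl/squid.crt \
  -subj "/CN=squid.my.test.tech"
)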
Now I'm hitting the issue where, testing from my local instance of Firefox, HTTP requests work just fine and show up in the logs:
1582926343.272 415 172.16.0.199 TCP_MISS/304 264 GET http://info.cern.ch/ - HIER_DIRECT/188.184.64.53 -
(I'll skip the log for http hits in the cache)
But HTTPS requests show up as TCP_DENIED/200:
1582926470.071 1 172.16.0.199 TCP_DENIED/200 0 CONNECT 10.161.128.139:443 - HIER_NONE/- -
With the query looking like it's being denied at Squid itself:
----------
2020/02/28 21:47:51.554 kid1| 11,2| client_side.cc(2347) parseHttpRequest: HTTP Client local=10.161.128.139:443 remote=172.16.0.199:49219 FD 12 flags=33
2020/02/28 21:47:51.554 kid1| 11,2| client_side.cc(2348) parseHttpRequest: HTTP Client REQUEST:
---------
CONNECT 10.161.128.139:443 HTTP/1.1
Host: 10.161.128.139:443
----------
I've modified the squid.conf file extensively to play around with the various options, but it looks like the request is just flat-out dying at the Squid box. I don't have a great deal of experience with Squid, and after the hundred or so pages I've dug up on how to configure it, I feel like I'm flailing about with guess-and-check, so any guidance would be appreciated.
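If it would help diagnose where the CONNECT is being denied, I can validate the config and crank up the access-control debug section (28, if I have the debug section numbers right) and re-test, along the lines of:
squid -k parse
# and in squid.conf:
# debug_options rotate=10 ALL,1 11,2 28,3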