I am deploying Micro Focus Fortify SSC in my Kubernetes cluster via a Helm chart. The chart I am using can be found here: https://repo1.dso.mil/big-bang/apps/third-party/fortify/-/tree/0.0.9-bb.2/. My Kubernetes distribution is RKE2 v1.23.5+rke2r1, which uses containerd as its CRI. The chart deploys two StatefulSets in the fortify namespace: fortify-fortify-ssc-webapp and fortify-mysql, its database. When I deploy it without the istio-injection: enabled label, using its default suggested configmap.yaml, the pods come up fine and the webapp pod is able to connect to its MySQL DB pod. However, as soon as I enable Istio sidecars in the fortify namespace by adding the istio-injection: enabled label, the fortify-fortify-ssc-webapp-0 pod gets stuck in the init phase and can't get past its init containers.
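For reference, the only difference between the working and the broken deployment is that namespace label, applied with something like:

kubectl label namespace fortify istio-injection=enabled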
When describing the fortify-fortify-ssc-webapp-0 pod, the last events look like this:
Normal Scheduled 4m45s default-scheduler Successfully assigned fortify/fortify-fortify-ssc-webapp-0 to <IP address>
Normal SuccessfulAttachVolume 4m43s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-ID"
Normal Pulled 4m39s kubelet Container image "registry1.dso.mil/ironbank/bitnami/mysql8:8.0.29-debian-10-r37" already present on machine
Normal Created 4m38s kubelet Created container seed-data-loader
Normal Started 4m38s kubelet Started container seed-data-loader
Normal Killing 10s kubelet Stopping container seed-data-loader
The events stop there. It seems like it's not able to get past the seed-data-loader container. The seed-data-loader init container runs the following:
/bin/bash
-ecv
while ! mysqladmin ping -h $DBHOST --silent; do
  sleep 10
done
mysql -h $DBHOST -u $DBROOTUSER -p$DBROOTPASSWORD -e "CREATE DATABASE IF NOT EXISTS fortify CHARACTER SET utf8 COLLATE utf8_bin"
The logs of the seed-data-loader init container give me this error:
seed-data-loader mysqladmin: connect to server at 'fortify-mysql' failed
seed-data-loader error: 'Can't connect to MySQL server on 'fortify-mysql:3306' (111)'
seed-data-loader Check that mysqld is running on fortify-mysql and that the port is 3306.
seed-data-loader You can check this by doing 'telnet fortify-mysql 3306'
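Error (111) is a plain TCP "connection refused". As a sanity check (assuming kubectl access and that mysql is the default container in the pod), the same ping can be run from inside the MySQL pod itself, first locally and then through the fortify-mysql service name, to separate "mysqld isn't up" from "the path through the mesh is blocked":

kubectl -n fortify exec fortify-mysql-0 -- mysqladmin ping -h 127.0.0.1
kubectl -n fortify exec fortify-mysql-0 -- mysqladmin ping -h fortify-mysql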
My configmap.yaml comes from the default fortify/chart/values.yaml in the Helm chart and looks like this:
# configurable parameters
storage:
  volume: 5Gi
databaseSecret:
  name: db-credentials-mysql
  useRoot: true # Use root credentials to create database if required
istio:
  enabled: true
  gateways:
    - istio-system/main
key_store_password: dsoppassword
key_store_cert_password: dsoppassword
fortify_autoconfig: |
  appProperties:
    host.validation: false
    searchIndex.location: '/fortify/ssc/index'
  datasourceProperties:
    jdbc.url: "jdbc:mysql://fortify-mysql:3306/fortify?useSSL=false&connectionCollation=<collation>&rewriteBatchedStatements=true&max_allowed_packet=1073741824&sql_mode=TRADITIONAL"
    db.driver.class: com.mysql.cj.jdbc.Driver
    db.username: root
    db.password: root
    db.dialect: com.fortify.manager.util.hibernate.MySQLDialect
    db.like.specialCharacters: '%_\\'
  dbMigrationProperties:
    migration.enabled: true
fortify_license: |
  <License>
podAnnotations:
  traffic.sidecar.istio.io/excludeOutboundPorts: "5701,8888,8081,3306,5432"
  traffic.sidecar.istio.io/excludeInboundPorts: "5701,8888,8081,8091,3306,5432"
Things that I've tried to solve this issue:
Added a NetworkPolicy to try to allow traffic between the pods over port 3306:
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-mysql
  namespace: fortify
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: fortify
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: mysql
      ports:
        - protocol: TCP
          port: 3306
  egress:
    - to:
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: mysql
      ports:
        - protocol: TCP
          port: 3306
Added podAnnotations to the configmap.yaml to try to exclude those ports from sidecar interception:
podAnnotations:
  traffic.sidecar.istio.io/excludeOutboundPorts: "5701,8888,8081,3306,5432"
  traffic.sidecar.istio.io/excludeInboundPorts: "5701,8888,8081,8091,3306,5432"
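To confirm these annotations actually end up on the pod (and not just in values.yaml), they can be read back from the running pod with something like:

kubectl -n fortify get pod fortify-fortify-ssc-webapp-0 -o jsonpath='{.metadata.annotations}'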
Applied a sidecar.yaml in an attempt to allow the traffic:
---
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: default
  namespace: fortify
spec:
  outboundTrafficPolicy:
    mode: ALLOW_ANY
---
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: default
  namespace: istio-system
spec:
  outboundTrafficPolicy:
    mode: ALLOW_ANY
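To verify the Sidecar resources were actually created, they can be listed with:

kubectl get sidecars.networking.istio.io -A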
Created a Service in an attempt to fix it:
apiVersion: v1
kind: Service
metadata:
  name: mysql-test
  namespace: fortify
spec:
  ports:
    - port: 3306
      protocol: TCP
      targetPort: 3306
      name: http
  selector:
    app.kubernetes.io/name: fortify
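With or without that extra Service, the endpoints behind the MySQL-facing services can be checked with something like this (fortify-mysql being the service name that shows up in the error above):

kubectl -n fortify get svc
kubectl -n fortify get endpoints fortify-mysql mysql-test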
Created a DestinationRule in an attempt to fix it:
apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
name: "istio-client-mtls"
namespace: "fortify"
spec:
host: "*.svc.cluster.local"
trafficPolicy:
tls:
mode: ISTIO_MUTUAL
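Depending on the istioctl version available, the mTLS/DestinationRule configuration that actually applies to the MySQL pod can also be inspected once its proxy is running, e.g.:

istioctl experimental describe pod fortify-mysql-0 -n fortify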
Any help would be appreciated as I've been stuck on this error for some time now.