We have run into a memory problem with Steeltoe. Previously the service was not under load and the issue did not show up. It now handles 270-300 requests per second, and within about 3 hours memory is exhausted. We run 3 replicas, each with 4 GB of RAM. A memory dump showed the heap filled with strings; a more detailed breakdown showed they came from metrics collection. We have enabled only 2 actuators, Info and Health, yet all the other endpoints are still available by default.

Configuration:
```yaml
eureka:
  instance:
    StatusPageUrlPath: "/actuator/info"
    HealthCheckUrlPath: "/actuator/health"
endpoints:
  actuator:
    exposure:
      include: [info, health]
      exclude: [cloudfoundry, dbmigrations, env, heapdump, httptrace, hypermedia, loggers, mappings, prometheus, refresh, threaddump, metrics]
  cloudfoundry:
    enabled: false
  dbmigrations:
    enabled: false
  env:
    enabled: false
  heapdump:
    enabled: false
  httptrace:
    enabled: false
  hypermedia:
    enabled: false
  loggers:
    enabled: false
  mappings:
    enabled: false
  metrics:
    enabled: false
  prometheus:
    enabled: false
  refresh:
    enabled: false
  threaddump:
    enabled: false
```
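For reference, Steeltoe normally reads actuator settings from the `management:endpoints` configuration prefix, so if the fragment above is the literal top level of the file, these settings may not be applied at all. A sketch of the same intent under the documented prefix (assuming Steeltoe 2.x/3.x key names; the Eureka keys are unchanged):

```yaml
eureka:
  instance:
    StatusPageUrlPath: "/actuator/info"
    HealthCheckUrlPath: "/actuator/health"
management:
  endpoints:
    actuator:
      exposure:
        include: [info, health]
    metrics:
      enabled: false
    prometheus:
      enabled: false
```

If the root prefix was simply omitted when pasting, this sketch changes nothing and the question below still stands.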
I should note right away that everything shown in the config above is already the result of experiments. With these settings, endpoints such as "httptrace" are no longer reachable, whereas before they were excluded they were available and their data could be viewed directly in the browser. However, this did not solve the memory problem. While studying the Steeltoe sources, we found that the accumulated metrics do get cleared, but only when the "metrics" or "prometheus" endpoint is enabled.

Why are metrics still collected and kept in memory when I expose only these 2 actuators? If I don't need this data, how can I adjust the configuration so that it is not stored at all?