
We are using a K8s development tool called Tilt (https://tilt.dev/), which builds dev images for a stack with a ton of microservices, allows live reloading, etc.

I have deployed remote builder pods to an AWS K8s cluster using buildx create with 10 replicas. However, a build appears to use only two of these pods (based on the CPU and memory metrics I see in the cluster) while the other 8 stand idle. Running tilt up kicks off concurrent image builds for all microservices, so ideally the build steps would be spread across all 10 running builder pods.
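
For reference, the builders were created with something like this (builder name and namespace are placeholders):

# one builder backed by 10 pods in the cluster
docker buildx create --name remote-builder --driver kubernetes \
  --driver-opt replicas=10,namespace=docker-builder --use
docker buildx inspect --bootstrap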

I suspect this is because our project currently lives in a monorepo and uses the same Dockerfile.dev at the root of the project to build all images (minor config is passed in at build time via --build-arg).

  1. Am I correct in suspecting this? It's hard to tell, but it seems like buildx load-balances based on the Dockerfile context. If so, can this behavior be overridden? (See the note after this list.)

  2. Alternatively, is there a way to manually select a buildx node? It would be easy enough to script a selector that loops across the existing remote nodes to spread out the build load.
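
Note on question 1: the buildx kubernetes driver documents a loadbalance driver-opt; the default sticky strategy picks a pod by hashing the build context path, while random spreads requests across the replicas. A sketch, with builder name and namespace as placeholders:

# sticky (default) chooses a pod by hashing the context path;
# random distributes build requests across all replicas
docker buildx create --name remote-builder --driver kubernetes \
  --driver-opt replicas=10,namespace=docker-builder,loadbalance=random --use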

Zfalen

1 Answer


FWIW - I found a viable workaround by doing something like this:

#!/usr/bin/env bash
for dir in myMonorepoServices/*; do
  # get the name of the service folder
  SERVICE=$(basename "$dir")

  echo "   Creating a remote builder for $SERVICE...   "
  # one single-replica builder per service, all in the docker-builder namespace
  docker buildx create --name "$SERVICE-docker-builder" --driver kubernetes --driver-opt replicas=1,namespace=docker-builder --use
  docker buildx inspect --bootstrap

done

This creates an individually named builder pod on the remote cluster for each service, which I can then target directly using the --builder flag, like so:

docker buildx build --builder=myServiceName-docker-builder

Not really "load balancing" per se, but this does ensure that each service gets built on its own dedicated pod.
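
As a sketch of how the selector loop from the question could drive these builders concurrently (Dockerfile.dev at the repo root and the SERVICE build-arg are assumptions about the monorepo layout):

#!/usr/bin/env bash
# fan the builds out in parallel, one dedicated builder per service
for dir in myMonorepoServices/*; do
  SERVICE=$(basename "$dir")
  docker buildx build \
    --builder "$SERVICE-docker-builder" \
    --file Dockerfile.dev \
    --build-arg SERVICE="$SERVICE" \
    --tag "$SERVICE:dev" \
    --load \
    . &   # repo root as build context; & runs the builds concurrently
done
wait    # block until every background build finishes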

Zfalen