
Let's say we have a services.proto with our gRPC service definitions, for example:

service Foo {
  rpc Bar (BarRequest) returns (BarReply) {}
}

message BarRequest {
  string test = 1;
}

message BarReply {
  string test = 1;
}

We could compile this locally to Go by running something like

$ protoc --go_out=. --go_opt=paths=source_relative \
    --go-grpc_out=. --go-grpc_opt=paths=source_relative \
    services.proto

My concern, though, is that running this last step might produce inconsistent output depending on the installed versions of the protobuf compiler and the Go gRPC plugins. For example, two developers working on the same project might have slightly different versions installed locally.

It would seem reasonable to me to address this by containerizing the protoc step. For example, with a Dockerfile like this...

FROM golang:1.18
WORKDIR /src
RUN apt-get update && apt-get install -y protobuf-compiler
RUN go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.26
RUN go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@v1.1
CMD protoc --go_out=. --go_opt=paths=source_relative --go-grpc_out=. --go-grpc_opt=paths=source_relative services.proto

... we can run the protoc step inside a container:

docker run --rm -v "$(pwd)":/src $(docker build -q .)

After wrapping the previous command in a shell script, developers can run it on their local machine, giving them deterministic, reproducible output. It can also run in a CI/CD pipeline.
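Such a wrapper might look like the following sketch. The image tag protoc-pinned and the --run flag are my own inventions; by default the script only prints the commands so they can be reviewed (or run in a dry-run mode) on a machine without Docker.

```shell
#!/bin/sh
# protogen.sh -- hypothetical wrapper around the containerized protoc step.
# The image tag "protoc-pinned" is an assumption; rename it as you like.
set -eu

IMAGE="protoc-pinned"
BUILD_CMD="docker build -q -t $IMAGE ."
GEN_CMD="docker run --rm -v $PWD:/src $IMAGE"

# Without --run the script only prints what it would do, so the
# commands can be inspected before anything is built or mounted.
if [ "${1:-}" = "--run" ]; then
    $BUILD_CMD
    $GEN_CMD
else
    echo "$BUILD_CMD"
    echo "$GEN_CMD"
fi
```

Running ./protogen.sh prints the two docker commands; ./protogen.sh --run executes them.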

My question is, is this a sound approach and/or is there an easier way to achieve the same outcome?

NB, I was surprised to find that the official grpc/go image does not come with protoc preinstalled. Am I off the beaten path here?

Max
    I might use a setup like this in an environment like Jenkins, where a bespoke Docker image is often the easiest way to get custom software installed and where Jenkins can correctly manage the `docker run -v` bind-mount options. For day-to-day use, though, I feel like this gets complicated to build and run fairly quickly. – David Maze Apr 04 '22 at 19:02
    This is a fairly sound approach, and as you mentioned, ensures all engineers are compiling with the same versions of protoc plugins. I have used this technique a few times at various shops. – Noah Stride Apr 05 '22 at 12:50

1 Answer


My question is, is this a sound approach and/or is there an easier way to achieve the same outcome?

It is definitely a good approach; I do the same, not only to get consistent output across the team, but also to ensure we can produce the same output on different operating systems.

There is an easier way to do that, though. Look at this repo: https://github.com/jaegertracing/docker-protobuf

The image is on Docker Hub, but you can build your own image if you prefer.

I use this command to generate the Go code:

docker run --rm -u $(id -u) \
    -v ${PWD}/protos/:/source \
    -v ${PWD}/v1:/output \
    -w /source jaegertracing/protobuf:0.3.1 \
      --proto_path=/source \
      --go_out=paths=source_relative,plugins=grpc:/output \
      -I/usr/include/google/protobuf \
      /source/*
rubens21