I am planning to use gRPC to build my search API, but I am wondering how the gRPC service definition files (e.g. .proto) are synced between the server and the clients (assuming they all use different technologies).

Also, if the server changes one of the .proto files, how will the clients be notified to regenerate their stubs in accordance with those changes?

To summarize: how do I share the definitions (.proto) with clients, and how are clients notified when those files change?

adnanmuttaleb

1 Answer


Simple: they aren't. All synchronization here is manual, and it usually requires a rebuild and redeploy after you've become aware of a change and have updated your .proto files.

Without updating, the fields and methods that you know about should at least keep working. You just won't have the new bits.

Note also: while you can extend schemas by adding new fields and services/methods, if you change the meaning of a field, the field type, or the message types on a service, expect things to go very badly wrong.
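
In .proto terms, the difference looks roughly like this (a sketch with made-up service and field names, not taken from the question):

    syntax = "proto3";

    package search;

    service SearchService {
      // Unchanged from v1 - existing clients keep calling this.
      rpc Search (SearchRequest) returns (SearchResponse);

      // Safe addition: a brand-new method; old clients simply never call it.
      rpc Suggest (SearchRequest) returns (SearchResponse);
    }

    message SearchRequest {
      string query = 1;    // unchanged from v1

      // Safe addition: a new field under a previously unused field number.
      // Old clients ignore it; new clients can start populating it.
      int32 page_size = 2;

      // Breaking (don't do this): reusing an existing number with a new type
      // or meaning, e.g. turning `string query = 1` into `int64 query = 1`,
      // or swapping the request/response message of an existing rpc.
    }

    message SearchResponse {
      repeated string results = 1;
    }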

Marc Gravell
  • So it is a completely manual job, and can be managed only by a disciplined workflow? – adnanmuttaleb Jan 08 '20 at 08:38
  • @adnanmuttaleb yes; keep in mind that on a lot of platforms, the tooling works as a compile-time step that generates code in the target language, and as you say: "assuming all use different technologies"; updating your Java server isn't going to do *anything* for your C++ client - the C++ client will need to be rebuilt with an updated .proto to generate the extra C++ code needed to handle it; some libraries use runtime meta-programming so could in theory adapt to .proto changes, but that still wouldn't make your code *do anything* with the additional fields, or use the new service methods – Marc Gravell Jan 08 '20 at 10:34
  • This is sad :( So it is not so flexible after all. All clients must be managed by the same process which manages the server. It is not really a distribution... – Gino Pane Aug 26 '21 at 09:40
  • @GinoPane that's .... frankly a really bad take; almost all APIs work this way - xml, json, etc; and that is intentional, because your code **needs to be updated** to make proper use of new data that didn't exist before; now, if you *really* want, most implementations allow you to receive and store unexpected fields, but accessing them is inherently less direct - obviously your code isn't going to just access `obj.Foo` if the code didn't know about `.Foo` when it was built. But: I'm genuinely curious about how you would, in a perfect world, want this to work. Serious question, not rhetorical. – Marc Gravell Aug 26 '21 at 09:46
  • Hm, on second thought it doesn't look like an issue... If the API has breaking changes, you won't be able to use it anyway unless the client is properly updated. – Gino Pane Aug 26 '21 at 13:00
  • @GinoPane well, that's a good reason not to make breaking changes :) – Marc Gravell Aug 26 '21 at 16:00
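
Following up on that last comment: the usual protobuf way to retire a field without a breaking change is to mark its number and name as reserved instead of reusing or retyping them (again a hypothetical sketch):

    message SearchRequest {
      // Field 1 used to be `string query`; rather than changing its type,
      // retire the number and name so they can never be reused by mistake.
      reserved 1;
      reserved "query";

      int32 page_size = 2;      // unchanged
      string query_text = 3;    // replacement field under a new number
    }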