We have a client/server application where newer clients must support older servers. Both sides share a set of icons.
We're representing the icons as an enum with implicit values, compiled into both the server and client builds:
enum icons_t { // rev 1.0
ICON_A, // 0
ICON_B, // 1
ICON_C // 2
};
Sometimes we retire icons (either they weren't being used, or they were internal and never listed in our API), which led to the following code being committed:
enum icons_t { // rev 2.0
ICON_B, // 0
ICON_C // 1 (now if a rev 1.0 server uses ICON_B, it will get ICON_C instead)
};
I've changed our enum to the following to try to work around this:
// Big scary header about commenting out old icons
enum icons_t { // rev 2.1
// Removed: ICON_A = 0,
ICON_B = 1,
ICON_C = 2
};
Now my worry is a bad merge when multiple people add new icons:
// Big scary header about commenting out old icons
enum icons_t { // rev 3.0
// Removed: ICON_A = 0,
ICON_B = 1,
ICON_C = 2,
ICON_D = 3,
ICON_E = 3 // Bad merge leaves two icons with the same value
};
Since it's a plain enum, we don't have a way to assert that the values are unique.
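For context, the closest we've gotten is a sketch like the following (assuming C++14; the `ICON_LIST` X-macro and helper names are ours, not from any library). It keeps the name/value pairs in one list and rejects duplicate values at compile time:

```cpp
#include <cstddef>

// One place that lists every live icon and its frozen wire value.
#define ICON_LIST(X) \
    X(ICON_B, 1)     \
    X(ICON_C, 2)     \
    X(ICON_D, 3)

enum icons_t {
#define X(name, value) name = value,
    ICON_LIST(X)
#undef X
};

// Expand the values into an array so they can be checked.
constexpr int icon_values[] = {
#define X(name, value) value,
    ICON_LIST(X)
#undef X
};

// constexpr pairwise uniqueness check (fine for a few hundred icons).
constexpr bool all_unique(const int* v, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = i + 1; j < n; ++j)
            if (v[i] == v[j]) return false;
    return true;
}

static_assert(all_unique(icon_values,
                         sizeof(icon_values) / sizeof(icon_values[0])),
              "duplicate icon value -- check for a bad merge");
```

A bad merge that produces two icons with the same value now fails to compile rather than silently shipping, but it still relies on everyone editing only the `ICON_LIST` macro.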
Is there a better data structure for managing this data, or a design change that wouldn't be open to mistakes like this? My thoughts have been leaning toward a tool that analyzes pull requests and blocks merges when this issue is detected.