Assuming I have a "build server" and a "dev machine" that both have the following characteristics:

  • C# compiler
  • Identical source code
  • Identical system environment variables
  • Identical compiler & compiler flags
  • Identical architecture on the build server and dev machine

I also understand the following about the C# compiler from Eric Lippert's blog (I am using .NET, but this question could be asked of any static language build process):

  • The C# compiler by design never produces the same binary twice
  • At a fundamental level, the job that the C# compiler does for you is it takes in C# code, analyzes it for correctness, and produces an equivalent program written in the PE format
  • Compiler optimizations and multi-threaded compilers are non-deterministic
  • The JIT will change the code at runtime

Knowing all this:

If I build the source on the build server and get a set of DLLs and executables that I call the server binaries:

Is it not redundant to rebuild the binaries on my dev machine, i.e. build local binaries?

Aren't the server binaries and local binaries valid substitutes for one another?

If so, can I just copy all the server binaries, put them where the local binaries would have gone, and treat them as though I built them myself?

Obviously this might seem unnecessary in the case of just one dev machine, but if I have n dev machines, I only have to build once, not n times.

Are there obvious drawbacks to my logic? Is this common practice for .NET shops?

samirahmed

1 Answer

Yes, in principle it's redundant to rebuild them on your machines. But your question is not specific enough: there are multiple possible scenarios, and they yield different answers.

For instance, if you have basic utility libraries which do not change often and are kept in a separate solution, these are the obvious candidates for having the server build them and then consuming them as prebuilt binaries. However, keep in mind that simply copying the binaries over can be tedious and error-prone: you have to do it every time the code changes, you have to write a script for it (see the sketch below), and you have to copy the .pdb files as well and have the same source code on your dev machine if you want debugging capabilities. In such scenarios it might be handy to have NuGet manage this for you.

Another case like this: if the entire project is very large but modular enough that single developers, or groups of them, can work on separate parts of the project without needing the full source of the rest, they gain time by not having to build the parts they don't deal with and can just pull a working version from the server.
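To make the "write a script for it" point concrete, here is a minimal sketch of such a copy step written as a small C# console utility. The drop-folder and local paths are hypothetical placeholders, not anything your build server actually exposes; the point is simply that assemblies and their matching .pdb files need to travel together.

```csharp
// Minimal sketch: copy server-built binaries (and their PDBs) into a local folder.
// Both paths below are assumptions for illustration; adjust to your own layout.
using System;
using System.IO;

class CopyServerBinaries
{
    static void Main()
    {
        string dropFolder  = @"\\buildserver\drops\latest";   // hypothetical network share
        string localLibDir = @"C:\src\MyProduct\lib";          // hypothetical local target

        Directory.CreateDirectory(localLibDir);

        // Copy assemblies and their matching symbol files so the debugger
        // can still map the server-built binaries back to your local source.
        foreach (string pattern in new[] { "*.dll", "*.exe", "*.pdb" })
        {
            foreach (string file in Directory.GetFiles(dropFolder, pattern))
            {
                string target = Path.Combine(localLibDir, Path.GetFileName(file));
                File.Copy(file, target, overwrite: true);
                Console.WriteLine($"Copied {Path.GetFileName(file)}");
            }
        }
    }
}
```

In practice you would run something like this (or the equivalent NuGet restore) whenever the server produces a new drop, rather than rebuilding those libraries locally.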

Another typical scenario: you have a solution containing a bunch of projects that logically belong together, like the main UI project, the underlying projects with application logic, the plugin projects, etc. While in principle it is again possible to have the server build some of these for you, it makes little sense and is quite impractical (and note that some of the disadvantages here might also apply to the case above, since there is not always a clear line between a simple utility library and something a lot of projects heavily depend on): after modifying the source, you'd have to push the changes to the server, then wait for it to build, then copy over the result. This is a waste of time, especially since your server probably takes a long time to produce a build result, as it builds and tests everything in all possible configurations. Furthermore, by default VS will overwrite the binaries you just copied, so you'd have to unload the project or figure out some other way to make it not do that.

stijn