1

I have seen some really slow build times in a big legacy codebase without proper assembly decomposition, running on a 2 GB RAM machine. So, if I wanted to speed it up without a code overhaul, would a 16 GB (or some other such huge number) RAM machine be radically faster, if the fairy IT department were to provide one? In other words, is RAM the major bottleneck for sufficiently large .NET projects, or are there other dominant issues?

Any input about a similar situation for building Java is also appreciated, just out of pure curiosity.

EndangeringSpecies
  • 1,564
  • 1
  • 17
  • 39
    It may not be RAM, it might be the CPU. Or I/O bound. The only way to know for sure is to figure out which resource(s) is/are being exhausted. – vcsjones May 09 '12 at 21:10
  • Very similar to this question (which has some good answers): http://stackoverflow.com/questions/867741/ssd-drives-and-visual-studio-ide-big-improvements-real-usage-stories-no-theo – Paddy May 09 '12 at 21:57

3 Answers

2

Performance does not improve with additional RAM once you have more RAM than the application uses. You are unlikely to see any further improvement even by going to 128 GB of RAM.

We cannot guess the amount needed; measure it by watching Task Manager during a build.
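
For something more repeatable than eyeballing Task Manager, a minimal sketch along these lines works (this is only an illustration: msbuild being on the PATH and MySolution.sln are assumptions/placeholders). It times a rebuild and records the lowest free system memory observed while the build runs:

    // Minimal sketch: time a full rebuild and record the lowest amount of free
    // system memory observed while it runs. "MySolution.sln" is a placeholder
    // and msbuild is assumed to be on the PATH.
    using System;
    using System.Diagnostics;
    using System.Threading;

    class BuildMemoryProbe
    {
        static void Main()
        {
            var freeMemory = new PerformanceCounter("Memory", "Available MBytes");
            float lowestFree = float.MaxValue;

            var timer = Stopwatch.StartNew();
            var build = Process.Start("msbuild", "MySolution.sln /t:Rebuild");

            while (!build.HasExited)
            {
                lowestFree = Math.Min(lowestFree, freeMemory.NextValue());
                Thread.Sleep(500); // sample free memory twice a second
            }

            timer.Stop();
            Console.WriteLine("Build took: " + timer.Elapsed);
            Console.WriteLine("Lowest free memory seen: " + lowestFree + " MB");
        }
    }

If the lowest free figure stays comfortably above zero on the 2 GB machine, the build is not memory-starved and more RAM alone won't change much.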

usr
  • 168,620
  • 35
  • 240
  • 369
2

It certainly won't do you any harm...

2 GB is pretty small for a dev machine; I use 16 GB as a matter of course.

However, build times are going to be gated by file access sooner or later, so whilst you might get a little improvement, I suspect you won't be blown away by it. ([EDIT] as a commenter says, compilation is likely to be CPU-bound too.)

Have you looked into parallel builds? (E.g. see this SO question: Visual Studio 2010, how to build projects in parallel on multicore.)
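
For reference, a rough sketch of what that boils down to (the solution name is a placeholder and msbuild is assumed to be on the PATH): msbuild's /m switch (short for /maxcpucount) is what lets independent projects build in parallel, and the corresponding Visual Studio setting is "maximum number of parallel project builds" under Tools > Options > Projects and Solutions > Build and Run.

    // Rough sketch: kick off msbuild with parallel project builds enabled.
    // Equivalent to running "msbuild MySolution.sln /m /t:Rebuild" from a
    // Visual Studio command prompt. "/m" with no value uses all cores;
    // pass a number (e.g. /m:4) to cap it. "MySolution.sln" is a placeholder.
    using System.Diagnostics;

    class ParallelRebuild
    {
        static void Main()
        {
            var build = Process.Start("msbuild", "MySolution.sln /m /t:Rebuild");
            build.WaitForExit();
        }
    }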

Or, can you restructure your code base and maybe move some less frequently updated assemblies into a separate sln, and then reference these as DLLs? (This isn't a great idea in all cases, but sometimes it can be expedient.) From your description of the problem I'm guessing this is easier said than done, but this is how we've achieved good results in our code base.

Steve
  • 8,469
  • 1
  • 26
  • 37
  • In my experience it is CPU-bound, because file access is effectively instant due to caching. – usr May 09 '12 at 21:16
  • @usr, ok, so in your experience CPU provides enough of a bottleneck. Got it. (In terms of disk speed, yeah, sounds dubious, it could conceivably be done with RAM disk). – EndangeringSpecies May 09 '12 at 21:22
  • Yeah. See my answer - RAM-amount does not matter after a certain point. – usr May 09 '12 at 21:29
0

The whole RAM issue is really one of ROI (return on investment). The more RAM you add to a system, the less likely the application is to have to search for a memory location large enough to store an object of a particular size, and the faster it will go; however, after a certain point it is so unlikely that the system will fail to find a large-enough location that going any higher is pointless. (Note that the read/write speed of the RAM itself plays a role in this as well.)

In summary: at 2 GB of RAM you should definitely upgrade to something more like 8 GB or the suggested 16 GB; going much beyond that would be almost pointless, because the bottleneck will then come from the processor. It's also a good idea to note the speed of the RAM, because the RAM itself can become a bottleneck if it can only handle a clock speed of XXXX MHz at most. Generally, though, 1600 MHz is fine.

d0nut
  • 2,835
  • 1
  • 18
  • 23