
I'm preparing to write a photonic simulation package that will run on a 128-node Linux and Windows cluster, with a Windows-based client for designing jobs (CAD-like) and submitting them to the cluster.

Most of this is well-trod ground, but I'm curious how C# stacks up against C++ in terms of real number-crunching ability. I'm very comfortable with both languages, but I find the superior object model and framework support of C# with .NET or Mono incredibly enticing. However, for this application I can't sacrifice too much processing power for the sake of developer preference.

Does anyone have any experience in this area? Are there any hard benchmarks available? I'd assume that the final machine code would be optimized using the same techniques whether it comes from a C# or C++ source, especially since that typically takes place at the pcode/IL level.

3Dave
  • Why would you want to combine the high-level (management/dispatching) with the lower-level (number-crunching) facilities in the same language anyway? If you want near-or-better-than-assembler efficiency in number crunching, use C for the calculations. If you want better tools to manage the higher-order organization and management of the application, use C# (.NET or Mono) or Python. – Evan Plaice Jun 21 '10 at 22:03
  • @Evan Of course C, given a good programmer, will produce great results, and it has the advantage that it skips several levels of abstraction present in C#. However, I'd like to keep the app in one language for several reasons. Maintainability is *very* important. Integration with a web-based reporting tool would be great - keeping things in C# makes it possible to easily reuse libraries written for the cluster node app in any other context. Never underestimate the benefits of sticking with a single platform. – 3Dave Jun 22 '10 at 16:57
  • Ok, I just wanted to point out that it's common practice to use a higher-level language as a 'glue' language; that way you get the convenience **and** the performance (win-win). Embedding C/C++ in Python is common practice in the world of scientific computing because it's so easy to use C/C++ code in Python, and you can still leverage the performance where needed. See http://docs.python.org/release/2.5.2/ext/simpleExample.html (the sketch below shows the same glue pattern from the C# side). I only offered it as a comment and not an answer because, although it doesn't fit your requirements, it's not a bad option. – Evan Plaice Jun 22 '10 at 20:30
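For what it's worth, here is a minimal sketch of that glue pattern from the C# side, using P/Invoke to call a native number-crunching routine. The library name `photonkernel` and the function `propagate_field` are hypothetical placeholders, not part of any real library:

```csharp
using System;
using System.Runtime.InteropServices;

class KernelGlue
{
    // Hypothetical native routine, compiled from C into
    // photonkernel.dll / libphotonkernel.so:
    //   void propagate_field(double *field, int n, double dt);
    [DllImport("photonkernel", EntryPoint = "propagate_field",
               CallingConvention = CallingConvention.Cdecl)]
    static extern void PropagateField(double[] field, int n, double dt);

    static void Main()
    {
        var field = new double[1024];

        // Dispatch/management logic stays in C#; only the hot loop runs natively.
        PropagateField(field, field.Length, 1e-15);

        Console.WriteLine("Kernel call completed; field[0] = " + field[0]);
    }
}
```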

3 Answers


The optimisation techniques employed by C# and native C++ are vastly different. C# compilers emit IL, which is only marginally optimised and then JIT'ed to binary code when it is about to execute for the first time. Most of the optimisation work happens inside the JIT compiler.

This has pros and cons. The JIT has a time budget, which limits how much effort it can spend on optimisation. But it also has intimate knowledge of the hardware it is actually running on, so it can (in theory) make transparent use of newer CPU opcodes and of detailed performance data such as a pipeline-hazards database.

In practice, I don't know how significant the latter is. I do know that at least Mono will parallelise some loops automatically if it finds itself running on a CPU with SSE (SSE2, perhaps?), which may be a big deal for your scenario.
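If you'd rather not rely on the JIT auto-vectorising, Mono also exposes SIMD explicitly through the Mono.Simd assembly. A minimal sketch, assuming Mono.Simd.dll is referenced:

```csharp
using System;
using Mono.Simd; // ships with Mono as Mono.Simd.dll

class SimdSketch
{
    static void Main()
    {
        // One Vector4f operation processes four floats at a time; on an
        // SSE-capable CPU Mono's JIT maps this to a single SIMD instruction,
        // and otherwise falls back to (slower) managed emulation.
        var a = new Vector4f(1f, 2f, 3f, 4f);
        var b = new Vector4f(10f, 20f, 30f, 40f);
        Vector4f sum = a + b;

        Console.WriteLine("{0} {1} {2} {3}", sum.X, sum.Y, sum.Z, sum.W);
    }
}
```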

Marcelo Cantos
  • Makes sense. Any idea what effect pre-compiling the assemblies with NGen will have? I'm not sure whether it applies any extra optimization passes. – 3Dave Jun 21 '10 at 22:18

I did a quick search and found this:

http://www.drdobbs.com/184401976

Edit: Bear in mind when reading the article that it was written five years ago, so performance is likely to be better all round by now!
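Given the article's age, it's probably worth re-timing things on your own hardware. A minimal C# harness along these lines would give current numbers; the dot-product kernel here is just an illustrative stand-in for real work:

```csharp
using System;
using System.Diagnostics;

class DotBench
{
    // Illustrative kernel: a straightforward dot product.
    static double Dot(double[] x, double[] y)
    {
        double acc = 0.0;
        for (int i = 0; i < x.Length; i++)
            acc += x[i] * y[i];
        return acc;
    }

    static void Main()
    {
        const int n = 1 << 20;
        var x = new double[n];
        var y = new double[n];
        var rng = new Random(42);
        for (int i = 0; i < n; i++) { x[i] = rng.NextDouble(); y[i] = rng.NextDouble(); }

        Dot(x, y); // warm-up call so the JIT compiles the kernel before timing

        var sw = Stopwatch.StartNew();
        double result = Dot(x, y);
        sw.Stop();

        Console.WriteLine("dot = {0}, elapsed = {1:F3} ms", result, sw.Elapsed.TotalMilliseconds);
    }
}
```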

Goz