
When I was starting C programming, my natural inclination was to write one "main" .c file, then add/organize extra features by #include-ing .h files with function definitions, typedefs, variables, etc. This workflow is very simple: no function prototypes, one build file, and so on. It is also intuitive, since #include is just like copy-pasting code! And it does get the job done.
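Here is a minimal sketch of what I mean (the file names are made up for illustration; in this layout the header carries the actual definitions, so #include pastes the implementation into the one translation unit):

```c
/* counter.h -- in this style the header holds the full definitions,
 * not just declarations */
static int counter;

static void counter_increment(void) { counter++; }
static int  counter_value(void)     { return counter; }

/* main.c -- the only .c file; one compiler invocation builds everything:
 *   gcc main.c -o app */
#include <stdio.h>
#include "counter.h"

int main(void)
{
    counter_increment();
    counter_increment();
    printf("count = %d\n", counter_value());   /* prints: count = 2 */
    return 0;
}
```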

However, when I look at the file structure of professional projects (e.g. on GitHub), I see that the general practice is to split the program into multiple .c files (complete with their own #includes, function prototypes, build rules, etc.), then link them together. Why is this workflow more widespread, even though it involves a lot of extra work relative to "one .c file to rule them all"?

I have researched and tried both styles of project file organization, but still ended up using the "one .c, multiple .h" approach. I'd like to know the pros and cons of both.

Noideas
  • Multiple .c files minimize incremental compile time: only the files that changed have to be recompiled before linking. A single .c file allows for global optimization. If you use a single .c file, why bother with multiple .h files? – Allan Wind Jan 16 '23 at 06:24
  • With a single source file, even if you make a single very small change in a single header file, the ***entire*** program must be recompiled. – Some programmer dude Jan 16 '23 at 06:27
  • "data (and functionality) encapsulation"... Once a sub-system works, it need not be revisited or its implementation (re-)considered. Try to write a "unit test" when all the code is in one source file. (All in one? I hope every function declaration begins with `static`... :-) – Fe2O3 Jan 16 '23 at 06:33
  • PS: "code re-use"... once you've written (and tested) a capable subsystem, it's very easy to transfer either the source code itself (in a source file and its header) or to link in the object file (alone or from a "personal library") for use in another project. Extracting the code for, for instance, a balanced BST (many distinct functions) from the single source file of an app that uses it, so it can be used in another project, would be a nightmare. – Fe2O3 Jan 16 '23 at 06:42
  • @Fe2O3 Just to play devil's advocate here... why are mono-repos not bad then? – Allan Wind Jan 16 '23 at 06:46
  • @AllanWind If "mono-repos" means "monolithic source", I can only guess that the "finished version" may have been consolidated from separate sources used during development. Certainly, once something earns the moniker "stable release", it's at the end of its (r)evolutionary development involving re-compilation every few 'tweaks'... Just a guess... `:-)` – Fe2O3 Jan 16 '23 at 06:50
  • @Fe2O3 It means all source code from your company is in one repository (Google is known for that). They talk about being able to refactor across the whole code base. Extracting something for release as open source seems to be painful for them (to your point). – Allan Wind Jan 16 '23 at 06:52
  • @AllanWind I'm going to take a pass on this one, and not pretend to understand the work practices of others... My limited experience is of large-ish, multi-program apps that all used some form of common library or "runtime services". Naturally, the code for "common methodologies" was held separate from any "application layer" code. – Fe2O3 Jan 16 '23 at 07:00
  • https://stackoverflow.com/questions/27139349/c-project-files-and-modular-organization is similar – Allan Wind Jan 16 '23 at 07:07

1 Answer

|                                                          | single .c file | multiple .c files |
| -------------------------------------------------------- | -------------- | ----------------- |
| encapsulation: data hiding/protection & function access  |                | +                 |
| minimize incremental build time                          |                | +                 |
| global code optimization                                 | +              | + (LTO)           |
| simple build process                                     | +              |                   |

(roughly in order of importance; most important first)

@Fe2O3's point is encapsulation in general. If you use a layered architecture, you may not want to allow functions from layer n to call functions in layer n + 1. With everything in one file, you have no way of enforcing that by design.
There are other organizing principles, too, such as keeping code that changes in concert together, or separating frequently and infrequently changed code. `static` global variables are of course still scoped to the file, but when that one file is all of your code, they are essentially just regular global variables. (See the sketch below.)
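A sketch of what I mean, with a made-up logger module (not from the question): `static` at file scope is what gives a lower layer real enforcement, because other .c files cannot call the internals or touch the state at all.

```c
/* logger.h -- the public surface other layers are allowed to call */
#ifndef LOGGER_H
#define LOGGER_H
void log_message(const char *msg);
#endif

/* logger.c -- everything below the public surface is invisible
 * outside this translation unit */
#include <stdio.h>
#include "logger.h"

static unsigned long line_no;   /* no other .c file can read or write this */

static void write_prefix(void)  /* no other .c file can call this */
{
    printf("[%lu] ", ++line_no);
}

void log_message(const char *msg)
{
    write_prefix();
    puts(msg);
}
```

A call to `write_prefix()` from another .c file fails at build time because the symbol has internal linkage. In a single-file program, every function and variable is reachable from everywhere unless you are disciplined about `static`.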

Multiple .c files minimize incremental compile time: only the files that changed have to be rebuilt before linking the binary. Let's say a clean build takes 300s, while an incremental build takes 30s.
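For illustration, a hypothetical two-file split (the names `util.c`/`main.c` are made up) and the kind of per-file build that makes incremental compilation possible:

```c
/* util.h -- shared declarations */
#ifndef UTIL_H
#define UTIL_H
int add(int a, int b);
#endif

/* util.c -- one translation unit */
#include "util.h"
int add(int a, int b) { return a + b; }

/* main.c -- another translation unit */
#include <stdio.h>
#include "util.h"

int main(void)
{
    printf("%d\n", add(2, 3));   /* prints: 5 */
    return 0;
}

/*
 * Each file is compiled on its own and then linked:
 *   gcc -c util.c          (skipped by make when util.c is unchanged)
 *   gcc -c main.c
 *   gcc util.o main.o -o app
 * Editing only main.c recompiles one file instead of the whole program.
 */
```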

A single .c file permits the compiler to do global optimization within that one compilation unit. If there are any non-linear optimization steps, the compilation time may be worse than with many small units (>300s; memory usage may be an issue, too). There is at least one well-known project (I was going to say SQLite, but I could very well be remembering wrong) that concatenates all source code for release builds. It turns out that, at least with gcc, you get the same optimizations if you use link-time optimization (-flto).
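For comparison, a sketch of the LTO build for the same hypothetical util.c/main.c from above; `-flto` is gcc's link-time-optimization flag:

```c
/*
 * Compile each translation unit with LTO information, then let the
 * linker optimize across them:
 *   gcc -O2 -flto -c util.c
 *   gcc -O2 -flto -c main.c
 *   gcc -O2 -flto util.o main.o -o app
 * add() can now be inlined into main() even though it is defined in a
 * different file, much as if the sources had been concatenated.
 */
```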

Allan Wind