To a high-level programmer, BCD is not important. For a low-level programmer in the early days, imagine a situation even simpler than a calculator: you have an integer variable in your code that you want to show to the user on a seven-segment display.
It would be easy to display the value in hex, but users prefer decimal numbers. So you need a hex-to-decimal conversion, and then some internal (binary) representation for the decimal digits you want to display.
Hardware designers identified very early that the bit patterns 0000 - 1001 could represent the decimal digits 0 to 9. The encoding is somewhat wasteful (six of the sixteen nibble values go unused), but one byte holds two digits, and more bytes give you more digits. And while you are at it, why not implement arithmetic directly on these decimal digits? Then no extra conversion is needed to interact with the user.
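As a sketch of what that representation looks like in practice, here is C code that packs a binary integer into BCD, two decimal digits per byte. The helper name and the least-significant-byte-first layout are my own choices for illustration, not from any standard library:

```c
#include <stdio.h>

/* Pack 'value' into 'out[0..n-1]' as packed BCD, least significant
 * digits first; returns the number of bytes actually used. */
static int to_packed_bcd(unsigned value, unsigned char *out, int n)
{
    int i = 0;
    do {
        unsigned char lo = value % 10;  value /= 10;   /* low decimal digit  */
        unsigned char hi = value % 10;  value /= 10;   /* high decimal digit */
        out[i++] = (unsigned char)(hi << 4 | lo);      /* two digits per byte */
    } while (value != 0 && i < n);
    return i;
}

int main(void)
{
    unsigned char bcd[4];
    int n = to_packed_bcd(1234, bcd, 4);
    for (int i = n - 1; i >= 0; --i)    /* most significant byte first */
        printf("%02X", bcd[i]);         /* prints 1234 */
    printf("\n");
    return 0;
}
```

Once the digits are in this form, each nibble maps directly to one digit of the display, with no further conversion.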
They also noticed that a few extra CPU instructions could 'correct' the results of the ordinary binary arithmetic instructions so that they operate on BCD. All calculations could then be performed in BCD, the preferred way for calculators. As a bonus, a decimal point and fractional numbers such as 0.1 can be handled exactly, without the ugly approximations forced by float representation. BCD was adopted in this domain for quite some time.
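To see what such a 'correction' amounts to, here is a C sketch of the fix-up that an instruction like x86's DAA performs in hardware right after a plain binary ADD. The function name and interface are mine, purely for illustration, and the inputs are assumed to be valid packed BCD:

```c
/* Add two packed-BCD bytes (two digits each) with ordinary binary
 * addition, then correct any nibble that overflowed past 9. */
unsigned bcd_add_byte(unsigned a, unsigned b, unsigned *carry_out)
{
    unsigned sum = a + b;                     /* plain binary add */

    /* Low digit went past 9: push it back into decimal range. */
    if (((a & 0x0F) + (b & 0x0F)) > 9)
        sum += 0x06;

    /* Same correction for the high digit, producing a decimal carry. */
    if (sum > 0x99) {
        sum += 0x60;
        *carry_out = 1;
    } else {
        *carry_out = 0;
    }
    return sum & 0xFF;
}
```

Called on 0x27 and 0x35 it returns 0x62, i.e. decimal 27 + 35 = 62, computed without ever leaving the BCD representation.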
By the time we get to the C language, we are already far away from these considerations. People consider C a 'low-level language', but this is true only in relative terms. C does not expose details of the CPU architecture such as the BCD instructions, or even the carry flag, which is so important for implementing variable-precision arithmetic.
Instead of a few assembly instructions, you can write much more complicated C code to handle BCD, but it is then up to the compiler to recognize the pattern and map it back to those special instructions. Most likely that will not happen: reverse-engineering such code is a very complex task, and most compilers simply ignore these instructions.
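For concreteness, this is roughly the shape of that 'more complicated code': a multi-byte packed-BCD addition in plain C, repeating the per-byte correction from the sketch above inside a loop and propagating the decimal carry by hand, since C gives no access to the carry flag. It is a sketch under my own conventions (least significant byte first), not anyone's library routine:

```c
#include <stddef.h>

/* Add two packed-BCD numbers of 'len' bytes each, least significant
 * byte first.  Nothing here tells the compiler that a DAA-style
 * instruction could do the per-byte correction in hardware, so it
 * compiles to ordinary binary arithmetic. */
void bcd_add(const unsigned char *a, const unsigned char *b,
             unsigned char *result, size_t len)
{
    unsigned carry = 0;                       /* decimal carry, kept by hand */
    for (size_t i = 0; i < len; ++i) {
        unsigned sum = a[i] + b[i] + carry;

        if (((a[i] & 0x0F) + (b[i] & 0x0F) + carry) > 9)
            sum += 0x06;                      /* fix the low digit  */
        if (sum > 0x99) {
            sum += 0x60;                      /* fix the high digit */
            carry = 1;
        } else {
            carry = 0;
        }
        result[i] = (unsigned char)sum;
    }
}
```

Two lines of assembly per byte (ADD, DAA) become a dozen lines of C, and the compiler treats them as just another integer loop.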