7

I understand that the operating system sometimes generates a core dump when a signal is sent (usually upon a crash). Is there a way to tell the operating system, from inside C/C++ via a #define or compiler flag, that no core dump, or only a specifically limited one, may be generated by that executable? The only way I know of to control it is via ulimit -c. My test system is Linux. The suppression does not need to be system-wide, only for the specific program.

For the interested, this has to do with CVE-2019-15947 in Bitcoin Core's bitcoin-qt, which still has no solution.

Some of this discussion is at the Bitcoin GitHub bug tracking page.

The other option would be to obfuscate and/or encrypt the wallet.dat in memory so it's not easily retrievable via core dumps. Please note that the second option can already be accomplished, though it is not enabled by default.

oxagast
  • Have you investigated setrlimit() and its RLIMIT_CORE parameter? There are plenty of runtime ways to turn off cores, but probably none that are as simple as a compiler flag. – Max Mar 12 '20 at 21:41
  • Briefly, yes, but I couldn't get RLIMIT_CORE to work on my test case. – oxagast Mar 12 '20 at 21:47
  • No, not really. You can make it hard, you can do things like set core file size limit to zero or handle `SIGSEGV` yourself, but if your code is running on my system, using my libraries, running under my kernel, and I want your code to dump core when it gets a `SIGSEGV`, it's going to dump core when it gets a `SIGSEGV`. – Andrew Henle Mar 12 '20 at 21:48
  • create a directory named "coredump" in the current directory? – Jean-François Fabre Mar 12 '20 at 21:51
  • I see. I would think some type of encryption (which is in place, but optional and often not used) of the wallet.dat is the best bet. Then the wallet and private keys would have to be recovered and decrypted. So if it does dump core, it won't dump a retrievable wallet.dat within the core to another part of the system, or upload it to where misc users can access it on a bug tracker. – oxagast Mar 12 '20 at 21:52
  • @oxagast Pretty much. And you need to be sure the keys aren't also in memory any longer than necessary to encrypt/decrypt the data you want to protect - that's not as easy as it seems - [`memset()` calls that only affect memory that gets freed, for example, can be elided by optimizing compilers](https://stackoverflow.com/questions/15538366/can-memset-function-call-be-removed-by-compiler) (see the sketch after these comments). – Andrew Henle Mar 12 '20 at 21:55
  • Also, telling it not to dump core at all is... not ideal anyway. Bitcoin isn't a trivial program, and debugging from core dumps is sometimes necessary. I'm also thinking that when a user crashes it, they'll have no saved backtrace to send to the devs. – oxagast Mar 12 '20 at 22:01
  • There are third party products which aim to solve this exact problem of securing crypto keys within running SW. I won't advertise any particular product but just FYI in case you want to do your own research on whether that is a viable approach for your situation. – kaylum Mar 12 '20 at 22:04
  • @oxagast Is the key stored in a global variable? – S.S. Anne Mar 12 '20 at 22:29
  • There are also side-channel attacks. SSH stores keys encrypted in memory with a large enough key; see [this](https://marc.info/?l=openbsd-cvs&m=156109087822676&w=2). – KamilCuk Mar 12 '20 at 22:32
  • I don't know why `setrlimit` wouldn't work. Can you post your complete test case and explain what happened when you ran it? – Nate Eldredge Mar 12 '20 at 23:39
  • In fact it did work in [my test](https://gist.github.com/neldredge/fbe678dcf66975f8516399d6897c2a34). You still get the message `Segmentation fault (core dumped)` but no core file is actually produced. – Nate Eldredge Mar 12 '20 at 23:49
  • @NateEldredge Did you check coredumpctl if you have that on your system? I still get the core dumped message, so I checked (https://termbin.com/h0sm) and saw it still did produce a core. – oxagast Mar 13 '20 at 03:46
  • @oxagast: I don't have coredumpctl installed. I just saw that without setrlimit it created a `core` file in the current working directory, and with setrlimit it did not. – Nate Eldredge Mar 13 '20 at 03:57
  • It looks like you are asking two questions; for this to be a useful question on SO, the 'other option' paragraph should be removed (into another question, if you can make it a specific question about programming, but I fear it falls into the category of asking us to implement an entire system). If the question is to be one question, it would be: how do I stop a program from generating a coredump file when it segfaults? – TamaMcGlinn Apr 02 '20 at 10:10
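
To illustrate the elided-`memset()` point in the comments above: a minimal sketch (the function and buffer names are illustrative). The first `memset()` may legally be removed by the optimizer because the buffer is never read again; `explicit_bzero()` (glibc 2.25+, also on the BSDs) is guaranteed not to be optimized away:

#define _DEFAULT_SOURCE
#include <string.h>

void use_key( void )
{
    unsigned char key[32];
    // ... derive and use the key here ...

    // The compiler may elide this call: key is dead afterwards,
    // so the write is unobservable under the as-if rule.
    memset( key, 0, sizeof key );

    // Guaranteed by glibc/BSD not to be optimized away.
    explicit_bzero( key, sizeof key );
}

In a real program you would use only the explicit_bzero() call (or a volatile-pointer loop on platforms where it is unavailable); both are shown here only for contrast.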

3 Answers

3

Depending on your definition of "in code/compile-time", you can install a signal handler and wipe memory upon receiving that signal.
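
A minimal sketch of that approach, assuming (illustratively) that the sensitive data lives in a buffer named `secret`; the handler wipes it and then re-raises the signal so the normal crash path, including any core dump, still runs:

#include <signal.h>
#include <stddef.h>

// Hypothetical buffer holding the sensitive material.
static unsigned char secret[4096];

static void wipe_and_reraise( int sig )
{
    // Best-effort zeroing; the volatile pointer discourages
    // the compiler from eliding the wipe.
    volatile unsigned char *p = secret;
    for ( size_t i = 0; i < sizeof secret; i++ )
        p[i] = 0;

    // Restore the default action and re-raise so the usual
    // crash handling (including any core dump) still happens.
    signal( sig, SIG_DFL );
    raise( sig );
}

int main( void )
{
    signal( SIGSEGV, wipe_and_reraise );
    signal( SIGABRT, wipe_and_reraise );
    // ... program logic ...
    return 0;
}

Note that this only covers signals you can catch (SIGKILL cannot be), and anything the handler does not wipe still ends up in the dump.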

However, crashes are handled by the kernel, not the compiler or the executable. You cannot stop memory from being dumped by the kernel into a core from inside the executable, no matter what you do.

Therefore, the other option sounds best.

S.S. Anne
  • Whether the debugging info (e.g. -ggdb) is compiled into the binary is irrelevant to whether it dumps core. That is, you could compile with -ggdb0 (or no flag at all) and it can still dump core, just without the extra symbols. – oxagast Mar 12 '20 at 23:20
  • @oxagast Thanks, removed. Can you please respond to my comment on the question? – S.S. Anne Mar 12 '20 at 23:29
1

The key primitive you will want to use is madvise(..., MADV_DONTDUMP), which tells the kernel (Linux 3.4 and later) that you do not want a given range of pages to be included in a core dump. The flag is called VM_DONTDUMP in kernel space. (Note that some versions of gdb do not respect this flag, which could be relevant for cores generated by gcore or other helpers rather than by the kernel.)

You will also need to ensure that, while keys and other sensitive data stored in these pages are being processed, nothing sufficient for a compromise is left in registers or spilled to the stack, since that material would still appear in a core dumped after that point.
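
A minimal sketch of this approach, assuming (illustratively) that the key material is kept in its own anonymous mapping:

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main( void )
{
    size_t len = 4096;    // one page, illustrative

    // Keep the sensitive material in its own anonymous mapping.
    unsigned char *key = mmap( NULL, len, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0 );
    if ( key == MAP_FAILED ) { perror( "mmap" ); return 1; }

    // Optional: also keep these pages out of swap.
    if ( mlock( key, len ) != 0 )
        perror( "mlock" );

    // Ask the kernel (Linux >= 3.4) to exclude them from core dumps.
    if ( madvise( key, len, MADV_DONTDUMP ) != 0 )
        perror( "madvise(MADV_DONTDUMP)" );

    // ... store and use the key material here ...

    munmap( key, len );
    return 0;
}

Keeping secrets in a dedicated mapping, rather than scattered through the heap, is what makes the MADV_DONTDUMP call effective; the mlock() call additionally keeps the pages out of swap.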

ecatmur
  • It's pretty hard to process keys without ever having them in registers... – Nate Eldredge Mar 12 '20 at 23:50
  • But this is Linux-specific. I think Bitcoin is trying to be somewhat portable. – S.S. Anne Mar 13 '20 at 00:03
  • As far as being Linux specific, I think there are similar things mentioned in the GitHub thread about MAP_CONCEAL and MAP_NOCORE under some BSDs. Not sure how Windows handles this stuff. It's also probably pretty client specific. – oxagast Mar 13 '20 at 03:33
  • @NateEldredge well, that's why I said "sufficient for a compromise" - if a key is 1024 bits long but you only process 64 bits at a time that would be safer (as long as an attacker can't force repeated crashes). Problem would be if the compiler auto-vectorizes your code and you end up with half the key in an AVX register. – ecatmur Mar 13 '20 at 10:30
0

It's theoretically possible to set the limit from within a program with setrlimit():

#include <stdio.h>
#include <errno.h>

#include <sys/resource.h>


int setNoCores( void )
{
    int result;
    struct rlimit core_limiter;

    core_limiter.rlim_cur = 0;    // *no* core
    core_limiter.rlim_max = 0;

    // Tell the OS no core-files
    result = setrlimit( RLIMIT_CORE, &core_limiter );

    // Was it OK for you?
    if ( result != 0 )
    {
        switch( errno )
        {
            case EFAULT:   fprintf( stderr, "setNoCores() - EFAULT\n" ); break;
            case EINVAL:   fprintf( stderr, "setNoCores() - EINVAL\n" ); break;
            case EPERM:    fprintf( stderr, "setNoCores() - EPERM - No Permissions!\n" ); break;
            default:       fprintf( stderr, "setNoCores() - errno %d\n", errno ); break;
        }
    }

    return result;
}


int main( int argc, char **argv )
{
    if ( setNoCores() == 0 )
    {
        printf( "Core Files off\n" );
    }
    else
    {
        printf( "Failed to change limits, core-file generation at default\n" );
    }

    return 0;
}

Of course, the process needs permission to adjust the limits. An unprivileged process can always lower its own limits (so setting RLIMIT_CORE to zero will normally succeed), but raising the hard limit back up requires privilege (CAP_SYS_RESOURCE on Linux); a denied request fails with EPERM.

Core files are also controllable via the ulimit shell command. The -c parameter sets the maximum size of the core file, measured in 1024-byte or 512-byte blocks [ref: Solaris man page (512), Linux '--help' (1024)]. Setting this in the shell the program executes from limits core-file generation for that shell's child processes only.

For no core-files:

    ulimit -c 0 

To turn it back on again:

    ulimit -c <some large number>

Note that in bash, ulimit without -S or -H sets both the soft and the hard limit, so an unprivileged shell that has run `ulimit -c 0` cannot raise the limit again; use `ulimit -Sc 0` to lower only the soft limit. For more details, try `help ulimit` in bash or consult your shell's manual.

Kingsley