I have an ML problem. I have a machine learning classification task where the classes are -1, 0, or 1. In practice, the correct label is 0 the vast majority of the time; only around 1% of examples are -1 or 1.
When training (I'm using auto_ml, but I suspect this is a general problem), I'm finding that my model learns it can reach 99% accuracy by simply predicting 0 every time.
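To make the problem concrete, here's a minimal sketch (the exact class proportions are my rough guess at the distribution described): a "model" that always predicts 0 scores very high on plain accuracy while never getting a single -1 or 1 right.

```python
import random

random.seed(0)

# Simulate a label distribution like the one described: ~98% zeros,
# ~1% each of -1 and +1 (the exact proportions are an assumption).
labels = [random.choices([-1, 0, 1], weights=[1, 98, 1])[0]
          for _ in range(10_000)]

# A "model" that always predicts the majority class 0.
preds = [0] * len(labels)

accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
print(f"accuracy of the always-0 model: {accuracy:.3f}")  # ~0.98

# Per-class recall shows what plain accuracy hides: the minority
# classes are never predicted correctly.
recalls = {}
for c in (-1, 0, 1):
    members = [y for y in labels if y == c]
    recalls[c] = sum(1 for y in members if c == 0) / len(members)
    print(f"recall for class {c:+d}: {recalls[c]:.2f}")
```

This is why metrics like per-class recall, F1, or balanced accuracy are usually a better yardstick than raw accuracy on data this skewed.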
Is this a known phenomenon? Is there anything I can do to work around it other than coming up with more classes, e.g. something that splits the 0s into several sub-classes?
Any advice, or pointers to what to read up on next, are appreciated.
Thanks.