I'm running a decision tree classifier on the data shown in the first picture. As you can see, there are categorical features like time signature and key signature that need to be one-hot encoded as 1's and 0's. However, within the dataframe all of these 0's and 1's are of type float. As a result, my decision tree classifier doesn't treat them as present/absent indicators; instead it splits on them at 0.5, as can be seen in the second picture. How do I fix this?
I already tried converting all the floats into ints, but couldn't figure out exactly how to do it.
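Here is roughly what I attempted (a sketch; the column names are made up, since my real dataframe is the one in the picture):

```python
import pandas as pd

# Toy dataframe resembling mine: one-hot columns stored as floats,
# plus a genuinely continuous feature that should stay float.
df = pd.DataFrame({
    "time_signature_4": [1.0, 0.0, 1.0],   # one-hot (assumed name)
    "key_C":            [0.0, 1.0, 0.0],   # one-hot (assumed name)
    "tempo":            [120.5, 98.3, 140.0],
})

# Cast only the one-hot columns to int, leaving continuous features alone
one_hot_cols = ["time_signature_4", "key_C"]
df[one_hot_cols] = df[one_hot_cols].astype(int)

print(df.dtypes)
```

This does change the dtypes, but I'm not sure it's the right approach, or whether the classifier will then stop splitting at 0.5.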