You might want to bin (categorize) your features and then one-hot-encode them. LightGBM works well with sparse features such as one-hot-encoded ones thanks to its EFB (Exclusive Feature Bundling), which significantly improves its computational efficiency. As a bonus, you get rid of the fractional parts of the numbers.
Think of categorization like this: say the values of one of your numerical features range from 36 to 56. You can discretize it with fine-grained edges [36, 36.5, 37, ..., 55.5, 56] or with coarser edges [40, 45, 50, 55] to make it categorical; the choice of bin edges is up to your expertise and imagination. For one-hot encoding you can refer to scikit-learn, which has a built-in `OneHotEncoder` for that.
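A minimal sketch of the binning-plus-encoding step, using `numpy.digitize` for the bins and scikit-learn's `OneHotEncoder` for the encoding (the feature values here are synthetic, generated just for illustration):

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder

# Synthetic numerical feature ranging roughly from 36 to 56
rng = np.random.default_rng(0)
values = rng.uniform(36, 56, size=100)

# Coarse bin edges, as in the example above: values fall into
# (-inf, 40), [40, 45), [45, 50), [50, 55), [55, inf)
edges = [40, 45, 50, 55]
bin_ids = np.digitize(values, edges)  # integer bin index per value

# One-hot-encode the bin indices; the result is a sparse matrix,
# which is exactly the representation LightGBM's EFB benefits from
encoder = OneHotEncoder()
one_hot = encoder.fit_transform(bin_ids.reshape(-1, 1))
print(one_hot.shape)  # (100, number of distinct bins)
```

`pandas.cut` is an alternative to `numpy.digitize` if your data already lives in a DataFrame.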
PS: With a numerical feature, always inspect its statistical properties first; you can use pandas' `DataFrame.describe()`, which summarizes the mean, max, min, std, etc.
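For instance, a quick inspection before choosing bin edges might look like this (again with synthetic data standing in for your feature):

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for a real numerical feature
rng = np.random.default_rng(0)
df = pd.DataFrame({"feature": rng.uniform(36, 56, size=100)})

# count, mean, std, min, quartiles, and max in one call
stats = df["feature"].describe()
print(stats)
```

The min/max and quartiles from this summary are a good starting point for picking sensible bin edges.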