
I have the following NumPy array:

import numpy as np
a = np.array([0.1, 0.9, 0.17, 0.47, 0.5])

How can I find a number that, when multiplied by the array, turns every element into an integer?

For example, in this case that number should be 100, because multiplying a by 100 makes every element an integer.

>>> a * 100
array([10., 90., 17., 47., 50.])

I have tried taking 10 to the power of the longest decimal length (i.e., the maximum number of decimal places). However, I need to find the smallest possible number (and it has to be greater than 0).
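Roughly, that attempt looks like this (counting decimal places from each element's string form is just one way to do it; the decimal_places helper is illustrative, not the exact code I used):

import numpy as np

a = np.array([0.1, 0.9, 0.17, 0.47, 0.5])

def decimal_places(x):
    # Illustrative helper: digits after the decimal point in the string form of x.
    s = f"{x:g}"
    return len(s.split(".")[1]) if "." in s else 0

# 10 to the power of the longest decimal length: 10**2 == 100 here.
factor = 10 ** max(decimal_places(x) for x in a)

That happens to give the right answer (100) for a above, but it is not always the smallest such number.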

b = np.array([1.0, 0.5, 0.25, 0.75, 0.5])

For this array, that number should be 4 instead of 100.

>>> b * 4
array([4., 2., 1., 3., 2.])
Troll
  • If you want the smallest possible number, then for the first array, the number shouldn't be 100. It should be 20. `np.array([0.1, 0.9, 0.25, 0.75, 0.5]) * 20 == array([2.0, 18.0, 5.0, 15.0, 10.0])` – Stef Oct 31 '21 at 11:45
  • @Stef Sorry, I was just bad at giving examples. I have fixed it. – Troll Oct 31 '21 at 11:47

1 Answer


Start with your strategy: find the minimal power of 10 that makes every element of your array an integer. Then convert the scaled array to integers and take their greatest common divisor. The number you want is that power of 10 divided by the GCD:

import numpy as np

b = np.array([1.0, 0.5, 0.25, 0.75, 0.5])
d = np.gcd.reduce((b * 100).astype(int))  # GCD of the scaled integers

d is 25 here, and the number you want is 100/25 = 4.
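Putting both steps into one function, a minimal sketch (the loop over powers of 10 and the max_decimals cap are just one way to find that minimal power):

import numpy as np

def smallest_multiplier(arr, max_decimals=12):
    arr = np.asarray(arr, dtype=float)
    # Step 1: smallest power of 10 that makes every element (numerically) an integer.
    # The max_decimals cap is an assumption of this sketch.
    for exp in range(max_decimals + 1):
        scaled = arr * 10 ** exp
        if np.allclose(scaled, np.round(scaled)):
            break
    ints = np.round(scaled).astype(np.int64)
    # Step 2: divide that power of 10 by the GCD of the scaled integers.
    return float(10 ** exp / np.gcd.reduce(ints))

>>> smallest_multiplier(np.array([0.1, 0.9, 0.17, 0.47, 0.5]))
100.0
>>> smallest_multiplier(np.array([1.0, 0.5, 0.25, 0.75, 0.5]))
4.0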

mozway
  • It does work… `0.001` would be multiplied by `1000`, leaving you with `1.`; if you then did `1000/1`, it would equal `1000`. Can you describe how it didn't work? – Jarvis Oct 31 '21 at 09:28
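For reference, the edge case from that comment plays out like this with the answer's approach (the 1000 scale factor is taken from the comment):

import numpy as np

c = np.array([0.001])
d = np.gcd.reduce((c * 1000).astype(int))  # d is 1
print(1000 / d)                            # 1000.0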