You seem to be mixing bases without specifying when you're using which one. It also helps to know that an IEEE 754 implementation gives a correctly rounded result after each individual operation.
As you pointed out, 0.1 can't be represented exactly as a binary fraction. The nearest value (for a 64-bit float) is:
0.1000000000000000055511151231257827021181583404541015625
When you say "0.00011 0011 0011 0011... 0011", it would help to note that these are base-2 digits, not decimal digits.
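If it helps to see both forms directly, you can recover them in Python (a quick sketch; Python is also what I use further down):

from fractions import Fraction

# the exact binary fraction stored for the literal 0.1
print(Fraction(0.1))
# => 3602879701896397/36028797018963968  (the denominator is 2**55)

# the same value as a hexadecimal float; each hex 9 is binary 1001,
# i.e. the repeating 0011 pattern you wrote out
print((0.1).hex())
# => 0x1.999999999999ap-4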
When you say 0.1 * 10, you're actually asking for:
10 * 0.1000000000000000055511151231257827021181583404541015625
and the computer does that exact calculation and then rounds the result to the nearest representable float. The adjacent values around 1.0 (which a 64-bit binary float can represent exactly) are:
0.99999999999999988897769753748434595763683319091796875
1.0000000000000002220446049250313080847263336181640625
These are both about 1.7e-16 away from the exact product, while the error from choosing 1.0 is only ~0.56e-16. Hence the FPU should choose 1.0.
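You can verify those distances with arbitrary-precision decimals (a sketch; the widened context just stops Decimal itself from rounding anything away):

from decimal import Decimal as D, getcontext
from math import inf, nextafter

getcontext().prec = 60   # enough digits to keep every value below exact
exact = D(0.1) * 10      # the exact real-number product, before float rounding

print(exact - D(nextafter(1, -inf)))
# => 1.66533453693773481063544750213623046875E-16
print(D(nextafter(1, inf)) - exact)
# => 1.66533453693773481063544750213623046875E-16
print(exact - 1)
# => 5.5511151231257827021181583404541015625E-17

Both gaps come out identical here, and each is about three times the error of landing on 1.0 itself.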
The same applies to 0.2 * 10, whose exact product gets rounded to exactly 2.0.
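You can see the end result of the rounding directly (the same comparisons hold in JavaScript too):

print(0.1 * 10 == 1.0)  # => True
print(0.2 * 10 == 2.0)  # => True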
Unfortunately I don't think JavaScript exposes many useful primitives for seeing what's going on, so I've been using Python, which is good for interactively exploring things like this.
# arbitrary precision decimal
from decimal import Decimal as D
# see https://en.cppreference.com/w/c/numeric/math/nextafter
from math import inf, nextafter
print(D(0.1))
# => 0.1000000000000000055511151231257827021181583404541015625
print(D(nextafter(1, inf)))
# => 1.0000000000000002220446049250313080847263336181640625
print(D(nextafter(1, -inf)))
# => 0.99999999999999988897769753748434595763683319091796875
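# note: the default Decimal context is 28 significant digits, so
# D(0.1)*10 below gets rounded slightly before the subtraction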
print(D(nextafter(1, -inf)) - D(0.1)*10)
# => -1.665334536935156540423631668E-16
Hopefully that gives you some idea of what's going on internally!