In the following, I'm going to assume that our decimals fall between 0 and 1. It should be straightforward to adapt this to larger numbers and to negative numbers.
Probably the easiest thing to do would be to choose the largest denominator you find acceptable and then create a list of all fractions between 0 and 1 whose denominators are less than or equal to that maximum. Be sure to skip any fraction that can be simplified; obviously, once you've listed 1/2, you don't need 2/4. You can detect reducible fractions by checking that the GCD of the numerator and denominator is 1, using Euclid's algorithm. Once you have your list, evaluate each fraction as a floating point number (probably a double, but the data type obviously depends on your choice of programming language). Then insert them into a balanced binary search tree, storing both the original fraction and its floating point value. You only need to do this setup once, so the O(n log n) time (where n is the number of fractions) isn't much of a cost.
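Here's a minimal sketch of that setup step in Python (the names build_fraction_table and max_denominator are just illustrative). For simplicity it keeps the fractions in a sorted list rather than an explicit balanced tree, which gives the same O(log n) lookups via binary search:

```python
from math import gcd

def build_fraction_table(max_denominator):
    """Build (value, numerator, denominator) triples for every irreducible
    fraction strictly between 0 and 1, sorted by floating point value."""
    table = []
    for q in range(2, max_denominator + 1):
        for p in range(1, q):
            if gcd(p, q) == 1:      # skip reducible fractions like 2/4
                table.append((p / q, p, q))
    table.sort()                    # sorted order stands in for the balanced BST
    return table
```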
Then, whenever you get a number, search the tree for the value closest to it. Note that this is slightly more complicated than searching for an exact match, because the closest node may not be a leaf. So, as you traverse the tree, keep track of the closest-valued node you have visited so far. When you reach a leaf, compare it against that recorded closest node; whichever is closer, its fraction is your answer.
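Continuing the sketch above (same hypothetical build_fraction_table), the nearest-value lookup can be done with a binary search over the sorted list; bisect plays the role of the tree traversal described above, and comparing the two neighbors of the insertion point replaces the "closest node visited so far" bookkeeping:

```python
from bisect import bisect_left

def closest_fraction(table, x):
    """Return (numerator, denominator) of the table entry nearest to x."""
    i = bisect_left(table, (x,))              # insertion point among the values
    candidates = table[max(i - 1, 0):i + 1]   # at most the two surrounding entries
    value, p, q = min(candidates, key=lambda t: abs(t[0] - x))
    return p, q

table = build_fraction_table(32)
print(closest_fraction(table, 0.333))         # -> (1, 3)
```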