First of all, the reason the Distributed Cache exists is to give all the mappers (read) access to one or more common files, e.g. a list of stopwords. If you don't need that, then you don't need the Distributed Cache. Furthermore, if the two files you describe have the same format and you handle them in the same way, then just pass their root directories as input to your mapper. Hadoop will handle both of them the same way and split both of them. If that is not the case, then continue reading my answer.
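For instance, a minimal sketch of those two options, assuming the Hadoop 2.x `mapreduce` API; all paths and the job name are placeholders:

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class InputSetup {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "same-format-inputs");

        // Option A: a small shared file (e.g. stopwords) goes into the distributed
        // cache, so every mapper can read a local copy of it in setup().
        job.addCacheFile(new URI("/shared/stopwords.txt"));

        // Option B: if both files have the same format and are handled identically,
        // just add both directories as input; Hadoop splits them the same way.
        FileInputFormat.addInputPath(job, new Path("/data/input1"));
        FileInputFormat.addInputPath(job, new Path("/data/input2"));
    }
}
```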
If you want to use the output of the first mapper as the (single) input of the second mapper, then you can use a ChainMapper.
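A minimal sketch of that, again assuming the Hadoop 2.x `mapreduce` API; `FirstMapper` and `SecondMapper` are hypothetical stand-ins for your own mappers:

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.chain.ChainMapper;

public class ChainedMappersDriver {

    public static class FirstMapper extends Mapper<LongWritable, Text, Text, Text> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // hypothetical first stage: just re-emit the line
            context.write(value, value);
        }
    }

    public static class SecondMapper extends Mapper<Text, Text, Text, Text> {
        // the default identity map() is enough for this sketch; it consumes
        // FirstMapper's output directly, record by record
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "chained-mappers");
        job.setJarByClass(ChainedMappersDriver.class);

        // The output key/value types of each mapper must match the input types of the next.
        ChainMapper.addMapper(job, FirstMapper.class, LongWritable.class, Text.class,
                Text.class, Text.class, new Configuration(false));
        ChainMapper.addMapper(job, SecondMapper.class, Text.class, Text.class,
                Text.class, Text.class, new Configuration(false));
        // input/output paths and the rest of the job setup omitted
    }
}
```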
But I guess that you also want to use the second input file.
So you can split your job into a chain of two jobs. The input of the second job's mapper can then be a combination of both inputs, i.e. the output of the first job and the second file, as long as they share the same format. You can use the addInputPath method for this purpose.
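Something like the following sketch, assuming the first job's output and the second file share the same format; all paths are placeholders and the mapper/reducer classes are omitted:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class TwoJobChain {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Job 1: process the first file and write to an intermediate directory.
        Job first = Job.getInstance(conf, "first-job");
        FileInputFormat.addInputPath(first, new Path("/data/input1"));
        FileOutputFormat.setOutputPath(first, new Path("/data/intermediate"));
        first.waitForCompletion(true);

        // Job 2: its mapper reads both the first job's output and the second file.
        Job second = Job.getInstance(conf, "second-job");
        FileInputFormat.addInputPath(second, new Path("/data/intermediate"));
        FileInputFormat.addInputPath(second, new Path("/data/input2"));
        FileOutputFormat.setOutputPath(second, new Path("/data/output"));
        second.waitForCompletion(true);
    }
}
```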
Otherwise, you can get your file directly from the filesystem, as described here.
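For example, you could open the file yourself in the mapper's setup(); the path below is just a placeholder:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class DirectReadMapper extends Mapper<LongWritable, Text, Text, Text> {

    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        // Open the second file straight from HDFS, once per mapper.
        FileSystem fs = FileSystem.get(context.getConfiguration());
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(fs.open(new Path("/shared/lookup.txt"))))) {
            String line;
            while ((line = reader.readLine()) != null) {
                // keep the lines in memory, e.g. in a Set or Map, for use in map()
            }
        }
    }
}
```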
Note that if your large file is larger than the block size (64 MB by default) and it is splittable, Hadoop splits it automatically when it is given as input to a mapper.