The short answer is that what you are noticing is job specific. Depending on the job, the mappers/reducers will write more or fewer bytes to the local file system than to HDFS.
In your mapper's case, a similar amount of data was read from local disk and from HDFS, and there is no problem with that: your mapper code simply happens to read about as much data locally as it reads from HDFS. Most of the time a mapper is used to process more data than will fit in its RAM, so it is not surprising to see it spilling the data it pulls from HDFS to a local drive. The bytes read from HDFS and locally will not always sum to the number of bytes written locally (and they don't in your case).
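If you want to check these numbers programmatically rather than from the job summary, a sketch like the one below pulls the FILE and HDFS byte counters from a completed job. It assumes the new `org.apache.hadoop.mapreduce` API, and the counter group name shown is the one used by Hadoop 2.x (older releases used the group name `FileSystemCounters`), so adjust it for your version.

```java
import org.apache.hadoop.mapreduce.Counters;
import org.apache.hadoop.mapreduce.Job;

public class FsCounterDump {
    // Assumes "job" has been fully configured elsewhere but not yet submitted.
    public static void dump(Job job) throws Exception {
        job.waitForCompletion(true);          // submit and block until the job finishes
        Counters counters = job.getCounters();

        // Group name as used by Hadoop 2.x; older releases use "FileSystemCounters".
        String group = "org.apache.hadoop.mapreduce.FileSystemCounter";
        long fileRead    = counters.findCounter(group, "FILE_BYTES_READ").getValue();
        long fileWritten = counters.findCounter(group, "FILE_BYTES_WRITTEN").getValue();
        long hdfsRead    = counters.findCounter(group, "HDFS_BYTES_READ").getValue();
        long hdfsWritten = counters.findCounter(group, "HDFS_BYTES_WRITTEN").getValue();

        System.out.printf("FILE: read=%d written=%d%nHDFS: read=%d written=%d%n",
                fileRead, fileWritten, hdfsRead, hdfsWritten);
    }
}
```

The values it prints are the same ones you see under "File System Counters" in the job output.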
Here is an example using TeraSort, with 100G of data (1 billion key/value pairs):
File System Counters
    FILE: Number of bytes read=219712810984
    FILE: Number of bytes written=312072614456
    FILE: Number of read operations=0
    FILE: Number of large read operations=0
    FILE: Number of write operations=0
    HDFS: Number of bytes read=100000061008
    HDFS: Number of bytes written=100000000000
    HDFS: Number of read operations=2976
    HDFS: Number of large read operations=0
Things to notice: the number of bytes read from and written to HDFS is almost exactly 100G. That is because the 100G of input had to be read in to be sorted, and the final sorted files had to be written back out. Also notice that the job needs a lot of local reads and writes to hold and sort the data: about 2.2x the HDFS input read locally and about 3.1x written locally (219,712,810,984 and 312,072,614,456 bytes versus the 100,000,061,008 bytes read from HDFS)!
As a final note: unless you just want to run a job without caring about the result, the number of HDFS bytes written should never be 0, and yours is HDFS_BYTES_WRITTEN 0.
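For reference, output bytes normally reach HDFS because the driver sets an output path and the reducer calls context.write(); if the reducer never emits anything (or the job uses NullOutputFormat), HDFS_BYTES_WRITTEN will legitimately stay at 0. Below is a minimal sketch of a job that does write its result to HDFS. The class names and paths are hypothetical, and your own job will differ in the details.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordLengthCount {

    // Emits (word length, 1) for every word in the input line.
    public static class LenMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
        private final LongWritable one = new LongWritable(1);
        @Override
        protected void map(LongWritable key, Text value, Context ctx)
                throws java.io.IOException, InterruptedException {
            for (String w : value.toString().split("\\s+")) {
                if (!w.isEmpty()) {
                    ctx.write(new Text(Integer.toString(w.length())), one);
                }
            }
        }
    }

    // Sums the counts per word length and writes the result to the output path.
    public static class SumReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
        @Override
        protected void reduce(Text key, Iterable<LongWritable> vals, Context ctx)
                throws java.io.IOException, InterruptedException {
            long sum = 0;
            for (LongWritable v : vals) sum += v.get();
            // Without this write (or with NullOutputFormat) HDFS_BYTES_WRITTEN stays 0.
            ctx.write(key, new LongWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word length count");
        job.setJarByClass(WordLengthCount.class);
        job.setMapperClass(LenMapper.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);
        // Hypothetical paths; the output path is where the HDFS bytes get written.
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```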