I'm new to Hadoop. I'm using a cluster and I have a disk quota of 15GB. If I try to run the wordcount sample on a big dataset (about 25GB), I always get the exception "The DiskSpace quota of xxxx is exceeded: ".
I checked my disk usage after the exception and it is far below the quota. Is this due to temporary files or intermediate job output? Is it possible to delete these temporary/intermediate files?
(I can modify the configuration through Java code; I have no direct access to the .xml configuration files.)
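To show what I mean, here is a minimal sketch of the kind of change I can make, based on the standard wordcount sample (I'm assuming the Hadoop 2.x `mapreduce` API; the compression property is just an illustration of a property I could set from `main()`, not something I know will fix the quota problem):

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Mapper from the standard wordcount sample: emits (word, 1) per token.
    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reducer/combiner from the standard wordcount sample: sums the counts.
    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // The only change I can make: set properties here instead of editing
        // the .xml files. Illustrative example: compress intermediate map
        // output so spill files take less space. (Hadoop 2.x property name;
        // on 1.x the equivalent is "mapred.compress.map.output".)
        conf.setBoolean("mapreduce.map.output.compress", true);

        Job job = Job.getInstance(conf, "wordcount");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

So if the fix involves changing any `*.xml` setting, I'd need to know whether I can set it on the `Configuration` object like this instead.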
Thanks! ;)