
I'm trying to run wordCount using MapReduce, but I'm facing the following issue when running it in IntelliJ.

Exception in thread "main" ExitCodeException exitCode=-1073741701: 
at org.apache.hadoop.util.Shell.runCommand(Shell.java:1008)
at org.apache.hadoop.util.Shell.run(Shell.java:901)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1213)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:1307)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:1289)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:978)
at org.apache.hadoop.fs.RawLocalFileSystem.mkOneDirWithMode(RawLocalFileSystem.java:660)
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirsWithOptionalPermission(RawLocalFileSystem.java:700)
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:672)
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirsWithOptionalPermission(RawLocalFileSystem.java:699)
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:672)
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirsWithOptionalPermission(RawLocalFileSystem.java:699)
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:677)
at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:336)
at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:162)
at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:113)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:148)
at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1571)
at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1568)
at java.base/java.security.AccessController.doPrivileged(AccessController.java:691)
at java.base/javax.security.auth.Subject.doAs(Subject.java:427)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1568)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:576)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:571)
at java.base/java.security.AccessController.doPrivileged(AccessController.java:691)
at java.base/javax.security.auth.Subject.doAs(Subject.java:427)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:571)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:562)
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:873)
at wordCount.main(wordCount.java:51)
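One detail worth noticing: the exit code in that trace is easier to recognize in hex. A tiny sketch to confirm the conversion (the interpretation as a Windows NTSTATUS is my reading, not something the trace itself states):

```java
public class ExitCodeHex {
    public static void main(String[] args) {
        // -1073741701 is the signed 32-bit view of the NTSTATUS 0xC000007B
        // (STATUS_INVALID_IMAGE_FORMAT), which on Windows commonly points
        // to a 32-/64-bit mismatch in a native binary such as winutils.exe
        // or hadoop.dll.
        System.out.println(Integer.toHexString(-1073741701)); // c000007b
    }
}
```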

My POM.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>org.example</groupId>
    <artifactId>MapReduce</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <maven.compiler.source>16</maven.compiler.source>
        <maven.compiler.target>16</maven.compiler.target>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>3.3.1</version>
        </dependency>

       <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-core</artifactId>
            <version>1.2.1</version>
        </dependency>

        <!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-common -->
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>3.3.1</version>
        </dependency>


    </dependencies>

</project>
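As an aside on the POM above: it mixes Hadoop 3.3.1 artifacts with the ancient `hadoop-core` 1.2.1, and targets Java 16 even though Hadoop is generally run on Java 8. A minimal cleaned-up sketch of the relevant sections (the 3.3.1 version is the one already in the file; treat this as a starting point, not a verified build):

```xml
<properties>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
</properties>

<dependencies>
    <!-- hadoop-client transitively brings in hadoop-common and the
         MapReduce client artifacts, so the old hadoop-core 1.2.1 and
         the explicit hadoop-common entry can be dropped. -->
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>3.3.1</version>
    </dependency>
</dependencies>
```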

I've set my input and output directories in the run parameters. The output folder is not being created. I want to run the wordCount program on a text file that contains names. I've tried setting the permissions on every relevant folder to "Full control".
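Since the failure happens when Hadoop shells out to set local file permissions on Windows, a quick sanity check before submitting the job is whether `winutils.exe` is present and whether `hadoop.home.dir` is set. A hedged sketch (the `C:\hadoop` path is an assumption; adjust it to your actual install location):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class WinutilsCheck {
    // Assumed install location; change this to wherever you unpacked Hadoop.
    static final String HADOOP_HOME = "C:\\hadoop";

    public static void main(String[] args) {
        // On Windows, Hadoop's RawLocalFileSystem shells out for chmod via
        // winutils.exe, so it must exist under %HADOOP_HOME%\bin before
        // job submission, and its bitness must match your JVM.
        Path winutils = Paths.get(HADOOP_HOME, "bin", "winutils.exe");
        if (!Files.exists(winutils)) {
            System.err.println("winutils.exe not found at " + winutils
                    + " -- download the build matching your Hadoop version");
        }
        // Make sure Hadoop itself can locate it too.
        System.setProperty("hadoop.home.dir", HADOOP_HOME);
        System.out.println(System.getProperty("hadoop.home.dir"));
    }
}
```

Running this (or setting the `HADOOP_HOME` environment variable equivalently) before the job's `main` logic rules out the most common Windows-specific cause of the permission shell-out failing.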

  • For starters, you should not be mixing Hadoop dependency versions. Secondly, last I checked, Hadoop requires Java 8, not 16 – OneCricketeer Sep 30 '21 at 19:21
  • Normally those permission shell-out problems stem from the unavailability of `chmod`. Are you running it on Linux? – Thomas Jungblut Sep 30 '21 at 19:35
  • 1
    @ThomasJungblut I'm running it on Windows 10. Given full control to the folders in which Hadoop needs to create the output directory. – Arpit Pachori Oct 01 '21 at 05:47
  • @OneCricketeer Could that be a problem for permissions? My peers have the same system configuration, but it works fine on theirs. – Arpit Pachori Oct 01 '21 at 05:48
  • You need some kind of Linux runtime like Cygwin to have the proper permission sets work using chmod. – Thomas Jungblut Oct 01 '21 at 09:28
  • You'd use winutils to manage chmod permissions, not run hadoop in cygwin @Thomas – OneCricketeer Oct 01 '21 at 13:04
  • 2
    @OneCricketeer that wasn't available in 1.x as far as I remember, but yes. If it would've find winutils in the path you don't need cygwin for it. But nowadays it's much easier to run the whole thing via WSL. – Thomas Jungblut Oct 01 '21 at 13:25
  • 1
    It's also much easier to just use Spark to compute wordcount. No one actually writes mapreduce anymore – OneCricketeer Oct 01 '21 at 13:42

0 Answers