Running WordCount Java Program


    I get this error when I run the WordCount program with this command:

    hadoop jar wc.jar WordCount /usr/local/hadoop/input /usr/local/hadoop/output

    Stack trace: ExitCodeException exitCode=1:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:981)
    at org.apache.hadoop.util.Shell.run(Shell.java:884)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1180)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:293)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.launchContainer(ContainerLaunch.java:425)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:285)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:88)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)


    Container exited with a non-zero exit code 1. Last 4096 bytes of stderr :
    Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster

    I have been facing this error for the last 2 weeks, and I have tried everything:

    1) Setting mapreduce.application.classpath in mapred-site.xml
    2) Setting yarn.application.classpath in yarn-site.xml (a sketch of roughly what I added for (1) and (2) is shown after this list)
    3) Changing the ownership of the folder, in case it's a permission issue
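
    For (1) and (2), this is roughly what I added. It is a sketch only: the values assume the default share/hadoop/* layout under $HADOOP_INSTALL and use the variables from my .bashrc below, so they may need adjusting.

    In mapred-site.xml:

    <property>
    <name>mapreduce.application.classpath</name>
    <value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*,$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*</value>
    </property>

    In yarn-site.xml:

    <property>
    <name>yarn.application.classpath</name>
    <value>$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/share/hadoop/common/*,$HADOOP_COMMON_HOME/share/hadoop/common/lib/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*,$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*,$YARN_HOME/share/hadoop/yarn/*,$YARN_HOME/share/hadoop/yarn/lib/*</value>
    </property>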

    I also followed everything suggested in this thread:

    http://stackoverflow.com/questions/3...hadoop-mapredu
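
    In case it helps with diagnosing this: as far as I know, the MRAppMaster class comes from the hadoop-mapreduce-client-app jar, so I assume a quick sanity check (assuming the standard binary tarball layout) would be something like:

    # jar that should contain org.apache.hadoop.mapreduce.v2.app.MRAppMaster
    ls /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-*.jar

    # classpath that the hadoop command itself resolves
    hadoop classpath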

    I am really frustrated at this point. Can anyone please help?

    ------------------------------------------------------------------------------------------------------------------------------------

    My Hadoop Settings

    Environment variables in .bashrc

    export JAVA_HOME=/usr/lib/jvm/java-8-oracle
    export HADOOP_INSTALL=/usr/local/hadoop
    export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
    export HADOOP_COMMON_HOME=$HADOOP_INSTALL
    export HADOOP_HDFS_HOME=$HADOOP_INSTALL
    export YARN_HOME=$HADOOP_INSTALL
    export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
    export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib/native"
    export PATH=$PATH:$HADOOP_INSTALL/sbin:$HADOOP_INSTALL/bin:$JAVA_HOME/bin
    export HADOOP_CONF_DIR=$HADOOP_INSTALL/etc/hadoop
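
    After editing .bashrc I reload it and check that the hadoop on my PATH is the one under /usr/local/hadoop, roughly like this:

    source ~/.bashrc
    which hadoop      # expect /usr/local/hadoop/bin/hadoop
    hadoop version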
    ------------------------------------------------------------------------------------------------------------------------------------
    core-site.xml

    <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
    </property>
    <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
    <description>The name of the default file system. A URI whose
    scheme and authority determine the FileSystem implementation. The
    uri's scheme determines the config property (fs.SCHEME.impl) naming
    the FileSystem implementation class. The uri's authority is used to
    determine the host, port, etc. for a filesystem.</description>
    </property>

    ------------------------------------------------------------------------------------------------------------------------------------
    mapred-site.xml


    <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
    </property>
    <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
    </property>

    ------------------------------------------------------------------------------------------------------------------------------------
    yarn-site.xml

    <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
    </property>
    <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>

    ------------------------------------------------------------------------------------------------------------------------------------
    hdfs-site.xml


    <property>
    <name>dfs.replication</name>
    <value>1</value>
    <description>Default block replication.
    The actual number of replications can be specified when the file is created.
    The default is used if replication is not specified in create time.
    </description>
    </property>
    <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/hadoop_store/hdfs/namenode</value>
    </property>
    <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/hadoop_store/hdfs/datanode</value>
    </property>
    ------------------------------------------------------------------------------------------------------------------------------------
