Friday, January 04, 2013

Learn - FATAL mapred.JobTracker: ENOENT: No such file or directory

 Nothing much for this post; I just need to keep this information on my blog. After installing Hadoop (from RPM), I started the namenode and datanode, but ran into an issue with the jobtracker. When I started it, it failed with the error - FATAL mapred.JobTracker: ENOENT: No such file or directory.
-bash-4.1$ hadoop jobtracker &
[1] 14455
-bash-4.1$ 13/01/04 20:07:30 INFO mapred.JobTracker: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting JobTracker
STARTUP_MSG:   host = centos/192.168.111.80
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 1.1.1
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.1 -r 1411108; compiled by 'hortonfo' on Mon Nov 19 10:51:29 UTC 2012
************************************************************/
13/01/04 20:07:30 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
13/01/04 20:07:30 INFO impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
13/01/04 20:07:30 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 60 second(s).
13/01/04 20:07:30 INFO impl.MetricsSystemImpl: JobTracker metrics system started
13/01/04 20:07:30 INFO impl.MetricsSourceAdapter: MBean for source QueueMetrics,q=default registered.
13/01/04 20:07:31 INFO impl.MetricsSourceAdapter: MBean for source ugi registered.
13/01/04 20:07:31 INFO delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
13/01/04 20:07:31 INFO mapred.JobTracker: Scheduler configured with (memSizeForMapSlotOnJT, memSizeForReduceSlotOnJT, limitMaxMemForMapTasks, limitMaxMemForReduceTasks) (-1, -1, -1, -1)
13/01/04 20:07:31 INFO util.HostsFileReader: Refreshing hosts (include/exclude) list
13/01/04 20:07:31 INFO delegation.AbstractDelegationTokenSecretManager: Starting expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)
13/01/04 20:07:31 INFO delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
13/01/04 20:07:31 INFO mapred.JobTracker: Starting jobtracker with owner as hdfs
13/01/04 20:07:31 INFO impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort9000 registered.
13/01/04 20:07:31 INFO ipc.Server: Starting SocketReader
13/01/04 20:07:31 INFO impl.MetricsSourceAdapter: MBean for source RpcActivityForPort9000 registered.
13/01/04 20:07:31 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
13/01/04 20:07:32 INFO http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
13/01/04 20:07:32 INFO http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context WepAppsContext
13/01/04 20:07:32 INFO http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
13/01/04 20:07:32 INFO http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
13/01/04 20:07:32 INFO http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50030
13/01/04 20:07:32 INFO http.HttpServer: listener.getLocalPort() returned 50030 webServer.getConnectors()[0].getLocalPort() returned 50030
13/01/04 20:07:32 INFO http.HttpServer: Jetty bound to port 50030
13/01/04 20:07:32 INFO mortbay.log: jetty-6.1.26
13/01/04 20:07:32 INFO mortbay.log: Started SelectChannelConnector@centos:50030
13/01/04 20:07:32 INFO impl.MetricsSourceAdapter: MBean for source jvm registered.
13/01/04 20:07:32 INFO impl.MetricsSourceAdapter: MBean for source JobTrackerMetrics registered.
13/01/04 20:07:32 INFO mapred.JobTracker: JobTracker up at: 9000
13/01/04 20:07:32 INFO mapred.JobTracker: JobTracker webserver: 50030
13/01/04 20:07:32 INFO ipc.Server: IPC Server Responder: starting
13/01/04 20:07:32 INFO ipc.Server: IPC Server listener on 9000: starting
13/01/04 20:07:32 INFO ipc.Server: IPC Server handler 0 on 9000: starting
13/01/04 20:07:32 INFO ipc.Server: IPC Server handler 2 on 9000: starting
13/01/04 20:07:32 INFO ipc.Server: IPC Server handler 1 on 9000: starting
13/01/04 20:07:32 INFO ipc.Server: IPC Server handler 3 on 9000: starting
13/01/04 20:07:32 INFO ipc.Server: IPC Server handler 4 on 9000: starting
13/01/04 20:07:32 INFO ipc.Server: IPC Server handler 5 on 9000: starting
13/01/04 20:07:32 INFO ipc.Server: IPC Server handler 6 on 9000: starting
13/01/04 20:07:32 INFO ipc.Server: IPC Server handler 7 on 9000: starting
13/01/04 20:07:32 INFO ipc.Server: IPC Server handler 8 on 9000: starting
13/01/04 20:07:32 INFO mapred.JobTracker: Setting safe mode to true. Requested by : hdfs
13/01/04 20:07:32 INFO ipc.Server: IPC Server handler 9 on 9000: starting
13/01/04 20:07:33 INFO mapred.JobTracker: Setting safe mode to false. Requested by : hdfs
13/01/04 20:07:33 INFO mapred.JobTracker: Cleaning up the system directory
13/01/04 20:07:33 INFO namenode.FSNamesystem: Number of transactions: 17 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 16 SyncTimes(ms): 451
13/01/04 20:07:33 INFO util.NativeCodeLoader: Loaded the native-hadoop library
13/01/04 20:07:33 FATAL mapred.JobTracker: ENOENT: No such file or directory
        at org.apache.hadoop.io.nativeio.NativeIO.chmod(Native Method)
        at org.apache.hadoop.fs.FileUtil.execSetPermission(FileUtil.java:699)
        at org.apache.hadoop.fs.FileUtil.setPermission(FileUtil.java:654)
        at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:509)
        at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:344)
        at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:189)
        at org.apache.hadoop.mapred.CompletedJobStatusStore.<init>(CompletedJobStatusStore.java:81)
        at org.apache.hadoop.mapred.JobTracker.initialize(JobTracker.java:2048)
        at org.apache.hadoop.mapred.JobTracker.offerService(JobTracker.java:2325)
        at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4789)

13/01/04 20:07:33 INFO mapred.JobTracker: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down JobTracker at centos/192.168.111.80
************************************************************/

[1]+  Exit 255                hadoop jobtracker
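Reading the stack trace: the JobTracker's CompletedJobStatusStore tries to create its job-info directory on the local filesystem and then set permissions on it. It looks like the mkdir fails because the parent directory is owned by root (the ls output below shows this), so the follow-up native chmod runs against a path that does not exist and the jobtracker aborts with ENOENT. On this box that directory lives under hadoop.log.dir; the successful run further down confirms it with "job-info-dir : file:////var/log/hadoop/mr/jobstatus". A quick pre-flight check would have caught this before starting the daemon. This is only a sketch, with the user name (hdfs) and path (/var/log/hadoop/mr) taken from my setup, so adjust them to yours:

#!/bin/sh
# Sketch: check that the local directory used by the completed-job store
# is writable by the user that will run the jobtracker.
# JT_USER and LOG_DIR are assumptions from this particular setup.
JT_USER=hdfs
LOG_DIR=/var/log/hadoop/mr
if sudo -u "$JT_USER" test -w "$LOG_DIR"; then
    echo "OK: $JT_USER can write to $LOG_DIR"
else
    echo "NOT writable by $JT_USER - fix ownership, e.g. chown -R $JT_USER $LOG_DIR"
fi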
So I rechecked "hadoop.log.dir" in the Hadoop configuration, checked the permissions on the directory it points to, and changed its ownership.
[root@centos ~]# grep hadoop.log.dir /etc/hadoop/*
/etc/hadoop/log4j.properties:hadoop.log.dir=.
/etc/hadoop/log4j.properties:log4j.appender.DRFA.File=${hadoop.log.dir}/${hadoop.log.file}
/etc/hadoop/log4j.properties:log4j.appender.DRFAS.File=${hadoop.log.dir}/${hadoop.security.log.file}
/etc/hadoop/log4j.properties:log4j.appender.DRFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
/etc/hadoop/log4j.properties:log4j.appender.MRAUDIT.File=${hadoop.log.dir}/mapred-audit.log
/etc/hadoop/log4j.properties:#log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}
/etc/hadoop/log4j.properties:log4j.appender.JSA.File=${hadoop.log.dir}/${hadoop.mapreduce.jobsummary.log.file}
/etc/hadoop/log4j.properties.bak:hadoop.log.dir=.
/etc/hadoop/log4j.properties.bak:log4j.appender.DRFA.File=${hadoop.log.dir}/${hadoop.log.file}
/etc/hadoop/log4j.properties.bak:log4j.appender.DRFAS.File=${hadoop.log.dir}/${hadoop.security.log.file}
/etc/hadoop/log4j.properties.bak:log4j.appender.DRFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
/etc/hadoop/log4j.properties.bak:log4j.appender.MRAUDIT.File=${hadoop.log.dir}/mapred-audit.log
/etc/hadoop/log4j.properties.bak:#log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}
/etc/hadoop/log4j.properties.bak:log4j.appender.JSA.File=${hadoop.log.dir}/${hadoop.mapreduce.jobsummary.log.file}
/etc/hadoop/taskcontroller.cfg:hadoop.log.dir=/var/log/hadoop/mr
/etc/hadoop/taskcontroller.cfg.bak:hadoop.log.dir=/var/log/hadoop/mr
[root@centos ~]# grep hadoop.log.dir /etc/hadoop/*.cfg
hadoop.log.dir=/var/log/hadoop/mr
[root@centos ~]#
[root@centos ~]#
[root@centos ~]# ls -la /var/log/hadoop/mr
total 8
drwxr-xr-x. 2 root root   4096 Jan  4 19:19 .
drwxrwxr-x. 6 root hadoop 4096 Jan  4 19:35 ..
[root@centos ~]# ls -la /var/log/hadoop/
total 40
drwxrwxr-x.  6 root   hadoop 4096 Jan  4 19:35 .
drwxr-xr-x. 12 root   root   4096 Jan  4 19:09 ..
-rw-r--r--.  1 hdfs   hadoop    6 Jan  4 19:35 hadoop-hdfs-datanode.pid
-rw-r--r--.  1 hdfs   hadoop    6 Jan  4 19:36 hadoop-hdfs-namenode.pid
-rw-r--r--.  1 mapred hadoop    6 Jan  4 19:37 hadoop-mapred-jobtracker.pid
-rw-r--r--.  1 mapred hadoop    6 Jan  4 19:51 hadoop-mapred-tasktracker.pid
drwxr-xr-x.  3 hdfs   hadoop 4096 Jan  4 19:41 hdfs
drwxr-xr-x.  3 mapred hadoop 4096 Jan  4 19:51 mapred
drwxr-xr-x.  2 root   root   4096 Jan  4 19:19 mr
drwxr-xr-x.  2 root   root   4096 Jan  4 19:23 root

[root@centos ~]# chown -R hdfs -R /var/log/hadoop/mr/
[root@centos ~]# su - hdfs
-bash-4.1$ hadoop jobtracker &
[1] 14558
-bash-4.1$ 13/01/04 20:10:15 INFO mapred.JobTracker: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting JobTracker
STARTUP_MSG:   host = centos/192.168.111.80
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 1.1.1
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.1 -r 1411108; compiled by 'hortonfo' on Mon Nov 19 10:51:29 UTC 2012
************************************************************/
13/01/04 20:10:15 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
13/01/04 20:10:16 INFO impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
13/01/04 20:10:16 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 60 second(s).
13/01/04 20:10:16 INFO impl.MetricsSystemImpl: JobTracker metrics system started
13/01/04 20:10:16 INFO impl.MetricsSourceAdapter: MBean for source QueueMetrics,q=default registered.
13/01/04 20:10:17 INFO impl.MetricsSourceAdapter: MBean for source ugi registered.
13/01/04 20:10:17 INFO delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
13/01/04 20:10:17 INFO mapred.JobTracker: Scheduler configured with (memSizeForMapSlotOnJT, memSizeForReduceSlotOnJT, limitMaxMemForMapTasks, limitMaxMemForReduceTasks) (-1, -1, -1, -1)
13/01/04 20:10:17 INFO delegation.AbstractDelegationTokenSecretManager: Starting expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)
13/01/04 20:10:17 INFO util.HostsFileReader: Refreshing hosts (include/exclude) list
13/01/04 20:10:17 INFO delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
13/01/04 20:10:17 INFO mapred.JobTracker: Starting jobtracker with owner as hdfs
13/01/04 20:10:17 INFO impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort9000 registered.
13/01/04 20:10:17 INFO impl.MetricsSourceAdapter: MBean for source RpcActivityForPort9000 registered.
13/01/04 20:10:17 INFO ipc.Server: Starting SocketReader
13/01/04 20:10:17 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
13/01/04 20:10:17 INFO http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
13/01/04 20:10:17 INFO http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context WepAppsContext
13/01/04 20:10:17 INFO http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
13/01/04 20:10:17 INFO http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
13/01/04 20:10:17 INFO http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50030
13/01/04 20:10:17 INFO http.HttpServer: listener.getLocalPort() returned 50030 webServer.getConnectors()[0].getLocalPort() returned 50030
13/01/04 20:10:17 INFO http.HttpServer: Jetty bound to port 50030
13/01/04 20:10:17 INFO mortbay.log: jetty-6.1.26
13/01/04 20:10:18 INFO mortbay.log: Started SelectChannelConnector@centos:50030
13/01/04 20:10:18 INFO impl.MetricsSourceAdapter: MBean for source jvm registered.
13/01/04 20:10:18 INFO impl.MetricsSourceAdapter: MBean for source JobTrackerMetrics registered.
13/01/04 20:10:18 INFO mapred.JobTracker: JobTracker up at: 9000
13/01/04 20:10:18 INFO mapred.JobTracker: JobTracker webserver: 50030
13/01/04 20:10:18 INFO ipc.Server: IPC Server Responder: starting
13/01/04 20:10:18 INFO ipc.Server: IPC Server listener on 9000: starting
13/01/04 20:10:18 INFO ipc.Server: IPC Server handler 0 on 9000: starting
13/01/04 20:10:18 INFO ipc.Server: IPC Server handler 1 on 9000: starting
13/01/04 20:10:18 INFO ipc.Server: IPC Server handler 2 on 9000: starting
13/01/04 20:10:18 INFO ipc.Server: IPC Server handler 3 on 9000: starting
13/01/04 20:10:18 INFO ipc.Server: IPC Server handler 4 on 9000: starting
13/01/04 20:10:18 INFO ipc.Server: IPC Server handler 5 on 9000: starting
13/01/04 20:10:18 INFO ipc.Server: IPC Server handler 6 on 9000: starting
13/01/04 20:10:18 INFO ipc.Server: IPC Server handler 7 on 9000: starting
13/01/04 20:10:18 INFO ipc.Server: IPC Server handler 8 on 9000: starting
13/01/04 20:10:18 INFO mapred.JobTracker: Setting safe mode to true. Requested by : hdfs
13/01/04 20:10:18 INFO ipc.Server: IPC Server handler 9 on 9000: starting
13/01/04 20:10:18 INFO mapred.JobTracker: Setting safe mode to false. Requested by : hdfs
13/01/04 20:10:18 INFO mapred.JobTracker: Cleaning up the system directory
13/01/04 20:10:18 INFO namenode.FSNamesystem: Number of transactions: 20 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 19 SyncTimes(ms): 475
13/01/04 20:10:18 INFO util.NativeCodeLoader: Loaded the native-hadoop library
13/01/04 20:10:18 INFO mapred.CompletedJobStatusStore: Completed job store activated/configured with retain-time : 0 , job-info-dir : file:////var/log/hadoop/mr/jobstatus
13/01/04 20:10:18 INFO hdfs.StateChange: BLOCK* NameSystem.allocateBlock: /mapred/mapredsystem/jobtracker.info. blk_6461680580026906898_1001
13/01/04 20:10:19 INFO hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 192.168.111.81:50010 is added to blk_6461680580026906898_1001 size 4
13/01/04 20:10:19 INFO hdfs.StateChange: Removing lease on  file /mapred/mapredsystem/jobtracker.info from client DFSClient_NONMAPREDUCE_-1946176410_1
13/01/04 20:10:19 INFO hdfs.StateChange: DIR* NameSystem.completeFile: file /mapred/mapredsystem/jobtracker.info is closed by DFSClient_NONMAPREDUCE_-1946176410_1
13/01/04 20:10:19 INFO hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 192.168.111.80:50010 is added to blk_6461680580026906898_1001 size 4
13/01/04 20:10:19 INFO mapred.CapacityTaskScheduler: Initializing 'default' queue with cap=100.0, maxCap=-1.0, ulMin=100, ulMinFactor=1.0, supportsPriorities=false, maxJobsToInit=3000, maxJobsToAccept=30000, maxActiveTasks=200000, maxJobsPerUserToInit=3000, maxJobsPerUserToAccept=30000, maxActiveTasksPerUser=100000
13/01/04 20:10:19 INFO mapred.CapacityTaskScheduler: Scheduler configured with (memSizeForMapSlotOnJT, memSizeForReduceSlotOnJT, limitMaxMemForMapTasks, limitMaxMemForReduceTasks) (-1,-1,-1,-1)
13/01/04 20:10:19 INFO mapred.CapacityTaskScheduler: Added new queue: default
13/01/04 20:10:19 INFO mapred.CapacityTaskScheduler: Capacity scheduler initialized 1 queues
13/01/04 20:10:19 INFO mapred.JobTracker: Starting the recovery process for 0 jobs ...
13/01/04 20:10:19 INFO mapred.JobTracker: Recovery done! Recoverd 0 of 0 jobs.
13/01/04 20:10:19 INFO mapred.JobTracker: Recovery Duration (ms):0
13/01/04 20:10:19 INFO mapred.JobTracker: Refreshing hosts information
13/01/04 20:10:19 INFO util.HostsFileReader: Setting the includes file to /etc/hadoop/mapred.include
13/01/04 20:10:19 INFO util.HostsFileReader: Setting the excludes file to /etc/hadoop/mapred.exclude
13/01/04 20:10:19 INFO util.HostsFileReader: Refreshing hosts (include/exclude) list
13/01/04 20:10:19 INFO mapred.JobTracker: Decommissioning 0 nodes
13/01/04 20:10:19 INFO mapred.JobTracker: Starting RUNNING
Cool! :)
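As a final sanity check after the restart, the JobTracker web UI on port 50030 (shown in the log above) should respond. A minimal check, assuming the same hostname as in my setup:

curl -s -o /dev/null -w "%{http_code}\n" http://centos:50030/
# expect 200 if the jobtracker web server is up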
