java.io.IOException: Cannot run program "bash": java.io.IOException: error=12, Cannot allocate memory
This error occurs when a Map/Reduce task cannot get sufficient memory. In my cluster, each node has only 1GB of RAM. I did some research on a Hadoop cluster's memory requirements; here is the summary.
By default, the datanode takes 1000MB of RAM, the tasktracker takes 1000MB, and each task (map or reduce) takes 200MB.
By default, at most 2 map tasks and 2 reduce tasks can run concurrently on a single node.
Hence, a worker node should have at least 1000+1000+2*200+2*200=2800MB of RAM.
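If a node cannot afford that much, the defaults above can be turned down. As a sketch (property names are from the Hadoop 0.20-era mapred-site.xml; the values here are illustrative for a 1GB node, not taken from my cluster):

```xml
<!-- conf/mapred-site.xml: illustrative values for a low-memory (1GB) node -->
<configuration>
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>1</value> <!-- default: 2 concurrent map tasks per node -->
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>1</value> <!-- default: 2 concurrent reduce tasks per node -->
  </property>
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx100m</value> <!-- default: -Xmx200m per task JVM -->
  </property>
</configuration>
```

With these values the budget above shrinks to 1000+1000+100+100=2200MB plus headroom for the OS.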
We can increase or reduce the heap size of the Hadoop daemons by setting the HADOOP_HEAPSIZE variable in conf/hadoop-env.sh.
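For example, on a 1GB node the daemon heaps could be shrunk from the 1000MB default (a sketch of the conf/hadoop-env.sh change; 500 is an illustrative value, not a recommendation from the Hadoop docs):

```shell
# conf/hadoop-env.sh
# Maximum heap size, in MB, for each Hadoop daemon (default: 1000).
export HADOOP_HEAPSIZE=500
```

This applies to every daemon started from this node (namenode, datanode, tasktracker), so pick a value that fits the smallest machine in the cluster.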
But first we need to make sure the worker node has enough physical RAM. The recommended minimum is 2GB.
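A quick way to check on Linux is the standard free utility:

```shell
# Show total, used, and free physical memory in MB.
# The "Mem:" total should comfortably exceed the ~2800MB estimated above.
free -m
```

If the total is well below that, either add RAM/swap or lower the heap and task-slot settings before rerunning the job.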
The full exception stack is:
11/04/08 15:33:25 INFO mapred.JobClient: map 100% reduce 5%
11/04/08 15:33:32 INFO mapred.JobClient: map 100% reduce 8%
11/04/08 15:33:53 INFO mapred.JobClient: map 100% reduce 9%
11/04/08 15:33:56 INFO mapred.JobClient: map 100% reduce 10%
11/04/08 15:34:05 INFO mapred.JobClient: Task Id : attempt_201104081512_0004_r_000000_0, Status : FAILED
java.io.IOException: Task: attempt_201104081512_0004_r_000000_0 - The reduce copier failed
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:380)
at org.apache.hadoop.mapred.Child.main(Child.java:170)
Caused by: java.io.IOException: Cannot run program "bash": java.io.IOException: error=12, Cannot allocate memory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:475)
at org.apache.hadoop.util.Shell.runCommand(Shell.java:149)
at org.apache.hadoop.util.Shell.run(Shell.java:134)
at org.apache.hadoop.fs.DF.getAvailable(DF.java:73)
at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:329)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:124)
at org.apache.hadoop.mapred.MapOutputFile.getInputFileForWrite(MapOutputFile.java:160)
at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$InMemFSMergeThread.doInMemMerge(ReduceTask.java:2537)
at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$InMemFSMergeThread.run(ReduceTask.java:2501)
Caused by: java.io.IOException: java.io.IOException: error=12, Cannot allocate memory
at java.lang.UNIXProcess.<init>(UNIXProcess.java:164)
at java.lang.ProcessImpl.start(ProcessImpl.java:81)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:468)
... 8 more
attempt_201104081512_0004_r_000000_0: log4j:WARN No appenders could be found for logger (org.apache.hadoop.mapred.ReduceTask).
attempt_201104081512_0004_r_000000_0: log4j:WARN Please initialize the log4j system properly.
attempt_201104081512_0004_r_000000_0: log4j:WARN No appenders could be found for logger (org.apache.hadoop.mapred.ReduceTask).
attempt_201104081512_0004_r_000000_0: log4j:WARN Please initialize the log4j system properly.