
Hadoop 3.2: fixing AM resource requests that exceed the YARN configuration when launching a job


1. Hadoop 3.2 was deployed on YARN across three VM hosts with 4 GB of RAM each. With the YARN minimum allocation memory set to 128 MB and the maximum to 512 MB, even a simple job fails to run because the requested resources cannot be satisfied.


[spug@hadoop51 data]$ hadoop jar hadoop-3.2.4/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.4.jar wordcount /input /wcoutput


2022-08-28 21:39:09,979 INFO client.RMProxy: Connecting to ResourceManager at hadoop52/192.168.1.52:8032
2022-08-28 21:39:10,906 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/spug/.staging/job_1661693889212_0001
2022-08-28 21:39:11,822 INFO input.FileInputFormat: Total input files to process : 0
2022-08-28 21:39:12,158 INFO mapreduce.JobSubmitter: number of splits:0
2022-08-28 21:39:12,500 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1661693889212_0001
2022-08-28 21:39:12,502 INFO mapreduce.JobSubmitter: Executing with tokens: []
2022-08-28 21:39:12,785 INFO conf.Configuration: resource-types.xml not found
2022-08-28 21:39:12,786 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2022-08-28 21:39:12,953 INFO mapreduce.JobSubmitter: Cleaning up the staging area /tmp/hadoop-yarn/staging/spug/.staging/job_1661693889212_0001
java.io.IOException: org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid resource request! Cannot allocate containers as requested resource is greater than maximum allowed allocation. Requested resource type=[memory-mb], Requested resource=<memory:1536, vCores:1>, maximum allowed allocation=<memory:256, vCores:1>, please note that maximum allowed allocation is calculated by scheduler based on maximum resource of registered NodeManagers, which might be less than configured maximum allocation=<memory:256, vCores:1>
----------------

--------------
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException): Invalid resource request! Cannot allocate containers as requested resource is greater than maximum allowed allocation. Requested resource type=[memory-mb], Requested resource=<memory:1536, vCores:1>, maximum allowed allocation=<memory:256, vCores:1>, please note that maximum allowed allocation is calculated by scheduler based on maximum resource of registered NodeManagers, which might be less than configured maximum allocation=<memory:256, vCores:1>
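The key line is "maximum allowed allocation=<memory:256, vCores:1>": as the message explains, the scheduler's effective cap is the smaller of the configured maximum allocation and the resources of the registered NodeManagers. Before touching the configuration it can help to confirm what the cluster actually advertises; a minimal check, assuming the ResourceManager web UI on hadoop52 uses the default port 8088:

    # NodeManagers registered with the ResourceManager and their state
    yarn node -list

    # Cluster-wide memory as YARN sees it (totalMB reflects yarn.nodemanager.resource.memory-mb per node)
    curl -s http://hadoop52:8088/ws/v1/cluster/metrics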



2. The main resource constraint that keeps YARN from running jobs is memory, so adjust the limits on the physical memory YARN is allowed to use.

The YARN container memory and CPU settings are as follows:

  <!-- yarn-site.xml -->

    <!-- Minimum and maximum memory a YARN container may be allocated -->
    <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>128</value>
    </property>
    <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>256</value>
    </property>

    <!-- Physical memory each NodeManager may manage for containers -->
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>512</value>
    </property>

    <!-- Minimum and maximum vcores a YARN container may be allocated -->
    <property>
        <name>yarn.scheduler.minimum-allocation-vcores</name>
        <value>1</value>
    </property>
    <property>
        <name>yarn.scheduler.maximum-allocation-vcores</name>
        <value>1</value>
    </property>



Once these limits are configured, any request that exceeds them will be rejected.
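yarn-site.xml must be identical on every node, and YARN has to be restarted for the new limits to take effect. A rough sketch, assuming the other two hosts are hadoop52 and hadoop53 and Hadoop is unpacked under the same relative path used in the command above (hostnames and paths are illustrative):

    # Push the edited yarn-site.xml to the other nodes
    for h in hadoop52 hadoop53; do
        scp hadoop-3.2.4/etc/hadoop/yarn-site.xml ${h}:~/data/hadoop-3.2.4/etc/hadoop/
    done

    # Restart YARN (run on the host where the ResourceManager lives)
    hadoop-3.2.4/sbin/stop-yarn.sh
    hadoop-3.2.4/sbin/start-yarn.sh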


3. By default the ApplicationMaster requests 1.5 GB of memory (the 1536 MB seen in the error above). Lower the AM resource request so it fits within the physical memory limit allocated above.


 <!-- mapred-site.xml -->
    <property>
        <name>yarn.app.mapreduce.am.resource.mb</name>
        <value>256</value>
    </property>
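The AM container also has a JVM heap setting, yarn.app.mapreduce.am.command-opts (default -Xmx1024m). With the AM container shrunk to 256 MB, the heap should be shrunk with it; a sketch, where -Xmx200m is an assumed value of roughly 80% of the container size:

    <!-- mapred-site.xml: keep the AM JVM heap below its 256 MB container -->
    <property>
        <name>yarn.app.mapreduce.am.command-opts</name>
        <value>-Xmx200m</value>
    </property>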

4. By default the map and reduce tasks each request 1 GB of memory; lower these to suitable values as well.


<!-- mapred-site.xml -->
    <property>
        <name>mapreduce.map.memory.mb</name>
        <value>256</value>
    </property>
    <property>
        <name>mapreduce.reduce.memory.mb</name>
        <value>256</value>
    </property>
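The same applies to the task JVMs: their heaps should fit inside the 256 MB containers, which is controlled by mapreduce.map.java.opts and mapreduce.reduce.java.opts. A sketch with assumed -Xmx200m values (not from the original configuration):

    <!-- mapred-site.xml: task JVM heaps sized below the 256 MB containers -->
    <property>
        <name>mapreduce.map.java.opts</name>
        <value>-Xmx200m</value>
    </property>
    <property>
        <name>mapreduce.reduce.java.opts</name>
        <value>-Xmx200m</value>
    </property>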


5. After these changes, running the wordcount job again completes normally.
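For a full re-run, note that the earlier log also reported "Total input files to process : 0", so /input needs at least one file, and the output directory must not already exist. A sketch (the sample input files are just an example):

    # Give wordcount something to count, clear any old output, and re-run
    hdfs dfs -mkdir -p /input
    hdfs dfs -put hadoop-3.2.4/etc/hadoop/*.xml /input
    hdfs dfs -rm -r -f /wcoutput
    hadoop jar hadoop-3.2.4/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.4.jar wordcount /input /wcoutput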


6. If the memory settings are too low, however, OOM (OutOfMemoryError) problems appear instead.


ERROR [main] org.apache.hadoop.mapred.YarnChild: Error running child : java.lang.OutOfMemoryError: Java heap space
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.init(MapTask.java:1000)
at org.apache.hadoop.mapred.MapTask.createSortingCollector(MapTask.java:408)
at org.apache.hadoop.mapred.MapTask.access$100(MapTask.java:82)
at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:710)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:782)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
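This OutOfMemoryError is thrown in MapOutputBuffer.init, which allocates the map-side sort buffer of mapreduce.task.io.sort.mb megabytes (default 100), so a task heap too small to hold that buffer fails before processing any records. Besides giving the task a larger heap, the buffer itself can be shrunk; a sketch with an assumed 50 MB value for these small containers:

    <!-- mapred-site.xml: shrink the map-side sort buffer so it fits in a small heap -->
    <property>
        <name>mapreduce.task.io.sort.mb</name>
        <value>50</value>
    </property>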




Please credit the source when reposting: [Hadoop 3.2: fixing AM resource requests that exceed the YARN configuration when launching a job].

