Background:
The data is divided into 60 pieces according to business requirements, and 60 applications are started, one per piece. The submission script for each application is as follows:
#!/bin/sh
#LANG=zh_CN.utf8
#export LANG
export SPARK_KAFKA_VERSION=0.10
export LANG=zh_CN.UTF-8

# Build a comma-separated list of all dependency jars.
jarspath=''
for file in `ls /home/dx/pro2.0/app01/sparkjars/*.jar`
do
jarspath=${file},$jarspath
done
# Strip the trailing comma left by the loop.
jarspath=${jarspath%?}
echo $jarspath
./bin/spark-submit.sh \
--jars $jarspath \
--properties-file ../conf/spark-properties.conf \
--verbose \
--master yarn \
--deploy-mode cluster \
--name Streaming-$2-$3-$4-$5-$1-Agg-Parser \
--driver-memory 9g \
--driver-cores 1 \
--num-executors 1 \
--executor-cores 12 \
--executor-memory 22g \
--driver-java-options "-XX:+TraceClassPaths" \
--class com.dx.app01.streaming.Main \
/home/dx/pro2.0/app01/lib/app01-streaming-driver.jar $1 $2 $3 $4 $5
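The five positional arguments are passed through to the application name and to the main class. A hypothetical invocation for one of the 60 data pieces might look like the following (the script name and argument values are illustrative assumptions, not from the original post):

# Hypothetical example: submit piece 01; the remaining arguments are
# whatever com.dx.app01.streaming.Main expects (assumed here).
sh submit-app01.sh 01 app01 agg parser v1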
The cluster contains 43 running nodes, each configured with 24 VCores and 64 GB of memory.
The relevant YARN configuration is as follows:
yarn.scheduler.minimum-allocation-mb | Minimum memory a single container may request |
yarn.scheduler.maximum-allocation-mb | Maximum memory a single container may request |
yarn.nodemanager.resource.cpu-vcores | Number of virtual CPU cores the NodeManager offers to containers: 21 vcores |
yarn.nodemanager.resource.memory-mb | Physical memory per node available for containers; the RM will not allocate more than this on a node |
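A quick capacity check with the figures above and the submit script's settings (1 driver core plus 1 executor with 12 cores per app) shows the cluster has ample CPU for all 60 applications:

# 43 NodeManagers x 21 vcores = 903 schedulable vcores cluster-wide
echo $((43 * 21))   # 903
# 60 apps x (1 driver vcore + 12 executor vcores) = 780 vcores needed
echo $((60 * 13))   # 780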
After all 60 applications were submitted, many of them remained in the ACCEPTED state, although under normal circumstances they should all have been RUNNING.
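The stuck applications can be listed directly with the standard YARN CLI:

# List applications that are still waiting to be scheduled
yarn application -list -appStates ACCEPTED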
Checking the containers currently running on each node with the yarn node -list command gives the following:
Node-Id | Node-State | Node-Http-Address | Number-of-Running-Containers |
node-53:45454 | RUNNING | node-53:8042 | 1 |
node-62:45454 | RUNNING | node-62:8042 | 4 |
node-44:45454 | RUNNING | node-44:8042 | 3 |
node-37:45454 | RUNNING | node-37:8042 | 0 |
node-35:45454 | RUNNING | node-35:8042 | 1 |
node-07:45454 | RUNNING | node-07:8042 | 0 |
node-30:45454 | RUNNING | node-30:8042 | 0 |
node-56:45454 | RUNNING | node-56:8042 | 2 |
node-47:45454 | RUNNING | node-47:8042 | 0 |
node-42:45454 | RUNNING | node-42:8042 | 2 |
node-03:45454 | RUNNING | node-03:8042 | 6 |
node-51:45454 | RUNNING | node-51:8042 | 2 |
node-33:45454 | RUNNING | node-33:8042 | 1 |
node-04:45454 | RUNNING | node-04:8042 | 1 |
node-48:45454 | RUNNING | node-48:8042 | 6 |
node-39:45454 | RUNNING | node-39:8042 | 0 |
node-60:45454 | RUNNING | node-60:8042 | 1 |
node-54:45454 | RUNNING | node-54:8042 | 0 |
node-45:45454 | RUNNING | node-45:8042 | 0 |
node-63:45454 | RUNNING | node-63:8042 | 1 |
node-09:45454 | RUNNING | node-09:8042 | 1 |
node-01:45454 | RUNNING | node-01:8042 | 1 |
node-36:45454 | RUNNING | node-36:8042 | 3 |
node-06:45454 | RUNNING | node-06:8042 | 0 |
node-61:45454 | RUNNING | node-61:8042 | 1 |
node-31:45454 | RUNNING | node-31:8042 | 0 |
node-40:45454 | RUNNING | node-40:8042 | 0 |
node-57:45454 | RUNNING | node-57:8042 | 1 |
node-59:45454 | RUNNING | node-59:8042 | 1 |
node-43:45454 | RUNNING | node-43:8042 | 1 |
node-52:45454 | RUNNING | node-52:8042 | 1 |
node-34:45454 | RUNNING | node-34:8042 | 1 |
node-38:45454 | RUNNING | node-38:8042 | 0 |
node-50:45454 | RUNNING | node-50:8042 | 4 |
node-46:45454 | RUNNING | node-46:8042 | 1 |
node-08:45454 | RUNNING | node-08:8042 | 1 |
node-55:45454 | RUNNING | node-55:8042 | 1 |
node-32:45454 | RUNNING | node-32:8042 | 0 |
node-41:45454 | RUNNING | node-41:8042 | 2 |
node-05:45454 | RUNNING | node-05:8042 | 1 |
node-02:45454 | RUNNING | node-02:8042 | 1 |
node-58:45454 | RUNNING | node-58:8042 | 0 |
node-49:45454 | RUNNING | node-49:8042 | 0 |
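Totaling the last column shows how little of the cluster is in use; a quick sketch against the output format above (the :45454 filter matches this cluster's node rows):

# Sum the Number-of-Running-Containers column
yarn node -list 2>/dev/null | awk '/:45454/ {sum += $NF} END {print sum}'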
Clearly, a large portion of the cluster's nodes are idle and resources are plentiful. The cluster should have been able to run 43 such applications, yet only 24 were actually accepted, and YARN reported the following error:
[Tue Jul 30 16:33:29 +0000 2019] Application is added to the scheduler and is not yet activated.
Queue's AM resource limit exceeded. Details : AM Partition = <DEFAULT_PARTITION>;
AM Resource Request = <memory:9216MB(9G), vCores:1>;
Queue Resource Limit for AM = <memory:454656MB(444G), vCores:1>;
User AM Resource Limit of the queue = <memory:229376MB(224G), vCores:1>;
Queue AM Resource Usage = <memory:221184MB(216G), vCores:24>;
Solution:
The log line "Queue AM Resource Usage = <memory:221184MB (216G), vCores:24>" means that 24 apps are already running (in yarn-cluster mode, each app's driver runs inside the ApplicationMaster container, so each app counts as one AM): each driver uses 1 vcore, 24 vcores in total, and each driver uses 9 GB of memory, 9 GB × 24 = 216 GB.
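The arithmetic matches the log line exactly:

# 24 AM containers (drivers) x 9216 MB each
echo $((24 * 9216))   # 221184 MB = 216 GB, the reported Queue AM Resource Usage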
The log line "User AM Resource Limit of the queue = <memory:229376MB (224G), vCores:1>" means that a single user may use at most 224 GB of the queue's resources to run ApplicationMasters; this limit is derived from the parameter yarn.scheduler.capacity.maximum-am-resource-percent.
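This also explains why the remaining applications stay in ACCEPTED: admitting even one more 9 GB ApplicationMaster would exceed the per-user limit:

# Current AM usage plus one more 9 GB driver
echo $((221184 + 9216))   # 230400 MB > 229376 MB (224 GB) user AM limit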
yarn.scheduler.capacity.maximum-am-resource-percent / yarn.scheduler.capacity.<queue-path>.maximum-am-resource-percent | Upper limit on the proportion of cluster resources that may be used to run ApplicationMasters; it is typically used to cap the number of concurrently active applications. The value is a float, default 0.1 (10%). The first form sets the limit for all queues and acts as the default; the <queue-path> form overrides it for a single queue. |
1) Increase yarn.scheduler.capacity.maximum-am-resource-percent
<property>
<!-- Maximum resources to allocate to application masters
If this is too high application masters can crowd out actual work -->
<name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
<value>0.5</value>
</property>
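After editing capacity-scheduler.xml, the queue configuration can normally be reloaded without restarting the ResourceManager:

yarn rmadmin -refreshQueues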
2) Reduce the driver memory (--driver-memory), so that each ApplicationMaster consumes less of the AM resource budget.
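With the 224 GB per-user AM limit unchanged, lowering --driver-memory directly raises the number of ApplicationMasters that can run concurrently. For example (3g is an illustrative value, ignoring any scheduler rounding or overhead):

# floor(229376 / 9216) concurrent AMs at --driver-memory 9g
echo $((229376 / 9216))   # 24
# floor(229376 / 3072) concurrent AMs at --driver-memory 3g, enough for all 60 apps
echo $((229376 / 3072))   # 74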
For more details on YARN capacity scheduling, refer to the official documentation: Hadoop: Capacity Scheduler.