The JVM reports an error: "Failed to write core dump. Core dumps have been disabled."
In high-concurrency big data scenarios, the Linux server reports the error fork: retry: Resource temporarily unavailable, and the JVM generates a fatal error log file such as hs_err_pid74299.log.
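To confirm which limit was hit, a quick look at the top of that log file usually shows the failure reason; the pid in the file name below is just an example:

# Inspect the header of the JVM fatal error log (pid is an example)
head -n 20 hs_err_pid74299.log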
By default, the core file size limit on the Linux server is set to 0, which is why core dumps are disabled. This parameter can be adjusted, but doing so only removes the warning and does not solve the underlying problem;
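For reference, a minimal sketch of checking and raising the core file size in a Bash session; this only silences the core dump warning and does nothing about the fork failures:

# Show the current core file size limit (0 means core dumps are disabled)
ulimit -c
# Enable core dumps for the current shell session only
ulimit -c unlimited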
The root cause is that the limits on the maximum number of open files (nofile) and the maximum number of processes (nproc) for the users running the applications are too small; nproc commonly defaults to 4096.
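Before changing anything, it is worth verifying what the running processes actually see; the pid and user name below are placeholders:

# Limits of the current shell
ulimit -n    # max open files
ulimit -u    # max user processes
# Limits of a running process (replace 74299 with the target pid)
cat /proc/74299/limits
# Count threads currently owned by a service user, e.g. hdfs
ps -L -u hdfs | wc -l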
The following configuration needs to be modified:
vi /etc/security/limits.conf
* soft nofile 327680
* hard nofile 327680
hdfs soft nproc 131072
hdfs hard nproc 131072
mapred soft nproc 131072
mapred hard nproc 131072
hbase soft nproc 131072
hbase hard nproc 131072
zookeeper soft nproc 131072
zookeeper hard nproc 131072
hive soft nproc 131072
hive hard nproc 131072
root soft nproc 131072
root hard nproc 131072
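The new limits only apply to sessions opened after the change, so the affected services must be restarted from a fresh login. Note that on some distributions a file under /etc/security/limits.d/ (for example 20-nproc.conf on CentOS 7) overrides the nproc setting and may need the same change. A quick verification sketch, using hdfs as an example user:

# Verify the limits as seen by a service user after re-login
su - hdfs -s /bin/bash -c 'ulimit -n; ulimit -u'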