System access was abnormal. Logging in to the server and checking the nginx logs, we found a large number of "too many open files" errors in error.log.
In this case, there are generally two things to check:
1. The number of open file handles in Linux
ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 63455
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 4096
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
The default value of open files is 1024. You can increase it in two ways:
1. Execute the command:
ulimit -n 65535
This takes effect immediately, but does not survive a reboot.
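The temporary change can be verified directly from the shell; a minimal sketch (65535 is the example value used throughout this post):

```shell
# Show the current soft and hard limits on open files
ulimit -Sn
ulimit -Hn

# Raise the soft limit for this shell session only
# (fails if 65535 exceeds the hard limit and you are not root)
ulimit -n 65535

# Confirm the new soft limit
ulimit -Sn
```

Note that the change applies only to this shell and its children; a new shell starts again from the configured default.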
2. Modify the system configuration file:
vim /etc/security/limits.conf
Add the following lines at the end of the file:
* soft nofile 65535
* hard nofile 65535
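limits.conf is read by PAM at login, so the new values only apply to sessions started after the change; a quick verification sketch, to be run from a fresh login session:

```shell
# Run in a NEW login session after editing /etc/security/limits.conf
ulimit -Sn   # soft limit (65535 once the change takes effect)
ulimit -Hn   # hard limit
```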
2. Modify nginx
Set in nginx.conf:
worker_rlimit_nofile 65535;
Then reload nginx: nginx -s reload
lsof:
Short for "list open files": it lists the files opened by processes, including regular files and established network connections (TCP/UDP sockets, etc.).
The commonly used parameters are:
1. lsof abc.txt — show the process that has the file abc.txt open
2. lsof -c abc — show the files opened by processes whose command name begins with "abc"
3. lsof -p 1234 — list the files opened by the process with PID 1234
4. lsof -g gname/gid — show the open files of the given process group
5. lsof -u uname/uid — show the open files of the given user
6. lsof +d /usr/local/ — show the files opened by processes under that directory
7. lsof +D /usr/local/ — same as above, but also searches subdirectories (can take a long time)
8. lsof -d 4 — show processes using file descriptor 4
9. lsof -i — show network connections
10. lsof -i[46] [protocol][@hostname|hostaddr][:service|port] — filter by IP version, protocol, host, and port
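To tie this back to the original problem, it helps to count how many descriptors a process actually holds. A sketch using the current shell's PID as a stand-in (for nginx, substitute a worker PID, e.g. from pgrep -f 'nginx: worker'):

```shell
# Count open descriptors for a given PID via /proc
pid=$$   # stand-in PID; replace with an nginx worker PID
echo "PID $pid holds $(ls /proc/$pid/fd | wc -l) open file descriptors"

# The lsof equivalent (skipped here if lsof is not installed)
command -v lsof >/dev/null && lsof -p "$pid" | wc -l || true
```

Comparing this count against the process's limit (cat /proc/$pid/limits) tells you how close it is to hitting "too many open files".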
Nginx tuning:
The nginx configuration file:
1. worker_processes 8; usually set to the number of CPU cores.
2. worker_cpu_affinity binds each worker process to specific CPUs.
3. worker_rlimit_nofile 65535; the maximum number of file descriptors an nginx worker process may open. In theory this should be the system's open-file limit (ulimit -n) divided by the number of workers, but nginx does not distribute requests that evenly, so it is best to keep it equal to the value of ulimit -n.
4. use epoll; epoll is the Linux kernel's implementation of I/O multiplexing.
5. worker_connections 65536; the maximum number of connections allowed per worker process.
6. keepalive_timeout 30; the keep-alive timeout in seconds.
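Putting the directives above together, a minimal nginx.conf sketch (the values are the illustrative ones from this post, not a universal recommendation):

```nginx
# nginx.conf (worker_* directives live in the main context)
worker_processes      8;        # usually the number of CPU cores
worker_cpu_affinity   auto;     # pin workers to CPUs (nginx >= 1.9.10)
worker_rlimit_nofile  65535;    # max file descriptors per worker

events {
    use epoll;                   # Linux I/O multiplexing
    worker_connections 65536;    # max connections per worker
}

http {
    keepalive_timeout 30;        # seconds
}
```

After editing, check the configuration with nginx -t and apply it with nginx -s reload.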