
nginx: [error] CreateFile() "D:\nginx-1.20.1/logs/nginx.pid" failed (2: The system cannot find the file specified)

After downloading and unzipping nginx, double-click nginx.exe, then visit http://127.0.0.1/ and the welcome page appears.

However, when the nginx service is shut down on the command line (nginx -s quit), an error is reported: nginx: [error] CreateFile() "D:\nginx-1.20.1/logs/nginx.pid" failed

According to the error message, nginx cannot find the nginx.pid file in the logs directory under the nginx installation directory. Checking that directory confirms the file does not exist.

Solution:
First, force-quit the nginx process in Task Manager.

Then restart nginx with the start nginx command on the command line.

Now the nginx.pid file appears in the logs directory under the nginx installation directory.

Then use the command nginx -s quit to close the nginx process normally.

At this point http://127.0.0.1/ can no longer be accessed, so nginx was shut down successfully.

Cause analysis:
To kill the previous nginx process when nginx is stopped or restarted, nginx must locate the original process through nginx.pid, which stores the ID of the running master process. Without that process ID, the system cannot find the original nginx process and cannot shut it down gracefully.

After testing, whether nginx is started by double-clicking nginx.exe or from the command line, it automatically writes the nginx.pid file under the logs directory and can then be closed normally. I don't know why it failed the first time.
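The whole recovery can be done from a single cmd window; a minimal sketch, assuming nginx.exe is the process name (the default) and that you run it from the nginx install directory:

```shell
:: Force-quit all nginx processes (command-line equivalent of the Task Manager step).
taskkill /F /IM nginx.exe
:: Restart nginx; this regenerates logs/nginx.pid.
start nginx
:: Graceful shutdown -- reads the master process ID from logs/nginx.pid.
nginx -s quit
```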

[Solved] Linux configure: error: no acceptable C compiler found in $PATH

Preface

When installing PostgreSQL on Linux, executing ./configure --prefix=/usr/local/pgsql reports an error like the following:

[root@instance-0qymp8uo postgresql-14.1]# ./configure --prefix=/usr/local/pgsql
checking build system type... x86_64-pc-linux-gnu
checking host system type... x86_64-pc-linux-gnu
checking which template to use... linux
checking whether NLS is wanted... no
checking for default port number... 5432
checking for block size... 8kB
checking for segment size... 1GB
checking for WAL block size... 8kB
checking for gcc... no
checking for cc... no
configure: error: in `/root/postgresql-14.1':
configure: error: no acceptable C compiler found in $PATH
See `config.log' for more details

The reason for the error is that no acceptable C compiler could be found; we need to install GCC.

./configure detects the characteristics of the target platform and generates the Makefile for the subsequent compilation step. --prefix= specifies the software installation directory. Some software lets you point at its configuration files with a --sys-config= parameter, and some software also accepts parameters such as --with-*, --enable-*, --without-*, and --disable-* to control compilation. You can run ./configure --help to view detailed help.
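As a sketch, the PostgreSQL build from the preface would look like this once a compiler is present; --without-readline is only an illustration of a --without-* switch, not something this error requires:

```shell
# Typical autoconf workflow for the PostgreSQL source tree from the preface.
./configure --help                                        # list every supported option
./configure --prefix=/usr/local/pgsql --without-readline  # generate the Makefile
make              # compile using the generated Makefile
make install      # copy the build into /usr/local/pgsql
```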

Solution:

Centos

 yum install gcc

Ubuntu

apt-get install gcc
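After installing, you can confirm a compiler is now on $PATH before re-running configure; this small check prints either the gcc version line or a warning:

```shell
# Confirm a C compiler is on $PATH; configure looks for gcc first on Linux.
if command -v gcc >/dev/null; then
    gcc --version | head -n 1    # first line of the version banner
else
    echo "gcc still missing from PATH"
fi
```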

How to Solve Backend Startup Error: Quartz Scheduler [RuoyiScheduler]

Exception in thread "Quartz Scheduler [RuoyiScheduler]" org.springframework.scheduling.SchedulingException: Could not start Quartz Scheduler after delay; nested exception is org.quartz.SchedulerException: The Scheduler cannot be restarted after shutdown() has been called. This error appears at back-end startup.

1. You may have manually modified the tables related to scheduled tasks, or dirty data has appeared in the scheduled-task tables. Try clearing the data from the tables with the qrtz prefix and try again.

2. The Tomcat port 80 is occupied; use another port.
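For step 1, a hedged sketch of clearing the Quartz tables. The table names below are the standard Quartz JDBC JobStore tables, and "your_database" is a placeholder; verify both against your own schema, and note the deletes are ordered to respect the foreign keys:

```shell
# Clear Quartz job data (standard QRTZ_ tables; adjust database name and credentials).
mysql -u root -p your_database <<'SQL'
DELETE FROM QRTZ_FIRED_TRIGGERS;
DELETE FROM QRTZ_SIMPLE_TRIGGERS;
DELETE FROM QRTZ_SIMPROP_TRIGGERS;
DELETE FROM QRTZ_CRON_TRIGGERS;
DELETE FROM QRTZ_BLOB_TRIGGERS;
DELETE FROM QRTZ_TRIGGERS;
DELETE FROM QRTZ_JOB_DETAILS;
DELETE FROM QRTZ_CALENDARS;
DELETE FROM QRTZ_PAUSED_TRIGGER_GRPS;
SQL
```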

[Solved] Mrjob running error: /bin/bash: /bin/java: No such file or directory

reference: https://stackoverflow.com/questions/41906993/hadoop-2-7-3-exception-from-container-launch-failed-due-to-am-container-exit-co

The AM container launch fails because of JAVA_HOME.

ip:8088/cluster/app/application_1637120527577_0001

The error message shown there reads:

/bin/bash: /bin/java: No such file or directory

Solution:

Type which java in the shell.

It returns /usr/local/java/bin/java, which is exactly the java executable we need.

Run /usr/local/java/bin/java -version to check the version; it prints normally, so the JDK itself is fine.

Create a soft link:

ln -s /usr/local/java/bin/java /bin/java

Run again successfully
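The effect of the ln -s fix can be demonstrated safely in a scratch directory; the paths below are stand-ins for /usr/local/java/bin/java and /bin/java:

```shell
# Safe demonstration of the symlink fix using a fake "java" in a temp directory.
tmp=$(mktemp -d)
mkdir -p "$tmp/usr/local/java/bin"
printf '#!/bin/sh\necho openjdk\n' > "$tmp/usr/local/java/bin/java"
chmod +x "$tmp/usr/local/java/bin/java"
# Same shape as the real fix: ln -s /usr/local/java/bin/java /bin/java
ln -s "$tmp/usr/local/java/bin/java" "$tmp/java"
"$tmp/java"    # resolves through the link and prints: openjdk
```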

[Solved] Syntax Error: TypeError: eslint.CLIEngine is not a constructor

Error Messages:
ERROR Failed to compile with 1 error
Syntax Error: TypeError: eslint.CLIEngine is not a constructor

You may use special comments to disable some warnings.
Use // eslint-disable-next-line to ignore the next line.
Use /* eslint-disable */ to ignore all warnings in a file.

Solution:

Method 1: Open package.json, delete the code in question, and run again (stop the project and restart it with npm run serve).

Method 2: Open vue.config.js and add the following code.
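The original post does not show the added code; a hedged sketch of the common fix is to disable lint-on-save so the broken eslint/CLIEngine integration is not invoked during compilation (whether this matches the post's exact snippet is an assumption):

```javascript
// vue.config.js -- assumed fix: skip eslint during `npm run serve` builds.
module.exports = {
  lintOnSave: false, // the dev server no longer calls eslint.CLIEngine
}
```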

Internal error (java.lang.UnsupportedClassVersionError): has been compiled by a more recent version of the Java Runtime (class file version 55.0)

Internal error (java.lang.UnsupportedClassVersionError): org/xblackcat/frozenidea/jps/ModelSerializerExtension has been compiled by a more recent version of the Java Runtime (class file version 55.0), this version of the Java Runtime only recognizes class file versions up to 52.0

 

Solution:

Check the IDEA plugins: the class in the error, org/xblackcat/frozenidea/jps/ModelSerializerExtension, points to the xblackcat plugin, so disable that plugin.

[Solved] OpenVINO Error When Using the Compute Stick: [ERROR] CAN NOT INIT MYRIAD DEVICE: NC_ERROR

Use Intel NCS2 on Ubuntu:
Create and edit a rules file: gedit 97-usbboot.rules
Add the following content:

SUBSYSTEM=="usb", ATTRS{idProduct}=="2150", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0666", ENV{ID_MM_DEVICE_IGNORE}="1"
SUBSYSTEM=="usb", ATTRS{idProduct}=="2485", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0666", ENV{ID_MM_DEVICE_IGNORE}="1"
SUBSYSTEM=="usb", ATTRS{idProduct}=="f63b", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0666", ENV{ID_MM_DEVICE_IGNORE}="1"

Then execute the following command:
sudo cp 97-usbboot.rules /etc/udev/rules.d/
sudo udevadm control --reload-rules
sudo udevadm trigger
sudo ldconfig
rm 97-usbboot.rules

[Solved] RocketMQ nameserver startup error: Error: Could not create the Java Virtual Machine.

1. Reasons for error reporting

/usr/local/rocketmq/bin/runserver.sh: 70: [[: not found
Unrecognized option: -Xlog:gc*:file=/dev/shm/rmq_srv_gc_%p_%t.log:time,tags:filecount=5,filesize=30M
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.

2. Solution:

Use vim to edit the runserver.sh file in /usr/local/rocketmq/bin/.

3. Error reason

The java version installed on the machine is 1.8, but the java version check around line 70 of /usr/local/rocketmq/bin/runserver.sh fails for an unknown reason, so the JDK 9+ variant of the command is executed. Therefore, you only need to comment out the JDK 9+ branch and its options.

Conversely, if the java version installed on the machine is 11, comment out the JDK 1.8 branch instead.
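A sketch of what the edited GC-options section might look like with JDK 8 kept and the JDK 9+ branch commented out; the exact lines and option values vary by RocketMQ version, so treat these as illustrative, not the file's real content:

```shell
# JDK 8 branch: keep these options when java -version reports 1.8.
JAVA_OPT="${JAVA_OPT} -server -Xms4g -Xmx4g"
JAVA_OPT="${JAVA_OPT} -XX:+UseConcMarkSweepGC"
# JDK 9+ branch: commented out, since -Xlog:gc* is unknown to a 1.8 JVM.
#JAVA_OPT="${JAVA_OPT} -XX:+UseG1GC"
#JAVA_OPT="${JAVA_OPT} -Xlog:gc*:file=/dev/shm/rmq_srv_gc_%p_%t.log:time,tags:filecount=5,filesize=30M"
```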

[Solved] wordcount Error: org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist:

Exception in thread "main" org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: hdfs://192.168.25.128:9000/export/yang/log.1
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:323)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:265)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:387)
at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:301)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
at hadoop1.WordCount.main(WordCount.java:53)

When I ran the wordcount example that ships with the Hadoop cluster, the error said the input path did not exist. After searching online for a long time without success, I finally found that the log.1 file I had created existed only locally and had never been uploaded to the HDFS cluster, which is why the job failed. The solution is to execute:
[root@master ~]# hadoop fs -put log.1 /    # (upload the log.1 file to the / directory)
After the operation, you can run the command again:
[root@master ~]# hadoop jar /export/servers/hadoop/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount /log.1 /result
The execution results are as follows:

File System Counters
FILE: Number of bytes read=312
FILE: Number of bytes written=237571
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=300
HDFS: Number of bytes written=206
HDFS: Number of read operations=6
HDFS: Number of large read operations=0
HDFS: Number of write operations=2
Job Counters
Launched map tasks=1
Launched reduce tasks=1
Data-local map tasks=1
Total time spent by all maps in occupied slots (ms)=7544
Total time spent by all reduces in occupied slots (ms)=5156
Total time spent by all map tasks (ms)=7544
Total time spent by all reduce tasks (ms)=5156
Total vcore-milliseconds taken by all map tasks=7544
Total vcore-milliseconds taken by all reduce tasks=5156
Total megabyte-milliseconds taken by all map tasks=7725056
Total megabyte-milliseconds taken by all reduce tasks=5279744
Map-Reduce Framework
Map input records=1
Map output records=35
Map output bytes=342
Map output materialized bytes=312
Input split bytes=97
Combine input records=35
Combine output records=25
Reduce input groups=25
Reduce shuffle bytes=312
Reduce input records=25
Reduce output records=25
Spilled Records=50
Shuffled Maps =1
Failed Shuffles=0
Merged Map outputs=1
GC time elapsed (ms)=230
CPU time spent (ms)=2110
Physical memory (bytes) snapshot=306843648
Virtual memory (bytes) snapshot=4163534848
Total committed heap usage (bytes)=142278656
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=203
File Output Format Counters
Bytes Written=206

Run Successfully!