Category Archives: Linux

[Solved] Nginx Log Error: open() “/opt/Nginx/nginx/nginx.pid” failed (2: No such file or directory)

After nginx is started successfully, the site cannot be accessed.

A check shows that nginx is not listening on any port, even though the configuration test passes:

[root@rzk nginx]# ./nginx -t

nginx: the configuration file /opt/Nginx/nginx/nginx.conf syntax is ok

nginx: configuration file /opt/Nginx/nginx/nginx.conf test is successful

Check the log file error.log to confirm the errors on these lines:

2021/11/22 17:26:51 [error] 188975#0: open() "/opt/Nginx/nginx/nginx.pid" failed (2: No such file or directory)

2021/11/22 17:27:07 [notice] 188978#0: signal process started

2021/11/22 17:27:07 [error] 188978#0: invalid PID number "" in "/opt/Nginx/nginx/nginx.pid"

Solution (I)

Check whether the file /opt/Nginx/nginx/nginx.pid exists. If it does not, create an empty nginx.pid file, save and exit.
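For example (a minimal sketch; adjust the path to your own install directory):

[root@rzk nginx]# ls /opt/Nginx/nginx/nginx.pid      # check whether the pid file exists
[root@rzk nginx]# touch /opt/Nginx/nginx/nginx.pid   # create an empty pid file if it is missing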

If the above does not solve the problem, try the following.

Solution (II)

Uncomment the pid directive.

Open the configuration file: vim /opt/Nginx/nginx/nginx.conf

Around line 7 you can see that the pid directive is commented out.

Uncomment it and modify the pid file path.

Change it to pid /opt/Nginx/nginx/nginx.pid; here I keep it under the nginx root path.
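The relevant part of nginx.conf then looks roughly like this (the default file ships with the directive commented out; your line number may differ):

#pid        logs/nginx.pid;
pid         /opt/Nginx/nginx/nginx.pid;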

After saving, execute the following commands. Because the pid directive now points to /opt/Nginx/nginx/nginx.pid, starting nginx with this configuration file will generate the corresponding nginx.pid.

-c specifies the configuration file to use.

[root@rzk nginx]# ./nginx -c /opt/Nginx/nginx/nginx.conf   # start nginx with the specified configuration file
[root@rzk nginx]# cat nginx.pid                             # view the pid
189585

Then you can verify that nginx has started.
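For a quick check (the listening port depends on your configuration, 80 by default):

[root@rzk nginx]# ps -ef | grep nginx     # the master and worker processes should be running
[root@rzk nginx]# ss -tlnp | grep nginx   # nginx should be listening on its configured port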

[Solved] Phoenix startup error: issuing: !connect jdbc:phoenix:hadoop162:2181 none…

/opt/module/phoenix-5.0.0/bin » ./sqlline.py hadoop162:2181 atguigu@hadoop162
Setting property: [incremental, false]
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix:hadoop162:2181 none none org.apache.phoenix.jdbc.PhoenixDriver
Connecting to jdbc:phoenix:hadoop162:2181
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/module/phoenix-5.0.0/phoenix-5.0.0-HBase-2.0-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
21/11/24 11:07:15 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
21/11/24 11:07:15 WARN client.ConnectionImplementation: Retrieve cluster id failed
java.util.concurrent.ExecutionException: org.apache.phoenix.shaded.org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /hbase/hbaseid
at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
at org.apache.hadoop.hbase.client.ConnectionImplementation.retrieveClusterId(ConnectionImplementation.java:527)
at org.apache.hadoop.hbase.client.ConnectionImplementation.<init>(ConnectionImplementation.java:287)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:219)
at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:114)
at org.apache.phoenix.query.HConnectionFactory$HConnectionFactoryImpl.createConnection(HConnectionFactory.java:47)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.openConnection(ConnectionQueryServicesImpl.java:430)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.access$400(ConnectionQueryServicesImpl.java:272)
at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2556)
at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2532)
at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2532)
at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157)
at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203)
at sqlline.Commands.connect(Commands.java:1064)
at sqlline.Commands.connect(Commands.java:996)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
at sqlline.SqlLine.dispatch(SqlLine.java:809)
at sqlline.SqlLine.initArgs(SqlLine.java:588)
at sqlline.SqlLine.begin(SqlLine.java:661)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:291)
Caused by: org.apache.phoenix.shaded.org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /hbase/hbaseid
at org.apache.phoenix.shaded.org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
at org.apache.phoenix.shaded.org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$ZKTask$1.exec(ReadOnlyZKClient.java:168)
at org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:323)
at java.lang.Thread.run(Thread.java:748)

--> It is caused by HBase not being up.
--> HBase is not up because the Hadoop cluster did not start properly and the DataNode processes are not running.
--> The clusterID of the DataNode does not match the clusterID of the NameNode, so the DataNode cannot join the cluster.

Formatting the NameNode repeatedly makes the DataNode's clusterID inconsistent with the new NameNode's clusterID, so when the cluster is started only the NameNode comes up and no DataNode registers.
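To confirm this, you can compare the clusterID recorded on the NameNode with the one on a DataNode (a sketch; the exact directories depend on your dfs.namenode.name.dir and dfs.datanode.data.dir settings):

grep clusterID $HADOOP_HOME/data/dfs/name/current/VERSION    # on the NameNode
grep clusterID $HADOOP_HOME/data/dfs/data/current/VERSION    # on a DataNode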

Workaround:
1. Stop the cluster; if necessary, reboot the node where the NameNode runs to make sure all Hadoop services are killed.
2. Clean up:
Delete the data and logs directories under the Hadoop root on all three nodes: rm -rf $HADOOP_HOME/data $HADOOP_HOME/logs
Delete everything under /tmp: sudo rm -rf /tmp/*
Or use the cleanup script below to delete the data, logs and tmp contents on all three nodes in one go.
3. Reformat the NameNode:
hdfs namenode -format
Finally, restart the cluster; it should now come up successfully.
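Once the cleanup and reformat are done, a minimal restart sketch (assuming the standard Hadoop start scripts are on the PATH) looks like this:

start-dfs.sh      # start HDFS (NameNode + DataNodes)
start-yarn.sh     # start YARN; run it on the ResourceManager node
jps               # check that the NameNode and DataNode processes are up on each node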


Cleanup script
#!/bin/bash
# Wipe Hadoop data, logs and /tmp on every node of the cluster
for host in hadoop102 hadoop103 hadoop104
do
    ssh $host rm -rf $HADOOP_HOME/data $HADOOP_HOME/logs
    ssh $host sudo rm -rf /tmp/*
done
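To use it, save the loop as a script (any name, e.g. clean_cluster.sh, a hypothetical name), make sure the node it runs on has passwordless SSH to hadoop102, hadoop103 and hadoop104, and run it before reformatting.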

[Solved] MAC m1 Command Error: complete:13: command not found: compdef

On a Mac M1, the command line reports the error complete:13: command not found: compdef.

Solution:

code ~/.zshrc

Add the following to the file:

autoload -Uz compinit

compinit

Save and reopen the terminal; the error no longer appears. The relevant part of ~/.zshrc then looks like this:

autoload -Uz compinit
compinit
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion" # This loads nvm bash_completion

fatal: refusing to merge unrelated histories [How to Solve]

Today, after pulling a new project from git locally, git pull origin develop reported fatal: refusing to merge unrelated histories.

Looking it up: the reason is that the two branches have different versions and unrelated commit histories.

Solution: add --allow-unrelated-histories

git pull origin develop --allow-unrelated-histories

This allows the unrelated histories to be merged, which did solve the problem.

How to Solve [rejected] master -> master (non-fast-forward) Error

[rejected] master -> master (non-fast-forward) error solution

1. Cause analysis

It roughly means that the local repository and the remote repository are out of sync, so the changes cannot be merged and the push is rejected.

2. Solution

The fix is therefore straightforward: synchronize the local repository with the remote one, then push again.

git pull origin main --allow-unrelated-histories //Pull the unrelated history from the remote repository
git push origin main //push to the remote main branch

[Solved] CentOS7 ClickHouse Install Error: Missing the Dependency libicudata.so.50

The following error occurred when installing ClickHouse under CentOS 7:

[root@localhost 7]# rpm -ivh clickhouse-common-static-18.14.13-1.el7.x86_64.rpm
error: Failed dependencies:
    libicudata.so.50()(64bit) is needed by clickhouse-common-static-18.14.13-1.el7.x86_64
    libicui18n.so.50()(64bit) is needed by clickhouse-common-static-18.14.13-1.el7.x86_64
    libicuuc.so.50()(64bit) is needed by clickhouse-common-static-18.14.13-1.el7.x86_64

Solution:

yum install libicu.x86_64

Note: this does not work for CentOS 8.

For details, see:

https://centos.pkgs.org/7/centos-x86_64/libicu-50.2-4.el7_7.x86_64.rpm.html
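After installing the dependency, re-running the original install should succeed, for example:

[root@localhost 7]# yum install -y libicu.x86_64
[root@localhost 7]# rpm -ivh clickhouse-common-static-18.14.13-1.el7.x86_64.rpm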

[Solved] Linux Start solr Server Error: Your open file limit is currently 1024

1. On Linux, the system's default limits cause various failures when installing and starting applications such as Elasticsearch and Solr. Here are some of the problems encountered.

2. The various limits
1. View all current limits:
ulimit -a

2. Modify the limit on the number of open files
Symptom:

***[WARN]*** Your open file limit is currently 1024. It should be set to 65000 to avoid operational disruption.

Solution:

a) Switch to the root account first (otherwise the change will not take effect)
b) As root, edit the /etc/security/limits.conf file and add the following at the end:
* hard nofile 65000
* soft nofile 65000

3. Modify the max processes limit
Symptom:

***[WARN]*** Your Max Processes Limit is currently 2048. It should be set to 65000 to avoid operational disruption.

Solution:

a) Switch to the root account first (otherwise the change will not take effect)
b) As root, edit the /etc/security/limits.conf file and add the following at the end:

* hard nproc 65000
* soft nproc 65000
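After editing limits.conf, log out and back in (the limits are applied at login) and verify the new values before starting Solr, for example:

ulimit -n    # open files; should now print 65000
ulimit -u    # max user processes; should now print 65000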

[Solved] VSCode cnpm Error: cnpm: Cannot Load the File…

First, the error screenshot:

Solution:

Right-click the VSCode icon and choose Run as administrator.

Run Get-ExecutionPolicy in the terminal; it prints Restricted, which means script execution is disabled.

Now run Set-ExecutionPolicy RemoteSigned.

Run Get-ExecutionPolicy again; it now prints RemoteSigned, which means script execution is allowed and cnpm can run.

[Solved] Ubuntu 18.04 Use APT to Install Go Environment Error

Under Ubuntu, the Go environment was installed with sudo apt install golang-go. No error is reported during installation, but some go commands are not recognized when used. The errors are as follows:

root@sh001:~# go env -w GOPROXY=https://goproxy.io,direct
flag provided but not defined: -w
usage: env [-json] [var ...]
Run 'go help env' for details.
root@sh001:~# go env GO111MODULE = on

Reason: the Go environment is not fully installed, and the version installed with apt may be too old to support go env -w.

Solution:

apt-get install software-properties-common

sudo add-apt-repository ppa:longsleep/golang-backports 
sudo apt-get update 
sudo apt-get install golang-go

Test the installation:

 go version
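With the newer Go from the PPA installed, the original proxy setup should now work, for example:

go env -w GOPROXY=https://goproxy.io,direct   # no longer fails with "flag provided but not defined: -w"
go env GOPROXY                                # verify the setting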