Category Archives: Error

[Solved] dpkg-deb: error: paste subprocess was killed by signal (Broken pipe)

Symptom:

Unpacking nvidia-340 (340.107-0ubuntu0~gpu18.04.1) ...
dpkg: error processing archive /var/cache/apt/archives/nvidia-340_340.107-0ubuntu0~gpu18.04.1_amd64.deb (--unpack):
 trying to overwrite '/lib/udev/rules.d/71-nvidia.rules', which is also in package nvidia-kernel-common-396 396.45-0ubuntu0~gpu18.04.2
dpkg-deb: error: paste subprocess was killed by signal (Broken pipe)
Errors were encountered while processing:
 /var/cache/apt/archives/nvidia-340_340.107-0ubuntu0~gpu18.04.1_amd64.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)

Solution:

Run the following command; it resolves the "trying to overwrite" error by forcing dpkg to overwrite the conflicting file:

sudo dpkg -i --force-overwrite /var/cache/apt/archives/nvidia-340_340.107-0ubuntu0~gpu18.04.1_amd64.deb
# Note: the .deb path at the end is the archive file named in your own error message

Then run the following command to repair any broken packages:

sudo apt -f install

[Solved] Error setting null for parameter #6 with JdbcType OTHER…

Environment:

Oracle Database

Error content:

### Cause: org.apache.ibatis.type.TypeException: Error setting null for parameter #10 with JdbcType OTHER. Try setting a different JdbcType for this parameter or a different jdbcTypeForNull configuration property. Cause: java.sql.SQLException: Invalid column type

When MyBatis inserts a null value, you must specify a jdbcType, because MyBatis cannot infer the column type for a null parameter.

Solution:

In the insert statement, add the JDBC type to the parameter.

For example:

#{name,jdbcType=VARCHAR}
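As a sketch, a mapper insert with jdbcType set on every nullable parameter might look like this (the table, columns, and statement id are hypothetical, not from the post):

```xml
<!-- Sketch: hypothetical table and columns; the point is the
     jdbcType=... on every parameter that may be null -->
<insert id="insertUser" parameterType="com.example.User">
    INSERT INTO users (id, name, remark)
    VALUES (
        #{id,jdbcType=NUMERIC},
        #{name,jdbcType=VARCHAR},
        #{remark,jdbcType=VARCHAR}
    )
</insert>
```

Alternatively, the jdbcTypeForNull setting mentioned in the error can be changed globally in the MyBatis configuration, but per-parameter jdbcType is the more targeted fix for Oracle.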

Kafka Start Error: Corrupt index found & org.apache.kafka.common.protocol.types.SchemaException: Error reading field ‘version’: java.nio.BufferUnderflowException

Today, after starting Kafka, only one broker started successfully and the other two failed; their logs showed the errors in the title. After investigation, the cause was that the service had not been shut down cleanly beforehand.

Solution:

Per the log's hint, manually delete the two kinds of index files under each partition directory and restart the cluster; Kafka will rebuild the index files automatically:

find /opt/module/kafka/logs/ -name "*.timeindex" |xargs rm -f
find /opt/module/kafka/logs/ -name "*.index" |xargs rm -f

Just restart the service.

A Large Number of ESC Characters Appear in Logback's Log File

When using the RuoYi Spring Boot project, the console log had no color. To add color, I modified logback.xml and added %highlight, %cyan, %red, and so on; the console output then gained color.

The problem: the log files then contain a large number of ESC (ANSI escape) characters.

Solution:

The fix: use the color markers only when printing to the console, and remove them when printing to files; in other words, configure at least two separate patterns. The complete logback.xml is as follows:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <property name="log.path" value="/home/ruoyi/logs" />
    <property name="log.pattern" value="%d{yyyy-MM-dd HH:mm:ss.SSS} %highlight(%5p) %magenta(${PID}) [%16.16t] %cyan(%-40.40logger{39}): %msg%n" />
    <property name="log.file" value="%d{yyyy-MM-dd HH:mm:ss.SSS} %5p ${PID} [%16.16t] %-40.40logger{39}: %msg%n" />
    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>${log.pattern}</pattern>
        </encoder>
    </appender>
    
    <appender name="file_info" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${log.path}/sys-info.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${log.path}/sys-info.%d{yyyy-MM-dd}.log</fileNamePattern>
            <maxHistory>60</maxHistory>
        </rollingPolicy>
        <encoder>
            <pattern>${log.file}</pattern>
        </encoder>
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>INFO</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>
    </appender>
    
    <appender name="file_error" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${log.path}/sys-error.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${log.path}/sys-error.%d{yyyy-MM-dd}.log</fileNamePattern>
            <maxHistory>60</maxHistory>
        </rollingPolicy>
        <encoder>
            <pattern>${log.file}</pattern>
        </encoder>
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>ERROR</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>
    </appender>
    
    <appender name="sys-user" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${log.path}/sys-user.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${log.path}/sys-user.%d{yyyy-MM-dd}.log</fileNamePattern>
            <maxHistory>60</maxHistory>
        </rollingPolicy>
        <encoder>
            <pattern>${log.file}</pattern>
        </encoder>
    </appender>
    
    <logger name="com.ruoyi" level="info" />
    <logger name="org.springframework" level="warn" />

    <root level="info">
        <appender-ref ref="console" />
        <appender-ref ref="file_info" />
        <appender-ref ref="file_error" />
    </root>
    
    <logger name="sys-user" level="info">
        <appender-ref ref="sys-user"/>
    </logger>
</configuration>

After this change, the ESC characters no longer appear in the log files.

[Solved] ES Startup Error: maybe these locations are not writable or multiple nodes were started without increasing


1. maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes]

When using ELK, the server sometimes goes down abruptly, after which Elasticsearch fails to restart. One common error is: maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes] (was [1])

The specific error information is as follows:

[elk@master elasticsearch-7.2.1]$ ./bin/elasticsearch
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
[2021-08-13T11:02:10,661][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [master] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.IllegalStateException: failed to obtain node locks, tried [[/data/elasticsearch-7.2.1/data]] with lock id [0]; maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes] (was [1])?
    at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:163) ~[elasticsearch-7.2.1.jar:7.2.1]
    at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:150) ~[elasticsearch-7.2.1.jar:7.2.1]
    at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-7.2.1.jar:7.2.1]
    at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:124) ~[elasticsearch-cli-7.2.1.jar:7.2.1]
    at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-7.2.1.jar:7.2.1]
    at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:115) ~[elasticsearch-7.2.1.jar:7.2.1]
    at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) ~[elasticsearch-7.2.1.jar:7.2.1]
Caused by: java.lang.IllegalStateException: failed to obtain node locks, tried [[/data/elasticsearch-7.2.1/data]] with lock id [0]; maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes] (was [1])?
    at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:298) ~[elasticsearch-7.2.1.jar:7.2.1]
    at org.elasticsearch.node.Node.<init>(Node.java:271) ~[elasticsearch-7.2.1.jar:7.2.1]
    at org.elasticsearch.node.Node.<init>(Node.java:251) ~[elasticsearch-7.2.1.jar:7.2.1]
    at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:221) ~[elasticsearch-7.2.1.jar:7.2.1]

The cause of this error: Elasticsearch was previously started as the root user, and the error appears when you then try to run it as another user. To fix it, delete the nodes directory under Elasticsearch's data directory, then start it again as the non-root user.
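As a sketch, the cleanup is a single command (the data path below is the one from the log above; adjust it to match the path.data setting in your elasticsearch.yml):

```shell
# Hypothetical data path taken from the error log; change to your own path.data
ES_DATA=${ES_DATA:-/data/elasticsearch-7.2.1/data}

# Remove the stale node-lock directory left behind by the root-owned start
rm -rf "$ES_DATA/nodes"
```

Afterwards, start Elasticsearch again as the intended non-root user.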

2. Startup reports permission denied

The specific error information is as follows:

[elk@testmachine root]$ 2021-09-27 10:51:02,939 main ERROR RollingFileManager (/home/elasticsearch/logs/my-application_server.json) java.io.FileNotFoundException: /home/elasticsearch/logs/my-application_server.json (Permission denied) java.io.FileNotFoundException: /home/elasticsearch/logs/my-application_server.json (Permission denied)
        at java.base/java.io.FileOutputStream.open0(Native Method)
        at java.base/java.io.FileOutputStream.open(FileOutputStream.java:291)
        at java.base/java.io.FileOutputStream.<init>(FileOutputStream.java:234)
        at java.base/java.io.FileOutputStream.<init>(FileOutputStream.java:155)
        at org.apache.logging.log4j.core.appender.rolling.RollingFileManager$RollingFileManagerFactory.createManager(RollingFileManager.java:640)
        at org.apache.logging.log4j.core.appender.rolling.RollingFileManager$RollingFileManagerFactory.createManager(RollingFileManager.java:608)
        at org.apache.logging.log4j.core.appender.AbstractManager.getManager(AbstractManager.java:113)
        at org.apache.logging.log4j.core.appender.OutputStreamManager.getManager(OutputStreamManager.java:114)
        at org.apache.logging.log4j.core.appender.rolling.RollingFileManager.getFileManager(RollingFileManager.java:188)
        at org.apache.logging.log4j.core.appender.RollingFileAppender$Builder.build(RollingFileAppender.java:145)
        at org.apache.logging.log4j.core.appender.RollingFileAppender$Builder.build(RollingFileAppender.java:61)
        at org.apache.logging.log4j.core.config.plugins.util.PluginBuilder.build(PluginBuilder.java:123)
        at org.apache.logging.log4j.core.config.AbstractConfiguration.createPluginObject(AbstractConfiguration.java:959)
        at org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:899)
        at org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:891)
        at org.apache.logging.log4j.core.config.AbstractConfiguration.doConfigure(AbstractConfiguration.java:514)
        at org.apache.logging.log4j.core.config.AbstractConfiguration.initialize(AbstractConfiguration.java:238)
        at org.apache.logging.log4j.core.config.AbstractConfiguration.start(AbstractConfiguration.java:250)
        at org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:547)
        at org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:263)
        at org.elasticsearch.common.logging.LogConfigurator.configure(LogConfigurator.java:225)

As the error message shows, this is a permission problem on the logs/my-application_server.json file. Enter the ES logs directory and list the owners of all files with ls -l; you will find that some files belong to the root user, which is the cause of the error.

Solution: go back to the Elasticsearch installation directory and execute the following commands

chown -R elk:elk logs
chmod -R 777 logs

Enter the logs directory and check whether all files now belong to the elk:elk user.

If so, restart ES; it will start successfully.

[Solved] HBase Startup Error: master.HMaster: Failed to become active master

Situation:

ZooKeeper and HDFS were started first, then HBase. Although HBase starts successfully, it shuts down automatically after a few seconds and reports an error.

Complete error message:

master.HMaster: Failed to become active master
org.apache.hadoop.hbase.util.FileSystemVersionException: HBase file layout needs to be upgraded. You have version null and I want version 8. 
Consult http://hbase.apache.org/book.html for further information about upgrading HBase. Is your hbase.rootdir valid?If so, you may need to run 'hbase hbck -fixVersionFile'.

Solution:

# Log in as the hdfs user
su hdfs

# Delete the HBase data directory on HDFS
hadoop fs -rmr /apps/hbase/data  # older versions
hdfs dfs -rm -r /apps/hbase/data # newer versions

# Log in to the ZooKeeper CLI
zkCli.sh

# Check whether the /hbase-unsecure directory exists
ls /

# Delete the /hbase-unsecure directory
rmr /hbase-unsecure       # older versions
deleteall /hbase-unsecure # newer versions

Finally, restart HBase

Attached:

If the command in the error message is executed:

hbase hbck -fixVersionFile

Then a new error is reported: apps/hbase/data/.tmp/hbase-hbck.lock is held by another process, and you need to delete the lock file first.

Delete command:

hdfs dfs -rm /apps/hbase/data/.tmp/hbase-hbck.lock

[Solved] pom.xml File Error: web.xml is missing and is set to true

The pom.xml file reports the error "web.xml is missing and <failOnMissingWebXml> is set to true", even though web.xml has already been created under webapp -> WEB-INF.

The cause of the error: the Maven project's packaging is set to war, so the build expects a web.xml.

Solution:

1. Right-click the project -> Properties -> Deployment Assembly -> Add -> Folder, select the src/main/webapp directory, then click Apply and OK.

Note: if a src/main/webapp entry already exists there, delete it first, then perform the operation.

2. Select the project, then Project -> Clean; the pom.xml error will disappear.
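As an alternative not taken in this post: if the project is a Servlet 3.0+ application that intentionally has no web.xml, you can instead tell the Maven WAR plugin not to fail. A minimal pom.xml sketch:

```xml
<!-- Sketch: suppress the missing-web.xml check for annotation-configured
     Servlet 3.0+ apps; only use this if you intentionally have no web.xml -->
<properties>
    <failOnMissingWebXml>false</failOnMissingWebXml>
</properties>
```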

[Azure Function] VS Code Javascript Function Cannot Debug Locally: Value cannot be null. (Parameter ‘provider’)

Problem description

Following the official documentation, I created a JavaScript function through VS Code; when running it locally, the following error appeared:

Value cannot be null. (Parameter ‘provider’)

 

Problem analysis

Step 1: enable the function's verbose logs. In VS Code, enter the function's directory and start local debugging in the terminal:

Input: func start --verbose

Step 2: analyze the logs

The verbose log shows that the error occurs at the step of downloading the extension bundle:

[2022-01-06T07:47:24.404Z] Loading functions metadata
[2022-01-06T07:47:24.509Z] Reading functions metadata
[2022-01-06T07:47:24.518Z] 0 functions found
[2022-01-06T07:47:24.535Z] 0 functions loaded
[2022-01-06T07:47:24.540Z] Looking for extension bundle Microsoft.Azure.Functions.ExtensionBundle at C:\Users\Administrator\.azure-functions-core-tools\Functions\ExtensionBundles\Microsoft.Azure.Functions.ExtensionBundle
[2022-01-06T07:47:24.544Z] Fetching information on versions of extension bundle Microsoft.Azure.Functions.ExtensionBundle available on https://functionscdn.azureedge.net/public/ExtensionBundles/Microsoft.Azure.Functions.ExtensionBundle/index.json
[2022-01-06T07:47:25.432Z] Looking for extension bundle Microsoft.Azure.Functions.ExtensionBundle at C:\Users\Administrator\.azure-functions-core-tools\Functions\ExtensionBundles\Microsoft.Azure.Functions.ExtensionBundle
[2022-01-06T07:47:25.435Z] Fetching information on versions of extension bundle Microsoft.Azure.Functions.ExtensionBundle available on https://functionscdn.azureedge.net/public/ExtensionBundles/Microsoft.Azure.Functions.ExtensionBundle/index.json
[2022-01-06T07:47:27.250Z] Downloading extension bundle from https://functionscdn.azureedge.net/public/ExtensionBundles/Microsoft.Azure.Functions.ExtensionBundle/3.3.0/Microsoft.Azure.Functions.ExtensionBundle.3.3.0_any-any.zip to C:\Users\Administrator\AppData\Local\Temp\fc4f430a-f517-4cd9-9192-c6ce9368f679\Microsoft.Azure.Functions.ExtensionBundle.3.3.0.zip
Value cannot be null. (Parameter 'provider')
[2021-12-22T01:34:45.182Z] Stopping host...
[2021-12-22T01:34:45.187Z] Host shutdown completed.

 

Solution:

The analysis shows that the root cause is the failure to download the extension bundle, so you may need to try downloading the extension bundle file manually (several times, if necessary).

Download link: https://functionscdn.azureedge.net/public/ExtensionBundles/Microsoft.Azure.Functions.ExtensionBundle/3.3.0/Microsoft.Azure.Functions.ExtensionBundle.3.3.0_any-any.zip

After downloading, unzip the file into the Azure Functions Core Tools directory below, then run the function again; the "Value cannot be null. (Parameter 'provider')" problem is resolved.

 

Extension bundle file directory

C:\Users\Administrator\.azure-functions-core-tools\Functions\ExtensionBundles\Microsoft.Azure.Functions.ExtensionBundle\3.3.0


[Solved] xadmin Error: ImportError: cannot import name ‘DEFAULT_FORMATS’ from ‘import_export.admin’ (/home/lijun/app/hippo/hippo_api/venv/lib/python3.8/site-packages/import_export/admin.py)

xadmin Error:

File "/home/lijun/app/hippo/hippo_api/venv/lib/python3.8/site-packages/xadmin/plugins/importexport.py", line 48, in <module>
    from import_export.admin import DEFAULT_FORMATS, SKIP_ADMIN_LOG, TMP_STORAGE_CLASS
ImportError: cannot import name 'DEFAULT_FORMATS' from 'import_export.admin' (/home/lijun/app/hippo/hippo_api/venv/lib/python3.8/site-packages/import_export/admin.py)

Django Version: 2.2
Xadmin Version: 2.0.1
Python Version: 3.8.10
Solution:
Open importexport.py at the path shown in the error, and comment out the failing import line:

# from import_export.admin import DEFAULT_FORMATS, SKIP_ADMIN_LOG, TMP_STORAGE_CLASS

Then add the following imports instead:

from import_export.formats.base_formats import DEFAULT_FORMATS
from import_export.admin import ImportMixin, ImportExportMixinBase


[Solved] Other Master Node Fails to Rejoin the Cluster After Reset: CONTEXT DEADLINE EXCEEDED

[root@k8s-master01 ~]# kubectl get po,svc -n kube-system -o wide

[root@k8s-master01 ~]# kubectl exec -it -n kube-system etcd-k8s-master01 sh
# export ETCDCTL_API=3

# alias etcdctl='etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key'

# etcdctl member list
3a67291a5caf7ac3, started, k8s-master01, https://172.16.229.126:2380, https://172.16.229.126:2379
bae945fddfb47140, started, k8s-master02, https://172.16.229.127:2380, https://172.16.229.127:2379
8603a002380ffd4e, started, k8s-master03, https://172.16.229.128:2380, https://172.16.229.128:2379

# Delete the bad master03 node.
# etcdctl member remove 8603a002380ffd4e

# Then reset the failed node and join it to the cluster again.