Category Archives: Error

[Solved] Redis Start Error: FATAL CONFIG FILE ERROR: Bad directive or wrong number of arguments

Redis version: 6.0 and above

Error context: the failing Redis instance uses port 6380 as a slave node.

Error reason:

In some configuration files (such as the .conf files seen here), a comment is not allowed on the same line as a directive. Everything after the directive name is parsed as arguments to it, so an inline comment gets passed in as extra arguments; since the directive does not accept those arguments (or they are in the wrong format), an error is reported.
as shown in the figure:

Change the comment to a separate line to solve the problem.
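For example, a hypothetical redis.conf fragment (the replicaof directive is used here only for illustration):

```conf
# Wrong: the inline comment is parsed as extra arguments to replicaof,
# which takes exactly two arguments, so startup fails
replicaof 127.0.0.1 6379 # replicate from the master on 6379

# Right: put the comment on its own line
# replicate from the master on 6379
replicaof 127.0.0.1 6379
```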

Incidentally, a tip: if Redis fails to start, it is also worth checking the log file configured in redis.conf.

[Solved] OpenCV 4 (C++) Error: “error: ‘CV_FOURCC’ was not declared in this scope”

In OpenCV 3 we use the CV_FOURCC macro to specify the codec, for example:

writer.open("output.avi", CV_FOURCC('M', 'J', 'P', 'G'), fps, size);

In this line of code, we set the output file of the cv::VideoWriter writer to output.avi, select the MJPG codec (MJPG is short for Motion JPEG), and pass in the frame rate and the frame size.

 

Unfortunately, in OpenCV 4 (tested with 4.5.4-dev) the CV_FOURCC macro has been replaced by the static function cv::VideoWriter::fourcc. If we keep using the macro, compilation fails with:

error: 'CV_FOURCC' was not declared in this scope

Here is the equivalent usage in OpenCV 4:

writer.open("output.avi", cv::VideoWriter::fourcc('M', 'J', 'P', 'G'), fps, size);

To better demonstrate the standard usage of fourcc, and how video is written in OpenCV 4, here is a complete example of storing video:

#include <opencv2/core.hpp>
#include <opencv2/videoio.hpp>
#include <opencv2/highgui.hpp>
#include <iostream>
#include <stdio.h>
using namespace cv;
using namespace std;
int main(int, char**)
{
    Mat src;
    // use default camera as video source
    VideoCapture cap(0);
    // check if we succeeded
    if (!cap.isOpened()) {
        cerr << "ERROR! Unable to open camera\n";
        return -1;
    }
    // get one frame from camera to know frame size and type
    cap >> src;
    // check if we succeeded
    if (src.empty()) {
        cerr << "ERROR! blank frame grabbed\n";
        return -1;
    }
    bool isColor = (src.type() == CV_8UC3);
    //--- INITIALIZE VIDEOWRITER
    VideoWriter writer;
    int codec = VideoWriter::fourcc('M', 'J', 'P', 'G');  // select desired codec (must be available at runtime)
    double fps = 25.0;                          // framerate of the created video stream
    string filename = "./live.avi";             // name of the output video file
    writer.open(filename, codec, fps, src.size(), isColor);
    // check if we succeeded
    if (!writer.isOpened()) {
        cerr << "Could not open the output video file for write\n";
        return -1;
    }
    //--- GRAB AND WRITE LOOP
    cout << "Writing videofile: " << filename << endl
         << "Press any key to terminate" << endl;
    for (;;)
    {
        // check if we succeeded
        if (!cap.read(src)) {
            cerr << "ERROR! blank frame grabbed\n";
            break;
        }
        // encode the frame into the videofile stream
        writer.write(src);
        // show live and wait for a key with timeout long enough to show images
        imshow("Live", src);
        if (waitKey(5) >= 0)
            break;
    }
    // the videofile will be closed and released automatically in VideoWriter destructor
    return 0;
}

[Solved] PostgreSQL configure: error: readline library not found

preface

An error occurs when installing PostgreSQL, as shown below

configure: error: readline library not found
If you have readline already installed, see config.log for details on the
failure.  It is possible the compiler isn't looking in the proper directory.
Use --without-readline to disable readline support.

Solution:

Check whether the readline package is installed on the system:

rpm -qa | grep readline

Install the readline-devel package:

yum install -y readline-devel

Then run configure again; it should now succeed.

The official documentation describes the option as follows:

--without-readline
Prevents use of the Readline library (and libedit as well). This option disables command-line editing and history in psql, so it is not recommended.

Note: you can pass --without-readline to configure to avoid this error, but the PostgreSQL documentation recommends against it.

[Solved] MSBuild Error: MSB3428: Could not load the Visual C++ component "VCBuild.exe"

Problem:

MSBUILD : error MSB3428: The Visual C++ component "VCBuild.exe" could not be loaded. To solve this problem, 1) install .NET Framework 2.0 SDK; 2) install Microsoft Visual Stu

Solution:

Run the following command as Administrator

npm install --global --production windows-build-tools

If the install fails with the message "Please restart this script from an administrative PowerShell", re-run the command in a PowerShell window opened as Administrator.

[Solved] PyInstaller ImportError: failed to load dynlib/dll (the packaged exe runs normally, but a shortcut created for the exe fails to start)

# Solution: the created shortcut must set its start-in (working) directory,
# obtained here by stripping the file name from the target path:
# StartIn=str(target).replace(s, "")
import os
import sys

import winshell

def create_shortcut_to_desktop():
    target = sys.argv[0]
    title = 'XX shortcut'
    s = os.path.basename(target)
    fname = os.path.splitext(s)[0]
    winshell.CreateShortcut(
        Path=os.path.join(winshell.desktop(), fname + '.lnk'),
        StartIn=str(target).replace(s, ""),
        Target=target,
        Icon=(target, 0),
        Description=title)
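The str(target).replace(s, "") trick derives the start-in directory by deleting the file name from the full path; an equivalent and more robust way is to take the directory name of the path. A small sketch with a hypothetical exe path (ntpath is used only so the Windows-style path also resolves when run elsewhere; on Windows, os.path behaves the same):

```python
import ntpath  # Windows path rules; os.path is equivalent when running on Windows

# Hypothetical path of the packaged exe (what sys.argv[0] would hold)
target = r"C:\apps\myapp\myapp.exe"

# Directory the shortcut should start in, i.e. the folder containing the exe
start_in = ntpath.dirname(target)
print(start_in)  # C:\apps\myapp
```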

def delete_shortcut_from_startup():
    target = sys.argv[0]
    s = os.path.basename(target)
    fname = os.path.splitext(s)[0]
    # delfile = micpath.join(winshell.startup(), fname + '.lnk')
    delfile = os.path.join(winshell.desktop(), fname + '.lnk')
    if os.path.exists(delfile):
        winshell.delete_file(delfile)

[Solved] puppeteer Install Error: Failed to download Chromium

There are many solutions online. Most suggest changing the Chromium download mirror, using cnpm, or skipping the download and fetching Chromium manually from the official site.

In my own testing, none of these worked reliably.

The following solution comes from an issue in the puppeteer GitHub repository.

Solution:

npm install puppeteer --unsafe-perm=true --allow-root

failure: repodata/filelists.sqlite.bz2 from teamviewer: [Errno 256] No more mirrors to try

When installing CDH, running yum install cloudera-manager-agent-6.2.1-1426065.el7.x86_64.rpm -y reports an error:

[root@master x86_64]# yum install cloudera-manager-agent-6.2.1-1426065.el7.x86_64.rpm -y
Loaded plugins: fastestmirror, langpacks
Checking cloudera-manager-agent-6.2.1-1426065.el7.x86_64.rpm: cloudera-manager-agent-6.2.1-1426065.el7.x86_64
cloudera-manager-agent-6.2.1-1426065.el7.x86_64.rpm will be installed
Resolving dependencies
--> Checking the transaction
---> Package cloudera-manager-agent.x86_64.0.6.2.1-1426065.el7 will be installed
--> Processing the dependency /lib/lsb/init-functions, which is required by the package cloudera-manager-agent-6.2.1-1426065.el7.x86_64
Loading mirror speeds from cached hostfile
 * base: mirrors.bupt.edu.cn
 * extras: mirrors.bupt.edu.cn
 * updates: mirrors.bupt.edu.cn
base/7/x86_64/filelists_db                                                                                    | 7.2 MB  00:00:04     
extras/7/x86_64/filelists_db                                                                                  | 259 kB  00:00:00     
mysql-connectors-community/x86_64/filelists_db                                                                | 120 kB  00:00:00     
mysql-tools-community/x86_64/filelists_db                                                                     | 414 kB  00:00:00     
mysql57-community/x86_64/filelists_db                                                                         | 1.6 MB  00:00:00     
teamviewer/x86_64/filelists_db                                                                                | 106 kB  00:00:00     
https://linux.teamviewer.com/yum/stable/main/binary-x86_64/repodata/filelists.sqlite.bz2: [Errno -1] Metadata file does not match checksum
Trying another mirror.


 One of the configured repositories failed (TeamViewer - x86_64),
 and yum doesn't have enough cached data to continue. At this point the only
 safe thing yum can do is fail. There are a few ways to work "fix" this:

     1. Contact the upstream for the repository and get them to fix the problem.

     2. Reconfigure the baseurl/etc. for the repository, to point to a working
        upstream. This is most often useful if you are using a newer
        distribution release than is supported by the repository (and the
        packages for the previous distribution release still work).

     3. Run the command with the repository temporarily disabled
            yum --disablerepo=teamviewer ...

     4. Disable the repository permanently, so yum won't use it by default. Yum
        will then just ignore the repository until you permanently enable it
        again or use --enablerepo for temporary usage:

            yum-config-manager --disable teamviewer
        or
            subscription-manager repos --disable=teamviewer

     5. Configure the failing repository to be skipped, if it is unavailable.
        Note that yum will try to contact the repo. when it runs most commands,
        so will have to try and fail each time (and thus. yum will be be much
        slower). If it is a very temporary problem though, this is often a nice
        compromise:

            yum-config-manager --save --setopt=teamviewer.skip_if_unavailable=true

failure: repodata/filelists.sqlite.bz2 from teamviewer: [Errno 256] No more mirrors to try.
https://linux.teamviewer.com/yum/stable/main/binary-x86_64/repodata/filelists.sqlite.bz2: [Errno -1] Metadata file does not match checksum

Solution:

yum clean all

yum makecache

[Solved] Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.

1. An error is reported when running a serialization test against the local cluster:

[INFO] 
[INFO] --- exec-maven-plugin:3.0.0:exec (default-cli) @ HadoopWritable ---
log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
    at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:116)
    at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:109)
    at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:102)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1540)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1536)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
    at org.apache.hadoop.mapreduce.Job.connect(Job.java:1536)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1564)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1588)
    at com.alex.FlowDriver.main(FlowDriver.java:45)
[ERROR] Command execution failed.
Command execution failed.


org.apache.commons.exec.ExecuteException: Process exited with an error: 1 (Exit value: 1)
    at org.apache.commons.exec.DefaultExecutor.executeInternal (DefaultExecutor.java:404)
    at org.apache.commons.exec.DefaultExecutor.execute (DefaultExecutor.java:166)
    at org.codehaus.mojo.exec.ExecMojo.executeCommandLine (ExecMojo.java:982)
    at org.codehaus.mojo.exec.ExecMojo.executeCommandLine (ExecMojo.java:929)
    at org.codehaus.mojo.exec.ExecMojo.execute (ExecMojo.java:457)
    at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo (DefaultBuildPluginManager.java:137)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:208)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:154)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:146)
    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117)
    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:81)
    at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:56)
    at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:128)
    at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
    at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
    at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
    at org.apache.maven.cli.MavenCli.execute (MavenCli.java:954)
    at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:288)
    at org.apache.maven.cli.MavenCli.main (MavenCli.java:192)
    at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke (Method.java:498)
    at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:289)
    at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:229)
    at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:415)
    at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:356)
    at org.codehaus.classworlds.Launcher.main (Launcher.java:47)

2. Cause: a required jar is missing.

Solution: add the hadoop-mapreduce-client-common package.

Add the dependency with Maven:

        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-mapreduce-client-common</artifactId>
            <version>3.1.3</version>
            <scope>compile</scope>
        </dependency>

Failed to parse multipart servlet request; nested exception is java.io.IOException: org.apache.tomcat.util.http.fileupload.FileUploadException: Stream closed

Solution: raise the multipart upload size limits (and enable the hidden-method filter, if your forms rely on it) in application.properties:

spring.mvc.hiddenmethod.filter.enabled=true
spring.servlet.multipart.max-file-size=500MB
spring.servlet.multipart.max-request-size=2048MB