Category Archives: Error

[Solved] You have 18 unapplied migration(s). Your project may not work properly until you apply the migrations for…

When you create a new Django project and run python manage.py runserver, the following prompt appears:

You have 18 unapplied migration(s). Your project may not work properly until you
apply the migrations for app(s): admin, auth, contenttypes, sessions.
Run ‘python manage.py migrate’ to apply them.

The last line of the message prompts you to run python manage.py migrate to apply them.

Solution: press Ctrl+C to stop the server, then run python manage.py migrate.
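
A minimal sketch of the fix, assuming the commands are run from the project root (the directory containing manage.py):

# Stop the development server with Ctrl+C, then apply the pending migrations
python manage.py migrate
# Restart the development server afterwards
python manage.py runserver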

jenkins+sonar-scanner Scan Error: Failed to find ‘typescript’ module. Please check, NODE_PATH contains location of global ‘typescript’ or install locally in your project

Setup: Jenkins dispatches the task and triggers the sonar-scanner scan through a shell script, which reports the error:

Failed to find ‘typescript’ module. Please check, NODE_PATH contains location of global ‘typescript’ or install locally in your project

After trying for a day, it was finally solved. The method is as follows:

Prerequisite: Node.js is installed.

1. Install typescript globally (when I installed it globally on its own, the Jenkins task still reported the error, so this step may or may not be necessary; I did it anyway, so it is recorded here to be safe).

Online installation: npm install -g typescript


Offline installation: on a machine with Internet access, get the package download address by running npm info typescript, copy the tarball URL from the output, and download it with a browser.

Copy the tarball to the sonar-scanner machine and run npm install -g typescript-4.5.4.tgz.
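
A sketch of the offline route just described; the tarball URL follows the npm registry's usual pattern and the version is simply the one used in this post, so yours may differ:

# On a machine with Internet access: print the tarball URL for the typescript package
npm info typescript dist.tarball
# Download the printed URL (4.5.4 is just the version used here)
wget https://registry.npmjs.org/typescript/-/typescript-4.5.4.tgz
# Copy the .tgz to the sonar-scanner machine and install it globally there
npm install -g typescript-4.5.4.tgz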

2. Install typescript locally (this step is essential and the key to success).

Change into the directory from which the sonar-scanner command is executed (note: it must be exactly the directory used to run that command).

Run npm install typescript (in offline mode, npm install typescript-4.5.4.tgz). The command may appear to hang after running; if everything else looks fine, it is safe to exit. Three items will appear in the directory: node_modules, package.json and package-lock.json.
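
For clarity, the commands for this step, with a placeholder working directory:

# Must be run from the directory where sonar-scanner is executed (placeholder path)
cd /path/to/scan/workdir
npm install typescript            # offline: npm install typescript-4.5.4.tgz
# Afterwards the directory should contain node_modules, package.json and package-lock.json
ls node_modules package.json package-lock.json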

Switch to the jenkins user with sudo and run the sonar-scanner command to check whether the error still occurs. If no error is reported, copy the three items (node_modules, package.json, package-lock.json) into a new folder under /root (here /root/workspace_bk, which the Jenkins shell script below copies from).
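
A sketch of that verification and backup; the sonar-scanner arguments are elided just as in the script below, and the backup path mirrors it:

# Run the scan as the jenkins user to confirm the typescript module is now found
sudo -u jenkins sonar-scanner ....
# Back up the generated files so the Jenkins job can copy them back in later
sudo mkdir -p /root/workspace_bk
sudo cp -r node_modules package.json package-lock.json /root/workspace_bk/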

The shell script in the Jenkins task reads as follows:

cd ${WORKSPACE}
sudo cp -r /root/workspace_bk/node_modules/ .
sudo cp -r /root/workspace_bk/package.json .
sudo cp -r /root/workspace_bk/package-lock.json .
sudo sonar-scanner ....

adb: failed to install Magisk-v23.0.apk: Failure [INSTALL_FAILED_ALREADY_EXISTS: Attempt to re-install com.topjohnwu.magisk without first uninstalling.]


C:\Users\16613\Desktop\shuaji8.1>adb install Magisk-v23.0.apk
adb: failed to install Magisk-v23.0.apk: Failure [INSTALL_FAILED_ALREADY_EXISTS: Attempt to re-install com.topjohnwu.magisk without first uninstalling.]

C:\Users\16613\Desktop\shuaji8.1>adb install -r C:\Users\16613\Desktop\shuaji8.1\Magisk-v23.0.apk
Success

When using adb install to install an APK, if a version is already installed on the device, installing it again fails with the prompt INSTALL_FAILED_ALREADY_EXISTS.

In this case, simply add -r to the command to overwrite the existing installation:

adb install -r <full path to your APK>
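
Alternatively, you can uninstall the existing app first and then install again; the package name below comes from the error message above:

adb uninstall com.topjohnwu.magisk
adb install Magisk-v23.0.apk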

IDEA Could Not Pull Code to Local, Error: Can’t Update: No tracked branch configured for branch master or the branch doesn’t exist.

When pulling your own code to the local machine, the pull fails no matter what you try, and this error message appears:

Can't Update
No tracked branch configured for branch master or the branch doesn't exist.
To make your branch track a remote branch call, for example,
git branch --set-upstream-to=origin/master master 

Reason: there are many explanations online. In short, the local master branch does not know which remote branch to pull from because no tracking branch is configured; since this is a personal project with only a single branch, the tracking relationship was never set up.

Solution:

Open a Git terminal in the project directory and enter:

git branch --set-upstream-to=origin/master
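
To confirm that the tracking branch is now configured, a quick check with standard git commands:

git branch -vv     # the current branch should now show [origin/master]
git pull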

References (the original posts):

Can’t update no tracked branch configured for branch master or the branch

Prompt “can’t update no tracked branch configured for branch” when solving git update (u012556114)

IAR contains unknown tools [How to Solve]

An IAR project is described by three files, with the suffixes .eww, .ewp and .ewd:

.eww  –> IAR EWARM workspace file, which describes the projects contained in the workspace;

.ewd  –> C-SPY debugger project settings file;

.ewp  –> IAR EWARM project file, which contains all the configuration information about the project;

So if the following error occurs:

“The project ‘…’ contains the unknown tools ‘Coder’”

we need to edit the .ewp file. Here the unknown tool is ‘Coder’, for example. After opening the file, we find the following lines:

<settings>
<name>Coder</name>
<archiveVersion>0</archiveVersion>
<data/>
</settings>

(Note: if you cannot find these lines in the .ewp file, look in the .ewd file instead.)

Delete these lines, search the file again for any remaining blocks containing ‘Coder’, delete them all, and save the changes.
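
A plain text search is a quick way to locate every remaining ‘Coder’ block; the file names here are placeholders for your own project files:

grep -n "Coder" MyProject.ewp MyProject.ewd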

Reopen the project and the error is no longer reported.

WARNING: POSSIBLE DNS SPOOFING DETECTED [How to Solve]

Everything was fine until, one day, git push suddenly reported an error:

WARNING: POSSIBLE DNS SPOOFING DETECTED!

This error is generally caused by a migration of the company’s Git repository, which changes its IP address. The known_hosts file under ~/.ssh stores the previously connected domain names and their corresponding host information; every remote operation verifies that the stored information still matches. After the repository is migrated, the domain name and IP recorded in known_hosts no longer match, which triggers the warning above. The simplest fix is to delete that domain’s entry from known_hosts, or just empty the file:

: > ~/.ssh/known_hosts

Do not forget the leading colon (:), then re-verify the connection:

ssh -T [email protected]

Done!
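
As for the other option mentioned above, removing only the entry for the migrated domain instead of emptying the whole file, ssh-keygen can do it; the hostname below is a placeholder:

ssh-keygen -R git.example.com     # removes only that host's entry from ~/.ssh/known_hosts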

[Solved] gpg: keyserver receive failed: Invalid argument

Solution

Go to this website: http://keyserver.ubuntu.com/

Find the email address mentioned in the error message and search for it on that website.

The search returns matching keys; click the first one.

Open it to view the contents of the key.

Copy the whole key, then create a text file on Linux with any name and paste the content into it:

touch gpgKey               # create an empty file
vim gpgKey                 # paste the copied key content and save
sudo apt-key add gpgKey    # add the key to apt's trusted keyring

After that, apt update works again.
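
A quick check, as a sketch, that the key was accepted and the repository verifies again:

sudo apt-key list | grep -i mysql   # the newly added key should be listed
sudo apt update                     # should no longer report the EXPKEYSIG error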

Error reporting process

Before installing MySQL, an error was reported when executing apt update:

Err:2 http://repo.mysql.com/apt/ubuntu bionic InRelease                                     
  The following signatures were invalid: EXPKEYSIG 8C718D3B5072E1F5 MySQL Release Engineering <[email protected]>

Online sources suggest this method:

apt-key adv  --keyserver hkp://keyserver.ubuntu.com --recv yourKey

It still did not work, and another error was reported:

root@xx:/var/log/mysql# apt-key adv  --keyserver hkp://keyserver.ubuntu.com --recv 8C718D3B5072E1F5
Executing: /tmp/apt-key-gpghome.WnlsI9s8pX/gpg.1.sh --keyserver hkp://keyserver.ubuntu.com --recv 8C718D3B5072E1F5
gpg: keyserver receive failed: Invalid argument

[Solved] Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try.

Error message: failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try (Nodes: current=[DatanodeInfoWithStorage[192.168.13.130:50010,DS-d105d41c-49cc-48b9-8beb-28058c2a03f7,DISK]], original=[DatanodeInfoWithStorage[192.168.13.130:50010,DS-d105d41c-49cc-48b9-8beb-28058c2a03f7,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via ‘dfs.client.block.write.replace-datanode-on-failure.policy’ in its configuration

This error occurred while appending a local file to a txt file on HDFS; the first append worked, and the error above appeared on the second append.
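
For reference, the append was done with the HDFS shell along these lines; the file paths are placeholders:

# Append a local file to an existing file on HDFS (placeholder paths)
hdfs dfs -appendToFile local.txt /user/test/data.txt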

From the error message, we can see the property dfs.client.block.write.replace-datanode-on-failure.policy.

So I checked hdfs-site.xml under Hadoop’s etc directory, found that the replication factor was not defined there, and added the following property; a restart is enough to fix the issue:

<property>
    <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
    <value>NEVER</value>
</property>

Analysis: by default the replication factor is 3. During a write to HDFS, when one of my DataNodes fails to accept the write, HDFS tries to keep the replica count at 3 and looks for another available DataNode to write to; but there are only 3 DataNodes in the pipeline, so no replacement can be found, which results in the error Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try.

If the following property is absent, the replication factor defaults to 3. According to the official Apache documentation, NEVER means “never add a new datanode”: after setting the policy to NEVER, no new DataNode will be added when one fails. Generally speaking, enabling datanode replacement is not recommended in a cluster with 3 or fewer DataNodes.

<property>
    <name>dfs.replication</name>
    <value>3</value>
</property>