
[Solved] PyCharm Error: TypeError: __init__() missing 1 required positional argument: 'on_delete'

TypeError: __init__() missing 1 required positional argument: 'on_delete'

Since Django 2.0, you must pass the on_delete option when defining ForeignKey and OneToOneField relationships. This parameter tells Django what to do with dependent rows when the referenced row is deleted, keeping the two tables consistent; omitting it raises the error above:

Original: user = models.OneToOneField(User)
Now: user = models.OneToOneField(User, on_delete=models.CASCADE)
Before Django 2.0, on_delete defaulted to models.CASCADE.

on_delete has five commonly used values: CASCADE, PROTECT, SET_NULL, SET_DEFAULT, and SET().

CASCADE: cascade-delete the dependent rows.
PROTECT: raise an integrity error (ProtectedError) to block the deletion.
SET_NULL: set the foreign key to NULL; requires null=True on the field.
SET_DEFAULT: set the foreign key to its default value.
SET(): set the foreign key to the given value, which may be a callable.
CASCADE is the most common choice.
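Django translates each of these choices into the corresponding referential action on delete. A minimal stdlib sketch (plain sqlite3, not the Django ORM; the table and column names are illustrative) shows what CASCADE means at the database level:

```python
import sqlite3

# Stdlib sketch (NOT the Django ORM): what CASCADE means at the
# database level. Table/column names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
conn.execute("CREATE TABLE auth_user (id INTEGER PRIMARY KEY)")
conn.execute("""CREATE TABLE profile (
    id INTEGER PRIMARY KEY,
    user_id INTEGER UNIQUE REFERENCES auth_user(id) ON DELETE CASCADE
)""")
conn.execute("INSERT INTO auth_user (id) VALUES (1)")
conn.execute("INSERT INTO profile (id, user_id) VALUES (1, 1)")

# Deleting the user cascades to the dependent profile row.
conn.execute("DELETE FROM auth_user WHERE id = 1")
remaining = conn.execute("SELECT COUNT(*) FROM profile").fetchone()[0]
print(remaining)  # 0
```

With PROTECT the delete would instead be rejected, and with SET_NULL the user_id column would be set to NULL.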

After Django 2.0, the way routes are included with include() also changed:

Change: url(r'^student/', include('student.urls', namespace="student"))
to: url(r'^student/', include(('student.urls', "student"), namespace="student"))

GeoServer source code offline environment debugging startup error [How to Solve]

Question:
The GeoServer source code compiles successfully, but debugging fails to start in the IDEA environment, reporting the following error:

Failed startup of context o.e.j.w.WebAppContext@74a6a609{/geoserver, file:///D:/work/geoserver/src/web/app/src/main/webapp/, UNAVAILABLE}{src/main/webapp}
java.net.ConnectException: Connection timed out: connect
at java.net.DualStackPlainSocketImpl.connect0(Native Method)
at java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:79)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)

Solution:
The error is caused by a network request made at debug startup (to java.sun.com, 23.33.94.164:80) to fetch the web-app DTD. In the gs-web-app project, edit src/main/webapp/WEB-INF/web.xml and comment out the DOCTYPE declaration, as follows:

<?xml version="1.0" encoding="UTF-8"?>
<!--<!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD WebApplication 2.3//EN" "http://java.sun.com/dtd/web-app_2_3.dtd">-->
<web-app>
    <display-name>GeoServer</display-name>

    <context-param>
        <param-name>serviceStrategy</param-name>
    ......

[Solved] Failed to instantiate [applets.nature.mapper.LogInfoMapper]: Specified class is an interface

I. Origin of the problem
During testing on a Sunday afternoon, something needed to be modified temporarily; I had already built a package and deployed it to the test server. While testing, a problem came up: the code itself ran fine, but other people's parts were not finished yet, so I temporarily modified the code logic so testing could proceed smoothly.
Since the previous deployment I had kept modifying new code myself as requirements were settled. After the changes, the code was cleaned, compiled, and packaged in IDEA without any problem.
However, an error was reported after startup as shown in the title. The detailed error message is as follows:

Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
2021-10-31 15:40:12.035 [] [main] ERROR o.s.boot.SpringApplication[826] - Application run failed
org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'webLogAspect': Unsatisfied dependency expressed through field 'logInfoMapper'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'logInfoMapper' defined in file [D:\JavaWorkSpace\BigstuffParent\AppletsBackend\target\classes\applets\nature\mapper\LogInfoMapper.class]: Instantiation of bean failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [applets.nature.mapper.LogInfoMapper]: Specified class is an interface
at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredFieldElement.inject(AutowiredAnnotationBeanPostProcessor.java:643)
at org.springframework.beans.factory.annotation.InjectionMetadata.inject(InjectionMetadata.java:130)
at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessProperties(AutowiredAnnotationBeanPostProcessor.java:399)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1422)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:594)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:517)
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:323)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:321)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:879)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:878)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:550)
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:141)
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:747)
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:397)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:315)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1226)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1215)
at applets.AppletsApplication.main(AppletsApplication.java:25)
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'logInfoMapper' defined in file [D:\JavaWorkSpace\BigstuffParent\AppletsBackend\target\classes\applets\nature\mapper\LogInfoMapper.class]: Instantiation of bean failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [applets.nature.mapper.LogInfoMapper]: Specified class is an interface
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateBean(AbstractAutowireCapableBeanFactory.java:1320)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1214)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:557)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:517)
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:323)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:321)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202)
at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:276)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1287)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1207)
at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredFieldElement.inject(AutowiredAnnotationBeanPostProcessor.java:640)
... 19 common frames omitted
Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [applets.nature.mapper.LogInfoMapper]: Specified class is an interface
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:70)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateBean(AbstractAutowireCapableBeanFactory.java:1312)
... 30 common frames omitted
Disconnected from the target VM, address: '127.0.0.1:13616', transport: 'socket'

The key error message is: Failed to instantiate [applets.nature.mapper.LogInfoMapper]: Specified class is an interface
II. Problem analysis
I threw the problem into a search engine and a large number of results appeared immediately, for example the information provided by one blog post.
The blogger's problem was a renamed mapper.java interface file that caused the same startup failure. This gave me an idea: could I have the same problem?
With this question in mind, I immediately started looking for the cause.
III. Solution
After finding a direction, I started checking whether the interface applets.nature.mapper.LogInfoMapper had a duplicate name. Repeated searches of the project turned up only this one interface.
Was the blogger's answer incorrect? There was clearly only one such interface in my project, so I temporarily ruled this cause out and continued investigating.
Second check: since this interface kept failing at startup, I temporarily commented it out to see whether the application could start. After commenting it out and restarting, errors were still reported, only now it was no longer LogInfoMapper that could not be instantiated but other classes. The error message was:
Field userTaskMapper in applets.task.service.impl.UserTaskServiceImpl required a bean of type 'applets.task.mapper.UserTaskMapper' that could not be found.
The LogInfoMapper interface is the first one called, because it is used in the AOP interceptor for logging; after commenting it out, the errors moved on to the other mapper interfaces.
Third check: at this point I simply backed up the code, rolled all my modifications back to the previous state, restarted the project, and it started successfully. Then I re-applied my additions and modifications step by step.
I found that the project failed to start after I added a new mapper.java interface and its XML file, as shown below:

When I searched the project, I found two mapper interfaces with the same name.

I immediately removed the newly added mapper interface, restarted the project, and the problem was solved.
From this I also learned that sometimes you cannot just look at the problem itself; you need to analyze it in depth. In this bug, the interface named in the error message was not the duplicate; the newly added interface was. The duplicate prevented all the mapper interfaces from being instantiated properly, and beans that cannot be instantiated cannot be injected, causing the chain of subsequent errors.

[Solved] PyTorch: loss.backward(retain_graph=True) Backpropagation Error

This error occurs in the backpropagation step of RNN and LSTM models, at loss.backward().
It tends to appear after upgrading the PyTorch version.
Problem 1: error from loss.backward()

Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.
(torchenv) star@lab407-1:~/POIRec/STPRec/Flashback_code-master$ python train.py

Problem 2: error after using loss.backward(retain_graph=True)

one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [10, 10]], which is output 0 of AsStridedBackward0, is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).

Solution:
Some pitfalls about loss.backward() and its retain_graph argument.
First of all, loss.backward() itself is simple: it computes the gradients of the current tensor with respect to the leaf nodes of the graph.
The usual pattern is:

optimizer.zero_grad()   # clear the past gradients
loss.backward()         # backpropagate, computing the current gradients
optimizer.step()        # update the parameters from the gradients

or, accumulating a loss over a loop:

for i in range(num):
    loss += Loss(input, target)
optimizer.zero_grad()   # clear the past gradients
loss.backward()         # backpropagate, computing the current gradients
optimizer.step()        # update the parameters from the gradients

However, sometimes this error occurs: RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed.

This happens because PyTorch frees all of the graph's buffers every time backward() is called. If the model calls backward() more than once, the buffers needed by a later backward() have already been freed by an earlier one. Passing retain_graph=True keeps those buffers alive until the update is complete. Note, however, that if you write this:

optimizer.zero_grad()                # clear the past gradients
loss1.backward(retain_graph=True)    # backpropagate, keeping the graph
loss2.backward(retain_graph=True)    # backpropagate, keeping the graph again
optimizer.step()                     # update the parameters

then memory usage may grow without bound, and each iteration will be slower than the last (because the graphs are kept and never freed).
The solution, of course, is:

optimizer.zero_grad()                # clear the past gradients
loss1.backward(retain_graph=True)    # backpropagate, keeping the graph
loss2.backward()                     # last backward frees the graph
optimizer.step()                     # update the parameters

That is: do not pass retain_graph=True to the last backward(), so the memory held by the graph is released after each update and iterations do not keep slowing down.
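A minimal runnable sketch of this two-loss pattern (the tensor values are made up for illustration): only the last backward() omits retain_graph=True, and the gradients from both losses accumulate on the leaf parameter.

```python
import torch

# Two losses sharing one graph: only the LAST backward() may omit
# retain_graph=True. Values are illustrative.
w = torch.tensor([2.0], requires_grad=True)  # leaf parameter
y = w * 3                                    # intermediate node shared by both losses
loss1 = y.sum()          # d(loss1)/dw = 3
loss2 = (y * 2).sum()    # d(loss2)/dw = 6

loss1.backward(retain_graph=True)  # keep the graph buffers for the second pass
loss2.backward()                   # frees the graph; gradients accumulate
print(w.grad)  # tensor([9.])
```

Swapping the two calls (so the retained one comes last) would leak the graph, and dropping retain_graph=True from the first call reproduces the "backward through the graph a second time" error.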

You may ask: I don't have that many losses, so how can this error happen? The model you use may be the cause; the problem occurs in both LSTM and GRU. The hidden state also participates in backpropagation, which effectively leads to multiple backward() calls.
Why are there multiple backward() calls? Think about BPTT. In an n-to-1 setup, the gradient update needs all inputs and hidden states of the sequence, and the gradient is passed back from the last step, so there is only one backward(). In n-to-n and n-to-m setups, multiple losses each need a backward(), and their gradients propagate in two directions (from output to input, and backward along time), so the paths overlap. The solution is therefore clear: use detach() to cut off the overlapping backpropagation. (This is only my personal understanding; if there is any error, please point it out in the comments and we can discuss it.) There are three ways to cut the graph:

hidden.detach_()
hidden = hidden.detach()
hidden = Variable(hidden.data, requires_grad=True)
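Putting this together, a minimal truncated-BPTT sketch (the layer sizes and random data are illustrative, not from the original post): detaching the hidden state at the end of each iteration keeps every backward() inside the current chunk's graph, so the "backward through the graph a second time" error never appears.

```python
import torch
import torch.nn as nn

# Truncated BPTT sketch: detach() the hidden state each iteration so
# every backward() stays inside the current chunk's graph.
torch.manual_seed(0)
rnn = nn.RNN(input_size=4, hidden_size=8, batch_first=True)
opt = torch.optim.SGD(rnn.parameters(), lr=0.01)

hidden = torch.zeros(1, 1, 8)          # (num_layers, batch, hidden_size)
for chunk in torch.randn(5, 1, 3, 4):  # 5 chunks: (batch=1, seq=3, input=4)
    out, hidden = rnn(chunk, hidden)
    loss = out.pow(2).mean()
    opt.zero_grad()
    loss.backward()           # fresh graph every iteration thanks to detach()
    opt.step()
    hidden = hidden.detach()  # cut the graph; without this, iteration 2 fails
print(hidden.requires_grad)  # False
```

Removing the detach() line reproduces the error on the second iteration, because that backward() tries to reach back into the first chunk's already-freed graph.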

 

[Solved] Compile Error: virtual memory exhausted: Cannot allocate memory

1. Question

When the virtual machine was installed, no swap (or too little) was set up and memory is small, so compilation fails with virtual memory exhausted: Cannot allocate memory. You can add a swap file to extend the available memory.

2. Solution

Run free -m to check the current memory and swap (the swap file can be placed wherever you like, e.g. /var/swap):

[root@Byrd byrd]# free -m
             total       used       free     shared    buffers     cached
Mem:           512        108        403          0          0         28
-/+ buffers/cache:         79        432
Swap:            0          0          0
[root@Byrd ~]# mkdir /opt/images/
[root@Byrd ~]# rm -rf /opt/images/swap
[root@Byrd ~]# dd if=/dev/zero of=/opt/images/swap bs=1024 count=2048000
2048000+0 records in
2048000+0 records out
2097152000 bytes (2.1 GB) copied, 82.7509 s, 25.3 MB/s
[root@Byrd ~]# mkswap /opt/images/swap
mkswap: /opt/images/swap: warning: don't erase bootbits sectors
        on whole disk. Use -f to force.
Setting up swapspace version 1, size = 2047996 KiB
no label, UUID=59daeabb-d0c5-46b6-bf52-465e6b05eb0b
[root@hz mnt]# swapon /opt/images/swap
[root@hz mnt]# free -m
             total       used       free     shared    buffers     cached
Mem:           488        481          7          0          6        417
-/+ buffers/cache:         57        431
Swap:          999          0        999
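The dd parameters above can be sanity-checked with a few lines of Python: bs (block size in bytes) times count (number of blocks) matches the ~2.1 GB swap file that dd reported.

```python
# Sanity check of the dd parameters above: bs (block size in bytes)
# times count (number of blocks) gives the swap file size dd reported.
bs, count = 1024, 2048000
total_bytes = bs * count
print(total_bytes)                    # 2097152000
print(round(total_bytes / 10**9, 1))  # 2.1  (GB in decimal units, as dd prints)
```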

If physical memory itself is too small, adding memory also solves the problem.

You can turn off swap after use:

[root@hz mnt]# swapoff /opt/images/swap
[root@hz mnt]# rm -f /opt/images/swap

You can also keep the swap file instead of deleting it, for future use; just make sure the virtual machine's disk has enough space.

[Solved] Vite Build Error: Expected a JavaScript module script but the server responded with a MIME type of "text/html"

Problem Description:

After vite project build is deployed, the following error messages appear when accessing in the browser:

Expected a JavaScript module script but the server responded with a MIME type of "text/html"

Strict MIME type checking is enforced for module scripts per HTML spec.

Solution:

The above error occurs because the correct static resource path is not found after the project is built. The solution is as follows:

// In vite.config.js, set the base path:
export default {
    base: '/'
}
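To see why the browser complains: module scripts are MIME-checked, and when the server cannot find a built asset it typically falls back to index.html, which is served as text/html rather than a JavaScript type. A tiny stdlib illustration (the file names are hypothetical):

```python
import mimetypes

# A <script type="module"> must be served with a JavaScript MIME type;
# an SPA fallback that returns index.html serves text/html instead,
# which the browser rejects. File names here are hypothetical.
js_type = mimetypes.guess_type("assets/index.js")[0]
html_type = mimetypes.guess_type("index.html")[0]
print(js_type, html_type)
```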

[Solved] C++ Compile Error: undefined reference to `log@GLIBC_2.29′

problem

I copied an existing workspace to my local machine, modified the code, and compiled it; the OpenCV-related dependencies failed with:

libopencvxxx.so ... undefined reference to `log@GLIBC_2.29'

Solution:

The original machine compiles this workspace normally, which rules out a problem in the code itself, so I started checking the dependencies.
Careful comparison showed that my environment, Ubuntu 18.04, ships gcc 7.5, while the target environment, Ubuntu 20.04, ships gcc 9.3 and a newer glibc.
The libopencv*.so files had been compiled on the target machine, so they depend on the newer glibc (hence the GLIBC_2.29 symbol). Deleting OpenCV and recompiling it locally made the error disappear.

Git OpenSSH Upgrade Error: Unable to negotiate with 47.98.49.44 port 22: no matching host key type found. Their offer: ssh-rsa

An error is reported when pulling from the remote after upgrading OpenSSH:

$ git pull
Unable to negotiate with 47.98.49.44 port 22: no matching host key type found. Their offer: ssh-rsa
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

Solution: re-enable ssh-rsa in ~/.ssh/config:

$ cat ~/.ssh/config
Host *
    HostkeyAlgorithms +ssh-rsa
    PubkeyAcceptedKeyTypes +ssh-rsa

[Solved] K8s EFK Install Error: Cluster is not yet ready (request params: “wait_for_status=green&timeout=1s”)

problem

The elasticsearch readiness probe fails when running a single-replica cluster.

Warning  Unhealthy               91s (x14 over 3m42s)  kubelet          Readiness probe failed: Waiting for elasticsearch cluster to become ready (request params: "wait_for_status=green&timeout=1s" )
Cluster is not yet ready (request params: "wait_for_status=green&timeout=1s" )

Solution:

If you are running a single-replica cluster, add the following Helm value:

clusterHealthCheckParams: "wait_for_status=yellow&timeout=1s"

For a single-replica cluster, the status will never turn green, so check for yellow instead.

The following values should be valid:

replicas: 1
minimumMasterNodes: 1
clusterHealthCheckParams: 'wait_for_status=yellow&timeout=1s'