Category Archives: Error

[Solved] TypeError: Cannot read properties of undefined (reading ‘templateName’)

I wrapped a component that worked fine at first; after adding data, it started to throw:

TypeError: Cannot read properties of undefined (reading 'templateName')

The error only appeared when adding a new record, never when modifying an existing one. At first I assumed it was caused by el-tab-pane and spent a long time debugging the dynamically rendered component, convinced the problem was in the dynamic rendering. It turned out not to be.

The real cause: the child component reads data passed down from the parent, but when adding a new record the parent never passes that data.

So the prop is undefined and reading templateName on it throws. The fix: when adding, have the parent pass an explicit empty value, or have the child check whether it is in add or edit mode before assigning from the prop.
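As an illustration of the second option, here is a minimal sketch of the child component in plain Vue options-API JavaScript; the prop name template and the field templateName are assumptions based on the error message, not the original code:

export default {
  props: {
    // Give the prop a default so it is never undefined when the parent
    // adds a new record and passes nothing down.
    template: {
      type: Object,
      default: () => ({ templateName: '' })
    }
  },
  computed: {
    templateName() {
      // Guard the read as well, in case the parent explicitly passes null.
      return this.template ? this.template.templateName : ''
    }
  }
}

On the parent side, the alternative is to pass an empty object explicitly when adding, e.g. :template="isAdd ? {} : currentRow" (names again hypothetical).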

[Solved] HTTP return value received with TEncoding UTF8: encoding error “No mapping for the Unicode…”

Today I saw this error message, and the solution that followed, posted in the group by wumoonlight. I may well run into it myself, so I am recording it here in case I cannot deal with it later. Thanks, Guang.

The problem is that part of the data in the return value is UTF-8 encoded and part is not. Receiving the whole return value with TEncoding in UTF8 format therefore raises the error “No mapping for the Unicode…”.

The solution: receive it with the TUTF8EncodeEx UTF8 format instead of the TEncoding UTF8 format.

[Solved] Error processing tar file(exit status 1): open /src/wwwroot/emsadmin/styles.js.map: no space left on device

An exception occurred while building the project's Docker image.

Exception:

Error processing tar file(exit status 1): open /src/wwwroot/emsadmin/styles.js.map: no space left on device

Reason:

Docker's default storage location (under /var) is full.

Use df -h to check how much space is left on the disk:

df -h /var

Use vgdisplay to check whether the volume group has any free space to extend into.

From the output above, there is no free space available to extend.

Solution:

Expand the disk.
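Since the volume group has no free extents, the usual route is to attach a new disk and grow the volume with LVM. A rough sketch, assuming the new disk appears as /dev/sdb, the volume group is named centos and /var lives on /dev/centos/var (all placeholder names; check vgdisplay and lvdisplay for the real ones):

pvcreate /dev/sdb                       # register the new disk as a physical volume
vgextend centos /dev/sdb                # add it to the volume group shown by vgdisplay
lvextend -l +100%FREE /dev/centos/var   # grow the logical volume holding /var
xfs_growfs /var                         # grow the filesystem (use resize2fs for ext4)
df -h /var                              # confirm the new size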

[Solved] error C1090: PDB API call failed, error code ‘0’

 

Error C1090

 

PDB API call failed with error code “error number”: message

Error while processing PDB files.

Error C1090 is a catch-all for an uncommon compiler PDB file error that is not reported separately. We have provided only general recommendations to resolve this issue.

Clean the build output directory, then do a full rebuild of the solution.

Restart the computer, or use Task Manager to find and end any hung or unresponsive mspdbsrv.exe processes.

Turn off antivirus scanning for the project directory.

If /MP is used with MSBuild or another parallel build process, use the /Zf compiler option (see the example below).

Try building with the 64-bit hosted toolset.
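As a generic sketch (not this project's actual build command, and the file names are placeholders), /MP and /Zf would be combined on the cl command line like this; in the Visual Studio IDE, /Zf can instead be added under C/C++ > Command Line > Additional Options:

cl /MP /Zi /Zf /c main.cpp helper.cpp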

Phoenix Startup Error: ERROR 726 (43M10): Inconsistent namespace mapping properties. Cannot initiate connection as SYSTEM:CATALOG is found but client does not have phoenix.schema.isNamespaceMappingEnabled enabled

Phoenix startup error:

Error: ERROR 726 (43M10): Inconsistent namespace mapping properties. Cannot initiate connection as SYSTEM:CATALOG is found but client does not have phoenix.schema.isNamespaceMappingEnabled enabled (state=43M10,code=726)

An extra SYSTEM:CATALOG table exists in HBase, and the error message points to a namespace mapping mismatch when connecting.

Checking the HBase configuration file, I found the following lines, which turned out to be the culprit:

<property>
  <name>phoenix.schema.isNamespaceMappingEnabled</name>
  <value>true</value>
</property>

<property>
  <name>phoenix.schema.mapSystemTablesToNamespace</name>
  <value>true</value>
</property>

After deleting these configuration lines, distribute the file to the other machines.

Delete the SYSTEM:CATALOG table in HBase (a sketch of the deletion commands is below).
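A typical sequence in the HBase shell (which tables to drop depends on what list shows; snapshot them first if you want to keep the metadata):

hbase shell
list 'SYSTEM.*'            # see which Phoenix SYSTEM tables exist
disable 'SYSTEM:CATALOG'
drop 'SYSTEM:CATALOG'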

Restart HBase and Phoenix
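For example, assuming a standard installation with the scripts on the PATH (node01:2181 is a placeholder ZooKeeper address):

stop-hbase.sh
start-hbase.sh
sqlline.py node01:2181     # reconnect with the Phoenix client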

The following error then appears:

Inconsistent namespace mapping properties. Ensure that config phoenix.schema.isNamespaceMappingEnabled is consistent on client and server. (state=43M10,code=726)

Reason: the updated HBase configuration had not been synchronized to the Phoenix client. Repeat the previous steps, delete the SYSTEM tables again, and modify the Phoenix-side configuration file as well.

Restart HBase and Phoenix again

Phoenix then creates several new system tables, and the startup succeeds.

The HBase configuration file referred to above is hbase-site.xml.

[Solved] HDFS Failed to Start namenode Error: Premature EOF from inputStream; Failed to load FSImage file, see error(s) above for more info

I. Description

After starting Hadoop, the HDFS web interface on port 50070 could not be opened. Running jps showed that the NameNode process was missing:

Checking the NameNode log shows the following errors:

2022-01-03 23:54:10,993 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Planning to load image: FSImageFile(file=/export/servers/hadoop-3.1.4/hadoopDatas/namenodeDatas/current/fsimage_0000000000000052563, cpktTxId=0000000000000052563)
2022-01-03 23:54:10,999 ERROR org.apache.hadoop.hdfs.server.namenode.FSImage: Failed to load image from FSImageFile(file=/export/servers/hadoop-3.1.4/hadoopDatas/namenodeDatas/current/fsimage_0000000000000052563, cpktTxId=0000000000000052563)
java.io.IOException: Premature EOF from inputStream
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:212)
    at org.apache.hadoop.hdfs.server.namenode.FSImageFormat$LoaderDelegator.load(FSImageFormat.java:222)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:962)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:946)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:807)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:738)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:336)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1132)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:747)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:652)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:723)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:966)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:939)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1705)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1772)
2022-01-03 23:54:11,015 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Planning to load image: FSImageFile(file=/export/servers/hadoop-3.1.4/hadoopDatas/namenodeDatas2/current/fsimage_0000000000000052563, cpktTxId=0000000000000052563)
2022-01-03 23:54:11,015 ERROR org.apache.hadoop.hdfs.server.namenode.FSImage: Failed to load image from FSImageFile(file=/export/servers/hadoop-3.1.4/hadoopDatas/namenodeDatas2/current/fsimage_0000000000000052563, cpktTxId=0000000000000052563)
java.io.IOException: Premature EOF from inputStream
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:212)
    at org.apache.hadoop.hdfs.server.namenode.FSImageFormat$LoaderDelegator.load(FSImageFormat.java:222)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:962)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:946)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:807)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:738)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:336)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1132)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:747)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:652)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:723)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:966)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:939)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1705)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1772)
2022-01-03 23:54:11,021 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception loading fsimage
java.io.IOException: Failed to load FSImage file, see error(s) above for more info.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:752)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:336)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1132)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:747)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:652)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:723)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:966)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:939)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1705)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1772)

II. Solution

From the error messages, the NameNode fails while reading the FSImage files, and the log gives the full file paths. In my case two files failed to load, so the fix is to delete the files named in the log (back them up first).

I deleted them (do not delete the .md5 files). After restarting HDFS, the NameNode started again.
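A sketch of the commands, using the paths from the log above (copy the files somewhere safe before removing them):

cd /export/servers/hadoop-3.1.4/hadoopDatas/namenodeDatas/current
cp fsimage_0000000000000052563 /tmp/                # backup
rm fsimage_0000000000000052563                      # keep the .md5 and edits_* files
# repeat for .../hadoopDatas/namenodeDatas2/current, then restart HDFS
stop-dfs.sh && start-dfs.sh
jps                                                 # NameNode should be listed again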

The HDFS web page can now be opened again.

[Solved] Idea Error: Error running ‘Application‘: Command line is too long

When starting a new project in IDEA, it sometimes reports "Error running 'Application': Command line is too long. Shorten command line for Application or also for Spring Boot default configuration."

Solution:

First find the .idea/workspace.xml file inside the project, then locate the <component name="PropertiesComponent"></component> tag.

Then add a line inside that component tag: <property name="dynamic.classpath" value="true" />.
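After the edit, that part of .idea/workspace.xml would look roughly like this (any existing <property> entries stay as they are):

<component name="PropertiesComponent">
  <!-- existing properties ... -->
  <property name="dynamic.classpath" value="true" />
</component>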

In this way, no error will be reported when starting the project.

Redis Connect Error: InvalidDataAccessApiUsageException: MISCONF Redis is configured to save RDB snapshots

The application throws a connection exception at startup:

org.springframework.dao.InvalidDataAccessApiUsageException: MISCONF Redis is configured to save RDB snapshots, but it is currently not able to persist on disk. Commands that may modify the data set are disabled, because this instance is configured to report errors during writes if RDB snapshotting fails (stop-writes-on-bgsave-error option). Please check the Redis logs for details about the RDB error.. channel: [id: 0x2f971a28, L:/127.0.0.1:57418 - R:127.0.0.1/127.0.0.1:6379] command: (BRPOP), params: [[114, 101, 116, 114, 121, 95, 109, 115, 103], 216000]; 

The problem is that RDB persistence is failing; I searched around online for quite a while.

It can be worked around by changing the stop-writes-on-bgsave-error option in redis.conf:

stop-writes-on-bgsave-error no
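The same setting can also be changed at runtime with redis-cli, without restarting Redis (it reverts on restart unless redis.conf is updated as well):

redis-cli CONFIG SET stop-writes-on-bgsave-error no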