Recently my fully distributed cluster blew up again. The pseudo-distributed setups on hadoop-1.2.1 and hadoop-2.5.2 worked fine, no problems at all, but the fully distributed cluster kept failing.
This is the error:
17/09/02 04:18:53 WARN ssl.FileBasedKeyStoresFactory: The property 'ssl.client.truststore.location' has not been set, no TrustStore will be loaded
17/09/02 04:18:54 FATAL namenode.NameNode: Exception in namenode join
java.lang.IllegalArgumentException: URI has an authority component
at java.io.File.<init>(File.java:423)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.getStorageDirectory(NNStorage.java:327)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournals(FSEditLog.java:261)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournalsForWrite(FSEditLog.java:233)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:920)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1354)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1473)
17/09/02 04:18:54 INFO util.ExitUtil: Exiting with status 1
I searched everywhere and asked everywhere, but found no solution. The tutorial was right, nothing wrong with it. The JournalNodes and ZooKeeper were started, yet the namenode still could not be formatted. It gave me a headache for two days. So I kept comparing configurations and finally found one difference: a setting in core-site.xml.
Mine looked like this:
<property>
  <name>hadoop.tmp.dir</name>
  <value>file:/opt/data2/tmp</value>
</property>
And the tutorial had it like this:
<property>
  <name>hadoop.tmp.dir</name>
  <value>/usr/local/hadoop/tmp</value>
</property>
I didn't think this was a problem, since the pseudo-distributed setup worked fine with it. Later I decided to remove the file: prefix to see whether things would improve, so I deleted it, and then the namenode formatted successfully.
In summary: in a pseudo-distributed setup the file: prefix can be added or left out, but in a fully distributed setup it must not be added. The likely reason is that other paths, such as dfs.namenode.name.dir, default to file://${hadoop.tmp.dir}/dfs/name, so a hadoop.tmp.dir that already carries a file: scheme expands into a URI with a bogus authority component.
Summary: don't write file: in hadoop.tmp.dir on a fully distributed cluster.
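For reference, here is a minimal sketch of the entry that worked for me, using the same /opt/data2/tmp directory as above (adjust the path for your own cluster):

<!-- core-site.xml: plain local path, no file: scheme -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/opt/data2/tmp</value>
</property>

After changing it, you may need to clear any stale data under that directory on each node and then run hdfs namenode -format again.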