Caused by: java.net.ConnectException: Call From nn1.hadoop/192.168.10.6 to nn2.hadoop:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
If you hit this error while connecting to Hive, you can also refer to the official documentation: https://cwiki.apache.org/confluence/display/HADOOP2/ConnectionRefused
First check whether Hadoop is running normally: open the HDFS web UI and verify that both NameNode hosts are up. If one of them has gone down, simply restart it.
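Before digging into the web UI, it can help to confirm that the failing RPC port is reachable at all. Below is a minimal sketch using bash's `/dev/tcp` pseudo-device; `nn2.hadoop:9000` is taken from the stack trace above, so substitute your own host and port.

```shell
# Hedged sketch: probe whether the NameNode RPC port accepts TCP connections.
# nn2.hadoop:9000 comes from the error message above; adjust for your cluster.
check_port() {
  host=$1; port=$2
  # bash opens a TCP connection when redirecting to /dev/tcp/<host>/<port>
  if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
    echo "${host}:${port} reachable"
  else
    echo "${host}:${port} NOT reachable (refused, or host/service down)"
    return 1
  fi
}

check_port nn2.hadoop 9000 || true  # '|| true' so a failed probe doesn't abort the script
```

If the probe reports "NOT reachable", the NameNode process on that host is down or not listening on the expected port, which matches the `Connection refused` in the stack trace.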
Check whether the NameNode is running normally and staying up; if the active/standby state is unstable, restart the ZKFC (ZooKeeper Failover Controller) service.
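A hedged sketch of that check, assuming an HA pair whose NameNode IDs are `nn1` and `nn2` (taken from the hostnames in the error, your `dfs.ha.namenodes.*` values may differ):

```shell
# Hedged sketch: query HA NameNode state; restart ZKFC if the state is unstable.
# NameNode IDs nn1/nn2 are assumptions based on the hostnames above.
check_ha_state() {
  if ! command -v hdfs >/dev/null 2>&1; then
    echo "hdfs CLI not found; run this on a cluster node"
    return 0
  fi
  hdfs haadmin -getServiceState nn1   # expect "active" or "standby"
  hdfs haadmin -getServiceState nn2
}

check_ha_state

# If the active/standby state keeps flapping, bounce the failover controller
# (Hadoop 3 syntax; on Hadoop 2 use hadoop-daemon.sh instead):
# hdfs --daemon stop zkfc && hdfs --daemon start zkfc
```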
MySQL (which backs the Hive metastore) may also have gone down; just start it again.
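A minimal sketch for that last check, assuming a systemd host where the service is named `mysqld` (it may be `mysql` or `mariadb` on your distribution):

```shell
# Hedged sketch: verify the metastore's MySQL service is up; suggest a start if not.
# The service name "mysqld" is an assumption; adjust to your distribution.
check_mysql() {
  if systemctl is-active --quiet mysqld 2>/dev/null; then
    echo "mysqld is running"
  else
    echo "mysqld is not running; try: systemctl start mysqld"
  fi
}

check_mysql
```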