hadoop fsck / -delete
This problem occurs after deleting several data blocks on HDFS. The exception information is as follows:
In HDFS, data is stored as blocks (blk_1073748128 is the block named in the error). After blk_1073748128 was deleted, its metadata remained on the NameNode even though the data block itself was gone, so this error is reported. Since this data is no longer needed, the metadata of the corrupt file block can simply be deleted.
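The cleanup described above can be sketched as a short `hdfs fsck` session. This is a minimal sketch, not the author's exact steps: `/` is used as the scan root, and `/path/to/data` is a hypothetical directory standing in for wherever the error was reported. The `-list-corruptfileblocks`, `-files`, `-blocks`, `-locations`, `-delete`, and `-move` options are standard `hdfs fsck` flags.

```shell
# Inspect the filesystem first: list files whose blocks are missing or corrupt.
hdfs fsck / -list-corruptfileblocks

# Optionally show per-file block details to confirm which blocks are affected
# (/path/to/data is a hypothetical example path).
hdfs fsck /path/to/data -files -blocks -locations

# Remove the corrupted files' metadata from the NameNode.
# WARNING: -delete permanently removes the affected files; if the data might
# still be needed, use -move instead to relocate them to /lost+found.
hdfs fsck / -delete
```

After the cleanup, running `hdfs fsck /` again should report the filesystem status as HEALTHY.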
- [Solved] hadoop:hdfs.DFSClient: Exception in createBlockOutputStream
- HDFS problem set (1), running a command reports an error: com.google.protobuf.ServiceException: java.lang.OutOfMemoryError: Java heap space
- "Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask" error occurred when Hive imported local data
- [Solved] hbase ERROR: org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet
- Namenode Initialize Error: java.lang.IllegalArgumentException: URI has an authority component
- [Solved] Delete hdfs Content Error: rm: Cannot delete /wxcm/ Name node is in safe mode.
- Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster
- [Solved] Hbase Exception: java.io.EOFException: Premature EOF: no length prefix available
- [Solved] Hbase Error: org.apache.hadoop.hbase.ipc.FailedServerException
- Mapreduce:Split metadata size exceeded 10000000