
[Solved] Hbase Startup Normally but Execute Error: Server is not running yet

Error reporting information

hbase:001:0> list
TABLE

ERROR: org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet

There are two possible solutions.

Method 1: HDFS safe mode
If the cluster was not shut down cleanly, Hadoop starts up in safe mode, which leaves HDFS read-only and inaccessible to HBase. Leaving safe mode restores access.

Start Hadoop first, then run:

hdfs dfsadmin -safemode get    # check safe mode status
hdfs dfsadmin -safemode leave  # turn off safe mode

You can also check the safe mode status on the Hadoop web UI.

Then restart HBase and connect with the client again. This solves the problem in most cases.

Method 2: jar package conflict
slf4j-log4j12-1.7.25.jar ships with both Hadoop and HBase. When both are started, the duplicate logging binding can prevent the service from being accessible.

Solution: delete slf4j-log4j12-1.7.25.jar from HBase; the file is in hbase/lib/client-facing-thirdparty/.

Then, in hbase-env.sh, uncomment the line export HBASE_DISABLE_HADOOP_CLASSPATH_LOOKUP="true" (add the line if the file does not contain it). The default value is false, which means Hadoop's libraries are included on HBase's classpath.
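The edit can be scripted. The sketch below operates on a throwaway temp file purely for illustration; on a real installation, point ENV_FILE at your actual $HBASE_HOME/conf/hbase-env.sh instead.

```shell
# Sketch only: ENV_FILE is a throwaway temp file standing in for
# $HBASE_HOME/conf/hbase-env.sh; substitute the real path on your cluster.
ENV_FILE=$(mktemp)
echo '# export HBASE_DISABLE_HADOOP_CLASSPATH_LOOKUP="true"' > "$ENV_FILE"

# Uncomment the setting if it is present, otherwise append it.
if grep -q 'HBASE_DISABLE_HADOOP_CLASSPATH_LOOKUP' "$ENV_FILE"; then
  sed -i 's/^#[[:space:]]*\(export HBASE_DISABLE_HADOOP_CLASSPATH_LOOKUP=\)/\1/' "$ENV_FILE"
else
  echo 'export HBASE_DISABLE_HADOOP_CLASSPATH_LOOKUP="true"' >> "$ENV_FILE"
fi
cat "$ENV_FILE"
```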

Then restart HBase. If stop-hbase.sh does not work, kill the process with the kill command.

It is recommended to apply both methods, and to follow the proper shutdown order: stop HBase first, then Hadoop.

[Solved] HBase Startup Error: master.HMaster: Failed to become active master

Situation:

ZooKeeper and HDFS were started, and then HBase. Although HBase appears to start successfully, it shuts itself down after a few seconds and reports an error.

Complete error reporting information:

master.HMaster: Failed to become active master
org.apache.hadoop.hbase.util.FileSystemVersionException: HBase file layout needs to be upgraded. You have version null and I want version 8. 
Consult http://hbase.apache.org/book.html for further information about upgrading HBase. Is your hbase.rootdir valid? If so, you may need to run 'hbase hbck -fixVersionFile'.

Solution:

#switch to the hdfs user
su hdfs

#delete the HBase data directory (warning: this removes all HBase table data)
hadoop fs -rmr /apps/hbase/data #older versions
hdfs dfs -rm -r /apps/hbase/data #newer versions

#login to ZooKeeper
zkCli.sh

#Check if the /hbase-unsecure directory exists
ls /

#Delete the /hbase-unsecure directory
rmr /hbase-unsecure #Older version
deleteall /hbase-unsecure #newer version

Finally, restart HBase

Attached:

If the command in the error message is executed:

hbase hbck -fixVersionFile

Then a new error is reported: /apps/hbase/data/.tmp/hbase-hbck.lock is held, and the lock file needs to be deleted first.

Delete command:

hdfs dfs -rm /apps/hbase/data/.tmp/hbase-hbck.lock

[Solved] HBase RegionServer Startup Error: UnknownHostException

Error when HBase starts regionserver:

ERROR [main] regionserver.HRegionServer: Failed construction RegionServer
java.lang.IllegalArgumentException: java.net.UnknownHostException: xxx

*where "xxx" is the value of the dfs.nameservices property in hdfs-site.xml

The reason is that the hbase.rootdir property in hbase-site.xml uses the HDFS nameservice name (the value of dfs.nameservices), which HBase cannot resolve as a hostname without the HDFS client configuration files.

 

Solution:

Copy the core-site.xml and hdfs-site.xml files from the /hadoop/etc/hadoop/ directory to the /hbase/conf/ directory:

cd /hadoop/etc/hadoop
cp core-site.xml hdfs-site.xml /hbase/conf

*Each node (each machine) must perform this operation
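The per-node copy can be scripted. The sketch below simulates the fan-out locally with `cp` and temporary directories; the hostnames are hypothetical, and on a real cluster you would replace the local copy with `scp` to each node's /hbase/conf.

```shell
# Local simulation of pushing the HDFS client configs to every node.
# node1..node3 are hypothetical hostnames; src/dest stand in for
# /hadoop/etc/hadoop and each node's /hbase/conf.
src=$(mktemp -d)
touch "$src/core-site.xml" "$src/hdfs-site.xml"

for node in node1 node2 node3; do
  dest=$(mktemp -d)   # on a real cluster: scp ... "$node:/hbase/conf/"
  cp "$src/core-site.xml" "$src/hdfs-site.xml" "$dest/"
  echo "copied configs for $node"
done
```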

 

[Solved] Hbase Error: org.apache.hadoop.hbase.ipc.FailedServerException

HBase error:

2021-10-24 18:55:25,514 WARN  [RSProcedureDispatcher-pool3-t914] procedure.RSProcedureDispatcher: request to server node2.jacky.com,16020,1635064460828 failed due to org.apache.hadoop.hbase.ipc.FailedServerException: Call to node2.jacky.com/192.168.1.251:16020 failed on local exception: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: node2.jacky.com/192.168.1.251:16020, try=78546, retrying...
------------------------------------------------------------------------


Solution: turn off Hadoop safe mode

#/usr/local/hadoop-2.8.5/bin/hdfs dfsadmin -safemode leave
#/usr/local/hadoop-2.8.5/bin/hdfs dfsadmin -safemode get

Then stop hbase, hadoop, zookeeper services
Start zookeeper, hadoop, hbase services in turn
Solved.

[HBase] Default port occupied, error reported at startup

When starting HBase, you can set export HBASE_MANAGES_ZK=true in hbase-env.sh so that HBase manages its own ZooKeeper.

In that case, the following error was reported at startup:

starting master, logging to /home/wde/hbase/hbase/bin/../logs/hbase-wde-master-ict003.out
Could not start ZK at requested port of 2181. ZK was started at port: 2182. Aborting as clients (e.g. shell) will not be able to find this ZK quorum.

 

It seems that the default port 2181 is already occupied. If no ZooKeeper port is specified in hbase-site.xml, the default 2181 is used; once port 2181 is taken, startup fails.
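Before changing the port, it can help to confirm that something really is listening on 2181. A minimal check, assuming a Linux machine with `ss` available:

```shell
# Report whether anything is already listening on ZooKeeper's default
# client port (2181). No grep match means the port is free.
PORT=2181
if ss -ltn | grep -q ":$PORT "; then
  echo "port $PORT is in use"
else
  echo "port $PORT is free"
fi
```

If the port is held by another ZooKeeper or an unrelated service, either free it or point HBase at a different client port via hbase.zookeeper.property.clientPort.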

Modify hbase-site.xml and add the following property:

<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2182</value>
</property>

 

Then you can start HBase normally.

Hive connection to HBase external table error: Can't get the locations


Create an HBase external table in Hive with the following script:

hive> CREATE EXTERNAL TABLE hbase_userFace(id string, mobile string, name string)
    > STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
    > WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,faces:mobile,faces:name")
    > TBLPROPERTIES ("hbase.table.name" = "userFace");

The error is as follows:

FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get the locations
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:312)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:153)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:61)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:320)
at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:295)
at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:160)
at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:155)
at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:811)
at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:602)
at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:366)
at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:303)
at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:313)
at org.apache.hadoop.hive.hbase.HBaseStorageHandler.preCreateTable(HBaseStorageHandler.java:200)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:664)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:657)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:156)
at com.sun.proxy.$Proxy8.createTable(Unknown Source)
at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:714)
at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:4135)
at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:306)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1653)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1412)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1195)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1049)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:213)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
)

The error suggests that HBase cannot be reached. HBase is coordinated by ZooKeeper, so test as follows:

1. Test the connection of single node HBase

$ hive -hiveconf hbase.master=master:60000

After entering Hive's CLI and running the create-table script, the error is still reported.

2. Test the connection of HBase in cluster

hive -hiveconf hbase.zookeeper.quorum=slave1,slave2,master,slave4,slave5,slave6,slave7

After entering Hive's CLI and running the create-table script, the table is created successfully.

So the error occurs when Hive tries to read HBase's ZooKeeper quorum. In hive-site.xml there is a property named hive.zookeeper.quorum; copy that property and rename the copy to hbase.zookeeper.quorum, as follows:

<property>
<name>hbase.zookeeper.quorum</name>
<value>slave1,slave2,master,slave4,slave5,slave6,slave7</value>
<description>
</description>
</property>

With that, the problem is solved and the HBase external table is created successfully.