Java API Access to HDFS Error: Permission denied in a Production Environment

An error occurs when the Java API is used to access the HDFS file system without specifying the user:

        // 1. Create the Hadoop Configuration object
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://linux121:9000");
        // 2. Create the FileSystem object without specifying a user
        // (FileSystem.get(new URI("hdfs://linux121:9000"), conf, "root") would specify one)
        FileSystem fileSystem = FileSystem.get(conf);
        // 3. Create a directory
        fileSystem.mkdirs(new Path("/tp_user"));
        // 4. Close the FileSystem
        fileSystem.close();

The following error message appears:

org.apache.hadoop.security.AccessControlException: Permission denied: user=QI, access=WRITE, inode="/":root:supergroup:drwxr-xr-x
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:350)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:251)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:189)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1753)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1737)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1696)
	at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:60)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2990)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1096)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB

If no user is specified, the client defaults to the name of the current operating system user (here QI), and the NameNode rejects the request when that user lacks write permission. HDFS's own user permission checking is weak: the client simply declares a user name, so it cannot prevent bad people from doing bad things!
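To see which name the client will actually present to the NameNode, a quick check can be run on the client machine. A minimal sketch (the class name WhoAmI is made up for illustration):

import org.apache.hadoop.security.UserGroupInformation;

public class WhoAmI {
    public static void main(String[] args) throws Exception {
        // When no user is passed to FileSystem.get(), this is the name the HDFS client
        // sends to the NameNode: the OS login, unless the HADOOP_USER_NAME environment
        // variable or system property overrides it.
        System.out.println(UserGroupInformation.getCurrentUser().getShortUserName());
    }
}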

For production use, there are three solutions:

1. Specify the user when obtaining the FileSystem object (see the sketch below).
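A minimal sketch of this option, built from the commented-out call in the original snippet (linux121:9000 and the user root come from that code; the class name MkdirAsRoot is made up):

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class MkdirAsRoot {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // The third argument is the user the request is made as, so the NameNode
        // checks root's permissions instead of the local OS user's.
        FileSystem fileSystem = FileSystem.get(new URI("hdfs://linux121:9000"), conf, "root");
        fileSystem.mkdirs(new Path("/tp_user"));
        fileSystem.close();
    }
}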

2. Turn off HDFS cluster permission verification:

vim hdfs-site.xml
# Add the following property
<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>
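Permission checking is done by the NameNode, so this hdfs-site.xml has to be the copy the NameNode reads, and HDFS must be restarted before the new value takes effect. A rough sketch, assuming Hadoop's sbin scripts are on the PATH of the user that runs the cluster:

# restart HDFS so the NameNode reloads hdfs-site.xml with dfs.permissions=false
stop-dfs.sh
start-dfs.sh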

3. Given how weak HDFS permissions are, permission verification can be given up entirely. In a production environment, security frameworks such as Kerberos and Sentry can be used to manage the security of the big data cluster instead. In that case, simply change the permission of the HDFS root directory to 777:

hadoop fs -chmod -R 777 /
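Afterwards the change can be verified from the shell; every listed entry should now be world-writable:

hadoop fs -ls /
# directories under / should show permissions drwxrwxrwx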
