An error occurs when the Java API accesses the HDFS file system without specifying the access user
```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// 1. Create the Hadoop configuration object
Configuration conf = new Configuration();
conf.set("fs.defaultFS", "hdfs://linux121:9000");

// 2. Create the Hadoop FileSystem object
// (the commented-out call below specifies the user explicitly and would avoid the error)
// FileSystem fileSystem = FileSystem.get(new URI("hdfs://linux121:9000"), conf, "root");
FileSystem fileSystem = FileSystem.get(conf);

// 3. Create the directory
fileSystem.mkdirs(new Path("/tp_user"));

// 4. Close the file system
fileSystem.close();
```
The following error message is reported:
```
org.apache.hadoop.security.AccessControlException: Permission denied: user=QI, access=WRITE, inode="/":root:supergroup:drwxr-xr-x
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:350)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:251)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:189)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1753)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1737)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1696)
	at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:60)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2990)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1096)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB
```
If no user is specified, the client defaults to the current operating-system user name (here, `QI`), and HDFS rejects the request because that user has no write permission on `/`. Keep in mind that HDFS permission checking is weak: the NameNode simply trusts whatever user name the client reports, so it cannot stop a malicious user from impersonating someone else.
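As a side note, on a cluster without Kerberos you can also steer the reported user name through `HADOOP_USER_NAME`. A minimal sketch, assuming a recent Hadoop 2.x/3.x client (which reads this name from the environment variable or from a JVM system property of the same name):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

// Sketch: set HADOOP_USER_NAME before any FileSystem is created, so the
// client reports "root" instead of the local OS user. Without Kerberos,
// the NameNode trusts this name as-is.
System.setProperty("HADOOP_USER_NAME", "root");

Configuration conf = new Configuration();
conf.set("fs.defaultFS", "hdfs://linux121:9000");
FileSystem fileSystem = FileSystem.get(conf); // now acts as user "root"
```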
In practice, there are three solutions:
Specify the user when obtaining the file system object (see the sketch below)
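This is the approach already hinted at in the commented-out line of the snippet above: use the overload of `FileSystem.get` that takes the target URI and a user name. A minimal, self-contained version (the class name `HdfsMkdirAsRoot` is just for illustration):

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsMkdirAsRoot {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Pass the user name explicitly so HDFS checks permissions
        // as "root" instead of the local OS user.
        FileSystem fileSystem =
                FileSystem.get(new URI("hdfs://linux121:9000"), conf, "root");
        fileSystem.mkdirs(new Path("/tp_user"));
        fileSystem.close();
    }
}
```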
Turn off HDFS cluster permission verification
```shell
vim hdfs-site.xml
```

Add the following property. Note that the value must be `false` to disable permission checking (`true` keeps it on); on Hadoop 2.x and later the canonical key is `dfs.permissions.enabled`, with `dfs.permissions` kept as a deprecated alias:

```xml
<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>
```
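The NameNode must be restarted for the change to take effect. Assuming a Hadoop 2.x+ client, you can then confirm the effective value with `hdfs getconf`:

```shell
# Restart HDFS, then check the value the cluster actually uses
hdfs getconf -confKey dfs.permissions.enabled
```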
Given how weak HDFS permission checking is, we can give up on it entirely for a development cluster; in a real production environment, consider security frameworks such as Kerberos or Sentry to manage the security of the big data cluster. Here we simply change the permissions of the HDFS root directory to 777:
```shell
hadoop fs -chmod -R 777 /
```
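To verify, list the root directory; every entry should now show permissions `rwxrwxrwx`:

```shell
hadoop fs -ls /
```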