HDFS WebUI Access Issues

Problem Description

The Hadoop cluster consists of 3 datanodes, 1 namenode, and 1 secondary namenode. Checking the status with the following command, everything appears normal:

(base) root@node4:~# hdfs dfsadmin -report
Configured Capacity: 168292061184 (156.73 GB)
Present Capacity: 128058142720 (119.26 GB)
DFS Remaining: 128058011648 (119.26 GB)
DFS Used: 131072 (128 KB)
DFS Used%: 0.00%
Replicated Blocks:
	Under replicated blocks: 0
	Blocks with corrupt replicas: 0
	Missing blocks: 0
	Missing blocks (with replication factor 1): 0
	Low redundancy blocks with highest priority to recover: 0
	Pending deletion blocks: 0
Erasure Coded Block Groups: 
	Low redundancy block groups: 0
	Block groups with corrupt internal blocks: 0
	Missing block groups: 0
	Low redundancy blocks with highest priority to recover: 0
	Pending deletion blocks: 0

-------------------------------------------------
Live datanodes (3):

Yes, it seems there are no problems. However, the HDFS web UI on port 50070 cannot be reached, and neither can the HDFS RPC port 9000.
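Before digging into the services themselves, the failure can be reproduced from any client machine with a quick port probe. A minimal sketch, using bash's built-in /dev/tcp redirection (no extra tools needed); "node4" is the namenode host in this cluster, substitute your own:

```shell
# Probe both suspect ports with a 2-second timeout.
# /dev/tcp/<host>/<port> is a bash feature: opening it attempts a TCP connect.
for port in 50070 9000; do
    if timeout 2 bash -c "echo > /dev/tcp/node4/$port" 2>/dev/null; then
        echo "port $port open"
    else
        echo "port $port closed"
    fi
done
```

In the situation described above, both probes report closed, which points at the server side rather than the network path.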

Investigation

After ruling out firewall and security-group issues, check which ports the services are actually listening on:

(base) root@node4:~# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      516/systemd-resolve 
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1460/sshd: /usr/sbi 
tcp        0      0 192.168.0.165:8088      0.0.0.0:*               LISTEN      111907/java         
tcp        0      0 127.0.0.1:6010          0.0.0.0:*               LISTEN      107217/sshd: root@p 
tcp        0      0 127.0.0.1:6011          0.0.0.0:*               LISTEN      110809/sshd: root@p 
tcp        0      0 127.0.0.1:6013          0.0.0.0:*               LISTEN      105767/sshd: root@p 
tcp        0      0 192.168.0.165:8030      0.0.0.0:*               LISTEN      111907/java         
tcp        0      0 192.168.0.165:8031      0.0.0.0:*               LISTEN      111907/java         
tcp        0      0 192.168.0.165:8032      0.0.0.0:*               LISTEN      111907/java         
tcp        0      0 192.168.0.165:8033      0.0.0.0:*               LISTEN      111907/java         
tcp        0      0 192.168.0.165:9000      0.0.0.0:*               LISTEN      111496/java         
tcp        0      0 0.0.0.0:9870            0.0.0.0:*               LISTEN      111496/java         

Two issues were identified:

  1. Port 50070 is not in the list at all.
  2. Port 9000 is listening, but bound only to the internal IP 192.168.0.165.

Solution

Issue 1: No service on port 50070

After a thorough investigation, it turned out that the times have changed: the HDFS web UI port was 50070 in Hadoop 2.x, but in Hadoop 3.x it moved to 9870. Accessing port 9870 therefore brings up the web UI normally.
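This is easy to confirm from the cluster itself. A hedged sketch: `hdfs getconf` reads the effective configuration, and a curl against the new port should return an HTTP status code; again, "node4" is this cluster's namenode host:

```shell
# Ask Hadoop which address the NameNode web UI is actually configured on.
hdfs getconf -confKey dfs.namenode.http-address

# Probe the Hadoop 3.x default UI port; a 200 (or a redirect) means it is up.
curl -s -o /dev/null -w "%{http_code}\n" http://node4:9870/
```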

Issue 2: Port 9000 bound only to an internal address

This was because the listening address had been changed from the default. Updating the fs.defaultFS value in core-site.xml so that it binds to 0.0.0.0 solves the problem:

<configuration>
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://0.0.0.0:9000</value>
        </property>
        <property>
                <name>hadoop.tmp.dir</name>
                <value>file:/usr/hadoop/tmp</value>
                <description>A base for other temporary directories.</description>
        </property>
</configuration>
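The new bind address only takes effect after HDFS is restarted. A minimal sketch, assuming the standard Hadoop sbin scripts are on the PATH:

```shell
# Restart HDFS so the updated fs.defaultFS is picked up.
stop-dfs.sh
start-dfs.sh

# Verify that port 9000 now listens on all interfaces (0.0.0.0),
# not just the internal IP.
netstat -lntp | grep ':9000'
```

Note that binding to 0.0.0.0 exposes the RPC port on every interface; on a machine with a public IP, make sure the firewall or security group restricts who can reach it.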