HDFS Web UI Inaccessible

Problem Description

On a Hadoop cluster made up of 3 datanodes, 1 namenode, and 1 secondary namenode, checking the status with the command below shows everything is normal.

(base) root@node4:~# hdfs dfsadmin -report
Configured Capacity: 168292061184 (156.73 GB)
Present Capacity: 128058142720 (119.26 GB)
DFS Remaining: 128058011648 (119.26 GB)
DFS Used: 131072 (128 KB)
DFS Used%: 0.00%
Replicated Blocks:
	Under replicated blocks: 0
	Blocks with corrupt replicas: 0
	Missing blocks: 0
	Missing blocks (with replication factor 1): 0
	Low redundancy blocks with highest priority to recover: 0
	Pending deletion blocks: 0
Erasure Coded Block Groups: 
	Low redundancy block groups: 0
	Block groups with corrupt internal blocks: 0
	Missing block groups: 0
	Low redundancy blocks with highest priority to recover: 0
	Pending deletion blocks: 0

-------------------------------------------------
Live datanodes (3):

So far, so good. But the HDFS web page on port 50070 is unreachable, and the HDFS filesystem port 9000 cannot be reached either.

Troubleshooting

After ruling out the firewall and security groups, check which ports the services are actually listening on:

(base) root@node4:~# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      516/systemd-resolve 
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1460/sshd: /usr/sbi 
tcp        0      0 192.168.0.165:8088      0.0.0.0:*               LISTEN      111907/java         
tcp        0      0 127.0.0.1:6010          0.0.0.0:*               LISTEN      107217/sshd: root@p 
tcp        0      0 127.0.0.1:6011          0.0.0.0:*               LISTEN      110809/sshd: root@p 
tcp        0      0 127.0.0.1:6013          0.0.0.0:*               LISTEN      105767/sshd: root@p 
tcp        0      0 192.168.0.165:8030      0.0.0.0:*               LISTEN      111907/java         
tcp        0      0 192.168.0.165:8031      0.0.0.0:*               LISTEN      111907/java         
tcp        0      0 192.168.0.165:8032      0.0.0.0:*               LISTEN      111907/java         
tcp        0      0 192.168.0.165:8033      0.0.0.0:*               LISTEN      111907/java         
tcp        0      0 192.168.0.165:9000      0.0.0.0:*               LISTEN      111496/java         
tcp        0      0 0.0.0.0:9870            0.0.0.0:*               LISTEN      111496/java         

Two problems stand out:

  1. Port 50070 is not in the list
  2. Port 9000 is bound only to the internal IP (192.168.0.165)
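The second finding matters because a socket bound to one specific address only accepts connections sent to that address; a wildcard bind (0.0.0.0) accepts connections on every interface. A minimal Python sketch of the difference (the port numbers are arbitrary, and 127.0.0.2 works because Linux routes all of 127.0.0.0/8 to loopback):

```python
import socket

def reachable(bind_addr, connect_addr, port):
    """Bind a listener to bind_addr:port, then try to connect to
    connect_addr:port. Returns True if the connection succeeds."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((bind_addr, port))
    srv.listen(1)
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.settimeout(1.0)
    try:
        cli.connect((connect_addr, port))
        return True
    except OSError:
        return False
    finally:
        cli.close()
        srv.close()

# 127.0.0.2 stands in for "a different interface address" here.
print(reachable("127.0.0.1", "127.0.0.2", 9100))  # False: wrong address
print(reachable("0.0.0.0", "127.0.0.2", 9101))    # True: wildcard bind
```

This is exactly why 9000, bound to 192.168.0.165 only, is invisible on any other address the host has.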

Solution

Problem 1: no service on port 50070

Careful digging shows that times have changed: in Hadoop 2.x the HDFS web UI port is 50070, but in Hadoop 3.x it moved to 9870. Accessing port 9870 brings up the Web UI normally.
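For reference, the HTTP address in Hadoop 3.x is governed by dfs.namenode.http-address in hdfs-site.xml; the netstat output above shows it already listening on 0.0.0.0:9870, which matches its default. A sketch of making that explicit (values here are illustrative, not from the original cluster):

```xml
<configuration>
        <property>
                <name>dfs.namenode.http-address</name>
                <value>0.0.0.0:9870</value>
        </property>
</configuration>
```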

Problem 2: no externally reachable service on port 9000

Once again it is the default listen address that needs changing: set the value of fs.defaultFS in core-site.xml.

<configuration>
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://0.0.0.0:9000</value>
        </property>
        <property>
                <name>hadoop.tmp.dir</name>
                <value>file:/usr/hadoop/tmp</value>
                <description>A base for other temporary directories.</description>
        </property>
</configuration>
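One caveat worth adding: fs.defaultFS is also the address clients connect to, and 0.0.0.0 is not a routable destination from other hosts. Hadoop provides a separate bind-only setting, dfs.namenode.rpc-bind-host in hdfs-site.xml, which lets fs.defaultFS keep the real hostname while the RPC server still listens on all interfaces:

```xml
<property>
        <name>dfs.namenode.rpc-bind-host</name>
        <value>0.0.0.0</value>
</property>
```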