The error is as follows:
java.net.ConnectException: Call From v_lz/192.168.53.1 to hadoop2:8020 failed on connection exception: java.net.ConnectException: Connection refused: no further information; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:791)
	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:731)
	at org.apache.hadoop.ipc.Client.call(Client.java:1472)
	at org.apache.hadoop.ipc.Client.call(Client.java:1399)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
	at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:752)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
	at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1988)
	at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1118)
	at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1114)
	at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1400)
	at com.lens.task.HdfsOperate.main(HdfsOperate.java:22)
Caused by: java.net.ConnectException: Connection refused: no further information
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
	at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:607)
	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:705)
	at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:368)
	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1521)
	at org.apache.hadoop.ipc.Client.call(Client.java:1438)
	... 18 more
Process finished with exit code 0
Cause: the port in the client's connection URI does not match the port the NameNode is actually listening on, so the client cannot reach the server.
Solution: change the port on the client side to match the port the server is listening on.
OK, done.
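Before changing any configuration, it can help to confirm whether anything is listening on the target port at all. A minimal sketch (the `PortProbe` helper below is hypothetical, not from the original post): a plain TCP connect attempt that distinguishes a reachable service from the "Connection refused" seen in the stack trace.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortProbe {
    /** Returns true if a TCP connection to host:port succeeds within timeoutMs. */
    public static boolean canConnect(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            // A ConnectException here is exactly the "Connection refused" case:
            // the host answered, but nothing is listening on that port.
            return false;
        }
    }

    public static void main(String[] args) {
        // Host and port taken from this post's error message; adjust to your cluster.
        String host = args.length > 0 ? args[0] : "hadoop2";
        int port = args.length > 1 ? Integer.parseInt(args[1]) : 8020;
        System.out.println(host + ":" + port + " reachable? " + canConnect(host, port, 3000));
    }
}
```

If the probe fails on 8020 but succeeds on 9000 (or vice versa), that confirms a client/server port mismatch rather than a network or firewall problem.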
Port 8020:
In Hadoop 1.x, port 8020 by default carried the heartbeat communication between the NameNode and the DataNodes, and also doubled as the default FileSystem port (the RPC port HDFS clients use to reach the HDFS cluster).
In Hadoop 2.x, 8020 only carries the NameNode–DataNode heartbeat communication. Note that all of these port assignments refer to the default settings.
fs.defaultFS
hdfs://hadoop01:8020
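In configuration form, the property above goes into core-site.xml (a sketch; `hadoop01` and 8020 are the hostname and port used in this post's example):

```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop01:8020</value>
  </property>
</configuration>
```

Whatever host:port appears here must match what the client puts in its HDFS URI, or the connection will be refused as above.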
Port 9000:
In Hadoop 2.x, the FileSystem communication port was split out on its own, with 9000 as a common default; it is set via `fs.defaultFS` in core-site.xml.
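For example, a cluster whose NameNode RPC endpoint listens on 9000 instead of 8020 would declare (a sketch, reusing this post's example hostname):

```xml
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://hadoop01:9000</value>
</property>
```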
Port 50070:
Port 50070 is the HTTP service port, used to access the NameNode from a browser and to monitor the service status of each DataNode; in Hadoop 2.x it can likewise be configured, in hdfs-site.xml.
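The web UI address is controlled by the `dfs.namenode.http-address` property in hdfs-site.xml; a sketch with the Hadoop 2.x default value:

```xml
<property>
  <name>dfs.namenode.http-address</name>
  <value>0.0.0.0:50070</value>
</property>
```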