rest - WebHDFS: upload a file in two steps -


I built a Hadoop cluster with 4 machines (hostname: IP address):

  • master: 192.168.1.60
  • slave1: 192.168.1.61
  • slave2: 192.168.1.62
  • slave3: 192.168.1.63

I use HttpFS to upload a file to HDFS via its REST API; the task takes two steps to finish.

  • step 1: send an HTTP PUT request to the NameNode; the server returns a redirect whose Location header looks like:

Location: http://slave1:50075/webhdfs/v1/user/haduser/myfile.txt?op=CREATE&user.name=haduser&namenoderpcaddress=master:8020&overwrite=false

  • step 2: use the address from that response to upload the file.
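The redirect from step 1 can be inspected programmatically. Here is a minimal Python sketch (the helper name `upload_host` is mine, not part of WebHDFS) that pulls the host out of the step-1 Location header, using the sample URL from this post:

```python
from urllib.parse import urlsplit

def upload_host(location):
    """Return the host that step 2 must upload to,
    taken from the step-1 redirect Location header."""
    return urlsplit(location).hostname

# The Location value returned by the NameNode in the example above.
location = ("http://slave1:50075/webhdfs/v1/user/haduser/myfile.txt"
            "?op=CREATE&user.name=haduser"
            "&namenoderpcaddress=master:8020&overwrite=false")

print(upload_host(location))  # prints "slave1" - a hostname, not an IP
```

This makes the problem concrete: the client receives `slave1`, which it may not be able to resolve, instead of `192.168.1.61`.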

In step 1, how can I make the server return the DataNode's IP address (192.168.1.61) in the Location header rather than its hostname (slave1)?

If your Hadoop version is >= 2.5, edit the ${HADOOP_HOME}/etc/hadoop/hdfs-site.xml file on every DataNode and add the property dfs.datanode.hostname, with that DataNode's IP address as its value.
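For example, on slave1 the added property would look like this (a sketch assuming the default configuration layout; the IP is the one from the cluster listed above):

```xml
<!-- ${HADOOP_HOME}/etc/hadoop/hdfs-site.xml on slave1 -->
<property>
  <name>dfs.datanode.hostname</name>
  <value>192.168.1.61</value>
</property>
```

Each DataNode gets its own IP in this property, and the DataNodes must be restarted for the change to take effect.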

