hadoop - How to find different fragments of a file in HDFS
Is there a way to find out where the fragments (blocks) of a file put into HDFS have gone? I mean, where can I get information about where a file's blocks are stored in HDFS?
You can use the fsck command:

#> hadoop fsck /path/to/file -files -blocks -locations -racks

This lists the file, its blocks, and the associated metadata:
- block name/ID
- block length
- block replication factor
- locations (datanode IP:port)
- rack (the datanode IP is prefixed with its associated rack ID)
For example:
/user/chris/file1.txt 123 bytes, 1 block(s): ok 0. blk_432678432632_3426532 len=123 repl=2 [/rack1/1.2.3.4:50010, /rack2/4.5.6.7:50010]
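If you want to consume this information programmatically, the fields in a line like the one above can be pulled out with a small script. This is only a sketch that assumes the output format shown in the example; field layout can vary between Hadoop versions.

```python
import re

# Sample line in the format printed by `hadoop fsck -files -blocks -locations -racks`
# (taken from the example above)
line = ("/user/chris/file1.txt 123 bytes, 1 block(s): ok "
        "0. blk_432678432632_3426532 len=123 repl=2 "
        "[/rack1/1.2.3.4:50010, /rack2/4.5.6.7:50010]")

# Extract block ID, block length, replication factor, and the rack-prefixed
# datanode locations from the bracketed list
match = re.search(r"(blk_\S+) len=(\d+) repl=(\d+) \[([^\]]+)\]", line)
block_id, length, repl, locs = match.groups()
locations = [loc.strip() for loc in locs.split(",")]

print(block_id)   # blk_432678432632_3426532
print(locations)  # ['/rack1/1.2.3.4:50010', '/rack2/4.5.6.7:50010']
```

Each entry in `locations` combines the rack ID and the datanode address, so you can see on which racks the replicas of each block live.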