How does HDFS write many files to the lower-layer local file system at the same time?


I want to understand how HDFS keeps its performance high when many files are written at the same time.

For example, suppose 100 files are being read or written on one datanode at once. I assume it does not handle them with just a few threads doing normal synchronous I/O. Does HDFS create 100 worker threads to handle them, or does it use an asynchronous I/O mechanism without that many threads?

Yes, the datanode serves those requests with 100 threads. The Hadoop HDFS datanode has an upper bound on the number of files it will serve at one time. That upper bound is controlled by the parameter dfs.datanode.max.xcievers, whose default value is 256.
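As a sketch, that bound can be raised in hdfs-site.xml; the value 4096 below is purely illustrative, not a recommendation from the answer above:

```xml
<!-- hdfs-site.xml: per-datanode limit on concurrent block
     transfers (one serving thread per open block reader/writer).
     4096 is an illustrative value, not a tuned recommendation. -->
<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>4096</value>
</property>
```

Note that in later Hadoop releases this property was renamed to dfs.datanode.max.transfer.threads, which describes what it actually limits.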

