Shuffling in Spark vs Hadoop MapReduce | Wei Shung Chung
"…a sort-based shuffle implementation that takes advantage of an Ordering for keys (or just sorts by hashcode for keys that don't have it) would likely improve performance and memory usage in very large shuffles. Our current hash-based shuffle needs an open file for each reduce task, which can fill up a lot of memory for compression buffers and cause inefficient IO. This would avoid both of those issues."

Read the full article: Shuffling in Spark vs Hadoop MapReduce | Wei Shung Chung
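To make the contrast concrete, here is a minimal sketch (not Spark's actual code) of the two map-side strategies the quote describes, using in-memory lists in place of files. The function names and structure are illustrative assumptions; the point is that the hash-based approach keeps one open output per reduce partition, while the sort-based approach sorts records by partition (here, key hashcode) and writes a single file plus a small index:

```python
def hash_shuffle(records, num_reducers):
    """Hash-based: one output buffer (file) per reduce partition,
    so each map task holds num_reducers buffers open at once."""
    files = {r: [] for r in range(num_reducers)}
    for key, value in records:
        files[hash(key) % num_reducers].append((key, value))
    return files  # len(files) == num_reducers

def sort_shuffle(records, num_reducers):
    """Sort-based: tag each record with its partition id, sort, then
    write one contiguous file with an index of where each partition's
    run of records begins."""
    tagged = sorted((hash(k) % num_reducers, k, v) for k, v in records)
    data, index = [], {}
    for part, k, v in tagged:
        index.setdefault(part, len(data))  # offset of the run's start
        data.append((k, v))
    return data, index  # one file + small index, regardless of num_reducers
```

With R reducers and M map tasks, the hash-based scheme needs M×R open files (and their compression buffers) cluster-wide, while the sort-based scheme needs only M, which is the memory and IO saving the quote is pointing at.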