Optimize hgetall for large hash table #4
@sunxiaoguang It seems like an optimization for the special encoding.
@soloestoy Here is the comment I posted on the original pull request to the vanilla version; I have copied it here for your convenience. The code to generate test data and benchmark the difference can be found here. Please use the 'optimize_hgetall_unstable_comparaison' branch, which contains commands for both the original implementation and the new implementation for reference. Test runs of the benchmark program on an E5-2670 v3 server show that, for a hash table with 16000 fields (the h5 test), the new approach saves a couple hundred microseconds on average per hgetall call. Running the same test on servers with smaller caches shows even larger improvements, since the iterator-based approach accesses data in a more scattered pattern and therefore makes less efficient use of the cache.
Although hgetall is inherently slow and should be used with caution, there are cases where it is necessary to fetch everything at once. This patch optimizes that code path.