Optimize UnicodeHash lookup performance using std::unordered_map #30
Description
This PR optimizes the `Hash::UnicodeHash` class by replacing the internal custom Trie implementation (`Tries::UniTrie`) with a standard `std::unordered_map`.

Profiling of the dependent `timbl` application revealed that `UnicodeHash::lookup` and `UnicodeHash::hash` were significant performance bottlenecks, consuming over 50% of the CPU time during the learning phase on large datasets. The cause was the O(L) time complexity of the Trie structure for string lookups, where L is the string length; switching to a hash map provides O(1) average time complexity.
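Keying a `std::unordered_map` on `icu::UnicodeString` requires a custom hash functor. Below is a minimal sketch of the header-side change, assuming the functor simply delegates to ICU's `UnicodeString::hashCode()`; the PR's actual `UnicodeStringHash` may be implemented differently, and `UniInfo` is stood in by a forward declaration here. Key equality comes for free via `UnicodeString::operator==`.

```cpp
#include <cstddef>
#include <unordered_map>
#include <unicode/unistr.h>

struct UniInfo;  // stand-in: the real payload type lives in UniHash.h

// Hash functor so std::unordered_map can key on icu::UnicodeString.
// ICU already provides hashCode(), so the functor delegates to it.
struct UnicodeStringHash {
  std::size_t operator()( const icu::UnicodeString& s ) const {
    return static_cast<std::size_t>( s.hashCode() );
  }
};

// The member that replaces Tries::UniTrie<UniInfo> _tree:
std::unordered_map<icu::UnicodeString, UniInfo*, UnicodeStringHash> _map;
```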
Changes

`include/ticcutils/UniHash.h`:
- Replaced the `Tries::UniTrie<UniInfo> _tree` member with `std::unordered_map<icu::UnicodeString, UniInfo*, UnicodeStringHash> _map`.
- Added a `UnicodeStringHash` struct (sketched above) to support `icu::UnicodeString` keys.

`src/UniHash.cxx` (see the sketch after this list):
- Rewrote `hash()` and `lookup()` to use the `std::unordered_map` API (`find`, `insert`).
- Added a `Normalizer2::isNormalized` check to avoid expensive normalization when the input string is already in NFC form.
- Updated `~UnicodeHash()` to explicitly delete the `UniInfo` pointers stored in the map, preventing memory leaks.
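A hedged sketch of how the rewritten `.cxx` side can fit together. Only the map-based flow (`find`/`insert`), the `Normalizer2::isNormalized` fast path, and the explicit deletion in the destructor are taken from the PR description; everything else, including `UniInfo`'s layout and the index bookkeeping, is an assumption for illustration.

```cpp
#include <cstddef>
#include <unordered_map>
#include <unicode/normalizer2.h>
#include <unicode/unistr.h>

class UnicodeHash {
public:
  ~UnicodeHash() {
    // The map stores raw UniInfo pointers, so they must be freed explicitly
    // to avoid leaking one allocation per distinct string.
    for ( auto& entry : _map ) {
      delete entry.second;
    }
  }

  // Return the index of 'value', inserting it first if it is new.
  int hash( const icu::UnicodeString& value ) {
    icu::UnicodeString key = to_nfc( value );
    auto it = _map.find( key );                      // O(1) average
    if ( it == _map.end() ) {
      UniInfo *info = new UniInfo{ key, ++_last };   // assumed layout
      it = _map.insert( { key, info } ).first;
    }
    return it->second->index;
  }

  // Return the index of 'value', or 0 when it is unknown (assumed convention).
  int lookup( const icu::UnicodeString& value ) const {
    auto it = _map.find( to_nfc( value ) );
    return it == _map.end() ? 0 : it->second->index;
  }

private:
  struct UniInfo {                 // assumed minimal payload
    icu::UnicodeString str;
    int index;
  };
  struct UnicodeStringHash {       // as sketched above
    std::size_t operator()( const icu::UnicodeString& s ) const {
      return static_cast<std::size_t>( s.hashCode() );
    }
  };

  // Normalize to NFC, but skip the expensive normalize() call when the
  // input already is NFC -- the Normalizer2::isNormalized fast path.
  static icu::UnicodeString to_nfc( const icu::UnicodeString& s ) {
    UErrorCode err = U_ZERO_ERROR;
    const icu::Normalizer2 *nfc = icu::Normalizer2::getNFCInstance( err );
    if ( U_SUCCESS( err ) && !nfc->isNormalized( s, err ) ) {
      return nfc->normalize( s, err );
    }
    return s;
  }

  std::unordered_map<icu::UnicodeString, UniInfo*, UnicodeStringHash> _map;
  int _last = 0;
};
```

Note that this sketch stores the normalized string both as the map key and inside `UniInfo`; whether the real implementation shares that storage is not specified in the PR.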
Performance Analysis

Benchmarks were run using `timbl` on the `edufineweb_train_000001-100k` dataset (~4.9M lines).

Profiling Details (Top Functions)
Before (Original):
```text
% time   cumulative (s)   self (s)   name
 38.44           101.33     101.33   Hash::UnicodeHash::lookup
 16.66           145.26      43.93   Hash::UnicodeHash::hash
```
After (Optimized):
```text
% time   cumulative (s)   self (s)   name
  1.92            96.51       2.19   Hash::UnicodeHash::lookup
  1.05           101.13       1.20   Hash::UnicodeHash::hash
```
The bottleneck has been effectively eliminated; the primary processing time now shifts to the core algorithm logic (`ClassDistribution::IncFreq`).

Attachment: ticcutils_optimization_pr.zip