This repository was archived by the owner on Jan 21, 2026. It is now read-only.

[pull] memtable_as_log_index from topling:memtable_as_log_index#2

Merged
imbajin merged 175 commits into hugegraph:memtable_as_log_index from
topling:memtable_as_log_index
Jan 12, 2026

Conversation


@pull pull bot commented Aug 6, 2025

See Commits and Changes for more details.


Created by pull[bot] (v2.0.0-alpha.3)

Can you help keep this open source service alive? 💖 Please sponsor : )

@pull pull bot locked and limited conversation to collaborators Aug 6, 2025
@pull pull bot added the ⤵️ pull label Aug 6, 2025
@rockeet rockeet force-pushed the memtable_as_log_index branch from 8f386fc to d34aeaf on August 10, 2025 09:48
@rockeet rockeet force-pushed the memtable_as_log_index branch 5 times, most recently from 98475e4 to 73e3caa, on August 14, 2025 17:18
@rockeet rockeet force-pushed the memtable_as_log_index branch from 73e3caa to 92f97ae on August 14, 2025 17:21
@rockeet rockeet force-pushed the memtable_as_log_index branch 2 times, most recently from 3546546 to 9925d00, on October 28, 2025 08:03
and squashed commits:

  - merging_iterator.cc: extract macro MERGE_ITER_CMP_PREFIX(cmp)

  - merging_iterator.cc: change sizeof(HeapItemAndPrefixFast.prefix_cache) to 24 on avx512

    Because iter_type is unused in HeapItemAndPrefixFast, prefix_cache can be
    24 bytes, which holds 6 uint32 fields; in MyTopling that leaves 5 user
    fields (besides the 1 uint32 prefix for the table/index id)

  - merging_iterator.cc: UpdatePrefixCache() bug fix: use user_key, and note that the load mask and store mask are different
This makes the code simple and avoids repetition
@rockeet rockeet force-pushed the memtable_as_log_index branch from 55e8f40 to 3e4c9d5 on October 28, 2025 12:50
blsi(x) = x & -x; since lt is a subset of neq, (lt & neq) == lt,
so lt & (neq & -neq) == (lt & neq) & -neq == lt & -neq.
VirtualCmpNoTS is not our optimization target;
just make it work, it is still faster than upstream
rocksdb thanks to the statically bound conditions
`prefix_same_as_start_` and `iterate_upper_bound_`.
@rockeet rockeet force-pushed the memtable_as_log_index branch from 67ede94 to 5241049 on October 29, 2025 14:07
@rockeet rockeet force-pushed the memtable_as_log_index branch from a7c5c2e to 03aeb5a on October 31, 2025 01:43
If Next() reaches a file_iter boundary and Prev() is then called,
the scan function must be reset, for example:

```c++
iter->Seek(); assert(iter->Valid());
iter->Next(); assert(iter->Valid());
iter->Next(); assert(iter->Valid()); // switch to next file
iter->Prev(); assert(iter->Valid()); // switch to prev file
iter->Next(); assert(iter->Valid()); // switch to next file
```
This is mainly for MergingIterator and works well for the others.
Compiler optimization can fold the top_changed branch with
update_top() after inline expansion.
…etch

Set value fetch functions to reduce call chains; only the
forward scan is optimized, the backward scan stays safe and correct.
DBIter is embedded in ArenaWrappedDBIter; although the overhead is
very small, this commit provides a way to remove it.
@rockeet rockeet force-pushed the memtable_as_log_index branch from bb00de1 to 4bac285 on October 31, 2025 14:04


3 participants