Hi,
Thanks for the amazing work. While reviewing the documentation in the LightX2V README, we noticed there are no optimizations mentioned for B200 GPUs, particularly around model quantization and attention mechanisms.
Could you share any guidance or best practices for optimizing performance on B200 (sm100) GPUs? Specifically, we’d love to learn more about:
- Recommended quantization settings or workflows
- Attention optimizations (e.g., FlashAttention, SageAttention, or other kernels)
- Known limitations or compatibility notes for sm100 architectures
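For context, here is the minimal check we run on our B200 nodes to see what the environment reports; it is plain PyTorch plus an optional SageAttention import probe, nothing LightX2V-specific, and the `sageattention` package name is our assumption based on its pip distribution:

```python
import torch

# Confirm we are on a Blackwell part; B200 should report
# compute capability (10, 0), i.e. sm100.
print(torch.cuda.get_device_name(0))
print(torch.cuda.get_device_capability(0))

# Check which scaled_dot_product_attention backends PyTorch
# considers enabled on this build/GPU.
print("flash SDP:", torch.backends.cuda.flash_sdp_enabled())
print("mem-efficient SDP:", torch.backends.cuda.mem_efficient_sdp_enabled())
print("math SDP:", torch.backends.cuda.math_sdp_enabled())

# Optional: probe for SageAttention kernels.
try:
    import sageattention  # noqa: F401
    print("sageattention import: OK")
except ImportError:
    print("sageattention import: not installed (or no sm100 wheel)")
```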
Thanks.