@leejet leejet commented Dec 6, 2025

Previously, not all ggml ops had good BF16 support (conv2d/conv3d, for example), so the code forced BF16 weights to be converted to F32. However, as more and more models (such as z-image) overflow with FP16 and require BF16, I plan to remove this restriction. For ops that do not support BF16, their weights will need to be converted from BF16 to another type (e.g., FP16 or F32) during weight loading. Developers who need to modify sd.cpp should be aware of this.

@leejet leejet merged commit bfbb929 into master Dec 6, 2025
10 checks passed
