perf: eliminate overdraw for opaque image fills #1327
Conversation
Sounds like a big win for large images, which I am very happy to see, because this is a case (perhaps the main case?) where Vello renderers seem to me to be noticeably slower than alternatives.
Yeah, I think it’s a really nice win, though I should point out that it mainly applies to overlapping images.
Oh, I thought we didn't have image benchmarks in the Blend2D performance suite, but after reading the options documentation more carefully, I realized that we do.
In the Blend2D suite, it's using transparent images, so it probably wouldn't make a difference there. But yes, I think there probably is more that can be done to optimize the performance of images. linebender/fearless_simd#171 might also help there.
Oh, and another thing: in that particular case the images only have 1x scaling and are pixel-aligned, and I believe Blend2D has a special case for that, which is why we are much slower there. If you take a look at FillRectU and FillRectRot, for example, the story already looks different.
I think I've also seen it discussed that a lot of renderers generate mipmaps for images, which would presumably help performance a great deal in cases with large downscaling factors. https://servo.org, for example, features several images with native dimensions of over 4000px in some cases, some of which are rendered downscaled by a factor of ~10x.
I'm not sure this would help with performance, though? I actually think it would be more costly, because you need to compute the mipmaps, which you don't need to do currently. When rendering, we only sample the affected pixels, so the size of the image doesn't really make a difference here, I think.
Hmm... I had assumed that we would be doing what https://en.wikipedia.org/wiki/Image_scaling describes as "box sampling" when downscaling with a scale factor > 2x. But perhaps we're currently just dropping pixels?
If you are using nearest-neighbor (NN), then yes, the pixels will just be dropped right now. For bilinear/bicubic, we do still sample neighboring pixels, but that isn't enough if the downscale factor is larger.
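As a side note for readers following along: here is a minimal sketch of the box-sampling idea mentioned above. It is illustrative only, assumes a single-channel image and an integer downscale factor, and is not the renderer's actual sampling code.

```rust
/// Box-sample a single-channel image down by an integer `factor`:
/// each destination pixel is the average of the `factor x factor`
/// block of source pixels it covers, rather than a single point
/// (nearest-neighbor) or only the immediate neighbors (bilinear).
fn box_downsample(src: &[u8], src_w: usize, src_h: usize, factor: usize) -> Vec<u8> {
    let dst_w = src_w / factor;
    let dst_h = src_h / factor;
    let mut dst = vec![0u8; dst_w * dst_h];
    for dy in 0..dst_h {
        for dx in 0..dst_w {
            // Sum the block of source pixels covered by this destination pixel.
            let mut sum = 0u32;
            for sy in 0..factor {
                for sx in 0..factor {
                    sum += src[(dy * factor + sy) * src_w + (dx * factor + sx)] as u32;
                }
            }
            dst[dy * dst_w + dx] = (sum / (factor * factor) as u32) as u8;
        }
    }
    dst
}
```

Mipmapping amounts to precomputing such averaged levels once, so that rendering can sample from a level close to the target scale instead of averaging a large block per pixel.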
Force-pushed from 2d4526c to 422ece4.

This change mirrors what we do for solid colors, where we clear commands if a solid color covers the entire wide tile. We now apply the same approach to fully opaque images.
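A minimal sketch of the idea, using hypothetical Cmd and WideTile types (the real command and tile structures in the renderer differ; this only illustrates dropping queued commands when a full-tile opaque paint arrives):

```rust
// Hypothetical, simplified types for illustration only.
enum Cmd {
    SolidFill { color: u32 },        // 0xAARRGGBB
    ImageFill { opaque: bool },      // image data omitted for brevity
}

struct WideTile {
    cmds: Vec<Cmd>,
}

impl WideTile {
    /// Push a command that covers the entire wide tile. If the paint is
    /// fully opaque, nothing queued below it can ever be visible, so the
    /// existing commands are cleared instead of being drawn and overdrawn.
    fn push_full_cover(&mut self, cmd: Cmd) {
        let opaque = match &cmd {
            Cmd::SolidFill { color } => (*color >> 24) == 0xFF,
            Cmd::ImageFill { opaque } => *opaque,
        };
        if opaque {
            self.cmds.clear();
        }
        self.cmds.push(cmd);
    }
}
```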
The benchmark scene below shows roughly a 30% performance improvement for this case, although the exact gain depends on how many images overlap across full wide tiles.
I'll also open a follow-up PR to address the case where `has_opacities` currently returns `true` for all images.