
Rendering degradation when using customized camera poses #17

@saulgooodman

Description


Hello, NeoVerse is truly great work!

However, I encountered some issues during my implementation. Here is my situation: I have a video of a static scene, from which I uniformly sampled 21 images. Following the 4DGS feed-forward + video diffusion pipeline described in the paper, I marked the scene as static and configured the 4DGS rendering timestamps and the intrinsics K to be consistent across views. Now I want to customize the camera movement trajectory based on the reconstructed 4DGS poses.
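For context, my uniform sampling and static-scene configuration amount to roughly the sketch below (`sample_frames` is my own helper, not part of the NeoVerse code; the constant timestamp and single shared K follow from the static setting I described):

```python
import numpy as np

def sample_frames(num_frames: int, num_samples: int = 21) -> np.ndarray:
    """Uniformly sample frame indices from a video clip (my own helper)."""
    return np.linspace(0, num_frames - 1, num_samples).round().astype(int)

idx = sample_frames(210)  # e.g. a 210-frame clip -> 21 indices

# Static scene: every sampled view shares one timestamp and one
# intrinsics matrix K (this mirrors my configuration, not NeoVerse's API).
timestamps = np.zeros(len(idx))
```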

My approach: using the feed-forward 4DGS reconstructed poses from the paper as the global coordinate system for the entire scene, I specify the poses for rendering + generation within this coordinate system. However, I have run into a problem. I first took the 21 poses reconstructed by VGGT as the target trajectory, interpolated them to 81 frames, and fed them into the reconstructed 4DGS for rendering. The rendering quality degrades severely, even when using render_viewmats=feed_forward_reconstructed pose.
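In case it helps pinpoint the problem, here is a minimal sketch of how I do the 21-to-81 interpolation (assuming 4x4 camera-to-world matrices and using SciPy's Slerp; `interpolate_poses` is my own helper, not part of NeoVerse):

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def interpolate_poses(c2w: np.ndarray, n_out: int = 81) -> np.ndarray:
    """Interpolate N camera-to-world poses (N, 4, 4) to n_out poses.

    Rotations use spherical interpolation (Slerp) so each output matrix
    stays a valid rotation; naive per-element lerp of the 3x3 blocks
    would not, and could itself cause rendering artifacts.
    """
    n_in = c2w.shape[0]
    t_in = np.linspace(0.0, 1.0, n_in)
    t_out = np.linspace(0.0, 1.0, n_out)

    # Spherical interpolation of the rotation blocks.
    slerp = Slerp(t_in, Rotation.from_matrix(c2w[:, :3, :3]))
    R_out = slerp(t_out).as_matrix()

    # Linear interpolation of the translation column, per axis.
    T_out = np.stack(
        [np.interp(t_out, t_in, c2w[:, i, 3]) for i in range(3)], axis=-1
    )

    out = np.tile(np.eye(4), (n_out, 1, 1))
    out[:, :3, :3] = R_out
    out[:, :3, 3] = T_out
    return out
```

One assumption I am unsure about: whether render_viewmats expects camera-to-world or world-to-camera matrices; I pass the poses in the same convention VGGT outputs them.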


Here is an example of the rendering degradation results I'm experiencing.

Where might I have gone wrong in my implementation? Thank you for your patience!
