
Question about FPS measurement #5


Description

@RuixiangXue

Hi, thanks for sharing this great work!

I noticed that FPS is computed differently between your repo and the OctreeGS implementation.
In your repo, the FPS is measured as follows:
fpss = []
# warm-up renders before timing
for _ in range(10):
    _ = render(views[0], gaussians, pipeline, background, use_trained_exp=train_test_exp, separate_sh=separate_sh)
for idx, view in enumerate(tqdm(views, desc="Rendering progress")):
    rendering_pack = render(view, gaussians, pipeline, background, use_trained_exp=train_test_exp, separate_sh=separate_sh)
    rendering = rendering_pack["render"]
    render_time = rendering_pack["render_time"]
    fpss.append(1.0 / render_time)
print("fps=", torch.tensor(fpss).mean())

while in other implementations, it is sometimes measured like this:

t_list = []
for idx, view in enumerate(tqdm(views, desc="Rendering progress")):
    torch.cuda.synchronize(); t0 = time.time()   # sync so the timing covers the full GPU render
    rendering = render(view, gaussians, pipeline, background)["render"]
    torch.cuda.synchronize(); t1 = time.time()
    t_list.append(t1 - t0)
t = np.array(t_list)
fps = 1.0 / t.mean()
print(f'Test FPS: {fps:.5f}')
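
For reference, per-frame GPU time can also be measured with CUDA events instead of time.time(); a minimal sketch, assuming the same render(view, gaussians, pipeline, background) call as above:

import torch

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)

t_list = []
for view in views:
    start.record()
    rendering = render(view, gaussians, pipeline, background)["render"]
    end.record()
    torch.cuda.synchronize()                         # wait for the recorded events
    t_list.append(start.elapsed_time(end) / 1000.0)  # elapsed_time is in ms -> s

fps = len(t_list) / sum(t_list)
print(f'Test FPS (CUDA events): {fps:.5f}')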

Could you please clarify the main difference between these two FPS measurement methods?
In my experiments, the results differ quite a lot: the FPS computed by your method is significantly higher than the latter (using the same 3DGS model and the same rasterization from your work).
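
One difference I can see is in the aggregation itself: the first snippet averages per-frame FPS values (mean of 1/t_i), while the second inverts the mean frame time (1/mean(t_i)); by Jensen's inequality the former is always at least as large. A minimal sketch with made-up frame times:

import numpy as np

# Made-up per-frame render times in seconds, just to illustrate the
# two aggregation formulas; one slow frame pulls them far apart.
t = np.array([0.002, 0.003, 0.004, 0.020])

print((1.0 / t).mean())  # mean of per-frame FPS  -> ~283.3
print(1.0 / t.mean())    # 1 / mean frame time    -> ~137.9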
