GaussianFlow: Gaussian Dynamics for 4D Content Creation
Please refer to this repository for our CUDA implementation of the variables needed to compute Gaussian flow.
For now, we compute the Gaussian flow from those variables with the following code.
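Before the full listing, the core identity it implements is the per-pixel flow formula flow(x) = B_t2 B_t1^(-1) (x - mu_t1) + mu_t2 - x. A toy sanity check (synthetic values, not the renderer's outputs): when the 2D covariance is unchanged between the two timestamps, B_t2 B_t1^(-1) is the identity, so the flow reduces to the pure translation mu_t2 - mu_t1 for every pixel x.

```python
import torch

# Hypothetical toy values standing in for the rendered quantities below.
mu_t1 = torch.tensor([1.0, 2.0])   # Gaussian center at t_1
mu_t2 = torch.tensor([1.5, 2.5])   # Gaussian center at t_2
B = torch.eye(2)                   # B_t2 @ B_t1^(-1) when the covariance is unchanged
x = torch.tensor([3.0, -1.0])      # an arbitrary pixel position

# flow(x) = B_t2 B_t1^(-1) (x - mu_t1) + mu_t2 - x
flow = B @ (x - mu_t1) + mu_t2 - x
```

With an identity covariance factor the dependence on x cancels and `flow` equals `mu_t2 - mu_t1`, i.e. every pixel covered by the Gaussian moves with its center.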
### When computing GaussianFlow, we detach the variables related to t_1 so that
### gradients back-propagate only to the variables at t_2, leaving the t_1 variables
### unchanged -- they were already updated at t_1 - 1 with the same logic.
### This speeds up training, since fewer variables need updating. Incidentally, not
### detaching the t_1 variables does not hurt performance; it only slows training down.
# Gaussian parameters at time t_1
proj_2D_t_1 = render_t_1["proj_2D"]
gs_per_pixel = render_t_1["gs_per_pixel"].long()
weight_per_gs_pixel = render_t_1["weight_per_gs_pixel"]
x_mu = render_t_1["x_mu"]
cov2D_inv_t_1 = render_t_1["conic_2D"].detach()
# Gaussian parameters at time t_2
proj_2D_t_2 = render_t_2["proj_2D"]
cov2D_inv_t_2 = render_t_2["conic_2D"]   # the conic is the inverse of the 2D covariance
cov2D_t_2 = render_t_2["conic_2D_inv"]   # so "conic_2D_inv" is the 2D covariance itself
cov2D_t_2_mtx = torch.zeros([cov2D_t_2.shape[0], 2, 2]).cuda()
cov2D_t_2_mtx[:, 0, 0] = cov2D_t_2[:, 0]
cov2D_t_2_mtx[:, 0, 1] = cov2D_t_2[:, 1]
cov2D_t_2_mtx[:, 1, 0] = cov2D_t_2[:, 1]
cov2D_t_2_mtx[:, 1, 1] = cov2D_t_2[:, 2]
cov2D_inv_t_1_mtx = torch.zeros([cov2D_inv_t_1.shape[0], 2, 2]).cuda()
cov2D_inv_t_1_mtx[:, 0, 0] = cov2D_inv_t_1[:, 0]
cov2D_inv_t_1_mtx[:, 0, 1] = cov2D_inv_t_1[:, 1]
cov2D_inv_t_1_mtx[:, 1, 0] = cov2D_inv_t_1[:, 1]
cov2D_inv_t_1_mtx[:, 1, 1] = cov2D_inv_t_1[:, 2]
# B_t_2: matrix square root of cov2D_t_2 via SVD (one call instead of three)
U_t_2, S_t_2, V_t_2 = torch.svd(cov2D_t_2_mtx)
B_t_2 = torch.bmm(torch.bmm(U_t_2, torch.diag_embed(S_t_2)**(1/2)), V_t_2.transpose(1,2))
# B_t_1^(-1): matrix square root of the inverse covariance at t_1
U_inv_t_1, S_inv_t_1, V_inv_t_1 = torch.svd(cov2D_inv_t_1_mtx)
B_inv_t_1 = torch.bmm(torch.bmm(U_inv_t_1, torch.diag_embed(S_inv_t_1)**(1/2)), V_inv_t_1.transpose(1,2))
# Compute B_t_2 * B_inv_t_1
B_t_2_B_inv_t_1 = torch.bmm(B_t_2, B_inv_t_1)
# Compute cov2D_t_2 * cov2D_inv_t_1 (manual 2x2 product, kept commented for reference)
# cov2D_t_2cov2D_inv_t_1 = torch.zeros([cov2D_inv_t_2.shape[0],2,2]).cuda()
# cov2D_t_2cov2D_inv_t_1[:, 0, 0] = cov2D_t_2[:, 0] * cov2D_inv_t_1[:, 0] + cov2D_t_2[:, 1] * cov2D_inv_t_1[:, 1]
# cov2D_t_2cov2D_inv_t_1[:, 0, 1] = cov2D_t_2[:, 0] * cov2D_inv_t_1[:, 1] + cov2D_t_2[:, 1] * cov2D_inv_t_1[:, 2]
# cov2D_t_2cov2D_inv_t_1[:, 1, 0] = cov2D_t_2[:, 1] * cov2D_inv_t_1[:, 0] + cov2D_t_2[:, 2] * cov2D_inv_t_1[:, 1]
# cov2D_t_2cov2D_inv_t_1[:, 1, 1] = cov2D_t_2[:, 1] * cov2D_inv_t_1[:, 1] + cov2D_t_2[:, 2] * cov2D_inv_t_1[:, 2]
# Isotropic version of GaussianFlow (ignores the covariance change between frames)
# predicted_flow_by_gs = (proj_2D_t_2[gs_per_pixel] - proj_2D_t_1[gs_per_pixel].detach()) * weight_per_gs_pixel.unsqueeze(-1).detach()
# Full formulation of GaussianFlow
cov_multi = (B_t_2_B_inv_t_1[gs_per_pixel] @ x_mu.permute(0,2,3,1).unsqueeze(-1).detach()).squeeze()
# The per-Gaussian, per-pixel weight is broadcast over the two flow channels.
predicted_flow_by_gs = (cov_multi + proj_2D_t_2[gs_per_pixel] - proj_2D_t_1[gs_per_pixel].detach()
                        - x_mu.permute(0,2,3,1).detach()) * weight_per_gs_pixel.unsqueeze(-1).detach()
# Flow supervision loss
large_motion_msk = torch.norm(optical_flow, p=2, dim=-1) >= flow_thresh # flow_thresh = 0.1 or another value to filter out noise; we assume a precomputed optical flow has already been loaded as pseudo ground truth
Lflow = torch.norm((optical_flow - predicted_flow_by_gs.sum(0))[large_motion_msk], p=2, dim=-1).mean()
loss = loss + flow_weight * Lflow # flow_weight can be 1, 0.1, or whatever you like.
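The factor B used above, built as U * S^(1/2) * V^T from an SVD, is a matrix square root of the 2D covariance: for a symmetric positive-definite input, U equals V, so B is symmetric and B @ B recovers the covariance. A minimal standalone check on synthetic data (not the renderer's tensors, and on CPU rather than `.cuda()` for portability):

```python
import torch

# Build a small batch of symmetric positive-definite 2x2 "covariances".
A = torch.randn(8, 2, 2)
cov = A @ A.transpose(1, 2) + 0.1 * torch.eye(2)  # A A^T + eps*I is SPD

# Same construction as in the listing: B = U * S^(1/2) * V^T.
U, S, V = torch.svd(cov)
B = U @ torch.diag_embed(S.sqrt()) @ V.transpose(1, 2)

# For SPD inputs, B is symmetric and squares back to cov.
ok_symmetric = torch.allclose(B, B.transpose(1, 2), atol=1e-4)
ok_sqrt = torch.allclose(B @ B.transpose(1, 2), cov, atol=1e-4)
```

The same property is what makes B_t_2 @ B_inv_t_1 the linear map that transports a pixel offset x - mu_t1 from the Gaussian's shape at t_1 to its shape at t_2.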