Rendering

In many respects, point-based $\alpha$-blending, NeRF-style volumetric rendering, and Gaussian Splatting share the same low-level rendering principle: the color at a pixel in the image is approximated by integrating samples along the ray that passes through that pixel. The final color is a weighted sum of the colors of the 3D points sampled along this ray, with weights derived from the transmittance.

The color $C$ of a pixel is obtained by integrating the radiance emitted along the optical ray $\mathbf{r}(t) = \mathbf{o} + t \mathbf{d}$ over the interval $[t_n, t_f]$, weighted by the local density:

\begin{displaymath}
C = \int_{t_n}^{t_f} T(t) \sigma \left( \mathbf{r}(t) \right) \mathbf{c} \left( \mathbf{r}(t), \mathbf{d} \right) dt
\end{displaymath} (9.93)

where
\begin{displaymath}
T(t) = \exp \left( - \int_{t_n}^{t} \sigma \left( \mathbf{r}(s) \right) ds \right)
\end{displaymath} (9.94)

The function $T(t)$ denotes the accumulated transmittance along the ray from $t_n$ to $t$, that is, the probability that the ray travels from $t_n$ to $t$ without hitting any particle.
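The continuous formulation of Eqs.\ (9.93)--(9.94) can be checked numerically. The sketch below assumes a toy medium with constant density $\sigma$ and constant color $c$ along the ray, for which the integral has the closed form $C = c \left( 1 - e^{-\sigma (t_f - t_n)} \right)$; the values of $\sigma$, $c$, and the ray interval are arbitrary illustrative choices.

```python
import math

# Toy medium: constant density sigma and constant color c along the ray,
# so Eq. (9.93) reduces to C = c * (1 - exp(-sigma * (t_f - t_n))).
sigma, c = 2.0, 0.7
t_n, t_f, n = 0.0, 1.5, 100_000

dt = (t_f - t_n) / n
C = 0.0
optical_depth = 0.0  # running integral of sigma from t_n to t
for _ in range(n):
    T = math.exp(-optical_depth)  # transmittance T(t), Eq. (9.94)
    C += T * sigma * c * dt       # integrand of Eq. (9.93)
    optical_depth += sigma * dt

expected = c * (1.0 - math.exp(-sigma * (t_f - t_n)))
assert abs(C - expected) < 1e-3
```

The Riemann sum converges to the closed-form value as the step $dt$ shrinks, which is exactly the discretization developed next.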

In practice, the integral is approximated by a discrete sum over $N$ samples taken along the ray. The color $C$ of a pixel can therefore be written as a summation of per-sample contributions:

\begin{displaymath}
C = \sum_{i=1}^{N} T_i \left( 1 - \exp( - \sigma_i \delta_i) \right) \mathbf{c}_i
\end{displaymath} (9.95)

where $T_i = \exp \left( - \sum_{j=1}^{i-1} \sigma_j \delta_j \right)$ and $\delta_i = t_{i+1} - t_{i}$.

This representation reduces to classic $\alpha$-blending by setting $\alpha_i = 1 - \exp \left( - \sigma_i \delta_i \right)$. In this way, the transmittance can also be expressed as $T_i = \prod_{j=1}^{i-1} (1 - \alpha_j)$.
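The equivalence between the two forms of the transmittance follows from $e^{-\sum_j \sigma_j \delta_j} = \prod_j e^{-\sigma_j \delta_j} = \prod_j (1 - \alpha_j)$, and can be verified numerically. The per-sample densities and interval lengths below are arbitrary illustrative values.

```python
import math

# Hypothetical per-sample densities sigma_i and interval lengths delta_i.
sigmas = [0.5, 1.2, 0.0, 3.0]
deltas = [0.1, 0.1, 0.2, 0.1]

# Opacity of each sample: alpha_i = 1 - exp(-sigma_i * delta_i).
alphas = [1.0 - math.exp(-s * d) for s, d in zip(sigmas, deltas)]

def T_exp(i):
    """Transmittance as exp of the summed optical depth (Eq. 9.95)."""
    return math.exp(-sum(s * d for s, d in zip(sigmas[:i], deltas[:i])))

def T_prod(i):
    """Transmittance as a running product of (1 - alpha_j)."""
    out = 1.0
    for a in alphas[:i]:
        out *= 1.0 - a
    return out

# Both forms agree for every sample index.
for i in range(len(sigmas) + 1):
    assert math.isclose(T_exp(i), T_prod(i))
```

The product form is the one used in rasterization-style renderers, since it lets the transmittance be updated incrementally as samples are composited front to back.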

In typical neural point-based approaches, the color $C$ of a pixel is obtained by blending the set $\mathcal{N}$ of depth-ordered points that overlap the pixel:

\begin{displaymath}
C = \sum_{i \in \mathcal{N} } c_i \alpha_i \prod_{j=1}^{i-1} \left( 1 - \alpha_j \right)
\end{displaymath} (9.96)
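Equation (9.96) maps directly onto front-to-back $\alpha$-compositing, where the transmittance is accumulated incrementally. A minimal sketch, assuming points are given as (color, $\alpha$) pairs already sorted from near to far; the early-termination threshold is an illustrative choice, not prescribed by the equation:

```python
def composite(points, t_min=1e-4):
    """Front-to-back alpha compositing of depth-ordered points (Eq. 9.96).

    points: list of (color, alpha) pairs sorted near-to-far,
            where color is an RGB tuple and alpha is in [0, 1].
    """
    C = [0.0, 0.0, 0.0]
    T = 1.0  # accumulated transmittance prod_{j<i} (1 - alpha_j)
    for c, a in points:
        w = T * a  # weight of this point's color contribution
        C = [Ck + w * ck for Ck, ck in zip(C, c)]
        T *= 1.0 - a  # attenuate for all points behind this one
        if T < t_min:  # early termination once the pixel is saturated
            break
    return C

# A fully opaque front point hides everything behind it.
print(composite([((1.0, 0.0, 0.0), 1.0), ((0.0, 1.0, 0.0), 0.8)]))
# → [1.0, 0.0, 0.0]
```

Early termination is what makes this formulation efficient in splatting-style renderers: once the accumulated transmittance is negligible, the remaining points cannot visibly contribute.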

Paolo Medici
2025-10-22