Insights from the Unseen – Occlusion in Forest Laser Scanning


When something is in the way, it’s in the way.

When a laser pulse meets a leaf, branch, or trunk, it’s either reflected back to the sensor or blocked from travelling further. The result is occlusion, a feature of the environment that leaves gaps in the three-dimensional data used to describe forest structure. While LiDAR has revolutionized our ability to measure forests in exquisite detail, it still struggles to capture what lies hidden behind dense canopies.

In their preprint Insights from the Unseen – Occlusion in Forest Laser Scanning, Kükenbrink et al. (2025) offer the most comprehensive exploration to date of this often-overlooked issue. The paper establishes a framework for defining, measuring, and mitigating occlusion, laying the groundwork for more reliable forest structural analysis across terrestrial, mobile, and aerial laser scanning platforms. The work turns an invisible problem into a quantifiable one, and, in doing so, reframes how researchers think about what LiDAR doesn’t see.

Occlusion matters because it undermines the completeness of LiDAR-derived forest data. Forest structure and biomass estimates depend on point clouds that faithfully represent both the vegetation and the empty space between its elements. When parts of the forest are hidden from view, the missing data distort how we understand tree volume, canopy layering, and even biodiversity indicators that depend on structural metrics.

The authors identify three main types of occlusion, each with distinct causes and implications. (1) Absolute occlusions occur when laser pulses can never reach a target, such as the inside of a trunk or areas beneath water or dense surfaces. (2) Geometric occlusions happen when one object, like a tree stem, blocks the view of another, a problem that can be reduced by scanning from multiple viewpoints. (3) Sub-footprint occlusions arise when beams partially hit edges or fine structures like leaves, scattering or weakening the signal below the sensor’s detection threshold.

Together, these effects produce the hidden irony of LiDAR data: the denser and more structurally complex a forest is, the greater the data gap. The very ecosystems most worth studying, from old-growth temperate stands to tropical rainforests, are those where occlusion is most severe. Different capture methods are also affected by occlusion in different ways.

Terrestrial laser scanners (TLS), positioned on the ground, offer detailed views of trunks and understory vegetation but struggle to see the canopy above, a phenomenon known as understory occlusion. Mobile laser scanners (MLS), whether handheld or backpack-mounted, improve coverage by allowing operators to move around obstacles, yet still face limitations in upper canopy penetration. Unoccupied aerial systems (ULS or UAV-LiDAR) flip the problem: they see the canopy well but miss much of what lies below.

The forest’s seasonal and structural state also plays a major role. Under “leaf-on” conditions, broadleaf forests block far more pulses than in the “leaf-off” months of winter or early spring. Coniferous stands, with their year-round needles and dense crowns, consistently produce higher levels of occlusion. Among all ecosystems, evergreen tropical rainforests pose the greatest challenge: their complex vertical layering and persistent canopy lead to some of the highest data losses observed.

Kükenbrink and colleagues go beyond describing the problem by providing a theoretical framework to map and measure occlusion. Their approach is based on voxelization: dividing the forest into a grid of tiny three-dimensional cubes, or voxels, each classified according to whether it is filled (hit by a return), empty, unobserved, or occluded. Ray-tracing algorithms then reconstruct the trajectory of each laser pulse through space, identifying which volumes were “seen” and which were hidden.
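The idea can be sketched in a few lines of code. The toy function below is not the authors' implementation; it assumes a single scan position and samples each pulse at sub-voxel steps (a simplification of proper ray tracing), and all names and parameters are illustrative:

```python
import numpy as np

def classify_voxels(origin, returns, bounds_min, bounds_max, voxel_size):
    """Classify voxels of a 3-D grid from one scan position.
    0 = unobserved/occluded (never crossed by a pulse),
    1 = empty (traversed by a pulse without a return),
    2 = filled (contains a return)."""
    dims = np.ceil((bounds_max - bounds_min) / voxel_size).astype(int)
    grid = np.zeros(dims, dtype=np.uint8)

    def voxel_index(p):
        # Map a 3-D point to its voxel index, or None if outside the grid.
        idx = np.floor((p - bounds_min) / voxel_size).astype(int)
        return tuple(idx) if np.all(idx >= 0) and np.all(idx < dims) else None

    for hit in returns:
        direction = hit - origin
        dist = np.linalg.norm(direction)
        direction = direction / dist
        # Sample the pulse path at quarter-voxel steps up to the return.
        for t in np.arange(0.0, dist, voxel_size / 4):
            idx = voxel_index(origin + t * direction)
            if idx is not None and grid[idx] == 0:
                grid[idx] = 1          # traversed: observed empty
        idx = voxel_index(hit)
        if idx is not None:
            grid[idx] = 2              # return falls inside this voxel: filled
    return grid
```

Every voxel still labeled 0 after all pulses have been traced is exactly the occluded (or unobserved) volume the framework sets out to quantify.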

This voxel-based approach not only quantifies occlusion but also visualizes it, turning an abstract concept into tangible 3D maps. In case examples presented in the study, the authors illustrate occlusion patterns from both terrestrial and aerial perspectives. The resulting maps show how occlusion accumulates in specific layers: TLS data reveal shadowed regions in the upper canopy, while ULS data highlight blind spots beneath dense foliage. When combined, these complementary views offer a fuller picture of the forest volume that remains unseen.

Understanding occlusion is the first step; mitigating it is the next. The authors also outline a series of strategies to minimize data loss. At the most basic level, multi-viewpoint scanning (acquiring data from several positions) remains the most effective method for reducing geometric occlusion. For terrestrial surveys, this means placing scanners in a grid or circular pattern; for aerial systems, it involves overlapping flight lines from different angles. More advanced strategies combine data from multiple platforms, merging TLS and ULS point clouds to fill gaps from both above and below. To read more about LiDAR data fusion, check out another article on this website. In tropical and temperate forests, such integration has been shown to reduce occluded volume to less than 2% of the total canopy space, a near-complete structural representation.
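The gap-filling effect of fusing ground and aerial views can be illustrated with a toy voxel model. The observation probabilities below are invented for illustration only (they are not the study's numbers); the point is simply that a voxel counts as observed if either platform saw it:

```python
import numpy as np

def occluded_fraction(observed):
    """Fraction of voxels never observed by any pulse."""
    return 1.0 - observed.mean()

rng = np.random.default_rng(42)
z = np.linspace(0.0, 1.0, 20)          # normalized height: 0 = ground, 1 = canopy top

# Hypothetical chance that a voxel at height z is seen from each platform:
# TLS sees the lower forest well, ULS sees the upper canopy well.
p_tls = 0.95 - 0.8 * z
p_uls = 0.15 + 0.8 * z

# Boolean observation masks over the same 20 x 10 x 10 voxel grid.
tls_obs = rng.random((20, 10, 10)) < p_tls[:, None, None]
uls_obs = rng.random((20, 10, 10)) < p_uls[:, None, None]

# Multi-platform fusion: a voxel is observed if either platform crossed it.
fused = tls_obs | uls_obs
print(occluded_fraction(tls_obs), occluded_fraction(uls_obs), occluded_fraction(fused))
```

Because the fused mask is a superset of each individual one, its occluded fraction can never exceed either platform's alone, which is the logic behind merging TLS and ULS acquisitions.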

Looking ahead, the authors envision intelligent, adaptive scanning systems capable of adjusting in real time to detected occlusion. By integrating occlusion mapping algorithms into the scanning process, future LiDAR systems could plan flight paths or scanner placements dynamically, prioritizing under-sampled areas and reducing redundancy. Artificial intelligence, coupled with robotic or drone-based platforms, could one day enable autonomous, occlusion-aware forest mapping that adapts to each landscape’s complexity.

The study concludes with a reminder: occlusion may be inevitable, but ignorance isn’t. By explicitly accounting for occlusion in both data collection and analysis, researchers can move toward more transparent, reproducible, and reliable structural metrics. Understanding what LiDAR doesn’t see is just as important as interpreting what it does. In forest remote sensing, acknowledging the unseen is not a limitation; it’s a step toward clarity. When scientists integrate occlusion awareness into survey design, data processing, and model interpretation, they don’t just capture trees more accurately; they capture forests more truthfully.

This text is a summary of the following preprint:

Kükenbrink, D., Gassilloud, M., Brede, B., Bornand, A., Calders, K., Cherlet, W., Eichhorn, M.P., Frey, J., Gretler, C.M., Höfle, B. and Kattenborn, T., 2025. Insights from the Unseen – Occlusion in Forest Laser Scanning. https://doi.org/10.31223/X5N16X

Text is authored by Henry Cerbone – Department of Biology – University of Oxford