No single sensor sees the whole forest!

LiDAR Data Fusion: The Next Frontier in Forest Attribute Estimation

Although ground-based LiDAR has driven major improvements in forest inventory and research in recent decades, LiDAR alone still struggles to capture the complete picture of a forest ecosystem. Terrestrial and mobile laser scanners operating from the ground are limited by dense vegetation and tree height, both forms of occlusion. For more on the impacts of this problem, see another article in this series on the work “Insights from the Unseen – Occlusion in Forest Laser Scanning” by Kükenbrink et al. Airborne laser scanners, by contrast, provide a sweeping view from above but can rarely capture structure beneath the canopy. Even when point clouds appear detailed, they are still structurally constrained: LiDAR reveals the shape of forests, not their inner workings. It excels at geometry but remains blind to the physiology, composition, and chemistry that drive forest function.

To extend the application potential of LiDAR data, researchers have increasingly turned to data fusion, the process of combining information from multiple sensing modalities to improve interpretation and accuracy. By merging LiDAR with complementary data sources such as multispectral, hyperspectral, radar, or satellite imagery, data fusion aims to deliver a more complete, multidimensional understanding of forest structure, species composition, and function.

In their comprehensive review published in Current Forestry Reports, Balestra et al. (2024) take stock of this fast-evolving field. The paper examines how LiDAR data fusion is being applied across scales, from single-tree studies to continental monitoring, and details the operational considerations that determine when fusion truly adds value. By leveraging the wider 3DForEcoTech COST Action ecosystem, the authors also provide a single, synthesized definition of “data fusion”. Their definition, the result of 11 experts assembled from across the network, is “the merging of data or derived features from different sources (instruments/devices), of which at least one is LiDAR, to improve the characteristics of the LiDAR dataset and/or enable enhanced forest observations,” and it promises to better unify the field going forward. The work is both a technical synthesis and a practical guide for researchers navigating the expanding ecosystem of 3D remote sensing.

To assess how far LiDAR fusion has advanced, Balestra and colleagues conducted a structured review of the scientific literature between 2014 and 2023. Using a PRISMA-based screening approach, they identified 151 relevant studies published worldwide. Most studies fused airborne LiDAR with optical datasets, often multispectral or hyperspectral imagery collected from satellites or drones. This combination has become a de facto standard for improving species classification and biomass estimation. Fewer studies incorporated terrestrial or mobile LiDAR, though interest in combining ground-based and aerial point clouds has grown rapidly since 2016. Spaceborne LiDAR, such as ICESat-2 and GEDI, has also entered the fusion arena, particularly for upscaling local measurements to global contexts.

Data fusion transforms isolated measurements into multidimensional forest intelligence. It allows researchers to scale up from plot to landscape, linking micro-level structure with macro-level patterns of biomass, biodiversity, and change. LiDAR data fusion is not a single technique but a family of approaches. Balestra et al. distinguish two dominant categories: (1) data-level fusion, in which raw datasets are combined directly, such as point cloud to point cloud or point cloud to spectral bands, and (2) feature-level fusion, in which LiDAR-derived products are combined with features, spectral bands, or indices. The former accounted for 22% of the surveyed literature, while the latter was far more common, accounting for 78% of the papers.
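To make the distinction concrete, the sketch below illustrates feature-level fusion under stated assumptions: per-plot LiDAR-derived structural metrics are joined with plot-level spectral indices and passed to a generic classifier. The column names, synthetic values, and the choice of a scikit-learn random forest are illustrative and are not taken from Balestra et al. (2024).

```python
# Minimal sketch of feature-level fusion: LiDAR-derived canopy metrics are
# joined with spectral indices per plot and fed to a classifier.
# Column names and the synthetic data are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n_plots = 200

# LiDAR-derived structural features (e.g., computed per plot from an ALS point cloud).
lidar_features = pd.DataFrame({
    "plot_id": np.arange(n_plots),
    "p95_height_m": rng.normal(22, 5, n_plots),       # 95th percentile of return heights
    "canopy_cover": rng.uniform(0.3, 0.95, n_plots),  # fraction of first returns above 2 m
    "rumple_index": rng.uniform(1.0, 2.5, n_plots),   # canopy surface roughness
})

# Spectral features from co-located multispectral imagery (plot-mean indices).
spectral_features = pd.DataFrame({
    "plot_id": np.arange(n_plots),
    "ndvi": rng.uniform(0.4, 0.9, n_plots),
    "red_edge_index": rng.uniform(0.2, 0.7, n_plots),
})

# Feature-level fusion: merge the two derived-feature tables on a shared plot key.
fused = lidar_features.merge(spectral_features, on="plot_id")

# Example downstream task: species-group classification from the fused features.
labels = rng.integers(0, 3, n_plots)  # placeholder species-group labels
X = fused.drop(columns="plot_id").to_numpy()
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
print("Training accuracy:", clf.score(X, labels))
```

Data-level fusion would instead operate before any features are derived, for example by attaching spectral band values directly to individual LiDAR returns or co-registering two point clouds into one.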

A growing number of studies also explore AI-driven fusion, using neural networks to learn relationships between modalities automatically. Such models promise scalable and automated analysis, though they remain data-hungry and difficult to interpret. Another persistent challenge lies in data alignment and standardization. Differences in coordinate systems, point densities, and acquisition times can introduce inconsistencies that are difficult to reconcile. Moreover, the lack of common processing frameworks means that workflows are often customized from scratch for each project, limiting reproducibility.
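As an illustration of the alignment problem, the sketch below shows two common harmonization steps under assumed parameters: reprojecting LiDAR coordinates into the coordinate reference system of the optical imagery, and thinning the point cloud to a common density. The EPSG codes, voxel size, and synthetic points are assumptions for the example; real workflows must also reconcile vertical datums and acquisition times.

```python
# Minimal sketch of pre-fusion alignment: reproject LiDAR coordinates into
# the imagery CRS, then voxel-thin to a comparable point density.
# EPSG codes, voxel size, and the toy points are illustrative assumptions.
import numpy as np
from pyproj import Transformer

rng = np.random.default_rng(0)

# Toy LiDAR points in geographic coordinates (lon, lat, height above ground).
lon = rng.uniform(11.00, 11.01, 10_000)
lat = rng.uniform(46.00, 46.01, 10_000)
z = rng.uniform(0, 35, 10_000)

# Step 1: reproject to the (assumed) projected CRS of the optical imagery.
transformer = Transformer.from_crs("EPSG:4326", "EPSG:32632", always_xy=True)
x, y = transformer.transform(lon, lat)
points = np.column_stack([x, y, z])

# Step 2: voxel thinning to a common density (keep one point per 1 m voxel),
# so datasets with very different point densities can be compared fairly.
voxel_size = 1.0
voxel_idx = np.floor(points / voxel_size).astype(np.int64)
_, keep = np.unique(voxel_idx, axis=0, return_index=True)
thinned = points[np.sort(keep)]

print(f"{len(points)} points reduced to {len(thinned)} after voxel thinning")
```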

The authors find gains across several areas, such as segmentation and tree species classification, whereas metrics like tree height can be calculated effectively from LiDAR alone. Many of the current gains are marginal, and thus further operationalization of methods is needed before widespread adoption. Looking forward, the authors envision a shift toward fusion-ready infrastructures: standardized data formats, interoperable software, and shared benchmarks that allow researchers to compare methods more easily. The rise of open datasets and cloud-based processing environments may help close the gap between research prototypes and operational practice. Additionally, more studies in the Southern Hemisphere are needed to develop a global view.

The review by Balestra et al. paints a picture of a field in need of operational maturity. Over the past decade, fusion has evolved from a promising experiment into an actionable practice. The next decade will determine whether it becomes routine practice, embedded in forest monitoring workflows from national inventories to carbon reporting schemes. Achieving that vision will depend on more than new algorithms. It will require collaboration across disciplines, shared data standards, practical cost–benefit evaluations, and a more global distribution of studies to ensure that fusion delivers not just better models, but better management outcomes. When these pieces come together, multiple sensors will indeed speak one language, revealing forests in all their structural, spectral, and ecological complexity.

This text is a summary of the following paper:

Balestra, M., Marselis, S., Sankey, T.T. et al. LiDAR Data Fusion to Improve Forest Attribute Estimates: A Review. Curr. For. Rep. 10, 281–297 (2024). https://doi.org/10.1007/s40725-024-00223-7

Text is authored by Henry Cerbone – Department of Biology – University of Oxford