
Abstract

In Earth observation, multimodal data fusion is an intuitive strategy to overcome the limitations of individual data sources. The complementary physical content of different data sources allows for comprehensive and precise information retrieval. With current satellite missions, such as the ESA Copernicus programme, a wide variety of data is becoming accessible at affordable cost, so future applications will have many options for data sources. Such a privilege can be beneficial only if algorithms are ready to work with various data sources. However, current data fusion studies mostly focus on the fusion of two data sources. There are two reasons for this: first, different combinations of data sources face different scientific challenges. For example, the fusion of synthetic aperture radar (SAR) data and optical images needs to handle geometric differences, while the fusion of hyperspectral and multispectral images deals with different resolutions in the spatial and spectral domains. Second, it remains expensive, in terms of both finances and labour, to acquire multiple data sources for the same region at the same time. In this paper, we provide the community with a benchmark multimodal data set, MDAS, for the city of Augsburg, Germany. MDAS includes SAR data, a multispectral image, a hyperspectral image, a digital surface model (DSM), and geographic information system (GIS) data. All these data were collected on the same date, 7 May 2018. MDAS is a new benchmark data set that provides researchers with rich options for data selection. In this paper, we run experiments for three typical remote sensing applications, namely resolution enhancement, spectral unmixing, and land cover classification, on the MDAS data set. Our experiments demonstrate the performance of representative state-of-the-art algorithms whose outcomes can serve as baselines for further studies. The dataset is publicly available at https://doi.org/10.14459/2022mp1657312 and the code (including the pre-trained models) at https://doi.org/10.5281/zenodo.7428215.
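
A minimal sketch of how the MDAS modalities (hyperspectral, multispectral, SAR, DSM) might be loaded and brought onto a common grid for a simple fusion baseline, using the rasterio library. The file names, band counts, and grid choice are hypothetical assumptions for illustration; consult the data set documentation at the DOI above for the actual file layout.

# Hypothetical loading sketch for MDAS-style multimodal rasters (not the authors' code).
import numpy as np
import rasterio
from rasterio.enums import Resampling

def read_resampled(path, target_height, target_width):
    """Read a raster and bilinearly resample it to a common target grid."""
    with rasterio.open(path) as src:
        return src.read(
            out_shape=(src.count, target_height, target_width),
            resampling=Resampling.bilinear,
        )

# Use the hyperspectral scene as the reference grid (file name is hypothetical).
with rasterio.open("mdas_augsburg_hyperspectral.tif") as ref:
    height, width = ref.height, ref.width
    hsi = ref.read()  # shape: (bands, H, W)

sar = read_resampled("mdas_augsburg_sar.tif", height, width)
msi = read_resampled("mdas_augsburg_multispectral.tif", height, width)
dsm = read_resampled("mdas_augsburg_dsm.tif", height, width)

# Stack all modalities along the channel axis as input to a fusion baseline.
fused = np.concatenate([hsi, msi, sar, dsm], axis=0)
print(fused.shape)
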

Details

Title
MDAS: a new multimodal benchmark dataset for remote sensing
Author
Hu, Jingliang 1; Liu, Rong 1; Hong, Danfeng 2; Camero, Andrés 3; Yao, Jing 4; Schneider, Mathias 5; Kurz, Franz 5; Segl, Karl 6; Xiao Xiang Zhu 7

1 Data Science in Earth Observation (SiPEO), Technical University of Munich (TUM), 80333 Munich, Germany
2 Remote Sensing Technology Institute (IMF), German Aerospace Center (DLR), 82234 Wessling, Germany; now at: Key Laboratory of Computational Optical Imaging Technology, Aerospace Information Research Institute, Chinese Academy of Sciences, 100094 Beijing, China
3 Remote Sensing Technology Institute (IMF), German Aerospace Center (DLR), 82234 Wessling, Germany; Helmholtz AI, 85764 Neuherberg, Germany
4 Data Science in Earth Observation (SiPEO), Technical University of Munich (TUM), 80333 Munich, Germany; now at: Key Laboratory of Computational Optical Imaging Technology, Aerospace Information Research Institute, Chinese Academy of Sciences, 100094 Beijing, China
5 Remote Sensing Technology Institute (IMF), German Aerospace Center (DLR), 82234 Wessling, Germany
6 German Research Center for Geosciences (GFZ), Helmholtz Center Potsdam, Telegrafenberg A17, 14473 Potsdam, Germany
7 Data Science in Earth Observation (SiPEO), Technical University of Munich (TUM), 80333 Munich, Germany; Remote Sensing Technology Institute (IMF), German Aerospace Center (DLR), 82234 Wessling, Germany
Pages
113-131
Publication year
2023
Publication date
2023
Publisher
Copernicus GmbH
ISSN
1866-3508
e-ISSN
1866-3516
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2761834449
Copyright
© 2023. This work is published under https://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.