1. Introduction
With the transformation and upgrading of China's digital cities into smart cities, traditional two-dimensional model data can no longer meet current application needs. Owing to its intuitive nature, the three-dimensional model has gradually come to be used more and more widely in engineering [1,2].
Unmanned aerial vehicles (UAVs) are unmanned aircraft with their own power units and navigation modules [3]. They rely on radio remote-control equipment or onboard preprogramming to control their flight independently [4]. They were first used by the military and then gradually acquired civil functions. They are mainly used in agricultural research [5,6,7], various types of data monitoring [8,9,10], building maintenance [11,12], dynamic site layout planning in large-scale construction projects [13], real-scene three-dimensional reconstruction [14,15,16,17], and so on. Compared with traditional measurement methods, UAV-based oblique photography for three-dimensional reconstruction is highly efficient, low-cost, and suitable for large-scale operations, and UAVs have gradually become a popular application technology.
At present, UAVs can be divided into consumer-level, professional-level, and industrial-application-level models according to their application fields. A higher-level aircraft has a stronger operating capacity but a larger volume and weight. Some aircraft need professional pilots and require airspace applications, which limits their flexibility. Consumer UAVs (such as the Phantom4Pro) have a small volume and mass, so the PTZ cameras they can carry are limited. Their flight efficiency is therefore not as high as that of industrial-application UAVs (such as the M300RTK and M600Pro), which can carry heavier, professional multi-lens PTZ cameras [18].
The accuracy of the UAV oblique photography 3D reconstruction model is mainly affected by the choice of aircraft and equipment, the distribution of control points, flight route planning, and other factors [18,19,20,21,22]. Zhang Chunbin et al. [18] verified the accuracy of topographic survey data from a small consumer DJI UAV (Phantom 3 Professional). Through an analysis of the image point cloud, digital surface model, and digital orthophoto map generated by oblique photography at different flight altitudes, they showed that the measurement accuracy of a small consumer UAV under a flight control system can be better than 10 cm, making this a low-cost, fast, and flexible measurement method for field geographic and ecological investigators. By analyzing the technical characteristics and performance indexes of different types of UAV flight platforms (rotor and fixed-wing), Sun Jie et al. [19] proposed UAV model selections and PTZ camera configuration schemes for different application scenarios, which improved operating efficiency. Cheng Libo et al. [20] explored the influence of control-point distribution uniformity on the accuracy of real-scene three-dimensional models and proposed a more universal index for control-point uniformity. Wang Yunchuan et al. [21] compared the impact of two route plans, "vertical" and "parallel", on the modeling quality of long strip-distributed buildings and concluded that the "parallel route" three-dimensional modeling method is better than the "vertical route" method, which can aid decision making for route planning in urban three-dimensional modeling. Liu Dandan et al. [22] conducted three-dimensional modeling experiments and an accuracy analysis using four different image control-point layout schemes. Their research showed that, when using a UAV for a cadastral survey, the number of photos and control points can be reduced appropriately to shorten the modeling time without significantly reducing the accuracy of the 3D reconstruction model.
The aim of this study was to analyze the influence of the UAV model and flight altitude on the accuracy and clarity of the 3D reconstruction model. We hope that our results will help users select an appropriate UAV model and aerial photography altitude to obtain a 3D model that meets their accuracy and clarity requirements. Previous studies have examined the influences of route planning and the image control-point layout scheme on the accuracy of three-dimensional real-scene models [20,21,22]. However, only individual UAVs were studied and analyzed, and a horizontal comparison between different UAV models is lacking. In order to better meet the needs of engineering applications, this study selected four UAVs of different levels and conducted comparative tests at different aerial photography heights. We analyzed and compared their field operations, quantified the 3D reconstruction model indicators through size measurements and the Tenengrad function, and compared the accuracy of the 3D reconstruction models.
2. Data Acquisition and Processing
2.1. Experimental Area and Equipment
The experimental area used in this study is located in Putuo District, Shanghai, and the subject photographed was a primary school. The primary school is located in a residential area, covers an area of 5584 m², and is composed of a three-story teaching building and a two-story administrative building. The four UAV-camera combinations used in this paper were the DJI industrial-application-level Matrice 300 RTK equipped with a Zenmuse P1 PTZ camera, the DJI industrial-application-level Matrice 600 Pro equipped with a Hongpeng AP5600 five-lens camera, the DJI professional-level Inspire2 equipped with a Zenmuse X5S PTZ camera, and the DJI consumer-level Phantom4Pro equipped with its built-in camera. The main parameters of the four devices are shown in Table 1. These four UAVs are mainstream models in China's aerial surveying and mapping industry, and they also represent the three application scenarios of China's UAV market, namely, household UAVs, photography UAVs, and surveying and mapping UAVs: the M300RTK and M600Pro are professional aerial survey models, the Inspire2 is a professional photography model, and the Phantom4Pro is a household model. Photos of the four UAVs are shown in Figure 1.
2.2. Image-Data Acquisition
In order to facilitate the measurement of the accuracy of the reconstructed 3D models at a later stage, targets (Figure 2) were placed at different positions in the primary school before oblique photography. The distance between targets was 10–20 m, and the targets were distributed as evenly as possible throughout the whole site; the specific positions are shown in Figure 3. The four types of UAV each carried out complete image acquisition at heights of 50, 60, 70, 80, and 90 m. These flight altitudes were chosen because most civil buildings in China are below 50 m, so aerial photography started from 50 m; furthermore, in urban building informatization research, flight altitudes of 50–90 m better meet the requirements of City Information Modeling (CIM). See Table 2 for the specific flight parameters.
According to the theory of, and experience with, oblique photography, the accuracy of 3D reconstruction models is mainly affected by three factors: the image resolution, the flight altitude, and the overlap rate. The overlap rate is the proportion of the same ground objects appearing in adjacent images along the same route (heading overlap rate) or along adjacent routes (side overlap rate). International aerial photography standards specify a heading overlap rate of generally 60–65% and a side overlap rate of generally 30–35% [17,18]. This study did not investigate the influence of the overlap rate, so both the heading and side overlap rates were set to 85% for all flights.
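To make the relationship between these parameters concrete, the following is a minimal sketch (in Python) of how the ground sampling distance and exposure spacing follow from the flight altitude, camera geometry, and overlap rate. The 36 mm sensor width is an assumed full-frame value for the Zenmuse P1; Table 1 lists only its focal length and image resolution.

```python
# Minimal sketch: ground sampling distance (GSD) and exposure spacing from
# flight altitude, camera geometry, and overlap rate. The 36 mm sensor width
# is an assumed (full-frame) value for the Zenmuse P1, not taken from Table 1.

def gsd_cm(altitude_m, sensor_width_m, focal_length_m, image_width_px):
    """Ground footprint of one pixel at nadir, in centimetres."""
    return 100.0 * altitude_m * sensor_width_m / (focal_length_m * image_width_px)

def exposure_spacing_m(altitude_m, sensor_width_m, focal_length_m, overlap_rate):
    """Distance between exposures that yields the requested overlap rate."""
    footprint_m = altitude_m * sensor_width_m / focal_length_m  # ground coverage of one image
    return footprint_m * (1.0 - overlap_rate)

# Zenmuse P1 on the M300RTK at 50 m with an 85% overlap rate (cf. Table 2):
print(f"GSD ~ {gsd_cm(50, 0.036, 0.035, 8192):.3f} cm")   # ~0.628 cm, close to the 0.629 cm in Table 2
print(f"spacing ~ {exposure_spacing_m(50, 0.036, 0.035, 0.85):.1f} m")  # ~7.7 m between exposures
```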
In order to ensure a better 3D modeling effect, all UAV flight routes used in this test were automatically planned using route planning software.
The specific field operation process used was as follows: Step 1, Install the UAV, PTZ camera, propeller blade, etc., and connect these with the remote controller; Step 2, Open the software to check whether the sensors of the UAV are normal; Step 3, Open the route planning software and set the flight range, flight altitude, overlap rate, camera oblique angle, and other parameters; Step 4, Check all parameter settings and take off; and Step 5, Wait for the UAV to complete its mission (some UAV models need to complete multiple sorties at some altitudes) and land.
3. Three-Dimensional Reconstruction
At present, the mainstream 3D reconstruction software products available on the market include ContextCapture Center [23] (formerly Smart3D) (Bentley, Exton, United States), Metashape [24] (formerly PhotoScan) (Agisoft, St. Petersburg, Russia), and RealityCapture [25] (CapturingReality, Cary, United States). Metashape has the strongest aerial triangulation function, but the texture of its generated models is average, and its reconstruction effect is not as good as that of ContextCapture Center [26,27]. RealityCapture is mainly used for the three-dimensional reconstruction of film and television scenes; its dimensional accuracy is not as good as that of the other software packages, so it is not suitable for the surveying and mapping industry [28]. ContextCapture Center produces the best reconstruction models and requires little manual repair in the later stage, but its price is much higher than that of the other software [27,28].
For this paper, ContextCapture Center 4.4.18 was selected for 3D reconstruction. This is automatic three-dimensional model construction software based on the GPU (Graphics Processing Unit). The software models static objects, supplemented by camera sensor attributes (including the focal length, sensor size, principal point, lens distortion, etc.), photo position parameters, photo attitude parameters, control points, and other information, to carry out aerial triangulation calculations and generate an ultra-high-density point cloud. After reconstruction, a high-resolution 3D real-scene model based on the real-image texture is generated for browsing. The model can also be converted into common compatible formats, such as 3MX, OBJ, S3C, and OSGB, which can be conveniently imported into other 3D model editing software for post-processing.
The specific indoor operation process is as follows [29]: Step 1, Import the sorted photos into the oblique photography 3D reconstruction software and ensure that each photo can be read correctly; Step 2, Set the parameters and carry out the aerial triangulation calculation, in which the software automatically extracts image feature points, matches feature points with the same name, and back-calculates the exterior orientation elements of the images [30]; and Step 3, Carry out the three-dimensional reconstruction of the model after the aerial triangulation calculation is successfully completed. The system automatically matches the corresponding feature points according to the image-matching algorithm and obtains more feature points from the images to form a dense point cloud. An irregular triangulated network is then constructed from the relationships between images established by the aerial triangulation, forming a model without texture. Finally, the three-dimensional reconstruction of the overall model is completed by mapping the texture onto the untextured model at the corresponding positions [31]. To avoid the possibility of control points affecting the accuracy of the final 3D reconstruction model, no control points were added during the indoor data processing of this test.
3.1. Establishing the Sparse Point Cloud
The aerial triangulation calculation mentioned above uses Structure from Motion (SfM) to establish a sparse point cloud. SfM is a technique that obtains camera parameters and the 3D scene structure by analyzing image sequences taken from different angles, and it is the prerequisite for obtaining a sparse point cloud [32]. Its purpose is to compute the three-dimensional structure points and the camera parameters of each image from the images taken from different angles. SfM algorithms are mainly divided into three types: incremental SfM, hierarchical SfM, and global SfM. Although incremental SfM has the lowest efficiency, it is the most widely used because of its generality, good robustness, and accuracy. Incremental SfM is an iterative, serialized process that is mainly divided into two parts: image correlation and incremental reconstruction [33].
3.1.1. Image Correlation
SfM needs to obtain accurate and reliable corresponding points in multiple images, calculate the fundamental matrix from the multi-view geometric relationship and multiple groups of corresponding points, obtain the attitude information of each image and the coordinates of the feature points, and thereby complete the reconstruction of a sparse point cloud [32]. As the input images are unordered, the goal of image correlation is to associate the overlapping images and output the geometrically verified associated image set together with the image projection points corresponding to each scene point. There are three main steps:
(1). Image feature extraction
Image features are corner points in the image (such as spires and room corners) whose pixel values change significantly when moving in any direction. They can be divided into point features, line features, and area features, and different extraction operators exist for different feature types.
For each image, a series of local features and descriptors is first extracted. These features should be geometrically invariant so that SfM can identify them accurately and uniquely. The scale-invariant feature transform (SIFT) is a good feature descriptor [33].
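As an illustration, SIFT extraction in Python with OpenCV might look like the following sketch; the file name is only a placeholder, and the paper does not state which feature extractor ContextCapture uses internally.

```python
# Sketch: SIFT feature extraction with OpenCV. The image path is a
# placeholder; ContextCapture's internal feature extractor is proprietary.
import cv2

img = cv2.imread("uav_photo.jpg", cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)
# Each keypoint stores position, scale, and orientation; each descriptor is a
# 128-dimensional vector designed to be invariant to scale and rotation.
print(len(keypoints), descriptors.shape)
```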
(2). Matching corresponding points
The most common method is exhaustive matching: each feature in one image is matched against the features in all other images. This is time-consuming and computationally complex, which is undesirable for large image sets. A more effective method is to first identify the likely overlapping image sets through various means and then match feature points within those sets.
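A common realization of pairwise matching is k-nearest-neighbour descriptor matching with Lowe's ratio test, sketched below; `des1` and `des2` are assumed to be SIFT descriptor arrays from two candidate overlapping images, e.g., from the previous sketch.

```python
# Sketch: descriptor matching with Lowe's ratio test. des1 and des2 are
# SIFT descriptor arrays from two candidate overlapping images.
import cv2

matcher = cv2.BFMatcher(cv2.NORM_L2)
knn_matches = matcher.knnMatch(des1, des2, k=2)  # two nearest neighbours per feature
# Keep a match only if it is clearly better than the second-best candidate.
good = [m for m, n in knn_matches if m.distance < 0.75 * n.distance]
```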
(3). Geometric verification
The transformation between two images, namely the projective geometric relationship, is estimated in order to verify their overlap. Different geometric relationships are computed for different spatial configurations. If a transformation contains enough inliers, the image pair is considered to satisfy the geometric constraints.
False matches and gross errors are inevitable in image matching, and the random sample consensus (RANSAC) algorithm is usually used to remove them [33,34].
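The sketch below shows one way to perform such RANSAC-based verification by estimating a fundamental matrix between two views; `kp1`, `kp2`, and `good` are the keypoints and ratio-test matches from the previous sketches, and the inlier threshold is illustrative.

```python
# Sketch: RANSAC geometric verification via the fundamental matrix.
# kp1/kp2 are the keypoints and `good` the ratio-test matches from above.
import cv2
import numpy as np

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
inliers = int(mask.sum())
# Accept the pair as a verified overlapping pair only if enough inliers remain.
is_verified = inliers >= 30  # threshold is illustrative
```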
3.1.2. Incremental Reconstruction
The input of incremental reconstruction is the scene-graph structure produced by image correlation, and the output is a series of camera pose estimates and the corresponding reconstructed scene-structure point cloud. Camera pose estimation and scene geometry reconstruction are formulated as the joint optimization of a nonlinear objective function, namely, minimizing the sum of the errors between the projections of the scene points and the observed image points [34,35,36]:

$$E(P, X) = \sum_{i}\sum_{j} w_{ij} \left\| \pi\left(P_j, X_i\right) - x_{ij} \right\|_2^2 \tag{1}$$

where $X_i$ and $P_j$ represent the 3D point coordinates and camera parameters, respectively; $\pi(P_j, X_i)$ is the projection of the three-dimensional point $X_i$ on camera $P_j$; $x_{ij}$ is the corresponding image point; $\|\cdot\|_2$ denotes the L2 norm; and $w_{ij}$ is the indicator function: when the three-dimensional point $X_i$ is visible in camera $P_j$, $w_{ij} = 1$; otherwise, $w_{ij} = 0$.
Incremental reconstruction mainly includes four steps: Step 1, Initialization. Selecting an appropriate initial image pair is very important and directly affects the quality of the reconstruction; choosing initial images with many overlapping views makes the result more robust; Step 2, Image registration. Each new image is registered into the existing model by solving the Perspective-n-Point (PnP) problem, and outliers are eliminated by the RANSAC algorithm; Step 3, Triangulation. A newly registered image must observe existing scene points, otherwise its position, attitude, and other parameters cannot be determined. Whenever a new image is added, newly triangulated scene points can be generated. Triangulation is very important in SfM because adding new scene points increases the redundancy of the existing model; Step 4, Bundle adjustment. Restoring the two-dimensional information of the images to three-dimensional space depends strictly on the accuracy of the matched corresponding points, but wrong matches remain no matter which matching and error-elimination methods are used. Due to image noise, the reconstructed three-dimensional points show a certain position deviation when back-projected onto the images, called the back-projection (reprojection) error. The least-squares principle is therefore used to minimize the accumulated sum of squared distance residuals between the back-projections and the original image points, i.e., the sum of squared back-projection errors. Bundle adjustment optimizes the camera parameters and 3D points simultaneously in one objective function, so the optimal solution can be obtained in theory. Hence, in the SfM process, bundle adjustment generally needs to be called frequently to optimize the camera parameters and three-dimensional points and to reduce or eliminate error accumulation [32,34].
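For a calibrated camera, Steps 2 and 3 can be sketched with OpenCV as follows; `object_pts`/`image_pts` are assumed 3D-2D correspondences into the existing model, `K` is the intrinsic matrix, `P1`/`P2` are the 3×4 projection matrices of two registered views, and `pts1`/`pts2` are matched pixel coordinates in those views. A full bundle adjustment (Step 4), minimizing Equation (1) with, e.g., scipy.optimize.least_squares, is omitted here.

```python
# Sketch of Steps 2-3: PnP registration of a new image and triangulation of
# new scene points. object_pts/image_pts, K, P1, P2, pts1, pts2 are assumed
# inputs; bundle adjustment (Step 4) minimizing Equation (1) is omitted.
import cv2

# Step 2: register the new image against known 3D-2D correspondences,
# with RANSAC rejecting outlying correspondences.
ok, rvec, tvec, pnp_inliers = cv2.solvePnPRansac(object_pts, image_pts, K, None)

# Step 3: triangulate matched points seen in two registered views.
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)  # homogeneous, 4xN
pts3d = (pts4d[:3] / pts4d[3]).T                       # Euclidean, Nx3
```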
The sparse point cloud produced by the SfM algorithm is shown in Figure 4.
3.2. Establishing the Dense Point Cloud
The way in which SfM obtains points means that it cannot directly generate a dense point cloud. The multi-view stereo (MVS) algorithm matches almost every pixel in the photos and reconstructs their three-dimensional coordinates, so the density of the resulting points can approach the level of detail shown in the images. Its theoretical basis is that epipolar geometry constraints hold between multi-view photos of the same three-dimensional structure [32].
When the sparse point cloud is completed, the MVS algorithm is used to establish the dense point cloud, a surface reconstruction (SR) algorithm is used to build the mesh, and, finally, a texture mapping (TM) algorithm is used to produce the three-dimensional model [33]. The final 3D reconstruction models are shown in Figure 5.
4. Comparative Analysis of the 3D Reconstruction Models’ Accuracy
4.1. Three-Dimensional Model Size
In order to evaluate the accuracy of the 3D reconstruction model, this study compared the distances between points (i.e., targets) in the 3D reconstruction model with the actual distances between the targets in the field [37] and analyzed the differences. The actual distances between targets were measured on site with a total station. Distances on the model were measured with the measurement tool in ContextCapture Viewer by selecting the center of each target on the three-dimensional reconstruction model. Due to the large amount of data, only a subset of the model measurements is shown in Table 3, while the total station measurements are shown in full in Table 4. Because of its location, target 11# was inconvenient to measure, so it was not considered in the subsequent data processing.
The root-mean-square error (RMSE) was used as the verification index for the dimensional accuracy of the 3D reconstruction model. In order to make the conclusions universal, a dimensionless treatment was also carried out:

$$m = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(S_i - S'_i\right)^2} \tag{2}$$

$$m' = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\frac{S_i - S'_i}{S'_i}\right)^2} \tag{3}$$

where $m$ is the root-mean-square error of the line segment lengths; $m'$ is the root-mean-square error of the line segment lengths after dimensionless treatment; $n$ is the number of measured segments; $S_i$ is the length of the $i$-th line segment measured on the 3D reconstruction model; and $S'_i$ is the length of the same segment measured with the total station. The final results are shown in Table 5.
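As a worked illustration of Equations (2) and (3), the sketch below computes both indices for a small subset of segment lengths taken from Tables 3 and 4 (M300RTK at 50 m); the results in Table 5 use all measured segments, not just these four.

```python
# Worked sketch of Equations (2)-(3) on a subset of segments (M300RTK, 50 m);
# Table 5 is computed over all measured segments, not just these four.
import numpy as np

model_len = np.array([10.588, 66.321, 9.886, 7.829])  # S_i, model lengths (m), Table 3
true_len = np.array([10.588, 66.358, 9.855, 7.841])   # S'_i, total station lengths (m), Table 4

m = np.sqrt(np.mean((model_len - true_len) ** 2))                     # Equation (2)
m_prime = np.sqrt(np.mean(((model_len - true_len) / true_len) ** 2))  # Equation (3)
print(f"RMSE = {m * 1000:.2f} mm, dimensionless RMSE = {m_prime:.5f}")
```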
According to Table 5, the dimensionless accuracy of the M300RTK 3D reconstruction models is between 0.0011 and 0.0014; that is, the model error is about 1.1–1.4 m per kilometer. Ranking the accuracy of the 3D models reconstructed from the oblique photography of the four UAVs, the dimensional accuracy of the M300RTK model was the highest, followed by those of the M600Pro (0.0015–0.0036), Inspire2 (0.0018–0.0042), and Phantom4Pro (0.0024–0.0056). At the same time, the dimensional accuracy of the 3D model was found to be not directly related to the flight altitude; it mainly depends on the choice of UAV equipment.
4.2. Three-Dimensional Model Clarity
In addition to the 3D model size, another factor associated with 3D reconstruction model accuracy is the clarity of the model. Compared with traditional orthophoto images, oblique photography allows the acquisition of the side texture of a building. Therefore, this study compared the clarity of the 3D reconstruction models using the building side texture at the same position in the different models (Figure 6 shows screenshots of the texture of parts of the 3D reconstruction models). The evaluation standard is a commonly used image clarity evaluation function, the Tenengrad function [38]:
$$D(f) = \sum_{y}\sum_{x}\left[G(x,y)\right]^2, \quad G(x,y) > T \tag{4}$$

$$G(x,y) = \sqrt{G_x^2(x,y) + G_y^2(x,y)} \tag{5}$$

where $f$ is the current image, $T$ is the given edge-detection threshold, and $G_x(x,y)$ and $G_y(x,y)$ are the convolutions of the Sobel horizontal and vertical edge-detection operators with the image at pixel $(x,y)$.
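A minimal Python sketch of Equations (4) and (5) is given below; the threshold value and file name are illustrative, and the evaluation software actually used in this study is not specified beyond its use of the Tenengrad function.

```python
# Minimal sketch of Equations (4)-(5). The threshold T and image path are
# illustrative; the evaluation software used in the study is a black box.
import cv2

def tenengrad(gray, T=50.0):
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)  # G_x: horizontal Sobel response
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)  # G_y: vertical Sobel response
    g2 = gx ** 2 + gy ** 2                           # G(x, y)^2
    return float(g2[g2 > T ** 2].sum())              # sum only over strong edges (G > T)

img = cv2.imread("texture_screenshot.png", cv2.IMREAD_GRAYSCALE)
print(tenengrad(img))  # larger value = sharper texture
```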
The Tenengrad function is a gradient-based function that extracts gradient values in the horizontal and vertical directions with the Sobel operator. The operator applies a weighted average followed by differentiation and performs template operations in both directions, which suppresses noise to some extent [39,40]. After Sobel processing, the average gradient value of the image is obtained: the larger the value, the clearer the image. The texture screenshots of all models were imported into image-clarity evaluation software, which automatically outputs results computed with the Tenengrad function. All results are presented in Figure 7 (the clarity of the M300RTK 3D reconstruction model at a height of 50 m was the best, so the software set it as the benchmark value of 100; the values of the other screenshots are relative to this benchmark):
Figure 7 shows that the higher the flight altitude, the lower the clarity of the 3D reconstruction model. At a given altitude, the clarity of the model produced with the M300RTK was the highest, followed by those produced with the Inspire2, Phantom4Pro, and M600Pro; however, as the flight altitude increases, the gap between the models gradually decreases. For a given aircraft, the higher the flight altitude, the lower the clarity of the 3D reconstruction model: for every 10 m increase in the flight altitude of the M300RTK, the clarity of its model decreased by 16.81% on average. The corresponding values for the other UAVs were 6.79% (Inspire2), 5.43% (Phantom4Pro), and 3.38% (M600Pro). Combined with the image-resolution data in Table 1, this shows that the higher the resolution of the PTZ camera, the greater the impact of flight altitude on model clarity. The Inspire2 and Phantom4Pro produced similar clarity levels after 3D reconstruction, so, when other needs are met, the more portable consumer Phantom4Pro can be given priority. The M300RTK has obvious advantages at low flight altitudes, but these advantages diminish significantly as the flight altitude increases.
4.3. Other Factors
In addition to the model size and clarity mentioned above, there are other factors that affect the quality of the model, such as the presence of holes in the wall surface, distortions in the 3D reconstruction model, and the weather during aerial photography.
By observing all 20 3D reconstruction models, it was found that most of the holes occurred on the low white wall of the roof. The M300RTK and Inspire2 3D reconstruction models were of good quality, with small wall holes (Figure 8) and few of them (fewer than 10). There were no holes in the 3D models reconstructed by the M300RTK at aerial photography altitudes of 50 m and 60 m; as the flight altitude increased, the number of holes increased but remained below 10. The models reconstructed with the Phantom4Pro had more holes (more than 15), and the models produced by the M600Pro at all five altitudes had large holes (Figure 9). This mainly occurred because the wall is white and lacks feature points, and the M600Pro and Phantom4Pro cameras could not clearly capture the subtle features on the white wall, resulting in holes in the reconstructed models.
On the other hand, with an increase in the aerial photography height, the three-dimensional reconstruction model showed surface distortions. By observing the windows and external units of the air conditioner in all models, it was found that the 3D model reconstructed with M600Pro had surface distortions at the five flight heights; the 3D models reconstructed with Inspire2 and Phantom4Pro began to show surface distortions at a flight height of 60 m; further, 3D models reconstructed by M300RTK began to show surface distortions at a flight height of 80 m. The surface distortions in each model are shown in Figure 10.
The quality of the 3D reconstruction model is also affected by the weather. Fog and haze during aerial photography decrease the quality of the photos as well as the clarity of the three-dimensional reconstruction model. On sunny days, building shadows are more pronounced and shift over time, which also affects model quality. Therefore, it is recommended that aerial photography be conducted on cloudy days.
5. Comparative Analysis of Operation Efficiency
In practical engineering, project time is a very important consideration. In order to maximize the benefits of a project in addition to the economic costs normally considered, the time cost has gradually received greater attention: the less time consumed by a project, the higher the economic benefits it can bring [41,42].
In data processing, the three-dimensional reconstruction of a model requires the computer to analyze and calculate a large amount of data, occupying substantial computing resources, and there is an obvious gap between the computing power of a single computer and that of multiple computers. In order to improve the efficiency of reconstruction modeling, cluster computing was adopted in this experiment: eight computers operated simultaneously, greatly shortening the three-dimensional reconstruction time. The host of the cluster was configured with an Intel Core i7 8700 CPU, an ROG STRIX GTX1080 Ti graphics card, and Kingston FURY DDR4 3000 MHz 16 GB × 2 memory. Each of the seven auxiliary machines was configured with an Intel Core i7 6700 CPU, a Gigabyte GTX1050 Ti OC graphics card, and Kingston FURY DDR4 3000 MHz 16 GB memory.
For this test, the flight times of the UAVs and the three-dimensional reconstruction times are shown in Figure 11 and Figure 12, respectively (the specific times were affected by environmental factors such as the weather, battery performance, computer configuration, and network conditions). Figure 11 shows that the lower the flying altitude of a UAV, the longer the flight time required. Even though a UAV is disturbed by natural wind and other factors during actual flight, the overall flight time still shows an approximately linear trend: for every 10 m decrease in the flight altitude of the M300RTK, the flight time increased by 19.3% on average. The corresponding values for the other UAVs were 22.1% (Inspire2), 31.9% (Phantom4Pro), and 35.4% (M600Pro). This is mainly because the lower the flight altitude, the closer the UAV is to the subject and the smaller the area covered by a single image; more images, and hence a longer flight time, are therefore required to meet the modeling requirements. Since the M600Pro is equipped with a five-lens PTZ camera, one of its flights achieves the effect of five flights of a single-lens UAV, so its flight time is greatly reduced and its efficiency is significantly higher. In Figure 12, the three-dimensional reconstruction time of each model also shows an overall downward trend with increasing flight altitude. The reconstruction time for the M300RTK is significantly longer than that of the other UAVs because the Zenmuse P1 camera carried by the M300RTK is a full-frame camera whose image resolution is much higher than that of the other cameras; during indoor processing, the number of feature points extracted and matched by the computer increases correspondingly, so the three-dimensional reconstruction time increases significantly. The image resolution of the P1 camera is 8192 × 5460, about 2.2 times that of the other three cameras, and the time required for M300RTK modeling is about 6–12 times that of the other UAVs. Since cluster computing, which is strongly affected by network factors and the computer configuration, was adopted for this modeling, only an approximate range can be given.
Based on the above results, the 3D reconstruction models produced with the M300RTK have the highest quality but take the longest to produce, so this UAV is not the best in terms of efficiency. If the accuracy requirements of the 3D reconstruction model are not high, other UAVs can be selected. The M600Pro significantly improves the efficiency of field operations, and its 3D reconstruction time is similar to those of the Inspire2 and Phantom4Pro; its only deficiency is the clarity of the 3D reconstruction model, so it should be the first choice for applications in the surveying and mapping industry. The models produced with the Inspire2 and Phantom4Pro have similar quality in all respects: the dimensional accuracy of the Inspire2 model is slightly better than that of the Phantom4Pro, but the Inspire2 is not as good as the Phantom4Pro in terms of operating efficiency. Users can choose UAV models according to their own needs.
6. Conclusions
In this study, four UAVs of different levels were used for oblique photography. Image data of the test area were obtained at five flight altitudes (50, 60, 70, 80, and 90 m), and three-dimensional reconstruction models were generated. The ground-target spacings measured with the total station were compared with the corresponding distances in the three-dimensional models, and the model clarity was compared and analyzed. The conclusions are as follows:
(1). The dimensional accuracy of the three-dimensional reconstruction model is not directly related to the flight altitude during aerial photography;
(2). The dimensional accuracy of the 3D model reconstructed from the M300RTK images was the highest, with an error of about 1.1–1.4 m per kilometer, followed by those of the M600Pro (1.5–3.6 m), Inspire2 (1.8–4.2 m), and Phantom4Pro (2.4–5.6 m);
(3). The flight altitude has a significant impact on the clarity of the 3D reconstruction model: the lower the flight altitude of a given type of equipment, the clearer the texture of the 3D reconstruction model. For every 10 m decrease in flight altitude, the clarity of the 3D reconstruction model improved by 16.81% (M300RTK), 6.79% (Inspire2), 5.43% (Phantom4Pro), and 3.38% (M600Pro);
(4). The clarity of the 3D reconstruction model also depends on the performance of the UAV PTZ camera, where the higher the image resolution is, the clearer the texture of the three-dimensional reconstructed model is. The order of best to worst clarity is as follows: M300RTK + P1, Inspire2 + X5S, Phantom4Pro, M600Pro + AP5600;
(5). For a given aircraft type, the higher the flight altitude, the shorter the flight duration during field operation. For every 10 m increase in flight altitude, the flight time decreased by 19.3% (M300RTK), 22.1% (Inspire2), 31.9% (Phantom4Pro), and 35.4% (M600Pro);
(6). The higher the image resolution of the PTZ camera, the longer the duration required for 3D reconstruction. When the image resolution increased by a factor of about 2.2, the modeling time increased by a factor of about 6–12.
In this study, we compared and analyzed the accuracy and clarity of a three-dimensional reconstruction model of oblique photography of different levels of UAVs at different flight altitudes. Our results can be used as an effective reference for engineers and researchers with relevant application needs and may help them to choose a UAV model and aerial photography altitude.
In this experiment, ContextCapture software was adopted for the three-dimensional reconstruction of the model; whether the use of other three-dimensional reconstruction software would affect the accuracy of the model remains to be verified. At the same time, adding control points in 3D reconstruction and changing the overlap rate in aerial photography may also affect the accuracy of the reconstructed model. In future work, we will further study the above influencing factors and test more types of UAV. The aerial photography site used here was small, and a larger site will be selected for oblique photography in the future to study the impact of the aerial photography range on the accuracy of the 3D reconstruction model.
Conceptualization, D.W.; methodology, H.S.; software, D.W.; validation, D.W.; formal analysis, D.W. and H.S.; investigation, D.W. and H.S.; resources, H.S.; data curation, H.S.; writing—original draft preparation, H.S.; writing—review and editing, D.W.; visualization, H.S.; supervision, D.W.; project administration, H.S.; funding acquisition, D.W. All authors have read and agreed to the published version of the manuscript.
Not applicable.
Not applicable.
Data is available at
The authors declare no conflict of interest.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Figure 1. The four UAVs of different levels. (a) M300RTK. (b) M600Pro. (c) Inspire2. (d) Phantom4Pro.
Figure 3. Target spatial position distribution. (1#, 2#, 3#, etc., denote target numbers.)
Figure 5. Three-dimensional reconstruction model of different UAVs at a height of 50 m. (a) Three-dimensional reconstruction model of M300RTK at a height of 50 m. (b) Three-dimensional reconstruction model of M600Pro at a height of 50 m. (c) Three-dimensional reconstruction model of Inspire2 at a height of 50 m. (d) Three-dimensional reconstruction model of Phantom4Pro at a height of 50 m.
Figure 6. Side texture of the 3D reconstruction model. (a) M300RTK 50 m height. (b) M300RTK 90 m height. (c) Inspire2 50 m height. (d) Inspire2 90 m height. (e) Phantom4Pro 50 m height. (f) Phantom4Pro 90 m height. (g) M600Pro 50 m height. (h) M600Pro 90 m height.
Figure 7. Clarity comparison of different 3D reconstruction models using the same equipment.
Figure 8. Small hole on the wall of the 3D model reconstructed with M300RTK at an altitude of 80 m.
Figure 9. The holes on the wall of the 3D model reconstructed with M600Pro at an altitude of 50 m.
Figure 10. Surface distortion in different 3D reconstruction models. (a) Surface distortion in 3D reconstruction model of M300RTK at 80 m height. (b) Surface distortion in 3D reconstruction model of M600Pro at 50 m height. (c) Surface distortion in 3D reconstruction model of Inspire2 at 60 m height. (d) Surface distortion in 3D reconstruction model of Phantom4Pro at 60 m height.
Figure 12. Three-dimensional reconstruction time for each 3D reconstruction model.
Performance parameters of four UAVs and PTZ cameras.
Parameter | M300RTK + P1 | M600Pro + AP5600 | Inspire2 + X5S | Phantom4Pro |
---|---|---|---|---|
Weight | 6300 g + 800 g | 10,000 g + 2500 g | 3440 g + 461 g | 1375 g |
Battery capacity | 5935 mAh × 2 | 5700 mAh × 6 | 4280 mAh × 2 | 5870 mAh |
Maximum flight time | 55 min | 25 min | 23 min | 30 min |
Satellite positioning module | GPS + GLONASS + BeiDou + Galileo | DJ001-T | GPS + GLONASS | GPS + GLONASS |
Image resolution | 8192 × 5460 | 5456 × 3632 | 5280 × 3956 | 5472 × 3648 |
Focal length | 35 mm | 20 mm | 15 mm | 8.8 mm |
FOV | 63.5° | 70° | 72° | 84° |
Price | CNY 81,200 + CNY 49,300 | CNY 48,000 + CNY 124,000 | CNY 19,999 + CNY 12,499 | CNY 9999 |
Flight parameters.
UAV Model | Flight Altitude (m) | Number of Photos | Number of Flight Sorties | Overlap Rate (%) | Ground Sampling Distance (cm)
---|---|---|---|---|---
M300RTK | 50 | 2269 | 1 | 85 | 0.629 |
60 | 1653 | 1 | 85 | 0.754 | |
70 | 1423 | 1 | 85 | 0.880 | |
80 | 1270 | 1 | 85 | 1.006 | |
90 | 1097 | 1 | 85 | 1.131 | |
M600Pro | 50 | 1950 | 1 | 85 | 1.063 |
60 | 1180 | 1 | 85 | 1.275 | |
70 | 960 | 1 | 85 | 1.488 | |
80 | 750 | 1 | 85 | 1.700 | |
90 | 640 | 1 | 85 | 1.913 | |
Inspire2 | 50 | 1014 | 2 | 85 | 1.100 |
60 | 688 | 2 | 85 | 1.320 | |
70 | 472 | 2 | 85 | 1.540 | |
80 | 373 | 1 | 85 | 1.760 | |
90 | 311 | 1 | 85 | 1.980 | |
Phantom4Pro | 50 | 659 | 2 | 85 | 1.369 |
60 | 551 | 2 | 85 | 1.643 | |
70 | 455 | 1 | 85 | 1.917 | |
80 | 330 | 1 | 85 | 2.191 | |
90 | 184 | 1 | 85 | 2.465 |
Slope distances (m) measured on the 3D reconstruction models of the different UAVs at different flight altitudes. (1#, 2#, 3#, etc., denote target numbers.)
UAV Model | Start Number | End Number | 50 m | 60 m | 70 m | 80 m | 90 m
---|---|---|---|---|---|---|---
M300RTK | 2# | 1# | 10.588 | 10.590 | 10.595 | 10.591 | 10.600 |
2# | 16# | 66.321 | 66.318 | 66.321 | 66.316 | 66.321 | |
3# | 2# | 9.886 | 9.884 | 9.880 | 9.884 | 9.875 | |
3# | 4# | 7.829 | 7.828 | 7.825 | 7.829 | 7.830 | |
…… | |||||||
16# | 17# | 14.798 | 14.800 | 14.793 | 14.792 | 14.789 | |
M600Pro | 2# | 1# | 10.554 | 10.559 | 10.600 | 10.602 | 10.558 |
2# | 16# | 66.112 | 66.208 | 66.349 | 66.371 | 66.091 | |
3# | 2# | 9.841 | 9.833 | 9.881 | 9.883 | 9.848 | |
3# | 4# | 7.796 | 7.793 | 7.834 | 7.832 | 7.794 | |
…… | |||||||
16# | 17# | 14.755 | 14.795 | 14.817 | 14.806 | 14.735 | |
Inspire2 | 2# | 1# | 10.612 | 10.574 | 10.573 | 10.606 | 10.538 |
2# | 16# | 66.456 | 66.261 | 66.259 | 66.429 | 66.067 | |
3# | 2# | 9.895 | 9.868 | 9.859 | 9.894 | 9.829 | |
3# | 4# | 7.828 | 7.815 | 7.823 | 7.828 | 7.813 | |
…… | |||||||
16# | 17# | 14.797 | 14.773 | 14.802 | 14.801 | 14.726 | |
Phantom4Pro | 2# | 1# | 10.591 | 10.549 | 10.606 | 10.624 | 10.620 |
2# | 16# | 66.546 | 66.001 | 66.422 | 66.617 | 66.454 | |
3# | 2# | 9.860 | 9.835 | 9.897 | 9.936 | 9.901 | |
3# | 4# | 7.788 | 7.808 | 7.845 | 7.911 | 7.837 | |
…… | |||||||
16# | 17# | 14.803 | 14.727 | 14.860 | 14.920 | 14.844 |
True distances between targets measured with the total station.
Start Number | End Number | Slope Distance (m)
---|---|---
2# | 1# | 10.588
2# | 16# | 66.358
3# | 2# | 9.855
3# | 4# | 7.841
3# | 5# | 15.073
3# | 6# | 25.815
3# | 9# | 50.071
3# | 10# | 57.840
8# | 7# | 12.492
8# | 9# | 18.908
8# | 10# | 20.804
8# | 12# | 32.913
13# | 12# | 17.277
13# | 14# | 15.644
13# | 15# | 38.818
13# | 16# | 48.268
13# | 18# | 24.805
16# | 17# | 14.745
Accuracy comparison of the 3D models.
Flight Altitude | M300RTK RMSE | M300RTK Dimensionless RMSE | M600Pro RMSE | M600Pro Dimensionless RMSE | Inspire2 RMSE | Inspire2 Dimensionless RMSE | Phantom4Pro RMSE | Phantom4Pro Dimensionless RMSE
---|---|---|---|---|---|---|---|---
50 m | 20.51151 | 0.00132 | 99.70958 | 0.00319 | 64.13094 | 0.00271 | 146.96957 | 0.00393 |
60 m | 21.55742 | 0.00135 | 76.51833 | 0.00299 | 78.78170 | 0.00320 | 160.27442 | 0.00463 |
70 m | 18.90473 | 0.00121 | 25.43073 | 0.00154 | 48.93590 | 0.00195 | 51.63601 | 0.00247 |
80 m | 19.97220 | 0.00124 | 28.84537 | 0.00162 | 41.23038 | 0.00186 | 139.05315 | 0.00552 |
90 m | 18.17966 | 0.00111 | 118.85379 | 0.00358 | 135.09873 | 0.00418 | 65.74826 | 0.00266 |
References
1. Yu, L. Research on Urban 3D Real Scene Modeling Based on UAV Tilt Photogrammetry Technology. Geomat. Spat. Inf. Technol.; 2021; 44, pp. 86-88.
2. Mao, Y. 3D Modeling Based on Oblique Photogrammetry and BIM Technology. Jiangxi Build. Mater.; 2021; 264, pp. 92–93+95.
3. Elnabty, I.A.; Fahmy, Y.; Kafafy, M. A survey on UAV placement optimization for UAV-assisted communication in 5G and beyond networks. Phys. Commun.; 2022; 51, 101564. [DOI: https://dx.doi.org/10.1016/j.phycom.2021.101564]
4. Lin, J.; Gan, S. Application of Consumer level UAV in Surveying 1:500 Strip Topographic Map. Softw. Guide; 2021; 20, pp. 168-173.
5. Zhao, F.; He, Y. Crop Yield Measurement Based on Unmanned Aerial Vehicle Remote Sensing Image. Radio Eng.; 2021; 51, pp. 1110-1115.
6. Bouguettaya, A.; Zarzour, H.; Kechida, A.; Taberkit, A.M. Deep learning techniques to classify agricultural crops through UAV imagery: A review. Neural Comput. Appl.; 2022; [DOI: https://dx.doi.org/10.1007/s00521-022-07104-9]
7. Feng, J.; Sun, Y.; Zhang, K.; Zhao, Y.; Ren, Y.; Chen, Y.; Zhuang, H.; Chen, S. Autonomous Detection of Spodoptera frugiperda by Feeding Symptoms Directly from UAV RGB Imagery. Appl. Sci.; 2022; 12, 2592. [DOI: https://dx.doi.org/10.3390/app12052592]
8. Wang, H. Using UAV to Carry out Fire Site Inspection. Fire Ind. (Electron. Version); 2021; 7, pp. 87+89.
9. Alioua, A.; Djeghri, H.E.; Cherif, M.E.T.; Senouci, S.M.; Sedjelmaci, H. UAVs for traffic monitoring: A sequential game-based computation offloading/sharing approach. Comput. Netw.; 2020; 177, 107273. [DOI: https://dx.doi.org/10.1016/j.comnet.2020.107273]
10. Pérez, J.J.; Senderos, M.; Casado, A.; Leon, I. Field Work’s Optimization for the Digital Capture of Large University Campuses, Combining Various Techniques of Massive Point Capture. Buildings; 2022; 12, 380. [DOI: https://dx.doi.org/10.3390/buildings12030380]
11. Zhang, W.; Fan, H.; Liu, Y.; Lin, N.; Zhang, H. Ancient Building Reconstruction Based on 3D Laser Point Cloud Combined with UAV Image. Bull. Surv. Mapp.; 2019; 512, pp. 130–133+144.
12. Munawar, H.S.; Ullah, F.; Shahzad, D.; Heravi, A.; Qayyum, S.; Akram, J. Civil Infrastructure Damage and Corrosion Detection: An Application of Machine Learning. Buildings; 2022; 12, 156. [DOI: https://dx.doi.org/10.3390/buildings12020156]
13. Hammad, A.; Da Costa, B.; Soares, C.; Haddad, A. The Use of Unmanned Aerial Vehicles for Dynamic Site Layout Planning in Large-Scale Construction Projects. Buildings; 2021; 11, 602. [DOI: https://dx.doi.org/10.3390/buildings11120602]
14. Qu, L.; Feng, Y.; Zhi, L.; Gao, W. Study on Real 3D Modeling of Photographic Data Based on UAV. Geomat. Spat. Inf. Technol.; 2015; 38, pp. 38–39+43.
15. Wang, H.; Lv, W.; Gao, X. 3D Modeling and Accuracy Evaluation of UAV Tilt Photography. Geomat. Spat. Inf. Technol.; 2020; 43, pp. 74-78.
16. Ozimek, A.; Ozimek, P.; Skabek, K.; Łabędź, P. Digital Modelling and Accuracy Verification of a Complex Architectural Object Based on Photogrammetric Reconstruction. Buildings; 2021; 11, 206. [DOI: https://dx.doi.org/10.3390/buildings11050206]
17. Yang, R. Application of UAV Tilt Photogrammetry on Urban Renovation. Geomat. Spat. Inf. Technol.; 2021; 44, pp. 217-220.
18. Zhang, C.; Yang, S.; Zhao, C.; Lou, H.; Zhang, Y.; Bai, J.; Wang, Z.; Guan, Y.; Zhang, Y. Topographic Data Accuracy Verification of Small Consumer UAV. Natl. Remote Sens. Bull.; 2018; 22, pp. 185-195.
19. Sun, J.; Xie, W.; Bai, R. UAV Oblique Photogrammetric System and Its Application. Sci. Surv. Mapp.; 2019; 44, pp. 145-150.
20. Cheng, L.; Li, J.; Duan, P.; Wang, Y. Accuracy Analysis of UAV Real Scene 3D Modeling Considering the Uniformity of Control Points. GNSS World China; 2021; 46, pp. 20-27.
21. Wang, Y.; Duan, P.; Li, J.; Yao, Y.; Cheng, L. Analysis on 3D Modeling Quality of UAV Images for Different Route Planning. Remote Sens. Inf.; 2020; 35, pp. 121-126.
22. Liu, D.; Liu, J.; Bai, Y.; Tian, M.; Zhai, H. 3D Model Establishment and Accuracy Analysis of Oblique Photogrammetry. Geomat. Spat. Inf. Technol.; 2020; 43, pp. 1-4.
23. Bentley: ContextCapture Center, April 2022. Available online: https://www.bentley.com/zh/products/brands/contextcapture (accessed on 27 April 2022).
24. Agisoft: Metashape, April 2022. Available online: http://www.agisoft.cn/ (accessed on 27 April 2022).
25. CapturingReality: RealityCapture, April 2022. Available online: https://www.realitycapture.com.cn/ (accessed on 27 April 2022).
26. Cai, P.; Yue, X.; Li, X.; Ai, H.; Qi, W.; Yang, W. A Progressive Practical Teaching Mode of UAV Aerial Survey. Beijing Surv. Mapp.; 2021; 35, pp. 1484-1488.
27. Nikolov, I.; Madsen, C. Benchmarking close-range structure from motion 3D reconstruction software under varying capturing conditions. Euro-Mediterranean Conference; Springer: Cham, Switzerland, 2016; pp. 15-26.
28. Zhou, S.; Chen, X.; Wang, Y.; Qi, W.; Yu, W. Application of Tilt Photography in the Design of 500 kV Transmission Line. Electr. Power Surv. Des.; 2020; 141, pp. 61-66.
29. Luo, X.; Zhang, Z.; Wang, Z.; Wu, B.; Wu, Y.; Li, Y. Three Dimensional Modeling of Campus Based on UAV Tilt Photography. Sci. Technol. Innov.; 2021; 593, pp. 80-81.
30. Fan, P.; Li, L. A Three-dimensional Modeling Study Based on the Technique of Low-altitude UAV Oblique Photogrammetry and Smart3D Software. Bull. Surv. Mapp.; 2017; pp. 77-81. [DOI: https://dx.doi.org/10.13474/j.cnki.11-2246.2017.0678]
31. Liu, Z. Study and Practice of Large-scale City Real 3D Modeling Technology Based on Oblique Photography. Geomat. Spat. Inf. Technol.; 2019; 42, pp. 187–189+193.
32. Wang, L. Processing and Application of Unmanned Aerial Vehicle (UAV) Tilted Image Intensive Matching Point Cloud. Master's Thesis; Guizhou Normal University: Guiyang, China, 2021; [DOI: https://dx.doi.org/10.27048/d.cnki.ggzsu.2021.000745]
33. Li, F.; Wei, W.; Sun, X.; Zhou, S.; Yang, J.; Yang, H. Method for Volume Measurement and Calculation of Asphalt Aggregate Based on UAV Technology. J. Beijing Univ. Technol.; 2022; pp. 1-10. Available online: http://kns.cnki.net/kcms/detail/11.2286.T.20220412.1516.002.html (accessed on 27 April 2022).
34. Özyeşil, O.; Voroninski, V.; Basri, R.; Singer, A. A survey of structure from motion. Acta Numer.; 2017; 26, pp. 305-364. [DOI: https://dx.doi.org/10.1017/S096249291700006X]
35. Jiang, S. Research on Efficient SfM Reconstruction of Oblique UAV Images. Master’s Thesis; Wuhan University: Wuhan, China, 2018.
36. Jiang, S.; Xu, Z.; Zhang, F.; Liao, R.; Jiang, W. Solution for Efficient SfM Reconstruction of Oblique UAV Images. Geomat. Inf. Sci. Wuhan Univ.; 2019; 44, pp. 1153-1161. [DOI: https://dx.doi.org/10.13203/j.whugis20180030]
37. Wang, L.; Huang, H.; Li, R.; Zhao, D. Study on Key Technology of Oblique Image Acquisition for Consumer Unmanned Aerial Vehicle. Bull. Surv. Mapp.; 2017; pp. 41-45. [DOI: https://dx.doi.org/10.13474/j.cnki.11-2246.2017.0612]
38. Chen, L.; Li, W.; Chen, C.; Qin, H.; Lai, J. Efficiency Contrast of Digital Image Definition Functions for General Evaluation. Comput. Eng. Appl.; 2013; 49, pp. 152–155+235.
39. Zhao, H.; Bao, G.; Tao, W. Experimental Research and Analysis of Automatic Function for Imaging Measurement. Opt. Precis. Eng.; 2004; 12, pp. 531-536.
40. Hu, S.; Li, Z.; Wang, S.; Ai, M.; Hu, Q. A Texture Selection Approach for Cultural Artifact 3D Reconstruction Considering Both Geometry and Radiation Quality. Remote Sens.; 2020; 12, 2521. [DOI: https://dx.doi.org/10.3390/rs12162521]
41. Martínez-Carricondo, P.; Agüera-Vega, F.; Carvajal-Ramírez, F. Use of UAV-Photogrammetry for quasi-vertical wall surveying. Remote Sens.; 2020; 12, 2221. [DOI: https://dx.doi.org/10.3390/rs12142221]
42. Qin, Z.; Zhang, X.; Zhang, X.; Lu, B.; Liu, Z.; Guo, L. The UAV Trajectory Optimization for Data Collection from Time-Constrained IoT Devices: A Hierarchical Deep Q-Network Approach. Appl. Sci.; 2022; 12, 2546. [DOI: https://dx.doi.org/10.3390/app12052546]
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
Unmanned aerial vehicle (UAV) oblique photography has been applied more and more widely for the 3D reconstruction of real-scene models due to its high efficiency and low cost. However, there are many kinds of UAVs, with different positioning methods, camera models, and resolutions. To evaluate the performance of different types of UAVs in 3D reconstruction, this study took a primary school as the research area and obtained image information through oblique photography with four UAVs of different levels at different flight altitudes. We then conducted a comparative analysis of the accuracy of their 3D reconstruction models. The results show that the 3D reconstruction model of the M300RTK has the highest dimensional accuracy, with an error of about 1.1–1.4 m per kilometer, followed by the M600Pro (1.5–3.6 m), Inspire2 (1.8–4.2 m), and Phantom4Pro (2.4–5.6 m); the dimensional accuracy of the 3D reconstruction model was found to have no relationship with the flight altitude. At the same time, the clarity of the 3D reconstruction model improved as the flight altitude decreased and the image resolution of the PTZ camera increased. The clarity of the model produced by the M300RTK + P1 camera was the highest; for every 10 m decrease in its flight altitude, the clarity of the 3D reconstruction model improved by 16.81%. The UAV flight time decreased as the flying altitude increased, and the time required for 3D reconstruction increased markedly with the number and resolution of the photos.