1. Introduction
The experimental method, whose founder is considered to be Galileo Galilei (1564–1642), is the basis of all applied sciences and engineering. Its fundamental principle is that science is based on experience: experience is the starting point for formulating scientific laws and the criterion for verifying their validity. Inevitably, the experimental method requires the definition of procedures for performing tests, as well as technologies and instruments to acquire and, where needed, post-process the experimental outcomes. If attention is focused on civil engineering, experimental testing involves different levels: materials, structural components, connections and sub-assemblies, small-scale structures, full-scale structures, and infrastructures. Loading tests on structural materials are common practice to evaluate their mechanical characteristics and performance, e.g., strength and deformation capacity under monotonic or cyclic conditions. Many such material tests follow specific protocols detailed by building codes when used for the qualification of construction materials. Laboratory testing of structural components, connections, sub-assemblies, and small structures such as downscaled prototypes or full-scale building splices is, for example, essential to validate or calibrate prediction models and support the development of innovative solutions. Such structural tests are commonly performed up to different levels of damage or even failure; many testing options are possible depending on the structural aspects being investigated, ranging from quasi-static monotonic and cyclic loading tests to high-energy dynamic tests such as those performed on shake tables to simulate destructive seismic events.
The situation changes when the size of the structure increases and experimental testing is required to study the behavior of large structures and infrastructures. In this case, high-energy loading is generally not feasible (because of technical and economic limits) or simply not acceptable (because of the damage it would cause). Static loading tests are generally limited in space, e.g., loading of a portion of a floor level in a large multi-story frame building or loading of a given span of a long multi-span bridge. The information gained in this way is indeed useful, but generally does not provide a complete picture of the structural behavior. On the other hand, low-energy dynamic tests allow studying the time-varying response of structures, leading to the identification of their stiffness, mass, and damping properties. The dynamic input in low-energy tests can either be directly assigned (for example, through electromechanical shakers) or be an unmeasured operational/ambient excitation (for example, wind- or traffic-induced vibrations). In the former case, reference is made to the methods of experimental modal analysis (EMA), e.g., [1]; in the latter case, the procedures of operational modal analysis (OMA) are used, e.g., [2,3]. In particular, it is OMA that in the past decades has attracted much interest in civil engineering and has become widely used thanks to its most appealing feature, i.e., large structures can be analyzed under their normal operational conditions, typically without service interruptions and with simple instrumentation [2,3]. Data collected from vibration monitoring enable many structural engineering applications, ranging from model updating and system identification, e.g., [4,5,6,7,8,9,10,11,12,13]; to specific activities such as fine calibration of tuned mass dampers, e.g., [14,15,16,17,18]; up to their wide use in condition monitoring and damage detection, e.g., [19,20,21,22,23,24,25].
The process of acquiring data from experimental tests is inevitably influenced by the available technologies, with their characteristics and limitations. The consolidated approach to data acquisition in civil engineering is based on contact point sensors (displacement, strain, velocity, or acceleration sensors) whose measurements are transferred via wired connections to the data acquisition hardware. For example, high-sensitivity accelerometers and high-quality shielded cables are commonly adopted for data acquisition in OMA [2,3]. Alternative technologies exist that avoid the use of connection cables (given the difficulties and effort involved in cabling large structures) through wireless sensors, e.g., [26,27,28,29,30]. Other alternatives provide distributed monitoring possibilities with optical fiber sensors [31,32,33,34,35] to overcome the pointwise nature of traditional approaches. However, non-contact monitoring technologies have probably attracted most of the recent interest of the engineering community, e.g., [36]. Various approaches have been explored in civil engineering for vibration testing, namely, laser Doppler vibrometry, e.g., [37,38,39,40]; microwave radar interferometry, e.g., [41,42,43,44,45]; infrasound, e.g., [46,47]; global positioning system (GPS) sensing, e.g., [48,49,50,51]; satellite remote sensing, e.g., [52,53,54,55,56,57]; theodolites and total stations, e.g., [58,59]; optical methods based on the moiré effect, e.g., [60,61,62,63,64]; and optical vision-based methods using digital image correlation (DIC), e.g., [65,66,67,68,69,70,71,72,73]. Indeed, the possibility of eliminating the physical installation of sensors is very attractive, especially for structures that might not be easily or safely accessible yet require rapid assessment of their condition, for example, following extreme events such as strong earthquakes, explosions, or floods.
Among contactless technologies, attention is here focused on vision-based methods, given that a large number of applications involving objects ranging from the tiniest dimensions to large constructions have shown great potential even with low-cost consumer-grade instrumentation [65,66,67,68,69,70,71,72,73]. The idea is simple in principle: images of the object to be monitored are acquired and subsequently analyzed to extract motion information, thus obtaining displacement time histories that can then be used to compute strains, velocities, and accelerations. One video camera is used for detecting in-plane movements; two video cameras in stereo mode enable three-dimensional tracking. If the movement of the entire object to be monitored can be acquired with sufficient spatial resolution within the available combinations of cameras and lenses, then a full-field representation of the mechanical response is possible. This is typically the case for small specimens tested in the laboratory. Otherwise, selected portions of the object to be monitored are acquired with a number of synchronized cameras; this is the case for field monitoring of large structures and infrastructures. The movement is evaluated by following, frame after frame, the position of installed targets or prepared speckle patterns, material texture, building edges and corners, or visible structural details such as bolts in steel structures [65,66,67,68,69,70,71,72,73].
The early history of image-based measurements belongs to the field of photogrammetry—see, for example, Sutton et al. [65] for a wide review of the initial developments. The first laboratory applications in experimental structural mechanics to evaluate displacements and strains date back to the mid-1980s [74,75]. Since then, significant progress has been made thanks to advancements in computer vision algorithms, e.g., [65,76,77,78,79,80,81,82,83], which found their way into commercial software, for example, in a programming toolbox [84], in dedicated software, e.g., [85], and in complete vision-based monitoring systems derived from earlier research studies at the University of South Carolina [86] and at the University of Bristol [87].
If attention is focused on the use of vision-based monitoring for low-energy/low-amplitude vibrations, as is the case of most operational conditions in civil engineering, many challenges have to be faced, e.g., [67,71,72,73]. Even if the frame rate of the camera (acquired images per second) is high enough to avoid aliasing, effective algorithms (backed by adequate computational resources) as well as camera/lens performances/quality are crucial to accurately track small structural motions.
Regarding video processing, the term optical flow, e.g., [65,66,67,68,69,70,71,72,73], is used to identify general computer vision techniques associating pixels in a reference image with corresponding pixels in another image of the same scene. Among the available algorithms, the most popular in structural engineering applications [71,72,73] appear to be correlation-based template matching and feature point matching.
Template matching searches for an area in a new frame most closely resembling the reference (or template) predefined as a rectangular subset in the initial frame. The advantage of template matching is the minimal user intervention, limited to specifying the template region in the reference frame. The limitations of template matching are related to its sensitivity to changes in lighting and background conditions, in addition to tracking problems with very deformable structures. Feature point matching is an efficient approach based on key-point detection and matching. Key-points are defined in computer vision as points that are stable, distinctive, and invariant to image transformation, e.g., building corners, peculiar building details, and bolts in steel structures. Instead of the raw image intensities as in template matching, a feature descriptor, i.e., a complex representation based on the shape and appearance of a small window around the key-point, is used for matching. In this way, feature matching is less sensitive to illumination change, shape change, and scale variation. However, feature point matching requires the target regions to have rich textures.
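To make the template matching idea concrete, a minimal sketch using OpenCV is shown below; it assumes a fixed camera and grayscale conversion of each frame, and the function and variable names are illustrative rather than taken from any of the cited studies. Practical implementations add sub-pixel interpolation of the correlation peak and robustness checks.

```python
import cv2
import numpy as np

def track_template(video_path, template_bbox):
    """Track a rectangular template through a video; returns pixel displacements
    of the best-match location with respect to the reference (first) frame."""
    cap = cv2.VideoCapture(video_path)
    ok, first = cap.read()
    if not ok:
        raise IOError("Cannot read video: " + video_path)
    first_gray = cv2.cvtColor(first, cv2.COLOR_BGR2GRAY)
    x, y, w, h = template_bbox                      # template defined in the reference frame
    template = first_gray[y:y + h, x:x + w]
    positions = [(x, y)]
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Normalized correlation reduces (but does not remove) sensitivity to lighting changes
        scores = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, best = cv2.minMaxLoc(scores)       # integer-pixel location of the best match
        positions.append(best)
    cap.release()
    positions = np.asarray(positions, dtype=float)
    return positions - positions[0]                 # horizontal/vertical displacement in pixels
```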
Particularly promising and interesting for monitoring low-energy/low-amplitude vibrations are recent methodologies for motion magnification [88,89,90,91,92,93], able to reveal subtle changes in the images, both in the original Lagrangian formulation (tracking a specific feature in a video in time and space) and in the Eulerian formulation, where a pixel with a fixed coordinate is selected and its value is monitored in time. In fact, the amplification of the motion-related information improves the signal-to-noise ratio in vision-based monitoring, which might otherwise be critical in many operational conditions of civil engineering structures and infrastructures.
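As a rough illustration of the Eulerian idea, the sketch below band-pass filters each pixel's intensity time series around the frequency band of interest and adds the amplified component back to the sequence. The published motion magnification methods [88,89,90,91,92,93] operate on spatial (and phase) decompositions of the frames rather than on raw intensities, so this is only a simplified, assumption-laden illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def eulerian_amplify(frames, fps, f_low, f_high, alpha=20.0):
    """frames: float array (n_frames, height, width) of grayscale intensities.
    Returns a sequence in which intensity variations in the [f_low, f_high] Hz
    band (typically motion-related) are amplified by the factor alpha."""
    nyq = 0.5 * fps
    b, a = butter(2, [f_low / nyq, f_high / nyq], btype="bandpass")
    filtered = filtfilt(b, a, frames.astype(float), axis=0)  # temporal filtering per pixel
    return frames + alpha * filtered
```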
Recent articles have analyzed the state of the art in dynamic monitoring of structures using vision-based methodologies, with applications focused on structural health monitoring [68,71,72,73]. These contributions, published by some of the leading experts in the subject, are a great source of information on a multitude of aspects, such as performance and improvements in motion tracking algorithms, comparisons of the performance of various consumer- and industrial-grade cameras, and the influence of environmental conditions on the acquired measurements. The abundance of information is inevitable, given the large number of research contributions published in the last decade, as detailed in the third section of this review article. All such information could be overwhelming for structural engineers and stakeholders with no background in vision-based monitoring, although possibly familiar with consolidated monitoring methodologies. Accordingly, the objective of this review article is to provide an introductory discussion of the current state of the art of vision-based dynamic monitoring of structures and infrastructures through an overview of the results achieved in field monitoring of vibrations for full-scale case studies. In this way, engineers and stakeholders interested in the possibilities of contactless monitoring of structures and infrastructures can gain an overview of up-to-date achievements of vision-based techniques to support a first evaluation of their feasibility and convenience for future monitoring tasks.
2. Brief Overview of Vision-Based Monitoring Systems
2.1. Monitoring Process
A vision-based system can consist of a set of video cameras connected to a computer running software capable of processing the acquired images in real time, or of a set of video cameras whose recordings are simply acquired during monitoring and processed afterwards. Depending on the distance between the cameras and the structure to be monitored, appropriate lenses must be selected to obtain images with adequate resolution, indispensable to track the motion of the selected targets with sufficient accuracy, e.g., [65,66,71,73]. Lighting lamps can be added for conducting measurements in positions with scarce illumination or even at night.
The monitoring process roughly consists of the following phases: (1) installation, i.e., the video cameras equipped with the selected lenses are placed on tripods in the most convenient locations, connected to the computer, and synchronized; for each video camera, the targets to be tracked are set (depending on the post-processing procedures, they could be, for example, applied markers or existing textures on the structure surface); (2) calibration, i.e., the relationship between the pixel coordinates and the physical coordinates is obtained, usually based on a known physical dimension on the object surface and its corresponding image dimension in pixels; and (3) video acquisition and processing, i.e., the videos are recorded and the motion of each target is tracked in the image sequences; as a result, the displacement time history is given as output. A schematic representation of this simple flowchart is depicted in Figure 1, with the sources of errors and uncertainties discussed in the following section.
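A minimal sketch of phases (2) and (3) is given below: the scale factor is obtained from a known physical dimension and its length in pixels, and the tracked pixel positions are converted into a displacement time history. It assumes motion approximately parallel to the image plane and ignores lens distortion; names and the numerical example are illustrative (the example back-calculates the 20.5 mm/pixel scale factor reported for the Manhattan Bridge test [187] discussed later, assuming the 7.2 m known dimension spans about 351 pixels).

```python
import numpy as np

def scale_factor(known_length_m, known_length_px):
    """Calibration: metres per pixel from a known dimension on the object surface."""
    return known_length_m / known_length_px

def displacement_history(pixel_positions, sf, fps):
    """pixel_positions: (n_frames, 2) tracked target positions in pixels.
    Returns the time vector and the displacement time history in metres."""
    disp = (pixel_positions - pixel_positions[0]) * sf   # relative to the first frame
    t = np.arange(len(pixel_positions)) / fps
    return t, disp

# Example: a 7.2 m known dimension covering about 351 pixels gives ~0.0205 m/pixel,
# i.e., roughly the 20.5 mm/pixel scale factor reported in [187].
sf = scale_factor(7.2, 351)
```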
2.2. Errors and Uncertainties
Differently from other measurement approaches, where the accuracy of the employed sensors/systems is provided by their manufacturers and generally remains stable within assigned operational conditions during a given calibration time span, the accuracy of vision-based systems cannot be related solely to the technical specifications of the video cameras. The determination of accuracy in vision-based monitoring is a rather complex problem, as it depends on a multifaceted combination and interaction of different parameters. The sources of errors and uncertainties in vision-based monitoring can be subdivided into three groups: (1) intrinsic to the monitoring hardware, e.g., optical distortions and aberrations in the lenses, and limitations in the resolution and performance of the sensor of the video camera; (2) relevant to the software and calibration/synchronization process, e.g., limitations in the motion tracking algorithm, synchronization lags among cameras, and round-offs in camera calibration; and (3) environmental, e.g., influence of the location where the camera is installed, vibrations induced in the camera-tripod system, variable ambient light, and non-uniform air refraction due to variable temperatures between the installed cameras and the structure being monitored. These sources inevitably influence each other; for example, the resolution of the hardware influences the precision that can be achieved in the calibration, which is in turn influenced by the environmental conditions. The scheme depicted in Figure 1 summarizes the possible interactions between the three phases of the vision-based monitoring process and the sources of errors and uncertainties.
Investment can be made in the hardware (high quality cameras and lenses), in up-to-date software, in efforts to access the most favorable locations for camera installation, and in accurate controls of the calibration and synchronization. Nevertheless, the variability of the environmental parameters might still jeopardize the quality of the results; this is a concern especially for long-term field monitoring as required in structural health monitoring, which faces large variations in ambient light, temperature, humidity, wind, and other possible interferences inducing vibrations in the cameras. As a consequence, these sources of errors and uncertainties have a larger impact on vision-based monitoring as compared to the case of conventional monitoring procedures when sensors are in direct contact with the object being monitored.
Different studies on the assessment of the errors and uncertainties in vision-based monitoring can be found in earlier works, e.g., [65,71], as well as in the recent literature, e.g., [94,95,96,97,98,99,100,101,102,103,104,105,106,107], mostly through theoretical analyses and laboratory testing aimed at evaluating the influence of specific aspects related to the hardware or to external causes. For example, D’Emilia et al. [97] investigated how two different types of camera could influence the accuracy of vibration monitoring based on video acquisition. Both cameras had the same sensor resolution (1280 × 1024 pixels) but two different maximum frame rates at full resolution: 25 frames per second (FPS), as found in low-cost consumer-grade cameras, and 2000 FPS, as found in more expensive high-speed industrial cameras. Tests were made in a laboratory under controlled conditions; laser vibrometry as well as contact accelerometers were used to evaluate the accuracy of the vision-based systems. It was observed that, if a slow camera (25 FPS) is used together with controlled-aliasing techniques, the uncertainty is on the order of 3.4% of the vibration amplitude for vibrations in the range of 10–70 Hz. If a high-speed camera (2000 FPS) is used, the experimental data showed a relative uncertainty of 8% of the vibration amplitude in the frequency range of 100–300 Hz and 13% in the frequency range of 400–600 Hz. Given that the frequency range of interest in civil structures and infrastructures is well below 70 Hz, the effect of the acquisition frame rate on the evaluation of the measured amplitude of vibration can be expected to be limited.
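As a back-of-the-envelope check related to the frame rates discussed above, the camera frame rate acts as the sampling rate of the displacement signal, so components above half the frame rate are aliased unless dedicated (e.g., controlled-aliasing) strategies are used; the snippet below is only an illustrative calculation.

```python
def nyquist_frequency(fps):
    """Highest vibration frequency observable without aliasing at a given frame rate."""
    return fps / 2.0

for fps in (25, 30, 60):
    print(f"{fps} FPS -> components up to {nyquist_frequency(fps):.1f} Hz")
# Dominant frequencies of most civil structures lie well below 15 Hz, so
# consumer-grade frame rates of 25-60 FPS are generally adequate in the field.
```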
Another example of uncertainty evaluation is a laboratory study by Liu et al. [102], focused on the influence of the distance between the camera and the object being monitored, the focal length of the lens, and the calibration process. The results showed that the uncertainty in the measurements (displacements in the considered tests) increased with distance and decreased with increasing focal length; using a longer known length in the calibration process could greatly reduce the measurement uncertainty; and the measurement uncertainty was more sensitive to the uncertainty of the known length used in calibration than to that of its projection in the image. However, as the distance increased, the sensitivity to the known length became weaker and the sensitivity to its projection in the image became stronger. When a longer focal length was used, the influence of the working distance on these sensitivities was weaker. Indeed, these indications provide useful support for preliminary choices when designing the configuration and installation of a vision-based monitoring system.
In addition to the studies on the assessment of errors and uncertainties mentioned earlier [65,71,94,95,96,97,98,99,100,101,102,103,104,105,106,107], important information on the accuracy of vision-based systems can be obtained from the outcomes of field applications, as discussed in the following section. Inevitably, monitoring of full-scale structures and infrastructures poses more difficulties in the evaluation of the sources of errors and uncertainties, given the number of possible concurring causes in the field as compared with the laboratory.
3. Recent Field Applications of Vision-Based Vibration Monitoring in Civil Engineering
3.1. General Overview
Many published works presenting applications of vision-based monitoring in civil engineering can be found in the technical literature. Contributions (only refereed journal articles are considered here) can be organized into five areas of monitoring applications: (1) measurements of displacements and strains under static and quasi-static loadings [108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130]; (2) measurements of displacement time histories in prototypes or small-scale structures in controlled environmental conditions, typically in a laboratory [131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162]; (3) field measurements of displacement time histories in full-scale structures [163,164,165,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,182,183,184,185,186,187,188,189,190,191,192,193,194,195,196,197,198,199,200]; (4) development of sensors using vision-based techniques [201,202,203,204,205,206,207]; and (5) field measurements of moving components, as in the case of wind turbines, e.g., [208,209,210,211]. Such a subdivision is made regardless of the adopted vision-based techniques and image processing algorithms. It should be remarked that overlaps exist between these areas, as some publications illustrate preliminary laboratory validations prior to field testing. Hence, the proposed subdivision should be considered on the basis of the main contribution provided.
Attention in this review article is given to the analysis of recent results obtained in vibration (displacement time histories) monitoring of civil engineering structures and infrastructures in the field, as documented in refereed journal articles published in the last four years [183,184,185,186,187,188,189,190,191,192,193,194,195,196,197,198,199,200]. The results presented are subdivided into six structural groups: steel bridges, steel footbridges, steel structures for sport stadiums, reinforced concrete structures, masonry structures, and timber footbridge. For each field study, a short description of the monitored structure is provided, with a summary of the main information and conclusions provided in the publication. A list of the considered applications is reported in Table 1; it is observed that half of them are in the U.S.A. and that bridges/footbridges are the most recurring structures.
For each reference, some essential information on the adopted hardware is provided in Table 2, alongside the video processing approach (optical flow, template matching, feature matching, motion magnification, or proprietary commercial software), the loading condition during monitoring, and the comparisons with monitoring based on other technologies. In this way, Table 1 and Table 2 are intended to serve as a guide to the following subsections, each dedicated to one of the six structural groups, presented in the same order used in the tables.
It is worth anticipating that, in all cases, the comparisons showed good correlation between vision-based monitoring and the other considered technologies, with one exception: the steel footbridge (vertical truss frames) tested by Dong et al. [199], where the differences between accelerometer and vision-based measurements were not negligible. It should be remarked that, in four cases, no direct comparisons were made: Shariati and Schumacher [183], as well as Feng and Feng [187], compared the magnitudes of their measurements to those obtained in previous tests, concluding that such comparisons were favorable; in Dhanasekar et al. [195], the outcomes of the experimental monitoring compared satisfactorily with numerical simulations in terms of the magnitude of the monitored structural parameters; and in Lydon et al. [196], vision-based monitoring was part of an integrated monitoring system that included fiber optics, with the objective of having the two systems complement each other.
3.2. Steel Bridges
Feng and Feng [187] presented the outcomes of vision-based field monitoring of the Manhattan Bridge (New York, NY, USA) using a single camera for remote real-time displacement measurements at one single point and, simultaneously, at multiple points. The Manhattan Bridge, opened to traffic in 1909, is a suspension bridge spanning the East River in New York City, connecting Manhattan and Brooklyn; the main span is 448 m long and the deck is 36.5 m wide, including seven traffic lanes in total and four subway tracks. The camera was placed on stable stone steps around 300 m away from the bridge mid-span, and the video recording was made at a frame rate of 10 FPS. The known dimension (7.2 m) of the vertical trusses was used for camera calibration. Displacement responses at one single point in the mid-span region were measured during the passage of subway trains, with the scale factor estimated as 20.5 mm/pixel. The authors commented that the dynamic displacement response was similar to that measured by GPS and interferometric radar systems in previous studies. Then, by zooming out the lens to obtain a large field of view (FOV), i.e., the area that is visible in the image, three points in the mid-span region were selected and a scale factor of about 36 mm/pixel was estimated. The authors commented that such measurements displayed more fluctuations, especially for small displacement amplitudes, as a consequence of the larger FOV, which decreased the measurement resolution compared with the single-point case. In addition, the authors studied the influence of camera vibration during the field measurements. Such a test was conducted by looking at a building in the background and tracking its apparent motion; the camera motion was estimated under the assumption that the building was not moving. The authors concluded that, compared with the bridge displacement, the camera motion was insignificant.
Chen et al. [189] illustrated their application of field vision-based monitoring of the WWI Memorial Bridge, a vertical-lift truss bridge, spanning the Piscataqua River (USA) from Portsmouth, New Hampshire, to Badger’s Island in Kittery, Maine, with a total length of 366 m. Measurements of the vibrations due to the lift span impact excitation and normal in-service traffic were made with a single video camera and compared with the results from accelerometers and strain gauges. The camera was placed over 80 m away from the bridge in a nearby park, on a heavy tripod with an accelerometer installed to measure the camera motions. Manual calibration was made based on known dimensions of the structural elements. The videos were processed using a technique inspired by motion magnification and detailed in [140,188]. Two accelerometers and two strain gauges were placed on the bridge to compare the remote video camera measurements with those from the contact sensors. The results for both lift span impact and in-service traffic were identical in terms of peaks in the frequency spectra; in addition, the authors observed that the time-series measurements also compared favorably.
Xu et al. [194] presented their experience in field vision-based monitoring of the Mineral Line Bridge, a skew steel girder bridge with a span length of 14.7 m, carrying the West Somerset Railway near Watchet (UK). Three sensing systems were adopted and compared: one consumer-grade camera, a high-end commercial vision-based system, and two accelerometers. The authors observed that a vision-based monitoring system using a single consumer-grade camera could provide an accurate characterization of the bridge in favorable test conditions, which included choosing salient target patterns for tracking and avoiding any camera shake. Regarding camera shake, the authors proposed a criterion for evaluating camera stability based on the tracked motion of a stationary target, and the correction was performed only when necessary: tracking the nominal motion of an adjacent stationary object was very effective in removing the low-frequency drift error, but it could also reduce the measurement resolution. In addition, the authors investigated a data fusion method to combine the vision-based measurements with data from accelerometers. Such a method was shown to be capable of denoising the measurements and providing better estimates. Accordingly, the authors concluded that mixed systems consisting of cameras and accelerometers overcome the field testing limitations of vision-based monitoring and have the potential for accurate and robust sensing on bridge structures.
3.3. Steel Footbridges
Xu et al. [193] illustrated the activities for field vision-based monitoring of the Baker Bridge, a cable-stayed footbridge spanning 109 m over the A379 dual carriageway in Exeter (UK). The bridge provides cyclist and pedestrian access to the Sandy Park Stadium and experiences heavy pedestrian traffic on match days. The bridge comprises a single A-shaped tower that supports the continuous steel deck over a simple support at the pylon cross-beam and via seven pairs of stay cables. Because of the range of frequencies of its first vibration modes, the bridge is prone to noticeable vibration response under pedestrian traffic. A consumer-grade camera was mounted on a tripod at the central reservation of the A379 carriageway below, approximately 55.30 m from the bridge tower. Video recording was done at 30 FPS. Camera calibration was based on the known structural dimensions from the as-built drawings, using a narrow FOV setting. Four triaxial wireless accelerometers were installed on the bridge deck to validate the results obtained from processing the images acquired by the video camera. The monitoring of the bridge included periods when large crowds of spectators crossed the deck. The modal frequencies of the bridge deck identified from vision-based monitoring accurately matched those obtained from the contact accelerometers. In addition, measurements of cable vibration using the vision-based system were performed and compared to the results from two triaxial wireless accelerometers installed on the cables. The authors concluded that the vision-based system worked better at capturing the lower modal frequencies of the cables, whereas the accelerometers provided reliable estimations of the higher-frequency modes. However, the multipoint deformation data obtained using the vision system proved to be effective for tracking cable dynamic properties at the same time as the bridge deformation, allowing the effect of varying load on cable tensions to be observed. In this way, a powerful diagnostic capability for larger cable-supported structures was achieved.
Lydon et al. [196] presented the field vision-based monitoring of the Peace Bridge over the River Foyle in Derry (Northern Ireland), a self-anchored suspension bridge with a single 96.3 m suspended central span, two suspended 63.4 m (east and west) side spans, and sections not supported by cables carried by guided supports between the side spans and the abutments. The bridge was monitored using 14 accelerometers and a low-cost camera installed on a tripod on the east bank of the river, 71.2 m from the mid-span of the east span. The video camera pointed at the position of an accelerometer to validate the results obtained from the video acquisition. Monitoring was performed under ambient input as well as during the flow of a large number of pedestrians during a local event. Camera vibration caused by environmental conditions was removed through image stabilization, using a stationary building in the background of the image as a reference point. The authors observed a very good correlation between the measurements obtained from the camera and those obtained from the accelerometer, with the findings considered very promising for this low-cost monitoring system. The authors also commented that the single camera was set up in a matter of minutes, compared with the several hours required to place and run cables to the accelerometers along the 312 m footway.
Hoskere et al. [197] illustrated the field vision-based monitoring of the Little Golden Gate Bridge over the Lake of the Woods in Mahomet, Illinois, about 18 km northwest of Champaign, IL (USA). It is a pedestrian suspension bridge made of steel girders and cables with wooden slats on the deck, spanning 67 m with 10 m tall posts on either side; the deck is suspended by cables with hangers every meter. Bridge vibrations were monitored using a camera installed on an unmanned aerial vehicle (UAV), with thirteen markers affixed at regular intervals on one side of the bridge. For comparison, four accelerometers were installed on the first half-span. The bridge was excited by three pedestrians jumping on the second half-span. The test was conducted in challenging field conditions, with wind speeds between 25 and 35 km/h. The modal properties determined by the vision-based approach were compared with the results from the accelerometers. The corresponding modal assurance criterion (MAC) values were all above 0.925, and the difference in the natural frequencies was less than 1.6% for all three compared modes. Thus, the authors concluded that these results demonstrate the efficacy of the proposed vision-based approach to conduct modal surveys of full-scale infrastructure. Sophisticated algorithms such as those employed are able to cope with complex situations and can handle the processing of videos acquired from cameras installed on UAVs for structural monitoring purposes.
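For readers unfamiliar with the metric, the modal assurance criterion used in [197] is a standard measure of correlation between two mode-shape vectors; a minimal sketch for real-valued mode shapes is given below (variable names are illustrative).

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal assurance criterion between two real-valued mode-shape vectors;
    values close to 1 (e.g., above 0.925 as in [197]) indicate well-correlated shapes."""
    phi_a = np.asarray(phi_a, dtype=float)
    phi_b = np.asarray(phi_b, dtype=float)
    return (phi_a @ phi_b) ** 2 / ((phi_a @ phi_a) * (phi_b @ phi_b))
```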
Dong et al. [199] presented the field vision-based monitoring of a footbridge on a campus in the southeastern USA, made of vertical truss frames connected via a splice connection at mid-span and spanning a total length of 39 m over a pond; the deck width is 4.17 m, and it serves light pedestrian traffic and small vehicles such as golf carts. A single video camera with a resolution of 1920 × 1080 pixels and a frame rate of 60 FPS was located near one of the abutments and employed to monitor the vertical vibration of the mid-span. Bolts in the truss system were adopted as targets in the vision-based monitoring. An accelerometer was installed at mid-span for comparison purposes. The footbridge was excited under different types of human loading (walking, running, and jumping at different paces). The authors highlighted that the differences in the acceleration spectra between the vision-based acquisition and the contact accelerometer were not always negligible. However, the serviceability assessment of the footbridge for the different loading cases provided the same outcomes whether vision-based data or accelerometer recordings were used.
3.4. Steel Structures for Sport Stadiums
Khuc and Catbas [184,185,198] illustrated a campaign of field vision-based monitoring of the steel superstructures of a football stadium in the USA, with approximately 45,000 seating capacity, that exhibited considerable vibration levels, especially at the sections occupied by the highly active local team supporters. The vision-based method and framework as implemented by the authors were verified under different experimental conditions, including varying lighting conditions, different camera locations (distances and angles), and camera frame rates (30 and 60 FPS). Specifically, a beam under the grandstand was selected for monitoring predetermined measurement points. A displacement potentiometer and an accelerometer were installed for comparative purposes. The contact sensors and camera recorded the structural vibrations synchronously during periods of intense crowd motion throughout football games. The authors concluded that the results from vision-based measurements were consistent with those from contact measurements and that the first three operational modal frequencies under a human jumping load were almost the same. In addition, the authors commented that, although quite accurate results for defined measurement ranges and conditions could be achieved through a completely non-contact vision-based implementation with low-cost hardware, some issues such as the data storage requirements for clips and images, the processing time for image data, and the limitations for horizontal displacement measurement needed to be addressed in future developments.
Feng et al. [186] presented the field vision-based monitoring of the Hard Rock Stadium, home to the Miami Dolphins NFL team in Florida (USA). Specific attention was given to the monitoring of cable forces during the construction phases of a new long-span, cable-supported canopy covering the entire seating bowl. An industrial video camera with a maximum resolution of 1280 × 1024 pixels and a maximum rate of 150 FPS was used with a manual-focus optical lens having a focal length in the range of 16 mm to 160 mm. Considering that tensioned cables in similar civil engineering infrastructure typically have a fundamental frequency under 10 Hz, the authors decided to adopt a sampling rate of 50 FPS for video recording (meaning any frequency beyond 25 Hz would be aliased according to the Nyquist criterion), which would make it possible to capture enough cable vibration components. The vibrations of the four tie-down cables at each quad of the stadium were simultaneously measured using one single camera, while the vibration of the inclined cables was measured by one single camera placed remotely on the seating bowl. It was found that the cable forces measured using the vision-based method agreed with the readings from load cells installed for validation purposes, with a maximum discrepancy of 5.6%. The authors noted that the non-contact measurement capacity of the vision sensor eliminated the need to access the cables to install sensors, an operation that is typically difficult and risky. Compared with the expensive and time-consuming method of using conventional accelerometers and associated data acquisition systems, it was concluded that the non-contact vision-based acquisition approach represented a convenient low-cost method for either periodic or long-term monitoring of cable-supported structures.
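As background on how cable forces are commonly inferred from identified vibration frequencies, the sketch below applies the classical taut-string relation; it neglects sag and bending stiffness, is not necessarily the exact procedure adopted in [186], and the numerical values in the example are purely illustrative.

```python
def cable_tension_from_frequency(f_n_hz, n, length_m, mass_per_metre):
    """Taut-string estimate: f_n = (n / 2L) * sqrt(T / m)  =>  T = 4 m L^2 (f_n / n)^2."""
    return 4.0 * mass_per_metre * length_m ** 2 * (f_n_hz / n) ** 2

# Illustrative example: a 30 m cable with 50 kg/m and a first identified frequency
# of 2.5 Hz corresponds to a tension of roughly 1.1 MN.
print(cable_tension_from_frequency(2.5, 1, 30.0, 50.0))   # ~1.13e6 N
```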
3.5. Reinforced Concrete Structures
Shariati and Schumacher [183] documented the field vision-based monitoring of the Streicker Bridge, a footbridge on the Princeton University campus (New Jersey, USA) with a straight main deck section supported by a steel truss system underneath and four curved ramps leading up to the straight sections. Structurally, the main span is a deck-stiffened arch and the legs are curved continuous girders supported by steel columns. The legs are horizontally curved and the shape of the main span follows this curvature. The arch and columns are made of weathering steel, while the main deck and legs are made of reinforced post-tensioned concrete. A consumer-grade camera with a zoom lens was used to acquire a 60 FPS video of one of the ramps while a number of volunteers jumped up and down on it. A target mounted on the edge of the bridge slab was used to track displacement time histories. Such a target had been set up by a research team from Columbia University that investigated the same footbridge with their own vision-based monitoring system [202] a few years earlier. In addition, the Streicker Bridge was equipped with two fiber-optic sensing technologies, i.e., discrete long-gauge sensing based on fiber Bragg gratings and truly distributed sensing based on Brillouin optical time domain analysis; both were embedded in the concrete during construction. The natural frequencies obtained by the authors in their tests were found to be the same as those measured by the fiber-optic measurement system and by the other vision-based method in [202]. In addition to the frequency content, the two vision-based measurements gave comparable displacement amplitudes, showing the replicability of the obtained results.
Harvey and Elisha [190] aimed at demonstrating that existing cameras installed within buildings, such as surveillance cameras, might be capable of extracting the structural response by tracking the interstory drifts. To this end, a full-scale five-story reinforced concrete building tested on the unidirectional large high-performance outdoor shake table at the University of California San Diego (UCSD) was used as a case study. The test protocol consisted of six different earthquake ground motions applied to the building. The building was heavily instrumented with an array of analog sensors and cameras. The adopted method involved the extraction of vision-based dynamic displacement measurements from the recorded video footage and the estimation of the dynamic properties of the building to which the cameras were attached. The results showed that the footage captured by these cameras was adequate to identify the natural frequencies of the building vibration during free and forced (seismic) response.
Lydon et al. [196] presented the field vision-based monitoring of the Governors Bridge, a three-span reinforced concrete beam-slab bridge that crosses the River Lagan to the south of Belfast (Northern Ireland). The bridge has an overall length of 62.6 m and carries two lanes of westbound traffic from the Annadale embankment to the Stranmillis embankment. The field test was carried out under normal, non-rush-hour vehicular traffic loading on the bridge. Two low-cost wireless action cameras were used for vision-based monitoring; one was located 3.75 m from one of the deck beams to monitor its displacements (determined scale factor of 0.0798 mm/pixel), and a second camera was used to identify the load above the deck. For comparative purposes, a fiber optic displacement gauge with a resolution of 0.03 mm was installed and data acquisition was carried out using a dynamic interrogator at a scanning rate of 25 Hz, synchronized with the video acquisition of the camera set at 25 FPS. The displacements identified with the two systems showed excellent agreement. Therefore, it was concluded that the same accuracy in displacement measurement could be obtained from the vision sensor as from the fiber optic displacement gauge, even when low-cost cameras were adopted.
3.6. Masonry Structures
Fioriti et al. [191] presented the monitoring of two cultural heritage constructions in Italy, i.e., the temple of Minerva Medica, a ruined nymphaeum of Imperial Rome, and the Ponte delle Torri in Spoleto, an aqueduct and pedestrian bridge with multiple arches, having a total length of 230 m and piers up to 80 m high, completed in the Middle Ages and possibly built over Roman ruins. The Minerva Medica ruins are very close to a tramway producing strong vibrations, whose effects were clearly evident in the video taken using a low-cost consumer-grade camera at a distance of 9 m. Modal analysis by motion magnification of the field video recordings was performed and compared to the results obtained through conventional contact velocimeters; the differences were limited to just a few percentage points. Satisfactory results were also achieved for the Ponte delle Torri, despite the small level of structural excitation due to the wind action and the low resolution of the adopted video cameras. The authors commented that such results constitute a remarkable starting point for future experimentation and improvements. Indeed, monitoring the ambient vibration of a massive multiple-arch masonry structure under normal conditions through vision-based monitoring appears to be a major successful case study, considering the opposition often encountered when installing contact sensors on cultural heritage structures.
Acikgoz et al. [192] illustrated the field vision-based monitoring of the Marsh Lane viaduct, a masonry bridge with multiple arches on the Leeds–Selby route (UK) carrying two electrified train tracks. A commercial vision-based system was used to monitor the displacements of two consecutive arches of the viaduct and complemented a fiber optic system installed in the bridge. The objective was to estimate rigid body rotations of the monitored masonry arch segments. The vision-based system consisted of two video cameras and a system controller. The cameras recorded videos of the monitored structure at 50 FPS. Data processing consisted of tracking the sub-pixel position of the natural brick texture in the image and scaling pixel movements to metric movements with a new registration technique proposed by the authors [192]. In order to understand the viaduct behavior, two different camera location configurations were investigated with two-dimensional DIC. In the first configuration, the cameras monitored planar movements of two arches in the vertical plane directly under the northern tracks, aligned with the bridge longitudinal axis. In the second configuration, the cameras monitored the movements in the vertical planes lying under the northern and southern tracks. In both configurations, the cameras were positioned centrally, in line with the crown of the arches. This setup allowed capturing all the targets with a declared 0.08 mm resolution in each plane. As already mentioned in the general overview, the installed vision-based monitoring was part of an integrated monitoring system that included fiber optics, with the objective of having the two systems complement each other and provide a comprehensive description of the structural response and of the damage mechanisms activated. The authors commented that the quasi-distributed nature of the data allowed extensive measurements of time histories of displacements, rotations, and strains under the transit of trains. Such extensive measurements provided unique data that enabled new insight into the rigid body motions and damage mechanisms of the viaduct.
Dhanasekar et al. [195] presented field vision-based investigations on two masonry arch rail bridges in Australia. Digital images of speckled patches in three key regions (crown, support, and quarter point) of one half of an arch were acquired from three independent cameras, each focusing on one of the patches from approximately 4 m away, during the passage of trains at night. Images were acquired using industrial monochrome cameras at 50 FPS. The time histories of the deflections and strains were measured. The wheel positions, train lengths, and speeds were ascertained using three lasers. The wheel position was identified as the critical element for the deflection and strain in the arch. A three-dimensional finite element model was implemented to compare the field strain magnitudes obtained from the vision-based monitoring with those from numerical simulations, obtaining a favorable agreement. The authors concluded that vision-based monitoring was a suitable method to measure deflections and strains on masonry arch rail bridges, provided that adequate care is taken to ensure the quality of the images.
3.7. Timber Footbridge
Fradelos et al. [200] illustrated the field vision-based monitoring of the Kanellopoulos timber arch footbridge (Patras, Greece), 30 m long and 2.9 m wide, made of glulam wood and metallic elements. The omission of X-bracing below the deck and the poor construction of the metal X-bracing at its roof made the footbridge prone to lateral oscillations. The bridge was monitored using satellite systems, robotic theodolites, and accelerometers. Videos were recorded during testing using common low-cost cameras, without any initial intention of vision-based monitoring. Such video recordings were later examined and used to estimate the dynamic horizontal deflections of specific points of the footbridge. It was shown that the analysis of low-cost video images using a simple approximate technique permitted the reconstruction of the movements of the bridge and the computation of some of its structural characteristics. This result was possible under ideal conditions: the movement was two-dimensional, the displacements of the selected target points produced a signal exceeding the pixel resolution, the camera was in a fixed position, the video image covered stable points defining a reference system, and structural elements near the selected target points allowed the image to be scaled along the two examined axes. As a result, the first lateral natural frequency of the footbridge obtained from video processing differed by less than 2% from that estimated using the accelerometers and geodetic sensors.
4. Discussion
The overview of the vision-based field applications presented in this review article leads to the following remarks, involving four main aspects: camera installation, hardware, software, and hybrid contact–contactless solutions.
Regarding the camera installation, the possibility of placing the video camera at a good vantage point, both stable and allowing views of the structural displacements with few perspective distortions, appears to be the most important aspect in the considered applications. If this is the case, good results can be achieved even with low-cost video cameras and simple video processing algorithms. This condition inevitably sets the inherent limits of vision-based monitoring: only points that are clearly visible from the video camera can be monitored, and major difficulties are expected in locating good vantage points in urban environments, for example, when monitoring tall buildings in crowded downtown areas.
Regarding the hardware, it is essential to choose the appropriate camera lens so that the obtained field of view is suitable for the test. In fact, the sensitivity is controlled by the scale factor (the ratio between the physical displacement and the corresponding pixels in the recorded image); a lower scale factor results in a higher resolution of the measurement and in lower noise. Accordingly, a narrower field of view (zooming in the lens) decreases the scale factor and provides a better resolution of the monitored structure. On the other hand, a wider field of view (zooming out the lens) provides less resolution of the monitored structure and reduces the quality of the displacement measurements, even if more monitoring points can be identified and tracked with the same camera. Another way to reduce the scale factor is the use of a higher camera resolution, i.e., more pixels for the same displacement. However, the increased size of the digital images would be demanding in terms of video footage storage and post-processing; the latter point might compromise the possibility of real-time processing. Nevertheless, the achievable resolution in vision-based monitoring is a quantity that is meaningful only when compared to the magnitude of the structural response. In the examined case studies, there were situations with limited resolution in absolute terms that nevertheless led to satisfactory results, because the monitored structures underwent significant displacements when excited, e.g., lively footbridges under heavy pedestrian traffic, bridges under train passages, and masonry structures close to tram lines. Another hardware aspect discussed in the field applications considered in this overview is the image sampling rate. The maximum frame rate of most conventional video cameras is in the range of 30 to 60 frames per second; such speeds are indicated as sufficient for most civil engineering structures. Industrial video cameras are available with much higher speeds; however, such speeds do not find application in field monitoring in civil engineering, mostly owing to the low frequency content of structures and infrastructures, as well as the fact that, the higher the frame rate, the more difficult it is to achieve real-time processing.
Regarding the software, there are many possibilities for image processing, given the number of algorithms available in the technical literature. Template matching and feature matching appear to be the most common approaches.
However, recent motion magnification algorithms were tested in field applications and provided very interesting results, even with stiff and massive structures.
A final remark concerns the fact that some field applications used vision-based monitoring together with conventional contact sensors. In most cases, such combined use was for comparison or validation of the vision-based monitoring. However, some studies highlighted significant benefits in combining the results obtained from two such different technologies by means of appropriate data fusion methods. In this way, it is possible to successfully combine the benefits of each technology in a hybrid contact–contactless monitoring system.
5. Conclusions
A general review of the vision-based approach as a prominent methodology for contactless monitoring of civil engineering structures and infrastructures was provided. Specific attention was given to the overview of recent applications in field monitoring of the structural dynamic response of full-scale case studies. From the examined articles, the following main conclusions can be drawn: (1) vision-based monitoring might be able to provide results equivalent to those obtained with consolidated monitoring technologies such as contact accelerometers and displacement transducers; (2) vision-based monitoring appears to be the most convenient solution for monitoring cable structures and, more generally, those structures and infrastructures with elements where the installation of contact sensors is demanding; (3) successful applications of vision-based monitoring depend on the combination of the adopted hardware–software system (video camera and lens, tripod, monitoring of camera movements, video processing algorithms for motion tracking, and motion magnification) and the influence of the environment (accessibility of favorable locations for installing the video camera, weather conditions, and their variability during video acquisition); (4) hybrid monitoring schemes combining contact sensors and contactless vision-based approaches appear to be very interesting solutions that benefit from the advantages of each of the two approaches, without the limitations inherent in the use of a single technology; and (5) the use of vision-based technologies for long-term or permanent monitoring is to date an unexplored field of application.
Figure 1. Diagram of the vision-based monitoring process and its relations with the sources of errors and uncertainties.
Table 1. List of the considered field applications of vision-based vibration monitoring.

Group | Structure | Country | Authors and Reference |
---|---|---|---|
Steel bridges | Suspension bridge | U.S.A. | Feng and Feng [187] |
 | Truss with vertical lift | U.S.A. | Chen et al. [189] |
 | Skew girder | U.K. | Xu et al. [194] |
Steel footbridges | Cable-stayed bridge | U.K. | Xu et al. [193] |
 | Suspension bridge | Northern Ireland | Lydon et al. [196] |
 | Suspension bridge | U.S.A. | Hoskere et al. [197] |
 | Vertical truss frames | U.S.A. | Dong et al. [199] |
Steel structures for sport stadiums | Grandstands | U.S.A. | Khuc and Catbas [184,185,198] |
 | Superstructure cables | U.S.A. | Feng et al. [186] |
Reinforced concrete structures | Deck on arch footbridge | U.S.A. | Shariati and Schumacher [183] |
 | Five-story building | U.S.A. | Harvey and Elisha [190] |
 | Beam-slab bridge | Northern Ireland | Lydon et al. [196] |
Masonry structures | Heritage ruins and arch bridge | Italy | Fioriti et al. [191] |
 | Arch bridge | U.K. | Acikgoz et al. [192] |
 | Arch bridge | Australia | Dhanasekar et al. [195] |
Timber footbridge | Deck-stiffened arch | Greece | Fradelos et al. [200] |
| Reference | Camera, Pixel Resolution, and Frame Rate (FPS) | Video Processing Algorithm | Loading Condition during Monitoring | Comparisons with Other Monitoring Technologies |
|---|---|---|---|---|
| [187] | Point Grey, 1280 × 1024, 10 | Template matching | Passage of subway trains | No direct; GPS and radar |
| [189] | Point Grey, 800 × 600, 30 | Optical flow | Lift impact, normal traffic | Accelerometers, strain gauges |
| [194] | Go Pro, 1920 × 1080, 25; Imetrum, 2048 × 1088, 30 | Template matching; Imetrum [87] | Passage of trains | Low-cost and high-end vision-based, accelerometers |
| [193] | Go Pro, 1920 × 1080, 30 | Template matching | Crowd of pedestrians | Wireless accelerometers |
| [196] | Go Pro, 1920 × 1080, 25 | Template matching | Crowd of pedestrians | Accelerometers |
| [197] | DJI, 3840 × 2160, 30 | Optical flow | Walking, running, jumping | Accelerometers |
| [199] | Low cost, 1920 × 1080, 60 | Feature matching | Walking, running, jumping | Accelerometers |
| [184,185,198] | Canon, N/A, 30 and 60 | Feature matching | Crowd during game | Accelerometers, displacement transducers |
| [186] | Point Grey, 1280 × 1024, 50 | Template matching | Operational, shaken | Load cell |
| [183] | Canon, N/A, 60 | Motion magnification | Pedestrian jumping | No direct; vision-based |
| [190] | N/A, 1056 × 720, 25 | Feature matching | Outdoor shake table | Accelerometers |
| [196] | Go Pro, 1920 × 1080, 25 | Template matching | Normal vehicular traffic | No direct; integrated fiber optics |
| [191] | N/A | Motion magnification | Tram vibrations, wind | Velocimeters |
| [192] | Imetrum, N/A, 50 | Imetrum [87] | Passage of trains | Fiber optics |
| [195] | Sony, 1936 × 1216, 50 | Dantec [85] | Passage of trains | No direct; numerical |
| [200] | Low cost, 1920 × 1080, 30 | Optical flow | Group of pedestrians | Accelerometers, GPS, theodolite |
Funding
This research received no external funding.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Data sharing not applicable.
Acknowledgments
The author acknowledges the constructive comments of the anonymous reviewers, which helped improve this review article.
Conflicts of Interest
The author declares no conflict of interest.
1. Ewins, D.J. Modal Testing: Theory, Practice and Application, 2nd ed.; Wiley: Chichester, UK, 2000; pp. 1-576.
2. Rainieri, C.; Fabbrocino, G. Operational Modal Analysis of Civil Engineering Structures: An Introduction and Guide for Applications, 1st ed.; Springer: New York, NY, USA, 2014; pp. 1-314.
3. Brincker, R.; Ventura, C. Introduction to Operational Modal Analysis, 1st ed.; Wiley: Chichester, UK, 2015; pp. 1-360.
4. Mottershead, J.E.; Friswell, M.I. Model updating in structural dynamics: A survey. J. Sound Vib. 1993, 167, 347-375.
5. Friswell, M.I.; Mottershead, J.E. Finite Element Model Updating in Structural Dynamics, 1st ed.; Springer: New York, NY, USA, 1995; pp. 1-286.
6. Paultre, P.; Proulx, J.; Talbot, M. Dynamic testing procedures for highway bridges using traffic loads. J. Struct. Eng. 1995, 121, 362-376.
7. De Callafon, R.A.; Moaveni, B.; Conte, J.P.; He, X.; Udd, E. General realization algorithm for modal identification of linear dynamic systems. J. Eng. Mech. 2008, 134, 712-722.
8. Moaveni, B.; He, X.; Conte, J.P.; Restrepo, J.I.; Panagiotou, M. System identification study of a 7-story full-scale building slice tested on the UCSD-NEES shake table. J. Struct. Eng. 2011, 137, 705-717.
9. Shahidi, S.G.; Pakzad, S.N. Generalized response surface model updating using time domain data. J. Struct. Eng. 2014, 140, A4014001.
10. Asgarieh, E.; Moaveni, B.; Stavridis, A. Nonlinear finite element model updating of an infilled frame based on identified time-varying modal parameters during an earthquake. J. Sound Vib. 2014, 333, 6057-6073.
11. Asgarieh, E.; Moaveni, B.; Barbosa, A.R.; Chatzi, E. Nonlinear model calibration of a shear wall building using time and frequency data features. Mech. Syst. Signal Process. 2017, 85, 236-251.
12. Meggitt, J.W.R.; Moorhouse, A.T. Finite element model updating using in-situ experimental data. J. Sound Vib. 2020, 489, 115675.
13. Rainieri, C.; Notarangelo, M.A.; Fabbrocino, G. Experiences of dynamic identification and monitoring of bridges in serviceability conditions and after hazardous events. Infrastructures 2020, 5, 86.
14. Li, Q.; Fan, J.; Nie, J.; Li, Q.; Chen, Y. Crowd-induced random vibration of footbridge and vibration control using multiple tuned mass dampers. J. Sound Vib. 2010, 329, 4068-4092.
15. Caetano, E.; Cunha, A.; Magalhães, F.; Moutinho, C. Studies for controlling human-induced vibration of the Pedro e Inês footbridge, Portugal. Part 1: Assessment of dynamic behavior. Eng. Struct. 2010, 32, 1069-1081.
16. Caetano, E.; Cunha, A.; Magalhães, F.; Moutinho, C. Studies for controlling human-induced vibration of the Pedro e Ines footbridge, Portugal. Part 2: Implementation of tuned mass dampers. Eng. Struct. 2010, 32, 1082-1091.
17. Dall'Asta, A.; Ragni, L.; Zona, A.; Nardini, L.; Salvatore, W. Design and experimental analysis of an externally prestressed steel and concrete footbridge equipped with vibration mitigation devices. J. Bridge Eng. 2016, 21, C5015001.
18. Liu, P.; Zhu, H.X.; Moaveni, B.; Yang, W.G.; Huang, S.Q. Vibration monitoring of two long-span floors equipped with tuned mass dampers. Int. J. Struct. Stab. Dyn. 2019, 19, 1950101.
19. Doebling, S.W.; Farrar, C.R.; Prime, M.B. A summary review of vibration-based damage identification methods. Shock Vib. Dig. 1998, 30, 91-105.
20. Carden, E.P.; Fanning, P. Vibration based condition monitoring: A review. Struct. Health Monit. 2004, 3, 355-377.
21. Teughels, A.; De Roeck, G. Damage detection and parameter identification by finite element model updating. Arch. Comput. Methods Eng. 2005, 12, 123-164.
22. Farrar, C.; Lieven, N. Damage prognosis: The future of structural health monitoring. Philos. Trans. A Math. Phys. Eng. Sci. 2007, 365, 623-632.
23. Fraser, M.; Elgamal, A.; He, X.; Conte, J. Sensor network for structural health monitoring of a highway bridge. J. Comput. Civ. Eng. 2009, 24, 11-24.
24. Farrar, C.R.; Worden, K. Structural Health Monitoring: A Machine Learning Perspective, 1st ed.; Wiley: Chichester, UK, 2012; pp. 1-631.
25. Limongelli, M.P.; Celebi, M. Seismic Structural Health Monitoring: From Theory to Successful Applications, 1st ed.; Springer: New York, NY, USA, 2019; pp. 1-447.
26. Lynch, J.; Loh, K. A summary review of wireless sensors and sensor networks for structural health monitoring. Shock Vib. Dig. 2006, 38, 91-128.
27. Li, J.; Mechitov, K.A.; Kim, R.E.; Spencer, B.F. Efficient time synchronization for structural health monitoring using wireless smart sensor networks. Struct. Control Health Monit. 2016, 23, 470-486.
28. Noel, A.B.; Abdaoui, A.; Elfouly, T.; Ahmed, M.H.; Badawy, A.; Shehata, M.S. Structural health monitoring using wireless sensor networks: A comprehensive survey. IEEE Commun. Surv. Tutor. 2017, 19, 1403-1423.
29. Sabato, A.; Niezrecki, C.; Fortino, G. Wireless MEMS-based accelerometer sensor boards for structural vibration monitoring: A review. IEEE Sens. J. 2017, 17, 226-235.
30. Abdulkarem, M.; Samsudin, K.; Rokhani, F.Z.; Rasid, M.F.A. Wireless sensor network for structural health monitoring: A contemporary review of technologies, challenges, and future direction. Struct. Health Monit. 2020, 19, 693-735.
31. Bastianini, F.; Matta, F.; Rizzo, A.; Galati, N.; Nanni, A. Overview of recent bridge monitoring applications using distributed Brillouin fiber optic sensors. J. Nondestruct. Test. 2007, 12, 269-276.
32. Li, S.; Wu, Z. Development of distributed long-gage fiber optic sensing system for structural health monitoring. Struct. Health Monit. 2007, 6, 133-143.
33. Kim, D.H.; Feng, M.Q. Real-time structural health monitoring using a novel fiber-optic accelerometer system. IEEE Sens. J. 2007, 7, 536-543.
34. Matta, F.; Bastianini, F.; Galati, N.; Casadei, P.; Nanni, A. Distributed strain measurement in steel bridge with fiber optic sensors: Validation through diagnostic load test. J. Perform. Constr. Facil. 2008, 22, 264-273.
35. Barrias, A.; Casas, J.R.; Villalba, S. A review of distributed optical fiber sensors for civil engineering applications. Sensors 2016, 16, 748.
36. Narasimhan, S.; Wang, Y. Noncontact sensing technologies for bridge structural health assessment. J. Bridge Eng. 2020, 25, 02020001.
37. Xia, H.; De Roeck, G.; Zhang, N.; Maeck, J. Experimental analysis of a high-speed railway bridge under Thalys trains. J. Sound Vib. 2003, 268, 103-113.
38. Nassif, H.H.; Gindy, M.; Davis, J. Comparison of laser Doppler vibrometer with contact sensors for monitoring bridge deflection and vibration. NDT E Int. 2005, 38, 213-218.
39. Rothberg, S.J.; Allen, M.S.; Castellini, P.; Di Maio, D.; Dirckx, J.J.J.; Ewins, D.J.; Halkon, B.J.; Muyshondt, P.; Paone, N.; Ryan, T.; et al. An international review of laser Doppler vibrometry: Making light work of vibration measurement. Opt. Lasers Eng. 2017, 99, 11-22.
40. Garg, P.; Moreu, F.; Ozdagli, A.; Taha, M.R.; Mascareñas, D. Noncontact dynamic displacement measurement of structures using a moving laser doppler vibrometer. J. Bridge Eng. 2019, 24, 04019089.
41. Farrar, C.R.; Darling, T.W.; Migliori, A.; Baker, W.E. Microwave interferometers for non-contact vibration measurements on large structures. Mech. Syst. Signal Process. 1999, 13, 241-253.
42. Pieraccini, M.; Parrini, F.; Fratini, M.; Atzeni, C.; Spinelli, P.; Micheloni, M. Static and dynamic testing of bridges through microwave interferometry. NDT E Int. 2007, 40, 208-214.
43. Gentile, C.; Bernardini, G. An interferometric radar for noncontact measurement of deflections on civil engineering structures: Laboratory and full-scale tests. Struct. Infrastruct. Eng. 2010, 6, 521-534.
44. Gentile, C. Deflection measurement on vibrating stay cables by non-contact microwave interferometer. NDT E Int. 2010, 43, 231-240.
45. Gentile, C.; Cabboi, A. Vibration-based structural health monitoring of stay cables by microwave remote sensing. Smart Struct. Syst. 2015, 16, 263-280.
46. Whitlow, R.D.; Haskins, R.; McComas, S.L.; Crane, C.K.; Howard, I.L.; McKenna, M.H. Remote bridge monitoring using infrasound. J. Bridge Eng. 2019, 24, 04019023.
47. Lobo-Aguilar, S.; Zhang, Z.; Jiang, Z.; Christenson, R. Infrasound-based noncontact sensing for bridge structural health monitoring. J. Bridge Eng. 2019, 24, 04019033.
48. Brown, C.J.; Karuna, R.; Ashkenazi, V.; Roberts, G.W.; Evans, R.A. Monitoring of structures using the Global Positioning System. Proc. Inst. Civil Eng. 1999, 134, 97-105.
49. Roberts, G.W.; Meng, X.; Dodson, A.H. Integrating a global positioning system and accelerometers to monitor the deflection of bridges. J. Surv. Eng. 2004, 130, 65-72.
50. Meng, X.; Dodson, A.H.; Roberts, G.W. Detecting bridge dynamics with GPS and triaxial accelerometers. Eng. Struct. 2007, 29, 3178-3184.
51. Moschas, F.; Stiros, S. Measurement of the dynamic displacements and of the modal frequencies of a short-span pedestrian bridge using GPS and an accelerometer. Eng. Struct. 2011, 33, 10-17.
52. Hoppe, E.; Bruckno, B.; Campbell, E.; Acton, S.; Vaccari, A.; Stuecheli, M.; Bohane, A.; Falorni, G.; Morgan, J. Transportation infrastructure monitoring using satellite remote sensing. In Materials and infrastructures 1; Torrenti, J.M., La Torre, F., Eds.; Wiley: Chichester, UK, 2016; Chapter 14; pp. 185-198.
53. Huang, Q.; Monserrat, O.; Crosetto, M.; Crippa, B.; Wang, Y.; Jiang, J.; Ding, Y. Displacement monitoring and health evaluation of two bridges using Sentinel-1 SAR images. Remote Sens. 2018, 10, 1714.
54. Lazecky, M.; Hlavacova, I.; Bakon, M.; Sousa, J.J.; Perissin, D.; Patricio, G. Bridge displacements monitoring using space-borne X-band SAR interferometry. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 205-210.
55. Zhu, M.; Wan, X.; Fei, B.; Qiao, Z.; Ge, C.; Minati, F.; Vecchioli, F.; Li, J.; Costantini, M. Detection of building and infrastructure instabilities by automatic spatiotemporal analysis of satellite SAR interferometry measurements. Remote Sens. 2018, 10, 1816.
56. Cavalagli, N.; Kita, A.; Falco, S.; Trillo, F.; Costantini, M.; Ubertini, F. Satellite radar interferometry and in-situ measurements for static monitoring of historical monuments: The case of Gubbio, Italy. Remote Sens. Environ. 2019, 235, 11453.
57. Hoppe, E.J.; Novali, F.; Rucci, A.; Fumagalli, A.; Del Conte, S.; Falorni, G.; Toro, N. Deformation monitoring of posttensioned bridges using high-resolution satellite remote sensing. J. Bridge Eng. 2019, 24, 04019115.
58. Psimoulis, P.A.; Stiros, S.C. Measurement of deflections and of oscillation frequencies of engineering structures using Robotic Theodolites (RTS). Eng. Struct. 2007, 29, 3312-3324.
59. Psimoulis, P.A.; Stiros, S.C. Measuring deflections of a short-span railway bridge using a robotic total station. J. Bridge Eng. 2013, 18, 182-185.
60. Forno, C.; Brown, S.; Hunt, R.A.; Kearney, A.M.; Oldfield, S. The measurement of deformation of a bridge by moirè photography and photogrammetry. Strain 1991, 27, 83-87.
61. Ri, S.; Fujigaki, M.; Morimoto, Y. Sampling moiré method for accurate small deformation distribution measurement. Exp. Mech. 2010, 50, 501-508.
62. Ri, S.; Muramatsu, T.; Saka, M.; Nanbara, K.; Kobayashi, D. Accuracy of the sampling moiré method and its application to deflection measurements of large-scale structures. Exp. Mech. 2012, 52, 331-340.
63. Kulkarni, R.; Gorthi, S.S.; Rastogi, P. Measurement of in-plane and out-of-plane displacements and strains using digital holographic moiré. J. Mod. Opt. 2014, 61, 755-762.
64. Chen, X.; Chang, C.C. In-plane movement measurement technique using digital sampling moiré method. J. Bridge Eng. 2019, 24, 04019013.
65. Sutton, M.A.; Orteu, J.J.; Schreier, H.W. Image Correlation for Shape, Motion and Deformation Measurements: Basic Concepts, Theory and Applications, 1st ed.; Springer: New York, NY, USA, 2009; pp. 1-316.
66. Kohut, P.; Holak, K. Vision-Based Monitoring System. In Advanced Structural Damage Detection, 1st ed.; Stepinski, T., Uhl, T., Staszewski, W., Eds.; Wiley: Chichester, UK, 2013; pp. 279-320.
67. Schumacher, T.; Shariati, A. Monitoring of structures and mechanical systems using virtual visual sensors for video analysis: Fundamental concept and proof of feasibility. Sensors 2013, 13, 16551-16564.
68. Ye, X.W.; Dong, C.Z.; Liu, T. A review of machine vision-based structural health monitoring: Methodologies and applications. J. Sens. 2016, 2016, 7103039.
69. Ye, X.W.; Yi, T.H.; Dong, C.Z.; Liu, T. Vision-based structural displacement measurement: System performance evaluation and influence factor analysis. Measurement 2016, 88, 372-384.
70. Baqersad, J.; Poozesh, P.; Niezrecki, C.; Avitabile, P. Photogrammetry and optical methods in structural dynamics-A review. Mech. Syst. Signal Process. 2017, 86, 17-34.
71. Feng, D.; Feng, M.Q. Computer vision for SHM of civil infrastructure: From dynamic response measurement to damage detection-A review. Eng. Struct. 2018, 156, 105-117.
72. Spencer, B.F.; Hoskere, V.; Narazaki, Y. Advances in computer vision-based civil infrastructure inspection and monitoring. Engineering 2019, 5, 199-222.
73. Dong, C.Z.; Catbas, F.N. A review of computer vision-based structural health monitoring at local and global levels. Struct. Health Monit. 2020, in press.
74. Peters, W.H.; Ranson, W.F. Digital imaging techniques in experimental stress analysis. Opt. Eng. 1982, 21, 427-431.
75. Chu, T.C.; Ranson, W.F.; Sutton, M.A. Applications of digital-image-correlation techniques to experimental mechanics. Exp. Mech. 1985, 25, 232-244.
76. Pan, B.; Qian, K.; Xie, H.; Asundi, A. Two-dimensional digital image correlation for inplane displacement and strain measurement: A review. Meas. Sci. Technol. 2009, 20, 062001.
77. Pan, B.; Li, K. A fast digital image correlation method for deformation measurement. Opt. Lasers Eng. 2011, 49, 841-847.
78. Pan, B.; Li, K.; Tong, W. Fast, robust and accurate digital image correlation calculation without redundant computations. Exp. Mech. 2013, 53, 1277-1289.
79. Pan, B. Bias error reduction of digital image correlation using Gaussian pre-filtering. Opt. Lasers Eng. 2013, 51, 1161-1167.
80. Pan, B. An evaluation of convergence criteria for digital image correlation using inverse compositional Gauss-Newton algorithm. Strain 2014, 50, 48-56.
81. Wang, Z.; Kieu, H.; Nguyen, H.; Le, M. Digital image correlation in experimental mechanics and image registration in computer vision: Similarities, differences and complements. Opt. Lasers Eng. 2015, 65, 18-27.
82. Pan, B.; Wang, B. Digital image correlation with enhanced accuracy and efficiency: A comparison of two subpixel registration algorithms. Exp. Mech. 2016, 56, 1395-1409.
83. Zhong, F.; Quan, C. Efficient digital image correlation using gradient orientation. Opt. Laser Technol. 2018, 106, 417-426.
84. Mathworks MATLAB Computer Vision Toolbox. Available online: https://mathworks.com/products/computer-vision.html (accessed on 29 October 2020).
85. Dantec Dynamics, Laser Optical Measurements Systems and Sensors. Available online: https://www.dantecdynamics.com/ (accessed on 29 October 2020).
86. Correlated Solutions, Leaders in Non-Contact Measurements Solutions. Available online: https://www.correlatedsolutions.com/ (accessed on 29 October 2020).
87. IMETRUM Non-Contact Precision Measurement. Available online: https://www.imetrum.com/ (accessed on 29 October 2020).
88. Liu, C.; Torralba, A.; Freeman, W.T.; Durand, F.; Adelson, E.H. Motion magnification. ACM Trans. Graphics 2005, 24, 519-526.
89. Wu, H.Y.; Rubinstein, M.; Shih, E.; Guttag, J.V.; Durand, F.; Freeman, W.T. Eulerian video magnification for revealing subtle changes in the world. ACM Trans. Graphics 2012, 31, 1-8.
90. Wadhwa, N.; Rubinstein, M.; Durand, F.; Freeman, W.T. Phase-based video motion processing. ACM Trans. Graphics 2013, 32, 80.
91. Davis, A.; Rubinstein, M.; Wadhwa, N.; Mysore, G.; Durand, F.; Freeman, W. The visual microphone: Passive recovery of sound from video. ACM Trans. Graphics 2014, 33, 79.
92. Ngo, A.C.L.; Phan, R.C.W. Seeing the invisible: Survey of video motion magnification and small motion analysis. ACM Comput. Surv. 2019, 52, 114.
93. Harmanci, Y.E.; Gülan, U.; Holzner, M.; Chatzi, E. A novel approach for 3D-structural identification through video recording: Magnified tracking. Sensors 2019, 19, 1229.
94. Wang, Y.Q.; Sutton, M.A.; Bruck, H.A.; Schreier, H.W. Quantitative error assessment in pattern matching: Effects of intensity pattern noise, interpolation, strain and image contrast on motion measurements. Strain 2009, 45, 160-178.
95. Bornert, M.; Brémand, F.; Doumalin, P.; Dupré, J.C.; Fazzini, M.; Grédiac, M.; Hild, F.; Mistou, S.; Molimard, J.; Orteu, J.J.; et al. Assessment of digital image correlation measurement errors: Methodology and results. Exp. Mech. 2009, 49, 353-370.
96. Amiot, F.; Bornert, M.; Doumalin, P.; Dupré, J.C.; Fazzini, M.; Orteu, J.J.; Poilâne, C.; Robert, L.; Rotinat, R.; Toussaint, E.; et al. Assessment of digital image correlation measurement accuracy in the ultimate error regime: Main results of a collaborative benchmark. Strain 2013, 49, 483-496.
97. D'Emilia, G.; Razzè, L.; Zappa, E. Uncertainty analysis of high frequency image-based vibration measurements. Measurement 2013, 46, 2630-2637.
98. Zappa, E.; Mazzoleni, P.; Matinmanesh, A. Uncertainty assessment of digital image correlation method in dynamic applications. Opt. Lasers Eng. 2014, 56, 140-151.
99. Zappa, E.; Matinmanesh, A.; Mazzoleni, P. Evaluation and improvement of digital image correlation uncertainty in dynamic conditions. Opt. Lasers Eng. 2014, 59, 82-92.
100. Mazzoleni, P.; Matta, F.; Zappa, E.; Sutton, M.A.; Cigada, A. Gaussian pre-filtering for uncertainty minimization in digital image correlation using numerically-designed speckle patterns. Opt. Lasers Eng. 2015, 66, 19-33.
101. Mazzoleni, P.; Zappa, E.; Matta, F.; Sutton, M.A. Thermo-mechanical toner transfer for high-quality digital image correlation speckle patterns. Opt. Lasers Eng. 2015, 75, 72-80.
102. Liu, C.; Yuan, Y.; Zhang, M. Uncertainty analysis of displacement measurement with Imetrum Video Gauge. ISA Trans. 2016, 65, 547-555.
103. Gao, Z.; Xu, X.; Su, Y.; Zhang, Q. Experimental analysis of image noise and interpolation bias in digital image correlation. Opt. Lasers Eng. 2016, 81, 46-53.
104. Blaysat, B.; Grédiac, M.; Sur, F. On the propagation of camera sensor noise to displacement maps obtained by DIC-An experimental study. Exp. Mech. 2016, 56, 919-944.
105. Gao, Z.; Zhang, Q.; Su, Y.; Wu, S. Accuracy evaluation of optical distortion calibration by digital image correlation. Opt. Lasers Eng. 2017, 98, 143-152.
106. Xu, X.; Su, Y.; Zhang, Q. Theoretical estimation of systematic errors in local deformation measurements using digital image correlation. Opt. Lasers Eng. 2017, 88, 265-279.
107. Su, Y.; Gao, Z.; Zhang, Q.; Wu, S. Spatial uncertainty of measurement errors in digital image correlation. Opt. Lasers Eng. 2018, 110, 113-121.
108. Sutton, M.A.; Wolters, W.J.; Peters, W.H.; Ranson, W.F.; McNeill, S.R. Determination of displacements using an improved digital correlation method. Image Vision Comput. 1983, 1, 133-139.
109. Sutton, M.A.; Cheng, M.; Peters, W.H.; Chao, Y.J.; McNeill, S.R. Application of an optimized digital correlation method to planar deformation analysis. Image Vision Comput. 1986, 4, 143-150.
110. Lee, J.J.; Shinozuka, M. Real-time displacement measurement of a flexible bridge using digital image processing techniques. Exp. Mech. 2006, 46, 105-114.
111. Yoneyama, S.; Kitagawa, A.; Iwata, S.; Tani, K.; Kikuta, H. Bridge deflection measurement using digital image correlation. Exp. Tech. 2007, 31, 34-40.
112. Park, J.W.; Lee, J.J.; Jung, H.J.; Myung, H. Vision-based displacement measurement method for high-rise building structures using partitioning approach. NDT E Int. 2010, 43, 642-647.
113. Peddle, J.; Goudreau, A.; Carlson, E.; Santini-Bell, E. Bridge displacement measurement through digital image correlation. Bridge Struct. 2011, 7, 165-173.
114. Sładek, J.; Ostrowska, K.; Kohut, P.; Holak, K.; Gaska, A.; Uhl, T. Development of a vision based deflection measurement system and its accuracy assessment. Measurement 2013, 46, 1237-1249.
115. Park, S.W.; Park, H.S.; Kim, J.H.; Adeli, H. 3D displacement measurement model for health monitoring of structures using a motion capture system. Measurement 2015, 59, 352-362.
116. Quan, C.; Tay, C.J.; Sun, W.; He, X. Determination of three-dimensional displacement using two-dimensional digital image correlation. Appl. Opt. 2008, 47, 583-593.
117. Yoneyama, S.; Ueda, H. Bridge deflection measurement using digital image correlation with camera movement correction. Mater. Trans. 2012, 53, 285-290.
118. Hoult, N.A.; Take, W.A.; Lee, C.; Dutton, M. Experimental accuracy of two dimensional strain measurements using digital image correlation. Eng. Struct. 2013, 46, 718-726.
119. Gencturk, B.; Hossain, K.; Kapadia, A.; Labib, E.; Mo, Y.L. Use of digital image correlation technique in full-scale testing of prestressed concrete structures. Measurement 2014, 7, 505-515.
120. Ghorbani, R.; Matta, F.; Sutton, M.A. Full-field deformation measurement and crack mapping on confined masonry walls using digital image correlation. Exp. Mech. 2015, 55, 227-243.
121. Almeida Santos, C.; Oliveira Costa, C.; Batista, J. A vision-based system for measuring the displacements of large structures: Simultaneous adaptive calibration and full motion estimation. Mech. Syst. Signal Process. 2016, 72-73, 678-694.
122. Shan, B.; Wang, L.; Huo, X.; Yuan, W.; Xue, Z. A bridge deflection monitoring system based on CCD. Adv. Mater. Sci. Eng. 2016, 4857373.
123. Pan, B.; Tian, L.; Song, X. Real-time, non-contact and targetless measurement of vertical deflection of bridges using off-axis digital image correlation. NDT E Int. 2016, 79, 73-80.
124. Lee, J.; Lee, K.C.; Cho, S.; Sim, S.H. Computer vision-based structural displacement measurement robust to light-induced image degradation for in-service bridges. Sensors 2017, 17, 2317.
125. Park, J.W.; Moon, D.S.; Yoon, H.; Gomez, F.; Spencer, B.F.; Kim, J.R. Visual-inertial displacement sensing using data fusion of vision-based displacement with acceleration. Struct. Control Health Monit. 2018, 25, e2122.
126. Alipour, M.; Washlesky, S.J.; Harris, D.K. Field deployment and laboratory evaluation of 2D digital image correlation for deflection sensing in complex environments. J. Bridge Eng. 2019, 24, 04019010.
127. Carmo, R.N.F.; Valença, J.; Bencardino, F.; Cristofaro, S.; Chiera, D. Assessment of plastic rotation and applied load in reinforced concrete, steel and timber beams using image-based analysis. Eng. Struct. 2019, 198, 109519.
128. Halding, P.S.; Christensen, C.O.; Schmidt, J.W. Surface rotation correction and strain precision of wide-angle 2D DIC for field use. J. Bridge Eng. 2019, 24, 04019008.
129. Lee, J.; Lee, K.C.; Jeong, S.; Lee, Y.J.; Sim, S.H. Long-term displacement measurement of full-scale bridges using camera ego-motion compensation. Mech. Syst. Signal Process. 2020, 140, 106651.
130. Dong, C.Z.; Celik, O.; Catbas, F.N.; O'Brien, E.J.; Taylor, S. Structural displacement monitoring using deep learning-based full field optical flow methods. Struct. Infrastruct. Eng. 2020, 16, 51-71.
131. Schmidt, T.; Tyson, J.; Galanulis, K. Full-field dynamic displacement and strain measurement using advanced 3D image correlation photogrammetry: Part 1. Exp. Tech. 2003, 27, 47-50.
132. Chang, C.C.; Ji, Y.F. Flexible videogrammetric technique for three-dimensional structural vibration measurement. J. Eng. Mech. 2007, 133, 656-664.
133. Jurjo, D.L.B.R.; Magluta, C.; Roitman, N.; Gonçalves, P.B. Experimental methodology for the dynamic analysis of slender structures based on digital image processing techniques. Mech. Syst. Signal Process. 2010, 24, 1369-1382.
134. Choi, H.S.; Cheung, J.H.; Kim, S.H.; Ahn, J.H. Structural dynamic displacement vision system using digital image processing. NDT E Int. 2011, 44, 597-608.
135. Yang, Y.S.; Huang, C.W.; Wu, C.L. A simple image-based strain measurement method for measuring the strain fields in an RC-wall experiment. Earthq. Eng. Struct. Dyn. 2012, 41, 1-17.
136. Wang, W.; Mottershead, J.E.; Siebert, T.; Pipino, A. Frequency response functions of shape features from full-field vibration measurements using digital image correlation. Mech. Syst. Signal Process. 2012, 28, 333-347.
137. Mas, D.; Espinosa, J.; Roig, A.B.; Ferrer, B.; Perez, J.; Illueca, C. Measurement of wide frequency range structural microvibrations with a pocket digital camera and sub-pixel techniques. Appl. Opt. 2012, 51, 2664-2671.
138. Wu, L.J.; Casciati, F.; Casciati, S. Dynamic testing of a laboratory model via vision-based sensing. Eng. Struct. 2014, 60, 113-125.
139. Feng, D.M.; Feng, M.Q. Vision-based multi-point displacement measurement for structural health monitoring. Struct. Control Health Monit. 2015, 23, 876-890.
140. Chen, J.G.; Wadhwa, N.; Cha, Y.J.; Durand, F.; Freeman, W.T.; Buyukozturk, O. Modal identification of simple structures with high-speed video using motion magnification. J. Sound Vib. 2015, 345, 58-71.
141. Oh, B.K.; Hwang, J.W.; Kim, Y.; Cho, T.; Park, H.S. Vision-based system identification technique for building structures using a motion capture system. J. Sound Vib. 2015, 356, 72-85.
142. Lei, X.; Jin, Y.; Guo, J.; Zhu, C.A. Vibration extraction based on fast NCC algorithm and high-speed camera. Appl. Opt. 2015, 54, 8198-8206.
143. Zheng, F.; Shao, L.; Racic, V.; Brownjohn, J. Measuring human-induced vibrations of civil engineering structures via vision-based motion tracking. Measurement 2016, 83, 44-56.
144. McCarthy, D.M.J.; Chandler, J.H.; Palmieri, A. Monitoring 3D vibrations in structures using high-resolution blurred imagery. Photogramm. Rec. 2016, 31, 304-324.
145. Yoon, H.; Elanwar, H.; Choi, H.; Golparvar-Fard, M.; Spencer, B.F. Target-free approach for vision-based structural system identification using consumer-grade cameras. Struct. Control Health Monit. 2016, 23, 1405-1416.
146. Mas, D.; Ferrer, B.; Acevedo, P.; Espinosa, J. Methods and algorithms for video-based multi-point frequency measuring and mapping. Measurement 2016, 85, 164-174.
147. Poozesh, P.; Sarrafi, A.; Mao, Z.; Avitabile, P.; Niezrecki, C. Feasibility of extracting operating shapes using phase-based motion magnification technique and stereo-photogrammetry. J. Sound Vib. 2017, 407, 350-366.
148. Khuc, T.; Catbas, F.N. Structural identification using computer vision-based bridge health monitoring. J. Struct. Eng. 2018, 144, 04017202.
149. Yang, Y.; Dorn, C.; Mancini, T.; Talken, Z.; Nagarajaiah, S.; Kenyon, G.; Farrar, C.; Mascareñas, D. Blind identification of full-field vibration modes of output-only structures from uniformly-sampled, possibly temporally-aliased (sub-Nyquist), video measurements. J. Sound Vib. 2017, 390, 232-256.
150. Feng, D.; Feng, M.Q. Identification of structural stiffness and excitation forces in time domain using noncontact vision-based displacement measurement. J. Sound Vib. 2017, 406, 15-28.
151. Cha, Y.J.; Chen, J.G.; Büyüköztürk, O. Output-only computer vision based damage detection using phase-based optical flow and unscented Kalman filters. Eng. Struct. 2017, 132, 300-313.
152. Javh, J.; Slavič, J.; Boltežar, M. The subpixel resolution of optical-flow-based modal analysis. Mech. Syst. Signal Process. 2017, 88, 89-99.
153. Xu, F. Accurate measurement of structural vibration based on digital image processing technology. Concurr. Comput. Pract. Exp. 2019, 31, e4767.
154. Dong, C.Z.; Ye, X.W.; Jin, T. Identification of structural dynamic characteristics based on machine vision technology. Measurement 2018, 126, 405-416.
155. Guo, J.; Jiao, J.; Fujita, K.; Takewaki, I. Damage identification for frame structures using vision-based measurement. Eng. Struct. 2019, 199, 109634.
156. Hosseinzadeh, A.Z.; Harvey, P.S. Pixel-based operating modes from surveillance videos for structural vibration monitoring: A preliminary experimental study. Measurement 2019, 148, 106911.
157. Kuddus, M.A.; Li, J.; Hao, H.; Li, C.; Bi, K. Target-free vision-based technique for vibration measurements of structures subjected to out-of-plane movements. Eng. Struct. 2019, 190, 210-222.
158. Durand-Texte, T.; Simonetto, E.; Durand, S.; Melon, M.; Moulet, M.H. Vibration measurement using a pseudo-stereo system, target tracking and vision methods. Mech. Syst. Signal Process. 2019, 118, 30-40.
159. Civera, M.; Zanotti, F.L.; Surace, C. An experimental study of the feasibility of phase-based video magnification for damage detection and localisation in operational deflection shapes. Strain 2020, 56, e12336.
160. Eick, B.A.; Narazaki, Y.; Smith, M.D.; Spencer, B.F. Vision-based monitoring of post-tensioned diagonals on miter lock gate. J. Struct. Eng. 2020, 146, 04020209.
161. Lai, Z.; Alzugaray, I.; Chli, M.; Chatzi, E. Full-field structural monitoring using event cameras and physics-informed sparse identification. Mech. Syst. Signal Process. 2020, 145, 106905.
162. Ngeljaratan, L.; Moustafa, M.A. Structural health monitoring and seismic response assessment of bridge structures using target-tracking digital image correlation. Eng. Struct. 2020, 213, 110551.
163. Stephen, G.A.; Brownjohn, J.M.W.; Taylor, C.A. Measurements of static and dynamic displacement from visual monitoring of the Humber Bridge. Eng. Struct. 1993, 15, 197-208.
164. Olaszek, P. Investigation of the dynamic characteristic of bridge structures using a computer vision method. Measurement 1999, 25, 227-236.
165. Wahbeh, A.M.; Caffrey, J.P.; Masri, S.F. A vision-based approach for the direct measurement of displacements in vibrating systems. Smart Mater. Struct. 2003, 12, 785-794.
166. Lee, J.J.; Shinozuka, M. A vision-based system for remote sensing of bridge displacement. NDT E Int. 2006, 39, 425-431.
167. Ji, Y.F.; Chang, C.C. Nontarget image-based technique for small cable vibration measurement. J. Bridge Eng. 2008, 13, 34-42.
168. Chang, C.C.; Xiao, X.H. An integrated visual-inertial technique for structural displacement and velocity measurement. Smart Struct. Syst. 2010, 6, 1025-1039.
169. Fukuda, Y.; Feng, M.Q.; Shinozuka, M. Cost-effective vision-based system for monitoring dynamic response of civil engineering structures. Struct. Control Health Monit. 2010, 17, 918-936.
170. Caetano, E.; Silva, S.; Bateira, J. A vision system for vibration monitoring of civil engineering structures. Exp. Tech. 2011, 4, 74-82.
171. Mazzoleni, P.; Zappa, E. Vision-based estimation of vertical dynamic loading induced by jumping and bobbing crowds on civil structures. Mech. Syst. Signal Process. 2012, 33, 1-12.
172. Ye, X.W.; Ni, Y.Q.; Wai, T.T.; Wong, K.Y.; Zhang, X.M.; Xu, F. A vision-based system for dynamic displacement measurement of long-span bridges: Algorithm and verification. Smart Struct. Syst. 2013, 12, 363-379.
173. Kim, S.W.; Kim, N.S. Dynamic characteristics of suspension bridge hanger cables using digital image processing. NDT E Int. 2013, 59, 25-33.
174. Kohut, P.; Holak, K.; Uhl, T.; Ortyl, Ł.; Owerko, T.; Kuras, P.; Kocierz, R. Monitoring of a civil structure's state based on noncontact measurements. Struct. Health Monit. 2013, 12, 411-429.
175. Ribeiro, D.; Calcada, R.; Ferreira, J.; Martins, T. Non-contact measurement of the dynamic displacement of railway bridges using an advanced video-based system. Eng. Struct. 2014, 75, 164-180.
176. Busca, G.; Cigada, A.; Mazzoleni, P.; Zappa, E. Vibration monitoring of multiple bridge points by means of a unique vision-based measuring system. Exp. Mech. 2014, 54, 255-271.
177. Feng, M.Q.; Fukuda, Y.; Feng, D.; Mizuta, M. Nontarget vision sensor for remote measurement of bridge dynamic response. J. Bridge Eng. 2015, 20, 04015023.
178. Feng, D.; Feng, M. Model updating of railway bridge using in situ dynamic displacement measurement under trainloads. J. Bridge Eng. 2015, 20, 04015019.
179. Bartilson, D.T.; Wieghaus, K.T.; Hurlebaus, S. Target-less computer vision for traffic signal structure vibration studies. Mech. Syst. Signal Process. 2015, 60-61, 571-582.
180. Ferrer, B.; Mas, D.; García-Santos, J.I.; Luzi, G. Parametric study of the errors obtained from the measurement of the oscillating movement of a bridge using image processing. J. Nondestruct. Eval. 2016, 35, 53.
181. Guo, J.; Zhu, C. Dynamic displacement measurement of large-scale structures based on the Lucas-Kanade template tracking algorithm. Mech. Syst. Signal Process. 2016, 66-67, 425-436.
182. Ye, X.W.; Dong, C.Z.; Liu, T. Image-based structural dynamic displacement measurement using different multi-object tracking algorithms. Smart Struct. Syst. 2016, 17, 935-956.
183. Shariati, A.; Schumacher, T. Eulerian-based virtual visual sensors to measure dynamic displacements of structures. Struct. Control. Health Monit. 2017, 24, e1977.
184. Khuc, T.; Catbas, F.N. Completely contactless structural health monitoring of real-life structures using cameras and computer vision. Struct. Control Health Monit. 2017, 24, e1852.
185. Khuc, T.; Catbas, F.N. Computer vision-based displacement and vibration monitoring without using physical target on structures. Struct. Infrastruct. Eng. 2017, 13, 505-516.
186. Feng, D.; Scarangello, T.; Feng, M.Q.; Ye, Q. Cable tension force estimate using novel noncontact vision-based sensor. Measurement 2017, 99, 44-52.
187. Feng, D.; Feng, M.Q. Experimental validation of cost-effective vision-based structural health monitoring. Mech. Syst. Signal Process. 2017, 88, 199-211.
188. Chen, J.G.; Davis, A.; Wadhwa, N.; Durand, F.; Freeman, W.T.; Büyüköztürk, O. Video camera-based vibration measurement for civil infrastructure applications. J. Infrastruct. Syst. 2017, 23, B4016013-1.
189. Chen, J.G.; Adams, T.M.; Sun, H.; Bell, E.S.; Büyüköztürk, O. Camera-based vibration measurement of the World War I Memorial Bridge in Portsmouth, New Hampshire. J. Struct. Eng. 2018, 144, 04018207.
190. Harvey, P.S.; Elisha, G. Vision-based vibration monitoring using existing cameras installed within a building. Struct. Control Health Monit. 2018, 25, e2235.
191. Fioriti, V.; Roselli, I.; Tatì, A.; Romano, R.; De Canio, G. Motion magnification analysis for structural monitoring of ancient constructions. Measurement 2018, 129, 375-380.
192. Acikgoz, S.; DeJong, M.J.; Kechavarzi, C.; Soga, K. Dynamic response of a damaged masonry rail viaduct: Measurement and interpretation. Eng. Struct. 2018, 168, 544-558.
193. Xu, Y.; Brownjohn, J.; Kong, D. A non-contact vision-based system for multipoint displacement monitoring in a cable-stayed footbridge. Struct. Control Health Monit. 2018, 25, e2155.
194. Xu, Y.; Brownjohn, J.M.W.; Huseynov, F. Accurate deformation monitoring on bridge structures using a cost-effective sensing system combined with a camera and accelerometers: Case study. J. Bridge Eng. 2019, 24, 05018014.
195. Dhanasekar, M.; Prasad, P.; Dorji, J.; Zahra, T. Serviceability assessment of masonry arch bridges using digital image correlation. J. Bridge Eng. 2019, 24, 04018120.
196. Lydon, D.; Lydon, M.; Taylor, S.; Martinez Del Rincon, J.; Hester, D.; Brownjohn, J. Development and field testing of a vision-based displacement system using a low cost wireless action camera. Mech. Syst. Signal Process. 2019, 121, 343-358.
197. Hoskere, V.; Park, J.W.; Yoon, H.; Spencer, B.F. Vision-based modal survey of civil infrastructure using unmanned aerial vehicles. J. Struct. Eng. 2019, 145, 04019062.
198. Dong, C.Z.; Celik, O.; Catbas, F.N. Marker-free monitoring of the grandstand structures and modal identification using computer vision methods. Struct. Health Monit. 2019, 18, 1491-1509.
199. Dong, C.Z.; Bas, S.; Catbas, F.N. Investigation of vibration serviceability of a footbridge using computer vision-based methods. Eng. Struct. 2020, 224, 111224.
200. Fradelos, Y.; Thalla, O.; Biliani, I.; Stiros, S. Study of lateral displacements and the natural frequency of a pedestrian bridge using low-cost cameras. Sensors 2020, 20, 3217.
201. Fukuda, Y.; Feng, M.Q.; Narita, Y.; Kaneko, S.; Tanaka, T. Vision-based displacement sensor for monitoring dynamic response using robust object search algorithm. IEEE Sens. J. 2013, 13, 4725-4732.
202. Feng, D.; Feng, M.Q.; Ozer, E.; Fukuda, Y. A vision-based sensor for noncontact structural displacement measurement. Sensors 2015, 15, 16557-16575.
203. Zhang, D.; Guo, J.; Lei, X.; Zhu, C. A high-speed vision-based sensor for dynamic vibration analysis using fast motion extraction algorithms. Sensors 2016, 16, 572.
204. Choi, I.; Kim, J.H.; Kim, D. A target-less vision-based displacement sensor based on image convex hull optimization for measuring the dynamic response of building structures. Sensors 2016, 16, 2085.
205. Hu, Q.; He, S.; Wang, S.; Liu, Y.; Zhang, Z.; He, L.; Wang, F.; Cai, Q.; Shi, R.; Yang, Y. A high-speed target-free vision-based sensor for bus rapid transit viaduct vibration measurements using CMT and ORB algorithms. Sensors 2017, 17, 1305.
206. Luo, L.; Feng, M.Q.; Wu, Z.Y. Robust vision sensor for multi-point displacement monitoring of bridges in the field. Eng. Struct. 2018, 163, 255-266.
207. Erdogan, Y.S.; Ada, M. A computer-vision based vibration transducer scheme for structural health monitoring applications. Smart Mater. Struct. 2020, 29, 085007.
208. Park, J.H.; Huynh, T.C.; Choi, S.H.; Kim, J.T. Vision-based technique for bolt-loosening detection in wind turbine tower. Wind. Struct. 2015, 21, 709-726.
209. Poozesh, P.; Baqersad, J.; Niezrecki, C.; Avitabile, P.; Harvey, E.; Yarala, R. Large-area photogrammetry based testing of wind turbine blades. Mech. Syst. Signal Process. 2017, 86, 98-115.
210. Sarrafi, A.; Mao, Z.; Niezrecki, C.; Poozesh, P. Vibration-based damage detection in wind turbine blades using phase-based motion estimation and motion magnification. J. Sound Vib. 2018, 421, 300-318.
211. Poozesh, P.; Sabato, A.; Sarrafi, A.; Niezrecki, C.; Avitabile, P.; Yarala, R. Multicamera measurement system to evaluate the dynamic response of utility-scale wind turbine blades. Wind Energy 2020, 23, 1619-1639.
Alessandro Zona
School of Architecture and Design, University of Camerino, 63100 Ascoli Piceno, Italy
© 2021. This work is licensed under http://creativecommons.org/licenses/by/3.0/ (the "License").
Abstract
Contactless structural monitoring has in recent years seen a growing number of applications in civil engineering. Indeed, the elimination of physical installations of sensors is very attractive, especially for structures that might not be easily or safely accessible, yet require the experimental evaluation of their conditions, for example following extreme events such as strong earthquakes, explosions, and floods. Among contactless technologies, vision-based monitoring is possibly the solution that has attracted most of the interest of civil engineers, given that the advantages of contactless monitoring can potentially be obtained through simple and low-cost consumer-grade instrumentation. The objective of this review article is to provide an introductory discussion of the latest applications of vision-based vibration monitoring of structures and infrastructures through an overview of the results achieved in full-scale field tests, as documented in the published technical literature. In this way, engineers new to vision-based monitoring and stakeholders interested in the possibilities of contactless monitoring in civil engineering can obtain an outline of up-to-date achievements to support a first evaluation of the feasibility and convenience of future monitoring tasks.