Introduction
Surface ocean currents contribute to the characterization of the Earth's climate. Knowledge of ocean surface velocities is a key and cross-cutting issue that impacts many societal challenges far beyond the research context of geophysical fluid dynamics. As such, ocean surface currents have been included in the list of essential climate variables. Indeed, ocean currents transport and redistribute heat, dissolved salts, sediments, plankton, nutrients and ocean pollutants. Strong ocean currents define corridors used by marine mammals, birds and fish and sustain their migration in search of food, breeding sites and spawning areas. As a result, knowledge of the detailed structure and variability of ocean currents is required for fisheries and environmental management. Furthermore, surface currents directly affect many important socioeconomic activities such as global maritime trade and shipping, as well as issues such as marine pollution and safety, to mention a few.
Ocean surface currents are the result of a nontrivial combination of different types of periodic and aperiodic phenomena whose ranges span a continuous spectrum of space scales and timescales, from basin-wide motions (~1000 km) to fast narrow currents and mesoscale eddies (30–100 km wide), submesoscale features (1–10 km), and quasi-three-dimensional turbulence scales (1–100 m). Due to the complexity of the currents' power spectra, the meaning and representativeness of any velocity average (and the corresponding residual current) are a function of the averaging period and region, as well as of its time and location.
The technologies to observe ocean currents have progressed in parallel to the history of ocean research. The first measurements were already undertaken during the HMS Challenger expedition (1872–1876). For several decades, the main source of information about ocean currents had been ship-drift reports. Using about 4 million ship-drift observations, annual and monthly mean surface currents were calculated on a 2° × 5° grid. The resulting charts served to identify large gaps in the international databases, especially after the Second World War.
Although mechanical current meters have been used since the 1920s, their extended use by the oceanographic community started in the 1960s thanks to the improved design, accuracy and reliability of rotor-type current meters and the commercialization of modern acoustic Doppler current meters. Simultaneously, attempts were made to infer deep ocean velocities by tracking drifting devices that exploit the sound fixing and ranging (SOFAR) channel, located at around 1200 m depth.
Current distribution of the global drifter array. Map regularly updated by NOAA.
[Figure omitted. See PDF]
Summary of current observations from moorings and met-ocean buoys. Map available from the Woods Hole Oceanographic Institution.
[Figure omitted. See PDF]
The next major breakthrough was the launch of altimeter missions such as TOPEX/Poseidon and ERS-1/2 in the early 1990s. Taking advantage of the precise measurements of sea level, global, near-real-time maps of geostrophic velocities were derived for the first time on scales of several hundred kilometers and 5–10 days. Finally, it has been demonstrated that surface ocean currents can be directly measured using the Doppler effect, i.e., the frequency shift of an emitted electromagnetic wave due to the relative motion between the emitter and the sea surface. This phenomenon is being exploited to retrieve current information both from satellite measurements provided by synthetic aperture radar (SAR) and from coastal HF radar systems.
Anticipating the goal of this review, today's ocean velocity observing systems can be divided according to their regional extent: global and coastal.
On the global scale, the observations provided by mooring instruments are located mostly near and along the coasts, particularly in the Northern Hemisphere.
No global simulation of the ocean circulation currently assimilates ocean surface current observations. The main reason is the shortness of the records of direct retrievals of surface currents on global scales. As stated in the previous paragraph, long series of global surface current maps have been derived from altimeters, drifters and surface winds. However, most of that information is already being directly assimilated (at a daily rate) in global simulations, providing constraining boundary conditions on the ocean circulation. As the mesoscale is not well captured by these derived velocity maps, little improvement (if any) would be expected from their assimilation in global simulations. On regional scales, most of the assimilation efforts have focused on assimilating in situ observations of currents derived from acoustic Doppler profilers and surface drifters. In the context of remotely sensed velocity fields, the errors of the surface currents in a simulation of the Indian Ocean have been reduced by assimilating 5-day OSCAR currents. More recently, it has been found that adding OSCAR velocities to an assimilation scheme did not improve the forecasting skill obtained when drifters were assimilated alone. One of the reasons pointed out was the low-frequency sampling (5 days) of the OSCAR currents, together with the variable coverage of the satellite data used to derive OSCAR.
In coastal regions, the observation of surface currents has evolved differently because such an effort is driven by the need for risk assessment, environmental monitoring of marine protected areas and marine security. Together with in situ moored current meters, the use of HF radar systems in coastal areas has rapidly increased after the first decade of this century. Coastal HF radar has been shown to be able to resolve rapid changes. However, although the number of HF radars has increased, their coverage remains limited. Drifters can also be deployed in coastal zones, although their coverage remains sparse due to the elevated risk of beaching and/or equipment loss.
Contrary to the case of global and regional assimilation experiments, a number of studies have been conducted to assess the advantages of assimilating remotely sensed ocean currents in coastal simulations, as the number of coastal HF radars has increased in areas of strong economic activity.
As a kind of synthesis, the diagram in Fig.  illustrates how different components of the ocean observing system capture different parts of the range of processes associated with surface ocean currents. As such, a combination of direct measurements of surface currents by satellite and coastal HF radar is a promising approach to cope with both the resolution and fast dynamics characteristic of coastal areas and the mesoscale and slower evolution of surface currents in the open ocean. As stated before, direct measurements of surface currents by satellites remain quite limited. This situation has prompted the development of various indirect methods, either by imposing dynamical constraints on sea surface temperature (SST) images or by applying pattern recognition techniques such as neural networks or the maximum cross-correlation (MCC) technique. A better understanding of the dynamics in the upper layers of the ocean has allowed the proposal of a new framework based on the surface quasi-geostrophic (SQG) equations, which is able to retrieve sea surface currents from a single SST image. These methods open the way to developing techniques for the direct assimilation of sea surface currents into general ocean forecasting systems, a question that, as commented above, has not yet impacted dynamic predictions, except for coastal radar applications.
Spatio-temporal coverage by different technologies to measure sea surface currents. Adapted from the specifications sheet provided by the Global Ocean Observing System (GOOS), available at http://www.goosocean.org/components/com_oe/oe.php?task=.
[Figure omitted. See PDF]
The aim of this paper is to review two aspects of remote sensing of ocean surface currents. On the one hand, we review the different approaches that can be used to produce estimates of sea surface currents from remote sensing data (Sects.  and ). On the other hand, we review the advances in the assimilation of sea surface currents, specifically centered on HF radar in coastal regions, which is, up to now, the only source of remotely sensed current measurements (Sect. ). It is expected that the experience gained and the lessons learned from assimilating currents from HF radar can be translated, and applied, to global data assimilation systems if real-time, quasi-synoptic maps of ocean currents become available, either from incoming satellite missions (e.g., SKIM, DopSCAT, SeaStar) or derived from the methods reviewed in Sect. 2.
The structure of the paper is as follows. Section 2 provides an overview of the available approximations to retrieve currents from existing satellite observations. In particular, Sect. 2.1 reviews the retrieval of geostrophic velocities from sea level measurements. Section 2.2 is devoted to the analysis of the complex upper layer dynamics, taking into account all the elements of the ocean–atmosphere interaction such as wind and waves. In Sect. 2.3 we introduce the geometrical approaches used to infer sea surface velocity fields from the turbulent structure of the sea surface, as seen from multiple satellite sensors. Section 2.4 reviews the latest developments and the requirements to infer the sea surface velocity fields by inverting the potential vorticity field applied to a single image. Section 3 focuses on the basic principles and sampling characteristics of coastal HF radar, while Sect. 4 reviews the attempts and limitations of the different assimilation techniques applied to HF radar observations: nudging, sequential and four-dimensional variational (4DVAR) methods. Finally, Sect. 5 provides a discussion about potential candidates to bridge the gap between global and coastal remote sensing of ocean currents.
Retrieval from satellite observations
On large scales, Earth's rotation dominates the dynamics of ocean currents. However, the inertial contribution becomes increasingly important as the scales of the flow reduce or the flow curvature grows. This motivates the introduction of the Rossby number, Ro = U/(fL), where U, L and f represent the characteristic velocity, the characteristic length scale and the Coriolis parameter, respectively. Ro measures the relative importance of the advective and the Coriolis terms in the momentum equation. At small values of Ro, and without other sources of momentum such as wind and waves, the flow is close to geostrophic balance, implying an equilibrium between the Coriolis and pressure forces. Then, ocean currents can simply be derived from pressure measurements (or density, or sea surface height) by invoking the geostrophic approximation. Ageostrophic contributions to ocean currents have two different sources: wind and waves on one side, and the departure from the geostrophic approximation due to larger values of Ro on the other. It is worth mentioning that, although the geostrophic and ageostrophic contributions can conceptually be separated, any measurement of the ocean current is the result of all the contributions, making it difficult to assess the relative contribution of each one and the accuracy of the estimations.
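As an illustration of this scale dependence, the short sketch below evaluates Ro = U/(fL) for typical mesoscale and submesoscale parameter values; the function name and the numerical examples are illustrative choices, not taken from the original work:

```python
import math

def rossby_number(U, L, lat_deg):
    """Rossby number Ro = U / (f L) for a flow of speed U (m/s) and
    horizontal length scale L (m) at latitude lat_deg (degrees)."""
    omega = 7.2921e-5  # Earth's rotation rate (rad/s)
    f = 2.0 * omega * math.sin(math.radians(lat_deg))  # Coriolis parameter
    return U / (abs(f) * L)

# Mesoscale eddy at mid-latitudes: U ~ 0.1 m/s, L ~ 50 km -> Ro << 1 (near-geostrophic)
print(round(rossby_number(0.1, 50e3, 45.0), 3))
# Submesoscale front: U ~ 0.5 m/s, L ~ 5 km -> Ro ~ O(1) (geostrophy breaks down)
print(round(rossby_number(0.5, 5e3, 45.0), 3))
```

The two regimes bracket the transition discussed above: geostrophic retrievals are justified for the first case, while ageostrophic corrections become essential for the second.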
Currents from sea surface height
At zeroth order (i.e., Ro = 0) and in the absence of other sources of momentum (such as wind and waves), the horizontal velocity field is nondivergent. As such, it is possible to define a stream function ψ that only depends parametrically on the vertical coordinate z, such that the geostrophic velocity field is given by u_g = −∂ψ/∂y and v_g = ∂ψ/∂x, with the surface stream function proportional to the sea surface height (SSH) η, i.e., ψ = (g/f) η.
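The geostrophic relation above, u_g = −(g/f) ∂η/∂y and v_g = (g/f) ∂η/∂x, can be sketched numerically; the regular grid, constant Coriolis parameter and Gaussian SSH anomaly below are illustrative assumptions:

```python
import numpy as np

def geostrophic_velocity(eta, dx, dy, lat_deg=45.0, g=9.81):
    """Geostrophic velocities from a gridded SSH field eta (m), with rows
    along y and columns along x, via centred finite differences:
    u_g = -(g/f) d(eta)/dy,  v_g = (g/f) d(eta)/dx."""
    f = 2.0 * 7.2921e-5 * np.sin(np.radians(lat_deg))
    deta_dy, deta_dx = np.gradient(eta, dy, dx)  # derivatives along axis 0 (y), axis 1 (x)
    return -(g / f) * deta_dy, (g / f) * deta_dx

# Idealized Gaussian SSH high (an anticyclone in the NH): circulation is clockwise
x = np.arange(-100e3, 100e3, 5e3)
X, Y = np.meshgrid(x, x)
eta = 0.2 * np.exp(-(X**2 + Y**2) / (40e3) ** 2)
u, v = geostrophic_velocity(eta, 5e3, 5e3)
```

North of the high, the eastward velocity u is positive, and east of it v is negative, consistent with clockwise flow around a sea level maximum in the Northern Hemisphere.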
Sea surface temperature from MODIS Aqua with sea surface height from AVISO (black lines) obtained from the combination of measurements provided by different altimeters. Lines show the measurements available within the 12 h around the time the image was taken, provided by Jason-1 (red), Envisat (blue) and GFO (purple). Arrows correspond to the cross-track geostrophic velocities.
[Figure omitted. See PDF]
Current altimeters provide measurements of SSH along the satellite track with
a sampling frequency of 20 Hz, implying a spatial resolution on the order of
300 m. The power spectral density (PSD) of these measurements shows the
presence of white noise.
During the last few years there have been major improvements in radar altimetry technology that have not only reduced noise levels but also reduced the impact of inhomogeneities in the measurements. Nevertheless, current altimeters still present strong limitations in observing small-scale features (∼10 km), not only due to noise but also due to temporal sampling. Finally, it is worth mentioning that current altimeters still have difficulties in providing measurements at distances between 10 and 50 km from the coast, in spite of the advances made during recent years.
Altimeter measurements only allow retrieval of the velocity perpendicular to
the satellite track (Eq. ). Two-dimensional fields are then
typically obtained through the interpolation of measurements in space and
time using classical optimal interpolation (OI) schemes.
This example illustrates the two main problems of this technique. On the one hand, the separation between tracks and the time sampling reduce the spatial resolution in comparison with the one achieved by the along-track measurements. It has been estimated that the shortest wavelength that can be resolved by the interpolated two-dimensional fields is 150–200 km, implying that vortices with diameters smaller than 75–100 km cannot be observed by altimeters. This gives rise to the so-called altimetric gap, i.e., the range of scales that cannot currently be observed by altimeters. On the other hand, the limited number of altimeters as well as the rapid evolution of some structures may induce errors in the location and geometry of ocean vortices. It has been shown that the difference between using two or four altimeters induces RMS differences in sea level anomalies of up to 10 cm and differences in the eddy kinetic energy as large as 400 cm² s⁻², and comparison against drifting buoys has unveiled important errors in the location of some vortices.
Several efforts have been made during the last few years to improve the ability to obtain two-dimensional velocities from along-track data. For example, a new approach has recently been proposed to interpolate the sparse altimetric measurements onto a regular grid based on the advection of potential vorticity (see Sect.  below) during short periods of time and on small scales. This method has recently been adapted to the interpolation of along-track altimetric measurements, improving the performance of the classical OI schemes. Other proposed approaches attempt to improve altimetric maps using a two-step approach. That is, after the standard maps are computed, the residuals with respect to the along-track data are reinterpolated using different correlation functions that may include bathymetric constraints.
Another approach aiming to improve the direction of currents derived from altimetric measurements is based on the use of complementary satellite observations, such as those obtained from thermal and visible measurements. Measurements of sea surface temperature, particularly those from infrared observations, are very precise in locating ocean structures such as fronts. Strong fronts tend to be aligned with currents. This allows the retrieval of two-dimensional velocity fields associated with thermal fronts, or even with chlorophyll concentration patterns. In particular, given the cross-track geostrophic velocity u_⊥, the along-track component can be estimated as u_∥ = u_⊥ tan θ, where θ is the angle between the front and the vector orthogonal to the altimetric track. This approach has some drawbacks: it is sensitive to noise, and it is only valid for strong fronts, becoming a region-dependent approximation. The underlying idea can also be pushed further to correct two-dimensional altimetric maps. As before, under the assumption that strong fronts are a proxy for the geostrophic streamlines, the information is propagated along fronts using a Lagrangian framework and the altimetric velocities are corrected in both the direction, using the orientation of the front, and the speed, using the variation of intensity of the thermal gradient.
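The front-based correction amounts to simple trigonometry once the front orientation is known; the sketch below (function name and numbers are illustrative, not from the original work) assumes the current is exactly aligned with a strong front:

```python
import math

def along_track_component(u_perp, theta_rad):
    """Along-track velocity from the cross-track geostrophic component,
    u_par = u_perp * tan(theta), where theta is the angle between the
    front and the direction orthogonal to the altimetric track.
    Only meaningful for strong, current-aligned fronts."""
    return u_perp * math.tan(theta_rad)

# Illustrative values: 0.3 m/s cross-track velocity, front at 30 degrees
# from the cross-track direction.
u_par = along_track_component(0.3, math.radians(30.0))
total_speed = math.hypot(0.3, u_par)  # full speed along the front
```

Note that the estimate degrades rapidly as θ approaches π/2, i.e., when the front is nearly parallel to the track, which is one reason the approach is sensitive to noise.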
The advective term, i.e., (u · ∇)u, in the momentum equation is absent in the geostrophic approximation because it is of first order in the expansion in terms of Ro. If the flow is considered to be axisymmetric, u = v_θ(r) ê_θ, the advection term becomes −(v_θ²/r) ê_r, where r is the radius of curvature and ê_r and ê_θ are the radial and tangential unit vectors. The momentum equations can then be easily solved, giving rise to the gradient wind solution.
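Under the conventions above, the gradient wind balance for cyclonic flow in the Northern Hemisphere can be written as v_θ²/r + f v_θ = f v_g and solved as a quadratic in v_θ; the following sketch uses illustrative parameter values and this sign convention as an assumption:

```python
import math

def gradient_wind(v_g, r, lat_deg=45.0):
    """Solve v**2 / r + f * v - f * v_g = 0 for the gradient wind v, given
    the geostrophic speed v_g (m/s) and the radius of curvature r (m).
    Assumes v_g > 0 for cyclonic flow in the Northern Hemisphere."""
    f = 2.0 * 7.2921e-5 * math.sin(math.radians(lat_deg))
    disc = (f * r) ** 2 + 4.0 * r * f * v_g
    return (-f * r + math.sqrt(disc)) / 2.0  # physically relevant root

# Cyclonic eddy: curvature makes the current subgeostrophic
v = gradient_wind(0.5, 50e3)
```

For r → ∞ the solution recovers the geostrophic speed, while for tight cyclonic curvature it is noticeably smaller, illustrating the first-order Ro correction discussed above.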
Currents from wind and waves
Altimeter-derived geostrophic currents only account for a part of the surface circulation. The ocean response to atmospheric forcing (the most relevant component of the surface current) must be added to the geostrophic currents. The launch of scatterometers has allowed the measurement of several parameters characterizing the processes at the ocean–atmosphere interface (wind stress, roughness, wave height, etc.), enabling quantification of the wind-driven components of the sea surface currents. To understand and review the recent efforts to include atmosphere–ocean processes in retrieving the sea surface currents, we start from the classical approach by V. W. Ekman, who solved the momentum equations for a semi-infinite ocean forced by wind (Sect. ). However, his solution did not include the contribution from waves, which was added much later (Sect. ). Both solutions solve the momentum equations for a steady, hydrostatic and Boussinesq flow, while recent approaches generalize the problem by writing the momentum equations in terms of the turbulent stress (Sect. ).
Momentum equations
The momentum equations for a steady, hydrostatic and Boussinesq flow are
given by
where u is the total horizontal velocity field, τ is the turbulent stress, b is buoyancy, and p′ and ρ′ are a perturbation pressure and a perturbation density with respect to the reference density ρ₀, such that ρ = ρ₀ + ρ′, which has an associated reference pressure p₀ given by ∂p₀/∂z = −ρ₀ g, and g is gravity. Contrary to the standard formulation of the Boussinesq flow
At the ocean surface, the velocity can be obtained from Eqs. () and () using satellite observations. The perturbation pressure at the ocean surface can be derived from altimetric measurements of SSH through p′ = ρ₀ g η. The buoyancy can be expressed in terms of temperature and salinity, b = g(α T − β S), using SST from infrared and microwave radiometers and sea surface salinity (SSS) from microwave radiometers as well. Here, α is the thermal expansion coefficient and β is the haline contraction coefficient. Finally, the wind stress term can be derived from scatterometer measurements. This approach is used to generate ocean current products such as OSCAR and GEKCO, without including the Stokes drift (the u_S term in Eq. ).
The Stokes drift contribution is difficult to retrieve from satellite observations. As seen above, it can be defined as the difference between the Eulerian and Lagrangian velocities due to wave motion averaged over a wave period. In the case of a monochromatic wave, the Stokes drift can be computed as u_S = a² ω k e^{2kz} ê, where a is the wave amplitude, ê the direction of propagation in complex notation, k the wavenumber and ω the wave frequency. This equation is unrealistic for the real ocean, where the wave field is the result of the combination of many modes. It is therefore necessary to have information on wave statistics. In particular, the Stokes drift is proportional to the third moment of the wave spectrum.
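For the monochromatic case, the profile u_S(z) = a² ω k e^{2kz} can be evaluated directly; the sketch below additionally assumes deep-water linear waves (ω² = gk), and the wave parameters are illustrative:

```python
import math

def stokes_drift(a, wavelength, z):
    """Stokes drift of a monochromatic deep-water wave:
    u_S(z) = a**2 * omega * k * exp(2 * k * z), with omega = sqrt(g * k).
    a: wave amplitude (m); wavelength (m); z <= 0 is depth (m)."""
    g = 9.81
    k = 2.0 * math.pi / wavelength
    omega = math.sqrt(g * k)
    return a ** 2 * omega * k * math.exp(2.0 * k * z)

# 1 m amplitude, 100 m wavelength: a few cm/s at the surface, decaying fast
surface = stokes_drift(1.0, 100.0, 0.0)
deeper = stokes_drift(1.0, 100.0, -10.0)
```

The e^{2kz} factor shows why the Stokes drift is strongly surface-trapped: at one tenth of a wavelength below the surface it has already decayed by more than a factor of 3.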
Ageostrophic velocity field for the Ekman component (green), the “Eulerian” Stokes component (blue), the Ekman–Stokes component (red) and the resulting velocity (black). Wind and wave propagation is in the x direction. All velocities are normalized by the friction velocity. The parameters used are the same as in . Arrows in the lower-right plot correspond to the total (black) and Ekman (green) transport that an SVP and a CODE drifter would see, obtained by integrating velocities over the layers marked with gray bands.
[Figure omitted. See PDF]
Since the momentum balance of Eqs. () and () is linear and, assuming that pressure gradients are not related to local wind or waves, they are often separated into a geostrophic velocity field , which depends on the pressure gradients and can be derived from SSH measurements (Eqs. and ), and an ageostrophic field driven by wind and waves.
In addition, the parametrization of the turbulent stress in terms of the velocity field allows Eqs. () and () to be combined, resulting in a second-order linear equation for the velocity. However, an alternative approach is obtained by differentiating Eq. () and manipulating it to obtain an equation for the turbulent stress, known as the generalized Ekman model or the turbulent thermal wind balance. Once the stress has been retrieved, the velocity can be computed using Eq. (). This is the approach used by the OSCAR product, without including the Coriolis–Stokes term. This approach improves upon the earlier solution and has been extensively validated.
Finally, it is worth mentioning that the use of forcing data with different effective resolutions in Eq. () may induce unphysical imbalances, since products such as SST (on the order of 1 km for IR radiometers) and SSH (on the order of 50–100 km for altimetric maps) have very different spatial resolutions. Consequently, the spatial resolution of this approach is limited by the field with the lowest effective resolution. A possible approach to increase the spatial resolution of altimetric maps (see the discussion in Sect. ) consists of merging altimetric maps with Lagrangian measurements. Indeed, a variational algorithm has been proposed and successfully used to combine CODE drifter data and altimetric maps, showing that not only is it possible to restore some of the variability missed in altimetric maps, but also to recover ageostrophic contributions beyond the simple Ekman model. Obviously, this approach is limited by the availability of enough drifter data.
Wind solution
Ekman provided a solution to the ageostrophic part of Eq. () by setting τ = ρ₀ A_v ∂u/∂z, where A_v is a constant eddy viscosity, and modifying the bottom boundary condition (Eq. ) so that the velocity vanishes at depth. This solution only depends on the wind stress and the constant value given to A_v, through the Ekman depth D_E = (2 A_v / f)^{1/2} (see Fig. ). If the turbulent stress (Eq. ) is instead assumed to be a linear function of depth, the resulting ageostrophic velocities are given by the so-called slab model, characterized by a vertically homogeneous ageostrophic velocity field.
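The classical Ekman solution is compact in complex notation, w(z) = u + iv ∝ exp((1 + i) z / D_E); the sketch below assumes an illustrative constant eddy viscosity, mid-latitude Coriolis parameter and a wind stress along x (all hedged choices, not values from the original work):

```python
import cmath
import math

def ekman_spiral(z, tau=0.1, Av=1e-2, lat_deg=45.0, rho0=1025.0):
    """Classical Ekman spiral for a wind stress tau (N/m^2) along x.
    Returns the complex velocity u + i*v at depth z (z <= 0, in m).
    Av is an assumed constant vertical eddy viscosity (m^2/s)."""
    f = 2.0 * 7.2921e-5 * math.sin(math.radians(lat_deg))
    d = math.sqrt(2.0 * Av / f)                     # Ekman depth D_E
    w0 = tau * d * (1.0 - 1.0j) / (2.0 * rho0 * Av)  # surface velocity
    return w0 * cmath.exp((1.0 + 1.0j) * z / d)

w_surf = ekman_spiral(0.0)
angle = math.degrees(cmath.phase(w_surf))  # -45: to the right of the wind (NH)
```

Evaluating the profile shows the two hallmarks of the solution: a surface current at π/4 rad to the right of the wind in the Northern Hemisphere, and an exponential decay combined with rotation (the spiral) over a few Ekman depths.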
Both solutions depend on the wind stress, which can be retrieved from satellite measurements, and on some parameters, i.e., the eddy viscosity and the layer depth, that have to be determined. Notice, however, the key differences between these two solutions. In the Ekman solution, the ageostrophic velocity field decreases with depth and the surface velocity is at π/4 rad to the right (left) of the wind in the Northern (Southern) Hemisphere, while, in the slab model solution, velocities are vertically homogeneous in the upper layer and the surface velocity is at π/2 rad to the right (left) of the wind in the Northern (Southern) Hemisphere. The main approaches to retrieving the wind-induced currents usually do not attempt to reconstruct the vertical profile of velocities but focus on determining the average motion of a layer, and may take into account the singularity at the equator due to the Coriolis parameter.
Rather than using the theoretical models given by Eqs. () and (), some approaches to determine the wind-induced
ageostrophic contribution of the velocity field are physically based
statistical models calibrated with independent observations of the velocity
field, typically surface drifters.
Wave solution
The interaction of the Stokes drift with planetary vorticity introduces an additional force in the momentum equations known as the Coriolis–Stokes force. As a consequence, the ideal solution for the ageostrophic component of the velocity has additional terms with respect to Eq. (), assuming the same boundary conditions as in the classical Ekman solution (Eq. ). The solution involves the Ekman current at the ocean surface (Eq. ), the Stokes velocity and the Stokes depth scale δ_S = 1/(2k), where k is the wavenumber (see Eq. ). The Coriolis–Stokes forcing changes the direction of the ageostrophic component. It also has an exponentially decaying vertical contribution that can be of the same vertical extent as the Ekman term. Therefore, the heuristic model given by Eq. (), when fitted to wind measurements and drifter trajectories, might mix the wind and the wave contributions.
Figure  plots the ideal solutions given by Eq. (). It shows the total solution (black) decomposed into the three solutions discussed above: Ekman (green), “Eulerian” Stokes (blue) and Ekman–Stokes (red), as well as the integration of these solutions over the depths of the CODE and SVP drogues. The values used are the same as in . As is evident in Fig. , CODE drifters are expected to have a different direction in comparison with SVP drifters.
Although the determination of the upper wind- and wave-driven currents provided by the above equations may not be accurate
Currents from a sequence of tracer images
The apparent motion of surface tracers such as SST and chlorophyll concentration suggests the use of sequences of satellite images to retrieve the velocity field that originated this motion. This is being done using two main approaches: feature tracking and inverting the conservation equation for the tracer, which, in general, is given by ∂θ/∂t + u · ∇θ = S, where θ can be SST, chlorophyll concentration or even the MSS, and S represents the sources and sinks of this tracer, including the vertical advection contribution, i.e., −w ∂θ/∂z, where w is the vertical velocity component. It is important to realize that the advection term is the inner product between the velocity and the tracer gradient, which implies that only the velocity component parallel to the tracer gradient can be retrieved by inverting Eq. (). This is what is known as the aperture problem. However, while the wealth of satellite measurements of SST points to their use for estimating ocean currents, this approach is not necessarily the best choice in certain situations. The skin depth of SST is on the order of a few microns, implying that air–sea interactions can mask the presence of oceanic structures. Moreover, the algorithms used to retrieve SST introduce additional noise. Therefore, in some situations brightness temperature (BT) is better suited than SST for the estimation of currents.
This is what is known as the aperture problem. However, while the wealth of
satellite measurements of SST points to their use for estimating ocean
currents, this approach is not necessarily the best choice in
certain situations. The skin depth of SST is on the order of a few
microns implying that air–sea interactions can mask the presence of oceanic
structures. Moreover, the algorithms used to retrieve SST introduce
additional noise. Therefore, in some situations brightness temperature (BT)
is better suited than SST for the estimation of currents
The standard approach used in feature tracking is the so-called maximum cross-correlation (MCC) method. The underlying idea is quite simple: given a template of grid points in an image at time t, it consists of searching which subwindow of the same size has the maximum cross-correlation within a larger search window in an image at time t + Δt, and taking the displacement vector between images as the velocity field. This approach has been mainly applied to SST.
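A bare-bones implementation of the MCC search might look as follows; it is a brute-force sketch with no subpixel refinement or quality screening, so it only illustrates the principle behind the operational processing chains:

```python
import numpy as np

def mcc_displacement(img0, img1, i, j, tpl=8, search=16):
    """Maximum cross-correlation: find the displacement of the (2*tpl)^2
    template centred at (i, j) in img0 within a neighbourhood of img1.
    Returns (di, dj) in pixels; dividing by the time separation between
    the two images would give a velocity estimate."""
    t = img0[i - tpl:i + tpl, j - tpl:j + tpl]
    t = (t - t.mean()) / (t.std() + 1e-12)
    best, best_dij = -np.inf, (0, 0)
    for di in range(-search, search + 1):
        for dj in range(-search, search + 1):
            w = img1[i + di - tpl:i + di + tpl, j + dj - tpl:j + dj + tpl]
            wn = (w - w.mean()) / (w.std() + 1e-12)
            r = (t * wn).mean()  # normalized cross-correlation
            if r > best:
                best, best_dij = r, (di, dj)
    return best_dij

# Synthetic check: shift a random "SST" field by (3, -2) pixels and recover it
rng = np.random.default_rng(0)
sst0 = rng.random((64, 64))
sst1 = np.roll(np.roll(sst0, 3, axis=0), -2, axis=1)
print(mcc_displacement(sst0, sst1, 32, 32))  # -> (3, -2)
```

Real MCC processing additionally screens matches by correlation threshold, rejects cloud-contaminated windows and filters the resulting sparse vector field, as discussed below.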
An alternative approach consists of tracking biogenic surface slicks. These are formed by monomolecular films that modify the surface tension and therefore dampen capillary waves, reducing the backscatter of microwave radar emissions. This allows the observation of such slicks in MSS images provided by SAR and the use of the MCC technique to retrieve currents. This approach was successfully tested using SAR data from Envisat and ERS-2 separated by only 30 min. Although the use of SAR data allows the limitation imposed by cloud coverage to be overcome, the interpretation of MSS is strongly dependent on weather conditions, implying that the method can only be applied for winds within the range 2–7 m s⁻¹. An improvement of the MCC approach has been proposed using a two-step procedure: in the first step, image segmentation is used to unveil the patterns present in the image, which are tracked in the second step. This tracking combines MCC vectors and optical flow methods, i.e., inversion of Eq. () with vanishing sources and sinks. In general, the resulting velocity field is sparse and is post-processed to retrieve a smoother field.
An alternative to feature tracking is to solve the heat equation, which provides an equation for the evolution of SST. Integrated over the mixed layer (ML), the heat equation can be written in terms of the air–sea heat fluxes, the thermal diffusion and the entrainment velocity at the base of the ML, which is nonzero only if there is a deepening of the ML.
The need to solve the differential Eq. () imposes constraints on the spatial resolution Δx, which is controlled by the time spacing Δt between satellite images and the cross-isotherm velocity u, i.e., Δx ≈ u Δt: faster cross-isotherm motions or longer intervals between images imply coarser resolvable scales. If altimetric maps are used to solve the aperture problem, then the effective spatial resolution will be reduced to that of altimetry (see Sect. ).
Sea surface temperature from AVHRR. Upper left: absolute dynamic topography from AVISO (black lines) and the associated geostrophic velocities (arrows). Top right: velocities derived from a sequence of thermal images using the MCC method (arrows). Bottom: velocities derived from the thermal image using a Butterworth filter (arrows).
[Figure omitted. See PDF]
Currents from a single tracer image
The methods described in Sects. – rely on altimetric measurements to obtain the topology component of the velocity field. As discussed in Sect. , altimeters are limited by current technology (noise level, distance to the coast) and sampling geometry (difficulty in retrieving two-dimensional currents). This fact has motivated the development of alternative approaches that exploit the characteristics of SST measurements.
Singularity exponents derived from the brightness temperature of the image shown in Fig. .
[Figure omitted. See PDF]
The necessary framework can be found in the so-called quasi-geostrophic approximation. Within this framework, the potential vorticity (PV) anomaly q is related to the geostrophic stream function ψ (Eq. ) through q = ∇²ψ + ∂/∂z[(f²/N²) ∂ψ/∂z], where N is the Brunt–Väisälä frequency. The hydrostatic equation provides the appropriate boundary condition at the ocean surface, f ∂ψ/∂z = b_s, where b_s is the sea surface buoyancy (SSB), and at the ocean bottom (z = −H), ∂ψ/∂z = 0; alternatively, ∂ψ/∂z → 0 as z → −∞, where it is assumed that the bottom is far enough. Then, using the principle of invertibility of PV, the geostrophic stream function can be computed from the knowledge of the surface buoyancy, which can be retrieved from SST and SSS measurements (Eq. ); of N, which can be obtained from climatologies or from density profiles from Argo buoys; and of the PV. Unfortunately, the PV is not known and cannot be derived from satellite measurements. Nevertheless, it has been shown that the large-scale forcing in density and PV can lead to interior PV mesoscale anomalies that are correlated with the surface buoyancy anomalies in the upper ocean. In that case, the PV anomaly can be separated into a part proportional to the surface buoyancy through a function that specifies its amplitude, and Eq. () can be used to retrieve the stream function from the surface buoyancy, i.e., from SST and SSS measurements.
It has been proposed to solve this problem by splitting the total solution into two parts: the sum of a surface solution, obtained assuming nonzero surface buoyancy and zero interior PV, and an interior solution, obtained assuming zero surface buoyancy and a nonzero interior PV anomaly.
Assuming a constant stratification N and an ocean of depth H, the surface solution can be written in Fourier space in terms of the surface buoyancy; in the limit H → ∞ it becomes the classical surface quasi-geostrophic (SQG) solution, ψ̂_s(k, z) = (b̂_s(k)/(N|k|)) exp(N|k|z/f), where the hat stands for the Fourier transform, k is the wave vector and |k| its modulus.
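In this infinite-depth limit, the SQG inversion reduces to a multiplication in Fourier space; the sketch below (doubly periodic square grid, constant N, all values illustrative) retrieves surface geostrophic velocities from a surface buoyancy field:

```python
import numpy as np

def sqg_surface_velocity(b_s, dx, N=1e-2):
    """Surface geostrophic velocities from surface buoyancy b_s using the
    SQG relation psi_hat(k) = b_hat(k) / (N |k|) at z = 0 (H -> infinity).
    Assumes a doubly periodic square grid with spacing dx."""
    n = b_s.shape[0]
    k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(k1d, k1d)          # kx along axis 1, ky along axis 0
    kmod = np.hypot(kx, ky)
    kmod[0, 0] = np.inf                     # discard the (unconstrained) mean
    psi_hat = np.fft.fft2(b_s) / (N * kmod)
    u = np.real(np.fft.ifft2(-1j * ky * psi_hat))  # u = -d(psi)/dy
    v = np.real(np.fft.ifft2(1j * kx * psi_hat))   # v =  d(psi)/dx
    return u, v

# Single-mode check: b_s = cos(k0 x) should give u = 0 and v = -sin(k0 x)/N
n, dx = 64, 1.0
X = np.arange(n) * dx
k0 = 2.0 * np.pi * 4 / (n * dx)
b = np.cos(k0 * X)[np.newaxis, :].repeat(n, axis=0)
u, v = sqg_surface_velocity(b, dx)
```

The single-mode example makes the scale dependence explicit: each Fourier mode of the buoyancy contributes a velocity proportional to its amplitude divided by N, with smaller scales decaying faster away from the surface.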
The interior solution, in turn, corresponds to the baroclinic mode.
At the ocean surface, the surface solution dominates and the interior solution projects onto it, which motivated approximating the total solution by a modified surface solution with an effective Brunt–Väisälä frequency n₀ that has to be adjusted using independent observations. Then, the three-dimensional geostrophic stream function and buoyancy can be retrieved from satellite measurements of SST as ψ̂(k, z) = (b̂_s(k)/(n₀|k|)) exp(n₀|k|z/f), with the surface buoyancy anomaly estimated from the SST anomaly.
These equations are known as the effective SQG (eSQG) model. It is worth
mentioning that the parameter contains the contribution of interior PV
as well as the effect of SSS, if salinity measurements are not used to derive
the geostrophic velocities.
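The eSQG reconstruction described above is straightforward to prototype spectrally. The sketch below is a minimal NumPy illustration, not the cited implementation: the function names are mine, and both the effective Brunt–Väisälä frequency `n_eff` and the amplitude `alpha` (which lumps the SST-to-buoyancy conversion, the interior-PV contribution and the SSS effect) are placeholders that would have to be tuned against independent data, as discussed above.

```python
import numpy as np

def esqg_streamfunction(sst_anom, dx, z, f=1e-4, n_eff=2e-3, alpha=1.0):
    """eSQG sketch: stream function at depth z (z <= 0) from an SST
    anomaly map. `alpha` and `n_eff` are illustrative and must be
    adjusted using independent observations (altimetry, drifters)."""
    ny, nx = sst_anom.shape
    kx = 2*np.pi*np.fft.fftfreq(nx, d=dx)
    ky = 2*np.pi*np.fft.fftfreq(ny, d=dx)
    kxx, kyy = np.meshgrid(kx, ky)
    k = np.hypot(kxx, kyy)
    k[0, 0] = 1.0                      # placeholder; mean removed below
    b_hat = np.fft.fft2(alpha*sst_anom)
    # surface buoyancy sets psi_hat = b_hat/(N_eff*k), decaying with
    # depth as exp(N_eff*k*z/f): small scales fade fastest
    psi_hat = b_hat/(n_eff*k)*np.exp(n_eff*k*z/abs(f))
    psi_hat[0, 0] = 0.0                # drop the domain mean
    return np.real(np.fft.ifft2(psi_hat))

def geostrophic_velocity(psi, dx):
    """u = -dpsi/dy, v = +dpsi/dx."""
    dpsi_dy, dpsi_dx = np.gradient(psi, dx)
    return -dpsi_dy, dpsi_dx
```

Because each Fourier component decays at the rate N_eff|k|/f, the exponential attenuation with depth is scale-selective, which is the essential physics of the eSQG vertical extrapolation.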
The comparison between altimetric measurements of SSH and SST images unveils the synergy between these two measurements (e.g., Fig. ) . In general, while SST images can be used to obtain information about the location and geometry of ocean structures, it is difficult to quantify velocities from them (see also Sect. ). Conversely, although altimeters provide information about ocean velocities, it is difficult to recover the location and geometry of ocean structures from them. However, within the eSQG framework, SSH and SST are in phase and contain the same information. These ideas motivated the reconstruction of the surface stream function by combining SST and SSH measurements through the definition of an empirical transfer function, : where can be empirically estimated by combining SST and SSH measurements as This idea has been analyzed in and , who showed that the transfer function can be approximated by a Butterworth filter: with , a cut-off frequency and an amplitude that has to be determined from other measurements such as altimetric data, drifters, etc. (equivalent to the parameter in the eSQG approach). This approach is well suited to combining simultaneous measurements of SST and SSH such as the ones provided by ESA's Sentinel-3 satellite.
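A minimal sketch of the transfer-function approach follows, assuming an isotropic Butterworth-type filter with adjustable amplitude, cut-off wavenumber and order; all names and default values here are illustrative stand-ins, not those of the cited works, and the amplitude/cut-off would be fitted against altimetry or drifters as noted above.

```python
import numpy as np

def butterworth_transfer(k, kc, amplitude, order=2):
    """Assumed Butterworth-type transfer |F(k)| between the SST and
    stream-function spectra; amplitude and cut-off kc must be tuned
    externally (like the alpha parameter in the eSQG model)."""
    return amplitude/np.sqrt(1.0 + (k/kc)**(2*order))

def streamfunction_from_sst(sst_anom, dx, kc, amplitude, order=2):
    """Apply the transfer function to an SST anomaly map in Fourier space."""
    ny, nx = sst_anom.shape
    kx = 2*np.pi*np.fft.fftfreq(nx, d=dx)
    ky = 2*np.pi*np.fft.fftfreq(ny, d=dx)
    k = np.hypot(*np.meshgrid(kx, ky))
    psi_hat = butterworth_transfer(k, kc, amplitude, order)*np.fft.fft2(sst_anom)
    return np.real(np.fft.ifft2(psi_hat))
```

The filter passes the large scales (where SST and SSH carry the same phase information) at the prescribed amplitude and damps wavenumbers beyond the cut-off, where the SST spectrum is no longer a reliable proxy for the stream function.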
In recent years there have been several efforts to include ageostrophic effects in the SQG framework. On the one hand, included wind-driven ageostrophic contributions in the SQG dynamics. They integrated Eq. () (without the buoyancy and Stokes terms) over a mixed layer (ML) of depth , using pressure derived from SSH and assuming an SQG-like vertical decay (Eq. ) and the parameterization of the turbulent stress given by Eq. (): where is the geostrophic velocity at the surface. Interestingly, the effect of wind does not appear explicitly in the above equation and is contained in the ML depth. Moreover, this solution implies that on scales smaller than those of the wind stress, i.e., a few hundred kilometers, the total averaged velocity is in phase with the geostrophic velocity. On the other hand, also included ageostrophic effects by rewriting the SQG approach using the two-dimensional semigeostrophic equations, allowing its extension to scales smaller than the Rossby radius of deformation.
Besides the use of PV inversion arguments, the identification between frontal structures and streamlines has also been exploited to derive ocean currents from a single SST image. In particular, the use of singularity analysis has been explored . Singularity exponents are dimensionless variables that measure the local degree of regularity (if positive) or irregularity (if negative) of the scalar at each point. The set of singularity exponents provides information not only about the statistics of the changes of scale in the scalar, but also about the specific geometrical arrangement of the structures explaining those changes of scale. A striking feature of singularity exponents is that singularity isolines, especially those associated with the most singular (i.e., most negative) values, seem to delineate with remarkable accuracy the streamlines of the flow. They do so more closely than the isolines of the scalar from which they are derived (see, for instance, Fig. 8 in ). However, no theoretical proof of this observed property has been given so far. Figure shows, for example, the map of singularity exponents derived from the SST map shown in Fig. . As shown in the figure, the singularity exponents provide very detailed information about the patterns underlying the SST and take a constant, homogeneous value along singularity lines, despite the progressive change in the amplitude of the SST gradient. Fronts and sharp transitions in general are associated with negative values and so are shown in white in the figure, but subtler transitions (i.e., smaller-amplitude gradients) are also associated with negative values, uncovering a more detailed view of the circulation. Positive values (represented in dark colors in the figure) also correspond to frontal structures, but these have less dynamical relevance.
The apparent correspondence between singularity lines and streamlines motivated the introduction of a simple method (called the maximum singular stream function method, or MSSM, ) that provides an estimate of a normalized stream function from the singularity exponents obtained from a map of a given ocean scalar. However, the MSSM is not very useful for dynamic studies, as it only gives information on the geometry of the flow: neither the modulus of the velocity vector nor the sense of the circulation (upstream or downstream along the depicted streamlines) is known. Besides, by construction the MSSM relies on the capability of the so-called most singular manifold (MSM) to describe the full geometry of the flow, which introduces a certain degree of quality loss in the method due to numerical degradation. Nevertheless, the capability of singularity analysis to capture the underlying organization of the flow points to its future combination with the SQG approach or with altimetric data to improve the reconstruction of high-resolution velocities.
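A crude way to experiment with singularity exponents is to fit, pixel by pixel, a power law to the coarse-grained gradient modulus across a few window sizes. The sketch below is a simplified stand-in for the wavelet-based singularity analysis used in the literature; `box_mean`, the scale choices and the regularization constant are ad hoc assumptions.

```python
import numpy as np

def box_mean(a, half):
    """Mean over a (2*half+1)^2 window, edges replicated (summed-area trick)."""
    size = 2*half + 1
    p = np.pad(a, half, mode="edge")
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))    # leading zeros for window sums
    s = c[size:, size:] - c[:-size, size:] - c[size:, :-size] + c[:-size, :-size]
    return s/size**2

def singularity_exponents(field, halfwidths=(1, 2, 4, 8)):
    """Fit, at every pixel, the power law <|grad field|>_r ~ r^h across a
    few window sizes r. More negative h = sharper (more singular)
    transition; h near zero = smooth region."""
    gy, gx = np.gradient(field)
    grad = np.hypot(gx, gy) + 1e-12            # avoid log(0)
    logr = np.log(2.0*np.asarray(halfwidths, dtype=float) + 1.0)
    logmu = np.stack([np.log(box_mean(grad, h)) for h in halfwidths])
    lc = logr - logr.mean()
    # per-pixel least-squares slope of log mu versus log r
    return np.tensordot(lc, logmu - logmu.mean(axis=0), axes=(0, 0))/(lc**2).sum()
```

On a sharp front the coarse-grained gradient dilutes as the window grows (negative exponent), while in smooth regions it is scale-independent (exponent near zero), reproducing the qualitative contrast described above.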
Retrieval from coastal HF radar
The lack of direct satellite measurements of surface ocean currents has motivated the development of different techniques to derive them from complementary satellite observations, as seen in Sect. . These techniques are based on imposing theoretical frameworks that are a simplification of the dynamics, even with respect to the dynamics underlying current ocean models. An alternative that avoids this issue is the use of coastal radars, which allow remote sensing retrievals of ocean currents by measuring the Doppler shift of the radio waves backscattered by short sea surface waves. Radars operating in the 3–50 MHz range have the advantage that the emitted wavelengths (6 to 100 m) are comparable to those of typical surface waves, resulting in strong backscatter .
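The first-order Bragg relations behind these retrievals are compact enough to check numerically: backscatter is dominated by surface waves of half the radar wavelength, whose deep-water phase speed sets a known Doppler line, and any residual shift maps to a radial current through Δf = 2u_r/λ_radar. A small sketch of these standard relations (variable names are mine):

```python
import numpy as np

G = 9.81        # gravity, m s^-2
C_LIGHT = 3e8   # speed of light, m s^-1

def bragg_frequency(radar_freq_hz):
    """Bragg Doppler line for deep-water waves of half the radar
    wavelength: f_B = sqrt(g/(pi*lambda_radar))."""
    radar_wavelength = C_LIGHT/radar_freq_hz
    return np.sqrt(G/(np.pi*radar_wavelength))

def radial_current(doppler_shift_hz, radar_freq_hz):
    """Radial surface current from the residual Doppler shift (the part
    beyond the Bragg line): delta_f = 2*u_r/lambda_radar."""
    return doppler_shift_hz*(C_LIGHT/radar_freq_hz)/2.0
```

At 12 MHz the radar wavelength is 25 m, the Bragg line sits near 0.35 Hz, and a residual shift of a few hundredths of a hertz already corresponds to radial currents of several tens of cm/s, which is why the technique is so sensitive.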
Two methodologies are presently being used: the CODAR SeaSonde
and the Wellen Radar (WERA) .
Radar-derived currents are assumed to have a measurement depth of 1 m at 10–15 MHz, and they have been extensively used for oceanographic studies in coastal regions; see the exhaustive review by and the references therein.
HF radars provide spatially and temporally averaged currents, retrieved from a horizontal footprint that changes with the distance from the antenna. Although they can provide information on the surface currents up to 20–70 km from the coast, the actual coverage depends on radio interference, the time of day, solar activity and sea state . The frequency spectrum of any radar measurement reveals the existence of white noise . The amplitude of the noise is not linked to the radar station, as it changes with time and location. In their analysis, conclude that the averaging period should be adapted in order to retrieve the geophysical signal. The origin of such noise is not yet fully understood, but various processes have been proposed to affect the radar measurements: changes in the velocity field during the radar measurement , radio frequency interference, and signal sampling .
The effective spatial resolution of long-range radar systems has been investigated by . Their analysis indicates that the effective resolution of WERA antennas ranges from 10 km near the radar stations to 25 km at long range (150 km). The corresponding resolutions of SeaSonde antennas are 40 and 60 km, respectively.
Because they are integrated measurements, the nature of radar-derived currents remains an open debate. For example, it has been suggested that HF radar currents include either the entire wave-induced Stokes drift , part of it or none of it . In their work, compared HF radar currents with two types of surface drifters: seven iSphere drifters without a drogue (found to be driven by the Eulerian current and the Stokes drift at the surface) and seven CODE-type drifters (following the ocean current at 1 m depth). Both types of drifters experienced little wind drag. They found that the difference between HF radar currents and the iSphere velocities was strongly correlated with the Stokes drift. Moreover, the difference between HF radar velocities and the CODE-type drifters appeared to be independent of the Stokes drift for the wind and wave conditions in their study area.
The results of indicate that drifters responding to the vertically integrated surface currents might be more suitable for HF radar validation than drifters without a drogue, although they caution that the results might depend on the local dynamics.
Growth of HF radar sites. Source: Coastal Observing Research and
Development Center (CORDC), available at
[Figure omitted. See PDF]
Data assimilation of ocean currents
In this section we will focus on the various applications for assimilating remotely sensed ocean velocities in regional and coastal simulations. In most of the following applications, ocean currents are mainly derived from coastal HF radar, and only two works refer to the assimilation of global currents derived from altimeter data.
In the case of coastal simulations, it is widely accepted that the main source of errors is the inadequate wind stress forcing. Assimilation of HF radar currents could improve the realism of the simulations by partially correcting the surface wind forcing. However, the available observations (HF radar, along-track altimetry and SST maps from satellites, and vertical temperature and salinity profiles from moorings, gliders and profilers) remain sparse compared with the fast, small-scale, nonlinear dynamics characteristic of coastal areas.
The first work assimilating HF radar surface data into an ocean model was done by using a nudging technique to correct the model surface current towards the HF radar estimates. Since then, and driven by the continuous expansion of the network of HF radar systems, different data assimilation approaches have been used to assimilate HF radar currents into nonlinear, high-resolution ocean models: nudging , sequential assimilation and 4DVAR assimilation schemes .
Nudging
The first work aiming to assimilate HF radar currents into a regional model of Monterey Bay (California, US) was published by . The HF radar observations, , were assimilated by adding a fictitious surface wind stress term that nudged the model solution (uppermost layer) towards the observed values: with being the water density and a drag coefficient. The assimilated data were the 30 min averaged surface currents, available every 2 h and linearly interpolated to the time step of the model. They showed that such a continuous assimilation strategy was able to steer the model currents towards the observed direction. However, significant differences remained in the velocity field even after more than 170 h of assimilation. In particular, the reconstructed velocities remained small compared with the observed ones. The authors pointed out that errors in the Doppler-retrieved currents might have been the reason and suggested that the HF data should be processed before assimilation, for example by removing the divergent component from the observed field. The same approach was used by to assimilate OSCAR currents (see Sect. ) in a basin-wide simulation of the Indian Ocean. In this work, the current measurements from three RAMA buoys were used to assess the impact of the assimilation. The authors pointed out that, although OSCAR currents do not provide an accurate representation of the meridional currents at these RAMA locations, the model performed even worse. The assimilation of OSCAR velocities reduced the deficiencies of the model at these locations (Fig. ).
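The fictitious-stress nudging can be illustrated with a one-layer toy model. The linear drag form below is an assumption made for illustration (the published scheme may use a quadratic drag law), and the drag coefficient and layer depth are ad hoc values.

```python
import numpy as np

def nudge_step(u_model, u_obs, dt, layer_depth=5.0, rho=1025.0, c_d=1e-2):
    """One time step of surface nudging through a fictitious wind stress
    tau = rho*c_d*(u_obs - u_model) acting on the uppermost layer:
    du/dt = tau/(rho*layer_depth). Linear drag and parameter values are
    illustrative assumptions, not the published configuration."""
    tau = rho*c_d*(u_obs - u_model)
    return u_model + dt*tau/(rho*layer_depth)

# repeated nudging relaxes the model current towards the observed value
u = 0.0
for _ in range(200):
    u = nudge_step(u, 1.0, dt=60.0)
```

The relaxation rate is c_d·dt/h per step, so a shallow surface layer (or a large drag coefficient) makes the model track the observations quickly, at the price of suppressing its own dynamics.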
Correspondence of the zonal velocity component measured at the RAMA station located at 1.5° N, 90.0° E. (a) Model without assimilation. (b) Result of assimilating OSCAR currents. From Fig. 1 in Santoki et al. (2013).
[Figure omitted. See PDF]
A strategy to simultaneously update the three-dimensional velocity field was used by on the New Jersey coast (US). In their application, they estimated the correlations between the surface CODAR data and the measurements provided by a moored acoustic Doppler current profiler and used them to project the surface CODAR data downward. The authors compared two methodologies to feed their three-dimensional maps into the dynamical model: continuous nudging and the intermittent melding described by . Their results indicate that intermittent corrections of the three-dimensional ocean currents allowed the model to adjust and develop more freely than continuous nudging of the model toward the observations.
The nudging scheme of used a four-dimensional nudging coefficient: where the nudging coefficient, , was a function of the distance between the observations and each model grid point. In their work, they propose an analytic form for the nudging coefficient: where is the horizontal separation between and , is the nudging length scale, is the depth of influence of the surface observation and is a damping timescale. Each observation may accelerate or decelerate a fraction of the water column, disseminating the corresponding stresses in the four-dimensional neighborhood of the observation. In their application assimilating HF radar data in the Raritan Bay and the coastal waters of New York and New Jersey, they implemented the limiting case , , (1800 s) and m. The impact of the assimilation was estimated using in situ observations of ocean currents, temperature and salinity withheld from the assimilation. They found that the vertically projected nudging was able to improve both the hindcasts and the 24 h forecasts of near-surface currents and temperature.
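Since the analytic form of the coefficient is not reproduced here, the sketch below assumes a separable Gaussian-in-distance, exponential-in-depth-and-time shape purely for illustration; all parameter names and default values are placeholders, not those of the published scheme.

```python
import numpy as np

def nudging_coefficient(r, z, t, g0=1.0, length=10e3, depth=10.0, tau=1800.0):
    """Assumed separable 4-D nudging coefficient: Gaussian decay with
    horizontal distance r (m) from the observation, exponential decay
    with depth z (m, positive downward) and with elapsed time t (s).
    The published analytic form and parameter values are not reproduced
    here; this is an illustrative stand-in."""
    return g0*np.exp(-(r/length)**2)*np.exp(-z/depth)*np.exp(-t/tau)
```

Whatever its exact form, the key property is the one the text describes: the influence of each observation is confined to a four-dimensional neighborhood set by the length scale, the depth of influence and the damping timescale.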
Sequential methods
used what they called a “quasi-ensemble” assimilation scheme derived from the ensemble Kalman filter (EnKF) introduced by to assimilate HF radar observations into a 1 km, nested, regional model of the Fedje area (Norway). The basic equations of the EnKF are as follows: In Eq. (), represents the -dimensional model state vector. In an ocean model, the state vector is usually constructed from the values of sea level, and the three-dimensional fields of temperature, salinity and horizontal currents. The superscripts and indicate the analysis and the forecast solutions, respectively. The vector represents the set of observations available at the analysis time. The observation operator, , projects the model solution to the observation space. When the observation operator is linear, it is represented by the observation matrix . The model error covariance matrix is given by . Similarly, the observation error covariance is given by . The matrix , called the Gain matrix, extrapolates the information from the observation locations to every component of the state vector. As such, Eq. () has the potential to correct the state of the whole three-dimensional system from a set of observations of the surface current. The term is known as the assimilation increment and it is used to project, to the model space, the information provided by the observations that was missing in the forecast.
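The analysis step described by these equations can be written in a few lines of NumPy for small state vectors. The toy example in the test observes only the first state component, yet the unobserved second component is also corrected through the forecast covariance, which is exactly the extrapolation role of the gain matrix described above.

```python
import numpy as np

def kalman_analysis(x_f, p_f, h, r, y):
    """Textbook Kalman analysis step with a linear observation operator H:
    K = P^f H^T (H P^f H^T + R)^-1,  x^a = x^f + K (y - H x^f)."""
    s = h @ p_f @ h.T + r                # innovation covariance
    k = p_f @ h.T @ np.linalg.inv(s)     # gain matrix
    return x_f + k @ (y - h @ x_f), k
```

Explicitly inverting the innovation covariance is only sensible when the number of observations is small; operational systems solve the corresponding linear system instead.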
Data assimilation cycle in Breivik and Sætra (2001). Surface currents are used to initialize, every hour, a 6 h prediction. In the initialization procedure, three cycles of EnOI are used to assimilate the current data available every 20 min.
[Figure omitted. See PDF]
The gain matrix given by Eq. () is said to be optimal (in the sense that it provides the most likely estimate of the system that provided the values being observed) if the system is linear and if both forecast and observation errors are Gaussian and unbiased. However, as discussed by , this is not the case when the system dynamical laws are nonlinear. Indeed, in nonlinear systems, the time evolution of Gaussian errors is no longer Gaussian, and the error covariance matrix no longer fully describes the statistical properties of the forecast errors. For nonlinear models, proposes Eq. () as a Monte Carlo estimation of the forecast error from the dispersion of an ensemble of plausible estimates of the state of the system. Specifically, let us consider an ensemble of model states, , evolving according to the nonlinear system dynamics and differing because of differences in the initial conditions, external forcing or model parameters. At any time, , the ensemble mean, , and the ensemble of anomalies, , can be easily calculated. If we define the matrix as the matrix whose columns correspond to the members of the ensemble of anomalies, the ensemble covariance is given by Eq. ().
An advantage of the EnKF is that, at each time step, we can easily calculate the projection of the state vector onto the observation space, a fact that allows the calculation of the terms and without the need for explicitly estimating the error covariance matrix (Eq. ) or the operator . This strongly reduces the computational cost associated with Eq. ().
The parameter in Eq. (), known as the inflation factor, is introduced to scale the weight of the ensemble versus the observations, to take into account the effect of the model error, and to avoid the collapse of the covariance matrix. To reduce the impact of sampling errors (i.e., the errors arising from the use of a finite ensemble) in the estimation of ensemble covariance matrices, some kind of localization is usually applied to reduce the effect of spurious covariances. An example of the pervasive effects of spurious covariances in systems with short and long scales can be found in . Covariance localization can be explicitly implemented by multiplying the empirical covariance by an analytic localization function or by performing a local analysis in which the state space is divided into a set of independent local analysis domains, limiting the influence of observations to some subset of space points or state variables . Implicit localization is obtained by truncating the eigenvalue expansion of the term in Eq. () .
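Explicit localization is typically implemented with a compactly supported correlation function multiplied element-wise (Schur product) with the ensemble covariance. A widely used choice is the Gaspari–Cohn fifth-order piecewise rational function, which equals 1 at zero separation and vanishes beyond twice the localization radius:

```python
import numpy as np

def gaspari_cohn(r, c):
    """Gaspari-Cohn compactly supported correlation function of the
    separation r (array) and localization radius c; zero beyond 2c."""
    s = np.abs(r)/c
    rho = np.zeros_like(s, dtype=float)
    m1 = s <= 1.0
    m2 = (s > 1.0) & (s <= 2.0)
    # fifth-order piecewise rational polynomials (Horner form)
    rho[m1] = (((-0.25*s[m1] + 0.5)*s[m1] + 0.625)*s[m1] - 5.0/3.0)*s[m1]**2 + 1.0
    rho[m2] = ((((s[m2]/12.0 - 0.5)*s[m2] + 0.625)*s[m2] + 5.0/3.0)*s[m2] - 5.0)*s[m2] \
              + 4.0 - 2.0/(3.0*s[m2])
    return rho
```

Given a matrix of pairwise distances, `gaspari_cohn(dist, L) * p_ens` tapers the empirical covariance, damping the spurious long-range correlations produced by a finite ensemble.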
The quasi-ensemble proposed by consisted of replacing
the ensemble of model simulations with an ensemble of model states coming
from a unique model simulation taken at different times:
A necessary condition for the ensemble Eq. () to have a meaningful
covariance Eq. () is that the collection of states defining the
ensemble is taken from a representative model simulation. The
advantage of using Eq. () is that, once the ensemble has
been constructed, the covariance remains constant, reducing the numerical
cost of the assimilation algorithm Eqs. ()–(). The
resulting algorithm has been lately known as ensemble optimal interpolation (EnOI).
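An EnOI analysis differs from the EnKF analysis only in where the anomalies come from: snapshots of a single long model run, so the covariance is static. A minimal sketch follows (matrix sizes kept small; forming the full covariance is only viable for toy problems, and operational codes work in observation space instead, as noted above):

```python
import numpy as np

def enoi_analysis(x_f, snapshots, h, r, y, inflation=1.0):
    """EnOI sketch: anomalies of snapshots taken from one long model run
    give a static covariance (never updated between cycles), followed by
    a standard analysis step. `snapshots` holds one model state per
    column, shape (n_state, n_members)."""
    a = snapshots - snapshots.mean(axis=1, keepdims=True)
    p = inflation*(a @ a.T)/(a.shape[1] - 1)   # static ensemble covariance
    s = h @ p @ h.T + r
    k = p @ h.T @ np.linalg.inv(s)
    return x_f + k @ (y - h @ x_f)
```

Because the covariance never changes, the expensive part can be precomputed once, which is what made the sub-hour forecast turnaround described below feasible.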
In , the radar data was available every 20 min, and three data assimilation cycles were used to get the initial conditions for a 6 h forecast (Fig. ). The low cost of the EnOI made it possible to have a 6 h forecast within 45 min of the data acquisition time. However, although Eq. () allows the correction of the three-dimensional hydrographical fields of the model (temperature and salinity), found that the model rapidly became unstable. The reason was the nested nature of the simulation. Without correcting the external, coarse simulation, large density gradients built up between the (free) external and the (constrained) internal simulations. Therefore, they had to leave out the cross-updates of temperature and salinity. As such, the information added by the assimilation was lost after 6 h. Years later, compared the approach of with the usual implementation of the EnKF , in an experiment assimilating hourly surface currents over the Qingdao coastal waters (China). In , the ensemble members corresponded to the difference between successive model outputs every 6 h during 1 month. Their results indicated that, although EnKF provides a better fit to independent surface currents, both EnOI and EnKF improve the simulation of the coastal surface currents.
Data assimilation cycle in Oke et al. (2002). The time-distributed averaging procedure approach used to initialize the problem at time uses all the observations in the period . In their application, the time is approximately the inertial period.
[Figure omitted. See PDF]
Another seminal implementation of the EnKF to assimilate a subset of observations from an array of CODAR SeaSonde HF radars deployed along the Oregon coast was described by . In their work, they used a stationary version of the physical-space statistical analysis system (PSAS) introduced by and a time-distributed averaging procedure (TDAP). Observations were low-pass filtered to remove the tidal signal, and the average over a full inertial period , i.e., approximately 17 h, was assimilated using an EnOI algorithm to obtain an estimate of the system at time (Fig. ). The model was then restarted at time from a pure model solution and run until . At each time step, the model solution was corrected as where refers to the time steps of the simulation. One of the advantages of the time-distributed strategy is that the model always starts from a pure model output, avoiding initialization shocks. As the assimilation increment is distributed over a quarter of the inertial period, the model dynamics can adjust to the data assimilation increment, better preserving the model dynamical balances. The results were validated using data from a moored acoustic Doppler current profiler. The authors found that, despite the presence of an unexplained bias in the results, the data assimilation increased the magnitude of the fluctuations of the model velocity field, improving the agreement with the observations (Fig. ). The authors pointed out that the assimilation of HF radar data compensated for the unrepresented signal of the wind stress forcing used in their simulation.
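In its simplest form, the time-distributed strategy reduces to feeding a fraction of the analysis increment into the model at each time step rather than applying it as one jump. A sketch with placeholder (identity) dynamics standing in for the real model:

```python
import numpy as np

def model_step(x):
    """Placeholder dynamics (identity); a real ocean model advances x."""
    return x

def tdap_forecast(x0, increment, n_steps):
    """Time-distributed increment: add increment/n_steps at each of
    n_steps model steps, so the correction is absorbed gradually and
    the run never restarts from a shocked, non-model state."""
    x = x0.copy()
    for _ in range(n_steps):
        x = model_step(x) + increment/n_steps
    return x
```

With real dynamics in `model_step`, the partial increments let the model re-balance between corrections, which is the property the authors credit for preserving the dynamical balances.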
Comparison between the alongshore wind stress and the ocean vertical averaged current during the 40-day experiment. From Fig. 10 in Oke et al. (2002).
[Figure omitted. See PDF]
assimilated low-pass filtered Monterey Bay HF radar measurements using a two-step data assimilation approach: they used an EnOI method to update the velocity field of the first layer of the model, and a second step in which the surface velocity corrections were projected downward using Ekman theory arguments of either energy conservation or momentum transfer. They illustrated the disadvantage of only correcting the surface layer, as had been done in . The simultaneous correction of the three-dimensional velocity field reduced the spurious velocity shear that occurs when only the surface layer of the model is corrected.
used an approach similar to to assimilate velocity profiles measured by a set of moorings in a regional simulation of the Oregon coast. As in , only the velocity field was updated and the other variables were allowed to evolve as a result of the dynamical adjustment. Disregarding the ensemble covariance between currents and the hydrography fields was justified by the weak correlation that existed between these variables but also because of the sampling error of the empirical correlations estimated by the EnOI. Their results showed that their EnOI algorithm was able to improve the solution of the model and to induce significant dynamical changes.
A slightly different approach was used by to assimilate 2-day averaged currents in a nested simulation of the West Florida Shelf. Only the radial HF radar component was seen by the data assimilation algorithm, and the background error covariance was used to statistically extrapolate the velocity perpendicular to the radial direction. In their work the background error covariance matrix was built from a set of model simulations differing in the wind forcing. The reference wind forcing combined the NCEP NAM (North American Mesoscale Model) with in situ wind measurements. The 6 h wind field during the year 2004 was used to calculate a set of empirical orthogonal functions (EOFs). An ensemble of 100 synthetic wind fields was created by perturbing the reference wind field with a linear combination of these EOFs with Gaussian random coefficients. The analysis step corrected both currents and hydrography. Similar to the findings of , the authors found that the forecast skill improved if a spatial filter was used to remove spurious barotropic waves from the assimilation increment and if the wind stress was included in the state vector, allowing the data assimilation to correct both the state of the ocean and the forcing term. In , a similar ensemble approach was implemented with a state vector that contained only the wind forcing of the model, i.e., . In that case, the implicit observation operator provided the corresponding upper-ocean surface current, i.e., . The rationale behind this approach was that too-frequent assimilation of observations often produces unrealistic features that, if not dissipated, degrade the model results. opted to correct the main source of the model error (the wind stress forcing) rather than the state of the ocean itself. Their results were validated against independent wind and SST observations, indicating that improvements in the amplitude of the wind stress drove the corresponding improvement in the SST.
However, in places where the SST was driven by other factors (e.g., the open boundary conditions), changes in the forcing wind had no impact. The effort of using HF radar measurements to correct (separately) the wind forcing and the open boundary conditions was made by . In both cases, although some reduction of the error was obtained for surface currents, mixed results were obtained with respect to temperature and salinity.
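The wind-ensemble construction described above (EOFs of historical wind anomalies recombined with Gaussian random coefficients) can be sketched with an SVD; the function name and the variance scaling convention here are my own assumptions.

```python
import numpy as np

def eof_perturbed_winds(wind_fields, n_members, amplitude=1.0, seed=0):
    """Ensemble of synthetic wind fields: mean field plus a random
    Gaussian linear combination of the EOFs of the historical anomalies.
    wind_fields: (n_times, n_points) snapshots. The 1/sqrt(n_times-1)
    scaling (so perturbations have roughly the historical variance at
    amplitude=1) is an illustrative choice."""
    rng = np.random.default_rng(seed)
    mean = wind_fields.mean(axis=0)
    anom = wind_fields - mean
    _, sing, vt = np.linalg.svd(anom, full_matrices=False)  # rows of vt = EOFs
    norm = np.sqrt(max(wind_fields.shape[0] - 1, 1))
    coeffs = rng.standard_normal((n_members, sing.size))*amplitude/norm
    return mean + coeffs @ (sing[:, None]*vt)
```

Perturbing the forcing rather than the state keeps every ensemble member dynamically consistent with the model, which is the attraction of this strategy for building background error covariances.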
The expected advantage of incorporating HF radar and in situ temperature and salinity observations from glider transects into the operational system used by the Australian Bureau of Meteorology was investigated by . They used the Bluelink Ocean Data Assimilation System (BODAS), an EnOI data assimilation system descendant from the pioneering work of . Using synthetic HF radar and gliders, they checked the added value that these observations would have in their operational system. They found that HF data could reduce the analysis errors by 80 %, with improvements reaching 200 km beyond the radar footprint. Moreover, as HF radar is able to detect spatial structures smaller than the ones resolved by the Global Ocean Observing System, it would also help reduce sea level errors. However, glider transects were found to have only a localized impact, probably due to the short spatial scales over the shelf region. It was thus suggested that, if a glider program was to be implemented, transects should be closely spaced (around 100 km) to resolve the mesoscale variability.
4DVAR
used a 4DVAR approach based on the Massachusetts Institute of Technology general circulation model (MITgcm) introduced by to dynamically interpolate HF radar data collected off the San Diego coast. Applications of 4DVAR algorithms always start by defining a cost function of the following type: which is a weighted average of the model–data misfit and the changes to the control variables. The control vector must be defined according to each particular application. It usually contains the initial model state (currents, temperature and salinity), the fields at the open boundaries, atmospheric forcing fields (mass and momentum) or model parameters. Note that if the initial model state is the only control variable, then the error covariance matrix should be equal to the model error covariance used in the EnKF. As such, the first term in Eq. () is a measure of the distance between the model and the observations, and the second term introduces penalties upon departures from the set of background control values . The goal of the 4DVAR is to find the optimal value of the control, , for which the cost function Eq. () reaches its minimum value. For linear and perfect systems, it has been shown that the solution that minimizes Eq. () can be written as Eqs. ()–(). See for a detailed discussion. In the 4DVAR assimilation, the cost function is minimized iteratively. At each iteration, the ocean model is run forward to calculate the value of the cost function, and its adjoint model is run backwards to obtain the gradient of the cost function with respect to the control vector, , which is used to determine a descent direction towards the minimum .
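For a linear, perfect toy system, this machinery collapses to minimizing a quadratic cost with its analytic gradient (the quantity the adjoint model supplies in a real system). A self-contained sketch, with a plain steepest-descent loop standing in for the quasi-Newton minimizers used in practice:

```python
import numpy as np

def fourdvar_cost(c, c_b, b_inv, h, r_inv, y):
    """J = 0.5 (c-c_b)^T B^-1 (c-c_b) + 0.5 (Hc-y)^T R^-1 (Hc-y),
    with the (linear) model and observation operators folded into H."""
    dc, dy = c - c_b, h @ c - y
    return 0.5*dc @ b_inv @ dc + 0.5*dy @ r_inv @ dy

def fourdvar_grad(c, c_b, b_inv, h, r_inv, y):
    """Gradient B^-1 (c-c_b) + H^T R^-1 (Hc-y); in a real 4DVAR this is
    what the backward adjoint integration provides."""
    return b_inv @ (c - c_b) + h.T @ r_inv @ (h @ c - y)

def minimize_cost(c0, grad, step=0.1, n_iter=500):
    """Toy steepest descent standing in for the iterative minimization."""
    c = c0.copy()
    for _ in range(n_iter):
        c = c - step*grad(c)
    return c

# two control variables, one observation of the first component
c_b, y = np.zeros(2), np.array([2.0])
b_inv, r_inv, h = np.eye(2), np.eye(1), np.array([[1.0, 0.0]])
c_opt = minimize_cost(c_b, lambda c: fourdvar_grad(c, c_b, b_inv, h, r_inv, y))
```

With equal background and observation weights, the optimum splits the misfit between the two penalty terms (here the observed component moves halfway to the observation, the unobserved one stays at the background), illustrating how the two terms of the cost function balance each other.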
Although not explicitly noted, the observation operator , the observation error covariance and the error covariance matrix should, in general, be functions of time, although in many applications (e.g., operational implementations) these matrices are kept constant. The specification of the error covariance matrix, , is key to the performance of the 4DVAR system, as it introduces constraints on the space of all possible control values. These matrices are usually nondiagonal in order to include geophysically balanced covariances. Finding their appropriate form remains a research issue. Because of the lack of an appropriate observing system, physical, statistical and computational constraints usually dictate their form . In particular, when control variables contain physical fields (e.g., the initial conditions), the covariance matrices are modeled using recursive filters , diffusion equations and simplified linear balance operators .
Data assimilation cycle in Hoteit et al. (2009). The pair of direct model run and adjoint model run is repeated iteratively until the predefined convergence criterion is reached. After convergence, the solution at the center of the assimilation period is used as the restart point for the next assimilation cycle, with an overlap of 5 days.
[Figure omitted. See PDF]
In , the model starts from rest and is initialized using data from a single profile of and . The model is initially forced with wind data from a single shore station and with zero heat and freshwater fluxes. The model covers the San Diego coast region (US); has open boundaries in the north, west and south; and does not include tides. The hourly HF radar velocities were then used to try to constrain the initial conditions, the open boundary conditions and the air–sea fluxes of heat, mass and momentum. The tidal components of the currents were removed using a least-squares fit to four diurnal and four semidiurnal tidal lines over a 1-year period. Their results showed that the observed surface currents could be fitted by adjusting the wind stress controls and that the resulting surface currents showed skill over persistence for about 20 h. However, they found that, without constraining the surface winds, the resulting solution was weakly sensitive to the control of initial and boundary conditions after about two inertial periods. Moreover, and similar to the findings of previous works using different data assimilation methods, they concluded that surface current observations alone were not enough to constrain the three-dimensional structure of the system.
The first implementation of a multivariate assimilation of multiple data sources, including HF radar currents, was done by in the New York Bight using the Regional Ocean Modeling System (ROMS) and its adjoint model . Their data assimilation method was an incremental strong-constraint 4DVAR that only adjusted the initial conditions, using overlapping 3-day assimilation windows whose start was advanced by 1 day at a time. Using a series of sensitivity experiments, revealed that the assimilation of HF radar currents increased the current prediction skill of the model by 1–2 days. However, assimilation of surface currents slightly degraded the prediction skill for subsurface temperature. These results indicated either deficiencies in the error covariance matrix, , used by the assimilation algorithm or deficiencies in the dynamical model itself (and its forcing), leading to an over-correction of the model initial condition. The improvement of the prediction skill for surface currents by the multidata assimilation of all the available observations was also reported by .
Data assimilation cycle in Yu et al. (2012). The data assimilation is done with the help of a linear tangent model (LTM) and its adjoint code (ALTM). The LTM is an approximation to the linearized dynamics of the ROMS model and is used both for the forecast step and to define the reference solution of the LTM. There is no overlap between the different assimilation cycles.
[Figure omitted. See PDF]
Surface salinity field (daily average) corresponding to 14 November 2010 without assimilation (a) and after assimilation (b). From Fig. 1 in Santoki et al. (2013).
[Figure omitted. See PDF]
Summary of characteristics of the different methods. The latency of altimetric maps is taken to be 3 days, which corresponds to the intermediate map generated by the SSALTO/DUACS system, although preliminary data are available within 12 h . The resolution and latency of wind-driven currents are taken from the characteristics of present scatterometer data.
Technique | Velocities | Latency | Resolution | Scale | Section
---|---|---|---|---|---
Altimetric maps | geostrophic | 3 days | 30 km | 75 km |
Wind stress | ageostrophic | 2 h | 12.5 km | 75 km |
Feature tracking | total | 4 h | 20 km | 20 km |
Heat equation | total | 4 h | 4–16 km | 4–16 km |
PV inversion | geostrophic | 4 h | 1 km | 5 km |
HF radar WERA | total | 1 h | 200 m | 10–25 km |
HF radar SeaSonde | total | 1 h | 200 m | 40–60 km |
If these techniques are combined with altimetric maps, their characteristics are those of altimetry.
The ability of the assimilation of ocean surface currents to correct the position of an SST front in a regional simulation was demonstrated by . In their experiments, they assimilated daily-averaged maps of HF-radar-derived surface currents defined on a 6 km grid. The ocean model was nested inside the 9 km grid Navy Coastal Ocean Model (NCOM) of the California Current system. Although ROMS was the ocean model used to simulate the circulation, the data assimilation used a stand-alone linear tangent model (LTM) and its exact adjoint code (ALTM). The LTM was dynamically compatible with the nonlinear model, and its reference ocean state was obtained by temporal interpolation of the ROMS trajectory, sampled every 4 h. With the data assimilation strategy shown in Fig. , they controlled the initial condition. After the minimization of the cost function, the optimized initial condition was used to provide a 6-day forecast with ROMS. The model output after 3 days was used as the first guess for the next assimilation cycle. Although the surface winds were not corrected by the assimilation, it was found that the assimilation of the HF radar data was able to improve the geometry of the SST front.
used the ROMS model and its adjoint to simultaneously assimilate hourly HF radar data in the Gulf of Naples (Italy), together with an 8-day mean SST product (merging microwave and infrared data) with a horizontal resolution of 4.4 km and daily absolute dynamic topography with horizontal resolution . The simulation domain corresponded to the Tyrrhenian Sea. The control variables of the cost function Eq. () were the initial conditions, the surface forcing and the open boundary conditions. The assimilation window was 7 days. Despite significant variability between assimilation cycles, the reconstructed circulation was able to correct the location of ocean features such as submesoscale jets near the region covered by the HF radar (Fig. ).
Finally, the work of assessed the added value of assimilating OSCAR velocity fields in a forecasting system of the Angola Basin circulation. Their baseline experiment assimilated satellite sea surface temperature and in situ profiles of temperature and salinity. Gridded sea surface height (available daily), OSCAR velocity fields (available every 5 days) and drifter velocity observations (derived from 6 h interpolated drifter positions) were subsequently assimilated. Their results indicated that drifter velocity assimilation improved Lagrangian predictability. Assimilation of OSCAR improved Lagrangian predictability as much as altimetry did, but only by half as much as the drifter assimilation. However, simultaneous assimilation of drifter and OSCAR velocities degraded the results obtained by assimilating drifter velocities alone. The main reason for the negative impact of the OSCAR data was hypothesized to be the low spatial and temporal resolution of the velocity field, combined with its large spatial coverage, which biased the assimilation toward this less accurate estimate of the surface velocity.
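Lagrangian predictability of the kind scored in these experiments is commonly measured by the separation between an observed drifter and a virtual particle advected by the model velocities. The sketch below is illustrative only: the forward-Euler advection, the analytic velocity fields and all names are assumptions, not the cited study's method.

```python
import numpy as np

R_EARTH = 6.371e6  # mean Earth radius (m)

def advect(lon0, lat0, u_fn, v_fn, hours, dt_s=3600.0):
    """Advect a virtual drifter with forward-Euler steps through a velocity
    field given by callables u_fn(lon, lat, t), v_fn(lon, lat, t) in m/s."""
    lon, lat, t = lon0, lat0, 0.0
    traj = [(lon, lat)]
    for _ in range(int(hours)):
        u, v = u_fn(lon, lat, t), v_fn(lon, lat, t)
        lat += np.degrees(v * dt_s / R_EARTH)
        lon += np.degrees(u * dt_s / (R_EARTH * np.cos(np.radians(lat))))
        t += dt_s
        traj.append((lon, lat))
    return traj

def separation_km(p, q):
    """Great-circle distance (km) between two (lon, lat) points (haversine)."""
    lon1, lat1, lon2, lat2 = map(np.radians, (*p, *q))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * R_EARTH * np.arcsin(np.sqrt(a)) / 1e3

# Example: a model current with a 0.05 m/s eastward bias relative to the
# "observed" flow accumulates a measurable separation over 3 days.
obs_u = lambda lon, lat, t: 0.20
mod_u = lambda lon, lat, t: 0.25
zero = lambda lon, lat, t: 0.0
obs = advect(10.0, -10.0, obs_u, zero, hours=72)
mod = advect(10.0, -10.0, mod_u, zero, hours=72)
print(f"72 h separation: {separation_km(obs[-1], mod[-1]):.1f} km")
```

Averaging such separations over many drifter launches, as a function of forecast lead time, yields the Lagrangian skill curves compared in the study.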
Summary
The retrieval of surface velocities remains one of the most challenging problems in oceanography, with an impact on almost all fields of the discipline. At present, the routine retrieval of ocean velocities on global scales is based on altimeter measurements of SSH, from which surface currents are derived by invoking the geostrophic approximation. This is a robust approach: it is an all-weather, global and well-understood methodology that has become the standard for oceanographic research and has had a deep impact on our vision and understanding of ocean dynamics. Moreover, the inclusion of information from wind and, more recently, waves, as well as corrections to the geostrophic approximation, provides very realistic estimates of surface ocean currents. Nevertheless, altimetry is limited by the sampling characteristics and noise level of current altimeters, which prevent the observation of structures smaller than about 75 km or close to the coast. As a consequence, a significant part of the mesoscale field cannot be observed, particularly in areas with a small Rossby radius such as the Mediterranean Sea. In addition, operational applications of altimetric maps are limited by the latency of altimetric data and by the need for both past and future data to generate the maps. Wind-driven currents derived from wind measurements, on the contrary, have very low latency and, potentially, higher spatial resolution. At present, the existence of several scatterometers provides quite good sampling, although not all points on the Earth's surface are yet covered every 6 h. It is worth mentioning that inertial currents are difficult to retrieve due to the lack of information about their phase.
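The geostrophic step at the heart of altimetric retrievals is simple to write down: on an f-plane, u = -(g/f) dSSH/dy and v = (g/f) dSSH/dx. A minimal finite-difference sketch follows; the function names and the idealized SSH field are assumptions for illustration.

```python
import numpy as np

G = 9.81           # gravitational acceleration (m s^-2)
OMEGA = 7.2921e-5  # Earth's rotation rate (rad s^-1)

def geostrophic_velocity(ssh, lat, dx, dy):
    """Surface geostrophic currents (u, v) in m/s from a gridded SSH field.

    ssh : 2-D SSH array in metres, shape (ny, nx); lat : reference latitude
    (deg) for an f-plane; dx, dy : grid spacing (m).  Implements
        u = -(g/f) dSSH/dy,   v = (g/f) dSSH/dx.
    """
    f = 2.0 * OMEGA * np.sin(np.radians(lat))    # Coriolis parameter
    deta_dy, deta_dx = np.gradient(ssh, dy, dx)  # centered differences
    return -(G / f) * deta_dy, (G / f) * deta_dx

# Idealized check: SSH sloping down to the north (slope 1e-6, i.e. 10 cm per
# 100 km) should drive a uniform eastward geostrophic jet.
ny, nx, dy, dx = 50, 50, 10e3, 10e3
y = np.arange(ny)[:, None] * dy
ssh = -1e-6 * y * np.ones((1, nx))
u, v = geostrophic_velocity(ssh, lat=40.0, dx=dx, dy=dy)
print(f"u = {u.mean():.3f} m/s eastward")
```

In real processing chains the Coriolis parameter varies with latitude and the equatorial band requires special treatment, but the balance itself is exactly this expression.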
The limitations of altimetric maps have motivated the use of sea surface temperature observations to obtain surface velocities. Standard methods (feature tracking, inversion of the heat equation) require a sequence of SST (or BT) images, which may be difficult to obtain if infrared observations are used. Furthermore, the need for high-resolution data in techniques such as the maximum cross-correlation technique, and the low quality of the resulting velocities, further limit their operational use. In recent years the surface quasi-geostrophic (SQG) framework has emerged as a potential complement to altimetric maps due to its high resolution and low latency (see Table ). This approach is able to capture ocean structures on the order of 5–10 km and at distances from the coast on the order of a few kilometers. One of its main limitations, in addition to the presence of clouds, is the requirement that SST be a proxy of interior potential vorticity. Observations and the analysis of numerical models show that this situation is typically found in winter. Nevertheless, velocities derived from SQG could have a strong potential for operational applications if expert supervision can be provided. In addition, its capability to provide surface currents close to the coast opens the door to extending the coverage of the currents provided by HF radar and provides a theoretical framework to improve assimilation schemes.
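A minimal sketch of the SQG retrieval is shown below: in spectral space the surface streamfunction satisfies psi_hat = b_hat / (N |k|), with surface buoyancy approximated from the SST anomaly as b = g * alpha * SST'. The constants (N, alpha) and the doubly periodic FFT setup are illustrative assumptions; operational implementations use an effective, regionally tuned buoyancy frequency.

```python
import numpy as np

def sqg_velocity(sst_anom, dx, N=5e-3, alpha=2e-4, g=9.81):
    """SQG surface velocities (u, v) in m/s from an SST anomaly (K).

    sst_anom : 2-D field on a square, doubly periodic grid; dx : spacing (m);
    N : buoyancy frequency (1/s); alpha : thermal expansion coeff. (1/K).
    Spectral SQG relation: psi_hat = b_hat / (N |k|), b = g*alpha*SST'.
    """
    ny, nx = sst_anom.shape
    b_hat = np.fft.fft2(g * alpha * sst_anom)        # surface buoyancy spectrum
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    K = np.hypot(KX, KY)
    K[0, 0] = 1.0                                    # avoid division by zero
    psi_hat = b_hat / (N * K)                        # SQG streamfunction
    psi_hat[0, 0] = 0.0                              # the mean carries no flow
    u = -np.real(np.fft.ifft2(1j * KY * psi_hat))    # u = -dpsi/dy
    v = np.real(np.fft.ifft2(1j * KX * psi_hat))     # v =  dpsi/dx
    return u, v

# Example: a 2 K warm, 30 km wide SST anomaly induces anticyclonic
# (clockwise, in the Northern Hemisphere) surface flow.
n, dx = 128, 2e3
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
sst = 2.0 * np.exp(-(X**2 + Y**2) / (2.0 * (30e3) ** 2))
u, v = sqg_velocity(sst, dx)
print(f"max speed: {np.hypot(u, v).max():.2f} m/s")
```

Note that f cancels in the surface relation (b = f dpsi/dz applied to the e^{N|k|z/f} vertical structure), which is why only N appears in the transfer function.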
A large effort is also being devoted to the direct measurement of ocean currents using remote sensing techniques based on measurements of the Doppler shift. Two complementary approaches are underway: the use of satellite platforms (e.g., SAR) and the use of land-based systems such as HF coastal radars. Presently, the main constraint of these systems is their limited sampling characteristics, which restrict them to case studies. Nevertheless, they do provide insight into the contribution that the assimilation of ocean currents can be expected to make to operational oceanography. Although various approaches have successfully used observations of ocean currents to partially constrain nonlinear simulations of various coastal areas, and even to improve the location of temperature fronts, it has been shown that multiple data sources need to be assimilated simultaneously to better constrain the hydrography of the system. In addition, because wind stress is a main source of error in these simulations, advanced multivariate methodologies (EnKF or 4DVAR) are needed to retrieve wind stress information from ocean currents and further increase the prediction skill of coastal operational systems.
All datasets used in this review are public and can be accessed through NASA's Ocean Color and the European Marine Copernicus initiatives. Additional altimetric data can be accessed through AVISO Altimetry.
The authors declare that they have no conflict of interest.
This article is part of the special issue “Current perspectives in modelling, monitoring, and predicting geophysical fluid dynamics”. It is not associated with a conference.
Acknowledgements
This work was supported by the European Space Agency through the GlobCurrent Data User Element project (4000109513/13/I-LG) and by the Ministry of Economy and Competitiveness, Spain, and FEDER EU through the National R&D Plan under the COSMO (CTM2016-79474-R) and PROMISES (ESP2015-67549-C3-2-R) projects. We also acknowledge support from the Office of Naval Research, grant no. N00014-16-1-2492, and from Fundación General CSIC (Programa ComFuturo). We would like to thank Ian Barton for providing the velocity field obtained through the MCC method. We are appreciative of the comments and advice provided by Graham Quartly, Bertrand Chapron and Fabrice Ardhuin, as well as Breogán Gómez and one anonymous reviewer, which helped to improve this review. The authors would also like to thank the organizing committee of the NLOA for inviting Jordi Isern-Fontanet and Joaquim Ballabrera-Poy, which led to this review. Edited by: Vicente Perez-Munuzuri Reviewed by: Breogán Gómez and one anonymous referee
© 2017. This work is published under https://creativecommons.org/licenses/by/3.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
Abstract
Ocean currents play a key role in Earth's climate: they impact almost any process taking place in the ocean and are of major importance for navigation and human activities at sea. Nevertheless, their observation and forecasting remain difficult. First, no observing system is able to provide direct measurements of global ocean currents on synoptic scales. Consequently, it has been necessary to use sea surface height and sea surface temperature measurements and refer to dynamical frameworks to derive the velocity field. Second, the assimilation of the velocity field into numerical models of ocean circulation is difficult, mainly due to the lack of data. Recent experiments assimilating coastal radar data have shown that ocean currents can increase the forecast skill of surface currents but need to be applied within multidata assimilation approaches to better identify the thermohaline structure of the ocean. In this paper we review the current knowledge in these fields and provide a global and systematic view of the technologies to retrieve ocean velocities in the upper ocean and of the available approaches to assimilate this information into ocean models.
Details
1 Institut de Ciències del Mar (CSIC), Passeig Marítim de la Barceloneta 37-49, 08003 Barcelona, Spain; Barcelona Expert Center in Remote Sensing (CSIC), Passeig Marítim de la Barceloneta 37-49, 08003 Barcelona, Spain
2 Institut de Ciències del Mar (CSIC), Passeig Marítim de la Barceloneta 37-49, 08003 Barcelona, Spain