Attiq Ahmad 1 and Muhammad Mohsin Riaz 2 and Abdul Ghafoor 1 and Tahir Zaidi 3
Academic Editor: Dean Hines
1 Military College of Signals, National University of Sciences and Technology (NUST), Islamabad 44000, Pakistan
2 Center for Advanced Studies in Telecommunication, COMSATS, Islamabad 44000, Pakistan
3 College of Electrical and Mechanical Engineering, NUST, Islamabad 44000, Pakistan
Received 30 April 2015; Revised 4 August 2015; Accepted 16 August 2015; Published 26 August 2015
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. Introduction
Visible-light astronomy, which exploits reflection, refraction, interference, and diffraction, enables scientists to unearth many of nature's secrets; however, the brightness of stars creates a haze in the sky. Infrared (IR) astronomy, on the other hand, allows us to peer through the veil of interstellar dust and see objects at extreme cosmological distances. IR images have good radiometric resolution, whereas visible images provide detailed spatial information. Various image fusion techniques have therefore been developed to combine the complementary information present in both modalities. These techniques can be grouped into wavelet-based, statistical-decomposition, and compressive-sensing approaches.
Wavelet transform based fusion schemes generally decompose the visible and IR images into different base and detail layers to combine the useful information. In [1], contourlet transform fusion is used to separate foreground and background information; however, the separation is not always accurate, which causes loss of target information. In [2], a fusion scheme based on the nonsubsampled contourlet transform, local energy, and fuzzy logic claims better subjective visual effects; however, the merging and description of the necessary components of the IR and visible images in the fusion model require improvement, especially for noisy images. In [3], a wavelet transform and fuzzy logic based scheme utilizes a dissimilarity measure to assign weights; however, some artifacts are introduced in the fused image. Contrast enhancement based fusion (using the ratio of local and global divergence of the IR image) lacks color consistency [4]. In the adaptive intensity-hue-saturation method [5], the amount of spatial detail injected into each band of the multispectral image is determined by a weighting matrix defined on the basis of the edges present in the panchromatic and multispectral bands. The scheme preserves spatial details; however, it is unable to control the spectral distortion sufficiently [6]. In [7], a gradient-domain approach based on contrast mapping projects the structure tensor matrix onto a low-dimensional gradient field; however, the scheme affects the natural output colours. In [8], a wavelet transform and segmentation based fusion scheme is developed to enhance targets in low contrast; however, the fusion performance depends on segmentation quality, and large segmentation errors can occur for cosmological images (especially when one feature is split into multiple regions).
Statistical fusion schemes split the images into multiple subspaces using different matrix decomposition techniques. A K-means and singular value decomposition based scheme suffers from high computational complexity [9]. In [10], a spatial and spectral fusion model uses sparse matrix factorization to fuse images with different spatial and spectral properties. The scheme combines the spectral information from sensors with low spatial but high spectral resolution and the spatial information from sensors with high spatial but low spectral resolution. Although the scheme produces well fused results with well-preserved spectral and spatial properties, its drawbacks include the cost of the spectral dictionary learning process and overall computational complexity. In [11], an internal generative mechanism based fusion algorithm first decomposes a source image into a coarse layer and a detail layer by simulating the mechanism by which the human visual system perceives images. The detail layer is then fused using a pulse coupled neural network, and the coarse layer is fused using the spectral residual based saliency method. The scheme is time inefficient and yields weak fusion performance. In [12], an independent component analysis based IR and visible image fusion scheme uses the kurtosis information of the independent component analysis coefficients. However, further work is required to determine fusion rules for primary features.
Compressive sensing based fusion schemes exploit the sparsity of data using different dictionaries. An adjustable compressive measurement based fusion scheme suffers from the empirical adjustment of several parameters [13]. In [14], a compressive sensing approach preserves data such as edges, lines, and contours; however, the design of an appropriate sparse transform and an optimal deterministic measurement matrix remains an issue. In [15], a compressive sensing based image fusion scheme for infrared and visible images first compresses the sensing data by random projection and then obtains sparse coefficients of the compressed samples by sparse representation. The fusion coefficients are combined with a fusion impact factor, and the fused image is reconstructed from the combined sparse coefficients. However, the scheme is inefficient and prone to noise effects. In [16], a nonnegative sparse representation based scheme is used to extract the features of the source images: salient features (including targets and contours) are detected in the IR image and texture features in the visible image. Although the scheme performs better for noisy images, the sparseness of the image is only controlled implicitly.
In a nutshell, the above-mentioned state-of-the-art fusion techniques suffer from limited accuracy, high computational complexity, or lack of robustness. To overcome these issues, an undecimated dual tree complex wavelet transform (UDTCWT) based visible/IR image fusion scheme for astronomical images is developed. The UDTCWT reduces noise effects and improves object classification due to its inherent shift invariance property. Local standard deviation along with the distance transform is used to extract the useful information (especially small objects). Simulation results illustrate the superiority of the proposed scheme in terms of accuracy for most of the cases.
2. Proposed Method
Let $I_s$, $s \in \{\mathrm{ir}, \mathrm{vis}\}$, denote the registered input IR ($I_{\mathrm{ir}}$) and visible ($I_{\mathrm{vis}}$) source images, each of dimensions $M \times N$. The local standard deviation $\sigma_s(x,y)$, which estimates the local variations of $I_s$ over a $(2w+1)\times(2w+1)$ window, is
$$\sigma_s(x,y) = \sqrt{\frac{1}{(2w+1)^2} \sum_{m=-w}^{w} \sum_{n=-w}^{w} \left( I_s(x+m, y+n) - \mu_s(x,y) \right)^2},$$
where $\mu_s(x,y)$ is the local mean image computed as
$$\mu_s(x,y) = \frac{1}{(2w+1)^2} \sum_{m=-w}^{w} \sum_{n=-w}^{w} I_s(x+m, y+n).$$
The local standard deviation measures the randomness of pixels in a local area: high values indicate the presence of astrobodies, while low values correspond to smooth/blank space (without any object or astrobody).
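As an illustration, the local mean and standard deviation images can be computed with sliding windows (a hypothetical NumPy sketch; the window radius `w` and the reflective boundary handling are assumptions, not taken from the paper):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def local_stats(img, w=1):
    """Local mean and standard deviation over a (2w+1) x (2w+1) window.

    Edges are handled by reflective padding so the output matches the
    input size (an assumption; the paper does not specify boundary rules).
    """
    k = 2 * w + 1
    padded = np.pad(img.astype(float), w, mode="reflect")
    windows = sliding_window_view(padded, (k, k))   # shape (M, N, k, k)
    mu = windows.mean(axis=(-2, -1))                # local mean image
    sigma = windows.std(axis=(-2, -1))              # local standard deviation
    return mu, sigma

# A flat region yields sigma == 0; a window containing a bright point does not.
img = np.zeros((5, 5))
img[2, 2] = 1.0                                     # a single "astrobody"
mu, sigma = local_stats(img, w=1)
```

On this toy image, `sigma` is zero over empty space and positive around the bright pixel, which is exactly the cue the method uses to separate astrobodies from blank sky.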
The image $\bar{\sigma}_s$ is obtained by thresholding $\sigma_s$ to remove pixels containing abnormally large variations, that is,
$$\bar{\sigma}_s(x,y) = \begin{cases} \sigma_s(x,y), & \sigma_s(x,y) \le \mu_{\sigma} + k\, v_{\sigma}, \\ 0, & \text{otherwise}, \end{cases}$$
where $k$ is a controlling parameter and $\mu_{\sigma}$ and $v_{\sigma}$ are the mean and variance of $\sigma_s$, respectively. The gray distance image $D_s$ (used to classify points as lying inside or outside any shape/object) is computed from $\bar{\sigma}_s$ and a mask. The distance transform (used to eliminate oversegmentation and shortsightedness) measures the overall distance of a pixel from the other bright pixels; for instance, a pixel close to a cluster of stars (objects) tends to become part of the segmented mask, and vice versa.
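The thresholding and distance steps can be sketched as follows (illustrative only: the threshold rule with parameter `k` and the use of a plain Euclidean distance to the nearest bright pixel are assumptions; the paper's gray-weighted distance transform may differ):

```python
import numpy as np

def threshold_sigma(sigma, k=1.0):
    """Zero out pixels whose local deviation exceeds mean + k * variance."""
    limit = sigma.mean() + k * sigma.var()
    return np.where(sigma <= limit, sigma, 0.0)

def distance_to_bright(mask):
    """Euclidean distance from each pixel to the nearest bright pixel.

    Brute force for clarity; assumes at least one bright pixel. A real
    pipeline would use a chamfer or gray-weighted distance transform.
    """
    ys, xs = np.nonzero(mask)
    bright = np.stack([ys, xs], axis=1).astype(float)     # (P, 2) coordinates
    yy, xx = np.mgrid[0:mask.shape[0], 0:mask.shape[1]]
    pts = np.stack([yy.ravel(), xx.ravel()], axis=1).astype(float)
    d = np.sqrt(((pts[:, None, :] - bright[None, :, :]) ** 2).sum(-1)).min(1)
    return d.reshape(mask.shape)

sig = np.array([[0.0, 1.0], [2.0, 9.0]])
t = threshold_sigma(sig, k=0.0)       # with k = 0 the 9.0 outlier is zeroed

mask = np.zeros((4, 4), dtype=bool)
mask[0, 0] = True                     # one bright pixel
D = distance_to_bright(mask)          # distance grows away from (0, 0)
```

Pixels near bright clusters get small distance values, which is what lets the later segmentation favour them.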
Let $B_s$ be the binary image obtained by thresholding the distance image $D_s$ at $\mu_D + c$, where $\mu_D$ denotes the mean of the distance image and $c$ is a positive constant. The image $B_s$ segments the foreground from the background regions. The connected-components image $C_s$ (which separates the different binary patterns) is computed from $B_s$ using a structuring element of all ones. Let $a_q$ and $p_q$ represent the area and perimeter of the $q$th connected component, respectively; a binary segmented image $S_s$ is constructed by retaining connected components according to thresholding parameters $t_a$ and $t_p$ applied to $a_q$ and $p_q$. UDTCWT is applied to the source images $I_s$ to obtain coefficient matrices $W_s$ holding the wavelet coefficients of each source. The decomposition obtained using the UDTCWT not only suppresses noise and unwanted artifacts but also preserves the useful information present in the input images (owing to its undecimated property). A binary coefficient matrix $T$ is obtained by assigning nonzero values at the locations where the visible image provides more information than the IR image, that is,
$$T = \begin{cases} 1, & |W_{\mathrm{vis}}| > |W_{\mathrm{ir}}|, \\ 0, & \text{otherwise}. \end{cases}$$
This binary thresholding ensures that the fused image contains the significant information of both source images, since larger UDTCWT coefficient magnitudes correspond to the presence of significant information. A binary fuse map $F$ is computed by combining $T$ with the segmented images $S_s$ through the logical OR operation. The fused coefficients are
$$W_F = F\, W_{\mathrm{vis}} + (1 - F)\, W_{\mathrm{ir}},$$
and the final fused image $I_F$ is obtained by computing the inverse UDTCWT of $W_F$. Figure 1 shows the flow diagram of the proposed technique.
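The connected-component analysis can be sketched with a simple breadth-first labeling (hypothetical code; the paper filters on both area and perimeter, while this sketch filters on area only for brevity):

```python
import numpy as np
from collections import deque

def label_components(binary):
    """4-connected component labeling via breadth-first search."""
    labels = np.zeros(binary.shape, dtype=int)
    current = 0
    for y, x in zip(*np.nonzero(binary)):
        if labels[y, x]:
            continue                     # already assigned to a component
        current += 1
        queue = deque([(y, x)])
        labels[y, x] = current
        while queue:
            cy, cx = queue.popleft()
            for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                if (0 <= ny < binary.shape[0] and 0 <= nx < binary.shape[1]
                        and binary[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    queue.append((ny, nx))
    return labels, current

def filter_by_area(labels, n, min_area=2):
    """Keep only components whose pixel count reaches min_area."""
    keep = np.zeros_like(labels, dtype=bool)
    for q in range(1, n + 1):
        region = labels == q
        if region.sum() >= min_area:
            keep |= region
    return keep

b = np.array([[1, 0, 0],
              [1, 0, 1],
              [0, 0, 0]], dtype=bool)
labels, n = label_components(b)          # two separate components
seg = filter_by_area(labels, n, min_area=2)
```

With `min_area=2`, the isolated pixel at (1, 2) is dropped while the two-pixel component survives, mirroring how spurious single-pixel detections are filtered out of the segmented image.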
Figure 1: Flow diagram.
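The coefficient-level fusion rule can be illustrated with a single-level undecimated base/detail split (a box filter stands in for the UDTCWT here; the actual transform, the number of levels, and the segmentation-driven fuse map are simplified away in this sketch):

```python
import numpy as np

def box_blur(img, w=1):
    """Undecimated low-pass (base) layer via a (2w+1)^2 box filter."""
    padded = np.pad(img.astype(float), w, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    k = 2 * w + 1
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / k**2

def fuse(ir, vis):
    """Choose-max fusion of detail layers; base layers are averaged."""
    base_ir, base_vis = box_blur(ir), box_blur(vis)
    det_ir, det_vis = ir - base_ir, vis - base_vis
    # Binary map: 1 where the visible detail dominates (cf. the fuse map F).
    F = np.abs(det_vis) > np.abs(det_ir)
    detail = np.where(F, det_vis, det_ir)
    return 0.5 * (base_ir + base_vis) + detail

ir = np.zeros((5, 5)); ir[2, 2] = 1.0       # hot point source seen only in IR
vis = np.zeros((5, 5)); vis[1, 1] = 0.5     # faint star seen only in visible
fused = fuse(ir, vis)
```

Both point sources survive in `fused` because each modality wins the choose-max comparison at its own bright location, which is the behaviour the binary coefficient matrix and fuse map are designed to produce.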
3. Results and Discussion
To verify the significance of the proposed technique, simulations are performed on various visible/IR datasets. Quantitative analysis is performed using Q (luminance/contrast distortion), MI (mutual information), Q_W (weighted quality index), Q_E (edge-dependent quality index), SSIM (structural similarity index measure), Q_CB (human perception inspired metric), Q_AB/F (edge transfer metric), and Q_P (image feature metric) [17-22].
The Q metric [17, 18] models image distortion as a combination of loss of correlation, luminance distortion, and contrast distortion. The gradient-based edge preservation metric [17] represents the orientation preservation and edge strength values. It models the perceptual loss of information in the fused result in terms of how well the strength and orientation values of the pixels in the source images are represented in the fused image; it addresses objective evaluation of dynamic, multisensor image fusion based on gradient information preservation between the inputs and the fused images, and it also takes into account additional scene and object motion information present in multisensor sequences. The Q_W metric [17] assigns more weight to windows where the saliency of the input image is high, corresponding to areas that are likely to be perceptually important parts of the underlying scene. The Q_E index [17] takes into account aspects of the human visual system by expressing the contribution of the edge information of the source images to the fused image. The SSIM measure [19] quantifies the similarity between two images and was designed to improve on the traditional mean square error and peak signal-to-noise ratio, which are inconsistent with human visual perception. The Q_CB metric [20] evaluates fusion performance for night vision applications using a perceptual quality evaluation method based on human visual system models; the quality of the fused image is assessed by a contrast sensitivity function and a contrast preservation map. The Q_AB/F metric [21] assesses pixel-level fusion performance and reflects the quality of the visual information obtained from the fusion of the input images.
The Q_P metric [22] evaluates the performance of combinative pixel-level image fusion based on an image feature measurement (phase congruency and its moments) and provides an absolute measurement of image features. By comparing the local cross-correlation of the corresponding feature maps of the input images and the fused output, the quality of the fused result is assessed without a reference image.
These quality metrics [17-22] work well for noisy, blurred, and distorted images produced by multiscale-transform, arithmetic, statistical, and compressive sensing based schemes in multiexposure, multiresolution, and multimodal environments. They are also useful for remote and airborne sensing, military, and industrial engineering applications. Each measure is normalized between 0 and 1, where higher values imply better fusion quality.
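Wang and Bovik's universal image quality index [18], which underlies several of the metrics above, can be sketched as follows (a global, whole-image version for illustration; the fusion metrics apply it over local sliding windows and combine the per-window scores):

```python
import numpy as np

def uiqi(a, b):
    """Universal image quality index: correlation x luminance x contrast.

    Equals 1.0 only for identical images (global version; the windowed
    variant used by fusion metrics averages this over local blocks).
    """
    a, b = a.astype(float).ravel(), b.astype(float).ravel()
    ma, mb = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - ma) * (b - mb)).mean()
    # Combined form: 4 * cov * ma * mb / ((va + vb) * (ma^2 + mb^2))
    return 4 * cov * ma * mb / ((va + vb) * (ma**2 + mb**2))

x = np.array([[1.0, 2.0], [3.0, 4.0]])
q_same = uiqi(x, x)              # identical images give 1.0
q_diff = uiqi(x, x[::-1, ::-1])  # anti-correlated images score much lower
```

Note the denominator vanishes for two constant images; practical implementations add a small stabilizing constant, as SSIM [19] later did.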
Figure 2(a) shows the Andromeda galaxy (M31) JPEG IR image taken by the Spitzer space telescope [23], while Figure 2(b) shows the corresponding visible image taken using a 12.5″ Ritchey-Chretien Cassegrain (at F6) and an ST10XME camera [24]. Figures 2(c)-2(f) show the outputs of the local variance, distance transform, and segmentation steps.
Figure 2: Andromeda galaxy (M31): (a) visible image, (b) IR image, (c) local variance image, (d) distance image, (e) IR segmented image, and (f) visual segmented image.
Figure 3 shows the fusion results obtained by ratio pyramid (RP) [25], dual tree complex wavelet transform (DTCWT) [26], nonsubsampled contourlet transform (NSCT) [27], multiresolution singular value decomposition (MSVD) [28], Ellmauthaler et al. [8], and the proposed scheme. Visual comparison shows that the proposed scheme provides better fusion results than existing state-of-the-art schemes, especially in preserving background intensity values.
Figure 3: Andromeda galaxy (M31): (a) RP [25] fusion, (b) DTCWT [26] fusion, (c) NSCT [27] fusion, (d) MSVD [28] fusion, (e) Ellmauthaler et al. [8] fusion, and (f) proposed fusion.
Figures 4(a) and 4(b) show visible and IR JPEG images of Jupiter's moon taken by the New Horizons spacecraft using the Multispectral Visible Imaging Camera and the Linear Etalon Imaging Spectral Array [29]. The fusion results obtained by RP [25], DTCWT [26], NSCT [27], MSVD [28], Ellmauthaler et al. [8], and the proposed scheme are shown in Figures 4(c)-4(h), respectively. Note that only the proposed scheme accurately preserves both the moon's texture (from the IR image) and the other stars (from the visible image) in the fused image.
Figure 4: Jupiter's moon: (a) IR image, (b) visible image, (c) RP [25] fusion, (d) DTCWT [26] fusion, (e) NSCT [27] fusion, (f) MSVD [28] fusion, (g) Ellmauthaler et al. [8] fusion, and (h) proposed fusion.
Figures 5(a) and 5(b) show visible and IR JPEG images of the Nebula (M16) taken by the Hubble space telescope [30]. The fusion results obtained by RP [25], DTCWT [26], NSCT [27], MSVD [28], Ellmauthaler et al. [8], and the proposed scheme are shown in Figures 5(c)-5(h), respectively. The fused image produced by the proposed scheme highlights the IR information more accurately than existing state-of-the-art schemes.
Figure 5: Nebula (M16): (a) IR image, (b) visible image, (c) RP [25] fusion, (d) DTCWT [26] fusion, (e) NSCT [27] fusion, (f) MSVD [28] fusion, (g) Ellmauthaler et al. [8] fusion, and (h) proposed fusion.
Table 1 shows the quantitative comparison of the existing and proposed schemes (bold values indicate the best results). The results obtained using the proposed scheme are significantly better in most of the cases/measures compared with existing state-of-the-art schemes.
Table 1: Quantitative comparison.
Dataset | Technique | Q | MI | Q_W | Q_E | SSIM | Q_CB | Q_AB/F | Q_P
Andromeda galaxy (M31) | Proposed | 0.8220 | 0.8319 | 0.7707 | 0.4804 | 0.6461 | 0.3487 | 0.6612 | 0.4615
 | Ellmauthaler et al. [8] | 0.7179 | 0.8444 | 0.7345 | 0.3647 | 0.6350 | 0.2229 | 0.5610 | 0.3387
 | MSVD [28] | 0.6452 | 0.6946 | 0.6243 | 0.4148 | 0.4535 | 0.4432 | 0.0045 | 0.2675
 | NSCT [27] | 0.6259 | 0.6003 | 0.5576 | 0.2641 | 0.4606 | 0.1777 | 0.3432 | 0.2689
 | DTCWT [26] | 0.7085 | 0.8134 | 0.6573 | 0.3113 | 0.5436 | 0.1682 | 0.2682 | 0.2290
 | RP [25] | 0.4706 | 0.5416 | 0.5102 | 0.2514 | 0.3970 | 0.5282 | 0.2075 | 0.2026

Jupiter's moon | Proposed | 0.7927 | 0.7255 | 0.7814 | 0.4622 | 0.7566 | 0.3433 | 0.6725 | 0.5617
 | Ellmauthaler et al. [8] | 0.2832 | 0.6477 | 0.6343 | 0.4230 | 0.7398 | 0.1768 | 0.5672 | 0.5614
 | MSVD [28] | 0.4780 | 0.4970 | 0.5217 | 0.5243 | 0.5292 | 0.4599 | 0.0065 | 0.4923
 | NSCT [27] | 0.4155 | 0.4083 | 0.4631 | 0.5279 | 0.5212 | 0.2672 | 0.5001 | 0.6139
 | DTCWT [26] | 0.4571 | 0.5932 | 0.5851 | 0.3476 | 0.5973 | 0.1805 | 0.4785 | 0.4844
 | RP [25] | 0.3467 | 0.3989 | 0.4022 | 0.4749 | 0.3919 | 0.4825 | 0.0183 | 0.3634

Nebula (M16) | Proposed | 0.7399 | 0.4896 | 0.8461 | 0.8587 | 0.8645 | 0.7646 | 0.7553 | 0.5652
 | Ellmauthaler et al. [8] | 0.7318 | 0.4230 | 0.8446 | 0.8494 | 0.8563 | 0.5736 | 0.5424 | 0.5494
 | MSVD [28] | 0.4918 | 0.4120 | 0.6466 | 0.6704 | 0.6539 | 0.5598 | 0.0051 | 0.4331
 | NSCT [27] | 0.5204 | 0.2980 | 0.6037 | 0.5899 | 0.6013 | 0.4916 | 0.3953 | 0.5058
 | DTCWT [26] | 0.6736 | 0.3608 | 0.8023 | 0.8225 | 0.8125 | 0.4477 | 0.4631 | 0.5079
 | RP [25] | 0.5885 | 0.3099 | 0.6834 | 0.6818 | 0.6934 | 0.6597 | 0.1708 | 0.4639
4. Conclusion
A fusion scheme for astronomical visible/IR images based on UDTCWT, local standard deviation, and the distance transform is proposed. The UDTCWT helps retain the useful details of the image. The local standard deviation measures the presence or absence of small objects, while the distance transform incorporates proximity into the segmentation process and eliminates oversegmentation as well as shortsightedness. The scheme reduces noise artifacts and efficiently extracts useful information (especially small objects). Simulation results on different visible/IR images verify the effectiveness of the proposed scheme.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
[1] L. Kun, G. Lei, L. Huihui, C. Jingsong, "Fusion of infrared and visible light images based on region segmentation," Chinese Journal of Aeronautics , vol. 22, no. 1, pp. 75-80, 2009.
[2] Z. Fu, X. Dai, Y. Li, H. Wu, X. Wang, "An improved visible and Infrared image fusion based on local energy and fuzzy logic," in Proceedings of the 12th IEEE International Conference on Signal Processing, pp. 861-865, October 2014.
[3] J. Saeedi, K. Faez, "Infrared and visible image fusion using fuzzy logic and population-based optimization," Applied Soft Computing , vol. 12, no. 3, pp. 1041-1054, 2012.
[4] S. Yin, L. Cao, Y. Ling, G. Jin, "One color contrast enhanced infrared and visible image fusion method," Infrared Physics and Technology , vol. 53, no. 2, pp. 146-150, 2010.
[5] Y. Leung, J. Liu, J. Zhang, "An improved adaptive intensity-hue-saturation method for the fusion of remote sensing images," IEEE Geoscience and Remote Sensing Letters , vol. 11, no. 5, pp. 985-989, 2014.
[6] B. Chen, B. Xu, "A unified spatial-spectral-temporal fusion model using Landsat and MODIS imagery," in Proceedings of the 3rd International Workshop on Earth Observation and Remote Sensing Applications (EORSA '14), pp. 256-260, IEEE, Changsha, China, June 2014.
[7] D. Connah, M. S. Drew, G. D. Finlayson, Spectral Edge Image Fusion: Theory and Applications, Springer, 2014.
[8] A. Ellmauthaler, E. A. B. da Silva, C. L. Pagliari, S. R. Neves, "Infrared-visible image fusion using the undecimated wavelet transform with spectral factorization and target extraction," in Proceedings of the 19th IEEE International Conference on Image Processing (ICIP '12), pp. 2661-2664, October 2012.
[9] M. Ding, L. Wei, B. Wang, "Research on fusion method for infrared and visible images via compressive sensing," Infrared Physics and Technology , vol. 57, pp. 56-67, 2013.
[10] B. Huang, H. Song, H. Cui, J. Peng, Z. Xu, "Spatial and spectral image fusion using sparse matrix factorization," IEEE Transactions on Geoscience and Remote Sensing , vol. 52, no. 3, pp. 1693-1704, 2014.
[11] X. Zhang, X. Li, Y. Feng, H. Zhao, Z. Liu, "Image fusion with internal generative mechanism," Expert Systems with Applications , vol. 42, no. 5, pp. 2382-2391, 2015.
[12] Y. Lu, F. Wang, X. Luo, F. Liu, "Novel infrared and visible image fusion method based on independent component analysis," Frontiers in Computer Science , vol. 8, no. 2, pp. 243-254, 2014.
[13] A. Jameel, A. Ghafoor, M. M. Riaz, "Adaptive compressive fusion for visible/IR sensors," IEEE Sensors Journal , vol. 14, no. 7, pp. 2230-2231, 2014.
[14] Z. Liu, H. Yin, B. Fang, Y. Chai, "A novel fusion scheme for visible and infrared images based on compressive sensing," Optics Communications , vol. 335, pp. 168-177, 2015.
[15] R. Wang, L. Du, "Infrared and visible image fusion based on random projection and sparse representation," International Journal of Remote Sensing , vol. 35, no. 5, pp. 1640-1652, 2014.
[16] J. Wang, J. Peng, X. Feng, G. He, J. Fan, "Fusion method for infrared and visible images by using non-negative sparse representation," Infrared Physics and Technology , vol. 67, pp. 477-489, 2014.
[17] G. Piella, H. Heijmans, "A new quality metric for image fusion," in Proceedings of the IEEE Conference on Image Processing Conference, pp. 171-173, September 2003.
[18] Z. Wang, A. C. Bovik, "A universal image quality index," IEEE Signal Processing Letters , vol. 9, no. 3, pp. 81-84, 2002.
[19] Z. Wang, A. C. Bovik, H. R. Sheikh, E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing , vol. 13, no. 4, pp. 600-612, 2004.
[20] Y. Chen, R. S. Blum, "A new automated quality assessment algorithm for image fusion," Image and Vision Computing , vol. 27, no. 10, pp. 1421-1432, 2009.
[21] C. S. Xydeas, V. Petrovic, "Objective image fusion performance measure," IET Electronics Letters , vol. 36, no. 4, pp. 308-309, 2000.
[22] J. Zhao, R. Laganière, Z. Liu, "Performance assessment of combinative pixel-level image fusion based on an absolute feature measurement," International Journal of Innovative Computing, Information and Control , vol. 3, no. 6, pp. 1433-1447, 2007.
[23] Andromeda galaxy (M31) IR, http://sci.esa.int/herschel/48182-multiwavelength-images-of-the-andromeda-galaxy-m31/
[24] Andromeda galaxy (M31) visible, http://www.robgendlerastropics.com/M31Page.html
[25] A. Toet, "Image fusion by a ratio of low-pass pyramid," Pattern Recognition Letters , vol. 9, no. 4, pp. 245-253, 1989.
[26] J. J. Lewis, R. J. O'Callaghan, S. G. Nikolov, D. R. Bull, N. Canagarajah, "Pixel- and region-based image fusion with complex wavelets," Information Fusion , vol. 8, no. 2, pp. 119-130, 2007.
[27] Q. Zhang, B.-L. Guo, "Multifocus image fusion using the nonsubsampled contourlet transform," Signal Processing , vol. 89, no. 7, pp. 1334-1346, 2009.
[28] V. P. S. Naidu, "Image fusion technique using multi-resolution singular value decomposition," Defence Science Journal , vol. 61, no. 5, pp. 479-484, 2011.
[29] Jupiter's moon, http://www.technology.org/2014/12/03/plutos-closeup-will-awesome-based-jupiter-pics-new-horizons-spacecraft
[30] Nebula (M16), http://webbtelescope.org/webb_telescope/technology_at_the_extremes/keep_it_cold.php
Copyright © 2015 Attiq Ahmad et al.
Abstract
An undecimated dual tree complex wavelet transform (UDTCWT) based fusion scheme for astronomical visible/IR images is developed. The UDTCWT reduces noise effects and improves object classification due to its inherent shift invariance property. Local standard deviation and distance transforms are used to extract useful information (especially small objects). Simulation results, compared with state-of-the-art fusion techniques, illustrate the superiority of the proposed scheme in terms of accuracy for most of the cases.