Chun-Mei Li 1 and Shu-Qian Shen 2
Academic Editor: Shi-Liang Wu
1 Guilin University of Electronic Technology, Guilin, Guangxi 541004, China
2 College of Science, China University of Petroleum, Qingdao, Shandong 266580, China
Received 14 June 2014; Accepted 10 August 2014; Published 31 August 2014
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. Introduction
Consider the following nonlinear matrix equation:
\[
X^{2} - A = 0, \tag{1}
\]
where \(A\) is an \(n \times n\) nonsingular complex matrix. A solution \(X\) of (1) is called a square root of \(A\). Matrix square roots have many applications, for example in boundary value problems [1] and in the computation of the matrix logarithm [2, 3].
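As a quick numerical sanity check of the notion, the sketch below uses SciPy's general-purpose sqrtm (a Schur-based method in the spirit of [7]), not the algorithms developed in this paper; the test matrix is an arbitrary illustrative choice.

```python
# Verify numerically that X = sqrtm(A) satisfies (1): X^2 = A.
import numpy as np
from scipy.linalg import sqrtm

A = np.array([[4.0, 1.0],
              [0.0, 9.0]])   # nonsingular, so a square root exists [4]
X = sqrtm(A)
print(np.linalg.norm(X @ X - A) / np.linalg.norm(A))  # relative residual, ~1e-16
```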
In the last few years there has been steadily increasing interest in the theory and numerical methods for matrix square roots. Results on the existence and uniqueness of the matrix square root can be found in [4-6]. It is worth pointing out that every nonsingular matrix has a square root, and the square root is itself nonsingular [4]. A number of methods have been proposed for computing the square root of a matrix [5, 7-16]. They fall broadly into two classes. The first class consists of the so-called direct methods, for example the Schur algorithm developed by Björck and Hammarling [7]. The second class consists of iterative methods: matrix iterations \(X_{k+1} = f(X_k)\), where \(f\) is a polynomial or rational function, are attractive alternatives for computing square roots [9, 11-13, 15, 17]. A well-known iterative method for computing the matrix square root is Newton's method, which has nice numerical behavior, in particular quadratic convergence. Newton's method for solving (1) was proposed in [18]; later, some simplified Newton's methods were developed in [11, 19, 20]. Unfortunately, these simplified Newton's methods have poor numerical stability.
In this paper, we propose two new algorithms with good numerical stability for computing the nonsingular square root of a matrix. In particular, we apply the Samanskii technique proposed in [21] to accelerate the computation. Convergence theorems and a stability analysis for the new algorithms are given in Sections 3 and 4. In Section 5, numerical examples show that the new algorithms are more effective than the known ones in some respects. Conclusions are given in Section 6.
2. Two New Algorithms
In order to compute the square root of a matrix \(A\), a natural approach is to apply Newton's method to (1). This can be stated as follows.
Algorithm 1 (Newton's method for (1); see [11, 19]).
Step 0. Given \(X_0\) and \(\varepsilon > 0\), set \(k = 0\).
Step 1. Compute \(\mathrm{Res}(X_k) = \|X_k^2 - A\| / \|A\|\). If \(\mathrm{Res}(X_k) < \varepsilon\), stop.
Step 2. Solve for \(H_k\) in the Sylvester equation
\[
X_k H_k + H_k X_k = A - X_k^{2}. \tag{2}
\]
Step 3. Update \(X_{k+1} = X_k + H_k\), set \(k = k + 1\), and go to Step 1.
Applying the standard local convergence theorem [19, p. 148] to Algorithm 1, we deduce that the sequence \(\{X_k\}\) generated by Algorithm 1 converges quadratically to a square root \(X^*\) of \(A\), provided the starting matrix \(X_0\) is sufficiently close to \(X^*\).
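A minimal Python sketch of Algorithm 1 is given below, assuming SciPy's solve_sylvester for Step 2; the test matrix, starting matrix, and tolerance are illustrative choices, not taken from the paper.

```python
# Newton's method for X^2 = A (Algorithm 1): each step solves the Sylvester
# equation X_k H + H X_k = A - X_k^2 and updates X_{k+1} = X_k + H.
import numpy as np
from scipy.linalg import solve_sylvester

def newton_sqrt(A, X0, eps=1e-14, maxit=50):
    X = X0.copy()
    for _ in range(maxit):
        if np.linalg.norm(X @ X - A) / np.linalg.norm(A) < eps:  # Res(X_k)
            break
        H = solve_sylvester(X, X, A - X @ X)   # Step 2
        X = X + H                              # Step 3
    return X

A = np.array([[4.0, 1.0], [0.0, 9.0]])
X = newton_sqrt(A, X0=0.3 * np.eye(2))
print(np.linalg.norm(X @ X - A) / np.linalg.norm(A))
```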
In this paper, we compute the nonsingular square root of the matrix \(A\) by a different route. If (1) has a nonsingular solution \(X\), then (1) can be transformed into the equivalent nonlinear matrix equation
\[
G(X) = X - AX^{-1} = 0. \tag{3}
\]
We then apply Newton's method to (3) to compute the nonsingular square root of \(A\).
By the definition of F-differentiability and some simple calculations, we obtain that if the matrix \(X\) is nonsingular, then the mapping \(G\) is F-differentiable at \(X\); indeed, using the first order expansion \((X+H)^{-1} = X^{-1} - X^{-1} H X^{-1} + O(\|H\|^{2})\),
\[
G(X+H) - G(X) = H - A\bigl[(X+H)^{-1} - X^{-1}\bigr] = H + AX^{-1} H X^{-1} + O(\|H\|^{2}),
\]
so that
\[
G'_X(H) = H + AX^{-1} H X^{-1}. \tag{4}
\]
Thus Newton's method for (3) can be written as
\[
X_{k+1} = X_k - \left(G'_{X_k}\right)^{-1} G(X_k), \qquad k = 0, 1, 2, \ldots. \tag{5}
\]
Combining (4), iteration (5) is equivalent to the following.
Algorithm 2 (Newton's method for (3)).
Step 0. Given \(X_0\) and \(\varepsilon > 0\), set \(k = 0\).
Step 1. Compute \(\mathrm{Res}(X_k) = \|X_k^2 - A\| / \|A\|\). If \(\mathrm{Res}(X_k) < \varepsilon\), stop.
Step 2. Solve for \(H_k\) in the generalized Sylvester equation
\[
H_k + A X_k^{-1} H_k X_k^{-1} = A X_k^{-1} - X_k. \tag{6}
\]
Step 3. Update \(X_{k+1} = X_k + H_k\), set \(k = k + 1\), and go to Step 1.
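A minimal sketch of Algorithm 2 follows. The generalized Sylvester equation (6) is solved here by Kronecker vectorization purely to keep the sketch self-contained; this costs \(O(n^6)\) flops and is only sensible for small \(n\), whereas a practical code would use the algorithms of [22].

```python
# Newton's method for (3): solve H + (A X^{-1}) H X^{-1} = A X^{-1} - X via
# vectorization, using vec(C H D) = (D^T kron C) vec(H) for column-major vec.
import numpy as np

def newton_sqrt_v2(A, X0, eps=1e-14, maxit=50):
    n = A.shape[0]
    X = X0.copy()
    for _ in range(maxit):
        if np.linalg.norm(X @ X - A) / np.linalg.norm(A) < eps:
            break
        Xinv = np.linalg.inv(X)
        B = A @ Xinv                                  # B = A X_k^{-1}
        K = np.eye(n * n) + np.kron(Xinv.T, B)        # matrix of (6)
        h = np.linalg.solve(K, (B - X).flatten(order="F"))
        X = X + h.reshape((n, n), order="F")          # Step 3
    return X

A = np.array([[4.0, 1.0], [0.0, 9.0]])
X = newton_sqrt_v2(A, X0=0.3 * np.eye(2))
print(np.linalg.norm(X @ X - A) / np.linalg.norm(A))
```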
Applying the Samanskii technique [21] to Newton's method (5), we obtain the following algorithm.
Algorithm 3 (Newton's method for (3) with the Samanskii technique).
Step 0. Given \(X_0\), \(m\), and \(\varepsilon > 0\), set \(k = 0\).
Step 1. Compute \(\mathrm{Res}(X_k) = \|X_k^2 - A\| / \|A\|\). If \(\mathrm{Res}(X_k) < \varepsilon\), stop.
Step 2. Set \(X_{k,0} = X_k\) and \(i = 1\).
Step 3. If \(i > m\), go to Step 6.
Step 4. Solve for \(H_{k,i-1}\) in the generalized Sylvester equation
\[
H_{k,i-1} + A X_k^{-1} H_{k,i-1} X_k^{-1} = A X_{k,i-1}^{-1} - X_{k,i-1}. \tag{7}
\]
Step 5. Update \(X_{k,i} = X_{k,i-1} + H_{k,i-1}\), set \(i = i + 1\), and go to Step 3.
Step 6. Update \(X_{k+1} = X_{k,m}\), set \(k = k + 1\), and go to Step 1.
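A sketch of Algorithm 3, under the same Kronecker-vectorization assumption as the previous one. The point of the Samanskii technique is visible in the code: the coefficient matrix K is built from \(X_k\) once per outer step and reused for all m inner corrections (a production code would also factorize K once rather than calling solve repeatedly).

```python
# Algorithm 3: Newton's method for (3) with m inner corrections per outer
# step, all using the derivative frozen at X_k as in (7).
import numpy as np

def newton_sqrt_samanskii(A, X0, m=2, eps=1e-14, maxit=50):
    n = A.shape[0]
    X = X0.copy()
    for _ in range(maxit):
        if np.linalg.norm(X @ X - A) / np.linalg.norm(A) < eps:
            break
        Xinv = np.linalg.inv(X)
        K = np.eye(n * n) + np.kron(Xinv.T, A @ Xinv)  # frozen at X_k
        Y = X.copy()                                   # Y = X_{k,i-1}
        for _ in range(m):                             # Steps 3-5
            rhs = (A @ np.linalg.inv(Y) - Y).flatten(order="F")
            Y = Y + np.linalg.solve(K, rhs).reshape((n, n), order="F")
        X = Y                                          # Step 6
    return X

A = np.array([[4.0, 1.0], [0.0, 9.0]])
X = newton_sqrt_samanskii(A, X0=0.3 * np.eye(2), m=2)
print(np.linalg.norm(X @ X - A) / np.linalg.norm(A))
```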
Remark 4.
In this paper, we only consider the case \(m = 2\). If \(m = 1\), then Algorithm 3 reduces to Algorithm 2.
Remark 5.
Iteration (5) is more convenient for theoretical analysis, such as the convergence theorems and stability analysis in Sections 3 and 4, while Algorithms 2 and 3 are more convenient for the numerical computations in Section 5. In actual computations, a generalized Sylvester equation \(CXD + EXF = G\) may be solved by the algorithms developed in [22].
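For completeness, here is a small reference routine for the general equation \(CXD + EXF = G\), again via Kronecker vectorization; the function name solve_cxd_exf is ours, and the \(O(n^6)\) cost makes this a fallback for small problems only, with [22] the practical alternative.

```python
import numpy as np

def solve_cxd_exf(C, D, E, F, G):
    """Solve C X D + E X F = G for X by forming the n^2 x n^2 system."""
    n, p = G.shape
    K = np.kron(D.T, C) + np.kron(F.T, E)   # column-major vec convention
    x = np.linalg.solve(K, G.flatten(order="F"))
    return x.reshape((n, p), order="F")

# Self-check against a random instance (K is generically nonsingular here).
rng = np.random.default_rng(0)
C, D, E, F = (rng.standard_normal((3, 3)) for _ in range(4))
X_true = rng.standard_normal((3, 3))
X = solve_cxd_exf(C, D, E, F, C @ X_true @ D + E @ X_true @ F)
print(np.linalg.norm(X - X_true))  # small when K is well conditioned
```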
Although Algorithms 2 and 3 are also Newton-type methods, they are more effective than Algorithm 1; in particular, Algorithm 3 with \(m = 2\) has a cubic convergence rate.
3. Convergence Theorems
In this section, we establish local convergence theorems for Algorithms 2 and 3. We begin with some lemmas.
Lemma 6 (see [23, p. 21]).
Let \(T\) be a (nonlinear) operator from a Banach space \(E\) into itself and let \(x^* \in E\) be a solution of \(x = Tx\). If \(T\) is Fréchet differentiable at \(x^*\) with \(\rho(T'_{x^*}) < 1\), then the iteration \(x_{n+1} = Tx_n\), \(n = 0, 1, 2, \ldots\), converges to \(x^*\), provided that \(x_0\) is sufficiently close to \(x^*\).
Lemma 7 (see [17, p. 45]).
Let \(A, B \in \mathbb{C}^{n \times n}\) and assume that \(A\) is invertible with \(\|A^{-1}\| \le \alpha\). If \(\|A - B\| \le \beta\) and \(\alpha\beta < 1\), then \(B\) is also invertible, and
\[
\|B^{-1}\| \le \frac{\alpha}{1 - \alpha\beta}. \tag{8}
\]
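Lemma 7 is the standard Banach perturbation lemma; for the reader's convenience, the one-line argument behind (8): since \(\|A^{-1}(A - B)\| \le \alpha\beta < 1\), the Neumann series gives
\[
B^{-1} = \bigl(I - A^{-1}(A - B)\bigr)^{-1} A^{-1} = \sum_{j=0}^{\infty} \bigl(A^{-1}(A - B)\bigr)^{j} A^{-1},
\qquad
\|B^{-1}\| \le \sum_{j=0}^{\infty} (\alpha\beta)^{j}\, \alpha = \frac{\alpha}{1 - \alpha\beta}.
\]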
Lemma 8.
If the matrix \(\hat{X} \in \mathbb{C}^{n \times n}\) is nonsingular, then there exist \(\gamma > 0\) and \(L > 0\) such that, for all \(X, Y \in B(\hat{X}, \gamma)\),
\[
\|G'_X - G'_Y\| \le L \|X - Y\|, \tag{9}
\]
where \(B(\hat{X}, \gamma) = \{X : \|X - \hat{X}\| < \gamma\}\) and \(G'_X\), \(G'_Y\) are the F-derivatives (4) of the mapping \(G\) at \(X\) and \(Y\).
Proof.
Let \(\alpha = \|\hat{X}^{-1}\|\) and select \(0 < \gamma < \alpha^{-1}\).
From Lemma 7 it follows that every \(X \in B(\hat{X}, \gamma)\) is nonsingular with \(\|X^{-1}\| \le \alpha/(1 - \alpha\gamma)\); hence \(G'_X\) is well defined, and so is \(G'_Y\) for \(Y \in B(\hat{X}, \gamma)\). According to (4), for any \(H \in \mathbb{C}^{n \times n}\),
\[
\|G'_X(H) - G'_Y(H)\| = \|AX^{-1}HX^{-1} - AY^{-1}HY^{-1}\| \le \|A\| \bigl( \|X^{-1}\| + \|Y^{-1}\| \bigr) \|X^{-1} - Y^{-1}\| \, \|H\| \le L \|X - Y\| \, \|H\|, \tag{10}
\]
where \(L = 2 \bigl( \alpha/(1 - \alpha\gamma) \bigr)^{3} \|A\|\), using \(\|X^{-1} - Y^{-1}\| = \|X^{-1}(Y - X)Y^{-1}\| \le \bigl( \alpha/(1 - \alpha\gamma) \bigr)^{2} \|X - Y\|\).
Hence we have
\[
\|G'_X - G'_Y\| \le L \|X - Y\|. \tag{11}
\]
Theorem 9.
If (3) has a nonsingular solution \(X^*\) and the mapping \(G'_{X^*} : \mathbb{C}^{n \times n} \rightarrow \mathbb{C}^{n \times n}\) is invertible, then there exists a closed ball \(S = \bar{B}(X^*, \delta)\) such that, for all \(X_0 \in S\), the sequence \(\{X_k\}\) generated by Algorithm 2 converges at least quadratically to the solution \(X^*\).
Proof.
Let \(\varphi(X) = X - (G'_X)^{-1} G(X)\). By the Taylor formula in Banach spaces [24, p. 67], and since \(G(X^*) = 0\),
\[
\varphi(X) - X^* = (G'_X)^{-1} \bigl[ G'_X (X - X^*) - \bigl( G(X) - G(X^*) \bigr) \bigr] = o(\|X - X^*\|). \tag{12}
\]
Hence the F-derivative of \(\varphi\) at \(X^*\) is \(0\). By Lemma 6, the sequence \(\{X_k\}\) generated by iteration (5), that is, by Algorithm 2, converges to \(X^*\).
Let \(\|(G'_{X^*})^{-1}\| = \beta\). Since \(X_k \rightarrow X^*\) as \(k \rightarrow \infty\), Lemma 7 yields, for large enough \(k\),
\[
\|(G'_{X_k})^{-1}\| \le \frac{\beta}{1 - \beta L \|X_k - X^*\|} \le 2\beta. \tag{13}
\]
By Lemma 8, we have
\[
\|G'_{X_k} - G'_{X^*}\| \le L \|X_k - X^*\|. \tag{14}
\]
By making use of the Taylor formula once again, for all \(t \in [0, 1]\), we have
\[
G(X^*) = G(X_k) + G'_{X_k}(X^* - X_k) + \int_0^1 \bigl[ G'_{X_k + t(X^* - X_k)} - G'_{X_k} \bigr](X^* - X_k)\, dt. \tag{15}
\]
Hence,
\[
\|G(X_k) + G'_{X_k}(X^* - X_k)\| \le \int_0^1 L t \|X^* - X_k\|^2\, dt = \frac{L}{2} \|X_k - X^*\|^2. \tag{16}
\]
Combining (13)-(16), we have
\[
\|X_{k+1} - X^*\| = \bigl\| (G'_{X_k})^{-1} \bigl[ G'_{X_k}(X_k - X^*) - G(X_k) \bigr] \bigr\| \le \beta L \|X_k - X^*\|^2, \tag{17}
\]
which implies that the sequence \(\{X_k\}\) generated by Algorithm 2 converges at least quadratically to the solution \(X^*\).
Theorem 10.
If (1) has a nonsingular solution \(X^*\) and the mapping \(G'_{X^*} : \mathbb{C}^{n \times n} \rightarrow \mathbb{C}^{n \times n}\) is invertible, then there exists a closed ball \(S = \bar{B}(X^*, \delta)\) such that, for all \(X_0 \in S\), the sequence \(\{X_k\}\) generated by Algorithm 3 converges at least cubically to the solution \(X^*\).
Proof.
Let \(\varphi(X) = X - (G'_X)^{-1} G(X)\). By the Taylor formula in Banach spaces [24, p. 67], exactly as in (12),
\[
\varphi(X) - X^* = (G'_X)^{-1} \bigl[ G'_X (X - X^*) - \bigl( G(X) - G(X^*) \bigr) \bigr] = o(\|X - X^*\|). \tag{18}
\]
Hence the F-derivative of \(\varphi\) at \(X^*\) is \(0\). By Lemma 6, the sequence \(\{X_k\}\) generated by iteration (5), and hence by Algorithm 3, converges to \(X^*\).
Let \(\|(G'_{X^*})^{-1}\| = \beta\). Since \(X_k \rightarrow X^*\) as \(k \rightarrow \infty\), Lemma 7 yields, for large enough \(k\),
\[
\|(G'_{X_k})^{-1}\| \le \frac{\beta}{1 - \beta L \|X_k - X^*\|} \le 2\beta. \tag{19}
\]
By Lemma 8, applied now to the inner iterate \(X_{k,1}\) as well,
\[
\|G'_{X_k} - G'_{X_{k,1}}\| \le L \|X_k - X_{k,1}\|. \tag{20}
\]
By making use of the Taylor formula once again, for all \(t \in [0, 1]\), we have
\[
G(X^*) = G(X_{k,1}) + G'_{X_{k,1}}(X^* - X_{k,1}) + \int_0^1 \bigl[ G'_{X_{k,1} + t(X^* - X_{k,1})} - G'_{X_{k,1}} \bigr](X^* - X_{k,1})\, dt. \tag{21}
\]
Hence,
\[
\|G(X_{k,1}) + G'_{X_{k,1}}(X^* - X_{k,1})\| \le \frac{L}{2} \|X_{k,1} - X^*\|^2. \tag{22}
\]
Combining (19)-(22) and Theorem 9 (which bounds \(\|X_{k,1} - X^*\| \le \beta L \|X_k - X^*\|^2\)), we have
\[
\|X_{k+1} - X^*\| \le M \|X_k - X^*\|^3, \tag{23}
\]
where \(M = 8\beta^2 L^3 + 32\beta^3 L^4 \delta\). Therefore, the sequence \(\{X_k\}\) generated by Algorithm 3 converges at least cubically to the solution \(X^*\).
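An empirical illustration of Theorems 9 and 10 is easy to produce with the sketches above; the diagonal test matrix and starting value below are arbitrary illustrative choices, and the reference root is taken from SciPy's sqrtm.

```python
# Print ||X_k - X*|| per outer step for m = 1 (Algorithm 2) and m = 2
# (Algorithm 3); the error exponent roughly doubles and triples,
# respectively, until roundoff is reached.
import numpy as np
from scipy.linalg import sqrtm

def outer_errors(A, X0, m, steps):
    n = A.shape[0]
    Xstar, X, errs = sqrtm(A), X0.copy(), []
    for _ in range(steps):
        errs.append(np.linalg.norm(X - Xstar))
        Xinv = np.linalg.inv(X)
        K = np.eye(n * n) + np.kron(Xinv.T, A @ Xinv)
        Y = X.copy()
        for _ in range(m):
            rhs = (A @ np.linalg.inv(Y) - Y).flatten(order="F")
            Y = Y + np.linalg.solve(K, rhs).reshape((n, n), order="F")
        X = Y
    return errs

A = np.diag([4.0, 9.0, 16.0])
for m in (1, 2):
    print(m, ["%.1e" % e for e in outer_errors(A, 1.2 * np.eye(3), m, 6)])
```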
4. Stability Analysis
In accordance with [2], we define an iteration \(X_{k+1} = f(X_k)\) to be stable in a neighborhood of a solution \(X^* = f(X^*)\) if the error matrices \(E_k = X_k - X^*\) satisfy
\[
E_{k+1} = \mathcal{L}(E_k) + O(\|E_k\|^2), \tag{24}
\]
where \(\mathcal{L}\) is a linear operator with bounded powers; that is, there exists a constant \(c > 0\) such that, for all \(n > 0\) and arbitrary \(E\) of unit norm, \(\|\mathcal{L}^n(E)\| < c\). This means that a small perturbation introduced at a certain step is not amplified in the subsequent iterations.
Note that this definition of stability is an asymptotic property and differs from the usual concept of numerical stability, which concerns global error propagation and aims to bound the minimum relative error over the computed iterates.
Now we consider iteration (5) and define the error matrix \(E_k = X_k - X^*\); that is,
\[
X_k = X^* + E_k. \tag{25}
\]
For the sake of simplicity, we perform a first order error analysis; that is, we omit all terms that are quadratic in the errors. Equality up to second order terms is denoted by the symbol \(\doteq\).
Substituting (25) into (5), written in the form \(G'_{X_k}(X_{k+1} - X_k) = -G(X_k)\), we get
\[
(E_{k+1} - E_k) + A X_k^{-1} (E_{k+1} - E_k) X_k^{-1} = A (X^* + E_k)^{-1} - (X^* + E_k); \tag{26}
\]
combining (4) with the first order expansion \((X^* + E_k)^{-1} \doteq X^{*-1} - X^{*-1} E_k X^{*-1}\), we have
\[
(E_{k+1} - E_k) + A X^{*-1} (E_{k+1} - E_k) X^{*-1} \doteq A X^{*-1} - X^* - E_k - A X^{*-1} E_k X^{*-1}, \tag{27}
\]
which implies that
\[
E_{k+1} + A X^{*-1} E_{k+1} X^{*-1} \doteq A X^{*-1} - X^*. \tag{28}
\]
Omitting all terms that are quadratic in the errors and noting that \(A X^{*-1} - X^* = 0\), we have
\[
E_{k+1} + A X^{*-1} E_{k+1} X^{*-1} \doteq 0. \tag{29}
\]
By using \(A X^{*-1} = X^*\), we have
\[
E_{k+1} + X^* E_{k+1} X^{*-1} \doteq 0; \tag{30}
\]
that is,
\[
E_{k+1} X^* + X^* E_{k+1} \doteq 0, \tag{31}
\]
which means that iteration (5) is self-correcting: to first order, the error \(E_k\) in the \(k\)th iteration does not propagate to the \((k+1)\)st iteration. In particular, when \(X^*\) and \(-X^*\) have no eigenvalue in common, the matrix equation \(EX^* + X^*E = 0\) has the unique solution \(E = 0\) [17, p. 194]. Therefore, under the condition that \(X^*\) and \(-X^*\) have no eigenvalue in common, iteration (5) has optimal stability; that is, the operator \(\mathcal{L}\) defined in (24) coincides with the null operator.
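The self-correcting behavior predicted by (31) can be observed numerically: run iteration (5) to convergence, inject an artificial perturbation into the iterate, and watch the next residuals. The perturbation size (1e-6) and test matrix below are arbitrary choices for illustration.

```python
# After the injected error E, one step restores the residual to O(||E||^2),
# consistent with E_{k+1} = 0 to first order.
import numpy as np

def one_step(A, X):  # one step of iteration (5), via (6)
    n = A.shape[0]
    Xinv = np.linalg.inv(X)
    K = np.eye(n * n) + np.kron(Xinv.T, A @ Xinv)
    h = np.linalg.solve(K, (A @ Xinv - X).flatten(order="F"))
    return X + h.reshape((n, n), order="F")

rng = np.random.default_rng(1)
A = np.diag([4.0, 9.0, 16.0])        # X* and -X* share no eigenvalues
X = 1.2 * np.eye(3)
for _ in range(8):                   # converge first
    X = one_step(A, X)
X = X + 1e-6 * rng.standard_normal((3, 3))   # inject E_k
for _ in range(3):
    print(np.linalg.norm(X @ X - A) / np.linalg.norm(A))
    X = one_step(A, X)
```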
5. Numerical Examples
In this section, we compare our algorithms with the following known iterations from the literature.
Algorithm 11 (the Denman-Beavers iteration [9]).
Consider
\[
X_{k+1} = \frac{1}{2}\left(X_k + Y_k^{-1}\right), \qquad Y_{k+1} = \frac{1}{2}\left(Y_k + X_k^{-1}\right), \qquad X_0 = A, \quad Y_0 = I, \tag{32}
\]
with \(X_k \rightarrow A^{1/2}\) and \(Y_k \rightarrow A^{-1/2}\).
Algorithm 12 (the scaled Denman-Beavers iteration [13]).
Consider the iteration (32) with determinantal scaling: at each step, \(X_k\) and \(Y_k\) are first replaced by \(\mu_k X_k\) and \(\mu_k Y_k\), where
\[
\mu_k = \left| \det(X_k) \det(Y_k) \right|^{-1/(2n)}. \tag{33}
\]
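Below is a sketch of Algorithms 11 and 12, under the assumption that the scaling in (33) is the determinantal scaling of [13]; the test matrix is illustrative.

```python
# Denman-Beavers iteration (32), optionally with determinantal scaling (33).
import numpy as np

def denman_beavers(A, eps=1e-14, maxit=50, scale=False):
    n = A.shape[0]
    X, Y = A.astype(float), np.eye(n)
    for _ in range(maxit):
        if np.linalg.norm(X @ X - A) / np.linalg.norm(A) < eps:
            break
        if scale:
            mu = abs(np.linalg.det(X) * np.linalg.det(Y)) ** (-1.0 / (2 * n))
            X, Y = mu * X, mu * Y
        # Tuple assignment evaluates both right-hand sides with the old X, Y,
        # exactly as (32) requires.
        X, Y = 0.5 * (X + np.linalg.inv(Y)), 0.5 * (Y + np.linalg.inv(X))
    return X   # X_k -> A^{1/2}; the companion Y_k -> A^{-1/2}

A = np.diag([4.0, 9.0, 16.0])
for scale in (False, True):
    X = denman_beavers(A, scale=scale)
    print(scale, np.linalg.norm(X @ X - A) / np.linalg.norm(A))
```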
Algorithm 13 (the Padé iteration [13]).
Consider
\[
Y_{k+1} = Y_k\, q_p(I - Z_k Y_k)^{-1} p_p(I - Z_k Y_k), \qquad Z_{k+1} = q_p(I - Z_k Y_k)^{-1} p_p(I - Z_k Y_k)\, Z_k, \qquad Y_0 = A, \quad Z_0 = I, \tag{34}
\]
where \(p \ge 1\) is a chosen integer and \(p_p(\xi)/q_p(\xi)\) is the \([p/p]\) Padé approximant to \((1 - \xi)^{-1/2}\); for example,
\[
\frac{p_1(\xi)}{q_1(\xi)} = \frac{1 - \xi/4}{1 - 3\xi/4}. \tag{35}
\]
Here \(Y_k \rightarrow A^{1/2}\) and \(Z_k \rightarrow A^{-1/2}\).
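Under the reading of (34)-(35) above, the case \(p = 1\) is easy to code; the following sketch uses \(h(\xi) = (4 - \xi)/(4 - 3\xi)\), the \([1/1]\) Padé approximant to \((1 - \xi)^{-1/2}\), and is our illustration rather than the authors' implementation.

```python
# Coupled Pade iteration for p = 1; Y_k -> A^{1/2}, Z_k -> A^{-1/2}.
import numpy as np

def pade_sqrt_p1(A, eps=1e-14, maxit=50):
    n = A.shape[0]
    Y, Z = A.astype(float), np.eye(n)
    I = np.eye(n)
    for _ in range(maxit):
        if np.linalg.norm(Y @ Y - A) / np.linalg.norm(A) < eps:
            break
        Xi = I - Z @ Y
        H = np.linalg.solve(I - 0.75 * Xi, I - 0.25 * Xi)  # q_1(Xi)^{-1} p_1(Xi)
        Y, Z = Y @ H, H @ Z    # all iterates commute since Y_0 = A, Z_0 = I
    return Y

A = np.diag([4.0, 9.0, 16.0])
Y = pade_sqrt_p1(A)
print(np.linalg.norm(Y @ Y - A) / np.linalg.norm(A))
```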
Algorithm 14 (the scaled Padé iteration [13]).
Consider the iteration (34) with determinantal scaling: at each step, \(Y_k\) and \(Z_k\) are first replaced by \(\mu_k Y_k\) and \(\mu_k Z_k\), where
\[
\mu_k = \left| \det(Y_k) \det(Z_k) \right|^{-1/(2n)}. \tag{36}
\]
All tests are performed using MATLAB 7.1 on a personal computer (Pentium IV, 2.4 GHz) with machine precision \(2.2 \times 10^{-16}\). The stopping criterion for all algorithms is based on the relative residual error
\[
\mathrm{Res}(X_k) = \frac{\|X_k^2 - A\|}{\|A\|}, \tag{37}
\]
where \(X_k\) is the current, say the \(k\)th, iterate.
Example 1.
Consider the matrix \(A\) [matrix omitted; refer to PDF]. We use Algorithms 1, 2, and 3 with \(X_0 = 0.3I\), and Algorithms 11-14, to compute the nonsingular square root of \(A\). The iteration counts (denoted by "IT"), CPU times (denoted by "CPU"), and relative residual errors (denoted by "ERR") are listed in Table 1.
Table 1

| Method | IT | CPU | ERR |
| Algorithm 1 | 7 | 0.0086 | 2.17 × 10^-16 |
| Algorithm 2 | 7 | 0.0080 | 2.04 × 10^-16 |
| Algorithm 3 | 5 | 0.0103 | 1.96 × 10^-16 |
| Algorithm 11 | 9 | 0.0172 | 1.99 × 10^-16 |
| Algorithm 12 | 6 | 0.0136 | 2.03 × 10^-16 |
| Algorithm 13 with p = 1 | 11 | 0.0094 | 2.71 × 10^-16 |
| Algorithm 13 with p = 2 | 9 | 0.0101 | 2.68 × 10^-16 |
| Algorithm 14 with p = 1 | 9 | 0.0127 | 1.97 × 10^-16 |
| Algorithm 14 with p = 2 | 6 | 0.0108 | 3.61 × 10^-16 |
Example 2.
Consider the matrix \(A\) [matrix omitted; refer to PDF]. We use Algorithms 1, 2, and 3 with the starting matrix \(X_0 = 0.9I\), and Algorithms 11-14, to compute the nonsingular square root of \(A\). The numerical results are listed in Table 2.
Table 2

| Method | IT | CPU | ERR |
| Algorithm 1 | 6 | 7.6310 | 5.72 × 10^-16 |
| Algorithm 2 | 6 | 8.7200 | 3.61 × 10^-16 |
| Algorithm 3 | 4 | 9.0258 | 2.60 × 10^-16 |
| Algorithm 11 | 8 | 13.2301 | 3.87 × 10^-16 |
| Algorithm 12 | 7 | 11.6758 | 2.98 × 10^-16 |
| Algorithm 13 with p = 1 | 10 | 8.8936 | 9.36 × 10^-16 |
| Algorithm 13 with p = 2 | 6 | 9.4387 | 5.78 × 10^-16 |
| Algorithm 14 with p = 1 | 9 | 10.3571 | 2.89 × 10^-16 |
| Algorithm 14 with p = 2 | 5 | 8.1043 | 3.87 × 10^-16 |
From Tables 1 and 2, we can see that Algorithms 2 and 3 are competitive with Algorithms 11-14 and improve on Algorithm 1: Algorithm 2 matches Algorithm 1 in iteration count at a slightly smaller residual, while Algorithm 3 requires the fewest iterations and attains the smallest residual of all the methods in both examples. Therefore, our algorithms are more effective than the known ones in some respects.
6. Conclusion
In this paper, we have proposed two new algorithms for computing the nonsingular square root of a matrix \(A\) by applying Newton's method to the nonlinear matrix equation \(G(X) = X - AX^{-1} = 0\). Convergence theorems and a stability analysis for these new algorithms have been given. Numerical examples show that our methods are more effective than the known ones in some respects.
Acknowledgments
The authors wish to thank the editor and the anonymous referees for their very useful suggestions, as well as Professor Xuefeng Duan for his insightful and beneficial discussions and suggestions. This work was supported by the National Natural Science Foundation of China (nos. 11101100, 11301107, and 11261014) and the Guangxi Provincial Natural Science Foundation (nos. 2012GXNSFBA053006 and 2013GXNSFBA019009).
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
[1] B. A. Schmitt, "On algebraic approximation for the matrix exponential in singularly perturbed boundary value problems," SIAM Journal on Numerical Analysis, vol. 57, pp. 51-66, 2010.
[2] S. H. Cheng, N. J. Higham, C. S. Kenney, A. J. Laub, "Approximating the logarithm of a matrix to specified accuracy," SIAM Journal on Matrix Analysis and Applications , vol. 22, no. 4, pp. 1112-1125, 2001.
[3] L. Dieci, B. Morini, A. Papini, "Computational techniques for real logarithms of matrices," SIAM Journal on Matrix Analysis and Applications , vol. 17, no. 3, pp. 570-593, 1996.
[4] G. W. Cross, P. Lancaster, "Square roots of complex matrices," Linear and Multilinear Algebra , vol. 1, pp. 289-293, 1974.
[5] W. D. Hoskins, D. J. Walton, "A faster method of computing square roots of a matrix," IEEE Transactions on Automatic Control , vol. 23, no. 3, pp. 494-495, 1978.
[6] C. R. Johnson, K. Okubo, "Uniqueness of matrix square roots under a numerical range condition," Linear Algebra and Its Applications , vol. 341, no. 1-3, pp. 195-199, 2002.
[7] Å. Björck, S. Hammarling, "A Schur method for the square root of a matrix," Linear Algebra and Its Applications , vol. 52-53, pp. 127-140, 1983.
[8] S. G. Chen, P. Y. Hsieh, "Fast computation of the nth root," Computers & Mathematics with Applications , vol. 17, no. 10, pp. 1423-1427, 1989.
[9] E. D. Denman, A. N. Beavers Jr., "The matrix sign function and computations in systems," Applied Mathematics and Computation , vol. 2, no. 1, pp. 63-94, 1976.
[10] L. P. Franca, "An algorithm to compute the square root of a 3×3 positive definite matrix," Computers & Mathematics with Applications , vol. 18, no. 5, pp. 459-466, 1989.
[11] N. J. Higham, "Newton's method for the matrix square root," Mathematics of Computation , vol. 46, no. 174, pp. 537-549, 1986.
[12] M. A. Hasan, "A power method for computing square roots of complex matrices," Journal of Mathematical Analysis and Applications , vol. 213, no. 2, pp. 393-405, 1997.
[13] N. J. Higham, "Stable iterations for the matrix square root," Numerical Algorithms , vol. 15, no. 2, pp. 227-242, 1997.
[14] Z. Liu, Y. Zhang, R. Ralha, "Computing the square roots of matrices with central symmetry," Applied Mathematics and Computation , vol. 186, no. 1, pp. 715-726, 2007.
[15] Y. N. Zhang, Y. W. Yang, B. H. Cai, D. S. Guo, "Zhang neural network and its application to Newton iteration for matrix square root estimation," Neural Computing and Applications , vol. 21, no. 3, pp. 453-460, 2012.
[16] Z. Liu, H. Chen, H. Cao, "The computation of the principal square roots of centrosymmetric H-matrices," Applied Mathematics and Computation , vol. 175, no. 1, pp. 319-329, 2006.
[17] J. M. Ortega, W. C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, SIAM, Philadelphia, Pa, USA, 2008.
[18] P. Laasonen, "On the iterative solution of the matrix equation \(AX^2 - I = 0\)," Mathematical Tables and Other Aids to Computation, vol. 12, pp. 109-116, 1958.
[19] J. M. Ortega, Numerical Analysis, 2nd edition, Academic Press, New York, NY, USA, 1972.
[20] B. Meini, "The matrix square root from a new functional perspective: theoretical results and computational issues," SIAM Journal on Matrix Analysis and Applications , vol. 26, no. 2, pp. 362-376, 2004.
[21] V. Samanskii, "On a modification of Newton's method," Ukrainian Mathematical Journal, vol. 19, pp. 133-138, 1967.
[22] J. D. Gardiner, A. J. Laub, J. J. Amato, C. B. Moler, "Solution of the Sylvester matrix equation \(AXB^{T} + CXD^{T} = E\)," ACM Transactions on Mathematical Software, vol. 18, no. 2, pp. 223-231, 1992.
[23] M. A. Krasnoselskii, G. M. Vainikko, P. P. Zabreiko, Y. B. Rutitskii, V. Y. Stetsenko Approximate Solution of Operator Equations , Wolters-Noordhoff Publishing, Groningen, The Netherlands, 1972.
[24] D. J. Guo Nonlinear Functional Analysis , Shandong Science Press, Shandong, China, 2009.
Copyright © 2014 Chun-Mei Li and Shu-Qian Shen. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
Two new algorithms are proposed to compute the nonsingular square root of a matrix A. Convergence theorems and stability analysis for these new algorithms are given. Numerical results show that these new algorithms are feasible and effective.