On convergence and error analysis of the

parametric iteration method

 

S. A. Saeed Alavi 1*, Aghileh Heydari 1, Farhad Khellat 2

 

1 Department of Mathematics, Payame Noor University, Tehran, I.R of Iran

2 Department of Mathematics, Faculty of Mathematical Sciences, Shahid Beheshti University, Evin-Tehran, Iran

*Corresponding author E-mail: alavi601@yahoo.com

 

 

Copyright © 2015 S. A. Saeed Alavi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

 

Abstract

 

The parametric iteration method falls under the category of analytic approximate methods for solving various kinds of nonlinear differential equations. So far, its convergence has been proved only for some special problems. In this paper, an error analysis is presented and, based on it, the convergence of the method for general problems is proved. To assess the performance of the claimed error bound and the convergence of the method, numerical experiments performed in MATLAB 2012b are presented.

 

Keywords: He’s Variational Iteration Method; Parametric Iteration Method; Convergence; Error Bound.

 

1.         Introduction

The parametric iteration method (PIM) is an analytic approximate method for solving linear and nonlinear problems, proposed in [1]. It was initially proposed for solving nonlinear fractional differential equations by modifying He's variational iteration method (VIM) [2]. The PIM enjoys some augmented factors which make it more flexible than the VIM; in fact, by adjusting these factors one can obtain more accurate approximations in comparison with the VIM.

During the recent decade, many researchers have worked on the VIM for solving various kinds of problems; mentioning all of them is beyond the scope of this paper. Besides, some authors have focused on the convergence of the VIM for specific problems, such as multi-order fractional DEs [3], multi-delay DEs [4], ODEs [5], systems of ODEs [6], and so on. Among these, the work of Odibat [7] stands out because of its generality: he concluded the convergence of the VIM by introducing a semi-contraction operator and completed the proof along the lines of the proof of the Banach fixed point theorem.

On the other hand, the PIM has been utilized for solving various kinds of differential equations, such as the Abel equation [8], the nonlinear chaotic Genesio system [9], boundary value problems [10], linear optimal control problems [11], and so on. Convergence theorems for some particular cases were discussed in some of these works (e.g. [8], [9]), but what is still missing is a proof of the convergence of the PIM for a general differential equation. Also, from both the theoretical and the practical viewpoint, a complete discussion of the error bound of the approximations is needed. Therefore, the goal of this article is to establish an error term and then to present a general proof of the convergence of the PIM.

2.         Parametric iteration method (PIM)

To explain the basic idea of the PIM, consider the following differential equation:

$A[u(t)] = 0$                                                                                                  (1)

where $A$ is a nonlinear operator, $t$ denotes the time, and $u(t)$ is an unknown variable. We first write (1) as:

$L[u(t)] + N[u(t)] = g(t)$                                                                                                  (2)

where $L$ and $N$ denote the linear and nonlinear differential operators acting on the unknown, respectively, and $g(t)$ is the source term. We then construct a family of iterative formulas as:

$u_{k+1}(t) = u_k(t) + h\int_{t_0}^{t} H(s)\,\big\{L[u_k(s)] + N[u_k(s)] - g(s)\big\}\,ds$                                                                                                  (3)

where $k = 0, 1, 2, \ldots$. In this formula $h$ and $H(t)$ denote the so-called auxiliary parameter and auxiliary function, respectively. In this work we take $H(t) = 1$. Accordingly, the successive approximations $u_k(t)$, $k \ge 1$, will be readily obtained by choosing the zeroth component $u_0(t)$. (For more details about the PIM see [1].)
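
As a minimal illustration of this iteration with $L = \frac{d}{dt}$ and $H(t) = 1$, the following Python sketch applies the formula to a scalar first-order ODE, approximating the integral by the trapezoidal rule on a grid. The function name, the test equation $\dot{u} = -u^{2}$, $u(0) = 1$, the grid and the number of iterations are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

def pim(f, u0, t, h, iterations):
    """Sketch of the PIM for u'(t) = f(t, u(t)), u(t[0]) = u0, with L = d/dt and H(t) = 1:
        u_{k+1}(t) = u_k(t) + h * int_{t0}^{t} ( u_k'(s) - f(s, u_k(s)) ) ds.
    The integral is approximated by the cumulative trapezoidal rule on the grid t."""
    u = np.full_like(t, u0, dtype=float)          # zeroth approximation u_0(t) = u0
    approximations = [u.copy()]
    for _ in range(iterations):
        du = np.gradient(u, t)                    # u_k'(t) on the grid
        residual = du - f(t, u)                   # L[u_k] + N[u_k] - g  =  u_k' - f(t, u_k)
        correction = np.concatenate(
            ([0.0], np.cumsum(0.5 * (residual[1:] + residual[:-1]) * np.diff(t))))
        u = u + h * correction                    # PIM update with the auxiliary parameter h
        approximations.append(u.copy())
    return approximations

if __name__ == "__main__":
    # Illustrative test problem (not from the paper): u' = -u^2, u(0) = 1, exact u = 1/(1+t).
    t = np.linspace(0.0, 1.0, 201)
    approx = pim(lambda s, u: -u**2, 1.0, t, h=-1.0, iterations=10)
    print("max error of last iterate:", np.max(np.abs(approx[-1] - 1.0 / (1.0 + t))))
```

With $h = -1$ the update reduces to a Picard-type iteration, $u_{k+1}(t) = u_0 + \int_{t_0}^{t} f(s, u_k(s))\,ds$, which is why this particular sketch converges quickly on $[0,1]$; other values of $h$ in $(-2, 0)$ behave similarly but at different rates.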

3.         Error analysis and convergence

Consider the following nonlinear problem

$\dot{X}(t) = F(X(t), t), \qquad X(t_0) = X_0, \qquad t_0 \le t \le T$                                                                                                  (4)

where $X(t)$ and $F$ are defined as:

$X(t) = \big(x_1(t), x_2(t), \ldots, x_m(t)\big)^{T}, \qquad F(X, t) = \big(f_1(X, t), f_2(X, t), \ldots, f_m(X, t)\big)^{T}$                                                  (5)

where $f_1, \ldots, f_m$ are continuous real functions on a domain $D$. In $\mathbb{R}^m$ we use the infinity norm, i.e. for a vector $X = (x_1, \ldots, x_m)^T$ we have $\|X\|_\infty = \max_{1\le i\le m}|x_i|$, and for every $x_i \in C[t_0, T]$ we use the maximum norm $\|x_i\| = \max_{t\in[t_0,T]}|x_i(t)|$. Also, the norm of a vector function like $X(t)$ is:

$\|X\| = \max_{1\le i\le m}\;\max_{t\in[t_0,T]} |x_i(t)|$                                                                                                  (6)

In order to use the PIM, we rewrite (4) as

$L[X(t)] + N[X(t)] = 0$                                                                                                  (7)

where $L = \frac{d}{dt}$ is an auxiliary linear operator and $N[X(t)] = -F(X(t), t)$ is the nonlinear operator. Then the iteration formula constructed by the PIM will be defined by:

$X_{k+1}(t) = X_k(t) + h\int_{t_0}^{t} H(s)\Big(\dot{X}_k(s) - F(X_k(s), s)\Big)\,ds, \qquad k = 0, 1, 2, \ldots$                                                  (8)

Taking $H(t) = 1$ and choosing the initial approximation $X_0(t) \equiv X_0$ in the above sequence, clearly we can say that $X_k(t_0) = X_0$ for every $k \ge 0$. So the following lemma will be obtained.

Lemma 3.1: For every $k \ge 0$ and for every $t \in [t_0, T]$,

$\int_{t_0}^{t}\dot{X}_k(s)\,ds = X_k(t) - X_0.$                                                                                                  (9)

Now the iteration formula (8) can be written as

$X_{k+1}(t) = (1+h)\,X_k(t) - h\,X_0 - h\int_{t_0}^{t} F(X_k(s), s)\,ds, \qquad k = 0, 1, 2, \ldots$                                                  (10)

Let us denote the $n$th approximation by $X_n(t) = \big(x_n^1(t), x_n^2(t), \ldots, x_n^m(t)\big)^{T}$; then the convergence of the sequence $\{X_n(t)\}$ is investigated through the norm

$\|X_{n+1} - X_n\| = \max_{1\le i\le m}\;\max_{t\in[t_0,T]}\big|x_{n+1}^i(t) - x_n^i(t)\big|.$                                                  (11)

Before presenting the main theorem, we restate the Lipschitz condition for the vector function $F$. Suppose that for every component $f_i$, $1 \le i \le m$, of the function $F$ there exists a positive real constant $L_i$ such that for every $t \in [t_0, T]$ and for every $X = (x_1, \ldots, x_m)^T$ and $Y = (y_1, \ldots, y_m)^T$ in the domain of $F$ the following condition is satisfied:

$\big|f_i(X, t) - f_i(Y, t)\big| \le L_i \max_{1\le j\le m}|x_j - y_j|.$                                                  (12)

In this situation, letting $L = \max_{1\le i\le m} L_i$, we can say that $F$ satisfies a Lipschitz condition with respect to the first argument with the Lipschitz constant $L$, i.e.

$\big\|F(X, t) - F(Y, t)\big\|_\infty \le L\,\|X - Y\|_\infty.$                                                  (13)

Theorem 3.2: Assume that $F$ is continuous on $D$, where $D \subseteq \mathbb{R}^m \times [t_0, T]$, and satisfies a Lipschitz condition on $D$ with respect to the first argument with the Lipschitz constant $L$. Also suppose that $\|F\|_\infty$ is bounded on $D$ by a positive real number $M$. Then for two arbitrary successive approximations we have

$\|X_{n+1} - X_n\| \le |h|\,M\sum_{k=0}^{n}\binom{n}{k}|1+h|^{\,n-k}\,\frac{(|h|L)^{k}\,(T - t_0)^{k+1}}{(k+1)!}.$                                                  (14)

Proof: If we denote the approximate solution obtained by the first iteration by $X_1(t)$, with components $x_1^i(t)$, then according to (10), and noticing that $X_0(t) \equiv X_0$ and $|f_i(X_0, s)| \le M$ for every $i$ and every $s \in [t_0, T]$, we can write

$\big|x_1^i(t) - x_0^i\big| = \Big|{-h}\int_{t_0}^{t} f_i(X_0, s)\,ds\Big| \le |h|\,M\,(t - t_0), \qquad 1 \le i \le m.$                                                  (15)

Now, let $k = 1$ in (10). Using the notation above together with the Lipschitz condition (12) and the estimate (15), we have

$\big|x_2^i(t) - x_1^i(t)\big| \le |1+h|\,\big|x_1^i(t) - x_0^i\big| + |h|\int_{t_0}^{t}\big|f_i(X_1(s), s) - f_i(X_0(s), s)\big|\,ds$

$\le |1+h|\,|h|\,M\,(t - t_0) + |h|\,L\int_{t_0}^{t}\max_{1\le j\le m}\big|x_1^j(s) - x_0^j\big|\,ds$

$\le |1+h|\,|h|\,M\,(t - t_0) + |h|^2 L\,M\int_{t_0}^{t}(s - t_0)\,ds$

$= |1+h|\,|h|\,M\,(t - t_0) + |h|^2 L\,M\,\frac{(t - t_0)^2}{2}.$

We rearrange the final statement as below

$\big|x_2^i(t) - x_1^i(t)\big| \le |h|\,M\sum_{k=0}^{1}\binom{1}{k}|1+h|^{\,1-k}\,\frac{(|h|L)^{k}(t - t_0)^{k+1}}{(k+1)!}.$                                                  (16)

And similarly

$\big|x_3^i(t) - x_2^i(t)\big| \le |h|\,M\sum_{k=0}^{2}\binom{2}{k}|1+h|^{\,2-k}\,\frac{(|h|L)^{k}(t - t_0)^{k+1}}{(k+1)!}.$                                                  (17)

In summary, this argument leads by induction to the following general form

$\big|x_{n+1}^i(t) - x_n^i(t)\big| \le |h|\,M\sum_{k=0}^{n}\binom{n}{k}|1+h|^{\,n-k}\,\frac{(|h|L)^{k}(t - t_0)^{k+1}}{(k+1)!}, \qquad 1 \le i \le m.$                                                  (18)

The maximum of the left-hand side over the index $i$ satisfies (18) too, due to the fact that the right-hand side of (18) is independent of the index $i$. So, according to the norm defined in (6), taking the maximum of both sides of (18) over all $1 \le i \le m$ and all $t \in [t_0, T]$, we have:

$\|X_{n+1} - X_n\| \le |h|\,M\sum_{k=0}^{n}\binom{n}{k}|1+h|^{\,n-k}\,\frac{(|h|L)^{k}(T - t_0)^{k+1}}{(k+1)!}.$                                                  (19)

This completes the proof. ■
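
To get a quick numerical feel for the behaviour of the right-hand side of (14), the following sketch evaluates it for given $n$, $h$, $L$, $M$ and $T - t_0$. It assumes the bound in the form written above; the helper name and the sample parameter values are illustrative only, not taken from the paper.

```python
from math import comb, factorial

def pim_error_bound(n, h, L, M, T_minus_t0):
    """Right-hand side of (14):
       |h| * M * sum_{k=0}^{n} C(n,k) * |1+h|**(n-k) * (|h|*L)**k * (T-t0)**(k+1) / (k+1)!"""
    return abs(h) * M * sum(
        comb(n, k) * abs(1 + h) ** (n - k) * (abs(h) * L) ** k
        * T_minus_t0 ** (k + 1) / factorial(k + 1)
        for k in range(n + 1))

if __name__ == "__main__":
    # Illustrative parameters (not the paper's): L = 2, M = 5, T - t0 = 1, h = -0.8.
    for n in (1, 5, 10, 20, 40):
        print(n, pim_error_bound(n, h=-0.8, L=2.0, M=5.0, T_minus_t0=1.0))
```

For any $h$ with $|1+h| < 1$ the printed values tend to zero as $n$ grows, which is exactly the content of Theorem 3.4 below.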

Now we want to prove that if we choose $h$ such that $|1+h| < 1$, then the right-hand side of (14) vanishes when $n$ tends to infinity. First we prove an auxiliary lemma.

Lemma 3.3: Assume that $|1+h| < 1$; then for every fixed $k$ with $0 \le k \le n$ we have

$\lim_{n\to\infty}\binom{n}{k}|1+h|^{\,n-k} = 0.$                                                  (20)

Proof: For a fixed $k$ we have $\binom{n}{k} = \frac{n(n-1)\cdots(n-k+1)}{k!} \le n^{k}$. So

$\binom{n}{k}|1+h|^{\,n-k} \le n^{k}\,|1+h|^{\,n-k}.$                                                  (21)

Also, for every real number $0 < q < 1$ and every fixed $k$ we know that $\lim_{n\to\infty} n^{k} q^{\,n-k} = 0$. Therefore, noticing the assumption $|1+h| < 1$ and taking $q = |1+h|$ will complete the proof. ■

Theorem 3.4: Under the assumption $|1+h| < 1$ of Lemma 3.3, we have

$\lim_{n\to\infty}\sum_{k=0}^{n}\binom{n}{k}|1+h|^{\,n-k}\,\frac{(|h|L)^{k}(T - t_0)^{k+1}}{(k+1)!} = 0.$                                                  (22)

Proof: Let $a_{n,k} = \binom{n}{k}|1+h|^{\,n-k}$ and $b_k = \frac{(|h|L)^{k}(T - t_0)^{k+1}}{(k+1)!}$. Therefore the sum in (22) can be written as

$S_n = \sum_{k=0}^{n} a_{n,k}\,b_k,$                                                  (23)

or, after splitting it at a fixed index $K$ (with $K \le n$),

$S_n = \sum_{k=0}^{K-1} a_{n,k}\,b_k + \sum_{k=K}^{n} a_{n,k}\,b_k.$                                                  (24)

Noticing the expansion of the exponential function, the series $\sum_{k=0}^{\infty} b_k$ is absolutely convergent for every $h$, $L$ and $T - t_0$; in particular the limit of $b_k$ is zero and, more precisely, $b_{k+1}/b_k = |h|L(T - t_0)/(k+2) \to 0$. Also, by Lemma 3.3 we know that $\lim_{n\to\infty} a_{n,k} = 0$ for every fixed $k$. Now choose any $\varepsilon > 0$ with $|1+h| + \varepsilon < 1$. Since $b_{k+1}/b_k \to 0$, there exists a $K$ such that for every $k \ge K$ we have $b_{k+1} \le \varepsilon\,b_k$, and hence $b_k \le b_K\,\varepsilon^{\,k-K}$. In this case we can write

$\sum_{k=K}^{n} a_{n,k}\,b_k \;\le\; b_K\,\varepsilon^{-K}\sum_{k=K}^{n}\binom{n}{k}|1+h|^{\,n-k}\varepsilon^{\,k} \;\le\; b_K\,\varepsilon^{-K}\big(|1+h| + \varepsilon\big)^{n},$

so that

$S_n \;\le\; \sum_{k=0}^{K-1} a_{n,k}\,b_k + b_K\,\varepsilon^{-K}\big(|1+h| + \varepsilon\big)^{n}.$                                                  (25)

In the last inequality the binomial theorem has been used. On the other hand, $\big(|1+h| + \varepsilon\big)^{n}$ tends to zero as $n$ tends to infinity, because $|1+h| + \varepsilon < 1$. So if we keep $K$ fixed and let $n \to \infty$, then, by Lemma 3.3 applied to the finitely many indices $k = 0, \ldots, K-1$, we have

$\lim_{n\to\infty} S_n = 0.$                                                  (26)

Since $S_n$ is exactly the sum in (22), the proof is complete. ■

Theorems 3.2 and 3.4 indicate that, for every arbitrary $\varepsilon > 0$ and for sufficiently large $n$ (depending on $\varepsilon$), two successive terms of the sequence $\{X_n(t)\}$ satisfy the following relation

$\|X_{n+1} - X_n\| < \varepsilon.$                                                  (27)

Using this, one can show that $\{X_n(t)\}$ is a Cauchy sequence in the space of continuous vector functions on $[t_0, T]$ equipped with the norm (6). Therefore it is convergent in this complete space. So we have:

Corollary 3.5: Under the above assumptions, the sequence (10) constructed by the PIM is convergent.

Corollary 3.6: The valid region of the convergence-control parameter $h$ is $|1+h| < 1$, i.e. $-2 < h < 0$.
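
The region in Corollary 3.6 is simply a restatement of the assumption $|1+h| < 1$ used in Lemma 3.3 and Theorem 3.4; written out,

$|1+h| < 1 \;\Longleftrightarrow\; -1 < 1 + h < 1 \;\Longleftrightarrow\; -2 < h < 0.$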

4.         Numerical experiments

To demonstrate the efficiency of the error bound defined by (14), we consider the following two-dimensional test problem.

                                                                                                                              (28)

where the time domain and the exact solutions are taken from [12]:

                                                                                                                            (29)

                                                                                                                          (30)

In view of (4) and (5) we have:

                                                                           (31)

Since $F$ is linear, it obviously satisfies a Lipschitz condition with a Lipschitz constant $L$. For the chosen parameters, $\|F\|_\infty$ is bounded on the region of interest by a constant $M$, and, using the notation $B_n$ for the error bound described in (14), we have:

$B_n = |h|\,M\sum_{k=0}^{n}\binom{n}{k}|1+h|^{\,n-k}\,\frac{(|h|L)^{k}(T - t_0)^{k+1}}{(k+1)!}.$                                                  (32)

Furthermore, the norm of the direct difference of two successive approximations $X_n$ and $X_{n+1}$, which appears on the left-hand side of (14), is denoted by

$D_n = \|X_{n+1} - X_n\|.$                                                  (33)

Then for every $n$ and for all considered values of $h$, the numerical results must confirm the relation $D_n \le B_n$ to ensure that the theoretical result of Theorem 3.2 is reliable.
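
A sketch of such a check is given below. Since the test problem (28) and its data are not reproduced here, the code uses a hypothetical two-dimensional linear system; the matrix, the interval, the auxiliary parameter $h$ and the bound $M$ are illustrative assumptions. It runs the iteration (10) with the trapezoidal rule, computes $D_n$ and the bound $B_n$ of (14)/(32), and checks that $D_n \le B_n$.

```python
import numpy as np
from math import comb, factorial

# Hypothetical linear test system X'(t) = A X(t), X(0) = X0 (not the paper's problem (28)).
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
X0 = np.array([1.0, 0.0])
t = np.linspace(0.0, 1.0, 401)                    # time grid on [t0, T] = [0, 1]
h = -0.9
L = np.max(np.sum(np.abs(A), axis=1))             # infinity-norm Lipschitz constant of F(X, t) = A X
M = 2.0                                           # assumed bound on ||F||_inf over the region of interest

def bound_B(n):
    # Error bound (14)/(32) with T - t0 = 1.
    return abs(h) * M * sum(comb(n, k) * abs(1 + h) ** (n - k) * (abs(h) * L) ** k / factorial(k + 1)
                            for k in range(n + 1))

def pim_step(X):
    # One application of (10): X_{k+1} = (1+h) X_k - h X0 - h * int_{t0}^{t} A X_k(s) ds.
    F = X @ A.T                                   # row j is A X_k(t_j)
    integral = np.vstack(([0.0, 0.0],
                          np.cumsum(0.5 * (F[1:] + F[:-1]) * np.diff(t)[:, None], axis=0)))
    return (1 + h) * X - h * X0 - h * integral

X = np.tile(X0, (t.size, 1))                      # zeroth approximation X_0(t) = X0
for n in range(12):
    X_next = pim_step(X)
    D_n = np.max(np.abs(X_next - X))              # D_n = ||X_{n+1} - X_n|| with the norm (6)
    print(n, D_n, bound_B(n), D_n <= bound_B(n))
    X = X_next
```

In this setup the printed check $D_n \le B_n$ holds at every iteration, mirroring the comparison reported below for the paper's own test problem.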

Also, in order to discuss the convergence of the PIM claimed in Corollary 3.5, we denote the absolute error by $E_n$ and define it by

$E_n = \|X_n - X^{*}\|,$                                                  (34)

where $X^{*}(t)$ denotes the exact solution given by (29) and (30).

In Fig. 1, we plot $B_n$, $D_n$ and $E_n$ obtained by the PIM for a fixed choice of $h$ and increasing $n$. Other values of $h$ and $n$ are discussed in Fig. 2 and Table 1.

Fig. 1: $B_n$, $D_n$ and $E_n$ obtained by the PIM.

Analysis of Fig. 1:

Error bound: As can be seen from the plot of $B_n$, the estimated error bound is indeed an upper bound for $D_n$, which confirms the inequality (14). This is true for every iteration, as the zoomed part shows. The plot of $B_n$ also shows that the error bound vanishes as $n$ increases, which is a confirmation of Theorem 3.4.

Cauchy sequence: The plot of $D_n$ shows that $D_n \to 0$ as $n$ grows. This means that for every $\varepsilon > 0$ and for sufficiently large $n$, $\|X_{n+1} - X_n\| < \varepsilon$, which confirms (27). The latter can be used to conclude that the sequence $\{X_n(t)\}$ is a Cauchy sequence.

Convergence: The plot of $E_n$ shows that $E_n \to 0$ for sufficiently large $n$, and this is a confirmation of the convergence of the sequence constructed by the PIM.

 

In Fig. 2, we plot $B_n$ and $D_n$ for the solutions of the PIM with various values of $h$ and $n$. As can be seen, the error bound confirms what is claimed in Theorems 3.2 and 3.4.

Although Fig. 1 and Fig. 2 provide good information about the error bound, what they do not show is specific data on the convergence rate. So, to study the convergence rate, we report the final values of $B_n$ and $D_n$ for several values of $h$ and $n$ in Table 1.

 

Table 1: Test of the Error Bound for Various $h$ and $n$

 

Table 1 shows that for some values of $h$ the convergence is excellent, but for others many more iterations are needed before convergence can be concluded. For some choices of $h$, although the factor $|1+h|^{\,n-k}$ is small enough to make $B_n$ vanish quickly, the remaining factors in the same formula (32) make $B_n$ very large; for other choices the situation is reversed and those factors are not very large, so the convergence is faster. From this viewpoint, choosing $h$ so that $|1+h|$ is as small as possible seems to be the best option; however, there exist many counterexamples in nonlinear problems showing that other values of $h$ can give better approximations. Such an argument leads to a known problem, namely finding an optimal value of the accelerating parameter, which in general is an open problem in this field. Based on the results of this paper, a proposal is to minimize the error term as a function of the parameter $h$, which is left to further work.
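
As a sketch of that proposal, the bound of (14) can be minimized over $h$ numerically for a fixed iteration number. The simple grid search below and its parameter values ($n$, $L$, $M$, $T - t_0$) are illustrative assumptions, not results from the paper.

```python
from math import comb, factorial

def bound(h, n, L, M, T):
    # Right-hand side of (14); here T stands for T - t0.
    return abs(h) * M * sum(comb(n, k) * abs(1 + h) ** (n - k) * (abs(h) * L) ** k
                            * T ** (k + 1) / factorial(k + 1) for k in range(n + 1))

# Grid search for the h in (-2, 0) that minimizes the bound for a fixed n (illustrative values).
n, L, M, T = 10, 2.0, 5.0, 1.0
candidates = [-2.0 + 0.001 * i for i in range(1, 2000)]      # h in (-2, 0), endpoints excluded
h_opt = min(candidates, key=lambda h: bound(h, n, L, M, T))
print("approximately optimal h:", h_opt, "bound:", bound(h_opt, n, L, M, T))
```

For the bound alone the minimizer sits at or very near $h = -1$, where $|1+h| = 0$; as noted above, for nonlinear problems other values of $h$ may nevertheless give better actual approximations.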

5.         Conclusion

In this paper, a convergence analysis of the PIM has been presented. This was done by establishing a novel error bound and showing that this error bound tends to zero. An interesting result has also been obtained for the auxiliary parameter h. Although finding the optimal h is in general an open problem, we hope that the results of this paper provide a promising tool for researchers. Our proposal is to find the optimal h by minimizing the presented error term as a function of h, which is left to future work.

References

[1]         A. Ghorbani, “Toward a New Analytical Method for Solving Nonlinear Fractional Differential Equations”, Comput. Meth. Appl. Mech. Engrg. Vol.197, (2008), pp: 4173-4179. http://dx.doi.org/10.1016/j.cma.2008.04.015.

[2]         J. H. He, “Variational iteration method – a kind of non–linear analytical technique: some examples”, Int. J. Non–Linear Mech. Vol.34, (1999), pp: 699-708. http://dx.doi.org/10.1016/S0020-7462(98)00048-1.

[3]         S. Yang, A. Xiao, H. Su, “Convergence of the variational iteration method for solving multi-order fractional differential equations”, Computers and Mathematics with Applications, Vol.60, (2010), pp: 2871-2879. http://dx.doi.org/10.1016/j.camwa.2010.09.044.

[4]         S. Yang, A. Xiao, “Convergence of the variational iteration method for solving multi-delay differential equations”, Computers & Mathematics with Applications, Vol.61, No.8, (2011), pp: 2148-2151.

[5]         E. Yusufoğlu, “Two convergence theorems of variational iteration method for ordinary differential equations”, Appl. Math. Lett. (2011), http://dx.doi.org/10.1016/j.aml.2011.02.005.

[6]         D. Khojasteh, “Convergence of variational iteration method for solving linear systems of ODEs with constant coefficients”, Computers & Mathematics with Applications, Vol.56, No.8, (2008), pp: 2027-2033.

[7]         Z. M. Odibat, “A study on the convergence of variational iteration method”, Mathematical and Computer Modelling, Vol.51, (2010), pp: 1181-1192.

[8]         J. Saberi-Nadjafi, A. Ghorbani, “Piecewise-truncated parametric iteration method: a promising analytical method for solving Abel differential equations”, Z. Naturforsch. Vol.65a, (2010), pp: 529-539.

[9]         A. Ghorbani and J. Saberi-Nadjafi, “A Piecewise-Spectral Parametric Iteration Method for Solving the Nonlinear Chaotic Genesio System”, Mathematical and Computer Modeling, Vol.54, (2011), pp: 131-139. http://dx.doi.org/10.1016/j.mcm.2011.01.044.

[10]      A. Ghorbani, M. Gachpazan, J. Saberi-Nadjafi, “A modified parametric iteration method for solving nonlinear second order BVPs”, Comput. Appl. Math. Vol.30, No.3, (2011), pp: 499-515. http://dx.doi.org/10.1590/S1807-03022011000300002.

[11]      A. S. Alavi, A. Heydari, “Parametric Iteration Method for Solving Linear Optimal Control Problems”, Applied Mathematics, Vol.3, (2012), pp: 1059-1064. http://dx.doi.org/10.4236/am.2012.39155.

[12]      C. K. Chui and G. Chen, “Linear Systems and Optimal Control”, Springer-Verlag, Berlin, Heidelberg, (1989), pp:76-80 http://dx.doi.org/10.1007/978-3-642-61312-8.