Journal of Computational Physics 250 (2013) 511–526

Contents lists available at SciVerse ScienceDirect
Journal of Computational Physics
journal homepage: www.elsevier.com/locate/jcp

Optimized explicit finite-difference schemes for spatial derivatives using maximum norm

Jin-Hai Zhang*, Zhen-Xing Yao
Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing, China

Article history: Received 1 September 2012; Received in revised form 27 February 2013; Accepted 21 April 2013; Available online 10 May 2013

Keywords: Optimized scheme; Explicit finite-difference; Numerical dispersion; Maximum norm; Simulated annealing algorithm

Abstract: Conventional explicit finite-difference methods have difficulty handling high-frequency components because of strong numerical dispersion. One can reduce the numerical dispersion by optimizing the constant coefficients of the finite-difference operator. Unlike traditional optimized schemes, which use the 2-norm and the least squares, we propose to construct the objective functions using the maximum norm and to solve them with the simulated annealing algorithm. Both theoretical analyses and numerical experiments show that our optimized scheme is superior to traditional optimized schemes in three respects. First, it provides much more flexibility in designing the objective functions; thus we can use various possible forms and contents to make the objective functions more reasonable. Second, it allows a tighter error limitation, which is shown to be necessary to avoid rapid error accumulation in simulations on large-scale models with long travel times. Finally, it can obtain optimized coefficients that are much closer to the theoretical limits, which means greater savings in computational effort and memory demand.

© 2013 The Authors. Published by Elsevier Inc. All rights reserved.

1.
Introduction

The explicit finite-difference (FD) scheme is one of the most popular approaches in numerical simulation, because it is simple to implement and powerful in handling complex media. However, the conventional explicit FD method suffers serious numerical artifacts in the presence of high-frequency components and/or coarse grids. This problem dramatically increases both the memory demand and the computational cost, especially for large-scale models [7], since a fine grid must be properly designed and a high-order FD operator applied. A popular way to avoid this problem is to manually decrease the dominant frequency. This may yield an acceptable running time, but it severely limits the spatial resolution, because high-frequency components are necessary for improving the final resolution. Another way is to apply advanced methods that have less numerical dispersion, such as optimized explicit FD methods and implicit FD methods (either conventional or optimized). Compared with implicit FD methods, explicit FD methods usually have much lower computational cost. Therefore, we prefer to develop an optimized scheme for explicit FD methods that further reduces the numerical dispersion while maintaining the computational cost.

Optimized schemes for FD methods appeared two decades ago [10,17]. They have been widely used to reduce numerical dispersion in many practical applications, such as acoustics, seismology, and electromagnetics.

[This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Corresponding author: Tel.: +86 1082998908; e-mail: zjh@mail.iggcas.ac.cn (J.-H. Zhang). doi:10.1016/j.jcp.2013.04.029]

The basic idea of the
optimized scheme is to increase the accurate wave number coverage of the FD operator within a tolerable error range by modifying the constant coefficients. The main advantage of optimized explicit FD methods is that we can significantly improve the numerical results while keeping the algorithm structure, the source code, and the computational efficiency unchanged. In addition, we can use a relatively coarser grid as well as a larger time step, so the memory demand and the total running time decrease further. Holberg [10] suggests using the group velocity, which is related to dispersion errors. Etgen [6] suggests employing the phase velocity rather than the group velocity, since the phase velocity is more straightforward. Some works propose adding a weight function to the objective functions to enhance the influence of the wave numbers of interest [9,13,16]. However, almost all traditional optimized schemes use the least squares to minimize the objective functions. Thus the forms and contents of the objective functions are fairly limited and inflexible. Unfortunately, the forms and contents of the objective functions greatly affect the extent of the improvement in accuracy. In other words, the coverage of accurate wave numbers obtained by traditional optimized schemes is not wide enough because of the limitations imposed by the objective functions. Therefore there is still room for improving optimized schemes. In addition, traditional optimized schemes do not pay enough attention to the proper selection of the error limitation, which we find to be critical for a solid accuracy improvement. Traditional optimized schemes usually employ a relatively large error limitation to obtain a wide enough accurate wave number coverage.
Typically, the error limitations reported in the literature range from 0.0003 to 0.03 [10]. Although we may see a large accurate wave number coverage in theoretical analyses, we ultimately find that an optimized FD operator using a large error limitation is actually apt to yield less improvement, or even worse results, than un-optimized FD operators [26]. Therefore, we should try to obtain an accurate range that is as wide as possible; meanwhile, we should carefully select the error limitation to guarantee that the accuracy improvements are tangible.

Almost all previous optimized schemes use the 2-norm to construct the objective functions, because such objective functions can be easily solved by the least squares (e.g., [2,3,13,16,23,27]). However, the main task of optimization is to widen the accurate wave number coverage as much as possible; thus all our efforts should serve this task, including constructing the objective functions and solving them. We should not worry too much about whether the objective function is easy to solve by some existing method; after all, the 2-norm and the least squares are just one possible choice. Besides the 2-norm, we can use the 1-norm, the p-norm, and the maximum norm. Besides the least squares, we can use many other advanced optimization approaches, such as the simulated annealing algorithm [14], the genetic algorithm [11], and particle swarm optimization [5]. Therefore we may obtain much greater accuracy improvement, or even reach the theoretical upper limit, if we adopt a reasonable objective function, a powerful solver, and a proper error limitation.

In this paper, we employ the maximum norm to construct the objective functions and use the simulated annealing algorithm to minimize them. The maximum norm provides an intuitive and effective measure of the optimized FD operator.
The simulated annealing algorithm provides a more powerful tool for searching for the optimal coefficients. No longer constrained by the solver, we can freely design the forms and contents of the objective functions and try to make them more reasonable. The maximum norm and the simulated annealing algorithm allow us to use much smaller error limitations to make the accuracy improvements more concrete. Traditional optimization methods have difficulty evaluating the error before performing the optimization; thus, they empirically specify a possible wave number range that the optimized operator may cover. In consequence, the maximum error will be very high if the expected input wave number range is too big; otherwise, the accurate wave number range will be underestimated. In contrast, the new scheme proposed in this paper does not have such a problem. One can determine the preferred error limitation according to the modeling task, by either the size of the model or the duration of the record. Then the program will try its best to find the optimal coefficients under the given error limitation.

2. Basic forms of optimized FD operator

According to the sampling theory of discrete signals, a band-limited continuous signal f(x) can be recovered by sinc interpolation of the uniformly sampled values f_n as follows:

f(x) = \sum_{n=-\infty}^{\infty} \frac{\sin[\pi(x - n\Delta)/\Delta]}{\pi(x - n\Delta)/\Delta} \, f_n,    (1)

where \Delta is the grid interval along the spatial direction x, and f_n \equiv f(n\Delta) are the sampled values at the discrete positions n\Delta. For simplicity, we define u \equiv \pi/\Delta and \theta \equiv (x - n\Delta)u; hence the first four orders of spatial derivatives are

\frac{\partial f(x)}{\partial x} = u \sum_{n=-\infty}^{\infty} \left( \frac{\cos\theta}{\theta} - \frac{\sin\theta}{\theta^2} \right) f_n,    (2)

\frac{\partial^2 f(x)}{\partial x^2} = u^2 \sum_{n=-\infty}^{\infty} \left( \frac{2\sin\theta}{\theta^3} - \frac{2\cos\theta}{\theta^2} - \frac{\sin\theta}{\theta} \right) f_n,    (3)
\frac{\partial^3 f(x)}{\partial x^3} = u^3 \sum_{n=-\infty}^{\infty} \left( -\frac{6\sin\theta}{\theta^4} + \frac{6\cos\theta}{\theta^3} + \frac{3\sin\theta}{\theta^2} - \frac{\cos\theta}{\theta} \right) f_n,    (4)

\frac{\partial^4 f(x)}{\partial x^4} = u^4 \sum_{n=-\infty}^{\infty} \left( \frac{24\sin\theta}{\theta^5} - \frac{24\cos\theta}{\theta^4} - \frac{12\sin\theta}{\theta^3} + \frac{4\cos\theta}{\theta^2} + \frac{\sin\theta}{\theta} \right) f_n.    (5)

These expansions at x = 0 are as follows:

\left. \frac{\partial f(x)}{\partial x} \right|_{x=0} = -\sum_{n=-\infty}^{\infty} \frac{\cos(n\pi)}{n\Delta} f_n,    (6)

\left. \frac{\partial^2 f(x)}{\partial x^2} \right|_{x=0} = -\sum_{n=-\infty}^{\infty} \frac{2\cos(n\pi)}{n^2\Delta^2} f_n,    (7)

\left. \frac{\partial^3 f(x)}{\partial x^3} \right|_{x=0} = \sum_{n=-\infty}^{\infty} \left( \frac{\pi^2}{n\Delta^3} - \frac{6}{n^3\Delta^3} \right) \cos(n\pi) f_n,    (8)

\left. \frac{\partial^4 f(x)}{\partial x^4} \right|_{x=0} = \sum_{n=-\infty}^{\infty} \left( \frac{4\pi^2}{n^2\Delta^4} - \frac{24}{n^4\Delta^4} \right) \cos(n\pi) f_n,    (9)

respectively. We see that there is a singularity at n = 0. To avoid this singularity, we can also express the expansions, using symmetry, as follows [16]:

\left. \frac{\partial^m f(x)}{\partial x^m} \right|_{x=0} = \sum_{n=1}^{\infty} a'_n (f_n - f_{-n}) \quad \text{for odd } m,    (10)

\left. \frac{\partial^m f(x)}{\partial x^m} \right|_{x=0} = a'_0 f_0 + \sum_{n=1}^{\infty} a'_n (f_n + f_{-n}) \quad \text{for even } m,    (11)

where m is the order of the derivative and a'_n are the coefficients of f_n in Eqs. (6)–(9), n = 1, 2, ..., ∞.

The conventional explicit FD operators are actually the truncated Nth-order expansions multiplied by a window function a_n [8,4], where N is an even integer and a_n is the constant coefficient defined by the binomial formula

a_n = \binom{N}{N/2 + n} \Big/ \binom{N}{N/2}.    (12)

For the optimized scheme, the basic aim is to search for a new group of coefficients that differ from the above expansions and have better numerical performance. The final form of the optimized FD operator can be expressed as

\frac{\partial^m f(x)}{\partial x^m} \approx \frac{1}{\Delta^m} \sum_{n=-N/2}^{N/2} b_n f_n,    (13)

where b_n are the coefficients to be optimized. According to Fourier transform theory, the spatial derivatives can be equally expressed in the wave number domain as [15]

\frac{\partial^m f(x)}{\partial x^m} \;\leftrightarrow\; (i k_x)^m F(k_x),    (14)

where k_x is the wave number, F(k_x) is the forward Fourier transform of f(x), and i = \sqrt{-1}. Eq. (14) is the analytical expression of the spatial derivatives in the wave number domain, which covers the whole Nyquist bandwidth.
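As a concrete check of Eqs. (6), (12) and (13), the conventional coefficients for the first derivative can be generated by tapering the sinc-expansion coefficients with the binomial window; a minimal sketch in Python (the function name is ours):

```python
from math import comb, cos, pi

def conventional_first_derivative_coeffs(N, delta=1.0):
    """Conventional Nth-order FD coefficients for d/dx: the sinc-expansion
    coefficients a'_n = -cos(n*pi)/(n*delta) of Eq. (6), truncated at N/2
    and tapered by the binomial window a_n of Eq. (12)."""
    half = N // 2
    b = {0: 0.0}
    for n in range(1, half + 1):
        a_prime = -cos(n * pi) / (n * delta)           # Eq. (6) coefficient
        window = comb(N, half + n) / comb(N, half)     # Eq. (12) binomial window
        b[n] = a_prime * window
        b[-n] = -b[n]                                  # anti-symmetry for odd derivatives
    return b

b4 = conventional_first_derivative_coeffs(4)
# Reproduces the Taylor-series central-difference weights: b1 = 2/3, b2 = -1/12
```

The same construction with the coefficients of Eq. (7) reproduces the conventional second-derivative weights, which is why the optimized scheme can treat the windowed expansion as its starting point.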
Thus we can examine the accuracy of our optimized FD operators by comparing their Fourier transforms with the analytical wave number (i k_x)^m. When m = 1, applying a spatial Fourier transform to (13), we obtain the relation

i \tilde{k}_x F(k_x) = \frac{i}{\Delta} \sum_{n=-N/2}^{N/2} b_n \sin(n k_x \Delta) F(k_x) \approx i k_x F(k_x),    (15)

where \tilde{k}_x is defined as the wave number of the optimized FD operator and is an approximation of the analytical wave number k_x. When m = 2, we obtain the relation

-\tilde{k}_x^2 F(k_x) = \frac{1}{\Delta^2} \sum_{n=-N/2}^{N/2} b_n \cos(n k_x \Delta) F(k_x) \approx -k_x^2 F(k_x).    (16)

3. Objective functions using maximum norm

The 2-norm is the most popular criterion used to construct objective functions (e.g., [2,13,16,19,23,27]):

E = \int_0^{k_c} \left| (i k_x)^m - (i \tilde{k}_x)^m \right|^2 w(k_x) \, dk_x,    (17)

where k_c is the maximum accurate wave number under a given error limitation, \tilde{k}_x is the approximated wave number, and w(k_x) is some weight function. The least squares are usually used to find the optimized coefficients that minimize the objective functions. The optimized coefficients are determined by setting

\frac{\partial E}{\partial b_n} = 0,    (18)

and solving the resulting system of linear algebraic equations. The advantage of using the 2-norm and the least squares is that we obtain a unique group of optimized coefficients. However, the forms and contents of the objective functions are somewhat limited; that is, one cannot design the objective functions arbitrarily, since they must remain solvable by the least squares. This limitation makes it difficult to find the optimal group of coefficients, because the flexibility of designing the objective functions strongly influences the final accuracy. In fact, the 2-norm is only one of the candidates for examining the optimized coefficients. We can also use the 1-norm or the maximum norm (i.e., the infinity norm).
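Whichever norm is chosen, the raw ingredient is the deviation k_x\Delta - \sum_n b_n \sin(n k_x \Delta) of Eq. (15), sampled over the band of interest. A sketch of that evaluation (NumPy; the function name is ours, and the conventional 4th-order coefficients serve as input):

```python
import numpy as np

def max_abs_error_first_deriv(b, k_delta):
    """Worst-case deviation |k*Delta - sum_n b_n sin(n k Delta)| over the
    sampled band, with b = (b_1, ..., b_{N/2}); the anti-symmetric
    negative-n terms of Eq. (15) are folded in as a factor of 2."""
    n = np.arange(1, len(b) + 1)
    approx = 2.0 * np.sin(np.outer(k_delta, n)) @ np.asarray(b)
    return np.max(np.abs(k_delta - approx))

kd = np.linspace(0.0, 0.5 * np.pi, 501)               # 0-50% of the Nyquist band
err = max_abs_error_first_deriv([2/3, -1/12], kd)     # conventional 4th order
# err is about 0.24, dominated by the band edge at 50% of Nyquist
```

Replacing the max with a weighted sum of squares gives the 2-norm measure of Eq. (17); keeping the max gives the measure used in the next section.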
Following the theory proposed by Tam and Webb [23], Bogey and Bailly [3] minimize the relative difference rather than the traditional absolute difference. Using the 1-norm and proper weight functions, they obtain much higher accuracy than the standard explicit high-order methods. For all traditional optimization methods, however, it is difficult to provide a proper accurate wave number range, which is required before performing the optimization but is in fact critical for its success; in addition, they all fall back on the least squares when minimizing the objective function.

The p-norm is defined as

\|y\|_p = \left( \sum_{j=1}^{J} |y_j|^p \right)^{1/p}, \quad j = 1, 2, \ldots, J.    (19)

For p = 2, Eq. (19) denotes the 2-norm; for p → ∞, Eq. (19) denotes the maximum norm, which can also be expressed as

\|y\|_\infty = \max(|y_1|, |y_2|, \ldots, |y_J|).    (20)

Recalling that

\|y\|_\infty \le \|y\|_2,    (21)

we see that the maximum norm is not as strict as the 2-norm. Generally, if p > r > 0, we have

\|y\|_p \le \|y\|_r.    (22)

Inequality (22) indicates that the maximum norm is the loosest among all p-norms. Fortunately, this loosest constraint does not seriously affect the accuracy, since the value of \|y\|_\infty is comparable to that of the 2-norm and the 1-norm. The maximum norm provides the largest number of possible solutions under a given error limitation [24]. This greatly enhances the possibility of finding a group of optimized coefficients when scanning a vast solution set. On the other hand, checking the maximum deviation is more reasonable than checking the "distance" between the accurate and approximated wave numbers, since the optimization is not working in the space domain. Therefore, we choose the maximum norm as our criterion for designing the objective functions, to extend the accurate wave number coverage as widely as possible. In this paper, we examine the absolute error between the analytical wave numbers and the approximated wave numbers using the maximum norm.
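Inequalities (21) and (22) are easy to check on a toy vector; a quick sketch:

```python
import numpy as np

y = np.array([0.3, -0.7, 0.2, 0.5])
norm_1   = np.sum(np.abs(y))        # 1-norm
norm_2   = np.sqrt(np.sum(y**2))    # 2-norm
norm_inf = np.max(np.abs(y))        # maximum norm, Eq. (20)

# The ordering of inequality (22): the maximum norm is the loosest,
# the 1-norm the strictest.
assert norm_inf <= norm_2 <= norm_1
```

Any nonzero vector gives the same ordering, which is why a maximum-norm threshold admits a larger feasible set of coefficient vectors than a 2-norm threshold of the same size.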
For the optimized FD operators of the frequently used first- and second-order derivatives, the objective functions are

E(k_c, \varepsilon) \equiv \max_{0 \le k_x \le k_c} \left| k_x \Delta - \sum_{n=-N/2}^{N/2} b_n \sin(n k_x \Delta) \right| \le \varepsilon    (23)

and

E(k_c, \varepsilon) \equiv \max_{0 \le k_x \le k_c} \left| k_x^2 \Delta^2 + \sum_{n=-N/2}^{N/2} b_n \cos(n k_x \Delta) \right| \le \varepsilon,    (24)

respectively, where k_c is the maximum accurate wave number that the optimized FD operator can handle, and ε is the error limitation, also called the tolerance threshold. This is probably the most straightforward and simplest objective function in the literature. It is an intuitive and effective measure of the optimized FD operator.

4. Optimized scheme using simulated annealing

Despite being straightforward and simple, the maximum norm is actually seldom used in designing objective functions; in contrast, the 2-norm is popular. The main reason is that the maximum norm cannot be solved easily by the least squares. Holberg [10] presents the absolute error of the group velocity based on the maximum norm, but uses the 4-norm in practice in order to still apply the least squares when determining the optimized coefficients. Lele [17] suggests using the relative error of the optimized FD operator based on the maximum norm when designing a compact scheme (i.e., an optimized Padé scheme); however, he determines the optimized coefficients by solving linear algebraic equations at several specified wave numbers. Obviously, the maximum-norm problem is difficult to solve; thus we have to employ a much more sophisticated optimization approach. In this paper, we use the simulated annealing algorithm [14,22], as it is flexible in handling various optimization problems. The simulated annealing algorithm is also famous for finding global minima that are buried among many local minima. Therefore it is suitable for our purposes. Fig.
1 shows the flow chart for solving the objective functions based on the maximum norm using the simulated annealing algorithm. In fact, we obtain many quite different groups of reasonable solutions under a given error limitation, rather than only one group as with the least squares; thus we can further select the best one by a tradeoff between the accurate wave number coverage and the total error (or the peak error). The flow chart shown in Fig. 1 tries to find the best group of optimized coefficients under the given error threshold T by searching continuously until it cannot find a better group within the given iteration number N. The best group of optimized coefficients is the one that provides the widest accurate wave number range [0, k_c]. The temperature S basically controls the range of perturbation of the solution: a high temperature permits a wide range, and vice versa.

Fig. 1. Flow chart of the optimized scheme using the simulated annealing algorithm. E is the objective function shown in (23) or (24). b denotes the vector of b_{-N/2} to b_{N/2}; d denotes a small perturbation around b. a_0 and a_1 denote the lower and upper limits of b, respectively. e is the absolute error. T is the error limitation. S is the temperature, a is the cooling rate, and S_0 is the minimal value of S. n = 1, 2, ..., N, where N is the repeat number under the current temperature S. k = 1, 2, ..., K, where K is the total number of uniform samples of the wave number k_x ∈ [0, π). k_c = Mπ/K is the maximum accurate wave number found by the flow chart, and M is the index of k_c. F is a flag: 1 for true and 0 for false. R[0, 1] is a function generating a random number between 0 and 1.
Fig. 2. Influence of the error thresholds on the optimized FD operators. (a) A global view of absolute error within [-0.6, 0.1]; (b) a local view within [-0.001, 0.0004]. The FD operators are for the second derivative along the spatial direction. Curves denote the absolute errors between the wave number of the FD operators and the analytical wave number. Thin solid curves denote the conventional FD operators, with numbers indicating the order. Bold curves denote the optimized 8th-order FD operators using different error limitations ε.

Table 1. Optimized coefficients for high-order FD operators of the first derivative.(a)

        4th-order     6th-order     8th-order     10th-order    12th-order
b1      0.67880327    0.77793115    0.84149635    0.88414717    0.91067892
b2     -0.08962729   -0.17388691   -0.24532989   -0.30233648   -0.34187892
b3                    0.02338713    0.06081891    0.10275057    0.13833962
b4                                 -0.00839807   -0.02681517   -0.04880710
b5                                                0.00398089    0.01302148
b6                                                             -0.00199047

(a) b_0 = 0 and b_{-n} = -b_n, n = 1, 2, ..., N/2, where N is the order of the optimized FD operator.

For each search procedure at the kth wave number, the temperature S is very high (e.g., 10 000) at the beginning, to guarantee that the best solution can be obtained. The temperature then gradually decreases by a factor of a = 0.99. For any estimated coefficients c (i.e., b_{-N/2} ... b_{N/2}), small perturbations d are added to test whether there are any better
coefficients around c. If there are, the perturbed coefficients a = c + d are set as the initial values for the next search procedure at n + 1; if there are none, they are still taken as a potential candidate for the initial values with a random possibility, by R[0, 1] < exp{[E(c) - E(a)]/S}, where E is the maximum of the absolute errors between the analytical and approximated wave numbers over [0, k] (see Eqs. (23) and (24)).

Table 2. Optimized coefficients for high-order FD operators of the second derivative.(a)

        4th-order     6th-order     8th-order     10th-order    12th-order
b0     -2.55616844   -2.8215452    -2.97581692   -3.06801592   -3.12513824
b1      1.37140059    1.57663238    1.70664680    1.78858721    1.84108651
b2     -0.09331637   -0.18347238   -0.25959423   -0.31660756   -0.35706478
b3                    0.01761260    0.04618682    0.07612137    0.10185626
b4                                 -0.00533093   -0.01626042   -0.02924772
b5                                                0.00216736    0.00696837
b6                                                             -0.00102952

(a) b_{-n} = b_n, n = 1, 2, ..., N/2, where N is the order of the optimized FD operator.

Fig. 3. Accuracy comparison between the conventional and optimized FD operators of the first derivative. (a) A global view of absolute error within [-0.6, 0.1]; (b) a local view within [-0.0006, 0.0002]. Curves denote the absolute errors between the wave number of the FD operators and the analytical wave number. Thin solid curves denote the conventional FD operators, with numbers indicating the order. Bold curves denote the optimized FD operators using the error limitation ε = 0.0001.

If some group of coefficients satisfies
E(a) < T, then it is remembered as a potential solution b for [0, k]. Next, we test whether there is a solution for [0, k + 1]. If there is none, the coefficients b for [0, k] become the final solution under the given error threshold T, and the maximum accurate wave number k_c is kπ/K.

The total number of optimized coefficients is N + 1, i.e., b_{-N/2} ... b_{N/2}, which is difficult to determine with the simulated annealing algorithm when N is large. To reduce the optimization effort, Zhang and Yao [26] set up three criteria, according to the theories of sinc interpolation [4] and finite impulse response [21], for the second-order derivative. We extend Zhang and Yao's criteria to more general cases: (1) the coefficients should be real numbers, b_n ∈ R, and the operator should be symmetric for even-order derivatives (i.e., b_{-n} = b_n) and anti-symmetric for odd-order derivatives (i.e., b_{-n} = -b_n); (2) the total energy of the optimized FD operator should be zero for both even- and odd-order derivatives, that is, \sum_{n=-N/2}^{N/2} b_n = 0; (3) the coefficients should have amplitudes of damped oscillation away from the center position (n = 0), that is, |b_n| > |b_{n+1}| and b_n b_{n+1} < 0 for n ≥ 0; (4) to cover a much wider wave number range, all coefficients should be as large as possible (including b_0 and b_{N/2}). Rules 1 and 2 reduce the actual number of coefficients to only N/2, since b_0 = -2 \sum_{n=1}^{N/2} b_n for even-order derivatives and b_0 = 0 for odd-order derivatives (by anti-symmetry). Thus we can obtain the whole operator by determining only the independent coefficients b_1 ... b_{N/2} (or b_{-N/2} ... b_{-1}). Rules 3 and 4 greatly decrease the scope of the search and make the simulated annealing algorithm affordable for high-order FD operators. In fact, the original coefficients of the conventional FD operators also obey these criteria.
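The loop of Fig. 1 can be condensed into a short sketch: perturb the coefficients, always accept downhill moves, accept uphill moves with probability exp{[E(b) - E(cand)]/S}, and cool geometrically. This simplified version fixes the target band instead of scanning k_c outward, and the cooling parameters and perturbation scale are our own choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def max_err(b, kd):
    """Max-norm objective of Eq. (23) for the first derivative,
    with b = (b_1, ..., b_{N/2}) and b_{-n} = -b_n folded in."""
    n = np.arange(1, b.size + 1)
    return np.max(np.abs(kd - 2.0 * np.sin(np.outer(kd, n)) @ b))

def anneal(b0, kd, S=0.05, S_min=1e-5, alpha=0.995, n_inner=20):
    """Simulated annealing over the coefficient vector (simplified Fig. 1)."""
    b, e = b0.copy(), max_err(b0, kd)
    best_b, best_e = b.copy(), e
    while S > S_min:
        for _ in range(n_inner):
            cand = b + 0.05 * np.sqrt(S) * rng.standard_normal(b.size)  # perturbation d
            ec = max_err(cand, kd)
            # Metropolis rule: R[0,1] < exp{[E(b) - E(cand)]/S}
            if ec < e or rng.random() < np.exp((e - ec) / S):
                b, e = cand, ec
                if e < best_e:
                    best_b, best_e = b.copy(), e
        S *= alpha                                                      # cooling
    return best_b, best_e

kd = np.linspace(0.0, 0.5 * np.pi, 201)   # optimize up to 50% of Nyquist
b_conv = np.array([2/3, -1/12])           # conventional 4th-order start
b_opt, e_opt = anneal(b_conv, kd)
# e_opt falls well below the conventional operator's error on this band
```

The paper's full scheme additionally scans k_c outward under the threshold T and enforces rules 1-4 during the search; this sketch only shows the annealing core.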
Fig. 4. Accuracy comparison between the conventional and optimized FD operators of the second derivative. (a) A global view of absolute error within [-0.6, 0.1]; (b) a local view within [-0.0006, 0.0002]. See Fig. 3 for detailed captions.

Fig. 5. Accuracy comparison between Tam and Webb's operator and our optimized FD operators of the first derivative. Thin solid curves denote the conventional FD operators, with numbers indicating the order. Bold curves denote the optimized FD operators using different error limitations. The four bold curves on the left side are for the 6th-order FD operators and the two bold curves on the right side are for the 12th-order FD operators, respectively.

Table 3. Optimized coefficients used in Figs. 5–7.(a)

        Tam & Webb's, 0.01   Ours, 0.01    Ours, 0.005   Ours, 0.0005
b1      0.79926643           0.80961299    0.80359697    0.92014414
b2     -0.18941314          -0.20184244   -0.19704245   -0.35645100
b3      0.02651995           0.03151878    0.03021447    0.15232195
b4                                                      -0.05826689
b5                                                       0.01743499
b6                                                      -0.00314725

(a) b_0 = 0 and b_{-n} = -b_n, n = 1, 2, ..., 6. Our optimized coefficients using 0.0001 are listed in Tables 1 and 2.

5.
Proper selection of error threshold

The error threshold ε plays an important role in the optimized scheme of the FD operator [25,26]. A small error limitation (e.g., 0.00001) guarantees the accuracy of the resulting operators, but makes it difficult to gain an apparent improvement. A big error limitation (e.g., 0.0003–0.03, as suggested by Holberg [10] and by many other works) easily covers a much wider wave number range; unfortunately, the practical performance in numerical experiments may then deviate greatly from the theoretical analyses, especially for large travel times or at large distances. Therefore it is necessary to select a proper error limitation for the objective functions, to guarantee that the accuracy improvements are apparent and solid.

Using the phase velocity is more convenient than using the group velocity when designing the objective function [6,13,16]. Basically, the absolute error of the operator in the wave number domain behaves similarly to the phase velocity. Holberg [10] points out that an objective function based on the phase velocity should have a much smaller error limitation than one based on the group velocity, about one order of magnitude smaller. However, our experiments show that the practical error limitation need not be as small as suggested by Holberg [10] to obtain the same accuracy. Nevertheless, we still suggest using a tight error limitation, since in practice the requirement on accuracy is always increasing.

6. Absolute-error analyses

We evaluate the accuracy performance of the optimized FD operators by examining their absolute errors in the wave number domain. First, we show the influence of different error limitations on accuracy. In Fig. 2 we take the 8th-order FD operator of the second derivative as an example.
Obviously, the absolute-error curves of the conventional FD operators increase gradually with increasing wave number, whereas the absolute-error curves of the optimized FD operators oscillate rapidly within the given error limitation. The optimized FD operators have a much wider accurate wave number coverage, at the cost of larger errors that are evenly distributed within that "accurate wave number" coverage. Fortunately, these error limitations are small enough for many practical applications.

Fig. 6. Accuracy comparison between Bogey and Bailly's operators and our optimized FD operators of the first derivative. Black thin solid curves denote the conventional FD operators, with numbers indicating the order. Dashed curves denote the 8th-order optimized FD operators, the colored thin curves denote the 10th-order optimized FD operators, and the bold solid curves denote the 12th-order optimized FD operators. The red curves are generated using the error limitation 0.0001, and the green curve using 0.0005. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
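Why tight error limitations matter for large models can be sketched with back-of-envelope arithmetic (the bound and the function below are ours, following the section's reasoning): the objective functions (23)–(24) bound the wave number error by ε/Δ, so a wave accumulates at worst ε radians of phase error per grid cell traversed.

```python
import math

def cells_until_phase_error(eps, phi=math.pi):
    """Grid cells a wave can traverse before the worst-case accumulated
    phase error eps * n_cells reaches phi radians."""
    return phi / eps

# eps = 0.03 can ruin the phase within ~105 cells;
# eps = 0.0001 sustains ~31 416 cells.
budget_loose = cells_until_phase_error(0.03)
budget_tight = cells_until_phase_error(0.0001)
```

This is exactly the regime of the snapshots in Figs. 9 and 10: thousands of cells of propagation, where the loose thresholds of the earlier literature (0.0003–0.03) exhaust their phase budget long before the record ends.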
Fig. 7. Comparison between numerical results obtained by 6th-order FD operators. Dashed curves are obtained by the analytical solution, and solid curves are obtained by (a) the conventional 6th-order FD operator; (b) Tam and Webb's operator (corresponding to ε > 0.01); (c) the optimized 6th-order FD operator using ε = 0.0005; (d) the optimized 6th-order FD operator using ε = 0.0001.

The error limitations listed in Fig. 2 (i.e., 0.00005–0.0004) are all far below those reported in the literature (e.g., 0.0003–0.03). Obviously, a bigger error limitation leads to greater accuracy improvements but much larger peak errors; meanwhile, we see that merely doubling the error limitation earns similar accuracy improvements, as indicated by the black dots and the vertical arrows. Therefore, we have to strike a balance between the accuracy improvements and the peak errors. We prefer a tight error limitation to avoid rapid error accumulation, especially for large-scale and long-term problems. We select 0.0001 as our error limitation for later experiments. Tables 1 and 2 list the optimized coefficients under this error limitation for the first and second derivatives, respectively.

Figs. 3 and 4 show the 4th- to 12th-order optimized FD operators for the first and second derivatives, respectively. Obviously, the optimized FD operator has a wider accurate wave number coverage than the conventional FD operator of the same order.
In addition, the higher-order optimized FD operators have much wider accurate wave number coverage than do the lower-order optimized FD operators. For example, the accuracy of the optimized 4th-order FD operator is only slightly higher than that of the conventional 4th-order FD operator. In contrast, the accuracy of the optimized 8th-order FD operator is much higher than that of the conventional 8th-order FD operator, and even reaches that of the conventional 12th-order FD operator. Furthermore, the accuracy of the optimized 12th-order FD operator is much higher than that of the conventional 12th-order FD operator and even reaches that of the conventional 24th-order FD operator. Therefore, we suggest using the higher-order optimized FD operators in practical applications, since they have much higher accuracy than the lower-order optimized FD operators.

7. Experiments on 1D advection equation

To illustrate the optimized scheme proposed in this paper, we compare our optimized FD operators with Tam and Webb's optimized FD operator [23]. We consider the 1D advection equation

\frac{\partial u}{\partial t} + \frac{\partial u}{\partial x} = 0,    (25)

with an initial disturbance

u(x, t = 0) = \exp\left[ -\ln 2 \, \frac{(x - 20)^2}{r^2} \right],    (26)

where 0 ≤ x ≤ 400 and the grid interval Δx = 1. Fig. 5 shows the curves of three different 6th-order optimized FD operators using error limitations of 0.01, 0.005 and 0.0001. The optimized coefficients used in Fig. 5 are listed in Table 3.

Fig. 8. Comparison between numerical results obtained by 12th-order FD operators. Dashed curves are obtained by the analytical solution, and solid curves are obtained by (a) the conventional 12th-order FD operator; (b) the conventional 24th-order FD operator; (c) the optimized 12th-order FD operator using ε = 0.001; (d) the optimized 12th-order FD operator using ε = 0.0001.

Fig. 9. Snapshot comparison among different methods at three travel times: (a) 3 s; (b) 6 s; (c) 9 s. Each subfigure has four equivalent parts separated by dashed lines. Snapshots are generated by the conventional 12th-order, 24th-order and 36th-order FD methods and the optimized 12th-order FD method, indicated by N12, N24, N36 and O12, respectively. The conventional 36th-order FD method is shown as a reference.

Fig. 10. Comparison of local details in Fig. 9. The depth range is 0 ≤ z ≤ 500 m for all subfigures. Left column: 500 ≤ x ≤ 1400 m at 3 s; middle column: 1500 ≤ x ≤ 2400 m at 6 s; right column: 1600 ≤ x ≤ 2500 m at 9 s.

Obviously, our three operators have smaller peak errors than that of Tam and Webb [23]. In addition, one of our operators shows wider wave number coverage as well as smaller peak errors than does Tam and Webb's operator. This indicates that our maximum-norm objective functions and the simulated annealing algorithm are better than the 2-norm objective functions and the least squares.
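The advection test of Eqs. (25) and (26) can be reproduced with a short driver. The sketch below uses the conventional 6th-order operator with classic RK4 time stepping on a periodic grid; the pulse is centered at x = 200 (rather than x = 20) to keep it away from the boundary, and the time step and run length are our own illustrative choices.

```python
import numpy as np

dx, dt, r = 1.0, 0.5, 8.0
x = np.arange(0.0, 400.0, dx)
u = np.exp(-np.log(2.0)*(x - 200.0)**2 / r**2)   # Gaussian pulse, Eq. (26)

C6 = (3/4, -3/20, 1/60)   # conventional 6th-order first-derivative weights

def dudx(u):
    # Central difference: (1/dx) * sum_m c_m (u[j+m] - u[j-m]), periodic BC.
    return sum(c*(np.roll(u, -(m + 1)) - np.roll(u, m + 1))
               for m, c in enumerate(C6)) / dx

for _ in range(100):                  # advance Eq. (25) to t = 50
    k1 = -dudx(u)
    k2 = -dudx(u + 0.5*dt*k1)
    k3 = -dudx(u + 0.5*dt*k2)
    k4 = -dudx(u + dt*k3)
    u = u + dt/6*(k1 + 2*k2 + 2*k3 + k4)

# Exact solution: the initial pulse advected by t = 50.
exact = np.exp(-np.log(2.0)*(x - 250.0)**2 / r**2)
```

For r = 8 the pulse is well resolved, so the conventional operator already tracks the exact solution closely; the differences among operators in Figs. 7 and 8 emerge for sharper pulses (smaller r) and longer propagation.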
On the other hand, the other two operators obtained by our scheme seem to have narrower wave number coverage than Tam and Webb's operator; surprisingly, our numerical experiments show that this is actually not the case.

We also compare our optimized coefficients with some existing optimized coefficients for high-order FD operators [3]. Bogey and Bailly refer to the 8th-, 10th- and 12th-order operators here as 9-, 11- and 13-point stencils. Fig. 6 shows the differences between our results and theirs. For each order listed, our optimized operator generally has a quite similar wave number range but a much smaller maximum error. For example, our operators have an error limitation of only 0.0001 (see the red curves), whereas theirs have error limitations of 0.0011, 0.0002 and 0.0006 (see the blue curves), respectively. If we use a much looser error limitation, such as 0.0005 (see the green bold curve), the wave number coverage becomes much wider compared with the blue bold curve. We do not show the waveform comparison since the difference in waveform is not significant for a small-scale model or a short record duration. However, note that a big error in the wave number domain always carries a big risk for long-duration or over-sized problems.

As shown in Fig. 7 (corresponding to r = 8), the optimized FD operator using ε = 0.005 is apparently superior to Tam and Webb's operator, which uses ε = 0.01; in addition, the optimized FD operator using the error limitation of 0.0001 is better than that using 0.005. Fig. 8 (corresponding to r = 3) shows the case of the 12th-order optimized FD operators, which also indicates that a small error limitation is better than a large one. Therefore, we should use a small error limitation rather than purely pursuing much wider wave number coverage by arbitrarily relaxing the error limitation.

8. Experiments on 2D wave equation

In this section, we consider the 2D scalar wave equation

\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial z^2} = \frac{1}{v^2(x, z)} \frac{\partial^2 u}{\partial t^2} + \delta(x_s, z_s)\, S(x, z, t),    (27)

where t is the time, v(x, z) is the 2D velocity function, and u ≡ u(x, z; t) is the wave field. We take the Ricker wavelet

S(x, z, t) = (1 - 2\pi^2 \omega^2 t^2) \exp(-\pi^2 \omega^2 t^2)    (28)

as the initial waveform, where ω is the dominant frequency.

First, we illustrate the above absolute-error analyses by impulse responses in a 2D homogeneous medium, where −2500 ≤ x ≤ 2500 m and −2500 ≤ z ≤ 2500 m with a uniform grid spacing of 5 m. The source is located at xs = 0 m and zs = 0 m. The velocity is v = 2000 m/s and the dominant frequency is ω = 50 Hz. Note that the dominant frequency and the velocity used here almost reach the upper limit that is fairly difficult to handle in practice, and the scale of the model is also typical of practical applications in geophysical exploration (e.g., [25]). Figs. 9(a)–(c) show the wavefield snapshots at 3 s, 6 s and 9 s, respectively. Fig. 10 shows the local details of Fig. 9. Obviously, the optimized 12th-order FD method obtains much better results than the conventional 12th-order FD method.

Fig. 11. Waveform comparison among different methods for the SEG/EAGE salt model. (a) Modified SEG/EAGE salt model; (b) 4.0–4.25 s (beginning from the first arrival); (c) 7.0–7.25 s; (d) 10.0–10.25 s (including multiple reflected waves from the salt dome). Waveforms are generated by the conventional 12th-order and 24th-order FD methods and the optimized 12th-order FD method, respectively. The conventional 36th-order FD method is shown as a reference (see dashed curves).
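A minimal 2D driver in the spirit of Eqs. (27) and (28) can be sketched as follows. For brevity it uses a 4th-order spatial operator rather than the 12th-order operators compared above, and the grid size, time step, source scaling and the 25 Hz time-shifted Ricker are our own illustrative choices, not the paper's setup (which uses ω = 50 Hz on a larger grid).

```python
import numpy as np

nx = nz = 201
dx = 5.0                    # grid spacing (m)
v = 2000.0                  # homogeneous velocity (m/s)
dt = 0.001                  # time step (s); v*dt/dx = 0.4, within the CFL limit
f0, t0 = 25.0, 0.06         # Ricker dominant frequency (Hz) and time shift (s)

def ricker(t):
    # Time-shifted version of Eq. (28).
    a = np.pi**2 * f0**2 * (t - t0)**2
    return (1.0 - 2.0*a) * np.exp(-a)

def lap(u):
    # 4th-order second-derivative weights (-1/12, 4/3, -5/2, 4/3, -1/12)/dx^2,
    # applied along both axes to form the Laplacian.
    w = (-1/12, 4/3, -5/2, 4/3, -1/12)
    out = np.zeros_like(u)
    for s, c in zip((-2, -1, 0, 1, 2), w):
        out += c*(np.roll(u, s, axis=0) + np.roll(u, s, axis=1))
    return out / dx**2

u_prev = np.zeros((nz, nx))
u = np.zeros((nz, nx))
src = (100, 100)            # grid indices of the source

for it in range(200):       # simulate 0.2 s with 2nd-order leapfrog in time
    u_next = 2*u - u_prev + (v*dt)**2 * lap(u)
    u_next[src] += dt**2 * ricker(it*dt)   # point-source injection (scaling illustrative)
    u_prev, u = u, u_next
```

After 0.2 s the wavefront sits near a radius of v·(0.2 − t0) ≈ 280 m from the source, well inside the grid, which is the kind of impulse response compared in Figs. 9 and 10.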
In addition, the results obtained by the optimized 12th-order FD method are quite similar to those obtained by the conventional 24th-order FD method. Figs. 9 and 10 indicate that the improvement brought by our optimization scheme is significant, even for a large-scale model at a long travel time. Note that these numerical experiments show perfect agreement with the theoretical analyses in the previous section.

To verify the capabilities of our optimized FD operators, we simulate wave propagation on a modified SEG/EAGE salt model [1]. Fig. 11(a) shows the velocity model, and Figs. 11(b)–(d) show the waveforms at three time windows. For convenience of comparison, we take the waveforms generated by the conventional 36th-order FD method as references, as shown by the dashed curves. We see that the waveforms generated by the conventional 12th-order FD method evidently deviate from the reference waveforms due to numerical dispersion. In contrast, the waveforms obtained by the optimized 12th-order FD method are almost the same as those obtained by the conventional 24th-order FD method. In addition, the waveforms obtained by the optimized 12th-order FD method show only slight differences from the reference waveforms at 10 s (see Fig. 11(d)). Fig. 11 indicates that the optimized FD method is superior to the conventional FD method of the same order. Again, these conclusions are consistent with the theoretical and numerical analyses in the previous sections.

9. Discussions

Only the absolute error is used in our objective functions. In fact, we could also try the relative error [3], add a proper weight function to emphasize important wave numbers, or try any other possible form of the objective function to obtain further improvement. Fortunately, such extensions of the proposed scheme are straightforward.
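To make such extensions concrete, here is a sketch of how alternative objective functions might look for a first-derivative stencil, all evaluated in the maximum norm. The function name is ours, and the exponential weight is a hypothetical example of a wave number weighting, not one proposed in the paper.

```python
import numpy as np

def max_norm_objective(coeffs, theta, mode="absolute"):
    """Maximum-norm objective over the wave number samples theta = k*dx."""
    approx = 2.0 * sum(c * np.sin((m + 1) * theta) for m, c in enumerate(coeffs))
    if mode == "absolute":
        err = np.abs(approx - theta)            # the paper's choice
    elif mode == "relative":
        err = np.abs(approx - theta) / theta    # relative error, cf. [3]
    else:
        # Hypothetical weight emphasizing well-resolved (low) wave numbers.
        err = np.exp(-theta) * np.abs(approx - theta)
    return np.max(err)
```

Any of these variants can be dropped into the simulated annealing search unchanged, since the annealer only needs a scalar objective value.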
As a general approach for optimizing FD operators, our optimized scheme can be applied to other orders of spatial derivatives, for example the third and fourth derivatives [17,20]. In general, the optimized scheme is applicable to various equations that do not contain cross derivatives. This paper concentrates only on the FD discretization of spatial derivatives. We could also take the FD discretization of the temporal derivative into account [3,4,12] or extend our method to higher-order time discretization [18]. Of course, this scheme is also applicable to compact FD operators [17,27] to achieve additional accuracy improvements.

10. Conclusions

We present a new optimization scheme to reduce the numerical dispersion of high-order explicit FD methods. The objective functions are constructed with the maximum norm rather than the traditional 2-norm; in addition, we solve the objective functions using the simulated annealing algorithm rather than the traditional least squares. The maximum norm provides us with the largest number of possible solutions, which greatly enhances the possibility that the simulated annealing algorithm finds the optimized coefficients over a vast solution set. We show that the error limitation is essential for solid accuracy improvements. A small error limitation is superior to a large one, although theoretical analyses alone might suggest the opposite. This indicates that we should use a small error threshold (e.g., 0.0001) to guarantee accuracy for large-scale modeling with long travel times, rather than purely pursuing accurate wave number coverage by arbitrarily relaxing the error limitations. For both the first and second spatial derivatives, our optimized 8th-order FD method has the same accuracy as the conventional 12th-order FD method, and our optimized 12th-order FD method has the same accuracy as the conventional 24th-order FD method.
This means we can greatly save on both memory demand and computational cost when using our optimized high-order FD methods.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (Grant Nos. 41074092 and 41130418) and the National Major Project of China (Grant No. 2011ZX05008-006).

References

[1] F. Aminzadeh, N. Burkhard, J. Long, T. Kunz, P. Duclos, Three dimensional SEG/EAEG models—An update, The Leading Edge 15 (1996) 131–134.
[2] G. Ashcroft, X. Zhang, Optimized prefactored compact schemes, J. Comput. Phys. 190 (2003) 459–477.
[3] C. Bogey, C. Bailly, A family of low dispersive and low dissipative explicit schemes for flow and noise computations, J. Comput. Phys. 194 (2004) 194–214.
[4] C. Chu, P.L. Stoffa, Determination of finite-difference weights using scaled binomial windows, Geophysics 77 (2012) W17–W26.
[5] R.C. Eberhart, J. Kennedy, A new optimizer using particle swarm theory, in: Proceedings of the Sixth International Symposium on Micro Machine and Human Science, Nagoya, Japan, 1995, pp. 39–43.
[6] J.T. Etgen, A tutorial on optimizing time domain finite-difference schemes: "Beyond Holberg", Stanford Exploration Project Report 129 (2007) 33–43.
[7] J.T. Etgen, M.J. O'Brien, Computational methods for large-scale 3D acoustic finite-difference modeling: A tutorial, Geophysics 72 (2007) SM223–SM230.
[8] B. Fornberg, Calculation of weights in finite difference formulas, SIAM Review 40 (1998) 685–691.
[9] Z. Haras, S. Ta'asan, Finite difference schemes for long-time integration, J. Comput. Phys. 114 (1994) 265–279.
[10] O. Holberg, Computational aspects of the choice of operator and sampling interval for numerical differentiation in large-scale simulation of wave phenomena, Geophys. Prospect. 35 (1987) 629–655.
[11] J.H. Holland, Genetic algorithms, Sci. Am. 267 (1992) 66–72.
[12] F.Q. Hu, M.Y. Hussaini, J.L. Manthey, Low-dissipation and low-dispersion Runge–Kutta schemes for computational acoustics, J. Comput. Phys. 124 (1996) 177–191.
[13] J.W. Kim, D.J. Lee, Optimized compact finite difference schemes with maximum resolution, AIAA J. 34 (1996) 887–893.
[14] S. Kirkpatrick, C.D. Gelatt, M.P. Vecchi, Optimization by simulated annealing, Science 220 (1983) 671–680.
[15] D.D. Kosloff, E. Baysal, Forward modeling by a Fourier method, Geophysics 47 (1982) 1402–1412.
[16] C. Lee, Y. Seo, A new compact spectral scheme for turbulence simulations, J. Comput. Phys. 183 (2002) 438–469.
[17] S. Lele, Compact finite difference schemes with spectral-like resolution, J. Comput. Phys. 103 (1992) 16–42.
[18] X. Li, W. Wang, M. Lu, M. Zhang, Y. Li, Structure-preserving modelling of elastic waves: a symplectic discrete singular convolution differentiator method, Geophys. J. Int. 188 (2012) 1382–1392.
[19] J. Gao, Z. Yang, X. Li, An optimized spectral difference scheme for CAA problems, J. Comput. Phys. 231 (2012) 4848–4866.
[20] Y. Liu, M.K. Sen, A practical implicit finite-difference method: examples from seismic modelling, J. Geophys. Eng. 6 (2009) 231–249.
[21] A.V. Oppenheim, R.W. Schafer, J.R. Buck, Discrete-Time Signal Processing, 2nd ed., Prentice-Hall, New York, 1999.
[22] M.K. Sen, P.L. Stoffa, Global Optimization Methods in Geophysical Inversion, 2nd ed., Cambridge University Press, Cambridge, 2013.
[23] C.K.W. Tam, J.C. Webb, Dispersion-relation-preserving finite difference schemes for computational acoustics, J. Comput. Phys. 107 (1993) 262–281.
[24] A. Tarantola, Inverse Problem Theory and Methods for Model Parameter Estimation, SIAM, Philadelphia, 2005.
[25] J.H. Zhang, Z.X. Yao, Globally optimized finite-difference extrapolator for strongly VTI media, Geophysics 77 (2012) S125–S135.
[26] J.H. Zhang, Z.X. Yao, Optimized finite-difference operator for broadband seismic wave modeling, Geophysics 78 (2013) A13–A18.
[27] H. Zhou, G. Zhang, Prefactored optimized compact finite-difference schemes for second spatial derivatives, Geophysics 76 (2011) WB87–WB95.
