Computational Biomechanics Project Second Stage


Circulatory System Simulation Team
Visiting Researcher: Kuniyoshi Abe

Overview of Research

When attempting to elucidate phenomena in various fields of science and engineering, one frequently ends up with linear equations of the form Ax = b whose coefficient is a large sparse matrix (a high-dimensional matrix in which most elements are 0). There is no practical way to solve such equations other than computing an approximate solution on a computer, which operates with a finite number of digits; a numerical approximation method is therefore indispensable. My research concerns iterative numerical methods for solving large sparse matrix equations rapidly.
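As a minimal illustration of such an iterative method (not the author's own scheme), the following sketch solves a sparse tridiagonal system, of the kind arising from a 1D discretization, with plain Jacobi iteration; the matrix and tolerances are illustrative assumptions.

```python
import numpy as np

# Illustrative sparse system: a tridiagonal matrix (dense storage here
# only for brevity; a real code would use a sparse format).
n = 50
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1))
b = np.ones(n)

D = np.diag(A)                # diagonal part of A
x = np.zeros(n)
for k in range(20000):        # Jacobi iteration
    r = b - A @ x             # residual
    if np.linalg.norm(r) < 1e-8 * np.linalg.norm(b):
        break
    x = x + r / D             # update: x += D^{-1} r
```

Jacobi converges slowly on this problem, which is precisely why faster iterative methods and good preconditioners matter.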

The first step in this research is to develop a numerical method that is faster, or more robust with respect to rounding errors, than existing methods. In the various fields of computational science and engineering, it is critical to solve the linear equations that arise as efficiently and as quickly as possible in order to elucidate large-scale phenomena rigorously. A technique called preconditioning is often used in conjunction with iterative methods. Preconditioning transforms the system into a mathematically equivalent one that can be solved more efficiently, and it is known to greatly reduce the number of iterations required. It is therefore critical to develop a good preconditioning method in order to improve convergence and reduce computation time.
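The effect of preconditioning can be sketched with the standard preconditioned conjugate gradient (PCG) method using a simple Jacobi (diagonal) preconditioner K = diag(A); the test matrix and sizes below are illustrative assumptions, not taken from the article.

```python
import numpy as np

# PCG with Jacobi preconditioner K = diag(A). Applying K^{-1} is just a
# componentwise scaling, so the extra cost per iteration is negligible.
rng = np.random.default_rng(0)
n = 200
# SPD test matrix with a strongly varying diagonal (diagonal scaling helps)
B = 0.1 * rng.standard_normal((n, n))
A = np.diag(np.arange(1.0, n + 1)) + 0.5 * np.eye(n) + B @ B.T
b = rng.standard_normal(n)

Kinv = 1.0 / np.diag(A)       # K^{-1} as a vector of scalings
x = np.zeros(n)
r = b - A @ x
z = Kinv * r                  # preconditioned residual
p = z.copy()
for k in range(1000):
    Ap = A @ p
    alpha = (r @ z) / (p @ Ap)
    x += alpha * p
    r_new = r - alpha * Ap
    if np.linalg.norm(r_new) < 1e-10 * np.linalg.norm(b):
        r = r_new
        break
    z_new = Kinv * r_new
    beta = (r_new @ z_new) / (r @ z)
    p = z_new + beta * p
    r, z = r_new, z_new
```

A stronger preconditioner (e.g., an incomplete factorization) would reduce the iteration count further, at a higher cost per application.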

In existing preconditioning methods, the same preconditioner is applied at every iteration. As a result, the preconditioner may not be well suited to a given iteration, and sufficient convergence or a reduction in computation time cannot always be obtained. To develop a better preconditioning method, I therefore propose applying a different preconditioner adaptively at each iteration. Because the preconditioner can change from iteration to iteration, a greater improvement in convergence and a larger reduction in computation time can be expected. In numerical experiments, the proposed method showed a pronounced effect on problems for which existing methods failed to converge.

The second step is to accelerate the computation through parallelization. Because the proposed preconditioning combines different solution methods, the efficiency gained from parallelization differs from the case in which a single method is used. A set of conditions that draws the best performance out of the proposed preconditioning must therefore be devised, taking into account the combination of solution methods, their convergence, the fraction of computation time spent in each combined method, and the parallelization techniques applicable to each.

Instead of computing K^{-1}r_k, which arises in a preconditioned Krylov subspace algorithm, the proposed preconditioning computes an approximation to A^{-1}r_k at each iteration (hereafter referred to as the "internal iteration"). Here, K denotes the preconditioner matrix and r_k the vector arising in the algorithm. Arbitrary iterative algorithms (e.g., a stationary iterative method or a Krylov subspace method) can be combined for this internal iteration. The effect of preconditioning is significant when the successive over-relaxation (SOR) method or a preconditioned Krylov subspace method is used, and the number of required iterations can be expected to decrease substantially. At the same time, parallelization is challenging in these cases, and the computation time may not improve because vectorization is prevented. On the other hand, the computation time can be reduced through vectorization when the Gauss-Seidel method or an unpreconditioned Krylov subspace method is used; in these cases, however, the preconditioning effect is weaker, the number of iterations cannot be expected to decrease, and the solution may even fail to converge. Using Fujitsu's VPP700E (a distributed-memory machine), I have demonstrated an efficient way to apply the proposed preconditioning from the perspectives of vectorization rate, computation time, convergence, and the combination of solution methods.
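The inner-outer idea above can be sketched as follows: each outer step approximates A^{-1}r_k with a few Gauss-Seidel sweeps (the "internal iteration"), and the number of sweeps varies per step, so the effective preconditioner changes adaptively. The outer loop here is a simple flexible Richardson iteration; the matrix, sweep schedule, and tolerances are illustrative assumptions, not the article's exact algorithm.

```python
import numpy as np

def gauss_seidel(A, r, sweeps):
    """Return z ~= A^{-1} r after a given number of Gauss-Seidel sweeps."""
    n = len(r)
    z = np.zeros(n)
    L = np.tril(A)                         # lower triangle incl. diagonal
    U = A - L
    for _ in range(sweeps):
        z = np.linalg.solve(L, r - U @ z)  # one forward-substitution sweep
    return z

# Diagonally dominant tridiagonal test matrix (illustrative)
n = 100
A = (np.diag(3.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1))
b = np.ones(n)

x = np.zeros(n)
for k in range(200):                       # outer (flexible) iteration
    r = b - A @ x
    if np.linalg.norm(r) < 1e-10 * np.linalg.norm(b):
        break
    # Internal iteration: z approximates A^{-1} r; the sweep count
    # changes with k, so the preconditioner differs at each step.
    z = gauss_seidel(A, r, sweeps=1 + k % 3)
    x += z
```

In a Krylov setting, a flexible method such as FGMRES would play the role of the outer loop, since ordinary Krylov methods assume a fixed preconditioner.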

Copyright (c) RIKEN, Japan. All rights reserved.