As a starting post, I’m writing about one of my GFG articles =)
The Gauss-Seidel method (also known as the ‘Liebmann method’) is an iterative process for solving a square system of linear equations. As with any iterative method in numerical analysis, the solution attempt begins with an approximate solution, and iterations are performed until the desired accuracy is obtained. This method follows that pattern too, and more specifically, the most recently computed values are used within each iteration, allowing the user to control round-off error.
The algorithm itself is very similar to the Jacobi one and is called the method of successive displacement, since the most recently obtained values are used in the subsequent equations. Its convergence depends on certain properties of the coefficient matrix: it is sufficient for the matrix to be strictly diagonally dominant, or symmetric and positive definite.
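The strict diagonal dominance condition is easy to check programmatically. Here is a minimal sketch in NumPy (the function name is mine), applied to the coefficient matrix used in the example later in this post:

```python
import numpy as np

def is_strictly_diagonally_dominant(A):
    """True if |a_ii| > sum of |a_ij| over j != i, for every row i."""
    A = np.asarray(A, dtype=float)
    diag = np.abs(np.diag(A))
    off_diag = np.sum(np.abs(A), axis=1) - diag
    return bool(np.all(diag > off_diag))

A = np.array([[10., -1., 2., 0.],
              [-1., 11., -1., 3.],
              [2., -1., 10., -1.],
              [0., 3., -1., 8.]])
print(is_strictly_diagonally_dominant(A))  # True
```

For this matrix every diagonal entry outweighs the sum of the other entries in its row (e.g. 10 > 1 + 2 + 0), so Gauss-Seidel is guaranteed to converge.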
Steps:
- Compute the value of each Xi from its rearranged linear equation. (An initial array of guesses must be available.)
- Compute each Xi in turn, substituting the most recent values, and repeat.
- After every iteration, use the absolute relative approximate error to check whether the error falls within a pre-specified tolerance.
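The stopping test in the last step is commonly computed per component as |(xi_new − xi_old) / xi_new| × 100%. A minimal sketch (the function name is mine; the division assumes the new value is nonzero):

```python
import numpy as np

def abs_rel_approx_error(x_new, x_old):
    """Per-component absolute relative approximate error, in percent."""
    return np.abs((x_new - x_old) / x_new) * 100.0

# Example: a component moving from 0.5 to 1.0 has a 50% approximate error:
print(abs_rel_approx_error(np.array([1.0]), np.array([0.5])))
```

The iteration stops once every component's error drops below the chosen tolerance.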
Example: Consider the system of linear equations:
10x1 - x2 + 2x3 + 0x4 = 6
-x1 + 11x2 - x3 + 3x4 = 25
2x1 - x2 + 10x3 - x4 = -11
0x1 + 3x2 - x3 + 8x4 = 15
Now to solve for x1,x2,x3 and x4, we take them to the LHS:
x1 = x2/10 - x3/5 + 3/5
x2 = x1/11 + x3/11 - 3x4/11 + 25/11
x3 = -x1/5 + x2/10 + x4/10 - 11/10
x4 = -3x2/8 + x3/8 + 15/8
For the first approximation, we take (x1, x2, x3, x4) = (0, 0, 0, 0) (the zero vector is the usual initial guess). Calculating with these, we get the first approximate solution:
x1 = 0.6000
x2 = (3/5)/11 + 25/11 = 2.3273
x3 = (-3/5)/5 + 2.3273/10 - 11/10 = -0.9873
x4 = -3(2.3273)/8 + (-0.9873)/8 + 15/8 = 0.8789
(values rounded off to 4 decimal places)
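The same first sweep can be reproduced in plain Python. Note how each line reuses the values just computed above it, which is exactly the successive-displacement idea:

```python
# First Gauss-Seidel sweep, substituting the most recent values
# as soon as they are computed:
x1 = 0/10 - 0/5 + 3/5                # old x2 = x3 = 0
x2 = x1/11 + 0/11 - 3*0/11 + 25/11   # fresh x1; old x3 = x4 = 0
x3 = -x1/5 + x2/10 + 0/10 - 11/10    # fresh x1, x2; old x4 = 0
x4 = -3*x2/8 + x3/8 + 15/8           # fresh x2, x3
print(x1, x2, x3, x4)
```

Printing these reproduces the first-iteration values in the table below.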
Using the approximations obtained in each previous step (i.e. using x1 = 0.6, x2 = 2.3273, x3 = -0.9873 and x4 = 0.8789 as inputs to the second approximation, and so forth), the iterative procedure is repeated until the desired accuracy has been reached. The following are the approximate solutions after four iterations:
Iteration | x1 | x2 | x3 | x4 |
---|---|---|---|---|
First | 0.60000000 | 2.32727273 | -0.98727273 | 0.87886364 |
Second | 1.03018182 | 2.03693802 | -1.0144562 | 0.98434122 |
Third | 1.00658504 | 2.00355502 | -1.00252738 | 0.99835095 |
Fourth | 1.00086098 | 2.00029825 | -1.00030728 | 0.99984975 |
The exact solution of the system is (1, 2, -1, 1). As can be seen, by the fourth iteration the values are already close to the exact solution. The more iterations we perform, the closer we get to the actual solution, i.e. the more the approximation converges to the real values.
Here’s my program implementing the Gauss-Seidel method with inputs from the above example (hardcoded):
```python
import numpy as np

# Setting the iteration limit:
Limit = 10

"""
Coefficient matrix considered (LHS):
A = [[10., -1., 2., 0.], [-1., 11., -1., 3.], [2., -1., 10., -1.], [0., 3., -1., 8.]]
Corresponding constants of the equations on the other side (RHS vector):
b = [6., 25., -11., 15.]
-> We will solve Ax = b, where x = [x1, x2, x3, x4]
"""

# Initializing the matrix that I want to perform the Gauss-Seidel method on:
A = np.array([[10., -1., 2., 0.],
              [-1., 11., -1., 3.],
              [2., -1., 10., -1.],
              [0., 3., -1., 8.]])

# Initializing the RHS column vector:
b = np.array([6., 25., -11., 15.])

print("System of equations:")
# Displaying in the form x1*(coefficient) + x2*(coefficient) + ...
for i in range(A.shape[0]):
    row = ["{0:3g}*x{1}".format(A[i, j], j + 1) for j in range(A.shape[1])]
    print("[{0}] = [{1:3g}]".format(" + ".join(row), b[i]))

# Starting from an array of zeros as the initial approximation:
x = np.zeros_like(b)
for it_count in range(1, Limit):
    x_new = np.zeros_like(x)
    # Displaying the approximate values of x1, x2, x3, x4 at each iteration:
    print("Iteration {0}: {1}".format(it_count, x))
    for i in range(A.shape[0]):
        s1 = np.dot(A[i, :i], x_new[:i])      # values already updated this sweep
        s2 = np.dot(A[i, i + 1:], x[i + 1:])  # values from the previous sweep
        x_new[i] = (b[i] - s1 - s2) / A[i, i]
    # Checking whether the desired accuracy has been obtained
    # (relative tolerance 1e-8 between x and x_new); if so, break out:
    if np.allclose(x, x_new, rtol=1e-8):
        break
    x = x_new

# Displaying the solution (vector x = [x1, x2, x3, x4]):
print("Solution: {0}".format(x))
error = np.dot(A, x) - b
# Displaying the approximation error (residual):
print("Error: {0}".format(error))
```
Output:

```
System of equations:
[ 10*x1 + -1*x2 + 2*x3 + 0*x4] = [ 6]
[ -1*x1 + 11*x2 + -1*x3 + 3*x4] = [ 25]
[ 2*x1 + -1*x2 + 10*x3 + -1*x4] = [-11]
[ 0*x1 + 3*x2 + -1*x3 + 8*x4] = [ 15]
Iteration 1: [ 0. 0. 0. 0.]
Iteration 2: [ 0.6 2.32727273 -0.98727273 0.87886364]
Iteration 3: [ 1.03018182 2.03693802 -1.0144562 0.98434122]
Iteration 4: [ 1.00658504 2.00355502 -1.00252738 0.99835095]
Iteration 5: [ 1.00086098 2.00029825 -1.00030728 0.99984975]
Iteration 6: [ 1.00009128 2.00002134 -1.00003115 0.9999881 ]
Iteration 7: [ 1.00000836 2.00000117 -1.00000275 0.99999922]
Iteration 8: [ 1.00000067 2.00000002 -1.00000021 0.99999996]
Iteration 9: [ 1.00000004 1.99999999 -1.00000001 1. ]
Solution: [ 1. 2. -1. 1.]
Error: [ 2.06480930e-08 -1.25551054e-08 3.61417563e-11 0.00000000e+00]
```
Pros and Cons
The advantages of this method include its cheap iterations (in comparison to direct methods) and low memory requirements. Its disadvantages are a comparatively low rate of convergence and the large number of iterations that may be required to reach the desired accuracy.
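To put that trade-off in context, the same example system solved with NumPy's direct solver gives the answer in one shot, at the cost of factorizing A; Gauss-Seidel's appeal is that each sweep is cheap and only reads A row by row:

```python
import numpy as np

A = np.array([[10., -1., 2., 0.],
              [-1., 11., -1., 3.],
              [2., -1., 10., -1.],
              [0., 3., -1., 8.]])
b = np.array([6., 25., -11., 15.])

# Direct solution for reference; it matches the exact solution (1, 2, -1, 1)
# that the iterative process converged to above.
x_direct = np.linalg.solve(A, b)
print(x_direct)
```

For small dense systems like this one, the direct solver is the practical choice; iterative methods pay off on large, sparse systems where a factorization would be too expensive to store.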
List of my GeeksforGeeks articles written during my technical scripter internship:
Name | Link |
---|---|
Gauss Seidel method | View |
Picard’s iterative method | View |
Bootstrap 4 Utilities | View |
Disabling Tabs in Bootstrap | View |
Different types of Procedures in MySQL | View |
Different types of Triggers in MySQL | View |
Menu driven CGS<->MKS conversion in C++ | View |
String comparison using operator overloading in C++ | View |
PHP Imagick polaroidImage() | View |
PHP Imagick gaussianBlurImage() | View |
PHP Imagick floodFillPaintImage() | View |
PHP Imagick cropThumbnailImage() | View |
All of these were written sometime between 06/01/19 - 07/01/19 (following the MM/DD/YYYY format here and for all posts henceforth), which was the duration of my internship.
Anirban | 01/07/19 |