**Obtaining roots**

In mathematics there are methods for obtaining roots iteratively. These methods are used to compute, approximately, the root of an equation that has no solution by traditional algebraic methods.

## Summary

- 1 Bisection method
  - 1.1 Hypothesis
  - 1.2 Termination condition
  - 1.3 Pseudocode algorithm
  - 1.4 Final comments
- 2 Regula Falsi
  - 2.1 Hypothesis
  - 2.2 Termination condition
  - 2.3 Pseudocode algorithm
  - 2.4 Final comments
- 3 The Newton-Raphson method
  - 3.1 Termination condition
  - 3.2 Pseudocode algorithm
  - 3.3 Final comments
- 4 The secant method
  - 4.1 Termination condition
  - 4.2 Pseudocode algorithm
  - 4.3 Final comments
- 5 Source

## Bisection method

A few iterations of the bisection method applied on an interval [a_{1}, b_{1}]. The red dot is the root of the function.

The method approximates the root of the equation by the midpoint of the interval [a, b]. Evaluating the function at this point, one decides whether the root lies in the left half or the right half of the interval. One of the two halves is thus discarded, and the width of the new search interval is exactly half that of the previous one. As the process is repeated, the search interval shrinks. If the initial interval is called [a_{1}, b_{1}], then at iteration n of the method: X_{n} = (a_{n} + b_{n}) / 2, for n = 1, 2, 3, 4, …

### Hypothesis

Let f(x) = 0 be the equation and [a, b] an interval such that:

- The equation has a single root in the interval.
- f(x) is continuous on [a, b].
- f(x) has different signs at a and b, that is, f(a)·f(b) < 0.

### Termination condition

If the root of the equation is to be obtained with an absolute error less than ε, the bisection method is carried out until an approximation X_{n} is reached for which: E_{m}(X_{n}) = (b_{n} − a_{n}) / 2 ≤ ε.

### Pseudocode algorithm

The equation to be solved is assumed to be f(x) = 0, fulfilling the conditions of the Hypothesis. The function f(x), the endpoints a and b of the separation interval, and the tolerance ε to be allowed are assumed to be known.

```
repeat
    x := (a + b) / 2
    Error := (b − a) / 2
    if f(x) = 0 then
        x is exactly the root sought
        Terminate
    else
        if f(a)·f(x) < 0 then
            b := x
        else
            a := x
        end
    end
until Error ≤ ε
the root sought is x and its maximum absolute error is Error
Terminate
```
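As a sketch, the pseudocode above translates directly into Python; the function name `bisection` and the sample equation x² − 2 = 0 are illustrative choices, not part of the original text:

```python
def bisection(f, a, b, eps):
    # Root of f on [a, b] by bisection; assumes f is continuous
    # and f(a)*f(b) < 0, as in the Hypothesis above.
    if f(a) * f(b) >= 0:
        raise ValueError("f must change sign on [a, b]")
    while True:
        x = (a + b) / 2          # midpoint of the current interval
        error = (b - a) / 2      # bound on |x - root|
        if f(x) == 0:            # x is exactly the root sought
            return x, 0.0
        if f(a) * f(x) < 0:      # root lies in the left half
            b = x
        else:                    # root lies in the right half
            a = x
        if error <= eps:
            return x, error

# Example: the positive root of x^2 - 2 = 0 on [1, 2]
root, err = bisection(lambda x: x * x - 2, 1.0, 2.0, 1e-6)
```

The returned `error` is the guaranteed bound (b − a)/2 from the termination condition, so |root − √2| is at most `eps`.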

### Final comments

The bisection method is the simplest of the methods for determining real roots of equations. It is inefficient compared to other methods, so it is not recommended if the calculations (especially the evaluation of f(x)) have to be done by hand. On the other hand, the bisection method has several important attractions:

- The conditions required for its application are minimal; in fact, f(x) only needs to be continuous on the separation interval.
- The algorithm has very simple logic and is very easy to program.
- The speed of convergence is independent of the function f(x), so there is no fear of pathological cases, a concern present in almost all the more effective methods.
- The error bound is very simple to compute, is safe, and is likewise independent of the characteristics of f(x).

All of the above is summed up in one sentence: **It is not a quick method, but it is the most robust of all the procedures for finding real roots of equations.**

## Regula Falsi

The name of this method comes from a Latin phrase meaning "false rule" (hence it is also called the method of false position). Geometrically, it consists in taking as the approximation of the root in the interval [a_{n}, b_{n}] the point where the x axis is intersected by the segment joining the endpoints of the arc of the graph on that interval. For this reason it is also known as the chord method.

### Hypothesis

We want to find the root r of an equation f(x) = 0 lying in an interval [a, b] such that:

- The equation has a single root in the interval.
- f(x) is continuous on [a, b].
- f(x) has different signs at a and b, that is, f(a)·f(b) < 0.

### Termination condition

If the root of the equation is to be obtained with an absolute error smaller than ε and the hypotheses are satisfied, the Regula Falsi method is carried out up to an approximation X_{n} for which: E_{m}(X_{n}) = |X_{n} − X_{n−1}| ≤ ε.

### Pseudocode algorithm

The equation to be solved is assumed to be f(x) = 0, complying with the Hypothesis. The function f(x), the endpoints a and b of the separation interval, and the tolerance ε to be allowed are assumed to be known.

```
x_previous := 10^6
repeat
    x := a − (b − a) / (f(b) − f(a)) · f(a)
    Error := |x − x_previous|
    if f(x) = 0 then
        x is exactly the root sought
        Terminate
    else
        if f(a)·f(x) < 0 then
            b := x
        else
            a := x
        end
    end
    x_previous := x
until Error ≤ ε
the root sought is x and its maximum absolute error is Error
Terminate
```
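A minimal Python sketch of the pseudocode above; the function name `regula_falsi`, the `max_iter` safety cap, and the sample equation x² − 2 = 0 are illustrative assumptions, not part of the original text:

```python
def regula_falsi(f, a, b, eps, max_iter=200):
    # Root of f on [a, b] by Regula Falsi; assumes f is continuous
    # and f(a)*f(b) < 0.  max_iter is a safety cap (an assumption,
    # not in the pseudocode above).
    x_prev = 1e6                 # sentinel, as in the pseudocode
    for _ in range(max_iter):
        # x-intercept of the chord joining (a, f(a)) and (b, f(b))
        x = a - (b - a) / (f(b) - f(a)) * f(a)
        error = abs(x - x_prev)
        if f(x) == 0:            # x is exactly the root sought
            return x, 0.0
        if f(a) * f(x) < 0:      # root lies in the left half
            b = x
        else:                    # root lies in the right half
            a = x
        x_prev = x
        if error <= eps:
            return x, error
    return x, error

# Example: the positive root of x^2 - 2 = 0 on [1, 2]
root, err = regula_falsi(lambda x: x * x - 2, 1.0, 2.0, 1e-8)
```

Note that, unlike bisection, the reported error |x_n − x_{n−1}| is an estimate rather than a guaranteed bound.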

### Final comments

The Regula Falsi method can be considered a modification of the bisection method intended to improve the speed of convergence. Although very simple hypotheses suffice for convergence to occur, stronger conditions are required to achieve good speed, in particular that the graph of the function f(x) have little curvature. In the extreme case where the graph is linear, convergence occurs in a single iteration. The condition of small curvature on the search interval can always be achieved by reducing the width of [a, b], which is very simple if the graph can be viewed on a display. The algorithm is quite simple to program and differs from bisection only in a few details.

## The Newton-Raphson method

The Newton-Raphson method uses the tangent line for its approximations; it is more efficient than the bisection method.

This important numerical method can be thought of as a systematic way of applying fixed-point iteration so that rapid convergence is obtained. Consider the equation f(x) = 0 whose root r is to be found. The function f(x) is assumed to be differentiable as many times as necessary near the approximations of r. If the equation is multiplied by a constant A ≠ 0 and x is added to each side, the equivalent equation x = x + A·f(x) is obtained. The equation has been written in the form x = g(x), where g(x) = x + A·f(x). The idea is to find a value of A such that the derivative of g(x) is very small in absolute value in the vicinity of the root r. Choosing A = −1/f′(X_{n}) at each step makes g′(r) = 0 and yields the Newton-Raphson iteration X_{n+1} = X_{n} − f(X_{n}) / f′(X_{n}).

### Termination condition

If the root of the equation is to be obtained with an absolute error less than ε, the Newton-Raphson method is carried out until an approximation X_{n} is reached for which: E_{m}(X_{n}) = |X_{n} − X_{n−1}| ≤ ε.

### Pseudocode algorithm

It is assumed that the equation to be solved is f(x) = 0, that the root to be found is separated within an interval [a, b] on which f(x) and its first two derivatives are continuous, and that f′(x) and f″(x) do not vanish on [a, b]. It is assumed that x_{0} has been selected within [a, b] so that f(x_{0})·f″(x_{0}) > 0 (that is, the sign of the function coincides with the sense of the concavity). The functions f(x) and f′(x), the initial approximation x_{0}, and the tolerance ε to be allowed are assumed to be known.

```
x_previous := x_0
repeat
    x := x_previous − f(x_previous) / f′(x_previous)
    Error := |x − x_previous|
    x_previous := x
until Error < ε
the root sought is x and its maximum absolute error is Error
Terminate
```
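The loop above can be sketched in Python as follows; the function name `newton_raphson`, the `max_iter` safety cap, and the sample equation x² − 2 = 0 (with derivative 2x and starting point 1.5) are illustrative assumptions:

```python
def newton_raphson(f, df, x0, eps, max_iter=100):
    # Newton-Raphson: x_{n+1} = x_n - f(x_n)/f'(x_n), stopping when
    # |x_n - x_{n-1}| < eps.  max_iter is a safety cap (an assumption,
    # not in the pseudocode above).
    x_prev = x0
    for _ in range(max_iter):
        x = x_prev - f(x_prev) / df(x_prev)   # tangent-line step
        error = abs(x - x_prev)
        x_prev = x
        if error < eps:
            return x, error
    return x, error

# Example: f(x) = x^2 - 2, f'(x) = 2x, starting from x0 = 1.5
root, err = newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, 1.5, 1e-10)
```

With this starting point the quadratic convergence noted in the Final comments is visible: four or five iterations already give far more than four exact decimal places.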

### Final comments

The Newton-Raphson method is generally a rapidly convergent method, although the speed depends on the function f(x) and on the initial approximation chosen; usually the root is obtained with four exact decimal places in four or five iterations. These characteristics make this algorithm advisable when working by hand, or when time constraints require a very efficient method for calculating roots. Its biggest drawback is the need to find the first derivative of f(x), which can be very cumbersome and almost always has to be done outside the machine.

## The secant method

The secant method uses for its execution the slope of a secant line to the graph of f(x). The method requires two initial approximations, since a secant line is determined by two points of the curve.

The secant method is a modification of the Newton-Raphson method aimed at eliminating the need to evaluate the derivative. To this end, the slope of the tangent line is replaced by the slope of a secant line to the graph of f(x). The secant method requires two initial approximations to the root r, since a secant line is determined by two points on the curve.

### Termination condition

If the root of the equation is to be obtained with an absolute error less than ε, the secant method is carried out up to an approximation X_{n} for which: E_{m}(X_{n}) = |X_{n} − X_{n−1}| ≤ ε.

### Pseudocode algorithm

It is assumed that the equation to be solved is f(x) = 0 and that the root to be found is separated within the interval [a, b]. It is assumed that x_{0} and x_{1} have been selected within [a, b], close enough to r for the algorithm to converge. The function f(x), the initial approximations x_{0} and x_{1}, and the tolerance ε to be allowed are assumed to be known.

```
x_a := x_0
y_a := f(x_a)
x_b := x_1
y_b := f(x_b)
repeat
    x_c := x_b − (x_b − x_a) / (y_b − y_a) · y_b
    y_c := f(x_c)
    Error := |x_c − x_b|
    x_a := x_b
    y_a := y_b
    x_b := x_c
    y_b := y_c
until Error < ε
the root sought is x_c and its maximum absolute error is Error
Terminate
```
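The steps above can be sketched in Python; the function name `secant`, the `max_iter` safety cap, and the sample equation x² − 2 = 0 with starting points 1 and 2 are illustrative assumptions:

```python
def secant(f, x0, x1, eps, max_iter=100):
    # Secant method: Newton's tangent slope is replaced by the slope
    # of the chord through the last two iterates.  max_iter is a
    # safety cap (an assumption, not in the pseudocode above).
    xa, ya = x0, f(x0)
    xb, yb = x1, f(x1)
    for _ in range(max_iter):
        xc = xb - (xb - xa) / (yb - ya) * yb   # chord's x-intercept
        yc = f(xc)
        error = abs(xc - xb)
        xa, ya = xb, yb                        # shift the two iterates
        xb, yb = xc, yc
        if error < eps:
            return xc, error
    return xc, error

# Example: the positive root of x^2 - 2 = 0, starting from 1 and 2
root, err = secant(lambda x: x * x - 2, 1.0, 2.0, 1e-10)
```

Note that only one new evaluation of f is needed per iteration, the point made in the Final comments below.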

### Final comments

The secant method has several very positive characteristics, chief among them its speed of convergence. If this is measured by the number of iterations needed to obtain the root to a given accuracy, the speed is slightly lower than that of Newton-Raphson; however, since Newton-Raphson requires evaluating two functions (f and f′) at each step, whereas the secant algorithm evaluates only one, the secant method is almost always faster in practice. Another favorable aspect is that neither the first nor the second derivative of the function needs to be computed; they only need to be continuous and nonvanishing, which can be verified by observing the graph of f(x) on the screen. A drawback of the method is the possibility of non-convergence if the initial approximations are not close enough to the root.