Maximum Likelihood Estimation (MLE)

Maximum likelihood estimation (MLE) is a general method for estimating the parameters of a probability distribution based on the observations in a sample.

In other words, MLE selects the values of the density function's parameters that make the observed sample most probable.

When we talk about maximum likelihood estimation, we must mention the likelihood function. Mathematically, given a sample $x = (x_1, \ldots, x_n)$ and a vector of parameters $\theta = (\theta_1, \ldots, \theta_k)$, the likelihood function is

$$L(\theta \mid x) = \prod_{i=1}^{n} f(x_i \mid \theta)$$

Do not panic! The symbol $\Pi$ plays the same role for products that $\Sigma$ plays for sums. In this case, it denotes the multiplication of all the density functions evaluated at the sample observations ($x_i$) given the parameters $\theta$.

The larger the value of $L(\theta \mid x)$, the likelihood function, the more plausible the parameters are in light of the sample.
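A minimal sketch in Python makes this concrete; the five sample values and the choice of a normal density are assumptions made purely for illustration:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical sample: five observations, assumed here to come from a normal distribution
x = np.array([4.2, 5.1, 4.8, 5.5, 4.9])

def likelihood(mu, sigma, sample):
    # L(theta | x): product of the density evaluated at each observation
    return np.prod(norm.pdf(sample, loc=mu, scale=sigma))

# Parameters close to the sample produce a larger likelihood...
print(likelihood(5.0, 0.5, x))
# ...than parameters far from it
print(likelihood(2.0, 0.5, x))
```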

The Log-Likelihood Function

To find the maximum likelihood estimates we have to differentiate a product of density functions, which is not the most convenient calculation.

When we encounter complicated functions, what we can do is apply a monotonic transformation. Intuitively, it is like wanting to draw Europe at real scale: we reduce the scale so that the map fits on a sheet of paper.

In this case, the monotonic transformation uses natural logarithms, since the logarithm is a monotonically increasing function. Mathematically,

$$\ln L(\theta \mid x) = \ln \prod_{i=1}^{n} f(x_i \mid \theta) = \sum_{i=1}^{n} \ln f(x_i \mid \theta)$$

The properties of the logarithm allow us to express the previous multiplication as a sum of natural logarithms applied to the density functions.

So, the monotonic transformation by logarithms is simply a “change of scale” towards smaller, more manageable numbers.

The parameter values that maximize the log-likelihood function are exactly the same as the parameter values that maximize the original likelihood function; that is,

$$\arg\max_{\theta} \, \ln L(\theta \mid x) = \arg\max_{\theta} \, L(\theta \mid x)$$
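Here is a small sketch of this equivalence, reusing the hypothetical normal sample from before; the grid of candidate values for the mean is arbitrary and chosen only for illustration:

```python
import numpy as np
from scipy.stats import norm

x = np.array([4.2, 5.1, 4.8, 5.5, 4.9])  # same hypothetical sample as before

def log_likelihood(mu, sigma, sample):
    # ln L(theta | x): the product of densities becomes a sum of log-densities
    return np.sum(norm.logpdf(sample, loc=mu, scale=sigma))

# Scan candidate values of mu: the logarithm changes the scale of the numbers,
# not the location of the maximum
grid = np.linspace(3.0, 7.0, 401)
log_vals = np.array([log_likelihood(m, 0.5, x) for m in grid])
vals = np.exp(log_vals)  # back to the original likelihood

print(grid[np.argmax(log_vals)], grid[np.argmax(vals)])  # same maximizer
```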

Therefore, we will always work with the log-transformed likelihood function, given that it makes the calculations easier.

Curiosity

However complex and strange MLE may seem, we apply it continuously without realizing it.

When?

In every estimation of the parameters of a linear regression under the classical assumptions, more commonly known as Ordinary Least Squares (OLS).

In other words, when we apply OLS we are implicitly applying MLE, since under the classical assumptions (in particular, normally distributed errors) the two methods produce the same estimates of the regression coefficients.
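A quick numerical check of this equivalence on simulated data; the design matrix, true coefficients, and error scale below are invented for the illustration:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])       # intercept + one regressor
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.5, size=n)  # normal errors

# OLS: closed-form least-squares solution
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# MLE: minimize the negative normal log-likelihood numerically
def neg_log_lik(params):
    beta, log_sigma = params[:2], params[2]
    resid = y - X @ beta
    sigma2 = np.exp(2 * log_sigma)
    return 0.5 * n * np.log(2 * np.pi * sigma2) + 0.5 * np.sum(resid**2) / sigma2

res = minimize(neg_log_lik, x0=np.zeros(3))
print(beta_ols)   # OLS coefficients
print(res.x[:2])  # ML coefficients: they coincide up to numerical tolerance
```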

Application

Like other methods, MLE relies on iteration; that is, on repeating a certain operation as many times as required to find the maximum or minimum value of a function. This process may be subject to restrictions on the final values of the parameters, for example that a result must be greater than or equal to zero, or that the sum of two parameters must be less than one.
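As an illustration of such an iterative, restricted search, here is a sketch using a hypothetical sample assumed to be exponential, where the rate parameter is restricted to be non-negative; the known closed-form answer (one over the sample mean) lets us verify the result:

```python
import numpy as np
from scipy.optimize import minimize

x = np.array([0.8, 1.3, 0.4, 2.1, 0.9])  # hypothetical sample, assumed exponential

def neg_log_lik(params):
    lam = params[0]
    # Exponential density f(x) = lam * exp(-lam * x), so
    # ln L(lam | x) = n * ln(lam) - lam * sum(x)
    return -(len(x) * np.log(lam) - lam * np.sum(x))

# The optimizer iterates until -ln L stops decreasing, subject to the
# restriction lam >= 0 (expressed here as a lower bound)
res = minimize(neg_log_lik, x0=[1.0], bounds=[(1e-9, None)])

print(res.x[0])        # numerical MLE of lam
print(1 / np.mean(x))  # closed-form MLE for the exponential: 1 / sample mean
```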

The symmetric GARCH model and its different extensions apply MLE to find the parameter values that maximize the likelihood of the observed data under the assumed density function.
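As a rough sketch of what this looks like in practice, the following estimates a GARCH(1,1) by maximum likelihood on placeholder simulated returns; the variance initialization and starting values are common conventions, not prescriptions, and a real application would use an actual return series:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
r = rng.normal(scale=0.01, size=500)  # placeholder return series

def garch_neg_log_lik(params, returns):
    # GARCH(1,1): sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}
    omega, alpha, beta = params
    n = len(returns)
    sigma2 = np.empty(n)
    sigma2[0] = np.var(returns)  # a common initialization choice
    for t in range(1, n):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    # Negative Gaussian log-likelihood of the returns given the variances
    return 0.5 * np.sum(np.log(2 * np.pi * sigma2) + returns ** 2 / sigma2)

# Restrictions: omega > 0, alpha >= 0, beta >= 0 and alpha + beta < 1
bounds = [(1e-10, None), (0.0, 1.0), (0.0, 1.0)]
stationarity = {"type": "ineq", "fun": lambda p: 1.0 - p[1] - p[2]}

res = minimize(garch_neg_log_lik, x0=[1e-5, 0.05, 0.90], args=(r,),
               bounds=bounds, constraints=[stationarity])
print(res.x)  # estimated (omega, alpha, beta)
```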
