Least squares formulas. Least squares in Excel: using the TREND function

If some physical quantity y depends on another quantity x, this dependence can be studied by measuring y at different values of x. The measurements yield a series of values:

x₁, x₂, ..., xᵢ, ..., xₙ;

y₁, y₂, ..., yᵢ, ..., yₙ.

From the data of such an experiment one can plot the dependence y = ƒ(x). The resulting curve makes it possible to judge the form of the function ƒ(x); however, the constant coefficients that enter this function remain unknown. The method of least squares allows us to determine them. The experimental points, as a rule, do not lie exactly on the curve. The method of least squares requires that the sum of the squared deviations of the experimental points from the curve, i.e. Σ[yᵢ − ƒ(xᵢ)]², be the smallest.

In practice, this method is most often (and most simply) used in the case of a linear relationship, i.e. when

y = kx or y = a + bx.

A linear dependence is very widespread in physics. And even when the dependence is nonlinear, one usually tries to build the graph so as to get a straight line. For example, if it is assumed that the refractive index n of glass is related to the wavelength λ of the light wave by n = a + b/λ², then the dependence of n on λ⁻² is plotted on the graph.

Consider the dependence y = kx (a straight line through the origin). Compose the quantity φ, the sum of the squared deviations of our points from the straight line:

$$ \varphi = \sum_{i=1}^{n} (y_i - k x_i)^2. $$

The value of φ is always positive and turns out to be smaller, the closer our points lie to the straight line. The method of least squares states that one should choose for k the value at which φ has a minimum:

$$ \frac{d\varphi}{dk} = -2\sum_{i=1}^{n} x_i (y_i - k x_i) = 0, $$

or

$$ k = \frac{\sum x_i y_i}{\sum x_i^2}. \qquad (19) $$

The calculation shows that the root-mean-square error in determining the value of k is equal to

$$ S_k = \sqrt{\frac{\sum (y_i - k x_i)^2}{(n-1)\sum x_i^2}}, \qquad (20) $$

where n is the number of measurements.
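For readers who find code clearer than formulas, here is a minimal C++ sketch of (19) and (20); the function name and interface are mine, not the manual's:

#include <cmath>
#include <cstddef>
#include <vector>

// Fit y = kx (a straight line through the origin) by least squares.
// Returns k by formula (19) and its root-mean-square error Sk by formula (20).
void fit_proportional(const std::vector<double>& x, const std::vector<double>& y,
                      double& k, double& Sk) {
    double sum_xy = 0, sum_xx = 0;
    for (std::size_t i = 0; i < x.size(); i++) {
        sum_xy += x[i]*y[i];
        sum_xx += x[i]*x[i];
    }
    k = sum_xy/sum_xx;                          // (19)
    double sum_dev2 = 0;                        // accumulates (y_i - k x_i)^2
    for (std::size_t i = 0; i < x.size(); i++) {
        double d = y[i] - k*x[i];
        sum_dev2 += d*d;
    }
    double n = (double)x.size();
    Sk = std::sqrt(sum_dev2/((n - 1)*sum_xx));  // (20)
}

On the data of Table 5 below this reproduces k ≈ 0.3337 and S_k ≈ 0.005775.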

Let us now consider a somewhat more difficult case, when the points must satisfy the formula y = a + bx (a straight line not passing through the origin).

The task is to find the best values of a and b from the given set of values xᵢ, yᵢ.

We again compose the quadratic form φ equal to the sum of the squared deviations of the points xᵢ, yᵢ from the straight line:

$$ \varphi = \sum_{i=1}^{n} (y_i - a - b x_i)^2, $$

and find the values of a and b for which φ has a minimum:

$$ \frac{\partial\varphi}{\partial a} = -2\sum (y_i - a - b x_i) = 0; $$

$$ \frac{\partial\varphi}{\partial b} = -2\sum x_i (y_i - a - b x_i) = 0. $$

The joint solution of these equations gives

$$ b = \frac{\sum (x_i - \bar{x})(y_i - \bar{y})}{\sum (x_i - \bar{x})^2}, \qquad a = \bar{y} - b\,\bar{x}. \qquad (21),\ (22) $$

The root-mean-square errors in determining a and b are

$$ S_b = \sqrt{\frac{\sum (y_i - a - b x_i)^2}{(n-2)\sum (x_i - \bar{x})^2}}, \qquad (23) $$

$$ S_a = S_b \sqrt{\frac{\sum x_i^2}{n}}. \qquad (24) $$
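A matching C++ sketch of formulas (21)-(24) (again illustrative, not the manual's code):

#include <cmath>
#include <cstddef>
#include <vector>

// Fit y = a + bx by least squares; also return the RMS errors Sa, Sb
// per formulas (21)-(24).
void fit_line(const std::vector<double>& x, const std::vector<double>& y,
              double& a, double& b, double& Sa, double& Sb) {
    std::size_t n = x.size();
    double xbar = 0, ybar = 0;
    for (std::size_t i = 0; i < n; i++) { xbar += x[i]; ybar += y[i]; }
    xbar /= n; ybar /= n;
    double sxx = 0, sxy = 0, sum_x2 = 0;
    for (std::size_t i = 0; i < n; i++) {
        sxx    += (x[i] - xbar)*(x[i] - xbar);
        sxy    += (x[i] - xbar)*(y[i] - ybar);
        sum_x2 += x[i]*x[i];
    }
    b = sxy/sxx;                                 // (21)
    a = ybar - b*xbar;                           // (22)
    double sum_dev2 = 0;                         // residual sum of squares
    for (std::size_t i = 0; i < n; i++) {
        double d = y[i] - a - b*x[i];
        sum_dev2 += d*d;
    }
    Sb = std::sqrt(sum_dev2/((n - 2)*sxx));      // (23)
    Sa = Sb*std::sqrt(sum_x2/(double)n);         // (24)
}

On the data of Table 6 below this gives b ≈ 0.002645 Ohm/deg and a ≈ 1.1735 Ohm.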

When processing the measurement results by this method, it is convenient to summarize all the data in a table in which all the sums included in formulas (19)–(24) are preliminarily calculated. The forms of these tables are shown in the examples below.

Example 1. The basic equation of the dynamics of rotational motion, ε = M/J (a straight line through the origin), was studied. The angular acceleration ε of a certain body was measured for various values of the moment of force M. It is required to determine the moment of inertia of this body. The results of the measurements of the moment of force and the angular acceleration are listed in the second and third columns of Table 5.

Table 5

n | M, N·m | ε, s⁻² | M² | Mε | ε − kM | (ε − kM)²
1 | 1.44 | 0.52 | 2.0736 | 0.7488 | 0.039432 | 0.001555
2 | 3.12 | 1.06 | 9.7344 | 3.3072 | 0.018768 | 0.000352
3 | 4.59 | 1.45 | 21.0681 | 6.6555 | -0.08181 | 0.006693
4 | 5.90 | 1.92 | 34.81 | 11.328 | -0.049 | 0.002401
5 | 7.45 | 2.56 | 55.5025 | 19.072 | 0.073725 | 0.005435
Σ | – | – | 123.1886 | 41.1115 | – | 0.016436

By formula (19) we determine:

$$ k = \frac{\sum M_i\,\varepsilon_i}{\sum M_i^2} = \frac{41.1115}{123.1886} = 0.3337\ \text{kg}^{-1}\text{m}^{-2}. $$

To determine the root-mean-square error, we use formula (20):

$$ S_k = \sqrt{\frac{0.016436}{(5-1)\cdot 123.1886}} = 0.005775\ \text{kg}^{-1}\text{m}^{-2}. $$

By formula (18) we have

$$ J = \frac{1}{k} = \frac{1}{0.3337} = 2.996\ \text{kg·m}^2; \qquad S_J = \frac{J\,S_k}{k} = \frac{2.996\cdot 0.005775}{0.3337} = 0.05185\ \text{kg·m}^2. $$

For a reliability P = 0.95, the table of Student's coefficients for n = 5 gives t = 2.78, and we determine the absolute error ΔJ = 2.78 × 0.05185 = 0.1441 ≈ 0.2 kg·m².

We write the result in the form:

J = (3.0 ± 0.2) kg·m² at P = 0.95.


Example 2. Let us calculate the temperature coefficient of resistance of a metal by the least squares method. Resistance depends on temperature according to the linear law

R_t = R₀(1 + αt°) = R₀ + R₀αt°.

The free term determines the resistance R₀ at a temperature of 0 °C, and the slope gives the product of the temperature coefficient α and the resistance R₀.

The results of the measurements and calculations are given in Table 6.

Table 6

n | t, °C | r, Ohm | t − t̄ | (t − t̄)² | (t − t̄)r | r − bt − a | (r − bt − a)², 10⁻⁶
1 | 23 | 1.242 | -62.8333 | 3948.028 | -78.039 | 0.007673 | 58.8722
2 | 59 | 1.326 | -26.8333 | 720.0278 | -35.581 | -0.00353 | 12.4959
3 | 84 | 1.386 | -1.83333 | 3.361111 | -2.541 | -0.00965 | 93.1506
4 | 96 | 1.417 | 10.16667 | 103.3611 | 14.40617 | -0.01039 | 107.898
5 | 120 | 1.512 | 34.16667 | 1167.361 | 51.66 | 0.021141 | 446.932
6 | 133 | 1.520 | 47.16667 | 2224.694 | 71.69333 | -0.00524 | 27.4556
Σ | 515 | 8.403 | – | 8166.833 | 21.5985 | – | 746.804
Σ/n | 85.83333 | 1.4005 | – | – | – | – | –

By formulas (21), (22) we determine:

$$ b = \alpha R_0 = \frac{\sum (t_i - \bar{t})\,r_i}{\sum (t_i - \bar{t})^2} = \frac{21.5985}{8166.833} = 0.002645\ \text{Ohm/deg}; $$

$$ a = R_0 = \bar{r} - b\,\bar{t} = 1.4005 - 0.002645\cdot 85.83333 = 1.1735\ \text{Ohm}. $$

Let us find the error in determining α. Since α = b/R₀, by formula (18) we have:

$$ S_\alpha = \alpha\sqrt{\left(\frac{S_b}{b}\right)^2 + \left(\frac{S_{R_0}}{R_0}\right)^2}. $$

Using formulas (23), (24) we have:

$$ S_b = \sqrt{\frac{746.804\cdot 10^{-6}}{(6-2)\cdot 8166.833}} = 0.000151\ \text{Ohm/deg}; $$

$$ S_a = S_b\sqrt{\frac{\sum t_i^2}{n}} = 0.014126\ \text{Ohm}; $$

whence, with α = 0.002645/1.1735 = 0.002254 deg⁻¹, we obtain S_α = 0.002254 × √((0.000151/0.002645)² + (0.014126/1.1735)²) = 0.000132 deg⁻¹.

For a reliability P = 0.95, the table of Student's coefficients for n = 6 gives t = 2.57, and we determine the absolute error Δα = 2.57 × 0.000132 = 0.000338 deg⁻¹.

α = (23 ± 4)·10⁻⁴ deg⁻¹ at P = 0.95.


Example 3. It is required to determine the radius of curvature of a lens from Newton's rings. The radii rₘ of Newton's rings were measured and the numbers m of these rings were determined. The radii of Newton's rings are related to the radius of curvature R of the lens and to the ring number by the equation

r²ₘ = mλR − 2d₀R,

where d₀ is the thickness of the gap between the lens and the plane-parallel plate (or the deformation of the lens),

λ is the wavelength of the incident light.

λ = (600 ± 6) nm. If we set

r²ₘ = y; m = x; λR = b; −2d₀R = a,

then the equation takes the form y = a + bx.

The results of the measurements and calculations are entered in Table 7.

Table 7

n | x = m | y = r², 10⁻² mm² | m − m̄ | (m − m̄)² | (m − m̄)y | y − bx − a, 10⁻⁴ | (y − bx − a)², 10⁻⁶
1 | 1 | 6.101 | -2.5 | 6.25 | -0.152525 | 12.01 | 1.44229
2 | 2 | 11.834 | -1.5 | 2.25 | -0.17751 | -9.6 | 0.930766
3 | 3 | 17.808 | -0.5 | 0.25 | -0.08904 | -7.2 | 0.519086
4 | 4 | 23.814 | 0.5 | 0.25 | 0.11907 | -1.6 | 0.0243955
5 | 5 | 29.812 | 1.5 | 2.25 | 0.44718 | 3.28 | 0.107646
6 | 6 | 35.760 | 2.5 | 6.25 | 0.894 | 3.12 | 0.0975819
Σ | 21 | 125.129 | – | 17.5 | 1.041175 | – | 3.12176
Σ/n | 3.5 | 20.8548333 | – | – | – | – | –

The method of least squares finds the widest application in the most various fields of science and practice: physics, chemistry, biology, economics, sociology, psychology and so on and so forth. By the will of fate I often have to deal with the economy, and therefore today I will arrange for you a ticket to an amazing country called Econometrics =) ... How can you not want that?! It's very good there, you just have to decide! ... But what you probably definitely want is to learn how to solve least squares problems. And especially diligent readers will learn to solve them not only accurately but also VERY FAST ;-) But first, the general statement of the problem plus a related example:

Suppose that in some subject area indicators having a quantitative expression are studied. At the same time, there is every reason to believe that an indicator y depends on an indicator x. This assumption can be either a scientific hypothesis or based on elementary common sense. Let's leave science aside, however, and explore more appetizing areas, namely grocery stores. Denote by:

x – the retail space of a grocery store, sq. m;
y – the annual turnover of the grocery store, million rubles.

It is quite clear that, in most cases, the larger the store area, the greater its turnover.

Suppose that after conducting observations / experiments / calculations / dancing with a tambourine, we have at our disposal numerical data:

With grocery stores, I think everything is clear: x₁ is the area of the 1st store, y₁ is its annual turnover, x₂ is the area of the 2nd store, y₂ is its annual turnover, etc. By the way, it is not at all necessary to have access to classified materials: a fairly accurate estimate of the turnover can be obtained by means of mathematical statistics. However, let's not get distracted; the course of commercial espionage is paid separately =)

Tabular data can also be written as points (xᵢ; yᵢ) and depicted in the Cartesian system familiar to us.

Let us answer an important question: how many points are needed for a qualitative study?

The bigger, the better. The minimum admissible set consists of 5-6 points. In addition, when the amount of data is small, "anomalous" results should not be included in the sample. Thus, for example, a small elite store can earn orders of magnitude more than "its colleagues", thereby distorting the general pattern that is to be found!

To put it quite simply, we need to choose a function whose graph passes as close as possible to the points. Such a function is called approximating (approximation means "drawing near") or a theoretical function. Generally speaking, an obvious "pretender" immediately appears here: a polynomial of high degree whose graph passes through ALL the points. But this option is complicated and often simply incorrect (the graph will "wiggle" all the time and reflect the main trend poorly).

Thus, the sought function must be sufficiently simple and at the same time reflect the dependence adequately. As you might guess, one of the methods for finding such functions is called the method of least squares. First, let's analyze its essence in general form. Let some function approximate the experimental data:


How do we evaluate the accuracy of this approximation? Let us calculate the differences (deviations) between the experimental and functional values (we study the drawing). The first thought that comes to mind is to estimate how big their sum is, but the problem is that the differences can be negative, and in such a summation the deviations will cancel each other out. Therefore, as an estimate of the accuracy of the approximation, the sum of the moduli of the deviations suggests itself:

or in folded form: Σ|yᵢ − f(xᵢ)| (in case anyone doesn't know: Σ is the summation sign, and i is an auxiliary "counter" variable that takes values from 1 to n).

Approximating the experimental points with various functions, we will obtain different values of this sum, and obviously, the function for which the sum is smaller is more accurate.

Such a method exists and is called the method of least moduli. However, in practice the method of least squares has become much more widespread: here the possible negative values are eliminated not by the modulus but by squaring the deviations:

Σ(yᵢ − f(xᵢ))², after which efforts are directed toward selecting a function such that the sum of the squared deviations is as small as possible. Actually, hence the name of the method.

And now we come back to another important point: as noted above, the selected function should be quite simple, but there are also many such functions: linear, hyperbolic, exponential, logarithmic, quadratic, etc. And of course, one would immediately like to "reduce the field of activity" here. Which class of functions should be chosen for the research? A primitive but effective technique:

- The easiest way is to draw the points and analyze their location. If they tend to lie along a straight line, then one should look for the equation of a straight line y = ax + b with optimal values of a and b. In other words, the task is to find SUCH coefficients that the sum of the squared deviations is smallest.

If the points are located, for example, along a hyperbola, then it is clear a priori that a linear function will give a poor approximation. In that case we look for the most "favorable" coefficients of the hyperbola equation, those that give the minimum sum of squares.

Now notice that in both cases we are talking about functions of two variables whose arguments are the sought parameters of the dependence:

And in essence we need to solve a standard problem: to find the minimum of a function of two variables.

Let's recall our example: suppose the "shop" points tend to lie along a straight line and there is every reason to believe a linear dependence of the turnover on the retail space. Let us find SUCH coefficients "a" and "b" that the sum of the squared deviations is smallest. Everything as usual: first the 1st-order partial derivatives. According to the linearity rule, you can differentiate right under the summation sign:

If you want to use this information for an essay or a term paper, I will be very grateful for a link in the list of sources; you will hardly find such detailed calculations anywhere:

Let's make a standard system: equate each partial derivative to zero:

Σ 2(axᵢ + b − yᵢ)·xᵢ = 0;
Σ 2(axᵢ + b − yᵢ) = 0.

We cancel each equation by the "two" and, in addition, "break apart" the sums:

a Σxᵢ² + b Σxᵢ = Σxᵢyᵢ;
a Σxᵢ + b Σ1 = Σyᵢ.

Note: analyze on your own why "a" and "b" can be taken outside the summation sign. By the way, formally this can also be done with the sum Σ1, which simply equals n.

Let's rewrite the system in an "applied" form:

a Σxᵢ² + b Σxᵢ = Σxᵢyᵢ;
a Σxᵢ + b·n = Σyᵢ,

after which the algorithm for solving our problem emerges:

Do we know the coordinates of the points? We do. Can we find the sums? Easily. We compose the simplest system of two linear equations with two unknowns ("a" and "b"). We solve the system, for example, by Cramer's method, obtaining a stationary point. Checking the sufficient condition for an extremum, we can verify that at this point the function reaches precisely a minimum. The check involves additional calculations, and therefore we will leave it behind the scenes (if necessary, the missing frame can be viewed). We draw the final conclusion:

The function approximates the experimental points best (at least compared to any other linear function). Roughly speaking, its graph passes as close as possible to those points. In the tradition of econometrics, the resulting approximating function is also called the equation of paired linear regression.

The problem under consideration is of great practical value. In the situation with our example, the equation allows one to predict what turnover ("y") the store will have at one or another value of the retail space ("x"). Yes, the resulting forecast will be only a forecast, but in many cases it will turn out to be quite accurate.
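To make the algorithm completely concrete, here is a small C++ sketch of exactly the pipeline described above (the names are mine, not the lesson's): accumulate the four sums, then solve the 2×2 normal system by Cramer's method:

#include <cstddef>
#include <vector>

// Least squares fit of y = ax + b via the normal equations and Cramer's rule.
// Returns false for degenerate data (all x equal), when the determinant vanishes.
bool fit_by_cramer(const std::vector<double>& x, const std::vector<double>& y,
                   double& a, double& b) {
    double n = (double)x.size();
    double sx = 0, sy = 0, sxy = 0, sxx = 0;
    for (std::size_t i = 0; i < x.size(); i++) {
        sx  += x[i];       sy  += y[i];
        sxy += x[i]*y[i];  sxx += x[i]*x[i];
    }
    // Normal system:  a*sxx + b*sx = sxy
    //                 a*sx  + b*n  = sy
    double det = sxx*n - sx*sx;
    if (det == 0) return false;
    a = (sxy*n  - sx*sy)/det;   // Cramer's rule, first unknown
    b = (sxx*sy - sx*sxy)/det;  // Cramer's rule, second unknown
    return true;
}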

I will analyze just one problem with "real" numbers, since there are no difficulties in it: all calculations are at the level of the school curriculum for grades 7-8. In 95 percent of cases you will be asked to find just a linear function, but at the very end of the article I will show that it is no more difficult to find the equations of the optimal hyperbola, exponential and some other functions.

In fact, it remains to distribute the promised goodies, so that you learn to solve such examples not only accurately but also quickly. Let us carefully study the standard example:

Task

As a result of studying the relationship between two indicators, the following pairs of numbers were obtained:

Using the least squares method, find the linear function that best approximates the empirical (experimental) data. Make a drawing on which to plot the experimental points and the graph of the approximating function in a Cartesian rectangular coordinate system. Find the sum of the squared deviations between the empirical and theoretical values. Find out whether the exponential function would approximate the experimental points better (in the sense of the least squares method).

Note that "x" values ​​are natural values, and this has a characteristic meaningful meaning, which I will talk about a little later; but they, of course, can be fractional. In addition, depending on the content of a particular task, both "X" and "G" values ​​can be fully or partially negative. Well, we have been given a “faceless” task, and we start it decision:

We find the coefficients of the optimal function as the solution of the system:

a Σxᵢ² + b Σxᵢ = Σxᵢyᵢ;
a Σxᵢ + b·n = Σyᵢ.

For the purposes of a more compact notation, the "counter" variable can be omitted, since it is already clear that the summation is carried out from 1 to n.

It is more convenient to calculate the required sums in tabular form:


The calculations can be carried out on a microcalculator, but it is much better to use Excel: both faster and without errors (in Excel the slope and intercept can also be obtained directly with the built-in SLOPE, INTERCEPT and TREND worksheet functions); watch a short video:

Thus, we get the following system:

Here you can multiply the second equation by 3 and subtract the 2nd from the 1st equation term by term. But this is luck: in practice, systems are often not so gifted, and in such cases Cramer's method saves:

Δ ≠ 0, so the system has a unique solution.

Let's do a check. I understand that you don't want to, but why skip mistakes where they can be avoided entirely? We substitute the found solution into the left-hand side of each equation of the system:

The right-hand sides of the corresponding equations are obtained, which means the system is solved correctly.

Thus, the desired approximating function has been found: of all linear functions, it approximates the experimental data best.

Unlike the direct dependence of the store's turnover on its area, the found dependence is inverse (the principle "the more, the less"), and this fact is immediately revealed by the negative slope. The function tells us that when a certain indicator increases by 1 unit, the value of the dependent indicator decreases on average by 0.65 units. As they say, the higher the price of buckwheat, the less of it is sold.

To plot the approximating function, let us find two of its values:

and execute the drawing:


The constructed line is called a trend line (namely, a linear trend line; in the general case a trend is not necessarily a straight line). Everyone is familiar with the expression "to be in trend", and I think this term needs no additional comment.

Let us calculate the sum of the squared deviations between the empirical and theoretical values. Geometrically, this is the sum of the squares of the lengths of the "crimson" segments (two of which are so small that they cannot even be seen).

Let's summarize the calculations in a table:


They can again be carried out by hand; just in case, I will give an example for the 1st point:

but it is much more efficient to do it in the already familiar way:

Let us repeat: what is the meaning of the obtained result? Of all linear functions, the found function has the smallest sum of squared deviations, that is, within its family it is the best approximation. And here, by the way, the final question of the problem is not accidental: what if the proposed exponential function approximates the experimental points better?

Let us find the corresponding sum of squared deviations; to distinguish the two, I will denote it by the letter "epsilon". The technique is exactly the same:


And again, just in case, the calculation for the 1st point:

In Excel we use the standard function EXP (the syntax can be found in the Excel Help).

Conclusion: the sum "epsilon" turned out to be greater, so the exponential function approximates the experimental points worse than the straight line.

But here it should be noted that "worse" does not yet mean "wrong". I have now built a graph of this exponential function, and it also passes close to the points, so much so that without an analytical study it is difficult to say which function is more accurate.

This completes the solution, and I return to the question of the natural values of the argument. In various studies, as a rule economic or sociological ones, months, years or other equal time intervals are numbered with natural "x" values. Consider, for example, the following problem.


Introduction

I am a computer programmer. I made the biggest leap in my career when I learned to say: "I do not understand anything!" Now I am not ashamed to tell a luminary of science giving me a lecture that I do not understand what he, the luminary, is talking about. And it is very difficult. Yes, it is hard and embarrassing to admit you don't know. Who likes to admit that he does not know the basics of something? By virtue of my profession I have to attend a large number of presentations and lectures where, I confess, in the vast majority of cases I feel sleepy because I do not understand anything. And I don't understand because the huge problem of the current situation in science lies in mathematics. It assumes that all listeners are familiar with absolutely all areas of mathematics (which is absurd). To admit that you do not know what a derivative is (we will get to what it is a little later) is shameful.

But I've learned to say that I don't know what multiplication is. Yes, I don't know what a subalgebra over a Lie algebra is. Yes, I don't know why quadratic equations are needed in life. By the way, if you are sure that you know, then we have something to talk about! Mathematics is a series of tricks. Mathematicians try to confuse and intimidate the public; where there is no confusion, there is no reputation and no authority. Yes, it is prestigious to speak in the most abstract language possible, which is complete nonsense in itself.

Do you know what a derivative is? Most likely you will tell me about the limit of the difference quotient. In the first year of mathematics at St. Petersburg State University, Viktor Petrovich Khavin defined the derivative for me as the coefficient of the first term of the Taylor series of a function at a point (it was a separate gymnastics to define the Taylor series without derivatives). I laughed at this definition for a long time until I finally understood what it was about. The derivative is nothing more than a measure of how much the function we are differentiating resembles the functions y = x, y = x², y = x³.

I now have the honor of lecturing to students who are afraid of mathematics. If you are afraid of mathematics, we are on the same path. As soon as you try to read some text and it seems to you that it is overly complicated, know that it is badly written. I maintain that there is not a single area of mathematics that cannot be talked about "on the fingers" without losing accuracy.

The challenge for the near future: I instructed my students to understand what a linear-quadratic controller is. Don't be shy, spend three minutes of your life, follow the link. If you don't understand anything, then we are on the same path. I (a professional mathematician-programmer) did not understand anything either. And I assure you, this can be sorted out "on the fingers". At the moment I do not know what it is, but I assure you that we will be able to figure it out.

So, the first lecture that I am going to give my students, after they come running to me in horror with the words that a linear-quadratic controller is a terrible beast you will never master in your life, is the method of least squares. Can you solve linear equations? If you are reading this text, then most likely not.

So, given two points (x0, y0), (x1, y1), for example, (1,1) and (3,2), the task is to find the equation of a straight line passing through these two points:

illustration

This straight line should have an equation of the form

αx + β = y.

Here α (alpha) and β (beta) are unknown to us, but two points of the line are known:

αx₀ + β = y₀;
αx₁ + β = y₁.

You can write this system in matrix form:

$$ \begin{pmatrix} x_0 & 1 \\ x_1 & 1 \end{pmatrix}\begin{pmatrix} \alpha \\ \beta \end{pmatrix} = \begin{pmatrix} y_0 \\ y_1 \end{pmatrix}. $$

Here we should make a lyrical digression: what is a matrix? A matrix is nothing but a two-dimensional array. It is a way of storing data; no further meanings should be attached to it. It is up to us how exactly to interpret a certain matrix. Periodically I will interpret it as a linear mapping, periodically as a quadratic form, and sometimes simply as a set of vectors. All this will be clarified in context.

Let's replace specific matrices with their symbolic representation:

Then the vector (alpha, beta) can be easily found:

More specifically for our previous data:

Which leads to the following equation of a straight line passing through the points (1,1) and (3,2):

Okay, everything is clear here. And let's find the equation of a straight line passing through three points: (x0,y0), (x1,y1) and (x2,y2):

Oh-oh-oh, but we have three equations for two unknowns! A standard mathematician will say that there is no solution. What will a programmer say? He will first rewrite the previous system of equations in the following form:

In our case, the vectors i, j, b are three-dimensional, hence (in the general case) there is no solution to this system. Any vector (alpha·i + beta·j) lies in the plane spanned by the vectors (i, j). If b does not belong to this plane, then there is no solution (equality in the equation cannot be achieved). What to do? Let's look for a compromise. Let's denote by e(alpha, beta) exactly how far we fell short of equality:

And we will try to minimize this error:

Why a square?

We are looking not just for the minimum of the norm, but for the minimum of the square of the norm. Why? The minimum point itself coincides, and the square gives a smooth function (a quadratic function of the arguments (alpha, beta)), while simply the length gives a cone-shaped function that is non-differentiable at the minimum point. Brr. A square is more convenient.

Obviously, the error is minimized when the vector e is orthogonal to the plane spanned by the vectors i and j.

Illustration

In other words: we are looking for a line such that the sum of the squared lengths of the distances from all points to this line is minimal:

UPDATE: here I have a flaw: the distance to the line should be measured vertically, not as an orthogonal projection. The commenter is right.

Illustration

In completely different words (carefully, poorly formalized, but it should be clear "on the fingers"): we take all possible lines between all pairs of points and look for the average line among them all:

Illustration

Another explanation "on the fingers": we attach a spring between all the data points (here we have three) and the line that we are looking for, and the line of the equilibrium state is exactly what we are looking for.

Quadratic form minimum

So, given the vector b and the plane spanned by the column-vectors of the matrix A (in this case (x0, x1, x2) and (1, 1, 1)), we look for a vector e with the minimum square of length. Obviously, the minimum is achievable only for a vector e orthogonal to the plane spanned by the column-vectors of the matrix A:

In other words, we are looking for a vector x = (alpha, beta) such that:

$$ A^T(Ax - b) = 0. $$

I remind you that this vector x = (alpha, beta) delivers the minimum of the quadratic function ||e(alpha, beta)||²:
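In standard matrix notation the whole argument compresses to a few lines (a restatement under the usual assumption that AᵀA is invertible):

$$ \min_x \|Ax-b\|^2 \;\Longrightarrow\; A^T(Ax-b)=0 \;\Longrightarrow\; x=(A^TA)^{-1}A^Tb. $$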

Here it is useful to remember that a matrix can also be interpreted as a quadratic form; for example, the identity matrix ((1,0),(0,1)) can be interpreted as the function x² + y²:

quadratic form

All this gymnastics is known as linear regression.

Laplace equation with Dirichlet boundary condition

Now the simplest real problem: there is a certain triangulated surface, and it needs to be smoothed. For example, let's load my face model:

The original commit is available. To minimize external dependencies, I took the code of my software renderer, already described on Habr. To solve the linear system I use OpenNL: it's a great solver, and it is really easy to install: you need to copy two files (.h+.c) to your project folder. All the smoothing is done by the following code:

for (int d=0; d<3; d++) {
    nlNewContext();
    nlSolverParameteri(NL_NB_VARIABLES, verts.size());
    nlSolverParameteri(NL_LEAST_SQUARES, NL_TRUE);
    nlBegin(NL_SYSTEM);
    nlBegin(NL_MATRIX);
    // data-attachment rows: each vertex wants to keep its old coordinate
    for (int i=0; i<(int)verts.size(); i++) {
        nlBegin(NL_ROW);
        nlCoefficient(i, 1);
        nlRightHandSide(verts[i][d]);
        nlEnd(NL_ROW);
    }
    // smoothing rows: the two endpoints of every triangle edge pull together
    for (unsigned int i=0; i<faces.size(); i++) {
        Vec3i &face = faces[i]; // element type assumed: the original declaration was lost to HTML escaping
        for (int j=0; j<3; j++) {
            nlBegin(NL_ROW);
            nlCoefficient(face[ j ], 1);
            nlCoefficient(face[(j+1)%3], -1);
            nlEnd(NL_ROW);
        }
    }
    nlEnd(NL_MATRIX);
    nlEnd(NL_SYSTEM);
    nlSolve();
    for (int i=0; i<(int)verts.size(); i++) {
        verts[i][d] = nlGetVariable(i);
    }
}

The X, Y and Z coordinates are separable; I smooth them separately. That is, I solve three systems of linear equations, each with the number of variables equal to the number of vertices in my model. The first n rows of the matrix A have only one 1 per row, and the first n entries of the vector b are the original model coordinates. That is, I put a spring between the new vertex position and the old vertex position: the new ones shouldn't move too far away from the old ones.

All subsequent rows of the matrix A (faces.size()*3 = the number of edges of all the triangles in the mesh) have one occurrence of 1 and one occurrence of -1, while the corresponding entries of the vector b are zero. This means I put a spring on each edge of our triangular mesh: the two endpoints of every edge are pulled toward each other.

Once again: all vertices are variables, and they cannot deviate far from their original position, but at the same time they try to become similar to each other.

Here is the result:

Everything would be fine: the model is really smoothed, but it has moved away from its original boundary. Let's change the code a little:

for (int i=0; i<(int)verts.size(); i++) {
    float scale = border[i] ? 1000 : 1;
    nlBegin(NL_ROW);
    nlCoefficient(i, scale);
    nlRightHandSide(scale*verts[i][d]);
    nlEnd(NL_ROW);
}

In our matrix A, for the vertices that lie on the boundary I add not a row from the category v_i = verts[i][d] but 1000*v_i = 1000*verts[i][d]. What does that change? It changes our quadratic form of the error. Now a unit deviation of a vertex at the boundary will cost not one unit, as before, but 1000*1000 units. That is, we hung a stiffer spring on the boundary vertices; the solution prefers to stretch the others more strongly. Here is the result:

Let's double the strength of the springs between the vertices:
nlCoefficient(face[ j ], 2); nlCoefficient(face[(j+1)%3], -2);

It is logical that the surface has become smoother:

And now even a hundred times stronger:

What is this? Imagine that we have dipped a wire ring in soapy water. The resulting soap film will try to have the least curvature possible while touching the border, our wire ring. This is exactly what we got by fixing the boundary and asking for a smooth surface inside. Congratulations, we have just solved the Laplace equation with Dirichlet boundary conditions. Sounds cool? But in fact, it is just one system of linear equations to be solved.

Poisson equation

Let's have another cool name.

Let's say I have an image like this:

Everything is fine, but I don't like the chair.

I cut the picture in half:



And I will select the chair by hand:

Then I will drag everything that is white in the mask to the left half of the picture, and at the same time, throughout the whole picture, I will require that the difference between two neighboring pixels be equal to the difference between the corresponding two neighboring pixels of the right image:

for (int i=0; i
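The listing above is cut off in this copy, so here is only a sketch of how such constraints could be assembled with the same OpenNL calls; left, right, mask, w and h are illustrative names (the two image halves, the selection mask and the image size), not the author's original code:

// One unknown per pixel; mask[i] != 0 marks the region dragged from the right half.
nlNewContext();
nlSolverParameteri(NL_NB_VARIABLES, w*h);
nlSolverParameteri(NL_LEAST_SQUARES, NL_TRUE);
nlBegin(NL_SYSTEM);
nlBegin(NL_MATRIX);
for (int i = 0; i < w*h; i++) {
    if (!mask[i]) {   // outside the mask: pin the pixel to the left image
        nlBegin(NL_ROW);
        nlCoefficient(i, 1);
        nlRightHandSide(left[i]);
        nlEnd(NL_ROW);
    }
    if (i%w < w-1) {  // horizontal neighbors: reproduce the right image's gradient
        nlBegin(NL_ROW);
        nlCoefficient(i+1, 1);
        nlCoefficient(i, -1);
        nlRightHandSide(right[i+1] - right[i]);
        nlEnd(NL_ROW);
    }
    if (i/w < h-1) {  // vertical neighbors: same, one row down
        nlBegin(NL_ROW);
        nlCoefficient(i+w, 1);
        nlCoefficient(i, -1);
        nlRightHandSide(right[i+w] - right[i]);
        nlEnd(NL_ROW);
    }
}
nlEnd(NL_MATRIX);
nlEnd(NL_SYSTEM);
nlSolve();
for (int i = 0; i < w*h; i++) left[i] = nlGetVariable(i);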

Here is the result:

Code and pictures are available

Linear regression is widely used in econometrics because its parameters have a clear economic interpretation.

Linear regression reduces to finding an equation of the form

ŷ = a + bx.

An equation of this type allows one, given values of the factor x, to obtain theoretical values of the resultant indicator by substituting actual values of the factor into it.

The construction of a linear regression comes down to estimating its parameters a and b. Estimates of the linear regression parameters can be found by different methods.

The classical approach to estimating the linear regression parameters is based on the method of least squares (OLS).

OLS makes it possible to obtain such estimates of the parameters a and b at which the sum of the squared deviations of the actual values of the resultant indicator y from the calculated (theoretical) ones is minimal:

To find the minimum of the function, one must calculate the partial derivatives with respect to each of the parameters a and b and set them equal to zero.

Denoting the sum of squares by S, we have:

S = Σ(y − a − bx)²;  ∂S/∂a = −2Σ(y − a − bx) = 0;  ∂S/∂b = −2Σx(y − a − bx) = 0.

Transforming these equations, we obtain the following system of normal equations for estimating the parameters a and b:

Σy = n·a + b·Σx;
Σxy = a·Σx + b·Σx².

Solving the system of normal equations either by the method of successive elimination of variables or by the method of determinants, we find the required estimates of the parameters a and b.

The parameter b is called the regression coefficient. Its value shows the average change in the result when the factor changes by one unit.

The regression equation is always supplemented with an indicator of the closeness of the connection. When linear regression is used, the linear correlation coefficient acts as such an indicator. There are various modifications of the formula for the linear correlation coefficient; some of them are given below:

As is known, the linear correlation coefficient lies within the limits: −1 ≤ r ≤ 1.

To assess the quality of the fit of the linear function, the square of the linear correlation coefficient, called the coefficient of determination, is calculated. The coefficient of determination characterizes the proportion of the variance of the resultant indicator y explained by the regression in the total variance of the resultant indicator:
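In formula form (the standard definition, consistent with the variance decomposition discussed below):

$$ r^2 = \frac{\sum(\hat{y}-\bar{y})^2}{\sum(y-\bar{y})^2} = 1 - \frac{\sum(y-\hat{y})^2}{\sum(y-\bar{y})^2}. $$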

Accordingly, the value 1 − r² characterizes the proportion of the variance of y caused by the influence of the other factors not taken into account in the model.

Questions for self-control

1. What is the essence of the method of least squares?

2. How many variables does paired regression involve?

3. What coefficient determines the closeness of the connection between the changes?

4. Within what limits does the coefficient of determination lie?

5. How is the parameter b estimated in correlation-regression analysis?


Nonlinear economic models. Nonlinear regression models. Transformation of variables. The elasticity coefficient.

If there are nonlinear relationships between economic phenomena, they are expressed by means of the corresponding nonlinear functions: for example, an equilateral hyperbola, a second-degree parabola, etc.

There are two classes of nonlinear regressions:

1. Regressions that are nonlinear with respect to the explanatory variables included in the analysis but linear with respect to the estimated parameters, for example:

polynomials of various degrees: y = a + bx + cx², y = a + bx + cx² + dx³;

an equilateral hyperbola: y = a + b/x;

a semilogarithmic function: y = a + b·ln x.

2. Regressions that are nonlinear in the estimated parameters, for example:

power: y = a·xᵇ;

exponential: y = a·bˣ;

exponent: y = e^(a + bx).

The total sum of the squared deviations of the individual values of the resultant indicator y from its mean value is caused by the influence of many factors. Let us conditionally divide the entire set of causes into two groups: the studied factor x and the other factors.

If the factor does not affect the result, then the regression line on the graph is parallel to the Ox axis and ŷ = ȳ.

Then the entire variance of the resultant indicator is due to the influence of other factors, and the total sum of squared deviations coincides with the residual one. If the other factors do not affect the result, then y is related to x functionally, and the residual sum of squares is zero. In this case, the sum of squared deviations explained by the regression coincides with the total sum of squares.

Since not all points of the correlation field lie on the regression line, their scatter always occurs, partly due to the influence of the factor x, i.e. the regression of y on x, and partly due to the action of other causes (unexplained variation). The suitability of the regression line for forecasting depends on what part of the total variation of the indicator y is accounted for by the explained variation.

Obviously, if the sum of squared deviations due to the regression is greater than the residual sum of squares, then the regression equation is statistically significant and the factor x has a significant impact on the result y.


The significance of the regression equation as a whole is assessed with the help of Fisher's F-test. In this case, a null hypothesis is put forward that the regression coefficient is equal to zero, i.e. b = 0, and hence the factor x does not affect the result y.

The direct calculation of the F-test is preceded by an analysis of variance. Central to it is the decomposition of the total sum of squared deviations of the variable y from its mean value ȳ into two parts, the "explained" and the "unexplained":

Σ(y − ȳ)² – the total sum of squared deviations;

Σ(ŷ − ȳ)² – the sum of squared deviations explained by the regression;

Σ(y − ŷ)² – the residual sum of squared deviations.

Any sum of squared deviations is related to the number of degrees of freedom, i.e. to the number of independent variations of the indicator. The number of degrees of freedom is related to the number of units of the population n and to the number of constants determined from it. In relation to the problem under study, the number of degrees of freedom should show how many independent deviations out of the n possible ones are required to form a given sum of squares.

Dividing each sum of squares by its number of degrees of freedom gives the variance per one degree of freedom, D.

The F-ratio (F-test):
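For paired linear regression the standard form, consistent with the decomposition above, is:

$$ F = \frac{D_{\text{fact}}}{D_{\text{resid}}} = \frac{\sum(\hat{y}-\bar{y})^2/1}{\sum(y-\hat{y})^2/(n-2)} = \frac{r^2}{1-r^2}\,(n-2). $$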

If the null hypothesis is true, the factor variance and the residual variance do not differ from each other. For H₀ to be refuted, the factor variance must exceed the residual one several times. The English statistician Snedecor developed tables of critical values of the F-ratio at different levels of significance of the null hypothesis and different numbers of degrees of freedom. The tabular value of the F-test is the maximum value of the ratio of the variances that can occur by chance at a given probability level of the null hypothesis. The computed value of the F-ratio is recognized as reliable if it is greater than the tabular one.

In this case, the null hypothesis about the absence of a relationship between the indicators is rejected and a conclusion is made about the significance of this relationship: if F_fact > F_table, then H₀ is rejected.

If the value is less than the tabular one, F_fact < F_table, then the probability of the null hypothesis is higher than the given level, and it cannot be rejected without a serious risk of drawing a wrong conclusion about the presence of a relationship. In this case, the regression equation is considered statistically insignificant. H₀ is not rejected.

Standard error of the regression coefficient

To assess the significance of the regression coefficient, its value is compared with its standard error, i.e. the actual value of Student's t-test, t_b = b/m_b, is determined, which is then compared with the tabular value at a certain level of significance and number of degrees of freedom (n − 2).

The standard error of the parameter a:

The significance of the linear correlation coefficient is checked on the basis of the magnitude of the error of the correlation coefficient, m_r:

The total variance of the indicator x:

Multiple Linear Regression

Model building

Multiple regression is a regression of the resultant indicator on two or more factors, i.e. a model of the form

y = f(x₁, x₂, ..., x_p).

A regression can give a good result in modeling if the influence of other factors affecting the object of study can be neglected. The behavior of individual economic variables cannot be controlled, that is, it is not possible to ensure the equality of all other conditions when assessing the influence of one factor under study. In this case, one should try to identify the influence of other factors by introducing them into the model, i.e. build a multiple regression equation: y = a + b₁x₁ + b₂x₂ + ... + b_p x_p + ε.

The main goal of multiple regression is to build a model with a large number of factors and to determine the influence of each of them individually, as well as their cumulative impact on the modeled indicator. The specification of the model includes two sets of questions: the selection of the factors and the choice of the type of the regression equation.

The essence of the method is that the criterion of the quality of the solution under consideration is the sum of squared errors, which one seeks to minimize. To apply it, one should carry out as many measurements of the unknown random variable as possible (the more, the higher the accuracy of the solution) and have a certain set of expected solutions, from which the best one is to be chosen. If the set of solutions is parameterized, then the optimal values of the parameters must be found.

Why are the squares of the errors minimized, and not the errors themselves? The fact is that in most cases the errors occur in both directions: the estimate can be greater than the measurement or less than it. If we add errors with different signs, they will cancel each other out, and as a result the sum will give us an incorrect idea of the quality of the estimate. Often, in order for the final estimate to have the same dimension as the measured values, the square root is taken of the sum of squared errors.



The least squares method is used in mathematics, in particular in probability theory and mathematical statistics. It finds its greatest application in filtering problems, when it is necessary to separate a useful signal from the noise superimposed on it.

It is also used in mathematical analysis for an approximate representation of a given function by simpler functions. Another area of application of least squares is the solution of systems of equations with fewer unknowns than equations.

I have come up with a few more very unexpected applications of least squares, which I would like to talk about in this article.

Least squares and typos

Typos and spelling errors are the scourge of automatic translators and search engines. Indeed, if a word differs by only one letter, the program regards it as a different word and translates/searches for it incorrectly or does not translate/find it at all.

I had a similar problem: there were two databases with the addresses of Moscow houses, and they had to be merged into one. But the addresses were written in different styles. One database used the KLADR standard (the All-Russian address classifier), for example: "BABUSHKINA PILOT UL., D10K3". The other database used the postal style, for example: "St. Pilot Babushkin, house 10, building 3". It seems there are no errors in either case, yet automating the process is incredibly difficult (each database has 40,000 records!). Although there were enough typos too... How to make the computer understand that the two addresses above belong to the same house? This is where least squares came in handy.

What did I do? Having found the next letter in the first address, I looked for the same letter in the second address. If they were both in the same position, then I assumed the error for that letter to be 0. If they were located in adjacent positions, the error was 1. If there was a shift by two positions, the error was 2, and so on. If the letter was absent from the other address altogether, then the error was assumed to be n+1, where n is the number of letters in the 1st address. Then I calculated the sum of squared errors and linked those records in which this sum was minimal.

Of course, the house and building numbers were processed separately. I don't know whether I invented another "bicycle" or this really existed, but the problem was solved quickly and efficiently. I wonder whether this method is used in search engines? Perhaps it is, since every self-respecting search engine, when it meets an unfamiliar word, offers a replacement from familiar words ("perhaps you meant..."). However, they may do this analysis somehow differently.
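A hedged reconstruction of this scoring in C++ (my variable names and one plausible reading of "the same letter", namely the closest occurrence; the author's original code is not available):

#include <cstddef>
#include <string>

// Sum of squared letter-position errors between two address strings,
// following the scheme described above.
long address_distance(const std::string& s1, const std::string& s2) {
    long sum = 0;
    for (std::size_t i = 0; i < s1.size(); i++) {
        long best = -1;
        for (std::size_t j = 0; j < s2.size(); j++) {
            if (s2[j] == s1[i]) {            // same letter found in the other address
                long d = (long)j - (long)i;  // positional shift
                if (d < 0) d = -d;
                if (best < 0 || d < best) best = d;
            }
        }
        // an absent letter costs n+1, where n is the length of the first address
        long err = (best < 0) ? (long)s1.size() + 1 : best;
        sum += err*err;
    }
    return sum; // link the pair of records for which this sum is minimal
}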

Least squares and search by pictures, faces and maps

This method can also be applied to searching by pictures, drawings, maps, and even by people's faces.


Now all search engines, instead of actually searching by image, essentially use search by image captions. This is undoubtedly a useful and convenient service, but I propose supplementing it with a real image search.

A sample picture is submitted, and a rating is compiled for all images by the sum of the squared deviations of the characteristic points. Determining these characteristic points is in itself a nontrivial task. However, it is quite solvable: for example, for faces these are the corners of the eyes and lips, the tip of the nose, the nostrils, the edges and centers of the eyebrows, the pupils, etc.

By comparing these parameters, you can find the face that is most similar to the sample. I have already seen sites where such a service works: you can find the celebrity most similar to the photo you submitted and even compose an animation that turns you into a celebrity and back. Surely the same method works in the databases of the Ministry of Internal Affairs containing identikit images of criminals.


Yes, and fingerprints can be searched for in the same way. Map search relies on the natural irregularities of geographical objects: the bends of rivers, mountain ranges, the outlines of coasts, forests and fields.

Such is the wonderful and versatile method of least squares. I am sure that you, dear readers, will be able to find many unusual and unexpected applications of this method for yourselves.