Possible values of a random variable. The concept of a random variable

One-dimensional random variables

The concept of a random variable. Discrete and continuous random variables. The probability distribution function and its properties. The probability density and its properties. Numerical characteristics of random variables: mathematical expectation, variance and their properties, standard deviation, mode and median; raw and central moments, skewness and kurtosis.

1. The concept of a random variable.

A random variable is a quantity that, as a result of a trial, takes one and only one of its possible values, not known in advance, varying from trial to trial and depending on random circumstances. In contrast to a random event, which is a qualitative characteristic of a random trial outcome, a random variable characterizes the outcome of the trial quantitatively. Examples of random variables are the size of a machined part and the error in measuring some parameter of a product or of the environment. Among the random variables encountered in practice, two main types can be distinguished: discrete and continuous.

A random variable is called discrete if it takes a finite or countably infinite set of values. For example: the number of hits in three shots; the number of defective items in a batch; the number of calls arriving at a telephone exchange during a day; the number of element failures in a device over a certain period of time during a reliability test; the number of shots before the first hit on the target, etc.

A random variable is called continuous if it can take any value from some finite or infinite interval. Obviously, the number of possible values of a continuous random variable is infinite. For example: the error in measuring range with a radar; the failure-free operating time of a microcircuit; the manufacturing error of a part; the salt concentration in sea water, etc.

Random variables are usually denoted by capital letters X, Y, Z, etc., and their possible values by the corresponding lowercase letters x, y, z, etc. To specify a random variable it is not enough to list all its possible values. One must also know how often particular values may appear as a result of trials under identical conditions, i.e. one must specify the probabilities of their appearance. The set of all possible values of a random variable together with the corresponding probabilities constitutes the distribution of the random variable.

2. Distribution laws of a random variable.

The distribution law of a random variable is any correspondence between the possible values of the random variable and the corresponding probabilities. The random variable is then said to obey this distribution law. Two random variables are called independent if the distribution law of one of them does not depend on which possible values the other variable has taken. Otherwise the random variables are called dependent. Several random variables are called mutually independent if the distribution laws of any number of them do not depend on which possible values the remaining variables have taken.

The distribution law of a random variable can be specified in the form of a table, as a distribution function, or as a distribution density. A table containing the possible values of the random variable and the corresponding probabilities is the simplest way of specifying the distribution law:

A tabular specification of the distribution law can be used only for a discrete random variable with a finite number of possible values. This tabular form of the distribution law is also called the distribution series.

For clarity, the distribution series is represented graphically: in a rectangular coordinate system, all possible values of the random variable are plotted along the abscissa axis, and the corresponding probabilities along the ordinate axis. The points are then plotted and connected by straight-line segments. The resulting figure is called the distribution polygon (Fig. 5). It should be remembered that the connection of the vertices is made only for clarity, since between neighboring possible values the random variable can take no value, so the probabilities of its appearance in those intervals are zero.

The distribution polygon, like the distribution series, is one of the ways of specifying the distribution law of a discrete random variable. Polygons may have different shapes, but they all share one property: the sum of the ordinates of the vertices of the distribution polygon, being the sum of the probabilities of all possible values of the random variable, is always equal to one. This property follows from the fact that all possible values of the random variable form a complete group of mutually exclusive events, whose probabilities sum to one.

Definition. A random variable is a numerical quantity whose value depends on which elementary outcome occurred as a result of an experiment with a random outcome. The set of all values that the random variable can take is called the set of possible values of this random variable.

Random variables are denoted by X, Y, Z or ξ, η, μ, and their possible values by x_i, y_k, z_ij.

Example. In the experiment of a single throw of a die, the random variable is the number X of points rolled. The set of possible values of the random variable X has the form

{x1 = 1, x2 = 2, ..., x6 = 6}.

We have the following correspondence between the elementary outcomes ω and the values of X:

That is, each elementary outcome ωi, i = 1, ..., 6, is assigned the number i.

Example. A coin is tossed until "heads" first appears. In this experiment one can introduce, for example, the following random variables: X, the number of tosses up to the first appearance of "heads", with the set of possible values {1, 2, 3, ...}, and Y, the number of "tails" obtained before the first appearance of "heads", with the set of possible values {0, 1, 2, ...} (clearly X = Y + 1). In this experiment the space of elementary outcomes Ω can be identified with the set

{H, TH, TTH, ..., T...TH, ...},

and the elementary outcome T...TH is assigned the number m + 1 (for X) or m (for Y), where m is the number of repetitions of the letter "T".

Definition. A scalar function X(ω), defined on the space of elementary outcomes, is called a random variable if for any x ∈ R the set {ω: X(ω) < x} is an event.

Random variable distribution function

To study the probabilistic properties of a random variable, one needs a rule that makes it possible to find the probability that the random variable takes a value from some subset of its values. Any such rule is called the law of probability distribution, or simply the distribution, of the random variable.

The distribution law common to all random variables is given by the distribution function.

Definition. The (probability) distribution function of a random variable X is the function F(x) whose value at a point x equals the probability of the event {X < x}, that is, of the event consisting of those and only those elementary outcomes ω for which X(ω) < x:

F(x) = P{X < x}.

It is usually said that the value of the distribution function at a point x equals the probability that the random variable X takes a value less than x.

Theorem. The distribution function satisfies the following properties:

1) 0 ≤ F(x) ≤ 1;

2) F(x) is non-decreasing;

3) F(−∞) = 0, F(+∞) = 1.

A typical graph of the distribution function.

Discrete random variables

Definition. A random variable X is called discrete if its set of possible values is finite or countable.

Definition. The distribution series (of probabilities) of a discrete random variable X is a table of two rows: the top row lists all possible values of the random variable, and the bottom row the probabilities p_i = P{X = x_i} that the random variable takes these values.

To verify the correctness of the table, it is recommended to sum the probabilities p_i. By the normalization axiom, this sum must equal 1.

From the distribution series of a discrete random variable one can construct its distribution function F(x). Let X be given by its distribution series, with x1 < x2 < ... < xn. Then for all x ≤ x1 the event {X < x} is impossible, so by definition F(x) = 0. If x1 < x ≤ x2, the event {X < x} consists of those and only those elementary outcomes for which X(ω) = x1; hence F(x) = p1. Similarly, for x2 < x ≤ x3 the event {X < x} consists of the elementary outcomes ω for which either X(ω) = x1 or X(ω) = x2, i.e. {X < x} = {X = x1} + {X = x2}; hence F(x) = p1 + p2, and so on. For x > xn the event {X < x} is certain, so F(x) = 1.

The distribution law of a discrete random variable can also be specified analytically, by a formula, or graphically. For example, the distribution of the points on a die is described by the formula

P(X = i) = 1/6, i = 1, 2, ..., 6.
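As a small illustrative sketch (not part of the original text), the die's distribution series given by the formula above can be turned into its distribution function F(x) = P{X < x} numerically; the names `values` and `probs` are assumptions of this example.

```python
# Distribution series of a fair die: P(X = i) = 1/6, i = 1..6.
values = [1, 2, 3, 4, 5, 6]
probs = [1 / 6] * 6

def F(x):
    # F(x) = P{X < x}: sum the probabilities of all values strictly less than x
    return sum(p for v, p in zip(values, probs) if v < x)

print(F(1))    # the event {X < 1} is impossible: 0
print(F(3.5))  # P(X = 1) + P(X = 2) + P(X = 3), close to 0.5
print(F(7))    # the event {X < 7} is certain, close to 1
```

For any x between two neighboring possible values F(x) stays constant, which is exactly the step shape described above.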

Some discrete random variables

Binomial distribution. A discrete random variable X is distributed according to the binomial law if it takes the values 0, 1, 2, ..., n with probabilities given by the Bernoulli formula:

P(X = m) = C(n, m) p^m q^(n−m), m = 0, 1, ..., n.

This distribution is nothing other than the distribution of the number of successes X in n trials of the Bernoulli scheme with success probability p and failure probability q = 1 − p.

Poisson distribution. A discrete random variable X is distributed according to the Poisson law if it takes the non-negative integer values 0, 1, 2, ... with probabilities

P(X = m) = λ^m e^(−λ) / m!,

where λ > 0 is the parameter of the Poisson distribution.

The Poisson distribution is also called the law of rare events, since it appears wherever a large number of trials is performed, in each of which a "rare" event occurs with small probability.

Examples of quantities distributed according to the Poisson law: the number of calls received by a telephone exchange during a day; the number of meteorites falling in a certain area; the number of particles emitted in the radioactive decay of a substance.

Geometric distribution. Consider the Bernoulli scheme again. Let X be the number of failures preceding the first success. Then X is a discrete random variable taking the values 0, 1, 2, ..., n, ... Let us determine the probability of the event {X = n}.

  • X = 0 if the first trial is a success, so P(X = 0) = p.
  • X = 1 if the first trial is a failure and the second a success, so P(X = 1) = qp.
  • X = 2 if the first two trials are failures and the third a success, so P(X = 2) = q^2 p.
  • Continuing in the same way, we get P(X = i) = q^i p, i = 0, 1, 2, ...

      A random variable with such a distribution series is said to be distributed according to the geometric law.
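A minimal sketch of this derivation, with an assumed success probability p:

```python
p = 0.3    # assumed success probability
q = 1 - p  # failure probability

def pmf(i):
    # P(X = i) = q^i * p: i failures followed by the first success
    return q ** i * p

# Geometric series: p * (1 + q + q^2 + ...) = p / (1 - q) = 1
total = sum(pmf(i) for i in range(500))
print(total)  # close to 1.0
```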

Random variables.

In mathematics, a quantity is the general name for various quantitative characteristics of objects and phenomena. Length, area, temperature, pressure, etc. are examples of different quantities.

A quantity that takes various numerical values under the influence of random circumstances is called a random variable. Examples of random variables: 1) the number of patients waiting for an appointment with a doctor; 2) the exact dimensions of people's internal organs, etc.

One distinguishes discrete and continuous random variables.

A random variable is called discrete if it takes only certain values, separated from one another, which can be identified and listed.

Examples:

1) The number of students in a lecture hall can only be a non-negative integer:

0, 1, 2, 3, 4, ..., 20, ...

2) The number that appears on the upper face when a die is thrown can take only integer values from 1 to 6.

3) The relative frequency of hitting the target in 10 shots; its values are:

0; 0.1; 0.2; 0.3; ...; 1

4) The number of events occurring in identical time intervals: the pulse rate, the number of ambulance calls per hour, the number of operations per month with a fatal outcome, etc.

A random variable is called continuous if it can take any value within a certain interval, which sometimes has sharply defined boundaries; when the boundaries are not known, the values of the random variable are assumed to lie in the interval (−∞; +∞). Continuous random variables include, for example, temperature, pressure, people's weight and height, the dimensions of the formed elements of blood, blood pH, etc.


The concept of a random variable plays a decisive role in modern probability theory, which has developed special techniques for passing from random events to random variables.

If a random variable depends on time, one can speak of a random process.

3.1. Discrete random variable

To characterize a discrete random variable completely, one must specify all its possible values and their probabilities.

The correspondence between the possible values of a discrete random variable and their probabilities is called the distribution law of this variable.

Denote the possible values of the random variable X by xi, and the corresponding probabilities by pi*. The distribution law of a discrete random variable can then be specified in three ways: as a table, as a graph, or as a formula.

1. As a table, called the distribution series, which lists all possible values of the discrete random variable and the probabilities P(x) corresponding to these values:

Table 3.1.

X: x1, x2, ..., xn
P(x): p1, p2, ..., pn

Here the sum of all probabilities pi must equal one (the normalization condition):

Σ pi = p1 + p2 + ... + pn = 1.

2. Graphically, as a broken line customarily called the distribution polygon (Fig. 3.1). Here all possible values xi of the random variable are plotted along the horizontal axis, and the corresponding probabilities pi along the vertical axis.

3. Analytically, as a formula. For example, if the probability of hitting the target with one shot is p, then the probability of a miss with one shot is q = 1 − p, and the probability of hitting the target for the first time on the n-th shot is given by the formula P(n) = q^(n−1) p.

3.2. The distribution law of a continuous random variable. The probability density.

For continuous random variables the distribution law cannot be specified in the forms listed above, since a continuous variable has an uncountable set of possible values that completely fill some interval. It is therefore impossible to compile a table listing all its possible values, or to build a distribution polygon. Moreover, the probability of any particular value is vanishingly small (close to 0), while different regions (intervals) of possible values of a continuous random variable are in general not equally probable. Thus a distribution law exists here as well, though not in the former sense.

Consider a continuous random variable X whose possible values completely fill some interval (a, b)*. The law of probability distribution of such a variable should make it possible to find the probability that its value falls into any given interval (x1, x2) lying inside (a, b) (Fig. 3.2).

This probability is denoted P(x1 < X < x2), or P(x1 ≤ X ≤ x2).

Consider first a very small interval of values from x to (x + dx) (see Fig. 3.2). The small probability dP that the random variable takes a value from this small interval (x, x + dx) is proportional to the length dx of the interval: dP ~ dx. Introducing the proportionality factor f, which may itself depend on x, we get:

dP = f(x) dx. (3.2)

The function f(x) introduced here is called the probability distribution density of the random variable X or, in short, the probability density (distribution density). Equation (3.2) can be viewed as a differential relation; the probability of falling into the interval (x1, x2) is then:

P(x1 < X < x2) = ∫_{x1}^{x2} f(x) dx. (3.3)

Graphically, this probability P(x1 < X < x2) equals the area of the curvilinear trapezoid bounded by the abscissa axis, the curve f(x) and the straight lines x = x1 and x = x2 (see Fig. 3.3), which follows from the geometric meaning of the definite integral (3.3). The curve f(x) is called the distribution curve.

From (3.3) it is clear that if the function f(x) is known, then by changing the limits of integration one can find the probability for any interval. Hence specifying the function f(x) completely determines the distribution law of a continuous random variable.

The probability density f(x) must satisfy the normalization condition:

∫_{a}^{b} f(x) dx = 1, (3.4)

if it is known that all values of X lie in the interval (a, b), or in the form:

∫_{−∞}^{+∞} f(x) dx = 1, (3.5)

if the boundaries of the interval for the values of X are unknown. The normalization conditions (3.4) and (3.5) express the fact that the values of the random variable X certainly lie within (a, b) or (−∞, +∞). It follows from (3.4) and (3.5) that the area of the figure bounded by the distribution curve and the abscissa axis is always equal to 1.
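As an illustrative sketch, conditions (3.3) and (3.4) can be checked numerically for an assumed density, the uniform one f(x) = 1/(b − a) on (a, b); the interval boundaries below are arbitrary.

```python
a, b = 2.0, 5.0   # assumed interval boundaries
f0 = 1 / (b - a)  # constant uniform density on (a, b)

def integrate(f, lo, hi, n=100_000):
    # midpoint-rule approximation of the definite integral of f over (lo, hi)
    dx = (hi - lo) / n
    return sum(f(lo + (k + 0.5) * dx) for k in range(n)) * dx

f = lambda x: f0
print(integrate(f, a, b))      # normalization (3.4): close to 1
print(integrate(f, 2.5, 3.0))  # P(2.5 < X < 3.0) by (3.3): close to 1/6
```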

3.3. Numerical characteristics of random variables.

The results set forth in Sections 3.1 and 3.2 show that a complete characterization of a discrete or continuous random variable is given by its distribution law.

However, in many practically important situations one uses the so-called numerical characteristics of random variables, whose main purpose is to express in compressed form the most essential features of their distribution. It is important that these parameters are specific (constant) values, which can be estimated from data obtained in experiments. Such estimates are the subject of "descriptive statistics".

Probability theory and mathematical statistics offer a great many different characteristics; here we consider the most frequently used ones. Only for some of them are calculation formulas given; in the other cases we leave the calculations to a computer.

3.3.1. Characteristics of location: mathematical expectation, mode, median.

These characterize the position of a random variable on the numerical axis, i.e. they indicate certain important values that characterize the distribution of the remaining values. Among them the mathematical expectation M(X) plays the most important role.

a) The mathematical expectation M(X) of a random variable is the probabilistic analogue of its arithmetic mean.

For a discrete random variable it is calculated by the formula:

M(X) = x1p1 + x2p2 + ... + xnpn = Σ xi pi, (3.6)

and for a continuous random variable M(X) is determined by the formulas:

M(X) = ∫_{a}^{b} x f(x) dx or M(X) = ∫_{−∞}^{+∞} x f(x) dx, (3.7)

where f(x) is the probability density and dP = f(x) dx is the probability element (the analogue of pi) for the small interval dx.

Example. Calculate the mean value of a continuous random variable having a uniform distribution on the interval (a, b).

Solution: With a uniform distribution, the probability density on the interval (a, b) is constant, i.e. f(x) = f0 = const, and outside (a, b) it is zero; from the normalization condition (3.4) we find the value of f0:

1 = ∫_{a}^{b} f0 dx = f0 x |_{a}^{b} = (b − a) f0, whence f0 = 1/(b − a).

M(X) = ∫_{a}^{b} x f0 dx = f0 x²/2 |_{a}^{b} = (b² − a²)/(2(b − a)) = (a + b)/2.

Thus the mathematical expectation M(X) coincides with the middle of the interval (a, b): M(X) = (a + b)/2.
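The result of this example is easy to confirm numerically with a midpoint-rule integral; the boundaries a and b below are arbitrary assumptions.

```python
a, b = 1.0, 7.0   # assumed interval boundaries
f0 = 1 / (b - a)  # uniform density found from the normalization condition

n = 200_000
dx = (b - a) / n
# M(X) = integral of x * f(x) over (a, b), midpoint rule
m = sum((a + (k + 0.5) * dx) * f0 * dx for k in range(n))
print(m)  # close to (a + b)/2 = 4.0
```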


b) The mode Mo(X) of a discrete random variable is its most probable value (Fig. 3.4, a); for a continuous variable it is the value of X at which the probability density is maximal (Fig. 3.4, b).

c) Another characteristic of location is the median Me(X) of the distribution of a random variable.

The median Me(X) of a random variable is the value of X that divides the whole distribution into two equiprobable parts. In other words, the random variable is equally likely to take values less than Me(X) or greater than Me(X): P(X < Me) = P(X > Me) = 1/2.

Therefore the median can be calculated from the equation:

F(Me) = 1/2. (3.8)

Graphically, the median is the value of the random variable whose ordinate divides the area bounded by the distribution curve in half (S1 = S2) (Fig. 3.4, b). This characteristic is usually used only for continuous random variables, although it can be defined formally for a discrete X as well.

If M(X), Mo(X) and Me(X) coincide, the distribution of the random variable is called symmetric; otherwise it is asymmetric.

Characteristics of scatter: the variance and the standard deviation (root-mean-square deviation).

The variance D(X) of a random variable X is defined as the mathematical expectation of the squared deviation of the random variable X from its mathematical expectation M(X):

D(X) = M[X − M(X)]², (3.9)

or D(X) = M(X²) − [M(X)]².

Therefore for a discrete random variable the variance is calculated by the formulas:

D(X) = Σ [xi − M(X)]² pi, or D(X) = Σ xi² pi − [M(X)]²,
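A quick sketch showing that the two discrete formulas above agree, on an assumed distribution series:

```python
xs = [1, 2, 5]        # assumed possible values
ps = [0.4, 0.4, 0.2]  # assumed probabilities (they sum to 1)

m = sum(x * p for x, p in zip(xs, ps))                     # M(X)
d_def = sum((x - m) ** 2 * p for x, p in zip(xs, ps))      # definition form
d_short = sum(x * x * p for x, p in zip(xs, ps)) - m ** 2  # shortcut form
print(m, d_def, d_short)  # the two variance values coincide
```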

and for a continuous variable distributed in the interval (a, b):

D(X) = ∫_{a}^{b} [x − M(X)]² f(x) dx, or D(X) = ∫_{a}^{b} x² f(x) dx − [M(X)]²,

and for the interval (−∞, +∞):

D(X) = ∫_{−∞}^{+∞} [x − M(X)]² f(x) dx, or D(X) = ∫_{−∞}^{+∞} x² f(x) dx − [M(X)]².

The variance characterizes the average scatter of the values of the random variable X about its mathematical expectation. The word "dispersion" itself means "scattering".

But the variance D(X) has the dimension of the square of the random variable, which is very inconvenient when assessing scatter in physical, biological, medical and other applications. For this reason another parameter is usually used, whose dimension coincides with that of X. This is the standard deviation (root-mean-square deviation) of the random variable X, denoted σ(X):

σ(X) = √D(X). (3.13)

Thus the mathematical expectation, mode, median, variance and standard deviation are the most commonly used numerical characteristics of the distributions of random variables, each of which, as shown above, expresses some characteristic property of the distribution.

3.4. Normal law of distribution of random variables

The normal distribution law (Gauss's law) plays an exceptionally important role in probability theory. First, it is the distribution law of continuous random variables most frequently encountered in practice. Second, it is a limiting law, in the sense that under certain conditions other distribution laws approach it.

The normal distribution law is characterized by the following probability density formula:

f(x) = 1/(σ√(2π)) exp(−(x − M(X))²/(2σ²)), (3.13)

Here x denotes the current values of the random variable X, and M(X) and σ are its mathematical expectation and standard deviation, which fully determine the function f(x). Thus, if a random variable is distributed according to the normal law, it is enough to know only the two numerical parameters M(X) and σ to know its distribution law (3.13) completely. The graph of the function (3.13) is called the normal distribution curve (Gaussian curve). It is symmetric about the line x = M(X). The maximum of the probability density corresponds to the mathematical expectation x = M(X); as x moves away from it, the density f(x) decreases, gradually approaching zero (see Fig.). Changing the value of M(X) in (3.13) does not change the shape of the normal curve but only shifts it along the abscissa axis. The value M(X) is also called the center of scattering, and the standard deviation σ characterizes the width of the distribution curve (see Fig. 3.6).

As σ increases, the maximum ordinate of the curve decreases and the curve itself becomes flatter, stretching along the abscissa axis, whereas as σ decreases the curve becomes narrower and taller, compressed from the sides (Fig. 6).

Naturally, for any values of M(X) and σ the area bounded by the normal curve and the x-axis remains equal to 1 (the normalization condition):

∫_{−∞}^{+∞} f(x) dx = 1.

The normal distribution is symmetric, therefore M(X) = Mo(X) = Me(X).

The probability that the values of the random variable fall into the interval (x1, x2), i.e. P(x1 < X < x2), equals

P(x1 < X < x2) = ∫_{x1}^{x2} f(x) dx. (3.15)

In practice one often meets the problem of finding the probability that the values of a normally distributed random variable fall into an interval symmetric about M(X). In particular, consider the following practically important problem. Let us lay off from M(X), to the right and to the left, segments equal to σ, 2σ and 3σ (Fig. 7) and calculate the probability that X falls into the corresponding intervals:

P(M(X) − σ < X < M(X) + σ) = 0.6827 = 68.27%. (3.16)

P(M(X) − 2σ < X < M(X) + 2σ) = 0.9545 = 95.45%. (3.17)

P(M(X) − 3σ < X < M(X) + 3σ) = 0.9973 = 99.73%. (3.18)

From (3.18) it follows that the values of a normally distributed random variable with parameters M(X) and σ lie, with probability P = 99.73%, in the interval M(X) ± 3σ; in other words, practically all possible values of this random variable fall into this interval. This method of estimating the range of possible values of a random variable is known as the "three sigma rule".

Example. It is known that human blood pH is a normally distributed quantity with mean (mathematical expectation) 7.4 and standard deviation 0.2. Determine the range of possible values of this parameter.

Solution: We use the "three sigma rule". With probability 99.73% it can be asserted that the range of pH values for a person is 7.4 ± 3 · 0.2, that is, from 6.8 to 8.0.
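As a sketch, the probabilities (3.16)-(3.18) and the answer of this example can be checked with the standard error function, since for a normal law P(M(X) − kσ < X < M(X) + kσ) = erf(k/√2) regardless of M(X) and σ:

```python
import math

def within_k_sigma(k):
    # probability that a normal random variable lies within k standard
    # deviations of its mathematical expectation
    return math.erf(k / math.sqrt(2))

for k in (1, 2, 3):
    print(k, within_k_sigma(k))  # close to 0.6827, 0.9545, 0.9973

# the pH example: the three-sigma interval around 7.4 with sigma = 0.2
m, sigma = 7.4, 0.2
print(m - 3 * sigma, m + 3 * sigma)  # close to 6.8 and 8.0
```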

* If the exact values of the interval boundaries are unknown, the interval (−∞, +∞) is considered.


Discrete random variables

Suppose a trial is performed whose result is one of a set of mutually exclusive random events (the number of events is finite or countable, i.e. the events can be numbered). Each outcome is assigned some real number, i.e. a real function X with values on the set of random events is specified. This function X is called a discrete random variable (the term "discrete" is used because the values of the random variable are isolated numbers, unlike the values of continuous functions). Since the values of the random variable change depending on random events, the main interest lies in the probabilities with which the random variable takes various numerical values. The distribution law of a random variable is the relation establishing the connection between the possible values of the random variable and the corresponding probabilities. The distribution law may take various forms. For a discrete random variable the distribution law is the set of pairs of numbers (xi, pi), where xi are the possible values of the random variable and pi are the probabilities with which it takes these values: pi = P{X = xi}. Moreover, Σ pi = 1.

The pairs (xi, pi) can be considered as points in some coordinate system. Connecting these points by straight-line segments, we obtain a graphic representation of the distribution law: the distribution polygon. Most often the distribution law of a discrete random variable is written as a table containing the pairs (xi, pi).

Example. A coin is tossed twice. Construct the distribution law of the number of occurrences of "heads" in this trial.

Solution. The random variable X is the number of occurrences of "heads" in this trial. Obviously, X can take one of three values: 0, 1, 2. The probability of "heads" in one toss of the coin is p = 0.5, and of "tails" q = 1 − p = 0.5. The probabilities with which the random variable takes the listed values are found by the Bernoulli formula:

P(X = 0) = q² = 0.25, P(X = 1) = 2pq = 0.5, P(X = 2) = p² = 0.25.

We write the distribution law of the random variable X in the form of a distribution table:

X: 0, 1, 2
P: 0.25, 0.5, 0.25

Control: 0.25 + 0.5 + 0.25 = 1.
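The same table can be reproduced in a few lines with the Bernoulli formula (a sketch; `math.comb` gives the binomial coefficient C(n, m)):

```python
from math import comb

n, p = 2, 0.5  # two tosses of a fair coin
q = 1 - p
# P(X = m) = C(n, m) * p^m * q^(n-m) for m = 0, 1, 2
probs = [comb(n, m) * p ** m * q ** (n - m) for m in range(n + 1)]
print(probs)       # [0.25, 0.5, 0.25]
print(sum(probs))  # control: 1.0
```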

Some distribution laws of discrete random variables that frequently occur in solving various problems have received special names: the geometric distribution, the hypergeometric distribution, the binomial distribution, the Poisson distribution and others.

The distribution of a discrete random variable can also be specified using the distribution function F(x), which equals the probability that the random variable X takes a value in the interval (−∞, x): F(x) = P(X < x).

The function F(x) is defined on the entire real axis and has the following properties:

1) 0 ≤ F(x) ≤ 1;

2) F(x) is a non-decreasing function;

3) F(−∞) = 0, F(+∞) = 1;

4) F(b) − F(a) = P(a ≤ X < b), the probability that the random variable X takes a value in the interval [a, b).

Squared deviations [xi − M(X)]² for a random variable taking the values 1, 2, 5 with M(X) = 2.3:

(1 − 2.3)² = 1.69

(2 − 2.3)² = 0.09

(5 − 2.3)² = 7.29

We write the distribution law of the squared deviation:

Solution: We find the mathematical expectation M(X):

M(X) = 2 · 0.1 + 3 · 0.6 + 5 · 0.3 = 3.5

We write the distribution law of the random variable X²:

We find the mathematical expectation M(X²):

M(X²) = 4 · 0.1 + 9 · 0.6 + 25 · 0.3 = 13.3

The desired variance: D(X) = M(X²) − [M(X)]² = 13.3 − (3.5)² = 1.05
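A sketch verifying the arithmetic of this example:

```python
xs = [2, 3, 5]
ps = [0.1, 0.6, 0.3]

m = sum(x * p for x, p in zip(xs, ps))       # M(X) = 3.5
m2 = sum(x * x * p for x, p in zip(xs, ps))  # M(X^2) = 13.3
d = m2 - m ** 2                              # D(X) = 13.3 - 12.25 = 1.05
print(m, m2, d)
```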

Properties of the variance

1. The variance of a constant is zero: D(C) = 0.

2. A constant factor can be taken out of the variance sign by squaring it: D(CX) = C² D(X).

3. The variance of a sum of independent random variables equals the sum of the variances of these variables: D(X1 + X2 + ... + Xn) = D(X1) + D(X2) + ... + D(Xn).

4. The variance of the binomial distribution equals the product of the number of trials and the probabilities of occurrence and non-occurrence of the event in one trial: D(X) = npq.

To estimate the scatter of the possible values of a random variable around its mean value, besides the variance some other characteristics are used as well. Among them is the standard deviation.

Definition. The standard deviation of the random variable X is the square root of the variance:

σ(X) = √D(X).

Example 8. The random variable X is given by the distribution law

X: 2, 3, 10
P: 0.1, 0.4, 0.5

Find the standard deviation σ(X).

Solution: We find the mathematical expectation of X:

M(X) = 2 · 0.1 + 3 · 0.4 + 10 · 0.5 = 6.4

We find the mathematical expectation of X²:

M(X²) = 2² · 0.1 + 3² · 0.4 + 10² · 0.5 = 54

We find the variance:

D(X) = M(X²) − [M(X)]² = 54 − 6.4² = 13.04

The desired standard deviation:

σ(X) = √D(X) = √13.04 ≈ 3.61
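A sketch verifying Example 8:

```python
import math

xs = [2, 3, 10]
ps = [0.1, 0.4, 0.5]

m = sum(x * p for x, p in zip(xs, ps))       # M(X) = 6.4
m2 = sum(x * x * p for x, p in zip(xs, ps))  # M(X^2) = 54
sigma = math.sqrt(m2 - m ** 2)               # sqrt(13.04), about 3.61
print(m, m2, sigma)
```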

Theorem. The standard deviation of the sum of a finite number of mutually independent random variables equals the square root of the sum of the squares of the standard deviations of these variables:

σ(X1 + ... + Xn) = √(σ²(X1) + ... + σ²(Xn)).

Random variables

The concept of a random variable is fundamental in probability theory and its applications. Examples of random variables are the number of points in a single throw of a die, the number of radium atoms that decay over a given period of time, the number of calls at a telephone exchange during a certain interval, the deviation from the nominal size of a part under a properly adjusted technological process, and so on.

Thus, a random variable is a variable quantity that, as a result of an experiment, can take one or another numerical value.

In what follows we shall consider two types of random variables: discrete and continuous.

1. Discrete random variables

Consider a random variable ξ whose possible values form a finite or infinite sequence of numbers x1, x2, ..., xn, ... . Let a function p(x) be given whose value at each point x = xi (i = 1, 2, ...) equals the probability that the variable takes the value xi.

Such a random variable is called discrete (discontinuous). The function p(x) is called the probability distribution law of the random variable, or briefly the distribution law. This function is defined at the points of the sequence x1, x2, ..., xn, ... . Since in each trial the random variable always takes some value from its range,

p(x1) + p(x2) + ... + p(xn) + ... = 1.

Example 1. The random variable is the number of points that come up in a single throw of a die. Its possible values are the numbers 1, 2, 3, 4, 5, and 6. The probability that it takes any of these values is one and the same, equal to 1/6. What is its distribution law? (Solution)

Example 2. Let the random variable be the number of occurrences of an event A in a single trial, with P(A) = p. The set of possible values consists of the two numbers 0 and 1: ξ = 0 if the event A did not occur, and ξ = 1 if the event A occurred. Thus, p(0) = 1 − p and p(1) = p.

Suppose that n independent trials are performed, in each of which the event A may or may not occur. Let the probability of the event A in each trial be p. Consider the random variable — the number of occurrences of the event A in n independent trials. Its range consists of all integers from 0 to n inclusive. The law of the probability distribution p(m) is determined by the Bernoulli formula (13'):

P_n(m) = C(n, m) p^m q^(n−m), where q = 1 − p.

The probability distribution law given by the Bernoulli formula is often called binomial, since P_n(m) is a term of the expansion of the binomial (p + q)^n.
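The Bernoulli formula above can be sketched directly; the function name is illustrative.

```python
# A minimal sketch of the Bernoulli (binomial) formula
# P_n(m) = C(n, m) * p^m * (1 - p)^(n - m).
from math import comb

def bernoulli(n, m, p):
    """Probability of exactly m occurrences of the event in n trials."""
    return comb(n, m) * p**m * (1 - p)**(n - m)

# Sanity check: the probabilities over all m = 0..n sum to 1.
n, p = 10, 1 / 6
probs = [bernoulli(n, m, p) for m in range(n + 1)]
print(round(sum(probs), 10))  # 1.0
```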

Let a random variable take any non-negative integer value k with probability

p(k) = λ^k e^(−λ) / k!,

where λ is some positive constant. In this case the random variable is said to be distributed according to the Poisson law. Note that for k = 0 one should put 0! = 1.

As we know, for large values of the number n of independent trials, the probability P_n(m) of m occurrences of the event A is more conveniently found not by the Bernoulli formula but by the Laplace formula [see formula (15)]. However, the latter gives large errors when the probability p of the event A in a single trial is small. In this case, to compute the probability P_n(m), it is convenient to use the Poisson formula, putting λ = np.

The Poisson formula can be obtained as a limiting case of the Bernoulli formula as the number of trials n increases without bound and the probability p tends to zero, with their product np held constant.

Example 3. A batch of 1000 parts arrived at a plant. The probability that a part is defective is 0.001. What is the probability that there will be 5 defective parts among those that arrived? (Solution)
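Example 3 can be checked numerically: the exact binomial probability is compared with the Poisson approximation for λ = np = 1000 · 0.001 = 1.

```python
# Example 3 checked numerically: exact binomial probability vs the
# Poisson approximation with lambda = n * p = 1.
from math import comb, exp, factorial

n, p, k = 1000, 0.001, 5
exact = comb(n, k) * p**k * (1 - p)**(n - k)   # Bernoulli formula
lam = n * p
approx = lam**k / factorial(k) * exp(-lam)     # Poisson formula

print(round(exact, 6), round(approx, 6))
```

The two values agree to about four decimal places, which illustrates why the Poisson formula is preferred here over the Laplace approximation.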

The Poisson distribution is often encountered in other problems as well. For example, if a telephone operator receives on average N calls per hour, then, as can be shown, the probability P(k) that she will receive k calls in one minute is expressed by the Poisson formula with λ = N/60.

If the possible values of the random variable form the finite sequence x1, x2, ..., xn, the probability distribution law of the random variable is specified in the form of the following table, in which p(x1) + p(x2) + ... + p(xn) = 1:

Values xi:           x1      x2     ...    xn
Probabilities p(xi): p(x1)   p(x2)  ...    p(xn)

This table is called the distribution series of the random variable. The function p(x) can be depicted visually as a graph. To do this, take a rectangular coordinate system in the plane.

Along the horizontal axis we plot the possible values of the random variable, and along the vertical axis the values of the function p(x). The graph of the function p(x) is shown in Fig. 2. If the points of this graph are connected by straight-line segments, the resulting figure is called the distribution polygon.

Example 4. Let the event A be the appearance of one point when a die is thrown; P(A) = 1/6. Consider the random variable — the number of occurrences of the event A in ten throws of the die. The values of the function p(x) (the distribution law) are given in a table with the rows "Values xi" and "Probabilities p(xi)".

The probabilities p(xi) are calculated by the Bernoulli formula with n = 10. For x > 6 they are practically equal to zero. The graph of the function p(x) is shown in Fig. 3.

The probability distribution function of a random variable and its properties

Consider a function F(x) defined on the entire numerical axis as follows: for each x, the value F(x) equals the probability that the discrete random variable ξ takes a value less than x, i.e.

F(x) = P(ξ < x).     (18)

This function is called the probability distribution function, or briefly the distribution function.

Example 1. Find the distribution function of the random variable given in Example 1 of Section 1. (Solution)

Example 2. Find the distribution function of the random variable given in Example 2 of Section 1. (Solution)

Knowing the distribution function F(x), it is easy to find the probability that the random variable satisfies the inequalities x1 ≤ ξ < x2.

Consider the event that the random variable takes a value less than x2. This event splits into the sum of two incompatible events: 1) the random variable takes values less than x1, i.e. ξ < x1; 2) the random variable takes values satisfying the inequalities x1 ≤ ξ < x2. Using the addition axiom, we get

P(ξ < x2) = P(ξ < x1) + P(x1 ≤ ξ < x2).

But by the definition of the distribution function F(x) [see formula (18)] we have P(ξ < x2) = F(x2) and P(ξ < x1) = F(x1); consequently,

P(x1 ≤ ξ < x2) = F(x2) − F(x1).     (19)

Thus, the probability that a discrete random variable falls into an interval equals the increment of the distribution function on that interval.
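Formula (19) can be checked on the die from Example 1: the probability of an interval computed directly coincides with the increment of F.

```python
# Sketch: for a discrete random variable, P(x1 <= X < x2) equals the
# increment F(x2) - F(x1) of the distribution function F(x) = P(X < x).
# The die from Example 1 (values 1..6, probability 1/6 each) is used.
values = [1, 2, 3, 4, 5, 6]
probs = [1 / 6] * 6

def F(x):
    """Distribution function F(x) = P(X < x)."""
    return sum(p for v, p in zip(values, probs) if v < x)

# Probability of 3 <= X < 5 two ways: directly and as an increment of F.
direct = sum(p for v, p in zip(values, probs) if 3 <= v < 5)
increment = F(5) - F(3)
print(round(direct, 6), round(increment, 6))  # 0.333333 0.333333
```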

Consider the main properties of the distribution function.

1°. The distribution function is non-decreasing.

Indeed, let x1 < x2. Since the probability of any event is non-negative, P(x1 ≤ ξ < x2) ≥ 0. Therefore it follows from formula (19) that F(x2) ≥ F(x1).

2°. The values of the distribution function satisfy the inequalities 0 ≤ F(x) ≤ 1.

This property follows from the fact that F(x) is defined as a probability [see formula (18)]. It is clear that F(−∞) = 0 and F(+∞) = 1.

3°. The probability that a discrete random variable takes one of its possible values xi equals the jump of the distribution function at the point xi.

Indeed, let xi be a value taken by the discrete random variable, and let Δx > 0. Setting x1 = xi and x2 = xi + Δx in formula (19), we get

P(xi ≤ ξ < xi + Δx) = F(xi + Δx) − F(xi).     (20)

In the limit as Δx → 0, instead of the probability of the random variable falling into the interval, we obtain the probability that the variable takes the value xi:

lim (Δx → 0) P(xi ≤ ξ < xi + Δx) = p(xi).

On the other hand, lim (Δx → 0) F(xi + Δx) = F(xi + 0), i.e. the limit of the function F(x) from the right, since Δx > 0. Consequently, in the limit, formula (20) takes the form

p(xi) = F(xi + 0) − F(xi),

i.e. the value p(xi) equals the jump of the function** F(x) at the point xi. This property is clearly illustrated in Fig. 4 and Fig. 5.

Continuous random variables

Besides discrete random variables, whose possible values form a finite or infinite sequence of numbers and do not completely fill any interval, there often occur random variables whose possible values form a certain interval. An example of such a random variable is the deviation from the nominal size of a part under a properly adjusted technological process. Random variables of this kind cannot be specified by a probability distribution law p(x). However, they can be specified by the probability distribution function F(x). This function is defined in the same way as in the case of a discrete random variable:

Thus, here the function F(x) is defined on the entire numerical axis, and its value at the point x equals the probability that the random variable takes a value less than x.

Formula (19) and properties 1 ° and 2 ° are valid for the distribution function of any random variable. The proof is carried out similarly to the case of a discrete value.

A random variable is called continuous if there exists a non-negative piecewise continuous function* p(x) satisfying for any value x the equality

F(x) = ∫ from −∞ to x of p(t) dt.     (22)

The function p(x) is called the probability distribution density, or briefly the distribution density. If x1 < x2, then on the basis of formulas (20) and (22) we have

P(x1 ≤ ξ < x2) = ∫ from x1 to x2 of p(t) dt.     (23)

Based on the geometric meaning of the integral as an area, one can say that the probability of fulfilling the inequalities x1 ≤ ξ < x2 equals the area of the curvilinear trapezoid with base [x1, x2], bounded above by the curve y = p(x) (Fig. 6).

Since F(+∞) = 1, on the basis of formula (22)

∫ from −∞ to +∞ of p(x) dx = 1.

Using formula (22), we find the density as the derivative of an integral with a variable upper limit, assuming the distribution density continuous**:

p(x) = F′(x).

Note that for a continuous random variable the distribution function F(x) is continuous at every point x where the density is continuous. This follows from the fact that F(x) is differentiable at these points.

Based on formula (23), setting x1 = x, x2 = x + Δx, we have

P(x ≤ ξ < x + Δx) = F(x + Δx) − F(x).

Due to the continuity of the function F(x) we get that

lim (Δx → 0) P(x ≤ ξ < x + Δx) = 0.

Hence P(ξ = x) = 0.

Thus, the probability that a continuous random variable takes any single separate value x is zero.

Hence it follows that the events consisting in the fulfillment of each of the inequalities

x1 ≤ ξ ≤ x2,   x1 < ξ ≤ x2,   x1 ≤ ξ < x2,   x1 < ξ < x2

have the same probability, i.e.

P(x1 ≤ ξ ≤ x2) = P(x1 < ξ ≤ x2) = P(x1 ≤ ξ < x2) = P(x1 < ξ < x2).

Indeed, for example, P(x1 ≤ ξ ≤ x2) = P(x1 ≤ ξ < x2) + P(ξ = x2) = P(x1 ≤ ξ < x2), since P(ξ = x2) = 0.

Remark. As we know, if an event is impossible, the probability of its occurrence is zero. Under the classical definition of probability, where the number of test outcomes is finite, the converse proposition also holds: if the probability of an event is zero, the event is impossible, since in this case none of the test outcomes favors it. In the case of a continuous random variable, the number of its possible values is infinite. The probability that this variable takes any particular value x1, as we saw, equals zero. However, it does not follow from this that the event is impossible, since as a result of the test the random variable may, in particular, take the value x1. Therefore, in the case of a continuous random variable, it makes sense to speak of the probability of the random variable falling into an interval, and not of the probability that it takes some particular value.

For example, in the manufacture of a roller we are not interested in the probability that its diameter will exactly equal the nominal value. What matters to us is the probability that the roller diameter does not leave the tolerance field.

Example. The distribution density of a continuous random variable is given as follows:

The graph of the function is shown in Fig. 7. Determine the probability that the random variable takes a value satisfying the given inequalities, and find the distribution function of the given random variable. (Solution)
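The example's density is not reproduced in the text, so the following sketch uses an illustrative density p(x) = 2x on [0, 1] (zero elsewhere, an assumption for demonstration) to show how F(x) and an interval probability are obtained from a density by integration.

```python
# Illustrative density p(x) = 2x on [0, 1], zero elsewhere (an assumed
# example, not the one from the text). F(x) and interval probabilities
# are computed by numerical integration of the density.
def p(x):
    return 2 * x if 0 <= x <= 1 else 0.0

def integrate(f, a, b, n=100_000):
    """Simple midpoint-rule numerical integration."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

F_half = integrate(p, 0, 0.5)      # F(0.5) = 0.5^2 = 0.25
prob = integrate(p, 0.25, 0.75)    # P(0.25 < X <= 0.75) = 0.75^2 - 0.25^2
print(round(F_half, 4), round(prob, 4))  # 0.25 0.5
```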

The next two sections are devoted to frequently occurring distributions of continuous random variables: the uniform and the normal distribution.

* A function is called piecewise continuous on the entire numerical axis if on any segment it is either continuous or has a finite number of discontinuities of the first kind.

** The rule for differentiating an integral with a variable upper limit, derived for the case of a finite lower limit, remains valid for integrals with an infinite lower limit. Indeed,

F(x) = ∫ from −∞ to a of p(t) dt + ∫ from a to x of p(t) dt,

and since the integral

∫ from −∞ to a of p(t) dt

is a constant value, differentiating the second term gives F′(x) = p(x).

Random variables

By random variables we understand numerical characteristics of random events. In other words, random variables are numerical results of experiments whose values cannot (at the present moment) be predicted in advance.

For example, the following values \u200b\u200bcan be viewed as random:

2. The percentage of boys among the children born in a given maternity hospital on some specific day.

3. The number and area of spots on the Sun visible at some observatory during a certain day.

4. The number of students who were late for a given lecture.

5. The dollar exchange rate on a stock exchange (say, on the MICEX), although it may be not as "random" as it seems to many.

6. The number of equipment failures on a given day at a specific enterprise.

Random variables are divided into discrete and continuous, depending on whether the set of possible values of the corresponding characteristic is discrete or continuous.

This division is quite conditional, but it is useful when choosing adequate research methods. If the number of possible values of a random variable is finite or comparable with the set of all natural numbers (i.e., they can be enumerated), then the random variable is called discrete. Otherwise it is called continuous, although it is implicitly assumed that genuinely continuous random variables take their values in some simple numerical set (a segment or an interval). For example, of the random variables listed above, those at numbers 4 and 6 are discrete, and those at numbers 1 and 3 (the area of the spots) are continuous. Sometimes a random variable is of mixed type. Such, for example, is the exchange rate of the dollar (or some other currency), which actually takes only a discrete set of values but which it turns out convenient to treat as if its set of values were continuous.

Random variables can be specified in different ways.

Discrete random variables are usually specified by their distribution law. Here each possible value x1, x2, ... of the random variable X is matched with the probability p1, p2, ... of that value. The result is a table of two rows:

x:  x1  x2  ...
p:  p1  p2  ...

This is the distribution law of the random variable.

Continuous random variables cannot be specified by a distribution law, since by their very definition their values cannot be enumerated, and therefore specification in the form of a table is excluded. However, for continuous random variables there is another way of specification (applicable, incidentally, to discrete variables as well): the distribution function

F(x) = P(X < x),

equal to the probability of the event that the random variable X takes a value less than the given number x.

Often, instead of the distribution function it is convenient to use another function — the density f(x) of the distribution of the random variable X. It is sometimes called the differential distribution function, and F(x) in this terminology is called the integral distribution function. These two functions mutually determine each other by the following formulas:

f(x) = F′(x),   F(x) = ∫ from −∞ to x of f(t) dt.

If the random variable is discrete, the concept of a distribution function also makes sense for it; in this case the graph of the distribution function consists of horizontal sections, each of which lies above the previous one by an amount equal to pi.

Important examples of discrete variables are, for instance, binomially distributed variables (the Bernoulli distribution), for which

pk = C(n, k) p^k (1 − p)^(n−k) = n! / (k!(n − k)!) · p^k (1 − p)^(n−k),

where p is the probability of the individual event (sometimes conventionally called the "probability of success"). The results of a series of consecutive homogeneous trials (the Bernoulli scheme) are distributed in this way. The limiting case of the binomial distribution (as the number of trials increases) is the Poisson distribution, for which

pk = λ^k / k! · exp(−λ),

where λ > 0 is some positive parameter.

The simplest example of a continuous distribution is the uniform distribution. On the segment [a, b] it has a constant distribution density equal to 1/(b − a), and outside this segment the density is 0.

An extremely important example of a continuous distribution is the normal distribution. It is defined by two parameters m and σ (the mathematical expectation and the standard deviation — see below), and its distribution density has the form:

f(x) = 1/(σ√(2π)) · exp(−(x − m)² / (2σ²)).

The fundamental role of the normal distribution in probability theory is due to the fact that, by virtue of the central limit theorem (CLT), the sum of a large number of random variables that are pairwise independent (on the concept of independence of random variables, see below) or weakly dependent turns out to be approximately distributed according to the normal law. It follows that a random variable whose randomness is caused by the superposition of a large number of weakly dependent random factors can be regarded as approximately normally distributed (regardless of how its factors were distributed). In other words, the normal distribution law is highly universal.
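The CLT claim can be illustrated by simulation: a sum of many independent uniform variables has mean and variance matching the normal parameters predicted by the theorem (n/2 and n/12 for n summands uniform on [0, 1]); the sample sizes here are arbitrary choices.

```python
# Sketch of the CLT claim: sums of many independent uniform variables
# cluster around the predicted normal parameters. For 48 summands
# uniform on [0, 1): expectation 48/2 = 24, variance 48/12 = 4.
import random

random.seed(0)
n_terms, n_samples = 48, 20_000
sums = [sum(random.random() for _ in range(n_terms)) for _ in range(n_samples)]

mean = sum(sums) / n_samples
var = sum((s - mean) ** 2 for s in sums) / n_samples
print(round(mean, 1), round(var, 1))  # should be close to 24 and 4
```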

There are several numerical characteristics that are convenient to use when studying random variables. Among them we single out the mathematical expectation

M(X) = Σ xi pi (for a discrete variable), M(X) = ∫ x f(x) dx (for a continuous one),

equal to the average value of the random variable; the variance

D(X) = M(X − M(X))²,

equal to the mathematical expectation of the squared deviation of the random variable from its average value; and one more quantity, convenient in practice (of the same dimension as the original random variable):

σ(X) = √D(X),

called the standard deviation. We shall assume (without stipulating it in what follows) that all the integrals written out exist (i.e., converge on the entire numerical axis). As is known, the variance and the standard deviation characterize the degree of scattering of a random variable around its average value: the smaller the variance, the more closely the values of the random variable are grouped around its average value.

For example, the mathematical expectation for the Poisson distribution equals λ, for the uniform distribution it equals (a + b)/2, and for the normal distribution it equals m. The variance for the Poisson distribution equals λ, for the uniform distribution (b − a)²/12, and for the normal distribution σ². The following properties of the mathematical expectation and the variance will be used:

1. M(X + Y) = M(X) + M(Y).

3. D(cX) = c²D(X), where c is an arbitrary constant number.

4. D(X + a) = D(X) for an arbitrary constant (non-random) value a.

The random variable η = U − MU is called centered. From property 1 it follows that Mη = M(U − MU) = M(U) − M(U) = 0, that is, its average value is 0 (which is how its name arises). At the same time, by property 4, we have D(η) = D(U).

There is also a useful relation, convenient in practice for calculating the variance and quantities associated with it:

5. D(X) = M(X²) − M(X)².

Random variables X and Y are called independent if the events {X < x} and {Y < y} are independent for arbitrary values x and y. For example, the results of measuring the voltage in the power grid and the height of the enterprise's chief power engineer will (apparently...) be independent. But the power consumed by this grid and the salary of the chief power engineer at the enterprise can no longer be considered independent.

If the random variables X and Y are independent, then the following properties hold (which for arbitrary random variables may fail):

5. M(XY) = M(X)M(Y).

6. D(X + Y) = D(X) + D(Y).
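Properties 5 and 6 for independent variables can be checked by simulation; the chosen uniform distributions and sample size are arbitrary illustrations.

```python
# Sketch checking M(XY) = M(X)M(Y) and D(X+Y) = D(X) + D(Y) for
# independently generated X ~ U(0, 1) and Y ~ U(0, 2).
import random

random.seed(1)
n = 100_000
xs = [random.uniform(0, 1) for _ in range(n)]
ys = [random.uniform(0, 2) for _ in range(n)]  # generated independently of xs

def mean(a):
    return sum(a) / len(a)

def var(a):
    m = mean(a)
    return sum((v - m) ** 2 for v in a) / len(a)

lhs_m = mean([x * y for x, y in zip(xs, ys)])
rhs_m = mean(xs) * mean(ys)
lhs_d = var([x + y for x, y in zip(xs, ys)])
rhs_d = var(xs) + var(ys)
print(round(lhs_m, 2), round(rhs_m, 2))  # both close to 0.5
print(round(lhs_d, 2), round(rhs_d, 2))  # both close to 1/12 + 4/12
```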

Besides individual random variables X, Y, ..., systems of random variables are studied. For example, a pair (X, Y) of random variables can be regarded as a new random variable whose values are two-dimensional vectors. Similarly, one can consider systems of a larger number of random variables, called multidimensional random variables. Such systems of variables are likewise specified by their distribution function. For example, for a system of two random variables this function has the form

F(x, y) = P(X < x, Y < y),

that is, it equals the probability of the event that the random variable X takes a value less than the given number x and the random variable Y takes a value less than the given number y. This function is also called the joint distribution function of the random variables X and Y. One can also consider the mean vector — the natural analogue of the mathematical expectation — but instead of the variance one has to study several numerical characteristics, called second-order moments. These are, first, the two individual variances DX and DY of the random variables X and Y considered separately, and, second, the covariance, considered in more detail below.

If the random variables X and Y are independent, then

F(x, y) = FX(x) FY(y),

the product of the distribution functions of X and Y, and therefore the study of a pair of independent random variables largely reduces to the study of X and Y separately.

Random variables

Above we considered experiments whose results are random events. However, there often arises a need to describe the results of an experiment quantitatively, in the form of a certain quantity called a random variable. The random variable is the second (after the random event) principal object of study in probability theory, and it provides a more general way of describing an experiment with a random outcome than a set of random events.

In considering experiments with a random outcome we have already dealt with random variables. Thus, the number of successes in a series of trials is an example of a random variable. Other examples of random variables are: the number of calls at a telephone exchange per unit of time; the waiting time for the next call; the number of particles with a given energy in the systems of particles considered in statistical physics; the average daily temperature in a given area, and so on.

A random variable is characterized by the fact that its exact value cannot be predicted in advance, but, on the other hand, the set of its possible values is usually known. Thus, for the number of successes in a sequence of n trials this set is finite, since the number of successes can take the values 0, 1, ..., n. The set of values of a random variable may coincide with the real half-axis, as in the case of a waiting time, etc.

Let us consider examples of experiments with a random outcome which are usually described by random events, and introduce an equivalent description by specifying a random variable.

1). Let the result of the experiment be the event A or the event Ā. Then this experiment can be matched with a random variable ξ that takes two values, for example 0 and 1, with probabilities P(Ā) and P(A), where the equality P(A) + P(Ā) = 1 holds. Thus, the experiment is characterized either by two outcomes with their probabilities, or, equivalently, by a random variable taking two values with the same probabilities.

2). Consider the experiment of throwing a die. Here the outcome of the experiment can be one of the events A1, ..., A6, where Ai is the appearance of the face with number i, and the probability P(Ai) = 1/6. We introduce an equivalent description of this experiment by a random variable ξ that can take the values 1, 2, ..., 6 with probabilities P(ξ = i) = 1/6.

3). A sequence of n independent trials is characterized by a complete group of incompatible events A0, A1, ..., An, where Am is the event consisting in the occurrence of m successes in the series of experiments, and the probability of the event Am is determined by the Bernoulli formula. Here one can introduce a random variable ξ — the number of successes — which takes the values 0, 1, ..., n with the corresponding probabilities. Thus, the sequence of independent trials is characterized either by the random events Am with their probabilities, or by the random variable ξ with the probabilities of its taking the values m = 0, 1, ..., n.

4). However, not every experiment with a random outcome admits such a simple correspondence between a random variable and a set of random events. For example, consider an experiment in which a point is thrown at random onto a segment. It is natural to introduce a random variable ξ — the coordinate of the point on the segment where it lands. Thus we may speak of the random event {ξ = x}, where x is a number from the segment. However, the probability of this event is zero. One can proceed differently: partition the segment into a finite number of non-overlapping subsegments and consider the random events consisting in the random variable taking values from one of these intervals. Then the corresponding probabilities are finite values. However, this method has a significant drawback, since the subsegments are chosen arbitrarily. To eliminate this drawback, consider segments of the form {ξ < x}, where x is a variable. Then the corresponding probability is a function of the argument x. This complicates the mathematical description of the random variable, but the description (29.1) becomes unique and eliminates the ambiguity of the choice of segments.

For each of the examples considered, it is easy to determine the probability space (Ω, F, P), where Ω is the space of elementary events, F is the σ-algebra of events (subsets of Ω), and P is the probability defined for any event from F. For example, in the last example F is the σ-algebra of all segments contained in the given segment.

The examples considered lead to the following definition of a random variable.

Let (Ω, F, P) be a probability space. A random variable is a single-valued real function ξ = ξ(ω) defined on Ω for which the set of elementary events of the form {ω : ξ(ω) ≤ x} is an event (i.e., belongs to F) for each real number x.

Thus, the definition requires that for each real x the set {ω : ξ(ω) ≤ x} belongs to F, and this condition ensures that the probability of this event is defined for each x. This event is customarily denoted by the shorter notation {ξ ≤ x}.

Probability distribution function

The function

F(x) = P(ξ ≤ x)     (30.1)

is called the probability distribution function of the random variable ξ.

The function F(x) is sometimes called briefly the distribution function, and also the integral law of the probability distribution of the random variable. The function F(x) is a complete characteristic of the random variable: it is a mathematical description of all the properties of the random variable, and no more detailed way of describing these properties exists.

We note the following important feature of definition (30.1). Often the function is defined differently:

F(x) = P(ξ < x).     (30.2)

According to (30.1), the function is continuous from the right. This question will be discussed below. If definition (30.2) is used, the function is continuous from the left, which is a consequence of the strict inequality in relation (30.2). Functions (30.1) and (30.2) are equivalent descriptions of a random variable, since it does not matter which definition is used, either in studying theoretical questions or in solving problems. For definiteness, in what follows we shall use only definition (30.1).

Consider an example of constructing the graph of the function F(x). Let a random variable ξ take the values x1, x2, x3 (x1 < x2 < x3) with probabilities p1, p2, p3. All other values the random variable takes with zero probability: P(ξ = x) = 0 for any x ≠ xi; or, as they say, values other than x1, x2, x3 the random variable cannot take. The values of F(x) are found successively on the intervals into which the points x1, x2, x3 divide the numerical axis. On the first interval, x < x1, the distribution function F(x) = 0. If x1 ≤ x < x2, then F(x) = P(ξ ≤ x) = P(ξ = x1) = p1, since the events {ξ = x1} and {ξ = x2} are incompatible and here the event {ξ = x2} contributes nothing. Continuing in the same way, for x2 ≤ x < x3 we obtain F(x) = p1 + p2, and for x ≥ x3 finally F(x) = p1 + p2 + p3 = 1. The results of the calculations are presented in Fig. 30.1, the graph of the function F(x). At the points of discontinuity, the continuity of the function from the right is indicated.

The main properties of the probability distribution function

Consider the main properties of the distribution function, which follow directly from the definition:

1. We introduce the notation F(−∞) = lim (x → −∞) F(x). Then F(−∞) = 0 follows from the definition. Here the expression {ξ ≤ −∞} is regarded as an impossible event with zero probability.

2. Let F(+∞) = lim (x → +∞) F(x). Then F(+∞) = 1 follows from the definition of the function F(x). The random event {ξ < +∞} is certain and its probability equals one.

3. The probability of the random event consisting in the random variable taking a value from the interval (x1, x2], where x1 < x2, is determined through the function F(x) by the following equality:

P(x1 < ξ ≤ x2) = F(x2) − F(x1).     (31.2)

To prove this equality, consider the relation

{ξ ≤ x2} = {ξ ≤ x1} ∪ {x1 < ξ ≤ x2}.     (31.3)

The events on the right are incompatible; therefore, by the addition formula for probabilities, it follows from (31.3) that P(ξ ≤ x2) = P(ξ ≤ x1) + P(x1 < ξ ≤ x2), which coincides with formula (31.2), since F(x2) = P(ξ ≤ x2) and F(x1) = P(ξ ≤ x1).

4. The function F(x) is non-decreasing. For the proof, consider x1 < x2. Equality (31.2) holds, and its left-hand side P(x1 < ξ ≤ x2) ≥ 0, since the probability of taking values from the interval is non-negative. Therefore the right-hand side of equality (31.2) is also non-negative: F(x2) − F(x1) ≥ 0, or F(x2) ≥ F(x1). This was obtained under the condition x1 < x2; consequently, F(x) is a non-decreasing function.

5. The function F(x) is continuous from the right at each point x, i.e.

lim (n → ∞) F(xn) = F(x),

where xn is any sequence tending to x from the right, i.e. xn > x and xn → x.

To prove this, represent F(xn) in the form

F(xn) = F(x) + P(x < ξ ≤ xn).

Now, on the basis of the axiom of countable additivity, the probability P(x < ξ ≤ xn) tends to zero as xn → x, which proves the continuity of the function from the right.

Thus, every probability distribution function has properties 1-5. The converse statement is also true: if a function satisfies conditions 1-5, it can be regarded as the distribution function of some random variable.

Discrete random variable probability distribution function

A random variable is called discrete if the set of its values is finite or countable.

For a complete probabilistic description of a discrete random variable ξ taking the values x1, x2, ..., it suffices to specify the probabilities pi = P(ξ = xi). If the pi are given, then the probability distribution function of the discrete random variable can be represented as:

F(x) = Σ over {i : xi ≤ x} of pi.

Here the summation is over all indices i satisfying the condition xi ≤ x.

The probability distribution function of a discrete random variable is sometimes represented through the so-called unit step function

u(x) = 1 for x ≥ 0,  u(x) = 0 for x < 0.

In this case it takes the form

F(x) = Σ from i = 1 to n of pi u(x − xi),     (32.4)

where the upper summation limit n in (32.4) is finite if the random variable takes a finite set of values, and is set equal to ∞ if the random variable takes a countable set of values.

An example of constructing the graph of the probability distribution function of a discrete random variable was considered in Section 30.
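The step representation (32.4) can be sketched directly; the values and probabilities below are an arbitrary illustration, and the right-continuous convention F(x) = P(ξ ≤ x) of this section is used.

```python
# Sketch of formula (32.4): F(x) as a sum of unit-step functions,
# F(x) = sum_i p_i * u(x - x_i), with F right-continuous.
def u(x):
    """Unit step function: 0 for x < 0, 1 for x >= 0."""
    return 1.0 if x >= 0 else 0.0

def make_F(values, probs):
    return lambda x: sum(p * u(x - v) for v, p in zip(values, probs))

# Illustrative discrete variable: values 1, 2, 3 with p = 0.2, 0.3, 0.5.
F = make_F([1, 2, 3], [0.2, 0.3, 0.5])
print(round(F(0.5), 2), round(F(1), 2), round(F(2.5), 2), round(F(3), 2))
# 0.0 0.2 0.5 1.0
```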

Probability distribution density

Let ξ be a random variable whose probability distribution function F(x) is differentiable; then the function f(x) = F′(x) is called the probability distribution density (or probability density) of the random variable, and the random variable itself is called a continuous random variable.

Consider the main properties of probability density.

1. From the definition of the derivative follows the equality:

f(x) = lim (Δx → 0) [F(x + Δx) − F(x)] / Δx.     (33.2)

By the properties of the function F(x), the equality F(x + Δx) − F(x) = P(x < ξ ≤ x + Δx) holds. Therefore (33.2) takes the form:

f(x) = lim (Δx → 0) P(x < ξ ≤ x + Δx) / Δx.     (33.3)

This relation explains the name of the function. Indeed, according to (33.3), the function f(x) is the probability per unit length of interval at the point x. Thus, the probability density defined by relation (33.3) is analogous to the densities of other quantities known in physics, such as current density, density of matter, charge density, etc.

2. Since F(x) is a non-decreasing function, its derivative is a non-negative function: f(x) ≥ 0.

3. From (33.1), since F(−∞) = 0, the following equality holds:

F(x) = ∫ from −∞ to x of f(t) dt.     (33.5)

4. Since F(+∞) = 1, relation (33.5) yields the equality

∫ from −∞ to +∞ of f(x) dx = 1,     (33.6)

which is called the normalization condition. Its left-hand side is the probability of the certain event −∞ < ξ < +∞.

5. Let x1 < x2; then from (33.5) it follows that

P(x1 < ξ ≤ x2) = F(x2) − F(x1) = ∫ from x1 to x2 of f(x) dx.     (33.7)

This relation is important for applications, since it allows one to calculate the probability through the probability density or through the probability distribution function. If we set x1 = −∞ and x2 = +∞, relation (33.6) follows from (33.7).

Fig. 33.1 shows examples of graphs of the distribution function and the probability density.

Note that the probability distribution density may have several maxima. The value of the argument at which the density has a maximum is called the mode of the distribution of the random variable. If the density has more than one mode, it is called multimodal.

Distribution density of a discrete random variable


Let a random value takes values \u200b\u200bwith probabilities,. Then its probability distribution function is where - the function of a single jump. It is possible to determine the density of the probability of a random variable by its distribution function, according to equality. However, there are mathematical difficulties associated with the fact that the function of a single jump, which is included in (34.1), has a gap of the first kind with. Therefore, at the point there is no derived function.

To overcome this difficulty, the δ-function is introduced. The unit-jump function can be represented through the δ-function by the equality

u(x) = ∫_{−∞}^{x} δ(t) dt.   (34.2)

Then, formally taking the derivative, the probability density of a discrete random variable is determined from (34.1) as the derivative of the function F(x):

f(x) = Σ_k p_k · δ(x − x_k).   (34.4)

The function (34.4) has all the properties of a probability density. Consider an example. Let a discrete random variable take the values x1 and x2 with probabilities p1 and p2, and let the segment [a, b] contain the point x1 but not the point x2. Then the probability that the random variable takes a value in the segment [a, b] can be calculated from the general properties of the density by the formula

P(a ≤ X ≤ b) = ∫_a^b (p1·δ(x − x1) + p2·δ(x − x2)) dx = p1.

Here ∫_a^b δ(x − x1) dx = 1, since the singular point x1 of the δ-function lies inside the integration domain, while ∫_a^b δ(x − x2) dx = 0, since the singular point x2 lies outside it. Thus P(a ≤ X ≤ b) = p1.

For the function (34.4) the normalization condition is also satisfied:

∫_{−∞}^{+∞} f(x) dx = Σ_k p_k ∫_{−∞}^{+∞} δ(x − x_k) dx = Σ_k p_k = 1.

Note that in mathematics a record of the form (34.4) is considered incorrect (improper), while the record (34.2) is correct. This is because the δ-function with zero argument, δ(0), does not exist. On the other hand, in (34.2) the δ-function appears under an integral, and the right-hand side of (34.2) is finite for any x, i.e. the integral of the δ-function exists. Despite this, in physics, engineering and other applications of probability theory the representation of the density in the form (34.4) is often used: first, it allows true results to be obtained by applying the properties of the δ-function, and second, it has an obvious physical interpretation.
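In code, integrating the density (34.4) over a segment reduces to summing the probabilities p_k of those points x_k that fall inside the segment. The values and probabilities below are an assumed illustration:

```python
# Assumed discrete law: values x_k with probabilities p_k.
xs = [0, 1, 2, 3]
ps = [0.1, 0.2, 0.3, 0.4]
assert abs(sum(ps) - 1.0) < 1e-12           # normalization condition

def prob(a, b):
    """P(a <= X <= b): the integral of sum(p_k * delta(x - x_k)) over [a, b]
    collects exactly the p_k whose singular points x_k lie inside [a, b]."""
    return sum(p for x, p in zip(xs, ps) if a <= x <= b)

assert abs(prob(0.5, 2.5) - 0.5) < 1e-9     # only x = 1 and x = 2 contribute
```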

Examples of densities and probability distribution functions

35.1. A random variable is called uniformly distributed on the segment [a, b] if its probability density is

f(x) = c for a ≤ x ≤ b, and f(x) = 0 otherwise,   (35.1)

where c is a number determined from the normalization condition:

∫_{−∞}^{+∞} f(x) dx = 1.   (35.2)

Substituting (35.1) into (35.2) leads to the equality c·(b − a) = 1, whose solution with respect to c is c = 1/(b − a).

The probability distribution function of a uniformly distributed random variable can be found by formula (33.5), which determines it through the density:

F(x) = 0 for x < a, F(x) = (x − a)/(b − a) for a ≤ x ≤ b, and F(x) = 1 for x > b.

Fig. 35.1 shows graphs of the functions F(x) and f(x) of a uniformly distributed random variable.
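A minimal sketch of the uniform law (the endpoints a = 2 and b = 5 are assumed for illustration):

```python
# Uniform law on [a, b]: density c = 1/(b - a), piecewise distribution function.
a, b = 2.0, 5.0
c = 1.0 / (b - a)

def f(x):
    return c if a <= x <= b else 0.0

def F(x):
    if x < a:
        return 0.0
    if x > b:
        return 1.0
    return (x - a) / (b - a)

assert abs(c * (b - a) - 1.0) < 1e-12   # normalization: c * (b - a) = 1
assert abs(F(3.5) - 0.5) < 1e-12        # midpoint of [2, 5]
```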

35.2. A random variable is called normal (or Gaussian) if its probability density is

f(x) = (1 / (σ√(2π))) · exp(−(x − m)² / (2σ²)),   (35.4)

where m and σ > 0 are numbers called the parameters of the function. At x = m the function f(x) takes its maximum value 1/(σ√(2π)). The parameter σ has the meaning of an effective width. Besides this geometric interpretation, the parameters m and σ also have a probabilistic interpretation, which will be considered later.

From (35.4) follows an expression for the probability distribution function:

F(x) = 1/2 + Φ((x − m)/σ),

where Φ(x) = (1/√(2π)) ∫_0^x exp(−t²/2) dt is the Laplace function. Fig. 35.2 presents graphs of the functions F(x) and f(x) of a normal random variable. To indicate that a random variable X has a normal distribution with parameters m and σ, the notation X ~ N(m, σ) is often used.
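The normal distribution function can be evaluated through the error function from the standard library. Here the Laplace function is taken in the convention Φ(t) = (1/√(2π)) ∫_0^t exp(−u²/2) du, which equals erf(t/√2)/2; the parameter values m = 1 and σ = 2 are assumed for illustration:

```python
import math

def laplace(t):
    # Laplace function in the convention Phi(t) = erf(t / sqrt(2)) / 2.
    return 0.5 * math.erf(t / math.sqrt(2.0))

def F(x, m, sigma):
    # Normal distribution function: F(x) = 1/2 + Phi((x - m) / sigma).
    return 0.5 + laplace((x - m) / sigma)

m, sigma = 1.0, 2.0                       # assumed parameter values
assert abs(F(m, m, sigma) - 0.5) < 1e-12  # the density is symmetric about x = m
p = F(m + sigma, m, sigma) - F(m - sigma, m, sigma)
assert abs(p - 0.6826894921) < 1e-6       # P(|X - m| < sigma) ~ 68.27%
```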

35.3. A random variable has the Cauchy probability density if

f(x) = a / (π(a² + x²)),

where a > 0 is a parameter. To this density corresponds the distribution function

F(x) = 1/2 + (1/π)·arctan(x/a).

35.4. A random variable is called distributed according to the exponential law if its probability density has the form

f(x) = λ·exp(−λx) for x ≥ 0, and f(x) = 0 for x < 0,   (35.8)

where λ > 0. Let us determine its probability distribution function. For x < 0, from (35.8) it follows that F(x) = 0. If x ≥ 0, then

F(x) = ∫_0^x λ·exp(−λt) dt = 1 − exp(−λx).
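A sketch checking that integrating the exponential density over [0, x] reproduces F(x) = 1 − exp(−λx); the rate λ = 0.5 is an assumed value:

```python
import math

lam = 0.5  # assumed rate parameter

def f(x):
    return lam * math.exp(-lam * x) if x >= 0 else 0.0

def F(x):
    return 1.0 - math.exp(-lam * x) if x >= 0 else 0.0

# Midpoint-rule integral of f over [0, x] should reproduce F(x):
x, n = 3.0, 20000
h = x / n
integral = sum(f((i + 0.5) * h) for i in range(n)) * h
assert abs(integral - F(x)) < 1e-6
```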

35.5. The Rayleigh probability distribution of a random variable is defined by a density of the form

f(x) = (x/σ²)·exp(−x²/(2σ²)) for x ≥ 0, and f(x) = 0 for x < 0.

To this density corresponds the probability distribution function F(x) = 0 for x < 0 and, for x ≥ 0, F(x) = 1 − exp(−x²/(2σ²)).

35.6. Consider an example of constructing the distribution function and density of a discrete random variable. Let the random variable X be the number of successes in a sequence of n independent trials. Then X takes the values k = 0, 1, …, n with probabilities determined by the Bernoulli formula:

P_n(k) = C(n, k) · p^k · q^(n−k),

where p and q = 1 − p are the probabilities of success and failure in a single trial. Thus the probability distribution function of the random variable has the form

F(x) = Σ_{k=0}^{n} P_n(k) · u(x − k),

where u(x) is the unit-jump function. Hence the distribution density:

f(x) = Σ_{k=0}^{n} P_n(k) · δ(x − k),

where δ(x) is the delta function.
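The probabilities P_n(k) of example 35.6 can be computed directly from the Bernoulli formula (the values n = 4 and p = 0.3 are assumed for illustration):

```python
from math import comb

def bernoulli_pmf(n, k, p):
    # Bernoulli formula: P_n(k) = C(n, k) * p**k * q**(n - k), q = 1 - p.
    return comb(n, k) * p ** k * (1.0 - p) ** (n - k)

n, p = 4, 0.3                                   # assumed values
probs = [bernoulli_pmf(n, k, p) for k in range(n + 1)]
assert abs(sum(probs) - 1.0) < 1e-12            # normalization condition
assert abs(probs[0] - 0.7 ** 4) < 1e-12         # k = 0: failure in every trial
```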

Singular random variables

In addition to discrete and continuous random variables there are also so-called singular random variables. These random variables are characterized by the fact that their probability distribution function F(x) is continuous, but its points of growth form a set of measure zero. A point of growth of the function F(x) is a value of its argument such that F increases in every neighborhood of that value.

Thus F'(x) = 0 almost everywhere on the domain of definition. A function satisfying this condition is also called singular. An example of a singular distribution function is the Cantor curve (Fig. 36.1), which is constructed as follows. Set F(x) = 0 for x ≤ 0 and F(x) = 1 for x ≥ 1. Then the interval [0, 1] is divided into three equal parts (segments), and on the inner segment [1/3, 2/3] the value F(x) = 1/2 is assigned: the half-sum of the values already defined on the nearest segments to the right and to the left. At this stage the function is defined for x ≤ 0, where its value is 0, and for x ≥ 1, where its value is 1; the half-sum of these values, 1/2, determines F on the inner segment. The segments [0, 1/3] and [2/3, 1] are then considered; each of them is divided into three equal segments, and on the inner segments the function is defined as the half-sum of the nearest defined values of the function to the right and to the left: F(x) = 1/4 on [1/9, 2/9], the half-sum of 0 and 1/2, and similarly F(x) = 3/4 on [7/9, 8/9]. The function is then defined in the same way on the remaining intervals, etc.

...

A random variable is called discrete if the totality of all its possible values is a finite or infinite but necessarily countable set, i.e. a set all of whose elements can be (at least theoretically) numbered and written out in an appropriate sequence.

Such random variables from the list above as the number of points falling when a die is thrown, the number of visitors to a pharmacy during a day, and the number of apples on a tree are discrete random variables.

The most complete information about a discrete random variable is given by the distribution law of this variable: the correspondence between all possible values of the random variable and the probabilities corresponding to them.

The distribution law of a discrete random variable is often specified in the form of a two-row table, the first row of which lists all possible values of the variable (in ascending order), and the second row the corresponding probabilities:

X:  x1   x2   …   xn
P:  p1   p2   …   pn

Since all possible values of a discrete random variable form a complete system of events, the sum of their probabilities is equal to one (the normalization condition):

p1 + p2 + … + pn = 1.

Example 4. There are ten student groups, numbering respectively 12, 10, 8, 10, 9, 12, 8, 11, 10 and 9 students. Compose the distribution law of the random variable X defined as the number of students in a randomly selected group.

Solution. The possible values of the random variable X (in ascending order) are 8, 9, 10, 11, 12. Since 2 of the 10 groups have 8 students, the probability that a randomly selected group contains 8 students is P(X = 8) = 2/10 = 0.2.

Similarly one can find the probabilities of the remaining values of the random variable X:

P(X = 9) = 0.2, P(X = 10) = 0.3, P(X = 11) = 0.1, P(X = 12) = 0.2.

Thus, the desired distribution law is:

X:  8    9    10   11   12
P:  0.2  0.2  0.3  0.1  0.2
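Example 4 can be reproduced in code, building the distribution law by counting how often each group size occurs:

```python
from collections import Counter

groups = [12, 10, 8, 10, 9, 12, 8, 11, 10, 9]    # group sizes from example 4
counts = Counter(groups)
# Distribution law: each possible value with its relative frequency.
law = {x: counts[x] / len(groups) for x in sorted(counts)}

assert law == {8: 0.2, 9: 0.2, 10: 0.3, 11: 0.1, 12: 0.2}
assert abs(sum(law.values()) - 1.0) < 1e-12      # normalization condition
```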

The distribution law of a discrete random variable can also be specified by a formula that determines, for each possible value of the variable, the corresponding probability (for example, the Bernoulli distribution or the Poisson distribution). To describe certain features of a discrete random variable, its main numerical characteristics are used: the mathematical expectation, the dispersion, and the mean square deviation (standard deviation).

The mathematical expectation M(X) (also denoted μ) of a discrete random variable is the sum of the products of each of its possible values by the corresponding probability:

M(X) = x1·p1 + x2·p2 + … + xn·pn.

The main meaning of the mathematical expectation of a discrete random variable is that it is the mean value of this variable. In other words, if a certain number of trials is performed and the arithmetic mean of all observed values of the discrete random variable X is computed, then this arithmetic mean is approximately equal to the mathematical expectation of the random variable (the more precisely, the greater the number of trials).
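This "mean value" interpretation can be illustrated by simulation: the arithmetic mean over many trials approaches M(X) = Σ xᵢpᵢ. The law below is the one from example 4, for which M(X) = 9.9:

```python
import random

xs = [8, 9, 10, 11, 12]
ps = [0.2, 0.2, 0.3, 0.1, 0.2]
expectation = sum(x * p for x, p in zip(xs, ps))   # M(X) = 9.9

random.seed(0)                                     # deterministic illustration
sample = random.choices(xs, weights=ps, k=100_000)
sample_mean = sum(sample) / len(sample)
assert abs(sample_mean - expectation) < 0.1        # close for many trials
```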

Let's give some properties of mathematical expectation.

1. The mathematical expectation of a constant value is equal to that constant:

M(C) = C

2. The mathematical expectation of the product of a constant factor by a discrete random variable is equal to the product of this constant factor by the mathematical expectation of the random variable:

M(kX) = k·M(X)

3. The mathematical expectation of the sum of two random variables is equal to the sum of the mathematical expectations of these variables:

M(X + Y) = M(X) + M(Y)

4. The mathematical expectation of the product of independent random variables is equal to the product of their mathematical expectations:

M(X·Y) = M(X)·M(Y)

The separate values of a discrete random variable are grouped around the mathematical expectation as around a center. To characterize the degree of scattering of the possible values of a discrete random variable relative to its mathematical expectation, the concept of the dispersion of a discrete random variable is introduced.

The dispersion D(X) (denoted σ²) of a discrete random variable X is the mathematical expectation of the squared deviation of this variable from its mathematical expectation:

D(X) = σ² = M((X − μ)²).   (11)

In practice it is more convenient to calculate the dispersion by the formula

D(X) = σ² = M(X²) − μ².   (12)
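A quick check, using the law from example 4, that formulas (11) and (12) give the same dispersion, together with the standard deviation σ = √D(X):

```python
xs = [8, 9, 10, 11, 12]
ps = [0.2, 0.2, 0.3, 0.1, 0.2]

mu = sum(x * p for x, p in zip(xs, ps))                    # M(X) = 9.9
d_def = sum((x - mu) ** 2 * p for x, p in zip(xs, ps))     # formula (11)
d_short = sum(x * x * p for x, p in zip(xs, ps)) - mu**2   # formula (12)
sigma = d_def ** 0.5                                       # standard deviation

assert abs(d_def - d_short) < 1e-9
assert abs(d_def - 1.89) < 1e-6
```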

We list the basic properties of the dispersion.

  1. The dispersion of a constant value is zero: D(C) = 0.
  2. The dispersion of any random variable is a non-negative number:

D(X) ≥ 0

  3. The dispersion of the product of a constant factor k by a discrete random variable is equal to the product of the square of this constant factor by the dispersion of the random variable:

D(kX) = k²·D(X).

In computational practice it is often more convenient to use not the dispersion but another, most frequently used measure of the scattering of a random variable X: the mean square deviation (standard deviation, or simply the standard).

The mean square deviation of a discrete random variable is the square root of its dispersion:

σ = √D(X).

The convenience of the standard deviation is that it has the dimension of the random variable X itself, whereas the dispersion has a dimension equal to the square of the dimension of X.

This topic belongs to the section:

Elements of probability theory
