Probability of an event. Determining the probability of an event. Classical and statistical definition of probability

  • Probability is the degree (relative measure, quantitative assessment) of the possibility of the occurrence of some event. When the reasons for a possible event to actually occur outweigh the opposite reasons, the event is called probable; otherwise it is called unlikely or improbable. The preponderance of positive reasons over negative ones (and vice versa) can vary in degree, as a result of which the probability (and improbability) can be greater or lesser. Probability is therefore often assessed at a qualitative level, especially in cases where a more or less accurate quantitative assessment is impossible or extremely difficult. Various gradations of “levels” of probability are possible.

    The study of probability from a mathematical point of view constitutes a special discipline, probability theory. In probability theory and mathematical statistics, the concept of probability is formalized as a numerical characteristic of an event, a probability measure (or its value): a measure on a set of events (subsets of the set of elementary events) taking values from 0 to 1. The value 1 corresponds to a certain (sure) event. An impossible event has a probability of 0 (the converse is generally not always true). If the probability of an event occurring is p, then the probability of its non-occurrence is 1 − p. In particular, a probability of 1/2 means that the occurrence and non-occurrence of the event are equally likely.

    The classical definition of probability is based on the concept of equally possible outcomes. The probability is the ratio of the number of outcomes favorable to a given event to the total number of equally possible outcomes. For example, the probability of getting heads or tails in a random coin toss is 1/2 if it is assumed that only these two possibilities occur and that they are equally possible. This classical “definition” of probability can be generalized to the case of an infinite number of possible outcomes: for example, if some event can occur with equal probability at any point (the number of points is infinite) of some bounded region of space (or of the plane), then the probability that it occurs in some part of this admissible region is equal to the ratio of the volume (area) of that part to the volume (area) of the region of all possible points.

    The empirical “definition” of probability is related to the frequency of an event, based on the fact that with a sufficiently large number of trials, the frequency should tend to the objective degree of possibility of this event. In the modern presentation of probability theory, probability is defined axiomatically, as a special case of the abstract theory of set measure. However, the connecting link between the abstract measure and the probability, which expresses the degree of possibility of the occurrence of an event, is precisely the frequency of its observation.
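
    As an illustration of this frequency view, here is a minimal simulation sketch in Python (standard library only; the function name relative_frequency is chosen here purely for illustration). It repeatedly tosses a fair coin and shows how the relative frequency of "heads" settles near the probability 1/2 as the number of trials grows.

    import random

    def relative_frequency(num_trials, p=0.5, seed=0):
        # Run num_trials independent trials, each succeeding with probability p,
        # and return the relative frequency m/n of successes.
        rng = random.Random(seed)
        hits = sum(1 for _ in range(num_trials) if rng.random() < p)
        return hits / num_trials

    for n in (10, 100, 10_000, 1_000_000):
        print(n, relative_frequency(n))
    # The printed frequencies drift toward 0.5 as n grows, which is the sense in
    # which the observed frequency is linked to the probability of the event.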

    The probabilistic description of certain phenomena has become widespread in modern science, in particular in econometrics, statistical physics of macroscopic (thermodynamic) systems, where even in the case of a classical deterministic description of the movement of particles, a deterministic description of the entire system of particles does not seem practically possible or appropriate. In quantum physics, the processes described are themselves probabilistic in nature.

Classical and statistical definition of probability

For practical activities, it is necessary to be able to compare events according to the degree of possibility of their occurrence. Let's consider a classic case. There are 10 balls in the urn, 8 of them are white, 2 are black. Obviously, the event “a white ball will be drawn from the urn” and the event “a black ball will be drawn from the urn” have different degrees of possibility of their occurrence. Therefore, to compare events, a certain quantitative measure is needed.

A quantitative measure of the possibility of an event occurring is its probability. The most widely used definitions of the probability of an event are the classical and the statistical ones.

The classical definition of probability is associated with the concept of a favorable outcome. Let's look at this in more detail.

Let the outcomes of some test form a complete group of events and be equally possible, i.e. uniquely possible, incompatible and equally likely. Such outcomes are called elementary outcomes, or cases. The test is then said to reduce to a scheme of cases, or an "urn scheme", because any probability problem for such a test can be replaced by an equivalent problem with urns and balls of different colors.

An outcome is called favorable to event A if the occurrence of this case entails the occurrence of event A.

According to the classical definition, the probability of event A is equal to the ratio of the number of outcomes favorable to this event to the total number of outcomes, i.e.

P(A) = m / n, (1.1)

where P(A) is the probability of event A; m is the number of cases favorable to event A; n is the total number of cases.

Example 1.1. When rolling a die, there are six possible outcomes: 1, 2, 3, 4, 5, 6 points. What is the probability of getting an even number of points?

Solution. All n = 6 outcomes form a complete group of events and are equally possible, i.e. uniquely possible, incompatible and equally likely. Event A, “an even number of points appears”, is favored by m = 3 outcomes (cases): rolling 2, 4 or 6 points. Using the classical formula for the probability of an event, we obtain

P(A) = m / n = 3/6 = 1/2.
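
A quick enumeration check in Python (a sketch; the variable names are chosen here for illustration) reproduces the same ratio of favorable to total cases:

cases = [1, 2, 3, 4, 5, 6]                    # n = 6 elementary outcomes
favorable = [x for x in cases if x % 2 == 0]  # m = 3 outcomes: 2, 4, 6
print(len(favorable) / len(cases))            # 0.5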

Based on the classical definition of the probability of an event, we note its properties:

1. The probability of any event lies between zero and one, i.e.

0 ≤ P(A) ≤ 1.

2. The probability of a certain (sure) event is equal to one.

3. The probability of an impossible event is zero.

As stated earlier, the classical definition of probability is applicable only to those events that can arise as a result of tests that have symmetry of possible outcomes, i.e. that reduce to a scheme of cases. However, there is a large class of events whose probabilities cannot be calculated using the classical definition.

For example, if we assume that the coin is flattened (deformed), then it is obvious that the events “heads appears” and “tails appears” cannot be considered equally possible. Therefore, the formula for determining the probability according to the classical scheme is not applicable in this case.

However, there is another approach to estimating the probability of events, based on how often a given event will occur in the trials performed. In this case, the statistical definition of probability is used.

The statistical probability of event A is the relative frequency of occurrence of this event in the n trials performed, i.e.

P*(A) = w(A) = m / n, (1.2)

where P*(A) is the statistical probability of event A; w(A) is the relative frequency of event A; m is the number of trials in which event A occurred; n is the total number of trials.

Unlike the mathematical probability P(A) considered in the classical definition, the statistical probability P*(A) is an empirical, experimental characteristic. In other words, the statistical probability of event A is the number around which the relative frequency w(A) stabilizes (settles) as the number of trials carried out under the same set of conditions increases without bound.

For example, when it is said of a shooter that he hits the target with a probability of 0.95, this means that out of a hundred shots fired by him under certain conditions (the same target at the same distance, the same rifle, etc.), on average about 95 are successful. Naturally, not every hundred shots will contain exactly 95 successful ones; sometimes there will be fewer, sometimes more, but on average, when shooting is repeated many times under the same conditions, this percentage of hits remains practically unchanged. The figure of 0.95, which serves as an indicator of the shooter's skill, is usually very stable, i.e. the percentage of hits in most shooting sessions will be almost the same for a given shooter, only in rare cases deviating significantly from its average value.
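
A small simulation sketch in Python (standard library only; the hit probability 0.95 and the series length of 100 shots are taken from the example above) illustrates this stabilization of the hit frequency:

import random

rng = random.Random(1)

def hits_in_series(shots=100, p_hit=0.95):
    # Simulate one series of shots; each shot hits independently with probability p_hit.
    return sum(1 for _ in range(shots) if rng.random() < p_hit)

series = [hits_in_series() for _ in range(20)]
print(series)                    # individual series fluctuate around 95 hits out of 100
print(sum(series) / (20 * 100))  # the overall hit frequency stays close to 0.95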

Another disadvantage of the classical definition of probability (1.1) limiting its use is that it assumes a finite number of possible test outcomes. In some cases, this disadvantage can be overcome by using a geometric definition of probability, i.e. by finding the probability of a point falling into a certain region (a segment, a part of a plane, etc.).

Let a flat figure g form part of a flat figure G (Fig. 1.1). A point is thrown at random onto G. This means that all points of the region G have “equal rights” with respect to being hit by the thrown random point. Assuming that the probability of event A (the thrown point hits the figure g) is proportional to the area of this figure and depends neither on its location within G nor on the shape of g, we find

P(A) = area of g / area of G.
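
A Monte Carlo sketch in Python illustrates this geometric definition under an assumed concrete setup: G is taken to be the unit square and g an inscribed disk of radius 0.5, so the ratio of areas is pi/4 ≈ 0.785.

import math
import random

def geometric_probability(num_points=100_000, seed=0):
    # Throw points uniformly onto the unit square G and count how many land
    # in the disk g of radius 0.5 centered at (0.5, 0.5).
    rng = random.Random(seed)
    inside = 0
    for _ in range(num_points):
        x, y = rng.random(), rng.random()
        if (x - 0.5) ** 2 + (y - 0.5) ** 2 <= 0.25:
            inside += 1
    return inside / num_points

print(geometric_probability())  # close to the area ratio
print(math.pi / 4)              # 0.7853981...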

Brief theory

To quantitatively compare events according to the degree of possibility of their occurrence, a numerical measure is introduced, which is called the probability of an event. The probability of a random event is a number that expresses the measure of the objective possibility of an event occurring.

How substantial the objective grounds are for expecting the occurrence of an event is characterized by the probability of that event. It must be emphasized that probability is an objective quantity that exists independently of the cognizing subject and is conditioned by the entire set of conditions that contribute to the occurrence of the event.

The explanations we have given for the concept of probability are not a mathematical definition, since they do not quantify the concept. There are several definitions of the probability of a random event, which are widely used in solving specific problems (classical, geometric definition of probability, statistical, etc.).

The classical definition of event probability reduces this concept to the more elementary concept of equally possible events, which is no longer subject to definition and is assumed to be intuitively clear. For example, if a die is a homogeneous cube, then the appearance of any face of this cube is an equally possible event.

Let a certain (sure) event be divided into equally possible cases, the sum of some of which gives event A. The cases whose occurrence entails event A are called favorable to it, since the appearance of any one of them ensures the occurrence of A.

The probability of event A will be denoted by the symbol P(A).

The probability of event A is equal to the ratio of the number m of cases favorable to it to the total number n of uniquely possible, equally possible and incompatible cases, i.e.

P(A) = m / n.

This is the classical definition of probability. Thus, to find the probability of an event it is necessary, having considered the various outcomes of the test, to find the set of uniquely possible, equally possible and incompatible cases, calculate their total number n and the number of cases m favorable to the given event, and then perform the calculation using the above formula.

The probability of an event, equal to the ratio of the number of experimental outcomes favorable to the event to the total number of experimental outcomes, is called the classical probability of a random event.

The following properties of probability follow from the definition:

Property 1. The probability of a certain (sure) event is equal to one.

Property 2. The probability of an impossible event is zero.

Property 3. The probability of a random event is a positive number between zero and one.

Property 4. The probability of the occurrence of events that form a complete group is equal to one.

Property 5. The probability of the occurrence of the opposite event Ā is determined in the same way as the probability of the occurrence of event A:

P(Ā) = (n − m) / n,

where n − m is the number of cases favoring the occurrence of the opposite event. Hence, the probability of the occurrence of the opposite event is equal to the difference between unity and the probability of the occurrence of event A:

P(Ā) = (n − m) / n = 1 − m/n = 1 − P(A).
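
A small numeric check in Python (a sketch; the event "an even number of points on a die roll" is used purely as an illustration):

cases = [1, 2, 3, 4, 5, 6]
m = sum(1 for x in cases if x % 2 == 0)          # cases favorable to A (even points)
n = len(cases)
print(m / n, (n - m) / n, m / n + (n - m) / n)   # 0.5 0.5 1.0 -> P(A) + P(opposite) = 1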

An important advantage of the classical definition of the probability of an event is that with its help the probability of an event can be determined without resorting to experience, but based on logical reasoning.

When the set of conditions is met, a certain (sure) event will definitely happen, and an impossible event will definitely not happen. Among the events that may or may not occur when the set of conditions is created, the occurrence of some can be counted on with good reason, and the occurrence of others with less reason. If, for example, there are more white balls in an urn than black balls, then there is more reason to expect the appearance of a white ball when one is drawn from the urn at random than the appearance of a black ball.


Example of problem solution

Example 1

A box contains 8 white, 4 black and 7 red balls. 3 balls are drawn at random. Find the probabilities of the following events: A – at least 1 red ball is drawn; C – there are at least 2 balls of the same color; D – there is at least 1 red and at least 1 white ball.

The solution of the problem

We find the total number of test outcomes as the number of combinations of 19 (8 + 4 + 7) elements taken 3 at a time:

n = C(19, 3) = (19 · 18 · 17) / (1 · 2 · 3) = 969.

Let us find the probability of event A – at least 1 red ball is drawn (1, 2 or 3 red balls). The number of outcomes favorable to the event:

m = C(7, 1) · C(12, 2) + C(7, 2) · C(12, 1) + C(7, 3) = 7 · 66 + 21 · 12 + 35 = 749.

Required probability:

P(A) = 749 / 969 ≈ 0.773.

Let event C be – there are at least 2 balls of the same color (2 or 3 white balls, 2 or 3 black balls, or 2 or 3 red balls).

Number of outcomes favorable to the event:

m = C(8, 2) · C(11, 1) + C(8, 3) + C(4, 2) · C(15, 1) + C(4, 3) + C(7, 2) · C(12, 1) + C(7, 3) = 308 + 56 + 90 + 4 + 252 + 35 = 745.

Required probability:

P(C) = 745 / 969 ≈ 0.7688.

Let event D be – there is at least 1 red and at least 1 white ball

(1 red, 1 white, 1 black; or 1 red, 2 white; or 2 red, 1 white).

Number of outcomes favorable to the event:

m = C(7, 1) · C(8, 1) · C(4, 1) + C(7, 1) · C(8, 2) + C(7, 2) · C(8, 1) = 224 + 196 + 168 = 588.

Required probability:

P(D) = 588 / 969 ≈ 0.6068.

Answer: P(A) ≈ 0.773; P(C) ≈ 0.7688; P(D) ≈ 0.6068.
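
The counts above can be double-checked with a short Python sketch using math.comb (the event labels A, C, D follow the answer line):

from math import comb

n = comb(19, 3)                                  # 969 total ways to draw 3 of 19 balls
m_a = comb(7, 1) * comb(12, 2) + comb(7, 2) * comb(12, 1) + comb(7, 3)
m_c = (comb(8, 2) * 11 + comb(8, 3)              # 2 or 3 white
       + comb(4, 2) * 15 + comb(4, 3)            # 2 or 3 black
       + comb(7, 2) * 12 + comb(7, 3))           # 2 or 3 red
m_d = 7 * 8 * 4 + 7 * comb(8, 2) + comb(7, 2) * 8
print(m_a / n, m_c / n, m_d / n)                 # ≈ 0.773, 0.7688, 0.6068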

Example 2

Two dice are thrown. Find the probability that the sum of points is at least 5.

Solution

Let A be the event – the sum of points is at least 5.

Let's use the classical definition of probability:

P(A) = m / n,

where n is the total number of possible test outcomes and m is the number of outcomes favoring the event of interest.

On the upturned face of the first die one point, two points, ..., six points may appear; similarly, six outcomes are possible when rolling the second die. Each of the outcomes of throwing the first die can be combined with each of the outcomes of the second. Thus, the total number of possible elementary test outcomes is equal to the number of arrangements with repetition (a choice with repetition of 2 elements from a set of size 6):

n = 6 · 6 = 36.

Let's find the probability of the opposite event – the sum of points is less than 5.

The following combinations of dropped points favor this event:

1st die    2nd die
1          1
1          2
2          1
1          3
3          1
2          2

There are 6 such outcomes, so the probability of the opposite event is 6/36 = 1/6, and the required probability is

P(A) = 1 − 1/6 = 5/6 ≈ 0.833.
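
A brute-force enumeration in Python (a sketch over all 36 ordered outcomes of the two dice) confirms the count and the probability:

# Enumerate all ordered outcomes of two dice and apply the classical formula P(A) = m/n.
outcomes = [(i, j) for i in range(1, 7) for j in range(1, 7)]          # n = 36
favorable = [o for o in outcomes if sum(o) >= 5]                       # sum of points at least 5
print(len(outcomes), len(favorable), len(favorable) / len(outcomes))   # 36 30 0.8333...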



“The reader has already noticed in our presentation the frequent use of the concept “probability.”

This is a characteristic feature of modern logic as opposed to ancient and medieval logic. A modern logician understands that all our knowledge is only more or less probabilistic, and not certain, as philosophers and theologians are accustomed to think. He is not overly concerned about the fact that inductive inference only imparts probability to its conclusion, since he does not expect anything more. However, he will think about it if he finds reason to doubt even the probability of his conclusion.

Thus two problems have acquired much greater importance in modern logic than in earlier times. The first is the nature of probability, and the second is the significance of induction. Let us briefly discuss these problems.

There are, accordingly, two types of probability - definite and uncertain.

Probability of the definite kind occurs in the mathematical theory of probability, where problems such as throwing dice or tossing coins are discussed. It occurs wherever there are several possibilities and none of them can be preferred over another. If you toss a coin, it must land either heads or tails, but both seem equally likely. Therefore, the chance of heads and the chance of tails are each taken to be one half, certainty being taken as one. Similarly, if you roll a die, it can land on any of its six faces, and there is no reason to favor one over another, so each has a chance of 1/6. Insurance companies use this kind of probability in their work. They don't know which building will burn down, but they do know what percentage of buildings burn down every year. They do not know how long a particular person will live, but they do know the average life expectancy for any given period. In all such cases, the estimate of probability is not itself merely probable, except in the sense in which all knowledge is merely probable. A probability estimate may itself have a high degree of probability. Otherwise, insurance companies would go bankrupt.

Great efforts have been made to increase the likelihood of induction, but there is reason to believe that all these attempts were in vain. The probability characteristic of inductive inferences is almost always, as I said above, of an uncertain nature.

Now I will explain what it is.

It has become trivial to say that all human knowledge is fallible. It is obvious that errors are of different kinds. If I say that Buddha lived in the sixth century BC, the probability of error is considerable. If I say that Caesar was assassinated, the probability of error is small.

If I say that there is a great war going on now, then the probability of an error is so small that only a philosopher or logician can admit its presence. These examples concern historical events, but a similar gradation exists in relation to scientific laws. Some of them have the obvious nature of hypotheses, to which no one will give more serious status due to the lack of empirical data in their favor, while others seem to be so definite that there is practically no doubt on the part of scientists about their truth. (When I say “truth,” I mean “approximate truth,” since every scientific law is subject to some amendment.)

Probability is something that lies between what we are sure of and what we are more or less inclined to admit, if this word is understood in the sense of the mathematical theory of probability.

It would be more correct to talk about degrees of certainty, or degrees of reliability. This is a broader concept than what I called “definite probability,” and it is also more important.”

Bertrand Russell, The Art of Drawing Conclusions / The Art of Thinking, M., “House of Intellectual Books”, 1999, pp. 50-51.

PROBABILITY as an ontological category reflects the extent of the possibility of the emergence of any entity under any conditions. In contrast to the mathematical and logical interpretations of this concept, ontological probability does not bind itself to the obligation of quantitative expression. The meaning of probability is revealed in the context of understanding determinism and the nature of development in general.


PROBABILITY

a concept characterizing the quantitative measure of the possibility of the occurrence of a certain event under certain conditions. In scientific knowledge there are three interpretations of probability. The classical conception of probability, which arose from the mathematical analysis of gambling and was most fully developed by B. Pascal, J. Bernoulli and P. Laplace, defines probability as the ratio of the number of favorable cases to the total number of all equally possible ones. For example, when rolling a die that has 6 faces, each of them can be expected to land face up with probability 1/6, since no face has an advantage over another. Such symmetry of experimental outcomes is specially taken into account when organizing games, but is relatively rare in the study of objective events in science and practice. The classical interpretation gave way to the statistical conception of probability, which is based on actually observing the occurrence of a certain event over a long stretch of experience under precisely fixed conditions. Practice confirms that the more often an event occurs, the greater the degree of the objective possibility of its occurrence, or its probability. Therefore the statistical interpretation of probability is based on the concept of relative frequency, which can be determined experimentally. Probability as a theoretical concept never coincides with the empirically determined frequency; however, in many cases it differs little in practice from the relative frequency found as a result of long observation. Many statisticians regard probability as a "double" of the relative frequency, determined by the statistical study of the results of observations or experiments. Less realistic was the definition of probability as the limit of relative frequencies of mass events, or collectives, proposed by R. Mises. As a further development of the frequency approach to probability, a dispositional, or propensity, interpretation of probability has been put forward (K. Popper, J. Hacking, M. Bunge, T. Settle). According to this interpretation, probability characterizes the property of the generating conditions, for example of an experimental setup, to produce a sequence of mass random events. It is precisely this disposition that gives rise to physical propensities, or predispositions, which can be checked by means of relative frequencies.

The statistical interpretation of probability dominates scientific cognition, because it reflects the specific nature of the patterns inherent in mass phenomena of a random character. In many physical, biological, economic, demographic and other social processes it is necessary to take into account the action of many random factors, which are characterized by a stable frequency. Identifying these stable frequencies and assessing them quantitatively with the help of probability makes it possible to reveal the necessity that makes its way through the cumulative action of many chance factors. This is where the dialectic of the transformation of chance into necessity finds its manifestation (see F. Engels, in: K. Marx and F. Engels, Works, vol. 20, pp. 535-36).

Logical, or inductive, probability characterizes the relationship between the premises and the conclusion of non-demonstrative and, in particular, inductive reasoning. Unlike deduction, the premises of induction do not guarantee the truth of the conclusion, but only make it more or less plausible. This plausibility, with precisely formulated premises, can sometimes be assessed with the help of probability. The value of this probability is most often determined by means of comparative concepts (greater than, less than, or equal to), and sometimes numerically. The logical interpretation is often used to analyze inductive reasoning and to construct various systems of probabilistic logic (R. Carnap, R. Jeffrey). In semantic conceptions of logical probability, it is often defined as the degree to which one statement is confirmed by others (for example, a hypothesis by its empirical data).

In connection with the development of theories of decision-making and games, the so-called personalistic interpretation of probability has become widespread. Although probability here expresses the subject's degree of belief in the occurrence of a certain event, the probabilities themselves must be chosen in such a way that the axioms of the probability calculus are satisfied. Therefore, probability in this interpretation expresses not so much a subjective as a reasonable belief. Consequently, decisions made on the basis of such probabilities will be rational, because they do not take into account the psychological characteristics and inclinations of the subject.

From the epistemological point of view, the difference between the statistical, logical and personalistic interpretations of probability is that while the first characterizes the objective properties and relations of mass phenomena of a random nature, the last two analyze the features of subjective, cognitive human activity under conditions of uncertainty.

PROBABILITY

one of the most important concepts of science, characterizing a special systemic vision of the world, its structure, evolution and knowledge. The specificity of the probabilistic view of the world is revealed through the inclusion of the concepts of randomness, independence and hierarchy (the idea of levels in the structure and determination of systems) among the basic concepts of existence.

Ideas about probability originated in ancient times and related to the characteristics of our knowledge: the existence of probable knowledge was recognized, which differed both from reliable knowledge and from false knowledge. The impact of the idea of probability on scientific thinking and on the development of knowledge is directly related to the development of probability theory as a mathematical discipline. The origin of the mathematical doctrine of probability dates back to the 17th century, when a core of concepts was developed that permits quantitative (numerical) characterization and expresses the probabilistic idea.

Intensive application of probability to the development of cognition took place in the second half of the 19th and the first half of the 20th century. Probability entered the structures of such fundamental sciences of nature as classical statistical physics, genetics, quantum theory, and cybernetics (information theory). Accordingly, probability epitomizes that stage in the development of science which is now defined as non-classical science. To reveal the novelty and features of the probabilistic way of thinking, it is necessary to proceed from an analysis of the subject of probability theory and the foundations of its numerous applications. Probability theory is usually defined as a mathematical discipline that studies the patterns of mass random phenomena under certain conditions. Randomness means that, within the framework of such mass character, the existence of each elementary phenomenon does not depend on and is not determined by the existence of other phenomena. At the same time, the mass nature of the phenomena itself has a stable structure and contains certain regularities. A mass phenomenon is quite strictly divided into subsystems, and the relative number of elementary phenomena in each of the subsystems (the relative frequency) is very stable. This stability is compared with probability. A mass phenomenon as a whole is characterized by a probability distribution, that is, by specifying the subsystems and their corresponding probabilities. The language of probability theory is the language of probability distributions. Accordingly, probability theory is defined as the abstract science of operating with distributions.

Probability gave rise in science to ideas about statistical patterns and statistical systems. The latter are systems formed from independent or quasi-independent entities; their structure is characterized by probability distributions. But how is it possible to form systems from independent entities? It is usually assumed that for the formation of systems with integral characteristics, it is necessary that sufficiently stable connections exist between their elements that cement the systems. Stability of statistical systems is given by the presence of external conditions, external environment, external rather than internal forces. The very definition of probability is always based on setting the conditions for the formation of the initial mass phenomenon. Another important idea characterizing the probabilistic paradigm is the idea of ​​hierarchy (subordination). This idea expresses the relationship between the characteristics of individual elements and the integral characteristics of systems: the latter, as it were, are built on top of the former.

The importance of probabilistic methods in cognition lies in the fact that they make it possible to study and theoretically express the patterns of structure and behavior of objects and systems that have a hierarchical, “two-level” structure.

Analysis of the nature of probability is based on its frequency, statistical interpretation. At the same time, for a very long time, such an understanding of probability dominated in science, which was called logical, or inductive, probability. Logical probability is interested in questions of the validity of a separate, individual judgment under certain conditions. Is it possible to evaluate the degree of confirmation (reliability, truth) of an inductive conclusion (hypothetical conclusion) in quantitative form? During the development of probability theory, such questions were repeatedly discussed, and they began to talk about the degrees of confirmation of hypothetical conclusions. This measure of probability is determined by the information available to a given person, his experience, views on the world and psychological mindset. In all such cases, the magnitude of probability is not amenable to strict measurements and practically lies outside the competence of probability theory as a consistent mathematical discipline.

The objective, frequentist interpretation of probability was established in science with significant difficulties. Initially, the understanding of the nature of probability was strongly influenced by those philosophical and methodological views that were characteristic of classical science. Historically, the development of probabilistic methods in physics occurred under the determining influence of the ideas of mechanics: statistical systems were interpreted simply as mechanical. Since the corresponding problems were not solved by strict methods of mechanics, assertions arose that turning to probabilistic methods and statistical laws is the result of the incompleteness of our knowledge. In the history of the development of classical statistical physics, numerous attempts were made to substantiate it on the basis of classical mechanics, but they all failed. The basis of probability is that it expresses the structural features of a certain class of systems, other than mechanical systems: the state of the elements of these systems is characterized by instability and a special (not reducible to mechanics) nature of interactions.

The entry of probability into knowledge leads to the denial of the concept of hard determinism, to the denial of the basic model of being and knowledge developed in the process of the formation of classical science. The basic models represented by statistical theories are of a different, more general nature: they include the ideas of randomness and independence. The idea of ​​probability is associated with the disclosure of the internal dynamics of objects and systems, which cannot be entirely determined by external conditions and circumstances.

The conception of a probabilistic vision of the world, based on the absolutization of ideas about independence (just as the paradigm of rigid determination was before it), has now revealed its limitations, which is most strongly reflected in the transition of modern science to analytical methods for studying complex systems and to the physical and mathematical foundations of self-organization phenomena.



