Probability Definition

ESTIMATING PROBABILITIES

Introduction

Suppose someone tells you “there is a 50-50 chance that we will be able to deliver your order on Friday”. This statement means something intuitively, even though when Friday arrives there are only two outcomes. Either the order will be delivered or it will not. Statements like this are trying to put probabilities or chances on uncertain events.

Probability is measured on a scale between 0 and 1. Any event which is impossible has a probability of 0, and any event which is certain to occur has a probability of 1. For example, the probability that the sun will not rise tomorrow is 0; the probability that a light bulb will fail sooner or later is 1. For uncertain events, the probability of occurrence is somewhere between 0 and 1. The 50-50 chance mentioned above is equivalent to a probability of 0.5.

Try to estimate probabilities for the following events. Remember that events which are more likely to occur than not have probabilities which are greater than 0.5, and the more certain they are the closer the probabilities are to 1. Similarly, events which are more likely not to occur have probabilities which are less than 0.5. The probabilities get closer to 0 as the events get more unlikely.

  • The probability that a coin will fall heads when tossed.
  • The probability that it will snow next Christmas.
  • The probability that sales for your company will reach record levels next year.
  • The probability that your car will not break down on your next journey.
  • The probability that the throw of a dice will show a six.

The probabilities are as follows:

  • The probability of heads is 0.5.
  • This probability is quite low. It is somewhere between 0 and 0.1.
  • You can answer this one yourself.
  • This depends on how frequently your car is serviced. For a reliable car it should be greater than 0.99.
  • The probability of a six is 1/6 or 0.167.

Theoretical Probabilities

Sometimes probabilities can be specified by considering the physical aspects of the situation. For example, consider the tossing of a coin. What is the probability that it will fall heads? There are two sides to a coin, and there is no reason to favour either side as a coin is symmetrical. Therefore the probability of heads, which we call P(H), is:

P(H) = 1/2 = 0.5

Similarly, a dice has six symmetrical faces, so the probability of any one face, such as a six, is 1/6.

Empirical Probabilities

Often it is not possible to give a theoretical probability of an event. For example, what is the probability that an item on a production line will fail a quality control test? This question can be answered either by measuring the probability in a test situation (i.e. empirically) or by relying on previous results. If 100 items are taken from the production line and tested, then:

P(item fails the test) = number of items which fail the test / 100

Sometimes it is not possible to set up an experiment to calculate an empirical probability. For example, what are your chances of passing a particular examination? You cannot sit a series of examinations to answer this. Previous results must be used. If you have taken 12 examinations in the past, and failed only one, you might estimate:

P(passing the examination) = 11/12 = 0.92 (approximately)
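As a minimal Python sketch of the idea (the figures and variable names below are illustrative, not from the text), an empirical probability is simply the relative frequency of the outcome of interest:

    # Hypothetical quality-control results: 1 marks an item that failed the test,
    # 0 marks an item that passed.
    results = [0, 0, 1, 0, 0, 0, 1, 0, 0, 0]

    # Empirical probability of failure = number of failures / number of items tested.
    p_fail = sum(results) / len(results)
    print(p_fail)  # 0.2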

TYPES OF EVENT

There are five types of event:

  • Mutually exclusive
  • Non-mutually-exclusive
  • Independent
  • Dependent or non-independent
  • Complementary

Mutually Exclusive Events

If two events are mutually exclusive then the occurrence of one event precludes the possibility of the other occurring. For example, the two sides of a coin are mutually exclusive since, on the throw of the coin, “heads” automatically rules out the possibility of “tails”. On the throw of a dice, a six excludes all other possibilities. In fact, all the sides of a dice are mutually exclusive; the occurrence of any one of them as the top face automatically excludes any of the others.

Non-Mutually-Exclusive Events

These are events which can occur together. For example, in a pack of playing cards hearts and queens are non-mutually-exclusive since there is one card, the queen of hearts, which is both a heart and a queen and so satisfies both criteria for success.

Independent Events

These are events which are not mutually exclusive and where the occurrence of one event does not affect the occurrence of the other. For example, the tossing of a coin in no way affects the result of the next toss of the coin; each toss has an independent outcome.

Dependent or Non-Independent Events

These are situations where the outcome of one event is dependent on another event. The probability of a car owner being able to drive to work in his car is dependent on his being able to start the car. The probability of his being able to drive to work given that the car starts is a conditional probability, written:

P(Drive to work | Car starts)

where the vertical line is a shorthand way of writing “given that”.
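For example, for a single throw of a dice, the probability of a six given that the result is known to be even is P(Six | Even) = 1/3, since only the three equally likely outcomes 2, 4 and 6 remain once we are told the result is even.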

Complementary Events

An event either occurs or it does not occur, i.e. we are certain that one or other of these situations holds.

For example, if we throw a dice and denote the event where a six is uppermost by A, and the event where either a one, two, three, four or five is uppermost by Ā (or not A) then A and Ā are complementary, i.e. they are mutually exclusive with a total probability of 1. Thus:

 

P(A) + P(Ā) = 1.

 

This relationship between complementary events is useful as it is often easier to find the probability of an event not occurring than to find the probability that it does occur. Using the above formula, we can always find P(A) by subtracting P(Ā) from 1.
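For instance, if the probability that an order is delivered late is, say, 0.1 (an illustrative figure), then the probability that it is delivered on time is simply 1 − 0.1 = 0.9.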

THE TWO LAWS OF PROBABILITY

Addition Law for a Complete List of Mutually Exclusive Events

If all possible mutually exclusive events are listed, then it is certain that one of these outcomes will occur. For example, when the dice is tossed there must be one number showing afterwards. The addition law states that, for mutually exclusive events, the probability that one or other of them occurs is the sum of their individual probabilities; and for a complete list of mutually exclusive events the probabilities must add up to 1:

P(A) + P(B) + P(C) + … = 1
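For example, for a single throw of a dice each face has probability 1/6, so P(1) + P(2) + P(3) + P(4) + P(5) + P(6) = 6 × 1/6 = 1, and the probability of throwing a five or a six is P(5) + P(6) = 1/6 + 1/6 = 1/3.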

Addition Law for Non-Mutually-Exclusive Events

Events which are non-mutually-exclusive are, by definition, capable of occurring together. In this case the simple addition law must be adjusted so that the outcomes common to both events are not counted twice:

P(A or B) = P(A) + P(B) − P(A and B)
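For example, using the pack of playing cards mentioned earlier: P(heart or queen) = P(heart) + P(queen) − P(queen of hearts) = 13/52 + 4/52 − 1/52 = 16/52 = 4/13.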

Multiplication Law for Independent Events

Consider an item on a production line. This item could be defective or acceptable. These two possibilities are mutually exclusive and represent a complete list of alternatives. Now suppose two items are taken from the line and that the quality of one has no bearing on the quality of the other, i.e. the two results are independent events. The multiplication law states that, for independent events A and B, the probability that both occur is:

P(A and B) = P(A) × P(B)
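For example, the probability that two tosses of a coin both show heads is P(H) × P(H) = 1/2 × 1/2 = 1/4; and if, say, the probability that any one item from the production line is defective is 0.05 (an illustrative figure only), the probability that two successive, independent items are both defective is 0.05 × 0.05 = 0.0025.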

Distinguishing the Laws

Although the above laws of probability are not complicated, you must think carefully and clearly when using them. Remember that events must be mutually exclusive before you can use the addition law, and they must be independent before you can use the multiplication law. Another matter about which you must be careful is the listing of equally likely outcomes. Be sure that you list all of them. For example, we can list the possible results of tossing two coins, namely:

 

First Coin    Second Coin
Heads         Heads
Tails         Heads
Heads         Tails
Tails         Tails

 

There are four equally likely outcomes. Do not make the mistake of saying, for example, that there are only two outcomes (both heads or not both heads); you must list all the possible outcomes. (In this case “not both heads” can occur in three different ways, so the probability of this result will be higher than that of “both heads”.)

In this example, the probability that there will be one head and one tail (heads – tails, or tails – heads) is 0.5. This is a case of the addition law at work: the probability of heads – tails (1/4) plus the probability of tails – heads (1/4). Putting it another way, the probability of different faces is equal to the probability of the same faces – in both cases 1/2.
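A minimal Python sketch (not part of the original text) can confirm this by listing the equally likely outcomes exhaustively:

    from itertools import product

    # All equally likely outcomes of tossing two fair coins.
    outcomes = list(product(["H", "T"], repeat=2))
    print(outcomes)  # [('H', 'H'), ('H', 'T'), ('T', 'H'), ('T', 'T')]

    # "One head and one tail" occurs in two of the four outcomes.
    p_mixed = sum(1 for o in outcomes if set(o) == {"H", "T"}) / len(outcomes)
    print(p_mixed)  # 0.5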

TREE DIAGRAMS

A compound experiment, i.e. one with more than one component part, may be regarded as a sequence of similar experiments. For example, the rolling of two dice can be considered as the rolling of one followed by the rolling of the other; and the tossing of four coins can be thought of as tossing one after the other. A tree diagram enables us to construct an exhaustive list of mutually exclusive outcomes of a compound experiment.

Furthermore, a tree diagram gives us a pictorial representation of probability.

By exhaustive, we mean that every possible outcome is considered.

By mutually exclusive we mean, as before, that if one of the outcomes of the compound experiment occurs then the others cannot.

As an example, suppose a bag contains five red balls and three white balls, and that two balls are drawn from it one after the other without replacement. We work from left to right in the tree diagram. At the start we take a ball from the bag. This ball is either red or white, so we draw two branches labelled R and W, corresponding to the two possibilities. We then also write on each branch the probability of the outcome of this simple experiment being along that branch.

We then consider drawing a second ball from the bag. Whether we draw a red or a white ball the first time, we can still draw a red or a white ball the second time, so we mark in the two possibilities at the end of each of the two branches of our existing tree diagram. We can then see that there are four different mutually exclusive outcomes possible, namely RR, RW, WR and WW. We enter on these second branches the conditional probabilities associated with them.

Thus, on the uppermost branch in the diagram we must insert the probability of obtaining a second red ball given that the first was red. This probability is 4/7, as there are only seven balls left in the bag, of which four are red. Similarly for the other branches.

Each complete branch from start to tip represents one possible outcome of the compound experiment and each of the branches is mutually exclusive. To obtain the probability of a particular outcome of the compound experiment occurring, we multiply the probabilities along the different sections of the branch, using the general multiplication law for probabilities.

We thus obtain the probabilities shown in Table 1.1. These probabilities should add up to 1, as we know that one or other of these mutually exclusive outcomes is certain to happen.
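The branch probabilities can also be checked numerically. The following minimal Python sketch assumes the bag described above (five red and three white balls); the fractions are exact:

    from fractions import Fraction

    red, white = 5, 3              # assumed contents of the bag
    total = red + white

    # Probability of each complete branch = first-draw probability
    # multiplied by the conditional second-draw probability.
    p_rr = Fraction(red, total) * Fraction(red - 1, total - 1)      # RR
    p_rw = Fraction(red, total) * Fraction(white, total - 1)        # RW
    p_wr = Fraction(white, total) * Fraction(red, total - 1)        # WR
    p_ww = Fraction(white, total) * Fraction(white - 1, total - 1)  # WW

    print(p_rr, p_rw, p_wr, p_ww)      # 5/14 15/56 15/56 3/28
    print(p_rr + p_rw + p_wr + p_ww)   # 1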

A bag contains three red balls, two white balls and one blue ball. Two balls are drawn at random (without replacement). Find the probability that:

  • Both white balls are drawn.
  • The blue ball is not drawn.
  • A red then a white are drawn.
  • A red and a white are drawn.

BINOMIAL DISTRIBUTION

The binomial distribution can be used to describe the likely outcome of events for discrete variables which:

  • have only two possible outcomes; and
  • are independent.

Suppose we are conducting a questionnaire. The binomial distribution might be used to analyse the results if the only two responses to a question are ‘yes’ or ‘no’, and if the response to one question (e.g. ‘yes’) does not influence the likely response to any other question.
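The formula itself is not reproduced at this point in the text; the standard binomial probability of obtaining exactly r ‘successes’ in n independent trials, each with success probability p (and failure probability q = 1 − p), is P(r) = nCr × p^r × q^(n − r). A minimal Python sketch (the function name and the figures used are illustrative only):

    from math import comb

    def binomial_probability(n, r, p):
        """P(exactly r successes in n independent trials, each with success probability p)."""
        q = 1 - p
        return comb(n, r) * p**r * q**(n - r)

    # Illustrative example: probability that exactly 3 of 10 respondents answer 'yes',
    # if each answers 'yes' independently with probability 0.4.
    print(binomial_probability(10, 3, 0.4))  # about 0.215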

POISSON DISTRIBUTION

Introduction

The Poisson distribution may be regarded as a limiting case of the binomial distribution. As with the binomial distribution, the Poisson distribution can be used where there are only two possible outcomes:

 

  • Success (p)
  • Failure (q)

 

These events are independent. The Poisson distribution is usually used where n is very large but p is very small, and where the mean np is constant and typically less than 5. Because p is very small (p < 0.1 and often much less), the chance of the event occurring on any single trial is extremely low. The Poisson distribution is therefore typically used for unlikely events such as accidents, strikes, etc.

The Poisson distribution is also used to solve problems where events tend to occur at random, such as incoming phone calls, passenger arrivals at a terminal etc.

Whereas the formula for solving binomial problems uses the probabilities for both “success” (p) and “failure” (q), the formula for solving Poisson problems uses only the probability of “success” (p).
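The formula is not shown at this point in the text; the standard Poisson probability of exactly r occurrences, when the mean number of occurrences is m = np, is P(r) = e^(−m) × m^r / r!. A minimal Python sketch (the function name and the mean of 2 accidents per week are illustrative only):

    from math import exp, factorial

    def poisson_probability(r, m):
        """P(exactly r occurrences when the mean number of occurrences is m)."""
        return exp(-m) * m**r / factorial(r)

    # Illustrative mean: 2 accidents per week on average.
    m = 2
    print(poisson_probability(0, m))  # about 0.135 (no accidents in a week)
    print(poisson_probability(3, m))  # about 0.180 (exactly three accidents)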

 
