The sum of all the probabilities in the sample space is 1.
The probability of an event which cannot occur is 0.
The probability of any event which is not in the sample space is zero.
The probability of an event which must occur is 1.
The probability of the sample space is 1.
The probability of an event not occurring is one minus the probability of it occurring.
The complement of an event E is denoted by E', and its probability is given by P(E') = 1 - P(E).
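As a quick numeric check of the complement rule, here is a minimal sketch using a hypothetical fair-die example (the event and its probability are illustrative, not from the text):

```python
from fractions import Fraction

# Hypothetical example: E = "roll a 6" with a fair die.
p_e = Fraction(1, 6)        # P(E)
p_e_comp = 1 - p_e          # P(E') = 1 - P(E)
assert p_e_comp == Fraction(5, 6)
```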
P(A ∪ B) is sometimes written as P(A + B), and P(A ∩ B) as P(AB).
If A and B are mutually exclusive events, P(A or B) = P (A) + P (B)
When two events A and B are independent, i.e. when event A has no effect on the probability of event B, the conditional probability of event B given event A is simply the probability of event B: P(B|A) = P(B).
If events A and B are not independent, then the probability of the intersection of A and B (the probability that both events occur) is defined by P (A and B) = P (A) P (B|A).
A and B are independent if P(B|A) = P(B) and P(A|B) = P(A).
If E1, E2, ..., En are n independent events, then P(E1 ∩ E2 ∩ ... ∩ En) = P(E1) P(E2) ... P(En).
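The product rule for n independent events can be sketched as follows; the three-coin-toss numbers are illustrative, not from the text:

```python
from fractions import Fraction
from math import prod

# Three independent fair coin tosses; Ei = "toss i lands heads".
p_events = [Fraction(1, 2), Fraction(1, 2), Fraction(1, 2)]

# P(E1 ∩ E2 ∩ E3) = P(E1) P(E2) P(E3)
p_all = prod(p_events)
assert p_all == Fraction(1, 8)
```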
Events E1, E2, ..., En are pairwise independent if P(Ei ∩ Ej) = P(Ei) P(Ej) for all i ≠ j.
Bayes' theorem: P(Hi | A) = P(A | Hi) P(Hi) / ∑j P(A | Hj) P(Hj).
If A1, A2, ..., An are exhaustive events and S is the sample space, then A1 U A2 U ... U An = S.
If E1, E2, ..., En are mutually exclusive events, then P(E1 U E2 U ... U En) = ∑ P(Ei).
If the events are not mutually exclusive then P (A or B) = P (A) +P (B) – P (A and B)
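A small worked check of the general addition rule, using a standard 52-card deck as an illustrative example:

```python
from fractions import Fraction

# A = "draw a heart", B = "draw a king"; A ∩ B = "king of hearts".
p_a = Fraction(13, 52)
p_b = Fraction(4, 52)
p_a_and_b = Fraction(1, 52)

# P(A or B) = P(A) + P(B) - P(A and B)
p_a_or_b = p_a + p_b - p_a_and_b
assert p_a_or_b == Fraction(16, 52)
```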
Three events A, B and C are said to be mutually independent if P(A∩B) = P(A).P(B), P(B∩C) = P(B).P(C), P(A∩C) = P(A).P(C), P(A∩B∩C) = P(A).P(B).P(C)
The concept of mutually exclusive events is set theoretic in nature while the concept of independent events is probabilistic in nature.
If two events A and B are mutually exclusive,
P (A ∩ B) = 0 but P(A) P(B) ≠ 0 (In general)
⇒ P(A ∩ B) ≠ P(A) P(B)
⇒ Mutually exclusive events will not be independent.
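The argument above can be verified numerically; the fair-die events below are an illustrative choice:

```python
from fractions import Fraction

# Fair die: A = {1, 2} and B = {3, 4} are mutually exclusive.
p_a = Fraction(2, 6)
p_b = Fraction(2, 6)
p_a_and_b = Fraction(0)     # the events cannot occur together

# Independence would require P(A ∩ B) = P(A) P(B), which fails:
assert p_a_and_b != p_a * p_b
```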
The probability distribution of a count variable X is said to be the binomial distribution with parameters n and p, abbreviated B(n, p), if it satisfies the following conditions:
The total number of trials, n, is fixed in advance.
Each trial results in either a success or a failure.
The probability of success, p, is the same for every trial.
The trials are independent of one another.
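The binomial probability mass function that follows from these conditions can be sketched directly from the standard formula (`binom_pmf` is a name chosen here for illustration):

```python
from math import comb

def binom_pmf(k, n, p):
    """P(X = k) for X ~ B(n, p): nCk * p^k * q^(n-k), with q = 1 - p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Illustrative example: exactly 2 heads in 4 fair coin tosses.
assert abs(binom_pmf(2, 4, 0.5) - 0.375) < 1e-12
```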
Some important facts related to binomial distribution:
(p + q)^n = nC0 p^n + nC1 p^(n-1) q + ... + nCr p^(n-r) q^r + ... + nCn q^n
The probability of getting at least k successes out of n trials is
P(X ≥ k) = ∑(x = k to n) nCx p^x q^(n-x)
Summing over all outcomes gives ∑(x = 0 to n) nCx p^x q^(n-x) = (q + p)^n = 1.
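The at-least-k formula and the normalization identity can be checked with a short sketch; the function names and parameter values are chosen here for illustration:

```python
from math import comb

def binom_pmf(x, n, p):
    # nCx * p^x * q^(n-x)
    return comb(n, x) * p**x * (1 - p)**(n - x)

def prob_at_least(k, n, p):
    """P(X >= k) = sum over x = k..n of nCx p^x q^(n-x)."""
    return sum(binom_pmf(x, n, p) for x in range(k, n + 1))

# Summing from x = 0 recovers (q + p)^n = 1.
assert abs(prob_at_least(0, 10, 0.3) - 1.0) < 1e-12
# P(X >= 1) = 1 - P(X = 0) = 1 - q^n.
assert abs(prob_at_least(1, 10, 0.3) - (1 - 0.7**10)) < 1e-12
```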
Mean of binomial distribution is np
Variance is npq
Standard deviation is given by (npq)^(1/2), where n is the number of trials, p the probability of success, and q = 1 - p.
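The mean, variance, and standard deviation formulas can be confirmed by computing the moments of the PMF directly; the values n = 12 and p = 0.25 below are illustrative:

```python
from math import comb, sqrt

n, p = 12, 0.25
q = 1 - p
pmf = [comb(n, k) * p**k * q**(n - k) for k in range(n + 1)]

# First and second central moments of X ~ B(n, p).
mean = sum(k * pk for k, pk in enumerate(pmf))
var = sum((k - mean) ** 2 * pk for k, pk in enumerate(pmf))

assert abs(mean - n * p) < 1e-9           # mean = np
assert abs(var - n * p * q) < 1e-9        # variance = npq
assert abs(sqrt(var) - sqrt(n * p * q)) < 1e-9
```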
Sum of binomials is also binomial i.e. if X ~ B(n, p) and Y ~ B(m, p) are independent binomial variables with the same probability p, then X + Y is again a binomial variable with distribution X + Y ~ B(n + m, p).
If X ~ B(n, p) and, conditional on X, Y ~ B(X, q), then Y is a simple binomial variable with distribution Y ~ B(n, pq).
The Bernoulli distribution is a special case of the binomial distribution, where n = 1. Symbolically, X ~ B (1, p) has the same meaning as X ~ Bern (p).
If an experiment has only two possible outcomes, then it is said to be a Bernoulli trial. The two outcomes are success and failure.
Any binomial distribution, B (n, p), is the distribution of the sum of n independent Bernoulli trials Bern (p), each with the same probability p.
The binomial distribution is a special case of the Poisson Binomial Distribution which is a sum of n independent non-identical Bernoulli trials Bern(pi). If X has the Poisson binomial distribution with p1 = … = pn = p then X ~ B(n, p).
A cumulative binomial probability refers to the probability that the binomial random variable falls within a specified range (e.g., is greater than or equal to a stated lower limit and less than or equal to a stated upper limit).
A negative binomial experiment is an experiment consisting of x repeated trials, each of which can result in just two possible outcomes: a success or a failure. In addition, it has the following properties:
The probability of success, denoted by p, is the same on every trial.
The trials are independent, that is, the outcome on one trial does not affect the outcome on other trials.
The experiment continues until r successes are observed, where r is specified in advance.
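Under these properties, the probability that the r-th success occurs on trial x is C(x-1, r-1) p^r q^(x-r); here is a minimal sketch with an illustrative fair-coin example (the function name is chosen here, not from the text):

```python
from math import comb

def neg_binom_pmf(x, r, p):
    """P that the r-th success occurs on trial x: C(x-1, r-1) p^r q^(x-r)."""
    return comb(x - 1, r - 1) * p**r * (1 - p)**(x - r)

# Illustrative: with a fair coin, P(2nd head occurs on toss 3)
# = C(2, 1) * 0.5^2 * 0.5 = 0.25.
assert abs(neg_binom_pmf(3, 2, 0.5) - 0.25) < 1e-12
```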
Let A1, A2, ..., An be a set of mutually exclusive and exhaustive events and E be some event associated with A1, A2, ..., An. Then the probability that E occurs is given by
P(E) = ∑(i = 1 to n) P(Ai) P(E|Ai).
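A short worked instance of the total probability formula; the two-urn numbers are hypothetical:

```python
from fractions import Fraction

# Two urns chosen with equal probability (A1, A2 are exclusive and exhaustive);
# E = "draw a white ball", with P(E|A1) = 3/5 and P(E|A2) = 1/5.
p_a = [Fraction(1, 2), Fraction(1, 2)]
p_e_given_a = [Fraction(3, 5), Fraction(1, 5)]

# P(E) = sum over i of P(Ai) P(E|Ai)
p_e = sum(pa * pe for pa, pe in zip(p_a, p_e_given_a))
assert p_e == Fraction(2, 5)
```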
If a set of n events related to a sample space is mutually independent, the events must also be pairwise independent, but the converse is not always true.
In probability problems which require the application of the total probability formula the events A1 and A2 must fulfill the following three conditions:
A1 ∩ A2 = ∅
A1 U A2 = S
A1 ⊆ S and A2 ⊆ S.
Conditions for application of Bayes formula in probability questions include:
The events A1, A2, ..., An of the sample space are exhaustive and mutually exclusive, i.e. A1 U A2 U ... U An = S
and Ai ∩ Aj = ∅, where i, j = 1, 2, ..., n and i ≠ j.
A prior event exists: within the sample space there is an event B such that P(B) > 0.
The main aim is to compute a conditional probability of the form P(Ai | B).
At least one of the following two sets of probabilities is known:
P(Ai ∩ B) for each Ai
P(Ai) and P(B/Ai) for each Ai
If in a problem some event has already happened and the probability of another event is then to be found, it is an application of Bayes' theorem. A key phrase signalling that Bayes' theorem is to be used is "is found to be".
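A worked sketch of the kind of "is found to be" problem described above; the defect-rate numbers are invented for illustration:

```python
from fractions import Fraction

# A1 = "item is defective" (1%), A2 = "item is fine" (99%);
# B = "item is found to be flagged by a test", with illustrative rates
# P(B|A1) = 95/100 and P(B|A2) = 5/100.
p_a = [Fraction(1, 100), Fraction(99, 100)]
p_b_given_a = [Fraction(95, 100), Fraction(5, 100)]

p_b = sum(pa * pb for pa, pb in zip(p_a, p_b_given_a))    # total probability
p_a1_given_b = p_a[0] * p_b_given_a[0] / p_b              # Bayes' theorem
assert p_a1_given_b == Fraction(19, 118)
```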