MTH-361A | Spring 2026 | University of Portland
A sequence of r.v.s \(X_1, X_2, \cdots, X_n\) is independent and identically distributed (i.i.d.) if:
Examples:
Why is i.i.d. Important?
A sequence of multiple Bernoulli trials consists of i.i.d. Bernoulli r.v.s, where each follows a Bernoulli distribution with “success” probability \(p\).
\(\star\) Trials are independent (i.e., one outcome does not affect the next).
A geometric r.v. is a discrete r.v. that represents the number of Bernoulli trials until the first “success” where each trial is independent, with a fixed “success” probability \(p\): \[X \sim \text{Geom}(p)\]
Sample Space:
\[ \begin{aligned} 1 & \longrightarrow 0 \text{ failures before success} \\ 0,1 & \longrightarrow 1 \text{ failure before success} \\ 0,0,1 & \longrightarrow 2 \text{ failures before success} \\ & \vdots \\ 0,0,0,\cdots,1 & \longrightarrow k \text{ failures before success} \\ \end{aligned} \]
Probabilities:
\[ \begin{aligned} 1 & \longrightarrow (1-p)^0 p \\ 0,1 & \longrightarrow (1-p)^1 p \\ 0,0,1 & \longrightarrow (1-p)^2 p \\ & \vdots \\ 0,0,0,\cdots,1 & \longrightarrow (1-p)^k p \\ \end{aligned} \]
\(\star\) The geometric r.v. counts the number of “failures” before the first “success”; equivalently, it can be viewed as counting the number of trials up to and including the first “success.”
The geometric r.v. \(X \sim \text{Geom}(p)\) has infinitely many possible outcomes (a countably infinite sample space), where \(p\) is the “success” probability.
The PMF of the geometric r.v. can be written in two ways: counting the number of “failures” before the first “success,” \[P(X=k) = (1-p)^k p, \ k = 0,1,2,\cdots\] or counting the number of trials up to and including the first “success,” \[P(X=k) = (1-p)^{k-1} p, \ k = 1,2,3,\cdots\]
\(\star\) The geometric r.v. models a situation where samples are taken with replacement, and the number of failures until the first success is counted.
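The two PMF conventions above can be checked directly in Python. A minimal sketch (the function names are mine, chosen for illustration): \(k\) failures before the first success is the same event as the first success occurring on trial \(k+1\).

```python
def geom_pmf_failures(k: int, p: float) -> float:
    """P(X = k): probability of exactly k 'failures' before the first 'success'."""
    return (1 - p) ** k * p

def geom_pmf_trials(k: int, p: float) -> float:
    """P(X = k): probability that the first 'success' occurs on trial k."""
    return (1 - p) ** (k - 1) * p

# The two conventions describe the same experiment, shifted by one:
# k failures before the first success <=> first success on trial k + 1.
p = 0.5
print(geom_pmf_failures(2, p))  # (1/2)^2 * (1/2) = 0.125
print(geom_pmf_trials(3, p))    # same event in the trials convention: 0.125
```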
What is the probability that the first “success” occurs on the 6th trial, given \(p=\frac{1}{2}\)? \[P(X=5) = \left(1-\frac{1}{2}\right)^5 \left(\frac{1}{2}\right) \approx 0.016\] because there are 5 “failures” before the “success” on the 6th trial.
What is the probability that the first “success” occurs before the 6th trial, given \(p=\frac{1}{2}\)? \[ \begin{aligned} P(X \le 5) & = \sum_{i=0}^5 P(X = i) \\ & = \sum_{i=0}^5 \left(1-\frac{1}{2}\right)^{i} \left(\frac{1}{2}\right) \\ P(X \le 5) & \approx 0.984 \\ \end{aligned} \] because we need to count five or fewer “failures” before a “success” occurs.
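Both of the worked answers above follow from summing the geometric PMF directly; a short Python check, using the failures-before-success convention:

```python
p = 0.5

# Exactly 5 "failures" before the first "success" (success on trial 6)
p_exact = (1 - p) ** 5 * p

# At most 5 "failures", i.e., the first "success" occurs within 6 trials
p_at_most = sum((1 - p) ** i * p for i in range(6))

print(round(p_exact, 3))    # 0.016
print(round(p_at_most, 3))  # 0.984
```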
The Geometric R.V. models key scenarios used in:
\(\star\) It is the baseline of modeling the number of independent Bernoulli trials needed to observe the first “success”.
Suppose a scientist is estimating the number of contacts a person can have before they get infected with a contagious disease. Each contact is independent and each person encountered is unique.
Define the r.v.:
\[ X = \text{the number of contacts before infection} \]
Assume the probability of infection is \(p = 0.01\).
Then, \[X \sim \text{Geom}(0.01).\]
\(\star\) This is a Geometric r.v. because it is counting the number of contacts without infection (“failures”) before a contact with infection (“success”) under multiple independent contacts (trials).
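One way to see what \(X \sim \text{Geom}(0.01)\) implies is to simulate the contact process. A minimal sketch (the function name `contacts_before_infection` is mine): the average number of uninfected contacts should be near \((1-p)/p = 99\).

```python
import random

random.seed(0)
p = 0.01  # probability of infection on each independent contact

def contacts_before_infection() -> int:
    """Simulate independent contacts until the first infection; count 'failures'."""
    k = 0
    while random.random() >= p:  # contact without infection
        k += 1
    return k

samples = [contacts_before_infection() for _ in range(100_000)]
print(sum(samples) / len(samples))  # close to (1 - p) / p = 99
```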
A binomial r.v. is a discrete r.v. representing the number of “successes” in \(n\) independent Bernoulli trials, each with “success” probability \(p\): \[X \sim \text{Binom}(n,p)\]
Sample Space:
Suppose \(n = 3\).
\[ \begin{aligned} 0,0,0 & \longrightarrow 3 \text{ failures and } 0 \text{ successes} \\ 0,0,1 & \longrightarrow 2 \text{ failures and } 1 \text{ success} \\ 0,1,0 & \longrightarrow 2 \text{ failures and } 1 \text{ success} \\ 0,1,1 & \longrightarrow 1 \text{ failure and } 2 \text{ successes} \\ 1,0,0 & \longrightarrow 2 \text{ failures and } 1 \text{ success} \\ 1,0,1 & \longrightarrow 1 \text{ failure and } 2 \text{ successes} \\ 1,1,0 & \longrightarrow 1 \text{ failure and } 2 \text{ successes} \\ 1,1,1 & \longrightarrow 0 \text{ failures and } 3 \text{ successes} \\ \end{aligned} \]
Probabilities:
Suppose \(n = 3\). \[ \begin{aligned} 0,0,0 & \longrightarrow (1-p)^3 p^0 \\ 0,0,1 & \longrightarrow (1-p)^2 p^1 \\ 0,1,0 & \longrightarrow (1-p)^2 p^1 \\ 1,0,0 & \longrightarrow (1-p)^2 p^1 \\ 0,1,1 & \longrightarrow (1-p)^1 p^2 \\ 1,0,1 & \longrightarrow (1-p)^1 p^2 \\ 1,1,0 & \longrightarrow (1-p)^1 p^2 \\ 1,1,1 & \longrightarrow (1-p)^0 p^3 \end{aligned} \]
\(\star\) The binomial r.v. counts the number of “successes” in \(n\) independent Bernoulli trials, where each trial has a “success” probability \(p\).
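The \(n = 3\) enumeration above can be reproduced by brute force: list all \(2^3\) sequences, group them by their number of “successes,” and sum the sequence probabilities. A minimal Python sketch, using an arbitrary \(p = 0.3\) for illustration:

```python
from itertools import product

p = 0.3  # any fixed "success" probability, chosen for illustration
n = 3
totals: dict[int, float] = {}

for outcome in product([0, 1], repeat=n):
    k = sum(outcome)                      # number of "successes" in this sequence
    prob = p ** k * (1 - p) ** (n - k)    # probability of this exact sequence
    totals[k] = totals.get(k, 0.0) + prob

# Each count k aggregates C(3, k) equally likely sequences.
print(totals)
print(sum(totals.values()))  # total probability over the sample space is 1
```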
Permutations
An arrangement of objects in a specific order. From \(n\) objects, we pick \(k\) objects to arrange; the number of permutations is given by \[_n P_k = \frac{n!}{(n-k)!}.\]
Combinations
A selection of objects where order does not matter. From \(n\) objects, we pick \(k\) objects; the number of combinations is given by \[_n C_k = \binom{n}{k} = \frac{n!}{k!(n-k)!}.\]
The binomial coefficient, denoted \(\binom{n}{k}\), represents the number of ways to choose \(k\) objects from a set of \(n\) objects without regard to order. It is given by the formula: \[\binom{n}{k} = \frac{n!}{k!(n-k)!}\]
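Python's standard library computes both counts directly, which makes the distinction easy to check with small numbers (here \(n=5\), \(k=2\), chosen for illustration):

```python
import math

n, k = 5, 2
n_perm = math.perm(n, k)  # ordered arrangements: n! / (n-k)!
n_comb = math.comb(n, k)  # unordered selections: n! / (k! (n-k)!)

print(n_perm, n_comb)  # 20 10

# Each combination of k objects can be ordered in k! ways,
# so permutations = combinations * k!.
print(n_perm == n_comb * math.factorial(k))  # True
```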
Expanding binomial expressions using the Binomial Theorem: \[(x+y)^n = \sum_{k=0}^n \binom{n}{k} y^k x^{n-k}\]
If we let \(x=1-p\) and \(y=p\) (Bernoulli PMF), then \[(1-p+p)^n = \sum_{k=0}^n \binom{n}{k} p^k (1-p)^{n-k} = 1.\]
\(\star\) Since \(p\) is the “success” probability and the Binomial Theorem reduces to \(1\), then this satisfies the probability axioms.
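The Binomial Theorem identity above can be verified numerically: the expanded sum matches \((x+y)^n\) for any \(x, y\), and with \(x = 1-p\), \(y = p\) it gives total probability \(1\). A short check with arbitrary illustrative values:

```python
import math

x, y, n = 0.7, 0.3, 8  # arbitrary values to check the identity numerically

lhs = (x + y) ** n
rhs = sum(math.comb(n, k) * y ** k * x ** (n - k) for k in range(n + 1))
print(abs(lhs - rhs) < 1e-12)  # True: the expansion matches

# With x = 1 - p and y = p, the right side is the sum of all binomial
# probabilities, so it equals (1 - p + p)^n = 1.
```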
The binomial r.v. \(X \sim \text{Binom}(n,p)\) has finite possible outcomes with PMF given by \[P(X=k) = \binom{n}{k} p^k (1-p)^{n-k}, \ k = 0,1,2, \cdots, n\] where \(p\) is the “success” probability. The term \(\binom{n}{k} = \frac{n!}{k! (n-k)!}\) is the binomial coefficient.
\(\star\) The binomial r.v. models a situation where samples are taken with replacement, and the number of successes is counted within a finite number of trials.
What is the probability of getting 4 “successes” in 10 trials with \(p=\frac{1}{2}\)? \[P(X=4) =\binom{10}{4} \left(\frac{1}{2}\right)^4 \left(1-\frac{1}{2}\right)^{10-4} \approx 0.205\]
What is the probability of getting at most 4 “successes” in 10 trials with \(p=\frac{1}{2}\)? \[ \begin{aligned} P(X \le 4) & = \sum_{i=0}^4 P(X = i) \\ & = \sum_{i=0}^4 \binom{10}{i} \left(\frac{1}{2}\right)^i \left(1-\frac{1}{2}\right)^{10-i} \\ P(X \le 4) & \approx 0.377 \\ \end{aligned} \] because we need to count four or fewer “successes” in 10 trials.
The Binomial R.V. models key scenarios used in:
\(\star\) It is the baseline for modeling the number of “successes” in a fixed number of independent trials.
Suppose a scientist is estimating the number of infected individuals within a population. They take a random sample of \(200\) individuals and count how many have the infection. Each individual is sampled independently and is unique.
Define the r.v.:
\[ X = \text{the number of infected in a sample of 200 individuals} \]
Assume that the probability of infection is known to be \(p = 0.01\).
Then, \[X \sim \text{Binom}(200,0.01).\]
\(\star\) This is a Binomial r.v. because it is counting the number of infected (“success”) in a fixed number of individuals (trials).
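With the model \(X \sim \text{Binom}(200, 0.01)\), a few summary quantities follow directly from the PMF. A minimal Python sketch (the helper name `binom_pmf` is mine):

```python
import math

n, p = 200, 0.01

def binom_pmf(k: int) -> float:
    """P(X = k) for X ~ Binom(n, p)."""
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

print(round(n * p, 2))          # expected number infected: n * p = 2.0
print(round(binom_pmf(0), 3))   # P(X = 0): no one in the sample is infected, ~0.134
print(round(sum(binom_pmf(k) for k in range(6)), 3))  # P(X <= 5) ~ 0.984
```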