These are some of my notes on statistics from the Udemy Data Science Bootcamp. The Python code associated with this section is available here.

Data Distributions

Distribution

  • Shows possible values a random variable can take and how frequently they occur.

Frequency distribution

  • Discrete data values repeated with various frequencies.
  • Pre-established intervals of possible values with frequencies corresponding to the numbers of values in the intervals.

Relative frequency distribution

  • Each frequency of the frequency distribution is divided by the total number of data points in the distribution.
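
A minimal sketch of both ideas in Python, using only the standard library (the data list is made up for illustration):

```python
from collections import Counter

data = [2, 3, 3, 5, 2, 2, 4, 3, 5, 2]   # made-up discrete observations

freq = Counter(data)                     # frequency distribution
n = len(data)
rel_freq = {value: count / n for value, count in freq.items()}  # relative frequencies

print(freq)       # Counter({2: 4, 3: 3, 5: 2, 4: 1})
print(rel_freq)   # the relative frequencies sum to 1
```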

Random or Probability experiments

  • These have a well-defined set of possible outcomes.
  • A particular set of outcomes is called an event.
  • Every event has a probability.
  • Intersection and union of two or more events is possible.
  • Two events are mutually exclusive if their intersection is empty, i.e. the probability of their intersection is 0.
  • A random variable is used to describe the outcome of a random experiment in numerical form.
  • A discrete random variable has a countable number of possible values (finite, or countably infinite as with the Poisson distribution), because the random experiment has a countable number of possible outcomes.
  • A continuous random variable has values that form a continuous interval of real numbers because the random experiment has an uncountable number of possible outcomes.
  • A probability distribution shows all the probabilities of all the values of a random variable.
  • For a continuous random variable, the probability of an interval is the area under the probability distribution curve over that interval: the region bounded below by the X-axis and on the sides by the minimum and maximum values of the interval.
  • The area of the region under the entire probability distribution curve is 1.
  • The mean of a continuous random variable is the point on the X-axis at which the region under the distribution curve would balance perfectly on a fulcrum.
  • The median of a continuous random variable is the point on the X-axis at which the area under the distribution curve is split into two regions of equal area.
  • In probability theory, the expected value of a random variable is the probability-weighted average of its possible values.
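
A minimal sketch of the probability-weighted average in Python, using a fair six-sided die as a hypothetical example:

```python
# Expected value of a discrete random variable as a probability-weighted average.
pmf = {y: 1 / 6 for y in range(1, 7)}   # fair die: each outcome has probability 1/6

expected_value = sum(y * p for y, p in pmf.items())
print(expected_value)  # 3.5 -- the expected value need not be an attainable outcome
```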

Probability distributions

  • Probability function: It is a function that assigns a probability to each distinct outcome in the sample space.
  • Probability distribution: It is a collection of probabilities for each possible outcome.
$$\begin{aligned} Y &= \text{actual outcome} \\ y &= \text{one of the possible outcomes} \\ P(Y = y) &= P(y) \end{aligned}$$
  • Population and sample data:
| | Population | Sample |
| --- | --- | --- |
| Mean | $\mu$ | $\bar{x}$ |
| Variance | $\sigma^2$ | $s^2$ |
| Standard Deviation | $\sigma$ | $s$ |
  • Certain distributions share characteristics. So, they are separated into types:
    • Discrete distributions
    • Continuous distributions
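
The sample variance divides by $n - 1$ while the population variance divides by $N$. A short sketch with NumPy (assumed available; the data are made up), where `ddof` selects the divisor:

```python
import numpy as np

sample = np.array([4.0, 7.0, 6.0, 3.0, 9.0, 5.0])  # made-up sample data

x_bar = sample.mean()          # sample mean (x-bar)
s2 = sample.var(ddof=1)        # sample variance s^2 (divides by n - 1)
s = sample.std(ddof=1)         # sample standard deviation s

# ddof=0 (the default) divides by N, i.e. the population formulas (sigma^2, sigma),
# appropriate only when the array holds the entire population.
sigma2 = sample.var(ddof=0)

print(x_bar, s2, s, sigma2)
```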

Discrete distributions

  • These have a finite number of outcomes.
  • Can add individual values to determine probability of an interval.
  • Can be expressed with a table, graph, or a piece-wise function.
  • Expected values might be unattainable (e.g., the expected value of a die roll is 3.5). For integer-valued variables: $P(Y \leq y) = P(Y < y + 1)$
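
A small sketch of both points for an integer-valued variable, again using a fair die as a hypothetical example:

```python
# For a discrete variable, the probability of an interval is the sum of the
# individual probabilities of the values inside it.
pmf = {y: 1 / 6 for y in range(1, 7)}

p_2_to_4 = sum(pmf[y] for y in (2, 3, 4))            # P(2 <= Y <= 4) = 0.5
print(p_2_to_4)

# For integer-valued variables, P(Y <= y) equals P(Y < y + 1):
p_le_3 = sum(p for y, p in pmf.items() if y <= 3)
p_lt_4 = sum(p for y, p in pmf.items() if y < 4)
assert abs(p_le_3 - p_lt_4) < 1e-12
```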

Uniform Distribution

  • Notation: $Y \sim U(a, b)$ or $Y \sim U(a)$
  • All outcomes are equally likely.
  • The expected value and variance have no predictive power.
  • Example and uses:
    • Outcomes of rolling a single die.
    • Often used in shuffling algorithms due to its fairness.
  • Graph: Discrete Uniform Distribution graph
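
A quick simulation sketch with NumPy (assumed available) showing that the outcomes of a fair die, $U(1, 6)$, occur with roughly equal relative frequencies:

```python
import numpy as np

rng = np.random.default_rng(42)
rolls = rng.integers(1, 7, size=100_000)   # simulate rolls of a fair die

values, counts = np.unique(rolls, return_counts=True)
print(dict(zip(values.tolist(), (counts / rolls.size).round(3))))
# each relative frequency is close to 1/6 ≈ 0.167
```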

Bernoulli Distribution

  • Notation: $Y \sim \mathrm{Bern}(p)$ or $Y \sim B(1, p)$
  • Consists of a single trial.
  • The trial has two possible outcomes.
  • Expected value and variance:
$$\begin{aligned} E(Y) &= \sum_y y \cdot P(Y = y) = 1 \cdot p + 0 \cdot (1 - p) \\ &\boxed{E(Y) = p} \\ Var(Y) &= E\{[Y - E(Y)]^2\} = (1 - p)^2 \cdot p + (0 - p)^2 \cdot (1 - p) \\ &= p - 2p^2 + p^3 + p^2 - p^3 = p - p^2 \\ &\boxed{Var(Y) = p \cdot (1 - p)} \end{aligned}$$
  • Example and uses:
    • Guessing a single True / False question.
    • Often used when trying to determine what we expect to get out of a single trial of an experiment.
  • Graph: Discrete Bernoulli Distribution graph
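
A minimal check of these formulas with SciPy (assumed available); the success probability `p = 0.3` is made up:

```python
from scipy.stats import bernoulli

p = 0.3
Y = bernoulli(p)

print(Y.pmf(1), Y.pmf(0))      # 0.3, 0.7
print(Y.mean(), Y.var())       # E(Y) = p = 0.3, Var(Y) = p(1 - p) = 0.21
```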

Binomial Distribution

  • Notation: $Y \sim B(n, p)$
  • Sequence of identical Bernoulli events.
  • Measures the frequency of occurrence of one of the possible outcomes over $n$ trials.
  • Probability: $$P(Y = y) = \binom{n}{y} p^y (1 - p)^{n - y}$$
  • Expected value and variance: $$\begin{aligned} E(Y) &= n p \\ Var(Y) &= n p (1 - p) \end{aligned}$$
  • Example and uses:
    • Used in determining how many times to expect a heads if a coin is flipped 10 times.
    • Used to predict how likely an event is to occur over a series of trials.
  • Graph: Discrete Binomial Distribution graph
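
A short sketch with SciPy (assumed available) for the coin-flip example above, $Y \sim B(10, 0.5)$:

```python
from scipy.stats import binom

n, p = 10, 0.5                    # 10 fair coin flips (hypothetical example)
Y = binom(n, p)

print(Y.mean(), Y.var())          # E(Y) = np = 5, Var(Y) = np(1 - p) = 2.5
print(Y.pmf(5))                   # P(exactly 5 heads) ≈ 0.246
print(Y.cdf(7) - Y.cdf(2))        # P(3 <= Y <= 7), i.e. the sum of the pmf values
```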

Poisson Distribution

  • Notation: $Y \sim Po(\lambda)$
  • Used to determine the likelihood of a certain event occurring over a given interval of time or distance.
  • Measures the frequency over an interval of time or distance (only non-negative values).
  • Probability: $$P(Y = y) = \frac{\lambda^y e^{-\lambda}}{y!}$$
  • Expected value and variance:
$$\begin{aligned}
E(Y) &= \sum_{y=0}^{\infty} y \cdot \frac{e^{-\lambda}\lambda^y}{y!} = \sum_{y=1}^{\infty} \frac{e^{-\lambda}\lambda^y}{(y-1)!} \quad \because \text{ the } y = 0 \text{ term is } 0 \\
&= \lambda e^{-\lambda} \sum_{y=1}^{\infty} \frac{\lambda^{y-1}}{(y-1)!} = \lambda e^{-\lambda} \sum_{k=0}^{\infty} \frac{\lambda^k}{k!} \quad \because \text{ substituting } k = y - 1 \\
&= \lambda e^{-\lambda} e^{\lambda} \quad \because \text{ for any constant } c,\ \sum_{x=0}^{\infty} \frac{c^x}{x!} = e^c \\
&\boxed{E(Y) = \lambda} \\ \\
Var(Y) &= E(Y^2) - E(Y)^2 = E[Y(Y-1) + Y] - E(Y)^2 \\
&= E[Y(Y-1)] + E(Y) - E(Y)^2 = E[Y(Y-1)] + (\lambda - \lambda^2) \\
E[Y(Y-1)] &= \sum_{y=0}^{\infty} y(y-1)\frac{e^{-\lambda}\lambda^y}{y!} = \sum_{y=2}^{\infty} \frac{e^{-\lambda}\lambda^y}{(y-2)!} \quad \because \text{ the } y = 0, 1 \text{ terms are } 0 \\
&= \lambda^2 e^{-\lambda} \sum_{y=2}^{\infty} \frac{\lambda^{y-2}}{(y-2)!} = \lambda^2 e^{-\lambda} \sum_{k=0}^{\infty} \frac{\lambda^k}{k!} = \lambda^2 e^{-\lambda} e^{\lambda} = \lambda^2 \\
\therefore Var(Y) &= \lambda^2 + (\lambda - \lambda^2) \\
&\boxed{Var(Y) = \lambda}
\end{aligned}$$
  • Example and uses:
    • Used to determine how likely a specific outcome is, knowing how often the event usually occurs.
    • Often incorporated in marketing analysis to determine whether above average visits are out of the ordinary or not.
  • Graph: Discrete Poisson Distribution graph
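
A short sketch with SciPy (assumed available); the average rate `lam = 4` is a made-up example:

```python
from scipy.stats import poisson

lam = 4                         # hypothetical average of 4 events per interval
Y = poisson(lam)

print(Y.mean(), Y.var())        # both equal lambda = 4
print(Y.pmf(7))                 # P(exactly 7 events in the interval)
print(1 - Y.cdf(9))             # P(more than 9 events) -- an "above average" check
```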

Continuous distributions

  • These have infinitely many consecutive possible values.
  • Cannot add the individual values that make up the interval because there are infinitely many of them.
  • Can be expressed with a graph or a continuous function. Cannot be expressed with a table.
  • Integrals are used to calculate the likelihood of an interval.
  • The cumulative distribution functions are important.
  • Probability: $$\begin{aligned} P(Y = y) &= 0 \\ P(Y < y) &= P(Y \leq y) \end{aligned}$$

Normal distribution / Gaussian distribution

  • Notation: $Y \sim N(\mu, \sigma^2)$
  • Its graph is a bell-shaped, symmetric curve with thin tails.
  • Most natural events follow this distribution.
  • 68% of the observations fall within 1 standard deviation of the mean: $(\mu - \sigma, \mu + \sigma)$.
  • 95% of the observations fall within 2 standard deviations of the mean: $(\mu - 2\sigma, \mu + 2\sigma)$.
  • 99.7% of the observations fall within 3 standard deviations of the mean: $(\mu - 3\sigma, \mu + 3\sigma)$.
  • Can be standardized to use the Z-table.
  • These approximate a wide variety of random variables.
  • Distributions of sample means with large enough sample sizes can be approximated by Normal distributions.
  • The statistics computed from it (mean, variance, and related quantities) have simple, elegant forms.
  • Decisions based on normal distribution insights have a good track record.
  • Mean, median, and mode are equal.
  • It has no skew.
  • Expected value and variance:
$$\begin{aligned}
\text{Probability density function: } f(y) &= \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(y-\mu)^2}{2\sigma^2}} \\ \\
E(Y) &= \int_{-\infty}^{\infty} y \cdot \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(y-\mu)^2}{2\sigma^2}}\, dy \\
\text{Let } t &= \frac{y-\mu}{\sqrt{2}\,\sigma} \quad\Rightarrow\quad y = \mu + \sqrt{2}\,\sigma t, \quad dy = \sqrt{2}\,\sigma\, dt \\
E(Y) &= \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{\infty} (\mu + \sqrt{2}\,\sigma t)\, e^{-t^2}\, \sqrt{2}\,\sigma\, dt = \frac{1}{\sqrt{\pi}} \int_{-\infty}^{\infty} (\mu + \sqrt{2}\,\sigma t)\, e^{-t^2}\, dt \\
&= \frac{1}{\sqrt{\pi}} \left[ \mu \int_{-\infty}^{\infty} e^{-t^2}\, dt + \sqrt{2}\,\sigma \int_{-\infty}^{\infty} t\, e^{-t^2}\, dt \right] \\
&= \frac{1}{\sqrt{\pi}} \left[ \mu\sqrt{\pi} + \sqrt{2}\,\sigma \left( -\tfrac{1}{2} e^{-t^2} \right)_{-\infty}^{\infty} \right] = \frac{1}{\sqrt{\pi}} \left[ \mu\sqrt{\pi} + 0 \right] \\
&\boxed{E(Y) = \mu} \\ \\
Var(Y) &= E(Y^2) - [E(Y)]^2 = \int_{-\infty}^{\infty} y^2 \cdot \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(y-\mu)^2}{2\sigma^2}}\, dy - \mu^2 \\
&= \frac{1}{\sqrt{\pi}} \int_{-\infty}^{\infty} (\sqrt{2}\,\sigma t + \mu)^2\, e^{-t^2}\, dt - \mu^2 \quad \text{(same substitution as above)} \\
&= \frac{1}{\sqrt{\pi}} \left[ 2\sigma^2 \int_{-\infty}^{\infty} t^2 e^{-t^2}\, dt + 2\sqrt{2}\,\sigma\mu \int_{-\infty}^{\infty} t\, e^{-t^2}\, dt + \mu^2 \int_{-\infty}^{\infty} e^{-t^2}\, dt \right] - \mu^2 \\
&= \frac{1}{\sqrt{\pi}} \left[ 2\sigma^2 \int_{-\infty}^{\infty} t^2 e^{-t^2}\, dt + 0 + \mu^2\sqrt{\pi} \right] - \mu^2 = \frac{2\sigma^2}{\sqrt{\pi}} \int_{-\infty}^{\infty} t^2 e^{-t^2}\, dt \\
&= \frac{2\sigma^2}{\sqrt{\pi}} \left( \left[ -\tfrac{t}{2} e^{-t^2} \right]_{-\infty}^{\infty} + \tfrac{1}{2} \int_{-\infty}^{\infty} e^{-t^2}\, dt \right) = \frac{2\sigma^2}{\sqrt{\pi}} \cdot \tfrac{1}{2} \cdot \sqrt{\pi} \\
&\boxed{Var(Y) = \sigma^2}
\end{aligned}$$
  • Example and uses:
    • Often observed in the distribution of the sizes of animals in the wild.
    • Most biological measures are approximately normally distributed: height; the lengths of arms, legs, and nails; blood pressure; the thickness of tree bark; etc.
    • IQ tests.
    • Stock market information.
    • Heavily used in regression analysis.
  • Graph: Continuous Normal Distribution graph
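
A quick check of the 68-95-99.7 rule with SciPy (assumed available):

```python
from scipy.stats import norm

mu, sigma = 0, 1                          # any mu and sigma give the same percentages
for k in (1, 2, 3):
    p = norm.cdf(mu + k * sigma, mu, sigma) - norm.cdf(mu - k * sigma, mu, sigma)
    print(k, round(p, 4))
# 1 -> 0.6827, 2 -> 0.9545, 3 -> 0.9973
```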

Z-tables / Z-score tables

  • Transformation:
    • It is a way to alter every element of a distribution to get a new distribution.
    • Adding a number to every element of a Normal distribution moves the graph of the distribution to the right. The standard deviation remains unchanged.
    • Subtracting a number from every element of a Normal distribution moves the graph of the distribution to the left. The standard deviation remains unchanged.
    • Multiplying every element of a Normal distribution by a constant greater than 1 stretches the graph in width (the standard deviation is scaled up); multiplying by a constant between 0 and 1 shrinks it.
    • Dividing every element by a constant greater than 1 shrinks the graph in width (the standard deviation is scaled down). In both cases the mean is scaled by the same constant, so it is unchanged only if it is 0.
    • Controlling for the standard deviation:
      • Decreasing the mean moves the graph to the left. This is the same effect as subtracting a number from every element.
      • Increasing the mean moves the graph to the right. This is the same effect as adding a number to every element.
    • Controlling for the mean:
      • Decreasing the standard deviation makes the graph taller and narrower, increasing the amount of data in the middle and thinning the tails. This is the effect of dividing every element by a constant greater than 1.
      • Increasing the standard deviation makes the graph shorter and wider, decreasing the amount of data in the middle and fattening the tails. This is the effect of multiplying every element by a constant greater than 1.
  • Standardizing:
    • A special transformation that converts $E(X)$ to $0$ and $Var(X)$ to $1$.
    • After standardizing, a Normal distribution is called a Standard Normal distribution.
    • Subtracting the $\mu$ of the original distribution from every element changes the mean to $0$, moving the centre of the graph to the origin.
    • Dividing every resulting element by the $\sigma$ of the original distribution changes the standard deviation to $1$, standardizing the peak and tails of the graph.
  • Reasons for standardizing:
    • To compare different normally distributed datasets.
    • To detect normality.
    • To detect outliers.
    • To create confidence intervals.
    • To test hypotheses.
    • To perform regression analysis.
  • A Z-score table is the table of the Cumulative Distribution function of a Normal distribution after it has been standardized.
  • Notation of a Standard Normal distribution: $Z \sim N(0, 1)$
  • Z-value: $z = \frac{y - \mu}{\sigma}$
  • In the z-value above, subtracting in the numerator transforms $E(X)$ to $0$ and dividing by the denominator transforms $STD(X)$ to $1$.

Z-table images: negative z-values and positive z-values.
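
In practice the Z-table lookup can be reproduced with SciPy's normal CDF (assumed available); the distribution parameters below are made up:

```python
from scipy.stats import norm

mu, sigma = 1000, 100          # hypothetical mean and standard deviation
y = 1150

z = (y - mu) / sigma           # standardize: z = (y - mu) / sigma
print(z)                       # 1.5
print(norm.cdf(z))             # P(Z <= 1.5) ≈ 0.9332, the value a Z-table gives
print(norm.cdf(y, mu, sigma))  # same probability without standardizing first
```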

Students’ T Distribution

  • Notation: $Y \sim t(k)$, where $k$ (also written $v$) $=$ degrees of freedom
  • Represents a small sample size approximation of a Normal Distribution.
  • Its graph is a bell-shaped, symmetric curve with fat tails and a low peak. This accounts for the greater uncertainty caused by the small sample size.
  • It accounts for extreme values better than a Normal Distribution.
  • Expected value and variance (for $k > 2$), and the t-statistic used for testing: $$\begin{aligned} E(Y) &= \mu \\ Var(Y) &= s^2 \cdot \frac{k}{k - 2} \\ t_{v, \alpha} &= \frac{\bar{x} - \mu}{s / \sqrt{n}} \end{aligned}$$
  • Example and uses:
    • Often used in analysis when examining a small sample of data that usually follows a Normal Distribution.
    • Frequently used when conducting statistical analysis.
    • Used for hypothesis testing with limited data.
    • Its table of known CDF values is the t-table.
  • Graph: Continuous Students' T-Distribution graph
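
A small sketch with SciPy (assumed available) showing the fatter tails: the probability of exceeding 2 is larger for the t-distribution than for the standard normal, and it shrinks toward the normal value as the degrees of freedom grow:

```python
from scipy.stats import norm, t

print(1 - norm.cdf(2))              # ≈ 0.023 for the standard normal
for k in (5, 10, 30):
    print(k, 1 - t.cdf(2, df=k))    # ≈ 0.051, 0.037, 0.027 -- approaches the normal
```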

Sampling Distribution

  • Notation: $Y \sim N(\mu, \frac{\sigma^2}{n})$
  • A sampling distribution is the distribution of a measure of central tendency (usually the mean) computed from many samples drawn from the same population.
Central Limit Theorem
  • No matter the underlying distribution, the sampling distribution of the means approximates a normal distribution.
  • The more samples that are taken, the closer the average of the sample means gets to the population mean.
  • The more samples that are taken, the closer the sampling distribution gets to a Normal distribution.
  • The bigger each sample is, the closer the approximation to the Normal distribution; a sample size of 30 or more is usually considered sufficient.
  • This theorem allows us to perform tests, solve problems, and make inferences using the Normal distribution, even when the population is not normally distributed.
  • The mean of the sampling distribution is very close to the mean of the original distribution.
  • The variance of the sampling distribution is $n$ times smaller than the variance of the original distribution, where $n$ is the size of the samples.
  • Used whenever we have the sum or average of many variables.
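
A simulation sketch of the Central Limit Theorem with NumPy (assumed available), drawing samples from a clearly non-normal (exponential) population:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, n, n_samples = 1.0, 50, 10_000

# Population: exponential (not normal), with mean 1 and variance 1.
samples = rng.exponential(scale=1 / lam, size=(n_samples, n))
sample_means = samples.mean(axis=1)

print(sample_means.mean())    # ≈ population mean (1.0)
print(sample_means.var())     # ≈ sigma^2 / n = 1 / 50 = 0.02
# A histogram of sample_means would look approximately normal.
```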

Chi-squared Distribution

  • Notation: $Y \sim \chi^2(k)$, where $k =$ degrees of freedom
  • It arises as the distribution of a sum of squares of $k$ independent standard normal variables (for $k = 1$, it is the square of a single standard normal variable).
  • Its graph is asymmetric and skewed to the right. It has a fat tail to the right and no tail to the left.
  • Expected value and variance: $$\begin{aligned} E(Y) &= k \\ Var(Y) &= 2k \end{aligned}$$
  • Example and uses:
    • Often used to test goodness of fit.
    • Often used for hypothesis testing.
    • Often used when computing confidence intervals.
    • Its cumulative distribution function has a table of known values, the $\chi^2$-table.
  • Graph: Continuous Chi-Squared Distribution graph
  • Chi-Square Probability Table: Chi-Squared Table.png
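
A short sketch with SciPy (assumed available): checking $E(Y) = k$ and $Var(Y) = 2k$, and running a goodness-of-fit test on made-up die counts:

```python
from scipy.stats import chi2, chisquare

k = 5
print(chi2(k).mean(), chi2(k).var())     # E(Y) = k = 5, Var(Y) = 2k = 10

# Goodness-of-fit: observed die counts vs. the uniform expectation (made-up data).
observed = [18, 22, 16, 25, 20, 19]
result = chisquare(observed)             # expected frequencies default to equal
print(result.statistic, result.pvalue)
```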

Exponential Distribution

  • Notation: $Y \sim Exp(\lambda)$, where $\lambda$ is the rate (often called the scale) parameter
  • Usually observed in events that significantly change early on.
  • The probability density function decays toward 0 and the cumulative distribution function levels off (approaching 1) after a certain point.
  • The scale parameter determines the rate at which the Probability and Cumulative distribution functions plateau.
  • The scale parameter also determines the spread of the graph.
  • The natural logarithm is often used to transform the values of such distributions, since a table of known values is not available; the transformed values are much closer to a Normal distribution.
  • Expected value and variance: $$\begin{aligned} E(Y) &= \frac{1}{\lambda} \\ Var(Y) &= \frac{1}{\lambda^2} \end{aligned}$$
  • Example and uses:
    • Used with dynamically changing values, like online website traffic or radioactive decay.
  • Graph: Continuous Exponential Distribution graph
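
A minimal sketch with SciPy (assumed available); note that SciPy parameterizes the exponential by the scale $1/\lambda$, and the rate value below is made up:

```python
from scipy.stats import expon

lam = 0.5                                # hypothetical rate parameter
Y = expon(scale=1 / lam)                 # SciPy uses scale = 1 / lambda

print(Y.mean(), Y.var())                 # 1/lambda = 2, 1/lambda^2 = 4
print(1 - Y.cdf(5))                      # P(Y > 5), e.g. waiting more than 5 units
```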

Logistic Distribution

  • Notation: $Y \sim Logistic(\mu, s)$, where $s =$ scale and $\mu =$ location (mean)
  • Observed when trying to determine how continuous variable inputs can affect the probability of a binary outcome.
  • The Cumulative Distribution Function picks up near the mean.
  • The smaller the scale parameter, the quicker the CDF reaches values close to $1$.
  • Expected value and variance: $$\begin{aligned} E(Y) &= \mu \\ Var(Y) &= \frac{s^2 \pi^2}{3} \end{aligned}$$
  • Example and uses:
    • Often used in sports to anticipate how a player’s performance can determine the outcome of the match.
  • Graph: Continuous Logistic Distribution graph

Measures of Relationship between variables

Coefficient of Variation

  • Also known as relative standard deviation.
  • Comparing the standard deviations of two datasets measured on different scales is not meaningful, but comparing their coefficients of variation is.
  • Coefficient of Variation: $$\begin{aligned} C_V &= \frac{\sigma}{\mu} \\ \widehat{C_V} &= \frac{s}{\bar{x}} \end{aligned}$$

Covariance

  • Used to determine relation between two variables.
  • Can be positive, zero, or negative.
  • Covariance: $$\begin{aligned} \sigma_{xy} &= \frac{\sum_{i=1}^{N}(x_i - \mu_x)(y_i - \mu_y)}{N} \\ s_{xy} &= \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{n - 1} \end{aligned}$$

Linear Correlation Coefficient

  • Correlation adjusts the covariance so that the relation between two variables becomes easy and intuitive to interpret.
  • Correlation does not imply causation.
  • Correlation of 1 means perfect positive correlation: an increase in one variable is accompanied by a proportional increase in the other.
  • Correlation of 0 means there is no linear relationship between the two variables (uncorrelated does not by itself imply independent).
  • Correlation of -1 means perfect negative correlation: an increase in one variable is accompanied by a proportional decrease in the other.
  • Linear Correlation Coefficient: $$\begin{aligned} \rho_{xy} &= \frac{\sigma_{xy}}{\sigma_x \cdot \sigma_y} \\ r_{xy} &= \frac{s_{xy}}{s_x \cdot s_y} \end{aligned}$$
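
A short sketch computing the sample covariance and the correlation coefficient with NumPy (assumed available); the paired data are made up:

```python
import numpy as np

# Made-up paired observations (e.g. hours studied vs. exam score).
x = np.array([2, 4, 5, 7, 9], dtype=float)
y = np.array([50, 55, 60, 70, 80], dtype=float)

s_xy = np.cov(x, y)[0, 1]          # sample covariance (divides by n - 1)
r = np.corrcoef(x, y)[0, 1]        # linear correlation coefficient

print(s_xy, r)
print(s_xy / (x.std(ddof=1) * y.std(ddof=1)))   # same as r: s_xy / (s_x * s_y)
```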

Estimators and Estimates

Estimators

  • An estimator is a mathematical function that approximates a population parameter depending only on sample information.
  • Examples:
| Term | Estimator | Parameter |
| --- | --- | --- |
| Mean | $\bar{x}$ | $\mu$ |
| Variance | $s^2$ | $\sigma^2$ |
| Correlation | $r$ | $\rho$ |
  • Important properties:
    • Bias:
      • The expected value of an unbiased estimator is the population parameter; the bias in this case is $0$.
      • If the expected value of an estimator is $(\text{parameter} + b)$, then the bias is $b$.
    • Efficiency:
      • The most efficient estimator is the one with the smallest variance.
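
A simulation sketch with NumPy (assumed available) illustrating bias: the variance estimator that divides by $n$ underestimates $\sigma^2$ on average, while dividing by $n - 1$ does not:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma2, n = 0.0, 4.0, 10

# Draw many samples and compare the two variance estimators.
samples = rng.normal(mu, np.sqrt(sigma2), size=(100_000, n))

biased = samples.var(axis=1, ddof=0).mean()     # divides by n   -> E ≈ sigma^2 (n-1)/n
unbiased = samples.var(axis=1, ddof=1).mean()   # divides by n-1 -> E ≈ sigma^2
print(biased, unbiased)                         # ≈ 3.6 vs ≈ 4.0
```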

Estimates

  • This is the output that you get from an estimator.

Types of estimates

Point estimate

  • Single value.
  • Examples: 1, 5, 122.67, 0.32

Confidence Intervals

  • An interval within which we are confident (with a certain percentage of confidence) that the population parameter will fall.
  • The confidence interval is built around the point estimate.
  • Confidence intervals are more informative than point estimates because they quantify the uncertainty around the estimate.
  • If $ME$ is the margin of error, then the confidence interval is given by the following formula: $$[\bar{x} - ME,\ \bar{x} + ME]$$
  • Examples: (1, 5), (12, 33), (221.78, 745.66), (-0.71, 0.11)
Level of confidence:
  • This is represented by $(1 - \alpha)$.
  • We are $(1 - \alpha) \times 100\%$ confident that the population parameter will fall within the specified confidence interval.
  • Common values of $\alpha$:
    • 0.01
    • 0.05
    • 0.1
Margin of Error
  • Formula: $$\begin{aligned} ME &= \text{Reliability Factor} \times \frac{\text{Standard Deviation}}{\sqrt{\text{Sample Size}}} \\ &= z_{\frac{\alpha}{2}} \times \frac{\sigma}{\sqrt{n}} \\ &= t_{v, \frac{\alpha}{2}} \times \frac{s}{\sqrt{n}} \end{aligned}$$
Effect on Confidence Interval
| Term (increasing) | Effect on width of CI |
| --- | --- |
| $(1 - \alpha) \uparrow$ | $\uparrow$ |
| $\sigma \uparrow$ | $\uparrow$ |
| $n \uparrow$ | $\downarrow$ |
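
A small sketch with NumPy and SciPy (assumed available) computing both versions of the margin of error for a made-up sample; the "known" $\sigma$ is assumed purely for illustration:

```python
import numpy as np
from scipy.stats import norm, t

data = np.array([102.0, 98.5, 101.2, 97.8, 100.4, 99.1, 103.3, 98.9])  # made-up sample
n, x_bar, s = data.size, data.mean(), data.std(ddof=1)
alpha = 0.05

# Known population sigma -> z-based interval (sigma assumed for illustration).
sigma = 2.0
me_z = norm.ppf(1 - alpha / 2) * sigma / np.sqrt(n)
print(x_bar - me_z, x_bar + me_z)

# Unknown sigma -> t-based interval with n - 1 degrees of freedom.
me_t = t.ppf(1 - alpha / 2, df=n - 1) * s / np.sqrt(n)
print(x_bar - me_t, x_bar + me_t)
```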

Formulas for calculating test statistics and confidence intervals

| No. of Populations | Population variance | Samples | Statistic | Variance | Test Statistic Formula | CI Formula |
| --- | --- | --- | --- | --- | --- | --- |
| One | Known | - | $z$ | $\sigma^2$ | $Z = \frac{\bar{x} - \mu_0}{\sigma / \sqrt{n}}$ | $\bar{x} \pm z_{\frac{\alpha}{2}} \cdot \frac{\sigma}{\sqrt{n}}$ |
| One | Unknown | - | $t$ | $s^2$ | $T = \frac{\bar{x} - \mu_0}{s / \sqrt{n}}$ | $\bar{x} \pm t_{n - 1, \frac{\alpha}{2}} \cdot \frac{s}{\sqrt{n}}$ |
| Two | - | Dependent | $t$ | $s_{difference}^2$ | $T = \frac{\bar{d} - \mu_0}{s_d / \sqrt{n}}$ | $\bar{d} \pm t_{n - 1, \frac{\alpha}{2}} \cdot \frac{s_d}{\sqrt{n}}$ |
| Two | Known | Independent | $z$ | $\sigma_x^2, \sigma_y^2$ | $Z = \frac{(\bar{x} - \bar{y}) - \mu_0}{\sqrt{\frac{\sigma_x^2}{n_x} + \frac{\sigma_y^2}{n_y}}}$ | $(\bar{x} - \bar{y}) \pm z_{\frac{\alpha}{2}} \cdot \sqrt{\frac{\sigma_x^2}{n_x} + \frac{\sigma_y^2}{n_y}}$ |
| Two | Unknown, assumed equal | Independent | $t$ | $s_p^2 = \frac{(n_x - 1) s_x^2 + (n_y - 1) s_y^2}{n_x + n_y - 2}$ | $T = \frac{(\bar{x} - \bar{y}) - \mu_0}{\sqrt{\frac{s_p^2}{n_x} + \frac{s_p^2}{n_y}}}$ | $(\bar{x} - \bar{y}) \pm t_{n_x + n_y - 2, \frac{\alpha}{2}} \cdot \sqrt{\frac{s_p^2}{n_x} + \frac{s_p^2}{n_y}}$ |
| Two | Unknown, assumed different | Independent | $t$ | $s_x^2, s_y^2$ | - | $(\bar{x} - \bar{y}) \pm t_{v, \frac{\alpha}{2}} \cdot \sqrt{\frac{s_x^2}{n_x} + \frac{s_y^2}{n_y}}$ |

Scientific Method

It is a procedure that consists of systematic observation, measurement, experiment, and formulation, testing, and modification of hypotheses.

Steps in Data-Driven Decision Making

  1. Formulate a hypothesis.
  2. Find the right test.
  3. Execute the test.
  4. Make a decision.

Hypotheses

  • A hypothesis is an idea that can be tested.
  • It is a supposition or proposed explanation made on the basis of limited evidence as a starting point for further investigation.

Null Hypothesis

  • Notation: $H_0$
  • It is the hypothesis to be tested.
  • It is the status-quo: The belief that we are contesting with our test.
  • It is similar to the notion: Innocent until proven guilty.
  • In statistics, it is the statement that we are trying to reject.

Alternative Hypothesis

  • Notation: $H_1$ or $H_A$
  • It is the change or innovation that is contesting the status-quo.
  • The act of performing a test shows that we have doubts about the truthfulness of the null hypothesis.
  • In general, the researcher’s opinion is contained in the alternative hypothesis.

Hypotheses Testing

  • During testing, we can either accept the null hypothesis or reject the null hypothesis.
  • In a two-tailed test:
    • Rejection region: The tails of the distribution show when we reject the null hypothesis.
    • Acceptance region: Everything that remains in the middle shows when we accept the null hypothesis.

Two-tailed Hypothesis Testing

Level of Significance

  • Notation: $\alpha$
  • It is the probability of rejecting a null hypothesis that is actually true, i.e. the probability of making a Type I error.
  • Common significance levels: 0.10, 0.05, 0.01

Types of tests

Two-sided (two-tailed) test

It is used when the null hypothesis contains an equality ($=$) or a not-equal sign ($\neq$). Two-Sided Test

One-sided (one-tailed) test

It is used when the null hypothesis instead contains an ordering sign ($<$, $>$, $\le$, $\ge$). One-Sided Test

Types of errors while testing

Type I errors

  • These errors are also called False Positive errors.
  • This occurs when you reject a true null hypothesis.
  • The probability of making this error is $\alpha$, the level of significance.
  • Since you choose the alpha, the responsibility of making this error lies solely with you.

Type II errors

  • These errors are also called False Negative errors.
  • This occurs when you accept a false null hypothesis.
  • The probability of making this error is $\beta$.
  • Beta depends mainly on the sample size and the population variance. So, if your topic is hard to test due to difficulty in sampling or high variability of the data, you are more likely to make this type of error.
  • So, this type of error is not your fault.

Power of the test

  • The goal of a test is to reject a false null hypothesis.
  • Its probability is denoted by $1 - \beta$ and is called the power of the test.

Rejecting the Null hypothesis

  • The law in most countries states that a person is “innocent until proven guilty”.
  • It comes from the Latin phrase: Ei incumbit probatio, qui dicit, non qui negat; cum per rerum naturam factum negantis probatio nulla sit.
  • This translates to: The proof lies upon him who affirms, not upon him who denies; since, by the nature of things, he who denies a fact cannot produce any proof.
  • So, here, the null hypothesis is that “he is innocent”.
  • If we reject the null hypothesis, then we are saying that the person is guilty. But if the person is really innocent, then we have committed a Type I error.
  • If we accept the null hypothesis, then we are saying that the person is innocent. But if the person is really guilty, then we have committed a Type II error.
  • Example $H_0$: The person is innocent.
| $H_0 \diagdown$ The truth | $H_0$ is true | $H_0$ is false |
| --- | --- | --- |
| Accept $H_0$ | $\checkmark$ | Type II error (False Negative) |
| Reject $H_0$ | Type I error (False Positive) | $\checkmark$ |

p-value

  • It is the smallest level of significance at which we can reject the null hypothesis, given the observed sample statistic.

Notable p-values

  • 0.000: A p-value reported as 0.000 indicates that we reject the null hypothesis at every common significance level (the actual p-value is simply smaller than 0.0005).
  • 0.05:
    • Often called the cut-off line: we accept the null hypothesis if our p-value is higher than $0.05$; otherwise, we reject the null hypothesis.
    • This is equivalent to testing at a $5\%$ significance level.

Decision Rules

  • Reject the null hypothesis if:
    • |test statistic| > |critical value|
    • The p-value is less than some significance level such as $0.05$.

Formulae to calculate p-value

  • One-tailed test:
    • Used when the null hypothesis includes a $<$, $\le$, $>$, or $\ge$ sign.
    • Formula: 1 - cdf(test statistic)
  • Two-tailed test:
    • Used when the null hypothesis includes an $=$ or $\neq$ sign.
    • Formula: 2 x (1 - cdf(|test statistic|))
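
A minimal sketch of both formulas with SciPy (assumed available), using the standard normal CDF and a made-up test statistic:

```python
from scipy.stats import norm

z = 2.1                                     # hypothetical test statistic

p_one_tailed = 1 - norm.cdf(abs(z))         # ≈ 0.0179
p_two_tailed = 2 * (1 - norm.cdf(abs(z)))   # ≈ 0.0357

print(p_one_tailed, p_two_tailed)
# Reject H0 at the 5% level when the p-value is below 0.05.
```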

