The Normal distribution is ubiquitous in statistics, partly because of the central limit theorem, which states that appropriately standardized sums of i.i.d. random variables with finite variance converge in distribution to a Normal. Linear transformations of Normal random variables are again Normal. If you are taking an intro stats course, you'll likely use the Normal distribution for Z-tests and in simple linear regression. Under regularity conditions, maximum likelihood estimators are asymptotically Normal. The Normal distribution is also called the Gaussian distribution.
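The central limit theorem can be illustrated with a short simulation in base R. This is an informal sketch, not part of the package API: standardized sums of i.i.d. Uniform(0, 1) draws look approximately standard Normal for large n.

```r
# Sketch of the central limit theorem: standardize sums of i.i.d.
# Uniform(0, 1) draws, which have mean n/2 and variance n/12.
set.seed(42)
n <- 1000      # summands per sum
reps <- 5000   # number of sums
sums <- replicate(reps, sum(runif(n)))
z <- (sums - n / 2) / sqrt(n / 12)
mean(z)  # close to 0
sd(z)    # close to 1
```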

dist_normal(mu = 0, sigma = 1, mean = mu, sd = sigma)

## Arguments

mu, mean

The mean (location parameter) of the distribution. Can be any real number.

sigma, sd

The standard deviation (scale parameter) of the distribution. Can be any positive number. If you would like a Normal distribution with variance $$\sigma^2$$, be sure to take the square root; passing the variance in place of the standard deviation is a common source of errors.
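For example, to construct a Normal distribution with variance 4, pass its square root as `sigma` (a brief sketch using the package's `variance()` generic):

```r
library(distributional)

# A Normal with variance 4 has standard deviation sqrt(4) = 2.
d <- dist_normal(mu = 0, sigma = sqrt(4))
variance(d)  # 4
```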

## Details

We recommend reading this documentation on https://pkg.mitchelloharawild.com/distributional/, where the math will render nicely.

In the following, let $$X$$ be a Normal random variable with mean mu = $$\mu$$ and standard deviation sigma = $$\sigma$$.

Support: $$\mathbb{R}$$, the set of all real numbers

Mean: $$\mu$$

Variance: $$\sigma^2$$

Probability density function (p.d.f.):

$$f(x) = \frac{1}{\sqrt{2 \pi \sigma^2}} e^{-(x - \mu)^2 / 2 \sigma^2}$$
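Base R's `dnorm()` implements this density; as an illustrative check, the formula can be evaluated directly and compared (values chosen arbitrarily):

```r
# Evaluate the Normal p.d.f. by hand and compare against dnorm().
mu <- 1; sigma <- 3; x <- 2
f <- 1 / sqrt(2 * pi * sigma^2) * exp(-(x - mu)^2 / (2 * sigma^2))
all.equal(f, dnorm(x, mean = mu, sd = sigma))  # TRUE
```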

Cumulative distribution function (c.d.f.):

The cumulative distribution function has the form

$$F(t) = \int_{-\infty}^t \frac{1}{\sqrt{2 \pi \sigma^2}} e^{-(x - \mu)^2 / 2 \sigma^2} dx$$

but this integral does not have a closed-form solution and must be approximated numerically. The c.d.f. of a standard Normal is closely related to the error function: $$\Phi(t) = \frac{1}{2}\left(1 + \mathrm{erf}(t / \sqrt{2})\right)$$, where $$\Phi(t)$$ denotes the c.d.f. of a standard Normal evaluated at $$t$$. Z-tables list the value of $$\Phi(t)$$ for various $$t$$.
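In base R, `pnorm()` computes $$\Phi$$ numerically and `qnorm()` is its inverse; a quick sketch of the Z-table values mentioned above:

```r
# pnorm() evaluates the standard Normal c.d.f.; qnorm() inverts it.
pnorm(1.96)         # about 0.975, the familiar Z-table value
qnorm(pnorm(1.96))  # recovers 1.96
pnorm(0)            # 0.5, by symmetry of the standard Normal
```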

Moment generating function (m.g.f.):

$$E(e^{tX}) = e^{\mu t + \sigma^2 t^2 / 2}$$
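The m.g.f. formula can be sanity-checked by Monte Carlo, comparing the empirical mean of $$e^{tX}$$ to the closed form (an informal sketch with arbitrarily chosen parameters):

```r
# Compare the empirical mean of exp(t * X) to the m.g.f. formula
# exp(mu * t + sigma^2 * t^2 / 2) for simulated Normal draws.
set.seed(1)
mu <- 1; sigma <- 0.5; t <- 0.3
x <- rnorm(1e5, mean = mu, sd = sigma)
mean(exp(t * x))                 # empirical estimate
exp(mu * t + sigma^2 * t^2 / 2)  # theoretical value
```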

## Examples

dist <- dist_normal(mu = 1:5, sigma = 3)

dist
#> <distribution[5]>
#> [1] N(1, 9) N(2, 9) N(3, 9) N(4, 9) N(5, 9)
mean(dist)
#> [1] 1 2 3 4 5
variance(dist)
#> [1] 9 9 9 9 9
skewness(dist)
#> [1] 0 0 0 0 0
kurtosis(dist)
#> [1] 0 0 0 0 0

generate(dist, 10)
#> [[1]]
#>  [1] -0.8108486  3.1290850 -1.4674832  1.7460871  3.4934064  3.9052010
#>  [7]  1.7590286  0.1694654 -3.6338433  3.3848392
#>
#> [[2]]
#>  [1] -2.2066013 -2.4039005  3.2639074  4.3371113 -2.0375703  5.9578031
#>  [7]  3.0801269 -3.7750115  0.9902331  5.0333456
#>
#> [[3]]
#>  [1] 4.1220225 4.9879121 4.6098284 2.7827616 4.8635804 8.1434257 1.2292206
#>  [8] 0.8810348 1.5311399 0.6177225
#>
#> [[4]]
#>  [1]  5.2400808  5.3246579  2.0500678  7.0236494  3.6936049  4.9635324
#>  [7]  2.8028659 -0.2610234  1.2803466  6.4835634
#>
#> [[5]]
#>  [1]  1.0661804  8.2075461  5.3635234  9.7258872  9.7606581  0.6162498
#>  [7]  7.9255529  8.3436648 12.8533424 -0.3098655
#>

density(dist, 2)
#> [1] 0.12579441 0.13298076 0.12579441 0.10648267 0.08065691
density(dist, 2, log = TRUE)
#> [1] -2.073106 -2.017551 -2.073106 -2.239773 -2.517551

cdf(dist, 4)
#> [1] 0.8413447 0.7475075 0.6305587 0.5000000 0.3694413

quantile(dist, 0.7)
#> [1] 2.573202 3.573202 4.573202 5.573202 6.573202