Suppose we have random variables $Y_1, Y_2, \ldots, Y_n$ and a real-valued function $U = h(Y_1, Y_2, \ldots, Y_n)$. In this chapter, we're only going to do one thing: introduce multiple methods to find the probability distribution of $U$. Any one of these methods can be employed to find the distribution of $U$, but usually one of the methods leads to a simpler derivation than the others. The "best" method varies from one application to another.

### The method of distribution functions

The first way of finding the distribution of $U$ is using the definition directly:

$$F_U(u) = P(U \le u).$$

We can work this out in the following steps:

- Write out the distribution function of $U$: $$F_U(u) = P(h(Y_1, \ldots, Y_n) \le u).$$

- Find the region of $(y_1, \ldots, y_n)$ such that $h(y_1, \ldots, y_n) \le u$ and denote the region as $A$.

- Integrate the joint density $f(y_1, \ldots, y_n)$ over $A$: $$F_U(u) = \int \cdots \int_A f(y_1, \ldots, y_n) \, dy_1 \cdots dy_n.$$

The hardest part of this method is finding the set $A$. We'll gain some insight by working through some examples.

#### Sugar example

A company is selling sugar online. Suppose the amount of sugar it can sell per day is $Y$ tons, which is a continuous random variable with density function defined as

$$f(y) = \begin{cases} 2y, & 0 \le y \le 1, \\ 0, & \text{otherwise}. \end{cases}$$

For each ton of sugar sold, the company can earn \$300. The daily operation cost is \$100. Find the probability distribution of the daily income of this company.

Let the random variable $U$ denote the daily profit in hundred dollars. We want to write $U$ as $U = h(Y) = 3Y - 1$ and find $F_U(u)$.

If $u < -1$, $F_U(u) = 0$. If $u > 2$, $F_U(u) = 1$. When $-1 \le u \le 2$,

$$F_U(u) = P(3Y - 1 \le u) = P\left(Y \le \frac{u+1}{3}\right) = \int_0^{(u+1)/3} 2y \, dy = \left(\frac{u+1}{3}\right)^2.$$

Note that as $y$ ranges from 0 to 1, $u = 3y - 1$ ranges from $-1$ to 2, so the distribution function of $U$ is

$$F_U(u) = \begin{cases} 0, & u < -1, \\ \left(\dfrac{u+1}{3}\right)^2, & -1 \le u \le 2, \\ 1, & u > 2. \end{cases}$$

The density function can also be calculated:

$$f_U(u) = \frac{d F_U(u)}{du} = \begin{cases} \dfrac{2(u+1)}{9}, & -1 \le u \le 2, \\ 0, & \text{otherwise}. \end{cases}$$
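As a quick sanity check (this simulation is our addition, not part of the derivation), we can sample $Y$ by inverse-transform sampling — the CDF $F(y) = y^2$ gives $Y = \sqrt{V}$ for $V \sim U(0, 1)$ — and compare the empirical CDF of $U = 3Y - 1$ against the derived $F_U(u)$:

```r
set.seed(42)

# Inverse-transform sampling: F(y) = y^2 on [0, 1], so Y = sqrt(V) with V ~ U(0, 1)
y <- sqrt(runif(1e5))
u <- 3 * y - 1  # daily profit in hundred dollars

# Empirical CDF at u0 = 0.5 vs the derived F_U(u) = ((u + 1) / 3)^2
u0 <- 0.5
empirical <- mean(u <= u0)
theoretical <- ((u0 + 1) / 3)^2
c(empirical = empirical, theoretical = theoretical)
```

The two values should agree to about two decimal places with $10^5$ draws.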

#### Example of two variables

Suppose $Y_1$ and $Y_2$ are two continuous random variables with joint density function

$$f(y_1, y_2) = \begin{cases} 3y_1, & 0 \le y_2 \le y_1 \le 1, \\ 0, & \text{otherwise}. \end{cases}$$

Find the density function of $U = Y_1 - Y_2$. Also use the density function of $U$ to find $E(U)$.

We first need to find $F_U(u)$ and use it to obtain the density function $f_U(u)$.

Now we need to find the region of $y_1$ and $y_2$ such that $y_1 - y_2 \le u$. We know that $y_1 - y_2 \le u$ and $0 \le y_2 \le y_1 \le 1$, and $U$ is bounded between 0 and 1 due to the latter condition.

## R code.

```r
library(ggplot2)

ggplot(NULL, aes(x = c(0, 1))) +
  stat_function(
    fun = ~ .x - 0.3, geom = "area", xlim = c(0.3, 1),
    fill = "#0073C2", alpha = 0.7
  ) +
  stat_function(
    fun = ~ .x, geom = "area", xlim = c(0, 1),
    fill = "#CD534C", alpha = 0.5
  ) +
  geom_segment(aes(x = 1, y = 0, xend = 1, yend = 1), linetype = "dashed") +
  geom_segment(aes(x = 0, y = 1, xend = 1, yend = 1), linetype = "dashed") +
  labs(x = expression(y[1]), y = expression(y[2])) +
  ggpubr::theme_pubr()
```

We can find the integral by subtracting the integral over the lower-right blue triangle region ($y_2 \le y_1 - u$, drawn with $u = 0.3$) from the integral over the entire red triangle ($0 \le y_2 \le y_1 \le 1$):

$$P(U \le u) = 1 - P(Y_1 - Y_2 > u) = 1 - \int_u^1 \int_0^{y_1 - u} 3y_1 \, dy_2 \, dy_1 = 1 - \int_u^1 3y_1 (y_1 - u) \, dy_1 = \frac{3u - u^3}{2}.$$

So the distribution function of $U$ is

$$F_U(u) = \begin{cases} 0, & u < 0, \\ \dfrac{3u - u^3}{2}, & 0 \le u \le 1, \\ 1, & u > 1. \end{cases}$$

Calculating the density function of $U$ is now straightforward:

$$f_U(u) = \frac{d F_U(u)}{du},$$

which gives us

$$f_U(u) = \begin{cases} \dfrac{3(1 - u^2)}{2}, & 0 \le u \le 1, \\ 0, & \text{otherwise}. \end{cases}$$

The expectation of $U$ can be calculated as

$$E(U) = \int_0^1 u \cdot \frac{3(1 - u^2)}{2} \, du = \frac{3}{2} \left( \frac{1}{2} - \frac{1}{4} \right) = \frac{3}{8}.$$
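To double-check $E(U) = 3/8$ by simulation (our addition): the marginal density of $Y_1$ is $3y_1^2$, so $Y_1 = V^{1/3}$ for $V \sim U(0, 1)$, and conditionally on $Y_1$, $Y_2$ is uniform on $(0, Y_1)$.

```r
set.seed(1)
n <- 1e5

# Marginal of Y1 is 3 * y1^2 (CDF y1^3), so Y1 = V^(1/3); Y2 | Y1 ~ U(0, Y1)
y1 <- runif(n)^(1 / 3)
y2 <- runif(n, min = 0, max = y1)
u <- y1 - y2

mean(u)  # should be close to E(U) = 3/8
```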

### Sum of independent random variables

An important application of the method of distribution functions is to calculate the distribution of $X + Y$ from the distributions of $X$ and $Y$ when they are independent, continuous random variables.

The first few steps are the same as what's described above:

$$F_{X+Y}(a) = P(X + Y \le a).$$

Define the region $A$ such that

$$A = \{(x, y) : x + y \le a\}.$$

We have

$$F_{X+Y}(a) = \iint_A f(x, y) \, dx \, dy = \int_{-\infty}^{\infty} \int_{-\infty}^{a - y} f(x, y) \, dx \, dy.$$

Now the independence comes into play:

$$F_{X+Y}(a) = \int_{-\infty}^{\infty} \int_{-\infty}^{a - y} f_X(x) f_Y(y) \, dx \, dy = \int_{-\infty}^{\infty} F_X(a - y) f_Y(y) \, dy.$$

The distribution function of $X + Y$ is called the **convolution** of $F_X$ and $F_Y$. By differentiating the above distribution function, we can find the density function of $X + Y$:

$$f_{X+Y}(a) = \int_{-\infty}^{\infty} f_X(a - y) f_Y(y) \, dy.$$

#### Uniform distribution example

If $X$ and $Y$ are two independent random variables both uniformly distributed on $(0, 1)$, calculate the probability density of $X + Y$. We can directly apply the equation above:

$$f_{X+Y}(a) = \int_0^1 f_X(a - y) \, dy.$$

We know that $f_X(x) = 1$ for $0 \le x \le 1$ and $f_Y(y) = 1$ for $0 \le y \le 1$. There are several cases here:

- If $a < 0$, then $a - y < 0$ for all $0 \le y \le 1$, so $f_{X+Y}(a) = 0$.

- If $0 \le a \le 1$, we also need $a - y \ge 0$, or $y \le a$.

- If $1 < a \le 2$, we also need $a - y \le 1$, or $y \ge a - 1$.

- If $a > 2$, $f_{X+Y}(a) = 0$.

When $0 \le a \le 1$,

$$f_{X+Y}(a) = \int_0^a dy = a.$$

When $1 < a \le 2$,

$$f_{X+Y}(a) = \int_{a-1}^1 dy = 2 - a.$$

In summary,

$$f_{X+Y}(a) = \begin{cases} a, & 0 \le a \le 1, \\ 2 - a, & 1 < a \le 2, \\ 0, & \text{otherwise}. \end{cases}$$

The sum of two independent uniform random variables is called a **triangular random variable** because of the shape of the density function above.

## R code.

```r
ggplot(NULL, aes(x = c(-1, 3))) +
  stat_function(fun = ~ 0, geom = "line", xlim = c(-1, 0)) +
  stat_function(fun = ~ .x, geom = "line", xlim = c(0, 1)) +
  stat_function(fun = ~ 2 - .x, geom = "line", xlim = c(1, 2)) +
  stat_function(fun = ~ 0, geom = "line", xlim = c(2, 3)) +
  labs(x = "a", y = expression(f[X+Y](a))) +
  ggpubr::theme_pubr()
```
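A quick Monte Carlo check of the triangular shape (our addition): the density is symmetric about $a = 1$, so the sample mean should be near 1 and about half of the draws should fall below 1.

```r
set.seed(7)
s <- runif(1e5) + runif(1e5)  # sum of two independent U(0, 1) draws

# The triangular density is symmetric about 1: E(X + Y) = 1 and P(X + Y <= 1) = 1/2
c(mean(s), mean(s <= 1))
```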

#### Normal distribution example

Suppose $X$ and $Y$ are two independent standard normal random variables. Find the density of $X + Y$. Recall that

$$f_X(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2}, \qquad f_Y(y) = \frac{1}{\sqrt{2\pi}} e^{-y^2/2}.$$

Applying the convolution formula and completing the square in the exponent,

$$f_{X+Y}(a) = \int_{-\infty}^{\infty} \frac{1}{2\pi} e^{-(a-y)^2/2} e^{-y^2/2} \, dy = \frac{1}{2\pi} e^{-a^2/4} \int_{-\infty}^{\infty} e^{-(y - a/2)^2} \, dy = \frac{1}{\sqrt{4\pi}} e^{-a^2/4}.$$

Therefore $X + Y$ is a normal random variable with mean 0 and variance 2. A similar result can be obtained in a more general case, which is a very important property of normal random variables.
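The convolution integral itself can also be evaluated numerically with `integrate` (a check we add here, not part of the derivation) and compared with the $N(0, 2)$ density:

```r
# Numerical convolution f_{X+Y}(a) = integral of f_X(a - y) f_Y(y) dy
# for two standard normals, compared with the N(0, 2) density
conv_density <- function(a) {
  integrate(function(y) dnorm(a - y) * dnorm(y), -Inf, Inf)$value
}

a <- 1.5
c(convolution = conv_density(a), direct = dnorm(a, mean = 0, sd = sqrt(2)))
```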

**Theorem**

Let $X_1, X_2, \ldots, X_n$ be a sequence of independent normal random variables with parameters $\mu_i$ and $\sigma_i^2$ for $i = 1, \ldots, n$. Then $\sum_{i=1}^n X_i$ is normally distributed with parameters $\sum_{i=1}^n \mu_i$ and $\sum_{i=1}^n \sigma_i^2$.
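The theorem is easy to verify by simulation; the parameter choices below are arbitrary (our own, for illustration):

```r
set.seed(11)
n <- 1e5

# X1 ~ N(1, 1), X2 ~ N(-2, 4), X3 ~ N(0.5, 0.25); the sum should be N(-0.5, 5.25)
total <- rnorm(n, mean = 1, sd = 1) +
  rnorm(n, mean = -2, sd = 2) +
  rnorm(n, mean = 0.5, sd = 0.5)

c(mean(total), var(total))  # approximately -0.5 and 5.25
```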

### The method of transformations

We first consider the univariate case. Suppose $Y$ is a continuous random variable with density function $f_Y(y)$ and $U = h(Y)$. To find

$$F_U(u) = P(h(Y) \le u),$$

what we can do is to transform the condition on $U$ back to a condition on $Y$. We know that $h^{-1}$ exists when the function $h$ is monotonic, i.e., either strictly increasing or strictly decreasing.

Given that $h$ is a monotonic function of $y$, $h$ maps every distinct value of $y$ to a distinct value of $u$.

If $h$ is a monotonically increasing function,

$$F_U(u) = P(Y \le h^{-1}(u)) = F_Y(h^{-1}(u)), \qquad f_U(u) = f_Y(h^{-1}(u)) \, \frac{d h^{-1}(u)}{du}.$$

If $h$ is a monotonically decreasing function,

$$F_U(u) = P(Y \ge h^{-1}(u)) = 1 - F_Y(h^{-1}(u)), \qquad f_U(u) = -f_Y(h^{-1}(u)) \, \frac{d h^{-1}(u)}{du}.$$

These two cases can be unified using the sign of $\frac{d h^{-1}(u)}{du}$. If $h$ is a monotonic function for all $y$ such that $f_Y(y) > 0$,

$$f_U(u) = f_Y(h^{-1}(u)) \left| \frac{d h^{-1}(u)}{du} \right|.$$
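As a small illustration of the unified formula (this particular transformation is our own choice, not one of the chapter's examples): if $Y \sim \text{Exp}(1)$ and $h(y) = e^{-y}$, a decreasing function, then $h^{-1}(u) = -\log u$ with $\left| \frac{d h^{-1}(u)}{du} \right| = 1/u$, so $f_U(u) = e^{\log u} \cdot (1/u) = 1$ on $(0, 1)$, i.e., $U$ is uniform.

```r
set.seed(3)

# Decreasing transformation h(y) = exp(-y) applied to Y ~ Exp(1):
# f_U(u) = f_Y(-log(u)) * (1 / u) = u * (1 / u) = 1 on (0, 1), i.e. U ~ U(0, 1)
y <- rexp(1e5, rate = 1)
u <- exp(-y)

c(mean(u), mean(u <= 0.3))  # approximately 0.5 and 0.3
```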

#### Sugar example revisited

In the sugar example, we defined the density function of $Y$ as

$$f(y) = \begin{cases} 2y, & 0 \le y \le 1, \\ 0, & \text{otherwise}, \end{cases}$$

and $U = h(Y) = 3Y - 1$. Find $f_U(u)$ with the method of transformations.

When $0 \le y \le 1$,

$$\frac{d h(y)}{dy} = 3 > 0,$$

so $h$ is a monotonically increasing function. The inverse function is

$$h^{-1}(u) = \frac{u + 1}{3},$$

and the first derivative is

$$\frac{d h^{-1}(u)}{du} = \frac{1}{3}.$$

Now we can find the density function using the formula

$$f_U(u) = f_Y(h^{-1}(u)) \left| \frac{d h^{-1}(u)}{du} \right| = 2 \cdot \frac{u + 1}{3} \cdot \frac{1}{3}, \quad -1 \le u \le 2.$$

Clean this up a bit and we get

$$f_U(u) = \begin{cases} \dfrac{2(u + 1)}{9}, & -1 \le u \le 2, \\ 0, & \text{otherwise}. \end{cases}$$

#### Multivariate example

The transformation method can also be used in multivariate situations. Let random variables $X$ and $Y$ have a joint density function

$$f_{X,Y}(x, y) = \begin{cases} e^{-(x + y)}, & x > 0, \ y > 0, \\ 0, & \text{otherwise}. \end{cases}$$

Find the density function for $U = X + Y$.

We can prove that $X \perp Y$ and use the method described earlier to find the distribution for the sum of two independent random variables. We can also apply the method of transformations here. If we fix $X = x$, we have

$$U = h(Y) = x + Y.$$

From here we can consider this as a one-dimensional transformation problem, with

$$h^{-1}(u) = u - x, \qquad \frac{d h^{-1}(u)}{du} = 1,$$

which gives

$$f_{X,U}(x, u) = f_{X,Y}(x, u - x) \left| \frac{d h^{-1}(u)}{du} \right| = e^{-u}, \quad 0 < x < u.$$

Using the joint density function of $X$ and $U$, we can obtain the marginal density function of $U$:

$$f_U(u) = \int_{-\infty}^{\infty} f_{X,U}(x, u) \, dx = \int_0^u e^{-u} \, dx = u e^{-u}.$$

So our final answer is

$$f_U(u) = \begin{cases} u e^{-u}, & u > 0, \\ 0, & \text{otherwise}. \end{cases}$$
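A simulation check (our addition): for independent $\text{Exp}(1)$ variables, the density $u e^{-u}$ is the $\text{Gamma}(2, 1)$ density, so the sample mean of the sum should be near 2 and the empirical CDF should match `pgamma`.

```r
set.seed(5)
u <- rexp(1e5, rate = 1) + rexp(1e5, rate = 1)  # X + Y for independent Exp(1) draws

# f_U(u) = u * exp(-u) is the Gamma(2, 1) density: E(U) = 2
c(mean(u), mean(u <= 1), pgamma(1, shape = 2, rate = 1))
```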

#### Multivariate procedure

As shown in the example above, when our problem is $U = h(X, Y)$ and we want to find $f_U(u)$, the procedure is

- Fix $X = x$, and denote $U = h_x(Y)$, now a function of $Y$ alone.

- Calculate the joint density function of $X$ and $U$ using the formula $$f_{X,U}(x, u) = f_{X,Y}(x, h_x^{-1}(u)) \left| \frac{d h_x^{-1}(u)}{du} \right|.$$

- Find the marginal density of $U$ with $$f_U(u) = \int_{-\infty}^{\infty} f_{X,U}(x, u) \, dx.$$

Suppose random variables $X$ and $Y$ have a joint density function

$$f_{X,Y}(x, y) = \begin{cases} x + y, & 0 \le x \le 1, \ 0 \le y \le 1, \\ 0, & \text{otherwise}. \end{cases}$$

Find the density function of $U = XY$.

Following the procedure, we first fix $X = x$ for some $0 < x \le 1$. Then we consider the univariate transformation $U = h_x(Y) = xY$, with

$$h_x^{-1}(u) = \frac{u}{x}, \qquad \frac{d h_x^{-1}(u)}{du} = \frac{1}{x}.$$

The joint density function of $X$ and $U$ is

$$f_{X,U}(x, u) = f_{X,Y}\left(x, \frac{u}{x}\right) \cdot \frac{1}{x} = \left(x + \frac{u}{x}\right) \frac{1}{x}, \quad 0 \le u \le x \le 1,$$

and now we can obtain the marginal density of $U$:

$$f_U(u) = \int_u^1 \left(1 + \frac{u}{x^2}\right) dx = (1 - u) + u \left(\frac{1}{u} - 1\right) = 2(1 - u), \quad 0 \le u \le 1.$$
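To check $f_U(u) = 2(1 - u)$ by simulation (our addition), we can draw from the density $f(x, y) = x + y$ on the unit square by rejection sampling — propose uniformly and accept with probability $(x + y)/2$ — and compare the sample mean of $U = XY$ with $E(U) = \int_0^1 u \cdot 2(1 - u) \, du = 1/3$.

```r
set.seed(9)
n <- 2e5

# Rejection sampling from f(x, y) = x + y on the unit square:
# propose uniformly, accept with probability (x + y) / 2
x <- runif(n)
y <- runif(n)
keep <- runif(n) < (x + y) / 2
u <- x[keep] * y[keep]

mean(u)  # should be close to E(U) = 1/3
```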

### The method of moment generating functions

We know that for a random variable $Y$, its associated moment generating function is given by $m_Y(t) = E(e^{tY})$. The moment generating function can help us find the distribution of $U$ through the following uniqueness property.

For random variables $X$ and $Y$, if both moment generating functions exist and $m_X(t) = m_Y(t)$ for all values of $t$, we have $F_X(a) = F_Y(a)$ for all $a$, i.e., $X$ and $Y$ have the same probability distribution.

#### Normal random variables

Let $Z \sim N(0, 1)$ and $U = \sigma Z + \mu$. Show that $U \sim N(\mu, \sigma^2)$ using the method of moment generating functions.

Define $V \sim N(\mu, \sigma^2)$. If we can show that $m_U(t) = m_V(t) = e^{\mu t + \sigma^2 t^2 / 2}$, then the distribution of $U$ is identical to the distribution of $V$.

We know that $m_Z(t) = e^{t^2 / 2}$, so

$$m_U(t) = E\left(e^{t(\sigma Z + \mu)}\right) = e^{\mu t} E\left(e^{(\sigma t) Z}\right) = e^{\mu t} m_Z(\sigma t) = e^{\mu t + \sigma^2 t^2 / 2}.$$

#### Use for sum of independent random variables

The method of moment generating functions is also very useful for calculating the probability distributions for the sum of independent random variables.

Suppose $X$ and $Y$ are independent with moment generating functions $m_X(t)$ and $m_Y(t)$. If $U = X + Y$, then

$$m_U(t) = E\left(e^{t(X + Y)}\right) = E\left(e^{tX} e^{tY}\right) = E\left(e^{tX}\right) E\left(e^{tY}\right) = m_X(t) m_Y(t).$$

In this special case, the moment generating function of the sum is simply the product of the individual moment generating functions.
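This factorization can also be seen empirically (our check, using $U(0, 1)$ variables chosen for convenience): the empirical MGF $\frac{1}{n} \sum_i e^{t u_i}$ of the sum should match the product of the empirical MGFs of the summands.

```r
set.seed(21)
n <- 1e5
x <- runif(n)
y <- runif(n)  # independent U(0, 1) samples
t <- 1

# Empirical MGFs: under independence, m_{X+Y}(t) = m_X(t) * m_Y(t)
lhs <- mean(exp(t * (x + y)))
rhs <- mean(exp(t * x)) * mean(exp(t * y))
c(lhs, rhs)  # both close to (exp(1) - 1)^2, since m_X(1) = e - 1 for U(0, 1)
```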

#### Some more examples

Find the distribution of $X + Y$ if $X$ and $Y$ are

- independent binomial random variables with parameters $(n, p)$ and $(m, p)$, respectively.

- independent Poisson random variables with parameters $\lambda_1$ and $\lambda_2$, respectively.

In the first scenario we have $X \sim \text{Binomial}(n, p)$ and $Y \sim \text{Binomial}(m, p)$. Since $X \perp Y$, $m_{X+Y}(t) = m_X(t) m_Y(t)$. Recall that

$$m_X(t) = \left(p e^t + 1 - p\right)^n,$$

so the MGF of $X + Y$ is

$$m_{X+Y}(t) = \left(p e^t + 1 - p\right)^n \left(p e^t + 1 - p\right)^m = \left(p e^t + 1 - p\right)^{n + m},$$

which tells us $X + Y \sim \text{Binomial}(n + m, p)$.
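A quick simulation check (our addition, with arbitrary parameters $n = 10$, $m = 5$, $p = 0.3$):

```r
set.seed(13)
n_sim <- 1e5

# Binomial(10, 0.3) + Binomial(5, 0.3) should be distributed as Binomial(15, 0.3)
s <- rbinom(n_sim, size = 10, prob = 0.3) + rbinom(n_sim, size = 5, prob = 0.3)

c(mean(s), 15 * 0.3)                 # means agree
c(mean(s == 4), dbinom(4, 15, 0.3))  # point probabilities agree
```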

In the second case, we have $X \sim \text{Poisson}(\lambda_1)$ and $Y \sim \text{Poisson}(\lambda_2)$. Since

$$m_X(t) = e^{\lambda_1 (e^t - 1)},$$

the MGF of $X + Y$ can be written as

$$m_{X+Y}(t) = e^{\lambda_1 (e^t - 1)} e^{\lambda_2 (e^t - 1)} = e^{(\lambda_1 + \lambda_2)(e^t - 1)}.$$

So $X + Y \sim \text{Poisson}(\lambda_1 + \lambda_2)$.
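And the Poisson case (our check, with arbitrary rates 2 and 3):

```r
set.seed(17)
n_sim <- 1e5

# Poisson(2) + Poisson(3) should be distributed as Poisson(5)
s <- rpois(n_sim, lambda = 2) + rpois(n_sim, lambda = 3)

c(mean(s), mean(s == 5), dpois(5, lambda = 5))
```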