Understanding Measures of Dispersion in an easy manner!

Introduction

In statistics we work with both sample and population data. When you have the whole population, you can be 100% sure of the measures you are calculating. When you use sample data and compute a statistic, that sample statistic is only an approximation of the population parameter, and 10 different samples will generally give you 10 different measures.

Measures of dispersion

The mean, median and mode are usually not sufficient to reveal the shape of the distribution of a data set. We also need a measure that can provide some information about the variation among the data set values.

The measures that help us know the spread of a data set are called “Measures of dispersion”. Measures of central tendency and measures of dispersion taken together give a better picture of the dataset.

Measures of dispersion are also called measures of variability. Variability, also called dispersion or spread, refers to how spread out the data is. It helps us compare one data set with other data sets and judge how consistent the data is. Once we know the variation in the data, we can work on controlling the causes behind that particular variation.

Some measures of dispersion are :

  1. Range
  2. Variance
  3. Standard deviation
  4. Interquartile Range (IQR)

Note: In this blog we won’t be discussing IQR, as it has some other applications which we will cover in detail in a later post.

Range

The difference between the largest and smallest observations in a sample is called the “Range”. In simple words, the range is the difference between the two extreme values in the dataset.

Let’s say X(max) and X(min) are the two extreme values; then the range will be,

Range = X(max) – X(min)

Example: The minimum and maximum BP readings are 113 and 170. Find the range.

Range = X(max) – X(min)

= 170 – 113

= 57

So, range is 57.

Variance

Now let’s consider two different distributions A and B with the following data sets:

A = {2, 2, 4, 4} and B = {1, 1, 5, 5}

If we compute the mean for both distributions,

Mean of A = (2 + 2 + 4 + 4) / 4 = 12 / 4 = 3 and Mean of B = (1 + 1 + 5 + 5) / 4 = 12 / 4 = 3

We can see that we get a mean of 3 for both distributions, but if we look at the data points themselves there is a difference. In distribution A the data points are close to each other; there is not a large difference between them. In distribution B the data points are far from each other; the differences are large. More distance means more spread, and this spread is what we call “Variance”.

Variance measures the dispersion of a set of data points around their mean. In statistics, variance is a measure of how far each value in the data set is from the mean.

The formula for variance is different for population data and sample data.
Why squaring?

Dispersion cannot be negative. Dispersion is essentially a distance, and a distance cannot be negative. If we don’t square the deviations, the negative and positive values will cancel each other out. Squaring keeps every deviation positive and also amplifies the effect of large distances.
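For instance, a quick sketch with distribution B shows how the raw deviations cancel while the squared ones do not:

import statistics as st

B = [1, 1, 5, 5]
mean_B = st.mean(B)                          # 3

deviations = [x - mean_B for x in B]         # [-2, -2, 2, 2]
print(sum(deviations))                       # 0  -> positive and negative deviations cancel
print(sum(d ** 2 for d in deviations))       # 16 -> squaring keeps every distance positive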

Let us consider first variance for population, it is given by formula

When we computed the mean we saw it was same but when we compute the variance we observed that both the variance are different. The variance of distribution A is 4 and that of distribution B is 1.

The reason behind the larger and smaller variance values is the distance between the data points.

When the distance between the data points is larger, the dispersion or spread is larger, and hence we get a higher variance. When the distance between the data points is smaller, the dispersion or spread is smaller, and hence we get a lower variance.
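We can verify this with Python’s statistics module, whose pvariance() function implements the population formula above:

import statistics as st

A = [2, 2, 4, 4]
B = [1, 1, 5, 5]

# pvariance() divides by N, i.e. the population variance formula
print(st.pvariance(A))   # 1  -> points close to the mean, small spread
print(st.pvariance(B))   # 4  -> points far from the mean, large spread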

For the sample variance, there is a small change in the formula:

Sample variance = Σ (x – x̄)² / (n – 1), summed over all n sample points, where x̄ is the sample mean.

Why n-1 ?

As we know, a sample is taken from the population, so the sample data should let us make inferences about the population. Different inferences about the population are made using sample data.

Now let us consider that we have population data of ages, plotted along the x-axis, with the population mean somewhere in the middle.

If we randomly select a sample from this population, the sample mean will usually be close to the population mean.

However, the values in a sample are, on average, closer to their own sample mean than to the population mean. So if we measure the squared deviations from the sample mean and divide by n, the result tends to come out smaller than the true population variance: we are underestimating it.

Hence we divide by n – 1 instead of n when computing the variance from sample data. Dividing by the smaller number n – 1 makes the result slightly larger, which compensates for that underestimation. This use of n – 1 is called Bessel’s correction.
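A small simulation sketch (with made-up, normally distributed data) illustrates the underestimation and how Bessel’s correction fixes it:

import random
import statistics as st

random.seed(0)

# A made-up "population" of 10,000 values with known spread
population = [random.gauss(50, 10) for _ in range(10_000)]
true_var = st.pvariance(population)

# Average many small-sample variance estimates, once dividing by n and once by n - 1
n, trials = 5, 20_000
biased_total, unbiased_total = 0.0, 0.0
for _ in range(trials):
    sample = random.sample(population, n)
    mu = st.mean(sample)
    ss = sum((x - mu) ** 2 for x in sample)
    biased_total += ss / n
    unbiased_total += ss / (n - 1)

print(f"True population variance : {true_var:.1f}")
print(f"Average of ss / n        : {biased_total / trials:.1f}  (underestimates)")
print(f"Average of ss / (n - 1)  : {unbiased_total / trials:.1f}  (close to the true value)")

With a population variance of about 100, the n-denominator average comes out roughly around 80, while the n – 1 version lands close to the true value.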

While discussing further topics we will also come across the term degrees of freedom, which here equals n – 1.

Importance of Variance

  1. Variance tells us how far a typical member of the data set lies from the mean, and hence how similar the points are to each other.
  2. If the variance is high it implies that there are very large dissimilarities among data points in data set.
  3. If the variance is zero it implies that every member of data set is the same.

Standard deviation

Variance is a measure of dispersion, but the figure obtained when computing it can be quite large and hard to compare because its unit of measurement is the square of the original unit.

Standard deviation (SD) is a very common measure of dispersion. It is the square root of the variance, so it is expressed in the original units. SD also measures how spread out the values in the data set are around the mean.

More accurately, it is a measure of the average distance between the data values and the mean.

  1. If data values are similar, then the SD will be low (close to zero).
  2. If the data values are highly variable, then the SD will be high (far from zero).

  • If SD is small, data has little spread (i.e. majority of points fall near the mean).
  • If SD = 0, there is no spread. This only happens when all data items have the same value.
  • The SD is significantly affected by outliers and skewed distributions.
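To see the effect of an outlier mentioned in the last point, here is a tiny sketch with made-up values:

import statistics as st

data = [10, 11, 9, 10, 12, 10]
with_outlier = data + [40]                  # one extreme value added

print(round(st.stdev(data), 2))             # 1.03  -> little spread
print(round(st.stdev(with_outlier), 2))     # 11.25 -> a single outlier inflates the SD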

Coefficient of variation

Standard deviation is the most common measure of variability for a single data set, whereas the coefficient of variation (CV) is used to compare the spread of two or more data sets. It is defined as CV = Standard deviation / Mean, and it is often expressed as a percentage.

Example

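As an illustration (the numbers below are made up), compare the daily prices of two stocks that trade in very different price ranges:

import statistics as st

stock_a = [45, 47, 44, 46, 48]              # prices around 46
stock_b = [910, 940, 905, 925, 950]         # prices around 926

for name, prices in [("Stock A", stock_a), ("Stock B", stock_b)]:
    sd = st.stdev(prices)
    cv = sd / st.mean(prices)
    print(f"{name}: SD = {sd:.2f}, CV = {cv:.3f}")

Stock B has a much larger SD (about 19 versus about 1.6) simply because its prices are larger numbers, but its CV (about 0.021) is lower than Stock A’s (about 0.034), so relative to its own mean it is actually the less variable of the two.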

  • If we observe, variance gives its answer in squared units rather than the original units, whereas SD is in the original units and is therefore preferred and easier to interpret.
  • The coefficient of variation has no unit of measurement. It is comparable across data sets and is therefore well suited for comparisons.
  • If the coefficient of variation is the same, we can say that the two data sets have the same relative variability.

Python Implementation 

Python code for finding range

import numpy as np

data = np.array([4, 6, 9, 3, 7])

# Range = largest value - smallest value
print(f"The range of the dataset is {data.max() - data.min()}")

The output gives us the value of the range, i.e. 6.

Python code for finding variance

import statistics as st

data = [3, 8, 6, 10, 12, 9, 11, 10, 12, 7]

# statistics.variance() computes the sample variance (it divides by n - 1)
var = st.variance(data)

print(f"The variance of the data is {var}")

The output gives us the sample variance, approximately 8.18.

Python code for finding Standard deviation

import statistics as st

data = [3, 8, 6, 10, 12, 9, 11, 10, 12, 7]

# statistics.stdev() computes the sample standard deviation (square root of the sample variance)
sd = st.stdev(data)

print(f"The standard deviation of data points is {sd}")

The output gives us the sample SD, approximately 2.86.
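If you prefer to stay within NumPy, the same measures are available through np.var() and np.std(); note that you have to pass ddof=1 to get the sample versions that match the statistics module:

import numpy as np

data = np.array([3, 8, 6, 10, 12, 9, 11, 10, 12, 7])

print(np.var(data, ddof=1))   # sample variance (divides by n - 1), matches statistics.variance
print(np.std(data, ddof=1))   # sample standard deviation, matches statistics.stdev
print(np.var(data))           # population variance (ddof defaults to 0, divides by n)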

Conclusion

So here we have understood measures of variability. Measures of central tendency and measures of variability together are called univariate measures of analysis.

Measures that deal with only one variable are called univariate measures.

In the next section, we are going to discuss more interesting topics such as the 5-number summary and skewness.

Happy Learning !! 

 

 

All about Descriptive and Inferential Statistics

So in the previous article we had a brief introduction to statistics and its importance in the field of analytics. In this article we will move one step forward towards understanding stats.

In this blog we are going to have an overview of the types of statistics, the types of data and measurement scales.

Types of Statistics

So basically statistics is divided into 2 major categories i.e. Descriptive and Inferential statistics.

Descriptive statistics:

This is a very important part of stats. In this type we deal with numbers, figures or information used to describe a certain phenomenon. These numbers are known as descriptive statistics.

It helps us to organize and summarize data using numbers and graphs to look for a pattern in the data set.

Some examples of this type of statistics are the measures of central tendency, which include the mean, median, mode, etc., and the measures of variability, such as the standard deviation, range and variance.

Example: Reports of production, cricket batting averages, ages, ratings, marks, etc.

Inferential statistics:

Sample data is used to make an inference or draw a conclusion about the population. Inferential statistics is a decision, estimate, prediction or generalization about a population based on a sample.

Inferential statistics is used to make inferences from the data, whereas descriptive statistics simply describes what is going on in our data.

Scenario based study:

Suppose a particular college has 1000 students. We are interested in finding out how many of the students prefer eating in the canteen and how many prefer eating in the mess. A random group of 100 students is selected, and this becomes our sample data.

So, population size = 1000 college students

sample size = 100 random students selected

So now we can conduct a survey with this sample of 100 students. After doing the survey and analyzing the data, we get the following insights.

Insights derived:

  1. 72 % of students prefer eating in canteen.
  2. Of the total students who prefer canteen 44.4 % are from 4th year.
  3. Of the total number of students who prefer canteen 72% are from 3rd and 4th year.
  4. 1st year students are more inclined towards eating in mess.

The above statistics give the trends within the sample data. Since these insights simply summarize the sample using numbers, all of this falls under descriptive statistics.
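For instance, if the survey responses were recorded as (year, preference) pairs, descriptive summaries like the percentages above could be computed with a few lines of Python (the breakdown below is made up so that it reproduces the figures quoted above):

from collections import Counter

# Hypothetical responses from the 100 sampled students: (year, canteen/mess preference)
responses = (
    [("1st", "mess")] * 15 + [("1st", "canteen")] * 5 +
    [("2nd", "mess")] * 8  + [("2nd", "canteen")] * 15 +
    [("3rd", "mess")] * 3  + [("3rd", "canteen")] * 20 +
    [("4th", "mess")] * 2  + [("4th", "canteen")] * 32
)

prefs = Counter(pref for _, pref in responses)
print(f"Canteen: {prefs['canteen']}%, Mess: {prefs['mess']}%")   # percentages of the 100 students

canteen_by_year = Counter(year for year, pref in responses if pref == "canteen")
for year in sorted(canteen_by_year):
    share = canteen_by_year[year] / prefs['canteen']
    print(f"{year} year: {share:.1%} of the students who prefer the canteen")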

Now, suppose we wanted to open a canteen or a mess in the college. From the above insights we can assume that –

  1. 3rd year and 4th year students are the main target for starting the business.
  2. To get more sales, you can offer discounts to 1st year and 2nd year students.
  3. From the above insights we can conclude that a canteen is a better option than a mess for running the business, since most of the students in the data are inclined towards the canteen rather than the mess.

So here we made inferences/assumptions/estimations about the whole college on the basis of the sample data. This is the crucial part of inferential statistics.

So here we have discussed the main difference between descriptive and inferential statistics based on the above scenario.