Figure: example of a distribution with positive skew.
In probability theory and statistics, skewness is a measure of the extent to which a probability distribution of a real-valued random variable "leans" to one side of the mean. The skewness value can be positive or negative, or even undefined.
The qualitative interpretation of the skew is complicated. For a unimodal distribution, negative skew indicates that the tail on the left side of the probability density function is longer or fatter than the right side – it does not distinguish these shapes. Conversely, positive skew indicates that the tail on the right side is longer or fatter than the left side. In cases where one tail is long but the other tail is fat, skewness does not obey a simple rule. For example, a zero value indicates that the tails on both sides of the mean balance out, which is the case both for a symmetric distribution, and for asymmetric distributions where the asymmetries even out, such as one tail being long but thin, and the other being short but fat. Further, in multimodal distributions and discrete distributions, skewness is also difficult to interpret. Importantly, the skewness does not determine the relationship of mean and median.
Introduction
Consider the distribution in the figure. The bars on the right side of the distribution taper differently than the bars on the left side. These tapering sides are called tails, and they provide a visual means for determining which of the two kinds of skewness a distribution has:
 negative skew: The left tail is longer; the mass of the distribution is concentrated on the right of the figure. It has relatively few low values. The distribution is said to be left-skewed, left-tailed, or skewed to the left.^{[1]} Example (observations): 1, 1001, 1002, 1003.
 positive skew: The right tail is longer; the mass of the distribution is concentrated on the left of the figure. It has relatively few high values. The distribution is said to be right-skewed, right-tailed, or skewed to the right.^{[1]} Example (observations): 1, 2, 3, 1000.
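These signs can be checked directly on the two example datasets; a minimal Python sketch using the (biased) sample skewness m_3 / m_2^{3/2} (the function name is ours):

```python
from statistics import mean

def skewness(xs):
    """Biased sample skewness g1 = m3 / m2^(3/2)."""
    m = mean(xs)
    n = len(xs)
    m2 = sum((x - m) ** 2 for x in xs) / n  # biased sample variance
    m3 = sum((x - m) ** 3 for x in xs) / n  # sample third central moment
    return m3 / m2 ** 1.5

left_skewed = [1, 1001, 1002, 1003]   # long left tail
right_skewed = [1, 2, 3, 1000]        # long right tail

print(skewness(left_skewed))   # negative
print(skewness(right_skewed))  # positive
```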
Relationship of mean and median
The skewness is not strictly connected with the relationship between the mean and median: a distribution with negative skew can have the mean greater than or less than the median, and likewise for positive skew.
In the older notion of nonparametric skew, defined as $(\mu - \nu)/\sigma$, where μ is the mean, ν is the median, and σ is the standard deviation, the skewness is defined in terms of this relationship: positive/right nonparametric skew means the mean is greater than (to the right of) the median, while negative/left nonparametric skew means the mean is less than (to the left of) the median. However, the modern definition of skewness and the traditional nonparametric definition do not in general have the same sign: while they agree for some families of distributions, they differ in general, and conflating them is misleading.
If the distribution is symmetric then the mean is equal to the median and the distribution will have zero skewness.^{[2]} If, in addition, the distribution is unimodal, then the mean = median = mode. This is the case of a coin toss or the series 1,2,3,4,... Note, however, that the converse is not true in general, i.e. zero skewness does not imply that the mean is equal to the median.
"Many textbooks," a 2005 article points out, "teach a rule of thumb stating that the mean is right of the median under right skew, and left of the median under left skew. [But] this rule fails with surprising frequency. It can fail in multimodal distributions, or in distributions where one tail is long but the other is fat. Most commonly, though, the rule fails in discrete distributions where the areas to the left and right of the median are not equal. Such distributions not only contradict the textbook relationship between mean, median, and skew, they also contradict the textbook interpretation of the median."^{[3]}
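One concrete way the rule of thumb fails for a discrete distribution: a binomial(10, 0.8) variable has mean equal to its median (both 8) even though its skewness is clearly negative. A sketch in Python (this example is illustrative, not taken from the article cited above):

```python
from math import comb

# Binomial(n=10, p=0.8): mean == median, yet skewness is nonzero.
n, p = 10, 0.8
pmf = [comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n + 1)]

mu = sum(k * pk for k, pk in enumerate(pmf))
var = sum((k - mu) ** 2 * pk for k, pk in enumerate(pmf))
m3 = sum((k - mu) ** 3 * pk for k, pk in enumerate(pmf))
skew = m3 / var ** 1.5

# Median: smallest k whose CDF reaches 1/2.
cdf = 0.0
for k, pk in enumerate(pmf):
    cdf += pk
    if cdf >= 0.5:
        median = k
        break

print(mu, median, skew)  # 8.0, 8, about -0.47
```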
Definition
The skewness of a random variable X is the third standardized moment, denoted γ_{1} and defined as
 $$
\gamma_1 = \operatorname{E}\Big[\big(\tfrac{X-\mu}{\sigma}\big)^{\!3}\,\Big] = \frac{\mu_3}{\sigma^3} = \frac{\operatorname{E}\big[(X-\mu)^3\big]}{\big(\operatorname{E}\big[(X-\mu)^2\big]\big)^{3/2}} = \frac{\kappa_3}{\kappa_2^{3/2}}\ ,
$$
where μ_{3} is the third central moment, μ is the mean, σ is the standard deviation, and E is the expectation operator. The last equality expresses skewness in terms of the ratio of the third cumulant κ_{3} and the 1.5th power of the second cumulant κ_{2}. This is analogous to the definition of kurtosis as the fourth cumulant normalized by the square of the second cumulant.
The skewness is also sometimes denoted Skew[X].
The formula expressing skewness in terms of the non-central moment E[X^{3}] can be obtained by expanding the previous formula,
 $$
\begin{align}
\gamma_1 &= \operatorname{E}\bigg[\Big(\frac{X-\mu}{\sigma}\Big)^{\!3}\,\bigg] \\
&= \frac{\operatorname{E}[X^3] - 3\mu\operatorname{E}[X^2] + 3\mu^2\operatorname{E}[X] - \mu^3}{\sigma^3} \\
&= \frac{\operatorname{E}[X^3] - 3\mu(\operatorname{E}[X^2] - \mu\operatorname{E}[X]) - \mu^3}{\sigma^3} \\
&= \frac{\operatorname{E}[X^3] - 3\mu\sigma^2 - \mu^3}{\sigma^3}\ .
\end{align}
$$
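The equivalence of the defining formula and the expanded form can be checked numerically by treating a small dataset as a discrete uniform distribution; a minimal sketch:

```python
from statistics import mean

# Treat a small dataset as a discrete uniform distribution and check that
# the definition E[((X - mu)/sigma)^3] agrees with the expanded form
# (E[X^3] - 3*mu*sigma^2 - mu^3) / sigma^3.
xs = [1.0, 2.0, 4.0, 8.0]
mu = mean(xs)
sigma2 = mean([(x - mu) ** 2 for x in xs])
sigma = sigma2 ** 0.5

lhs = mean([((x - mu) / sigma) ** 3 for x in xs])
rhs = (mean([x ** 3 for x in xs]) - 3 * mu * sigma2 - mu ** 3) / sigma ** 3

print(lhs, rhs)  # the two values coincide
```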
Sample skewness
For a sample of n values the sample skewness is
 $$
g_1 = \frac{m_3}{m_2^{3/2}} = \frac{\tfrac{1}{n} \sum_{i=1}^n (x_i - \overline{x})^3}{\left(\tfrac{1}{n} \sum_{i=1}^n (x_i - \overline{x})^2\right)^{3/2}}\ ,
$$
where $\overline{x}$ is the sample mean, m_{3} is the sample third central moment, and m_{2} is the sample variance.
Given samples from a population, the equation for the sample skewness $g_1$ above is a biased estimator of the population skewness. (Note that for a discrete distribution the sample skewness may be undefined (0/0) with positive probability — for instance, when every sampled value coincides — so its expected value will be undefined.) The usual estimator of population skewness is
 $$
G_1 = \frac{k_3}{k_2^{3/2}} = \frac{\sqrt{n(n-1)}}{n-2}\; g_1,
$$
where $k_3$ is the unique symmetric unbiased estimator of the third cumulant and $k_2$ is the symmetric unbiased estimator of the second cumulant. Unfortunately $G_1$ is, nevertheless, generally biased (although it obviously has the correct expected value of zero for a symmetric distribution). Its expected value can even have the opposite sign from the true skewness. For instance, a mixed distribution consisting of very thin Gaussians centred at −99, 0.5, and 2 with weights 0.01, 0.66, and 0.33 has a skewness of about −9.77, but in a sample of 3, $G_1$ has an expected value of about 0.32, since usually all three samples are in the positive-valued part of the distribution, which is skewed the other way.
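A small sketch of the relation between g_1 and G_1 (function names ours): for n = 4 the correction factor √(n(n−1))/(n−2) = √12/2 ≈ 1.73, so G_1 is noticeably larger in magnitude than g_1.

```python
from math import sqrt
from statistics import mean

def g1(xs):
    """Biased sample skewness m3 / m2^(3/2)."""
    n, m = len(xs), mean(xs)
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

def G1(xs):
    """Adjusted estimator k3 / k2^(3/2) = sqrt(n(n-1))/(n-2) * g1."""
    n = len(xs)
    return sqrt(n * (n - 1)) / (n - 2) * g1(xs)

xs = [1, 2, 3, 1000]
print(g1(xs), G1(xs))  # G1 is larger in magnitude than g1
```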
The variance of the skewness of a sample of size n from a normal distribution is^{[4]}^{[5]}
 $\operatorname{var}(G_1) = \frac{6n(n-1)}{(n-2)(n+1)(n+3)}\ .$
An approximate alternative is 6/n, but this is inaccurate for small samples.
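The quality of the 6/n approximation can be seen by tabulating both expressions; a minimal sketch:

```python
def var_G1(n):
    """Exact variance of G1 under normality: 6n(n-1)/((n-2)(n+1)(n+3))."""
    return 6 * n * (n - 1) / ((n - 2) * (n + 1) * (n + 3))

# For small n the crude 6/n approximation is far off; for large n the
# two expressions are nearly identical.
for n in (10, 100, 1000):
    print(n, var_G1(n), 6 / n)
```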
Properties
Skewness can be infinite, as when
 $\Pr\left[X > x\right] = x^{-3}\mbox{ for }x > 1,\ \Pr[X < 1] = 0,$
or undefined, as when
 $\Pr[X < x] = (1-x)^{-3}/2\mbox{ for negative }x\mbox{ and }\Pr[X > x] = (1+x)^{-3}/2\mbox{ for positive }x.$
In this latter example, the third cumulant is undefined. One can also have distributions such as
 $\Pr\left[X > x\right] = x^{-2}\mbox{ for }x > 1,\ \Pr[X < 1] = 0,$
where both the second and third cumulants are infinite, so the skewness is again undefined.
If Y is the sum of n independent and identically distributed random variables, all with the distribution of X, then the third cumulant of Y is n times that of X and the second cumulant of Y is n times that of X, so $\operatorname{Skew}[Y] = \operatorname{Skew}[X]/\sqrt{n}$. This shows that the skewness of the sum is smaller, as it approaches a Gaussian distribution in accordance with the central limit theorem.
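This 1/√n scaling can be verified with the cumulants of a Bernoulli(p) variable, whose sum of n iid copies is binomial; the closed-form cumulants κ₂ = p(1−p) and κ₃ = p(1−p)(1−2p) used below are standard results, not taken from this article.

```python
from math import sqrt

# Cumulants of Bernoulli(p): kappa2 = p(1-p), kappa3 = p(1-p)(1-2p).
# For a sum Y of n iid copies both cumulants are multiplied by n, so
# Skew[Y] = n*kappa3 / (n*kappa2)^(3/2) = Skew[X] / sqrt(n).
p, n = 0.2, 25
k2 = p * (1 - p)
k3 = p * (1 - p) * (1 - 2 * p)

skew_X = k3 / k2 ** 1.5
skew_Y = (n * k3) / (n * k2) ** 1.5

print(skew_X, skew_Y, skew_X / sqrt(n))  # 1.5, 0.3, 0.3
```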
Applications
Skewness is a useful descriptive statistic in many areas. Many models assume a normal distribution, i.e., that the data are symmetric about the mean. The normal distribution has a skewness of zero. But in reality, data points may not be perfectly symmetric. So, an understanding of the skewness of the dataset indicates whether deviations from the mean are going to be predominantly positive or negative.
D'Agostino's Ksquared test is a goodnessoffit normality test based on sample skewness and sample kurtosis.
In almost all countries the distribution of income is skewed to the right.
Other measures of skewness
Pearson's skewness coefficients
Karl Pearson suggested simpler calculations as a measure of skewness:^{[6]} the Pearson mode or first skewness coefficient,^{[7]} defined by
 (mean − mode) / standard deviation,
as well as Pearson's median or second skewness coefficient,^{[8]} defined by
 3 (mean − median) / standard deviation.
The latter is a simple multiple of the nonparametric skew.
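A minimal sketch of Pearson's second (median) skewness coefficient, computed on the right-skewed example sample from earlier in the article (the function name is ours):

```python
from statistics import mean, median, pstdev

def pearson_median_skewness(xs):
    """Pearson's second skewness coefficient: 3*(mean - median)/std."""
    return 3 * (mean(xs) - median(xs)) / pstdev(xs)

print(pearson_median_skewness([1, 2, 3, 1000]))  # positive
```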
Starting from a standard cumulant expansion around a normal distribution, one can show that skewness = 6 (mean − median) / standard deviation (1 + kurtosis / 8) + O(skewness^{2}). One should keep in mind that the equalities given above often do not hold even approximately, so these empirical formulas are nowadays largely abandoned; there is no guarantee that they will have the same sign as each other or as the ordinary definition of skewness.
The adjusted Fisher–Pearson standardized moment coefficient is the version found in Excel and several statistical packages including Minitab, SAS and SPSS.^{[9]} The formula for this statistic is
 $G = \frac{n}{(n-1)(n-2)} \sum_{i=1}^n \left(\frac{x_i - \overline{x}}{s}\right)^3$
where n is the sample size and s is the sample standard deviation.
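A sketch of this statistic in plain Python, using the sample standard deviation with denominator n − 1 as the formula requires (the function name is ours):

```python
from statistics import mean, stdev

def adjusted_fisher_pearson(xs):
    """G = n/((n-1)(n-2)) * sum(((x_i - xbar)/s)^3), where s is the
    sample standard deviation computed with denominator n - 1."""
    n, xbar, s = len(xs), mean(xs), stdev(xs)
    return n / ((n - 1) * (n - 2)) * sum(((x - xbar) / s) ** 3 for x in xs)

print(adjusted_fisher_pearson([1, 2, 3, 1000]))  # positive
```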
Quantile based measures
A skewness function
 $\gamma(u) = \frac{F^{-1}(u) + F^{-1}(1-u) - 2F^{-1}(1/2)}{F^{-1}(u) - F^{-1}(1-u)}$
can be defined,^{[10]}^{[11]} where F is the cumulative distribution function. This leads to a corresponding overall measure of skewness^{[10]} defined as the supremum of this over the range 1/2 ≤ u < 1. Another measure can be obtained by integrating the numerator and denominator of this expression.^{[12]} The function γ(u) satisfies −1 ≤ γ(u) ≤ 1 and is well defined without requiring the existence of any moments of the distribution.^{[12]}
Galton's measure of skewness^{[13]} is γ(u) evaluated at u = 3/4. Other names for this same quantity are the Bowley skewness,^{[14]} the Yule–Kendall index^{[15]} and the quartile skewness.
Kelley's measure of skewness uses u = 0.1.
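The Galton/Bowley measure can be estimated from sample quartiles; a sketch (the choice of quantile interpolation method is ours and affects the result slightly):

```python
from statistics import quantiles

def bowley_skewness(xs):
    """Galton/Bowley skewness: gamma(3/4) estimated from sample quartiles."""
    q1, q2, q3 = quantiles(xs, n=4, method="inclusive")
    return (q3 + q1 - 2 * q2) / (q3 - q1)

# Right-skewed sample: the measure is positive and bounded by 1.
print(bowley_skewness([1, 2, 3, 1000]))
```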
Lmoments
Use of Lmoments in place of moments provides a measure of skewness known as the Lskewness.^{[16]}
Cyhelský's skewness coefficient
An alternative skewness coefficient may be derived from the sample mean and the individual observations:^{[17]}
 a = (number of observations below the mean − number of observations above the mean) / total number of observations
The distribution of the skewness coefficient a in large sample sizes (≥45) approaches that of a normal distribution. If the variates have a normal or a uniform distribution, the distribution of a is the same in either case. The behavior of a when the variates have other distributions is currently unknown. Although this measure of skewness is very intuitive, an analytic approach to its distribution has proven difficult.
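A direct implementation of the coefficient a (the function name is ours):

```python
from statistics import mean

def cyhelsky_skewness(xs):
    """a = (# observations below the mean - # above the mean) / n."""
    m = mean(xs)
    below = sum(1 for x in xs if x < m)
    above = sum(1 for x in xs if x > m)
    return (below - above) / len(xs)

# Three of the four observations lie below the mean of 251.5.
print(cyhelsky_skewness([1, 2, 3, 1000]))  # 0.5
```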
Distance skewness
A value of skewness equal to zero does not imply that the probability distribution is symmetric. Thus there is a need for another measure of asymmetry that has this property: such a measure was introduced in 2000.^{[18]} It is called distance skewness and denoted by dSkew. If X is a random variable taking values in the d-dimensional Euclidean space, X has finite expectation, X′ is an independent identically distributed copy of X, and ‖·‖ denotes the norm in the Euclidean space, then a simple measure of asymmetry is
 dSkew(X) := 1 − E‖X − X′‖ / E‖X + X′‖ if X is not 0 with probability one,
and dSkew(X) := 0 for X = 0 (with probability 1). Distance skewness is always between 0 and 1, equals 0 if and only if X is diagonally symmetric (X and −X have the same probability distribution) and equals 1 if and only if X is a nonzero constant with probability one.^{[19]} Thus there is a simple consistent statistical test of diagonal symmetry based on the sample distance skewness:
 dSkew_{n}(X) := 1 − ∑_{i,j} ‖x_{i} − x_{j}‖ / ∑_{i,j} ‖x_{i} + x_{j}‖.
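For one-dimensional data the sample distance skewness reduces to sums of absolute values over all pairs; a minimal sketch:

```python
def distance_skewness(xs):
    """Sample distance skewness for real-valued data:
    dSkew_n = 1 - sum|xi - xj| / sum|xi + xj|, over all pairs (i, j)."""
    num = sum(abs(a - b) for a in xs for b in xs)
    den = sum(abs(a + b) for a in xs for b in xs)
    return 0.0 if den == 0 else 1 - num / den  # dSkew := 0 when X = 0 a.s.

print(distance_skewness([-2, -1, 1, 2]))  # 0.0: sample symmetric about zero
print(distance_skewness([5, 5, 5]))       # 1.0: nonzero constant
```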
Groeneveld & Meeden’s coefficient
Groeneveld & Meeden have suggested, as an alternative measure of skewness,^{[12]}
 $\mathrm{skew}(X) = \frac{\mu - \nu}{E(|X - \nu|)}$
where μ is the mean, ν is the median, |·| is the absolute value, and E() is the expectation operator.
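A sample-based sketch of the Groeneveld–Meeden coefficient, replacing the expectations by sample means (an illustrative estimator, not one proposed in the cited work):

```python
from statistics import mean, median

def groeneveld_meeden(xs):
    """skew(X) = (mean - median) / E|X - median|, estimated from a sample."""
    m, nu = mean(xs), median(xs)
    return (m - nu) / mean([abs(x - nu) for x in xs])

# mean = 251.5, median = 2.5, mean absolute deviation from median = 250.
print(groeneveld_meeden([1, 2, 3, 1000]))  # 0.996
```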
See also
Notes
References
 Johnson, N. L., Kotz, S., Balakrishnan, N. (1994) Continuous Univariate Distributions, Vol. 1, 2nd Edition. Wiley. ISBN 0-471-58495-9
External links
 An Asymmetry Coefficient for Multivariate Distributions by Michel Petitjean
 On More Robust Estimation of Skewness and Kurtosis: comparison of skew estimators, by Kim and White.
 Closed-skew Distributions — Simulation, Inversion and Parameter Estimation
