
Gwet's gamma coefficient

Jeffrey Girard edited this page Nov 14, 2022 · 17 revisions

Overview

The gamma coefficient is a chance-adjusted index of the reliability of categorical measurements. It estimates chance agreement using a hybrid (category-based and average-distribution-based) approach. It is also called Gwet's agreement coefficient: AC1 when using nominal (identity) weights and AC2 when using any non-nominal weighting scheme.

MATLAB Functions

  • mGAMMA %Calculate gamma using vectorized formulas

Simplified Formulas

Use these formulas with two raters and two (dichotomous) categories:


$$p_o = \frac{n_{11} + n_{22}}{n}$$

$$m_1 = \frac{n_{+1} + n_{1+}}{2}$$

$$m_2 = \frac{n_{+2} + n_{2+}}{2}$$

$$p_c = 2 \left( \frac{m_1}{n} \right) \left( \frac{m_2}{n} \right)$$

$$\gamma = \frac{p_o - p_c}{1 - p_c}$$


$n_{11}$ is the number of items both raters assigned to category 1

$n_{22}$ is the number of items both raters assigned to category 2

$n$ is the total number of items

$n_{1+}$ is the number of items rater 1 assigned to category 1

$n_{2+}$ is the number of items rater 1 assigned to category 2

$n_{+1}$ is the number of items rater 2 assigned to category 1

$n_{+2}$ is the number of items rater 2 assigned to category 2

Contingency Table

|                         | Rater 2: Category 1 | Rater 2: Category 2 | Total    |
| ----------------------- | ------------------- | ------------------- | -------- |
| **Rater 1: Category 1** | $n_{11}$            | $n_{12}$            | $n_{1+}$ |
| **Rater 1: Category 2** | $n_{21}$            | $n_{22}$            | $n_{2+}$ |
| **Total**               | $n_{+1}$            | $n_{+2}$            | $n$      |
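As an illustrative sketch (not the mGAMMA function), the simplified formulas can be computed directly from the four cell counts of the contingency table. The function name `gamma_2x2` and its argument order are assumptions made for this example:

```python
def gamma_2x2(n11, n12, n21, n22):
    """Gwet's gamma (AC1) for two raters and two categories.

    n11, n22: items both raters assigned to categories 1 and 2;
    n12, n21: items the two raters assigned to different categories.
    """
    n = n11 + n12 + n21 + n22                # total number of items
    p_o = (n11 + n22) / n                    # observed agreement
    m1 = ((n11 + n21) + (n11 + n12)) / 2     # (n_+1 + n_1+) / 2
    m2 = ((n12 + n22) + (n21 + n22)) / 2     # (n_+2 + n_2+) / 2
    p_c = 2 * (m1 / n) * (m2 / n)            # chance agreement
    return (p_o - p_c) / (1 - p_c)
```

For example, with $n_{11} = n_{22} = 20$ and $n_{12} = n_{21} = 5$, observed agreement is $p_o = 0.8$, chance agreement is $p_c = 0.5$, and gamma is $0.6$.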

Generalized Formulas

Use these formulas with multiple raters, multiple categories, and any weighting scheme:


$$r_{ik}^\star = \sum_{l=1}^q w_{kl} r_{il}$$

$$p_o = \frac{1}{n'} \sum_{i=1}^{n'} \sum_{k=1}^q \frac{r_{ik} (r_{ik}^\star - 1)}{r_i (r_i - 1)}$$

$$T_w = \sum_{k,l} w_{kl}$$

$$\pi_k = \frac{1}{n} \sum_{i=1}^n \frac{r_{ik}}{r_i}$$

$$p_c = \frac{T_w}{q(q-1)} \sum_{k=1}^q \pi_k (1 - \pi_k)$$

$$\gamma = \frac{p_o - p_c}{1 - p_c}$$


$q$ is the total number of categories

$w_{kl}$ is the weight associated with two raters assigning an item to categories $k$ and $l$

$r_{il}$ is the number of raters that assigned item $i$ to category $l$

$n'$ is the number of items that were coded by two or more raters

$r_{ik}$ is the number of raters that assigned item $i$ to category $k$

$r_i$ is the number of raters that assigned item $i$ to any category

$n$ is the total number of items
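The generalized formulas can be sketched in plain Python from an items-by-categories count table; this is an illustrative sketch, not the vectorized mGAMMA implementation, and the name `gamma_general` is an assumption made for this example:

```python
def gamma_general(r, w=None):
    """Gwet's gamma (AC1/AC2) from an items-by-categories count table.

    r[i][k] = number of raters who assigned item i to category k.
    w = q-by-q weight matrix; defaults to identity weights (AC1).
    """
    n = len(r)                               # total number of items
    q = len(r[0])                            # total number of categories
    if w is None:
        w = [[1.0 if k == l else 0.0 for l in range(q)] for k in range(q)]
    r_i = [sum(row) for row in r]            # raters per item

    # p_o: average weighted agreement over items coded by 2+ raters
    multi = [i for i in range(n) if r_i[i] >= 2]
    n_prime = len(multi)
    p_o = 0.0
    for i in multi:
        # r*_ik = sum over l of w_kl * r_il
        r_star = [sum(w[k][l] * r[i][l] for l in range(q)) for k in range(q)]
        p_o += (sum(r[i][k] * (r_star[k] - 1) for k in range(q))
                / (r_i[i] * (r_i[i] - 1)))
    p_o /= n_prime

    # p_c: chance agreement from average category proportions
    T_w = sum(sum(row) for row in w)
    pi = [sum(r[i][k] / r_i[i] for i in range(n)) / n for k in range(q)]
    p_c = T_w / (q * (q - 1)) * sum(p * (1 - p) for p in pi)

    return (p_o - p_c) / (1 - p_c)
```

With two raters, two categories, and identity weights, this reduces to the simplified formulas: a table of 20 joint assignments to category 1, 20 to category 2, and 10 disagreements again yields a gamma of $0.6$.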

References

  1. Gwet, K. L. (2008). Computing inter-rater reliability and its variance in the presence of high agreement. The British Journal of Mathematical and Statistical Psychology, 61(1), 29–48.
  2. Gwet, K. L. (2014). Handbook of inter-rater reliability: The definitive guide to measuring the extent of agreement among raters (4th ed.). Gaithersburg, MD: Advanced Analytics.