Quantifying the relationship between variables is one of the most common tasks in research. The type of scale used to measure the variables is the key factor to consider when selecting an appropriate measure of association. There are four types of measurement scales, each distinguished by the relations that hold among its possible values. The nominal scale is the simplest to understand: its values are mere labels with no inherent order. Ordinal scales add the property that their values can be ranked in a meaningful order. On the interval scale, the differences between values are also meaningful. Last but not least, the ratio scale has an absolute zero; hence, dividing one value on the ratio scale by another is entirely appropriate.
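As an illustrative sketch (the variable names and data below are invented for this example), the operations that are meaningful on each of the four scales can be summarized in Python:

```python
# Hypothetical examples of the four measurement scales.

# Nominal: values are labels only; equality is the only meaningful comparison.
blood_type_1, blood_type_2 = "A", "O"
print(blood_type_1 == blood_type_2)      # comparing categories for equality is valid

# Ordinal: ranking is meaningful, but differences between ranks are not.
levels = ["low", "medium", "high"]
rank = {level: i for i, level in enumerate(levels)}
print(rank["high"] > rank["low"])        # ordering is valid

# Interval: differences are meaningful, but there is no true zero.
temp_c_today, temp_c_yesterday = 20.0, 10.0
print(temp_c_today - temp_c_yesterday)   # a 10-degree difference is meaningful
# Note: 20 C is NOT "twice as hot" as 10 C; ratios are not meaningful here.

# Ratio: an absolute zero exists, so ratios are meaningful.
weight_kg_a, weight_kg_b = 80.0, 40.0
print(weight_kg_a / weight_kg_b)         # "twice as heavy" is a valid statement
```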

### How to Choose Association Measures #

Suppose you want to use a single measure or coefficient to investigate the relationship between two variables. Several factors must be considered when choosing an appropriate measure of association; these are briefly covered in the Summary section. To simplify, we base our choice solely on the measurement scale of the variables. Based on this information, we select a suitable measure of association to analyse the relationship between the two variables.

Let’s start with the association between two nominal variables; the statistical tests involved are known as tests of nominal association. Tests on nominal variables are widely used in social science research, where the variables may record gender, color, religious affiliation, and so on. There are two approaches to summarizing the strength of the association between nominal variables:

- Coefficients based on symmetric measures derived from the chi-square (χ²) statistic
- Coefficients based on Proportional Reduction in Error (PRE)
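As a sketch of the PRE idea (the contingency table below is invented for this example), Goodman and Kruskal's lambda measures how much knowing one variable reduces the error made when predicting the other:

```python
import numpy as np

# Hypothetical 3x2 contingency table: rows are categories of the predictor,
# columns are categories of the variable we try to predict.
table = np.array([[30, 10],
                  [20, 25],
                  [ 5, 35]])

n = table.sum()

# E1: prediction error when ignoring the rows -- always guess the modal column.
e1 = n - table.sum(axis=0).max()

# E2: prediction error when using the rows -- guess each row's modal column.
e2 = n - table.max(axis=1).sum()

# Lambda is the proportional reduction in prediction error.
lam = (e1 - e2) / e1
print(round(lam, 3))  # prints 0.364 for this table
```

A lambda of 0 means the predictor is useless; a lambda of 1 means the column category can be predicted perfectly from the row category.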

The chi-square-based symmetric measures most frequently used for nominal association are:

- Pearson Chi-square
- Maximum-likelihood Chi-square
- Tschuprow’s T
- Yates’s correction for continuity
- Fisher Exact Test
- Contingency Coefficient
- Phi Coefficient
- Cramér’s V
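Several of these statistics are available in SciPy. Here is a minimal sketch, assuming SciPy 1.7 or later for `scipy.stats.contingency.association`; the 2×2 table is invented for the example:

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact
from scipy.stats.contingency import association

# Hypothetical 2x2 table: e.g. treatment group (rows) vs. outcome (columns).
table = np.array([[20, 15],
                  [10, 30]])

# Pearson chi-square; for 2x2 tables SciPy applies Yates's continuity
# correction by default.
chi2, p, dof, expected = chi2_contingency(table)
print("chi-square:", round(chi2, 3), "p-value:", round(p, 4))

# Maximum-likelihood (G-test) chi-square, without the continuity correction.
g, p_g, _, _ = chi2_contingency(table, lambda_="log-likelihood",
                                correction=False)
print("ML chi-square:", round(g, 3))

# Symmetric coefficients derived from the chi-square statistic.
print("Cramér's V:    ", round(association(table, method="cramer"), 3))
print("Tschuprow's T: ", round(association(table, method="tschuprow"), 3))
print("contingency C: ", round(association(table, method="pearson"), 3))

# Fisher's exact test (defined for 2x2 tables only).
odds_ratio, p_exact = fisher_exact(table)
print("Fisher exact p-value:", round(p_exact, 4))
```

For a 2×2 table, Cramér's V coincides with the absolute value of the phi coefficient, which is why phi is not computed separately here.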

### Further Reading #

- Agresti, Alan (1996). Introduction to categorical data analysis. NY: John Wiley and Sons.
- Bewick V, Cheek L, Ball J. Statistics review 8: Qualitative data – tests of association. Crit Care. 2004;8:46–53.
- Goodman, Leo A. and W. H. Kruskal (1954, 1959, 1963, 1972). Measures for association for cross-classification, I, II, III and IV. Journal of the American Statistical Association. 49: 732-764, 54: 123-163, 58: 310-364, and 67: 415-421 respectively. The 1972 installment discusses the uncertainty coefficient.
- Liebetrau, Albert M. (1983). Measures of association. Newbury Park, CA: Sage Publications. Quantitative Applications in the Social Sciences Series No. 32.
- Miller R, Siegmund D. Maximally selected Chi-square statistics. Biometrics. 1982;38:1101–
- Rosenberg, M. (1968). The logic of survey analysis. NY: Basic Books.
- Scott M, Flaherty D, Currall J. Statistics: Dealing with categorical data. J Small Anim Pract. 2013;54:3–8.
- Streiner D. Chapter 3: Breaking up is hard to do: The heartbreak of dichotomizing continuous data. In: Streiner DA, editor. Guide for the Statistically Perplexed. Buffalo, NY: University of Toronto Press; 2013.