Medicine LibreTexts

Glossary

    Glossary Entries
    Word(s) Definition
    2x2 Table A convenient way for epidemiologists to organize data, from which one calculates either measures of association or test characteristics.        
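As an illustration of how the common measures of association fall out of a 2x2 table, here is a minimal Python sketch with hypothetical counts (the cell labels A, B, C, D follow the standard notation used throughout this glossary):

```python
# Illustrative sketch with hypothetical counts (not from the glossary).
# Standard epidemiologic 2x2 layout, exposure as rows, disease as columns:
#
#                 Diseased   Not diseased
#   Exposed          A            B
#   Unexposed        C            D

A, B, C, D = 40, 160, 20, 180  # made-up counts

risk_exposed = A / (A + B)     # incidence proportion among the exposed
risk_unexposed = C / (C + D)   # incidence proportion among the unexposed

risk_ratio = risk_exposed / risk_unexposed       # relative measure (division)
risk_difference = risk_exposed - risk_unexposed  # absolute measure (subtraction)
odds_ratio = (A * D) / (B * C)                   # AD/BC

print(risk_ratio, risk_difference, odds_ratio)
```

With these counts the risk ratio is 2.0, the risk difference 0.1, and the odds ratio 2.25 — note the odds ratio is farther from the null than the risk ratio, a general pattern when the outcome is common.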
    Absolute measure of association A measure of association calculated fundamentally by subtraction. See also risk difference.        
    Absolute risk See Incidence.        
    Attributable fraction A misleading measure of association that supposedly quantifies the proportion of cases of disease that can be “attributed” to a particular exposure. However, since every case of disease has more than one cause, the attributable fractions for all relevant exposures will sum to well over 100%, making the attributable fraction uninterpretable.        
    Baseline The start of a cohort study or randomized controlled trial.        
    Bias Systematic error. Selection bias stems from poor sampling (your sample is not representative of the target population), a poor response rate from those invited to be in a study, treating cases and controls or exposed/unexposed differently, and/or unequal loss to follow-up between groups. To assess selection bias, ask yourself "who did they get, and who did they miss?"--and then also ask yourself "does it matter?" Sometimes it does; other times it doesn't. Misclassification bias means that something (either the exposure, the outcome, a confounder, or all three) was measured improperly. Examples include people not being able to tell you something, people not being willing to tell you something, and an objective measure that is somehow systematically wrong (eg, always off in the same direction, like a blood pressure cuff that is not zeroed correctly). Recall bias, social desirability bias, interviewer bias--these are all examples of misclassification bias. The end result of all of them is that people are put into the wrong box in a 2x2 table. If the misclassification is equally distributed between the groups (eg, both exposed and unexposed have equal chance of being put in the wrong box), it's non-differential misclassification. Otherwise, it's differential misclassification.
    Case-control study An observational study that begins by selecting cases (people with the disease) from the target population. One then selects controls (people without the disease)–importantly, the controls must come from the same target population as cases (so, if they suddenly developed the disease, they’d be a case). Also, selection of both cases and controls is done without regard to exposure status. After selecting both cases and controls, one then determines their previous exposure(s). This is a retrospective study design, and as such, more prone to things like recall bias than prospective designs. Case-control studies are necessary if the disease is rare and/or if the disease has a long induction period. The only appropriate measure of association is the odds ratio, because one cannot measure incidence in a case-control study.        
    Censored time Time during which a given person is not contributing person-time at risk to a cohort study or randomized controlled trial. Left censoring happens before the person begins to contribute person-time at risk (because they are not yet enrolled in the study, even though the study has started), and right censoring happens after a person stops contributing person-time at risk (because they experienced the event of interest, a competing risk, or were lost to follow-up).        
    Cohort study An observational design. Usually prospective, in which case one selects a sample of at-risk (non-diseased) people from the target population, assesses their exposure status, and then follows them over time looking for incident cases of disease. Because we measure incidence, the usual measure of association is either the risk ratio or the rate ratio, though occasionally one will see odds ratios reported instead. If the exposure under study is common (>10%), one can just select a sample from the target population; however, if the exposure is rare, then exposed persons are sampled deliberately. (Cohort studies are the only design available for rare exposures.) This whole thing can be done in a retrospective manner if one has access to existing records (employment or medical records, usually) from which one can go back and "create" the cohort of at-risk folks, measure their exposure status at that time, and then "follow" them and note who became diseased.        
    Comorbidity/comorbid condition If a person has more than one disease at a time, all such diseases for that person are known as comorbidities or comorbid conditions.        
    Competing risks In a cohort study or randomized controlled trial, competing risks are defined as “everything else that might kill someone or otherwise make them no longer at risk of the outcome under study.” So, if we are studying ovarian cancer, then possible competing risks are fatal motor vehicle accidents, fatal heart attacks, etc., as well as oophorectomy (surgical removal of the ovaries). If someone experiences a competing risk, they no longer contribute person-time at risk.        
    Confidence interval A way of quantifying random error. The correct interpretation of a confidence interval is: if you repeated the study 100 times (go back to your target population, get a new sample, measure everything, do the analysis), then 95 times out of 100 the confidence interval you calculate as part of this process will include the true value, assuming the study contains no bias. Here, the true value is the one that you would get if you were able to enroll everyone from the population into your study--this is almost never actually observable, since populations are usually too large to have everyone included in a sample. Corollary: If your population is small enough that you can have everyone in your study, then calculating a confidence interval is moot.        
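The repeated-sampling interpretation above can be demonstrated with a small simulation. This is a hypothetical sketch (made-up population parameters, normal-based intervals), not a method from the glossary:

```python
# Hypothetical simulation sketch of the repeated-sampling interpretation:
# draw many samples from a known population, build a 95% confidence interval
# for the mean each time, and count how often the interval covers the truth.
import random
import statistics

random.seed(1)
true_mean, sd, n, reps = 50.0, 10.0, 100, 1000
covered = 0
for _ in range(reps):
    sample = [random.gauss(true_mean, sd) for _ in range(n)]
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / n ** 0.5  # estimated standard error
    if m - 1.96 * se <= true_mean <= m + 1.96 * se:
        covered += 1

print(covered / reps)  # close to 0.95, the interval's stated coverage
```

Any single interval either covers the true value or it doesn't; the "95%" describes the procedure over repeated studies, which is what the loop makes concrete.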
    Confounding A systematic error in a study (some people call it a bias; I prefer not to) that is caused by a third variable interfering in the exposure-disease relationship.        
    Count A measure of disease frequency used in lieu of prevalence when the disease is extremely rare.        
    Cross-sectional study An observational study design in which one takes a sample from the target population and assesses their exposure and disease status at that one point in time. One is capturing prevalent cases of disease; thus the odds ratio is the correct measure of association. Cross-sectional studies are good because they are quick and cheap; however, one is faced with the chicken-and-egg problem of not knowing whether the exposure came before the disease.
    Cumulative incidence See Incidence Proportion.        
    Descriptive epidemiology A summary of what is known about a particular condition, including data on incidence, prevalence, and known risk factors.        
    Determinants Things that cause or prevent disease. Also called “causes.”        
    Diagnostic testing Applying a clinical test to a person who has presented with symptoms, to aid in determining what condition the person has, so that they can be correctly treated.        
    Differential misclassification bias Misclassification that occurs in one study group more than another. Adversely affects internal validity.        
    Disproportionately distributed Refers to a situation wherein exposed individuals have either more or less of the disease of interest (or diseased individuals have either more or less of the exposure of interest) than unexposed individuals.        
    Ecologic fallacy A logical error that stems from applying group-level characteristics to individuals.        
    Effect modification Refers to the scenario when the relationship between an exposure and an outcome varies on the basis of a third variable. For instance, perhaps yoga prevents ACL injuries in females but not males. Sex in that scenario is the effect modifier. Effect modification is not the same as confounding.        
    Endemic The amount of a disease usually found in a given area. Known through surveillance.        
    Epidemic The occurrence, in a community or region, of cases of an illness (or specific health-related behaviour or other health-related events) clearly in excess of normal expectancy. Epidemiologists and other public health professionals keep track of what levels are "expected" through surveillance.        
    Epidemiology The study of the distribution and determinants of disease or other health-related events in human populations, and the application of that study to prevent and control health problems.        
    Etiology The sum of what is known about how a disease process develops within an individual, including known determinants.        
    External validity The extent to which we can apply a study’s results to other people in the target population. Synonymous with generalizability. External validity is irrelevant if a study lacks internal validity.        
    Generalizability See external validity.        
    Gold standard The best that is currently available. Not necessarily the most feasible.        
    Incidence A measure of disease frequency that quantifies occurrence of new disease. There are two types, incidence proportion and incidence rate. Both of these have “number of new cases” as the numerator; both can be referred to as just “incidence.” Both must include time in the units, either actual time or person-time. Also called absolute risk.        
    Incidence density See incidence rate.        
    Incidence proportion A measure of disease frequency. The numerator is "number of new cases" and the denominator is "the number of people who were at risk at the start of follow-up." If the denominator is unknown, you can sometimes substitute the population at the mid-point of follow-up. For example, consider the incidence of ovarian cancer in Oregon: we would know how many new cases popped up in a given year, via cancer surveillance systems, and to estimate the incidence proportion we could divide by the number of women living in Oregon on July 1 of that year. This is of course only an estimate of the true incidence proportion, as we don't know exactly how many women lived there, nor which of them might not have been at risk of ovarian cancer. The units for incidence proportion are "per unit time." You can adjust this if necessary (ie, if you follow people for 1 month, you can multiply by 12 to estimate the incidence for 1 year). You can (read: should) also adjust the final answer so that it looks "nice." For instance, 13.6/100,000 in 1 year is easier to comprehend than 0.000136 in 1 year. Also called risk and cumulative incidence.
    Incidence rate A measure of disease frequency. The numerator is "number of new cases." The denominator is "sum of the person-time at risk." The units for incidence rate are "per person-[time unit]", usually but not always person-years. You can (and should) adjust the final answer so that it looks "nice." For instance, instead of 3.75/297 person-years, write 12.6 per 1000 person-years. Also called incidence density.        
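The rescaling described in this entry can be sketched with its own example numbers (a hypothetical illustration, not prescribed code):

```python
# Sketch using the glossary entry's own numbers: rescale an incidence rate
# so it reads "nicely" as cases per 1000 person-years.
new_cases = 3.75       # numerator from the example above
person_years = 297.0   # sum of person-time at risk

rate = new_cases / person_years  # per person-year (about 0.0126)
rate_per_1000 = rate * 1000      # per 1000 person-years

print(round(rate_per_1000, 1))   # 12.6 per 1000 person-years
```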
    Incident cases All new cases of a particular disease, arising over some period of time.        
    Incubation period The amount of time between an exposure and the onset of symptoms. Roughly, the induction period plus the latent period.        
    Induction period The amount of time between an exposure and the biological onset of disease. Depending on the exposure/disease pair in question, can vary from minutes for some potent toxins to decades for many chronic diseases.        
    Internal validity The extent to which a study’s methods are sufficiently correct that we can believe the findings as they apply to that study sample.
    Latent period The amount of time between biological onset of disease and diagnosis. Depending on the disease, can be highly variable in length, from hours to years. Duration of the latent period also varies depending on access to healthcare.
    Measure of association Quantifies the degree to which a given exposure and outcome are related statistically. Implies nothing about whether the association is causal. Examples of measures of association are odds ratios, risk ratios, rate ratios, risk differences, etc.        
    Measures of disease frequency Quantifies how much disease is in a population. See count, incidence, and prevalence.        
    Misclassification bias Systematic error that results from something (either the exposure, the outcome, a confounder, or all three) having been measured incorrectly. Examples include people not being able to tell you something, people not being willing to tell you something, and an objective measure that is somehow systematically wrong (eg, always off in the same direction, like a blood pressure cuff that is not zeroed correctly). Recall bias, social desirability bias, interviewer bias-–these are all examples of misclassification bias. The end result of all of them is that people are put into the wrong box in a 2×2 table. If the misclassification is equally distributed between the groups (eg, both exposed and unexposed have equal chance of being put in the wrong box), it’s non-differential misclassification. Otherwise, it’s differential misclassification.        
    Missing at random All studies have missing data, and many statistical analyses assume that they are missing at random, meaning any given participant is as likely as any other to have missing data. This assumption is almost never met; the kinds of participants who have missing data are usually fundamentally different than those who have more complete data.        
    Morbidity Any adverse health outcome short of death.        
    Mortality Death.        
    Negative predictive value (NPV) One of four test characteristics used to describe the accuracy of screening/diagnostic tests. NPV is the probability that one does not have the disease, given that one tested negative. Calculated as D/(C+D) or TN/(FN+TN) in standard 2×2 notation. Varies as disease prevalence varies.
    Non-differential misclassification Misclassification that occurs equally among all groups.        
    Null hypothesis Used in statistical significance testing. The null hypothesis is always that there is no difference between the two groups under study.
    Null value The value taken by a measure of association if the exposure and disease are not related. Is equal to 1.0 for relative measures of association, and equal to 0.0 for absolute measures of association.        
    Observational studies All study designs in which participants choose their own exposure groups. Includes cohort, case-control, and cross-sectional studies. Basically, includes all designs other than randomized controlled trials.
    Odds ratio A measure of association, used in study designs that deal with prevalent cases of disease (case-control, cross-sectional). Calculated as AD/BC, from a standard 2x2 table. Abbreviated OR.        
    P-value A way of quantifying random error. The correct interpretation of a p-value is: the probability that, if you repeated the study (go back to the target population, draw a new sample, measure everything, do the analysis), you would find a result at least as extreme as the one you observed, assuming the null hypothesis is true. If it's actually true that there's no difference between the groups, but your study found that there were 15% more smokers in group A with a p-value of 0.06, then that means there's a 6% chance that, if you repeated the study, you'd again find 15% (or a bigger number) more smokers in one of the groups. In public health and clinical research, we usually use a cut-off of p < 0.05 to mean "statistically significant"--so, we are allowing a type I error rate of 5%. Thus, 5% of the time we'll "find" something, even though really there isn't a difference (ie, even though really the null hypothesis is true). The other 95% of the time, we correctly fail to reject the null hypothesis and conclude that there is no difference between the groups.
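The 5% type I error rate described above can be seen in a quick simulation. This is a hypothetical sketch using a two-sample z-test on data generated under a true null (both groups drawn from the same population); the numbers and test choice are illustrative only:

```python
# Hypothetical simulation: when the null hypothesis is true, a p < 0.05
# criterion still "finds" a difference about 5% of the time.
import random

random.seed(7)
reps, n = 2000, 50
false_positives = 0
for _ in range(reps):
    a = [random.gauss(0, 1) for _ in range(n)]  # group A, true mean 0
    b = [random.gauss(0, 1) for _ in range(n)]  # group B, same population
    # z statistic for the difference in means; sd is 1 by construction
    z = (sum(a) / n - sum(b) / n) / (2 / n) ** 0.5
    if abs(z) > 1.96:  # two-sided p < 0.05
        false_positives += 1

print(false_positives / reps)  # should land close to 0.05
```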
    Period prevalence Prevalence calculated over a period of time rather than at a single moment. Used for short-duration infectious diseases or injuries.
    Person-time at risk (PTAR) For participants enrolled in a cohort study or randomized controlled trial, this is the amount of time each person spent at risk of the disease or health outcome. A person stops accumulating person-time at risk (usually shortened to just "person-time") when: (1) they are lost to follow-up; (2) they die of, or are otherwise removed from risk by, something other than the disease under study (ie, they experience a competing risk); (3) they experience the disease or health outcome under study (now they are an incident case); or (4) the study ends. Each person enrolled in such a study could accumulate a different amount of person-time at risk.
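A minimal sketch of how person-time accumulates, with hypothetical participants and stopping reasons (the four ways follow-up can end, per the entry above):

```python
# Hypothetical tiny cohort: each participant's follow-up ends at the earliest
# of the event itself, a competing risk, loss to follow-up, or study end.
participants = [
    {"id": 1, "years_at_risk": 5.0, "why_stopped": "study ended"},
    {"id": 2, "years_at_risk": 2.5, "why_stopped": "incident case"},
    {"id": 3, "years_at_risk": 1.0, "why_stopped": "lost to follow-up"},
    {"id": 4, "years_at_risk": 3.5, "why_stopped": "competing risk"},
]

total_ptar = sum(p["years_at_risk"] for p in participants)  # 12.0 person-years
cases = sum(p["why_stopped"] == "incident case" for p in participants)  # 1

incidence_rate = cases / total_ptar  # 1 case per 12 person-years
print(total_ptar, cases, incidence_rate)
```

Note that everyone contributes a different amount of person-time, and only the denominator cares how each person's follow-up ended.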
    Point estimate The measure of association that is calculated in a study. Typically presented with a corresponding 95% confidence interval.        
    Point prevalence Prevalence calculated at a specific moment in time.        
    Population A group of people who share a common characteristic.        
    Population at risk All individuals in a population who (1) have not yet experienced the disease or health outcome under study; and (2) are capable of experiencing that disease or health outcome. In other words, the population at risk excludes all prevalent cases, as well as those who for some reason could never experience the outcome (eg, biological males cannot have endometrial cancer). It is not always possible to correctly identify those in the latter group, depending on the disease or health outcome in question. For instance, technically, if we were studying pregnancy, we would need to exclude all women who are either themselves infertile or who are in a monogamous relationship with a man who is infertile. However, in practice it is difficult to identify infertile couples (those who have never tried to get pregnant won't know they're infertile); in such a scenario one would just acknowledge the limitation (that the calculation of population at risk was imperfect, and why).        
    Positive predictive value (PPV) One of four test characteristics used to describe the accuracy of screening/diagnostic tests. PPV is the probability that one has the disease, given that one tested positive. Calculated as A/(A+B) or TP/(TP+FP) in standard 2×2 notation. Varies as disease prevalence varies.        
    Power The probability that your study will find something that is there. Power = 1 – β; beta is the type II error rate. Small studies, or studies of rare events, are typically under-powered.        
    Prevalence A measure of disease frequency that quantifies existing cases. The numerator is "all cases" and the denominator is "the number of people in the population." Usually expressed as a percent unless the prevalence is quite low, in which case write it as "per 1000" or "per 10,000" or similar. There are no units for prevalence, though it is understood that the number refers to a particular point in time.        
    Prognosis The likely course of a disease; how well someone with the disease will fare, given current treatment regimens.        
    Prophylaxis Treatment undertaken in an attempt to prevent a poor outcome. It is designed specifically to prevent, not to treat. For instance, in chapter 9, there is discussion of “risk-reducing mastectomy”—prophylactic removal of breasts in women at very high risk of breast cancer. The mastectomy occurs prior to the cancer, in an attempt to prevent the cancer from occurring. As another example, health care workers known to have been exposed to HIV (e.g., from an accidental needle stick) are offered prophylactic anti-retroviral drugs, in an attempt to prevent their bodies from seroconverting/becoming infected with HIV.        
    Prospective cohort study See cohort study.        
    Public health surveillance See surveillance.        
    Publication bias Bias in the state of the literature on a particular topic that results from journals preferentially publishing papers with exciting results, rather than those showing no effect.        
    Random error Inherent in all measurements. “Noise” in the data. Will always be present, but the amount depends on how precise your measurement instruments are. For instance, bathroom scales usually have 0.5 – 1 pound of random error; physics laboratories often contain scales that have only a few micrograms of random error (those are more expensive, and can only weigh small quantities). One can reduce the amount by which random error affects study results by increasing the sample size. This does not eliminate the random error, but rather better allows the researcher to see the data within the noise. Corollary: increasing the sample size will decrease the p-value, and narrow the confidence interval, since these are ways of quantifying random error.        
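The sample-size effect described above can be sketched via the width of a Wald 95% confidence interval for a proportion (hypothetical numbers; the Wald formula is used purely for illustration):

```python
# Hypothetical sketch: the width of a 95% confidence interval for a
# proportion shrinks as the sample size grows -- the random error is still
# there, but a larger sample lets you see the signal within the noise.
p = 0.3  # observed proportion, held fixed across sample sizes

def ci_width(n):
    se = (p * (1 - p) / n) ** 0.5  # standard error of a proportion
    return 2 * 1.96 * se           # full width of the Wald 95% interval

for n in (100, 400, 10000):
    print(n, round(ci_width(n), 3))
# quadrupling n halves the width; a 100x larger sample shrinks it 10x
```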
    Randomized controlled trial (RCT) An intervention (experimental) study. Just like a prospective cohort except that the investigator tells people randomly whether they will be exposed or not. So, grab an at-risk (non-diseased) sample from the target population, randomly assign half of them to be exposed and half to be non-exposed, then follow looking for incident cases of disease. The correct measure of association is the risk ratio or rate ratio. If done with a large enough sample, RCTs will be free from confounders (this is their major strength), because all potential co-variables will be equally distributed between the two groups (thus making it so that no co-variables are associated with the exposure, a necessary criterion for a confounder). Note that the ‘random’ part is in assigning the exposure, NOT in getting a sample (it does not need to be a ‘random sample’). RCTs are often not do-able because of ethical concerns.        
    Rate ratio A measure of association calculated for studies that observe incident cases of disease (cohorts or RCTs). Calculated as the incidence rate in the exposed divided by the incidence rate in the unexposed--that is, (A/person-time in the exposed) / (C/person-time in the unexposed), from a standard 2x2 table. Note that 2x2 tables for cohorts and RCTs show the results at the end of the study--by definition, at the beginning, no one was diseased. See also risk ratio and relative risk. Abbreviated RR.
    Recall bias A subset of misclassification bias that specifically results from people being unable to accurately recall past exposures.        
    Relative measure of association A measure of association calculated fundamentally by division. See also risk ratio, rate ratio, relative risk, odds ratio.        
    Relative risk Abbreviated RR. Can refer either to risk ratio or rate ratio--because of this uncertainty, this term is not used in this book.        
    Retrospective cohort study See cohort study.        
    Risk See Incidence Proportion.        
    Risk difference A measure of association calculated for studies that observe incident cases of disease (cohorts or RCTs). Calculated as the incidence proportion in the exposed minus the incidence proportion in the unexposed.        
    Risk factors Variables known to be associated with a disease. May or may not be causally related.
    Risk ratio A measure of association calculated for studies that observe incident cases of disease (cohorts or RCTs). Calculated as the incidence proportion in the exposed over the incidence proportion in the unexposed, or A/(A+B) / C/(C+D), from a standard 2x2 table. Note that 2x2 tables for cohorts and RCTs show the results at the end of the study--by definition, at the beginning, no one was diseased. See also rate ratio and relative risk. Abbreviated RR.        
    Sample The group actually enrolled in a study. Hopefully the sample is sufficiently similar to the target population that we can say something about the target population, based on results from our sample. In epidemiology we often don’t worry about getting a “random sample”–that’s necessary if we’re asking about opinions or health behaviours or other things that might vary widely by demographics, but not if we’re measuring disease etiology or biology or something else that will likely NOT vary widely by demographics (for instance, the mechanism for developing insulin resistance is likely the same in all humans). Nonetheless, if the sample is different enough from the target population, that is a form of selection bias, and can be detrimental in terms of external validity.
    Screening Applying a clinical test to asymptomatic individuals, on the theory that finding (and treating) the disease earlier will lead to better outcomes.        
    Selection bias A type of systematic error resulting from who chooses/is chosen to be in a study and/or who drops out of a study. Can affect either internal validity or external validity.        
    Sensitivity One of four test characteristics used to describe the accuracy of screening/diagnostic tests. Sensitivity is the probability that one tests positive, given that one has the disease. Calculated as A/(A+C) or TP/(TP+FN) in standard 2×2 notation. Does not vary as disease prevalence varies.        
    Specificity One of four test characteristics used to describe the accuracy of screening/diagnostic tests. Specificity is the probability that one tests negative, given that one does not have the disease. Calculated as D/(B+D) or TN/(TN+FP) in standard 2×2 notation. Does not vary as disease prevalence varies.        
    Statistical significance A somewhat-arbitrary method for determining whether or not to believe the results of a study. In clinical and epidemiologic research, statistical significance is typically set at p < 0.05, meaning a type I error rate of <5%. As with all statistical methods, pertains to random error only; a study can be statistically significant but not believable, eg, if there is likelihood of substantial bias. A study can also be statistically significant (eg, p was < 0.05) but not clinically significant (eg, if the difference in systolic blood pressure between the two groups was 2 mm Hg—with a large enough sample this would be statistically significant, but it matters not at all clinically).
    Surveillance The ongoing, systematic collection, analysis, and interpretation of health data, essential to the planning, implementation, and evaluation of public health practice, closely integrated with the timely dissemination to those who need to know. Surveillance both (1) provides information for descriptive epidemiology (person, place, time), and (2) allows us to know what "normal" is, so that potential epidemics are identified early. Also called public health surveillance.        
    Target population The group about which we want to be able to say something. One only very rarely is able to enroll the entire target population into a study (since it would be millions and millions of people), and so instead we draw a sample, and do the study with them. In epidemiology we often don't worry about getting a "random sample"--that's necessary if we're asking about opinions or health behaviors or other things that might vary widely by demographics, but not if we're measuring disease etiology or biology or something else that will likely not vary widely by demographics (for instance, the mechanism for developing insulin resistance is the same in all humans).        
    Test characteristics Four numerical summaries that describe different aspects of screening/diagnostic test accuracy. Two of the test characteristics (sensitivity and specificity) are “fixed,” meaning their values do not change as disease prevalence changes. The other two (positive predictive value and negative predictive value) do change as disease prevalence changes.        
    Type I error The probability that a study “finds” something that isn’t there. Typically represented by α, and closely related to p-values. Usually set to 0.05 for clinical and epidemiologic studies.        
    Type II error The probability that a study did not find something that was there. Typically represented by β, and closely related to power (power = 1 − β). Ideally β will be below 0.10 (ie, power above 90%) for clinical and epidemiologic studies, though in practice this often does not happen.