Transcript for:
Critical Appraisal of Cohort Studies

Welcome to the fourth video in this series of critical appraisal modules. In this module we will be focusing on the critical appraisal of cohort studies using the Critical Appraisal Skills Programme or CASP approach. In the previous module we spoke to you about RCTs and their importance in healthcare, particularly in terms of being able to attribute causality.

You may ask why we need observational studies if the RCT design is so strong in terms of attributing causality. However, it isn't always possible to test some hypotheses in trials, it is sometimes unethical to undertake a trial of an agent we believe could be harmful, and it is not feasible to study some very rare outcomes in trials.

For this reason, observational studies do have an important place in healthcare. Generally, in observational studies in health, researchers are interested in risk factors or exposures, and outcomes often diseases. Unlike in experimental studies, researchers do not manipulate exposures in observational studies.

They simply observe what is occurring and can then calculate measures to quantify the extent of the risk. For the learning outcomes, we will introduce you to cohort studies, describe their purpose and value in the context of healthcare research, and show you how we can critically appraise one using the CASP checklist. We will also talk about the risk ratio, how to calculate it and how to interpret it in cohort studies. Finally, there will be a link to a short quiz at the end of this video which will give you the opportunity to test your knowledge of the concepts we have discussed using multiple choice questions and answers.

A cohort study is the strongest research design of the observational studies. It generally involves the researcher identifying research participants who do not have the outcome of interest. The researcher then classifies participants according to their exposure status and follows the participants over time to see whether or not they develop the outcome in question.

To take a simple example, if a researcher was interested in whether or not smoking causes lung cancer, the researcher would identify a group of people who do not have lung cancer and then classify them according to whether or not they are smokers. The researcher would then follow up the participants over time to compare rates of lung cancer between the groups in the cohort according to their smoking status. So how do we express the different rate in risk between exposed and unexposed members of the cohort?

The most common method of doing this is the risk ratio. The risk ratio is the ratio of the incidence of disease in the exposed group to the incidence of disease in the unexposed group. Incidence refers to new cases of a disease. Relative risk can quantify the strength of an association between exposures and outcomes. If the relative risk is greater than one, it means that the exposure increases the risk of disease.

The higher the number, the greater the risk. If the relative risk is less than one, it means that the exposure decreases the risk of disease. The lower the number, the more the factor is protective.

If the risk ratio is exactly one, then it means there is no difference in risk between exposed and unexposed individuals. So let's take an example in which we'll calculate the relative risk together. We'll look at some fictional data. Imagine that we're still thinking about the relationship between smoking and lung cancer. We might have conducted a cohort study and obtained the following results.

We can calculate the relative risk by looking at the data. Calculate the risk ratio by first calculating the risk of developing lung cancer in the exposed group, then calculating the risk of developing lung cancer in the unexposed group, and then dividing the risk in the exposed group by the risk in the unexposed group. I've given key sections of the table the letters A, B, C and D, as this is a common convention. To calculate the risk of developing lung cancer in the exposed group, it's A divided by A plus B, which gives 0.85.

To calculate the risk of developing lung cancer in the unexposed group, it's C divided by C plus D, which gives 0.05. To express the risk in the exposed group relative to the unexposed group, divide 0.85 by 0.05, which gives 17. So that means that people who smoke are 17 times more likely to develop lung cancer than non-smokers, according to this fictional data. Cohort studies are an extremely useful study design for quantifying the strength of association between an exposure and an outcome. However, like all research studies in healthcare, their quality can vary, so it is important that readers of cohort studies critically appraise the quality of the research. The CASP programme has produced a checklist for critically appraising cohort studies which you can access by following the link below this video.
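Before we move on to the checklist, here is a minimal sketch in Python of the risk ratio calculation we have just worked through. The cell counts shown are not taken from the video's table; they are illustrative values chosen so that the two risks come out at 0.85 and 0.05, as in the example.

```python
# Minimal sketch: risk ratio from a 2x2 table.
# The cell counts are illustrative only, chosen so the risks match
# the worked example (0.85 in the exposed group, 0.05 in the unexposed group).

def risk_ratio(a, b, c, d):
    """a = exposed with outcome, b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome."""
    risk_exposed = a / (a + b)      # incidence in the exposed group
    risk_unexposed = c / (c + d)    # incidence in the unexposed group
    return risk_exposed / risk_unexposed

print(risk_ratio(a=85, b=15, c=5, d=95))  # prints 17.0
```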

Let's have a look at this checklist and then we'll apply it to a sample cohort study. We can see that the CASP cohort studies checklist again separates the three key principles of critical appraisal, validity, trustworthiness of results, and value or relevance, into three sections A, B and C respectively. Let's see how they can be addressed with an example. The study we'll use to work through this checklist is by Gerhard et al., Lithium treatment and risk for dementia in adults with bipolar disorder: population-based cohort study, which you can also access by selecting the link below this video.

The first question in the checklist is to consider whether the cohort study examined a clearly focused question. The answer for this study would appear to be yes, as the study considered the association between lithium and dementia risk. The research question is contextualised against the biological background of lithium inhibiting glycogen synthase kinase 3, an enzyme implicated in the aetiology of dementia. The second question looks at whether the cohort was recruited in an acceptable way.

This question is mainly about selection bias. Bias is a systematic error in a research study which results in incorrect measurement of the relationship between exposure or intervention and outcome. Selection bias refers to bias in terms of the way participants were selected as research participants, i.e. were the research participants truly representative of the research population or are the participants in some way untypical?

This study's participants were people aged over 50 with a diagnosis of bipolar disorder, drawn from the Medicaid-insured population of eight large US states. Medicaid is a social health care program for Americans with disabilities or lower incomes. The third question considers a different type of bias, namely measurement bias. This refers to bias in terms of the way exposure and/or outcome are measured, which could lead to an incorrect estimate of the relationship between exposure and outcome.

It is important to consider factors such as whether or not exposures were assessed via self-report, as this may be affected by social desirability bias or the fallibility of memory. The exposure in this study was lithium use, and it was assessed via health administrative data, so its measurement is likely to be more reliable than self-report measurements. The next question relates to assessing possible measurement bias in measuring the outcome which, in this case, was development of dementia.

Things to consider for this criterion include whether the outcome was measured subjectively or objectively, with objective measures of outcome generally being more reliable, and whether or not assessors were blind to exposure status, that is, whether assessors knew whether or not a person had been exposed. If they know this, it might affect their assessment of the outcome, particularly for more subjective outcome measurements.

Sometimes, however, whether or not assessors are blind is less essential if the outcome is clear-cut. For example, this study's outcome is a factual, objective outcome, less likely to be affected by assessors' knowledge of exposure status, and the use of health administrative data lessens the likelihood that the results of the study have been compromised by measurement bias in assessing the outcome. The next question relates to confounding.

A confounder is a factor which is independently associated with an exposure and an outcome, and which can either hide a true relationship between an exposure and outcome, or make it seem like there is a relationship between exposure and outcome, when in fact there is no relationship. Possible confounders can often relate to age, gender or health-related behaviour such as dietary factors or whether or not people exercise. It is possible to estimate the impact which confounders may have on a research study by statistically adjusting for them when analysing the relationship between exposure and outcome.
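As a simple, hedged illustration of what adjustment means, the sketch below calculates a Mantel-Haenszel risk ratio, which pools stratum-specific two-by-two tables to adjust for a single confounder such as age group. The strata and counts are invented for illustration; real cohort studies typically adjust for several confounders at once using regression models.

```python
# Minimal sketch (made-up data): a Mantel-Haenszel risk ratio, which pools
# stratum-specific 2x2 tables to adjust for one confounder.
# Each stratum is (a, b, c, d): exposed cases, exposed non-cases,
# unexposed cases, unexposed non-cases.

def mantel_haenszel_rr(strata):
    numerator = 0.0
    denominator = 0.0
    for a, b, c, d in strata:
        n = a + b + c + d                # total participants in this stratum
        numerator += a * (c + d) / n     # exposed cases, weighted by unexposed group size
        denominator += c * (a + b) / n   # unexposed cases, weighted by exposed group size
    return numerator / denominator

# Hypothetical strata of a single confounder, for example age group.
strata = [
    (10, 90, 5, 95),   # younger participants
    (40, 60, 20, 80),  # older participants
]
print(round(mantel_haenszel_rr(strata), 2))  # pooled (adjusted) risk ratio, here 2.0
```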

In this study the authors present an unadjusted statistical model, a model which adjusted for gender, age and ethnicity, all of which are common confounders, and a statistical model which adjusted for age, gender and ethnicity as well as other factors which are potential confounders. The authors note that lithium-treated patients were found to have a lower risk of developing dementia in their study, but also that lithium-treated patients had lower baseline rates of cerebrovascular disease and diabetes, conditions which are risk factors for dementia and therefore factors which could be confounders. The next question relates to the time spent following up the cohort.

Cohort studies are generally undertaken over several years, and it is important that the study follow-up period is long enough for the outcome to manifest itself. It's also important that the study tries to collect data from as many people who started in the cohort as possible, and not to allow people to drop out, as the people who do drop out may not be typical of the ones who remain in the study, and this again could lead to an incorrect measurement of the relationship between exposure and outcome. The maximum follow-up period in this study was three years, and judgement is required as to whether this period is long enough to allow the outcome to develop, if it is going to occur.

In this study the authors found that 301 to 365 days of lithium exposure was associated with reduced dementia risk, but that no association was found for shorter durations of exposure to lithium. The measure used to express the association between lithium exposure and dementia risk is the hazard ratio, which is similar to the risk ratio we looked at earlier, but the hazard ratio accounts for the rate, or time period, at which events happen rather than just whether or not an event happened. The hazard ratio for 301 to 365 days of lithium exposure was 0.77, meaning that lithium exposure is protective because the hazard ratio is less than 1. Like risk ratios, if a hazard ratio is under 1 this means that an exposure is protective, whereas if a hazard ratio is higher than 1 this means that an exposure raises the risk of an outcome occurring.
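To make the hazard ratio a little more concrete, here is a minimal sketch that fits a Cox proportional hazards model to a tiny invented dataset using the lifelines library, which is a third-party package and is not part of the study's own analysis; the exponentiated coefficient for the exposure is the hazard ratio.

```python
# Minimal sketch with made-up data: estimating a hazard ratio from
# follow-up times and event indicators with a Cox proportional hazards model.
# Requires: pip install lifelines pandas
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "years_followed": [1.0, 2.5, 3.0, 0.8, 2.0, 3.0, 1.5, 2.8],  # follow-up time
    "dementia":       [1,   0,   0,   1,   1,   0,   1,   0],    # 1 = outcome occurred
    "lithium":        [1,   1,   1,   1,   0,   0,   0,   0],    # 1 = exposed
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years_followed", event_col="dementia")
print(cph.hazard_ratios_["lithium"])  # exp(coefficient) = hazard ratio for lithium
```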

The precision of results is represented by confidence intervals. Confidence intervals relate to the range in which the true value lies. A research study generally involves a sample rather than everyone in the research population, and even in a robustly conducted study there will always be some error in the sample compared to the population. A confidence interval quantifies this by acknowledging that while the sample produced a certain value, the true value in the population is likely to be within a certain range.

A narrower confidence interval indicates higher precision of results, so a confidence interval of 3 to 4 around a risk ratio of 3.5 would be more precise than a confidence interval of 2 to 7 around the same risk ratio of 3.5. The believability of results can be assessed by considering the factors we have previously discussed, such as confounding or bias.
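As a rough illustration of where such an interval comes from for a risk ratio, the sketch below applies the usual large-sample formula on the log scale, reusing the illustrative cell counts from the earlier smoking example; a published study would normally report intervals produced by its own statistical model.

```python
# Minimal sketch: approximate 95% confidence interval for a risk ratio,
# using the standard large-sample standard error of log(RR).
# Cell counts are the same illustrative values as in the earlier example.
import math

def risk_ratio_ci(a, b, c, d, z=1.96):
    rr = (a / (a + b)) / (c / (c + d))
    se_log_rr = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
    lower = math.exp(math.log(rr) - z * se_log_rr)
    upper = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lower, upper

rr, lower, upper = risk_ratio_ci(a=85, b=15, c=5, d=95)
print(f"RR = {rr:.1f}, 95% CI {lower:.1f} to {upper:.1f}")  # wide interval -> lower precision
```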

It's also important to consider whether the results may have been affected by chance. Assessing the believability of results also requires judgement and consideration of factors such as the biological plausibility of the relationship between exposure and outcome and the contextualisation of the cohort study in the wider body of research in the field. Questions 10, 11 and 12 of the CASP checklist follow on from this theme and require judgement to consider the potential local applicability of the results. The fifth module in this series will look at case-control studies, and we will follow a similar format to the one we used in this video to appraise an example of a recent study.

Thank you for listening. These training videos have been developed by the Cochrane Common Mental Disorders Group at the University of York with support from Tees, Esk and Wear Valleys NHS Foundation Trust, Northumberland, Tyne and Wear NHS Foundation Trust and the Economic and Social Research Council.

If you would like to test your knowledge on the topics introduced in this module please follow the link below which will take you to a short online quiz.