Risk of Bias in Dietary Supplementation Research
Scientific research is not free from bias, and choices in study design can introduce it; we aim to make these biases transparent in the research papers we use. In the Supplement AI dietary supplement study database, we analyze RCTs (studies where participants are randomly assigned to test the effects of an intervention against a control) that are open access (meaning the full papers are accessible to everyone). Studies that are not both open access and RCTs cannot accurately be assigned a risk of bias rating, so they are not analyzed for risk of bias.
Causes of Bias
To assess studies for bias, we use the gold-standard guidelines established by Cochrane's Risk of Bias tool (ROB 1.0), which assesses bias across four domains [1]:
- Random Sequence Generation (Selection Bias)
- Allocation Concealment (Selection Bias)
- Blinding of Participants and Personnel (Performance Bias)
- Blinding of Outcome Assessment (Detection Bias)
Random Sequence Generation (Selection Bias)
Random sequence generation assesses whether the process used to create the sequence that determines which participants go into each group (treatment or control) was truly random. For example, alternating participants (one goes into the treatment group, the next into the control group), or other restricted schemes whose pattern can be anticipated, is not truly random because the sequence is predictable. If the method isn't random, certain types of participants may be more likely to end up in one group than the other through intentional or subconscious choices (selection bias) [2]. Truly random methods, such as a computer-generated random number sequence, drawing lots, or a random number table, cannot be predicted, so they do not introduce selection bias and produce the comparable groups required for accurate results. Selection bias can distort results by creating differences between groups, leading to outcomes that reflect these imbalances rather than the true effect of the intervention (the treatment or action being tested in a study).
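To make the distinction concrete, here is a minimal Python sketch (purely illustrative; the function names and simple two-arm setup are our own, not from any study protocol) contrasting a predictable alternating scheme with a computer-generated random sequence:

```python
import random

def alternating_sequence(n):
    # Predictable "quasi-random" scheme: odd enrollees go to treatment,
    # even enrollees to control. Anyone can foresee the next assignment.
    return ["treatment" if i % 2 == 0 else "control" for i in range(n)]

def random_sequence(n, seed=None):
    # Computer-generated random sequence: each assignment is an
    # independent random draw that cannot be anticipated in advance.
    rng = random.Random(seed)
    return [rng.choice(["treatment", "control"]) for _ in range(n)]

print(alternating_sequence(6))  # always ['treatment', 'control', 'treatment', ...]
print(random_sequence(6))       # e.g. ['control', 'control', 'treatment', ...]
```

The alternating output is identical on every run, so an enroller can always foresee the next assignment; the random sequence cannot be anticipated without access to the generator itself.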
Allocation Concealment (Selection Bias)
Allocation concealment refers to whether the upcoming group assignments are hidden from the people enrolling participants in the study. Similar to random sequence generation, if the group assignments (treatment or control) are predictable, researchers might subconsciously or intentionally place certain participants in one group, creating selection bias [3]. For instance, if researchers know the next participant will be assigned to the treatment group, they might encourage the enrollment of a certain type of participant, which would affect the study outcomes. Without proper concealment, there's a risk that the allocation process could be influenced by researchers, even unintentionally, leading to outcomes that do not represent the true effect of the intervention.
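One common safeguard is the "sealed envelope" approach, where assignments are generated up front but revealed only after a participant is irreversibly enrolled. The sketch below is a hypothetical digital analogue (the ConcealedAllocator class and enroll method are invented for illustration, not a real library API):

```python
import random

class ConcealedAllocator:
    """Digital analogue of sealed, opaque envelopes (names are hypothetical)."""

    def __init__(self, n, seed=None):
        rng = random.Random(seed)
        # The full sequence exists up front but is never exposed to enrollers.
        self._envelopes = [rng.choice(["treatment", "control"]) for _ in range(n)]
        self._next = 0

    def enroll(self, participant_id):
        # The assignment is "opened" only after enrollment is committed,
        # so the enroller cannot foresee it and steer who gets enrolled.
        arm = self._envelopes[self._next]
        self._next += 1
        return participant_id, arm

allocator = ConcealedAllocator(n=100, seed=2024)
print(allocator.enroll("P001"))  # the arm is learned only after commitment
```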
Blinding of Participants and Personnel (Performance Bias)
Sufficient blinding keeps both the participants and the people administering the treatments unaware of which group the participants are in (treatment or control). If either party knows, their behavior or expectations could influence the study outcomes. For example, if participants know they're receiving the active treatment, they may expect better outcomes, and these expectations could affect their reported results. Similarly, if personnel know who's receiving the treatment, they might consciously or subconsciously treat them differently. Sufficient blinding practices include using identical placebos, preventing participants and personnel from telling group assignments apart; insufficient practices include visibly different treatments or open-label trials. When blinding is insufficient, it can introduce performance bias, where outcomes are influenced not only by the treatment itself but by people's expectations or behaviors.
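A typical mechanism for this is coded kits: identical-looking treatments are dispensed under neutral codes, and the code-to-arm key is held by an independent party until data collection ends. The following is a hypothetical sketch (make_kits and the kit-code format are our own inventions for illustration):

```python
import random

def make_kits(n, seed=None):
    # Randomly assign an arm to each kit, then label kits with neutral codes.
    rng = random.Random(seed)
    arms = [rng.choice(["treatment", "placebo"]) for _ in range(n)]
    kit_codes = [f"KIT-{i:03d}" for i in range(n)]
    # The code-to-arm key is stored by an independent party, not the site staff.
    blinding_key = dict(zip(kit_codes, arms))
    return kit_codes, blinding_key

kit_codes, blinding_key = make_kits(4, seed=7)
print(kit_codes)  # participants and personnel see only these neutral codes
# blinding_key is consulted only at unblinding, after outcomes are recorded.
```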
Blinding of Outcome Assessment (Detection Bias)
Blinding of outcome assessment assesses whether the people measuring the outcomes and drawing conclusions know which group the participants were assigned to. If assessors know who received the treatment, they might (unintentionally or intentionally) interpret the results more or less favorably for that group. For example, if researchers know that a participant received the active treatment, they might be more likely to interpret a borderline result as positive. This is called detection bias: the assessors' judgment is skewed by their expectations, potentially leading to inaccurate conclusions.
Domain Assessment
To assess the domains efficiently, we use RobotReviewer, an automated tool that uses AI to evaluate risk of bias [4]. RobotReviewer rates each of the four domains as either low risk of bias or high/unclear risk of bias, providing supporting evidence for its judgments, and has been shown to be comparable in accuracy to human experts [5]. However, we use a semi-automated workflow: RobotReviewer provides initial assessments, which are then reviewed by human evaluators to ensure accurate judgments. Based on the final domain judgments, an evaluator determines the overall risk of bias in order to flag studies that definitively demonstrate high or low risk of bias. By combining RobotReviewer with evaluator review, we mitigate the potential for errors in evaluating risk of bias while handling large volumes of research effectively [6]. This allows us to label RCTs that demonstrate a high or low risk of bias.
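The overall determination is made by a human evaluator, but a common aggregation convention (assumed here purely for illustration; the overall_rob function below is not part of RobotReviewer or our published workflow) is: overall low risk only if every domain is low risk, overall high risk if any domain is high risk, and no definitive label otherwise.

```python
DOMAINS = [
    "random_sequence_generation",
    "allocation_concealment",
    "blinding_participants_personnel",
    "blinding_outcome_assessment",
]

def overall_rob(judgements):
    """Map per-domain judgements ('low', 'high', or 'unclear') to an overall flag."""
    values = [judgements[d] for d in DOMAINS]
    if all(v == "low" for v in values):
        return "low"       # definitively low risk across all four domains
    if any(v == "high" for v in values):
        return "high"      # at least one domain is definitively high risk
    return "no label"      # unclear somewhere: no definitive flag assigned

example = {
    "random_sequence_generation": "low",
    "allocation_concealment": "unclear",
    "blinding_participants_personnel": "high",
    "blinding_outcome_assessment": "low",
}
print(overall_rob(example))  # 'high'
```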
How to Use the ROB Labels
The high risk of bias and low risk of bias labels are useful for understanding RCTs. However, studies that are not open access RCTs are not analyzed, and may therefore carry unaccounted-for bias (or lack of bias), since they cannot accurately be assigned a risk of bias rating.
High ROB
Improper design in the domains that introduce bias may not be apparent on a first read of a paper, but, as established above, it can easily produce inaccurate estimates of an intervention's effects. The high ROB label therefore acts as an indicator for both Supplement AI and you to consult other papers investigating the intervention, helping to account for potential inaccuracies caused by biased or unintentionally flawed methodology.
Low ROB
Proper RCT design leads to a low risk of bias, which lends more credibility to a study's results and conclusions. RCTs with low risk of bias are excellent sources of accurate outcomes for a given intervention, especially large-sample human RCTs. However, research without the low ROB label can also be an incredibly valuable source of accurate outcomes, especially high-quality meta-analyses and systematic reviews.
TL;DR
In dietary supplementation research, bias in study design can affect results, so we assess bias using Cochrane's Risk of Bias (ROB) tool across four domains: random sequence generation, allocation concealment, blinding of participants/personnel, and blinding of outcome assessment. We use RobotReviewer, an AI tool, to initially evaluate studies, followed by human review to ensure accuracy. Only open access randomized controlled trials (RCTs) are analyzed, and studies are labeled as high or low risk of bias to help you and Supplement AI identify trustworthy results.
Abbreviations
- RCT: Randomized Controlled Trial
- ROB: Risk of Bias
References
- Higgins, J. P. T., Altman, D. G., Gøtzsche, P. C., Jüni, P., Moher, D., Oxman, A. D., Savović, J., Schulz, K. F., Weeks, L., & Sterne, J. A. C. (2011). The Cochrane Collaboration's tool for assessing risk of bias in randomized trials. BMJ, 343, d5928. https://doi.org/10.1136/bmj.d5928
- Kahan, B.C., Rehal, S., & Cro, S. (2015). Risk of selection bias in randomised trials. Trials, 16, 405. https://doi.org/10.1186/s13063-015-0920-x
- Schulz, K. F., & Grimes, D. A. (2002). Allocation concealment in randomised trials: Defending against deciphering. Lancet, 359(9306), 614-618. https://doi.org/10.1016/S0140-6736(02)07750-4
- Zhang, Y., Marshall, I., & Wallace, B. C. (2016). Rationale-augmented convolutional neural networks for text classification. Proceedings of the Conference on Empirical Methods in Natural Language Processing, 795-804. https://doi.org/10.18653/v1/D16-1076
- Jardim, P.S.J., Rose, C.J., Ames, H.M., et al. (2022). Automating risk of bias assessment in systematic reviews: A real-time mixed methods comparison of human researchers to a machine learning system. BMC Medical Research Methodology, 22, 167. https://doi.org/10.1186/s12874-022-01649-y
- Soboczenski, F., Trikalinos, T.A., Kuiper, J., et al. (2019). Machine learning to help researchers evaluate biases in clinical trials: A prospective, randomized user study. BMC Medical Informatics and Decision Making, 19, 96. https://doi.org/10.1186/s12911-019-0814-8