METHODS
1. Study design
This was a methodological study designed to systematically evaluate the quality of published academic articles.
2. Samples
The data for this study consisted of RCTs published in JKBNS between January 2011 and December 2024. The inclusion criteria were original research articles involving human participants that explicitly identified the study as an RCT in the title, abstract, or methods section. The exclusion criteria included editorials, animal studies, reviews or meta-analyses, surveys, methodological studies, qualitative studies, retrospective studies, and quasi-experimental studies. Based on these criteria, 22 studies were finally included (
Figure 1).
3. Instruments
The CONSORT 2025 checklist was used to evaluate whether the reporting of the included RCTs adhered to established reporting standards. In addition, three validated quality appraisal tools—RoB 2.0, SIGN checklist for RCTs, and JBI critical appraisal checklist—were used to map their respective items to the corresponding components of the CONSORT 2025 checklist.
1) The CONSORT 2025 checklist
The CONSORT 2025 checklist is the latest revised reporting guideline aimed at ensuring clarity, transparency, and completeness in the reporting process of RCTs [
6]. This checklist comprises six structured sections that cover the entire research article: title and abstract, introduction, methods, results, discussion, and open science. Each section includes detailed items that specify the minimum requirements for adequate reporting. Key evaluation items include whether the study type is clearly stated in the title and abstract, whether a structured abstract is provided, whether the background and objectives are clearly described in the introduction, and whether the methods section covers methodological components such as trial design, eligibility criteria, intervention details, outcome measures, sample size calculation, randomization procedures, allocation concealment, and blinding strategies. The results section includes items related to participant flow, recruitment and follow-up periods, participant characteristics, primary and secondary outcomes, and appropriateness of statistical analysis. The discussion section focuses on the interpretation of findings, study limitations, and the external validity of the results. Finally, the open science section includes items such as trial registration, funding sources, ethical approval, accessibility of the study protocol, and data sharing plans. The checklist consists of 30 items. Compared to the 2010 version, the 2025 revision adds and modifies items to reflect contemporary expectations in clinical research, such as stakeholder involvement, equity considerations, and transparency in data processing and sharing. In this study, each included RCT article was evaluated for compliance with each item of the CONSORT 2025 checklist using a binary coding system: an item explicitly described in the article was scored as “1”, while an item that was not described, unclear, or entirely unaddressed (“not reported”) was scored as “0”. Checklist items that were not applicable because of study design characteristics (e.g., crossover trials or single-group studies) were likewise scored as 0.
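The binary coding and per-item compliance tally described above can be sketched in a few lines. This is an illustrative example only, not the authors' actual workflow; the study IDs, item labels, and scores below are hypothetical.

```python
# Illustrative sketch of the binary CONSORT 2025 coding described above.
# Study IDs, item labels, and scores are hypothetical.

def compliance_summary(scores):
    """scores: {study_id: {item_label: 0 or 1}}.
    Returns per-item reporting counts and per-study total scores."""
    item_counts = {}
    study_totals = {}
    for study, items in scores.items():
        study_totals[study] = sum(items.values())
        for item, value in items.items():
            item_counts[item] = item_counts.get(item, 0) + value
    return item_counts, study_totals

# Two hypothetical studies scored on three checklist items
scores = {
    "Study01": {"1a": 1, "2": 0, "16a": 1},
    "Study02": {"1a": 1, "2": 0, "16a": 0},
}
item_counts, study_totals = compliance_summary(scores)

# Overall reporting rate = reported cells / total scored cells
total_cells = sum(len(items) for items in scores.values())
overall_rate = sum(study_totals.values()) / total_cells  # 3 / 6 = 0.5
```

The per-item counts give compliance frequencies, and the per-study totals give each study's overall score; the same tally, scaled up to all detailed items and all included studies, yields a mean reporting rate.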
2) The Cochrane RoB 2.0
The RoB 2.0 tool is a rigorously developed, widely used instrument for evaluating the internal validity of RCTs. Developed by the Cochrane Collaboration, this tool assesses the risk of bias across five domains related to the design, conduct, and reporting of clinical trials [
7]. The five domains are randomization process, deviations from intended interventions, missing outcome data, measurement of outcomes, and selection of the reported results. Specifically, it evaluates the adequacy of random sequence generation and allocation concealment, the blinding of participants and investigators, adherence to intervention protocols, handling of missing data, appropriateness and blinding of outcome measurement methods, and the presence of selective outcome reporting. The overall risk of bias is determined not by a scoring system but by a qualitative judgment that synthesizes the assessments across the five domains. For this purpose, each domain provides structured signaling questions, and reviewers respond with “Yes”, “Probably Yes”, “Probably No”, “No”, or “No Information”. Based on these responses, the risk of bias for each domain is assessed, and each study is ultimately classified as having an overall risk of bias of “low”, “some concerns”, or “high”.
3) The SIGN checklist for RCTs
The SIGN checklist for RCTs is a methodological tool developed to systematically assess the internal validity and methodological rigor of clinical research. This checklist is designed to identify potential sources of bias across the design, conduct, and analysis phases of RCTs and to determine the level and reliability of clinical evidence [
8]. Key evaluation criteria include the appropriateness of random sequence generation and allocation concealment, baseline comparability between groups, clarity in the description of interventions and control conditions, and blinding of participants and researchers. In addition, it assesses the completeness of follow-up and handling of missing data, the validity and consistency of outcome measurements, adherence to prespecified analysis plans (e.g., analysis of prespecified outcomes), the use of intention-to-treat (ITT) analysis, and the justification and adequacy of sample size and statistical methods. Based on the degree to which these criteria are satisfied, each study is classified as providing high-quality evidence (++), acceptable-quality evidence (+), or low-quality evidence (0).
4) The JBI critical appraisal checklist
The JBI critical appraisal checklist for RCTs is a tool designed to evaluate the appropriateness of the study design and the methodological consistency of trial implementation. This checklist is designed to assess whether randomized clinical trials provide reliable evidence regarding internal validity and feasibility and is widely used as a key instrument in evidence-based nursing and healthcare practice [
9]. The JBI checklist consists of 13 items, which cover key elements such as whether randomization was performed correctly, whether allocation was adequately concealed, the baseline comparability between intervention and control groups, clarity of intervention descriptions, blinding of participants, researchers, and outcome assessors, completeness of follow-up, application of ITT analysis, reliability and validity of outcome measures, and appropriateness of statistical analysis. Each item is assessed using one of four response options: “Yes”, “No”, “Unclear”, or “Not applicable”.
4. Data collection
Data were retrieved from the archive of the JKBNS website using the keywords “randomized,” “무작위” (randomized), and “RCTs.” The search was conducted between June 13 and June 25, 2025. The identified articles were organized in an Excel spreadsheet containing the author(s), year of publication, article title, and DOI. From July 13 to August 9, 2025, two independent reviewers (Cho and Kim) screened the titles, study designs, and full texts of the retrieved articles. In the first stage, studies were selected based on titles containing terms such as “실험연구” (experimental study), “무작위” (randomized), “random*,” “RCTs,” and “효과” (effect). In the second stage, abstracts were reviewed to identify studies explicitly mentioning “RCTs” or “randomization.” In the third stage, articles labeled as “experimental study” or “quasi-experimental study” in the methods section were further assessed. If random allocation was clearly described in the full text, they were classified as RCTs and included in the final sample. Disagreements regarding study inclusion primarily concerned whether to include crossover RCTs or post-test-only RCTs; after discussion, it was agreed that both types would be included.
5. Data analysis
Data analysis was conducted to systematically summarize the general characteristics of the included RCTs and report their quality, as well as to compare the conceptual concordance and differences among the assessment tools. The general characteristics of the studies (authors, year of publication, study design, method of randomization, participant characteristics, sample size, type of intervention, and outcome variables) were summarized from the original articles. Reporting quality was assessed according to the 30 items of the CONSORT 2025 checklist. Each item was coded as "1" if explicitly reported or "0" if not reported or unclear. Compliance with individual items was calculated as frequencies, and the total and mean scores for each study were computed to evaluate the overall reporting quality. Two independent reviewers, both experienced in conducting meta-analyses and quality appraisals, performed the evaluations. Before the formal assessment, they held preliminary discussions to align their understanding of the evaluation criteria and conducted a pilot assessment on two to three studies to calibrate scoring. Inter-rater reliability, assessed using Cohen’s kappa, indicated perfect agreement (κ = 1.00; 95% CI: 1.00 to 1.00) across all assessments. In addition, conceptual mapping was performed between the CONSORT 2025 items and the domains of the Cochrane RoB 2.0, the SIGN checklist for RCTs, and the JBI critical appraisal checklist to identify correspondence across the tools. This mapping was carried out through two rounds of expert consensus. Researchers with expertise in evidence-based practice and guideline adaptation independently reviewed the definitions and scope of each item across tools. Subsequently, consensus meetings were held to reconcile interpretations and resolve discrepancies. Only items with clear conceptual equivalence were retained in the final mapping framework.
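For reference, Cohen's kappa for two raters' binary item codes can be computed as follows. This is a generic sketch under the 0/1 coding used in this study, not the statistical software the reviewers actually used, and the example ratings are hypothetical.

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' binary (0/1) codes of the same items."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # observed proportion of agreement
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # chance-expected agreement from each rater's marginal rates
    p_a, p_b = sum(rater_a) / n, sum(rater_b) / n
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    if expected == 1:  # both raters used a single category throughout
        return 1.0
    return (observed - expected) / (1 - expected)

# Hypothetical codings: identical ratings yield kappa = 1.0
print(cohens_kappa([1, 1, 0, 1, 0, 0], [1, 1, 0, 1, 0, 0]))  # 1.0
```

A kappa of 1.00 with a degenerate confidence interval, as reported above, simply reflects complete agreement on every assessed item.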
6. Ethical considerations
This study is a literature analysis focusing on previously published journal articles. During the analysis, researchers made every effort to minimize subjective bias, and all data were obtained from peer-reviewed publications, with sources clearly cited. Academic and publication ethics were strictly followed throughout the study process and reporting of results.
RESULTS
1. Characteristics of RCTs in JKBNS (2011~2024)
This study analyzed the characteristics of 22 RCTs published in the JKBNS between 2011 and 2024. Of these, 19 were standard RCTs, and 3 employed a crossover RCT design. Although approximately three RCTs were published annually, no RCT articles were published in 2016, 2020, 2022, or 2023, and only one RCT was published in each of 2017, 2018, and 2021. Randomization methods were reported in 19 studies, using diverse techniques such as Excel-based randomization, computer-generated random numbers, random number tables, coin tossing, and lottery methods; however, three studies did not specify their randomization procedures. Participants included university students, nurses, surgical patients, individuals with chronic diseases, and older adults, with sample sizes ranging from 18 to 132. Most interventions were non-pharmacological, with aromatherapy being the most frequent (eight studies), followed by music therapy (three studies) and hand hygiene interventions (two studies). Intervention durations ranged from a single day to 12 weeks, with one study not reporting the intervention duration. The number of intervention sessions varied from 1 to 42, and 14 studies implemented a single-session intervention. The time per session ranged from 15 seconds to two weeks, though some studies reported session time in relative terms such as "until the end of surgery," "until recovery room discharge," or "continuous"; five studies did not report session time. In eight studies, the control group received no intervention, while three used usual or traditional care. Other comparators included almond oil inhalation, natural hand drying, saline, and rest.
Different outcome variables were assessed, covering a range of domains, including pain, physiological parameters (e.g., blood pressure, pulse, respiration, heart rate variability), psychological indicators (e.g., anxiety, stress, sleep disturbance, depression), and bacterial counts, thereby capturing the multidimensional effects of nursing interventions (
Table 1).
2. Reporting of RCTs based on the CONSORT 2025 in JKBNS (2011~2024)
A total of 22 RCTs published in JKBNS between 2011 and 2024 were analyzed to assess reporting completeness according to the CONSORT guidelines. Evaluation of 42 detailed items corresponding to 30 checklist numbers revealed an overall average reporting rate of 59.0%.
Based on an item-by-item analysis of CONSORT 2025, the following components were reported in all included studies: '1b. Structured summary of the trial design, methods, results, and conclusions', '6. Scientific background and rationale', '7. Specific objectives', '9. Trial design', '11. Trial setting', '12a. Eligibility criteria for participants', '12b. Eligibility criteria for sites/providers', '13. Intervention and comparator details', '14. Prespecified outcomes', '19. Implementation', '21b. Definition of analysis population', '22b. Losses and exclusions', '23a. Recruitment and follow-up dates', '24a. Intervention delivery', '26. Outcomes and estimation', '29. Interpretation', and '30. Limitations'. Item '16a. Sample size determination' was adequately reported in 21 studies, while items '17a. Sequence generation' and '21a. Statistical methods' were sufficiently addressed in 20 studies. In contrast, items such as '10. Changes to trial protocol', '16b. Interim analyses and stopping guidelines', '21c. Missing data handling', and '23b. Reason for trial ending' were not reported in any of the reviewed studies. Additionally, items '21d. Additional analyses' (Study ID: 9), '27. Harms' (Study ID: 18), and '28. Ancillary analyses' (Study ID: 1) were each reported in only one study (
Table 2).
Since 2005, the International Committee of Medical Journal Editors (ICMJE) [
11], and since 2013, the Declaration of Helsinki [
12], as well as Dickersin & Rennie [
13], have emphasized the importance of clinical trial registration. Dickersin and Rennie stated that researchers, research institutions and organizations, journal editors, legislators, and consumers — in fact, all stakeholders — must take immediate action, both collectively and within their respective domains, to ensure comprehensive registration of clinical trials. The Korean Association of Medical Journal Editors [
14] reflected the principles of the ICMJE in its Korean-translated revised editions, distributed in February 2006 and September 2008, of the Uniform Requirements for Manuscripts Submitted to Biomedical Journals: Writing and Editing for Biomedical Publication. These guidelines required that the clinical trial registration number be included at the end of the abstract. In 2010, the Clinical Research Information Service (CRIS) was launched as a system for registering clinical trials. By May of that year, CRIS officially joined the World Health Organization International Clinical Trials Registry Platform as a primary registry, establishing a registration system that met international standards [
15]. In 2024, JKBNS began requiring that RCT protocol registration be reported in the '8. Ethical considerations' section of RCT articles [
16].
3. Trend analysis across combined years of studies published in JKBNS (2011~2024) based on CONSORT 2025 guidelines
The changes observed over five-year intervals are summarized in
Table 3. A closer examination revealed that certain items consistently demonstrated high ('1b. Structured summary of the trial design, methods, results, and conclusions', '6. Scientific background and rationale', '7. Specific objectives', and related items) or low ('10. Changes to trial protocol', '16b. Interim analyses and stopping guidelines', and similar items) reporting rates across all periods.
Some items showed a progressive increase in reporting rates in more recent years ('1a. Identification as a randomized trial in the title', '2. Trial registration', '3. Protocol and statistical analysis plan', '4. Data sharing', '5b. Conflicts of interest', '18. Allocation concealment mechanism', and related items). Conversely, a few items exhibited a declining trend in reporting over time ('24b. Concomitant care', '25. Baseline data', and similar items).
4. Mapping the CONSORT 2025 checklist to RoB 2.0, SIGN, and JBI: a comparative analysis of RCT quality criteria
The CONSORT 2025 checklist was mapped against the quality assessment tools for RCTs, including the Cochrane RoB 2.0, the SIGN, and the JBI instruments (
Table 4). A total of 14 items from the CONSORT 2025 checklist were found to be commonly matched with these three appraisal tools, reflecting core elements of RCT design. These items are directly related to ensuring the internal validity of trials and include: prespecified outcomes (Item 14), random sequence generation (Items 17a and 17b), allocation concealment (Item 18), blinding (Items 20a and 20b), definition of the analysis population (Item 21b), handling of missing data (Item 21c), participant flow (Item 22a), losses and exclusions (Item 22b), intervention delivery (Item 24a), concomitant care (Item 24b), baseline data (Item 25), and outcomes and estimation (Item 26).
Specifically, Item 14 “Prespecified outcomes” was matched with RoB 2.0 items 4.1, 4.2, 5.2, and 5.3, which assess the standardization and reliability of outcome measurements and the consistency of analysis. It was also matched with SIGN item 1.7 and JBI items 8 and 9. Items 17a “Sequence generation” and 17b “Type of randomization and restriction” were mapped to RoB 2.0 item 1.1, SIGN items 1.2 and 2.1, and JBI item 1, which evaluate the adequacy of the randomization procedure. Item 18 “Allocation concealment mechanism” was matched with RoB 2.0 item 1.2, SIGN items 1.3 and 2.1, and JBI item 2, focusing on whether the allocation was adequately concealed prior to assignment. Items 20a “Who was blinded” and 20b “How blinding was achieved” were matched with RoB 2.0 items 2.1, 2.2, and 4.3; SIGN items 1.4 and 2.1; and JBI items 4, 5, and 7, all assessing blinding among participants, providers, and outcome assessors. Item 21b “Definition of analysis population” was matched with RoB 2.0 item 2.6, SIGN item 1.9, and JBI item 11, evaluating whether the analyzed population was defined in accordance with the pre-specified analysis plan. Item 21c “Missing data handling” was matched with RoB 2.0 items 3.1~3.4, SIGN item 2.1, and JBI item 10, focusing on the appropriateness of handling missing data and potential bias. Item 22a “Participant flow” was matched with RoB 2.0 item 3.1, SIGN item 2.1, and JBI item 10, assessing transparency in reporting participant movement, including recruitment and analysis inclusion. Item 22b “Losses and exclusions” was matched with RoB 2.0 items 3.1~3.4, SIGN items 1.8 and 2.1, and JBI item 10, focusing on the reporting and justification of participant withdrawals or exclusions. Item 24a “Intervention delivery” was matched with RoB 2.0 items 2.3~2.5, SIGN item 1.6, and JBI item 6, examining whether the intervention was delivered as planned. 
Item 24b “Concomitant care” was similarly matched with RoB 2.0 items 2.3~2.5, SIGN item 1.6, and JBI item 6, assessing the balance of concomitant treatments between groups. Item 25 “Baseline data” was matched with RoB 2.0 item 1.3, SIGN items 1.5 and 2.1, and JBI item 3, evaluating comparability of baseline characteristics. Finally, Item 26 “Outcomes and estimation” was matched with RoB 2.0 items 4.1~4.5, SIGN item 2.1, and JBI items 8 and 9, assessing the reporting of numerical outcome estimates and their precision (e.g., confidence intervals).
DISCUSSION
Based on the CONSORT 2025 guidelines, the areas that were generally underreported can be discussed as follows. First, there was insufficient reporting on Open Science, including trial registration, the protocol and statistical analysis plan, and data sharing. These elements were newly added in the 2025 revision of CONSORT to reflect the global trend toward open science, emphasizing openness and accountability in research through registration, data and code sharing, and conflict of interest reporting [
6]. When examining five-year reporting trends, no studies published between 2011 and 2020 reported items related to Open Science, whereas 75% of those published between 2021 and 2024 did. Considering that the revision was made in 2025, it is evident that reporting on these elements had begun to emerge gradually even before the formal update, reflecting the evolving trend of the times.
Regarding the public disclosure of protocols and statistical analysis plans, discrepancies were identified between the registered protocol and the full report, particularly concerning primary outcome variables and sample sizes [
17]. This finding suggests that making statistical analysis plans (SAP) publicly available could play a crucial role in preventing discrepancies between research design and reporting. The insufficient reporting of data sharing is consistent with previous studies [
18], which have shown that although many journals require a data sharing statement in their submission guidelines, published articles still often lack this information. Data sharing not only promotes transparency but also enhances the reproducibility of research findings, enables secondary analyses and meta-analyses, and contributes to public health and scientific progress. Therefore, it is essential to ensure that data sharing involves not only a formal declaration but also actual execution and reporting [
18].
To ensure transparency and validity in research, it is particularly important to emphasize the role of trial registration, the public disclosure of protocols and statistical analysis plans, and data sharing. Recent studies have shown that when these elements are insufficient, they can lead to bias in outcome reporting and potentially distort the estimates for adverse events (harms) or primary outcomes [
19]. CONSORT 2025 emphasizes registration, protocols, SAP, and data sharing as separate sections. In the case of JKBNS, the CRIS system is primarily used, and all three relevant articles were documented in the main text rather than in the abstracts. Specifically, these items were reported in the Ethical Considerations section of the main text [
20].
This study highlights the need for researchers to place greater emphasis on reporting within the open science framework. By faithfully applying these standards, including the transparent reporting of registered protocols, analysis plans, and data availability, future research will be able to enhance both the credibility and impact of its findings.
On the other hand, there was insufficient reporting in the Methods section regarding changes to the trial protocol, harm definition and assessment, and interim analyses and stopping guidelines. First, reporting changes to the trial protocol was insufficient, limiting transparency into procedural modifications or alterations to analysis plans that occurred during the study. Second, reporting related to harm definition and assessment was inadequate, making it difficult to systematically evaluate adverse events or unintended effects resulting from interventions, thereby posing challenges for readers attempting to replicate or interpret the findings. In accordance with CONSORT Harms and the 2025 CONSORT update, nursing intervention studies should explicitly define, systematically monitor, and report adverse events. Although nursing interventions are generally considered low-risk, specific outcomes—such as increased patient fatigue, procedural discomfort, transient anxiety, or minor physiological changes—should be prospectively identified and documented using standardized criteria. Transparent reporting of such events not only strengthens methodological rigor but also ensures ethical accountability and reproducibility in nursing research. Lastly, interim analyses and stopping guidelines were not reported, which meant there was no clear evidence provided regarding safety reviews through interim analyses or the possibility of early trial termination. Such limitations in reporting may weaken the credibility and reproducibility of research [
21]. Therefore, future studies should provide detailed descriptions of these elements in accordance with the CONSORT 2025 guidelines.
Reporting related to randomization and blinding was found to be insufficient. Specifically, while descriptions of sequence generation in the randomization process were provided, reporting on allocation concealment —known to significantly affect a study's internal validity— was lacking. This may be due to the nature of nursing research, in which researchers often act as facilitators, making it challenging to maintain allocation concealment. Similarly, given the nature of nursing interventions, it may have been challenging to implement blinding effectively. In this study, the reporting of both the randomization procedure and the blinding methods was insufficiently detailed. Without a clear explanation of the randomization process and allocation concealment, it is difficult to completely rule out selection bias. Likewise, inadequate reporting on whether participants, researchers, and outcome assessors were blinded — as well as the level of blinding (single-blind, double-blind, assessor-blind, etc.) — can reduce the objectivity of outcome measurement and introduce performance and detection bias when interpreting the results. The findings of this study revealed that the reporting rates for “Who was blinded” and “How blinding was achieved” were generally low. In this review, blinding was explicitly reported only in studies in which both participants and data collectors were blinded [
22] and in which only participants were blinded [
23,
24]. This low rate of reporting is likely attributable to the fact that most nursing interventions are non-pharmacological, making it inherently difficult to implement blinding procedures. The CONSORT 2025 guidelines emphasize the need to provide detailed descriptions of randomization and blinding procedures [
6]. Previous studies have repeatedly pointed out that a lack of such information undermines the credibility and reproducibility of clinical trial findings [
16,
25]. Therefore, in situations where blinding is challenging due to the nature of nursing interventions, strategies to minimize bias—such as separating participants from outcome assessors—should be implemented. Future research should also systematically report on the specific methods used for randomization and blinding to ensure transparency and rigor. Moreover, nursing research often operates with limited funding resources and is typically conducted on a small scale. As a result, researchers themselves often perform multiple roles—including randomization, intervention delivery, and data collection—rather than assigning these tasks to separate personnel. Such structural constraints may further hinder the feasibility of implementing blinding procedures. Therefore, future nursing studies should explore practical strategies to strengthen blinding procedures during the design phase and advocate for institutional and policy-level efforts to expand funding support. Increased funding for nursing research is essential to enhance the methodological rigor and credibility of intervention studies in this field.
In the Statistical Methods section, insufficient reporting was provided on missing data handling, additional analyses, and participant flow. Similarly, reporting on the reason for trial ending, harms, and ancillary analyses was limited. Only one study [
26] reported a harm-related incident, namely a needlestick injury. Apart from this, no other studies provided information regarding harm. This may be due to the nature of the interventions included in nursing research, which are generally non-invasive, educational, or non-pharmacological. Consequently, the absence of reported harm likely reflects the lack of adverse events rather than a failure to report them. However, it is essential to note that any adverse event arising from an intervention should be explicitly reported in accordance with the CONSORT Harms extension.
Specifically, there was a lack of detailed descriptions of missing-data handling, making it difficult for readers to clearly understand the potential bias introduced by the method used to address missing data. Reporting on additional analyses (e.g., subgroup analyses, sensitivity analyses) was also limited. Furthermore, the reporting of participant flow was insufficient, making it difficult to transparently identify participant enrollment, dropouts, and inclusion in the final analysis. In addition, there were notable limitations in the Reason for Trial Ending section. Without a clear explanation of why the trial was terminated, it becomes difficult to assess the study's legitimacy and validity fully. The incomplete reporting of harms reduces the reliability of evaluating adverse events and safety outcomes. Moreover, the lack of reporting on ancillary analyses limits the potential for additional interpretation of the study’s findings. Given the increasing emphasis on transparency, safety, and reproducibility in research [
27], improvements in the reporting of statistical methods are needed to strengthen the overall quality and credibility of studies.
In this study, the number of published RCTs was relatively low in 2016, 2020, 2022, and 2023. This finding is consistent with previous reviews conducted across multiple countries that reported substantial enrollment delays, operational gaps, and difficulties in participant recruitment during this period [
28]. The overall social context of the coronavirus disease 2019 pandemic, characterized by restrictions on face-to-face interactions and disruptions in healthcare operations, likely contributed to these challenges. Given that experimental studies require rigorous control of interventions and study conditions, it is reasonable to infer that conducting RCTs during this time was particularly constrained by pandemic-related limitations.
Reflecting differences in reporting feasibility across study designs, this study found that crossover trials tended to yield lower compliance scores when assessed against RCT reporting guidelines. This pattern may reflect the inherent methodological challenges of crossover designs, in which complete blinding of participants and data collectors is often unfeasible. Moreover, essential methodological details—such as blinding procedures or allocation concealment—were frequently underreported or ambiguously stated, possibly because they were regarded as self-evident within the study design. Although specific items may have been deemed “non-applicable,” we adhered strictly to the CONSORT-based evaluation criteria and assigned a score of 0 when reporting was insufficient. Therefore, lower compliance scores in crossover trials should be interpreted with consideration of the study design’s inherent limitations, underscoring the need for transparent, detailed reporting in future research to enhance methodological rigor and interpretability.
In this study, we mapped the items of the CONSORT 2025 reporting guideline onto those of three major quality assessment tools: RoB 2, SIGN, and the JBI Critical Appraisal Criteria. The characteristics of each tool are as follows: RoB 2 is a risk of bias assessment tool for RCTs developed by Cochrane. It evaluates the risk of bias in five key domains: randomization process, deviations from intended interventions, missing outcome data, measurement of outcomes, and selection of the reported results [
7]. It categorizes each domain as “Low risk,” “Some concerns,” or “High risk,” providing an overall judgment of study reliability. The SIGN checklist is applicable to a variety of study designs, including RCTs, cohort studies, and case-control studies [
8]. It evaluates factors such as the clarity of the research question, randomization and blinding, handling of missing data, and appropriateness of analysis. The quality of studies is graded as ++ (High quality), + (Acceptable), or 0 (Low quality). The JBI Critical Appraisal Tools, developed by the JBI in Australia, provide tailored checklists for nearly all study designs, including RCTs, cohort studies, single-case studies, and qualitative research [
9]. Each item is evaluated as Yes, No, Unclear, or Not Applicable. A key strength of the JBI tools is their ability to assess the quality of evidence across a wide range of healthcare research, including qualitative and mixed methods studies.
In this study, a comparison was made between the CONSORT 2025 reporting guideline and the key items of major quality assessment tools (RoB 2, SIGN, and JBI Critical Appraisal Criteria). The results showed that these quality assessment tools primarily focus on evaluating the internal validity and reliability of studies, particularly by examining factors that may threaten validity. In this respect, their purpose aligns with that of CONSORT. In the Methods domain, most items matched closely between CONSORT and the quality assessment tools, while in the Results domain, partial alignment was also observed.
However, certain technical aspects, such as risk of bias, emphasized in RoB 2 and JBI [
7,
29], were less well addressed in CONSORT, which overall tended to focus more on methodological reporting. These findings indicate that while CONSORT plays an essential role as a guideline for enhancing the transparency of clinical trial reporting, its focus differs from that of tools designed to evaluate research quality. Therefore, for a comprehensive interpretation of clinical trial reporting and evaluation, it is necessary to use CONSORT alongside quality assessment tools such as RoB 2, SIGN, and JBI.
This integrated approach is meaningful in that it goes beyond simply assessing RCT bias. A multilayered evaluation of study quality based on research design provides a stronger foundation for systematically interpreting the credibility and applicability of research findings.