Validating and Updating GRASP: A New Evidence-Based Framework for Grading and Assessment of Clinical Predictive Tools
Background: When selecting predictive tools for implementation in clinical practice or for recommendation in guidelines, clinicians face an overwhelming and ever-growing number of tools, many of which have never been implemented or evaluated for comparative effectiveness. The authors previously developed an evidence-based framework for grading and assessment of predictive tools (GRASP), based on critical appraisal of published evidence. The objective of this study is to validate and update GRASP and to evaluate its reliability.

Methods: We validated and updated GRASP by surveying a wide international group of experts and then evaluated the framework's interrater reliability.

Results: Of 882 invited experts, 81 valid responses were received. Overall, experts strongly agreed with the GRASP evaluation criteria for predictive tools (4.35/5). Experts strongly agreed with six criteria: predictive performance (4.87/5), predictive performance levels (4.44/5), usability (4.68/5), potential effect (4.61/5), post-implementation impact (4.78/5), and evidence direction (4.26/5). Experts somewhat agreed with one criterion: post-implementation impact levels (4.16/5). Experts were neutral about one criterion: that usability ranks higher than potential effect (2.97/5). Experts also provided recommendations in response to six open-ended questions regarding adding, removing, or changing evaluation criteria. The GRASP concept and its detailed report were updated accordingly, and the interrater reliability of GRASP was then tested and confirmed.

Discussion and Conclusion: The GRASP framework grades predictive tools based on critical appraisal of the published evidence across three dimensions: 1) phase of evaluation; 2) level of evidence; and 3) direction of evidence. The final grade of a tool is based on the highest phase of evaluation, supported by the highest level of positive evidence, or by mixed evidence that supports a positive conclusion.