A Systematic Literature Review of Empiricism and Norms of Reporting in Computing Education Research Literature
Computing Education Research (CER) is critical for supporting the increasing number of students who need to learn computing skills. To systematically advance knowledge, publications must be clear enough to support replications, meta-analyses, and theory building. The goal of this study is to characterize the reporting of empiricism in CER literature by identifying whether publications include the information needed to support replications, meta-analyses, and theory building. The research questions are: RQ1) What percentage of papers in CER venues include empirical evaluation? RQ2) What are the characteristics of that empirical evaluation? RQ3) Do the papers with empirical evaluation follow reporting norms (both for inclusion and for labeling of key information)? We conducted an SLR of 427 papers published in 2014 and 2015 in five CER venues: SIGCSE TS, ICER, ITiCSE, TOCE, and CSE. We developed and applied the CER Empiricism Assessment Rubric. Over 80% of the papers included some form of empirical evaluation. Quantitative evaluation methods were the most frequent. Papers most often reported results on interventions involving pedagogical techniques, curriculum, community, or tools. Papers were roughly split on whether they compared an intervention against some other data set or baseline. Many papers lacked clearly reported research objectives, goals, research questions, or hypotheses; descriptions of participants; study design; data collection; and threats to validity. CER authors are contributing empirical results to the literature; however, not all reporting norms are met. We encourage authors to provide clear, labeled details about their work so that readers can use the methodologies and results for replications and meta-analyses. As our community grows, our reporting of CER should mature to help establish computing education theory to support the next generation of computing learners.