Research transparency and integrity benefit greatly from computationally reproducible code, and there is an increasing emphasis on learning the skills to code. However, there has been much less emphasis on learning the skills to check code. People cite a lack of time, expertise, and incentives as reasons they don't ask others to review their research code, but the most commonly cited reason is embarrassment about others seeing their code.
An analysis by Nuijten et al. (2016) of over 250,000 p-values reported in eight major psychology journals from 1985 to 2013 found that:
Of 35 articles published in Cognition with usable data (but no code), Hardwicke et al. (2018) found:
Of 62 Registered Reports in psychology published from 2014 to 2018, 36 shared both data and analysis code, the code for 31 of those could be run, and 21 reproduced all of the main results (Obels et al., 2020).
The process of methodically and systematically checking over code, your own or someone else's, after it has been written.
The specific goals of any code review will depend on the stage in the research process at which it is being done, the expertise of the coder and reviewer, and the amount of time available.
In this talk, we’ll focus on pre-submission code review by colleagues.
All file references should use relative paths, not absolute paths.
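As a minimal Python sketch of the difference (the file names here are hypothetical), a relative path is resolved from the project root and works on any machine, while an absolute path is tied to one person's computer:

```python
from pathlib import Path

# Absolute path: breaks for anyone whose home directory differs
# data = Path("/Users/alice/my-project/data/raw.csv")

# Relative path: resolved from the project root, so it is portable
data = Path("data") / "raw.csv"

print(data.as_posix())  # data/raw.csv
```

Running scripts from the project root (or using a tool that sets the working directory to the project root) keeps relative paths stable for every collaborator.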
Name files and code objects so both people and computers can easily find things.
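A common convention (this is one illustrative sketch; the file names are made up) is to use only lowercase letters, digits, hyphens, underscores, and dots, with ISO dates and zero-padded numbers so that alphabetical order matches chronological and numeric order:

```python
import re

# Hypothetical file names following machine-friendly conventions:
# ISO dates (YYYY-MM-DD) sort chronologically; no spaces or special
# characters; zero-padded numbers keep lexical and numeric order aligned.
good = ["2024-01-05_pilot-data.csv", "2024-02-10_main-data.csv", "fig02_accuracy.png"]
bad = ["final data (2).csv", "Jan 5 pilot.csv", "fig2accuracy.PNG"]

# A simple check for names a script can handle without quoting or escaping
pattern = re.compile(r"^[a-z0-9._-]+$")

print([bool(pattern.match(name)) for name in good])  # [True, True, True]
print([bool(pattern.match(name)) for name in bad])   # [False, False, False]
```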
An approach to programming that focuses on the creation of a document containing a mix of human-readable narrative text and machine-readable computer code.
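One lightweight form of such a document (a sketch with made-up data; R Markdown, Quarto, and Jupyter notebooks are other common options) is a Python script in the "percent" cell format, where markdown cells carry the narrative and code cells carry the analysis:

```python
# %% [markdown]
# # Reaction time summary
# We compute the mean reaction time (in ms) for each condition.

# %%
# Hypothetical reaction times by condition
rts = {"congruent": [420, 450, 430], "incongruent": [510, 540, 525]}
means = {cond: sum(xs) / len(xs) for cond, xs in rts.items()}
print(means)
```

When rendered or run as a notebook, the narrative, the code, and its output appear together, which makes the analysis much easier to review.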
Huge thanks to the Code Review Guide Team (especially Hao Ye, Kaija Gahm, Andrew Stewart, Elaine Kearney, Ekaterina Pronizius, Saeed Shafiei Sabet, Clare Conry Murray)
Anyone is welcome to get involved in the project.