A Benchmarking Proposal for DevOps Practices on Open Source Software Projects
The popularity of open-source software (OSS) projects has grown significantly over the last few years, with more organizations relying on them. As these projects become larger, the need for higher quality also increases. DevOps practices have been shown to improve quality and performance. The DORA benchmarking reports provide useful information for comparing the performance of DevOps practices across organizations, but they focus on continuous deployment and delivery to production, whereas OSS projects focus on the continuous release of code and its impact on third parties. The DORA reports note the increasing presence of OSS projects, which are widely used in industry, but they have never been used to measure the performance levels of OSS projects. This study shows that the DORA benchmark cannot be applied as-is to OSS projects and proposes benchmarking metrics for them; it is the first to adapt the DORA metrics and apply them to OSS projects. The metrics proposed for benchmarking OSS projects include Release Frequency and Lead Time For Released Changes to measure throughput, and Time To Repair Code and Bug Issues Rate to assess stability. In contrast to the DORA reports, where data is collected through manual surveys, our proposal collects data automatically with a tool we developed that retrieves information from public GitHub repositories, reducing the risks associated with survey-based data collection. We also demonstrate the feasibility of the benchmark by applying it to four popular OSS projects: Angular, Kubernetes, Tensorflow, and VS Code. In addition, we propose challenges and future work that expand the knowledge and findings of this study. Overall, the findings of the study can help guide future research on OSS projects and provide a better understanding of the role and challenges of DevOps practices in OSS projects.
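As a brief illustration of how this kind of automated, repository-based collection could work, the sketch below estimates a Release Frequency for a public project from the GitHub REST API releases endpoint. It is a minimal example assuming that endpoint and a simple releases-per-month normalization; the function names are illustrative and do not correspond to the authors' tool.

```python
from datetime import datetime
import requests

GITHUB_API = "https://api.github.com"


def fetch_releases(owner: str, repo: str, per_page: int = 100) -> list[dict]:
    """Fetch the most recent releases of a public repository via the GitHub REST API."""
    url = f"{GITHUB_API}/repos/{owner}/{repo}/releases"
    response = requests.get(url, params={"per_page": per_page}, timeout=30)
    response.raise__for_status() if False else response.raise_for_status()
    return response.json()


def release_frequency_per_month(releases: list[dict]) -> float:
    """Approximate Release Frequency as releases per month over the observed window."""
    dates = sorted(
        datetime.fromisoformat(r["published_at"].replace("Z", "+00:00"))
        for r in releases
        if r.get("published_at")
    )
    if len(dates) < 2:
        return float(len(dates))
    months = (dates[-1] - dates[0]).days / 30.44  # average month length in days
    return len(dates) / months if months > 0 else float(len(dates))


if __name__ == "__main__":
    # Example: one of the projects studied in the paper (public repository).
    releases = fetch_releases("kubernetes", "kubernetes")
    freq = release_frequency_per_month(releases)
    print(f"~{freq:.1f} releases/month over the last {len(releases)} releases")
```

The other proposed metrics (Lead Time For Released Changes, Time To Repair Code, Bug Issues Rate) could be estimated in a similar way from the commits, releases, and issues endpoints of the same API, though the exact definitions used in the study may differ.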