The recent furore around the University of Minnesota’s “hypocrite commits” research, which spilled over from the Linux Kernel Mailing List and into mainstream tech media, has provoked a lot of discussion about the Linux kernel community’s processes, and arguably provided ammunition to folks who have been saying all along that open source software cannot be trusted.
Over in the ELISA community, which is exploring how to use Linux in safety-critical systems, it was even suggested that this incident demonstrates that, for Linux to be fit for use in a safety application, the kernel process would need to be redefined and coding standards enforced.
For myself, I continue to believe that open source projects deliver software that is as good as, and often better than, proprietary initiatives. A key difference is that the mistakes and breakages happening behind closed doors often go unreported, and the participants learn less as a result.
I expect that the kernel community will continue to learn, and will evolve its processes in response to this and other events over time. And I totally understand Greg KH’s decision to revert all commits in response to the incident, not least because it signals that action must and will be taken when trust is lost.
But thinking about the use of Linux and open source in a safety context, it seems to me this situation only confirms something that we already know:
- complex software generally has bugs
- some bugs are likely to slip through whatever process is in place to catch them
It would be extremely naive to think that bad actors haven't already introduced bugs or vulnerabilities into widely used software.
Equally, it would be wildly optimistic to hope that software at the scale and complexity of Linux could ever be considered bug-free, or completely deterministic, or 100% “safe”, or 100% “secure”.
For engineers thinking about how to use Linux or similarly complex software in safety-critical applications, I suggest that the lesson here is not “We need to enforce coding standards and change the process to make the software fit for safety”.
I think it would be much more realistic to say:
“We expect occasional bugs in Linux (or any large-scale software). Our safety design recognises that, and our mitigations aim to minimise the risk of harm arising when bugs occur.”
And as a postscript… it now looks as though the claims of the researchers were fiction all along. In light of this, we can highlight a further benefit of the open source approach. This has all played out in the glare of public scrutiny, with the evidence visible to all parties. As a result we can all consider what happened and learn from it.
Download the white paper: Safety of Software-Intensive Systems From First Principles
In this white paper, Paul Albertella and Paul Sherwood suggest a new approach to software safety based on Codethink's contributions to the ELISA community and supported by Exida. Fill in the form to receive the white paper in your inbox.