The Role of DevSecOps in Modern Edge Systems
DevSecOps evolved to address the need for building in security continuously across the software development lifecycle so that teams can deliver secure applications with speed and quality. Incorporating testing, triage, and risk mitigation earlier in the continuous integration/continuous delivery (CI/CD) workflow avoids the time-intensive, and often costly, repercussions of making fixes after a system has been deployed.
This concept is part of “shifting left,” which moves security testing toward developers, enabling them to fix security issues in their code in near real time rather than “bolting on” security toward the end of development. When development organizations code with security in mind from the outset, it is easier and less costly to catch and fix vulnerabilities before they reach production or are discovered after release.
DevSecOps in the IT domain implies that any infrastructure (software and hardware) can be created with a set of scripts and all the testing and integration can be automated. A typical embedded developer is used to having the hardware (aka target board) sitting right next to a development system.
This worked fine for small software teams or when the software could be divided neatly into sub-components. However, modern edge systems that combine the best of new IT development methods with traditional embedded requirements (deterministic real-time performance, security, and reliability) are complex and require parallel work-streams for software development and hardware design. Lynx refers to this as the Mission Critical Edge. The DevSecOps practice of automated testing and integration can alleviate some of the issues caused by this new model.
We are clearly not alone. The Department of Defense (DoD) published its Digital Modernization Strategy in 2019, which called out DevSecOps as a foundational element of its approach to implementation. Since the “Software is Never Done” report, the DoD has made rapid progress in pursuing those improvements and the strategy in the DoD Modernization guide. Even amid the COVID-19 pandemic, significant progress was demonstrated in the transformation process, as prototype or minimum viable product (MVP) solutions have been tested on real programs (GBSD, B-2 bomber, F-22), are being required on newly funded programs (the Air Force’s ABMS), and are soon to be mandated across many new programs.
With the DoD CIO office up-leveling the work of the Air Force and pushing requirements across the Joint Force, it appears only a matter of time before the transformation is DoD-wide. The U.S. Air Force has implemented a number of “Software Factories” as part of its implementation plan. While the exact structure of these is somewhat unclear at the time of this writing, the 2020 results of one of them, Kessel Run, show that there are benefits to be had.
One key challenge associated with mission critical platforms (aircraft, automotive, healthcare, critical infrastructure) is that the software must undergo safety certification before the systems can be deployed, and any change in software, in most cases, triggers the need for re-certification. Certification efforts are typically a significant factor in program timelines and costs. Integration issues on large programs, where multiple software teams work on different aspects of the infrastructure and applications, cause delays in delivery timelines and cost overruns. These issues become especially acute when system-level testing and integration are not started early in the program lifecycle.
Many Agile methods and DevSecOps processes and techniques aren’t defined in detail; engineering teams are effectively empowered to adopt the details they deem relevant. This runs counter to industry safety and security standards, which require a rigorous, well-defined process. It means that software teams must define and document their DevSecOps tools, processes, and techniques. An important example of this is traceability. Proving that requirements are satisfied with validation evidence is important for demonstrating system functionality and airworthiness. Therefore, any DevSecOps process must manage traceability precisely.
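The traceability idea above can be sketched in a few lines of code: a matrix links each requirement ID to the tests that provide its validation evidence, and an automated check flags any requirement left uncovered. The requirement IDs and test names here are hypothetical, invented purely for illustration; a real program would draw them from its requirements-management tooling.

```python
# Toy traceability check (not a qualified DO-178C tool): every requirement
# must be linked to at least one test that provides validation evidence.
# All requirement IDs and test names below are hypothetical examples.

REQUIREMENTS = {
    "SRS-101": "Flight log entries shall be integrity-checked",
    "SRS-102": "Operator credentials shall expire after 24 hours",
    "SRS-103": "Telemetry shall be encrypted in transit",
}

# Each test declares which requirement(s) it validates.
TEST_TRACE = {
    "test_log_crc": ["SRS-101"],
    "test_credential_expiry": ["SRS-102"],
}

def untraced_requirements(requirements, test_trace):
    """Return requirement IDs that have no validating test."""
    covered = {req for reqs in test_trace.values() for req in reqs}
    return sorted(set(requirements) - covered)

print(untraced_requirements(REQUIREMENTS, TEST_TRACE))  # ['SRS-103']
```

Running such a check on every CI/CD build is one concrete way a DevSecOps pipeline can enforce the precise traceability that certification demands.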
DevSecOps and DO-326A/DO-356
An interesting approach that has come out of DevSecOps practice is treating security requirements in the same way as safety and functional requirements: starting from detailed threat analysis, moving to the implementation of security controls, then to validation through testing, and of course documentation. This is the key to integrating security into DevOps, and a good way to build security into the development culture and to have software teams communicate in a familiar language.
DO-326A defines an airworthiness security process (AWSP) which, at a high level, defines certification, security risk assessment, and security development activities. Security risks identified during the assessment require development activities to mitigate the risk to the aircraft, and these activities are meant to be integrated into the safety processes required for the software. DO-356 is a companion document to DO-326A that describes how to demonstrate compliance with airworthiness security requirements throughout the stages of development. The provisions in these documents are not yet mandatory and therefore serve as guidelines. It is also noteworthy that they focus on intentional unauthorized electronic interaction, including malware installation and system manipulation, rather than offering guidance on physical attacks.
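The risk-to-mitigation-to-evidence chain described above can be modeled as data, so a pipeline can refuse to consider the process complete while any identified risk lacks a mitigating control or validating test. The threat descriptions, controls, and test names below are invented for illustration and are not drawn from DO-326A itself.

```python
# Hedged sketch: each security risk from the assessment must carry both a
# mitigating control (a development activity) and validation evidence (a
# test) before the airworthiness security process can close it out.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SecurityRisk:
    threat: str
    control: Optional[str]      # mitigation from development activities
    validation: Optional[str]   # test providing compliance evidence

RISKS = [
    SecurityRisk(threat="Malware installation via a maintenance port",
                 control="Signed-image verification at boot",
                 validation="test_reject_unsigned_image"),
    SecurityRisk(threat="Manipulation of avionics bus messages",
                 control="Message authentication on bus traffic",
                 validation=None),  # still open: no evidence yet
]

def open_items(risks):
    """Return threats still lacking a control or validation evidence."""
    return [r.threat for r in risks if not (r.control and r.validation)]
```

Treating the assessment output as structured data like this is what lets security requirements flow through the same review and test gates as safety and functional requirements.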
Good engineering practices dictate adoption of coding guidelines or standards such as MISRA, or SEI CERT guidelines. This approach assures newly developed code follows industry best practices. However, a coding standard by itself does not prevent all complex security vulnerabilities. Additionally, it isn’t practical to implement coding standards on existing code.
In practice, many avionics systems have requirements defined up front as part of the request for proposal (RFP) process during vendor selection. It is also probable that milestones are established as part of large-scale airframe projects where deliverables are well outlined. In such cases, planning around these requirements and milestones is necessary as they feed the design and implementation phases that can still be iterative, Agile processes.
Today most system development of safety critical platforms follows a V-model, in which equal weight is given to coding and testing. The V shape shows the relationship between each development/design phase and its corresponding testing phase: work on the left side of the V is checked by the corresponding testing activities on the right side, covering both verification and validation.
The software development method chosen during the design, implementation, and testing of code is left up to the manufacturer, as long as it meets the basic criteria of good engineering practice: traceability, safe and secure practices, and reported and documented evidence of results. Agile and iterative methods can work well in this phase even if the entire lifecycle does not operate within the Agile framework. In fact, the approach leads to better results by shifting important parts of development, such as testing, earlier.
Categories of Software Tools Used During DevSecOps Development
Our experiences of engaging with customers that are implementing DevSecOps for the Mission Critical Edge indicate the use of the following application security testing (AST) tools during the CI/CD process:
Static application security testing (SAST): These tools scan proprietary or custom code for coding errors and design flaws that could lead to exploitable weaknesses.
Software composition analysis (SCA): These tools scan source code and binaries to identify known vulnerabilities in open source and third-party components. They also provide insight into security and license risks to accelerate prioritization and remediation efforts, and can be used to continuously detect newly disclosed open source vulnerabilities.
Interactive application security testing (IAST): These tools work in the background during manual or automated functional tests to analyze web application runtime behavior. For example, the Seeker® IAST tool uses instrumentation to observe application request/response interactions, behavior, and dataflow. It detects runtime vulnerabilities and automatically replays and tests the findings, providing detailed insights to developers down to the line of code where they occur. This enables developers to focus their time and effort on critical vulnerabilities.
Dynamic application security testing (DAST): This testing technology mimics how a hacker would interact with your product, testing applications over a network connection.
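To make the first category above concrete, the sketch below shows the core idea behind a SAST-style check: walk a program's syntax tree looking for a known-dangerous pattern, here a direct call to `eval`, a classic code-injection sink. Real SAST products apply hundreds of such rules plus dataflow analysis; this toy only illustrates the principle, and the scanned snippet is invented.

```python
# Minimal SAST-style rule: flag direct eval() calls by line number.
import ast

SOURCE = """
def load_config(text):
    return eval(text)   # flaw: evaluates untrusted input
"""

def find_eval_calls(source):
    """Return the line numbers of direct eval() calls in the source."""
    tree = ast.parse(source)
    return [node.lineno
            for node in ast.walk(tree)
            if isinstance(node, ast.Call)
            and isinstance(node.func, ast.Name)
            and node.func.id == "eval"]

print(find_eval_calls(SOURCE))  # [3]
```

Because a check like this needs only source code, not a running system, it can run on every commit, which is exactly what makes SAST the earliest "shift-left" gate in a CI/CD pipeline.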
Software development tools must be qualified with certification bodies such as the Federal Aviation Administration (FAA). As such, it is important that any tools used have an acceptable pedigree and the ability to meet tool qualification requirements. The process for qualifying tools used on software that will be certified to DO-178C is described in DO-330. The requirements for a code-coverage tool differ from those for a compiler, which differ again from those for a static analysis tool. At a high level, a project needs to describe how it plans to use the tool in a Tool Operational Requirements document and provide a Tool Qualification Plan outlining how to prove that the tool performs correctly. The latter typically requires a set of test artifacts and expected results.
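The "test artifacts and expected results" step can be sketched as a small harness: run the tool on fixed inputs and compare its actual output with the expected results recorded in the qualification plan. The "tool" below is a stand-in non-blank line counter, chosen only so the sketch is self-contained; real qualification exercises the tool's actual operational requirements.

```python
# Sketch of the DO-330 qualification idea: the tool must reproduce the
# expected result for every recorded test artifact. The tool and the
# artifacts below are illustrative stand-ins, not real qualification data.

def tool_under_qualification(source: str) -> int:
    """Stand-in tool: counts non-blank source lines."""
    return sum(1 for line in source.splitlines() if line.strip())

# Test artifacts paired with expected results from the qualification plan.
QUALIFICATION_CASES = [
    ("a = 1\n\nb = 2\n", 2),
    ("", 0),
    ("# comment only\n", 1),
]

def qualify(tool, cases):
    """True only if the tool reproduces every expected result."""
    return all(tool(src) == expected for src, expected in cases)

print(qualify(tool_under_qualification, QUALIFICATION_CASES))  # True
```

Keeping such a harness in the CI/CD pipeline means re-qualification evidence is regenerated automatically whenever the tool, or its configuration, changes.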
DevSecOps brings a really interesting approach to treating security requirements in the same ways as safety and functional requirements, which is a perfect fit for modern edge systems with a mission critical element. It’s encouraging to see this approach increasingly being built into development culture and supporting the next wave of mission critical product innovation in the industries of aviation, automotive and more.
This article was written by Ian Ferguson, VP Marketing, Lynx Software Technologies (San Jose, CA).