Improving Software Reliability in Satellite and Spacecraft Applications

In the space industry, much of the effort to refine system engineering processes has been carried out by hand, using tools as divergent as Microsoft PowerPoint and IBM Rational DOORS. While these tools are valuable, they do not provide a direct connection to source code and test results.

Table 1. A requirement traceability matrix shows connections between high-level requirements and source files.
Other related industries, such as commercial and military avionics, employ rigorous software process standards such as DO-178B, which promote process enforcement by documenting links between software requirements, code, and testing. While military avionics programs generally do not require formal certification, they typically use DO-178B as a guideline. Such processes have been empirically shown to produce higher levels of software reliability. Without this level of diligence, lives could be lost and military missions could fail.

The space industry shares many of the same characteristics with the avionics industry. In addition to being safety critical, space missions experience many of the same cost and schedule challenges. The space industry has typically not been governed by DO-178B-certified software development process methodologies, but has employed manual methods. However, by introducing the same techniques used by commercial and military avionics, the space industry would significantly improve the likelihood of mission success and cost reduction.

Software Certification Tools: Code Review

Figure 1. RTM conceptual view, showing connections between requirements and verification.
Software certification tools support, enforce, and automate processes. They also help prevent software errors ranging from simple coding mistakes to subtle defects that only surface at runtime. For example, it is often useful to verify that the data modified by a procedure is the data you expect the procedure to modify.

These checks, in process standard nomenclature, constitute a “coding standard.” In many process standards, including DO-178B and MISRA, use of a coding standard is a key part of enforcing the process. However, if checking is done late in the development cycle or without automated tools, developers often choose to enforce only a subset of the rules in the coding standard. Introducing an automated coding standards checker early in the development process creates a more process-oriented workflow. The time saved by automating the checks rather than performing them entirely by hand represents the first level of reliability and cost savings.
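To make the idea concrete, here is a minimal sketch of the kind of construct a coding standards checker flags and the kind of fix it encourages. The function and values are hypothetical, not drawn from any real flight program; the pattern shown (widening before multiplication and saturating rather than silently wrapping) reflects typical MISRA-style rules about implicit conversions and overflow.

```c
#include <stdint.h>

/* Hypothetical sensor-scaling helper written to satisfy
   MISRA-style rules: explicit fixed-width types, the multiply
   widened to 32 bits before it can overflow, no side effects in
   the controlling expression, and a single point of return. */
static uint16_t scale_sensor(uint16_t raw, uint16_t gain)
{
    uint32_t product = (uint32_t)raw * (uint32_t)gain; /* widen first */
    uint16_t result;

    if (product > 0xFFFFu) {
        result = 0xFFFFu;            /* saturate instead of wrapping */
    } else {
        result = (uint16_t)product;  /* explicit narrowing cast */
    }
    return result;
}
```

The non-compliant version, `(uint16_t)(raw * gain)`, relies on implicit integer promotion and wraps silently on overflow; an automated checker run early in development catches such constructs before they accumulate.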

While coding standards checking on its own can be considered part of system engineering verification, its value becomes clearer when combined with static data flow analysis. Used in environments such as DO-178B, static data flow analysis verifies that code links and runs with the correct data, and it connects requirements to code. Armed with requirements traceability, developers can verify that the software does what it was specified to do, and these connections become proof points for project management and process auditing purposes.

Dynamic Analysis in System Testing

Table 2. The three parts of an RTM show connections between high-level requirements, low-level requirements, software verification plans, and reports.
Dynamic analysis evaluates the effectiveness of test plans, providing a high level of assurance that the subsystem on a spacecraft or satellite has been adequately tested before integration.

Traditionally, these tasks are done by hand. Test managers execute system test plans and provide results to the project manager. In some cases, the test manager looks at the code to verify the effectiveness of the test plan. Clearly, this is not always straightforward, and in large projects it becomes logistically impossible.

Dynamic analysis tools easily automate these processes and document which modules, lines, or conditions in the code have executed. More importantly, the tools trace which parts of the code are executed by each part of the test plan, enabling developers to examine the plan’s effectiveness. When portions of the code are not executed, the analysis identifies “unreachable code,” which in most certified systems should be eliminated, as well as “infeasible code,” which generally requires further examination.
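A small, purely illustrative example shows what a coverage report surfaces. The mode-select routine below is hypothetical; its last `if` can never execute because both earlier branches have already returned, so line-level coverage instrumentation would report that branch at 0% across every test run, prompting the reviewer to delete the dead code or justify it.

```c
#include <stdint.h>

typedef enum { MODE_SAFE, MODE_NOMINAL, MODE_BURN } flight_mode_t;

/* Hypothetical mode selection. The third branch is unreachable:
   any input that could satisfy it already returned above, so a
   dynamic analysis run over the full test plan reports the branch
   as never executed. */
static flight_mode_t select_mode(uint8_t fault_flags, uint8_t burn_enable)
{
    if (fault_flags != 0u) {
        return MODE_SAFE;
    }
    if (burn_enable == 1u) {
        return MODE_BURN;
    }
    if ((fault_flags != 0u) && (burn_enable == 1u)) {
        return MODE_SAFE;   /* dead code: coverage shows 0 hits */
    }
    return MODE_NOMINAL;
}
```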

Integrating the static and dynamic analysis results into the overall process substantiates the effectiveness of the test plan and provides invaluable day-to-day tangible insight into the project management process. This level of system traceability delivers a huge cost savings and improves reliability over the traditional system test checklist.

Unit Testing

Figure 2. A conceptual diagram illustrating connections between documents in a typical satellite software development project (left) zooms in to illustrate connections between individual requirements, source, and verification (right).
Unit test tools automate the creation of test case drivers. By examining off-nominal cases and verifying that inputs and outputs at the procedure and module level match the plan, the tools provide insight into how parts of the system work together.

Generally speaking, tools used in certification environments automatically create stubs and stub out global variables, allowing developers to examine code from the module level down to the function level. Developers can then test code before the hardware is fully available and run regression tests to verify that the results are the same every time.
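The stub-and-driver pattern can be sketched in a few lines. Everything here is hypothetical (the function names, channel layout, and scripted values are invented for illustration): a hardware-facing `read_adc()` is replaced by a stub that returns scripted values, so the unit under test runs deterministically long before flight hardware exists, and the same driver can be re-run unchanged for regression testing.

```c
#include <stdint.h>

/* Stub state: scripted ADC readings, consumed in order. */
static uint16_t stub_values[2];
static int stub_index = 0;

/* Stub standing in for the hardware ADC driver. */
static uint16_t read_adc(uint8_t channel)
{
    (void)channel;                  /* stub ignores the real channel */
    return stub_values[stub_index++];
}

/* Hypothetical unit under test: averages two thermistor channels. */
static uint16_t average_temps(void)
{
    uint32_t a = read_adc(0u);
    uint32_t b = read_adc(1u);
    return (uint16_t)((a + b) / 2u);
}

/* Test driver: load the stub, reset its cursor, run the unit.
   Because inputs are scripted, repeated runs give identical
   results, which is what makes regression testing meaningful. */
static uint16_t run_case(uint16_t v0, uint16_t v1)
{
    stub_values[0] = v0;
    stub_values[1] = v1;
    stub_index = 0;
    return average_temps();
}
```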

To achieve better quality testing, unit test tools are used in conjunction with code coverage tools. With the two combined, the developer chooses a representative sample of input cases that fulfills code coverage requirements, rather than a set of cases that merely happens to be convenient.

From a system engineering context, these tools map between portions of code and the expected and actual inputs and outputs. Results can be tied into either higher-level system engineering requirements or lower-level/derived requirements. These connections clearly document that inputs and outputs map as expected, prove higher-level requirements are adequate, and confirm test cases ran as designed.

Requirement Traceability

Requirement traceability is the critical component that ties all of this together. It is easy to see why achieving requirements traceability manually is error prone and inadequate. In fact, research has shown that 85% of software errors stem from requirement traceability failures.

These facts, in conjunction with DO-178B’s traceability requirements, have led to the creation of tools that directly parse requirements in a requirement management system and connect them, in an automated fashion, to test plans and test results. This automation provides time savings in system engineering processes, as well as rigorous enforcement.

Integrating Results

Implementing a Requirement Traceability Matrix (RTM) is fundamental for illustrating and verifying connections within a system. This matrix (see Table 1) shows the connection between high-level, lower-level, and derived requirements, test plans, and verification results.

Let’s consider a typical segment of a requirement traceability matrix, as depicted in Table 1. This segment shows the connection between upstream software requirements and specific source files. The code associated with each software requirement is shown graphically, enabling the developer to identify gaps in the requirements and code that does not correspond to a requirement.
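The link the matrix records can be sketched as a simple data structure. The names and fields below are hypothetical, not the schema of any particular tool; the point is that once requirement-to-source links are machine-readable, gap detection of the kind just described becomes a trivial query.

```c
#include <stddef.h>

/* Hypothetical in-memory form of one RTM row: a high-level
   requirement linked to the source files that implement it, plus
   a verification flag filled in from test results. */
typedef struct {
    const char *req_id;          /* e.g. "HLR-042" (invented ID) */
    const char *source_files[4]; /* code implementing the requirement */
    int         verified;        /* 1 once its test cases have passed */
} rtm_row;

/* The gap check the article describes: a requirement with no
   linked source file is a hole in the implementation. */
static size_t count_unimplemented(const rtm_row *rows, size_t n)
{
    size_t gaps = 0;
    for (size_t i = 0; i < n; i++) {
        if (rows[i].source_files[0] == NULL) {
            gaps++;
        }
    }
    return gaps;
}

/* Two sample rows: the second has no linked source, so it shows
   up as a gap in the matrix. */
static const rtm_row sample[2] = {
    { "HLR-001", { "acs_control.c" }, 1 },
    { "HLR-002", { NULL },            0 },
};
```

The inverse query, finding source files that map to no requirement, flags the “code that does not correspond to a requirement” case in the same way.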

While this simple example illustrates a small part of a requirement traceability matrix, the real power becomes apparent in more complex systems. To add depth to this concept, let’s look at the requirement traceability matrix in a couple of case studies.

Consider a generic space vehicle with hundreds of subsystems designed by dozens of suppliers. Certifying such a vehicle for flight, and especially for human-rated flight, is difficult. However, the task’s complexity can be alleviated by using a consistent set of tools. For example, on NASA’s Crew Exploration Vehicle (CEV), where different subsystems have been designed by NASA, Orbital Sciences, and Honeywell, all subcontractors use LDRA tools for verification reports. This enforces a consistent process across suppliers and allows for uniform evaluation of project milestones.

Figure 1 offers a typical traceability matrix for this scenario. First, high-level program requirements connect to lower-level requirements. These lower-level requirements dictate the specifics addressed by the spacecraft flight software and have system test and code review verification tasks associated with them. The specific system test requirements correspond to MC/DC (Modified Condition/Decision Coverage), which ensures that input parameters do not mask decisions made by the software.
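MC/DC is easiest to see on a small decision. The abort check below is hypothetical, invented purely to show the criterion: for a decision with conditions A and B, MC/DC requires test vectors in which each condition is shown to independently flip the outcome while the other is held fixed, which for two conditions needs only three vectors rather than all four combinations.

```c
#include <stdbool.h>

/* Hypothetical two-condition decision: abort when pressure is
   exceeded and the crew has not overridden the abort. */
static bool abort_required(bool over_pressure, bool crew_override)
{
    return over_pressure && !crew_override;
}

/* Minimal MC/DC vector set for (A && !B):
     A=1, B=0 -> true    baseline
     A=0, B=0 -> false   only A changed, outcome flipped: A shown
                         to independently affect the decision
     A=1, B=1 -> false   only B changed, outcome flipped: B shown
                         to independently affect the decision */
```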

This documentation tree provides a high degree of confidence that the software performs as expected. In cases where application failure can cause loss of life, documentation review proves process reliability, illustrating whether subcontractors rigorously used the RTM to track their requirements and testing.

In Table 2, two high-level requirements are shown along with two lower-level requirements. These lower-level requirements lead to unit test and system test verification plans. The third section gives the unit test and system test verification results. Connections between the levels link artifacts such as requirements, code, and tests.

Satellite Design

In real life, the connections between the documents are more complicated, although the concepts of coverage links and percentage coverage remain the same. Consider the case of a typical satellite system, with a system requirements specification that covers an interface control document and a software requirements specification (see Figure 2). A typical graphical view illustrates the connections represented in the requirement traceability matrix.

The project diagram, on the left in Figure 2, shows that source code is mapped to specific requirements. A software verification plan must cover both the source code and any requirements-based test case. Again, this whole matrix of relationships all the way down to the software verification report can be represented via an automated tool chain that offers integrated artifacts from code review, code coverage, unit testing, and system testing.

The use of software certification and validation tools bridges the gap between the software engineer, the tester, and the project manager. Simple systems are easy enough to diagram on paper; in practice, these tools deliver the most value on large systems.

Even in cases where there are thousands of requirements, these tools scale to represent all of the requirements and verification results. At any point, the project manager can take a look at overall status or use filters to isolate subsystems and look at one subsystem at a time. A test engineer can look solely at the connections between software and test cases, while the project manager can validate test cases in the context of code coverage. The end result is increased productivity and reliability.


Today, use of these tools represents best practice and an excellent starting point for process improvement. Going forward, their use may be mandated, particularly in circumstances where satellites interact with commercial airspace, private space vehicles carry human crews, and space vehicles operate in and re-enter public airspace. In many of these circumstances, private vehicles will require commercial licenses from the FAA, which may force some or all of the processes in DO-178B to be applied. Any company building components in these contexts should consider the use of certified software validation tools as a way to establish an edge for the future.

By employing the software certification processes used by military and commercial avionics, the space industry can realize the same increased productivity and reliability, as well as notable cost savings. Rigorous links are established between the various parts of a program, creating a better quality process in which automated tools replace error-prone human effort. Most importantly, software validation tools ultimately decrease the margin of error and save lives.

This article was written by Jay Thomas, Field Application Engineer, LDRA (San Bruno, CA).