A well-considered testing approach with a focus on early testing stages delivers fast test results, short iterations, and immediate daily feedback for development.


Why TPT for this? Because in TPT, test cases are defined independently of technology and execution method, enabling tests to be reused even in later testing stages such as HiL or vehicle testing. Additionally, TPT supports Black-Box Testing, Grey-Box Testing, and White-Box Testing.

Tedious test maintenance is a thing of the past: Expected values can be determined in separate definitions independent of test data, eliminating duplications.
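The idea of separating expected values from test data can be sketched as follows. This is an illustrative Python sketch, not TPT's actual syntax: the expected behavior is defined once in a single assessment function, so the test data lists only stimuli and no expected value is duplicated per case.

```python
# Illustrative sketch (not TPT's API): one assessment definition,
# independent of the test data, eliminates duplicated expected values.

def celsius_to_fahrenheit(c):
    """Toy unit under test."""
    return c * 9 / 5 + 32

def assess(stimulus, response):
    """Single expected-value definition, shared by every test case."""
    return response == stimulus * 9 / 5 + 32

stimuli = [-40, 0, 37, 100]  # test data: stimuli only, no expected values
print(all(assess(c, celsius_to_fahrenheit(c)) for c in stimuli))  # True
```

If the requirement changes, only the one assessment is edited, not every test case.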


Model-in-the-Loop (MiL) Testing

During Model-in-the-Loop (MiL) Testing, software models are directly tested in the development environment, such as MATLAB/Simulink by MathWorks.

Advantages of MiL Testing with TPT

Supported modeling technologies include MATLAB/Simulink, TargetLink, and ASCET.

TPT seamlessly integrates with MATLAB/Simulink and supports generated code from TargetLink, which can be tested as Software-in-the-Loop (SiL). The same applies to ASCET.

Once created, test cases can be fully reused across all other testing stages. Back-to-Back Testing with TPT is particularly convenient and immediately identifies discrepancies between the model and its execution on the control unit.


Software-in-the-Loop (SiL) Testing

Both generated and manually written code can be tested with TPT. Integration is often fully automatic. All versions of MinGW and Visual Studio compilers are supported, and tests can be debugged in both TPT and IDEs. TPT supports SiL Unit and Software Integration tests, automatically stubbing unresolved references. SiL test execution with TPT can take place on the host under Windows and Linux, in a CI environment, and in the cloud.
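The automatic stubbing of unresolved references mentioned above can be illustrated with a small sketch. This is a hedged Python example with invented names (`AdcStub`, `voltage_mV`), not TPT's stubbing mechanism: a dependency that only exists on the target, here an ADC driver, is replaced by a stub so the unit can run on the host.

```python
# Illustrative sketch of the stubbing idea: the unresolved hardware
# dependency (an ADC driver) is replaced by a stub for host execution.
# All names are invented for illustration, not part of TPT.

class AdcStub:
    """Stub standing in for the unresolved ADC driver."""
    def __init__(self, raw_value):
        self.raw_value = raw_value

    def read_raw(self):
        return self.raw_value

def voltage_mV(adc, vref_mV=5000, bits=12):
    """Unit under test: convert a raw ADC count to millivolts."""
    return adc.read_raw() * vref_mV // (2 ** bits)

stub = AdcStub(raw_value=2048)  # inject a mid-scale reading
print(voltage_mV(stub))         # 2048 * 5000 // 4096 = 2500
```

The same unit can later run unmodified against the real driver on the target, which is what makes the tests reusable across stages.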

Depending on the form of the test object, it can be connected to TPT in various ways:

Presented Source               Integration with TPT
Source Code                    via C-Platform
AUTOSAR Software Components    via AUTOSAR-Platform
Library                        via C-Platform
Object Code                    via C-Platform
Executable Application         via EXE-Platform


Processor-in-the-Loop (PiL) Testing

During Processor-in-the-Loop (PiL) Testing, the embedded software is tested directly on the processor that will later be used in the control unit. The goal is to verify the compatibility of hardware and software components early on, such as drivers or actuator control.

Advantages of PiL Testing with TPT

With TPT, PiL testing can be done either physically on real hardware or virtually in a simulation environment. Supported platforms include the Universal Debug Engine (UDE) from PLS, Trace32 from Lauterbach, and winIDEA from iSYSTEM.

Even if only HEX or ELF files are provided as the source, they can be integrated via the Lauterbach, PLS UDE, and winIDEA platforms.

For execution in the simulation environment, you will need our Trace32 Support Package and a Trace32 license from Lauterbach. In Trace32, you can choose to use the simulator instead of a physical board, which eliminates the purchase, setup, and maintenance of hardware.


Background on PiL Testing

Algorithms and functions for processors in embedded systems are typically developed on a PC within a development environment, either directly in C or C++, or model-based with Simulink, TargetLink, ASCET, or ASCET-DEVELOPER. The resulting C/C++ code must then be compiled with a specific "target" compiler for the processor that will be used in the vehicle's control unit.

To verify whether the compiled code also works on the target processor, PiL tests are conducted. The control algorithms for PiL testing are usually executed on an evaluation board, sometimes also on the actual control unit. In both variants, the real processor used in the control unit is employed, not the PC as in Software-in-the-Loop (SiL) testing. Using the target processor has the advantage of detecting compiler errors.
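Why executing on the real processor matters can be made concrete with a small sketch. The Python example below is purely illustrative: it emulates a 16-bit unsigned variable to show a defect that stays invisible on a 32-bit host PC (as in SiL) but appears on a 16-bit target, which is exactly the class of error a PiL test is meant to catch.

```python
# Illustrative sketch: an intermediate value that fits in a native int on
# the host PC wraps around when held in a 16-bit variable on the target.
# The u16 helper emulates 16-bit unsigned storage; names are invented.

def u16(x):
    """Emulate storage in a 16-bit unsigned variable (wraparound)."""
    return x & 0xFFFF

def ms_from_seconds_host(seconds):
    """Host build: native integer width, no overflow."""
    return seconds * 1000

def ms_from_seconds_target(seconds):
    """16-bit target build: the intermediate wraps above 65535."""
    return u16(seconds * 1000)

print(ms_from_seconds_host(70))    # 70000
print(ms_from_seconds_target(70))  # 70000 & 0xFFFF = 4464
```

A SiL test on the host would pass for 70 seconds; only a run on the target (or a faithful target simulation) exposes the wraparound.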

"In-the-Loop" in PiL tests means that the controller is embedded in real hardware and simulates the environment of the software being tested. Environment models like MiL, SiL, and HiL are uncommon in PiL tests because embedding such models on the target processor is complex or impossible. When environment models converge with the processor, it is usually referred to as Hardware-in-the-Loop Testing (HiL).

At this level, integration and system tests are often conducted; they can form part of the Automotive SPICE steps SWE.5 and SYS.5.


Hardware-in-the-Loop (HiL) Testing

Hardware-in-the-Loop (HiL) Testing involves connecting the finished control unit electrically to a simulation environment for testing.


Advantages of HiL Testing with TPT

HiL Tests on Control PCs

For this scenario, TPT test cases are directly modeled and executed on the control PC. While the tests are running, TPT communicates with the real-time simulator, allowing for the continuous alteration and observation of signals and parameters. Results can be recorded in real-time.

Also possible and easy to set up is communication with application tools such as INCA, CANape, fault simulators, or directly with the CAN bus. The TPT Dashboard also enables manual, interactive tests with TPT on the HiL.

PC-controlled HiL tests are supported by:

  • dSPACE HiL systems
  • Vector CANoe
  • NI Veristand
  • RT-LAB
  • ASAM XIL-capable HiL systems

Real-time HiL Tests

Tests can be performed with TPT on HiL systems in real-time, with cycle times of less than 100 µs. In this setup, tests run directly on the real-time system.

Real-time tests are supported by:

  • Speedgoat
  • Concurrent iHawk systems
  • NI Veristand
  • MathWorks Simulink Real-Time

Vehicle-in-the-Loop (ViL) Testing

Vehicle-in-the-Loop (ViL) testing involves testing the components, control units, actuators, and sensors in the final target environment, ultimately representing vehicle testing.

Typically, vehicles are tested under various environmental conditions in cold, warm, and hot regions. Even today, these tests are mainly conducted manually, and manual tests scale only with the number of trained drivers and available vehicles.

TPT's Autotester provides significant added value by offering a structured approach to vehicle testing. With the Autotester, you can describe manual driving maneuvers, guide a driver step by step through a test both audibly and visually, verify that each step was executed correctly, and fully automate all tests. Best of all, if the driver notices unusual behavior during the drive, they can make a voice recording at the press of a button, and the trace is labeled accordingly at the time of recording. The driving data for that situation is then trimmed for simplified analysis on the computer.

Black-Box, Grey-Box, and White-Box Testing

Testing can be divided into three approaches: Black-Box, Grey-Box, and White-Box testing. The differentiating factor is the information available to the tester for creating and executing tests. In the embedded domain, the following characteristics apply.

Black-Box Test

In Black-Box Testing, the tester receives a test object as a black box along with a description of how the black box should behave, usually in the form of requirements. There is no information about the internal structure. The tester creates test cases, executes them, and compares the responses of the black box with the expected values derived from the specification.
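A minimal sketch of this workflow, with an invented requirement and function name for illustration: the tester treats the unit as opaque and derives every expected value from the requirement alone, never from the code.

```python
# Black-box sketch: the tester sees only the interface and the requirement.
# Illustrative requirement: "The output shall be the input limited to 0..100."

def unit_under_test(x):
    """Opaque to the tester; internals are irrelevant to the test."""
    return min(100, max(0, x))

# Test vectors with expected values derived from the requirement alone.
cases = [(-1, 0), (0, 0), (50, 50), (100, 100), (101, 100)]
verdicts = [unit_under_test(x) == expected for x, expected in cases]
print(all(verdicts))  # True if every response matches the specification
```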

Grey-Box Test

Test case creation in Grey-Box testing is essentially like Black-Box testing, but the tester has basic knowledge of the internal structure, for example, through descriptions of internal states the system can assume. However, the tester does not have direct access to the implementation or the code.

White-Box Test

In White-Box testing, the tester has all the information, including insight into the codebase. In practice, this approach often leads to lower product quality than the Black-Box approach: from a quality assurance perspective, the code itself should never be used as the reference for deriving expected values, since such tests merely confirm the implementation. However, there are meaningful exceptions, such as using White-Box tests to cover defensive programming constructs, like redundant null-pointer checks, that cannot be stimulated with Black-Box approaches.
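The defensive-programming exception can be sketched as follows, using Python and invented names for illustration: a guard that the caller contract makes unreachable from the outside is exercised directly, which requires knowledge of the code.

```python
# White-box sketch: a defensive guard that no contract-respecting
# black-box stimulus can trigger is covered deliberately.

def mean(values):
    # Defensive check: the caller contract guarantees a non-empty list,
    # so only a white-box test will ever reach this branch.
    if values is None or len(values) == 0:
        return 0.0
    return sum(values) / len(values)

# White-box tests deliberately violate the contract to cover the guard.
print(mean(None))       # 0.0 (defensive path)
print(mean([]))         # 0.0 (defensive path)
print(mean([2, 4, 6]))  # 4.0 (nominal path)
```

Note that the expected values for the nominal path still come from the requirement; only coverage of the guard relies on code knowledge.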

Conclusion

Whether coverage reports available to the tester after execution count as White-Box or Black-Box information remains ambiguous. What is clear, however, is that coverage reports help identify gaps in the tests relative to the requirements, and gaps in the requirements relative to the code.
