A standard tool architecture for WCET analysis has evolved. In a first phase, which may itself consist of several subphases, the software is analyzed to determine invariants about the sets of execution states at each instruction. Abstract interpretation is the technique most commonly used in this phase. The invariants allow conservative execution times to be predicted for individual instructions and for basic blocks. A second phase determines a worst-case path through the program. This is often done by implicit path enumeration: the control flow is translated into an integer linear program, which is then solved.
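In implicit path enumeration, each basic block i is associated with an execution-count variable x_i and with the conservative time bound c_i obtained in the first phase; structural constraints derived from the control-flow graph and from loop bounds restrict the feasible counts, and the worst-case path corresponds to the maximum of the objective. A minimal sketch with illustrative symbols (f_e denotes the count of edge e, and n a loop bound supplied by the flow analysis):

\[
\max \sum_{i \in \mathit{Blocks}} c_i\, x_i
\quad\text{subject to}\quad
x_{\mathit{entry}} = 1,\qquad
x_i = \sum_{e\,\in\,\mathit{in}(i)} f_e = \sum_{e\,\in\,\mathit{out}(i)} f_e,\qquad
x_{\mathit{body}} \le n \cdot x_{\mathit{header}} .
\]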
The results are more precise if a thorough analysis of the control flow is performed. The compiler that has translated the source program into the executable often has valuable information about the control flow, and making this information available to the timing analysis is a promising avenue.
The construction of timing-analysis tools is difficult, tedious, and error-prone. Research is under way to develop computer support for this task.
For non-hard real-time tasks, measurement-based methods are being evaluated and tested.
Compared to a hardware implementation, embedded system software usually involves a significant overhead in terms of energy consumption and execution time. However, for flexibility reasons, hardware cannot be used in applications with changing requirements. To make a software implementation feasible, the efficiency of embedded software is a must. Various approaches for achieving this efficiency have been explored.
Due to the efficiency requirements, using power-hungry, high-performance off-the-shelf processors from desktop computers is infeasible for many applications. Therefore, the use of customized processors is becoming more common. These processors are optimized for a certain application domain or for a specific application. As a consequence, hundreds of different domain-specific or even application-specific programmable processors (ASIPs) have appeared on the semiconductor market, and this trend is expected to continue. Prominent examples include low-cost/low-energy microcontrollers (e.g. for wireless sensor networks), number-crunching digital signal processors (e.g. for audio and video codecs), and network processors (e.g. for internet traffic management).

Source-to-source transformations are another approach to improving the efficiency of embedded software. These transformations are applied before any compiler is started and are, to some extent, independent of the final processor architecture. The advantage of this approach is therefore that it can be used with almost any compiler; it can also be combined with retargetable compilation.
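As an illustration, a typical source-to-source transformation rewrites a C loop before any compiler is invoked. The following is only a minimal sketch; the function names and the unrolling factor are chosen for illustration:

    /* Original loop as written by the programmer. */
    void scale(const int *a, int *b, int n, int k) {
        for (int i = 0; i < n; i++)
            b[i] = k * a[i];
    }

    /* After a source-to-source transformation: the loop is unrolled by a
       factor of four to reduce loop overhead; a remainder loop handles the
       leftover iterations. The result is still plain C and can be fed to
       almost any compiler. */
    void scale_unrolled(const int *a, int *b, int n, int k) {
        int i = 0;
        for (; i + 3 < n; i += 4) {
            b[i]     = k * a[i];
            b[i + 1] = k * a[i + 1];
            b[i + 2] = k * a[i + 2];
            b[i + 3] = k * a[i + 3];
        }
        for (; i < n; i++)
            b[i] = k * a[i];
    }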
A third approach is the use of sophisticated optimizations within a compiler. Optimizations tuned towards embedded systems have been designed by a number of members of the ARTIST2 compiler cluster.
Due to the increasing importance of the memory interface, various optimizations have been designed that help to maximize its efficiency. These optimizations can either be integrated into compilers or applied as source-to-source optimizations.
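One common example of such an optimization, sketched below at source level under illustrative assumptions (the array size N is arbitrary), is loop interchange: nested loops are reordered so that the innermost loop accesses consecutive memory addresses, allowing caches and burst-oriented memory interfaces to be used efficiently.

    #define N 256

    /* Column-wise traversal: successive accesses to a are N elements apart,
       which wastes cache lines and memory bandwidth. */
    void row_sums_naive(const int a[N][N], int row_sum[N]) {
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                row_sum[i] += a[i][j];
    }

    /* After loop interchange: the inner loop walks consecutive addresses,
       so each fetched cache line or memory burst is fully used. */
    void row_sums_interchanged(const int a[N][N], int row_sum[N]) {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                row_sum[i] += a[i][j];
    }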
The requirements of most embedded systems comprise not only optimal resource usage but also high safety and dependability guarantees. To meet safety requirements it is necessary to develop software in high-level programming languages and to ensure that the translation to machine code preserves the semantics of the program and consequently the software system’s behaviour. To achieve very efficient machine code, compilers for embedded systems apply aggressive optimizations. The need to verify the compilation process is increased by the fact that such optimizations are very error-prone, in particular if they change the structure of programs. To make optimizations applicable in safety-critical systems and to ensure that the compiler produces efficient and also correct executable code, methods for the verification of optimizing compilers are being explored.
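The correctness requirement can be stated abstractly as a refinement relation between the source program S and the generated machine program T; the notation Beh below is illustrative and stands for the set of observable behaviours from a given initial state:

\[
\forall\,\sigma.\;\; \mathit{Beh}_{T}(\sigma) \;\subseteq\; \mathit{Beh}_{S}(\sigma) .
\]

Verification methods either establish this property once and for all for the compiler, or check it separately for each individual compilation run.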