This section covers both Timing Analysis and Compilers.

Timing Analysis

Hard real-time systems need to satisfy stringent timing constraints, which are derived from the systems they control. To guarantee that all the timing constraints of the tasks making up the system will be met (timing validation), a schedulability analysis has to be performed for this task set on a given hardware architecture. Schedulability analysis requires knowledge of upper bounds on the execution times of all the system’s tasks. These upper bounds (and, analogously, lower bounds) have to be safe, i.e., they must never underestimate (overestimate) the real execution time. The upper bounds represent the worst-case guarantee the developer can give.

Furthermore, the bounds should be tight, i.e., the overestimation (underestimation) should be as small as possible. Thus, the main criteria for accepting a timing-analysis tool used to obtain guarantees are soundness (do the methods produce safe results?) and precision (are the derived bounds tight?).
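To illustrate how safe WCET upper bounds feed into schedulability analysis, the following sketch applies the classic Liu-and-Layland utilization test for rate-monotonic scheduling. The test itself is standard; the task set and all numbers are invented for illustration and are not from the original text.

```python
# Sketch: sufficient utilization-based schedulability test for
# rate-monotonic scheduling (Liu & Layland). Each task is a pair
# (wcet_bound, period); the WCET bounds must be *safe* upper bounds,
# otherwise the resulting guarantee is void.

def rm_schedulable(tasks):
    """n periodic tasks are schedulable under rate-monotonic
    priorities if total utilization <= n * (2**(1/n) - 1)."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilization <= bound

# Hypothetical task set: (WCET upper bound, period), e.g. in ms.
tasks = [(1.0, 4.0), (1.0, 5.0), (2.0, 10.0)]
print(rm_schedulable(tasks))  # utilization 0.65 <= bound ~0.78 -> True
```

Note that a pessimistic (non-tight) WCET bound inflates the computed utilization, which is exactly why precision matters: a task set may be rejected by the analysis even though it would in fact meet its deadlines.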
The problem of determining upper bounds on execution times has been solved for single tasks and for quite complex processor architectures. Several commercial WCET tools are available, and they have received positive feedback from extensive industrial use in the automotive and aeronautics industries; this feedback concerned the performance of the tools, the precision of the results, and their usability. In addition, several research prototypes are under development.

Timing-analysis tools have to cover a rather large space, spanning different application domains with their specific requirements, different classes of processor architectures, more general hardware and overall system architectures, and different user expectations. The tools offered by the cluster partners have their strengths at different points of this space.

The existing tools serve some particular and highly relevant points in this space. AbsInt’s tool, for example, has been used in the development of safety-critical systems for the Airbus A380.

On the other hand, they currently do not serve distributed architectures well. Another interesting point in this space is occupied by timing-analysis tools adapted for teaching; this point is being addressed through cooperation between Mälardalen and Tidorum.
Tool development is itself highly complex, owing to the complexity of modern processor architectures; it takes too much effort and is error-prone. Therefore, computer-supported tool-development efforts are needed.


Compilers

A large number of compiler platforms are available in industry and academia, e.g. GCC (GNU), LANCE (Dortmund/Aachen), OCE (Atair/Mentor), SUIF (Stanford), and CoSy (ACE). Compiler platforms are usually conceived as software systems that allow for quick development of compilers for new target machines and that permit efficient research by means of an open, easily extensible infrastructure.

A key problem, though, is that there is still no “one-size-fits-all” platform. Each of the available platforms has its specific strengths and weaknesses with respect to openness, IP rights, code quality, etc. Furthermore, different platforms serve different research requirements: some are more suitable for backend modifications, while others are better for source-level transformations. Therefore, the heterogeneous platform landscape is expected to persist in the future. Nevertheless, the members of the ARTIST2 compiler cluster have decided to focus largely on one specific compiler platform, namely the CoSy system by ACE.

With the increasing customization of embedded processors it becomes more and more obvious that architecture-aware compilation is a must for achieving sufficient code quality. Applying only classical code-optimization techniques, which largely work at the machine-independent intermediate-representation level, is not good enough. Therefore, members of the ARTIST2 compiler cluster have designed numerous novel code-optimization techniques; e.g., Dortmund and Aachen have worked extensively on optimizations for DSP, VLIW, and network processors. This work is being continued in the context of ARTIST2.
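A minimal sketch of what "architecture-aware" can mean at the backend level: many DSPs offer a single-cycle multiply-accumulate (MAC) instruction that a machine-independent optimizer never emits. The peephole pass below fuses a multiply followed by an accumulate into a MAC. The tuple-based instruction format and opcode names are invented for illustration; a real pass would operate on the compiler's backend IR and would also verify that the multiply result has no further uses.

```python
# Sketch: architecture-aware peephole pass fusing mul + add into a
# DSP-style MAC instruction. Instructions are tuples:
#   ('mul', dst, src1, src2)  and  ('add', dst, src1, src2).
# Opcode names and format are hypothetical.

def fuse_mac(instrs):
    """Rewrite ('mul', d, a, b) followed by ('add', e, e, d)
    into the single instruction ('mac', e, a, b), i.e. e += a * b."""
    out, i = [], 0
    while i < len(instrs):
        if (i + 1 < len(instrs)
                and instrs[i][0] == 'mul'
                and instrs[i + 1][0] == 'add'
                and instrs[i + 1][3] == instrs[i][1]        # add consumes mul result
                and instrs[i + 1][1] == instrs[i + 1][2]):  # accumulate form: e = e + d
            _, d, a, b = instrs[i]
            e = instrs[i + 1][1]
            out.append(('mac', e, a, b))  # one cycle on a typical DSP
            i += 2
        else:
            out.append(instrs[i])
            i += 1
    return out

prog = [('mul', 't0', 'x', 'y'), ('add', 'acc', 'acc', 't0')]
print(fuse_mac(prog))  # [('mac', 'acc', 'x', 'y')]
```

The point of the example is that the fusion opportunity only exists because the target architecture is known; at a machine-independent IR level there is no MAC to fuse into.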

Timing problems are expected to become more severe in the future, due to the increasing speed gap between processors and memories. Because of this gap, efforts to improve system performance have been predicted to hit the “memory wall”, meaning that memories will be the key limiting constraint on further performance improvements. Memories also consume a major portion of the electrical energy of embedded systems. Both problems can be addressed by introducing memory hierarchies. However, traditional memory hierarchies have been designed for good average-case behaviour. New technologies are needed to design fast, energy-efficient, and timing-predictable memory hierarchies. Some work has been done in the context of caches (loop caches, filter caches, etc.), but these approaches have mostly relied on hardware techniques for reaching these goals, treating the compiler as a black box that must not be touched. Only a few authors (e.g. Barua, Catthoor, Dutt, Kandemir) have taken a holistic approach, looking at both hardware and software issues.
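One well-known software-side technique in this holistic spirit is compiler-directed scratchpad allocation: the compiler selects which code and data objects to place in a small, fast, energy-efficient on-chip memory, which can be cast as a 0/1 knapsack problem. The sketch below solves a tiny instance exactly; the object names, sizes, and energy-saving values are invented for illustration only.

```python
# Sketch: scratchpad allocation as a 0/1 knapsack. Each candidate
# object is (name, size_in_bytes, energy_saving); we pick the subset
# maximizing total saving within the scratchpad capacity.
# All concrete numbers below are hypothetical.

def allocate_scratchpad(objects, capacity):
    """Exact dynamic-programming solution over used capacity.
    Returns (best_total_saving, list_of_chosen_object_names)."""
    best = {0: (0, [])}  # used_bytes -> (saving, chosen names)
    for name, size, saving in objects:
        # Iterate over a snapshot so each object is placed at most once.
        for used, (sav, chosen) in list(best.items()):
            new_used = used + size
            if new_used <= capacity:
                cand = (sav + saving, chosen + [name])
                if new_used not in best or best[new_used][0] < cand[0]:
                    best[new_used] = cand
    return max(best.values())

objs = [('hot_loop', 512, 90), ('lookup_tab', 1024, 60), ('stack_frame', 768, 70)]
print(allocate_scratchpad(objs, 1536))  # (160, ['hot_loop', 'stack_frame'])
```

Unlike a cache, the resulting placement is fixed at compile time, which is precisely what makes scratchpads attractive for timing predictability: the WCET analysis knows statically which accesses are fast.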

(c) Artist Consortium, All Rights Reserved - 2006, 2007, 2008, 2009
