Rigorous Embedded Design 2011

April 10th, 2011, Salzburg, Austria (within EuroSys 2011). Organised and funded by ARTIST.


  • Joseph Sifakis (CNRS, Verimag Grenoble)

A Vision for Computer Science: The System Perspective

In this talk, I will discuss the evolution of Computer Science and in particular its shift of focus from algorithms and programs to systems. I will advocate for a coherent scientific foundation of system design and present a vision for its development encompassing three work directions.

Marrying Physicality and Computation: Computation models ignore physical time and resources and are by their nature very different from the analytic models used in physical systems engineering. To take into account the interaction of computing systems with physical environments, computation models must be enriched and extended with paradigms and methods from Electrical Engineering and Control Theory.

Component-based Construction: Complex systems are designed by assembling heterogeneous components. Heterogeneity has different sources, including a large variety of interaction mechanisms, synchronous or asynchronous execution, and different levels of abstraction. We need theoretical frameworks encompassing meaningful and natural composition of heterogeneous components, which is essential for tractable and productive system design.

Guaranteeing Correctness: System design flows are empirical and lack theoretical foundations and methods guaranteeing that the designed systems meet their requirements. Currently, the correctness of complex systems is ensured mainly by a posteriori checking, which suffers from well-known complexity limitations and incurs high development costs. We need theory and methods that guarantee system correctness at design time. We distinguish two complementary avenues: one is enforcing essential properties by control, to adapt a system's behavior to uncertain and dynamically changing environments; the other is inferring global properties of composite systems from properties of their components.
I will conclude with general remarks about the nature of Computer Science as a scientific discipline in its own right and advocate for a deeper interaction and cross-fertilization with other, more mature disciplines.
  • Wang Yi (Uppsala University)

The Digraph Real-Time Task Model

Models for real-time scheduling have to balance the inherently contradictory goals of expressiveness and analysis efficiency. Traditional models such as Liu and Layland's multi-rate real-time tasks, which admit tractable feasibility tests, have limited expressiveness, restricting their ability to model many systems accurately. In particular, they are all strictly recurrent, preventing the modeling of structures like mode switches, local loops, etc. At the other extreme, models like timed automata provide powerful modeling capacity, but their analysis efficiency is limited due to their intrinsic computational complexity.

Recently we have proposed a model based on arbitrary directed graphs (digraphs) for real-time scheduling. The model is free from the above limitations on modeling capacity, yet its feasibility problem remains tractable.
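The flavor of such a graph-based task model can be sketched in a few lines. In this illustrative encoding (the names `dbf`, the vertex and edge dictionaries, and the numeric values are all hypothetical, not taken from the talk), vertices are job types carrying a WCET and a relative deadline, edges carry minimum inter-release separations, and a demand bound function sums the WCETs of jobs whose deadlines fit in a time window:

```python
# A minimal sketch of a digraph real-time task: vertices are job types
# (wcet, relative deadline), edges carry minimum inter-release separations.
# dbf(t) bounds the execution demand any release path can generate in a
# window of length t. Assumes strictly positive integer separations.

def dbf(vertices, edges, t):
    """vertices: {v: (wcet, deadline)}; edges: {v: [(u, min_sep), ...]}."""
    best = 0
    stack = [(v, 0, 0) for v in vertices]  # (vertex, release_time, demand)
    while stack:
        v, rel, dem = stack.pop()
        wcet, dl = vertices[v]
        if rel + dl <= t:          # job's deadline falls inside the window
            dem += wcet
            best = max(best, dem)
        for u, sep in edges.get(v, []):
            if rel + sep < t:      # a later release can still contribute
                stack.append((u, rel + sep, dem))
    return best

vs = {'a': (1, 2), 'b': (2, 5)}
es = {'a': [('b', 3)], 'b': [('a', 5)]}
print(dbf(vs, es, 8))   # worst-case demand in any window of length 8
```

The exhaustive path exploration shown here is exponential; the point of the actual result is that the feasibility test can be made pseudo-polynomial, which this sketch does not attempt.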
  • Kim Larsen (Aalborg University)

Timing and Performance Analysis Using Timed Automata

Within the European Joint Technology Initiative ARTEMIS as well as several national initiatives such as CISS (www.ciss.dk) and DaNES (http://www.danes.aau.dk/), model-driven development is a key to dealing with the increasing complexity of embedded systems, while reducing the time and cost to market. The use of models should permit early assessment of the functional correctness of a given design as well as requirements for resources (e.g. energy, memory, and bandwidth) and
real-time and performance guarantees.

In this talk we show how the well-known formalism of timed automata and the supporting tool UPPAAL have been applied to the timing and performance analysis of embedded systems. The timing analysis covers a framework for accurate WCET analysis of binary code executing on ARM7 and ARM9 processors, taking caching and pipelining into account. Also covered is a framework for schedulability analysis of a rich class of applications (tasks with uncertain and complex arrival patterns, execution times and dependencies) executing on single- or multiprocessor platforms. Both the schedulability analysis and WCET frameworks have been applied to industrial cases.

In recent work a stochastic semantics of networks of timed automata has been given, where components repeatedly race against each other to perform the next output. The resulting stochastic behaviour provides a refinement of the classical timed labelled transition system semantics of timed automata. Also, this allows statistical model checking to be applied for settling soft real-time performance properties of the type "within T time-units a given request R will have been granted with probability at least P" with a desired level of significance. In particular, it allows for comparison of different scheduling principles in terms of their soft real-time performance.
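The core idea of statistical model checking for such properties can be illustrated with a hand-rolled Monte Carlo sketch (this is not UPPAAL's algorithm; the delay distributions, the time bound, and the function names are purely illustrative assumptions):

```python
import random

# Illustration of statistical model checking for "within T time units the
# request is granted with probability at least P": sample runs of a
# stochastic race between two components and estimate the probability.

def sample_run():
    # Two components race to produce the next output; each independently
    # draws its delay (these distributions are made up for illustration).
    fast = random.uniform(0.0, 4.0)
    slow = random.uniform(1.0, 9.0)
    return min(fast, slow)        # the race winner grants the request

def estimate(T, runs=100_000):
    ok = sum(1 for _ in range(runs) if sample_run() <= T)
    return ok / runs

random.seed(0)
print(f"P(granted within 3.0) ~ {estimate(3.0):.3f}")
```

A real statistical model checker additionally computes how many runs are needed for a stated confidence level, e.g. via Chernoff bounds or sequential hypothesis testing, rather than fixing the run count in advance.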

Joint work with Mads Chr. Olesen, Andreas Dalsgaard, Rene R. Hansen, Marius Mikucionis, Brian Nielsen, Alexandre David, Wang Zheng and Axel Legay.
  • Dejan Nickovic (I.S.T. Austria)

Monitoring properties of AMS designs

Consumer embedded devices such as cell phones, GPS receivers and portable multimedia systems have gained tremendous importance in the recent past. In embedded systems, digital, analog and software components are combined on a single chip, resulting in increasingly complex designs that introduce richer functionality on smaller devices. As a consequence, there is an increased potential for errors to be introduced into a design, yielding a need for automated analog and mixed-signal validation tools. In the purely digital setting, formal verification based on properties expressed in industrial specification languages such as PSL and SVA is nowadays successfully integrated in the design flow. On the other hand, the validation of analog and mixed-signal systems still largely depends on simulation-based, ad-hoc techniques.

We consider ingredients of the standard verification methodology that can be successfully exported from the digital to the analog and mixed-signal setting, in particular property-based monitoring techniques. Property-based monitoring is a lighter approach to formal verification, in which the system is seen as a "black box" that generates sets of traces whose correctness is checked against a property, that is, its high-level specification. Although incomplete, monitoring is effective at catching faults in systems, without guaranteeing their full correctness.

We first present a technique for property-based analog and mixed-signal monitoring. At the heart of the framework lies signal temporal logic (STL), a high-level specification language for expressing transient properties of analog, mixed-signal and timed signals. STL is an extension of the real-time metric interval temporal logic MITL, in which one can specify temporal relations between relevant "events" in continuous signals that are captured using numerical predicates. We then present procedures for the automatic translation of arbitrary STL specifications into monitors, i.e. programs that check the correctness of a set of simulation traces with respect to a given property. We introduce the analog monitoring tool AMT, which implements the presented monitoring techniques, and illustrate the usefulness and the limitations of the approach on two industrial case studies, considering properties of a FLASH memory cell and a DDR2 memory interface.
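To make the monitoring idea concrete, here is a toy offline monitor in the spirit of STL's bounded-response properties; it is not the AMT tool, and the sampling scheme, thresholds and function name are all assumptions made for illustration:

```python
# Toy offline monitor for a bounded-response property over a uniformly
# sampled signal: "always (x > high implies x drops below low within T)".

def monitor(samples, dt, high, low, T):
    """samples: list of floats, sampled every dt seconds.
    Returns (True, None) if the property holds, else (False, time)."""
    horizon = int(T / dt)                     # samples in the time bound
    for i, x in enumerate(samples):
        if x > high:                          # the triggering "event"
            window = samples[i:i + horizon + 1]
            if not any(v < low for v in window):
                return False, i * dt          # violation at this time
    return True, None

# A spike that recovers within the bound satisfies the property:
print(monitor([0, 2, 1.5, 0.4, 0], dt=0.1, high=1.0, low=0.5, T=0.2))
```

Real STL monitors interpret formulas over continuous, piecewise-defined signals and handle arbitrary nesting of temporal operators; this sketch checks only one fixed property shape over discrete samples.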

Finally, we discuss limitations of the presented methodology with respect to industrial strength validation that are exposed by the two case studies. In particular, we identify possible useful extensions that could improve the expressiveness of the logic and the quality of the validation process.
  • Kai Lampka (ETH)

Compositionality and Heterogeneity: Incorporating Formal Methods in the Design Cycle of Embedded Real-Time Systems

When analyzing and designing embedded systems with hard real-time constraints, contemporary methods abstract from individual events and their processing. In an abstract setting, one rather considers infinite streams of input stimuli provided by the environment to some task running on a computing device. Upon completion, the task may emit (timed) events to the environment, which in turn may trigger computations in some downstream devices. While processing an event, a task cannot process any other incoming event of the same type. Conflicting resource accesses of tasks are resolved by some scheduling policy, e.g. earliest-deadline-first (EDF), priority-based scheduling, etc. Event streams, together with tasks sharing computing devices and communicating via (non-blocking) FIFO buffers of infinite capacity, are the main ingredients of an abstract system understanding which in the past decade has successfully served as the basis for a formal framework for the system-level design of embedded real-time systems.
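The abstract view of event streams is commonly captured by arrival curves, as in Real-Time Calculus: an upper curve bounds how many events any time window of a given length may contain. A minimal sketch, assuming a finite timestamp trace and an illustrative function name:

```python
# Extract an empirical upper arrival curve value from a finite event trace:
# the maximum number of events observed in any half-open window of length
# delta. (Analytic arrival curves bound all admissible traces; this sketch
# only measures one trace.)

def upper_arrival_curve(timestamps, delta):
    """Max number of events in any window [start, start + delta)."""
    ts = sorted(timestamps)
    best = 0
    for i, start in enumerate(ts):
        count = sum(1 for t in ts[i:] if t < start + delta)
        best = max(best, count)
    return best

print(upper_arrival_curve([0, 1, 2, 10, 10.5], delta=3))
```

It suffices to slide the window so that it starts at an event, since the count can only jump there; that is why the sketch iterates over event timestamps rather than over all real-valued start times.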

The presentation will introduce various methodologies for modelling and analyzing system components with real-time constraints that comply with different computational models. Furthermore, it will discuss how the abstract system understanding illustrated above can be exploited to unify a heterogeneous landscape of modelling methodologies and combine them in a single framework, ultimately targeting a rigorously formal and compositional analysis of distributed (hard) real-time systems.
  • Jonas Rox (Braunschweig)

Using Compositional Performance Analysis for Obtaining Viable End-to-End Latencies in Distributed Embedded Systems

System and communication platform integration is a major challenge, and systematic analysis of the dynamic timing effects becomes key to building safe and reliable systems. To give worst-case guarantees for the timing behavior of complex distributed embedded real-time systems, different compositional approaches for system-level performance analysis have been developed which exhibit great flexibility and scalability. The basic idea of compositional approaches is to break down the analysis complexity of complete systems into separate local component analyses and to integrate existing local performance analysis techniques, e.g. uniprocessor scheduling analysis known from real-time research, into system-level analyses. While this approach is in theory able to handle arbitrarily complex systems, the accuracy of the system-level results, e.g. maximum end-to-end latencies, can easily become very pessimistic with increasing numbers of components. Additionally, complex communication drivers used in real embedded systems can have a large influence on the end-to-end timing, not only due to an additional delay, but also due to other mechanisms, e.g. over- and undersampling effects. In this talk, techniques to incorporate the effects of modern communication stacks into the analysis, and methods to reduce the pessimism of the system-level analysis results, will be presented.
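One classical local analysis that such compositional frameworks plug in is worst-case response-time iteration for fixed-priority preemptive uniprocessor scheduling. A minimal sketch, assuming implicit-deadline periodic tasks and total utilization below one (task set and function name are illustrative):

```python
import math

# Classical response-time analysis: the worst-case response time of task i
# is the least fixed point of  R = C_i + sum_{j < i} ceil(R / T_j) * C_j,
# iterating from R = C_i. Tasks are ordered by decreasing priority and
# assumed schedulable (otherwise the iteration would diverge).

def response_time(tasks, i):
    """tasks: list of (wcet, period), index 0 = highest priority."""
    c_i = tasks[i][0]
    r = c_i
    while True:
        interference = sum(math.ceil(r / p) * c for c, p in tasks[:i])
        r_next = c_i + interference
        if r_next == r:
            return r           # fixed point reached: worst-case response
        r = r_next

tasks = [(1, 4), (2, 6), (3, 12)]
print([response_time(tasks, i) for i in range(3)])
```

In a compositional flow, the resulting response times (or the derived output event streams) become the interface passed to the analysis of the next component, which is where the accumulation of pessimism mentioned above comes from.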
  • Bertrand Jeannet (INRIA)

Improving the precision of abstract interpretation analysis with widening in presence of complex control structures

Abstract Interpretation is a method of choice for inferring invariants on the numerical variables of a program. Several numerical abstract domains have been designed for this purpose, like boxes (intervals), octagons and convex polyhedra. As these domains in general contain infinite ascending sequences, they require a widening operator to ensure the convergence of iterative fixpoint computations.

Although most widening operators for numerical domains are based on a clear geometrical intuition, in the context of program analysis it is difficult to predict, and to keep under control, the loss of information they induce.

A classical technique for maintaining better precision is the so-called "widening up-to" or "widening with thresholds". However, the benefit of this technique strongly depends on the choice of relevant threshold constraints. We describe in this talk a technique for automatically inferring relevant threshold constraints that applies to any control graph. Although our technique is simple, it succeeds in inferring the required threshold constraints in many rather difficult cases, in particular in the case of unstructured, concurrent or interprocedural control graphs.
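Widening with thresholds is easy to illustrate on the interval domain: instead of jumping straight to infinity, an unstable bound is widened only up to the next threshold constant. A minimal sketch (the threshold set here is hard-coded for illustration; in the talk's technique it would be inferred from the control graph):

```python
import math

# Interval widening "with thresholds": a bound that grows between
# iterations is pushed to the nearest enclosing threshold rather than
# directly to +/- infinity, limiting the precision loss of widening.

def widen(old, new, thresholds=(0, 10, 100)):
    """Intervals as (lo, hi) pairs; returns old ∇ new."""
    lo_o, hi_o = old
    lo_n, hi_n = new
    lo = lo_o if lo_n >= lo_o else max((t for t in thresholds if t <= lo_n),
                                       default=-math.inf)
    hi = hi_o if hi_n <= hi_o else min((t for t in thresholds if t >= hi_n),
                                       default=math.inf)
    return lo, hi

print(widen((0, 1), (0, 2)))    # unstable upper bound jumps only to 10
```

If 10 happens to be a loop bound appearing in the program, the fixpoint iteration stabilizes at the exact invariant [0, 10] instead of [0, +inf), which is precisely why inferring the right thresholds matters.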

We also compare our technique with other techniques aimed at improving the precision of widening, such as abstract acceleration (Gonnord and Halbwachs), policy iteration (Costan, Gaubert et al.) and guided iteration (Gopan and Reps).
  • Christoph Kirsch (Salzburg)

Virtualizing Time, Space, and Power for Cyber-Physical Cloud Computing

Imagine a fleet of autonomously flying high-performance quadrotor helicopters equipped with small-form-factor server hardware, cameras,
and laser range finders gathering and processing data for information acquisition tasks such as search-and-rescue missions and environmental
monitoring. Inspired by data center cloud computing, the helicopters do not directly execute any mission code but instead work as servers hosting virtual abstractions of networked autonomous vehicles that perform the actual missions. Similar to virtual machines, virtual vehicles are spatially isolated in memory but may also be temporally isolated for real-time performance and even power-isolated for cost accounting. Virtual vehicles can be created and deployed dynamically at flight time and then migrate from one real vehicle to another in order to aggregate information as efficiently and quickly as possible. We discuss the potential capabilities and design challenges of software abstractions and systems infrastructure for such cyber-physical cloud computing. In particular, we discuss the problem, and preliminary solutions, of simultaneously providing time, space, and power isolation for virtual vehicles (or machines).

Joint work with Silviu Craciunas, Andreas Haas, Hannes Payer, Harald
Roeck, Andreas Rottmann, Ana Sokolova, Rainer Trummer (University of
Salzburg) and Joshua Love, Raja Sengupta (UC Berkeley)

http://www.cs.uni-salzburg.at/ ck/p...

(c) Artist Consortium, All Rights Reserved - 2006, 2007, 2008, 2009