WFCD - 2010

October 24th, 2010, Scottsdale, Arizona (USA), within ESWeek 2010. Organised and funded by ARTIST.

Invited Speakers

Prof. Edward A. Lee
UC Berkeley
Predictability, Repeatability, and Models
for Cyber-Physical Systems
To be able to analyze and control the joint dynamics of software, networks, and physical processes, engineers need to be able to analyze and control the temporal behavior of software and networks. Most real-time systems techniques aim to do this by focusing on predicting execution time bounds through program analysis and architectural modeling. In this talk, I attempt to give a rigorous meaning to "prediction" in this context, and argue that what engineers need is not predictability so much as repeatability.
The argument relies on the observation that boolean properties (properties that are either satisfied or not satisfied) are necessarily properties of models and not properties of the physical realizations of systems. Predictability is about the ability to anticipate whether properties will be satisfied, whereas repeatability is about the ability of a physical realization to deliver consistent behavior.

 
Dr. Axel Legay
INRIA
Stochastic Abstraction: From component-based design and simulation to abstraction and lightweight verification
Systems integrating multiple heterogeneous distributed applications communicating over a shared network are typical in various sensitive domains such as aeronautics or automotive embedded systems. Verifying the correctness of a particular application inside such a system is known to be a challenging task, often beyond the scope of existing exhaustive validation techniques. The main difficulty comes from network communication, which makes all applications interfere and therefore forces exploration of the full state space of the system.
In this talk, we will introduce "stochastic abstraction", a new simulation-based technique for verifying an application running within a large heterogeneous system. Our technique, which exploits the design methodology, starts by performing simulations of the system in order to learn the context in which the application is used. It then creates a stochastic abstraction of the application that takes this context information into account. The resulting smaller model can be verified using efficient techniques such as statistical model checking or any probabilistic model checking approach. We have applied our technique to an industrial case study: the cabin communication system of an airplane. We use the BIP toolset to model and simulate the system. We have conducted experiments to verify the clock synchronization protocol, i.e., the application used to synchronize the clocks of all computing devices within the system. The technique was also applied to an AFDX network. We will show that (1) the BIP toolset allows the construction of a very detailed and accurate model of an AFDX network, and (2) our methodology allows for a finer analysis than worst-case methods.

Our approach takes the best of various areas: statistics and mathematics, component-based design methodology, and formal verification.

The talk is divided into three parts: (1) an introduction to stochastic abstraction, (2) case studies, and (3) the future of stochastic abstraction in component-based design and verification.
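To make the statistical step concrete, here is a minimal sketch of the sampling core of a statistical model checker, assuming a toy stand-in model and property rather than the case study from the talk. The Chernoff-Hoeffding bound fixes how many independent simulation traces are needed to estimate the probability of a bounded property within epsilon at confidence 1 - delta:

    import math
    import random

    def required_samples(epsilon, delta):
        # Chernoff-Hoeffding bound: with this many samples, the empirical
        # frequency is within epsilon of the true probability with
        # probability at least 1 - delta.
        return math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))

    def smc_estimate(simulate, check, epsilon=0.05, delta=0.01):
        # Estimate the probability of the property by independent sampling.
        n = required_samples(epsilon, delta)
        hits = sum(1 for _ in range(n) if check(simulate()))
        return hits / n

    def simulate(steps=100):
        # Toy stochastic abstraction: a clock offset doing a random walk.
        offset, trace = 0.0, []
        for _ in range(steps):
            offset += random.gauss(0.0, 0.1)
            trace.append(offset)
        return trace

    def check(trace, bound=2.0):
        # Bounded property: the offset stays within the synchronization bound.
        return all(abs(x) <= bound for x in trace)

    print(smc_estimate(simulate, check))

Unlike exhaustive model checking, the cost of this procedure is independent of the size of the state space, which is what makes it applicable to the learned abstraction.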


 
Prof. Pieter J. Mosterman
The MathWorks
Heterogeneous Function Composition to Eliminate a Class of Direct Relationships in Software Components of Dynamic Systems
This presentation will discuss the effect of software architecture choices on the functional design of embedded systems with a significant data processing aspect. This aspect is often modeled by time-based block diagrams. These diagrams consist of a network of blocks, each of which may represent a dynamic system. While the semantics of such diagrams can be studied well using a strictly functional (side-effect free) approach, a more efficient implementation is often required for their execution. Achieving such efficiency typically relies on: (i) a sorted order of execution and (ii) internal state that breaks the direct dependency between input and output. The latter allows an efficient decomposition of computations into: (i) an output part that computes the output of a block and (ii) an update part that computes the new state of a block. Based on the data dependencies between the blocks, the blocks can be automatically sorted into an order that evaluates all required output values. Automatically deriving this sorted order removes such accidental complexity from the responsibility of the user and is an important part of the simulation and code generation technology stack. The architecture of the system may be designed concurrently and be responsible for decomposing the overall network of dynamic systems of the functional view into software components that, in turn, each comprise a dynamic system network. As a result, mapping a software architecture view of the embedded system onto the functional view imposes a hierarchical decomposition of the sorted order, and the original sorted order may change. Homogeneous composition of block functionality may introduce direct dependencies between input and output at the parent level in the hierarchy. In many cases, these direct dependencies result in cycles in the execution graph that disallow automatic code generation. To resolve this problem, a heterogeneous composition approach is presented that minimizes the direct dependency between input and output of the software components. This eliminates cycles in the data dependency graph and allows efficient automatic code generation.
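A minimal sketch of the output/update decomposition and the sorted order, assuming an invented two-method block protocol and wiring representation (this is illustrative, not a modeling-tool API). A unit delay has no direct feedthrough, so it contributes no data-dependency edge and can break a cycle in a feedback loop:

    from graphlib import TopologicalSorter

    class Gain:
        # Output depends directly on the input: direct feedthrough.
        direct_feedthrough = True
        def __init__(self, k): self.k = k
        def output(self, u): return self.k * u       # output part
        def update(self, u): pass                    # stateless

    class UnitDelay:
        # Output depends only on internal state, which breaks the direct
        # dependency between input and output.
        direct_feedthrough = False
        def __init__(self, x0=0.0): self.state = x0
        def output(self, u): return self.state       # output part
        def update(self, u): self.state = u          # update part

    def sorted_order(blocks, wires):
        # `wires` maps a block to the block driving its input; only
        # direct-feedthrough blocks induce data-dependency edges.
        deps = {b: ({wires[b]} if b.direct_feedthrough and b in wires
                    else set()) for b in blocks}
        return list(TopologicalSorter(deps).static_order())

    def step(blocks, wires, order, ext):
        vals = {}
        for b in order:                    # output sweep in sorted order
            vals[b] = b.output(vals.get(wires.get(b), ext))
        for b in blocks:                   # update sweep
            b.update(vals.get(wires.get(b), ext))
        return vals

    # Feedback loop gain -> delay -> gain: the delay's state breaks the cycle.
    gain, delay = Gain(0.5), UnitDelay(1.0)
    wires = {gain: delay, delay: gain}
    order = sorted_order([gain, delay], wires)
    for _ in range(3):
        print([round(v, 3) for v in step([gain, delay], wires, order, ext=0.0).values()])

If both blocks had direct feedthrough, TopologicalSorter would raise a CycleError, which is the sketch-level analogue of the code generation failure described above.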
 
Dr. Pierre Paulin
Director of SoC Platform Automation,
STMicroelectronics (Canada) Inc.
A Component-based
Multi-Core Programming Environment
In this talk, we will describe the MPflex programming environment designed for a large-scale STMicroelectronics multicore fabric, with emphasis on the components used in the capture, mapping, optimization, debug, and analysis phases. We describe the elements of the top-down mapping flow:
- High-level capture of applications expressed using a range of parallel programming models (PPMs)
- The MIND component framework, which is the basis for capturing the PPMs
- A component-based repository of Parallel Programming Patterns (PPPs), which are the building blocks of the PPMs
- Functional simulation of the parallel application on the host
- Mapping of the application onto a large-scale multi-core fabric, with support for multiple implementations of the PPP components
- Component-aware debug, trace and analysis tools
The use of this environment will be demonstrated using a high-quality HD video rescaling application, mapped onto the ST multicore platform.
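As a purely hypothetical illustration of what a Parallel Programming Pattern component encapsulates (this is not the MIND or MPflex API), a "farm" pattern that separates reusable deployment logic from the application-supplied sequential function might look like:

    from multiprocessing import Pool

    class Farm:
        # Illustrative master/worker pattern: the pattern owns deployment
        # (number of workers, pool management); the application contributes
        # only the sequential worker function.
        def __init__(self, worker, n_workers=4):
            self.worker = worker
            self.n_workers = n_workers

        def run(self, items):
            with Pool(self.n_workers) as pool:
                return pool.map(self.worker, items)

    def rescale_line(line):
        # Stand-in for per-line pixel processing in a video rescaler.
        return [2 * px for px in line]

    if __name__ == "__main__":
        frame = [[i, i + 1, i + 2] for i in range(8)]
        print(Farm(rescale_line).run(frame))

Keeping the pattern as a component means its implementation (threads, a process pool, or a mapping onto the multicore fabric) can be swapped without touching the application code, matching the multiple PPP implementations mentioned above.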

 
Prof. Paul Pettersson
Mälardalen University
Progress - Predictable component-based development of embedded systems
Progress is a new framework for predictable component-based development of real-time and embedded systems, targeting the domains of vehicular and telecommunication systems. It features a component model; analysis techniques and tools for formal verification and for analysis of resource utilization, worst-case execution time, reliability, and schedulability; an execution platform; an integrated development environment; and development process guidelines. In this talk, I will focus on the Progress component model ProCom and how it is designed to address the particularities of the embedded systems domain, including resource limitations and requirements on safety and timeliness. In particular, I will present a notion of abstract resources and of the resource utilization of components, and discuss how central resource analysis problems, such as feasibility, optimization, and trade-off analysis, can be solved using results from the model of priced (or weighted) timed automata.
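As a reminder of the underlying model (in standard notation, not ProCom-specific), a run \rho of a priced timed automaton that alternates delays d_i in locations \ell_i with discrete transitions e_i accrues the cost

\[
\mathrm{cost}(\rho) \;=\; \sum_i p(\ell_i)\, d_i \;+\; \sum_i p(e_i),
\]

where p(\ell) is the price rate of location \ell and p(e) the price of edge e. Resource questions such as feasibility and optimization then reduce to cost-optimal reachability, \inf\{\mathrm{cost}(\rho) \mid \rho \text{ reaches the goal}\}, which is known to be computable for a single cost variable.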
 
Joseph Sifakis
Verimag Laboratory
Generating Distributed Implementations for BIP
BIP is a framework for the structured composition of a set of atomic components, using multiparty interaction and priorities. Atomic components are transition systems labelled with communication ports. Their behaviour is described using Petri nets whose transitions trigger the computation of functions written in C. Interactions are sets of strongly synchronizing ports. Priorities are strict partial orders between interactions. The operational semantics of BIP models is implemented by a centralized Engine that receives requests for synchronization from atomic components and atomically executes enabled interactions that are maximal for the priority order.
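As a rough sketch of this centralized semantics (the component interface below is invented for illustration; it is not the BIP API): in each round the Engine collects the ports currently offered, keeps the enabled interactions that are maximal for the priority order, and executes one of them.

    import random

    class Toggle:
        # Toy atomic component that alternately offers ports 'a' and 'b'.
        def __init__(self):
            self.state = 0
        def enabled_ports(self):
            return {'a'} if self.state == 0 else {'b'}
        def execute(self, ports):
            if ports:
                self.state = 1 - self.state

    def engine_step(components, interactions, priority):
        # One round of a centralized Engine. `interactions` are frozensets
        # of ports; `priority` is a set of (lower, higher) pairs assumed to
        # form a strict partial order on interactions.
        offered = set().union(*(c.enabled_ports() for c in components))
        enabled = [i for i in interactions if i <= offered]
        maximal = [i for i in enabled
                   if not any((i, j) in priority for j in enabled)]
        if not maximal:
            return None
        chosen = random.choice(maximal)    # any maximal enabled interaction
        for c in components:
            c.execute(chosen & c.enabled_ports())
        return chosen

    comps = [Toggle(), Toggle()]
    ab = [frozenset({'a'}), frozenset({'b'})]
    for _ in range(3):
        print(engine_step(comps, ab, priority={(ab[1], ab[0])}))

Each round here synchronizes both components on the same port, i.e., executes a multiparty interaction atomically.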
We propose a method for generating distributed implementations from BIP models. The method is based on source-to-source transformations that are correct by construction, as they preserve observational equivalence. This leads to BIP models with binary send/receive interactions that are directly implementable on distributed platforms.
The method involves two steps. First, it replaces strong synchronization primitives in the atomic components of the initial model by send/receive primitives. Second, it replaces the centralized Engine by a distributed Engine for a given partition of the interactions. The partition reflects the degree of parallelism between interactions: two interactions belonging to different partition blocks can be executed in parallel if they are not in conflict.
The distributed Engine is structured in two layers. The first layer includes a set of controllers implementing an interaction protocol for executing interactions within a given partition block. The second layer implements a conflict resolution protocol that resolves conflicts between interactions handled by different controllers.
The method is general and can be used for automatically generating a wide range of implementations, from fully centralized to fully decentralized, depending on the degree of parallelism between interactions.
The method has been implemented in a tool that generates stand-alone C++ code from BIP models, using either TCP sockets or MPI for deployment on multi-core platforms.

 
Prof. Janos Sztipanovits
Vanderbilt University
Composition Challenges in Heterogeneous Systems
During the last few decades, remarkable progress has been achieved in the analysis and design of physical systems by investigating stability, robustness, and performance properties. Despite this success, system theory applies to homogeneous system models, where behaviors and interactions are restricted to continuous dynamics. Computation and communication dynamics, such as timing uncertainties caused by the communication network, jitter caused by schedulers, and value uncertainties caused by quantizers and finite word-length arithmetic, are usually ignored. The cost of this "simplification" is significant, because verification has to be repeated whenever the implementation changes. In this talk we present research results that address decoupling an essential system-level property, stability, from implementation side effects. We demonstrate the application of a passivity-based approach in distributed systems and discuss challenges emerging from satisfying stability and safety simultaneously.
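For reference, the passivity property at the heart of such approaches, stated in its standard input-output form (the talk's precise definitions may differ): a system with input u and output y is passive if there exists \beta \ge 0 such that, for all T \ge 0,

\[
\int_0^T u(t)^{\top} y(t)\, dt \;\ge\; -\beta .
\]

Since negative-feedback interconnections of passive systems are again passive, stability established at the model level is preserved under composition; this compositionality is the lever used to make stability insensitive to implementation effects such as network delay and jitter.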
 
Prof. Lothar Thiele
ETH Zürich
Component-Based Worst-Case Analysis
of System Temperatures
The talk provides a new method to determine upper bounds on the peak temperature of embedded systems. The approach is based on real-time calculus and uses the well-known Fourier's law for the cooling and heating of the system. We use the proposed technique to analyze the maximum peak temperature for a given arrival curve under any work-conserving real-time scheduling algorithm. The approach is compositional and is central to designing real-time systems that give guarantees on schedulability as well as on the maximal temperature.
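As a sketch of the kind of thermal model this rests on (a standard lumped RC abstraction consistent with Fourier's law; the talk's exact formulation may differ), let C be the thermal capacitance, R the thermal resistance to the ambient at temperature T_amb, and P(t) the dissipated power:

\[
C\,\frac{dT(t)}{dt} \;=\; P(t) \;-\; \frac{T(t) - T_{\mathrm{amb}}}{R}.
\]

The temperature is thus the response of a first-order system to the power trace. Real-time calculus bounds the worst-case computational demand, and hence P(t), from the arrival curve, and the peak temperature is bounded by the system's response to that worst-case trace; for a constant maximal power P_max the bound settles at T_amb + R·P_max.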

 
Dr. Maarten Wiggers
University of Twente
Requirements for a
Real-Time Multi-Processor Mapping Flow
based on Convex Programming
Modern embedded multimedia systems process multiple streams concurrently, and each stream is processed by a job. We assume that such a system is implemented as a multi-processor system with run-time schedulers, and that these jobs are implemented as task graphs. In such a task graph, tasks communicate data over buffers and wait for space in a buffer before producing data into it. Given the task graph and the multi-processor system, it is still non-trivial to find the settings that let the task graph satisfy the stream's real-time requirements; examples of such settings are the buffer sizes and the parameters of the run-time schedulers.
Convex programming is attractive because it computes a globally optimal solution in polynomial time. This talk will present requirements on the task graph and the multi-processor system that allow convex programming to be applied to compute settings sufficient for the task graph to satisfy its real-time requirements. We will show that, for a class of task graphs and a class of run-time schedulers, a timed dataflow model can be constructed that conservatively models the temporal behavior of the task graph. This class of run-time schedulers is the class of budget schedulers, which by construction guarantee a minimum resource budget within a maximum replenishment interval. The resulting timed dataflow model has the important property that it is monotone in the token arrival times, even though multi-processor scheduling anomalies can still occur with these budget schedulers. This property enables approximations that allow the construction of a convex program that simultaneously computes sufficient buffer and budget sizes.
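As a toy sketch of the kind of convex program involved (the constraints below are simplified stand-ins invented for this summary, not the formulation from the talk), using the cvxpy modeling package: per-task budgets and inter-task buffer sizes for a three-stage pipeline are chosen jointly, with the 1/budget terms kept convex via inv_pos:

    import cvxpy as cp

    rate = 0.2                # required throughput (firings per time unit)
    wcet = [1.0, 2.0, 1.5]    # worst-case execution time of each task

    budget = cp.Variable(3)   # guaranteed processor share per task
    buf = cp.Variable(2)      # buffer size between consecutive tasks

    cons = [budget >= 0.05, budget <= 1.0, buf >= 1.0]
    for i in range(3):
        # A task with share b completes a firing every wcet/b time units,
        # so its guaranteed rate b/wcet must meet the throughput target.
        cons.append(budget[i] >= rate * wcet[i])
    for i in range(2):
        # Simplified stand-in for a cyclic dataflow constraint: the buffer
        # must cover the tokens arriving during the producer's and the
        # consumer's response times (convex, since 1/b is convex for b > 0).
        cons.append(buf[i] >= rate * (wcet[i] * cp.inv_pos(budget[i])
                                      + wcet[i + 1] * cp.inv_pos(budget[i + 1])))

    prob = cp.Problem(cp.Minimize(cp.sum(budget) + cp.sum(buf)), cons)
    prob.solve()
    print(budget.value, buf.value)

Monotonicity in the token arrival times is what licenses conservative convex bounds of this shape in the first place: replacing an arrival time by an upper bound can only delay, never accelerate, the modeled behavior.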


 
