ARTIST Summer School in Europe 2009

September 7-11, 2009, Autrans (near Grenoble), France. Organised and funded by ARTIST.

Programme

  General Chairs:     
- Joseph Sifakis (VERIMAG Laboratory)
- Bruno Bouyssounouse (VERIMAG Laboratory)

 



 

Detailed Abstracts



Professor Tarek Abdelzaher

University of Illinois at Urbana Champaign, USA
Course:
Sensor Networks: Theoretical Challenges and Practical Applications
 
Abstract:
The emergence of sensor networks at the beginning of this decade opened the door for significant research opportunities and challenging new problem formulations in the areas of distributed computing, networking, and embedded systems, to name a few. This talk presents an applied perspective on the current state of the art in sensor network research and its relation to practical application needs. While a significant amount of early work addressed problems and models that resulted in little practical impact, an equally significant amount of problems, driven by current and prospective application needs, remain unsolved. A perspective is presented on a few misconceptions to rectify and several significant research challenges to address in the next decade of sensor network research to promote both theoretical impact and field deployment. Preliminary results are presented on analytical foundations for key sensor network problems, and initial practical experiences are reported that apply such foundations in the context of recently deployed applications.


Professor Luis Almeida

University of Porto, Portugal
Course:
Mobile Cyber-Physical Systems
 
Abstract:
Cyber-physical systems are systems that are deeply immersed in the environment and go beyond what has commonly been coined embedded systems, for example due to large heterogeneity, a poorly defined frontier or highly variable composition. One particular kind of system that meets this definition is the team of autonomous agents, which is becoming more and more common in applications such as search, rescue, demining and surveillance. These mobile cyber-physical systems (M-CPS) allow sensors and actuators to be relocated dynamically to improve global performance, be it improved sensing, improved control or more efficient actuation. However, achieving the necessary cooperation is not trivial given that the team composition is variable, with agents entering and departing, the network topology is highly dynamic, and the communication channel is prone to interference from many sources.
In this talk we will address some of the main issues that need to be solved to build efficient M-CPS such as synchronization, information sharing, membership, location-awareness and consensus. We will illustrate these issues with a few case studies, including a robotic soccer team from RoboCup Middle-Size League, which also exhibits most of the typical requirements and constraints of M-CPS. The focus will be on the networking and middleware infrastructures and we will advocate the use of dynamically reconfigurable and adaptive techniques to cope efficiently with the uncertainties of the topology, membership and interference, reducing their impact on the timeliness of the communications.
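Of the issues listed above, consensus admits a compact illustration. The sketch below (an assumed toy model, not taken from the talk; topology, weights and round count are illustrative) runs a round-based iteration in which each agent repeatedly moves toward the mean of its neighbours' values; with a connected topology, all agents converge to a common value even though no agent ever sees the whole team:

```python
# Minimal sketch of round-based consensus among M-CPS agents.
# All parameters below are illustrative assumptions.

def consensus_step(values, neighbors, alpha=0.5):
    """One synchronous round: each agent moves toward the mean of its neighbors."""
    new = []
    for i, v in enumerate(values):
        nbrs = neighbors[i]
        mean = sum(values[j] for j in nbrs) / len(nbrs)
        new.append(v + alpha * (mean - v))
    return new

def run_consensus(values, neighbors, rounds=100):
    for _ in range(rounds):
        values = consensus_step(values, neighbors)
    return values

# Four agents on a line topology 0-1-2-3 (e.g. a relay chain of robots)
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
final = run_consensus([0.0, 0.0, 4.0, 8.0], neighbors)
# After enough rounds, all agents hold (numerically) the same value.
```

In a real M-CPS the neighbour sets change between rounds as agents move, which is exactly why membership and topology uncertainty complicate such protocols.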


Professor Michael Backes

Saarland University and MPI-SWS, Germany
Course:
Machine-assisted design and analysis of certifiably secure protocols —
theory and tools for end-to-end security
 
Abstract:
State-of-the-art technologies struggle to keep pace with possible security vulnerabilities. There are few suitable guidelines or automated tools to design or analyze security protocols. Therefore, many protocols have security flaws and imperfections. The lack of a consistent methodology and tools for analyzing security protocols throughout the various stages of their design hinders the detection and prevention of vulnerabilities and comprehensive protocol analysis. The challenge is to develop a general methodology and tools for guaranteeing end-to-end security — from high-level specifications of the desired security requirements for a given task, to a specification of a security protocol that relies on innovative cryptographic primitives, to a secure, executable program.
In this talk, I will propose a general methodology for automatically devising and verifying security protocols and executable programs based on high-level specifications of selected security requirements and protocol tasks. This includes developing type systems for automatically analyzing abstract security protocols, a general framework for conducting cryptographic proofs, and techniques for automatically reasoning about executable code. A particular focus is on incorporating zero-knowledge proofs in the design and verification of security protocols, and on using this primitive's unique security features to serve the growing demand for sophisticated new security properties, in particular privacy properties such as anonymity. Salient applications of our methodology and tools are electronic voting and peer-to-peer systems.


Professor Sanjoy Baruah

University of North Carolina at Chapel Hill, USA
Course:
Techniques for multiprocessor real-time scheduling
 
Abstract:
As real-time computer system requirements become ever more complex, there is an increasing need for tool support to assist in the design and analysis of such systems. In order to build these tools, it is necessary that the underlying theoretical and conceptual foundations be thoroughly understood. Issues of resource allocation and scheduling on uniprocessor platforms are quite well understood today, with algorithms such as Earliest Deadline First and Rate- and Deadline-Monotonic providing the formal foundations upon which sophisticated system design and analysis tools have been built.
However, we are witnessing an accelerating trend towards implementing real-time systems on multiprocessor and multi-core platforms. Accordingly, it is becoming imperative that we obtain a similar understanding of real-time resource-allocation and scheduling issues with respect to multiprocessor platforms as well. Much work remains to be done before we can claim that our understanding of multiprocessor scheduling is as complete as that of uniprocessor scheduling; nevertheless, some very important and interesting results have been obtained over the past few years. This talk will survey these recent results and suggest future directions of research in multiprocessor real-time scheduling theory.
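As a concrete (assumed, not from the talk) illustration of the gap between the two settings, the classical utilization-based tests for implicit-deadline periodic tasks can be written in a few lines. Each task is a pair (worst-case execution time C, period T):

```python
# Utilization-based schedulability checks for periodic tasks (C, T).
# The example task sets are hypothetical.

def utilization(tasks):
    return sum(c / t for c, t in tasks)

def edf_schedulable_uniprocessor(tasks):
    # Liu & Layland: EDF schedules any implicit-deadline task set with U <= 1.
    return utilization(tasks) <= 1.0

def edf_feasible_multiprocessor(tasks, m):
    # On m identical processors this is only a NECESSARY condition:
    # total utilization at most m, and no single task needing more
    # than one processor's worth of capacity. It is not sufficient
    # for partitioned or global EDF in general.
    return utilization(tasks) <= m and all(c <= t for c, t in tasks)

tasks = [(1, 4), (2, 6), (3, 12)]   # U = 0.25 + 0.333... + 0.25
ok_uni = edf_schedulable_uniprocessor(tasks)
```

The uniprocessor test is exact for this task model; the multiprocessor check is only a necessary condition, which is precisely where the open problems surveyed in the talk begin.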


Professor Luca Benini

University of Bologna, Italy
Course:
Designing scalable and predictable SoC communication fabrics:
how much hardware support is really needed?
 
Abstract:
In this talk I will give an overview of recent trends in communication fabrics for embedded many-core platforms. The shift toward multicore architectures has been imposed by technology reasons (power consumption and design-closure issues in nanometer technology), not by the "coming of age" of parallel programming models, compilation, analysis and verification environments. Thus, we may be building terascale many-core architectures that we cannot program efficiently. Even worse, we may not be able to give any guarantees on the execution timing, constraints and real-time properties of applications.
To address this challenge, hardware support can be designed within on-chip communication fabrics to improve communication predictability. I will discuss options and tradeoffs and provide some insights on integrated hardware-software approaches addressing the issue of predictable inter-processor communication.


Dr. Jan Beutel

ETH Zurich, Switzerland
Course:
Tools for Distributed Embedded Systems
 
Abstract:
Although a decade has passed since prominent visions of wireless sensor networks were put forth by Estrin, Pister and others, designing such systems today is more an art dominated by experience than a coordinated process yielding predictable results. In this lecture, we will review the current state of the art and define what is actually hard (and new) about this (new?) class of systems. For this purpose we will learn about the PermaSense project, a sensor network deployed in an extremely hazardous environment on glaciated peaks in the Swiss Alps. The main characteristics of the design and testing strategies used in the development of the PermaSense application will be presented and discussed in the light of a holistic system design goal. We will then discuss recent achievements in the area of multi-contextual and automated test and validation tools developed at ETH Zurich. The talk will end with a look ahead at a proposed future system design methodology that should allow a tighter integration of virtual and real-world design spaces, which we feel is a necessity for the successful adoption of larger and more complex networked embedded systems.


Professor Krishnendu (Krish) Chakrabarty

Duke University, USA
Course:
Design Automation Methods for Digital Microfluidic Biochips
 
Abstract:
Microfluidics-based biochips (or labs-on-chip) are revolutionizing laboratory procedures in molecular biology and leading to a convergence of information technology with biochemistry and microelectronics. Advances in microfluidics technology offer exciting possibilities for high-throughput DNA sequencing, protein crystallization, drug discovery, immunoassays, neo-natal and point-of-care clinical diagnostics, etc. As microfluidic labs-on-chip mature into multifunctional devices with "smart" reconfiguration and adaptation capabilities, automated design and ease of use become extremely important. Computer-aided design (CAD) tools are needed to allow designers and users to harness the new technology that is rapidly emerging for integrated biofluidics.
This talk will present ongoing work on design automation techniques for microfluidic biochips. First, the speaker will provide an overview of electrowetting-based digital microfluidic biochips. Next, the speaker will describe synthesis tools that can map bioassay protocols to a reconfigurable microfluidic device and generate an optimized schedule of bioassay operations, the binding of assay operations to functional units, and the layout and droplet flow-paths for the biochip. Techniques for pin-constrained chip design, fault detection, and dynamic reconfiguration will also be presented. An automated design flow allows the biochip user to concentrate on the development of nano- and micro-scale bioassays, leaving implementation details to CAD tools.


Dr. Ing. Armando Walter Colombo

Schneider Electric Gmbh, Germany
Course:
Service-oriented architecture based distributed Control and Automation Systems
 
Abstract:
This presentation summarizes the main scientific and technological features of the components and systems specified, developed and implemented in the EU FP6 IP SOCRADES (Service-Oriented Cross-Layer Infrastructure for Distributed Smart Embedded Devices) project. Based on the partners' own experience, current trends in control, communication and information technologies, and requirements from different industrial sectors, SOCRADES has developed a platform for engineering the next generation of industrial automation systems, exploiting the service-oriented architecture paradigm both at the device and at the application level. Results of pilot applications in different scenarios, e.g. car manufacturing and electromechanical assembly systems, will be presented. Finally, a set of outlooks will be given on continuing to develop the infrastructure and on using it for supervisory control components, systems and functions in other industrial scenarios.


Professor Koen De Bosschere

Ghent University, Belgium
Course:
The HiPEAC 2012-2020 vision
 
Abstract:
In 2008-2009, the HiPEAC network of excellence has been working on its vision for computing systems in 2012-2020. This vision is the result of a large analysis effort across the whole HiPEAC community in 2008 and a synthesis effort in 2009. It will be used to steer future research efforts in the HiPEAC community.
The lecture starts from the grand European societal challenges (energy, transportation, environment, aging population, health, …) and emerging application trends such as personalized services, cloud-based computing, ubiquitous computing and intelligent sensing, and discusses how computing systems can help tackle some of these societal challenges, enable new applications and shape Europe's future.

The second part sketches the constraints that will determine future innovations in computing systems. It discusses upcoming technological evolutions such as the primacy of power, the impact of non-recurring engineering costs, the reliability crisis, the parallel programming challenge, and, by 2020, the end of Moore's law.
In the third part, and building on the previous parts, the HiPEAC vision for the next generation computing systems is presented. It explains how software developers will be able to simultaneously improve their productivity and the performance of their applications by using heterogeneous multi-core systems and the appropriate tools to design and program them.


Dr Gilbert Edelin

Thales Research & Technology, France
Course:
Embedded Systems at THALES:
 the Artemis Challenges for an industrial group
 
Abstract:
The diversity and importance of embedded systems in Thales have long motivated the Group to invest in the different research segments of this scientific domain. This mainly encompasses topics such as computing architectures, engineering, and seamless connectivity and middleware, which are the three pillars of the Artemis SRA. In this talk I will present some challenges we are facing in these areas, with some examples of in-house developments and research projects:
- in computing architectures, the advent of multi-/many-core processing solutions and the related disruptions in legacy SW, programming models, or even product lifecycles raise challenges such as programming environments, optimised accelerators (data-streaming applications), and certification and safe partitioning (safety-critical applications);
- in engineering, beyond the development of model-based engineering techniques, which are now reaching a first industrial deployment stage within the Group, research will address domain-specific engineering techniques: formalisms (e.g. models of computation), analysis of non-functional properties, multi-viewpoint analysis, etc.;
- in seamless connectivity, the efficiency of layered approaches to the development of SW for distributed systems with weak real-time constraints must be carefully evaluated with regard to non-functional properties, which generally require that not all properties of the underlying layers be discarded. We should perhaps consider seamless connectivity beyond middleware.


Professor Nicolas Halbwachs

Verimag Laboratory, France
Course:
Synchronous Programming,
and its use for Modeling Non-synchronous Systems
 
Abstract:
Synchronous languages are now widely used for programming safety critical embedded systems. Dedicated validation methods and tools have been developed for these formalisms. For instance, the Scade toolset, based on the synchronous data-flow language Lustre and developed by Esterel-Technologies, is used worldwide, in particular in avionics.
In this talk, we will first recall the principles of the synchronous paradigm. In a second part, we will show how synchronous formalisms can be used for modeling non-synchronous systems, taking advantage of their associated tools for early simulation and validation of these systems. In the European integrated project ASSERT, we developed a translator from the architecture description language AADL to Lustre, allowing the joint modeling and simulation of software components (described with Scade or Lustre) and the implementation architecture on which these components are intended to run. As a case study, the methodology was applied to a very critical part of the control system of the "Automated Transfer Vehicle" (ATV), a spacecraft developed by Astrium.
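As a flavour of the synchronous paradigm, the Lustre equation n = 0 -> pre(n) + 1 defines a counter stream: 0 at the first cycle, incremented at every subsequent cycle. The Python generator below is an illustrative rendering of that equation (not Lustre itself, whose syntax and compilation differ) computing the stream one cycle at a time:

```python
# Illustrative Python rendering of the Lustre stream  n = 0 -> pre(n) + 1.
import itertools

def counter():
    """Yield one value of the stream n per synchronous cycle."""
    pre_n = None                 # pre(n): value of n at the previous cycle
    while True:
        # the '->' operator selects 0 at the very first cycle
        n = 0 if pre_n is None else pre_n + 1
        yield n
        pre_n = n

first_cycles = list(itertools.islice(counter(), 5))   # [0, 1, 2, 3, 4]
```

The cycle-by-cycle execution model is what makes synchronous programs amenable to the validation and simulation tools mentioned above.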


Prof. Dr. Dr.h.c. Hermann Kopetz

TU Vienna, Austria
Course:
The Role of Time in Embedded System Design
 
Abstract:
Embedded systems interact with the physical environment in order to achieve the intended effects in the environment. The behavior of the physical environment is governed by the laws of physics, where real time is a central concept. It follows that in the model of the environment contained in the embedded computer system, time is also of central importance. This lecture will focus on the role of time in embedded system design. The different models of time in the physical environment (dense time) and in the computer system (discrete time) lead to fundamental conflicts at the interface of these two subsystems. In the first part of the lecture we will elaborate on these conflicts and their consequences for the faithfulness of the computer models of the physical subsystem concerning simultaneity and determinism. In the second part we will present practical guidelines for the system designer in order to achieve predictable temporal behavior of the embedded system.
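The dense-time versus discrete-time conflict can be made concrete with a small sketch (the numbers below are illustrative assumptions, not from the lecture): two events that are distinct and ordered in dense physical time can fall into the same tick of a discrete clock, so the computer model can no longer order them.

```python
# Two distinct dense-time events collapsing onto one discrete clock tick.
# Granularity and timestamps are assumed example values.

def to_tick(t_seconds, granularity):
    """Map a dense-time instant to the discrete clock tick containing it."""
    return int(t_seconds // granularity)

granularity = 1e-3                    # a 1 ms discrete clock tick
event_a, event_b = 1.00012, 1.00078   # distinct, ordered physical instants
tick_a = to_tick(event_a, granularity)
tick_b = to_tick(event_b, granularity)
order_lost = tick_a == tick_b         # both events land in the same tick
```

Any discrete observation of a dense-time process has this property for sufficiently close events, which is one source of the simultaneity and determinism issues discussed in the lecture.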


Prof. Kim Larsen

Aalborg University, Denmark
Course:
Validation, Performance Analysis and Synthesis of Embedded Systems
 
Abstract:
Within the upcoming European Joint Technology Initiative ARTEMIS, as well as several national initiatives such as CISS (www.ciss.dk) and DaNES, model-driven development is a key to dealing with the increasing complexity of embedded systems while reducing time and cost to market. The use of models should permit early assessment of the functional correctness of a given design as well as of requirements for resources (e.g. energy, memory, and bandwidth) and real-time and performance guarantees. Thus, there is a need for quantitative models allowing timed, stochastic and hybrid phenomena to be modelled and analysed.
UPPAAL and its branches CORA and TIGA provide an integrated tool environment for modelling, validation, verification and synthesis of real-time systems modelled as networks of timed automata, extended with data types and user-defined functions. The talk will provide details on the expressive power of timed automata in relation to embedded systems as well as on the power and inner workings of the UPPAAL verification engine.
In this talk we demonstrate how UPPAAL has been applied to the validation, performance analysis and synthesis of embedded control problems. The applications include so-called task graph scheduling and MPSoC systems consisting of application software running under different RTOSs on processors interconnected through an on-chip network. We also show how CORA and TIGA have been used to synthesize optimal (e.g. w.r.t. energy or memory) control strategies for given applications, including a climate controller and the control of hydraulic systems.
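The task graph scheduling problem mentioned above can be sketched with a simple greedy list scheduler on m identical processors (a hypothetical illustration; UPPAAL/CORA would instead explore the timed-automata state space to find cost-optimal schedules, whereas a greedy heuristic gives no optimality guarantee):

```python
# Greedy list scheduling of a task graph (DAG) on m identical processors.
# Task durations and dependencies below are assumed example data.

def list_schedule(durations, deps, m):
    """Schedule a DAG greedily; deps[t] is the set of predecessors of t.
    Returns the makespan (completion time of the last task)."""
    finish = {}
    proc_free = [0] * m
    remaining = set(durations)
    while remaining:
        # tasks whose predecessors have all finished (sorted for determinism)
        ready = sorted(t for t in remaining
                       if all(p in finish for p in deps.get(t, ())))
        # pick the ready task that can start earliest
        t = min(ready, key=lambda x: max((finish[p] for p in deps.get(x, ())),
                                         default=0))
        earliest = max((finish[p] for p in deps.get(t, ())), default=0)
        # place it on the processor where it can start soonest
        i = min(range(m), key=lambda j: max(proc_free[j], earliest))
        start = max(proc_free[i], earliest)
        finish[t] = start + durations[t]
        proc_free[i] = finish[t]
        remaining.remove(t)
    return max(finish.values())

durations = {"a": 2, "b": 3, "c": 2, "d": 1}
deps = {"c": {"a"}, "d": {"a", "b"}}
makespan = list_schedule(durations, deps, m=2)   # 4 for this example
```

Model-checking-based synthesis, by contrast, can certify that no shorter schedule exists, and can optimise non-time costs such as energy.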


Dr. Jan Reineke

Saarland University, Germany
Course:
Everything you (n)ever wanted to know about caches
 
Abstract:
Embedded systems as they occur in application domains such as automotive, aeronautics, and industrial automation often have to satisfy hard real-time constraints. Safe and precise bounds on the worst-case execution time (WCET) of each task have to be derived.
In this talk I will discuss the influence of the cache architecture on the precision and soundness of WCET analyses, by:
- evaluating predictability metrics that capture the inherent uncertainty in any cache analysis;
- introducing the notion of relative competitiveness, which allows the derivation of new cache analyses that are optimal w.r.t. the predictability metrics;
- investigating the soundness of measurement-based WCET analysis in the presence of caches.
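One intuition behind such predictability metrics can be shown in a short sketch (an illustrative example, not the talk's formal definitions): after k accesses to distinct blocks, a k-way LRU set reaches a state that is independent of its initial contents, so an analysis can regain full knowledge of the set after a bounded number of accesses.

```python
# Illustration of LRU convergence: two sets with different unknown initial
# contents end up identical after 4 distinct accesses (4-way associativity).
from collections import OrderedDict

class LRUSet:
    def __init__(self, ways, initial=()):
        self.ways = ways
        self.lines = OrderedDict((b, None) for b in initial)

    def access(self, block):
        """Access a block; return True on hit, False on miss."""
        hit = block in self.lines
        if hit:
            self.lines.move_to_end(block)        # becomes most recently used
        else:
            if len(self.lines) >= self.ways:
                self.lines.popitem(last=False)   # evict least recently used
            self.lines[block] = None
        return hit

    def contents(self):
        return tuple(self.lines)

s1 = LRUSet(4, initial=("w", "x", "y", "z"))
s2 = LRUSet(4, initial=("p", "q", "r", "s"))
for b in ("a", "b", "c", "d"):
    s1.access(b)
    s2.access(b)
converged = s1.contents() == s2.contents()       # True
```

Policies such as FIFO or PLRU do not converge this quickly from arbitrary states, which is one reason they are considered less predictable for WCET analysis.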


Prof. Dr. Lothar Thiele

ETH Zurich, Switzerland
Course:
Scalable Software for MPSoC Platforms
 
Abstract:
The presentation describes an environment to map applications onto MPSoC platforms. The corresponding platform enables the specification, simulation, performance evaluation and mapping of distributed algorithms. Major characteristics are scalability and multi-resolution methods for validation and estimation that combine simulation-based and analytic approaches.


Prof. Dr. Dr. h.c. mult. Reinhard Wilhelm

Saarland University, Germany
Course:
Timing Analysis and Timing Predictability:
extension to multi-processor systems
 
Abstract:
Hard real-time systems are subject to stringent timing constraints which are dictated by the surrounding physical environment.
A schedulability analysis has to be performed in order to guarantee that all timing constraints will be met ("timing validation"). Existing techniques for schedulability analysis require upper bounds for the execution times of all the system’s tasks to be known.
These upper bounds are commonly called worst-case execution times (WCETs).
The WCET-determination problem has become non-trivial due to the advent of processor features such as caches, pipelines, and all kinds of speculation, which make the execution time of an individual instruction locally unpredictable. Such execution times may vary between a few cycles and several hundred cycles.
A combination of Abstract Interpretation (AI) with Integer Linear Programming (ILP) has been successfully used to determine precise upper bounds on the execution times of real-time programs.
The task solved by abstract interpretation is to compute invariants about the processor’s execution states at all program points.
These invariants describe the contents of the caches, of the pipeline, of prediction units, etc. They allow the verification of local safety properties, i.e. safety properties that correspond to the absence of "timing accidents". Timing accidents, e.g. cache misses and pipeline stalls, are reasons for the increase of the execution time of an individual instruction in a given execution state.
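Once abstract interpretation has bounded the execution time of every basic block, a whole-program bound can be obtained from a longest path through the control-flow graph. The sketch below (a simplified, hypothetical example) shows the principle on an acyclic CFG; the real ILP-based approach (implicit path enumeration) additionally handles loops via flow constraints:

```python
# Simplified WCET bound: longest path over an acyclic CFG whose per-block
# cycle bounds have already been derived. Block costs here are assumed.

def wcet_bound(block_cost, succ, entry):
    """Longest-path bound over an acyclic CFG via memoised recursion."""
    memo = {}
    def longest(n):
        if n in memo:
            return memo[n]
        rest = max((longest(s) for s in succ.get(n, ())), default=0)
        memo[n] = block_cost[n] + rest
        return memo[n]
    return longest(entry)

# Hypothetical CFG: entry branches to 'then' or 'else', both rejoin at exit.
block_cost = {"entry": 5, "then": 20, "else": 8, "exit": 3}
succ = {"entry": ["then", "else"], "then": ["exit"], "else": ["exit"]}
bound = wcet_bound(block_cost, succ, "entry")   # 5 + 20 + 3 = 28
```

The precision of the final bound thus depends directly on how tight the per-block invariants are, which is why the cache and pipeline analyses described above matter.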

The technology and tools have been used in the certification of several time-critical subsystems of the Airbus A380. The AbsInt tool aiT is the only tool worldwide validated for these avionics applications.


Dr. Eran Yahav

IBM T.J. Watson Research Center, USA
Course:
Verification and Synthesis of Concurrent Programs
 
Abstract:
Practical and efficient concurrent systems are notoriously hard to design, implement, and verify. Current practices for developing concurrent systems are rather limited. Directly using low-level concurrency constructs is the realm of experts, and is extremely error-prone. Generic higher-level constructs (e.g., transactional memory) are currently limited, and are not clearly easier to use. Analytic techniques (e.g., race detection) only address a fraction of the problems, and can only be applied after the code is written and is potentially broken in a fundamental manner. In this talk, I will survey recent work of our group on machine-assisted construction of concurrent algorithms.

(c) Artist Consortium, All Rights Reserved - 2006, 2007, 2008, 2009
