ARTIST Summer School in China 2010

July 18-23, 2010 - Beida (Peking University), Beijing, China
Organised and funded by ARTIST

Invited Speakers



Dr. Vania Joloboff

LIAMA Sino-French Laboratory, Tsinghua University, China
Vania Joloboff received a doctorate from the University of Grenoble (France) and is a graduate of the Ecole des Mines (France).

Prior to joining LIAMA, Vania Joloboff was Chief Technical Officer of the Silicomp Group, with the mission to develop the group's business and to oversee its R&D program.

Formerly, Vania Joloboff was Technical Director at the Open Software Foundation, where he headed the development of the OSF Motif technology, later included in the CDE desktop solution from HP, IBM, and Sun Microsystems, as well as the Embedded Java group. Before joining OSF, he directed the research center of Bull (the French computer manufacturer) at Sophia-Antipolis. He is the founder of the KOALA group.
Course:
Embedded Systems Virtual Prototyping
Abstract:
An inherent property of embedded systems is that they combine hardware and software into a coherent apparatus that serves some function. The development of embedded systems requires tools to design such combinations of hardware and software, and to validate that the resulting product satisfies the required properties. We define Virtual Prototyping as the technology that makes it possible to develop a virtual prototype of the system under design, which can be exercised and tested like the real device.
The real application software can be run on the virtual prototyping platform, so engineers can explore design alternatives and test the application software. At the core of a virtual prototyping platform is a hardware simulation technology, since the hardware functions must be simulated in order to run the software. The goal of this course is to explain the various facets of virtual prototyping technology.

The course starts with an introduction to virtual prototyping and a comparison with other modeling techniques in a model-driven engineering approach. It then gives a quick review of the computer architecture fundamentals necessary to understand simulation concepts. Next, the course presents various hardware modeling techniques. This part includes an introduction to SystemC and Transaction Level Modeling (TLM), the two technologies most widely used in industry, with examples drawn from real virtual prototypes.
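As a taste of the modeling style introduced in this part, here is a minimal SystemC sketch of a clocked 8-bit counter. It assumes the Accellera SystemC library is installed; the module and signal names are illustrative and not drawn from the course material.

    // A minimal SystemC sketch of a clocked 8-bit counter.
    #include <systemc.h>

    SC_MODULE(Counter) {
        sc_in<bool>        clk;   // clock input
        sc_out<sc_uint<8>> out;   // current counter value

        sc_uint<8> value;

        void tick() {
            value = value + 1;    // increment on each rising clock edge
            out.write(value);
        }

        SC_CTOR(Counter) : value(0) {
            SC_METHOD(tick);
            sensitive << clk.pos();
        }
    };

    int sc_main(int argc, char* argv[]) {
        sc_clock clk("clk", 10, SC_NS);    // 10 ns clock period
        sc_signal<sc_uint<8>> out;

        Counter counter("counter");
        counter.clk(clk);                  // bind ports to channels
        counter.out(out);

        sc_start(100, SC_NS);              // simulate 100 ns (10 increments)
        return 0;
    }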

The second part of the course reviews various techniques that can be used to improve the performance of virtual prototyping with SystemC and TLM models, and shows how virtual prototyping can be integrated with other formal methods tools to validate the final embedded system product.

The course will end with a short presentation of challenges for the future and directions for research in virtual prototyping.

Prerequisites: The course is easier to follow for students who have some background in computer architecture and C++ programming experience, although neither is strictly necessary.


Suzanne Lesecq

CRI - CEA, France
Suzanne Lesecq passed the "Agrégation" in Electrical Engineering in 1992. She received her PhD in Process Control from the Grenoble Institute of Technology, France, in 1997. She joined the University Joseph Fourier in 1998, where she was an Associate Professor from 1998 to 2006 and a full Professor from 2006 to 2009. She joined CEA LETI in 2009. She has published more than 90 papers in leading international conferences and journals, as well as book chapters. Her topics of interest are Process Control and Fault Detection and Isolation, together with their safe implementation on control units.
Course:
Data Fusion Techniques applied to Sensor Networks
Abstract:
Data fusion is an information processing technique that aims at the association, combination, integration, and blending of multiple data sources, representing a variety of knowledge and information, in order to provide information that is better than what can be obtained from any of the sources considered separately.
The problem of combining and simultaneously using data and information from multiple sources arises in many fields of application, often associated with the need to observe an environment through sensors that are more or less reliable, more or less accurate, and more or less effective. In fact, the term data fusion extends to wider areas: it covers the combination of all sources of knowledge, whether from sensors, navigation systems, or various databases (map data, document collections, digital terrain models, expert rules), or even the results of previous analyses or data fusions.
During the tutorial, we will consider sensor fusion, i.e., the fusion of data acquired from various sensors, possibly of different modalities. Various techniques will first be summarized, together with their numerically robust implementation.
Then the context of sensor fusion in a sensor network will be considered. In particular, the challenges that arise in this context (sensor positioning, computational capability of the sensor nodes, data loss, etc.) will be presented.
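One of the simplest fusion techniques is the minimum-variance linear combination of two redundant measurements. The C++ sketch below illustrates it with invented sensor values; the tutorial itself covers far more general techniques (e.g., Kalman filtering).

    // A minimal sketch of variance-weighted fusion of two scalar sensor
    // readings; the numbers are invented for illustration.
    #include <iostream>

    struct Fused { double value; double variance; };

    // Minimum-variance linear estimate: weight each reading by the
    // inverse of its variance; the fused variance never exceeds
    // either input's variance.
    Fused fuse(double x1, double v1, double x2, double v2) {
        double w1 = 1.0 / v1, w2 = 1.0 / v2;
        return { (w1 * x1 + w2 * x2) / (w1 + w2), 1.0 / (w1 + w2) };
    }

    int main() {
        // Two temperature sensors: a noisy one (variance 4.0) and a
        // more accurate one (variance 1.0).
        Fused f = fuse(21.3, 4.0, 20.1, 1.0);
        std::cout << "fused value " << f.value        // 20.34
                  << ", variance " << f.variance      // 0.8
                  << "\n";
        return 0;
    }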


Prof. Dr. Dr. h.c. mult. Reinhard Wilhelm

Saarland University, Germany

Research interests: Timing Analysis for Real-Time Systems, Static Program Analysis based on 3-valued logic (commonly known as Shape Analysis), Compiler Construction, Algorithm Explanation.

Positions and Functions:
- Chair for Programming Languages and Compiler Construction at Saarland University
- Scientific Director of Schloss Dagstuhl, the Leibniz Center for Informatics
- Site Coordinator Saarbruecken in the AVACS Project
- Coordinator of the Predator Project
- Member of the Strategic Management Board for the Artist2 and ArtistDesign Networks of Excellence
- Associate of AbsInt Angewandte Informatik GmbH
- Member of the ACM SIGBED Executive Committee
- Member of the Steering Committee of the International Conference on Embedded Software (EMSOFT)
- Member at Large of the Steering Committee of the ACM Conference on Languages, Compilers, and Tools for Embedded Systems (LCTES)
- Co-organizer of the ARTIST workshop Reconciling Predictability with Performance (RePP)
- Member of the Scientific Advisory Board of CWI
- Member of the Program Committees of SCOPES 2009, LCTES 2009, MEMOCODE 2009, and RTSS 2008
Course:
Timing Analysis and Timing Predictability
Abstract:
Hard real-time systems are subject to stringent timing constraints which are dictated by the surrounding physical environment.
A schedulability analysis has to be performed in order to guarantee that all timing constraints will be met ("timing validation"). Existing techniques for schedulability analysis require upper bounds for the execution times of all the system’s tasks to be known.
These upper bounds are commonly called worst-case execution times (WCETs).
The WCET-determination problem has become non-trivial due to the advent of processor features such as caches, pipelines, and all kinds of speculation, which make the execution time of an individual instruction locally unpredictable. Such execution times may vary between a few cycles and several hundred cycles.
A combination of Abstract Interpretation (AI) with Integer Linear Programming (ILP) has been successfully used to determine precise upper bounds on the execution times of real-time programs.
The task solved by abstract interpretation is to compute invariants about the processor’s execution states at all program points.
These invariants describe the contents of the caches, the pipeline, the prediction units, etc. They make it possible to verify local safety properties, namely safety properties that correspond to the absence of "timing accidents". Timing accidents, e.g., cache misses or pipeline stalls, are the reasons for the increase in the execution time of an individual instruction in a given execution state.
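To give a flavor of the bound computation, here is a deliberately simplified C++ toy that takes the longest path through a loop-free control-flow graph annotated with per-block cycle bounds. The graph and the costs are invented for illustration; industrial tools such as aiT combine abstract interpretation with an ILP formulation (implicit path enumeration) rather than this naive traversal.

    // A toy upper bound on execution time: longest path over a loop-free
    // CFG whose blocks carry per-block cycle bounds (e.g. obtained from
    // cache/pipeline analysis). Illustrative only.
    #include <algorithm>
    #include <iostream>
    #include <utility>
    #include <vector>

    int main() {
        // Basic blocks 0..4 in topological order; cost[i] is an upper
        // bound (in cycles) on block i's execution time.
        std::vector<int> cost = {5, 12, 8, 20, 3};
        // CFG edges (from -> to): an if-then-else between blocks 0 and 4.
        std::vector<std::pair<int,int>> edges = {{0,1},{0,2},{1,3},{2,3},{3,4}};

        std::vector<long> bound(cost.size());
        for (size_t v = 0; v < cost.size(); ++v) {
            long best_pred = 0;
            for (auto [u, w] : edges)
                if (static_cast<size_t>(w) == v)
                    best_pred = std::max(best_pred, bound[u]);
            bound[v] = best_pred + cost[v];   // longest path ending at v
        }
        std::cout << "WCET bound: " << bound.back() << " cycles\n";  // 40
        return 0;
    }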

The technology and tools have been used in the certification of several time-critical subsystems of the Airbus A380. The AbsInt tool aiT is the only tool worldwide validated for these avionics applications.


Professor Wang Yi

Uppsala University, Sweden
Wang Yi is Professor and chair of embedded systems, and Director of the newly introduced Embedded Systems Master's Program at Uppsala University. He obtained his PhD in computer science from Chalmers University of Technology in 1991. His research interests are in methods and tools for the modelling, verification, and implementation of embedded and real-time systems. He is a co-founder of the UPPAAL model checker. He has been a program (co-)chair for TACAS, EMSOFT, FORMATS, and HSCC, and an associate editor for IEEE Transactions on Computers, the Journal of Computing Science and Engineering, the Elsevier Journal of Systems Architecture, and the Journal of Computer Science and Technology. His current interests are mainly in real-time software development on multicore platforms. He is one of the principal investigators for the newly established UPMARC centre of excellence at Uppsala, devoted to new techniques and tools for programming multicore computer systems. Together with his students, he received the Best Paper Award at RTSS 2009.
Course:
Modelling and Analysis of Timed Systems
Abstract:
My lecture will include two parts:
- Part (1): Model Checking of Real-Time Systems
In the first part, I will give a tutorial on UPPAAL, a model checker for real-time systems based on timed automata. The tool has been developed and maintained jointly by Uppsala University in Sweden and Aalborg University in Denmark, and has been widely used in research, education, and industrial environments for embedded systems design. In this tutorial, I will focus on the semantic and algorithmic aspects of the tool. The main topics include transition systems, temporal logics, the theory of timed automata, and the algorithms and data structures implemented in UPPAAL for solving the verification problems efficiently. I will also outline recent work on combining abstract interpretation and model checking techniques to solve multicore WCET analysis problems, which is presented in detail in the second part of this lecture.
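For readers new to timed automata, the following C++ toy sketches their operational semantics: a state is a pair (location, clock valuation), and a run alternates time delays with guarded edges that may reset clocks. This is a hand-coded illustration, not UPPAAL's input language.

    // Toy operational semantics of a timed automaton with one clock:
    // locations off -> on -> bright, where "on -> bright" is guarded
    // by the clock constraint x <= 5. Illustrative only.
    #include <iostream>
    #include <string>

    int main() {
        std::string loc = "off";   // current location
        double x = 0.0;            // a single clock, in time units

        auto delay = [&](double d) { x += d; };          // time elapses
        auto press = [&]() {                             // edge: button press
            if (loc == "off")                 { loc = "on"; x = 0.0; } // reset x
            else if (loc == "on" && x <= 5.0) { loc = "bright"; }      // guard x <= 5
            else                              { loc = "off"; }
        };

        delay(3.0); press();   // off -> on, x := 0
        delay(2.0); press();   // on -> bright (guard x <= 5 holds, x == 2)
        std::cout << "location: " << loc << ", clock x = " << x << "\n";
        return 0;
    }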
- Part (2): Real-Time Systems on Multicores
Future processor chips will contain many CPUs, i.e., processor cores, each of which may support several hardware threads working in parallel. This new architecture poses a grand challenge for embedded software development: making the most efficient use of on-chip resources, including processor cores, caches, and communication bandwidth, in order to meet requirements of performance and predictability.

In this talk, I will give an overview of the CoDeR-MP project at Uppsala, carried out in collaboration with ABB and SAAB, to develop high-performance and predictable real-time software on multicore platforms. I will present the technical challenges, including the multicore timing analysis problem, and our proposed solutions for dealing with shared L2 caches, bandwidth for accessing off-chip memory, and multiprocessor scheduling.

In particular, I will present in detail our recent work on fixed-priority multiprocessor scheduling. In 1973, Liu and Layland discovered the famous utilization bound for fixed-priority scheduling on single-processor systems. Since then, it has been a long-standing open problem to find fixed-priority scheduling algorithms with the same bound for multiprocessor systems. Recently, we have developed a partitioning-based fixed-priority multiprocessor scheduling algorithm with Liu and Layland's utilization bound, which can be used for real-time task assignment and scheduling on multicore systems.
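As a concrete illustration of the bound mentioned above, the following C++ sketch checks a made-up periodic task set against the single-processor Liu and Layland bound n(2^(1/n) - 1); a partitioned multiprocessor scheduler would apply a test of this kind per core after assigning tasks to cores.

    // Liu & Layland schedulability test for rate-monotonic fixed-priority
    // scheduling on one processor: n periodic tasks are schedulable if
    // total utilization <= n * (2^(1/n) - 1). Task parameters are invented.
    #include <cmath>
    #include <iostream>
    #include <utility>
    #include <vector>

    int main() {
        // Each task: (worst-case execution time C, period T).
        std::vector<std::pair<double,double>> tasks = {{1, 4}, {2, 8}, {1, 10}};

        double u = 0.0;
        for (auto [c, t] : tasks) u += c / t;               // total utilization

        double n = static_cast<double>(tasks.size());
        double bound = n * (std::pow(2.0, 1.0 / n) - 1.0);  // ~0.78 for n = 3

        std::cout << "utilization " << u << " vs bound " << bound << ": "
                  << (u <= bound ? "schedulable" : "no guarantee") << "\n";
        return 0;
    }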

