ARTIST Summer School Europe 2010

September 5-10, 2010, Autrans (near Grenoble), France. Organised and funded by ARTIST.

Invited Speakers

Bios and Abstracts



Professor Rajeev Alur

University of Pennsylvania - USA
Rajeev Alur is Zisman Family Professor of Computer and Information Science, and Director of the newly introduced Embedded Systems Masters’ Program, at the University of Pennsylvania. He obtained his bachelor’s degree in computer science from the Indian Institute of Technology Kanpur in 1987, and his PhD in computer science from Stanford University in 1991. Before joining Penn in 1997, he was with the Computing Science Research Center at Bell Laboratories. The main focus of his research is foundations and tools for automated analysis of software and embedded systems.
His research spans multiple computing disciplines such as computer-aided verification, embedded control systems, logic in computer science, and programming languages.
He is a Fellow of the ACM, a Fellow of the IEEE, and an Alfred P. Sloan Faculty Fellow.
He received the inaugural CAV (Computer-Aided Verification) Award for fundamental contributions to analysis of real-time systems.

Course:
Interfaces for Control Components
Abstract:
Modern software engineering heavily relies on clearly specified interfaces for separation of concerns among designers implementing components and programmers using those components. The need for interfaces is evident for assembling complex systems from components, but even more so in control applications, where the components are designed by control engineers using mathematical modeling tools and are used by software executing on digital computers.
However, the notion of an interface for a control component must incorporate some information about timing, and standard programming languages do not provide a way of capturing such resource requirements.
This tutorial will describe how finite automata over infinite words can be used to define interfaces for control components. When the resource is allocated in a time-triggered manner, the allocation from the perspective of an individual component can be described by an infinite word over a suitably chosen alphabet.
The control engineer can express the interface of the component as an omega-regular language that contains all schedules that meet the performance requirements. The software must then ensure that the runtime allocation is in this language. The main benefit of this approach is composability: conjoining specifications of two components corresponds to a simple language-theoretic operation on interfaces. We have demonstrated how to automatically compute automata for performance requirements such as exponential stability and settling time for LQG control designs. The framework is supported by a toolkit, RTComposer, that is implemented on top of Real Time Java. The benefits of the approach will be demonstrated using applications to wireless sensor/actuator networks based on the WirelessHART protocol and to distributed control systems based on the Controller Area Network (CAN) bus.
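As a hedged illustration of the idea (not material from the course; the window-based requirement and all names below are invented), a time-triggered allocation can be viewed as a periodic word over {0, 1}, and composing two components amounts to finding a schedule in the intersection of their languages:

    from itertools import product

    def meets(schedule, k, n):
        """True if every length-n window of the periodic schedule contains at least k ones."""
        period = len(schedule)
        unrolled = schedule * ((n // period) + 2)   # unroll enough of the infinite periodic word
        return all(sum(unrolled[i:i + n]) >= k for i in range(period))

    def compose(period, req_a, req_b):
        """Search for a periodic allocation serving both components; composing the two
        interfaces is simply intersecting their languages."""
        for word in product((0, 1), repeat=period):
            slots_a = word                         # slots granted to component A
            slots_b = tuple(1 - s for s in word)   # the remaining slots go to component B
            if meets(slots_a, *req_a) and meets(slots_b, *req_b):
                return slots_a
        return None

    print(compose(6, (2, 6), (3, 6)))   # A needs 2 of every 6 slots, B needs 3 of every 6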


Professor David Atienza

EPFL - Switzerland
David Atienza Alonso is Professor and Director of the Embedded Systems Laboratory (ESL) at the Institute of Electrical Engineering within the School of Engineering (STI) of EPFL, Switzerland. He also holds the position of Adjunct Professor at the Computer Architecture and Automation Department of Complutense University of Madrid (UCM), Spain.
His research interests focus on design methodologies for integrated systems and high-performance embedded systems, including new modelling frameworks to explore thermal management techniques for Multi-Processor System-on-Chip, novel architectures for logic and memories in forthcoming nano-scale electronics, dynamic memory management and memory hierarchy optimizations for embedded systems, Networks-on-Chip interconnection design, and low-power design of embedded systems.

Course:
Thermal-Aware Design of 2D and 3D Multi-Processor System-on-Chip Architectures
Abstract:
Multi-Processor Systems-on-Chip (MPSoCs) are penetrating the consumer electronics market as powerful solutions to the growing demand for scalable and high-performance systems at limited design complexity and power dissipation. Nevertheless, MPSoCs are prone to alarming temperature variations on the die, which seriously decrease their expected reliability and lifetime. Furthermore, technical advances in manufacturing technologies are fueling the trend towards high-performance 3D MPSoC designs. However, 3D stacking creates higher power and heat density, leading to degraded reliability and performance if thermal management is not handled properly.
Thus, it is critical to develop dedicated design methodologies that guarantee safe thermal behavior of forthcoming 2D and 3D MPSoCs at low energy and performance cost.
This presentation targets the development of dedicated design methodologies for 2D and 3D MPSoCs that seamlessly address thermal modeling, analysis and management. First, I will review thermal modeling mechanisms for 2D MPSoCs based on simulation and emulation frameworks. Second, I will introduce reactive and proactive run-time thermal management methods which prevent hot spots and large thermal gradients in 2D MPSoCs while incurring negligible performance degradation. Finally, I will show how new thermal modeling and active management methods, including liquid cooling, can be modeled and included in 3D MPSoC architectures. The main concepts in the different parts of this presentation will be illustrated by industrial case studies based on Sun’s UltraSPARC T1, Freescale Multimedia-SoCs and IBM 3D-stacked chip prototypes.
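For readers new to the topic, a minimal sketch of the kind of lumped thermal model such frameworks typically start from (not the models used in the presentation; all parameter values are invented):

    def core_temperature(power_trace, r_th=2.0, c_th=0.5, t_amb=45.0, dt=0.01):
        """Forward-Euler integration of dT/dt = (P - (T - T_amb)/R) / C for a single core."""
        temps, t = [], t_amb
        for p in power_trace:
            t += dt * (p - (t - t_amb) / r_th) / c_th
            temps.append(t)
        return temps

    # A hypothetical workload alternating between a busy phase and an idle phase.
    trace = [10.0 if (i // 100) % 2 == 0 else 2.0 for i in range(1000)]
    print(max(core_temperature(trace)))     # peak temperature seen by a reactive policy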


Professor Sanjoy Baruah

University of North Carolina at Chapel Hill - USA
Sanjoy Baruah is a professor in the Department of Computer Science at the University of North Carolina at Chapel Hill. He received his Ph.D. from the University of Texas at Austin in 1993. His research and teaching interests are in scheduling theory, real-time and safety-critical system design, and resource-allocation and sharing in distributed computing environments.

Course:
Scheduling Issues in Mixed-Criticality Systems
Abstract:
Due to cost and related considerations, there is an increasing tendency in safety-critical embedded systems towards implementing multiple functionalities on a single shared platform. Certain features have been identified as being common to a large number of important emergent application domains of this kind; these features must therefore be taken into consideration in designing resource-allocation and scheduling policies for such integrated platforms. These features include the following:
  1. Different applications sharing the same platform may have different criticalities, both in the sense that their contribution to the overall platform-wide mission differs, and in the sense that they may be subject to different certification requirements by statutory certification authorities;
  2. These applications are typically implemented as collections of event-driven code, each of which is embedded within an infinite loop (and hence essentially runs "for ever"); and
  3. Often, the different applications share additional resources (other than CPUs); some of these resources are serially reusable rather than preemptive.
This combination of features gives rise to a very rich workload model, and some interesting scheduling problems that are not satisfactorily addressed using techniques from conventional scheduling theory. In this talk, I will describe a series of formalisms that have been proposed to represent workloads possessing these features, and describe several open resource-allocation and scheduling problems concerning such workloads. One concrete instance of such a formalism is sketched below.
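As a hedged illustration (a sketch assuming a Vestal-style dual-criticality model with per-criticality-level WCET estimates; not necessarily the exact formalisms covered in the talk):

    from dataclasses import dataclass

    @dataclass
    class Task:
        period: float
        wcet_lo: float       # optimistic estimate used by the system designer
        wcet_hi: float       # pessimistic estimate required by the certification authority
        criticality: str     # "LO" or "HI"

    def utilization(tasks, mode):
        """Total utilization in LO mode (all tasks, optimistic WCETs) or in HI mode
        (only HI-criticality tasks, pessimistic WCETs)."""
        if mode == "LO":
            return sum(t.wcet_lo / t.period for t in tasks)
        return sum(t.wcet_hi / t.period for t in tasks if t.criticality == "HI")

    tasks = [Task(10, 2, 4, "HI"), Task(20, 3, 6, "HI"), Task(5, 1, 1, "LO")]
    print(utilization(tasks, "LO"), utilization(tasks, "HI"))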


Professor Luca Benini

University of Bologna - Italy
Luca Benini is a Full Professor at the University of Bologna. He also holds a visiting faculty position at the Ecole Polytechnique Fédérale de Lausanne (EPFL). He received a Ph.D. degree in electrical engineering from Stanford University in 1997. Dr. Benini’s research interests are in the design of systems for ambient intelligence, from multi-processor systems-on-chip/networks-on-chip to energy-efficient smart sensors and sensor networks. From there, his research interests have spread into the field of biochips for the recognition of biological molecules, into bioinformatics for the elaboration of the resulting information, and further into more advanced algorithms for in silico biology. He has published more than 300 papers in peer-reviewed international journals and conferences, three books, several book chapters and two patents. He has been program chair and vice-chair of the Design Automation and Test in Europe (DATE) Conference. He was a member of the 2003 MEDEA+ EDA roadmap committee. He is a member of the IST Embedded System Technology Platform Initiative (ARTEMIS) working group on Design Methodologies, a member of the Strategic Management Board of the ArtistDesign Network of Excellence, and a member of the Advisory Group on Computing Systems of the IST Embedded Systems Unit.

Course:
Programming Heterogeneous Many-core platforms in Nanometer Technology:
the P2012 experience
Abstract:
Programmability is a key requirement for fast time-to-market and agile adaptation to rapidly evolving multimedia standards and customer expectations. Programming models and software development environments for embedded computing platforms should not only provide effective programming abstractions, but also help narrow the huge gaps in computational density and energy efficiency between programmable and hardwired solutions. In this talk I will provide a view on how these fundamental issues are being addressed in the context of upcoming nanometer technology many-core platforms, drawing upon the experience of STMicroelectronics’ Platform 2012 project.


Professor Giovanni De Micheli

EPFL - Lausanne, Switzerland
Giovanni De Micheli is Professor and Director of the Institute of Electrical Engineering and of the Integrated Systems Centre at EPF Lausanne, Switzerland. He also chairs the Scientific Committee of CSEM, Neuchatel, Switzerland. Previously, he was Professor of Electrical Engineering at Stanford University. He holds a Nuclear Engineer degree (Politecnico di Milano, 1979), a M.S. and a Ph.D. degree in Electrical Engineering and Computer Science (University of California at Berkeley, 1980 and 1983).
His research interests include several aspects of design technologies for integrated circuits and systems, such as synthesis, hw/sw codesign and low-power design, as well as systems on heterogeneous platforms including electrical, micromechanical and biological components. He is the author of Synthesis and Optimization of Digital Circuits (McGraw-Hill, 1994), and co-author and/or co-editor of eight other books and of over 400 technical articles. He is, or has been, a member of the technical advisory boards of several companies, including Magma Design Automation, Certess, Coware and STMicroelectronics.
Prof. De Micheli is the recipient of the 2003 IEEE Emanuel Piore Award for contributions to computer-aided synthesis of digital systems. He is a Fellow of ACM and IEEE. He received the Golden Jubilee Medal for outstanding contributions to the IEEE CAS Society in 2000. He received the 1987 D. Pederson Award for the best paper in the IEEE Transactions on CAD/ICAS, two Best Paper Awards at the Design Automation Conference, in 1983 and in 1993, and a Best Paper Award at the DATE Conference in 2005.
He has been serving IEEE in several capacities, namely: Division 1 Director (2008-9), co-founder and President Elect of the IEEE Council on EDA (2005-7), President of the IEEE CAS Society (2003), Editor in Chief of the IEEE Transactions on CAD/ICAS (1987-2001). He is and has been Chair of several conferences, including DATE (2010), pHealth (2006), VLSI SOC (2006), DAC (2000) and ICCD (1989). He is a founding member of the ALaRI institute at Universita’ della Svizzera Italiana (USI), in Lugano, Switzerland, where he is currently scientific counselor.

Course:
Nanosystems: devices, circuits, architectures and applications
Abstract:
Much of our economy and way of living will be affected by nanotechnologies in the decade to come and beyond. Mastering materials at the molecular level and their interaction with living matter opens up unforeseeable horizons. This talk deals with how we will conceive, design and use nanosystems, i.e., integrated systems exploiting nanodevices. Whereas switching circuits and microelectronics have been the enablers of computer and communication systems, nanosystems have the potential to realize innovative computational fabrics whose applications require broader hardware abstractions, extended software layers and a much higher overall complexity level. The abstraction of computation, the nanosystem architecture, the technological feasibility envelope and the multivariate design optimization problems pose challenging and disruptive research questions that this talk will address.


Professor Nikil Dutt

UC Irvine - USA
Nikil Dutt is a Chancellor’s Professor of CS and EECS at the University of California, Irvine. He received his PhD from the University of Illinois at Urbana-Champaign in 1989. His research interests are in embedded systems design automation, computer architecture, optimizing compilers, system specification techniques, distributed embedded systems, and brain-inspired architectures and computing. He has received numerous best paper awards and is coauthor of 7 books. Professor Dutt served as Editor-in-Chief of ACM Transactions on Design Automation of Electronic Systems (TODAES) (2003-2008) and currently serves as Associate Editor of ACM Transactions on Embedded Computer Systems (TECS) and of IEEE Transactions on VLSI Systems (IEEE-TVLSI). He was an ACM SIGDA Distinguished Lecturer during 2001-2002, and an IEEE Computer Society Distinguished Visitor for 2003-2005. He has served on the steering, organizing, and program committees of several premier CAD and Embedded System Design conferences and workshops, and serves or has served on the advisory boards of ACM SIGBED and ACM SIGDA. Professor Dutt is a Fellow of the IEEE, an ACM Distinguished Scientist, and recipient of the IFIP Silver Core Award.

Course:
Integrating End-to-End and Cross-Layer Optimizations for Cyber-Physical Systems
Abstract:
Much attention has focused on safety and reliability issues for Cyber-Physical Systems. However, the increasing embedded software/hardware content in these systems raises new issues for guaranteeing Quality of Service (QoS) – which we broadly interpret to include timing, reliability, safety, security, accuracy, etc. In many engineering domains, functionality is either independent of time (e.g., a cause-and-effect model) or employs a crude model of time imposed by another abstraction (e.g., a feedback control system). Similarly, time is implicitly modeled in many software systems using notions of causality or sequential execution. Since time is not fundamentally modeled with functionality in many physical systems, we need to integrate formal models and analyses with simulation, testing, and monitoring of deployed systems in a mutually synergistic manner. These formal models must address both cross-layer and end-to-end considerations, capturing multiple abstraction layers from physical processes through various layers of the information processing hierarchy (application, middleware, network, OS, hardware architecture) in a distributed environment. A holistic approach to understanding timing and its interrelationship with other QoS metrics is critical for these distributed multi-layer CPS applications. This talk will describe initial efforts at composing timing and reliability in a cross-layer manner and outline challenges in the context of emerging CPS applications.


Professor Rolf Ernst

TU Braunschweig - Germany
Rolf Ernst received a diploma in computer science and a Dr.-Ing. (with honors) in electrical engineering from the University of Erlangen-Nuremberg, Germany, in 1981 and 1987, respectively. From 1988 to 1989, he was a Member of Technical Staff in the Computer Aided Design & Test Laboratory at Bell Laboratories, Allentown, PA. Since 1990, he has been a professor of electrical engineering at the Technical University of Braunschweig, Germany, where he chairs a university institute of 65 researchers and staff. He was Head of the Department of Electrical Engineering from 1999 to 2001.
His research activities include embedded system design and design automation. The activities are currently supported by the German "Deutsche Forschungsgemeinschaft" (corresponds to the NSF), by the German BMBF, by the European Union, and by industrial contracts, such as from Intel, Thomson, EADS, Ford, Bosch, Toyota, and Volkswagen. He gave numerous invited presentations and tutorials at major international events and contributed to seminars and summer schools in the areas of hardware/software co-design, embedded system architectures, and system modeling and verification.
He chaired major international events, such as the International Conference on Computer Aided Design of VLSI (ICCAD), or the Design Automation and Test in Europe (DATE) Conference and Exhibition, and was Chair of the European Design Automation Association (EDAA), which is the main sponsor of DATE. He is a founding member of the ACM Special Interest Group on Embedded System Design (SIGBED), and was a member of the first board of directors. He is a member of the European Networks-of-Excellence ArtistDesign (real-time systems). He is an elected member (Fachkollegiat) and Deputy Spokesperson of the "Computer Science" review board of the German DFG (corresponds to NSF). He is an advisor to the German Ministry of Economics and Technology for the high-tech entrepreneurship program EXIST.

Course:
Formal Performance Analysis and Optimization of Safety-related Embedded Systems
Abstract:
Compositional approaches to formal performance analysis and optimization have reached industrial practice. They have successfully been applied to complete systems, such as the electronics of a premium car consisting of many different controllers, bus standards and gateways. While the current approaches typically assume fault-free operation, many hard real-time systems also require safety guarantees. Since many of the protection mechanisms have an impact on timing, fault statistics have to be considered in a performance analysis that, so far, has been fully deterministic. Furthermore, larger systems often include less critical system functions that need not be designed under worst-case conditions, but share components with hard real-time and safety-critical functions. Analysis extensions are needed which efficiently support “mixed critical” function set integration in order to avoid over-provisioned systems.
The presentation will start with an introduction to the theory of compositional analysis and give several examples. Then, it will outline safety requirements using the IEC 61508 standard as a prominent example and explain the impact of fault tolerance and fail safe mechanisms on real-time properties. Next, performance analysis including appropriate fault models will be presented delivering real-time guarantees under IEC 61508 safety requirements. Based on the results, possible approaches to analyze and optimize mixed critical systems integration will be discussed. The presentation will conclude with a proposal for system self-protection against system failures due to incorrect system updates or extensions.
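As a hedged illustration of the kind of local analysis that compositional approaches build on (not material from the course; the task parameters below are invented, and no fault model is included), the classical fixed-priority response-time iteration can be sketched in a few lines of Python:

    import math

    def response_time(task, higher_priority, limit=1000.0):
        """Smallest R satisfying R = C + sum over higher-priority tasks of ceil(R / T_j) * C_j."""
        c, r = task[0], task[0]                     # task = (C, T)
        while r <= limit:
            r_next = c + sum(math.ceil(r / t_j) * c_j for (c_j, t_j) in higher_priority)
            if r_next == r:
                return r
            r = r_next
        return None                                 # no fixed point below the limit

    tasks = [(1, 5), (2, 10), (4, 20)]              # (C, T), listed in decreasing priority order
    for i, t in enumerate(tasks):
        print(t, "->", response_time(t, tasks[:i]))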


Professor Dr. rer. nat. Hermann Härtig

Technische Universität Dresden - Germany
Hermann Härtig is a full professor at Technische Universität Dresden and leads its operating systems research group. Under his leadership, the group contributed significantly to L4 micro-kernel technology. It produced "L4/Fiasco", the first implementation of the L4 microkernel in a high-level programming language, invented "L4Linux", the L4-based virtualisation technology, and pioneered their application in real-time environments and for security-sensitive applications. Before joining TU Dresden, he led the "BirliX" operating systems project at the former German National Research Center for Information Technology. He regularly spends extended sabbatical visits at major industry and university research labs and consults on various topics related to operating systems, real-time systems and system security.

Course:
The L4 Microkernel
Abstract:
L4 has become one of the most intensively researched and successful microkernels. In my lecture, I will outline the principles that led to the design of its minimalist interface, the evolution of the interface, and some research projects that have been and are based on L4.


Dr. Jörn Janneck

United Technologies Research Center
Jorn W. Janneck is a staff researcher at the United Technologies Research Center in Berkeley, CA. He received his diploma in computer science from the University of Bremen in 1995, and his PhD in electrical engineering from the ETH Zurich in 2000. Prior to joining UTRC he worked in the Xilinx Research Labs, was a visiting postdoctoral scholar at the University of California at Berkeley, and has also held research positions at the Fraunhofer Institute for Material Flow and Logistics in Dortmund and Lund University.

His research includes various aspects of engineering and describing concurrent and parallel systems. More recently he has been focused on the use of dataflow as a programming paradigm for parallel platforms, and specifically programmable logic devices and multicore machines, working on the design of programming languages and corresponding tools for translating, profiling, and analyzing dataflow programs. His work has significantly influenced the recent MPEG/ISO standards activities to restructure the normative description of video codecs as dataflow programs.

Course:
Dataflow Programming
Abstract:
The emergence and increasingly widespread deployment of highly parallel computing platforms poses a significant programming and engineering challenge to designers and developers. Programming models, languages, and tools based on traditional sequential conceptions of algorithms often scale poorly to parallel machines, placing much of the burden of matching an algorithm to a computer architecture on the programmer, and making the resulting implementation more brittle and less portable in the process.
In this talk I discuss how some of the technical challenges of implementing a concurrent algorithm on a parallel computing substrate are addressed using a dataflow programming model and associated languages and tools, and show how some of the properties of the programming model are exploited in the process.
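A minimal sketch of the dataflow view (actor and channel names invented for illustration; not code from the talk): actors communicate only through FIFO channels and fire when input tokens are available, which is what decouples the algorithm description from the target platform's degree of parallelism.

    from collections import deque

    def source(out, n=8):
        for i in range(n):          # produce n tokens
            out.append(i)

    def scale(inp, out, factor=3):
        while inp:                  # fire as long as an input token is available
            out.append(factor * inp.popleft())

    a, b = deque(), deque()         # FIFO channels between the actors
    source(a)
    scale(a, b)
    print(list(b))                  # [0, 3, 6, ..., 21]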


Professor Alberto Sangiovanni-Vincentelli

UC Berkeley - USA
Alberto Sangiovanni Vincentelli holds the Edgar L. and Harold H. Buttner Chair of Electrical Engineering and Computer Sciences at the University of California at Berkeley. He has been on the Faculty since 1976. He obtained an electrical engineering and computer science degree ("Dottore in Ingegneria") summa cum laude from the Politecnico di Milano, Milano, Italy in 1971.
He was a co-founder of Cadence and Synopsys, the two leading companies in the area of Electronic Design Automation. He is the Chief Technology Adviser of Cadence. He is a member of the Boards of Directors of Cadence (where he chairs the Technology Committee), of UPEK, a company he helped spin off from ST Microelectronics, of Sonics, and of Accent, an ST Microelectronics-Cadence joint venture he helped found. He was a member of the HP Strategic Technology Advisory Board, and is a member of the Science and Technology Advisory Board of General Motors and of the Scientific Councils of the Tronchetti Provera foundation and of the Snaidero Foundation. He is the founder and Scientific Director of the Project on Advanced Research on Architectures and Design of Electronic Systems (PARADES), a European Group of Economic Interest supported by Cadence, Magneti-Marelli and ST Microelectronics. He is a member of the High-Level Group, of the Steering Committee, of the Governing Board and of the Public Authorities Board of the EU Artemis Joint Technology Initiative. He is a member of the Scientific Council of the Italian National Science Foundation (CNR).
In 1981, he received the Distinguished Teaching Award of the University of California. He received the worldwide 1995 Graduate Teaching Award of the IEEE (a Technical Field award for “inspirational teaching of graduate students”). In 2002, he was the recipient of the Aristotle Award of the Semiconductor Research Corporation. He has received numerous research awards, including the Guillemin-Cauer Award (1982-1983) and the Darlington Award (1987-1988) of the IEEE for the best paper bridging theory and applications, two awards for the best paper published in the IEEE Transactions on CAS and CAD, five best paper awards and one best presentation award at the Design Automation Conference, and other best paper awards at the Real-Time Systems Symposium and the VLSI Conference. In 2001, he was given the Kaufman Award of the Electronic Design Automation Council for “pioneering contributions to EDA”. In 2008, he was awarded the IEEE/RSE Wolfson James Clerk Maxwell Medal, given “for groundbreaking contributions that have had an exceptional impact on the development of electronics and electrical engineering or related fields”, with the following citation: “For pioneering innovation and leadership in electronic design automation that have enabled the design of modern electronics systems and their industrial implementation”. In 2009, he received the first ACM/IEEE A. Richard Newton Technical Impact Award in Electronic Design Automation, which honors persons for an outstanding technical contribution within the scope of electronic design automation. In 2009, he was awarded an honorary Doctorate by the University of Aalborg in Denmark.
He is an author of over 850 papers, 15 books and 3 patents in the area of design tools and methodologies, large-scale systems, embedded systems, hybrid systems and innovation.

Course:
Distributed Embedded System Challenges: Communication, Communication, and Communication!
Abstract:
System design is about the implementation of a set of functionalities satisfying a number of constraints ranging from performance to cost, emissions, reliability, fault tolerance, power consumption and weight. The choice of implementation architecture determines which functionality will be implemented as a hardware component and which as software running on a programmable component. System design should be based on two basic pillars: what has to be designed and how it is designed. The “what” is in general expressed in informal ways, including natural language, and is articulated in:
- Functionalities the system has to implement,
- Constraints the implementation has to satisfy and
- Objectives to follow for optimizing the design process (e.g., cost, time-to-market, maintainability).
The “how” is in general partially or completely identified by a limited set of options to maximize re-use and minimize design time. A way of expressing the set of options is to list the components that can be put together to form a solution to the design problem. The components must include communication elements that are used to combine the elements that implement a part of the design and the rules that have to be followed to yield a valid design. The set of all possible combinations of components is a platform and the particular selection of elements and of their communication is called a platform instance. Then the design process is about “mapping” functionalities to elements of the library to yield a platform instance so that the constraints are satisfied and the goals are optimized. This meet-in-the-middle approach is called platform-based design. It is intimately related to Composability as the rules that are given to form a platform are concerned with properties of the interconnected components and are used to make sure that given properties are guaranteed correct when the platform is formed. Note that one can also introduce a virtual element in the library, one that is not fully instantiated but needs to be refined. This aspect allows freedom in optimizing the solution trading off re-use with quality.
Platform-based design will be introduced with the help of a number of examples taken from distributed systems such as cars and airplanes, VLSI designs and even synthetic biology artifacts. The tools and methods will be illustrated with particular attention to the communication issues among components.
We will also use the framework to discuss advances in heterogeneous system design (cyber-physical systems), such as energy-efficient buildings, and in novel communication architectures such as the so-called Loosely Time-Triggered Architecture. Methods to select an optimized communication architecture, including protocols, topology and the physical location of components, will also be presented.


Professor Hiroaki Takada

Nagoya University - Japan
Hiroaki Takada is a Professor at the Department of Information Engineering, the Graduate School of Information Science, Nagoya University. He is also the Executive Director of the Center for Embedded Computing Systems (NCES). He received his Ph.D. degree in Information Science from the University of Tokyo in 1996. His research interests include real-time operating systems, real-time scheduling theory, automotive embedded systems, and embedded system design. He is the leader of the TOPPERS Project, a project to develop open-source real-time operating systems for embedded systems.

Course:
Challenges of Hard Real-Time Operating Systems
— Multiprocessor Support and Energy Consumption Optimization
Abstract:
Recent sophisticated embedded systems require multiprocessor technology because the performance of a uniprocessor system is approaching its limit. Implementing a hard real-time system on a multiprocessor system, however, includes several difficult issues due to the inherent unpredictability of inter-processor synchronization.
This lecture describes some of the challenges in implementing a hard real-time operating system (RTOS) that supports multiprocessor systems, and discusses some approaches to solving them. RTOS functionality for dynamic energy performance scaling (DEPS), an approach to optimizing the energy consumption of embedded systems, is also discussed.
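As a rough illustration of the trade-off such energy-scaling schemes exploit (an assumption-laden sketch, not the talk's model: dynamic energy is approximated as growing quadratically with frequency for a fixed cycle count, and all numbers are invented):

    def lowest_feasible_frequency(cycles, deadline, freqs):
        """Pick the slowest available frequency that still finishes `cycles` before the deadline."""
        for f in sorted(freqs):
            if cycles / f <= deadline:
                return f
        return None

    def dynamic_energy(cycles, f, k=1.0):
        """E ~ k * V^2 * cycles, with the supply voltage assumed roughly proportional to f."""
        return k * (f ** 2) * cycles

    freqs = [200, 400, 600, 800]                       # available operating points (invented)
    f = lowest_feasible_frequency(4000, 15.0, freqs)   # 4000 cycles, deadline of 15 time units
    print(f, dynamic_energy(4000, f), dynamic_energy(4000, max(freqs)))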


Professor Dr.-Ing. Jürgen Teich

University of Erlangen-Nuremberg
Jürgen Teich received his master's degree (Dipl.-Ing.) in 1989 from the University of Kaiserslautern (with honours). From 1989 to 1993, he was a PhD student at the University of Saarland, Saarbruecken, Germany, from where he received his PhD degree (summa cum laude). In 1994, Dr. Teich joined the DSP design group of Prof. E. A. Lee and D. G. Messerschmitt in the Department of Electrical Engineering and Computer Sciences (EECS) at UC Berkeley, where he worked in the Ptolemy project as a PostDoc. From 1995 to 1998, he held a position at the Institute of Computer Engineering and Communications Networks Laboratory (TIK) at ETH Zurich, Switzerland, finishing his Habilitation entitled "Synthesis and Optimization of Digital Hardware/Software Systems" in 1996. From 1998 to 2002, he was a full professor in the Electrical Engineering and Information Technology department of the University of Paderborn, holding a chair in Computer Engineering. Since 2003, he has been a full professor in the Computer Science Institute of the Friedrich-Alexander University Erlangen-Nuremberg, holding the chair of Hardware-Software-Co-Design. Dr. Teich has been a member of multiple program committees of well-known conferences and workshops, including CODES+ISSS 2007. In 2004, Prof. Teich was elected a reviewer for the German Science Foundation (DFG) for the area of Computer Architecture and Embedded Systems. Prof. Teich is involved in many interdisciplinary national basic research projects as well as industrial projects. He currently supervises more than 20 PhD students.

Course:
Invasive Computing - Basic Concepts and Foreseen Benefits
Abstract:
Technology roadmaps foresee 1000 and more processors being integrated into a single MPSoC by the year 2020. Obviously, the control of parallel applications can no longer be organized in a fully centralized way, as it is in today's multi-core processors. Feature variations will also become a problem if algorithms are not able to cope with them. One way to live with the expected increase in defects and errors is to properly exploit the reconfigurability of processors, communication links and memories. The only question is at what price, and to what degree, this can and should be done.
In this lecture, we present Invasive Computing, a novel paradigm for organizing the computations of large-scale MPSoCs of the future in a decentralized and resource-aware manner. This involves drastic changes in the way MPSoCs will be programmed in 2020, and drastic changes in the underlying architecture.

The main idea of Invasive Computing is the vision that applications will organize themselves and spread their computational load at run-time over processor, communication and memory resources in phases called invasion, and will later retreat from these resources again, depending on the available degree of parallelism, on dynamically changing user objectives, or on the state of the underlying hardware such as temperature profile, load, permissions, or faultiness.

We present first ideas of how to embed this novel parallel computing paradigm into existing parallel programming languages, what kind of architectural changes will be required, and finally, what applications would benefit from this kind of self-organization on invasive MPSoCs. It will be outlined how and to what degree invasive computing may improve fault-resilience, scalability, efficiency and resource utilization.
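A very rough sketch of the invasion and retreat phases described above (all class and function names below are invented for illustration; the actual programming interface and resource management are part of the research presented in this lecture):

    class ResourceManager:
        """Toy manager handing out processor cores to invading applications."""
        def __init__(self, n_cores):
            self.free = set(range(n_cores))

        def invade(self, requested):
            """Claim up to `requested` cores, depending on what is currently available."""
            claim = set(list(self.free)[:requested])
            self.free -= claim
            return claim

        def retreat(self, claim):
            """Hand the claimed cores back once the parallel phase is over."""
            self.free |= claim

    rm = ResourceManager(8)
    claim = rm.invade(5)                          # invasion phase: ask for exploitable parallelism
    results = [core * core for core in claim]     # run the parallel workload on the claimed cores
    rm.retreat(claim)                             # retreat phase: release the resources again
    print(len(claim), sorted(rm.free))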


Professor Wang Yi

Uppsala University - Sweden
Wang Yi is Professor and chair of embedded systems, and Director of the newly introduced Embedded Systems Masters’ Program, at Uppsala University. He obtained his PhD in computer science from Chalmers University of Technology in 1991. His research interests are in methods and tools for modelling, verification and implementation of embedded and real-time systems. He is a co-founder of the UPPAAL model checker. His current interests are mainly in real-time software development on multicore platforms. He is one of the principal investigators of the newly established UPMARC centre of excellence at Uppsala, devoted to new techniques and tools for programming multicore computer systems. Together with his students, he received the Best Paper Award at RTSS 2009.

Course:
Towards Real-time Applications on Multi-core Platforms:
the Timing Problem and Possible Solutions
Abstract:
Future processor chips will contain many CPUs, i.e., processor cores, each of which may support several hardware threads working in parallel. The new architecture poses a grand challenge for embedded software development: making the most efficient use of on-chip resources, including processor cores, caches and communication bandwidth, in order to meet requirements of performance and predictability. In this talk, I will give an overview of the CoDeR-MP project at Uppsala, a collaboration with ABB and SAAB to develop high-performance and predictable real-time software on multicore platforms. I will present the technical challenges, including the multicore timing analysis problem, and our proposed solutions dealing with shared L2 caches, bandwidth for accessing off-chip memory, and multiprocessor scheduling. In particular, I will present in detail our recent work on fixed-priority multiprocessor scheduling. In 1973, Liu and Layland discovered the famous utilization bound for fixed-priority scheduling on single-processor systems. Since then, it has been a long-standing open problem to find fixed-priority scheduling algorithms with the same bound for multiprocessor systems. Recently, we have developed a partitioning-based fixed-priority multiprocessor scheduling algorithm with Liu and Layland's utilization bound, which can be used for real-time task assignment and scheduling on multicore systems.
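For reference (not part of the course material), Liu and Layland's bound for n tasks is n*(2^(1/n) - 1). The sketch below computes the bound and uses it as an admission test in a naive first-fit partitioning; it is only an illustration, not the refined partitioning algorithm mentioned above.

    def liu_layland_bound(n):
        """Liu and Layland's utilization bound for n tasks under rate-monotonic scheduling."""
        return n * (2 ** (1.0 / n) - 1)

    def first_fit(utilizations, n_procs):
        """Assign task utilizations to processors, admitting a task only if the bound still holds."""
        procs = [[] for _ in range(n_procs)]
        for u in sorted(utilizations, reverse=True):
            for p in procs:
                if sum(p) + u <= liu_layland_bound(len(p) + 1):
                    p.append(u)
                    break
            else:
                return None                       # this naive scheme failed to place the task
        return procs

    print(liu_layland_bound(3))                   # about 0.7798
    print(first_fit([0.4, 0.3, 0.3, 0.2, 0.2], 2))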


 

Be sure to see also the full schedule.

 

The programme is defined by ArtistDesign’s Strategic Management Board.

(c) Artist Consortium, All Rights Reserved - 2006, 2007, 2008, 2009
