ARTIST Summer School Europe 2011

September 4-9, 2011      Aix-les-Bains (near Grenoble), France
Organised and funded by ARTIST

Invited Speakers: abstracts + slides

Bios and Abstracts

Professor Tarek Abdelzaher

University of Illinois at Urbana Champaign - USA
Tarek Abdelzaher received his Ph.D. from the University of Michigan in 1999 with a thesis on Quality of Service Adaptation in Real-Time Systems. He founded the Software Predictability Group at the University of Virginia, where he served until 2005. He is currently an Associate Professor in the Department of Computer Science at the University of Illinois at Urbana-Champaign. He has authored or coauthored more than 100 refereed publications in real-time computing, distributed systems, sensor networks, and control. He is Editor-in-Chief of the Journal of Real-Time Systems; an Associate Editor of the IEEE Transactions on Mobile Computing, the IEEE Transactions on Parallel and Distributed Systems, the ACM Transactions on Sensor Networks, and the Ad Hoc Networks Journal; and Editor of ACM SIGBED Review. He was Program Chair of RTAS 2004 and RTSS 2006, and General Chair of RTAS 2005, IPSN 2007, RTSS 2007, DCoSS 2008, and SenSys 2008. Abdelzaher’s research interests lie broadly in understanding and controlling the temporal properties of software systems in the face of increasing complexity, distribution, and degree of embedding in an external physical environment.

Challenges in Human-centric Sensor Networks
The proliferation of sensors, such as GPS devices, pedometers, smart power meters, and camera phones, in the possession of the common individual ushers in an era of sensor networks where the individual sensors are people or devices they own that collect data on their behalf. We call such sensor networks human-centric. An example is a sensor network where individuals share their GPS traces to produce real-time maps of traffic speed on popular streets of a large city. The involvement of humans in the sensor data collection and sharing loop offers significant new challenges such as sparse sampling, observability, source privacy preservation, and accommodation of large amounts of unreliable data from faulty, biased, or even malicious sources. The talk describes these challenges, outlines theoretical foundations developed to address them, and presents an example service called FusionSuite that embodies the solutions within a software framework (implemented on mobile phones) that enables formation and management of social sensing networks. Results are presented from an application of FusionSuite that aims to reduce the energy cost and carbon footprint of vehicular transportation.
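One of the listed challenges, accommodating unreliable data, is often attacked with iterative fact-finding: source credibility and claim belief are estimated jointly until they stabilize. The sketch below uses hypothetical data and is only an illustration of this general scheme, not the actual FusionSuite algorithm.

```python
# Illustrative iterative fact-finding over unreliable sources
# (hypothetical data; not the actual FusionSuite algorithm).

def fact_find(claims_by_source, iterations=20):
    """claims_by_source: dict source -> set of claim ids.
    Returns (source_trust, claim_belief) after fixed-point iteration."""
    sources = list(claims_by_source)
    claims = {c for cs in claims_by_source.values() for c in cs}
    trust = {s: 1.0 for s in sources}
    belief = {c: 1.0 for c in claims}
    for _ in range(iterations):
        # a claim is believable if trusted sources assert it
        for c in claims:
            belief[c] = sum(trust[s] for s in sources
                            if c in claims_by_source[s])
        # a source is trustworthy if its claims are believable
        for s in sources:
            cs = claims_by_source[s]
            trust[s] = sum(belief[c] for c in cs) / len(cs)
        # normalize so the values stay bounded across iterations
        mb, mt = max(belief.values()), max(trust.values())
        for c in claims:
            belief[c] /= mb
        for s in sources:
            trust[s] /= mt
    return trust, belief

# two sources corroborate claim c1; c3 has a single, weaker source
data = {"alice": {"c1", "c2"}, "bob": {"c1"}, "mallory": {"c3"}}
trust, belief = fact_find(data)
```

Corroborated claims end up with higher belief, and sources asserting them accumulate higher trust, which is the intuition behind more principled Bayesian truth-discovery schemes.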

Professor Sanjoy Baruah

University of North Carolina at Chapel Hill - USA
Sanjoy Baruah is a professor in the Department of Computer Science at the University of North Carolina at Chapel Hill. He received his Ph.D. from the University of Texas at Austin in 1993. His research and teaching interests are in scheduling theory, real-time and safety-critical system design, and resource-allocation and sharing in distributed computing environments.

Certification-cognizant scheduling in integrated computing environments
In many modern embedded platforms, safety-critical functionalities are implemented in an integrated manner alongside less critical functionalities. Some highly safety-critical functionalities may be subject to mandatory certification by statutory Certification Authorities.
Current approaches to achieving certification in such integrated platforms are centered on ensuring complete spatial and temporal separation between the safety-critical functionalities and the remainder of the system. However, such separation often results in poor use of platform resources, since (i) the pessimism typically needed for ensuring certifiable correctness of the safety-critical functionalities requires severe over-provisioning of platform resources to these functionalities; while (ii) the very principle of separation rules out the “reclaiming” of these over-provisioned resources for executing non-critical functionalities. The challenge, then, is to be able to both meet certification requirements for a subset of the functionalities implemented upon an integrated platform, and ensure efficient usage of platform resources.

Recent research in real-time scheduling theory has yielded some promising techniques for meeting these twin goals of both being able to certify the correctness of the safety-critical functionalities to extremely high levels of assurance, and simultaneously ensuring high utilization of platform computing resources. This presentation will survey some of these recent results in certification-cognizant scheduling, and will enumerate several important open research problems.
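One family of techniques in this line of work uses two WCET estimates per task (a normal one and a pessimistic certified one) and scales the deadlines of high-criticality tasks, as in the EDF-VD test from the mixed-criticality scheduling literature. A simplified sufficient test, with invented task parameters, can be coded directly from the utilization bounds:

```python
def edf_vd_schedulable(tasks):
    """tasks: list of (period, wcet_lo, wcet_hi, level), level in {"LO","HI"}.
    Simplified EDF-VD sufficient schedulability test for one processor."""
    u_lo_lo = sum(c_lo / T for T, c_lo, _, lvl in tasks if lvl == "LO")
    u_hi_lo = sum(c_lo / T for T, c_lo, _, lvl in tasks if lvl == "HI")
    u_hi_hi = sum(c_hi / T for T, _, c_hi, lvl in tasks if lvl == "HI")
    if u_lo_lo + u_hi_hi <= 1.0:
        return True            # plain EDF with certified budgets already works
    if u_lo_lo >= 1.0:
        return False           # low-criticality load alone overloads the CPU
    # scale HI-task deadlines by x so normal-mode behaviour stays schedulable,
    # then check the certified-mode condition
    x = u_hi_lo / (1.0 - u_lo_lo)
    return x * u_lo_lo + u_hi_hi <= 1.0

# a HI task (period 10, WCET 2 normally, 4 when certified) plus a LO task
ok = edf_vd_schedulable([(10, 2, 4, "HI"), (10, 3, 3, "LO")])
bad = edf_vd_schedulable([(10, 5, 9, "HI"), (10, 5, 5, "LO")])
```

The point of the test is exactly the twin goal in the abstract: the pessimistic budgets are honored for certification, while the slack between the two WCET estimates is reclaimed for low-criticality work.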

Professor Luca Benini

University of Bologna - Italy
Luca Benini is a Full Professor at the University of Bologna. He also holds a visiting faculty position at the Ecole Polytechnique Fédérale de Lausanne (EPFL). He received a Ph.D. degree in electrical engineering from Stanford University in 1997. Dr. Benini’s research interests are in the design of systems for ambient intelligence, from multi-processor systems-on-chip/networks-on-chip to energy-efficient smart sensors and sensor networks. From there, his research interests have spread into the field of biochips for the recognition of biological molecules, into bioinformatics for the elaboration of the resulting information, and further into more advanced algorithms for in silico biology. He has published more than 300 papers in peer-reviewed international journals and conferences, three books, several book chapters, and two patents. He has been program chair and vice-chair of the Design, Automation and Test in Europe (DATE) conference. He was a member of the 2003 MEDEA+ EDA roadmap committee. He is a member of the IST Embedded System Technology Platform Initiative (ARTEMIS) working group on Design Methodologies, a member of the Strategic Management Board of the ArtistDesign Network of Excellence, and a member of the Advisory Group on Computing Systems of the IST Embedded Systems Unit.

Managing MPSoCs beyond their Thermal Design Power
The current trend in mobile computing is to design MPSoC platforms with negative thermal margins: their dies can dissipate significantly more than the thermal design power that their packages and cooling systems can sustain. Such an apparently suicidal policy is dictated by market pressure from the smart phone and tablet markets: the CPUs of new platforms are marketed on their peak clock frequency, because maximum-speed sequential execution is required for a few critical use cases (e.g., the first-time loading of a script-rich web page). While waiting for a recovery from such market-driven folly, we need to learn how to thermally manage platforms that can run at full speed for only a few seconds before overheating. In this talk I will review advanced thermal management techniques and delve into recent evolutions which can help us address the looming thermal crisis.
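To make the problem concrete, the sketch below simulates a first-order thermal model of a chip whose full-speed power exceeds what its package can sustain, together with a simple linear throttling policy. All constants are illustrative assumptions, not taken from any real platform.

```python
# A minimal sketch of dynamic thermal management on a first-order
# thermal RC model: clock frequency ramps down linearly as the die
# temperature approaches a critical limit. All constants are invented.

F_MAX, F_MIN = 2.0, 0.5        # GHz
T_AMB, T_CRIT = 45.0, 85.0     # deg C
R_TH, TAU = 20.0, 2.0          # thermal resistance (K/W), time constant (s)

def power(f):
    # static power plus dynamic power (~f^3 once voltage scales with f)
    return 0.5 + 2.5 * (f / F_MAX) ** 3

def throttle(temp):
    # full speed when cool; linear ramp down to F_MIN near T_CRIT
    if temp < T_CRIT - 10.0:
        return F_MAX
    return max(F_MIN, F_MAX * (T_CRIT - temp) / 10.0)

def simulate(seconds, dt=0.01):
    temp, trace = T_AMB, []
    for _ in range(int(seconds / dt)):
        f = throttle(temp)
        # first-order model: steady-state temperature is T_AMB + P * R_TH
        temp += dt * (T_AMB + power(f) * R_TH - temp) / TAU
        trace.append((temp, f))
    return trace

trace = simulate(30.0)
peak = max(t for t, _ in trace)
```

Note that the steady-state temperature at full speed (T_AMB + P·R_TH = 105 °C here) is well above T_CRIT: the chip can sprint, but the management loop must cut frequency long before thermal runaway, which is the regime the talk describes.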

Professor Rastislav Bodik

UC Berkeley, USA
Ras Bodik is Associate Professor of Computer Science at the University of California, Berkeley, and the Vice Chair for Graduate Matters. He has worked in program analysis, compilation, programming languages, computer architecture, and programming tools. His group developed highly scalable algorithms for analysis of Java programs, the first automatic program specializer, and a miner of program specifications, among other results. He currently leads two projects. The first revolves broadly around program synthesis, ranging from programming by demonstration for end users to programming for GPU architectures. The second explores programming for low-power mobile devices from a web browser perspective; it spans the entire application stack, from the design of a declarative constraint-based language to replace AJAX, to new specifications of CSS, to algorithms for parallel page layout and rendering. Ras will also be happy to talk about his new undergraduate course on programming languages, in which students develop a language with coroutines, their own parser generator, and a web browser with its own layout engine and reactive scripting.

Automatic Programming Revisited
Why is it that Moore’s Law hasn’t yet revolutionized the job of the programmer? Compute cycles have been harnessed in testing, model checking, and autotuning but programmers still code with bare hands. Can their cognitive load be shared with a computer assistant?
Automatic programming of the ’80s failed because it relied on too much AI. Later synthesizers succeeded in deriving programs that were superbly efficient, even surprising, but these synthesizers first had to be formally taught considerable human insight about the domain.

Using examples from algorithms, frameworks, and GPU programming, I will describe how the emerging synthesis community has rethought automatic programming. The first innovation is to abandon automation, focusing instead on the intriguing new problem of how humans should communicate their incomplete ideas to a computerized algorithmic assistant, and how the assistant should talk back. As an example, I will describe programming with angelic oracles.

The second line of innovation changes the algorithmics. Here, we have replaced deductive logic with constraint solving. Indeed, new synthesis is to its classical counterpart what model checking is to verification, and enjoys similar benefits: because algorithmic synthesis relies more on compute cycles and less on a formal expert, it is easier to adapt the synthesizer to a new domain.
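The flavor of this constraint-based view can be conveyed with a toy "sketch with a hole": search fills in an unknown operator and constant so that input/output examples hold. Real synthesizers encode the same question as a SAT/SMT query; the naive enumeration below is only meant to illustrate the formulation.

```python
# A toy "sketch" synthesizer: the program has a hole (an unknown
# operator and constant), and search finds values consistent with
# input/output examples. Real synthesizers hand this to a solver;
# this brute-force enumeration just illustrates the idea.

OPS = {"+": lambda x, k: x + k,
       "*": lambda x, k: x * k,
       "<<": lambda x, k: x << k}

def synthesize(examples, consts=range(0, 8)):
    """Find (op, k) such that op(x, k) == y for all (x, y) examples."""
    for name, op in OPS.items():
        for k in consts:
            if all(op(x, k) == y for x, y in examples):
                return name, k
    return None  # no program in the search space fits the examples

# examples consistent with the hidden program x * 3
result = synthesize([(1, 3), (2, 6), (5, 15)])
```

The examples play the role of the incomplete specification: the user never writes the program, only evidence about its behavior, and the solver (here, the loop) closes the gap.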

I will conclude by explaining how all this will enable end users to program their browsers and spreadsheets without ever seeing a line of code.

Dr. Fabien Clermidy

CEA - France
Fabien Clermidy obtained his Master’s degree in 1994 and his Ph.D. in Engineering Science in 1999, on fault-tolerant architectures. In 2000 he joined the French Atomic Energy Commission (CEA) in Saclay (near Paris) as a research engineer, where he participated in the design of a massively parallel, reconfigurable computer with fault-tolerant features. He then moved to Grenoble and became the main architect of the Network-on-Chip activity at the microelectronics laboratory (LETI) of CEA. He subsequently managed the MAGALI project in the design department of LETI; MAGALI is a complex NoC-based System-on-Chip providing a low-power prototyping platform for Software Defined Radio (SDR) and Cognitive Radio (CR) applications. He is currently the LETI project leader for P2012, a joint ST/CEA initiative for developing multi-core architectures. Fabien Clermidy has published more than 50 papers in international conferences and journals, including ISSCC, JSSC, the Symposium on VLSI Circuits, NOCS, and DATE, and is the main inventor or co-inventor of 9 patents.

Designing Network-on-Chip based multi-core heterogeneous System-on-Chip:
the MAGALI experience
As the number of cores in multi-core embedded systems keeps growing, the Network-on-Chip (NoC) has become a central element in the design of complex Systems-on-Chip (SoCs). As a communication infrastructure, the NoC must provide the required performance as well as the services needed for programming and optimizing the full platform.
In this course, we will present our experience in the design and programming of an existing multi-core NoC-based platform called MAGALI (Multi-Applicative GALS Architecture Low-power Innovative Structure). MAGALI is a digital baseband chip targeting advanced telecommunication protocols such as 3GPP-LTE, Software Defined Radio, and Cognitive Radio. The chip is built around a 3x5 mesh NoC with 22 heterogeneous computing units. It has been designed in a low-power 65nm CMOS technology and consumes only 500 mW for a 32 mm2 area.
Different topics will be addressed during the course: the NoC architecture and its Globally Asynchronous Locally Synchronous (GALS) design, the choices made on the execution model and their implementation, the low-power features with an extension to future development and finally, programming the chip.

Professor Peter Druschel

Max Planck Institute for Software Systems - Germany
Peter Druschel is the founding director of the Max Planck Institute for Software Systems and also leads the distributed systems research group. Prior to joining the MPI-SWS in August 2005, he was a Professor of Computer Science at Rice University in Houston, TX. He also spent time with the SRC group at Laboratoire d’Informatique de Paris 6 (LIP6) (May-June 2000, June 2002), the Cambridge Distributed Systems group at Microsoft Research Cambridge, UK (August- December 2000), and the PDOS group at the MIT Laboratory for Computer Science (January-June 2001).

Peter received his Ph.D. from the University of Arizona in 1994, under the direction of Larry L. Peterson. He is the recipient of an NSF CAREER Award (1995), an Alfred P. Sloan Fellowship (2000), and the 2008 Mark Weiser Award. He is on the editorial boards of the Communications of the ACM (CACM) and the ACM Transactions on Computer Systems (TOCS) and has served as program chair for SOSP, OSDI and NSDI. Together with Antony Rowstron and Frans Kaashoek, he started the IPTPS series of workshops. Peter is a member of the Academia Europaea and the German Academy of Sciences Leopoldina.

Trust and Accountability in Social Systems
Social interactions and preferences play an important role in today’s information systems. Online social networking sites like Facebook and Twitter attract millions of users who share content, opinions, sentiments, ratings and referrals; Google, Yahoo and Bing offer personalized search and advertising services that are sensitive to the interests of users; federated systems like the Internet have to respect the interests, policies, customs and laws of participating businesses, organizations, and countries; and peer-to-peer systems like BitTorrent, Sopcast and Skype are powered by voluntary resource contributions from participating users. In this lecture, we consider some of the challenges that result from these social aspects of information systems; study how social relationships can be leveraged to mitigate some of these challenges; and introduce accountability as a way to facilitate fault detection, transparency and trust.

Professor Rolf Ernst

TU Braunschweig - Germany
Rolf Ernst received a diploma in computer science and a Dr.-Ing. (with honors) in electrical engineering from the University of Erlangen-Nuremberg, Germany, in 1981 and 1987, respectively. From 1988 to 1989, he was a Member of Technical Staff in the Computer Aided Design & Test Laboratory at Bell Laboratories, Allentown, PA. Since 1990, he has been a professor of electrical engineering at the Technical University of Braunschweig, Germany, where he chairs a university institute of 65 researchers and staff. He was Head of the Department of Electrical Engineering from 1999 to 2001.
His research activities include embedded system design and design automation. The activities are currently supported by the German "Deutsche Forschungsgemeinschaft" (corresponds to the NSF), by the German BMBF, by the European Union, and by industrial contracts, such as from Intel, Thomson, EADS, Ford, Bosch, Toyota, and Volkswagen. He gave numerous invited presentations and tutorials at major international events and contributed to seminars and summer schools in the areas of hardware/software co-design, embedded system architectures, and system modeling and verification.
He chaired major international events, such as the International Conference on Computer Aided Design of VLSI (ICCAD), or the Design Automation and Test in Europe (DATE) Conference and Exhibition, and was Chair of the European Design Automation Association (EDAA), which is the main sponsor of DATE. He is a founding member of the ACM Special Interest Group on Embedded System Design (SIGBED), and was a member of the first board of directors. He is a member of the European Networks-of-Excellence ArtistDesign (real-time systems). He is an elected member (Fachkollegiat) and Deputy Spokesperson of the "Computer Science" review board of the German DFG (corresponds to NSF). He is an advisor to the German Ministry of Economics and Technology for the high-tech entrepreneurship program EXIST.

Mixed safety critical system design and analysis
In many safety-related embedded system applications, only some of the functions are safety- and time-critical. Safety standards impose strong requirements on such systems, challenging system performance and cost. With increasing function integration, this challenge affects networked embedded systems as well as MPSoCs. New solutions are required for hardware-software architectures and for tools and methods that effectively support such mixed-critical systems. The lecture will start with an introduction to mixed-criticality system design and the related safety standards, using IEC 61508 as a prominent example. Focusing on systems which are both time- and safety-critical, such as in automotive and avionics, the lecture will then elaborate on the impact of fault-tolerance and fail-safe mechanisms on real-time properties. Formal models and analysis methods will be presented that support the efficient design of mixed-critical systems.

Professor Babak Falsafi

EPFL - Switzerland
Babak Falsafi is a Professor in the School of Computer and Communication Sciences at EPFL, and an Adjunct Professor of Electrical & Computer Engineering and Computer Science at Carnegie Mellon. He is the founding director of the EcoCloud research center at EPFL innovating future energy-efficient and environmentally friendly cloud technologies. He also directs the Parallel Systems Architecture Lab (PARSA) at EPFL, and led the Microarchitecture thrust in the FCRP Center for Circuit and System Solutions (C2S2) from 2006 to 2010. His research targets architectural support for parallel programming, resilient systems, architectures to break the memory wall, and analytic and simulation tools for computer system performance evaluation. He is a recipient of an NSF CAREER award in 2000, IBM Faculty Partnership Awards in 2001, 2003 and 2004, and an Alfred P. Sloan Research Fellowship in 2004. He is a senior member of ACM and IEEE.

Towards Dark Silicon and its Implication on Server Design
Technology forecasts indicate that device scaling will continue well into the next decade. Unfortunately, it is becoming extremely difficult to harness performance out of such an abundance of transistors due to a number of technological, circuit, architectural, methodological, and programming challenges. In this lecture, I will argue that the ultimate emerging showstopper is power, even for workloads abounding in parallelism. Supply voltage scaling as a means to maintain a constant chip power envelope with an increase in transistor numbers has hit diminishing returns, requiring drastic measures to cut power in order to continue riding Moore’s law. This trend will likely lead design towards “dark silicon”, where only a fraction of a chip can be powered up at a time.
I will present results backing this argument based on validated models for future server chips and parameters extracted from real server workloads. In the rest of the lecture, I will use these results to project future research directions for server hardware and software.
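A back-of-the-envelope model makes the argument tangible: once supply voltage stops scaling with feature size, switched power per transistor falls more slowly than transistor count grows, so a fixed power envelope lights up a shrinking fraction of the die. The scaling factors below are illustrative assumptions, not measured data.

```python
# Back-of-the-envelope dark-silicon estimate (illustrative numbers).
# Per process generation: device area halves, capacitance and voltage
# shrink by the given factors, frequency scales by f_scale. Power
# density per unit area grows as (C * V^2 * f) / area.

def powered_fraction(generations, area_scale=0.5, cap_scale=0.7,
                     v_scale=0.95, f_scale=1.0):
    """Fraction of the chip that fits in the original power budget
    after `generations` process nodes, under the given scaling."""
    frac = 1.0
    for _ in range(generations):
        density_growth = (cap_scale * v_scale**2 * f_scale) / area_scale
        frac /= density_growth
    return min(frac, 1.0)

# post-Dennard defaults: voltage nearly flat across four nodes
frac = powered_fraction(4)
```

For contrast, plugging in classical Dennard factors (voltage scaling with feature size, frequency rising) keeps power density roughly flat and the whole chip powered, which is exactly the regime whose end the lecture describes.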

Professor Martti Forsell

VTT - Finland
Martti Forsell is a Chief Research Scientist in computer architecture and parallel computing at VTT, Oulu, Finland, as well as an Adjunct Professor in the Department of Electrical and Information Engineering at the University of Oulu. He received M.Sc., Ph.Lic., and Ph.D. degrees in computer science from the University of Joensuu, Finland, in 1991, 1994, and 1997, respectively. Prior to joining VTT, he was a lecturer, researcher, and acting professor in the Department of Computer Science, University of Joensuu. Dr. Forsell has a long background in parallel and sequential computer architecture and parallel computing research. He is the inventor of the first scalable high-performance CMP architecture armed with an easy-to-use general-purpose parallel application development scheme (consisting of a computational model, programming language, experimental optimizing compiler, and simulation tools) exploiting the PRAM model, as well as a number of other TLP and ILP architectures, architectural techniques, and development methodologies and tools for general-purpose computing. On the application-specific front, he was the main architect of the Silicon Hive CSP 2500 processor and programming methodology aimed at low-power digital front-end radio signal processing. He is a co-organizer of the Highly Parallel Processing on a Chip (HPPC) workshop series. His current research interests are processor and computer architectures, chip multiprocessors, networks-on-chip, models of parallel computing, functionality mapping techniques, parallel languages, compilers, simulators, and performance, silicon-area, and power-consumption modeling. He has published 85 scientific publications, holds one patent on processor architectures and programming methodology, and has participated in various research and development projects in cooperation with academia and industry.
He was recently named leader of a large VTT-funded project, REPLICA, which aims to remove the performance and programmability limitations of chip multiprocessor architectures with the help of a strong PRAM model of computation.

Parallelism, programmability and architectural support for them on multi-core machines
The advent of multi-core systems (CMPs, MP-SoCs, NoCs) has brought an old but very challenging problem back into the limelight of embedded and general-purpose system design: how can these parallel systems be programmed so that application development is as simple as for single-core systems, while still achieving high utilization and close-to-linear speedup over sequential solutions for all kinds of computational problems? According to our measurements, and to a wide-spread consensus among computer architects and software developers, current solutions making use of asynchronous shared memory and message passing satisfy neither the programmability nor the performance requirements unless the set of applicable computational problems is severely limited. Since virtually all current solutions define the same computability as the strongest theoretical models, i.e. can simulate each other at some, not necessarily linearly slowed, execution rate, the problem is mainly architectural: current multi-core machines are not efficient enough at executing certain important patterns of parallel processing.
In this lecture we will look at the nature of parallelism in computation and at methods to capture intrinsic parallelism while ignoring most architecture-dependent implementation details. To give a more practical insight, we will explain the key problems of current multi-core solutions and their effect on the performance and programmability of parallel computational problems, both at the methodological and the architectural level, using qualitative examples and our quantitative performance models. We will also look at architectures that could avoid these problems and therefore provide significantly simpler parallel programmability without sacrificing performance for a very wide set of computational problems. Simplified application examples are given.
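The kind of pattern a strong PRAM model makes easy can be seen in a synchronous parallel prefix sum (the Hillis-Steele scheme), here simulated sequentially as an illustration: in each step every "processor" reads before any processor writes, which is exactly the lockstep synchrony the model guarantees.

```python
# PRAM-style synchronous parallel prefix sum (Hillis-Steele scheme),
# simulated sequentially. Each while-iteration is one synchronous PRAM
# step: all reads complete before any writes take effect.

def pram_prefix_sum(a):
    n = len(a)
    x = list(a)
    d = 1
    while d < n:
        # phase 1: every processor i reads x[i - d] (0 if out of range)
        reads = [x[i - d] if i >= d else 0 for i in range(n)]
        # phase 2: every processor i writes x[i] + its read value
        x = [x[i] + reads[i] for i in range(n)]
        d *= 2
    return x  # inclusive prefix sums after O(log n) steps

out = pram_prefix_sum([3, 1, 4, 1, 5])
```

On an asynchronous shared-memory machine the read/write separation above needs explicit barriers and costs heavily in synchronization, which is one concrete face of the efficiency gap the lecture discusses.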

Professor Kim Larsen

University of Aalborg - Denmark
Kim Guldstrand Larsen (1957) is Professor in Computer Science at Aalborg University (1993- ), and has been Industrial Professor at Twente University, The Netherlands (2000-2007). He is currently director of CISS, the Centre for Embedded Software Systems, a national centre of excellence within ICT bridging between industry and research (2002- ). He is the leader of the Modeling and Validation Cluster within the ArtistDesign European Network of Excellence, and is director of the DaNES project (Danish Network for Intelligent Embedded Systems).

His research interests include modeling, verification, and performance analysis of real-time and embedded systems, with applications and contributions to concurrency theory and model checking. In particular, since 1995 he has been principal investigator of the tool UPPAAL and co-founder of the company UP4ALL International. He has published more than 150 publications in international journals and conferences and has co-authored 6 software tools.

He is or has been editorial board member of the journals: Formal Methods in System Design, Theoretical Computer Science and Nordic Journal of Computing. He is a member of the steering committee for the ETAPS conference series, the CONCUR conference series, the TACAS conference series and the FORMATS workshop series. He is member of the Royal Danish Academy of Sciences and Letters, Copenhagen, and is member of the Danish Academy of Technical Sciences.

Timing and Performance Analysis of Embedded Systems
In this talk we will show how WCET and schedulability problems for single- and multi-processor embedded applications may be modelled using the formalism of timed automata and efficiently and accurately analysed using the model checking tool UPPAAL. Based on a stochastic semantic interpretation of timed automata, a statistical model checking engine has been implemented in UPPAAL, allowing more detailed performance analysis in terms of expected response times, processor utilization, blocking times, etc.
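The spirit of statistical model checking can be shown in miniature: rather than exhaustively exploring the timed model, one samples random runs and estimates the probability that a response-time bound holds. The toy task model below is hypothetical and far simpler than a stochastic timed automaton, but the sampling-and-estimating loop is the same idea.

```python
# Statistical model checking in miniature: sample random runs of a
# stochastic task model and estimate P(response time <= deadline).
# The task model here is invented purely for illustration.

import random

def run_once(rng, exec_lo=2.0, exec_hi=6.0, preempt_p=0.3, preempt_cost=3.0):
    """One random run: execution time uniform in [exec_lo, exec_hi],
    plus a possible fixed preemption delay."""
    t = rng.uniform(exec_lo, exec_hi)
    if rng.random() < preempt_p:
        t += preempt_cost
    return t

def estimate_p_deadline(deadline, runs=20000, seed=42):
    rng = random.Random(seed)
    hits = sum(1 for _ in range(runs) if run_once(rng) <= deadline)
    return hits / runs

p = estimate_p_deadline(deadline=6.0)
```

For this model the exact answer is 0.7 + 0.3 * 0.25 = 0.775, and the Monte Carlo estimate converges to it; an SMC engine additionally controls the number of runs to meet a requested confidence level.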

Professor Yunhao Liu

Tsinghua University/HKUST - China
Yunhao Liu received his BS degree from the Automation Department of Tsinghua University and an MA degree from Beijing Foreign Studies University, China. He received MS and Ph.D. degrees in Computer Science and Engineering from Michigan State University, USA. Yunhao serves as Associate Editor for the IEEE Transactions on Mobile Computing and the IEEE Transactions on Parallel and Distributed Systems. He is a senior member of the IEEE Computer Society and a member of the ACM. Yunhao is currently an ACM Distinguished Speaker and Vice Chair of the ACM China Council. He holds the Tsinghua EMC Chair Professorship, and he is also a faculty member in the Computer Science and Engineering Department, Hong Kong University of Science and Technology.

GreenOrbs: Lessons Learned from
Extremely Large Scale Sensor Network Deployment
"The world has just ten years to bring greenhouse gas emissions under control before the damage they cause becomes irreversible." This famous prediction, raised recently by climate scientists and environmentalists, reflects the increasing attention paid over the past decade to global climate change and environmental pollution. On the other hand, the forest, regarded as the earth’s lung, is a critical component of the global carbon cycle. It is able to absorb 10%-30% of the CO2 from industrial emissions. Moreover, it has a large capacity for water conservation, preventing water and soil loss and hence reducing the chance of natural disasters such as mud-rock flows and floods. Forestry applications usually require long-term, large-scale, continuous, and synchronized surveillance of huge measurement areas with diverse creatures and complex terrain. State-of-the-art forestry techniques, however, support only small-scale, discontinuous, asynchronous, and coarse-grained measurements, which at the same time incur large costs in human resources and equipment. WSNs have great potential to resolve these challenges in forestry. Under such circumstances, GreenOrbs was launched. The information GreenOrbs offers can be used as evidence, reference, and a scientific tool for human beings in the battle against global climate change and environmental pollution.
The prototype system is deployed in the campus woodland of Zhejiang Forestry University. The deployment area is around 40,000 square meters. The deployment started in May 2009 with 50 nodes and was expanded to 330 nodes in November 2009; the system reached 400 nodes in April 2010 and 500 in August 2010. The duty cycle of the nodes is set at 8%, and the network diameter is 12 hops. The sensor data are published online via the official GreenOrbs website. The Tianmu Mountain deployment includes 200+ nodes and has been in continuous operation since August 2009. Its deployment area is around 200,000 square meters, the duty cycle of the nodes is set at 5%, and the network diameter is 20 hops.

We learned many lessons during the deployment of GreenOrbs. The experiment has resulted in several publications, including at ACM SenSys 2009 and 2010, ACM SIGMETRICS 2010, ICNP 2010, and INFOCOM 2010 and 2011. In this discussion, we will focus on several open issues for extremely large-scale deployment of sensor networks, including routing, diagnosis, localization, and link quality.

Professor Alberto Sangiovanni-Vincentelli

UC Berkeley - USA
Alberto Sangiovanni Vincentelli holds the Edgar L. and Harold H. Buttner Chair of Electrical Engineering and Computer Sciences at the University of California at Berkeley. He has been on the Faculty since 1976. He obtained an electrical engineering and computer science degree ("Dottore in Ingegneria") summa cum laude from the Politecnico di Milano, Milano, Italy in 1971.
He was a co-founder of Cadence and Synopsys, the two leading companies in the area of Electronic Design Automation. He is the Chief Technology Adviser of Cadence, a member of the Board of Directors of Cadence and the Chair of its Technology Committee, and a board member of UPEK (a company he helped spin off from ST Microelectronics), Sonics, and Accent (an ST Microelectronics-Cadence joint venture he helped found). He was a member of the HP Strategic Technology Advisory Board, and is a member of the Science and Technology Advisory Board of General Motors and of the Scientific Councils of the Tronchetti Provera Foundation and the Snaidero Foundation. He is the founder and Scientific Director of the Project on Advanced Research on Architectures and Design of Electronic Systems (PARADES), a European Group of Economic Interest supported by Cadence, Magneti-Marelli and ST Microelectronics. He is a member of the High-Level Group, the Steering Committee, the Governing Board, and the Public Authorities Board of the EU ARTEMIS Joint Technology Initiative. He is a member of the Scientific Council of the Italian National Research Council (CNR).
In 1981, he received the Distinguished Teaching Award of the University of California. He received the worldwide 1995 Graduate Teaching Award of the IEEE (a Technical Field Award for “inspirational teaching of graduate students”). In 2002, he was the recipient of the Aristotle Award of the Semiconductor Research Corporation. He has received numerous research awards, including the Guillemin-Cauer Award (1982-1983) and the Darlington Award (1987-1988) of the IEEE for the best paper bridging theory and applications, two awards for the best paper published in the IEEE Transactions on CAS and CAD, five best paper awards and one best presentation award at the Design Automation Conference, and other best paper awards at the Real-Time Systems Symposium and the VLSI Conference. In 2001, he was given the Kaufman Award of the Electronic Design Automation Council for “pioneering contributions to EDA”. In 2008, he was awarded the IEEE/RSE Wolfson James Clerk Maxwell Medal “for groundbreaking contributions that have had an exceptional impact on the development of electronics and electrical engineering or related fields” with the following citation: “For pioneering innovation and leadership in electronic design automation that have enabled the design of modern electronics systems and their industrial implementation”. In 2009, he received the first ACM/IEEE A. Richard Newton Technical Impact Award in Electronic Design Automation, which honors an outstanding technical contribution within the scope of electronic design automation. In 2009, he was awarded an honorary Doctorate by the University of Aalborg in Denmark.
He is the author of over 850 papers, 15 books, and 3 patents in the areas of design tools and methodologies, large-scale systems, embedded systems, hybrid systems, and innovation.

Mapping abstract models to architectures:
automatic synthesis across layers of abstraction
Compositional techniques for system design are evolving at a rapid pace. The challenges these methods face in becoming standard practice will be reviewed. The role of Platform-Based Design will be outlined, and the importance of design flows that leverage the concept of refinement from higher levels of abstraction to lower ones will be underlined. In this context, the separation of concerns between functionality and architecture is a technique used to facilitate design reuse at all design levels. This separation of concerns and the successive refinement of the design by mapping functionality onto architecture are the core concepts that will be introduced. Mapping optimizes a set of objective functions while satisfying constraints on the mapped design, and can therefore be seen as a synthesis process. Formalized design methods gain traction in the designer community when they facilitate automating the synthesis process from specification to implementation, as witnessed by the RTL-to-layout ASIC flow. While logic synthesis and layout synthesis, which can be seen as special cases of optimized mapping, have been widely researched and many excellent algorithms are available, the mapping problem at the system level is typically solved in an ad-hoc and implicit manner based on designer experience.
In this presentation, we will also introduce a formal mapping procedure that enables the development of automatic tools. The procedure is based on a two-stage process: (1) determining a common semantic domain between the function and architecture models, and selecting an appropriate set of primitives to fix the abstraction level, which together constitute a common modeling domain (CMD); (2) solving an optimal covering problem in which the function model is covered by a minimum-cost set of architectural components. This process is general in the sense that it can be applied at all levels of abstraction and to a variety of system-level design problems. We demonstrate the use of the formal approach on optimal mapping problems in two widely different domains, which feature different models of computation as well as different implementation platforms.
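To make stage (2) concrete, the covering step can be phrased as a small combinatorial optimization: every task in the function model must be implemented by some architectural component, and the chosen set of components should have minimum total cost. The task names, components, costs and capabilities below are purely hypothetical, and the exhaustive search stands in for the ILP or heuristic solvers a real mapping tool would use; it is a sketch of the problem shape, not of any actual flow.

```python
from itertools import chain, combinations

# Hypothetical function model: the set of tasks to be implemented.
tasks = {"fft", "filter", "control", "io"}

# Hypothetical architecture library: name -> (cost, tasks it can implement).
components = {
    "dsp":   (5, {"fft", "filter"}),
    "cpu":   (4, {"control", "io", "filter"}),
    "accel": (6, {"fft"}),
    "mcu":   (2, {"io"}),
}

def min_cost_cover(tasks, components):
    """Exhaustively search all component subsets and return the
    (cost, subset) pair of the cheapest subset whose combined
    capabilities cover every task."""
    names = list(components)
    best = None
    for subset in chain.from_iterable(
            combinations(names, r) for r in range(1, len(names) + 1)):
        covered = set().union(*(components[n][1] for n in subset))
        if tasks <= covered:
            cost = sum(components[n][0] for n in subset)
            if best is None or cost < best[0]:
                best = (cost, subset)
    return best

cost, chosen = min_cost_cover(tasks, components)
print(cost, sorted(chosen))  # cheapest architecture that covers all tasks
```

Here only "cpu" can implement "control", so it is forced into every feasible cover, and the solver then picks the cheaper of the two "fft"-capable components; real formulations add constraints such as timing, power and communication cost on top of this basic structure.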

Professor Janos Sztipanovits

Vanderbilt University - USA
Dr. Janos Sztipanovits is currently the E. Bronson Ingram Distinguished Professor of Engineering at Vanderbilt University and the founding director of the Institute for Software Integrated Systems (ISIS). His research lies at the intersection of systems science and computer science and engineering. His current research interests include the foundations and applications of Model-Integrated Computing for the design of Cyber-Physical Systems. His other research contributions include structurally adaptive systems, autonomous systems, design space exploration, and systems-security co-design technology. He was the founding chair of the ACM Special Interest Group on Embedded Software (SIGBED). He is a member of the national steering group on CPS and was general chair of the 1st International Conference on Cyber-Physical Systems in 2010. Dr. Sztipanovits was elected a Fellow of the IEEE in 2000 and an external member of the Hungarian Academy of Sciences in 2010. He won the National Prize in Hungary in 1985 and the Golden Ring of the Republic in 1982. He graduated summa cum laude from the Technical University of Budapest in 1970 and received his doctorate from the Hungarian Academy of Sciences in 1980.

Domain Specific Modeling Languages for Cyber Physical Systems:
Where are Semantics Coming From?
Cyber-Physical Systems (CPS) are engineered systems comprising synergistically interacting physical and computational components. In CPS, the engineering of physical systems is based on models that typically have a computational manifestation (i.e. an executable form in some computational sense). The engineering of software using model-based techniques is now also an established part of overall software engineering practice. However, little is being done toward an integrated approach in which both the ‘physical artifacts’ and the ‘software artifacts’ are engineered from a set of coupled models. A solution requires modeling languages with precisely defined semantics. These languages need to be broad enough to capture both physical and computational domains, and deep enough to model their complex interrelationships.
First, the talk will review methods for defining formal semantics for both computational and physical processes. Next, it will cover the relationship between structural and behavioral semantics using illustrative examples. The practical use of these semantic foundations will be demonstrated by composing domain-specific modeling languages. Finally, case studies will be presented on modeling heterogeneous CPS and on using the models to generate both virtual prototypes via simulation and embedded software for the final product.

Professor Dr. Lothar Thiele

ETH Zurich, Switzerland
After completing his Habilitation thesis at the Institute of Network Theory and Circuit Design of the Technical University of Munich, Lothar Thiele joined the Information Systems Laboratory at Stanford University in 1987.

In 1988, he took up the chair of microelectronics at the Faculty of Engineering of the University of Saarland, Saarbrücken, Germany. He joined ETH Zurich, Switzerland, as a full Professor of Computer Engineering in 1994, where he leads the Computer Engineering and Networks Laboratory.

His research interests include models, methods and software tools for the design of embedded systems, embedded software and bioinspired optimization techniques.

In 1986 he received the "Dissertation Award" of the Technical University of Munich; in 1987, the "Outstanding Young Author Award" of the IEEE Circuits and Systems Society; in 1988, the Browder J. Thompson Memorial Award of the IEEE; and in 2000-2001, the "IBM Faculty Partnership Award". In 2004, he joined the German Academy of Natural Scientists Leopoldina. In 2005, he was the recipient of the Honorary Blaise Pascal Chair of Leiden University, The Netherlands.

Temperature-aware Scheduling
Power density has been continuously increasing in modern processors, leading to high on-chip temperatures. A system can fail if its operating temperature exceeds a certain threshold, resulting in low reliability and even chip burnout.
Recent years have produced many results on thermal management, including (1) thermal-constrained scheduling, which maximizes performance or determines the schedulability of real-time systems under given temperature constraints, (2) peak temperature reduction subject to performance constraints, and (3) thermal control, which applies control theory for system adaptation. The presentation will cover challenges, problems and approaches to real-time scheduling under temperature constraints for single- as well as multi-processors.
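As a toy illustration of category (1), thermal-constrained scheduling, one can discretize a first-order thermal model, T[k+1] = T[k] + dt·(−a·(T[k] − T_amb) + b·P), and apply a reactive policy that idles the processor whenever running for one more step would push the temperature past the threshold. All constants below are invented for illustration; real thermal management relies on calibrated RC thermal models and formal schedulability analysis, not on this sketch.

```python
def simulate(horizon, t_amb=35.0, t_max=80.0, a=0.1, b=0.8,
             p_active=50.0, p_idle=5.0, dt=0.1):
    """Simulate a processor under a reactive thermal policy:
    execute a unit of work whenever the predicted next temperature
    stays below t_max, otherwise idle so the chip cools toward ambient.
    Returns (units of work completed, temperature trace)."""
    temp = t_amb
    work_done = 0
    trace = []
    for _ in range(horizon):
        # Predict the temperature if we run at full power for one step.
        next_if_run = temp + dt * (-a * (temp - t_amb) + b * p_active)
        if next_if_run <= t_max:
            power, work_done = p_active, work_done + 1
        else:
            power = p_idle  # throttle: dissipate only idle power
        temp = temp + dt * (-a * (temp - t_amb) + b * power)
        trace.append(temp)
    return work_done, trace

work, trace = simulate(1000)
print(work, max(trace))  # throughput achieved, peak temperature reached
```

With these constants the unconstrained steady-state temperature lies far above t_max, so the threshold binds and the policy settles into a run/idle alternation; the throughput loss relative to always running is exactly the price of the temperature constraint, which is the trade-off thermal-constrained schedulability analysis quantifies.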



The programme is defined by ArtistDesign’s Strategic Management Board.

(c) Artist Consortium, All Rights Reserved - 2006, 2007, 2008, 2009
