Many embedded system applications are implemented today using distributed architectures, consisting of several hardware nodes interconnected in a network. Each hardware node can consist of a processor, memory, and interfaces to I/O and to the network. The networks use specialized communication protocols, depending on the application area; in automotive electronics, for example, protocols such as CAN, FlexRay and TTP are used. One important trend today is the integration of multiple cores on the same chip, so that embedded systems are distributed not only across multiple boards or chips, but also within a single chip.

As the complexity of the functionality has increased, the way it is distributed has changed. Taking automotive applications as an example, initially each function ran on a dedicated hardware node, allowing the system integrators to purchase nodes implementing the required functions from different vendors and to integrate them into their system. Currently, the number of such nodes has reached more than 100 in a high-end car, which can lead to large cost and performance penalties. Moreover, with the advent of poly-core (i.e., high-cardinality multi-core) single-chip platforms, the effective number of processing nodes tends to grow in a “fractal” way, and future distributed systems with thousands of processing nodes are not a far-away prospect.

Not only has the number of nodes increased, but the resulting solutions based on dedicated hardware nodes do not use the available resources efficiently, which would help to reduce costs. For example, it should be possible to move functionality from one node to another node where enough resources (e.g., memory) are available. Moreover, emerging functionality, such as brake-by-wire, is inherently distributed, and achieving an efficient fault-tolerant implementation is very difficult in the current setting.

Moreover, as communication becomes a critical component, new protocols are needed that can provide the required bandwidth and predictability. The trend is towards hybrid communication protocols, such as the FlexRay protocol, which allows the bus to be shared by event-driven and time-driven messages. Time-triggered protocols have the advantage of simplicity and predictability, while event-triggered protocols are flexible and have low cost. A hybrid communication protocol like FlexRay offers some of the advantages of both worlds. The need for scalable and predictable communication is not only a characteristic of automotive designs; multimedia and signal-processing systems are also increasingly communication dominated.
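
To make the idea of a hybrid cycle concrete, the following C sketch simulates one simplified communication round with a static, time-triggered segment (fixed slot owners) followed by a dynamic, event-triggered segment arbitrated by frame priority. The slot counts, data structures and arbitration rule are illustrative assumptions and do not reproduce the actual FlexRay specification.

    /* Minimal sketch of a hybrid (FlexRay-like) communication cycle.
     * Slot counts, frame fields and the arbitration rule are illustrative
     * assumptions, not the actual FlexRay specification. */
    #include <stdio.h>

    #define STATIC_SLOTS   4   /* time-triggered part: one fixed sender per slot */
    #define DYNAMIC_SLOTS  3   /* event-triggered part: arbitrated by frame ID   */

    typedef struct {
        int frame_id;          /* lower ID = higher priority in the dynamic part */
        int pending;           /* 1 if the node has a message queued             */
    } node_t;

    int main(void) {
        /* Static slot table: slot i is statically owned by static_owner[i]. */
        int static_owner[STATIC_SLOTS] = { 0, 1, 2, 3 };
        /* Nodes competing for the dynamic segment. */
        node_t nodes[] = { { 10, 1 }, { 7, 1 }, { 12, 0 }, { 9, 1 } };
        int n_nodes = sizeof nodes / sizeof nodes[0];

        /* Static segment: deterministic, independent of pending traffic. */
        for (int s = 0; s < STATIC_SLOTS; s++)
            printf("static slot %d -> node %d\n", s, static_owner[s]);

        /* Dynamic segment: in each slot the pending frame with the lowest
         * ID wins; slots with no pending traffic simply elapse. */
        for (int s = 0; s < DYNAMIC_SLOTS; s++) {
            int winner = -1;
            for (int i = 0; i < n_nodes; i++)
                if (nodes[i].pending &&
                    (winner < 0 || nodes[i].frame_id < nodes[winner].frame_id))
                    winner = i;
            if (winner >= 0) {
                printf("dynamic slot %d -> frame %d\n", s, nodes[winner].frame_id);
                nodes[winner].pending = 0;
            } else {
                printf("dynamic slot %d -> idle\n", s);
            }
        }
        return 0;
    }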

While computation and communication are clear targets, a consensus has been growing on the criticality of the memory architecture and the related memory management software challenges. Even predictable and efficient processors and communication fabrics are not sufficient to provide a predictable and efficient application-level view of the platform if they are not adequately supported by the memory system.

The trend towards distributed architectures introduces a new challenge. Much, if not most, traditional software is sequential in nature. A major reason for this is that most modern programming languages are sequential and do not have adequate language-level concurrency support. Traditionally, the timing performance of software increased as a result of the increase in clock speed of the individual processing cores. However, this free lunch is over, because clock speeds have hardly increased since 2003. Multi-core and hyperthreading techniques are now used to boost platform performance. Modern compilers based on sequential programming languages are not able to sufficiently utilize these additional computational resources. New languages, techniques and tools are required that seamlessly match modern execution platforms, for instance by adequate application-level concurrency support. Although a number of potential techniques already exist, getting more momentum in these directions is crucial to deal with future complexity and performance requirements.
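
As an illustration of explicit application-level concurrency, the following C sketch splits a data-parallel workload across POSIX threads and combines the partial results (built with the platform's pthread support, e.g. gcc -pthread). POSIX threads are only one of many possible mechanisms, and the workload, summing an array, is a placeholder chosen for brevity.

    /* Minimal sketch: splitting a data-parallel workload across cores with
     * POSIX threads. The point is that the parallelism is expressed
     * explicitly by the programmer rather than recovered by the compiler. */
    #include <pthread.h>
    #include <stdio.h>

    #define N_THREADS 4
    #define N_ITEMS   (1 << 20)

    static double data[N_ITEMS];

    typedef struct { int begin, end; double partial; } chunk_t;

    static void *sum_chunk(void *arg) {
        chunk_t *c = arg;
        c->partial = 0.0;
        for (int i = c->begin; i < c->end; i++)
            c->partial += data[i];
        return NULL;
    }

    int main(void) {
        pthread_t tid[N_THREADS];
        chunk_t chunk[N_THREADS];

        for (int i = 0; i < N_ITEMS; i++)
            data[i] = 1.0;

        /* Fork: each thread sums a disjoint slice of the array. */
        for (int t = 0; t < N_THREADS; t++) {
            chunk[t].begin = t * (N_ITEMS / N_THREADS);
            chunk[t].end   = (t + 1) * (N_ITEMS / N_THREADS);
            pthread_create(&tid[t], NULL, sum_chunk, &chunk[t]);
        }

        /* Join and combine the partial results. */
        double total = 0.0;
        for (int t = 0; t < N_THREADS; t++) {
            pthread_join(tid[t], NULL);
            total += chunk[t].partial;
        }
        printf("total = %.0f\n", total);
        return 0;
    }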

With growing embedded system complexity, more and more parts of a system are reused or supplied, often from external sources. These parts range from single hardware components or software processes to hardware-software (HW-SW) subsystems. They must cooperate and share resources with newly developed parts such that the design constraints are met. There are many software interface standards, such as CORBA, COM or DCOM to name just a few, that are specifically designed for this task. Nevertheless, in practice software integration is not a solved problem but a growing one. This is especially true when performance and energy efficiency can be achieved only if a sufficient degree of parallelism in application execution is attained.

New design optimization tools are needed to handle the increasing complexity of such systems and their competing requirements in terms of performance, reliability, low power consumption, cost, time-to-market, etc. As the complexity of the systems continues to increase, development times lengthen dramatically and manufacturing costs become prohibitively high. To cope with this complexity, it is necessary to reuse as much as possible at all levels of the design process and to work at higher and higher abstraction levels, not only for specifying the overall system functionality, but also for supporting communication among a number of nodes executing in parallel.

One of the most significant achievements in the cultural landscape of low-power embedded systems design is the consensus on the strategic role of power management technology. It is now widely acknowledged that resource usage in embedded system platforms depends on application workload characteristics, desired quality of service and environmental conditions. System workload is highly non-stationary due to the heterogeneous nature of information content. Quality of service depends on user requirements, which may change over time. In addition, both can be affected by environmental conditions such as network congestion and wireless link quality.

Power management is viewed as a strategic technology for both integrated and distributed embedded systems. In the first area, the trend is toward supporting power management in multi-core architectures with a large number of power-manageable resources. Silicon technology is rapidly evolving to provide an increased level of control of on-chip power resources. Technologies such as multiple power distribution regions, multiple power-gating circuits for partial shutdown, and multiple variable-voltage supply circuits are now commonplace. The challenge now is how to allocate and distribute workload in an energy-efficient fashion over multiple cores executing in parallel. Another open issue is how to cope with the increasing amount of leakage in nanometer technologies, which tends to over-emphasize the cost of inactive logic unless it can be put into a low-power idle state (which in many cases implies storage losses and a high wakeup cost).
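
One simple instance of such a power-management decision is the classical break-even rule for power gating: an idle block should only be gated if the predicted idle interval is long enough for the leakage savings to outweigh the energy spent restoring state on wakeup. The C sketch below illustrates this rule; the data structure and all numeric values are illustrative assumptions.

    /* Minimal sketch of a shutdown-decision rule for a power-manageable
     * resource: power-gate an idle block only if the predicted idle time
     * exceeds the break-even time implied by leakage savings versus the
     * wakeup energy cost. All numbers are illustrative. */
    #include <stdio.h>

    typedef struct {
        double leak_active_w;   /* leakage while powered but idle [W]    */
        double leak_gated_w;    /* residual leakage when power-gated [W] */
        double wakeup_energy_j; /* energy to restore state on wakeup [J] */
    } block_t;

    /* Break-even idle time: beyond this, gating saves net energy. */
    static double break_even_s(const block_t *b) {
        return b->wakeup_energy_j / (b->leak_active_w - b->leak_gated_w);
    }

    static int should_gate(const block_t *b, double predicted_idle_s) {
        return predicted_idle_s > break_even_s(b);
    }

    int main(void) {
        block_t accel = { .leak_active_w   = 0.050,
                          .leak_gated_w    = 0.002,
                          .wakeup_energy_j = 0.004 };
        double idle_s = 0.150;  /* predicted idle interval */

        printf("break-even = %.3f s, idle = %.3f s -> %s\n",
               break_even_s(&accel), idle_s,
               should_gate(&accel, idle_s) ? "power-gate" : "stay on");
        return 0;
    }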

In the area of distributed low-power systems, wireless sensor networks are the key technology drivers, given their tightly power-constrained nature. One important trend in this area is toward “battery-free” operation. This can be achieved through energy storage devices (e.g., super-capacitors) coupled with additional devices capable of harvesting energy from environmental sources (e.g., solar energy, vibrational energy). Battery-free operation requires carefully balancing harvested and stored energy against the energy consumed by the system, in a compromise between quality of service and sustainable lifetime.
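
The following C sketch illustrates this balance: over a planning horizon, the node selects the largest duty cycle (a proxy for quality of service) whose energy consumption stays within the expected harvested energy plus the usable stored energy. The power model and all parameter values are illustrative assumptions.

    /* Minimal sketch of energy-neutral duty-cycle selection for a
     * harvesting-powered sensor node: pick the largest duty cycle whose
     * consumption over the planning horizon fits within harvested plus
     * stored energy. All parameters are illustrative assumptions. */
    #include <stdio.h>

    int main(void) {
        double horizon_s     = 3600.0;  /* planning horizon [s]               */
        double p_active_w    = 0.060;   /* power while sensing/transmitting   */
        double p_sleep_w     = 0.0005;  /* power while sleeping               */
        double harvest_avg_w = 0.010;   /* expected harvested power           */
        double stored_j      = 20.0;    /* usable energy in super-capacitor   */

        double budget_j = harvest_avg_w * horizon_s + stored_j;

        /* Search duty cycles from high QoS downwards until the budget fits. */
        for (double duty = 1.0; duty >= 0.0; duty -= 0.01) {
            double consumed_j =
                (duty * p_active_w + (1.0 - duty) * p_sleep_w) * horizon_s;
            if (consumed_j <= budget_j) {
                printf("sustainable duty cycle: %.0f%% (uses %.1f J of %.1f J)\n",
                       duty * 100.0, consumed_j, budget_j);
                break;
            }
        }
        return 0;
    }
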
The concept of the multiprocessor system-on-chip (MPSoC) has been discussed for some years, but the area has recently gained much more interest. In terms of industrial support, an increasing number of companies are active in the design of corresponding architectures, as well as introducing the first products to the market. Whereas there have been major breakthroughs in terms of new hardware architectures, the corresponding programming environments are still in their infancy. In particular, ease of application specification, scalability, predictability of the overall system, parallelization, low-power operation, efficiency and support of legacy code are just some of the main problems the community is facing.

A major industrial concern that comes with the integration of previously independent functions onto a single multi-core or multiprocessor system-on-chip is the resulting reliability of the individual functions. Depending on the criticality of a function, OEMs, and indirectly their suppliers, deliver guarantees to lawmakers on the overall failure rate of the system or component, with higher cost associated with the certification of higher levels of reliability. Integrating functions with different reliability levels is then not cost efficient if the resulting system needs to be verified for the highest level of reliability. A major research direction is therefore the investigation of methods that allow the co-integration of such functions. The researchers in ArtistDesign investigate countermeasures to this problem, for example by orthogonalization of the shared memory (e.g. Linköping University) or by conservative bounds on the use of shared resources (e.g. TU Braunschweig).
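
As a rough illustration of the second idea, the C sketch below enforces a conservative per-period budget of shared-resource accesses for a non-critical function, so that the worst-case interference experienced by critical functions is bounded by the sum of the configured budgets. The data structures and policy are hypothetical and are not the actual techniques developed by the ArtistDesign partners.

    /* Illustrative sketch: enforcing a conservative per-period budget of
     * shared-resource (e.g. bus or memory) accesses for a co-integrated
     * function, so that its interference on critical functions is bounded. */
    #include <stdio.h>

    typedef struct {
        const char *name;
        int budget;     /* allowed shared-resource accesses per period */
        int used;       /* accesses issued in the current period       */
    } function_t;

    /* Returns 1 if the access is admitted, 0 if it must be deferred. */
    static int admit_access(function_t *f) {
        if (f->used < f->budget) {
            f->used++;
            return 1;
        }
        return 0;   /* over budget: defer to the next period */
    }

    int main(void) {
        function_t infotainment = { "infotainment", 3, 0 };

        for (int i = 0; i < 5; i++)
            printf("%s access %d: %s\n", infotainment.name, i,
                   admit_access(&infotainment) ? "admitted" : "deferred");

        /* At the start of each new period, the budget is replenished. */
        infotainment.used = 0;
        return 0;
    }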

A new and emerging research field related to embedded systems is that of design optimization for digital microfluidic biochips. Microfluidic biochips (also referred to as lab-on-a-chip) represent a promising alternative to conventional biochemical laboratories, and are able to integrate on-chip all the functions necessary for biochemical analysis using microfluidics, such as transport, splitting, merging, dispensing, mixing, and detection. Biochips offer a number of advantages over conventional biochemical procedures. By handling small amounts of fluids, they provide higher sensitivity while decreasing reagent consumption and waste, hence reducing cost. Moreover, due to their miniaturization and automation, they can be used as point-of-care devices in areas that lack the infrastructure needed by conventional laboratories. Due to these advantages, biochips are expected to revolutionize clinical diagnosis, especially immediate point-of-care diagnosis of diseases. Other emerging application areas include drug discovery, DNA sequencing, tissue engineering and chemical detection. Biochips can also be used to monitor the quality of air and water through real-time detection of toxins.

The digital microfluidic biochip is based on the manipulation of discrete, individually controllable droplets on a two-dimensional array of identical cells. Due to the analogy between the droplets and the bits in a digital system, there are many similarities between the design of digital systems and of digital microfluidic systems. Biochips consisting of hundreds to thousands of cells have already been successfully designed and commercialized. The actuation of droplets is performed by software-driven electronic control, without the need for micro-structures. Since each cell in the array is controlled individually, cells can be reconfigured during the execution of an assay to perform different operations. Digital microfluidic biochips are expected to be integrated with microelectronic components in next-generation systems-on-chip. Consequently, models and techniques for the analysis and design of such systems are needed, including “biochemical compilers” that are able to efficiently map a biochemical application onto a digital microfluidic biochip.
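
The following C sketch hints at what such a biochemical compiler does: a toy assay is represented as a dependency graph of fluidic operations, which are then bound and scheduled onto reconfigurable regions of the electrode array using a greedy list-scheduling rule. The operation set, durations and scheduling policy are illustrative assumptions, not an actual synthesis tool.

    /* Minimal sketch of mapping a biochemical assay, given as a graph of
     * fluidic operations, onto a digital microfluidic array. The operation
     * set, durations and greedy scheduling rule are illustrative. */
    #include <stdio.h>

    #define N_OPS     4
    #define N_MODULES 2   /* reconfigurable mixer regions on the array */

    typedef struct {
        const char *name;
        int duration;          /* time steps                          */
        int dep;               /* index of predecessor op, -1 if none */
        int finish;            /* computed finish time                */
    } op_t;

    int main(void) {
        /* dispense -> mix -> dilute -> detect (a toy assay graph). */
        op_t ops[N_OPS] = {
            { "dispense", 1, -1, 0 },
            { "mix",      3,  0, 0 },
            { "dilute",   2,  1, 0 },
            { "detect",   4,  2, 0 },
        };
        int module_free[N_MODULES] = { 0, 0 };  /* time each region is free */

        /* Greedy list scheduling in dependency order: place each operation
         * on the earliest-available region once its predecessor is done.  */
        for (int i = 0; i < N_OPS; i++) {
            int ready = (ops[i].dep < 0) ? 0 : ops[ops[i].dep].finish;
            int best = 0;
            for (int m = 1; m < N_MODULES; m++)
                if (module_free[m] < module_free[best]) best = m;
            int start = ready > module_free[best] ? ready : module_free[best];
            ops[i].finish = start + ops[i].duration;
            module_free[best] = ops[i].finish;
            printf("%-8s on region %d: start %d, finish %d\n",
                   ops[i].name, best, start, ops[i].finish);
        }
        return 0;
    }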
