Multi-Processor System-on-Chip 1 - Liliana Andrade

1.2.2. Configurability and extensibility


Integrated circuits for low-power IoT edge devices are often built using off-the-shelf processor IP that can be licensed from IP vendors. Since such licensable processors are by nature multi-purpose, designed for reuse across different customers and applications, they may not be optimal for efficiently implementing a specific set of application functions. However, some of these licensable processors support customization by chip designers, allowing the processors to be tailored to the functions they need to perform for a specific application (Dutt and Choi 2003). More specifically, two mechanisms can be used to provide such customization capabilities:

 – Configurability: the processor IP is delivered as a parameterized processor that can be configured by the chip designer for the targeted application. More specifically, unnecessary features can be deconfigured and optimal parameters can be selected for various architectural features. This may involve optimization of the compute capabilities, memory organization, external interfaces, etc. For example, the chip designer may configure the memory subsystem with closely coupled memories and/or caches. Configurability allows performance to be optimized for the application at hand, while reducing area and power consumption.
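Such a configuration is typically captured as a set of build-time parameters. The sketch below illustrates the idea as a C configuration header; the parameter names are invented for illustration and do not correspond to any particular vendor's configuration flow:

```c
/* Hypothetical build-time configuration of a licensable processor core.
 * All names are illustrative; real IP vendors use their own
 * configuration tools and naming schemes. */
#define CPU_HAS_MUL         1   /* keep the hardware multiplier          */
#define CPU_HAS_FPU         0   /* deconfigure unused floating point     */
#define CPU_ICCM_SIZE_KB   64   /* closely coupled instruction memory    */
#define CPU_DCCM_SIZE_KB   32   /* closely coupled data memory           */
#define CPU_ICACHE_SIZE_KB  0   /* no instruction cache: CCMs only       */
#define CPU_NUM_IRQ_LINES  16   /* trim interrupt lines to what is used  */
```

Every deconfigured feature (here, the FPU and the instruction cache) directly saves area and static power, which is why configurability matters most for low-power edge silicon.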

 – Extensibility: the processor can be extended with custom instructions to enhance performance for specific application functions. For the application at hand, performance may be dominated by a few functions with critical code segments. The execution of such code segments may be accelerated dramatically by adding a few custom instructions. A further benefit of custom instructions is reduced code size, since each one replaces a sequence of regular instructions.
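As a sketch of the idea, consider saturating addition, a common operation in signal-processing code. In plain C it compiles to an add plus compare-and-clamp sequence; on an extended processor the same operation becomes a single custom instruction exposed to C as an intrinsic. The intrinsic name below is hypothetical, and the plain C function stands in for what the hardware would do in one cycle:

```c
#include <stdint.h>

/* Plain-C saturating 16-bit add: an add followed by two compares and
 * clamps, i.e. several instructions on a baseline processor. */
int16_t sat_add16(int16_t a, int16_t b) {
    int32_t s = (int32_t)a + (int32_t)b;
    if (s > INT16_MAX) return INT16_MAX;
    if (s < INT16_MIN) return INT16_MIN;
    return (int16_t)s;
}

/* On an extended processor, the same operation would be one custom
 * instruction, used from C as an intrinsic, e.g. _my_sat_add16(a, b)
 * (hypothetical name). The compiler then emits a single instruction
 * instead of the compare-and-clamp sequence above. */
```

The cycle gain scales with how often the operation sits in an inner loop; the code-size gain follows directly, since every call site shrinks to one instruction.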

Both configurability and extensibility are applied at design time. They must be supported by a tool chain (i.e. compiler, simulator, debugger) that is automatically enhanced to support the selected configuration and the added custom instructions. For example, the compiler must generate optimal code for the selected configuration while supporting programmers in using the custom instructions. Similarly, simulation models must reflect the selected configuration and include the custom instructions. If done properly, large performance gains can be achieved while optimizing area, power and code size, with a minimal impact on design time.

As an example of extensibility, we consider Viterbi decoding, which is a prominent function in an NB-IoT protocol stack for performing forward error correction (FEC) in the receiver. When using a straightforward software implementation on an off-the-shelf processor, this kernel becomes one of the most computationally intensive parts of an NB-IoT modem. Viterbi or similar FEC schemes are used in many communication technologies, especially in the IoT field, and are often a bottleneck in modem design.

In (Petrov-Savchenko and van der Wolf 2018), a processor extension for Viterbi decoding is presented using four custom instructions, which enhance the performance to just a few cycles per decoded bit. The instructions include a reset instruction, two instructions to calculate the path metrics and one instruction for the traceback. The instructions can be conveniently used as intrinsic instructions in the C source code. The resulting implementation reduces the worst-case MHz requirements for the Viterbi decoding function in an NB-IoT protocol stack to less than 1 MHz.
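To make concrete what the path-metric instructions accelerate, the inner kernel of Viterbi decoding is the add-compare-select (ACS) butterfly sketched below in plain C. The function signature and metric layout are illustrative, not those of the cited design; the point is that a custom instruction can execute one or more such butterflies per cycle:

```c
#include <stdint.h>

/* One add-compare-select (ACS) butterfly of a Viterbi decoder.
 * pm_a/pm_b are the path metrics of the two predecessor states,
 * bm0/bm1 the branch metrics of the competing transitions. The
 * surviving metrics of the two successor states are written to
 * pm_even/pm_odd, and the decision bits needed later by the
 * traceback to dec0/dec1. */
static inline void viterbi_acs(uint16_t pm_a, uint16_t pm_b,
                               uint16_t bm0, uint16_t bm1,
                               uint16_t *pm_even, uint16_t *pm_odd,
                               uint8_t *dec0, uint8_t *dec1) {
    uint16_t m0 = pm_a + bm0, m1 = pm_b + bm1;  /* add     */
    *dec0 = m1 < m0;                            /* compare */
    *pm_even = *dec0 ? m1 : m0;                 /* select  */

    uint16_t m2 = pm_a + bm1, m3 = pm_b + bm0;
    *dec1 = m3 < m2;
    *pm_odd = *dec1 ? m3 : m2;
}
```

In a straightforward software implementation, each butterfly costs on the order of ten instructions, and the 64-state convolutional code used by NB-IoT requires 32 butterflies per decoded bit; folding the butterflies and the traceback step into custom instructions is what brings the cost down to a few cycles per bit.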

We note that extending the processor with custom instructions is radically different from adding an external hardware accelerator on a system bus. A bus-based hardware accelerator requires data to be moved over the bus, with additional memory and synchronization requirements (e.g. through interrupts), thereby impacting area, cycles, power consumption and code size. Custom instructions, in contrast, are issued directly from the software thread running on the processor and operate on data that is already available locally, in registers or in local memory. Hence, there are no overheads for moving data to/from an accelerator or for performing explicit synchronization, and software development is greatly simplified.
