
1.3.4. Example machine learning applications and benchmarks

The embARC MLI library is available from embarc.org (embARC Open Software Platform 2019), together with a number of example applications that demonstrate the usage of the library, such as:

 – CIFAR-10 low-resolution object classifier: CNN graph;

 – face detection: CNN graph;

 – human activity recognition (HAR): LSTM-based network;

 – keyword spotting: graph with CNN and LSTM layers, trained on the Google Speech Commands dataset.

The CIFAR-10 (Krizhevsky 2009) example application is based on the Caffe (Jia et al. 2014) tutorial. The CIFAR-10 dataset is a set of 60,000 low-resolution RGB images (32x32 pixels) of objects in 10 classes, such as “cat”, “dog” and “ship”. This dataset is widely used as a “Hello World” example in machine learning and computer vision. The objective is to train the classifier using 50,000 of these images, so that the other 10,000 images of the dataset can be classified with high accuracy. We used the CIFAR-10 CNN graph in Figure 1.9 for training and inference. This graph matches the CIFAR-10 graph from the Caffe tutorial, including the two fully connected layers towards the end of the graph.


Figure 1.9. CNN graph of the CIFAR-10 example application

We used the CIFAR-10 example application with 8-bit precision for both feature data and weights to benchmark the performance of machine learning inference on the ARC EM9D processor. The code of this CIFAR-10 application, built using the embARC MLI library, is shown in Figure 1.10.


Figure 1.10. MLI code of the CIFAR-10 inference application

As the code in Figure 1.10 shows, each layer in the graph is implemented by calling a function from the embARC MLI library. Before executing the first convolution layer, we call a permute function from the embARC MLI library to transform the RGB image into CHW format so that neighboring data elements are from the same color plane. The code further shows that a ping-pong scheme with two buffers, ir_X and ir_Y, is used for buffering input and output maps.
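The sketch below illustrates this structure for the first layers of the graph. It assumes the fx8 kernel API of the embARC MLI library (mli_api.h); the weight and bias tensors and the layer configurations (L1_conv_wt, conv1_cfg, and so on) are placeholder names for data that would normally be generated from the trained model, and the actual code in Figure 1.10 may differ in these details.

/* Sketch of the layer-by-layer CIFAR-10 inference flow, assuming the
   embARC MLI 1.x fx8 API; not the verbatim code of Figure 1.10. */
#include "mli_api.h"

/* Weight/bias tensors and layer configurations are assumed to be defined
   elsewhere (generated from the trained model); the names are placeholders. */
extern const mli_tensor L1_conv_wt, L1_conv_bias;
extern const mli_tensor L7_fc_wt, L7_fc_bias, L8_fc_wt, L8_fc_bias;
extern const mli_permute_cfg permute_hwc_to_chw_cfg;
extern const mli_conv2d_cfg conv1_cfg;
extern const mli_pool_cfg pool2_cfg;

/* Two intermediate buffers used in a ping-pong fashion: each layer reads its
   input map from one buffer and writes its output map to the other. The
   largest intermediate map is 32x32x32 elements (see Table 1.3). */
static int8_t ir_X[32 * 32 * 32];
static int8_t ir_Y[32 * 32 * 32];

void cifar10_inference(const mli_tensor *input_rgb, mli_tensor *output_scores)
{
    /* Wrap the buffers in MLI tensors (rank, shape and quantization
       parameters are omitted here for brevity). */
    mli_tensor ir_tensor_X = { .data = ir_X, .capacity = sizeof(ir_X) };
    mli_tensor ir_tensor_Y = { .data = ir_Y, .capacity = sizeof(ir_Y) };

    /* Layer 0: permute the HWC RGB input into CHW layout so that neighboring
       data elements belong to the same color plane. */
    mli_krn_permute_fx8(input_rgb, &permute_hwc_to_chw_cfg, &ir_tensor_X);

    /* Layer 1: 5x5 convolution, 32 output channels (X -> Y). */
    mli_krn_conv2d_chw_fx8(&ir_tensor_X, &L1_conv_wt, &L1_conv_bias,
                           &conv1_cfg, &ir_tensor_Y);

    /* Layer 2: max pooling (Y -> X). */
    mli_krn_maxpool_chw_fx8(&ir_tensor_Y, &pool2_cfg, &ir_tensor_X);

    /* Layers 3-6 (convolutions and average pooling) alternate between the
       two buffers in the same way ... */

    /* Layers 7-8: fully connected layers producing the 10 class scores. */
    mli_krn_fully_connected_fx8(&ir_tensor_X, &L7_fc_wt, &L7_fc_bias, &ir_tensor_Y);
    mli_krn_fully_connected_fx8(&ir_tensor_Y, &L8_fc_wt, &L8_fc_bias, output_scores);
}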

A very similar CIFAR-10 CNN graph has been used by others for benchmarking machine learning inference on their embedded processors, with performance numbers published in (Lai et al. 2018) and (Croome 2018). Table 1.3 presents the model parameters of the CIFAR-10 CNN graph that we used, with performance data for the ARC EM9D processor and two other embedded processors presented in Table 1.4.

Table 1.3. Model parameters of the CIFAR-10 CNN graph

#  | Layer type      | Weights tensor shape | Output tensor shape | Coefficients
0  | Permute         | -                    | 3 × 32 × 32         | 0
1  | Convolution     | 32 × 3 × 5 × 5       | 32 × 32 × 32 (32K)  | 2400
2  | Max Pooling     | -                    | 32 × 16 × 16 (8K)   | 0
3  | Convolution     | 32 × 32 × 5 × 5      | 32 × 16 × 16 (8K)   | 25600
4  | Avg Pooling     | -                    | 32 × 8 × 8 (2K)     | 0
5  | Convolution     | 64 × 32 × 5 × 5      | 64 × 8 × 8 (4K)     | 51200
6  | Avg Pooling     | -                    | 64 × 4 × 4 (1K)     | 0
7  | Fully-connected | 64 × 1024            | 64                  | 65536
8  | Fully-connected | 10 × 64              | 10                  | 640
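
The coefficient counts in the last column of Table 1.3 follow directly from the weights tensor shapes:

 – layer 1: 32 x 3 x 5 x 5 = 2400;

 – layer 3: 32 x 32 x 5 x 5 = 25600;

 – layer 5: 64 x 32 x 5 x 5 = 51200;

 – layer 7: 64 x 1024 = 65536;

 – layer 8: 10 x 64 = 640.

The permute and pooling layers have no trainable coefficients, so the graph has 145376 coefficients in total.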

The performance data for processor A is published in (Lai et al. 2018) in terms of milliseconds for a processor running at a clock frequency of 216 MHz. The cycle counts for processor A in Table 1.4 have been calculated by multiplying the published millisecond numbers by this clock frequency. The CIFAR-10 CNN graph reported in (Lai et al. 2018) has the same convolution and pooling layers as listed in Table 1.3, but uses a single fully connected layer with a 4x4x64x10 filter shape to directly transform the 64x4x4 input map into 10 output values. This modification of the Caffe CNN graph reduces the size of the weight data considerably, but requires retraining of the graph. The impact on the total cycle count is marginal.
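As a worked example of this conversion: an execution time of t milliseconds at 216 MHz corresponds to t x 216,000 cycles, so the total of 21.4 Mcycles for processor A in Table 1.4 corresponds to 21.4 x 10^6 / (216 x 10^6) ≈ 99 ms per inference.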

The performance data for the RISC-V processor published in (Croome 2018) reports a total of 1.5 Mcycles for executing the CIFAR-10 graph on a highly parallel 8-core RISC-V architecture. To estimate the total number of cycles on a single RISC-V core, we note that the performance is dominated by the cycles spent on the 5x5 convolutions, which constitute more than 98% of the compute operations in this graph. For these 5x5 convolutions, (Croome 2018) reports a speed-up from a 1-core system to an 8-core system of 18.5/2.2 = 8.2. Hence, a reasonable estimate for the total number of cycles on a single RISC-V core is 1.5x8.2 = 12.3 Mcycles.

Table 1.4. Performance data for the CIFAR-10 CNN graph

#  | Layer type      | ARC EM9D [Mcycles] | Processor A [Mcycles] | Processor B (RISC-V ISA) [Mcycles]
0  | Permute         | 0.01               | -                     | -
1  | Convolution     | 1.63               | 6.78                  | -
2  | Max Pooling     | 0.14               | 0.34                  | -
3  | Convolution     | 3.46               | 9.25                  | -
4  | Avg Pooling     | 0.09               | 0.09                  | -
5  | Convolution     | 1.76               | 4.88                  | -
6  | Avg Pooling     | 0.07               | 0.04                  | -
8  | Fully-connected | 0.001              | -                     | -
7  | Fully-connected | 0.03               | 0.02                  | -
   | Total           | 7.2                | 21.4                  | 12.3

From Table 1.4, we conclude that the ARC EM9D processor spends 3x fewer cycles than processor A and 1.7x fewer cycles than the RISC-V core (processor B) on the same machine learning inference task, without using any dedicated accelerators. Thanks to this cycle efficiency, the ARC EM9D processor can be clocked at a low frequency, which helps to save power in a smart IoT edge device.
