2.4.2. KaNN code generator

The KaNN (Kalray Neural Network) code generator is a deep learning inference compiler targeting the MPPA3 platform. It takes as input a trained neural network model, described within a standard framework such as Caffe, TensorFlow or ONNX, and produces executable code for a set of compute clusters exposed as an OpenCL sub-device (Figure 2.15). Targeting OpenCL sub-devices allows several model inferences to execute concurrently on a single MPPA3 processor. The KaNN code generator optimizes for batch-1 inference, with the primary objective of reducing latency. At the user’s option, FP32 operators in the original network can be converted to FP16 operators. Integer quantization, such as the one used by TensorFlow Lite, is also supported; however, it must be expressed in the input model. Indeed, such models are assumed to be trained with fake quantization (Jacob et al. 2018), which must match the actual quantization applied during inference.
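As a point of reference, the affine quantization scheme of Jacob et al. (2018) represents a real value x as x ≈ scale × (q − zero_point). The plain C sketch below, with hypothetical helper names, illustrates the arithmetic that a fake-quantized input model is assumed to encode; the same scale and zero_point must then be used by the integer arithmetic at inference time:

#include <math.h>
#include <stdint.h>

/* Affine quantization per Jacob et al. (2018): a real value x is
 * represented as x ~= scale * (q - zero_point). Helper names are
 * hypothetical, for illustration only. */
static inline uint8_t quantize_u8(float x, float scale, int32_t zero_point)
{
    int32_t q = (int32_t)lrintf(x / scale) + zero_point;
    if (q < 0)   q = 0;      /* clamp to the uint8 range */
    if (q > 255) q = 255;
    return (uint8_t)q;
}

static inline float dequantize_u8(uint8_t q, float scale, int32_t zero_point)
{
    return scale * (float)((int32_t)q - zero_point);
}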


Figure 2.15. KaNN inference code generator workflow

Following the import of the input model into an intermediate representation, optimizations are applied to the compute graph:

 – elimination of channel concatenation and slicing copies;

 – padding of input activations of convolutional layers;

 – folding of batch normalizations, scalings and additions into a single pointwise fused multiply-add operator (a folding sketch follows this list);

 – fusion of convolutions with ReLU activation functions;

 – adaptation of arithmetic representations.
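The batch normalization folding listed above rests on simple algebra: y = γ(x − mean)/√(var + ε) + β collapses into a pointwise multiply-add y = a·x + b. The following C sketch (hypothetical names, not KaNN internals) makes the rewrite concrete:

#include <math.h>

/* Hypothetical sketch, not KaNN internals: a batch normalization
 * y = gamma * (x - mean) / sqrt(var + eps) + beta
 * folds into a single pointwise multiply-add y = a * x + b,
 * which can subsequently be fused into a preceding convolution. */
typedef struct {
    float a;   /* per-channel multiplier */
    float b;   /* per-channel addend     */
} fma_params_t;

static fma_params_t fold_batchnorm(float gamma, float beta,
                                   float mean, float var, float eps)
{
    fma_params_t p;
    p.a = gamma / sqrtf(var + eps);
    p.b = beta - p.a * mean;
    return p;
}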

The KaNN code generation scheme performs inference in topological sort order of the (optimized) compute graph, parallelizing the execution of each operator over all the compute clusters of the target sub-device. When executing an operator, its input and output activations are distributed across the target local memories configured as SPM, while the network parameters are read from the (external) DDR memory. Depending on the type of operator (convolutional or fully connected), the spatial dimension sizes and the channel depth, input and output activations are distributed over the compute cluster local memories by splitting either along the spatial dimensions or along the channel dimension (Figure 2.16; a sizing sketch follows this list):

 – In case of spatial splitting of the output activations, each compute cluster only accesses an input activation tile and its shadow region, while all the operator parameters are required; these are read once from the DDR memory and multicast to all the target compute clusters.

 – In case of channel splitting of the output activations, the full input layer must be replicated in the local memory of each compute cluster, but only the corresponding slice of parameters is read from the DDR memory.

In all cases, activations are computed once, laid out sequentially along the channel dimension and possibly copied to other local memories.
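The trade-off between the two schemes can be sized with simple arithmetic. The C sketch below (hypothetical names and cluster count, not KaNN internals) estimates the per-cluster activation footprint under spatial splitting and the per-cluster parameter traffic under channel splitting, for a K × K convolution with input H × W × Cin and output channel depth Cout:

#include <stddef.h>

#define NCLUSTERS 5  /* assumed: an OpenCL sub-device of 5 compute clusters */

/* Spatial split: each cluster holds a tile of rows plus a shadow (halo)
 * region of K/2 rows on each side, but needs ALL the parameters, which
 * are read once from DDR and multicast to every cluster. Returns the
 * input activation elements held per cluster. */
static size_t spatial_split_activations(size_t H, size_t W,
                                        size_t Cin, size_t K)
{
    size_t tile_rows = (H + NCLUSTERS - 1) / NCLUSTERS;
    return (tile_rows + 2 * (K / 2)) * W * Cin;
}

/* Channel split: each cluster replicates the FULL input activations but
 * computes only Cout/NCLUSTERS output channels, so only that slice of
 * the parameters is read from DDR. Returns the parameter elements read
 * per cluster. */
static size_t channel_split_parameters(size_t Cin, size_t Cout, size_t K)
{
    size_t cout_slice = (Cout + NCLUSTERS - 1) / NCLUSTERS;
    return K * K * Cin * cout_slice;
}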


Figure 2.16. Activation splitting across MPPA3 compute clusters

For any compute cluster in the target sub-device, the code generation process defines and implements a local schedule for (a hypothetical encoding is sketched after this list):

 – local memory buffer allocations/deallocations;

 – DDR memory read/multicast of parameters;

 – execution of operator compute functions;

 – inter-cluster activation exchanges;

 – inter-cluster synchronizations.
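One plausible encoding of such a local schedule, shown purely for illustration (the types and names are assumptions, not Kalray's), is an ordered list of tagged steps covering the five categories above:

/* Hypothetical encoding of a per-cluster schedule, for illustration. */
typedef enum {
    STEP_BUF_ALLOC,     /* allocate a local-memory (SPM) buffer      */
    STEP_BUF_FREE,      /* release a local-memory buffer             */
    STEP_PARAM_READ,    /* DDR read / multicast of parameters        */
    STEP_COMPUTE,       /* execute one operator compute function     */
    STEP_ACT_EXCHANGE,  /* send/receive activations between clusters */
    STEP_SYNC           /* inter-cluster synchronization barrier     */
} step_kind_t;

typedef struct {
    step_kind_t kind;
    const void *args;   /* step-specific argument structure */
} schedule_step_t;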

This process is backed by the computation graph (Figure 2.17) augmented with parameter read tasks (yellow) and activation production tasks (blue).

The result of KaNN code generation is a collection of OpenCL binary kernels, where each kernel interprets the contents of a static data block composed of a sequence of records. Each record contains its length, a native compute function pointer and a structure containing arguments for the compute function. For each record, the OpenCL kernel calls the native compute function with a pointer to the argument structure. The kernel ends after interpreting the last record.
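A minimal C sketch of this interpretation loop is given below; the record layout and identifiers are assumptions for illustration, not Kalray's actual binary format:

#include <stddef.h>
#include <stdint.h>

/* Assumed record layout: a length field, a native compute function
 * pointer, and the argument structure placed immediately after the
 * header in the static data block. */
typedef struct {
    uint32_t length;                /* total record size in bytes */
    void (*compute)(const void *);  /* native compute function    */
    /* the argument structure for `compute` follows in memory */
} kann_record_t;

/* Walk the static data block, calling each record's compute function
 * on its embedded argument structure; return after the last record. */
static void interpret_block(const uint8_t *block, size_t block_size)
{
    size_t offset = 0;
    while (offset < block_size) {
        const kann_record_t *rec = (const kann_record_t *)(block + offset);
        rec->compute(rec + 1);      /* arguments follow the header */
        offset += rec->length;
    }
}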


Figure 2.17. KaNN augmented computation graph
