Building MXNet for AArch64

Viraj Deshwal
3 min read · Sep 5, 2019


Apache MXNet with AWS

Apache MXNet is a fast and scalable training and inference framework with an easy-to-use, concise API for machine learning.

MXNet includes the Gluon interface that allows developers of all skill levels to get started with deep learning on the cloud, on edge devices, and on mobile apps. In just a few lines of Gluon code, you can build linear regression, convolutional networks and recurrent LSTMs for object detection, speech recognition, recommendation, and personalization.
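
For instance, a tiny Gluon multilayer perceptron takes only a few lines (a minimal sketch; the layer sizes and input shape are arbitrary examples):

from mxnet import nd
from mxnet.gluon import nn

net = nn.Sequential()
net.add(nn.Dense(64, activation='relu'),  # hidden layer
        nn.Dense(10))                     # output layer, e.g. 10 classes
net.initialize()                          # initialize weights on the CPU
x = nd.random.uniform(shape=(4, 784))     # dummy batch of 4 flattened inputs
y = net(x)                                # forward pass, y has shape (4, 10)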

In the world of edge computing, we want to run machine learning models on edge devices such as:

  1. NVIDIA Jetson Nano: a small, powerful computer that brings the power of modern AI to embedded and IoT applications.

  • GPU: 128-core NVIDIA Maxwell
  • CPU: Quad-core ARM Cortex-A57 @ 1.43 GHz
  • Memory: 4 GB 64-bit LPDDR4, 25.6 GB/s

2. NVIDIA Jetson TX2

  • NVIDIA Pascal™ Architecture GPU
  • 2 Denver 64-bit CPUs + Quad-Core A57 Complex
  • 8 GB 128-bit LPDDR4 Memory

Steps to build MXNet on AArch64

1. Allocate swap (disk-backed memory to supplement the RAM during the build)

In your terminal —

fallocate -l 8G swapfile      # create an 8 GB file to use as swap
sudo chmod 600 swapfile       # restrict its permissions
sudo mkswap swapfile          # format the file as swap space
sudo swapon swapfile          # enable it

Now, to check the memory on your Jetson device, type in your terminal:

free -m
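
The swap enabled above lasts only until the next reboot. Optionally, to make it permanent, you can add an entry to /etc/fstab (a small sketch, assuming the swapfile was created in your home directory):

$ echo "$HOME/swapfile none swap sw 0 0" | sudo tee -a /etc/fstab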

2. Install dependencies for MXNet

sudo apt-get update
sudo apt-get install -y git build-essential libatlas-base-dev libopencv-dev graphviz python-pip
sudo pip install --upgrade pip
sudo pip install --upgrade setuptools
sudo pip install numpy
sudo pip install graphviz jupyter

3. Export PATH

Export the path of your CUDA installation and the MXNet home directory.

$ export PATH=/usr/local/cuda/bin:$PATH
$ export MXNET_HOME=$HOME/mxnet/
$ export PYTHONPATH=$MXNET_HOME/python:$PYTHONPATH
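
These exports only apply to the current shell session. To make them permanent, you can append them to your ~/.bashrc (an optional sketch):

$ cat >> ~/.bashrc <<'EOF'
export PATH=/usr/local/cuda/bin:$PATH
export MXNET_HOME=$HOME/mxnet/
export PYTHONPATH=$MXNET_HOME/python:$PYTHONPATH
EOF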

4. Clone MXNet

$ git clone --recursive https://github.com/apache/incubator-mxnet.git mxnet

5. Configure the MXNet makefile for AArch64

$ cd mxnet
$ cd make

Inside the make folder, find config.mk and edit it. Change the following parameters:

USE_CUDA = 1
USE_CUDA_PATH = /usr/local/cuda/
USE_OPENCV = 1
USE_CUDNN = 1
NVCCFLAGS := -m64    # add this parameter to config.mk

# For the Jetson Nano, add this to config.mk:
CUDA_ARCH := -gencode arch=compute_53,code=sm_53

# For the Jetson TX2, add this instead:
CUDA_ARCH := -gencode arch=compute_62,code=sm_62
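
If you prefer not to edit config.mk by hand, you can also append the overrides to the end of the file, since in make the last assignment to a variable wins (a sketch for the Jetson Nano; adjust CUDA_ARCH for the TX2):

$ cat >> config.mk <<'EOF'
USE_CUDA = 1
USE_CUDA_PATH = /usr/local/cuda/
USE_OPENCV = 1
USE_CUDNN = 1
NVCCFLAGS := -m64
CUDA_ARCH := -gencode arch=compute_53,code=sm_53
EOF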

6. Configure the mshadow makefile

Now edit the mshadow makefile so that MXNet builds with Pascal's hardware-level low-precision acceleration: open 3rdparty/mshadow/make/mshadow.mk. The last line sets MSHADOW_USE_PASCAL to 0; change it to 1 to enable it.
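
If you'd rather script this change than open an editor, a one-line sed works (a sketch, assuming the line looks like MSHADOW_USE_PASCAL ?= 0; run it from the MXNet home directory):

$ sed -i 's/^\(MSHADOW_USE_PASCAL.*\)0/\11/' 3rdparty/mshadow/make/mshadow.mk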

7. Build MXNet

Go back to the MXNet home directory and start the build:

$ cd $MXNET_HOME
$ make -j8
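
The -j8 flag starts 8 parallel compile jobs. The Nano has only 4 cores and 4 GB of RAM, so if the build gets killed for lack of memory even with swap enabled, fewer jobs may help (a hedged alternative):

$ make -j$(nproc)    # one job per CPU core, i.e. -j4 on the Nano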

8. Install MXNet Python Bindings

$ cd python
$ sudo python3 setup.py install  # I am using Python 3.
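
To sanity-check the install, open a Python 3 shell and run a tiny computation (a minimal sketch; mx.gpu(0) assumes the CUDA build succeeded and the GPU is visible):

$ python3
>>> import mxnet as mx
>>> mx.__version__                         # confirm the bindings import
>>> mx.nd.ones((2, 3), ctx=mx.gpu(0)) * 2  # small array allocated and doubled on the GPU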

MXNet is done…

Time to Build GluonCV

1. Clone the repository

$ git clone https://github.com/dmlc/gluon-cv

2. Build GluonCV along with its dependencies

$ cd gluon-cv && python3 setup.py install --user  # python3, to match the MXNet install
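
As a quick check that GluonCV sees the MXNet you just built, try importing it and constructing a model from the model zoo (a sketch; resnet18_v1b is just one example model name):

$ python3
>>> import gluoncv
>>> gluoncv.__version__
>>> from gluoncv import model_zoo
>>> net = model_zoo.get_model('resnet18_v1b', pretrained=False)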

Time to enjoy MXNet and GluonCV… Happy Deep Learning!

In my next blog, I will share how to boost the NVIDIA Jetson Nano/TX2 to utilize its maximum compute power.

Stay tuned, folks…
