
Intelligence Everywhere

We accelerate the future by developing deep learning accelerators & SoC solutions.




Mobilint Aries enables high-performance AI inference for edge devices with remarkable energy efficiency. Unparalleled efficiency and outstanding performance are the two key distinctions that define Aries. Working alongside a host processor as a companion accelerator, Aries executes AI algorithms flawlessly, delivering up to 80 TOPS of AI performance at maximum capacity. Aries is best suited to high-performance edge applications, including self-driving, machine vision, security, and smart factories. And with Mobilint’s dollar-per-TOPS promise, Aries will empower anyone to enrich our daily lives and communities with high-tech AI applications previously only dreamt of.

*Available to our partners in Q1 2023

Key Features

Easy and Fast to Use

A full-stack SDK that natively supports TensorFlow, PyTorch, ONNX, and TVM is provided to all our clients.

World's Best Price

Mobilint Aries offers a superlative hardware upgrade at a competitive cost.


Mobilint Aries supports 30+ up-to-date DNN models while maintaining stellar performance.

Versatile Performance

Mobilint Aries facilitates scalability and versatility by implementing effective scheduling and multitasking features.


Performance Benchmark

In April 2021, the latest results of MLPerf™ (v1.0), the leading industry benchmark for deep learning accelerator performance, were released. In this round, Mobilint submitted results for an edge-inference NPU implemented on an FPGA (Xilinx Alveo U250) and achieved a 2.14× performance improvement over its previous v0.7 submission. This was the highest-performing result from a South Korean submitter.

Closed Division*


Single Stream*: 17.27 ms

Offline*: 174 samples/sec


Single Stream: 6.98 ms

Offline: 463 samples/sec

Open Division*


Offline: 891.70 samples/sec


Offline: 2,404.61 samples/sec

* Closed Division: Uses relatively strict rules, enabling direct performance comparison across submissions.

* Open Division: Uses relatively relaxed rules, intended for showcasing new and novel approaches.

* Single Stream: Measures the latency of a single query.

* Offline: Measures the number of queries an accelerator can process per second.
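The two metrics are related: the Single Stream latency bounds the throughput a strictly serial client could achieve, while Offline mode permits batching and pipelining. A short sketch using the first pair of Closed Division figures above (illustrative only; the models behind these numbers are not named on this page):

```python
# Closed Division figures from the results above.
single_stream_latency_ms = 17.27   # Single Stream: latency per query
offline_throughput = 174.0         # Offline: samples processed per second

# If queries were issued strictly one at a time, throughput would be
# bounded by the inverse of the per-query latency:
serial_throughput = 1000.0 / single_stream_latency_ms  # queries per second

# Offline mode allows batching and pipelining, which is why its measured
# throughput exceeds the serial bound:
batching_gain = offline_throughput / serial_throughput

print(f"serial bound: {serial_throughput:.1f} qps, "
      f"offline: {offline_throughput:.0f} qps, "
      f"gain: {batching_gain:.2f}x")
```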

MLPerf benchmark results vary with system configuration.

Please visit here for detailed specifications.


Deep learning accelerators are applicable almost everywhere.


Smart City


Drone / Robot


Surveillance Camera


Autonomous Vehicle


Smart Factory


Home Appliance
