
Intelligence Everywhere

We accelerate the future by developing deep learning accelerators & SoC solutions.

Products

MOBILINT ARIES 

Mobilint Aries enables high-performance AI inference for edge devices with remarkable energy efficiency. Unparalleled efficiency and remarkable performance are the two key distinctions that define Aries. Working alongside a host processor as a companion accelerator, Aries flawlessly executes AI algorithms, delivering up to 80 TOPS of AI performance at maximum capacity. Aries is best suited for high-performance edge applications, including self-driving, machine vision, security, and smart factories. And with Mobilint's dollar-per-TOPS promise, Aries will empower anyone to enrich our daily lives and communities with high-tech AI applications previously only dreamt of.

*Available to our partners in Q1 2023

Key Features

Easy and Fast to Use

A full-stack SDK that natively supports TensorFlow, PyTorch, ONNX, and TVM is provided to all our clients.

World-Best Price

Mobilint Aries offers a superlative hardware upgrade at a competitive cost.

Programmability

Mobilint Aries supports 30+ up-to-date DNN models while maintaining stellar performance.

Versatile Performance

Mobilint Aries facilitates scalability and versatility by implementing effective scheduling and multitasking features.

Benchmark

Performance Benchmark

In April 2021, the latest results of the leading industry benchmark for deep learning accelerator performance, MLPerf™ Benchmark (v1.0), were released. In this round, Mobilint submitted results for an edge inference NPU implemented on a Xilinx Alveo U250 FPGA and achieved a 2.14x performance improvement over its previous v0.7 submission. This was the highest-performing result from a South Korean submitter.

Closed Division*

ResNet

Single Stream* : 17.27 (ms)

Offline* : 174 (samples/sec)

SSD-MobileNet

Single Stream : 6.98 (ms)

Offline : 463 (samples/sec)

Open Division*

ResNet

Offline : 891.70 (samples/sec)

SSD-MobileNet

Offline : 2,404.61 (samples/sec)

* Closed Division : Uses relatively strict rules, enabling direct performance comparison.

* Open Division : Uses relatively free rules, for showcasing new and innovative approaches.

* Single Stream : Measures the latency per query.

* Offline : Measures the number of queries an accelerator can process per second.
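The relationship between the two metrics can be illustrated with a small calculation. As a rough sketch (plain arithmetic, not an official MLPerf tool), the reciprocal of the Single Stream latency gives the throughput of a purely serial pipeline, which can be compared against the Offline figure to see the gain from batching and scheduling:

```python
# Rough illustration of the two MLPerf metrics, using the Closed Division
# numbers reported above. Not an official MLPerf tool -- just arithmetic.

results = {
    # model: (Single Stream latency in ms, Offline throughput in samples/sec)
    "ResNet": (17.27, 174),
    "SSD-MobileNet": (6.98, 463),
}

for model, (latency_ms, offline_sps) in results.items():
    # A purely serial pipeline completes one query per latency period.
    serial_sps = 1000.0 / latency_ms
    # Offline mode batches many queries, so throughput exceeds the serial rate.
    speedup = offline_sps / serial_sps
    print(f"{model}: serial ~ {serial_sps:.1f} samples/sec, "
          f"offline = {offline_sps} samples/sec ({speedup:.1f}x from batching)")
```

For ResNet, for example, 1000 / 17.27 ms yields roughly 58 samples/sec for a serial pipeline, so the reported Offline figure of 174 samples/sec reflects about a 3x gain from batching.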

MLPerf benchmark results vary by system configuration.

Please visit here for detailed specifications.

Applications

Deep learning accelerators are applicable almost everywhere.

Smart City

Drone / Robot

Surveillance Camera

Autonomous Vehicle

Smart Factory

Home Appliance
