
World’s Best AI Accelerator for
Edge HPC Solutions

The fastest, easiest-to-use NPU solution for high-performance edge AI,
with a full-stack SDK and technical support


Mobilint ARIES

The fastest NPU chip for all edge devices

with world-leading AI performance

Introducing our first NPU chip, ARIES: an edge NPU specialized for AI inference, delivering up to 80 TOPS of AI computing performance. Its architecture of efficient deep learning compute modules, paired with a custom NPU compiler, yields very fast computation, while maximized data reuse and minimized memory access keep power efficiency high. If AI performance has been holding you back,

you no longer need to worry about it.
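As a rough illustration of what 80 TOPS can mean in practice, here is a back-of-envelope throughput estimate. The per-image operation count and the utilization figure below are illustrative assumptions, not Mobilint specifications.

```python
# Back-of-envelope inference throughput estimate (illustrative only).
# Assumptions: a ResNet-50-class model needs ~8 GOPs per image,
# and sustained utilization is around 50% of peak.
tops = 80                     # peak performance, TOPS
ops_per_image = 8e9           # assumed operations per inference
utilization = 0.5             # assumed sustained utilization

images_per_sec = tops * 1e12 * utilization / ops_per_image
print(round(images_per_sec))  # → 5000
```

Real throughput depends on the model, batch size, and memory behavior, but the arithmetic shows why tens of TOPS translate into thousands of inferences per second on typical vision networks.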


Key Features


Cost effective

The most cost-effective option for engineers who prioritize a high price-performance ratio


High accuracy

World-leading model lightweighting technology
preserves 99.9% of existing models' accuracy

Reliable scalability

Solution for all edge devices, supporting
1 to 100+ TOPS


Easy to use

User-friendly full-stack SDK that supports
major ML frameworks, including TensorFlow,
PyTorch, and ONNX



Supports more than 200 deep learning
models, including SOTA models, with
excellent performance


Eco friendly

Eco-friendly solution that maximizes data
reuse and minimizes memory access for
high energy efficiency


Mobilint Form Factor

World's best AI chip solutions for edge and on-premise AI


Edge NPU PCIe card for AI inference

MLA100 is a high-performance PCIe AI accelerator for developers who have been frustrated with traditional processors.

This product is a companion chip that connects to a host PC and supports AI inference operations.

MLA100 is scalable and can operate stably even in applications that require high performance of over 100 TOPS. It is mainly used in applications that require high-performance AI functions, such as smart factories, smart cities, autonomous robots, and on-premise servers.

If you want to upgrade your product right away, this is the product for you!



Stand-alone Edge NPU-embedded AI Box

For engineers interested in developing edge devices using AI, the product you are looking for has finally arrived!

The MLX-A1, which Mobilint will release in Q3 2023, is an integrated solution for AI inference. This stand-alone product can implement high-performance AI with minimal space and power.

It supports a variety of communication interfaces, such as MIPI, USB, and Ethernet, and can be used in a variety of environments, making it suitable for most applications.


World’s Best AI Chip Solutions
for On-Device and On-Premise AI


Mobilint SDK qb

User-friendly Mobilint SDK

'qb' is an SDK developed by Mobilint that empowers developers to rapidly create AI applications for edge devices.

'qb' offers a user-friendly development environment, ensuring an effortless and efficient deployment process for AI models. Leveraging cutting-edge quantization technology, 'qb' keeps model accuracy above 99.9% of the original FP32 model, even after optimization for lightweight deployment.

This adaptable SDK is compatible with major ML frameworks, supports over 200 AI models, and includes an intuitive runtime, streamlining AI deployment across a variety of edge devices.
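To make the accuracy claim concrete, here is a minimal sketch of the kind of INT8 quantization such a toolchain performs. This is generic symmetric per-tensor quantization in NumPy, not Mobilint's actual algorithm; it simply shows why well-calibrated INT8 weights stay very close to their FP32 originals.

```python
import numpy as np

def quantize_int8(x):
    # Symmetric per-tensor quantization: map the FP32 range to int8.
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Reconstruct approximate FP32 values from int8 codes.
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(1000).astype(np.float32)  # stand-in for a weight tensor

q, s = quantize_int8(w)
w_hat = dequantize(q, s)

# Worst-case reconstruction error relative to the tensor's dynamic range
rel_err = np.abs(w - w_hat).max() / np.abs(w).max()
print(rel_err)  # below 0.5%: rounding error is at most half a quantization step
```

Production compilers add per-channel scales, calibration, and quantization-aware fine-tuning on top of this idea, which is how accuracy losses are pushed below the 0.1% level the SDK targets.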
