
World’s Best AI Accelerator for
Edge HPC Solutions

The fastest, easiest-to-use NPU solution for high-performance edge AI,
with a full-stack SDK and technical support


Mobilint ARIES

The fastest NPU chip for all edge devices

with the world’s leading AI performance

Introducing our first NPU chip, ARIES: an edge NPU specialized for AI inference, with AI computing performance of up to 80 TOPS. Its architecture of efficient deep learning compute modules, paired with a custom NPU compiler, delivers very fast computing speeds, and it achieves high power efficiency by maximizing data reuse and minimizing memory access. If AI performance has been a problem for you until now, you no longer need to worry about it.


Key Features


Cost effective

The most cost-effective option for engineers prioritizing a high price-performance ratio


High accuracy

World-leading model-lightweighting technology
maintains 99.9% of the original model’s accuracy


Scalable

A solution for all edge devices, scaling
from 1 to 100+ TOPS


Easy to use

User-friendly full-stack SDK that supports
major ML frameworks including TensorFlow,
PyTorch, and ONNX (see the sketch after this list)


Programmable

Supports more than 200 deep learning
models, including state-of-the-art (SOTA)
models, with excellent performance


Eco-friendly

Eco-friendly solution that maximizes data
reuse and minimizes memory access for
high energy efficiency
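
To make the "Easy to use" point above concrete, the sketch below shows the usual framework-side first step that a full-stack NPU SDK such as Mobilint's would typically ingest: exporting a trained PyTorch model to ONNX. This is a minimal sketch under general assumptions; the model, input shape, and file name are illustrative only, and the actual compiler entry points are defined by the Mobilint SDK documentation, not shown here.

import torch
import torchvision

# Any trained PyTorch model works; ResNet-50 is used as a stand-in here.
model = torchvision.models.resnet50(weights=None)
model.eval()

# A dummy input fixes the traced input shape (1 x 3 x 224 x 224 image).
dummy_input = torch.randn(1, 3, 224, 224)

# Export a framework-neutral ONNX file that an NPU toolchain can then compile.
torch.onnx.export(
    model,
    dummy_input,
    "resnet50.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=13,
)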


ARIES-Based Products

MLA100

Edge NPU PCIe card for AI inference

MLA100 is a high-performance PCIe AI accelerator for developers who have been frustrated with traditional processors.

It is a companion chip that connects to a host PC and accelerates AI inference workloads.


MLA100 is scalable and operates stably even in applications that demand over 100 TOPS of performance. It is mainly used where high-performance AI is required, such as smart factories, smart cities, autonomous robots, and on-premises servers.


If you want to upgrade your product right away, this is the product for you!


MLX-A1

Standalone Edge NPU-Embedded AI Box

For engineers building AI-powered edge devices, the product you have been looking for has finally arrived!


The MLX-A1, which Mobilint will release in Q3 2023, is an integrated solution for AI inference. This standalone product delivers high-performance AI with minimal space and power.


It supports a range of communication interfaces, including MIPI, USB, and Ethernet, and works in diverse environments, making it suitable for most applications.


World’s Best AI Chip Solutions
for On-Device and On-Premises AI

The fastest, easiest-to-use NPU solution for high-performance
edge AI, with a full-stack SDK and technical support.
