World’s Best AI Accelerator for
Edge HPC Solutions
The fastest, easiest-to-use NPU solution for high-performance edge AI,
with a full-stack SDK and technical support
Mobilint ARIES
The fastest NPU chip for all edge devices,
with world-leading AI performance
Introducing our first NPU chip, ARIES: an edge NPU specialized for AI inference, delivering up to 80 TOPS of AI computing performance. Its fast computing speed comes from an architecture built of efficient deep-learning compute modules paired with a custom NPU compiler, and it achieves high power efficiency by maximizing data reuse and minimizing memory access. If AI performance has been a bottleneck for you until now, it no longer needs to be.
Key Features
Cost effective
The most cost-effective option for engineers who prioritize a high price-performance ratio
High accuracy
World-leading model-lightweighting technology
maintains 99.9% of existing models' accuracy
Scalable
A solution for all edge devices, scaling
from 1 to 100+ TOPS
Easy to use
User-friendly full-stack SDK that supports
major ML frameworks, including TensorFlow,
PyTorch, and ONNX
Programmable
Supports more than 200 deep learning
models, including state-of-the-art (SOTA)
models, with excellent performance
Eco friendly
Eco-friendly solution that maximizes data
reuse and minimizes memory access for
high energy efficiency
ARIES-Based Products
MLA100
Edge NPU PCIe card for AI inference
MLA100 is a high-performance PCIe AI accelerator for developers who have been frustrated with traditional processors.
This product works as a companion accelerator: the card connects to a host PC and offloads AI inference operations.
MLA100 is scalable and operates stably even in applications that require over 100 TOPS of performance. It is mainly used where high-performance AI functions are needed, such as smart factories, smart cities, autonomous robots, and on-premise servers.
If you want to upgrade your product right away, this is the product for you!
MLX-A1
Standalone Edge NPU-Embedded AI Box
For engineers developing AI-powered edge devices, the product you have been looking for has finally arrived!
The MLX-A1, which Mobilint will release in Q3 2023, is an integrated solution for AI inference. This stand-alone product delivers high-performance AI in minimal space and at minimal power.
It supports a variety of communication interfaces, such as MIPI, USB, and Ethernet, making it suitable for most deployment environments and applications.