World’s Best AI Accelerator for
Edge HPC Solutions.
The fastest and easiest-to-use NPU solution for high-performance edge AI,
with a full-stack SDK and technical support.
The fastest NPU chip for all edge devices,
with the world's leading AI performance
Introducing our first NPU chip, ARIES. It is an edge NPU specialized for AI inference, delivering up to 80 TOPS of AI computing performance. Its architecture of efficient deep learning compute modules, paired with a custom NPU compiler, gives it very fast computing speed, and it achieves high power efficiency by maximizing data reuse and minimizing memory access. If AI performance has been holding you back,
you no longer need to worry about it.
Best option for engineers, with a lower price-to-performance ratio
World-leading lightweight technology that maintains 99.9% of the accuracy of existing models
A solution that supports all edge devices
Easy to use
User-friendly full-stack SDK that supports major ML frameworks, including TensorFlow
Supports more than 100 deep learning models, including SOTA models
Eco-friendly solution that maximizes data reuse and minimizes memory access for high energy efficiency
Mobilint Form Factor
World's best AI chip solutions for edge and on-premise AI
Edge NPU PCIe card for AI inference
MLA100 is a high-performance PCIe AI accelerator for developers who have been frustrated with traditional processors.
It is a companion chip that connects to a host PC and accelerates AI inference operations.
MLA100 is scalable and operates stably even in applications requiring over 100 TOPS of performance. It is mainly used where high-performance AI is essential, such as smart factories, smart cities, autonomous robots, and on-premise servers.
If you want to upgrade your product right away, this is the product for you!
Stand-alone Edge NPU-embedded AI Box
For engineers interested in developing edge devices using AI, the product you are looking for has finally arrived!
The MLX-A1, which Mobilint will release in Q3 2023, is an integrated solution for AI inference. This stand-alone product can implement high-performance AI with minimal space and power.
It supports a variety of communication interfaces, such as MIPI, USB, and Ethernet, making it suitable for most applications and environments.
Mobilint SDK qb
User-friendly Mobilint SDK
'qb' is an SDK developed by Mobilint that empowers developers to rapidly create AI applications for edge devices.
'qb' offers a user-friendly development environment, ensuring an effortless and efficient deployment process
for AI models. Leveraging cutting-edge quantization technology, 'qb' guarantees that the model's accuracy remains above 99.9% of the original FP32 model, even after optimization for lightweight deployment.
This adaptable SDK is compatible with major ML frameworks, supports over 100 AI models,
and includes an intuitive runtime, streamlining AI deployment across a variety of edge devices.
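To make the 99.9% accuracy-retention claim concrete, here is a minimal sketch of how one might check that a quantized model's accuracy stays within 99.9% of its FP32 baseline. This is a generic, hypothetical example in plain Python/NumPy; it does not use the actual 'qb' API, and the prediction arrays are illustrative placeholders.

```python
# Hypothetical sketch (NOT the actual 'qb' API): checking that a
# quantized model retains >= 99.9% of the FP32 baseline accuracy.
import numpy as np

def accuracy(predictions, labels):
    """Fraction of predictions matching the ground-truth labels."""
    return float(np.mean(np.asarray(predictions) == np.asarray(labels)))

def retention(fp32_acc, quant_acc):
    """Quantized accuracy expressed as a fraction of the FP32 baseline."""
    return quant_acc / fp32_acc

# Illustrative numbers only -- replace with real evaluation results.
labels     = np.array([0, 1, 1, 2, 0, 2, 1, 0])
fp32_preds = np.array([0, 1, 1, 2, 0, 2, 1, 0])  # FP32 baseline output
int8_preds = np.array([0, 1, 1, 2, 0, 2, 1, 0])  # quantized model output

fp32_acc = accuracy(fp32_preds, labels)
int8_acc = accuracy(int8_preds, labels)

# The retention check mirroring the SDK's stated guarantee:
assert retention(fp32_acc, int8_acc) >= 0.999, "accuracy dropped below 99.9%"
```

In practice, the two prediction arrays would come from running the same evaluation dataset through the original FP32 model and the quantized model produced by the SDK.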