
Mobilint and AWS, Toward Connected Edge Intelligence

  • Mobilint Admin
  • May 23
  • 3 min read
Mobilint's partnership with AWS aims to streamline edge deployment of powerful hardware.

There’s a shift happening in how intelligent systems are designed and deployed.

AI models are getting more efficient. Devices are getting smaller, faster, and more power-aware. Now, real-time decision-making is no longer just a cloud problem.

That’s why our team at Mobilint is partnering with AWS to bring our AI chip advancements together with the AWS Connected Edge Intelligence platform for secure, real-time execution at the edge. Together, we make it easier for engineering teams to go from idea to deployed intelligence without the heavy lifting typically involved in edge AI systems.


Connected Edge Intelligence: The Convergence of IoT, Edge AI, and Generative AI

Until recently, edge devices were mostly responsible for capturing data. The data would be sent to a server somewhere else to make the decisions. This model introduced delays and placed a fundamental limit on what the edge could do on its own.

With the convergence of IoT, efficient AI models, and new edge-optimized hardware, that’s changing. Models trained in the cloud can now be deployed directly onto embedded systems. Instead of just collecting data, edge devices can analyze, decide, and act on their own, often within milliseconds.

That’s the shift toward what AWS calls connected edge intelligence: distributed systems that are capable of working together in real time.


The Grand Challenge: From Static Systems to Adaptive, Collaborative Ones

Edge-first intelligence offers flexibility and interconnectivity, in contrast to today's cloud-reliant AI architectures.

A lot of today’s embedded systems are still built around fixed logic and rigid workflows. They're programmed with predefined responses and rely heavily on centralized infrastructure to make high-level decisions. This makes updates slow, limits flexibility, and creates bottlenecks when data needs to travel back and forth between device and cloud.

Here’s the challenge our partnership aims to solve: moving from static, cloud-dependent systems to adaptive, edge-first intelligence.

That means devices should be able to:

  • Make decisions locally, without waiting on a round-trip to the server (see the sketch after this list)

  • Coordinate with other devices to adapt workflows dynamically

  • Respond immediately to new conditions, without manual reprogramming
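
A minimal sketch of that first point follows: an on-device decision loop with no cloud round-trip. The model file, camera index, threshold, and actuator hook are all hypothetical, and onnxruntime stands in for whatever inference runtime the device actually uses (on Mobilint hardware, that would be Mobilint's own SDK):

```python
# Minimal sketch: an on-device decision loop with no cloud round-trip.
# "anomaly.onnx", the 0.9 threshold, and the actuator hook are hypothetical;
# onnxruntime is a generic stand-in for the device's real inference runtime.
import cv2
import numpy as np
import onnxruntime as ort


def trigger_local_actuator():
    # Hypothetical hook; a real system might toggle a GPIO pin or a relay.
    print("local action triggered")


session = ort.InferenceSession("anomaly.onnx")
input_name = session.get_inputs()[0].name

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Preprocess to an assumed 1x3x224x224 float32 input.
    x = cv2.resize(frame, (224, 224)).astype(np.float32)
    x = x.transpose(2, 0, 1)[np.newaxis] / 255.0
    score = session.run(None, {input_name: x})[0].item()  # assumes a single-score output
    if score > 0.9:
        trigger_local_actuator()  # act immediately, entirely on-device
```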

Making this possible requires better models, better connectivity, and, critically, better hardware platforms that can handle real-time AI workloads on-site.


Why This Partnership Matters Now

Mobilint and AWS's partnership aims to support edge AI orchestration through an advanced intelligent fabric.

We are excited to move this forward with AWS because the timing is right. Several developments have aligned to make this the moment for deeper integration between Mobilint’s hardware and AWS services such as Amazon SageMaker, for streamlined ML development in the cloud, and AWS IoT Greengrass, for deployment and orchestration at the edge.


First, the silicon is ready

In January 2025, we entered mass production of our AI accelerator ARIES, and we’re now moving forward with production plans for our AI SoC REGULUS. These chips can run complex models on-device at under 25 W and 3 W of power, respectively. That means the hardware is now ready to run vision models, audio inference, and even some generative tasks locally, without bulky GPUs or constant power.


AI accelerator ARIES in its MXM (Mobile PCI Express Module) form factor. Compactly sized and ready to deploy, the module performs at 80 TOPS with 25W of power.

The models are getting lighter and smarter

Thanks to improvements in model quantization, pruning, and distillation, deep learning models that used to require a full server rack can now run efficiently on the edge. Developers can train in SageMaker and then deploy optimized, production-ready versions directly to the device using IoT Greengrass.
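
As a rough sketch of the deployment half of that flow, the snippet below uses boto3 to roll a model-serving component out to a device group with Greengrass v2. The region, thing-group ARN, component name, and versions are hypothetical placeholders, and the component wrapping the SageMaker-trained model would be registered with Greengrass separately:

```python
# Sketch: deploy a model-serving component to a fleet with AWS IoT Greengrass v2.
# The region, ARN, component name, and versions are hypothetical placeholders.
import boto3

gg = boto3.client("greengrassv2", region_name="us-west-2")

response = gg.create_deployment(
    targetArn="arn:aws:iot:us-west-2:123456789012:thinggroup/EdgeCameras",
    deploymentName="vision-model-rollout",
    components={
        # Custom component wrapping the SageMaker-trained, edge-optimized model.
        "com.example.VisionModel": {"componentVersion": "1.0.0"},
        # AWS-provided Greengrass CLI component, handy for on-device debugging.
        "aws.greengrass.Cli": {"componentVersion": "2.12.0"},
    },
)
print("Deployment ID:", response["deploymentId"])
```

Because the deployment targets a thing group, Greengrass applies it to every core device in the group, so the same call scales from one prototype to a fleet.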


Connectivity is everywhere

Devices can now connect and collaborate more reliably. Greengrass helps manage these connections securely and at scale, letting you build multi-device systems that act as one.
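
One concrete pattern, sketched below under assumed names: a component running on a Greengrass core device publishes a detection over MQTT via AWS IoT Core, so that other devices subscribed to the topic can react. The topic and payload are illustrative, and the component's recipe would need a matching IPC access-control policy for this call to be authorized:

```python
# Sketch: publish a detection from inside a Greengrass v2 component via IoT Core.
# The topic and payload are illustrative; the component recipe needs an
# accessControl policy for aws.greengrass.ipc.mqttproxy to allow this.
import json

import awsiot.greengrasscoreipc
from awsiot.greengrasscoreipc.model import PublishToIoTCoreRequest, QOS

ipc_client = awsiot.greengrasscoreipc.connect()

request = PublishToIoTCoreRequest(
    topic_name="factory/line1/detections",
    qos=QOS.AT_LEAST_ONCE,
    payload=json.dumps({"label": "defect", "confidence": 0.97}).encode(),
)
op = ipc_client.new_publish_to_iot_core()
op.activate(request)
op.get_response().result(timeout=10.0)  # block until the publish is accepted
```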


Demand for deployment-ready AI is real

Across industries, there’s growing demand for systems that can operate autonomously: in smart factories, in hospitals, in vehicles, and in infrastructure. These systems need to be intelligent, low-latency, and reliable, even when offline.


Cost and scale matter

Cloud costs add up fast, especially with video and sensor data. Pushing more intelligence to the edge reduces bandwidth needs and makes scaling to thousands of devices more viable, both technically and financially. As a rough illustration: a single camera streaming video at 4 Mbps around the clock uploads roughly 43 GB per day, while a device that analyzes the video locally and transmits only event metadata sends a small fraction of that.


Advanced edge AI systems have the power to transform end-to-end business operations.

What’s Next

We’re not building edge AI for the sake of it. We’re building it because teams out there are solving hard problems that cloud-first architectures simply can’t handle. This partnership with AWS makes it easier to deploy and scale intelligent systems that work reliably outside the data center.

If you’re working on systems that need to run smarter, faster, and closer to the real world, we’d love to talk. We’re ready to help you make them work.
