Machine learning is exploding. New models, new algorithms, and new applications are popping up every day. According to the Organisation for Economic Co-operation and Development (OECD), in 2020 alone, more than $75 billion was invested in artificial intelligence (AI), up from $3 billion in 2012. Most of that funding went toward machine learning research and development.
Machine learning consists of repetitive arithmetic operations, makes heavy use of linear algebra, and is traditionally performed by high-powered computers on servers or in the cloud. But what if you could perform complex machine learning tasks close to where the data is generated, using far less powerful computing devices? That would make it possible for businesses to apply AI at the edge without needing massive data centers.
TinyML is bringing machine learning to the edge using devices with tiny, low-power microcontrollers.
What is TinyML?
Tiny Machine Learning (TinyML) is a discipline at the crossroads of machine learning (ML) and embedded systems that allows you to run ML models on low-power microcontrollers.
Embedded systems are primarily composed of hardware and software that are designed to perform a particular function. They are computers, but in contrast to general-purpose computers, such as PCs, cellphones, or tablets, they focus on specific operations. Examples of embedded systems include printers, home appliances, digital cameras, electronic calculators, ATMs, and more.
Microcontrollers are tiny chips that can run on very little power—often with only a few kilobytes of memory and sometimes even without an operating system—and constitute the hardware portion of embedded systems, enabling them to perform their tasks.
Some of the features of microcontrollers include:
- Low-power: Microcontrollers are very energy efficient, consuming only a few milliwatts.
- Small memory footprint: They can run with as little as a few kilobytes of memory.
- No operating system: They don't need an OS to run, which keeps overhead low.
- Affordable: They’re generally low cost.
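A quick back-of-envelope calculation shows why these constraints shape TinyML. The sketch below assumes a hypothetical 256 KB flash budget, typical of a Cortex-M-class microcontroller; the parameter count is also an illustrative assumption, not a figure from any specific model.

```python
# Back-of-envelope: will a model's weights fit in a microcontroller's flash?
# The 256 KB budget is an assumption typical of Cortex-M-class parts.
FLASH_BUDGET_BYTES = 256 * 1024

def model_size_bytes(num_params: int, bytes_per_param: int) -> int:
    """Storage needed for the model's weights alone."""
    return num_params * bytes_per_param

# A hypothetical 50,000-parameter model stored as float32 (4 bytes/weight)...
float32_size = model_size_bytes(50_000, 4)   # 200,000 bytes
# ...versus the same model quantized to int8 (1 byte/weight).
int8_size = model_size_bytes(50_000, 1)      # 50,000 bytes

print(float32_size <= FLASH_BUDGET_BYTES)   # fits, but leaves little room
print(int8_size <= FLASH_BUDGET_BYTES)      # fits with room to spare
```

The same arithmetic explains why the compression and quantization techniques discussed later in this article are central to TinyML.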
The origin of TinyML can be traced back to 2014. Pete Warden, a Google staff research engineer, is credited by his peers as the originator and early influencer of the TinyML subdomain of machine learning.
Why TinyML is Important
In a world where ML is rapidly gaining popularity and use, the ability to apply it at the edge becomes more important. While big cloud providers like Microsoft Azure, Amazon AWS, and Google Cloud are creating data centers with thousands of servers running ML models in parallel over huge datasets, TinyML offers the ability to deploy just one or two devices that can perform simple tasks based on a company's needs. For example:
A smart home appliance company wants its appliances to automatically adjust power consumption based on real-time price fluctuations of electricity, but they don’t want to invest millions into building an expensive data center for all their appliances. Instead, they could embed TinyML algorithms onto each device to independently learn how much power needs to be reduced given its current state (e.g., what time of day and how much energy is consumed by other appliances in the house).
A car manufacturer wants to use its fleet of self-driving cars for a ride-sharing service, but it doesn’t want to invest in full autonomy at this point because not enough people will be able to benefit from it. So instead, they can embed TinyML algorithms onto their vehicles so that each one acts independently using pattern detection based on data collected from previous trips (e.g., speed limits and traffic patterns).
In both cases, these companies would save millions by avoiding expensive infrastructure when only small devices are needed, allowing them to focus on developing new features instead.
The Impact of TinyML on Edge Computing
Edge computing refers to the practice of moving computation closer to users to reduce response times. The idea is that you can avoid sending all your data back and forth over the internet when only a small part of it needs immediate attention.
TinyML gives devices an autonomous edge by running ML algorithms on each device and uploading only what's necessary. This allows TinyML-powered devices to not only perform better but also require much less power than if they were connected directly to centralized servers with big datasets, saving money and time while improving user experience.
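The pattern can be sketched in a few lines: a device scores each reading locally and uploads only the readings that need attention. The temperature threshold, the readings, and the stand-in anomaly check below are illustrative assumptions, not part of any particular TinyML framework; a real deployment would replace `is_anomaly` with an on-device model.

```python
# Sketch: a device runs a cheap local check and uploads only anomalies,
# instead of streaming every reading to the cloud. The 30 C threshold and
# the readings are illustrative assumptions.
def is_anomaly(temperature_c: float, limit: float = 30.0) -> bool:
    """Stand-in for an on-device model; a real one might be a tiny classifier."""
    return temperature_c > limit

def readings_to_upload(readings):
    """Keep only the readings that need the cloud's attention."""
    return [r for r in readings if is_anomaly(r)]

sensor_log = [21.5, 22.0, 35.2, 21.8, 41.0]   # five local readings
print(readings_to_upload(sensor_log))          # only 2 of 5 leave the device
```

Even this toy version captures the economics: bandwidth and server load scale with the number of anomalies, not with the raw sensor rate.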
Benefits and Advantages of TinyML
There are many benefits to using TinyML over centralized cloud-based systems, including:
- Fast inference with low latency: TinyML algorithms can infer results in just tens of milliseconds, far faster than cloud-hosted models, which must send data to remote servers and wait for a response.
- Low cost: ML at the edge allows companies to avoid building costly data centers with thousands of servers. Instead, they can use inexpensive self-learning devices and require less power, resulting in huge savings for their business.
- Low model size: TinyML models are optimized to run on small devices using techniques like compression or quantization, resulting in smaller files while still achieving the same accuracy as standard models.
- Local storage: ML algorithms require a large amount of memory and disk space to store all their parameters, which makes them difficult to run on devices with limited RAM or flash storage. TinyML reduces this problem by only keeping what's needed to determine an outcome—the inference part—instead of storing every training variable, which results in a smaller model file.
- Data privacy: Data privacy is becoming increasingly important in the era of big data, and TinyML gives companies an opportunity to take back control by keeping data on the edge device, reducing the risk of exposing sensitive information.
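The quantization mentioned above can be illustrated in a few lines of plain Python: symmetric int8 quantization maps each float32 weight to an 8-bit integer plus one shared scale factor, cutting storage roughly 4x. This is a minimal sketch of the general idea, not the exact scheme any particular framework uses; real toolchains typically add per-channel scales and zero points.

```python
# Minimal sketch of symmetric int8 post-training quantization.
# Real frameworks use more elaborate schemes (per-channel scales,
# zero points); the weights below are made-up illustrative values.
def quantize(weights):
    """Map float weights to int8 values plus one shared scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values."""
    return [v * scale for v in q]

weights = [0.81, -0.42, 0.05, -1.27]
q, scale = quantize(weights)
approx = dequantize(q, scale)
# Each recovered weight is within one quantization step of the original.
assert all(abs(a - w) <= scale for a, w in zip(approx, weights))
```

Each weight now takes 1 byte instead of 4, at the cost of a bounded rounding error per weight—the trade-off behind the "smaller files, same accuracy" claim above.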
Challenges of TinyML
While the benefits of using machine learning at the edge are clear, there are still some limitations that companies must be aware of before they can effectively adopt this technology.
- Limited data transfer rate: Sending large datasets back and forth between devices is time-consuming due to limited bandwidth. However, the full rollout of 5G networks is expected to address this challenge by providing high speeds for data transfers.
- Limited memory: As mentioned earlier, ML models require a great deal of memory to store all their parameters, creating problems on devices with only small amounts of RAM. TinyML algorithms have been developed to address this challenge by using less storage while still achieving the same accuracy as standard models. That said, only a few ML platforms can currently satisfy the demands of TinyML devices.
- Troubleshooting problems: Because the ML model is trained on data obtained from the device and runs on the device itself, detecting and resolving performance issues is more complex than in a cloud environment where technicians can diagnose problems remotely.
Enterprise Use Cases for TinyML
TinyML is impactful for IT teams looking to improve their organization’s networking capabilities, security, and data privacy. Some of the industries that can benefit from TinyML include:
- Manufacturing: TinyML can enable more efficient manufacturing processes by providing real-time analysis on machines.
- Retail and e-commerce: Retail and e-commerce companies can use TinyML models at the edge to improve customer service, optimize pricing strategies, increase revenue opportunities, and track inventory levels in warehouses or stores.
- Healthcare: TinyML models can help healthcare companies detect health conditions before symptoms arise, verify that equipment is working, or monitor patient vitals at home.
- Remote sensing and forestry management: TinyML algorithms are being used to solve problems in forestry management by predicting when trees will be ready for harvest and detecting forest fires based on weather and vegetation data.
- Automotive: Automobile companies can use TinyML models to improve safety features, fix faults in their vehicles before they become big problems, or collect information about the drivers’ behavior on the road for research purposes.
Getting Started with TinyML
TinyML is a promising technology that has many benefits for organizations looking to take advantage of smart devices at the edge. Several ML frameworks support TinyML applications. Some of the most popular platforms include TensorFlow (via TensorFlow Lite for Microcontrollers), PyTorch Mobile, and Edge Impulse.
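What these frameworks ultimately execute on the microcontroller is conceptually simple: integer arithmetic over quantized weights. The sketch below shows one dense layer's forward pass with int8 inputs and weights; the values and scales are made-up illustrative assumptions, and real runtimes such as TensorFlow Lite for Microcontrollers implement this with optimized fixed-point kernels rather than plain Python.

```python
# Conceptual sketch of what a TinyML runtime does on-device: a dense
# layer over int8 inputs and weights. All values are illustrative;
# real runtimes use optimized fixed-point kernels.
def dense_int8(inputs_q, weights_q, bias_q, in_scale, w_scale):
    """Dot product in integer arithmetic, rescaled to a float output."""
    # Accumulate in wide integers (no overflow for small layers)...
    acc = sum(x * w for x, w in zip(inputs_q, weights_q)) + bias_q
    # ...then apply the combined scale factor once at the end.
    return acc * in_scale * w_scale

# Two int8 inputs, two int8 weights, one bias in accumulator units.
y = dense_int8([100, -50], [64, 32], 10, in_scale=0.02, w_scale=0.01)
print(round(y, 4))  # 0.962
```

Because the inner loop is pure integer multiply-accumulate, it runs efficiently even on microcontrollers with no floating-point unit, which is what makes the milliwatt power figures quoted earlier achievable.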
As the Internet of Things (IoT) continues to expand, many companies are looking for ways to use machine learning at the edge to analyze data collected from smart devices. As a result, there is increasing interest in developing tools that can help streamline this process without compromising accuracy or security.