New technique could boost AI learning on smartphones

17 Nov 2023

Image: © Shutter2U/Stock.adobe.com

Researchers claim their machine-learning technique was able to fine-tune AI models using fewer resources while retaining accuracy, which could be used to improve AI on smaller devices such as smartphones.

A new training method from researchers at the Massachusetts Institute of Technology (MIT) could be used to enable continuous learning for AI on edge computing devices.

The researchers said deep-learning techniques can help AI chatbots understand user accents or predict the next word someone will type based on their typing history. But these features require fine-tuning the AI model with new data.

The team said this becomes an issue on smartphones and small edge devices, as they can lack the memory and computational power required for this fine-tuning process. One way around this is through cloud servers, but this presents both energy concerns and security risks when it comes to sensitive data.

To address this, the team claims to have developed a technique that enables deep-learning models to efficiently adapt to new sensor data directly on an edge device.

This training method – which the researchers have dubbed PockEngine – can determine which parts of the machine-learning model need to be updated to improve accuracy. It then stores and computes only those specific pieces.
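As a rough illustration of the general idea – not the researchers' actual implementation – the PyTorch sketch below freezes every parameter except one hypothetical "important" layer, so gradients and optimiser state only need to be stored and computed for that piece of the model.

```python
import torch
import torch.nn as nn

# A small example model; the architecture is purely illustrative.
model = nn.Sequential(
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 10),
)

# Suppose an offline analysis decided only the final layer is worth updating.
# Freezing everything else means no gradients (and no optimiser state)
# need to be stored or computed for those parameters.
for param in model.parameters():
    param.requires_grad = False
for param in model[4].parameters():   # hypothetical "important" layer
    param.requires_grad = True

# The optimiser only sees the trainable subset, keeping memory low.
trainable = [p for p in model.parameters() if p.requires_grad]
optimiser = torch.optim.SGD(trainable, lr=1e-3)

x, y = torch.randn(32, 128), torch.randint(0, 10, (32,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()          # gradients exist only for the selected layer
optimiser.step()
```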

Making more with less

The team said deep-learning models are based on neural networks, which comprise many interconnected layers of nodes that process data to make a prediction.

However, not all of the layers in a neural network are important for improving accuracy. The team said that even for the layers that are important, the entire layer may not need to be updated.

PockEngine fine-tunes each layer, one at a time, on a given task and measures the accuracy improvement after each layer. In this way, it identifies the contribution of each layer, weighs that against the cost of fine-tuning it, and determines what percentage of each layer needs to be fine-tuned.
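A minimal sketch of that kind of per-layer analysis is shown below, assuming a toy model, random stand-in data and a simple accuracy measurement; PockEngine's real cost model and search are more sophisticated than this.

```python
import copy
import torch
import torch.nn as nn

def evaluate(model, x, y):
    """Accuracy on a held-out batch (stand-in for a real validation set)."""
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

def finetune_single_layer(model, layer, x, y, steps=20):
    """Briefly train only `layer`, with everything else frozen."""
    for p in model.parameters():
        p.requires_grad = False
    for p in layer.parameters():
        p.requires_grad = True
    opt = torch.optim.SGD(layer.parameters(), lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.cross_entropy(model(x), y).backward()
        opt.step()

# Toy model and data; both are purely illustrative.
base = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 5),
)
x_train, y_train = torch.randn(256, 64), torch.randint(0, 5, (256,))
x_val, y_val = torch.randn(256, 64), torch.randint(0, 5, (256,))

# Try updating one layer at a time and compare the accuracy gain
# against the number of parameters that would need training.
baseline = evaluate(base, x_val, y_val)
for i, layer in enumerate(base):
    if not isinstance(layer, nn.Linear):
        continue
    trial = copy.deepcopy(base)
    finetune_single_layer(trial, trial[i], x_train, y_train)
    gain = evaluate(trial, x_val, y_val) - baseline
    cost = sum(p.numel() for p in trial[i].parameters())
    print(f"layer {i}: accuracy gain {gain:+.3f} for {cost} trainable parameters")
```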

The team said this analysis happens ahead of time, before runtime, which minimises the computational power required on the device and boosts the speed of the fine-tuning process. The researchers claim PockEngine was able to perform 15 times faster than other methods on some hardware platforms without a drop in accuracy.
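One way to picture the ahead-of-time aspect, under the assumption that the expensive decisions are made off-device, is to precompute the list of parameters to train and ship only that schedule to the device. The file name and layer choice below are hypothetical, and this does not reproduce PockEngine's compiled training graph.

```python
import json
import torch
import torch.nn as nn

def build_model():
    # The same illustrative architecture on the server and on the device.
    return nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 5))

# --- Offline (server): decide the update schedule once, before deployment ---
schedule = {"trainable": ["2.weight", "2.bias"]}   # hypothetical analysis result
with open("schedule.json", "w") as f:
    json.dump(schedule, f)

# --- On device: no analysis at runtime, just apply the precomputed schedule ---
model = build_model()
with open("schedule.json") as f:
    trainable_names = set(json.load(f)["trainable"])
for name, param in model.named_parameters():
    param.requires_grad = name in trainable_names

optimiser = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3
)
x, y = torch.randn(16, 64), torch.randint(0, 5, (16,))
nn.functional.cross_entropy(model(x), y).backward()
optimiser.step()
```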

“On-device fine-tuning can enable better privacy, lower costs, customisation ability and also lifelong learning, but it is not easy,” said MIT associate professor Song Han. “Everything has to happen with a limited number of resources.

“We want to be able to run not only inference but also training on an edge device. With PockEngine, now we can.”

Last week, Humane – a company founded by former Apple designers – shared details of its new AI-powered device that aims to create a new era for wearables. The tiny device is called the Humane AI Pin, but a hefty price tag and vagueness around its capabilities may hold it back.


Leigh Mc Gowran is a journalist with Silicon Republic

editorial@siliconrepublic.com