AI energy assessed
Experts are working on ways to reduce the carbon footprint of artificial intelligence (AI).
Artificial intelligence has become a focus of ethical concern, but it also poses significant sustainability challenges.
Previous studies have found that training and searching some neural networks can require nearly five times the lifetime carbon emissions of the average car, including its manufacture.
This issue becomes even more severe in the deployment phase, where deep neural networks must run on diverse hardware platforms, each with different properties and computational resources.
To address these issues, MIT researchers have developed a new automated AI system for training and running neural networks.
Results indicate that, by improving computational efficiency, the system can cut the carbon emissions involved.
The new system trains one large neural network comprising many pretrained subnetworks of different sizes that can be tailored to diverse hardware platforms without retraining.
This dramatically reduces the energy usually required to train a specialised neural network for each new platform - which can include billions of Internet of Things (IoT) devices.
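The weight-sharing idea behind this approach can be illustrated with a small sketch. The snippet below is a hypothetical, simplified illustration (all names, sizes and the `ElasticLayer` class are the author's inventions, not the MIT team's code): one large layer is trained once, and smaller subnetworks for constrained devices are obtained by slicing its shared weights rather than retraining.

```python
import numpy as np

# Hypothetical sketch of the idea described above: train one large
# ("super") layer, then extract smaller subnetworks by slicing its
# shared weights - no retraining needed. Illustrative only.

rng = np.random.default_rng(0)

class ElasticLayer:
    """A linear layer whose output width can be shrunk after training."""
    def __init__(self, in_dim, max_out):
        self.W = rng.standard_normal((max_out, in_dim)) * 0.1

    def forward(self, x, width):
        # A subnetwork uses only the first `width` output units,
        # sharing those weights with the full network.
        return np.maximum(self.W[:width] @ x, 0.0)  # ReLU activation

layer = ElasticLayer(in_dim=8, max_out=64)
x = rng.standard_normal(8)

# During training, different widths would be sampled each step so the
# shared weights work well at every size. At deployment, a platform
# with a tight compute budget simply picks a smaller width:
full = layer.forward(x, width=64)   # e.g. for a large GPU
small = layer.forward(x, width=16)  # e.g. for an IoT device

# Because the weights are shared rather than retrained, the small
# subnetwork's outputs match the first 16 units of the full network.
assert np.allclose(small, full[:16])
```

In a real system the sampling would also cover depth, kernel size and input resolution, but the deployment step is the same: select the subnetwork that fits the target hardware's budget.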
Using the system to train a computer-vision model, the researchers estimated that the process required roughly 1/1,300 of the carbon emissions of today's state-of-the-art neural architecture search approaches, while reducing inference time by a factor of 1.5 to 2.6.
“The aim is smaller, greener neural networks,” says Song Han, an assistant professor at MIT.
“Searching efficient neural network architectures has until now had a huge carbon footprint. But we reduced that footprint by orders of magnitude with these new methods.”
The work was carried out on Satori, an efficient computing cluster donated to MIT by IBM that is capable of performing 2 quadrillion calculations per second.
An early copy of a report on the development is available in PDF form.