The computational demands of artificial intelligence are, in their own way, a double-edged sword. On the one hand, a neural network should train quickly; on the other, the faster it runs, the more energy it consumes, to the point where training can simply become uneconomical. IBM may offer a way out: the company has demonstrated new AI training methods that let networks learn several times faster at the same level of resource and energy cost.
To achieve these results, IBM had to move beyond its 32-bit and 16-bit computational methods, developing an 8-bit technique along with a new chip to run it.
IBM presented its work at NeurIPS 2018 in Montreal, where company engineers described two developments. The first is a paper on training deep neural networks with 8-bit floating-point numbers. In it, they describe how they reduced arithmetic precision from 32 and 16 bits down to 8 bits while preserving model accuracy. The researchers say their technique speeds up the training of deep neural networks by 2-4 times compared with 16-bit systems. The second development covers 8-bit in-memory multiplication with projected phase-change memory. Here, the researchers describe a method that compensates for the inherently low precision of analog AI chips, allowing them to consume 33 times less energy than comparable digital AI systems.
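The core idea behind reduced-precision training, keeping far fewer bits per number and tolerating the resulting rounding error, can be sketched in a few lines. This is a simplified simulation for illustration only, not IBM's actual FP8 format or hardware; a real 8-bit float also shrinks the exponent range, which this sketch ignores.

```python
import numpy as np

def round_mantissa(x, mantissa_bits):
    """Simulate a low-precision float by keeping only `mantissa_bits`
    of mantissa per value (illustrative, not a real FP8 format)."""
    m, e = np.frexp(np.asarray(x, dtype=np.float64))  # m in [0.5, 1)
    scale = 2.0 ** mantissa_bits
    return np.ldexp(np.round(m * scale) / scale, e)

# A dot product in simulated low precision drifts from the exact result:
rng = np.random.default_rng(0)
a, b = rng.standard_normal(1000), rng.standard_normal(1000)
exact = np.dot(a, b)
approx = np.dot(round_mantissa(a, 2), round_mantissa(b, 2))
print(abs(exact - approx))  # rounding error accumulated over 1000 terms
```

The research challenge IBM's paper addresses is exactly this: keeping model accuracy acceptable even though every stored number carries rounding error of this kind.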