Researchers ‘overclocking’ world’s fastest supercomputers to process big data faster March 04, 2015 | 04:40 / Interesting information

Researchers at Queen’s University Belfast, the University of Manchester, and the STFC Daresbury Laboratory are developing new software that enables supercomputers to process big data faster while minimizing the accompanying increase in power consumption.

To do that, computer scientists in the Scalable, Energy-Efficient, Resilient and Transparent Software Adaptation (SERT) project are using “approximate computing” (also known as “significance-based computing”) — a form of “overclocking” that trades reliability for reduced energy consumption.
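For illustration only, here is a minimal Python sketch of the significance-based idea under a simple assumption: in a reduction over a dataset, terms whose magnitude exceeds a threshold are treated as significant and accumulated in full precision, while the remainder are accumulated in half precision as a stand-in for a cheaper, less reliable compute mode. The function name and the `significance_threshold` parameter are hypothetical and not part of the SERT software.

```python
import numpy as np

def significance_aware_sum(values, significance_threshold=0.5):
    """Sum a dataset, splitting terms by their significance to the result.

    Large-magnitude (significant) terms are accumulated in float64,
    representing the "reliable" execution mode; small-magnitude terms are
    accumulated in float16, standing in for a cheaper, lower-power and
    less reliable mode. This is an illustrative sketch, not SERT code.
    """
    values = np.asarray(values, dtype=np.float64)
    mask = np.abs(values) >= significance_threshold
    significant = values[mask].sum(dtype=np.float64)        # "reliable" path
    insignificant = values[~mask].astype(np.float16).sum()  # "approximate" path
    return significant + float(insignificant)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.normal(scale=1.0, size=100_000)
    exact = data.sum()
    approx = significance_aware_sum(data)
    print(f"exact={exact:.6f}  approx={approx:.6f}  abs error={abs(exact - approx):.2e}")
```

Under this sketch, the loss of precision on the insignificant terms perturbs the final sum only slightly, which is the trade the approach is willing to make in exchange for lower energy use.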

The idea is to operate hardware at a supply voltage only slightly above the transistor threshold voltage (so-called near-threshold voltage, or NTV, operation), deliberately allowing components to run in an unreliable state, on the assumption that software and parallelism can cope with the resulting timing errors, for example by running additional iterations until a computation converges.
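A minimal sketch of how an iterative method can absorb such errors, assuming occasional value corruptions rather than real NTV hardware faults: a Jacobi solver that, with a hypothetical per-sweep probability `error_rate`, corrupts one entry of its iterate and simply keeps sweeping until the residual meets the tolerance. The fault model and all names here are illustrative assumptions, not the SERT project's actual software.

```python
import numpy as np

def jacobi(A, b, error_rate=0.0, rng=None, tol=1e-8, max_iters=10_000):
    """Jacobi iteration that tolerates occasional injected value errors.

    `error_rate` is the per-sweep probability of corrupting one entry of
    the iterate, standing in for a timing error on unreliable hardware.
    Because each sweep recomputes x from A and b, a corrupted value is
    washed out by later sweeps; convergence just takes more iterations.
    """
    rng = rng or np.random.default_rng()
    D = np.diag(A)
    R = A - np.diagflat(D)
    x = np.zeros_like(b, dtype=np.float64)
    for k in range(1, max_iters + 1):
        x = (b - R @ x) / D
        if rng.random() < error_rate:
            # Simulated timing fault: one component ends up with a wrong value.
            x[rng.integers(len(x))] *= 1.5
        if np.linalg.norm(A @ x - b) < tol:
            return x, k
    return x, max_iters

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = 200
    A = rng.normal(size=(n, n)) + n * np.eye(n)   # diagonally dominant, so Jacobi converges
    b = rng.normal(size=n)
    _, reliable_iters = jacobi(A, b, error_rate=0.0, rng=rng)
    _, faulty_iters = jacobi(A, b, error_rate=0.1, rng=rng)
    print(f"iterations without faults: {reliable_iters}, with faults: {faulty_iters}")
```

In this sketch the faulty run still reaches the same tolerance; it simply needs a handful of extra sweeps, which is the trade the researchers describe between reliability and the energy saved by running the hardware near threshold.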

dailytechinfo.org 
