Error-correcting codes discovered at MIT can still guarantee reliable communication, even in cellphones with failure-prone low-power chips.

One of the triumphs of the information age is the idea of error-correcting codes, which ensure that data carried by electromagnetic signals — traveling through the air, or through cables or optical fibers — can be reconstructed flawlessly at the receiving end, even when they’ve been corrupted by electrical interference or other sources of what engineers call “noise.”

For more than 60 years, the analysis of error-correcting codes has assumed that, however corrupted a signal may be, the circuits that decode it are error-free. In the next 10 years, however, that assumption may have to change. In order to extend the battery life of portable computing devices, manufacturers may soon turn to low-power signal-processing circuits that are themselves susceptible to noise, meaning that errors sometimes creep into their computations.

Error-Correcting Codes for Reliable Communication

Fortunately, Varshney, a research affiliate at MIT’s Research Laboratory of Electronics, demonstrates that some of the most commonly used codes in telecommunications can still ensure faithful transmission of information, even when the decoders themselves are noisy. The same analysis, adapted from his MIT thesis, also shows that memory chips, which present the same trade-off between energy efficiency and reliability that signal-processing chips do, can preserve data indefinitely even when their circuits sometimes fail.

According to the semiconductor industry’s 15-year projections, both memory and computational circuits will keep getting smaller and lower-power. As circuits shrink and their power drops, they become more susceptible to noise, so these effects are starting to come into play.

In the theory of error-correcting codes, noise in the channel may cause some of the bits to flip or become indeterminate. An error-correcting code consists of extra bits tacked on to the message bits and carrying information about them; if the message bits are corrupted in transit, the extra bits help reconstruct what their values were supposed to be.
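To make the idea concrete, here is a minimal sketch in Python of the classic Hamming(7,4) code. It is not one of the codes discussed in this research, but it illustrates the principle: three parity bits are appended to four message bits, and any single flipped bit in the resulting seven-bit codeword can be corrected at the receiver.

```python
# Illustrative Hamming(7,4) code: 4 message bits are protected by 3 parity
# bits, and any single flipped bit in the 7-bit codeword can be corrected.

def encode(d):
    """d is a list of 4 message bits; returns a 7-bit codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    # Standard bit positions 1..7: p1 p2 d1 p3 d2 d3 d4
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(c):
    """c is a (possibly corrupted) 7-bit codeword; returns the 4 message
    bits, correcting at most one flipped bit."""
    c = list(c)
    # Each syndrome bit re-checks the parity over a subset of positions.
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # positions 4, 5, 6, 7
    error_pos = s1 + 2 * s2 + 4 * s3  # 0 means "no error detected"
    if error_pos:
        c[error_pos - 1] ^= 1         # flip the offending bit back
    return [c[2], c[4], c[5], c[6]]   # recover d1, d2, d3, d4

msg = [1, 0, 1, 1]
codeword = encode(msg)
codeword[5] ^= 1                      # simulate channel noise: flip one bit
assert decode(codeword) == msg        # the original message is recovered
```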

The longer the error-correcting code, the less efficient the transmission of information, since more total bits are required for a given number of message bits. To date, the most efficient error-correcting codes are low-density parity-check codes. Those are the codes that MIT researcher Varshney analyzed; he found that they need only slight modification to guarantee good performance with noisy circuits, and that they can use essentially the same decoding methodologies as standard low-density parity-check codes. But because the codes must now correct for errors in both transmission and decoding, they also yield lower transmission rates (or require higher-power transmitters).
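As an illustration of how parity-check decoding works, the sketch below implements a simple hard-decision bit-flipping decoder over a toy parity-check matrix H, chosen here purely for illustration and not taken from Varshney's analysis. Real low-density parity-check decoders use much larger, sparser matrices and typically soft-decision message passing, but the loop of repeatedly checking parities and flipping suspect bits conveys the flavor of the decoding methodology.

```python
import numpy as np

# Toy parity-check matrix H for illustration only: each row is a sparse
# parity check that a valid codeword must satisfy (H @ c = 0 mod 2).
H = np.array([
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 0, 0, 1, 1],
    [0, 0, 1, 1, 0, 1],
])

def bit_flip_decode(received, H, max_iters=20):
    """Hard-decision bit-flipping decoding: repeatedly flip the bit that
    participates in the most failed parity checks."""
    c = received.copy()
    for _ in range(max_iters):
        syndrome = H @ c % 2              # which parity checks fail?
        if not syndrome.any():
            return c                      # all checks satisfied: done
        # Count, for each bit, how many failed checks it belongs to.
        failures_per_bit = syndrome @ H
        c[np.argmax(failures_per_bit)] ^= 1
    return c                              # may still contain errors

codeword = np.array([0, 0, 0, 0, 0, 0])   # the all-zero word is a codeword
received = codeword.copy()
received[2] ^= 1                          # channel noise flips one bit
print(bit_flip_decode(received, H))       # -> [0 0 0 0 0 0]
```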