Exploring the impact of Flipped Bits.

Following a few interesting conversations recently, I got interested in the idea of a 'bit flip' – an event where a single binary bit inside a file changes state from a 0 to a 1 or from a 1 to a 0.

I wrote a very inefficient script that sequentially flipped every bit in a JPEG file, saved each new bitstream as a JPEG, attempted to render it with the [im] python library, and, if successful, calculated an RMSE value for the new file.
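For anyone curious, a minimal sketch of that kind of survey script might look like the following. This is my reconstruction, not the original code; it assumes Pillow as the imaging library (the post doesn't name one), and the helper names (`flip_bit`, `rmse`, `survey`) are made up for illustration:

```python
import io
import math

from PIL import Image  # assumption: Pillow stands in for the unnamed library


def flip_bit(data: bytes, bit_index: int) -> bytes:
    """Return a copy of `data` with one bit inverted (bit 0 = MSB of byte 0)."""
    byte_index, offset = divmod(bit_index, 8)
    mutated = bytearray(data)
    mutated[byte_index] ^= 1 << (7 - offset)
    return bytes(mutated)


def rmse(a, b) -> float:
    """Root-mean-square error between two RGB images of (roughly) equal size."""
    pa = list(a.convert("RGB").getdata())
    pb = list(b.convert("RGB").getdata())
    total = sum((ca - cb) ** 2
                for pxa, pxb in zip(pa, pb)
                for ca, cb in zip(pxa, pxb))
    return math.sqrt(total / (len(pa) * 3))


def survey(jpeg_bytes: bytes):
    """Flip every bit in turn; yield (bit_index, rmse) for mutants that still render."""
    original = Image.open(io.BytesIO(jpeg_bytes))
    original.load()
    for i in range(len(jpeg_bytes) * 8):
        try:
            mutant = Image.open(io.BytesIO(flip_bit(jpeg_bytes, i)))
            mutant.load()  # force a full decode; many mutants fail here
            yield i, rmse(original, mutant)
        except Exception:
            pass  # the decoder rejected this mutant outright
```

As the post says, flipping one bit at a time like this is hugely inefficient – each of the file's bits triggers a full decode – but it is the most direct way to map which regions of the bitstream are fatal versus merely cosmetic.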

I've not really had much time to take this further at the moment, but it's an academic notion I'd be interested in exploring some more.

I'm not sure whether a bit flip is a theoretical or a 'real' threat on modern storage devices – in the millions of digital objects that have passed through my hands over the past 10 years, I've never knowingly handled a file damaged by a random bit flip. I'd be interested in any thoughts / experiences / observations on the topic.

Please see the attached file for some pretty pictures.

Feel free to get in touch if you want any more data – images, RMSE data or scripts.



  1. paul
    February 19, 2013 @ 2:08 pm CET

    Apologies, I'll try and get this fixed…

  2. andy jackson
    February 19, 2013 @ 10:19 am CET

    The errors were caught using plain old fixity checking (signed, timestamped SHA-256 I think), and restored from mirrors. 

    In my experience, systematic faults like 'chattery' cables, dodgy disc controllers or flaky firmware are much more commonly problematic than random spontaneous damage (cosmic rays etc.).

  3. Jay Gattuso
    February 18, 2013 @ 9:55 pm CET

    I can't see the file in flickr.  🙁

  4. Jay Gattuso
    February 18, 2013 @ 9:54 pm CET

    It would be very interesting to know how they caught it… and how they repaired it (I can only guess a brute-force bit flip and MD5 comparison until the original MD5 is re-created….)
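    That guessed repair strategy is simple enough to sketch. This is purely illustrative of the brute-force idea – the helper name and assumptions (a single flipped bit, a trusted recorded MD5) are mine:

```python
import hashlib
from typing import Optional


def repair_single_bitflip(damaged: bytes, good_md5: str) -> Optional[bytes]:
    """Flip each bit in turn until the MD5 matches the recorded digest.

    Only works if exactly one bit is wrong; cost is one hash per bit.
    """
    data = bytearray(damaged)
    for i in range(len(data) * 8):
        byte, offset = divmod(i, 8)
        data[byte] ^= 1 << offset  # try this bit
        if hashlib.md5(bytes(data)).hexdigest() == good_md5:
            return bytes(data)
        data[byte] ^= 1 << offset  # undo and move on
    return None  # zero or more than one bit is wrong
```

    For an n-byte file this is 8n hash computations, so it's feasible for single flips in modestly sized files, but it blows up combinatorially if more than one bit is damaged.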

    Agree with your comment on workflow and tools – ditto over here. The main cause of errors is naff tools writing ill formed objects, or hardware having an issue (e.g. we discovered through fixity checks at some point we had a 'chattery' network cable in a critical switch, resulting in intermittent writing of file objects bitstreams….went back and fixed the problem, and badly written files, so no loss, but worrying for a while as we tracked the fault down.)

  5. Jay Gattuso
    February 18, 2013 @ 7:19 pm CET


    "I'm not expecting a direct correlation between the bit and the damage (though it'd be neat if flipping bit 17 always resulted in a cyan swathe across the image for example) but rather that images that are broken may all produce similar artefacts/shapes?"

    In my head, there ARE a few bytes that are critical to a file – from my brief foray into bit mashing I saw that the first few bytes of a JPEG are critical. Given how inefficient my code was, I've not been able to run as many tests as I wanted – however, I'd love to see the aggregated results from a decent number of different JPEGs (a few hundred, covering a range of frame sizes and compression aggressiveness) and see if there are any relative or absolute patterns to the errors and error percentages.

    For example, we know that JPEG is built of 8×8 blocks, each block having undergone a DCT, and the resulting coefficients make up the main body of the file. This means that a change to the MSB side of a byte will cause a larger 'per DCT block' error than a change at the LSB end – so we are more 'tolerant' (visually and arithmetically) of LSB-skewed errors across most of the JPEG file than of MSB errors.
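    The asymmetry can be put in back-of-the-envelope terms (plain byte arithmetic, not tied to any particular codec – the helper name is made up):

```python
def flip_delta(value: int, bit: int) -> int:
    """Absolute change in an unsigned 8-bit value when bit `bit` (0 = LSB) flips."""
    return abs((value ^ (1 << bit)) - value)

# Each step up the byte doubles the damage: flipping the LSB perturbs a
# stored value by 1, the MSB by 128 - and for a DCT coefficient that
# error is then smeared across all 64 pixels of the block by the
# inverse transform.
```

    So a uniformly random bit flip landing in coefficient data has a 1-in-8 chance of a worst-case 128-step perturbation, but an equal chance of a barely visible 1-step one.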

    This raises some questions: at a base level, do these sorts of patterns affect how we should be clustering different file types to reduce the likelihood of errors / damage in the long term (or do we simply not care, because modern error correction, concealment and distributed bit-writing methods remove this issue?…)

    What other patterns can we see? Are there critical portions of files that would benefit from a higher bit budget?

Leave a Reply

Join the conversation