When you reduce the number of colours in an image, it's often
hard to get a 1:1 match, and so typically you can expect to see
banding in an image - areas of unbroken solid colours where once
multiple similar colours were present. Such banding can often
ruin the look of the image; however, by using dithering
algorithms you can reduce the banding and greatly improve the
appearance of the reduced image.
Here we see a nice view of the Tower of London (Image Credit:
Vera Kratochvil). Let's say we wanted to reduce the number
of colours in this image to 256 using the web safe colour
palette.
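As a quick aside, the web safe palette is every combination of 0, 51, 102, 153, 204 and 255 across the red, green and blue channels - 216 colours in total, which fits comfortably within the 256 entries available to an 8-bit image. If you want to experiment along with these posts, a throwaway sketch for building it (using System.Drawing and a List<Color>) might look like this:

private static Color[] CreateWebSafePalette()
{
  List<Color> palette = new List<Color>();

  // Each channel takes one of six values, 0 to 255 in steps of 51,
  // giving 6 * 6 * 6 = 216 colours
  for (int r = 0; r <= 255; r += 51)
  {
    for (int g = 0; g <= 255; g += 51)
    {
      for (int b = 0; b <= 255; b += 51)
      {
        palette.Add(Color.FromArgb(r, g, b));
      }
    }
  }

  return palette.ToArray();
}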
If we simply reduce the colour depth by mapping each colour in
the original image to the nearest colour in the new palette,
then we'll get something similar to the image below. As is quite
evident, the skyline has been badly affected by banding.
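To make that concrete, a naive nearest-colour match is little more than measuring the distance between two colours and keeping the closest palette entry. The sketch below uses System.Drawing's Color type and a squared Euclidean distance in RGB space; it's purely illustrative, and a real implementation would work on raw pixel data and cache the lookups:

private static Color FindNearestColor(Color original, Color[] palette)
{
  Color nearest = palette[0];
  int bestDistance = int.MaxValue;

  foreach (Color candidate in palette)
  {
    // Squared Euclidean distance between the two colours in RGB space
    int deltaR = original.R - candidate.R;
    int deltaG = original.G - candidate.G;
    int deltaB = original.B - candidate.B;
    int distance = deltaR * deltaR + deltaG * deltaG + deltaB * deltaB;

    if (distance < bestDistance)
    {
      bestDistance = distance;
      nearest = candidate;
    }
  }

  return nearest;
}

Applied on its own, every pixel in a smooth gradient snaps to the same handful of palette entries, which is exactly where the banding comes from.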
However, by applying a technique known as dithering, we can
still reduce the colour depth using exactly the same palette,
and get something comparable to the original and more
aesthetically pleasing.
Types of Dithering
There are several different types of dithering, mostly falling
into Ordered or Unordered categories.
Ordered dithering uses a patterned threshold matrix to dither the
image. An example of this is the very distinctive (and
nostalgic!) Bayer algorithm.
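As a rough illustration of the idea (my own sketch, not code from this series), ordered dithering of a single channel down to black and white with the classic 4x4 Bayer matrix can be as simple as comparing each pixel against a tiled threshold map:

// The classic 4x4 Bayer index matrix; entries range from 0 to 15
private static readonly int[,] BayerMatrix =
{
  {  0,  8,  2, 10 },
  { 12,  4, 14,  6 },
  {  3, 11,  1,  9 },
  { 15,  7, 13,  5 }
};

private static byte DitherChannel(byte value, int x, int y)
{
  // Scale the matrix entry into a 0-255 threshold; the 0.5 centres
  // each step so pure black and pure white are left untouched
  double threshold = (BayerMatrix[y % 4, x % 4] + 0.5) * 255.0 / 16.0;

  // Reduce to black or white; a fuller implementation would pick
  // between the nearest entries of the target palette instead
  return value > threshold ? (byte)255 : (byte)0;
}

Because the matrix is tiled across the image, the result has the regular cross-hatch texture that makes Bayer output so recognisable.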
Unordered, or error diffusion, dithering calculates an error
value for each pixel and then propagates this to the
neighbouring pixels, often with very good results. The best
known of these is Floyd–Steinberg, although there are several
more, such as Burkes and Sierra.
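To give a flavour of how the error is shared out, Floyd–Steinberg, for example, spreads each pixel's error over four forward neighbours; the weights are conventionally drawn like this, where the asterisk marks the pixel being processed:

          *     7/16
  3/16   5/16   1/16

Other algorithms such as Burkes and Sierra use larger neighbourhoods and different fractions, but the principle is identical.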
You could potentially use dithering for applications other
than images. An image is simply a block of pixel data, i.e.
colours. Colours are just numbers, and so is a great deal of
other data. So in theory you can dither a lot more than "just"
images.
Dithering via Error Diffusion
For at least the first part of this series, I will be
concentrating on error diffusion. With this family of
algorithms, you scan the image from left to right, top to
bottom, visiting each pixel in turn. For each pixel, you
calculate a value known as the "error".
After calculating the error, you apply it to one or more
neighbouring values that haven't yet been processed. Generally,
this would mean adjusting at least 3 neighbouring cells, but
depending on the algorithm this could be quite a few more. I'll
go into this in more detail when I describe individual dithering
algorithms in subsequent posts.
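That said, a rough sketch of the overall loop may help to fix the idea. This reuses the FindNearestColor helper from earlier, stores the image in a simple Color[,] array, and borrows Floyd–Steinberg style offsets and weights purely as an example - none of this is the final implementation from the later posts, and how the error is calculated is explained below:

private static void DitherImage(Color[,] pixels, Color[] palette)
{
  int height = pixels.GetLength(0);
  int width = pixels.GetLength(1);

  // Each entry is a forward neighbour (to the right, or on the row
  // below) and the share of the error, in sixteenths, it receives
  (int dx, int dy, int weight)[] diffusion =
  {
    ( 1, 0, 7),
    (-1, 1, 3),
    ( 0, 1, 5),
    ( 1, 1, 1)
  };

  for (int y = 0; y < height; y++)
  {
    for (int x = 0; x < width; x++)
    {
      Color original = pixels[y, x];
      Color reduced = FindNearestColor(original, palette);

      // The error is the per-channel difference between the original
      // pixel and its reduced replacement
      int errorR = original.R - reduced.R;
      int errorG = original.G - reduced.G;
      int errorB = original.B - reduced.B;

      pixels[y, x] = reduced;

      // Push a share of the error onto neighbours that haven't been
      // processed yet; pixels behind the current one are never touched
      foreach ((int dx, int dy, int weight) in diffusion)
      {
        int nx = x + dx;
        int ny = y + dy;

        if (nx >= 0 && nx < width && ny < height)
        {
          Color neighbour = pixels[ny, nx];

          pixels[ny, nx] = Color.FromArgb(
            Clamp(neighbour.R + errorR * weight / 16),
            Clamp(neighbour.G + errorG * weight / 16),
            Clamp(neighbour.B + errorB * weight / 16));
        }
      }
    }
  }
}

private static int Clamp(int value)
{
  return Math.Max(0, Math.Min(255, value));
}

Note that writing the adjusted neighbours straight back into byte-sized colour channels throws away fractional error; a production version would accumulate the error in a separate, higher-precision buffer.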
So how do you determine the error? Well, hopefully it is clear
that you don't dither an image as a single process. There has
to be another piece in the puzzle, a process to transform a
value. The error, therefore, is the difference between the
original value and the new value. When it comes to images, this
is typically going to be a form of colour reduction, for example
from 32-bit (16 million colours) to 8-bit (256 colours).
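For example, using the web safe palette from earlier, a pixel with the value (201, 88, 66) would be reduced to the nearest entry, (204, 102, 51), giving an error of (-3, -14, 15) - and it is this per-channel difference that gets spread over the neighbouring pixels.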
The diagram below tries to show what I mean - the grey boxes are
pixels that have been processed. The blue box is the pixel that
is currently being transformed, and the green boxes are the
unprocessed pixels that are candidates for error diffusion. The
arrows simply highlight that the candidates are always forward
of the current pixel, and not behind it.
It's worth repeating that the error is not applied to any
previously transformed value. If you do modify an already
processed value, then you would need to have some way of
reprocessing it (as the combined value+error may not be valid
for your reduction method), which could get messy fast.
Next Steps
Hopefully this article serves as at least a basic, high-level
overview of dithering - additional posts will deal with the
actual implementation of dithering.
Update History
2015-06-06 - First published
2020-11-21 - Updated formatting