18-bit RGB palettes are an old format used by VGA displays of yesteryear (although, interestingly, Wikipedia states they are still used by many LCD monitors). These palettes use 6 bits for each of the red, green and blue channels, and usually allowed a maximum of 256 colours from the 262,144 unique colours available.
Files using this format are usually quite recognisable, having a `pal` extension and a size of 768 bytes.
You can find examples of these palettes in many old games - files I have tested during the writing of this article came from Command and Conquer, Powermonger, Ultima 4, Stonekeep and Hardcore 4x4. Just to mix things up though, some palettes used 24-bit colour - examples I have tested include StarTopia and (I think) Daggerfall.
This article will describe how to read and write 18-bit palette files.
As this minor odyssey originally started with a user request to add support for "Westwood" palettes to our software, the example project was created using Command and Conquer palettes.
As I also own a couple of Red Alert games, I tested this project on palettes extracted from game files using XCC Mixer.
Finally, I also tested on palette files found in various games I have installed as mentioned in the introduction above.
Reading 18-bit Palettes
The code I present in this article is example code and can be optimised in various ways (for example, not reading and writing a single byte at a time); however, I chose to keep the sample code fairly basic to avoid complicating the article. A more optimised version can be found on our GitHub page.
Reading the palettes is straightforward - the number of colours present is the size of the file divided by 3. This is normally 768 bytes for a total of 256 colours.
Each colour is then represented by 3 bytes for the red, green and blue channels. As each value is a single byte, there are no endian issues to worry about.
For each byte read, I use bit shifting to move the value two positions to the left, so the top two bits are discarded and the bottom two are set to zero. This converts the value from the 0-63 range to 0-255, which is a lot easier to work with in most editing software. We can then combine the three channels together to get our RGB colour.
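The steps above can be sketched as follows. The article's sample project is C#, but the logic translates directly; this is an illustrative Python version with a hypothetical `read_18bit_palette` helper:

```python
def read_18bit_palette(path):
    """Read an 18-bit VGA palette file, returning each colour as a packed 0xRRGGBB integer."""
    with open(path, "rb") as f:
        data = f.read()

    colours = []
    for i in range(0, len(data), 3):
        # each channel is one byte holding a 6-bit value (0-63);
        # shift left two places to scale it up to the 0-255 range
        r = data[i] << 2
        g = data[i + 1] << 2
        b = data[i + 2] << 2
        # combine the three channels with shifts and bitwise OR
        colours.append((r << 16) | (g << 8) | b)

    return colours
```

The number of colours falls out naturally from iterating the data in 3-byte steps, so a 768-byte file yields 256 entries.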
In the above code, I'm using the bitwise OR operator along with shifting to combine the three channel values into a single integer value. You could use `Color.FromArgb(r, g, b)`, but then you'd need to manually make sure the `r`, `g` and `b` values are between `0` and `255`. If you open a 24-bit palette using the above code, the shifted values will be too large and will cause `Color.FromArgb` to throw an exception.
We can now read 18-bit palette files. (I did say it was simple!)
To make this code load 24-bit palettes instead of 18-bit, just remove the bit-shift.
Writing 18-bit Palettes
Writing an 18-bit palette is the exact reverse of reading. We simply loop through our colours and write a single byte for each of the 3 supported channels. However, remembering that the 18-bit format uses 6 bits per channel, we need to convert our 0-255 range down to 0-63. This is easy enough: shift the bits right instead of left.
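The reverse operation can be sketched in the same illustrative style (again Python rather than the article's C#, with a hypothetical `write_18bit_palette` helper that accepts the packed `0xRRGGBB` integers produced when reading):

```python
def write_18bit_palette(path, colours):
    """Write packed 0xRRGGBB colours as an 18-bit palette, one byte per channel."""
    with open(path, "wb") as f:
        for colour in colours:
            # unpack the three 8-bit channels from the combined integer
            r = (colour >> 16) & 0xFF
            g = (colour >> 8) & 0xFF
            b = colour & 0xFF
            # shift right two places to scale 0-255 down to the 6-bit 0-63 range
            f.write(bytes((r >> 2, g >> 2, b >> 2)))
```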
To make this code save 24-bit palettes instead of 18-bit, just remove the bit-shift.
This will perfectly save existing 18-bit palettes loaded into the program via the code in the previous section. But what happens if you try to save colours that cover ranges beyond what 18-bit supports? Given there are 16,777,216 unique colours in a 24-bit palette, there are a lot of "missing" colours. Fortunately, the values are automatically converted to the nearest 18-bit equivalent, and the result is so close it's quite likely you wouldn't notice any difference.
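How close can be checked with a quick sketch: converting an 8-bit channel value down to 6 bits and back up (as the load and save code does) simply discards the bottom two bits, so the worst-case error per channel is 3.

```python
def round_trip(value):
    """Convert an 8-bit channel value to 6 bits and back, as the save/load code does."""
    return ((value >> 2) & 0x3F) << 2

# the bottom two bits are discarded, so the largest possible loss per channel is 3
worst_error = max(v - round_trip(v) for v in range(256))
```

Values whose bottom two bits are already zero (such as 128) survive the round trip unchanged, which is why many swatches match exactly.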
The screenshots below show examples of converting 24-bit colour palettes into 18-bit - you can see the converted results are very similar. Swatches outlined in black are direct matches; those outlined in red are the ones that don't quite fit. The RGB values displayed on the right-hand side of each mismatch show that the difference is +/-3 at worst. It does mean that editing the palettes with software such as our own Palette Editor (which will get 18-bit support in the next version) could mean subtle shifting when converting 24-bit palettes to 18-bit, but the output looks almost identical.
This is such simple code that I was hesitant about writing an article on it. In the end, though, I decided it was worth it: I had always assumed these RGB-triplet palettes were 24-bit, and in the past I have been puzzled when opening what I now know to be an 18-bit palette, wondering why it was so dark. Hopefully someone else will find this information useful too.
The usual sample project is available from the links below, and a reusable library can be found on our GitHub page.
- 2017-12-26 - First published
- 2020-11-22 - Updated formatting
Like what you're reading? Perhaps you'd like to buy us a coffee?