It's usually a bad idea to start optimizing your program too early, but I'd been running into some performance issues with my Hostile Takeover
engine. The game would slow down considerably when there were many sprites to draw. The issue was most noticeable with many characters on the screen at once, since each character consists of about 14-16 subsprites that are all drawn individually.
Until now, the subsprites for a single character frame were mostly gathered on a single texture atlas, but there were still some instances where the subsprites were spread out over several atlases - the weapon subsprites, for instance. When a character carrying a weapon was drawn to the screen, the code would first draw all the subsprites "behind" the weapon subsprite, then switch over to the texture atlas with the weapon and draw that, and then switch back to the original texture atlas to draw the remaining character subsprites "in front of" the weapon subsprite. Switching texture atlases like that is expensive, as the GPU has to bind new texture data for every switch.
I had originally just planned on gathering all the subsprites that make up a character frame on one texture atlas, so that the code wouldn't need to switch back and forth between different atlases when drawing a single character. I did that, and I did get a slight performance increase, but it wasn't enough. So, what I've done now is decrease the image quality of the sprites themselves.
Until now, the sprites were all RGBA8888. This means that each of the red, green and blue channels (RGB) and the alpha channel (A) uses 8 bits of memory, making the images 32-bit (8x4). That allows for 16,777,216 different colors. It also takes up a lot of memory, though, which in turn means the GPU has to push a lot of data for each sprite that needs drawing.
The sprites are now 16-bit RGBA4444 instead. This immediately halves the amount of data and memory used, and provides a pretty significant performance boost. 16-bit "only" allows for 4,096 different colors, though. Compared to the 16,777,216 in 32-bit, you'd think the difference in quality would be pretty severe. I don't think it is, though. See for yourself:
At a quick glance, I almost can't tell the difference. But if you look closely, you'll notice that there's a slight grain to the RGBA4444 image. This is because dithering
is used to simulate a higher number of colors. I actually kinda like this grain effect. It adds a texture and feel to the images that I can't quite put my finger on. So I think I'll stick with this. Increased performance, lower memory use and a grain effect that I kinda like. It's a win-win!
What do you think?