Now I'm wondering if I should use a 32px grid for the game I'm starting to create, or rather a 28px grid. The first seems more logical, since it's divisible by 8 and a 'computer number'.
Why do we like to use powers of two?
Chapter I

Once upon a time, when CPUs ran at speeds below 8 MHz, sprites were computationally expensive to draw, which meant there were severe limits on what you could animate or move on a game screen.
So to make programmers happy, something called "hardware sprites" was invented, which allowed sprites to be drawn in relatively few CPU ticks. These hardware sprites were fixed 16x16-pixel arrays, so if you wanted something bigger, you would combine 4 hardware sprites, giving you 32x32 pixels.
We don't have those limitations anymore, but part of their legacy is that we still think of 16x16 and 32x32 as being correct sizes for sprites, and 28x28 as being sinful.
Chapter II

Once upon a time, when computers had 16-colour palettes, the screen was described in terms of bitplanes. Each bitplane is a 1 bit-per-pixel bitmap of the screen, and 4 such bitplanes were superimposed to give 4 bits per pixel = 16 colours.
Because memory is arranged in bytes, and each byte contains 8 bits, each byte corresponds to 8 pixels in a bitplane. Scrolling the on-screen graphics by 1 pixel was difficult because it meant splitting bytes into individual bits, barrel-shifting them, and applying various OR and AND operations to produce the new image.
It was much easier if background graphics were shifted 8 pixels at a time, since whole bytes could then be copied without needing to dissect them.
So graphic tiles were made some multiple of 8 pixels wide (i.e., 8, 16, 24, 32) so that bitplanes could be translated easily in memory.
We don't have those limitations anymore, but part of their legacy is that we still think of multiples of 8 as being a correct size for background graphics, and anything else is sinful.
