Topic: Fast pixel work in c++?
drChengele
« on: March 22, 2010, 02:13:52 PM »

I would like to make a pixel-based game in c++. It would be a 2d real-time strategy game with lots of single-pixel units fighting it out on a single screen. Even one screen is over a million pixels, and while of course not all of them would be changed every frame, the nature of the game would require accessing and modifying a lot of single pixels every tick (especially since I'd like water, explosions, rain, etc. all to be updated on a per-pixel basis).

Obviously the usual approach of sprites and geometry would be inadequate for this task (there would be thousands of 1x1 to 3x3 sprites). I usually work with DirectX9, so I thought I'd use an offscreen texture that I would Lock, edit the pixels whose colors changed, then Unlock, and render to the backbuffer. I understand locking and unlocking for extended periods of time is not recommended, but with sufficient preprocessing I think I can make the "locked" code pretty compact.
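Roughly what I have in mind, as an untested sketch (assuming an already initialized IDirect3DDevice9* device; worldColorAt() is just a placeholder for whatever game-state lookup I end up with):

Code:
#include <d3d9.h>

// Created once: a dynamic texture that is cheap to lock every frame.
// (D3DUSAGE_DYNAMIC in D3DPOOL_DEFAULT is what makes D3DLOCK_DISCARD legal.)
IDirect3DTexture9* tex = NULL;
device->CreateTexture(1024, 1024, 1, D3DUSAGE_DYNAMIC,
                      D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &tex, NULL);

// Every frame: lock, rewrite the pixels, unlock, then draw to the backbuffer.
// Note: D3DLOCK_DISCARD throws the old contents away, so the whole texture
// gets rewritten; dropping the flag would let me touch only changed pixels,
// at the risk of stalling the GPU.
D3DLOCKED_RECT lr;
if (SUCCEEDED(tex->LockRect(0, &lr, NULL, D3DLOCK_DISCARD)))
{
    for (int y = 0; y < 1024; ++y)
    {
        // Pitch can be wider than width * 4, so step rows by Pitch.
        DWORD* row = (DWORD*)((BYTE*)lr.pBits + y * lr.Pitch);
        for (int x = 0; x < 1024; ++x)
            row[x] = worldColorAt(x, y);  // placeholder game-state lookup
    }
    tex->UnlockRect(0);
}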

Has anyone ever tried anything similar in DX/C++?

The only problem I see here is that everything will be done by the CPU, and there will be hundreds of units to update every frame. Even with all the tricks and optimizations, there will still have to be stuff like A* pathfinding, AI, sight calculations, etc., so I am concerned that this much strain on the CPU will make the game unplayable. Is there a way to access or "print" single pixels using the GPU?
Ishi
« Reply #1 on: March 22, 2010, 02:59:12 PM »

It could be done on the GPU by updating a vertex buffer with the positions/colours and drawing the whole thing as points in a single draw call. The drawing itself would be fast, but uploading a vertex buffer with that many individual points every frame could get hefty. If you rendered to a texture, you could then scale the image up as big as you want afterwards.
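Off the top of my head it would look something like this in D3D9 (untested, and assuming an initialized IDirect3DDevice9* device):

Code:
#include <d3d9.h>
#include <vector>

// One pre-transformed vertex per live pixel/unit.
struct PointVertex { float x, y, z, rhw; D3DCOLOR color; };
const DWORD POINT_FVF = D3DFVF_XYZRHW | D3DFVF_DIFFUSE;

std::vector<PointVertex> pts;
// ...refill pts from the unit positions/colours each frame...

if (!pts.empty())
{
    device->SetFVF(POINT_FVF);
    device->DrawPrimitiveUP(D3DPT_POINTLIST, (UINT)pts.size(),
                            &pts[0], sizeof(PointVertex));
}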

Alternatively, on the CPU, I've done software rendering in SDL before. I set the video mode with the SDL_SWSURFACE | SDL_DOUBLEBUF flags, allocated my own screen buffer and did the rendering into that, then at the end of the loop locked the actual SDL surface and memcpy'd to it row by row.
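The skeleton of it was something like this (SDL 1.2, reconstructed from memory, so treat it as a sketch rather than working code):

Code:
#include <SDL.h>
#include <cstring>

int main(int, char**)
{
    const int W = 640, H = 480;
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Surface* screen =
        SDL_SetVideoMode(W, H, 32, SDL_SWSURFACE | SDL_DOUBLEBUF);
    Uint32* buffer = new Uint32[W * H]();  // my own off-screen buffer

    // ...game loop: all rendering writes into buffer, then each frame:
    if (SDL_MUSTLOCK(screen)) SDL_LockSurface(screen);
    for (int y = 0; y < H; ++y)
    {
        // row by row, since the surface pitch may not equal W * 4
        std::memcpy((Uint8*)screen->pixels + y * screen->pitch,
                    buffer + y * W, W * sizeof(Uint32));
    }
    if (SDL_MUSTLOCK(screen)) SDL_UnlockSurface(screen);
    SDL_Flip(screen);

    SDL_Quit();
    return 0;
}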

I'm not sure if using the intermediate array is faster than keeping the SDL surface unlocked and drawing directly to it, but it does have the advantage of letting you apply your own upscaling. For 2x upscaling, I went along the row pixel by pixel but wrote each pixel twice, so

#X#X## becomes ##XX##XX####

then memcpy'd that row to make

##XX##XX####
##XX##XX####

I didn't get too much speed loss from that.
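In code, the 2x version was basically this (same caveats as the sketch above):

Code:
// buffer is the W-wide game buffer; screen is the locked 2W x 2H SDL surface.
for (int y = 0; y < H; ++y)
{
    const Uint32* srcRow = buffer + y * W;
    Uint32* dstRow =
        (Uint32*)((Uint8*)screen->pixels + (2 * y) * screen->pitch);
    for (int x = 0; x < W; ++x)
    {
        dstRow[2 * x]     = srcRow[x];   // each source pixel written twice,
        dstRow[2 * x + 1] = srcRow[x];   // doubling the row horizontally
    }
    // the second screen row is identical, so one memcpy duplicates it
    std::memcpy((Uint8*)screen->pixels + (2 * y + 1) * screen->pitch,
                dstRow, 2 * W * sizeof(Uint32));
}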

It's possible to do basically the same thing in DirectX, though when I did it, it was through an API our tutor gave us at uni that just handed you a pointer to the buffer, so I don't know the details.
drChengele
« Reply #2 on: March 22, 2010, 03:45:16 PM »

Thanks for the reply! Yeah, I'm worried about the vertex buffer approach too; supplying a million vertices every tick is probably not the way to go about this. I don't think it would be any faster than just applying a delta to an off-screen texture, but then, when working with the GPU, I've learned that intuition is often wrong.

Actually, I think I'll avoid upscaling altogether and just go for a 1:1 representation, but either way that's not a problem (especially since I'll be using Direct3D to render the surface, which will enable pixel shaders and things like that).
Glaiel-Gamer
« Reply #3 on: March 22, 2010, 05:14:55 PM »

make a 2D array
send it to the GPU as a texture

and render a screen-sized quad
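something like this (rough d3d9 sketch, assuming tex is the texture you uploaded and device is your device):

Code:
#include <d3d9.h>

// Pre-transformed full-screen quad, UVs covering the whole texture.
// The -0.5f offsets line texels up exactly with pixels in D3D9.
struct QuadVertex { float x, y, z, rhw, u, v; };
const DWORD QUAD_FVF = D3DFVF_XYZRHW | D3DFVF_TEX1;

const float W = 1024.0f, H = 768.0f;  // backbuffer size
QuadVertex quad[4] = {
    { -0.5f,    -0.5f,    0.0f, 1.0f, 0.0f, 0.0f },
    { W - 0.5f, -0.5f,    0.0f, 1.0f, 1.0f, 0.0f },
    { -0.5f,    H - 0.5f, 0.0f, 1.0f, 0.0f, 1.0f },
    { W - 0.5f, H - 0.5f, 0.0f, 1.0f, 1.0f, 1.0f },
};

device->SetTexture(0, tex);   // tex holds the 2D pixel array
device->SetFVF(QUAD_FVF);
device->DrawPrimitiveUP(D3DPT_TRIANGLESTRIP, 2, quad, sizeof(QuadVertex));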