Author Topic: Copying pixels from texture to texture in OpenGL  (Read 16201 times)
DrDerekDoctors
« on: October 08, 2008, 05:33:55 AM »

Hello, I'm stupid, can someone explain precisely how to copy a rectangular block of pixels from one part of a texture to either another part of the same texture or another texture altogether in OpenGL?

It's effectively so I can do a single draw of a minimap instead of drawing it as bajillions of little tiles.

Ta'

PS. The fate of the human miners in a remote sector of hostile alien space depends on this knowledge...
Decipher
« Reply #1 on: October 08, 2008, 07:35:36 AM »

Umm, just copy the needed bytes and re-upload the texture to video memory. Last time I checked we still didn't have Amiga-style hardware blitters on GPUs. Though!

There's some other stuff you might find interesting: in 2007 the ARB approved a new extension, available from OpenGL 1.1 onwards, called EXT_framebuffer_blit. With it you can copy a block of pixels from a source framebuffer to a destination framebuffer. So here's my suggestion for doing what you want:
[For two different textures]
Create two framebuffer objects (FBOs), attach your source and destination textures to those newly created lovables, bind them to the read and draw targets the extension provides, blit, and boom! You should have what you wanted. (Rough sketch below.)

A close demonstration of handling textures as FBOs can be found here. If it doesn't work, you're free to say bad things about me.
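Something along these lines (an untested sketch; srcTex, dstTex and the rectangle coordinates are placeholders, and it needs both EXT_framebuffer_object and EXT_framebuffer_blit to be present):
Code:
/* attach the source and destination textures to two FBOs, then blit between them */
GLuint fbos[2];
glGenFramebuffersEXT(2, fbos);

glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, fbos[0]);
glFramebufferTexture2DEXT(GL_READ_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, srcTex, 0);

glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, fbos[1]);
glFramebufferTexture2DEXT(GL_DRAW_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, dstTex, 0);

/* copy a w x h rectangle from (srcX, srcY) in srcTex to (dstX, dstY) in dstTex */
glBlitFramebufferEXT(srcX, srcY, srcX + w, srcY + h,
                     dstX, dstY, dstX + w, dstY + h,
                     GL_COLOR_BUFFER_BIT, GL_NEAREST);

/* back to the window-system framebuffer, then clean up */
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glDeleteFramebuffersEXT(2, fbos);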

Edit:
Forgot to say: ATI might not play nice with this extension, just like they don't handle framebuffer objects as... framebuffer objects (!?). I mean, FBOs are implemented at the software level in ATI's drivers. So in the worst case you might get the overhead of a RAM blit plus a re-upload of a regular texture. Though that worst case only applies if ATI even supports EXT_framebuffer_blit, which it might not on older drivers. In that case you might start by sending their driver team this awesome e-mail that starts with:

Dear ATI,
Fu*k you! ...
« Last Edit: October 08, 2008, 07:42:35 AM by Decipher »
diwil
« Reply #2 on: October 08, 2008, 07:49:09 AM »

As far as I know, there's no easy way to do this. A quick Google search revealed that you first need to render/copy the original texture to the framebuffer (with a regular orthographic projection), after which you can use glCopyTexSubImage2D() to copy a region of the framebuffer into your other texture.
Code:
void glCopyTexSubImage2D(GLenum  target,   // e.g. GL_TEXTURE_2D
                         GLint   level,    // mipmap level, usually 0
                         GLint   xoffset,  // x offset into the destination texture
                         GLint   yoffset,  // y offset into the destination texture
                         GLint   x,        // lower-left x of the framebuffer region to read
                         GLint   y,        // lower-left y of the framebuffer region to read
                         GLsizei width,    // width of the copied region
                         GLsizei height);  // height of the copied region
That's how glCopyTexSubImage2D works. Hope this helped...
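Put together, the whole thing might look roughly like this (just a sketch; srcTex, dstTex and the sizes/offsets are made-up names, and it assumes texturing is enabled and an orthographic projection mapping one unit to one pixel is already set up):
Code:
/* 1) draw the source texture 1:1 into the back buffer */
glBindTexture(GL_TEXTURE_2D, srcTex);
glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f,        0.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex2f((float)texW, 0.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex2f((float)texW, (float)texH);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(0.0f,        (float)texH);
glEnd();

/* 2) copy a w x h rectangle of the framebuffer into the destination texture */
glBindTexture(GL_TEXTURE_2D, dstTex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0,   /* target, mipmap level                 */
                    dstX, dstY,         /* where the pixels land in dstTex      */
                    srcX, srcY,         /* lower-left corner in the framebuffer */
                    w, h);              /* size of the copied rectangle         */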

Also, Decipher's method of Frame Buffer Objects will probably work better. I haven't been acquainted with those yet, so I can't really help more.
Decipher
« Reply #3 on: October 08, 2008, 08:21:55 AM »

Lynchpin, I don't think that method is convenient, as he's trying to copy pixels from texture to texture, not from screen to texture. With your approach, to do this you need to resize your viewport and draw an orthographic quad with the source texture on it. This has many drawbacks, such as:
  • The viewport must be sized to the texture's power-of-two dimensions ((2^nW) × (2^nH)).
  • You're limited to a maximum texture size of the nearest power of two to the target window's size.
  • The pixel data can be altered by viewport scaling.
  • Changing viewports is not a basic crop operation; it's a very heavy state switch.
  • You occupy the PCI/PCI-E/AGP bus every frame by copying back and forth to and from the current framebuffer (GL_READ_BUFFER).
  • Gazillions of state changes from the disable/re-enable drama every frame, and all of this just to draw a stupid textured quad.

Other than that, Lynchpin's method is a guaranteed-to-work approach. Though I strongly advise against it, as it is very slow.
diwil
« Reply #4 on: October 08, 2008, 08:36:39 AM »

That's all true. I'm still in the midst of learning OpenGL, so I have no bloody clue about the more advanced subjects; I'm only just now reading up on Frame Buffer Objects, to get a good pixel scaling effect going for my engine.

So, yeah, I guess what Decipher said is the best way to go. However, with cards that do not support FBOs, you'd have to use my approach, with the appropriate viewport scaling.
DrDerekDoctors
« Reply #5 on: October 08, 2008, 08:49:26 AM »

Yeah, I was fearing that I'd possibly have to use glCopyTexSubImage2D. The other method I can use is to keep the data in software, do the copying in software, then delete the texture in VRAM and re-upload it (which is easily done). However, I suppose this will result in fragmentation of the VRAM (although I don't know how bad that is likely to be; would it occur at all if the new and old textures were the same size?).
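For reference, the software version would be something like this (a rough sketch; the names are placeholders, 'pixels' is an RGBA shadow copy of the texture kept in system memory, and the source/destination rectangles are assumed not to overlap). Note it updates the texture in place with glTexSubImage2D instead of deleting and recreating it, which should sidestep the fragmentation question:
Code:
#include <string.h>   /* memmove */

void copy_rect_and_reupload(GLuint tex, unsigned char *pixels, int texW,
                            int srcX, int srcY, int dstX, int dstY, int w, int h)
{
    int row;

    /* move the rectangle inside the CPU copy, one row at a time (4 bytes per pixel) */
    for (row = 0; row < h; ++row) {
        memmove(pixels + ((dstY + row) * texW + dstX) * 4,
                pixels + ((srcY + row) * texW + srcX) * 4,
                (size_t)w * 4);
    }

    /* re-upload only the changed region; the texture object itself is never freed,
       so the driver has no reason to reallocate (and so fragment) VRAM */
    glBindTexture(GL_TEXTURE_2D, tex);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, texW);   /* rows in 'pixels' are texW pixels wide */
    glTexSubImage2D(GL_TEXTURE_2D, 0, dstX, dstY, w, h,
                    GL_RGBA, GL_UNSIGNED_BYTE,
                    pixels + (dstY * texW + dstX) * 4);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
}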

Speed isn't such an issue, though, as it's a very occasional thing that occurs between levels.
Decipher
« Reply #6 on: October 08, 2008, 11:54:18 PM »

Whether fragmentation occurs depends on the driver's implementation of its allocator functions, though I'm quite sure both ATI and nVidia (nVidia at least, I know for sure) are past that drama.

But pay attention to the fact that re-uploading the textures means occupying whatever bus your graphics card is connected to. So if you were to do it every frame, you'd effectively end up doing exactly what Lynchpin described anyway.
DrDerekDoctors
« Reply #7 on: October 09, 2008, 02:36:08 AM »

S'no worry; as it's just to draw the landscape into the mini-radar, it'll only happen once per level. Goodoh.
LtJax
« Reply #8 on: October 12, 2008, 07:59:03 AM »

You could use glGetTexImage and load the image into a PBO, and then read from that same PBO using glTexSubImage2D. That's easy to implement, doesn't require a sync, and is pretty well supported. Either way, the PBO extension is basically mandatory if you wanna move data around in the GL. See http://www.opengl.org/registry/specs/ARB/pixel_buffer_object.txt
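Roughly like so (an untested sketch; srcTex, dstTex and the sizes are placeholders, and it uses the ARB-suffixed entry points from the spec linked above, assumed to be already loaded):
Code:
void copy_texture_via_pbo(GLuint srcTex, GLuint dstTex, GLsizei w, GLsizei h)
{
    GLuint pbo;
    glGenBuffersARB(1, &pbo);

    /* pack: read the whole source texture into the PBO (the data never has to
       come back to the CPU, so no stall) */
    glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pbo);
    glBufferDataARB(GL_PIXEL_PACK_BUFFER_ARB, w * h * 4, NULL, GL_STREAM_COPY_ARB);
    glBindTexture(GL_TEXTURE_2D, srcTex);
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0); /* 0 = offset into PBO */
    glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, 0);

    /* unpack: feed the same PBO into the destination texture */
    glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, pbo);
    glBindTexture(GL_TEXTURE_2D, dstTex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_RGBA, GL_UNSIGNED_BYTE, 0);                 /* 0 = offset into PBO */
    glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, 0);

    glDeleteBuffersARB(1, &pbo);
}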
Also, you could probably solve your problem by composing the tiles in the fragment shader, using another texture (which would be smaller and easy to modify) that you use to choose the appropriate tile for each texel.
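If you go that route, the fragment shader for the tile-index idea could look something like this (a sketch only; the uniform names are made up, it assumes a GL_NEAREST-filtered index texture whose red channel holds the tile number, and the GLSL source is written out as a C string):
Code:
static const char *minimapFragSrc =
    "uniform sampler2D indexMap;  /* one texel per map cell, red = tile number */\n"
    "uniform sampler2D tileAtlas; /* all tiles packed into one texture         */\n"
    "uniform vec2 mapSize;        /* map size in cells                         */\n"
    "uniform vec2 atlasTiles;     /* atlas size in tiles                       */\n"
    "void main() {\n"
    "    vec2  cell   = floor(gl_TexCoord[0].xy * mapSize);\n"
    "    vec2  inCell = fract(gl_TexCoord[0].xy * mapSize);\n"
    "    float id     = floor(texture2D(indexMap, (cell + 0.5) / mapSize).r * 255.0 + 0.5);\n"
    "    vec2  tile   = vec2(mod(id, atlasTiles.x), floor(id / atlasTiles.x));\n"
    "    gl_FragColor = texture2D(tileAtlas, (tile + inCell) / atlasTiles);\n"
    "}\n";
Updating the minimap then becomes a tiny glTexSubImage2D on the index texture rather than any pixel copying at all.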

muku
« Reply #9 on: October 12, 2008, 10:23:57 AM »

Quote from: LtJax
Either way, the PBO extension is basically mandatory if you wanna move data around in the GL. See http://www.opengl.org/registry/specs/ARB/pixel_buffer_object.txt

Beware though, PBOs are only supported in relatively recent graphics cards.
LtJax
« Reply #10 on: October 13, 2008, 04:30:45 AM »

Quote from: muku
Beware though, PBOs are only supported in relatively recent graphics cards.

That's just not true. It's been supported since the GeForce4 and Radeon 7500 models (although driver support did appear later). Even the crappy Intel GMA 750 supports it.

muku
« Reply #11 on: October 13, 2008, 09:48:31 AM »

Quote from: muku
Beware though, PBOs are only supported in relatively recent graphics cards.

Quote from: LtJax
That's just not true. It's been supported since the GeForce4 and Radeon 7500 models (although driver support did appear later). Even the crappy Intel GMA 750 supports it.

Really? That's strange: the OpenGL Extensions Viewer puts GL_ARB_pixel_buffer_object in the OpenGL 2.1 category and tells me that my Radeon 9500 doesn't support it. It also bails out when I try to run the PBO test. So, what am I missing?
Decipher
« Reply #12 on: October 13, 2008, 08:35:36 PM »

Quote from: muku
Beware though, PBOs are only supported in relatively recent graphics cards.

Quote from: LtJax
That's just not true. It's been supported since the GeForce4 and Radeon 7500 models (although driver support did appear later). Even the crappy Intel GMA 750 supports it.

Shoot me if I'm wrong, but there's something weird about this statement. ATI never supported the ARB version of the PBO extension (they have their somewhat stupid R2VB instead), which is what you linked to. In fact, ATI never supported any *BO at the hardware level (the exception being VBOs, if I recall correctly); they were all just a bunch of driver code, most likely running in software.

Now let's come to this "they've been supported since the GeForce4" thing. Are you sure you're not mixing up PBOs with pbuffers?