  Show Posts
1  Developer / Technical / Re: Light response textures on: June 22, 2012, 03:54:18 AM
Isn't this just a simple implementation of a BRDF?

http://en.wikipedia.org/wiki/Bidirectional_reflectance_distribution_function

2  Developer / Technical / Re: Linear Algebra and Calculus help for a struggling programmer on: June 09, 2012, 01:33:08 AM
We just don't draw our circles in the same way. You use your while loop to draw it on the CPU, I use the mathematical formula to procedurally generate a smoothed circle texture in a shader on the GPU:

gl_FragColor = vec4(1.0) - abs(0.5 - distance(texCoord, vec2(0.2, 0.1)));

Distance here is implemented using a variant of the formula you mentioned:

distance^2 = (xb-xa)^2 + (yb-ya)^2
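
For comparison, the same distance-based falloff evaluated per pixel on the CPU could look roughly like this (just a sketch; the size, radius and falloff values are arbitrary):

Code:
#include <cmath>
#include <vector>

// Fill a grayscale texture with a smoothed circle using the distance formula above.
std::vector<float> makeCircleTexture(int size, float radius, float falloff) {
    std::vector<float> pixels(size * size);
    const float cx = size * 0.5f, cy = size * 0.5f;
    for (int y = 0; y < size; ++y) {
        for (int x = 0; x < size; ++x) {
            float dx = x - cx, dy = y - cy;
            float dist = std::sqrt(dx * dx + dy * dy);  // distance^2 = dx^2 + dy^2
            float v = 1.0f - (dist - radius) / falloff; // fades out past the radius
            pixels[y * size + x] = std::fmin(std::fmax(v, 0.0f), 1.0f);
        }
    }
    return pixels;
}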

Linear algebra is used for everything from color space conversions to raytracing and OpenGL. If you work a lot in such environments, you'll sooner or later find yourself wanting to learn more about it.
3  Developer / Technical / Re: Curve Fitting Algorithms on: March 08, 2012, 03:49:46 PM
Don't do that. Use a normal distribution curve instead. Then you can just pick a cumulative distribution function that places your top 100 scores in the top Nth percentile, which is probably more in line with the correct ranking. A total games played count or an average overall score for all games would help to calculate N, but may not necessarily be available. If not, you can just scale N upwards over time, to simulate how the scores shift with an increasing player base.
I'd suggest reading up on the normal distribution on Wikipedia.
I suggest this because you mentioned lacking a useful slope on the top 100, which I would assume is indicative of outliers in a Gaussian distribution, especially in a gaming environment where you'll have a handful of great players and a large number of casuals.
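To make that concrete, here's a sketch of the percentile lookup, assuming you know (or estimate) the mean and standard deviation of all scores; the numbers in the comment are made up:

Code:
#include <cmath>

// CDF of a normal distribution: the fraction of scores expected to fall below 'score'.
double normalCdf(double score, double mean, double stddev) {
    return 0.5 * (1.0 + std::erf((score - mean) / (stddev * std::sqrt(2.0))));
}

// Example: a score of 9000 against an estimated mean of 5000 and stddev of 2000
// is two standard deviations up, i.e. roughly the 97.7th percentile.
// double p = normalCdf(9000.0, 5000.0, 2000.0);  // ~0.977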
4  Developer / Technical / Re: What pros do to protect their assets? C++ on: February 19, 2012, 09:56:31 AM
A good image viewer will be able to identify the file header if present. I have the impression larger studios don't really go out of their way to protect assets; any "protection" is just compressing or packaging assets in a way that makes them easier to handle for updates and/or development. It also allows them to reuse existing libraries, like libpng or DevIL, removing a potential bug source altogether.
If you feel a strong need to protect your assets, do it late in the development process, so you won't have to go through extra steps each time you create an asset. Once finished, you could easily add a simple stream cipher or something to each file and store the key in your executable.
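To be clear about what I mean by a simple stream cipher, something along these lines would do. This is only a sketch, and a keyed XOR like this deters casual browsing of your data files, nothing more:

Code:
#include <cstdint>
#include <vector>

// Tiny xorshift32 keystream XORed over the data; the key (non-zero) would be baked
// into the executable. Running the same function again decrypts.
void xorCipher(std::vector<uint8_t>& data, uint32_t key) {
    uint32_t state = key;
    for (uint8_t& byte : data) {
        state ^= state << 13;
        state ^= state >> 17;
        state ^= state << 5;
        byte ^= static_cast<uint8_t>(state);
    }
}

Run it over each asset as a late build step and you're done.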
5  Developer / Technical / Re: PHP/HTML Realtime Updates on: February 19, 2012, 09:45:09 AM
jQuery is quite stable across browsers. I use it whenever I'm coding websites, which is admittedly not often.
Even so, jQuery offers plugins for everything from ajax and DOM to in-place editing and events. The API is quite neat too.
6  Developer / Technical / Re: html <canvas>: drawImage() is so slow. Any tips? on: February 13, 2012, 01:48:46 PM
Then use WebGL (essentially OpenGL ES for the browser), which should be available in most major browsers and integrates with the canvas element if I remember correctly. See e.g. http://iquilezles.org/apps/shadertoy/ for an awesome example of this.
7  Developer / Technical / Re: Android C++ development on: January 27, 2012, 03:28:19 PM
3.0, in a not too far off future, if you prefer programming for the bleeding edge. Smiley
I'd probably avoid 1.1 altogether, since the lack of shaders means you're going to be learning outdated practices that won't be as useful or flexible in the future. It could easily cause more problems than it solves imho.
8  Developer / Technical / Re: Android C++ development on: January 23, 2012, 02:50:42 PM
I must disagree slightly with the hate on JNI. If you use the javah command line tool, there's not really much to it: all the native function interfaces for the class file you run it on are generated for you, and that's really the messiest bit. Once you have that, it's just ndk-build and System.load("library"), then you're good to go.
Of course, the communication is only Java->C/C++ using this, but I've never personally had a need to go in the other direction, so I can't really comment on how that works.
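For reference, the generated interface ends up looking something like this (a sketch; the package, class and library names are made up):

Code:
#include <jni.h>

// Matches a Java declaration along the lines of:
//   package com.example.game;
//   public class NativeBridge {
//       static { System.loadLibrary("game"); } // or System.load with a full path
//       public static native void update(float dt);
//   }
// javah generates this signature for you; you only fill in the body.
extern "C" JNIEXPORT void JNICALL
Java_com_example_game_NativeBridge_update(JNIEnv* env, jclass clazz, jfloat dt) {
    // C/C++ game logic goes here
}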

In general, I'd advocate avoiding the NativeActivity, even though I do prefer C++ myself. This is simply because the Java APIs are much more extensive and, as far as my experience goes, provide a huge superset of what the NDK exposes.
There is of course the alternative of hooking into all the private C++ framework components that the Android framework is built upon, but doing that will probably land you in a ton of compatibility issues.

Just thought I'd add in a link to http://developer.sonyericsson.com/wportal/devworld/technology/borrowaphone?cc=gb&lc=en here as well, which could be useful if anyone wants to test compatibility on multiple devices. (Disclaimer: I work at SoMC, so there's some professional interest in that suggestion. Gentleman)
9  Developer / Technical / Re: Android C++ development on: January 22, 2012, 03:31:01 PM
There aren't really any good and free C++ Android game engines. Most of the development is centered around Java. Your best bet is if you know OpenGL: you can set up the rendering context and input event handlers in Java, then use JNI to pass the data into C++.
There is a tool called javah built into the JDK that auto-generates header files for the native functions declared in Java, so the whole interface construction is rather painless. You could have a sample rendering a moving sprite up and running in an hour if you know your OpenGL well. You could even use the Android Java API to implement image loading (BitmapFactory) and upload the data to OpenGL texture IDs that you then hand over to C++. Make use of both worlds and getting started won't be that huge a deal.
I should also mention that if you set up an OpenGL context in Java and go to C++ via JNI, there are no issues with the OpenGL context, since it's the same thread and process space. OpenGL calls can be mixed between the languages however you like.

Good luck. Smiley
10  Developer / Technical / Re: Modern 3D APIs (aka "what can GPUs do these days?") on: January 20, 2012, 01:00:07 PM
@Linus: that looks really awesome! Thanks for sharing the info... my understanding is maybe not 100%, do you mean that for each particle you raycast vs a set of circles to find the intersection point, then use that as the particle's worldspace position?
Not exactly. Sphere marching is a rendering technique similar to raytracing, but much easier to implement on a GPU. Check out iquilezles.org for a multitude of 4k demos using this technique. What I do here is place a particle in 2D space, then unproject it into 3D, placing it where it touches the geometry. The unprojection in this case is a sphere marching algorithm instead of just computing 3D coordinates based on an existing depth map.
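To give a feel for the algorithm, the core sphere-marching loop is tiny. Here's a sketch where the scene is just a unit sphere; in a real demo sceneDistance would be whatever signed distance function describes your geometry:

Code:
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

// Distance from p to the nearest surface; here a single sphere of radius 1 at the origin.
static float sceneDistance(Vec3 p) {
    return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) - 1.0f;
}

// March along the ray, stepping by the distance to the nearest surface each iteration.
static bool sphereMarch(Vec3 origin, Vec3 dir, Vec3& hit) {
    float t = 0.0f;
    for (int i = 0; i < 128; ++i) {
        Vec3 p = add(origin, scale(dir, t));
        float d = sceneDistance(p);
        if (d < 0.001f) { hit = p; return true; } // close enough, treat as a hit
        t += d;
        if (t > 100.0f) break;                    // ray escaped the scene
    }
    return false;
}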
11  Developer / Technical / Re: Modern 3D APIs (aka "what can GPUs do these days?") on: January 19, 2012, 11:18:49 AM
I feel slightly more talkative on a keyboard. (The iPad really is horrible for writing.)

A couple of months back, I did something similar for a demo in OpenGL & OpenCL:


In this solution, I generate ~10000 particles that get a random lifetime, then respawn at new coordinates once they die.
I use a sphere marcher to generate a 3D position based on a 2D screenspace location.
Since I project the particles back into a 3D environment, it allows me to move the camera and have the particles stay in the same 3D position without degenerating.
Once the particles are generated and have 3D positions, physics and fluid dynamics can be applied to them fairly easily.

I would assume that directtovideo uses a similar solution, at least in the first two particle demos. However, I believe it's implemented using deferred shading:

1. Generate scene color, depth map, lighting & store in textures.
2. Send textures to particle engine.
3. When a particle is generated in screen space, use the depth map to unproject it into 3D (see the sketch after this list).
4. Color particle using scene information.
5. Render particles on-screen.
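
Step 3 is the interesting bit, so here's roughly what the unprojection looks like, sketched with glm and assuming the depth value is sampled from the depth texture in the usual [0,1] range:

Code:
#include <glm/glm.hpp>

// Reconstruct a world-space position from screen-space UV ([0,1]) and a sampled depth.
// invViewProj is glm::inverse(projection * view).
glm::vec3 unproject(glm::vec2 uv, float depth, const glm::mat4& invViewProj) {
    glm::vec4 ndc(uv * 2.0f - 1.0f, depth * 2.0f - 1.0f, 1.0f); // to normalized device coordinates
    glm::vec4 world = invViewProj * ndc;
    return glm::vec3(world) / world.w;                          // perspective divide
}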

If you don't have experience with newer versions of OpenGL, I'd recommend looking into GLSL, FBOs, VBOs and texture/matrix management in newer versions of OpenGL.

Without going into specifics, if you use a new version of OpenGL you program the GPU yourself to handle vertex shading, fragment output shading and so on. The old matrix handling (Push, Pop, LoadIdentity, etc.) is removed; you're instead expected to use a library (I'd recommend glm) to manage your 3D math.

Since you have access to programmable shaders, you specify everything from how each individual vertex is projected from 3D coordinates to the display, to how the resulting set of triangles is interpreted, to how each fragment in the resulting output is handled. This is separated into the three shader stages below. (Even more in OpenGL 4.x)
These shader stages are compiled into shader programs:

Code:
GLuint p = glCreateProgram();
GLuint shaders[3] = {
    glCreateShader(GL_VERTEX_SHADER),
    glCreateShader(GL_GEOMETRY_SHADER),
    glCreateShader(GL_FRAGMENT_SHADER)
};
for (int i = 0; i < 3; ++i) {
    glShaderSource(shaders[i], linecount, lines, lineslengths); // source strings for each stage
    glCompileShader(shaders[i]);  // check GL_COMPILE_STATUS via glGetShaderiv in real code
    glAttachShader(p, shaders[i]);
}
glLinkProgram(p);

Generally, you pipe data from the GPU, into stage one, stage two .. stage n, first by specifying a set of input data:

Code:
glUseProgram(p);
glBindBuffer(GL_ARRAY_BUFFER, somevertexbuffer);

// Two interleaved vec4 attributes per vertex, so the stride is 8 floats.
GLint loc0 = glGetAttribLocation(p, "attributename");
GLint loc1 = glGetAttribLocation(p, "attributename2");
glEnableVertexAttribArray(loc0);
glEnableVertexAttribArray(loc1);
glVertexAttribPointer(loc0, 4, GL_FLOAT, GL_FALSE, 8*sizeof(GLfloat), (void*)0);
glVertexAttribPointer(loc1, 4, GL_FLOAT, GL_FALSE, 8*sizeof(GLfloat), (void*)(4*sizeof(GLfloat)));

glm::mat4 m(1.0f);

// Matrices go through glUniformMatrix*; plain floats through glUniform1f and friends.
glUniformMatrix4fv(glGetUniformLocation(p, "mymatrix"), 1, GL_FALSE, glm::value_ptr(m));
glUniform1f(glGetUniformLocation(p, "myvalue"), 10.0f);

//Start the shaders on the configured input data.
glDrawArrays(GL_POINTS, 0, 100);

Then reading this data in the first shader stage:

Code:
#version 330

//The vertex shader runs once per input vertex, reading from the bound vertex attributes (selected by glDrawArrays).
in vec4 attributename;
in vec4 attributename2;

uniform mat4 mymatrix;

out vec4 outpos;

void main(){
    outpos = mymatrix*(attributename + attributename2); //"Convert" to clip coordinates; the math here is nonsensical, it's just showing the plumbing
}

The data is passed through the defined variables into the next stage with same variable names:

Code:
#version 330

//The geometry shader runs once per input primitive; the size of the input array is set by the input primitive type (a single point here).

layout(points) in;
layout(triangle_strip, max_vertices = 4) out;

in vec4 outpos[]; //out variables from the previous stage arrive as arrays, one element per vertex of the primitive

uniform float myvalue; //unused in this example, uniforms are simply visible in any stage that declares them

void main(){
 
    //Draw a quad at each point.
    gl_Position = outpos[0]+vec4(0.1,0.1,0.0,0.0);
    EmitVertex();
    gl_Position = outpos[0]+vec4(0.1,-0.1,0.0,0.0);
    EmitVertex();
    gl_Position = outpos[0]+vec4(-0.1,-0.1,0.0,0.0);
    EmitVertex();
    gl_Position = outpos[0]+vec4(-0.1,0.1,0.0,0.0);
    EmitVertex();
    EndPrimitive();
}

Finally, the primitives generated in the geometry shader are rasterized and sent to the final pass:

Code:
#version 330

out vec4 color;
void main(){

   //Set the fragment to a purdy yellow (alpha 1.0 so it's actually opaque).
   color = vec4(1.0,1.0,0.0,1.0);
}

So, this was a quick crash-course on everything you do to render in OpenGL.
Once that is over with, you want to chain everything:

Code:
glBindFramebuffer(GL_FRAMEBUFFER, myframebuffer);
glDrawArrays(...); //or DrawElements or whatever you want
glBindFramebuffer(GL_FRAMEBUFFER, 0);

Framebuffers allow you to render into textures. Textures in turn can be used in shader stages to spice things up, so you can end up with setups like:

Code:
setFB1();
render1();
setFB2();
render2();
setFinal();
renderHDR();

Allowing you to use multiple rendering iterations to post-process the initial output and create something else.
A fragment shader and framebuffer also allow you to render to multiple textures at once (Multiple Render Targets, MRT), which is what deferred shading is based on.
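
For completeness, setting up a framebuffer that renders into two textures at once looks roughly like this (a sketch; the textures are assumed to already be created at the right size and format):

Code:
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

// Attach two color textures and a depth texture (allocated elsewhere).
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, normalTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTex, 0);

// Map the fragment shader outputs to the color attachments.
GLenum buffers[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, buffers);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // something is misconfigured
}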

In general, deferred shading was introduced because a fragment can be shaded and then overdrawn multiple times. If the fragment shader is very heavy, that's a lot of completely unnecessary work. So instead, you store the data that would have been fed into the heavy fragment shader directly into textures.
Once we have the final textures where everything has been rendered, we can use this as input to a more complex shader where we do heavy processing. This processing is now bounded only by the resolution since there is no overdraw which could hamper performance.
So, with the particle demos, you just take this data and pass it into a particle engine, where you use it as a source for particle location, color, acceleration, etc.

Once the lifetime of a particle runs out, you just kill it and respawn it at the screenspace coordinates it was previously at, then unproject it into a new 3D position.

Branching is still fairly costly on a GPU, but if the particle regeneration is simple enough, this shouldn't hamper the performance of the particle engine much. You want to keep all your particles active at once, simply because there's no real reason to handle logic for them being inactive; just render them transparent if you have to. I would expect that in realistic scenes you'd generally have all particles at finite distances from the viewer, and thus visible.

In the above image, I take the output from the particle engine and pass it onwards into an HDR shader as well for kicks, since I do all my rendering to floating point textures.

I should shut up now and go code. Shocked

Quote
Without going into specifics
<- I suck at that Sad

tl;dr; was gonna show a cool picture, ended up explaining how OpenGL 3.x & OpenCL particle engines work.
12  Developer / Technical / Re: Modern 3D APIs (aka "what can GPUs do these days?") on: January 18, 2012, 02:22:40 PM
If your intention is to generate a large amount of geometry, geometry shaders are not the way to go. They generally have a hard cap on the number of output vertices, and they're driven by the attributes coming out of the vertex shader, which can be less than ideal.

Instead, use OpenCL and share a context with OpenGL. This allows you to generate vertex data in OpenCL and send it to OpenGL for rendering (unless you write a custom renderer in OpenCL and ditch GL altogether.)

OpenCL is a General Purpose GPU programming API which is well suited to tasks where you run a similar algorithm on multiple input data, which is the case with your example 'for each pixel, add a vertex if not transparent'.
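
Once the shared context exists (cl_khr_gl_sharing), the interop itself is only a handful of calls. A sketch, with error handling omitted and the buffer, queue and kernel names made up:

Code:
// Wrap an existing GL vertex buffer (vbo) as an OpenCL buffer.
cl_int err;
cl_mem clVertices = clCreateFromGLBuffer(clContext, CL_MEM_WRITE_ONLY, vbo, &err);

// Each frame: let CL take ownership, run the kernel that writes vertex data, hand it back.
clEnqueueAcquireGLObjects(queue, 1, &clVertices, 0, NULL, NULL);
clSetKernelArg(generateKernel, 0, sizeof(cl_mem), &clVertices);
clEnqueueNDRangeKernel(queue, generateKernel, 1, NULL, &particleCount, NULL, 0, NULL, NULL); // particleCount is a size_t
clEnqueueReleaseGLObjects(queue, 1, &clVertices, 0, NULL, NULL);
clFinish(queue);

// Back in OpenGL, just draw the buffer as usual.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glDrawArrays(GL_POINTS, 0, (GLsizei)particleCount);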

It's also superbly fun once you get into it. Smiley
13  Community / Versus / Re: Witch Battles on: January 24, 2011, 07:25:00 AM
Sounds like the callback and the OpenGL context run on different threads, which would probably give you fairly illogical errors. Have you checked the thread IDs for the callback and the OpenGL calls? Using some command interface to send things to the OpenGL thread is quite common in circumstances like that.
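
By "command interface" I just mean queueing work for the GL thread to pick up; a minimal sketch of the idea:

Code:
#include <functional>
#include <mutex>
#include <queue>

// Other threads push GL work here; the render thread drains it before drawing.
static std::mutex commandMutex;
static std::queue<std::function<void()>> commandQueue;

void postToGlThread(std::function<void()> fn) {
    std::lock_guard<std::mutex> lock(commandMutex);
    commandQueue.push(std::move(fn));
}

// Call this at the top of the render loop, on the thread that owns the GL context.
void runPendingGlCommands() {
    std::lock_guard<std::mutex> lock(commandMutex);
    while (!commandQueue.empty()) {
        commandQueue.front()();
        commandQueue.pop();
    }
}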

Good luck, I'll be awaiting more awesome screenshots. Beer!
14  Developer / Technical / Re: Sending vertex coordinates to vertex shaders on: January 06, 2011, 08:51:40 AM
Let's see.
I may be way off, since it's been a while since I last looked at this, but as far as I know, you have a family of functions for specifying vertex attributes for a shader program: glVertexAttrib.
They're used together with glGetAttribLocation to link vertex arrays, matrix arrays, and other data. Similarly, the glUniform family is used for uniform data.
Finally, I believe glDrawArrays is used to pass this data to geometry/vertex/fragment-shaders.
glDrawArrays allows you to set the starting point in the vertex arrays and how much of them you want to render.

Since OpenGL 3.x got rid of the fixed-function pipeline, you'll also have to implement your own matrix/vertex math libraries, which is logical since OpenGL isn't a math library. Tongue
Furthermore, you will always need to set up shaders to render your geometry.

The pipeline in OpenGL 3.x is overall very similar to what's used in OpenGL ES, so most samples from there should be applicable in OpenGL 3.x as well.
Edit: For example, this demo, or at least the Draw() function, should be, to a large extent, the same as in OpenGL 3.x.
15  Developer / Technical / Re: Does anyone here know about network streaming? on: December 27, 2010, 06:32:59 PM
I'll have to refer you to the http://www.virtualgl.org/About/Introduction project, since it's very similar to what you want: basically, GFX acceleration over the network. I tried it with a couple of different 3D apps on Ubuntu, but the results were fairly weak. I guess this could be different today though, as my rig at the time might have been outdated.

Either way, there's a couple of problem areas:
 * Response time - In e.g. Starcraft, a lag of 10ms would be considered bad. However, you can't start rendering a frame where there is response to input until after the input has been processed. That means you'll first have lag in one direction, rendering, then lag in the other direction. Now, unlike normal lag where there is some delay between your input and it being processed on the server, you'll be suffering from graphical lag, where your mouse pointer is behind its current position, and responses to input come later than normal.
 * Compression and transmission - Sure, these are easy normally, but now you need to compress and send a full-resolution image at a high enough rate for it to be considered real-time, and this places itself directly on top of the lag you get from transmission to server and rendering.
 * Prediction - Generally, you have prediction algorithms running on the clients to guess where units will be on the next packet arrival. This is a fairly simple way of making lag harder to observe. When processing graphics on the server, the problem is a lot harder, as you'll have to predict what the current user is going to do, rather than some other player, and errors in this type of prediction are much more apparent, not to mention the detail it has to be done in. (Will the player move the mouse up or down over the next few milliseconds?)

I would say rendering on a server is more suitable for other things than gaming. :/
Still, check out VirtualGL, there's lots of interesting documentation on the matter. Smiley
16  Developer / Technical / Re: Exponential curves and parabolas on: December 23, 2010, 05:48:08 PM
As increpare said, Gaussian distributions, or, similarly, Gaussian functions, are your number two.
It's fairly easy to compute:
Code:
float gaussianfunction(float center, float height, float falloff, float x){
    return height * expf(-(x-center)*(x-center)/(2.f*falloff*falloff)); // e^(-(x-center)^2/(2*falloff^2)); needs <math.h>
}

A fairly simple implementation of number three, that gives you a function approaching a value, could be something akin to this, I believe:
Code:

typedef float function(float);

float approach(float value, float startvalue, function* f, float x){
    return value - (value-startvalue)/((*f)(x));
}
f could be an exponential function (e^x), simply x+1, a multiple (ax+1), or anything else that approaches positive infinity as x does and satisfies f(0)=1 and f(x)!=0 for any relevant x, depending on the desired function shape.
startvalue = 0 gives you a function similar to your number three, whilst any other value gives you a free choice of starting point at x=0, yet still moving towards value as x approaches infinity. Note that approach(a,b,f,-1) can blow up towards negative infinity (with f(x)=x+1, for instance, f(-1)=0), which may be undesirable under some circumstances.

Just as a side-note: I never tested the above code, it may have some compiler issues.  Gentleman
17  Developer / Technical / Re: The daily technical programming challenge on: October 20, 2010, 01:28:26 PM
perl -ne '$a=2;while($_>1){until($_%$a){print"$a\n";$_/=$a}$a++}'

I propose this:
perl -lne 'for($a=2;$_>1;$a++){until($_%$a){print$a;$_/=$a}}'

(After wasting half an hour on length reduction. Durr...?)
18  Developer / Technical / Re: Floating Point Error and Intervals on: October 10, 2010, 06:04:09 AM
Indeed. Floating-point values essentially have varying precision at different magnitudes.

For example, x += 0.5 and y += 0.5, with x and y nominally at the same value, may leave you with x = 1.499999 and y = 1.5 if those are the closest representable values in the current floating point type. This will cause collisions, which is why I asked whether or not your collisions are resolved correctly, since I would assume the objects would be moved the smallest possible distance to resolve the collision if the force differences are low.

Either way, this is as far as I know also why the "padding" discussed earlier exists: you'll want an overlap larger than the maximum possible precision error before you can assume there actually is a collision.
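
In practice that padding boils down to comparing against a tolerance instead of zero, something like this (a sketch; the epsilon depends entirely on the scale of your world):

Code:
#include <algorithm>

// Treat two intervals as colliding only when they overlap by more than the padding,
// so tiny floating-point errors can't flip the result back and forth.
const float PADDING = 1e-4f;

bool overlaps1D(float minA, float maxA, float minB, float maxB) {
    float overlap = std::min(maxA, maxB) - std::max(minA, minB);
    return overlap > PADDING;
}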
19  Developer / Technical / Re: Floating Point Error and Intervals on: October 10, 2010, 03:30:50 AM
It sounds like the physics engine is trying to resolve the collision by pushing the top box to the left rather than up. Are you correctly identifying the direction an object should be moved on a collision?
20  Developer / Technical / State machines, tasty? on: October 08, 2010, 12:22:53 PM
{I'm drawing some purdy graphs here, they're all done in yEd, have a link, best graph editor I've seen so far.}

So, as usual, I've been lurking around, checking the posts, programming a bit, doing all the stuff I normally do.

At some point, I started messing about with state machines and thought I'd start developing a state machine-based entity editor.

I put together the core state machine (This is how programming usually works for me, I put together a core, then it all starts falling apart as I find a new interesting problem, I'm hoping I'll stick with this one though) and got thinking about some varying problems. Early on, I decided to use a concept called "Recursive State Machines", essentially a state machine where each node can be either a state or a state machine. Well suited to programming entity logic, as it can break out of states when necessary.

I conceptualised this around a simple entity I wanted to build:

At the time, I was thinking "This looks great, why aren't all my entities and whatever else I've done in this form?"

I continued working over a few days, adding a couple of different features I wanted in this core and merging it with some older code bases I enjoy using.

In the midst of development, I realised what should have struck me much earlier: there are lots of (interesting) problems that pop up and have to be solved, like player entities and other things that do multiple actions at once:


This is, as I understand it, something you normally run into in finite state machines, since multiple overlapping states aren't supported.
To solve this problem, I added just that, and revised the recursive state machines to allow either a plain state or one or more state machines inside a single node:
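
In code terms, that node concept boils down to something like this; a rough sketch of the idea rather than my actual implementation:

Code:
#include <cstddef>
#include <memory>
#include <vector>

// A node is either a plain state or a nested machine (that's the "recursive" part).
struct Node {
    virtual ~Node() = default;
    virtual void update(float dt) = 0;
};

// Leaf state: runs its own logic each tick.
struct State : Node {
    void update(float /*dt*/) override { /* entity logic for this state */ }
};

// A machine is itself a node, so machines can nest arbitrarily deep.
struct StateMachine : Node {
    std::vector<std::unique_ptr<Node>> nodes;
    std::size_t current = 0;
    void update(float dt) override { if (!nodes.empty()) nodes[current]->update(dt); }
};

// The revision described above: a node that runs several machines side by side,
// for entities that need to do multiple things at once.
struct ParallelNode : Node {
    std::vector<std::unique_ptr<Node>> machines;
    void update(float dt) override { for (auto& m : machines) m->update(dt); }
};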


This problem solved, I'm slowly reaching a point where I'll be forced to add some sort of user interface to all this. Since causing problems for myself is fun, I'm building as much as possible of the system itself inside the same state machine engine.

Still, I'm curious; anyone else been trying out state machine-based entities and game development? I've understood it's popular in at least game AI, and I'm keen on hearing about the experiences and problems of others using this methodology.