1073343 Posts in 43972 Topics- by 36006 Members - Latest Member: Afrobear

December 18, 2014, 11:57:36 PM
  Show Posts
Pages: [1] 2 3 ... 20
1  Developer / Technical / Re: Cubic Bezier Curve and UI on: August 28, 2014, 09:32:37 PM
My suggestion would be to use a 1D Catmull-Rom spline. In C++ code it looks something like:

Code:
template <typename T> T CMRSpline (const Vector<T,4>& k, T i) {
    const T k0 = static_cast<T>(0.0);
    const T k1 = static_cast<T>(0.5);
    const T k2 = static_cast<T>(1.0);
    const T k3 = static_cast<T>(1.5);
    const T k4 = static_cast<T>(2.0);
    const T k5 = static_cast<T>(2.5);

    // set up the Catmull-Rom basis matrix
    const Matrix<T,4,4> cmr_matrix( -k1,  k3, -k3,  k1,
                                     k2, -k5,  k4, -k1,
                                    -k1,  k0,  k1,  k0,
                                     k0,  k2,  k0,  k0 );

    // cubic coefficients, then Horner evaluation
    const Vector<T,4> c = Mul(cmr_matrix, k);
    return ((c.x*i + c.y)*i + c.z)*i + c.w;
}

If my understanding is correct (it's been a long time since I've played around with this), Bezier curves don't pass through their control points, they just come close, whereas Catmull-Rom splines pass through all of their control points.

In your loop you'll want something like:

Code:
t+=getFrameTime()*speed;
objPos.x = OriginalPos.x + (DestPos.x - OriginalPos.x)*CMRSpline(vx,t);
objPos.y = OriginalPos.y + (DestPos.y - OriginalPos.y)*CMRSpline(vy,t);

Notice how it's = and not +=. Also, vx would be the x-coordinates of the four control points, and vy the y-coordinates of the four control points.

If you want to cap the interpolation, clamp the i value (which you called t) like:

Code:
i = min(max(i,0),1);
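For reference, here's a minimal self-contained sketch of the same 1D Catmull-Rom evaluation, using std::array in place of the Vector/Matrix math types assumed above; the coefficients are just the cmr_matrix rows multiplied through by hand:

```cpp
#include <algorithm>
#include <array>

// 1D Catmull-Rom spline over four control values k[0..3], evaluated at
// i in [0,1]; the curve passes through k[1] at i=0 and k[2] at i=1.
template <typename T>
T CMRSpline(const std::array<T, 4>& k, T i) {
    // Cubic coefficients: the Catmull-Rom basis matrix applied to k.
    const T a = T(-0.5) * k[0] + T(1.5) * k[1] - T(1.5) * k[2] + T(0.5) * k[3];
    const T b = T( 1.0) * k[0] - T(2.5) * k[1] + T(2.0) * k[2] - T(0.5) * k[3];
    const T c = T(-0.5) * k[0]                 + T(0.5) * k[2];
    const T d = k[1];
    return ((a * i + b) * i + c) * i + d;
}

// Cap the interpolation as suggested above.
template <typename T>
T ClampedCMRSpline(const std::array<T, 4>& k, T i) {
    return CMRSpline(k, std::clamp(i, T(0), T(1)));
}
```

With evenly spaced control values like {0, 1, 2, 3} the cubic degenerates to the straight line i + 1, which makes the pass-through property easy to verify by hand.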
2  Developer / Technical / Re: Distributed/Decentralized MMO Networking on: February 14, 2014, 03:49:37 AM
The biggest problem I foresee is that ISPs throttle P2P traffic.
3  Developer / Technical / Re: Adaptive A.I. software on: February 10, 2014, 11:36:19 AM
I was unable to make out the audio in the video; it was FAR too low.
4  Developer / Technical / Re: Converting between enums on: January 18, 2014, 01:39:50 PM
I added the volatile change as described above and optimized with -O3. I'm not an assembly pro, but I have used it (and waded through it) from time to time. As expected, in both the switch and function situations the code has been completely inlined, and that inlined code rolled through the optimizer. From a cursory glance the switch seems to be the fastest. I didn't check the loop or the template (the loop will never be faster, though it'll probably be unrolled and would be 'as fast', and the template approach only works for constants and wasn't implemented properly).

In the switch version here's what's happening:

Code:
.LC0:
.string "Assertion failed."
.LC1:
.string "Success."

These are just defining the raw string values; this is the assembly equivalent of:

Code:
const char* string_1 = "Assertion failed.";

Code:
main:
sub rsp, 40
mov DWORD PTR [rsp], 0
mov DWORD PTR [rsp+4], 1
mov DWORD PTR [rsp+8], 2
mov DWORD PTR [rsp+12], 3
mov DWORD PTR [rsp+16], 1
mov DWORD PTR [rsp+20], 2
mov DWORD PTR [rsp+24], 3
mov DWORD PTR [rsp+28], 0

This defines the beginning of the main function, moves the stack pointer down to reserve 40 bytes, then saves all the enum values that it needs to the stack. Each 4-byte 'slot' on the stack you can think of as a variable in C++. For example, the C++ variable 'n1' is clearly stored at [rsp + 0], s1 at [rsp + 4], etc.

Code:
mov ecx, DWORD PTR [rsp+16]
mov eax, DWORD PTR [rsp]

Here we move n2 into register ecx, and n1 into eax. It doesn't seem to need ecx until much later on, so I'm not sure why it loads it so early; perhaps it has something to do with label alignment.

Code:
cmp eax, 1
je .L3
jle .L41

Here it's comparing n1 with 1. If they are equal we jump to .L3; if less than or equal (ya, weird given equality was already handled, but it's not incorrect) we jump to .L41.

If you trace it through you'll see it just jumps around and does a lot of compares. Here's another interesting bit:

Code:
.L5:
cmp ecx, edx
je .L42
.L8:
mov edi, OFFSET FLAT:.LC0
call puts
.L38:
xor eax, eax
add rsp, 40
ret

We come into .L5 and then do a compare, jumping away if equal. If it's not equal we fall through to .L8, which prints the .LC0 string "Assertion failed.". .L38 sets eax to 0 (the return value of the main function), rewinds the stack, and returns from the function. Notice how this is right in the middle of the code, which is probably the most optimal place for it.

As you can see, the outcome of these sorts of low-level optimizations is hard to predict, and best left to the compiler. All sorts of criteria come into play. Code alignment restrictions may cause it to shift instructions sooner or later. Jumps are often eliminated by well-placed code and a lot of 'fall through'. The compiler has a TON of clever tricks up its sleeve.

Unless it's a single function that, after profiling, takes a MASSIVE portion of your CPU time, it's not worth the time or effort. A person can hand-optimize these things better than a compiler, BUT it takes a MASSIVE amount of time and expertise to do so. It's best to just make the code clear, concise, correct, and easy to maintain, and let the compiler do its thing. If after profiling you find that a particular function is causing grief, then by all means address it, but that's only AFTER profiling.

Here's the thing. This is all good fun for something to do when bored, but the performance difference is minimal. As for which is faster? Really, both will be similar. Cache coherence, and the function's usage in the rest of your code, will become a much larger issue performance-wise than the difference between these two functions.
5  Developer / Technical / Re: assignment "fallback" on: January 14, 2014, 10:59:47 PM
Why not just return an empty string?
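As a sketch of what that suggestion could look like (the map type, function name, and keys here are hypothetical placeholders, not from the thread):

```cpp
#include <string>
#include <unordered_map>

// Look up 'key' and fall back to an empty string when it's missing,
// instead of signalling an error to the caller.
std::string GetOr(const std::unordered_map<std::string, std::string>& m,
                  const std::string& key) {
    const auto it = m.find(key);
    return it != m.end() ? it->second : std::string{};
}
```

The trade-off is that callers can no longer distinguish a missing key from a genuinely empty value.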
6  Developer / Technical / Re: D3D11 and effect (.fx) files on: January 04, 2014, 08:51:01 AM
Quote: "The amount of internet documentation on the subject is a bit pathetic. You'd be better off getting a book. I got this one and it's served me well for the most part."

I can't find a good book either. My only issue is that the book is D3D9/shader model 3, and I'm looking for D3D11/shader model 5. I know there were a number of changes, I just don't know what they all are.
7  Developer / Technical / D3D11 and effect (.fx) files on: January 02, 2014, 10:34:45 AM
I'd like to better understand effect (.fx) files. Up to now I've used individual shader functions in .hlsl files, but being able to group multiple functions and shader state in a single file is something I'd like to do. Unfortunately, even after Googling around for a few days, I cannot seem to find any relevant info on them. I've downloaded a number of different examples, but they all seem to use different syntax. For example, some use:
Code:
VertexShader = compile vs_5_0 vertex_shader();
while others use:
Code:
SetVertexShader( CompileShader(vs_5_0,vertex_shader()) );
What's the difference? What shader state can I set? What are the types? Is it better to use global variables or constant buffers? Etc.
 
I can't seem to find a good tutorial or examples. Even the samples I could find on MSDN were sparse at best. I don't need info on how to compile or run .fx files (that's all well documented), but rather: what additional features/types/states/etc. do .fx files have that aren't found in normal .hlsl files?

Any suggestions for links, websites, or even books would be greatly appreciated.
8  Developer / Technical / Re: Converting between enums on: January 02, 2014, 10:29:56 AM
Quote: "Perhaps, but it is not particularly maintainable. When an enum changes or has stuff added to it, a whole lot of code needs to be written manually. Plus, this switch gets very long very quickly."

I don't see why a switch is any more or less maintainable than a map. When you add new enum values, you will have to add new entries, either to the switch or to the map. Both will be essentially the same code.

Code:
case Enum0::new_enum: return Enum1::bob;

isn't more or less maintainable than:

Code:
map.emplace(Enum0::new_enum,Enum1::bob);

So unless the enums are numbered identically and can be cast directly (as mentioned above), or there is a fixed relationship between them (also mentioned above), you're gonna have to have some maintenance code somewhere.
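To make the comparison concrete, here's a minimal sketch of both approaches side by side (the enum members other than new_enum/bob are hypothetical placeholders):

```cpp
#include <unordered_map>

enum class Enum0 { alpha, beta, new_enum };
enum class Enum1 { one, two, bob };

// Switch version: one 'case' line per mapping.
Enum1 ConvertSwitch(Enum0 e) {
    switch (e) {
        case Enum0::alpha:    return Enum1::one;
        case Enum0::beta:     return Enum1::two;
        case Enum0::new_enum: return Enum1::bob;
    }
    return Enum1::one; // unreachable for valid input
}

// Map version: one entry per mapping.
Enum1 ConvertMap(Enum0 e) {
    static const std::unordered_map<Enum0, Enum1> table = {
        {Enum0::alpha,    Enum1::one},
        {Enum0::beta,     Enum1::two},
        {Enum0::new_enum, Enum1::bob},
    };
    return table.at(e);
}
```

Adding a fourth enum value costs exactly one new line in either version; only the lookup mechanism differs.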
9  Developer / Technical / Re: True Type Font library on: January 02, 2014, 10:17:45 AM
TrueType fonts use an internal coordinate system called 'font units' that (if I remember correctly) ranges from -16k to +16k regardless of the actual size the glyphs are to be rendered at. The general idea is that the y=0 axis is the baseline of the font: the 'o' part of the letter 'g' sits above the baseline, while the 'hook' of the 'g' hangs below it. So this is what is returned (but as a float) as far as vertex coordinates go. Anywhere the comments say 'font units', this is what is meant.

This webpage: https://developer.apple.com/fonts/ttrefman/rm06/Chap6AATIntro.html will explain what most of the values mean.  Note that not all features described in that page are supported.

Scaling is easy though.  GetMasterRect() returns a rectangle that fully encloses every glyph in the font.  Scaling the glyphs against this rect will bring them into whatever range you desire.  GetGlyphRect() will return a rectangle that encloses just that single glyph (useful for converting fonts to a bitmap, not so much for direct rendering).

What I did when rendering the fonts was to store them in vertex buffers. Say I needed the word 'Attack' on a button. I'd start with an offset of (0,0) and append the glyph 'A' to the vertex buffer. Then I'd look up the distance between the 'A' glyph and the 't' glyph using GetKerning(), which returns both the x and y offset in font units, and add this kerning value to the offset. I would then get the 't' glyph, add the offset to all its vertices, and append it to the vertex buffer. Rinse and repeat until the word is finished. For rendering, a simple matrix scale/translate would place the word where I needed it on screen. From the master rect it was easy to calculate the desired scaling.
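That loop can be sketched roughly like this. The Vec2 struct and the two stub functions stand in for the library's kerning/glyph API; their signatures and return values here are assumptions for illustration, not the library's actual interface:

```cpp
#include <cstddef>
#include <string>
#include <vector>

struct Vec2 { float x = 0.0f, y = 0.0f; };

// Hypothetical stand-ins for the font library. A real GetKerning()
// would return the advance between two glyphs in font units, and
// GetGlyphVertices() a glyph's outline vertices; these stubs return
// a fixed advance and a fixed triangle so the sketch is runnable.
Vec2 GetKerning(char /*prev*/, char /*next*/) { return {10.0f, 0.0f}; }
std::vector<Vec2> GetGlyphVertices(char /*c*/) {
    return {{0.0f, 0.0f}, {5.0f, 0.0f}, {0.0f, 8.0f}};
}

// Append every glyph of 'text' to one vertex buffer, advancing a
// running pen offset by the kerning between consecutive glyphs.
std::vector<Vec2> BuildWord(const std::string& text) {
    std::vector<Vec2> buffer;
    Vec2 pen; // starts at (0, 0)
    for (std::size_t i = 0; i < text.size(); ++i) {
        if (i > 0) {
            const Vec2 kern = GetKerning(text[i - 1], text[i]);
            pen.x += kern.x;
            pen.y += kern.y;
        }
        for (Vec2 v : GetGlyphVertices(text[i])) {
            v.x += pen.x; // translate the glyph by the current offset
            v.y += pen.y;
            buffer.push_back(v);
        }
    }
    return buffer;
}
```

A single scale/translate matrix at draw time then places the whole word on screen, as described above.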
10  Developer / Technical / Re: Converting between enums on: December 30, 2013, 03:37:27 PM
A switch statement (as mentioned above) will compile to very fast code. Whether it uses ifs or a jump table internally will depend on what the compiler deems appropriate, but you'll have a very hard time making it any faster short of hand-coded assembly.

So unless you have a massive number of enumerations, or need to dynamically change the mappings, a switch is by far your easiest and fastest option.
11  Developer / Technical / Re: Visual studio, resources, and shaders. on: October 30, 2013, 10:30:25 AM
Took a couple more days than expected but it works ; )
I'm a happy programmer!
12  Developer / Technical / Re: Visual studio, resources, and shaders. on: October 21, 2013, 07:30:01 AM
Ok, a small update for those who are interested. After playing around with different attempts, I decided to 'bite the bullet' and just rewrite my own version of fxc.exe ;) I'm not actually rewriting the internal HLSL compiler, rather just calling D3DCompile() as I imagine fxc.exe does, but rewriting everything else. It's nearly done (should be done today or tomorrow), and I thought I'd post here to see if anyone was interested.

What will it do that fxc.exe doesn't, or do differently? It supports multiple shader functions in a single file, so you can have your vertex shader, geometry shader, and pixel shader in the same .hlsl file. It outputs to header files (as fxc currently does) but also supports namespaces. It can also output the shaders to an actual .obj file that can be linked directly into a C++ program. It works with Visual Studio Express (it should probably work for other versions too, but I've only tested it against VS 2012 Express), so you can just drop it in and use it like the old HLSL compiler.

I don't really intend this to be the 'be all and end all' of .hlsl compilers, but it covers all my needs, and should be similar to the old version in most cases.
13  Developer / Technical / Re: Visual studio, resources, and shaders. on: October 17, 2013, 03:31:20 AM
Ya, so it would seem. I wish they'd release a reasonably priced single-person version (I'd be willing to pay $100-ish for it), but $500 - $1000 for a product that'll last maybe 2 years before it's upgraded is ridiculous. They really want to push you into buying an MSDN subscription, but that's ridiculous too, not to mention rather useless. I am convinced that MS fundamentally just does not understand their customers.

Anyway, thanks for the link; this'll go on the back burner until I get more time...
14  Developer / Technical / Re: Visual studio, resources, and shaders. on: October 16, 2013, 04:10:17 AM
Interesting link, I've tried for a few hours now to get it to work with no luck.  I'm using VS express, so there is no resource editor, nor any way in the IDE to embed resources.  So I can't add a resource to a .vcxproj file, take a look at the code, and modify it.  I've looked at a couple .vcxproj files online and tried my best to modify a test project, but I get nothing.  I don't know whether I'm doing something wrong, or it just isn't supported.
15  Developer / Technical / Visual studio, resources, and shaders. on: October 15, 2013, 05:18:12 AM
I've been working on, what has become, a relatively large project with a large number of shaders.  I'm using VS2012 'I'm broke'/express edition.  Right now all my shaders are compiled through visual studio to .cso files, which I load manually at run time.  And it works.  That said...

I'd like to somehow bundle the .cso files into the executable so that I don't need to worry about losing them, or mess with versions (sometimes debug shaders are a bit different than release shaders to make it easier to track errors, things like that), and just keep things neat and tidy.

The first option that came to mind was to somehow use the Resource compiler (rc.exe) to bundle the shaders in, but it seems rc.exe requires a resource script, and churns out a .resource file which is then linked in with the linker.

What I'd like to do is to be able to, in a post-build step (perhaps pre-link, perhaps post-link, either is fine) be able to take all the .cso files from a given directory and append them to the executable.  Then of course be able to load these files at run time.  I don't want to have to manage a separate script file, as I want to be able to add/delete shaders at a whim and not have to worry about keeping a script file up to date.  If possible I'd like to be able to do it with the standard suite of VS command line tools.

Any ideas guys?
16  Developer / Technical / Re: [POLL] What C++/C IDE are you using? [WINDOWS] on: September 25, 2013, 11:07:32 AM
Quote: "Then if it's this superior, why doesn't MSVC come with it instead of its own compiler? And why don't I hear about people using clang in favor of gcc/mingw? WTF"

Now that would be awesome. I love the VS IDE but hate its compiler; if they could allow easy integration with GCC or Clang (much like you can drop the Intel compiler right in), that would be amazing. It'd make things SO much easier if I could, with one project file and one IDE, compile across multiple compilers.

BUILD -> Batch Build... -> be awesome!!
17  Developer / Technical / Re: 2D top-down / 3D first-person - Possible with Trixels? on: September 09, 2013, 03:59:13 AM
No, I don't. It was more of a demo than anything: only one level, no enemies, you just ran around collecting coins. We only had one month, so that was as far as I got.

If you have a place you'd like me to upload it to, I'd be happy to. The video was shot using an Nvidia 9800 GTX (ya, a very old card), so it runs fine even on slower systems. Fez wasn't out at the time I made this, but he was working on it, and I saw a promo/trailer for Fez, which is what I based the idea on. It differs from Fez in that Fez (if I remember correctly) converted its tiles from a voxel format to a polygon format (hence the 'trixel' thing); I used a true ray tracer, and it's 100% voxels beginning to end.

I'd love to one day release a metroidvania-style game with this sort of engine, but I'd need a very good pixel artist and level designer. Designing an interesting 3D level is much more time consuming than a 2D level. That said, the possibilities are endless. I could imagine a Castlevania-style castle with corridors upon corridors, being able to travel in, and up, and around, like a huge winding labyrinth. But it's only eye candy if you can't make use of the extra dimension, and I'm a programmer, not a very good level designer, and a horrible artist, so that's as far as that project went.
18  Developer / Technical / Re: 2D top-down / 3D first-person - Possible with Trixels? on: September 06, 2013, 01:50:44 AM
Ya, the Atomontage engine is nice. A few years back I did this:

http://www.youtube.com/watch?v=4ZRYqqyvFY8

for a TigSource competition, so I am familiar with voxels ;)
19  Developer / Technical / Re: 2D top-down / 3D first-person - Possible with Trixels? on: September 05, 2013, 05:52:41 PM
Voxel has always stood for 'volume element', much like pixel came from 'picture element' and texel comes from 'texture element'. In all cases it was designed to describe the fundamentally smallest element of a set (granted, there are sub-pixels, which is a bit of a misnomer...).

I understand that marketing and PR mean the masses think that a cube is a voxel (cough... Minecraft... cough), but this is the technical section of a game dev site; I think we should at least attempt to use proper terminology.

/rant
20  Developer / Technical / Re: Using genetic algorithms to train the AI in a 4x strategy game on: August 29, 2013, 11:52:59 AM
Very interesting; keep posting updates. I wish more strategy games would put time into their AI development; the last few I played (Civ V, Shogun 2, and XCOM) had AI that was laughably bad.