Show Posts
101
Community / Townhall / Re: Final day on Kickstarter for our online game creation tool!
on: September 27, 2012, 06:19:55 PM
Congratulations on getting funded!
It's a good concept, but I think expressivity is much too low at this stage. My space invaders shoot themselves in the foot, because they can't distinguish their own bullets from the players' (is this the reason they don't fire in your example?). More filtering needs to be available.
Even the fundamentals of behavior are a bit dubious- they're not as accessible or beginner-friendly as you might've thought. Events create named messages that get received internally. Trouble with that is you have to poke around an object's list of actions & match up message names just to see how the object works, which is awfully tedious since half the point is to understand what others have made, and is exacerbated by the tiny window you have into the actions. Plus, it burdens the creator with maintaining an economy of message names (perhaps forcing them to use an arbitrary/confusing name, or making a desired behavior impossible).
What you have is something happens -> message created -> message received -> do this. Simpler & more powerful is something happens -> do these things. I'll hazard a guess that the message thing has to do with the way Craftyy objects work internally, and you're really just exposing spurious implementation details to users.
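To put that contrast in code terms, here's a throwaway C++ sketch of my own- the MessageBus bit and all the names are made up, and have nothing to do with how Craftyy actually works internally:

Code:
#include <functional>
#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Made-up sketch: route events through named messages, the way described above.
struct MessageBus {
    std::map<std::string, std::vector<std::function<void()>>> handlers;
    void subscribe(const std::string& name, std::function<void()> fn) {
        handlers[name].push_back(std::move(fn));
    }
    void emit(const std::string& name) {
        for (auto& fn : handlers[name]) fn();  // receivers looked up by string name
    }
};

int main() {
    // Message style: the reader has to match "invader_fired" between emitter and
    // receiver(s) just to see what actually happens.
    MessageBus bus;
    bus.subscribe("invader_fired", [] { std::cout << "spawn bullet\n"; });
    bus.emit("invader_fired");

    // Direct style: "something happens -> do these things". No name bookkeeping.
    auto on_invader_fired = [] { std::cout << "spawn bullet\n"; };
    on_invader_fired();
}

Same behavior either way, but the second form leaves no string names for the reader (or the creator) to maintain.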
102
Player / Games / Re: Super Hexagon weird side effects
on: September 26, 2012, 05:25:09 PM
Quote: "Examples, other than the Tetris effect, would be awareness of the absence of a head-up display in the natural human field of view after playing a first-person shooter."

I get closed-eye hallucinations from most games after a couple hours of play. But one time I played so much Mount & Blade in a day that the basic shapes & colors of its HUD overlaid my vision until the next morning. Weird shit.
103
Developer / Playtesting / Re: Hellbender: Ravaging ops showcase.
on: September 23, 2012, 11:24:46 PM
Hellbender, doesn't that have something to do with Fury^3? Checking it out.
BTW, wrong forum, and nice avatar.
EDIT- Seems a good start. Flight controls don't have quite the right feel, though. The automatic roll return is way too slow. Rotation feels kinda jerky. I recommend you get the control feeling good in first-person, then go and work on the third-person camera to account for it. As I recall, Terminal Velocity & Fury^3 are best played FP (never played Hellbender).
Also I'd prefer if in first-person the reticle stayed stationary. Having it show the rotation rate looks cool, but it's not as functional, and it's kind of redundant (I can see how I'm rotating by my view of the world).
104
Developer / Technical / Re: Popularity of component-based engines (AAA)
on: September 23, 2012, 11:18:08 PM
Quote: "I wasn't implying that you were talking out of your ass. I just wasn't comprehending. That's my problem. I do see what you are saying with respect to performance degradation by use of virtual function calls. I will have to do some studying on component systems in general. Always good to broaden horizons. I'll start the google-fu. Any particular articles you think are worth reading?"

Unfortunately I never bookmarked any good articles, but I did save this thread, which aggregates some interesting links (but few useful ones TBH). The most fascinating articles are about data-oriented design as it pertains to game implementation. IIRC there are a few presentations from the AAA industry about this as well.

EDIT- I definitely used these links. Strong recommend on every one there. Happy hunting!
105
Developer / Technical / Re: Popularity of component-based engines (AAA)
on: September 23, 2012, 10:48:42 PM
Quote: "I was just using parallel_for as an example of using threads on a single data structure. But again since the underlying structure is separate from the higher level interface I still don't see any performance gains in that particular portion of the code."

The essence of what I'm saying is that hierarchies & components each carry their own baggage, and component baggage is conducive to parallelism. It's one of the most widely-cited motivations for components, so I'm not just talking out my ass. Polymorphism is anathema to parallelism, because you don't know what method(s) will be called, and therefore what side effects (data writes) will take place. With components you explicitly bring in whatever other components you need. The notion that you can use whatever kind of interface you want & still parallelize just as easily is incorrect. With the most naive entity hierarchy, you update() entities in turn, and they implement arbitrarily complex behaviors (changes in state) according to their types. Blatantly not parallelizable. Anything you do to remedy this situation moves in the direction of a component system. So you might as well start with a component system, if you think you'll need to parallelize.

Quote: "The whole idea and power of entity/component designs is that it expresses inter-entity dependencies very succinctly. Which makes it inherently difficult to parallelize along inter-entity boundaries (if I understand correctly that is what you're suggesting). If each component is independent not only from other components in the same entity, but also similar components in other entities, there's no point in an entity/component design. At that point you're just over-engineering the solution."
No- of course you often need to involve other components within the in-context entity, and ones from other entities. But you do it prudently & explicitly. Inter-entity bounds are the easiest to parallelize along, because you can keep data & control flow restricted to very narrow, easily-understood lines.

A tautological example (plox read muh effortpost). Projectiles are fired, and they hit objects. Actually complex stuff: you have object creation, within-entity and between-entity dependence, effects depending on the entity...

The hierarchy way is to have fire() and hit() virtuals. fire() is generally known to create bullet(s), as well as do any additional things incurred by the firing. hit() is for taking damage (etc.). OK cool: we can have any behavior we want per firing or hit object, stack behaviors effortlessly, what-have-you. But it's not remotely parallelizable. We don't know how an entity will fire() or hit() until it's invoked, and when that happens, anything could happen anywhere. I'll also whip out this truism: it's not efficient enough to be worth parallelizing in the first place. Best save the highest-hanging fruit until last.

The parallelizable component way: have Spawner, Gun, and Hittable (except with a better name) components. When a Gun is fired, all that happens is its fired flag switches to true. When Guns are updated, a fired Gun gives instructions to its Spawner to create a bullet, which it'll do when the Spawners are updated. It probably tells its AudioSource to go bang (etc.). If a particular Gun needs to do anything exceptional, like overheat or fall out of the user's hands, it can either be handled via some type enum which is sorted against before the Guns are updated, or by another component altogether that listens in on its Gun's activity. In either case we know what everything's gonna do a priori. Hitting is less generalizable, but clearly any such component would look into any collisions its Collider was involved in, & check if any bullets hit it. Ditto the above for anything exceptional. (Rough sketch at the end of this post.)

Note that we don't have data/control flow problems. None of the many-writer problem that plagues parallelism. Anyone can safely tell any Gun to fire anytime. If we needed behavior where a Gun is prevented from firing, just add a withheld flag to Gun. Guns create bullets via Spawners they control exclusively. And so on. Everything broken into short, simple, tractable steps.

Functionality in hierarchical solutions is paradoxically too lax and too urgent at the same time, like a stoned tweaker. "Do this now, but do it however you want". Folks solve the "do this now" part elegantly with message passing, while the "however you want" part is fundamental to polymorphism (see above). I think big message passing schemes can be a crutch anyway, because as some wise TIGer once said, the public interface is a message passing mechanism. I digress. Point is components are absolutely, positively, tautologically, theoretically, practically, emotionally, physically, spiritually and eternally superior for parallelism.
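Rough sketch of the Gun/Spawner idea in C++ (all names, and the SpawnRequest type, are mine- just to pin down the shape of the two steps):

Code:
#include <cstddef>
#include <vector>

struct SpawnRequest { float x, y; };         // made-up payload for a bullet spawn

struct Spawner {
    std::vector<SpawnRequest> pending;       // filled during the Gun step
};

struct Gun {
    bool fired = false;                      // anyone may flip this, anytime
    std::size_t spawner;                     // index of the Spawner this Gun owns
    float x = 0, y = 0;
};

// Step 1: update all Guns. Each Gun only writes to the Spawner it exclusively
// owns, so this loop has no unknown side effects and could be split across threads.
void update_guns(std::vector<Gun>& guns, std::vector<Spawner>& spawners) {
    for (Gun& g : guns) {
        if (g.fired) {
            spawners[g.spawner].pending.push_back({g.x, g.y});
            g.fired = false;
        }
    }
}

// Step 2: update all Spawners. Bullets only come into existence here, after the
// Gun step has finished -- an implicit synchronization point between the steps.
void update_spawners(std::vector<Spawner>& spawners /*, bullet storage, etc. */) {
    for (Spawner& s : spawners) {
        // create a bullet entity for each request, then clear for the next frame
        s.pending.clear();
    }
}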
106
Developer / Technical / Re: Popularity of component-based engines (AAA)
on: September 23, 2012, 02:13:54 AM
Quote: "That list of physics steps. I don't see how a hierarchical or component based design affects it. That would be something you fork and join during the physics update step (if it makes sense to do so). If you are talking about a situation where you might be able to work on different parts of the data set (let's say an array of bounding boxes) you could just use a thread for different offsets into that data. You could even use a prepackaged library to do that for you like the Intel Threading Building Blocks parallel_for. Info on parallel_for here: http://threadingbuildingblocks.org/codesamples.php"

But you could say the same for any part of the game code. It depends on how it performs in practice. My assumption is you can't identify all bottlenecks a priori. What libraries you use to implement parallelism has nothing to do with the relative advantage of components vs. hierarchies.
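For reference, the parallel_for usage the quoted post describes looks roughly like this (a minimal C++ sketch using Intel TBB; the BoundingBox type and the per-element work are placeholders of mine):

Code:
#include <cstddef>
#include <vector>
#include <tbb/blocked_range.h>
#include <tbb/parallel_for.h>

struct BoundingBox { float min_x, min_y, max_x, max_y; };  // placeholder type

// Let TBB split a flat array of bounding boxes into ranges across threads.
void update_bounds(std::vector<BoundingBox>& boxes) {
    tbb::parallel_for(tbb::blocked_range<std::size_t>(0, boxes.size()),
        [&](const tbb::blocked_range<std::size_t>& range) {
            for (std::size_t i = range.begin(); i != range.end(); ++i) {
                // recompute boxes[i] here; each element is written by exactly
                // one thread, so no locking is needed
                boxes[i].max_x = boxes[i].min_x + 1.0f;
            }
        });
}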
107
Developer / Technical / Re: Popularity of component-based engines (AAA)
on: September 23, 2012, 02:07:54 AM
Quote (my earlier post): "Where do components come in? Well, component systems are already about splitting things into discrete, minimal steps on homogeneous data. They already have implicit synchronization points that bookend their steps. So if it comes to parallelizing, provided you haven't abused your components, you have much less work in shuffling things around to make them parallel. Your data structures & algorithms already look a lot like parallel ones. Not so for hierarchies. The less you/the computer knows what's going on (the two are indistinguishable under polymorphism), the less certain you can be of synchronization points, the bounds on the data that gets written, etc. I like your last paragraph minus its last sentence. Components are strictly superior w.r.t. parallelism. Again, there's a reason they were adopted at a peculiar time in the AAA industry."

Quote: "Are we thinking of the same thing when talking about components? None of this rings remotely true to me. Components are entirely focused around being heterogeneous - each object is different components. Nor are there any obvious synchronization bounds that you wouldn't get in a non-component architecture - components run arbitrary code, and have free access to the entire game state while doing so, meaning they are not generally parallelizable."

We likely aren't- I tend to bring too many preconceived notions into these discussions (as others do)- but this point stands. Each object being composed of different components doesn't imply heterogeneity as you define it, because you're only dealing with one component type at a time. Otherwise, you're abusing the component system. If we're speaking of cases where components or hierarchies degenerate to one another thanks to poor programming, we can't speak meaningfully of the benefits offered by either. Again, components are an organizing doctrine among other things.

Quote: "I'm not even clear what you mean by data structures well suited for parallelism - that would be flat arrays or trees afaik, but I don't see how components encourage that. Physics engines are not a good example of component behaviour - the vast majority of the physics engine, even in a AAA game, is a separate library that doesn't use components for its internal storage (that being very tuned, instead). Components are only used at all for easy interaction with the rest of the game engine, and the editor UI."

You're right that a physics engine isn't a good example of components, and I considered adding a disclaimer to this effect. It's the sort of thing you'd wanna factor out into a system unto its own (or more likely, use an existing library for). Was just a convenient example to illustrate parallelism with. The point stands that components for entities, versus hierarchies, are superior for parallelism. You break entities into various small aspects, for both behavior and data storage, which can be parallelized more easily. Hierarchy solutions have lots of local complexity and side effects. Flat arrays happen to be ideal for storing component data.
108
Developer / Technical / Re: Unity is great... Why don't I 'get it'?
on: September 22, 2012, 06:55:35 PM
Regarding 3, there's Edit -> Project Settings -> Script Execution Order, if you need to enforce an order of component updates.
Seconding fecal_brunch (best username) on 1 and 2. You'll never hack an existing realistic physics engine through its public interface to do exactly what you want. Roll your own.
109
Developer / Technical / Re: Popularity of component-based engines (AAA)
on: September 22, 2012, 06:42:06 PM
Quote: "So yea, organizing your data differently can help performance, but you don't need to do it for most of the game code..."

Depends entirely on the game. You can say I don't need it, you don't need it, he doesn't need it, but someone obviously needs it. Besides, the notion that building around components gives you a lower floor & higher ceiling for optimization is ironclad. Because I (among many others) find components easier anyway, it's not even a trade-off: it's one area where components have hierarchies beat hands-down. In the case where you wouldn't have needed to cajole the data had you been working with hierarchies, you get slightly better performance for free. In the case where you need to heavily optimize various areas of the code anyway, you get a far smaller workload for free.

Quote: "Also, why do you keep getting the game industry as the ultimate example? They mostly use C++ so I don't think they are a good example anyway. Just because it's the game industry doesn't mean they know what they are doing that well. As far as I can tell they might waste millions of dollars because of bad practices and bad coding. They just have the brute force to make their game work, and even then a lot of big companies' games don't work that well."

I'm not. Refer back again to the OP. Notice how the question is framed. Much of the AAA industry's behavior is clearly irrational, but none of your arguments suggest they were wrong in emphasizing components over the last few years. Sure, they totes just use brute force to make games work, hammering the code like John fucking Henry until it's fast & stable. Look, it's no coincidence that components became dominant around the jump to the seventh console generation, where you need to parallelize lest you waste 1/3 or 1/6 of your general processing power. Indies (but not all) tend to make games that require little in the first place. Different teams have different needs. The efficiency need (et al.) is best addressed by components, that's the simple truth.

Quote: "I'm all for components but I'm forced to agree with PompiPompi when it comes to the parallelization issue. Regardless of your data structures, the restrictions on parallel algorithms are the synchronization points/dependencies, which you can't just wish away with a simple data reorganization. The notion that component based algorithms can easily scale linearly with the cores on a system is simply naive. For example, you couldn't run the physics code simultaneously with the rendering code, or you could easily end up with situations where the entity's position isn't synchronized with its location on screen (i.e. you can't run dependent systems in parallel). In fact most components are interdependent. A physics component of one entity needs information on other entities (did they collide? is momentum being transferred? etc...). Similar with lighting components (which components are emitting light, which cast shadows, etc...), gameplay components; most of your components will interact. In fact, it's designed in such a way. Where inheritance based objects express the interaction between components in a single object quite concisely and efficiently, entities express the interaction between components in separate entities quite concisely and efficiently. It's a trade-off, but in no way does it make the underlying algorithms more or less conducive to parallelization."

It's not based on wishes, but on an organizing doctrine conducive to parallelism, which does depend on the superstructure. I think you've misunderstood game parallelism. The ghetto kind, where you have various systems running simultaneously, like say audio and physics, isn't worth the effort. It's what AFAIK PompiPompi meant, but he never explained himself. This is the most naive kind, and one that's widely discouraged, for good reason: synchronization overhead eats up most of the speedup. Also mega error-prone; hard for programmers to keep track of. Real, useful parallelism is applied when doing lots of independent things to homogeneous data. For a physics engine, you could:
- integrate across all rigid bodies
- perform broadphase collision detection across all colliders
- perform narrowphase c.d. across all colliders
- determine & apply change in set of contacts
- solve all constraints
No single step is phenomenally difficult to parallelize. And once you parallelize each step, you've parallelized the whole. That's the gist of it: make the whole parallel by making the steps parallel.

Where do components come in? Well, component systems are already about splitting things into discrete, minimal steps on homogeneous data. They already have implicit synchronization points that bookend their steps. So if it comes to parallelizing, provided you haven't abused your components, you have much less work in shuffling things around to make them parallel. Your data structures & algorithms already look a lot like parallel ones. Not so for hierarchies. The less you/the computer knows what's going on (the two are indistinguishable under polymorphism), the less certain you can be of synchronization points, the bounds on the data that gets written, etc.

I like your last paragraph minus its last sentence. Components are strictly superior w.r.t. parallelism. Again, there's a reason they were adopted at a peculiar time in the AAA industry.
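In code, the "parallelize each step" idea looks something like this- a C++17 sketch using the standard parallel algorithms, where the RigidBody type and the step body are placeholders of mine and a real engine would obviously carry far more data:

Code:
#include <algorithm>
#include <execution>
#include <vector>

struct RigidBody { float x = 0, y = 0, vx = 0, vy = 0; };  // placeholder data

void step_physics(std::vector<RigidBody>& bodies, float dt) {
    // Step: integrate across all rigid bodies. One homogeneous array, one
    // independent write per element -- trivially parallel.
    std::for_each(std::execution::par, bodies.begin(), bodies.end(),
                  [dt](RigidBody& b) { b.x += b.vx * dt; b.y += b.vy * dt; });
    // Implicit synchronization point: for_each only returns once every body is
    // integrated. Broadphase, narrowphase, contacts and constraint solving
    // would follow as further bookended steps of the same shape.
}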
110
Developer / Technical / Re: Popularity of component-based engines (AAA)
on: September 22, 2012, 03:47:59 AM
Quote: "And why do I need to change the design of 100% of the code when only 10% of the code is responsible for 99% of the CPU usage?"

OP is starting a project, and asking about why components became dominant (you know, when people start projects). I didn't suggest anyone change anything. Parallelism is overkill in 99.9% of indie projects anyway. Still, there's good reason why AAA devs design their code with it in mind. If using the inheritance style, you better pray that 10% is something easy to factor out (see bottom of post).

Quote: "Edit: Also I hope you are aware how ridiculous you sound since some algorithms can't be parallelized so easily (split into several threads) and they might take most of the CPU usage."

Obviously, yet in all cases components are still superior w.r.t. parallelism, which is the point. Easier integration of parallelism doesn't cease to be an advantage just because it doesn't apply to every single case.

Quote: "So I don't see how components design magically turns all your algorithms into parallel and gives you a factor by the number of cores."

Quote (me): "boon to parallelism"

Quote (me): "Components help with task- and data-level parallelism"

Christ you're obtuse. The straw man's voice in your head is not me.

Quote: "...The interface for Render() stays exactly the same and you could implement it for the inheritance style model or for a component model. That said the games I've programmed are not what you would call pushing the limits of hardware. The only bottlenecks I've come across are what I've previously stated (rendering and AI). In the future if I start to have lots of complex object interactions I might feel the virtual indirection hit. I just haven't had it happen yet."
Right. Neither have I ever done a game where something other than rendering, AI, or physics would have prohibitive performance with entity hierarchies. However, over the last couple years I've gotten the impression that making things easy for the computer and for the programmer aren't so different. With components, you operate broadly on homogeneous data. With hierarchies, you operate deeply on heterogeneous data. I find the former so much more straightforward to program. Only in trivial cases do I find no difference at all.

Regarding efficiency, check out this article about the risks of bottleneck-less systemic inefficiency. It hasn't hit me before, but the danger hadn't occurred to me until I read that. Fascinating that with profiling, you can fail to see the forest for the trees. Thing is, if it hit the highly interdependent gamey code, you'd have some serious refactoring of the worst kind on your hands. I prefer the component style anyway, but one cool bonus is it insulates me from the risk of tons of real shitty grunt work.

I'll add that complexity of interactions might be a motivator for using components, but scale is perhaps an even bigger one. The inheritance style just doesn't scale well to thousands and tens of thousands of entities.
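The "broad on homogeneous" vs. "deep on heterogeneous" contrast in miniature, as a C++ sketch (names are mine, not from any particular engine):

Code:
#include <cstddef>
#include <memory>
#include <vector>

// Hierarchy style: one deep loop over heterogeneous entities, each hiding
// arbitrary behavior behind update().
struct Entity {
    virtual ~Entity() = default;
    virtual void update(float dt) = 0;   // could touch anything, anywhere
};

void update_hierarchy(std::vector<std::unique_ptr<Entity>>& entities, float dt) {
    for (auto& e : entities) e->update(dt);   // virtual dispatch per object
}

// Component style: several broad loops, each over one homogeneous array, each
// doing one small, known thing.
struct Position { float x = 0, y = 0; };
struct Velocity { float dx = 0, dy = 0; };

void update_movement(std::vector<Position>& pos,
                     const std::vector<Velocity>& vel, float dt) {
    for (std::size_t i = 0; i < pos.size(); ++i) {
        pos[i].x += vel[i].dx * dt;
        pos[i].y += vel[i].dy * dt;
    }
}

I find the second kind of loop far easier to reason about, and it also happens to be what the cache and the profiler like.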
111
Developer / Technical / Re: Popularity of component-based engines (AAA)
on: September 21, 2012, 04:36:14 PM
Quote: "I never understood how parallelism by luck works. Even if we do pretend it has a performance advantage, it's definitely not better design wise, unless you really do need this type of dynamics instead of keeping it simple. But then, why do you use a statically typed language if you need this kind of dynamics?"

What on Earth is that? Components help with task- and data-level parallelism, neither of which is "by luck". Done right you get nearly N times the performance with N cores.

Quote: "Maybe I misunderstand the word component but I thought it's just a way to structure the interface. Under the hood you can do what you want. In regards to cache coherency the debate is more about the array-of-structs vs the struct-of-arrays situations (the latter being better for cache coherency). The thing is that's all under the hood; you can still write an inheritance or component based architecture on top of the underlying implementation. It's really just about how you lay out your rendering data in the lower level part of your rendering code. Am I misunderstanding the question?"

Perhaps I'm misunderstanding your misunderstanding, but you're right about components largely being a way to structure an interface. However I think the question is about how to add data and behavior to entities, and how to use that stuff over entities' lifetimes. Big-picture issues about a peculiar part of game implementation. Either we assemble entities via flat components or via a deep inheritance hierarchy. We couldn't say whether array-of-structs or struct-of-arrays is best for performance without knowing the particular component. Still, we can be damn sure one or the other (probably both) for a component will be faster than the equivalent behavior in a derived entity. I mean, the more you optimize an inheritance hierarchy, the more it looks like a component system. Optimize a component system, and it just becomes more 'pure' (looks even more like a component system). Note he framed his question as: why did the industry adopt components? Reckon he's talking about how the choice potentially affects the whole span from low- to high-level, and in any terms: performance, stability, maintainability, what-have-you. A broad question.
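For anyone unfamiliar with the array-of-structs vs. struct-of-arrays distinction mentioned above, a tiny illustrative C++ sketch (the particle fields are made up):

Code:
#include <cstddef>
#include <vector>

// Array-of-structs: one struct per particle. A loop that only needs one field
// still drags the unused fields through the cache.
struct ParticleAoS { float x, y, vx, vy, age; };
void age_aos(std::vector<ParticleAoS>& ps, float dt) {
    for (auto& p : ps) p.age += dt;
}

// Struct-of-arrays: each field in its own contiguous array. The same loop now
// touches only the bytes it needs -- the cache-coherency win referred to above.
struct ParticlesSoA {
    std::vector<float> x, y, vx, vy, age;
};
void age_soa(ParticlesSoA& ps, float dt) {
    for (std::size_t i = 0; i < ps.age.size(); ++i) ps.age[i] += dt;
}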
112
Developer / Technical / Re: Popularity of component-based engines (AAA)
on: September 20, 2012, 02:18:56 PM
Quote: "It's primarily because of how CPUs and GPUs evolved over the past years. Cache (misses) and fewer draw calls are important factors in performance nowadays, and those align a bit better with component based engine / data design."
This. Add creeping core counts, and components being a boon to parallelism (they also help to integrate SIMD). For ease of development in teams, components are said to keep bugs down. They certainly increase modularity, though I've never heard this cited as a motivation.
113
Player / Games / Re: Catamites have Site now
on: September 20, 2012, 01:48:05 PM
Only one I've played is Space Funeral. Enjoyed the theme, it was a little compelling, but not enough to keep me playing a generic console-style RPG.
Does he have any good gamey games?
114
Player / Games / Re: Catamites have Site now
on: September 19, 2012, 11:36:38 PM
Quote: "(that's also my problem with those that bash tale of tales)."

Bitch please. Samyn brought disrespect, and it got brought right back to his jive ass. sry for the language, finna join dees juiced internet throw-downs
115
Developer / Design / Re: I love extra credits
on: September 19, 2012, 10:30:58 PM
Quote (my earlier post): "It's condescending because they present material in a way that shows they don't expect to be challenged, nor to challenge viewers. They get away with it because they target the lowest common denominator."

Quote: "You're contradicting yourself. Condescension is when you "speak down" to other people. But targeting the lowest common denominator is the exact opposite. Attempting to appeal to the broadest demographic possible is anathema to condescension. And given the response of many of the posters in this thread, I would have to say they do plenty to challenge their viewers."

No, I'm not. Even the lowest common denominator is capable of reading a short essay with the same substance. Not many people give a fuck about being spoken to like a kindergartener through a narrated picture book, even though they ought to.

Quote: "Extra Credits is one of the few web series that actually challenges its viewers to think. It encourages them to actively consider serious topics that arise in every area of game development. I can't help but wonder about anyone who calls the show condescending or pretentious. If you feel Extra Credits is "speaking down" to you, perhaps they aren't the problem."

Wicked burn brah, shame I wasn't criticizing the content but the presentation.
116
Developer / Design / Re: I love extra credits
on: September 19, 2012, 08:37:27 PM
Quote: "I think that a fundamental difference is the objective of the two shows. Egoraptor fills his episodes with exaggeration and funny asides. He also uses a fair amount of strong language regularly. Extra Credits has a much lighter and family-friendly tone, and never really deviates from their overarching discussion. They make no real attempts at humor, jokes, or exaggeration in the discussion, and only occasionally insert such elements into the visuals. Despite its more positive attitude and upbeat tone, it takes a fairly serious approach to all the material it presents. Sequelitis entertains, while informing the audience every so often. Extra Credits informs and inspires the audience, while entertaining them every so often."

EC is boring preachy fuddy-duddy fare. Don't get me wrong; I think obscenity for comic effect, editing gimmickry, flamboyance and such are pushed way too hard in video series like Sequelitis. But at least he's trying to make interesting videos. EC is just a fuck-ugly comic with an obnoxious voiceover, interspersed with some lame old memes. While the content is sometimes interesting, it'd be much better (& more quickly) presented in short essays.

And that's the root of EC's problem: it insults the viewer's intelligence by presuming he's too dumb or OCD to just read the material. Obviously true, because no effort is made to make a good video, rather than chipmunk narration reinforced with clipart-style images. You say yourself: "They make no real attempts at humor, jokes, or exaggeration in the discussion, and only occasionally insert such elements into the visuals."

It's condescending because they present material in a way that shows they don't expect to be challenged, nor to challenge viewers. They get away with it because they target the lowest common denominator.

Regarding family friendliness, just checked out s5e1 cause I haven't seen any in a long time. Among the first words were "holy hell".
118
Developer / Technical / Re: Another Entity/Component question
on: September 19, 2012, 05:20:00 PM
Quote: "@Danmark There are certainly many different ways that you can implement this, and I am by no means claiming to be perfect. I'm merely sharing, and it looks like I may learn something from this thread. Each class that handles my components has a map of objectID versus whatever data I could need for that component. For example, a sprite component is basically an ID versus the sprite data, which in this case actually is a struct. I don't know why I just specified a file name in my previous post."

Ah. In that case, my system is functionally identical, as object IDs map to any necessary data, except that a struct component instance is among that data by definition. Sounds like you have something very versatile going.

Quote: "What I think Danmark is saying (and I agree with him) isn't about implementation details. Rather, in the documentation that I've read, what you are calling a 'component' is usually called a 'system' or 'function', and what you are calling 'data' is what others call the 'component'."

Yes, those were the semantics niggles.

Quote: "The reason I use a Map instead of a list of structs is that it's faster to search by ID this way, as well as when I want to get the components associated with an ID, I have them all right next to each other in the map."

I meant 'list' in the broad sense of a sequence of contiguous items, not 'dynamic array'. I actually have fixed allocation in static arrays (although I could adapt to dynamic arrays). Per component subsystem, entity IDs are also indices into the entity->component mapping array, and component IDs are also indices into the array of (potential) component instances. All lightweight O(1) stuff with minimal indirection. IDs and indices are one and the same. Kinda like using the this pointer as a unique ID, except safer & more versatile. (Rough sketch at the bottom of this post.)

Quote: "For example, a simple Script Component is:
typedef string Script;
typedef vector<Script> ScriptList;
typedef unsigned long ObjectID;
map<ObjectID, ScriptList>
With something like this, I can easily see that there is more than one script associated to an object by getting the size of the ScriptList, instead of iterating through components and checking each struct's ObjectID that it stores. This also saves a tiny bit of memory, although very little in most cases. If my logic is flawed don't hesitate to point out where. This is just what makes sense to me."

No, your logic's fine, I just prefer something more general while retaining lightweightness. For a simple implementation, I'd have a ScriptList as a member of the Scriptable component struct, and the component subsystem would have the mapping of object ID to Scriptable (described above) built in. Of course, when you start factoring out common terms- having minimal sets of unique Scripts and ScriptLists respectively- my Scriptables would still need to refer to lots of external data that defines them. Then what's the point of the structs? Well,
map<ScriptID, Script>
map<ScriptListID, ScriptID>
map<ObjectID, ScriptListID>
is the form of what needs to happen anyway (though I'd use arrays again), and handling the last line via the guaranteed workings of the subsystem is no more taxing on the programmer. It's an economy of features. The normal case is a 1:1 mapping between an ObjectID and lots of local unique data per component instance- the case handled optimally by structs- and things like this script problem are more unusual. A component that only needs one field is exceptional, yet it's not a killer to the struct concept, because you don't lose much by using a one-field struct instead of something slightly more compact. We might conceive of cases where a component needs no fields whatsoever. Then I'd just make a Flags component. Again, economy of features. Keep complexity low & generality high while respecting efficiency.
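Rough sketch of the ID-as-index scheme I'm describing (C++, names mine; I've used vectors here for brevity where my real thing uses fixed-size static arrays):

Code:
#include <cstddef>
#include <cstdint>
#include <vector>

using EntityID    = std::uint32_t;
using ComponentID = std::uint32_t;
constexpr ComponentID kNoComponent = 0xFFFFFFFF;

template <typename T>                      // T = the component struct
struct ComponentSubsystem {
    std::vector<ComponentID> entity_to_component;  // indexed by EntityID
    std::vector<T>           instances;            // indexed by ComponentID

    explicit ComponentSubsystem(std::size_t max_entities)
        : entity_to_component(max_entities, kNoComponent) {}

    T& attach(EntityID e) {                // O(1): append an instance, record its index
        instances.emplace_back();
        entity_to_component[e] = static_cast<ComponentID>(instances.size() - 1);
        return instances.back();
    }
    T* get(EntityID e) {                   // O(1): two array indexings, no hashing
        ComponentID c = entity_to_component[e];
        return c == kNoComponent ? nullptr : &instances[c];
    }
};

Attach and lookup are both plain array indexing- no hashing, no tree walks- which is the lightweight O(1) stuff referred to above.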
119
Developer / Technical / Re: Collision Detection in AS3
on: September 19, 2012, 02:00:05 AM
Quote: "And that function works by checking if the object is out of the range instead of in the range of another object"

But it doesn't work, for precisely the reason I describe. You need to reformulate that entire function.

Quote: "It works on the top just nowhere else."

Not surprised. The unique thing about that case is it comes last in sequence.

Think about it: is
if(obj1.bmp.x < (obj2.bmp.x - (obj2.bmp.width / 2)))
sufficient to know that there's no collision? What if one of the 3 other cases is false? The way your function works, if that first is true and so didCollide is set to false, the remaining 3 cases can't affect the value of didCollide. Just because the +x bound of the character didn't hit the obstacle, he doesn't collide, regardless of whether his y or -x bounds collide.

EDIT- realized when I went to bed last night I was wrong. Sorry, gotta stop drunkposting on the technical board. What you (are trying to) have is:

if our +x bound is left of the obstacle's -x bound
or our -x bound is right of the obstacle's +x bound
or our +y bound is below the obstacle's -y bound
or our -y bound is above the obstacle's +y bound
then we don't collide

Only a collision if none of our bounds are on the wrong sides of the obstacle bounds (yes, double negative). The reason it doesn't work is because your conditions are kaput & don't produce correct bounds. Note that this is equivalent to:

if our +x bound is right of the obstacle's -x bound
and our -x bound is left of the obstacle's +x bound
and our +y bound is above the obstacle's -y bound
and our -y bound is below the obstacle's +y bound
then we do collide

Only a collision if all our bounds are on the correct sides of the obstacle bounds. Two different ways of saying the same thing. Note also that, in either case, it's an instantaneous logical thing, but you have it as a sequence of distinct checks. Something to look out for for efficiency. More importantly, it's more straightforward to see what it means written as just logic (one if and a bunch of &&s or ||s). Write readable code.

As for getting the bounds in the first place, Adobe's reference is crap, so I dunno how bmp centers are handled. nikki's way should work though.
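The same logic written out as one expression- C++ here rather than AS3, and assuming you've already converted the bitmaps' center + width/height into min/max bounds:

Code:
struct Box { float min_x, min_y, max_x, max_y; };

// Collision iff all of our bounds are on the correct sides of the obstacle's
// bounds: one if, a bunch of &&s.
bool collides(const Box& a, const Box& b) {
    return a.max_x > b.min_x && a.min_x < b.max_x &&
           a.max_y > b.min_y && a.min_y < b.max_y;
}

Negate it (or flip every comparison and switch to ||s) and you get the "no collision" form.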
120
Developer / Design / Re: 'Wisdom of the crowd' balancing concept
on: September 19, 2012, 01:41:24 AM
Quote: "Yeah, what I meant was that the lack of success, but multitude of attempts, hints at the difficulty of a possible solution."

Nobody's provided an example of it being tried.

Quote: "I think that's a rabbit-hole though. There's no point a game reaches when the feedback becomes clear..."

Of course, I'm merely trying to establish narrow bounds on the usefulness of the technique proposed.

Quote: "Ok, so the current game I'm working on does this. It goes a step further and generates much of the final graphics. Much of it isn't generated in "real time," and is generated over the entire dev period, evolving under guidance, but the benefits are the same, almost exactly..."

That's crazy. Please make a thread on it.

Quote: "I don't think that's true, to be unable to measure the value of a change. Just have good metrics. If a vote is worth making it is worth measuring."

It introduces extraneous dead reckoning into the system, vastly inflating complexity.

Quote: "Ok, so in that case why not just test numbers randomly? Humans are good at one thing: fuzzy associations. Computers are good at one thing: precision. Fine-tuning numbers only is better based on random sampling for a baseline. Have a computer adjust numbers a tiny little bit, measure the results, then use that as a basis for fiddling more. No matter what, the system has to determine whether a change made the game better or not. Users will not be good at this, because they are influenced by so much... Even if you want to have users vote you still need to weight their votes. So you need some way to determine if a vote was constructive or not. That means you need a measurement system. And since you have that you're better off just exploring balance points automatically..."

Random numbers wouldn't exploit players' intuition. You sample lots of space, but in a smart, selective way, thanks to the players. Sampling lots of possibility space, but only the pertinent parts- I now see the resemblance to AI, but it's rather superficial, as it's implemented by people instead of programs.

Quote: "The crux of the solution is having a good way of measuring whether the quality of a system improves. If you want to start getting predictive - say predicting the global impact of a change based on metrics regarding a particular area of play - then you're back to inter-dependency, because your system would need to determine how one area of gameplay impacts another. If you want to avoid inter-dependency then you need to go the multiple game-type route. Users most definitely have good insight into balance. That was never an issue. The question is: can you determine the difference between a good change and a bad one based on metrics?"

The more accurate your metrics, and the faster they measure, the more inter-dependencies they have to unravel. Tracking an immensely complex multidimensional problem as it changes over time. Nah, too hard, not going there. Players may implicitly introduce long-term knowledge by recalling earlier times from their play experiences & influencing their votes thereby, which is okay, because they're smart people, not dumb machines. Trying to imbue a machine with knowledge about the goodness of balancing, dat cray-cray.

Quote: "I'm speculating a little. The current solution we are discussing is new to me in many ways, but the idea of leveraging player input is not. I was doing theoretical work on measuring user-data for generated systems/content for websites a few years ago. Before (and after) that I was doing theoretical AI work, even some stuff to do with their application to games. Right now I'm working on a Poker AI. That's machine learning of abstract ideas out the wazoo. This is my stomping ground. It's just so big I'm always having to re-think it to relate my ideas to new circumstances. I wasn't trying to be a dick. I like tl;dr. Most TIG posts are 3-4x too short for me. No meat. TIG is more like a social hangout than a place of collaborative discussion. Conversation is more for the exercise than the development of ideas. Let the scope run wild. I don't care."

Sorry, I didn't feel you were being a dick. I'm being a dick. I want this discussion to conclude soon, because I've lost interest thanks to the diminishing of interesting content (no offence). OFC I should just stop poasting... As for your credentials, I don't doubt you've spent a lot of time on related problems, I just know you don't argue from authority (even that wouldn't imply correctness). However, your experience would be much more convincing if you posted some examples of your (ongoing or completed) work. Link to your site in your sig or something. On TIGSource. I agree there's too much superficial posting, but it's important to strike a balance between posts too long & too short to convey a message.

Quote: "Hah. I can't tell if you're mocking me. One day tigress, one day. edit: If you're really curious you can check this out: http://forums.tigsource.com/index.php?topic=26827.0 I talk about some things to do with proc-gen stories. Gimmy says my explanation for the "how" is vague... but I wonder if that's because he's just not a programmer, or I've spent so much time working with AIs that I forget how anyone else thinks. It may make sense to you or not."

Not mocking you. Sounds like interesting stuff. Started reading the linked thread, but it was too long.