TIGSource Forums › Developer › Technical (Moderator: ThemsAllTook)
Topic: Separating Simulation and Rendering (Read 2117 times)
raigan
« on: July 02, 2010, 06:16:10 AM »

This is something that's been bothering me for a while; I keep putting off deciding on a solution because so far everything I can think of sucks. This is going to be a lengthy-ish post, sorry.


I have a game where I want the simulation to be independent of the rendering, for various reasons (such as allowing me to create a sim-only game and play back a replay at faster-than-real-time by omitting the rendering).

But also it's just a general challenge/experiment, because I've been meaning to try the whole component-based-design approach, and this seems like the perfect opportunity since the interaction between components is only one-way (the sim components are oblivious to the rendering components, and the rendering components read sim state).

A rendering component would be responsible for displaying a particular game entity's graphics onscreen, as well as emitting e.g. particles and playing sounds.

My problem is: how do I actually implement this in a nice, simple way?

So far components just seem stupid, but I'm convinced that's because I have OOP-blinders that are preventing me from seeing the "right" way to do this.

-the sim components now need to expose all of their internal state (in a read-only manner) to the rendering components. This is a lot of annoying boilerplate code to write, plus it seems stupid to make this public when it's only needed by one specific external object. (Sadly this is in AS3 so there's no "friending")

-there needs to be some sort of event system so that e.g. the rendering component for a bomb is notified when the simulation component of the bomb decides it's time to blow up. This shouldn't be a generic event system though, because there's a known one-to-one mapping (i.e. each sim component can broadcast directly to its corresponding rendering component). The sim itself is already notified of these events, so I figured that it could pass them up a level (to whoever is managing the sim and rendering components).

(this maybe deserves its own thread, but I'm using direct function calls as "events", because in this case -- the sim is the only listener -- it seems stupid to do sim.RespondToEvent(new Event(entityID, eventType, some_parameter)) when I could instead do sim.Event_BombExploded())
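The direct-call idea above can be sketched like this; it's a toy illustration, not the poster's actual code, and every name (SimListener, BombSim, onBombExploded) is made up for the example:

```typescript
// Hypothetical sketch: one typed method per event, instead of a generic
// event object like `new Event(entityID, eventType, some_parameter)`.

interface SimListener {
  onBombExploded(entityID: number, radius: number): void;
}

class Sim implements SimListener {
  public log: string[] = [];
  onBombExploded(entityID: number, radius: number): void {
    // The sim reacts here, and could pass the event up another level
    // to whoever manages the sim and rendering components.
    this.log.push(`bomb ${entityID} exploded, radius ${radius}`);
  }
}

class BombSim {
  constructor(private id: number, private listener: SimListener) {}
  update(fuse: number): void {
    // When the bomb decides it's time to blow up, it calls the sim directly.
    if (fuse <= 0) this.listener.onBombExploded(this.id, 2.5);
  }
}

const sim = new Sim();
const bomb = new BombSim(7, sim);
bomb.update(0); // fuse expired: a plain method call, no event object allocated
```

Since the sim is the only listener, the call site stays as cheap and readable as any other method call.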


Anyway, my problem is that I keep coming back to the same implementation idea: inheritance! If I do something like this:

Entity_Bomb implements Entity
Entity_Bomb_Renderable extends Entity_Bomb, implements Renderable

This way, the child class (Entity_Bomb_Renderable) already has access to all the internal state of the bomb -- no need to write boilerplate code to expose it, and no need to expose it to everyone in general. Also, the child class is notified of all events already: when a bomb explodes, it calls its own private Explode() function, which then e.g. broadcasts an event to the sim. The Renderable class can just override this function to do whatever rendering-specific stuff needs to be done (emit particles, play sound) and then defer to its parent class to do the sim stuff as usual.


So, am I crazy, or in this case is the OOP solution just as good as components? I realize that the "Entity_Bomb_Renderable extends Entity_Bomb, implements Renderable" should set off red flags, because clearly this could be composed instead of inherited -- there could be a bomb object which has one component/aspect that deals with Entity stuff and another component/aspect that deals with Renderable stuff. But getting the two components to communicate seems stupidly messy and hard compared to just extending the class.

Sigh... I feel like I'm totally missing the point of everything :(

slembcke
« Reply #1 on: July 02, 2010, 08:21:03 AM »

I've found that an MVC-ish approach works really well for my own (Chipmunk Physics based) games. At the entity level I have three groupings:
  • Physics objects - rigid bodies, collision shapes and joints.
  • Sprites - Just the geometry and rendering information, transforms are provided by rigid bodies.
  • Game Objects - A controller-like object that groups a bunch of related physics objects and sprites together.

As a simple example, you have a ball in your game. It has a single rigid body, a single collision shape, and a single sprite. So by creating a ball game object, it creates the rigid body, collision shape and sprite, and links the collision shape and sprite to the rigid body. In a more complicated example, you may have a vehicle with several rigid bodies linked by joints and other constraints, and collision shapes and sprites to match.
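The ball example can be sketched roughly like this; these are toy classes for illustration, not Chipmunk's actual API:

```typescript
// Minimal sketch of the three groupings: physics objects, sprites, and a
// game object that wires them together. All names here are illustrative.

class RigidBody { x = 0; y = 0; angle = 0; }
class CollisionShape { constructor(public body: RigidBody, public radius: number) {} }
class Sprite { constructor(public body: RigidBody, public image: string) {} }

// The "game object" is a controller: it creates the related pieces and
// links the shape and sprite to the rigid body.
class BallObject {
  body = new RigidBody();
  shape: CollisionShape;
  sprite: Sprite;
  constructor(radius: number, image: string) {
    this.shape = new CollisionShape(this.body, radius);
    this.sprite = new Sprite(this.body, image); // transform comes from the body
  }
}

const ball = new BallObject(8, "ball.png");
ball.body.x = 42;
// The sprite stores no position of its own; it reads the body's transform.
const spriteX = ball.sprite.body.x;
```

A vehicle would follow the same pattern, just with several bodies, joints, shapes, and sprites created and linked by one game object.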

At the world level, I also have 3 types of objects:

  • Physics Space - Physics objects are added to this and it controls all the simulation.
  • Renderer - Sprites are added to this and it manages the rendering state and renders the sprites.
  • GameWorld - Controller object for the game's main loop. Handles events, updates the physics at a fixed interval and invokes the renderer for each frame.

So it's not exactly MVC, but it's close enough to make an analogy. Physics objects are the model, and the physics space is the only object that knows how to simulate them. Sprites are the view, they are drawn by the renderer object using transform information from the physics objects. Game objects are the controllers, they contain all the update/gameplay logic that controls animations, joints, etc of objects underneath them and are in turn controlled and managed by the Game world.

To expand further, physics objects and sprites are "dumb" objects. In general, most of their behavior is controlled by the physics space and renderer. Individual collision shapes don't have code for colliding against other shapes, rigid bodies don't know how to resolve collisions and sprites don't know about setting blending modes or shaders. They mostly just declare that they have physical properties, a shape, or a certain image to be drawn in a certain way. The game world and renderer contain the code to deal with a collection of objects as a whole and the interaction between them. The physics space knows how to detect and resolve collisions, and the renderer knows how to manage the rendering state for the sprites. They also don't contain logic for controlling themselves either. That is placed in update and draw methods in the game object. The game object would control things like motor strength, thruster forces from the update method so it can be called at a fixed interval while animation or fading code would be placed in the draw method which is run once each frame. I found that doing this the two level approach of dumb objects with game-world-level objects to control and sequence them works much better and more cleanly than trying to stuff all of that into a single object and implementing a ton of interfaces.

Lastly, (shut up already!) I don't strictly use sprites. I was attempting to write a nice generic and efficient 2D scenegraph, but I found that it was a huge waste of time. Setting up a bunch of scene nodes was tedious when you just wanted a sprite. It's usually just simpler to write exactly the 1000 lines of renderer code that your game needs. No more, no less. Want to add shadows or some other special effect to your game? You could muck with composing complicated composite scene nodes to get the layering right, or you could just add another for loop that renders things in the correct order. Writing a generic renderer for a specific game, or even 3 specific (but probably very similar) games, is a waste of time. Unlike writing a general purpose physics engine, I did not find this to be a lot of fun.
« Last Edit: July 02, 2010, 08:27:33 AM by slembcke »

Scott - Howling Moon Software Chipmunk Physics Library - A fast and lightweight 2D physics engine.
koliver66
« Reply #2 on: July 02, 2010, 08:32:09 AM »

Halfway through your post I was thinking inheritance, but I wouldn't put anything except drawing in the renderer.  Your main loop can call it or not, depending on the game state.  
slembcke
« Reply #3 on: July 02, 2010, 08:35:34 AM »

Oh, I should point out that for a non-physics game, you can probably just move all the physics logic into the game objects and handle the inter-object interactions in the game world. Still, the point is not to put all the code each object needs into the object itself. That way it's easy to compose game objects or have multiple sprites per object.
raigan
« Reply #4 on: July 02, 2010, 09:59:44 AM »

I guess I should provide more details: there is no physics -- all entity behaviour is FSM-/logic-based.

I can understand rigid bodies + sprites: in this case there is a nice well-defined and non-transient interface -- the bodies "publish" pos+orn (and maybe linear+angular velocity) and the sprites are positioned and rotated appropriately.

Sadly my graphical state is based on more than just pos+orn. For instance, enemies have charging/cooldown periods (e.g. a gun charging up to shoot), and the current value of the "charge" counter/timer is used to procedurally change the colour, size, or animation frame of the graphics.

Each enemy is different so there's no nice uniform interface for communicating state between the simulation and the renderer, there are just a bunch of specific-to-the-enemy state variables that influence the rendering.

So, problem #1 is that it seems stupid to add a bunch of "getters" that let the rendering components pull state out of the simulation components -- if the simulation is oblivious to/independent of the rendering, why am I adding a bunch of functions to the simulation objects that are only used by the renderer?!


A more difficult problem to address is problem #2: a lot of state is transient/implicit. For example, there is an enemy which decides if it will shoot a bullet this frame, and if so calls its private ShootBullet() function which performs a "hitscan" raycast through the world and responds to the results.

This is the important problem: the bullet isn't explicitly modeled, we don't ask the sim to spawn a Bullet object or anything, the enemy simply issues a raycast query and reacts based on the result: if the ray hits the player, it notifies the sim that the player was shot, and if not it doesn't do anything.

There are many such "events" that occur within the enemy FSMs for which there is no permanent state -- it's not like position+orientation which you can always lookup, it only exists transiently/temporarily when updating the enemy's FSM.

So, in order to support rendering a bullet, the enemy would need to broadcast some sort of "I shot a bullet" event which contains the information relevant to rendering (i.e. the intersection position+normal of where the bullet hit the world, etc.). But this is stupid -- rendering-only code has corrupted our beautiful pure simulation!

Now, this transient bullet-related state *could* be stored as explicit persistent state in the enemy:
Code:
didIFireABulletThisFrame:Boolean
bullet_hit_pos:vec2
bullet_hit_n:vec2

Clearly this is stupid, but even if we don't store this state explicitly and instead broadcast it via an event of some type, it is still a problem because the only reason to broadcast this event is to transmit state information to the renderer!

If this was implemented via components (the sim component of the enemy would issue a "I shot a bullet" event) this would be stupid because the only purpose of the event is to transmit state to the rendering component; if there is no rendering component, I shouldn't be broadcasting this event because no one is listening. And in general this seems like rendering state is encroaching into the pure simulation.

We could also just extend the enemy with a rendering version which "traps" these events:
Code:
public class Shooter
{
  protected function ShootBullet():void
  {
    //<do hitscan/raycast, etc. here>
    //(protected rather than private: AS3 can't override a private method)
  }
}

public class Shooter_Graphical extends Shooter
{
  override protected function ShootBullet():void
  {
    super.ShootBullet(); //defer actual non-rendering behaviour to our parent

    //<rendering-related code here, e.g. draw bullet, play sound, etc.>
  }
}

But this in turn also seems stupid, because we now have tons of classes which are just adding a bit of code and then deferring to their parent class -- it's clear that we're just layering some behaviour on top of existing behaviour, which should be achievable in a less heavy-handed way than jamming both behaviours together into a single class.

I just want to lay the rendering code OVER the simulation, not intertwine both of them in this bastardized way!


Maybe I'm just being too picky and anal about this, but I'm sick of writing code that gets the job done but is a horrible mess; I want a nice clean solution damn it!!
« Last Edit: July 02, 2010, 10:05:08 AM by raigan »
nikki
« Reply #5 on: July 02, 2010, 10:39:55 AM »

This might be too obvious and simple, but:
give all types that need it Draw() and Update() methods,
and use them when you need them!
slembcke
« Reply #6 on: July 02, 2010, 11:15:10 AM »

Well, again. The rendering objects are basically just dumb public structs. I don't expose all the game object stuff to the renderer, but instead push the state into the rendering objects during the draw() method. Incidentally the physics works pretty much the same way, but is done from the update() method instead. Think of it as a synchronization pass. The draw() method is when you calculate what color, size, position, etc that your graphics should be and you record the graphics state into the dumb rendering structs. Then after all the draw() methods have been called for all the game objects, the renderer makes a pass over all the structs and renders them.
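The synchronization pass described here can be sketched as follows; the charging enemy and its fields are invented for the example, but the shape of it (sim state in update(), graphics state pushed into a dumb struct in draw()) follows the description above:

```typescript
// Sketch of the draw()-as-synchronization-pass idea. The SpriteState struct
// is "dumb": it holds only what the renderer needs, and only draw() writes it.

interface SpriteState { x: number; size: number; color: string; }

class ChargingEnemy {
  x = 10;
  charge = 0; // sim-side state; the renderer never reads this directly
  readonly sprite: SpriteState = { x: 0, size: 1, color: "white" };

  update(): void { this.charge = Math.min(1, this.charge + 0.5); }

  // draw() derives graphics state from sim state and records it.
  draw(): void {
    this.sprite.x = this.x;
    this.sprite.size = 1 + this.charge;           // swell while charging
    this.sprite.color = this.charge >= 1 ? "red" : "white";
  }
}

const enemy = new ChargingEnemy();
enemy.update();
enemy.update(); // charge reaches 1
enemy.draw();
// After all draw() calls, the renderer passes over every SpriteState struct.
```

This is what makes the per-enemy state problem go away: there's no getter boilerplate, because the game object pushes exactly the derived visual state the renderer needs.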

With the way my stuff is set up, I would simply add a bullet streak/sprite and maybe a hit spark to the renderer instead of sending an event. We've heavily used this sort of immediate mode rendering for GUI elements in the past. Just clear out all of the immediate mode objects after each time you render a frame.

Again, the benefit I've found from decoupling the gamestate and renderer is that it makes it really easy to synchronize multiple graphical objects to a single game entity. Take even simple 2D blob shadows in an isometric game for instance. You render the background, then all the shadow blobs, then all the sprites and effects sorted by scene depth. If each entity drew itself and all its parts, this would be a disaster. You would have to call into each object several times or you would have shadows over the top of objects. By separating the rendering components, you can add the shadow blobs to one layer and the sprites and effects to another. Then all the game object has to do is to add the sprites to the correct layers and synchronize their positions in the draw() method.
raigan
« Reply #7 on: July 02, 2010, 11:54:58 AM »

@slembcke: okay, that makes a lot of sense.

It still seems wrong to have the simulation object e.g "add a bullet streak/sprite and maybe a hit spark to the renderer", because if rendering is turned off then this shouldn't happen at all.

We could of course make a "null renderer" that just ignores all such requests, or we could have the entity first check a global "is rendering enabled?" flag before making these requests, but in both cases we're missing the point: we know a priori that rendering is off, which means we should be able to avoid ever generating/running the code that does "add a bullet streak/sprite and maybe a hit spark to the renderer" in the first place, right? That code should simply not exist if we're running with rendering turned off.

For some reason how to easily accomplish this confuses the hell out of me.

 

slembcke
« Reply #8 on: July 02, 2010, 02:01:48 PM »

I dunno. I've gone the multiple renderer path before and it works nicely. Though it was a normal/debug renderer and not a null one.

Basically you have to inform the renderer somehow that events happened; often I do that through the game world, like a gameworld.bullet_impact() method or somesuch, instead of putting the graphics setup code directly into the logic code. So it's not so much broadcasting generic events as triggering methods on the gameworld to say that something happened. From there it can add sprites, spawn new enemies or whatever. You aren't specifically sending events to the renderer, you are sending them to the gameworld, and it's doing what it wants with them, which may or may not include doing something with the renderer. You could implement the logic for ignoring graphics at that level easily enough too.
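A sketch of that mediator idea: the enemy reports the impact to the game world, and the world alone decides whether graphics result. The thread only names gameworld.bullet_impact(); everything else here (the flag, the effects list, the classes) is invented for illustration:

```typescript
// The sim-side caller reports "a bullet hit here" and doesn't know or care
// whether any graphics result. Names are hypothetical.

class GameWorld {
  effects: string[] = [];
  constructor(private renderingEnabled: boolean) {}

  bulletImpact(x: number, y: number): void {
    // Rendering-on/off logic lives here, not in the enemy's logic code.
    if (this.renderingEnabled) {
      this.effects.push(`spark@${x},${y}`);
    }
  }
}

class ShooterEnemy {
  constructor(private world: GameWorld) {}
  shoot(): void {
    // ...hitscan raycast would go here; pretend it hit the wall at (5, 3)...
    this.world.bulletImpact(5, 3);
  }
}

const headless = new GameWorld(false);
new ShooterEnemy(headless).shoot();  // no effects spawned

const graphical = new GameWorld(true);
new ShooterEnemy(graphical).shoot(); // one hit spark queued for the renderer
```

The enemy code is identical in both runs; only the world's handling of the report changes.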
raigan
« Reply #9 on: July 03, 2010, 06:05:50 AM »

Yeah, I think I'm being stupid -- what you describe is great. Thanks! :)
floatstarpx
« Reply #10 on: July 03, 2010, 06:49:05 AM »

If you want to look at doing it with components (either static or dynamic aggregation), it can be really powerful, and really fun... I find now that, for me, it's hard to even think about going back to inheritance-based solutions... I think we'd be a lot slower developing without our component system.

the way I handle this particular scenario is to have 3 (or more, I suppose) components:
  • ComponentPosition - just holds the object's position/orientation
  • ComponentPhysicsBody - customisable physics component, data-driven... this gets the object's ComponentPosition and writes in the new position each frame
  • ComponentSprite / ComponentAnimate / ComponentTexturePoly etc.. - different rendering components...these get a ComponentPosition from the object they belong to and render it where required..

This means, if you have an object which doesn't contain a ComponentPhysicsBody, it just has a ComponentPosition and still renders, etc.
You can still do this without 'physics', by just having different ComponentXYZ behaviour components which do whatever is required to move the position about.

the rendering components just register for a callback from the render scene, and then handle their own rendering within themselves.

I don't have all the components having 'draw' and 'update', but let them register for a callback from the renderer or main update. I have several components that don't update/draw/etc.. so this saves calling all of those pointlessly.
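That registration scheme might look roughly like this; the Scheduler and the callbacks are invented names, sketching only the idea that components opt in to ticks rather than all implementing update/draw:

```typescript
// Only components that need ticks register, so the loops never touch
// components with nothing to do. All names are illustrative.

type Callback = () => void;

class Scheduler {
  private updates: Callback[] = [];
  private draws: Callback[] = [];
  onUpdate(cb: Callback): void { this.updates.push(cb); }
  onDraw(cb: Callback): void { this.draws.push(cb); }
  tick(): void { for (const cb of this.updates) cb(); }   // main update
  frame(): void { for (const cb of this.draws) cb(); }    // render callback
}

const scheduler = new Scheduler();
let moved = 0;
let drawn = 0;

// A behaviour component registers only for updates...
scheduler.onUpdate(() => { moved++; });
// ...a sprite component only for draws...
scheduler.onDraw(() => { drawn++; });
// ...and a plain data component (e.g. a ComponentPosition) registers nothing.

scheduler.tick();
scheduler.frame();
scheduler.frame(); // rendering can run at a different rate than simulation
```

Pure data components cost nothing per frame, which is the saving described above.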
Triplefox
« Reply #11 on: July 03, 2010, 09:42:23 PM »

In my own component system, I've avoided any archetype definition; each component has custom lookup methods based on the entity's ID, so that they can optimize themselves for the common cases and not get bogged down in maintaining a reference from the entity proper (all the entity has is references to component destruction functions so it can be deallocated cleanly).

Right now I have my components add graphics information to a buffer during their update; the buffer gets sorted and rendered every frame. I have the flexibility to use alternate strategies that add to the buffer outside of the component update, though -- which is useful should the components update multiple times before a frame is drawn (this can happen if you start slicing up the timestep differently among every component, which you might do for slowdown/fast-forwarding).
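The sorted draw buffer can be sketched like this; DrawCmd and the depth field are invented for the example:

```typescript
// Components append draw commands in any order (possibly across several sim
// updates); the renderer sorts by depth and flushes once per frame.

interface DrawCmd { depth: number; what: string; }

class DrawBuffer {
  private cmds: DrawCmd[] = [];
  add(depth: number, what: string): void { this.cmds.push({ depth, what }); }
  flush(): string[] {
    // Sort back-to-front, "render", then clear for the next frame.
    const order = [...this.cmds].sort((a, b) => a.depth - b.depth).map(c => c.what);
    this.cmds = [];
    return order;
  }
}

const buffer = new DrawBuffer();
buffer.add(2, "sprite");
buffer.add(0, "background");
buffer.add(1, "shadow");
const frame = buffer.flush(); // ["background", "shadow", "sprite"]
```

Because submission order doesn't matter, components can add to the buffer whenever their update happens to run.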

bateleur
« Reply #12 on: July 04, 2010, 07:12:09 AM »

Quote from: raigan
-the sim components now need to expose all of their internal state (in a read-only manner) to the rendering components. This is a lot of annoying boilerplate code to write, plus it seems stupid to make this public when it's only needed by one specific external object. (Sadly this is in AS3 so there's no "friending")

This paragraph suggests to me that you are trying to write the sim and render components at a high level of independence. If you want to do that you'll need to use interfaces - probably quite big ones - but I don't recommend it.

Whenever I've written code which has a sim/render separation I do exactly the opposite. The renderer is very tightly dependent on the exact sim code and knows all of its internals. No inheritance, though. You use AS3's package system. So you have, for example:

Code:
package com.mydomain.myapp {
  public class MyThingSim {
    var someField:SomeType; // Note - neither public nor private (package-internal)
    // Some code here
  }
}

package com.mydomain.myapp {
  public class MyThingRenderer {
    public function renderThing(thingSim:MyThingSim):void {
      someMethod(thingSim.someField);
    }
  }
}

...but never allow access the other way around. The simulation should not access the renderer at all or know anything about it.

Theotherguy
« Reply #13 on: July 04, 2010, 11:28:17 AM »

There are a few ways to do this.

Like you said, components are one way of separating rendering from simulation. However, using a reference to the simulation object is not a good way to do this.

A better way is to use a messaging system, an event system, or a publisher-subscriber approach.

Messaging System
This is the idea that your components can only communicate with each other using generic messages that have to be parsed and interpreted. What you could do is create a generic message class that has something like a header and an ID, which could simply be an enum. Then, you could have messages extending the base message class which components can choose to cast the generic message to.

So, it would be something like this:

1. Simulation component has a major change of state. Let's say its health goes to zero, and it must die.

2. The simulation component sends a message up the component tree to its parent and all of its siblings which says "HEY, I'M DEAD!" All of the other components see this as a generic message, but it has a header from an enum called "DeathMessage."

3. One of your rendering components, which handles particle effects, sees the generic death message, and realizes that it is a type of message it can handle. It looks at the message to see all of the information related to the kind of death that happened, and uses this to render the component's particle effects upon death.

4. All of the other components simply ignore the death message and continue on, unless they explicitly have a handler for that kind of message.

Advantages of the message system include the fact that it doesn't require components to know ANYTHING about each other's internal state -- they only need to know about the shared message interface. Disadvantages are that components which don't need to know about the death message now receive it, and have to parse it just to ignore it. Another disadvantage is that you have to maintain a consistent message interface even as internal state in each component changes.
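The four steps above can be sketched as follows; the enum, message classes, and components are all invented names for illustration:

```typescript
// Generic base message with an enum header; handlers downcast when they
// recognize the header, and ignore everything else.

enum MsgType { Death, Damage }

class Message { constructor(public readonly type: MsgType) {} }

class DeathMessage extends Message {
  constructor(public readonly cause: string) { super(MsgType.Death); }
}

interface Component { receive(msg: Message): void; }

class ParticleComponent implements Component {
  spawned: string[] = [];
  receive(msg: Message): void {
    if (msg.type === MsgType.Death) {
      const death = msg as DeathMessage; // safe: the header told us the subtype
      this.spawned.push(`death-burst (${death.cause})`);
    }
    // other message types fall through and are ignored
  }
}

class SoundComponent implements Component {
  receive(_msg: Message): void { /* no Death handler: ignores the message */ }
}

const components: Component[] = [new ParticleComponent(), new SoundComponent()];
const particles = components[0] as ParticleComponent;
for (const c of components) c.receive(new DeathMessage("explosion"));
```

Note how SoundComponent still has to receive and inspect the message just to ignore it, which is the disadvantage mentioned above.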

Event System
In an event system, different components connect to one another directly through events. Events are essentially just delegated functions which get called whenever another class decides to call it. It's basically a way to ensure that a function will get called in one class when another condition is true in another class. Here's how an event system would work:

1. Your simulation component has a "death" event. This is guaranteed to be triggered whenever the simulation component dies.

2. Your rendering component registers a function with the death event.

3. When the simulation component dies, the death event is triggered, and all functions registered to the event get called.

4. The function inside your component is triggered, causing an explosion to occur.

Advantages of the event system include the fact that events are standard features in many languages and are very fast, and that internal state doesn't have to be known. Disadvantages include the loss of control over the synchronization of events, and the overhead of making an event call as opposed to a simple function call.
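The event-system variant might look like this; GameEvent and the death payload are invented for the sketch:

```typescript
// The sim component owns a "death" event; rendering code registers a
// function with it and never touches sim internals.

class GameEvent<T> {
  private handlers: Array<(arg: T) => void> = [];
  register(handler: (arg: T) => void): void { this.handlers.push(handler); }
  trigger(arg: T): void { for (const h of this.handlers) h(arg); }
}

class SimComponent {
  readonly onDeath = new GameEvent<{ x: number; y: number }>();
  health = 1;
  damage(amount: number): void {
    this.health -= amount;
    // Guaranteed to fire whenever this component dies.
    if (this.health <= 0) this.onDeath.trigger({ x: 3, y: 4 });
  }
}

// Rendering side: register a handler for the explosion.
const simComponent = new SimComponent();
let explosionAt = "";
simComponent.onDeath.register(pos => { explosionAt = `${pos.x},${pos.y}`; });

simComponent.damage(1); // health hits zero, the death event fires
```

Unlike the message system, only registered listeners are ever called, so there's no parse-and-ignore cost.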

Publisher-Subscriber model
I'm pretty sure this isn't actually implemented in many games today, but it's something I picked up in the robotics industry. When working with a robot, you're often working with many different computers and programs running at once. For instance, the robot I'm working on right now has two 32-bit Ubuntu machines controlling arms, 2 simple embedded microcontrollers for the neck, and a 64-bit Windows machine accepting commands from the user. In this kind of environment, parallel computation and system-independent communication is needed. I see major parallels between this and a multi-threaded component system (where each of your game components runs in its own thread). The way we solved this problem is to model each machine and even each individual sensor and process as a server that published its data to "topics," which just stored the data there in a buffer. Then, other machines would "subscribe" to these topics, and pull information from them as needed.

Here's how a publisher-subscriber model would work in a component-based game engine:

1. Your game, or even an external program, stores a list of topics which contain buffers into which arbitrary message classes are stored.

2. When a new component is added to the game entity, it can choose to publish information to a topic. Each topic will store messages of a particular type. In this case, the simulation component publishes to the "death event" topic.

3. The rendering component subscribes to the "death event" topic. The topic handler registers a "callback" method for the rendering component, which is essentially just a function that gets called whenever a new message is published to the topic.

4. In the rendering component's callback function, it renders an explosion whenever a death event gets published on the death event topic.

Advantages to the publisher-subscriber paradigm include the fact that it is completely thread safe, and even allows communication between different processes and machines. If you used this method, you could even have every single component in your game be a different thread which responds to events in the system gracefully. It also inherits all of the advantages of the message system without the need for components that don't need to hear the message having to parse it to ignore it, and also inherits the advantages of the event system in that a direct function call happens whenever a new message is published to a topic.

Disadvantages include the HUGE overhead involved in publishing and subscribing to topics.
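A single-process sketch of those four steps; the Topic class and the death-event payload are invented, and a real robotics-style system (ROS, for example) would also cross process and machine boundaries:

```typescript
// A topic buffers published messages and invokes subscriber callbacks.

class Topic<T> {
  private buffer: T[] = [];
  private subscribers: Array<(msg: T) => void> = [];
  subscribe(cb: (msg: T) => void): void { this.subscribers.push(cb); }
  publish(msg: T): void {
    this.buffer.push(msg);                      // stored for late pullers
    for (const cb of this.subscribers) cb(msg); // pushed to callbacks
  }
  latest(): T | undefined { return this.buffer[this.buffer.length - 1]; }
}

// 1. The game stores topics into which message objects are published.
const deathTopic = new Topic<{ entity: number }>();

// 3. The rendering component subscribes with a callback.
let explosions = 0;
deathTopic.subscribe(() => { explosions++; });

// 2./4. The simulation component publishes; the callback "renders" an explosion.
deathTopic.publish({ entity: 12 });
const last = deathTopic.latest();
```

Subscribers get the direct-call speed of the event system, while non-subscribers never see the message at all.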

floatstarpx
« Reply #14 on: July 04, 2010, 01:39:19 PM »

Quote from: Theotherguy on July 04, 2010, 11:28:17 AM (the "Messaging System" section above)

Yeah, I think this works really well... I use messaging between objects and components in my setup, and it's effective. We use it almost exactly as you've described above -- except we try to split components further for separate behaviour...
e.g. we have a ComponentHealth (which will update itself on receiving a MessageDamage, and possibly send a MessageDestroyed to the root object if it runs out of health)
but then we might have something like a ComponentRunScriptOnDestroy, or ComponentDropItemOnDestroy, for example... which act when they get the MessageDestroyed.

For the case of an object actually being deleted, it just sets as 'deleted' (without messages) and it clears itself up from the object level by unregistering all components etc... - so it is possible to 'remove' an object without MessageDestroying it (e.g. we don't want it to actually destroy/explode/behave in that way, just be gone)...

but we hybridise this... I don't always use messages.. I have some components that know explicitly about other Components.. I don't find this a problem.. on object build, my 'sprite render' Component may grab a reference to the ComponentPosition in the same object (and hold on to it), and call public functions to get information out when it needs to render. I don't think this is a problem (and hasn't proven to be), and really avoids all that overengineering/unnecessary worrying.. and it's faster.
I think it's usually fairly obvious when it's good to use a message as opposed to calling a public function on a specific 'known' component, or at least it has been so far!