Holy balls, thank you sir.
It will take time to process this huge volume of info though, will probably reread it from time to time.
I believe it's a bit hard to follow, which is why I want to use it as a draft and make it better.
The only thing I fear so far is how much maintenance it will need in the end.
That's why I don't recommend the architecture I'm using to everybody. It requires wide knowledge, simply because all the tools I gathered are totally separate and you have to know about each of them, how to use them, etc.
Also, it's high-grade C++. Unfortunately that means not everybody can maintain this code. But it also means that I, myself, can do more with less, so it's OK in my case.
Knowledge of bleeding-edge techniques to manage concurrency is clearly a requirement.
- Edit - Also it feels your approach is rather complex, which might bring issues to the table.
First, keep in mind that it's the summary of several months of hard work, shaped as a braindump, so it probably looks more complex than it really is.
It's my fault for giving too much information. It happens a lot.
Half of what I said is just the principles I use to manage complexity. The other half is the tools I've built to follow those principles. What I mean is that I described the tools but not much of their usage.
I used all my knowledge and learning to put the complex parts in tools, so that the game-specific code stays isolated and works fine.
When I use my system, it's mostly like writing normal code, just cut into tasks and pushed into scheduling functions. I don't have to think about concurrency other than when I choose whether a data type has to be updated concurrently or not.
So basically, it's only setting up a framework to achieve these goals:
- scale with the hardware's concurrency capabilities;
- be efficient enough (for my specific game);
- make game code easy to write (it should read mostly like normal sequential code).
I think the explanation would be better with some diagrams and code examples though.
I could paste real usage examples here, but it all looks like dumb code.
Like, here is how I initialize the client systems:
Client::Client( NetRush& netrush_app )
    : m_netrush_app( netrush_app )
{
    using namespace netrush::system;
    using namespace netrush::view;

    NR_LOG( "#### Client Creation... ####" );

    auto graphic_config = m_netrush_app.config().get_sub( "netrush.graphic" );
    m_graphic_system = std::make_unique<GraphicSystem>( m_netrush_app.task_scheduler(), "NetRush", graphic_config );
    m_input_engine = std::make_unique<InputEngine>( m_netrush_app.task_scheduler(), m_graphic_system->engine().window_handle() );
    m_zoneview_system = std::make_unique<zoneview::ZoneViewSystem>( *m_graphic_system, *m_input_engine );

    // Exit when Escape key is pressed.
    m_input_engine->schedule( InputTask( "exit_on_escape", [&]( InputState& input_state )
    {
        if( input_state.keyboard().is_down( KEY_ESCAPE ) )
            m_netrush_app.request_exit();
    }).reschedule().until( [&]{ return m_netrush_app.is_exiting(); } ) );

    auto zone_view = m_zoneview_system->create_zoneview( zone::IdGenerator().make_id<zone::Zone>() );

    NR_LOG( "#### Client Creation - DONE. ####" );
}
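To make the fluent calls above (`.reschedule().until(...)`) less mysterious, here is a minimal sketch of what such a task type could look like. This is only my guess at the shape of the API, not the actual NetRush code; every name in it is an assumption.

```cpp
#include <functional>
#include <string>
#include <utility>

// Hypothetical sketch of a reschedulable task with a stop predicate.
// reschedule() marks the task as repeating; until() registers a condition
// that, once true, prevents any further execution.
template< class State >
class Task
{
public:
    Task( std::string name, std::function<void( State& )> work )
        : m_name( std::move(name) ), m_work( std::move(work) )
    {}

    Task& reschedule() { m_repeat = true; return *this; }

    Task& until( std::function<bool()> stop_condition )
    {
        m_stop = std::move(stop_condition);
        return *this;
    }

    // What a scheduler would call on each cycle; returns true if the task
    // should be scheduled again.
    bool run( State& state )
    {
        if( m_stop && m_stop() )
            return false;
        m_work( state );
        return m_repeat && !( m_stop && m_stop() );
    }

private:
    std::string m_name;
    std::function<void( State& )> m_work;
    std::function<bool()> m_stop;
    bool m_repeat = false;
};
```

A scheduler would then just keep calling `run()` and re-queue the task as long as it returns true, so the stop condition naturally ends the loop.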
Here is how I set up the graphic update tasks (m_tasks is a TaskChain&lt;GraphicData&gt;):
void GraphicEngine::schedule_default_tasks()
{
    m_tasks.push_back( GraphicTask( names::GRAPHIC_WINDOW_UPDATE
        , []( const GraphicUpdateInfo& )
        {
            update_windows();
        }).reschedule() );

    m_tasks.push_back( GraphicTask( names::GRAPHIC_RENDERING_TASK
        , []( const GraphicUpdateInfo& info )
        {
            UCX_ASSERT_NOT_NULL( Ogre::Root::getSingletonPtr() );
            Ogre::Root::getSingleton().renderOneFrame( static_cast<Ogre::Real>( info.delta_fsecs ) );
        }).reschedule() );
}
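For context, a TaskChain can be thought of as an ordered list of named callbacks that are all invoked with the same update data. Here is only a guessed minimal sketch of what it might look like; the names and members are assumptions, not the real NetRush types.

```cpp
#include <functional>
#include <string>
#include <utility>
#include <vector>

// Guessed minimal shape of TaskChain<T>: named callbacks run in order.
template< class UpdateData >
class TaskChain
{
public:
    struct Task
    {
        std::string name;
        std::function<void( const UpdateData& )> work;
    };

    void push_back( Task task ) { m_tasks.push_back( std::move(task) ); }

    // Run every task in order, in the calling thread.
    void execute( const UpdateData& data )
    {
        for( auto& task : m_tasks )
            task.work( data );
    }

private:
    std::vector<Task> m_tasks;
};
```

The important property is that execute() runs the tasks sequentially in the calling thread: within one chain, the task bodies never race with each other, so they need no synchronization of their own.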
Here is how the input system initializes itself:
m_state = std::make_unique<State>( window_handle );

static const auto INPUT_SYNC_FREQUENCY = milliseconds(16);

// Launch the input update task.
auto task = system::AsyncTask( names::TASK_INPUT_TASKS, [&]()
{
    m_state->update();
    m_tasks.execute( m_state->current_state() );
});
task.reschedule().interval( INPUT_SYNC_FREQUENCY );
m_task_end_sync.sync_with_task( task );

// Now we're ready to schedule a task for calling our task chain.
task_scheduler.schedule( std::move(task) );
AsyncTask is a typedef for Task<InputData>.
The m_task_end_sync.sync_with_task( task ); just makes sure that if the input system is destroyed, it will first notify the task to end and wait for it, terminating in sync with it. This makes everything very deterministic. I use it in each core system's construction/destruction for synchronizing with update tasks.
As you can see, other than classic modern C++ idioms, all the complexity is hidden. What feels complex is actually the implementation of the tools that hide that complexity, which really is glue code built on Boost/TBB/C++11 libraries. I also don't use futures much yet, because they only became interesting with the latest Boost release.
If you look at this code and feel it's complicated, then really don't even try to do the same.
I believe managing concurrent code will be far simpler once we get some of the tools proposed for the next major C++ standard (C++17 is the current target). For now there's nothing standard, so we still have to build these tools ourselves and explain how they work before anyone can use them.