I know of three valid timestep modes:
Render-locked and fixed timestep
Variable timestep based on framerate
Independently variable framerate and timestep
Render-locked and fixed is old school and is the "cleanest" way to do things in terms of computation and reproducibility, but it can't adapt to varied situations well. That said, I would recommend it unless you have a compelling reason to use one of the others, because they're more complex and still have flaws of their own.
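For concreteness, here's a minimal sketch of that kind of loop in C++ (the update(), render(), and waitForVsync() hooks are hypothetical stand-ins for whatever your engine provides, and the 60 Hz assumption is illustrative):

    // Render-locked fixed timestep: one fixed update per rendered frame.
    // Reproducible and simple, but simulation speed is tied to display rate.
    void update(double dt);   // advance the simulation by dt seconds (engine hook)
    void render();            // draw the current state (engine hook)
    void waitForVsync();      // block until the display wants a frame (engine hook)

    bool running = true;

    void gameLoop() {
        const double DT = 1.0 / 60.0;  // assumes a 60 Hz display
        while (running) {
            update(DT);       // simulation always advances by the same amount
            render();
            waitForVsync();   // the display paces the whole loop
        }
    }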
Variable timestep based on framerate is what most of this thread is discussing: faster rendering = smaller timesteps. It's great except for the floating point error, which can add up to things like some players moving faster than others in online play. You will also have to cap the timestep, or risk unplayability on slow systems or bugs resulting from a "rare outlier" giant timestep. And because you're updating the game as fast as possible, computers will burn through their CPU unless you also put a lower bound on frame time. Burning through the CPU is expected if you're running on a console, but it's not so nice to someone's laptop or phone battery. On the other hand, you get to play with timing more readily, which is great for games that need to do that.
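A rough sketch of that pattern, with both the timestep cap and a lower bound on frame time so the loop doesn't spin the CPU (the constants and the update()/render() hooks are illustrative, not from this thread):

    #include <algorithm>
    #include <chrono>
    #include <thread>

    void update(double dt);   // engine hook
    void render();            // engine hook
    bool running = true;

    // Variable timestep: measure real elapsed time each frame and pass it to
    // update(). The clamp caps giant "outlier" steps; the sleep keeps the loop
    // from burning 100% CPU on fast machines.
    void gameLoop() {
        using clock = std::chrono::steady_clock;
        const double maxDt = 0.1;                              // never step more than 100 ms
        const auto minFrame = std::chrono::milliseconds(4);    // soft limit, ~250 frames/sec

        auto last = clock::now();
        while (running) {
            auto now = clock::now();
            double dt = std::chrono::duration<double>(now - last).count();
            last = now;

            update(std::min(dt, maxDt));   // capped timestep
            render();

            // Lower bound on frame time so we don't cook the CPU/battery.
            std::this_thread::sleep_until(now + minFrame);
        }
    }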
Independently variable framerate and timestep is the modern AAA method. This method involves multiple subsystems (render, animation, physics, AI, scripted events, etc.) parallelizing their processing; the resulting timestep comes from the synchronization of all subsystems, which may use timing totally independent of whatever is going on in the render code. You can solve numeric drift in this way by locking down all the simulation-dependent code to use fixed increments and timings, while letting the other stuff run freely.
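One common way to get that "fixed increments for the simulation, everything else free-running" behavior is an accumulator loop. This is only a single-threaded sketch of the idea, not the actual multi-subsystem AAA pipeline, and the function names and step size are hypothetical:

    #include <chrono>

    void updateSimulation(double dt);  // physics/AI/gameplay: fixed increments only
    void updateCosmetics(double dt);   // particles, camera shake, etc.: free-running
    void render(double alpha);         // alpha = interpolation between sim states
    bool running = true;

    void gameLoop() {
        using clock = std::chrono::steady_clock;
        const double DT = 1.0 / 120.0;   // fixed simulation step
        double accumulator = 0.0;

        auto last = clock::now();
        while (running) {
            auto now = clock::now();
            double frameTime = std::chrono::duration<double>(now - last).count();
            last = now;

            accumulator += frameTime;
            while (accumulator >= DT) {      // simulation consumes fixed slices only
                updateSimulation(DT);
                accumulator -= DT;
            }

            updateCosmetics(frameTime);      // non-critical systems run at frame rate
            render(accumulator / DT);        // interpolate for smooth output
        }
    }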
Sounds great, right? We should all go for fully variable systems, shouldn't we? Except that doing this parallelization and synchronization can exact a penalty on input responsiveness: render may keep going steadily, but the other systems have to catch up with each other before something visibly happens onscreen. Hence you may introduce delays of several frames, depending on how much needs to be synchronized and the lagginess of the display. In a 30fps game, a one-frame delay is a ~33ms penalty. If you're delayed by five or six frames, players may not be able to tell exactly what's wrong with your controls, but they aren't going to FEEL very good. And if the framerate drifts, the responsiveness will drift too, screwing up precise timing.
Glaiel-Gamer's strategy of fixed timesteps with occasional skipped renders is a form of this parallelization, done only in the worst-case scenario where a tradeoff has to be made between render and simulation. Does it add response lag? Yes, when frames are skipped, because now the simulation is doing things the player can't see. It's a little bit "driftier" than if he had locked to the render rate. But it will look smoother, and if it happens infrequently, it's probably a good tradeoff, since people will notice jerky movement more readily than a few dozen ms of variable responsiveness.
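Here's one possible reading of that fixed-timestep-with-skipped-renders approach; this is my interpretation as a sketch, not Glaiel-Gamer's actual code, and the catch-up limit is arbitrary:

    #include <chrono>

    void update(double dt);   // engine hook
    void render();            // engine hook
    bool running = true;

    // Fixed timestep with occasional skipped renders: the simulation always
    // advances in DT slices; if a frame ran long and more than one update is
    // needed to catch up, we skip drawing that frame so the simulation stays
    // on schedule.
    void gameLoop() {
        using clock = std::chrono::steady_clock;
        const double DT = 1.0 / 60.0;
        const int MAX_CATCHUP = 5;       // arbitrary safety limit
        double accumulator = 0.0;

        auto last = clock::now();
        while (running) {
            auto now = clock::now();
            accumulator += std::chrono::duration<double>(now - last).count();
            last = now;

            int updates = 0;
            while (accumulator >= DT && updates < MAX_CATCHUP) {
                update(DT);              // step size never changes
                accumulator -= DT;
                ++updates;
            }
            if (updates == MAX_CATCHUP) {
                accumulator = 0.0;       // severe stall: give up catching up
            }

            if (updates <= 1) {
                render();                // normal frame
            }
            // else: skip this render so the simulation doesn't fall further behind
        }
    }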
In summary: This stuff sure is complicated! And no one solution is really "best." For more:
http://www.gamasutra.com/view/feature/1942/programming_responsiveness.php
http://www.eurogamer.net/articles/digitalfoundry-lag-factor-article