
Canvas animation: Benefits of separating update and render loop?

I am creating some simple user controlled simulations with JavaScript and the canvas element.

Currently I have a separate update loop (using setTimeout) and render loop (using requestAnimationFrame).

Updates are scaled using a time delta, so consistency is not critical in that sense. The reason is rather that I don't want any hiccups in the render loop to swallow user input or otherwise make the simulation less responsive.

The update loop will likely run at a lower (but hopefully fixed) frame rate.

Is this a good practice in JavaScript, or are there any obvious pitfalls? My hope is that the update loop will receive priority, but my understanding of the event loop might be a bit simplistic. (In the worst case, the behaviour differs between VM implementations.)
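As a side note on the priority question: a minimal Node-runnable sketch (hypothetical timings) suggests that timer callbacks get no priority at all, since both `setTimeout` and `requestAnimationFrame` callbacks run to completion on the same event loop, and a timer that is already due still waits for the current task to finish:

```javascript
// Hypothetical demonstration: a long-running callback blocks the event
// loop, so a timer that was due after 10 ms cannot preempt it and fires
// late. An update loop therefore cannot interrupt a slow render.
const scheduled = Date.now();

setTimeout(() => {
  const lateBy = Date.now() - scheduled - 10;
  console.log("timer fired ~" + lateBy + " ms late");
}, 10);

// Simulate a slow render callback that blocks the event loop for ~50 ms
const start = Date.now();
while (Date.now() - start < 50) { /* busy-wait */ }
```

So the two loops interleave cooperatively at best; neither can starve the other of turns, but neither can preempt the other either.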

Example code:

function update() {
  // Update simulation and process input
  setTimeout(update, 1000 / UPDATE_RATE);
}

function render() {
  // Render simulation onto canvas
  requestAnimationFrame(render);
}

function init() {
  update();
  render();
}
asked Sep 16 '25 by Drenmi

1 Answer

These concerns have been addressed in Game Development with Three.js by Isaac Sukin. It covers the case of low rendering frame rates, which was the primary concern of this question:

[...] at low frame rates and high speeds, your object will be moving large distances every frame, which can cause it to do strange things such as move through walls.
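The quoted "moving through walls" problem (often called tunneling) can be sketched numerically. The wall bounds, velocity, and deltas below are made-up illustration values:

```javascript
// Hypothetical illustration of tunneling: with one large time delta,
// a naive overlap check steps straight past a thin wall at x = 10.
const WALL_MIN = 10, WALL_MAX = 10.2;   // a thin wall
const velocity = 100;                    // units per second

function inWall(x) { return x >= WALL_MIN && x <= WALL_MAX; }

// One big step (a slow frame): the object skips the wall entirely
let xBig = 0 + velocity * 0.5;           // dt = 0.5 s  ->  x = 50
const bigHit = inWall(xBig);             // never overlapped the wall

// Many small fixed steps: one of them lands inside the wall
let xSmall = 0, smallHit = false;
for (let i = 0; i < 50; i++) {
  xSmall += velocity * 0.01;             // dt = 0.01 s
  if (inWall(xSmall)) smallHit = true;
}

console.log(bigHit, smallHit);           // false true
```

With the large delta, no intermediate position is ever tested, so the collision is simply never observed.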

It also covers the converse case: high rendering frame rates combined with relatively slow physics computations:

At high frame rates, computing your physics might take longer than the amount of time between frames, which will cause your application to freeze or crash.

It also addresses determinism, which becomes important in multiplayer games and in games that rely on it for features such as replays or anti-cheat mechanisms:

Additionally, we would like perfect reproducibility. That is, every time we run the application with the same input, we would like exactly the same output. If we have variable frame deltas, our output will diverge the longer the program runs due to accumulated rounding errors, even at normal frame rates.
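The reproducibility point can be sketched with a toy damped-motion model (the constant and delta values are made up for illustration). Each step multiplies the position by (1 - k*dt), so the result depends on exactly how the total simulated time is sliced into deltas:

```javascript
// Hypothetical sketch of (non-)reproducibility. With fixed-size deltas
// the run is bit-identical every time; with variable deltas covering the
// same total time, the result differs.
const k = 1;

function simulate(deltas) {
  let x = 1;
  for (const dt of deltas) x *= (1 - k * dt);
  return x;
}

const fixed = new Array(10).fill(0.1);   // 10 steps of 0.1 s = 1 s total
const variable = [0.15, 0.05, 0.15, 0.05, 0.15,
                  0.05, 0.15, 0.05, 0.15, 0.05]; // also 1 s total

const a = simulate(fixed);
const b = simulate(fixed);      // same input -> identical output
const c = simulate(variable);   // same total time, different slicing

console.log(a === b, a === c);  // true false
```

With fixed deltas, every client (or every replay) performs exactly the same arithmetic in the same order, which is what makes lockstep multiplayer and replays feasible.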


The book advises against running multiple loops, as this can have severe and hard-to-debug performance implications. Instead, time deltas are accumulated in the rendering loop until a fixed, preset step size is reached, at which point the physics loop processes a step:

A better solution is to separate physics update time-steps from frame refresh time-steps. The physics engine should receive fixed-size time deltas, while the rendering engine should determine how many physics updates should occur per frame.

Here's some example code, showing a minimal implementation in JavaScript:

var INVERSE_MAX_FPS = 1000 / 60; // ms per physics step (Date.now() is in milliseconds)
var frameDelta = 0;
var lastUpdate = Date.now();

function render() {
  // Update and render simulation onto canvas
  requestAnimationFrame(render);

  var now = Date.now();

  frameDelta += now - lastUpdate;
  lastUpdate = now;

  // Run as many fixed-size physics updates as we have missed
  while (frameDelta >= INVERSE_MAX_FPS) {
    update();
    frameDelta -= INVERSE_MAX_FPS;
  }
}

function init() {
  render();
}

With the code above, no matter how long it has been since the last rendered frame, as many physics updates as required will be processed. Any residual time delta is carried over to the next frame.

Note that the target maximum FPS might need to be adjusted depending on how slow the simulation runs.
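One further safeguard worth considering (not from the book; the constant and cap below are assumptions): if a single update takes longer than one step's worth of wall-clock time, the accumulator grows faster than the while loop can drain it, and the app spirals into doing only physics. Clamping the accumulated delta drops some simulated time but keeps the app responsive:

```javascript
// Hypothetical safeguard against the accumulator growing without bound,
// e.g. after a background tab resumes or during a long GC pause.
var FRAME_DURATION = 1000 / 60;              // ms per physics step
var MAX_ACCUMULATED = 10 * FRAME_DURATION;   // never replay more than 10 steps

function clampDelta(frameDelta) {
  return Math.min(frameDelta, MAX_ACCUMULATED);
}

console.log(clampDelta(5000) === MAX_ACCUMULATED); // true: a 5 s gap is capped
console.log(clampDelta(10) === 10);                // true: small deltas pass through
```

The cost is that the simulation visibly slows down instead of freezing, which is usually the preferable failure mode.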

answered Sep 18 '25 by Drenmi