APIs

WebGPU is the successor technology to WebGL, and the programming model of canvascs is very different from that of canvasfs. Although both are two-dimensional rendering models, they can also handle three-dimensional scenes.

Lands' WebGPU support framework adopts a double-buffering architecture: one buffer is the current screen texture (read-only), and the other is the rendering output texture (write-only). Of course, when rendering we can also compute the color and write it directly to the output buffer without reading the current screen data.

1. JS APIs

const canvas = await Lan.canvascs(1800, 1200, {
  preludes : 'util,cmpx',
  bitdepth : 32,
  initfill : (data) => {},
  displays : 'rgbc',
  interval : 1,
  storages : [100 * 200 * 4, ...],
  textures : ['url', HTMLCanvasElement, ...]
});
await Lan.loop(canvas.render);
canvas.finish();

In the options parameter, preludes specifies the code libraries that are preloaded in advance. The libraries currently available are:

initfill is used to initialize the canvas data. data is a Float32Array, and every 4 bytes represent one pixel. For the other parameters, please refer to the subsequent sections of this document.
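
For example, a minimal sketch of an initfill callback (assuming nothing about the pixel layout beyond what is stated above):

const canvas = await Lan.canvascs(1800, 1200, {
  // clear the whole canvas: black color and, for progressive rendering
  // (section 4), an accumulated sample count of zero in every pixel
  initfill : (data) => { data.fill(0); }
});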

1.1 async canvas.render(time, timeDelta, frame)

This is the rendering interface; it is usually called by Lan.loop.

1.2 canvas.finish()

This is the resource-release interface; it is usually called after rendering to release resources such as buffers and textures. You can also pass it as the second parameter of Lan.loop so that it is called after the loop ends:

Lan.loop(canvas.render, canvas.finish);

2. Compute Pipeline

We can write multiple $CS compute pipeline programs; all of their code is concatenated into a single program, and the following code is prepended:

// constants
const dimx = $canvas_width;
const dimy = $canvas_height;
const dimi = vec2i(dimx, dimy);
const dimu = vec2u(dimx, dimy);
const dimf = vec2f(dimx, dimy);
const dimc = vec2f($canvas_width / 2, $canvas_height / 2);
const aspect = $canvas_width / $canvas_height;

// uniform
struct UNIFORM {time:f32, timeDelta:f32, frame:u32, mouse:i32, mousex:i32, mousey:i32};
@group(0) @binding(0) var<uniform> UF:UNIFORM;

// input and output textures
@group(0) @binding(1) var TI:texture_2d<f32>;
@group(0) @binding(2) var TO:texture_storage_2d<rgba32float, write>; // write-only output (format assumed)

2.1 uniform variables

We can access all uniform parameters through UF:
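
For example (the exact meaning of the mouse fields is assumed here, not specified by this document):

let t   = UF.time;                      // current time
let dt  = UF.timeDelta;                 // time elapsed since the previous frame
let f   = UF.frame;                     // frame counter
let pos = vec2i(UF.mousex, UF.mousey);  // mouse position in pixels (assumed)
let btn = UF.mouse;                     // mouse button / state flag (assumed)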

2.2 @compute function

In the compute pipeline program, we can write multiple @compute functions, and they will be executed in the order of declaration. Normally, we update the output buffer in the last @compute function. A typical @compute function is as follows:

@compute @workgroup_size(16, 16) @dispatch(100, 100)
fn mainCS(@builtin(global_invocation_id) id:vec3u) {
  if (id.x >= dimx || id.y >= dimy) { return; }
  rseed(id); // initialize random seed

  // coordinates, x in [-.5, .5] and y follows the aspect ratio
  var uv = coord(id);
  
  // render
  var col = render(uv);

  // output
  outstore(id, vec4f(col, 1.));
}

Here, @dispatch(X, Y) is Lands-specific syntax that specifies the number of workgroups to dispatch; X and Y can be decimals. The default @workgroup_size is (1, 1), and the default @dispatch is ceil(canvas_dimensions / workgroup_size). Lands fills in these defaults automatically, so a compute function can be declared as simply as:

@compute mainCS(id) {
  rseed(id);
  ......
}

By default, the xy coordinates of the id correspond to the pixel coordinates of the canvas one-to-one, and we no longer need to check whether the coordinates are out of bounds.

3. Resource Bindings

The data and resources that the compute pipeline can use during computation are: uniforms, storage buffers, and textures. The binding order is: uniform, system default textures, custom buffers, external textures. The only things that need to be declared in our program are the custom buffers and external textures:

// user defined storages
@storage hist:array<atomic<u32>>;

// external textures
@texture tex:texture_2d<f32>;
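
As a usage sketch (purely illustrative: the bin count and the brightness mapping are assumptions, and hist is the buffer declared above), a compute function could accumulate a brightness histogram:

@compute mainCS(id) {
  var uv  = coord(id);
  var col = render(uv);                              // user rendering function, as in section 2.2

  // map the pixel brightness to one of 100 bins (assumes hist holds at least 100 counters)
  let lum = clamp(dot(col, vec3f(.299, .587, .114)), 0., 1.);
  atomicAdd(&hist[u32(lum * 99.)], 1u);

  outstore(id, vec4f(col, 1.));
}
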
4. Progressive Rendering

Here, progressive rendering refers to the case where colors are accumulated and then averaged. We use the alpha channel (color.w) to store the number of accumulated samples, while color.rgb stores the accumulated color. The final color displayed on screen is vec4(color.rgb / color.w, 1), so the displayed alpha channel is always 1.
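
A minimal sketch of this accumulation inside a compute function, assuming the TI/TO bindings prepended in section 2 and a user render(uv) function that returns one new color sample per frame:

@compute mainCS(id) {
  rseed(id);
  var uv  = coord(id);
  var col = render(uv);                          // new sample for this pixel

  // read the previously accumulated value from the current screen texture (read-only)
  var prev = textureLoad(TI, id.xy, 0);          // prev.rgb = accumulated color, prev.w = sample count

  // accumulate: rgb sums the samples, w counts them; the displayed color is rgb / w
  outstore(id, vec4f(prev.rgb + col, prev.w + 1.));
}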