
For GPU compute, does WebGL 2 fill WebGL 1’s gaps?

We recently built an interactive visualization of vegetation over time in Iceland. We wanted real-time analysis of a large dataset with no loading spinners, so we decided to implement our app using GPU compute. Our dataset consists of 204 images, or rasters, each mapping the state of Iceland’s vegetation over one month. Because we wanted to analyze all of this data at once, taking advantage of a user’s highly parallel GPU made more sense than a traditional server-side CPU approach.

For modern browsers, we had two APIs to choose from: WebGL 1 and WebGL 2. We wanted to support a wide array of browsers, and unfortunately WebGL 2 is not yet supported in Edge or Safari, so we chose WebGL 1.

Since WebGL 1 was not created with compute in mind, it was an open question how difficult this approach would be. In the end, development went well, but we encountered quite a few challenges. This post explores how we tackled those challenges with WebGL 1, and what the solution would have looked like with WebGL 2.

WebGL 1 Challenge #1 – Loading the Data

The images in our dataset need to be uploaded to the GPU in such a way that all 204 images can be sampled in a single pass. If we had only a few images, we could upload each image to its own WebGL texture and sample these textures separately. However, with 204 images we would exceed the number of textures most GPU hardware can sample in a single pass, and even if we could create that many textures, sampling them separately would likely perform terribly. So our solution in WebGL 1 was to load the images into a single texture atlas, dividing the data across the R, G, B, and A channels. This lets us sample from a single texture, but complicates texture storage and lookup quite a bit.
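To give a flavour of what that lookup involves, here is a rough sketch of the shader side (GLSL source embedded in a TypeScript string). The uniform names (u_atlas, u_atlasSize, u_tileSize, u_tilesPerRow) and the packing scheme are illustrative, not our exact production code; the sketch assumes four consecutive months share one atlas tile, one month per colour channel.

```ts
// Rough GLSL ES 1.00 sketch of the WebGL 1 atlas lookup. Names and layout are
// illustrative; assumes 4 consecutive months share a tile, one per channel.
const atlasLookupGLSL = /* glsl */ `
  precision highp float;

  uniform sampler2D u_atlas;    // all 204 rasters packed into one texture
  uniform vec2 u_atlasSize;     // atlas dimensions in pixels
  uniform vec2 u_tileSize;      // dimensions of one raster in pixels
  uniform float u_tilesPerRow;  // tiles per row of the atlas grid

  // Value for one month at integer pixel coordinates within a raster.
  float sampleMonth(float month, vec2 pixel) {
    float tile    = floor(month / 4.0);
    float channel = mod(month, 4.0);

    // Locate the tile's origin within the atlas grid.
    vec2 tileOrigin =
      vec2(mod(tile, u_tilesPerRow), floor(tile / u_tilesPerRow)) * u_tileSize;

    // Sample at the pixel centre, normalized to [0, 1].
    vec2 uv = (tileOrigin + pixel + 0.5) / u_atlasSize;
    vec4 texel = texture2D(u_atlas, uv);

    // Pick out the channel that holds this month's data.
    if (channel < 0.5) return texel.r;
    if (channel < 1.5) return texel.g;
    if (channel < 2.5) return texel.b;
    return texel.a;
  }
`;
```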

WebGL 2 Solution

This challenge is easily solved in WebGL 2. Arrays of textures can be uploaded as-is to WebGL 2, stored as a single 2D texture array, and sampled using “sampler2DArray”. So our 204 images could be stored as a texture array with 204 layers. This is a perfect match for our use case, and would let us skip a lot of finicky math in our texture atlas approach.
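A minimal sketch of what this could look like, with illustrative names (uploadRasterStack, u_rasters) and assuming single-channel 8-bit rasters:

```ts
// Minimal WebGL 2 sketch (illustrative, not our production code): upload 204
// single-channel 8-bit rasters as one 2D texture array.
function uploadRasterStack(gl: WebGL2RenderingContext,
                           rasters: Uint8Array[],  // one raster per month
                           width: number,
                           height: number): WebGLTexture {
  const tex = gl.createTexture()!;
  gl.bindTexture(gl.TEXTURE_2D_ARRAY, tex);
  // Allocate width x height x 204 storage up front...
  gl.texStorage3D(gl.TEXTURE_2D_ARRAY, 1, gl.R8, width, height, rasters.length);
  // ...then copy each month into its own layer.
  gl.pixelStorei(gl.UNPACK_ALIGNMENT, 1); // single-channel rows may be unpadded
  rasters.forEach((pixels, layer) => {
    gl.texSubImage3D(gl.TEXTURE_2D_ARRAY, 0, 0, 0, layer,
                     width, height, 1, gl.RED, gl.UNSIGNED_BYTE, pixels);
  });
  gl.texParameteri(gl.TEXTURE_2D_ARRAY, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
  gl.texParameteri(gl.TEXTURE_2D_ARRAY, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
  return tex;
}

// GLSL ES 3.00: the month index simply selects the array layer.
const sampleMonthGLSL = /* glsl */ `#version 300 es
  precision highp float;
  precision highp sampler2DArray;
  uniform sampler2DArray u_rasters;

  float sampleMonth(float month, vec2 uv) {
    return texture(u_rasters, vec3(uv, month)).r;
  }
`;
```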

[Figure: code comparison of sampling the data in WebGL 1 vs. WebGL 2]

WebGL 1 Challenge #2 – Ints and Floats

In WebGL 1, the first int-vs-float challenge comes when we want to sample a specific pixel of one of our data images, for example the pixel with coordinates (2, 3) of a 5-pixel-square image. Unfortunately, WebGL 1 only supports specifying texture coordinates as floating point numbers between 0 and 1. So we need to pass the texture’s size in pixels as a uniform to our shaders, and then normalize our texture coordinates before sampling. We also need to make sure we sample the center of each pixel, meaning we add 0.5 to X and Y before normalizing.
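Concretely, fetching pixel (2, 3) of that 5-pixel-square image ends up looking something like this in a WebGL 1 shader (u_texSize is the size uniform we have to supply ourselves; names are illustrative):

```ts
// GLSL ES 1.00 sketch: integer pixel coordinates have to be converted to the
// normalized [0, 1] coordinates texture2D expects.
const webgl1FetchGLSL = /* glsl */ `
  precision highp float;
  uniform sampler2D u_data;
  uniform vec2 u_texSize;   // texture size in pixels, passed in from JS

  vec4 fetchPixel(vec2 pixel) {            // e.g. pixel = vec2(2.0, 3.0)
    // Add 0.5 to land on the pixel centre, then normalize to [0, 1].
    vec2 uv = (pixel + 0.5) / u_texSize;   // -> (2.5 / 5.0, 3.5 / 5.0)
    return texture2D(u_data, uv);
  }
`;
```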

A second int-vs-float challenge is that WebGL 1 texture samplers always return floats, even though we store integer data in our texture and we want integer data in our shaders. This didn’t require us to write extra code, but the implicit conversion to floats made the math a bit harder to understand.
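For example, if the data is stored as 8-bit unsigned values (an assumption made here for illustration), the sampler returns a float in [0, 1] and the shader has to scale and round it back into an integer:

```ts
// GLSL ES 1.00 sketch: recovering integer data from a normalized float sample.
// Assumes 8-bit unsigned storage; other formats would need a different scale.
const decodeGLSL = /* glsl */ `
  int decodeValue(float channel) {
    return int(floor(channel * 255.0 + 0.5));  // e.g. 0.2 -> 51
  }
`;
```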

WebGL 2 Solution

WebGL 2’s “texelFetch” would allow us to directly sample a texture using integer coordinates, and if we need to know the size of a texture, “textureSize” provides that without the need to pass in custom uniforms. WebGL 2 also directly supports sampling integer textures as...integers!
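Here is a sketch of the same lookup in WebGL 2, assuming the rasters live in an unsigned-integer texture format such as R8UI (names, again, are illustrative):

```ts
// GLSL ES 3.00 sketch: texelFetch takes integer pixel coordinates directly,
// textureSize replaces the hand-rolled size uniform, and a usampler2D bound to
// an unsigned-integer texture (e.g. R8UI) returns integers, no decoding needed.
const webgl2FetchGLSL = /* glsl */ `#version 300 es
  precision highp float;
  precision highp usampler2D;
  uniform usampler2D u_data;

  uint fetchPixel(ivec2 pixel) {           // e.g. ivec2(2, 3)
    ivec2 size = textureSize(u_data, 0);   // no custom uniform required
    return texelFetch(u_data, clamp(pixel, ivec2(0), size - 1), 0).r;
  }
`;
```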

[Figure: code comparison of normalized coordinates in WebGL 1 vs. integer coordinates in WebGL 2]

WebGL 1 Challenge #3 – Extensions

For our application, we need to render data into 32-bit float buffers and read those floats back from the GPU to JavaScript. This involves enabling WebGL 1 extensions that only some browser/hardware combinations support. Worse, the specific operations these extensions cover ended up being narrow enough that some browsers still failed to run our app, even though they claimed to support the extensions. We wanted to give quick feedback to users on unsupported platforms, so we ended up writing a small test that performs the operations we need and runs before the full app is loaded. If the browser fails the test, we show a message explaining that the browser is unsupported.
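The probe itself is only a few lines. A simplified sketch of the idea, with the error handling condensed and names of our own choosing:

```ts
// Simplified sketch of a WebGL 1 capability probe: can we render to a float
// texture and read the floats back? Runs before the full app loads.
function supportsFloatCompute(): boolean {
  const gl = document.createElement('canvas').getContext('webgl');
  if (!gl) return false;

  // Float textures and float render targets are extensions in WebGL 1.
  if (!gl.getExtension('OES_texture_float')) return false;
  gl.getExtension('WEBGL_color_buffer_float');  // advisory on some browsers

  // Try to attach a 1x1 float texture to a framebuffer...
  const tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 1, 1, 0, gl.RGBA, gl.FLOAT, null);
  const fbo = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                          gl.TEXTURE_2D, tex, 0);
  if (gl.checkFramebufferStatus(gl.FRAMEBUFFER) !== gl.FRAMEBUFFER_COMPLETE) {
    return false;
  }

  // ...and read a float back. Claimed extension support doesn't always mean
  // the operation actually works.
  const out = new Float32Array(4);
  gl.readPixels(0, 0, 1, 1, gl.RGBA, gl.FLOAT, out);
  return gl.getError() === gl.NO_ERROR;
}
```

If a check along these lines returns false, the unsupported-browser message can be shown before any of the raster data is downloaded or decoded.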

WebGL 2 Solution

Texture format support is greatly expanded in WebGL 2, so the features we need would be available without extensions in any browser that supports WebGL 2.

Conclusion

Unsurprisingly, since WebGL 1 was designed with graphics in mind, it has many quirks when it’s used for parallel computation. WebGL 2 elegantly solves many of these problems, and when browser support gets better, it will be the clear choice. For now, we’re glad we went with an approach that maximized browser compatibility. In the future, we’ll keep an eye on WebGL 2 browser support, and also on the GPU for the Web Community Group for discussion of a possible upcoming low-level GPU API, allowing even more powerful applications to run in the browser.