Effortless Offloading: Next.js Meets WebWorkers
Optimising performance is a constant pursuit in the ever-evolving landscape of web development. As we push the boundaries of what our applications can do, the need for efficiency becomes more pronounced.
In this article, we’ll look at how we can use the next-webworker-pool NPM package to offload CPU-intensive work from the main thread and achieve a buttery-smooth application.
To get a good grasp on this, we’ll be looking at a common issue websites have to face - generating previews of user-uploaded images - and how this can be solved using a tiny NPM package.
Generating previews of user-uploaded images
To generate the preview of an image in JavaScript, we need to:
- load the image(s) via an <input type="file" /> control
- load each File into an HTMLImageElement (an <img /> element)
- create a <canvas />, draw the <img /> element inside and convert the result back into a Blob
Selecting images
To allow users to select images that need previews, we need to make use of the <input type="file" />
element:
const onImagesSelected: React.ChangeEventHandler<HTMLInputElement> = (event) => {
  const files = Array.from(event.target.files!);
  // ... rest of code
};

// ...
return (
  // ...
  <input type="file" multiple onChange={onImagesSelected} />
  // ...
);
We now have access to the user-selected files inside onImagesSelected.
Loading files as images
To draw an image into the canvas, we first need to load the file Blob
s into <img />
elements:
async function loadImage(blob: Blob) {
  return new Promise<HTMLImageElement>((resolve, reject) => {
    const img = new Image();
    img.onload = () => {
      URL.revokeObjectURL(img.src);
      resolve(img);
    };
    img.onerror = () => {
      URL.revokeObjectURL(img.src);
      reject(new Error('Failed to load image'));
    };
    img.crossOrigin = 'anonymous';
    img.src = URL.createObjectURL(blob);
  });
}
By using this function, we can await loadImage(files[...]) and receive an HTMLImageElement in return, with the loaded image inside.
Drawing the preview
To draw the image into a custom-sized canvas, we are going to load it using loadImage, then draw it into a <canvas />, finally converting the result back into a Blob:
export async function generateBlobPreview(blob: Blob): Promise<Blob> {
  // Load image
  const img = await loadImage(blob);

  // Create canvas and get context
  const canvas = document.createElement('canvas');
  canvas.width = canvas.height = 300;
  const context = canvas.getContext('2d')!;

  // Draw image (stretched)
  context.drawImage(img, 0, 0, canvas.width, canvas.height);

  return new Promise<Blob>((resolve, reject) => {
    // Convert canvas to blob
    canvas.toBlob((blob) => {
      // Destroy the canvas to free memory
      canvas.remove();

      if (blob == null) {
        reject(new Error('Failed to draw preview'));
        return;
      }

      resolve(blob);
    });
  });
}
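Note that drawImage above stretches the source image to fill the 300×300 canvas. If we wanted to preserve the aspect ratio instead, a small helper could compute "contain" dimensions first. This is a sketch of my own, not part of the example project:

```typescript
// Compute the largest width/height that fits inside a square target
// while preserving the source aspect ratio ("contain" behaviour).
function containSize(
  srcWidth: number,
  srcHeight: number,
  target: number,
): { width: number; height: number } {
  // Scale by the tighter of the two constraints so both sides fit
  const scale = Math.min(target / srcWidth, target / srcHeight);
  return {
    width: Math.round(srcWidth * scale),
    height: Math.round(srcHeight * scale),
  };
}
```

The resulting dimensions could then be passed to context.drawImage(img, 0, 0, width, height) instead of stretching to the full canvas size.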
With the code above, we can easily generate previews from our user-selected files by calling await generateBlobPreview(files[...]).
We can now go back to our onImagesSelected function and update it to use generateBlobPreview and generate previews for our user:
const onImagesSelected: React.ChangeEventHandler<HTMLInputElement> = async (event) => {
  const files = Array.from(event.target.files!);
  await Promise.all(
    files.map(async (img) => {
      const blob = await generateBlobPreview(img);
      const url = URL.createObjectURL(blob);
      // Update our state tracking preview URLs
      setPreviews((old) => [...old, url]);
    }),
  );
};
And that’s it. The result:
But our current solution has a major issue. The UI stutters and freezes, making this a terrible user experience, especially in a more complex page that has animations and other interactions.
You can access the example project to see it all in action.
Introducing WebWorkers
Next.js supports WebWorkers out of the box. The syntax for creating a WebWorker is new Worker(new URL('relative-path-to-worker.ts', import.meta.url)).
The key aspect that Next.js uses to support the workers is new URL(..., import.meta.url). The Next.js compiler scans for these expressions in your project and creates an entry point starting at the worker file, compiling it and its dependencies as it would any other TypeScript file. Then, when the Worker is created, the bundled worker code is loaded and executed.
This requirement can become limiting if we want to have more than one worker for some tasks: the first argument of URL must be a string literal to be detected by Next.js, so we cannot easily create a pool of workers.
next-webworker-pool
To create a pool of workers, we can use the next-webworker-pool
NPM package:
npm install --save next-webworker-pool
After installing it, we can use the exported createWorkerPool<I, O>
function to describe our worker pool:
const workerPool = createWorkerPool<Input, Output>(
  () => new Worker(new URL('../workers/my-worker.ts', import.meta.url)),
  { maxWorkers: 10 },
);

// execute a task
const result = await workerPool.executeTask(myInput).promise;

// terminate the worker pool when all tasks are done
workerPool.terminate();
The first argument of createWorkerPool is a factory function that will be used to create worker instances as needed. Because the Worker is constructed inside this function with a literal URL, Next.js can easily detect that we use a WebWorker and create the entry point accordingly.
The second argument allows us to specify how many workers we want at most to run concurrently. This can be an arbitrary number, such as 10, or based on navigator.hardwareConcurrency, which tells you how many logical cores are available on the device (I recommend using Math.max(1, navigator.hardwareConcurrency / 2) to avoid overloading the user’s device).
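That recommendation can be wrapped in a small helper. This is a minimal sketch of my own (the function name is not part of next-webworker-pool) that also floors the result and falls back to a single worker when hardwareConcurrency is unavailable:

```typescript
// Suggest a worker count: half the logical cores, at least 1.
// The `cores` parameter defaults to navigator.hardwareConcurrency when
// available; taking it as an argument keeps the helper easy to test.
function recommendedWorkerCount(
  cores: number | undefined = (globalThis as any).navigator?.hardwareConcurrency,
): number {
  if (!cores || cores < 1) {
    return 1; // API unsupported or reported nothing sensible
  }
  return Math.max(1, Math.floor(cores / 2));
}
```

It could then be passed to the pool as { maxWorkers: recommendedWorkerCount() }.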
Drawing images in a WebWorker
Inside WebWorkers, only a limited subset of the Web APIs is available. In particular, there is no DOM, so we cannot create a <canvas /> element inside a WebWorker. Fortunately, OffscreenCanvas is available, and we can make use of it inside our worker:
import type { Task } from 'next-webworker-pool';
globalThis.addEventListener('message', (event: MessageEvent<Task<Blob>>) => {
  const { data: imageBlob } = event.data;

  // Convert the Blob to an ImageBitmap, so we can draw in the offscreen canvas
  createImageBitmap(imageBlob).then((imageBitmap) => {
    // Create an offscreen canvas and get its context
    const canvas = new OffscreenCanvas(300, 300);
    const context = canvas.getContext('2d')! as OffscreenCanvasRenderingContext2D;

    // Draw our custom-sized image
    context.drawImage(imageBitmap, 0, 0, 300, 300);

    // Convert the canvas to a Blob and send it back to the main thread
    // NOTE: The `any` cast is needed because the type definition for
    // `convertToBlob` is missing in the TypeScript DOM library.
    (canvas as any)
      .convertToBlob()
      .then((blob: Blob) => {
        // Send the generated Blob back to the main thread
        globalThis.postMessage({
          id: event.data.id,
          data: blob,
        });
      })
      .catch((err: any) => {
        // Send the error back to the main thread
        globalThis.postMessage({
          id: event.data.id,
          error: err.message,
        });
      });
  });
});
The WebWorker code is very similar to our previous generateBlobPreview function. It receives a message containing a Blob from the main thread, converts it to a bitmap drawn into an OffscreenCanvas, which is converted back into a Blob that is finally sent back to the main thread.
Now we need to update our onImagesSelected function to use the WebWorker:
const onImagesSelected: React.ChangeEventHandler<HTMLInputElement> = async (event) => {
  const files = Array.from(event.target.files!);

  // Create a worker pool with our workers
  const workerPool = createWorkerPool<Blob, Blob>(
    () => new Worker(new URL('../workers/preview.worker.ts', import.meta.url)),
    { maxWorkers: Math.max(1, navigator.hardwareConcurrency / 2) },
  );

  // Create all the image previews
  await Promise.all(
    files.map(async (img) => {
      const blob = await workerPool.executeTask(img).promise;
      const url = URL.createObjectURL(blob);
      // Update our state tracking preview URLs
      setPreviews((old) => [...old, url]);
    }),
  );

  // Terminate the worker pool to free up resources
  workerPool.terminate();
};
This is very similar to what we had before. The result:
As you can see, our issue has disappeared. The UI stays responsive, and because no frames are skipped, we can see each preview being rendered into the application. If the page had other animations and interactions, they would still be responsive.
This new, performant, version can be found in this example project.
Gotchas
When using WebWorkers, you should keep in mind a few things:
WebWorkers support a subset of the Web APIs, so not all features can be offloaded onto a WebWorker.
WebWorkers have been supported for over 10 years in all major browsers, but some of the APIs have only recently been implemented everywhere (for example, OffscreenCanvas), so do provide fallbacks when relying on a specific API.
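A minimal feature-detection sketch for deciding whether the worker-based path is safe to use (the helper and its name are my own, not from the example project):

```typescript
// Returns true when the current environment can generate previews
// inside a WebWorker: Worker, OffscreenCanvas and createImageBitmap
// must all exist. Accessed via globalThis so the check also runs
// safely in non-browser environments.
function canOffloadPreviews(): boolean {
  const g = globalThis as any;
  return (
    typeof g.Worker !== 'undefined' &&
    typeof g.OffscreenCanvas !== 'undefined' &&
    typeof g.createImageBitmap !== 'undefined'
  );
}
```

When it returns false, the application can fall back to the main-thread generateBlobPreview approach shown earlier.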
When using Jest, it will complain about usages of import.meta.url inside the codebase. To mitigate this, add the babel-plugin-transform-import-meta plugin to your configuration when running in the test environment:
// babel.config.js
const isTestEnv = process.env.NODE_ENV === 'test';

module.exports = {
  plugins: [
    // ...,
    ...(isTestEnv ? ['babel-plugin-transform-import-meta'] : []),
  ],
  // ...,
};
To conclude, using WebWorkers in a web application is a must when carrying out CPU-intensive operations. Whilst Next.js supports them out of the box, the next-webworker-pool package enables us to run them concurrently and make our code more efficient.