At the beginning of 2025, I finally decided to build myself a new portfolio. I still pretty much liked the one I made back in 2021, but I felt the need to put to good use all the cool stuff I've learned these past couple of years working with WebGPU. And besides, half of the projects featured in my case studies had been taken offline anyway, so it was about time.
I didn't really know where I was going at this point, except that:
- It would, of course, feature multiple procedurally generated WebGPU scenes. I already had a few concepts in mind to explore, like particles or boids simulations.
- I wanted to handle the design myself. It might seem weird, especially since I was very happy with what Gilles came up with for my last portfolio, and also because I do suck at design. But this would give me more freedom, and I've also always liked building things from scratch on my own.
- Last but not least, it had to be fun!
1. The journey
The (tough) design and content process
Don't do this!
At first, I had no idea what to do design-wise. Fonts, colors: there are so many things that could go wrong.
I started with simple light and dark colors, kept the fonts Gilles had chosen for my previous portfolio and started to copy/paste its old text content. It didn't feel that great, and it wasn't fun for sure.

I definitely needed colors. I could have wasted hours (or days) picking the right pairing, but instead I decided this could be the right opportunity to use the random color palette generator utility I had coded a few years ago. I cleaned up the code a bit, created a repo, published it to npm and added it to my project. I also slightly changed the tone of the copywriting, and that led me to something still not that great, but a bit more fun.

I let it sit for a while and started working on other parts of the site, such as integrating the CMS or experimenting with the WebGPU scenes. It's only after a long iteration process that I finally set my mind on this kind of old-school retro video game vibe mixed with a more cheerful, cartoonish aesthetic, almost Candy Crush-esque. Impactful headings, popping animations, banded gradients… you name it.
Of course, I never went as far as creating a Figma project (I did pick a few reference images as a moodboard though) and just tested a ton of stuff directly with code until I felt it wasn't that bad anymore. All in all, it was a very long and painful process, and I guess every designer would agree at this point: don't do this!

Do you actually read portfolio content?
Another pain point was choosing the actual content and overall structure of the site. Do I need detailed case study pages? Do I need pages at all? Will users even read all these long blocks of text I'll struggle to write?
In the end, I chose to drop the case study pages. I had a couple of reasons to do so:
- Often the project ends up being taken offline for various reasons, and you end up showcasing something the user cannot visit anymore. That's exactly what happened with my previous portfolio.
- Most of the client work I've been doing these past years has been for agencies, and I'm not always allowed to share it publicly. I have no problem with that, but it slightly reduced the number of projects I could highlight.
From there, it was a quick decision to just go with a single landing page. I'd put direct links to the projects I could highlight, and small videos of all the other projects or personal works I could feature. On top of that, I'd add a few "about" sections mixed with my WebGPU scenes, and that'd be the gist of it.
Speaking of the WebGPU scenes, I really wanted them to be meaningful, not just a technical demonstration of what I could do. But we'll get to that later.
The final UX twist
After a few months, I felt like I was entering the final stage of development. The page structure was mostly done, all my various sections were there and I was working on the final animations and micro-interaction tweaks.
So I took a step back and looked at my initial expectations. I had my WebGPU scenes showcasing my various technical skills. I had handled the design myself, and it wasn't that bad. But were the flashy colors and animations enough to make it a truly fun experience overall?
I think you already know the answer. Something was missing.
Apart from the random color palette switcher, the UX mostly consisted of scroll-driven animations. Most of the 3D scene interactions were rudimentary. I needed an idea.
The design already had this cheerful video game look. So… what if I turned my whole portfolio into a game?
Once again, I started writing down my ideas:
- The user would need to interact with the different UI elements to unlock the theme switcher and color palette generator buttons.
- Each WebGPU scene could serve as a way to unlock the following content, acting as a very basic "puzzle" game.
- Keep track of the user's overall progress.
- Allow the user to skip the whole game process if they want to.
This means most users would never make it to the footer, or ever use this random palette generator tool I'd struggled to implement. This might very well be the riskiest, stupidest decision I've made so far. But it would give my portfolio the unique and fun touch I was looking for in the first place, so I went all in.
Of course, it goes without saying that it implied a major refactoring of the whole codebase, and I needed to come up with original interaction ideas for the WebGPU scenes, but I like to think it was worth it.


2. Technical case study
Now that you know all the whys, let's have a look at the hows!
Tech stack
I decided to try Sanity Studio, as I had never worked with it before, and since I knew this would be a relatively small project, it'd be a perfect fit to start using it. Although I feel like I've only scratched the surface, I liked the overall developer experience it provided. On the other hand, I already had experience working with Nuxt 3, so that was an easy choice.
No need to mention why I chose GSAP and Lenis — everyone knows these are great tools to ship smooth animated websites.
Of course, the WebGPU scenes had to be done with gpu-curtains, the 3D engine I've spent so much time working on these past two years. It was a great way to test it in a real-life scenario, and it gave me the opportunity to fix a few bugs and add a couple of features along the way.
And since I wanted the whole process to be as transparent as possible, I've published the entire source code as a monorepo on GitHub.
Animations
I won't go too deep into how I handled the various animations, simply because I mostly used CSS and a bit of GSAP here and there, mainly for canvas animations, SplitText effects or the videos carousel using the ScrollTrigger observer.
The basic scenes
There are a lot of components on the website that needed to draw something onto a <canvas> and react to theme and/or color palette changes.
To handle that, I created a Scene.ts class:
import type { ColorPalette } from "@martinlaxenaire/color-palette-generator";

export interface SceneParams {
  container: HTMLElement;
  progress?: number;
  palette?: ColorPalette;
  colors?: ColorModelBase[];
}

export class Scene {
  #progress: number;
  container: HTMLElement;
  colors: ColorModelBase[];
  isVisible: boolean;

  constructor({ container, progress = 0, colors = [] }: SceneParams) {
    this.container = container;
    this.colors = colors;
    this.#progress = progress;
    this.isVisible = true;
  }

  onResize() {}

  onRender() {}

  setSceneVisibility(isVisible: boolean = true) {
    this.isVisible = isVisible;
  }

  setColors(colors: ColorModelBase[]) {
    this.colors = colors;
  }

  get progress(): number {
    return this.#progress;
  }

  set progress(value: number) {
    this.#progress = isNaN(value) ? 0 : value;
    this.onProgress();
  }

  forceProgressUpdate(progress: number = 0) {
    this.progress = progress;
  }

  lerp(start = 0, end = 1, amount = 0.1) {
    return (1 - amount) * start + amount * end;
  }

  onProgress() {}

  destroy() {}
}
Since switching the theme from light to dark (or vice versa) also updates the color palette by slightly tweaking the HSV value component of the colors, I just put a setColors() method in there to handle those changes.
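The value tweak itself is straightforward; here's an illustrative sketch (the actual palette library API differs, and the shift amount here is arbitrary):

```typescript
// A plain HSV color representation for illustration purposes
interface HSVColor {
  h: number; // hue, 0-360
  s: number; // saturation, 0-1
  v: number; // value, 0-1
}

// Shift the value component of every color, clamped to [0, 1].
// A negative shift could be used for one theme, a positive one for the other.
function shiftValue(colors: HSVColor[], shift: number): HSVColor[] {
  return colors.map(({ h, s, v }) => ({
    h,
    s,
    v: Math.min(1, Math.max(0, v + shift)),
  }));
}
```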
The progress handling here is actually a leftover from when the WebGPU scene animations were mostly scroll-driven (before I introduced the game mechanics), but since a few scenes still used it, I kept it in there.
All the 2D canvas scenes extend that class, including the WebGPU fallback scenes, the theme switcher button and the dynamic favicon generator (did you notice that?).
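As a quick illustration, a 2D canvas scene subclass might look roughly like this — a simplified, hypothetical stand-in for the real classes, with the drawing layout kept as a pure helper:

```typescript
// Simplified stand-in for the Scene base class shown above (hypothetical).
class BaseScene {
  colors: string[];
  constructor(colors: string[]) {
    this.colors = colors;
  }
  setColors(colors: string[]) {
    this.colors = colors;
  }
  onRender() {}
}

// A scene that splits its canvas into vertical color stripes, one per
// palette color — the kind of logic a dynamic favicon generator could use.
class StripesScene extends BaseScene {
  width: number;
  constructor(colors: string[], width = 32) {
    super(colors);
    this.width = width;
  }

  // Pure layout helper: returns [x, stripeWidth, color] tuples,
  // easy to feed to ctx.fillRect() in an onRender() override.
  getStripes(): Array<[number, number, string]> {
    const stripeWidth = this.width / this.colors.length;
    return this.colors.map((color, i): [number, number, string] => [
      i * stripeWidth,
      stripeWidth,
      color,
    ]);
  }
}
```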
The WebGPU scenes
One of the very cool features introduced by WebGPU is that you can render to multiple canvas elements using only one WebGPU device. I used this to build four different scenes (we'll take a closer look at each of them below), which all extend a WebGPUScene.ts class:
import { GPUCurtains } from "gpu-curtains";
import type { ComputeMaterial, RenderMaterial } from "gpu-curtains";
import { Scene } from "./Scene";
import type { SceneParams } from "./Scene";
import {
  QualityManager,
  type QualityManagerParams,
} from "./utils/QualityManager";

export interface WebGPUSceneParams extends SceneParams {
  gpuCurtains: GPUCurtains;
  targetFPS?: QualityManagerParams["targetFPS"];
}

export class WebGPUScene extends Scene {
  gpuCurtains: GPUCurtains;
  qualityManager: QualityManager;
  quality: number;

  _onVisibilityChangeHandler: () => void;

  constructor({
    gpuCurtains,
    container,
    progress = 0,
    colors = [],
    targetFPS = 55,
  }: WebGPUSceneParams) {
    super({ container, progress, colors });

    this.gpuCurtains = gpuCurtains;

    this._onVisibilityChangeHandler =
      this.onDocumentVisibilityChange.bind(this);

    this.qualityManager = new QualityManager({
      label: `${this.constructor.name} quality manager`,
      updateDelay: 2000,
      targetFPS,
      onQualityChange: (newQuality) => this.onQualityChange(newQuality),
    });

    this.quality = this.qualityManager.quality.current;

    document.addEventListener(
      "visibilitychange",
      this._onVisibilityChangeHandler
    );
  }

  override setSceneVisibility(isVisible: boolean = true) {
    super.setSceneVisibility(isVisible);
    this.qualityManager.active = isVisible;
  }

  onDocumentVisibilityChange() {
    this.qualityManager.active = this.isVisible && !document.hidden;
  }

  compileMaterialOnIdle(material: ComputeMaterial | RenderMaterial) {
    if (!this.isVisible && "requestIdleCallback" in window) {
      window.requestIdleCallback(() => {
        material.compileMaterial();
      });
    }
  }

  override onRender(): void {
    super.onRender();
    this.qualityManager.update();
  }

  onQualityChange(newQuality: number) {
    this.quality = newQuality;
  }

  override destroy(): void {
    super.destroy();

    document.removeEventListener(
      "visibilitychange",
      this._onVisibilityChangeHandler
    );
  }
}
In the real version, this class also handles the creation of a Tweakpane GUI folder (useful for debugging or tweaking values), but for the sake of clarity I removed the related code here.
As you can see, each of these scenes closely monitors its own performance using a custom QualityManager class. We'll talk about that later, in the performance section.
Okay, now that we have the basic architecture in mind, let's break down each of the WebGPU scenes!
Since WebGPU is not fully supported yet, I created fallback versions of each of the following scenes using the 2D canvas API and the Scene class we've seen above.
Hero scene
The scenes featured in the portfolio roughly follow a kind of complexity order, meaning the further you advance through the portfolio, the more technically involved the scenes become.
In that sense, the hero scene is by far the most straightforward technically speaking, but it had to look particularly striking and engaging to immediately grab the user's attention. I thought of it as some kind of mobile puzzle game splash screen.
It's made of a basic, single fullscreen quad. The idea here is to first rotate its UV components each frame, map them to polar coordinates and use that to create colored triangle segments.
// Center UVs at (0.5, 0.5)
var centeredUV = uv - vec2f(0.5);

// Apply rotation using a 2D rotation matrix
let angleOffset = params.time * params.speed; // Rotation angle in radians
let cosA = cos(angleOffset);
let sinA = sin(angleOffset);

// Rotate the centered UVs
centeredUV = vec2(
  cosA * centeredUV.x - sinA * centeredUV.y,
  sinA * centeredUV.x + cosA * centeredUV.y
);

// Convert to polar coordinates
let angle = atan2(centeredUV.y, centeredUV.x); // Angle in radians
let radius = length(centeredUV);

// Map angle to triangle index
let totalSegments = params.numTriangles * f32(params.nbColors) * params.fillColorRatio;
let normalizedAngle = (angle + PI) / (2.0 * PI); // Normalize to [0,1]
let triIndex = floor(normalizedAngle * totalSegments); // Get triangle index

// Compute fractional part for blending
let segmentFraction = fract(normalizedAngle * totalSegments); // Value in [0,1] within segment

let isEmpty = (i32(triIndex) % i32(params.fillColorRatio)) == i32(params.fillColorRatio - 1.0);
let colorIndex = i32(triIndex / params.fillColorRatio) % params.nbColors; // Use half as many color indices
let color = select(vec4(params.colors[colorIndex], 1.0), vec4f(0.0), isEmpty);
There's actually a wavy noise applied to the UVs beforehand using concentric circles, but you get the idea.
Interestingly enough, the most difficult part was to achieve the rounded rectangle entering animation while preserving the correct aspect ratio. This was done using this function:
fn roundedRectSDF(uv: vec2f, resolution: vec2f, radiusPx: f32) -> f32 {
  let aspect = resolution.x / resolution.y;

  // Convert pixel values to normalized UV space
  let marginUV = vec2f(radiusPx) / resolution;
  let radiusUV = vec2f(radiusPx) / resolution;

  // Adjust radius X for aspect ratio
  let radius = vec2f(radiusUV.x * aspect, radiusUV.y);

  // Center UV around (0,0) and apply scale (progress)
  var p = uv * 2.0 - 1.0; // [0,1] → [-1,1]
  p.x *= aspect; // fix aspect
  p /= max(0.0001, params.showProgress); // apply scaling
  p = abs(p);

  // Half size of the rounded rect
  let halfSize = vec2f(1.0) - marginUV * 2.0 - radiusUV * 2.0;
  let halfSizeScaled = vec2f(halfSize.x * aspect, halfSize.y);

  let d = p - halfSizeScaled;
  let outside = max(d, vec2f(0.0));
  let dist = length(outside) + min(max(d.x, d.y), 0.0) - radius.x * 2.0;

  return dist;
}
Highlighted videos slider scene
Next up is the highlighted videos slider. The original idea came from an old WebGL prototype I had built a few years ago and never used.
The idea is to displace the planes' vertices to wrap them around a cylinder.
var position: vec3f = attributes.position;

// curve
let angle: f32 = 1.0 / curve.nbItems;

let cosAngle = cos(position.x * PI * angle);
let sinAngle = sin(position.x * PI * angle);

position.z = cosAngle * curve.itemWidth;
position.x = sinAngle;
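For clarity, the same cylindrical displacement can be transcribed to plain TypeScript — a CPU-side sketch mirroring the shader's uniforms (parameter names mirror the snippet above, nothing more):

```typescript
// CPU-side transcription of the vertex displacement above (illustrative).
// x is the plane's local vertex x coordinate, as in the shader.
function wrapAroundCylinder(
  x: number,
  nbItems: number,
  itemWidth: number
): { x: number; z: number } {
  const angle = 1 / nbItems;
  const cosAngle = Math.cos(x * Math.PI * angle);
  const sinAngle = Math.sin(x * Math.PI * angle);
  // The vertex is pushed onto a cylinder: x follows the sine,
  // z (depth) follows the cosine scaled by the item width.
  return { x: sinAngle, z: cosAngle * itemWidth };
}
```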
I obviously used this for the year titles, whereas the videos and the trail effect behind them are distorted using a post-processing pass.
While this was originally tied to the vertical scroll values (and I really liked the feeling it produced), I had to update its behavior once I switched to the whole gamification idea, turning it into a horizontal carousel.
Thanks to gpu-curtains' DOM-to-WebGPU syncing capabilities, it was relatively easy to set up the videos grid prototype using the Plane class.
The trail effect is done with a compute shader writing to a storage texture. The compute shader only runs when necessary, meaning when the slider is moving. I'm sure it could have been done in a thousand different ways, but it was a good excuse to play with compute shaders and storage textures. Here's the compute shader involved:
struct Rectangles {
  sizes: vec2f,
  positions: vec2f,
  colors: vec4f
};

struct Params {
  progress: f32,
  intensity: f32
};

@group(0) @binding(0) var backgroundStorageTexture: texture_storage_2d<rgba8unorm, write>;
@group(1) @binding(0) var<uniform> params: Params;
@group(1) @binding(1) var<storage, read> rectangles: array<Rectangles>;

fn sdfRectangle(center: vec2f, size: vec2f) -> f32 {
  let dxy = abs(center) - size;
  return length(max(dxy, vec2(0.0))) + max(min(dxy.x, 0.0), min(dxy.y, 0.0));
}

@compute @workgroup_size(16, 16) fn main(
  @builtin(global_invocation_id) GlobalInvocationID: vec3<u32>
) {
  let bgTextureDimensions = vec2f(textureDimensions(backgroundStorageTexture));

  if(f32(GlobalInvocationID.x) <= bgTextureDimensions.x && f32(GlobalInvocationID.y) <= bgTextureDimensions.y) {
    let uv = vec2f(f32(GlobalInvocationID.x) / bgTextureDimensions.x - params.progress,
      f32(GlobalInvocationID.y) / bgTextureDimensions.y);

    var color = vec4f(0.0, 0.0, 0.0, 0.0); // Default to black

    let nbRectangles: u32 = arrayLength(&rectangles);

    for (var i: u32 = 0; i < nbRectangles; i++) {
      let rectangle = rectangles[i];
      let rectDist = sdfRectangle(uv - rectangle.positions, vec2(rectangle.sizes.x * params.intensity, rectangle.sizes.y));
      color = select(color, rectangle.colors * params.intensity, rectDist < 0.0);
    }

    textureStore(backgroundStorageTexture, vec2(GlobalInvocationID.xy), color);
  }
}
I thought I was done here, but while running production build tests I stumbled upon an issue. Unfortunately, preloading all these videos to use as WebGPU textures resulted in a huge initial payload and also significantly affected the CPU load. To mitigate that, I implemented sequential video preloading, where I wait for each video to have buffered enough data before loading the next one. This gave a huge boost regarding initial load time and CPU overhead.
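The sequential loading idea boils down to awaiting each item before starting the next. Here's a minimal, generic sketch (in the browser, the `load` callback would resolve once the video has buffered enough data; the helper name is invented):

```typescript
// Load items one after the other instead of firing all requests at once.
// `load` is injected so the same helper works for videos, images, etc.
async function preloadSequentially<T>(
  sources: string[],
  load: (src: string) => Promise<T>
): Promise<T[]> {
  const results: T[] = [];
  for (const src of sources) {
    // Wait for the current item to be ready before starting the next one
    results.push(await load(src));
  }
  return results;
}
```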

Invoices scene
The third WebGPU scene was initially supposed to be my own take on 3D boids simulations, using instancing and a compute shader. After a bit of work, I had a bunch of instances following my mouse, but the end result was not living up to my expectations. The spheres were sometimes overlapping each other, or disappearing behind the edges of the screen. I kept improving it, adding self-collision, edge detection and attraction/repulsion mechanisms until I was happy enough with the result.
I like to call it the "invoices" scene, because the sphere instances actually represent all the invoices I've issued during my freelance career, scaled based on their amounts. Since I use Google Sheets to handle most of my accounting, I made a little script that gathers all my invoice amounts in a single, separate private sheet every time I update my accounting sheets. I then fetch and parse that sheet to create the instances. It was a fun little side exercise, and it turns this scene into an ironically meaningful experiment: every time you click and hold, you kind of help me collect my money.
The compute shader uses a buffer ping-pong technique: you start with two identically filled buffers (e.g. packed raw data), then at each compute dispatch call you read the data from the first buffer and update the second accordingly. Once done, you swap the two buffers before the next call and repeat the process.
If you're familiar with WebGL, this is usually done with textures. WebGPU and compute shaders allow us to do it with buffers, which is much more powerful. Here is the whole compute shader code:
struct ParticleB {
  position: vec4f,
  velocity: vec4f,
  rotation: vec4f,
  angularVelocity: vec4f,
  data: vec4f
};

struct ParticleA {
  position: vec4f,
  velocity: vec4f,
  rotation: vec4f,
  angularVelocity: vec4f,
  data: vec4f
};

struct SimParams {
  deltaT: f32,
  mousePosition: vec3f,
  mouseAttraction: f32,
  spheresRepulsion: f32,
  boxReboundFactor: f32,
  boxPlanes: array<vec4f, 6>
};

@group(0) @binding(0) var<uniform> params: SimParams;
@group(0) @binding(1) var<storage, read> particlesA: array<ParticleA>;
@group(0) @binding(2) var<storage, read_write> particlesB: array<ParticleB>;

fn constrainToFrustum(pos: vec3f, ptr_velocity: ptr<function, vec3f>, radius: f32) -> vec3f {
  var correctedPos = pos;

  for (var i = 0u; i < 6u; i++) { // Loop through the 6 frustum planes
    let plane = params.boxPlanes[i];
    let dist = dot(plane.xyz, correctedPos) + plane.w;

    if (dist < radius) { // If inside the plane boundary
      // Move the point inside the frustum
      let correction = plane.xyz * (-dist + radius); // Push inside the frustum

      // Apply the position correction
      correctedPos += correction;

      // Reflect velocity with damping
      let normal = plane.xyz;
      let velocityAlongNormal = dot(*(ptr_velocity), normal);

      if (velocityAlongNormal < 0.0) { // Ensure we only reflect if moving towards the plane
        *(ptr_velocity) -= (1.0 + params.boxReboundFactor) * velocityAlongNormal * normal;
      }
    }
  }

  return correctedPos;
}

fn quaternionFromAngularVelocity(omega: vec3f, dt: f32) -> vec4f {
  let theta = length(omega) * dt;

  if (theta < 1e-5) {
    return vec4(0.0, 0.0, 0.0, 1.0);
  }

  let axis = normalize(omega);
  let halfTheta = 0.5 * theta;
  let sinHalf = sin(halfTheta);

  return vec4(axis * sinHalf, cos(halfTheta));
}

fn quaternionMul(a: vec4f, b: vec4f) -> vec4f {
  return vec4(
    a.w * b.xyz + b.w * a.xyz + cross(a.xyz, b.xyz),
    a.w * b.w - dot(a.xyz, b.xyz)
  );
}

fn integrateQuaternion(q: vec4f, angularVel: vec3f, dt: f32) -> vec4f {
  let omega = vec4(angularVel, 0.0);
  let dq = 0.5 * quaternionMul(q, omega);

  return normalize(q + dq * dt);
}

@compute @workgroup_size(64) fn main(
  @builtin(global_invocation_id) GlobalInvocationID: vec3<u32>
) {
  var index = GlobalInvocationID.x;

  var vPos = particlesA[index].position.xyz;
  var vVel = particlesA[index].velocity.xyz;
  var collision = particlesA[index].velocity.w;
  var vQuat = particlesA[index].rotation;
  var angularVelocity = particlesA[index].angularVelocity.xyz;
  var vData = particlesA[index].data;

  let sphereRadius = vData.x;
  var newCollision = vData.y;

  collision += (newCollision - collision) * 0.2;
  collision = smoothstep(0.0, 1.0, collision);

  newCollision = max(0.0, newCollision - 0.0325);

  let mousePosition: vec3f = params.mousePosition;
  let minDistance: f32 = sphereRadius; // Minimum allowed distance between spheres

  // Compute attraction towards sphere 0
  var directionToCenter = mousePosition - vPos;
  let distanceToCenter = length(directionToCenter);

  // Slow down when close to the attractor
  var dampingFactor = smoothstep(0.0, minDistance, distanceToCenter);

  if (distanceToCenter > minDistance && params.mouseAttraction > 0.0) { // Only attract if outside the minimum distance
    vVel += normalize(directionToCenter) * params.mouseAttraction * dampingFactor;
    vVel *= 0.95;
  }

  // Collision handling: packing spheres instead of pushing them away
  var particlesArrayLength = arrayLength(&particlesA);

  for (var i = 0u; i < particlesArrayLength; i++) {
    if (i == index) {
      continue;
    }

    let otherPos = particlesA[i].position.xyz;
    let otherRadius = particlesA[i].data.x;
    let collisionMinDist = sphereRadius + otherRadius;

    let toOther = otherPos - vPos;
    let dist = length(toOther);

    if (dist < collisionMinDist) {
      let pushDir = normalize(toOther);
      let overlap = collisionMinDist - dist;
      let pushStrength = otherRadius / sphereRadius; // radius

      // Push away proportionally to overlap
      vVel -= pushDir * (overlap * params.spheresRepulsion) * pushStrength;
      newCollision = min(1.0, pushStrength * 1.5);

      let r = normalize(cross(pushDir, vVel));
      angularVelocity += r * length(vVel) * 0.1 * pushStrength;
    }
  }

  let projectedVelocity = dot(vVel, directionToCenter); // Velocity component towards the mouse
  let mainSphereRadius = 1.0;

  if(distanceToCenter <= (mainSphereRadius + minDistance)) {
    let pushDir = normalize(directionToCenter);
    let overlap = (mainSphereRadius + minDistance) - distanceToCenter;

    // Push away proportionally to overlap
    vVel -= pushDir * (overlap * params.spheresRepulsion) * (2.0 + params.mouseAttraction);
    newCollision = 1.0;

    if(params.mouseAttraction > 0.0) {
      vPos -= pushDir * overlap;
    }

    let r = normalize(cross(pushDir, vVel));
    angularVelocity += r * length(vVel) * 0.05;
  }

  vPos = constrainToFrustum(vPos, &vVel, sphereRadius);

  // Apply velocity update
  vPos += vVel * params.deltaT;

  angularVelocity *= 0.98;
  let updatedQuat = integrateQuaternion(vQuat, angularVelocity, params.deltaT);

  // Write back
  particlesB[index].position = vec4(vPos, 0.0);
  particlesB[index].velocity = vec4(vVel, collision);
  particlesB[index].data = vec4(vData.x, newCollision, vData.z, vData.w);
  particlesB[index].rotation = updatedQuat;
  particlesB[index].angularVelocity = vec4(angularVelocity, 1.0);
}
One of my main inspirations for this scene was this awesome demo by Patrick Schroen. I spent a lot of time looking for the right rendering techniques to use and finally settled on volumetric lighting. The implementation is quite similar to what Maxime Heckel explained in his excellent breakdown article. Funnily enough, I was already deep into my own implementation when he released that piece, and I owe him the idea of using a blue noise texture.
As a side note, during the development phase this was the first scene that required an actual user interaction, and it played a pivotal role in my decision to turn my folio into a game.
Open source scene
For the last scene, I wanted to experiment a bit more with particles and curl noise, because I've always liked how organic and beautiful it can get. I had already published an article using these concepts, so I wanted to come up with something different. Jaume Sanchez's Polygon Shredder was definitely a major inspiration here.
Since this experiment was part of my open source commitment section, I had the idea to use my GitHub statistics as a data source for the particles. Each statistic (number of commits, followers, issues closed and so on) is assigned to a color and turned into a bunch of particles. You can even toggle them on and off using the filters in the info pop-up. Once again, this turned a rather technical demo into something more meaningful.
While working on the portfolio, I was also exploring new rendering techniques with gpu-curtains, such as planar reflections. Traditionally used for mirror effects or floor reflections, it consists of rendering part of your scene a second time from a different camera angle and projecting it onto a plane. Having nailed this, I thought it would be a perfect fit here and added it to the scene.
Last but not least, and as a reminder of the retro video game vibe, I wanted to add a pixelated mouse trail post-processing effect. I soon realized it would be way too much though, and ended up showing it only when the user is actually drawing a line, making it more subtle.

Performance and accessibility
On such highly interactive and immersive pages, performance is key. Here are a few techniques I used to try to maintain the most fluid experience across all devices.
Dynamic imports
I used Nuxt dynamically imported components and lazy hydration for almost every non-critical component of the page. In the same way, all the WebGPU scenes are dynamically loaded only if WebGPU is supported. This significantly decreased the initial page load time.
// pseudo code
import type { WebGPUHeroScene } from "~/scenes/hero/WebGPUHeroScene";
import { CanvasHeroScene } from "~/scenes/hero/CanvasHeroScene";

let scene: WebGPUHeroScene | CanvasHeroScene | null;
const canvas = useTemplateRef("canvas");
const { colors } = usePaletteGenerator();

onMounted(async () => {
  const { $gpuCurtains, $hasWebGPU, $isReducedMotion } = useNuxtApp();

  if ($hasWebGPU && canvas.value) {
    const { WebGPUHeroScene } = await import("~/scenes/hero/WebGPUHeroScene");

    scene = new WebGPUHeroScene({
      gpuCurtains: $gpuCurtains,
      container: canvas.value,
      colors: colors.value,
    });
  } else if (canvas.value) {
    scene = new CanvasHeroScene({
      container: canvas.value,
      isReducedMotion: $isReducedMotion,
      colors: colors.value,
    });
  }
});
I'm not particularly fond of Lighthouse reports, but as you can see the test result is quite good (note that it's running without WebGPU though).

Monitoring WebGPU performance in real time
I briefly mentioned it earlier, but each WebGPU scene actually monitors its own performance by keeping track of its FPS rate in real time. To do so, I wrote two separate classes: FPSWatcher, which records the average FPS over a given period of time, and QualityManager, which uses an FPSWatcher to set a current quality rating on a 0 to 10 scale based on the average FPS.
This is what they look like:
export interface FPSWatcherParams {
  updateDelay?: number;
  onWatch?: (averageFPS: number) => void;
}

export default class FPSWatcher {
  updateDelay: number;
  onWatch: (averageFPS: number) => void;
  frames: number[];
  lastTs: number;
  elapsedTime: number;
  average: number;

  constructor({
    updateDelay = 1000, // ms
    onWatch = () => {}, // callback called every ${updateDelay}ms
  }: FPSWatcherParams = {}) {
    this.updateDelay = updateDelay;
    this.onWatch = onWatch;

    this.frames = [];

    this.lastTs = performance.now();
    this.elapsedTime = 0;

    this.average = 0;
  }

  restart() {
    this.frames = [];
    this.elapsedTime = 0;
    this.lastTs = performance.now();
  }

  update() {
    const delta = performance.now() - this.lastTs;
    this.lastTs = performance.now();
    this.elapsedTime += delta;

    this.frames.push(delta);

    if (this.elapsedTime > this.updateDelay) {
      const framesTotal = this.frames.reduce((a, b) => a + b, 0);

      this.average = (this.frames.length * 1000) / framesTotal;

      this.frames = [];
      this.elapsedTime = 0;

      this.onWatch(this.average);
    }
  }
}
It's very basic: I just record the elapsed time between two render calls, put that into an array and run a callback every updateDelay milliseconds with the latest FPS average value.
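The average itself is just the frame count divided by the total elapsed time. As a standalone helper, the computation FPSWatcher runs every updateDelay looks like this:

```typescript
// Average FPS over a window of frame deltas (each delta in milliseconds).
function averageFPS(frameDeltas: number[]): number {
  const totalMs = frameDeltas.reduce((a, b) => a + b, 0);
  // frames per second = number of frames / elapsed seconds
  return totalMs > 0 ? (frameDeltas.length * 1000) / totalMs : 0;
}
```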
It's then used by the QualityManager class, which does all the heavy lifting of assigning an accurate current quality score:
import type { FPSWatcherParams } from "./FPSWatcher";
import FPSWatcher from "./FPSWatcher";

export interface QualityManagerParams {
  label?: string;
  updateDelay?: FPSWatcherParams["updateDelay"];
  targetFPS?: number;
  onQualityChange?: (newQuality: number) => void;
}

export class QualityManager {
  label: string;
  fpsWatcher: FPSWatcher;
  targetFPS: number;
  #lastFPS: number | null;
  #active: boolean;

  onQualityChange: (newQuality: number) => void;

  quality: {
    current: number;
    min: number;
    max: number;
  };

  constructor({
    label = "Quality manager",
    updateDelay = 1000,
    targetFPS = 60,
    onQualityChange = (newQuality) => {},
  }: QualityManagerParams = {}) {
    this.label = label;
    this.onQualityChange = onQualityChange;

    this.quality = {
      min: 0,
      max: 10,
      current: 7,
    };

    this.#active = true;

    this.targetFPS = targetFPS;
    this.#lastFPS = null;

    this.fpsWatcher = new FPSWatcher({
      updateDelay,
      onWatch: (averageFPS) => this.onFPSWatcherUpdate(averageFPS),
    });
  }

  get active() {
    return this.#active;
  }

  set active(value: boolean) {
    if (!this.active && value) {
      this.fpsWatcher.restart();
    }
    this.#active = value;
  }

  onFPSWatcherUpdate(averageFPS = 0) {
    const lastFpsRatio = this.#lastFPS
      ? Math.round(averageFPS / this.#lastFPS)
      : 1;
    const fpsRatio = (averageFPS + lastFpsRatio) / this.targetFPS;

    // if the fps ratio is over 0.95, we should increase
    // else we decrease
    const boostedFpsRatio = fpsRatio / 0.95;

    // smooth change multiplier to avoid huge changes in quality
    // except if we've seen a big change from the last FPS values
    const smoothChangeMultiplier = 0.5 * lastFpsRatio;

    // quality difference that should be applied (number with 2 decimals)
    const qualityDiff =
      Math.round((boostedFpsRatio - 1) * 100) * 0.1 * smoothChangeMultiplier;

    if (Math.abs(qualityDiff) > 0.25) {
      const newQuality = Math.min(
        Math.max(
          this.quality.current + Math.round(qualityDiff),
          this.quality.min
        ),
        this.quality.max
      );

      this.setCurrentQuality(newQuality);
    }

    this.#lastFPS = averageFPS;
  }

  setCurrentQuality(newQuality: number) {
    this.quality.current = newQuality;
    this.onQualityChange(this.quality.current);
  }

  update() {
    if (this.active) {
      this.fpsWatcher.update();
    }
  }
}
The most difficult part here is to smoothly handle the quality changes to avoid huge drops or gains in quality. You also don't want to fall into a loop where, for example:
- The average FPS is poor, so you degrade the current quality.
- You detect a quality loss and therefore decide to switch off an important feature, such as shadow mapping.
- Removing the shadow mapping gives you an FPS boost, and after the expected delay the current quality is upgraded.
- You detect a quality gain, decide to re-enable shadow mapping and, soon enough, you're back to step 1.
Typically, the quality rating is used to update things such as the scene's current pixel ratio, frame buffer resolutions, number of shadow map PCF samples, volumetric raymarching steps and so on. In worst-case scenarios, it can even disable shadow mapping or post-processing effects.
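As an illustration, a hypothetical onQualityChange handler could map the 0-10 score to concrete settings like so (all thresholds, names and values here are invented, not the actual site values):

```typescript
// Hypothetical per-quality settings for a WebGPU scene
interface SceneSettings {
  pixelRatio: number;
  shadowMapPCFSamples: number;
  useShadows: boolean;
  usePostProcessing: boolean;
}

function settingsForQuality(quality: number): SceneSettings {
  return {
    // Scale render resolution with quality, capped at 2
    pixelRatio: Math.min(2, 0.5 + quality * 0.15),
    // Fewer PCF samples on lower tiers
    shadowMapPCFSamples: quality >= 7 ? 9 : quality >= 4 ? 4 : 1,
    // Worst-case scenarios: drop shadows and post-processing entirely
    useShadows: quality >= 3,
    usePostProcessing: quality >= 2,
  };
}
```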
Accessibility
Finally, the site had to respect at least basic accessibility standards. I'm not an accessibility expert and I may have made a few mistakes here and there, but the key points are that the HTML is semantically correct, it's possible to navigate using the keyboard, and the prefers-reduced-motion preference is respected. I achieved that by simply disabling the whole gamification concept for these users, removing both CSS and JavaScript animations, and making the scenes fall back to their 2D canvas versions, without being animated at all.
Conclusion
Well, it was a long journey, wasn't it?
Working on my portfolio these past 6 months has been a very demanding job, technically but also emotionally. I still have a lot of self-doubt about the overall design, key UX decisions or level of creativity. But I also think that it kind of really sums up who I am, as a developer but also as a person. In the end, that's probably what matters most.
I hope you've learned a few things reading this case study, whether it'd be about technical stuff or my own creative process. Thanks all, and remember: stay fun!









