AimactGrow

Building a Dual-Scene Fluid X-Ray Reveal Effect in Three.js

By Admin
March 23, 2026



I’m Cullen Webber, a creative full-stack developer based in Perth, Australia, with a passion for graphics programming and crafting immersive experiences on the web.

This tutorial walks you through creating a fluid X-ray effect in Three.js, leveraging a render pipeline powered by TSL (Three.js Shading Language) and WebGPU.

A WebGL version is also available in the WebGL branch of the GitHub repository (the bloom is quite different there).

Breaking Down the Render Pipeline

This effect breaks down into five parts. It begins with a canvas-drawn mouse trail, which feeds into a ping-pong fluid simulation that diffuses it. Alongside this, two instanced Three.js scenes, one solid and one X-ray, are rendered to separate textures before a final post-processing pass composites and stylizes the result.

[Figure: flowchart of the rendering pipeline from cursor input to screen output, showing how the mouse trail, fluid simulation, two 3D scenes, and post-processing compositor connect.]

Creating the Mouse Trail Canvas

The pipeline begins with a 2D canvas producing a smooth black-on-white circular mask. This is then wrapped in a Three.js CanvasTexture so the fluid simulation in the next step can sample it as a texture every frame.

export default class MouseTrail { ...
	
	#createCanvas(width, height) {
		this.canvas = document.createElement("canvas");
		this.canvas.width = width;
		this.canvas.height = height;
		this.ctx = this.canvas.getContext("2d");
		this.lineWidth = Math.max(width * 0.2, 100);

		this.ctx.fillStyle = "white";
		this.ctx.fillRect(0, 0, width, height);
	}

	#createTexture() {
		this.texture = new THREE.CanvasTexture(this.canvas);
		this.texture.minFilter = THREE.LinearFilter;
		this.texture.magFilter = THREE.LinearFilter;
		this.texture.generateMipmaps = false;
	}
	// ...
}

Updating the Trail

Every frame, the trail smoothly follows the cursor (using linear interpolation), preventing jagged lines in the fluid simulation. When the cursor stops, the trail fades out, letting the solid scene restore itself. The draw method simply clears the canvas and strokes a single thick line that appears with movement and fades when idle.
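The #lerp and #updateOpacity helpers aren't shown in the article, so here is a plausible plain-JavaScript sketch of what they might do. The class name, smoothing factor, and fade speed are all assumptions for illustration, not the author's actual values:

```javascript
// Hypothetical sketch of the smoothing and fade helpers described above.
// Factor 0.15 and fadeSpeed 0.05 are illustrative guesses.
class TrailSmoothing {
  constructor() {
    this.currentX = 0;
    this.currentY = 0;
    this.opacity = 0;
  }

  // Move a fixed fraction of the remaining distance each frame,
  // so fast cursor jumps become smooth curves instead of jagged lines
  lerp(targetX, targetY, factor = 0.15) {
    this.currentX += (targetX - this.currentX) * factor;
    this.currentY += (targetY - this.currentY) * factor;
  }

  // Snap opacity up while moving; decay toward zero when idle
  updateOpacity(isMoving, fadeSpeed = 0.05) {
    this.opacity = isMoving ? 1.0 : Math.max(0, this.opacity - fadeSpeed);
  }
}
```

Note that a constant-factor lerp like this is frame-rate dependent; a production version would scale the factor by delta time.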

export default class MouseTrail { ...
	
	update(mouseX, mouseY) {
		const targetX = mouseX * this.canvas.width;
		const targetY = mouseY * this.canvas.height;

		if (this.currentX === null) {
			this.currentX = targetX;
			this.currentY = targetY;
			this.lastX = targetX;
			this.lastY = targetY;
			return;
		}

		this.#lerp(targetX, targetY);
		this.#updateOpacity();
		this.#draw();

		this.lastX = this.currentX;
		this.lastY = this.currentY;
		this.texture.needsUpdate = true;
	}

	#draw() {
		const { canvas, ctx, lineWidth } = this;

		ctx.fillStyle = "white";
		ctx.fillRect(0, 0, canvas.width, canvas.height);

		if (this.opacity > 0.01) {
			ctx.beginPath();
			ctx.moveTo(this.lastX, this.lastY);
			ctx.lineTo(this.currentX, this.currentY);
			ctx.lineCap = "round";
			ctx.lineWidth = lineWidth;
			ctx.strokeStyle = `rgba(0, 0, 0, ${this.opacity})`;
			ctx.stroke();
		}
	}
	// ...
}

Transforming the Mouse Trail into a Fluid

The fluid simulation takes the mouse trail canvas as input, transforming it into a dynamic fluid effect. On every frame, the trail is diffused outward, modulated with FBM (fractional Brownian motion) noise, and gradually fades back to white.

Implementing a Feedback Loop with Ping-Pong Rendering

This uses a technique called ping-pong rendering. Two render targets are maintained, and each frame one is read from while the other is written to, then they are swapped. The pair is necessary because the GPU cannot read and write the same texture in a single pass. Target A holds the previous frame's result, the shader samples it and writes to Target B, then they trade places and the cycle continues.

export default class FluidSim { ...

	#createRenderTargets() {
		const opts = {
			minFilter: THREE.LinearFilter,
			magFilter: THREE.LinearFilter,
			depthBuffer: false,
			stencilBuffer: false,
		};
		this.targetA = new THREE.RenderTarget(this.width, this.height, opts);
		this.targetB = new THREE.RenderTarget(this.width, this.height, opts);

		this.prevNode = texture(this.targetA.texture);
		this.maskNode = texture(this.targetA.texture);
	}

	#createFBOScene() {
		this.fboScene = new THREE.Scene();
		this.fboCamera = new THREE.OrthographicCamera(-1, 1, 1, -1, -1, 1);

		this.inputNode = texture(new THREE.Texture());

		const material = new MeshBasicNodeMaterial();
		material.colorNode = this.#createFluidShader();

		const geo = new THREE.PlaneGeometry(2, 2);
		// Flip the geometry's UV Y so render target read-back is self-consistent in WebGPU
		const uvAttr = geo.attributes.uv;
		for (let i = 0; i < uvAttr.count; i++) {
			uvAttr.setY(i, 1.0 - uvAttr.getY(i));
		}
		this.fboQuad = new THREE.Mesh(geo, material);
		this.fboScene.add(this.fboQuad);
	}

	update(renderer, trailTexture) {
		this.prevNode.value = this.targetA.texture;
		this.inputNode.value = trailTexture;

		renderer.setRenderTarget(this.targetB);
		renderer.render(this.fboScene, this.fboCamera);
		renderer.setRenderTarget(null);

		// Update the mask to read from the just-rendered target
		this.maskNode.value = this.targetB.texture;

		// Swap
		const temp = this.targetA;
		this.targetA = this.targetB;
		this.targetB = temp;
	}
	// ...

}

The prevNode and maskNode are TSL texture nodes that act as the bridge between this simulation and the rest of the pipeline. prevNode is what the shader samples during the fluid pass; maskNode is what the post-processing compositor reads downstream.

The simulation runs in its own scene with an orthographic camera and a fullscreen quad, so every pixel in the render target gets processed by the fluid shader.

Each frame, the update method sets prevNode to the last rendered frame, passes in the current mouse trail texture, renders the fluid shader to the other target, updates maskNode to the result, and swaps.

Building the Fluid Shader

The shader samples FBM noise to generate a small UV offset per pixel, giving the fluid a turbulent, uneven look. Without it, the fluid spreads evenly, creating a flat blur. The noise runs at high frequency across four octaves, then is scaled down just enough to introduce subtle movement without breaking up the texture.
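The fbm function itself isn't shown in the article. To make the octave structure concrete, here is a minimal value-noise FBM in plain JavaScript; the real version runs in TSL on the GPU, and the hash and smoothing used here are illustrative assumptions, not the author's implementation:

```javascript
// Illustrative value-noise FBM: each octave adds detail at double the
// frequency and half the amplitude of the one before it.
function hash(x, y) {
  // Cheap deterministic pseudo-random value in [0, 1)
  const s = Math.sin(x * 127.1 + y * 311.7) * 43758.5453;
  return s - Math.floor(s);
}

function valueNoise(x, y) {
  const xi = Math.floor(x), yi = Math.floor(y);
  const xf = x - xi, yf = y - yi;
  // Smoothstep-weighted bilinear interpolation of the four corner hashes
  const u = xf * xf * (3 - 2 * xf);
  const v = yf * yf * (3 - 2 * yf);
  const a = hash(xi, yi), b = hash(xi + 1, yi);
  const c = hash(xi, yi + 1), d = hash(xi + 1, yi + 1);
  return a + (b - a) * u + (c - a) * v + (a - b - c + d) * u * v;
}

function fbm(x, y, octaves = 4) {
  let value = 0, amplitude = 0.5, frequency = 1;
  for (let i = 0; i < octaves; i++) {
    value += amplitude * valueNoise(x * frequency, y * frequency);
    amplitude *= 0.5; // each octave contributes half as much...
    frequency *= 2;   // ...at double the frequency
  }
  return value; // in [0, 0.9375) with 4 octaves
}
```

The shader then scales this value by the aspect vector and 0.01 so the per-pixel displacement stays subtle.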

#createFluidShader() { ...

	const aspect = this.height / this.width;
	const aspectVec = this.width < this.height ? vec2(1.0, 1.0 / aspect) : vec2(aspect, 1.0);

	return Fn(() => { ...
		const uvCoord = uv();
		const disp = mul(mul(fbm(mul(uvCoord, 20.0), float(4)), aspectVec), 0.01);
		// ...
	})();

}

The aspectVec adjusts for UV coordinates being normalized from 0 to 1, ensuring the displacement doesn't stretch on non-square viewports.

Each frame, the previous frame is sampled at five positions: the current pixel and four neighbors offset by the noise. The darkest value from these samples is kept using min(). Because the trail paints black on white, this makes dark areas bleed outward, creating the spreading. The noise offsets ensure the result doesn't look like a uniform blur.

#createFluidShader() { ...

	const blendDarken = Fn(([base, blend]) => min(blend, base));

	return Fn(() => { ...
		const texel  = this.prevNode.sample(uvCoord);
		const texel2 = this.prevNode.sample(vec2(add(uvCoord.x, disp.x), uvCoord.y));
		const texel3 = this.prevNode.sample(vec2(sub(uvCoord.x, disp.x), uvCoord.y));
		const texel4 = this.prevNode.sample(vec2(uvCoord.x, add(uvCoord.y, disp.y)));
		const texel5 = this.prevNode.sample(vec2(uvCoord.x, sub(uvCoord.y, disp.y)));

		const floodcolor = texel.rgb.toVar();
		floodcolor.assign(blendDarken(floodcolor, texel2.rgb));
		floodcolor.assign(blendDarken(floodcolor, texel3.rgb));
		floodcolor.assign(blendDarken(floodcolor, texel4.rgb));
		floodcolor.assign(blendDarken(floodcolor, texel5.rgb));
		// ...
	})();
}

The new mouse trail is blended in the same way. Darker areas of the trail overwrite lighter values, letting the newest movements show through.

#createFluidShader() { ...

	return Fn(() => { ...
		const flippedUV = vec2(uvCoord.x, sub(float(1.0), uvCoord.y));
		const input = this.inputNode.sample(flippedUV);
		const combined = blendDarken(floodcolor, input.rgb);
		// ...
	})();
	// ...
}

A small amount of white is added each frame and clamped to 1.0. Dark pixels gradually drift back toward white, so when the cursor stops, the fluid slowly fades and the solid scene reappears. At 0.015 per frame, it takes roughly one second at 60 fps for a fully black pixel to return to white.
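The fade timing is simple arithmetic:

```javascript
// Frames for a fully black pixel (0.0) to reach white (1.0)
// when 0.015 is added per frame and the result is clamped at 1.0
const framesToWhite = Math.ceil(1.0 / 0.015);
const secondsAt60fps = framesToWhite / 60;
console.log(framesToWhite, secondsAt60fps.toFixed(2)); // prints: 67 1.12
```

Raising the constant makes the reveal snap shut faster; lowering it lets the fluid linger.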

#createFluidShader() { ...

	return Fn(() => { ...
		return min(vec3(1.0), add(combined, vec3(0.015)));
	})();
	// ...
}

The Mask Output

The output is a grayscale texture updated every frame. White means show the solid scene; black means reveal the skeleton. The maskNode exposes this as a TSL texture node that plugs straight into the post-processing compositor.

Instancing the Solid & X-Ray Scenes

The entire reveal effect relies on two scenes rendered with the same layout and camera angle. One scene shows the solid body, the other the skeleton. Both are composited later in the post-processing pipeline, so even slight differences between them will make the reveal look incorrect.

Both scenes share a camera, environment map, fog, and lighting setup. The only differences are the models themselves and some minor material tweaks on the skeleton. Everything else is identical.

export default class Scene { ...

	#createScene() {
		const scene = new THREE.Scene();
		scene.fog = new THREE.Fog(0x000000, 1, 3);
		scene.background = new THREE.Color(0x000000);
		scene.environment = this.envMap;
		scene.environmentIntensity = 0.1;

		const light = new THREE.PointLight(0xffffff, 0.75);
		light.position.set(1, 2, 1);
		scene.add(light);

		return scene;
	}
	// ...
}

The #createScene() method is called twice, once for solidScene and once for wireScene. Fog and a black background fade the figures at the edges, preventing them from cutting sharply against the darkness. The environment map is generated from RoomEnvironment and processed through a PMREM generator, providing subtle ambient light without adding lots of individual lights. The intensity is kept low at 0.1, since the Fresnel material contributes most of the visual weight.

Positioning & Instancing the Models

Twelve copies of each model are rendered, but only two draw calls are used, one per scene, thanks to InstancedMesh. The InstancedModel class loads a DRACO-compressed .glb, extracts the geometry by mesh name, applies the Fresnel material, and arranges all instances in a grid.

export default class InstancedModel { ...

	#setPositions(mesh) {
		const { count, spacing } = this;
		const gridSize = Math.ceil(Math.sqrt(count));
		const halfSize = ((gridSize - 1) * spacing) / 2;
		const spacingZ = spacing * 0.65;
		const halfSizeZ = ((gridSize - 1) * spacingZ) / 2;
		const dummy = new THREE.Object3D();

		for (let i = 0; i < count; i++) {
			const x = i % gridSize;
			const z = Math.floor(i / gridSize);
			const xOffset = z % 2 === 1 ? spacing / 2 : 0;

			dummy.position.set(
				x * spacing - halfSize + xOffset,
				0,
				z * spacingZ - halfSizeZ,
			);
			dummy.updateMatrix();
			mesh.setMatrixAt(i, dummy.matrix);
		}
		mesh.instanceMatrix.needsUpdate = true;
	}
	// ...
}

The grid uses a hexagonal stagger. Every other row gets offset by half a spacing unit on the X axis. This stops it looking like a rigid spreadsheet and gives it a more natural, packed arrangement. The Z spacing is compressed to 0.65 of the X spacing so the grid feels tighter front to back, which works better with the camera angle used.
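To see the stagger concretely, the same placement math can be run outside Three.js as a standalone function (count = 12, spacing = 1, matching the scene above):

```javascript
// Standalone version of the instance-grid math, with no scene objects,
// so the hexagonal stagger is easy to inspect in isolation.
function gridPositions(count, spacing) {
  const gridSize = Math.ceil(Math.sqrt(count)); // 4 for count = 12
  const halfSize = ((gridSize - 1) * spacing) / 2;
  const spacingZ = spacing * 0.65;              // compressed front-to-back
  const halfSizeZ = ((gridSize - 1) * spacingZ) / 2;
  const positions = [];

  for (let i = 0; i < count; i++) {
    const x = i % gridSize;
    const z = Math.floor(i / gridSize);
    const xOffset = z % 2 === 1 ? spacing / 2 : 0; // hexagonal stagger
    positions.push([
      x * spacing - halfSize + xOffset,
      0,
      z * spacingZ - halfSizeZ,
    ]);
  }
  return positions;
}

const pts = gridPositions(12, 1);
// Row 1 (indices 4..7) sits half a unit to the right of row 0
console.log(pts[4][0] - pts[0][0]); // prints: 0.5
```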

Matching the Skeleton to the Body

To get the skeleton to sit correctly inside the body, both models must occupy the same space. Exact topology isn't required; the skeleton just needs to fit neatly inside the body mesh. In Blender, centre both models at the origin, match their scale, apply all transforms, and export them as .glb files with DRACO compression.

Building the Glowing Material

This is what gives the figures their look. The Fresnel effect makes edges glow bright while surfaces facing the camera stay dark, creating that X-ray, hologram feel. We mix between a near-black core and a bright blue at the edges, then pipe that same color into the emissive channel so the figures glow on their own without needing strong scene lighting.

export function createFresnelMaterial({
  heightMax = 1.0,
  roughness = 1.0,
  color = vec3(0.2, 0.6, 1.0),
  emissiveIntensity = 0.75,
}) {
  const material = new MeshStandardNodeMaterial({
    metalness: 0,
    roughness,
  });

  const fresnel = pow(
    sub(float(1.0), normalView.dot(positionViewDirection.negate())),
    float(1.0),
  );

  const coreColor = vec3(0.0, 0.05, 0.1);
  const fresnelColor = mix(coreColor, color, fresnel);

  const heightFade = smoothstep(0.5, heightMax, positionLocal.y);
  const finalColor = fresnelColor.mul(heightFade);

  material.colorNode = finalColor;
  material.emissiveNode = finalColor.mul(emissiveIntensity);

  return material;
}

Both models are cut at the torso to save vertices. A smoothstep along local Y fades the bottom to black, hiding the hard edge and creating the appearance of light falloff.

Adding Camera Movement with Touch Fallback

Both scenes share a single PerspectiveCamera with a narrow 17° field of view. The tight FOV compresses depth, making the grid feel like a wall of figures rather than a scattered crowd. The camera follows the cursor with a smooth, damped ease while maintaining a fixed look point, adding a subtle sense of depth during movement.
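The camera rig itself isn't shown in the article. A minimal sketch of the damped follow might look like this; the offset ranges and damping constant are assumptions for illustration, and the camera is any object with a position and lookAt (a THREE.PerspectiveCamera in practice):

```javascript
// Hypothetical damped camera follow: ease toward a small offset derived
// from the normalized mouse while always looking at a fixed point.
function dampCamera(camera, mouseNormalized, delta) {
  // Map mouse (0..1) to a small offset around the base position
  const targetX = (mouseNormalized.x - 0.5) * 0.4;
  const targetY = (mouseNormalized.y - 0.5) * -0.2;

  // Frame-rate independent exponential damping
  const t = 1 - Math.exp(-4 * delta);
  camera.position.x += (targetX - camera.position.x) * t;
  camera.position.y += (targetY - camera.position.y) * t;

  // Fixed look point keeps the grid centered while the camera drifts
  camera.lookAt(0, 0, 0);
}
```

A touch fallback would simply feed a slowly oscillating value in place of mouseNormalized when no pointer is available.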

Building the Post-Processing Pipeline

This is where everything comes together. The PostProcessing class takes both scenes, the camera, and the fluid mask, compositing them into the final image through a chain of TSL effects.

export default class PostProcessing { ...
	constructor(renderer, solidScene, wireScene, camera, fluidMaskNode) { ...
		this.pipeline = new THREE.RenderPipeline(renderer);
		this.#compose();
		// ...
	}

	#compose() { ...
		const solidPass = pass(this.solidScene, this.camera);
		const solidColor = solidPass.getTextureNode("output");

		const wirePass = pass(this.wireScene, this.camera);
		const wireColor = wirePass.getTextureNode("output");
		// ...
	}
}

Each scene gets its own render pass, producing a texture node that can be sampled downstream.

Bloom affects only the solid scene, adding a subtle glow to the Fresnel edges while preserving the skeleton's detail (I felt the scene lost a lot of its mojo when bloom was applied to the skeleton scene).

export default class PostProcessing { ...
	#compose() { ...
		const bloomPass = bloom(solidColor.sample(screenUV), 0.4, 0.05);
		// ...
	}
	// ...
}

Scan lines are layered over the bloom. A high-frequency sine wave along the screen's Y axis is clamped to negative values, darkening the image and keeping the effect subtractive rather than adding brightness.

export default class PostProcessing { ...	
	#compose() { ...
		const scanRaw = sin(mul(screenUV.y, float(1250.0)));
		const scanDarken = clamp(scanRaw, -1.0, 0.0).mul(-0.15);
		const scanLines = sub(float(1.0), scanDarken);
		const bloomWithScanLines = bloomPass.mul(scanLines);
		// ...
	}
	// ...
}

The fluid mask composite forms the core of the effect. The mask is inverted and used to blend between the processed solid scene and the raw wire scene.

export default class PostProcessing { ...
	#compose() { ...
		const fluidMask = sub(float(1.0), this.fluidMaskNode.sample(screenUV).r);
		const blended = mix(
			bloomWithScanLines,
			wireColor.sample(screenUV),
			fluidMask,
		);
		// ...
	}
	// ...
}

After that it's just atmosphere. Film grain so the image doesn't feel too clean, a slight desaturation to pull the blue back a bit, and a color grade that mixes dark blue into the blacks to lift the shadows. Honestly, these were all just tweaked by eye until it felt right.

export default class PostProcessing { ...
	#compose() { ...

		const noise = mx_noise_float(
			vec3(screenUV.mul(2000.0), time.mul(20.0)),
		).mul(0.015);

		const withEffects = blended.sub(noise);

		const luminance = dot(withEffects, vec3(0.299, 0.587, 0.114));

		const desaturated = mix(
			vec3(luminance, luminance, luminance),
			withEffects,
			float(0.985),
		);

		const lowContrast = mix(vec3(0.0, 0.0, 0.2), desaturated, float(0.9));

		this.pipeline.outputNode = lowContrast;
		// ...
	}
	// ...
}

Understanding the Render Loop

The orchestration is simple. Each frame updates the scene, feeds the mouse position into the trail, runs the fluid simulation from that input, and renders the post-processing pipeline.

class Three { ...
	#animate() {
		const delta = this.clock.getDelta();

		this.scene.animate(delta, this.clock.elapsedTime);

		// Update mouse trail → fluid sim
		this.mouseTrail.update(
			this.scene.cameraRig.mouseNormalized.x,
			this.scene.cameraRig.mouseNormalized.y,
		);

		this.fluidSim.update(this.context.renderer, this.mouseTrail.texture);

		// Render everything (scene passes + effects)
		this.postProcessing.render();

		requestAnimationFrame(() => this.#animate());
	}
	// ...
}

The Final Product

Here's the final effect with everything wired up. Mouse trail, fluid simulation, both instanced scenes, and the post-processing pipeline all running together.

Conclusion

If you want to take it further, everything here is modular. Swap the models, change the fluid behaviour, tweak the post-processing, and you've got something completely different. I'm always experimenting with this kind of stuff, so feel free to reach out on X @sinzvii if you have questions or just want to chat about Three.js. Thanks for reading.

Tags: Building, Dual-Scene, Effect, Fluid, Reveal, Three.js, X-Ray
© 2025 https://blog.aimactgrow.com/ - All Rights Reserved
