1. Introduction
In this tutorial, we'll build an Infinite Canvas: a spatial image gallery that extends endlessly in all directions. Images repeat seamlessly, users can pan freely along the X, Y, and Z axes using mouse, touch, or keyboard input, and everything is engineered for high refresh rates, including 120 fps on 120 Hz screens (hardware permitting).
The component is fully interactive and works smoothly on both desktop and mobile. Drag to pan, scroll or pinch to zoom, and explore the space without artificial bounds.
The goal isn't just to render a lot of images, but to create the illusion of infinity while keeping the experience fluid and responsive.
Why this tutorial exists
I decided to write this after repeatedly seeing variations of this pattern over the years, building related systems myself in the past, and noticing a gap in clear, end-to-end explanations of how to actually implement it in a modern, production-ready way. I had previously explored a 2D version of an infinite drag grid without WebGL, and this article is the result of pushing that idea further into full 3D. The goal here is not to present a new concept, but to document a concrete approach, the tradeoffs behind it, and the reasoning that shaped the implementation.
The infinite, spatial gallery pattern explored in this tutorial is not a new idea. Variations of this approach have appeared in different forms over the years. In particular, Chakib Mazouni has publicly explored a similar visual pattern in prior experiments. This tutorial presents my own implementation and engineering approach, and focuses on how the system is built and reasoned about end to end.
2. Concept: Faking Infinity
True infinity is not practical to render. Instead, we fake it.
For this demo, the canvas is populated with Baroque-era artworks, because if you're going to drift endlessly through space, you might as well do it surrounded by dramatic lighting and excessive chiaroscuro (the images are sourced from the Art Institute of Chicago Open Access collection).
The core idea is simple: the camera moves freely, but the world is generated only around the camera. Space is divided into equally sized chunks, and only the chunks within a certain radius of the camera exist at any given time.
Each chunk contains a deterministic layout of image planes. Because the layout is deterministic, chunks can be destroyed and recreated without visual discontinuities. As you move, old chunks fall away and new ones appear, creating the illusion of an endless canvas.
Think of it as an infinite grid where only a small window is ever rendered.
3. Implementation
Lazy-loading the Scene
The Infinite Canvas is heavy by nature, so we lazy-load the entire scene. This keeps the initial bundle light and avoids blocking the initial render while Three.js initializes.
const LazyInfiniteCanvasScene = React.lazy(() =>
  import("./scene").then((mod) => ({ default: mod.InfiniteCanvasScene }))
);

export function InfiniteCanvas(props: React.ComponentProps<typeof LazyInfiniteCanvasScene>) {
  return (
    <React.Suspense fallback={null}>
      <LazyInfiniteCanvasScene {...props} />
    </React.Suspense>
  );
}
Using React.Suspense here is intentional. The fallback is null because the canvas is usually a full-bleed element and we don't want layout shifts. If you do want a loader, you can replace it with a progress UI driven by the texture loading progress later in the article.
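As a rough idea of what that could look like, here is a minimal loader sketch. It assumes @react-three/drei's useProgress hook is available; that library is not something this setup requires.

import { useProgress } from "@react-three/drei";

// Sketch: useProgress reads progress from Three.js's default LoadingManager,
// so the percentage reflects texture loads once the scene starts fetching.
function CanvasLoader() {
  const { progress } = useProgress();
  return <div className="canvas-loader">{Math.round(progress)}%</div>;
}

// Swap it in as the Suspense fallback:
// <React.Suspense fallback={<CanvasLoader />}>
//   <LazyInfiniteCanvasScene {...props} />
// </React.Suspense>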
Chunk-Based World Generation
We divide space into a 3D grid of equally sized cubic chunks. The camera can travel indefinitely, but we only keep a fixed number of chunks alive around the camera.
First we compute the current chunk coordinates from the camera's base position:
const cx = Math.floor(s.basePos.x / CHUNK_SIZE);
const cy = Math.floor(s.basePos.y / CHUNK_SIZE);
const cz = Math.floor(s.basePos.z / CHUNK_SIZE);
Then, whenever the camera crosses into a new chunk, we regenerate the active chunk list using a precomputed set of offsets. The diagram below illustrates this: the camera (marked C) sits in the center chunk, surrounded by its immediate neighbors in all directions.
Z-1 (behind)        Z=0 (camera depth)  Z+1 (ahead)
┌─────┬─────┬─────┐ ┌─────┬─────┬─────┐ ┌─────┬─────┬─────┐
│-1,-1│ 0,-1│ 1,-1│ │-1,-1│ 0,-1│ 1,-1│ │-1,-1│ 0,-1│ 1,-1│
├─────┼─────┼─────┤ ├─────┼─────┼─────┤ ├─────┼─────┼─────┤
│-1,0 │ 0,0 │ 1,0 │ │-1,0 │ C │ 1,0 │ │-1,0 │ 0,0 │ 1,0 │
├─────┼─────┼─────┤ ├─────┼─────┼─────┤ ├─────┼─────┼─────┤
│-1,1 │ 0,1 │ 1,1 │ │-1,1 │ 0,1 │ 1,1 │ │-1,1 │ 0,1 │ 1,1 │
└─────┴─────┴─────┘ └─────┴─────┴─────┘ └─────┴─────┴─────┘
This 3×3×3 neighborhood means 27 chunks are active at any time, a fixed cost no matter how far the camera has traveled.
setChunks(
  CHUNK_OFFSETS.map((o) => ({
    key: `${ucx + o.dx},${ucy + o.dy},${ucz + o.dz}`,
    cx: ucx + o.dx,
    cy: ucy + o.dy,
    cz: ucz + o.dz,
  }))
);
Two important details here:
- The render cost stays flat because the number of chunks is constant.
- Chunk IDs are stable strings, so React can mount and unmount chunk groups predictably.
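The CHUNK_OFFSETS table itself isn't shown above. A minimal sketch of how the 27 offsets of the 3×3×3 neighborhood could be precomputed as a plain module-level constant:

// Every (dx, dy, dz) combination in the 3×3×3 neighborhood around the camera chunk.
const CHUNK_OFFSETS: { dx: number; dy: number; dz: number }[] = [];
for (let dx = -1; dx <= 1; dx++) {
  for (let dy = -1; dy <= 1; dy++) {
    for (let dz = -1; dz <= 1; dz++) {
      CHUNK_OFFSETS.push({ dx, dy, dz });
    }
  }
}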
Deterministic Plane Layouts
Inside each chunk we generate a layout of image planes. The layout must be deterministic: the same chunk coordinates should always produce the same planes. That way we can destroy and recreate chunks freely without visual jumps.
Chunk layout generation is deferred so it never competes with input handling. If the browser supports it, we schedule it during idle time:
React.useEffect(() => {
  let canceled = false;
  const run = () => !canceled && setPlanes(generateChunkPlanesCached(cx, cy, cz));

  if (typeof requestIdleCallback !== "undefined") {
    const id = requestIdleCallback(run, { timeout: 100 });
    return () => {
      canceled = true;
      cancelIdleCallback(id);
    };
  }

  const id = setTimeout(run, 0);
  return () => {
    canceled = true;
    clearTimeout(id);
  };
}, [cx, cy, cz]);
The generateChunkPlanes function converts the chunk coordinates into a deterministic seed, then uses it to place planes randomly within the chunk bounds:
export const generateChunkPlanes = (cx: number, cy: number, cz: number): PlaneData[] => {
  const planes: PlaneData[] = [];
  const seed = hashString(`${cx},${cy},${cz}`);

  for (let i = 0; i < 5; i++) {
    const s = seed + i * 1000;
    const r = (n: number) => seededRandom(s + n);
    const size = 12 + r(4) * 8;

    planes.push({
      id: `${cx}-${cy}-${cz}-${i}`,
      position: new THREE.Vector3(
        cx * CHUNK_SIZE + r(0) * CHUNK_SIZE,
        cy * CHUNK_SIZE + r(1) * CHUNK_SIZE,
        cz * CHUNK_SIZE + r(2) * CHUNK_SIZE
      ),
      scale: new THREE.Vector3(size, size, 1),
      mediaIndex: Math.floor(r(5) * 1_000_000),
    });
  }

  return planes;
};
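The hashString and seededRandom helpers aren't shown above; one possible implementation looks like this (a sketch, not necessarily the exact functions used in the demo):

// Deterministic 32-bit hash of a string (FNV-1a style).
const hashString = (str: string): number => {
  let h = 2166136261;
  for (let i = 0; i < str.length; i++) {
    h ^= str.charCodeAt(i);
    h = Math.imul(h, 16777619);
  }
  return h >>> 0;
};

// Deterministic pseudo-random value in [0, 1) derived from a numeric seed.
const seededRandom = (seed: number): number => {
  const x = Math.sin(seed) * 10000;
  return x - Math.floor(x);
};

Any pair of functions works here, as long as they are pure: the same seed must always yield the same values, or chunks would change layout when recreated.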
Results are cached with LRU eviction to avoid regenerating layouts the user has already visited:
const MAX_PLANE_CACHE = 256;
const planeCache = new Map<string, PlaneData[]>();

export const generateChunkPlanesCached = (cx: number, cy: number, cz: number): PlaneData[] => {
  const key = `${cx},${cy},${cz}`;
  const cached = planeCache.get(key);
  if (cached) {
    // Move to end for LRU ordering
    planeCache.delete(key);
    planeCache.set(key, cached);
    return cached;
  }

  const planes = generateChunkPlanes(cx, cy, cz);
  planeCache.set(key, planes);

  // Evict oldest entries
  while (planeCache.size > MAX_PLANE_CACHE) {
    const firstKey = planeCache.keys().next().value;
    if (firstKey) planeCache.delete(firstKey);
  }

  return planes;
};
Once we have a list of plane slots, we map them to real media. The modulo makes a finite dataset repeat indefinitely:
const mediaItem = media[plane.mediaIndex % media.length];
The result is a “repeatable universe”: limited inputs, unlimited traversal.
Media Planes and Fading Logic
Each image is a PlaneGeometry with a MeshBasicMaterial. The interesting part is not the geometry, but when it's visible.
We fade planes based on two distances:
- Grid distance: how far the chunk is from the camera's chunk
- Depth distance: how far the plane is from the camera along Z
Here's the core fade computation, run every frame for visible (or recently visible) planes:
const dist = Math.max(
  Math.abs(chunkCx - cam.cx),
  Math.abs(chunkCy - cam.cy),
  Math.abs(chunkCz - cam.cz)
);
const absDepth = Math.abs(position.z - cam.camZ);

const gridFade =
  dist <= RENDER_DISTANCE
    ? 1
    : Math.max(0, 1 - (dist - RENDER_DISTANCE) / Math.max(CHUNK_FADE_MARGIN, 0.0001));

const depthFade =
  absDepth <= DEPTH_FADE_START
    ? 1
    : Math.max(0, 1 - (absDepth - DEPTH_FADE_START) / Math.max(DEPTH_FADE_END - DEPTH_FADE_START, 0.0001));

const target = Math.min(gridFade, depthFade * depthFade);

state.opacity =
  target < INVIS_THRESHOLD && state.opacity < INVIS_THRESHOLD
    ? 0
    : lerp(state.opacity, target, 0.18);
And here's the practical optimization that keeps overdraw and sorting under control. When a plane is fully opaque we enable depth writing; when it fades out we eventually disable it and hide the mesh entirely:
const isFullyOpaque = state.opacity > 0.99;
material.opacity = isFullyOpaque ? 1 : state.opacity;
material.depthWrite = isFullyOpaque;
mesh.visible = state.opacity > INVIS_THRESHOLD;
This “fade then disable” approach gives smooth transitions, but it also avoids paying for invisible work.
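For completeness, the lerp used in the fade (and later in the velocity code) is assumed to be plain linear interpolation:

// Linear interpolation: move a fraction t of the way from a to b.
const lerp = (a: number, b: number, t: number): number => a + (b - a) * t;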
Camera Controller
The controller turns input into motion, with inertia.
We collect input (mouse drag, wheel, touch gestures, keyboard), accumulate it into a target velocity, and then ease the actual velocity toward it. This avoids twitchy movement and makes the space feel physical.
Pointer panning updates the target velocity while dragging:
if (s.isDragging) {
  s.targetVel.x -= (e.clientX - s.lastMouse.x) * 0.025;
  s.targetVel.y += (e.clientY - s.lastMouse.y) * 0.025;
  s.lastMouse = { x: e.clientX, y: e.clientY };
}
Zooming is handled via wheel scroll (desktop) and pinch distance (touch). We accumulate scroll into scrollAccum and apply it gradually:
s.scrollAccum += e.deltaY * 0.006;
s.targetVel.z += s.scrollAccum;
s.scrollAccum *= 0.8;
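Keyboard panning isn't shown here, but it feeds the same target velocity. A minimal sketch with assumed arrow-key/WASD bindings and an illustrative speed constant:

const KEY_PAN_SPEED = 0.4; // illustrative value, not taken from the demo

window.addEventListener("keydown", (e: KeyboardEvent) => {
  switch (e.key) {
    case "ArrowLeft":
    case "a":
      s.targetVel.x -= KEY_PAN_SPEED;
      break;
    case "ArrowRight":
    case "d":
      s.targetVel.x += KEY_PAN_SPEED;
      break;
    case "ArrowUp":
    case "w":
      s.targetVel.y += KEY_PAN_SPEED;
      break;
    case "ArrowDown":
    case "s":
      s.targetVel.y -= KEY_PAN_SPEED;
      break;
  }
});

Because all inputs funnel into targetVel, keyboard movement automatically inherits the same inertia as mouse and touch.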
Inertia is the blend between the current and target velocity:
s.velocity.x = lerp(s.velocity.x, s.targetVel.x, VELOCITY_LERP);
s.velocity.y = lerp(s.velocity.y, s.targetVel.y, VELOCITY_LERP);
s.velocity.z = lerp(s.velocity.z, s.targetVel.z, VELOCITY_LERP);
s.basePos.x += s.velocity.x;
s.basePos.y += s.velocity.y;
s.basePos.z += s.velocity.z;
camera.position.set(s.basePos.x + s.drift.x, s.basePos.y + s.drift.y, s.basePos.z);
s.targetVel.x *= VELOCITY_DECAY;
s.targetVel.y *= VELOCITY_DECAY;
s.targetVel.z *= VELOCITY_DECAY;
The important bit is that we update basePos rather than pushing the camera directly from every event. That gives you one predictable, frame-based integration point, which also makes chunk updates much easier to reason about.
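Chunk-boundary detection can live in this same integration step. A sketch of how the pending chunk could be recorded each frame (s.currentChunk is a hypothetical field; where pendingChunk is set isn't shown above):

// After integrating basePos, check whether the camera entered a new chunk.
const ncx = Math.floor(s.basePos.x / CHUNK_SIZE);
const ncy = Math.floor(s.basePos.y / CHUNK_SIZE);
const ncz = Math.floor(s.basePos.z / CHUNK_SIZE);

if (ncx !== s.currentChunk.cx || ncy !== s.currentChunk.cy || ncz !== s.currentChunk.cz) {
  s.currentChunk = { cx: ncx, cy: ncy, cz: ncz };
  s.pendingChunk = { cx: ncx, cy: ncy, cz: ncz }; // consumed by the throttled update in the next section
}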
4. Refinement
Performance
This component is built with performance as a first-class concern. Every part of the system is designed to minimize frame time and avoid spikes, resulting in a consistently smooth experience. In practice, the canvas is capable of reaching up to 120 fps on high-refresh displays, and frame rates generally remain very high on both desktop and mobile devices.
1) Throttle chunk updates while zooming
When users zoom quickly, the camera can cross multiple chunk boundaries in a short time. Rebuilding chunk lists on every boundary is wasteful, so updates are throttled based on the zooming state and Z velocity:
const isZooming = Math.abs(s.velocity.z) > 0.05;
const throttleMs = getChunkUpdateThrottleMs(isZooming, Math.abs(s.velocity.z));
if (s.pendingChunk && shouldThrottleUpdate(s.lastChunkUpdate, throttleMs, now)) {
  const { cx: ucx, cy: ucy, cz: ucz } = s.pendingChunk;
  s.pendingChunk = null;
  s.lastChunkUpdate = now;
  setChunks(
    CHUNK_OFFSETS.map((o) => ({
      key: `${ucx + o.dx},${ucy + o.dy},${ucz + o.dz}`,
      cx: ucx + o.dx,
      cy: ucy + o.dy,
      cz: ucz + o.dz,
    }))
  );
}
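The two helpers aren't defined above; a plausible sketch under the same names (the exact values are illustrative):

// Longer throttle windows while zooming fast, immediate updates otherwise.
const getChunkUpdateThrottleMs = (isZooming: boolean, zSpeed: number): number =>
  isZooming ? Math.min(250, 60 + zSpeed * 100) : 0;

// True once enough time has passed, i.e. the pending chunk update may run now.
const shouldThrottleUpdate = (lastUpdate: number, throttleMs: number, now: number): boolean =>
  now - lastUpdate >= throttleMs;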
2) Cap pixel density and disable expensive defaults
We clamp the device pixel ratio (especially on touch devices) and explicitly opt out of antialiasing. This favors stable frame time over slightly softer edges, which is a good tradeoff for a scene full of layered quads.
const dpr = Math.min(window.devicePixelRatio || 1, isTouchDevice ? 1.25 : 1.5);
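Assuming the scene is rendered with @react-three/fiber (the renderer isn't named explicitly above), the clamped DPR and the antialiasing opt-out would be passed to the Canvas roughly like this:

// dpr: the clamped device pixel ratio computed above.
// gl.antialias: opt out of MSAA for more stable frame times.
<Canvas dpr={dpr} gl={{ antialias: false }}>
  {/* scene */}
</Canvas>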
3) Don't render what you can't see
Fading is only a transition. Once a plane is fully transparent, it's removed from rendering and no longer writes to the depth buffer. This keeps the scene lightweight even when many planes overlap.
Responsiveness
The canvas adapts automatically to:
- Touch vs. mouse input
- High-DPI displays
- Device performance constraints
Controls and hints update dynamically depending on the input method.
5. Wrap-Up
The Infinite Canvas demonstrates how to create the illusion of boundless space without boundless cost. The key techniques (chunk-based streaming, deterministic generation, distance-based culling, and inertia-driven input) combine into a system that feels expansive but stays predictable.
What We Built
- A 3D infinite grid that renders only what's near the camera
- Smooth, inertia-based navigation for mouse, touch, and keyboard
- A fade system that gracefully handles planes entering and leaving view
- Performance tuned for 120 fps on capable hardware
Where to Go Next
Click-to-focus interaction. Raycast from the pointer position to detect which plane the user clicked, then animate the camera to center on it. This turns the canvas from pure exploration into a browsable gallery.
Video textures. Replace static images with THREE.VideoTexture. The architecture doesn't change; just swap the texture source. Consider pausing videos for planes outside the fade threshold to save decode costs.
Dynamic content loading. Instead of a fixed media array, fetch content based on chunk coordinates. Chunk (5, -3, 2) could request images from /api/chunk?x=5&y=-3&z=2, enabling truly infinite, non-repeating content.
Depth-based theming. Use the Z position to shift color grading or fog density. Deeper layers could feel hazier or tinted, creating visual “eras” as you zoom through.
Collision-free layouts. The current random placement can overlap planes. A more sophisticated generator could use Poisson disk sampling or grid snapping to guarantee separation.
The real takeaway is the pattern itself. Once you understand how to stream a world around a moving viewpoint, you can apply it to maps, timelines, data visualizations, or anything else that benefits from the feeling of infinite space.










