
Exploring the HTML-in-Canvas Proposal | Codrops

By Admin
May 13, 2026



This post and the demos below are based on an experimental feature that is currently behind a flag. Enable it at chrome://flags/#canvas-draw-element in any Chromium-based browser. If the flag is unavailable or the demos still don't work, try Chrome Canary.

Recently, social media has been buzzing about a new proposal from the WICG that aims to render traditional HTML inside a canvas, and I have to say, I'm pretty excited about it. I've been waiting for something like this ever since I first came across a tweet about the idea back in 2024, so naturally I had to dive in and see what it's all about.

The Problem

For years, the web has drawn a hard line between two worlds: the structured, accessible richness of HTML and the raw, pixel-level control of canvas.

If you wanted accessible UI components and the flexibility of CSS, you stayed in the DOM. But when you needed full control over rendering (whether for games, 3D scenes, or custom shaders), you had to switch to canvas.

The Proposal

The HTML-in-Canvas proposal aims to address this limitation by making it possible to render real HTML content directly into a canvas, while preserving key DOM benefits like layout, accessibility, and CSS styling.

The API introduces three main primitives:

  • A layoutsubtree attribute that opts canvas children into layout
  • A drawElementImage() method that renders a child element into the canvas
  • A paint event that fires whenever a canvas child changes

Putting it all together, the API looks like this:

<canvas id="source" layoutsubtree>
  <div id="content">
    {...content}
  </div>
</canvas>

const canvas = document.getElementById("source");
const content = document.getElementById("content");
const ctx = canvas.getContext("2d");

canvas.onpaint = () => {
  ctx.reset();
  ctx.drawElementImage(content, 0, 0);
};

canvas.requestPaint();

At the moment, this feature is behind a flag. You can enable it at chrome://flags/#canvas-draw-element in any Chromium-based browser. If the flag doesn't appear or the demos still don't work after enabling it, try using Chrome Canary.

For security reasons, the proposal imposes some limitations on what can be rendered inside the canvas. That said, these constraints are far less restrictive than those of the alternatives mentioned in the Workarounds section. I recommend reviewing the full specification, particularly the privacy-preserving painting section.

Seeing It in Action

When I started experimenting with the proposal, I began thinking about what this could mean for the future of the web, not just in terms of interesting effects and interactions, but also the new kinds of use cases it could unlock. I ended up organizing these ideas into four broad categories.

1. The Basics: Post-processing

With just those earlier snippets, your content, accessible and styled with CSS, is rendered directly into a canvas. From there, you can use that canvas as a texture wherever you need it, for example as input to a shader.
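To make that wiring concrete, here is a minimal sketch with the browser pieces passed in as parameters so the logic stands alone. `drawElementImage`, `onpaint`, and `requestPaint` are the proposal's experimental APIs; `texture` stands in for something like a Three.js `CanvasTexture`, and the function name is illustrative.

```javascript
// Wire the experimental "paint" callback so that whenever the DOM child
// changes, it is redrawn into the 2D canvas and the GPU-side texture is
// marked dirty for the next render.
function wireRepaint(canvas, ctx, content, texture) {
  canvas.onpaint = () => {
    ctx.reset();                          // clear the previous pixels
    ctx.drawElementImage(content, 0, 0);  // rasterize the DOM element
    texture.needsUpdate = true;           // e.g. a THREE.CanvasTexture
  };
  canvas.requestPaint();                  // kick off the first paint
}
```

In a real page, `canvas`, `ctx`, and `content` would come from the snippet shown earlier.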

In this first demo, I reuse the canvas content as a texture inside a set of shaders built with React Three Fiber and React Postprocessing.

Imagine creating a beautiful hero section for your landing page and being able to easily layer post-processing effects on top of it to make it even more impressive, without having to worry about SEO or whether search engine crawlers can still read the content. The DOM is still there, the content is still accessible to crawlers, it's just being rendered somewhere else.

Notes & references
The fluid effect comes from https://github.com/whatisjery/react-fluid-distortion
The rain effect is based on a Shadertoy snippet.
The pixelated effect uses the built-in pixelation effect from React Postprocessing.

2. A Small Feature

Not everything has to be a full-screen effect. And to be fair, with some of these effects we're also undoing part of the accessibility we just gained (the pixelated effect might be a bit much 😅).

One use case I find particularly interesting for HTML-in-Canvas is adding small, subtle interactions to the UI, things that were previously hard (or nearly impossible) to achieve, while still maintaining a clean, high-performance interface. The goal is to introduce these wow effects in specific interactions that capture the user's attention.

As an example, I'll mimic this vanish input snippet created by Rauno, a piece of text that fades away when you press Enter.

The trick behind this snippet is that it uses a hidden canvas positioned absolutely on top of the input field. When the user presses Enter, the canvas is revealed and the same text is drawn onto it using matching font styles, making it appear as if the input is still there. From that point on, it's just a matter of manipulating the canvas pixels on each frame.
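The per-frame pixel step can be sketched as a pure function over the RGBA buffer you get back from `ctx.getImageData` (the function name and fade step are illustrative, not Rauno's actual code):

```javascript
// Fade the text out by decreasing the alpha channel of every pixel a little
// on each frame; after enough frames the canvas is fully transparent.
function fadePixels(data, step = 8) {
  for (let i = 3; i < data.length; i += 4) {
    data[i] = Math.max(0, data[i] - step); // alpha is every 4th byte
  }
  return data;
}
```

Each frame you would call `fadePixels` on the image data and write it back with `ctx.putImageData`.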

With HTML-in-Canvas, we can achieve the same result without relying on the "hidden canvas" trick, since the input itself can be rendered directly into the canvas.

This example by Matt Rothenberg is another great demonstration of this kind of use case. The effect that appears when you click "Submit" creates a subtle but impactful wow moment for the user.

3. Transitions

Another good use case for HTML-in-Canvas is applying transition effects between sections of content or entire pages.

In this demo, I experimented with a curl effect, using a Shadertoy snippet as a starting point. Yes, the classic iBook page transition is now surprisingly easy to recreate on the web.

Building on the same idea, here's another experiment where the site's content is revealed in a special way as the user logs in.

4. Building 2D UIs in a 3D world

Building 2D user interfaces for 3D web scenes is usually quite a tedious task, at least for me. Generally speaking, we don't have the full power of CSS when it comes to layout (Flexbox, Grid), or even basic design features like box shadows or borders. Everything has to be handled at the shader level.

There are a few different approaches we can take here.

Let's say we have a scene with a 3D model of a computer, and we want to display something on its screen. A simple texture, or even a video, might not be enough, so we decide to build a fully interactive interface.

If our stack is React + React Three Fiber, one common approach is to use the Html component from Drei, which lets us attach HTML content to objects in the scene.

However, this doesn't always give us the result we're looking for. In many cases, we want the interface to feel truly embedded in the 3D world, not just layered on top of it, and we may also want to apply shader effects to it. We ran into this exact issue while building the arcade scene for the basement.studio website.

At the time, we solved it using uikit, a great library that provides accessible components and layout primitives for React Three Fiber. We rendered the UI into a render target, applied a fragment shader to it, and used that as the screen texture, resulting in this lab.

But now, there's a third option. While uikit is powerful, it's still limited compared to CSS. With HTML-in-Canvas, we can follow a similar approach while leveraging the full power of HTML and CSS.

In fact, the creator of Three.js has already been experimenting with this proposal. In release 184, he introduced HTMLTexture, a new texture class that renders live HTML via this new browser API.

The implementation also includes a new add-on called InteractionManager, which for HTMLTexture computes a CSS matrix3d transform on each frame, allowing the browser to handle hit-testing, hover, focus, and input natively, without the need for raycasting or synthetic events.
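The core of that trick is mapping a 4×4 world matrix onto CSS. A hypothetical helper (not the actual add-on API) might look like this:

```javascript
// Convert a column-major 4x4 matrix (the layout of Three.js Matrix4.elements)
// into a CSS matrix3d() transform string the browser can apply to a DOM
// element, so native hit-testing lines up with the 3D plane.
function toCssMatrix3d(elements) {
  if (elements.length !== 16) throw new Error("expected 16 matrix elements");
  return `matrix3d(${elements.join(",")})`;
}

// The identity matrix leaves the element untransformed.
const identity = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1];
const identityCss = toCssMatrix3d(identity);
```

The real InteractionManager recomputes this transform every frame so the DOM element tracks the screen plane as the camera moves.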

Thanks to these two new features in Three.js, the following demo was very easy to create.

The source code for this demo looks something like this:

import "./styles.css";

import { Footer } from "@/components/layout/footer";
import { Header } from "@/components/layout/header";
import { GridBackground } from "@/components/ui/grid-background";

import { ComputerScreen } from "./components/computer-screen";
import { Scene } from "./components/scene";

const BasicUI = () => (
  <>
    <Header />
    <GridBackground />
    <ComputerScreen />
    <Scene />
    <Footer />
  </>
);

export default BasicUI;

There are two key components to pay attention to: ComputerScreen and Scene.

ComputerScreen is simply the content rendered inside the computer, written entirely in HTML. We just add an ID to its container so we can reference it later from Scene.

export const ComputerScreen = () => {
  return (
    <div id="computer_screen">
      {...content}
    </div>
  );
};

Scene is where the magic happens. Below is the full component, with comments on the key parts.

"use client";

import { ContactShadows, Float, OrbitControls, Stage } from "@react-three/drei";
import { Canvas, useFrame, useLoader, useThree } from "@react-three/fiber";
import { Suspense, useEffect, useRef } from "react";
import { HTMLTexture, Mesh, type ShaderMaterial } from "three";
import { InteractionManager } from "three/addons/interaction/InteractionManager.js";
import { GLTFLoader } from "three/addons/loaders/GLTFLoader.js";

import { screenMaterial } from "./crt-effect";

type ScreenMaterial = ShaderMaterial & { map: HTMLTexture | null };
const material = screenMaterial as ScreenMaterial;

const Mac = () => {
  const gltf = useLoader(GLTFLoader, "/mac.glb");
  const { gl, camera } = useThree();
  const screenRef = useRef<Mesh>(null);

  const interactions = useRef<InteractionManager | null>(null);

  useEffect(() => {
    // We retrieve the ComputerScreen element
    const element = document.getElementById("computer_screen");
    if (!element) throw new Error("#computer_screen element not found");

    // We create a texture from that element using HTML-in-Canvas
    const texture = new HTMLTexture(element);

    // Create an InteractionManager to forward pointer events from the 3D plane to the DOM element
    interactions.current = new InteractionManager();

    // We attach the texture to the computer screen plane
    material.uniforms.map.value = texture;
    material.map = texture;

    // Connect the interaction manager to the renderer and camera
    interactions.current.connect(gl, camera);

    // Register the screen plane mesh to receive pointer events
    if (screenRef.current) interactions.current.add(screenRef.current);

    window.dispatchEvent(new Event("mac-canvas-ready"));
  }, [gl, camera]);

  useFrame(({ clock }) => {
    material.uniforms.uTime.value = clock.elapsedTime;
    interactions.current?.update();
  });

  return (
    <Float>
      <primitive object={gltf.scene} />
      <mesh ref={screenRef} material={material}>
        <planeGeometry />
      </mesh>
    </Float>
  );
};

export const Scene = () => (
  <Canvas>
    <Suspense fallback={null}>
      <Stage>
        <Mac />
      </Stage>
      <ContactShadows />
    </Suspense>
    <OrbitControls />
  </Canvas>
);

This approach will make it much easier to build interfaces for web games, interactive experiences, and even VR/AR applications using WebXR.

Workarounds

If we don’t want to wait for the proposal to be fully implemented and broadly supported across browsers, there are currently a few alternatives for achieving this kind of behavior.

On one hand, libraries like html2canvas attempt to emulate CSS properties directly in a canvas. It's an interesting workaround, but far from perfect. As the documentation itself states: "Since each CSS property needs to be manually coded to render correctly, html2canvas will never have full CSS support. The library tries to support the most commonly used CSS properties to the extent that it can."

That said, it’s proven to be good enough in practice, as it was used in the Next.js Conf 2024 badge to achieve this kind of effect.

The background distorts as the transparent badge moves over it. This is real HTML, rendered as a texture via html2canvas and used in a shader to achieve the effect.

On the other hand, another approach is to use the SVG foreignObject element. While html2canvas includes an option to render HTML this way, its implementation is fairly minimal, so you can achieve the same result without relying on the library.

The SVG foreignObject element lets you embed HTML content inside an SVG. From there, you can use native browser APIs to serialize the SVG into a base64-encoded image and draw it onto a canvas, as shown in this example. Once again, not all accessibility features or properties are preserved with this approach.
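As a sketch of that workaround (the helper name and markup are illustrative), the serialization step boils down to wrapping the HTML in an SVG foreignObject and base64-encoding the result:

```javascript
// Build a data URL a canvas can consume: SVG wrapping the HTML via
// foreignObject, then base64-encoded. In the browser you would load it into
// an Image and call ctx.drawImage(img, 0, 0) once it fires "load".
// (btoa assumes Latin-1; non-ASCII HTML needs extra encoding handling.)
function htmlToSvgDataUrl(html, width, height) {
  const svg =
    `<svg xmlns="http://www.w3.org/2000/svg" width="${width}" height="${height}">` +
    `<foreignObject width="100%" height="100%">` +
    `<div xmlns="http://www.w3.org/1999/xhtml">${html}</div>` +
    `</foreignObject></svg>`;
  return "data:image/svg+xml;base64," + btoa(svg);
}

const url = htmlToSvgDataUrl("<p>Hello</p>", 200, 100);
```

Note the xmlns on the inner div: without it, most browsers refuse to render the embedded HTML.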

Final Thoughts

HTML-in-Canvas feels like one of those ideas that makes you wonder why it didn’t exist before. It’s still early and experimental, but the potential is clear. If this direction holds, we might finally stop thinking in terms of “DOM vs. canvas” and start treating them as part of the same rendering pipeline.

That’s a meaningful shift.
