DrawingCanvas API: Replace imperative extension methods with stateful canvas-based drawing model #377

JimBobSquarePants wants to merge 172 commits into main
Conversation
Our WebGPU backend is already very heavily based on Vello. The staged scene pipeline, shader structure, and a lot of the GPU-side execution model follow that design quite closely. The actual GPU render part is already very fast. The remaining cost is mostly the work around it: scene preparation, encoding, staging, and scheduling. I have already spent weeks pushing that down, so I do not think there is some obvious untapped win left there that would suddenly disappear by wrapping a different renderer. That is why I would not recommend trying to build a separate backend from scratch on top of Vello or Graphite. At this point, that would mostly mean redoing a large amount of integration work to arrive back at the same class of bottlenecks, because the hard part is not just “having a modern GPU rasterizer”, it is fitting that rasterizer into the rest of the drawing architecture.
I may still do it. The goal is not a fully featured alternative rasterizer, since the main value/outcome is not the alternative rasterizer itself. It's the thoughts arising from discovering and critically looking at the API with fresh eyes while going through the process of "fitting that rasterizer into the rest of the drawing architecture" (and taking the existing backend out of the equation). But that's just my initial thoughts, I may change my mind. This is the first case of doing something MASSIVE with AI in SixLabors repos while also attempting a review. If you have other suggestions about how to start critical analysis of an area that's also mostly new to me and for providing tangible help here, I'm open to them.
Honestly, I don't know other than pulling it down and having a look at how it works from a user perspective. I'd probably focus on that rather than the backends because that's what most developers will be working with. I am biased of course but I think the new canvas API is a breath of fresh air... I've spent weeks now testing and refining this. AI isn't nearly capable of writing this stuff on its own and I have been hands on during every step of the design and development. AI was used mainly to bounce ideas off, help rewrite tests, and port existing rasterizers to our API — not in the design of the APIs themselves. I found and fixed several pre-existing issues and have managed to simplify and optimize at almost every point of the processing pipeline compared to our current main. No software is perfect and if we discover down the line that the shape of something isn't quite right, we can update it in the next major. I'm more than happy to take that on the chin.
I agree that having this API is a good move, I'm just thinking out loud on how to approach this thing as a reviewer.
The backend extension points are also APIs, so I'm really hoping we can bring as much validation there as possible, but you may be right that it's not the number-one priority for V4. Anyway, I'll pull down the current state, start random experiments and see what happens.
Breaking Changes: DrawingCanvas API
Fix #106
Fix #244
Fix #344
Fix #367
This is a major breaking change. The library's public API has been completely redesigned around a canvas-based drawing model, replacing the previous collection of imperative extension methods.
What changed
The old API surface — dozens of `IImageProcessingContext` extension methods like `DrawLine()`, `DrawPolygon()`, `FillPolygon()`, `DrawBeziers()`, `DrawImage()`, `DrawText()`, etc. — has been removed entirely. These methods were individually simple but suffered from several architectural limitations.

The new model: `DrawingCanvas`

All drawing now goes through
`IDrawingCanvas` / `DrawingCanvas<TPixel>`, a stateful canvas that queues draw commands and flushes them as a batch.

Via `Image.Mutate()` (most common)

Standalone usage (without `Image.Mutate`)

`DrawingCanvas<TPixel>` can be constructed directly against an image frame.

Canvas state management
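The code samples for these two entry points did not survive extraction. The sketch below is illustrative only: `ProcessWithCanvas`, `Fill`, `Draw`, `Brushes.Solid`, and `Pens.Solid` appear elsewhere in this PR, but the exact `DrawingCanvas<TPixel>` constructor shape is a guess — check the shipped API.

```csharp
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.PixelFormats;
using SixLabors.ImageSharp.Processing;
using SixLabors.ImageSharp.Drawing;
using SixLabors.ImageSharp.Drawing.Processing;

// A path to draw; Star is one of the existing IPath implementations.
IPath path = new Star(x: 200, y: 150, prongs: 5, innerRadii: 40, outerRadii: 90);

// 1. Via Image.Mutate() — the common path. Commands queued inside the
//    ProcessWithCanvas block are flushed together as one batch.
using var image = new Image<Rgba32>(400, 300);
image.Mutate(ctx => ctx.ProcessWithCanvas(canvas =>
{
    canvas.Fill(Brushes.Solid(Color.HotPink), path);
    canvas.Draw(Pens.Solid(Color.Black, 2), path);
}));

// 2. Standalone — construct DrawingCanvas<TPixel> directly against a frame.
//    (Constructor shown here is hypothetical.)
var standalone = new DrawingCanvas<Rgba32>(image.Frames.RootFrame);
standalone.Fill(Brushes.Solid(Color.White), path);
```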
The canvas supports a save/restore stack (similar to HTML Canvas or SkCanvas):
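A minimal sketch of the save/restore stack, under the assumption that the calls are named `Save()`, `Restore()`, and `SaveLayer()` as in HTML Canvas (`SaveLayer` and `Restore` appear in this PR's description; `Save` and `Clip` are inferred, and `path`/`clipPath` are assumed to be defined):

```csharp
image.Mutate(ctx => ctx.ProcessWithCanvas(canvas =>
{
    canvas.Save();                 // push the current state
    canvas.Clip(clipPath);         // hypothetical state-affecting call
    canvas.Fill(Brushes.Solid(Color.Red), path);
    canvas.Restore();              // pop: the clip no longer applies

    canvas.SaveLayer();            // subsequent draws go to an offscreen layer
    canvas.Fill(Brushes.Solid(Color.Blue), path);
    canvas.Restore();              // the layer composites back onto the canvas
}));
```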
State includes `DrawingOptions` (graphics options, shape options, transform) and clip paths. `SaveLayer` creates an offscreen layer that composites back on `Restore`.

`IDrawingBackend` — bring your own renderer

The library's rasterization and composition pipeline is abstracted behind
`IDrawingBackend`. This interface has the following methods:

- `FlushCompositions<TPixel>`
- `TryReadRegion<TPixel>` (used by `Process()` and `DrawImage()`)

The library ships with
`DefaultDrawingBackend` (CPU, tiled fixed-point rasterizer). An experimental WebGPU compute-shader backend (`ImageSharp.Drawing.WebGPU`) is also available, demonstrating how alternate backends plug in. Users can provide their own implementations — for example, GPU-accelerated backends, SVG emitters, or recording/replay layers.

Backends are registered on
`Configuration`.

Migration guide
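Concretely, a typical v3 chain maps onto a single batched block. The sketch below assumes `pen`, `points`, `font`, `origin`, and `text` are already defined, and uses `RectangularPolygon` for the whole-image fill (whether v4 keeps a shorthand whole-canvas fill is not stated in this PR):

```csharp
// Before (v3): imperative extension methods, applied one by one.
image.Mutate(ctx => ctx
    .Fill(Color.White)
    .DrawLine(pen, points)
    .DrawText(text, font, Color.Black, origin));

// After (v4): the same operations inside one ProcessWithCanvas block;
// commands are queued and flushed together as a batch.
image.Mutate(ctx => ctx.ProcessWithCanvas(c =>
{
    c.Fill(Brushes.Solid(Color.White),
           new RectangularPolygon(0, 0, image.Width, image.Height));
    c.DrawLine(pen, points);
    c.DrawText(new RichTextOptions(font) { Origin = origin },
               text, Brushes.Solid(Color.Black), null);
}));
```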
| Old (v3) | New (v4) |
| --- | --- |
| `ctx.Fill(color, path)` | `ctx.ProcessWithCanvas(c => c.Fill(Brushes.Solid(color), path))` |
| `ctx.Fill(brush, path)` | `ctx.ProcessWithCanvas(c => c.Fill(brush, path))` |
| `ctx.Draw(pen, path)` | `ctx.ProcessWithCanvas(c => c.Draw(pen, path))` |
| `ctx.DrawLine(pen, points)` | `ctx.ProcessWithCanvas(c => c.DrawLine(pen, points))` |
| `ctx.DrawPolygon(pen, points)` | `ctx.ProcessWithCanvas(c => c.Draw(pen, new Polygon(new LinearLineSegment(points))))` |
| `ctx.FillPolygon(brush, points)` | `ctx.ProcessWithCanvas(c => c.Fill(brush, new Polygon(new LinearLineSegment(points))))` |
| `ctx.DrawText(text, font, color, origin)` | `ctx.ProcessWithCanvas(c => c.DrawText(new RichTextOptions(font) { Origin = origin }, text, Brushes.Solid(color), null))` |
| `ctx.DrawImage(overlay, opacity)` | `ctx.ProcessWithCanvas(c => c.DrawImage(overlay, sourceRect, destRect))` |

Multiple operations can share a single `ProcessWithCanvas` block — commands are batched and flushed together.

Other breaking changes in this PR
- `AntialiasSubpixelDepth` removed — The rasterizer now uses a fixed 256-step (8-bit) subpixel depth. The old `AntialiasSubpixelDepth` property (default: 16) controlled how many vertical subpixel steps the rasterizer used per pixel row. The new fixed-point scanline rasterizer integrates area/cover analytically per cell rather than sampling at discrete subpixel rows, so the "depth" is a property of the coordinate precision (24.8 fixed-point), not a tunable sample count. 256 steps gives ~0.4% coverage granularity — more than sufficient for all practical use cases. The old default of 16 (~6.25% granularity) could produce visible banding on gentle slopes.
- `GraphicsOptions.Antialias` — now controls `RasterizationMode` (antialiased vs aliased). When `false`, coverage is snapped to binary using `AntialiasThreshold`.
- `GraphicsOptions.AntialiasThreshold` — new property (0–1, default 0.5) controlling the coverage cutoff in aliased mode. Pixels with coverage at or above this value become fully opaque; pixels below are discarded.

Benchmarks
The DrawPolygonAll benchmark renders a 7200x4800px path of the state of Mississippi with a 2px stroke.
Due to the fused design of our rasterizer, we're absolutely dominating. 🚀🚀🚀🚀🚀