Rendering
YAGE uses PixiJS v8 under the hood for all rendering. The @yagejs/renderer
package provides components and systems that keep PixiJS in sync with the ECS
world automatically — you work with components and the display system handles
the rest.
RendererPlugin Setup
Register the renderer when creating your engine:

```ts
import { RendererPlugin } from "@yagejs/renderer";

engine.use(
  new RendererPlugin({
    width: 1280,
    height: 720,
    backgroundColor: 0x1a1a2e,
    container: document.getElementById("game")!,
  }),
);
```

The plugin creates the PixiJS application, sets up the render loop, and registers all rendering systems.
Responsive Canvas
The canvas is responsive by default. RendererPlugin tracks a host
element and re-maps the virtual rectangle on every resize — no extra config
needed. To pin the canvas at a fixed size, give the container fixed CSS
dimensions; the canvas will track that constant size.

```ts
new RendererPlugin({
  width: 800,
  height: 600,
  container: document.getElementById("game")!,
  // fit: { mode: "letterbox" } — this is the default
});
```

Pass fit to override the mode or the observed element:

```ts
new RendererPlugin({
  width: 800,
  height: 600,
  container: host,
  fit: { mode: "cover" }, // change mode
  // fit: { mode: "letterbox", target: other } // observe a different element
});
```

Four modes cover the usual web-game needs:
- letterbox (default) — preserves the virtual aspect ratio and centers it inside the host. Leftover space around the game is painted with backgroundColor (the "bars"). This is what most games want: no distortion, no cropping.
- expand — same scaling as letterbox (virtual is always fully visible), but the game draws into the bars instead of leaving them blank. Pair with extendedVirtualRects to render fog, parallax, or a decorative backdrop into the play-adjacent space. Matches Godot's expand, Unity's Expand match mode, and Construct 3's "Scale inner."
- cover — preserves aspect and fills the host edge-to-edge, CSS-cover style. Whichever axis has the wider host aspect ratio gets its overflow clipped by the canvas boundary. Rarely the right choice for gameplay — aspect ratio changes what the player can see. Good for full-bleed backgrounds or splash screens.
- stretch — scales each axis independently to fill the host. Distorts the image; use sparingly (menus, editor panels, deliberate stylistic effects).
letterbox and expand apply the exact same stage transform. The
difference is a rendering convention: under letterbox the bars are the
flat background color, under expand the game is expected to fill them.
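The scale-and-center math behind letterbox is easy to reason about. A minimal sketch, assuming a uniform-scale-and-center model; `letterboxFit` and `FitResult` are hypothetical names for illustration, not engine APIs:

```ts
interface FitResult {
  scale: number;   // uniform stage scale
  offsetX: number; // bar width on each side (0 when widths already match)
  offsetY: number; // bar height on each side (0 when heights already match)
}

function letterboxFit(
  virtualW: number,
  virtualH: number,
  hostW: number,
  hostH: number,
): FitResult {
  // The tighter axis decides the uniform scale; the other axis gets the bars.
  const scale = Math.min(hostW / virtualW, hostH / virtualH);
  return {
    scale,
    offsetX: (hostW - virtualW * scale) / 2,
    offsetY: (hostH - virtualH * scale) / 2,
  };
}

// cover is the same computation with Math.max, trading bars for cropping.
```

An 800×600 virtual rect in a 1920×1080 host is height-limited: scale 1.8, with 240px bars on the left and right.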
fit.target defaults to the container you passed (or the canvas’
parentElement, or document.body as a last resort). You can override it to
observe a different element.
Under the hood the plugin uses a ResizeObserver and calls
renderer.resize(hostW, hostH) on each change, so the backing buffer stays
hi-DPI-correct via resolution + autoDensity. Stage scale and position
are recomputed to map the virtual rectangle into the new canvas per the
active mode. In headless environments (no DOM target) the plugin applies a
one-shot transform against the initial width × height and installs no
observer.
At runtime you can switch modes or observe a different element:

```ts
renderer.setFit({ mode: "expand" });  // swap modes / target
renderer.fit;                         // current { mode, target? }
renderer.canvasSize;                  // { width, height } in CSS px
renderer.canvasToVirtual(cssX, cssY); // invert the stage transform
renderer.virtualToCanvas(x, y);       // forward transform (virtual → CSS px)
renderer.visibleVirtualRect;          // on-screen sub-rect of virtual space
renderer.croppedVirtualRects;         // virtual regions off-screen under cover
renderer.virtualCanvasRect;           // where virtual sits on canvas (CSS px)
renderer.visibleCanvasRect;           // full canvas extent in virtual px
renderer.extendedVirtualRects;        // bars outside virtual (letterbox/expand)
```

HUD anchoring under cover
In letterbox / expand / stretch the full virtual rectangle is always
on-screen, so HUDs anchored to virtualSize corners stay visible. Under
cover the long axis gets cropped — a HUD anchored to virtual (0, 0)
can end up off-screen. renderer.visibleVirtualRect returns the
currently-visible sub-rect of virtual space (clamped to virtual bounds),
so HUD code can track what the player actually sees while gameplay keeps
operating in the full declared virtual space:
```ts
// Gameplay: always the full declared play area.
const { width, height } = renderer.virtualSize;

// HUD: follow the visible sub-rect so corner-anchored elements stay on-screen.
const visible = renderer.visibleVirtualRect;
scoreLabel.position.set(visible.x + 16, visible.y + 16);
```

Under letterbox / expand / stretch visibleVirtualRect equals
{ x: 0, y: 0, width: virtualWidth, height: virtualHeight } — no change
needed for non-cover games. This distinction matters for competitive
titles where a wider viewport must not let players see more of the play
area than narrower ones do.
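To make the geometry concrete, here is a sketch of how a visible sub-rect could be derived under cover. `visibleUnderCover` is a hypothetical helper written for this page, not the engine's code:

```ts
interface Rect { x: number; y: number; width: number; height: number }

function visibleUnderCover(
  virtualW: number,
  virtualH: number,
  hostW: number,
  hostH: number,
): Rect {
  // cover scales by the larger ratio, so one axis overflows the canvas.
  const scale = Math.max(hostW / virtualW, hostH / virtualH);
  // How much virtual space the canvas can show at that scale, clamped.
  const visW = Math.min(virtualW, hostW / scale);
  const visH = Math.min(virtualH, hostH / scale);
  // The crop is split evenly, so the visible strip is centered.
  return {
    x: (virtualW - visW) / 2,
    y: (virtualH - visH) / 2,
    width: visW,
    height: visH,
  };
}
```

For an 800×600 virtual rect in a very wide 1600×600 host, only the middle 800×300 strip is visible: `{ x: 0, y: 150, width: 800, height: 300 }` — a HUD anchored at virtual (0, 0) would sit 150 virtual pixels above the screen.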
Drawing into the bars under expand
Under expand the game is expected to render into the extra canvas area
around the virtual rect. Two getters describe that space:
- renderer.visibleCanvasRect — full canvas extent in virtual-space pixels. Extends past virtualSize on the bar axis (negative x/y, dimensions larger than the virtual rect) whenever aspect mismatches. Iterate gridlines or backdrops against this rect so they cover every on-screen pixel, not just the play area.
- renderer.extendedVirtualRects — 0–2 rectangles of the visible canvas that sit outside virtual, in virtual-space pixels. Exactly the bars. Empty on aspect-matched hosts, under cover, and under stretch.
```ts
// Backdrop that fills the whole canvas, extending into bars under expand:
const canvas = renderer.visibleCanvasRect;
bgGraphics.rect(canvas.x, canvas.y, canvas.width, canvas.height)
  .fill({ color: 0x0f172a });

// Fog-of-war over the bars:
for (const bar of renderer.extendedVirtualRects) {
  fogGraphics.rect(bar.x, bar.y, bar.width, bar.height)
    .fill({ color: 0x000000, alpha: 0.78 });
}

// HUD that follows the canvas corners (so cards live in the bars):
const cornerTL = renderer.visibleCanvasRect;
hud.position.set(cornerTL.x + 16, cornerTL.y + 16);
```

extendedVirtualRects is populated under letterbox too — geometrically
identical to expand — so the same primitive drives optional bar
customization on top of a letterbox render (scoreboards, branding, etc.).
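The bar rectangles fall out of the same letterbox math. A sketch of how they could be computed; `barRects` is a hypothetical helper, not the engine's implementation:

```ts
interface Rect { x: number; y: number; width: number; height: number }

function barRects(
  virtualW: number,
  virtualH: number,
  hostW: number,
  hostH: number,
): Rect[] {
  const scale = Math.min(hostW / virtualW, hostH / virtualH);
  // Canvas extent expressed in virtual pixels; one axis matches exactly.
  const canvasW = hostW / scale;
  const canvasH = hostH / scale;
  const x0 = (virtualW - canvasW) / 2; // ≤ 0: canvas extends left of virtual
  const y0 = (virtualH - canvasH) / 2; // ≤ 0: canvas extends above virtual
  const rects: Rect[] = [];
  if (canvasW > virtualW + 1e-9) {
    rects.push({ x: x0, y: 0, width: -x0, height: virtualH });       // left bar
    rects.push({ x: virtualW, y: 0, width: -x0, height: virtualH }); // right bar
  }
  if (canvasH > virtualH + 1e-9) {
    rects.push({ x: 0, y: y0, width: virtualW, height: -y0 });       // top bar
    rects.push({ x: 0, y: virtualH, width: virtualW, height: -y0 }); // bottom bar
  }
  return rects; // empty when host and virtual aspects match
}
```

An 800×600 virtual rect in a 1600×600 host yields two 400×600 bars, at x = -400 and x = 800 in virtual space.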
Reasoning about the cropped region under cover
renderer.croppedVirtualRects returns the complement of
visibleVirtualRect inside virtualSize — the 0–2 strips of virtual
space that are off-screen. Empty under letterbox / expand / stretch;
under cover it’s the top+bottom or left+right crop strips.
Use it when an effect needs to know “what’s beyond the player’s view”
specifically under cover: fog-of-war overlays that fade at the crop
boundary, indicators that pulse when off-screen enemies are nearby,
auto-panning cameras.
Positioning DOM overlays
renderer.virtualCanvasRect tells you where the play area lives on the
canvas in CSS pixels — useful for absolutely-positioned HTML overlays
(menus, tooltips, inspector panels) that should track the virtual rect
rather than the canvas:
```ts
const r = renderer.virtualCanvasRect;
menuEl.style.left = `${r.x}px`;
menuEl.style.top = `${r.y}px`;
menuEl.style.width = `${r.width}px`;
menuEl.style.height = `${r.height}px`;
```

Pair with virtualToCanvas(x, y) for single-point DOM mapping.
The built-in responsive-ui example demonstrates expand: the grid
extends across the whole canvas, fog covers the bars, and HUD cards
anchor to visibleCanvasRect corners — landing in the bars whenever
aspect mismatches.
Note on terminology: “screen” elsewhere in the engine (UI LayerSpace: "screen", Camera.screenToWorld) means virtual viewport space, not DOM
pixels. The canvasToVirtual method is named after its actual inputs (CSS
pixels relative to the canvas top-left) to avoid that collision.
When you use @yagejs/input alongside fit, pointer events and
coordinates wire up automatically. RendererPlugin registers itself
under RendererAdapterKey (from @yagejs/core), and InputPlugin
resolves that key during install — so pointer events target the canvas
and coordinates route through canvasToVirtual with no config. Just
make sure you register RendererPlugin before InputPlugin.
```ts
import { RendererPlugin } from "@yagejs/renderer";
import { InputPlugin } from "@yagejs/input";

engine.use(new RendererPlugin({ width: 800, height: 600, container: host }));
engine.use(new InputPlugin({ actions: { /* ... */ } }));
```

Sprites
SpriteComponent displays a texture on an entity. It automatically syncs with
the entity’s Transform.
```ts
import { SpriteComponent } from "@yagejs/renderer";

entity.add(
  new SpriteComponent({
    texture: playerTexture,
    anchor: { x: 0.5, y: 0.5 },
    layer: "characters",
    tint: 0xffffff,
    alpha: 1,
  }),
);
```

All properties are optional except texture. The layer property controls
z-ordering (see Render Layers below).
Graphics
GraphicsComponent gives you access to PixiJS drawing commands for procedural
shapes.
```ts
import { GraphicsComponent } from "@yagejs/renderer";

entity.add(
  new GraphicsComponent().draw((g) => {
    g.circle(0, 0, 50).fill({ color: 0x38bdf8 });
  }),
);
```

Call .draw() again at any time to redraw. The callback receives a PixiJS
Graphics object, so all standard drawing methods are available — rect,
roundRect, poly, moveTo/lineTo, stroke, and fill.
Animated Sprites
For frame-based sprite animations, use AnimatedSpriteComponent together with
an AnimationController.
```ts
import {
  AnimatedSpriteComponent,
  AnimationController,
} from "@yagejs/renderer";

entity.add(
  new AnimatedSpriteComponent({
    spritesheet: heroSheet,
    defaultAnimation: "idle",
  }),
);

entity.add(
  new AnimationController({
    animations: {
      idle: { frames: [0, 1, 2, 3], speed: 0.1 },
      run: { frames: [4, 5, 6, 7, 8, 9], speed: 0.15 },
      jump: { frames: [10, 11, 12], speed: 0.12, loop: false },
    },
  }),
);
```

Switch animations at runtime:

```ts
const anim = entity.get(AnimationController);
anim.play("run");
anim.play("jump", { onComplete: () => anim.play("idle") });
```

Camera
The camera controls the viewport into your game world. Spawn a CameraEntity
in your scene to create it:
```ts
import { Vec2 } from "@yagejs/core";
import { CameraEntity } from "@yagejs/renderer";

const camera = this.spawn(CameraEntity, { position: new Vec2(400, 300) });
// All camera operations are available directly on camera:
// camera.follow(), camera.shake(), camera.zoomTo(), camera.bounds, etc.
```

Coordinate Convention
Camera position (0, 0) places the world origin at the center of the
viewport, not the top-left. An entity drawn at world position (0, 0)
appears in the middle of the screen; positive X goes right, positive Y goes
down.
This is the convention most camera-driven 2D games expect. A scrolling shooter or platformer naturally wants the camera to follow the player, and centering the follow target on screen is the intuitive default.
If your game has a fixed, non-scrolling layout (a puzzle grid, an arcade-style
single-screen game, a tile editor), you probably want world (0, 0) to align
with the top-left of the screen instead — that way tile coordinates, UI
anchors, and typical 2D art tools line up the way you’d expect. Offset the
camera by half the viewport in onEnter:
```ts
class GameScene extends Scene {
  readonly name = "game";

  onEnter() {
    // Top-left origin: world (0,0) → screen (0,0)
    this.spawn(CameraEntity, { position: new Vec2(400, 300) }); // viewport is 800×600
  }
}
```

The camera math never changes; you're just choosing which world point sits under the viewport's top-left corner. Follow-a-target cameras work identically with either convention — they just move to frame the target.
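Both conventions fall out of a single projection. A sketch of the centered world-to-screen mapping, assuming zoom 1; `worldToScreen` here is an illustrative stand-in for the camera's method:

```ts
interface Vec2Like { x: number; y: number }

// Centered-origin projection: the camera position maps to the viewport center.
function worldToScreen(
  world: Vec2Like,
  camera: Vec2Like,
  viewportW: number,
  viewportH: number,
): Vec2Like {
  return {
    x: world.x - camera.x + viewportW / 2,
    y: world.y - camera.y + viewportH / 2,
  };
}
```

With the camera at (0, 0), world (0, 0) projects to the viewport center (400, 300) on an 800×600 viewport; with the camera at (400, 300), world (0, 0) projects to the top-left (0, 0), which is exactly the half-viewport offset trick.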
Following a Target
```ts
const cam = this.spawn(CameraEntity, {
  follow: player.get(Transform),
  smoothing: 0.1,
  offset: { x: 0, y: -50 },
  deadzone: { halfWidth: 50, halfHeight: 30 },
});
```

smoothing controls how quickly the camera catches up (0 = instant, 1 = never
moves). The deadzone defines a rectangle in the center of the screen where
the target can move without the camera responding.
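One way to picture how smoothing and deadzone compose: apply the deadzone first, then close a fraction of the remaining gap. `followStep` is a hypothetical per-frame sketch of this behavior, not the engine's follow code:

```ts
interface Vec2Like { x: number; y: number }
interface Deadzone { halfWidth: number; halfHeight: number }

function followStep(
  camera: Vec2Like,
  target: Vec2Like,
  smoothing: number, // 0 = instant, 1 = never moves
  deadzone: Deadzone,
): Vec2Like {
  const dx = target.x - camera.x;
  const dy = target.y - camera.y;
  // Only the part of the offset that escapes the deadzone counts.
  const ox = Math.abs(dx) > deadzone.halfWidth
    ? dx - Math.sign(dx) * deadzone.halfWidth : 0;
  const oy = Math.abs(dy) > deadzone.halfHeight
    ? dy - Math.sign(dy) * deadzone.halfHeight : 0;
  // Fraction of the remaining gap closed this frame.
  const t = 1 - smoothing;
  return { x: camera.x + ox * t, y: camera.y + oy * t };
}
```

With smoothing 0 and a 50px half-width deadzone, a target 100px to the right pulls the camera to x = 50, leaving the target resting on the deadzone edge; a target inside the deadzone leaves the camera untouched.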
Zoom and Rotation
```ts
camera.zoomTo(2.0, 500, easeOutQuad); // zoom to 2x over 500ms
camera.rotation = Math.PI / 12;       // tilt the camera
```

Screen Shake
```ts
camera.shake(8, 400, { decay: true });
```

intensity is the maximum pixel displacement per frame. When decay is true,
the shake fades out over the duration.
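A common shape for this effect is a per-frame random offset whose strength fades linearly over the duration. A sketch under that assumption; `shakeOffset` is illustrative, not the engine's implementation:

```ts
function shakeOffset(
  intensity: number, // max pixel displacement per frame
  elapsed: number,   // ms since the shake started
  duration: number,  // total shake duration in ms
  decay: boolean,
  rand: () => number = Math.random, // injectable for deterministic tests
): { x: number; y: number } {
  if (elapsed >= duration) return { x: 0, y: 0 }; // shake finished
  const strength = decay
    ? intensity * (1 - elapsed / duration) // linear fade to zero
    : intensity;
  return {
    x: (rand() * 2 - 1) * strength, // uniform in [-strength, strength]
    y: (rand() * 2 - 1) * strength,
  };
}
```

Add the offset to the camera's render position only, so gameplay logic never sees the jitter.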
Coordinate Conversion
Convert between screen (pixel) space and world space:
```ts
const worldPos = camera.screenToWorld(pointerPos); // screen → world
const screenPos = camera.worldToScreen(worldPos);  // world → screen
```

Bounds
Constrain the camera to a region so it never shows areas outside the level:
```ts
camera.bounds = { minX: 0, minY: 0, maxX: 4000, maxY: 2000 };
```

Camera bindings
A CameraEntity spawned without bindings auto-binds every
space: "world" layer at full strength. For finer control — parallax,
minimaps, decoupled HUDs — pass an explicit bindings array. Each
binding has three independent ratios:
```ts
interface CameraBinding {
  layer: string;
  translateRatio?: number; // default 1 — follows the camera position
  rotateRatio?: number;    // default 1 — rotates with the camera
  scaleRatio?: number;     // default 1 — zooms with the camera
}
```

Each ratio is a linear blend from identity (0, ignores that axis of the
camera) to full effect (1, fully follows that axis).
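The blend can be sketched as plain per-axis interpolation. `layerTransform` and its types are hypothetical illustrations of the ratio semantics, not engine internals:

```ts
interface BindingRatios {
  translateRatio?: number;
  rotateRatio?: number;
  scaleRatio?: number;
}
interface CameraState { x: number; y: number; rotation: number; zoom: number }

function layerTransform(cam: CameraState, b: BindingRatios): CameraState {
  const t = b.translateRatio ?? 1;
  const r = b.rotateRatio ?? 1;
  const s = b.scaleRatio ?? 1;
  return {
    x: cam.x * t, // 0.1 → layer scrolls at a tenth of camera speed
    y: cam.y * t,
    rotation: cam.rotation * r,
    zoom: 1 + (cam.zoom - 1) * s, // zoom blends toward identity (1), not 0
  };
}
```

Note the zoom blend: ratio 0 must yield zoom 1 (identity), not zoom 0, which is why the interpolation runs between 1 and the camera's zoom.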
```ts
this.spawn(CameraEntity, {
  bindings: [
    { layer: "sky", translateRatio: 0.1 }, // slow parallax
    { layer: "mid", translateRatio: 0.6 }, // medium parallax
    { layer: "world" },                    // full transform (default)
    {
      layer: "minimap", // camera-agnostic overlay painted on a world layer
      translateRatio: 0,
      rotateRatio: 0,
      scaleRatio: 0,
    },
  ],
});
```

These ratios are layer-level decoupling primitives: they’re the
right answer for parallax, minimaps, and other content whose position
already lives in the coord space the layer provides. They are not
the right answer for entity-anchored UI like nameplates or health bars
— mixing a partial camera transform with the main camera’s full
transform separates the UI from its target under zoom. For that use
case, see ScreenFollow below.
ScreenFollow (entity-anchored UI)
ScreenFollow projects a world source through a camera and writes the
resulting screen coord to its entity’s Transform each frame. Paired
with a UIPanel (or UIRoot) on a screen-space layer using
positioning: "transform", it produces UI that tracks a target entity
but stays axis-aligned and constant-size regardless of camera zoom or
rotation — the canonical “billboard” primitive for nameplates, health
bars, damage numbers, and interaction prompts.
```ts
import { ScreenFollow } from "@yagejs/renderer";
import { UIPanel, Anchor } from "@yagejs/ui";

class EnemyNameplate extends Entity {
  setup(params: { target: Entity; camera: CameraEntity; label: string }) {
    this.add(new Transform());
    this.add(
      new ScreenFollow({
        target: params.target,
        camera: params.camera,
        offset: new Vec2(0, -40), // 40 screen pixels above the target, at any zoom
      }),
    );
    const panel = this.add(
      new UIPanel({
        positioning: "transform",    // read Transform.worldPosition
        anchor: Anchor.BottomCenter, // pivot on the panel
        padding: 4,
        background: { color: 0x000000, alpha: 0.6, radius: 4 },
      }),
    );
    panel.text(params.label, { fontSize: 11, fill: 0xffffff });
  }
}
```

offset is applied in screen pixels, after projection —
concretely cam.worldToScreen(target) + offset. That keeps the visual
gap between UI and target fixed under any camera transform: a 40px
offset above is 40 screen pixels above at any zoom, any rotation. Adding
the offset in world coords before projection (the intuitive-seeming
shape) would let the camera transform warp it — the gap would double at
zoom 2 and rotate off-axis as the camera rotates.
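The invariance is easy to verify numerically. A sketch of the two orderings, assuming a translate-then-zoom camera; `project` stands in for `cam.worldToScreen` and both helpers are illustrative:

```ts
interface Vec2Like { x: number; y: number }

function project(world: Vec2Like, cam: Vec2Like, zoom: number): Vec2Like {
  return { x: (world.x - cam.x) * zoom, y: (world.y - cam.y) * zoom };
}

// Post-projection offset: add in screen pixels after the camera transform.
function screenFollowPos(
  target: Vec2Like,
  cam: Vec2Like,
  zoom: number,
  offset: Vec2Like,
): Vec2Like {
  const p = project(target, cam, zoom);
  return { x: p.x + offset.x, y: p.y + offset.y };
}
```

At zoom 1 and zoom 2 alike, the nameplate sits exactly 40 screen pixels above the projected target; adding the offset in world coordinates before projecting would stretch that gap to 80 pixels at zoom 2.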
target accepts an Entity, a static Vec2Like, or a function
returning a Vec2Like — you can track anything whose world position
you can name, including animated paths or the midpoint of two entities.
See the UI guide for
the full logical-root + siblings pattern and the world-ui example for
a runnable demo.
Render Layers
Render layers control draw order. Entities on higher layers render on top.
```ts
import { Scene } from "@yagejs/core";
import { RendererPlugin, type LayerDef } from "@yagejs/renderer";

class GameScene extends Scene {
  readonly name = "game";

  readonly layers: readonly LayerDef[] = [
    { name: "background", order: -20 },
    { name: "tiles", order: -10 },
    { name: "characters", order: 0 },
    { name: "fx", order: 10 },
    { name: "ui", order: 100, space: "screen" },
  ];
}

engine.use(new RendererPlugin({ width: 800, height: 600 }));
```

Assign a layer via the layer property on SpriteComponent or
GraphicsComponent. Entities within the same layer are sorted by their
y-position by default.
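The resulting draw order can be sketched as a two-key sort: layer order first, then y within a layer, so entities lower on screen draw on top of those behind them. `drawOrder` and `Drawable` are hypothetical names for illustration:

```ts
interface Drawable { name: string; layerOrder: number; y: number }

function drawOrder(items: Drawable[]): string[] {
  return [...items]
    .sort((a, b) =>
      // Primary key: layer order. Secondary key: y within the same layer.
      a.layerOrder !== b.layerOrder ? a.layerOrder - b.layerOrder : a.y - b.y,
    )
    .map((d) => d.name);
}
```

A tile on a low-order layer always draws first; within the characters layer, an entity further down the screen draws after (on top of) one above it.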
Layer space
Each LayerDef has a space: "world" | "screen" (default "world")
that controls whether cameras transform it:
- "world" — layers scroll and zoom with the camera. Use for gameplay layers (background, tiles, characters, fx), parallax, and entity-anchored UI (interaction prompts, health bars, damage numbers).
- "screen" — layers stay fixed to the viewport. Use for HUD, menus, dialogs, and any UI you want anchored to the screen. A CameraEntity spawned without explicit bindings skips screen-space layers on auto-bind; you can still bind one explicitly by naming it in bindings.
If no "ui" layer is declared, @yagejs/ui auto-provisions one as
space: "screen" the first time a UIPanel is added, so HUDs just
work without any layer wiring.
See the UI guide for how
UIPanel/UIRoot pick between viewport-anchored and Transform-pinned
positioning.
Asset Factories
YAGE provides helper functions to load and define render assets:
```ts
import { texture, spritesheet, renderAsset } from "@yagejs/renderer";

// Load a single texture
const bg = texture("assets/background.png");

// Load a spritesheet with atlas data
const heroSheet = spritesheet("assets/hero.png", "assets/hero.json");

// Generic render asset (auto-detects type)
const asset = renderAsset("assets/tileset.png");
```

These return handles that are resolved during scene loading, so textures are
available by the time setup() runs.
Display System
The built-in display system automatically synchronizes each entity’s Transform
component with the underlying PixiJS display object. When you update position,
rotation, or scale on a Transform, the corresponding Pixi sprite or graphic
moves to match — no manual syncing required.
```ts
// Moving the transform moves the sprite on screen
entity.transform.setPosition(200, 300);
entity.transform.rotate(0.5);
entity.transform.setScale(2, 2);
```

This one-way sync (ECS to Pixi) runs once per frame after all component updates have completed, keeping rendering deterministic and free of mid-frame visual glitches.
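Conceptually the sync pass is a pure copy from ECS state to display state. A minimal sketch of that behavior; the types and `syncDisplay` helper are illustrative, not engine APIs:

```ts
interface TransformLike { x: number; y: number; rotation: number }
interface ViewLike { x: number; y: number; rotation: number }

// One-way ECS → display sync, run once per frame after all updates.
function syncDisplay(
  pairs: Array<{ transform: TransformLike; view: ViewLike }>,
): void {
  for (const { transform, view } of pairs) {
    // Copy, never read back: display state is a pure projection of the ECS.
    view.x = transform.x;
    view.y = transform.y;
    view.rotation = transform.rotation;
  }
}
```

Because the copy happens in one place after all systems have run, no system ever observes a half-updated frame.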