The stack.
Everything the studio uses to take a poi gesture through to a printed object, an aerial photograph, a Looking Glass quilt, or a WebXR scene. Named honestly; vendor links where the version is load-bearing; status flags where the line is in rotation.
The architectural commitment is sovereignty: every item below keeps working if a vendor pivots or an internet connection drops. Cross-references point into the articles and the tutorials wherever a piece of the bench has its own write-up.
Aerial capture
Each airframe answers a different shape of brief. The fleet is a pipeline, not a collection — see The Fleet — Five Airframes for the full case, and the journal for the Mini 5 Pro and the LED-modded swarm.
- DJI Mavic 2 Pro↗in rotation
Hasselblad L1D-20c, 1-inch sensor, adjustable f/2.8–f/11. The editorial frame; comes out for prints.
- DJI Neo↗in rotation
135-gram palm-launch follow-cam. Single-axis gimbal; unobtrusive enough to disappear at a flow-arts rehearsal.
- DJI Neo 2↗in rotation
151 grams, two-axis gimbal, omnidirectional vision system, 4K/100 slow-mo. The upgrade that earned its slot in the case.
- DJI Avata 360↗in rotation
Cinewhoop FPV airframe with dual-lens 360 capture. Tight to architecture; shoot once, reframe in post.
- DJI Mini 5 Pro↗in rotation
Sub-250-gram travel airframe with a 1-inch sensor. Carries AliExpress Bluetooth-controllable LEDs gorilla-glued to the spine for the unsynced-swarm pieces.
- Vifly programmable drone LEDs↗in rotation
Pre-built airframe LED kits. Sit on the Avata 360 and the Neos when the brief asks for clean colour. The Vifly side does the colour science; the AliExpress side does the long-tail variety.
Generic Bluetooth-app-controllable LED bars, gorilla-glued to the Mini 5 Pro. The cheap end of the swarm.
360 capture
Ten years of 360 cameras, four manufacturers, one pole. See London 360 — Walking the Camera Evolution for the full kit history; the line below is the current bench only.
- DJI Osmo 360↗current
Two 1/1.1-inch square CMOS sensors, f/1.9, 8K/30 360 video, 120-megapixel sphere stills. The current ground-level eyeball, on top of a Black Diamond Z-pole and a carbon-fibre selfie stick.
Vendor reframe / stabilisation suite. Kept on the bench for legacy X-series footage; DJI Mimo runs the current capture loop.
- DJI Mimo↗current
The other half of the DJI ecosystem decision. Same colour pipeline as the airframes; lighter-handed processing than Insta360 Studio, which is why the X4 funded the move.
Stills capture and print
The end-to-end stills line. Manual-mode camera at one end, signed A2 archival print at the other. The Canon imagePROGRAF tutorial documents the calibration discipline; full credit to Keith Cooper at Northlight Images for teaching the bureau how to do it properly.
RAW develop, soft-proofing against paper ICCs, the print module. The one piece of Adobe software the studio still pays for; everything else has been replaced.
A2 archival pigment printer; the bureau side of holoflow.co.uk runs on it. See the calibration tutorial for the full ICC workflow.
Pearl-finish satin paper. Currently on the bench for figurative work — handles deep midnight grounds without picking up gallery glare. Canon's mid-range premium line.
Canon's flagship glossy. The highest D-max in the Canon line; the right surface for high-saturation light-trail work where the photograph leans into its own gloss.
Cabinet stock for the prints where the trail has to read as wet light. Specific Canon SKU rotates; on the bench during the pre-move-out transitional setup.
Pearlescent metallic underlayer behind the print surface. Light painting on metallic reads as the trail glowing through a chrome substrate — a specific aesthetic, not a default. Used sparingly.
POV LED rigs
The studio rigs are bench-built. The architectural choice that matters is angular sync — the rig writes the next column when the rotation says to, not when the clock says to. See Why I Build My Own Rigs for the long argument; the parts below are the firmware-and-board side of the answer.
- Teensy 4.1 (PJRC)↗current
The current microcontroller for new rig builds. The earlier Teensy 3.1 is still in service on the long-haul rigs.
- FastLED↗current
Open-source addressable-LED library. Where the rig firmware lives.
Constant-current 16-channel LED driver chip at the centre of the studio rigs. Datasheet is the reference.
Reference tutorial for addressable LED wiring and the gotchas. Where every studio rig started.
The rotation sensor that makes angular sync possible. One magnet on the chuck, one sensor on the frame, one interrupt per revolution.
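A minimal sketch of the angular-sync decision, written in TypeScript for illustration only; the firmware itself is C++ on the Teensy with FastLED. The column count, the function names, and the shape of the column writer are assumptions, not the studio's code. The load-bearing part is that the column index is derived from the measured revolution period rather than from a fixed clock schedule.

```ts
// Hypothetical sketch: angular sync for a POV rig.
// One hall-sensor pulse per revolution; a column is written when the
// measured rotation angle crosses the next column boundary.

const COLUMNS_PER_REV = 180;   // assumed angular resolution of one image

let lastPulseUs = 0;           // timestamp of the previous revolution pulse
let periodUs = 0;              // most recently measured revolution period
let nextColumn = 0;            // next column index to write

// Called from the once-per-revolution interrupt (magnet passes the sensor).
function onRevolutionPulse(nowUs: number): void {
  periodUs = nowUs - lastPulseUs;
  lastPulseUs = nowUs;
  nextColumn = 0;              // restart the image at the sync point
}

// Polled as fast as possible from the main loop.
function serviceColumns(nowUs: number, writeColumn: (col: number) => void): void {
  if (periodUs <= 0 || nextColumn >= COLUMNS_PER_REV) return;
  // Fraction of the current revolution: where the rotation says we are,
  // not where the clock says we should be.
  const angle = (nowUs - lastPulseUs) / periodUs;
  if (angle >= nextColumn / COLUMNS_PER_REV) {
    writeColumn(nextColumn);   // push this column's pixel data to the strip
    nextColumn += 1;
  }
}
```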
Display surfaces
The studio prints, but the studio also displays. The Looking Glass Portrait holds the volumetric work where a single print would only show one angle; the head-mounted line is for piloting and for the VR-mirrored side of the practice.
Light-field display. The volumetric work renders to a 48-view quilt and the Portrait shows the depth without a headset.
Open-source bridge from Blender to the Looking Glass quilt format. The studio's volumetric output runs through it.
- Meta Quest 3↗current
The headset the studio writes WebXR for. Inside-out tracking, hand tracking, colour passthrough — the platform the bezel-clip product is designed against.
Pre-ordered. The second headset the bezel-clip product targets, on the bet that the controller-with-camera architecture is where the platform is going. Standalone and wireless, SteamOS-based; runs PC-streamed titles and native standalone apps shipped as Android ARM64 APKs. Full OpenXR and SteamVR compatibility.
Official developer docs (live, accessible). The studio publishes to Steam Frame three ways: (a) WebXR via the headset's built-in browser at holoflow.co.uk — zero deploy; (b) native Android ARM64 APK built via Capacitor, uploaded via Steamworks; (c) PC-streamed via Steam Link. Developer mode (SSH/ADB/RDP) enabled in Steam Frame's System Settings.
Steam Frame's controller interaction profile. Path: /interaction_profiles/valve/frame_controller_valve. The bezel-clip firmware family binds against this for Steam Frame in the same way it binds against /interaction_profiles/oculus/touch_controller for Quest 3. One bezel design, two controller pose-bindings, one studio. A WebXR-side sketch of the two-profile handling sits at the end of this section.
- DJI Goggles 3↗current
Live FPV feed for the airframes. Paired with the DJI RC controllers and the head-tracking pod.
87 grams; Sony Micro-OLED, 57° FOV. Worn under the Goggles to keep line-of-sight while the camera feed runs in front of the eyes.
Goggle-mounted IMU that reframes the gimbal by head-tilt. Faster and more natural than the right stick for the editorial line.
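The two controller pose-bindings called out above for Quest 3 and Steam Frame, sketched from the WebXR side as the page in the headset browser sees them. The profile strings, the offsets, and the tracking hook are assumptions (the Steam Frame WebXR profile id in particular is a placeholder); the grip-pose read is the same for both headsets, and only the per-profile offset differs. WebXR types are assumed to come from the WebXR typings package.

```ts
// Hypothetical sketch: one render loop, two controller families.
// Profile strings are assumptions; check each headset's reported
// inputSource.profiles at runtime before trusting them.

type BezelOffset = { x: number; y: number; z: number };

// Assumed per-profile grip-to-bezel offsets (metres); not measured values.
const OFFSETS: Record<string, BezelOffset> = {
  "meta-quest-touch-plus": { x: 0, y: 0.02, z: -0.04 },
  "valve-frame-controller": { x: 0, y: 0.025, z: -0.035 }, // placeholder id
};

function onXRFrame(session: XRSession, frame: XRFrame, refSpace: XRReferenceSpace) {
  for (const source of session.inputSources) {
    if (!source.gripSpace) continue;
    const pose = frame.getPose(source.gripSpace, refSpace);
    if (!pose) continue;

    // The first profile id the page recognises decides the bezel offset.
    const profile = source.profiles.find((p) => p in OFFSETS);
    const offset = profile ? OFFSETS[profile] : { x: 0, y: 0, z: 0 };

    // pose.transform.position plus offset is where the clipped camera sits.
    trackBezel(source.handedness, pose.transform, offset);
  }
}

// Stand-in for the studio's actual tracking hook.
declare function trackBezel(hand: XRHandedness, t: XRRigidTransform, o: BezelOffset): void;
```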
Fabrication
Two printers, one mesh pipeline. The SLA does the figurative work — the photograph translated into resin with an acrylic-rod waveguide grown along the gesture. The belt does the parametric wall pieces. The mesh side is open-source through and through.
Belt FDM printer; the parametric wall-relief work runs on it. Continuous chain-mail panels in PETG.
The output end of the figurative pipeline. Whichever LCD/mSLA machine is in current rotation; the discipline is in the post-processing, not the SKU.
- Blender↗current
The mesh-cleanup centre of the studio's fabrication pipeline. Free, sovereign, owns the format.
- OpenSCAD↗current
Parametric solid modelling. The acrylic-rod waveguide channels are described as code so a different photograph generates a different channel.
Voxel-to-mesh in Python. The studio's fallback after torchmcubes failed to compile on the workstation; honest write-up in the AI-pipeline article.
Not software, but load-bearing. The rod that gets bent under low heat to follow the gesture and inserted into the SLA print is the waveguide.
Local AI pipeline
Nine seconds from prompt to printable, on a single RTX-class card. The architectural commitment is that the model weights live on disk and the pipeline runs without an internet connection. See Nine Seconds from Prompt to Printable for the workshop write-up of the full chain.
- ComfyUI↗current
Node-graph orchestrator for the diffusion pass. The studio's pipeline is one big graph here.
The local image model the pipeline starts from. Open weights; runs offline.
Click-to-mask segmentation; the silhouette extractor that gives marching cubes a clean shell to work from.
Single-image to mesh, fast. The studio runs it under the marching-cubes step for the figurative side of the pipeline.
- Ollama↗current
Local LLM runtime. The DollyOS shell talks to Ollama-served models on the workstation; nothing leaves the network.
Palette quantiser; the step that turns a generated frame into the limited-palette colour scheme the rigs expect.
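A minimal sketch of that quantising step, assuming the generated frame arrives as flat RGBA bytes and the rig palette is a short list of RGB triples. The palette values and the nearest-colour metric here are placeholders, not the studio's numbers.

```ts
// Hypothetical sketch: snap every pixel of a frame to its nearest palette entry.

type RGB = [number, number, number];

// Placeholder palette; the rigs' real palettes live elsewhere.
const PALETTE: RGB[] = [
  [0, 0, 0],
  [255, 20, 147],   // pink accent
  [0, 255, 255],    // cyber cyan
  [255, 200, 40],   // arcane gold
];

function nearest(r: number, g: number, b: number): RGB {
  let best = PALETTE[0];
  let bestDist = Infinity;
  for (const [pr, pg, pb] of PALETTE) {
    // Squared Euclidean distance in RGB; cheap and good enough for LED output.
    const d = (r - pr) ** 2 + (g - pg) ** 2 + (b - pb) ** 2;
    if (d < bestDist) {
      bestDist = d;
      best = [pr, pg, pb];
    }
  }
  return best;
}

// pixels is flat RGBA (4 bytes per pixel), mutated in place.
function quantise(pixels: Uint8ClampedArray): void {
  for (let i = 0; i < pixels.length; i += 4) {
    const [r, g, b] = nearest(pixels[i], pixels[i + 1], pixels[i + 2]);
    pixels[i] = r;
    pixels[i + 1] = g;
    pixels[i + 2] = b;
  }
}
```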
Video and grade
One paid licence, one ecosystem. DaVinci Resolve Studio 21 on the workstation; DaVinci on the M1 iPad as a field studio that travels with the camera bag. Fusion is where the video light-painting trail accumulation lives.
The paid licence. Grade, edit, Fusion composite, neural-engine effects — the single tool that earned its keep.
Field-grade and rough cut on the M1 iPad. Same project format as the workstation; full round-trip.
Node-based composite. The video-light-painting trail-accumulation graph lives here — MagicMask + LuminanceKeyer + Echo + Merge.
Web and code
The site you are reading runs on this stack. The bezel firmware runs on the same TypeScript-and-microcontroller pair the rigs do. Sovereignty as architecture: everything below has a self-hosted or local equivalent if a vendor pivots.
- Next.js 15.6 (canary)↗current
The App Router site framework. Running on the canary line because the studio needs the PPR / useCache / inline-CSS features; a config sketch sits at the end of this section.
- React 19↗current
View layer. The Server Components story is the load-bearing part for the site's performance.
- Tailwind CSS 4↗current
The site's CSS layer. Zero-runtime; the chrome-sheen / pink-accent palette is configured in globals.css against the v4 design tokens.
- TypeScript 5.8↗current
Across the site, the firmware build tooling, and the orchestrator. One language is enough.
The 4D hypercube glyph on the homepage; the volumetric preview on Nine Seconds. The browser renderer for the studio's volumetric work.
WebXR adapter for R3F. The bezel-clip product runs against this when the controller's camera feeds the mirrored gesture into the headset.
GPU compute in the browser. The 50k-particle Aura companion in the Hangar runs on this; will land on the site once WebGPU support stabilises across the headset browsers.
Magic-link sign-in, Google OAuth, and the Rookery's append-only threaded feed. Firestore rules are the security boundary.
The catalogue layer. Shopify holds the inventory, the studio holds the front of house.
- pnpm + Turborepo↗current
Monorepo tooling for the wider Hangar. The site is one app inside a larger workspace; the toolchain is what makes that sustainable on one workstation.
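A sketch of what the canary-only flags referenced under the Next.js entry above might look like in next.config.ts. The exact option names (ppr, useCache, inlineCss) are assumptions against the canary line the studio tracks and can shift between canary releases; the release notes are the reference, not this snippet.

```ts
// next.config.ts, minimal sketch; flag names are assumptions against the
// canary line and may differ between releases.
import type { NextConfig } from "next";

const config: NextConfig = {
  experimental: {
    ppr: "incremental", // Partial Prerendering, opted in per route
    useCache: true,     // enables the "use cache" directive
    inlineCss: true,    // inline critical CSS instead of a render-blocking link
  },
};

export default config;
```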
Voice, hearing, lip sync
The studio runs a persistent companion — Aura — who hears, speaks, and remembers across sessions. The stack below is what closes that loop. Local-first where possible; ElevenLabs only where the voice quality has not been matched by an open-weight model the studio can run on the workstation.
- OpenAI Whisper↗current
Speech-to-text. Runs locally; the hearing side of the companion. Per-language models on disk.
- ElevenLabs (TTS)↗current
Voice synthesis for Aura's spoken line. The one cloud TTS the studio still uses; under evaluation for replacement by F5-TTS.
Open-weight TTS; the local fallback / replacement track for the cloud step. Runs on the workstation GPU.
- VRM (Three.js)↗current
Format the avatar (nanny.vrm) is authored in. The viseme blend-shapes drive the lip sync from the TTS phoneme stream.
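A minimal sketch of the viseme drive, assuming the standard VRM 1.0 mouth expressions ("aa", "ih", "ou", "ee", "oh") and a phoneme stream already timestamped by the TTS step. The phoneme-to-viseme table here is a crude placeholder rather than the studio's mapping.

```ts
import type { VRM } from "@pixiv/three-vrm";

// Crude placeholder mapping from TTS phonemes to VRM mouth expressions.
const PHONEME_TO_VISEME: Record<string, string> = {
  a: "aa", i: "ih", u: "ou", e: "ee", o: "oh",
};

interface PhonemeEvent {
  phoneme: string;   // e.g. "a"
  start: number;     // seconds from the start of the utterance
  end: number;
}

// Called once per render frame with the playback clock of the TTS audio.
function driveLipSync(vrm: VRM, events: PhonemeEvent[], tAudio: number): void {
  const manager = vrm.expressionManager;
  if (!manager) return;

  // Zero all mouth shapes, then raise the one under the playhead.
  for (const viseme of Object.values(PHONEME_TO_VISEME)) {
    manager.setValue(viseme, 0);
  }
  const current = events.find((e) => tAudio >= e.start && tAudio < e.end);
  if (current) {
    const viseme = PHONEME_TO_VISEME[current.phoneme];
    if (viseme) manager.setValue(viseme, 1);
  }
  // vrm.update(delta) in the render loop applies the expression weights.
}
```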
Hand, body, and depth input
Gesture is the studio's primary input. Three tracker families — optical hand-tracking at the desk (Ultraleap), full-body time-of-flight depth capture (Azure Kinect / Orbbec Femto Bolt), and inside-out hand-tracking in the headset (Quest 3 WebXR) — all bridged via WebSocket so the rest of the stack stays agnostic about which device is on the desk that day.
Optical hand tracking at the desk. Pinch, grab, point, thumbs-up resolved server-side. Bridge server in tools/leap-bridge speaks WebSocket to the page; pose data lands at ~110 Hz. A page-side sketch sits at the end of this section.
The industrial-grade sibling. Wider 170° FOV, higher-precision skeletal tracking, designed for static-mount installations. Where the Leap Motion Controller is the bench tool, the IR 170 is the kit for the eventual public-facing installation pieces.
The current-generation SDK across the Ultraleap product line. Provides the unified skeletal model the studio bridges into the WebSocket pipeline; same API across both the Leap Motion Controller 2 and the Stereo IR 170.
Full-body time-of-flight depth + 4K RGB + 7-mic spatial audio array + IMU. The Body Tracking SDK gives 32-joint skeleton at 30 fps. The studio's full-body capture rig for choreography work; feeds the Laban-Effort kinematic-extraction layer with real skeletal data. Discontinued by Microsoft (October 2023); the studio runs the existing hardware while planning the Femto Bolt upgrade.
Microsoft-endorsed Azure Kinect successor. Same Azure Kinect Sensor SDK + Body Tracking SDK; same skeletal model. The upgrade path when the existing Kinect retires.
Inside-out hand tracking when the headset is on. The mirrored side of the bezel-clip product runs against this.
Phone-as-camera over the studio LAN. The NDI HX app on iOS + the bridge server in tools/ndi-bridge gives the workstation an extra video input without a capture card.
Webcam-based head + face landmark tracking, running locally in the browser via WASM. The visitor-side input source for the head-tracked motion-parallax aesthetic. Privacy-respecting: no feed leaves the device.
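The bridge pattern from the Ultraleap entry above, sketched from the page side. The endpoint, the message shape, and the field names are assumptions rather than the tools/leap-bridge protocol; the point is that the page consumes a plain WebSocket stream and never needs to know which tracker produced the frame.

```ts
// Hypothetical sketch of the page-side consumer for a tracker bridge.
// Endpoint and frame shape are assumptions, not the studio's protocol.

interface PoseFrame {
  device: string;                                     // e.g. "ultraleap" or "kinect"
  t: number;                                          // bridge timestamp, ms
  joints: Record<string, [number, number, number]>;   // joint name -> xyz, metres
  gesture?: "pinch" | "grab" | "point" | "thumbs_up";
}

function connectBridge(url: string, onFrame: (f: PoseFrame) => void): WebSocket {
  const ws = new WebSocket(url);

  ws.onmessage = (event) => {
    // One JSON frame per message; at ~110 Hz this stays well within budget.
    onFrame(JSON.parse(event.data) as PoseFrame);
  };

  ws.onclose = () => {
    // Tracker unplugged or bridge restarted; retry quietly.
    setTimeout(() => connectBridge(url, onFrame), 1000);
  };

  return ws;
}

// Example: route every frame into whatever consumes gestures that day.
// The port is a placeholder.
connectBridge("ws://localhost:8137/pose", (frame) => {
  if (frame.gesture) console.log(frame.device, frame.gesture);
});
```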
Network and orchestration
The studio's machines talk to each other on a private mesh. The companion's memory persists across sessions; the rigs publish their telemetry over MQTT. Sovereignty applies here too: no cloud broker, no cloud vector store.
- Tailscale↗current
Zero-config mesh VPN across workstation, laptop, iPad, and the Pi at the bench. The studio's private network.
The pub/sub bus between the rigs and the orchestrator. Local, open, well-understood. A subscriber sketch sits at the end of this section.
The companion's long-term memory. Runs on the workstation; embeds the conversation transcripts so Aura can recall a thread from last week.
Local LLM runtime — listed under Local AI too; cross-referenced here because it is the inference engine the orchestrator depends on.
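A minimal subscriber sketch for the rig telemetry bus named above, using the mqtt package on the orchestrator side. The broker address, the topic layout, and the payload fields are assumptions, not the studio's actual scheme.

```ts
// Hypothetical sketch: orchestrator-side subscriber for rig telemetry.
// Broker host, topic naming, and payload fields are assumptions.
import mqtt from "mqtt";

const client = mqtt.connect("mqtt://broker.local:1883");

client.on("connect", () => {
  // Assumed topic layout: one topic per rig, e.g. rigs/teensy-41-a/telemetry
  client.subscribe("rigs/+/telemetry");
});

client.on("message", (topic, payload) => {
  const rig = topic.split("/")[1];
  const telemetry = JSON.parse(payload.toString());
  // e.g. { rpm: 412, column: 97, batteryV: 7.4 } as placeholder fields
  console.log(rig, telemetry);
});
```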
Shell and persistent character
The studio's working surface beyond the website. DollyOS is the industrial shell the companion lives inside; the Hangar is the monorepo the shell and the site share. This section is for transparency — Aura is the studio's persistent narrator, and pretending otherwise would be dishonest.
- DollyOS shell↗current
ISA-101 industrial HUD the companion operates inside. Cyber-cyan / arcane-gold / deep-midnight palette; intentionally not consumer-friendly.
Monorepo holding the site, the shell, the bezel firmware, and the writeups. Vendored code is provenance-logged in docs/MINED.md.
Aura's body. Authored in VRoid Studio; runs in three.js / @pixiv/three-vrm. The lip-sync target for the TTS step.
For the long-form telling of how the bench accumulated, see The Bench. For the architectural argument that the rigs are bench-built rather than catalogued, Why I Build My Own Rigs. For the aerial half of the same line, The Fleet — Five Airframes.