AI WebGPU Lab Experiment

ORT WebGPU Readiness

`exp-ort-webgpu-baseline` now exposes a provider-readiness harness for ORT-Web-style inference. It records provider metadata, fallback state, worker mode, and deterministic transformer-block timing before the real ONNX Runtime Web integration lands.

Use `?mode=fallback` to keep the same input profile while switching the provider metadata and latency profile to a Wasm fallback path.
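A minimal sketch of how that query-string switch could select a provider profile. All names here (`ProviderProfile`, `resolveProviderProfile`, the field names) are hypothetical illustrations, not the harness's actual internals:

```typescript
// Hypothetical provider metadata recorded by the harness.
// Field names are illustrative, not the real contract.
interface ProviderProfile {
  provider: "webgpu" | "wasm";
  fallbackActive: boolean;
  workerMode: "dedicated" | "main-thread";
}

// Resolve the active profile from the page's query string.
function resolveProviderProfile(search: string): ProviderProfile {
  const mode = new URLSearchParams(search).get("mode");
  if (mode === "fallback") {
    // Same input profile; only the metadata and latency profile change.
    return { provider: "wasm", fallbackActive: true, workerMode: "dedicated" };
  }
  return { provider: "webgpu", fallbackActive: false, workerMode: "dedicated" };
}
```

The input profile itself stays constant in both modes; only the recorded provider metadata differs.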

Input Profile

Run Controls

Run the ORT-style inference profile to capture init, first output, throughput, and total latency for the active provider mode.
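The four captured numbers could be derived from raw timestamps along these lines. This is a sketch under assumed names (`RunMetrics`, `computeMetrics`), not the harness's actual implementation:

```typescript
// Illustrative run metrics; names assumed, not the harness's schema.
interface RunMetrics {
  initMs: number;        // provider/session init time
  firstOutputMs: number; // time to first output
  totalMs: number;       // end-to-end latency
  tokensPerSec: number;  // throughput over the whole run
}

// Derive metrics from raw timestamps (ms since run start) and the token budget.
function computeMetrics(
  start: number,
  initDone: number,
  firstOutput: number,
  end: number,
  tokens: number
): RunMetrics {
  const totalMs = end - start;
  return {
    initMs: initDone - start,
    firstOutputMs: firstOutput - start,
    totalMs,
    tokensPerSec: tokens / (totalMs / 1000),
  };
}
```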

Execution Trace

No inference run yet.

Metrics

Environment

Activity Log

    Schema-Aligned Result Draft

    {
      "status": "pending"
    }

    Provider Notes

    • Default mode records a WebGPU provider profile.
    • `?mode=fallback` records a Wasm fallback profile with the same batch and sequence budget.
    • This readiness harness fixes the result contract before real ORT-Web assets and build steps are added.
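The pending draft above could evolve along these lines once a run completes. The shape is assumed for illustration (only the `"pending"` status comes from the draft itself), not the finalized result contract:

```typescript
// Assumed result shape: starts as the "pending" draft shown above
// and is filled in after a run completes. Not the finalized contract.
type RunStatus = "pending" | "complete";

interface ResultDraft {
  status: RunStatus;
  provider?: "webgpu" | "wasm";
  totalMs?: number;
}

const pendingDraft: ResultDraft = { status: "pending" };

// Hypothetical completion step once provider mode and timing are captured.
function completeDraft(
  draft: ResultDraft,
  provider: "webgpu" | "wasm",
  totalMs: number
): ResultDraft {
  return { ...draft, status: "complete", provider, totalMs };
}
```

Fixing this shape first lets the real ORT-Web integration slot in later without changing what downstream consumers read.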