
Browser Compatibility

WebNN APIs

May 20, 2025: DirectML was officially deprecated during Microsoft Build 2025. WebNN will leverage Windows ML to access OpenVINO and other execution providers (EPs) for hardware acceleration.

| Platform / Build Conditions | CPU (`device: "cpu"`) | GPU (`device: "gpu"`) | NPU (`device: "npu"`) |
|---|---|---|---|
| ChromeOS (`webnn_use_tflite` default true) | TFLite (LiteRT) with XNNPACK delegate (`tflite/graph_impl_tflite.cc` `SetUpXNNPackDelegate`) | TFLite delegate: Chrome ML GPU if `WEBNN_USE_CHROME_ML_API` (controlled by `features.gni`), otherwise OpenCL delegate when `BUILD_TFLITE_WITH_OPENCL`; without either, runs on XNNPACK/CPU (`tflite/graph_impl_tflite.cc`) | No dedicated delegate; request falls back to CPU/XNNPACK (`tflite/graph_impl_tflite.cc`) |
| Linux (`webnn_use_tflite` default true) | Same TFLite + XNNPACK path | No native GPU backend today; execution remains on CPU via XNNPACK (`webnn_context_provider_impl.cc` falls through to TFLite) | Not supported; falls back to CPU |
| macOS ≥ 14.4 on Apple Silicon with feature `kWebNNCoreML` enabled (default) | Core ML backend (`webnn_context_provider_impl.cc`; `coreml/context_impl_coreml.mm`) selecting `MLComputeUnitsCPUOnly` (`coreml/graph_impl_coreml.mm`) | Core ML using `MLComputeUnitsCPUAndGPU` or `MLComputeUnitsAll` (gated by `kWebNNCoreMLExplicitGPUOrNPU`) | Core ML using `MLComputeUnitsCPUAndNeuralEngine` or `MLComputeUnitsAll` (`coreml/graph_impl_coreml.mm`) |
| macOS Intel, macOS < 14.4, or Core ML feature disabled | Falls through to TFLite + XNNPACK (`webnn_context_provider_impl.cc`) | TFLite delegates as available (no Core ML) | TFLite fallback only |
| Windows 11 24H2+ with feature `kWebNNOnnxRuntime` enabled | ONNX Runtime (Windows ML) (`ort/context_provider_ort.cc`; `webnn_context_provider_impl.cc`) selecting the CPU EP (`ort/environment.cc`) | ONNX Runtime selecting the GPU EP with CPU fallback (`ort/environment.cc`) | ONNX Runtime selecting the NPU EP with CPU fallback (`ort/environment.cc`) |
| Windows (default build: ONNX Runtime feature off) | TFLite + XNNPACK fallback (`webnn_context_provider_impl.cc`) | DirectML backend when the `kWebNNDirectML` feature is on and the GPU feature is enabled (`dml/context_provider_dml.cc`); otherwise TFLite | DirectML NPU path when hardware is available (`dml/context_provider_dml.cc`); otherwise TFLite |
| Android | TFLite + XNNPACK (`tflite/graph_impl_tflite.cc`) | TFLite GPU delegate via OpenCL when `BUILD_TFLITE_WITH_OPENCL` (or Chrome ML if bundled); otherwise CPU fallback | TFLite NNAPI delegate when `BUILD_TFLITE_WITH_NNAPI` (typical Android build); otherwise CPU fallback |
| iOS (current shipping defaults) | Core ML feature disabled by default (`public/mojom/features.mojom`), so TFLite + XNNPACK | Same as CPU (no Core ML delegate by default) | Same as CPU |
- Backend selection order is defined in `webnn_context_provider_impl.cc`: Windows tries ONNX Runtime first, then DirectML, then the TFLite fallback; Apple builds try Core ML before TFLite; all other platforms go straight to TFLite.
- `features.gni` enables TFLite (`webnn_use_tflite`) across Linux, ChromeOS, Android, Windows, and Apple; `webnn_use_chrome_ml_api` gates access to Chrome ML GPU delegates.
- TFLite delegates are optional: if a requested delegate (GPU/NPU) is missing or fails, execution transparently falls back to the XNNPACK CPU path (`graph_impl_tflite.cc`).
- ONNX Runtime support currently requires Windows 11 24H2+ and the `kWebNNOnnxRuntime` flag, and uses execution-provider selection logic in `environment.cc` to bind the appropriate hardware (GPU/NPU) with CPU fallbacks.
- Core ML respects the requested device by adjusting `MLModelConfiguration.computeUnits`; without `kWebNNCoreMLExplicitGPUOrNPU`, GPU/NPU requests default to `MLComputeUnitsAll` (`graph_impl_coreml.mm`).
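The selection order above can be sketched as a small pure function. This is an illustrative simplification, not Chromium code: the function name, flag names, and return values are hypothetical stand-ins for the features and build conditions in the table.

```javascript
// Hypothetical sketch of the backend-selection order described above.
// Flag names are illustrative, not actual Chromium identifiers.
function pickBackend(platform, flags = {}) {
  if (platform === "windows") {
    // Windows: ONNX Runtime first, then DirectML, then the TFLite fallback.
    if (flags.webnnOnnxRuntime && flags.windows11_24H2) return "onnxruntime";
    if (flags.webnnDirectML && flags.gpuEnabled) return "directml";
    return "tflite"; // XNNPACK CPU fallback
  }
  if (platform === "macos" || platform === "ios") {
    // Apple builds try Core ML before TFLite.
    if (flags.webnnCoreML) return "coreml";
    return "tflite";
  }
  // ChromeOS, Linux, and Android go straight to TFLite.
  return "tflite";
}

console.log(pickBackend("windows", { webnnDirectML: true, gpuEnabled: true })); // "directml"
console.log(pickBackend("linux")); // "tflite"
```

Note that the actual device (CPU/GPU/NPU) is then resolved inside the chosen backend, e.g. via TFLite delegates or ONNX Runtime EPs, with CPU fallback when the requested delegate is unavailable.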

Note

- The WebNN API is mainly supported in Chromium-based browsers on ChromeOS, Linux, macOS, Windows, and Android.
- Chromium-based browsers include, but are not limited to, Google Chrome, Microsoft Edge, Opera, Vivaldi, Brave, and Samsung Internet.
| Interface | Method | Chromium Version |
|---|---|---|
| `navigator.ml` | | M112 |
| `ML` | | M112 |
| `ML` | `createContext()` | M112 |
| `MLContext` | | M112 |
| `MLContext` | `dispatch()` | M128 |
| `MLContext` | `createTensor()` | M129 |
| `MLContext` | `readTensor(tensor)` | M129 |
| `MLContext` | `readTensor(tensor, outputData)` | M129 |
| `MLContext` | `writeTensor()` | M129 |
| `MLContext` | `opSupportLimits()` | M128 |
| `MLGraph` | | M112 |
| `MLOperand` | | M112 |
| `MLOperand` | `MLNumber` | M132 |
| `MLTensor` | | M124 |
| `MLTensor` | `destroy()` | M124 |
| `MLGraphBuilder` | | M112 |
| `MLGraphBuilder` | `MLGraphBuilder()` constructor | M112 |
| `MLGraphBuilder` | `input(name, descriptor)` | M112 |
| `MLGraphBuilder` | `constant(descriptor, buffer)` | M112 |
| `MLGraphBuilder` | `build(outputs)` | M112 |
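As a rough illustration of how the interfaces above fit together, the sketch below builds and dispatches a graph that adds two 2×2 tensors. It follows the WebNN spec shapes as I understand them (descriptor fields, `builder.add`, and the `readable`/`writable` tensor flags are taken from the spec rather than this table), and it guards for environments without `navigator.ml`; treat it as a sketch, not a verified sample.

```javascript
// Minimal WebNN sketch: build and run a graph computing c = a + b.
// Degrades gracefully (returns null) where navigator.ml is unavailable.
async function runAddGraph() {
  if (typeof navigator === "undefined" || !("ml" in navigator)) {
    console.log("WebNN is not available in this environment.");
    return null;
  }
  const context = await navigator.ml.createContext({ deviceType: "gpu" }); // M112+
  const builder = new MLGraphBuilder(context);                             // M112+
  const desc = { dataType: "float32", shape: [2, 2] };
  const a = builder.input("a", desc);
  const b = builder.input("b", desc);
  const graph = await builder.build({ c: builder.add(a, b) });             // M112+
  // MLTensor + dispatch()/readTensor()/writeTensor() (M124/M128/M129)
  // move data in and out of the graph.
  const aTensor = await context.createTensor({ ...desc, writable: true });
  const bTensor = await context.createTensor({ ...desc, writable: true });
  const cTensor = await context.createTensor({ ...desc, readable: true });
  context.writeTensor(aTensor, new Float32Array([1, 2, 3, 4]));
  context.writeTensor(bTensor, new Float32Array([10, 20, 30, 40]));
  context.dispatch(graph, { a: aTensor, b: bTensor }, { c: cTensor });
  return new Float32Array(await context.readTensor(cTensor));
}
```

Which backend actually executes the graph is decided per the platform table above; the script itself only requests a `deviceType`.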