Currently, sending data from the VideoDecoder to Canvas involves:
- VideoDecoder.decode
- VideoFrame.createImageBitmap
- createImageBitmap again to apply imageOrientation: flipY (it seems that Chrome currently doesn't support the ImageBitmapOptions parameter on VideoFrame)
- ImageBitmapRenderingContext.transferFromImageBitmap
Under the hood, I suspect this converts the VideoFrame's YUV data into an RGB ImageBitmap, rather than rendering the YUV planes of the video frame directly in the WebGL canvas. (Please let me know if I'm wrong.)
There's also some lifecycle management on the frames and bitmaps to close and destroy them, roughly as in the sketch below.
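For concreteness, here is a minimal TypeScript sketch of that path. It assumes a canvas element already in the page and a decoder configured elsewhere, and it uses the global createImageBitmap for both conversions; the variable names are placeholders, not anything from the spec:

```ts
// Rough sketch of the current VideoDecoder -> canvas path described above.
// Assumes a <canvas> in the page; decoder.configure(...) / decoder.decode(chunk)
// happen elsewhere.
const canvas = document.querySelector('canvas')!;
const bitmapCtx = canvas.getContext('bitmaprenderer') as ImageBitmapRenderingContext;

const decoder = new VideoDecoder({
  output: async (frame: VideoFrame) => {
    // VideoFrame -> ImageBitmap (presumably where the YUV -> RGB conversion happens).
    const bitmap = await createImageBitmap(frame);
    frame.close(); // release the decoder-owned frame as early as possible

    // A second createImageBitmap purely to flip vertically, since passing
    // imageOrientation when converting the VideoFrame doesn't seem to work.
    const flipped = await createImageBitmap(bitmap, { imageOrientation: 'flipY' });
    bitmap.close();

    // Hand the result to the canvas; this detaches `flipped`.
    bitmapCtx.transferFromImageBitmap(flipped);
  },
  error: (e) => console.error(e),
});
```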
Ideally, the video frames (or perhaps the decoder) should have a method to render directly to a canvas. This is similar to how Android's decoder operates and, for lack of a better description, "feels" closer to achieving zero copy.
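To make the ask concrete, a purely hypothetical strawman might look like the following. `renderTo` does not exist in any spec; the name is made up here only to illustrate the shape of the request:

```ts
// Strawman only: `renderTo` is a made-up method, not an existing API.
// The idea is that the UA could sample the frame's YUV planes directly in the
// canvas/compositor backend, skipping the ImageBitmap hop and the explicit
// RGB copy in script.
interface DirectRenderableVideoFrame extends VideoFrame {
  renderTo(target: WebGL2RenderingContext | CanvasRenderingContext2D): void;
}

const gl = document.querySelector('canvas')!.getContext('webgl2')!;

const directDecoder = new VideoDecoder({
  output: (frame: VideoFrame) => {
    // Hypothetical direct render: no intermediate ImageBitmap, no flip dance.
    (frame as DirectRenderableVideoFrame).renderTo(gl);
    frame.close();
  },
  error: (e) => console.error(e),
});
```

Whether such a method lives on the frame or on the decoder itself (which would be closer in spirit to Android's MediaCodec rendering decoded output straight to a Surface) matters less than avoiding the intermediate ImageBitmap round trip.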