[Impeller] Support 'SyncMode' in 'CommandBuffer::SubmitCommands' #37772
Changes from all commits: 1a1cc2a, 1b611f9, ac84daa, 7f82399, 347007b, 96f69a2, b170635
```diff
@@ -29,8 +29,22 @@ bool CommandBufferGLES::IsValid() const {
 }

 // |CommandBuffer|
-bool CommandBufferGLES::OnSubmitCommands(CompletionCallback callback) {
+bool CommandBufferGLES::OnSubmitCommands(SyncMode sync_mode,
+                                         CompletionCallback callback) {
   const auto result = reactor_->React();
+  if (result) {
+    const auto& gl = reactor_->GetProcTable();
+    switch (sync_mode) {
+      case CommandBuffer::SyncMode::kWaitUntilScheduled:
+        gl.Flush();
+        break;
+      case CommandBuffer::SyncMode::kWaitUntilCompleted:
+        gl.Finish();
```
Member

So looking at #37709, a couple of ideas come to mind:

The above approach can be lightly tweaked later on if/when we support parallel command encoding (for example, by making the

Sorry about the runaround here! This aspect of our API is a bit unresolved, as we haven't put together or documented a comprehensive strategy for handling device resource synchronization yet. FYI @chinmaygarde, I'm thinking now might be the time to clean up my old notes about this and do a write-up detailing how we can achieve maximal parallel encoding without exposing any explicit synchronization primitives above the render layer. It feels like the right time to address this, since it's been popping up in the Vulkan discussions too.
Member
Author

@bdero It looks like you don't want to change the API, and expect to solve the problem differently on each backend. Let me lay out the two scenarios we've run into during development:

**Scenario 1: Render the display list to a texture on the raster thread, and access the texture on the IO thread.**

OpenGLES: On the OpenGLES backend, when the IO thread accesses the texture, we cannot know whether the raster thread's command that renders content into the texture has executed yet. In this PR, I want to do a

Metal: Metal does not seem to hit this problem in this scenario, because the render-to-texture and access-texture commands are all in the same CommandQueue, and the order in which they enter the queue in this scenario is deterministic (if I am wrong, please correct me).

Vulkan: TODO

**Scenario 2: Blit operation that copies a texture to a buffer; we need to ensure the CPU buffer has contents once the blit API call returns.**

OpenGLES: As you said, the OpenGLES implementation uses

Metal: We need to ensure that the CPU buffer already has data after the blit operation completes. This PR proposes using SyncMode to ensure that; if the API is not changed, we can also implement the corresponding behavior through

Vulkan: TODO

**Summary**

In Scenario 1, it seems there is no way to guarantee correctness on the OpenGLES backend without SyncMode, because OpenGLES 2.0 does not support

In Scenario 2, introducing SyncMode makes things simpler; otherwise

Do you have any thoughts on the above two questions? Thanks!
Member

The render pass and blit pass should both be encoded on the raster thread, in serial. Would there be any issues with having

Given the behavior of

The

Interestingly, the GLES 1.1 docs happen to be more explicit about it, but the description itself is really no different -- it would have been nice if this helpful note had survived to later specs. :)
While blocking the raster thread will be unavoidable for GL, using a completion callback for device->host transfers would allow us to avoid blocking the raster thread unnecessarily on Metal, Vulkan, and most other backends we're likely to implement over time (for Vulkan, we could invoke the callback from a worker thread after waiting on a fence, for example).
Member
Author

Ahh, I see. You mean "if the render pass and blit pass are both encoded on the raster thread in serial, then there is no synchronization issue anymore". Yes, that's right.

That sounds good, I'll give it a try :)
Member
Ah, I wasn't aware we were making a second context on the IO thread. Do you foresee any downside to just using one context and performing all of the GLES activity on the raster thread given the lack of appropriate synchronization primitives in GLES 2? We could always add a multi-context solution later on that gets used in the presence of GLES 3 procs, but I suspect the overhead savings would be somewhat negligible.
Indeed!
Member
Author

Since the raster thread's context is used for rendering and the IO thread's context is used for texture uploading, I'm worried that moving all GL activity onto the raster thread would cause jank.
Member

Ah, right. Sorry, I meant more with respect to the device->host transfer. Performing new resource host->device transfers on the IO thread seems perfectly reasonable and safe.
```diff
+        break;
+      case CommandBuffer::SyncMode::kDontCare:
+        break;
+    }
+  }
   if (callback) {
     callback(result ? CommandBuffer::Status::kCompleted
                     : CommandBuffer::Status::kError);
```