I opened an issue in dotnet/runtime (37216) regarding what I thought at first was a poor register-allocation algorithm for hardware intrinsics. It evolved into the question of why BenchmarkDotNet (BDN) does not optimize the code, which is why I am now opening an issue here.
In a nutshell, I have implemented a double-double type and I want to optimize the "naive" port I did from a C library with some SIMD instructions. So I compare the naive code against the SIMD code. Looking at the disassembly, I found out the code is not optimized at all. Ever. Except when I force COMPlus_TieredCompilation=0; that is the only time the code gets optimized. See the linked issue in dotnet/runtime for details.
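For reference, this is a sketch of how the behavior can be reproduced outside BDN. With tiered compilation on (the default), methods are first JIT-compiled at Tier 0 with minimal optimization and only promoted to an optimized Tier 1 version after repeated calls; setting the environment variable disables tiering so the JIT produces fully optimized code on the first compilation:

```shell
# Sketch: run the Release build with tiered compilation disabled,
# so every method is JIT-compiled fully optimized from the start.
COMPlus_TieredCompilation=0 dotnet run -c Release
```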
My questions are then:
- What are the conditions for the code to be optimized?
- How does BDN force the runtime to optimize the code?
- Is there any way with BDN (e.g., a Diagnoser) to get stats on how and when the runtime optimizes the code?