|
1 | | -Timing: |
| 1 | +# JNI Invocation Overhead |
2 | 2 |
|
3 | | -The original Java.Interop effort wanted a type-safe and simple binding. As such, it used SafeHandles.
| 3 | +The original Java.Interop effort wanted a *type-safe* and *simple* |
| 4 | +binding around JNI. As such, it used `SafeHandle`s. |
4 | 5 |
|
5 | 6 | As the Xamarin.Forms team has turned their attention to profiling |
6 | 7 | Xamarin.Forms apps, and finding major Xamarin.Android-related |
7 | | -performance issues, performance needs to be considered. |
| 8 | +performance issues, performance needed to be considered. |
8 | 9 |
|
9 | 10 | For example, GC object allocation is a MAJOR concern for them; |
10 | 11 | ideally, you could have ZERO GC ALLOCATIONS performed when |
11 | 12 | invoking a Java method. |
12 | 13 |
|
13 | | -SafeHandles don't fit "nicely" in that world; every method that returns a SafeHandle ALLOCATES A NEW GC OBJECT. |
| 14 | +`SafeHandle`s don't fit "nicely" in that world; every method that |
| 15 | +returns a `SafeHandle` ALLOCATES A NEW GC OBJECT. |
14 | 16 |
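Here is a minimal sketch of the `SafeHandle` pattern to make that allocation
cost concrete. The type and member names are illustrative, not the actual
Java.Interop API; the point is that the raw pointer coming back from JNI has
to be wrapped in a brand-new, finalizable `SafeHandle` instance before it can
be returned to managed code.

    using System;
    using System.Runtime.InteropServices;

    // Hypothetical SafeHandle wrapper around a JNI local reference.
    class JniLocalReference : SafeHandle
    {
        public JniLocalReference (IntPtr rawHandle)
            : base (IntPtr.Zero, ownsHandle: true)
        {
            SetHandle (rawHandle);
        }

        public override bool IsInvalid => handle == IntPtr.Zero;

        protected override bool ReleaseHandle ()
        {
            // A real implementation would call JNIEnv::DeleteLocalRef() here.
            return true;
        }
    }

    static class SafeHandleBinding
    {
        // Every invocation that returns an object reference must allocate a
        // new SafeHandle instance: one GC object per call, on the hot path.
        public static JniLocalReference CallObjectMethod (IntPtr rawJniResult)
        {
            return new JniLocalReference (rawJniResult);    // heap allocation
        }
    }

Multiply that one small, finalizable allocation by every JNI call an app makes,
and the GC pressure adds up.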
|
15 | | -So...how bad is it? |
| 17 | +So...how bad is that? |
16 | 18 |
|
17 | | -What's in this directory is a VERY TRIMMED DOWN Java.Interop layer. |
18 | | -Really, it's NOT Java.Interop; it's the core generated JniEnvironment.g.cs (as `jni.cs`) |
19 | | -with code for both SafeHandles and IntPtr-oriented invocation strategies. |
 | 19 | +What's in this directory is insanity: there are four different "strategies"
 | 20 | +for dealing with JNI (sketched in code after this list):
20 | 21 |
|
21 | | -The test? Invoke java.util.Arrays.binarySearch(int[], int) for 10,000,000 times. |
| 22 | + 1. `SafeHandle` All The Things! (`SafeTiming`) |
22 | 23 |
|
23 | | -Result: |
| 24 | + 2. Xamarin.Android JNI handling from 2011 until Xamarin.Android 6.1 (2016) |
| 25 | + (`XAIntPtrTiming`) |
| 26 | + |
| 27 | + This uses `IntPtr`s *everywhere*, e.g. `JNIEnv::CallObjectMethod()` returns |
| 28 | + an `IntPtr`. |
| 29 | + |
| 30 | + 3. "Happier Medium?" (`JIIntPtrTiming`) |
| 31 | + |
| 32 | + `IntPtr`s everywhere means it's trivial to forget that |
| 33 | + a JNI handle is a GREF vs. an LREF vs… What if we used the same `JNIEnv` |
| 34 | + invocation logic as `XAIntPtrTiming`, but instead of `IntPtr`s everywhere |
 | 35 | +    we had a `JniObjectReference` structure?
| 36 | + |
| 37 | + 4. "Optimize (3)" (`JIPinvokeTiming`) |
| 38 | + |
| 39 | + (3) was slower than (2). What if we rethought the `JNIEnv` |
 | 40 | +    invocation logic and replaced all the `Marshal.GetDelegateForFunctionPointer()`
 | 41 | +    invocations with normal P/Invokes?
| 42 | + |
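Here is a rough sketch of how strategies (2)-(4) differ at the call site;
strategy (1) is the `SafeHandle` sketch shown earlier, and all type, method,
and library names below are illustrative. JNI functions are not exported from
a shared library -- they live in the `JNIEnv*` function table -- so strategies
(2) and (3) fetch a function pointer out of that table and wrap it with
`Marshal.GetDelegateForFunctionPointer()`, while strategy (4) P/Invokes into a
small native shim (a hypothetical `jni-shim` library here) that forwards to
the table. In practice the delegate is typically created once and cached; the
per-call cost that strategy (4) removes is the dispatch through the marshaled
delegate.

    using System;
    using System.Runtime.InteropServices;

    // Strategies (3) and (4): a value type instead of a bare IntPtr, so the
    // handle "kind" travels with the pointer and no GC object is allocated.
    enum JniObjectReferenceType { Invalid, Local, Global, WeakGlobal }

    struct JniObjectReference
    {
        public IntPtr                 Handle;
        public JniObjectReferenceType Type;

        public JniObjectReference (IntPtr handle, JniObjectReferenceType type)
        {
            Handle = handle;
            Type   = type;
        }
    }

    // Strategies (2) and (3): the JNIEnv slot for CallStaticIntMethodA is read
    // out of the env's function table and wrapped in a delegate.
    [UnmanagedFunctionPointer (CallingConvention.Winapi)]
    delegate int CallStaticIntMethodADelegate (IntPtr env, IntPtr klass, IntPtr method, IntPtr args);

    static class DelegateStyle
    {
        public static int CallStaticIntMethod (IntPtr env, IntPtr klass, IntPtr method, IntPtr args, IntPtr fnptr)
        {
            // Even with the delegate cached, every call dispatches through a
            // marshaled delegate rather than a direct P/Invoke.
            var call = (CallStaticIntMethodADelegate)
                Marshal.GetDelegateForFunctionPointer (fnptr, typeof (CallStaticIntMethodADelegate));
            return call (env, klass, method, args);
        }
    }

    static class PinvokeStyle
    {
        // Strategy (4): a direct P/Invoke into a hypothetical native shim that
        // forwards to JNIEnv::CallStaticIntMethodA.
        [DllImport ("jni-shim")]
        static extern int jni_shim_call_static_int_method (IntPtr env, IntPtr klass, IntPtr method, IntPtr args);

        public static int CallStaticIntMethod (IntPtr env, JniObjectReference klass, IntPtr method, IntPtr args)
        {
            return jni_shim_call_static_int_method (env, klass.Handle, method, args);
        }
    }
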
| 43 | +To compare these four strategies, `jnienv-gen.exe` was updated so that *all* |
| 44 | +of them could be emitted into the same `.cs` file, into separate namespaces. |
 | 45 | +These "core" JNI bindings could then be used to invoke
| 46 | +`java.util.Arrays.binarySearch(int[], int)`, 10,000,000 times, and compare |
| 47 | +the results. |
| 48 | + |
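The benchmark loop itself is simple. A simplified sketch of its shape follows;
the `callBinarySearch` delegate stands in for one
`java.util.Arrays.binarySearch(int[], int)` invocation made through whichever
strategy is being measured, and the real harness in this directory differs in
the details:

    using System;
    using System.Diagnostics;

    static class InvocationTiming
    {
        const int Count = 10000000;

        public static void Run (string name, Func<int> callBinarySearch)
        {
            var sw = Stopwatch.StartNew ();
            for (int i = 0; i < Count; ++i)
                callBinarySearch ();
            sw.Stop ();
            Console.WriteLine ($"# {name} timing: {sw.Elapsed}");
            Console.WriteLine ($"#   Average Invocation: {sw.Elapsed.TotalMilliseconds / Count}ms");
        }
    }
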
| 49 | +Result in 2015 (commit [25de1f38][25de]): |
| 50 | + |
| 51 | +[25de]: https://github.com/xamarin/Java.Interop/commit/25de1f38bb6b3ef2d4c98d2d95923a4bd50d2ea0 |
24 | 52 |
|
25 | 53 | # SafeHandle timing: 00:00:02.7913432 |
26 | 54 | # Average Invocation: 0.00027913432ms |
27 | | - # JniObjectReference timing: 00:00:01.9809859 |
| 55 | + # JIIntPtrTiming timing: 00:00:01.9809859 |
28 | 56 | # Average Invocation: 0.00019809859ms |
29 | 57 |
|
30 | | -Basically, with a `JniObjectReference` struct-oriented approach, SafeHandles take ~1.4x as long to run. |
31 | | -Rephrased: the JniObjectReference struct takes 70% of the time of SafeHandles. |
| 58 | +Basically, with a `JniObjectReference` struct-oriented approach, SafeHandles |
 | 59 | +take ~1.4x as long to run. Rephrased: the `JniObjectReference` struct takes
| 60 | +70% of the time of SafeHandles. |
32 | 61 |
|
33 | 62 | Ouch. |
34 | 63 |
|
35 | 64 | What about the current Xamarin.Android "all IntPtrs all the time!" approach? |
36 | 65 |
|
37 | 66 | # SafeHandle timing: 00:00:02.8118485 |
38 | 67 | # Average Invocation: 0.00028118485ms |
39 | | - # JniObjectReference timing: 00:00:02.0061727 |
| 68 | + # XAIntPtrTiming timing: 00:00:02.0061727 |
40 | 69 | # Average Invocation: 0.00020061727ms |
41 | 70 |
|
42 | | -The performance difference is comparable -- SafeHandles take ~1.4x as long to run, or |
43 | | -IntPtrs take ~70% as long as using SafeHandles. |
| 71 | +The performance difference is comparable -- SafeHandles take ~1.4x as long to |
| 72 | +run, or IntPtrs take ~70% as long as using SafeHandles. |
44 | 73 |
|
45 | | -Interesting -- but probably not *that* interesting -- is that in an absolute sense, the `JniObjectReference` |
46 | | -struct was *faster* than the `IntPtr` approach, even though `JniObjectReference` contains *both* an `IntPtr` |
47 | | -*and* an enum -- and is thus bigger! |
 | 74 | +What's interesting -- but probably not *that* interesting -- is that in an absolute
| 75 | +sense, the `JniObjectReference` struct was *faster* than the `IntPtr` approach, |
| 76 | +even though `JniObjectReference` contains *both* an `IntPtr` *and* an enum -- |
| 77 | +and is thus bigger! |
48 | 78 |
|
49 | 79 | That doesn't make any sense. |
50 | 80 |
|
51 | | -Regardless, `JniObjectReference` doesn't appear to be *slower*, and thus should be a viable option here. |
| 81 | +Regardless, `JniObjectReference` doesn't appear to be *slower*, and thus should |
| 82 | +be a viable option here. |
52 | 83 |
|
53 | 84 | --- |
54 | 85 |
|
@@ -90,3 +121,35 @@ when passed as an argument to native code they'll be automagically pinned and ke |
90 | 121 | The current (above) timing comparison uses `IntPtr` for arguments. |
91 | 122 |
|
92 | 123 | We should standardize on `JniObjectReference` (again). |
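
Arguments have a similar trade-off to return values. Below is a sketch of the
three argument shapes, reusing the `JniLocalReference` and `JniObjectReference`
types sketched earlier; the `do_something` shim entry point is hypothetical:

    using System;
    using System.Runtime.InteropServices;

    static class ArgumentShapes
    {
        // SafeHandle argument: the interop marshaler ref-counts the handle
        // for the duration of the call, so it cannot be released or finalized
        // while native code is still using it.
        [DllImport ("jni-shim")]
        static extern void do_something (JniLocalReference instance);

        // IntPtr argument: nothing keeps the owning object alive, and it is
        // easy to pass the wrong kind of handle (local vs. global) by mistake.
        [DllImport ("jni-shim", EntryPoint = "do_something")]
        static extern void do_something_raw (IntPtr instance);

        // JniObjectReference argument: still no GC object, but the handle kind
        // travels with the pointer, so the binding can validate it first.
        public static void DoSomething (JniObjectReference instance)
        {
            do_something_raw (instance.Handle);
        }
    }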
| 124 | + |
| 125 | +## 2021 Timing Update |
| 126 | + |
| 127 | +How do these timings compare in 2021 on Desktop Mono (macOS)? |
| 128 | + |
| 129 | + # SafeTiming timing: 00:00:09.3850449 |
| 130 | + # Average Invocation: 0.00093850449ms |
| 131 | + # XAIntPtrTiming timing: 00:00:04.4930288 |
| 132 | + # Average Invocation: 0.00044930288ms |
| 133 | + # JIIntPtrTiming timing: 00:00:04.5563368 |
| 134 | + # Average Invocation: 0.00045563368ms |
| 135 | + # JIPinvokeTiming timing: 00:00:03.4710383 |
| 136 | + # Average Invocation: 0.00034710383ms |
| 137 | + |
| 138 | +In an absolute sense, things are worse: 10e6 invocations in 2015 took 2-3sec. |
| 139 | +Now, they're taking at least 3.5sec. |
| 140 | + |
 | 141 | +In a relative sense, `SafeHandle`s got *worse*: they take 2.09x as long as
 | 142 | +`XAIntPtrTiming`, and 2.7x as long as `JIPinvokeTiming`!
| 143 | + |
| 144 | +What about .NET Core 3.1? After some finagling, *that* can work too! |
| 145 | + |
| 146 | + # SafeTiming timing: 00:00:05.1734443 |
| 147 | + # Average Invocation: 0.00051734443ms |
| 148 | + # XAIntPtrTiming timing: 00:00:03.1048897 |
| 149 | + # Average Invocation: 0.00031048897ms |
| 150 | + # JIIntPtrTiming timing: 00:00:03.4353958 |
| 151 | + # Average Invocation: 0.00034353958ms |
| 152 | + # JIPinvokeTiming timing: 00:00:02.7470934 |
| 153 | + # Average Invocation: 0.00027470934000000004ms |
| 154 | + |
| 155 | +Relative performance is a similar story: `SafeHandle`s are slowest. |