Add an efficient iterator for ScaledInterpolation objects #79
Conversation
This is really nice! :) Although I realize that we're pushing the limits here, I don't think we're doing anything pathological. Especially since Julia by design encourages the type of composition that comes from using interpolation, scaling, and extrapolation together, I really feel that if this is a problem for type inference, then type inference needs to improve.
Correct, I'm just pointing out a potential challenge. But I'm also no longer sure that this is the correct diagnosis---I tried this on an …
Actually, I was right the first time: bumping …
OK to merge this? |
Yeah, sure - it just adds functionality without breaking anything, right? I definitely like the …
Add an efficient iterator for ScaledInterpolation objects
I revisited the scaling code to implement Hessian evaluation today, and realized that the use case for this iterator is really different from what … My motivation for bringing this up at all is that, except for in this iterator, … When scaling with the iterator, on the other hand, the bounds of the domain are still … Perhaps something like …
Yes, there's nothing very … However, none of this applies for a general product iterator. It might indeed be worth splitting out. I haven't even thought about the API, however.
One frequent use for interpolation is to represent a quantity on a "coarse" grid and then construct an array (or perform a computation) in which the object is sampled on a finer grid. In this case, grid points are visited in a predictable sequence, and it turns out we can exploit that predictability to substantially improve the efficiency of interpolation. In the case where the fine grid is much finer than the coarse grid, the cost of the computation goes from
`O(3^N)` (in the case of `Quadratic` interpolation) to `O(3)`. Especially for larger `N`, this is a very substantial savings.

The key idea is the following: suppose our coarse grid is 3xMxN, and we're interpolating up to a 201xMxN grid. That means the middle 100 points all correspond, via
`coordlookup` on the coarse grid, to the interval `(1.5, 2.5)`, i.e., we have 100 points in a row that have constant `ix_1, ix_2, ix_3` values. Since the interpolation expression looks like the sketch below, where `pm`, `p`, and `pp` are "partial" results on the trailing dimensions (dimensions 2 and 3), we can compute these partials once and then reuse them for all 100 interpolations.
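The expression that originally appeared here didn't survive extraction; the following is a hedged sketch of its general shape for quadratic interpolation in 3d. The coefficient names `cm_d`, `c_d`, `cp_d` (weights for indices `ix_d-1`, `ix_d`, `ix_d+1`) and the helper `partial23` are illustrative assumptions; only `pm`, `p`, and `pp` come from the text.

```julia
# Illustrative sketch only -- not the package's actual generated code.
# partial23 evaluates the quadratic interpolant over dimensions 2 and 3
# at a fixed first index i1:
partial23(A, i1, ix_2, ix_3, cm_2, c_2, cp_2, cm_3, c_3, cp_3) =
    cm_2 * (cm_3*A[i1, ix_2-1, ix_3-1] + c_3*A[i1, ix_2-1, ix_3] + cp_3*A[i1, ix_2-1, ix_3+1]) +
    c_2  * (cm_3*A[i1, ix_2,   ix_3-1] + c_3*A[i1, ix_2,   ix_3] + cp_3*A[i1, ix_2,   ix_3+1]) +
    cp_2 * (cm_3*A[i1, ix_2+1, ix_3-1] + c_3*A[i1, ix_2+1, ix_3] + cp_3*A[i1, ix_2+1, ix_3+1])

# While ix_1, ix_2, ix_3 stay constant, the partials are computed once:
#   pm = partial23(A, ix_1-1, ...); p = partial23(A, ix_1, ...); pp = partial23(A, ix_1+1, ...)
# and each of the ~100 evaluations along dimension 1 then needs only
#   ret = cm_1*pm + c_1*p + cp_1*pp
# i.e., three multiply-adds instead of 3^3 = 27.
```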
In practice, using the test added here, we get an approximate 4-fold speed improvement in 3d (which is huge, although 9-fold would have been even better 😉).
One note: in the help text and test, `foo!` is written with a manually incremented linear index, roughly as in the sketch that follows, but this assumes fast linear indexing of `dest`.
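That snippet isn't preserved on this page. A minimal sketch of what the description implies, assuming the iterator added by this PR is exposed as `eachvalue` (the name isn't given in the surviving text):

```julia
using Interpolations  # assumed context; `sitp` is a ScaledInterpolation

function foo!(dest, sitp)
    i = 0
    for s in eachvalue(sitp)  # visit the fine-grid values in storage order
        dest[i += 1] = s      # manual counter: linear indexing into dest
    end
    dest
end
```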
Ideally it should be written with `eachindex` (second sketch below), but that turns out to perform poorly due to a type-inference failure. I suspect we're running up against the built-in limits of recursion depth in type inference, i.e., we're getting to the point of having such deep layers of `A{B{C{D...}}}` that `inference.jl` is deciding to punt. We should either look into bumping these limits another notch, or we'll have to start being very careful about adding yet more layers of wrapper types. In particular, while I haven't tested it yet, I fear that this may behave badly for `Extrapolation` types simply because they add one extra layer.
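Again as a hedged sketch (same `eachvalue` assumption), the "ideal" version described above would pair the iterator with `eachindex`, which picks the most efficient index type for `dest` rather than assuming fast linear indexing:

```julia
function foo!(dest, sitp)
    for (idest, s) in zip(eachindex(dest), eachvalue(sitp))
        dest[idest] = s  # idest may be linear or cartesian, whichever dest prefers
    end
    dest
end
```

The `zip` wrapper is presumably one of those extra type layers: it nests yet another parametric type around the already deeply nested scaled-interpolation iterator, which fits the recursion-depth concern above.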