add oop jac forwarddiff_color_jacobian #65

Merged: 8 commits merged into JuliaDiff:master from the oop branch on Oct 6, 2019

Conversation

huanglangwen (Contributor):

forwarddiff_color_jacobian(oop_f, x, sparsity = nothing) == ForwardDiff.jacobian(oop_f, x)
jacout::Array{T,2} = forwarddiff_color_jacobian(oop_f, x, colorvec = ... , sparsity = ... )
jacout::T = forwarddiff_color_jacobian(oop_f, x, colorvec = ... , sparsity = ... , jac_prototype :: T = jac)
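
A stdlib-only sketch of the decompression idea behind these signatures. Everything here is a hypothetical stand-in: the tridiagonal `f`, the 3-coloring `colorvec`, and the finite-difference step `h` replace the ForwardDiff dual-number partials that the real routine uses.

```julia
# Sketch: with a coloring of the columns, one seeded evaluation per color
# recovers every Jacobian column of a sparse (here tridiagonal) function.
f(x) = [ -2x[1] + x[2];
         [x[i-1] - 2x[i] + x[i+1] for i in 2:length(x)-1];
         x[end-1] - 2x[end] ]
n = 6
x = collect(1.0:n)
colorvec = [mod1(i, 3) for i in 1:n]   # columns i and i+3 share a color
h = 1e-6
J = zeros(n, n)
for c in 1:3
    seed = [colorvec[j] == c ? 1.0 : 0.0 for j in 1:n]
    df = (f(x .+ h .* seed) .- f(x)) ./ h   # directional finite difference
    for j in 1:n
        colorvec[j] == c || continue
        for i in max(1, j-1):min(n, j+1)    # rows in column j's sparsity
            J[i, j] = df[i]
        end
    end
end
```

With this coloring only 3 seeded evaluations are needed instead of 6, which is the point of passing `colorvec`/`sparsity` to `forwarddiff_color_jacobian`.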

huanglangwen (Contributor Author):

Is it good to merge? @ChrisRackauckas


function g(x) # out-of-place
global fcalls += 1
dx = zero(x)
Member:

We should make this test use static arrays instead, so that it fails if any mutation is used.
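
A minimal sketch of why that catches mutation, with a plain Tuple standing in for a StaticArrays.SArray (both are immutable):

```julia
# Sketch: an immutable container turns any in-place write into a runtime error,
# so a mutating implementation fails loudly instead of passing silently.
x = (1.0, 2.0, 3.0)
caught = try
    x[1] = 0.0           # lowers to setindex!(x, 0.0, 1)
    false
catch err
    err isa MethodError  # Tuples define no setindex!, so this branch runs
end
```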

for j in 1:chunksize
col_index = (i-1)*chunksize + j
(col_index > maxcolor) && return J
J = J + partials.(vec(fx), j) .* [(i==col_index ? 1 : 0) for i in 1:ncols]'
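
What that line does, as a stdlib-only sketch: `v` stands in for `partials.(vec(fx), j)`, and the comprehension builds the one-hot row vector being discussed.

```julia
using LinearAlgebra   # for the postfix adjoint ' on vectors

# Sketch: broadcasting a column against a one-hot row is an outer product that
# writes the column into position col_index without mutating J.
ncols, col_index = 4, 2
v = [1.0, 2.0, 3.0]                                    # one column of partials
J = zeros(length(v), ncols)
onehot = [(i == col_index ? 1 : 0) for i in 1:ncols]'  # 1×ncols; the allocation
J = J + v .* onehot                                    # column col_index now equals v
```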
Member:

There should be a way to remove the array allocation here.

huanglangwen (Contributor Author):

I'm thinking of the delta tensor, but that would introduce yet another dependency.

Member:

The delta tensor? Is that a FillArrays.jl object?

huanglangwen (Contributor Author):

I thought there would be some lazy Kronecker delta tensors, but it looks like there's no such thing. FillArrays.jl doesn't work: a slice of `Eye` generates a sparse vector, and its product with a normal array would be a SparseMatrixCSC, which is worse.

huanglangwen (Contributor Author):

I can't find a general way to remove this array allocation. If we restrict to StaticArrays, maybe I can add some overloads for `+`.

Member:

It would be fine to have a special branch for StaticArrays. How does ForwardDiff do it? It has a special routine for Jacobians that doesn't allocate.

ChrisRackauckas (Member):

To clarify the purpose here, the OOP version is really for two cases:

  1. Static arrays. This is for very small systems and shouldn't allocate at all. This should be the fastest possible version of the function, so hopefully the differentiation code won't slow it down.
  2. Some forms of automatic differentiation do best with OOP code, and when they do, they will be allocating anyway, so we "don't need to care" about speed in this case.

So we really just need to make sure that SArray works and that case is fast.

@ChrisRackauckas ChrisRackauckas merged commit 06d1417 into JuliaDiff:master Oct 6, 2019
@huanglangwen huanglangwen deleted the oop branch October 6, 2019 17:43