
Automatic Differentiation with coloring #17


Merged: 5 commits into JuliaDiff:master on Jun 19, 2019

Conversation

@pkj-m (Contributor) commented on Jun 19, 2019

Added function compute_jacobian! to compute Jacobian matrices using a coloring vector. Also added tests.
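For context, here is a minimal sketch of the coloring idea (not the compute_jacobian! implementation added in this PR): columns that share a color are seeded together in a single dual evaluation, and individual entries are recovered through the sparsity pattern. The function name and arguments below are illustrative only.

using ForwardDiff: Dual, partials

# Sketch: compress the Jacobian columns of an in-place function f!(dx, x)
# using a coloring vector. Columns with the same color share a seed slot,
# so one dual-valued call to f! evaluates all of them at once.
function colored_jacobian(f!, x, sparsity::AbstractMatrix{Bool}, color::AbstractVector{Int})
    ncolors = maximum(color)
    # entry i carries a 1 in the partials slot belonging to its column's color
    t = [Dual(x[i], ntuple(c -> c == color[i] ? 1.0 : 0.0, ncolors)) for i in eachindex(x)]
    dt = similar(t)
    f!(dt, t)
    J = zeros(length(dt), length(x))
    for j in eachindex(x), i in eachindex(dt)
        # decompress: a structurally nonzero entry (i, j) lives in the
        # partials slot of column j's color
        sparsity[i, j] && (J[i, j] = partials(dt[i])[color[j]])
    end
    return J
end

With a valid coloring, this needs only maximum(color) seed slots per dual instead of one slot per column.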

using ForwardDiff: Dual, jacobian

fcalls = 0
function f(dx,x)
Review comment (Contributor):

Prefer 4 spaces for indentation

end

p = Tuple.(eachrow(partials_array))
t = zeros(Dual{Nothing, Float64, maximum(color)},0)
Review comment (Member):

Suggested change:
- t = zeros(Dual{Nothing, Float64, maximum(color)},0)
+ t = zeros(Dual{typeof(f), Float64, maximum(color)},length(x))

It's usually better to pre-allocate. The first part here being typeof(f) will tag the Duals so they won't accidentally overlap with other differentiations.
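As a small hedged illustration of the two points above (f below is a stand-in function, not the test function from this PR):

using ForwardDiff: Dual

f(dx, x) = (dx .= x .^ 2)   # hypothetical placeholder function

# Tagging the duals with typeof(f) records which differentiation they belong
# to, so they cannot be confused with duals from an unrelated nested call:
d = Dual{typeof(f)}(1.0, (1.0, 0.0))

# Pre-allocating the typed buffer once avoids growing it with push!:
x = rand(4)
t = zeros(Dual{typeof(f), Float64, 2}, length(x))
for i in eachindex(x)
    t[i] = Dual{typeof(f)}(x[i], (1.0, 0.0))
end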

p = Tuple.(eachrow(partials_array))
t = zeros(Dual{Nothing, Float64, maximum(color)},0)
for i in 1:length(x)
push!(t, Dual(x[i], p[i]))
Review comment (Member):

Suggested change:
- push!(t, Dual(x[i], p[i]))
+ t[i] = Dual(x[i], p[i])

end
end

J = Array(J)
Review comment (Member):

This isn't needed.

@ChrisRackauckas (Member) commented:

So there are quite a few things that can be improved performance-wise, but it makes sense to do that in a follow-up PR. Ideally we would just allocate everything up front and let the user pass in a cache that will store everything. Then we just need to use those cache variables to do the computation efficiently without allocating. I'll merge and at least do a partial PR to show what I mean by this.
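A minimal sketch of the cache idea described above, assuming a hypothetical JacobianCache type (the struct and field names are illustrative, not an API defined by this PR or the package):

using ForwardDiff: Dual

# All work buffers are allocated once; repeated Jacobian evaluations can
# then reuse them instead of allocating inside the hot loop.
struct JacobianCache{TD, TX}
    t::Vector{TD}      # seeded dual copy of x
    dt::Vector{TD}     # dual output buffer for f!
    J::Matrix{TX}      # storage for the decompressed Jacobian
end

function JacobianCache(f, x::AbstractVector{TX}, color) where {TX}
    N = maximum(color)
    TD = Dual{typeof(f), TX, N}
    JacobianCache{TD, TX}(zeros(TD, length(x)), zeros(TD, length(x)),
                          zeros(TX, length(x), length(x)))
end

A compute_jacobian!(cache, f!, x, color) method could then fill cache.J in place without per-call allocation.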

Additionally, we should be using chunk sizes. I find https://github.com/FluxML/Tracker.jl/blob/master/src/forward.jl a good reference on how to implement that. That will be easier to implement when we have a cache.
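As a rough, hedged sketch of what chunked seeding could look like here (the helper name seed_chunk is hypothetical, and this is not the Tracker.jl approach verbatim):

using ForwardDiff: Dual, partials

# Process the colors `chunksize` at a time so the dual width stays fixed
# instead of growing with maximum(color).
function seed_chunk(x, color, first_color, chunksize)
    [Dual(x[i],
          ntuple(c -> first_color + c - 1 == color[i] ? 1.0 : 0.0, chunksize))
     for i in eachindex(x)]
end

# One pass per chunk: after f!(dt, seed_chunk(x, color, c0, k)), partials
# slot j of dt holds the compressed column for color c0 + j - 1.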

So I'm going to merge and we can keep handling this. I'll fix the tab spacing because that was my doing.

@ChrisRackauckas merged commit 67848c3 into JuliaDiff:master on Jun 19, 2019
@pkj-m mentioned this pull request on Jun 19, 2019