Conversation

@torfjelde
Member

No description provided.

@torfjelde torfjelde mentioned this pull request Apr 8, 2021
@codecov

codecov bot commented Apr 8, 2021

Codecov Report

Merging #1578 (3047dc4) into master (b9db77c) will increase coverage by 4.71%.
The diff coverage is n/a.

Impacted file tree graph

@@            Coverage Diff             @@
##           master    #1578      +/-   ##
==========================================
+ Coverage   73.37%   78.08%   +4.71%     
==========================================
  Files          24       23       -1     
  Lines        1292     1424     +132     
==========================================
+ Hits          948     1112     +164     
+ Misses        344      312      -32     
Impacted Files                           | Coverage Δ
src/variational/VariationalInference.jl | 53.84% <0.00%> (-3.30%) ⬇️
src/Turing.jl                            | 100.00% <0.00%> (ø)
src/core/Core.jl                         | 100.00% <0.00%> (ø)
src/contrib/inference/dynamichmc.jl      | 100.00% <0.00%> (ø)
src/utilities/Utilities.jl               |
src/core/ad.jl                           | 84.81% <0.00%> (+0.08%) ⬆️
src/contrib/inference/sghmc.jl           | 98.41% <0.00%> (+0.19%) ⬆️
src/inference/gibbs.jl                   | 97.33% <0.00%> (+0.23%) ⬆️
src/inference/AdvancedSMC.jl             | 96.52% <0.00%> (+0.56%) ⬆️
src/inference/ess.jl                     | 93.75% <0.00%> (+0.89%) ⬆️
... and 13 more

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update bffc8be...3047dc4. Read the comment docs.

@torfjelde
Member Author

@devmotion Tests are running about 10 minutes faster here compared to #1577, so we're looking at a ~15% slowdown?

@devmotion
Member

Is this relative increase consistent with the timings in your experiments?

@torfjelde
Member Author

It was difficult to see a significant slowdown in those experiments. We did observe a ~15% slowdown for model2 (the Bernoulli coin-flip model), so maybe?
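For context, a minimal sketch of what a Bernoulli coin-flip model looks like in Turing; this is only illustrative, and the actual model2 in TuringExamples may be defined differently:

using Turing

# Illustrative coin-flip model: Beta prior on the success probability,
# Bernoulli likelihood for each observed flip.
@model function coinflip(y)
    p ~ Beta(1, 1)
    for i in eachindex(y)
        y[i] ~ Bernoulli(p)
    end
end

# Example usage: 100 simulated flips, sampled with NUTS.
chain = sample(coinflip(rand(Bool, 100)), NUTS(), 1000)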

@devmotion
Member

I am a bit surprised, though, since my impression was that it was mainly compile-time regressions. So I assumed it would have a smaller impact on the tests, since we use the same model repeatedly.

@torfjelde
Member Author

I guess we didn't check AD-related stuff :/ E.g. maybe Zygote is significantly slower?
We also need to recompile the model for different types, so maybe that's why we're seeing such an increase in testing time too?
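A rough, untested sketch of the kind of check I have in mind, assuming the Turing.setadbackend API available around this release (the gdemo model here is just a stand-in, not one of the TuringExamples benchmarks; Zygote has to be loaded separately for the :zygote backend):

using Turing, Zygote

# Small model used only for timing.
@model function gdemo(x)
    s ~ InverseGamma(2, 3)
    m ~ Normal(0, sqrt(s))
    for i in eachindex(x)
        x[i] ~ Normal(m, sqrt(s))
    end
end

model = gdemo(randn(100))

for backend in (:forwarddiff, :zygote)
    Turing.setadbackend(backend)
    # The first call includes compilation; run twice to separate compile time from run time.
    @time sample(model, NUTS(), 500)
    @time sample(model, NUTS(), 500)
end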

Anyway, it's probably a good idea to run the benchmarks in https://github.com/TuringLang/TuringExamples for the different versions and see if we can find something clear-cut.
I'll get on it, but I'm not certain I'll have results available today.

In the meantime, can we try to get the outstanding PRs merged? E.g. bump AbstractMCMC versions across the board and merge #1567?

@devmotion
Member

> In the meantime, can we try to get the outstanding PRs merged? E.g. bump AbstractMCMC versions across the board and merge #1567?

Sure, let's finish these other PRs.

@torfjelde torfjelde closed this Jun 7, 2021
@yebai yebai deleted the tor/compile-time-check branch December 16, 2021 20:34