Specialized inv(F::SVD) #32126
Conversation
Don't know why some of the runs failed, but I guess this is ready to be reviewed/merged.

Push :)

Once again, a kind bump.
I guess the main question is if this should return another `SVD`.
Note that this discussion is already happening in JuliaLang/LinearAlgebra.jl#635. It seems that people are agreeing (me included) that we should return the dense inverse. In any case, it shouldn't hurt to merge this as a performance improvement (see @StefanKarpinski's comment on this).
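For context on the return-type question, here is an illustrative sketch (not code from this PR) of the two options: materializing a dense inverse versus keeping the inverse in factored form. It only relies on the identity `inv(A) = Vt' * Diagonal(1 ./ S) * U'` for `A = U * Diagonal(S) * Vt`.

```julia
using LinearAlgebra

A = randn(4, 4)
F = svd(A)

# Option the thread converges on: materialize the dense inverse as a Matrix.
Ainv = F.Vt' * (inv.(F.S) .* F.U')

# The "return another factorization" idea: the factors of inv(A) are just
# F.Vt', the reciprocal singular values, and F.U' -- an SVD of inv(A) up to
# re-sorting the singular values into descending order.
Ainv ≈ inv(A)                      # true, up to floating-point error
reverse(1 ./ F.S) ≈ svdvals(Ainv)  # singular values of the inverse
```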
Actually, I believe this changes the behavior from a generalized inverse to an inverse, since the old version is based on the truncating `ldiv!` method quoted below.
I'm sorry, I'm not sure I understand. Are you raising this as a point of objection?
Wouldn't inverting the singular values and then multiplying be more accurate than multiplying and then taking the matrix inverse?
@StefanKarpinski This is exactly what this code does. There is no matrix inverse here. I think there are two differences to the old code. The first is, as @andreasnoack noticed, that this does not truncate at `eps(real(T))*A.S[1]`. The second is that instead of the generic fallback

```julia
inv(F::Factorization{T}) where {T} = (n = size(F, 1); ldiv!(F, Matrix{T}(I, n, n)))

# SVD least squares
function ldiv!(A::SVD{T}, B::StridedVecOrMat) where T
    k = searchsortedlast(A.S, eps(real(T))*A.S[1], rev=true)
    view(A.Vt,1:k,:)' * (view(A.S,1:k) .\ (view(A.U,:,1:k)' * B))
end
```

you use a direct multiplication of the SVD factors (see the sketch below).
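The new code itself is not quoted in this extract; a minimal sketch of the non-truncating specialized inverse being described (hypothetical function name, not the exact PR diff) could look like this:

```julia
using LinearAlgebra

# Hypothetical sketch: invert every singular value and multiply the factors
# back together -- no matrix inverse and no cutoff at eps(real(T)) * S[1].
function svd_inv_notrunc(F::SVD)
    return F.Vt' * (inv.(F.S) .* F.U')
end

# Sanity check: F = svd(randn(100, 100)); svd_inv_notrunc(F) ≈ inv(F)
```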
I'm not sure which one I prefer, but I think we should be aware of when we are just improving performance and when we change behavior. The safer solution here would probably be to continue to truncate and file an issue to discuss changing the behavior. I suspect the specialized method would still be faster since it avoids the very inefficient transposition by multiplication.
Thanks for the clarification. However, I don't see why the truncation would be something we'd want to preserve in any case. Isn't not truncating always better than truncating? Otherwise put, isn't the inverse always preferable to the generalized inverse? Maybe I'm missing something? Also, since this is an implementation detail, this should be technically non-breaking as well, right?
bump, as 1.3 feature freeze is coming up (can this be in 1.3 if merged after feature freeze?). I feel this can be merged as is (see my comments/questions above). Otherwise, I can easily add the artificial truncation, as mentioned by @andreasnoack, if requested.
No. It depends on the objective. In many applications, it's better to use the pseudoinverse. This PR changes actual behavior, so packages might break because of this. You could argue for truncating and not truncating, but …
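To illustrate the point about the pseudoinverse (an illustrative example, not code from the PR): for a rank-deficient matrix, truncating the small singular values reproduces `pinv`, while inverting all of them produces `Inf`s.

```julia
using LinearAlgebra

A = [1.0 0.0; 0.0 0.0]   # rank-deficient matrix
F = svd(A)

# Truncated (pseudo) inverse: keep only singular values above the tolerance,
# mirroring the eps(real(T)) * S[1] cutoff in the old ldiv! code.
k = searchsortedlast(F.S, eps(Float64) * F.S[1], rev=true)
Apinv = view(F.Vt, 1:k, :)' * (view(F.S, 1:k) .\ view(F.U, :, 1:k)')
Apinv ≈ pinv(A)          # true

# Inverting all singular values instead hits the zero one and gives Inf.
inv.(F.S)                # [1.0, Inf]
```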
True, but this doesn't make it a breaking change, does it? I mean, I can rely on all sorts of implementation details which, if changed, would break my code. The question is whether it is part of the API. And I don't think that the truncation is part of the API of `inv`.

Anyways, I get your point and, frankly, I don't really care. I'll add the artificial truncation so that we can merge this.
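A sketch of what adding the artificial truncation to the specialized method might look like (hypothetical name; the cutoff mirrors the old `ldiv!` code quoted above):

```julia
using LinearAlgebra

# Hypothetical sketch: specialized SVD inverse that keeps the old truncation
# behavior by dropping singular values below eps(real(T)) * S[1].
function svd_inv_trunc(F::SVD{T}) where T
    k = searchsortedlast(F.S, eps(real(T)) * F.S[1], rev=true)
    return view(F.Vt, 1:k, :)' * (view(F.S, 1:k) .\ view(F.U, :, 1:k)')
end
```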
Done. New timings (on a different machine):

```julia
julia> @btime inv(F);
  79.599 μs (13 allocations: 313.02 KiB)

julia> @btime inv_old_new(F); # no truncation
  40.300 μs (4 allocations: 156.41 KiB)

julia> @btime inv_new(F); # with truncation
  41.099 μs (8 allocations: 156.59 KiB)
```
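For context, a rough sketch of how such timings could be reproduced (the matrix size used in the original benchmark is not shown in this thread, so the 200×200 size below is only an assumption):

```julia
using LinearAlgebra, BenchmarkTools

A = randn(200, 200)
F = svd(A)

@btime inv($A);   # dense inverse of the matrix, for reference
@btime inv($F);   # inverse computed from the SVD factorization
```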
@staticfloat I'm assuming that the "tester" failures are spurious. I can't make sense of the error messages.

Yes, I agree.
Performance improvement (see JuliaLang/LinearAlgebra.jl#633).
Closes JuliaLang/LinearAlgebra.jl#633.