More tweaks #375
Let's make this a draft too to cut down on CI thrash.
lkdvos left a comment:
Left some comments throughout; there are some things I am not entirely convinced by, but the rest looks great. Thanks for working through all of this!
For the similarstoragetype(tensor, storagetype) calls that you added: this seems like something we should probably discuss in a separate PR, and it would be great if we could consolidate this one to get the remainder of the fixes in.
Would you be up for splitting these two things, and then getting this merged?
The same holds for some of the comments I made: if we can postpone the things that are not obvious and already get the other parts in, that would probably be helpful.
(Note that I am very much aware that none of this is your fault; this PR has lived for too long, so the design has shifted a bit, for which I do apologize!)
It's completely fine!! This has stayed open while I work through adding more tests for MPSKit, so I think we can pare off the simpler stuff we agree on and then discuss the things that are more contentious.
lkdvos left a comment:
It seems like some of the rebasing and the GitHub UI have made it hard to spot the comments I left before, although I think many of them are still unresolved and could be discussed :)
GitHub UI acting up again. Does anyone have any objections to squashing this later today if tests pass?
end
end
# constructors from another TensorMap -- no-op
TensorMap{T, S, N₁, N₂, A}(t::TensorMap{T, S, N₁, N₂, A}) where {T, S <: IndexSpace, N₁, N₂, A <: DenseVector{T}} = t
I would actually be in favor of only having the second implementation, since there is a subtle semantic difference that I would like to retain:
In general (although this is very poorly documented), the difference between a constructor and a converter is that a converter is meant to be as lossless as possible and is allowed to take ownership of all parts of the original data, whereas a constructor is typically meant to make an independent copy. You can see this for regular Arrays in the example below:
Where it becomes a bit more vague is, of course, that you also want a constructor that takes the different fields and builds up an object from them without creating an independent copy (e.g. the inner constructors of a type). This is also where the Julia docs are completely underspecified, so this is something we should try to figure out ourselves.
I would say that in this case, creating an independent copy is probably the safest solution: it is easier to reason in generic code that if you call TensorMap{...}(t::TensorMap), you get something that does not share data with t. But I am of course open to other opinions :)
julia> a = rand(3)
3-element Vector{Float64}:
0.3155466339261911
0.6265705423537413
0.6035244268238111
julia> b = Array(a)
3-element Vector{Float64}:
0.3155466339261911
0.6265705423537413
0.6035244268238111
julia> b[1] = 1
1
julia> a
3-element Vector{Float64}:
0.3155466339261911
0.6265705423537413
0.6035244268238111
julia> c = convert(Array, a)
3-element Vector{Float64}:
0.3155466339261911
0.6265705423537413
0.6035244268238111
julia> c[1] = 1
1
julia> a
3-element Vector{Float64}:
1.0
0.6265705423537413
0.6035244268238111
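
To make the distinction concrete for this PR, here is a minimal sketch of how the two methods could be split; the data/space field names and the (data, space) inner constructor are assumptions for illustration, not confirmed TensorKit API:

# hypothetical sketch, not the PR's actual code
# constructor: always produces an independent copy, like Array(a) above
function TensorMap{T, S, N₁, N₂, A}(t::TensorMap{T, S, N₁, N₂, A}) where {T, S <: IndexSpace, N₁, N₂, A <: DenseVector{T}}
    return TensorMap{T, S, N₁, N₂, A}(copy(t.data), t.space)  # assumes data/space fields
end
# converter: lossless, allowed to share the original data, like convert(Array, a) above
Base.convert(::Type{TensorMap{T, S, N₁, N₂, A}}, t::TensorMap{T, S, N₁, N₂, A}) where {T, S <: IndexSpace, N₁, N₂, A <: DenseVector{T}} = t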
lkdvos left a comment:
I left some comments and tried to minimize some of the diff. I think overall this PR looks pretty good, and the tests especially are starting to contain less and less @allowscalar and more idiomatic code, so this is great work!
How would you feel about punting on the more in-depth design decisions, such as making the TensorOperations functions auto-promote storagetypes, and merging the rest?
This would completely negate the point of this PR, which is to finally get things to actually work for MPSKit, and it would also possibly break quite a few of the tests and force the return of @allowscalar.
function MatrixAlgebraKit.findtruncated_svd(values::CuSectorVector, strategy::S) where {S <: MatrixAlgebraKit.TruncationStrategy}
    # returning a CuSectorVector wrecks things in truncate_{co}domain
    # because of scalar indexing
    return CUDA.CUDACore.Adapt.adapt(Vector{Int}, MatrixAlgebraKit.findtruncated(values, strategy))
end
I don't think we are guaranteed to always return a <:AbstractVector{Int}; sometimes this can also be a <:AbstractVector{Bool}. Do you think we could put a hook into the truncate_(co)domain functions directly and then just specialize there?
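
One hypothetical shape for such a hook, sketched with a made-up helper name (findtruncated_host does not exist, and the real truncate_(co)domain signatures may differ; assumes using Adapt and the CuSectorVector type from this PR):

# generic fallback: whatever findtruncated returns is used as-is
findtruncated_host(values, strategy) = MatrixAlgebraKit.findtruncated(values, strategy)
# GPU specialization: move the result to the host, preserving the eltype
# so both Int indices and Bool masks survive the transfer
function findtruncated_host(values::CuSectorVector, strategy)
    ind = MatrixAlgebraKit.findtruncated(values, strategy)
    return Adapt.adapt(Vector{eltype(ind)}, ind)
end

truncate_(co)domain would then call findtruncated_host instead of findtruncated, keeping the GPU special-casing in one place.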
Hm, could we instead split this into two lines and do
raw_ind = MatrixAlgebraKit.findtruncated(values, strategy)
return adapt(Vector{eltype(raw_ind)}, raw_ind)
For whatever reason, the bare adapt(Vector, raw_ind) wasn't working :(
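
For reference, a minimal host-transfer sketch of that two-line version (assumes CUDA.jl and Adapt.jl are available; a plain CuArray stands in for the CuSectorVector case):

using Adapt, CUDA

ind = CuArray([2, 5, 7])                # truncation indices on the device
host = adapt(Vector{eltype(ind)}, ind)  # concrete host Vector{Int64}
# pinning the eltype sidesteps the bare adapt(Vector, ind) call reported to fail above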
Needed to get more MPSKit examples working