More tweaks #375
base: main
Changes from all commits
9e1ebfe
ea787ae
c3d33e2
eb6b985
c4653c8
b46e23e
5932066
46e5037
9fdc185
e948370
28dca1e
259309e
f23ce5c
3504de3
830d955
1fe40a7
55d25b5
a08bd23
921ffc6
ed07969
285ac16
```diff
@@ -21,7 +21,6 @@ struct TensorMap{T, S <: IndexSpace, N₁, N₂, A <: DenseVector{T}} <: Abstrac
         end
         return TensorMap{T, S, N₁, N₂, A}(data, space)
     end

     # constructors from data
     function TensorMap{T, S, N₁, N₂, A}(
             data::A, space::TensorMapSpace{S, N₁, N₂}
@@ -34,6 +33,9 @@ struct TensorMap{T, S <: IndexSpace, N₁, N₂, A <: DenseVector{T}} <: Abstrac
         return new{T, S, N₁, N₂, A}(data, space)
     end
 end
+# constructors from another TensorMap -- no-op
+TensorMap{T, S, N₁, N₂, A}(t::TensorMap{T, S, N₁, N₂, A}) where {T, S <: IndexSpace, N₁, N₂, A <: DenseVector{T}} = t
```
kshyatt marked this conversation as resolved.

Member

I would actually be in favor of only having the second implementation, since there is a subtle semantic difference that I would like to retain. In general (although this is very poorly documented), the difference between a constructor and a converter is that a converter is meant to be as lossless as possible, and is allowed to take ownership of all parts of the original data. On the other hand, a constructor is typically meant to make an independent copy. You can see this for regular `Array`s.

Where it becomes a bit more vague is of course that you want to have a constructor that takes the different fields and builds up an object from that, without having to create an independent copy (e.g. the inner constructors of a type). This is also where the Julia docs are completely underspecified, so this is more something we should try and figure out ourselves.

I would say that in this case though, creating an independent copy is probably the safest solution, and it is probably easier to reason about in generic code if calling the constructor always gives you an independent copy:

```julia
julia> a = rand(3)
3-element Vector{Float64}:
 0.3155466339261911
 0.6265705423537413
 0.6035244268238111

julia> b = Array(a)
3-element Vector{Float64}:
 0.3155466339261911
 0.6265705423537413
 0.6035244268238111

julia> b[1] = 1
1

julia> a
3-element Vector{Float64}:
 0.3155466339261911
 0.6265705423537413
 0.6035244268238111

julia> c = convert(Array, a)
3-element Vector{Float64}:
 0.3155466339261911
 0.6265705423537413
 0.6035244268238111

julia> c[1] = 1
1

julia> a
3-element Vector{Float64}:
 1.0
 0.6265705423537413
 0.6035244268238111
```
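The same constructor-versus-converter distinction could be carried over to tensor types. A minimal sketch with a hypothetical `Wrapper` type (illustrative only, not TensorKit code): the constructor from another `Wrapper` makes an independent copy, while `convert` is allowed to alias the original data.

```julia
# Hypothetical toy type illustrating constructor-vs-converter semantics.
struct Wrapper{A <: AbstractVector}
    data::A
end

# Constructor from another Wrapper: make an independent copy.
Wrapper(w::Wrapper) = Wrapper(copy(w.data))

# Converter: lossless, may take ownership of (alias) the original data.
Base.convert(::Type{Wrapper{A}}, w::Wrapper{A}) where {A} = w

a = Wrapper([1.0, 2.0, 3.0])
b = Wrapper(a)                             # independent copy
c = convert(Wrapper{Vector{Float64}}, a)   # aliases a

b.data[1] = 0.0
@assert a.data[1] == 1.0   # mutating the copy leaves a untouched

c.data[1] = 0.0
@assert a.data[1] == 0.0   # the converted object shares storage with a
```

Mutating `b` leaves `a` intact, while mutating `c` is visible through `a`, mirroring the `Array` versus `convert(Array, ...)` behavior shown above.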
Member (Author)

I guess this and the adapt thing are the main outstanding points, right?
```diff
+TensorMap{T, S, N₁, N₂, A}(t::TensorMap{T, S, N₁, N₂}) where {T, S <: IndexSpace, N₁, N₂, A <: DenseVector{T}} = TensorMap(A(t.data), space(t))

 """
     Tensor{T, S, N, A<:DenseVector{T}} = TensorMap{T, S, N, 0, A}
```
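This second method converts the storage by calling the target storage type `A` on the underlying data. For plain `Base` vectors, calling an `Array` type on an existing array always allocates a fresh copy with the target element type, which is the behavior this constructor relies on (minimal illustration with `Base` arrays only, outside TensorKit):

```julia
# Calling a target storage type on existing data allocates new storage
# with the requested element type.
data = Float32[1, 2, 3]
A = Vector{Float64}

newdata = A(data)   # fresh Vector{Float64}, elements converted

@assert newdata isa Vector{Float64}
@assert newdata == [1.0, 2.0, 3.0]
@assert !(newdata === data)   # independent of the original storage
```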
|
|
```diff
@@ -407,11 +409,6 @@ for randf in (:rand, :randn, :randexp, :randisometry)
     end
 end

-# Moving arbitrary TensorMaps to CPU
-#-----------------------------
-to_cpu(t::TensorMapWithStorage{T, Vector{T}}) where {T} = t # no op
-to_cpu(t::TensorMap) = convert(TensorMapWithStorage{scalartype(t), similarstoragetype(scalartype(t))}, t)
-
 # Efficient copy constructors
 #-----------------------------
 Base.copy(t::TensorMap) = typeof(t)(copy(t.data), t.space)
```
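The removed `to_cpu` methods use dispatch on the storage type so that tensors already backed by a plain `Vector` are a no-op. The same pattern can be sketched with a hypothetical stand-in type (`MyTensor` and `my_to_cpu` are illustrative names, not TensorKit API):

```julia
# Hypothetical stand-in for a tensor parameterized by its storage type.
struct MyTensor{T, A <: AbstractVector{T}}
    data::A
end

# CPU-backed storage: nothing to do, return the input unchanged.
my_to_cpu(t::MyTensor{T, Vector{T}}) where {T} = t

# Any other storage: materialize a plain Vector on the CPU.
my_to_cpu(t::MyTensor{T}) where {T} = MyTensor(Vector{T}(t.data))

t = MyTensor(view([1.0, 2.0, 3.0], 1:2))   # non-Vector storage (a SubArray)
t2 = my_to_cpu(t)
@assert t2.data isa Vector{Float64}

t3 = MyTensor([1.0, 2.0])
@assert my_to_cpu(t3) === t3               # no-op on Vector storage
```

Because the `Vector{T}` method is strictly more specific, Julia's dispatch picks the no-op automatically without any runtime storage check.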
Member

I don't think we are guaranteed to always return a `<:AbstractVector{Int}`; sometimes this can also be a `<:AbstractVector{Bool}`. Do you think we could put a hook into the `truncate_(co)domain` functions directly and then just specialize there?
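The point about index types can be seen with plain `Base` arrays: integer index vectors and logical masks select the same elements but have different types, so code that assumes `Vector{Int}` would miss the mask case (generic Julia, not TensorKit code):

```julia
v = [10, 20, 30, 40]

inds = findall(x -> x > 15, v)   # integer indices: a Vector{Int}
mask = v .> 15                   # logical mask: a BitVector <: AbstractVector{Bool}

@assert inds isa Vector{Int}
@assert mask isa AbstractVector{Bool}
@assert v[inds] == v[mask] == [20, 30, 40]   # both select the same elements
```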
Member (Author)

Hm, could we instead split this into two lines and do … For whatever reason, `adapt(Vector, )` wasn't working :(