
More tweaks #375 (Open)

kshyatt wants to merge 20 commits into main from ksh/cuda_tweaks

Conversation

@kshyatt
Member

@kshyatt kshyatt commented Feb 18, 2026

Needed to get more MPSKit examples working

Comment thread ext/TensorKitCUDAExt/auxiliary.jl Outdated
Comment thread ext/TensorKitCUDAExt/cutensormap.jl Outdated
Comment thread ext/TensorKitCUDAExt/cutensormap.jl Outdated
Comment thread ext/TensorKitCUDAExt/cutensormap.jl Outdated
Comment thread ext/TensorKitCUDAExt/cutensormap.jl Outdated
Comment thread ext/TensorKitCUDAExt/cutensormap.jl Outdated
Comment thread src/tensors/braidingtensor.jl Outdated
Comment thread src/tensors/treetransformers.jl Outdated
@codecov

codecov Bot commented Feb 26, 2026

Codecov Report

❌ Patch coverage is 66.66667% with 6 lines in your changes missing coverage. Please review.

Files with missing lines            | Patch % | Lines
ext/TensorKitCUDAExt/cutensormap.jl | 0.00%   | 3 Missing ⚠️
src/tensors/tensor.jl               | 0.00%   | 2 Missing ⚠️
src/tensors/abstracttensor.jl       | 85.71%  | 1 Missing ⚠️

Files with missing lines            | Coverage Δ
ext/TensorKitCUDAExt/truncation.jl  | 96.77% <100.00%> (ø)
src/tensors/adjoint.jl              | 89.65% <ø> (-0.35%) ⬇️
src/tensors/braidingtensor.jl       | 88.51% <100.00%> (+0.07%) ⬆️
src/tensors/diagonal.jl             | 90.23% <100.00%> (ø)
src/tensors/indexmanipulations.jl   | 72.50% <100.00%> (-1.13%) ⬇️
src/tensors/tensoroperations.jl     | 96.27% <100.00%> (-1.40%) ⬇️
src/tensors/abstracttensor.jl       | 56.34% <85.71%> (+1.24%) ⬆️
src/tensors/tensor.jl               | 83.23% <0.00%> (-0.58%) ⬇️
ext/TensorKitCUDAExt/cutensormap.jl | 70.83% <0.00%> (-3.84%) ⬇️

... and 1 file with indirect coverage changes


@kshyatt kshyatt marked this pull request as draft February 27, 2026 11:14
@kshyatt
Member Author

kshyatt commented Feb 27, 2026

Let's make this a draft too to cut down on CI thrash

@kshyatt kshyatt force-pushed the ksh/cuda_tweaks branch 2 times, most recently from f5857b3 to 32e182d on March 12, 2026 12:36
@kshyatt kshyatt force-pushed the ksh/cuda_tweaks branch 2 times, most recently from f5faaf6 to 2359d28 on March 23, 2026 14:24
@lkdvos lkdvos mentioned this pull request Mar 26, 2026
Member

@lkdvos lkdvos left a comment


Left some comments throughout. There are some things I am not entirely convinced by, but the rest looks great; thanks for working through all of this!

For the similarstoragetype(tensor, storagetype) calls that you added: this seems like something we should probably discuss in a separate PR, and it would be great if we could consolidate this one to get the remainder of the fixes in.
Would you be up for splitting these two things, and then getting this merged?

The same holds for some of the other comments I made: if we can postpone the things that are not obvious but already get the other parts in, that would probably be helpful.

(Note that I am very much aware that none of this is your fault and this PR has lived for too long so the design shifts a bit, for which I do apologize!)
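
For readers following the thread: the idea behind a similarstoragetype(tensor, storagetype) call is to compute, from a tensor's current data array type, the array type to allocate for a result. A minimal sketch of that idea, with a hypothetical helper name (this is not TensorKit's actual definition):

using CUDA

# Hypothetical illustration: map a storage type to the "same kind" of
# storage with a different element type, staying on the same device.
storage_with_eltype(::Type{<:Vector}, ::Type{T}) where {T} = Vector{T}
storage_with_eltype(::Type{<:CuVector}, ::Type{T}) where {T} = CuVector{T}

storage_with_eltype(CuVector{Float64}, ComplexF64)  # CuVector{ComplexF64}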

Comment thread ext/TensorKitCUDAExt/cutensormap.jl Outdated
Comment thread src/tensors/abstracttensor.jl Outdated
Comment thread ext/TensorKitCUDAExt/cutensormap.jl Outdated
Comment thread ext/TensorKitCUDAExt/cutensormap.jl Outdated
Comment thread src/tensors/abstracttensor.jl
Comment thread src/tensors/indexmanipulations.jl Outdated
Comment thread src/tensors/indexmanipulations.jl Outdated
Comment thread src/tensors/indexmanipulations.jl Outdated
Comment thread src/tensors/tensoroperations.jl Outdated
@kshyatt
Member Author

kshyatt commented Mar 31, 2026

It's completely fine!! This has stayed open as I work through adding more tests for MPSKit, so I think we can pare off the simpler stuff we agree on, and then discuss things that are more contentious.

@github-actions
Contributor

github-actions Bot commented Mar 31, 2026

Your PR no longer requires formatting changes. Thank you for your contribution!

Comment thread src/tensors/braidingtensor.jl
Comment thread src/tensors/indexmanipulations.jl Outdated
Comment thread src/tensors/indexmanipulations.jl Outdated
Comment thread src/tensors/indexmanipulations.jl Outdated
Comment thread src/tensors/abstracttensor.jl Outdated
Comment thread src/tensors/adjoint.jl Outdated
Comment thread src/tensors/adjoint.jl Outdated
Comment thread src/tensors/indexmanipulations.jl
Member

@lkdvos lkdvos left a comment


It seems like some of the rebasing and the GitHub UI have made it hard to spot the comments I left before, although I think many of them are still unresolved and could be discussed :)

Comment thread ext/TensorKitCUDAExt/cutensormap.jl Outdated
@kshyatt
Member Author

kshyatt commented May 13, 2026

GitHub UI acting up again. Does anyone have any objections to squashing this later today if tests pass?

@kshyatt kshyatt marked this pull request as ready for review May 13, 2026 08:20
Comment thread src/tensors/braidingtensor.jl Outdated
@kshyatt kshyatt force-pushed the ksh/cuda_tweaks branch from d83060c to 28dca1e on May 14, 2026 11:54
Comment thread src/tensors/tensor.jl
end
end
# constructors from another TensorMap -- no-op
TensorMap{T, S, N₁, N₂, A}(t::TensorMap{T, S, N₁, N₂, A}) where {T, S <: IndexSpace, N₁, N₂, A <: DenseVector{T}} = t
Member


Suggested change (delete this line):
TensorMap{T, S, N₁, N₂, A}(t::TensorMap{T, S, N₁, N₂, A}) where {T, S <: IndexSpace, N₁, N₂, A <: DenseVector{T}} = t

I would actually be in favor of only having the second implementation, since there is a subtle semantic difference that I would like to retain:

In general (although this is very poorly documented), the difference between a constructor and a converter is that a converter is meant to be as lossless as possible and is allowed to take ownership of all parts of the original data. A constructor, on the other hand, is typically meant to make an independent copy. You can see this for regular Arrays in the example below.

Where it becomes a bit more vague is, of course, that you sometimes want a constructor that takes the different fields and builds up an object from them without creating an independent copy (e.g. the inner constructors of a type). This is also where the Julia docs are completely underspecified, so it is something we will have to figure out ourselves.

I would say that in this case, creating an independent copy is probably the safest solution: it is easier to reason about generic code if calling TensorMap{...}(t::TensorMap) is guaranteed to return something that does not share data with t. But I am of course open to other opinions :)

julia> a = rand(3)
3-element Vector{Float64}:
 0.3155466339261911
 0.6265705423537413
 0.6035244268238111

julia> b = Array(a)
3-element Vector{Float64}:
 0.3155466339261911
 0.6265705423537413
 0.6035244268238111

julia> b[1] = 1
1

julia> a
3-element Vector{Float64}:
 0.3155466339261911
 0.6265705423537413
 0.6035244268238111

julia> c = convert(Array, a)
3-element Vector{Float64}:
 0.3155466339261911
 0.6265705423537413
 0.6035244268238111

julia> c[1] = 1
1

julia> a
3-element Vector{Float64}:
 1.0
 0.6265705423537413
 0.6035244268238111
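
Concretely, that convention would translate into something like the following for the line under discussion (a sketch of the pattern, not the PR's actual code):

# Hypothetical sketch: the constructor makes an independent copy, while
# convert is the no-op that may share the existing data.
TensorMap{T, S, N₁, N₂, A}(t::TensorMap{T, S, N₁, N₂, A}) where {T, S <: IndexSpace, N₁, N₂, A <: DenseVector{T}} = copy(t)
Base.convert(::Type{TensorMap{T, S, N₁, N₂, A}}, t::TensorMap{T, S, N₁, N₂, A}) where {T, S <: IndexSpace, N₁, N₂, A <: DenseVector{T}} = t

That way generic code can rely on TensorMap{...}(t) returning independent data, while convert remains free to alias.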

Member

@lkdvos lkdvos left a comment


I left some comments and tried to minimize some of the diff. I think overall this PR looks pretty good, and especially the tests are starting to contain less and less @allowscalar and more idiomatic code, so this is great work!

How would you feel about punting on the more in-depth design decisions, such as making the TensorOperations functions auto-promote storagetypes, and merging the rest?
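
For reference, the storage-type auto-promotion being discussed would look roughly like this (a hypothetical sketch, not code from this PR):

using CUDA

# Hypothetical: choose a common storage type for the output of a mixed
# host/device operation, preferring the device array so that no
# device-to-host copy is forced.
promote_storage(::Type{A}, ::Type{A}) where {A} = A
promote_storage(::Type{Vector{T1}}, ::Type{CuVector{T2}}) where {T1, T2} = CuVector{promote_type(T1, T2)}
promote_storage(::Type{CuVector{T1}}, ::Type{Vector{T2}}) where {T1, T2} = CuVector{promote_type(T1, T2)}

promote_storage(Vector{Float32}, CuVector{Float64})  # CuVector{Float64}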

Comment thread src/tensors/indexmanipulations.jl Outdated
Comment thread src/tensors/indexmanipulations.jl Outdated
Comment thread src/tensors/diagonal.jl Outdated
Comment thread src/tensors/braidingtensor.jl
Comment thread src/tensors/braidingtensor.jl Outdated
Comment thread src/tensors/abstracttensor.jl Outdated
Comment thread src/tensors/tensoroperations.jl Outdated
@kshyatt
Member Author

kshyatt commented May 14, 2026

> How would you feel about punting on the more in-depth design decisions, such as making the TensorOperations functions auto-promote storagetypes, and merging the rest?

That would completely negate the point of this PR, which is to finally get things actually working for MPSKit; it would also likely break quite a few of the tests and force the return of @allowscalar.
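
(For context: indexing a single element of a CuArray is disallowed by default, because each scalar access is a separate device round-trip, and CUDA.@allowscalar is the escape hatch the tests are trying to keep out.)

using CUDA

x = CUDA.zeros(3)
# x[1]                   # errors (or warns interactively): scalar indexing
CUDA.@allowscalar x[1]   # works, but this is what the tests try to avoid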

@kshyatt kshyatt force-pushed the ksh/cuda_tweaks branch from 9e9b0ed to a37a133 on May 15, 2026 10:33
@kshyatt kshyatt force-pushed the ksh/cuda_tweaks branch from a37a133 to 3504de3 on May 15, 2026 11:26
function MatrixAlgebraKit.findtruncated_svd(values::CuSectorVector, strategy::S) where {S <: MatrixAlgebraKit.TruncationStrategy}
    # returning a CuSectorVector wrecks things in truncate_{co}domain
    # because of scalar indexing
    return Adapt.adapt(Vector{Int}, MatrixAlgebraKit.findtruncated(values, strategy))
end
Member


I don't think we are guaranteed to always return a <:AbstractVector{Int}; sometimes this can also be a <:AbstractVector{Bool}. Do you think we could put a hook into the truncate_(co)domain functions directly and then just specialize there?

Member Author


Hm, could we instead split this into two lines and do

raw_ind = MatrixAlgebraKit.findtruncated(values, strategy)
return adapt(Vector{eltype(raw_ind)}, raw_ind)

For whatever reason, adapt(Vector, raw_ind) wasn't working :(
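
The concrete-eltype form does keep the element type intact whether findtruncated returned integer indices or a boolean mask; an illustrative example (hypothetical values, not part of the PR):

using Adapt, CUDA

mask = CuArray(rand(Bool, 4))            # e.g. a Bool mask on the device
Adapt.adapt(Vector{eltype(mask)}, mask)  # Vector{Bool}, copied to the host
inds = CuArray([1, 3])                   # or integer indices
Adapt.adapt(Vector{eltype(inds)}, inds)  # Vector{Int}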

