5 changes: 4 additions & 1 deletion CHANGELOG.md
@@ -6,7 +6,8 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [Unreleased](https://github.com/qutip/QuantumToolbox.jl/tree/main)
-- Add error message for bad input in state/operator generating functions ([#603])
+- Add error message for bad input in state/operator generating functions. ([#603])
+- Fix `thermal_dm` and `enr_thermal_dm` for extreme cases. ([#614], [#613])

## [v0.39.1]
Release date: 2025-11-19
@@ -383,3 +384,5 @@ Release date: 2024-11-13
[#591]: https://github.com/qutip/QuantumToolbox.jl/issues/591
[#596]: https://github.com/qutip/QuantumToolbox.jl/issues/596
[#603]: https://github.com/qutip/QuantumToolbox.jl/issues/603
+[#613]: https://github.com/qutip/QuantumToolbox.jl/issues/613
+[#614]: https://github.com/qutip/QuantumToolbox.jl/issues/614
12 changes: 6 additions & 6 deletions src/qobj/energy_restricted.jl
@@ -1,5 +1,5 @@
#=
-This file defines the energy restricted space structure.
+This file defines the excitation number restricted space structure.
=#

export EnrSpace, enr_state_dictionaries
@@ -194,13 +194,13 @@ function enr_thermal_dm(
D = s_enr.size
idx2state = s_enr.idx2state

-diags = ComplexF64[prod((nvec ./ (1 .+ nvec)) .^ idx2state[idx]) for idx in 1:D]
-diags /= sum(diags)

+exp_minus_β = @. 1 - (1 / (nvec + 1)) # use "1-1/(n+1)" instead of "n/(n+1)" for numerical stability (works for n=0 and Inf)
+P_n = _complex_float_type(nvec)[prod(exp_minus_β .^ idx2state[idx]) for idx in 1:D]
+P_n /= sum(P_n)
if getVal(sparse)
-return QuantumObject(spdiagm(0 => diags), Operator(), s_enr)
+return QuantumObject(spdiagm(0 => P_n), Operator(), s_enr)
else
-return QuantumObject(diagm(0 => diags), Operator(), s_enr)
+return QuantumObject(diagm(0 => P_n), Operator(), s_enr)
end
end
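The numerical-stability trick in the hunk above can be sketched in isolation. This is a standalone illustration (the helper names `naive_factor`/`stable_factor` are hypothetical, not the package's API): the per-mode Boltzmann factor exp(-β) = n/(n+1) written two ways.

```julia
# Naive form: returns NaN for n = Inf, since Inf/Inf is undefined.
naive_factor(n) = n / (n + 1)

# Stable form used in the diff:
#   n = 0.0 -> 0.0 (only the ground state is populated)
#   n = Inf -> 1.0 (all weights equal, maximally mixed)
stable_factor(n) = 1 - 1 / (n + 1)
```

For finite positive `n` the two forms agree to rounding; only the extreme cases differ.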

11 changes: 6 additions & 5 deletions src/qobj/states.jl
@@ -118,13 +118,14 @@ Density matrix for a thermal state (generating thermal state probabilities) with
!!! warning "Beware of type-stability!"
If you want to keep type stability, it is recommended to use `thermal_dm(N, n, sparse=Val(sparse))` instead of `thermal_dm(N, n, sparse=sparse)`. See [this link](https://docs.julialang.org/en/v1/manual/performance-tips/#man-performance-value-type) and the [related Section](@ref doc:Type-Stability) about type stability for more details.
"""
-function thermal_dm(N::Int, n::Real; sparse::Union{Bool,Val} = Val(false))
-    β = log(1.0 / n + 1.0)
-    data = [exp(-β * ComplexF64(j)) for j in 0:(N-1)]
+function thermal_dm(N::Int, n::T; sparse::Union{Bool,Val} = Val(false)) where {T<:Real}
+    exp_minus_β = 1 - (1 / (n + 1)) # use "1-1/(n+1)" instead of "n/(n+1)" for numerical stability (works for n=0 and Inf)
+    P_n = _complex_float_type(T)[exp_minus_β^j for j in 0:(N-1)]


Unfortunately this is way slower than the `exp` version. `thermal_dm(3000, 0.2; sparse=Val(true))` becomes 50% slower than before.

@ytdHuang (Member Author), Dec 16, 2025:

But this implementation gives correct results for all known extreme cases. I think this is more important.


And that's why I was talking about branching on the finiteness of β. Since precomputing the normalization factor is problematic for dealing with nbar = Inf, simply replacing the first element in the original array already works and is faster, i.e.

    P_n = _complex_float_type(T)[j == 0 && !isfinite(β) ? one(β) : exp(-β * j) for j in 0:(N-1)]

This is doing basically the same thing as the slower version, since the pow function falls back to log and exp.
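For illustration, a minimal standalone sketch (values of `n` and `N` chosen arbitrarily) showing that the pow-based weights from the diff and the exp-based branch suggested above agree for finite `n`:

```julia
n = 0.2
N = 5
β = log(1 / n + 1)  # β = Inf when n = 0, β = 0 when n = Inf

# pow-based version, as in the diff: (1 - 1/(n+1))^j
weights_pow = [(1 - 1 / (n + 1))^j for j in 0:(N-1)]

# exp-based version; the j == 0 branch avoids exp(-Inf * 0) = exp(NaN) when n = 0
weights_exp = [j == 0 && !isfinite(β) ? one(β) : exp(-β * j) for j in 0:(N-1)]
```

Since exp(-β) = n/(n+1) = 1 - 1/(n+1), the two arrays match up to floating-point rounding.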

@ytdHuang (Member Author):

Ok, I didn't fully understand your point at the beginning.

Member:

In the other PR, @yuyichao was saying that perhaps it is better to preallocate the vector and then fill it, to avoid a comprehension. Is that still true here?

I would also avoid using the Boltzmann function we have just introduced.


The avoiding-comprehension part was mainly about not writing too much code inside the comprehension (coding style only). It is true that a comprehension can basically only be as performant as a loop, not better, but that's not too important.

The code in the loop (which would otherwise have been in the comprehension) was more complex in the other PR, since I wanted to reuse the result for the summation. However, since the analytical expression for the sum had issues with infinite nbar and was therefore discarded, that's less of a problem (granted, 1 + 1/n is already computed, but that has always been the case), and the code in the loop is fairly simple in the current version.
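The comprehension-versus-loop point above can be sketched standalone (arbitrary `N` and `q`; `q` stands in for exp(-β)). Both build the same vector; the difference is coding style, not asymptotic performance.

```julia
N = 5
q = 0.5  # placeholder for exp(-β)

# comprehension, as in the current version
P_comp = [q^j for j in 0:(N-1)]

# explicit preallocate-and-fill loop, as suggested in the other PR
P_loop = Vector{Float64}(undef, N)
for j in 0:(N-1)
    P_loop[j+1] = q^j  # indexed assignment into the preallocated buffer
end
```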

+P_n /= sum(P_n)


What's wrong with the in-place modification?

@ytdHuang (Member Author):

JET.jl reports some issues, so I keep it as `P_n /= sum(P_n)` instead of `./=`.


What’s the issue? Why is it recommending allocating a new array?

@ytdHuang (Member Author), Dec 17, 2025:

It relates to something else in our package

I prefer to keep it as it is for now.


So you mean that this use is perfectly fine?

I still don't understand why such a clear-cut and simple optimization is problematic. I literally can't see any reason for the non-in-place version to be better. If there is one, it would be good to state it and maybe mention it in a comment.

@ytdHuang (Member Author), Dec 17, 2025:

Yeah, I also don't understand. It somehow makes the compiler think we are building the Qobj as a 3D array.

I even think the issue reported by JET.jl is not a problem in our package.

Anyway, I would prefer to merge this bug fix first; we can improve these things in a separate, future PR if necessary.

Member:

I agree that we should avoid allocating extra vectors unnecessarily. We should understand what is going on here.
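The in-place question debated above can be sketched standalone (variable names here are illustrative, not the package's). `P = P / sum(P)` (what `P /= sum(P)` lowers to) rebinds `P` to a freshly allocated array, while `P ./= sum(P)` mutates the existing buffer:

```julia
P = [1.0, 2.0, 3.0]
Q = copy(P)

P = P / sum(P)  # allocates a new array and rebinds P
Q ./= sum(Q)    # in-place broadcast: mutates Q, no new allocation
```

Both end up normalized to unit trace weight; the only difference is the extra allocation.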

if getVal(sparse)
-return QuantumObject(spdiagm(0 => data ./ sum(data)), Operator(), N)
+return QuantumObject(spdiagm(0 => P_n), Operator(), N)
else
-return QuantumObject(diagm(0 => data ./ sum(data)), Operator(), N)
+return QuantumObject(diagm(0 => P_n), Operator(), N)
end
end

1 change: 1 addition & 0 deletions src/utilities.jl
@@ -190,6 +190,7 @@ _complex_float_type(::Type{Int32}) = ComplexF32
_complex_float_type(::Type{Int64}) = ComplexF64
_complex_float_type(::Type{Float32}) = ComplexF32
_complex_float_type(::Type{Float64}) = ComplexF64
+_complex_float_type(::Type{BigFloat}) = Complex{BigFloat}
_complex_float_type(::Type{Complex{Int32}}) = ComplexF32
_complex_float_type(::Type{Complex{Int64}}) = ComplexF64
_complex_float_type(::Type{Complex{Float32}}) = ComplexF32
9 changes: 9 additions & 0 deletions test/core-test/enr_state_operator.jl
@@ -39,6 +39,15 @@
I_enr = enr_identity(space_enr)
size_enr = space_enr.size

+# enr_thermal_dm (extreme cases)
+ρTd0 = enr_thermal_dm(space_enr, 0.0)
+ρTs0 = enr_thermal_dm(space_enr, 0.0; sparse = Val(true))
+ρTd∞ = enr_thermal_dm(space_enr, Inf)
+ρTs∞ = enr_thermal_dm(space_enr, Inf; sparse = Val(true))
+@test tr(ρTd0) ≈ tr(ρTs0) ≈ tr(ρTd∞) ≈ tr(ρTs∞) ≈ 1.0
+@test ρTd0.data ≈ ρTs0.data ≈ fock_dm(size_enr, 0).data
+@test ρTd∞.data ≈ ρTs∞.data ≈ maximally_mixed_dm(size_enr).data

# tensor between normal and ENR space
ρ_tot = tensor(ρ_s, ρ_enr)
opstring = sprint((t, s) -> show(t, "text/plain", s), ρ_tot)
22 changes: 17 additions & 5 deletions test/core-test/states_and_operators.jl
@@ -36,11 +36,23 @@
end

@testset "thermal state" begin
-ρTd = thermal_dm(5, 0.123)
-ρTs = thermal_dm(5, 0.123; sparse = true)
+N = 5
+
+# extreme cases
+ρTd0 = thermal_dm(N, 0.0)
+ρTs0 = thermal_dm(N, 0.0; sparse = Val(true))
+ρTd∞ = thermal_dm(N, Inf)
+ρTs∞ = thermal_dm(N, Inf; sparse = Val(true))
+@test tr(ρTd0) ≈ tr(ρTs0) ≈ tr(ρTd∞) ≈ tr(ρTs∞) ≈ 1.0
+@test ρTd0 ≈ ρTs0 ≈ fock_dm(N, 0)
+@test ρTd∞ ≈ ρTs∞ ≈ maximally_mixed_dm(N)
+
+# general case (also test BigFloat)
+ρTd = thermal_dm(N, big(0.123))
+ρTs = thermal_dm(N, big(0.123); sparse = Val(true))
@test isoper(ρTd)
-@test ρTd.dims == [5]
-@test tr(ρTd) ≈ 1.0
+@test ρTd.dims == [N]
+@test tr(ρTd) ≈ tr(ρTs) ≈ 1.0
@test ρTd.data ≈ spdiagm(
0 => Float64[
0.8904859864731106,
@@ -50,7 +62,7 @@
0.00012815292116369966,
],
)
-@test typeof(ρTs.data) <: AbstractSparseMatrix
+@test ρTs.data isa AbstractSparseMatrix
@test ρTd ≈ ρTs
end
