✨ Add non-linear control flow conversions between QC and QCO dialects #1396
base: main
Conversation
Actionable comments posted: 10
🤖 Fix all issues with AI agents
In `@mlir/lib/Conversion/QCOToQC/QCOToQC.cpp`:
- Around line 1360-1364: The legality predicate passed to
target.addDynamicallyLegalOp for scf::ForOp currently rejects any TensorType
result, causing non-qubit tensors to be routed to ConvertQCOScfForOp; update the
predicate so it only returns illegal for tensors whose element type is
qco::QubitType (e.g., detect TensorType t and check t.getElementType() ==
qco::QubitType::get(context)), or assert that all tensors are qubit tensors,
ensuring classical tensors (e.g., tensor<i32>) remain legal and are not lowered
by ConvertQCOScfForOp.
- Around line 910-940: The current matchAndRewrite for tensor::ExtractOp
contains an unreachable MemRefType branch and an unsafe store-scan that can
null-deref on non-constant indices, relies on store order instead of matching indices,
and can replace with non-dominating values; change the implementation to always
lower tensor.extract to a memref.load instead of scanning memref::StoreOp users.
Concretely, in matchAndRewrite (tensor::ExtractOp) remove the branch that
inspects MemRefType and the loop over memrefUsers/dyn_cast<memref::StoreOp>, and
unconditionally call rewriter.replaceOpWithNewOp<memref::LoadOp>(op,
adaptor.getTensor(), adaptor.getIndices()); (if you prefer a more advanced fix
keep the constant index check and add a dominance check before replacing with a
store value, using the same symbols: matchAndRewrite, adaptor.getTensor(),
adaptor.getIndices(), memref::StoreOp).
- Around line 1343-1358: The legality predicates for scf::YieldOp and
scf::ConditionOp only check for scalar qco::QubitType and qc::QubitType and miss
memref<... qc.qubit> (from tensor->memref conversion); update the lambdas passed
to target.addDynamicallyLegalOp<scf::YieldOp> and
target.addDynamicallyLegalOp<scf::ConditionOp> to also treat MemRefType whose
getElementType() is qco::QubitType::get(context) or qc::QubitType::get(context)
as illegal, and likewise extend the scf::WhileOp predicate to check for
qc::QubitType inside MemRefType elements when examining op->getResultTypes();
locate these predicates by the calls to addDynamicallyLegalOp for scf::YieldOp,
scf::WhileOp, and scf::ConditionOp and modify the type checks to inspect
MemRefType::getElementType().
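For the yield/condition items above (and the element-type check suggested for the scf::ForOp predicate), a minimal sketch of what the extended legality check could look like; the `isQubitLike` helper name is hypothetical, and `target` plus the qubit types come from the surrounding pass setup in QCOToQC.cpp:

```cpp
// Sketch only, not a drop-in patch.
static bool isQubitLike(mlir::Type t) {
  if (mlir::isa<qco::QubitType, qc::QubitType>(t))
    return true;
  // Covers tensor<...x!qco.qubit> as well as memref<...x!qc.qubit>.
  if (auto shaped = mlir::dyn_cast<mlir::ShapedType>(t))
    return mlir::isa<qco::QubitType, qc::QubitType>(shaped.getElementType());
  return false;
}

// Inside the conversion-target setup: scf.yield / scf.condition stay legal
// only if no operand carries qubits, whether scalar- or container-typed.
target.addDynamicallyLegalOp<mlir::scf::YieldOp>([](mlir::scf::YieldOp op) {
  return llvm::none_of(op->getOperandTypes(), isQubitLike);
});
target.addDynamicallyLegalOp<mlir::scf::ConditionOp>(
    [](mlir::scf::ConditionOp op) {
      return llvm::none_of(op->getOperandTypes(), isQubitLike);
    });
```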
In `@mlir/lib/Conversion/QCToQCO/QCToQCO.cpp`:
- Around line 1674-1676: Fix the minor typo in the comment inside the QCToQCO
conversion where you handle memref::StoreOp: change the comment "// gGet the
qubit" to "// Get the qubit" near the dyn_cast<memref::StoreOp>(user) / storeOp
block so the comment is correct and concise.
- Around line 1349-1356: The loop that collects qcoQubits reads
qubitMap[storeOp.getValue()] without checking the key; add an assertion/guard
that the map contains storeOp.getValue() before accessing it (e.g.,
assert(qubitMap.contains(...) || qubitMap.count(...)) or bail with an error) to
match the file's existing patterns and prevent inserting null Values into
qcoQubits; update the block handling memref::StoreOp (the storeOp variable and
its getValue() call) to only push_back when the presence check passes.
- Around line 1414-1421: Before indexing qubitMap with op.getMemRef() add a
presence check and handle the missing-key case: verify
getState().qubitMap[op->getParentRegion()].contains(op.getMemRef()) (or find()
!= end()) and if missing emit a clear error/return failure/assert (so the
rewrite doesn't dereference a nonexistent entry). Keep the rest of the logic
(creating tensor::ExtractOp via tensor::ExtractOp::create(rewriter,
op->getLoc(), qco::QubitType::get(rewriter.getContext()), tensor,
op.getIndices()) and updating qubitMap[op.getResult()] = extractOp.getResult())
unchanged, but only execute them after the presence check succeeds.
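For the two map-presence items above, a sketch of the guarded lookup; the map layout and names follow the prompt, and the match-failure message is a placeholder:

```cpp
// Guarded version of the rewrite described above (sketch).
auto& regionQubits = getState().qubitMap[op->getParentRegion()];
if (!regionQubits.contains(op.getMemRef()))
  return rewriter.notifyMatchFailure(op, "no qubit tensor tracked for memref");

mlir::Value tensor = regionQubits[op.getMemRef()];
auto extractOp = tensor::ExtractOp::create(
    rewriter, op->getLoc(), qco::QubitType::get(rewriter.getContext()), tensor,
    op.getIndices());
qubitMap[op.getResult()] = extractOp.getResult(); // unchanged from the file
```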
In `@mlir/lib/Dialect/QCO/Builder/QCOProgramBuilder.cpp`:
- Around line 670-691: The code assumes init args are defined by
tensor::FromElementsOp and that index Value is an arith::ConstantOp; add null
checks and early guards: when iterating scfFor.getInitArgs(), verify
arg.getDefiningOp<tensor::FromElementsOp>() is non-null before using
fromTensorOp and skip or handle the case if it's not; when index is a Value,
check that getDefiningOp<arith::ConstantOp>() returns non-null before calling
getValue(), and handle non-constant indices (e.g., skip, assert, or emit a
diagnostic) instead of dereferencing a null; keep using
updateQubitTracking(extractedElement, extractOp.getResult(),
extractOp->getParentRegion()) only after these guards succeed.
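A sketch of the guarded loop, assuming the names used in QCOProgramBuilder.cpp (`scfFor`, `index`, `extractOp`, `updateQubitTracking`); the fatal-error wording is illustrative:

```cpp
for (mlir::Value arg : scfFor.getInitArgs()) {
  auto fromTensorOp = arg.getDefiningOp<mlir::tensor::FromElementsOp>();
  if (!fromTensorOp)
    continue; // init arg is not produced by tensor.from_elements; skip it

  int64_t val = 0;
  if (const auto* indexValue = std::get_if<mlir::Value>(&index)) {
    auto constantOp = indexValue->getDefiningOp<mlir::arith::ConstantOp>();
    if (!constantOp)
      llvm::report_fatal_error(
          "tensorExtract: non-constant index is not supported");
    val = mlir::cast<mlir::IntegerAttr>(constantOp.getValue()).getInt();
  } else {
    val = std::get<int64_t>(index);
  }

  mlir::Value extractedElement = fromTensorOp.getElements()[val];
  updateQubitTracking(extractedElement, extractOp.getResult(),
                      extractOp->getParentRegion());
}
```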
In `@mlir/unittests/Conversion/QCOToQC/test_conversion_qco_to_qc.cpp`:
- Line 348: The test's failure message is wrong: update the FAIL() call in
test_conversion_qco_to_qc.cpp (the FAIL() << "Conversion error during QC-QCO
conversion for scf nested";) to reflect the actual direction QCO→QC—e.g., change
the string to "Conversion error during QCO→QC conversion for scf nested" or
"Conversion error during QCO-QC conversion for scf nested" so the message
matches the test (QCO to QC) and aids debugging.
- Around line 41-82: Extract the duplicated test fixture and helpers into a
shared header and update both test files to include it: create a header (e.g.,
ConversionTestFixture.h) that defines the ConversionTest class and its helpers
(context, SetUp, runCanonicalizationPass, buildQCIR, buildQCOIR) and also move
the getOutputString helper there; then replace the inline definitions in
test_conversion_qco_to_qc.cpp and test_conversion_qc_to_qco.cpp with an `#include`
of the new header so both tests reuse the same ConversionTest and helper
implementations.
- Around line 366-368: In the TEST_F named ScfForNestedTensorQCOtoQCTest (inside
the ConversionTest suite) fix the typo in the comment from "tesnor" to "tensor"
and update the erroneous error message that currently states the wrong
conversion direction (in the test's failure/assert message when conversion
fails) to correctly say conversion from QCO to QC; locate the strings near the
test's call to buildQCOIR and the assertion/error output and replace them with
the corrected wording.
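For the shared-fixture item above, a structural sketch of what the proposed header could contain; all bodies would be moved verbatim from the two existing test files, so the signatures below are assumptions rather than the real interface:

```cpp
// ConversionTestFixture.h (hypothetical name from the suggestion above)
#pragma once

#include "mlir/IR/BuiltinOps.h"
#include "mlir/IR/MLIRContext.h"
#include "mlir/IR/OwningOpRef.h"
#include "mlir/Support/LogicalResult.h"
#include <gtest/gtest.h>
#include <memory>
#include <string>

class ConversionTest : public ::testing::Test {
protected:
  std::unique_ptr<mlir::MLIRContext> context;

  void SetUp() override; // registers the QC/QCO/SCF/func/... dialects
  static mlir::LogicalResult runCanonicalizationPass(mlir::ModuleOp module);
  mlir::OwningOpRef<mlir::ModuleOp> buildQCIR();
  mlir::OwningOpRef<mlir::ModuleOp> buildQCOIR();
};

// Shared printing helper used by both test binaries.
std::string getOutputString(mlir::ModuleOp module);
```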
♻️ Duplicate comments (1)
mlir/unittests/Conversion/QCToQCO/test_conversion_qc_to_qco.cpp (1)
42-55: Test fixture is well-organized; minor improvement possible. The test fixture follows a clean pattern. A previous review noted that loadAllAvailableDialects() can mask missing dialect registrations; since you've explicitly registered all needed dialects in the registry, you could remove this call to be more explicit, but it's not critical.
Actionable comments posted: 7
🤖 Fix all issues with AI agents
In `@mlir/lib/Conversion/QCToQCO/QCToQCO.cpp`:
- Around line 1336-1343: In the documentation comment in QCToQCO.cpp (the
example showing conversion of memref.alloc to tensor.from_elements), fix the
typo "tensore" to "tensor" in the output example (`%tensor =
tensor.from_elements %q0, %q1, %q2 : tensore<3x!qco.qubit>` -> change "tensore"
to "tensor`) so the prose and MLIR example are correct.
In `@mlir/lib/Dialect/QC/Builder/QCProgramBuilder.cpp`:
- Around line 495-572: The finalize() auto‑deallocation of allocatedQubits is
unsafe because QCProgramBuilder allows allocations inside nested SCF regions
(scfFor, scfWhile, scfIf) so values can be out of scope at finalize(); fix by
making allocation tracking region‑aware or forbidding cross‑region allocations:
update QCProgramBuilder allocation logic to record the parent Region/Block for
each allocation (track alongside allocatedQubits), and in finalize() emit
deallocations inside the same Region/Block where the allocation was created (or
assert/error if that Region is not the entry block); alternatively, add a guard
in scfFor/scfWhile/scfIf (and any function-builder entry points) to disallow
calls to the allocate API outside the main entry block and surface a clear
diagnostic from finalize()/allocation site when a nested allocation is detected.
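One possible shape of the guard-based alternative described above (a sketch only; `allocQubit`, `entryBlock`, `createAllocOp`, and the error wording are assumptions about QCProgramBuilder's internals, and the builder is assumed to expose its OpBuilder insertion point):

```cpp
// Reject allocations outside the entry block so finalize() can safely emit
// the matching deallocations there.
mlir::Value QCProgramBuilder::allocQubit() { // hypothetical entry point
  if (getInsertionBlock() != entryBlock)
    llvm::report_fatal_error(
        "qc.alloc inside a nested SCF region is not supported; allocate in "
        "the entry block and thread the qubit through iter_args");
  mlir::Value qubit = createAllocOp(); // stands in for the existing alloc logic
  allocatedQubits.push_back(qubit);
  return qubit;
}
```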
In `@mlir/lib/Dialect/QCO/Builder/QCOProgramBuilder.cpp`:
- Around line 632-639: The dealloc routine is validating and erasing the qubit
against qubit.getParentRegion() instead of the region where the Dealloc is
inserted; update QCOProgramBuilder::dealloc to compute the target region (the
builder's current/insertion region or the DeallocOp's parent region) and call
validateQubitValue(targetRegion, qubit) and
validQubits[targetRegion].erase(qubit) before creating DeallocOp::create(*this,
qubit); keep references to validateQubitValue, validQubits and DeallocOp::create
so reviewers can locate the changes.
- Around line 669-699: The current loop over scfFor.getInitArgs() updates qubit
tracking for every tensor initArg, corrupting unrelated tracking; instead
determine which scf.for result corresponds to the extracted tensor (find the
result index/position of 'tensor' among scfFor results) and use that position to
select the matching initArg, get its FromElementsOp (fromTensorOp) and only call
updateQubitTracking(fromTensorOp.getElements()[val], extractOp.getResult(),
extractOp->getParentRegion()) for that one initArg; keep the existing
index-to-int logic (the index variant handling using
arith::ConstantOp/IntegerAttr) but apply it only to the matched initArg rather
than iterating all initArgs.
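A sketch of the position-matched lookup described in the second item above; the names (`scfFor`, `tensor`, `extractOp`, `updateQubitTracking`) follow QCOProgramBuilder.cpp, and `val` is the already-computed constant index:

```cpp
auto results = scfFor.getResults();
auto it = llvm::find(results, tensor);
if (it != results.end()) {
  const auto resultIdx =
      static_cast<unsigned>(std::distance(results.begin(), it));
  mlir::Value initArg = scfFor.getInitArgs()[resultIdx];
  if (auto fromTensorOp =
          initArg.getDefiningOp<mlir::tensor::FromElementsOp>()) {
    // Only the init arg feeding this result participates in tracking.
    updateQubitTracking(fromTensorOp.getElements()[val], extractOp.getResult(),
                        extractOp->getParentRegion());
  }
}
```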
In `@mlir/unittests/Conversion/QCOToQC/test_conversion_qco_to_qc.cpp`:
- Around line 84-112: getOutputString is filtering for qco::QCODialect and
qco::DeallocOp but after QCO→QC conversion the module contains QC ops; change
the filter to use qc::QCDialect instead of qco::QCODialect and update the
Dealloc op check to qc::DeallocOp (i.e., acquire qcDialect via
moduleOp->getContext()->getLoadedDialect<qc::QCDialect>() and use opDialect ==
qcDialect || opDialect == scfDialect || llvm::isa<func::ReturnOp>(op) ||
llvm::isa<func::CallOp>(op) so QC gate ops (qc.h, qc.x, qc.alloc, etc.) are
printed).
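A sketch of the adjusted filter; it assumes the existing walk/print scaffolding in getOutputString, and `output` stands for whatever stream the helper already writes to:

```cpp
auto* qcDialect = moduleOp->getContext()->getLoadedDialect<qc::QCDialect>();
auto* scfDialect =
    moduleOp->getContext()->getLoadedDialect<mlir::scf::SCFDialect>();
moduleOp->walk([&](mlir::Operation* op) {
  auto* opDialect = op->getDialect();
  // Print QC gate ops (qc.h, qc.x, qc.alloc, ...) plus scf and func ops.
  if (opDialect == qcDialect || opDialect == scfDialect ||
      llvm::isa<mlir::func::ReturnOp, mlir::func::CallOp>(op)) {
    op->print(output);
    output << "\n";
  }
});
```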
In `@mlir/unittests/Conversion/QCToQCO/test_conversion_qc_to_qco.cpp`:
- Around line 57-83: The canonicalization helper runCanonicalizationPass
currently swallows failures; change its signature to return LogicalResult and
return failure() when pm.run(module) fails (and success() otherwise). Update
callers buildQCIR and buildQCOIR to check the returned LogicalResult from
runCanonicalizationPass(module.get()) and assert or emit a test failure (e.g.,
assert(succeeded(...)) or return a failing LogicalResult from the builder
wrappers) so tests abort on canonicalization failure; adjust function signatures
and call sites accordingly (runCanonicalizationPass, buildQCIR, buildQCOIR).
- Around line 90-107: The test currently filters printed ops by qcoDialect,
scfDialect, func::ReturnOp and func::CallOp, which omits tensor dialect ops
produced by the QC->QCO conversion; update the walk lambda in
test_conversion_qc_to_qco.cpp to also load the tensor dialect (e.g.
getLoadedDialect<tensor::TensorDialect>()) and include a check opDialect ==
tensorDialect in the same conditional so tensor ops (like tensor.from_elements
and tensor.extract) are printed and compared.
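For the first item above, a sketch of the LogicalResult-returning helper and its use at a call site; the pass setup is assumed to match the existing helper:

```cpp
#include "mlir/Pass/PassManager.h"
#include "mlir/Transforms/Passes.h"

static mlir::LogicalResult runCanonicalizationPass(mlir::ModuleOp module) {
  mlir::PassManager pm(module.getContext());
  pm.addPass(mlir::createCanonicalizerPass());
  return pm.run(module); // propagate failure instead of swallowing it
}

// Inside buildQCIR / buildQCOIR:
if (mlir::failed(runCanonicalizationPass(module.get())))
  ADD_FAILURE() << "Canonicalization pass failed";
```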
♻️ Duplicate comments (5)
mlir/unittests/Conversion/QCToQCO/test_conversion_qc_to_qco.cpp (3)
45-55: Avoid loading all dialects in this test context.
Loading every available dialect can mask missing registrations and reduce test strictness; the explicit registry already declares what's needed.
♻️ Suggested change:
- context->loadAllAvailableDialects();
85-86: getOutputString doesn't need OwningOpRef; prefer const ModuleOp&.
This avoids unnecessary ownership and better reflects read-only behavior.
♻️ Suggested change:
- getOutputString(mlir::OwningOpRef<mlir::ModuleOp>& module) {
+ getOutputString(const mlir::ModuleOp& module) {
  ...
- auto* moduleOp = module->getOperation();
+ auto* moduleOp = module.getOperation();
Update call sites accordingly (e.g., getOutputString(*input) and getOutputString(*expectedOutput)).
148-151: Printed-IR equality is brittle.
Formatting or canonicalization changes can break these tests without semantic changes. Consider FileCheck-style assertions or structural checks instead of full-string equality.
Also applies to: 193-196, 241-244, 279-282, 314-316, 358-361, 403-405, 469-471
mlir/lib/Conversion/QCOToQC/QCOToQC.cpp (1)
944-975: tensor.extract folding to last store is unsafe and can crash.
This path assumes a defining op and constant indices (possible null deref), relies on store order instead of index matching, and can replace with non-dominating values. Prefer always lowering to memref.load, or add strict dominance/index checks.
🛠️ Safe, always-correct rewrite:
-    const auto memref = adaptor.getTensor();
-    // Remove the extract operations following a scf.for operation
-    // if the region of the converted memref is the same as the extract
-    // operation
-    if (memref.getDefiningOp()->getParentRegion() == op->getParentRegion()) {
-      // get the users of the memref register
-      const auto memrefUsers = llvm::to_vector(memref.getUsers());
-      // Get the index where the value was extracted
-      int64_t index = -1;
-      auto constantOp =
-          adaptor.getIndices().front().getDefiningOp<arith::ConstantOp>();
-      const auto indexToStore =
-          dyn_cast<IntegerAttr>(constantOp.getValue()).getInt();
-      // Find the appropriate store operation depending on the index to get
-      // the qubit
-      for (auto* user : llvm::reverse(memrefUsers)) {
-        if (llvm::isa<memref::StoreOp>(user)) {
-          index++;
-          if (index == indexToStore) {
-            auto storeOp = dyn_cast<memref::StoreOp>(user);
-            rewriter.replaceOp(op, storeOp.getValueToStore());
-          }
-        }
-      }
-    }
-    // Replace other extract operations with a memref.load operation
-    else {
-      rewriter.replaceOpWithNewOp<memref::LoadOp>(op, adaptor.getTensor(),
-                                                  adaptor.getIndices());
-    }
+    rewriter.replaceOpWithNewOp<memref::LoadOp>(op, adaptor.getTensor(),
+                                                adaptor.getIndices());
mlir/unittests/Conversion/QCOToQC/test_conversion_qco_to_qc.cpp (1)
41-82: Test fixture duplicates code from QCToQCO tests.
The ConversionTest fixture and helpers are nearly identical to those in mlir/unittests/Conversion/QCToQCO/QCToQCO.cpp (lines 44-82). Consider extracting shared utilities into a common header to reduce duplication.
Actionable comments posted: 4
🤖 Fix all issues with AI agents
In `@mlir/lib/Conversion/QCOToQC/QCOToQC.cpp`:
- Around line 886-939: Fix the typos in the docstring/examples around
ConvertQCOTensorFromElementsOp: change "tensore<3x!qco.qubit>" to
"tensor<3x!qco.qubit>" in the tensor.from_elements example, and ensure the
memref load example reflects the converted element type as "!qc.qubit" (not
"!qco.qubit"); update the comment block above ConvertQCOTensorFromElementsOp /
the following tensor.extract example accordingly so the example types match the
memref.alloc and memref.load forms shown.
In `@mlir/lib/Conversion/QCToQCO/QCToQCO.cpp`:
- Around line 1348-1371: The matchAndRewrite for memref::AllocOp must not rely
on op->getUsers() order or count stores to infer tensor shape: gather all
memref::StoreOp users of the alloc (use qubitMap lookups as now), extract and
validate their constant index operands (fail/assert if non-constant), sort the
store ops by that index, build qcoQubits in sorted order, obtain the tensor
length from the memref's declared shape (use the alloc's result type /
RankedTensorType or memref type shape) instead of qcoQubits.size(), then create
tensor::FromElementsOp (FromElementsOp::create) with that tensorType and insert
the mapping into qubitMap as before (keep references to matchAndRewrite,
qubitMap, memref::AllocOp, memref::StoreOp, tensor::FromElementsOp).
- Around line 1468-1475: The current creation of qcoTypes (used for scf::IfOp
and scf::WhileOp result types) assumes every tracked value is a scalar qc.qubit
and emits !qco.qubit for each entry, but regionMap[op] can contain
memref<...qc.qubit> which should convert to tensor<...qco.qubit>; update the
logic that builds qcoTypes (instead of using qcQubits.size() and
qco::QubitType::get unconditionally) to iterate the tracked values in
regionMap[op] and for each value inspect its type: map qc.qubit -> !qco.qubit
and map memref<...qc.qubit> (and similarly shaped memref variants) ->
tensor<...qco.qubit> (preserving shape and rank), then use that computed
TypeRange when creating scf::IfOp::create and similarly apply the same per-value
conversion when building result types for scf::WhileOp.
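A sketch of the per-value type mapping described in the second item above; the qc/qco type names come from the conversion file, while the helper name is illustrative:

```cpp
static mlir::Type convertTrackedType(mlir::Type t, mlir::MLIRContext* ctx) {
  if (mlir::isa<qc::QubitType>(t))
    return qco::QubitType::get(ctx); // scalar qubit
  if (auto memrefTy = mlir::dyn_cast<mlir::MemRefType>(t))
    if (mlir::isa<qc::QubitType>(memrefTy.getElementType()))
      return mlir::RankedTensorType::get(memrefTy.getShape(),
                                         qco::QubitType::get(ctx));
  return t; // classical values pass through unchanged
}

// When building result types for scf.if / scf.while:
llvm::SmallVector<mlir::Type> qcoTypes;
for (mlir::Value v : regionMap[op])
  qcoTypes.push_back(convertTrackedType(v.getType(), rewriter.getContext()));
```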
In `@mlir/lib/Dialect/QCO/Builder/QCOProgramBuilder.cpp`:
- Around line 648-716: The tensorExtract implementation currently ignores
non-constant indices and silently skips qubit-tracking updates; in
QCOProgramBuilder::tensorExtract detect when the index is not a compile-time
integer (i.e., when std::holds_alternative<int64_t> is false or when the Value
index does not resolve to an arith::ConstantOp with an IntegerAttr) and emit a
fatal/explicit diagnostic instead of continuing (use the builder/location to
report the error tied to extractOp or getLoc()); ensure you do this before any
tracking calls (before calling updateQubitTracking) so invalid dynamic indices
cannot produce duplicate live qubit entries and keep references to extractOp,
updateQubitTracking, validQubits and tensor::ExtractOp to locate the change.
♻️ Duplicate comments (1)
mlir/lib/Conversion/QCOToQC/QCOToQC.cpp (1)
944-977: Unsafe tensor.extract lowering (store-scan, null deref, SSA hazards).
memref.getDefiningOp() can be null, the index is assumed constant, the scan uses store order rather than index equality, and replacements may violate dominance. Prefer an always-correct memref.load lowering (or add dominance + index checks).
🐛 Safer lowering:
-    if (memref.getDefiningOp()->getParentRegion() == op->getParentRegion()) {
-      const auto memrefUsers = llvm::to_vector(memref.getUsers());
-      int64_t index = -1;
-      auto constantOp =
-          adaptor.getIndices().front().getDefiningOp<arith::ConstantOp>();
-      const auto indexToStore =
-          dyn_cast<IntegerAttr>(constantOp.getValue()).getInt();
-      for (auto* user : llvm::reverse(memrefUsers)) {
-        if (llvm::isa<memref::StoreOp>(user)) {
-          index++;
-          if (index == indexToStore) {
-            auto storeOp = dyn_cast<memref::StoreOp>(user);
-            rewriter.replaceOp(op, storeOp.getValueToStore());
-          }
-        }
-      }
-    } else {
-      rewriter.replaceOpWithNewOp<memref::LoadOp>(op, adaptor.getTensor(),
-                                                  adaptor.getIndices());
-    }
+    rewriter.replaceOpWithNewOp<memref::LoadOp>(op, adaptor.getTensor(),
+                                                adaptor.getIndices());
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@mlir/lib/Conversion/QCOToQC/QCOToQC.cpp`:
- Around line 1368-1398: The legality predicates currently use isQCOQubitType
which returns true for tensor<qco.qubit> and therefore routes container-valued
ops into scalar-only lowering; update the predicates for scf::IfOp,
scf::WhileOp, func::FuncOp (and the related scf::YieldOp, scf::ConditionOp,
scf::ForOp, func::CallOp checks) to explicitly accept only qco::QubitType scalar
instances (e.g., test Type::isa<qco::QubitType>() or equivalent) and reject any
tensor/container qubit types so container results are not treated as scalar
qubits (alternatively add an assert/failure when a container qco qubit type is
encountered to fail fast).
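A sketch distinguishing the scalar check from the container case; the exact legal/illegal polarity follows the pass's existing predicates, so this is illustrative only:

```cpp
// True only for a bare !qco.qubit; tensor<...x!qco.qubit> is rejected.
static bool isScalarQCOQubit(mlir::Type t) {
  return mlir::isa<qco::QubitType>(t);
}

target.addDynamicallyLegalOp<mlir::scf::IfOp>([](mlir::scf::IfOp op) {
  // Illegal (i.e. routed to the scalar lowering) only when a result is a
  // scalar qubit; container-typed qubit results should be diagnosed separately.
  return llvm::none_of(op->getResultTypes(), isScalarQCOQubit);
});
```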
♻️ Duplicate comments (1)
mlir/lib/Conversion/QCOToQC/QCOToQC.cpp (1)
942-976: Fix unsafe tensor.extract store-scan and add a safe fallback.
The current optimization can dereference a null defining op (block-arg memrefs), assumes constant indices, and may leave the op unreplaced if no matching store is found. Safer is to always lower to memref.load (or fall back when checks fail). This was already flagged earlier; re-highlighting for correctness.
✅ Safer always-correct lowering:
  LogicalResult matchAndRewrite(tensor::ExtractOp op, OpAdaptor adaptor,
                                ConversionPatternRewriter& rewriter) const override {
-   const auto memref = adaptor.getTensor();
-   // Remove the extract operations following a scf.for operation
-   // if the region of the converted memref is the same as the extract
-   // operation
-   if (memref.getDefiningOp()->getParentRegion() == op->getParentRegion()) {
-     // Get the index where the value was extracted
-     auto extractIndex =
-         adaptor.getIndices().front().getDefiningOp<arith::ConstantIndexOp>();
-     assert(extractIndex && "Expected constant index for tensor index");
-     const auto indexToStore = extractIndex.value();
-
-     // Find the store operation with the same index
-     for (const auto* user : memref.getUsers()) {
-       if (auto storeOp = dyn_cast<memref::StoreOp>(user)) {
-         auto memrefIndex = storeOp.getIndices()
-                                .front()
-                                .getDefiningOp<arith::ConstantIndexOp>();
-         assert(memrefIndex && "Expected constant index for memref index");
-
-         // Replace the extract op with the qubit value
-         if (indexToStore == memrefIndex.value()) {
-           rewriter.replaceOp(op, storeOp.getValueToStore());
-           break;
-         }
-       }
-     }
-   }
-   // Replace other extract operations with a memref.load operation
-   else {
-     rewriter.replaceOpWithNewOp<memref::LoadOp>(op, adaptor.getTensor(),
-                                                 adaptor.getIndices());
-   }
+   rewriter.replaceOpWithNewOp<memref::LoadOp>(op, adaptor.getTensor(),
+                                               adaptor.getIndices());
    return success();
  }
denialhaag left a comment:
I went through all the changes again and only found a couple of small suggestions.
I like your implementation for looping through lists of qubits. I see your point that the implementation in the QCO dialect is a bit verbose, but we might just have to accept that.
Judging by the tests, the current implementation is quite flexible, allowing us to loop through arbitrary lists of qubits. I don't have a strong stance on whether we should reintroduce the concept of registers. That said, I don't think they would reduce the complexity of the implementation, especially if we want to loop through arbitrary lists.
I would hand over to @burgholzer again for his input. 😌
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
mlir/include/mlir/Dialect/QCO/Builder/QCOProgramBuilder.h (1)
1287-1307: Document the new Region* parameter in validateQubitValue.
Line 1288's doc block lists only qubit; please add @param region and fix the grammar in the next block to avoid API confusion.
✍️ Suggested doc fix:
 /**
  * @brief Validate that a qubit value is valid and unconsumed
  * @param qubit Qubit value to validate
+ * @param region Region that owns the qubit SSA value
  * @throws Aborts if qubit is not tracked (consumed or never created)
  */
 void validateQubitValue(Value qubit, Region* region) const;
@@
- * @param region The Region in where the qubits are defined.
+ * @param region The Region where the qubits are defined.
  */
 void updateQubitTracking(Value inputQubit, Value outputQubit, Region* region);
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@mlir/include/mlir/Dialect/QCO/Builder/QCOProgramBuilder.h`:
- Around line 1092-1258: Add runtime validation to prevent container qubit types
for scalar-only ops: in QCOProgramBuilder methods scfIf (validate the ValueRange
qubits), scfWhile (validate ValueRange args), funcFunc (validate argTypes and
resultTypes) and funcCall (validate operands), check that each
operand/argument/result is a scalar qubit type (not a MemRefType/TensorType or
other container of qubits) and emit a clear diagnostic or assert if a container
type is found; alternatively, if you prefer documentation-only, update the API
comments for scfIf, scfWhile, funcFunc and funcCall to state explicitly that
only scalar qubit types are accepted and converters cannot handle memref/tensor
containers.
- Around line 1052-1086: The Value branch of index handling in tensorExtract and
tensorInsert lacks type validation and can accept non-IndexType Values; update
the implementations (or utils::variantToValue usage) to detect when the index
variant holds a Value and assert or error if that Value's type is not
mlir::IndexType (e.g., using indexValue.getType().isa<mlir::IndexType>()),
emitting a clear diagnostic or failing fast with a descriptive message
mentioning tensorExtract/tensorInsert and the invalid index type; alternatively,
if you prefer a contract-based approach, document the precondition on
tensorExtract and tensorInsert that Value indices must be IndexType.
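A sketch of the IndexType guard for the Value branch; the error wording is an assumption:

```cpp
if (const auto* indexValue = std::get_if<mlir::Value>(&index)) {
  if (!mlir::isa<mlir::IndexType>(indexValue->getType()))
    llvm::report_fatal_error(
        "tensorExtract/tensorInsert: a Value index must have IndexType");
}
```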
/**
 * @brief Construct a tensor.extract operation at the given index
 *
 * @param tensor The tensor where the value is extracted
 * @param index The index where the value is extracted
 * @return The extracted value
 *
 * @par Example:
 * ```c++
 * q = builder.tensorExtract(tensor, iv);
 * ```
 * ```mlir
 * %q = tensor.extract %tensor[%iv] : tensor<3x!qco.qubit>
 * ```
 */
Value tensorExtract(Value tensor, const std::variant<int64_t, Value>& index);

/**
 * @brief Construct a tensor.insert operation at the given index
 *
 * @param element The inserted value
 * @param tensor The tensor where the value is inserted
 * @param index The index where the value is inserted
 * @return The resulting tensor
 *
 * @par Example:
 * ```c++
 * newTensor = builder.tensorInsert(element, tensor, iv);
 * ```
 * ```mlir
 * %newTensor = tensor.insert %q into %tensor[%iv] : tensor<3x!qco.qubit>
 * ```
 */
Value tensorInsert(Value element, Value tensor,
                   const std::variant<int64_t, Value>& index);
🧩 Analysis chain
🏁 Scripts executed:
- `rg -n -C10 "void.*tensorExtract|Value.*tensorExtract|void.*tensorInsert|Value.*tensorInsert" mlir/lib/Dialect/QCO/Builder/QCOProgramBuilder.cpp | head -100`
- `fd -t f "\.h" mlir/include | grep -i utils | head -20`
- `rg -n "variantToValue" mlir/include/mlir/Dialect/QCO/QCOUtils.h mlir/include/mlir/Dialect/Utils/Utils.h`
- `cat -n mlir/include/mlir/Dialect/Utils/Utils.h | sed -n '40,80p'`
- `rg -n "constantFromScalar" mlir/include/mlir/Dialect/Utils/Utils.h -A 20 | head -60`
The int64_t case is handled correctly, but Value indices lack type validation.
The tensorExtract and tensorInsert implementations use utils::variantToValue() which properly normalizes int64_t indices to index constants via builder.getIndexAttr(). However, when a Value index is provided, it is returned as-is with no validation that it has IndexType. This allows invalid IR if a non-index Value is passed. Add a runtime check to ensure Value indices have IndexType, or document this as a precondition.
/**
 * @brief Construct a scf.for operation with iter args
 *
 * @param lowerbound Lowerbound of the loop
 * @param upperbound Upperbound of the loop
 * @param step Stepsize of the loop
 * @param initArgs Initial arguments for the iter args
 * @param body Function that builds the body of the for operation
 * @return ValueRange of the results
 *
 * @par Example:
 * ```c++
 * builder.scfFor(lb, ub, step, initArgs, [&](Value iv, ValueRange iterArgs)
 * -> llvm::SmallVector<Value> { auto q1 = builder.x(iterArgs[0]);
 * return {q1};
 * });
 * ```
 * ```mlir
 * %q1 = scf.for %iv = %lb to %ub step %step iter_args(%arg0 = %q0)
 * -> !qco.qubit {
 *   %q2 = qco.x %arg0 : !qco.qubit -> !qco.qubit
 *   scf.yield %q2 : !qco.qubit
 * }
 * ```
 */
ValueRange
scfFor(const std::variant<int64_t, Value>& lowerbound,
       const std::variant<int64_t, Value>& upperbound,
       const std::variant<int64_t, Value>& step, ValueRange initArgs,
       llvm::function_ref<llvm::SmallVector<Value>(Value, ValueRange)> body);

/**
 * @brief Construct a scf.while operation with return values
 *
 * @param args Arguments for the while loop
 * @param beforeBody Function that builds the before body of the while
 * operation
 * @param afterBody Function that builds the after body of the while operation
 * @return ValueRange of the results
 *
 * @par Example:
 * ```c++
 * builder.scfWhile(args, [&](ValueRange iterArgs) -> llvm::SmallVector<Value>
 * { auto q1 = builder.h(iterArgs[0]);
 *   auto [q2, measureRes] = builder.measure(q1);
 *   builder.scfCondition(measureRes, q2);
 *   return {q2};
 * }, [&](ValueRange iterArgs) -> llvm::SmallVector<Value> {
 *   auto q1 = builder.x(iterArgs[0]);
 *   return {q1};
 * });
 * ```
 * ```mlir
 * %q1 = scf.while (%arg0 = %q0): (!qco.qubit) -> (!qco.qubit) {
 *   %q2 = qco.h(%arg0)
 *   %q3, %result = qco.measure %q2 : !qco.qubit
 *   scf.condition(%result) %q3 : !qco.qubit
 * } do {
 * ^bb0(%arg0 : !qco.qubit):
 *   %q4 = qco.x %arg0 : !qco.qubit -> !qco.qubit
 *   scf.yield %q4 : !qco.qubit
 * }
 * ```
 */
ValueRange
scfWhile(ValueRange args,
         llvm::function_ref<llvm::SmallVector<Value>(ValueRange)> beforeBody,
         llvm::function_ref<llvm::SmallVector<Value>(ValueRange)> afterBody);

/**
 * @brief Construct a scf.if operation with return values
 *
 * @param condition Condition for the if operation
 * @param qubits Qubits used in the if/else body
 * @param thenBody Function that builds the then body of the if
 * operation
 * @param elseBody Function that builds the else body of the if operation
 * @return ValueRange of the results
 *
 * @par Example:
 * ```c++
 * builder.scfIf(condition, qubits, [&]() -> llvm::SmallVector<Value> {
 *   auto q1 = builder.h(q0);
 *   return {q1};
 * }, [&]() -> llvm::SmallVector<Value> {
 *   auto q1 = builder.x(q0);
 *   return {q1};
 * });
 * ```
 * ```mlir
 * %q1 = scf.if %condition -> (!qco.qubit) {
 *   %q2 = qco.h %q0 : !qco.qubit -> !qco.qubit
 *   scf.yield %q2 : !qco.qubit
 * } else {
 *   %q2 = qco.x %q0 : !qco.qubit -> !qco.qubit
 *   scf.yield %q2 : !qco.qubit
 * }
 * ```
 */
ValueRange scfIf(const std::variant<bool, Value>& condition,
                 ValueRange qubits,
                 llvm::function_ref<llvm::SmallVector<Value>()> thenBody,
                 llvm::function_ref<llvm::SmallVector<Value>()> elseBody);

/**
 * @brief Construct a scf.condition operation with yielded values
 *
 * @param condition Condition for condition operation
 * @param yieldedValues ValueRange of the yieldedValues
 * @return Reference to this builder for method chaining
 *
 * @par Example:
 * ```c++
 * builder.scfCondition(condition, yieldedValues);
 * ```
 * ```mlir
 * scf.condition(%condition) %q0 : !qco.qubit
 * ```
 */
QCOProgramBuilder& scfCondition(Value condition, ValueRange yieldedValues);

//===--------------------------------------------------------------------===//
// Func operations
//===--------------------------------------------------------------------===//

/**
 * @brief Construct a func.call operation with return values
 *
 * @param name Name of the function that is called
 * @param operands ValueRange of the used operands
 * @return ValueRange of the results
 *
 * @par Example:
 * ```c++
 * auto q1 = builder.funcCall("test", {q0});
 * ```
 * ```mlir
 * %q1 = func.call @test(%q0) : (!qco.qubit) -> !qco.qubit
 * ```
 */
ValueRange funcCall(StringRef name, ValueRange operands);

/**
 * @brief Construct a func.func operation with return values
 *
 * @param name Name of the function that is called
 * @param argTypes TypeRange of the arguments
 * @param resultTypes TypeRange of the results
 * @param body Body of the function
 * @return Reference to this builder for method chaining
 *
 * @par Example:
 * ```c++
 * builder.funcFunc("test", argTypes, resultTypes, [&](ValueRange args) ->
 * llvm::SmallVector<Value> { auto q1 = builder.h(args[0]); return {q1};
 * })
 * ```
 * ```mlir
 * func.func @test(%arg0 : !qco.qubit) -> !qco.qubit {
 *   %q1 = qco.h %arg0 : !qco.qubit -> !qco.qubit
 *   func.return %q1 : !qco.qubit
 * }
 * ```
 */
QCOProgramBuilder&
funcFunc(StringRef name, TypeRange argTypes, TypeRange resultTypes,
         llvm::function_ref<llvm::SmallVector<Value>(ValueRange)> body);
Enforce/document scalar-only qubit operands for scf.if/scf.while/func.
Only scf.for supports qubit containers in the current QC↔QCO conversion flow; scf.if, scf.while, and func.* are scalar-qubit-only. The builder API currently accepts ValueRange without an explicit restriction, so it can emit IR the converters cannot handle. Please add runtime validation (preferred) or clearly document the restriction in these APIs.
Based on learnings “In the QCToQCO and QCOToQC conversion passes (mlir/lib/Conversion/QCToQCO/QCToQCO.cpp and mlir/lib/Conversion/QCOToQC/QCOToQC.cpp), only scf.for operations can handle memref (in QC) or tensor (in QCO) containers of qubits as operands/results. The scf.if and scf.while operations, as well as func operations, are restricted to scalar qubit types only and never handle memref/tensor containers. This constraint applies to both dialects and their conversions.”
📝 Suggested doc note (minimal)
@@
/**
* `@brief` Construct a scf.while operation with return values
+ * `@note` Only scalar !qco.qubit operands/results are supported by current
+ * QC↔QCO conversions.
@@
/**
* `@brief` Construct a scf.if operation with return values
+ * `@note` Only scalar !qco.qubit operands/results are supported by current
+ * QC↔QCO conversions.
@@
/**
* `@brief` Construct a func.call operation with return values
+ * `@note` Only scalar !qco.qubit operands/results are supported by current
+ * QC↔QCO conversions.
@@
/**
* `@brief` Construct a func.func operation with return values
+ * `@note` Only scalar !qco.qubit operands/results are supported by current
QC↔QCO conversions.
Description
This PR adds support for the conversion of the scf operations scf.while, scf.for, and scf.if, and the conversion of additional functions between the qc and the qco dialect. This allows the conversion of programs with non-linear control flow.
Reopened since #1391 got automatically closed after the branch was deleted.
Checklist: