feat: wire release command, add tar.xz support, unified installers #277
Conversation
- Wire up `core build release` subcommand (was orphaned)
- Wire up `core monitor` command (missing import in full variant)
- Add installer scripts for Unix (.sh) and Windows (.bat)
  - setup: Interactive with variant selection
  - ci: Minimal for CI/CD environments
  - dev: Full development variant
  - go/php/agent: Targeted development variants
- All scripts include security hardening:
  - Secure temp directories (mktemp -d)
  - Architecture validation
  - Version validation after GitHub API call
  - Proper cleanup on exit
  - PowerShell PATH updates on Windows (avoids setx truncation)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add tar.xz archive support using Borg's compress package
  - ArchiveXZ() and ArchiveWithFormat() for configurable compression
  - Better compression ratio than gzip for release artifacts
- Consolidate 12 installer scripts into 2 unified scripts
  - install.sh and install.bat with BunnyCDN edge variable support
  - Subdomains: setup.core.help, ci.core.help, dev.core.help, etc.
  - MODE and VARIANT transformed at edge based on subdomain
  - Installers prefer tar.xz with automatic fallback to tar.gz
- Fixed CodeRabbit issues: HTTP status patterns, tar error handling, verify_install params, VARIANT validation, CI PATH persistence

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- .core/build.yaml - cross-platform build configuration
- .core/release.yaml - release workflow configuration

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Consolidate planning documents in tasks/plans/ directory.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add curl timeout (--max-time) to prevent hanging on slow networks
- Rename TMPDIR to WORK_DIR to avoid clobbering the system env var
- Add chmod +x to ensure the binary has execute permissions
- Add error propagation after subroutine calls in the batch file
- Remove the System32 install attempt in CI mode (use a consistent INSTALL_DIR)
- Fix the HTTP status regex for HTTP/2 compatibility

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Summary of Changes (Gemini Code Assist): This pull request significantly enhances the Core CLI's build and release capabilities by integrating a new release command and introducing more efficient archiving methods. It also streamlines the installation process across operating systems and environments through unified, intelligent installer scripts. The changes aim to improve developer experience, build efficiency, and deployment reliability by centralizing configuration and addressing several operational issues.
Code Review
This pull request introduces significant new functionality by wiring up the release command, adding tar.xz compression support, and creating unified installers for Windows and Unix-like systems. The changes are well-structured, and the new installer scripts are robust. I've identified a few areas for improvement, mainly concerning robustness in the Windows installer and context propagation in the new release command. Overall, this is a great contribution.
📝 Walkthrough
Adds multi-target build/release configs, platform installers, RAG (Qdrant + Ollama) ingestion/querying and CLI, deploy/Ansible tooling with embedded-Python Coolify client, a Wails+Angular IDE variant, many frontend assets, archive format support, AI metrics/RAG integration, and removes several planning docs.
Sequence Diagram(s)
sequenceDiagram
actor User
participant CLI as "CLI (rag)"
participant RagPkg as "pkg/rag"
participant Ollama as "Ollama API"
participant Qdrant as "Qdrant DB"
User->>CLI: run `rag ingest` / `rag query`
CLI->>RagPkg: call Ingest(...) / Query(...)
RagPkg->>Ollama: request embeddings (Embed / EmbedBatch)
Ollama-->>RagPkg: return embeddings
RagPkg->>Qdrant: UpsertPoints / Search with vectors
Qdrant-->>RagPkg: ack / return results
RagPkg-->>CLI: formatted output or stats
CLI-->>User: display results
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 4
🤖 Fix all issues with AI agents
In `@install.bat`:
- Line 1: Convert install.bat to use CRLF line endings instead of LF: open the
file (the line containing "@echo off") and change its EOL setting to CRLF so the
batch script uses Windows-style line endings to avoid label parsing issues on
some environments; ensure any other lines in install.bat are saved with CRLF as
well and commit the file with the updated line endings.
- Around line 35-50: The curl call inside the Windows batch version resolution
block (the for /f ... 'curl -fsSL
"https://api.github.com/repos/%REPO%/releases/latest"' invocation that sets
VERSION) can hang; modify that curl invocation to include a timeout flag (e.g.,
--max-time 10 or -m 10) so the for loop fails fast like the Bash installer;
update the curl command inside the for /f (...) in the VERSION resolution logic
to include the timeout and ensure the existing error checks for empty or
unchanged VERSION remain unchanged.
In `@install.sh`:
- Around line 158-175: In install_ci(), avoid invoking the binary by name for
the final version check (the `${BINARY} --version` line) because CI images may
not have /usr/local/bin on PATH; instead call the absolute installed path (e.g.
/usr/local/bin/${BINARY}) for the version check or compute
BIN_PATH="/usr/local/bin/${BINARY}" and use that; keep existing chmod/mv logic
and only change the final invocation to use the absolute path to ensure the
check succeeds in minimal CI environments.
In `@pkg/build/buildcmd/cmd_release.go`:
- Around line 47-67: The error handling in this block uses cli.WrapVerb and
cli.Err instead of the project's E() helper; update the handlers so os.Getwd
errors are returned via E("build.release", "get working directory", err), the
missing config case uses E("build.release", "release config not found", nil) or
E with a wrapped error, and release.LoadConfig failures use E("build.release",
"load config", err) — locate the calls around os.Getwd, release.ConfigExists,
and release.LoadConfig and replace the cli.WrapVerb / cli.Err uses with the
E(...) helper including the service name "build.release", a short failure
description, and the underlying error where available.
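A minimal sketch of the reworked handlers, assuming `E(service, action, err)` has the signature implied above; the surrounding variable names are illustrative, not lifted from the file:

```go
// Sketch only: E() is the project's contextual error helper described above.
wd, err := os.Getwd()
if err != nil {
	return E("build.release", "get working directory", err)
}
if !release.ConfigExists(wd) {
	return E("build.release", "release config not found", nil)
}
cfg, err := release.LoadConfig(wd)
if err != nil {
	return E("build.release", "load config", err)
}
_ = cfg // hand off to the existing release flow
```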
🧹 Nitpick comments (1)
pkg/i18n/locales/en_GB.json (1)
238-239: Prefer consistent “dry run” phrasing across strings. Most existing entries use “dry run” (no hyphen); consider aligning this one for UI consistency.
♻️ Suggested tweak
- "release.dry_run_hint": "(dry-run) no artifacts will be published", + "release.dry_run_hint": "(dry run) no artifacts will be published",
Add RAG (Retrieval Augmented Generation) tools for storing documentation in Qdrant vector database and querying with semantic search. This replaces the Python tools/rag implementation with a native Go solution.

New commands:
- core rag ingest [directory] - Ingest markdown files into Qdrant
- core rag query [question] - Query vector database with semantic search
- core rag collections - List and manage Qdrant collections

Features:
- Markdown chunking by sections and paragraphs with overlap
- UTF-8 safe text handling for international content
- Automatic category detection from file paths
- Multiple output formats: text, JSON, LLM context injection
- Environment variable support for host configuration

Dependencies:
- github.com/qdrant/go-client (gRPC client)
- github.com/ollama/ollama/api (embeddings API)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Actionable comments posted: 10
🤖 Fix all issues with AI agents
In `@internal/cmd/rag/cmd_ingest.go`:
- Around line 68-79: Before constructing rag.IngestConfig, validate chunkSize
and chunkOverlap: ensure chunkSize > 0 and chunkOverlap >= 0 and chunkOverlap <
chunkSize; if validation fails, return an error (or print a clear message and
exit) so you don't build rag.ChunkConfig with invalid values. Add the checks
right before creating cfg (using variables chunkSize and chunkOverlap) and
surface a useful error indicating the expected ranges and the offending values.
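For illustration, a hedged sketch of the guard clauses; the flag variable names follow the comment, and the error wording is arbitrary:

```go
// Validate chunking flags before building the ingest config.
if chunkSize <= 0 {
	return fmt.Errorf("--chunk-size must be > 0, got %d", chunkSize)
}
if chunkOverlap < 0 || chunkOverlap >= chunkSize {
	return fmt.Errorf("--chunk-overlap must be in [0, chunk-size), got %d", chunkOverlap)
}
// ...then construct rag.IngestConfig / rag.ChunkConfig exactly as before.
```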
- Around line 113-165: IngestDirectory and IngestFile are exported but live
under an internal package so external consumers (e.g., MCP) cannot import them;
either move these functions to a public package (e.g., pkg/rag) by changing the
package declaration and file location and updating all callers to import that
package (keep the function names and existing calls to rag.NewQdrantClient,
rag.Ingest, rag.IngestFile, etc.), or make them truly internal by unexporting
them (rename to ingestDirectory and ingestFile), remove the “exported” comments,
and ensure all callers are within the same internal tree; pick one approach and
apply consistently across callers and imports.
In `@internal/cmd/rag/cmd_query.go`:
- Around line 54-57: Validate the numeric inputs before casting to uint64: add
guard clauses that check the `limit` flag value and the `topK` parameter are >=
0 (and >0 if zero is invalid) and return an error or default before creating
`rag.QueryConfig` or using `topK`; specifically, validate the `limit` variable
before constructing `rag.QueryConfig{ Limit: uint64(limit) }` and validate
`topK` before converting it to uint64/using it (where `topK` is referenced), and
ensure you return a clear error message when the value is negative to prevent
uint64 wraparound.
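A possible shape for the guard, assuming `limit` is an int-typed flag; field names follow the comment and may differ in the file:

```go
if limit <= 0 {
	return fmt.Errorf("--limit must be a positive integer, got %d", limit)
}
cfg := rag.QueryConfig{
	// ...existing fields...
	Limit: uint64(limit), // safe: limit is known to be positive here
}
```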
In `@internal/cmd/rag/cmd_rag.go`:
- Around line 9-35: The flag defaults in initFlags() currently use hard-coded
literals which overwrite environment-initialized values; change the
PersistentFlags().StringVar/IntVar calls to pass the existing variables as the
default arguments (e.g., use qdrantHost, qdrantPort, ollamaHost, ollamaPort,
model) so the env overrides set earlier persist, and ensure the variables
(qdrantHost, ollamaHost, model, etc.) are initialized in their var block before
initFlags() runs.
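One way to keep the environment overrides, sketched with hypothetical names (`ragCmd` and the `envOr` helper are assumptions; the real flag set lives in initFlags()):

```go
// Env-derived defaults are computed once, then passed as the flag defaults so
// --qdrant-host etc. still override them on the command line.
var qdrantHost = envOr("QDRANT_HOST", "localhost") // envOr is a hypothetical helper

func envOr(key, def string) string {
	if v := os.Getenv(key); v != "" {
		return v
	}
	return def
}

func initFlags() {
	ragCmd.PersistentFlags().StringVar(&qdrantHost, "qdrant-host", qdrantHost, "Qdrant host")
	// ...repeat for qdrantPort, ollamaHost, ollamaPort and model...
}
```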
In `@pkg/rag/chunk.go`:
- Around line 32-110: ChunkMarkdown uses byte length checks (len()) for chunk
sizing but ChunkConfig.Size is in characters; replace those len() checks with
rune counts (use utf8.RuneCountInString) so non-ASCII text sizes are correct:
change the section size check (currently len(section) <= cfg.Size) to compare
utf8.RuneCountInString(section) and change the paragraph accumulation check
(len(currentChunk)+len(para)+2 <= cfg.Size) to compare
utf8.RuneCountInString(currentChunk)+utf8.RuneCountInString(para)+2 (the +2
remains for the two newline characters counted as runes); keep the existing
rune-safe overlap logic (the []rune conversion and cfg.Overlap handling) as-is
and import "unicode/utf8".
In `@pkg/rag/ingest.go`:
- Around line 51-100: Replace plain fmt.Errorf returns with the E(...) helper to
add contextual service/action info: wrap errors from
filepath.Abs(cfg.Directory), os.Stat(absDir), qdrant.CollectionExists(ctx,
cfg.Collection), qdrant.DeleteCollection(ctx, cfg.Collection),
qdrant.CreateCollection(ctx, cfg.Collection), filepath.WalkDir and the
empty-files error using E("rag", "<action>", err) (e.g., action values like
"resolve directory", "access directory", "check collection", "delete
collection", "create collection", "walk directory", "no markdown files") so each
return uses E(service, action, err) for consistent contextual errors; also apply
the same replacement for the other occurrences noted around the 166-168 and
176-210 blocks that use fmt.Errorf.
In `@pkg/rag/ollama.go`:
- Around line 66-98: Replace the fmt.Errorf calls in Embed and EmbedBatch to use
the project's E() helper for contextual errors: in OllamaClient.Embed, wrap the
error returned from o.client.Embed using E("ollama", "failed to generate
embedding", err) and replace the empty response error with E("ollama", "empty
embedding response", nil); in OllamaClient.EmbedBatch, wrap the per-item embed
error using E("ollama", fmt.Sprintf("failed to embed text %d", i), err) so all
error returns use the E(service, action, err) convention.
- Around line 35-48: The NewOllamaClient uses http.DefaultClient which has no
timeout; add a Timeout field to OllamaConfig (type time.Duration, default 30s if
zero) and update NewOllamaClient to construct an http.Client with Timeout:
cfg.Timeout (or 30*time.Second when cfg.Timeout==0) and pass that client to
api.NewClient; update any code that constructs OllamaConfig to allow the new
field but treat zero as default so behavior stays safe.
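A hedged sketch of the constructor change, assuming api.NewClient continues to accept an *http.Client as the existing code already does:

```go
// NewOllamaClient with a configurable timeout (zero means 30s).
timeout := cfg.Timeout
if timeout == 0 {
	timeout = 30 * time.Second
}
httpClient := &http.Client{Timeout: timeout}
client := api.NewClient(baseURL, httpClient) // baseURL built exactly as before
```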
In `@pkg/rag/qdrant.go`:
- Around line 37-48: The error returns in Qdrant-related functions (e.g.,
NewQdrantClient, EnsureCollection/CreateCollection, DeleteCollection,
UpsertPoints, SearchPoints, and Close) must be wrapped using the E() helper for
consistent contextual errors; replace fmt.Errorf or raw error returns with
E("qdrant", "<action>", err) using descriptive action strings like "connect",
"create collection", "delete collection", "upsert points", "search points",
etc., so each return becomes return nil, E("qdrant", "connect", err) (or
appropriate signature) and ensure the E helper is referenced/imported where
needed.
In `@pkg/rag/query.go`:
- Around line 38-55: The Query function currently returns fmt.Errorf on
embedding and search failures; replace those with the E() helper to provide
consistent contextual errors: wrap the embedding error (from ollama.Embed) using
E("ollama", "generate query embedding", err) and wrap the search error (from
qdrant.Search) using E("qdrant", "search collection", err); update the two
return statements in Query to use these E(...) calls so the service name,
failure description, and underlying error are included.
🧹 Nitpick comments (9)
internal/cmd/rag/cmd_collections.go (2)
29-38: Prefer DI-managed Qdrant client lifecycle.
Direct instantiation bypasses ServiceForT and lifecycle hooks. Consider obtaining the Qdrant client via DI and managing startup/shutdown centrally for consistent config and reuse.
As per coding guidelines 'Services should be managed via dependency injection using ServiceForT' and 'Use OnStartup and OnShutdown hooks for service lifecycle management'.
35-50: Use E() for contextual errors in the collections flow.
Wrapping errors with E() keeps service context and failure descriptions consistent.
🔧 Suggested fix
- return fmt.Errorf("failed to connect to Qdrant: %w", err)
+ return E("rag.collections", "connect to Qdrant", err)
As per coding guidelines: 'Use the E() helper function for contextual error handling with service name, failure description, and underlying error'.
internal/cmd/rag/cmd_ingest.go (2)
37-63: Use DI-managed Qdrant/Ollama services in the ingest command.
Creating clients directly in the command bypasses ServiceForT and lifecycle hooks. Consider obtaining these clients via DI and centralizing startup/shutdown.
As per coding guidelines 'Services should be managed via dependency injection using ServiceForT' and 'Use OnStartup and OnShutdown hooks for service lifecycle management'.
44-66: Wrap connection/health/model errors with E().
This keeps error context consistent across the CLI and exported helpers.
🔧 Suggested fix
- return fmt.Errorf("failed to connect to Qdrant: %w", err)
+ return E("rag.ingest", "connect to Qdrant", err)
As per coding guidelines: 'Use the E() helper function for contextual error handling with service name, failure description, and underlying error'.
Also applies to: 121-162
internal/cmd/rag/cmd_query.go (3)
32-48: Prefer DI-managed Qdrant/Ollama clients.
Direct client creation bypasses ServiceForT and lifecycle hooks; consider retrieving them via DI for consistent configuration and lifecycle control.
As per coding guidelines 'Services should be managed via dependency injection using ServiceForT' and 'Use OnStartup and OnShutdown hooks for service lifecycle management'.
33-51: Use E() for connection errors.
This keeps error context consistent and structured.
🔧 Suggested fix
- return fmt.Errorf("failed to connect to Qdrant: %w", err)
+ return E("rag.query", "connect to Qdrant", err)
As per coding guidelines: 'Use the E() helper function for contextual error handling with service name, failure description, and underlying error'.
Also applies to: 82-90
80-107: Exported functions in an internal package violate encapsulation — reconsider placement if external use is intended.
The functions QueryDocs and QueryDocsContext are exported (capitalized) but located in internal/cmd/rag/, which prevents external packages from importing them due to Go's internal path restriction. The comment claims they're "exported for use by other packages (e.g., MCP)", but:
- No imports of internal/cmd/rag exist anywhere in the codebase
- MCP code lives in the public pkg/mcp/ package and does not reference these functions
- Only internal usage: QueryDocsContext calls QueryDocs within the same file
If these helpers are genuinely needed by external consumers (like MCP), move them to a public package like pkg/rag/query.go. If they're only for internal CLI commands, unexport them (lowercase) and update the comment. Per coding guidelines, internal packages must maintain proper encapsulation.
pkg/rag/ingest.go (1)
103-170: Consider streaming upserts to cap memory growth.
Accumulating all points before batching can spike memory on large datasets; consider flushing batches as you go (per file or per batch) to keep memory bounded.
pkg/rag/qdrant.go (1)
30-34: Consider wiring QdrantClient via ServiceFor[T] if it's a long-lived service.
If this client is shared, prefer DI with ServiceFor[T]() and manage lifecycle with OnStartup/OnShutdown (and IPC where applicable) to align with service conventions. As per coding guidelines (**/*.go): services should be managed via dependency injection using ServiceFor[T](); use OnStartup and OnShutdown hooks for service lifecycle management; use the IPC message-passing system for service communication.
Implement infrastructure deployment system with:
- pkg/ansible: Pure Go Ansible executor
- Playbook/inventory parsing (types.go, parser.go)
- Full execution engine with variable templating, loops, blocks,
conditionals, handlers, and fact gathering (executor.go)
- SSH client with key/password auth and privilege escalation (ssh.go)
- 35+ module implementations: shell, command, copy, template, file,
apt, service, systemd, user, group, git, docker_compose, etc. (modules.go)
- pkg/deploy/coolify: Coolify API client wrapping Python swagger client
- List/get servers, projects, applications, databases, services
- Generic Call() for any OpenAPI operation
- pkg/deploy/python: Embedded Python runtime for swagger client integration
- internal/cmd/deploy: CLI commands
- core deploy servers/projects/apps/databases/services/team
- core deploy call <operation> [params-json]
This enables Docker-free infrastructure deployment with Ansible-compatible
playbooks executed natively in Go.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Fix fmt.Sprintf format verb error in ssh.go (remove unused stat command)
- Fix errcheck warnings by explicitly ignoring best-effort operations
- Fix ineffassign warning in cmd_ansible.go

All golangci-lint checks now pass for deploy packages.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Address CodeQL security alert by using the user's known_hosts file for SSH host key verification when available. Falls back to accepting any key only when known_hosts doesn't exist (common in containerized or ephemeral environments).

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Actionable comments posted: 20
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
go.mod (1)
5-82: ⚠️ Potential issue | 🟠 Major — After modifying dependencies, run workspace sync and vulnerability checks.
After updating go.mod with new dependencies, ensure core go work sync is executed to synchronize the workspace, then run vulnerability/license checks via govulncheck for the newly added modules:
- core go work sync — syncs workspace configuration
- govulncheck ./... — scans for known vulnerabilities (run in CI or locally if available)
Confirm the new dependency versions (go-embed-python, ollama, qdrant/go-client, grpc, protobuf, etc.) align with your vulnerability and license policies before release.
🤖 Fix all issues with AI agents
In `@internal/cmd/deploy/cmd_deploy.go`:
- Around line 127-131: In printItem, guard the uuid[:8] slice to avoid panics
when the "uuid" value is shorter than 8 characters: check the length of the uuid
string (from item["uuid"]) and use a safe substring (e.g., slice to min(8,
len(uuid)) or fall back to the whole uuid) before passing it to
cli.DimStyle.Render so short IDs won't cause a runtime panic.
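A small sketch of the clamp; the Render call stands in for the existing output path in printItem:

```go
uuid, _ := item["uuid"].(string)
short := uuid
if len(uuid) > 8 {
	short = uuid[:8] // safe: never slices past the end of short IDs
}
fmt.Print(cli.DimStyle.Render(short))
```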
In `@pkg/ansible/executor.go`:
- Around line 77-203: Replace direct fmt.Errorf wraps with the E() contextual
error helper in Run, runPlay and runRole: in Run when parsing the playbook
(replace fmt.Errorf("parse playbook: %w", err) with E("executor","parse
playbook",err)), in Run's play loop when returning per-play errors (replace
fmt.Errorf("play %d (%s): %w", i, plays[i].Name, err) with E("executor",
fmt.Sprintf("play %d (%s)", i, plays[i].Name), err)), in runRole when parsing a
role (replace fmt.Errorf("parse role %s: %w", roleRef.Role, err) with
E("executor", fmt.Sprintf("parse role %s", roleRef.Role), err)), and where
runRole returns task errors (wrap the underlying error with E("executor",
fmt.Sprintf("role %s task", roleRef.Role), err)) so all contextual error returns
use E() while preserving existing variable-restore behavior.
- Around line 302-348: runLoop overwrites e.vars[loopVar] and
e.vars[task.LoopControl.IndexVar] but never restores prior values; save the
previous values (and whether they existed) for loopVar and the index var before
the for-loop, then restore them after the loop (using defer or a finally-style
restore) so variables are restored even if executeModule returns an error or the
loop breaks early; update runLoop (referencing runLoop, e.vars, loopVar,
task.LoopControl.IndexVar, executeModule) to delete the var if it didn't exist
previously or reset it to its original value if it did, and ensure this
restoration occurs before returning.
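The save/restore could look roughly like this (repeat the same pattern for the index variable); the names follow the comment above:

```go
// Preserve any pre-existing value of the loop variable and restore it on exit,
// even if executeModule returns an error partway through the loop.
prev, had := e.vars[loopVar]
defer func() {
	if had {
		e.vars[loopVar] = prev
	} else {
		delete(e.vars, loopVar)
	}
}()
// ...existing loop keeps assigning e.vars[loopVar] per item and calling executeModule...
```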
- Around line 529-587: gatherFacts is populating host-specific facts into the
shared e.vars via SetVar, causing cross-host contamination; stop writing global
vars here and keep facts per-host: remove or replace the SetVar calls in
gatherFacts and instead store the same keys under a per-host map (e.facts[host]
already exists) or extend Executor with a host-scoped vars map (e.g.,
hostVars[host]) and write
ansible_hostname/ansible_fqdn/ansible_distribution/ansible_distribution_version/ansible_architecture/ansible_kernel
there; also ensure template/variable resolution logic prefers e.facts or
host-scoped vars over the global e.vars (or change SetVar signature to accept a
host parameter) so facts for one host don’t leak into others.
- Around line 384-406: runIncludeTasks currently templates include/import paths
but passes them to ParseTasks as-is, which resolves relative paths against the
process CWD; change it to resolve relative to the playbook base dir before
parsing. After computing path = e.templateString(...), join it with the play's
base path (e.g. filepath.Join(play.basePath, path) or play.BasePath depending on
the Play struct) to produce the full include path, then call e.parser.ParseTasks
on that joined path; add an import for filepath if needed and preserve existing
error wrapping and subsequent runTaskOnHosts calls.
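Sketch of the base-dir resolution, assuming the play carries its source directory in a field such as play.BasePath (field name hypothetical):

```go
path := e.templateString(rawPath, host, task) // rawPath: the include/import value
if !filepath.IsAbs(path) {
	path = filepath.Join(play.BasePath, path) // resolve relative to the playbook dir, not the CWD
}
tasks, err := e.parser.ParseTasks(path)
```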
- Around line 797-834: The applyFilter method currently no-ops the "b64decode"
branch; update the b64decode handling inside func (e *Executor)
applyFilter(value, filter string) string to actually decode the input using
base64.StdEncoding.DecodeString(value) and return the decoded string, and on
decode errors fall back to returning the original value (keep function signature
unchanged). Ensure you import encoding/base64 if not present and only modify the
"b64decode" case to perform decoding with the described fallback.
- Around line 943-968: TemplateFile currently calls tmpl.Execute(&buf, e.vars)
which ignores host/task-scoped variables and facts that
templateString/resolveExpr handle; change TemplateFile so before tmpl.Execute
you build a rendering context by merging e.vars with the host-level vars
(including inventory_hostname), task-level vars (task.Vars) and task/facts
(task.Facts) — overlaying task vars/facts on top of global e.vars — and then
pass that merged map to tmpl.Execute; keep the existing fallback to
templateString(string(content), host, task) unchanged for parse/exec errors.
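A rough sketch of the merged rendering context; field names such as task.Vars follow the comment, and precedence is task-level over global:

```go
// Build a per-render context: globals first, then host identity, then task vars.
data := make(map[string]any, len(e.vars)+len(task.Vars)+1)
for k, v := range e.vars {
	data[k] = v
}
data["inventory_hostname"] = host
for k, v := range task.Vars {
	data[k] = v
}
err = tmpl.Execute(&buf, data) // on error, fall back to templateString(...) as today
```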
In `@pkg/ansible/modules.go`:
- Around line 1365-1380: The code slices key[:40] in the module
(moduleAuthorizedKey) when building the sed and grep commands which will panic
for keys shorter than 40 bytes; fix both occurrences by clamping the slice end
with something like prefixLen := min(len(key), 40) and use key[:prefixLen] in
the sed and grep command constructions so short keys are handled safely while
preserving the original behavior for longer keys.
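Sketch of the clamp, written without the min builtin so it stays Go-version agnostic:

```go
prefixLen := len(key)
if prefixLen > 40 {
	prefixLen = 40
}
keyPrefix := key[:prefixLen] // safe for short keys; used in the sed/grep commands
```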
- Around line 642-653: The modulePackage function currently probes for
apt/yum/dnf but always calls moduleApt; update modulePackage to handle non-apt
systems by checking the detected package manager (inspect stdout from the which
command) and either delegate to appropriate handlers (e.g., call new moduleYum
or moduleDnf functions if you implement them) or return a clear error when
yum/dnf is detected and unsupported; modify modulePackage (and add
moduleYum/moduleDnf stubs if needed) so that the branch for yum/dnf does not
fall through to moduleApt.
- Around line 945-969: The current sleep machinery is broken: sleepChan
busy-loops and never actually sleeps; replace it by having ctxSleep use
time.After to wait the given seconds and remove sleepChan. Specifically, import
"time", change ctxSleep(ctx, seconds) to spawn a goroutine that does select {
case <-ctx.Done(): case <-time.After(time.Duration(seconds)*time.Second): } then
close the returned channel, and delete the sleepChan function and any references
to it (e.g., where ctxSleep is used like in modulePause).
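A context-aware replacement could be as small as this; the returned-channel signature is assumed from the description above:

```go
// ctxSleep waits the given number of seconds or until ctx is cancelled,
// then closes the returned channel. No busy-waiting.
func ctxSleep(ctx context.Context, seconds int) <-chan struct{} {
	done := make(chan struct{})
	go func() {
		defer close(done)
		select {
		case <-ctx.Done():
		case <-time.After(time.Duration(seconds) * time.Second):
		}
	}()
	return done
}
```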
In `@pkg/ansible/parser.go`:
- Around line 27-143: Replace all fmt.Errorf wrappers in ParsePlaybook,
ParseInventory, ParseTasks and ParseRole with the E(...) helper to provide
structured context: for each os.ReadFile or yaml.Unmarshal error in
ParsePlaybook, ParseInventory, ParseTasks return nil, E("ansible", "read
playbook"/"parse playbook"/"read inventory"/"parse inventory"/"read
tasks"/"parse tasks", err) (use the appropriate short description per call); for
the processPlay and extractModule error returns in ParsePlaybook and ParseTasks
wrap the returned error with E("ansible", "process play <i>"/"task <i>", err);
and in ParseRole replace the role-not-found fmt.Errorf with return nil,
E("ansible", fmt.Sprintf("role %s not found", name), nil) and similarly wrap
defaults/vars yaml.Unmarshal/read errors with E("ansible", "load role
defaults"/"load role vars", err) where applicable. Ensure each replacement
references the exact function names ParsePlaybook, ParseInventory, ParseTasks,
ParseRole and the helper E(service, message, err).
In `@pkg/ansible/ssh.go`:
- Around line 142-150: Replace the fmt.Errorf calls in the SSH connection code
with the E() contextual error helper: when handling the dial error (the block
that returns fmt.Errorf("dial %s: %w", addr, err)) call E("ansible.ssh", "dial",
err, "addr", addr) (or equivalent signature your E() uses) and when handling
ssh.NewClientConn failure (the block that closes conn then returns
fmt.Errorf("ssh connect %s: %w", addr, err)) close conn and return
E("ansible.ssh", "connect", err, "addr", addr); apply the same pattern anywhere
else in pkg/ansible/ssh.go where fmt.Errorf("%s: %w", ...) is used so all
contextual errors use E() with service name "ansible.ssh", a short failure
description ("dial"/"connect"/etc.), the underlying err, and relevant context
keys like addr.
- Around line 185-202: The current become wrapping embeds passwords into the
command string (using echo '%s' | sudo -S ...), which leaks them; update the
become handling for c.become in the SSH command wrapper to avoid embedding
c.becomePass or c.password into cmd: construct sudo invocations without echo
(use "sudo -S -u <user> bash -c '<escapedCmd>'" or "sudo -n -u <user> ...") and
write the password to the remote session's stdin instead (use the SSH session's
stdin pipe or equivalent) when c.becomePass or c.password is present; keep the
fallback passwordless sudo branch (sudo -n ...) and continue to use escapedCmd
and becomeUser as before.
- Around line 253-319: The Upload and Download paths are prepending "sudo"
themselves which causes double sudo and bypasses the SSHClient
become-user/password logic; change them to use the SSHClient's existing become
handling instead of hard-coding "sudo": remove manual "sudo" from dirCmd,
writeCmd and the Download cmd and route the operations through the same
Run/Run-like path that applies c.become, c.becomeUser and c.becomePass (or
implement a helper that wraps the write session with the same become escalation
used by Run); also ensure the write path (session2 Start/Wait) uses that helper
or Run with stdin so the become user/password are honored rather than ignored.
- Around line 45-66: NewSSHClient sets a default for SSHConfig.Timeout but never
stores it on the SSHClient and Connect uses a hard-coded 30s; update
NewSSHClient to assign cfg.Timeout to a timeout field on SSHClient (e.g.,
client.timeout) and modify SSHClient.Connect to use that timeout value
(SSHClient.timeout or SSHConfig.Timeout) for the SSH dial/connection instead of
the fixed 30*time.Second so custom timeouts are honored.
- Around line 130-134: The ssh.ClientConfig currently uses
ssh.InsecureIgnoreHostKey() which disables host key verification; update the SSH
configuration by adding a HostKeyCallback field on the SSHConfig type (e.g.,
SSHConfig.HostKeyCallback) and plumb it into the ssh.ClientConfig's
HostKeyCallback instead of always calling ssh.InsecureIgnoreHostKey(); default
the new field to ssh.InsecureIgnoreHostKey() for backward compatibility (and
document the risk) or, preferably, implement and call a known_hosts loader to
return a verification callback when none is provided so callers can supply
custom verification functions and avoid MITM exposure (update the code that
builds ssh.ClientConfig, the SSHConfig struct, and any constructors that create
SSHConfig).
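One hedged way to build the callback with golang.org/x/crypto/ssh/knownhosts, keeping the insecure fallback only for environments without a known_hosts file:

```go
func hostKeyCallback() ssh.HostKeyCallback {
	if home, err := os.UserHomeDir(); err == nil {
		path := filepath.Join(home, ".ssh", "known_hosts")
		if _, statErr := os.Stat(path); statErr == nil {
			if cb, khErr := knownhosts.New(path); khErr == nil {
				return cb
			}
		}
	}
	// Documented fallback for containerised/ephemeral hosts with no known_hosts.
	return ssh.InsecureIgnoreHostKey()
}
```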
In `@pkg/deploy/coolify/client.go`:
- Around line 63-90: The Call method currently calls python.CoolifyScript
without forwarding the client's timeout and SSL settings, so update the
python.CoolifyScript API and its uses: change the CoolifyScript function
signature (in pkg/deploy/python/python.go) to accept timeout (int) and verifySSL
(bool), modify the generator to emit those values instead of the hardcoded
timeout=30 and verify_ssl=True, and update the call site in Client.Call to pass
c.Timeout and c.VerifySSL when invoking python.CoolifyScript; ensure any other
callers of CoolifyScript are updated to the new signature.
In `@pkg/deploy/python/python.go`:
- Around line 108-129: The CoolifyScript function currently ignores json.Marshal
errors which can produce invalid JSON; change CoolifyScript to return (string,
error), handle json.Marshal error and return it (use the E(...) helper pattern
for wrapping), and update its single caller in pkg/deploy/coolify/client.go to
accept the (string, error) result and propagate or handle the error accordingly;
make sure references to CoolifyScript and the caller's usage are updated to the
new signature so marshal failures surface instead of silently producing bad
payloads.
- Around line 33-69: RunScript currently ignores ctx and uses cmd.Output() and
os.PathError; change it to run the command in a context-aware way, capture
stderr, and wrap errors with E(). Concretely, for the cmd returned by
ep.PythonCmd(...): create bytes.Buffer for Stdout and Stderr, assign to
cmd.Stdout/cmd.Stderr, Start the cmd, then Wait in a goroutine while
concurrently selecting on ctx.Done(); on cancellation call cmd.Process.Kill()
and return E(op, "script cancelled", ctx.Err()). When cmd.Wait returns an error,
detect *exec.ExitError to extract stderr bytes and return E(op, "script failed",
fmt.Errorf("%s: %w", string(stderrBytes), exitErr)). Also replace other error
returns with E(op, "...", err) and keep Init() usage unchanged.
In `@pkg/i18n/locales/en_GB.json`:
- Around line 258-261: Add missing localization keys for each deploy subcommand
by adding entries like "deploy.servers.short", "deploy.projects.short",
"deploy.apps.short", "deploy.databases.short", "deploy.services.short",
"deploy.team.short", and "deploy.call.short" into en_GB.json (following the
existing "deploy.short"/"deploy.long" pattern), then update the code that
currently uses hardcoded English descriptions to call i18n.T(...) (e.g.
i18n.T('deploy.servers.short')) wherever the deploy subcommand descriptions are
rendered (search for usages around the deploy command handling and replace
hardcoded strings with i18n.T references).
🧹 Nitpick comments (8)
pkg/deploy/python/python.go (1)
14-31: Consider DI-managed lifecycle for the embedded Python runtime.
Line 15 initializes a package-level singleton via sync.Once with no shutdown path. To improve lifecycle management/testing, wire this runtime as a service using ServiceFor[*python.EmbeddedPython] with OnStartup/OnShutdown hooks. As per coding guidelines, services should be managed via dependency injection using ServiceFor[T] and use OnStartup and OnShutdown hooks for service lifecycle management.
pkg/deploy/coolify/client.go (1)
41-52: Use the E() helper for new error paths.
Line 44 and Line 47 wrap errors with fmt.Errorf. Please switch to E() so errors carry consistent service context. As per coding guidelines, use the E() helper function for contextual error handling with service name, failure description, and underlying error.
internal/cmd/deploy/cmd_deploy.go (2)
151-167: Use cmd.Context() for request-scoped cancellation.
Handlers use context.Background() (Line 157, 176, 195, etc.), which ignores cancellation/timeouts. Prefer cmd.Context() and pass it through to client calls.
🔧 Example adjustment (apply similarly to other handlers)
- servers, err := client.ListServers(context.Background())
+ servers, err := client.ListServers(cmd.Context())
266-271: Use the E() helper for JSON param errors.
Line 270 wraps JSON errors via fmt.Errorf. Please use E() so errors carry consistent service context. As per coding guidelines, use the E() helper function for contextual error handling with service name, failure description, and underlying error.
internal/cmd/deploy/cmd_ansible.go (2)
215-253: Use cmd.Context() as the base context.
runAnsible and runAnsibleTest create contexts from context.Background(), so CLI cancellation/timeouts are ignored. Prefer cmd.Context().
🔧 Example adjustment
- ctx := context.Background()
+ ctx := cmd.Context()
@@
- ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
+ ctx, cancel := context.WithTimeout(cmd.Context(), 30*time.Second)
100-102: Align new error paths with the E() helper.
Line 101 uses fmt.Errorf for playbook-not-found; please wrap with E() for consistent context. As per coding guidelines, use the E() helper function for contextual error handling with service name, failure description, and underlying error.
pkg/ansible/modules.go (1)
126-131: Standardize module errors via the E() helper.
Line 131 (and other module error paths) use fmt.Errorf. Please switch to E() for consistent service context. As per coding guidelines, use the E() helper function for contextual error handling with service name, failure description, and underlying error.
pkg/ansible/ssh.go (1)
335-355: Remove the unused stat command construction.
The first cmd assignment is immediately overwritten (Line 338–355), which is dead code.
Actionable comments posted: 8
🤖 Fix all issues with AI agents
In `@pkg/ansible/modules.go`:
- Around line 14-20: executeModule currently changes the SSHClient become state
for a task (using SSHClient.SetBecome when Task.Become is true) but never
restores the previous state; capture the client's current become state and user
before calling SetBecome (read via existing getters or add accessors on
SSHClient if needed), then restore that saved state after module execution (use
defer to ensure restoration on success or error) so
SSHClient.SetBecome(previousState, previousUser, "") is called at the end of
executeModule; reference executeModule, SSHClient.SetBecome, Task.Become and
Task.BecomeUser when making the change.
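Sketched restore logic; the Become()/BecomeUser() accessors are hypothetical (the comment notes they may need to be added to SSHClient):

```go
prevBecome, prevUser := client.Become(), client.BecomeUser()
defer client.SetBecome(prevBecome, prevUser, "") // restore on any exit path

if task.Become {
	client.SetBecome(true, task.BecomeUser, "")
}
// ...run the module as before...
```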
In `@pkg/deploy/python/python.go`:
- Around line 95-101: DevOpsPath currently ignores the error from
os.UserHomeDir(), which can produce an empty/invalid path; modify DevOpsPath to
check the error returned by os.UserHomeDir() (or change the function signature
to return (string, error)) and return a clear error when the home directory
cannot be determined instead of silently joining an empty string, updating
callers if you opt for the (string, error) signature; locate the DevOpsPath
function and the os.UserHomeDir() call to implement the error check and proper
error propagation or return value change.
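If the signature is changed to return an error, it could look like this; the subpath and the use of the project's E() helper are illustrative:

```go
func DevOpsPath() (string, error) {
	home, err := os.UserHomeDir()
	if err != nil {
		return "", E("deploy.python", "resolve user home directory", err)
	}
	return filepath.Join(home, ".core", "devops"), nil // subpath shown for illustration only
}
```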
In `@tools/rag/ingest.py`:
- Around line 120-125: The function definition for ingest_directory contains
stray tokens ("do we") making it a syntax error; update the signature to a valid
Python def such as def ingest_directory(directory: Path, client: QdrantClient,
collection: str, verbose: bool = False) -> dict: (remove the extraneous words)
and ensure the function body and return type remain consistent with this
signature; also verify that Path and QdrantClient are imported or referenced
correctly where ingest_directory is used.
- Around line 33-40: The VECTOR_DIM is hardcoded to 768 while EMBEDDING_MODEL is
configurable, causing dimension mismatches and the code creates the Qdrant
collection before verifying the model; fix by generating a test embedding from
the chosen EMBEDDING_MODEL (call the embedding function used elsewhere) before
creating the collection, set VECTOR_DIM = len(test_embedding), and use that
derived dimension when calling the collection creation logic (the code that
builds the Qdrant collection). Also move the model availability/validation check
to run before collection creation so any model errors are caught prior to
creating the collection.
In `@tools/rag/query.py`:
- Around line 16-20: The context output currently emits unescaped attribute
values (e.g., query/source/category) which can break the tag-like format; fix by
HTML-escaping those attribute strings before formatting the tag-like output:
import and use html.escape(value, quote=True) (or an equivalent escape_attr
helper) wherever the code builds the attribute string for context output
(including the block that formats query/source/category and the similar code
around the lines corresponding to 116-127), ensuring all attributes are escaped
consistently.
- Around line 153-170: The try/except around client.get_collection used for the
stats path is catching Exception broadly and leaving the exception variable
unused; replace this with an explicit existence check or a specific exception
catch: call client.collection_exists(args.collection) and if false print
"Collection not found: {args.collection}" and return, then call
client.get_collection(args.collection) for info; alternatively catch
qdrant_client.exceptions.UnexpectedResponse (or the library's 404-specific
exception) instead of Exception to handle the not-found case when using
client.get_collection.
In `@tools/rag/README.md`:
- Around line 102-111: The README's fenced code blocks need explicit languages
and safe fencing to satisfy markdownlint MD040 and avoid nested-fence issues:
for the "## Before Answering UI Questions" block, add language labels (e.g., use
```markdown for the outer explanatory block and ```bash for the python command)
and ensure closing fences match; for the CLAUDE/example block that contains
inner triple-backticks (the "How do I make a Flux button?" embedding/results
example) either switch the outer fence to quadruple backticks or add an
appropriate language label like ```text and keep inner triple-backticks
unchanged; apply the same changes to the second affected block (the one noted as
also applies to 161-171).
In `@tools/rag/requirements.txt`:
- Around line 1-2: The requirements use open-ended ranges for qdrant-client and
ollama which risks pulling breaking releases; update the pinned specs to include
conservative upper bounds (e.g., change "qdrant-client>=1.7.0" to
"qdrant-client>=1.7.0,<1.16.0" and "ollama>=0.1.0" to "ollama>=0.1.0,<0.6.0") or
alternatively add a lockfile/constraints to freeze exact working versions so
installations remain reproducible.
🧹 Nitpick comments (6)
pkg/deploy/coolify/client.go (1)
204-216: extractArray silently returns nil for unexpected response structures.
When result["result"] exists but isn't a []any, or when individual items aren't map[string]any, the function silently skips them and may return an empty slice. This can mask API response parsing issues or schema changes. Consider logging or returning an error when the expected structure isn't found, especially if result["result"] exists but has an unexpected type.
♻️ Optional: Add defensive logging
 func extractArray(result map[string]any) ([]map[string]any, error) {
+	raw, exists := result["result"]
+	if !exists {
+		return nil, nil
+	}
+	arr, ok := raw.([]any)
+	if !ok {
+		// Unexpected type - could log for debugging
+		return nil, nil
+	}
-	if arr, ok := result["result"].([]any); ok {
-		items := make([]map[string]any, 0, len(arr))
-		for _, item := range arr {
-			if m, ok := item.(map[string]any); ok {
-				items = append(items, m)
-			}
+	items := make([]map[string]any, 0, len(arr))
+	for _, item := range arr {
+		if m, ok := item.(map[string]any); ok {
+			items = append(items, m)
 		}
-		return items, nil
 	}
-	return nil, nil
+	return items, nil
 }
internal/cmd/deploy/cmd_deploy.go (3)
28-70: Subcommand descriptions should use i18n for localization consistency.
All subcommand Short descriptions are hardcoded English strings (e.g., "List Coolify servers"), while the root command uses i18n.T(). This inconsistency means these strings won't be translated.
♻️ Example fix for one subcommand
 var serversCmd = &cobra.Command{
 	Use:   "servers",
-	Short: "List Coolify servers",
+	Short: i18n.T("cmd.deploy.servers.short"),
 	RunE:  runListServers,
 }
Apply similar changes to all subcommands and add corresponding keys to en_GB.json.
151-168: Use cmd.Context() instead of context.Background() to honor cancellation.
All handler functions use context.Background(), which ignores any cancellation signals from the CLI framework. Using cmd.Context() ensures proper context propagation for timeouts and user-initiated cancellation (e.g., Ctrl+C).
♻️ Suggested fix for runListServers (apply to all handlers)
 func runListServers(cmd *cobra.Command, args []string) error {
 	client, err := getClient()
 	if err != nil {
 		return err
 	}
-	servers, err := client.ListServers(context.Background())
+	servers, err := client.ListServers(cmd.Context())
 	if err != nil {
 		return err
 	}
88-104: Redundant environment variable reads in getClient().
Lines 96-101 re-read COOLIFY_URL and COOLIFY_TOKEN from the environment, but lines 74-75 already set the flag defaults to these env vars. The fallback logic is therefore redundant and could cause confusion if someone expects flag precedence over env vars.
♻️ Simplified getClient
 func getClient() (*coolify.Client, error) {
 	cfg := coolify.Config{
 		BaseURL:   coolifyURL,
 		APIToken:  coolifyToken,
 		Timeout:   30,
 		VerifySSL: true,
 	}
-	if cfg.BaseURL == "" {
-		cfg.BaseURL = os.Getenv("COOLIFY_URL")
-	}
-	if cfg.APIToken == "" {
-		cfg.APIToken = os.Getenv("COOLIFY_TOKEN")
-	}
 	return coolify.NewClient(cfg)
 }
internal/cmd/deploy/cmd_ansible.go (1)
100-223: Use the E() helper for contextual error wrapping in runAnsible.
Several returns wrap underlying errors with fmt.Errorf; prefer E(...) for consistent service context (inventory load, playbook failure, etc.). As per coding guidelines, use the E() helper function for contextual error handling with service name, failure description, and underlying error.
♻️ Example adjustment
-	if err := executor.SetInventory(invPath); err != nil {
-		return fmt.Errorf("load inventory: %w", err)
-	}
+	if err := executor.SetInventory(invPath); err != nil {
+		return E("deploy.ansible", "load inventory", err)
+	}
 ...
-	if err := executor.Run(ctx, playbookPath); err != nil {
-		return fmt.Errorf("playbook failed: %w", err)
-	}
+	if err := executor.Run(ctx, playbookPath); err != nil {
+		return E("deploy.ansible", "playbook failed", err)
+	}
pkg/ansible/modules.go (1)
249-253: Wrap module I/O errors with E() for context.
Use E(...) for wrapped errors to align with project error handling. As per coding guidelines, use the E() helper function for contextual error handling with service name, failure description, and underlying error.
♻️ Example adjustment
- return nil, fmt.Errorf("read script: %w", err)
+ return nil, E("ansible.module", "read script", err)
…esktop app

Wire up AI infrastructure with unified pkg/ai package (metrics JSONL, RAG integration), move RAG under `core ai rag`, add `core ai metrics` command, and enrich task context with Qdrant documentation.

Add `--target` flag to all security commands for external repo scanning, `core security jobs` for distributing findings as GitHub Issues, and consistent error logging across scan/deps/alerts/secrets commands.

Add Core IDE Wails v3 desktop app with Angular 20 frontend, MCP bridge (loopback-only HTTP server), WebSocket hub, and Claude Code bridge. Production-ready with Lethean CIC branding, macOS code signing support, and security hardening (origin validation, body size limits, URL scheme checks, memory leak prevention, XSS mitigation).

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Actionable comments posted: 14
🤖 Fix all issues with AI agents
In `@internal/cmd/security/cmd_alerts.go`:
- Around line 182-259: runAlertsForTarget currently swallows fetch errors for
fetchDependabotAlerts, fetchCodeScanningAlerts and fetchSecretScanningAlerts by
using "if err == nil" and never logging failures; update each fetch block to
check for err != nil and log a warning (e.g. using the existing logger or fmt)
that includes the function name and error (e.g. "fetchDependabotAlerts failed:
%v") so users know which alert set is incomplete, then only proceed to iterate
when err == nil; keep the existing filtering logic and AlertOutput construction
unchanged.
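Sketch of one fetch block with the warning added; the helper name and target argument follow the comment, and the output call is illustrative:

```go
dependabot, err := fetchDependabotAlerts(target)
if err != nil {
	// Surface the partial failure instead of silently dropping this alert set.
	fmt.Fprintf(os.Stderr, "warning: fetchDependabotAlerts failed: %v\n", err)
}
for _, alert := range dependabot { // nil/empty on failure, so nothing is appended
	_ = alert // ...existing filtering and AlertOutput construction stays here...
}
```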
In `@internal/core-ide/build/linux/appimage/build.sh`:
- Around line 38-39: The mv invocation using the unquoted glob "mv
${APP_NAME}*.AppImage \"${APP_NAME}.AppImage\"" can match multiple files and
fail; update the build step that references APP_NAME so it selects exactly one
source file before renaming—either remove/rotate old ${APP_NAME}*.AppImage files
first or programmatically pick the single intended artifact (e.g., the newest
match) and then mv that single filename to "${APP_NAME}.AppImage"; ensure you
modify the line that constructs the source from the glob and reference APP_NAME
so only one file is passed to mv.
In `@internal/core-ide/build/windows/msix/template.xml`:
- Around line 10-50: The MSIX template uses placeholder values; update the
Installer Path attribute and PackageInformation fields to match the actual
product values used in app_manifest.xml (replace Path="wails-angular-template"
with the executable name used at build e.g. "core-ide" or "core-ide.exe", set
PackageName/PackageDisplayName to "Core IDE", set
PublisherName/PublisherDisplayName to "Lethean Community Interest Company", and
change Application Id from "com.wails.wails-angular-template" to
"com.lethean.core-ide"), or clearly mark this file as a build-time template and
wire the templating variables so these fields (Path, PackageName,
PackageDisplayName, PublisherName, PublisherDisplayName, Id) are substituted
during the build.
In `@internal/core-ide/build/windows/Taskfile.yml`:
- Around line 33-44: The package task uses POSIX shell conditional syntax and
will fail on Windows; update the package task (symbol: package, variable:
FORMAT) to be platform-aware by either adding platforms: [linux,darwin] to
restrict it to POSIX systems or by providing a separate Windows variant that
uses PowerShell/cmd logic (e.g. a package:windows task or a platforms: [windows]
block) that invokes the correct commands (call create:msix:package vs
create:nsis:installer) using PowerShell conditional syntax so the task runs
correctly on each OS.
- Around line 68-90: The template conditional for USE_MSIX_TOOL in the
create:msix:package task evaluates the string "false" as truthy, causing
--use-msix-tool to always be selected; update the conditional in the cmds block
that currently reads {{if .USE_MSIX_TOOL}}...{{end}} to an explicit string
comparison (e.g., use {{if eq .USE_MSIX_TOOL "true"}}) mirroring the pattern
used for PRODUCTION so --use-msix-tool is only emitted when USE_MSIX_TOOL is
exactly "true".
In `@internal/core-ide/build/windows/wails.exe.manifest`:
- Line 3: Replace the placeholder assembly identity name in the manifest by
updating the assemblyIdentity element's name attribute: change
name="com.wails.wails-angular-template" to the real product identity name
"com.lethean.core-ide" so the assemblyIdentity element matches the product
identity used elsewhere; locate the assemblyIdentity element and edit its name
attribute accordingly.
In `@internal/core-ide/claude_bridge.go`:
- Around line 94-116: broadcastLoop performs concurrent WriteMessage calls on
websocket.Conn (unsafe) while HandleWebSocket also writes directly, causing a
race; change the client representation from raw *websocket.Conn in cb.clients to
a wrapper (e.g., clientConn with conn *websocket.Conn and send chan []byte or a
sync.Mutex write lock) and spawn a dedicated write goroutine (writePump) per
client that reads from send and calls conn.WriteMessage; update broadcastLoop to
push messages into client.send (instead of calling WriteMessage) and change
HandleWebSocket to send its initial message via that send channel (and ensure
cleanup closes the send channel and the conn on client removal).
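A compact sketch of the per-connection write pump (gorilla/websocket); broadcastLoop and HandleWebSocket would push into send instead of writing on the connection directly:

```go
type clientConn struct {
	conn *websocket.Conn
	send chan []byte
}

// writePump is the only goroutine allowed to write on this connection.
func (c *clientConn) writePump() {
	defer c.conn.Close()
	for msg := range c.send {
		if err := c.conn.WriteMessage(websocket.TextMessage, msg); err != nil {
			return
		}
	}
}
```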
- Around line 81-86: The select block in the Claude bridge message-sending code
is misindented; reformat the block so the select statement and its cases align
with surrounding code (e.g., align "select {" with the surrounding control
indentation and ensure "case cb.broadcast <- message:" and "default:" are
indented consistently), keeping the channel send to cb.broadcast and the
fallback log.Printf("Claude bridge: broadcast channel full, dropping message")
unchanged.
In `@internal/core-ide/frontend/public/style.css`:
- Around line 22-28: The `@font-face` rule for font-family "Inter" declares
font-weight: 400 but loads Inter-Medium.ttf (a medium/500 file); update the
declaration to make weight and file match by either changing font-weight to 500
for the `@font-face` rule or replacing the src file with Inter-Regular.ttf so that
font-weight: 400 actually loads a regular weight; modify the `@font-face` block
for "Inter" accordingly.
In `@internal/core-ide/frontend/src/app/app.html`:
- Line 14: The template directly accesses $event.target.value which is typed as
EventTarget|null in strict mode; change the input binding to call a component
handler (e.g., onNameInput) and implement onNameInput(event: Event) { const
input = event.target as HTMLInputElement; this.name.set(input.value); } so you
use a proper cast and avoid template type errors; alternatively use
$any($event.target).value if you prefer a quick template-only fix; update the
(input) from (input)="name.set($event.target.value)" to
(input)="onNameInput($event)" (or the $any variant) and keep the name() getter
and name.set(...) calls unchanged.
In `@internal/core-ide/frontend/src/app/app.spec.ts`:
- Around line 17-22: The test in app.spec.ts expects an <h1> but the App
component's template (App) only contains <router-outlet>, so update the spec:
either change the assertion to check that the router-outlet exists (e.g., assert
compiled.querySelector('router-outlet') is present) or configure the test to use
RouterTestingModule with a stub route that renders a component containing the
expected <h1> and then assert on that stub component's text; locate this logic
in the test block (it('should render title', ...) in app.spec.ts) and modify to
use RouterTestingModule or adjust the expectation accordingly.
In `@internal/core-ide/frontend/src/app/pages/tray/tray.component.ts`:
- Around line 433-443: The methods openIDE, emitAction, and openProject call
Events.Emit() directly which will break during SSR; update each to follow the
IdeComponent.emitAction() pattern by checking the platform (e.g.,
isPlatformBrowser) before calling Events and performing a dynamic import of the
Events module only when running in the browser, then call the imported
Events.Emit with the same arguments; ensure you reference the existing methods
openIDE, emitAction, and openProject to implement the platform guard and dynamic
import so SSR does not execute browser-only code.
- Around line 1-4: TrayComponent currently imports Events from
'@wailsio/runtime' at module scope which breaks SSR; remove the top-level Events
import and instead inject PLATFORM_ID and use Angular's isPlatformBrowser check
inside TrayComponent (e.g., in ngOnInit) to dynamically
import('@wailsio/runtime') and assign Events only when running in the browser,
and ensure any teardown in ngOnDestroy references the dynamically-loaded Events
instance if present; update class methods that use Events to guard against it
being undefined when not in a browser.
In `@internal/core-ide/mcp_bridge.go`:
- Around line 141-154: The handleMCPInfo handler sets
Access-Control-Allow-Origin to only "http://localhost", causing inconsistent
CORS with the WebSocket upgrader in claude_bridge.go which accepts
"http://127.0.0.1"; update handleMCPInfo (and any similar endpoints) to use a
consistent policy for local dev — either allow both origins ("http://localhost"
and "http://127.0.0.1") or use a wildcard "*" (or mirror the origin header
dynamically) so the WebSocket upgrader and handleMCPInfo behave the same.
🧹 Nitpick comments (26)
internal/core-ide/build/Taskfile.yml (1)
28-31: Consider narrowing the sources pattern for better performance.

The `**/*` pattern will match all files including `node_modules/` (potentially 100k+ files), causing Task to stat every file on each invocation to check for changes. This can slow down incremental build detection.

♻️ Suggested refinement

```diff
 sources:
-  - "**/*"
+  - "src/**/*"
+  - "public/**/*"
+  - "*.json"
+  - "*.ts"
+  - "*.js"
+  - "*.html"
 generates:
   - dist/**/*
```

Adjust the patterns to match your actual frontend source structure while excluding `node_modules/` and build artifacts.

pkg/ai/rag.go (1)
30-41: Consider reusing clients for repeated calls.

New `QdrantClient` and `OllamaClient` instances are created on every invocation. If `QueryRAGForTask` is called frequently (e.g., for batch task processing), this adds connection overhead. Consider accepting pre-initialized clients as parameters or caching at a higher level if this becomes a hot path.
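For illustration, a minimal, self-contained sketch of the pre-initialized-clients approach suggested above; the type, field, and constructor names here are stand-ins, not the actual pkg/ai API.

```go
package main

import "fmt"

// qdrantClient and ollamaClient are stand-ins for the real pkg/ai clients.
type qdrantClient struct{ url string }
type ollamaClient struct{ model string }

// ragDeps bundles long-lived clients so they are constructed once and reused.
type ragDeps struct {
	qdrant *qdrantClient
	ollama *ollamaClient
}

// queryRAGForTask mirrors the shape of QueryRAGForTask but takes its clients
// as a parameter instead of constructing them on every call.
func queryRAGForTask(deps *ragDeps, question string) (string, error) {
	// embed the question via deps.ollama, then search via deps.qdrant ...
	return fmt.Sprintf("context for %q (via %s, %s)", question, deps.qdrant.url, deps.ollama.model), nil
}

func main() {
	deps := &ragDeps{
		qdrant: &qdrantClient{url: "http://localhost:6333"},
		ollama: &ollamaClient{model: "nomic-embed-text"},
	}
	for _, q := range []string{"task A", "task B"} {
		out, _ := queryRAGForTask(deps, q) // no per-call connection setup
		fmt.Println(out)
	}
}
```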
pkg/ai/metrics.go (1)

44-51: Consider using the `E()` helper for contextual error handling.

The coding guidelines specify using the `E()` helper for contextual error handling. Currently, errors use `fmt.Errorf`. Consider wrapping errors with `E()` to include service name and failure context for consistency.

As per coding guidelines: "Use the E() helper function for contextual error handling with service name, failure description, and underlying error".

♻️ Example refactor for one error

```diff
+import "github.com/host-uk/core/pkg/log"
+
+const op = "ai.Record"
+
 func Record(event Event) error {
 	// ...
 	dir, err := metricsDir()
 	if err != nil {
-		return err
+		return log.E(op, "failed to get metrics directory", err)
 	}
 	if err := os.MkdirAll(dir, 0o755); err != nil {
-		return fmt.Errorf("create metrics directory: %w", err)
+		return log.E(op, "failed to create metrics directory", err)
 	}
```

internal/cmd/ai/cmd_tasks.go (1)
169-174: Consider logging errors from `ai.Record` instead of silently discarding.

The error returned by `ai.Record` is discarded. While fire-and-forget semantics may be intentional for non-critical telemetry, silently ignoring errors can mask configuration or I/O issues. Consider logging at debug level to aid troubleshooting.

🔧 Suggested improvement

```diff
 // Record task claim event
-_ = ai.Record(ai.Event{
+if err := ai.Record(ai.Event{
 	Type:    "task.claimed",
 	AgentID: cfg.AgentID,
 	Data:    map[string]any{"task_id": task.ID, "title": task.Title},
-})
+}); err != nil {
+	// Non-fatal: log but don't fail the claim operation
+	cli.Debug("failed to record task.claimed event: %v", err)
+}
```

internal/cmd/ai/cmd_metrics.go (1)
103-103: Fix the comment example to match supported units.

The docstring says `"30d"` but the switch only supports `d`, `h`, `m` - the example should likely be `"30m"` to demonstrate minutes.

📝 Suggested fix

```diff
-// parseDuration parses a human-friendly duration like "7d", "24h", "30d".
+// parseDuration parses a human-friendly duration like "7d", "24h", "30m".
```

internal/core-ide/build/config.yml (1)
60-62: Remove or complete the placeholder `other` section.

This section contains boilerplate data (`My Other Data`) with no functional purpose. Consider removing it to keep the configuration clean, or populate it with actual configuration values if needed.

🧹 Proposed fix to remove placeholder

```diff
-# Other data
-other:
-  - name: My Other Data
```

internal/core-ide/build/linux/nfpm/nfpm.yaml (3)
6-17: Avoid hard-coded version metadata.

`version: "0.1.0"` risks shipping stale package versions; prefer build-time injection so releases reflect the actual build.

♻️ Suggested tweak

```diff
-version: "0.1.0"
+version: ${VERSION}
```

19-22: Consider installing to /usr/bin for distro packages.

`/usr/local/bin` is typically reserved for locally-built software; system packages usually target `/usr/bin`.

♻️ Suggested change

```diff
-  - src: "./bin/core-ide"
-    dst: "/usr/local/bin/core-ide"
+  - src: "./bin/core-ide"
+    dst: "/usr/bin/core-ide"
```

27-45: Add explicit deps for the postinstall utilities.

The postinstall script calls `update-desktop-database` and `update-mime-database`; without explicit deps, the updates may silently not run.

♻️ Suggested additions

```diff
 depends:
   - libgtk-3-0
   - libwebkit2gtk-4.1-0
+  - desktop-file-utils
+  - shared-mime-info
@@
 rpm:
   depends:
     - gtk3
     - webkit2gtk4.1
+    - desktop-file-utils
+    - shared-mime-info
@@
 archlinux:
   depends:
     - gtk3
     - webkit2gtk-4.1
+    - desktop-file-utils
+    - shared-mime-info
```

internal/cmd/security/cmd_security.go (1)
197-206: Consider documenting which `Repo` fields are populated for synthetic targets.

`buildTargetRepo` creates a minimal `Repo` with only `Name` set. Callers must be aware that other fields (e.g., `Path`, `Type`, `CI`) remain at zero values. This works for current usage but could cause issues if downstream code assumes populated fields.

Minor: Remove the extra blank line at 206.

♻️ Add doc comment clarifying minimal initialization

```diff
-// buildTargetRepo creates a synthetic Repo entry for an external target (e.g. "wailsapp/wails").
+// buildTargetRepo creates a synthetic Repo entry for an external target (e.g. "wailsapp/wails").
+// Only the Name field is populated; other Repo fields remain at zero values.
 func buildTargetRepo(target string) (*repos.Repo, string) {
 	parts := strings.SplitN(target, "/", 2)
 	if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
 		return nil, ""
 	}
 	return &repos.Repo{Name: parts[1]}, target
 }
-
```

internal/cmd/security/cmd_secrets.go (1)
129-191: Consider extracting shared alert formatting logic.

`runSecretsForTarget` duplicates the alert processing and output logic from `runSecrets` (lines 70-127). While acceptable for now, this duplication exists across all `runXXXForTarget` functions in this PR.

A future refactor could extract common patterns (JSON output, table formatting) into shared helpers to reduce maintenance burden.
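A rough sketch of the kind of shared output helper such a refactor could extract; the alert type and field names below are placeholders rather than the real security command types.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// alert is a placeholder for whatever alert type the security commands share.
type alert struct {
	Rule     string `json:"rule"`
	Severity string `json:"severity"`
}

// renderAlerts is the kind of shared helper runSecrets and runSecretsForTarget
// (and the other runXXXForTarget functions) could both call.
func renderAlerts(alerts []alert, asJSON bool) error {
	if asJSON {
		return json.NewEncoder(os.Stdout).Encode(alerts)
	}
	for _, a := range alerts {
		fmt.Printf("%-8s %s\n", a.Severity, a.Rule)
	}
	return nil
}

func main() {
	alerts := []alert{{Rule: "aws-access-key", Severity: "high"}}
	_ = renderAlerts(alerts, false) // table output
	_ = renderAlerts(alerts, true)  // JSON output
}
```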
internal/cmd/security/cmd_scan.go (1)
208-221: Minor redundancy: `Repo` and `Data["target"]` both contain `fullName`.

The `Repo` field (line 212) and `Data["target"]` (line 214) store the same value. Consider removing the redundant `"target"` key from `Data`.

♻️ Remove redundant target from Data map

```diff
 _ = ai.Record(ai.Event{
 	Type:      "security.scan",
 	Timestamp: time.Now(),
 	Repo:      fullName,
 	Data: map[string]any{
-		"target":   fullName,
 		"total":    summary.Total,
 		"critical": summary.Critical,
 		"high":     summary.High,
 		"medium":   summary.Medium,
 		"low":      summary.Low,
 	},
 })
```

internal/cmd/security/cmd_jobs.go (2)
68-72: Consider using `buildTargetRepo` for consistency.

This validation duplicates the logic in `buildTargetRepo` (from `cmd_security.go`). Using the existing helper would ensure consistent validation.

♻️ Use buildTargetRepo for validation

```diff
 func createJobForTarget(target string) error {
-	parts := strings.SplitN(target, "/", 2)
-	if len(parts) != 2 {
-		return fmt.Errorf("invalid target format: use owner/repo")
+	repo, _ := buildTargetRepo(target)
+	if repo == nil {
+		return cli.Err("invalid target format: use owner/repo (e.g. wailsapp/wails)")
 	}
```

85-99: Inconsistent error handling pattern with `if err == nil` after error check.

The pattern of checking `if err != nil` followed by `if err == nil` works but is unusual. Consider using an early continue or restructuring.

♻️ Restructure for clarity

```diff
 // Code scanning
 codeAlerts, err := fetchCodeScanningAlerts(target)
 if err != nil {
 	cli.Print("%s %s: failed to fetch code scanning alerts: %v\n", cli.WarningStyle.Render(">>"), target, err)
 	fetchErrors++
-}
-if err == nil {
+} else {
 	for _, alert := range codeAlerts {
 		// ... rest of loop
 	}
 }
```

internal/core-ide/README.md (2)
6-6: Add alt text to the image for accessibility.

The image reference lacks alternative text, which impacts accessibility for screen reader users.
- +
16-47: Add language specifiers to fenced code blocks.

Several code blocks lack language identifiers, which affects syntax highlighting and accessibility tooling. Based on the static analysis hints.

📝 Suggested fix

````diff
-```
+```bash
 wails3 init -n MyWailsApp -t https://github.com/Snider/wails-angular-template@v0.0.1
 cd MyWailsApp
 ```
-```
+```bash
 wails3 dev
 ```
-```
+```bash
 wails3 build
 ```
-```
+```bash
 go run .
 ```
````

internal/core-ide/frontend/public/style.css (1)
56-59: Duplicate `.result` selector with differing properties.

The `.result` class is defined twice: once at lines 56-59 and again at lines 103-108. The second definition adds `margin` and `text-align` while repeating `height` and `line-height`. Consider consolidating into a single rule.

♻️ Proposed consolidation

Remove lines 56-59 and keep only the more complete definition at lines 103-108:

```diff
-button {
-  ...
-}
-
-.result {
-  height: 20px;
-  line-height: 20px;
-}
-
-body {
+button {
+  ...
+}
+
+body {
```

The definition at lines 103-108 already includes all necessary properties.
Also applies to: 103-108
internal/core-ide/frontend/src/app/components/sidebar/sidebar.component.ts (1)
139-200: Consider initializing `navItems` as a readonly property.

The `navItems` array is created once in the constructor and never mutated. Declaring it as `readonly` would better express this intent and prevent accidental modifications.

♻️ Suggested improvement

```diff
 export class SidebarComponent {
   @Input() currentRoute = 'dashboard';
   @Output() routeChange = new EventEmitter<string>();

+  readonly navItems: NavItem[];
+
   constructor(private sanitizer: DomSanitizer) {
     this.navItems = this.createNavItems();
   }
-
-  navItems: NavItem[];
```

internal/core-ide/claude_bridge.go (2)
170-183: `sendToMCP` holds lock during potentially blocking write.

Holding `reconnectMu` while calling `WriteMessage` can block the reconnection loop in `connectToMCP` if the write is slow or the connection is degraded. Consider using a write channel or a separate mutex for writes.
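A minimal sketch of one way to keep the reconnect lock out of the write path by funnelling writes through a single goroutine; the connection interface and field names here are stand-ins for the real bridge types, not the actual claude_bridge.go implementation.

```go
package main

import (
	"fmt"
	"log"
)

// wsConn stands in for the real websocket connection type.
type wsConn interface {
	WriteMessage(messageType int, data []byte) error
}

type bridge struct {
	out chan []byte // buffered queue; sendToMCP only enqueues
}

// sendToMCP no longer holds any reconnect lock around the network write.
func (b *bridge) sendToMCP(msg []byte) {
	select {
	case b.out <- msg:
	default:
		log.Printf("bridge: outbound queue full, dropping message")
	}
}

// writeLoop is the single goroutine allowed to write to the connection.
func (b *bridge) writeLoop(conn wsConn) {
	for msg := range b.out {
		if err := conn.WriteMessage(1, msg); err != nil { // 1 = text frame
			log.Printf("bridge write error: %v", err)
			return
		}
	}
}

type fakeConn struct{}

func (fakeConn) WriteMessage(_ int, data []byte) error {
	fmt.Printf("sent: %s\n", data)
	return nil
}

func main() {
	b := &bridge{out: make(chan []byte, 16)}
	done := make(chan struct{})
	go func() { b.writeLoop(fakeConn{}); close(done) }()
	b.sendToMCP([]byte(`{"type":"ping"}`))
	close(b.out) // the real bridge would do this on shutdown
	<-done
}
```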
126-136: Silently ignored JSON marshal error.

Line 127 discards the error from `json.Marshal`. While unlikely to fail for this simple map, it's better to handle it explicitly.

🛡️ Handle marshal error

```diff
-connMsg, _ := json.Marshal(map[string]any{
+connMsg, err := json.Marshal(map[string]any{
 	"type":      "system",
 	"data":      "Connected to Claude bridge",
 	"timestamp": time.Now(),
 })
+if err != nil {
+	log.Printf("Claude bridge marshal error: %v", err)
+	conn.Close()
+	return
+}
```

internal/core-ide/mcp_bridge.go (2)
48-56: Consider using E() helper for contextual error handling.

Per coding guidelines, Go files should use the `E()` helper function for contextual error handling with service name, failure description, and underlying error.

🔧 Suggested change

```diff
-	return fmt.Errorf("failed to get Wails app reference")
+	return E("MCPBridge", "failed to get Wails app reference", nil)
```

132-138: HTTP server lacks graceful shutdown support.

`http.ListenAndServe` blocks until error and has no graceful shutdown mechanism. When the application exits, active connections will be dropped abruptly. Consider using `http.Server` with `Shutdown()` tied to the application lifecycle.

🛠️ Suggested approach

```go
func (b *MCPBridge) startHTTPServer() {
	// ... existing setup ...
	server := &http.Server{
		Addr:    addr,
		Handler: mux,
	}
	// Store server reference for shutdown
	b.server = server
	if err := server.ListenAndServe(); err != nil && err != http.ErrServerClosed {
		log.Printf("MCP HTTP server error: %v", err)
	}
}

// Add ServiceShutdown for graceful cleanup
func (b *MCPBridge) ServiceShutdown() error {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	return b.server.Shutdown(ctx)
}
```

internal/core-ide/build/linux/appimage/build.sh (2)
8-15: Add validation for required environment variables.

The script uses `APP_NAME`, `APP_BINARY`, `ICON_PATH`, and `DESKTOP_FILE` without verifying they are set. With `set -euxo pipefail`, undefined variables will cause cryptic errors.

🛡️ Proposed fix to validate required variables

```diff
 # Define variables
+: "${APP_NAME:?APP_NAME must be set}"
+: "${APP_BINARY:?APP_BINARY must be set}"
+: "${ICON_PATH:?ICON_PATH must be set}"
+: "${DESKTOP_FILE:?DESKTOP_FILE must be set}"
+
 APP_DIR="${APP_NAME}.AppDir"
```

31-33: Consider verifying the integrity of the downloaded linuxdeploy binary.

Downloading and executing an external binary without checksum verification poses a supply chain risk. While linuxdeploy is from a reputable source, verifying the SHA256 checksum would harden the build process.
internal/core-ide/build/windows/nsis/project.nsi (1)
91-92: Minor: Inconsistent casing in shortcut creation.

Line 91 uses `CreateShortcut` while line 92 uses `CreateShortCut`. While NSIS is case-insensitive, consistent casing improves readability.

📝 Proposed fix for consistency

```diff
 CreateShortcut "$SMPROGRAMS\${INFO_PRODUCTNAME}.lnk" "$INSTDIR\${PRODUCT_EXECUTABLE}"
-CreateShortCut "$DESKTOP\${INFO_PRODUCTNAME}.lnk" "$INSTDIR\${PRODUCT_EXECUTABLE}"
+CreateShortcut "$DESKTOP\${INFO_PRODUCTNAME}.lnk" "$INSTDIR\${PRODUCT_EXECUTABLE}"
```
67-75: Make codesign identity configurable (ad‑hoc default).

Hard‑coding `--sign -` blocks using a real Developer ID in CI/release builds.

🔧 Suggested change

```diff
 create:app:bundle:
   summary: Creates an `.app` bundle
+  vars:
+    CODESIGN_IDENTITY: '{{.CODESIGN_IDENTITY | default "-"}}'
   cmds:
     - mkdir -p {{.BIN_DIR}}/{{.APP_NAME}}.app/Contents/{MacOS,Resources}
     - cp build/darwin/icons.icns {{.BIN_DIR}}/{{.APP_NAME}}.app/Contents/Resources
     - cp {{.BIN_DIR}}/{{.APP_NAME}} {{.BIN_DIR}}/{{.APP_NAME}}.app/Contents/MacOS
     - cp build/darwin/Info.plist {{.BIN_DIR}}/{{.APP_NAME}}.app/Contents
-    - codesign --force --deep --sign - {{.BIN_DIR}}/{{.APP_NAME}}.app
+    - codesign --force --deep --sign "{{.CODESIGN_IDENTITY}}" {{.BIN_DIR}}/{{.APP_NAME}}.app
```
Fixes across 25 files addressing 46+ review comments:
- pkg/ai/metrics.go: handle error from Close() on writable file handle
- pkg/ansible: restore loop vars after loop, restore become settings, fix Upload with become=true and no password (use sudo -n), honour SSH timeout config, use E() helper for contextual errors, quote git refs in checkout commands
- pkg/rag: validate chunk config, guard negative-to-uint64 conversion, use E() helper for errors, add context timeout to Ollama HTTP calls
- pkg/deploy/python: fix exec.ExitError type assertion (was os.PathError), handle os.UserHomeDir() error
- pkg/build/buildcmd: use cmd.Context() instead of context.Background() for proper Ctrl+C cancellation
- install.bat: add curl timeouts, CRLF line endings, use --connect-timeout for archive downloads
- install.sh: use absolute path for version check in CI mode
- tools/rag: fix broken ingest.py function def, escape HTML in query.py, pin qdrant-client version, add markdown code block languages
- internal/cmd/rag: add chunk size validation, env override handling
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Actionable comments posted: 7
🤖 Fix all issues with AI agents
In `@install.bat`:
- Around line 73-84: The CI install step (:install_ci) currently runs "%BINARY%
--version" which may hit an older binary on PATH; replace that check by calling
the existing :verify_install routine (or otherwise invoke the installed binary
via the absolute path produced by :install_binary) so the verification targets
the binary you just installed (update the :install_ci flow to call
:verify_install instead of running "%BINARY% --version").
In `@pkg/ansible/modules.go`:
- Around line 14-27: The code in executeModule only applies become when
task.Become is true, so an explicit task.Become=false cannot override an
existing client.become; change the block that currently checks "if task.Become
!= nil && *task.Become" to "if task.Become != nil" — capture
oldBecome/oldUser/oldPass, call client.SetBecome(*task.Become, task.BecomeUser,
"") (use empty pass as before) to apply the explicit true/false value, and defer
client.SetBecome(oldBecome, oldUser, oldPass) to restore the previous state;
reference executeModule, Task.Become, Task.BecomeUser and SSHClient.SetBecome to
locate and update the logic.
- Around line 1-11: The file pkg/ansible/modules.go is not formatted to gofmt
standards; run gofmt on that file (e.g. use "gofmt -w pkg/ansible/modules.go" or
your editor's Go format action) to fix import spacing and overall formatting for
the package ansible and its import block so CI passes.
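For reference, a self-contained sketch of the become-override change described in the first modules.go prompt above, using stand-in types; the real `Task`, `SSHClient`, and `SetBecome` live in pkg/ansible and may differ in detail.

```go
package main

import "fmt"

// sshClient stands in for the real pkg/ansible SSH client.
type sshClient struct {
	become     bool
	becomeUser string
	becomePass string
}

func (c *sshClient) SetBecome(become bool, user, pass string) {
	c.become, c.becomeUser, c.becomePass = become, user, pass
}

// task stands in for the real Task type.
type task struct {
	Become     *bool // nil = inherit client setting, non-nil = explicit override
	BecomeUser string
}

func executeModule(c *sshClient, t task) {
	// Apply any explicit override (true OR false), then restore afterwards.
	if t.Become != nil {
		oldBecome, oldUser, oldPass := c.become, c.becomeUser, c.becomePass
		c.SetBecome(*t.Become, t.BecomeUser, "")
		defer c.SetBecome(oldBecome, oldUser, oldPass)
	}
	fmt.Printf("running module with become=%v user=%q\n", c.become, c.becomeUser)
}

func main() {
	c := &sshClient{become: true, becomeUser: "root"}
	off := false
	executeModule(c, task{Become: &off}) // explicit false now overrides client.become
	fmt.Printf("restored become=%v\n", c.become)
}
```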
In `@pkg/ansible/ssh.go`:
- Around line 1-18: The file declared as package ansible (ssh.go) is not
gofmt-formatted; run gofmt (or gofmt -w) on ssh.go (or run goimports/gofmt
across the package) to fix formatting, then re-run CI; ensure imports and
spacing in the top-level block (package ansible and the import list referencing
bytes, context, fmt, io, net, os, filepath, strings, sync, time,
github.com/host-uk/core/pkg/log, golang.org/x/crypto/ssh,
golang.org/x/crypto/ssh/knownhosts) are properly formatted so the linter/CI
passes.
In `@pkg/deploy/coolify/client.go`:
- Around line 41-90: Replace plain fmt.Errorf calls in NewClient and Call with
the project's contextual error helper core.E (or E()) so errors retain
service/context metadata: for validation errors in NewClient (when BaseURL or
APIToken empty) wrap with core.E("coolify", "validation", err) or
core.E("coolify", "validation", "COOLIFY_URL not set")/core.E("coolify",
"validation", "COOLIFY_TOKEN not set"); for runtime failures use
core.E("coolify", "init", err) around python.Init, and in Call replace
fmt.Errorf usages around python.CoolifyScript generation, python.RunScript
invocation, and json.Unmarshal parsing with core.E("coolify", "runtime", err)
(or more specific action names like "generate-script", "api-call",
"parse-response") while preserving the underlying error via wrapping; apply the
same pattern to the fallback JSON array parsing error path so all error returns
use core.E with the service name "coolify" and appropriate failure descriptors.
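A hedged sketch of what the NewClient validation path could look like with an E(service, description, err)-style helper; `E` here is a local stand-in mimicking the project's helper rather than the actual implementation, and the Config field names are taken from the review comment.

```go
package main

import (
	"errors"
	"fmt"
)

// E is a local stand-in for the project's contextual error helper.
func E(service, what string, err error) error {
	if err == nil {
		return fmt.Errorf("%s: %s", service, what)
	}
	return fmt.Errorf("%s: %s: %w", service, what, err)
}

// Config holds the Coolify connection settings referenced in the comment above.
type Config struct {
	BaseURL  string
	APIToken string
}

type Client struct{ cfg Config }

// NewClient validates its inputs and wraps failures with service context.
func NewClient(cfg Config) (*Client, error) {
	if cfg.BaseURL == "" {
		return nil, E("coolify", "validation", errors.New("COOLIFY_URL not set"))
	}
	if cfg.APIToken == "" {
		return nil, E("coolify", "validation", errors.New("COOLIFY_TOKEN not set"))
	}
	return &Client{cfg: cfg}, nil
}

func main() {
	if _, err := NewClient(Config{}); err != nil {
		fmt.Println(err) // coolify: validation: COOLIFY_URL not set
	}
}
```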
In `@pkg/rag/ollama.go`:
- Around line 1-12: This file (package rag, ollama.go) is misformatted and
failing CI; run gofmt -w on the file or run go fmt for the package (e.g., gofmt
-w pkg/rag/ollama.go or go fmt ./pkg/rag/...) to fix import grouping/spacing and
overall formatting so the file builds cleanly; no code changes otherwise.
In `@pkg/rag/qdrant.go`:
- Around line 1-225: This file fails formatting in CI; run gofmt (e.g. gofmt -w
or go fmt ./...) to reformat the source and commit the changes so functions and
types like QdrantConfig, NewQdrantClient, UpsertPoints, pointIDToString and
valueToGo are properly formatted; re-run CI to verify the formatting error is
resolved.
🧹 Nitpick comments (7)
pkg/ai/metrics.go (1)
86-99: Consider using UTC for consistent cross-timezone behavior.

Line 88 uses `time.Local` for date iteration, which could cause inconsistencies if events are recorded with different timezone settings. Consider using UTC for both storage and retrieval for predictable behavior across different environments.

♻️ Suggested improvement

```diff
-	for d := time.Date(since.Year(), since.Month(), since.Day(), 0, 0, 0, 0, time.Local); !d.After(now); d = d.AddDate(0, 0, 1) {
+	for d := time.Date(since.Year(), since.Month(), since.Day(), 0, 0, 0, 0, time.UTC); !d.After(now); d = d.AddDate(0, 0, 1) {
```

internal/cmd/rag/cmd_query.go (1)
28-41: Use `cmd.Context()` for proper cancellation support.

Line 30 uses `context.Background()` which ignores any cancellation signals from the CLI framework. Using `cmd.Context()` would properly propagate cancellation when the user interrupts the command.

♻️ Suggested fix

```diff
 func runQuery(cmd *cobra.Command, args []string) error {
 	question := args[0]
-	ctx := context.Background()
+	ctx := cmd.Context()

 	// Connect to Qdrant
```

tools/rag/ingest.py (2)
179-181: Consider catching specific exceptions for better error reporting.

The bare `except Exception` catch (line 179) masks the specific error type. Consider catching `OSError` for file operations and `ollama.RequestError` (or similar) for embedding failures to provide more actionable error messages.
243-250: Remove extraneous f-prefix from static string.

Line 245 has an f-string without any placeholders, which is flagged by Ruff (F541).

♻️ Proposed fix

```diff
-    print(f"\n{'=' * 50}")
+    print("\n" + "=" * 50)
     print(f"Ingestion complete!")
```

Or alternatively:

```diff
-    print(f"\n{'=' * 50}")
+    print(f"\n{'='*50}")
```

internal/cmd/deploy/cmd_deploy.go (1)
88-104: Prefer `cmd.Context()` and wrap errors with E() for consistent cancellation + context.

Using `context.Background()` ignores CLI cancellation, and raw errors lose context. Consider switching to `cmd.Context()` and wrapping with the E() helper.

As per coding guidelines, **/*.go: Use the E() helper function for contextual error handling with service name, failure description, and underlying error.

♻️ Example adjustment

```diff
+import "github.com/host-uk/core/pkg/log"
 ...
-	servers, err := client.ListServers(context.Background())
+	servers, err := client.ListServers(cmd.Context())
 	if err != nil {
-		return err
+		return log.E("deploy.listServers", "list servers", err)
 	}
```

```diff
-	return coolify.NewClient(cfg)
+	client, err := coolify.NewClient(cfg)
+	if err != nil {
+		return nil, log.E("deploy", "create client", err)
+	}
+	return client, nil
```

Also applies to: 151-167, 170-187, 189-206, 208-225, 227-244, 246-258, 260-279
internal/cmd/deploy/cmd_ansible.go (1)
91-229: Use `cmd.Context()` for cancellation and wrap errors with E().

These flows should honor CLI cancellation and provide contextual error wrapping.

As per coding guidelines, **/*.go: Use the E() helper function for contextual error handling with service name, failure description, and underlying error.

♻️ Example adjustment

```diff
+import "github.com/host-uk/core/pkg/log"
 ...
-	ctx := context.Background()
+	ctx := cmd.Context()
 ...
-	if err := executor.Run(ctx, playbookPath); err != nil {
-		return fmt.Errorf("playbook failed: %w", err)
-	}
+	if err := executor.Run(ctx, playbookPath); err != nil {
+		return log.E("deploy.ansible", "playbook failed", err)
+	}
```

```diff
-	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
+	ctx, cancel := context.WithTimeout(cmd.Context(), 30*time.Second)
```

Also applies to: 232-311
pkg/ansible/modules.go (1)
175-274: Wrap module validation errors with E() for context.

Module input validation currently returns plain `fmt.Errorf`, which loses structured context.

As per coding guidelines, **/*.go: Use the E() helper function for contextual error handling with service name, failure description, and underlying error.

♻️ Example adjustment

```diff
-	return nil, fmt.Errorf("shell: no command specified")
+	return nil, log.E("ansible.module.shell", "no command specified", nil)
```
…arget
Replace --dry-run (default false) with --we-are-go-for-launch (default false) so `core build release` is safe by default.
Remove darwin/amd64 from default build targets (arm64 only for macOS).
Fix cmd_project.go to use command context instead of context.Background().
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@pkg/build/buildcmd/cmd_release.go`:
- Line 100: The i18n key used in the cli.Print call is incorrect: replace the
non-existent "i18n.done.pass" (and do not use a "common." namespace) with the
correct key "label.success" inside the i18n.T call used by
buildSuccessStyle.Render in the cli.Print invocation (symbols: cli.Print,
buildSuccessStyle.Render, i18n.T); ensure the string passed to i18n.T is exactly
"label.success" so the localized "Success" label is rendered.
🧹 Nitpick comments (2)
pkg/build/buildcmd/cmd_release.go (2)
16-21: Package-level flag variables may cause issues with concurrent use.

These global variables work for typical CLI usage but could cause race conditions if commands are executed concurrently (e.g., in tests or embedded usage). Consider using a command-local struct or leveraging Cobra's persistent flags pattern.
♻️ Alternative: command-local state
```go
type releaseOptions struct {
	version     string
	draft       bool
	prerelease  bool
	goForLaunch bool
}

func newReleaseCommand() *cli.Command {
	opts := &releaseOptions{}
	cmd := &cli.Command{
		Use:   "release",
		Short: i18n.T("cmd.build.release.short"),
		Long:  i18n.T("cmd.build.release.long"),
		RunE: func(cmd *cli.Command, args []string) error {
			return runRelease(cmd.Context(), !opts.goForLaunch, opts.version, opts.draft, opts.prerelease)
		},
	}
	cmd.Flags().BoolVar(&opts.goForLaunch, "we-are-go-for-launch", false, i18n.T("cmd.build.release.flag.go_for_launch"))
	// ... other flags
	return cmd
}
```
94-96: Error from `release.Run` not wrapped with `E()` helper.

For consistency with the error handling pattern used elsewhere in this function (lines 49, 59, 65), consider wrapping this error.

♻️ Proposed fix

```diff
 rel, err := release.Run(ctx, cfg, dryRun)
 if err != nil {
-	return err
+	return core.E("release", "run release", err)
 }
```

Based on learnings: Use the E() helper function for contextual error handling with service name, failure description, and underlying error.
```go
	// Print summary
	cli.Blank()
	cli.Print("%s %s\n", buildSuccessStyle.Render(i18n.T("i18n.done.pass")), i18n.T("cmd.build.release.completed"))
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check if "i18n.done.pass" key exists in locale files
echo "=== Searching for 'i18n.done.pass' ==="
rg '"i18n\.done\.pass"' pkg/i18n/locales/ 2>/dev/null || echo "Key not found"
# Check what keys are available under common.label in en_GB.json
echo -e "\n=== Keys under 'common.label' in en_GB.json ==="
rg '"common\.label\.' pkg/i18n/locales/en_GB.json 2>/dev/null | head -20
# Also check if the key appears anywhere else in the codebase
echo -e "\n=== All references to 'done.pass' in codebase ==="
rg 'done\.pass' . --include="*.go" --include="*.json" 2>/dev/null || echo "No references found"
# Check the specific file to see current state
echo -e "\n=== Line 100 in cmd_release.go ==="
sed -n '98,102p' pkg/build/buildcmd/cmd_release.go 2>/dev/null || echo "File not found or cannot read"

Repository: host-uk/core
Length of output: 584
🏁 Script executed:
#!/bin/bash
# Get the locale file structure to understand available keys
echo "=== Checking en_GB.json structure ==="
wc -l pkg/i18n/locales/en_GB.json
# Look at the beginning to understand JSON structure
echo -e "\n=== First 50 lines of en_GB.json ==="
head -50 pkg/i18n/locales/en_GB.json
# Search for keys containing "done", "pass", "success", "completed"
echo -e "\n=== Keys containing 'done' ==="
rg '"[^"]*done[^"]*"' pkg/i18n/locales/en_GB.json
echo -e "\n=== Keys containing 'success' ==="
rg '"[^"]*success[^"]*"' pkg/i18n/locales/en_GB.json
echo -e "\n=== Keys containing 'pass' ==="
rg '"[^"]*pass[^"]*"' pkg/i18n/locales/en_GB.json
# Check for common patterns
echo -e "\n=== Keys starting with 'common.' ==="
rg '"common\.' pkg/i18n/locales/en_GB.json | head -20

Repository: host-uk/core
Length of output: 4050
🏁 Script executed:
#!/bin/bash
# Get context around the "success" key definition
echo "=== Context around 'success' key ==="
rg -B5 -A5 '"success"' pkg/i18n/locales/en_GB.json | head -40
# Get context around the "done" key definition
echo -e "\n=== Context around 'done' key ==="
rg -B5 -A5 '"done"' pkg/i18n/locales/en_GB.json
# Check how other build completion messages use i18n
echo -e "\n=== Other build/release related keys ==="
rg '"(build|release|completed)' pkg/i18n/locales/en_GB.json | head -20
# Search for how i18n.T() is called with success-related keys in the codebase
echo -e "\n=== Usage of success/done keys in Go code ==="
rg 'i18n\.T\("(success|done|pass)' pkg/ --include="*.go"

Repository: host-uk/core
Length of output: 2595
🏁 Script executed:
#!/bin/bash
# Search for how label keys are used in the Go codebase
echo "=== Usage of label keys in Go code ==="
rg 'i18n\.(T|Label)\("label\.' pkg/ -A2
# Also check other patterns with label
echo -e "\n=== All label key usages ==="
rg 'label\.(done|success)' pkg/i18n/locales/en_GB.json
# Check what the release completion context should be
echo -e "\n=== Release-related keys in locale ==="
rg '"release\.' pkg/i18n/locales/en_GB.json | head -15

Repository: host-uk/core
Length of output: 1129
Invalid i18n key reference with incorrect proposed fix.
The key "i18n.done.pass" doesn't exist in the locale file. The proposed replacement "common.label.success" is also incorrect—there is no "common." namespace in the locale structure. The correct key is "label.success" (which maps to "Success").
🐛 Proposed fix
- cli.Print("%s %s\n", buildSuccessStyle.Render(i18n.T("i18n.done.pass")), i18n.T("cmd.build.release.completed"))
+ cli.Print("%s %s\n", buildSuccessStyle.Render(i18n.T("label.success")), i18n.T("cmd.build.release.completed"))

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-cli.Print("%s %s\n", buildSuccessStyle.Render(i18n.T("i18n.done.pass")), i18n.T("cmd.build.release.completed"))
+cli.Print("%s %s\n", buildSuccessStyle.Render(i18n.T("label.success")), i18n.T("cmd.build.release.completed"))
```
🤖 Prompt for AI Agents
In `@pkg/build/buildcmd/cmd_release.go` at line 100, The i18n key used in the
cli.Print call is incorrect: replace the non-existent "i18n.done.pass" (and do
not use a "common." namespace) with the correct key "label.success" inside the
i18n.T call used by buildSuccessStyle.Render in the cli.Print invocation
(symbols: cli.Print, buildSuccessStyle.Render, i18n.T); ensure the string passed
to i18n.T is exactly "label.success" so the localized "Success" label is
rendered.
Summary
`core build release`, `core monitor`)

Changes
Release Command & Archive Support
- pkg/build/archive.go: Add tar.xz support via Borg compress package
- pkg/build/buildcmd/cmd_release.go: Complete release command implementation
- internal/variants/full.go: Wire monitor and rag commands

Unified Installers
- install.sh: Unified bash installer with BunnyCDN edge transformation
- install.bat: Unified Windows installer with proper error propagation

RAG Implementation (New)
- pkg/rag/: Core RAG functionality (Qdrant client, Ollama embeddings, chunking)
- internal/cmd/rag/: CLI commands (ingest, query, collections)

Test plan
- `go build ./...` passes
- `go test ./...` passes

🤖 Generated with Claude Code
Summary by CodeRabbit
New Features
Bug Fixes / Behavior
Documentation
Chores