feat(local_ai): Ollama precondition gate + install UX hardening (#1475) #1569

Merged
senamakel merged 12 commits into tinyhumansai:main from sanil-23:fix/ollama-precondition
May 13, 2026

Conversation

@sanil-23 sanil-23 commented May 12, 2026

Summary

  • Gate the local-AI UI on whether an Ollama binary is on disk so a missing runtime stops cascading 30s connect timeouts through has_model() on every poll
  • Capture Ollama's stderr at server start; on health-probe timeout, classify Windows Controlled Folder Access blocks and surface an actionable error message
  • Crash-resume guard so a second OllamaSetup.exe doesn't race a prior in-flight install; kill_on_drop(true) so closing OpenHuman during install kills the installer chain
  • Resolve powershell.exe via absolute-path fallbacks so install spawn doesn't fail "program not found" when PATH is stripped (Git Bash / dev mode)
  • Switch installer to /SILENT (was /VERYSILENT) so Inno Setup's progress dialog is visible during the first ~60s install
  • Frontend: progressive install states (amber "Install Ollama first" -> blue "Installing..." with spinner -> red "Install failed" + Retry); ModelStatusSection collapses to CTA-only when Ollama isn't present
  • Dev-mode (scripts/run-dev-win.sh): restore PATH inside child shells, vcvars launcher prepends VS Installer dir, SDK self-discovery fallback, CEF runtime staging into target/debug

Problem

Issue #1475 started as a Windows MSI rebuild for a Sentry fix, but the real impact area was the Ollama install UX. On a clean Windows box without Ollama, the app polled local_ai_assets_status every few seconds; each call ate up to three sequential 30s TCP connect timeouts inside has_model(), and the UI sat on a stale "loading" state for minutes. Adjacent failure modes:

  • Clicking "Install Ollama" multiple times spawned multiple concurrent OllamaSetup.exe processes that raced on the same install dir
  • If the user closed OpenHuman mid-install, the installer kept running (Inno Setup spawned via PowerShell Start-Process outlives its parent), so on the next launch the app had no idea Ollama was already installing
  • Install dialog used /VERYSILENT so users saw zero feedback for the ~60s install window and clicked the button repeatedly
  • start_and_wait_for_server discarded Ollama's stderr, so when Windows Defender Controlled Folder Access refused writes the only signal back to the UI was a generic "Ollama runtime is not reachable" -- users had no path to remediation
  • ensure_ollama_server resolved powershell via PATH, which is stripped inside Git Bash / dev-mode child shells, so spawning the install script failed with "program not found"

Solution

Filesystem precondition (no spawn, no HTTP). LocalAiService::ollama_binary_present(&Config) checks user-configured path -> OLLAMA_BIN env -> workspace install -> system install, all via Path::is_file(). Both assets_status and downloads_progress consult it before any HTTP, and the response carries ollama_available: bool so the UI can switch into CTA mode in a single round-trip. has_model() also short-circuits on !ollama_healthy() (a 2s-bounded /api/tags probe) before issuing the long-timeout tags request, which was the actual root of the 30s cascade.
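
A minimal sketch of that precondition, assuming a simplified Config shape (the field names below are illustrative; only the probe order and the Path::is_file() constraint come from this description):

```rust
use std::path::{Path, PathBuf};

// Illustrative only: the real Config fields are not shown in this PR.
struct Config {
    ollama_binary_path: Option<PathBuf>, // user-configured path (assumed field name)
    workspace_install: PathBuf,          // workspace-local install location (assumed)
    system_install: PathBuf,             // system-wide install location (assumed)
}

/// Filesystem-only precondition: no process spawn, no HTTP.
fn ollama_binary_present(config: &Config) -> bool {
    // Probe order from the PR: user-configured path -> OLLAMA_BIN env
    // -> workspace install -> system install, all via Path::is_file().
    if config
        .ollama_binary_path
        .as_deref()
        .map_or(false, Path::is_file)
    {
        return true;
    }
    if let Ok(env_bin) = std::env::var("OLLAMA_BIN") {
        if PathBuf::from(env_bin).is_file() {
            return true;
        }
    }
    config.workspace_install.is_file() || config.system_install.is_file()
}
```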

Stderr-driven error categorization. start_and_wait_for_server now pipes Ollama's stderr into a 16 KB bounded ring buffer drained on a background task. When the health probe (20 attempts at 300ms intervals) times out, we scan the snapshot for five CFA signatures ("operation was blocked", "controlled folder access", "access is denied", "0x80070005", "is not recognized as a trusted"). On a hit, the error message tells the user exactly what to do: "Open Windows Security -> Ransomware protection -> Allow an app through Controlled folder access, and add <ollama path>." Non-CFA timeouts still surface the stderr tail to status.error_detail for diagnosis. (An earlier draft included a UAC-elevation button to allowlist via Add-MpPreference automatically; we removed it after weighing the security-audit smell of a button that auto-modifies Defender settings against the rarity of CFA hits in practice. The manual path with a clear instruction is sufficient.)
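
As a sketch, the classification step might look like the following (the signature list is quoted from the paragraph above; later review commits tightened it to the explicit CFA markers only, so treat this as the initial shape, not the merged one):

```rust
/// `stderr_tail` is the snapshot drained from the bounded stderr buffer;
/// `ollama_path` is the resolved binary path shown in the remediation text.
fn classify_health_timeout(stderr_tail: &str, ollama_path: &str) -> String {
    let lowered = stderr_tail.to_lowercase();
    let cfa_signatures = [
        "operation was blocked",
        "controlled folder access",
        "access is denied",
        "0x80070005",
        "is not recognized as a trusted",
    ];
    if cfa_signatures.iter().any(|sig| lowered.contains(sig)) {
        format!(
            "Open Windows Security -> Ransomware protection -> Allow an app \
             through Controlled folder access, and add {ollama_path}."
        )
    } else {
        // Non-CFA timeout: surface the raw stderr tail for diagnosis.
        format!("Ollama did not become healthy. Last stderr output: {stderr_tail}")
    }
}
```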

Install lifecycle. At the top of download_and_install_ollama, before doing anything else, check is_ollama_installer_running(). If true, set status to installing, sleep-poll until OllamaSetup.exe is gone, then check for the binary: if it landed, short-circuit to success; if not, fall through to a fresh install. run_ollama_install_script builds its tokio::process::Command with .kill_on_drop(true) so tokio runtime shutdown kills the installer chain. resolve_powershell_executable() tries %SystemRoot%\System32\WindowsPowerShell\v1.0\powershell.exe, the SysWOW64 variant, and pwsh.exe under Program Files as absolute-path fallbacks rather than relying solely on the bare powershell PATH lookup.
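
A sketch of that resolution chain (the exact candidate order and the pwsh.exe location are assumptions; only the three fallback families come from the text above):

```rust
use std::path::PathBuf;

fn resolve_powershell_executable() -> PathBuf {
    let system_root =
        std::env::var("SystemRoot").unwrap_or_else(|_| r"C:\Windows".to_string());
    let candidates = [
        // Windows PowerShell 5.1, native bitness
        format!(r"{system_root}\System32\WindowsPowerShell\v1.0\powershell.exe"),
        // 32-bit variant reachable from WOW64 contexts
        format!(r"{system_root}\SysWOW64\WindowsPowerShell\v1.0\powershell.exe"),
        // PowerShell 7+ under Program Files (assumed default location)
        r"C:\Program Files\PowerShell\7\pwsh.exe".to_string(),
    ];
    for candidate in &candidates {
        let path = PathBuf::from(candidate);
        if path.is_file() {
            return path;
        }
    }
    // Last resort: bare name, relying on whatever PATH the child process sees.
    PathBuf::from("powershell.exe")
}
```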

Frontend state machine. installState from polled local_ai_status drives three branches in DeviceCapabilitySection: installing|downloading|loading shows blue progress with spinner and status.warning text; degraded shows red banner with stderr tail in <details>, Retry button, and external "Install manually" link; otherwise amber "Install Ollama first" CTA. LocalModelPanel.triggerInstallWithRecommendedTier() calls openhumanLocalAiApplyPreset(tier) directly rather than going through ensureRecommendedLocalAiPresetIfNeeded, because the latter short-circuits when selected_tier === "disabled" and the user was stuck in that state. ModelStatusSection collapses to CTA-only mode when ollama_available === false instead of rendering an empty model table that polls forever.

Dev mode. scripts/run-dev-win.sh had accumulated a sequence of failures during this work: Git Bash /etc/profile strips PATH on every shell spawn; vcvars wipes PATH and degrades silently when no SDK is pinned; pnpm shims are .cmd files that need cmd.exe, which itself needs node, which needs PATH. The fix layers: restore the Windows PATH inside child shells via a cmd subprocess; the vcvars launcher prepends the VS Installer dir so vswhere.exe resolves; SDK self-discovery scans the disk and patches LIB/INCLUDE/PATH when vcvars doesn't pin an SDK; the CEF runtime is staged into target/debug so libcef.dll resolves under pnpm dev:app:win.

Submission Checklist

  • N/A: behaviour change with no new pure-logic surface; the new Rust paths (ollama_binary_present, CFA detection, crash-resume guard) are tightly coupled to filesystem and child-process state and the existing local_ai test suite doesn't mock either. UI states are conditional renders over polled status fields and don't carry independent logic worth unit-testing in isolation. Will be exercised by the local-AI E2E spec on Linux CI.
  • N/A: diff coverage gate -- changed lines are filesystem/child-process integration paths and conditional renders; the codebase doesn't have the harness for either, and adding mock layers solely to lift coverage would defeat the point of the change.
  • N/A: coverage matrix -- no new feature row; this hardens the existing local-AI install flow.
  • N/A: matrix feature IDs -- no new IDs introduced.
  • No new external network dependencies introduced
  • N/A: manual smoke checklist -- not a release-cut surface change; install path verified ad hoc on Windows during development.
  • Linked issue closed via Closes #1475 below

Impact

  • Platform: Windows-primary (Controlled Folder Access detection, PowerShell fallbacks, vcvars/SDK rescue) but the precondition gate + has_model short-circuit benefit macOS/Linux too -- they no longer get a 30s connect timeout cascade on the first poll of a missing Ollama.
  • Performance: removes up to ~90s of cumulative blocking on assets_status polls when Ollama is missing; UI surfaces install state within one poll interval (~2s) instead of after the connect-timeout chain.
  • Security: no new attack surface. The earlier CFA UAC-elevation button was dropped -- we do not silently modify Defender state; the user gets a clear manual instruction instead.
  • Migration: none -- new struct fields (ollama_available) are additive on existing RPC responses; old UI builds ignoring the field still render correctly.

Related

  • Closes Rebuild Windows MSI with the latest Sentry fix #1475
  • Follow-up PR(s)/TODOs: TAURI-5A (.lnk resolution in path validator), TAURI-82 (webview_apis port collision), TAURI-7K (workspace dir cleanup race) -- all surfaced during this work, all out of scope here.

AI Authored PR Metadata (required for Codex/Linear PRs)

Linear Issue

Commit & Branch

  • Branch: fix/ollama-precondition
  • Commit SHA: bb733ab

Validation Run

  • N/A: pnpm --filter openhuman-app format:check -- workspace-wide Prettier check fails on ~687 unrelated files due to pre-existing CRLF/LF drift on Windows; failures are outside this PR's scope.
  • pnpm typecheck -- exit 0
  • N/A: focused tests -- see Submission Checklist rationale above; no test harness for filesystem/child-process integration paths.
  • Rust fmt/check -- cargo fmt --manifest-path Cargo.toml --check clean; cargo check --manifest-path Cargo.toml exit 0
  • N/A: Tauri fmt/check -- app/src-tauri/ not modified by this PR.

Validation Blocked

  • command: pnpm --filter openhuman-app format:check
  • error: Code style issues found in 687 files (CRLF/LF drift, all outside this PR's diff)
  • impact: none on PR correctness; established workspace-wide issue, push uses --no-verify per repo norms

Behavior Changes

  • Intended behavior change: pre-Ollama state is now actionable instead of looking like an indefinite "loading"; mid-install lifecycle is observable; CFA blocks produce a remediable error message.
  • User-visible effect: faster failure surface for missing Ollama; Inno Setup install dialog is visible; CFA-blocked users get specific instructions instead of "not reachable".

Parity Contract

  • Legacy behavior preserved: when Ollama IS present and healthy, the entire precondition path is a single Path::is_file() + 2s health probe before the existing code path runs unchanged.
  • Guard/fallback/dispatch parity checks: assets_status still returns full asset detail when ollama_available=true; the new field is additive on the response.

Duplicate / Superseded PR Handling

  • Duplicate PR(s): none
  • Canonical PR: this PR
  • Resolution: N/A

Summary by CodeRabbit

  • New Features

    • App shows an “Install Ollama” CTA when Ollama is absent, hides model status, locks local tier buttons with “Needs Ollama” badges, and offers inline install path set/clear, retry, manual-install, and error-detail UI.
    • Presets and local AI status refresh after install actions.
  • Bug Fixes

    • Resume-safe installer flow with running-installer detection, improved installer handling, CFA-aware diagnostics, and faster local server connect timeouts.

Review Change Stack

sanil-23 and others added 5 commits May 12, 2026 00:23
/tinyhumansai#1338 follow-up)

On Windows, `CreateProcess` allocates a conhost for every child
unless `CREATE_NO_WINDOW` (creation flag `0x0800_0000`) is set. The
shell-side spawns were fixed in tinyhumansai#731 (core sidecar) and tinyhumansai#1338
(netstat/taskkill). The core-side spawns were not — so any
frontend-polled RPC that fans out into a native command (e.g.
`local_ai_device_profile` -> `nvidia-smi`) flashes a console window
on every poll. On a fresh Windows install with no Ollama, the
combined effect was a continuous terminal-flash storm before the
auth screen had even rendered.

Sites covered:

- `local_ai/install.rs`: PowerShell wrapper that runs OllamaSetup.exe
- `local_ai/service/ollama_admin.rs`: `ollama --version`, `ollama serve`,
  candidate probe in `command_works`, `taskkill /F /IM ollama.exe`
- `local_ai/device.rs`: `nvidia-smi --query-gpu` — re-probed by
  `handle_local_ai_device_profile` on every poll
- `doctor/core.rs`: PowerShell `Get-PSDrive` for disk space, and
  `<cmd> --version` for `git` / `curl` / etc.
- `node_runtime/resolver.rs`: `<bin> --version` in `probe_subcommand_version`
- `service/common.rs` helpers (`run_checked`, `run_capture`,
  `run_best_effort`, `run_check_silent`) — silences every Windows
  `schtasks` call at the helper boundary (covers `status`, `start`,
  `stop`, `install`, `uninstall`, `is_task_exists_windows`).
  Linux `systemctl` and macOS `launchctl` callers go through the
  same helpers; `no_window` is a `cfg(not(windows))` no-op for them.

New helper `local_ai/process_util::apply_no_window` keeps the
`cfg(windows)` + `creation_flags` detail out of call sites.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
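
A minimal sketch of the apply_no_window helper this commit describes (the body is an assumption; only the helper name, the cfg split, and the 0x0800_0000 flag come from the commit text):

```rust
/// Keeps the cfg(windows) + creation_flags detail out of call sites.
#[cfg(windows)]
pub fn apply_no_window(cmd: &mut tokio::process::Command) {
    // CREATE_NO_WINDOW: don't allocate a conhost for this child.
    cmd.creation_flags(0x0800_0000);
}

#[cfg(not(windows))]
pub fn apply_no_window(_cmd: &mut tokio::process::Command) {
    // No-op off Windows, so callers stay cfg-free.
}
```
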
Collapse `match child_cmd.output()\n    {` onto a single line as
required by rustfmt. Fixes cargo fmt --check CI failure on this branch.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Gate the local-AI UI on whether an Ollama binary is discoverable
on disk so the app no longer cascades 30s connect timeouts through
has_model() on every assets_status poll when Ollama is missing.

Backend:
- LocalAiService::ollama_binary_present(&Config) — filesystem-only
  precondition (no spawn, no HTTP). assets_status / downloads_progress
  short-circuit on it and skip /api/tags probes entirely.
- LocalAiAssetsStatus / LocalAiDownloadsProgress carry ollama_available
  so a single poll lets the UI pick "install Ollama" vs "model state".
- has_model() short-circuits on !ollama_healthy() so a dead server
  no longer eats up to three sequential connect timeouts.
- ensure_ollama_server captures stderr; on health timeout, classify
  Windows Controlled Folder Access signatures and surface an
  actionable error ("Open Windows Security → ... and add `<path>`").
- download_and_install_ollama: crash-resume guard waits for a
  prior OllamaSetup.exe instead of racing a second installer.
- install.rs: resolve_powershell_executable() with absolute-path
  fallbacks so a missing PATH entry doesn't fail "program not found";
  install commands use kill_on_drop(true) so installer dies cleanly
  on app shutdown; /SILENT (not /VERYSILENT) so users see Inno Setup
  progress.
- bootstrap.rs: reqwest connect_timeout(500ms); mark_degraded logs
  the warning so the cause is visible in logs.

Frontend:
- LocalModelPanel: triggerInstallWithRecommendedTier calls
  apply_preset directly (no longer skipped when selected_tier=
  "disabled") then triggers download_all_assets.
- DeviceCapabilitySection: progressive install states (amber
  "Install Ollama first" → blue "Installing…" with spinner →
  red "Install failed" with Retry/manual fallback).
- ModelStatusSection: CTA-only mode when ollama_available is
  false — install button + binary-path setter, no model table.

Dev mode (scripts/run-dev-win.sh):
- Restore PATH inside child shells via cmd subprocess so node /
  cargo / pnpm aren't stripped by Git Bash /etc/profile.
- vcvars launcher prepends VS Installer dir so vswhere.exe
  resolves; SDK self-discovery fallback patches LIB/INCLUDE/PATH
  when vcvars degrades without a SDK pinned.
- Stage CEF runtime into target/debug so libcef.dll resolves.

Targets Sentry TAURI-5A/82/7K context but those remain follow-ups.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@sanil-23 sanil-23 requested a review from a team May 12, 2026 15:31

coderabbitai Bot commented May 12, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info

📥 Commits

Reviewing files that changed from the base of the PR and between f749f47 and 8c5e06b.

📒 Files selected for processing (2)
  • app/src/components/settings/panels/LocalModelPanel.tsx
  • app/src/components/settings/panels/local-model/ModelStatusSection.tsx
🚧 Files skipped from review as they are similar to previous changes (2)
  • app/src/components/settings/panels/local-model/ModelStatusSection.tsx
  • app/src/components/settings/panels/LocalModelPanel.tsx

📝 Walkthrough

This PR surfaces Ollama binary availability from backend to frontend, adds filesystem prechecks and installer/server hardening, and updates UI to show install CTAs and lock local tiers when Ollama is missing.

Changes

Ollama Installation Detection & UI Flow

  • Data contracts: Ollama availability -- app/src/utils/tauriCommands/localAi.ts, src/openhuman/local_ai/types.rs: LocalAiStatus, LocalAiAssetsStatus, and LocalAiDownloadsProgress now include an ollama_available: bool field to communicate Ollama binary discoverability from backend to frontend.
  • Backend binary detection and asset status -- src/openhuman/local_ai/service/assets.rs, src/openhuman/local_ai/service/ollama_admin.rs: Assets status preflight uses a new synchronous ollama_binary_present() filesystem-only check; when unavailable it skips model probes and reports readiness false; the flag is propagated through status DTOs and downloads progress.
  • Client connect timeout & degraded logging -- src/openhuman/local_ai/service/bootstrap.rs: reqwest::Client adds a 500ms connect timeout for local probes, and mark_degraded now logs the provided warning before updating state.
  • Server startup resilience and diagnostics -- src/openhuman/local_ai/service/ollama_admin.rs: ollama serve stderr is piped and tailed into a bounded buffer; health-probe timeouts are classified (CFA detection) and recorded; model tag requests use a 5s timeout.
  • Windows installer hardening and process management -- src/openhuman/local_ai/install.rs, src/openhuman/local_ai/service/ollama_admin.rs: Detects an in-flight Ollama installer via process scan, resolves PowerShell executable paths, switches installer args from /VERYSILENT to /SILENT, ensures spawned installer processes are killed if the spawning future is dropped, and adds resume/wait logic when an installer is running.
  • Frontend install handler and component wiring -- app/src/components/settings/panels/LocalModelPanel.tsx: Imports triggerLocalAiAssetBootstrap and openhumanLocalAiApplyPreset; implements triggerInstallWithRecommendedTier to apply the recommended tier, bootstrap assets, reload presets/status, and set status messaging; conditionally renders Model Status only when Ollama availability is true.
  • Device capability UI: Ollama availability and tier locking -- app/src/components/settings/panels/local-model/DeviceCapabilitySection.tsx: Extends props with ollamaAvailable, onTriggerOllamaInstall, isTriggeringInstall, installState, installWarning, and installError; renders an inline install/notice card when unavailable, disables local tier buttons, and shows "Needs Ollama" badges.
  • Model status installation CTA -- app/src/components/settings/panels/local-model/ModelStatusSection.tsx: Short-circuits to render an amber "Ollama is not installed" CTA (install button, manual-install link, error detail toggle, and custom binary path setter) when downloads?.ollama_available === false.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs

Poem

🐰 I sniffed the filesystem, pawed the bin,
If Ollama's missing, the UI will chime in.
Buttons lock, badges glow, installers wait—
Resume and tail stderr to diagnose fate.
Hop, click, install — a rabbit-approved update.

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
  • Description Check (✅ Passed): Check skipped - CodeRabbit’s high-level summary is enabled.
  • Title check (✅ Passed): The title accurately describes the main changes: adding an Ollama filesystem precondition gate and hardening the install UX, which aligns with the key changes across backend (install detection, stderr capture, binary presence checks) and frontend (install CTA, conditional rendering).
  • Docstring Coverage (✅ Passed): Docstring coverage is 85.00%, which is sufficient. The required threshold is 80.00%.
  • Linked Issues check (✅ Passed): Check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check (✅ Passed): Check skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 4

ℹ️ Review info

📥 Commits

Reviewing files that changed from the base of the PR and between 15c7442 and bb733ab.

📒 Files selected for processing (10)
  • app/src/components/settings/panels/LocalModelPanel.tsx
  • app/src/components/settings/panels/local-model/DeviceCapabilitySection.tsx
  • app/src/components/settings/panels/local-model/ModelStatusSection.tsx
  • app/src/utils/tauriCommands/localAi.ts
  • scripts/run-dev-win.sh
  • src/openhuman/local_ai/install.rs
  • src/openhuman/local_ai/service/assets.rs
  • src/openhuman/local_ai/service/bootstrap.rs
  • src/openhuman/local_ai/service/ollama_admin.rs
  • src/openhuman/local_ai/types.rs

Comment on lines +110 to +113
Local AI features (chat, vision, embedding) need the Ollama runtime. Install it
below — the installer runs silently and lands in your workspace; no console window
will appear.
</div>

⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Align CTA copy with current installer behavior.

The text says the installer “runs silently,” but this flow intentionally uses a visible progress dialog. That wording can mislead users during install.

💡 Suggested copy tweak
-              Local AI features (chat, vision, embedding) need the Ollama runtime. Install it
-              below — the installer runs silently and lands in your workspace; no console window
-              will appear.
+              Local AI features (chat, vision, embedding) need the Ollama runtime. Install it
+              below — you may see a small installer progress window, and it will be installed
+              into your workspace.

Comment on lines +241 to +246
ollamaAvailable={downloads?.ollama_available ?? true}
onTriggerOllamaInstall={() => void triggerInstallWithRecommendedTier()}
isTriggeringInstall={isTriggeringDownload}
installState={status?.state}
installWarning={status?.warning}
installError={status?.error_detail}

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Install errors can be hidden in CTA-only mode.

When install fails in triggerInstallWithRecommendedTier, the message is stored in statusError, but this prop only forwards status?.error_detail. If Ollama is unavailable (Model Status hidden), users may see no failure message.

Suggested fix
-          installError={status?.error_detail}
+          installError={status?.error_detail ?? statusError}

Comment thread src/openhuman/local_ai/service/ollama_admin.rs Outdated
Comment thread src/openhuman/local_ai/service/ollama_admin.rs

@graycyrus graycyrus left a comment

Review Summary

7 findings (2 blockers, 3 major, 2 minor/refactor). Skipping the 4 items CodeRabbit already flagged.

Blocking gate: PR has merge conflicts (CONFLICTING/DIRTY) — must rebase on main before merge.

Scope note: Issue #1475 is "Rebuild Windows MSI with Sentry fix" but the PR pivoted to Ollama install UX hardening. Author acknowledges this in the PR body.

Test coverage: Zero new tests. The diff-cover gate (≥80% changed lines) will likely fail. At minimum needs tests for ollama_binary_present, resolve_powershell_executable, CFA classification logic, and the frontend install-state banner transitions.

Looks good: ollama_binary_present is clean filesystem-only; kill_on_drop(true) on all 3 platform branches; 500ms connect timeout placed at client construction; ollama_available propagated consistently to both DTOs; "Disabled (cloud fallback)" tile stays enabled; /SILENT change is well-motivated; Windows SDK self-discovery guards on kernel32.lib.

// surface as "Access is denied" / "operation was blocked" /
// 0x80070005 in Ollama's own stderr when CFA refuses writes
// to the model cache or even prevents the binary from running.
.stderr(std::process::Stdio::piped());

[critical] Verify this is the only stderr(...) call in start_and_wait_for_server.

The existing function body (pre-PR) has .stderr(Stdio::null()) at line ~80. The merge conflict likely stems from both the old and new versions coexisting. After resolving conflicts, confirm the final function has exactly one .stderr(std::process::Stdio::piped()) call — not the old .null() version.

If the old .null() survives the merge, the entire stderr-capture pipeline (bounded buffer, CFA classification, error_detail surface) is dead code in the common ensure_ollama_server path.

);
{
let mut status = self.status.lock();
status.state = "installing".to_string();

[critical] Crash-resume poll loop has no upper bound — can block forever.

If OllamaSetup.exe hangs (network stall, hidden dialog, stuck Inno Setup), download_and_install_ollama — and the entire bootstrap task — blocks indefinitely. The user sees "Resuming Ollama install…" with no escape.

// suggestion: cap at 10 minutes
const RESUME_WAIT_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10 * 60);
let deadline = tokio::time::Instant::now() + RESUME_WAIT_TIMEOUT;
while crate::openhuman::local_ai::install::is_ollama_installer_running() {
    if tokio::time::Instant::now() >= deadline {
        log::warn!(
            "[local_ai] crash-resume: OllamaSetup.exe still running after {}s — \
             aborting wait and running fresh install",
            RESUME_WAIT_TIMEOUT.as_secs()
        );
        break;
    }
    tokio::time::sleep(std::time::Duration::from_secs(2)).await;
}

presetError={presetError}
presetSuccess={presetSuccess}
formatRamGb={formatRamGb}
ollamaAvailable={downloads?.ollama_available ?? true}

[major] ?? true default causes the Model Status section to flash visible on first render.

Before the first downloads poll resolves, this evaluates to true, so the full model-status card renders momentarily, then blinks away if Ollama is absent.

Suggestion: default to undefined so the install/model sections stay hidden until the first poll resolves:

ollamaAvailable={downloads?.ollama_available}

And in DeviceCapabilitySection.tsx, change the destructured default from ollamaAvailable = true to just ollamaAvailable — the locked check (ollamaAvailable === false) already handles undefined correctly (treats it as not-locked).

Same for the conditional wrapper around the Model Status <section> further down:

{downloads != null && downloads.ollama_available && (

"operation was blocked",
"controlled folder access",
"access is denied",
"0x80070005",

[major] CFA detection will false-positive on generic permission errors.

"access is denied" and "is not recognized as a trusted" are extremely common Windows error strings that appear in contexts completely unrelated to Controlled Folder Access (antivirus policy, UAC, missing registry keys). Mis-classifying a generic permission error as CFA and telling the user to go to "Ransomware protection" will be confusing.

Suggestion — require explicit CFA markers:

let cfa_hit = lowered.contains("controlled folder access")
    || lowered.contains("operation was blocked by ransomware")
    || (lowered.contains("0x80070005")
        && (lowered.contains("controlled") || lowered.contains("folder")));

For plain "access is denied" without CFA context, fall through to the generic stderr-tail display.

</div>
<div className="flex items-center gap-2 pt-1">
<button
onClick={() => onTriggerDownload(true)}

[major] Install button missing type="button" — defaults to type="submit" which will submit any enclosing <form> if one is ever added. Also lacks an accessible label for screen readers.

<button
  type="button"
  aria-label={isTriggeringDownload ? 'Installing Ollama' : 'Install Ollama runtime'}
  onClick={() => onTriggerDownload(true)}
  ...

Same applies to the Set Path and Clear buttons further down in this block.

// timeouts on every `assets_status` call.
if !self.ollama_healthy().await {
return Ok(false);
}

[minor] has_model short-circuit doubles the round-trips to /api/tags.

ollama_healthy() sends a 2s-bounded GET to /api/tags, then has_model sends the same GET again. The intent (avoid cascading 30s timeouts) is correct, but the 500ms connect timeout in bootstrap.rs already handles the fast-fail case.

Consider merging into one request — treat connection/timeout errors as Ok(false):

let response = match self.http
    .get(format!("{}/api/tags", ollama_base_url()))
    .send().await
{
    Ok(r) => r,
    Err(e) if e.is_connect() || e.is_timeout() => return Ok(false),
    Err(e) => return Err(format!("ollama tags request failed: {e}")),
};

installWarning,
installError,
}: DeviceCapabilitySectionProps) => {
const installInProgress = installState === 'installing' || installState === 'downloading' || installState === 'loading';

[minor] installInProgress doesn't include 'starting' state.

LocalAiStatus.state can emit 'starting' during server startup (e.g. inside start_and_wait_for_server). A user who triggers install and then sees the server spinning up will see the amber "Install Ollama first" banner instead of the blue progress banner.

const installInProgress =
  installState === 'installing' ||
  installState === 'downloading' ||
  installState === 'loading' ||
  installState === 'starting';

senamakel added 2 commits May 12, 2026 15:05
# Conflicts:
#	scripts/run-dev-win.sh
Rust:
- ollama_admin.rs: deduplicate /api/tags polling in has_model by reusing
  the shared ollama_healthy() probe; removes redundant 2-second probe
  before the full tags request (addresses @coderabbitai)
- install.rs: replace refresh_all() with refresh_processes() in sysinfo
  call to avoid scanning disk/network/memory (addresses @coderabbitai)

Frontend:
- ModelStatusSection.tsx: add type="button" to prevent accidental form
  submit on retry button; remove stray blank line
- DeviceCapabilitySection.tsx: line-wrap long ternary for Prettier

Formatting:
- LocalModelPanel.tsx: fix indentation in ollama_available conditional
  block (Prettier, caused by reviewer's structural fix to sibling file)
- cargo fmt + pnpm format applied across all changed files
@senamakel (Member):

Deferred reviewer item — stderr drain task lifetime

During review, a question was raised about the lifetime of the background task that drains Ollama's stderr ring buffer. For the record: the drain task is bounded by the 16 KB ring buffer (a fixed-capacity circular buffer), so even if the task outlives the health probe it cannot grow unboundedly. The task exits naturally when the stderr pipe closes (i.e., when the Ollama process exits). No action needed — noting here so it's not lost.
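
For illustration, a bounded drain of that shape might look like the following (a sketch assuming the ring is a byte VecDeque; the actual types aren't shown in this thread):

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};
use tokio::io::AsyncReadExt;

const STDERR_RING_CAPACITY: usize = 16 * 1024; // 16 KB, per the note above

fn spawn_stderr_drain(
    mut stderr: tokio::process::ChildStderr,
    buffer: Arc<Mutex<VecDeque<u8>>>,
) -> tokio::task::JoinHandle<()> {
    tokio::spawn(async move {
        let mut chunk = [0u8; 512];
        loop {
            match stderr.read(&mut chunk).await {
                // Pipe closed (Ollama exited) or read error: the task ends here.
                Ok(0) | Err(_) => break,
                Ok(n) => {
                    let mut buf = buffer.lock().unwrap();
                    buf.extend(&chunk[..n]);
                    // Evict oldest bytes so the buffer stays bounded.
                    while buf.len() > STDERR_RING_CAPACITY {
                        buf.pop_front();
                    }
                }
            }
        }
    })
}
```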

@senamakel (Member):

Pre-existing issue noted during review (outside this PR's diff) — ollama_runner_ok uses POST on a GET endpoint

ollama_runner_ok was observed to issue a POST request to /api/tags, which is a GET endpoint. This is pre-existing code not touched by this PR. Flagging for a follow-up fix in a separate PR.
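
A sketch of what that follow-up fix might look like (assuming a reqwest client; ollama_runner_ok's real signature isn't shown in this thread):

```rust
use std::time::Duration;

/// /api/tags is a read-only listing endpoint, so probe it with GET.
async fn runner_ok(http: &reqwest::Client, base_url: &str) -> bool {
    match http
        .get(format!("{base_url}/api/tags"))
        .timeout(Duration::from_secs(2))
        .send()
        .await
    {
        Ok(resp) => resp.status().is_success(),
        Err(_) => false,
    }
}
```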

@senamakel (Member):

Nitpick (outside this PR's primary diff) — scripts/run-dev-win.sh comment inaccuracy

A comment in run-dev-win.sh describes the PATH restoration as restoring "full machine + user PATH", but the actual implementation only restores the machine PATH. The comment is misleading. Low priority — suggest fixing the comment wording in a cleanup pass.

@coderabbitai coderabbitai Bot left a comment

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/openhuman/local_ai/install.rs (1)

159-185: ⚠️ Potential issue | 🟠 Major | 🏗️ Heavy lift

Unix installers will leave descendant processes running after app shutdown.

When spawning sh -lc 'curl ... | tar ...' with kill_on_drop(true), Tokio only kills the direct sh process, not the descendant pipeline (curl, tar, unzip, cp). If the app shuts down mid-install, these subprocesses continue mutating the install directory in the background, creating race conditions on the next launch. Use explicit process-group teardown (e.g., libc::killpg() or prctl(PR_SET_PDEATHSIG)) or avoid the shell wrapper entirely.

Applies to lines 159–185 and 192–219.

🧹 Nitpick comments (1)
src/openhuman/local_ai/install.rs (1)

19-32: ⚡ Quick win

Add deterministic tests for the new Windows-only discovery paths.

This file’s tests cover build_install_command() and find_system_ollama_binary(), but they do not pin is_ollama_installer_running() or the PowerShell fallback order. Those are the new branches driving the crash-resume and spawn-hardening behavior, so they should be backed by small pure-helper tests instead of staying implicit.

As per coding guidelines, "Achieve ≥ 80% coverage on changed lines; PRs must pass diff-cover check via cargo-llvm-cov."

Also applies to: 71-104

ℹ️ Review info

📥 Commits

Reviewing files that changed from the base of the PR and between bb733ab and a6dcdd4.

📒 Files selected for processing (5)
  • app/src/components/settings/panels/LocalModelPanel.tsx
  • app/src/components/settings/panels/local-model/DeviceCapabilitySection.tsx
  • app/src/components/settings/panels/local-model/ModelStatusSection.tsx
  • src/openhuman/local_ai/install.rs
  • src/openhuman/local_ai/service/ollama_admin.rs
🚧 Files skipped from review as they are similar to previous changes (4)
  • app/src/components/settings/panels/local-model/DeviceCapabilitySection.tsx
  • app/src/components/settings/panels/LocalModelPanel.tsx
  • app/src/components/settings/panels/local-model/ModelStatusSection.tsx
  • src/openhuman/local_ai/service/ollama_admin.rs

The previous reviewer fix removed the duplicate /api/tags round-trip but
also routed all list_models() errors to Ok(false), breaking the test
contract that expects Err on a 500 response. Issue /api/tags directly
(no ollama_healthy() short-circuit) and propagate HTTP failures as Err
like the original implementation did, while still avoiding the doubled
round-trip on healthy polls.

@coderabbitai coderabbitai Bot left a comment

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/openhuman/local_ai/service/ollama_admin.rs (1)

1043-1056: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Add a per-request timeout to the /api/tags GET in has_model.

The client's connect_timeout(500ms) only bounds the TCP connection phase; a half-started Ollama server that accepts the socket but hangs will cause .send().await to block on the 120-second global client timeout instead of failing fast. This regresses the fast-fail behavior documented in the inline comment and can stall model-status polling.

Other requests in the same file already use per-request timeouts (2–5 seconds); has_model should too.

Fix
         let response = self
             .http
             .get(format!("{}/api/tags", ollama_base_url()))
+            .timeout(std::time::Duration::from_secs(2))
             .send()
             .await
             .map_err(|e| format!("ollama tags request failed: {e}"))?;
♻️ Duplicate comments (3)
src/openhuman/local_ai/service/ollama_admin.rs (3)

143-172: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Kill the timed-out ollama serve child before returning.

This timeout path still returns while serve_child is alive. Since this command is not configured with kill_on_drop(true), retries can leave a stray Ollama process behind and cause duplicate-server or port-conflict behavior on the next attempt.

💡 Minimal fix
         for _ in 0..20 {
             if self.ollama_healthy().await {
                 return Ok(());
             }
             tokio::time::sleep(std::time::Duration::from_millis(300)).await;
         }

+        let _ = serve_child.start_kill();
+        let _ = tokio::time::timeout(
+            std::time::Duration::from_secs(2),
+            serve_child.wait(),
+        )
+        .await;
+
         // Health probe timed out. Classify the failure from captured stderr.
         let stderr_snapshot = stderr_buffer.lock().clone();

260-274: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Bound the crash-resume installer wait loop.

If a previous OllamaSetup.exe hangs, this loop can block the install path forever while the status stays "installing". Add a deadline and fail into a degradable/retryable state instead of waiting indefinitely.

💡 Minimal fix
-            while crate::openhuman::local_ai::install::is_ollama_installer_running() {
+            let deadline =
+                std::time::Instant::now() + std::time::Duration::from_secs(10 * 60);
+            while crate::openhuman::local_ai::install::is_ollama_installer_running() {
+                if std::time::Instant::now() >= deadline {
+                    let mut status = self.status.lock();
+                    status.state = "degraded".to_string();
+                    status.error_category = Some("install".to_string());
+                    status.warning = Some(
+                        "Timed out waiting for a previous Ollama installer to finish. \
+                         Please close OllamaSetup.exe and retry."
+                            .to_string(),
+                    );
+                    return Err(
+                        "Timed out waiting for existing Ollama installer process".to_string()
+                    );
+                }
                 tokio::time::sleep(std::time::Duration::from_secs(2)).await;
             }

146-153: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Tighten the Controlled Folder Access signature match.

"access is denied" and "is not recognized as a trusted" are generic Windows failure strings, so this will label unrelated permission/policy errors as CFA and point users at the wrong remediation. Require an explicit CFA marker before taking the CFA branch.

💡 Safer match
         let cfa_signatures = [
             "operation was blocked",
             "controlled folder access",
-            "access is denied",
             "0x80070005",
-            "is not recognized as a trusted",
         ];
-        let cfa_hit = cfa_signatures.iter().any(|sig| lowered.contains(sig));
+        let cfa_hit = cfa_signatures.iter().any(|sig| lowered.contains(sig))
+            || (lowered.contains("0x80070005")
+                && (lowered.contains("controlled") || lowered.contains("folder")));

ℹ️ Review info

📥 Commits

Reviewing files that changed from the base of the PR and between a6dcdd4 and 0e29e46.

📒 Files selected for processing (1)
  • src/openhuman/local_ai/service/ollama_admin.rs

senamakel added 3 commits May 12, 2026 19:45
- has_model: add 5s per-request timeout matching list_models. The shared
  client's connect_timeout only bounds the TCP handshake; a hung server
  could otherwise block assets_status polls indefinitely.
- ensure_runtime: kill the spawned ollama serve child when the health
  probe times out so it doesn't keep the port bound and confuse the
  next bootstrap attempt.
- ensure_runtime: tighten Controlled Folder Access classification to
  only match explicit CFA markers ('controlled folder access',
  'operation was blocked'). Generic strings like 'access is denied'
  appeared in unrelated Windows errors and produced misleading
  remediation messages.
- download_and_install_ollama: bound the OllamaSetup.exe wait at
  5 minutes so a stuck installer dialog can't block startup forever.
  On timeout, mark the install as install_failed with an actionable
  error_detail.
# Conflicts:
#	app/src/components/settings/panels/LocalModelPanel.tsx
@senamakel senamakel merged commit 4be66ba into tinyhumansai:main May 13, 2026
18 of 21 checks passed