feat(core): add PageIndex-inspired patterns for skill discovery #37
Conversation
Implements patterns from anthropics/knowledge-work-plugins:

Tree Module (packages/core/src/tree/):
- Hierarchical skill taxonomy with 12 categories
- Tree generator from skill tags and metadata
- Tree serializer for JSON/Markdown export
- Related skills graph with 4 relation types

Reasoning Engine (packages/core/src/reasoning/):
- LLM-based tree traversal for skill discovery
- Explainable recommendations with reasoning chains
- Search planning with category relevance scoring
- Result caching with 5-minute TTL

Connectors (packages/core/src/connectors/):
- Tool-agnostic `~~` placeholder system
- 13 connector categories (crm, chat, email, etc.)
- Auto-suggest mappings from MCP config
- Placeholder detection and replacement utilities

Execution Flow (packages/core/src/execution/):
- Step-by-step execution tracking with metrics
- Automatic retry with exponential backoff
- Standalone vs Enhanced mode detection
- Capability-based feature gating

CLI Integration:
- New `skillkit tree` command with --generate, --stats flags
- Enhanced `skillkit recommend` with --explain, --reasoning flags

TUI Integration:
- Tree view mode in Marketplace (toggle with 'v')
- Arrow key navigation for tree hierarchy
📝 Walkthrough

Adds a new Tree CLI command and broad tree/reasoning/connector/execution subsystems across core, CLI, and TUI: new types, engines, generators, serializers, services, and UI integration enabling hierarchical skill trees, reasoning-enhanced recommendations, connector placeholder tooling, and execution flow orchestration.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant CLI as TreeCommand
    participant Core as Core API
    participant FS as Filesystem
    User->>CLI: run `skillkit tree --generate`
    CLI->>Core: loadIndexFromCache()
    Core->>FS: read index file
    FS-->>Core: index data
    Core->>Core: generateSkillTree(index.skills)
    Core-->>CLI: SkillTree
    CLI->>FS: saveTree(skill-tree.json)
    FS-->>CLI: saved confirmation
    CLI->>User: display stats / rendered tree
```

```mermaid
sequenceDiagram
    participant User
    participant CLI as RecommendCommand
    participant Reason as ReasoningEngine
    participant LLM as LLM
    participant Tree as SkillTreeService
    User->>CLI: `recommend --reasoning --explain`
    CLI->>Reason: search(query)
    Reason->>LLM: buildSearchPlan prompt
    LLM-->>Reason: SearchPlan
    Reason->>Tree: traverseTree(plan)
    Tree-->>Reason: TreeSearchResult[]
    Reason->>LLM: generateExplanation(skill)
    LLM-->>Reason: ExplainedMatch
    Reason-->>CLI: ExplainedRecommendation[]
    CLI->>User: show recommendations with explanations & paths
```

```mermaid
sequenceDiagram
    participant Caller
    participant Exec as ExecutionManager
    participant Step as StepDefinition
    participant Tools as MCP Tools
    Caller->>Exec: createFlow(skill, steps)
    Caller->>Exec: executeFlow(flowId)
    Exec->>Step: execute(input, context)
    Step->>Tools: call external MCP tools (if enhanced)
    Tools-->>Step: tool result
    Step-->>Exec: step output
    Exec->>Exec: update metrics & step status
    Exec-->>Caller: flow complete / status
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: 2 passed, 1 failed (warning)
Actionable comments posted: 13
🤖 Fix all issues with AI agents
In `@packages/cli/src/commands/tree.ts`:
- Around line 213-214: The call to parseInt when computing maxDepth from
this.depth can produce NaN for invalid input which breaks renderNode; validate
the parsed value (e.g., parseInt(this.depth, 10)) to ensure it's a finite
non-negative integer and fall back to the default (3) or clamp to a max value
before calling renderNode. Update the code around the maxDepth calculation
(where this.depth is parsed) to check Number.isFinite/Number.isInteger (or
!Number.isNaN) and enforce a safe range, or alternatively use clipanion's
built-in number validation for the depth option so renderNode receives a valid
integer.
- Line 19: The TREE_PATH constant uses process.env.HOME with a literal '~'
fallback which leaves an unexpanded tilde in the path; update the code that
defines TREE_PATH (the const named TREE_PATH that calls join(...)) to use the
platform homedir API instead of a '~' fallback — e.g., import/require the os
module and use os.homedir() (or a safe helper that expands tildes) so
join(os.homedir(), '.skillkit', 'skill-tree.json') is used, ensuring the path is
valid when HOME is undefined.
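For reference, a minimal sketch of the depth validation described in the first item above; the option name, the default of 3, and the cap of 10 are assumptions for illustration, not the project's actual values:

```ts
// Parse a --depth string safely and clamp it to a sane range.
function resolveMaxDepth(rawDepth: string | undefined, fallback = 3, cap = 10): number {
  const parsed = Number.parseInt(rawDepth ?? '', 10);
  if (!Number.isInteger(parsed) || parsed < 0) {
    return fallback; // NaN, negative, or missing input falls back to the default
  }
  return Math.min(parsed, cap); // clamp oversized values instead of rejecting them
}

// resolveMaxDepth('5')   -> 5
// resolveMaxDepth('abc') -> 3
// resolveMaxDepth('999') -> 10
```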
In `@packages/core/src/execution/manager.ts`:
- Around line 162-167: The timeout timer created for timeoutPromise is never
cleared, causing a timer leak; modify the code around Promise.race where you
call stepDef.execute and create the setTimeout so you store the timer id (from
setTimeout) and ensure you clearTimeout(timerId) once executePromise settles
(use finally on the race or on executePromise) or switch to an AbortController
and pass its signal into stepDef.execute (then call controller.abort() / clear
the timer on completion) so the timer is cleaned up when step.output is
resolved.
In `@packages/core/src/execution/mode.ts`:
- Around line 38-43: DEFAULT_MCP_CONFIG_PATHS currently uses process.env.HOME ||
'~', which yields literal "~" segments that break your later startsWith('~')
expansion; change the fallback to a real home path (e.g., use os.homedir()) or
to an empty string so join produces a usable relative path. Update
DEFAULT_MCP_CONFIG_PATHS to derive the home base from process.env.HOME ??
os.homedir() ?? '' and keep the rest of the entries unchanged so the subsequent
tilde-expansion logic (the code that checks startsWith('~')) works correctly.
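A minimal sketch of the homedir-based fallback suggested for DEFAULT_MCP_CONFIG_PATHS; the specific path entries below are illustrative, not the module's real list:

```ts
import { homedir } from 'node:os';
import { join } from 'node:path';

// Derive the home base once; fall back to os.homedir() when HOME is unset so
// no literal '~' segment ever reaches the path list.
const home = process.env.HOME ?? homedir();

const DEFAULT_MCP_CONFIG_PATHS = [
  join(home, '.mcp.json'),
  join(home, '.config', 'mcp', 'config.json'),
];
```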
In `@packages/core/src/reasoning/engine.ts`:
- Around line 549-556: The cache key generated by getCacheKey currently only
uses query.query, context.name, maxResults, and minConfidence and therefore can
collide when different context stacks share a name; update getCacheKey (in the
getCacheKey method) to incorporate the context stack details (e.g., languages,
frameworks, libraries) or a deterministic hash of the context stack object
(context.stack or context?.stack) so keys change when stack contents change;
compute a stable serialization or hash of the relevant stack fields and include
that value in the JSON.stringify payload returned by getCacheKey.
- Around line 545-547: The callLLM method currently returns a hardcoded mock
string which ignores the configured LLM provider and causes
extractJsonFromResponse to fail silently; update the callLLM implementation to
read the engine's provider config and either (a) instantiate and call the
appropriate client for supported providers (e.g., use OpenAI client for
'openai', Anthropic client for 'anthropic', Ollama client for 'ollama') and
return the provider's raw text response, or (b) if a provider is not
implemented, throw an explicit error so the failure surfaces instead of falling
back to mocks; modify the method referenced as callLLM and ensure callers like
extractJsonFromResponse receive real provider output or an error.
In `@packages/core/src/reasoning/prompts.ts`:
- Around line 216-224: The function validateCategoryScore currently returns
category: '' losing caller context; update validateCategoryScore to extract and
validate the category from the incoming payload (the local variable score)
instead of hardcoding an empty string—e.g., read score.category (or score.name
if that's the field used elsewhere), ensure it's a string and use it, otherwise
fallback to an empty string; keep the existing numeric clamping for score.score
and the string check for reasoning.
- Around line 186-214: validateSearchPlan currently trusts array elements from
the LLM and downstream code calls .toLowerCase() on categories/keywords which
can crash for non-strings; update validateSearchPlan to filter each array field
to only include typeof === 'string' elements (primaryCategories,
secondaryCategories, keywords and filters.tags/frameworks/languages) -- e.g.,
when plan.primaryCategories is an array, return plan.primaryCategories.filter(x
=> typeof x === 'string') instead of the raw array; keep the existing strategy
validation as-is.
In `@packages/core/src/recommend/engine.ts`:
- Around line 672-684: The initReasoning method in ReasoningRecommendationEngine
currently hardcodes the ReasoningEngine instantiation with provider: 'mock',
making explanations non-real; change initReasoning to obtain provider/config
from the engine's configuration or constructor args instead of hardcoding—e.g.,
add a Reasoning config property to ReasoningRecommendationEngine (or an options
param on its constructor), read that provider/config and pass it to new
ReasoningEngine(...) (or omit the provider to let ReasoningEngine use its
default factory), and preserve existing calls to loadSkills and generateTree;
update ReasoningRecommendationEngine, its constructor, and initReasoning to wire
the injected config through to ReasoningEngine.
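A minimal sketch of merging a partial reasoning config with defaults instead of hardcoding provider: 'mock'; the config fields and default values here are assumptions, not the library's actual ReasoningConfig:

```ts
interface ReasoningConfigSketch {
  provider: 'mock' | 'openai' | 'anthropic' | 'ollama';
  model: string;
  cacheTtlMs: number;
}

const DEFAULT_REASONING_CONFIG: ReasoningConfigSketch = {
  provider: 'mock',
  model: 'none',
  cacheTtlMs: 5 * 60 * 1000, // matches the 5-minute TTL mentioned in the PR description
};

// Spread defaults first so any caller-supplied field wins; no unsafe cast needed.
function resolveReasoningConfig(partial?: Partial<ReasoningConfigSketch>): ReasoningConfigSketch {
  return { ...DEFAULT_REASONING_CONFIG, ...partial };
}
```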
In `@packages/core/src/tree/generator.ts`:
- Around line 18-28: The private field skillMap is being populated in
generateTree (cleared and set from SkillSummary items) but never read; remove
the unused skillMap field and all references to it (the declaration "private
skillMap: Map<string, SkillSummary> = new Map();" and the
skillMap.clear()/skillMap.set(...) lines in generateTree) to eliminate dead
state, or alternatively add a clear doc comment on its intended future use if
you intend to keep it—choose one and apply consistently in the TreeGenerator
class (constructor, generateTree, and class fields).
In `@packages/tui/src/screens/Marketplace.tsx`:
- Around line 94-98: Wrap the loadOrGenerateTree() call and subsequent
setTreeState/updateTreeItems logic in a try/catch so errors aren’t swallowed; if
loadOrGenerateTree() throws, catch the error and call setError(...) with a
useful message (e.g., error.message or String(error)) and optionally clear or
reset tree state, otherwise proceed to setTreeState(tree) and call
updateTreeItems(tree.currentNode) when tree and currentNode exist.
- Around line 200-207: The traversal over newPath can continue using a stale
node when node.children.find(...) returns undefined; modify the loop in
Marketplace (where state.tree.rootNode, newPath and updateTreeItems are used) to
handle a missing segment by detecting when child is undefined and then: 1) break
out of the loop (or return) instead of continuing with the stale node, 2) choose
a clear fallback (e.g., set node to state.tree.rootNode or null) and call
updateTreeItems with that fallback, and 3) emit a warning/log indicating the
invalid path segment so callers can update the path UI. Ensure these changes
touch the loop that resolves child and the code path that calls updateTreeItems.
In `@packages/tui/src/services/tree.service.ts`:
- Line 12: The const TREE_PATH in tree.service.ts uses process.env.HOME with a
'~' fallback; extract this into a shared constant (e.g., export TREE_PATH from a
new shared constants module) so all occurrences reuse the same logic, and
implement it using a reliable homedir call (use os.homedir() or process.env.HOME
|| os.homedir()) and path.join to build the path ('.skillkit',
'skill-tree.json'); then update all other occurrences of TREE_PATH across the
codebase to import the shared constant so the fix is applied in one place.
🧹 Nitpick comments (18)

packages/core/src/recommend/types.ts (1)

287-328: Avoid type drift between recommendation and reasoning explanation/search plan shapes. These new types mirror reasoning counterparts; consider extracting shared definitions (or a common module) to keep them in sync and avoid mismatched fields downstream.

packages/core/src/reasoning/engine.ts (2)

65-76: Cached responses drop exploredPaths and reasoning summary. Cache entries only store results, so cache hits return empty paths and generic reasoning. Consider caching those fields to keep outputs consistent.

138-149: Consider parallelizing explainBatch to reduce latency. The loop awaits each explanation sequentially; `Promise.all` (or a small concurrency limit) can improve throughput if calls are independent.

🛠️ Suggested refactor

```diff
 async explainBatch(
   skills: Array<{ skill: SkillSummary; score: number }>,
   profile: ProjectProfile
 ): Promise<ExplainedRecommendation[]> {
-  const results: ExplainedRecommendation[] = [];
-
-  for (const { skill, score } of skills) {
-    const explained = await this.explain(skill, score, profile);
-    results.push(explained);
-  }
-
-  return results;
+  return Promise.all(
+    skills.map(({ skill, score }) => this.explain(skill, score, profile))
+  );
 }
```

packages/cli/src/commands/recommend.ts (2)

159-162: Consider if `--show-path` alone should trigger the reasoning engine. When `showPath` is true but neither `explain` nor `reasoning` is set, the user might expect lightweight path display without LLM-based reasoning overhead. Currently, all three flags trigger `handleReasoningRecommendations`, which initializes the full reasoning engine. If displaying the tree path can be done without the reasoning engine, consider separating that case.

358-447: Consider extracting shared display logic to reduce duplication. `displayExplainedRecommendations` and `displayRecommendations` share significant code for rendering scores, quality badges, and warnings. Extracting a common helper (e.g., `renderSkillItem`) would reduce duplication and ease future maintenance.

♻️ Example helper extraction

```ts
// Helper to render common skill display elements
private renderSkillScore(score: number, name: string, quality?: number): void {
  const scoreColor = score >= 70 ? colors.success : score >= 50 ? colors.warning : colors.muted;
  const scoreBar = progressBar(score, 100, 10);
  const qualityDisplay = quality != null ? ` ${formatQualityBadge(quality)}` : '';
  console.log(`  ${scoreColor(`${score}%`)} ${colors.dim(scoreBar)} ${colors.bold(name)}${qualityDisplay}`);
}
```

packages/tui/src/screens/Marketplace.tsx (1)

483-487: Type assertion could be avoided with proper type guards. The cast `(item as TreeNodeDisplay).skillCount` assumes the type without validation. Since the conditional check already verifies `isCategory() && 'skillCount' in item`, you could use a local variable with proper narrowing.

♻️ Cleaner type handling

```diff
-  <Show when={isCategory() && 'skillCount' in item}>
-    <text fg={terminalColors.textMuted}>
-      ({(item as TreeNodeDisplay).skillCount} skills)
-    </text>
-  </Show>
+  <Show when={isCategory() && 'skillCount' in item && typeof (item as TreeNodeDisplay).skillCount === 'number'}>
+    {(() => {
+      const display = item as TreeNodeDisplay;
+      return (
+        <text fg={terminalColors.textMuted}>
+          ({display.skillCount} skills)
+        </text>
+      );
+    })()}
+  </Show>
```

packages/core/src/tree/serializer.ts (2)

12-15: Consider distinguishing JSON parse errors from schema validation errors. `deserializeTree` throws on both malformed JSON and schema validation failures, but the error messages would differ. For debugging, it may help to catch `JSON.parse` errors separately and provide a clearer message.

♻️ Improved error handling

```diff
 export function deserializeTree(json: string): SkillTree {
-  const parsed = JSON.parse(json);
-  return SkillTreeSchema.parse(parsed);
+  let parsed: unknown;
+  try {
+    parsed = JSON.parse(json);
+  } catch (e) {
+    throw new Error(`Invalid JSON in skill tree: ${e instanceof Error ? e.message : String(e)}`);
+  }
+  return SkillTreeSchema.parse(parsed);
 }
```

153-161: O(n²) complexity for detecting moved skills. `compareTreeVersions` uses `oldSkills.find()` inside a loop over `newSkills`, resulting in O(n×m) complexity. For large skill sets, this could become slow. Consider using a Map for O(1) lookups.

♻️ Use Map for O(n) lookup

```diff
+const oldSkillMap = new Map(oldSkills.map((s) => [s.name, s]));
+
 for (const newSkill of newSkills) {
-  const oldSkill = oldSkills.find((s) => s.name === newSkill.name);
+  const oldSkill = oldSkillMap.get(newSkill.name);
   if (oldSkill && !arraysEqual(oldSkill.path, newSkill.path)) {
     moved.push({
       skill: newSkill.name,
       from: oldSkill.path,
       to: newSkill.path,
     });
   }
 }
```

packages/core/src/tree/index.ts (1)

1-25: Redundant re-exports: named exports duplicate wildcard exports. Lines 1-4 use `export *`, which already exports all symbols from each module. Lines 6-25 then re-export the same symbols by name, causing duplication. Choose one approach:

- Use only `export *` for simplicity, or
- Use only named exports for explicit API control

♻️ Option 1: Keep only wildcard exports

```diff
 export * from './types.js';
 export * from './generator.js';
 export * from './serializer.js';
 export * from './graph.js';
-
-export { TreeGenerator, generateSkillTree } from './generator.js';
-export {
-  serializeTree,
-  deserializeTree,
-  saveTree,
-  loadTree,
-  treeToText,
-  treeToMarkdown,
-  compareTreeVersions,
-  TREE_FILE_NAME,
-} from './serializer.js';
-export {
-  buildSkillGraph,
-  getRelatedSkills,
-  findSkillsByRelationType,
-  getSkillPath,
-  findSkillsInCategory,
-  serializeGraph,
-  deserializeGraph,
-} from './graph.js';
```

packages/core/src/tree/generator.ts (1)

207-225: `isSkillCategorized` duplicates `filterSkillsByCategory` logic. Both methods check tags and keywords against `CATEGORY_TAXONOMY`. Consider reusing `filterSkillsByCategory` to avoid logic drift:

```ts
private isSkillCategorized(skill: SkillSummary): boolean {
  return CATEGORY_TAXONOMY.some(
    (category) => this.filterSkillsByCategory([skill], category).length > 0
  );
}
```

packages/core/src/execution/manager.ts (2)

239-254: `cancelFlow` marks status but doesn't interrupt in-flight step execution. Setting `flow.status = 'failed'` doesn't stop the currently executing step. The `executeFlow` loop will continue until the current step completes. Consider adding an `AbortController` to the flow context that step executors can check (a sketch follows after this list).

272-276: Division by zero possible when `completedFlows` is 0. If `completedFlows` reaches 0 (e.g., after clearing metrics), `updateAverageDuration` would divide by zero. This can't happen in the current flow since `completedFlows++` is called before `updateAverageDuration`, but it's fragile if the code changes.

🛡️ Defensive check

```diff
 private updateAverageDuration(duration: number): void {
   const completedCount = this.metrics.completedFlows;
+  if (completedCount === 0) return;
   this.metrics.averageDuration =
     (this.metrics.averageDuration * (completedCount - 1) + duration) / completedCount;
 }
```

packages/core/src/connectors/types.ts (1)

48-140: Inconsistent placeholder casing. The `crm` category uses `~~CRM` (uppercase) while all other categories use lowercase (e.g., `~~chat`).

🔧 Suggested fix for consistent casing

```diff
 crm: {
-  placeholder: '~~CRM',
+  placeholder: '~~crm',
   category: 'crm',
```

packages/core/src/execution/types.ts (1)

92-106: `Map` in `FlowMetrics` won't serialize to JSON directly. The `stepMetrics` field uses a `Map`, which doesn't serialize to JSON with `JSON.stringify()`. If this type is intended to be persisted or transmitted, consider using `Record<string, ...>` instead, or document that callers must convert it before serialization.

🔧 Alternative using Record for JSON compatibility

```diff
 export interface FlowMetrics {
   totalFlows: number;
   completedFlows: number;
   failedFlows: number;
   averageDuration: number;
-  stepMetrics: Map<
-    string,
-    {
-      executionCount: number;
-      successCount: number;
-      failureCount: number;
-      averageDuration: number;
-    }
-  >;
+  stepMetrics: Record<
+    string,
+    {
+      executionCount: number;
+      successCount: number;
+      failureCount: number;
+      averageDuration: number;
+    }
+  >;
 }
```

packages/core/src/execution/mode.ts (1)

93-120: Silent error handling is appropriate but consider logging for debugging. The `detectMcpServers` function silently continues on errors, which is good for robustness. However, for debugging purposes, consider adding optional verbose logging or emitting the error information in a way that's accessible to callers who need to diagnose configuration issues.

packages/core/src/tree/graph.ts (2)

202-216: Non-null assertion could mask missing skills. The non-null assertion `skillMap.get(r.skillName)!` at line 210 assumes the skill exists, but then line 215 filters out falsy values anyway. If a relation references a skill not in the `skills` array, this would produce `undefined` being passed through until filtered. Consider removing the assertion and letting the filter handle it:

🔧 Safer approach without assertion

```diff
 return node.relations
   .filter((r) => {
     if (types && !types.includes(r.relationType)) return false;
     if (r.strength < minStrength) return false;
     return true;
   })
   .slice(0, limit)
-  .map((r) => ({
-    skill: skillMap.get(r.skillName)!,
-    relationType: r.relationType,
-    strength: r.strength,
-    reason: r.reason,
-  }))
-  .filter((r) => r.skill);
+  .map((r) => {
+    const skill = skillMap.get(r.skillName);
+    if (!skill) return null;
+    return {
+      skill,
+      relationType: r.relationType,
+      strength: r.strength,
+      reason: r.reason,
+    };
+  })
+  .filter((r): r is RelatedSkillResult => r !== null);
```

313-322: `deserializeGraph` will throw on invalid JSON. `JSON.parse` at line 314 will throw a `SyntaxError` on malformed JSON. If this is the intended behavior (fail-fast), consider documenting it. If graceful handling is preferred, wrap in try/catch and return `null`, similar to `loadTree`.

packages/tui/src/services/tree.service.ts (1)

6-6: Unused import: `treeToText`. The `treeToText` function is imported but not used anywhere in this file.

🔧 Remove unused import

```diff
 import {
   generateSkillTree,
   loadTree,
   saveTree,
-  treeToText,
   loadIndex,
   type SkillTree,
   type TreeNode,
 } from '@skillkit/core';
```
```ts
private async callLLM(prompt: string): Promise<string> {
  return `Mock response for: ${prompt.slice(0, 50)}...`;
}
```
🧩 Analysis chain

Verification scripts run against rohitg00/skillkit:

```sh
cat -n packages/core/src/reasoning/engine.ts | sed -n '530,560p'
rg "provider" packages/core/src/reasoning/engine.ts -B 2 -A 2
rg "callLLM" packages/core/src/reasoning/engine.ts -B 3 -A 3
rg "class.*Engine|constructor" packages/core/src/reasoning/engine.ts -A 10 | head -50
rg "extractJsonFromResponse" packages/core/src/reasoning/engine.ts -B 2 -A 5
rg "LLM|llm" packages/core/src/reasoning/engine.ts | head -20
fd "\.ts$" packages/core/src/reasoning/ | xargs rg "class.*LLM|function.*LLM|provider" | grep -i "llm\|provider" | head -30
cat packages/core/src/reasoning/engine.ts | grep -n "private\|public" | grep -i "llm\|provider"
cat packages/core/src/reasoning/prompts.ts | grep -n "extractJsonFromResponse" -A 15 | head -25
cat packages/core/src/reasoning/types.ts | rg "ReasoningProvider|ReasoningConfig" -A 5
rg "this.config.provider" packages/core/src/reasoning/engine.ts -B 2 -A 5
rg "catch" packages/core/src/reasoning/engine.ts -B 3 -A 2 | head -40
rg "mockSearchPlan|mockExplanation" packages/core/src/reasoning/engine.ts -B 1 -A 8 | head -50
```
LLM provider config is ignored—callLLM always returns a mock string, causing silent degradation.
When a non-mock provider is configured (e.g., 'openai', 'anthropic', 'ollama'), callLLM still returns Mock response for: ${prompt.slice(0, 50)}..., which contains no JSON. The extractJsonFromResponse call fails, triggering the catch block and falling back to mock implementations with no error. This means the configured provider is effectively never used.
Wire callLLM to the configured provider (instantiate client, make API call) or fail fast to surface the missing implementation.
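A minimal sketch of dispatching on the configured provider with a fail-fast default; the `send` callback stands in for real SDK clients, whose APIs are not assumed here:

```ts
type Provider = 'mock' | 'openai' | 'anthropic' | 'ollama';

async function callLLMSketch(
  provider: Provider,
  prompt: string,
  send: (provider: Exclude<Provider, 'mock'>, prompt: string) => Promise<string>
): Promise<string> {
  switch (provider) {
    case 'mock':
      return `Mock response for: ${prompt.slice(0, 50)}...`;
    case 'openai':
    case 'anthropic':
    case 'ollama':
      // Delegate to an injected client call so the provider's raw text
      // response reaches extractJsonFromResponse.
      return send(provider, prompt);
    default:
      // Surface unsupported providers instead of silently degrading to mocks.
      throw new Error(`Unsupported LLM provider: ${String(provider)}`);
  }
}
```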
```ts
const tree = await loadOrGenerateTree();
setTreeState(tree);
if (tree.tree && tree.currentNode) {
  updateTreeItems(tree.currentNode);
}
```
Tree loading error is silently ignored.
If loadOrGenerateTree() throws an exception, the error won't be surfaced to the user. Consider catching errors and displaying them via setError().
🛡️ Proposed fix
```diff
-const tree = await loadOrGenerateTree();
-setTreeState(tree);
-if (tree.tree && tree.currentNode) {
-  updateTreeItems(tree.currentNode);
-}
+try {
+  const tree = await loadOrGenerateTree();
+  setTreeState(tree);
+  if (tree.tree && tree.currentNode) {
+    updateTreeItems(tree.currentNode);
+  }
+} catch (e) {
+  // Tree loading is non-critical; log but don't block UI
+  console.error('Failed to load skill tree:', e);
+}
```
```ts
  type TreeNode,
} from '@skillkit/core';

const TREE_PATH = join(process.env.HOME || '~', '.skillkit', 'skill-tree.json');
```
Same HOME fallback issue.
This is the third occurrence of this pattern. Consider extracting TREE_PATH to a shared constants module to ensure consistency and fix the issue in one place.
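A minimal sketch of the shared-constant approach, e.g. in a hypothetical shared module that the CLI, TUI, and core could all import from:

```ts
import { homedir } from 'node:os';
import { join } from 'node:path';

// homedir() never yields a literal '~', unlike the process.env.HOME fallback.
export const SKILLKIT_DIR = join(homedir(), '.skillkit');
export const TREE_PATH = join(SKILLKIT_DIR, 'skill-tree.json');
```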
- Use os.homedir() instead of process.env.HOME || '~' fallback (tree.ts, tree.service.ts, mode.ts)
- Add guard for division by zero in tree stats percentage
- Validate parseInt result for --depth flag
- Fix average stats calculation to use cacheMisses count
- Extract category from data in validateCategoryScore
- Remove unused skillMap field from TreeGenerator
- Make reasoning provider configurable instead of hardcoded
- Add try/catch for tree loading errors in Marketplace
- Handle missing tree nodes during traversal
Store setTimeout ID and clear it in finally block when step completes successfully before timeout. This prevents orphaned timers from accumulating during long-running applications.
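A minimal sketch of that cleanup pattern; names are illustrative rather than the actual ExecutionManager internals:

```ts
// Race a step against a timeout and always clear the timer once the race settles.
async function executeWithTimeout<T>(run: () => Promise<T>, timeoutMs: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`Step timed out after ${timeoutMs}ms`)), timeoutMs);
  });
  try {
    return await Promise.race([run(), timeout]);
  } finally {
    clearTimeout(timer); // prevents orphaned timers when the step finishes first
  }
}
```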
```ts
if (!options.reasoning || !this.reasoningEngine) {
  return {
    ...baseResult,
    recommendations: baseResult.recommendations.map((r) => ({
      ...r,
      treePath: [],
    })),
  };
}
```
🔴 Explanations not generated when --explain is used without --reasoning flag
When users invoke skillkit recommend --explain, they expect to see detailed explanations for recommendations. However, the ReasoningRecommendationEngine.recommendWithReasoning method has a guard clause that exits early if options.reasoning is falsy, preventing explanation generation even when options.explainResults is true.
Root Cause

In packages/core/src/recommend/engine.ts:701, the guard clause checks:

```ts
if (!options.reasoning || !this.reasoningEngine) {
  return {
    ...baseResult,
    recommendations: baseResult.recommendations.map((r) => ({
      ...r,
      treePath: [],
    })),
  };
}
```

The CLI at packages/cli/src/commands/recommend.ts:160 correctly routes --explain to the reasoning path:

```ts
if (this.explain || this.reasoning || this.showPath) {
  return await this.handleReasoningRecommendations(profile, index);
}
```

But when calling recommendWithReasoning at line 211, it passes:

```ts
reasoning: this.reasoning,     // false when only --explain is used
explainResults: this.explain,  // true
```

Since reasoning is false, the method returns early at line 701, and the explanation generation code at lines 727-734 is never executed.

Expected vs Actual

- Expected: `skillkit recommend --explain` generates detailed explanations for each recommendation
- Actual: Returns recommendations with empty `treePath` arrays and no `explanation` fields

Impact

Users who specifically request explanations via the --explain flag receive no explanations, making the feature non-functional unless they also add --reasoning.

Recommendation: Change the guard clause to check for either reasoning or explainResults:

```ts
if ((!options.reasoning && !options.explainResults) || !this.reasoningEngine) {
```

Alternatively, move the explanation generation (lines 727-734) outside the if (options.reasoning) block so explanations can be generated independently.
Actionable comments posted: 5
🤖 Fix all issues with AI agents
In `@packages/core/src/execution/mode.ts`:
- Around line 99-101: The tilde expansion for configPath uses process.env.HOME
which can be undefined; update the logic in the mode.ts tilde-handling (where
configPath is expanded) to use os.homedir() instead: import homedir (or
os.homedir) if not already imported and replace process.env.HOME || '' with
homedir() so "~" expands consistently with DEFAULT_MCP_CONFIG_PATHS; ensure you
only call homedir() when configPath.startsWith('~').
In `@packages/core/src/reasoning/prompts.ts`:
- Around line 88-112: The buildSearchPlanPrompt function inserts raw user input
(query, categories) into the prompt, risking prompt-injection; sanitize and/or
escape these inputs before replacing into SEARCH_PLANNING_PROMPT by: 1)
normalizing/whitelisting characters (or escaping template markers like "{{" and
"}}"), 2) stripping suspicious instruction-like phrases (e.g., "ignore previous
instructions", "follow these new rules", etc.) or disallowing control-sequence
characters, and 3) applying the same sanitization to category strings; implement
this sanitization in a helper (e.g., sanitizePromptInput) and call it for query
and categories in buildSearchPlanPrompt (also ensure context fields like
context.type, languages, frameworks are escaped similarly).
- Around line 173-184: The extractJsonFromResponse function uses a greedy regex
that can capture across multiple objects; replace it with a robust extractor
that finds the first JSON object by scanning from the first '{' and tracking
balanced braces (handle string literals and escape characters so braces inside
strings are ignored), return JSON.parse of the balanced substring when the brace
count returns to zero, and throw the existing errors if no JSON start is found
or parsing fails; update the function extractJsonFromResponse to implement this
brace-balancing approach instead of using the /\{[\s\S]*\}/ regex.
In `@packages/core/src/recommend/engine.ts`:
- Around line 676-682: The constructor is unsafely casting a
Partial<ReasoningConfig> to ReasoningConfig; change the class field
reasoningConfig to be Partial<ReasoningConfig> (or explicitly merge with
defaults) and stop using the direct cast in the constructor, or merge the passed
reasoningConfig with default values before assigning so required fields are
guaranteed (you can reuse createReasoningEngine or a defaultReasoningConfig
object to perform the merge); update any usages that expect a full
ReasoningConfig to either call createReasoningEngine or accept Partial inputs
accordingly.
In `@packages/tui/src/screens/Marketplace.tsx`:
- Around line 230-240: The traversal in handleTreeBack uses treeState() and
walks segments but doesn't handle a missing segment; modify the loop so that
when const child = node.children.find(...) returns undefined you stop traversal
and exit gracefully (as handleTreeEnter does) — e.g., if (!child) {
updateTreeItems(node); return; } — referencing handleTreeBack, treeState,
node.children and updateTreeItems so you avoid continuing with an undefined
node.
🧹 Nitpick comments (12)

packages/core/src/recommend/engine.ts (3)

748-754: `searchPlan` fields are hardcoded placeholders. `primaryCategories` and `secondaryCategories` are always empty arrays, and `strategy` is always `'breadth-first'`. If this is intentional placeholder behavior, consider documenting it or deriving actual values from `searchResult`.

775-791: Inconsistent casing in keyword extraction. `profile.type` is added as-is (line 779) while framework and language names are lowercased. This may cause matching inconsistencies downstream.

✨ Proposed fix for consistent casing

```diff
 private extractKeywords(profile: ProjectProfile): string[] {
   const keywords: string[] = [];

   if (profile.type) {
-    keywords.push(profile.type);
+    keywords.push(profile.type.toLowerCase());
   }
```

805-809: Factory function doesn't expose `reasoningConfig` parameter. The constructor accepts `reasoningConfig`, but the factory only passes `weights`. Users wanting to configure reasoning must bypass the factory and use the constructor directly.

♻️ Proposed fix

```diff
 export function createReasoningRecommendationEngine(
-  weights?: Partial<ScoringWeights>
+  weights?: Partial<ScoringWeights>,
+  reasoningConfig?: Partial<import('../reasoning/types.js').ReasoningConfig>
 ): ReasoningRecommendationEngine {
-  return new ReasoningRecommendationEngine(weights);
+  return new ReasoningRecommendationEngine(weights, reasoningConfig);
 }
```

packages/core/src/reasoning/engine.ts (3)

138-150: Sequential awaits in `explainBatch` may cause unnecessary latency. Each explanation is awaited sequentially. For independent LLM calls, `Promise.all` would reduce total latency.

♻️ Suggested refactor

```diff
 async explainBatch(
   skills: Array<{ skill: SkillSummary; score: number }>,
   profile: ProjectProfile
 ): Promise<ExplainedRecommendation[]> {
-  const results: ExplainedRecommendation[] = [];
-
-  for (const { skill, score } of skills) {
-    const explained = await this.explain(skill, score, profile);
-    results.push(explained);
-  }
-
-  return results;
+  return Promise.all(
+    skills.map(({ skill, score }) => this.explain(skill, score, profile))
+  );
 }
```

514-520: `TreeGenerator` is instantiated on every call to `getSkillPath`. Creating a new `TreeGenerator` instance for each path lookup is wasteful. Consider caching the generator instance as a class field.

♻️ Suggested refactor

```diff
 export class ReasoningEngine {
   private config: ReasoningConfig;
   private skillMap: Map<string, SkillSummary> = new Map();
   private tree: SkillTree | null = null;
+  private treeGenerator: TreeGenerator = new TreeGenerator();
   private cache: Map<string, ReasoningCacheEntry> = new Map();

   // ...

   private getSkillPath(skillName: string): string[] {
     if (!this.tree) return ['Uncategorized'];
-    const generator = new TreeGenerator();
-    const path = generator.getPath(this.tree, skillName);
+    const path = this.treeGenerator.getPath(this.tree, skillName);
     return path || ['Uncategorized'];
   }
```

571-577: Cache eviction uses FIFO, not LRU; frequently accessed entries may be evicted. The current implementation evicts the oldest inserted entry regardless of access patterns. For a cache, LRU (least recently used) is typically more effective. This is a minor optimization opportunity; FIFO is acceptable for a simple bounded cache (see the LRU sketch after this list).

packages/core/src/execution/mode.ts (2)

67-69: Substring matching may cause false positives. Using `includes()` for server name matching could inadvertently match unintended servers. For example, a server named `"my-gmail-mock"` or `"notgmail"` would satisfy the requirement for `"gmail"`. Consider using word boundary matching or exact matching if stricter validation is needed:

♻️ Suggested stricter matching alternative

```diff
 const missingRequired = requiredServers.filter(
-  (server) => !availableServers.some((s) => s.toLowerCase().includes(server.toLowerCase()))
+  (server) => !availableServers.some((s) => {
+    const serverLower = s.toLowerCase();
+    const reqLower = server.toLowerCase();
+    return serverLower === reqLower || serverLower.includes(`-${reqLower}`) || serverLower.startsWith(`${reqLower}-`);
+  })
 );
```

210-216: Error message may be empty when no servers are detected. If `detectExecutionMode` is called with no `requiredServers` or `optionalServers`, and no MCP servers are found, the mode will be `'standalone'` and `missingServers` will be an empty array. The error message would then show "Missing: " with nothing listed, which is confusing.

♻️ Suggested improvement

```diff
 export function requireEnhancedMode(result: ModeDetectionResult): void {
   if (result.mode !== 'enhanced') {
+    const reason = result.missingServers.length > 0
+      ? `Missing servers: ${result.missingServers.join(', ')}`
+      : 'No MCP servers detected';
     throw new Error(
-      `This skill requires enhanced mode with MCP servers. Missing: ${result.missingServers.join(', ')}`
+      `This skill requires enhanced mode with MCP servers. ${reason}`
     );
   }
 }
```

packages/core/src/tree/generator.ts (2)

111-114: Consider using a Set for subcategory skill lookup. The current implementation checks `node.skills.includes(skill.name)` for each skill against each subcategory node, which is O(n*m). For large skill sets, using a `Set` would improve performance.

♻️ Proposed optimization

```diff
+const subcategorySkillNames = new Set(
+  subcategoryNodes.flatMap((node) => node.skills)
+);
 const directSkills = categorySkills.filter(
-  (skill) =>
-    !subcategoryNodes.some((node) => node.skills.includes(skill.name))
+  (skill) => !subcategorySkillNames.has(skill.name)
 );
```

131-155: `filterSkillsByCategory` and `isSkillCategorized` share duplicated logic. Both methods perform nearly identical tag/keyword matching. Consider extracting a shared helper to reduce duplication.

♻️ Proposed refactor

```diff
+private skillMatchesCategory(skill: SkillSummary, category: CategoryMapping): boolean {
+  const skillTags = (skill.tags || []).map((t) => t.toLowerCase());
+  const skillName = skill.name.toLowerCase();
+  const skillDesc = (skill.description || '').toLowerCase();
+
+  const tagMatch = skillTags.some((tag) => category.tags.includes(tag));
+  const keywordMatch = category.keywords.some(
+    (keyword) => skillName.includes(keyword) || skillDesc.includes(keyword)
+  );
+
+  return tagMatch || keywordMatch;
+}

 private filterSkillsByCategory(
   skills: SkillSummary[],
   category: CategoryMapping
 ): SkillSummary[] {
-  return skills.filter((skill) => {
-    const skillTags = (skill.tags || []).map((t) => t.toLowerCase());
-    const skillName = skill.name.toLowerCase();
-    const skillDesc = (skill.description || '').toLowerCase();
-
-    const tagMatch = skillTags.some((tag) =>
-      category.tags.includes(tag)
-    );
-
-    const keywordMatch = category.keywords.some(
-      (keyword) =>
-        skillName.includes(keyword) || skillDesc.includes(keyword)
-    );
-
-    const categoryNameMatch =
-      skillName.includes(category.category.toLowerCase()) ||
-      skillDesc.includes(category.category.toLowerCase());
-
-    return tagMatch || keywordMatch || categoryNameMatch;
-  });
+  return skills.filter((skill) => this.skillMatchesCategory(skill, category) ||
+    skill.name.toLowerCase().includes(category.category.toLowerCase()) ||
+    (skill.description || '').toLowerCase().includes(category.category.toLowerCase())
+  );
 }

 private isSkillCategorized(skill: SkillSummary): boolean {
-  for (const category of CATEGORY_TAXONOMY) {
-    const skillTags = (skill.tags || []).map((t) => t.toLowerCase());
-    const skillName = skill.name.toLowerCase();
-    const skillDesc = (skill.description || '').toLowerCase();
-
-    const tagMatch = skillTags.some((tag) => category.tags.includes(tag));
-
-    const keywordMatch = category.keywords.some(
-      (keyword) =>
-        skillName.includes(keyword) || skillDesc.includes(keyword)
-    );
-
-    if (tagMatch || keywordMatch) {
-      return true;
-    }
-  }
-  return false;
+  return CATEGORY_TAXONOMY.some((category) => this.skillMatchesCategory(skill, category));
 }
```

Also applies to: 201-219

packages/tui/src/screens/Marketplace.tsx (1)

479-484: Unreachable fallback icon case. The ternary `isCategory() ? '📁' : isSkill() ? '📄' : '📁'` has an unreachable final `'📁'` since items are either categories or skills. Consider simplifying for clarity.

♻️ Simplified icon logic

```diff
 <text
   fg={isSelected() ? terminalColors.accent : terminalColors.textMuted}
   width={3}
 >
-  {isCategory() ? '📁' : isSkill() ? '📄' : '📁'}
+  {isCategory() ? '📁' : '📄'}
 </text>
```

packages/cli/src/commands/tree.ts (1)

94-98: Consider static import for `treeToMarkdown`. The dynamic import adds complexity. Since `@skillkit/core` is already imported at the top, this could be a static import unless there's a specific reason for lazy loading.

♻️ Static import alternative

At the top of the file:

```diff
 import {
   loadIndex as loadIndexFromCache,
   generateSkillTree,
   saveTree,
   loadTree,
+  treeToMarkdown,
   type SkillTree,
   type TreeNode,
 } from '@skillkit/core';
```

In the execute method:

```diff
 if (this.markdown) {
-  const { treeToMarkdown } = await import('@skillkit/core');
   console.log(treeToMarkdown(tree));
   return 0;
 }
```
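As referenced in the FIFO/LRU nitpick above, a minimal sketch of LRU eviction on top of a plain Map, which preserves insertion order:

```ts
class LruCache<K, V> {
  private readonly map = new Map<K, V>();

  constructor(private readonly maxEntries: number) {}

  get(key: K): V | undefined {
    const value = this.map.get(key);
    if (value === undefined) return undefined;
    // Re-insert on read so the entry moves to the "most recently used" end.
    this.map.delete(key);
    this.map.set(key, value);
    return value;
  }

  set(key: K, value: V): void {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.maxEntries) {
      // The first key in iteration order is the least recently used entry.
      const oldest = this.map.keys().next().value as K;
      this.map.delete(oldest);
    }
  }
}
```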
```ts
const fullPath = configPath.startsWith('~')
  ? configPath.replace('~', process.env.HOME || '')
  : configPath;
```
Inconsistent home directory resolution: use homedir() here too.
While DEFAULT_MCP_CONFIG_PATHS was fixed to use homedir(), this tilde expansion still uses process.env.HOME || ''. If HOME is undefined, this produces paths like /.mcp.json (root filesystem) rather than the user's home directory.
🔧 Suggested fix
```diff
 const fullPath = configPath.startsWith('~')
-  ? configPath.replace('~', process.env.HOME || '')
+  ? configPath.replace('~', homedir())
   : configPath;
```
```ts
export function buildSearchPlanPrompt(
  query: string,
  categories: string[],
  context?: ProjectProfile
): string {
  let prompt = SEARCH_PLANNING_PROMPT
    .replace('{{query}}', query)
    .replace('{{categories}}', categories.join('\n'));

  if (context) {
    const languages = context.stack.languages.map(l => l.name).join(', ');
    const frameworks = context.stack.frameworks.map(f => f.name).join(', ');

    prompt = prompt
      .replace('{{#if context}}', '')
      .replace('{{/if}}', '')
      .replace('{{context.languages}}', languages)
      .replace('{{context.frameworks}}', frameworks)
      .replace('{{context.type}}', context.type || 'unknown');
  } else {
    prompt = prompt.replace(/\{\{#if context\}\}[\s\S]*?\{\{\/if\}\}/g, '');
  }

  return prompt;
}
```
User input in prompts is not escaped—potential prompt injection risk.
The query parameter is inserted directly into the prompt without sanitization. A malicious query like "ignore previous instructions and..." could manipulate LLM behavior.
Consider sanitizing or validating user input before insertion, or at minimum documenting this risk for downstream consumers.
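A minimal sketch of the sanitisation helper suggested above; the phrase list and length cap are illustrative and would need tuning for the actual prompt templates:

```ts
function sanitizePromptInput(input: string, maxLength = 500): string {
  return input
    .replace(/\{\{|\}\}/g, '')              // strip template markers so input can't open/close slots
    .replace(/[\u0000-\u001f\u007f]/g, ' ') // drop control characters
    .replace(/ignore (all )?previous instructions/gi, '[removed]')
    .slice(0, maxLength)
    .trim();
}

// buildSearchPlanPrompt would then substitute sanitizePromptInput(query) and
// categories.map((c) => sanitizePromptInput(c)) instead of the raw values.
```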
```ts
export function extractJsonFromResponse(response: string): unknown {
  const jsonMatch = response.match(/\{[\s\S]*\}/);
  if (!jsonMatch) {
    throw new Error('No JSON found in response');
  }

  try {
    return JSON.parse(jsonMatch[0]);
  } catch {
    throw new Error('Invalid JSON in response');
  }
}
```
Greedy regex may capture unintended JSON when response contains multiple objects.
The regex /\{[\s\S]*\}/ greedily matches from the first { to the last }, which can incorrectly capture content between separate JSON objects in the response.
🛠️ Suggested fix
```diff
 export function extractJsonFromResponse(response: string): unknown {
-  const jsonMatch = response.match(/\{[\s\S]*\}/);
+  // Non-greedy match to capture the first complete JSON object
+  const jsonMatch = response.match(/\{[\s\S]*?\}/);
   if (!jsonMatch) {
     throw new Error('No JSON found in response');
   }
```

Note: A non-greedy match is a simple improvement, but for complex nested JSON, a proper JSON extraction approach (e.g., iterating to find balanced braces) would be more robust.
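A minimal sketch of that brace-balancing approach, ignoring braces inside string literals and escape sequences:

```ts
function extractFirstJsonObject(response: string): unknown {
  const start = response.indexOf('{');
  if (start === -1) throw new Error('No JSON found in response');

  let depth = 0;
  let inString = false;
  let escaped = false;

  for (let i = start; i < response.length; i++) {
    const ch = response[i];
    if (escaped) { escaped = false; continue; }
    if (inString && ch === '\\') { escaped = true; continue; }
    if (ch === '"') { inString = !inString; continue; }
    if (inString) continue;
    if (ch === '{') depth++;
    if (ch === '}') {
      depth--;
      if (depth === 0) {
        return JSON.parse(response.slice(start, i + 1)); // first complete top-level object
      }
    }
  }
  throw new Error('Invalid JSON in response');
}
```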
```ts
constructor(
  weights?: Partial<ScoringWeights>,
  reasoningConfig?: Partial<import('../reasoning/types.js').ReasoningConfig>
) {
  super(weights);
  this.reasoningConfig = reasoningConfig as import('../reasoning/types.js').ReasoningConfig;
}
```
Unsafe type cast from partial config.
Line 681 casts Partial<ReasoningConfig> to ReasoningConfig, which can cause runtime errors if required fields are missing. Either keep it as Partial and let createReasoningEngine handle defaults, or validate/merge with defaults here.
🛡️ Proposed fix
```diff
 constructor(
   weights?: Partial<ScoringWeights>,
   reasoningConfig?: Partial<import('../reasoning/types.js').ReasoningConfig>
 ) {
   super(weights);
-  this.reasoningConfig = reasoningConfig as import('../reasoning/types.js').ReasoningConfig;
+  this.reasoningConfig = reasoningConfig;
 }
```

And update the field declaration:

```diff
-private reasoningConfig?: import('../reasoning/types.js').ReasoningConfig;
+private reasoningConfig?: Partial<import('../reasoning/types.js').ReasoningConfig>;
```
```ts
const state = treeState();
if (state?.tree) {
  let node = state.tree.rootNode;
  for (const segment of newPath) {
    const child = node.children.find(c => c.name === segment);
    if (child) {
      node = child;
    }
  }
  updateTreeItems(node);
}
```
Missing node handling in handleTreeBack traversal.
Unlike handleTreeEnter, this traversal doesn't handle the case where a path segment is not found. For consistency and robustness, add the same defensive handling.
🛡️ Proposed fix
```diff
 const state = treeState();
 if (state?.tree) {
   let node = state.tree.rootNode;
   for (const segment of newPath) {
     const child = node.children.find(c => c.name === segment);
     if (child) {
       node = child;
+    } else {
+      // Path segment not found - reset to root
+      setCurrentPath([]);
+      updateTreeItems(state.tree.rootNode);
+      return;
     }
   }
   updateTreeItems(node);
 }
```
Summary

Implements patterns from anthropics/knowledge-work-plugins to enhance skill discovery and execution:

- Tool-agnostic `~~` placeholder system (13 categories)

New Files

- tree/
- reasoning/
- connectors/
- execution/

CLI Changes

- `skillkit tree` - Browse skills in hierarchical tree
- `skillkit tree --generate` - Generate tree from index
- `skillkit recommend --explain` - Show reasoning for matches
- `skillkit recommend --reasoning` - Use LLM-based search

TUI Changes

- Tree view mode in Marketplace (toggle with the `v` key)

Test plan

- `skillkit tree --generate`
- `skillkit recommend --explain`