1 change: 1 addition & 0 deletions .gitignore
@@ -51,3 +51,4 @@ Thumbs.db
# =====================
.vscode/
.idea/
.venv/
4 changes: 4 additions & 0 deletions .jules/bolt.md
@@ -45,3 +45,7 @@
## 2026-02-10 - Group-By for Multi-Count Statistics
**Learning:** Executing multiple `count()` queries with different filters (e.g., for different statuses) causes redundant database scans and network round-trips.
**Action:** Use a single SQL `GROUP BY` query to fetch counts for all categories/statuses at once, then process the results in Python.
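
A minimal sketch of this pattern (the `tasks` table and `status` column are invented for illustration, using SQLite for brevity):

```python
import sqlite3

# Hypothetical schema: a "tasks" table with a "status" column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tasks (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany(
    "INSERT INTO tasks (status) VALUES (?)",
    [("open",), ("open",), ("done",), ("failed",)],
)

# Instead of one count() query per status (N scans, N round-trips)...
# slow = {s: conn.execute(
#     "SELECT COUNT(*) FROM tasks WHERE status = ?", (s,)).fetchone()[0]
#     for s in ("open", "done", "failed")}

# ...fetch all counts in a single GROUP BY query (one scan, one round-trip),
# then shape the result in Python.
rows = conn.execute(
    "SELECT status, COUNT(*) FROM tasks GROUP BY status"
).fetchall()
counts = dict(rows)  # {"done": 1, "failed": 1, "open": 2}
```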

## 2026-02-11 - File I/O in Hot Paths
**Learning:** Checking file modification time (`os.path.getmtime`) in a property getter seems harmless ("just a stat call"), but when accessed multiple times per request (e.g., fetching 4 different config values sequentially), the redundant disk I/O compounds and introduces measurable synchronous latency.
**Action:** Throttle filesystem checks in hot-reloading configurations. Even a small interval (e.g., 5 seconds) effectively eliminates redundant I/O during a single request's lifecycle without sacrificing the ability to hot-reload.
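
The throttling described in this entry can be sketched as follows (class and constant names are illustrative, not from this PR; `time.monotonic()` is used for the interval check since it cannot jump with wall-clock changes):

```python
import os
import time

RELOAD_CHECK_INTERVAL_SECONDS = 5  # illustrative throttle window

class HotReloadConfig:
    """Reload a config file only when its mtime changes, and check
    the mtime at most once per throttle interval."""

    def __init__(self, path):
        self._path = path
        self._last_checked = float("-inf")  # force a check on first access
        self._last_mtime = 0.0
        self._data = None

    def get(self):
        now = time.monotonic()  # immune to wall-clock jumps
        if now - self._last_checked >= RELOAD_CHECK_INTERVAL_SECONDS:
            self._last_checked = now
            mtime = os.path.getmtime(self._path)  # the stat being throttled
            if mtime != self._last_mtime:
                self._last_mtime = mtime
                with open(self._path) as f:
                    self._data = f.read()
        return self._data
```

Within a single request, repeated `get()` calls hit only the cached value; the stat call runs at most once per interval.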
9 changes: 7 additions & 2 deletions backend/adaptive_weights.py
@@ -12,6 +12,7 @@ class AdaptiveWeights:
     _instance = None
     _weights = None
     _last_loaded = 0
+    _last_checked = 0

def __new__(cls):
if cls._instance is None:
@@ -40,8 +41,12 @@ def _load_weights(self):
self._weights = {}

     def _check_reload(self):
-        # Optimization: Checking mtime is fast (stat call).
-        self._load_weights()
+        # Throttle mtime checks to at most once every 5 seconds
+        # This prevents multiple disk I/O operations per request
+        current_time = time.time()
Copilot AI · Feb 28, 2026

For interval throttling, prefer time.monotonic() over time.time(). time.time() can jump backwards/forwards due to NTP or manual clock changes, which could cause reload checks to be skipped for longer than intended or run too frequently.

Suggested change
-        current_time = time.time()
+        current_time = time.monotonic()

+        if current_time - self._last_checked > 5:
+            self._last_checked = current_time
Comment on lines +44 to +48
Copilot AI · Feb 28, 2026

The throttle interval is a magic number (5). Consider extracting it to a named constant (e.g., RELOAD_CHECK_INTERVAL_SECONDS) so it’s easier to tune and understand, and so future changes don’t require touching logic code.

+            self._load_weights()
Comment on lines +47 to +49
Copilot AI · Feb 28, 2026

_last_checked is updated before _load_weights() runs. If _load_weights() fails transiently (e.g., JSONDecodeError while the file is being written), subsequent accesses will be throttled and won’t retry for ~5s. Consider only updating _last_checked after a successful load/check (e.g., have _load_weights() return a success flag), or skip updating on exception.

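
One way to implement the success-flag pattern this comment suggests (a sketch, not the PR's actual code; names are illustrative):

```python
import json
import time

class WeightsCache:
    """Sketch: arm the throttle only after a successful load, so a
    transient failure (e.g. file mid-write) is retried immediately."""

    def __init__(self, path, interval=5):
        self._path = path
        self._interval = interval
        self._last_checked = 0.0
        self._weights = {}

    def _load_weights(self):
        """Return True on success, False on a transient failure."""
        try:
            with open(self._path) as f:
                self._weights = json.load(f)
            return True
        except (OSError, json.JSONDecodeError):
            return False  # keep the old in-memory weights

    def _check_reload(self):
        now = time.monotonic()
        if now - self._last_checked >= self._interval:
            if self._load_weights():
                # Only throttle after a clean load; a failed load
                # leaves the timestamp alone so the next access retries.
                self._last_checked = now
```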
Comment on lines +44 to +49
Copilot AI · Feb 28, 2026

The new throttling behavior should have a focused unit test (e.g., with time patched/frozen) asserting that multiple sequential getters only trigger a single os.path.getmtime/reload within the throttle window. The PR description mentions this measurement, but there’s no test covering it currently.

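
A focused test along those lines could look like this (a sketch against a minimal stand-in class, since the real module path isn't shown here; it freezes `time.time` and counts `os.path.getmtime` calls with `unittest.mock`):

```python
import os
import time
import tempfile
from unittest import mock

RELOAD_INTERVAL = 5  # seconds; mirrors the PR's throttle window

class ThrottledLoader:
    """Minimal stand-in for the throttled reload logic under test."""

    def __init__(self, path):
        self._path = path
        self._last_checked = 0.0

    def get(self):
        # The stat call we want to prove is throttled.
        if time.time() - self._last_checked > RELOAD_INTERVAL:
            self._last_checked = time.time()
            os.path.getmtime(self._path)
        return "value"

def test_single_stat_within_window():
    with tempfile.NamedTemporaryFile() as tmp:
        loader = ThrottledLoader(tmp.name)
        with mock.patch("time.time", return_value=1000.0), \
             mock.patch("os.path.getmtime", return_value=1.0) as stat:
            # Four sequential "config reads" at one frozen instant...
            for _ in range(4):
                loader.get()
            # ...should trigger exactly one mtime check.
            assert stat.call_count == 1

test_single_stat_within_window()
```

The same shape would apply to the real class with the patch targets pointed at its module.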
Comment on lines +44 to +49

⚠️ Potential issue | 🟠 Major

Throttle currently risks stale-write overwrites on update paths.

Line 47 throttles all callers. Because mutators call _check_reload() too, an external file update within the 5s window can be skipped, then overwritten by _save_weights() using stale in-memory data.

Suggested fix: allow forced reload for mutating paths
-    def _check_reload(self):
+    def _check_reload(self, force: bool = False):
         # Throttle mtime checks to at most once every 5 seconds
         # This prevents multiple disk I/O operations per request
         current_time = time.time()
-        if current_time - self._last_checked > 5:
+        if force or current_time - self._last_checked >= 5:
             self._last_checked = current_time
             self._load_weights()
-        self._check_reload() # Ensure we have latest
+        self._check_reload(force=True)  # Ensure write path merges latest disk state
-        self._check_reload()
+        self._check_reload(force=True)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/adaptive_weights.py` around lines 44-49: the throttling in
_check_reload (which uses self._last_checked and calls self._load_weights)
prevents reloads for up to 5s and can cause stale in-memory data to be saved by
_save_weights. Change _check_reload to accept a force (or bypass) parameter
(e.g., force=False) that skips the time check and always calls _load_weights
when force=True, and update all mutating paths that call _check_reload before
_save_weights to call it with force=True so external file updates are never
skipped before a save.


     def _save_weights(self):
         try: