Feat/memories backend implementation #777
base: main
Conversation
✨ Features:
- Display 'On this day last year' for memories from exactly 1 year ago
- Format location-based memories as 'Trip to [Location], [Year]' (e.g., 'Trip to Jaipur, 2025')
- Fix MediaView slideshow and info buttons not working in memory albums

🐛 Bug Fixes:
- Fixed event bubbling issue where MediaView control clicks closed the entire viewer
- Conditionally render MemoryViewer backdrop only when MediaView is closed
- Prevent click handlers from interfering with MediaView controls

🎨 UI Improvements:
- Enhanced FeaturedMemoryCard with contextual year display
- Updated MemoryCard title formatting for better location context
- Improved memory viewing experience with proper z-index layering

📦 Technical Changes:
- Backend: Added reverse geocoding for location names in memory clustering
- Backend: Fixed latitude/longitude handling for images without GPS data
- Frontend: Refactored MemoryViewer JSX structure for proper conditional rendering
- Frontend: Integrated MediaView component with full zoom/slideshow/info functionality

This commit completes the Memories feature implementation with Google Photos-style presentation and fixes critical UX issues with the image viewer controls.
- Format Python files in backend/app/ with Black
- Format TypeScript files in frontend/src/components/Memories/ with Prettier
- Fix code style issues to meet project standards
✨ Added:
- Updated docs/overview/features.md with Memories section
- Created docs/frontend/memories.md with detailed documentation

📚 Documentation includes:
- Feature overview (On This Day, Location/Date memories)
- Memory types and sections (Recent, This Year, All)
- Filtering options and implementation
- Component architecture (MemoriesPage, MemoryCard, MemoryViewer)
- State management with Redux Toolkit
- API endpoints and parameters
- Backend clustering algorithm (DBSCAN)
- Reverse geocoding with city database
- Bug fixes and improvements
- Testing guidelines
- Performance considerations
- Future enhancement ideas

The documentation provides both user-facing feature descriptions and technical implementation details for developers.
📝 Walkthrough

Adds a Memories feature: database schema updates and migrations to store GPS/timestamp, metadata extraction and migration tools, memory clustering utilities, new backend /api/memories endpoints, frontend Redux, service and UI components, documentation, tests, and verification tooling.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant Frontend
    participant Redux
    participant API as Memories API
    participant DB as SQLite DB
    participant Cluster as MemoryClustering
    Client->>Frontend: user triggers "Generate Memories"
    Frontend->>Redux: dispatch(fetchAllMemoriesData)
    Redux->>API: POST /api/memories/generate (params)
    API->>DB: db_get_all_images_for_memories()
    DB-->>API: images with latitude/longitude/captured_at
    API->>Cluster: cluster_memories(images)
    Cluster->>Cluster: spatial (DBSCAN) & temporal grouping
    Cluster-->>API: memories[]
    API-->>Redux: GenerateMemoriesResponse
    Redux-->>Frontend: store updated (allMemories)
    Frontend->>Client: render Memories UI
```

```mermaid
sequenceDiagram
    participant Client
    participant API as Memories API
    participant DB as SQLite DB
    Client->>API: GET /api/memories/on-this-day
    API->>DB: db_get_images_by_year_month(current_month, current_day)
    DB-->>API: images grouped across years
    API->>API: aggregate by year, build response
    API-->>Client: OnThisDayResponse (images, years)
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~70 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 17
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
backend/app/utils/images.py (1)
155-207: Fix GPS "truthy" checks and normalize `captured_at` type consistently.

- `if latitude and longitude:` should be `is not None` checks (0.0 is valid and currently logs as "not extracted").
- Consider normalizing `captured_at` to an ISO string (or `None`) unconditionally to avoid mixed types reaching `db_bulk_insert_images`.

Suggested tweak:

```diff
- if latitude and longitude:
+ if latitude is not None and longitude is not None:
      logger.info(f"GPS extracted for {os.path.basename(image_path)}: ({latitude}, {longitude})")
@@
- "captured_at": captured_at.isoformat() if isinstance(captured_at, datetime.datetime) and captured_at else captured_at,  # Can be None
+ "captured_at": captured_at.isoformat() if captured_at else None,
```
🧹 Nitpick comments (21)
backend/app/database/images.py (4)
22-35: Consider using `total=True` with explicit `NotRequired` for optional fields.

Using `total=False` makes all fields optional, including core fields like `id` and `path` that should always be present. Consider keeping `total=True` (or omitting it) and using `NotRequired` from `typing` for truly optional fields:

```diff
-class ImageRecord(TypedDict, total=False):
+from typing import NotRequired
+
+class ImageRecord(TypedDict):
     """Represents the full images table structure"""
     id: ImageId
     path: ImagePath
     folder_id: FolderId
     thumbnailPath: str
     metadata: Union[Mapping[str, Any], str]
     isTagged: bool
-    isFavourite: bool
+    isFavourite: NotRequired[bool]
     # New fields for Memories feature
-    latitude: Optional[float]
-    longitude: Optional[float]
-    captured_at: Optional[datetime]
+    latitude: NotRequired[Optional[float]]
+    longitude: NotRequired[Optional[float]]
+    captured_at: NotRequired[Optional[datetime]]
```
81-93: Consider adding a composite index for spatial queries.

The spatial queries in `db_get_images_near_location` filter on both `latitude` and `longitude` simultaneously. A composite index would be more efficient than separate single-column indexes for these range queries.

```diff
     cursor.execute(
         "CREATE INDEX IF NOT EXISTS ix_images_latitude ON images(latitude)"
     )
     cursor.execute(
         "CREATE INDEX IF NOT EXISTS ix_images_longitude ON images(longitude)"
     )
+    cursor.execute(
+        "CREATE INDEX IF NOT EXISTS ix_images_lat_lon ON images(latitude, longitude)"
+    )
     cursor.execute(
         "CREATE INDEX IF NOT EXISTS ix_images_captured_at ON images(captured_at)"
     )
```
265-283: Move import outside the loop for better performance.

The `image_util_parse_metadata` import is inside the loop, which adds overhead on each iteration. Move it to the top of the function or module level.

```diff
+    from app.utils.images import image_util_parse_metadata
+
     # Group results by image ID
     images_dict = {}
     for (
         image_id,
         ...
     ) in results:
         if image_id not in images_dict:
-            # Safely parse metadata JSON -> dict
-            from app.utils.images import image_util_parse_metadata
-
             metadata_dict = image_util_parse_metadata(metadata)
```
719-723: Consider using date range for index utilization.

Using `strftime()` prevents SQLite from using the index on `captured_at`. A date range query would be more efficient:

```diff
-                WHERE strftime('%Y', i.captured_at) = ?
-                  AND strftime('%m', i.captured_at) = ?
+                WHERE i.captured_at >= ? AND i.captured_at < ?
```

Then calculate the start and end of the month:

```python
from calendar import monthrange

start_date = f"{year:04d}-{month:02d}-01"
last_day = monthrange(year, month)[1]
end_date = f"{year:04d}-{month:02d}-{last_day:02d}T23:59:59"
```

backend/app/utils/memory_clustering.py (1)
297-299: Fallback to `datetime.now()` may cause misleading memory IDs.

When no dates are available, falling back to `datetime.now()` for `date_obj` means the memory ID will be based on the generation time rather than the actual memory content. Consider using a content-based hash instead.

```diff
     else:
         date_start = date_end = None
-        date_obj = datetime.now()
+        # Use a hash of image IDs instead of current time
+        date_obj = None
+
+    # Create memory ID
+    if date_obj:
+        memory_id = f"{memory_type}_{int(date_obj.timestamp())}_{len(images)}"
+    else:
+        # Content-based ID when no date available
+        content_hash = hash(tuple(img.get('id', '') for img in images[:5]))
+        memory_id = f"{memory_type}_nodate_{abs(content_hash)}_{len(images)}"
```

backend/main.py (2)
15-15: Startup migration call is OK; consider avoiding redundant work on fresh DB.

Calling `db_migrate_add_memories_columns()` right after `db_create_images_table()` is safe, but it will always run PRAGMA + CREATE INDEX even on brand-new DBs. Consider guarding with "only if existing DB" (e.g., a lightweight check for missing columns/indexes) if startup time matters.

Also applies to: 50-50
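A minimal sketch of such a guard, assuming direct `sqlite3` access to the database file and an `images` table (the function name and `DB_PATH` are illustrative, not the project's actual helpers):

```python
import sqlite3

def memories_columns_missing(db_path: str) -> bool:
    """Return True if any Memories column is absent from the images table."""
    required = {"latitude", "longitude", "captured_at"}
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute("PRAGMA table_info(images)").fetchall()
        existing = {row[1] for row in rows}  # row[1] is the column name
        return not required.issubset(existing)
    finally:
        conn.close()

# At startup, only pay the migration cost when something is actually missing:
#     if memories_columns_missing(DB_PATH):
#         db_migrate_add_memories_columns()
```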
29-29: Router include relies on router-defined prefix—ensure it's consistently set.

`app.include_router(memories_router)` assumes the router itself has the `/api/memories` prefix. If that prefix is missing or changes, the routes and the log line `✅ Memories feature enabled at /api/memories` become misleading. Consider passing `prefix="/api/memories"` here for a single source of truth.

Also applies to: 137-141
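For illustration only, a single source of truth could look like the sketch below (the router and constant names are assumptions, not necessarily how this PR wires things):

```python
from fastapi import APIRouter, FastAPI

MEMORIES_PREFIX = "/api/memories"

memories_router = APIRouter(tags=["memories"])  # no prefix baked into the router

@memories_router.get("/")
async def list_memories():
    return {"memories": []}

app = FastAPI()
# The prefix is applied exactly once here, so the mounted path and the
# startup log line below cannot drift apart.
app.include_router(memories_router, prefix=MEMORIES_PREFIX)
print(f"✅ Memories feature enabled at {MEMORIES_PREFIX}")
```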
frontend/src/components/Memories/MemoriesPage.tsx (1)
262-320: Filter can yield a blank page (no per-filter empty state).

When `hasAnyData` is true but the selected filter eliminates all items, none of the sections render and there's no empty-state messaging. Consider showing an EmptyState like "No memories match this filter".

Also applies to: 343-425
frontend/src/store/hooks.ts (1)
8-14: Use `.withTypes()` for react-redux 9.x typed hooks.

The project uses react-redux 9.2.0, which supports the modern `.withTypes()` API (available since v9.1+). Update the typed hooks to:

```ts
export const useAppDispatch = useDispatch.withTypes<AppDispatch>();
export const useAppSelector = useSelector.withTypes<RootState>();
```

This is the recommended pattern in react-redux 9.x and provides cleaner syntax than the type annotation approach currently in use.
docs/backend/backend_python/openapi.json (1)
1460-1480: Consider adding error responses for the `/api/memories/on-this-day` endpoint.

This endpoint only defines a 200 response, but the description mentions it can raise `HTTPException` if the database query fails. Consider adding 500 error response documentation for consistency with other endpoints.

```diff
         "responses": {
           "200": {
             "description": "Successful Response",
             "content": {
               "application/json": {
                 "schema": { "$ref": "#/components/schemas/OnThisDayResponse" }
               }
             }
+          },
+          "500": {
+            "description": "Internal Server Error",
+            "content": {
+              "application/json": {
+                "schema": {}
+              }
+            }
           }
         }
```
35-56: Consider adding request timeouts for robustness.

The API calls lack timeout parameters, which could cause the test script to hang indefinitely if the server becomes unresponsive. Consider adding timeouts consistent with `check_server()`.

```diff
     response = requests.post(
         f"{BASE_URL}/generate",
         params={
             "location_radius_km": 5.0,
             "date_tolerance_days": 3,
             "min_images": 2
-        }
+        },
+        timeout=30
     )
```

1-167: Consider converting to pytest for CI integration.

This manual test script is useful for development but won't integrate with CI/CD pipelines. Consider either converting to pytest with proper assertions or clearly documenting this as a manual verification tool only.
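A rough pytest conversion, assuming the FastAPI app is importable as `main.app` and the routes match the ones added in this PR (the endpoint shapes are assumptions, so treat this as a sketch rather than a drop-in test suite):

```python
# hypothetical backend/tests/test_memories_api.py
from fastapi.testclient import TestClient

from main import app  # assumes backend/main.py exposes `app`

client = TestClient(app)


def test_generate_memories_returns_ok():
    response = client.post(
        "/api/memories/generate",
        params={"location_radius_km": 5.0, "date_tolerance_days": 3, "min_images": 2},
    )
    assert response.status_code == 200
    assert isinstance(response.json(), dict)


def test_on_this_day_returns_ok():
    response = client.get("/api/memories/on-this-day")
    assert response.status_code == 200
```

Unlike the requests-based script, this runs the app in-process, so it needs no live server and slots straight into CI.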
backend/migrate_add_memories_columns.py (2)
21-47: Code duplication with verify_memories_setup.py.

The `Colors` class and print helper functions (`print_header`, `print_success`, `print_error`, `print_info`) are duplicated from `backend/app/utils/verify_memories_setup.py`. Consider extracting these into a shared utility module to maintain the DRY principle.

```python
# Create backend/app/utils/console_utils.py with shared Colors and print helpers
# Then import in both files:
from app.utils.console_utils import Colors, print_header, print_success, print_error, print_info
```
1-14: Consider adding a backup recommendation in the docstring.

For a one-time migration script that modifies the database schema, it's good practice to recommend backing up the database before running. This protects users against unexpected issues.

```diff
 """
 One-time migration script to add Memories feature columns.
 Run this ONCE after pulling the new code.
+
+IMPORTANT: Back up your database before running this migration:
+    cp app/database/PictoPy.db app/database/PictoPy.db.backup

 This script adds:
```

backend/app/utils/verify_memories_setup.py (2)
16-22: Consider extracting shared Colors class to avoid duplication.

The `Colors` class is duplicated in `backend/migrate_add_memories_columns.py`. Consider extracting it to a shared utility module to maintain DRY principles.

252-268: Consider returning an exit code based on verification results.

The script could return a non-zero exit code when checks fail, enabling use in CI/CD pipelines or automated setup scripts.

```diff
 def main():
     """Run all verification checks"""
     print(f"\n{Colors.BOLD}PictoPy Memories Feature Verification{Colors.RESET}")
     print(f"{Colors.BOLD}====================================={Colors.RESET}")

     results = {
         'Dependencies': check_dependencies(),
         'File Structure': check_file_structure(),
         'Database Schema': check_database_schema(),
         'Module Imports': check_imports(),
         'API Routes': check_api_routes(),
     }

     print_summary(results)
+
+    # Return non-zero exit code if any checks failed
+    all_passed = all(result is not False for result in results.values())
+    return 0 if all_passed else 1

 if __name__ == '__main__':
-    main()
+    exit(main())
```

backend/app/utils/extract_location_metadata.py (2)
168-178: Fragile ISO datetime parsing logic.

The timezone stripping logic using `split('-')` is brittle and hard to follow. It works by coincidence but can fail for edge cases. Consider using a more robust approach.

```diff
     # Try ISO format first (handles timezone)
     if 'T' in date_str:
         try:
-            # Remove timezone suffix for simpler parsing
-            date_str_clean = date_str.replace('Z', '').split('+')[0].split('-')
-            # Rejoin only date-time parts (not timezone)
-            if len(date_str_clean) >= 3:
-                date_str_clean = '-'.join(date_str_clean[:3])
-            captured_at = datetime.fromisoformat(date_str_clean)
+            # Remove timezone suffix for simpler parsing
+            clean = date_str.replace('Z', '+00:00')
+            # Remove timezone offset if present
+            if '+' in clean:
+                clean = clean.split('+')[0]
+            elif clean.count('-') > 2:  # Has negative timezone offset
+                # Find the timezone separator (last '-' after 'T')
+                t_idx = clean.index('T')
+                last_dash = clean.rfind('-')
+                if last_dash > t_idx:
+                    clean = clean[:last_dash]
+            captured_at = datetime.fromisoformat(clean)
         except Exception:
             pass
```
303-323: Consider using executemany for improved batch performance.

The comment says "Batch update" but each row is updated individually. For large datasets, `executemany()` with prepared statements or a single transaction with explicit `BEGIN` could improve performance.

```diff
     # Batch update database
     if updates:
         logger.info(f"Updating {len(updates)} images...")
-        for update_data in updates:
-            cursor.execute("""
-                UPDATE images
-                SET latitude = ?,
-                    longitude = ?,
-                    captured_at = ?
-                WHERE id = ?
-            """, (
-                update_data['latitude'],
-                update_data['longitude'],
-                update_data['captured_at'],
-                update_data['id']
-            ))
+        cursor.executemany("""
+            UPDATE images
+            SET latitude = ?,
+                longitude = ?,
+                captured_at = ?
+            WHERE id = ?
+        """, [
+            (u['latitude'], u['longitude'], u['captured_at'], u['id'])
+            for u in updates
+        ])
         conn.commit()
```

backend/app/routes/memories.py (2)
129-131: Move import to module level for consistency.

`db_get_all_images_for_memories` is imported inside the function while similar functions are imported at module level. Unless avoiding a circular import, move it to the top.

```diff
 from app.database.images import (
     db_get_images_with_location,
     db_get_images_by_date_range,
-    db_get_images_by_year_month
+    db_get_images_by_year_month,
+    db_get_all_images_for_memories
 )
```

And remove the inline import at line 130.

362-365: Avoid accessing private methods of `MemoryClustering`.

Calling `_cluster_by_location` and `_filter_valid_images` (underscore-prefixed methods) creates tight coupling to implementation details. Consider adding a public method to `MemoryClustering` for location-only clustering.

```python
# In MemoryClustering class, add:
def cluster_locations_only(self, images: List[dict]) -> List[List[dict]]:
    """Cluster images by location only (no date grouping)."""
    valid_images = self._filter_valid_images(images)
    return self._cluster_by_location(valid_images)
```

Then update the endpoint:

```diff
-    location_clusters = clustering._cluster_by_location(
-        clustering._filter_valid_images(images)
-    )
+    location_clusters = clustering.cluster_locations_only(images)
```

frontend/src/store/slices/memoriesSlice.ts (1)
177-196: Catch block is unreachable for thunk rejections.

When `createAsyncThunk` actions are dispatched, rejections don't throw—they return rejected action objects. The catch block will only catch non-thunk errors (which are unlikely here). Consider using `.unwrap()` if you need to catch rejections.

```diff
 export const fetchAllMemoriesData = createAsyncThunk<
   void,
   void,
   { rejectValue: string }
 >(
   'memories/fetchAllData',
   async (_, { dispatch, rejectWithValue }) => {
     try {
-      await Promise.all([
-        dispatch(fetchOnThisDay()),
-        dispatch(fetchRecentMemories(30)),
-        dispatch(fetchYearMemories(365)),
-        dispatch(fetchAllMemories())
-      ]);
+      // Use unwrap() to convert rejected actions to thrown errors if needed
+      await Promise.all([
+        dispatch(fetchOnThisDay()).unwrap(),
+        dispatch(fetchRecentMemories(30)).unwrap(),
+        dispatch(fetchYearMemories(365)).unwrap(),
+        dispatch(fetchAllMemories()).unwrap()
+      ]);
     } catch (error) {
       const apiError = error as ApiError;
       return rejectWithValue(apiError.message);
     }
   }
 );
```

Alternatively, remove the try-catch since individual thunks already track their errors in state.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`frontend/package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (24)
- backend/app/database/images.py (11 hunks)
- backend/app/routes/memories.py (1 hunks)
- backend/app/utils/extract_location_metadata.py (1 hunks)
- backend/app/utils/images.py (5 hunks)
- backend/app/utils/memory_clustering.py (1 hunks)
- backend/app/utils/verify_memories_setup.py (1 hunks)
- backend/extract_metadata_simple.py (1 hunks)
- backend/main.py (4 hunks)
- backend/migrate_add_memories_columns.py (1 hunks)
- backend/requirements.txt (2 hunks)
- backend/test_auto_gps_extraction.py (1 hunks)
- backend/test_memories_api.py (1 hunks)
- docs/backend/backend_python/openapi.json (6 hunks)
- frontend/src/app/store.ts (2 hunks)
- frontend/src/components/Memories/FeaturedMemoryCard.tsx (1 hunks)
- frontend/src/components/Memories/MemoriesPage.tsx (1 hunks)
- frontend/src/components/Memories/MemoryCard.tsx (1 hunks)
- frontend/src/components/Memories/MemoryViewer.tsx (1 hunks)
- frontend/src/components/Memories/index.ts (1 hunks)
- frontend/src/routes/AppRoutes.tsx (2 hunks)
- frontend/src/services/memoriesApi.ts (1 hunks)
- frontend/src/store/hooks.ts (1 hunks)
- frontend/src/store/slices/memoriesSlice.ts (1 hunks)
- frontend/tsconfig.json (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (10)
frontend/src/store/hooks.ts (1)
frontend/src/app/store.ts (2)
AppDispatch(26-26)RootState(24-24)
frontend/src/components/Memories/MemoryCard.tsx (2)
backend/app/routes/memories.py (1)
Memory(51-63)frontend/src/services/memoriesApi.ts (4)
Memory(32-44)getThumbnailUrl(462-470)formatPhotoCount(330-332)formatDateRangeRelative(341-389)
frontend/src/store/slices/memoriesSlice.ts (1)
frontend/src/services/memoriesApi.ts (6)
Memory(32-44)MemoryImage(20-27)generateMemories(120-139)ApiError(104-108)getTimeline(148-169)getOnThisDay(176-186)
backend/main.py (1)
backend/app/database/images.py (2)
db_create_images_table(58-109)db_migrate_add_memories_columns(112-165)
backend/test_auto_gps_extraction.py (1)
backend/app/utils/extract_location_metadata.py (2)
MetadataExtractor(27-356)extract_all(197-233)
backend/app/utils/verify_memories_setup.py (3)
backend/migrate_add_memories_columns.py (5)
Colors(21-27)print_header(31-35)print_success(37-39)print_error(41-43)print_info(45-47)backend/app/utils/extract_location_metadata.py (1)
main(359-386)backend/test_memories_api.py (1)
main(137-163)
backend/migrate_add_memories_columns.py (1)
backend/app/utils/verify_memories_setup.py (5)
Colors(16-22)print_header(24-28)print_success(30-32)print_error(34-36)print_info(42-44)
backend/app/database/images.py (3)
backend/app/models/FaceNet.py (1)
close(27-29)backend/app/models/FaceDetector.py (1)
close(69-79)backend/app/utils/images.py (1)
image_util_parse_metadata(524-541)
backend/app/utils/images.py (1)
backend/app/utils/extract_location_metadata.py (1)
extract_all(197-233)
frontend/src/services/memoriesApi.ts (1)
backend/app/routes/memories.py (7)
MemoryImage(41-48)Memory(51-63)GenerateMemoriesResponse(66-72)TimelineResponse(75-80)OnThisDayResponse(83-89)LocationCluster(92-98)LocationsResponse(101-105)
🔇 Additional comments (32)
backend/app/database/images.py (3)
112-166: Well-structured migration function with proper error handling.

Good defensive coding: checks table existence before proceeding, uses PRAGMA to detect existing columns, and handles transactions properly with commit/rollback. The logging provides useful feedback on migration progress.

520-596: LGTM - datetime handling with SQLite.

The date range query is well-structured. SQLite handles Python `datetime` objects correctly when using the sqlite3 adapter. The optional favorites filter is cleanly integrated.

Note: The same import-in-loop pattern exists here (line 574) - consider moving it outside.

177-194: The concern in this review is not applicable. The `image_util_prepare_image_records` function already ensures that `latitude`, `longitude`, and `captured_at` are always included in every record dictionary (even if set to `None`), specifically to satisfy the SQL INSERT statement's named parameters. No `KeyError` will occur.

backend/app/utils/memory_clustering.py (4)

70-103: LGTM - Simple reverse geocoding implementation.

The Haversine distance calculation is correct, and the linear search through ~30 cities is acceptable. The 50km default radius is reasonable for city matching.
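For readers unfamiliar with the approach being praised, the haversine-plus-linear-scan pattern boils down to something like this standalone sketch (not the PR's actual code; the city data shown is just an example):

```python
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_KM = 6371.0


def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))


def nearest_city(lat: float, lon: float, cities: dict, max_km: float = 50.0):
    """Linear scan over a small city table; returns None if nothing is within max_km."""
    best_name, best_dist = None, max_km
    for name, (clat, clon) in cities.items():
        d = haversine_km(lat, lon, clat, clon)
        if d <= best_dist:
            best_name, best_dist = name, d
    return best_name


# Jaipur and Delhi are roughly 240 km apart, so a 50 km radius keeps them distinct.
print(nearest_city(26.91, 75.79, {"Jaipur": (26.9124, 75.7873), "Delhi": (28.6139, 77.2090)}))
```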
140-204: Well-structured main clustering method.

The separation of images into GPS-based and date-only categories is clear, with appropriate fallback handling. Good use of logging to track processing counts.

591-635: LGTM - DBSCAN implementation with proper coordinate handling.

The coordinates are correctly converted to radians for the haversine metric, and noise points are handled by creating individual clusters. Note: The `eps` parameter issue was flagged in the earlier comment about initialization.
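The radians detail matters because scikit-learn's haversine metric expects `[lat, lon]` in radians and measures distance on the unit sphere, so a kilometre radius has to be divided by the Earth's radius to get `eps`. A minimal sketch of that pattern, assuming scikit-learn and NumPy are installed (this is not the PR's clustering code):

```python
import numpy as np
from sklearn.cluster import DBSCAN

EARTH_RADIUS_KM = 6371.0


def cluster_coordinates(points_deg, radius_km: float = 5.0):
    """Cluster (lat, lon) pairs given in degrees; returns one label per point (-1 = noise)."""
    coords_rad = np.radians(np.asarray(points_deg))  # haversine metric wants radians
    eps = radius_km / EARTH_RADIUS_KM                # convert km radius to radians on the unit sphere
    model = DBSCAN(eps=eps, min_samples=2, metric="haversine", algorithm="ball_tree")
    return model.fit_predict(coords_rad)


# Two photos near Jaipur land in one cluster; the Delhi photo comes back as noise (-1),
# which this PR reportedly turns into its own single-image cluster.
print(cluster_coordinates([(26.9124, 75.7873), (26.9150, 75.7900), (28.6139, 77.2090)]))
```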
637-848: Helper methods are well-implemented.

The temporal clustering, memory creation, and utility functions follow consistent patterns with proper datetime handling and error logging. The title and description generation provides good user-facing text.
frontend/src/routes/AppRoutes.tsx (1)
12-12: LGTM - Route correctly wired to MemoriesPage.The Memories route is properly updated to render the new
MemoriesPagecomponent instead of theComingSoonplaceholder. The barrel export inindex.tscorrectly exposes the component, and the import path resolves as expected.frontend/tsconfig.json (1)
25-25: Verify if deprecated TypeScript options are present and whether this suppression is necessary.The
ignoreDeprecations: "6.0"setting is the official TypeScript mechanism to suppress deprecation errors scheduled for TypeScript 6.0, but it should not be used as a permanent mask. Review which specific options in this configuration are deprecated and either migrate them or document why the suppression is needed. The current config uses modern settings (ES2020, ESNext, bundler moduleResolution), but the presence of this flag suggests at least one deprecated option requires attention.backend/requirements.txt (1)
34-34: Inline comments are helpful and versions are secure.The inline comments effectively document the dependency purposes for the Memories feature. Verification confirms that numpy 1.26.4 and scikit-learn 1.5.1 have no known CVEs. Both versions are from stable release series and are appropriately pinned for consistency.
frontend/src/components/Memories/index.ts (1)
1-14: Clean barrel exports.frontend/src/app/store.ts (1)
9-21: Store wiring formemorieslooks good.docs/backend/backend_python/openapi.json (2)
1308-1382: New Memories API endpoints are well-defined.The
/api/memories/generateendpoint has appropriate parameter constraints (location_radius_km: 0.1-100,date_tolerance_days: 1-30,min_images: 1-10) and proper response schemas. The OpenAPI spec correctly references the newGenerateMemoriesResponseandMemoryschemas.
2606-2686: Memory schema looks comprehensive.The
Memoryschema correctly defines all required fields includingmemory_id,title,location_name,date_start/date_end(nullable),images,thumbnail_image_id, and GPS coordinates. This aligns with the frontend'sMemoryinterface.frontend/src/components/Memories/FeaturedMemoryCard.tsx (1)
39-52: Good accessibility implementation.The component properly implements keyboard navigation with
role="button",tabIndex={0}, and Enter/Space key handling. Thearia-labelprovides meaningful context for screen readers.frontend/src/components/Memories/MemoryViewer.tsx (2)
211-248: Good image grid implementation with accessibility.The image grid properly handles click events, includes hover effects, and uses lazy loading. The structure is clean and performant.
281-291: The metadatawidthandheightfields are set to0to satisfy theImageMetadatatype interface requirements, but they are not used anywhere in the codebase. MediaView, ImageViewer, and MediaInfoPanel do not access or perform calculations with these values, so there is no risk of division-by-zero or aspect ratio issues. The fields can remain as-is since they're unused placeholders.frontend/src/components/Memories/MemoryCard.tsx (2)
26-34: Good defensive thumbnail resolution with fallback chain.The code properly handles missing thumbnails by checking for
thumbnail_image_id, falling back to the first image, and ultimately to a default placeholder. This prevents broken images in the UI.
55-68: Good accessibility implementation.Properly implements keyboard navigation with
role="button",tabIndex={0}, Enter/Space key handling, and a descriptivearia-label.backend/migrate_add_memories_columns.py (2)
113-126: Index creation is properly idempotent.Using
CREATE INDEX IF NOT EXISTSensures the migration can be safely re-run without errors. Good defensive approach.
199-214: Good error handling with rollback on failure.The migration properly rolls back changes on SQLite or unexpected errors, preventing partial migrations that could corrupt the database. The
finallyblock ensures the connection is always closed.backend/app/utils/verify_memories_setup.py (2)
46-73: LGTM!The dependency checking logic is sound with appropriate handling for version mismatches and missing packages.
194-226: LGTM!The API route verification correctly imports the FastAPI app and validates required endpoints with proper error handling.
backend/app/utils/extract_location_metadata.py (1)
197-233: LGTM!The
extract_allmethod properly handles edge cases (null, empty, bytes) and orchestrates the extraction process cleanly.backend/app/routes/memories.py (2)
41-106: LGTM!Response models are well-defined and align with the frontend TypeScript interfaces in
memoriesApi.ts.
246-313: LGTM!The "On This Day" endpoint correctly searches across previous years with proper error handling per year iteration.
frontend/src/store/slices/memoriesSlice.ts (2)
26-84: LGTM!The state interface is well-structured with granular loading and error states per section, enabling fine-grained UI feedback.
232-306: LGTM!The extra reducers correctly handle all async states (pending/fulfilled/rejected) for each thunk with appropriate error fallback messages.
frontend/src/services/memoriesApi.ts (4)
20-108: LGTM!TypeScript interfaces correctly mirror the backend Pydantic models from
memories.py.
341-389: LGTM!The relative date formatting logic is comprehensive and handles various time ranges appropriately.
398-434: LGTM!The memory title generation provides good UX by creating human-readable titles based on location and photo count with sensible fallbacks.
462-469: LGTM!Proper use of Tauri's
convertFileSrcfor desktop file path handling with a reasonable fallback.
```tsx
// Handle On This Day click - create a temporary memory from images
const handleOnThisDayClick = () => {
  if (onThisDayImages.length > 0 && onThisDayMeta) {
    const tempMemory: Memory = {
      memory_id: 'on-this-day',
      title: `On This Day - ${onThisDayMeta.today}`,
      description: `Photos from ${onThisDayMeta.years.join(', ')}`,
      location_name: 'Various locations',
      date_start: onThisDayImages[0]?.captured_at || null,
      date_end: onThisDayImages[onThisDayImages.length - 1]?.captured_at || null,
      image_count: onThisDayImages.length,
      images: onThisDayImages,
      thumbnail_image_id: onThisDayImages[0]?.id || '',
      center_lat: onThisDayImages[0]?.latitude || 0,
      center_lon: onThisDayImages[0]?.longitude || 0
    };
    dispatch(setSelectedMemory(tempMemory));
  }
};
```
On This Day temp memory should not assume images are sorted.
date_start/date_end use first/last array elements; if onThisDayImages isn’t sorted by captured_at, the range will be wrong. Compute min/max by timestamp (or sort a copy) before building the temp memory.
🤖 Prompt for AI Agents
In frontend/src/components/Memories/MemoriesPage.tsx around lines 176 to 194,
the temp "On This Day" memory assumes onThisDayImages are in chronological order
when setting date_start/date_end and thumbnail/center picks; compute the correct
min/max captured_at (or sort a shallow copy) and use the earliest timestamp as
date_start and the latest as date_end, and pick the thumbnail/center image
consistently (e.g., the earliest or a median) from that computed ordering;
update assignments to date_start, date_end, thumbnail_image_id, center_lat, and
center_lon to use those computed values.
```tsx
useEffect(() => {
  const handleEsc = (e: KeyboardEvent) => {
    if (e.key === 'Escape') {
      handleCloseViewer();
    }
  };

  if (memory) {
    document.addEventListener('keydown', handleEsc);
    // Prevent body scroll when modal is open
    document.body.style.overflow = 'hidden';
  }

  return () => {
    document.removeEventListener('keydown', handleEsc);
    document.body.style.overflow = 'unset';
  };
}, [memory, handleCloseViewer]);
```
ESC key closes the entire viewer even when MediaView is open.
When showMediaView is true, pressing ESC will close the MemoryViewer entirely instead of just MediaView. Consider only handling ESC when MediaView is not showing, or letting MediaView handle its own ESC.
```diff
 useEffect(() => {
   const handleEsc = (e: KeyboardEvent) => {
-    if (e.key === 'Escape') {
+    if (e.key === 'Escape' && !showMediaView) {
       handleCloseViewer();
     }
   };
```
🤖 Prompt for AI Agents
In frontend/src/components/Memories/MemoryViewer.tsx around lines 51 to 68, the
ESC key handler currently closes the entire MemoryViewer even when showMediaView
is true; update the logic so the ESC handling is disabled while MediaView is
open (or let MediaView handle its own ESC). Concretely, register the keydown
listener only when memory is present AND showMediaView is false, or keep the
listener but early-return in handleEsc if showMediaView is true; ensure the
cleanup still always removes the listener and restores
document.body.style.overflow.
```tsx
return () => {
  document.removeEventListener('keydown', handleEsc);
  document.body.style.overflow = 'unset';
};
```
Use empty string instead of 'unset' for body overflow reset.
Setting document.body.style.overflow = 'unset' may not properly restore the original value. Use an empty string to remove the inline style entirely, allowing CSS to take over.
```diff
 return () => {
   document.removeEventListener('keydown', handleEsc);
-  document.body.style.overflow = 'unset';
+  document.body.style.overflow = '';
 };
```
🤖 Prompt for AI Agents
In frontend/src/components/Memories/MemoryViewer.tsx around lines 64 to 67, the
cleanup sets document.body.style.overflow = 'unset' which can leave an inline
value instead of restoring CSS; change the cleanup to remove the inline style by
setting document.body.style.overflow = '' (empty string) so the inline overflow
is cleared and CSS rules take effect.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
backend/app/utils/images.py (1)
24-30: Duplicate logger definitions - second overwrites the first.

The logger is defined twice: line 24 uses `get_logger(__name__)` (custom logger) and line 30 uses `logging.getLogger(__name__)` (standard logger). The second definition overwrites the first, which may cause inconsistent logging behavior.

```diff
 from app.logging.setup_logging import get_logger
 from app.utils.extract_location_metadata import MetadataExtractor

 logger = get_logger(__name__)

-# GPS EXIF tag constant
-GPS_INFO_TAG = 34853
-
-logger = logging.getLogger(__name__)
+# GPS EXIF tag constant
+GPS_INFO_TAG = 34853
```
♻️ Duplicate comments (5)
backend/app/utils/verify_memories_setup.py (1)
120-123: Inconsistent database filename in warning message.

The code checks for `PictoPy.db` (line 118) but the warning message references `gallery.db`. This confuses users.

```diff
     if not db_path.exists():
-        print_warning("Database file 'gallery.db' not found")
+        print_warning("Database file 'PictoPy.db' not found")
         print_info("  → Database will be created on first run")
         return None  # Not an error, just not initialized yet
```

frontend/src/components/Memories/MemoriesPage.tsx (2)
154-171: Location/Date filtering viacenter_lat/center_lon === 0is incorrect.Using
0as "no GPS" breaks for valid locations on the equator/prime meridian (e.g., Gulf of Guinea, Null Island). Consider using an explicit discriminator from the backend (e.g.,location_name === 'Date-Based Memory') or null checks.// Calculate counts - const locationCount = allMemories.filter( - (m) => m.center_lat !== 0 || m.center_lon !== 0, - ).length; - const dateCount = allMemories.filter( - (m) => m.center_lat === 0 && m.center_lon === 0, - ).length; + const isDateBasedMemory = (m: Memory) => m.location_name === 'Date-Based Memory'; + const locationCount = allMemories.filter((m) => !isDateBasedMemory(m)).length; + const dateCount = allMemories.filter((m) => isDateBasedMemory(m)).length; // Simple filter function const applyFilter = (memories: Memory[]) => { if (filter === 'location') { - return memories.filter((m) => m.center_lat !== 0 || m.center_lon !== 0); + return memories.filter((m) => !isDateBasedMemory(m)); } if (filter === 'date') { - return memories.filter((m) => m.center_lat === 0 && m.center_lon === 0); + return memories.filter((m) => isDateBasedMemory(m)); } return memories; // 'all' };
188-207: On This Day temp memory assumes images are sorted.
date_start/date_enduse first/last array elements. IfonThisDayImagesisn't sorted bycaptured_at, the date range will be incorrect.Sort images or compute min/max dates explicitly:
const handleOnThisDayClick = () => { if (onThisDayImages.length > 0 && onThisDayMeta) { + // Sort images by captured_at to ensure correct date range + const sortedImages = [...onThisDayImages].sort((a, b) => { + const dateA = a.captured_at ? new Date(a.captured_at).getTime() : 0; + const dateB = b.captured_at ? new Date(b.captured_at).getTime() : 0; + return dateA - dateB; + }); const tempMemory: Memory = { memory_id: 'on-this-day', title: `On This Day - ${onThisDayMeta.today}`, description: `Photos from ${onThisDayMeta.years.join(', ')}`, location_name: 'Various locations', - date_start: onThisDayImages[0]?.captured_at || null, - date_end: - onThisDayImages[onThisDayImages.length - 1]?.captured_at || null, + date_start: sortedImages[0]?.captured_at || null, + date_end: sortedImages[sortedImages.length - 1]?.captured_at || null, image_count: onThisDayImages.length, images: onThisDayImages, - thumbnail_image_id: onThisDayImages[0]?.id || '', - center_lat: onThisDayImages[0]?.latitude || 0, - center_lon: onThisDayImages[0]?.longitude || 0, + thumbnail_image_id: sortedImages[0]?.id || '', + center_lat: sortedImages[0]?.latitude || 0, + center_lon: sortedImages[0]?.longitude || 0, };frontend/src/components/Memories/MemoryViewer.tsx (2)
57-74: ESC key still closes the entire viewer when MediaView is open.The past review comment on this code segment was not addressed. When
showMediaViewis true, pressing ESC will close the MemoryViewer entirely instead of just MediaView, bypassing MediaView's own keyboard handling.Apply this diff to respect MediaView's state:
useEffect(() => { const handleEsc = (e: KeyboardEvent) => { - if (e.key === 'Escape') { + if (e.key === 'Escape' && !showMediaView) { handleCloseViewer(); } }; if (memory) { document.addEventListener('keydown', handleEsc); // Prevent body scroll when modal is open document.body.style.overflow = 'hidden'; } return () => { document.removeEventListener('keydown', handleEsc); document.body.style.overflow = 'unset'; }; - }, [memory, handleCloseViewer]); + }, [memory, showMediaView, handleCloseViewer]);
70-73: Use empty string instead of 'unset' for body overflow reset.The past review comment on this code segment was not addressed. Setting
document.body.style.overflow = 'unset'may not properly restore the original value. Use an empty string to remove the inline style entirely, allowing CSS to take over.Apply this diff:
return () => { document.removeEventListener('keydown', handleEsc); - document.body.style.overflow = 'unset'; + document.body.style.overflow = ''; };
🧹 Nitpick comments (11)
backend/app/utils/verify_memories_setup.py (2)
17-51: Consider extracting shared CLI utilities to avoid code duplication.The
Colorsclass and print helper functions (print_header,print_success,print_error,print_warning,print_info) are duplicated frombackend/migrate_add_memories_columns.py. Extract these into a shared module (e.g.,app/utils/cli_helpers.py) to follow DRY principles.
67-80: Version comparison using string equality may produce false warnings.Comparing versions with
version != expected_versioncan flag warnings for compatible versions (e.g.,1.26.5vs1.26.4). Consider using semantic version comparison or removing strict version checks for informational purposes.- if expected_version and version != expected_version: - print_warning( - f"{package} installed (v{version}), expected v{expected_version}" - ) + if expected_version: + # Info only - exact version match not required + if version != expected_version: + print_info(f" Note: expected v{expected_version}") + print_success(f"{package} v{version}")frontend/src/services/memoriesApi.ts (1)
315-323:calculateYearsAgois imprecise for edge cases.Using
getFullYear()difference doesn't account for whether the anniversary date has passed this year. For example, on January 1, 2025, a photo from December 31, 2024 would show as "1 year ago" when it's actually 1 day ago.This is acceptable for a rough "years ago" display, but if precision matters, consider computing based on full date comparison.
backend/app/utils/images.py (1)
209-213: Minor: Redundant truthiness check on datetime object.The condition
isinstance(captured_at, datetime.datetime) and captured_atis redundant since datetime objects are always truthy. However, this doesn't affect correctness."captured_at": ( captured_at.isoformat() - if isinstance(captured_at, datetime.datetime) and captured_at + if isinstance(captured_at, datetime.datetime) else captured_at ), # Can be Nonedocs/backend/backend_python/openapi.json (1)
1460-1480: On This Day endpoint has no error responses defined.Unlike other memories endpoints,
/api/memories/on-this-dayonly defines a 200 response. Consider adding a 500 error response for consistency with other endpoints and to document potential database query failures mentioned in the description.frontend/src/components/Memories/MemoryCard.tsx (1)
26-36: Consider simplifying the thumbnail fallback chain.The nested ternary for thumbnail selection is complex. The logic tries
thumbnail_image_id, then first image, then first image path, then placeholder. Consider extracting to a helper function for clarity.Example refactor:
const getThumbnailUrl = (): string => { const thumbnailImage = memory.images.find((img) => img.id === memory.thumbnail_image_id) || memory.images[0]; if (thumbnailImage) { return getThumbnailUrl(thumbnailImage); } return '/photo.png'; // Default placeholder }; const thumbnailUrl = getThumbnailUrl();backend/app/database/images.py (1)
633-641: Consider additional safeguards for extreme latitude lon_offset values.While the clamping to 0.01 prevents division by zero, it can still produce very large
lon_offsetvalues (~180) near the poles. Consider adding an explicit upper bound check.Apply this diff:
# Calculate bounding box offsets lat_offset = radius_km / 111.0 cos_lat = abs(math.cos(math.radians(latitude))) # Clamp to avoid division by near-zero at poles - lon_offset = radius_km / (111.0 * max(cos_lat, 0.01)) + lon_offset = min(radius_km / (111.0 * max(cos_lat, 0.01)), 180.0)backend/app/routes/memories.py (2)
382-391: Avoid calling private methods of MemoryClustering.Directly accessing
_cluster_by_locationand_filter_valid_imagesbreaks encapsulation. Consider either:
- Making these methods public if they're part of the API surface
- Adding a dedicated public method to MemoryClustering for location-only clustering
Consider adding a public method to
MemoryClustering:def cluster_by_location_only(self, images: List[Dict[str, Any]]) -> List[List[Dict[str, Any]]]: """Public API for location-only clustering.""" valid_images = self._filter_valid_images(images) return self._cluster_by_location(valid_images)Then update the route:
- # Use internal method to get location clusters - location_clusters = clustering._cluster_by_location( - clustering._filter_valid_images(images) - ) + # Get location clusters + location_clusters = clustering.cluster_by_location_only(images)
382-386: Replace magic number with a constant or max value.Using
999999as a large date tolerance is unclear. Consider usingsys.maxsize,float('inf'), or a named constant likeNO_DATE_CLUSTERING.Example:
# At module level NO_DATE_CLUSTERING = sys.maxsize # Group all dates together # In the endpoint clustering = MemoryClustering( location_radius_km=location_radius_km, date_tolerance_days=NO_DATE_CLUSTERING, min_images_per_memory=1, )backend/app/utils/memory_clustering.py (2)
33-66: Consider externalizing city coordinates to a configuration file.The hardcoded
CITY_COORDINATESdictionary makes it difficult to add new cities or customize for different deployments. Consider moving to a JSON/YAML config file or database table.This would allow users to add their own cities without code changes:
# Load from config file with open('config/city_coordinates.json') as f: CITY_COORDINATES = json.load(f)Or make it extensible:
# Allow runtime additions def add_city_coordinate(city_name: str, latitude: float, longitude: float): CITY_COORDINATES[city_name] = (latitude, longitude)
326-359: Complex title generation logic could be extracted to a separate method.The title generation for location-based memories (lines 326-359) has nested conditionals for date formatting. Consider extracting the date range formatting logic to improve readability.
Example refactor:
def _format_location_title_with_dates(self, location_name: str, dates: List[datetime]) -> str: """Format location title with date range.""" if len(dates) <= 1: return location_name start_date = min(dates) end_date = max(dates) if start_date.strftime("%B %Y") == end_date.strftime("%B %Y"): # Same month return f"{location_name} in {start_date.strftime('%b %Y')}" elif start_date.year == end_date.year: # Same year return f"{location_name} - {start_date.strftime('%b')}-{end_date.strftime('%b %Y')}" else: # Different years return f"{location_name} - {start_date.strftime('%b %Y')} to {end_date.strftime('%b %Y')}"
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (15)
- backend/app/database/images.py (11 hunks)
- backend/app/routes/memories.py (1 hunks)
- backend/app/utils/extract_location_metadata.py (1 hunks)
- backend/app/utils/images.py (5 hunks)
- backend/app/utils/memory_clustering.py (1 hunks)
- backend/app/utils/verify_memories_setup.py (1 hunks)
- docs/backend/backend_python/openapi.json (6 hunks)
- docs/frontend/memories.md (1 hunks)
- docs/overview/features.md (2 hunks)
- frontend/src/components/Memories/FeaturedMemoryCard.tsx (1 hunks)
- frontend/src/components/Memories/MemoriesPage.tsx (1 hunks)
- frontend/src/components/Memories/MemoryCard.tsx (1 hunks)
- frontend/src/components/Memories/MemoryViewer.tsx (1 hunks)
- frontend/src/components/Memories/index.ts (1 hunks)
- frontend/src/services/memoriesApi.ts (1 hunks)
✅ Files skipped from review due to trivial changes (2)
- docs/overview/features.md
- docs/frontend/memories.md
🚧 Files skipped from review as they are similar to previous changes (2)
- frontend/src/components/Memories/FeaturedMemoryCard.tsx
- backend/app/utils/extract_location_metadata.py
🧰 Additional context used
🧬 Code graph analysis (5)
backend/app/utils/images.py (1)
backend/app/utils/extract_location_metadata.py (2)
MetadataExtractor(27-381)extract_all(209-247)
backend/app/utils/verify_memories_setup.py (1)
backend/migrate_add_memories_columns.py (3)
Colors(21-27)print_header(31-35)print_success(37-39)
backend/app/database/images.py (1)
backend/app/utils/images.py (1)
image_util_parse_metadata(534-551)
frontend/src/components/Memories/MemoryViewer.tsx (5)
frontend/src/store/hooks.ts (2)
useAppDispatch(13-13)useAppSelector(14-14)frontend/src/store/slices/memoriesSlice.ts (1)
selectSelectedMemory(329-329)frontend/src/features/imageSlice.ts (1)
setCurrentViewIndex(22-34)frontend/src/services/memoriesApi.ts (5)
generateMemoryTitle(399-435)formatLocationName(443-455)formatDateRangeRelative(342-390)formatPhotoCount(331-333)getThumbnailUrl(463-471)frontend/src/components/Media/MediaView.tsx (1)
MediaView(25-224)
frontend/src/services/memoriesApi.ts (3)
frontend/src/config/Backend.ts (1)
BACKEND_URL(1-1)backend/app/routes/memories.py (7)
MemoryImage(42-50)Memory(53-66)GenerateMemoriesResponse(69-76)TimelineResponse(79-85)OnThisDayResponse(88-95)LocationCluster(98-105)LocationsResponse(108-113)frontend/src/components/Memories/index.ts (2)
MemoryImage(14-14)Memory(14-14)
🔇 Additional comments (9)
backend/app/utils/verify_memories_setup.py (1)
211-243: Modifying `sys.path` can cause import side effects.

Inserting into `sys.path` at line 217 may cause unintended module resolution issues. This is acceptable for a standalone verification script, but ensure this script isn't imported elsewhere.

frontend/src/services/memoriesApi.ts (2)
1-12: LGTM!Good use of centralized
BACKEND_URLfrom config, and the imports are appropriate for the service's responsibilities.
21-45: Interface definitions align well with backend schemas.The
MemoryImageandMemoryinterfaces correctly mirror the backend Pydantic models frombackend/app/routes/memories.py. The nullable fields (number | null,string | null) properly match the backend'sOptionaltypes.frontend/src/components/Memories/MemoriesPage.tsx (2)
37-58: LGTM!Clean skeleton components with appropriate Tailwind classes for dark mode support and consistent styling.
178-181: Data fetching on mount looks correct.The
useEffectproperly includesdispatchin the dependency array and triggersfetchAllMemoriesDataon mount.backend/app/utils/images.py (2)
155-156: LGTM - MetadataExtractor instantiation.Creating the extractor once outside the loop is efficient. Good practice.
173-216: GPS extraction integration looks good with proper error handling.The try/except ensures GPS extraction failures don't block image uploads. Logging provides visibility into extraction results. The record always includes latitude, longitude, and captured_at fields (even when None) to satisfy SQL INSERT parameters.
docs/backend/backend_python/openapi.json (2)
1308-1382: LGTM - Generate memories endpoint well-defined.The
/api/memories/generatePOST endpoint has proper parameter constraints (min/max values), sensible defaults, and appropriate response schemas.
1936-1972: LGTM - Memory schemas align with backend models.The
GenerateMemoriesResponse,Memory, andMemoryImageschemas correctly mirror the Pydantic models defined inbackend/app/routes/memories.py. Required fields and nullable types are properly specified.Also applies to: 2606-2746
```ts
export const getThumbnailUrl = (image: MemoryImage): string => {
  // Use Tauri's convertFileSrc for proper file path handling in desktop app
  if (image.thumbnailPath) {
    return convertFileSrc(image.thumbnailPath);
  }

  // Fallback to placeholder
  return '/placeholder-image.png';
};
```
🧩 Analysis chain
🏁 Script executed:
fd -t f "placeholder" frontend/
Repository: AOSSIE-Org/PictoPy
Length of output: 44
🏁 Script executed:
find frontend/public -type f -name "*.png" 2>/dev/null | head -20
Repository: AOSSIE-Org/PictoPy
Length of output: 403
🏁 Script executed:
ls -la frontend/ | grep -E "(public|assets|static)"
Repository: AOSSIE-Org/PictoPy
Length of output: 120
🏁 Script executed:
find frontend -type d -name "public" -o -type d -name "assets" -o -type d -name "static" 2>/dev/null
Repository: AOSSIE-Org/PictoPy
Length of output: 103
🏁 Script executed:
rg "placeholder-image" frontend/Repository: AOSSIE-Org/PictoPy
Length of output: 345
Add the missing /placeholder-image.png asset to frontend/public/.
The fallback is referenced in three places (getThumbnailUrl, MemoryViewer, and FeaturedMemoryCard), but the asset does not exist in frontend/public/. Without it, image loading failures will cascade throughout the application. Either add the placeholder image to frontend/public/ or update all three references to use an existing asset.
🤖 Prompt for AI Agents
In frontend/src/services/memoriesApi.ts around lines 463 to 471, the function
falls back to '/placeholder-image.png' but that asset is missing; add a
placeholder-image.png file to frontend/public/ (so '/placeholder-image.png'
resolves at runtime) or update all three references (getThumbnailUrl,
MemoryViewer, FeaturedMemoryCard) to point to an existing public asset path;
ensure the filename and path exactly match the string used in code and commit
the new file so builds and runtime requests succeed.
Here's the demo video: demo-video-comp.mov
rahulharpal1603
left a comment
Sure, I will fix it up shortly. Also, if you require any extra changes, like notifications for memories similar to what Google Photos does, or any other further additions, please feel free to reach out to me.
Sure. For now we will merge this, and in the future we can go for notifications.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@backend/main.py`:
- Line 49: Guard the call to db_migrate_add_memories_columns() so only one
process runs migrations: wrap the call in a check that only runs when a
designated env flag is set (e.g., os.getenv("RUN_MIGRATIONS") == "true") or when
the process is the primary/master worker, and skip it otherwise; also add a safe
try/except around db_migrate_add_memories_columns() to log failures without
crashing other workers. Ensure you update the startup path that currently calls
db_migrate_add_memories_columns() so it only executes when the env flag or
master-process condition is satisfied.
In `@docs/backend/backend_python/openapi.json`:
- Around line 2661-2683: The Memory schema currently requires center_lat and
center_lon but date-based memories can lack GPS; make center_lat and center_lon
nullable and remove them from the required list in the Memory schema (same
approach used by MemoryImage.latitude and MemoryImage.longitude) so the fields
can be present with null values or omitted for non-GPS/date-based memories;
update the "center_lat" and "center_lon" definitions to allow null (e.g., type:
["number","null"]) and remove them from the required array for the Memory
schema.
🧹 Nitpick comments (3)
docs/backend/backend_python/openapi.json (3)
1308-1312: API path prefix inconsistency.

The new memory endpoints use the `/api/memories/` prefix while all existing endpoints use bare paths like `/folders/`, `/albums/`, `/images/`. This inconsistency may cause confusion for API consumers.

Consider aligning with the existing convention by using `/memories/` instead of `/api/memories/`.
1360-1381: Missing 500 error response documentation.

The memory endpoints only document 200 and 422 responses, but don't include 500 Internal Server Error responses. Other endpoints in this API (e.g., `/folders/add-folder`, `/images/`) consistently document 500 error responses with appropriate error schemas.

For API consistency, consider adding 500 error responses to all memory endpoints, referencing a suitable error schema.
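On the FastAPI side, documenting the extra status code is typically just a `responses` entry on the route decorator. A hedged sketch follows; the `ErrorResponse` model, the helper, and the response shape are made-up placeholders, not the project's actual schema:

```python
from fastapi import APIRouter, HTTPException
from pydantic import BaseModel

router = APIRouter(prefix="/api/memories", tags=["memories"])


class ErrorResponse(BaseModel):
    """Hypothetical error schema; reuse the project's real one if it exists."""
    success: bool
    error: str
    message: str


def fetch_on_this_day_images() -> list:
    """Stand-in for the real database query."""
    return []


@router.get(
    "/on-this-day",
    responses={500: {"model": ErrorResponse, "description": "Internal Server Error"}},
)
async def get_on_this_day():
    try:
        images = fetch_on_this_day_images()
        return {"images": images, "total_images": len(images)}  # response shape is illustrative
    except Exception as exc:
        raise HTTPException(status_code=500, detail=f"Failed to fetch memories: {exc}")
```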
3071-3105: Consider defining explicit properties for `date_range`.

The `date_range` field uses a generic `object` type with `additionalProperties: string`. For better API documentation and client code generation, consider defining explicit `start` and `end` properties.

Suggested improvement:

```diff
 "date_range": {
-  "additionalProperties": {
-    "type": "string"
+  "type": "object",
+  "properties": {
+    "start": {
+      "type": "string",
+      "title": "Start Date"
+    },
+    "end": {
+      "type": "string",
+      "title": "End Date"
+    }
   },
-  "type": "object",
+  "required": ["start", "end"],
   "title": "Date Range"
 },
```
backend/main.py
Outdated
```python
generate_openapi_json()
db_create_folders_table()
db_create_images_table()
db_migrate_add_memories_columns()  # Add Memories columns to existing database
```
Guard startup migrations to avoid multi‑worker contention.
If multiple worker processes start simultaneously, each will attempt the migration and can race on ALTER TABLE/index creation. This can trigger startup errors or noisy logs. Consider gating migrations to a single process via an env flag (or run migrations out‑of‑band).
🔧 Proposed guard
```diff
-    db_migrate_add_memories_columns()  # Add Memories columns to existing database
+    if os.getenv("PICTOPY_RUN_DB_MIGRATIONS", "1") == "1":
+        db_migrate_add_memories_columns()  # Add Memories columns to existing database
```
🤖 Prompt for AI Agents
+ db_migrate_add_memories_columns() # Add Memories columns to existing database🤖 Prompt for AI Agents
In `@backend/main.py` at line 49, Guard the call to
db_migrate_add_memories_columns() so only one process runs migrations: wrap the
call in a check that only runs when a designated env flag is set (e.g.,
os.getenv("RUN_MIGRATIONS") == "true") or when the process is the primary/master
worker, and skip it otherwise; also add a safe try/except around
db_migrate_add_memories_columns() to log failures without crashing other
workers. Ensure you update the startup path that currently calls
db_migrate_add_memories_columns() so it only executes when the env flag or
master-process condition is satisfied.
…nd-implementation
…tion' into feat/memories-backend-implementation
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@backend/app/routes/memories.py`:
- Around line 54-68: The Memory model currently declares center_lat and
center_lon as required floats which prevents nullable values; update the Memory
class to make these fields Optional[float] and assign a default of None (e.g.,
center_lat: Optional[float] = None, center_lon: Optional[float] = None) and
ensure Optional is imported from typing so the generated OpenAPI schema marks
them nullable and avoids validation errors for date-based memories without GPS.
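A rough sketch of that change, assuming a plain Pydantic model (only the two coordinate fields are shown; the rest of Memory is elided):
```python
from typing import Optional

from pydantic import BaseModel


class Memory(BaseModel):
    # ... other Memory fields stay as they are in the PR ...
    center_lat: Optional[float] = None  # null for date-based memories without GPS
    center_lon: Optional[float] = None
```
With defaults of None, the generated OpenAPI schema should mark both properties as nullable, matching how MemoryImage.latitude and MemoryImage.longitude are already declared.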
In `@frontend/src/components/Memories/MemoryViewer.tsx`:
- Around line 41-53: handleToggleFavorite currently performs an optimistic UI
update by dispatching toggleImageFavorite after calling togglefav, but on API
failure it only logs to console; update handleToggleFavorite to either show a
user-facing toast (e.g., call showToast or enqueueSnackbar) with the error
message in the catch block and/or revert the optimistic change by dispatching
toggleImageFavorite(imageId) again to undo the state change; reference the
existing functions togglefav, toggleImageFavorite, and handleToggleFavorite and
ensure the catch block performs the toast and/or revert so users receive
immediate feedback when the API call fails.
In `@frontend/src/services/memoriesApi.ts`:
- Around line 34-46: The Memory interface defines center_lat and center_lon as
non-nullable numbers but the API can return null for memories without GPS;
update the Memory interface (symbols: Memory, center_lat, center_lon) to allow
null (e.g., number | null) and adjust any downstream consumers/type guards to
handle null values where Memory objects are used so code won’t assume numeric
coordinates exist.
🧹 Nitpick comments (7)
backend/app/routes/memories.py (2)
145-148: Move import to module level for clarity. The db_get_all_images_for_memories import is placed inside the function body. If this is to avoid circular imports, consider adding a comment explaining why. Otherwise, move it to the top-level imports for consistency with other database function imports.
Proposed fix
```diff
 from app.database.images import (
     db_get_images_with_location,
     db_get_images_by_date_range,
     db_get_images_by_year_month,
+    db_get_all_images_for_memories,
 )
```
Then remove the inline import at line 146.
389-422: Avoid accessing private methods of MemoryClustering. The endpoint directly calls _cluster_by_location, _filter_valid_images, and _reverse_geocode, which are internal implementation details (prefixed with _). This creates tight coupling and fragile code that may break if MemoryClustering internals change. Consider adding a public method to MemoryClustering specifically for location-only clustering.
Proposed approach
Add a public method to MemoryClustering:
```python
# In memory_clustering.py
def get_location_clusters(self, images: List[Dict]) -> List[Dict]:
    """Public API for location-only clustering."""
    valid_images = self._filter_valid_images(images)
    clusters = self._cluster_by_location(valid_images)
    return [
        {
            "images": cluster,
            "center_lat": sum(img["latitude"] for img in cluster) / len(cluster),
            "center_lon": sum(img["longitude"] for img in cluster) / len(cluster),
            "location_name": self._reverse_geocode(center_lat, center_lon),
        }
        for cluster in clusters
        if cluster
    ]
```
Then use it in the endpoint:
```python
location_clusters = clustering.get_location_clusters(images)
```
backend/main.py (1)
49-60: Migration guard implemented, but consider fail-fast option for critical environments. The RUN_MIGRATIONS environment variable guard addresses the multi-worker contention concern. However, the current implementation silently continues if migrations fail. For development/staging environments, consider an additional flag to fail-fast on migration errors to catch issues early.
Proposed enhancement
```diff
 if should_run_migrations:
     try:
         db_migrate_add_memories_columns()
         logger.info("Database migrations completed successfully")
     except Exception as e:
         logger.error(f"Failed to run database migrations: {e}", exc_info=True)
+        # Optionally fail-fast in non-production environments
+        if os.getenv("FAIL_ON_MIGRATION_ERROR", "false").lower() == "true":
+            raise
```
frontend/src/components/Memories/MemoryViewer.tsx (2)
61-80: Duplicate image transformation logic. The same image-to-Redux-format mapping appears twice: once in handleImageClick (lines 61-80) and again inline in the MediaView props (lines 326-345). Extract this to a memoized helper to avoid duplication and ensure consistency.
Proposed refactor
```diff
+  // Memoize transformed images for MediaView
+  const formattedImages = useMemo(() => {
+    if (!memory) return [];
+    return memory.images.map((img) => ({
+      id: img.id,
+      path: img.path,
+      thumbnailPath: img.thumbnailPath,
+      folder_id: '',
+      isTagged: false,
+      isFavourite: img.isFavourite || false,
+      tags: [],
+      metadata: {
+        name: img.path.split('/').pop() || '',
+        date_created: img.captured_at,
+        width: 0,
+        height: 0,
+        file_location: img.path,
+        file_size: 0,
+        item_type: 'image' as const,
+        latitude: img.latitude || undefined,
+        longitude: img.longitude || undefined,
+      },
+    }));
+  }, [memory]);

   // Handle image click - open MediaView
   const handleImageClick = useCallback(
     (index: number) => {
       if (!memory) return;

-      // Convert memory images to Image[] format for Redux state
-      const formattedImages = memory.images.map((img) => ({
-        ...mapping logic...
-      }));

       // Set images in Redux state first
       dispatch(setImages(formattedImages));

       // Then set the current index
       dispatch(setCurrentViewIndex(index));
       setShowMediaView(true);
     },
-    [memory, dispatch],
+    [memory, dispatch, formattedImages],
   );
```
Then use formattedImages directly in the MediaView:
```diff
 <MediaView
   onClose={handleMediaViewClose}
   type="image"
   onToggleFavorite={handleToggleFavorite}
-  images={memory.images.map((img) => ({
-    ...mapping logic...
-  }))}
+  images={formattedImages}
 />
```
Also applies to: 326-345
125-127: Consider memoizing handleImageError. This handler is recreated on every render. While not critical for an onError handler, wrapping it in useCallback or moving it outside the component as a stable reference would be more consistent with the other handlers in this file.
Proposed fix
```diff
+  // Handle image load error - show placeholder
+  const handleImageError = useCallback((e: React.SyntheticEvent<HTMLImageElement>) => {
+    e.currentTarget.src = '/photo.png';
+  }, []);
+
   // Don't render if no memory selected
   if (!memory) return null;

   // Generate better title and format location
   const displayTitle = generateMemoryTitle(memory);
   const displayLocation = formatLocationName(memory.location_name);
-
-  // Handle image load error
-  const handleImageError = (e: React.SyntheticEvent<HTMLImageElement>) => {
-    e.currentTarget.src = '/photo.png';
-  };
```
frontend/src/store/slices/memoriesSlice.ts (2)
177-196: The try/catch in fetchAllMemoriesData won't catch individual thunk rejections. When using dispatch(thunk()), the returned promise resolves with the action (fulfilled or rejected), not the actual result. This means Promise.all will succeed even if individual thunks fail, and the catch block here will never execute from thunk failures. Since individual errors are already handled by each thunk's extraReducers, this is functionally fine, but the try/catch is misleading. Consider either removing it or using unwrapResult if you need to detect failures in the composite thunk.
Option 1: Simplify by removing misleading try/catch
```diff
 export const fetchAllMemoriesData = createAsyncThunk<
   void,
   void,
-  { rejectValue: string }
+  Record<string, never>
 >(
   'memories/fetchAllData',
-  async (_, { dispatch, rejectWithValue }) => {
-    try {
-      await Promise.all([
-        dispatch(fetchOnThisDay()),
-        dispatch(fetchRecentMemories(30)),
-        dispatch(fetchYearMemories(365)),
-        dispatch(fetchAllMemories())
-      ]);
-    } catch (error) {
-      const apiError = error as ApiError;
-      return rejectWithValue(apiError.message);
-    }
+  async (_, { dispatch }) => {
+    await Promise.all([
+      dispatch(fetchOnThisDay()),
+      dispatch(fetchRecentMemories(30)),
+      dispatch(fetchYearMemories(365)),
+      dispatch(fetchAllMemories())
+    ]);
   }
 );
```
Option 2: Properly handle rejections with unwrapResult
```diff
+import { unwrapResult } from '@reduxjs/toolkit';
+
 export const fetchAllMemoriesData = createAsyncThunk<
   void,
   void,
   { rejectValue: string }
 >(
   'memories/fetchAllData',
   async (_, { dispatch, rejectWithValue }) => {
     try {
-      await Promise.all([
-        dispatch(fetchOnThisDay()),
-        dispatch(fetchRecentMemories(30)),
-        dispatch(fetchYearMemories(365)),
-        dispatch(fetchAllMemories())
-      ]);
+      const results = await Promise.allSettled([
+        dispatch(fetchOnThisDay()).then(unwrapResult),
+        dispatch(fetchRecentMemories(30)).then(unwrapResult),
+        dispatch(fetchYearMemories(365)).then(unwrapResult),
+        dispatch(fetchAllMemories()).then(unwrapResult)
+      ]);
+      const rejected = results.filter(r => r.status === 'rejected');
+      if (rejected.length > 0) {
+        return rejectWithValue('Some memories failed to load');
+      }
     } catch (error) {
       const apiError = error as ApiError;
       return rejectWithValue(apiError.message);
     }
   }
 );
```
216-250: Consider normalizing state if performance becomes an issue. The toggleImageFavorite reducer iterates through all memories and images to find and update matching IDs. For large photo libraries, this O(n×m) operation could cause UI lag. This is fine for now, but if performance issues arise, consider using RTK's createEntityAdapter to normalize images by ID for O(1) lookups.
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
backend/app/utils/folders.py (2)
17-37: Normalize root_path to avoid KeyError and incorrect parent mapping.
dirpath is absolute but root_path might not be; this can break the root comparison and cause folder_map[root_path] to KeyError or set the root's parent incorrectly.
🐛 Proposed fix
```diff
 def folder_util_add_folder_tree(root_path, parent_folder_id=None, AI_Tagging=False, taggingCompleted=None):
     """
     Recursively collect folder data and insert all folders in a single database transaction.
     All folders are initially inserted with NULL parent_id, which is updated after insertion.
     Returns the root folder's UUID and the folder map (containing folder_id and parent_id).
     """
+    root_path = os.path.abspath(root_path)
     folders_data = []
     folder_map = {}  # Maps path to (folder_id, parent_id)
```
143-165: Prevent partial inserts when db_update_parent_ids_for_subtree fails. If the update throws, the insert has already happened and you log-and-continue, leaving orphaned rows (NULL parent_id) and a mismatch with added_count. Consider wrapping insert+update in a transaction or cleaning up inserted IDs on failure. Also normalize folder_path so db_update_parent_ids_for_subtree and folder_map stay consistent.
🧹 Suggested cleanup + path normalization
```diff
-    for folder_path in folders_to_add:
+    for folder_path in folders_to_add:
+        folder_path = os.path.abspath(folder_path)
+        folder_map = None
         try:
             # Add each new folder tree (including its subdirectories)
             root_folder_id, folder_map = folder_util_add_folder_tree(
                 root_path=folder_path,
                 parent_folder_id=parent_folder_id,
                 AI_Tagging=False,  # Default to False for new folders
                 taggingCompleted=False,
             )

             # Update parent IDs for the new folder tree
             db_update_parent_ids_for_subtree(folder_path, folder_map)
@@
-        except Exception as e:
+        except Exception as e:
+            if folder_map:
+                db_delete_folders_batch([folder_id for folder_id, _ in folder_map.values()])
             # Log the error but continue with other folders
             logger.error(f"Error adding folder {folder_path}: {e}")
```
backend/app/utils/images.py (1)
24-30: Duplicate logger initialization. The logger is initialized twice: once at line 24 using get_logger(__name__) and again at line 30 using logging.getLogger(__name__). This creates two separate logger instances with potentially different configurations.
🔧 Remove the duplicate logger
```diff
 from app.logging.setup_logging import get_logger
 from app.utils.extract_location_metadata import MetadataExtractor

 logger = get_logger(__name__)

 # GPS EXIF tag constant
 GPS_INFO_TAG = 34853
-
-logger = logging.getLogger(__name__)
```
🤖 Fix all issues with AI agents
In `@backend/app/routes/memories.py`:
- Around line 298-302: The sort key uses datetime.fromisoformat inside the
lambda for all_images.sort which can raise ValueError on malformed captured_at
strings; update the key logic (the lambda used in all_images.sort) to
defensively parse captured_at with a try/except (or a helper like
parse_captured_at) and return datetime.min when captured_at is missing or
parsing fails so malformed dates don't raise and the sort still works.
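A small sketch of that defensive key, assuming captured_at is either an ISO-8601 string or missing (all_images is the list the endpoint already builds):
```python
from datetime import datetime


def parse_captured_at(value) -> datetime:
    """Return a sortable datetime, falling back to datetime.min on bad input."""
    if not value:
        return datetime.min
    try:
        return datetime.fromisoformat(value)
    except (ValueError, TypeError):
        return datetime.min


# Newest first; malformed or missing dates simply sort last
all_images.sort(key=lambda img: parse_captured_at(img.get("captured_at")), reverse=True)
```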
In `@backend/app/utils/memory_clustering.py`:
- Around line 215-221: The loop over location_clusters calls
self._create_simple_memory(temp_cluster, memory_type="location") and appends its
result directly; since _create_simple_memory can return None on error, guard the
append by checking the returned value (memory) is not None before appending to
the memories list—mirror the defensive pattern used in _cluster_date_images
where the result of _create_simple_memory is validated; locate the loop using
location_clusters and _cluster_by_date and add an if memory is not None (or
equivalent) around memories.append(memory).
- Around line 165-171: The GPS presence checks incorrectly drop valid 0.0
coordinates because they use truthiness; update all checks that do
img.get("latitude") and img.get("longitude") (e.g., the has_gps assignment in
the images loop, the two list comprehensions around lines 301–302, and the
negative checks around lines 553–554) to explicitly test against None (e.g.,
img.get("latitude") is not None and img.get("longitude") is not None) so 0.0
values are treated as valid coordinates; keep the rest of the logic unchanged.
- Around line 333-336: The current memory_id built in _create_simple_memory
(memory_id = f"{memory_type}_{int(date_obj.timestamp())}_{len(images)}") can
collide; replace this ad-hoc scheme by generating a unique id—either call the
existing _generate_memory_id(...) helper (providing memory_type, date_obj and
images or image hashes) or append a short UUID/nonce or a hash of the images
list to the id so it is unique even for same-second creations; update the
assignment to memory_id in _create_simple_memory accordingly and ensure any
downstream code expecting the prior format still works with the new id.
- Around line 193-195: The current sort uses memories.sort(key=lambda m:
m.get("date_start", ""), reverse=True) which still passes None through when the
"date_start" key exists and is set to None, causing TypeError during comparison;
update the sort key to coerce None to a consistent comparable value (e.g., an
empty string) such as memories.sort(key=lambda m: m.get("date_start") or "",
reverse=True) so GPS-only memories with date_start=None are handled safely and
sorted as intended (place this change where the memories.sort call is in
memory_clustering.py).
- Around line 563-585: In _filter_valid_images, when parsing captured_at fails
(inside the except branch where you currently log "Could not parse date..."),
ensure you do not leave the original string in img_copy["captured_at"]; instead
remove the key or set img_copy["captured_at"] = None so only datetime objects
(or None) are forwarded to downstream functions like _cluster_by_date; update
the block that logs the failure to explicitly clear img_copy["captured_at"] (or
pop it) after the logger.debug call to prevent string values from reaching
_cluster_by_date and causing TypeError during sorting.
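A combined sketch of the two filtering fixes above; the method in the PR is _filter_valid_images, while this standalone version only illustrates the intended checks:
```python
from datetime import datetime
from typing import Any, Dict, List


def filter_valid_images(images: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
    valid = []
    for img in images:
        img_copy = dict(img)
        # 0.0 is a real coordinate, so compare against None instead of truthiness
        has_gps = (
            img_copy.get("latitude") is not None
            and img_copy.get("longitude") is not None
        )
        raw = img_copy.get("captured_at")
        if isinstance(raw, str):
            try:
                img_copy["captured_at"] = datetime.fromisoformat(raw)
            except ValueError:
                img_copy["captured_at"] = None  # never forward an unparsed string
        if has_gps or img_copy.get("captured_at") is not None:
            valid.append(img_copy)
    return valid
```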
In `@backend/test_memories_api.py`:
- Line 13: The tests use BASE_URL in backend/test_memories_api.py which points
to port 8000 but the server in backend/main.py runs on port 52123; update the
BASE_URL constant to use "http://localhost:52123/api/memories" so tests target
the correct server port (change the BASE_URL definition in
backend/test_memories_api.py).
- Around line 98-113: The health check in check_server currently targets port
8000 and prints a developer-specific absolute path; change the request URL in
check_server to use port 52123 (e.g., "http://localhost:52123/health") and
remove the hardcoded path in the exception message — replace the printed start
instructions with a generic, repo-relative command such as "cd backend" then
"python main.py" or reference an environment/config variable if available so the
message is not developer-specific.
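In the test script, both port fixes reduce to a couple of module-level constants; the values below mirror the review's suggestion, and the exact URL layout is an assumption:
```python
# backend/test_memories_api.py (illustrative constants only)
SERVER_PORT = 52123
BASE_URL = f"http://localhost:{SERVER_PORT}/api/memories"
HEALTH_URL = f"http://localhost:{SERVER_PORT}/health"
```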
🧹 Nitpick comments (15)
frontend/src/components/Media/MediaView.tsx (1)
108-126: Handle async favorite toggles to avoid unhandled rejections.
If onToggleFavorite returns a Promise, the current fire-and-forget call can surface unhandled rejections and closes the view even if the toggle fails. Consider making the handler async and awaiting the result.
🔧 Suggested tweak
```diff
-  const handleToggleFavourite = useCallback(() => {
-    if (currentImage) {
-      if (currentImage?.id) {
-        // Use custom handler if provided, otherwise use default
-        if (onToggleFavorite) {
-          onToggleFavorite(currentImage.id);
-        } else {
-          toggleFavourite(currentImage.id);
-        }
-      }
-      if (location.pathname === ROUTES.FAVOURITES) handleClose();
-    }
-  }, [
+  const handleToggleFavourite = useCallback(async () => {
+    if (!currentImage?.id) return;
+    try {
+      const result = onToggleFavorite
+        ? onToggleFavorite(currentImage.id)
+        : toggleFavourite(currentImage.id);
+      await Promise.resolve(result);
+      if (location.pathname === ROUTES.FAVOURITES) handleClose();
+    } catch (err) {
+      console.error("Failed to toggle favourite", err);
+    }
+  }, [
     currentImage,
     toggleFavourite,
     onToggleFavorite,
     location.pathname,
     handleClose,
   ]);
```
backend/app/logging/setup_logging.py (1)
101-129: Provide a fallback logging config when config load fails.
Returning early leaves logging unconfigured if the JSON is missing or malformed. Consider a minimal fallback (e.g., logging.basicConfig) so logs still emit.
🛠️ Suggested fallback
```diff
 config = load_config()
 if not config:
-    print("No logging configuration found. Using default settings.", file=sys.stderr)
-    return
+    logging.basicConfig(level=logging.INFO, stream=sys.stdout)
+    print("No logging configuration found. Using default settings.", file=sys.stderr)
+    return
```
backend/app/utils/memory_clustering.py (1)
823-841: Prefer stable hashing for memory IDs.
Python's built-in hash() is randomized per process; IDs can change between runs.
🔧 Suggested fix (stable hash)
```diff
+import hashlib
@@
-    location_hash = hash((round(latitude, 2), round(longitude, 2)))
+    raw = f"{round(latitude, 2)},{round(longitude, 2)}".encode()
+    location_hash = hashlib.sha1(raw).hexdigest()[:10]
@@
-    return f"mem_{date_str}_{abs(location_hash)}"
+    return f"mem_{date_str}_{location_hash}"
@@
-    return f"mem_nodate_{abs(location_hash)}"
+    return f"mem_nodate_{location_hash}"
```
frontend/src/services/memoriesApi.ts (1)
122-140: Use nullish checks for numeric options (avoid dropping 0).
If 0 is a valid value, current truthy checks skip it.
🔧 Suggested fix
```diff
-    if (options?.location_radius_km)
+    if (options?.location_radius_km !== undefined)
       params.append(
         'location_radius_km',
         options.location_radius_km.toString(),
       );
-    if (options?.date_tolerance_days)
+    if (options?.date_tolerance_days !== undefined)
       params.append(
         'date_tolerance_days',
         options.date_tolerance_days.toString(),
       );
-    if (options?.min_images)
+    if (options?.min_images !== undefined)
       params.append('min_images', options.min_images.toString());
```
```diff
-    if (options?.location_radius_km)
+    if (options?.location_radius_km !== undefined)
       params.append(
         'location_radius_km',
         options.location_radius_km.toString(),
       );
-    if (options?.date_tolerance_days)
+    if (options?.date_tolerance_days !== undefined)
       params.append(
         'date_tolerance_days',
         options.date_tolerance_days.toString(),
       );
```
```diff
-    if (options?.location_radius_km)
+    if (options?.location_radius_km !== undefined)
       params.append(
         'location_radius_km',
         options.location_radius_km.toString(),
       );
-    if (options?.max_sample_images)
+    if (options?.max_sample_images !== undefined)
       params.append('max_sample_images', options.max_sample_images.toString());
```
Also applies to: 159-179, 213-225
backend/main.py (1)
49-60: Logger used before initialization. The logger variable is initialized at line 91, but the migration code at lines 55, 57, and 60 uses logger during the lifespan startup. Since lifespan is called when the app starts (after module-level code executes), this should work, but the code organization is confusing. However, there's a more significant issue: if RUN_MIGRATIONS defaults to "true", multiple Uvicorn workers will still race on migrations. Consider defaulting to "false" in production multi-worker deployments or documenting this behavior.
Consider documenting the migration behavior
```diff
 # Only run migrations in the primary process or when explicitly enabled
+# NOTE: In multi-worker deployments, set RUN_MIGRATIONS=false for all but one worker
+# to avoid migration race conditions, or run migrations out-of-band before starting.
 should_run_migrations = os.getenv("RUN_MIGRATIONS", "true").lower() == "true"
```
frontend/src/components/Memories/MemoryViewer.tsx (1)
69-103: Image metadata hardcodes zero values for width/height/file_size. When converting memory images to the Redux format, width, height, and file_size are hardcoded to 0. While this may be acceptable if MediaView doesn't rely on these values, it could cause issues if other components or features use this metadata. Consider fetching actual dimensions if needed, or document this limitation.
backend/app/routes/images.py (1)
102-108: Consider optimizing image fetch after toggle. The current implementation calls db_get_all_images() to find a single image by ID. For large image collections, this is inefficient. Consider adding a dedicated db_get_image_by_id() function. This is not a regression from this PR, but worth noting for future optimization.
backend/test_memories_api.py (1)
23-31: Response truncation may hide important error details. The print_response function truncates output at 500 characters with [:500]. For debugging failures, this may hide critical information. Consider making truncation configurable or showing full output on errors.
27-36: Redundant fallback logic in thumbnail URL resolution. The fallback chain has redundancy: if thumbnailImage is falsy (line 29 fails to find by ID), you already fall back to memory.images[0]. Then on line 34-35, you check memory.images[0]?.path again, which would be the same image.
♻️ Suggested simplification
```diff
-  // Get thumbnail image (first image or find by thumbnail_image_id)
-  const thumbnailImage =
-    memory.images.find((img) => img.id === memory.thumbnail_image_id) ||
-    memory.images[0];
-
-  // Handle missing thumbnail gracefully - use path as fallback
-  const thumbnailUrl = thumbnailImage
-    ? getThumbnailUrl(thumbnailImage)
-    : memory.images[0]?.path
-      ? getThumbnailUrl(memory.images[0])
-      : '/photo.png'; // Default placeholder
+  // Get thumbnail image (first image or find by thumbnail_image_id)
+  const thumbnailImage =
+    memory.images.find((img) => img.id === memory.thumbnail_image_id) ||
+    memory.images[0];
+
+  // Handle missing thumbnail gracefully
+  const thumbnailUrl = thumbnailImage
+    ? getThumbnailUrl(thumbnailImage)
+    : '/photo.png'; // Default placeholder
```
backend/app/database/images.py (2)
560-578: Consider extracting repeated result-to-dict mapping logic. The same result-row-to-dict transformation is duplicated across five functions (db_get_images_by_date_range, db_get_images_near_location, db_get_images_by_year_month, db_get_images_with_location, db_get_all_images_for_memories). A helper function would reduce duplication and ensure consistency.
♻️ Example helper extraction
```python
def _row_to_image_dict(row) -> dict:
    """Convert a database row to an image dictionary."""
    from app.utils.images import image_util_parse_metadata

    return {
        "id": row[0],
        "path": row[1],
        "folder_id": str(row[2]) if row[2] else None,
        "thumbnailPath": row[3],
        "metadata": image_util_parse_metadata(row[4]),
        "isTagged": bool(row[5]),
        "isFavourite": bool(row[6]),
        "latitude": row[7] if row[7] else None,
        "longitude": row[8] if row[8] else None,
        "captured_at": row[9] if row[9] else None,
        "tags": row[10].split(",") if row[10] else None,
    }
```
Also applies to: 652-670, 723-741, 790-808, 855-873
561-562: Move import outside the loop for efficiency. The from app.utils.images import image_util_parse_metadata is executed inside the loop on every iteration. Move it to the function's top or module level.
♻️ Suggested fix
```diff
+    from app.utils.images import image_util_parse_metadata
+
     images = []
     for row in results:
-        from app.utils.images import image_util_parse_metadata
-
         images.append(
```
This pattern repeats in db_get_images_near_location (line 654), db_get_images_by_year_month (line 725), db_get_images_with_location (line 792), and db_get_all_images_for_memories (line 857).
backend/app/utils/verify_memories_setup.py (1)
139-153: Unused col_type variable in column validation. The required_columns dict maps column names to types, but col_type is never used in the validation loop. Either remove the type values or add type validation if intended.
♻️ Suggested simplification
```diff
-    required_columns = {
-        "latitude": "FLOAT",
-        "longitude": "FLOAT",
-        "captured_at": "DATETIME",
-    }
+    required_columns = ["latitude", "longitude", "captured_at"]

     all_columns_exist = True
-    for col_name, col_type in required_columns.items():
+    for col_name in required_columns:
         if col_name in columns:
             print_success(f"Column '{col_name}' ({columns[col_name]})")
```
frontend/src/store/slices/memoriesSlice.ts (1)
165-181: fetchAllMemoriesData won't catch individual thunk rejections.
Promise.all on dispatched thunks resolves with the action objects (including rejected ones), not the actual results. Individual thunk rejections won't throw here, so the catch block won't trigger for API errors. Additionally, if any thunk rejects, the state will already have the error recorded via extraReducers, making this wrapper's error handling redundant. Consider whether this orchestration thunk is needed, or simplify it:
♻️ Simplified approach
```diff
 export const fetchAllMemoriesData = createAsyncThunk<
   void,
   void,
-  { rejectValue: string }
->('memories/fetchAllData', async (_, { dispatch, rejectWithValue }) => {
-  try {
-    await Promise.all([
-      dispatch(fetchOnThisDay()),
-      dispatch(fetchRecentMemories(30)),
-      dispatch(fetchYearMemories(365)),
-      dispatch(fetchAllMemories()),
-    ]);
-  } catch (error) {
-    const apiError = error as ApiError;
-    return rejectWithValue(apiError.message);
-  }
+  { dispatch: AppDispatch }
+>('memories/fetchAllData', async (_, { dispatch }) => {
+  // Dispatch all in parallel; individual errors handled by their own extraReducers
+  await Promise.all([
+    dispatch(fetchOnThisDay()),
+    dispatch(fetchRecentMemories(30)),
+    dispatch(fetchYearMemories(365)),
+    dispatch(fetchAllMemories()),
+  ]);
+});
```
backend/app/routes/memories.py (2)
137-140: Move import to module level. The import of db_get_all_images_for_memories is inside the function. It should be at the top of the file with other database imports for consistency and to avoid repeated import overhead.
🔧 Suggested fix
```diff
 from app.database.images import (
     db_get_images_with_location,
     db_get_images_by_date_range,
     db_get_images_by_year_month,
+    db_get_all_images_for_memories,
 )
```
Then remove line 138 from the function.
359-360: Accessing private methods of MemoryClustering breaks encapsulation. The code calls clustering._cluster_by_location() and clustering._filter_valid_images(), which are internal implementation details (prefixed with _). If the clustering class changes its internals, this endpoint will break. Consider adding a public method to MemoryClustering like get_location_clusters(images) that encapsulates this logic, or make these methods part of the public API if they're intended for external use.
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@backend/app/routes/memories.py`:
- Around line 283-289: The current filter using datetime.fromisoformat on
img["captured_at"] inside building day_images can raise for malformed strings or
trailing "Z", causing the outer try/except that wraps the year loop to skip the
entire year; update the filter to parse each image defensively: for each image
from db_get_images_by_year_month(target_year, current_month) attempt to read
img.get("captured_at") and in a per-image try/except parse it (e.g., strip a
trailing "Z" before fromisoformat or fallback to a robust parser) and only
include the image if parsing succeeds and the .day matches current_day, so one
bad captured_at doesn't drop the whole year's images and you can optionally log
or ignore parse errors for that image.
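A sketch of the per-image guard described above; db_get_images_by_year_month, target_year, current_month, and current_day come from the surrounding route, and the rest is illustrative:
```python
from datetime import datetime

day_images = []
for img in db_get_images_by_year_month(target_year, current_month):
    raw = img.get("captured_at")
    if not raw:
        continue
    try:
        captured = datetime.fromisoformat(raw.replace("Z", "+00:00"))
    except ValueError:
        continue  # one malformed timestamp should not drop the whole year
    if captured.day == current_day:
        day_images.append(img)
```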
In `@backend/app/utils/memory_clustering.py`:
- Around line 288-297: The code sets date_obj = datetime.now() when there are no
image dates, causing non-deterministic memory IDs; change that to date_obj =
None so _generate_memory_id will emit a stable mem_nodate_* ID; update the same
logic at the other occurrence (the block around lines 334-336) to also assign
None instead of datetime.now() and ensure any downstream use of date_obj handles
None accordingly.
In `@backend/test_memories_api.py`:
- Around line 38-40: Add a module-level timeout constant (e.g., REQUEST_TIMEOUT)
and use it for all HTTP calls in this test file: pass timeout=REQUEST_TIMEOUT to
the requests.post call that hits "/generate" and to every requests.get call for
"/timeline", "/on-this-day", and "/locations" (the health check already uses
timeout). Update the calls around the response variables (the lines invoking
requests.post and requests.get) so they include the timeout kwarg to avoid tests
hanging on network stalls.
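The timeout change is mechanical: define one constant and pass it to every call (requests accepts a timeout keyword on both get and post; the URLs reuse the BASE_URL discussed earlier and the values here are only examples):
```python
import requests

REQUEST_TIMEOUT = 30  # seconds; tune as needed

resp = requests.post(f"{BASE_URL}/generate", timeout=REQUEST_TIMEOUT)
timeline = requests.get(f"{BASE_URL}/timeline", timeout=REQUEST_TIMEOUT)
on_this_day = requests.get(f"{BASE_URL}/on-this-day", timeout=REQUEST_TIMEOUT)
locations = requests.get(f"{BASE_URL}/locations", timeout=REQUEST_TIMEOUT)
```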
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@backend/app/routes/memories.py`:
- Around line 316-324: The parse_captured_at function inconsistently handles a
trailing "Z" (UTC) and may return datetime.min for ISO strings that end with
"Z"; update parse_captured_at to normalize the string before parsing (e.g., if
captured_at endswith "Z" replace it with "+00:00" or strip the "Z" and/or use a
robust conversion) so datetime.fromisoformat can parse it correctly, preserve
the existing try/except behavior and still return datetime.min on failure; refer
to the parse_captured_at function to locate and modify this behavior.
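Extending the helper sketched earlier, the normalization is a small tweak before the fromisoformat call (datetime.fromisoformat only accepts a bare "Z" suffix from Python 3.11 onward):
```python
# Inside parse_captured_at, before parsing:
if captured_at.endswith("Z"):
    captured_at = captured_at[:-1] + "+00:00"
return datetime.fromisoformat(captured_at)
```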
🧹 Nitpick comments (4)
backend/app/utils/memory_clustering.py (2)
358-393: Consider adding defensive None check for consistency. This method appends the result of _create_memory directly without checking for None, unlike _cluster_location_images which guards against None from _create_simple_memory. While _create_memory doesn't currently return None, adding a guard would make the code more defensive and consistent.
♻️ Optional consistency improvement
```diff
 # Step 3: Create memory objects
 for temporal_cluster in temporal_clusters:
     if len(temporal_cluster) >= self.min_images_per_memory:
         memory = self._create_memory(temporal_cluster)
-        memories.append(memory)
+        if memory:
+            memories.append(memory)
```
520-521: Potential memory_id collision for date-based memories. The ID "mem_date_{date_start.strftime('%Y%m%d')}" could collide if two different date-based memory clusters happen to start on the same day. Consider using _generate_memory_id for consistency, or appending a hash of image IDs.
♻️ Suggested fix for uniqueness
```diff
-    memory_id = f"mem_date_{date_start.strftime('%Y%m%d')}" if date_start else f"mem_date_unknown_{hash(tuple(img['id'] for img in images[:5]))}"
+    # Use hash of image IDs for uniqueness
+    images_hash = abs(hash(tuple(img['id'] for img in images[:5])))
+    memory_id = f"mem_date_{date_start.strftime('%Y%m%d')}_{images_hash}" if date_start else f"mem_date_unknown_{images_hash}"
```
backend/app/routes/memories.py (2)
137-140: Consider moving import to module level. The db_get_all_images_for_memories import inside the function (line 138) is unusual. Moving it to the top-level imports would improve code clarity and follow Python conventions.
♻️ Move import to module level
```diff
 from app.database.images import (
     db_get_images_with_location,
     db_get_images_by_date_range,
     db_get_images_by_year_month,
+    db_get_all_images_for_memories,
 )
```
Then remove line 138:
```diff
-    from app.database.images import db_get_all_images_for_memories
```
376-384: Reliance on internal methods breaks encapsulation. The endpoint accesses internal methods (_cluster_by_location, _filter_valid_images, _reverse_geocode) directly. While this works, it couples the route tightly to the clustering implementation. Consider adding a public method like get_location_clusters() to MemoryClustering instead. Also, date_tolerance_days=999999 is a magic number. A named constant or a dedicated "location-only" mode would be clearer.
♻️ Suggested approach
Add a public method to MemoryClustering:
```python
def get_location_clusters(self, images: List[Dict[str, Any]]) -> List[List[Dict[str, Any]]]:
    """Get location clusters without temporal sub-clustering."""
    valid_images = self._filter_valid_images(images)
    return self._cluster_by_location(valid_images) if valid_images else []
```
Then in the route:
```python
clustering = MemoryClustering(location_radius_km=location_radius_km)
location_clusters = clustering.get_location_clusters(images)
```
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@backend/app/routes/memories.py`:
- Around line 246-248: The except block currently logs the exception message and
returns the raw exception text in the HTTPException; change it to log the full
stack trace (use logger.exception(...) or logger.error(..., exc_info=True)) and
raise HTTPException with a generic client-facing message (e.g. "Internal server
error" or "Failed to get timeline") without including str(e). Apply the same
pattern across all endpoints in this module so no handler returns raw exception
details to clients while internal logs retain the full stack for debugging; look
for the except blocks referencing logger and HTTPException (e.g., the except in
get_timeline) to update.
- Around line 121-178: The endpoints (generate_memories, get_timeline,
get_on_this_day, get_locations) are declared async but call synchronous DB
functions and CPU-bound clustering, which blocks the event loop; change each
endpoint from "async def" to a synchronous "def" so FastAPI runs them in a
threadpool, or alternatively wrap the blocking parts
(db_get_all_images_for_memories, MemoryClustering.cluster_memories, etc.) with
await run_in_threadpool(...) if you prefer keeping async signatures; update any
internal await usage accordingly and ensure imports include run_in_threadpool if
you choose wrapping.
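Both points above can be illustrated with one endpoint shape: a plain def handler (FastAPI runs synchronous path operations in its thread pool) that logs the full traceback but returns only a generic message. The function names are the ones mentioned in the review; their exact signatures are assumed here.
```python
from fastapi import APIRouter, HTTPException

router = APIRouter(prefix="/api/memories")


@router.get("/timeline")
def get_timeline():  # sync on purpose: FastAPI offloads it to a worker thread
    try:
        images = db_get_all_images_for_memories()  # blocking DB call
        memories = MemoryClustering().cluster_memories(images)  # CPU-bound
        return {"memories": memories}
    except Exception:
        logger.exception("Failed to build timeline")  # full stack trace stays server-side
        raise HTTPException(status_code=500, detail="Internal server error")
```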
🧹 Nitpick comments (1)
backend/app/routes/memories.py (1)
386-412: Avoid unnecessary coupling to MemoryClustering private helpers for location-only filtering. Using _filter_valid_images is unnecessary here since this endpoint needs only GPS coordinates. The _filter_valid_images method performs date parsing that's not needed for location clustering. Use a simpler, location-only filter for clarity and efficiency.
♻️ Localized filter for location clustering
```diff
-    location_clusters = clustering._cluster_by_location(clustering._filter_valid_images(images))
+    valid_images = [
+        img for img in images
+        if img.get("latitude") is not None and img.get("longitude") is not None
+    ]
+    location_clusters = clustering._cluster_by_location(valid_images)
```
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@backend/app/routes/memories.py`:
- Around line 379-387: The endpoint is calling private methods on
MemoryClustering (_cluster_by_location, _filter_valid_images, and potentially
_reverse_geocode) and uses a misleading date_tolerance_days=999999; add a public
API on MemoryClustering, e.g., cluster_by_location_only(self, images:
List[dict]) that calls self._filter_valid_images(images) and then
self._cluster_by_location(valid_images), then update the route to call
MemoryClustering.cluster_by_location_only(images) (and remove the unnecessary
date_tolerance_days override); this preserves encapsulation and removes the
magic-number usage.
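The public wrapper that prompt asks for can be this small (the private helpers are the ones named above; their exact signatures are assumed):
```python
class MemoryClustering:
    # ... existing clustering implementation ...

    def cluster_by_location_only(self, images: list[dict]) -> list[list[dict]]:
        """Location-only clustering without the temporal pass."""
        valid_images = self._filter_valid_images(images)
        return self._cluster_by_location(valid_images) if valid_images else []
```
The route would then call clustering.cluster_by_location_only(images), and the date_tolerance_days=999999 override disappears.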
🧹 Nitpick comments (1)
backend/app/routes/memories.py (1)
137-140: Move import to module level for consistency. The import of db_get_all_images_for_memories is done inside the function while other database functions are imported at the module level (lines 23-27). This inconsistency can delay import-time errors to runtime.
♻️ Suggested fix
At lines 23-27, add the missing import:
```diff
 from app.database.images import (
     db_get_images_with_location,
     db_get_images_by_date_range,
     db_get_images_by_year_month,
+    db_get_all_images_for_memories,
 )
```
Then remove the inline import at line 138:
```diff
-    # Fetch ALL images
-    from app.database.images import db_get_all_images_for_memories
-
-    images = db_get_all_images_for_memories()
+    # Fetch ALL images
+    images = db_get_all_images_for_memories()
```
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@backend/app/utils/memory_clustering.py`:
- Around line 139-143: Docstring and implementation disagree about the monthly
grouping threshold: the docstring says "≥5 photos per month" but the code uses
min_images_per_memory (default 2); either change the docstring in the FLEXIBLE
strategy description to reference min_images_per_memory (e.g., "Group by month
(if ≥ min_images_per_memory photos per month; default 2)") or change the default
value of min_images_per_memory to 5 where it’s defined/initialized; locate
references to min_images_per_memory in this module (and any function/class that
accepts it) and update either the docstring text or the default parameter
accordingly so behavior and documentation match.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@backend/app/utils/memory_clustering.py`:
- Around line 304-306: The sort key currently uses x.get("captured_at", "")
which can return None and raise a TypeError; update the key used when computing
sorted_images so None is coerced to a comparable value (e.g. use the captured_at
value or an empty string/timestamp fallback) — change the lambda in the
sorted(cleaned_images, key=...) expression to coerce None (for example via
(x.get("captured_at") or "") or converting to str/epoch) so GPS-only images
without dates are included without crashing.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@backend/app/utils/memory_clustering.py`:
- Around line 849-867: The _generate_memory_id function currently uses Python's
built-in hash() which is non-deterministic across restarts; replace it with a
deterministic hash (e.g., hashlib.sha256) by serializing the rounded latitude
and longitude (and date_str when present) into a stable string (like
"lat:XX.XX|lon:YY.YY|date:YYYYMMDD"), compute a hex digest (optionally truncate
for length), and build the ID as f"mem_{date_str}_{digest}" or
f"mem_nodate_{digest}" inside _generate_memory_id to ensure stable IDs across
restarts.
- Around line 540-541: The fallback memory ID uses Python's non-deterministic
built-in hash (memory_id = ... hash(tuple(img['id'] for img in images[:5]))), so
replace that with a deterministic hash (e.g., compute a stable SHA256/MD5 hex
digest over the concatenated/serialized first N image ids) to generate the
fallback ID; update the memory_id assignment logic in the same block that sets
memory_id (when date_start is falsy) to use the deterministic digest of the
first 5 image ids and include a readable prefix like "mem_date_unknown_" to
preserve current formatting.
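Both prompts boil down to replacing hash() with a digest from hashlib, which is in the standard library and deterministic across processes. The variable names below (date_str, latitude, longitude, images) follow the surrounding code as quoted in the review; everything else is illustrative.
```python
import hashlib


def stable_digest(*parts, length: int = 10) -> str:
    """Short hex digest that, unlike built-in hash(), is stable across runs."""
    raw = "|".join(str(p) for p in parts).encode("utf-8")
    return hashlib.sha256(raw).hexdigest()[:length]


# Location-based ID (lat/lon rounded as in the existing code)
location_hash = stable_digest(round(latitude, 2), round(longitude, 2))
memory_id = f"mem_{date_str}_{location_hash}" if date_str else f"mem_nodate_{location_hash}"

# Fallback ID for date-based memories with no start date
memory_id = f"mem_date_unknown_{stable_digest(*(img['id'] for img in images[:5]))}"
```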
🧹 Nitpick comments (2)
backend/app/utils/memory_clustering.py (2)
80-90: Consider moving math imports to module level. The math functions are imported inside find_nearest_city, which incurs import overhead on each call. For a frequently-called utility, moving these to module level would be slightly more efficient.
Suggested refactor
```diff
 from datetime import datetime
 from typing import List, Dict, Any, Optional
 from collections import defaultdict
+from math import radians, cos, sin, asin, sqrt

 import numpy as np
```
Then remove the import from inside find_nearest_city:
```diff
 def find_nearest_city(latitude: float, longitude: float, max_distance_km: float = 50.0) -> Optional[str]:
     ...
-    from math import radians, cos, sin, asin, sqrt
-
     def haversine_distance(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
```
408-411: Consider guarding against memory creation failures for consistency. Unlike _create_simple_memory, which returns None on failure, _create_memory can raise unhandled exceptions. For defensive consistency, consider adding a guard or wrapping in try/except.
Suggested pattern
```diff
 for temporal_cluster in temporal_clusters:
     if len(temporal_cluster) >= self.min_images_per_memory:
-        memory = self._create_memory(temporal_cluster)
-        memories.append(memory)
+        try:
+            memory = self._create_memory(temporal_cluster)
+            if memory:
+                memories.append(memory)
+        except Exception as e:
+            logger.warning(f"Failed to create memory: {e}")
```
Alternatively, add exception handling inside _create_memory to match _create_simple_memory's pattern.
|
@rahulharpal1603 I have removed the minor error that was present in the PR and checked the functionality of the app; it works as it should. I also made the changes for the CodeRabbit suggestions to remove potential breakpoints and improve the documentation, including using hashlib for better hashing, and I ran ruff format and lint in both the frontend and backend for linting and formatting of the code. There were no merge conflicts and it merged cleanly. Please review it and let me know if any optimisation or changes have to be made. |
|
Yesterday, we merged one change to our repo. As a result, some conflicts have resurfaced. Please resolve these. |
|
Also, the checks are not passing. Please resolve that too, and do not make any changes to the tsconfig file. |
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@backend/app/routes/memories.py`:
- Around line 18-19: The Memory response model is missing the returned "type"
field (MemoryClustering emits "location" or "date"), so add a typed field to the
Pydantic/FastAPI response model named type (e.g., type: Literal["location",
"date"] or type: str) in the Memory class definition and update any related
constructors/serializers to include it; after changing the Memory model,
regenerate the OpenAPI schema so the new field is exposed to clients. Ensure you
reference Memory and MemoryClustering when updating/validating the data flow.
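The field itself is a one-line addition to the response model; Literal keeps the schema tight under the assumption, stated in the comment, that the clustering layer only ever emits these two values:
```python
from typing import Literal

from pydantic import BaseModel


class Memory(BaseModel):
    # ... existing fields ...
    type: Literal["location", "date"]  # value produced by MemoryClustering
```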
|
@rahulharpal1603 I have resolved the merge conflicts and removed the line causing the error in tsconfig.json. As for linting and formatting, I have used Black and Ruff. |

Feature: Complete Memories Implementation with Smart Clustering & UI Enhancements
Solves issue #723
Overview
This PR implements a complete, production-ready Memories feature for PictoPy, similar to Google Photos' Memories. It automatically groups photos by location and date, creating meaningful albums that help users rediscover their photos.
Key Features
1. Smart Photo Clustering
2. "On This Day" Feature
3. Smart Titles & Formatting
4. Interactive UI
5. Full-Featured Image Viewer
Summary by CodeRabbit
New Features
Frontend
Docs & Tests