Dupe search
Thanks for taking the time to contribute to this project. Upload Assistant is currently undergoing a complete rewrite, and no new development is being conducted on this Python source at this time. If you have come this far, please feel free to leave open any pull requests that add new sites to the source, as these can serve as a baseline for later conversion. If your pull request relates to a critical bug, it will be addressed in this code base and a new release published as needed. If your pull request only addresses a quite minor bug, it is not likely to be addressed in this code base. Details for the new code base will follow at a later date.
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. This behavior can be configured in the CodeRabbit review settings.
📝 Walkthrough: Adds a new C411 tracker integration (search/upload/download) and a French metadata utilities module, and registers C411 in the tracker setup and example config.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client
    participant C411 as C411 Tracker
    participant FREN as FRENCH Module
    participant TMDB
    participant FS as File System
    participant API as Tracker API
    Client->>C411: upload(meta, disctype)
    C411->>TMDB: fetch TMDB data / search by TMDB ID or title
    TMDB-->>C411: TMDB response
    alt existing torrent found
        C411-->>Client: return existing torrent info
    else no existing torrent
        C411->>FREN: build DESCRIPTION and metadata strings
        FREN-->>C411: DESCRIPTION.json content
        C411->>FS: write DESCRIPTION.json and payload
        C411->>API: POST upload (with retries)
        loop retry on transient errors
            API-->>C411: error / timeout
            C411->>API: retry upload
        end
        API-->>C411: success + infoHash
        C411->>API: GET .torrent
        API-->>FS: save .torrent
        C411-->>Client: upload complete (infoHash)
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 9
🧹 Nitpick comments (3)
src/trackers/FRENCH.py (1)
7-7: Hard-coded config import bypasses the dependency-injection pattern used by other tracker modules.

All other tracker classes receive `config` via their constructor. Importing it at module level here creates a tight coupling and makes testing with different configs impossible. This also means any module that imports `FRENCH` will transitively load the user's config at import time.

♻️ Consider passing config or the API key to functions that need it
For example, `get_tmdb_translations` and `get_desc_full` could accept a `tmdb_api_key` parameter instead of importing config globally. C411.py already has `self.config` available and could pass it through.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/trackers/FRENCH.py` at line 7, Remove the module-level import of config in FRENCH and thread configuration (or at least the tmdb API key) through the tracker API: update the FRENCH class constructor to accept a config parameter (or tmdb_api_key) and store it on self, then change functions that currently read the global config — specifically get_tmdb_translations and get_desc_full — to take tmdb_api_key (or use self.config) instead of importing config; update callers (e.g., C411.py) to pass self.config or the API key when invoking these methods so no global config import remains.

src/trackers/C411.py (2)
324-383: Duplicate search logic is copy-pasted; the second call to `get_translation_fr` on Line 357 wastes a variable.

The TMDB-based search block (Lines 335-354) and the title-based fallback search (Lines 363-382) share nearly identical XML-parsing code, and Line 329 already calls `get_translation_fr`, making the second call on Line 357 redundant (it also produces the unused `descr` variable flagged by Ruff).

Consider extracting the XML parse + search into a small helper to avoid duplication.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/trackers/C411.py` around lines 324 - 383, search_existing duplicates the HTTP response parsing and calls fr.get_translation_fr twice; remove the redundant second call and extract the XML parsing + item extraction into a helper used by both search attempts. Specifically, in search_existing keep the initial title, descr = await fr.get_translation_fr(meta) call, then replace the repeated block that decodes response.text, builds etree.fromstring(...), iterates channel children and appends result[0].text to dupes with a single helper function (e.g., _parse_search_response(response_text) or parse_search_response(root_bytes) referenced from search_existing) and call that helper after each successful client.get; ensure the helper returns the list of dupes (or appends to the existing dupes) and remove the second fr.get_translation_fr invocation and its unused descr variable.
404-405: Copy-paste artifact: `acm_name` should be `c411_name` or just `name_data`.

This variable name was likely carried over from the ACM tracker template.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/trackers/C411.py` around lines 404 - 405, The variable acm_name is a copy-paste artifact from the ACM tracker; rename it to a tracker-appropriate identifier (e.g., c411_name or name_data) wherever it's assigned and referenced: change the assignment acm_name = await self.get_name(meta) to c411_name = await self.get_name(meta) (or name_data = ...), and update the subsequent use acm_name["name"] -> c411_name["name"] (or name_data["name"]) that feeds dot_name to avoid misleading identifiers; verify no other occurrences of acm_name remain in the C411 tracker code.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/trackers/C411.py`:
- Line 392: There is a typo when reading the original title from the TMDB
metadata: in the assignment to tmdb_info['originalTitle'] the code looks up
meta.get("origial_title","") which always misses; change the lookup to
meta.get("original_title", "") so tmdb_info['originalTitle'] is populated
correctly (locate the assignment to tmdb_info['originalTitle'] in C411.py and
update the key string).
- Around line 397-411: meta["debug"] can be True and cause FileNotFoundError
because create_torrent_for_upload is skipped but torrent_file_path and
mediainfo_file_path are still read; move the async file reads (the aiofiles.open
reads of torrent_file_path and mediainfo_file_path) inside the non-debug branch
right after awaiting create_torrent_for_upload, or alternatively ensure
create_torrent_for_upload is always called before reading, so that the reads
only execute when the files exist; update the block around meta["debug"],
create_torrent_for_upload, torrent_file_path and mediainfo_file_path to reflect
this change.
- Around line 335-354: The XML parsing assumes children exist and can
IndexError; update the block that uses root[0], channel, result[0], and dupe to
be defensive: verify root has children before accessing root[0] (or locate the
<channel> element by tag), iterate channel children safely, for each <item>
check that it actually has child elements (or use element.find('title') /
element.findtext('title')) before reading text, skip items without titles, and
avoid bare indexing into result[0]; also ensure any parsing failures fall back
gracefully (skip the malformed entry) rather than crashing.
- Around line 137-145: The snippet in C411.py reads meta.get('uuid') into uuid
and sets obj2 to undocumented numeric category codes (415, 413, 414) based on
"4klight"/"hdlight" substrings and resolution, and a commented-out type
check—fix by either removing this block or replacing it with documented C411
category constants and restoring the intended type guard: if you keep the
behavior, define named constants (e.g., CAT_4K_LIGHT, CAT_HD_1080P,
CAT_HD_OTHER) and use them instead of raw numbers, re-enable the commented
condition (type == "ENCODE") if required, and add a comment explaining the
mapping from uuid substrings to those constants; ensure you reference the same
symbols (uuid, obj2, meta['resolution'], and type) so the change integrates with
surrounding logic.
In `@src/trackers/FRENCH.py`:
- Around line 487-492: The code computes mbps as int(video_track[0]['BitRate'])
/ 1_000_000 (variable mbps derived from video_track and BitRate) which yields
megabits/sec, but elsewhere the value is labeled "MB/s"; update the label to
"Mb/s" or "Mbps" to reflect megabits, or if you really need megabytes/sec divide
by 8_000_000 instead (keep the variable name mbps consistent or rename to
mbits_per_s if you change units), and update any display/formatting code that
emits the "MB/s" string accordingly.
- Around line 341-351: The loop checks item['Language'] directly which can raise
KeyError for tracks missing Language; update the loop that builds audio_parts to
use item.get('Language', '') (or guard with a local lang =
item.get('Language','')) and only process when lang in ("fr","fr-fr","fr-ca"),
keeping the existing async calls to map_audio_codec(item) and
get_audio_channels(item) and the return of the joined audio string; adjust
references to audio_track, has_french_audio, audio_parts, map_audio_codec and
get_audio_channels accordingly so no direct dict access to 'Language' occurs.
- Around line 498-500: The current early return after calling
get_subtitle_tracks(meta) (checking subtitle_tracks and returning '' ) aborts
description generation for releases without embedded subtitles; instead, remove
the early return and treat an empty subtitle_tracks as "no subtitles" so the
description generator continues — e.g., after subtitle_tracks = await
get_subtitle_tracks(meta) set a subtitles_section = '' when not subtitle_tracks
and otherwise build the subtitles_section from subtitle_tracks, then let the
rest of the description-building code use subtitles_section (ensuring any code
that assumed subtitle_tracks exists handles the empty case).
- Line 618: Replace direct dict indexing on track objects with safe .get()
defaults to avoid KeyError: update the audio track line construction (the
assignment to variable line that currently uses obj['Language'], obj['Format'],
obj['Channels']) to use obj.get('Language', 'Unknown'), obj.get('Format',
'Unknown') and obj.get('Channels', 0) (or another sane default) while keeping
kbps formatting; do the same for the subtitle track construction around the
separate assignment at line 633 (use obj.get('Language', 'Unknown') and
obj.get('Format', 'Unknown')). Ensure you only change the accessor style
(obj[...] -> obj.get(...)) and preserve existing variable names and formatting.
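The `original_title` typo flagged above (Line 392) is easy to miss precisely because `dict.get` with a default never raises on an unknown key. A minimal sketch with hypothetical data:

```python
# Hypothetical data: dict.get() with a default silently masks a misspelled key,
# so a typo like "origial_title" never raises and quietly yields the default.
meta = {"original_title": "Le Fabuleux Destin"}

broken = meta.get("origial_title", "")   # typo: always returns ""
fixed = meta.get("original_title", "")   # correct key

print(repr(broken), repr(fixed))
```

This is why such lookups are worth covering with an assertion or a test rather than relying on the default.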
```python
uuid = meta.get('uuid', "").lower()

if "4klight" in uuid:  # and type == "ENCODE"
    obj2 = 415
elif "hdlight" in uuid:  # and type == "ENCODE"
    if meta['resolution'] == '1080p':
        obj2 = 413
    else:
        obj2 = 414
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Check how meta['uuid'] is assigned in the codebase
echo "=== Where meta['uuid'] is assigned ==="
rg -n "meta\['uuid'\]\s*=" --type=py | head -20
echo ""
echo "=== Check in prep.py for UUID setup ==="
rg -n "uuid" src/prep.py | head -20
echo ""
echo "=== Look for meta.get('uuid') usage patterns ==="
rg -n "meta\.get\('uuid'" --type=py
echo ""
echo "=== Check if any tracker checks uuid for content strings ==="
rg -n "4klight\|hdlight" --type=py | head -20
```

Repository: Audionut/Upload-Assistant
Length of output: 9169

🏁 Script executed:

```shell
#!/bin/bash
# Look for "4klight" and "hdlight" in the entire codebase
echo "=== Search for 4klight ==="
rg -n "4klight" --type=py
echo ""
echo "=== Search for hdlight ==="
rg -n "hdlight" --type=py
echo ""
echo "=== Look at C411.py context around lines 137-145 ==="
sed -n '120,160p' src/trackers/C411.py
```

Repository: Audionut/Upload-Assistant
Length of output: 1596

🏁 Script executed:

```shell
#!/bin/bash
# Look for C411 tracker class definition and documentation
echo "=== C411 class and imports ==="
head -50 src/trackers/C411.py
echo ""
echo "=== Search for 415, 413, 414 constants/documentation ==="
rg -n "415|413|414" src/trackers/C411.py | head -20
echo ""
echo "=== Look for category/type documentation ==="
rg -n "4k|HDLight|light.*encode" --type=py -i | head -20
```

Repository: Audionut/Upload-Assistant
Length of output: 3999

🏁 Script executed:

```shell
#!/bin/bash
# Search for information about "4klight" and "hdlight" as scene group names or encoding types
echo "=== Check if these appear in any other projects or documentation ==="
rg -i "4klight|hdlight" --type=py --type=txt --type=md
echo ""
echo "=== Look for what type values can be in C411 tracker context ==="
sed -n '100,300p' src/trackers/C411.py | grep -A 5 -B 5 "type"
echo ""
echo "=== Check if category/resolution based logic exists before uuid check ==="
sed -n '90,145p' src/trackers/C411.py
```

Repository: Audionut/Upload-Assistant
Length of output: 8926

🏁 Script executed:

```shell
#!/bin/bash
# Check web for "4klight" and "hdlight" as scene group or encoding names
echo "=== Look for FRENCH tracker to understand C411 context ==="
head -50 src/trackers/FRENCH.py | head -30
echo ""
echo "=== Check what meta['type'] can be ==="
rg -n "meta\['type'\]|meta\.get\('type'" src/trackers/C411.py | head -10
echo ""
echo "=== Verify if meta['uuid'] is reliably set before this check ==="
sed -n '100,140p' src/trackers/C411.py | grep -n "uuid\|type"
```

Repository: Audionut/Upload-Assistant
Length of output: 1721

🏁 Script executed:

```shell
#!/bin/bash
# Search for any documentation or comments about these specific category codes
echo "=== Search for comments about obj2 codes in C411 ==="
sed -n '50,160p' src/trackers/C411.py | grep -E "obj2|#.*4|#.*41"
echo ""
echo "=== Check if there's any wiki or documentation reference ==="
rg -n "c411.*wiki|c411.*doc|option.*tag" --type=py -i | head -10
echo ""
echo "=== Look at the full get_option_tag function ==="
sed -n '50,160p' src/trackers/C411.py
```

Repository: Audionut/Upload-Assistant
Length of output: 5508
The uuid string check for "4klight"/"hdlight" uses undefined category codes.
While meta['uuid'] can legitimately contain content-type strings (like "subsplease", "repack" in other parts of the codebase), the category codes 415, 413, and 414 are undocumented and don't appear in the standard C411 category range (10-31). Additionally, the commented-out type == "ENCODE" conditions suggest this is incomplete code. Either these category codes should be documented, or this check should be removed/completed if it's no longer needed.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/trackers/C411.py` around lines 137 - 145, The snippet in C411.py reads
meta.get('uuid') into uuid and sets obj2 to undocumented numeric category codes
(415, 413, 414) based on "4klight"/"hdlight" substrings and resolution, and a
commented-out type check—fix by either removing this block or replacing it with
documented C411 category constants and restoring the intended type guard: if you
keep the behavior, define named constants (e.g., CAT_4K_LIGHT, CAT_HD_1080P,
CAT_HD_OTHER) and use them instead of raw numbers, re-enable the commented
condition (type == "ENCODE") if required, and add a comment explaining the
mapping from uuid substrings to those constants; ensure you reference the same
symbols (uuid, obj2, meta['resolution'], and type) so the change integrates with
surrounding logic.
```python
try:
    async with httpx.AsyncClient(timeout=5.0) as client:
        response = await client.get(url=self.search_url, params=params)
        if response.status_code == 200:
            response_text = response.text.encode('utf-8')
            root = etree.fromstring(response_text)
            channel = root[0]
            for result in channel:
                if result.tag == 'item':
                    dupe = result[0]
                    dupes.append(dupe.text)
        else:
            console.print(f"[bold red]Failed to search torrents. HTTP Status: {response.status_code}")
except httpx.TimeoutException:
    console.print("[bold red]Request timed out after 5 seconds")
except httpx.RequestError as e:
    console.print(f"[bold red]Unable to search for existing torrents: {e}")
except Exception as e:
    console.print(f"[bold red]Unexpected error: {e}")
    await asyncio.sleep(5)
```
XML parsing assumes structure without defensive checks — IndexError risk.
channel = root[0] on Line 341 will raise IndexError if the XML root element has no children (e.g., empty or malformed API response). Similarly, dupe = result[0] on Line 344 assumes the first child of <item> is the title element.
🛡️ Add guards

```diff
     root = etree.fromstring(response_text)
-    channel = root[0]
+    if len(root) == 0:
+        console.print("[bold red]Empty XML response from C411 search")
+        return dupes
+    channel = root[0]
     for result in channel:
         if result.tag == 'item':
-            dupe = result[0]
-            dupes.append(dupe.text)
+            if len(result) > 0 and result[0].text:
+                dupes.append(result[0].text)
```

🧰 Tools
🪛 Ruff (0.15.0)
[warning] 352-352: Do not catch blind exception: Exception (BLE001)
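The defensive parsing suggested above, combined with the helper extraction from the earlier nitpick, could be sketched like this. The example uses the stdlib `ElementTree` instead of lxml so it is self-contained; the helper name and the sample payload are illustrative:

```python
# Sketch of a shared search-response parser: tolerates a missing <channel>
# element and <item>s without a <title> instead of indexing blindly.
import xml.etree.ElementTree as ET


def parse_search_response(response_text: str) -> list[str]:
    """Return the <title> of every <item> in an RSS-style search response."""
    try:
        root = ET.fromstring(response_text)
    except ET.ParseError:
        return []  # malformed response: report no dupes instead of crashing
    channel = root.find("channel")
    if channel is None:
        return []
    titles = []
    for item in channel.findall("item"):
        title = item.findtext("title")
        if title:  # skip items without a usable title
            titles.append(title)
    return titles


sample = """<rss><channel>
  <item><title>Movie.2024.1080p</title></item>
  <item><enclosure url="x"/></item>
</channel></rss>"""
print(parse_search_response(sample))        # ['Movie.2024.1080p']
print(parse_search_response("<rss></rss>"))  # []
```

Both the TMDB-based and title-based search paths could then call this one helper, so any future hardening lands in both.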
```python
if not meta["debug"]:
    await self.common.create_torrent_for_upload(meta, self.tracker, 'C411')
torrent_file_path = f"{meta['base_dir']}/tmp/{meta['uuid']}/[{self.tracker}].torrent"
mediainfo_file_path = f"{meta['base_dir']}/tmp/{meta['uuid']}/MEDIAINFO.txt"

headers = {
    "Authorization": f"Bearer {self.config['TRACKERS'][self.tracker]['api_key'].strip()}"}
acm_name = await self.get_name(meta)
dot_name = acm_name["name"].replace(" ", ".")
response = None

async with aiofiles.open(torrent_file_path, 'rb') as f:
    torrent_bytes = await f.read()
async with aiofiles.open(mediainfo_file_path, 'rb') as f:
    mediainfo_bytes = await f.read()
```
Debug mode will crash with FileNotFoundError when reading torrent/mediainfo files.
When meta["debug"] is True, Line 397-398 skips torrent creation. However, Lines 408-411 unconditionally read both the torrent file and mediainfo file, which won't exist yet. This will raise FileNotFoundError before reaching the debug branch at Line 432.
Move the file reads inside the non-debug upload block, or create the torrent before reading.
🐛 Suggested restructure

```diff
-        if not meta["debug"]:
-            await self.common.create_torrent_for_upload(meta, self.tracker, 'C411')
         torrent_file_path = f"{meta['base_dir']}/tmp/{meta['uuid']}/[{self.tracker}].torrent"
         mediainfo_file_path = f"{meta['base_dir']}/tmp/{meta['uuid']}/MEDIAINFO.txt"
         headers = {
             "Authorization": f"Bearer {self.config['TRACKERS'][self.tracker]['api_key'].strip()}"}
         acm_name = await self.get_name(meta)
         dot_name = acm_name["name"].replace(" ", ".")
         response = None
-        async with aiofiles.open(torrent_file_path, 'rb') as f:
-            torrent_bytes = await f.read()
-        async with aiofiles.open(mediainfo_file_path, 'rb') as f:
-            mediainfo_bytes = await f.read()
-
         data: dict[str, Any] = {
             ...
         }
         if meta["debug"] is False:
+            await self.common.create_torrent_for_upload(meta, self.tracker, 'C411')
+            async with aiofiles.open(torrent_file_path, 'rb') as f:
+                torrent_bytes = await f.read()
+            async with aiofiles.open(mediainfo_file_path, 'rb') as f:
+                mediainfo_bytes = await f.read()
+            files = {"torrent": torrent_bytes, "nfo": mediainfo_bytes}
         ...
```

🧰 Tools
🪛 Ruff (0.15.0)
[error] 399-399: Probable insecure usage of temporary file or directory: "/tmp/" (S108)
[error] 400-400: Probable insecure usage of temporary file or directory: "/tmp/" (S108)
```python
# DUAL - Exactly 2 audios
if num_audio_tracks == 2 and has_french_audio:
    language = "MULTi"

# MULTI - 3+ audios
if num_audio_tracks >= 3 and has_french_audio:
    language = "MULTi"

# VOSTFR - Single audio (original) + French subs + NO French audio
if num_audio_tracks == 1 and original_lang and not has_french_audio and has_French_subs and audio_langs[0] == original_lang:
    language = "VOSTFR"

# VO - Single audio (original) + NO French subs + NO French audio
if num_audio_tracks == 1 and original_lang and not has_french_audio and not has_French_subs and audio_langs[0] == original_lang:
    language = "VO"

# FRENCH. - Single audio FRENCH
if num_audio_tracks == 1 and has_french_audio and audio_langs[0] == original_lang:
    language = "FRENCH"

return language
```
Missing language classification: a single French dub of non-French content returns an empty string.

When there is exactly 1 audio track that is French, but the original language is not French (e.g., a Japanese anime dubbed to French), none of the conditions on Lines 34–51 match:

- Lines 34/38: `num_audio_tracks` is 1, not ≥ 2
- Lines 42/46: `has_french_audio` is `True`, so `not has_french_audio` fails
- Line 50: `audio_langs[0] == original_lang` → `'FRA' != 'JAP'` fails

Result: `language` stays `""`, so the upload gets no language tag. This case (a French dub of foreign content) should likely return `"FRENCH"`.
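A condensed classifier covering the missing case could look like the sketch below. The function and parameter names are illustrative, the language codes are assumed to be normalized lowercase tags, and the DUAL/MULTI branches are folded together since both yield `"MULTi"`:

```python
# Hypothetical rework of the classification above: a single French track is
# tagged FRENCH whether the release is natively French or a French dub.
def classify_language(audio_langs: list[str], original_lang: str,
                      has_french_subs: bool) -> str:
    has_french_audio = any(lang in ("fr", "fr-fr", "fr-ca") for lang in audio_langs)
    n = len(audio_langs)
    if n >= 2 and has_french_audio:
        return "MULTi"          # DUAL (2 tracks) and MULTI (3+) collapse here
    if n == 1 and has_french_audio:
        return "FRENCH"         # covers native French AND the missing dub case
    if n == 1 and original_lang and audio_langs[0] == original_lang:
        return "VOSTFR" if has_french_subs else "VO"
    return ""
```

Ordering the single-French check before the original-language comparison is what closes the gap described above.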
```python
has_french_audio = any(item.get('Language', '') in (
    'fr', 'fr-fr', 'fr-ca') for item in audio_track)
audio_parts: list[str] = []
if has_french_audio:
    for _, item in enumerate(audio_track):
        if item['Language'] == "fr" or item['Language'] == "fr-fr" or item['Language'] == "fr-ca":
            codec = await map_audio_codec(item)
            channels = await get_audio_channels(item)
            audio_parts.append(f"{codec} {channels}")
audio = ' '.join(audio_parts)
return audio
```
item['Language'] on Line 346 can raise KeyError for tracks without a Language field.
Line 341 correctly uses .get('Language', ''), but Line 346 switches to direct dict access. When has_french_audio is True, the loop iterates all tracks — including non-French tracks that may lack a Language key entirely.
🐛 Fix

```diff
-            if item['Language'] == "fr" or item['Language'] == "fr-fr" or item['Language'] == "fr-ca":
+            if item.get('Language', '') in ("fr", "fr-fr", "fr-ca"):
```
```python
mbps = 0.0
if video_track and video_track[0].get('BitRate'):
    try:
        mbps = int(video_track[0]['BitRate']) / 1_000_000
    except (ValueError, TypeError):
        pass
```
Unit label mismatch: dividing by 1,000,000 gives Mb/s (megabits), but the label says "MB/s" (megabytes).
Line 490 computes int(BitRate) / 1_000_000 which converts bits/s → megabits/s. The description on Line 600 labels it MB/s which conventionally means megabytes per second. Either change the label to Mb/s (or Mbps), or divide by 8,000,000 if megabytes is intended.
Fix label

```diff
-    desc_parts.append(
-        f"[b][color=#3d85c6]Débit vidéo :[/color][/b] [i]{mbps:.2f} MB/s[/i]")
+    desc_parts.append(
+        f"[b][color=#3d85c6]Débit vidéo :[/color][/b] [i]{mbps:.2f} Mb/s[/i]")
```

Also applies to: 600-600
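The arithmetic behind the unit mismatch, as a quick sanity check (the example bitrate is hypothetical; MediaInfo's `BitRate` is in bits per second):

```python
# Dividing a bits-per-second rate by 1_000_000 yields megabits/s (Mb/s);
# megabytes/s (MB/s) needs a further factor of 8.
bitrate_bps = 8_000_000                  # example: 8,000,000 bits per second

mbit_per_s = bitrate_bps / 1_000_000     # 8.0 Mb/s
mbyte_per_s = bitrate_bps / 8_000_000    # 1.0 MB/s

print(f"{mbit_per_s:.2f} Mb/s == {mbyte_per_s:.2f} MB/s")
```

So with the current divisor, "MB/s" in the description overstates the rate by a factor of 8.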
```python
if " ad" in str(obj.get('Title')).lower():
    flags.append("Audio Description")

line = f"{obj['Language']} / {obj['Format']} / {obj['Channels']}ch / {kbps:.2f}KB/s"
```
Direct key access on audio/subtitle track dicts (obj['Language'], obj['Format'], obj['Channels']) will raise KeyError if any field is missing.
MediaInfo tracks don't always include all fields. For example, some audio tracks may lack a Language field. The same issue applies to Line 633 for subtitle tracks.
🐛 Use .get() with defaults

```diff
-            line = f"{obj['Language']} / {obj['Format']} / {obj['Channels']}ch / {kbps:.2f}KB/s"
+            line = f"{obj.get('Language', 'Unknown')} / {obj.get('Format', 'Unknown')} / {obj.get('Channels', '?')}ch / {kbps:.2f}KB/s"
```

And on Line 633:

```diff
-            line = f"{obj['Language']} / {obj['Format']}"
+            line = f"{obj.get('Language', 'Unknown')} / {obj.get('Format', 'Unknown')}"
```

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```suggestion
line = f"{obj.get('Language', 'Unknown')} / {obj.get('Format', 'Unknown')} / {obj.get('Channels', '?')}ch / {kbps:.2f}KB/s"
```
Actionable comments posted: 3
🧹 Nitpick comments (3)
src/trackers/FRENCH.py (2)
462-462: `tmdb_api_key` defaults to `False` when missing, which gets sent as a query parameter.

If the config key is absent, `api_key=False` is sent to TMDB, resulting in a confusing 401 instead of a clear early error.

Suggestion

```diff
-    tmdb_api_key = config['DEFAULT'].get('tmdb_api', False)
+    tmdb_api_key = config['DEFAULT'].get('tmdb_api', '')
+    if not tmdb_api_key:
+        return "", ""
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/trackers/FRENCH.py` at line 462, tmdb_api_key is being set to False when missing which then gets sent to TMDB; change the default to None (tmdb_api_key = config['DEFAULT'].get('tmdb_api', None)) and add a validation immediately after assignment to raise a clear error (or log+exit) if tmdb_api_key is None or empty, so no request is made with api_key=False; update any downstream code that builds the TMDB query to assume tmdb_api_key is present after this validation.
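To see the shape of the leaked parameter, the stdlib `urlencode` demonstrates how a `False` default ends up in the query string instead of failing fast (the exact casing of the rendered bool may differ per HTTP client library):

```python
# A False default is stringified into the query string rather than rejected,
# which is why the server answers 401 instead of the client erroring early.
from urllib.parse import urlencode

params = {"api_key": False, "query": "Amelie"}
print(urlencode(params))  # api_key=False&query=Amelie
```

Validating the key up front turns a remote 401 into an immediate, local error message.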
96-185: Significant code duplication across `get_audio_tracks`, `get_subtitle_tracks`, and `get_video_tracks`.

These three functions share ~90% identical boilerplate for navigating `meta['mediainfo']['media']['track']`, differing only in the `@type` filter value (and an optional commentary filter). Consider extracting a shared helper.

Example refactor

```python
async def _get_tracks_by_type(meta: dict[str, Any], track_type: str) -> list[dict[str, Any]]:
    if 'mediainfo' not in meta or 'media' not in meta['mediainfo']:
        return []
    media_info = meta['mediainfo']
    if not isinstance(media_info, dict):
        return []
    media = media_info.get('media')
    if not isinstance(media, dict):
        return []
    tracks = media.get('track', [])
    if not isinstance(tracks, list):
        return []
    return [t for t in tracks if isinstance(t, dict) and t.get('@type') == track_type]


async def get_audio_tracks(meta: dict[str, Any], filter: bool) -> list[dict[str, Any]]:
    tracks = await _get_tracks_by_type(meta, 'Audio')
    if filter:
        tracks = [t for t in tracks if "commentary" not in str(t.get('Title') or '').lower()]
    return tracks


async def get_subtitle_tracks(meta: dict[str, Any]) -> list[dict[str, Any]]:
    return await _get_tracks_by_type(meta, 'Text')


async def get_video_tracks(meta: dict[str, Any]) -> list[dict[str, Any]]:
    return await _get_tracks_by_type(meta, 'Video')
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/trackers/FRENCH.py` around lines 96 - 185, The three functions get_audio_tracks, get_subtitle_tracks, and get_video_tracks duplicate the same mediainfo traversal and filtering logic; extract a shared helper (e.g., async _get_tracks_by_type(meta: dict[str, Any], track_type: str) -> list[dict[str, Any]]) that validates meta['mediainfo']['media']['track'] and returns only dict tracks matching track_type, then change get_audio_tracks to call _get_tracks_by_type('Audio') and apply the "commentary" Title filter when filter is True, change get_subtitle_tracks to call _get_tracks_by_type('Text'), and change get_video_tracks to call _get_tracks_by_type('Video') to remove the repeated boilerplate.
src/trackers/C411.py (1)
324-383: Duplicate search logic — the TMDB and title-based search blocks are nearly identical.
The two search blocks (Lines 335-354 and 363-382) share identical XML parsing and error handling. Extract a helper to reduce duplication and ensure both paths get the same defensive fixes.
Example refactor
async def _search_c411(self, params: dict[str, Any]) -> list[str]:
    dupes = []
    try:
        async with httpx.AsyncClient(timeout=5.0) as client:
            response = await client.get(url=self.search_url, params=params)
            if response.status_code == 200:
                root = etree.fromstring(response.text.encode('utf-8'))
                if len(root) == 0:
                    return dupes
                channel = root[0]
                for result in channel:
                    if result.tag == 'item':
                        title_el = result.find('title')
                        if title_el is not None and title_el.text:
                            dupes.append(title_el.text)
            else:
                console.print(f"[bold red]Failed to search torrents. HTTP Status: {response.status_code}")
    except httpx.TimeoutException:
        console.print("[bold red]Request timed out after 5 seconds")
    except httpx.RequestError as e:
        console.print(f"[bold red]Unable to search for existing torrents: {e}")
    except Exception as e:
        console.print(f"[bold red]Unexpected error: {e}")
        await asyncio.sleep(5)
    return dupes
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/trackers/C411.py` around lines 324 - 383, The search_existing method duplicates nearly identical HTTP/XML parsing/error-handling logic for TMDB-id and title searches; extract that logic into a helper (e.g., async def _search_c411(self, params: dict[str, Any]) -> list[str]) and have search_existing call it for both parameter sets. The helper should perform the httpx.AsyncClient request, check response.status_code == 200, encode response.text, parse with etree.fromstring, defensively verify root has children, access channel = root[0], iterate channel looking for <item> elements, use result.find('title') (or the exact tag name instead of result[0]) and verify title_el is not None and title_el.text before appending, and consolidate the same exception handling (httpx.TimeoutException, httpx.RequestError, generic Exception) so both search paths share identical, safe behavior.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/trackers/C411.py`:
- Around line 386-416: In upload, the call to fr.get_desc_full that assigns
description is redundant because the same call is made again when building the
data dict; remove the first unused call and reuse a single retrieved value:
inside async def upload(...) eliminate the initial "description = await
fr.get_desc_full(meta, self.tracker)" and instead use the description variable
when constructing data (i.e., set description = await fr.get_desc_full(meta,
self.tracker) once before creating the data dict and reference that variable in
data["description"]); ensure references to description, get_desc_full, upload,
and the data dict remain consistent.
- Around line 68-75: The checks for language option flags vff and vfq are not
mutually exclusive and currently append duplicate entries to obj1; change the
independent ifs in the block that appends to obj1 so that when vff and vfq are
both True only the intended single code is appended (e.g., make the second and
third conditions into elif branches or reorder to use an if/elif/elif chain),
preserving the final fallback check (eng and not vff and not vfq) for appending
1; update the conditional sequence around obj1.append(4), obj1.append(5), and
obj1.append(2) to prevent multiple appends when both vff and vfq are set.
- Around line 155-165: The code currently converts episode_int to a string so
missing vs explicit 0 get conflated; fix by treating episode_int as an integer
and explicitly checking presence vs None: use meta.get('episode_int') (without
str()) and if it is None or not present set options_dict[6] = 96 (complete
season), otherwise set options_dict[6] = 96 + int(meta['episode_int']); update
the block around meta/episode handling in C411 (references: meta, episode_int,
options_dict[6]) so presence of an explicit 0 is detected via integer presence
rather than truthiness of a string.
---
Duplicate comments:
In `@src/trackers/C411.py`:
- Around line 335-354: The XML parsing assumes elements exist and can raise
IndexError; update the parsing in src/trackers/C411.py where you access channel
= root[0] and dupe = result[0] to defensively check children (e.g., ensure root
has at least one child before using root[0], find the <channel> element with
root.find('channel') or check len(root), and ensure each <item> has children or
use result.findtext()/result.find('...') before accessing result[0]); apply the
exact same defensive checks to the fallback search block that mirrors this
parsing so both paths handle missing/empty elements without raising IndexError
and still log/search gracefully.
- Around line 398-412: The code unconditionally reads torrent/mediainfo files
even when meta["debug"] is True, causing FileNotFoundError; move the async file
reads for torrent_file_path and mediainfo_file_path inside the same non-debug
branch where create_torrent_for_upload(meta, self.tracker, 'C411') is called
(i.e., inside if not meta["debug"] block) so that torrent_bytes and
mediainfo_bytes are only read when the files have been created; keep headers,
c411_name = await self.get_name(meta) and dot_name logic outside if needed, or
ensure any later usage of torrent_bytes/mediainfo_bytes is guarded by
meta["debug"] as well.
In `@src/trackers/FRENCH.py`:
- Around line 341-351: The loop that checks audio track language uses direct
dict access item['Language'] which can raise KeyError for tracks missing that
key; update the loop in the block that populates audio_parts (the for over
audio_track, the has_french_audio logic) to use item.get('Language', '') for all
language comparisons (e.g., compare item.get('Language','') to
"fr"/"fr-fr"/"fr-ca") before calling map_audio_codec and get_audio_channels and
appending to audio_parts so missing Language fields are handled safely.
- Line 619: The f-string building uses direct dict access on obj
(obj['Language'], obj['Format'], obj['Channels']) which can raise KeyError for
missing MediaInfo fields; update the code that assigns line (and the analogous
assignment at the other occurrence) to use obj.get('Language', 'Unknown'),
obj.get('Format', 'Unknown') and obj.get('Channels') with a safe fallback (e.g.,
convert to int/default '??' or omit "ch" when missing), and ensure kbps is
computed/logged safely (cast/zero-default before formatting) so the f-string
formatting (kbps:.2f) never raises an exception. Ensure you update both
occurrences referencing obj and kbps so missing fields are handled consistently.
- Around line 33-53: The language detection misses the case of a single French
dub for non-French originals and has redundant separate checks for 2 vs ≥3 audio
tracks; update the logic that sets language by (1) replacing the two separate
MULTi checks with a single condition: if num_audio_tracks >= 2 and
has_french_audio: language = "MULTi", and (2) add a condition to set language =
"FRENCH" when num_audio_tracks == 1 and has_french_audio and audio_langs[0] !=
original_lang (i.e., single French audio that is not the original language),
ensuring the existing single-audio original-language checks (VOSTFR/VO/FRENCH)
remain correct and ordered so the new single-dub case is matched.
- Around line 600-602: The displayed video bitrate label is wrong: the code
appends the value mbps (calculated as bits/s ÷ 1,000,000 => megabits/s) but
labels it "MB/s"; update the call that builds the description (the
desc_parts.append that formats {mbps:.2f} MB/s) to either change the unit label
to "Mb/s" to match the existing mbps value, or if you intend to show megabytes/s
convert mbps to MB/s by dividing by 8 and then label it "MB/s"; adjust the
formatting in the same desc_parts.append that references the mbps variable
accordingly.
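As a side note on the Mb/s vs MB/s item above, the two conversions can be sketched like this (function names are illustrative, not from the codebase):

```python
def to_mbps(bit_rate: int) -> float:
    """bits/s -> megabits per second (Mb/s)."""
    return bit_rate / 1_000_000

def to_mb_per_s(bit_rate: int) -> float:
    """bits/s -> megabytes per second (MB/s): divide the Mb/s value by 8."""
    return bit_rate / 1_000_000 / 8

rate = 16_000_000  # a 16 Mb/s video stream
print(f"{to_mbps(rate):.2f} Mb/s")      # 16.00 Mb/s
print(f"{to_mb_per_s(rate):.2f} MB/s")  # 2.00 MB/s
```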
---
Nitpick comments:
In `@src/trackers/C411.py`:
- Around line 324-383: The search_existing method duplicates nearly identical
HTTP/XML parsing/error-handling logic for TMDB-id and title searches; extract
that logic into a helper (e.g., async def _search_c411(self, params: dict[str,
Any]) -> list[str]) and have search_existing call it for both parameter sets.
The helper should perform the httpx.AsyncClient request, check
response.status_code == 200, encode response.text, parse with etree.fromstring,
defensively verify root has children, access channel = root[0], iterate channel
looking for <item> elements, use result.find('title') (or the exact tag name
instead of result[0]) and verify title_el is not None and title_el.text before
appending, and consolidate the same exception handling (httpx.TimeoutException,
httpx.RequestError, generic Exception) so both search paths share identical,
safe behavior.
In `@src/trackers/FRENCH.py`:
- Line 462: tmdb_api_key is being set to False when missing which then gets sent
to TMDB; change the default to None (tmdb_api_key =
config['DEFAULT'].get('tmdb_api', None)) and add a validation immediately after
assignment to raise a clear error (or log+exit) if tmdb_api_key is None or
empty, so no request is made with api_key=False; update any downstream code that
builds the TMDB query to assume tmdb_api_key is present after this validation.
- Around line 96-185: The three functions get_audio_tracks, get_subtitle_tracks,
and get_video_tracks duplicate the same mediainfo traversal and filtering logic;
extract a shared helper (e.g., async _get_tracks_by_type(meta: dict[str, Any],
track_type: str) -> list[dict[str, Any]]) that validates
meta['mediainfo']['media']['track'] and returns only dict tracks matching
track_type, then change get_audio_tracks to call _get_tracks_by_type('Audio')
and apply the "commentary" Title filter when filter is True, change
get_subtitle_tracks to call _get_tracks_by_type('Text'), and change
get_video_tracks to call _get_tracks_by_type('Video') to remove the repeated
boilerplate.
if vff and vfq:
    obj1.append(4)
if vfq:
    obj1.append(5)
if vff:
    obj1.append(2)
if eng and not vff and not vfq:  # vo
    obj1.append(1)
Language option tags are not mutually exclusive — duplicate entries when both vff and vfq are set.
When both vff and vfq are True, the code appends 4 (VF2), then also 5 (VFQ) and 2 (VFF) because these are independent if statements. This likely should be elif:
Fix
  if vff and vfq:
      obj1.append(4)
- if vfq:
+ elif vfq:
      obj1.append(5)
- if vff:
+ elif vff:
      obj1.append(2)
  if eng and not vff and not vfq:  # vo
      obj1.append(1)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/trackers/C411.py` around lines 68 - 75, The checks for language option
flags vff and vfq are not mutually exclusive and currently append duplicate
entries to obj1; change the independent ifs in the block that appends to obj1 so
that when vff and vfq are both True only the intended single code is appended
(e.g., make the second and third conditions into elif branches or reorder to use
an if/elif/elif chain), preserving the final fallback check (eng and not vff and
not vfq) for appending 1; update the conditional sequence around obj1.append(4),
obj1.append(5), and obj1.append(2) to prevent multiple appends when both vff and
vfq are set.
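The duplicate-append behaviour the prompt describes can be reproduced in isolation (the function below is a hypothetical extraction for demonstration, not the tracker's actual code):

```python
def language_options(vff: bool, vfq: bool, eng: bool) -> list[int]:
    # elif chain: at most one of 4 (VF2), 5 (VFQ), 2 (VFF) is appended,
    # unlike the original independent ifs which appended all three.
    obj1: list[int] = []
    if vff and vfq:
        obj1.append(4)
    elif vfq:
        obj1.append(5)
    elif vff:
        obj1.append(2)
    if eng and not vff and not vfq:  # vo
        obj1.append(1)
    return obj1

print(language_options(vff=True, vfq=True, eng=False))  # [4], not [4, 5, 2]
```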
if meta.get('no_season', False) is False:
    season = str(meta.get('season_int', ''))
    if season:
        options_dict[7] = 120 + int(season)
    # Episode
    episode = str(meta.get('episode_int', ''))
    if episode:  # Episode 0 check is missing
        options_dict[6] = 96 + int(episode)
    else:
        # pas d'épisode, on suppose que c'est une saison complete
        options_dict[6] = 96
Episode 0 is indistinguishable from "complete season" — both produce value 96.
Line 160 converts episode_int to string, so 0 becomes "0" (truthy). Then 96 + int("0") = 96, which is the same value assigned on Line 165 for the "no episode / complete season" case. If C411 differentiates these, the check needs adjustment.
- episode = str(meta.get('episode_int', ''))
- if episode:  # Episode 0 check is missing
+ episode_int = meta.get('episode_int', '')
+ if episode_int not in ('', None) and int(episode_int) > 0:
+     options_dict[6] = 96 + int(episode_int)
+ else:
+     # No episode or episode 0, assume complete season
      options_dict[6] = 96
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/trackers/C411.py` around lines 155 - 165, The code currently converts
episode_int to a string so missing vs explicit 0 get conflated; fix by treating
episode_int as an integer and explicitly checking presence vs None: use
meta.get('episode_int') (without str()) and if it is None or not present set
options_dict[6] = 96 (complete season), otherwise set options_dict[6] = 96 +
int(meta['episode_int']); update the block around meta/episode handling in C411
(references: meta, episode_int, options_dict[6]) so presence of an explicit 0 is
detected via integer presence rather than truthiness of a string.
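A minimal sketch of the presence check described above (hypothetical helper; note that 96 + 0 still equals 96, so episode 0 only becomes distinguishable in intent, not in the encoded value, unless the API defines a separate code):

```python
def episode_option(meta: dict) -> int:
    # Presence check, not truthiness: only a missing/None value
    # means "complete season"; an explicit episode 0 is detected.
    episode_int = meta.get('episode_int')
    if episode_int is None:
        return 96  # no episode, complete season
    return 96 + int(episode_int)

print(episode_option({}))                  # 96 (complete season)
print(episode_option({'episode_int': 3}))  # 99
```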
Actionable comments posted: 1
🧹 Nitpick comments (3)
src/trackers/C411.py (3)
324-383: `search_existing` duplicates the XML parsing block — extract a helper.
The TMDB-ID search (lines 335–354) and name-based fallback search (lines 363–382) are near-identical blocks. Extracting the HTTP+XML logic into a small helper would eliminate ~20 lines of duplication and make both code paths easier to maintain.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/trackers/C411.py` around lines 324 - 383, search_existing duplicates the HTTP request + XML parsing logic twice; factor that into a small async helper (e.g., async def _fetch_dupes(self, params: dict[str, Any]) -> list[str]) that performs the httpx.AsyncClient request (timeout 5s), checks status_code, decodes response.text to bytes, parses with etree.fromstring, extracts <item> children texts, and performs the same exception handling (httpx.TimeoutException, httpx.RequestError, generic Exception with asyncio.sleep). Replace both duplicated blocks in search_existing to build their params (tmdbid or q) then call self._fetch_dupes(params) and extend/return its results. Ensure you reuse the same api_key extraction and console error messages exactly as before.
329-329: `get_translation_fr` is called twice; reuse the first result.
Line 329 calls `await fr.get_translation_fr(meta)` and stores `title`, but the same call is repeated on Line 357 in the fallback branch. Since `meta` hasn't changed, the second call is redundant — just reuse the `title` from line 329.
Also, `descr` is unused in both calls (Ruff RUF059). Prefix with `_`.
♻️ Suggested fix
- title, descr = await fr.get_translation_fr(meta)
+ title, _descr = await fr.get_translation_fr(meta)
  params: dict[str, Any] = {
      ...
  }
  ...
  if not dupes:
-     title, descr = await fr.get_translation_fr(meta)
      params: dict[str, Any] = {
          't': 'search',
          'apikey': self.config['TRACKERS'][self.tracker]['api_key'].strip(),
          'q': unidecode.unidecode(title.replace(" ", "."))
      }
Also applies to: 355-361
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/trackers/C411.py` at line 329, The call to fr.get_translation_fr(meta) is duplicated: remove the second await and reuse the previously returned title variable instead of calling get_translation_fr again in the fallback branch (refer to get_translation_fr, title and meta); also rename descr to _descr (or _descr variable) at the original call site to silence the unused-variable warning (RUF059) and update any references accordingly (there should be none).
559-574: `download_torrent` streams the response, but the file handle scope is nested inside the client context — consider simplifying.
Streaming with `aiter_bytes()` is fine for large files, but for torrent files (typically small), a plain `.content` write would be simpler and avoid the nested async iteration. This is purely a readability nit.
Also, per Ruff TRY300, the `return None` on Line 567 could move to an `else` block, though this is cosmetic.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/trackers/C411.py` around lines 559 - 574, In download_torrent, simplify the file write by replacing the async iteration over r.aiter_bytes() with a single write of r.content (i.e., await f.write(r.content)) since torrent files are small, and remove the nested async for loop; also tidy the control flow by moving the current try-block return None into an else block (or otherwise remove the redundant return inside the try) so the function only returns None from the else/except path. Reference: method download_torrent, variables r, client, and path.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/trackers/C411.py`:
- Line 422: The code is serializing tmdb_info with str() which produces a Python
repr (invalid JSON); change the serializer for the "tmdbData" payload to use
json.dumps(tmdb_info) (and add/import json if missing) so the API receives valid
JSON; also ensure tmdb_info is JSON-serializable (or convert problematic types)
and handle/detect serialization errors around the code that constructs the
"tmdbData" field in C411.py.
---
Duplicate comments:
In `@src/trackers/C411.py`:
- Around line 335-354: The XML parsing block blindly indexes into elements
(root[0], result[0]) which can raise IndexError; update the parsing in the C411
tracker to be defensive: after parsing response_text into root, verify root has
children (e.g., if len(root) == 0: log and return/continue), assign channel
safely (e.g., channel = root[0] only after check) or use root.find('channel') /
root.findall('item') to locate items, and before using result[0] ensure result
has at least one child (e.g., if len(result) == 0: continue) or use
result.findtext(...) to get text safely; also handle XMLSyntaxError from
etree.fromstring and optionally add an explicit except IndexError clause
alongside existing exceptions to log and skip malformed responses.
- Around line 386-388: In the upload method, the await call to fr.get_desc_full
that assigns to description is redundant because description is recomputed
later; remove the first call (the unused "description = await
fr.get_desc_full(meta, self.tracker)") from C411.upload so the function only
calls fr.get_desc_full once where the result is actually used (at the later line
around the existing second call).
- Around line 68-75: The code appends duplicate entries to obj1 when both flags
vff and vfq are true; change the second and third standalone checks into
exclusive branches so only one append occurs per condition—replace the separate
if checks for vfq and vff (that append 5 and 2) with elif branches following the
initial vff and vfq combined check, ensuring the logic using vff, vfq, eng and
obj1 is mutually exclusive.
- Around line 398-412: The code unconditionally reads torrent_file_path and
mediainfo_file_path even when meta["debug"] is True (create_torrent_for_upload
is skipped), causing FileNotFoundError; update the logic in the C411 upload flow
so file reads happen only when the torrent/media files exist — e.g., move the
aiofiles.open reads into the same branch that calls
self.common.create_torrent_for_upload or wrap them with an existence check
(os.path.exists or similar) and handle the debug case by setting
torrent_bytes/mediainfo_bytes to None or a stub; adjust any later use of
torrent_bytes/mediainfo_bytes accordingly (look at variables torrent_file_path,
mediainfo_file_path, and methods create_torrent_for_upload and get_name for
where to apply the guard).
- Around line 155-165: The code treats episode '0' the same as a missing episode
because it checks the truthiness of the stringified episode; change the
condition to detect presence of episode_int (not just truthiness) so episode 0
is handled as a real episode. Specifically, update the branch that reads
meta.get('episode_int', '') and sets options_dict[6] so it uses a presence check
(e.g., meta.get('episode_int') is not None or 'episode_int' in meta and
meta['episode_int'] != '') and then do options_dict[6] = 96 +
int(meta.get('episode_int')), otherwise leave options_dict[6] = 96 for a full
season; refer to meta.get('episode_int') and options_dict[6] to locate the
change.
- Around line 137-145: The code in C411.py uses magic numeric category codes
(obj2 = 415 / 413 / 414) when checking meta['uuid'] for "4klight"/"hdlight";
replace these undocumented literals with named constants or an enum (e.g.,
CATEGORY_4K_LIGHT, CATEGORY_HD_1080, CATEGORY_HD_OTHER) declared near the top of
the module or in a shared constants module, update the assignments (obj2 =
CATEGORY_4K_LIGHT etc.), and add a short comment documenting what each constant
represents so future reviewers understand the mapping for uuid checks and
resolution logic.
---
Nitpick comments:
In `@src/trackers/C411.py`:
- Around line 324-383: search_existing duplicates the HTTP request + XML parsing
logic twice; factor that into a small async helper (e.g., async def
_fetch_dupes(self, params: dict[str, Any]) -> list[str]) that performs the
httpx.AsyncClient request (timeout 5s), checks status_code, decodes
response.text to bytes, parses with etree.fromstring, extracts <item> children
texts, and performs the same exception handling (httpx.TimeoutException,
httpx.RequestError, generic Exception with asyncio.sleep). Replace both
duplicated blocks in search_existing to build their params (tmdbid or q) then
call self._fetch_dupes(params) and extend/return its results. Ensure you reuse
the same api_key extraction and console error messages exactly as before.
- Line 329: The call to fr.get_translation_fr(meta) is duplicated: remove the
second await and reuse the previously returned title variable instead of calling
get_translation_fr again in the fallback branch (refer to get_translation_fr,
title and meta); also rename descr to _descr (or _descr variable) at the
original call site to silence the unused-variable warning (RUF059) and update
any references accordingly (there should be none).
- Around line 559-574: In download_torrent, simplify the file write by replacing
the async iteration over r.aiter_bytes() with a single write of r.content (i.e.,
await f.write(r.content)) since torrent files are small, and remove the nested
async for loop; also tidy the control flow by moving the current try-block
return None into an else block (or otherwise remove the redundant return inside
the try) so the function only returns None from the else/except path. Reference:
method download_torrent, variables r, client, and path.
| "options": await self.get_option_tag(meta), | ||
| # "isExclusive": "Test Upload-Assistant", | ||
| "uploaderNote": "Upload-Assistant", | ||
| "tmdbData": str(tmdb_info), |
str(tmdb_info) produces Python repr, not valid JSON — the API will receive malformed data.
str() on a Python dict produces output like {'id': 123, 'title': 'foo'} with single quotes and Python-specific literals (True/False/None), not valid JSON ({"id": 123, "title": "foo"} with true/false/null). The tracker API almost certainly expects a JSON string here.
🐛 Proposed fix
- "tmdbData": str(tmdb_info),
+ "tmdbData": json.dumps(tmdb_info),🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/trackers/C411.py` at line 422, The code is serializing tmdb_info with
str() which produces a Python repr (invalid JSON); change the serializer for the
"tmdbData" payload to use json.dumps(tmdb_info) (and add/import json if missing)
so the API receives valid JSON; also ensure tmdb_info is JSON-serializable (or
convert problematic types) and handle/detect serialization errors around the
code that constructs the "tmdbData" field in C411.py.
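The repr-vs-JSON mismatch is easy to demonstrate with a sample payload (the dict below is invented for illustration):

```python
import json

tmdb_info = {'id': 123, 'title': 'foo', 'adult': False, 'tagline': None}

# str() yields a Python repr: single quotes, False/None literals.
print(str(tmdb_info))
# -> {'id': 123, 'title': 'foo', 'adult': False, 'tagline': None}

# json.dumps() yields valid JSON: double quotes, false/null.
print(json.dumps(tmdb_info))
# -> {"id": 123, "title": "foo", "adult": false, "tagline": null}
```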
Actionable comments posted: 1
🧹 Nitpick comments (3)
src/trackers/C411.py (3)
329-329: Redundant async call — the `fr.get_translation_fr(meta)` result is unused here.
Neither `title` nor `descr` from this call is used in the first (TMDB ID) search. The same function is called again on line 357 when the title-based fallback runs. Remove this line to avoid the wasted call.
Proposed fix
- title, descr = await fr.get_translation_fr(meta)
  params: dict[str, Any] = {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/trackers/C411.py` at line 329, Remove the redundant await call to fr.get_translation_fr(meta) that assigns title, descr before the TMDB-ID search: the values aren't used until the title-based fallback later (the second call around where title-based fallback happens), so delete the earlier "title, descr = await fr.get_translation_fr(meta)" invocation in src/trackers/C411.py to avoid the wasted async call and keep the later call intact; if fr.get_translation_fr has side effects, ensure those side effects are preserved where needed or moved to the later call.
19-31: Remove unnecessary `pass` statement.
The `pass` on line 31 is redundant after the preceding assignments.
Proposed fix
  self.banned_groups: list[str] = []
- pass
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/trackers/C411.py` around lines 19 - 31, The __init__ method of class C411 contains a redundant final "pass" after setting attributes; remove the unnecessary pass in C411.__init__ so the constructor ends with the attribute assignments (leave all existing attributes like self.config, self.common, self.tracker, self.base_url, self.id_url, self.upload_url, self.search_url, self.torrent_url, and self.banned_groups intact).
57-57: Shadows built-in `type`.
`type` is reused as a local variable name throughout `get_option_tag` (and `get_name` on line 171). Consider renaming to `media_type` or `upload_type` to avoid shadowing the built-in.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/trackers/C411.py` at line 57, The local variable named "type" in get_option_tag (and similarly in get_name) shadows the built-in; rename it to a descriptive name like media_type (or upload_type) by changing the assignment "type = meta.get('type', '').upper()" to "media_type = meta.get('type', '').upper()" and update every reference inside get_option_tag and get_name to use media_type instead of type; ensure any string comparisons, conditionals, or returned values that used type are adjusted accordingly and run tests to confirm no other references remain.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/trackers/C411.py`:
- Line 166: The code returns Python's repr via return str(options_dict), which
is not valid JSON; replace that with proper JSON serialization by importing the
json module (if not already present) and returning json.dumps(options_dict) so
integer keys are converted to JSON string keys; also mirror the same fix for the
other occurrence using str(tmdb_info) to ensure the API receives valid JSON.
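A quick check of the integer-key behaviour mentioned above (sample values only):

```python
import json

options_dict = {6: 96, 7: 121}

# json.dumps converts integer keys to JSON string keys...
print(json.dumps(options_dict))  # {"6": 96, "7": 121}
# ...whereas str() produces a Python repr the API cannot parse as JSON.
print(str(options_dict))         # {6: 96, 7: 121}
```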
---
Duplicate comments:
In `@src/trackers/C411.py`:
- Around line 398-412: The code reads torrent_file_path and mediainfo_file_path
unconditionally causing FileNotFoundError when meta["debug"] is True because
create_torrent_for_upload is skipped; move the async file reads (the
aiofiles.open/await f.read() for torrent_bytes and mediainfo_bytes) inside the
same non-debug branch that calls self.common.create_torrent_for_upload so those
files are only read after they're created, keeping the rest of the upload logic
(including headers, c411_name = await self.get_name(meta), dot_name, and
response handling) outside or adjusted accordingly; ensure references to
torrent_file_path and mediainfo_file_path remain in scope where used.
- Line 422: Summary: tmdbData is being set with str(tmdb_info) which yields a
Python repr, not JSON. Replace the use of str(tmdb_info) with
json.dumps(tmdb_info) when constructing the tmdbData value so the stored value
is valid JSON; update the assignment where "tmdbData": str(tmdb_info) appears
(in src/trackers/C411.py, variable tmdb_info) to use json.dumps(tmdb_info)
instead — json is already imported so no new imports needed.
- Around line 68-75: The current sequential ifs allow duplicates when both vff
and vfq are true (obj1 gets 4, then 5, then 2); change the block to an exclusive
chain so only one value is appended: replace the multiple ifs with an if / elif
/ elif / elif structure such that the first branch checks (vff and vfq) and
appends 4, the next checks vfq and appends 5, the next checks vff and appends 2,
and the final branch handles eng (and if desired retains the not vff and not vfq
guard) and appends 1; update the block containing vff, vfq, eng, and obj1
accordingly.
- Around line 335-354: The XML parsing assumes specific indices (root[0] and
result[0]) which can raise IndexError on empty or malformed XML; update the
search logic that uses self.search_url and the duplicate block that builds dupes
to be defensive: after parsing (root = etree.fromstring(...)) verify root has
children (e.g., if len(root) == 0: handle/return), locate the channel safely
(use root.find or next(child for child in root if child.tag == 'channel') or
check len(root) > 0 before root[0]), iterate over channel.findall('item')
instead of indexing, and for each item use element.findtext(...) or check that
element has children before accessing element[0]; apply the same defensive
checks in the duplicate block that currently accesses result[0] so no IndexError
is raised for malformed or empty XML.
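A sketch of the defensive pattern, with the stdlib `ElementTree` standing in for `lxml.etree` (`parse_items` and its returned title list are illustrative, not the tracker's actual return shape):

```python
import xml.etree.ElementTree as ET

def parse_items(xml_text):
    """Parse an RSS-style feed without indexing root[0]/result[0].

    find/findall/findtext return None or empty lists on missing nodes,
    so empty or malformed XML yields [] instead of an IndexError.
    """
    try:
        root = ET.fromstring(xml_text)
    except ET.ParseError:
        return []
    channel = root.find("channel")
    if channel is None:
        return []
    titles = []
    for item in channel.findall("item"):
        title = item.findtext("title")
        if title:  # skip items without a <title> instead of raising
            titles.append(title)
    return titles
```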
- Line 388: Remove the redundant call that assigns description via
fr.get_desc_full(meta, self.tracker) (the unused local variable description) and
ensure the later call used when building the data dict (the fr.get_desc_full
call used to populate data) remains; either delete the unused assignment or
reuse that result when constructing data so fr.get_desc_full is only invoked
once (refer to the description variable, fr.get_desc_full(meta, self.tracker),
and the data dict construction).
- Around line 137-145: The uuid-based branch is assigning undocumented magic
category codes (415, 413, 414) and has an incomplete condition (commented-out
type == "ENCODE"); replace the numeric literals with clearly named constants
(e.g., CATEGORY_4KLIGHT, CATEGORY_HDLIGHT_1080P, CATEGORY_HDLIGHT_OTHER)
declared near the top of C411.py, re-enable and use the type check
(meta.get('type') == "ENCODE") together with the uuid substring checks, add a
safe resolution lookup (meta.get('resolution')) before comparing to '1080p', and
either map the constants into the existing 10-31 category range or add a
one-line comment documenting why these codes lie outside that range and where
the mapping is defined so the intent is clear (update uses of obj2 accordingly).
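One way the renamed constants and restored guard could look. Note that which numeric code pairs with which uuid substring is an assumption here — the review only lists the raw values 415/413/414 and the suggested constant names, not the pairing:

```python
# Codes taken from the original source; their meaning on the C411 side
# is undocumented, and this pairing with the constant names is assumed.
CATEGORY_4KLIGHT = 415
CATEGORY_HDLIGHT_1080P = 413
CATEGORY_HDLIGHT_OTHER = 414

def light_encode_category(meta):
    """Re-enabled ENCODE guard plus safe resolution lookup."""
    if meta.get("type") != "ENCODE":
        return None
    uuid = (meta.get("uuid") or "").lower()
    if "4klight" in uuid:
        return CATEGORY_4KLIGHT
    if "hdlight" in uuid:
        if meta.get("resolution") == "1080p":
            return CATEGORY_HDLIGHT_1080P
        return CATEGORY_HDLIGHT_OTHER
    return None
```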
- Around line 155-165: The bug is that episode 0 is treated as "no episode" due
to truthy string check; change the check to detect presence of episode_int
rather than its truthiness: read episode_int as an integer from meta (e.g.
episode_int = meta.get('episode_int')) and use "if episode_int is not None:" to
set options_dict[6] = 96 + int(episode_int), otherwise set options_dict[6] = 96;
update the code in C411.py where options_dict and meta are handled so the
presence of episode 0 is handled correctly (reference symbols: meta,
'episode_int', options_dict[6]).
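A sketch of the presence check exactly as described here (a later duplicate of this comment goes further and suggests reserving 96 for complete seasons by starting episodes at 97; this minimal version does not do that, so episode 0 still maps to 96):

```python
def episode_option(meta):
    """Use presence of episode_int, not truthiness, so episode 0 is
    handled; key name 'episode_int' mirrors the review comment."""
    episode_int = meta.get("episode_int")
    if episode_int is not None:
        return 96 + int(episode_int)
    return 96  # complete-season fallback
```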
---
Nitpick comments:
In `@src/trackers/C411.py`:
- Line 329: Remove the redundant await call to fr.get_translation_fr(meta) that
assigns title, descr before the TMDB-ID search: the values aren't used until the
title-based fallback later (the second call around where title-based fallback
happens), so delete the earlier "title, descr = await
fr.get_translation_fr(meta)" invocation in src/trackers/C411.py to avoid the
wasted async call and keep the later call intact; if fr.get_translation_fr has
side effects, ensure those side effects are preserved where needed or moved to
the later call.
- Around line 19-31: The __init__ method of class C411 contains a redundant
final "pass" after setting attributes; remove the unnecessary pass in
C411.__init__ so the constructor ends with the attribute assignments (leave all
existing attributes like self.config, self.common, self.tracker, self.base_url,
self.id_url, self.upload_url, self.search_url, self.torrent_url, and
self.banned_groups intact).
- Line 57: The local variable named "type" in get_option_tag (and similarly in
get_name) shadows the built-in; rename it to a descriptive name like media_type
(or upload_type) by changing the assignment "type = meta.get('type',
'').upper()" to "media_type = meta.get('type', '').upper()" and update every
reference inside get_option_tag and get_name to use media_type instead of type;
ensure any string comparisons, conditionals, or returned values that used type
are adjusted accordingly and run tests to confirm no other references remain.
Actionable comments posted: 2
🧹 Nitpick comments (2)
src/trackers/C411.py (2)
553-578: Minor cleanup: `return None` inside `try` and bare `Exception` catch. Per Ruff TRY300, the `return None` on Line 571 should be in an `else` block; and the bare `except Exception` (BLE001) should specify `except Exception as e` (already done) but ideally catch a narrower type, or at minimum keep it as-is with the understanding that download failure is non-fatal here.
♻️ Proposed cleanup

     try:
         async with httpx.AsyncClient(timeout=30.0, follow_redirects=True) as client:
             r = await client.get(self.torrent_url, params=params)
             r.raise_for_status()
             async with aiofiles.open(path, "wb") as f:
                 async for chunk in r.aiter_bytes():
                     await f.write(chunk)
-        return None
-    except Exception as e:
+    except (httpx.HTTPStatusError, httpx.RequestError, OSError) as e:
         console.print(
-            f"[yellow]Warning: Could not download torrent file: {str(e)}[/yellow]")
+            f"[yellow]Warning: Could not download torrent file: {e!s}[/yellow]")
         console.print(
             "[yellow]Download manually from the tracker.[/yellow]")
-        return None

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/trackers/C411.py` around lines 553 - 578, In download_torrent, move the successful completion return out of the try body into an else block so the try only contains operations that may raise (i.e., the HTTP request and file write) and only return None in the else after the try; also replace the broad except Exception with a narrower catch for expected errors (e.g., httpx.HTTPError and OSError) while still logging the exception via the existing variable name (e) so failures remain non-fatal and informative.
443-471: Unused `# noqa: PERF203` directives.
PERF203 is not an active Ruff rule; these suppressions have no effect and add noise. Remove them.
♻️ Proposed cleanup

-    except httpx.HTTPStatusError as e:  # noqa: PERF203
+    except httpx.HTTPStatusError as e:

-    for attempt in range(max_retries):  # noqa: PERF203   (or wherever placed)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/trackers/C411.py` around lines 443 - 471, The # noqa: PERF203 comments are unused noise in the upload flow; remove the " # noqa: PERF203" suffixes from the try line and the except httpx.HTTPStatusError as e: line so the code no longer contains ineffective Ruff suppressions (look for the try block that opens the httpx.AsyncClient call and the corresponding "except httpx.HTTPStatusError as e" handler in C411.py).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/trackers/C411.py`:
- Around line 153-156: options_dict is being populated with options_dict[2] =
[obj2] even when obj2 retains its default value 0, causing invalid payloads;
update the code that sets options_dict[2] (the block that references obj2) to
first validate obj2 (e.g., obj2 is not 0/None and is the expected
type/structure) and only assign options_dict[2] when valid, otherwise omit the
key and emit a warning/log via the same logger used nearby; locate the
assignment to options_dict[2] and the code that sets obj2 and add the guard and
logging there so only valid data is sent to the API.
- Around line 333-334: Remove the redundant await fr.get_translation_fr(meta)
call that assigns title, descr before the TMDB-ID branch (these values are
unused there); ensure the only call remains where translations are actually
needed (the later call that reads from meta), and in that later call rename
descr to _ to avoid unused-variable linting (i.e., keep
fr.get_translation_fr(meta) where translations are used and change its return
unpacking from "title, descr" to "title, _"). Ensure references are to
get_translation_fr, fr, meta, title, descr and the TMDB-ID/dupes branch so you
only remove the unused early call and adjust the later unpacking.
---
Duplicate comments:
In `@src/trackers/C411.py`:
- Line 392: Remove the redundant call to fr.get_desc_full: currently description
is assigned at "description = await fr.get_desc_full(meta, self.tracker)" and
then the function is called again when building data; instead call
fr.get_desc_full once, assign it to the variable description, and use that
variable when constructing data (update the code around the assignment and where
data is created so get_desc_full(meta, self.tracker) is not invoked twice).
- Line 426: The payload currently sets "tmdbData": str(tmdb_info) which
serializes Python repr (single quotes, True/None) instead of JSON; replace this
with a proper JSON serialization using json.dumps(tmdb_info) (optionally with
ensure_ascii=False if needed) and make sure the module imports json; update the
assignment where "tmdbData" is constructed (refer to the tmdb_info variable and
the payload/response dict building) so the C411 API receives valid JSON.
- Around line 339-358: The code currently indexes XML nodes without guards
(root[0] and result[0]) which will raise IndexError on empty/malformed XML;
update the parsing in the async search block to first confirm root has children
(e.g., if len(root) == 0: log and return/continue) or use
root.find('channel')/root.findall and assign channel defensively, and for each
<item> check that it has children before accessing result[0] (e.g., if
len(result) == 0: continue or use result.findtext(...) safely) and log/skipping
malformed items; apply the same defensive checks to the name-based search block
that also uses root[0] and result[0] so both paths never index empty nodes.
- Around line 72-79: The logic that appends language option tags to obj1 uses
independent if statements so when both vff and vfq are true you get duplicate
entries; change the independent checks into a prioritized if/elif chain so the
combined case is handled first: in the block that currently checks vff, vfq, and
eng (variables vff, vfq, eng updating obj1) replace the separate ifs with a
single if/elif sequence that first handles "vff and vfq" (append 4), then "vfq"
(append 5), then "vff" (append 2), and finally the eng fallback (append 1) to
ensure only one tag is appended per track.
- Around line 402-416: The code reads torrent_file_path and mediainfo_file_path
unconditionally which raises FileNotFoundError in debug mode because
create_torrent_for_upload is skipped; move or guard the aiofiles.open reads so
they only execute when meta["debug"] is False (i.e., after the
create_torrent_for_upload call), or check existence before opening
(os.path.exists) and skip/handle missing files; update the block around
create_torrent_for_upload, the subsequent reads of torrent_file_path and
mediainfo_file_path, and any use of torrent_bytes/mediainfo_bytes (e.g., in
upload logic using get_name and headers) so those variables are only referenced
when they were successfully loaded.
- Around line 141-149: The block in C411.py that sets obj2 based on
meta.get('uuid') uses undocumented numeric codes (415, 413, 414) and has the
important guard "and type == 'ENCODE'" commented out; either remove this branch
or make it explicit: restore the guard condition (e.g., check type == "ENCODE"
before assigning obj2) and replace magic numbers with named constants or
documented C411 extension codes and add a short comment explaining these codes
and when they're valid (e.g., C411_EXT_415 = ... , C411_EXT_413 = ... used when
uuid contains "4klight"/"hdlight" and type == "ENCODE", with resolution handling
for 1080p). Ensure the variable names (uuid, meta, obj2) and function/class
containing this logic are updated accordingly so the behavior and purpose of
these codes are clear.
- Around line 159-169: The code currently treats any non-empty episode string as
present, but episode "0" maps to 96 which collides with the "complete season"
fallback; change the logic to check presence via the raw integer
(meta.get('episode_int')) and encode episodes so that 96 is reserved for
"complete season" and episodes start at 97: if meta.get('episode_int') is not
None then set options_dict[6] = 96 + int(meta.get('episode_int')) + 1, otherwise
set options_dict[6] = 96; update the block that references meta, episode, and
options_dict[6] accordingly so episode 0 becomes 97 and no longer collides with
the fallback.
---
Nitpick comments:
In `@src/trackers/C411.py`:
- Around line 553-578: In download_torrent, move the successful completion
return out of the try body into an else block so the try only contains
operations that may raise (i.e., the HTTP request and file write) and only
return None in the else after the try; also replace the broad except Exception
with a narrower catch for expected errors (e.g., httpx.HTTPError and OSError)
while still logging the exception via the existing variable name (e) so failures
remain non-fatal and informative.
- Around line 443-471: The # noqa: PERF203 comments are unused noise in the
upload flow; remove the " # noqa: PERF203" suffixes from the try line and the
except httpx.HTTPStatusError as e: line so the code no longer contains
ineffective Ruff suppressions (look for the try block that opens the
httpx.AsyncClient call and the corresponding "except httpx.HTTPStatusError as e"
handler in C411.py).
Use template
Upgrade C411 template and bugfix if no translated title
Allow multiple subdirectory depth
C411 is a new french tracker