File: src/opengradient/client/llm.py lines 255 and 330
Both completion() and chat() strip the provider prefix from the model string before building the request payload:
```python
model_id = model.split("/")[1]   # completion(), line 255
model=model.split("/")[1],       # _ChatParams in chat(), line 330
```
All TEE_LLM enum values contain a / (e.g. "anthropic/claude-haiku-4-5"), so normal usage is safe.
But if a caller passes a plain string without a slash (passing a bare string is undocumented but not prevented by type checking),
Python raises IndexError: list index out of range, with no message pointing at the real problem.
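A minimal reproduction of the failure mode (no library imports needed; the string below is a hypothetical caller input standing in for a model name without a provider prefix):

```python
# Simulates the unguarded split in completion()/chat() when the
# model string has no slash.
model = "claude-haiku-4-5"  # hypothetical caller input, no "provider/" prefix

try:
    model_id = model.split("/")[1]  # same expression as line 255
except IndexError as exc:
    print(f"IndexError: {exc}")  # -> IndexError: list index out of range
```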
The _validate_model_string() function in og_langchain.py does check for /, but LLM.chat() and LLM.completion() have no such guard.
A small helper that validates the format before splitting could fix it:
```python
def _get_model_id(model: TEE_LLM) -> str:
    value = model.value if isinstance(model, TEE_LLM) else str(model)
    parts = value.split("/", 1)
    if len(parts) != 2:
        raise ValueError(
            f"Invalid model identifier '{value}'. Expected 'provider/model-name' format."
        )
    return parts[1]
```