Skip to content

feat: updated sdk for highflame + restructured SDK#237

Open
rsharath wants to merge 5 commits intomainfrom
v2
Open

feat: updated sdk for highflame + restructured SDK#237
rsharath wants to merge 5 commits intomainfrom
v2

Conversation

@rsharath
Copy link
Contributor

No description provided.

default_headers=javelin_headers,
api_version="2024-02-15-preview",
)
print(f"Synchronous AzureOpenAI client key: {llm_api_key}")

Check failure

Code scanning / CodeQL

Clear-text logging of sensitive information High

This expression logs
sensitive data (password)
as clear text.
if st.button("Execute Query"):
if query_text.strip():
try:
result_df = pd.read_sql_query(query_text, conn)

Check failure

Code scanning / CodeQL

SQL query built from user-controlled sources High

This SQL query depends on a
user-provided value
.

headers = {"x-highflame-route": route, "x-api-key": self.api_key}
serialized["kwargs"]["model_kwargs"]["extra_headers"].update(headers)
print(f"Modified headers to: {headers}")

Check failure

Code scanning / CodeQL

Clear-text logging of sensitive information High

This expression logs
sensitive data (password)
as clear text.
llm_api_key = os.getenv("OPENAI_API_KEY")
if not api_key or not llm_api_key:
raise ValueError("Both HIGHFLAME_API_KEY and OPENAI_API_KEY must be set.")
print("OpenAI LLM API Key:", llm_api_key)

Check failure

Code scanning / CodeQL

Clear-text logging of sensitive information High

This expression logs
sensitive data (password)
as clear text.
try:
# Set (and print) the OpenAI key
openai_api_key = os.getenv("OPENAI_API_KEY") # define your openai api key here
print(f"Synchronous OpenAI client key: {openai_api_key}")

Check failure

Code scanning / CodeQL

Clear-text logging of sensitive information High

This expression logs
sensitive data (password)
as clear text.
openai_api_key = os.getenv("OPENAI_API_KEY")
api_key = os.getenv("HIGHFLAME_API_KEY")
javelin_headers = {"x-highflame-apikey": api_key}
print(f"[DEBUG] Synchronous OpenAI client key: {openai_api_key}")

Check failure

Code scanning / CodeQL

Clear-text logging of sensitive information High

This expression logs
sensitive data (password)
as clear text.
@gemini-code-assist
Copy link
Contributor

Summary of Changes

Hello @rsharath, I'm Gemini Code Assist1! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a significant architectural overhaul to the Highflame Python SDK, primarily by decoupling its core functionalities from the command-line interface. This strategic separation aims to provide a more streamlined and efficient experience for developers, allowing them to install only the components they need. The changes also encompass a thorough rebranding and standardization of naming conventions, ensuring a cohesive and future-proof foundation for the SDK's continued evolution. The inclusion of extensive new documentation and practical examples across various LLM providers and agent frameworks demonstrates the versatility and ease of integration of the updated SDK.

Highlights

  • SDK and CLI Separation: The core highflame package has been restructured into two distinct packages: highflame for the SDK and highflame-cli for the command-line interface. This change aims to reduce unnecessary dependencies for users who only require the SDK functionalities, resulting in a smaller installation footprint.
  • Comprehensive Rebranding: All references to 'Javelin' throughout the SDK, CLI, and documentation have been systematically replaced with 'Highflame', ensuring consistent branding across the entire ecosystem.
  • Generic Class Naming Convention: Key class names have been made more generic to improve clarity and reduce vendor lock-in perception. For instance, JavelinClient is now Client, JavelinConfig is Config, and JavelinClientError is ClientError.
  • Updated Configuration and Headers: Environment variables and HTTP headers have been updated from JAVELIN_* to HIGHFLAME_* (e.g., JAVELIN_API_KEY is now HIGHFLAME_API_KEY), standardizing the configuration parameters.
  • Extensive Documentation and Examples: New documentation files such as CLI_SEPARATION_PLAN.md, LOGGING.md, MIGRATION_GUIDE.md, and README_V2.md have been added. Furthermore, a wide array of new examples covering various LLM integrations (Anthropic, Azure OpenAI, Bedrock, Gemini, Mistral, OpenAI), agent frameworks (Google ADK, CrewAI, LangGraph, OpenAI Agents), RAG implementations, and guardrails usage have been introduced.
  • Customer Support Agent Example: A full-fledged customer support agent example has been added, showcasing the integration of LangGraph, Model Context Protocol (MCP), and Highflame's unified LLM provider capabilities, complete with database tools, web search, email functionality, and a Streamlit UI.

🧠 New Feature in Public Preview: You can now enable Memory to help Gemini Code Assist learn from your team's feedback. This makes future code reviews more consistent and personalized to your project's style. Click here to enable Memory in your admin console.

Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature Command Description
Code Review /gemini review Performs a code review for the current pull request in its current state.
Pull Request Summary /gemini summary Provides a summary of the current pull request in its current state.
Comment @gemini-code-assist Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help /gemini help Displays a list of available commands.

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for Github and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Copy link

Copilot AI left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Pull request overview

This PR updates the SDK for Highflame and restructures the SDK architecture. The changes introduce new chat completion handlers, add numerous example implementations demonstrating integration with various LLM providers, and include RAG (Retrieval Augmented Generation) examples.

Changes:

  • Added new chat completions module with base and specialized completion handlers
  • Introduced extensive example code for OpenAI, Mistral, and other provider integrations
  • Added RAG implementation examples with embedding support
  • Updated __init__.py with new exports and exception handling

Reviewed changes

Copilot reviewed 95 out of 120 changed files in this pull request and generated 9 comments.

Show a summary per file
File Description
v2/highflame/chat_completions.py New core module implementing chat and text completion handlers with provider-specific transformations
v2/highflame/init.py Updated module exports with exception classes and models
v2/examples/route_examples/*.py Example implementations for route-based SDK usage
v2/examples/rag/*.ipynb RAG implementation examples with Highflame integration
v2/examples/openai/*.py Comprehensive OpenAI integration examples including function calling and streaming
v2/examples/mistral/*.py Mistral AI integration examples
v2/examples/guardrails/*.py Guardrails service integration examples

💡 Add Copilot custom instructions for smarter, more guided reviews. Learn how to get started.

base_url = provider_api_base.rstrip("/")
if model:
model_rules = self.rule_manager.get_rules(base_url, model)
print("model_rules", model_rules)
Copy link

Copilot AI Jan 12, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Debug print statement should be removed from production code. This appears to be leftover debugging code that will clutter logs in production environments.

Suggested change
print("model_rules", model_rules)
logger.debug("Anthropic model_rules: %s", model_rules)

Copilot uses AI. Check for mistakes.

__all__ = [
"GatewayNotFoundError",
"GatewayAlreadyExistsError" "ProviderNotFoundError",
Copy link

Copilot AI Jan 12, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Missing comma between string literals. Python will concatenate these strings, creating a single export "GatewayAlreadyExistsErrorProviderNotFoundError" instead of two separate exports. This should be "GatewayAlreadyExistsError", "ProviderNotFoundError",.

Copilot uses AI. Check for mistakes.
"Templates",
"Secret",
"Secrets",
"QueryBody",
Copy link

Copilot AI Jan 12, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The QueryBody class is exported in __all__ but is not imported in the imports section at the top of the file. This will cause an error when users try to import this symbol.

Copilot uses AI. Check for mistakes.
import dotenv

from highflame import (
Client,
Copy link

Copilot AI Jan 12, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The imported Client class does not match the actual class being used (Highflame). This import appears to be incorrect and should be removed or the code should be updated to use the correct import.

Suggested change
Client,
Highflame,

Copilot uses AI. Check for mistakes.
import dotenv

from highflame import (
Client,
Copy link

Copilot AI Jan 12, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The imported Client class does not match the actual class being used (Highflame). This import appears to be incorrect and should be removed or the code should be updated to use the correct import.

Suggested change
Client,
Highflame,

Copilot uses AI. Check for mistakes.
import dotenv

from highflame import (
Client,
Copy link

Copilot AI Jan 12, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The imported Client class does not match the actual class being used (Highflame). This import appears to be incorrect and should be removed or the code should be updated to use the correct import.

Copilot uses AI. Check for mistakes.
"cell_type": "markdown",
"metadata": {},
"source": [
"# RAG USING JAVELIN WITH COMMENTS"
Copy link

Copilot AI Jan 12, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The title references "JAVELIN" but the code uses "Highflame". This inconsistency should be corrected to "RAG USING HIGHFLAME WITH COMMENTS" for clarity.

Suggested change
"# RAG USING JAVELIN WITH COMMENTS"
"# RAG USING HIGHFLAME WITH COMMENTS"

Copilot uses AI. Check for mistakes.
"cell_type": "markdown",
"metadata": {},
"source": [
"## Simple inializing javelin model"
Copy link

Copilot AI Jan 12, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Corrected spelling of 'inializing' to 'initializing'.

Suggested change
"## Simple inializing javelin model"
"## Simple initializing javelin model"

Copilot uses AI. Check for mistakes.
response = azure_client.chat.completions.create(model="gpt-4o", messages=messages)

print("Structured Output (JSON) Response:")
print("Structured Output (JSON) Response:")
Copy link

Copilot AI Jan 12, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Duplicate print statement. The message "Structured Output (JSON) Response:" is printed twice consecutively, which is redundant. Remove one of these lines.

Suggested change
print("Structured Output (JSON) Response:")

Copilot uses AI. Check for mistakes.
Copy link
Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Code Review

This pull request introduces a significant rebranding effort, transitioning the SDK and CLI from 'Javelin' to 'Highflame', along with a plan to separate the CLI into its own package. New documentation files, including a CLI separation plan, a logging guide, a migration guide from v1 to v2, and an updated README, have been added to reflect these changes. Review comments highlight several areas for improvement across various example files: ensuring consistent use of the new 'Highflame' branding in code and markdown, correcting API key header names (e.g., x-api-key to x-highflame-apikey), updating client initialization from Highflame(mcp_url) to fastmcp.Client(mcp_url) where appropriate, refactoring duplicated code in agent examples for better maintainability, and addressing an unnecessary debug print statement. Additionally, a suggestion was made to standardize on httpx over requests in the CLI's pyproject.toml for dependency consistency.

if _mcp_client is None:
# FastMCP defaults to port 8000 with SSE transport
mcp_url = os.getenv("MCP_SERVER_URL", "http://0.0.0.0:9000/mcp")
_mcp_client = Highflame(mcp_url)
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

critical

The get_mcp_client function is incorrectly initializing the MCP client using Highflame(mcp_url). The Highflame class is for the main SDK, not the MCP client. Based on your requirements.txt and mcp_server.py, you should be using fastmcp.Client.

Suggested change
_mcp_client = Highflame(mcp_url)
_mcp_client = Client(mcp_url)

Comment on lines +81 to +315
"from highflame import (\n",
" Client,\n",
" Config,\n",
" Route,\n",
" RouteNotFoundError,\n",
" QueryResponse\n",
")\n",
"\n",
"load_dotenv() # Load environment variables from .env file\n",
"\n",
"# Set up logging\n",
"logging.basicConfig(level=logging.INFO)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configuration\n",
"\n",
"Define the conversation flow and agent behavior:"
]
},
{
"cell_type": "code",
"execution_count": 34,
"metadata": {},
"outputs": [],
"source": [
"config = {\n",
" \"starting_node\": \"intro\",\n",
" \"main_prompt\": \"You are Alex, an automated assistant from Google, conducting a feedback collection session with a customer who recently interacted with our services. If you dont know the name of the customer, ask for it, donot make up a name/ say [customer name]. Your goal is to gather detailed feedback on their experience, ensuring they feel heard and valued. End the call with safe message for anything other than the expected response in our context.\",\n",
" \"nodes\": [\n",
" {\n",
" \"id\": \"intro\",\n",
" \"prompt\": \"Task:\\n1. Introduce yourself, stating that you are calling from Google to collect feedback.\\n2. Confirm if the callee is the correct customer.\\n - If not, use end_call to apologize for the confusion and hang up.\\n - If the customer is not available, use end_call to politely hang up, indicating you will call back later.\\n3. Explain the purpose of the call and ask if they are willing to provide feedback.\\n - If they agree, transition to feedback_questions.\\n - If they decline, use end_call to apologize for the inconvenience and hang up.\"\n",
" },\n",
" {\n",
" \"id\": \"feedback_questions\",\n",
" \"prompt\": \"Task:\\n1. Ask the customer a series of feedback questions, such as:\\n - How satisfied were you with our service?\\n - What did you like most about your experience?\\n - What can we improve on?\\n2. Allow the customer to provide detailed responses. Capture their feedback.\\n3. If the customer has no further comments, express gratitude for their time.\\n4. Ask if they would be willing to leave a public review on our website or social media.\\n - If yes, provide the necessary details and transition to review_request.\\n - If no, transition to end_call.\"\n",
" },\n",
" {\n",
" \"id\": \"review_request\",\n",
" \"prompt\": \"Task:\\n1. Thank the customer for agreeing to leave a review.\\n2. Provide them with the link or instructions on where to leave the review.\\n3. Offer to answer any final questions or provide assistance with the review process.\\n4. Once done, transition to end_call.\"\n",
" }\n",
" ],\n",
" \"edges\": [\n",
" {\n",
" \"id\": \"feedback_edge\",\n",
" \"prompt\": \"Transition to ask feedback questions if the customer agrees to provide feedback.\",\n",
" \"source_node\": \"intro\",\n",
" \"target_node\": \"feedback_questions\"\n",
" },\n",
" {\n",
" \"id\": \"review_edge\",\n",
" \"prompt\": \"Transition to the review request if the customer agrees to leave a public review.\",\n",
" \"source_node\": \"feedback_questions\",\n",
" \"target_node\": \"review_request\"\n",
" }\n",
" ]\n",
"}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Javelin Route Setup\n",
"\n",
"Function to set up and manage the Javelin route:"
]
},
{
"cell_type": "code",
"execution_count": 35,
"metadata": {},
"outputs": [],
"source": [
"def setup_javelin_route(javelin_client):\n",
" route_name = \"test_route_1\"\n",
" try:\n",
" existing_route = javelin_client.get_route(route_name)\n",
" print(f\"Found existing route '{route_name}'\")\n",
" return existing_route\n",
" except RouteNotFoundError:\n",
" route_data = {\n",
" \"name\": route_name,\n",
" \"type\": \"chat\",\n",
" \"enabled\": True,\n",
" \"models\": [\n",
" {\n",
" \"name\": \"gpt-3.5-turbo\",\n",
" \"provider\": \"openai\",\n",
" \"suffix\": \"/chat/completions\",\n",
" }\n",
" ],\n",
" \"config\": {\n",
" \"organization\": \"myusers\",\n",
" \"rate_limit\": 7,\n",
" \"retries\": 3,\n",
" \"archive\": True,\n",
" \"retention\": 7,\n",
" \"budget\": {\n",
" \"enabled\": True,\n",
" \"annual\": 100000,\n",
" \"currency\": \"USD\",\n",
" },\n",
" \"dlp\": {\"enabled\": True, \"strategy\": \"Inspect\", \"action\": \"notify\"},\n",
" },\n",
" }\n",
" route = Route.parse_obj(route_data)\n",
" try:\n",
" javelin_client.create_route(route)\n",
" print(f\"Route '{route_name}' created successfully\")\n",
" return route\n",
" except Exception as e:\n",
" print(f\"Failed to create route: {str(e)}\")\n",
" return None\n",
" except Exception as e:\n",
" print(f\"Error checking for existing route: {str(e)}\")\n",
" return None"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Secure AI Agent Class\n",
"\n",
"Main class implementation for the AI agent:"
]
},
{
"cell_type": "code",
"execution_count": 36,
"metadata": {},
"outputs": [],
"source": [
"class SecureAIAgent:\n",
" def __init__(self, config: Dict, javelin_config: Config):\n",
" self.config = config\n",
" self.javelin_config = javelin_config\n",
" self.setup_javelin_client()\n",
" self.system_prompt = self.create_full_prompt()\n",
" self.conversation_history = []\n",
"\n",
" def setup_javelin_client(self):\n",
" self.javelin_client = Highflame(self.javelin_config)\n",
"\n",
" def create_full_prompt(self) -> str:\n",
" nodes = self.config['nodes']\n",
" edges = self.config.get('edges', [])\n",
" \n",
" node_prompts = [f\"Node {node['id']}:\\n{node['prompt']}\\n\" for node in nodes]\n",
" edge_prompts = [f\"Edge {edge['id']} (from {edge['source_node']} to {edge['target_node']}):\\n{edge['prompt']}\\n\" for edge in edges]\n",
" \n",
" full_prompt = f\"\"\"\n",
"{self.config['main_prompt']}\n",
"\n",
"Available nodes and their tasks:\n",
"{\"\\n\".join(node_prompts)}\n",
"\n",
"Conversation flow (edges):\n",
"{\"\\n\".join(edge_prompts)}\n",
"\n",
"Your task:\n",
"1. Understand the user's intent and the current stage of the conversation.\n",
"2. Process the appropriate node based on the conversation flow.\n",
"3. Provide a response to the user, handling all necessary steps for the current node.\n",
"4. Use the edge information to determine when and how to transition between nodes.\n",
"\n",
"Remember to stay in character throughout the conversation.\n",
"Starting node: {self.config['starting_node']}\n",
"\"\"\"\n",
" return full_prompt\n",
"\n",
" async def process_message(self, message: str) -> str:\n",
" self.conversation_history.append({\"role\": \"user\", \"content\": message})\n",
"\n",
" try:\n",
" query_data = {\n",
" \"model\": \"gpt-3.5-turbo\",\n",
" \"messages\": [\n",
" {\"role\": \"system\", \"content\": self.system_prompt},\n",
" *self.conversation_history\n",
" ],\n",
" \"temperature\": 0.7,\n",
" }\n",
"\n",
" response: QueryResponse = self.javelin_client.query_route(\"test_route_1\", query_data)\n",
" ai_message = response['choices'][0]['message']['content']\n",
" self.conversation_history.append({\"role\": \"assistant\", \"content\": ai_message})\n",
"\n",
" return ai_message\n",
" except RouteNotFoundError:\n",
" logging.error(\"Route 'test_route_1' not found. Attempting to recreate...\")\n",
" setup_javelin_route(self.javelin_client)\n",
" raise\n",
" except Exception as e:\n",
" logging.error(f\"Error in process_message: {str(e)}\")\n",
" raise"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Running the Agent\n",
"\n",
"Function to run the agent interactively:"
]
},
{
"cell_type": "code",
"execution_count": 37,
"metadata": {},
"outputs": [],
"source": [
"async def run_agent():\n",
" try:\n",
" # Set up Javelin configuration\n",
" api_key = os.getenv(\"HIGHFLAME_API_KEY\")\n",
" javelin_virtualapikey = os.getenv(\"JAVELIN_VIRTUALAPIKEY\")\n",
" llm_api_key = os.getenv(\"LLM_API_KEY\")\n",
"\n",
" if not all([api_key, javelin_virtualapikey, llm_api_key]):\n",
" print(\"Error: Missing required environment variables. Please check your .env file.\")\n",
" return\n",
"\n",
" javelin_config = Config(\n",
" base_url=\"https://api.highflame.app\",\n",
" api_key=api_key,\n",
" javelin_virtualapikey=javelin_virtualapikey,\n",
" llm_api_key=llm_api_key,\n",
" )\n",
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

critical

This example notebook appears to be using outdated code from the v1 SDK, which will cause it to fail. Specifically:

  1. The import from highflame import Client is incorrect; it should be from highflame import Highflame.
  2. The environment variable JAVELIN_VIRTUALAPIKEY is used, but it should be HIGHFLAME_VIRTUALAPIKEY.
  3. The Config object is initialized with javelin_virtualapikey, but the correct parameter name in v2 is virtual_api_key.
  4. Variable names like javelin_client and javelin_config should be updated to highflame_client and highflame_config for consistency.

Please update the entire example to use the new Highflame v2 conventions.

Comment on lines +4 to +44
const javelinApiKey = ""; // javelin api key here
const llmApiKey = ""; // llm api key


async function getCompletion() {
try {
const routeName = 'AzureOpenAIRoute';
const url = `${process.env.JAVELIN_BASE_URL}/v1/query/${routeName}`;

const response = await axios.post(
url,
{
messages: [
{ role: 'system', content: 'Hello, you are a helpful scientific assistant.' },
{ role: 'user', content: 'What is the chemical composition of sugar?' },
],
model: 'gpt-3.5-turbo',
},
{
headers: {
'x-api-key': javelinApiKey,
'api-key': llmApiKey,
},
}
);
console.log(response.data.choices[0].message.content);
} catch (error) {
if (error.response) {
console.error('Error status:', error.response.status);
console.error('Error data:', error.response.data);
} else {
console.error('Error message:', error.message);
}
}
}


// Function to stream responses from the API
async function streamCompletion() {
try {
const url = "https://api.javelin.live/v1/query/AzureOpenAIRoute";
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

critical

This Javascript example has several issues that will prevent it from working correctly:

  1. The variable javelinApiKey should be renamed to highflameApiKey for consistency with the new SDK name.
  2. The header x-api-key is used for the Highflame API key, but the correct header is x-highflame-apikey.
  3. The URL https://api.javelin.live/v1/query/AzureOpenAIRoute is hardcoded and uses the old javelin.live domain. It should use highflame.app and be constructed from environment variables.

openai_api_version="2023-03-15-preview",
extra_headers={
"x-highflame-route": "azureopenai_univ",
"x-api-key": os.environ.get("HIGHFLAME_API_KEY"),
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

high

The header x-api-key is used here, but the correct header for the Highflame API key is x-highflame-apikey. Please update this to ensure authentication works as expected.

Suggested change
"x-api-key": os.environ.get("HIGHFLAME_API_KEY"),
"x-highflame-apikey": os.environ.get("HIGHFLAME_API_KEY"),

pytest.skip("OPENAI_API_KEY not set")

try:
llm = get_llm("openai")
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

high

This test calls get_llm("openai"), but the get_llm function in src/agent/llm.py does not accept a provider argument; it determines the provider from the LLM_PROVIDER environment variable. This test will fail. Please update the test to set the environment variable before calling get_llm().

Comment on lines +22 to +67
research_agent = LlmAgent(
model=LiteLlm(
model="openai/gpt-4o",
api_base="https://api.highflame.app/v1",
extra_headers={
"x-highflame-route": "openai_univ",
"x-api-key": HIGHFLAME_API_KEY,
"Authorization": f"Bearer {OPENAI_API_KEY}",
},
),
name="ResearchAgent",
instruction="Research the query and save findings in state['research'].",
output_key="research",
)

# Agent 2: Summarizer
summary_agent = LlmAgent(
model=LiteLlm(
model="openai/gpt-4o",
api_base="https://api.highflame.app/v1",
extra_headers={
"x-highflame-route": "openai_univ",
"x-api-key": HIGHFLAME_API_KEY,
"Authorization": f"Bearer {OPENAI_API_KEY}",
},
),
name="SummaryAgent",
instruction="Summarize state['research'] into state['summary'].",
output_key="summary",
)

# Agent 3: Reporter
report_agent = LlmAgent(
model=LiteLlm(
model="openai/gpt-4o",
api_base="https://api.highflame.app/v1",
extra_headers={
"x-highflame-route": "openai_univ",
"x-api-key": HIGHFLAME_API_KEY,
"Authorization": f"Bearer {OPENAI_API_KEY}",
},
),
name="ReportAgent",
instruction="Generate a report from state['summary'] and include a source URL.",
output_key="report",
)
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

medium

Similar to the Gemini agent example, there's a lot of duplicated code for creating the three agents since the model configuration is the same for all. Consider refactoring this by creating the LiteLlm model instance once and reusing it to make the code cleaner and easier to maintain.

"source": [
"### How Javelin and CrewwAI Integrate\n",
"\n",
"## Javelin Setup:\n",
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

medium

The markdown cells in this notebook still refer to "Javelin" instead of "Highflame". Please update these references to reflect the new SDK name. For example, "How Javelin and CrewwAI Integrate" should be "How Highflame and CrewwAI Integrate".

Comment on lines +88 to +89
" - The Javelin API is used to validate and refine the user's query by providing a route (`testing` in this case). It helps assess whether the request is suitable for email generation and aids in refining the query for clarity.\n",
" - **Headers**: Contains the API key and the route for Javelin (`x-javelin-route`).\n",
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

medium

The markdown cells in this notebook still refer to "Javelin" instead of "Highflame". Please update these references to reflect the new SDK name.

version = "2.0.0"
dependencies = [
"highflame>=2.0.0",
"requests>=2.32.3",
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

medium

In the proposed pyproject.toml for the CLI, you're adding requests as a dependency. However, the core SDK seems to be using httpx. To maintain consistency and minimize the number of dependencies, consider using httpx for the CLI as well, unless requests is a specific requirement for another dependency.

base_url = provider_api_base.rstrip("/")
if model:
model_rules = self.rule_manager.get_rules(base_url, model)
print("model_rules", model_rules)
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

medium

This print statement appears to be for debugging purposes. Please remove it before merging.

rsharath and others added 4 commits January 11, 2026 22:46
Major Changes:
- Renamed package from javelin_sdk to highflame (PyPI: highflame)
- Renamed main client class from JavelinClient to Highflame
- Updated all environment variables: JAVELIN_* → HIGHFLAME_*
- Updated HTTP headers: x-javelin-* → x-highflame-*
- Updated API endpoints: api-dev.javelin.live → api.highflame.app
- Updated configuration field names: javelin_api_key → api_key

Code Quality Improvements:
- Added py.typed marker for type hint support
- Implemented structured logging across SDK and services
  - Client initialization logging
  - Route query operation logging
  - Tracing configuration logging
- Added LOGGING.md with comprehensive logging guide

Documentation:
- Created README_V2.md with complete v2 documentation
- Created MIGRATION_GUIDE.md for v1 → v2 migration
- Created LOGGING.md with logging setup and best practices
- Created CLI_SEPARATION_PLAN.md with detailed strategy for future CLI separation
- Updated all code examples and documentation references

Configuration:
- Created v2/pyproject.toml with package name "highflame" (version 2.0.0)
- Created CLI_PYPROJECT.toml as template for future CLI package separation
- Updated package metadata and dependencies

Examples:
- Renamed all example files from javelin_* to highflame_*
- Updated all example imports and configuration references
- Reorganized example directories

This represents a complete rebranding from Javelin to Highflame with improved
code quality, better logging, and a clear path for future CLI package separation.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
- Updated pyproject.toml with comments about future CLI separation
- CLI will be published to PyPI as: highflame-cli (with hyphen)
- Python module remains: highflame_cli (with underscore)
- CLI command entry point: highflame

This clarifies the naming convention:
- pip install highflame (SDK)
- pip install highflame-cli (CLI tool)
- import highflame_cli (Python module)
- $ highflame (command)

See CLI_PYPROJECT.toml and CLI_SEPARATION_PLAN.md for separation details.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
Created PROJECT_STATUS.md documenting:
- All completed v2.0.0 tasks
- Current project structure and file organization
- Installation and usage instructions
- Complete documentation guide
- Next steps and roadmap (phases 1-5)
- Quality metrics and feature list
- Design decisions and rationale
- Git commit history
- Summary and release readiness

This provides a single source of truth for:
- Project status and progress
- How to use the SDK
- What's been completed
- What's planned next
- All available documentation

v2.0.0 is now ready for:
1. Testing and validation
2. PyPI publishing
3. User migration from v1
4. Future enhancements

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants