| default_headers=javelin_headers, | ||
| api_version="2024-02-15-preview", | ||
| ) | ||
| print(f"Synchronous AzureOpenAI client key: {llm_api_key}") |
Check failure (Code scanning / CodeQL): Clear-text logging of sensitive information (High)
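All of the clear-text logging findings in this PR share one fix: never print the raw key. A minimal masking helper (a sketch, not part of the SDK) keeps logs useful without exposing the secret:

```python
def mask_secret(value: str, visible: int = 4) -> str:
    """Mask a secret, keeping only the last `visible` characters."""
    if not value:
        return "<unset>"
    if len(value) <= visible:
        return "*" * len(value)
    return "*" * (len(value) - visible) + value[-visible:]

# Instead of printing the raw key, log the masked form:
# print(f"Synchronous AzureOpenAI client key: {mask_secret(llm_api_key)}")
print(mask_secret("sk-abcdef123456"))
```

Any of the flagged `print(f"... {llm_api_key}")` lines can then log `mask_secret(llm_api_key)` instead, or be dropped entirely.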
| if st.button("Execute Query"): | ||
| if query_text.strip(): | ||
| try: | ||
| result_df = pd.read_sql_query(query_text, conn) |
Check failure (Code scanning / CodeQL): SQL query built from user-controlled sources (High)
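Since `pd.read_sql_query(query_text, conn)` executes arbitrary user text, the usual mitigations are parameter binding for values and an allow-list for identifiers (which cannot be bound). A sketch with hypothetical table and column names:

```python
import sqlite3

ALLOWED_TABLES = {"orders"}
ALLOWED_COLUMNS = {"id", "status"}

def run_readonly_query(conn, table, column, value):
    # Identifiers come from an allow-list; the value is bound as a
    # parameter so user input never reaches the SQL text itself.
    if table not in ALLOWED_TABLES or column not in ALLOWED_COLUMNS:
        raise ValueError(f"identifier not permitted: {table}.{column}")
    sql = f"SELECT * FROM {table} WHERE {column} = ?"
    return conn.execute(sql, (value,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
conn.execute("INSERT INTO orders VALUES (1, 'shipped')")
rows = run_readonly_query(conn, "orders", "status", "shipped")
print(rows)  # [(1, 'shipped')]
```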
| headers = {"x-highflame-route": route, "x-api-key": self.api_key} | ||
| serialized["kwargs"]["model_kwargs"]["extra_headers"].update(headers) | ||
| print(f"Modified headers to: {headers}") |
Check failure (Code scanning / CodeQL): Clear-text logging of sensitive information (High)
| llm_api_key = os.getenv("OPENAI_API_KEY") | ||
| if not api_key or not llm_api_key: | ||
| raise ValueError("Both HIGHFLAME_API_KEY and OPENAI_API_KEY must be set.") | ||
| print("OpenAI LLM API Key:", llm_api_key) |
Check failure (Code scanning / CodeQL): Clear-text logging of sensitive information (High)
| try: | ||
| # Set (and print) the OpenAI key | ||
| openai_api_key = os.getenv("OPENAI_API_KEY") # define your openai api key here | ||
| print(f"Synchronous OpenAI client key: {openai_api_key}") |
Check failure (Code scanning / CodeQL): Clear-text logging of sensitive information (High)
| openai_api_key = os.getenv("OPENAI_API_KEY") | ||
| api_key = os.getenv("HIGHFLAME_API_KEY") | ||
| javelin_headers = {"x-highflame-apikey": api_key} | ||
| print(f"[DEBUG] Synchronous OpenAI client key: {openai_api_key}") |
Check failure (Code scanning / CodeQL): Clear-text logging of sensitive information (High)
Summary of Changes

Hello @rsharath, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request introduces a significant architectural overhaul of the Highflame Python SDK, primarily by decoupling its core functionality from the command-line interface. This separation gives developers a more streamlined experience, letting them install only the components they need. The changes also include a thorough rebranding and standardization of naming conventions, laying a cohesive foundation for the SDK's continued evolution. Extensive new documentation and practical examples across various LLM providers and agent frameworks demonstrate the versatility and ease of integration of the updated SDK.
Pull request overview
This PR updates the SDK for Highflame and restructures the SDK architecture. The changes introduce new chat completion handlers, add numerous example implementations demonstrating integration with various LLM providers, and include RAG (Retrieval Augmented Generation) examples.
Changes:
- Added new chat completions module with base and specialized completion handlers
- Introduced extensive example code for OpenAI, Mistral, and other provider integrations
- Added RAG implementation examples with embedding support
- Updated `__init__.py` with new exports and exception handling
Reviewed changes
Copilot reviewed 95 out of 120 changed files in this pull request and generated 9 comments.
| File | Description |
|---|---|
| v2/highflame/chat_completions.py | New core module implementing chat and text completion handlers with provider-specific transformations |
| v2/highflame/__init__.py | Updated module exports with exception classes and models |
| v2/examples/route_examples/*.py | Example implementations for route-based SDK usage |
| v2/examples/rag/*.ipynb | RAG implementation examples with Highflame integration |
| v2/examples/openai/*.py | Comprehensive OpenAI integration examples including function calling and streaming |
| v2/examples/mistral/*.py | Mistral AI integration examples |
| v2/examples/guardrails/*.py | Guardrails service integration examples |
| base_url = provider_api_base.rstrip("/") | ||
| if model: | ||
| model_rules = self.rule_manager.get_rules(base_url, model) | ||
| print("model_rules", model_rules) |
Debug print statement should be removed from production code. This appears to be leftover debugging code that will clutter logs in production environments.
| print("model_rules", model_rules) | |
| logger.debug("Anthropic model_rules: %s", model_rules) |
| __all__ = [ | ||
| "GatewayNotFoundError", | ||
| "GatewayAlreadyExistsError" "ProviderNotFoundError", |
Missing comma between string literals. Python will concatenate these strings, creating a single export `"GatewayAlreadyExistsErrorProviderNotFoundError"` instead of two separate exports. This should be `"GatewayAlreadyExistsError", "ProviderNotFoundError",`.
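The failure mode is easy to demonstrate: adjacent string literals are concatenated at compile time, so the missing comma silently merges two exports into one bogus name:

```python
broken = [
    "GatewayAlreadyExistsError" "ProviderNotFoundError",  # missing comma
]
fixed = [
    "GatewayAlreadyExistsError",
    "ProviderNotFoundError",
]
print(len(broken), broken[0])  # 1 GatewayAlreadyExistsErrorProviderNotFoundError
print(len(fixed))              # 2
```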
| "Templates", | ||
| "Secret", | ||
| "Secrets", | ||
| "QueryBody", |
The `QueryBody` class is exported in `__all__` but is not imported in the imports section at the top of the file. This will raise an `AttributeError` when users run `from highflame import *`.
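This class of bug only surfaces at star-import time, because `from module import *` resolves each `__all__` entry with `getattr`. A throwaway-module reproduction:

```python
import types

mod = types.ModuleType("demo")
mod.__all__ = ["QueryBody"]  # exported but never defined/imported

# Star-import resolves each __all__ entry via getattr, so the
# missing symbol surfaces only then, as an AttributeError.
try:
    getattr(mod, "QueryBody")
except AttributeError as exc:
    print(type(exc).__name__)  # AttributeError
```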
| import dotenv | ||
| from highflame import ( | ||
| Client, |
The imported `Client` class does not match the actual class being used (`Highflame`). This import appears to be incorrect and should be removed, or the code should be updated to use the correct import.
| Client, | |
| Highflame, |
| import dotenv | ||
| from highflame import ( | ||
| Client, |
The imported `Client` class does not match the actual class being used (`Highflame`). This import appears to be incorrect and should be removed, or the code should be updated to use the correct import.
| Client, | |
| Highflame, |
| import dotenv | ||
| from highflame import ( | ||
| Client, |
The imported `Client` class does not match the actual class being used (`Highflame`). This import appears to be incorrect and should be removed, or the code should be updated to use the correct import.
| "cell_type": "markdown", | ||
| "metadata": {}, | ||
| "source": [ | ||
| "# RAG USING JAVELIN WITH COMMENTS" |
The title references "JAVELIN" but the code uses "Highflame". This inconsistency should be corrected to "RAG USING HIGHFLAME WITH COMMENTS" for clarity.
| "# RAG USING JAVELIN WITH COMMENTS" | |
| "# RAG USING HIGHFLAME WITH COMMENTS" |
| "cell_type": "markdown", | ||
| "metadata": {}, | ||
| "source": [ | ||
| "## Simple inializing javelin model" |
Corrected spelling of 'inializing' to 'initializing'.
| "## Simple inializing javelin model" | |
| "## Simple initializing javelin model" |
| response = azure_client.chat.completions.create(model="gpt-4o", messages=messages) | ||
| print("Structured Output (JSON) Response:") | ||
| print("Structured Output (JSON) Response:") |
Duplicate print statement. The message "Structured Output (JSON) Response:" is printed twice consecutively, which is redundant. Remove one of these lines.
| print("Structured Output (JSON) Response:") |
Code Review
This pull request introduces a significant rebranding effort, transitioning the SDK and CLI from 'Javelin' to 'Highflame', along with a plan to separate the CLI into its own package. New documentation files (a CLI separation plan, a logging guide, a v1-to-v2 migration guide, and an updated README) have been added to reflect these changes. Review comments highlight several areas for improvement across the example files:
- ensuring consistent use of the new 'Highflame' branding in code and markdown
- correcting API key header names (e.g., `x-api-key` to `x-highflame-apikey`)
- updating client initialization from `Highflame(mcp_url)` to `fastmcp.Client(mcp_url)` where appropriate
- refactoring duplicated code in agent examples for better maintainability
- removing an unnecessary debug print statement

Additionally, a suggestion was made to standardize on `httpx` over `requests` in the CLI's pyproject.toml for dependency consistency.
| if _mcp_client is None: | ||
| # FastMCP defaults to port 8000 with SSE transport | ||
| mcp_url = os.getenv("MCP_SERVER_URL", "http://0.0.0.0:9000/mcp") | ||
| _mcp_client = Highflame(mcp_url) |
The get_mcp_client function is incorrectly initializing the MCP client using Highflame(mcp_url). The Highflame class is for the main SDK, not the MCP client. Based on your requirements.txt and mcp_server.py, you should be using fastmcp.Client.
| _mcp_client = Highflame(mcp_url) | |
| _mcp_client = Client(mcp_url) |
| "from highflame import (\n", | ||
| " Client,\n", | ||
| " Config,\n", | ||
| " Route,\n", | ||
| " RouteNotFoundError,\n", | ||
| " QueryResponse\n", | ||
| ")\n", | ||
| "\n", | ||
| "load_dotenv() # Load environment variables from .env file\n", | ||
| "\n", | ||
| "# Set up logging\n", | ||
| "logging.basicConfig(level=logging.INFO)" | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "markdown", | ||
| "metadata": {}, | ||
| "source": [ | ||
| "## Configuration\n", | ||
| "\n", | ||
| "Define the conversation flow and agent behavior:" | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "code", | ||
| "execution_count": 34, | ||
| "metadata": {}, | ||
| "outputs": [], | ||
| "source": [ | ||
| "config = {\n", | ||
| " \"starting_node\": \"intro\",\n", | ||
| "    \"main_prompt\": \"You are Alex, an automated assistant from Google, conducting a feedback collection session with a customer who recently interacted with our services. If you don't know the customer's name, ask for it; do not make up a name or say [customer name]. Your goal is to gather detailed feedback on their experience, ensuring they feel heard and valued. End the call with a safe message for anything other than the expected response in our context.\",\n", | ||
| " \"nodes\": [\n", | ||
| " {\n", | ||
| " \"id\": \"intro\",\n", | ||
| " \"prompt\": \"Task:\\n1. Introduce yourself, stating that you are calling from Google to collect feedback.\\n2. Confirm if the callee is the correct customer.\\n - If not, use end_call to apologize for the confusion and hang up.\\n - If the customer is not available, use end_call to politely hang up, indicating you will call back later.\\n3. Explain the purpose of the call and ask if they are willing to provide feedback.\\n - If they agree, transition to feedback_questions.\\n - If they decline, use end_call to apologize for the inconvenience and hang up.\"\n", | ||
| " },\n", | ||
| " {\n", | ||
| " \"id\": \"feedback_questions\",\n", | ||
| " \"prompt\": \"Task:\\n1. Ask the customer a series of feedback questions, such as:\\n - How satisfied were you with our service?\\n - What did you like most about your experience?\\n - What can we improve on?\\n2. Allow the customer to provide detailed responses. Capture their feedback.\\n3. If the customer has no further comments, express gratitude for their time.\\n4. Ask if they would be willing to leave a public review on our website or social media.\\n - If yes, provide the necessary details and transition to review_request.\\n - If no, transition to end_call.\"\n", | ||
| " },\n", | ||
| " {\n", | ||
| " \"id\": \"review_request\",\n", | ||
| " \"prompt\": \"Task:\\n1. Thank the customer for agreeing to leave a review.\\n2. Provide them with the link or instructions on where to leave the review.\\n3. Offer to answer any final questions or provide assistance with the review process.\\n4. Once done, transition to end_call.\"\n", | ||
| " }\n", | ||
| " ],\n", | ||
| " \"edges\": [\n", | ||
| " {\n", | ||
| " \"id\": \"feedback_edge\",\n", | ||
| " \"prompt\": \"Transition to ask feedback questions if the customer agrees to provide feedback.\",\n", | ||
| " \"source_node\": \"intro\",\n", | ||
| " \"target_node\": \"feedback_questions\"\n", | ||
| " },\n", | ||
| " {\n", | ||
| " \"id\": \"review_edge\",\n", | ||
| " \"prompt\": \"Transition to the review request if the customer agrees to leave a public review.\",\n", | ||
| " \"source_node\": \"feedback_questions\",\n", | ||
| " \"target_node\": \"review_request\"\n", | ||
| " }\n", | ||
| " ]\n", | ||
| "}" | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "markdown", | ||
| "metadata": {}, | ||
| "source": [ | ||
| "## Javelin Route Setup\n", | ||
| "\n", | ||
| "Function to set up and manage the Javelin route:" | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "code", | ||
| "execution_count": 35, | ||
| "metadata": {}, | ||
| "outputs": [], | ||
| "source": [ | ||
| "def setup_javelin_route(javelin_client):\n", | ||
| " route_name = \"test_route_1\"\n", | ||
| " try:\n", | ||
| " existing_route = javelin_client.get_route(route_name)\n", | ||
| " print(f\"Found existing route '{route_name}'\")\n", | ||
| " return existing_route\n", | ||
| " except RouteNotFoundError:\n", | ||
| " route_data = {\n", | ||
| " \"name\": route_name,\n", | ||
| " \"type\": \"chat\",\n", | ||
| " \"enabled\": True,\n", | ||
| " \"models\": [\n", | ||
| " {\n", | ||
| " \"name\": \"gpt-3.5-turbo\",\n", | ||
| " \"provider\": \"openai\",\n", | ||
| " \"suffix\": \"/chat/completions\",\n", | ||
| " }\n", | ||
| " ],\n", | ||
| " \"config\": {\n", | ||
| " \"organization\": \"myusers\",\n", | ||
| " \"rate_limit\": 7,\n", | ||
| " \"retries\": 3,\n", | ||
| " \"archive\": True,\n", | ||
| " \"retention\": 7,\n", | ||
| " \"budget\": {\n", | ||
| " \"enabled\": True,\n", | ||
| " \"annual\": 100000,\n", | ||
| " \"currency\": \"USD\",\n", | ||
| " },\n", | ||
| " \"dlp\": {\"enabled\": True, \"strategy\": \"Inspect\", \"action\": \"notify\"},\n", | ||
| " },\n", | ||
| " }\n", | ||
| " route = Route.parse_obj(route_data)\n", | ||
| " try:\n", | ||
| " javelin_client.create_route(route)\n", | ||
| " print(f\"Route '{route_name}' created successfully\")\n", | ||
| " return route\n", | ||
| " except Exception as e:\n", | ||
| " print(f\"Failed to create route: {str(e)}\")\n", | ||
| " return None\n", | ||
| " except Exception as e:\n", | ||
| " print(f\"Error checking for existing route: {str(e)}\")\n", | ||
| " return None" | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "markdown", | ||
| "metadata": {}, | ||
| "source": [ | ||
| "## Secure AI Agent Class\n", | ||
| "\n", | ||
| "Main class implementation for the AI agent:" | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "code", | ||
| "execution_count": 36, | ||
| "metadata": {}, | ||
| "outputs": [], | ||
| "source": [ | ||
| "class SecureAIAgent:\n", | ||
| " def __init__(self, config: Dict, javelin_config: Config):\n", | ||
| " self.config = config\n", | ||
| " self.javelin_config = javelin_config\n", | ||
| " self.setup_javelin_client()\n", | ||
| " self.system_prompt = self.create_full_prompt()\n", | ||
| " self.conversation_history = []\n", | ||
| "\n", | ||
| " def setup_javelin_client(self):\n", | ||
| " self.javelin_client = Highflame(self.javelin_config)\n", | ||
| "\n", | ||
| " def create_full_prompt(self) -> str:\n", | ||
| " nodes = self.config['nodes']\n", | ||
| " edges = self.config.get('edges', [])\n", | ||
| " \n", | ||
| "        # Join outside the f-string: backslashes inside f-string\n", | ||
| "        # expressions are a SyntaxError before Python 3.12.\n", | ||
| "        node_prompts = \"\\n\".join(\n", | ||
| "            f\"Node {node['id']}:\\n{node['prompt']}\\n\" for node in nodes\n", | ||
| "        )\n", | ||
| "        edge_prompts = \"\\n\".join(\n", | ||
| "            f\"Edge {edge['id']} (from {edge['source_node']} to {edge['target_node']}):\\n{edge['prompt']}\\n\" for edge in edges\n", | ||
| "        )\n", | ||
| "        \n", | ||
| "        full_prompt = f\"\"\"\n", | ||
| "{self.config['main_prompt']}\n", | ||
| "\n", | ||
| "Available nodes and their tasks:\n", | ||
| "{node_prompts}\n", | ||
| "\n", | ||
| "Conversation flow (edges):\n", | ||
| "{edge_prompts}\n", | ||
| "\n", | ||
| "Your task:\n", | ||
| "1. Understand the user's intent and the current stage of the conversation.\n", | ||
| "2. Process the appropriate node based on the conversation flow.\n", | ||
| "3. Provide a response to the user, handling all necessary steps for the current node.\n", | ||
| "4. Use the edge information to determine when and how to transition between nodes.\n", | ||
| "\n", | ||
| "Remember to stay in character throughout the conversation.\n", | ||
| "Starting node: {self.config['starting_node']}\n", | ||
| "\"\"\"\n", | ||
| " return full_prompt\n", | ||
| "\n", | ||
| " async def process_message(self, message: str) -> str:\n", | ||
| " self.conversation_history.append({\"role\": \"user\", \"content\": message})\n", | ||
| "\n", | ||
| " try:\n", | ||
| " query_data = {\n", | ||
| " \"model\": \"gpt-3.5-turbo\",\n", | ||
| " \"messages\": [\n", | ||
| " {\"role\": \"system\", \"content\": self.system_prompt},\n", | ||
| " *self.conversation_history\n", | ||
| " ],\n", | ||
| " \"temperature\": 0.7,\n", | ||
| " }\n", | ||
| "\n", | ||
| " response: QueryResponse = self.javelin_client.query_route(\"test_route_1\", query_data)\n", | ||
| " ai_message = response['choices'][0]['message']['content']\n", | ||
| " self.conversation_history.append({\"role\": \"assistant\", \"content\": ai_message})\n", | ||
| "\n", | ||
| " return ai_message\n", | ||
| " except RouteNotFoundError:\n", | ||
| " logging.error(\"Route 'test_route_1' not found. Attempting to recreate...\")\n", | ||
| " setup_javelin_route(self.javelin_client)\n", | ||
| " raise\n", | ||
| " except Exception as e:\n", | ||
| " logging.error(f\"Error in process_message: {str(e)}\")\n", | ||
| " raise" | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "markdown", | ||
| "metadata": {}, | ||
| "source": [ | ||
| "## Running the Agent\n", | ||
| "\n", | ||
| "Function to run the agent interactively:" | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "code", | ||
| "execution_count": 37, | ||
| "metadata": {}, | ||
| "outputs": [], | ||
| "source": [ | ||
| "async def run_agent():\n", | ||
| " try:\n", | ||
| " # Set up Javelin configuration\n", | ||
| " api_key = os.getenv(\"HIGHFLAME_API_KEY\")\n", | ||
| " javelin_virtualapikey = os.getenv(\"JAVELIN_VIRTUALAPIKEY\")\n", | ||
| " llm_api_key = os.getenv(\"LLM_API_KEY\")\n", | ||
| "\n", | ||
| " if not all([api_key, javelin_virtualapikey, llm_api_key]):\n", | ||
| " print(\"Error: Missing required environment variables. Please check your .env file.\")\n", | ||
| " return\n", | ||
| "\n", | ||
| " javelin_config = Config(\n", | ||
| " base_url=\"https://api.highflame.app\",\n", | ||
| " api_key=api_key,\n", | ||
| " javelin_virtualapikey=javelin_virtualapikey,\n", | ||
| " llm_api_key=llm_api_key,\n", | ||
| " )\n", |
This example notebook appears to be using outdated code from the v1 SDK, which will cause it to fail. Specifically:
- The import `from highflame import Client` is incorrect; it should be `from highflame import Highflame`.
- The environment variable `JAVELIN_VIRTUALAPIKEY` is used, but it should be `HIGHFLAME_VIRTUALAPIKEY`.
- The `Config` object is initialized with `javelin_virtualapikey`, but the correct parameter name in v2 is `virtual_api_key`.
- Variable names like `javelin_client` and `javelin_config` should be updated to `highflame_client` and `highflame_config` for consistency.

Please update the entire example to use the new Highflame v2 conventions.
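While the example is being migrated, a small fallback helper (a sketch; the variable names come from the comment above) lets old `.env` files keep working:

```python
import os

def getenv_migrated(new_name, old_name, default=None):
    """Prefer the new HIGHFLAME_* variable; fall back to the legacy JAVELIN_* name."""
    value = os.getenv(new_name)
    return value if value is not None else os.getenv(old_name, default)

# An old .env that only sets the legacy name still resolves:
os.environ.pop("HIGHFLAME_VIRTUALAPIKEY", None)
os.environ["JAVELIN_VIRTUALAPIKEY"] = "legacy-key"
print(getenv_migrated("HIGHFLAME_VIRTUALAPIKEY", "JAVELIN_VIRTUALAPIKEY"))  # legacy-key
```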
| const javelinApiKey = ""; // javelin api key here | ||
| const llmApiKey = ""; // llm api key | ||
| async function getCompletion() { | ||
| try { | ||
| const routeName = 'AzureOpenAIRoute'; | ||
| const url = `${process.env.JAVELIN_BASE_URL}/v1/query/${routeName}`; | ||
| const response = await axios.post( | ||
| url, | ||
| { | ||
| messages: [ | ||
| { role: 'system', content: 'Hello, you are a helpful scientific assistant.' }, | ||
| { role: 'user', content: 'What is the chemical composition of sugar?' }, | ||
| ], | ||
| model: 'gpt-3.5-turbo', | ||
| }, | ||
| { | ||
| headers: { | ||
| 'x-api-key': javelinApiKey, | ||
| 'api-key': llmApiKey, | ||
| }, | ||
| } | ||
| ); | ||
| console.log(response.data.choices[0].message.content); | ||
| } catch (error) { | ||
| if (error.response) { | ||
| console.error('Error status:', error.response.status); | ||
| console.error('Error data:', error.response.data); | ||
| } else { | ||
| console.error('Error message:', error.message); | ||
| } | ||
| } | ||
| } | ||
| // Function to stream responses from the API | ||
| async function streamCompletion() { | ||
| try { | ||
| const url = "https://api.javelin.live/v1/query/AzureOpenAIRoute"; |
This JavaScript example has several issues that will prevent it from working correctly:
- The variable `javelinApiKey` should be renamed to `highflameApiKey` for consistency with the new SDK name.
- The header `x-api-key` is used for the Highflame API key, but the correct header is `x-highflame-apikey`.
- The URL `https://api.javelin.live/v1/query/AzureOpenAIRoute` is hardcoded and uses the old `javelin.live` domain. It should use `highflame.app` and be constructed from environment variables.
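For comparison, the corrected request shape, sketched in Python to match the other examples (header and host names are taken from this comment, and `HIGHFLAME_BASE_URL` is an assumed variable name, so verify both against your deployment):

```python
import os

def build_gateway_request(route: str) -> tuple:
    # Base URL comes from the environment rather than a hardcoded
    # javelin.live address.
    base_url = os.getenv("HIGHFLAME_BASE_URL", "https://api.highflame.app")
    headers = {
        "x-highflame-apikey": os.getenv("HIGHFLAME_API_KEY", ""),
        "api-key": os.getenv("LLM_API_KEY", ""),  # provider key (Azure-style auth)
    }
    return f"{base_url.rstrip('/')}/v1/query/{route}", headers

url, headers = build_gateway_request("AzureOpenAIRoute")
```

The returned `url` and `headers` can then be passed to any HTTP client (axios in the JS example, `httpx` in Python).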
| openai_api_version="2023-03-15-preview", | ||
| extra_headers={ | ||
| "x-highflame-route": "azureopenai_univ", | ||
| "x-api-key": os.environ.get("HIGHFLAME_API_KEY"), |
| pytest.skip("OPENAI_API_KEY not set") | ||
| try: | ||
| llm = get_llm("openai") |
| research_agent = LlmAgent( | ||
| model=LiteLlm( | ||
| model="openai/gpt-4o", | ||
| api_base="https://api.highflame.app/v1", | ||
| extra_headers={ | ||
| "x-highflame-route": "openai_univ", | ||
| "x-api-key": HIGHFLAME_API_KEY, | ||
| "Authorization": f"Bearer {OPENAI_API_KEY}", | ||
| }, | ||
| ), | ||
| name="ResearchAgent", | ||
| instruction="Research the query and save findings in state['research'].", | ||
| output_key="research", | ||
| ) | ||
| # Agent 2: Summarizer | ||
| summary_agent = LlmAgent( | ||
| model=LiteLlm( | ||
| model="openai/gpt-4o", | ||
| api_base="https://api.highflame.app/v1", | ||
| extra_headers={ | ||
| "x-highflame-route": "openai_univ", | ||
| "x-api-key": HIGHFLAME_API_KEY, | ||
| "Authorization": f"Bearer {OPENAI_API_KEY}", | ||
| }, | ||
| ), | ||
| name="SummaryAgent", | ||
| instruction="Summarize state['research'] into state['summary'].", | ||
| output_key="summary", | ||
| ) | ||
| # Agent 3: Reporter | ||
| report_agent = LlmAgent( | ||
| model=LiteLlm( | ||
| model="openai/gpt-4o", | ||
| api_base="https://api.highflame.app/v1", | ||
| extra_headers={ | ||
| "x-highflame-route": "openai_univ", | ||
| "x-api-key": HIGHFLAME_API_KEY, | ||
| "Authorization": f"Bearer {OPENAI_API_KEY}", | ||
| }, | ||
| ), | ||
| name="ReportAgent", | ||
| instruction="Generate a report from state['summary'] and include a source URL.", | ||
| output_key="report", | ||
| ) |
| "source": [ | ||
| "### How Javelin and CrewAI Integrate\n", | ||
| "\n", | ||
| "## Javelin Setup:\n", |
| " - The Javelin API is used to validate and refine the user's query by providing a route (`testing` in this case). It helps assess whether the request is suitable for email generation and aids in refining the query for clarity.\n", | ||
| " - **Headers**: Contains the API key and the route for Javelin (`x-javelin-route`).\n", |
| version = "2.0.0" | ||
| dependencies = [ | ||
| "highflame>=2.0.0", | ||
| "requests>=2.32.3", |
In the proposed pyproject.toml for the CLI, you're adding `requests` as a dependency. However, the core SDK uses `httpx`. To maintain consistency and minimize the number of dependencies, consider using `httpx` for the CLI as well, unless `requests` is a specific requirement of another dependency.
Major Changes:
- Renamed package from javelin_sdk to highflame (PyPI: highflame)
- Renamed main client class from JavelinClient to Highflame
- Updated all environment variables: JAVELIN_* → HIGHFLAME_*
- Updated HTTP headers: x-javelin-* → x-highflame-*
- Updated API endpoints: api-dev.javelin.live → api.highflame.app
- Updated configuration field names: javelin_api_key → api_key

Code Quality Improvements:
- Added py.typed marker for type hint support
- Implemented structured logging across SDK and services:
  - Client initialization logging
  - Route query operation logging
  - Tracing configuration logging
- Added LOGGING.md with comprehensive logging guide

Documentation:
- Created README_V2.md with complete v2 documentation
- Created MIGRATION_GUIDE.md for v1 → v2 migration
- Created LOGGING.md with logging setup and best practices
- Created CLI_SEPARATION_PLAN.md with detailed strategy for future CLI separation
- Updated all code examples and documentation references

Configuration:
- Created v2/pyproject.toml with package name "highflame" (version 2.0.0)
- Created CLI_PYPROJECT.toml as template for future CLI package separation
- Updated package metadata and dependencies

Examples:
- Renamed all example files from javelin_* to highflame_*
- Updated all example imports and configuration references
- Reorganized example directories

This represents a complete rebranding from Javelin to Highflame with improved code quality, better logging, and a clear path for future CLI package separation.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
- Updated pyproject.toml with comments about future CLI separation
- CLI will be published to PyPI as: highflame-cli (with hyphen)
- Python module remains: highflame_cli (with underscore)
- CLI command entry point: highflame

This clarifies the naming convention:
- pip install highflame (SDK)
- pip install highflame-cli (CLI tool)
- import highflame_cli (Python module)
- $ highflame (command)

See CLI_PYPROJECT.toml and CLI_SEPARATION_PLAN.md for separation details.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
Created PROJECT_STATUS.md documenting:
- All completed v2.0.0 tasks
- Current project structure and file organization
- Installation and usage instructions
- Complete documentation guide
- Next steps and roadmap (phases 1-5)
- Quality metrics and feature list
- Design decisions and rationale
- Git commit history
- Summary and release readiness

This provides a single source of truth for:
- Project status and progress
- How to use the SDK
- What's been completed
- What's planned next
- All available documentation

v2.0.0 is now ready for:
1. Testing and validation
2. PyPI publishing
3. User migration from v1
4. Future enhancements

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>