Add ONNX inference documentation #49
base: main
Conversation
- Updated complete-library.mdx table with ONNX links for 5 models
- Added ONNX buttons to LFM2-8B-A1B, LFM2-VL-3B, LFM2-VL-1.6B, LFM2-VL-450M, and LFM2.5-Audio-1.5B model pages

Delete LEAP pages that are not in navigation and contain outdated content:
- find-model.mdx
- vibe-check-models.mdx
- index.mdx
- finetuning.mdx
- laptop-support.mdx

Delete docs pages that are not in navigation:
- docs/index.mdx (redirect page)
- docs/key-concepts/models.mdx (deprecated models page)

Change /leap/index references to /leap/edge-sdk/overview since the old LEAP index page was removed.

Make all model names in the Model Chart link to their respective model pages for easier navigation. Also adds LFM2-2.6B-Exp to the table.

Add llama.cpp tab with installation and usage instructions for VL models: LFM2-VL-3B, LFM2-VL-1.6B, LFM2-VL-450M, LFM2.5-VL-1.6B

Style first-column model name links differently from checkmark links:
- No underline, regular text appearance
- Purple text + light background on hover
- Distinct from green checkmark links in other columns
@@ -0,0 +1,250 @@
---
title: "ONNX"
description: "ONNX provides cross-platform inference for LFM models across CPUs, GPUs, NPUs, and browsers via WebGPU."
---
Suggested change:

```diff
- description: "ONNX provides cross-platform inference for LFM models across CPUs, GPUs, NPUs, and browsers via WebGPU."
+ description: "ONNX provides a platform-agnostic inference specification that allows running the model on device-specific runtimes that include CPU, GPU, NPU, and WebGPU."
```
## Pre-exported Models

Pre-exported ONNX models are available from LiquidAI and the [onnx-community](https://huggingface.co/onnx-community). Check the [Model Library](/docs/models/complete-library) for a complete list of available formats.
I'd add a link on "from LiquidAI" that points to our HF.
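For readers who want to try this right away, fetching one of the pre-exported repos is a one-liner with `huggingface_hub`; a minimal sketch (the repo id below is hypothetical — substitute an actual ONNX repo from the Model Library):

```python
from huggingface_hub import snapshot_download

# Hypothetical repo id -- replace with one of the actual pre-exported
# ONNX repos listed in the Model Library.
local_dir = snapshot_download("onnx-community/LFM2-VL-450M-ONNX")
print(local_dir)  # local folder containing the .onnx weights and tokenizer files
```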
### Installation

```bash
pip install onnxruntime transformers numpy huggingface_hub jinja2

# For GPU support
pip install onnxruntime-gpu transformers numpy huggingface_hub jinja2
```
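To sanity-check which build is active after installation, the provider list can be queried; a short sketch using the standard onnxruntime API (the model path is a placeholder, not from the docs):

```python
import onnxruntime as ort

# CUDAExecutionProvider should appear here once onnxruntime-gpu is
# installed alongside a compatible CUDA/cuDNN setup.
print(ort.get_available_providers())

# Placeholder path -- point this at an exported model file.
# ORT tries providers in order and falls back to CPU if CUDA is unavailable.
session = ort.InferenceSession(
    "model.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
```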
Maybe instead point at the onnx-export repo directly, or use it:

```
git clone ...
uv sync
uv run ...
```
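Whichever install route the docs settle on, a bare-bones inference sketch could help readers connect the pieces. This is a sketch under assumptions: the tokenizer repo and model path are placeholders, and real decoder exports usually also expect position ids and past-KV-cache inputs, hence the `get_inputs()` check:

```python
import onnxruntime as ort
from transformers import AutoTokenizer

# Placeholder names -- substitute the actual tokenizer repo and exported file.
tokenizer = AutoTokenizer.from_pretrained("path/to/tokenizer")
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Inspect the export's actual input names before building the feed dict;
# many generation-ready exports also require position ids and KV caches.
print([i.name for i in session.get_inputs()])

enc = tokenizer("Hello from ONNX Runtime", return_tensors="np")
outputs = session.run(
    None,  # None = return all outputs
    {"input_ids": enc["input_ids"], "attention_mask": enc["attention_mask"]},
)
logits = outputs[0]  # typically [batch, seq_len, vocab_size]
```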