
Conversation

@alay2shah alay2shah commented Jan 31, 2026

  • ONNX code examples for Python + WebGPU
  • LiquidONNX tool docs
  • Link model cards to table
  • Deprecate some pages for LLM.txt indexing
  • Vision llama.cpp examples in model cards (lagging commit)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
alay2shah and others added 7 commits January 31, 2026 13:11
- Updated complete-library.mdx table with ONNX links for 5 models
- Added ONNX buttons to LFM2-8B-A1B, LFM2-VL-3B, LFM2-VL-1.6B,
  LFM2-VL-450M, and LFM2.5-Audio-1.5B model pages

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Delete LEAP pages that are not in navigation and contain outdated content:
- find-model.mdx
- vibe-check-models.mdx
- index.mdx
- finetuning.mdx
- laptop-support.mdx

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Delete docs pages that are not in navigation:
- docs/index.mdx (redirect page)
- docs/key-concepts/models.mdx (deprecated models page)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Change /leap/index references to /leap/edge-sdk/overview
since the old LEAP index page was removed.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Make all model names in the Model Chart link to their respective
model pages for easier navigation. Also add LFM2-2.6B-Exp to the table.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add llama.cpp tab with installation and usage instructions
for VL models: LFM2-VL-3B, LFM2-VL-1.6B, LFM2-VL-450M, LFM2.5-VL-1.6B

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Style first-column model name links differently from checkmark links:
- No underline, regular text appearance
- Purple text + light background on hover
- Distinct from green checkmark links in other columns

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@@ -0,0 +1,250 @@
---
title: "ONNX"
description: "ONNX provides cross-platform inference for LFM models across CPUs, GPUs, NPUs, and browsers via WebGPU."

Suggested change
description: "ONNX provides cross-platform inference for LFM models across CPUs, GPUs, NPUs, and browsers via WebGPU."
description: "ONNX provides cross-platform inference for LFM models across CPUs, GPUs, NPUs, and browsers via WebGPU."
description: "ONNX provides a platform-agnostic inference specification that allows running the model on device-specific runtimes that include CPU, GPU, NPU, and WebGPU."


## Pre-exported Models

Pre-exported ONNX models are available from LiquidAI and the [onnx-community](https://huggingface.co/onnx-community). Check the [Model Library](/docs/models/complete-library) for a complete list of available formats.


I'd add a link on "from LiquidAI" that points to our HF.
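
For illustration, this is the kind of minimal download snippet the page could pair with that paragraph. The repo id and file layout below are assumptions, not confirmed paths; the real ones should come from the Model Library table:

```python
# Sketch: fetch a pre-exported ONNX model from the Hugging Face Hub.
# Repo id and filename are hypothetical placeholders.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="onnx-community/LFM2-350M-ONNX",  # hypothetical repo id
    filename="onnx/model.onnx",               # hypothetical file layout
)
print(model_path)  # local cache path to the downloaded graph
```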

Comment on lines +24 to +32
### Installation

```bash
pip install onnxruntime transformers numpy huggingface_hub jinja2

# For GPU support
pip install onnxruntime-gpu transformers numpy huggingface_hub jinja2
```


maybe instead point to the onnx-export repo directly, or use it:

git clone ...
uv sync
uv run ...
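
To round out this thread, a minimal inference sketch that could follow the installation block above. The model id and input names are assumptions (decoder exports often also require past_key_values inputs), so treat this as a sketch rather than the page's final example:

```python
# Sketch: run a single forward pass with onnxruntime.
# Model id and input names are assumptions; real exports may differ.
import numpy as np
import onnxruntime as ort
from transformers import AutoTokenizer

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
tokenizer = AutoTokenizer.from_pretrained("LiquidAI/LFM2-350M")  # assumed tokenizer repo

enc = tokenizer("Hello from ONNX Runtime!", return_tensors="np")
outputs = session.run(
    None,  # fetch all model outputs
    {
        "input_ids": enc["input_ids"].astype(np.int64),
        "attention_mask": enc["attention_mask"].astype(np.int64),
    },
)
print(outputs[0].shape)  # e.g. logits of shape (batch, seq_len, vocab_size)
```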
