diff --git a/README.md b/README.md index efd2f04a..e1063bbf 100644 --- a/README.md +++ b/README.md @@ -14,7 +14,7 @@ A collection of on-device AI primitives for React Native with first-class Vercel ## AI SDK Compatibility | React Native AI | AI SDK | -|-----------------|--------| +| --------------- | ------ | | 0.11 and below | v5 | | 0.12 and above | v6 | @@ -35,11 +35,11 @@ Rozenite must be installed and enabled in your app. See the ## Available Providers -| Provider | Built-in | Platforms | Runtime | Description | -|----------|----------|-----------|---------|-------------| -| [Apple](#apple) | ✅ Yes | iOS | [Apple](https://developer.apple.com/documentation/FoundationModels) | Apple Foundation Models, embeddings, transcription, speech | -| [Llama](#llama) | ❌ No | iOS, Android | [llama.rn](https://github.com/mybigday/llama.rn) | Run GGUF models via llama.rn | -| [MLC](#mlc) | ❌ No | iOS, Android | [MLC LLM](https://github.com/mlc-ai/mlc-llm) | Run open-source LLMs via MLC runtime | +| Provider | Built-in | Platforms | Runtime | Description | +| --------------- | -------- | ------------ | ------------------------------------------------------------------- | ---------------------------------------------------------- | +| [Apple](#apple) | ✅ Yes | iOS | [Apple](https://developer.apple.com/documentation/FoundationModels) | Apple Foundation Models, embeddings, transcription, speech | +| [Llama](#llama) | ❌ No | iOS, Android | [llama.rn](https://github.com/mybigday/llama.rn) | Run GGUF models via llama.rn | +| [MLC](#mlc) | ❌ No | iOS, Android | [MLC LLM](https://github.com/mlc-ai/mlc-llm) | Run open-source LLMs via MLC runtime | --- @@ -64,46 +64,46 @@ No additional linking needed, works immediately on iOS devices (autolinked). 
```typescript import { apple } from '@react-native-ai/apple' -import { +import { generateText, - embed, - experimental_transcribe as transcribe, - experimental_generateSpeech as speech + embed, + experimental_transcribe as transcribe, + experimental_generateSpeech as speech, } from 'ai' // Text generation with Apple Intelligence const { text } = await generateText({ model: apple(), - prompt: 'Explain quantum computing' + prompt: 'Explain quantum computing', }) // Generate embeddings const { embedding } = await embed({ model: apple.textEmbeddingModel(), - value: 'Hello world' + value: 'Hello world', }) // Transcribe audio const { text } = await transcribe({ model: apple.transcriptionModel(), - audio: audioBuffer + audio: audioBuffer, }) // Text-to-speech const { audio } = await speech({ model: apple.speechModel(), - text: 'Hello from Apple!' + text: 'Hello from Apple!', }) ``` #### Availability -| Feature | iOS Version | Additional Requirements | -|---------|-------------|------------------------| -| Text Generation | iOS 26+ | Apple Intelligence device | -| Embeddings | iOS 17+ | - | -| Transcription | iOS 26+ | - | -| Speech Synthesis | iOS 13+ | iOS 17+ for Personal Voice | +| Feature | iOS Version | Additional Requirements | +| ---------------- | ----------- | -------------------------- | +| Text Generation | iOS 26+ | Apple Intelligence device | +| Embeddings | iOS 17+ | - | +| Transcription | iOS 26+ | - | +| Speech Synthesis | iOS 13+ | iOS 17+ for Personal Voice | See the [Apple documentation](https://react-native-ai.dev/docs/apple/getting-started) for detailed setup and usage guides. @@ -115,11 +115,11 @@ Run any GGUF model on-device using [llama.rn](https://github.com/mybigday/llama. 
#### Supported Features -| Feature | Method | Description | -|---------|--------|-------------| -| Text Generation | `llama.languageModel()` | Chat, completion, streaming, reasoning models | -| Embeddings | `llama.textEmbeddingModel()` | Text embeddings for RAG and similarity search | -| Speech | `llama.speechModel()` | Text-to-speech with vocoder models | +| Feature | Method | Description | +| --------------- | ---------------------------- | --------------------------------------------- | +| Text Generation | `llama.languageModel()` | Chat, completion, streaming, reasoning models | +| Embeddings | `llama.textEmbeddingModel()` | Text embeddings for RAG and similarity search | +| Speech | `llama.speechModel()` | Text-to-speech with vocoder models | #### Installation @@ -134,7 +134,9 @@ import { llama } from '@react-native-ai/llama' import { generateText, streamText } from 'ai' // Create model instance (Model ID format: "owner/repo/filename.gguf") -const model = llama.languageModel('ggml-org/SmolLM3-3B-GGUF/SmolLM3-Q4_K_M.gguf') +const model = llama.languageModel( + 'ggml-org/SmolLM3-3B-GGUF/SmolLM3-Q4_K_M.gguf' +) // Download from HuggingFace (with progress) await model.download((progress) => { @@ -197,18 +199,18 @@ await model.prepare() // Generate response with Llama via MLC engine const { text } = await generateText({ model, - prompt: 'Explain quantum computing' + prompt: 'Explain quantum computing', }) ``` #### Available Models -| Model ID | Size | -|----------|------| -| `Llama-3.2-3B-Instruct` | ~2GB | +| Model ID | Size | +| ------------------------ | ------ | +| `Llama-3.2-3B-Instruct` | ~2GB | | `Phi-3-mini-4k-instruct` | ~2.5GB | -| `Mistral-7B-Instruct` | ~4.5GB | -| `Qwen2.5-1.5B-Instruct` | ~1GB | +| `Mistral-7B-Instruct` | ~4.5GB | +| `Qwen2.5-1.5B-Instruct` | ~1GB | > [!NOTE] > MLC requires iOS devices with sufficient memory (1-8GB depending on model). The prebuilt runtime supports the models listed above. 
For other models or custom configurations, you'll need to recompile the MLC runtime from source. @@ -221,9 +223,17 @@ Comprehensive guides and API references are available at [react-native-ai.dev](h Read the [contribution guidelines](/CONTRIBUTING.md) before contributing. +## Agent skills + +This repository provides agent skills to help you integrate and use the packages. You can install them with: + +`npx skills add https://github.com/callstackincubator/react-native-ai --skill react-native-ai-skills` + +or manually by copying the `skills/` directory into your `.cursor/` directory. + ## Made with ❤️ at Callstack -**react-native-ai** is an open source project and will always remain free to use. If you think it's cool, please star it 🌟. +**react-native-ai** is an open source project and will always remain free to use. If you think it's cool, please star it 🌟. [Callstack][callstack-readme-with-love] is a group of React and React Native geeks, contact us at [hello@callstack.com](mailto:hello@callstack.com) if you need any help with these or just want to say hi! diff --git a/skills/react-native-ai/SKILL.md b/skills/react-native-ai/SKILL.md new file mode 100644 index 00000000..37e6c280 --- /dev/null +++ b/skills/react-native-ai/SKILL.md @@ -0,0 +1,118 @@ +--- +name: react-native-ai-skills +description: Provides integration recipes for the React Native AI @react-native-ai packages that wrap the llama.rn (llama.cpp), MLC-LLM, and Apple Foundation Models backends. Use when integrating local on-device AI in React Native, setting up providers, or managing models. +license: MIT +metadata: + author: Callstack + tags: react-native, ai, llama, apple, mlc, ncnn, vercel-ai-sdk, on-device +--- + +# React Native AI Skills + +## Overview + +Example workflow for integrating on-device AI in React Native apps using the @react-native-ai ecosystem. 
Available provider tracks (can be combined): + +- **Apple** – Apple Intelligence (iOS 26+) +- **Llama** – GGUF models via llama.rn +- **MLC** – MLC-LLM models +- **NCNN** – Low-level NCNN inference wrapper (vision, custom models) + +## Path Selection Gate (Must Run First) + +Before selecting any reference file, classify the user request: + +1. Select **Apple**: + - if you intend to build with: `apple`, `Apple Intelligence`, `Apple Foundation Models` + - if you want features: `transcription`, `speech synthesis`, `embeddings` on Apple devices + - optionally with capabilities: tool calling +2. Select **Llama**: + - if you intend to use the following technologies: `llama`, `GGUF`, `llama.rn`, `HuggingFace`, `SmolLM` + - if you need the following model types: `embedding model`, `rerank`, `speech model` +3. Select **MLC**: + - if you intend to use a library that allows for custom models and involves build-time model optimizations +4. Select **NCNN**: + - if you need to run low-level inference on raw tensors + - if you intend to run inference on custom models such as convolutional networks or multi-layer perceptrons + - DO NOT select NCNN if the prompt mentions LLMs only; this use case is better served by other providers + +## Skill Format + +Each reference file follows a strict execution format: + +- Quick Command +- When to Use +- Prerequisites +- Step-by-Step Instructions +- Common Pitfalls +- Related Skills + +Use the checklists exactly as written before moving to the next phase. 
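The gate above can be sketched as a simple keyword classifier. This is an illustrative sketch only: the trigger lists mirror the selection rules in this section, and `selectProvider` is a hypothetical helper, not part of any @react-native-ai package.

```typescript
// Illustrative sketch of the Path Selection Gate as a keyword classifier.
// The trigger lists mirror the rules above; `selectProvider` is a
// hypothetical helper, not part of any @react-native-ai package.
type Provider = 'apple' | 'llama' | 'mlc' | 'ncnn'

const triggers: Record<Provider, string[]> = {
  apple: ['apple', 'apple intelligence', 'apple foundation models', 'transcription', 'speech synthesis'],
  llama: ['llama', 'gguf', 'llama.rn', 'huggingface', 'smollm', 'rerank'],
  mlc: ['mlc', 'custom models', 'build-time'],
  ncnn: ['ncnn', 'tensor', 'convolutional', 'multi-layer perceptron'],
}

// Returns the first matching provider track, or null when no trigger term hits.
function selectProvider(request: string): Provider | null {
  const text = request.toLowerCase()
  for (const provider of ['apple', 'llama', 'mlc', 'ncnn'] as const) {
    if (triggers[provider].some((term) => text.includes(term))) {
      return provider
    }
  }
  return null
}
```

Checking NCNN last reflects rule 4: an LLM-only prompt should match one of the other tracks before NCNN is ever considered.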
+ +## When to Apply + +Reference this package when: + +- Integrating on-device AI in React Native apps +- Installing and configuring @react-native-ai providers +- Managing model downloads (llama, mlc) +- Wiring providers with Vercel AI SDK (generateText, streamText) +- Implementing SetupAdapter pattern for multi-provider apps +- Debugging native module or Expo plugin issues + +## Priority-Ordered Guidelines + +| Priority | Category | Impact | Start File | +| -------- | --------------------------- | ------ | -------------------------------- | +| 1 | Path selection and baseline | N/A | [quick-start][quick-start] | +| 2 | Apple provider | N/A | [apple-provider][apple-provider] | +| 3 | Llama provider | N/A | [llama-provider][llama-provider] | +| 4 | MLC-LLM provider | N/A | [mlc-provider][mlc-provider] | +| 5 | NCNN provider | N/A | [ncnn-provider][ncnn-provider] | + +## Quick Reference + +```bash +npm install + +# Provider-specific install +npm add @react-native-ai/apple +npm add @react-native-ai/llama llama.rn +npm add @react-native-ai/mlc +npm add @react-native-ai/ncnn-wrapper +``` + +Route by path: + +- Apple: [apple-provider][apple-provider] +- Llama: [llama-provider][llama-provider] +- MLC: [mlc-provider][mlc-provider] +- NCNN: [ncnn-provider][ncnn-provider] + +## References + +| File | Impact | Description | +| -------------------------------- | ------ | ------------------------------------------ | +| [quick-start][quick-start] | N/A | Shared preflight | +| [apple-provider][apple-provider] | N/A | Apple Intelligence setup and integration | +| [llama-provider][llama-provider] | N/A | GGUF models, llama.rn, model management | +| [mlc-provider][mlc-provider] | N/A | MLC models, download, prepare, Expo plugin | +| [ncnn-provider][ncnn-provider] | N/A | NCNN wrapper, loadModel, runInference | + +## Problem → Skill Mapping + +| Problem | Start With | +| ------------------------------------- | ---------------------------------------------- | +| Need path decision 
first | [quick-start][quick-start] | +| Integrate Apple Intelligence | [apple-provider][apple-provider] | +| Run GGUF models from HuggingFace | [llama-provider][llama-provider] | +| Run MLC-LLM models (Llama, Phi, Qwen) | [mlc-provider][mlc-provider] | +| Use NCNN for custom inference | [ncnn-provider][ncnn-provider] | +| Multi-provider app with SetupAdapter | [quick-start][quick-start] → provider-specific | +| Expo + native module setup | Provider-specific (each has Expo notes) | + +[quick-start]: references/quick-start.md +[apple-provider]: references/apple-provider.md +[llama-provider]: references/llama-provider.md +[mlc-provider]: references/mlc-provider.md +[ncnn-provider]: references/ncnn-provider.md diff --git a/skills/react-native-ai/references/apple-provider.md b/skills/react-native-ai/references/apple-provider.md new file mode 100644 index 00000000..dbf96554 --- /dev/null +++ b/skills/react-native-ai/references/apple-provider.md @@ -0,0 +1,77 @@ +# Apple Provider + +## Quick Command + +```bash +npm add @react-native-ai/apple +``` + +```ts +import { apple } from '@react-native-ai/apple' +import { generateText } from 'ai' + +const result = await generateText({ + model: apple(), + prompt: 'Explain quantum computing in simple terms', +}) +``` + +## When to Use + +- Use Apple Intelligence on iOS 26+ +- Need language model, embeddings, transcription, or speech + +## Prerequisites + +- [ ] React Native New Architecture +- [ ] iOS 26+ (Android not supported) +- [ ] Apple Intelligence enabled device +- [ ] Vercel AI SDK v5+ (`ai`) + +## Step-by-Step Instructions + +### 1. Install + +```bash +npm add @react-native-ai/apple +``` + +### 2. Availability Check + +```ts +import { apple } from '@react-native-ai/apple' + +if (apple.isAvailable()) { + // Use Apple provider +} +``` + +### 3. 
Model Types + +| Type | Method | Use Case | Documentation | +| ------------- | ---------------------------- | --------------------------------------- | -------------------------------------------------------- | +| Language | `apple.languageModel()` | Text generation, chat | https://www.react-native-ai.dev/docs/apple/generating | +| Embedding | `apple.textEmbeddingModel()` | RAG, similarity, prompt size estimation | https://www.react-native-ai.dev/docs/apple/embeddings | +| Transcription | `apple.transcriptionModel()` | Speech-to-text | https://www.react-native-ai.dev/docs/apple/transcription | +| Speech | `apple.speechModel()` | Text-to-speech | https://www.react-native-ai.dev/docs/apple/speech | + +### 4. Tool Calling + +```ts +import { createAppleProvider } from '@react-native-ai/apple' + +const apple = createAppleProvider({ availableTools: tools }) +const model = apple.languageModel() +``` + +## Common Pitfalls + +- **Wrong iOS version**: Apple Intelligence requires iOS 26+. +- **Simulator**: For now only physical devices are supported. +- **New Architecture**: React Native New Architecture is required. 
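Because availability depends on device and OS version (see the pitfalls above), consumer code typically gates on `apple.isAvailable()` and falls back to another provider such as Llama. A minimal sketch of that pattern with stand-in types; the interfaces and `pickLanguageModel` helper below are illustrative, not the real package types.

```typescript
// Stand-in shapes for illustration only; the real types live in the
// respective @react-native-ai packages.
interface LanguageModelLike {
  providerName: string
}

interface ProviderLike {
  isAvailable(): boolean
  languageModel(): LanguageModelLike
}

// Returns a model from the first available provider (e.g. Apple first,
// then a cross-platform fallback such as Llama).
function pickLanguageModel(providers: ProviderLike[]): LanguageModelLike {
  for (const provider of providers) {
    if (provider.isAvailable()) {
      return provider.languageModel()
    }
  }
  throw new Error('No on-device provider available on this platform')
}
```

In a real app the array would hold the actual provider objects, e.g. Apple first and a Llama-backed fallback second.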
+ +## Related Skills + +- [quick-start](quick-start.md) +- [llama-provider](llama-provider.md) +- [mlc-provider](mlc-provider.md) diff --git a/skills/react-native-ai/references/llama-provider.md b/skills/react-native-ai/references/llama-provider.md new file mode 100644 index 00000000..65f76b50 --- /dev/null +++ b/skills/react-native-ai/references/llama-provider.md @@ -0,0 +1,116 @@ +# Llama Provider + +## Quick Command + +```bash +npm add @react-native-ai/llama llama.rn +``` + +```ts +import { llama, downloadModel } from '@react-native-ai/llama' +import { generateText } from 'ai' + +const modelPath = await downloadModel( + 'ggml-org/SmolLM3-3B-GGUF/SmolLM3-Q4_K_M.gguf' +) +const model = llama.languageModel(modelPath) +await model.prepare() +const { text } = await generateText({ model, prompt: 'Hello' }) +``` + +## When to Use + +- Run GGUF models from HuggingFace on-device +- Need embeddings, reranking, or speech (TTS) with GGUF +- Use llama.rn bindings for llama.cpp + +## Prerequisites + +- [ ] React Native >= 0.76.0 +- [ ] llama.rn >= 0.10.0 +- [ ] Vercel AI SDK v5+ (`ai`) +- [ ] Android or iOS + +## Step-by-Step Instructions + +### 1. Install + +```bash +npm add @react-native-ai/llama llama.rn +``` + +### 2. Expo Setup (if using Expo) + +Add to `app.json` / `app.config.js`: + +```js +plugins: [ + [ + 'llama.rn', + { + enableEntitlements: true, + entitlementsProfile: 'production', + forceCxx20: true, + enableOpenCL: true, + }, + ], +] +``` + +### 3. Model ID Format + +Format: `owner/repo/filename.gguf` + +Examples: + +- `ggml-org/SmolLM3-3B-GGUF/SmolLM3-Q4_K_M.gguf` +- `Qwen/Qwen2.5-3B-Instruct-GGUF/qwen2.5-3b-instruct-q3_k_m.gguf` + +### 4. 
Storage APIs + +```ts +import { + downloadModel, + getModelPath, + isModelDownloaded, + removeModel, + getDownloadedModels, +} from '@react-native-ai/llama' + +// Download with progress +await downloadModel('owner/repo/model.gguf', (p) => console.log(p.percentage)) + +// Get path for existing model +const path = getModelPath('owner/repo/model.gguf') + +// Check if downloaded +const exists = await isModelDownloaded('owner/repo/model.gguf') +``` + +### 5. Model Types + +| Type | Method | Notes | Documentation | +| --------- | ---------------------------- | --------------------------------------- | ------------------------------------------------------------------------------------ | +| Language | `llama.languageModel()` | Text generation, chat | https://www.react-native-ai.dev/docs/llama/generating | +| Embedding | `llama.textEmbeddingModel()` | RAG, similarity, prompt size estimation | https://www.react-native-ai.dev/docs/llama/embeddings | +| Rerank | `llama.rerankModel()` | Document ranking, RAG | https://www.react-native-ai.dev/docs/llama/reranking | +| Speech | `llama.speechModel()` | Requires `vocoderPath` in opts | https://www.react-native-ai.dev/docs/llama/model-management#creating-model-instances | + +### 6. Lifecycle + +```ts +await model.prepare() // Load into memory +await model.unload() // Release when done +``` + +## Common Pitfalls + +- **Invalid model ID**: Must be `owner/repo/filename.gguf` (3+ parts). +- **Missing prepare()**: Call `prepare()` before generateText/streamText. +- **Expo**: Must add `llama.rn` plugin; refer to [llama.rn Expo docs](https://github.com/mybigday/llama.rn#expo). 
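The "Invalid model ID" pitfall can be caught early with a small validator. This is a hypothetical helper, not part of `@react-native-ai/llama`; the 3-part `owner/repo/filename.gguf` rule comes from the Model ID Format section above.

```typescript
// Hypothetical validator for the "owner/repo/filename.gguf" model ID
// format described above; not part of @react-native-ai/llama.
function isValidLlamaModelId(id: string): boolean {
  const parts = id.split('/')
  // At least owner, repo, and a filename, none empty, ending in .gguf
  return (
    parts.length >= 3 &&
    parts.every((part) => part.length > 0) &&
    parts[parts.length - 1].endsWith('.gguf')
  )
}
```

Running such a check before calling `llama.languageModel()` turns a confusing native-side failure into an immediate, descriptive error.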
+ +## Related Skills + +- [quick-start](quick-start.md) +- [apple-provider](apple-provider.md) +- [mlc-provider](mlc-provider.md) diff --git a/skills/react-native-ai/references/mlc-provider.md b/skills/react-native-ai/references/mlc-provider.md new file mode 100644 index 00000000..43748cab --- /dev/null +++ b/skills/react-native-ai/references/mlc-provider.md @@ -0,0 +1,107 @@ +# MLC Provider + +## Quick Command + +```bash +npm add @react-native-ai/mlc +``` + +```ts +import { mlc } from '@react-native-ai/mlc' +import { generateText } from 'ai' + +const model = mlc.languageModel('Llama-3.2-3B-Instruct') +await model.download() +await model.prepare() +const { text } = await generateText({ model, prompt: 'Hello' }) +``` + +## When to Use + +- Run MLC models (Llama, Phi, Qwen) on-device +- Need built-in model download and management +- Android or iOS 14+ + +## Prerequisites + +- [ ] React Native New Architecture +- [ ] Increased Memory Limit capability + +## Step-by-Step Instructions + +### 1. Install + +```bash +npm add @react-native-ai/mlc +``` + +### 2. Expo Config Plugin + +Add to `app.json`: + +```json +{ + "expo": { + "plugins": ["@react-native-ai/mlc"] + } +} +``` + +Then: + +```bash +npx expo prebuild --clean +``` + +### 3. Manual (iOS only non-Expo) + +If on iOS and not using Expo, add "Increased Memory Limit" capability in Xcode: + +1. Open iOS project in Xcode +2. Target → Signing & Capabilities → + Capability +3. Add "Increased Memory Limit" + +### 4. Model Lifecycle + +```ts +const model = mlc.languageModel('Llama-3.2-3B-Instruct') + +await model.download((event) => { + if (!Number.isNaN(event.percentage)) { + console.log(event.percentage) + } +}) +await model.prepare() +// ... use with generateText/streamText +await model.unload() +await model.remove() // Delete from disk +``` + +To run inference, use the Vercel AI SDK. + +For more details on MLC-LLM wrapper, refer to the [documentation](https://www.react-native-ai.dev/docs/mlc/generating). + +### 5. 
Available Models + +**ONLY THE FOLLOWING MODELS** are bundled with the MLC-LLM package: + +- `Llama-3.2-1B-Instruct` +- `Llama-3.2-3B-Instruct` +- `Phi-3.5-mini-instruct` +- `Qwen2-1.5B-Instruct` + +Additional details are listed in [this documentation page](https://www.react-native-ai.dev/docs/mlc/model-management). + +To include a custom model, direct the user to clone the React Native AI monorepo https://github.com/callstackincubator/ai and modify the https://github.com/callstackincubator/ai/blob/main/packages/mlc/mlc-package-config-android.json and https://github.com/callstackincubator/ai/blob/main/packages/mlc/mlc-package-config-ios.json files to include the model, then build and use the package locally. + +## Common Pitfalls + +- **Simulator**: Prebuilt binaries do not work in the iOS Simulator; use a physical device or a Mac (Designed for iPad). +- **Memory limit**: Must add the Increased Memory Limit capability. +- **Broken download**: If `event.percentage` is NaN, call `model.remove()` and retry the download. 
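The "Broken download" pitfall above can be handled with a small retry wrapper. A minimal sketch against a stand-in interface: `MLCModelLike` and `downloadWithRetry` are illustrative, not part of `@react-native-ai/mlc`; only the `download(onProgress)` / `remove()` shape is taken from the lifecycle example above.

```typescript
// Stand-in for the MLC model surface used here; only the
// download(onProgress) and remove() shape is taken from the
// lifecycle example above.
interface DownloadEvent {
  percentage: number
}

interface MLCModelLike {
  download(onProgress: (event: DownloadEvent) => void): Promise<void>
  remove(): Promise<void>
}

// Hypothetical helper: retries when a NaN percentage signals a broken
// download, removing the partial files before each retry.
async function downloadWithRetry(model: MLCModelLike, maxAttempts = 3): Promise<void> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    let broken = false
    await model.download((event) => {
      if (Number.isNaN(event.percentage)) {
        broken = true
      }
    })
    if (!broken) {
      return
    }
    await model.remove() // clear the partial download before retrying
  }
  throw new Error(`Download still broken after ${maxAttempts} attempts`)
}
```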
+ +## Related Skills + +- [quick-start](quick-start.md) +- [llama-provider](llama-provider.md) +- [apple-provider](apple-provider.md) diff --git a/skills/react-native-ai/references/ncnn-provider.md b/skills/react-native-ai/references/ncnn-provider.md new file mode 100644 index 00000000..cb1dc37b --- /dev/null +++ b/skills/react-native-ai/references/ncnn-provider.md @@ -0,0 +1,89 @@ +# NCNN Provider + +## Quick Command + +```bash +npm add @react-native-ai/ncnn-wrapper +``` + +```ts +import { + loadModel, + runInference, + toFlatArray, +} from '@react-native-ai/ncnn-wrapper' + +await loadModel(paramPath, binPath) // .param and .bin paths +const result = runInference([1, 2, 3]) +``` + +## When to Use + +- Low-level NCNN inference in React Native +- Custom vision or other NCNN models +- Need direct control over model loading and inference + +## Prerequisites + +- [ ] React Native >= 0.76.0 +- [ ] NCNN model files (.param, .bin) +- [ ] Expo optional (config plugin available) +- [ ] Android, iOS, macOS, Linux, or Windows + +## Step-by-Step Instructions + +### 1. Install + +```bash +npm add @react-native-ai/ncnn-wrapper +``` + +### 2. Load Model + +```ts +import { loadModel } from '@react-native-ai/ncnn-wrapper' + +await loadModel(paramPath, binPath) +``` + +### 3. Run Inference + +```ts +import { runInference, toFlatArray } from '@react-native-ai/ncnn-wrapper' + +// Input as number[] or Tensor +const output = runInference([1, 2, 3]) +// or, from a Tensor +const outputFromTensor = runInference(toFlatArray(tensor)) +``` + +### 4. Tensor Utilities + +```ts +import { + createTensor, + fromFlatArray, + tensorSize, + toFlatArray, + type Tensor, +} from '@react-native-ai/ncnn-wrapper' +``` + +### 5. Model Info + +```ts +import { getModelInfo } from '@react-native-ai/ncnn-wrapper' + +const info = getModelInfo() +``` + +## Common Pitfalls + +- **Not a full AI SDK provider**: NCNN wrapper is lower-level; no generateText/streamText. Use for custom inference pipelines. 
- **Model paths**: Ensure .param and .bin paths are correct and accessible. + +## Related Skills + +- [quick-start](quick-start.md) +- [llama-provider](llama-provider.md) +- [mlc-provider](mlc-provider.md) diff --git a/skills/react-native-ai/references/quick-start.md b/skills/react-native-ai/references/quick-start.md new file mode 100644 index 00000000..fcb747ab --- /dev/null +++ b/skills/react-native-ai/references/quick-start.md @@ -0,0 +1,56 @@ +# Quick Start – React Native AI + +## Quick Command + +```bash +npm install +npm add ai @react-native-ai/<provider> +``` + +## When to Use + +- First-time setup of any @react-native-ai provider +- Need to decide which provider fits the use case +- Need to write consumer code that uses the model / provider + +## Prerequisites + +- [ ] React Native >= 0.76.0 +- [ ] Vercel AI SDK v5+ (`ai` package) for generateText/streamText +- [ ] For the Apple (Apple Intelligence) provider: iOS 26+, Apple Intelligence enabled + +## Step-by-Step Instructions + +### 1. Path Selection + +Classify the request into exactly one path: + +| Path | Trigger terms | Reference file | +| ------- | ------------------------------------- | ----------------- | +| Apple | apple, Apple Intelligence, iOS 26 | apple-provider.md | +| Llama | llama, GGUF, llama.rn, HuggingFace | llama-provider.md | +| MLC-LLM | mlc, Llama-3.2, Phi, Qwen, download | mlc-provider.md | +| NCNN | ncnn, loadModel, runInference, Tensor | ncnn-provider.md | + +### 2. Proceed to Provider + +Open the reference file for the selected path and follow its checklist. + +### 3. Consume the model + +If using the LLM providers (Apple, Llama, MLC-LLM), the consumer code uses the [Vercel AI SDK](https://ai-sdk.dev/docs/introduction) to run inference on the model in text streaming or generation mode. + +If using NCNN, the consumer code supplies the input tensor and receives an output tensor. + +## Common Pitfalls + +- **Missing AI SDK**: Providers work with the Vercel AI SDK; install `ai` for generateText/streamText. 
- **Platform support**: Make sure the provider supports the current platform. +- **Expo**: The MLC-LLM package provides an Expo config plugin that needs to be added to `app.json`, and the Llama package needs the llama.rn plugin; both are described in the reference files. + +## Related Skills + +- [apple-provider](apple-provider.md) +- [llama-provider](llama-provider.md) +- [mlc-provider](mlc-provider.md) +- [ncnn-provider](ncnn-provider.md)