76 changes: 43 additions & 33 deletions README.md
A collection of on-device AI primitives for React Native with first-class Vercel AI SDK support.
## AI SDK Compatibility

| React Native AI | AI SDK |
| --------------- | ------ |
| 0.11 and below | v5 |
| 0.12 and above | v6 |

Rozenite must be installed and enabled in your app.

## Available Providers

| Provider | Built-in | Platforms | Runtime | Description |
| --------------- | -------- | ------------ | ------------------------------------------------------------------- | ---------------------------------------------------------- |
| [Apple](#apple) | ✅ Yes | iOS | [Apple](https://developer.apple.com/documentation/FoundationModels) | Apple Foundation Models, embeddings, transcription, speech |
| [Llama](#llama) | ❌ No | iOS, Android | [llama.rn](https://github.com/mybigday/llama.rn) | Run GGUF models via llama.rn |
| [MLC](#mlc) | ❌ No | iOS, Android | [MLC LLM](https://github.com/mlc-ai/mlc-llm) | Run open-source LLMs via MLC runtime |

---

No additional linking is needed; it works immediately on iOS devices (autolinked).

```typescript
import { apple } from '@react-native-ai/apple'
import {
  generateText,
  embed,
  experimental_transcribe as transcribe,
  experimental_generateSpeech as speech,
} from 'ai'

// Text generation with Apple Intelligence
const { text } = await generateText({
  model: apple(),
  prompt: 'Explain quantum computing',
})

// Generate embeddings
const { embedding } = await embed({
  model: apple.textEmbeddingModel(),
  value: 'Hello world',
})

// Transcribe audio
const { text: transcript } = await transcribe({
  model: apple.transcriptionModel(),
  audio: audioBuffer,
})

// Text-to-speech
const { audio } = await speech({
  model: apple.speechModel(),
  text: 'Hello from Apple!',
})
```
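The embedding vectors above can drive a simple similarity search. As a minimal sketch in plain TypeScript (no provider APIs involved, helper name is ours), cosine similarity between two embeddings looks like this:

```typescript
// Cosine similarity between two embedding vectors of equal length.
// Illustrative helper — not part of @react-native-ai/apple.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0
  let normA = 0
  let normB = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    normA += a[i] * a[i]
    normB += b[i] * b[i]
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB))
}

// Identical vectors score 1; orthogonal vectors score 0
console.log(cosineSimilarity([1, 0], [1, 0])) // 1
console.log(cosineSimilarity([1, 0], [0, 1])) // 0
```

Ranking stored embeddings by this score against a query embedding is the core of an on-device RAG lookup.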

#### Availability

| Feature | iOS Version | Additional Requirements |
| ---------------- | ----------- | -------------------------- |
| Text Generation | iOS 26+ | Apple Intelligence device |
| Embeddings | iOS 17+ | - |
| Transcription | iOS 26+ | - |
| Speech Synthesis | iOS 13+ | iOS 17+ for Personal Voice |

See the [Apple documentation](https://react-native-ai.dev/docs/apple/getting-started) for detailed setup and usage guides.

Run any GGUF model on-device using [llama.rn](https://github.com/mybigday/llama.rn).

#### Supported Features

| Feature | Method | Description |
| --------------- | ---------------------------- | --------------------------------------------- |
| Text Generation | `llama.languageModel()` | Chat, completion, streaming, reasoning models |
| Embeddings | `llama.textEmbeddingModel()` | Text embeddings for RAG and similarity search |
| Speech | `llama.speechModel()` | Text-to-speech with vocoder models |
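Llama model IDs follow the HuggingFace `owner/repo/filename.gguf` format. As an illustrative sketch (the helper is ours, not part of `@react-native-ai/llama`), splitting an ID into its parts:

```typescript
// Split a llama model ID of the form "owner/repo/filename.gguf".
// Hypothetical helper for illustration — not a library API.
function parseModelId(id: string): { owner: string; repo: string; file: string } {
  const [owner, repo, file, ...rest] = id.split('/')
  if (!owner || !repo || !file || rest.length > 0 || !file.endsWith('.gguf')) {
    throw new Error(`Invalid model ID: ${id}`)
  }
  return { owner, repo, file }
}

const parts = parseModelId('ggml-org/SmolLM3-3B-GGUF/SmolLM3-Q4_K_M.gguf')
console.log(parts.owner) // ggml-org
console.log(parts.file) // SmolLM3-Q4_K_M.gguf
```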

#### Installation

```typescript
import { llama } from '@react-native-ai/llama'
import { generateText, streamText } from 'ai'

// Create model instance (Model ID format: "owner/repo/filename.gguf")
const model = llama.languageModel(
  'ggml-org/SmolLM3-3B-GGUF/SmolLM3-Q4_K_M.gguf'
)

// Download from HuggingFace (with progress)
await model.download((progress) => {
  console.log('Download progress:', progress)
})
```

```typescript
await model.prepare()

// Generate response with Llama via MLC engine
const { text } = await generateText({
  model,
  prompt: 'Explain quantum computing',
})
```

#### Available Models

| Model ID                 | Size   |
| ------------------------ | ------ |
| `Llama-3.2-3B-Instruct`  | ~2GB   |
| `Phi-3-mini-4k-instruct` | ~2.5GB |
| `Mistral-7B-Instruct`    | ~4.5GB |
| `Qwen2.5-1.5B-Instruct`  | ~1GB   |

> [!NOTE]
> MLC requires iOS devices with sufficient memory (1-8GB depending on model). The prebuilt runtime supports the models listed above. For other models or custom configurations, you'll need to recompile the MLC runtime from source.
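Given the sizes above, an app can pick the largest prebuilt model that fits a device's memory budget. A minimal sketch (sizes hard-coded from the table; the helper is illustrative, not a library API):

```typescript
// Approximate on-disk sizes (GB) for the prebuilt MLC models listed above.
const MLC_MODEL_SIZES: Record<string, number> = {
  'Llama-3.2-3B-Instruct': 2,
  'Phi-3-mini-4k-instruct': 2.5,
  'Mistral-7B-Instruct': 4.5,
  'Qwen2.5-1.5B-Instruct': 1,
}

// Pick the largest model that fits the given budget, or undefined if none fit.
// Illustrative helper — not part of @react-native-ai/mlc.
function pickModel(budgetGb: number): string | undefined {
  return Object.entries(MLC_MODEL_SIZES)
    .filter(([, size]) => size <= budgetGb)
    .sort((a, b) => b[1] - a[1])[0]?.[0]
}

console.log(pickModel(3)) // Phi-3-mini-4k-instruct
```
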
Comprehensive guides and API references are available at [react-native-ai.dev](https://react-native-ai.dev).

Read the [contribution guidelines](/CONTRIBUTING.md) before contributing.

## Agent Skills

This repository provides agent skills to help you integrate and use the packages. You can easily install them with:

`npx skills add https://github.com/callstackincubator/react-native-ai --skill json-render-react`

or manually by copying the `skills/` directory into your `.cursor/` directory.

## Made with ❤️ at Callstack

**react-native-ai** is an open source project and will always remain free to use. If you think it's cool, please star it 🌟.

[Callstack][callstack-readme-with-love] is a group of React and React Native geeks, contact us at [hello@callstack.com](mailto:hello@callstack.com) if you need any help with these or just want to say hi!

118 changes: 118 additions & 0 deletions skills/react-native-ai/SKILL.md
---
name: react-native-ai-skills
description: Provides integration recipes for the React Native AI @react-native-ai packages, which wrap the llama.rn (llama.cpp), MLC-LLM, and Apple Foundation Models backends. Use when integrating local on-device AI in React Native, setting up providers, or managing models.
license: MIT
metadata:
author: Callstack
tags: react-native, ai, llama, apple, mlc, ncnn, vercel-ai-sdk, on-device
---

# React Native AI Skills

## Overview

Example workflow for integrating on-device AI in React Native apps using the @react-native-ai ecosystem. Available provider tracks (can be combined):

- **Apple** – Apple Intelligence (iOS 26+)
- **Llama** – GGUF models via llama.rn
- **MLC** – MLC-LLM models
- **NCNN** – Low-level NCNN inference wrapper (vision, custom models)

## Path Selection Gate (Must Run First)

Before selecting any reference file, classify the user request:

1. Select **Apple**:
   - if you intend to build with: `apple`, `Apple Intelligence`, `Apple Foundation Models`
   - if you want features: `transcription`, `speech synthesis`, `embeddings` on Apple devices
   - optionally with capabilities: tool calling
2. Select **Llama**:
   - if you intend to use the following technologies: `llama`, `GGUF`, `llama.rn`, `HuggingFace`, `SmolLM`
   - if you want to perform the following operations: `embedding model`, `rerank`, `speech model`
3. Select **MLC**:
   - if you need custom models with build-time model optimizations
4. Select **NCNN**:
   - if you need to run low-level inference on raw tensors
   - if you intend to run inference with custom models such as convolutional networks or multi-layer perceptrons
   - DO NOT select NCNN if the prompt mentions LLMs only; this use case is better served by other providers
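
The gate above is meant for agent reasoning, but its core can be sketched as a naive keyword router (purely illustrative; the keyword lists are incomplete by design):

```typescript
// Naive keyword router mirroring the path selection gate above.
// Purely illustrative — real routing should apply the full gate.
type Provider = 'apple' | 'llama' | 'mlc' | 'ncnn'

function routeRequest(request: string): Provider {
  const r = request.toLowerCase()
  if (/\b(gguf|llama\.rn|huggingface|smollm)\b/.test(r)) return 'llama'
  if (/(apple intelligence|foundation models|transcription|speech synthesis)/.test(r)) return 'apple'
  if (/\b(tensors?|convolutional|perceptron|ncnn)\b/.test(r)) return 'ncnn'
  // Fall through to MLC for custom-model / build-time optimization requests
  return 'mlc'
}

console.log(routeRequest('Run a GGUF model from HuggingFace')) // llama
console.log(routeRequest('On-device transcription with Apple Intelligence')) // apple
```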

## Skill Format

Each reference file follows a strict execution format:

- Quick Command
- When to Use
- Prerequisites
- Step-by-Step Instructions
- Common Pitfalls
- Related Skills

Use the checklists exactly as written before moving to the next phase.

## When to Apply

Reference this package when:

- Integrating on-device AI in React Native apps
- Installing and configuring @react-native-ai providers
- Managing model downloads (llama, mlc)
- Wiring providers with Vercel AI SDK (generateText, streamText)
- Implementing SetupAdapter pattern for multi-provider apps
- Debugging native module or Expo plugin issues
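
The SetupAdapter pattern mentioned above can be sketched as a single startup seam that picks the first available provider (the interface and names are ours for illustration, not a published API):

```typescript
// Illustrative SetupAdapter sketch: one seam that selects a provider
// at startup so app code depends on a single interface.
// Names are hypothetical — not a published @react-native-ai API.
interface SetupAdapter {
  name: string
  isAvailable(): boolean
}

function selectAdapter(adapters: SetupAdapter[]): SetupAdapter {
  const found = adapters.find((a) => a.isAvailable())
  if (!found) throw new Error('No on-device AI provider available')
  return found
}

const selected = selectAdapter([
  { name: 'apple', isAvailable: () => false }, // e.g. not an iOS 26+ device
  { name: 'llama', isAvailable: () => true },
])
console.log(selected.name) // llama
```

In a real app, each adapter's `isAvailable` would wrap the provider's own availability check (e.g. `apple.isAvailable()`).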

## Priority-Ordered Guidelines

| Priority | Category | Impact | Start File |
| -------- | --------------------------- | ------ | -------------------------------- |
| 1 | Path selection and baseline | N/A | [quick-start][quick-start] |
| 2 | Apple provider | N/A | [apple-provider][apple-provider] |
| 3 | Llama provider | N/A | [llama-provider][llama-provider] |
| 4 | MLC-LLM provider | N/A | [mlc-provider][mlc-provider] |
| 5 | NCNN provider | N/A | [ncnn-provider][ncnn-provider] |

## Quick Reference

```bash
npm install

# Provider-specific install
npm add @react-native-ai/apple
npm add @react-native-ai/llama llama.rn
npm add @react-native-ai/mlc
npm add @react-native-ai/ncnn-wrapper
```

Route by path:

- Apple: [apple-provider][apple-provider]
- Llama: [llama-provider][llama-provider]
- MLC: [mlc-provider][mlc-provider]
- NCNN: [ncnn-provider][ncnn-provider]

## References

| File | Impact | Description |
| -------------------------------- | ------ | ------------------------------------------ |
| [quick-start][quick-start] | N/A | Shared preflight |
| [apple-provider][apple-provider] | N/A | Apple Intelligence setup and integration |
| [llama-provider][llama-provider] | N/A | GGUF models, llama.rn, model management |
| [mlc-provider][mlc-provider] | N/A | MLC models, download, prepare, Expo plugin |
| [ncnn-provider][ncnn-provider] | N/A | NCNN wrapper, loadModel, runInference |

## Problem → Skill Mapping

| Problem | Start With |
| ------------------------------------- | ---------------------------------------------- |
| Need path decision first | [quick-start][quick-start] |
| Integrate Apple Intelligence | [apple-provider][apple-provider] |
| Run GGUF models from HuggingFace | [llama-provider][llama-provider] |
| Run MLC-LLM models (Llama, Phi, Qwen) | [mlc-provider][mlc-provider] |
| Use NCNN for custom inference | [ncnn-provider][ncnn-provider] |
| Multi-provider app with SetupAdapter | [quick-start][quick-start] → provider-specific |
| Expo + native module setup | Provider-specific (each has Expo notes) |

[quick-start]: references/quick-start.md
[apple-provider]: references/apple-provider.md
[llama-provider]: references/llama-provider.md
[mlc-provider]: references/mlc-provider.md
[ncnn-provider]: references/ncnn-provider.md
78 changes: 78 additions & 0 deletions skills/react-native-ai/references/apple-provider.md
# Apple Provider

## Quick Command

```bash
npm add @react-native-ai/apple
```

```ts
import { apple } from '@react-native-ai/apple'
import { generateText } from 'ai'

const result = await generateText({
model: apple(),
prompt: 'Explain quantum computing in simple terms',
})
```

## When to Use

- Use Apple Intelligence on iOS 26+
- Need language model, embeddings, transcription, or speech

## Prerequisites

- [ ] React Native New Architecture
- [ ] iOS 26+ (Android not supported)
- [ ] Apple Intelligence enabled device
- [ ] Vercel AI SDK v5+ (`ai`)

## Step-by-Step Instructions

### 1. Install

```bash
npm add @react-native-ai/apple
```

### 2. Availability Check

```ts
import { apple } from '@react-native-ai/apple'

if (apple.isAvailable()) {
// Use Apple provider
}
```

### 3. Model Types

| Type | Method | Use Case | Documentation |
| ------------- | ---------------------------- | --------------------------------------- | -------------------------------------------------------- |
| Language | `apple.languageModel()` | Text generation, chat | https://www.react-native-ai.dev/docs/apple/generating |
| Embedding | `apple.textEmbeddingModel()` | RAG, similarity, prompt size estimation | https://www.react-native-ai.dev/docs/apple/embeddings |
| Transcription | `apple.transcriptionModel()` | Speech-to-text | https://www.react-native-ai.dev/docs/apple/transcription |
| Speech | `apple.speechModel()` | Text-to-speech | https://www.react-native-ai.dev/docs/apple/speech |

### 4. Tool Calling

```ts
import { createAppleProvider } from '@react-native-ai/apple'

const apple = createAppleProvider({ availableTools: tools })
const model = apple.languageModel()
```
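The `tools` value above is left open; its concrete schema is defined by the provider. As a hypothetical stand-in (our own types, not the library's — check the Apple provider docs for the real shape), a tool registry might look like:

```typescript
// Hypothetical shape for a registry passed as `availableTools`.
// The real schema is defined by @react-native-ai/apple.
type ToolHandler = (input: Record<string, unknown>) => Promise<string>

interface ToolDefinition {
  description: string
  execute: ToolHandler
}

const tools: Record<string, ToolDefinition> = {
  getWeather: {
    description: 'Get the current weather for a given city',
    execute: async ({ city }) => `Sunny in ${String(city)}`,
  },
}

tools.getWeather.execute({ city: 'Wrocław' }).then(console.log)
```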

## Common Pitfalls

- **Wrong iOS version**: Apple Intelligence requires iOS 26+.
- **Simulator**: For now only physical devices are supported.
- **New Architecture**: React Native New Architecture is required.

## Related Skills

- [quick-start](quick-start.md)
- [llama-provider](llama-provider.md)
- [mlc-provider](mlc-provider.md)