Write a #[server] function once. It runs on Cloudflare Workers. The client calls it like a normal async function. No manual routing, no manual serialization, no duplicated endpoints.
```rust
// shared crate — server function
use dioxus::prelude::*;
use dioxus_cloudflare::prelude::*;

#[server]
pub async fn get_user(id: String) -> Result<User, ServerFnError> {
    let db = cf::d1("DB")?;
    db.prepare("SELECT * FROM users WHERE id = ?")
        .bind(&[id.into()])?
        .first::<User>(None)
        .await
        .cf()?
        .ok_or_else(|| ServerFnError::new("Not found"))
}
```
```rust
// client component — just call it
let user = get_user("abc".into()).await;
```

```
 Client WASM                   Cloudflare Worker
┌───────────┐    fetch()    ┌─────────────────────┐
│ #[server] │ ────────────▶ │ handle(req, env)    │
│ generates │               │  ↓ set_context()    │
│ POST to   │               │  ↓ worker→http req  │
│ /api/...  │               │  ↓ Axum dispatch    │
│           │ ◀── stream ── │  ↓ http→worker resp │
└───────────┘               └─────────────────────┘
```
- Client calls `get_user(id)` — Dioxus serializes the args and sends a POST to `/api/get_user`
- The Worker's `#[event(fetch)]` receives the request and `dioxus_cloudflare::handle(req, env)` is called:
  - Stores `Env` in a thread-local (`cf::env()` becomes available)
  - Stores the raw `Request` in a thread-local (`cf::req()` becomes available)
  - Converts `worker::Request` → `http::Request`
  - Dispatches through the Dioxus Axum router (`axum_core` feature)
  - Converts `http::Response` → `worker::Response` (streaming via `ReadableStream`)
- The Worker returns the response
Cloudflare Workers run one request per isolate at a time (single-threaded WASM). There is no concurrent access to thread-locals within a single Worker invocation.
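The thread-local pattern this relies on can be sketched in plain Rust. This is an illustrative model only — the names `set_context`/`env` mirror the crate's API but the real implementation stores `worker::Env`, not a `String`:

```rust
use std::cell::RefCell;

// Illustrative sketch of the thread-local context pattern.
// A String stands in for worker::Env here.
thread_local! {
    static ENV: RefCell<Option<String>> = RefCell::new(None);
}

// Conceptually what handle(req, env) does before dispatching:
fn set_context(env: String) {
    ENV.with(|slot| *slot.borrow_mut() = Some(env));
}

// Conceptually what cf::env() does: read the stored context or fail.
fn env() -> Result<String, String> {
    ENV.with(|slot| slot.borrow().clone())
        .ok_or_else(|| "context not set".to_string())
}

fn main() {
    assert!(env().is_err()); // before handle() runs, no context exists
    set_context("worker-env".to_string());
    assert_eq!(env().unwrap(), "worker-env");
    println!("ok");
}
```

Because each isolate handles one request at a time, this storage can never be observed mid-write by a concurrent request.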
| Export | What It Does |
|---|---|
| `cf::d1(name)` | D1 database — env + binding + error conversion in one call |
| `cf::kv(name)` | Workers KV namespace |
| `cf::r2(name)` | R2 bucket |
| `cf::durable_object(name)` | Durable Object namespace |
| `cf::queue(name)` | Queue producer (requires `queue` feature) |
| `cf::secret(name)` | Encrypted secret (`wrangler secret put`) |
| `cf::var(name)` | Plaintext environment variable (`[vars]` in `wrangler.toml`) |
| `cf::ai(name)` | Workers AI inference |
| `cf::service(name)` | Service binding (call other Workers) |
| `cf::env()` | Full Worker `Env` — for bindings without a shorthand |
| `cf::req()` | Raw `worker::Request` — headers, IP |
| `cf::cookie(name)` | Read a named cookie from the request |
| `cf::cookies()` | Read all cookies from the request |
| `cf::set_cookie()` | Set an HttpOnly/Secure auth cookie (secure defaults) |
| `cf::set_cookie_with()` | Set a cookie with custom options (builder pattern) |
| `cf::clear_cookie()` | Clear a cookie (logout) |
| `cf::session()` | Load session data (async); returns `Session` handle for sync get/set/remove |
| `SessionConfig` | Session backend configuration (KV or D1) — pass to `Handler::session()` |
| `handle(req, env)` | Main entry point — wire this into `#[event(fetch)]` |
| `Handler` | Builder with before/after middleware hooks + `.session()` + `.websocket()` routing |
| `cf::websocket_upgrade()` | Create a `WebSocketPair` + 101 response in one call (for Durable Objects) |
| `cf::websocket_pair()` | Create a raw `WebSocketPair` for custom handling |
| `CfError` | Newtype for `worker::Error` → `ServerFnError` conversion |
| `CfResultExt` | `.cf()` method on `Result<T, worker::Error>` and `Result<T, KvError>` |
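As a quick illustration of combining several shorthands from this table, here is a hedged sketch of a server function reading the caller's IP from the raw request plus a `[vars]` value. The binding key `ENVIRONMENT` and the use of the `CF-Connecting-IP` header are illustrative assumptions, not prescribed by the crate:

```rust
#[server]
pub async fn whoami() -> Result<String, ServerFnError> {
    use dioxus_cloudflare::prelude::*;
    // Cloudflare sets CF-Connecting-IP on incoming requests
    let ip = cf::req()?
        .headers()
        .get("CF-Connecting-IP")?
        .unwrap_or_default();
    // "ENVIRONMENT" is an illustrative [vars] key
    let env_name = cf::var("ENVIRONMENT")?.to_string();
    Ok(format!("{ip} ({env_name})"))
}
```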
This crate requires a patched version of `dioxus-server` that adds wasm32 target support. Add the following to your workspace `Cargo.toml`:
```toml
[patch.crates-io]
dioxus-server = { git = "https://github.com/JaffeSystems/dioxus-server-cf.git" }
```

This is necessary because upstream `dioxus-server` 0.7.3 does not compile for `wasm32-unknown-unknown`. The patch applies minimal cfg-gating to make it compatible with Cloudflare Workers.
Install dioxus-cloudflare-build to automate the entire build pipeline — cargo build, wasm-bindgen, and JavaScript shim generation — in a single command:
```sh
cargo install dioxus-cloudflare-build
```

```
dioxus-cf-build [OPTIONS] -p <CRATE>

Options:
  -p, --package <CRATE>  Crate to build (cargo -p flag)
      --release          Build in release mode
      --out-dir <DIR>    Output directory [default: build/worker]
```
What it does:
- Windows MSVC PATH fix — auto-detects the Git Bash `link.exe` conflict and prepends the real MSVC linker directory (no-op on other platforms)
- cargo build — runs `cargo build --target wasm32-unknown-unknown -p <crate> [--release]`
- wasm-bindgen — runs `wasm-bindgen --out-dir <dir> --target web` on the output `.wasm`
- Shim generation — parses the wasm-bindgen `.d.ts` to auto-detect Durable Object classes and generates `shim.mjs`
Use it as your wrangler build command:
```toml
# wrangler.toml
[build]
command = "dioxus-cf-build --release -p my-worker"
```

Now `npx wrangler deploy` handles everything — no manual steps, no hand-written shim.
```rust
use worker::*;
use dioxus_cloudflare::prelude::*;

// Import server functions so they register with inventory
use shared::server_fns::*;

extern "C" { fn __wasm_call_ctors(); }

#[event(fetch)]
async fn fetch(req: Request, env: Env, _ctx: Context) -> Result<Response> {
    // Required: initialize inventory for #[server] function registration
    // SAFETY: Called once per cold start. inventory crate needs this in WASM.
    unsafe { __wasm_call_ctors(); }
    dioxus_cloudflare::handle(req, env).await
}
```

Use `Handler` for before/after middleware without touching bridge internals.
CORS headers on all responses:
```rust
use worker::*;
use dioxus_cloudflare::Handler;

extern "C" { fn __wasm_call_ctors(); }

#[event(fetch)]
async fn fetch(req: Request, env: Env, _ctx: Context) -> Result<Response> {
    unsafe { __wasm_call_ctors(); }
    Handler::new()
        .after(|resp| {
            resp.headers_mut().set("Access-Control-Allow-Origin", "*")?;
            Ok(())
        })
        .handle(req, env)
        .await
}
```

Auth check (short-circuit unauthorized requests):
```rust
Handler::new()
    .before(|req| {
        if req.headers().get("Authorization")?.is_none() {
            return Ok(Some(Response::error("Unauthorized", 401)?));
        }
        Ok(None) // continue to server functions
    })
    .handle(req, env)
    .await
```

Before hooks run after context is set (`cf::env()`, `cf::d1()`, etc. work). Return `Ok(None)` to continue, `Ok(Some(resp))` to short-circuit. After hooks run on all responses (including short-circuited ones) and can modify headers.
Built-in session management backed by Workers KV or D1. Configure it on the `Handler` builder — `cf::session()` becomes available in all server functions.
KV-backed sessions (automatic expiry via KV TTL):
```rust
Handler::new()
    .session(SessionConfig::kv("SESSIONS"))
    .handle(req, env)
    .await
```

D1-backed sessions:
```rust
Handler::new()
    .session(SessionConfig::d1("DB", "sessions"))
    .handle(req, env)
    .await
```

D1 requires a table with this schema:
```sql
CREATE TABLE sessions (
    id TEXT PRIMARY KEY,
    data TEXT NOT NULL,
    expires_at INTEGER NOT NULL
);
```

Reading and writing session data:
```rust
#[server]
pub async fn login(user: String) -> Result<(), ServerFnError> {
    let session = cf::session().await?;
    session.set("user_id", &user)?;
    Ok(())
}

#[server]
pub async fn profile() -> Result<String, ServerFnError> {
    let session = cf::session().await?;
    let user: Option<String> = session.get("user_id")?;
    Ok(user.unwrap_or_else(|| "not logged in".into()))
}

#[server]
pub async fn logout() -> Result<(), ServerFnError> {
    let session = cf::session().await?;
    session.destroy();
    Ok(())
}
```

`cf::session()` is async (it loads from KV/D1 on first call, cached after). Session methods (`get`, `set`, `remove`, `destroy`) are sync — they operate on the in-memory cache. Dirty data is flushed to the backend automatically before the response is sent.
Custom configuration:
```rust
SessionConfig::kv("SESSIONS")
    .cookie_name("my_session")  // default: "__session"
    .max_age(60 * 60 * 24 * 7)  // 7 days (default: 86400 = 24h)
```

`wrangler.toml` — add the KV namespace:
```toml
[[kv_namespaces]]
binding = "SESSIONS"
id = "your-kv-namespace-id"
```

Access encrypted secrets and plaintext variables from inside server functions.
Secrets are set via `wrangler secret put` or the Cloudflare dashboard — encrypted at rest, never in `wrangler.toml`:
```rust
#[server]
pub async fn verify_token(token: String) -> Result<bool, ServerFnError> {
    let expected = cf::secret("API_TOKEN")?.to_string();
    Ok(token == expected)
}
```

Variables are set in the `[vars]` section of `wrangler.toml` — plaintext, visible in source:
```rust
#[server]
pub async fn get_environment() -> Result<String, ServerFnError> {
    Ok(cf::var("ENVIRONMENT")?.to_string())
}
```

```toml
# wrangler.toml
[vars]
ENVIRONMENT = "production"
```

Run AI inference from server functions using Cloudflare's built-in Workers AI models.
```rust
use serde::{Deserialize, Serialize};

#[derive(Serialize)]
struct AiInput { messages: Vec<AiMessage> }

#[derive(Serialize)]
struct AiMessage { role: String, content: String }

#[derive(Deserialize)]
struct AiOutput { response: Option<String> }

#[server]
pub async fn generate(prompt: String) -> Result<String, ServerFnError> {
    use dioxus_cloudflare::prelude::*;
    let ai = cf::ai("AI")?;
    let resp: AiOutput = ai.run("@cf/meta/llama-3.1-8b-instruct", AiInput {
        messages: vec![AiMessage { role: "user".into(), content: prompt }],
    }).await.cf()?;
    Ok(resp.response.unwrap_or_default())
}
```

```toml
# wrangler.toml
[ai]
binding = "AI"
```

Any model listed in the Workers AI catalog can be used — text generation, embeddings, image generation, etc. Define typed input/output structs matching the model's API — `serde_json::Value` does not work correctly through `serde_wasm_bindgen`.
Combine KV for version metadata with R2 for binary storage — a pattern for desktop/mobile apps that check a Worker for updates.
```rust
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
struct ReleaseInfo { platform: String, version: String, notes: String, r2_key: String }

#[server]
pub async fn publish_release(platform: String, version: String, notes: String, binary: String) -> Result<ReleaseInfo, ServerFnError> {
    use dioxus_cloudflare::prelude::*;
    let r2_key = format!("releases/{platform}/{version}");
    cf::r2("BUCKET")?.put(&r2_key, binary).execute().await.cf()?;
    let info = ReleaseInfo { platform: platform.clone(), version, notes, r2_key };
    let json = serde_json::to_string(&info).map_err(|e| ServerFnError::new(e.to_string()))?;
    cf::kv("KV")?.put(&format!("latest:{platform}"), &json).cf()?.execute().await.cf()?;
    Ok(info)
}

#[server]
pub async fn check_for_update(platform: String, current: String) -> Result<String, ServerFnError> {
    use dioxus_cloudflare::prelude::*;
    let Some(json) = cf::kv("KV")?.get(&format!("latest:{platform}")).text().await.cf()? else {
        return Ok("up to date".into());
    };
    let info: ReleaseInfo = serde_json::from_str(&json).map_err(|e| ServerFnError::new(e.to_string()))?;
    Ok(if info.version != current { format!("update available: {}", info.version) } else { "up to date".into() })
}
```

Call other Workers from server functions. The target Worker must be deployed separately and bound in `wrangler.toml`.
```rust
#[server]
pub async fn call_auth(token: String) -> Result<String, ServerFnError> {
    use dioxus_cloudflare::prelude::*;
    let auth = cf::service("AUTH")?;
    let resp = auth.fetch("https://fake-host/verify", None).await.cf()?;
    Ok(resp.text().await.cf()?)
}
```

```toml
# wrangler.toml
[[services]]
binding = "AUTH"
service = "auth-worker"
```

The URL host is ignored — the request goes directly to the bound Worker. Use any placeholder host.
Render Dioxus components to HTML at the edge. Requires the ssr feature.
When the Axum router returns 404 and the request accepts `text/html`, the handler renders your app component and returns the HTML. Non-HTML requests (JS, CSS, WASM, JSON) pass through normally.
Minimal SSR (default HTML shell, no client JS):
```rust
Handler::new()
    .with_ssr(App)
    .handle(req, env)
    .await
```

SSR with custom index.html (SPA takeover after first paint):
```rust
Handler::new()
    .with_ssr(App)
    .with_index_html(include_str!("path/to/index.html"))?
    .handle(req, env)
    .await
```

The custom index.html must contain an element with `id="main"` — rendered component output is inserted at that point.
Suspense is supported: `wait_for_suspense()` resolves server futures during SSR, so components that call `#[server]` functions via `use_server_future` will have their data ready in the initial HTML.
SSR always renders with hydration markers (`data-node-hydration` attributes) and injects serialized hydration data. When the client WASM is built with `hydrate(true)`, it reuses the server-rendered DOM instead of re-rendering — providing instant first paint with no flash.
Worker (server):
```rust
Handler::new()
    .with_ssr(App)
    .with_index_html(include_str!("path/to/index.html"))?
    .handle(req, env)
    .await
```

Client WASM (must render the same component):
The client must enable the `fullstack` feature on `dioxus` (which activates `dioxus-web/hydrate`):
```toml
# Cargo.toml
[dependencies]
dioxus = { version = "=0.7.3", features = ["web", "fullstack"] }
```

```rust
fn main() {
    dioxus::launch(App);
}
```

Important: Do not use `?` on `use_server_future` in hydrated components. The `?` operator suspends the component if the resource isn't immediately ready, which creates a VirtualDom/DOM tree mismatch and crashes the hydration walker. Instead, match on the `Result`:
```rust
#[component]
fn App() -> Element {
    let data_text = match use_server_future(get_data) {
        Ok(resource) => match &*resource.read() {
            Some(Ok(s)) => s.clone(),
            Some(Err(e)) => format!("Error: {e}"),
            None => "Loading...".into(),
        },
        Err(_) => "Loading...".into(),
    };
    rsx! { p { "{data_text}" } }
}
```

Build order:
1. `dx build --release` — builds client WASM + `index.html`
2. `dioxus-cf-build --release -p your-worker` — builds worker WASM, runs wasm-bindgen, generates shim (see Build Tool)
Send the initial HTML immediately with suspense fallbacks as placeholders, then stream resolved content out-of-order via ReadableStream as each suspense boundary completes. Fast data renders instantly; slow data streams in later. Requires the ssr feature.
```rust
Handler::new()
    .with_streaming_ssr(App)
    .with_index_html(include_str!("path/to/index.html"))?
    .handle(req, env)
    .await
```

If no suspense boundaries are pending after the initial render, streaming SSR automatically falls back to a single-shot response with no overhead — you can always use `with_streaming_ssr` without penalty.
The client-side JavaScript (window.dx_hydrate) swaps suspense placeholders with resolved content as chunks arrive. This is the same mechanism used by upstream Dioxus streaming SSR.
Real-time WebSocket connections via Durable Objects. The worker upgrades the request and forwards it to a DO, which creates the WebSocketPair and handles messages.
Worker entry point — route WebSocket upgrades to a Durable Object:
```rust
Handler::new()
    .websocket("/ws", |req| async move {
        let ns = cf::durable_object("WS_DO")?;
        // bind the path to a local so the borrowed room name outlives the call
        let path = req.path();
        let room = path.strip_prefix("/ws/").unwrap_or("default");
        let id = ns.id_from_name(room).cf()?;
        let stub = id.get_stub().cf()?;
        Ok(stub.fetch_with_request(req).await.cf()?)
    })
    .handle(req, env)
    .await
```

Durable Object — accept the socket and handle messages:
```rust
use worker::*;
use dioxus_cloudflare::prelude::*;

#[durable_object]
pub struct EchoDo {
    state: State,
    env: Env,
}

impl DurableObject for EchoDo {
    fn new(state: State, env: Env) -> Self { Self { state, env } }

    async fn fetch(&self, _req: Request) -> Result<Response> {
        let (server, resp) = cf::websocket_upgrade()?;
        self.state.accept_web_socket(&server);
        Ok(resp)
    }

    async fn websocket_message(&self, ws: WebSocket, message: WebSocketIncomingMessage) -> Result<()> {
        match message {
            WebSocketIncomingMessage::String(text) => ws.send_with_str(&format!("echo: {text}"))?,
            WebSocketIncomingMessage::Binary(bytes) => ws.send_with_bytes(&bytes)?,
        }
        Ok(())
    }

    async fn websocket_close(&self, _ws: WebSocket, _code: usize, _reason: String, _was_clean: bool) -> Result<()> {
        Ok(())
    }
}
```

`wrangler.toml` — bind the DO and route WebSocket paths:
```toml
[durable_objects]
bindings = [
  { name = "WS_DO", class_name = "EchoDo" }
]

[[migrations]]
tag = "v1"
new_sqlite_classes = ["EchoDo"]

[assets]
run_worker_first = ["/api/*", "/ws/*"]
```

Server function (shared crate):

```rust
use dioxus::prelude::*;
use dioxus_cloudflare::prelude::*;

#[server]
pub async fn create_order(items: Vec<Item>) -> Result<Order, ServerFnError> {
    let db = cf::d1("DB")?;
    // compute the order total (assumes Item exposes a price: f64 field)
    let total: f64 = items.iter().map(|i| i.price).sum();
    db.prepare("INSERT INTO orders (items, total) VALUES (?, ?)")
        .bind(&[serde_json::to_string(&items)?.into(), total.into()])?
        .run()
        .await
        .cf()?;
    Ok(Order { items, total, status: "confirmed".into() })
}
```

Client component:

```rust
use dioxus::prelude::*;
use shared::server_fns::create_order;

#[component]
fn OrderButton(items: Vec<Item>) -> Element {
    let order = use_resource(move || {
        let items = items.clone();
        async move { create_order(items).await }
    });
    match &*order.read() {
        Some(Ok(o)) => rsx! { p { "Order confirmed: {o.status}" } },
        Some(Err(e)) => rsx! { p { "Error: {e}" } },
        None => rsx! { p { "Placing order..." } },
    }
}
```

| Module | Purpose | Key Exports |
|---|---|---|
| `lib.rs` | Public API surface | `cf` module, `handle()`, `Handler` |
| `bindings.rs` | Typed CF binding shorthands | `cf::d1()`, `cf::kv()`, `cf::r2()`, `cf::durable_object()`, `cf::queue()`, `cf::ai()`, `cf::service()`, `cf::secret()`, `cf::var()` |
| `context.rs` | Thread-local `Env` + `Request` storage | `cf::env()`, `cf::req()`, `set_context()` |
| `handler.rs` | Worker↔Axum bridge + `Handler` builder | `handle()`, `Handler::new()`, `.before()`, `.after()`, `.websocket()` |
| `cookie.rs` | Cookie read/write helpers | `cf::cookie()`, `cf::cookies()`, `cf::set_cookie()`, `cf::set_cookie_with()`, `cf::clear_cookie()` |
| `session.rs` | Session middleware (KV or D1 backend) | `cf::session()`, `SessionConfig`, `Session` |
| `error.rs` | Error bridge to `ServerFnError` | `CfError`, `CfResultExt` (`.cf()` method) |
| `ssr.rs` | SSR rendering + hydration data extraction | `with_ssr()`, `with_streaming_ssr()`, `with_index_html()` |
| `streaming.rs` | Out-of-order streaming SSR internals | `MountPath`, `Mount`, `PendingSuspenseBoundary` |
| `websocket.rs` | WebSocket helpers for Durable Objects | `cf::websocket_upgrade()`, `cf::websocket_pair()` |
| `prelude.rs` | Convenience re-exports | `use dioxus_cloudflare::prelude::*` |
| Binding | Function | Example |
|---|---|---|
| D1 | `cf::d1(name)` | `cf::d1("DB")?.prepare("SELECT ...").first::<T>(None).await.cf()?` |
| Workers KV | `cf::kv(name)` | `cf::kv("KV")?.get("key").text().await.cf()?` |
| R2 | `cf::r2(name)` | `cf::r2("BUCKET")?.put("key", data).execute().await.cf()?` |
| Durable Objects | `cf::durable_object(name)` | `cf::durable_object("DO")?.id_from_name("room").cf()?` |
| Queues | `cf::queue(name)` | `cf::queue("Q")?.send(msg).await.cf()?` (requires `queue` feature) |
| Workers AI | `cf::ai(name)` | `cf::ai("AI")?.run("@cf/meta/llama-3.1-8b-instruct", input).await.cf()?` |
| Service Bindings | `cf::service(name)` | `cf::service("AUTH")?.fetch(url, None).await.cf()?` |
| Secrets | `cf::secret(name)` | `cf::secret("API_KEY")?.to_string()` |
| Variables | `cf::var(name)` | `cf::var("ENVIRONMENT")?.to_string()` |
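Queues are the one binding above without a fuller example elsewhere in this README. A minimal hedged sketch — the binding name `JOBS` and the plain `String` payload are assumptions, and the `queue` feature must be enabled:

```rust
#[server]
pub async fn enqueue_job(payload: String) -> Result<(), ServerFnError> {
    use dioxus_cloudflare::prelude::*;
    // Producer side only — the consumer Worker is configured separately
    // in wrangler.toml ([[queues.producers]] / [[queues.consumers]]).
    cf::queue("JOBS")?.send(payload).await.cf()?;
    Ok(())
}
```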
| Feature | API | Details |
|---|---|---|
| Raw request | `cf::req()` | Access headers, IP, method from inside server functions |
| Read cookie | `cf::cookie(name)` | Parse a named cookie from the `Cookie` header |
| Read all cookies | `cf::cookies()` | Parse all cookies into a `HashMap` |
| Set cookie | `cf::set_cookie(name, value, max_age)` | Queue a `Set-Cookie` header (HttpOnly, Secure, SameSite=Strict) |
| Custom cookie | `cf::set_cookie_with(name, value)` | Builder pattern for custom Domain, Path, SameSite, etc. |
| Clear cookie | `cf::clear_cookie(name)` | Queue a cookie-clear header |
| Streaming | Return `TextStream` / `ByteStream` / `JsonStream` | Streamed via `ReadableStream` — no buffering |
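The cookie helpers queue standard `Set-Cookie` headers. As a plain-Rust illustration of the secure defaults described in the table — this is not the crate's actual code, and the attribute ordering is an assumption:

```rust
// Illustrative: the kind of Set-Cookie value the secure defaults
// (HttpOnly, Secure, SameSite=Strict) described above would produce.
fn set_cookie_header(name: &str, value: &str, max_age: u64) -> String {
    format!("{name}={value}; Max-Age={max_age}; Path=/; HttpOnly; Secure; SameSite=Strict")
}

// Clearing a cookie is the same header with an empty value and Max-Age=0.
fn clear_cookie_header(name: &str) -> String {
    set_cookie_header(name, "", 0)
}

fn main() {
    let h = set_cookie_header("__session", "abc123", 86400);
    assert!(h.contains("HttpOnly") && h.contains("SameSite=Strict"));
    assert!(clear_cookie_header("__session").contains("Max-Age=0"));
    println!("{h}");
}
```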
| Feature | API | Details |
|---|---|---|
| Middleware | `Handler::new().before(f).after(f)` | Before hooks can short-circuit; after hooks modify all responses |
| Sessions | `Handler::new().session(SessionConfig::kv("KV"))` | KV or D1 backend, cookie-based session IDs, auto-flush |
| WebSockets | `Handler::new().websocket("/ws", handler)` | Routes upgrades to Durable Objects |
| SSR | `Handler::new().with_ssr(App)` | Single-shot server-side rendering (requires `ssr` feature) |
| Streaming SSR | `Handler::new().with_streaming_ssr(App)` | Out-of-order streaming with suspense (requires `ssr` feature) |
| Custom HTML | `.with_index_html(include_str!("index.html"))` | Use your own HTML shell (must contain `id="main"`) |
| Error bridge | `.cf()` on any `Result<T, worker::Error>` | Converts to `ServerFnError` automatically |
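These builder methods compose on a single `Handler`. A hedged sketch of chaining several together — `ws_route` is a placeholder handler, and the call order shown here is an assumption rather than a documented requirement:

```rust
Handler::new()
    .session(SessionConfig::kv("SESSIONS"))
    .websocket("/ws", ws_route)   // ws_route: placeholder async route handler
    .with_ssr(App)                // App: your root component
    .before(|_req| Ok(None))      // hook that always continues
    .after(|_resp| Ok(()))        // hook that leaves responses untouched
    .handle(req, env)
    .await
```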
| Feature | Enables | Added Dependencies |
|---|---|---|
| `queue` | `cf::queue()` binding (activates `worker/queue`) | None |
| `ssr` | `with_ssr()`, `with_streaming_ssr()`, `with_index_html()` | `dioxus-ssr`, `dioxus-history`, `futures-channel`, `wasm-bindgen-futures` |
| Crate | Purpose | Install |
|---|---|---|
| `dioxus-cloudflare-build` | Build pipeline CLI: cargo build + wasm-bindgen + shim generation | `cargo install dioxus-cloudflare-build` |
| `dioxus-server-cf` | Patched `dioxus-server` 0.7.3 for wasm32 compatibility | `[patch.crates-io]` (see Prerequisites) |
```toml
[dependencies]
dioxus = { version = "=0.7.3", features = ["fullstack"] }
dioxus-cloudflare = { version = "0.7", features = ["queue", "ssr"] }
worker = { version = "0.7", features = ["http", "d1"] }
wasm-bindgen = "0.2"

[patch.crates-io]
dioxus-server = { git = "https://github.com/JaffeSystems/dioxus-server-cf.git" }
```

- Template project — Done! See `cargo generate JaffeSystems/dioxus` — 7 demo sections (D1, KV, R2, AI, Auth, Queue, Update Server) with streaming SSR and hydration
- Remove the `dioxus-server` patch requirement — upstream the wasm32 cfg-gating to Dioxus core
Copyright (C) 2026-2027 Jaffe Systems
This project is licensed under the GNU Affero General Public License v3.0 (AGPL-3.0).
If you use this software in a network service (SaaS, web application, API, etc.), you must make the complete source code of your application available to its users under the AGPL-3.0. This includes any modifications and derivative works.
Commercial License: If you need to use this software in a proprietary or closed-source application without the AGPL-3.0 obligations, a commercial license is available. See COMMERCIAL-LICENSE.md for details.
| Use Case | License | Source Disclosure Required? |
|---|---|---|
| Open-source project | AGPL-3.0 (free) | Yes |
| Internal tools (not served to users) | AGPL-3.0 (free) | No |
| Proprietary SaaS / closed-source | Commercial (paid) | No |