diff --git a/README.md b/README.md
index f278d67..19cfac9 100644
--- a/README.md
+++ b/README.md
@@ -6,6 +6,38 @@
 
 A Python SDK for decentralized model management and inference services on the OpenGradient platform. The SDK provides programmatic access to distributed AI infrastructure with cryptographic verification capabilities.
 
+## Quick Summary for Developers
+
+> **New to OpenGradient?** Start here.
+
+| Question | Answer |
+|---|---|
+| **What is it?** | A decentralized network that runs AI inference inside TEEs and settles every request on-chain |
+| **What problem does it solve?** | Centralized AI is a black box; OpenGradient provides cryptographic proof of every inference |
+| **How do I use it?** | Install the SDK, get a private key, and call `llm.chat()` much like the OpenAI API, but with a `transaction_hash` and `tee_signature` in every response |
+| **What is Model Hub?** | A decentralized registry for uploading, discovering, and running custom ONNX models on-chain |
+| **What is MemSync?** | A long-term memory layer that gives AI agents persistent context across sessions |
+
+### 30-Second Quickstart
+
+Install the SDK, set `OG_PRIVATE_KEY` to a funded wallet key (test funds are available from the [faucet](https://faucet.opengradient.ai)), and run:
+
+    import asyncio, os, opengradient as og
+
+    async def main():
+        llm = og.LLM(private_key=os.environ["OG_PRIVATE_KEY"])
+        llm.ensure_opg_approval(min_allowance=0.1)
+        result = await llm.chat(
+            model=og.TEE_LLM.GEMINI_2_5_FLASH,
+            messages=[{"role": "user", "content": "Hello!"}],
+        )
+        print(result.chat_output["content"])  # AI response
+        print(result.transaction_hash)        # on-chain proof
+
+    asyncio.run(main())
+
+---
+
 ## Overview
 
 OpenGradient enables developers to build AI applications with verifiable execution guarantees through Trusted Execution Environments (TEE) and blockchain-based settlement. The SDK supports standard LLM inference patterns while adding cryptographic attestation for applications requiring auditability and tamper-proof AI execution.
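
The quickstart added above relies on asyncio. For readers new to that pattern, here is a minimal, SDK-free sketch of the same await-and-read flow; `fake_chat` and its placeholder fields are hypothetical stand-ins mirroring the response fields the summary table describes (`chat_output`, `transaction_hash`), not the real OpenGradient API:

```python
import asyncio

# Hypothetical stand-in for llm.chat(): returns a dict shaped like the
# response fields described above (no network calls, no real tx hash).
async def fake_chat(messages):
    return {
        "chat_output": {"content": f"echo: {messages[-1]['content']}"},
        "transaction_hash": "0xabc123",  # placeholder, not an on-chain proof
    }

async def main():
    result = await fake_chat([{"role": "user", "content": "Hello!"}])
    print(result["chat_output"]["content"])
    print(result["transaction_hash"])

asyncio.run(main())
```

The real SDK call follows the same shape: build the message list, `await` the chat coroutine inside an async function, then read the output and proof fields off the result.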