Merged
14 changes: 11 additions & 3 deletions .github/workflows/ci.yml
@@ -63,7 +63,7 @@ jobs:
path: dist/

test:
timeout-minutes: 15
timeout-minutes: 20
name: test
runs-on: ubuntu-latest
needs: [lint, build]
@@ -153,7 +153,7 @@ jobs:
coverage-file: coverage.json

test-examples:
timeout-minutes: 15
timeout-minutes: 20
name: test-examples
runs-on: ubuntu-latest
needs: [test]
@@ -198,4 +198,12 @@ jobs:
EVOLUTION_KEY_ID: ${{ secrets.EVOLUTION_KEY_ID }}
EVOLUTION_SECRET: ${{ secrets.EVOLUTION_SECRET }}
EVOLUTION_BASE_URL: ${{ secrets.EVOLUTION_BASE_URL }}
run: make run-tokens

- name: Run foundation models examples
env:
EVOLUTION_KEY_ID: ${{ secrets.EVOLUTION_KEY_ID }}
EVOLUTION_SECRET: ${{ secrets.EVOLUTION_SECRET }}
EVOLUTION_BASE_URL: ${{ secrets.EVOLUTION_BASE_URL }}
EVOLUTION_PROJECT_ID: ${{ secrets.EVOLUTION_PROJECT_ID }}
run: make run-foundation-models
18 changes: 13 additions & 5 deletions Makefile
@@ -19,11 +19,12 @@ help:
@echo " type-check Run type checking (pyright + mypy)"
@echo ""
@echo "Examples:"
@echo " run-examples Run basic usage examples"
@echo " run-all-examples Run all examples"
@echo " run-streaming Run streaming examples"
@echo " run-async Run async examples"
@echo " run-tokens Run token management examples"
@echo " run-examples Run basic usage examples"
@echo " run-all-examples Run all examples"
@echo " run-streaming Run streaming examples"
@echo " run-async Run async examples"
@echo " run-tokens Run token management examples"
@echo " run-foundation-models Run foundation models examples"
@echo ""
@echo "Build:"
@echo " clean Clean build artifacts"
@@ -58,6 +59,9 @@ shell:
test:
rye run pytest tests/ -v --cov=evolution_openai --cov-report=html --cov-report=term --cov-report=xml:coverage.xml --cov-report=json:coverage.json

test-foundation-models:
rye run pytest tests/test_foundation_models_*.py -v

# Code quality
lint:
rye run ruff check .
@@ -209,6 +213,10 @@ run-tokens:
@if [ -f .env ]; then echo "Loading environment variables from .env file..."; export $$(grep -v '^#' .env | xargs); fi; \
rye run python examples/token_management.py

run-foundation-models:
@if [ -f .env ]; then echo "Loading environment variables from .env file..."; export $$(grep -v '^#' .env | xargs); fi; \
rye run python examples/foundation_models_example.py

# Package info
info:
@echo "Package: evolution-openai"
121 changes: 95 additions & 26 deletions README.md
@@ -17,6 +17,9 @@
- ✅ **Retry logic** on authorization errors
- ✅ **.env file support** for configuration management
- ✅ **Integration tests** against the real API
- ✅ **Evolution Foundation Models** support via `project_id`
- ✅ **Ready-made examples** for Foundation Models
- ✅ **State-of-the-art AI models**, including DeepSeek-R1, Qwen2.5, and more

## 📦 Installation

@@ -37,24 +40,40 @@ client = OpenAI(api_key="sk-...")
# ✅ AFTER (Evolution OpenAI)
from evolution_openai import OpenAI

# For regular usage
client = OpenAI(
key_id="your_key_id", secret="your_secret", base_url="https://your-model-endpoint.cloud.ru/v1"
key_id="your_key_id",
secret="your_secret",
base_url="https://your-model-endpoint.cloud.ru/v1"
)

# For Evolution Foundation Models
client = OpenAI(
key_id="your_key_id",
secret="your_secret",
base_url="https://foundation-models.api.cloud.ru/api/gigacube/openai/v1",
project_id="your_project_id" # For Evolution Foundation Models
)

# Everything else works EXACTLY the same!
response = client.chat.completions.create(
model="default", messages=[{"role": "user", "content": "Hello!"}]
model="default", # or "deepseek-ai/DeepSeek-R1-Distill-Llama-70B" for Foundation Models
messages=[{"role": "user", "content": "Hello!"}]
)
```

### Basic usage

#### Regular usage

```python
from evolution_openai import OpenAI

# Инициализация client
# Client initialization for regular usage
client = OpenAI(
key_id="your_key_id", secret="your_secret", base_url="https://your-model-endpoint.cloud.ru/v1"
key_id="your_key_id",
secret="your_secret",
base_url="https://your-model-endpoint.cloud.ru/v1"
)

# Chat Completions
@@ -70,12 +89,54 @@ response = client.chat.completions.create(
print(response.choices[0].message.content)
```

#### 🚀 Evolution Foundation Models

The library fully supports **Evolution Foundation Models**, Cloud.ru's platform for working with state-of-the-art AI models. Key capabilities:

- **Automatic Project ID handling**: the `x-project-id` header is added for you
- **State-of-the-art models**: DeepSeek-R1, Qwen2.5, RefalMachine/RuadaptQwen2.5-7B-Lite-Beta
- **Dedicated endpoint**: `https://foundation-models.api.cloud.ru/api/gigacube/openai/v1`
- **Full OpenAI SDK compatibility**: all methods work identically

```python
from evolution_openai import OpenAI

# Initialization for Evolution Foundation Models
client = OpenAI(
key_id="your_key_id",
secret="your_secret",
base_url="https://foundation-models.api.cloud.ru/api/gigacube/openai/v1",
project_id="your_project_id" # Automatically added to request headers
)

# Using Foundation Models
response = client.chat.completions.create(
model="deepseek-ai/DeepSeek-R1-Distill-Llama-70B",
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "What is artificial intelligence?"},
],
max_tokens=150
)

print(response.choices[0].message.content)
```
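The automatic Project ID handling described above comes down to one extra HTTP header on each request. The sketch below illustrates this with a dependency-free helper; the function name is ours (not part of the SDK), and the real client builds these headers for you from `project_id`:

```python
# Hypothetical helper showing the headers a Foundation Models request carries.
# The Evolution client assembles these automatically; this is only illustrative.
def build_foundation_models_headers(access_token: str, project_id: str) -> dict:
    return {
        "Authorization": f"Bearer {access_token}",  # token obtained from IAM auth
        "x-project-id": project_id,  # what the project_id parameter contributes
    }

headers = build_foundation_models_headers("your_access_token", "your_project_id")
```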

### Streaming

```python
# Streaming responses
# For regular usage
stream = client.chat.completions.create(
model="default",
messages=[{"role": "user", "content": "Tell me a story"}],
stream=True
)

# For Foundation Models
stream = client.chat.completions.create(
model="default", messages=[{"role": "user", "content": "Tell me a story"}], stream=True
model="deepseek-ai/DeepSeek-R1-Distill-Llama-70B",
messages=[{"role": "user", "content": "Tell me a story"}],
stream=True
)

for chunk in stream:
@@ -91,14 +152,24 @@


async def main():
# For regular usage
client = AsyncOpenAI(
key_id="your_key_id",
secret="your_secret",
base_url="https://your-model-endpoint.cloud.ru/v1",
)

# For Foundation Models
client = AsyncOpenAI(
key_id="your_key_id",
secret="your_secret",
base_url="https://foundation-models.api.cloud.ru/api/gigacube/openai/v1",
project_id="your_project_id", # Required for Foundation Models
)

response = await client.chat.completions.create(
model="default", messages=[{"role": "user", "content": "Async hello!"}]
model="deepseek-ai/DeepSeek-R1-Distill-Llama-70B", # or "default" for regular usage
messages=[{"role": "user", "content": "Async hello!"}]
)

print(response.choices[0].message.content)
@@ -118,6 +189,8 @@ asyncio.run(main())
cp env.example .env
```

#### For regular usage:

```bash
# .env file
EVOLUTION_KEY_ID=your_key_id_here
ENABLE_INTEGRATION_TESTS=false
LOG_LEVEL=INFO
```

#### For Evolution Foundation Models:

```bash
# .env file for Foundation Models
EVOLUTION_KEY_ID=your_key_id_here
EVOLUTION_SECRET=your_secret_here
EVOLUTION_BASE_URL=https://foundation-models.api.cloud.ru/api/gigacube/openai/v1
EVOLUTION_PROJECT_ID=your_project_id_here # Required for Foundation Models
EVOLUTION_TOKEN_URL=https://iam.api.cloud.ru/api/v1/auth/token
ENABLE_INTEGRATION_TESTS=false
LOG_LEVEL=INFO
```

```python
import os
from evolution_openai import OpenAI
@@ -140,6 +226,7 @@ client = OpenAI(
key_id=os.getenv("EVOLUTION_KEY_ID"),
secret=os.getenv("EVOLUTION_SECRET"),
base_url=os.getenv("EVOLUTION_BASE_URL"),
project_id=os.getenv("EVOLUTION_PROJECT_ID"), # Only needed for Foundation Models
)
```
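If you would rather not export the variables in your shell, the `.env` file can be loaded in-process before creating the client. Below is a minimal stdlib-only loader as a sketch; the `python-dotenv` package does the same job more robustly (e.g. quoting, inline comments), which this simplified version does not handle:

```python
import os

def load_env_file(path: str = ".env") -> None:
    """Populate os.environ from KEY=VALUE lines, skipping comments and blanks."""
    if not os.path.exists(path):
        return
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Existing environment variables win; note that inline comments
            # after a value are NOT stripped by this simplified loader.
            os.environ.setdefault(key.strip(), value.strip())

load_env_file()  # then create the client with os.getenv(...) as shown above
```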

@@ -180,29 +267,11 @@ with client:
response = client.chat.completions.create(...)
```

## 🔍 Token management

```python
# Get token info
token_info = client.get_token_info()
print(token_info)
# {
# "has_token": true,
# "expires_at": "2024-01-01T12:00:00",
# "is_valid": true,
# "buffer_seconds": 30
# }

# Force-refresh the token
new_token = client.refresh_token()

# Get the current token
current_token = client.current_token
```

## 📚 Documentation

- [API Documentation](https://cloud-ru-tech.github.io/evolution-openai-python)
- [Evolution Foundation Models Guide](https://cloud-ru-tech.github.io/evolution-openai-python/foundation_models)
- [Migration Guide](https://cloud-ru-tech.github.io/evolution-openai-python/migration)
- [Examples](examples/)
- [Changelog](CHANGELOG.md)
4 changes: 2 additions & 2 deletions docs/conf.py
@@ -13,8 +13,8 @@
sys.path.insert(0, os.path.abspath(".."))

project = "Evolution OpenAI"
copyright = "2024, Evolution OpenAI Team"
author = "Evolution OpenAI Team"
copyright = "2024, Evolution ML Inference Team"
author = "Evolution ML Inference Team"
release = "1.0.0"

# -- General configuration ---------------------------------------------------