diff --git a/LOCALHOST_AI_SETUP.md b/LOCALHOST_AI_SETUP.md new file mode 100644 index 0000000..e0c1b47 --- /dev/null +++ b/LOCALHOST_AI_SETUP.md @@ -0,0 +1,167 @@ +# LocalAI Integration for AI Terminal + +This document outlines the changes made to support your local AI running on localhost:8000. + +## 🚀 What's New + +AI Terminal now supports multiple AI providers: +- **Ollama** (default) - `http://localhost:11434` +- **LocalAI** - `http://localhost:8000` (your local AI) +- **OpenAI** - OpenAI API compatible endpoints + +## 🔧 Technical Changes + +### 1. New AI Provider System +- Created `AIProvider` enum with support for Ollama, LocalAI, and OpenAI +- Added flexible request/response types for different API formats +- Ollama uses simple `{model, prompt, stream}` format +- LocalAI uses OpenAI-compatible `{model, messages[], temperature, max_tokens}` format + +### 2. Enhanced AI State Management +- Extended `OllamaState` to include provider type, temperature, and max_tokens +- Added provider switching capabilities +- Maintains separate configurations for each provider + +### 3. Multi-Provider Request Handler +- `ask_ai()` function now routes to appropriate provider +- `ask_ollama_ai()` - handles Ollama API format +- `ask_local_ai()` - handles OpenAI-compatible format with chat messages + +### 4. New Commands Added +- `/provider [name]` - Switch between AI providers +- `/localai [model]` - Quick setup for localhost:8000 +- `/params temp=X tokens=Y` - Set AI parameters +- Enhanced `/help` with new commands + +### 5. Tauri Integration +- Added new functions to main.rs: `set_provider`, `get_provider`, `setup_local_ai`, `set_ai_params` +- All functions are exposed to the frontend + +## 📋 Quick Setup Steps + +### Option 1: One Command Setup +```bash +/localai your-model-name +``` + +### Option 2: Manual Configuration +```bash +/provider localai +/host http://localhost:8000 +/model your-model-name +/params temp=0.7 tokens=2048 +``` + +### Option 3: GUI Setup Script +Run `setup-localhost-ai.bat` for guided configuration + +## 🔌 API Compatibility + +Your localhost:8000 AI should support OpenAI-compatible endpoints: + +### Expected Endpoint +``` +POST http://localhost:8000/v1/chat/completions +``` + +### Request Format +```json +{ + "model": "your-model-name", + "messages": [ + {"role": "system", "content": "system prompt"}, + {"role": "user", "content": "user question"} + ], + "temperature": 0.7, + "max_tokens": 2048, + "stream": false +} +``` + +### Response Format +```json +{ + "choices": [ + { + "message": { + "role": "assistant", + "content": "AI response" + } + } + ] +} +``` + +## 🛠️ Advanced Configuration + +### Environment Detection +The AI automatically detects your operating system and provides context-appropriate responses. + +### Temperature Control +- Range: 0.0 (deterministic) to 1.0 (creative) +- Default: 0.7 +- Set with: `/params temp=0.8` + +### Token Limits +- Control response length +- Default: 2048 tokens +- Set with: `/params tokens=1024` + +### Provider Status +Check current configuration: +```bash +/provider # Shows current provider and host +/model # Shows current model +/host # Shows current API host +``` + +## 🔄 Switching Between Providers + +```bash +# Use your localhost:8000 AI +/provider localai + +# Switch back to Ollama +/provider ollama + +# Use OpenAI API +/provider openai +/host https://api.openai.com +``` + +## 🚨 Troubleshooting + +### Common Issues + +1. **Connection refused** + - Ensure your AI is running on localhost:8000 + - Check firewall settings + +2. 
**Model not found** + - Verify the model name with `/model your-actual-model-name` + - Check your AI server's available models + +3. **API format errors** + - Ensure your localhost:8000 supports OpenAI-compatible format + - Check the request/response format above + +### Debug Commands + +```bash +/provider # Check current provider +/host # Check current host +/model # Check current model +/params temp=0.7 # Test parameter setting +``` + +## 📁 Files Modified + +- `src/ollama/types/ai_provider.rs` - New provider types +- `src/ollama/types/ollama_state.rs` - Enhanced state management +- `src/ollama/model_request/request.rs` - Multi-provider request handler +- `src/utils/command.rs` - New special commands +- `src/main.rs` - Registered new functions +- `README.md` - Updated documentation +- `setup-localhost-ai.bat` - Setup script + +Your AI Terminal is now ready to work with your localhost:8000 AI! 🎉 \ No newline at end of file diff --git a/README.md b/README.md index 743691f..86f525f 100644 --- a/README.md +++ b/README.md @@ -14,18 +14,30 @@ A Tauri + Angular terminal application with integrated AI capabilities. - Node.js 18+ - Rust and Cargo -- For AI features: [Ollama](https://ollama.ai/) (can be installed with `brew install ollama`) +- For AI features: [Ollama](https://ollama.ai/) + - macOS: `brew install ollama` + - Windows: Download from [ollama.ai](https://ollama.ai/download/windows) + - Linux: `curl -fsSL https://ollama.com/install.sh | sh` ## Development Setup 1. Clone the repository: - ``` + ```bash git clone https://github.com/your-username/ai-terminal.git cd ai-terminal ``` 2. Install dependencies and run the project: + + **Windows (PowerShell/Command Prompt):** + ```cmd + cd ai-terminal + npm install + npm run tauri dev ``` + + **macOS/Linux:** + ```bash cd ai-terminal npm install npm run tauri dev @@ -33,6 +45,27 @@ A Tauri + Angular terminal application with integrated AI capabilities. ## Installation +### Windows + +For Windows users, you can build and install AI Terminal from source: + +1. **Prerequisites:** + - Install [Visual Studio Build Tools](https://visualstudio.microsoft.com/downloads/#build-tools-for-visual-studio-2022) or Visual Studio with C++ development tools + - Install [Node.js](https://nodejs.org/) + - Install [Rust](https://rustup.rs/) + +2. **Build and Install:** + ```cmd + git clone https://github.com/your-username/ai-terminal.git + cd ai-terminal\ai-terminal + npm install + npm run tauri build + ``` + +3. **Install the built package:** + - Navigate to `src-tauri\target\release\bundle\msi\` + - Run the generated `.msi` installer + ### macOS (Homebrew) You can install AI Terminal using Homebrew: @@ -68,6 +101,82 @@ Run the following command: ollama pull macsdeve/BetterBash3 ``` +### Windows + +1. **Download Ollama** + +- Visit [Ollama download page](https://ollama.ai/download/windows). +- Download the Windows installer. + +2. **Install Ollama** + +- Run the downloaded installer. +- Follow the installation prompts. +- Ollama will be added to your PATH automatically. + +3. **Download the Model** + +Open Command Prompt or PowerShell and execute: + +```cmd +ollama pull macsdeve/BetterBash3 +``` + +**Note for Windows Terminal users:** AI Terminal now fully supports Windows Terminal, Command Prompt, PowerShell, and Git Bash environments. + +## Using Your Local AI (localhost:8000) + +AI Terminal now supports multiple AI providers, including your local AI running on localhost:8000. + +### Quick Setup for LocalAI + +1. 
**Start your local AI server** on `localhost:8000` + +2. **Configure AI Terminal:** + - Run `setup-localhost-ai.bat` for guided setup, or + - Use the built-in commands (see below) + +3. **Built-in Configuration Commands:** + ```bash + # Quick setup for localhost:8000 + /localai your-model-name + + # Manual configuration + /provider localai + /host http://localhost:8000 + /model your-model-name + /params temp=0.7 tokens=2048 + ``` + +### Available AI Commands + +- `/help` - Show all available commands +- `/provider [ollama|localai|openai]` - Switch AI providers +- `/localai [model]` - Quick setup for localhost:8000 +- `/host [url]` - Change API endpoint +- `/model [name]` - Switch model +- `/models` - List available models +- `/params temp=X tokens=Y` - Set temperature and max tokens + +### Supported AI Providers + +- **Ollama** (default) - Local Ollama installation +- **LocalAI** - OpenAI-compatible local AI (localhost:8000) +- **OpenAI** - OpenAI API compatible services + +### Example Usage + +```bash +# Setup for localhost:8000 +/localai gpt-3.5-turbo + +# Ask your local AI +How do I list files in Windows? + +# Switch back to Ollama +/provider ollama +``` + ### macOS 1. **Download Ollama** diff --git a/README_RU.md b/README_RU.md new file mode 100644 index 0000000..52358de --- /dev/null +++ b/README_RU.md @@ -0,0 +1,430 @@ +# AI Terminal - Русская документация + +Умный терминал с поддержкой ИИ для всех платформ (Windows, macOS, Linux). + +![AI Terminal Demo](demo.gif) + +## 📋 Возможности + +- 🤖 Интерпретация команд на естественном языке +- 🔧 Встроенный ИИ-ассистент +- 📚 История команд и автодополнение +- 🌐 Кроссплатформенная поддержка (Windows, macOS, Linux) +- 🎨 Современный интерфейс на Tauri + Angular +- 🔄 Поддержка нескольких провайдеров ИИ (Ollama, LocalAI, OpenAI) + +## 🛠️ Системные требования + +- **Node.js** 18+ +- **Rust** и Cargo +- **Для Windows**: Visual Studio Build Tools или Visual Studio с C++ инструментами +- **Для ИИ функций**: + - Ollama, LocalAI или OpenAI-совместимый API + +## 📦 Установка + +### Windows + +#### Способ 1: Сборка из исходного кода +1. **Установите зависимости:** + ```cmd + # Скачайте и установите: + # - Node.js с https://nodejs.org/ + # - Rust с https://rustup.rs/ + # - Visual Studio Build Tools + ``` + +2. **Клонируйте репозиторий:** + ```cmd + git clone https://github.com/your-username/ai-terminal.git + cd ai-terminal\ai-terminal + ``` + +3. **Соберите приложение:** + ```cmd + npm install + npm run tauri build + ``` + +4. **Установите пакет:** + - Перейдите в `src-tauri\target\release\bundle\msi\` + - Запустите `.msi` установщик + +#### Способ 2: Автоматическая сборка +```cmd +# Запустите скрипт автоматической сборки +build-windows.bat +``` + +### macOS + +#### Homebrew (рекомендуется) +```bash +brew tap AiTerminalFoundation/ai-terminal +brew install --cask ai-terminal +``` + +#### Сборка из исходного кода +```bash +git clone https://github.com/your-username/ai-terminal.git +cd ai-terminal/ai-terminal +npm install +npm run tauri dev +``` + +### Linux + +```bash +git clone https://github.com/your-username/ai-terminal.git +cd ai-terminal/ai-terminal +npm install +npm run tauri build +``` + +## 🚀 Запуск в различных режимах + +### 1. Режим разработки + +```bash +# Windows (PowerShell/CMD) +cd ai-terminal +npm run tauri dev + +# macOS/Linux +cd ai-terminal +npm run tauri dev +``` + +### 2. 
Производственная сборка + +```bash +# Сборка для производства +npm run tauri build + +# Найти собранные файлы: +# Windows: src-tauri\target\release\bundle\ +# macOS: src-tauri/target/release/bundle/ +# Linux: src-tauri/target/release/bundle/ +``` + +### 3. Веб-режим (только фронтенд) + +```bash +# Запуск без Tauri (только веб-интерфейс) +npm run start +# Откроется http://localhost:4200 +``` + +## 🤖 Настройка ИИ + +AI Terminal поддерживает несколько провайдеров ИИ: + +### Режим 1: Ollama (по умолчанию) + +1. **Установка Ollama:** + ```bash + # Windows + # Скачайте с https://ollama.ai/download/windows + + # macOS + brew install ollama + + # Linux + curl -fsSL https://ollama.com/install.sh | sh + ``` + +2. **Загрузка модели:** + ```bash + ollama pull llama3.2 + ``` + +3. **Настройка в AI Terminal:** + ```bash + /provider ollama + /host http://localhost:11434 + /model llama3.2 + ``` + +### Режим 2: LocalAI (localhost:8000) + +1. **Запустите ваш локальный ИИ** на порту 8000 + +2. **Быстрая настройка:** + ```bash + /localai your-model-name + ``` + +3. **Ручная настройка:** + ```bash + /provider localai + /host http://localhost:8000 + /model gpt-3.5-turbo + /params temp=0.7 tokens=2048 + ``` + +4. **Скрипт настройки Windows:** + ```cmd + setup-localhost-ai.bat + ``` + +### Режим 3: OpenAI API + +```bash +/provider openai +/host https://api.openai.com +/model gpt-4 +``` + +## 🎯 Поддержка терминалов + +### Windows +- ✅ **Windows Terminal** (рекомендуется) +- ✅ **Command Prompt** (cmd.exe) +- ✅ **PowerShell** (Windows PowerShell & PowerShell Core) +- ✅ **Git Bash** (Unix-подобные команды) + +### macOS +- ✅ **Terminal.app** +- ✅ **iTerm2** +- ✅ **Hyper** + +### Linux +- ✅ **GNOME Terminal** +- ✅ **Konsole** +- ✅ **xterm** +- ✅ **Alacritty** + +## 📝 Команды ИИ + +### Основные команды +```bash +/help # Показать все команды +/provider [ollama|localai|openai] # Переключить провайдера ИИ +/host [url] # Изменить API endpoint +/model [name] # Переключить модель +/models # Список доступных моделей +``` + +### Быстрые настройки +```bash +/localai [model] # Настроить LocalAI на localhost:8000 +/params temp=0.7 tokens=2048 # Настроить параметры ИИ +``` + +### Проверка статуса +```bash +/provider # Текущий провайдер +/host # Текущий хост +/model # Текущая модель +``` + +## 🔧 Расширенная конфигурация + +### Переменные окружения + +```bash +# Windows +set AI_PROVIDER=localai +set AI_HOST=http://localhost:8000 +set AI_MODEL=gpt-3.5-turbo + +# Linux/macOS +export AI_PROVIDER=localai +export AI_HOST=http://localhost:8000 +export AI_MODEL=gpt-3.5-turbo +``` + +### Файл конфигурации + +Создайте `ai-terminal-config.json`: +```json +{ + "provider": "localai", + "host": "http://localhost:8000", + "model": "gpt-3.5-turbo", + "temperature": 0.7, + "max_tokens": 2048 +} +``` + +## 🌍 Примеры использования + +### Базовое использование +```bash +# Запросите помощь на естественном языке +Как вывести список файлов в текущей директории? + +# ИИ ответит: +# ```command``` +# dir +# ``` +``` + +### Смена провайдеров +```bash +# Настройка локального ИИ +/localai my-local-model + +# Вопрос локальному ИИ +Как создать новую папку в Windows? 
+ +# Возврат к Ollama +/provider ollama +/model llama3.2 + +# Вопрос Ollama +Объясни команду mkdir +``` + +### Настройка параметров +```bash +# Более креативные ответы +/params temp=0.9 tokens=1024 + +# Более точные ответы +/params temp=0.3 tokens=512 +``` + +## 🛠️ Разработка + +### Структура проекта +``` +ai-terminal/ +├── src/ # Angular фронтенд +├── src-tauri/ # Rust бэкенд +│ ├── src/ +│ │ ├── command/ # Обработка команд +│ │ ├── ollama/ # ИИ интеграция +│ │ └── utils/ # Утилиты +│ └── Cargo.toml +├── README.md +├── README_RU.md # Этот файл +└── package.json +``` + +### Режим разработки с hot reload +```bash +# Терминал 1: Запуск фронтенда +npm run start + +# Терминал 2: Запуск Tauri в режиме разработки +npm run tauri dev +``` + +### Сборка для разных платформ +```bash +# Windows (.msi) +npm run tauri build -- --target x86_64-pc-windows-msvc + +# macOS (.dmg, .app) +npm run tauri build -- --target x86_64-apple-darwin + +# Linux (.deb, .AppImage) +npm run tauri build -- --target x86_64-unknown-linux-gnu +``` + +## 🐛 Устранение неполадок + +### Windows + +**Проблема:** "cargo не найден" +```cmd +# Установите Rust +winget install Rustlang.Rustup +# или скачайте с https://rustup.rs/ +``` + +**Проблема:** Ошибка сборки C++ +```cmd +# Установите Visual Studio Build Tools +# https://visualstudio.microsoft.com/downloads/#build-tools-for-visual-studio-2022 +``` + +### ИИ подключение + +**Проблема:** Connection refused к localhost:8000 +```bash +# Проверьте, что ваш ИИ запущен +curl http://localhost:8000/v1/models + +# Проверьте настройки файервола +netstat -an | findstr :8000 +``` + +**Проблема:** Модель не найдена +```bash +# Проверьте доступные модели +/models + +# Установите правильное имя модели +/model correct-model-name +``` + +### Права доступа + +**Linux/macOS:** +```bash +# Если нет прав на выполнение +chmod +x ai-terminal +sudo ./ai-terminal +``` + +## 📚 API для разработчиков + +### Добавление нового провайдера ИИ + +1. Создайте новый вариант в `AIProvider` enum +2. Добавьте обработку в `ask_ai()` функцию +3. Реализуйте специфичную логику запросов + +```rust +// src/ollama/types/ai_provider.rs +#[derive(Debug, Clone, Serialize, Deserialize)] +pub enum AIProvider { + Ollama, + LocalAI, + OpenAI, + YourNewProvider, // Добавьте сюда +} +``` + +### Добавление новых команд + +```rust +// src/utils/command.rs +match command.as_str() { + "/yourcmd" => { + // Ваша логика здесь + Ok("Результат команды".to_string()) + } +} +``` + +## 🤝 Участие в разработке + +1. Форкните репозиторий +2. Создайте ветку для вашей функции (`git checkout -b feature/amazing-feature`) +3. Зафиксируйте изменения (`git commit -m 'Add amazing feature'`) +4. Отправьте в ветку (`git push origin feature/amazing-feature`) +5. Создайте Pull Request + +## 📄 Лицензия + +Этот проект лицензирован под MIT License - см. файл [LICENSE](LICENSE) для подробностей. + +## 🙏 Благодарности + +- [Tauri](https://tauri.app/) - За кроссплатформенную основу +- [Angular](https://angular.io/) - За реактивный фронтенд +- [Ollama](https://ollama.ai/) - За локальную ИИ поддержку +- [Rust](https://www.rust-lang.org/) - За безопасный системный код + +## 📞 Поддержка + +- 🐛 Сообщить о баге: [GitHub Issues](https://github.com/your-username/ai-terminal/issues) +- 💡 Предложить функцию: [GitHub Discussions](https://github.com/your-username/ai-terminal/discussions) +- 📧 Email: support@ai-terminal.dev + +--- + +**AI Terminal** - Умный терминал будущего! 
🚀 \ No newline at end of file diff --git a/ai-terminal/BUILD_INSTRUCTIONS.md b/ai-terminal/BUILD_INSTRUCTIONS.md new file mode 100644 index 0000000..35a9aa8 --- /dev/null +++ b/ai-terminal/BUILD_INSTRUCTIONS.md @@ -0,0 +1,196 @@ +# AI Terminal Build Instructions (.exe) + +## Available Build Scripts + +You have several options for building AI Terminal into a Windows executable: + +### 1. `build-windows.bat` - Simple Batch Script +**Recommended for most users** + +```cmd +build-windows.bat +``` + +Features: +- ✅ Simple and reliable build process +- ✅ Proper error handling and user feedback +- ✅ Automatic dependency installation +- ✅ Creates ai-terminal.exe executable +- ✅ Works consistently on Windows systems + +## Prerequisites + +### Required: + +1. **Node.js 18+** + - Download: https://nodejs.org/ + - Verify: `node --version` + +2. **Rust & Cargo** + - Download: https://rustup.rs/ + - Verify: `cargo --version` + +3. **Visual Studio Build Tools** (Windows only) + - Download: https://visualstudio.microsoft.com/downloads/#build-tools-for-visual-studio-2022 + - Or full Visual Studio with C++ components + +### Optional: + +4. **Git** (if cloning the repository) + - Download: https://git-scm.com/ + +## Build Process + +### Step 1: Preparation +```cmd +# Navigate to project folder +cd ai-terminal + +# Verify all files are present +dir +# Should have: package.json, src-tauri folder +``` + +### Step 2: Run Build Script +Use the available script: + +```cmd +# Simple build (recommended) +build-windows.bat +``` + +### Step 3: Wait +- First build may take 10-20 minutes +- Rust compiles many dependencies +- Subsequent builds will be faster + +### Step 4: Result +After successful build, files will be in: +``` +src-tauri/target/release/bundle/ +├── msi/ # MSI installer (recommended) +├── nsis/ # NSIS installer +└── ... + +src-tauri/target/release/ +└── ai-terminal.exe # Executable file +``` + +## Типичные проблемы и решения + +### 1. "Node.js not found" +```cmd +# Check installation +node --version +npm --version + +# If not working - reinstall Node.js +``` + +### 2. "Rust/Cargo not found" +```cmd +# Check installation +cargo --version + +# If not working: +# 1. Install Rust: https://rustup.rs/ +# 2. Restart terminal +# 3. Update: rustup update +``` + +### 3. "Visual Studio Build Tools missing" +- Install Visual Studio Build Tools +- Or Visual Studio Community with C++ components +- Restart terminal after installation + +### 4. "Rust compilation error" +```cmd +# Update Rust +rustup update + +# Clean cache +cd src-tauri +cargo clean +cd .. + +# Try again +``` + +### 5. "Windows Defender blocking build" +- Add project folder to Windows Defender exclusions +- Especially important for `src-tauri/target` folder + +### 6. "Not enough memory" +- Close other applications +- Rust compilation requires a lot of memory (4GB+ recommended) + +### 7. "npm errors" +```cmd +# Clean npm cache +npm cache clean --force + +# Remove node_modules and reinstall +rmdir /s node_modules +npm install +``` + +## Additional Commands + +### Manual Build (for debugging): +```cmd +# Install dependencies +npm install + +# Build frontend +npm run build + +# Build Tauri application +npm run tauri build + +# For debug version +npm run tauri build -- --debug +``` + +### Run in development mode: +```cmd +npm run tauri dev +``` + +### Update dependencies: +```cmd +npm update +rustup update +``` + +## Distribution + +After successful build: + +1. 
**MSI installer** - best option for distribution + - Users can install through standard Windows interface + - Automatically registers in "Programs and Features" + +2. **Executable file** - portable version + - Can run without installation + - Requires only one file + +3. **NSIS installer** - alternative to MSI + - More customizable installer + - Smaller size + +## Version updates + +Update application version in: +- `package.json` - project version +- `src-tauri/tauri.conf.json` - Tauri application version + +After changing version, rebuild the application. + +--- + +**Happy building! 🚀** + +If issues arise, check: +1. All dependencies are installed +2. Terminal restarted after installation +3. Antivirus is not blocking the build process \ No newline at end of file diff --git a/ai-terminal/angular.json b/ai-terminal/angular.json index 467aba9..13be993 100644 --- a/ai-terminal/angular.json +++ b/ai-terminal/angular.json @@ -24,6 +24,7 @@ }, "configurations": { "production": { + "baseHref": "./", "budgets": [ { "type": "initial", @@ -37,7 +38,7 @@ } ], "outputHashing": "all" - }, + } "development": { "optimization": false, "extractLicenses": false, diff --git a/ai-terminal/build-windows.bat b/ai-terminal/build-windows.bat new file mode 100644 index 0000000..d9d7b87 --- /dev/null +++ b/ai-terminal/build-windows.bat @@ -0,0 +1,21 @@ +@echo off +cd /d "%~dp0" +echo Building AI Terminal... +echo. +call npm install +if %ERRORLEVEL% NEQ 0 ( + echo Error: Failed to install dependencies + pause + exit /b 1 +) +echo. +call npm run tauri build +if %ERRORLEVEL% NEQ 0 ( + echo Error: Failed to build application + pause + exit /b 1 +) +echo. +echo Build completed! +echo Executable created at: src-tauri\target\release\ai-terminal.exe +pause \ No newline at end of file diff --git a/ai-terminal/rustup-init.exe b/ai-terminal/rustup-init.exe new file mode 100644 index 0000000..111a059 Binary files /dev/null and b/ai-terminal/rustup-init.exe differ diff --git a/ai-terminal/setup-localhost-ai.bat b/ai-terminal/setup-localhost-ai.bat new file mode 100644 index 0000000..07be048 --- /dev/null +++ b/ai-terminal/setup-localhost-ai.bat @@ -0,0 +1,34 @@ +@echo off +echo Setting up AI Terminal for LocalAI on localhost:8000 +echo. + +echo This will configure the AI Terminal to use your local AI running on localhost:8000 +echo. + +set /p model_name="Enter your model name (or press Enter for default): " +if "%model_name%"=="" set model_name=gpt-3.5-turbo + +echo. +echo Configuration: +echo - Provider: LocalAI (OpenAI-compatible) +echo - Host: http://localhost:8000 +echo - Model: %model_name% +echo. + +echo Once the AI Terminal is running, you can: +echo 1. Use /localai %model_name% to configure LocalAI +echo 2. Use /provider localai to switch to LocalAI provider +echo 3. Use /host http://localhost:8000 to set the host +echo 4. Use /params temp=0.7 tokens=2048 to adjust parameters +echo. + +echo Available commands in AI Terminal: +echo - /help - Show all commands +echo - /localai [model] - Quick setup for localhost:8000 +echo - /provider [ollama^|localai] - Switch AI providers +echo - /host [url] - Change API endpoint +echo - /model [name] - Change model +echo - /params temp=X tokens=Y - Set AI parameters +echo. 
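+
+REM Optional: verify that the local AI server is reachable before configuring.
+REM This is a sketch that assumes an OpenAI-compatible /v1/models endpoint and
+REM that curl.exe is available (it ships with Windows 10 1803+); skip it otherwise.
+echo Checking for an AI server on http://localhost:8000 ...
+curl http://localhost:8000/v1/models
+echo.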
+ +pause \ No newline at end of file diff --git a/ai-terminal/src-tauri/Cargo.lock b/ai-terminal/src-tauri/Cargo.lock index 54a60c6..3474992 100644 --- a/ai-terminal/src-tauri/Cargo.lock +++ b/ai-terminal/src-tauri/Cargo.lock @@ -40,6 +40,7 @@ dependencies = [ "tauri-build", "tauri-plugin-opener", "tauri-plugin-shell", + "winapi", ] [[package]] diff --git a/ai-terminal/src-tauri/Cargo.toml b/ai-terminal/src-tauri/Cargo.toml index 51a273e..290983c 100644 --- a/ai-terminal/src-tauri/Cargo.toml +++ b/ai-terminal/src-tauri/Cargo.toml @@ -23,7 +23,14 @@ tauri-plugin-opener = "2" serde = { version = "1", features = ["derive"] } dirs = "6.0.0" reqwest = { version = "0.12.15", features = ["json"] } -nix = { version = "0.30", features = ["signal"] } tauri-plugin-shell = "2" fix-path-env = { git = "https://github.com/tauri-apps/fix-path-env-rs" } -serde_json = "1.0" +serde_json = "1.0" + +# Unix-only dependencies +[target.'cfg(unix)'.dependencies] +nix = { version = "0.30", features = ["signal"] } + +# Windows-only dependencies +[target.'cfg(windows)'.dependencies] +winapi = { version = "0.3", features = ["processthreadsapi", "winbase", "handleapi"] } diff --git a/ai-terminal/src-tauri/src/command/core/execute_command.rs b/ai-terminal/src-tauri/src/command/core/execute_command.rs index 741fb06..162026a 100644 --- a/ai-terminal/src-tauri/src/command/core/execute_command.rs +++ b/ai-terminal/src-tauri/src/command/core/execute_command.rs @@ -3,13 +3,106 @@ use crate::command::types::command_state::CommandState; use crate::utils::file_system_utils::get_shell_path; use std::collections::HashMap; use std::io::{BufReader, Read, Write}; -use std::os::unix::process::CommandExt; use std::path::Path; use std::process::{Child, Command, Stdio}; use std::sync::{Arc, Mutex, MutexGuard}; use std::{env, thread}; use tauri::{command, AppHandle, Emitter, Manager, State}; +// Platform-specific imports +#[cfg(unix)] +use std::os::unix::process::CommandExt; + +#[cfg(windows)] +use crate::utils::windows_utils::{get_windows_shell_command, expand_windows_tilde, normalize_windows_path, is_windows_absolute_path}; + +// Cross-platform utility functions +fn normalize_path(path: &str) -> String { + #[cfg(windows)] + { + let expanded = expand_windows_tilde(path); + normalize_windows_path(&expanded) + } + #[cfg(unix)] + { + if path.starts_with('~') { + if let Some(home) = dirs::home_dir() { + if path == "~" { + home.to_string_lossy().to_string() + } else { + format!("{}{}", home.to_string_lossy(), &path[1..]) + } + } else { + path.to_string() + } + } else { + path.to_string() + } + } +} + +fn is_absolute_path(path: &str) -> bool { + #[cfg(windows)] + { + is_windows_absolute_path(path) + } + #[cfg(unix)] + { + path.starts_with('/') + } +} + +fn get_shell_command() -> Vec { + #[cfg(windows)] + { + get_windows_shell_command() + } + #[cfg(unix)] + { + vec!["sh".to_string(), "-c".to_string()] + } +} + +fn create_command_with_shell(command: &str, current_dir: &str) -> std::io::Result { + let shell_cmd = get_shell_command(); + let mut cmd = Command::new(&shell_cmd[0]); + + #[cfg(windows)] + { + // For Windows, handle the command properly based on shell type + cmd.arg(&shell_cmd[1]).arg(command); + } + #[cfg(unix)] + { + cmd.arg(&shell_cmd[1]).arg(format!("exec {}", command)); + } + + cmd.current_dir(current_dir) + .stdin(Stdio::piped()) + .stdout(Stdio::piped()) + .stderr(Stdio::piped()); + + Ok(cmd) +} + +fn setup_process_session(_cmd: &mut Command) { + #[cfg(unix)] + unsafe { + _cmd.pre_exec(|| match nix::unistd::setsid() { + Ok(_) => 
Ok(()), + Err(e) => Err(std::io::Error::new( + std::io::ErrorKind::Other, + format!("setsid failed: {}", e), + )), + }); + } + #[cfg(windows)] + { + // On Windows, we don't need setsid equivalent for most cases + // The process will naturally be in a separate process group + } +} + #[command] pub fn execute_command( command: String, @@ -170,25 +263,22 @@ pub fn execute_command( Err("Could not determine home directory".to_string()) }; } + + // Normalize the path for cross-platform compatibility + let normalized_path = normalize_path(path); let current_path = Path::new(&command_state_cd.current_dir); - let new_path = if path.starts_with('~') { - if let Some(home_dir) = dirs::home_dir() { - let without_tilde = path.trim_start_matches('~'); - let rel_path = without_tilde.trim_start_matches('/'); - if rel_path.is_empty() { - home_dir - } else { - home_dir.join(rel_path) - } - } else { - drop(states_guard_cd); - return Err("Could not determine home directory".to_string()); - } - } else if path.starts_with('/') { - std::path::PathBuf::from(path) + + let new_path = if normalized_path.starts_with('~') { + // Handle tilde expansion + std::path::PathBuf::from(normalize_path(&normalized_path)) + } else if is_absolute_path(&normalized_path) { + std::path::PathBuf::from(&normalized_path) } else { let mut result_path = current_path.to_path_buf(); - let path_components: Vec<&str> = path.split('/').collect(); + // Use the appropriate path separator for the platform + let separator = if cfg!(windows) { '\\' } else { '/' }; + let path_components: Vec<&str> = normalized_path.split(separator).collect(); + for component in path_components { if component == ".." { if let Some(parent) = result_path.parent() { @@ -364,37 +454,34 @@ pub fn execute_command( } }; } else { - // Fallback to sh -c for non-SSH or sudo commands + // Fallback to cross-platform shell execution for non-SSH or sudo commands let final_shell_command = if original_command_is_sudo && !original_command_is_sudo_ssh { - command_to_run.clone() + // For sudo commands on Windows, we need different handling + #[cfg(windows)] + { + // Windows doesn't have sudo, so we'll need to handle this differently + // For now, just pass the command as-is and let it fail gracefully + command_to_run.clone() + } + #[cfg(unix)] + { + command_to_run.clone() + } } else { - format!("exec {}", command_to_run) + command_to_run.clone() }; - let mut sh_cmd_to_spawn = Command::new("sh"); - sh_cmd_to_spawn - .arg("-c") - .arg(&final_shell_command) - .current_dir(¤t_dir_clone) - .envs(&env_map) - .stdout(Stdio::piped()) - .stderr(Stdio::piped()) - .stdin(Stdio::piped()); // Ensure stdin is piped for sh -c as well - - #[cfg(unix)] - unsafe { - sh_cmd_to_spawn.pre_exec(|| match nix::unistd::setsid() { - Ok(_) => Ok(()), - Err(e) => Err(std::io::Error::new( - std::io::ErrorKind::Other, - format!("setsid failed: {}", e), - )), - }); - } + let mut shell_cmd_to_spawn = match create_command_with_shell(&final_shell_command, ¤t_dir_clone) { + Ok(cmd) => cmd, + Err(e) => return Err(format!("Failed to create shell command: {}", e)), + }; + + shell_cmd_to_spawn.envs(&env_map); + setup_process_session(&mut shell_cmd_to_spawn); - child = match sh_cmd_to_spawn.spawn() { + child = match shell_cmd_to_spawn.spawn() { Ok(c) => c, - Err(e) => return Err(format!("Failed to start command via sh -c: {}", e)), + Err(e) => return Err(format!("Failed to start command via shell: {}", e)), }; } @@ -711,7 +798,7 @@ pub fn execute_command( app_handle_for_thread_state.state::(); let mut states_guard_cleanup = 
match command_manager_state_in_thread.commands.lock() { Ok(guard) => guard, - Err(e) => { + Err(_e) => { return; } }; @@ -785,17 +872,38 @@ pub fn execute_sudo_command( let current_dir = state.current_dir.clone(); + // Create cross-platform sudo command + let sudo_command = command + .split_whitespace() + .skip(1) + .collect::>() + .join(" "); + + #[cfg(windows)] + let mut child_process = { + // Windows doesn't have sudo - we could use "runas" but it's interactive + // For now, just run the command directly and let Windows UAC handle it + let shell_cmd = get_windows_shell_command(); + match Command::new(&shell_cmd[0]) + .arg(&shell_cmd[1]) + .arg(&sudo_command) + .current_dir(¤t_dir) + .stdin(Stdio::piped()) + .stdout(Stdio::piped()) + .stderr(Stdio::piped()) + .spawn() + { + Ok(child) => child, + Err(e) => return Err(format!("Failed to start command on Windows: {}", e)), + } + }; + + #[cfg(unix)] let mut child_process = match Command::new("sudo") .arg("-S") .arg("bash") .arg("-c") - .arg( - command - .split_whitespace() - .skip(1) - .collect::>() - .join(" "), - ) // Skip "sudo" and join the rest + .arg(&sudo_command) .current_dir(¤t_dir) .stdin(Stdio::piped()) .stdout(Stdio::piped()) diff --git a/ai-terminal/src-tauri/src/command/types/command_manager.rs b/ai-terminal/src-tauri/src/command/types/command_manager.rs index eb3d3ac..f0ee9b9 100644 --- a/ai-terminal/src-tauri/src/command/types/command_manager.rs +++ b/ai-terminal/src-tauri/src/command/types/command_manager.rs @@ -29,10 +29,7 @@ impl CommandManager { ); CommandManager { commands: Mutex::new(initial_commands), - ollama: Mutex::new(OllamaState { - current_model: "llama3.2:latest".to_string(), // Default model will now be overridden by frontend - api_host: "http://localhost:11434".to_string(), // Default Ollama host - }), + ollama: Mutex::new(OllamaState::default()), } } } diff --git a/ai-terminal/src-tauri/src/main.rs b/ai-terminal/src-tauri/src/main.rs index c47426c..6b1b2a9 100644 --- a/ai-terminal/src-tauri/src/main.rs +++ b/ai-terminal/src-tauri/src/main.rs @@ -30,6 +30,10 @@ fn main() { ollama::model_request::request::switch_model, ollama::model_request::request::get_host, ollama::model_request::request::set_host, + ollama::model_request::request::set_provider, + ollama::model_request::request::get_provider, + ollama::model_request::request::setup_local_ai, + ollama::model_request::request::set_ai_params, command::git_commands::git::get_git_branch, command::git_commands::git::get_git_branches, command::git_commands::git::switch_branch, diff --git a/ai-terminal/src-tauri/src/ollama/model_request/request.rs b/ai-terminal/src-tauri/src/ollama/model_request/request.rs index 483f18a..02025a2 100644 --- a/ai-terminal/src-tauri/src/ollama/model_request/request.rs +++ b/ai-terminal/src-tauri/src/ollama/model_request/request.rs @@ -1,4 +1,5 @@ use crate::command::types::command_manager::CommandManager; +use crate::ollama::types::ai_provider::{AIProvider, ChatMessage, LocalAIRequest, LocalAIResponse}; use crate::ollama::types::ollama_model_list::OllamaModelList; use crate::ollama::types::ollama_request::OllamaRequest; use crate::ollama::types::ollama_response::OllamaResponse; @@ -6,7 +7,7 @@ use crate::utils::command::handle_special_command; use crate::utils::operating_system_utils::get_operating_system; use tauri::{command, State}; -// Implement the ask_ai function for Ollama integration +// Implement the ask_ai function with multi-provider support #[command] pub async fn ask_ai( question: String, @@ -18,18 +19,17 @@ pub async fn 
ask_ai( return handle_special_command(question, command_manager).await; } - // Regular message to Ollama - let model; - let api_host; - - // Scope the mutex lock to drop it before any async operations - { + // Get AI configuration + let (model, api_host, provider, temperature, max_tokens) = { let ollama_state = command_manager.ollama.lock().map_err(|e| e.to_string())?; - // Use the model_override if provided, otherwise use the default - model = model_override.unwrap_or_else(|| ollama_state.current_model.clone()); - api_host = ollama_state.api_host.clone(); - // MutexGuard is dropped here at the end of scope - } + ( + model_override.unwrap_or_else(|| ollama_state.current_model.clone()), + ollama_state.api_host.clone(), + ollama_state.provider.clone(), + ollama_state.temperature, + ollama_state.max_tokens, + ) + }; // Get the current operating system let os = get_operating_system(); @@ -43,7 +43,23 @@ pub async fn ask_ai( os, os ); - // Combine the system prompt with the user's question + match provider { + AIProvider::Ollama => { + ask_ollama_ai(api_host, model, system_prompt, question).await + } + AIProvider::LocalAI | AIProvider::OpenAI => { + ask_local_ai(api_host, model, system_prompt, question, temperature, max_tokens).await + } + } +} + +// Ollama-specific AI request +async fn ask_ollama_ai( + api_host: String, + model: String, + system_prompt: String, + question: String, +) -> Result { let combined_prompt = format!("{}\n\nUser: {}", system_prompt, question); let client = reqwest::Client::new(); @@ -70,6 +86,71 @@ pub async fn ask_ai( Ok(response.response) } +// LocalAI/OpenAI-compatible API request +async fn ask_local_ai( + api_host: String, + model: String, + system_prompt: String, + question: String, + temperature: Option, + max_tokens: Option, +) -> Result { + let messages = vec![ + ChatMessage { + role: "system".to_string(), + content: system_prompt, + }, + ChatMessage { + role: "user".to_string(), + content: question, + }, + ]; + + let client = reqwest::Client::new(); + let endpoint = if api_host.ends_with("/v1/chat/completions") { + api_host + } else if api_host.ends_with("/v1") { + format!("{}/chat/completions", api_host) + } else { + format!("{}/v1/chat/completions", api_host) + }; + + let res = client + .post(&endpoint) + .json(&LocalAIRequest { + model, + messages, + temperature, + max_tokens, + stream: Some(false), + }) + .send() + .await + .map_err(|e| format!("Failed to send request to LocalAI API: {}", e))?; + + if !res.status().is_success() { + let status = res.status(); + let error_text = res + .text() + .await + .unwrap_or_else(|_| "Unknown error".to_string()); + return Err(format!("LocalAI API error {}: {}", status, error_text)); + } + + let response: LocalAIResponse = res + .json() + .await + .map_err(|e| format!("Failed to parse LocalAI response: {}", e))?; + + if let Some(choice) = response.choices.first() { + if let Some(message) = &choice.message { + return Ok(message.content.clone()); + } + } + + Err("No valid response from LocalAI".to_string()) +} + // Add function to get models from Ollama API #[command] pub async fn get_models(command_manager: State<'_, CommandManager>) -> Result { @@ -135,5 +216,68 @@ pub fn set_host( ) -> Result { let mut ollama_state = command_manager.ollama.lock().map_err(|e| e.to_string())?; ollama_state.api_host = host.clone(); - Ok(format!("Changed Ollama API host to: {}", host)) + Ok(format!("Changed AI API host to: {}", host)) +} + +// Add function to set AI provider +#[command] +pub fn set_provider( + provider_name: String, + 
command_manager: State<'_, CommandManager>, +) -> Result { + let provider = match provider_name.to_lowercase().as_str() { + "ollama" => AIProvider::Ollama, + "local" | "localai" => AIProvider::LocalAI, + "openai" => AIProvider::OpenAI, + _ => return Err(format!("Unknown provider: {}. Available: ollama, localai, openai", provider_name)), + }; + + let mut ollama_state = command_manager.ollama.lock().map_err(|e| e.to_string())?; + ollama_state.provider = provider.clone(); + Ok(format!("Switched to AI provider: {}", provider)) +} + +// Get current provider +#[command] +pub fn get_provider(command_manager: State<'_, CommandManager>) -> Result { + let ollama_state = command_manager.ollama.lock().map_err(|e| e.to_string())?; + Ok(format!("Current AI provider: {}", ollama_state.provider)) +} + +// Quick setup for localhost:8000 +#[command] +pub fn setup_local_ai( + model_name: Option, + command_manager: State<'_, CommandManager>, +) -> Result { + let mut ollama_state = command_manager.ollama.lock().map_err(|e| e.to_string())?; + ollama_state.provider = AIProvider::LocalAI; + ollama_state.api_host = "http://localhost:8000".to_string(); + if let Some(model) = model_name { + ollama_state.current_model = model; + } + Ok("Configured to use LocalAI on localhost:8000".to_string()) +} + +// Set AI parameters +#[command] +pub fn set_ai_params( + temperature: Option, + max_tokens: Option, + command_manager: State<'_, CommandManager>, +) -> Result { + let mut ollama_state = command_manager.ollama.lock().map_err(|e| e.to_string())?; + + if let Some(temp) = temperature { + ollama_state.temperature = Some(temp); + } + if let Some(tokens) = max_tokens { + ollama_state.max_tokens = Some(tokens); + } + + Ok(format!( + "AI parameters updated - Temperature: {:?}, Max tokens: {:?}", + ollama_state.temperature, + ollama_state.max_tokens + )) } diff --git a/ai-terminal/src-tauri/src/ollama/types/ai_provider.rs b/ai-terminal/src-tauri/src/ollama/types/ai_provider.rs new file mode 100644 index 0000000..b2d23eb --- /dev/null +++ b/ai-terminal/src-tauri/src/ollama/types/ai_provider.rs @@ -0,0 +1,76 @@ +use serde::{Deserialize, Serialize}; + +#[derive(Debug, Clone, Serialize, Deserialize)] +pub enum AIProvider { + Ollama, + LocalAI, + OpenAI, +} + +impl std::fmt::Display for AIProvider { + fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { + match self { + AIProvider::Ollama => write!(f, "Ollama"), + AIProvider::LocalAI => write!(f, "LocalAI"), + AIProvider::OpenAI => write!(f, "OpenAI"), + } + } +} + +#[derive(Debug, Serialize, Deserialize)] +pub struct LocalAIRequest { + pub model: String, + pub messages: Vec, + pub temperature: Option, + pub max_tokens: Option, + pub stream: Option, +} + +#[derive(Debug, Serialize, Deserialize)] +pub struct ChatMessage { + pub role: String, // "system", "user", "assistant" + pub content: String, +} + +#[derive(Debug, Serialize, Deserialize)] +pub struct LocalAIResponse { + pub id: Option, + pub object: Option, + pub created: Option, + pub model: Option, + pub choices: Vec, + pub usage: Option, +} + +#[derive(Debug, Serialize, Deserialize)] +pub struct Choice { + pub index: Option, + pub message: Option, + pub finish_reason: Option, +} + +#[derive(Debug, Serialize, Deserialize)] +pub struct Usage { + pub prompt_tokens: Option, + pub completion_tokens: Option, + pub total_tokens: Option, +} + +// Generic AI request that can be used for different providers +#[derive(Debug, Serialize, Deserialize)] +pub struct GenericAIRequest { + pub provider: AIProvider, + pub model: 
String, + pub prompt: String, + pub temperature: Option, + pub max_tokens: Option, +} + +// Generic AI response +#[derive(Debug, Serialize, Deserialize)] +pub struct GenericAIResponse { + pub provider: AIProvider, + pub model: String, + pub content: String, + pub usage: Option, +} \ No newline at end of file diff --git a/ai-terminal/src-tauri/src/ollama/types/mod.rs b/ai-terminal/src-tauri/src/ollama/types/mod.rs index 0234f7b..510dc22 100644 --- a/ai-terminal/src-tauri/src/ollama/types/mod.rs +++ b/ai-terminal/src-tauri/src/ollama/types/mod.rs @@ -1,3 +1,4 @@ +pub mod ai_provider; pub mod ollama_model; pub mod ollama_model_list; pub mod ollama_request; diff --git a/ai-terminal/src-tauri/src/ollama/types/ollama_state.rs b/ai-terminal/src-tauri/src/ollama/types/ollama_state.rs index b1dc324..a2b8985 100644 --- a/ai-terminal/src-tauri/src/ollama/types/ollama_state.rs +++ b/ai-terminal/src-tauri/src/ollama/types/ollama_state.rs @@ -1,5 +1,22 @@ -// Add Ollama state management +use crate::ollama::types::ai_provider::AIProvider; + +// Add AI state management for multiple providers pub struct OllamaState { pub current_model: String, pub api_host: String, + pub provider: AIProvider, + pub temperature: Option, + pub max_tokens: Option, +} + +impl Default for OllamaState { + fn default() -> Self { + Self { + current_model: "llama2".to_string(), + api_host: "http://localhost:11434".to_string(), + provider: AIProvider::Ollama, + temperature: Some(0.7), + max_tokens: Some(2048), + } + } } diff --git a/ai-terminal/src-tauri/src/utils/command.rs b/ai-terminal/src-tauri/src/utils/command.rs index 0701cd9..ca7f06f 100644 --- a/ai-terminal/src-tauri/src/utils/command.rs +++ b/ai-terminal/src-tauri/src/utils/command.rs @@ -1,4 +1,5 @@ use crate::command::types::command_manager::CommandManager; +use crate::ollama::types::ai_provider::AIProvider; use crate::ollama::types::ollama_model_list::OllamaModelList; use tauri::State; @@ -12,7 +13,10 @@ pub async fn handle_special_command( /help - Show this help message\n\ /models - List available models\n\ /model [name] - Show current model or switch to a different model\n\ - /host [url] - Show current API host or set a new one" + /host [url] - Show current API host or set a new one\n\ + /provider [name] - Show current AI provider or switch (ollama, localai, openai)\n\ + /localai [model] - Quick setup for LocalAI on localhost:8000\n\ + /params temp=[0.0-1.0] tokens=[num] - Set AI parameters" .to_string()), "/models" => { // Get list of available models from Ollama API @@ -97,6 +101,92 @@ pub async fn handle_special_command( Err("Invalid host command. Use /host [url] to change the API host.".to_string()) } } + cmd if cmd.starts_with("/provider") => { + let parts: Vec<&str> = cmd.split_whitespace().collect(); + + // Handle showing current provider + if parts.len() == 1 { + let (current_provider, current_host); + { + let ollama_state = command_manager.ollama.lock().map_err(|e| e.to_string())?; + current_provider = ollama_state.provider.clone(); + current_host = ollama_state.api_host.clone(); + } + Ok(format!("Current AI provider: {} ({})", current_provider, current_host)) + } + // Handle switching provider + else if parts.len() >= 2 { + let provider = match parts[1].to_lowercase().as_str() { + "ollama" => AIProvider::Ollama, + "local" | "localai" => AIProvider::LocalAI, + "openai" => AIProvider::OpenAI, + _ => return Err(format!("Unknown provider: {}. 
Available: ollama, localai, openai", parts[1])), + }; + + { + let mut ollama_state = command_manager.ollama.lock().map_err(|e| e.to_string())?; + ollama_state.provider = provider.clone(); + } + Ok(format!("Switched to AI provider: {}", provider)) + } else { + Err("Invalid provider command. Use /provider [name] to switch providers.".to_string()) + } + } + cmd if cmd.starts_with("/localai") => { + let parts: Vec<&str> = cmd.split_whitespace().collect(); + let model = if parts.len() >= 2 { + Some(parts[1].to_string()) + } else { + None + }; + + { + let mut ollama_state = command_manager.ollama.lock().map_err(|e| e.to_string())?; + ollama_state.provider = AIProvider::LocalAI; + ollama_state.api_host = "http://localhost:8000".to_string(); + if let Some(model_name) = model.as_ref() { + ollama_state.current_model = model_name.clone(); + } + } + + if let Some(model_name) = model { + Ok(format!("Configured LocalAI on localhost:8000 with model: {}", model_name)) + } else { + Ok("Configured LocalAI on localhost:8000 (using default model)".to_string()) + } + } + cmd if cmd.starts_with("/params") => { + let params_str = &cmd[7..]; // Remove "/params " + let mut temperature: Option = None; + let mut max_tokens: Option = None; + + // Parse parameters like "temp=0.7 tokens=2048" + for param in params_str.split_whitespace() { + if let Some((key, value)) = param.split_once('=') { + match key.to_lowercase().as_str() { + "temp" | "temperature" => { + temperature = value.parse().ok(); + } + "tokens" | "max_tokens" => { + max_tokens = value.parse().ok(); + } + _ => {} + } + } + } + + { + let mut ollama_state = command_manager.ollama.lock().map_err(|e| e.to_string())?; + if let Some(temp) = temperature { + ollama_state.temperature = Some(temp); + } + if let Some(tokens) = max_tokens { + ollama_state.max_tokens = Some(tokens); + } + } + + Ok(format!("AI parameters updated - Temperature: {:?}, Max tokens: {:?}", temperature, max_tokens)) + } _ => Err(format!( "Unknown command: {}. 
Type /help for available commands.", command diff --git a/ai-terminal/src-tauri/src/utils/file_system_utils.rs b/ai-terminal/src-tauri/src/utils/file_system_utils.rs index c9ef2f6..6fbfb28 100644 --- a/ai-terminal/src-tauri/src/utils/file_system_utils.rs +++ b/ai-terminal/src-tauri/src/utils/file_system_utils.rs @@ -4,45 +4,71 @@ use std::process::Command; use tauri::{command, State}; pub fn get_shell_path() -> Option { - // First try to get the user's default shell - let shell = if cfg!(target_os = "windows") { - "cmd" - } else { - // Try to get the user's default shell from /etc/shells or fallback to common shells - let shells = ["/bin/zsh", "/bin/bash", "/bin/sh"]; - for shell in shells.iter() { - if std::path::Path::new(shell).exists() { - return Some(shell.to_string()); + #[cfg(windows)] + { + // On Windows, we can try to get PATH from various shells + use crate::utils::windows_utils::detect_windows_shell; + + let shell_type = detect_windows_shell(); + + let (shell_exe, args, command) = match shell_type.as_str() { + "powershell" => ("powershell", vec!["-Command"], "$env:PATH"), + "bash" => ("bash", vec!["-c"], "echo $PATH"), + _ => ("cmd", vec!["/C"], "echo %PATH%"), + }; + + let output = Command::new(shell_exe) + .args(&args) + .arg(command) + .output() + .ok()?; + + if output.status.success() { + let path = String::from_utf8_lossy(&output.stdout).trim().to_string(); + if !path.is_empty() { + return Some(path); } } - "sh" // Fallback - }; + + // Fallback to environment PATH + env::var("PATH").ok() + } + + #[cfg(unix)] + { + // Try to get the user's default shell from /etc/shells or fallback to common shells + let shells = ["/bin/zsh", "/bin/bash", "/bin/sh"]; + let shell = shells.iter() + .find(|shell| std::path::Path::new(shell).exists()) + .map(|s| *s) + .unwrap_or("sh"); - // Try to get PATH using the shell's login mode and sourcing initialization files - let command = if shell.contains("zsh") { - "source ~/.zshrc 2>/dev/null || true; source ~/.zshenv 2>/dev/null || true; echo $PATH" - } else if shell.contains("bash") { - "source ~/.bashrc 2>/dev/null || true; source ~/.bash_profile 2>/dev/null || true; echo $PATH" - } else { - "echo $PATH" - }; + // Try to get PATH using the shell's login mode and sourcing initialization files + let command = if shell.contains("zsh") { + "source ~/.zshrc 2>/dev/null || true; source ~/.zshenv 2>/dev/null || true; echo $PATH" + } else if shell.contains("bash") { + "source ~/.bashrc 2>/dev/null || true; source ~/.bash_profile 2>/dev/null || true; echo $PATH" + } else { + "echo $PATH" + }; - let output = Command::new(shell) - .arg("-l") // Login shell to get proper environment - .arg("-c") - .arg(command) - .output() - .ok()?; + let output = Command::new(shell) + .arg("-l") // Login shell to get proper environment + .arg("-c") + .arg(command) + .output() + .ok()?; - if output.status.success() { - let path = String::from_utf8_lossy(&output.stdout).trim().to_string(); - if !path.is_empty() { - return Some(path); + if output.status.success() { + let path = String::from_utf8_lossy(&output.stdout).trim().to_string(); + if !path.is_empty() { + return Some(path); + } } - } - // If the shell method fails, try to get PATH from the environment - env::var("PATH").ok() + // If the shell method fails, try to get PATH from the environment + env::var("PATH").ok() + } } #[command] @@ -81,7 +107,14 @@ pub fn get_home_directory() -> Result { // Helper function to split a path into directory and file prefix parts pub fn split_path_prefix(path: &str) -> (&str, &str) { - 
match path.rfind('/') { + // Handle both Unix (/) and Windows (\) path separators + let separator_pos = if cfg!(windows) { + path.rfind('\\').or_else(|| path.rfind('/')) + } else { + path.rfind('/') + }; + + match separator_pos { Some(index) => { let (dir, file) = path.split_at(index + 1); (dir, file) diff --git a/ai-terminal/src-tauri/src/utils/mod.rs b/ai-terminal/src-tauri/src/utils/mod.rs index 97fbd6d..bac0072 100644 --- a/ai-terminal/src-tauri/src/utils/mod.rs +++ b/ai-terminal/src-tauri/src/utils/mod.rs @@ -1,3 +1,6 @@ pub mod command; pub mod file_system_utils; pub mod operating_system_utils; + +#[cfg(windows)] +pub mod windows_utils; diff --git a/ai-terminal/src-tauri/src/utils/windows_utils.rs b/ai-terminal/src-tauri/src/utils/windows_utils.rs new file mode 100644 index 0000000..2fb8b85 --- /dev/null +++ b/ai-terminal/src-tauri/src/utils/windows_utils.rs @@ -0,0 +1,137 @@ +use std::env; +use std::path::PathBuf; + +#[cfg(windows)] +use winapi::um::processthreadsapi::GetCurrentProcessId; + +/// Detects the current shell/terminal on Windows +pub fn detect_windows_shell() -> String { + // Check if running in Windows Terminal + if env::var("WT_SESSION").is_ok() { + return "wt".to_string(); + } + + // Check if running in PowerShell + if env::var("PSModulePath").is_ok() { + return "powershell".to_string(); + } + + // Check if running in Git Bash + if env::var("MSYSTEM").is_ok() { + return "bash".to_string(); + } + + // Default to cmd + "cmd".to_string() +} + +/// Get the appropriate shell command for Windows +pub fn get_windows_shell_command() -> Vec { + let shell = detect_windows_shell(); + + match shell.as_str() { + "powershell" => vec!["powershell".to_string(), "-Command".to_string()], + "bash" => vec!["bash".to_string(), "-c".to_string()], + "wt" | "cmd" | _ => vec!["cmd".to_string(), "/C".to_string()], + } +} + +/// Convert Unix-style paths to Windows paths +pub fn convert_unix_path_to_windows(path: &str) -> String { + if path.starts_with('/') { + // Handle absolute Unix paths - convert /c/users/... to C:\users\... 
+ if path.len() > 2 && path.chars().nth(2) == Some('/') { + let drive = path.chars().nth(1).unwrap().to_uppercase().collect::(); + let rest = &path[3..].replace('/', "\\"); + format!("{}:\\{}", drive, rest) + } else { + // For other absolute paths, assume they're WSL paths + path.replace('/', "\\") + } + } else { + // Handle relative paths and Windows paths + path.replace('/', "\\") + } +} + +/// Convert tilde (~) to Windows home directory +pub fn expand_windows_tilde(path: &str) -> String { + if path.starts_with('~') { + if let Some(home) = dirs::home_dir() { + let home_str = home.to_string_lossy(); + if path == "~" { + home_str.to_string() + } else { + format!("{}{}", home_str, &path[1..].replace('/', "\\")) + } + } else { + path.to_string() + } + } else { + path.to_string() + } +} + +/// Get current process ID (Windows-specific implementation) +#[cfg(windows)] +pub fn get_current_process_id() -> u32 { + unsafe { GetCurrentProcessId() } +} + +/// Check if a path is absolute on Windows +pub fn is_windows_absolute_path(path: &str) -> bool { + // Windows absolute paths start with drive letter (C:) or UNC (\\) + if path.len() >= 2 { + let chars: Vec = path.chars().collect(); + (chars[1] == ':' && chars[0].is_alphabetic()) || path.starts_with("\\\\") + } else { + false + } +} + +/// Normalize Windows path separators +pub fn normalize_windows_path(path: &str) -> String { + path.replace('/', "\\") +} + +/// Get Windows Terminal executable path if available +pub fn get_windows_terminal_path() -> Option { + // Check if Windows Terminal is installed + if let Ok(output) = std::process::Command::new("where") + .args(&["wt.exe"]) + .output() + { + if output.status.success() { + let path_string = String::from_utf8_lossy(&output.stdout).trim().to_string(); + return Some(PathBuf::from(path_string)); + } + } + None +} + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn test_convert_unix_path_to_windows() { + assert_eq!(convert_unix_path_to_windows("/c/users/test"), "C:\\users\\test"); + assert_eq!(convert_unix_path_to_windows("./relative/path"), ".\\relative\\path"); + assert_eq!(convert_unix_path_to_windows("relative/path"), "relative\\path"); + } + + #[test] + fn test_is_windows_absolute_path() { + assert!(is_windows_absolute_path("C:\\Windows")); + assert!(is_windows_absolute_path("D:\\")); + assert!(is_windows_absolute_path("\\\\server\\share")); + assert!(!is_windows_absolute_path("relative\\path")); + assert!(!is_windows_absolute_path("./relative")); + } + + #[test] + fn test_normalize_windows_path() { + assert_eq!(normalize_windows_path("path/to/file"), "path\\to\\file"); + assert_eq!(normalize_windows_path("path\\to\\file"), "path\\to\\file"); + } +} \ No newline at end of file diff --git a/ai-terminal/src-tauri/tauri.conf.json b/ai-terminal/src-tauri/tauri.conf.json index 6fa304e..7d8a297 100644 --- a/ai-terminal/src-tauri/tauri.conf.json +++ b/ai-terminal/src-tauri/tauri.conf.json @@ -29,7 +29,7 @@ }, "bundle": { "active": true, - "targets": ["dmg", "app", "deb"], + "targets": ["dmg", "app", "deb", "msi", "nsis"], "publisher": "AI Terminal Foundation", "copyright": "© 2025 AI Terminal Foundation", "category": "DeveloperTool", diff --git a/ai-terminal/src/main.ts b/ai-terminal/src/main.ts index 6829613..7aac016 100644 --- a/ai-terminal/src/main.ts +++ b/ai-terminal/src/main.ts @@ -2,6 +2,36 @@ import { bootstrapApplication } from "@angular/platform-browser"; import { appConfig } from "./app/app.config"; import { AppComponent } from "./app/app.component"; 
-bootstrapApplication(AppComponent, appConfig).catch((err) => - console.error(err), -); +console.log('🚀 Starting AI Terminal...'); +console.log('Environment:', { + userAgent: navigator.userAgent, + location: window.location.href, + timestamp: new Date().toISOString() +}); + +bootstrapApplication(AppComponent, appConfig) + .then(() => { + console.log('✅ AI Terminal started successfully!'); + }) + .catch((err) => { + console.error('❌ Failed to start AI Terminal:', err); + // Show error on page if Angular failed to start + document.body.innerHTML = ` +
+      <div>
+        <h2>AI Terminal - Startup Error</h2>
+        <p>Failed to initialize the application:</p>
+        <pre>${err}</pre>
+        <p>Please check the browser console for more details.</p>
+      </div>
+    `;
+  });