- Download the `GrokMiniChat` folder
- Extract to any location (e.g., `C:\Program Files\GrokMiniChat`)
- Double-click `GrokMiniChat.exe` to run
- (Optional) Create a desktop shortcut to `GrokMiniChat.exe`
- Download the release package
- Right-click `install.bat` and select "Run as Administrator"
- Follow the installation prompts
- Launch from the desktop shortcut or Start Menu
- Python 3.8 or higher
- Windows 10 or higher
- Git (optional)
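To confirm your interpreter meets the minimum version, a quick check like the following can be run before installing anything (this helper is illustrative, not part of the app):

```python
import sys

def meets_minimum(version_info, minimum=(3, 8)):
    """True if the given (major, minor, ...) tuple satisfies the minimum."""
    return tuple(version_info[:2]) >= minimum

if not meets_minimum(sys.version_info):
    raise SystemExit("Python 3.8 or higher is required")
print("Python version OK")
```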
- Install dependencies:

  ```
  pip install -r requirements.txt
  ```

- Run the chat application:

  ```
  python chat_app.py
  ```
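Before launching, it can help to verify the heavy dependencies are actually importable. A small pre-flight sketch — the package names here are assumed from the PyInstaller flags used in the manual build, so adjust them to match `requirements.txt`:

```python
import importlib.util

def missing_packages(packages):
    """Return the names in `packages` for which no module can be found."""
    return [p for p in packages if importlib.util.find_spec(p) is None]

# torch and transformers are assumptions based on the manual build flags.
missing = missing_packages(["torch", "transformers"])
if missing:
    print("Missing:", ", ".join(missing), "- run: pip install -r requirements.txt")
else:
    print("All dependencies found")
```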
- Install build dependencies:

  ```
  pip install -r requirements.txt
  ```

- Build the executable:

  ```
  python setup_windows.py
  ```

- Distribute:
  - The executable will be in `dist/GrokMiniChat/`
  - Share the entire `GrokMiniChat` folder
  - Users can run `GrokMiniChat.exe` without Python
If `setup_windows.py` doesn't work, you can build manually:

```
pyinstaller --name=GrokMiniChat --onedir --windowed ^
  --add-data=grok_mini.py;. ^
  --hidden-import=torch ^
  --hidden-import=transformers ^
  --collect-all=torch ^
  --collect-all=transformers ^
  chat_app.py
```

- Ensure all dependencies are installed:

  ```
  pip install -r requirements.txt
  ```
- The app falls back to CPU mode automatically when no CUDA GPU is available
- For GPU support, install PyTorch with CUDA from pytorch.org
- First run downloads the GPT-2 tokenizer (~500MB)
- Model initialization takes 30-60 seconds on first run
- Subsequent starts are faster
- Close other applications
- Reduce "Max Tokens" setting in the app
- Consider using a smaller model configuration
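Lowering "Max Tokens" reduces both generation time and memory, since the cost of generation grows with response length. A minimal sketch of clamping that setting to the UI's 50-500 range — the setting names below are assumptions for illustration, not the app's actual code:

```python
def clamp_max_tokens(value, lo=50, hi=500):
    """Keep the token limit inside the UI's supported 50-500 range."""
    return max(lo, min(int(value), hi))

# Hypothetical generation settings; shorter replies use less memory.
settings = {
    "max_new_tokens": clamp_max_tokens(150),
    "temperature": 0.7,
}
print(settings)
```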
- ChatGPT-like Interface: Clean, modern chat UI
- Temperature Control: Adjust response creativity (0.1-1.5)
- Token Limit: Control response length (50-500 tokens)
- Vision Support: Upload images for visual question answering
- Dark Theme: Easy on the eyes
- Chat History: Track conversation within session
- Keyboard Shortcuts:
- Enter: Send message
- Shift+Enter: New line
- Ctrl+L: Clear chat (in future update)
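Temperature works by rescaling the model's next-token probabilities before sampling: values below 1.0 sharpen the distribution (more predictable text), while values above 1.0 flatten it (more varied, creative text). A standalone sketch of this standard technique — not the app's actual sampler:

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=random):
    """Sample a token index from raw logits after temperature scaling."""
    scaled = [x / temperature for x in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(x - peak) for x in scaled]
    r = rng.random() * sum(weights)
    cumulative = 0.0
    for i, w in enumerate(weights):
        cumulative += w
        if r <= cumulative:
            return i
    return len(weights) - 1

# At very low temperature the highest-logit token wins almost every time.
print(sample_with_temperature([1.0, 10.0, 2.0], 0.1))
```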
Minimum:
- Windows 10 or higher
- 4 GB RAM
- 2 GB disk space
- Intel Core i3 or equivalent

Recommended:
- Windows 11
- 16 GB RAM
- 5 GB disk space
- Intel Core i7 or equivalent
- NVIDIA GPU with 4GB+ VRAM (optional, for faster inference)
For issues, questions, or contributions, please visit the GitHub repository.