Multi-tenant fine-tuning for local LLMs with Tinker-compatible API
Updated Mar 23, 2026 - Python
Multi-agent system and orchestrator; future forebear of all Claws
🚀 Unified NLP Pipelines for Language Models
Delta: LLM conversation branching
Playground for learning by doing
A Unity package for building open-source AI voice agents that run fully locally. You can use it to build intelligent non-player characters (NPCs), game interfaces, and many other applications.
A terminal-based tool for building flexible AI workflows anywhere. Process documents, create pipelines, and manage context from the command line.
A lightweight, self-contained Python project for running local LLM personalities with minimal dependencies. This system uses TinyLlama-1.1B-Chat-v1.0 with llama-cpp-python for inference, and Rich for a user-friendly console chat interface. This is an expansion of Tiny-Local-llm that lets you select one of three basic personalities.
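A minimal sketch of how such a personality selector might fit together with llama-cpp-python. The `PERSONALITIES` table, persona wording, and model filename are illustrative assumptions, not code from the repo:

```python
# Sketch: selecting one of three personalities for a local llama-cpp-python chat.
# PERSONALITIES and the GGUF model path are illustrative, not the repo's code.

PERSONALITIES = {
    "pirate": "You are a salty pirate. Answer in pirate speak.",
    "butler": "You are a formal English butler. Answer politely.",
    "poet": "You are a poet. Answer in short rhyming verse.",
}

def build_messages(personality: str, user_prompt: str) -> list:
    """Build a chat-completion message list with the chosen system persona."""
    return [
        {"role": "system", "content": PERSONALITIES[personality]},
        {"role": "user", "content": user_prompt},
    ]

def chat(personality: str, prompt: str) -> str:
    """Run one chat turn. Requires `pip install llama-cpp-python` and a local
    GGUF model file; only imported here so the selector works without it."""
    from llama_cpp import Llama
    llm = Llama(model_path="tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf", verbose=False)
    out = llm.create_chat_completion(messages=build_messages(personality, prompt))
    return out["choices"][0]["message"]["content"]
```

The import of `Llama` is deferred into `chat()` so the personality logic stays usable (and testable) on machines without the model installed.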
J.A.R.V.I.S: An AI-powered Open Source Intelligence (OSINT) system. It orchestrates deep web scraping and local LLMs to autonomously generate comprehensive intelligence dossiers.
Chrome extension to summarize and chat with any web page using a local LLM (vLLM) — your data never leaves your machine.
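vLLM serves an OpenAI-compatible API, so a page-summarizing request from such an extension could be sketched as below. The endpoint URL, model name, and truncation limit are assumptions for illustration:

```python
# Sketch: JSON body a page summarizer might POST to a local vLLM server's
# OpenAI-compatible endpoint (e.g. http://localhost:8000/v1/chat/completions).
# The URL, model name, and 8000-character truncation are illustrative choices.
import json

VLLM_URL = "http://localhost:8000/v1/chat/completions"

def summarize_payload(page_text: str, model: str = "Qwen/Qwen2.5-7B-Instruct") -> str:
    """Build the request body; page text is truncated to keep the prompt
    within a modest context window."""
    return json.dumps({
        "model": model,
        "messages": [
            {"role": "system", "content": "Summarize the following web page."},
            {"role": "user", "content": page_text[:8000]},
        ],
        "temperature": 0.2,
    })
```

Because the server speaks the OpenAI chat-completions schema, the same payload works with any OpenAI-compatible client library pointed at localhost, and the page content never leaves the machine.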
A lightweight CLI to orchestrate Gemini and GPT using your local files as a shared blackboard.
(Experiment) Predefined set of instructions for local agents governing LLM usage and selection
Fully local autonomous AI research agent using Ollama with tool-based web search and reasoning.
An open-source Agentic RAG solution for seamless local Vector store retrieval and real-time web search. Automatically decides whether to query your internal Vector store or scout the Live Web for the most relevant information.
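As a rough illustration of that routing decision, here is a keyword heuristic standing in for whatever classifier the project actually uses; all names and the hint list are hypothetical:

```python
# Sketch of an agentic-RAG router: pick between the local vector store and a
# live web search. The freshness-keyword heuristic below is a hypothetical
# stand-in for the project's real decision logic.

FRESHNESS_HINTS = ("today", "latest", "current", "news", "price")

def route(query: str) -> str:
    """Return 'web' for queries that likely need fresh data, else 'vector_store'."""
    q = query.lower()
    if any(hint in q for hint in FRESHNESS_HINTS):
        return "web"
    return "vector_store"

def answer(query: str) -> str:
    """Dispatch the query to the chosen retrieval backend (stubbed here)."""
    if route(query) == "web":
        return f"[web] searching the live web for: {query}"
    return f"[vector_store] retrieving local documents for: {query}"
```

A production router would more plausibly ask the LLM itself to classify the query, but the dispatch shape — classify, then retrieve from exactly one backend — is the same.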
On-device autonomous research and content writing using open-source LLMs and CrewAI.
An entirely offline, privacy-centric voice assistant that leverages lightweight local AI for speech-to-text (Vosk), large language model processing (GGUF via Llama.cpp), and text-to-speech (Kokoro), offering seamless, low-latency, and secure voice interactions directly from your machine.
A repository collecting LLM projects of all kinds, from basic to advanced.
An Autonomous AI System for Generating Humorous & Viral Tweets using Open-Source LLMs
Universal local AI agent for querying any MCP-enabled data source using Ollama - vaults, databases, emissions data, and more. 100% offline, 100% sovereign.