</p>

<p align="center">
  <a href="https://pypi.org/project/engram-memory"><img src="https://img.shields.io/badge/python-3.9%2B-blue.svg" alt="Python 3.9+"></a>
  <a href="https://github.com/Ashish-dwi99/Engram/blob/main/LICENSE"><img src="https://img.shields.io/badge/license-MIT-blue.svg" alt="MIT License"></a>
  <a href="https://github.com/Ashish-dwi99/Engram/actions"><img src="https://github.com/Ashish-dwi99/Engram/actions/workflows/test.yml/badge.svg" alt="Tests"></a>
  <a href="https://github.com/Ashish-dwi99/Engram"><img src="https://img.shields.io/github/stars/Ashish-dwi99/Engram?style=social" alt="GitHub Stars"></a>
</p>

<p align="center">
  <a href="#%EF%B8%8F-architecture">Architecture</a> ·
  <a href="#-integrations">Integrations</a> ·
  <a href="#-api--sdk">API & SDK</a> ·
  <a href="#-longmemeval-on-colab-gpu">LongMemEval</a> ·
  <a href="https://github.com/Ashish-dwi99/Engram/blob/main/CHANGELOG.md">Changelog</a>
</p>

---
## Why Engram

Every AI agent you use starts with amnesia. Your coding assistant forgets your preferences between sessions. Your planning agent has no idea what your research agent discovered yesterday. You end up re-explaining context that should already be known.
## Quick Start

```bash
pip install engram-memory           # 1. Install from PyPI
export GEMINI_API_KEY="your-key"    # 2. Set one key before starting Engram
engram install                      # 3. Auto-configure Claude Code, Cursor, Codex
```

Restart your agent. Done — it now has persistent memory across sessions.
### PyPI Install Options

```bash
# Default runtime (Gemini + local Qdrant + MemoryClient deps)
pip install engram-memory

# Full-stack extras (MCP server + REST API + async + all providers)
pip install "engram-memory[all]"

# OpenAI provider add-on
pip install "engram-memory[openai]"

# Ollama provider add-on
pip install "engram-memory[ollama]"
```
### API Key: When and How to Provide It

Engram reads provider credentials when a process initializes `Memory()` (for example: `engram`, `engram-api`, `engram-mcp`, or your Python app).

1. Set env vars **before** starting those processes (see the sketch below).
2. If you change keys, restart the process.
3. The default provider is Gemini, so set `GEMINI_API_KEY` or `GOOGLE_API_KEY` unless you override the provider config.
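The same rule applies when you embed Engram in your own Python app: the key must be in the process environment before `Memory()` is constructed. A minimal sketch — the `from engram import Memory` import path is an assumption; adjust it to however your install exposes the class:

```python
import os

# The key must be present BEFORE Memory() is constructed; exporting it
# after the process has started has no effect on that process.
os.environ.setdefault("GEMINI_API_KEY", "your-key")

from engram import Memory  # import path assumed

memory = Memory()  # provider credentials are read here, at init time
```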

```bash
# Default (Gemini)
export GEMINI_API_KEY="your-key"
engram-api
```

```bash
# OpenAI provider
export OPENAI_API_KEY="your-key"
engram-api
```

```bash
# Ollama (local; no cloud key)
export OLLAMA_HOST="http://localhost:11434"
engram-api
```
For remote usage via `MemoryClient`, provider API keys are only needed on the **server** running Engram. The client itself needs just two credentials (see the sketch below):

- `ENGRAM_ADMIN_KEY` (or `admin_key=...`) when minting sessions via `/v1/sessions`
- a Bearer session token for normal read/write API calls
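To make the split concrete, here is a hedged sketch of the remote flow using plain `requests`. Only the `/v1/sessions` path and the `ENGRAM_ADMIN_KEY` / Bearer-token roles come from the docs above; the auth header used for minting, the `token` response field, and the `/v1/memories` route are assumptions to verify against your server:

```python
import os

import requests

BASE_URL = "http://localhost:8000"  # wherever engram-api listens (port assumed)

# 1) Mint a session with the admin key (server-side credential).
#    Header name and response shape are assumptions.
resp = requests.post(
    f"{BASE_URL}/v1/sessions",
    headers={"Authorization": f"Bearer {os.environ['ENGRAM_ADMIN_KEY']}"},
)
resp.raise_for_status()
session_token = resp.json()["token"]  # field name assumed

# 2) Use the session token as the Bearer credential for normal calls.
headers = {"Authorization": f"Bearer {session_token}"}
print(requests.get(f"{BASE_URL}/v1/memories", headers=headers).json())  # route assumed
```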
**Or with Docker:**

```bash
---
## LongMemEval on Colab (GPU)

Use this flow to benchmark Engram on LongMemEval in Google Colab with GPU acceleration.

```bash
# 1) In Colab: Runtime -> Change runtime type -> GPU

# 2) Install Engram + GPU reader dependencies
pip install -U engram-memory transformers accelerate

# 3) Download the LongMemEval data
mkdir -p /content/longmemeval
cd /content/longmemeval
curl -L -o longmemeval_s_cleaned.json \
  https://huggingface.co/datasets/xiaowu0162/longmemeval-cleaned/resolve/main/longmemeval_s_cleaned.json

# 4) Run the Engram benchmark (HF reader on GPU)
python -m engram.benchmarks.longmemeval \
  --dataset-path /content/longmemeval/longmemeval_s_cleaned.json \
  --output-jsonl /content/longmemeval/engram_hypotheses.jsonl \
  --retrieval-jsonl /content/longmemeval/engram_retrieval.jsonl \
  --answer-backend hf \
  --hf-model Qwen/Qwen2.5-1.5B-Instruct \
  --embedder-provider simple \
  --llm-provider mock \
  --vector-store-provider memory \
  --history-db-path /content/engram-longmemeval.db \
  --top-k 8 \
  --max-questions 100 \
  --skip-abstention
```

Notes:
- The output file is evaluator-compatible (one `question_id` + `hypothesis` JSON object per line); see the sanity-check sketch below.
- `--include-debug-fields` adds retrieval diagnostics to each output row.
- The command above uses the `simple` embedder and `mock` LLM for memory operations, so **no Gemini/OpenAI key is required**.
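Because each output line is a standalone JSON object, a few lines of Python are enough to sanity-check the hypotheses file before scoring. Only `question_id` and `hypothesis` are documented above; any extra keys would come from debug fields:

```python
import json

# Verify every row parses and carries the two required fields.
with open("/content/longmemeval/engram_hypotheses.jsonl") as f:
    rows = [json.loads(line) for line in f if line.strip()]

bad = [r for r in rows if "question_id" not in r or "hypothesis" not in r]
print(f"{len(rows)} rows, {len(bad)} missing required fields")
print(rows[0]["question_id"], "->", rows[0]["hypothesis"][:80])
```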
If you want to run with Gemini only (no extra reader packages), use the base install and set the key **before** starting the run:

```bash
pip install -U engram-memory
export GEMINI_API_KEY="your-key"

python -m engram.benchmarks.longmemeval \
  --dataset-path /content/longmemeval/longmemeval_s_cleaned.json \
  --output-jsonl /content/longmemeval/engram_hypotheses.jsonl \
  --answer-backend engram-llm \
  --llm-provider gemini \
  --embedder-provider gemini \
  --vector-store-provider memory
```
Optional official QA scoring from the LongMemEval repo:

```bash
cd /content
git clone https://github.com/xiaowu0162/LongMemEval.git
cd /content/LongMemEval/src/evaluation
export OPENAI_API_KEY="your-key"
python evaluate_qa.py gpt-4o /content/longmemeval/engram_hypotheses.jsonl /content/longmemeval/longmemeval_s_cleaned.json
```
---

## Docker

```bash