# ⚙️ CLI Usage — PromptStream.AI

The **PromptStream.AI CLI** lets you build, validate, and analyze prompt templates directly from your terminal.
It provides a clean developer workflow for experimenting with the Flow.AI ecosystem (Flow.AI.Core, TokenFlow.AI, and PromptStream.AI).

---

## 🚀 Installation

Install the global tool via NuGet:

```bash
dotnet tool install --global promptstream.ai.cli
```

Once installed, you can call it using the `promptstream` command from anywhere.
---

## 🧱 Basic Commands

### 🧩 Build

Render and substitute variables in a template.

```bash
promptstream build --template "Hello {{name}}" --var name=Andrew
```
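
Conceptually, `build` replaces each `{{variable}}` placeholder with the value supplied via `--var`. A minimal sketch of that substitution, using plain `sed` rather than the library's actual template engine:

```shell
# Hypothetical illustration of {{variable}} substitution; plain string
# replacement with sed, not PromptStream.AI's real builder logic.
template='Hello {{name}}'
rendered=$(printf '%s' "$template" | sed 's/{{name}}/Andrew/')
printf '%s\n' "$rendered"
```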

### 🔍 Validate

Check structure, token limits, and syntax.

```bash
promptstream validate --template "Summarize {{text}}" --var text="PromptStream.AI is awesome"
```

### 📊 Analyze

Estimate token counts and cost for a given prompt.

```bash
promptstream analyze --template "Explain quantum computing simply."
```
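
As a rough mental model, token counts scale with prompt length (around four characters per token for English text). The sketch below uses that heuristic with an illustrative per-token rate; the CLI's real figures come from its tokenizer and model pricing (via TokenFlow.AI) and will differ:

```shell
# Back-of-envelope token and cost estimate. The ~4 chars/token heuristic
# and the $0.15 per million input tokens rate are assumptions, not the
# CLI's actual tokenizer or pricing.
prompt='Explain quantum computing simply.'
chars=$(printf '%s' "$prompt" | wc -c)
tokens=$(( (chars + 3) / 4 ))
awk -v t="$tokens" 'BEGIN { printf "~%d tokens, ~$%.8f\n", t, t * 0.15 / 1000000 }'
```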

### 💬 Generate

Build, validate, and request a response from the model provider.

```bash
promptstream generate --template "Write a haiku about AI" --save context.json
```

### 🧠 Context

Inspect, summarize, or clear conversation history.

```bash
promptstream context --load context.json --summarize
```
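
A saved context file is plain JSON. The exact schema is defined by PromptStream.AI; as a purely hypothetical sketch, assuming an OpenAI-style message list, it might look something like:

```json
{
  "messages": [
    { "role": "user", "content": "Write a haiku about AI" },
    { "role": "assistant", "content": "..." }
  ]
}
```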

---

## 🧩 Command Reference

| Command | Description |
|----------|--------------|
| `build` | Renders and substitutes variables in a template. |
| `validate` | Validates prompt structure and token usage. |
| `analyze` | Estimates token usage and cost metrics. |
| `generate` | Generates model output for a prompt. |
| `context` | Loads, saves, summarizes, or clears context data. |

---

## 💡 Options

| Option | Description |
|---------|-------------|
| `--template <value>` | Path to a `.json` file or inline text content. |
| `--var <key=value>` | Variables to substitute (multiple allowed). |
| `--context <path>` | Path to an existing context JSON file. |
| `--save <path>` | Destination path to save context or output. |
| `--model <id>` | Optional model ID (e.g. `gpt-4o-mini`). |
| `--output <format>` | Output format (`table` or `json`). |

| 87 | +--- |
| 88 | + |
| 89 | +## 📸 Example Output |
| 90 | + |
66 | 91 | ``` |
67 | | -📂 Loaded 4 messages from context.json |
68 | | -Summary: 4 messages (user, assistant) |
| 92 | +📊 Prompt Analysis |
| 93 | +──────────────────────────────────────── |
| 94 | +Model: gpt-4o-mini |
| 95 | +Prompt Tokens: 128 |
| 96 | +Completion Tokens: 142 |
| 97 | +Total Tokens: 270 |
| 98 | +Estimated Cost: $0.00052 |
| 99 | +──────────────────────────────────────── |
| 100 | +✅ Analysis complete — total tokens: 270 |
69 | 101 | ``` |

---

## 🧠 Integration Notes

- The CLI uses **PromptStream.AI** services under the hood, leveraging Flow.AI.Core’s shared models.
- Variable substitution is handled via the same builder logic used by the library itself.
- Each command mirrors a service call: *Build → Validate → Analyze → Generate.*

---

## 🗺️ Links

- [PromptStream.AI on GitHub](https://github.com/AndrewClements84/PromptStream.AI)
- [NuGet Package](https://www.nuget.org/packages/PromptStream.AI.CLI)
- [PromptStream.AI Docs](https://andrewclements84.github.io/PromptStream.AI)

---

**Author:** [AndrewClements84](https://github.com/AndrewClements84)
**License:** MIT
**Version:** 0.8.3