You signed in with another tab or window. Reload to refresh your session.You signed out in another tab or window. Reload to refresh your session.You switched accounts on another tab or window. Reload to refresh your session.Dismiss alert
Copy file name to clipboardExpand all lines: README.md
+9-10Lines changed: 9 additions & 10 deletions
Display the source diff
Display the rich diff
Original file line number
Diff line number
Diff line change
@@ -5,11 +5,11 @@
5
5
6
6
# ZenGuard AI
7
7
8
-
ZenGuard AI enables AI developers to integrate production-level, low-code LLM (Large Language Model) guardrails into their generative AI applications effortlessly. With ZenGuard AI, ensure your application operates within trusted boundaries, is protected from malicious attacks and maintains user privacy without compromising on performance.
8
+
ZenGuard AI enables AI developers to integrate production-level, low-code LLM (Large Language Model) guardrails into their generative AI applications effortlessly. With ZenGuard AI, ensure your application operates within trusted boundaries, is protected from prompt injections, and maintains user privacy without compromising on performance.
9
9
10
10
# Features
11
11
12
-
***Prompt Injection Detection**: Identifies and mitigates attempts to manipulate, exfiltrate proprietary data and insert malicious content to/from models and RAG systems.
12
+
***Prompt Injection Detection**: Identifies and mitigates attempts to manipulate, exfiltrate proprietary data, and insert malicious content to/from models and RAG systems.
13
13
***Jailbreak Detection**: Identifies and mitigates attempts to manipulate model/app outputs.
14
14
***Personally Identifiable Information (PII) Detection**: Protects user data privacy by detecting and managing sensitive information.
15
15
***Allowed Topics Detection**: Enables your model/app to generate content within specified, permissible topics.
@@ -27,7 +27,7 @@ pip install zenguard
27
27
28
28
## Getting Started
29
29
30
-
Jump into our [Quickstart Guide](https://docs.zenguard.ai/start-here/quickstart/) to easily integrate ZenGuard AI into your application.
30
+
Jump into our [Quickstart Guide](https://docs.zenguard.ai) to easily integrate ZenGuard AI into your application.
31
31
32
32
# ZenGuard Playground
33
33
@@ -38,9 +38,9 @@ Test the capabilities of ZenGuard AI in our ZenGuard [Playground](https://consol
38
38
A more detailed documentation is available at [docs.zenguard.ai](https://docs.zenguard.ai/).
39
39
40
40
41
-
# Pentesting
41
+
# Penetration Testing
42
42
43
-
Run pentest against both ZenGuard AI and (optionally) ChatGPT.
43
+
Run pen test against both ZenGuard AI and (optionally) ChatGPT.
44
44
45
45
We are using the modified version of the [PromptInject](https://github.com/agencyenterprise/PromptInject/tree/main) as the basic framework for building prompt injections.
46
46
@@ -105,7 +105,7 @@ if __name__ == "__main__":
105
105
106
106
Clone this repo and install requirements.
107
107
108
-
Run pentest against ZenGuard AI:
108
+
Run pen test against ZenGuard AI:
109
109
110
110
```shell
111
111
export ZEN_API_KEY=your-api-key
@@ -120,12 +120,11 @@ python tests/pentest.py
120
120
```
121
121
122
122
123
-
124
-
125
-
126
123
# Support and Contact
127
124
128
-
[Book a Demo](https://calendly.com/galym-u) or just shoot us an email to hello@zenguard.ai.
125
+
[Book a Demo](https://calendly.com/galym-u) or just shoot us an email to hello@zenguard.ai
126
+
127
+
Topics we care about - LLM Security, LLM Guardrails, Prompt Injections, GenAI Security.
0 commit comments