
Commit 4af1793

update streamlit docs and examples

1 parent a18d031 · commit 4af1793

File tree

5 files changed (+77, -46 lines)


README.md

Lines changed: 14 additions & 21 deletions
````diff
@@ -82,7 +82,7 @@ user_feedback = trubrics.log_feedback(
 
 ## Collect user prompts & feedback from a Streamlit app
 
-To start collecting prompts & feedback from your [Streamlit](https://streamlit.io/) app, install the additional dependency:
+To start collecting user feedback from your [Streamlit](https://streamlit.io/) app, install the additional dependency:
 
 ```console
 pip install "trubrics[streamlit]"
@@ -94,34 +94,27 @@ and test this code snippet in your app:
 import streamlit as st
 from trubrics.integrations.streamlit import FeedbackCollector
 
-if "logged_prompt" not in st.session_state:
-    st.session_state.logged_prompt = None
-
 collector = FeedbackCollector(
     email=st.secrets.TRUBRICS_EMAIL,
     password=st.secrets.TRUBRICS_PASSWORD,
     project="default"
 )
 
-if st.button("Predict"):
-    st.session_state.logged_prompt = collector.log_prompt(
-        config_model={"model": "gpt-3.5-turbo"},
-        prompt="Tell me a joke",
-        generation="Why did the chicken cross the road? To get to the other side.",
-    )
-
-if st.session_state.logged_prompt:
-    st.write("A model generation...")
-    user_feedback = collector.st_feedback(
-        component="default",
-        feedback_type="thumbs",
-        open_feedback_label="[Optional] Provide additional feedback",
-        model=st.session_state.logged_prompt.config_model.model,
-        prompt_id=st.session_state.logged_prompt.id,
-        align="flex-start",
-    )
+user_feedback = collector.st_feedback(
+    component="default",
+    feedback_type="thumbs",
+    open_feedback_label="[Optional] Provide additional feedback",
+    model="gpt-3.5-turbo",
+    prompt_id=None,  # check out collector.log_prompt() to log your user prompts
+)
+
+if user_feedback:
+    st.write("#### Raw feedback saved to Trubrics:")
+    st.write(user_feedback)
 ```
 
+For a full example logging user prompts and feedback in Streamlit, see our [Streamlit integration docs](https://trubrics.github.io/trubrics-sdk/integrations/streamlit/).
+
 ## Collect user feedback from a React.js app
 
 To collect user feedback from a React application, check out [this example](https://github.com/trubrics/trubrics-sdk/tree/main/examples/react_js).
````
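
The snippets in this commit read credentials with `st.secrets.TRUBRICS_EMAIL` and `st.secrets.TRUBRICS_PASSWORD`. As a setup note (standard Streamlit behaviour, not something added by this commit), those values come from Streamlit's secrets management, typically a local `.streamlit/secrets.toml` file. A minimal sketch, with the assumed file contents shown as comments:

```py
import streamlit as st

# Streamlit populates st.secrets from .streamlit/secrets.toml (or from the app's
# secrets settings when deployed). The two keys used in the snippets above would
# be defined there as, for example:
#
#   TRUBRICS_EMAIL = "you@example.com"
#   TRUBRICS_PASSWORD = "your-trubrics-password"
#
# They are then available as attributes or dict-style keys on st.secrets:
email = st.secrets.TRUBRICS_EMAIL
password = st.secrets["TRUBRICS_PASSWORD"]
st.write("Authenticating to Trubrics as:", email)
```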

docs/assets/streamlit-example.png

421 KB

docs/integrations/streamlit.md

Lines changed: 26 additions & 21 deletions
````diff
@@ -11,13 +11,14 @@ pip install "trubrics[streamlit]"
 ## Streamlit Example Apps
 Once you have created an account with [Trubrics](https://trubrics.streamlit.app/), you can try our deployed example Streamlit apps that use the integration to save feedback:
 
-- [LLM Chat Completion](https://trubrics-llm-example-chatbot.streamlit.app/): A chatbot that queries OpenAI's API and allows users to leave feedback.
-- [LLM Completion](https://trubrics-llm-example.streamlit.app/): An LLM app that queries OpenAI's API and allows users to leave feedback on single text generations.
-- [Titanic](https://trubrics-titanic-example.streamlit.app/): An app that allows users to give feedback on a classifier that predicts whether passengers will survive the titanic.
+![](../assets/streamlit-example.png)
+
+- **LLM chat** - [deployed app](https://trubrics-llm-example-chatbot.streamlit.app/) | [code](https://github.com/trubrics/trubrics-sdk/blob/main/examples/streamlit/llm_chatbot.py): A chatbot that queries OpenAI's API and allows users to leave feedback.
+- **LLM single answer** - [deployed app](https://trubrics-llm-example.streamlit.app/) | [code](https://github.com/trubrics/trubrics-sdk/blob/main/examples/streamlit/llm_app.py): An LLM app that queries OpenAI's API and allows users to leave feedback on single text generations.
 
 The code for these apps can be viewed in the [trubrics-sdk](https://github.com/trubrics/trubrics-sdk/tree/main/examples), and may be run by cloning the repo and running:
 
-=== "LLM Chat Completion"
+=== "LLM chat"
     !!!tip OpenAI
         To run this app, you are required to have your own [OpenAI](https://platform.openai.com/overview) API key.
 
@@ -31,7 +32,7 @@ The code for these apps can be viewed in the [trubrics-sdk](https://github.com/t
     streamlit run examples/feedback/streamlit/llm_chatbot.py
     ```
 
-=== "LLM Completion"
+=== "LLM single answer"
     !!!tip OpenAI
         To run this app, you are required to have your own [OpenAI](https://platform.openai.com/overview) API key.
 
@@ -45,24 +46,18 @@ The code for these apps can be viewed in the [trubrics-sdk](https://github.com/t
     streamlit run examples/feedback/streamlit/llm_app.py
     ```
 
-=== "Titanic"
-
-    ```
-    pip install scikit-learn==1.1.0
-    ```
+## Add the FeedbackCollector to your App
 
-    ```console
-    streamlit run examples/feedback/streamlit/titanic_app.py
-    ```
+Here is a complete example to log user prompts and feedback from a simple streamlit application:
 
-## Add the FeedbackCollector to your App
-To get started, you can add this code snippet directly to your streamlit app:
-```py
+```py title="examples/streamlit/basic_app.py"
 import streamlit as st
 from trubrics.integrations.streamlit import FeedbackCollector
 
 if "logged_prompt" not in st.session_state:
     st.session_state.logged_prompt = None
+if "feedback_key" not in st.session_state:
+    st.session_state.feedback_key = 0
 
 # 1. authenticate with trubrics
 collector = FeedbackCollector(
@@ -71,24 +66,34 @@ collector = FeedbackCollector(
     project="default"
 )
 
-if st.button("Predict"):
+if st.button("Refresh"):
+    st.session_state.feedback_key += 1
+    st.session_state.logged_prompt = None
+    st.experimental_rerun()
+
+prompt = "Tell me a joke"
+generation = "Why did the chicken cross the road? To get to the other side."
+st.write(f"#### :orange[Example user prompt: {prompt}]")
+
+
+if st.button("Generate response"):
     # 2. log a user prompt & model response
     st.session_state.logged_prompt = collector.log_prompt(
         config_model={"model": "gpt-3.5-turbo"},
-        prompt="Tell me a joke",
-        generation="Why did the chicken cross the road? To get to the other side.",
+        prompt=prompt,
+        generation=generation,
     )
 
 if st.session_state.logged_prompt:
-    st.write("A model generation...")
-
+    st.write(f"#### :blue[Example model generation: {generation}]")
     # 3. log some user feedback
     user_feedback = collector.st_feedback(
         component="default",
         feedback_type="thumbs",
         open_feedback_label="[Optional] Provide additional feedback",
         model=st.session_state.logged_prompt.config_model.model,
         prompt_id=st.session_state.logged_prompt.id,
+        key=st.session_state.feedback_key,
         align="flex-start",
     )
 ```
````
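
The `Refresh` button in the example above bumps `st.session_state.feedback_key`, which is then passed as `key=` to `st_feedback`. This relies on a general Streamlit behaviour: giving a widget a new `key` makes Streamlit treat it as a brand-new widget, so its previous state (here, the submitted thumbs rating) is cleared. A minimal sketch of that pattern in plain Streamlit, without Trubrics (the widget and key names are illustrative):

```py
import streamlit as st

# Plain-Streamlit illustration of the key-rotation pattern used above:
# changing a widget's `key` makes Streamlit render a fresh widget, discarding
# whatever the user previously entered under the old key.
if "widget_key" not in st.session_state:
    st.session_state.widget_key = 0

if st.button("Reset"):
    # Incrementing the counter before the widget is created means the widget
    # below gets a new key on this rerun, i.e. an empty input box.
    st.session_state.widget_key += 1

comment = st.text_input(
    "Leave a comment",
    key=f"comment_{st.session_state.widget_key}",
)
st.write("Current comment:", comment)
```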

examples/streamlit/basic_app.py

Lines changed: 20 additions & 4 deletions
````diff
@@ -4,23 +4,39 @@
 
 if "logged_prompt" not in st.session_state:
     st.session_state.logged_prompt = None
+if "feedback_key" not in st.session_state:
+    st.session_state.feedback_key = 0
 
+# 1. authenticate with trubrics
 collector = FeedbackCollector(email=st.secrets.TRUBRICS_EMAIL, password=st.secrets.TRUBRICS_PASSWORD, project="default")
 
-if st.button("Predict"):
+if st.button("Refresh"):
+    st.session_state.feedback_key += 1
+    st.session_state.logged_prompt = None
+    st.experimental_rerun()
+
+prompt = "Tell me a joke"
+generation = "Why did the chicken cross the road? To get to the other side."
+st.write(f"#### :orange[Example user prompt: {prompt}]")
+
+
+if st.button("Generate response"):
+    # 2. log a user prompt & model response
     st.session_state.logged_prompt = collector.log_prompt(
         config_model={"model": "gpt-3.5-turbo"},
-        prompt="Tell me a joke",
-        generation="Why did the chicken cross the road? To get to the other side.",
+        prompt=prompt,
+        generation=generation,
    )
 
 if st.session_state.logged_prompt:
-    st.write("A model generation...")
+    st.write(f"#### :blue[Example model generation: {generation}]")
+    # 3. log some user feedback
     user_feedback = collector.st_feedback(
         component="default",
         feedback_type="thumbs",
         open_feedback_label="[Optional] Provide additional feedback",
         model=st.session_state.logged_prompt.config_model.model,
         prompt_id=st.session_state.logged_prompt.id,
+        key=st.session_state.feedback_key,
         align="flex-start",
     )
````
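
One hedged side note on the refresh handler above: it calls `st.experimental_rerun()`, which recent Streamlit releases deprecate in favour of `st.rerun()`. A small version-tolerant sketch of the same refresh logic (the `trigger_rerun` helper is illustrative, not part of the SDK or of this commit):

```py
import streamlit as st

def trigger_rerun() -> None:
    # Prefer st.rerun() where it exists (recent Streamlit releases); otherwise
    # fall back to the older st.experimental_rerun() used in the example above.
    if hasattr(st, "rerun"):
        st.rerun()
    else:
        st.experimental_rerun()

if st.button("Refresh"):
    st.session_state["feedback_key"] = st.session_state.get("feedback_key", 0) + 1
    st.session_state["logged_prompt"] = None
    trigger_rerun()
```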
Lines changed: 17 additions & 0 deletions
````diff
@@ -0,0 +1,17 @@
+import streamlit as st
+
+from trubrics.integrations.streamlit import FeedbackCollector
+
+collector = FeedbackCollector(email=st.secrets.TRUBRICS_EMAIL, password=st.secrets.TRUBRICS_PASSWORD, project="default")
+
+user_feedback = collector.st_feedback(
+    component="default",
+    feedback_type="thumbs",
+    open_feedback_label="[Optional] Provide additional feedback",
+    model="gpt-3.5-turbo",
+    prompt_id=None,  # check out collector.log_prompt() to log your user prompts
+)
+
+if user_feedback:
+    st.write("#### Raw feedback saved to Trubrics:")
+    st.write(user_feedback)
````
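
The `if user_feedback:` guard in this new example works because `st_feedback` appears to return a value only once the user has clicked a thumb. If you want the saved feedback to remain visible across later reruns, one possible sketch is to stash it in session state (the `last_feedback` key is illustrative, not part of the SDK):

```py
import streamlit as st

from trubrics.integrations.streamlit import FeedbackCollector

collector = FeedbackCollector(
    email=st.secrets.TRUBRICS_EMAIL,
    password=st.secrets.TRUBRICS_PASSWORD,
    project="default",
)

user_feedback = collector.st_feedback(
    component="default",
    feedback_type="thumbs",
    open_feedback_label="[Optional] Provide additional feedback",
    model="gpt-3.5-turbo",
    prompt_id=None,
)

# Keep the most recent submission around so it stays on screen after reruns
# ("last_feedback" is an illustrative session-state key, not an SDK feature).
if user_feedback:
    st.session_state["last_feedback"] = user_feedback

if "last_feedback" in st.session_state:
    st.write("#### Raw feedback saved to Trubrics:")
    st.write(st.session_state["last_feedback"])
```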
