|
1 | | -# Developing Chat Agents: Getting Started |
| 1 | +# Developing Chat Functions: Getting Started |
2 | 2 |
|
3 | | -## What is a Chat Agent? |
| 3 | +## What is a Chat Function? |
4 | 4 |
|
5 | | -It's a function which calls Large Language Models (LLMs) to respond to the student's messages given contxtual data: |
| 5 | +A chat function is a function that calls Large Language Models (LLMs) to respond to a student's messages given contextual data:
6 | 6 |
|
7 | 7 | - question data |
8 | 8 | - user data such as past responses to the problem |
9 | | - Chatbot Agents capture and automate the process of assisting students during their learning process when outside of classroom. |
| 9 | + |
| 10 | +Chat functions host a chatbot. Chatbots capture and automate the process of assisting students with their learning outside the classroom.
10 | 11 |
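Concretely, the entry point of a chat function receives the student's latest message together with this contextual data and returns the chatbot's reply. A minimal self-contained sketch follows; the name `invoke`, the `params` keys, and the response shape are illustrative assumptions, not the boilerplate's exact interface:

```python
def invoke(message: str, params: dict) -> dict:
    """Respond to a student's message given contextual data.

    `params` may carry question data and user data such as past responses.
    """
    # A real chat function calls an LLM pipeline here; this sketch just
    # echoes the message so it stays runnable on its own.
    reply = f"You said: {message}"
    return {"chatbot_response": reply}


print(invoke("hi", {"conversation_id": "12345Test"}))
```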
|
11 | 12 | ## Getting Setup for Development |
12 | 13 |
|
13 | 14 | 1. Get the code on your local machine (using GitHub Desktop or the `git` CLI)
14 | 15 |
|
15 | | - - For new functions: clone the main repo for [lambda-chat](https://github.com/lambda-feedback/lambda-chat) and create a new branch. Then go under `scr/agents` and copy the `base_agent` folder. |
16 | | - |
| 16 | +   - For new functions: clone the [chat-function-boilerplate](https://github.com/lambda-feedback/chat-function-boilerplate) template repo. **Make sure the new repository is set to public (it needs access to organisation secrets)**.
17 | 17 | - For existing functions: please make your changes on a new separate branch |
18 | 18 |
|
19 | | -2. _If you are creating a new chatbot agent_, you'll need to set it's name as the folder name in `scr/agents` and its corresponding files. |
20 | | -3. You are now ready to start making changes and implementing features by editing each of the three main function-logic files: |
| 19 | +2. _If you are creating a new chatbot_, you can either edit `src/agents/base_agent` directly or copy the folder and rename it after your chatbot.
| 20 | +3. You are now ready to start making changes and implementing features by editing each of the main function-logic files: |
21 | 21 |
|
22 | | - 1. **`scr/agents/{base_agent}/{base}_agent.py`**: This file contains the main LLM pipeline using [LangGraph](https://langchain-ai.github.io/langgraph/) and [LangChain](https://python.langchain.com/docs/introduction/). |
| 22 | + 1. **`src/agents/{base_agent}/{base}_agent.py`**: This file contains the main LLM pipeline using [LangGraph](https://langchain-ai.github.io/langgraph/) and [LangChain](https://python.langchain.com/docs/introduction/). |
23 | 23 |
|
24 | | - - the agent expects the following inputs when it being called: |
| 24 | +     - the chat function expects the following arguments when it is called:
25 | 25 |
|
26 | 26 | Body with necessary Params: |
27 | 27 |
|
@@ -52,19 +52,44 @@ It's a function which calls Large Language Models (LLMs) to respond to the stude |
52 | 52 | } |
53 | 53 | ``` |
54 | 54 |
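For orientation, the pipeline file from sub-step 1 typically builds a small LangGraph graph around a LangChain chat model. Below is a minimal, hedged sketch, assuming an OpenAI-backed model via `langchain-openai`; the boilerplate's real graph will have more nodes and consume the parameters shown above:

```python
from typing import Annotated
from typing_extensions import TypedDict

from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages


class State(TypedDict):
    # Conversation messages; add_messages appends new ones instead of overwriting.
    messages: Annotated[list, add_messages]


llm = ChatOpenAI(model="gpt-4o-mini")  # model choice is an assumption


def chatbot(state: State) -> dict:
    # Run the LLM over the conversation so far and emit its reply.
    return {"messages": [llm.invoke(state["messages"])]}


builder = StateGraph(State)
builder.add_node("chatbot", chatbot)
builder.add_edge(START, "chatbot")
builder.add_edge("chatbot", END)
graph = builder.compile()

# e.g. graph.invoke({"messages": [("user", "hi")]})
```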
|
55 | | - 2. **`scr/agents/{base_agent}/{base}_prompts.py`**: This is where you can write the system prompts that describe how your AI Assistant should behave and respond to the user. |
| 55 | + 2. **`src/agents/{base_agent}/{base}_prompts.py`**: This is where you can write the system prompts that describe how your AI Assistant should behave and respond to the user. |
| 56 | + |
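As an illustration of sub-step 2, the prompts file usually just holds plain strings that the agent file imports and sends as system messages; the variable names below are assumptions rather than the boilerplate's actual ones:

```python
# Hypothetical contents of a {base}_prompts.py file; names are illustrative.
system_prompt = (
    "You are a teaching assistant on the Lambda Feedback platform. "
    "Guide the student towards the answer with hints and questions; "
    "do not reveal the full solution outright."
)

conversation_start = "Hi! How can I help you with this question?"
```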
| 57 | +   3. _If you renamed the chatbot agent file_, make sure to add your chatbot's `invoke()` function to the `module.py` file.
56 | 58 |
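A hedged sketch of the wiring sub-step 3 refers to, assuming `module.py` dispatches to a chatbot by name; the import paths, the dispatch mechanism, and the `agent_name` parameter are assumptions about the boilerplate, not its confirmed structure:

```python
# Hypothetical module.py wiring; adapt names and paths to the real files.
from src.agents.base_agent.base_agent import invoke as base_agent_invoke
from src.agents.my_tutor.my_tutor_agent import invoke as my_tutor_invoke  # a renamed agent

AGENTS = {
    "base_agent": base_agent_invoke,
    "my_tutor": my_tutor_invoke,
}


def chat_module(message: str, params: dict) -> dict:
    # Pick the requested agent (falling back to the base agent) and call it.
    agent = AGENTS.get(params.get("agent_name", "base_agent"), base_agent_invoke)
    return agent(message, params)
```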
|
57 | | - 3. Make sure to add your agent `invoke()` function to the `module.py` file. |
| 59 | + 4. Update the `config.json` file with the name of the chat function. |
58 | 60 |
|
59 | | - 4. Please add a `README.md` file to describe the use and behaviour of your agent. |
| 61 | + 5. Please add a `README.md` file to describe the use and behaviour of your chatbot. |
60 | 62 |
|
61 | 63 | 4. Changes can be tested locally by running the pipeline tests using: |
62 | 64 | ```bash |
63 | 65 | pytest src/module_test.py |
64 | 66 | ``` |
65 | | - [Running and Testing Agents Locally](local.md){ .md-button } |
| 67 | + [Running and Testing Chat Functions Locally](local.md){ .md-button } |
66 | 68 |
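If you add your own pipeline tests, a minimal extra case might look like the sketch below; the `chat_module` import and the assertion are illustrative assumptions, so mirror whatever `src/module_test.py` already does:

```python
# Hypothetical additional test; adapt the import and assertions to the real module API.
from src.module import chat_module


def test_chatbot_replies_to_greeting():
    params = {"conversation_id": "12345Test", "conversation_history": []}
    response = chat_module("hi", params)
    # The exact response shape depends on the boilerplate, so assert minimally.
    assert response
```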
|
67 | 69 |
|
68 | | -5. Merge commits into any branch (except main) will trigger the `dev.yml` workflow, which will build the docker image, push it to a shared `dev` ECR repository to make the function available from the `dev` and `localhost` client app. |
| 70 | +5. Merging commits into the `dev` branch triggers the `dev.yml` workflow, which builds the Docker image, pushes it to a shared `dev` ECR repository, and deploys an AWS Lambda function that accepts HTTP requests. In order to make your new chatbot available on the `dev` environment of the Lambda Feedback platform, you will have to get in contact with the ADMINS on the platform.
| 71 | + |
| 72 | +6. You can now test the deployed chat function using your preferred request client (such as [Insomnia](https://insomnia.rest/), [Postman](https://www.postman.com/), or simply `curl` from a terminal). `dev` functions are made available at:
| 73 | + ```url |
| 74 | + https://<***>.execute-api.eu-west-2.amazonaws.com/default/chat/<function name as defined in config.json> |
| 75 | + ``` |
69 | 76 |
|
70 | | -6. In order to make your new chatbot available on the LambdaFeedback platform, you will have to get in contact with the ADMINS on the platform. |
| 77 | + !!! example "Example Request to chatFunctionBoilerplate-dev" |
| 78 | + curl --location 'https://<***>.execute-api.eu-west-2.amazonaws.com/default/chat/chatFunctionBoilerplate-dev' \ |
| 79 | + --header 'Content-Type: application/json' \ |
| 80 | + --data '{ |
| 81 | + "message": "hi", |
| 82 | + "params": { |
| 83 | + "conversation_id": "12345Test", |
| 84 | + "conversation_history": [ |
| 85 | + { |
| 86 | + "type": "user", |
| 87 | + "content": "hi" |
| 88 | + } |
| 89 | + ] |
| 90 | + } |
| 91 | + }' |
| 92 | + |
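The same request can also be scripted, for example with Python's `requests` library; the URL keeps the placeholder `<***>`, which you replace with the API Gateway ID you were given:

```python
import requests

# Placeholder URL: replace <***> with the API Gateway ID provided by the admins.
url = "https://<***>.execute-api.eu-west-2.amazonaws.com/default/chat/chatFunctionBoilerplate-dev"

payload = {
    "message": "hi",
    "params": {
        "conversation_id": "12345Test",
        "conversation_history": [{"type": "user", "content": "hi"}],
    },
}

response = requests.post(url, json=payload, timeout=30)
print(response.status_code, response.json())
```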
| 93 | +7. Once the `dev` chat function is fully tested, you can merge the code into the default branch (`main`). This will trigger the `main.yml` workflow, which deploys the `staging` and `prod` versions of your chat function. Please contact the ADMINS to obtain the URLs for these versions.
| 94 | + |
| 95 | +8. In order to make your new chat function available on any of the environments of the Lambda Feedback platform, you will have to get in contact with the ADMINS on the platform. |