
Commit 5aaeb66

Merge pull request #3 from lambda-feedback/579-adding-preview-command
579 adding preview command
2 parents e9ecd5b + 1010e39

21 files changed (+1537 additions, -667 deletions)

.gitignore
Lines changed: 3 additions & 0 deletions

@@ -130,3 +130,6 @@ dmypy.json
 
 # Pyre type checker
 .pyre/
+
+# VSCode
+.vscode

Dockerfile
Lines changed: 2 additions & 2 deletions

@@ -24,5 +24,5 @@ COPY handler.py ./app/
 COPY tests/*.py ./app/tests/
 COPY tools/*.py ./app/tools/
 
-ENV REQUEST_SCHEMA_URL https://raw.githubusercontent.com/lambda-feedback/request-response-schemas/master/request.json
-ENV RESPONSE_SCHEMA_URL https://raw.githubusercontent.com/lambda-feedback/request-response-schemas/master/responsev2.json
+# Request-response-schemas repo/branch to use for validation
+ENV SCHEMAS_URL=https://raw.githubusercontent.com/lambda-feedback/request-response-schemas/579-adding-preview-command
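
The two hard-coded schema URLs collapse into one `SCHEMAS_URL` base naming the request-response-schemas repo and branch, so the validation code (rather than the Dockerfile) decides which schema files to fetch. A minimal sketch of what that lookup could look like, assuming a hypothetical `fetch_schema` helper; the filenames `request.json` and `responsev2.json` come from the old ENV lines, but the actual `tools/validate.py` API is not shown in this diff:

```python
# Hypothetical sketch of resolving schema files from the single SCHEMAS_URL
# base; fetch_schema and its callers are illustrative, not the repo's API.
import os

import requests

SCHEMAS_URL = os.environ.get(
    "SCHEMAS_URL",
    "https://raw.githubusercontent.com/lambda-feedback/"
    "request-response-schemas/579-adding-preview-command",
)


def fetch_schema(filename: str) -> dict:
    """Download one JSON schema from the configured repo/branch."""
    res = requests.get(f"{SCHEMAS_URL}/{filename}")
    res.raise_for_status()
    return res.json()


# The filenames the old REQUEST_SCHEMA_URL/RESPONSE_SCHEMA_URL pointed at:
request_schema = fetch_schema("request.json")
response_schema = fetch_schema("responsev2.json")
```

Pinning the branch (`579-adding-preview-command` rather than `master`) presumably keeps validation in lockstep with the schema changes that add the preview command.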

README.md
Lines changed: 18 additions & 10 deletions

@@ -1,20 +1,25 @@
 # BaseEvalutionFunctionLayer
-Base docker image for evaluation functions coded in python. This layer cannot function alone, it needs to be extended in a specific way by evaluation function it supports.
 
-This layer encompases all the behaviour that is universal to all evaluation functions:
+Base docker image for evaluation functions coded in python. This layer cannot function alone, it needs to be extended in a specific way by the evaluation function it supports.
+
+This layer implements the behaviour common to all evaluation functions:
+
 - Request and response schema validation
 - Unit testing setup
 - Function commands:
-  - `eval`: calls the function in the user-defined `evaluation.py` file
+  - `eval`: calls the evaluation function in the user-defined `evaluation.py` file.
+  - `preview`: calls the preview function in the user-defined `preview.py` file.
   - `healthcheck`: runs all unittests for schema testing as well as user-defined tests in `evaluation_tests.py`.
-  - `docs`: returns the `docs.md` user-defined file.
+  - `docs`: returns the `docs.md` user-defined file.
 
-*Note: user-defined files are those provided by the evaluation function code meant to extend this layer*
+_Note: user-defined files are those provided by the evaluation function code meant to extend this layer_
 
 ## Behaviour and Usage
-Commands as passed in 'command' header from each request. By default (if not header is present), the function will run the `eval` command.
+
+Commands as passed in 'command' header from each request. By default (if no header is present), the function will run the `eval` command.
 
 ## Requirements from the superseding layer
+
 This function makes references to files and functions which don't exist yet in this layer - those need to be provided by the superseding layer. They're shown here in the way a dockerfile might be extending it.
 
 ```dockerfile
@@ -41,8 +46,8 @@ RUN chmod 755 $(find . -type d)
 CMD [ "/app/app.handler" ]
 ```
 
-
 ### Operating Container Structure
+
 Since this is only just a base layer for eval functions, the repo's file structure won't match the file structure inside the built image, which can get confusing at times. This is what the `/app/` directory (where all our data is contained) will look like for an operational function:
 
 ```
@@ -55,20 +60,25 @@ Since this is only just a base layer for eval functions, the repo's file structu
 | |____requests.py
 | |____responses.py
 | |____handling.py
+| |____commands.py
+| |____docs.py
+| |____parse.py
 |____tools
 | |______init__.py
+| |____commands.py
 | |____validate.py
 | |____docs.py
 | |____parse.py
 | |____healthcheck.py
+| |____utils.py
 |____docs.md
 |____handler.py
 |____evaluation_tests.py
 |____evaluation.py
 ```
 
-
 ## Dev Notes
+
 Can run the following command to look around the container of a running function
 
 ```bash
@@ -80,5 +90,3 @@ From a container which exposes port 8080 to the real port 9000, requests can be
 ```
 http://localhost:9000/2015-03-31/functions/function/invocations
 ```
-
-
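
Putting the README's pieces together: commands travel in the 'command' header, and the Dev Notes URL exposes a locally running container. A hedged sketch of driving the new `preview` command with the `requests` package (already in requirements.txt); the body fields here are illustrative, since the real shape is defined by `request.json` in the schemas repo:

```python
# Illustrative local invocation of the "preview" command; the body fields
# are placeholders, the authoritative shape lives in request.json.
import requests

# URL from the README's Dev Notes (Lambda runtime emulator on port 9000)
URL = "http://localhost:9000/2015-03-31/functions/function/invocations"

event = {
    "headers": {"command": "preview"},  # omit headers to default to "eval"
    "body": '{"response": "x^2", "params": {}}',
}

print(requests.post(URL, json=event).json())
```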

__init__.py
Lines changed: 1 addition & 1 deletion

@@ -1 +1 @@
-from .handler import handler
+from .handler import handler # noqa

handler.py
Lines changed: 52 additions & 188 deletions

@@ -1,213 +1,77 @@
-from .evaluation import evaluation_function
-
-from .tools import docs, parse
-from .tools.healthcheck import healthcheck
-from .tools import validate as v
-
 from evaluation_function_utils.errors import EvaluationException
-"""
-Command Handler Functions.
-"""
-
-
-def handle_unknown_command(command):
-    """
-    Function to create the response when the command is unknown.
-    ---
-    This function does not handle any of the request body so it is neither parsed or
-    validated against a schema. Instead, a simple message is returned telling the
-    requestor that the command isn't allowed.
-    """
-    return {"error": {"message": f"Unknown command '{command}'."}}
 
+from .tools import commands, docs, validate
+from .tools.parse import ParseError
+from .tools.utils import ErrorResponse, HandlerResponse, JsonType, Response
+from .tools.validate import ResBodyValidators, ValidationError
 
-def handle_healthcheck_command():
-    """
-    Function to create the response when commanded to perform a healthcheck.
-    ---
-    This function does not handle any of the request body so it is neither parsed or
-    validated against a schema.
-    """
-    return {"command": "healthcheck", "result": healthcheck()}
 
+def handle_command(event: JsonType, command: str) -> HandlerResponse:
+    """Switch case for handling different command options.
 
-def handle_eval_command(event):
-    """
-    Function to create the response when commanded to evaluate an answer.
-    ---
-    This function attempts to parse the request body, performs schema validation and
-    attempts to run the evaluation function on the given parameters.
+    Args:
+        event (JsonType): The AWS Lambda event recieved by the handler.
+        command (str): The name of the function to invoke.
 
-    If any of these fail, a message is returned and an error field is passed if more
-    information can be provided.
+    Returns:
+        HandlerResponse: The response object returned by the handler.
     """
-    body, parse_error = parse.parse_body(event)
-
-    if parse_error:
-        return {"error": parse_error}
+    # No validation of the doc commands.
+    if command in ("docs-dev", "docs"):
+        return docs.dev()
 
-    request_error = v.validate_request(body)
-
-    if request_error:
-        return {"error": request_error}
+    elif command == "docs-user":
+        return docs.user()
 
-    response = body["response"]
-    answer = body["answer"]
-    params = body.get("params", dict())
+    elif command in ("eval", "grade"):
+        response = commands.evaluate(event)
+        validator = ResBodyValidators.EVALUATION
 
-    try:
-        result = evaluation_function(response, answer, params)
+    elif command == "preview":
+        response = commands.preview(event)
+        validator = ResBodyValidators.PREVIEW
 
-    # Catch the custom EvaluationException (from evaluation_function_utils) first
-    except EvaluationException as e:
-        return {"error": e.error_dict}
-
-    except Exception as e:
-        return {
-            "error": {
-                "message":
-                "An exception was raised while executing the evaluation function.",
-                "detail": str(e) if str(e) != "" else repr(e)
-            }
-        }
-
-    # If a list of "cases" wasn't provided, we don't have any other way to get feedback
-    cases = params.get("cases", [])
-    if len(cases) == 0:
-        return {"command": "eval", "result": result}
-
-    # Determine what feedback to provide based on cases
-    matched_case, warnings = feedback_from_cases(response, params, cases)
-    if matched_case:
-        result["feedback"] = matched_case["feedback"]
-        result["matched_case"] = matched_case["id"]
-
-        # Override is_correct provided by the original block by the case 'mark'
-        if "mark" in matched_case:
-            result["is_correct"] = bool(int(matched_case["mark"]))
-
-    # Add warnings out output if any were encountered
-    if len(warnings) != 0:
-        result["warnings"] = warnings
-
-    return {"command": "eval", "result": result}
-
-
-def feedback_from_cases(response, params, cases):
-    """
-    Attempt to find the correct feedback from a list of cases.
-    Returns a matched 'case' (the full object), and optional list of warnings
-    """
+    elif command == "healthcheck":
+        response = commands.healthcheck()
+        validator = ResBodyValidators.HEALTHCHECK
+    else:
+        response = Response(
+            error=ErrorResponse(message=f"Unknown command '{command}'.")
+        )
+        validator = ResBodyValidators.EVALUATION
 
-    # A list of "cases" was provided, try matching to each of them
-    matches = []
-    warnings = []
-    eval_function_feedback = []
-    for i, case in enumerate(cases):
-        # Validate the case block has an answer and feedback
-        if 'answer' not in case:
-            warnings += [{"case": i, "message": "Missing answer field"}]
-            continue
-
-        if 'feedback' not in case:
-            warnings += [{"case": i, "message": "Missing feedback field"}]
-            continue
-
-        # Merge current evaluation params with any specified in case
-        case_params = case.get('params', {})
-
-        # Run the evaluation function based on this case's answer
-        try:
-            res = evaluation_function(response, case.get('answer'), {
-                **params,
-                **case_params
-            })
-
-        except EvaluationException as e:
-            warnings += [{"case": i, **e.error_dict}]
-            continue
-
-        except Exception as e:
-            warnings += [{
-                "case": i,
-                "message":
-                "An exception was raised while executing the evaluation function.",
-                "detail": str(e) if str(e) != "" else repr(e)
-            }]
-            continue
-
-        # Function should always return an 'is_correct' if no errors were raised
-        if not 'is_correct' in res:
-            warnings += [{
-                "case": i,
-                "message": "is_correct missing from function output"
-            }]
-            continue
-
-        # This case matches the response, add it's index to the list of matches
-        if res.get('is_correct') == True:
-            matches += [i]
-            eval_function_feedback += [res.get("feedback","")]
-
-    if len(matches) == 0:
-        return None, warnings
-
-    # Select the matched case
-    matched_case = cases[matches[0]]
-    matched_case['id'] = matches[0]
-    if not matched_case["params"].get("override_eval_feedback",False):
-        separator = "<br />" if len(eval_function_feedback[0]) > 0 else ""
-        matched_case.update({"feedback": matched_case.get("feedback","")\
-            +separator\
-            +eval_function_feedback[0]})
-
-    if len(matches) == 1:
-        # warnings += [{"case": matches[0]}]
-        return matched_case, warnings
+    validate.body(response, validator)
 
-    else:
-        s = ', '.join([str(m) for m in matches])
-        warnings += [{
-            "message":
-            f"Cases {s} were matched. Only the first one's feedback was returned"
-        }]
-        return matched_case, warnings
+    return response
 
 
-"""
-Main Handler Function
-"""
+def handler(event: JsonType, _: JsonType = {}) -> HandlerResponse:
+    """Main function invoked by AWS Lambda to handle incoming requests.
 
+    Args:
+        event (JsonType): The AWS Lambda event received by the gateway.
 
-def handler(event, context={}):
-    """
-    Main function invoked by AWS Lambda to handle incoming requests.
-    ---
-    This function invokes the handler function for that particular command and returns
-    the result. It also performs validation on the response body to make sure it follows
-    the schema set out in the request-response-schema repo.
+    Returns:
+        HandlerResponse: The response to return back to the requestor.
     """
     headers = event.get("headers", dict())
    command = headers.get("command", "eval")
 
-    if command == "healthcheck":
-        response = handle_healthcheck_command()
-
-    elif command == "eval" or command == "grade": # Remove once all funcs update to V2
-        response = handle_eval_command(event)
-
-    elif command == "docs-dev" or command == "docs":
-        return docs.send_dev_docs()
-
-    elif command == "docs-user":
-        return docs.send_user_docs()
+    try:
+        return handle_command(event, command)
 
-    else:
-        response = handle_unknown_command(command)
+    except (ParseError, ValidationError) as e:
+        error = ErrorResponse(message=e.message, detail=e.error_thrown)
 
-    response_error = v.validate_response(response)
+    except EvaluationException as e:
+        error = e.error_dict
 
-    if response_error:
-        return {"error": response_error}
+    # Catch-all for any unexpected errors.
+    except Exception as e:
+        error = ErrorResponse(
+            message="An exception was raised while "
+            "executing the evaluation function.",
+            detail=(str(e) if str(e) != "" else repr(e)),
+        )
 
-    return response
+    return Response(error=error)
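
The rewrite replaces the per-command handler functions with a single `handle_command` dispatcher and moves all error handling into one try/except in `handler`, so `ParseError`, `ValidationError`, `EvaluationException`, and anything unexpected all funnel into the same `Response(error=...)` envelope. A test-style sketch of exercising that behaviour, assuming the package is importable as `app` (suggested by the Dockerfile's `/app` layout, not fixed by this diff):

```python
# Hedged sketch of driving the reworked handler directly, e.g. from a test.
# The import path and the body shape are assumptions for illustration.
from app.handler import handler

# Routed to commands.preview() and checked against the PREVIEW validator.
print(handler({
    "headers": {"command": "preview"},
    "body": '{"response": "x^2", "params": {}}',
}))

# Unknown commands yield a Response wrapping an ErrorResponse message.
print(handler({"headers": {"command": "frobnicate"}}))

# With no "command" header at all, the handler falls back to "eval";
# a missing or malformed body surfaces as a caught ParseError -> error Response.
print(handler({}))
```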

requirements.txt
Lines changed: 6 additions & 1 deletion

@@ -1,3 +1,8 @@
 jsonschema
 requests
-evaluation-function-utils
+evaluation-function-utils
+typing_extensions
+black
+autopep8
+pytest
+python-dotenv

tests/__init__.py
Lines changed: 0 additions & 2 deletions

@@ -1,2 +0,0 @@
-from . import requests
-from . import responses
