diff --git a/docs/teacher/guides/analytics.md b/docs/teacher/guides/analytics.md index 1fdabdf52..beae6fb31 100644 --- a/docs/teacher/guides/analytics.md +++ b/docs/teacher/guides/analytics.md @@ -3,13 +3,17 @@ ## Module-level analytics The module overview tab displays cohort progress and cohort activity. Access to the overview tab is subject to teacher role privileges. +![Module Overview](images/Module_Stats.png) The Content tab displays all Sets, with cohort-level data on activity and progress within each Set. Access to the data within the Content tab is subject to teacher role privileges. +![Set Overview](images/Set_Stats.png) Within the content there is a stats tab which shows cohort-level data on question completion and statistics on time spent on each question. Response Areas are listed and show completion rates overall, and best per student. Detailed response statistics are available in the 'Explore' button on the Response area. The stats tab availability is subject to teacher role privileges. The ID of Response Areas is linked between module instances. If a Response Area maintains the response data shape then the data across multiple instances can be combined for stronger statistics. As of 10/7/25 there are no features in the UI to link data across instances, but these links will be added in future (the data saved is in the correct structure to allow this feature on all past data). +![Detailed Set Stats](images/Detailed_Set_Stats.png) + ## Analytics and question versions Analytics begin when a question is _published_. After publishing a question for the first time it becomes available to students and their usage is logged and fed back to the student and the teacher. diff --git a/docs/teacher/guides/gettingstarted.md b/docs/teacher/guides/gettingstarted.md index f3916d9c4..20ab93c76 100644 --- a/docs/teacher/guides/gettingstarted.md +++ b/docs/teacher/guides/gettingstarted.md @@ -48,7 +48,7 @@ A [student guide is here](../../student/index.md). Teachers use the 'below the l - **Final answer** is self-explanatory. - **Worked solutions** provides detailed, step-by-step solutions. -All content below the line uses Lexdown functionality. Worked solutions can be [branched](https://lambda-feedback.github.io/user-documentation/teacher/guides/good-practice/#branching), or split into [steps](https://lambda-feedback.github.io/user-documentation/teacher/guides/lexdown/#steps-in-worked-solutions). Future developments will add branching and response areas to structured tutorials. +All content below the line uses the [Lexdown](./lexdown-content-editor.md) content editor functionality. Worked solutions can be [branched](https://lambda-feedback.github.io/user-documentation/teacher/guides/good-practice/#branching), or split into [steps](https://lambda-feedback.github.io/user-documentation/teacher/guides/lexdown/#steps-in-worked-solutions). Future developments will add branching and response areas to structured tutorials. It is not necessary to include all three methods of help. If you only provide content for one tab, only that button will appear in the published student version. diff --git a/docs/teacher/guides/good-practice.md b/docs/teacher/guides/good-practice.md index 3d45c09fa..b81434f24 100644 --- a/docs/teacher/guides/good-practice.md +++ b/docs/teacher/guides/good-practice.md @@ -69,10 +69,6 @@ Usage examples: ![gif showing the branching feature](images/branching.gif) -### Audio clips - -Drag + drop an audio file into the editor, or record audio on the 'Insert v' drowpdown in Lexdown. 
- ### Input symbols Input symbols help in two ways: diff --git a/docs/teacher/guides/guidance.md b/docs/teacher/guides/guidance.md new file mode 100644 index 000000000..cd012a984 --- /dev/null +++ b/docs/teacher/guides/guidance.md @@ -0,0 +1,24 @@ +# Guidance + +Guidance provides a summary of the task, its difficulty and the estimated time. +Guidance is provided at the question level, and can be set for each question in the set. + +## Editing Guidance +To edit the question guidance, click the "Edit Guidance" button at the top of the page. + +Here you can enter four parts: +1. Guidance Blurb - The short description of the question. +2. Minimum Time Estimate - The least amount of time the task should take, in minutes. +3. Maximum Time Estimate - The longest amount of time the task should take, in minutes. +4. Skill - The difficulty of the task, rated out of three stars. + +### Obtaining Guidance Time + +We also support suggesting the time estimate. This uses machine learning, based on the worked solution and skill level, to determine an estimated time for the task. + +To use this feature, do the following: +1. Fill in as many of the question's attributes as possible, e.g. the question's text, the worked solution, the skill level, etc. The more information is filled in, the more accurate the suggested guidance time will be. + +2. Click on the "Suggest" button in the guidance configuration tab after you have filled in all the question's attributes. + +![Suggest Button](images/guidance-time-suggestion.png) \ No newline at end of file diff --git a/docs/teacher/guides/guidance/guidance-time-suggestion.md b/docs/teacher/guides/guidance/guidance-time-suggestion.md deleted file mode 100644 index ecdc8831d..000000000 --- a/docs/teacher/guides/guidance/guidance-time-suggestion.md +++ /dev/null @@ -1,9 +0,0 @@ -# Obtaining Guidance Time - -In this guide, we will walk through how to use the guidance time suggestion feature. - -1. Fill in all the question's attributes as much as possible. i.e, the question's text, the worked solution, skill level etc., The more information is filled in, the more accurate the suggested guidance time will be. - -2. Click on the "Suggest" button in the guidance configuration tab after you have filled in all the question's attributes. - -![Suggest Button](../../images/guidance-time-suggestion.png) \ No newline at end of file diff --git a/docs/teacher/guides/images/Detailed_Set_Stats.png b/docs/teacher/guides/images/Detailed_Set_Stats.png new file mode 100644 index 000000000..2ecc3b5ad Binary files /dev/null and b/docs/teacher/guides/images/Detailed_Set_Stats.png differ diff --git a/docs/teacher/guides/images/Module_Stats.png b/docs/teacher/guides/images/Module_Stats.png new file mode 100644 index 000000000..32086a01b Binary files /dev/null and b/docs/teacher/guides/images/Module_Stats.png differ diff --git a/docs/teacher/guides/images/Set_Stats.png b/docs/teacher/guides/images/Set_Stats.png new file mode 100644 index 000000000..5f8b4e621 Binary files /dev/null and b/docs/teacher/guides/images/Set_Stats.png differ diff --git a/docs/teacher/guides/lexdown.md b/docs/teacher/guides/lexdown-content-editor.md similarity index 86% rename from docs/teacher/guides/lexdown.md rename to docs/teacher/guides/lexdown-content-editor.md index d2cd5cc5c..bdd2f8530 100644 --- a/docs/teacher/guides/lexdown.md +++ b/docs/teacher/guides/lexdown-content-editor.md @@ -1,15 +1,15 @@ -# The lexdown editor +# The Lexdown content editor -The lexdown is widely used in Lambda Feedback.
It accepts: - standard [markdown](https://www.markdownguide.org/basic-syntax/) - [$\LaTeX$](https://www.overleaf.com/learn/latex/Learn_LaTeX_in_30_minutes) (delimited by $ and limited to [KaTeX](https://www.katex.org) functionality) - images (paste or drag and drop) - videos (paste a URL) -The lexdown editor is an adapted version of lexical to use markdown-first, and incorporate features including drag-and-drop images, embedded videos and autio, and switch to raw markdown. +The Lexdown editor is an adapted version of Lexical that is markdown-first and incorporates features including drag-and-drop images, embedded video and audio, and a switch to raw markdown. -## Common needs in Lexdown +## Common needs in the Lexdown editor Here's a walkthrough to create some basic content: @@ -59,6 +59,10 @@ This is the process to create the solution steps: You can add images with drag-and-drop or copy-and-paste. Images can be resized with the mouse, or click on the image to configure, or edit the raw markdown. +### Audio clips + +Drag and drop an audio file into the editor, or record audio from the 'Insert v' dropdown in Lexdown. + ### Empty lines Use 'Enter' for a new paragraph, or 'shift-Enter' for a line break. diff --git a/docs/teacher/reference/latex_functionality.md b/docs/teacher/reference/latex_functionality.md index 01b796129..95c62a7f0 100644 --- a/docs/teacher/reference/latex_functionality.md +++ b/docs/teacher/reference/latex_functionality.md @@ -8,9 +8,9 @@ All content is formatted in [_markdown_](https://www.markdownguide.org/basic-syn Special attention is required when formatting $\LaTeX$ which, although it is formatted using standard markdown (i.e. delimited by the `$` for 'inline formulas', and `$$` for an equation environment), must use a subset of $\LaTeX$ in order to compile for both outputs. This sometimes requires a compromise by the author. -## The Lexdown editor +## The Lexdown content editor -All content in Lambda Feedback is stored as markdown (ASCII content), input/edited using our Lexdown editor. The editor has a preview mode providing live interactive previews of content, including $\LaTeX$ via katex, and a raw markdown mode. +All content in Lambda Feedback is stored as markdown (ASCII content), input/edited using our [Lexdown editor](../guides/lexdown-content-editor.md). The editor has a preview mode providing live interactive previews of content, including $\LaTeX$ via katex, and a raw markdown mode. ## Web requirements: katex diff --git a/docs/teacher/reference/response_area_components/Boolean.md b/docs/teacher/reference/response_area_components/Boolean.md index e69de29bb..bb4b3aa66 100644 --- a/docs/teacher/reference/response_area_components/Boolean.md +++ b/docs/teacher/reference/response_area_components/Boolean.md @@ -0,0 +1,7 @@ +# Boolean + +This response area allows students to select True or False. + +## Evaluation Function Options + +### [isExactEqual](https://lambda-feedback.github.io/user-documentation/user_eval_function_docs/isExactEqual/) diff --git a/docs/teacher/reference/response_area_components/Code.md b/docs/teacher/reference/response_area_components/Code.md new file mode 100644 index 000000000..e1c3210ac --- /dev/null +++ b/docs/teacher/reference/response_area_components/Code.md @@ -0,0 +1,10 @@ +# Code + +This response area allows students to write code in a specified programming language.
+ +## Evaluation Function Options + +### [chatGPT (experimental)](https://github.com/lambda-feedback/chatGPT/blob/main/app/docs/user.md) + + +### [compareConstructs (experimental)](https://github.com/lambda-feedback/compareConstructs/blob/main/docs/user.md) \ No newline at end of file diff --git a/docs/teacher/reference/response_area_components/Essay.md b/docs/teacher/reference/response_area_components/Essay.md new file mode 100644 index 000000000..d1e80eec1 --- /dev/null +++ b/docs/teacher/reference/response_area_components/Essay.md @@ -0,0 +1,10 @@ +# Essay + +Similar to [Text](Text.md), this response area allows students to enter any text input, whether that be prose, math, code or any other type of input, but in a longer format. + +## Evaluation Function Options + +As essays support any input, most evaluation functions can be used. +However, there are some text-specific evaluation functions, such as shortTextAnswer and chatGPT. + +### [chatGPT (experimental)](https://github.com/lambda-feedback/chatGPT/blob/main/app/docs/user.md) \ No newline at end of file diff --git a/docs/teacher/reference/response_area_components/Expression.md b/docs/teacher/reference/response_area_components/Expression.md index d2f5b89d4..e150158d4 100644 --- a/docs/teacher/reference/response_area_components/Expression.md +++ b/docs/teacher/reference/response_area_components/Expression.md @@ -4,30 +4,12 @@ This response area is very similar to [Text](Text.md), differing in that it can ## Evaluation Function Options -### `isSimilar` +### [isSimilar](https://lambda-feedback.github.io/user-documentation/user_eval_function_docs/isSimilar/) -Calculates the difference between the teacher answer (ans) and the student response (res); compares this to an allowable difference comprising an absolute tolerance (atol) and a relative tolerance (rtol). +### [symbolicEqual](https://lambda-feedback.github.io/user-documentation/user_eval_function_docs/symbolicEqual/) -### `symbolicEqual` +### [compareExpressions](https://lambda-feedback.github.io/user-documentation/user_eval_function_docs/compareExpressions/) -Compares two symbolic expressions for mathematical equivalence, using SymPy. See [SymPy](https://www.sympy.org/en/index.html.md-button) for further information. - -## compareExpressions **Input Symbols** -This is a powerful feature for defining a dictionary of accepted symbols. For each symbol, you define: - -* **Symbol:** The LaTeX-rendered symbol (e.g., `$f(x)$`). -* **Code:** The machine-readable variable name (e.g., `fx`). This is what your students will type and what the evaluation function sees. -* **Alternatives:** A list of other codes you want to accept for the same symbol (e.g., `f_x`, `f(x)`, `f`). This allows you to anticipate different ways students might type the same thing. -* **Visibility:** A `TRUE`/`FALSE` toggle. If "Display input symbols" is enabled in the Input tab, this setting determines whether a specific symbol is shown to the student. This allows you to show students common symbols while still accepting less common or alternative ones in the background. - -![example](screenshots/input_symbols.png) -![example](screenshots/input_symbols_preview.png) - -Tolerances can also be added. These will apply to the numerical parts of the answer (e.g. the $10$ in $10x$). - -This is done using the `atol` and `rtol` fields under the Evaluation Function Parameters section.
## Component Parameters diff --git a/docs/teacher/reference/response_area_components/Likert.md b/docs/teacher/reference/response_area_components/Likert.md new file mode 100644 index 000000000..6ce0419c9 --- /dev/null +++ b/docs/teacher/reference/response_area_components/Likert.md @@ -0,0 +1,7 @@ +# Likert + +Allows students to fill in a five-point Likert scale (Strongly Disagree to Strongly Agree) for any number of statements. + +## Evaluation Function Options + +Currently, there are no supported evaluation functions for this response area. \ No newline at end of file diff --git a/docs/teacher/reference/response_area_components/Matrix.md b/docs/teacher/reference/response_area_components/Matrix.md index 76051d987..2137f795a 100644 --- a/docs/teacher/reference/response_area_components/Matrix.md +++ b/docs/teacher/reference/response_area_components/Matrix.md @@ -4,13 +4,9 @@ Matrix response area. Will populate the component with a grid of text input fiel ## Evaluation Function Options -### ArrayEqual -Evaluation function checks if the supplied response and answer arrays are within the optionally supplied tolerances. This is based on the [numpy.allclose](https://numpy.org/doc/stable/reference/generated/numpy.allclose.html) function. Numpy is a dependancy for this function, but it means that arrays of any shape (regular) can be compared efficiently. - - -### ArraySymbolicEqual -Very similar to the SymbolicEqual grading function, but grading any list of expressions instead. This algorithm can take any level of nesting for "response" and "answer" fields, as grading is done recursively (as long as both shapes are identical). Symbolic grading is done using the SymPy library. See [SymPy](https://www.sympy.org/en/index.html.md-button) for further information. +### [ArrayEqual](https://lambda-feedback.github.io/user-documentation/user_eval_function_docs/arrayEqual/) +### [ArraySymbolicEqual](https://lambda-feedback.github.io/user-documentation/user_eval_function_docs/arraySymbolicEqual/) ## Component Parameters ### `rows and cols` (required) diff --git a/docs/teacher/reference/response_area_components/Milkdown.md b/docs/teacher/reference/response_area_components/Milkdown.md new file mode 100644 index 000000000..5f2520e0c --- /dev/null +++ b/docs/teacher/reference/response_area_components/Milkdown.md @@ -0,0 +1,7 @@ +# Milkdown + +Similar to [Essay](Essay.md), this response area allows students to enter any text input, but with standard Markdown processing, including headers, bullet points, numbered lists, tables, etc. + +## Evaluation Function Options + +Currently, there are no supported evaluation functions for this response area. \ No newline at end of file diff --git a/docs/teacher/reference/response_area_components/MultipleChoice.md b/docs/teacher/reference/response_area_components/MultipleChoice.md index 49588368d..94f57da17 100644 --- a/docs/teacher/reference/response_area_components/MultipleChoice.md +++ b/docs/teacher/reference/response_area_components/MultipleChoice.md @@ -4,8 +4,7 @@ General multiple choice response area. Features multiple options for single answ ## Evaluation Function Options -### ArrayEqual -Evaluation function checks if the supplied response and answer arrays are within the optionally supplied tolerances. This is based on the [numpy.allclose](https://numpy.org/doc/stable/reference/generated/numpy.allclose.html) function. Numpy is a dependancy for this function, but it means that arrays of any shape (regular) can be compared efficiently.
+### [ArrayEqual](https://lambda-feedback.github.io/user-documentation/user_eval_function_docs/arrayEqual/) ## Parameters ### `options` (required) diff --git a/docs/teacher/reference/response_area_components/Number.md b/docs/teacher/reference/response_area_components/Number.md index b88f78ca8..cc48d9820 100644 --- a/docs/teacher/reference/response_area_components/Number.md +++ b/docs/teacher/reference/response_area_components/Number.md @@ -4,11 +4,9 @@ Very similar to the [Text](Text.md) response area, except the user response is p ## Evaluation Function Options -### `isSimilar` -Calculates the difference between the teacher answer (ans) and the student response (res); compares this to an allowable difference comprising an absolute tolerance (atol) and a relative tolerance (rtol). +### [isSimilar](https://lambda-feedback.github.io/user-documentation/user_eval_function_docs/isSimilar/) -### `isExactEqual` -A strict comparison in Python using '=='. This function is a basic utility but often not the function you really want to use because it is quite brittle. +### [isExactEqual](https://lambda-feedback.github.io/user-documentation/user_eval_function_docs/isExactEqual/) ## Setting The Answer diff --git a/docs/teacher/reference/response_area_components/NumericUnits.md b/docs/teacher/reference/response_area_components/NumericUnits.md index 4fc0d796a..24c3e9a9b 100644 --- a/docs/teacher/reference/response_area_components/NumericUnits.md +++ b/docs/teacher/reference/response_area_components/NumericUnits.md @@ -4,6 +4,10 @@ Provides two input fields with `Number` and `Units` placeholder texts. This area **Note:** this area will display how the user's response was interpred using the `interp_string` field provided in the feedback object returned by that function (if it exists). +## Evaluation Function Options + +### [compareExpressions](https://lambda-feedback.github.io/user-documentation/user_eval_function_docs/compareExpressions/) + ## Component Parameters ### `pre_response_text` & `post_response_text` (optional) Text block to be displayed to the left and right of the input field respectively. Markdown and LaTeX are allowed following the usual syntax. diff --git a/docs/teacher/reference/response_area_components/Table.md b/docs/teacher/reference/response_area_components/Table.md index 78ed2c269..a091a7ad2 100644 --- a/docs/teacher/reference/response_area_components/Table.md +++ b/docs/teacher/reference/response_area_components/Table.md @@ -5,13 +5,9 @@ Table response area. Will populate the component with a grid of text input field ## Evaluation Function Options -### ArrayEqual -Evaluation function checks if the supplied response and answer arrays are within the optionally supplied tolerances. This is based on the [numpy.allclose](https://numpy.org/doc/stable/reference/generated/numpy.allclose.html) function. Numpy is a dependancy for this function, but it means that arrays of any shape (regular) can be compared efficiently. - - -### ArraySymbolicEqual -Very similar to the SymbolicEqual grading function, but grading any list of expressions instead. This algorithm can take any level of nesting for "response" and "answer" fields, as grading is done recursively (as long as both shapes are identical). Symbolic grading is done using the SymPy library. See [SymPy](https://www.sympy.org/en/index.html.md-button) for further information. 
+### [ArrayEqual](https://lambda-feedback.github.io/user-documentation/user_eval_function_docs/arrayEqual/) +### [ArraySymbolicEqual](https://lambda-feedback.github.io/user-documentation/user_eval_function_docs/arraySymbolicEqual/) ## Component Parameters diff --git a/docs/teacher/reference/response_area_components/Text.md b/docs/teacher/reference/response_area_components/Text.md index eba104cbb..1801c31f5 100644 --- a/docs/teacher/reference/response_area_components/Text.md +++ b/docs/teacher/reference/response_area_components/Text.md @@ -1 +1,12 @@ # Text + +Allows students to enter any text input, whether that be prose, math, code or any other type of input. + +## Evaluation Function Options + +As text supports any input, any evaluation function can be used. +However, there are some text-specific evaluation functions, such as shortTextAnswer and chatGPT. + +### [chatGPT (experimental)](https://github.com/lambda-feedback/chatGPT/blob/main/app/docs/user.md) + +### [shortTextAnswer (experimental)](https://github.com/lambda-feedback/shortTextAnswer/blob/main/app/docs/user.md) diff --git a/docs/teacher/reference/response_area_components/index.md b/docs/teacher/reference/response_area_components/index.md index f6496224b..825f4aeb7 100644 --- a/docs/teacher/reference/response_area_components/index.md +++ b/docs/teacher/reference/response_area_components/index.md @@ -54,7 +54,8 @@ This opens the **Response Area Panel**, separated into four tabs: - Text (for short text answers) - Table - Multiple-choice - - Expression (gives a preview for the typed symbolic expression) + - MATH_SINGLE_LINE + - MATH_MULTI_LINE - Numeric units (separate fields for a number and its units) - Code - Essay (for long text answers) diff --git a/mkdocs.yml b/mkdocs.yml index 949ab4959..6ba531023 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -33,15 +33,15 @@ nav: - Teachers: - "teacher/index.md" - Guides: - - Getting Started: "teacher/guides/gettingstarted.md" + - Getting started: "teacher/guides/gettingstarted.md" - Editing questions: "teacher/guides/content-sets-questions.md" + - Lexdown content editor: "teacher/guides/lexdown-content-editor.md" - Exporting and importing questions: "teacher/guides/question-export-import.md" - - Lexdown: "teacher/guides/lexdown.md" - Analytics: "teacher/guides/analytics.md" - Good practice: "teacher/guides/good-practice.md" + - Guidance: "teacher/guides/guidance.md" - FAQ: "teacher/guides/faq.md" - - Guidance: - - Guidance Time Suggestion: "teacher/guides/guidance/guidance-time-suggestion.md" + + - Reference: - Content management: "teacher/reference/content_management.md" @@ -50,13 +50,17 @@ - Response Areas: - "teacher/reference/response_area_components/index.md" - Text: "teacher/reference/response_area_components/Text.md" + - Essay: "teacher/reference/response_area_components/Essay.md" + - Milkdown: "teacher/reference/response_area_components/Milkdown.md" - Number: "teacher/reference/response_area_components/Number.md" - Boolean: "teacher/reference/response_area_components/Boolean.md" - NumericUnits: "teacher/reference/response_area_components/NumericUnits.md" - Expression: "teacher/reference/response_area_components/Expression.md" - MultipleChoice: "teacher/reference/response_area_components/MultipleChoice.md" + - Likert: "teacher/reference/response_area_components/Likert.md" - Matrix: "teacher/reference/response_area_components/Matrix.md" - Table: "teacher/reference/response_area_components/Table.md" + - Code: "teacher/reference/response_area_components/Code.md" - Evaluation
Functions: - "teacher/reference/evaluation_functions/index.md"