
Help with running Examples #85

@GanbaruTobi

Description


When I run the examples locally, most of them fail because the LLM does not stick to the expected output format. I am not even sure a format is requested in the code...
So my problem is that I don't know how to debug why that happens. What would be the way to display the actual prompts being generated?
I wonder whether my doc-annotations are even being parsed.
The only way I was able to get it working was by giving the builder an explicit instruction like:

```rust
const QA_INSTRUCTION: &str = "\
You are a strict data-processing engine, not a conversational assistant. \
You must output exactly the requested fields with zero conversational filler, \
no pleasantries, and no markdown wrapping. Do not say 'Here is the answer' \
or 'How can I assist you'. Just provide the raw text.\
";

#[builder(default = Predict::<QA>::builder().instruction(QA_INSTRUCTION).build())]
```
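Independent of the library, one way I check whether an instruction like the above actually helped is a tiny local validator that flags conversational filler in raw model output. This is only a sketch; `output_matches_format` and the field names are hypothetical helpers for debugging, not part of any crate:

```rust
/// Return true if `raw` consists of exactly the expected `field: value`
/// lines, with no extra conversational lines (hypothetical debug helper).
fn output_matches_format(raw: &str, expected_fields: &[&str]) -> bool {
    let lines: Vec<&str> = raw.lines().filter(|l| !l.trim().is_empty()).collect();
    if lines.len() != expected_fields.len() {
        return false;
    }
    lines.iter().zip(expected_fields).all(|(line, field)| {
        // Each non-empty line must start with its expected field label.
        line.trim_start()
            .to_lowercase()
            .starts_with(&format!("{}:", field.to_lowercase()))
    })
}

fn main() {
    // Output that sticks to the requested format passes...
    assert!(output_matches_format("answer: Paris", &["answer"]));
    // ...while a "Here is the answer"-style preamble is caught.
    assert!(!output_matches_format(
        "Sure! Here is the answer:\nanswer: Paris",
        &["answer"]
    ));
}
```

Running this against captured raw completions made it obvious which examples were failing on filler text versus genuinely wrong field names.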

Thanks for any tips!
