
Query #209

@malikfahadsarwar

Description

Thanks for sharing the code.

I have a question, if you can answer it: during training we do not feed the end-token embedding as input, since the end token has to be predicted by the model in the output.

During validation, though, the end token might be produced anywhere, and the position can be different for each sequence within the batch. So say the sequence produced by the LSTM is 5 tokens, i.e. 5 probability distributions where the last one corresponds to the end token, but the ground truth has 9 tokens. How are we supposed to compute the loss in that case? Please answer my query if you can.
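If it helps make the question concrete, here is a minimal sketch of the masked-loss approach I am wondering about (PyTorch assumed; `PAD_IDX` and all shapes are made up for illustration, not taken from your code): pad every target sequence to a common length and let `ignore_index` drop the padded positions from the loss.

```python
import torch
import torch.nn.functional as F

PAD_IDX = 0  # hypothetical padding id, not from the original code

batch, t_max, vocab = 2, 9, 100
logits = torch.randn(batch, t_max, vocab)          # decoder outputs, (B, T, V)
targets = torch.randint(1, vocab, (batch, t_max))  # ground-truth token ids, (B, T)
targets[0, 5:] = PAD_IDX                           # first sequence ends at step 5; rest is padding

# ignore_index removes padded positions from both the loss sum and its normalization
loss = F.cross_entropy(
    logits.reshape(-1, vocab),  # (B*T, V)
    targets.reshape(-1),        # (B*T,)
    ignore_index=PAD_IDX,
)
print(loss)
```

Is this the intended way to handle the length mismatch, or should validation loss be computed with teacher forcing over the full ground-truth length, as in training?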
