
Regarding the model: The first encoder is used to calculate the relationship-aware embedding of the head entity h, while the input of the second encoder BERTt only contains the textual description of entity t. #38

Description

@whistle9

Hello, I would like to know: does the model use two encoders, one to obtain the relationship-aware embedding of the head entity and one to obtain the tail-entity embedding, with the two embeddings then normalized before the score is computed? Could this model instead be implemented with a single encoder?
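For reference, the two options can be sketched in PyTorch. This is a minimal illustration, not the repo's actual code: the `TinyEncoder` stand-in, the class names, and the cosine scoring are all hypothetical simplifications of a BERT-based bi-encoder. Sharing one encoder means passing the same module instance to both inputs (a Siamese setup), so only one set of weights is trained:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEncoder(nn.Module):
    """Stand-in for a BERT-style text encoder (hypothetical;
    a real model would use e.g. a pretrained transformer)."""
    def __init__(self, vocab_size=100, dim=16):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, token_ids):
        # Mean-pool token embeddings as a crude sequence embedding.
        return self.proj(self.emb(token_ids).mean(dim=1))

class BiEncoder(nn.Module):
    def __init__(self, shared=False):
        super().__init__()
        self.enc_hr = TinyEncoder()  # encodes (head, relation) text
        # shared=True reuses the SAME module for the tail description,
        # so a single encoder serves both inputs.
        self.enc_t = self.enc_hr if shared else TinyEncoder()

    def score(self, hr_ids, t_ids):
        # L2-normalize both embeddings, then score by cosine similarity.
        e_hr = F.normalize(self.enc_hr(hr_ids), dim=-1)
        e_t = F.normalize(self.enc_t(t_ids), dim=-1)
        return (e_hr * e_t).sum(dim=-1)

hr_ids = torch.randint(0, 100, (2, 5))   # batch of 2 (head, relation) texts
t_ids = torch.randint(0, 100, (2, 7))    # batch of 2 tail descriptions
model = BiEncoder(shared=True)
scores = model.score(hr_ids, t_ids)
print(scores.shape)  # torch.Size([2])
```

With `shared=True` the parameter count is halved, at the cost of asking one encoder to handle two differently-formatted inputs; whether that hurts accuracy is an empirical question for the specific model.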
