Clarification on Inductive Setting in Graph Neural Networks #59

@SangwookBaek

Description

Dear THUDM,

Firstly, I would like to express my sincere gratitude for your efforts. Your work has been immensely helpful.

I have a question about the implementation of the inductive setting for graph neural networks, as described in your paper. As I currently understand it, the dataset is loaded for the inductive setting as shown below. According to Section 4.1 of your paper, the inductive setting follows the GraphSAGE methodology.

import numpy as np
import dgl
# `g` and `scale_feats` (feature standardization) come from the repo's data-loading code.

train_mask = g.ndata["train_mask"]
feat = g.ndata["feat"]
feat = scale_feats(feat)
g.ndata["feat"] = feat

# Normalize self-loops.
g = g.remove_self_loop()
g = g.add_self_loop()

# Induce the training subgraph from the train-mask nodes.
train_nid = np.nonzero(train_mask.data.numpy())[0].astype(np.int64)
train_g = dgl.node_subgraph(g, train_nid)
train_dataloader = [train_g]
valid_dataloader = [g]           # full graph for validation
test_dataloader = valid_dataloader
eval_train_dataloader = [train_g]

My question is: in this inductive setting, are we extracting the subgraph induced by the nodes in `train_mask` and passing only that subgraph through the GAE (Graph AutoEncoder) structure? Is the model then evaluated on the entire graph, restricting the metrics to the `val_mask` or `test_mask` nodes?
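To make my understanding concrete, here is a minimal NumPy-only sketch (a toy adjacency matrix standing in for the DGL graph; the split logic mirrors the `dgl.node_subgraph` call above):

```python
import numpy as np

# Toy undirected graph on 5 nodes, as an adjacency matrix.
adj = np.array([
    [0, 1, 0, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [1, 0, 0, 1, 0],
])
train_mask = np.array([True, True, True, False, False])

# Transductive: the model sees the FULL adjacency during training;
# only the loss/metrics are restricted to the masked nodes.
transductive_train_adj = adj

# Inductive (GraphSAGE-style): training uses only the subgraph
# induced by the train nodes, like dgl.node_subgraph(g, train_nid).
train_nid = np.nonzero(train_mask)[0]
inductive_train_adj = adj[np.ix_(train_nid, train_nid)]

print(transductive_train_adj.shape)  # (5, 5)
print(inductive_train_adj.shape)     # (3, 3)
# In the inductive case, all edges touching the held-out
# nodes 3 and 4 are invisible at training time.
```

Is this the intended distinction, i.e. whether edges incident to validation/test nodes are visible during training?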

Additionally, I would greatly appreciate if you could elaborate on the differences between the inductive and transductive settings in this context.

Thank you for your time and assistance in this matter.
