Hi, your work is interesting and impactful. I have a question regarding your method of optimizing graph topology by adjusting edge weights. While this approach works cleanly for GCNs, where a zero-weight edge contributes nothing to the normalized weighted sum, other GNN architectures like GraphSAGE or GAT might still be influenced by edges with edge_weight=0 during training: GraphSAGE's neighbor sampling and mean aggregation typically ignore edge weights entirely, and GAT recomputes attention coefficients from node features, so a zero-weight edge can still be sampled or attended to. Could you share how your method handles such cases? I would greatly appreciate any insights or references you could provide. Thank you for your time!
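To make the concern concrete, here is a minimal sketch (plain Python, a toy graph of my own construction, not your implementation) contrasting GCN-style weighted aggregation with the unweighted mean aggregation used by GraphSAGE's default aggregator:

```python
# Hypothetical 3-node graph: node 0 has neighbors 1 and 2,
# and edge (0, 2) has been "deleted" by setting its weight to 0.
x = {0: 1.0, 1: 2.0, 2: 10.0}           # scalar node features
neighbors = [1, 2]                       # neighbors of node 0
w = {(0, 1): 1.0, (0, 2): 0.0}           # optimized edge weights

# GCN-style aggregation: weighted sum normalized by weighted degree,
# so the zero-weight edge contributes nothing.
deg = sum(w[(0, j)] for j in neighbors)
gcn_agg = sum(w[(0, j)] * x[j] for j in neighbors) / deg   # -> 2.0

# Unweighted mean aggregation (as in GraphSAGE's default aggregator):
# the zero-weight neighbor still enters the mean.
sage_agg = sum(x[j] for j in neighbors) / len(neighbors)   # -> 6.0
```

If I read the PyTorch Geometric source correctly, `SAGEConv.forward` does not even accept an `edge_weight` argument, and `GATConv` derives attention from node features, which is what prompted my question.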