attention = torch.where(adj > 0, e, zero_vec)   # mask out pairs with no edge
attention = F.softmax(attention, dim=1)         # normalize weights over each node's neighbors
attention = F.dropout(attention, self.dropout, training=self.training)  # regularize attention weights
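These three statements implement the masked attention normalization used in GAT-style layers: positions without an edge are overwritten with a large negative value (`zero_vec`), so the softmax assigns them near-zero weight, and dropout then regularizes the remaining weights. A minimal self-contained sketch, where the node count, the example adjacency matrix, and the `-9e15` fill value are assumptions chosen for illustration:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

N = 4                                  # number of nodes (assumed for this sketch)
e = torch.randn(N, N)                  # raw attention logits, one per node pair
adj = torch.tensor([[1, 1, 0, 0],
                    [1, 1, 1, 0],
                    [0, 1, 1, 1],
                    [0, 0, 1, 1]])     # adjacency: 1 where an edge exists

zero_vec = -9e15 * torch.ones_like(e)  # large negative fill for non-edges
attention = torch.where(adj > 0, e, zero_vec)  # keep logits only on edges
attention = F.softmax(attention, dim=1)        # each row sums to 1 over neighbors
attention = F.dropout(attention, p=0.5, training=True)  # zero out some weights

# Masked (non-edge) positions stay zero; dropout only removes more entries.
print(attention)
```

Because `exp(-9e15)` underflows to zero, non-neighbors receive exactly zero weight after the softmax, and dropout cannot reintroduce mass at those positions.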