Deep neural networks are known to be vulnerable to adversarial examples, i.e., images that are maliciously perturbed to fool the model.
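As an illustration of how such a perturbation can be crafted, the sketch below implements the fast gradient sign method (FGSM), one widely known attack, on a toy linear "model" standing in for a deep network. The weights, input, and epsilon value are all made up for the example; the source does not specify any particular attack.

```python
import numpy as np

# Toy stand-in for a trained network: a fixed logistic-regression
# classifier. All values here are hypothetical, chosen only to
# demonstrate the FGSM perturbation mechanics.
rng = np.random.default_rng(0)
w = rng.normal(size=100)      # "model" weights (assumed)
x = rng.normal(size=100)      # a clean input image, flattened
y = 1.0                       # true label in {0, 1}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x)

# Gradient of the cross-entropy loss with respect to the INPUT
# (not the weights): for p = sigmoid(w @ x) and label y,
# d/dx [-y log p - (1 - y) log(1 - p)] = (p - y) * w
def input_gradient(x, y):
    return (predict(x) - y) * w

# FGSM: take a small step in the direction of the sign of the input
# gradient, so each pixel changes by at most eps (imperceptibly).
eps = 0.25
x_adv = x + eps * np.sign(input_gradient(x, y))

clean_loss = -np.log(predict(x))
adv_loss = -np.log(predict(x_adv))
print(clean_loss, adv_loss)  # the perturbation increases the loss
```

Because the sign step moves the input against the model's confidence in the true class, the loss on `x_adv` is strictly larger than on the clean input, even though each coordinate moved by only `eps`.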