Strengthening Deep Neural Networks: Making AI Less Susceptible to Adversarial Trickery
As deep neural networks (DNNs) become increasingly common in real-world applications, the potential to deliberately "fool" them with data that wouldn't trick a human presents a new attack vector. This practical book examines real-world scenarios where DNNs, the algorithms intrinsic to much of AI, are used daily to process image, audio, and video data.

Author Katy Warr considers attack motivations, the risks posed by adversarial input, and methods for increasing the robustness of AI to these attacks. If you're a data scientist developing DNN algorithms, a security architect interested in making AI systems more resilient to attack, or someone fascinated by the differences between artificial and biological perception, this book is for you.

• Delve into DNNs and discover how they can be tricked by adversarial input
• Investigate methods used to generate adversarial input capable of fooling DNNs (a minimal sketch of one such method follows this list)
• Explore real-world scenarios and model the adversarial threat
• Evaluate neural network robustness and learn methods for increasing the resilience of AI systems to adversarial data
• Examine some ways in which AI might become better at mimicking human perception in years to come
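To give a flavor of the kind of attack the book investigates, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the simplest techniques for generating adversarial input. This is an illustrative example under stated assumptions, not code from the book: the `model`, `label`, and `epsilon` values are placeholders the reader would supply.

```python
# A minimal FGSM sketch: nudge each input pixel in the direction that
# increases the model's loss, so a classifier is more likely to be fooled.
# Assumes a PyTorch classifier that maps images in [0, 1] to class logits.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.03):
    """Return an adversarially perturbed copy of x.

    model:   a torch.nn.Module returning class logits
    x:       input batch, shape (N, C, H, W), values in [0, 1]
    label:   true class indices, shape (N,)
    epsilon: maximum per-pixel perturbation (illustrative default)
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Step along the sign of the gradient: the direction that most
    # increases the loss, bounded per pixel by epsilon.
    x_adv = x + epsilon * x.grad.sign()
    # Clamp so the perturbed image remains a valid image.
    return x_adv.clamp(0.0, 1.0).detach()
```

The perturbation is typically small enough that a human still sees the original image, while the model's prediction changes, which is precisely the gap between artificial and biological perception the book explores.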