Please note: This PhD seminar will take place in DC 2310 and online.
Ehsan Ganjidoost, PhD candidate
David R. Cheriton School of Computer Science
Supervisor: Professor Jeff Orchard
Adversarial examples are meticulously crafted inputs that exploit vulnerabilities in machine learning models, leading to erroneous predictions. They are generated by adding perturbations, often imperceptible to the human eye, to legitimate samples, causing the model to misclassify or produce incorrect results. This phenomenon not only underscores the fragility of state-of-the-art deep learning architectures in the face of seemingly minor modifications, but also poses significant security and reliability concerns for applications that rely on machine learning. Understanding and mitigating adversarial examples thus remains a critical area of research, aimed at enhancing the robustness and trustworthiness of machine learning models in real-world deployments.
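As a concrete illustration (not drawn from the talk itself), the sketch below generates such a perturbation with the classic fast gradient sign method (FGSM) against a toy logistic-regression model; the model, weights, and epsilon budget are all assumptions made purely for the example.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def fgsm_perturb(x, y, w, b, eps=0.25):
        # For a logistic-regression model p = sigmoid(w @ x + b), the gradient
        # of the cross-entropy loss w.r.t. the input x is (p - y) * w.
        p = sigmoid(w @ x + b)
        grad_x = (p - y) * w
        # Step in the sign of that gradient to *increase* the loss (FGSM).
        return x + eps * np.sign(grad_x)

    rng = np.random.default_rng(0)
    w, b = rng.normal(size=8), 0.0       # toy model parameters (assumed)
    x, y = rng.normal(size=8), 1         # a "legitimate" sample with label 1
    x_adv = fgsm_perturb(x, y, w, b)
    print("clean confidence:      ", sigmoid(w @ x + b))
    print("adversarial confidence:", sigmoid(w @ x_adv + b))  # strictly lower

Each coordinate of the input moves by at most eps, yet the model's confidence in the true label drops, which is exactly the "small perturbation, wrong output" failure mode described above.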
In our research, we use a Predictive Coding Network (PCnet) equipped with a local learning algorithm in which each level of the hierarchy predicts the activity of the layer immediately below it. Inspectors situated between the layers compare each prediction with the lower layer's values and, based on the mismatch, signal how much revision the higher layer should apply. These dynamics follow a system of differential equations, and the ODEs settle into an equilibrium at which the total prediction error across the network reaches a local minimum.
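To make these dynamics concrete, here is a minimal sketch of a linear predictive-coding relaxation; the layer sizes, linear top-down predictions, and explicit Euler step are illustrative assumptions, not the exact equations of the PCnet presented in the talk.

    import numpy as np

    rng = np.random.default_rng(1)
    sizes = [4, 6, 3]                        # layer widths, input to top (assumed)
    # W[l] maps layer l+1 down to a prediction of layer l
    W = [rng.normal(scale=0.3, size=(sizes[l], sizes[l + 1]))
         for l in range(len(sizes) - 1)]
    x = [rng.normal(size=s) for s in sizes]  # value nodes; x[0] stays clamped to the input

    dt = 0.1                                 # Euler step for the ODE relaxation
    for step in range(201):
        # "Inspectors": per-layer prediction errors eps[l] = x[l] - W[l] @ x[l+1]
        eps = [x[l] - W[l] @ x[l + 1] for l in range(len(W))]
        # Move every non-input layer down the gradient of the total squared error
        for l in range(1, len(sizes)):
            revision = W[l - 1].T @ eps[l - 1]   # revision signal from the layer below
            if l < len(sizes) - 1:
                revision = revision - eps[l]     # pull from its own prediction error
            x[l] = x[l] + dt * revision
        if step % 100 == 0:
            print(step, sum(float(e @ e) for e in eps))  # total error shrinks

Because each update uses only the errors adjacent to a layer, learning and inference remain local, and the printed energy decreases toward the local minimum described above.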
Unlike in feedforward networks (FFNs), given the PCnet's structure and learning process described above, each layer can at any moment revise its prediction of the layer immediately below while simultaneously being influenced by that layer. In other words, pairs of layers collaborate to construct the network's state. Thanks to this property, a simple PCnet classifier outperforms comparable FFN models on adversarial examples, and a PCnet can also help a regular FFN avoid failing on such adversaries.
TL;DR: Adversarial examples exploit vulnerabilities in machine learning models, leading to inaccurate predictions, but the Predictive Coding Network (PCnet) offers a reliable defence. By letting pairs of layers collaborate to construct the network's state, a PCnet outperforms regular feedforward networks (FFNs) on adversarial examples and can help regular FFNs avoid failing on such adversaries. This approach enhances the robustness and trustworthiness of machine learning models in real-world applications.