
A Tale of Evil Twins: Adversarial Inputs Versus Poisoned Models

General Description of the Topic

Despite its successes across a spectrum of domains, deep learning has proven inherently susceptible to certain manipulations. One such manipulation is adversarial inputs: maliciously crafted samples designed to deceive a target Deep Neural Network (DNN) model (Pang et al., 2020). Another is poisoned models: adversarially forged DNNs that misbehave on certain pre-defined inputs while behaving normally otherwise. Prior work has intensively studied the two attack vectors, but largely in parallel and in isolation from one another, so their fundamental connections remain insufficiently understood. One of the main challenges is therefore to determine the dynamic interactions between the two attack vectors and the strategies that could counter attacks exploiting both.
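
To make the contrast between the two vectors concrete, the sketch below (PyTorch assumed, with a hypothetical pre-trained classifier, data loader, and trigger pattern) shows that an adversarial input perturbs the sample while the model stays fixed, whereas a poisoned model perturbs the weights while the inputs stay fixed. It is an illustrative sketch, not the attack construction used in the paper.

```python
import torch
import torch.nn.functional as F

def craft_adversarial_input(model, x, y_true, eps=0.03):
    """Adversarial-input vector: the model is fixed; the attacker perturbs the
    sample (a single FGSM step here) so that it is misclassified."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y_true)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

def craft_poisoned_model(model, loader, trigger, y_target, lr=1e-4, steps=100):
    """Poisoned-model vector: the inputs are fixed; the attacker perturbs the
    model by fine-tuning it so that any input stamped with a pre-defined
    trigger is classified as y_target, while clean behavior is preserved."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for step, (x, y) in enumerate(loader):
        if step >= steps:
            break
        x_trig = (x + trigger).clamp(0, 1)        # stamp the trigger pattern
        y_trig = torch.full_like(y, y_target)     # attacker-chosen label
        loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_trig), y_trig)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```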

Related Work

With the growing use of DNNs in security-sensitive domains, these models have become frequent targets of malicious exploitation. Current research on adversarial inputs falls into two lines of work. The first focuses on developing new attacks against DNNs. Such attacks are either untargeted, where the adversary merely seeks to force a misclassification, or targeted, where the adversary tries to force inputs to be misclassified into specific classes. The second line of work seeks to improve DNN resilience against such attacks through dedicated training procedures such as adversarial training. Nevertheless, these works do not fully address the problem because the connection between the two types of attack vectors remains poorly understood.
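
As one illustration of the defensive line of work mentioned above, the sketch below shows a bare-bones adversarial-training loop (PyTorch assumed, with a hypothetical classifier, data loader, and optimizer): each batch is replaced by adversarially perturbed copies before the model is updated. This is a minimal sketch, not a specific defense surveyed in the paper.

```python
import torch
import torch.nn.functional as F

def adversarial_training_epoch(model, loader, opt, eps=0.03):
    """One epoch of (untargeted) adversarial training: craft a perturbed copy
    of each batch against the current model, then train on that copy."""
    for x, y in loader:
        # inner step: one FGSM-style perturbation against the current model
        x_adv = x.clone().detach().requires_grad_(True)
        F.cross_entropy(model(x_adv), y).backward()
        x_adv = (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()
        # outer step: update the model on the adversarial batch
        loss = F.cross_entropy(model(x_adv), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
```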

Summary of the Method Used

The research assumes a threat model in which the adversary can exploit both attack vectors. During training, the adversary forges a DNN that embeds malicious functionality (Pang et al., 2020); such a poisoned model is then introduced into the target's deep learning system through system development or maintenance. To examine the connections between the two attack vectors, the researchers cast adversarial inputs and poisoned models within a single unified framework, then conducted a systematic study of their interactions and of the implications for DNN vulnerability.
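
Under this threat model, the unified framing can be read as a joint optimization over an input perturbation and a model perturbation. The sketch below is a rough rendering of that idea (PyTorch assumed; the reference loader, the weighting term lam, and the simultaneous update scheme are illustrative assumptions, not the authors' exact algorithm): the adversary perturbs both the input and the model weights so the target input is misclassified, while a fidelity term keeps the model's behavior on clean data intact.

```python
import torch
import torch.nn.functional as F

def joint_attack(model, x, y_target, ref_loader, eps=0.03, lam=1.0,
                 lr_x=0.01, lr_theta=1e-4, steps=200):
    """Jointly optimize an input perturbation (delta_x) and a model
    perturbation (fine-tuned weights) so the perturbed model classifies the
    perturbed input as y_target, while staying accurate on clean data.
    Assumes x is a single-sample batch and y_target a length-1 LongTensor."""
    delta_x = torch.zeros_like(x, requires_grad=True)
    opt_x = torch.optim.Adam([delta_x], lr=lr_x)
    opt_theta = torch.optim.SGD(model.parameters(), lr=lr_theta)
    x_ref, y_ref = next(iter(ref_loader))        # clean reference batch
    for _ in range(steps):
        x_adv = (x + delta_x).clamp(0, 1)
        attack_loss = F.cross_entropy(model(x_adv), y_target)   # misclassify
        fidelity_loss = F.cross_entropy(model(x_ref), y_ref)    # stay normal
        loss = attack_loss + lam * fidelity_loss
        opt_x.zero_grad()
        opt_theta.zero_grad()
        loss.backward()
        opt_x.step()
        opt_theta.step()
        with torch.no_grad():
            delta_x.clamp_(-eps, eps)            # bound the input perturbation
    return (x + delta_x).clamp(0, 1).detach(), model
```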

Evaluation

The researchers evaluated the attack model in several steps. First, they developed a new attack framework that jointly optimizes adversarial inputs and poisoned models, with the aim of demonstrating the intricate duality between the two vectors. They then showed that this connection exhibits an intriguing mutual reinforcement effect (Pang et al., 2020): leveraging one vector amplifies the effectiveness of the other, which in turn gives adversaries a broader design spectrum for optimizing their attacks. Finally, they showed that countering such optimized attacks successfully requires investigating them from multiple perspectives, such as fidelity and specificity.
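
The fidelity and specificity perspectives can be instrumented roughly as follows (PyTorch assumed; these metric definitions are one plausible reading, not necessarily the exact formulations used in the paper): fidelity compares the poisoned model's clean accuracy against a clean reference model, and specificity measures how rarely the attack's target class shows up on inputs that were not meant to be attacked.

```python
import torch

@torch.no_grad()
def accuracy(model, loader):
    """Clean accuracy over a data loader."""
    correct = total = 0
    for x, y in loader:
        correct += (model(x).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total

@torch.no_grad()
def specificity(model, loader, y_target):
    """Fraction of non-target clean inputs NOT misclassified into the attack's
    target class; higher means the misbehavior is better confined."""
    confined = total = 0
    for x, y in loader:
        mask = y != y_target
        preds = model(x[mask]).argmax(dim=1)
        confined += (preds != y_target).sum().item()
        total += mask.sum().item()
    return confined / total

# Example usage (clean_model, poisoned_model, test_loader, y_target assumed):
# fidelity = accuracy(poisoned_model, test_loader) / accuracy(clean_model, test_loader)
# spec = specificity(poisoned_model, test_loader, y_target)
```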

Conclusion

In conclusion, my opinion is that this work offers a solid foundation for understanding adversarial inputs and poisoned models collectively. The researchers support their assertions both empirically and analytically, showing that a mutual reinforcement effect exists between the two attack vectors. Most importantly, the work opens up opportunities for further investigation of the topic.

Reference

Pang, R., Shen, H., Zhang, X., Ji, S., Vorobeychik, Y., Luo, X., … Wang, T. (2020). A Tale of Evil Twins: Adversarial Inputs versus Poisoned Models. Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security. https://doi.org/10.1145/3372297.3417253

 
