It is our great pleasure to welcome you to the 2021 ACM Workshop on Moving Target Defense (MTD'21). The mission of the MTD workshop is to provide a forum for researchers and practitioners in this area to exchange their novel ideas, findings, experiences, and lessons learned. The eighth MTD workshop has a special focus on the lessons learned from the past years of research in the area of moving target defenses, and on the challenges and opportunities facing the community moving forward.
The call for papers attracted submissions from North America and Europe. Each submission received at least three reviews and was then discussed and carefully debated by the members of the program committee. After careful consideration, the program committee accepted three full technical papers. To highlight the important lessons learned in the community so far, this year we also organized five invited talks covering broad research efforts that capture important aspects of MTDs. These invited papers capture many years of experience in designing, building, evaluating, and transitioning MTD technologies to practice.
For nearly two decades now, the vast majority of critical software vulnerabilities have been memory corruption bugs in C and C++ programs [13, 14]. Attackers often exploit these bugs using control-flow hijacking techniques to seize control over ...
Machine learning models are now widely deployed in real-world applications. However, the existence of adversarial examples has long been considered a real threat to such models. While numerous defenses aiming to improve robustness have been proposed, ...
Adversarial evasion attacks challenge the integrity of machine learning models by creating out-of-distribution samples that are consistently misclassified by these models. While a variety of detection and mitigation approaches have been proposed, they ...
Cyber deception has great potential in thwarting cyberattacks [1, 4, 8]. A defender (e.g., a network administrator) can use deceptive cyber artifacts such as honeypots and fake services to confuse attackers (e.g., hackers) and thus reduce the success ...
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks. Numerous efforts either try to patch weaknesses in trained models, or try to make it difficult or costly to compute adversarial examples that exploit them. In our work, we ...
New software security threats are constantly arising, including new classes of attacks such as the recent spate of micro-architectural vulnerabilities, from side-channels and speculative execution to attacks like Rowhammer that alter the physical state ...
As Machine Learning (ML) models are increasingly employed in a number of applications across a multitude of fields, the threat of adversarial attacks against ML models is also increasing. Adversarial samples crafted via specialized attack algorithms ...
Attackers rely upon a vast array of tools for automating attacks against vulnerable servers and services. It is often the case that when vulnerabilities are disclosed, scripts for detecting and exploiting them in tools such as Nmap and Metasploit are ...