DOI:
10.1145/2996758
AISec '16: Proceedings of the 2016 ACM Workshop on Artificial Intelligence and Security
ACM 2016 Proceeding
Publisher:
Association for Computing Machinery, New York, NY, United States
Conference:
CCS'16: 2016 ACM SIGSAC Conference on Computer and Communications Security, Vienna, Austria, 28 October 2016
ISBN:
978-1-4503-4573-6
Published:
28 October 2016
Abstract

It is our pleasure to welcome you to the 9th ACM Workshop on Artificial Intelligence and Security --- AISec 2016. AISec, which has been co-located with CCS annually for nine consecutive years, is the premier meeting place for researchers interested in the intersection of security, privacy, AI, and machine learning. Its role as a venue has been to merge practical security problems with advances in AI and machine learning. In doing so, researchers have also been developing theory and analytics unique to this domain and have explored diverse topics such as learning in game-theoretic adversarial environments, privacy-preserving learning, and applications to spam and intrusion detection.

AISec 2016 drew a record 38 submissions, of which 12 (32%) were selected for publication and presentation. Submissions arrived from researchers in 16 countries, from a wide variety of institutions both academic and corporate. The accepted papers were organized into the following thematic groups:

  • Security Data Sets: Collection and analysis of data that can serve as a baseline for AI/ML research in security.

  • Machine Learning and Security in Practice: Systems that use machine learning to solve a particular security problem.

  • Foundations: Theoretical constructs and best practices for applying machine learning to security.

  • Privacy: Attacks on user privacy or anonymity, and privacy-preserving constructions of machine learning systems.

The keynote address will be given by Elie Bursztein of Google, Inc., whose talk is entitled "Why is applying machine learning to anti-abuse so hard?" In this talk, Dr. Bursztein will discuss challenges in the reproducibility of scientific results from machine learning algorithms and what we can do about them. Dr. Bursztein's talk will touch on issues arising from proprietary hardware, dataset availability, adversarial machine learning, and the ethics of data. He will also consider several privacy questions related to machine learning models.

SESSION: Session 1: Security Data Sets
research-article
SherLock vs Moriarty: A Smartphone Dataset for Cybersecurity Research

In this paper we describe, and share with the research community, a significant smartphone dataset obtained from an ongoing long-term data collection experiment. The dataset currently contains 10 billion data records from 30 users collected over a period ...

SESSION: Session 2: Machine Learning and Security in Practice
research-article
DeepDGA: Adversarially-Tuned Domain Generation and Detection

Many malware families utilize domain generation algorithms (DGAs) to establish command and control (C&C) connections. While there are many methods to pseudorandomly generate domains, we focus in this paper on detecting (and generating) domains on a per-...
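
To make the DGA setting concrete, the toy sketch below is illustrative only: it is not DeepDGA's adversarially tuned generator, and the seed-plus-date scheme and names such as `toy_dga` are assumptions. It shows how malware and its operator can independently derive the same pseudorandom rendezvous domains; a detector only ever sees the resulting domain strings, which is why per-domain detection (and generation against it) is the interesting problem.

```python
import hashlib

def toy_dga(seed: str, date: str, count: int = 5, tld: str = ".com"):
    """Toy domain generation algorithm: derive pseudorandom domains from a
    shared seed and the current date, so malware and its C&C server can
    independently agree on the day's rendezvous domains."""
    domains = []
    for i in range(count):
        digest = hashlib.md5(f"{seed}-{date}-{i}".encode()).hexdigest()
        # Map the first 12 hex characters onto a letters-only label.
        label = "".join(chr(ord("a") + int(c, 16) % 26) for c in digest[:12])
        domains.append(label + tld)
    return domains

print(toy_dga("examplefamily", "2016-10-28"))
```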

research-article
Open Access
Tracked Without a Trace: Linking Sessions of Users by Unsupervised Learning of Patterns in Their DNS Traffic

Behavior-based tracking is an unobtrusive technique that allows observers to monitor user activities on the Internet over long periods of time -- in spite of changing IP addresses. Previous work has employed supervised classifiers in order to link the ...
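
As a rough illustration of behavior-based linking (a minimal sketch of the general idea, not the paper's unsupervised method; the domain names below are made up), two sessions can be compared by the similarity of their DNS query profiles:

```python
from collections import Counter
from math import sqrt

def profile(queried_domains):
    """Represent a session by its relative domain-query frequencies."""
    counts = Counter(queried_domains)
    total = sum(counts.values())
    return {d: c / total for d, c in counts.items()}

def cosine(p, q):
    """Cosine similarity between two sparse frequency profiles."""
    dot = sum(v * q.get(d, 0.0) for d, v in p.items())
    norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

# Sessions with similar querying habits score high even if the IP address
# has changed in between, which is what enables linking.
monday = ["news.example", "mail.example", "news.example", "cdn.example"]
tuesday = ["news.example", "mail.example", "forum.other"]
print(cosine(profile(monday), profile(tuesday)))
```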

research-article
Identifying Encrypted Malware Traffic with Contextual Flow Data

Identifying threats contained within encrypted network traffic poses a unique set of challenges. It is important to monitor this traffic for threats and malware, but do so in a way that maintains the integrity of the encryption. Because pattern matching ...
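
The general flavor of metadata-only detection can be sketched as follows; this is a hedged illustration in which the features, values, and classifier choice are assumptions, not the paper's feature set or model:

```python
# Hypothetical flow records: metadata that is observable without decrypting
# the payload (the specific features here are illustrative assumptions).
flows = [
    # (mean packet size, flow duration in s, distinct TLS extensions, label)
    (120.0, 0.4, 2, 1),   # 1 = malicious
    (980.0, 12.5, 9, 0),  # 0 = benign
    (150.0, 0.6, 3, 1),
    (870.0, 30.1, 8, 0),
]

X = [f[:3] for f in flows]
y = [f[3] for f in flows]

# Any off-the-shelf classifier can consume such metadata features;
# logistic regression is used here purely for illustration.
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression().fit(X, y)
print(clf.predict([[130.0, 0.5, 2]]))
```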

research-article
Open Access
Causality-based Sensemaking of Network Traffic for Android Application Security

Malicious Android applications pose serious threats to mobile security. They threaten the data confidentiality and system integrity on Android devices. Monitoring runtime activities serves as an important technique for analyzing dynamic app behaviors. ...

SESSION: Session 3: Foundations
research-article
Secure Kernel Machines against Evasion Attacks

Machine learning is widely used in security-sensitive settings like spam and malware detection, although it has been shown that malicious data can be carefully modified at test time to evade detection. To overcome this limitation, adversary-aware ...
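
To make the evasion threat model concrete, here is a minimal sketch of a generic gradient-direction evasion against a linear classifier on synthetic data; it is not the paper's secure kernel machine, and all data and parameters are made up:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
# Synthetic 2-D data: class 0 ("benign") around (-2, -2),
# class 1 ("malicious") around (2, 2).
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
clf = LinearSVC(C=1.0).fit(X, y)

# Evasion: nudge a malicious sample along -w, the direction that lowers the
# decision score fastest, until the classifier labels it benign.
w = clf.coef_[0]
x = np.array([2.5, 2.5])
step = 0.1 * w / np.linalg.norm(w)
while clf.decision_function([x])[0] > 0:
    x = x - step
print("evading point:", x, "score:", clf.decision_function([x])[0])
```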

research-article
Prescience: Probabilistic Guidance on the Retraining Conundrum for Malware Detection

Malware evolves perpetually and relies on increasingly sophisticated attacks to supersede defense strategies. Data-driven approaches to malware detection run the risk of becoming rapidly antiquated. Keeping pace with malware requires models that are ...

research-article
Discriminative Models for Multi-instance Problems with Tree Structure

Modelling network traffic is gaining importance in countering modern security threats of ever-increasing sophistication. It is, however, surprisingly difficult and costly to construct reliable classifiers on top of telemetry data due to the variety and ...

SESSION: Session 4: Privacy
research-article
True Friends Let You Down: Benchmarking Social Graph Anonymization Schemes

Greater demand for social graph data among researchers and analysts has fueled an increase in such datasets being published. Consequently, concerns about privacy breaches have also risen steadily. To mitigate privacy risks, a myriad of social graph ...

research-article
Change of Guard: The Next Generation of Social Graph De-anonymization Attacks

The past decade has seen active research in social graph de-anonymization, with a variety of algorithms proposed. Previous algorithms relied on handcrafted tricks and were locked in a co-evolution of attack and defense with the design of anonymization systems. We ...

research-article
Public Access
Differentially Private Online Active Learning with Applications to Anomaly Detection

In settings where data instances are generated sequentially or in streaming fashion, online learning algorithms can learn predictors using incremental training algorithms such as stochastic gradient descent. In some security applications such as ...
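
A minimal sketch of the generic recipe, noisy online gradient updates with per-example clipping, is shown below; the loss, clipping bound, and noise scale are assumptions rather than the paper's mechanism:

```python
import numpy as np

rng = np.random.default_rng(1)
w = np.zeros(3)
lr, clip, sigma = 0.1, 1.0, 0.5  # illustrative hyperparameters

def private_sgd_step(w, x, y):
    """One noisy online update for logistic loss on a single example."""
    pred = 1.0 / (1.0 + np.exp(-w @ x))
    grad = (pred - y) * x
    # Clip the per-example gradient, then add Gaussian noise before updating.
    grad = grad / max(1.0, np.linalg.norm(grad) / clip)
    grad = grad + rng.normal(0.0, sigma * clip, size=grad.shape)
    return w - lr * grad

# Instances arrive one at a time, as in a streaming / online setting.
stream = [(np.array([1.0, 0.2, -0.5]), 1), (np.array([-0.8, 0.1, 0.9]), 0)]
for x, y in stream:
    w = private_sgd_step(w, x, y)
print(w)
```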

research-article
Public Access
A Dual Perturbation Approach for Differential Private ADMM-Based Distributed Empirical Risk Minimization

The rapid growth of data has raised the importance of privacy-preserving techniques in distributed machine learning. In this paper, we develop a privacy-preserving method for a class of regularized empirical risk minimization (ERM) machine learning ...
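
As a rough sketch of the setting, under the standard consensus-ADMM formulation of distributed ERM (the placement of the perturbation below is only an assumption meant to convey the idea of perturbing the dual variable, not the paper's exact mechanism), each node $i$ holding local regularized risk $f_i$ iterates

\[
\begin{aligned}
x_i^{k+1} &= \arg\min_{x_i}\; f_i(x_i) + \langle \lambda_i^{k},\, x_i - z^{k}\rangle + \tfrac{\rho}{2}\,\lVert x_i - z^{k}\rVert^2,\\
z^{k+1} &= \frac{1}{N}\sum_{i=1}^{N}\Bigl(x_i^{k+1} + \tfrac{1}{\rho}\,\lambda_i^{k}\Bigr),\\
\lambda_i^{k+1} &= \lambda_i^{k} + \rho\,\bigl(x_i^{k+1} - z^{k+1}\bigr) + \eta_i^{k},
\end{aligned}
\]

where $\eta_i^{k}$ is noise calibrated to the sensitivity of the dual update so that the messages a node exchanges satisfy differential privacy.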

Contributors
  • Facebook, Inc.
  • Chalmers University of Technology
  • Rutgers University–New Brunswick

Acceptance Rates

AISec '16 Paper Acceptance Rate: 12 of 38 submissions, 32%. Overall Acceptance Rate: 94 of 231 submissions, 41%.
Year        Submitted  Accepted  Rate
AISec '18   32         9         28%
AISec '17   36         11        31%
AISec '16   38         12        32%
AISec '15   25         11        44%
AISec '14   24         12        50%
AISec '13   17         10        59%
AISec '12   24         10        42%
AISec '10   15         10        67%
AISec '08   20         9         45%
Overall     231        94        41%