CMSC818I: Advanced Topics in Computer Systems; Large Language Models, Security, and Privacy

Classroom: CSI 3117 | Class hours: Tuesday and Thursday, 9:30am - 10:45am

Instructor: Yizheng Chen | Email: yzchen@umd.edu | Office Hours: By appointment (https://calendly.com/c1z/30min)

TA: Yanjun Fu | Email: yanjunfu@umd.edu | Office Hours: Wednesday 2pm - 3pm, in IRB 5112, Desk #22

TA: Khalid Saifullah | Email: khalids@umd.edu | Office Hours: Monday 2pm - 3pm, in IRB 2119

Lectures

Each entry lists the date, topic, assigned paper (with slides where available), and additional materials.

08/27  Introduction
  Syllabus

08/29  Securing AI Coding Assistants
  Paper: Constrained Decoding for Secure Code Generation [Slides]
  More Materials:
  • Amazon Trusted AI Challenge
  • Practical Attacks against Black-box Code Completion Engines
  • INDICT: Code Generation with Internal Dialogues of Critiques for Both Security and Helpfulness

09/03  Background [Slides]
  More Materials:
  • Computer Security Textbook from UC Berkeley

09/05  Secure Code Generation
  Paper: Instruction Tuning for Secure Code Generation [Slides]
  More Materials:
  • Large Language Models for Code: Security Hardening and Adversarial Testing
  • CodeLMSec Benchmark: Systematically Evaluating and Finding Security Vulnerabilities in Black-Box Code Language Models

09/10  SWE Agent
  Paper: SWE-agent: Agent-Computer Interfaces Enable Automated Software Engineering [Slides]
  More Materials:
  • SWE-bench: Can Language Models Resolve Real-World GitHub Issues?
  • ReAct: Synergizing Reasoning and Acting in Language Models

09/12  Cybersecurity Risks of LLM Agents
  Paper: Cybench: A Framework for Evaluating Cybersecurity Capabilities and Risk of Language Models [Slides]
  More Materials:
  • NYU CTF Dataset: A Scalable Open-Source Benchmark Dataset for Evaluating LLMs in Offensive Security
  • An Empirical Evaluation of LLMs for Solving Offensive Security Challenges
  • Language Agents as Hackers: Evaluating Cybersecurity Skills with Capture the Flag

09/17  Copyright
  Paper: CopyBench: Measuring Literal and Non-Literal Reproduction of Copyright-Protected Text in Language Model Generation [Slides]
  More Materials:
  • Detecting Pretraining Data from Large Language Models
  • Fantastic Copyrighted Beasts and How (Not) to Generate Them

09/19  Copyright
  Paper: Be like a Goldfish, Don’t Memorize! Mitigating Memorization in Generative LLMs [Slides]
  More Materials:
  • Counterfactual Memorization in Neural Language Models
  • Preventing Verbatim Memorization in Language Models Gives a False Sense of Privacy

09/24  RAG Poisoning
  Paper: PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models [Slides]

09/26  Backdoor Detection Competition Report
  Paper: Competition Report: Finding Universal Jailbreak Backdoors in Aligned LLMs [Slides]

10/01  Logic Fallacy
  Paper: NL2FOL: Translating Natural Language to First-Order Logic for Logical Fallacy Detection [Slides]

10/03  Prompt Injection Attacks and Defenses
  Paper: Formalizing and Benchmarking Prompt Injection Attacks and Defenses [Slides]

10/08  Attacks against Code Generation
  Paper: Practical Attacks against Black-box Code Completion Engines [Slides]

10/10  Code Language Models
  Guest Speaker: Yangruibo (Robin) Ding

10/15  TBD
  Guest Speaker: Eric Wong

10/17  Mid-term Take-home Exam

10/22  LLM Agents Exploiting Web Applications
  Paper: LLM Agents can Autonomously Exploit One-day Vulnerabilities [Slides]
  More Materials:
  • Teams of LLM Agents can Exploit Zero-Day Vulnerabilities

10/24  LLM for Static Vulnerability Detection
  Paper: Comparison of Static Application Security Testing Tools and Large Language Models for Repo-level Vulnerability Detection [Slides]
  More Materials:
  • Vulnerability Detection with Code Language Models: How Far Are We?
  • ARVO: Atlas of Reproducible Vulnerabilities for Open Source Software

10/29  Formal Assurance of AI Agents
  Paper: AI Agents with Formal Security Guarantees [Slides]

10/31  Security of AI Agents
  Paper: Security of AI Agents [Slides]
  More Materials:
  • Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems

11/05  Debugging Capabilities
  Paper: DebugBench: Evaluating Debugging Capability of Large Language Models [Slides]
  Mid-term Project Report Due

11/07  Patch Security Issues
  Paper: Can LLMs Patch Security Issues? [Slides]

11/12  LLM for Program Analysis

11/14  LLM for Fuzzing

11/19  Alignment & Safety

11/21  Multi-Model AI Safety

11/26  Thanksgiving Break

11/28  Thanksgiving Break

12/03  Project Poster Session

12/05  Project Poster Session

12/10  Reading Day (No Class)

12/12  Final Project Report Due