Dr. Yael Mathov

Intuit
Surviving the AI-pocalypse: Your Guide to LLM Security

Bio

Yael Mathov is an AI Security Researcher at Intuit. Her day-to-day tasks involve detecting weak spots and protecting a wide array of AI models, ranging from cutting-edge GenAI to conventional machine learning systems. She boasts a Ph.D. in Adversarial Learning, an M.Sc. in Cyber Security, and a B.Sc. in Computer Science, all of which were earned at Ben Gurion University of the Negev. Her gaming experience has taught her an invaluable lesson – never to trust AI with the red button.

Abstract

As the adoption of Large Language Models (LLMs) surges within the industry owing to their exceptional abilities in deep comprehension of complex text, their unique capabilities come with unique risks and challenges. With their ability to mimic human-like reactions, LLMs could have potential implications for the security and reliability of the systems they power.

In this presentation, we will investigate potential vulnerabilities within LLMs that could be exploited, illustrating how these could be used to manipulate models in operation. We will then share a set of strategies and proactive measures that data scientists at Intuit implement to bolster our AI-powered applications against malevolent attacks. This approach helps multiple teams harness the power of LLMs while mitigating their inherent security risks.

Agenda

8:45 Reception
9:30 Opening remarks by WiDS TLV ambassadors Noah Eyal Altman, Or Basson, and Nitzan Gado
9:45 Dr. Aya Soffer, IBM: "Putting Generative AI to Work: What Have We Learned So Far?"
10:15 Prof. Reut Tsarfaty, Bar-llan University: "Will Hebrew Speakers Be Able to Use Generative AI in Their Native Tongue?"
10:45 Poster Pitches
10:55 Break
11:10 Lightning talks
12:30 Lunch & poster session
13:30 Roundtable session & poster session
14:15 Roundtable closing
14:30 Break
14:40 Naomi Ken Korem, Lightricks: "Mastering the Art of Generative Models: Training and Controlling Text-to-Video Models"
15:00 Dr. Yael Mathov, Intuit: "Surviving the AI-pocalypse: Your Guide to LLM Security"
15:20 Closing remarks
15:30 The end