Moran Baruch

IBM Research
When AI Meets Privacy: How Can We Prevent Data Leakage Risks with Foundation Models?


Moran Baruch is a Machine Learning Research Team Lead in the AI Security department at IBM Research. Her research is dedicated to adapting deep learning models for secure prediction over encrypted data, improving data privacy and enabling secure AI applications. Moran recently completed her Ph.D. in computer science at Bar-Ilan University, specializing in machine learning and security.


In a world driven by data and AI, the emergence of large foundation models like GPT has led industries to rely on ML-as-a-service. However, entrusting sensitive data to third-party providers raises concerns about privacy and security. For example, hospitals may hesitate to use ML-as-a-service without assurances that their data remains protected and does not leak.

In this roundtable discussion, we aim to share knowledge about the current state of privacy-enhancing technologies (PETs), such as homomorphic encryption and secure multi-party computation (MPC), in the context of ML. We will discuss how organizations can ensure the confidentiality of their data in the era of AI-driven services. Additionally, we will discuss the challenges of applying these solutions to large foundation models and explore strategies for overcoming them.

Join us for an insightful conversation.


8:45 Reception
9:30 Opening remarks by WiDS TLV ambassadors Noah Eyal Altman, Or Basson, and Nitzan Gado
9:45 Dr. Aya Soffer, IBM: "Putting Generative AI to Work: What Have We Learned So Far?"
10:15 Prof. Reut Tsarfaty, Bar-Ilan University: "Will Hebrew Speakers Be Able to Use Generative AI in Their Native Tongue?"
10:45 Poster Pitches
10:55 Break
11:10 Lightning talks
12:30 Lunch & poster session
13:30 Roundtable session & poster session
14:15 Roundtable closing
14:30 Break
14:40 Naomi Ken Korem, Lightricks: "Mastering the Art of Generative Models: Training and Controlling Text-to-Video Models"
15:00 Dr. Yael Mathov, Intuit: "Surviving the AI-pocalypse: Your Guide to LLM Security"
15:20 Closing remarks
15:30 End of event