Orna Amir & Hila Kantor

Google
A User-Centric Framework for Quantifying Notification Harm

Bio

Orna Amir leads the Data Science team in Google's Growth and Notifications organization. Her work leverages modeling, analytics, and A/B testing to enhance notification quality and growth for Google Photos, Google Maps, and YouTube. Previously at Waze, she developed deep learning models and monitoring methods to improve ETA and navigation. Orna holds a Ph.D. in Applied Mathematics and brings over 15 years of experience in machine learning, statistical modeling, and optimization across various industries.

Hila Kantor is a data science tech lead on Google's Notifications team, driving initiatives in digital wellbeing and team success metrics. She specializes in notification value/harm analysis, measurement frameworks, and product analytics using data science and big data techniques. Prior to Google, she led product analytics and BI consulting at Sisense. Hila holds an M.Sc. in Technology and Information Systems, specializing in data science and business analytics.

Abstract

This talk presents a holistic framework for quantifying user notification harm, developed through cross-functional collaboration at Google spanning data science, user experience research, and stakeholders from Google Photos, Google Maps, Google Search, and YouTube.

Previously, user notification harm was measured via a balance between clicks and opt-outs, which are both measurable signals but do not clearly express the end user's experience or motivations. Because opt-outs are infrequent, relying on them as a measure of notification harm is ineffective in A/B testing. Furthermore, opt-outs can underestimate harm for users who tolerate or ignore unwanted notifications without opting out.

To create a positive digital environment, we need a scalable way to measure and understand the potential harm of poorly designed or excessive notifications. We combined quantitative and qualitative approaches to uncover the root causes of notification dissatisfaction. This involved feature engineering, ML modeling, and analyzing user feedback – particularly free-text survey responses processed with LLM technology – to better understand end users' experiences and motivations.

The research revealed key indicators of user harm from notifications, giving Google product teams tools to refine their strategies and create a more positive and respectful user experience.

Planned Agenda

8:45 Reception
9:30 Opening remarks by WiDS TLV ambassadors Noah Eyal Altman, Or Basson, and Nitzan Gado
9:45 Dr. Aya Soffer, IBM: "Putting Generative AI to Work: What Have We Learned So Far?"
10:15 Prof. Reut Tsarfaty, Bar-Ilan University: "Will Hebrew Speakers Be Able to Use Generative AI in Their Native Tongue?"
10:45 Break
11:00 Lightning talks
12:20 Lunch & poster session
13:20 Roundtable session & poster session
14:05 Roundtable closing
14:20 Break
14:30 Dr. Orna Amir & Hila Kantor, Google: "A User-Centric Framework for Quantifying Notification Harm"
14:50 Naomi Ken Korem, Lightricks: "Mastering the Art of Generative Models: Training and Controlling Text-to-Video Models"
15:10 Dr. Yael Mathov, Intuit: "Surviving the AI-pocalypse: Your Guide to LLM Security"
15:30 Closing remarks
15:40 The end

WiDS TLV important update

Dear WiDS TLV attendees,

In light of recent developments, we regret to inform you that the WiDS TLV 2024 event scheduled for tomorrow has been postponed to June 3rd, 2024. We apologize for this last-minute change and look forward to seeing all of you on June 3rd.