Lital Binyamin & Hilit Segev

Make It Count: Text-to-Image Generation with an Accurate Number of Objects

Bio

Lital Binyamin is an MSc student at Bar-Ilan University and a deep learning engineer specializing in computer vision and generative AI. She is currently focusing on text-to-image generation under the supervision of Professor Gal Chechik. Lital also has three years of experience in computer vision and deep learning across academia and industry, spanning medical imaging, remote sensing, and hyperspectral imaging. She holds a BSc in computer science and psychology with an emphasis on neuroscience from Tel Aviv University.

Hilit Segev holds a BSc and MSc in Computer Science from Bar-Ilan University, where she completed a direct-track Master’s degree in Professor Gal Chechik’s lab. Her research focused on computer vision, with expertise in 3D modeling, generative models, and medical data analysis.
She currently works at aiOla, specializing in Speech AI, with a focus on speech processing and large language models. In addition to her technical expertise, Hilit is actively involved in promoting women in technology. She is one of the leaders of WiDS Israel and founded Bar-Ilan’s Women in Tech Forum.

Abstract

Despite the unprecedented success of text-to-image diffusion models, controlling the number of depicted objects using text is surprisingly hard. This matters for a range of applications, from technical documents to children’s books to illustrated cooking recipes. Generating correct object counts is fundamentally challenging because the generative model needs to maintain a sense of separate identity for every instance of the object, even when several objects look identical or overlap, and then implicitly carry out a global computation during generation. It is still unknown whether such representations exist. To address count-correct generation, we first identify features within the diffusion model that can carry object identity information. We then use them to separate and count instances of objects during the denoising process and detect over-generation and under-generation. We fix the latter by training a model that predicts both the shape and location of a missing object, based on the layout of existing ones, and show how it can be used to guide denoising toward the correct object count. Our approach, CountGen, does not depend on external sources to determine object layout, but rather uses the prior from the diffusion model itself, creating prompt-dependent and seed-dependent layouts. Evaluated on two benchmark datasets, we find that CountGen strongly outperforms the count accuracy of existing baselines.
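To make the counting step concrete: once per-pixel features have been separated into a binary object mask, instances can be counted as connected components, and the count compared against the number requested in the prompt. The sketch below is a toy illustration of that idea only; `count_instances` is a hypothetical helper written for this page, not the authors’ implementation.

```python
# Toy sketch: count object instances in a binary mask as 4-connected
# components, then compare to the requested count to flag
# over-generation or under-generation. Not CountGen itself.

def count_instances(mask):
    """Count 4-connected components of True cells in a 2D boolean grid."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                count += 1  # found a new, unvisited instance
                stack = [(i, j)]
                seen[i][j] = True
                while stack:  # flood-fill the whole component
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
    return count

# Three separate blobs in a 3x5 grid.
mask = [[bool(v) for v in row] for row in [
    [1, 1, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [0, 0, 1, 0, 0],
]]
found = count_instances(mask)  # 3
requested = 4
if found < requested:
    print(f"under-generation: {found} of {requested} objects")
```

In the actual method, an under-count like this would trigger a model that predicts the shape and location of the missing object from the existing layout, which then guides the remaining denoising steps.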

Planned Agenda

8:45 Reception
9:30 Opening remarks by WiDS TLV ambassadors
9:45 Dr. Mor Geva, Tel Aviv University: “MRI for Large Language Models: Mechanistic Interpretability from Neurons to Attention Heads”
10:15 Panel: “Pioneering Progress: a strategic look at the GenAI revolution and the new role of data scientists”
Shani Gershtein, Melingo
Mirit Elyada Bar, Intuit
Dr. Asi Messica, Lightricks
Moderated by Nitzan Gado, Intuit
10:45 Poster pitches
10:55 Break
11:10 Lightning talks session
12:30 Lunch & poster session
13:30 Roundtable session & poster session
14:30 Roundtable closing
14:40 Shunit Agmon, Technion: “Bridging the Gender Gap in Clinical AI: Temporal Adaptation with TeDi-BERT”
15:00 Shaked Naor Hoffmann, Apartment List: “Building Generative AI Agents for Production: Turning Ideas into Real-World Applications”
15:20 Closing remarks
15:30 The end