Rashid Mushkani (UdeM, Mila) and co-authors Shravan Nayak, Hugo Berard, Allison Cohen, Shin Koseki, and Hadrien Bertrand presented LIVS (Local Intersectional Visual Spaces) at ICML 2025 (the International Conference on Machine Learning) in Vancouver.
LIVS is a benchmark dataset (a collection of data used to test and compare AI models), developed with 30 Montreal community organizations. The goal was to adjust text-to-image models (AI that generates images from written descriptions) so that they reflect the local values and priorities of residents.
To create LIVS, researchers asked Montreal residents to compare 13,462 images in pairs, generating 37,710 choices based on six criteria defined by the residents: Accessibility, Safety, Comfort, Welcomingness, Inclusivity, and Diversity. These responses were used to fine-tune Stable Diffusion XL via Direct Preference Optimization (DPO) (a method that allows the AI model to learn directly from residents’ choices).
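The core of DPO is a simple preference loss: the tuned model is pushed to assign a higher likelihood to the image a resident preferred, relative to a frozen reference model. The sketch below shows that loss for a single preference pair; the function name, inputs, and β value are illustrative, and the paper applies the idea to Stable Diffusion XL's denoising objective rather than raw log-likelihoods.

```python
import math

def dpo_loss(logp_w_policy, logp_l_policy, logp_w_ref, logp_l_ref, beta=0.1):
    """DPO loss for one preference pair (illustrative sketch).

    logp_w_*: log-likelihood of the preferred ("winner") output
    logp_l_*: log-likelihood of the rejected ("loser") output
    ..._policy: under the model being fine-tuned
    ..._ref:    under a frozen reference model
    beta: strength of the KL-style regularization toward the reference
    """
    # How much more the policy favors the winner over the loser,
    # measured relative to the reference model.
    margin = beta * ((logp_w_policy - logp_w_ref) - (logp_l_policy - logp_l_ref))
    # -log sigmoid(margin), written in a numerically stable form.
    return math.log1p(math.exp(-margin))

# With no preference learned yet (all terms equal), the loss is log 2:
print(dpo_loss(0.0, 0.0, 0.0, 0.0))  # ≈ 0.693
```

Minimizing this loss over the 37,710 resident choices nudges the generator toward the locally preferred image in each pair without drifting far from the pretrained reference.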
Case studies show that:
- DPO boosts alignment where data is rich,
- preference patterns shift by identity,
- human-written prompts yield more distinct visuals than prompts from LLMs (large language models, a type of AI specialized in natural language processing),
- and intersectional groups rate criteria differently, challenging one-size-fits-all alignment.
The dataset and approach offer planners, designers, and civic technologists a practical way to generate visuals for participatory planning, evaluate bias, and prototype policies with communities in the loop.
With the participation of: Mila – Quebec AI Institute, Université de Montréal, Sid Lee Architecture, Enclume, Dark Matter Labs, IVADO, Canadian Commission for UNESCO
Learn more here: https://mid-space.one/
Full article: https://arxiv.org/abs/2503.01894