
Generative Monoculture and Fairness in LLMs

Investigated how LLM outputs become less diverse than their training data, proposing a group-aware fairness definition to detect disproportionate diversity loss.

Completed
Research · ML · NLP · Fairness

Overview

This project investigates generative monoculture in large language models — the phenomenon where model outputs become systematically less diverse than the underlying training data, raising concerns for alignment and equitable representation.
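The project's exact metric and fairness definition are not spelled out on this page; as a minimal illustrative sketch, one way to detect disproportionate diversity loss is to compare the entropy of training-data outputs against model outputs separately for each group, then check the gap between groups. All names and data below (`diversity_loss`, `group_a`, `group_b`) are hypothetical, not the project's actual implementation.

```python
from collections import Counter
import math

def shannon_entropy(items):
    """Shannon entropy (bits) of the empirical distribution over items."""
    counts = Counter(items)
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def diversity_loss(train_outputs, model_outputs):
    """Relative entropy drop from training data to model outputs (0 = no loss)."""
    h_train = shannon_entropy(train_outputs)
    h_model = shannon_entropy(model_outputs)
    return (h_train - h_model) / h_train if h_train > 0 else 0.0

# Hypothetical per-group data: (samples from training corpus, samples from model).
groups = {
    "group_a": (["x", "y", "z", "x"], ["x", "x", "x", "y"]),
    "group_b": (["p", "q", "r", "s"], ["p", "q", "r", "p"]),
}

losses = {g: diversity_loss(tr, mo) for g, (tr, mo) in groups.items()}
# Group-aware check: a large gap in diversity loss between groups signals
# that monoculture hits some groups harder than others.
gap = max(losses.values()) - min(losses.values())
print(losses, gap)
```

In this toy example `group_a` loses a larger share of its diversity than `group_b`, so a threshold on the gap would flag the disparity; a real evaluation would use semantic clustering of generations rather than exact-match counts.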

Key Contributions

Report