Position: Contextual Integrity is Inadequately Applied to Language Models
Abstract
The machine learning community is discovering Contextual Integrity (CI) as a useful framework for assessing the privacy implications of large language models (LLMs). This is an encouraging development. CI theory emphasizes sharing information in accordance with privacy norms and can bridge the social, legal, political, and technical aspects essential for evaluating privacy in LLMs. However, this is also a good point to reflect on the use of CI for LLMs. This position paper argues that the existing literature applies CI to LLMs inadequately, without embracing the theory's fundamental tenets. Inadequate applications of CI could lead to incorrect conclusions and flawed privacy-preserving designs. We clarify the four fundamental tenets of CI theory, systematize prior work according to whether it deviates from these tenets, and highlight overlooked issues in experimental hygiene for LLMs (e.g., prompt sensitivity, positional bias).