GeoStyle: Discovering Fashion Trends and Events

Utkarsh Mall¹, Kevin Matzen², Bharath Hariharan¹, Noah Snavely¹, Kavita Bala¹

¹Cornell University  ²Facebook


Understanding fashion styles and trends is of great potential interest to retailers and consumers alike. The photos people upload to social media are a historical and public data source of how people dress across the world and at different times. While we now have tools to automatically recognize the clothes and style attributes of what people are wearing in these photographs, we lack the ability to analyze spatial and temporal trends in these attributes or to make predictions about the future. In this paper, we address this need by providing an automatic framework (see the figure below) that analyzes large corpora of street imagery to (a) discover and forecast long-term trends of various fashion attributes as well as automatically discovered styles, and (b) identify spatio-temporally localized events that affect what people wear. We show that our framework makes long-term trend forecasts that are >20% more accurate than prior work, and identifies hundreds of socially meaningful events that impact fashion across the globe.


[pdf]  [arxiv]  [supplementary pdf]

Utkarsh Mall, Kevin Matzen, Bharath Hariharan, Noah Snavely and Kavita Bala. "GeoStyle: Discovering Fashion Trends and Events". In ICCV, 2019.

@inproceedings{mall2019geostyle,
 title={{GeoStyle}: {D}iscovering fashion trends and events},
 author={Mall, Utkarsh and Matzen, Kevin and Hariharan, Bharath and Snavely, Noah and Bala, Kavita},
 booktitle={ICCV},
 year={2019}
}




The code can be found here

Some pre-trained models required by the code can be downloaded here:

[npz] googlenet.npz: GoogLeNet model pre-trained on ImageNet. Required to train the network.

[pkl] streetstyle_weights.pkl: GoogLeNet model trained on StreetStyle27k.


[txt] Readme.txt

[link] StreetStyle27k dataset

[pkl] metadata.pkl: Attribute predictions and metadata for 7.7M images

[pkl] barcaflickr_metadata.pkl: Attribute predictions and metadata for Flickr images from Barcelona (see Sec. 4.4 in the paper)
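The metadata files above are Python pickles. A minimal sketch of reading records of this kind with the standard library — note that the exact schema of metadata.pkl is not documented here, so the field names below ("image_id", "lat", "lon", "attributes") are illustrative assumptions, not the file's actual keys:

```python
import pickle

# Hypothetical per-image record; the real metadata.pkl schema may differ.
record = {
    "image_id": "0001",
    "lat": 41.39,          # Barcelona, for example
    "lon": 2.17,
    "attributes": {"wearing_hat": 0.12, "major_color": "blue"},
}

# In practice you would do: records = pickle.load(open("metadata.pkl", "rb"))
blob = pickle.dumps([record])
records = pickle.loads(blob)
print(records[0]["attributes"]["major_color"])  # → blue
```

Once loaded, the per-image attribute predictions can be grouped by location and time to reproduce the trend analyses described in the paper.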


This work was supported by NSF and an Amazon research award.