
Discovering place-informative scenes and objects using social media photos

Understanding the visual discrepancy and heterogeneity of different places is of great interest to architectural design, urban design and tourism planning. However, previous studies have been limited by the lack of adequate data and efficient methods to quantify the visual aspects of a place. This work proposes a data-driven framework to explore place-informative scenes and objects by employing a deep convolutional neural network to learn and measure the visual knowledge of place appearance automatically from a massive dataset of photos and imagery. Based on the proposed framework, we compare the visual similarity and visual distinctiveness of 18 cities worldwide using millions of geo-tagged photos obtained from social media. As a result, we identify the visual cues that distinguish each city from the others: beyond landmarks, a large amount of historical architecture, religious sites, unique urban scenes and some unusual natural landscapes are identified as the most place-informative elements. In terms of city-informative objects, taking vehicles as an example, we find that taxis, police cars and ambulances are the most place-informative objects. The results of this work are inspiring for various fields, providing insights into what large-scale geo-tagged data can achieve in understanding place formalization and urban design.
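The abstract describes training a deep convolutional neural network on city-labelled, geo-tagged photos and using it to surface place-informative scenes. As a rough illustration only (not the authors' released code), the hypothetical PyTorch/torchvision sketch below fine-tunes an ImageNet-pretrained backbone as an 18-way city classifier and treats the classifier's per-city confidence as a simple place-informativeness score; the `photos/<city>/` folder layout, the ResNet-50 backbone and the hyperparameters are all assumptions made for the sketch.

```python
# Hypothetical sketch: fine-tune a CNN to classify geo-tagged photos by city,
# then read the per-city confidence as a rough place-informativeness score.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_CITIES = 18  # the paper compares 18 cities worldwide

# Assumed directory layout: photos/<city_name>/<photo>.jpg
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("photos", transform=preprocess)
loader = DataLoader(train_set, batch_size=64, shuffle=True, num_workers=4)

# Start from an ImageNet-pretrained backbone and replace the classifier head
# with an 18-way city classifier.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, NUM_CITIES)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One training pass, kept minimal for illustration.
model.train()
for images, city_labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), city_labels)
    loss.backward()
    optimizer.step()

# A photo the classifier assigns to its city with high confidence is a
# candidate place-informative scene; low-confidence photos are visually
# generic. (Reuses the last training batch purely for illustration.)
model.eval()
with torch.no_grad():
    probs = torch.softmax(model(images), dim=1)
    informativeness, predicted_city = probs.max(dim=1)
```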


Bibliographic Details
Main Authors: Zhang, Fan, Zhou, Bolei, Ratti, Carlo, Liu, Yu
Format: Online Article Text
Language: English
Published: The Royal Society 2019
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6458415/
https://www.ncbi.nlm.nih.gov/pubmed/31032000
http://dx.doi.org/10.1098/rsos.181375
_version_ 1783410002229198848
author Zhang, Fan
Zhou, Bolei
Ratti, Carlo
Liu, Yu
author_facet Zhang, Fan
Zhou, Bolei
Ratti, Carlo
Liu, Yu
author_sort Zhang, Fan
collection PubMed
description Understanding the visual discrepancy and heterogeneity of different places is of great interest to architectural design, urban design and tourism planning. However, previous studies have been limited by the lack of adequate data and efficient methods to quantify the visual aspects of a place. This work proposes a data-driven framework to explore place-informative scenes and objects by employing a deep convolutional neural network to learn and measure the visual knowledge of place appearance automatically from a massive dataset of photos and imagery. Based on the proposed framework, we compare the visual similarity and visual distinctiveness of 18 cities worldwide using millions of geo-tagged photos obtained from social media. As a result, we identify the visual cues that distinguish each city from the others: beyond landmarks, a large amount of historical architecture, religious sites, unique urban scenes and some unusual natural landscapes are identified as the most place-informative elements. In terms of city-informative objects, taking vehicles as an example, we find that taxis, police cars and ambulances are the most place-informative objects. The results of this work are inspiring for various fields, providing insights into what large-scale geo-tagged data can achieve in understanding place formalization and urban design.
format Online
Article
Text
id pubmed-6458415
institution National Center for Biotechnology Information
language English
publishDate 2019
publisher The Royal Society
record_format MEDLINE/PubMed
spelling pubmed-6458415 2019-04-26 Discovering place-informative scenes and objects using social media photos Zhang, Fan Zhou, Bolei Ratti, Carlo Liu, Yu R Soc Open Sci Computer Science Understanding the visual discrepancy and heterogeneity of different places is of great interest to architectural design, urban design and tourism planning. However, previous studies have been limited by the lack of adequate data and efficient methods to quantify the visual aspects of a place. This work proposes a data-driven framework to explore place-informative scenes and objects by employing a deep convolutional neural network to learn and measure the visual knowledge of place appearance automatically from a massive dataset of photos and imagery. Based on the proposed framework, we compare the visual similarity and visual distinctiveness of 18 cities worldwide using millions of geo-tagged photos obtained from social media. As a result, we identify the visual cues that distinguish each city from the others: beyond landmarks, a large amount of historical architecture, religious sites, unique urban scenes and some unusual natural landscapes are identified as the most place-informative elements. In terms of city-informative objects, taking vehicles as an example, we find that taxis, police cars and ambulances are the most place-informative objects. The results of this work are inspiring for various fields, providing insights into what large-scale geo-tagged data can achieve in understanding place formalization and urban design. The Royal Society 2019-03-06 /pmc/articles/PMC6458415/ /pubmed/31032000 http://dx.doi.org/10.1098/rsos.181375 Text en © 2019 The Authors. http://creativecommons.org/licenses/by/4.0/ Published by the Royal Society under the terms of the Creative Commons Attribution License http://creativecommons.org/licenses/by/4.0/, which permits unrestricted use, provided the original author and source are credited.
spellingShingle Computer Science
Zhang, Fan
Zhou, Bolei
Ratti, Carlo
Liu, Yu
Discovering place-informative scenes and objects using social media photos
title Discovering place-informative scenes and objects using social media photos
title_full Discovering place-informative scenes and objects using social media photos
title_fullStr Discovering place-informative scenes and objects using social media photos
title_full_unstemmed Discovering place-informative scenes and objects using social media photos
title_short Discovering place-informative scenes and objects using social media photos
title_sort discovering place-informative scenes and objects using social media photos
topic Computer Science
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6458415/
https://www.ncbi.nlm.nih.gov/pubmed/31032000
http://dx.doi.org/10.1098/rsos.181375
work_keys_str_mv AT zhangfan discoveringplaceinformativescenesandobjectsusingsocialmediaphotos
AT zhoubolei discoveringplaceinformativescenesandobjectsusingsocialmediaphotos
AT ratticarlo discoveringplaceinformativescenesandobjectsusingsocialmediaphotos
AT liuyu discoveringplaceinformativescenesandobjectsusingsocialmediaphotos