GeoDS Lab Members at the 2022 ACM SIGSPATIAL and AutoCarto Conferences

During the week of November 1-4, 2022, GeoDS lab members traveled to two academic conferences: ACM SIGSPATIAL 2022 and AutoCarto 2022.

Prof. Song Gao, Wen Ye (undergraduate student), Yunlei Liang (PhD student), Yuhan Ji (PhD student), Jiawei Zhu (visiting PhD student), and Jinmeng Rao (PhD Candidate) presented at the 30th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (ACM SIGSPATIAL 2022) in Seattle, Washington, USA.

We published two short research papers in the main conference and three full papers in the workshops, and won a “Best Paper Award”.

  1. Region2Vec: Community Detection on Spatial Networks Using Graph Embedding with Node Attributes and Spatial Interactions. Yunlei Liang, Jiawei Zhu, Wen Ye, Song Gao. (2022) In SIGSPATIAL’22, DOI:10.1145/3557915.3560974
  2. Exploring multilevel regularity in human mobility patterns using a feature engineering approach: A case study in Chicago. Yuhan Ji, Song Gao, Jacob Kruse, Tam Huynh, James Triveri, Chris Scheele, Collin Bennett, and Yichen Wen. (2022) In SIGSPATIAL’22, DOI:10.1145/3557915.3560974
  3. (Best Paper Award) Understanding the spatiotemporal heterogeneities in the associations between COVID-19 infections and both human mobility and close contacts in the United States. Wen Ye, Song Gao. (2022) In SpatialEpi ’22,  pp 1–9, DOI:10.1145/3557995.3566117
  4. Measuring network resilience via geospatial knowledge graph: a case study of the US multi-commodity flow network. Jinmeng Rao, Song Gao, Michelle Miller, Alfonso Morales. (2022) In GeoKG’22, pp 17-25, DOI:10.1145/3557990.3567569
  5. Towards the intelligent era of spatial analysis and modeling. Di Zhu, Song Gao, Guofeng Cao. (2022) In GeoAI’22, pp 10-13, DOI:10.1145/3557918.3565863

As one of the Proceedings Chairs, Professor Song Gao co-organized the 5th ACM SIGSPATIAL International Workshop on AI for Geographic Knowledge Discovery (GeoAI’22). The workshop featured two keynotes, from industry and academia, and 12 oral presentations. The proceedings of the GeoAI’22 workshop are available in the ACM Digital Library: https://dl.acm.org/doi/proceedings/10.1145/3557918

In addition, Yuhao Kang (PhD Candidate) and Jake Kruse (PhD Student) presented two short papers at AutoCarto 2022, the 24th International Research Symposium on Cartography and GIScience.

  1. A Review and Synthesis of Recent GeoAI Research for Cartography: Methods, Applications, and Ethics. Yuhao Kang, Song Gao, Robert Roth (2022)
  2. Interactive Web Mapping for Multi-Criteria Assessment of Redistricting Plans. Jacob Kruse, Song Gao, Yuhan Ji and Kenneth Mayer (2022)

Also, congratulations to Yuhao, who won the International Cartographic Association (ICA) Scholarship Award!

GeoAI at ACM SIGSPATIAL: progress, challenges, and future directions

Geospatial artificial intelligence (GeoAI) is an interdisciplinary field that has received tremendous attention from both academia and industry in recent years. We recently published an article that reviews the series of GeoAI workshops held at the Association for Computing Machinery (ACM) International Conference on Advances in Geographic Information Systems (SIGSPATIAL) since 2017. These workshops have provided researchers with a forum to present GeoAI advances covering a wide range of topics, such as geospatial image processing, transportation modeling, public health, and digital humanities. We provide a summary of these topics and the research articles presented at the 2017, 2018, and 2019 GeoAI workshops. We conclude with a list of open research directions for this rapidly advancing field.

Reference: Yingjie Hu, Song Gao, Dalton Lunga, Wenwen Li, Shawn Newsam, and Budhendra Bhaduri (2019): GeoAI at ACM SIGSPATIAL: progress, challenges, and future directions, ACM SIGSPATIAL Special, 11(2), 5-15. [PDF]

ACM SIGSPATIAL GeoAI Workshop Proceedings:

1st ACM SIGSPATIAL International Workshop on AI and Deep Learning for Geographic Knowledge Discovery (GeoAI’17). Redondo Beach, CA, USA – November 7, 2017. DOI: 10.1145/3178392.3178408 [PDF]

2nd ACM SIGSPATIAL International Workshop on AI and Deep Learning for Geographic Knowledge Discovery (GeoAI’18). Seattle, WA, USA – November 6, 2018. DOI: 10.1145/3307599.3307609 [PDF]

3rd ACM SIGSPATIAL International Workshop on AI and Deep Learning for Geographic Knowledge Discovery (GeoAI’19). Chicago, IL, USA – November 5, 2019. DOI: 10.1145/3356471 [PDF]

New IJGIS Editorial on GeoAI

Abstract: What is the current state-of-the-art in integrating results from artificial intelligence research into geographic information science and the earth sciences more broadly? Does GeoAI research contribute to the broader field of AI, or does it merely apply existing results? What are the historical roots of GeoAI? Are there core topics and maybe even moonshots that jointly drive this emerging community forward? In this editorial, we answer these questions by providing an overview of past and present work, explain how a change in data culture is fueling the rapid growth of GeoAI work, and point to future research directions that may serve as common measures of success.

Moonshot (Editorial): Can we develop an artificial GIS analyst that passes a domain-specific Turing Test by 2030?

Keywords: Spatial Data Science, GeoAI, Machine Learning, Knowledge Graphs, Geo-Semantics, Data Infrastructure

Acknowledgement: We sincerely thank all the reviewers who contributed their time to the peer-review process and ensured the quality of the accepted papers.

Special Issue Papers (to date):

Janowicz, K., Gao, S., McKenzie, G., Hu, Y., and Bhaduri, B. (2020, Editorial). GeoAI: Spatially Explicit Artificial Intelligence Techniques for Geographic Knowledge Discovery and Beyond. International Journal of Geographical Information Science, 34(4), 625-636.

Acheson, E., Volpi, M., & Purves, R. S. (2020). Machine learning for cross-gazetteer matching of natural features. International Journal of Geographical Information Science, 1-27.

Duan, W., Chiang, Y., Leyk, S., Uhl, J., & Knoblock, C. (2020). Automatic alignment of contemporary vector data and georeferenced historical maps using reinforcement learning. International Journal of Geographical Information Science, 1-27. DOI: 10.1080/13658816.2019.1698742.

Guo, Z., & Feng, C. C. (2020). Using multi-scale and hierarchical deep convolutional features for 3D semantic classification of TLS point clouds. International Journal of Geographical Information Science, 1-20.

Law, S., Seresinhe, C. I., Shen, Y., & Gutierrez-Roig, M. (2020). Street-Frontage-Net: urban image classification using deep convolutional neural networks. International Journal of Geographical Information Science, 1-27.

Li, W., & Hsu, C. Y. (2020). Automated terrain feature identification from remote sensing imagery: a deep learning approach. International Journal of Geographical Information Science, 1-24.

Ren, Y., Chen, H., Han, Y., Cheng, T., Zhang, Y., & Chen, G. (2020). A hybrid integrated deep learning model for the prediction of citywide spatio-temporal flow volumes. International Journal of Geographical Information Science, 1-22.

Sparks, K., Thakur, G., Pasarkar, A., & Urban, M. (2020). A global analysis of cities’ geosocial temporal signatures for points of interest hours of operation. International Journal of Geographical Information Science, 1-18.

Xie, Y., Cai, J., Bhojwani, R., Shekhar, S., & Knight, J. (2020). A locally-constrained YOLO framework for detecting small and densely-distributed building footprints. International Journal of Geographical Information Science, 1-25.

Zhu, D., Cheng, X., Zhang, F., Yao, X., Gao, Y., & Liu, Y. (2020). Spatial interpolation using conditional generative adversarial neural networks. International Journal of Geographical Information Science, 1-24.

Full Paper about “Map Style Transfer” accepted at the International Journal of Cartography

Our paper entitled “Transferring Multiscale Map Styles Using Generative Adversarial Networks” has been accepted for publication in the International Journal of Cartography.

DOI: 10.1080/23729333.2019.1615729

Authorship: Yuhao Kang, Song Gao, Robert E. Roth.

This paper proposes a methodological framework for transferring cartographic styles across different kinds of maps. Given raw GIS vector data as input, the system automatically renders the data in a target map style without requiring CartoCSS or Mapbox GL style specification sheets. Generative Adversarial Networks (GANs) are used to perform the style transfer. The study explores the potential of applying artificial intelligence to cartography in the era of GeoAI.

We outline several important directions for the use of AI in cartography moving forward. First, our use of GANs can be extended to other mapping contexts to help cartographers deconstruct the most salient stylistic elements that constitute the unique look and feel of existing designs, using this information to improve designs in future iterations. This research also can help non-experts who lack professional cartographic knowledge and experience generate reasonable cartographic style sheet templates from inspiration maps or visual art. Finally, integrating AI with cartographic design may automate part of the generalization process, a particularly promising avenue given the difficulty of updating high-resolution datasets and rendering new tilesets to support the ‘map of everywhere’.

Here is the abstract:

The advancement of the Artificial Intelligence (AI) technologies makes it possible to learn stylistic design criteria from existing maps or other visual arts and transfer these styles to make new digital maps. In this paper, we propose a novel framework using AI for map style transfer applicable across multiple map scales. Specifically, we identify and transfer the stylistic elements from a target group of visual examples, including Google Maps, OpenStreetMap, and artistic paintings, to unstylized GIS vector data through two generative adversarial network (GAN) models. We then train a binary classifier based on a deep convolutional neural network to evaluate whether the transfer styled map images preserve the original map design characteristics. Our experiment results show that GANs have a great potential for multiscale map style transferring, but many challenges remain requiring future research.

Examples of Map Style Transfer using Pix2Pix
Examples of Map Style Transfer using CycleGAN
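To give a flavor of how such a paired tile-to-tile translation model is trained, here is a minimal sketch of a pix2pix-style training step in PyTorch. This is not the authors’ implementation: the tiny generator and discriminator below stand in for the U-Net and PatchGAN architectures typically used, and the paired “plain tile → styled tile” inputs are random placeholders.

```python
# Minimal pix2pix-style sketch (not the paper's code): a generator maps a plainly
# rendered map tile to a target style; a discriminator judges (source, styled) pairs.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Toy encoder-decoder standing in for a U-Net generator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

class TinyDiscriminator(nn.Module):
    """Toy PatchGAN-style discriminator over concatenated (source, styled) tiles."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=1, padding=1),  # patch-level real/fake scores
        )

    def forward(self, source, styled):
        return self.net(torch.cat([source, styled], dim=1))

def train_step(gen, disc, opt_g, opt_d, source, target, l1_weight=100.0):
    """One adversarial + L1 update on a batch of paired 256x256 tiles."""
    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

    # Discriminator update: real pairs vs. generated pairs.
    fake = gen(source).detach()
    d_real, d_fake = disc(source, target), disc(source, fake)
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator update: fool the discriminator while staying close to the target style.
    fake = gen(source)
    d_fake = disc(source, fake)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + l1_weight * l1(fake, target)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

if __name__ == "__main__":
    gen, disc = TinyGenerator(), TinyDiscriminator()
    opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4, betas=(0.5, 0.999))
    opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4, betas=(0.5, 0.999))
    source = torch.rand(2, 3, 256, 256) * 2 - 1  # placeholder "plain" tiles in [-1, 1]
    target = torch.rand(2, 3, 256, 256) * 2 - 1  # placeholder styled tiles
    print(train_step(gen, disc, opt_g, opt_d, source, target))
```

The CycleGAN variant used in the paper works on unpaired tiles with cycle-consistency losses instead of the paired L1 term shown here.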

You can also visit the following links to see some of the trained results:

CycleGAN at zoom level 15: https://geods.geography.wisc.edu/style_transfer/cyclegan15/

CycleGAN at zoom level 18: https://geods.geography.wisc.edu/style_transfer/cyclegan18/

Pix2Pix at zoom level 15: https://geods.geography.wisc.edu/style_transfer/pix2pix15/

Pix2Pix at zoom level 18: https://geods.geography.wisc.edu/style_transfer/pix2pix18/

Dataset available (only the simple-styled maps are provided; the target-styled maps cannot be shared due to Google’s copyright); a minimal loading sketch follows the links below:

Level 15: Training, Test.

Level 18: Training, Test.
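If you download the simple-styled tiles and supply your own target-style tiles, a paired training set can be assembled by matching filenames. The sketch below assumes a hypothetical folder layout (source and target PNG tiles sharing names); it is not the released dataset’s actual structure.

```python
# Sketch of a paired-tile dataset loader under an assumed directory layout.
from pathlib import Path

import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset

class PairedTileDataset(Dataset):
    def __init__(self, source_dir, target_dir, size=256):
        self.source_dir, self.target_dir = Path(source_dir), Path(target_dir)
        # Keep only tiles that exist in both folders under the same filename.
        self.names = sorted(
            p.name for p in self.source_dir.glob("*.png")
            if (self.target_dir / p.name).exists()
        )
        self.size = size

    def __len__(self):
        return len(self.names)

    def _to_tensor(self, path):
        img = Image.open(path).convert("RGB").resize((self.size, self.size))
        x = torch.from_numpy(np.asarray(img, dtype=np.float32)).permute(2, 0, 1)
        return x / 127.5 - 1.0  # scale to [-1, 1] to match a Tanh generator output

    def __getitem__(self, idx):
        name = self.names[idx]
        return (self._to_tensor(self.source_dir / name),
                self._to_tensor(self.target_dir / name))

# Usage sketch (hypothetical paths):
# loader = torch.utils.data.DataLoader(
#     PairedTileDataset("tiles/level15/train/source", "tiles/level15/train/target"),
#     batch_size=4, shuffle=True)
```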

Research on Multi-Scale Spatio-temporal Analysis of Human Emotions

In our research, state-of-the-art computer vision and AI technologies are used to collect, store, and analyze human emotions and sentiment at different geographic scales. The work explores what emotions people express at different places and how, as well as why and how those emotions are influenced by environmental factors. Maps are used to visualize where people appear happier than at other locations. Traditional studies typically relied on questionnaires to investigate human emotions and socioeconomic factors, but it is now possible to collect signals of human emotion from large-scale user-generated data online, including tweets, emoji, photos, and articles. Human emotions are innate characteristics of human beings, and with computational methods it becomes possible to quantify these subjective experiences in an objective way. Doing so at scale requires a computational workflow that can handle large volumes of user-generated data and extract emotions from them efficiently. Below are several examples we are working on.

(1) Individual place scale: human emotions at different tourist attractions

In this study, we propose a novel framework for extracting human emotions from large-scale georeferenced photos at different places. After constructing places through spatial clustering of user-generated footprints collected from social media websites, online cognitive services are used to extract human emotions from facial expressions with state-of-the-art computer vision techniques, and two happiness metrics are defined to measure human emotions at different places. To validate the feasibility of the framework, we take 80 tourist attractions around the world as an example and generate a happiness ranking of places based on emotions computed from more than 2 million faces detected in over 6 million photos. Different kinds of geographic context are taken into consideration to examine the relationship between human emotions and environmental factors. Results show that much of the emotional variation across places can be explained by a few factors, such as openness. The research may offer insights on integrating human emotions to enrich the understanding of sense of place in geography and in place-based GIS.

The spatial distribution of 80 tourist sites and their associated emotion indices using facial expression.
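To make the place-construction step concrete, here is a minimal sketch, run on synthetic coordinates, of how photo footprints could be clustered into places with DBSCAN and how two illustrative per-place happiness metrics could be aggregated. The metric definitions below are assumptions for illustration, not the paper’s exact formulas.

```python
# Sketch: cluster georeferenced photos into "places" and aggregate happiness per place.
import numpy as np
from sklearn.cluster import DBSCAN

EARTH_RADIUS_KM = 6371.0

def cluster_places(lat, lon, eps_km=0.5, min_samples=30):
    """Label each georeferenced photo with a place id (-1 means noise)."""
    coords = np.radians(np.column_stack([lat, lon]))  # haversine metric expects radians
    db = DBSCAN(eps=eps_km / EARTH_RADIUS_KM, min_samples=min_samples,
                metric="haversine").fit(coords)
    return db.labels_

def happiness_metrics(place_labels, happiness_prob, smile_flag):
    """Illustrative per-place metrics: mean happiness probability and share of smiling faces."""
    out = {}
    for place in np.unique(place_labels):
        if place == -1:  # skip noise points
            continue
        mask = place_labels == place
        out[int(place)] = {
            "mean_happiness": float(np.mean(happiness_prob[mask])),
            "smile_share": float(np.mean(smile_flag[mask])),
            "n_faces": int(mask.sum()),
        }
    return out

# Usage sketch with synthetic data around one hypothetical attraction.
rng = np.random.default_rng(0)
lat = rng.normal(48.8584, 0.002, 500)
lon = rng.normal(2.2945, 0.002, 500)
labels = cluster_places(lat, lon)
metrics = happiness_metrics(labels,
                            happiness_prob=rng.uniform(0, 1, 500),
                            smile_flag=rng.integers(0, 2, 500))
print(metrics)
```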

(2) Urban scale: relationship between human emotion and stock market fluctuation in Manhattan

In this research, we examined whether emotions expressed by users in social media are influenced by a stock market index or can predict its fluctuations. We collected emotion data in Manhattan, New York City, by applying face detection and emotion recognition services to photos uploaded to Flickr. Each face’s emotion was described in 8 dimensions, and its location was recorded. An emotion score index was defined by combining all 8 emotion dimensions through principal component analysis. The correlation coefficients between stock market values and emotion scores are significant (R > 0.59, p < 0.01). Using Granger causality analysis for cause-and-effect detection, we found that users’ emotions are influenced by changes in stock market value. A multiple linear regression model was established (R-squared = 0.76) to explore the potential factors that influence the emotion score. Finally, a sensitivity map was created to show areas where human emotion is easily affected by stock market changes. We concluded that in the Manhattan region: (1) there is a statistically significant relationship between human emotion and stock market fluctuation; (2) emotion change follows the movements of the stock market; (3) Times Square and the Broadway Theatre area are the most sensitive regions in terms of public emotional reaction to the economy as represented by stock values.
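The analysis chain described above (PCA-based emotion score, correlation, Granger causality) can be sketched as follows on synthetic daily series; the column names, threshold choices, and maximum lag are illustrative assumptions, not the study’s actual settings.

```python
# Sketch of the PCA score + correlation + Granger causality analysis on synthetic data.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr
from sklearn.decomposition import PCA
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(42)
days = pd.date_range("2017-01-01", periods=200, freq="D")

# Placeholder inputs: daily means of 8 face-emotion dimensions and a stock index.
emotions = pd.DataFrame(rng.uniform(0, 1, (200, 8)), index=days,
                        columns=["anger", "contempt", "disgust", "fear",
                                 "happiness", "neutral", "sadness", "surprise"])
stock = pd.Series(100 + np.cumsum(rng.normal(0, 1, 200)), index=days, name="stock")

# 1) Collapse the 8 dimensions into one emotion score via the first principal component.
emotion_score = pd.Series(
    PCA(n_components=1).fit_transform(emotions.values).ravel(),
    index=days, name="emotion_score")

# 2) Correlation between the emotion score and the stock index.
r, p = pearsonr(emotion_score, stock)
print(f"Pearson r={r:.2f}, p={p:.3f}")

# 3) Granger causality: does the stock index help predict the emotion score?
#    (first column is the predicted series, second is the candidate cause)
granger_input = pd.concat([emotion_score, stock], axis=1)
results = grangercausalitytests(granger_input, maxlag=3)  # dict of test stats per lag
```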

(3) Global scale: global human emotions in different groups of people

In this research, we used a large global-scale image dataset, YFCC100M, to extract emotions from photos and describe the worldwide geographic patterns of human happiness. Two indices, the Average Smiling Index (ASI) and the Happiness Index (HI), are defined from different perspectives to describe the degree of human happiness in a specific region. We computed the spatio-temporal characteristics of facial-expression-based happiness on a global scale and linked them to demographic variables (ethnicity, gender, age, and nationality). A robustness analysis was then conducted to ensure that our results are reliable. The results are consistent with previous findings in the social sciences: for example, White and Black individuals often express happiness more visibly than Asian individuals, women are more expressive than men, and expressed happiness varies across space and time. Our research provides a novel methodology for emotion measurement that could be used to assess a region’s emotional conditions based on geo-crowdsourced data, and the robustness results indicate that our approaches are reliable and could be applied to other research projects on place-based human sentiment analysis.
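As an illustration of the region-level aggregation, the sketch below computes an ASI and an HI per region from per-face records. The exact definitions and the 0.5 threshold are illustrative assumptions rather than the paper’s formal formulas.

```python
# Sketch: aggregate per-face emotion records into region-level happiness indices.
import pandas as pd

def region_happiness(faces: pd.DataFrame) -> pd.DataFrame:
    """
    faces: one row per detected face with columns
      'country'    - region identifier
      'smile_prob' - smiling probability returned by the face-analysis service
      'happiness'  - happiness score among the emotion dimensions
    """
    grouped = faces.groupby("country")
    return pd.DataFrame({
        # Average Smiling Index (assumed): mean smiling probability in the region.
        "ASI": grouped["smile_prob"].mean(),
        # Happiness Index (assumed): share of faces whose happiness exceeds a threshold.
        "HI": grouped["happiness"].apply(lambda s: (s > 0.5).mean()),
        "n_faces": grouped.size(),
    })

# Usage sketch with a few synthetic records.
faces = pd.DataFrame({
    "country": ["US", "US", "CN", "CN", "BR"],
    "smile_prob": [0.9, 0.4, 0.2, 0.6, 0.8],
    "happiness": [0.8, 0.3, 0.1, 0.7, 0.9],
})
print(region_happiness(faces))
```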

For more information about this research, you can also visit: http://urbanplayground.cn/Emotion/

References:

Yuhao Kang, Qingyuan Jia, Song Gao, Xiaohuan Zeng, Yueyao Wang, Stephan Angsuesser, Yu Liu, Xinyue Ye, Teng Fei. (2019) Extracting Human Emotions at Different Places Based on Facial Expressions and Spatial Clustering Analysis. Transactions in GIS (in press).

Kang, Y., Wang, J., Wang, Y., Angsuesser, S. and Fei, T. (2017) Mapping the Sensitivity of the Public Emotion to the Movement of Stock Market Value: A Case Study of Manhattan. International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences, 42.

Kang, Y., Zeng, X., Zhang, Z., Wang, Y. and Fei, T. (2018, March) Who are happier? Spatio-temporal Analysis of Worldwide Human Emotion Based on Geo-Crowdsourcing Faces. In 2018 Ubiquitous Positioning, Indoor Navigation and Location-Based Services (UPINLBS) (pp. 1-8). IEEE.