Geospatial artificial intelligence (GeoAI) is an interdisciplinary field that has received tremendous attention from both academia and industry in recent years. We recently published an article that reviews the series of GeoAI workshops held at the Association for Computing Machinery (ACM) International Conference on Advances in Geographic Information Systems (SIGSPATIAL) since 2017. These workshops have provided researchers with a forum to present GeoAI advances covering a wide range of topics, such as geospatial image processing, transportation modeling, public health, and digital humanities. We provide a summary of these topics and the research articles presented at the 2017, 2018, and 2019 GeoAI workshops. We conclude with a list of open research directions for this rapidly advancing field.
Abstract: What is the current state-of-the-art in integrating results from artificial intelligence research into geographic information science and the earth sciences more broadly? Does GeoAI research contribute to the broader field of AI, or does it merely apply existing results? What are the historical roots of GeoAI? Are there core topics and maybe even moonshots that jointly drive this emerging community forward? In this editorial, we answer these questions by providing an overview of past and present work, explaining how a change in data culture is fueling the rapid growth of GeoAI work, and pointing to future research directions that may serve as common measures of success.
Moonshot (Editorial): Can we develop an artificial GIS analyst that passes a domain-specific Turing Test by 2030?
Keywords: Spatial Data Science, GeoAI, Machine Learning, Knowledge Graphs, Geo-Semantics, Data Infrastructure
Acknowledgement: We sincerely thank all the reviewers who contributed their time to the peer-review process and ensured the quality of the accepted papers.
This paper proposes a methodological framework for transferring cartographic styles across different kinds of maps. Given raw GIS vector data as input, the system automatically renders the data in a target map style without requiring CartoCSS or Mapbox GL style specification sheets. Generative adversarial networks (GANs) are used in this research. The study explores the potential of applying artificial intelligence to cartography in the era of GeoAI.
We outline several important directions for the use of AI in cartography moving forward. First, our use of GANs can be extended to other mapping contexts to help cartographers deconstruct the most salient stylistic elements that constitute the unique look and feel of existing designs, using this information to improve design in future iterations. This research can also help nonexperts who lack professional cartographic knowledge and experience to generate reasonable cartographic style sheet templates based on inspiration maps or visual art. Finally, integration of AI with cartographic design may automate part of the generalization process, a particularly promising avenue given the difficulty of updating high-resolution datasets and rendering new tilesets to support the 'map of everywhere'.
Here is the abstract:
Advances in artificial intelligence (AI) technologies make it possible to learn stylistic design criteria from existing maps or other visual arts and transfer these styles to new digital maps. In this paper, we propose a novel AI-based framework for map style transfer applicable across multiple map scales. Specifically, we identify and transfer the stylistic elements from a target group of visual examples, including Google Maps, OpenStreetMap, and artistic paintings, to unstylized GIS vector data through two generative adversarial network (GAN) models. We then train a binary classifier based on a deep convolutional neural network to evaluate whether the style-transferred map images preserve the original map design characteristics. Our experimental results show that GANs have great potential for multiscale map style transfer, but many challenges remain that require future research.
You can also visit the following links to see some of the trained results:
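As background (not specific to this paper's two models), the GAN models referenced in the abstract build on the standard adversarial objective, in which a generator G and a discriminator D play a minimax game:

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\bigl[\log D(x)\bigr]
  + \mathbb{E}_{z \sim p_z(z)}\bigl[\log\bigl(1 - D(G(z))\bigr)\bigr]
```

In a map style transfer setting of the kind described above, one plausible reading is that G maps an unstyled rendering of the GIS vector data to a styled raster tile, while D judges whether a tile appears to be drawn in the target style; the paper's exact conditioning and loss terms are not reproduced here.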
In our research, state-of-the-art computer vision and AI technologies are used to collect, store, handle, manipulate, and analyze human emotions and sentiment at different geographic scales. The research explores what emotions people express at different places, how they express them, and how environmental factors influence those emotions. Maps are used to visualize where people appear happier than at other locations. Traditional research relied mainly on questionnaires to investigate human emotions and socioeconomic factors, but today it is possible to collect human emotions from large-scale user-generated data online, including tweets, emoji, photos, and articles. Human emotions are innate characteristics of human beings, and with computer technology it is possible to quantify this subjective experience using objective methods. It is therefore important to build a computational workflow that can handle large volumes of user-generated data and extract emotions from those data efficiently. Here are several examples we are working on.
(1) Place scale: human emotions at tourist attractions around the world
In this study, we propose a novel framework for extracting human emotions from large-scale georeferenced photos at different places. After constructing places through spatial clustering of user-generated footprints collected from social media websites, we use online cognitive services to extract human emotions from facial expressions with state-of-the-art computer vision techniques. Two happiness metrics are then defined to measure human emotions at different places. To validate the feasibility of the framework, we take 80 tourist attractions around the world as an example and generate a happiness ranking of places based on human emotions computed over 2 million faces detected in more than 6 million photos. Different kinds of geographical contexts are taken into consideration to examine the relationship between human emotions and environmental factors. Results show that much of the emotional variation across places can be explained by a few factors, such as openness. The research may offer insights on integrating human emotions to enrich the understanding of sense of place in geography and in place-based GIS.
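The per-place aggregation step can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the place names, the happiness scores, the 0.5 smile threshold, and the two metric definitions (share of happy faces and mean happiness score) are all assumptions for demonstration.

```python
from collections import defaultdict

# Hypothetical face records: (place_id, happiness score in [0, 1]) as might be
# returned by a cloud face-analysis API. Values are made up for illustration.
faces = [
    ("eiffel_tower", 0.9), ("eiffel_tower", 0.2), ("eiffel_tower", 0.7),
    ("louvre", 0.4), ("louvre", 0.6),
]

def happiness_metrics(faces, smile_threshold=0.5):
    """Two illustrative per-place metrics: the share of happy faces and the
    mean happiness score (the paper's exact definitions may differ)."""
    by_place = defaultdict(list)
    for place, score in faces:
        by_place[place].append(score)
    metrics = {}
    for place, scores in by_place.items():
        share_happy = sum(s >= smile_threshold for s in scores) / len(scores)
        mean_score = sum(scores) / len(scores)
        metrics[place] = (share_happy, mean_score)
    return metrics

# Rank places by mean happiness score, highest first.
ranking = sorted(happiness_metrics(faces).items(),
                 key=lambda kv: kv[1][1], reverse=True)
```

Scaling this from five toy records to millions of detected faces is mainly an engineering exercise in batching the API calls and aggregating incrementally.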
(2) Urban scale: relationship between human emotion and stock market fluctuation in Manhattan
In this research, we examined whether the emotions users express in social media are influenced by a stock market index, or can predict its fluctuations. We collected emotion data in Manhattan, New York City by applying face detection and emotion recognition services to photos uploaded to Flickr. Each face's emotion was described in eight dimensions, and its location was also recorded. An emotion score index was defined by combining all eight emotion dimensions through principal component analysis. The correlation coefficients between stock market values and emotion scores are significant (R > 0.59 with p < 0.01). Using Granger causality analysis for cause-and-effect detection, we found that users' emotions are influenced by stock market value changes. A multiple linear regression model was established (R-square = 0.76) to explore the potential factors that influence the emotion score. Finally, a sensitivity map was created to show areas where human emotion is easily affected by stock market changes. We concluded that in the Manhattan region: (1) there is a statistically significant relationship between human emotion and stock market fluctuation; (2) emotion change follows the movements of the stock market; and (3) Times Square and the Broadway Theatre area are the most sensitive regions in terms of public emotional reaction to the economy as represented by stock values.
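The dimensionality-reduction step described above can be sketched with NumPy: project each day's eight-dimensional emotion vector onto the first principal component to obtain a single daily score, then correlate that score with a market series. The data here is synthetic and the specific emotion dimensions are assumptions; only the PCA-to-score idea follows the text.

```python
import numpy as np

# Hypothetical input: one row per day, eight emotion dimensions per row
# (e.g. anger, contempt, disgust, fear, happiness, neutral, sadness,
# surprise) averaged over the faces detected that day. Synthetic values.
rng = np.random.default_rng(0)
daily_emotions = rng.random((30, 8))      # 30 days x 8 emotion dimensions

def emotion_score(X):
    """Project each row onto the first principal component of the centered
    data, yielding a single emotion score index per day."""
    Xc = X - X.mean(axis=0)               # center each emotion dimension
    # First principal component = top right-singular vector of centered data.
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[0]

scores = emotion_score(daily_emotions)

# Pearson correlation between the daily score and a (synthetic) stock series.
stock_index = rng.random(30)
r = np.corrcoef(scores, stock_index)[0, 1]
```

For the lead-lag question the abstract answers with Granger causality, one would additionally regress today's scores on lagged stock values (e.g. `statsmodels`' Granger causality test) rather than relying on contemporaneous correlation alone.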
(3) Global scale: global human emotions across different groups of people
In this research, we used a huge global-scale image dataset, YFCC100M, to extract emotions from photos and describe the worldwide geographic patterns of human happiness. Two indices, the Average Smiling Index (ASI) and the Happiness Index (HI), are defined from different perspectives to describe the degree of human happiness in a specific region. We computed the spatio-temporal characteristics of facial-expression-based happiness on a global scale and linked them to demographic variables (ethnicity, gender, age, and nationality). A robustness analysis was then performed to ensure our results are reliable. The results are in accordance with previous studies in social science: for example, White and Black individuals often express happiness more readily than Asian individuals, women are more expressive than men, and expressed happiness varies across space and time. Our research provides a novel methodology for emotion measurement that could be used to assess a region's emotional conditions based on geo-crowdsourced data. The robustness analysis indicates that our approaches are reliable and could be applied to other research projects on place-based human sentiment analysis.
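One plausible reading of the two regional indices can be sketched as follows. The exact definitions of ASI and HI in the study are not reproduced here; this sketch assumes ASI is the fraction of smiling faces detected in a region and HI is the mean happiness confidence, and the per-face records and country codes are invented for illustration.

```python
from collections import defaultdict

# Hypothetical per-face records from geotagged photos:
# (country, smiling: bool, happiness confidence in [0, 1]). Synthetic values.
faces = [
    ("US", True, 0.8), ("US", False, 0.1), ("JP", True, 0.6),
    ("JP", False, 0.2), ("JP", False, 0.3),
]

def regional_indices(faces):
    """Assumed definitions: ASI = share of smiling faces in a region,
    HI = mean happiness confidence across faces in that region."""
    grouped = defaultdict(list)
    for country, smiling, confidence in faces:
        grouped[country].append((smiling, confidence))
    out = {}
    for country, rows in grouped.items():
        asi = sum(smiling for smiling, _ in rows) / len(rows)
        hi = sum(conf for _, conf in rows) / len(rows)
        out[country] = {"ASI": asi, "HI": hi}
    return out
```

Grouping by additional keys (gender, age bracket, or month) instead of country alone yields the demographic and temporal breakdowns described in the text.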