Full Paper about “Trajectory Reconstruction” accepted at Computers, Environment and Urban Systems

Citation info: Mingxiao Li, Song Gao, Feng Lu, Hengcai Zhang. (2019) Reconstruction of human movement trajectories from large-scale low-frequency mobile phone data. Computers, Environment and Urban Systems, Volume 77, September 2019, 101346. DOI: 10.1016/j.compenvurbsys.2019.101346

Abstract

Understanding human mobility is important in many fields, such as geography, urban planning, transportation, and sociology. Owing to their wide spatiotemporal coverage and low operational cost, mobile phone data have been recognized as a major resource for human mobility research. However, the sparsity of mobile phone data conflicts with the fine spatiotemporal scales that such research requires, which makes trajectory reconstruction considerably important. Although there have been initial studies on this problem, existing methods rarely consider the effect of similarities among individuals and the spatiotemporal patterns of missing data. To address this issue, we propose a multi-criteria data partitioning trajectory reconstruction (MDP-TR) method for large-scale mobile phone data. In the proposed method, a multi-criteria data partitioning (MDP) technique is used to measure the similarity among individuals in near real-time and to investigate the spatiotemporal patterns of missing data. With this technique, trajectory reconstruction from mobile phone data is then conducted using classic machine learning models. We verified the method on a real mobile phone dataset covering one million individuals with over 15 million trajectories in a large city. The results indicate that the MDP-TR method outperforms competing methods in both accuracy and robustness. We argue that the MDP-TR method can be effectively utilized to capture highly dynamic human movement and to improve the spatiotemporal resolution of human mobility research.
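The paper details the full MDP-TR pipeline; as a rough intuition for the similarity idea behind it, the sketch below fills a user's missing time slots by majority vote among that user's most similar peers. This is a minimal sketch under our own simplifying assumptions (cell-indexed time slots, a co-location similarity measure, k-nearest-user voting), not the authors' implementation.

```python
# Illustrative similarity-based trajectory reconstruction (NOT the authors'
# MDP-TR method): fill a user's missing time slots from similar users.
import numpy as np

def cell_similarity(seq_a, seq_b):
    """Fraction of time slots where two users occupy the same cell,
    ignoring slots where either observation is missing (coded as -1)."""
    a, b = np.asarray(seq_a), np.asarray(seq_b)
    valid = (a >= 0) & (b >= 0)
    return float((a[valid] == b[valid]).mean()) if valid.any() else 0.0

def reconstruct(target, candidates, k=3):
    """Fill missing slots (-1) in `target` with the majority cell among the
    k most similar candidate users that are observed in that slot."""
    target = np.asarray(target).copy()
    sims = [cell_similarity(target, c) for c in candidates]
    order = np.argsort(sims)[::-1]          # most similar users first
    for t in np.where(target == -1)[0]:
        votes = [candidates[i][t] for i in order[:k] if candidates[i][t] >= 0]
        if votes:
            target[t] = max(set(votes), key=votes.count)  # majority vote
    return target

# Toy example: four time slots, cell ids 0..3, -1 = missing observation.
user = [2, -1, 3, -1]
others = [[2, 2, 3, 1], [0, 1, 0, 1], [2, 2, 3, 1]]
print(reconstruct(user, others))  # -> [2 2 3 1]
```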

Full Paper about “Map Style Transfer” accepted at the International Journal of Cartography

Our paper entitled “Transferring Multiscale Map Styles Using Generative Adversarial Networks” has been accepted for publication in the International Journal of Cartography.

DOI: 10.1080/23729333.2019.1615729

Authorship: Yuhao Kang, Song Gao, Robert E. Roth.

This paper proposes a methodological framework for transferring cartographic styles across different kinds of maps. Given raw GIS vector data as input, the system automatically renders the data in a target map style without requiring CartoCSS or Mapbox GL style specification sheets. Generative adversarial networks (GANs) are used in this research. The study explores the potential of applying artificial intelligence to cartography in the era of GeoAI.
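For readers unfamiliar with the underlying technique, below is a minimal paired image-to-image GAN training step in the spirit of pix2pix, written in PyTorch. The tiny networks, the random tensors standing in for map tiles, and the L1 loss weight are our illustrative assumptions, not the paper's architecture or hyperparameters.

```python
# Minimal pix2pix-style training step (illustrative sketch, not the paper's
# exact model): translate an unstyled map tile into a target-styled tile.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Tiny encoder-decoder standing in for the pix2pix U-Net."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """PatchGAN-style discriminator over the (input, output) tile pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, padding=1),   # per-patch real/fake logits
        )
    def forward(self, src, tgt):
        return self.net(torch.cat([src, tgt], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

plain  = torch.rand(4, 3, 64, 64)   # unstyled rendering of GIS vector data
styled = torch.rand(4, 3, 64, 64)   # same tile in the target style

# Discriminator update: real pairs vs. generated pairs.
fake = G(plain).detach()
real_logits, fake_logits = D(plain, styled), D(plain, fake)
d_loss = bce(real_logits, torch.ones_like(real_logits)) + \
         bce(fake_logits, torch.zeros_like(fake_logits))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator update: fool D while staying close to the target (L1 term).
fake = G(plain)
pred = D(plain, fake)
g_loss = bce(pred, torch.ones_like(pred)) + 100.0 * l1(fake, styled)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```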

We outline several important directions for the use of AI in cartography moving forward. First, our use of GANs can be extended to other mapping contexts to help cartographers deconstruct the most salient stylistic elements that constitute the unique look and feel of existing designs, using this information to improve designs in future iterations. This research also can help nonexperts who lack professional cartographic knowledge and experience to generate reasonable cartographic style sheet templates based on inspiration maps or visual art. Finally, integrating AI with cartographic design may automate part of the generalization process, a particularly promising avenue given the difficulty of updating high-resolution datasets and rendering new tilesets to support the 'map of everywhere'.

Here is the abstract:

The advancement of Artificial Intelligence (AI) technologies makes it possible to learn stylistic design criteria from existing maps or other visual arts and to transfer these styles to new digital maps. In this paper, we propose a novel AI-based framework for map style transfer applicable across multiple map scales. Specifically, we identify and transfer the stylistic elements from a target group of visual examples, including Google Maps, OpenStreetMap, and artistic paintings, to unstylized GIS vector data through two generative adversarial network (GAN) models. We then train a binary classifier based on a deep convolutional neural network to evaluate whether the transfer-styled map images preserve the original map design characteristics. Our experimental results show that GANs have great potential for multiscale map style transfer, but many challenges remain that require future research.
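The binary evaluation classifier mentioned in the abstract can be sketched as a small CNN; the architecture and names below are our assumptions for illustration, not the paper's exact model.

```python
# Illustrative binary CNN for judging whether a styled tile keeps the
# target look-and-feel (a stand-in for the paper's evaluation classifier).
import torch
import torch.nn as nn

class StyleClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),        # global average pooling
        )
        self.head = nn.Linear(32, 1)        # logit: target style vs. not

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = StyleClassifier()
loss_fn = nn.BCEWithLogitsLoss()
tiles = torch.rand(8, 3, 64, 64)                 # batch of map tiles
labels = torch.randint(0, 2, (8, 1)).float()     # 1 = authentic target style
loss = loss_fn(model(tiles), labels)
loss.backward()                                  # one illustrative step
```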

Figure: Examples of Map Style Transfer using Pix2Pix
Figure: Examples of Map Style Transfer using CycleGAN

You can also visit the following links to see some of the trained results:

CycleGAN at zoom level 15: https://geods.geography.wisc.edu/style_transfer/cyclegan15/

CycleGAN at zoom level 18: https://geods.geography.wisc.edu/style_transfer/cyclegan18/

Pix2Pix at zoom level 15: https://geods.geography.wisc.edu/style_transfer/pix2pix15/

Pix2Pix at zoom level 18: https://geods.geography.wisc.edu/style_transfer/pix2pix18/

Dataset available (only the simply styled source maps are provided; the target styled maps cannot be shared due to Google's copyright):

Level 15: Training, Test.

Level 18: Training, Test.