The UCSB forum on spatial technologies presents

Space2Vec: Multi-Scale Representation Learning for Spatial Feature Distributions using Grid Cells

Gengchen Mai

STKO Lab

University of California, Santa Barbara

11:30 a.m. (PT)

Tuesday, Dec 1, 2020

Zoom: https://ucsb.zoom.us/meeting/register/tZUtcu6rrzosE9VEnfMGptAxbxe4zhNILoHY

Upon registration, you will receive the meeting link.

Abstract. Unsupervised text encoding models have recently fueled substantial progress in NLP. The key idea is to use neural networks to convert words in texts to vector space representations (embeddings) based on word positions in a sentence and their contexts, which are suitable for end-to-end training on downstream tasks. We see a strikingly similar situation in spatial analysis, which focuses on incorporating both the absolute positions and the spatial contexts of geographic objects, such as POIs, into models. A general-purpose representation model for space would be valuable for a multitude of tasks. However, no such general model exists to date beyond simply applying discretization or feed-forward nets to coordinates, and little effort has been put into jointly modeling distributions with vastly different characteristics, which commonly emerge in GIS data. Meanwhile, Nobel Prize-winning neuroscience research shows that grid cells in mammals provide a multi-scale periodic representation that functions as a metric for location encoding and is critical for recognizing places and for path integration. We therefore propose a representation learning model called Space2Vec to encode the absolute positions and spatial relationships of places. We conduct experiments on two real-world geographic datasets for two different tasks: (1) predicting the types of POIs given their positions and context, and (2) image classification leveraging geo-locations. The results show that, because of its multi-scale representations, Space2Vec outperforms well-established ML approaches such as RBF kernels, multi-layer feed-forward nets, and tile embedding approaches on both location modeling and image classification. Detailed analysis shows that each baseline can handle distributions well at only one scale but performs poorly at other scales. In contrast, Space2Vec's multi-scale representation can handle distributions at different scales.
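To give a flavor of the grid-cell idea described in the abstract, the following is a minimal sketch of a multi-scale sinusoidal location encoder. It is an illustrative approximation, not the paper's exact implementation: the function name, scale parameters, and the choice of three projection directions 120 degrees apart (echoing the hexagonal firing pattern of grid cells) are assumptions here, and Space2Vec itself additionally feeds such features into trainable feed-forward layers.

```python
import numpy as np

def grid_cell_encode(coords, num_scales=8, min_radius=1.0, max_radius=1000.0):
    """Multi-scale sinusoidal location encoding (illustrative sketch).

    coords: (N, 2) array of (x, y) positions.
    Returns an (N, num_scales * 6) embedding: at each scale, each point is
    projected onto three unit vectors 120 degrees apart and passed through
    sin/cos, with wavelengths growing geometrically between min_radius and
    max_radius so that both fine and coarse spatial structure is captured.
    """
    coords = np.asarray(coords, dtype=float)
    # Three directions 120 degrees apart, mimicking grid-cell firing fields.
    angles = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
    dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # (3, 2)
    g = max_radius / min_radius  # ratio spanned by the geometric progression
    feats = []
    for s in range(num_scales):
        wavelength = min_radius * g ** (s / max(num_scales - 1, 1))
        proj = coords @ dirs.T / wavelength  # (N, 3) scaled projections
        feats.append(np.sin(proj))
        feats.append(np.cos(proj))
    return np.concatenate(feats, axis=1)
```

Because each scale contributes its own periodic features, a downstream model can pick up fine-grained structure (e.g. dense urban POI clusters) from the short wavelengths and coarse structure from the long ones, which is the intuition behind the multi-scale results discussed above.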

 

Bio. Gengchen Mai is a Ph.D. candidate at the Space and Time for Knowledge Organization Lab in the Department of Geography, University of California, Santa Barbara. His Ph.D. adviser is Prof. Krzysztof Janowicz. His research interests include Machine Learning/Deep Learning, GIScience, Geographic Question Answering, NLP, Geographic Information Retrieval, Knowledge Graphs, and the Semantic Web. Currently, Mai's research focuses on Geographic Question Answering and Spatially-Explicit Machine Learning models. He received his B.S. degree in Geographic Information Systems from Wuhan University. Thus far, he has completed four AI/ML research-based internships at Esri Inc., SayMosaic Inc., Apple Maps, and Google X. He now serves as a machine learning consultant/advisor for Google X.

The objectives of the ThinkSpatial brown-bag presentations are to exchange ideas about spatial perspectives in research and teaching, to broaden communication and cooperation across disciplines among faculty and graduate students, and to encourage the sharing of tools and concepts.

Please contact Marcela Suarez (amsuarez@ucsb.edu) or Karen Doehner (kdoehner@spatial.ucsb.edu) to review and schedule possible discussion topics or presentations that share your disciplinary interest in spatial thinking.

Follow spatial@ucsb on Twitter | Google+ | Google Calendar
