Dr. Qinlong Luo

Industrial Research Scientist

Global Chief Data Office, IBM
6303 Barfield Rd
Atlanta, GA, 30328
E-mail: qinlong.luo@ibm.com

Dr. Qinlong Luo is a Data Scientist at IBM. He received his PhD in Computational Physics from the University of Tennessee, Knoxville. He is interested in applying physics techniques and concepts to machine learning and data science. The research projects Dr. Luo has been working on include:

Markov Chain Monte Carlo (MCMC) and Clustering: MCMC is widely used in computational physics as an optimization method for solving physics models. Clustering is a form of unsupervised learning in machine learning, and many clustering problems can be framed as optimization problems, so MCMC can be applied to a variety of them. For instance, Word Sense Disambiguation (WSD) is one of the most important topics in Natural Language Processing. Automatically determining how many distinct meanings an ambiguous word can refer to can be treated as a clustering problem and then converted into an optimization problem with a customized objective function, and MCMC is a well-suited methodology for optimization problems of this kind. How to scale this solution to Big Data is another interesting topic. Dr. Luo has designed and implemented a large-scale disambiguation system that identifies and disambiguates multi-sense skills using Markov Chain Monte Carlo (MCMC). More details can be found in his paper (Macau: Large-Scale Skill Sense Disambiguation in the Online Recruitment Domain).
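The MCMC-as-optimizer idea above can be sketched in a small toy. The function, objective, and data below are illustrative assumptions (not the Macau system): cluster assignments are sampled with the Metropolis rule to minimize the within-cluster sum of squared distances.

```python
import numpy as np

def mcmc_cluster(points, k, n_steps=5000, temperature=1.0, seed=0):
    """Assign each point to one of k clusters by minimizing the
    within-cluster sum of squared distances with Metropolis sampling."""
    rng = np.random.default_rng(seed)
    n = len(points)
    labels = rng.integers(0, k, size=n)  # random initial assignment

    def energy(lab):
        # Objective: total squared distance of points to their cluster mean
        total = 0.0
        for c in range(k):
            members = points[lab == c]
            if len(members) > 0:
                total += ((members - members.mean(axis=0)) ** 2).sum()
        return total

    e = energy(labels)
    for _ in range(n_steps):
        i = rng.integers(n)        # pick a random point
        new_c = rng.integers(k)    # propose moving it to a random cluster
        if new_c == labels[i]:
            continue
        proposal = labels.copy()
        proposal[i] = new_c
        e_new = energy(proposal)
        # Metropolis acceptance: always take downhill moves, take uphill
        # moves with probability exp(-dE / T)
        if e_new < e or rng.random() < np.exp(-(e_new - e) / temperature):
            labels, e = proposal, e_new
    return labels

# Two well-separated point blobs; the chain should recover them as two pure clusters.
rng = np.random.default_rng(1)
pts = np.vstack([np.zeros((10, 2)), 10 + np.zeros((10, 2))])
pts += rng.normal(scale=0.1, size=pts.shape)
labels = mcmc_cluster(pts, k=2)
```

Because the acceptance rule occasionally takes uphill moves, the chain can escape local minima that a purely greedy assignment would get stuck in; lowering the temperature over time (simulated annealing) is a common refinement.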

Restricted Boltzmann Machine (RBM) and Recommendation Engines: The RBM is one of the best-known Deep Learning (DL) algorithms and can be used in commercial recommendation systems. It was inspired by concepts from physics (energy, the Boltzmann distribution, the partition function), and Dr. Luo is interested in: (1) replacing the Boltzmann distribution with other distributions and partition functions from statistical mechanics, to build new recommendation algorithms; (2) replacing contrastive divergence or gradient descent with MCMC as the optimization method for training RBMs. Contrastive divergence, the standard way to train RBMs for recommendation engines, is an "approximate" form of gradient descent, whereas MCMC can be leveraged as an "exact" methodology for optimizing an RBM.
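A minimal sketch of the standard training baseline mentioned above, a Bernoulli-Bernoulli RBM trained with one-step contrastive divergence (CD-1), on a hypothetical toy user-item matrix (the class, hyperparameters, and data are assumptions for illustration):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Bernoulli-Bernoulli RBM trained with one-step contrastive divergence."""

    def __init__(self, n_visible, n_hidden, seed=0):
        self.rng = np.random.default_rng(seed)
        self.W = self.rng.normal(scale=0.1, size=(n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)  # visible biases
        self.b_h = np.zeros(n_hidden)   # hidden biases

    def train(self, data, lr=0.1, epochs=2000):
        for _ in range(epochs):
            # Positive phase: clamp the data, sample the hidden units
            p_h = sigmoid(data @ self.W + self.b_h)
            h = (self.rng.random(p_h.shape) < p_h).astype(float)
            # Negative phase: one Gibbs step back to the visible layer
            p_v = sigmoid(h @ self.W.T + self.b_v)
            v = (self.rng.random(p_v.shape) < p_v).astype(float)
            p_h2 = sigmoid(v @ self.W + self.b_h)
            # CD-1 gradient estimate: <v h>_data - <v h>_reconstruction
            self.W += lr * (data.T @ p_h - v.T @ p_h2) / len(data)
            self.b_v += lr * (data - v).mean(axis=0)
            self.b_h += lr * (p_h - p_h2).mean(axis=0)

    def reconstruct(self, v):
        # Mean-field pass visible -> hidden -> visible; in a recommender,
        # the reconstructed probabilities score unseen items for the user.
        p_h = sigmoid(v @ self.W + self.b_h)
        return sigmoid(p_h @ self.W.T + self.b_v)

# Toy binary ratings: two taste groups over six items
data = np.array([[1, 1, 1, 0, 0, 0]] * 5 + [[0, 0, 0, 1, 1, 1]] * 5, dtype=float)
rbm = RBM(n_visible=6, n_hidden=2)
rbm.train(data)
recon = rbm.reconstruct(data)
```

The negative phase is where the "approximate vs. exact" distinction lives: CD-1 truncates the Gibbs chain after a single step, whereas an MCMC-based trainer would run the chain long enough to sample from the model distribution before estimating the gradient.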
