
Graphical Modeling with the Bethe Approximation


Tony Jebara
Columbia University

May 25, 2015 at  2:00 PM
McConnell Engineering Room 103

Abstract:

Inference, a canonical problem in graphical modeling, recovers a probability distribution over a subset of variables in a given model. It is known to be NP-hard for graphical models with cycles and large tree-width. Learning, another canonical problem, reduces to iterative marginal inference and is therefore also NP-hard. How can we efficiently tackle these problems in practice?

We will discuss the Bethe free energy as an approximation to the intractable partition function. Heuristics such as loopy belief propagation (LBP) are often used to optimize the Bethe free energy, but LBP may not converge at all and, when it does, may not reach a global optimum. For marginal inference, we instead explore a more principled treatment of the Bethe free energy using discrete optimization. We show that in attractive loopy models we can find the global optimum in polynomial time, even though the resulting landscape is non-convex. To generalize to mixed loopy models, we use double-cover methods that bound the true Bethe global optimum in polynomial time.

Finally, for learning, we combine the Bethe approximation with a Frank-Wolfe algorithm in the convex dual, which circumvents the intractable partition function. The result is a new single-loop learning algorithm that is more efficient than previous double-loop methods, which interleaved iterative inference with iterative parameter updates.

We show applications of these methods in friendship link recommendation, social influence estimation, computer vision, and power networks. We also combine the approaches with sparse structure learning to model several years of Bloomberg data. These graphical models capture financial and macro-economic variables and their response to news and social media topics.
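For orientation, here is one standard textbook form of the Bethe free energy for a pairwise model; the notation (singleton beliefs b_i, pairwise beliefs b_ij, potentials psi, node degrees d_i) is background convention, not material from the talk itself:

\[
F_{\mathrm{Bethe}}(b) =
-\sum_{(i,j)\in E}\sum_{x_i,x_j} b_{ij}(x_i,x_j)\log\psi_{ij}(x_i,x_j)
-\sum_{i\in V}\sum_{x_i} b_i(x_i)\log\psi_i(x_i)
+\sum_{(i,j)\in E}\sum_{x_i,x_j} b_{ij}(x_i,x_j)\log b_{ij}(x_i,x_j)
-\sum_{i\in V}(d_i-1)\sum_{x_i} b_i(x_i)\log b_i(x_i),
\]

where minimizing over beliefs satisfying the local consistency constraints \( \sum_{x_j} b_{ij}(x_i,x_j) = b_i(x_i) \) gives the approximation \( \log Z \approx -\min_b F_{\mathrm{Bethe}}(b) \).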
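As a concrete illustration of the LBP heuristic mentioned in the abstract, the following is a minimal sum-product sketch on a small attractive cycle; the graph, potentials, and iteration budget are illustrative choices, not taken from the talk.

import numpy as np

# Cycle over three binary variables; edges as ordered pairs.
edges = [(0, 1), (1, 2), (2, 0)]
n = 3

# Unary potentials psi_i(x_i) and a shared attractive pairwise
# potential psi_ij(x_i, x_j) that rewards agreement (illustrative values).
unary = np.array([[1.0, 2.0], [2.0, 1.0], [1.5, 1.5]])
pairwise = np.array([[2.0, 1.0], [1.0, 2.0]])

neighbors = {i: [] for i in range(n)}
for (i, j) in edges:
    neighbors[i].append(j)
    neighbors[j].append(i)

# messages[(i, j)] is the message from i to j, initialised uniform.
messages = {}
for (i, j) in edges:
    messages[(i, j)] = np.ones(2) / 2
    messages[(j, i)] = np.ones(2) / 2

for _ in range(50):  # fixed iteration budget; LBP need not converge
    new = {}
    for (i, j) in messages:
        # Product of unary potential and incoming messages to i,
        # excluding the message that came from j.
        prod = unary[i].copy()
        for k in neighbors[i]:
            if k != j:
                prod *= messages[(k, i)]
        # m_{i->j}(x_j) = sum_{x_i} psi_ij(x_i, x_j) * prod(x_i)
        msg = pairwise.T @ prod
        new[(i, j)] = msg / msg.sum()  # normalise for numerical stability
    messages = new

# Approximate marginals (beliefs) b_i(x_i).
for i in range(n):
    b = unary[i].copy()
    for k in neighbors[i]:
        b *= messages[(k, i)]
    print(f"node {i}: belief {b / b.sum()}")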
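Similarly, to illustrate the Frank-Wolfe (conditional gradient) family named in the abstract, here is a generic sketch on a toy least-squares objective over the probability simplex; the objective, step-size rule, and all parameters are standard illustrative choices and not the talk's learning setup.

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

def grad(x):
    # Gradient of f(x) = 0.5 * ||A x - b||^2
    return A.T @ (A @ x - b)

x = np.full(5, 1.0 / 5)  # start at the simplex barycentre
for t in range(200):
    g = grad(x)
    # Linear minimization oracle over the simplex: the minimizing
    # vertex is the coordinate with the smallest gradient entry.
    s = np.zeros(5)
    s[np.argmin(g)] = 1.0
    gamma = 2.0 / (t + 2)  # standard diminishing step size
    x = (1 - gamma) * x + gamma * s

print("solution on the simplex:", np.round(x, 3))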

Bio:

Tony Jebara is Associate Professor of Computer Science at Columbia University, where he chairs the Center on Foundations of Data Science and directs the Columbia Machine Learning Laboratory. His research intersects computer science and statistics to develop new frameworks for learning from data, with applications in social networks, spatio-temporal data, vision, and text. Jebara has founded and advised several startups, including Sense Networks (acquired by yp.com), Evidation Health, Agolo, Ufora, and Bookt (acquired by RealPage, NASDAQ: RP). He has published over 100 peer-reviewed papers in conferences, workshops, and journals including NIPS, ICML, UAI, COLT, JMLR, CVPR, ICCV, and AISTATS. He is the author of the book Machine Learning: Discriminative and Generative and co-inventor on multiple patents in vision, learning, and spatio-temporal modeling.

In 2004, Jebara received the CAREER Award from the National Science Foundation. His work was recognized with a best paper award at the 26th International Conference on Machine Learning, a best student paper award at the 20th International Conference on Machine Learning, and an outstanding contribution award from the Pattern Recognition Society in 2001. Jebara's research has been featured on television (ABC, BBC, New York One, TechTV, etc.) as well as in the popular press (New York Times, Slashdot, Wired, Businessweek, IEEE Spectrum, etc.). He obtained his PhD from MIT in 2002, and Esquire magazine named him one of their Best and Brightest of 2008. Jebara has taught machine learning to well over 1000 students in physical classrooms.

Jebara was a Program Chair for the 31st International Conference on Machine Learning (ICML) in 2014. He was Action Editor for the Journal of Machine Learning Research from 2009 to 2013, Associate Editor of Machine Learning from 2007 to 2011, and Associate Editor of IEEE Transactions on Pattern Analysis and Machine Intelligence from 2010 to 2012. In 2006, he co-founded the NYAS Machine Learning Symposium and has served on its steering committee since then.