
Swaroop Vattam

I am a technical staff member at MIT's Lincoln Laboratory, where I am part of the Human Language Technology group. Prior to joining MIT, I was a research scientist in the College of Computing at Georgia Tech and a postdoctoral fellow at NRL (fellowship awarded by the National Academies). I received my PhD in Computer Science from Georgia Tech under the supervision of Ashok Goel; my dissertation won the distinguished dissertation award in 2012. My main research interests are machine reasoning, machine learning, and natural language processing. To view all of my publications on Google Scholar, please click here.




:20 July 2016

A lot of datasets these days come in the form of large, complex networks. They describe intricate systems, with entities modeled as nodes and their relationships as edges. These networks contain a wealth of information, but that information is often hidden within network patterns that are difficult to uncover. Deciphering these patterns is crucial, yet exact computational analyses of large networks are often intractable: many questions we ask of them cannot be answered exactly in any practical amount of time. Our only hope is to answer such questions approximately (i.e., heuristically) and to prove how far the approximate answer can be from the exact one in the worst case. So far, the majority of scalable heuristics for network analysis consider only simple descriptors of a network, such as node count and degree, capturing only its lower-order connectivity patterns. These suffer from a severe limitation: two networks can appear similar when viewed through the lens of simple descriptors, yet reveal very different connectivity structures when we change the lens to include higher-order descriptors called graphlets (small subgraphs).

While this limitation has been known for some time, it was a tough nut to crack: a scalable technique for uncovering the higher-order organization of large networks, at the level of graphlets, was not on the horizon. So I was pleasantly surprised to come across a recent article by Benson et al., "Higher-order organization of complex networks" (Science, 2016), which proposes a framework for clustering networks on the basis of higher-order connectivity patterns, with guarantees on the optimality of the obtained clusters, and which scales to networks with billions of edges. This is an amazing result. Kudos to Benson et al. A toy sketch of the idea follows the reference below.
  • Benson, A. R., Gleich, D. F., & Leskovec, J. (2016). Higher-order organization of complex networks. Science, 353(6295), 163-166.
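Here is a minimal sketch of the motif-based clustering idea for the simplest interesting graphlet, the triangle. To be clear, this is my own toy illustration, not the authors' code: the paper extracts clusters with a sweep cut that carries a motif-conductance guarantee, whereas I simply split on the sign of the Fiedler vector of the motif Laplacian, and all function names below are made up.

```python
# Toy motif-based spectral clustering (triangle motif), after the idea in
# Benson et al. (2016). Illustrative only; the names and the sign-based
# split are my own simplifications.
import numpy as np
import networkx as nx

def triangle_motif_adjacency(A):
    """W[i, j] = number of triangles containing edge (i, j)."""
    # (A @ A)[i, j] counts length-2 paths from i to j; multiplying
    # elementwise by A keeps the count only where edge (i, j) exists.
    return (A @ A) * A

def motif_fiedler_split(G):
    A = nx.to_numpy_array(G)
    W = triangle_motif_adjacency(A)
    d = W.sum(axis=1)
    d[d == 0] = 1.0                          # nodes in no triangle: avoid 0-division
    D = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(d)) - D @ W @ D           # normalized motif Laplacian
    _, vecs = np.linalg.eigh(L)              # eigenvalues in ascending order
    fiedler = vecs[:, 1]                     # second-smallest eigenvector
    return set(np.where(fiedler >= 0)[0])    # one side of the partition

G = nx.karate_club_graph()
print(sorted(motif_fiedler_split(G)))
```

The design point is that the spectral machinery is unchanged from ordinary spectral clustering; the only difference is that the adjacency matrix is first re-weighted by graphlet participation, so the resulting cut respects higher-order structure.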

:13 July 2016

Extreme theorem-proving!!! Researchers have solved a single math problem (the Boolean Pythagorean triples problem) by brute force (SAT solvers on supercomputers, yay!), producing the largest proof ever: over 200 terabytes uncompressed. How did they check it? If a computer-generated proof is too big for humans to check, what can be said of its epistemic status? That aside, this is a big deal. It shows how far we have come since the SAT problem's NP-completeness was established in the early 1970s. We now seem to be readily embracing SAT solvers instead of avoiding them like the plague for tractability reasons; the solver community is optimistic that it can solve most SAT instances arising in practical applications. Confused by that gap between theory and practice? An explanation is provided by Dick Lipton in one of his blog entries.
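For the curious, the question itself is simple to state: can {1, ..., n} be split into two sets such that neither contains a Pythagorean triple a^2 + b^2 = c^2? Heule et al. showed the answer is yes up to n = 7824 and no from 7825 on. The naive enumeration below (my own toy code, not theirs) settles tiny ranges; its 2^n blow-up is exactly why the real result needed SAT solvers on a supercomputer.

```python
# Brute-force check of the Boolean Pythagorean triples question for tiny n.
# Exponential in n: fine for n = 20, hopeless anywhere near 7825.
from itertools import product

def pythagorean_triples(n):
    """All (a, b, c) with a^2 + b^2 = c^2 and 1 <= a <= b <= c <= n."""
    return [(a, b, c)
            for a in range(1, n + 1)
            for b in range(a, n + 1)
            for c in range(b, n + 1)
            if a * a + b * b == c * c]

def two_colorable(n):
    """True if {1, ..., n} admits a 2-coloring with no monochromatic triple."""
    triples = pythagorean_triples(n)
    for coloring in product((0, 1), repeat=n):   # coloring[i] = color of i + 1
        if all(coloring[a - 1] != coloring[b - 1] or
               coloring[b - 1] != coloring[c - 1]
               for a, b, c in triples):
            return True
    return False

print(two_colorable(20))   # True: small ranges still admit a valid coloring
```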

:11 July 2016

The lab that I consider my academic home at Georgia Tech was recently in the news for yet another epic feat. They developed and deployed an AI teaching assistant (TA) called Jill that the students could not distinguish from the human TAs until her identity was announced at the end of the semester. The best part: Jill was TA'ing for an AI course!! The story was also featured in the Washington Post.

:9 July 2016

Today is the day of the IJCAI Goal Reasoning workshop. Wish I were there! As a program committee member, I was fortunate to review three excellent papers for this workshop. The workshop has grown in stature quite a bit since its first meeting in 2010; case in point, AI Communications has agreed to host a Special Issue on Goal Reasoning, and selected papers from the workshop will be invited to participate in it.

:25 June 2016

I started reading Harry Potter and the Methods of Rationality (HPMOR) after a friend recommended it to me. It is absolutely amazing! From what I gather, HPMOR is a fan fiction by Yudkowsky, who is a brilliant mind (depending on who you ask) and a research fellow at MIRI. It poses an alternative world in which Harry is not only a genius in magic, but also a prodigy fiercely loyal to logic, science, progress, and enlightenment. How so? His adoptive parents are not silly, mean-spirited villains, but the best muggles Dumbledore could find, one of them an Oxford scientist. Here is an excerpt in which Harry quickly figures out how stupid the rules of Quidditch are and refuses to play: "That's just wrong. That violates every possible rule of game design. Look, the rest of this game sounds like it might make sense, sort of, for a sport I mean, but you're basically saying that catching the Snitch overwhelms almost any ordinary point spread. ... It's like someone took a real game and grafted on this pointless extra position so that you could be the Most Important Player without needing to really get involved or learn the rest of it. Who was the first Seeker, the King's idiot son who wanted to play Quidditch but couldn't understand the rules?" ... Ron's face pulled into a scowl. "If you don't like Quidditch, you don't have to make fun of it!" ... "If you can't criticise, you can't optimise. I'm suggesting how to improve the game. And it's very simple. Get rid of the Snitch."

:15 June 2016

The lecture that I attended today put a damper on the current wave of excitement in our community about deep learning. If you think that machines can learn everything there is to learn from scratch with a "master algorithm", think again!! The speaker and talk details can be found here.

:28 May 2016

Recently, Google open-sourced its dependency parsing library, SyntaxNet. The library is built on top of TensorFlow, a hugely popular library for numerical computation using data-flow graphs. The release has generated significant buzz among NLP researchers, and I wonder how much of that is due to the Google brand name. To be clear, I think the problem of syntactic parsing is an important one. But how big a step forward is SyntaxNet? Conceptually, its contribution is pretty subtle. The devil is in the details, though, and the bulk of the contribution lies in careful experimentation, tuning, scale-up, and refinement: the hallmarks of Google engineering. Can't wait to try it out!
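For context, SyntaxNet is at heart a transition-based dependency parser: a neural network repeatedly chooses the parser's next action. The sketch below (my own toy code, with made-up names) shows the non-learned skeleton, the arc-standard transition system; the action sequence that SyntaxNet would predict with its network is supplied by hand here.

```python
# Arc-standard transition system: the skeleton of a transition-based
# dependency parser. The learned part (choosing the next action) is
# replaced by a hand-written gold action sequence for illustration.

def parse(words, actions):
    """Apply SHIFT / LEFT_ARC / RIGHT_ARC actions; return (head, dep) arcs."""
    stack, buf, arcs = [], list(range(len(words))), []
    for act in actions:
        if act == "SHIFT":
            stack.append(buf.pop(0))       # move next word onto the stack
        elif act == "LEFT_ARC":
            dep = stack.pop(-2)            # second-from-top becomes dependent of top
            arcs.append((stack[-1], dep))
        elif act == "RIGHT_ARC":
            dep = stack.pop()              # top becomes dependent of second-from-top
            arcs.append((stack[-1], dep))
    return arcs

words = ["Google", "open-sourced", "SyntaxNet"]
actions = ["SHIFT", "SHIFT", "LEFT_ARC", "SHIFT", "RIGHT_ARC"]
for head, dep in parse(words, actions):
    print(words[head], "->", words[dep])
```

Running it prints `open-sourced -> Google` and `open-sourced -> SyntaxNet`: the subject and object arcs of the toy sentence.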

