
Swaroop Vattam


I am a technical staff member at MIT's Lincoln Laboratory, where I am part of the Human Language Technology group. Prior to joining MIT, I was a research scientist in the College of Computing at Georgia Tech and a postdoc at NRL on a fellowship awarded by the National Academies of Sciences. I received my PhD in Computer Science from Georgia Tech (under the supervision of Ashok Goel) for which I won the distinguished dissertation award in 2012. My main research interests in artificial intelligence are machine reasoning, machine learning, and natural language processing.


:7 October 2016

Class imbalance is a common occurrence in real-world datasets: the classes do not have (roughly) equal priors. For example, suppose you have a binary classification problem with 1000 samples, of which 900 are labeled 0 and the remaining 100 are labeled 1. This is an imbalanced dataset, with a Class-0 to Class-1 ratio of 900:100, or more concisely 9:1. Class imbalance arises in two-class as well as multi-class classification problems, and it poses serious challenges to the task of classification. Dealing with imbalanced classes can be frustrating. You may discover that all the great results you were getting are a lie (i.e., the accuracy paradox): your accuracy measures might tell you that your models are doing great, but that accuracy may only be reflecting the underlying class distribution. In other words, your model is very likely predicting one class regardless of the input it is asked to classify. One approach to combating imbalanced data is to under-sample the majority class. This increases the sensitivity of the classifier to the minority class, but it throws away valuable data. Alternatively, you can over-sample the minority class, which blunts the sensitivity of the classifier to the majority class. Or you can do both, intelligently. This is precisely the idea behind the SMOTE technique (Chawla et al., 2002): a combination of over-sampling the minority class (with synthetic examples) and under-sampling the majority class can achieve better classifier performance (in ROC space) than varying the loss ratios in Ripper or the class priors in Naive Bayes. If you are frustrated by imbalanced classes, check out the SMOTE implementation in the imbalanced-learn Python package.
  • Chawla, N. V., Bowyer, K. W., Hall, L. O., & Kegelmeyer, W. P. (2002). SMOTE: Synthetic minority over-sampling technique. Journal of Artificial Intelligence Research, 16, 321-357.
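The core of SMOTE is easy to sketch. Below is a minimal, illustrative pure-Python version, not the reference implementation (the function name `smote`, the parameter `k`, and the brute-force neighbour search are my own simplifications): for each synthetic point, pick a random minority sample, find its k nearest minority neighbours, and interpolate a new point somewhere on the segment between the sample and one of those neighbours.

```python
import random

def smote(minority, n_new, k=3, seed=0):
    """Generate n_new synthetic minority samples (toy SMOTE sketch).

    minority: list of tuples (feature vectors); n_new: number of
    synthetic points; k: neighbours considered for interpolation.
    """
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest neighbours of x among the other minority samples
        # (brute-force Euclidean search; fine for a sketch).
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
        )[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # random point on the segment x -> nb
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, nb)))
    return synthetic
```

In practice you would use the imbalanced-learn implementation rather than rolling your own; it handles edge cases and integrates with scikit-learn pipelines.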

:29 August 2016

Representing data as matrices is ubiquitous in Machine Learning (ML). But the challenge for scalable ML systems is that these matrices can become enormous, with millions or even hundreds of millions of rows. One way to mitigate this problem is to reduce the size of a matrix by leaving out many of its rows. But for computations on the reduced matrix to yield approximately the right results, the remaining rows have to be, in some sense, representative of those that were omitted. Cohen and Peng (2014) have developed an algorithm that efficiently samples the rows of a matrix while preserving the p-norms of its products with vectors, finding the smallest possible approximation of the original matrix that still guarantees reliable computations. They demonstrate that their algorithm is optimal for condensing matrices under any p-norm. This is an important contribution towards making ML more scalable.
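Cohen and Peng's algorithm is well beyond a blog post, but the flavor of row sampling can be illustrated with a much simpler cousin: length-squared sampling, where rows are kept with probability proportional to their squared norm and reweighted so that the small matrix approximates the original in expectation. A sketch (function name mine, assuming NumPy):

```python
import numpy as np

def sample_rows(A, m, seed=0):
    """Keep m rows of A, sampled proportionally to squared row norms.

    Each sampled row is rescaled by 1/sqrt(m * p_i), which makes
    A_s.T @ A_s an unbiased estimator of A.T @ A.
    """
    rng = np.random.default_rng(seed)
    p = (A ** 2).sum(axis=1)
    p = p / p.sum()                      # sampling distribution over rows
    idx = rng.choice(A.shape[0], size=m, p=p)
    return A[idx] / np.sqrt(m * p[idx])[:, None]
```

This preserves only second-moment (2-norm) information and gives no worst-case size guarantee; Cohen and Peng's contribution is doing the analogous thing for general p-norms with provably optimal sample sizes.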

:28 July 2016

Last night I watched the movie "The Man Who Knew Infinity," a biopic of the Indian math genius Ramanujan. I enjoyed the movie and felt it was much better than "The Imitation Game," about the life of another great mathematician, Alan Turing. Some dramatic license was taken to make the movie, but that's to be expected. Interestingly, I found out that Ken Ono was involved in the making of the movie. He is a well-known number theorist at Emory University (in Atlanta), and I happened to attend one of his talks in 2011. Ono and his student recently put out a paper on arXiv called "The 1729 K3 Surface," where they revisit the famous "taxi-cab" number. After studying Ramanujan's writings first-hand, they found that he was thinking about K3 surfaces several decades before the concept was even named. This adds yet another chapter to the list of spectacular recent discoveries involving Ramanujan's notebooks. K3 surfaces are an exceedingly difficult mountain to climb and an important next frontier in mathematics. That Ramanujan gave remarkable examples illustrating some of their features in 1919 simply blows my mind.

:20 July 2016

A lot of datasets these days come in the form of large and complex networks. They describe intricate systems, with entities modeled as nodes and their relationships as edges. These networks contain a wealth of information, but that information is often hidden within network patterns which are difficult to uncover. While deciphering these patterns is crucial, exact computational analyses of large networks are often intractable, so many of the questions we ask about them cannot be answered exactly within any practical time budget. Our only hope is to answer these questions approximately (i.e., heuristically) and to prove how far the approximate answer can be from the exact one in the worst case. So far, the majority of scalable heuristics for network analysis consider only simple descriptors of a network, such as node counts and degrees, capturing only its lower-order connectivity patterns. But they suffer from a severe limitation: two networks can appear similar when viewed through the lens of simple descriptors, yet reveal very different connectivity structures when we change the lens to higher-order descriptors called graphlets (small subgraphs). While this limitation has been known for some time, little could be done because this was a tough nut to crack: a scalable technique for uncovering the higher-order organization of large networks, at the level of graphlets, was not on the horizon. So I was pleasantly surprised to come across a recent article by Benson et al. titled "Higher-order organization of complex networks" in Science (2016), where they propose a new framework for clustering networks on the basis of higher-order connectivity patterns, with guarantees on the optimality of the obtained clusters, and which scales to networks with billions of edges. This is such an amazing result. Kudos to Benson et al.
  • Benson, A. R., Gleich, D. F., & Leskovec, J. (2016). Higher-order organization of complex networks. Science, 353(6295), 163-166.
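To make "higher-order descriptors" concrete, here is a toy sketch (function name mine, not Benson et al.'s code) of the first step of motif-based clustering for the simplest interesting graphlet, the triangle: reweight each undirected edge by the number of triangles it participates in. Spectral clustering on the reweighted graph then favors cuts that avoid breaking triangles, which is the heart of their framework.

```python
def triangle_weights(edges):
    """Weight each undirected edge by the number of triangles it closes.

    edges: iterable of (u, v) pairs. Returns {frozenset({u, v}): count}.
    """
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    w = {}
    for u, v in edges:
        # Every neighbour shared by u and v closes one triangle on (u, v).
        w[frozenset((u, v))] = len(adj[u] & adj[v])
    return w
```

On a triangle 0-1-2 with a pendant edge 2-3, each triangle edge gets weight 1 and the pendant edge gets weight 0, so a motif-aware cut would sever the pendant edge rather than the triangle.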

:13 July 2016

Extreme theorem-proving!!! Researchers have solved a single math problem (the boolean Pythagorean Triples problem) by brute force (SAT solvers on supercomputers, yay!), producing the largest proof ever: over 200 terabytes in uncompressed length. How did they check it? If a computer-generated proof is too big for humans to check, what can be said of its epistemic status? That aside, this is a big deal. It shows how far we have come since the SAT problem's NP-completeness was established in the early 1970s. We seem to be readily embracing SAT solvers instead of avoiding them like the plague for tractability reasons, and the solver community is optimistic that it can solve most SAT instances arising in practical applications. Confused by this gap between theory and practice? Dick Lipton provides an explanation in one of his blog entries.
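For a feel of the problem itself: the boolean Pythagorean Triples problem asks whether {1, ..., n} can be 2-colored so that no Pythagorean triple a^2 + b^2 = c^2 is monochromatic. At toy scale this yields to naive backtracking (sketch below, function names mine); the actual result required massive SAT solving to show that n = 7824 is colorable while n = 7825 is not.

```python
def pythagorean_triples(n):
    """All triples a <= b <= c <= n with a^2 + b^2 = c^2."""
    return [(a, b, c) for a in range(1, n + 1)
            for b in range(a, n + 1)
            for c in range(b, n + 1) if a * a + b * b == c * c]

def two_color(n):
    """Backtracking search for a 2-coloring of 1..n with no
    monochromatic Pythagorean triple; returns {k: 0 or 1} or None."""
    by_max = {}
    for t in pythagorean_triples(n):
        by_max.setdefault(t[2], []).append(t)  # index triples by largest member
    color = {}

    def solve(k):
        if k > n:
            return True
        for col in (0, 1):
            color[k] = col
            # Only triples whose largest member is k become fully
            # assigned at this step, so only they need checking.
            if all(len({color[a], color[b], color[c]}) > 1
                   for a, b, c in by_max.get(k, [])):
                if solve(k + 1):
                    return True
        del color[k]
        return False

    return dict(color) if solve(1) else None
```

This brute-force search works instantly for n in the tens; the published proof's achievement was pushing an exhaustive argument to n = 7825 via cube-and-conquer SAT techniques.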

:11 July 2016

The lab that I consider my academic home at Georgia Tech was recently in the news for yet another epic feat. They developed and deployed an AI teaching assistant (TA) called Jill that students could not distinguish from the human TAs until her identity was announced at the end of the semester. The best part: Jill was TA'ing for an AI course!! The story was also featured in the Washington Post.

:9 July 2016

Today is the day of the IJCAI Goal Reasoning workshop. Wish I were there! As a program committee member, I was fortunate to review three excellent papers for this workshop. The workshop has grown in stature quite a bit since its first meeting in 2010; case in point, AI Communications has agreed to host a Special Issue on Goal Reasoning, and selected papers from the workshop will be invited to participate in the special issue.


Mathematical Moments from the American Mathematical Society: The American Mathematical Society's Mathematical Moments program promotes appreciation and understanding of the role mathematics plays in science, nature, technology, and human culture. Listen to researchers talk about how they use math, from presenting realistic animation to beating cancer.
Maintaining a Balance Part 2: Researcher: Daniel Rothman, MIT. Dan Rothman talks about how math helped understand a mass extinction.
Maintaining a Balance Part 1: Researcher: Daniel Rothman, MIT. Dan Rothman talks about how math helped understand a mass extinction.
Trimming Taxiing Time: Researcher: Hamsa Balakrishnan, MIT. Hamsa Balakrishnan talks about her work to shorten airport runway queues.
Making Art Work: Researcher: Annalisa Crannell, Franklin & Marshall College. Annalisa Crannell on perspective in art.
Explaining Rainbows: Researcher: John A. Adam, Old Dominion University. John A. Adam explains the math and physics behind rainbows.
Farming Better: Researchers: Eleanor Jenkins, Clemson University, and Katie Kavanagh, Clarkson University. Eleanor Jenkins and Katie Kavanagh talk about their interdisciplinary team's work helping farmers.
Dis-playing the Game of Thrones, Part 2: Researcher: Andrew Beveridge, Macalester College. Andrew Beveridge uses math to analyze Game of Thrones.
Dis-playing the Game of Thrones, Part 1: Researcher: Andrew Beveridge, Macalester College. Andrew Beveridge uses math to analyze Game of Thrones.
AMS Feature Column:
Surface Topology in Bach Canons, I: The Möbius Strip
Theoretical Mathematics Finds Use in Economics: A Tribute to Lloyd Shapley
Game. SET. Polynomial.
The Legend of Abraham Wald
The Early History of Calculus Problems
Mathematics and Crystal Balls
Knot Quandaries Quelled by Quandles
It Just Keeps Piling Up!

Last modified 20 October 2016 at 6:28 pm by svattam