
Swaroop Vattam

I am a technical staff member at MIT Lincoln Laboratory. Previously, I was a research scientist at Georgia Tech and a NAS fellow at NRL. I received my PhD in Computer Science from Georgia Tech in 2012. My research broadly focuses on applications of machine reasoning and machine learning to natural language processing problems. I am currently investigating data-driven model discovery systems in an effort to tackle the problem of automated machine learning.




Brain dump

:7 November 2016

Sublinear time algorithms. Even as recently as the early 2000s, we would have been happy to settle for a polynomial-time algorithm for any non-trivial algorithmic problem. It would have been hard to imagine doing better than linear time; after all, we expect an algorithm to consider all of its input before making a decision. How times have changed! For the kinds of data sets I have been dealing with these days, linear is not good enough. Extremely large data sets are rapidly becoming prevalent in a wide variety of settings, and beating linear time has become a matter of practical concern as opposed to mere academic curiosity. One solution often touted for this problem is the MapReduce programming model: processing data sets with parallel, distributed algorithms on large clusters. Although this works well in some cases, its utility has been blown out of proportion, in my humble opinion. In many cases the paradigm is not applicable or does not yield any significant advantage; this is especially true for many big graph data sets.

If linear time is not good enough, it is natural to wonder about the existence of sublinear time algorithms for some of these problems, and in fact there has been a lot of recent interest in this direction. Sublinear time is a crazy concept: it allows an algorithm to process only a small fraction of the input and yet demands a reasonable solution. There are a handful of problems for which deterministic sublinear time algorithms are known, but in the majority of cases the algorithm must use randomization and give approximate answers. There is a growing body of work in this area, and recent results have shown that there are classical optimization problems whose values can be approximated in sublinear time. Property testing has also been applied to give sublinear algorithms for a variety of problems, and several cool techniques, such as low-rank approximation of matrices, have emerged for designing sublinear algorithms. This is all good news! Still, the scope of sublinear algorithms remains minuscule compared to the number of problems out there whose existing solutions were considered efficient a decade ago but are no longer acceptable today.
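
To make the flavor concrete, here is a minimal sketch, entirely a toy example of my own rather than anything from the literature, of a randomized, approximate, sublinear-time computation: it estimates the fraction of marked items in a huge array while inspecting only a constant number of randomly chosen positions.

```python
import random

def estimate_fraction(data, num_samples=1000):
    """Estimate the fraction of truthy items in data by random sampling.

    Runs in O(num_samples) time regardless of len(data), so it is
    sublinear (in fact constant) in the input size. By standard
    concentration bounds, about O(1/eps^2) samples suffice for the
    estimate to fall within an additive eps of the true fraction
    with high probability.
    """
    n = len(data)
    hits = sum(1 for _ in range(num_samples) if data[random.randrange(n)])
    return hits / num_samples

# A 10-million-element input is never scanned in full:
huge = [0] * 9_000_000 + [1] * 1_000_000
print(estimate_fraction(huge))  # close to 0.1
```

The trade-off is exactly the one described above: randomization and an approximate answer in exchange for never reading the whole input.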

I am presently grappling with one such graph problem, namely, the maximum weighted matching (MWM) problem. MWM is a classic graph problem for which exact polynomial-time solutions have been known for some time, but for massive graphs these are a non-starter. There is some (relatively) recent work on linear-time approximation algorithms for the weighted case (Drake & Hougardy, 2003; Pettie & Sanders, 2004). I am trying these, but I strongly suspect that they may not fit the bill either. Meanwhile, I don't think there are any known sublinear time solutions for this problem. Hope the theoreticians are listening! (A sketch of the simple greedy baseline follows the references below.)
  • Drake, D. E., & Hougardy, S. (2003). Improved linear time approximation algorithms for weighted matchings. In Approximation, Randomization, and Combinatorial Optimization: Algorithms and Techniques (pp. 14-23). Springer Berlin Heidelberg.
  • Pettie, S., & Sanders, P. (2004). A simpler linear time 2/3 − ε approximation for maximum weight matching. Information Processing Letters, 91(6), 271-276.
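
For reference, the baseline that approximation algorithms in this area are measured against is the textbook greedy heuristic, which gives a 1/2-approximation in O(m log m) time. The sketch below is that plain greedy algorithm, not the more involved algorithms of Drake and Hougardy or Pettie and Sanders:

```python
def greedy_matching(edges):
    """Greedy 1/2-approximation for maximum weight matching.

    edges: iterable of (weight, u, v) tuples.
    Scan edges in order of decreasing weight and keep an edge whenever
    both of its endpoints are still unmatched. The resulting matching
    has total weight at least half of the optimum.
    """
    matched = set()
    matching = []
    for w, u, v in sorted(edges, reverse=True):
        if u not in matched and v not in matched:
            matching.append((u, v, w))
            matched.update((u, v))
    return matching

# Triangle with edge weights 3, 2, 1: greedy keeps only the weight-3 edge.
print(greedy_matching([(3, 'a', 'b'), (2, 'b', 'c'), (1, 'a', 'c')]))
```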

:2 November 2016

Results for the 2016 NIST Speaker Recognition Evaluation were announced today. I was part of the MIT team, which placed second in a pool of more than 45 serious research teams competing from around the world. My contribution included developing a backend for calibrating and fusing several different speaker recognition models developed by other researchers on my team. Calibration is a serious issue when there are several different models and we have to combine their individual predictions in order to make an overall prediction. The issue arises because different models have their own quirks: some tend to predict probabilities conservatively (closer to mid-range), some toward the extremes, and others produce scores that are not probabilities at all (the true conditional probabilities are unknown). If your metric cares about exact probabilities (e.g., logarithmic loss), you need to calibrate the models before fusing their predictions to get an average estimate. This involves a post-processing step in which you learn the characteristics of each model's behavior. There are two popular methods for calibration: Platt scaling and isotonic regression. Platt scaling amounts to training a logistic regression model on the estimator's outputs. Isotonic regression instead fits a piecewise-constant, monotonically increasing function (a stair-shaped function). Alas, I cannot disclose the secret sauce that went into our backend calibrator yet.
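
To illustrate the two methods (emphatically not our backend), here is what the textbook versions look like in scikit-learn; the scores and labels below are made-up placeholders:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.isotonic import IsotonicRegression

# Toy data: raw (uncalibrated) model scores with their true binary labels.
scores = np.array([-2.1, -1.0, -0.3, 0.4, 0.9, 1.2, 2.5, 3.1])
labels = np.array([0, 0, 0, 1, 0, 1, 1, 1])

# Platt scaling: fit a logistic regression on the one-dimensional scores.
platt = LogisticRegression()
platt.fit(scores.reshape(-1, 1), labels)
platt_probs = platt.predict_proba(scores.reshape(-1, 1))[:, 1]

# Isotonic regression: fit a monotonically non-decreasing step function.
iso = IsotonicRegression(out_of_bounds='clip')
iso.fit(scores, labels)
iso_probs = iso.predict(scores)
```

In practice the calibrator is fit on a held-out development set rather than on the data used to train the underlying models, and fusion then operates on the calibrated probabilities (for example, by averaging in log-odds space).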

:26 October 2016

“There’s been over 140 years of microscopy. The idea that there’s something so fundamental that people have been missing is stunning” - Toby Spribille
A guy from a Montana trailer park overturned 150 years of biology. Biology textbooks tell us that lichens are symbiotic alliances between two species: an alga and a fungus. But Toby Spribille, raised in a Montana trailer park and home-schooled by what he now describes as a “fundamentalist cult”, overthrew that idea. He fell in love with science but had no way of feeding that love with higher education, because he had no formal schooling or transcripts. One German university overlooked that fact and offered him a chance to pursue college and postgraduate education, and there he studied lichens.

Lichens have an important place in biology. Early scientists thought that they were plants. But 150 years ago it was revealed that they’re composite organisms, consisting of fungi that live in partnership with algae. However, in all these years, biologists have tried in vain to grow lichens in laboratories. Whenever they artificially united the fungus and the alga, the two would never fully recreate their natural structures. It was as if something was missing. Toby showed that lichens are not alliances between two organisms. Instead, they’re alliances between three. All this time, a second type of fungus has been hiding in plain view. Read more here.

Fascinating science, and a great example of why an innate interest in science should be nurtured in all students.

:7 October 2016

Class imbalance is a common occurrence in real-world datasets: the classes do not have (roughly) equal priors. For example, you may have a binary classification problem with 1000 samples, where 900 samples are labeled 0 and the remaining 100 are labeled 1. This is an imbalanced dataset, and the ratio of Class-0 to Class-1 instances is 900:100, or more concisely 9:1. Class imbalance arises in two-class as well as multi-class classification problems. Imbalanced data pose serious challenges to the task of classification, and dealing with them can be frustrating. You may discover that all the great results you were getting are a lie (the accuracy paradox): your accuracy measures might tell you that your models are doing great, but that accuracy may only be reflecting the underlying class distribution. In other words, your model is very likely predicting one class regardless of the data it is asked to predict on. One approach to combating imbalanced data is to under-sample the majority class; this increases the sensitivity of the classifier to the minority class, but you are throwing away valuable data. Or you can over-sample the minority class, which blunts the sensitivity of the classifier to the majority class. Or you can do both, intelligently. This is precisely the idea behind the SMOTE technique (Chawla et al., 2002): a combination of over-sampling the minority class (with synthetic examples) and under-sampling the majority class can achieve better classifier performance (in ROC space) than varying the loss ratios in Ripper or the class priors in Naive Bayes. If you are frustrated by imbalanced classes, check out the SMOTE implementation in the imbalanced-learn Python package (a minimal usage sketch follows the reference below).
  • Chawla, N. V., Bowyer, K. W., Hall, L. O., & Kegelmeyer, W. P. (2002). SMOTE: Synthetic minority over-sampling technique. Journal of Artificial Intelligence Research, 16, 321-357.
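
Here is a minimal usage sketch on a synthetic 9:1 dataset like the one described above; note that older releases of imbalanced-learn name the resampling method fit_sample rather than fit_resample:

```python
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE

# Build a toy 9:1 imbalanced binary dataset.
X, y = make_classification(n_samples=1000, n_classes=2,
                           weights=[0.9, 0.1], random_state=0)
print(Counter(y))      # roughly {0: 900, 1: 100}

# SMOTE synthesizes new minority samples by interpolating between a
# minority point and one of its nearest minority-class neighbors.
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print(Counter(y_res))  # balanced, roughly {0: 900, 1: 900}
```

For the "do both" strategy, imbalanced-learn also ships combined samplers such as SMOTEENN and SMOTETomek, which over-sample with SMOTE and then clean up the result with an under-sampling step.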

:29 August 2016

Representing data as matrices is ubiquitous in machine learning (ML). The challenge for scalable ML systems is that these structures can become enormous, with millions or hundreds of millions of rows. One way to mitigate this problem is to reduce the size of a matrix by leaving out a bunch of rows. But in order for computations on the smaller matrix to yield approximately the right results, the remaining rows have to be, in some sense, representative of those that were omitted. Cohen and Peng (2014) have developed an algorithm to efficiently sample the rows of a matrix while preserving the p-norms of its products with vectors, finding the smallest possible approximation of the original matrix that guarantees reliable computations. They demonstrate that their algorithm is optimal for condensing matrices under any norm. This is an important contribution towards making ML more scalable.
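
To give a feel for row sampling, here is a much simpler classical scheme, squared-norm importance sampling in the spirit of Frieze, Kannan, and Vempala, and not Cohen and Peng's algorithm: sample rows with probability proportional to their squared norm and rescale the survivors so that the small matrix preserves A^T A in expectation.

```python
import numpy as np

def sample_rows(A, k, seed=0):
    """Sample k rows of A with probability proportional to squared row
    norm, rescaling each kept row by 1/sqrt(k * p_i) so that the sampled
    matrix S satisfies E[S.T @ S] = A.T @ A."""
    rng = np.random.RandomState(seed)
    sq_norms = np.einsum('ij,ij->i', A, A)   # squared norm of each row
    probs = sq_norms / sq_norms.sum()
    idx = rng.choice(A.shape[0], size=k, replace=True, p=probs)
    return A[idx] / np.sqrt(k * probs[idx])[:, None]

# 100k rows condensed to 2k while approximately preserving A.T @ A.
A = np.random.RandomState(1).normal(size=(100_000, 10))
S = sample_rows(A, k=2_000)
err = np.linalg.norm(S.T @ S - A.T @ A) / np.linalg.norm(A.T @ A)
print(err)  # small relative error
```

What distinguishes Cohen and Peng's work is a sampling distribution and analysis that give guarantees under general p-norms with near-optimal sample counts, which plain norm sampling does not provide.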

All posts ...


