
Swaroop Vattam

I am a member of the technical staff at MIT Lincoln Laboratory. Previously, I was a research scientist at Georgia Tech and an NAS fellow at NRL. I received my PhD in Computer Science from Georgia Tech in 2012. My research focuses broadly on applications of machine reasoning and machine learning to natural language processing problems. I am currently investigating data-driven model discovery systems in an effort to tackle the problem of automated machine learning.



cs.AI updates on arXiv.org: Computer Science -- Artificial Intelligence (cs.AI) updates on the arXiv.org e-print archive
Consistency for 0-1 Programming. (arXiv:1812.02215v1 [cs.CC]):

Concepts of consistency have long played a key role in constraint programming but have never been developed for integer programming (IP). Consistency nonetheless plays a role in IP as well. For example, cutting planes can reduce backtracking by achieving various forms of consistency, as well as by tightening the linear programming (LP) relaxation. We introduce a type of consistency that is particularly suited to 0-1 programming and develop the associated theory. We define a 0-1 constraint set as LP-consistent when any partial assignment that is consistent with its linear programming relaxation is consistent with the original 0-1 constraint set. We prove basic properties of LP-consistency, including its relationship with Chvátal-Gomory cuts and the integer hull. We show that a weak form of LP-consistency can reduce or eliminate backtracking in a way analogous to k-consistency, but is easier to achieve. In so doing, we identify a class of valid inequalities that can be more effective than traditional cutting planes at cutting off infeasible 0-1 partial assignments.
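
To make the definition concrete, here is a brute-force sketch (mine, not from the paper) that checks LP-consistency for a tiny 0-1 constraint set written as Ax <= b. It uses SciPy's LP solver for the relaxation; all function names and the toy example are illustrative assumptions.

```python
from itertools import combinations, product
import numpy as np
from scipy.optimize import linprog

def lp_feasible(A, b, fixed):
    """Is the LP relaxation (0 <= x <= 1, Ax <= b) feasible under the
    partial assignment `fixed` (a dict mapping variable index -> 0 or 1)?"""
    n = A.shape[1]
    bounds = [(fixed[j], fixed[j]) if j in fixed else (0, 1) for j in range(n)]
    # Feasibility test: minimize the zero objective subject to the constraints.
    res = linprog(c=np.zeros(n), A_ub=A, b_ub=b, bounds=bounds, method="highs")
    return res.status == 0

def ip_feasible(A, b, fixed):
    """Does the partial assignment extend to a full feasible 0-1 solution?"""
    n = A.shape[1]
    free = [j for j in range(n) if j not in fixed]
    for bits in product([0, 1], repeat=len(free)):
        x = np.zeros(n)
        for j, v in fixed.items():
            x[j] = v
        for j, v in zip(free, bits):
            x[j] = v
        if np.all(A @ x <= b + 1e-9):
            return True
    return False

def is_lp_consistent(A, b):
    """LP-consistent: every LP-feasible partial assignment extends to a
    feasible 0-1 solution. Brute force over all partial assignments."""
    n = A.shape[1]
    for k in range(n + 1):
        for subset in combinations(range(n), k):
            for vals in product([0, 1], repeat=k):
                fixed = dict(zip(subset, vals))
                if lp_feasible(A, b, fixed) and not ip_feasible(A, b, fixed):
                    return False
    return True

# Toy example: x1 + x2 = 1 together with x1 = x2 has the single fractional
# solution (0.5, 0.5), so the empty partial assignment is LP-feasible but
# has no 0-1 completion, and the constraint set is not LP-consistent.
A = np.array([[1, 1], [-1, -1], [1, -1], [-1, 1]], dtype=float)
b = np.array([1, -1, 0, 0], dtype=float)
print(is_lp_consistent(A, b))  # False
```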

Truly Autonomous Machines Are Ethical. (arXiv:1812.02217v1 [cs.AI]):

While many see the prospect of autonomous machines as threatening, autonomy may be exactly what we want in a superintelligent machine. There is a sense of autonomy, deeply rooted in the ethical literature, in which an autonomous machine is necessarily an ethical one. Development of the theory underlying this idea not only reveals the advantages of autonomy but also sheds light on a number of issues in the ethics of artificial intelligence. It helps us to understand what sort of obligations we owe to machines, and what obligations they owe to us. It clears up the issue of assigning responsibility to machines or their creators. More generally, a concept of autonomy that is adequate to both human and artificial intelligence can lead to a more adequate ethical theory for both.

Continuous Learning Augmented Investment Decisions. (arXiv:1812.02340v1 [cs.LG]):

Investment decisions can benefit from incorporating accumulated knowledge of the past to drive future decision making. We introduce Continuous Learning Augmentation (CLA), which is based on an explicit memory structure and a feed-forward neural network (FFNN) base model and is used to drive long-term financial investment decisions. We demonstrate that our approach improves accuracy in investment decision making while memory is addressed in an explainable way. Our approach introduces novel remember cues, consisting of empirically learned change points in the absolute error series of the FFNN. Memory recall is also novel, with contextual similarity assessed over time by sampling distances using dynamic time warping (DTW). We demonstrate the benefits of our approach by using it in an expected-return forecasting task to drive investment decisions. In an investment simulation over a broad international equity universe from 2003 to 2017, our approach significantly outperforms FFNN base models. We also illustrate how CLA's memory addressing works in practice, using a worked example to demonstrate the explainability of our approach.
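
The abstract's DTW-based recall can be pictured with a plain dynamic-time-warping distance over stored series. This is a generic sketch of that kind of similarity-based memory lookup, not the paper's implementation; the function names and toy series are assumptions.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) DTW between two 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping moves.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def recall(memory, query, k=1):
    """Return the k stored episodes most contextually similar to `query`."""
    return sorted(memory, key=lambda ep: dtw_distance(ep, query))[:k]

# Toy usage: recall the stored series closest to the current context window.
memory = [np.sin(np.linspace(0, 6, 50)), np.cos(np.linspace(0, 6, 50))]
query = np.sin(np.linspace(0.1, 6.1, 50))
best = recall(memory, query, k=1)
```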

MEAL: Multi-Model Ensemble via Adversarial Learning. (arXiv:1812.02425v1 [cs.CV]):

Often the best-performing deep neural models are ensembles of multiple base-level networks. Unfortunately, the space required to store these many networks, and the time required to execute them at test time, prohibit their use in applications where test sets are large (e.g., ImageNet). In this paper, we present a method for compressing large, complex trained ensembles into a single network, where knowledge from a variety of trained deep neural networks (DNNs) is distilled and transferred to a single DNN. In order to distill diverse knowledge from different trained (teacher) models, we propose an adversarial learning strategy in which a block-wise training loss guides and optimizes the predefined student network to recover the knowledge in the teacher models, while a discriminator network is simultaneously trained to distinguish teacher features from student features. The proposed ensemble method (MEAL) of transferring distilled knowledge with adversarial learning exhibits three important advantages: (1) the student network that learns the distilled knowledge with discriminators is optimized better than the original model; (2) fast inference is realized by a single forward pass, while the performance is even better than that of traditional ensembles of the original models; (3) the student network can learn the distilled knowledge from a teacher model with an arbitrary structure. Extensive experiments on the CIFAR-10/100, SVHN, and ImageNet datasets demonstrate the effectiveness of our MEAL method. On ImageNet, our ResNet-50-based MEAL achieves 21.79%/5.99% top-1/top-5 validation error, outperforming the original model by 2.06%/1.14%. Code and models are available at: https://github.com/AaronHeee/MEAL
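
The adversarial distillation idea can be sketched in a few lines of PyTorch: a student matches teacher features at an intermediate block, while a small discriminator tries to tell teacher from student features. All module shapes, names, and the 0.1 loss weight below are my illustrative assumptions, not MEAL's actual architecture (see the linked repository for that).

```python
import torch
import torch.nn as nn

feat_dim = 128
student_block = nn.Sequential(nn.Linear(32, feat_dim), nn.ReLU())
teacher_block = nn.Sequential(nn.Linear(32, feat_dim), nn.ReLU())
for p in teacher_block.parameters():   # the teacher is frozen
    p.requires_grad_(False)
discriminator = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))

bce = nn.BCEWithLogitsLoss()
opt_s = torch.optim.Adam(student_block.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

x = torch.randn(16, 32)  # a dummy batch
with torch.no_grad():
    t_feat = teacher_block(x)

# 1) Discriminator step: label teacher features 1, student features 0.
s_feat = student_block(x)
d_loss = bce(discriminator(t_feat), torch.ones(16, 1)) + \
         bce(discriminator(s_feat.detach()), torch.zeros(16, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# 2) Student step: match teacher features (block-wise L2 loss) and
#    fool the discriminator (adversarial term).
s_feat = student_block(x)
distill = nn.functional.mse_loss(s_feat, t_feat)
adv = bce(discriminator(s_feat), torch.ones(16, 1))
s_loss = distill + 0.1 * adv  # arbitrary weighting for this sketch
opt_s.zero_grad(); s_loss.backward(); opt_s.step()
```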

The USTC-NEL Speech Translation system at IWSLT 2018. (arXiv:1812.02455v1 [cs.AI]):

This paper describes the USTC-NEL system for the speech translation task of the IWSLT 2018 evaluation. The system is a conventional pipeline consisting of three modules: speech recognition, post-processing, and machine translation. We train a group of hybrid-HMM models for speech recognition, and for machine translation we train transformer-based neural machine translation models that take text in the style of speech recognition output as input. Experiments conducted on the IWSLT 2018 task indicate that, compared to the baseline system from KIT, our system achieves a 14.9 BLEU improvement.
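
The cascade structure the abstract describes amounts to composing three stages; a minimal sketch of that composition follows, with stand-in function bodies that are not the USTC-NEL components.

```python
def recognize(audio):    # a hybrid-HMM ASR model would go here
    return "lowercase asr output without punctuation"

def post_process(text):  # e.g. casing and punctuation restoration
    return text.capitalize() + "."

def translate(text):     # a transformer NMT model trained on ASR-style input
    return f"<translated> {text}"

def speech_translate(audio):
    """Conventional pipeline: ASR -> post-processing -> MT."""
    return translate(post_process(recognize(audio)))
```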



Mathematical Moments from the American Mathematical Society: The American Mathematical Society's Mathematical Moments program promotes appreciation and understanding of the role mathematics plays in science, nature, technology, and human culture. Listen to researchers talk about how they use math: from presenting realistic animation to beating cancer.
Screening for Autism: Jordan Hashemi (Duke University) talks about an easy-to-use app to screen for autism. Moment: http://www.ams.org/samplings/mathmoments/mm142-autism.pdf Podcast page: http://www.ams.org/samplings/mathmoments/mm142-autism-podcast Audio file: podcast-mom-autism.mp3
Unbunching Buses: Vikash V. Gayah and S. Ilgin Guler (Pennsylvania State University) talk about mitigating the clustering of buses on a route. Moment: http://www.ams.org/samplings/mathmoments/mm141-bus-bunching.pdf Podcast page: http://www.ams.org/samplings/mathmoments/mm141-bus-bunching-podcast Audio file: podcast-mom-bus-bunching.mp3
Winning the Race: Christine Darden (NASA, retired) on working at NASA. Moment: http://www.ams.org/publicoutreach/mathmoments/mm140-hidden-figures.pdf Podcast page: http://www.ams.org/publicoutreach/mathmoments/mm140-hidden-figures-podcast
Revolutionizing an Industry: Christopher Brinton (Zoomi, Inc. and Princeton University) and Mung Chiang (Purdue University) talk about the Netflix Prize competition. Moment: http://www.ams.org/samplings/mathmoments/mm139-netflix.pdf
Going Into a Shell: Derek Moulton (University of Oxford) explains the math behind the shapes of seashells. Moment: http://www.ams.org/samplings/mathmoments/mm138-shells.pdf


AMS Feature Column (RSS feed):
Upgrading Slums Using Topology
Topology and Elementary Electric Circuit Theory, I
Recognition
Getting in Sync
Reading the Bakhshali Manuscript
Crochet Topology
Mathematical Economics for Mathematics and Statistics Awareness Month
Neural Nets and How They Learn
Jakob Bernoulli's Zoo
Regular-Faced Polyhedra: Remembering Norman Johnson





