
Swaroop Vattam

I am a technical staff member at MIT Lincoln Laboratory. Previously, I was a research scientist at Georgia Tech and an NAS fellow at NRL. I received my PhD in Computer Science from Georgia Tech in 2012. My research focuses broadly on applications of machine reasoning and machine learning to natural language processing problems. I am currently investigating data-driven model discovery systems in an effort to tackle the problem of automated machine learning.

cs.AI updates: Computer Science -- Artificial Intelligence (cs.AI) updates on the arXiv e-print archive
Verifiably Safe Off-Model Reinforcement Learning. (arXiv:1902.05632v1 [cs.AI]):

The desire to use reinforcement learning in safety-critical settings has inspired a recent interest in formal methods for learning algorithms. Existing formal methods for learning and optimization primarily consider the problem of constrained learning or constrained optimization. Given a single correct model and an associated safety constraint, these approaches guarantee efficient learning while provably avoiding behaviors outside the safety constraint. Acting well given an accurate environmental model is an important prerequisite for safe learning, but is ultimately insufficient for systems that operate in complex heterogeneous environments. This paper introduces verification-preserving model updates, the first approach toward obtaining formal safety guarantees for reinforcement learning in settings where multiple environmental models must be taken into account. Through a combination of design-time model updates and runtime model falsification, we provide a first approach toward obtaining formal safety proofs for autonomous systems acting in heterogeneous environments.
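The runtime model-falsification idea in this abstract can be illustrated with a minimal sketch (all names and the deterministic 1-D dynamics below are my own illustrative assumptions, not the paper's API): keep a set of candidate dynamics models and discard any model whose prediction disagrees with an observed transition, so the agent only needs to remain safe with respect to the surviving models.

```python
# Hypothetical illustration of runtime model falsification: candidate
# models predict the next state; observations eliminate inconsistent ones.

def falsify(models, transition, tol=1e-6):
    """Return only the candidate models consistent with an observed transition."""
    s, a, s_next = transition
    return {name: f for name, f in models.items()
            if abs(f(s, a) - s_next) <= tol}

# Two candidate 1-D dynamics models for a point mass.
candidates = {
    "unit_gain":   lambda s, a: s + a,        # predicts s' = s + a
    "double_gain": lambda s, a: s + 2.0 * a,  # predicts s' = s + 2a
}

# The observed transition (s=0, a=1, s'=2) falsifies the unit-gain model.
surviving = falsify(candidates, (0.0, 1.0, 2.0))
```

In the paper's setting the surviving models would each carry a formal safety proof; here the sketch only shows the elimination step.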

Active Perception in Adversarial Scenarios using Maximum Entropy Deep Reinforcement Learning. (arXiv:1902.05644v1 [cs.AI]):

We pose an active perception problem where an autonomous agent actively interacts with a second agent with potentially adversarial behaviors. Given the uncertainty in the intent of the other agent, the objective is to collect further evidence to help discriminate potential threats. The main technical challenges are the partial observability of the agent intent, the adversary modeling, and the corresponding uncertainty modeling. Note that an adversary agent may act to mislead the autonomous agent by using a deceptive strategy that is learned from past experiences. We propose an approach that combines belief space planning, generative adversary modeling, and maximum entropy reinforcement learning to obtain a stochastic belief space policy. By accounting for various adversarial behaviors in the simulation framework and minimizing the predictability of the autonomous agent's actions, the resulting policy is more robust to unmodeled adversarial strategies. This improved robustness is empirically shown against an adversary that adapts to and exploits the autonomous agent's policy, compared with a robust approach based on a standard Chance-Constrained Partially Observable Markov Decision Process.
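The "minimizing predictability" mechanism here comes from the maximum-entropy RL objective, where the policy is a softmax over action values with a temperature trading off reward against randomness. A minimal tabular sketch (the temperature values and Q-vector below are illustrative assumptions, not from the paper):

```python
import numpy as np

def softmax_policy(q, tau):
    """Entropy-regularized policy: pi(a) proportional to exp(Q(a)/tau)."""
    z = np.exp((q - q.max()) / tau)   # subtract max for numerical stability
    return z / z.sum()

def soft_value(q, tau):
    """Soft Bellman value: V = tau * log sum_a exp(Q(a)/tau)."""
    return tau * np.log(np.exp((q - q.max()) / tau).sum()) + q.max()

q = np.array([1.0, 2.0, 0.5])
greedy_ish = softmax_policy(q, tau=0.01)  # low temperature: near-deterministic
hedged     = softmax_policy(q, tau=10.0)  # high temperature: near-uniform, hard to predict
```

As the temperature grows, the action distribution flattens, which is exactly what makes the agent's behavior harder for an adapting adversary to exploit.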

Probabilistic Relational Agent-based Models. (arXiv:1902.05677v1 [cs.AI]):

PRAM puts agent-based models on a sound probabilistic footing as a basis for integrating agent-based and probabilistic models. It extends the themes of probabilistic relational models and lifted inference to incorporate dynamical models and simulation. It can also be much more efficient than agent-based simulation.

Generating Natural Language Explanations for Visual Question Answering using Scene Graphs and Visual Attention. (arXiv:1902.05715v1 [cs.CL]):

In this paper, we present a novel approach for the task of eXplainable Question Answering (XQA), i.e., generating natural language (NL) explanations for the Visual Question Answering (VQA) problem. We generate NL explanations comprising the evidence to support the answer to a question asked about an image using two sources of information: (a) annotations of entities in an image (e.g., object labels, region descriptions, relation phrases) generated from the scene graph of the image, and (b) the attention map generated by a VQA model when answering the question. We show how combining the visual attention map with the NL representation of relevant scene graph entities, carefully selected using a language model, can give reasonable textual explanations without the need for any additional collected data (explanation captions, etc.). We run our algorithms on the Visual Genome (VG) dataset and conduct internal user studies to demonstrate the efficacy of our approach over a strong baseline. We have also released a live web demo showcasing our VQA and textual explanation generation using scene graphs and visual attention.
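One way to picture the combination of attention map and scene-graph entities is to score each entity by the attention mass falling inside its image region and keep the top-scoring ones as explanation evidence. This is only a sketch of that intuition (the scoring rule, the toy attention map, and the entity boxes are my assumptions, not the paper's selection procedure, which also uses a language model):

```python
import numpy as np

def rank_entities(attention, entities, k=2):
    """Rank scene-graph entities by total attention mass inside their region."""
    scored = []
    for label, (x0, y0, x1, y1) in entities:
        scored.append((float(attention[y0:y1, x0:x1].sum()), label))
    scored.sort(reverse=True)               # highest attention mass first
    return [label for _, label in scored[:k]]

# A 4x4 attention map concentrated in the top-left quadrant.
att = np.zeros((4, 4))
att[1, 1] = 0.9
att[3, 3] = 0.1

entities = [("dog",     (0, 0, 2, 2)),     # label, (x0, y0, x1, y1)
            ("frisbee", (2, 2, 4, 4))]

top = rank_entities(att, entities, k=1)
```

The selected entity labels would then be verbalized into an NL explanation of the answer.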

Shepherding Hordes of Markov Chains. (arXiv:1902.05727v1 [cs.LO]):

This paper considers large families of Markov chains (MCs) that are defined over a set of parameters with finite discrete domains. Such families occur in software product lines, planning under partial observability, and sketching of probabilistic programs. Simple questions, like "does at least one family member satisfy a property?", are NP-hard. We tackle two problems: distinguish family members that satisfy a given quantitative property from those that do not, and determine a family member that satisfies the property optimally, i.e., with the highest probability or reward. We show that combining two well-known techniques, MDP model checking and abstraction refinement, mitigates the computational complexity. Experiments on a broad set of benchmarks show that in many situations, our approach is able to handle families of millions of MCs, providing superior scalability compared to existing solutions.
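The problem setup is easy to make concrete: each assignment of the discrete parameters instantiates one Markov chain, and we want the family member maximizing, say, a reachability probability. A brute-force baseline (which the paper's MDP-abstraction approach is designed to beat) can be sketched as follows; the four-state chain and parameter domain here are illustrative assumptions:

```python
import itertools
import numpy as np

def reach_prob(P, start, target, steps=100):
    """Probability of absorption in `target` from `start`, via power iteration."""
    dist = np.zeros(len(P))
    dist[start] = 1.0
    for _ in range(steps):
        dist = dist @ P
    return float(dist[target])

def chain(p, q):
    """One family member: start -> mid with prob p, mid -> goal with prob q;
    goal and fail are absorbing, so the goal-reaching probability is p*q."""
    return np.array([
        [0.0, p,   0.0, 1.0 - p],   # start
        [0.0, 0.0, q,   1.0 - q],   # mid
        [0.0, 0.0, 1.0, 0.0],       # goal (absorbing)
        [0.0, 0.0, 0.0, 1.0],       # fail (absorbing)
    ])

domain = [0.1, 0.5, 0.9]            # finite discrete domain for both parameters
best = max(itertools.product(domain, repeat=2),
           key=lambda pq: reach_prob(chain(*pq), start=0, target=2))
```

With two parameters over three values this enumerates 9 chains; the paper's point is that realistic families have millions of members, where such enumeration is hopeless.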

Mathematical Moments from the American Mathematical Society: The American Mathematical Society's Mathematical Moments program promotes appreciation and understanding of the role mathematics plays in science, nature, technology, and human culture. Listen to researchers talk about how they use math: from presenting realistic animation to beating cancer.
Screening for Autism: Jordan Hashemi (Duke University) talks about an easy-to-use app to screen for autism. Audio file: podcast-mom-autism.mp3
Unbunching Buses: Vikash V. Gayah and S. Ilgin Guler (Pennsylvania State University) talk about mitigating the clustering of buses on a route. Audio file: podcast-mom-bus-bunching.mp3
Winning the Race: Christine Darden (NASA, retired) on working at NASA.
Revolutionizing an Industry: Christopher Brinton (Zoomi, Inc. and Princeton University) and Mung Chiang (Purdue University) talk about the Netflix Prize competition.
Going Into a Shell: Derek Moulton (University of Oxford) explains the math behind the shapes of seashells.

AMS Feature Column: RSS Feed
Understanding Kepler II--Earth's Motion:
Branko Grünbaum Remembered--A Great Geometer!:
Upgrading Slums Using Topology:
Understanding What Kepler Did--Part I:
Topology and Elementary Electric Circuit Theory, I:
Getting in Sync:
Reading the Bakhshali Manuscript:
Crochet Topology:
Mathematical Economics for Mathematics and Statistics Awareness Month:

Last modified 18 November 2018 at 5:47 pm by svattam