
Swaroop Vattam

I am a member of the technical staff at MIT Lincoln Laboratory. Previously, I was a research scientist at Georgia Tech and an NAS fellow at NRL. I received my PhD in Computer Science from Georgia Tech in 2012. My research focuses broadly on applications of machine reasoning and machine learning to natural language processing problems. I am currently investigating data-driven model discovery systems in an effort to tackle the problem of automated machine learning.



cs.AI updates on arXiv.org: Computer Science -- Artificial Intelligence (cs.AI) updates on the arXiv.org e-print archive
Conditional Graph Neural Processes: A Functional Autoencoder Approach. (arXiv:1812.05212v1 [cs.LG]):

We introduce a novel encoder-decoder architecture to embed functional processes into latent vector spaces. This embedding can then be decoded to sample the encoded functions over any arbitrary domain. This autoencoder generalizes the recently introduced Conditional Neural Process (CNP) model of random processes. Our architecture employs the latest advances in graph neural networks to process irregularly sampled functions. Thus, we refer to our model as the Conditional Graph Neural Process (CGNP). Graph neural networks can effectively exploit 'local' structures of the metric spaces over which the functions/processes are defined. The contributions of this paper are twofold: (i) a novel graph-based encoder-decoder architecture for functional and process embeddings, and (ii) a demonstration of the importance of using the structure of metric spaces for this type of representation.
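The encode/aggregate/decode pipeline of CNP-style models, which CGNP generalizes, can be sketched as follows. This is an illustrative toy with fixed hand-written maps standing in for the learned neural networks and graph convolutions of the actual model; the feature map and linear readout are hypothetical:

```python
# Toy sketch of the CNP-style pipeline that CGNP generalizes: encode each
# context observation, aggregate into a permutation-invariant latent, then
# decode at arbitrary target inputs. Real models learn these maps.

def encode(context):
    """Map each context pair (x, y) to a fixed toy representation vector."""
    return [(x, y, x * y) for x, y in context]

def aggregate(reps):
    """Permutation-invariant aggregation: average the representations."""
    n = len(reps)
    return tuple(sum(r[i] for r in reps) / n for i in range(3))

def decode(latent, x_target):
    """Predict y at a new input from the aggregated latent (toy readout)."""
    a, b, c = latent
    return b + c * (x_target - a)

context = [(0.0, 1.0), (1.0, 2.0), (2.0, 3.0)]  # samples of y = x + 1
latent = aggregate(encode(context))
y_hat = decode(latent, 1.0)
```

Because the aggregation is a mean, the latent is invariant to the order of the context points, which is what lets the model handle irregularly sampled functions.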

Next Hit Predictor - Self-exciting Risk Modeling for Predicting Next Locations of Serial Crimes. (arXiv:1812.05224v1 [stat.AP]):

Our goal is to predict the location of the next crime in a crime series, based on the identified previous offenses in the series. We build a predictive model called Next Hit Predictor (NHP) that finds the most likely location of the next serial crime via a carefully designed risk model. The risk model follows the paradigm of a self-exciting point process which consists of a background crime risk and triggered risks stimulated by previous offenses in the series. Thus, NHP creates a risk map for a crime series at hand. To train the risk model, we formulate a convex learning objective that considers pairwise rankings of locations and use stochastic gradient descent to learn the optimal parameters. Next Hit Predictor incorporates both spatial-temporal features and geographical characteristics of prior crime locations in the series. Next Hit Predictor has demonstrated promising results on decades' worth of serial crime data collected by the Crime Analysis Unit of the Cambridge Police Department in Massachusetts, USA.
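The self-exciting risk paradigm described above can be sketched in a few lines: the risk at a candidate location is a background rate plus triggered contributions from each previous offense in the series. The flat baseline and Gaussian trigger kernel here are hypothetical stand-ins, not the paper's learned model:

```python
import math

# Toy self-exciting risk map: background risk plus a distance-decaying
# "triggered" term for each prior offense in the series. The kernel shape
# and all parameters are illustrative, not NHP's learned values.

def background_risk(loc):
    return 0.1  # flat baseline for the sketch

def triggered_risk(loc, offense, bandwidth=1.0):
    """Gaussian decay with squared distance from a prior offense."""
    d2 = (loc[0] - offense[0]) ** 2 + (loc[1] - offense[1]) ** 2
    return math.exp(-d2 / (2 * bandwidth ** 2))

def risk(loc, series):
    return background_risk(loc) + sum(triggered_risk(loc, o) for o in series)

series = [(0.0, 0.0), (1.0, 1.0)]          # prior offenses in the series
candidates = [(0.0, 0.0), (0.5, 0.5), (3.0, 3.0)]
best = max(candidates, key=lambda loc: risk(loc, series))
```

Ranking candidate locations by this risk score mirrors the pairwise-ranking objective the paper trains with stochastic gradient descent.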

MetaStyle: Three-Way Trade-Off Among Speed, Flexibility, and Quality in Neural Style Transfer. (arXiv:1812.05233v1 [cs.CV]):

An unprecedented boom has been witnessed in the research area of artistic style transfer ever since Gatys et al. introduced the neural method. One of the remaining challenges is to balance a trade-off among three critical aspects---speed, flexibility, and quality: (i) the vanilla optimization-based algorithm produces impressive results for arbitrary styles, but is unsatisfyingly slow due to its iterative nature; (ii) fast approximation methods based on feed-forward neural networks generate satisfactory artistic effects but are bound to a limited number of styles; and (iii) feature-matching methods like AdaIN achieve arbitrary style transfer in real time, but at the cost of compromised quality. We find it considerably difficult to balance this trade-off well using merely a single feed-forward step, and instead ask whether there exists an algorithm that can adapt quickly to any style while the adapted model maintains high efficiency and good image quality. Motivated by this idea, we propose a novel method, coined MetaStyle, which formulates neural style transfer as a bilevel optimization problem and combines learning with only a few post-processing update steps to adapt to a fast approximation model with satisfying artistic effects, comparable to optimization-based methods, for an arbitrary style. Qualitative and quantitative analysis in the experiments demonstrates that the proposed approach achieves high-quality arbitrary artistic style transfer effectively, with a good trade-off among speed, flexibility, and quality.
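The bilevel idea above, learning an initialization from which a few inner update steps suffice to adapt to any one style, can be illustrated with a generic Reptile-style meta-update on a 1-D toy objective. This is a sketch of the general meta-learning pattern, not MetaStyle's actual training procedure; the quadratic "style" tasks and all step sizes are hypothetical:

```python
# Generic bilevel/meta-learning sketch: the outer loop learns a shared
# initialization theta; the inner loop adapts it to one task (a "style")
# with only a few gradient steps. Tasks here are toy 1-D quadratics.

def inner_adapt(theta, target, steps=3, lr=0.4):
    """A few post-processing update steps toward one task's optimum."""
    for _ in range(steps):
        theta -= lr * 2 * (theta - target)  # gradient of (theta - target)^2
    return theta

def meta_train(tasks, theta=0.0, meta_lr=0.5, epochs=50):
    """Outer loop: move the initialization toward post-adaptation params."""
    for _ in range(epochs):
        for target in tasks:
            adapted = inner_adapt(theta, target)
            theta += meta_lr * (adapted - theta)
    return theta

tasks = [-1.0, 1.0, 3.0]        # hypothetical training "styles"
theta0 = meta_train(tasks)      # learned shared initialization
fast = inner_adapt(theta0, 2.0) # adapts to an unseen style in 3 steps
```

The point of the construction is that `fast` lands very close to the new task's optimum after only three inner steps, which is the speed/flexibility/quality balance the abstract describes.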

Code Failure Prediction and Pattern Extraction using LSTM Networks. (arXiv:1812.05237v1 [cs.LG]):

In this paper, we use Long Short-Term Memory (LSTM) recurrent neural networks, a well-known deep learning technique, to find sessions that are prone to code failure in applications that rely on telemetry data for system health monitoring. We also use LSTM networks to extract telemetry patterns that lead to a specific code failure. For code failure prediction, we treat telemetry events, sequences of telemetry events, and the outcome of each sequence as the words, sentences, and sentiments of sentiment analysis, respectively. Our proposed method is able to process a large set of data and can automatically handle edge cases in code failure prediction. We take advantage of Bayesian optimization to find the optimal hyperparameters, as well as the type of LSTM cell that leads to the best prediction performance. We then introduce the concepts of contributors and blockers: in this paper, contributors are the set of events that cause a code failure, while blockers are events that each individually prevent a code failure from happening, even in the presence of one or more contributors. Once the proposed LSTM model is trained, we use a greedy approach to find the contributors and blockers. To develop and test our proposed method, we first use synthetic (simulated) data, generated using a number of rules for causing code failures as well as a number of rules for preventing them. The trained LSTM model shows over 99% accuracy in detecting code failures in the synthetic data, outperforming classical learning models such as decision trees and random forests. Using the proposed greedy method, we are able to find the contributors and blockers in the synthetic data in more than 90% of cases, with performance better than sequential rule and pattern mining algorithms.
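The greedy contributor/blocker search described above can be sketched end to end by substituting the trained LSTM with a simple stand-in rule. The event names (`E_retry`, `E_flush`, etc.), the failure rule, and the candidate vocabulary are all hypothetical, chosen only to make the greedy procedure concrete:

```python
# Sketch of the greedy contributor/blocker search. A hand-written rule
# stands in for the trained LSTM classifier: the session "fails" if the
# hypothetical event E_retry occurs without the mitigating E_flush.

def predicts_failure(session):
    """Stand-in for the trained model's failure prediction."""
    return "E_retry" in session and "E_flush" not in session

def find_contributors_and_blockers(session, model):
    contributors, blockers = [], []
    if not model(session):
        return contributors, blockers
    # Contributors: removing the event alone prevents the failure.
    for event in session:
        without = [e for e in session if e != event]
        if not model(without):
            contributors.append(event)
    # Blockers: adding the event alone prevents the failure.
    vocabulary = ["E_flush", "E_reset"]  # hypothetical candidate events
    for event in vocabulary:
        if not model(session + [event]):
            blockers.append(event)
    return contributors, blockers

session = ["E_start", "E_retry", "E_stop"]
c, b = find_contributors_and_blockers(session, predicts_failure)
```

With the stand-in rule, the search recovers `E_retry` as the sole contributor and `E_flush` as the sole blocker, mirroring how the paper probes its trained LSTM.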

Learning to Communicate: A Machine Learning Framework for Heterogeneous Multi-Agent Robotic Systems. (arXiv:1812.05256v1 [cs.RO]):

We present a machine learning framework for multi-agent systems to learn both the optimal policy for maximizing the rewards and the encoding of the high dimensional visual observation. The encoding is useful for sharing local visual observations with other agents under communication resource constraints. The actor-encoder encodes the raw images and chooses an action based on local observations and messages sent by the other agents. The machine learning agent generates not only an actuator command to the physical device, but also a communication message to the other agents. We formulate a reinforcement learning problem, which extends the action space to consider the communication action as well. The feasibility of the reinforcement learning framework is demonstrated using a 3D simulation environment with two collaborating agents. The environment provides realistic visual observations to be used and shared between the two agents.
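The extension of the action space to include a communication action, as described above, can be sketched as a Cartesian product of actuator commands and messages. The command and message vocabularies and the toy transition below are hypothetical, not the paper's environment:

```python
import itertools

# Sketch of an action space extended with a communication action: each
# joint action pairs an actuator command with a message to the teammate.
# Vocabularies and the transition function are illustrative only.

ACTUATOR_COMMANDS = ["forward", "left", "right"]
MESSAGES = ["none", "target_seen", "target_lost"]

# Extended action space: every (command, message) combination.
EXTENDED_ACTIONS = list(itertools.product(ACTUATOR_COMMANDS, MESSAGES))

def step(env_state, action):
    """Toy transition: the actuator half drives the environment; the
    message half is delivered to the other agent's next observation."""
    command, message = action
    next_state = env_state + [command]
    inbox = message  # what the teammate will observe next step
    return next_state, inbox

state, inbox = step([], ("forward", "target_seen"))
```

Since the policy now selects from `EXTENDED_ACTIONS`, standard reinforcement learning machinery applies unchanged; the reward signal is what teaches the agent when communicating is worthwhile.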



Mathematical Moments from the American Mathematical Society: The American Mathematical Society's Mathematical Moments program promotes appreciation and understanding of the role mathematics plays in science, nature, technology, and human culture. Listen to researchers talk about how they use math: from presenting realistic animation to beating cancer.
Screening for Autism: Researcher: Jordan Hashemi, Duke University Moment: http://www.ams.org/samplings/mathmoments/mm142-autism.pdf Moment Title: Screening for Autism Description: Jordan Hashemi talks about an easy-to-use app to screen for autism. Podcast page: http://www.ams.org/samplings/mathmoments/mm142-autism-podcast Audio file: podcast-mom-autism.mp3
Unbunching Buses: Researchers: Vikash V. Gayah and S. Ilgin Guler, Pennsylvania State University Moment: http://www.ams.org/samplings/mathmoments/mm141-bus-bunching.pdf Moment Title: Unbunching Buses Description: Gayah and Guler talk about mitigating the clustering of buses on a route. Podcast page: http://www.ams.org/samplings/mathmoments/mm141-bus-bunching-podcast Audio file: podcast-mom-bus-bunching.mp3
Winning the Race: Researcher: Christine Darden, NASA (retired) Moment: http://www.ams.org/publicoutreach/mathmoments/mm140-hidden-figures.pdf Moment Title: Winning the Race Description: Christine Darden on working at NASA. Podcast page: http://www.ams.org/publicoutreach/mathmoments/mm140-hidden-figures-podcast
Revolutionizing an Industry: Researchers: Christopher Brinton, Zoomi, Inc. and Princeton University, and Mung Chiang, Purdue University Moment: http://www.ams.org/samplings/mathmoments/mm139-netflix.pdf Description: Christopher Brinton and Mung Chiang talk about the Netflix Prize competition.
Going Into a Shell: Researcher: Derek Moulton, University of Oxford Moment: http://www.ams.org/samplings/mathmoments/mm138-shells.pdf Description: Derek Moulton explains the math behind the shapes of seashells.


AMS Feature Column - RSS Feed
Upgrading Slums Using Topology
Topology and Elementary Electric Circuit Theory, I
Recognition
Getting in Sync
Reading the Bakhshali Manuscript
Crochet Topology
Mathematical Economics for Mathematics and Statistics Awareness Month
Neural Nets and How They Learn
Jakob Bernoulli's Zoo
Regular-Faced Polyhedra: Remembering Norman Johnson






Last modified 18 November 2018 at 5:47 pm by svattam