
Research 



Consumers adopting a new product; a post trending
across Internet blogs; systemic risk spreading across firms/sectors/countries; an epidemic spreading across a population; a cellular process during which
the expression of a gene affects the expression of other genes; all of these are temporal processes
governed by local interactions of networked entities, which influence one another. Oftentimes, the
outcomes of such processes are observable, possibly with time stamps, yet the underlying network
of local interactions is hidden.
Can influences be untangled in a principled manner based on timed data of actions/decisions? How does our inference potential change if, instead of the times of actions,
we have access only to the sequence of agents' actions, or just to the set of agents who undertake an action?
The Value of Temporal Data for Learning of Influence Networks (2013), with M. A. Dahleh (MIT EECS) and J. N. Tsitsiklis (MIT EECS), working paper
[+ Abstract] [Paper]
Abstract:
We infer local influence relations between networked entities from
data on outcomes and assess the value of temporal data by characterizing the speed
of learning under three different types of available data: knowing the set of entities
who take a particular action; knowing the order in which the entities take an action;
and knowing the times of the actions. We propose a parametric model of influence
which captures directed pairwise interactions, formulate different variations of the
learning problem, and provide theoretical guarantees for correct learning based on
sets, sequences, and times.
Our results characterize the sample complexity of learning for the three cases. The
asymptotic gain of having access to richer temporal data for the speed of learning
is thus quantified in terms of the gap between the derived asymptotic requirements
under different data modes. We also evaluate the practical value of learning with
richer temporal data, by comparing learning with sets, sequences, and times given
actual experimental data. Experiments on both synthetic and real data quantify
the improvement due to the availability of richer temporal data, and show that our
maximum likelihood methodology recovers the underlying network well.
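The three data modes compared in the abstract are nested in informativeness: action times determine the order of actions, which in turn determines the set of actors. A minimal sketch of this coarsening (the entity names and times are made up for illustration, not from the paper):

```python
# One observed adoption cascade: each entity's action time (made-up data).
action_times = {"a": 2.7, "b": 0.9, "c": 4.1}

# Richest data mode: the exact times of actions.
times = dict(action_times)

# Coarser data mode: only the order in which entities acted.
sequence = sorted(action_times, key=action_times.get)

# Coarsest data mode: only the set of entities that acted.
acting_set = set(action_times)
```

Each mode is a deterministic function of the richer one, which is why learning can only slow down as the data are coarsened.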

The Value of Temporal Data for Learning of Influence Networks: a Characterization via
Kullback-Leibler Divergence (2013), with M. A. Dahleh (MIT EECS) and J. N. Tsitsiklis (MIT EECS), working paper
[+ Abstract] [Paper]
Abstract:
We infer local influence relations between networked entities from data on outcomes and assess the value of temporal data by
formulating relevant binary hypothesis testing problems and characterizing the speed of learning of the correct hypothesis via the
Kullback-Leibler divergence, under three different types of available data: knowing the set of entities who take a particular action;
knowing the order in which the entities take an action; and knowing the times of the actions.
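The Kullback-Leibler divergence that governs the speed of learning in a binary hypothesis test can be computed directly from the outcome distributions under the two hypotheses. A minimal numerical sketch (the distributions are arbitrary illustrations, not from the paper):

```python
import math

def kl_divergence(p, q):
    """D(p || q) = sum_x p(x) * log(p(x) / q(x)), in nats."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Outcome distributions under the two hypotheses (illustrative numbers).
p = [0.7, 0.2, 0.1]   # hypothesis H0
q = [0.4, 0.4, 0.2]   # hypothesis H1

d = kl_divergence(p, q)
# By Stein's lemma, the error probability of the best test decays
# roughly like exp(-n * d) with n i.i.d. observations, so a larger
# divergence means faster learning of the correct hypothesis.
```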



Coordination problems lie at the center of many economic or social phenomena characterized by self-fulfilling expectations, such as bank
runs, currency attacks, and social uprisings. There is ample evidence that local information channels enable coordination among strategic agents. How do the outcomes depend on the details of local information sharing?
Coordination with Local Information (2013), with M. A. Dahleh (MIT EECS), A. Tahbaz-Salehi (Columbia Business School), and J. N. Tsitsiklis (MIT EECS), submitted to Operations Research special issue on Information and Decisions in Social and Economic Networks, second round of reviews
[+ Abstract]
[Paper]
[Extended Abstract]
Abstract:
We study the role of local information channels in enabling coordination among strategic agents. Building on the standard finite-player global games framework, we show that the set of equilibria of a coordination game is highly sensitive to how information is locally shared among agents. In particular, we show that the coordination game has multiple equilibria if there exists a collection of agents such that (i) they do not share a common signal with any agent outside of that collection; and (ii) their information sets form an increasing sequence of nested sets, referred to as a filtration. Our characterization thus extends the results on the uniqueness and multiplicity of equilibria in global games beyond the well-known case in which agents have access to purely private or public signals. We then provide a characterization of how the extent of equilibrium multiplicity is determined by the extent to which subsets of agents have access to common information: we show that the size of the set of equilibrium strategies is increasing with the extent of variability in the size of the subsets of agents who observe the same signal. We finally study the set of equilibria in large coordination games, showing that as the number of agents grows, the game exhibits multiple equilibria if and only if a nonvanishing fraction of the agents have access to the same signal.

On Global Games of Regime Change in Networks (2011), with M. A. Dahleh (MIT EECS), A. Tahbaz-Salehi (Columbia Business School), and J. N. Tsitsiklis (MIT EECS), Proceedings of the 6th Workshop of the Economics of Networks, Systems, and Computation (NetEcon, ACM FCRC), San Jose, California, June 2011
[Extended Abstract]
[Paper]
[Video of Presentation at Workshop on Information and Decision in Social Networks 2011]



A retailer wants to estimate the elasticity of demand in response to price changes or advertising so as to optimize her marketing decisions. What is the practical value of using field experiments to improve marketing decisions? How many experiments are needed as the number of products grows?
The Practical Value of Field Experiments (2012), with J. Li (MIT EECS), P. Rusmevichientong (USC Marshall School of Business), D. Simester (MIT Sloan School of Management), and J. N. Tsitsiklis (MIT EECS), submitted to Management Science, second round of reviews
[+ Abstract]
[Paper]
Abstract:
In recent years, many firms have had greater opportunities to conduct field experiments to help improve their marketing decisions, but the practical value of using field experiments to optimize marketing decisions remains relatively unstudied. We
investigate advertising and promotion decisions that require estimating a large matrix
of cross-product demand elasticities and ask: how many experiments are required as
the number of products grows? Our main result demonstrates that if we impose structure, we can learn faster and reduce the number of experiments that are required. The
number of experiments required may grow just logarithmically with the number of
products. These findings potentially have important implications for the application
of field experiments. Firms may be able to obtain meaningful estimates using a practically feasible number of experiments, even in settings with a large number of products.
The relevance of our findings is confirmed through simulations using realistic parameters gathered from a real-world, large-scale pricing experiment. We also discuss an
application of our model to making product line pricing decisions, together with other
types of marketing decisions.
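A toy illustration of how structure can make the required number of experiments grow only logarithmically with the number of products: if the elasticity vector is sparse, it can be recovered from a small number of random experiments. The sketch below uses orthogonal matching pursuit as a stand-in estimator; the model, the numbers, and the choice of algorithm are illustrative assumptions, not the paper's method:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = A @ x."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # Least-squares refit on the columns selected so far.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
n, k, m = 50, 3, 15                 # products, nonzero elasticities, experiments
x = np.zeros(n)                     # sparse "elasticity" vector (synthetic)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)   # random experimental designs
y = A @ x                                   # observed responses (noiseless)
x_hat = omp(A, y, k)                        # far fewer experiments than products
```

With only 15 experiments for 50 products, the estimator exploits sparsity; an unstructured least-squares fit would need at least as many experiments as products.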



Data is indispensable for academics and practitioners alike. However, data exchange and publishing come with considerations of right protection. How can we protect ownership while preserving the mining utility of a dataset?
Right-Protected Data Publishing with Provable Distance-based Mining (2013), with M. Vlachos (IBM Research), N. M. Freris (EPFL), and C. Lucchese (National Research Council, Italy), to appear in IEEE Transactions on Knowledge and Data Engineering
[+ Abstract]
[Paper]
[Online Companion]
Abstract:
Protection of one's intellectual property is a topic with important technological and legal facets.
We provide mechanisms for establishing the ownership of a dataset consisting of multiple objects.
The algorithms also preserve properties of the dataset that are important for mining operations,
and so guarantee both right protection and utility preservation. We consider a right-protection scheme based on watermarking.
Watermarking may distort the original distance graph. Our watermarking methodology preserves important distance relationships,
such as: the Nearest Neighbors (NN) of each object and the Minimum Spanning Tree (MST) of the original dataset.
This leads to preservation of any mining operation that depends on the ordering of distances between objects, such as NN-search and
classification, as well as many visualization techniques. We prove fundamental lower and upper bounds on the distance between objects
post-watermarking. In particular, we establish a restricted isometry property, i.e., tight bounds on the contraction/expansion
of the original distances. We use this analysis to design fast algorithms for NN-preserving and MST-preserving watermarking that
drastically prune the vast search space. We observe up to two orders of magnitude speedup over the exhaustive schemes, without any sacrifice
in NN or MST preservation.
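A naive sketch of the NN-preservation requirement: embed an additive watermark and shrink its power until every object's nearest neighbor is unchanged. This brute-force search is only illustrative; it is not the paper's pruning algorithm, and all data and parameters are assumed:

```python
import numpy as np

def nearest_neighbors(X):
    """Index of each object's nearest neighbor under Euclidean distance."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)   # an object is not its own neighbor
    return D.argmin(axis=1)

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 8))                # dataset: 30 objects, 8 features
W = rng.choice([-1.0, 1.0], size=X.shape)   # spread-spectrum-style mark

# Shrink the watermark power until every object's NN is preserved.
alpha = 0.5
while not np.array_equal(nearest_neighbors(X),
                         nearest_neighbors(X + alpha * W)):
    alpha /= 2
X_marked = X + alpha * W
```

For generic data the NN distances are separated by positive gaps, so a small enough power always exists; the paper's contribution is finding strong marks fast, without this exhaustive shrinking.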



In decentralized detection, each of several agents/sensors receives different measurements and is allowed to communicate only a few bits. In social science models, agents make decisions selfishly. In contrast, according to the "engineering" perspective, each agent chooses what to communicate so as to optimize some overall system objective (such as the probability of error at the fusion center). If agents are allowed to broadcast their local summaries to other sensors, in addition to transmitting them to a fusion center, is the performance of the system significantly improved?
On Decentralized Detection With Partial Information Sharing Among Sensors (2011), with O. P. Kreidl (UNF EE) and J. N. Tsitsiklis (MIT EECS), IEEE Transactions on Signal Processing, Volume 59 (4), pp. 1759-1765
[+ Abstract]
[Paper]
Abstract:
We study a decentralized detection architecture in
which each of a set of sensors transmits a highly compressed
summary of its observations (a binary message) to a fusion center,
which then decides on one of two alternative hypotheses. In contrast
to the star (or “parallel”) architecture considered in most of
the literature, we allow a subset of the sensors to both transmit
their messages to the fusion center and to also broadcast them to
the remaining sensors. We focus on the following architectural
question: Is there a significant performance improvement when we
allow such a message broadcast? We consider the error exponent
(asymptotically, in the limit of a large number of sensors) for the
Neyman–Pearson formulation of the detection problem. We prove
that the sharing of messages does not improve the optimal error
exponent.
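As a toy illustration of the error-exponent criterion, consider a sensor rule that thresholds a Gaussian observation and transmits one bit; with many i.i.d. such messages, the Neyman-Pearson (type-II) error exponent is the binary Kullback-Leibler divergence between the bit's distributions under the two hypotheses. The observation model and numbers below are assumptions for illustration, not from the paper:

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def binary_kl(p, q):
    """D(Bern(p) || Bern(q)) in nats."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

# Each sensor observes N(theta, 1), with theta = 0 under H0 and 1 under H1,
# and sends the single bit 1{observation > t}.
def exponent(t):
    p0 = 1 - phi(t - 0.0)   # P(bit = 1 | H0)
    p1 = 1 - phi(t - 1.0)   # P(bit = 1 | H1)
    return binary_kl(p0, p1)

# Grid-search the threshold that maximizes the Neyman-Pearson exponent.
best_t = max((t / 100 for t in range(-200, 300)), key=exponent)
```

The paper's result says that, at the level of this exponent, letting some sensors broadcast their bits to the others before transmission yields no improvement over the plain parallel architecture.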



