Consumers adopting a new product; a post trending
across Internet blogs; systemic risk spreading across firms/sectors/countries; an epidemic spreading across a population; a cellular process during which
the expression of a gene affects the expression of other genes; all of these are temporal processes
governed by local interactions of networked entities, which influence one another. Oftentimes, the
outcomes of such processes are observable, possibly with time stamps, yet the underlying network
of local interactions is hidden.
Can influences be untangled in a principled manner from timed data on actions/decisions? How does our inference potential change if, instead of the times of actions,
we have access only to the sequences of agents' actions, or just to the set of agents who undertake an action?
The Value of Temporally Richer Data for Learning of Influence Networks (2014), with M. A. Dahleh (MIT EECS) and J. N. Tsitsiklis (MIT EECS), Book Chapter, Lecture Notes in Computer Science, Volume 8877, 2014, pp. 322-323 [+ Abstract][Paper]
We infer local relations of influence between networked entities from data on outcomes and assess the value of temporally richer data by characterizing the speed of learning when knowing the set of entities who take a particular action, versus when knowing the order in which the entities take an action. We propose a parametric model of influence which captures directed pairwise interactions, formulate different variations of the learning problem, and provide theoretical guarantees for correct learning based on sets and sequences. The asymptotic gain of having access to richer temporal data for the speed of learning is thus quantified in terms of the gap between the derived asymptotic requirements under different data modes. Experiments on real data on mobile app installations quantify the improvement due to the availability of richer temporal data, and show that our maximum likelihood methodology recovers the underlying network well.
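As a toy illustration of the maximum-likelihood idea, the sketch below simulates a two-agent influence pattern and recovers the influence weight from set-type data (who acted, not when). The model and all parameter names are illustrative assumptions, not the chapter's exact parametric model.

```python
import random

random.seed(0)

# Hypothetical two-agent model: agent 0 acts spontaneously with
# probability SPONT; agent 1 imitates agent 0 with probability P_TRUE
# and never acts otherwise.  These numbers are made up for illustration.
SPONT, P_TRUE, N = 0.5, 0.7, 20000

cascades = []
for _ in range(N):
    acted0 = random.random() < SPONT
    acted1 = acted0 and random.random() < P_TRUE
    cascades.append((acted0, acted1))

# Maximum-likelihood estimate of the influence weight from set data:
# among cascades where agent 0 acted, the fraction where agent 1 also acted.
n0 = sum(1 for a0, _ in cascades if a0)
n01 = sum(1 for a0, a1 in cascades if a0 and a1)
p_hat = n01 / n0
print(p_hat)
```

With enough cascades the estimate concentrates around the true weight; with richer (sequence or time-stamped) data the same accuracy would be reached from fewer observations, which is the gap the chapter quantifies.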
The Value of Temporal Data for Learning of Influence Networks: a Characterization via
Kullback-Leibler Divergence (2013), with M. A. Dahleh (MIT EECS) and J. N. Tsitsiklis (MIT EECS), working paper [+ Abstract][Paper]
We infer local influence relations between networked entities from data on outcomes and assess the value of temporal data by
formulating relevant binary hypothesis testing problems and characterizing the speed of learning of the correct hypothesis via the
Kullback-Leibler divergence, under three different types of available data: knowing the set of entities who take a particular action;
knowing the order in which the entities take an action; and knowing the times of the actions.
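The KL-divergence comparison can be sketched on a toy example. The two sequence distributions below are invented for illustration, not taken from the paper; coarsening sequences into sets can only shrink the divergence (data-processing inequality), and in this example set data is entirely uninformative while sequence data separates the hypotheses.

```python
from math import log

# Hypothetical distributions over who acts and in what order, under
# hypothesis A ("agent 0 influences agent 1") and hypothesis B (reverse).
P_A = {(): 0.3, (0,): 0.1, (1,): 0.1, (0, 1): 0.4, (1, 0): 0.1}
P_B = {(): 0.3, (0,): 0.1, (1,): 0.1, (0, 1): 0.1, (1, 0): 0.4}

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) in nats."""
    return sum(p[x] * log(p[x] / q[x]) for x in p if p[x] > 0)

def to_sets(p):
    """Coarsen sequence data to set data: (0,1) and (1,0) become {0,1}."""
    out = {}
    for seq, prob in p.items():
        key = frozenset(seq)
        out[key] = out.get(key, 0.0) + prob
    return out

kl_seq = kl(P_A, P_B)
kl_set = kl(to_sets(P_A), to_sets(P_B))
print(kl_seq >= kl_set)  # prints True: coarser data learns no faster
```

Here the set distributions coincide (kl_set is zero), so set observations can never distinguish the hypotheses, while the sequence divergence is strictly positive and drives an exponential speed of learning.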
Coordination problems lie at the center of many economic or social phenomena characterized by self-fulfilling expectations, such as bank
runs, currency attacks, and social uprisings. There is ample evidence that local information channels enable coordination among strategic agents. How do the outcomes depend on the details of local information sharing?
Coordination with Local Information (2013), with M. A. Dahleh (MIT EECS), A. Tahbaz-Salehi (Columbia Business School), and J. N. Tsitsiklis (MIT EECS), submitted to Operations Research special issue on Information and Decisions in Social and Economic Networks, second round of reviews
[+ Abstract][Paper][Extended Abstract]
We study the role of local information channels in enabling coordination among strategic agents. Building on the standard finite-player global games framework, we show that the set of equilibria of a coordination game is highly sensitive to how information is locally shared among agents. In particular, we show that the coordination game has multiple equilibria if there exists a collection of agents such that (i) they do not share a common signal with any agent outside of that collection; and (ii) their information sets form an increasing sequence of nested sets, referred to as a filtration. Our characterization thus extends the results on the uniqueness and multiplicity of equilibria in global games beyond the well-known case in which agents have access to purely private or public signals. We then provide a characterization of how the extent of equilibrium multiplicity is determined by the extent to which subsets of agents have access to common information: we show that the size of the set of equilibrium strategies is increasing with the extent of variability in the size of the subsets of agents who observe the same signal. We finally study the set of equilibria in large coordination games, showing that as the number of agents grows, the game exhibits multiple equilibria if and only if a non-vanishing fraction of the agents have access to the same signal.
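The two conditions for multiplicity can be checked mechanically on a toy information structure. The agents, signal indices, and helper functions below are invented for illustration; they are not from the paper.

```python
# Toy information structure: each agent observes a set of signal indices.
# Collection C = {a, b, c} is a candidate for the multiplicity conditions.
info = {
    "a": {1},
    "b": {1, 2},
    "c": {1, 2, 3},
    "d": {4, 5},       # agents outside the collection
    "e": {5},
}
C = ["a", "b", "c"]

def is_filtration(info, collection):
    """Condition (ii): information sets form an increasing nested chain."""
    chain = sorted((info[i] for i in collection), key=len)
    return all(s <= t for s, t in zip(chain, chain[1:]))

def is_isolated(info, collection):
    """Condition (i): no common signal with any agent outside the collection."""
    inside = set().union(*(info[i] for i in collection))
    outside = set().union(*(info[i] for i in info if i not in collection))
    return inside.isdisjoint(outside)

print(is_filtration(info, C) and is_isolated(info, C))  # prints True
```

In this toy structure both conditions hold for C, so a coordination game with this information structure would exhibit multiple equilibria under the paper's characterization.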
A retailer wants to estimate the elasticity of demand in response to price changes or advertising so as to optimize her marketing decisions. What is the practical value of using field experiments to improve marketing decisions? How many experiments are needed as the number of products grows?
The Practical Value of Field Experiments (2012), with J. Li (MIT EECS), P. Rusmevichientong (USC Marshall School of Business), D. Simester (MIT Sloan School of Management), and J. N. Tsitsiklis (MIT EECS), submitted to Management Science, second round of reviews
In recent years, many firms have had greater opportunities to conduct field experiments to help improve their marketing decisions, but the practical value of using field experiments to optimize marketing decisions remains relatively unstudied. We
investigate advertising and promotion decisions that require estimating a large matrix
of cross-product demand elasticities and ask: how many experiments are required as
the number of products grows? Our main result demonstrates that adding structure allows us to learn faster and reduces the number of experiments required: it
may grow just logarithmically with the number of
products. These findings potentially have important implications for the application
of field experiments. Firms may be able to obtain meaningful estimates using a practically feasible number of experiments, even in settings with a large number of products.
The relevance of our findings is confirmed through simulations using realistic parameters gathered from a real-world large-scale pricing experiment. We also discuss an
application of our model to making product line pricing decisions, together with other
types of marketing decisions.
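A hedged sketch of why logarithmically many experiments can suffice under structure: the toy below assumes a 1-sparse linear response (one influential product) and random price perturbations, which is an illustrative simplification, not the paper's model or estimator.

```python
import math
import random

random.seed(1)

n = 256                      # number of products (illustrative)
true_j = 17                  # the single influential product (assumed)
m = 4 * int(math.log2(n))    # number of experiments: logarithmic in n

# Each experiment perturbs every product's price up or down at random
# and records a noiseless linear response driven by the one true product.
A = [[random.choice((-1, 1)) for _ in range(n)] for _ in range(m)]
y = [row[true_j] for row in A]

# Recover the influential product by correlating responses with each
# product's perturbation pattern across the m experiments.
scores = [abs(sum(A[e][j] * y[e] for e in range(m))) for j in range(n)]
j_hat = max(range(n), key=lambda j: scores[j])
print(j_hat == true_j)  # prints True
```

The true product's correlation equals m exactly, while the other 255 products' correlations are sums of random signs that concentrate near zero, so 32 experiments comfortably identify one product among 256; this is the flavor of the logarithmic scaling, not its proof.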
Data is indispensable for academics and practitioners alike. However, exchanging and publishing data come with considerations of right protection. How can we protect ownership while at the same time preserving the mining utility of a dataset?
Right-Protected Data Publishing with Provable Distance-based Mining (2013), with M. Vlachos (IBM Research), N. M. Freris (EPFL), and C. Lucchese (National Research Council, Italy), to appear in IEEE Transactions on Knowledge and Data Engineering [+ Abstract][Paper][Online Companion]
Protection of one's intellectual property is a topic with important technological and legal facets.
We provide mechanisms for establishing the ownership of a dataset consisting of multiple objects.
The algorithms also preserve properties of the dataset that are essential for mining operations,
and so guarantee both right protection and utility preservation. We consider a right-protection scheme based on watermarking.
Watermarking may distort the original distance graph. Our watermarking methodology preserves important distance relationships,
such as: the Nearest Neighbors (NN) of each object and the Minimum Spanning Tree (MST) of the original dataset.
This leads to preservation of any mining operation that depends on the ordering of distances between objects, such as NN-search and
classification, as well as many visualization techniques. We prove fundamental lower and upper bounds on the distance between objects
post-watermarking. In particular, we establish a restricted isometry property, i.e., tight bounds on the contraction/expansion
of the original distances. We use this analysis to design fast algorithms for NN-preserving and MST-preserving watermarking that
drastically prune the vast search space. We observe up to two orders of magnitude speedup over the exhaustive schemes, without any sacrifice
in NN or MST preservation.
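A minimal sketch of the NN-preservation principle (not the paper's spread-spectrum watermarking algorithm): if every object is displaced by less than a quarter of the smallest nearest-neighbor margin, all nearest neighbors are provably preserved. The random 2-D data and additive perturbation below are assumptions for illustration.

```python
import math
import random

random.seed(2)

pts = [(random.random(), random.random()) for _ in range(30)]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def nn(points, i):
    """Index of the nearest neighbor of point i."""
    return min((j for j in range(len(points)) if j != i),
               key=lambda j: dist(points[i], points[j]))

def min_margin(points):
    """Smallest gap between nearest and second-nearest distances."""
    gaps = []
    for i in range(len(points)):
        d = sorted(dist(points[i], points[j])
                   for j in range(len(points)) if j != i)
        gaps.append(d[1] - d[0])
    return min(gaps)

# Displace each point by at most eps < margin/4: each pairwise distance
# then changes by at most 2*eps, too little to swap any nearest neighbor.
eps = min_margin(pts) / 5
marked = [(x + random.uniform(-eps, eps) / math.sqrt(2),
           y + random.uniform(-eps, eps) / math.sqrt(2))
          for x, y in pts]

preserved = all(nn(pts, i) == nn(marked, i) for i in range(len(pts)))
print(preserved)  # prints True
```

This is the geometric core of the contraction/expansion bounds: bounding each object's displacement bounds every pairwise distance change, which in turn preserves the ordering of distances that NN-search and classification rely on.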
In decentralized detection, each of several agents/sensors receives different measurements and is allowed to communicate only a few bits. In social science models, agents make decisions selfishly. In contrast, according to the "engineering" perspective, each agent chooses what to communicate so as to optimize some overall system objective (such as the probability of error at the fusion center). If agents are allowed to broadcast their local summaries to other sensors, in addition to transmitting them to a fusion center, is the performance of the system significantly improved?
On Decentralized Detection With Partial Information Sharing Among Sensors (2011), with O. P. Kreidl (UNF EE) and J. N. Tsitsiklis (MIT EECS), IEEE Transactions on Signal Processing, Volume 59 (4), pp. 1759-1765
We study a decentralized detection architecture in
which each of a set of sensors transmits a highly compressed
summary of its observations (a binary message) to a fusion center,
which then decides on one of two alternative hypotheses. In contrast
to the star (or “parallel”) architecture considered in most of
the literature, we allow a subset of the sensors to both transmit
their messages to the fusion center and to also broadcast them to
the remaining sensors. We focus on the following architectural
question: Is there a significant performance improvement when we
allow such a message broadcast? We consider the error exponent
(asymptotically, in the limit of a large number of sensors) for the
Neyman–Pearson formulation of the detection problem. We prove
that the sharing of messages does not improve the optimal error exponent.