I study and design collective-intelligence systems, with the goal of augmenting our collective ability to solve problems.

I’ve been working on two fronts: exploring novel ways of connecting minds (and machines) to address challenges and solve problems, and studying intelligence and sensemaking in groups and collectives.

 

Designing Collective-Intelligence Systems

 

Using Collective-Intelligence to Evaluate Complex Intellectual Artifacts

In this work, which combines qualitative research and experiments, I’m exploring ways to organize people with varying levels of expertise so that we can overcome the bottleneck in evaluating complex work that is caused by the scarcity of experts. The immediate context in which I’m operating is the Climate CoLab, and I hope this work will be useful in other settings as well.

 

Publications:

 

Since this work is in its early stages, there are no results yet. The extended abstract below describes the motivation and general direction:

 

Nagar, Y. (2013). Designing a Collective-Intelligence System for Evaluating Complex, Crowd-Generated Intellectual Artifacts. In Proceedings of the 2013 ACM Conference on Computer Supported Cooperative Work (CSCW 2013), San Antonio, TX, USA.

 

Abstract: The collective intelligence of crowds is increasingly used to generate ideas, plans, designs, and predictions for addressing various challenges – from folding proteins to identifying galaxies. In many cases, evaluation of crowd inputs can be done by non-experts, or even automatically. However, evaluating some complex crowd-generated intellectual artifacts, such as plans for addressing climate change, requires high levels of expertise in multiple domains – a combination that is rare even on a global scale. I am designing a sociotechnical solution for relieving the bottleneck of expertise. If successful, principles of this design will potentially be transferable to other domains, such as the review of scientific work.

 

Driving the Crowd

 

On the feeding end of the pipe, I’m working with Josh Introne, Erik Duhaime, and several other collaborators on a framework that augments our ability to guide the work of crowds in generating useful solutions to tough problems.

Publications: coming soon. In the meantime, here’s a blog post summarizing a pilot we ran at CrowdCamp 2013.

 

Combining Human and Machine Intelligence for Making Predictions

In this project we focus on one area – predicting the tactical moves of an opponent group – in which humans and machines can work together through novel mechanisms. This is joint work with Tom Malone.

 

Publications:

 

Nagar, Y., & Malone, T. W. (2011). Making Business Predictions by Combining Human and Machine Intelligence in Prediction Markets. In Proceedings of the Thirty-Second International Conference on Information Systems (ICIS 2011), Shanghai, China.

 

Abstract: Computers can use vast amounts of data to make predictions that are often more accurate than those by human experts. Yet, humans are more adept at processing unstructured information and at recognizing unusual circumstances and their consequences. Can we combine predictions from humans and machines to get predictions that are better than either could do alone? We used prediction markets to combine predictions from groups of people and artificial intelligence agents. We found that the combined predictions were both more accurate and more robust than those made by groups of only people or only machines. This combined approach may be especially useful in situations where patterns are difficult to discern, where data are difficult to codify, or where sudden changes occur unexpectedly.
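To give a concrete sense of the mechanism, here is a minimal sketch of a binary prediction market run by a Hanson logarithmic market scoring rule (LMSR) market maker, with human and machine forecasters trading in sequence. This is an illustration only: the beliefs, the liquidity parameter, and the simple “move partway toward your belief” trading heuristic are all made up, and this is not necessarily the market design used in the paper.

import math

class LMSRMarket:
    """Binary prediction market with a Hanson LMSR automated market maker."""
    def __init__(self, liquidity=100.0):
        self.b = liquidity          # higher b = prices move less per trade
        self.q = [0.0, 0.0]         # outstanding shares for [NO, YES]

    def price(self, outcome=1):
        """Current market probability of the given outcome."""
        exps = [math.exp(qi / self.b) for qi in self.q]
        return exps[outcome] / sum(exps)

    def trade_toward(self, belief, weight=0.5):
        """Trader moves the YES price a fraction `weight` toward their belief."""
        current = self.price(1)
        target = current + weight * (belief - current)
        target = min(max(target, 1e-6), 1 - 1e-6)
        # Set share quantities so the LMSR price equals `target`:
        # p = exp(q1/b) / (exp(q0/b) + exp(q1/b))  =>  q1 - q0 = b*ln(p/(1-p))
        self.q[1] = self.q[0] + self.b * math.log(target / (1 - target))

market = LMSRMarket()
# Human forecasters and machine agents trade in the same market;
# the closing price is the combined group prediction.
for belief in [0.70, 0.55, 0.80,   # human forecasters (illustrative)
               0.62, 0.67]:        # machine agents (illustrative)
    market.trade_toward(belief)
print(f"Combined prediction: {market.price(1):.2f}")

The appeal of this kind of mechanism is that each trade nudges the price toward the trader’s belief, so human and machine inputs aggregate through the same simple interface, and the final price serves as the group’s combined prediction.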

 

An earlier and much shorter version appeared as a workshop paper:

 

Nagar, Y., & Malone, T. W. (2010). Combining Human and Machine Intelligence for Making Predictions. Presented at the NIPS 2010 Crowdsourcing and Computational Social Science Workshop, Whistler, Canada.

 

Beyond the Human Computation Metaphor

I explore the human-computation metaphor and argue that while it has triggered innovative designs, it also inhibits design. We should think more about how to build crowd-enhanced applications that foster, rather than inhibit, human capabilities.

 

Publications:

 

Nagar, Y. (2011). Beyond the Human-Computation Metaphor. In Proceedings of the Third IEEE International Conference on Social Computing (SocialCom 2011), Cambridge, MA, USA (pp. 800-805).

 

Abstract: Two assumptions have become dominant in the field of social computing and crowdsourcing – the computational view, and the assumption of a human-only crowd. In this paper, I address those assumptions. I trace their origins to the human-computation metaphor, and argue that while this metaphor has been instrumental in facilitating novel developments, it also constrains the thinking of designers. I discuss some of the limitations this metaphor might impose, and offer that additional perspectives, such as an organizational-design perspective and the distributed-cognition perspective, can help us think of novel possibilities for organizing work with crowdsourcing. I call for extending the conversation among computer scientists and organizational researchers, and propose that the metaphor of ‘information processing’ might serve as a ‘boundary-object’ around which the dialogue among these communities can thrive.

 

 

 

“As We May Think”

This line of research explores intelligence and sensemaking in human groups and collectives, small and large.

 

Collective-Sensemaking in an Online Community

 

A key question in organization science is “How do people produce and acquire a sense of order that allows them to coordinate their actions in ways that have mutual relevance?” [i]
I study how this happens in Wikipedia. This is a work in progress that has already yielded some results, and which I’d like to take further.

 

Publications:

 

Nagar, Y. (2012). What Do You Think: The Structuring of an Online Community as a Collective-Sensemaking Process. In Proceedings of the 2012 ACM Conference on Computer Supported Cooperative Work (CSCW 2012), Seattle, WA, USA.

 

Abstract: I observe conversations that take place as Wikipedia members negotiate, construct, and interpret its policies. Logs of these conversations offer a rare – perhaps unparalleled – opportunity to track how individuals, as they try to make sense, engage others in social interactions that become a collective process of sensemaking. I draw upon Weick’s model of sensemaking as committed interpretation, which I ground in a qualitative inquiry into policy discussion pages, in an attempt to explain how structuration emerges as interpretations are negotiated and then committed through conversation, and as they are reified in the policy. I argue that the wiki environment provides conditions that help commitments form, strengthen, and diffuse, and that this, in turn, helps explain trends of stabilization observed in previous research. The proposed model may prove useful for understanding structurational processes in other large wiki communities, and potentially in other radically open organizations.

 

Measuring Collective Intelligence

 

Can we quantify the intelligence of groups (i.e., ‘measure their IQ’) in a manner similar to the way it is done with individual humans? What factors affect a group’s intelligence?

Based on a couple of studies done at MIT and CMU, the answer to the first question appears to be yes, and the answer to the second question turned out to be quite interesting. This work, led by Anita Woolley, was published in Science (note: I’m not an author); I was involved in the project as a research assistant.
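The approach parallels how a general intelligence factor (‘g’) is measured for individuals: give many groups a battery of tasks and look for a single statistical factor underlying their scores across tasks. Below is a minimal, illustrative sketch of that idea; the data are synthetic, and this is not the actual analysis pipeline from the study.

import numpy as np

rng = np.random.default_rng(0)
n_groups, n_tasks = 40, 5
ability = rng.normal(size=n_groups)  # latent 'group ability' (synthetic)
scores = ability[:, None] + rng.normal(scale=0.8, size=(n_groups, n_tasks))

# Standardize each task, then take the first principal component
# of the task-score covariance matrix as the common factor.
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(z, rowvar=False))
c_scores = z @ eigvecs[:, -1]            # each group's factor score
explained = eigvals[-1] / eigvals.sum()  # variance explained by the factor
print(f"First factor explains {explained:.0%} of task-score variance")
# A factor's sign is arbitrary, so compare using the absolute correlation.
recovery = abs(np.corrcoef(c_scores, ability)[0, 1])
print(f"Correlation with latent ability: {recovery:.2f}")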


[i] Weick, K. E. (1993). Sensemaking in organizations: Small structures with large consequences. In J. K. Murnighan (Ed.), Social psychology in organizations: Advances in theory and research (pp. 10-37). Prentice Hall College Division.