Intelligent Agents Lab


Creating Social Systems

Topics in Social Systems (Spring 2014)

Location: HEC-356
Time: 10:30 am - 12 pm
Date Presenter Reading
Jan 09 Alireza
Identifying Information Spreaders in Twitter Follower Networks
(Wang, Liu, Zhang, Li)

Abstract

A number of research efforts on Twitter have contributed towards understanding various factors related to retweetability, analyzing retweeting and diffusion patterns, predicting retweets, etc. One fundamental research question remains untackled: given a user and her followers, which of the followers are likely to spread her tweets to the world (the information spreader identification problem)? Answering this new and open problem helps to bridge the gap between analyzing retweetability and understanding information diffusion. Using a large-scale Twitter data set, we first find that retweet history is not an ideal method for identifying information spreaders, especially for long-tail users. Backed by statistical analysis, we set out to extract meaningful features and present a set of feasible approaches for identifying information spreaders in Twitter follower networks. Our study reports interesting findings, sheds light on many practical applications, helps explain the mechanisms of relaying information from one user to her followers, and offers future lines of research.

Jan 16 Xi
Supervised Random Walks: Predicting and Recommending Links in Social Networks
(Backstrom, Leskovec)

Abstract

Predicting the occurrence of links is a fundamental problem in networks. In the link prediction problem we are given a snapshot of a network and would like to infer which interactions among existing members are likely to occur in the near future or which existing interactions are we missing. Although this problem has been extensively studied, the challenge of how to effectively combine the information from the network structure with rich node and edge attribute data remains largely open.

We develop an algorithm based on Supervised Random Walks that naturally combines the information from the network structure with node and edge level attributes. We achieve this by using these attributes to guide a random walk on the graph. We formulate a supervised learning task where the goal is to learn a function that assigns strengths to edges in the network such that a random walker is more likely to visit the nodes to which new links will be created in the future. We develop an efficient training algorithm to directly learn the edge strength estimation function.

Our experiments on the Facebook social graph and large collaboration networks show that our approach outperforms state-of-the-art unsupervised approaches as well as approaches that are based on feature extraction.
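The paper's core mechanism can be illustrated with a small sketch: edge strengths come from a parameterized function of edge features, and a random walk with restart ranks candidate link targets by visit probability. This is not the authors' code; the logistic strength function, toy graph, and feature values are illustrative assumptions.

```python
# Sketch of the Supervised Random Walks idea: learned edge strengths
# bias the transition probabilities of a restart random walk.
import math

def edge_strength(features, w):
    # Logistic function of edge features; the choice of function is illustrative.
    return 1.0 / (1.0 + math.exp(-sum(f * wi for f, wi in zip(features, w))))

def random_walk_scores(adj, feats, w, source, alpha=0.15, iters=100):
    """Power iteration for a random walk that restarts at `source`,
    with transition probabilities proportional to edge strengths."""
    nodes = list(adj)
    p = {u: 1.0 if u == source else 0.0 for u in nodes}
    for _ in range(iters):
        nxt = {u: alpha if u == source else 0.0 for u in nodes}
        for u in nodes:
            strengths = {v: edge_strength(feats[(u, v)], w) for v in adj[u]}
            total = sum(strengths.values())
            for v, s in strengths.items():
                nxt[v] += (1 - alpha) * p[u] * (s / total)
        p = nxt
    return p

# Toy graph: from 'a', the edge feature favors 'b' over 'c',
# so 'b' should rank higher as a predicted link target.
adj = {'a': ['b', 'c'], 'b': ['a'], 'c': ['a']}
feats = {('a', 'b'): [2.0], ('a', 'c'): [0.5],
         ('b', 'a'): [1.0], ('c', 'a'): [1.0]}
scores = random_walk_scores(adj, feats, [1.0], 'a')
```

In the full method, the weight vector `w` is learned so that walk scores of true future links exceed those of non-links; the sketch only shows the forward pass for a fixed `w`.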

Jan 23 Astrid
Reinforcement Learning in Robotics: A Survey
(Kober, Bagnell, Peters)

Abstract

Reinforcement learning offers to robotics a framework and set of tools for the design of sophisticated and hard-to-engineer behaviors. Conversely, the challenges of robotic problems provide inspiration, impact, and validation for developments in reinforcement learning. The relationship between disciplines has sufficient promise to be likened to that between physics and mathematics. In this article, we attempt to strengthen the links between the two research communities by providing a survey of work in reinforcement learning for behavior generation in robots. We highlight both key challenges in robot reinforcement learning and notable successes. We discuss how contributions tamed the complexity of the domain and study the role of algorithms, representations, and prior knowledge in achieving these successes. As a result, a particular focus of our paper lies on the choice between model-based and model-free as well as between value function-based and policy search methods. By analyzing a simple problem in some detail we demonstrate how reinforcement learning approaches may be profitably applied, and we note throughout open questions and the tremendous potential for future research.

Jan 30 Erfan
GPU Parallel Programming Concepts and Programming Tutorials for CUDA C++

CUDA Tutorial 1: Introduction
CUDA Tutorial 2: Basics and a First Kernel
CUDA Tutorial 3: Display Driver Has Stopped Working and Has Recovered
CUDA Tutorial 4: Threads, Thread Blocks and Grids
CUDA Tutorial 5: Memory Overview
CUDA Tutorial 6: An Embarrassingly Parallel Algorithm 1
CUDA Tutorial 7: An Embarrassingly Parallel Algorithm 2
CUDA Tutorial 8: Intro to Shared Memory
CUDA Tutorial 9: Bank Conflicts
CUDA Tutorial 10: Blocking with Shared Memory


Getting Started with CUDAfy
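Since no code accompanies the tutorial list above, a rough Python analogue of the CUDA indexing model (tutorials 4-7) may help: each thread derives a global index from its block and thread coordinates, and an embarrassingly parallel kernel applies the same operation at every index. The `launch` helper and `saxpy_kernel` are illustrative stand-ins written in Python, not CUDA API calls.

```python
# Simulate CUDA's 1-D execution model: blocks of threads, each thread
# handling one array element identified by its global index.

def global_index(block_idx, block_dim, thread_idx):
    # Mirrors CUDA's blockIdx.x * blockDim.x + threadIdx.x
    return block_idx * block_dim + thread_idx

def launch(kernel, grid_dim, block_dim, *args):
    """Stand-in for a kernel launch: run the kernel once per (block, thread) pair."""
    for b in range(grid_dim):
        for t in range(block_dim):
            kernel(global_index(b, block_dim, t), *args)

def saxpy_kernel(i, a, x, y, out, n):
    # Guard against indices past the array end, as real kernels must
    # when the grid overshoots the data size.
    if i < n:
        out[i] = a * x[i] + y[i]

n = 10
x = list(range(n)); y = [1.0] * n; out = [0.0] * n
launch(saxpy_kernel, 3, 4, 2.0, x, y, out, n)  # 3 blocks of 4 threads cover 10 elements
```

The bounds check matters because 3 blocks of 4 threads yield 12 global indices for only 10 elements; the same guard pattern appears in essentially every real CUDA kernel.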

Feb 6 No meeting
Feb 13 Hamid
Social Network Dynamics in a Massive Online Game: Network Turnover, Non-densification, and Team Engagement in Halo Reach
(Merritt, Clauset)

Abstract

Online multiplayer games are a popular form of social interaction, used by hundreds of millions of individuals. However, little is known about the social networks within these online games, or how they evolve over time. Understanding human social dynamics within massive online games can shed new light on social interactions in general and inform the development of more engaging systems. Here, we study a novel, large friendship network, inferred from nearly 18 billion social interactions over 44 weeks between 17 million individuals in the popular online game Halo: Reach. This network is one of the largest, most detailed temporal interaction networks studied to date, and provides a novel perspective on the dynamics of online friendship networks, as opposed to mere interaction graphs. Initially, this network exhibits strong structural turnover and decays rapidly from a peak size. In the following period, however, both network size and turnover stabilize, producing a dynamic structural equilibrium. In contrast to other studies, we find that the Halo friendship network is non-densifying: both the mean degree and the average pairwise distance are stable, suggesting that densification cannot occur when maintaining friendships is costly. Finally, players with greater long-term engagement exhibit stronger local clustering, suggesting a group-level social engagement process. These results demonstrate the utility of online games for studying social networks, shed new light on empirical temporal graph patterns, and clarify the claims of universality of network densification.

Feb 20 Mahsa
Defense practice talk
Feb 27 Rahmat
Emergence of Norms Through Social Learning (Sen, Airiau)

Abstract

Behavioral norms are key ingredients that allow agent coordination where societal laws do not sufficiently constrain agent behaviors. Whereas social laws need to be enforced in a top-down manner, norms evolve in a bottom-up manner and are typically more self-enforcing. While effective norms can significantly enhance performance of individual agents and agent societies, there has been little work in multiagent systems on the formation of social norms. We propose a model that supports the emergence of social norms via learning from interaction experiences. In our model, individual agents repeatedly interact with other agents in the society over instances of a given scenario. Each interaction is framed as a stage game. An agent learns its policy to play the game over repeated interactions with multiple agents. We term this mode of learning social learning, which is distinct from an agent learning from repeated interactions against the same player. We are particularly interested in situations where multiple action combinations yield the same optimal payoff. The key research question is to find out if the entire population learns to converge to a consistent norm. In addition to studying such emergence of social norms among homogeneous learners via social learning, we study the effects of heterogeneous learners, population size, multiple social groups, etc.
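The interaction model described above can be sketched in a few lines: agents repeatedly play a two-action coordination game against randomly drawn partners (the "social learning" mode) and update simple action-value estimates. The game, the epsilon-greedy learner, and all parameters are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch of norm emergence through social learning: agents are
# rewarded for matching a random partner's action, so the population
# tends to settle on one convention even though both are equally good.
import random

random.seed(0)
N, ROUNDS, EPS, LR = 20, 3000, 0.1, 0.1
q = [[0.0, 0.0] for _ in range(N)]   # each agent's value estimate per action

def choose(agent):
    if random.random() < EPS:        # occasional exploration
        return random.randrange(2)
    return 0 if q[agent][0] >= q[agent][1] else 1

for _ in range(ROUNDS):
    # Social learning: each round pairs an agent with a random other
    # member of the society, not a fixed opponent.
    i, j = random.sample(range(N), 2)
    ai, aj = choose(i), choose(j)
    payoff = 1.0 if ai == aj else 0.0          # both rewarded for matching
    q[i][ai] += LR * (payoff - q[i][ai])
    q[j][aj] += LR * (payoff - q[j][aj])

norm = [0 if qi[0] >= qi[1] else 1 for qi in q]  # each agent's preferred action
```

With enough rounds the population typically converges to a single shared action, which is the "norm" in the paper's sense; heterogeneous learners or multiple groups would vary the update rule or the pairing scheme.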

Mar 06 No meeting
Mar 13 Bennie
Defense practice talk
Mar 20 Xi
Defense practice talk
Mar 27 No meeting
Apr 3 Ryan
Regret Minimization in Games with Incomplete Information (Zinkevich, Bowling, Johanson, Piccione)
Computing Robust Counter-Strategies (Johanson, Zinkevich, Bowling)

Abstract

Extensive games are a powerful model of multiagent decision-making scenarios with incomplete information. Finding a Nash equilibrium for very large instances of these games has received a great deal of recent attention. In this paper, we describe a new technique for solving large games based on regret minimization. In particular, we introduce the notion of counterfactual regret, which exploits the degree of incomplete information in an extensive game. We show how minimizing counterfactual regret minimizes overall regret, and therefore in self-play can be used to compute a Nash equilibrium. We demonstrate this technique in the domain of poker, showing we can solve abstractions of limit Texas Hold’em with as many as 10^12 states, two orders of magnitude larger than previous methods.
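The matrix-game building block behind CFR, regret matching, can be sketched as follows. Note this is plain regret matching on rock-paper-scissors in self-play, not counterfactual regret on an extensive game; the asymmetric initialization and iteration count are illustrative.

```python
# Regret matching in self-play on rock-paper-scissors: each player plays
# actions in proportion to accumulated positive regret, and the
# time-averaged strategies approach the (uniform) Nash equilibrium.

PAYOFF = [[0, -1, 1],     # player 1's payoff: PAYOFF[a1][a2]
          [1, 0, -1],
          [-1, 1, 0]]

def strategy(regrets):
    """Regret matching: probabilities proportional to positive regret."""
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1/3, 1/3, 1/3]

def train(iters=20000):
    # Asymmetric initial regrets so the dynamics are non-trivial.
    reg = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]  # cumulative regrets
    avg = [[0.0] * 3, [0.0] * 3]              # cumulative strategy sums
    for _ in range(iters):
        s = [strategy(reg[0]), strategy(reg[1])]
        for p in range(2):
            opp = s[1 - p]
            # Expected payoff of each pure action against the opponent's mix.
            if p == 0:
                u = [sum(PAYOFF[a][b] * opp[b] for b in range(3)) for a in range(3)]
            else:
                u = [sum(-PAYOFF[b][a] * opp[b] for b in range(3)) for a in range(3)]
            ev = sum(u[a] * s[p][a] for a in range(3))
            for a in range(3):
                reg[p][a] += u[a] - ev     # accumulate regret
                avg[p][a] += s[p][a]       # accumulate strategy for averaging
    return [[x / iters for x in avg[p]] for p in range(2)]

avg0, avg1 = train()
```

CFR applies this same update at every information set of an extensive game, weighting regrets counterfactually; the key property used above is that average regret vanishing in self-play drives the average strategies toward equilibrium in zero-sum games.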

Abstract

Adaptation to other initially unknown agents often requires computing an effective counter-strategy. In the Bayesian paradigm, one must find a good counterstrategy to the inferred posterior of the other agents’ behavior. In the experts paradigm, one may want to choose experts that are good counter-strategies to the other agents’ expected behavior. In this paper we introduce a technique for computing robust counter-strategies for adaptation in multiagent scenarios under a variety of paradigms. The strategies can take advantage of a suspected tendency in the decisions of the other agents, while bounding the worst-case performance when the tendency is not observed. The technique involves solving a modified game, and therefore can make use of recently developed algorithms for solving very large extensive games. We demonstrate the effectiveness of the technique in two-player Texas Hold’em. We show that the computed poker strategies are substantially more robust than best response counter-strategies, while still exploiting a suspected tendency. We also compose the generated strategies in an experts algorithm showing a dramatic improvement in performance over using simple best responses.

Apr 10 Erfan
Gamifying Intelligent Environments (Liu, Nakajima)

Abstract

Recently, digital designers have begun to integrate game elements and mechanics into non-game applications, systems, and services to better engage end users. This notion has been named "gamification". In this paper, we discuss the idea of applying the gamification concept to the design of intelligent environments to improve overall user engagement. We present two case studies to better understand the effectiveness of gamifying intelligent systems: UbiAsk, a mobile crowdsourcing application that works as image-based social search across languages, and EcoIsland, a persuasive application for motivating users to reduce CO2 emissions. We argue that game-based incentive methods only work with careful design: designers should be aware that the main functionalities of the system have much greater impact than the additional gamified components, and that the desired game-like user behavior requires a comprehensive game-like experience supported not only by a "game structure" but also by a "game-look" surface.

Apr 17 Hamid, Alireza, Sarah, Kristjan, Cassian
Practice presentation (Senior Design)
Apr 24 Rahmat
TBD
May 01 Rahmat
TBD

Further Reading:

Current work in the area of social media and data mining, providing compelling justification for why these research questions are important.