Subramanian Ramamoorthy – Smart Society Project
http://www.smart-society-project.eu
"Hybrid and Diversity-Aware Collective Adaptive Systems: When People Meet Machines to Build a Smarter Society"

Diversity-Aware Recommendation for Human Collectives
http://www.smart-society-project.eu/diversityawarerecommendation/
Fri, 13 Jan 2017

Abstract: Sharing economy applications need to coordinate humans, each of whom may have different preferences over the provided service. Traditional approaches model this as a resource allocation problem and solve it by identifying matches between users and resources. These approaches require knowledge of user preferences and, crucially, assume that users act deterministically or, equivalently, that each of them is expected to accept the proposed match. This assumption is unrealistic for applications like ridesharing and house sharing (e.g., Airbnb), where coordinating users requires handling the diversity and uncertainty in human behaviour.
We address this shortcoming by proposing a diversity-aware recommender system that leaves the decision power with users but still assists them in coordinating their activities. We achieve this through taxation, which indirectly modifies users’ preferences over options by imposing a penalty on those options that, if selected, are expected to lead to less favourable outcomes from the perspective of the collective. The framework we use to identify the options to recommend is composed of three optimisation steps, each of which has a mixed integer linear program at its core. By combining these three programs, we can also compute solutions that strike a good trade-off between the global goals of the collective and the interests of individual users. We demonstrate the effectiveness of our approach with experiments in a simulated ridesharing scenario, showing (a) significantly better coordination results with the proposed approach than with a set of recommendations in which taxation is not applied and each solution maximises the goal of the collective, (b) that we can propose a recommendation set to users, instead of imposing a single allocation on them, at no loss to the collective, and (c) that our system allows for an adaptive trade-off between conflicting criteria.
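
As a rough illustration of the taxation idea in isolation (not the paper’s three-step MILP framework), the Python sketch below shows how a penalty on collectively undesirable options can shift an individual user’s ranking while the final choice stays with the user; all user names, utilities and tax values are invented for illustration.

```python
# Illustrative sketch only: a toy ridesharing example in which a tax on the
# collectively undesirable option (driving alone) shifts some users' rankings.
# All numbers are invented; the paper derives such penalties via three MILPs.

# Each user's private utility for two options: share a ride or drive alone.
user_utilities = {
    "alice": {"share": 0.6, "alone": 0.7},   # slightly prefers driving alone
    "bob":   {"share": 0.5, "alone": 0.9},   # strongly prefers driving alone
    "carol": {"share": 0.8, "alone": 0.4},
}

# The collective prefers fewer cars on the road, so solo driving is taxed.
tax = {"share": 0.0, "alone": 0.3}

def recommend(utilities, tax):
    """Rank each user's options by taxed (effective) utility; users still decide."""
    recommendations = {}
    for user, utils in utilities.items():
        effective = {opt: u - tax[opt] for opt, u in utils.items()}
        recommendations[user] = sorted(effective, key=effective.get, reverse=True)
    return recommendations

print(recommend(user_utilities, tax))
# The tax flips Alice's ranking towards sharing, while Bob's stronger preference
# for driving alone survives it: the decision power remains with the users.
```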

Citation: P. Andreadis, S. Ceppi, M. Rovatsos, S. Ramamoorthy, Diversity-Aware Recommendation for Human Collectives, In Proc. 1st International Workshop on Diversity-Aware Artificial Intelligence (DIVERSITY 2016), The Hague, The Netherlands, 2016.

Download: http://bit.ly/2jp8rUr

Predicting actions using an adaptive probabilistic model of human decision behaviours
http://www.smart-society-project.eu/predictingactions/
Thu, 12 Jan 2017

Abstract: Computer interfaces provide an environment that allows for multiple objectively optimal solutions, but individuals will, over time, come to use a smaller number of subjectively optimal solutions, developed as habits that have been formed and tuned by repetition. Designing an interface agent to provide assistance in this environment thus requires not only knowledge of the objectively optimal solutions, but also recognition that users act from habit and that adaptation to an individual’s subjectively optimal solutions is required. We present a dynamic Bayesian network model for predicting a user’s actions by inferring whether a decision is being made by deliberation or through habit. The model adapts to individuals in a principled manner by incorporating observed actions using Bayesian probabilistic techniques. We demonstrate the model’s effectiveness using specific implementations of deliberation and habitual decision making that are simple enough to transparently expose the mechanisms of our estimation procedure. We show that this implementation achieves > 90% prediction accuracy in a task with a large number of optimal solutions and a high degree of freedom in selecting actions.
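
The minimal sketch below is our own illustration, not the paper’s dynamic Bayesian network: it keeps a single belief that the user is acting from habit rather than deliberation, predicts the next action as a mixture of a habitual and a deliberative model, and updates the belief after each observed action; the interface actions and probabilities are assumed.

```python
# Assumed, simplified sketch of habit-vs-deliberation inference (not the paper's DBN).
from collections import Counter

ACTIONS = ["shortcut", "menu", "toolbar"]   # hypothetical interface actions
OPTIMAL = {"shortcut", "menu"}              # assume two objectively optimal actions

def habit_dist(history):
    """Habitual model: probability proportional to past use (add-one smoothing)."""
    counts = Counter(history)
    total = sum(counts[a] + 1 for a in ACTIONS)
    return {a: (counts[a] + 1) / total for a in ACTIONS}

def deliberation_dist():
    """Deliberative model: uniform over the objectively optimal actions."""
    return {a: (1 / len(OPTIMAL) if a in OPTIMAL else 0.0) for a in ACTIONS}

def predict_and_update(history, p_habit, observed):
    """Predict the next action as a mixture, then update the habit belief."""
    h, d = habit_dist(history), deliberation_dist()
    prediction = {a: p_habit * h[a] + (1 - p_habit) * d[a] for a in ACTIONS}
    evidence = p_habit * h[observed] + (1 - p_habit) * d[observed]
    p_habit = p_habit * h[observed] / evidence     # Bayesian update of the mode belief
    return prediction, p_habit

history, p_habit = [], 0.5
for obs in ["shortcut"] * 6:                       # a user habituating to 'shortcut'
    prediction, p_habit = predict_and_update(history, p_habit, obs)
    history.append(obs)
print({a: round(p, 2) for a, p in prediction.items()}, round(p_habit, 2))
# The prediction increasingly favours the habitual action, and the belief that the
# user acts from habit ends well above its 0.5 prior.
```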

Citation: A.H. Cruickshank, R. Shillcock, S. Ramamoorthy, Predicting actions using an adaptive probabilistic model of human decision behaviours, Poster, In Ext. Proc. Conference on User Modelling, Adaptation and Personalization (UMAP), 2015.

Download: http://bit.ly/2joIdRU

Are you doing what I think you are doing? Criticising uncertain agent models
http://www.smart-society-project.eu/areyoudongwhatithinkyouaredoing/
Thu, 12 Jan 2017

Abstract: The key to effective interaction in many multiagent applications is to reason explicitly about the behaviour of other agents, in the form of a hypothesised behaviour. While there exist several methods for the construction of a behavioural hypothesis, there is currently no universal theory which would allow an agent to assess the correctness of such a hypothesis. In this work, we present a novel algorithm which decides this question in the form of a frequentist hypothesis test. The algorithm allows for multiple metrics in the construction of the test statistic and learns its distribution during the interaction process, with asymptotic correctness guarantees. We present results from a comprehensive set of experiments, demonstrating that the algorithm achieves high accuracy and scalability at low computational costs.
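
As a hedged sketch of the general idea only (not the paper’s algorithm or its multiple metrics), the snippet below learns the distribution of a likelihood-based score by simulating play from the hypothesised policy and then reads off a frequentist p-value for the score of the actually observed actions; the policies and sample sizes are invented.

```python
# Assumed illustration of a frequentist test of a hypothesised behaviour.
import math
import random

random.seed(0)
ACTIONS = ["cooperate", "defect"]
hypothesis = {"cooperate": 0.8, "defect": 0.2}    # hypothesised policy of the other agent

def sample_actions(policy, n):
    return random.choices(list(policy), weights=list(policy.values()), k=n)

def score(actions, policy):
    """Average log-likelihood of the observed actions under the hypothesis."""
    return sum(math.log(policy[a]) for a in actions) / len(actions)

# Learn the score distribution under the hypothesis by simulation.
n_obs, n_sim = 50, 2000
simulated = [score(sample_actions(hypothesis, n_obs), hypothesis) for _ in range(n_sim)]

# Here the observed actions actually come from a different policy (wrong hypothesis).
true_policy = {"cooperate": 0.4, "defect": 0.6}
observed = score(sample_actions(true_policy, n_obs), hypothesis)

# One-sided p-value: how often a simulated score is as low as the observed one.
p_value = sum(s <= observed for s in simulated) / n_sim
print(f"p-value = {p_value:.3f}")   # a small value is evidence against the hypothesis
```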

Citation: S. Albrecht, S. Ramamoorthy, Are you doing what I think you are doing? Criticising uncertain agent models, In Proc. Conference on Uncertainty in Artificial Intelligence (UAI), 2015.

Download: http://bit.ly/2jcLEOD

An Empirical Study on the Practical Impact of Prior Beliefs over Policy Types
http://www.smart-society-project.eu/anempiricalstudy/
Thu, 12 Jan 2017

Abstract: Many multiagent applications require an agent to learn quickly how to interact with previously unknown other agents. To address this problem, researchers have studied learning algorithms which compute posterior beliefs over a hypothesised set of policies, based on the observed actions of the other agents. The posterior belief is complemented by the prior belief, which specifies the subjective likelihood of policies before any actions are observed. In this paper, we present the first comprehensive empirical study on the practical impact of prior beliefs over policies in repeated interactions. We show that prior beliefs can have a significant impact on the long-term performance of such methods, and that the magnitude of the impact depends on the depth of the planning horizon. Moreover, our results demonstrate that automatic methods can be used to compute prior beliefs with consistent performance effects. This indicates that prior beliefs could be eliminated as a manual parameter and instead be computed automatically.
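
The sketch below is a minimal illustration of the underlying belief update, not the paper’s experimental setup: posterior beliefs over a small hypothesised set of policy types are computed from a short action history under two different priors, showing that with few observations the prior still shapes the posterior; the policy types and probabilities are assumed.

```python
# Assumed illustration: sequential Bayesian update of beliefs over policy types.
def update(prior, policies, observations):
    """Return the posterior over policy types after the observed actions."""
    belief = dict(prior)
    for obs in observations:
        likelihoods = {name: policy[obs] for name, policy in policies.items()}
        norm = sum(belief[name] * likelihoods[name] for name in belief)
        belief = {name: belief[name] * likelihoods[name] / norm for name in belief}
    return belief

# Two hypothesised policy types for the other agent (invented for illustration).
policies = {
    "mostly_cooperate": {"C": 0.9, "D": 0.1},
    "mostly_defect":    {"C": 0.2, "D": 0.8},
}
observed = ["C", "C", "D", "C"]                       # a short interaction history

uniform_prior = {"mostly_cooperate": 0.5, "mostly_defect": 0.5}
skewed_prior  = {"mostly_cooperate": 0.1, "mostly_defect": 0.9}
print(update(uniform_prior, policies, observed))
print(update(skewed_prior, policies, observed))
# After only a few observations the two priors still yield noticeably different
# posteriors, which is why the prior can influence early decisions and, through
# them, long-term performance.
```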

Citation: S. Albrecht, J. Crandall, S. Ramamoorthy, An Empirical Study on the Practical Impact of Prior Beliefs over Policy Types, In Proc. AAAI Conference on Artificial Intelligence (AAAI), 2015.

Download: http://bit.ly/2jcOpzr

E-HBA: Using Action Policies for Expert Advice and Agent Typification
http://www.smart-society-project.eu/usingactionpolicies/
Thu, 12 Jan 2017

Abstract: Past research has studied two approaches to utilise pre-defined policy sets in repeated interactions: as experts, to dictate our own actions, and as types, to characterise the behaviour of other agents. In this work, we bring these complementary views together in the form of a novel meta-algorithm, called Expert-HBA (E-HBA), which can be applied to any expert algorithm that considers the average (or total) payoff an expert has yielded in the past. E-HBA gradually mixes the past payoff with a predicted future payoff, which is computed using the type-based characterisation. We present results from a comprehensive set of repeated matrix games, comparing the performance of several well-known expert algorithms with and without the aid of E-HBA. Our results show that E-HBA has the potential to significantly improve the performance of expert algorithms.
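
As a rough sketch of the mixing step alone (not the E-HBA meta-algorithm, which also prescribes how the mixing weight evolves over time), the snippet below blends an expert’s observed average payoff with an expected future payoff computed from a posterior over the other agent’s types; all payoffs, type names and the weight are invented.

```python
# Assumed illustration of mixing past payoff with a type-based payoff prediction.
def mixed_value(past_avg_payoff, type_posterior, predicted_payoff, weight):
    """
    past_avg_payoff:  average payoff the expert has yielded so far
    type_posterior:   belief over the other agent's policy types
    predicted_payoff: expected future payoff of the expert against each type
    weight:           how much to trust the type-based prediction (0..1)
    """
    expected_future = sum(type_posterior[t] * predicted_payoff[t] for t in type_posterior)
    return (1 - weight) * past_avg_payoff + weight * expected_future

# Invented numbers: the expert has looked mediocre so far, but the type posterior
# suggests it should do well against the most likely type of the other agent.
posterior = {"mostly_cooperate": 0.7, "mostly_defect": 0.3}
predicted = {"mostly_cooperate": 3.0, "mostly_defect": 0.5}
print(mixed_value(past_avg_payoff=1.0, type_posterior=posterior,
                  predicted_payoff=predicted, weight=0.5))      # 1.625
```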

Citation: S. Albrecht, J. Crandall, S. Ramamoorthy, E-HBA: Using Action Policies for Expert Advice and Agent Typification, In Proc. AAAI-Workshop on Multiagent Interaction without Prior Coordination (MIPC), 2015.

Download: http://bit.ly/2j5V9MT

Adapting interaction environments to diverse users through online action set selection
http://www.smart-society-project.eu/adaptinginteraction/
Thu, 12 Jan 2017

Abstract: Interactive interfaces are a common feature of many systems, ranging from field robotics to video games. In most applications, these interfaces must be used by a heterogeneous set of users, whose effectiveness with the same interface varies substantially depending on how it is configured. We address the issue of personalizing such an interface, adapting its parameters to present the user with an environment that is optimal with respect to their individual traits, enabling that particular user to achieve their personal optimum. We introduce a new class of problem in interface personalization, where the task of the adaptive interface is to choose the subset of actions of the full interface to present to the user. In formalizing this problem, we model the user as a Markov decision process (MDP), wherein the transition dynamics within a task depend on the user's type (e.g., skill or dexterity), which parametrizes the MDP. The action set of the MDP is divided into disjoint sets of actions, with different action sets optimal for different types (transition dynamics). The task of the adaptive interface is then to choose the right action set. Given this formalization, we present experiments with simulated and human users in a video game domain to show that (a) action set selection is an interesting class of problems, (b) adaptively choosing the right action set improves performance over sticking to a fixed action set, and (c) immediately applicable approaches such as bandits can be improved upon.
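
The toy sketch below is far simpler than the MDP formulation in the paper, but it illustrates the core loop under invented numbers: keep a belief over the user’s type, present the action set with the highest expected value under that belief, and update the belief from observed outcomes.

```python
# Assumed illustration of online action set selection under a belief over user types.

# Expected task performance of each candidate action set for each user type (invented).
VALUE = {
    "novice": {"basic_set": 0.8, "advanced_set": 0.3},
    "expert": {"basic_set": 0.5, "advanced_set": 0.9},
}
# Probability that a user of each type completes a step quickly (type evidence, invented).
P_FAST = {"novice": 0.2, "expert": 0.8}

def choose_action_set(belief):
    """Pick the action set that maximises expected performance under the type belief."""
    return max(["basic_set", "advanced_set"],
               key=lambda s: sum(belief[t] * VALUE[t][s] for t in belief))

def update_belief(belief, fast):
    """Bayesian update of the type belief after observing a fast or slow step."""
    like = {t: (P_FAST[t] if fast else 1 - P_FAST[t]) for t in belief}
    norm = sum(belief[t] * like[t] for t in belief)
    return {t: belief[t] * like[t] / norm for t in belief}

belief = {"novice": 0.5, "expert": 0.5}
for fast in [True, True, True]:                   # the user keeps completing steps quickly
    print(choose_action_set(belief), {t: round(p, 2) for t, p in belief.items()})
    belief = update_belief(belief, fast)
# The presented action set switches from 'basic_set' to 'advanced_set' as the
# evidence shifts the belief towards the 'expert' type.
```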

Citation: M.M.H. Mahmud, B. Rosman, S. Ramamoorthy, P. Kohli. Adapting interaction environments to diverse users through online action set selection. In Proc. AAAI Workshop on Machine Learning for Interactive Systems (AAAI-MLIS), 2014.

Download: http://bit.ly/2iKKXZb

Giving advice to agents with hidden goals
http://www.smart-society-project.eu/givingadvicetoagentswithhiddengoals/
Thu, 12 Jan 2017

Abstract: This paper considers the problem of providing advice to an autonomous agent when neither the behavioural policy nor the goals of that agent are known to the advisor. We present an approach based on building a model of "commonsense" behaviour in the domain from an aggregation of different users performing various tasks, modelled as MDPs, in the same domain. From this model, we estimate the normalcy of the trajectory given by a new agent in the domain, and provide behavioural advice based on an approximation of the trade-off in utility between the potential benefit to the exploring agent and the cost incurred in giving this advice. The approach is evaluated on a maze world domain by providing advice to different types of agents, and we show that this leads to a considerable improvement in task completion rates across all agent types.
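
The following sketch is only an assumed illustration of the normalcy idea, not the paper’s method or its utility trade-off: it aggregates state-action counts from many users into a "commonsense" model, scores how normal a new agent’s trajectory is, and offers the most common action for the agent’s current state when normalcy is low and a crude expected benefit exceeds a fixed advice cost; all states, actions and thresholds are invented.

```python
# Assumed illustration of advice based on the normalcy of an agent's trajectory.
from collections import Counter, defaultdict

def build_model(trajectories):
    """Aggregate state -> action counts over many users' trajectories."""
    model = defaultdict(Counter)
    for trajectory in trajectories:
        for state, action in trajectory:
            model[state][action] += 1
    return model

def normalcy(model, trajectory):
    """Average probability, under the aggregated model, of the agent's chosen actions."""
    probs = []
    for state, action in trajectory:
        total = sum(model[state].values())
        probs.append(model[state][action] / total if total else 0.0)
    return sum(probs) / len(probs)

def advise(model, trajectory, threshold=0.3, benefit=1.0, cost=0.4):
    """Offer the most common action for the agent's current state if it seems worthwhile."""
    n = normalcy(model, trajectory)
    if n >= threshold or benefit * (1 - n) <= cost:
        return None                                   # agent looks normal, or advice too costly
    current_state = trajectory[-1][0]
    return model[current_state].most_common(1)[0][0]

observed = [[("start", "right"), ("hall", "up")]] * 9 + [[("start", "down"), ("hall", "up")]]
model = build_model(observed)
new_agent = [("start", "down"), ("hall", "down")]     # an unusual trajectory
print(advise(model, new_agent))                       # suggests 'up', the usual move from 'hall'
```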

Citation: B. Rosman, S. Ramamoorthy, Giving advice to agents with hidden goals, In Proc. IEEE International Conference on Robotics and Automation (ICRA), 2014.

Download: http://bit.ly/2iKvWqc
