Research Interests

Robin Cohen conducts research in the artificial intelligence subfields of multiagent systems, user modeling and intelligent interaction. Most recently the focus of her research has been on modeling trust in multiagent systems, with connections to social networking; there is also interest in the multiagent problem of coordination. Machine learning methods form the basis for many of the research solutions to date. One overarching consideration is addressing possible misinformation in online social networks, through the application of techniques from multiagent trust modeling. An application area of current focus is healthcare; applications of trust modeling to transportation, electronic commerce and intelligent tutoring have also been explored. Prof. Cohen is also particularly interested in current concerns within the field of artificial intelligence regarding trusted AI, promoting effective partnerships between intelligent agents and human users.

With Alex Parmentier, she explored how to integrate methods for data-driven trust modeling in order to reason about which peer advice to trust in social networking environments. This work also included avenues for personalizing trust modeling solutions.

With Noel Sardana, research has investigated how to use trust to streamline the presentation of messages in social networking environments. An initial model integrated the concept of credibility, balanced against similarity, in order to cope with folklore. From there, the research explored a principled Bayesian approach to learning from observations, against the backdrop of user utilities. This enables an examination of the use of trust modeling for decision making.
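The Bayesian flavour of this line of work can be illustrated with a standard Beta-distribution trust update combined with an expected-utility decision rule. This is a minimal sketch of the general technique, with hypothetical names and utility values; it is not the specific model developed in the research described above.

```python
# Illustrative sketch: Bayesian trust updating with a Beta belief,
# plus an expected-utility decision rule. Hypothetical example only.

class BetaTrust:
    """Trust in an advisor, represented as a Beta(alpha, beta) belief."""

    def __init__(self, alpha: float = 1.0, beta: float = 1.0):
        self.alpha = alpha  # prior pseudo-count of good outcomes
        self.beta = beta    # prior pseudo-count of bad outcomes

    def observe(self, good: bool) -> None:
        # Each observed interaction updates the belief by one count.
        if good:
            self.alpha += 1
        else:
            self.beta += 1

    @property
    def expected_trust(self) -> float:
        # Mean of the Beta distribution: the expected probability
        # that the advisor's next recommendation is reliable.
        return self.alpha / (self.alpha + self.beta)


def expected_utility(trust: float, u_good: float, u_bad: float) -> float:
    # Utility of accepting a recommended message, weighted by the
    # current belief that the advisor is trustworthy.
    return trust * u_good + (1 - trust) * u_bad


advisor = BetaTrust()
for outcome in [True, True, False, True]:
    advisor.observe(outcome)

t = advisor.expected_trust            # (1 + 3) / (2 + 4) = 2/3
eu = expected_utility(t, 10.0, -5.0)  # accept only if this is positive
```

A message-filtering system built this way would present a message when the expected utility of acceptance exceeds that of rejection, which is where user utilities enter the picture.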

In earlier work with Jie Zhang, she explored scenarios where buying agents make use of advice provided by other buying agents in order to select the most appropriate selling agents. Included in this research is an approach that allows the combination of private and public modeling of an advisor's trustworthiness. The approach also introduces incentives for honest reporting from advisors, through rewards provided by sellers to those advisors accepted into a large number of social networks of other buying agents.
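One common way to combine private and public trust evidence is to weight the buyer's own experience by how much of it exists, falling back on community reports when direct experience is scarce. The sketch below illustrates that general idea under assumed names and an assumed weighting scheme; it is not the specific formulation from the Zhang work.

```python
# Illustrative sketch: combining private and public advisor trust.
# The weighting scheme and names here are hypothetical.

def private_trust(good: int, bad: int) -> float:
    # Beta-style estimate from the buyer's own experience with the advisor.
    return (good + 1) / (good + bad + 2)

def public_trust(reports: list[float]) -> float:
    # Average of other buyers' ratings of the advisor (values in [0, 1]);
    # neutral 0.5 when no reports are available.
    return sum(reports) / len(reports) if reports else 0.5

def combined_trust(good: int, bad: int, reports: list[float],
                   n_min: int = 10) -> float:
    # Weight private evidence by the amount of direct experience,
    # relative to a minimum n_min of interactions considered enough
    # to rely on private evidence alone.
    n = good + bad
    w = min(1.0, n / n_min)
    return w * private_trust(good, bad) + (1 - w) * public_trust(reports)

# A buyer with 5 direct interactions splits the estimate evenly
# between its own experience and the community's reports.
t = combined_trust(good=4, bad=1, reports=[0.9, 0.2, 0.6])
```

The design choice is that a buyer new to an advisor leans on public reputation, while an experienced buyer is insulated from dishonest public reports, which connects to the honest-reporting incentives discussed above.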

With Reid Kerr, Professor Cohen has investigated some of the common vulnerabilities in existing trust and reputation models, leading to a demonstration of how smart cheating agents can prosper even when their trustworthiness is being modeled. This in turn resulted in the design of a valuable testbed for measuring the performance of any trust and reputation modeling system. Moving forward from there, work with Kerr focused on the critical challenge of collusion in multiagent systems, including techniques aimed at recognizing clusters of agents that avoid harming and instead benefit each other.
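The intuition behind recognizing such clusters can be sketched with a simple heuristic: flag groups of agents whose ratings of each other are suspiciously high. The threshold-based pair check below is an illustrative toy under assumed names, not the actual collusion-detection technique from this research.

```python
# Illustrative sketch: flagging possible collusion by finding pairs of
# agents with unusually high mutual ratings. Hypothetical heuristic.

from itertools import combinations

def mutual_score(ratings: dict, a: str, b: str) -> float:
    # Average of how a rates b and how b rates a; unseen pairs
    # default to a neutral 0.5.
    return (ratings.get((a, b), 0.5) + ratings.get((b, a), 0.5)) / 2

def suspicious_pairs(ratings: dict, agents: set, threshold: float = 0.9):
    # Pairs whose mutual ratings exceed the threshold are candidates
    # for collusion and merit closer inspection.
    return [(a, b) for a, b in combinations(sorted(agents), 2)
            if mutual_score(ratings, a, b) >= threshold]

ratings = {("a", "b"): 0.95, ("b", "a"): 0.97,
           ("a", "c"): 0.40, ("c", "a"): 0.35}
pairs = suspicious_pairs(ratings, {"a", "b", "c"})
```

A real system would of course look at larger cliques, rating patterns over time, and benefit flows rather than raw pairwise scores, but the clustering intuition is the same.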

Professor Cohen also has additional interests in the context of online social networks where opinions may be influenced by peers. Opinion dynamics and networking become central considerations as well. The value of supporting personalized solutions (for example, for users who are highly risk averse) is another topic of concern.

Another recent thread of research examines how user experiences in social networks can be improved by offering better solutions for the depiction of webpages. The research here with Michael Cormier leverages computer vision algorithms and emphasizes possible outcomes for users with assistive needs. User modeling considerations arise within these contexts as well. Examining the user base of older adults, as one that would be especially valuable to model and support within social networking environments, has also been a recent area of study with colleague Karyn Moffatt of McGill.

Studying how to engender trust as part of trust modeling research is another recent thread. A truly critical consideration within the field of artificial intelligence today is promoting effective human-agent partnerships. As such, this research contributes to a future for AI where users are more at ease with these intelligent solutions. This includes such considerations as supporting better transparency and enabling effective comparisons between competing trusted AI solutions. This exploration also serves to advance trust modeling research in directions that have received relatively little attention to date.

A longstanding instructor of the social implications of computing, Professor Cohen has also recently been examining how to develop technological solutions to social problems arising from computing, including offering a graduate-level course in this topic area. She has also been reflecting on such philosophical concerns as ethics for AI.