With Jie Zhang, she explored scenarios in which buying agents make use of advice provided by other buying agents in order to select the most appropriate selling agents. This research includes an approach that combines private and public modelling of an advisor's trustworthiness. The approach also introduces incentives for honest reporting from advisors: sellers provide rewards to those advisors accepted into a large number of other buying agents' social networks.
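The combination of private and public trust modelling can be illustrated with a minimal sketch. This is not the published model; the weighting rule and all names here are illustrative assumptions, showing only the general idea of leaning on advisor-derived (public) evidence when a buyer has little direct experience of its own.

```python
# Illustrative sketch: combine a buyer's private experience with public
# advisor reports to estimate a seller's trustworthiness.

def private_trust(successes, failures):
    """Beta-expectation estimate from the buyer's own interactions."""
    return (successes + 1) / (successes + failures + 2)

def combined_trust(successes, failures, public_estimate, n_min=10):
    """Weight private evidence by how much direct experience the buyer
    has; lean on the public (advisor-derived) estimate otherwise."""
    n = successes + failures
    w = min(1.0, n / n_min)  # confidence in private evidence
    return w * private_trust(successes, failures) + (1 - w) * public_estimate
```

With no direct experience the estimate reduces to the public one; with ample experience the private evidence dominates.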
With Reid Kerr, Professor Cohen has investigated the promotion of secure electronic marketplaces. This research first explored common vulnerabilities in existing trust and reputation models, demonstrating how smart cheating agents can prosper even when their trustworthiness is being modeled. This in turn led to the design of a valuable testbed for measuring the performance of any trust and reputation modeling system. Subsequent work with Kerr focused on the critical challenge of collusion in multiagent systems, including techniques for recognizing clusters of agents that avoid harming one another and instead act to benefit each other.
Professor Cohen has also been exploring the related multiagent systems subtopics of computational social choice and preferences (with John Doucette and Hadi Hosseini). She has a particular interest in online social networks, where opinions may be influenced by peers; here, opinion dynamics becomes a central consideration as well.
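One standard formalization of peer influence is DeGroot-style opinion dynamics, sketched below. This is a textbook model offered only as an illustration of the area, not Professor Cohen's own formulation; the network and weights are invented for the example.

```python
# Illustrative DeGroot opinion dynamics: each agent repeatedly replaces its
# opinion with a trust-weighted average of its neighbours' opinions.

def degroot_step(opinions, weights):
    """weights[i][j] is how much agent i trusts agent j; rows sum to 1."""
    n = len(opinions)
    return [sum(weights[i][j] * opinions[j] for j in range(n))
            for i in range(n)]

opinions = [0.0, 0.5, 1.0]
weights = [[0.6, 0.2, 0.2],
           [0.2, 0.6, 0.2],
           [0.2, 0.2, 0.6]]
for _ in range(50):
    opinions = degroot_step(opinions, weights)
# On this connected, symmetric network the opinions converge to a consensus.
```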
Another recent thread of research examines how user experiences in social networks can be improved through better approaches to rendering webpages. This research, with Michael Cormier, leverages computer vision algorithms and emphasizes possible outcomes for users with assistive needs. User modeling considerations arise within these contexts as well.
Other research, conducted with John Doucette and Graham Pinhey, has focused on multiagent resource allocation with preemption in cooperative environments. The challenge is to enable effective coordination of agents, even with dynamic task arrivals, through the modeling and exchange of plans and their utilities.
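The flavour of utility-based preemption under dynamic task arrivals can be conveyed with a small sketch. All names and the placement rule here are illustrative assumptions, not the published coordination mechanism: an agent is preempted only when an arriving task's utility exceeds that of its current assignment.

```python
# Illustrative sketch of preemptive task placement on dynamic arrivals.

def assign(agents, task, utility):
    """agents maps agent name -> (current task, its utility) or None.
    Returns the agent given the task, or None if it is not worth placing."""
    idle = [a for a, cur in agents.items() if cur is None]
    if idle:                          # prefer an idle agent: no preemption
        agents[idle[0]] = (task, utility)
        return idle[0]
    victim = min(agents, key=lambda a: agents[a][1])
    if utility > agents[victim][1]:   # preempt the lowest-utility assignment
        agents[victim] = (task, utility)
        return victim
    return None
```

In a cooperative setting the exchanged plan utilities would drive this comparison, so preemption occurs only when it raises overall value.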
In environments where student learning can build on the previous experience of peers, the challenge is to determine the appropriate social network to employ -- which peers are most reputable, and which learning experiences have been most beneficial for students bearing some similarity to the current student. This is the topic of research on intelligent tutoring conducted with John Champaign.
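A minimal sketch of this idea follows: choosing a learning object by the benefit it brought to the peers most similar to the current student. This is illustrative only, not the published algorithm; the corpus structure and names are assumptions.

```python
# Illustrative sketch: similarity-weighted peer benefit for learning objects.

def predicted_benefit(peer_records):
    """peer_records: (similarity, observed benefit) pairs for one object;
    returns the similarity-weighted average benefit."""
    total = sum(s for s, _ in peer_records)
    if total == 0:
        return 0.0
    return sum(s * b for s, b in peer_records) / total

def best_object(corpus):
    """corpus maps learning-object name -> peer_records; pick the most
    promising object for the current student."""
    return max(corpus, key=lambda name: predicted_benefit(corpus[name]))
```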
With Noel Sardana, research has investigated how trust can be used to streamline the presentation of messages in social networking environments. An initial model integrated the concept of credibility, balanced against similarity, in order to cope with folklore. From here, the research explored a principled Bayesian approach that learns from observations against the backdrop of user utilities, enabling an examination of how trust modeling can support decision making.
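The pairing of Bayesian learning with user utilities can be sketched with a standard Beta-Bernoulli update and an expected-utility display rule. This is a generic illustration under assumed utilities, not Sardana's model.

```python
# Illustrative sketch: Bayesian trust update plus utility-based filtering.

def update(alpha, beta, was_valuable):
    """Beta(alpha, beta) posterior after observing one message's value."""
    return (alpha + 1, beta) if was_valuable else (alpha, beta + 1)

def show_message(alpha, beta, u_good=1.0, u_bad=-2.0):
    """Present a message only when its expected utility is positive."""
    p_good = alpha / (alpha + beta)
    return p_good * u_good + (1 - p_good) * u_bad > 0
```

Asymmetric utilities (here, a wasted read costs more than a good read gains) mean a source must earn substantial trust before its messages are surfaced.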
As a final recent subtopic, Professor Cohen has been developing an approach to engender trust in multiagent systems, working with Thomas Tran. This is a critical consideration within the field of artificial intelligence today, promoting effective human-agent partnerships. As such, this research contributes to a future for AI where users are more at ease with these intelligent solutions. This exploration also serves to advance trust modeling research in directions that have received relatively little attention to date.
A longstanding instructor of the social implications of computing, Professor Cohen has also recently been examining how to develop technological solutions to social problems arising from computing, including offering a graduate-level course in this topic area. She has also been reflecting on philosophical concerns such as ethics for AI.