Affect Control Theory (ACT) arises from a tradition of symbolic interactionism in sociology. Bayesian Affect Control Theory (or BayesACT for short) generalises ACT by introducing explicit notions of uncertainty and utility. BayesACT accounts for the dynamic fluctuation of identity meanings for self and other during interactions, elucidates how people infer and adjust meanings through social experience, and shows how stable patterns of interaction can emerge from individuals' uncertain perceptions of identities. BayesACT has been applied in an intelligent tutoring system, a social dilemma game player, an assistant for persons with Alzheimer's disease, and in sentiment analysis. We have more projects on the go; check back for updates!
- the act@home project, which uses Bayesact to model identity and interaction in the COACH handwashing system for persons with Alzheimer's disease.
- the Bayesian Affect Control Theory of Self (BayesACT-S) page with videos etc.
- More details and papers on Affect Control Theory can be found here.
- See the appendix for the American Sociological Review article (Schroeder, Hoey and Rogers, 2016).
- Tobias Schroeder, Jesse Hoey and Kimberly B. Rogers American Sociological Review, 81, 4, 2016 (Appendix) (bibtex)
- Joshua D.A. Jung, Jesse Hoey, Jonathan H. Morgan, Tobias Schroeder and Ingo Wolf Proceedings of the Canadian Conference on AI, Victoria, BC, 2016 (bibtex)
- Aarti Malhotra, Jesse Hoey, Alexandra Konig and Sarel van Vuuren Proc. International Conference on Pervasive Computing Technologies for Healthcare, Cancun, Mexico, 2016 (bibtex)
- Jesse Hoey, Tobias Schroeder and Areej Alhothali Artificial Intelligence, 230, 2016 (bibtex)
- Areej Alhothali and Jesse Hoey Proc. Conference of the North American Chapter of the Association for Computational Linguistics - Human Language Technologies (NAACL HLT), Denver, CO, 2015 (bibtex)
- Nabiha Asghar and Jesse Hoey Proceedings of Uncertainty in Artificial Intelligence, Amsterdam, 2015 (bibtex)
- Aarti Malhotra, Lifei Yu, Tobias Schroeder and Jesse Hoey University of Waterloo School of Computer Science Technical Report, CS-2014-15, August, 2014 (bibtex)
- Nabiha Asghar and Jesse Hoey University of Waterloo School of Computer Science Technical Report, CS-2014-21, December, 2014 (bibtex)
- Jesse Hoey and Tobias Schroeder Proc. AAAI Conference on Artificial Intelligence, Austin, Texas, 2015 (bibtex)
- Luyuan Lin, Stephen Czarnuch, Aarti Malhotra, Lifei Yu, Tobias Schroeder and Jesse Hoey Proc. of International Workconference on Ambient Assisted Living (IWAAL), Belfast, UK, 2014 (bibtex)
- Jesse Hoey, Tobias Schroeder and Areej Alhothali Proc. of the Humaine Association Conference on Affective Computing and Intelligent Interaction, Geneva, Switzerland, 2013 (bibtex)
- Jesse Hoey, Tobias Schroeder and Areej Alhothali University of Waterloo School of Computer Science Technical Report, CS-2013-03, 2013 (bibtex)
- Get the Interact simulator (Java)
- January 29, 2016 Bayesact version 0.5.1 : gzipped tar or zip
- Fixed bugs with EmotionalAgent
- Merged emotions interactive simulation into bayesactinteractive
- Fixed bayesact-of-self simulations
- October 26, 2015 Bayesact version 0.4 : gzipped tar or zip
- Added bayesact-self (AAAI 2015 paper)
- Added prisoner's dilemma bot (UAI 2015 paper)
- Added emotions computation agent
- Jan 21st, 2014 Bayesact version 0.3 : gzipped tar or zip
- Completely separated the POMCP code from bayesact
- Simplified code and fixed bugs
- Dec. 30, 2013 Bayesact version 0.2 : gzipped tar or zip
- Removed the explicit "turn" representation so it is now embedded in the state as it should be
- Added POMCP code
- Bug fixes
- Sept. 23, 2013 Bayesact version 0.1 : gzipped tar or zip
Here is a description of the videos in the playlist. You can skip to the one you want by clicking on "playlist" at the top left of the video frame above.
- This screencast shows a basic simulation of a 'tutor' and 'student' in Bayesact and gives an overview of what the output is.
- This screencast shows an example of using the Interact Java applet alongside the bayesact Python simulator. The bayesact simulator is set up to mimic the computations of Interact as closely as possible. Because bayesact doesn't take any shortcuts or make approximations, this requires using a large number of samples (10,000) and a very small observation noise. As well, the first 5 minutes of this video show how to set up a basic simulation in Interact.
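The effect of a large sample count combined with a very small observation noise can be illustrated with a toy importance-sampling step (this is not the bayesact code; the EPA vector and noise values here are made up for illustration): with 10,000 samples and tiny noise, the sampled posterior collapses onto the observed point, closely approximating Interact's deterministic update.

```python
import numpy as np

rng = np.random.default_rng(0)

true_epa = np.array([1.5, 1.5, 0.3])   # illustrative 3-D EPA sentiment
n_samples = 10000                      # large sample count, as in the video
obs_noise = 0.05                       # very small observation noise (std dev)

# Broad Gaussian prior over EPA space; weight each sample by the
# likelihood of one noisy observation (a single bootstrap-filter step).
samples = rng.normal(0.0, 2.0, size=(n_samples, 3))
obs = true_epa + rng.normal(0.0, obs_noise, size=3)
logw = -np.sum((samples - obs) ** 2, axis=1) / (2 * obs_noise ** 2)
w = np.exp(logw - logw.max())
w /= w.sum()

estimate = w @ samples   # posterior mean; should land near true_epa
```

With a larger `obs_noise`, the weights spread over many samples and the estimate stays closer to the prior; shrinking the noise concentrates all the weight near the observation, which is why so many samples are needed to have any of them land there.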
- Simulation of a bayesact agent with the affective identity of "tutor" interacting with a "student", whose affective identity the bayesact agent does not know to start with. Through interactions with the student, the bayesact "tutor" learns that this agent is something like a "student". Interact is used to simulate the actions of the student. It takes bayesact only 2 iterations to figure out the student's identity, as these two identities are fairly close.
- Simulation of a bayesact agent with identity "salesman" interacting with another agent (the "client") who is actually a "robber", though the bayesact agent does not know this. Through interactions with the robber, the bayesact "salesman" learns that this agent is something like a "robber". Interact is used to simulate the actions of the robber. It takes about 8 iterations for the bayesact agent to figure this one out, as the two identities are fairly dissimilar (and will normally result in high-deflection interactions).
- Magenta squares+red triangle: agent self identity (triangle is the mean)
- Cyan squares + blue triangle: client self identity
- Red squares: agent's estimate of client's identity
- Blue squares: client's estimate of agent's identity
- Magenta label: most common label for agent self identity
- Cyan label: most common label for client self identity
- Red label: most common label for agent's estimate of client's identity
- Blue label: most common label for client's estimate of agent's identity
Here, two agents with rather fixed ideas about their own identities try to figure out the identity of the other. Ten experiments were run with different identities for agent and client, each with 150 steps and 500 samples.
This example has:
- beta_a (proposal) = 0.01
- beta_c (proposal) = 0.1
- average id deflection for the agent is 0.03 ± 0.04 and for the client is 0.04 ± 0.03
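For reference, deflection in ACT is the sum of squared differences between fundamental (culturally given) and transient (situationally produced) sentiments in EPA (evaluation, potency, activity) space. A minimal sketch, using made-up EPA vectors rather than the ones from these experiments:

```python
import numpy as np

def deflection(fundamental, transient):
    """ACT deflection: sum of squared EPA differences between
    fundamental and transient sentiments."""
    f = np.asarray(fundamental, dtype=float)
    t = np.asarray(transient, dtype=float)
    return float(np.sum((f - t) ** 2))

# Illustrative EPA vectors (not taken from the actual experiments):
fundamental = np.array([1.5, 1.5, 0.3])
transient   = np.array([1.4, 1.6, 0.2])
print(deflection(fundamental, transient))  # ≈ 0.03
```

Small deflections like the averages reported above indicate that the agents' sampled identity estimates sit very close to the true identities.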
For two identities "lady" and "shoplifter", with no environment noise:
For two identities "lady" and "shoplifter", with environment noise: zero-mean Gaussian noise with variance 0.5:
For two identities "lady" and "shoplifter", with environment noise: zero-mean Gaussian noise with variance 1.0:
For two identities "lady" and "shoplifter", with environment noise: zero-mean Gaussian noise with variance 5.0:
For two identities "tutor" and "student", with environment noise: zero-mean Gaussian noise with variance 0.5:
For two identities "tutor" and "student", with environment noise: zero-mean Gaussian noise with variance 1.0:
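The environment noise in the runs above is simply zero-mean Gaussian corruption of the communicated EPA vectors, with the stated variance. A minimal sketch of that corruption step, with an illustrative action vector (this is not the project code):

```python
import numpy as np

def add_environment_noise(epa, variance, rng):
    """Corrupt a 3-D EPA vector with zero-mean Gaussian environment
    noise of the given variance (std dev = sqrt(variance))."""
    epa = np.asarray(epa, dtype=float)
    return epa + rng.normal(0.0, np.sqrt(variance), size=3)

rng = np.random.default_rng(1)
action = np.array([0.9, 0.4, 0.1])               # illustrative EPA of an act
noisy = add_environment_noise(action, 0.5, rng)  # what the other agent observes
```

At variance 0.5 the corruption is comparable to the distance between many nearby identities, and by variance 5.0 the observed EPA values are mostly noise, which is why identity inference degrades across the runs above.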
Now, the client knows its identity (magenta squares with red triangle as the mean), but does not know the identity of the agent (red squares are its estimate). The agent doesn't know anything (blue squares are its estimate of the client's identity; cyan squares + blue triangle are its estimate of its own identity). In some cases, we see the two agents learning each other's identities, so the agent actually decides on an identity for itself!
- beta_client_init=2.0 (does not know agent identity at start)
- beta_agent_init=0.01 (knows its own identity at start)
- beta_client_init=2.0 (clueless at start)
Finally, the client shifts identities at a speed of 0.25, but remains stationary at each target location for 40 steps. The agent identity was [0.32, 0.42, 0.64], while the client identities were [-1.54, -0.38, 0.13] and [1.31, -2.75, -0.09].
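A minimal sketch of this shifting-identity schedule, assuming the client moves in a straight line through EPA space at a fixed per-step speed and then dwells at each target (illustrative only, not the project code):

```python
import numpy as np

# Client identity targets from the experiment above.
targets = [np.array([-1.54, -0.38, 0.13]), np.array([1.31, -2.75, -0.09])]
speed, dwell = 0.25, 40   # EPA units per step; steps spent at each target

identity = targets[0].copy()
trajectory = [identity.copy()]
for target in targets[1:]:
    # travel phase: step toward the target at `speed` per time step
    while np.linalg.norm(target - identity) > speed:
        direction = (target - identity) / np.linalg.norm(target - identity)
        identity += speed * direction
        trajectory.append(identity.copy())
    # arrive, then dwell phase: remain stationary for `dwell` steps
    identity = target.copy()
    trajectory.append(identity.copy())
    trajectory += [identity.copy() for _ in range(dwell)]
```

The tracking agent then has to re-infer the client's identity every time the drift phase starts, which is what the corresponding video shows.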