Anamaria (Ana) Crisan joined the Cheriton School of Computer Science as a tenure-track Assistant Professor in August 2024.
Previously, she was a lead research scientist at Tableau Research in Seattle, where she designed and developed new tools for humans and AI systems to collaborate in data analysis and decision-making.
Professor Crisan conducts interdisciplinary research that integrates techniques and methods from machine learning, human-computer interaction, and data visualization. Her research explores how we can inject human-centered design principles into the development of AI systems by giving people greater agency and oversight. She is especially interested in how visual and interactive interfaces can serve as a medium of knowledge sharing and information exchange between people and AI systems.
Professor Crisan completed her PhD in computer science at the University of British Columbia. Before her doctoral degree, she was a research scientist at the British Columbia Centre for Disease Control and Decipher Biosciences, where she conducted machine learning and data visualization research toward applications in infectious disease and cancer genomics, respectively. She has an MSc in Bioinformatics from UBC and a BComp in Biomedical Computing from Queen’s University.
As of February 2025, Professor Crisan’s research has been cited more than 5,000 times with an h-index of 19 according to Google Scholar.
What follows is a lightly edited transcript of a conversation with Professor Crisan, where she discusses her research, her advice for aspiring computer scientists, and what excites her about joining the Cheriton School of Computer Science.

Tell us a bit about your research. What do you see as your most significant contribution?
I lead an interdisciplinary research lab that is developing techniques and tools for enabling people to collaborate more effectively with AI technology. One particular set of problems that I focus on concerns data analysis. Data is at the heart of most of our modern economy because it informs a lot of decision-making and policy formation. But working with data is really complex. First, there’s often a lot of it, requiring some kind of automated support. In the past, traditional machine learning models provided that automation, and today, people are exploring AI support methods. Second, it requires a lot of human knowledge, what we call “domain expertise,” to find relevant insights in data and actualize them into something meaningful.
A lot of my work is thinking about this automation in a multifaceted way. I don't just care about the technology and the techniques, even though this is something my research team works on. I also care about what people need when working with data and how they meaningfully interact with these tools for automation – especially those driven by AI.
Another key thing is that my research team and I are not just focused on helping, say, data scientists or machine learning engineers do their job better (but that’s part of it!). We’re focused on helping pretty much anybody who needs to work with data as part of their job — people who might understand the problem but might not have the technical background to apply or use machine learning models safely and effectively in their work. Our research is about empowering people across a technical spectrum, but also in a way that is safe, reliable, and, importantly, interrogable.
That's why the research I conduct is really interdisciplinary. It draws a lot on human-computer interaction to understand the human side, and brings that together with the technical pieces of machine learning and artificial intelligence.
What do you see as your most significant contribution?
Hard to say, because my career has had two phases. The first phase was focused on bioinformatics research, where we were developing machine learning models to analyze cancer genomes. The second phase was a pivot toward more human-computer interaction research.
In the first phase of my career, I was essentially a data scientist. I was building computational pipelines and developing new machine learning models for analyzing cancer genomes, to better understand this disease and potentially predict its clinical course. It was interesting and set a solid technical foundation. Because of what I was doing and where I was conducting my research, I got to interact with doctors and patients in the clinic to see how they were using these machine learning models. From this phase, my most significant contribution was the development of a clinical classifier for predicting metastatic prostate cancer, which is in clinical use today. However, through this work, I also quickly found out that it was not enough to just give someone a result and say, “Trust me, the algorithm is robust.” People needed more help in trusting the results and understanding how to use them for making decisions about their health. So, that inspired my pivot to look at human-computer interaction together with machine learning, and once again in the context of data analysis.
From this second phase of my research, my most significant contributions have been around a line of research that really tries to understand data science processes, what people are trying to do (importantly, also what they fail to do), and how automation driven by machine learning or AI is useful (or not). People often treat AI as a panacea for many problems, but my research finds that while it makes significant and non-trivial advances in some ways, in others it fails to address long-standing issues and introduces new problems.
That’s interesting. You're sort of laying the foundation for a lot of important scientific work, like how people use treatments.
Data is at the heart of the scientific process. In science, we start by trying to collect data for important questions. But there are still many steps between finding something interesting and developing it to the point where it can be used to do something practical and helpful. My research tries to look across this data science stack, from gathering the data to analyzing it to communicating the important results to key stakeholders. My team and I strive to develop techniques and tools to support activities within this stack and to keep some record of what’s going on. This latter point is especially important for high-stakes situations, where it’s important to understand how a data analysis was carried out and how it contributed to a decision that was made.
My research prioritizes understanding how an analysis unfolded, because if we don’t have this information, we risk using automation to introduce mistakes or, as is the case with AI, to perpetuate biases. The research my team and I conduct aims to help people use automation in data analysis, particularly with AI technology, in a way that is transparent and safe, and to help us accelerate discovery and translation to practical application.
What challenges in HCI, data visualization and applied AI and ML do you find most exciting to tackle?
Marrying the technical aspects with the needs of people is challenging, especially when you acknowledge that people are different and that there's a spectrum and diversity of people who need to work with data and AI technology. It becomes challenging to find the right entry points for those people, in a way that machine learning or AI models can reasonably support.
Tools like ChatGPT have certainly changed the paradigm quite a bit because it seems to lower the barrier to entry. It's really easy, right, to just type out your question and get an answer? But it still does a lot of weird things that can throw people off. If you work with it for long enough, you get a sense of its errors and mistakes. In speaking with data scientists and other analysts, it's quite clear that once errors or issues arise, it’s very challenging to figure out what went wrong and address them. In a worst-case scenario, when these tools are wrong, it can lead to serious issues that can harm businesses or people. But we’re in a world right now where we are prioritizing technical advances over human needs. I think we need to recalibrate a little bit and see people as more than training data.
For sure. If you want something to be widely used like ChatGPT, you have to actually understand how users use it effectively, so you can enhance the technology, right?
There are a few examples lately of when AI technology has behaved in some unexpected way – it’s actually quite challenging to really understand the behaviours of these models and make sure that they are consistently performing in some expected way. That is why human oversight is very important.
But how people want to interact with AI technology is often a secondary thought – and I do not think it should be. For example, it can be very annoying to have to type out everything. My research has found that when people are doing data analysis, which often involves making a lot of charts and graphs, it's really annoying to do everything by conversation. We know that people want to mix it up. Sometimes, they want to interact with the interface in other ways, as we do with tools like Tableau, PowerBI, or Excel. But mixing up these interactions, for example between typing and, say, using your mouse, can be non-trivial. So, we really need to think about how we design tools for this more complex interaction space. Even if you just want to interact with a chatbot, there are lots of interesting questions about what those interactions should be. For example, how chatty should your chatbot be? What kinds of answers should it give you? We’ve had failures in prior assistants – think Microsoft Clippy – whose poor agent design and “personality” quirks are still a part of popular culture.
What advice would you give to students interested in pursuing research in your area of expertise?
Both HCI and AI are rapidly moving fields, and I expect that as the pace of AI research accelerates, a lot of things in human-computer interaction will also accelerate in step with that. I think we’ll find over the years that contributions from HCI are increasingly important, because we’re really focusing on what people are trying to do with this technology and building tools to support those needs. My research has been exploring this for quite a few years under the broader umbrella of automation in data science.
Doing this research requires a very broad skill set, which can be both intimidating and interesting. On the one hand, it's very exciting because you kind of see the whole end-to-end bit of it, not just building the techniques, but also how people are using it. So, I think it's a lot of fun. It exposes you to a lot of different ways of thinking and understanding how people use a variety of different technologies. On the other hand, you really need to pick up a breadth of skills to not only understand and advance the technology but to be able to effectively study and work with particulars of people. I personally find it to be really rewarding work.
Do you see opportunities for collaborative research at the Cheriton School of Computer Science?
Within the Cheriton School of Computer Science, there are a lot of opportunities to collaborate – I have had great opportunities to chat and collaborate with other HCI and AI faculty. Waterloo and the greater Toronto area are tech hubs in Canada, so there are exciting opportunities to connect with different organizations to explore some interesting and complex problems.
What aspect of joining the Cheriton School of Computer Science excites you the most?
I think the students here are excellent and it’s really rewarding to work with such motivated and bright young people. I’ve challenged my students to work on some tough problems, and I am blown away by how creatively and effectively they approach these challenges. It’s been really delightful to discuss their interesting results and findings over our meetings, and I’m really looking forward to sharing their excellent work with everyone over the coming months.
Who has inspired you in your career?
I’ve always been inspired by the people that I work with, who give me interesting insights and problems to tackle.
The whole thing that got me to explore HCI research was talking to doctors and patients. In this case, they were dealing with something complex like a cancer diagnosis. They took the time to talk to us about it and about how our technology could be used to help them make those decisions. The way that these folks have given their time to talk to me and my team, and even to test out our early prototypes and ideas, has been really inspirational. I also acknowledge the value of that time – what I mean is somebody making space for a researcher to chat with them when they might be going through something really challenging.
Some of my most recent interesting contributions have been looking at automation in data science, and the way I got on that path was when I was working at Tableau. I talked to a lot of our customers about their pain points. Then we discovered they were trying to use this technology to alleviate some of those pain points and it wasn't working well for them. That allowed us to produce a line of really interesting, highly cited, and award-winning research that explored that problem. But it was the generosity of those folks that helped us reveal those problems and tackle them.
To me, these conversations also emphasize the importance of striving to give something useful and meaningful back to the people who have taken their time to give me something useful and meaningful. It underscores the importance of being very careful and deliberate in how you use somebody's time and of being respectful of it. This is especially true, for example, if you're designing technology for vulnerable populations, perhaps people who have different neurocognitive or physical abilities. In those circumstances, you want to make sure that the relationship between the researcher and participant is reciprocal, so that you aren’t just using them for the research project, but actually building something meaningful for them.
What do you do in your spare time?
I really love being outdoors. I like hiking and skiing, and that’s why I loved living on the West Coast. I’m not much of a big-city person. One of Waterloo’s selling points is that it’s close enough to Toronto that you have access to great restaurants and cultural things, but also far enough to have access to the outdoors.
Lately, I don’t have a lot of spare time because I have a small child – my son is just 2 years old. That said, I do enjoy playing with him. He's a lot of fun and is doing really interesting and clever things. We’ve had fun enjoying the outdoors together. For example, we go for walks in the forest together with our dog. He is also learning to ski and skate this winter. He really likes it! Waterloo is a nice, safe place, and it's fun for him to learn these things. I think it's challenging to balance being a researcher and having a young child. That's a whole can of worms, but it's also really rewarding. As my son gets older, I look forward to exploring the outdoors more and more with him, and maybe I’ll also get a little more sleep.