To start the session, I'm going to go over a few slides to introduce the topic, then I'll introduce our panelists and we'll have a bit of a chat, finishing with a Q&A session where you can ask questions. So, as the slide says, this is about AI and the future of mental health care. Starting with a brief introduction, I use this sort of line: with the advent of digital approaches to mental health, AI has been explored in work on prediction, detection and treatment solutions for mental health.
And these are some of the topics that fall under this umbrella and some of the things that we will be discussing: digital phenotyping, which I'll soon explain; natural language processing, or NLP as it's known; chatbots and conversational technologies; and also consideration of the human-computer interaction and ethical dimensions of all this. So, starting with digital phenotyping: our increasing usage of smartphones, wearables and the internet, particularly social media and networking, has significantly increased what can be termed our digital footprint or data exhaust. And digital phenotyping is basically about mining or analysing an individual's (or a group's, for that matter) digital footprint to gain insights into their mental health.
Insights of potential clinical value that could be used to anticipate mental ill health and inform treatment. That covers that slide. Another term this is known by is personal sensing, which kind of makes sense.
Perhaps it's a bit more straightforward. I'll also introduce the term psychoinformatics, to bring in another informatics term here. Naturally, it's a combination of the terms
psychology and informatics. It's an emerging interdisciplinary field that uses tools and techniques from computer science for the acquisition, organization and analysis of data collected from psychology to reveal information about behavior and psychological traits. What to say about that?
I think one point to make is that various terms are emerging, and perhaps time will tell which ones become settled. Just a quick illustration that clarifies this idea of digital phenotyping, from a paper from a few years back that I quite enjoyed. We start off by collecting raw data with sensors: location data, movement data, ambient light, microphone data, etc. And from this raw data we can generate low-level behavioural features, such as someone's location type, their activity type,
their phone usage, their bedtime patterns, etc. From the low-level features, we can try to go to high-level behavioural markers, such as inferring hedonic activity, psychomotor activity, fatigue, etc. And from this level, we can then try to infer clinical constructs or clinical states of the individual, such as depression and anxiety.
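To make that three-step idea concrete, here is a minimal, purely illustrative sketch in Python. Every sensor field, threshold and mapping here is hypothetical and invented for this example; real digital phenotyping studies use validated sensing frameworks and clinically validated models, not hand-written rules like these.

```python
# Illustrative sketch of the digital-phenotyping pipeline: raw sensor samples ->
# low-level behavioural features -> high-level markers -> a (toy) clinical flag.
# All names and thresholds are hypothetical.

from dataclasses import dataclass
from statistics import mean

@dataclass
class RawSample:
    timestamp: float      # seconds since midnight
    screen_on: bool       # phone-usage signal
    ambient_light: float  # lux, from the light sensor
    step_count: int       # from the accelerometer / pedometer

def low_level_features(samples: list[RawSample]) -> dict:
    """Raw sensor data -> low-level behavioural features (assumes one sample per minute)."""
    night = [s for s in samples if s.timestamp < 6 * 3600]   # midnight to 6am
    return {
        "night_screen_minutes": sum(1 for s in night if s.screen_on),
        "mean_ambient_light": mean(s.ambient_light for s in samples),
        "total_steps": sum(s.step_count for s in samples),
    }

def high_level_markers(features: dict) -> dict:
    """Low-level features -> high-level behavioural markers (toy rules)."""
    return {
        "disturbed_sleep": features["night_screen_minutes"] > 60,
        "low_psychomotor_activity": features["total_steps"] < 2000,
    }

def clinical_flag(markers: dict) -> bool:
    """High-level markers -> a purely illustrative clinical-risk flag."""
    return markers["disturbed_sleep"] and markers["low_psychomotor_activity"]
```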
So, focusing on smartphones, the gist is this: the usage and sensor data from a smartphone can be collected and analysed to infer information about its user. And I like to see there being two types of information.
One is contextual or situational information: being able to infer somebody's location, their mode of transport, or their surroundings. And the other is more about inferring their behavioural characteristics or behavioural patterns. Which brings me to this term, detect and deliver. Here we use detect to mean the use of smartphone sensors to detect psychological and situational information about the smartphone user, and deliver to mean using this detected information to tailor real-time delivery of digital interventions, i.e. mental health app content, to the user.
So it's essentially about making apps smarter and delivering dynamic, data-informed, personalised digital therapy directly to the user. Here are some terms that some of you might be familiar with that describe this idea: just-in-time adaptive interventions and ecological momentary interventions have appeared in the literature, with their corresponding acronyms, and I like to introduce the more general term ecological momentary recommendations, because even if it's not technically an intervention per se, we can recommend things to people, even if it's not for health purposes, based on learning about them via their digital footprints.
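As a rough illustration of that detect-and-deliver loop, here is a hedged sketch. The detected states, thresholds and intervention messages are all made up for illustration and are not taken from any real app.

```python
# Hedged sketch of "detect and deliver" (the just-in-time adaptive intervention idea):
# a coarse state is inferred from sensed features, then mapped to in-the-moment content.

def detect_state(features: dict) -> str:
    """Infer a coarse situational/psychological state from (hypothetical) sensed features."""
    if features.get("night_screen_minutes", 0) > 60:
        return "poor_sleep"
    if features.get("total_steps", 0) < 2000:
        return "low_activity"
    return "ok"

# Mapping from detected state to app content delivered in the moment (invented examples).
INTERVENTIONS = {
    "poor_sleep": "Push a short sleep-hygiene exercise this evening.",
    "low_activity": "Suggest a brief behavioural-activation walk.",
    "ok": "No prompt; avoid over-notifying the user.",
}

def deliver(features: dict) -> str:
    return INTERVENTIONS[detect_state(features)]

print(deliver({"night_screen_minutes": 90, "total_steps": 5000}))
```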
Now I'll quickly talk about NLP, natural language processing, which is a subfield of AI that focuses on getting computers to understand and generate human language, to put it basically. The two subparts here can be termed natural language understanding (NLU) and natural language generation (NLG). NLP systems are designed to process and analyse large volumes of textual data, including written text and spoken language, and it's about utilising the vast troves of textual data that are now being generated, and the scale and processing speeds we now have available to us. So, a bit about NLP, mental health and psychotherapy. NLP can play a role in mental health and psychotherapy by providing the means of psychopathology detection, informing therapy and treatment, and also facilitating clinical practice, like mundane, routine admin tasks. Detection: NLP algorithms can be trained to detect early signs of mental health issues by analysing patterns in text or speech. Informing therapy and treatment: that is, analysing and providing feedback on therapy sessions, possibly in real time.
Psychotherapy is naturally a very dialogical exercise, and can the products of that dialogue be mined for insights? Evaluating treatment outcomes also, in that NLP can assist in evaluating the effectiveness of different therapy approaches by identifying associations between the content of therapy dialogue and patient outcomes. And there are those more mundane, practical applications I alluded to earlier: mental health information queries, translation, summarising and organising clinical notes. Especially this last one has really become predominant in the last few years, I've noticed, with the prevalence of generative AI and large language models. So, a bit more detail regarding the NLP and psychopathology part.
Just some examples, especially for those of you who don't really know this field and want to get an idea about it. Some early research on NLP and mental health used semantic coherence and syntactic complexity to predict subsequent psychosis onset from patient clinical transcripts, and found that the use of less complex and more incoherent language predicted subsequent onset of the disorder. And that stands to reason, it intuitively makes sense, but I think the main point here is that you can have rigorous, at-scale, quantitative means of detecting these things using NLP and data, possibly before a human without these technologies would notice them. Another example comes from the article "People with depression use language differently, here's how to spot it", which found that those with symptoms of depression use significantly more first-person singular pronouns, a finding that has started to become established in that area, and also that people with depression use absolutist words such as "always", "nothing" or "completely" more often than others.
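Just to make those two linguistic markers concrete, here is a toy sketch that counts them. The word lists are tiny and hypothetical; real studies use validated lexicons (LIWC-style dictionaries and the like) and proper statistical controls rather than anything this simple.

```python
# Toy illustration only: relative frequency of first-person singular pronouns and
# absolutist words, the two depression-linked text markers mentioned above.

import re

FIRST_PERSON_SINGULAR = {"i", "me", "my", "mine", "myself"}
ABSOLUTIST = {"always", "never", "nothing", "completely", "totally", "entirely"}

def marker_rates(text: str) -> dict:
    tokens = re.findall(r"[a-z']+", text.lower())
    n = max(len(tokens), 1)
    return {
        "first_person_rate": sum(t in FIRST_PERSON_SINGULAR for t in tokens) / n,
        "absolutist_rate": sum(t in ABSOLUTIST for t in tokens) / n,
    }

print(marker_rates("I always feel like nothing I do is ever completely right."))
```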
Beyond linguistic content, we can also analyse what we can term the paralinguistic or acoustic aspects of speech, such as volume, pitch and intonation, and research has shown that such properties can be computationally analysed to infer various pieces of mental health information. Just some quick example indications: depression is associated with more pauses, a slower speech rate and flatter prosody in speech, while anxiety is associated with an increased speech rate, jitteriness and higher pitch when speaking. Next, on to chatbots.
So most of us, if not all of us, have heard about chatbots, but a quick intro. A chatbot is a computer program that mimics conversation with users via a chat interface, either text- or voice-based. The underlying system can be based on a variety of foundations, ranging from a set of simple rule-based responses and keyword matching to advanced machine learning and NLP techniques. Irrespective of the underlying intelligence of the responding bot, there is something distinct about the experience of a user entering input and a bot responding. And that's something we can delve into a bit more as we go along.
A bit of history about chatbots, because this is an interesting one. There is an intimate connection between chatbots and psychology, in that the first well-established chatbot, ELIZA, was created in the mid-60s by computer scientist Joseph Weizenbaum to simulate a Rogerian psychotherapist, that is, a psychotherapist based on the humanistic psychology of Carl Rogers, the 20th-century psychology pioneer, largely by rephrasing the user's replies as questions.
I think that was partly just a matter of convenience or expedience, in that the basic NLP technology of the time couldn't do much by today's standards. So if you just make it repeat back to the client what they're saying, you can easily get something happening. Now, this is an important point: Weizenbaum did not intend for ELIZA to be an actual chatbot therapist, and I think that's a misconception some may have. He created it, in a sense, as a satire, to demonstrate the superficiality
of communication between humans and machines. He doubted that computers could simulate meaningful human interaction, so he was surprised and shocked that individuals attributed human-like feelings to the computer program. And this has come to be known as the ELIZA effect, which is the tendency to project human traits, such as experience, semantic comprehension or empathy, onto computer programs that have a textual interface.
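To give a feel for how little machinery that "rephrase the reply as a question" trick needs, here is a toy, ELIZA-style sketch. It is deliberately superficial, and of course not a therapeutic tool; the reflection table and pattern are minimal stand-ins for ELIZA's actual scripts.

```python
# Toy ELIZA-style reflection: swap pronouns, then rephrase "I feel ..." as a question.

import re

REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

def eliza_reply(user_input: str) -> str:
    words = [REFLECTIONS.get(w, w) for w in user_input.lower().rstrip(".!?").split()]
    m = re.match(r"you feel (.+)", " ".join(words))
    if m:
        return f"Why do you feel {m.group(1)}?"
    return "Tell me more about that."   # generic fallback prompt

print(eliza_reply("I feel anxious about work."))  # -> Why do you feel anxious about work?
```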
Just some quick examples of mental health chatbots out there in the field; you may have heard of some of these: Woebot, Wysa, CASA, Replika, and now ChatGPT. That's just a screenshot from Woebot, with the old mascot icon there.
As far as I can tell, things like Woebot and Wysa and CASA are, I would say, more like mental health apps with a conversational layer over them; they're still very predefined and structured in the way they work, and constrained by what's programmed into them. Whereas Replika and ChatGPT, well, with ChatGPT it's just this wild west large language model. It's very open-ended: you just type stuff in and you get stuff out, and it's a bit of a stochastic parrot, as people have called them. Replika has an interesting story, which I probably won't go into, but it's one of the systems that has perhaps been the most controversial.
As you can see by this slide. We're not going to go into these articles, but you can just tell by the titles that it's created a bit of controversy.
Replika users fell in love with their AI chatbot companions, then they lost them. AI-based companions like Replika are harmful to privacy and should be regulated. I tried the Replika AI companion and can see why users are falling hard; the app raises serious ethical questions. And you can see just from this screenshot on their website, it's probably questionable the way they picture
this thing as some really intimate AI companion, and it raises some dubious points, to say the least, which we can perhaps touch upon in our discussion and Q&A session. So, to finalise the chatbots part, I would say that ultimately, a chatbot that can carry out a proper psychotherapeutic conversational session and replicate human therapists remains to be seen, if it's entirely possible at all. And we ultimately have to ask: what are the reasonable uses and boundaries of chatbots?
And I'm just going to end, actually, by talking about the therapeutic alliance. Some of you are familiar with this term, which refers to the relationship that develops between a therapist and a patient and is a significant factor in the outcomes of psychological therapy, in that, no matter what modality is used, the presence of a therapeutic alliance seems to always have an effect on the outcomes of the therapy sessions.
A robust but modest effect. Bordin's conceptualisation of the alliance, probably the most popular one, consists of three dimensions: bond, goals and tasks. Bond is the affective bond between the client and the therapist; goals, their agreement on the goals to achieve good outcomes; and tasks, their agreement on the tasks to achieve those goals.
And this conceptualisation has been quantitatively captured in one of the most prominent scales used to measure the alliance, known as the Working Alliance Inventory.
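Purely as an illustration of how a dimensional conceptualisation like Bordin's gets turned into numbers, here is a hedged sketch of averaging Likert responses within three subscales. The item wording and scoring here are invented; they are not the actual Working Alliance Inventory items or norms.

```python
# Hypothetical mini-questionnaire scored by averaging 1-7 Likert responses per dimension.

ITEMS = {
    "bond":  ["I feel my therapist cares about me."],
    "goals": ["My therapist and I agree on what I want to get out of therapy."],
    "tasks": ["We agree on what to work on in our sessions."],
}

def alliance_scores(responses: dict[str, list[int]]) -> dict[str, float]:
    """Average the Likert responses within each alliance dimension."""
    return {dim: sum(vals) / len(vals) for dim, vals in responses.items()}

print(alliance_scores({"bond": [6], "goals": [5], "tasks": [7]}))
```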
And this brings me to the notion of a digital therapeutic alliance: as mental health care increasingly adopts digital technologies and offers intelligent therapeutic interventions that may not involve human therapists, the question has arisen as to whether there is some digital analogue of the traditional therapeutic alliance, a digital therapeutic alliance. There are a few ways this broad term can be explored, with a focus on the second and the third here, I've found. Firstly, there's the standard patient-therapist alliance in the case of the teletherapy sessions that are emerging now. Then there's the connection between a user and their smartphone plus mental health app. And finally, what is the nature of the therapeutic alliance in the case of anthropomorphic digital interventions such as chatbots and virtual human therapists, like Replika, or maybe Woebot is a better example? And just some quick questions: do the elements of the traditional alliance between client and therapist hold true in the case of teletherapy, or in the case of digital interventions such as apps and chatbots?
What adaptations of the traditional alliance remain, what novel dimensions emerge, and what would a true conceptualisation of a DTA look like for automated digital mental health interventions, and how could it be measured? Some people think you can just take the traditional scales and replace the word therapist with chatbot. I'm of the opinion that that's not enough, and more research needs to be done into these notions.
And a final slide with some final words. With the advent of digital data and AI approaches to mental health, there will need to be open-mindedness among practitioners and paradigm shifts in mental health care, and upcoming generations of mental health practitioners may very well require a new tech and data savviness. But we could say it's about augmenting the capabilities of practitioners, in that human plus machine is greater than the human alone, and human plus machine is also greater than the machine alone; AI won't replace humans, but humans using AI will replace humans not using AI. So that concludes the slides part, and now I'd like to welcome the panel members.
Dr. Caitlin Hitchcock, who's a clinical psychologist from the Melbourne School of Psychological Sciences. We have Olivia Metcalf, who I've worked with for several years now. Firstly, I knew you from the Phoenix Centre for Trauma Research and Care, but now you're at the Centre for Digital Transformation of Health.
But obviously your background is in psychology and mental health. We also have Steph Slack, who is from Monash. It says PhD candidate.
You're finished now. Yes. Imminent. Your conferral is imminent, I believe. And she looks at sort of ethical aspects.
She's in the philosophy department, is that right? You're in the philosophy department. And I became interested in her work on the ethical dimensions of digital mental health care and something called epistemic injustice, which we might be touching upon soon. So... That's working.
So we'll just go in order. I'll start off with a general question, perhaps two questions that you can combine if you would like. What do you see as the future role of AI in mental health and how are you using or thinking about AI in terms of mental health?
Okay, so as Simon mentioned, I'm a clinical psychologist, and I'm involved in a couple of different ways of using AI in mental health. For one of these, I have some funding from the NHMRC to use NLP methods to try and improve our detection of mental health challenges, and to look at how we might integrate that into the way we assess, first of all, initial presentations that people might have following stressful or traumatic events, and then how we might track change in some of the factors that underpin psychological disorders while people are undergoing cognitive-based therapies.
The other thing that I'm involved with is as chief scientific officer at a start-up which is developing an AI system to support completion of cognitive behavioural therapy, which is our kind of gold-standard, evidence-based psychological intervention for a range of common mental health challenges, and that's called Mental Health Hub. In terms of what I see as the future of AI and mental health, which I think the question was: we know that AI is here to stay. I think we need to really seriously consider how we ethically, appropriately and responsibly use AI in a way
that supports people who are experiencing mental health challenges, that supports clinicians who are trying to work with mental health challenges, and to consider ways that we might safely use these sorts of methods to try and improve access and equity in mental health treatments.

Hi, everyone. Thanks for coming out. My name is Olivia, I'm a research fellow, and my experience with AI and mental health has been, for the last few years, working on a project also with Australian adults who've experienced trauma. I'm very interested in whether AI can draw data from your smartphone and from your wearable and use that information to predict behaviour or emotions, and at the time I started the project I thought that was a great idea and a great opportunity.
I think in the last few years I've become a bit more cynical about the potential of AI. Even if you can build a good model that can reliably predict the mood or the behaviour that you're interested in, a lot of challenges have not been resolved around, for example, as you say, privacy. I think there are huge problems with privacy, particularly in very vulnerable populations. We're getting data from them that ranges from
very embarrassing to life-destroying, and we didn't even really think that through until we were halfway through, or at the end of, the project. There are also real translational problems. In the real world, the best quality data that you get to build a model comes from the most expensive smartphones, from the people who have the most money and access, and so for the most vulnerable people, who could most benefit from these tools, there are a lot of technical and translational challenges. And at the end of the day, from all the models that I built, from all this fancy-looking data, the most reliable predictor of how someone was feeling or what they were about to do was asking them: how do you feel, or what are you about to do?
And I think that really humbled me, because in psychology there's a lot of assumption that this data we can leverage is going to be transformative, when really, you know, a single human being, I've said this before, is such a complex world, and our emotions and our thoughts and our behaviour are so unknown, I think, even after 50 years of really good research into this. I know this sounds very cheesy, but the human brain is the single most amazing organism in the known universe. That's a fact, and we don't understand so much of it. So I'm really sceptical, I think, given how little we understand things like consciousness, about what AI will bring.
Compared to the physical health space, where if you have heart disease we can easily monitor what's happening cardiovascularly using this technology, I think those analogies don't carry over very well. And to answer your question about the thing I'm most excited about: it's the most boring thing imaginable, which is that if you ask any Australian adult or parent what they need most from the Australian mental health system, it's access. They need access to a psychologist now,
and wait lists are often nine months or books are completely closed. We've seen that chatbots don't have the potential to replicate what is ultimately, you know, a human being healing another human being from experiences that have hurt them. That's at its core what psychology is.
We hurt because others have hurt us, and the therapeutic relationship has to heal that, and we know chatbots can't do that. And so the most exciting thing, I think, is, for example, ambient listening in a psychologist's room that's recording and generating the clinical notes that take a psychologist a long time to complete, so they can move on to their next patient. I think something like that would be the most exciting thing we could do right now. Thank you for that great answer.
Hi everyone, I'm Steph, and as Simon mentioned, my interest in AI and mental health comes from looking at the ethical implications that these technologies might pose. Primarily, my research focused on some of the more neglected and complex ethical issues that arise in the use of digital phenotyping for assessment in particular, and also things like neurotechnologies, such as emerging brain chips or digital versions of pills, but also existing devices like DBS, deep brain stimulation, and what kind of ethical issues might arise there. My research really tried to frame this and look at this with awareness of two main issues.
Firstly, the mental health context operates very differently from other health care situations, in that patients in the mental health care system can be treated involuntarily under mental health legislation. And the second thing is that there can be instances where patient treatment preferences are not taken into account in the mental health care system when they're in those situations. So when we're looking at what kind of ethical issues might arise, we need to be really conscious of the context in which these digital technologies could be deployed, either to lead to coercive interventions or potentially to be imposed as coercive interventions under mental health legislation. So that was my keen interest.
And, I guess, looking at the issues from that perspective, my concern is that we should be very cautious and very slow in thinking through how we apply these technologies in certain contexts, and that there may well be hard lines around the application of these technologies in certain contexts. We can talk a little bit more about what that might look like.
Thank you. So I'll just start riffing off what you guys say and you can riff off me and each other. Firstly, Steph, so you do see there being potential, but you definitely think it needs to be scrutinised, as it should be.
So you're not saying to outlaw it, like you shouldn't be doing this at all. It's more like: this is happening, and we need people who aren't in the middle of it to be able to stand back, scrutinise and ethically assess it, which sounds great.
Olivia, you were somewhat sceptical. I think you've probably still got a modicum of hope, or not hope, just a sort of receptivity towards the potential applicability of these things. You said, in a way, sometimes the pros aren't worth the cons: you might make some gains, some benefit, but it's just too intrusive or too invasive, too costly in some sense. So I think what it might then become about is working out what those limits should be. AI, mental health and analysing data is not going to be some silver bullet that magically solves and fixes everything, but it at least has the potential to provide some information.
So practitioners are kind of inherently doing some cognitive work here. They have to think, they have to assess and analyse their clients and their situations.
And I always felt that sometimes there might be some extra information that could be given, you know, and that information could come judiciously and discreetly through digital footprint pathways. So in this way, AI is not just about collecting as much data as you can, crunching it and trying to come up with automated predictions, but at least trying to provide some facility and information to practitioners to help them with what they're doing. Would you concur with that sentiment?
Yeah. Yes. So I think as a researcher, and, well, my background is behavioural science, and Simon, I know you're a computer scientist, so I think as the scientists sitting up here speaking, a lot of the time we are guilty of what you've probably heard called techno-optimism, which is the idea that technology can solve all of these health problems, and how wonderful it will be when we have more and more technology. And again, without wanting to sound too cynical, I think that can blind us to the problems that we're really solving. I don't think it should be computer scientists or behavioural scientists deciding what solutions we have, which is currently the case; it should be psychologists and Australian consumers telling us the problems that they most want us to solve, and then we sceptically say to ourselves: is technology the right solution here?
Because often it's not. It's investing more money in the workforce. It's dealing with the social determinants of health that entrench...
mental health issues in Australia. And then, when technology is the right solution, we go in eyes wide open, recognising that if we just lay a technological solution over a very inequitable system in Australia, we stand not just to do nothing but to actually make things worse. So I guess that's where my reservation comes in. Yes. Well, that's why I run these events, to meet psychologists, actually.
Yes, what should we do? What should we do? Right. Psychologists have all the answers, apparently. I think that you pick up a really good point, Olivia, in that technology is a tool and it needs to be used appropriately and responsibly and ethically and collaboratively.
in a way that is used voluntarily by the people who are being subjected to technological solutions. And I think that Steph highlighted that as a potential ethical implication here. I agree.
I don't think that AI is the solution, the silver bullet, to mental health challenges in Australia or worldwide. There are social determinants, all sorts of different things, that impact people's mental health experiences. I think that there are...
If technology is applied as a tool, I mean, there are two different areas that we talked about, which Simon highlighted, in terms of assessment or detection, but then also treatment, and there need to be different considerations in each of those areas. Psychologists and people with lived experience need to be involved in designing those tools, to make sure that they are working in the way that we want them to and that they're delivering evidence-based interventions, and I can talk a little bit about that,
but I also want to give Steph a chance to...

I mean, I agree with everything that's been said. And one of the key areas of my research was really bringing out the fact that lived experience involvement in the design of this technology just isn't there, and nor is it there in decision-making about its use. So psychologists and mental health professionals more broadly, psychiatrists, psychotherapists, etc., all need to be involved in the development of these tools.
But yeah, as we've already said, we need to be going out and asking people what they actually want, how they want to use these, and then having people with lived experience involved as co-leads and researchers in the development and research of these technologies, so that we can ensure that they overcome ethical implications, but actually, more than that, that they deliver what people want and need from a tool and that they really address whatever concerns they have.

I think... can I... I think both of those points are so important.
And the thing that I've learned over the last few years is how boundary-spanning AI is and how it really breaks the methods that we used to use in science, because you come up through your discipline of psychology or physiotherapy or oncology, whatever medicine field you work in, and you learn all the methods that you need to know to solve the problems. With AI, it's not possible for one researcher to have all of that type of knowledge. It has to work in a multidisciplinary way. You have to work with computer scientists and consumers and all of these, and I think that's very new to science and research, and I don't know that we do it well.
I'll give you a clear example. In one of my projects we used co-design: Australian adults who'd experienced trauma and were having challenges were helping me with the project. We got to the end of the project and the model was showing a really high number, and the computer scientists were saying, isn't that great, look at that number, that's so good. But when I broke it down, I understood that the model would still have false alerts, that is, it's alerting the user that something is happening with their behaviour that's not really happening, and I just realised how much of a problem that is, particularly for people who've experienced trauma. Imagine you've got this smartphone app alerting you and saying, you're about to do this behaviour, you're about to engage in something that you don't want to, and you know that's not the case. And I didn't even think until two years had passed that that might occur. So, again, I guess I'm just trying to reflect that, because the field is so new, I do think it's appropriate to be pretty cautious.
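To illustrate why an impressive-looking headline number can coexist with a lot of false alerts, here is a back-of-the-envelope calculation with purely hypothetical figures, not drawn from the project just described: when the predicted behaviour is rare, even a sensitive and fairly specific model ends up flagging mostly non-events.

```python
# Illustrative numbers only: false alerts from a "good" model on a rare behaviour.

n_person_days = 10_000       # monitored person-days (hypothetical)
prevalence = 0.01            # the behaviour actually occurs on 1% of days
sensitivity = 0.90           # true positive rate of the model
specificity = 0.95           # true negative rate of the model

true_events = n_person_days * prevalence
non_events = n_person_days - true_events

true_alerts = sensitivity * true_events          # correctly flagged days
false_alerts = (1 - specificity) * non_events    # days flagged in error

precision = true_alerts / (true_alerts + false_alerts)
print(f"True alerts: {true_alerts:.0f}, false alerts: {false_alerts:.0f}")
print(f"Chance a given alert is real: {precision:.0%}")   # roughly 15% here
```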
Were you saying something about, like, needing to improve interdisciplinary connections? Yeah. Yeah, that's definitely something that needs to be worked on. So, Caitlin, just to pick up from where you left off earlier: can you think of a couple of examples of how AI and human clinicians can best collaborate to enhance mental health outcomes?
Some examples of just modest uses of AI for some augmented practice? Yeah, so we've understood digital mental health interventions for a long time, and AI is not the first digital mental health intervention that we have.
And we have a number of digital mental health interventions which are in use and Medicare-rebated in Australia. Things like self-guided cognitive behavioural therapy, where you log on to a system and go through and complete exercises in a kind of PDF or interactive format, which are the sorts of activities that a psychologist, or a mental health clinician with training in cognitive behavioural therapy, would traditionally complete with you.
And the reason these digital interventions came about was to try and address the accessibility issues we've touched upon today, in that there are wait lists to see psychologists or other mental health professionals. We don't have enough people working in that space. And we know that mental health is deteriorating; it's been a real issue since the COVID-19 pandemic, not just in Australia but worldwide, with research showing that it's under-resourced communities who have experienced the worst increase in mental health difficulties.
And so there is an important place for things like digital interventions to improve accessibility, but we've known for a long time that just giving someone a digital intervention isn't the best response, in that there are issues in terms of the quality of what people are completing, but also just engagement in completing those interventions. People don't necessarily want to sit there and guide themselves through something. They want to feel connected to, or seen by, somebody.
And so we've known for a long time that having some human interaction when completing a primarily digitally delivered intervention is most effective. Like a blended... Yeah, like a blended approach.
Yeah, yeah. Relatedly then, or differently: so there have been mental health apps, which I'm quite familiar with, but AI can also be involved with mental health apps, like recommender systems and personalising the user experience, but also using AI to inform clinicians in their practice. In a lot of other areas of medicine now, radiologists might use AI, oncologists will use AI, the informaticians, all that type of stuff.
Will there be a time when clinical psychologists or mental health practitioners are going to have some tools in their AI kit? Is that possible?
I think that's really interesting. I teach into the Masters of Clinical Psychology program here at Melbourne, and we had a conversation about this in our team teaching meeting this morning. There are two different areas where we might see these tools, and we're starting to see them emerge. One is note-taking, and increasingly there's research showing that clinicians of all sorts are using this at the moment, and perhaps that's a good time saver if it can be done in a way that secures confidentiality and reduces hallucinations, some of the issues that we've been talking about here.
But the other way it might be used is to guide treatment, right? In a psychological context, we often call that formulation, where we try to understand the factors that are predisposing somebody's experience and the factors that are keeping their symptoms going, and devise treatment intervention techniques that might try to shift those factors. And we know that that's the best way to help someone have an improvement in their mental health state. So with those formulation skills, there's a real danger if we outsource that too much to an AI, in that clinicians potentially don't get the opportunity to develop those skills themselves.
And so I think that we need to be more cautious in that space. Sure. You don't want enfeeblement, as they say, AI-induced enfeeblement, where you become so reliant on the AI that you lose those human capacities to do what you need to do.
Does anyone else want to pitch in on that, or no? I think in general it's important with all digital tools, but particularly AI, that we really carefully consider unintended consequences.
We know that every time you introduce a new digital health tool, it can have unintended consequences. And I think we often have a positivity bias or an optimism that our tool won't. And so being sensible and testing that early on, these ideas to support clinical care, I think, is really important. And there isn't currently any standard in Australia for testing the safety of these tools, particularly in the mental health AI space.
And I think that's a key need here in Australia. Shout out to the Validatron, which is at the centre,
just on level one, where we're developing a lab precisely for this reason. But yes, you're right, it's a whole new field of science. How do you test and evaluate the early feasibility of these tools? Yeah, I think, I mean, again, I agree with everything that's being said here. And I think the same issues apply where we might be using AI for assessment or even potentially the kind of information that's collected through wearables to help practitioners form a view.
of what might be happening for someone, we also need to be alert that that data may not in fact be objective or representative of what is happening for a person. It could be a false positive. It could be caused by a variety of different contextual factors that aren't taken into account.
So we always need to be questioning what the data is showing us and avoiding the algorithmic, the automation bias we might have, deferring to what's being shown in front of us by a piece of technology or AI because we think it might be providing us with more objective knowledge than what the patient in front of us is telling us.

Yeah, I think we should go to questions, I see people... but I want to say something as well, just before questions. Oh yeah, there's time, okay. I think the data thing is really important. To build AI you need massive amounts of data, lots and lots of pieces of data, and in the mental health space, with vulnerable populations, I'm deeply concerned. I mean, that literally keeps me up at night, because we're collecting massive amounts of data in people's homes about all of their behaviours. And I think we're asking participants to consent to provide us data for tools that haven't even been invented yet.
And how do we do that ethically and protect, again, the most vulnerable Australians? I think that is a huge alarm that we haven't solved yet. We need to empower people to be able to take charge of their mental health in the ways that they can.
And consent is a really important part of that. OK. Did you say consent? Sorry.
Consent, but in terms of... a lot of what we're talking about here is whether people are voluntarily able to give consent for their data to be used in a particular way. And a lot of what Olivia is talking about here is the fact that people are potentially voluntarily giving over their data not knowing what it's going to be used for, and that's really disempowering for the user, particularly someone who experiences mental health challenges. We need to help develop systems, procedures and policies in ways that empower people to take charge of what's happening to their data and their lives, rather than feeling subjected to ticking boxes for terms and conditions and that becoming how our mental health system operates.

Sure. Yeah, I was just going to say, I think this all circles back to what we've all been talking about, in terms of making sure that lived experience involvement is there from the beginning.
And that consumers, people with lived experience are really driving this agenda and driving the development of these tools. And that it's all about choice, right? So there may be a role for certain types of AI or data-driven tools to play in the mental health care system.
But we always need to have a choice about how they're used, whether we're involved in that development, whether our data is being used for that. And consent is one way we can tackle that. But yeah, the greater the inclusion of people with lived experience, the better. And that's something we can definitely work on, I think, in Australia, but worldwide.
It reminds me of something I actually wrote a few years ago for the Mental Health blog, you may have heard of it: patients as domain experts in artificial intelligence mental health research. Because when you think about designing mental health apps, you have co-design and you have involvement of people with lived experience.
But it's a less explored area of AI mental health work: not just the user interface design and things like that, but involving clients and people with lived experience in the development of the underlying AI technologies. So I think that's a very interesting area that needs to be further explored. With regards to consent, this brings in notions of, I think, dynamic informed consent.
If you're going to be collecting a lot of data about someone that may be used down the track, even as new models develop and things like that, then they should periodically be given a reminder: hey, please re-consent, and so on. It's just a much vaster scale than we've had prior to this datafication of health. Now, I'll play devil's advocate here, maybe. In physical health, if you tell somebody they have a condition, like there's a medical test for it, and they say, oh no, I don't, no I don't, it's sort of, well, what do you do? You've got some test which indicates that they've got this condition and they're just claiming that they don't.
Whereas in the mental health area, it's much more subjective and very experiential. So, in that blog post I alluded to, I had this hypothesised scenario: if you had an oracle-like predictor, a digital phenotyping machine, that predicted somebody will develop a condition in a few months, and they said, no I won't, no I won't, I don't feel problematic at all.
It's just an interesting thought experiment: when you've got a tension, I suppose, between an objective prediction system and somebody's own testimonial claims, no, I feel fine, nothing's going to happen to me, what should take precedence?

Do people do that? Do people get diagnosed with a condition and say, I don't have that?

Oh, I don't know, I'm just saying that as an example, like if you have a test that actually shows you have this condition, that's all.

Yeah, I think there are challenges in psychology, and I think all of us would agree with this: the diagnostic criteria, what do they even mean? What even is depression? Genuinely, there's been a lot of scepticism around this now from very senior academics around the world, really questioning at its core whether the traditional medical model, as for heart failure or a heart condition, applies to mental health, and what the symptoms really are. We're taking a much more transdiagnostic approach. I think the field of psychology is changing a lot, in a good way, and it's evolving in how we think about whether a disease really exists and what its symptoms really are. But that makes AI even harder, and I think it also gives us the potential that AI could unlock new ways of understanding different types of disorders.

Yeah, I think we see that a little bit in that context. I am a psychologist, and I definitely align with a transdiagnostic approach in terms of understanding what are the underpinning cognitive, behavioural and physiological mechanisms that are predisposing someone's symptoms, rather than constraining them to a diagnostic model. And one thing that my team is currently doing is trying to use NLP to detect some of the underlying features of, in this context, trauma narratives, which predispose someone towards having symptoms of depression or anxiety or PTSD following a distressing event. And so this kind of more unsupervised learning approach is helping us to move beyond the theories that psychologists have given us, and that our research has been constrained by, to take a more data-driven approach and identify new predisposing factors or underlying mechanisms that might eventually be targets for novel interventions that we currently know nothing about. So there is potential.

Yes, so there's the potential, essentially, to employ unsupervised machine learning methods, because the standard supervised machine learning methods require training datasets and require what is termed a ground truth, I suppose. For something like predicting real estate prices for the coming year, the ground truth is relatively straightforward: they're just house prices and things like that. As you've mentioned, in psychiatric taxonomy and the like, that's not black and white to begin with.
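As a rough illustration of that unsupervised, label-free direction, here is a hedged sketch using topic modelling over a few synthetic sentences. The narratives, the number of topics and the choice of LDA are placeholders for illustration only, not the methods of the project just described.

```python
# Hedged sketch: unsupervised topic modelling over (synthetic) narratives, so no
# diagnostic labels or "ground truth" are required to learn latent themes.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

narratives = [
    "I keep replaying the accident and cannot sleep at night",
    "Since the event I avoid driving and feel on edge in traffic",
    "I feel numb and distant from my family most days",
]

counts = CountVectorizer(stop_words="english").fit_transform(narratives)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

# Each narrative gets a distribution over latent themes, which researchers can then
# inspect and relate to later outcomes, rather than fitting predefined labels.
print(lda.transform(counts))
```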
The ground truth itself is so open to discussion. I'm excited about that. That's so exciting.
We're excited too. But I do want to answer your question about telling someone they're going to develop a condition. We tried that without AI.
We made the mistake of saying to a human being: you're on track to develop a mental health disorder. You don't have one right now, but you're on track to develop it. And the people who received that information, versus nothing, were less likely to go and get help for their care.
And again, that taught me about human behavior, which is that humans don't go to care until they're in crisis. And if you give someone information that they're at risk of developing a disorder, often they think, oh, I don't have it yet. And so I don't know. People already don't do that when we say, please stop doing that so that you don't develop this issue. I'm not sure how AI will add anything beyond that.
Right. Then it's the human's inclination that at the end of the day will be prioritised. Yeah, humans wait. Most people wait until they're in crisis to seek care.
Okay. We'll go to questions in a few minutes, I'll just finalise. If anyone wants to talk about chatbots a bit more, you're free to.
Because I did pose that question, what are their contours, what are their limits? I feel like... Sorry, just...
They're kind of... this goes back to the point that this is such Wild West territory, and there's so much thinking and research, years of checking this stuff out, that needs to be done, for it not to go too promptly or quickly to, I suppose, tech companies that want to develop these things for certain motives. I would like this stuff to just stay for a while in the tepid waters of academia and, you know what I mean, to just nut it out over years before we go any further.
Chatbots have been one of those things where it's all the rage; I see people on Twitter saying, I've got my latest chatbot, it's going to solve everyone's loneliness, and so on. Yeah. Yeah, look, I haven't done any research specifically on chatbots, so just a disclosure on that.
But I am someone who's accessed therapy, right? Psychotherapy; I'm someone who's extensively accessed psychotherapy. So, on my views on chatbots: many of the same issues that apply to digital phenotyping apply to chatbots, whether that's issues around privacy or issues around
how patient testimony is evaluated by the chatbot or by the clinician. So my view, you talked a bit about the digital therapeutic alliance. As someone who has engaged with therapy, my view on whether or not chatbots can deliver something anywhere near what we get when we engage with a human therapist and when we get that space to feel heard and to feel supported and to work through whatever issue it is that we've come to work through. I just don't.
think they're going to be able to deliver that. And I think there is the potential that we are distracted by the prospect of chatbots plugging a gap, and distracted from the fact that actually we have a workforce shortage and we need to be addressing that, and that people develop, as you kind of alluded to with Replika, quite unhealthy relationships with chatbots that could harm them in some way, whether that's because the chatbot is discontinued or because the relationship itself becomes a toxic relationship.

They changed a setting on it once, and they changed a key personality aspect of it, which left a lot of people devastated, or something like that.

Can I expand on that? I think those are really important considerations.
So, conflict of interest disclosure: I'm involved in developing an AI system, which would commonly be called a chatbot, to deliver cognitive behavioural therapy exercises. And the way we've been thinking about it is not to replace humans, but to try and get at some of that workforce issue. I worked in Britain for a long time, and the NHS has this stepped model of care, where we have a really, really high number of people who experience mild to moderate presentations of depression and anxiety. And at the moment, what a lot of those people get, under Medicare here in Australia and under the NHS system in Britain, is this self-guided CBT, those digital programs that I mentioned, which have no human interaction and no AI assistant to give you support, or to give you an example when you can't think of something and help you complete these very static forms.
And so I think there is a role for AI assistants that can help people to do things that they would traditionally be doing on their own, these kinds of first-step, self-guided interventions, which would be AI-assisted, because we are always going to have those presentations which are more complex, that need that human touch and that human interaction. And sometimes, for some people, that lower-level AI assistant might be enough to get them going, or get them to a point where they reach out for help, or support them between their sessions with a human psychologist. So I don't think we'll ever replace that, but we could potentially use these AI interventions to free up time for the human workforce to really focus on those presentations that need it.
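As a purely hypothetical sketch of the kind of low-level "AI assistant" role being described, offering an example when someone gets stuck on a static CBT form field, here is a minimal illustration. The fields and canned examples are invented; a real tool would be clinician-designed, co-designed with people with lived experience, and evidence-based.

```python
# Hypothetical sketch: prompt a user through a CBT thought-record form and suggest
# an example for any field they leave blank. Fields and examples are invented.

THOUGHT_RECORD_FIELDS = ["situation", "automatic_thought", "evidence_for",
                         "evidence_against", "balanced_thought"]

EXAMPLES = {
    "situation": "e.g. 'My friend didn't reply to my message all day.'",
    "automatic_thought": "e.g. 'They must be annoyed with me.'",
    "evidence_for": "e.g. 'They usually reply quickly.'",
    "evidence_against": "e.g. 'They mentioned a busy week at work.'",
    "balanced_thought": "e.g. 'There are other likely reasons for the delay.'",
}

def assist(entries: dict) -> dict:
    """Return a per-field suggestion for anything the user couldn't complete."""
    return {field: EXAMPLES[field]
            for field in THOUGHT_RECORD_FIELDS
            if not entries.get(field, "").strip()}

# A user who got stuck after the first two fields would see prompts for the rest.
print(assist({"situation": "Argument with my partner",
              "automatic_thought": "I always ruin things"}))
```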
I agree. I agree completely. And I think the example you actually saw in the slides was so neat, because it's such a case of: I have a problem, it's exactly this thing, and I need exactly this solution. And the chatbot's excellent at doing that. But for most human beings with mental health issues, by their nature, it doesn't matter if you're Rogerian or Jungian or third wave, or whatever your philosophy of psychology is.
You come into therapy with defenses, defense mechanisms. You don't recognize your own biases. You don't understand fully the things you've experienced. You don't always recognize how your own behavior is harmful.
You have delusions about yourself. And that's the therapist's job, to pull those things out. And a chatbot will never be able to do that, because it's too pragmatic and grounded, away from that grayness that is a human being.
So I think for simple problems like that, it's a very elegant solution. But for the actual complex therapeutic work, it's impossible to imagine how a chatbot could say, I've noticed a defence mechanism in you. I just can't imagine that happening.
Okay. Well, thank you for converging towards these very sensible points. That's kind of nice, isn't it, to see that land...
We didn't discuss beforehand, so... This is not pre-organised. We've never met.
We have about 15 minutes for Q&A, so I'll walk around if anyone has questions. Thank you. Very enlightening.
Now, you alluded to the worsening mental health situation in society at large. And I'll try and keep this quick. But what it seems to me we have here is a dam that's burst.
We have, you know, nine of the ten largest companies in the world being AI companies. Google is gathering 80 or 90 per cent of the data you're talking about already anyway. And whereas lawyers, doctors, any clinical practitioner or accountant is privy to privileged information and has a very well regulated and protected responsibility to their patients or clients, these larger entities have just a fiduciary responsibility to a shareholder. Where we previously externalised harms to nature, now we're externalising harms by monetising attention, and you don't get to a meaningful definition of what it means to be human without talking about free will and attention. So I sort of see this as: the dam has burst, and we're running around here being very sensible with a few sandbags, saying we should place these carefully, while the entire town is flooding.
Yeah. So, what is the role that the mental health industry, or society, can take in arguing for upstream interventions, regulatorily, to say that anyone who's got access to this sort of data needs to have the responsibilities that mental health professionals have? Like, we're creating a crisis because of this. There you go.
That's the frame. I think there are two things there. I think you named the solution. One is that we need to be doing more to try and enforce regulatory protections. We're starting to see a little bit of that.
So the EU's AI Act came into force in August, which means that within the EU it needs to be disclosed what data large language models are being trained on. And that's going to be part of trying to get at what these things are actually doing. The other thing that I myself, as a mental health clinician, have grappled with is that we need to stop being so scared and start actively being part of the solution, because this stuff is going to happen around us, and without us, if we're not careful.
We actively need to step forward and say: if not me, who? We need to be working with these companies to make sure that what they are developing is evidence-based, because when it comes to things like chatbots, if a chatbot is delivering something which is not an evidence-based, helpful intervention, it's going to stop people going to get therapy, because "I did therapy and it didn't work", and if that's not really therapy, it could have really damaging consequences. So I think we need to face our own fears head-on, be part of the solution, and not run around with sandbags.

And then, yeah, the only thing I would add is that, in answer to who's going to stand up to the tech companies, I actually think that's the business of philosophy. You know, psychology is the business of human beings who are experiencing mental health symptoms, and when it gets beyond that, and big tech companies are changing what it means to be human, that's where we need philosophy. Like, please stand up.
Yeah, well, I mean, I'm here. I'm here. I am.
Yeah, trying to do some advocacy. Yeah, so I agree.
I mean, I think there is a role for the profession to play. I also think there's a role for philosophy to play. There's some work to do there as well, in terms of the discipline traditionally not being keen to advocate for certain
policy agendas and things like that. I think that's changing. I think we're seeing more philosophers who really carefully think about the policy implications of their work and get involved in that. But actually, I think this is everyone's responsibility. All of us at some point will likely be affected by mental health challenges.
We are all subject to big tech tracking us and monitoring us. So, as much as it is people in certain professions who can work on, you know, consultation with government responses to this, just as a general citizen you can get involved if you're passionate about this. You can speak to your local MP.
You can start taking an active role in this issue and getting your voices heard. Great. Thank you. Thank you. I just kind of want to go back to that idea of self-fulfilling efficacy.
And as a practitioner, I see this coming up with ADD, people self-diagnosing in this area. You know, the "my phone is on, it's listening to me" type of thing. So I'm scrolling through Instagram and about every fourth ad has something to do with ADD or ADHD.
And especially among the younger crowd, even 40 and under, a lot of people are saying, well, I did a test, and eight out of ten, I definitely have ADD. And I'm not anti-AI in any way. But how do we curb that sense of, you know, AI told me, therefore I am?
Where people are not only using it to label themselves but, more dangerously, to excuse themselves from learning how to manage their behaviour effectively, because it's become a crutch instead of a tool. That's a hard one. Yeah, I wonder, though...
So I don't disagree. I think obviously there are huge problems with personalised advice, you know, all of that aside. But I think more information about people's possible mental health is a good thing. I think society needs more information, and it's probably not a good idea that it comes from TikTok, but I do think we are in a new phase of greater awareness around things like neurodiversity, around all the different mental health conditions that we've previously had a lot of stigma and silence around. So this kind of problem that you're speaking of, I don't know the solution, but I do think it is in reaction to consumers, patients, not having enough information about their mental health and being locked out of a system, not able to access the care that they need, being silenced or being ostracised in some way. So yeah, that doesn't answer your question, but I do have some scepticism around that, because if people are coming in more and saying, I think I might have this thing, I've seen this information, I don't necessarily see that as a bad thing. So it's a balanced conversation, I suppose.
Okay, we'll move along. Thank you for the very informative talk and the conversations. My name is Delani, I'm one of the psychiatrists.
I do do ADHD assessments, and I just have something more to add. You talked about how we discuss with patients if they decide that they don't have a condition. What we are dealing with at the moment is the complete opposite. They go through TikTok, they come with their own diagnosis, and sometimes with the medication, and they come and see you totally convinced that they have ADHD. But you talk to them for an hour and a half or so and you realize they have a trauma history: domestic violence, so much attachment-related trauma, drug and alcohol abuse, so many different aspects to it.
And it takes such a long time to convince them that they actually don't have that, and it's a totally different thing. And AI might be very black and white, but human beings have so many different grey areas of life. So it takes a lot of time, I think.
I'm not complaining. If you see my new sports car, you know why; I have lots of referrals coming through. But I feel really sorry sometimes, because the referrals have gone up about 400 percent just for ADHD assessments, and it takes such a long time to convince someone there's a lot more work to do on their own mental health than just going and getting Ritalin and expecting all their problems to be fixed. So I think it's a very complex problem.
I agree, we all need to work together. I personally don't think the regulators are doing much. My Facebook and Instagram feeds are full of ADHD-type information, and it's kind of always in your face as well. It's really hard not to be in that space.
So... That's just one addition to that. In terms of how AI can help, as a psychiatrist what I have noticed, especially around documentation, is that AI is the most important tool I have had in at least the last 10 years of my practice.
Prior to AI, I would say I sometimes used to spend whole weekends just writing notes, writing letters to GPs, writing court reports, filling out NDIS applications. Now I actually have a weekend, thanks to that. So it's helpful; we just need to use it in a very useful way. That's my two cents' worth, thank you. Those humble administrative tasks are a good start. It's not very sexy, it's not the cool science, but it's the most helpful thing. It's a good start, definitely.
Thank you for the very thought-provoking conversation. I'm wondering whether there is potential for using AI to provide psychoeducation as well as to reduce stigma. Because, speaking from personal experience, I'm a current psychology student.
I'm aspiring to become a psychologist. I'm from an Asian-Australian background, and I know for a fact that within our community there is this really deep-rooted stigma: people refuse to admit that they are struggling with their mental health, even when they are in severe crisis, and that affects young adults like me, especially when it's family members, and it affects our lives every day. And I'm wondering, given that sort of denial of their mental health struggles and their refusal to go to a psychologist or a psychiatrist, would using AI, perhaps because they don't need to disclose anything to a human being, actually help?
I would just love to hear any opinions on that. Thank you. I would say yes.
I think psychoeducation is a potentially really good application of AI. Psychoeducation, for those who don't know what it is, is understanding the symptoms and causes of mental health challenges, and often it comes in the form of, like, a PDF that your GP might give you, for example.
It can also get around stigma. For example, with the application that we're developing, you can access it through apps you already have on your phone, things like WhatsApp or Facebook Messenger, as a way to access the information without having to download a mental health therapy app or actively seek out mental health support. That's one way to try to get that information onto platforms people are already accessing voluntarily, so they can find the information they need. So I think there are really good applications there.
I'd really love to see more research in this area, actually. So if you're thinking about pursuing psychology or research, a clinical psychology PhD, please do; research in this area is needed, because I haven't really formed a view on this. There are good arguments for how it could be beneficial. I'm also worried about the potential for it to just reinforce that stigma by keeping narratives hidden, so that people don't feel they can talk about things out in the open. So I don't actually have a firm view on this. I think it would be great to see more research and to hear more about how people experience those apps, what they feel, and whether it's helpful. Definitely.
I think the content is really important there as well: ensuring that it's encouraging people to talk about it, to seek help, to go and see professionals. That's a really important role any digital mental health intervention needs to play, encouraging connection with a human, because we don't want to contribute to the stigma. I've seen a lovely analogue in a population where they developed a chatbot for youth who were questioning their sexuality, which is a time when you often don't feel safe speaking to a human being for whatever reason, whether it's cultural or you're just not ready to. And there was a really lovely paper showing that it had benefit precisely because they weren't ready to speak to a human. So, yes, I do think there are instances where a chatbot could fill that gap, where speaking to a human is not safe or wanted or needed at that moment.
Sorry, this side. We'll hang about if you've got questions afterwards. Yeah. Hi. Yeah, so I just wanted to, I guess, firstly just give my own little disclosure as a psychologist teaching other clinicians how to use AI in everyday practice.
I just want to check: how do we overcome this overwhelming negative bias towards all the doom-and-gloom stories around AI, all the bad uses, all the ways it's going wrong, the Terminator-type scenarios, the exaggeration? Because, you know, I think there are so many everyday uses: hyper-individualising treatment, providing scenarios, providing dedicated psychoeducation, better supporting people on an individual level, the self-care benefits for clinicians, all the ways we can better support clients. We don't go and say nobody should drive a car because there are bad drivers driving badly and having accidents; we see an overall benefit when good drivers drive well and serve everyday society. So how do we overcome this overwhelming negative bias around, sorry, people using it for nefarious or incorrect purposes, which is not a way of encouraging society to move forward more positively? I think it's just evidence.
That's my sense, just speaking as a scientist: more evidence that it's safe and effective and feasible, and more case studies of those, would be really beneficial. I mean, it was only 20 months ago, I'm losing track of how long we've been counting, but it's still so new, less than two years, so the evidence base is just trying to catch up, and people are teaching people how to drive safely, responsibly and ethically. Yeah, I mean, I agree. Absolutely, the evidence base needs to be there, and it's not. It's emerging at the moment, but there's quite limited evidence in the space.
But actually, I also think it depends what circles you're in, because in the kind of research that I did, a lot of the papers were very positive, with no negative bias. They would just have, like, a tiny section on ethical implications, and that would be it. But it was all, yes, benefits, yay. So I think it can depend on where you're operating, who you are, who you're interacting with and what the views of the people around you are.
It's an interesting question about the car thing. It just happened organically with cars, I suppose. I don't think people were thinking at the start, how do we convince people that the pros outweigh the cons with cars; it just developed naturally and we reached this sort of stability point. Maybe that will happen with AI.
Just a quick point. Yeah? Very quick comment. With cars... There was the horse and carriage lobby in the UK that had people walking in front of the cars with flags.
What we're seeing here is that, because it's an arms race, people also want to control the space. So a lot of this conversation is either advertising how powerful it's going to get and how quickly, trying to lock the stable door after the horse has already bolted, or governments trying to get control over everybody else's applications while still pouring resources into the military and all sorts of unethical applications themselves.
So it's a land grab first and foremost, and the conversation is marketing in and of itself. So if you're looking at how to apply it in your own practice, then do the research on the ways it can help, and yes, look at the unintended side effects, but trust that the technology can address things.
But as was said here previously, you know, there's something in human connection that, yeah, you don't necessarily want to forget about. Hi, just to put a positive spin on this conversation, I know of at least three companies in the US that are coming up with a complete digital therapist that will do complete CBT, right?
And I'm also working with UNICEF, and in Eastern Europe we have a lot of scenarios and countries where clinicians themselves are not trained in CBT or other evidence-based therapies, which leaves us no choice but to investigate building a digital therapist. I've been told that we don't have much time, but only one of these CBT-based tools has been released using generative AI. If you're interested, I'm happy to pass on that information.
But it's all happening. Within three years you will have the evidence, because things have changed within the last three months in ways that have made significant differences to this field. So, yeah.
Thank you. Yeah, I hope we haven't sounded negative. I think the main point is just, yeah, that's great.
All we need is evidence, I think; that's the main thing. Yeah, and I think the benefits for under-resourced situations just cannot be emphasised enough. OK, just a last question, unless you've got a few more.
But just a follow-up from this gentleman on the doom-and-gloom narrative. It is true, Steph, we do go to these talks and it's constant doom and gloom, so I appreciate that you've only heard the positive, but we always hear the doom. And I'm just wondering, following on from, I can't remember who mentioned it, maybe it was Olivia, maybe it was the other lady, sorry, I don't know your name, but maybe we need to be asking different questions. For example, does psychology need to examine itself first? Because we've had this mental ill health problem for a very, very long time, even before AI became sexy. So should we be looking at what's not working locally, even prior to the AI tools being implemented? Absolutely.
And then also, you know, can we use AI to actually guide psychologists? So maybe shifting the narrative, in the sense that the client isn't the so-called vulnerable partner here; let's look at what the clinician can do better to level the hierarchy that has existed between clinician and client since before the implementation of AI.
So perhaps just asking different questions, if that makes sense, or asking how we can up the ante on what we're already doing with our clinical tools. Yeah, I agree completely. And I think at least some of the ethical issues that these tools present are just replications of what might already be happening in clinical practice as a result of under-resourcing or power hierarchies that are not being well addressed. So yeah, I absolutely agree that we can ask different questions and look at what we're already doing.
I think it's a good opportunity to take therapy outside the therapy room by using AI, which gives the user more control over when, where and how they engage with therapeutic principles too. Thanks so much for the talk. Really loved it. I'm studying a master's in positive psychology here. For those who might not know, positive psychology is the study of human flourishing, looking at what we can learn from the outliers to then bring the whole average up.
But a challenge we have there is that there isn't the same burning platform. If you're a three out of 10, you really do want to get better. But if you're a seven out of 10, trying to convince someone that if you do meditation, you'll be an 8.5 is quite hard.
But I've been curious about how some companies like Happify are using nudging, so that when the app notices people are about to go to sleep, it gives them a nudge to do a gratitude intervention. So I'm curious to hear your reflections on how the tech could enable this positive psychology side. And are there fewer ethical implications around capturing data there, given that it's not capturing vulnerable data?
So I'm curious about that. Yeah, so I think there's definitely potential for that to be positive and to be used in a way that people have control over, that people find beneficial, and that results in them feeling a greater sense of well-being. I think the data collection probably still raises a similar degree of ethical concern. It depends on what kind of information is being tracked, but information is being tracked about a person inside their home, in their day-to-day life, and, as we've heard, the regulation isn't there around big tech companies and private companies operating in that space, who can then use this data, share this data, sell this data for whatever purposes they determine are necessary for their commercial interests. So your data, which may be only tangentially attached to your mental wellbeing but still has some substantive content, can end up being sold to a company that you never envisaged. And we might want to be worried about that.
We've got time for a couple more questions. Hey, my name's Desi. I've worked in a sexual assault service for many years and it's kind of a feminist support model of supporting people's agency.
And there's a lot of group work that happens. And they're using AI tools to help facilitate support groups. It's been really useful, because an AI bot can time contributions much better than a human can and make sure people all contribute equally, which has been really great.
But with feminism, sexual violence against women is often placed in a broader political frame; it's not an individual mental health problem. A lot of our mental health comes from a capitalist culture that's really toxic. So I'm interested in how AI is going to change society, rather than individual mental health, or whether that's a field at all.
Yeah. I think both of those questions are linked to each other. Thanks for sharing. And I think, again, the traditional field of psychology is about when something goes wrong in our mental health. What you're talking about, both of you actually, is AI in terms of anthropology, sociology, philosophy and all these other really important fields that speak to what's happening in society and how to live well as a human being. I personally don't think that's the business of traditional psychology; we're in the business of when things go wrong. So I'd love to see a panel on AI in anthropology, AI in sociology, AI in gendered politics.
I think we need to do that more, and I don't have any answers, but I'd come to that panel. Great point. Yes, that's a great point. We have one more question, and then we can take the other questions off stage, because we're going to be here for, you know, 15 more minutes afterward. Okay, so I'm not even going to ask a question.
So my name is Simon Dennis. I'm in the same company as Caitlin. We've talked a lot about the research, and I just wanted to advertise a couple of research projects, one that's currently running and another that will be coming up. Currently I'm doing a study with Masters of Professional Psychology students looking at clinicians' attitudes towards generative AI. There we're concentrating on what the appropriate use cases are and also on how clinicians view the risks and benefits. We're looking for mental health support people of any description who might be interested in being involved in that study, because we're really trying to get as many views as we can. The other thing I wanted to mention is that Caitlin is also about to lead a new study on the use of generative AI in these chatbot scenarios, in a full clinical trial.
And so we're certainly hoping to test out the efficacy and engagement there. Okay, so thank you for that. Thank you all for coming, and thanks to our panellists. Thank you to Simon for bringing us all together.
There are some drinks over there; we can continue the conversation for a bit.