Literature Review
In this literature review, I will explain how AI generates automated messages and how those messages convey information for crisis support. The review examines usability, specifically how patients interact with AI, and shows how AI facilitates crisis communication in mental health care. It brings technical knowledge to bear on AI and mental health communication through the example of chatbots conveying information to patients. I will synthesize my sources to connect related ideas that support my research goal while also using those sources to explain the process of chatbot communication and human-computer interaction.
Arendt, F., Till, B., Voracek, M., Kirchner, S., Sonneck, G., Naderer, B., Pürcher, P., & Niederkrotenthaler, T. (2023). ChatGPT, artificial intelligence, and suicide prevention: A call for a targeted and concerted research effort. Crisis: The Journal of Crisis Intervention and Suicide Prevention, 44(5), 258-261. https://doi.org/10.1027/0227-5910/a000915
Bedington, A., Halcomb, E. F., McKee, H. A., Sargent, T., & Smith, A. (2024). Writing with generative AI and human-machine teaming: Insights and recommendations from faculty and students. Computers and Composition, 71, 102785. https://doi.org/10.1016/j.compcom.2023.102785
Gamble, A. (2020). Artificial intelligence and mobile apps for mental healthcare: A social informatics perspective. Aslib Journal of Information Management, 72(6), 843-861. https://doi.org/10.1108/AJIM-04-2020-0152
Kessler, M. M., Breuch, L.-A. K., Stambler, D. M., Campeau, K. L., Riggins, O. J., Feedema, E., Doornink, S. I., & Misono, S. (2021). User experience in health & medicine: Building methods for patient experience design in multidisciplinary collaborations. Journal of Technical Writing and Communication, 51(4), 380-406. https://doi.org/10.1177/00472816211044498
Knowles, A. M. (2024). Machine-in-the-loop writing: Optimizing the rhetorical load. Computers and Composition, 71, 102826. https://doi.org/10.1016/j.compcom.2024.102826
Ma, J. S., O’Riordan, M., Mazzer, K., Batterham, P. J., Bradford, S., Kõlves, K., Titov, N., Klein, B., & Rickwood, D. J. (2022). Consumer perspectives on the use of artificial intelligence technology and automation in crisis support services: Mixed methods study. JMIR Human Factors, 9(3), e34514. https://doi.org/10.2196/34514
Reeves, C. (1994). Writing and reading mental health records [Book review]. Technical Communication Quarterly, 3(1), 103-104.
Rosinski, P., & Squire, M. (2009). Strange bedfellows: Human-computer interaction, interface design, and composition pedagogy. Computers and Composition, 26(3), 149-163. https://doi.org/10.1016/j.compcom.2009.05.002
Sorapure, M. (2019). Text, image, data, interaction: Understanding information visualization. Computers and Composition, 54, 102519. https://doi.org/10.1016/j.compcom.2019.102519
Tian, Z., & Yi, D. (2024). Application of artificial intelligence based on sensor networks in student mental health support system and crisis prediction. Measurement: Sensors, 32, 101056. https://doi.org/10.1016/j.measen.2024.101056
Devon, I think your research topic exploring AI and mental health is really interesting and will serve you well in our TWDR program. The sources you selected should provide a good introduction to your topic, and they will point you toward other scholars you can explore as you narrow down what you are seeking.
I appreciate the arguments you made in your first blog post because I have mixed feelings about AI. Mainly, I struggle with what has been promised by AI promoters and companies who dove in with heavy investments. As with most exciting new tech, great things are imagined that will revolutionize our lives, but in reality the tech ends up being used in the ways it is actually useful. It finds its place. Research into AI is an area where watching who is saying what will be critical to the analysis of sources and their claims. I can hear Dr. Bacabac saying my statement holds true for all research sources, which is correct, but the motivations behind AI claims are, I think, especially critical to understanding AI capabilities.
I'm excited to learn from your research project because it will reveal what is working and what is problematic. I'd rather know about the real benefits and limitations of AI than hold onto my general suspicion. Mental health is about so much more than people's preferences; it's about people's lives. Combining research into AI with mental health applications is a great way to look at AI capabilities with a critical eye. Your topic is exciting, challenging, and current, and the results could benefit both the skeptics and the early adopters of AI by encouraging an accurate picture of AI capabilities and potential.
What topics and/or variables might emerge from the current bibliography? If categories or headings exist, note how they might enable or constrain the topic.
The technical journals approved for my review are broad and will provide conceptual evidence. The interdisciplinary sources are more specific and will provide more relevant, selective information for my review. I think the constraint will come from the technical sources: I will have to synthesize information from more generalized discussions, which will be more difficult. I want relevant, specific sources that pertain directly to the points I am trying to make. However, the broad approach will help me develop various concepts and flesh out my ideas without narrowing my view too early.