

Posted May 7, 2026

Understanding humans to navigate the rise of GenAI


The rise of generative artificial intelligence (GenAI) is often framed in technical terms: faster training, larger datasets and more sophisticated models. While these technical challenges are real and significant, history shows they are rarely insurmountable.

Technologies have consistently advanced through experimentation and engineering. Yet navigating the waves of GenAI adoption is often more complex, with concerns such as job displacement, ethical dilemmas and societal shifts proving harder to resolve. This highlights a critical point: the challenges surrounding AI are not purely technical; they are fundamentally human-centred.

This human dimension is why understanding GenAI requires understanding people. GenAI development, deployment and governance are deeply intertwined with human cognition, values and behaviour. Developers make design choices influenced by their own perspectives; administrators decide how GenAI is applied in organisations; end-users interpret GenAI outputs through their own experiences. Consequently, questions of “where and how to use AI?” are as much ethical, social and psychological as they are technical.

Psychology, the scientific study of human thought, emotion and behaviour, provides an essential foundation for addressing these challenges. Morality, for example, is a uniquely human compass. While GenAI can be trained to follow rules or optimise outcomes, it cannot inherently weigh the ethical implications of its actions. Human guidance is essential to ensure GenAI aligns with societal values rather than pursuing purely mathematical efficiency. In practice, this means embedding human judgment into GenAI decision-making processes, particularly when outcomes affect individuals, communities or ethical norms.

Social perception is another critical lever. Humans are deeply attuned to subtle cues: tone, body language and the wider social context that GenAI struggles to interpret accurately. In workplaces increasingly augmented by GenAI, the ability to read and respond to these cues allows humans to maintain influence and trust within teams. This social intelligence also aids in anticipating GenAI misuse: understanding how people are likely to interact with GenAI systems can help identify potential risks before they escalate.

Human-AI teaming illustrates this interplay. Effective collaboration between humans and AI requires recognising the strengths of each. GenAI excels at computing, optimising and large-scale, repetitive tasks, but it lacks contextual understanding, empathy and ethical reasoning. Humans, conversely, are capable of nuanced judgment, moral reasoning and social insight. Recognising this creates opportunities: thriving in an AI-driven world is not about out-computing machines, it’s about out-humaning them. Designing GenAI with this mindset allows technology to amplify human strengths instead of supplanting them.

Ultimately, the question of where and how to use GenAI is a human question. Technology is neutral; it’s our social, moral and psychological frameworks that determine its impact. Understanding our own moral frameworks, social perceptions and decision-making biases may well be the greatest tool for mastering the machines we create.

At the School of Psychology at the University of Aberdeen, these questions are central to how we approach teaching and learning. Across our on‑campus postgraduate programmes, students have the opportunity to explore how human cognition, ethics and social behaviour shape – and are shaped by – emerging technologies such as GenAI. For those looking to engage more flexibly with this evolving field, our online short course The Psychology of Human–AI Interaction offers an opportunity to explore how psychology can contribute to more responsible, ethical and human‑centred uses of artificial intelligence.

Are you considering postgraduate study? Use our course search to find your perfect postgrad program.

Author’s bio: Dr Peidong Mei, University of Aberdeen

Dr Peidong Mei received her PhD in Psychology from Lancaster University in 2021, where her research explored children’s moral development in dynamic social interactions. Following her doctorate, she worked as a Research Fellow at the University of Exeter and the Alan Turing Institute, investigating the social and ethical dimensions of artificial intelligence in air traffic control. She subsequently served as a Visiting Scholar at the University of Oxford, examining the moral permissibility of AI deployment across different industries. In 2024, she joined the University of Aberdeen as a Lecturer.
