Jayne Ruff | Apr 08, 2026
A Relationship in Progress: Human and AI Coaching
AI is reshaping the coaching landscape. This post explores what we're learning about the developing relationship between human and AI coaching, and what it means for practitioners, organisations, and coaching clients.
Leadership | Organisational Development

The arrival of AI in coaching has prompted a predictable debate in the field. On one side, enthusiasm: AI is scalable, always available, affordable, and, for certain tasks, surprisingly capable. On the other, scepticism: coaching is fundamentally relational, and no algorithm can replicate what happens in a genuine human interaction. Both instincts, it turns out, contain important truth. But neither on its own gives us a useful framework for practice.

What we actually need is something more nuanced: a clear-eyed understanding of where AI adds real value, where human coaching is irreplaceable, and, crucially, how the two can work well together. That’s what the emerging evidence is beginning to offer. This post explores some of that evidence and considers what it means for practitioners, and for those engaging coaching services.

 

What AI Brings to Coaching

There's a growing body of research exploring the value-add of AI in coaching. Comparing two equivalent longitudinal randomised controlled trials (RCTs), Terblanche et al. (2022) found something striking: an AI chatbot designed rigorously around goal-theory principles matched human coaches in supporting goal attainment over a ten-month period. The researchers suggest that AI’s consistency offers a potential advantage; it never forgets to check in on progress, never deviates from the programmed goal theory, and is available around the clock. Human coaches, by contrast, are more flexible and adaptive, but also more variable in execution.

The authors are careful not to overstate their findings, noting that “AI’s lack of empathy and emotional intelligence make human coaches irreplicable”, and the sample, drawn entirely from undergraduate students, limits how far these results travel into professional coaching contexts. But the study does illuminate something important: for narrow, structured tasks, a well-designed AI tool can perform credibly, and its mechanistic approach may sometimes be a feature rather than a limitation.

In a later conceptual paper, Terblanche (2024) maps out four legitimate applications of AI in organisational coaching: coach emulation for structured, bounded tasks; AI support tools that extend the coaching effect between sessions; AI-assisted coach education and skill development; and data analysis across coaching programmes at organisational scale. The last is particularly interesting: it asks to what extent AI could surface patterns across an entire organisation's coaching engagements, something far more challenging for human coaches working independently, while upholding important ethical considerations such as confidentiality.

A systematic review by Passmore et al. (2025) similarly found that AI coaches designed for specific, well-defined purposes, such as goal attainment, physical activity, and resilience development, can be genuinely effective and are generally well-accepted by users, who particularly valued the non-judgemental environment and 24/7 accessibility. Research by Xie and Ostrowski (2025) also showed that a GPT-4 model carefully designed around coaching frameworks significantly outperformed generic AI tools on empathy and encouragement, a reminder that design quality matters enormously.

One area where the case for AI augmentation is especially compelling is the space between coaching sessions. A systematic review by Wang et al. (2025) found that inter-sessional activities (i.e. the structured tasks carried out between coaching conversations) have a strong evidence base in counselling and therapy, yet remain surprisingly underused and under-researched in coaching. The authors suggest that AI tools are well-placed to address this gap: providing 24/7 reinforcement, just-in-time prompts, automated goal-tracking, and personalised support between coaching sessions. This points to a genuinely additive role for AI, not replacing the coaching relationship, but extending its reach into the space where much of the real behavioural change has to happen.

 

Where Human Coaching Remains Uniquely Valuable

The evidence is equally instructive on where AI coaching falls short. In a three-arm RCT involving 114 senior leaders, de Haan et al. (2026) found that human coaching produced significant improvements in goal attainment, stress reduction, and coaching effectiveness ratings. AI coaching showed no statistically significant improvement over the control group on any primary outcome measure.

Attrition in the AI group was also approximately ten times higher, and four participants actively asked to switch to a human coach. The theoretical frame the authors offer is useful here: a “co-regulation model,” in which outcomes are shaped by the dynamic, mutual influence between coach and coachee. The working alliance was significantly higher in the human coaching group and predicted outcomes independently. The AI group simply didn’t generate the same relational quality, and the outcomes reflected that.

De Haan et al. suggest the difference between human and AI coaching outcomes is likely largest for senior leaders navigating relational complexity, significant transitions, and high-stakes decisions: precisely the contexts where executive coaching is most commonly deployed.

“Coaching outcomes are optimised through mutual, dynamic influence between coach and coachee - a relational dynamic that appears largely absent in current AI coaching interactions.” - de Haan et al. (2026)

 

Using a pragmatist framework, Bachkirova and Kemp (2025) identified six essential characteristics of organisational coaching: joint inquiry, sensemaking with a focus on action, value-based ethical orientation, contextual sensitivity, trust-based relationship, and contracting. They argue that AI coaching cannot meet any of these in principle, not merely at this stage of development. Their argument is not that AI will never improve. It is that some of the defining features of coaching require qualities that AI systems do not and cannot possess: genuine curiosity, lived experience, moral agency, authentic emotional connection, and legal accountability. What AI offers in their place is a simulation: sophisticated, potentially useful, but ontologically different from a human-to-human coaching engagement.

Passmore and Nobes’s (2026) qualitative study adds an important dimension. Thirteen experienced coaches across seven countries described forms of knowing that were deeply embodied, accumulated through career experience, and shaped by an ongoing developmental arc, including coaching supervision, reflection, and hard-won ego-regulation. They described using experience not to give advice, but to sharpen insight, accelerate rapport, and deploy calibrated self-disclosure as a relational tool. This is precisely the kind of knowledge that AI can synthesise linguistically but cannot hold or exercise in the way a seasoned human practitioner does.

“The coaches described forms of knowing – embodied, emotionally resonant, accumulated through lived career experience – that differ qualitatively from what AI can generate.” - Passmore & Nobes (2026)

 

What This Means for the Human-AI Coaching Partnership

So what does an evidence-informed human-AI partnership in coaching actually look like? While the existing evidence doesn't resolve the debate about AI and coaching, it does offer a clearer map of the territory. Here are five things worth taking from the research, whether you're a practitioner, a coaching client, or an organisation commissioning coaching.

1. Match the approach to the task.

AI performs well on structured, bounded work: goal-tracking, reflection prompts between sessions, progress monitoring, psychometric data collection, and extending the coaching effect after formal programmes end. It is not a like-for-like substitute for the relational, adaptive work that effective human coaching requires, particularly where the presenting issues involve complexity, transition, or the need for deeper emotional intelligence. Knowing the difference matters, especially as AI coaching products become more sophisticated and the marketing around them more confident.

2. Coachee readiness is a stronger predictor of outcomes than modality.

De Haan et al. (2026) found that baseline factors such as hope, self-efficacy, cognitive hardiness, and psychological wellbeing predicted coaching outcomes more strongly than whether coaching was delivered by a human or an AI. For practitioners, this means attending to readiness from the outset. For organisations, it means that how both human and AI-based coaching is introduced, who is selected, and how participants are supported to engage should be given sufficient consideration to promote return on investment and encourage meaningful behavioural change.

3. Ethical practice needs to be built in, not bolted on.

Diller (2024) maps the specific risks of digital and AI coaching in detail: algorithmic bias, data security vulnerabilities, accountability gaps, and the management of emotional disclosure. For anyone engaging an AI coaching product, it is reasonable, and responsible, to ask how data is stored and used, whether the system has been tested for bias, what human oversight exists, and what happens if the coaching surfaces distress. Responsible providers should welcome these questions.

4. AI at scale offers real opportunity, and real responsibility.

Where professional coaching has historically been limited to senior leaders, AI tools create a genuine opportunity to extend structured, goal-focused developmental support to employees at every level. However, Bachkirova and Kemp (2025) caution that if AI coaching becomes the default for junior staff simply because it is cheaper, while human coaching remains reserved for those at the top, the result may be a widening of developmental inequality rather than a closing of it. The question worth asking is whether deployment decisions are being driven by genuine thinking about fit and need, or primarily by cost, and whether the people receiving AI coaching know what they are and aren't getting.

5. The partnership has to be designed, not assumed.

The best outcomes are likely to come from intentional human–AI combinations, where each is deployed for what it does well. This requires practitioners who understand the evidence, organisations willing to think carefully about how coaching is structured, and clients who know enough to ask good questions. The evidence base is still developing, but the field is moving quickly. The most useful thing any of us can do right now, whether as practitioners, organisations, or individuals, is stay curious, stay informed, and stay willing to ask better questions of the technology and how we consciously choose to engage with it.

References

Bachkirova, T., & Kemp, R. (2025). ‘AI coaching’: democratising coaching service or offering an ersatz? Coaching: An International Journal of Theory, Research and Practice, 18(1), 27–45. https://doi.org/10.1080/17521882.2024.2368598
de Haan, E., Terblanche, N., & Nowack, K. (2026). A randomised controlled comparison of the effectiveness of human and AI chatbot coaching with goal attainment, wellbeing and self-efficacy. Human Resource Development International. https://doi.org/10.1080/13678868.2026.2633990
Diller, S. J. (2024). Ethics in digital and AI coaching. Human Resource Development International, 27(4), 584–596. https://doi.org/10.1080/13678868.2024.2315928
Passmore, J., & Nobes, A. (2026). Nothing compares 2 U: Exploring human coach distinctiveness in an age of AI. Organisationsberatung, Supervision, Coaching. https://doi.org/10.1007/s11613-026-00994-x
Passmore, J., Olafsson, B., & Tee, D. (2025). A systematic literature review of artificial intelligence (AI) in coaching. Journal of Work-Applied Management. https://doi.org/10.1108/JWAM-11-2024-0164
Terblanche, N. H. D. (2024). Artificial intelligence (AI) coaching: Redefining people development and organizational performance. The Journal of Applied Behavioral Science, 60(4), 631–638. https://doi.org/10.1177/00218863241283919
Terblanche, N., Molyn, J., de Haan, E., & Nilsson, V. O. (2022). Comparing artificial intelligence and human coaching goal attainment efficacy. PLOS ONE, 17(6), e0270255. https://doi.org/10.1371/journal.pone.0270255
Wang, Q., Passmore, J., Huo, Y., & Mu, Y. (2025). A systematic review of inter-sessional activities in coaching practice: What can we learn from counselling and therapy? European Journal of Training and Development. https://doi.org/10.1108/EJTD-07-2025-0137
Xie, L., & Ostrowski, E. J. (2025). Comparing the effectiveness of LLM-powered coaching with human coaching and GPT conversation. The Journal of Positive Psychology. https://doi.org/10.1080/17439760.2025.2498132

 

Written by Jayne Ruff

Managing Director & Chartered Psychologist

To find out more about how ChangingPoint can help you align the minds to transform your business, get in touch.
