AI is rapidly reshaping how work is designed, how talent is developed, and how organisations make decisions about their people. The question for HR and people functions is no longer whether to engage with AI. It is how to be genuinely present in the conversations that will shape how AI is integrated into ways of working.
Three significant pieces of academic research published in 2024 – spanning ethical frameworks, the practical effects of AI on HR activities, and the symbiotic relationship between AI and human resource development – converge on a key conclusion. Successful AI adoption in organisations is not a technology project with a people communications plan attached. It is a cross-functional endeavour, and HR has a distinctive and valuable contribution to make to it – one that organisations cannot afford to go without.
This post draws on those findings to identify the key AI themes HR should be exploring and to pose the questions that people functions can bring to the wider organisational conversation – with IT, with senior leaders, with line managers, and with employees themselves.
Successful AI adoption requires the deep technical expertise that IT, data science, and engineering teams bring to businesses. But the research is clear that technology implementation alone does not determine whether AI creates value or harm in organisations. What matters equally is how people experience it, respond to it, and are supported through it.
In their paper exploring the ethics of AI use in organisations, Wang and Pashmforoosh (2024) propose that AI’s impact on workplaces is fundamentally a human challenge: it reshapes how people learn, how they are evaluated, what skills become valuable, and how trust is built or eroded. The authors draw on Wilkens’s (2020) characterisation of AI as a ‘double-edged sword’ that’s capable of augmenting individual and organisational development, but also of disrupting workflows, increasing job insecurity, and reducing work engagement when poorly managed. Dima et al. (2024), synthesising 43 peer-reviewed empirical studies from 1996 to 2023 across management, HRM, psychology, and information systems, identify that AI transforms not just tasks but the social and relational fabric of work – changing how people collaborate, how they find meaning in their roles, and what it means to be a capable professional. These questions sit alongside the technical ones, and they are where HR has a genuinely distinctive contribution to make.
Khandelwal et al. (2024) offer a similar perspective: the successful integration of AI in organisations depends on HRD functions being active participants in the process, not at the end of it. Their systematic review of 95 Scopus-indexed studies from 2004 to 2023 demonstrates that organisations reap the greatest benefits from AI when HRD practices are embedded throughout adoption – designing learning pathways, supporting managers through the transition, and ensuring employees are equipped and engaged. Notably, Khandelwal et al. (2024) explicitly cite Wang and Pashmforoosh’s (2024) framework, reinforcing that keeping HRD in the loop is considered essential to developing ethical and effective AI practice.
“The question is no longer whether we should integrate AI into our daily practice, but how we can utilise it intelligently and ethically to our benefit.”
— Wang & Pashmforoosh, 2024, p. 435
QUESTIONS TO EXPLORE
→ Is HR currently part of cross-functional conversations about AI adoption alongside IT, operations, and senior leadership?
→ How can HR make its contribution to AI discussions visible and valued, rather than waiting to be invited?
→ Who is responsible for shaping the employee narrative around AI, and how can HR play a constructive role in that?
Getting the ethics of AI right in organisations requires input from legal, compliance, IT, data governance, and senior leadership – it is not a challenge any single function can navigate alone. HR has a particularly important perspective to bring to these conversations, and the research suggests it should be bringing it more consistently and more confidently.
Wang and Pashmforoosh (2024) propose the S.A.F.E. S.P.A.C.E. framework – nine guiding principles (Safety, Accountability, Fairness, Efficiency, Social Responsibility, Professional Responsibility, Autonomy, Competency, and Equity) – as the ethical boundary within which responsible AI practice should operate. Significantly, this framework integrates the Academy of Human Resource Development’s Standards on Ethics and Integrity (ASEI), which set out six professional principles for HRD practitioners: competence, integrity, professional responsibility, respect for people’s rights and dignity, concern for others’ welfare, and social responsibility. The authors argue that these principles are not merely aspirational – they are operationalised through HRD interventions including training, work redesign, coaching, and performance evaluation. Ethical frameworks, in other words, need HR to make them real.
The ethical risks identified in the research are concrete and well-evidenced: algorithmic bias in recruitment and performance management; erosion of employee privacy; accountability gaps when AI influences decisions that affect people’s careers; and the risk that AI-driven efficiency improvements disproportionately affect lower-skilled workers. Wang and Pashmforoosh (2024) highlight five specific ethical concerns – bias and discrimination, transparency and explainability, privacy and security, accountability and responsibility, and job displacement – each of which has direct implications for HR practice. Dima et al. (2024) add that persistent human distrust of AI in sensitive people-related contexts makes human oversight – and human advocacy – essential to successful implementation.
HR professionals navigate ethics daily. Employment law, equality, data protection, and employee wellbeing are already deeply familiar territory. AI does not create entirely new ethical challenges so much as it intensifies and accelerates existing ones. That existing knowledge and experience is exactly what cross-functional AI governance structures need at the table.
QUESTIONS TO EXPLORE
→ Is HR represented in any cross-functional group overseeing AI governance, procurement, or implementation?
→ Do employees know when AI is involved in decisions that affect them, and is HR helping to shape that transparency?
→ How can HR build sufficient AI literacy to be a credible and constructive partner in ethics (and broader AI-human interaction) conversations and procurement decisions?
The dominant public conversation about AI and skills focuses on upskilling and reskilling – preparing the workforce for jobs that AI will change. This is necessary, but the research suggests it is not the complete picture. Khandelwal et al. (2024) make a compelling case that the skills challenge has multiple layers: employees need new technical competencies to work alongside AI, and they also need relational and cognitive capabilities – empathy, critical thinking, ethical reasoning, psychological flexibility – that AI cannot replicate. Building both requires HR and learning teams to work alongside technical functions in defining what AI-ready capability actually looks like for their specific organisational context, and to conduct wider reviews of development needs so that increasingly important cognitive capabilities are actively developed.
The growing adoption of generative AI tools by people outside work as well as within it has added further urgency to this agenda. Khandelwal et al. (2024) note growing adoption of generative AI in training and development functions, calling for organisations to develop both AI literacy and the capacity to apply these tools ethically and effectively. This is a space where HR’s expertise in learning design and workforce development has direct, practical value.
Dima et al. (2024) also identify that HR professionals themselves will benefit from developing capabilities not yet widespread in the profession: data literacy, the ability to evaluate AI tools critically, and the confidence to contribute to technological change conversations. This is not about HR becoming technical experts. It is about HR being informed enough to ask the right questions, challenge assumptions, and represent the employee perspective with credibility.
There is also a subtler risk worth naming. Khandelwal et al. (2024) draw on Ardichvili’s (2022) research on AI in knowledge-intensive professions to note that adopting AI can erode expertise if organisations are not careful – reducing opportunities for deliberate practice, mentoring relationships, and the kind of progressively challenging work through which deep professional knowledge develops. AI is not automatically a force for development. In the wrong configuration, it can quietly hollow out the very capabilities organisations need. HR is well-placed to keep this concern visible in conversations about how AI is deployed and ensure that the right development opportunities are deployed in the right ways (whether human-led, AI-led or a hybrid approach).
QUESTIONS TO EXPLORE
→ Is HR working with IT and business leaders to define what AI literacy means for different roles across the organisation?
→ Is the learning and development strategy investing in the distinctly human skills that complement AI, as well as the technical skills needed to use it?
→ How are we collectively ensuring that AI tools in training and development build genuine expertise rather than create unhelpful dependency?
AI does not simply automate tasks – it restructures work. Dima et al. (2024) identify work context redesign as one of five primary effects of AI on HR activities, noting that organisations face a genuine challenge in reshaping roles, structures, and cultures in ways that support both performance and human wellbeing. Done thoughtfully, and with the right people involved, AI-augmented work can be more meaningful, more personalised, and more rewarding. Done without adequate consideration of the human experience, it can fragment roles, reduce autonomy, and contribute to what the researchers describe as the dehumanisation of work.
Wang and Pashmforoosh (2024) specifically propose work redesign as one of six HRD interventions for ethical AI practice, arguing that AI-powered work should be structured to foster productive human-machine interaction while preserving human autonomy and the capacity for creative, complex thinking. Khandelwal et al. (2024) reinforce this, citing the importance of organisations planning for an appropriate human-to-AI ratio and using AI-powered solutions to identify skill gaps and redesign roles thoughtfully. These are decisions that benefit from HR working closely with operational managers, IT teams, and employees themselves, rather than any single group deciding in isolation.
There is also a talent dimension that organisations should not overlook. People want work that has meaning, that stretches them, and in which they feel genuine agency with the opportunity for mastery. How AI is deployed will shape whether organisations can credibly say they are offering that, and HR is well-placed to keep that question visible throughout the design process.
QUESTIONS TO EXPLORE
→ How are we, collectively, ensuring that job redesign in an AI-augmented environment maintains meaningful human contribution and autonomy?
→ Are employees being involved in shaping how AI changes their work, and is HR helping to facilitate that dialogue?
→ How might AI change our employee value proposition, and are people and business leaders thinking about this together?
Traditional performance management was designed around human-to-human work. AI is changing that context in ways that make a rethink both timely and worthwhile. Khandelwal et al. (2024) note that AI-powered performance tracking offers the potential for more objective and continuous monitoring, reducing the recency bias and subjective judgements that have long challenged conventional appraisal processes. AI can identify trends in productivity, generate more accurate forecasts, and help managers understand performance patterns that would otherwise remain hidden in data.
But the same research highlights important considerations that HR can help organisations navigate. Khandelwal et al. (2024) emphasise that performance management systems should align with both corporate objectives and employees’ needs, and that the feedback loop between AI-generated insight and human judgement is critical. Performance data collected via AI can feel intrusive to employees; transparency about how it is used is essential to maintaining trust; and there is a genuine risk that optimised metrics end up measuring what is easy to quantify rather than what genuinely matters. Bringing HR into the design of these systems early – in partnership with data and technology teams – is what makes the outcome better.
Wang and Pashmforoosh (2024) make a strong case for coaching and mentoring as more important than ever in the AI era, explicitly proposing both as HRD interventions within their SAFE SPACE framework. These are distinctly human practices that complement what technology can offer: supporting individuals to navigate ethical complexity, develop leadership capability, build resilience, and maintain a growth mindset. This is an area where HR brings deep and longstanding expertise, and where that expertise has clear and demonstrable organisational value.
QUESTIONS TO EXPLORE
→ How is HR contributing to conversations about evolving performance frameworks as AI changes the nature of roles and team structures?
→ Where AI influences performance processes and career development, is HR helping to ensure it is fair, transparent, and understood by employees?
→ How are we making the case for investment in coaching and mentoring as distinctly human development tools that complement, rather than compete with, AI?
The research is consistent and clear. HR and people functions have a distinctive and important contribution to make to their organisations’ AI journeys – not as the sole owner of the agenda, but as an essential voice in a collective conversation that needs multiple perspectives to go well.
There is a version of the future in which HR engages with AI after the important decisions have already been made – brought in to communicate the changes, run the training, and manage the human fallout. And there is another version, one that the evidence strongly supports, in which HR is part of shaping those decisions from the outset: working alongside IT, operations, legal, and senior leaders to ensure that employee wellbeing, ethical considerations, development, and the quality of working life are genuinely factored into developing the human-AI partnership that’s right for the organisational context.
The profession has deep expertise in precisely the areas that determine whether AI adoption succeeds or struggles: capability development, culture, trust, engagement, ethics, and the design of meaningful work. Wang and Pashmforoosh (2024, p. 447) capture this well, urging HR practitioners to ‘build their AI literacy and a solid understanding of the desired outcomes and unintended consequences of the use of AI in the workplace’ – not to become technology specialists, but to be informed, confident, and effective partners in a cross-functional endeavour.
Perhaps the starting question we need to be exploring is: How do we ensure HR’s voice is present and valued in our organisation’s AI journey, and what are we doing today to build the knowledge and confidence to contribute?
REFERENCES
