The Human Factor in AI Ethics: Why Organizations Need to Ask ‘Should We?’ Instead of ‘Can We?’


Would you let AI generate new videos using your employees’ likenesses without their explicit consent?

This wasn’t a hypothetical question when I recently spoke with Trey Reynolds, VP of Engineering at Abilitie. His team faced this exact scenario while considering how to expand AI Cases, the newest product in their management training curriculum. They had existing footage of actors from previous training videos, and the technology existed to generate new AI scenarios with those same faces. Technically, they could do it; their contracts even gave them rights to use the actors’ likenesses.

But Reynolds and his AI ethics team asked a different question: Should they?

The answer was a clear no. As Reynolds explained, they considered how an actor might feel seeing themselves saying something they never actually said – potentially something offensive or inappropriate. Despite having the technical capability and legal rights, they chose not to proceed. This decision exemplifies a fundamental shift in how some organizations are approaching AI implementation: moving beyond what’s technically possible to consider what’s ethically responsible.


Beyond Technical Safeguards: The Human Question

As an analyst at Brandon Hall Group™, I’ve observed that most organizations focus primarily on data security and technical safeguards when implementing AI. As I discussed in my previous article, “Trust by Design: A Conversation About Security, Privacy and the Future of Learning,” these technical concerns are valid, but they often overshadow a more fundamental consideration: how AI impacts human dignity and learning outcomes.

“We will only make things that make the world ultimately a better place,” Reynolds told me, revealing a deeper perspective on ethical AI development. “We’re not going to try to do things that might make you feel diminished as a person.”

This human-first approach stands in stark contrast to the broader industry mindset. While organizations are rapidly adopting AI capabilities, many are doing so without fully considering the broader implications. As Reynolds noted, “The unfortunate reality is that many companies right now are simply adding AI to check a box and show their solution is AI-enabled.”


Bridging Technology and Human Impact

As organizations race to implement AI capabilities, we’re seeing echoes of previous technological revolutions. Our research at Brandon Hall Group™ shows a familiar pattern: the rush to adopt new technology often precedes the careful consideration of its impact. But unlike previous technological shifts, AI’s ability to mimic human interaction raises unprecedented ethical considerations.

“When we started developing AI Cases in the summer of 2023, we sat down and asked ourselves: is there an opportunity here to create something that makes the world a worse place overall?” Reynolds explained. This question reflects a growing awareness in the industry that AI implementation isn’t just a technical challenge – it’s a societal one that demands a fundamentally different approach to development and deployment.


Building Ethics Into the Foundation

What makes Abilitie’s approach distinctive is how they’ve integrated ethical considerations into their development process from the ground up. The company’s AI ethics team, led by engineering, reviews everything from vendor selection to feature development. This isn’t just about risk management — it’s about ensuring their technology enhances, rather than undermines, human experiences.

The composition of their team reflects this philosophy. Reynolds highlighted how they intentionally hire people with diverse educational backgrounds, including humanities and liberal arts, rather than just technical specialists. This diversity helps prevent what he calls “technology monoculture” – the tendency for tech-only teams to implement new capabilities simply because they can.


Ethics in Action: A Multi-Layered Approach

Abilitie’s commitment to ethical AI isn’t just philosophical – it’s built into their technical architecture. Their platform employs a sophisticated multi-layered approach:

  1. Primary AI interaction layer
  2. Secondary AI monitoring for content moderation
  3. Analysis AI looking for specific learning moments
  4. Human audit layer for final oversight

This system demonstrates how technical safeguards can support, rather than replace, human judgment. As Reynolds explained, they’re willing to accept trade-offs, like slightly slower response times, to ensure their AI interactions remain safe and meaningful.

The second layer, built on Microsoft Azure’s AI moderation technology, exemplifies this balance. While it adds processing time to each interaction, it enables precise content control. “Anything that’s even sort of hinting at violence will get flagged,” Reynolds explained. “We’re not delivering a huge slap on the wrist or anything, more just keeping people honest and within the context of the simulation.”
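
To make those four layers concrete, here is a minimal sketch of how such a gated pipeline might be wired together. This is an illustration only, not Abilitie’s actual implementation: the helper functions (generate_reply, moderate, tag_learning_moments, queue_for_human_audit) and the keyword-based moderation check are hypothetical stand-ins for the real primary model, the Azure-based moderation layer, the learning-moment analysis, and the human audit queue.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a multi-layered AI pipeline like the one described
# above. None of these functions reflect Abilitie's real code; they stand in
# for the four layers: generation, moderation, analysis, and human audit.

@dataclass
class Turn:
    learner_input: str
    reply: str = ""
    flagged: bool = False
    learning_moments: list = field(default_factory=list)

def generate_reply(text: str) -> str:
    """Layer 1: primary AI interaction (e.g., a chat-completion call)."""
    return f"[simulated reply to: {text}]"  # placeholder

def moderate(text: str) -> bool:
    """Layer 2: content moderation. In production this could call a managed
    service such as Azure's moderation tools; here, a trivial keyword check."""
    return any(word in text.lower() for word in ("violence", "threat"))

def tag_learning_moments(text: str) -> list:
    """Layer 3: analysis AI looking for coachable moments (stubbed)."""
    return ["active_listening"] if "listen" in text.lower() else []

def queue_for_human_audit(turn: Turn) -> None:
    """Layer 4: human audit. Here we just print; a real system might write
    each turn to a review queue for periodic human oversight."""
    print(f"audit: flagged={turn.flagged} moments={turn.learning_moments}")

def handle_turn(learner_input: str) -> Turn:
    turn = Turn(learner_input=learner_input)
    # Moderate the input before the model ever sees it.
    if moderate(learner_input):
        turn.flagged = True
        turn.reply = "Let's keep this within the context of the simulation."
    else:
        turn.reply = generate_reply(learner_input)
        # Moderate the model's output as well, accepting the extra latency.
        turn.flagged = moderate(turn.reply)
        turn.learning_moments = tag_learning_moments(learner_input)
    queue_for_human_audit(turn)
    return turn

if __name__ == "__main__":
    handle_turn("I want to practice how to listen to a frustrated report.")
```

The design choice worth noting is that moderation wraps the generation step on both sides of the exchange, which is where the extra processing time Reynolds mentions comes from: every interaction pays a small latency cost so that nothing unchecked reaches the learner.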


When Values Drive Value

As our research at Brandon Hall Group™ shows, organizations are increasingly recognizing the need for more sophisticated approaches to AI implementation. Progressive organizations are moving beyond viewing AI as just a technical tool, instead seeing it as a catalyst for reimagining how we approach learning and development.

The key lesson from Abilitie’s approach is that ethical considerations shouldn’t be viewed as constraints but as foundations for building better, more human-centered technology. This emphasis on transparency and ethics resonates in the market. Reynolds recently received feedback from a global top-five law firm after Abilitie completed the firm’s AI security questionnaire. The firm specifically praised Abilitie’s commitment to AI transparency and responsible development, demonstrating that this approach isn’t just ethically sound – it’s good business.


The Path Forward

Reynolds suggests starting with a simple but powerful step: create an AI ethics team and charter document. This provides a framework for evaluating AI implementations against your organizational values and mission.

But more importantly, it requires asking the crucial question before any AI implementation: not just “can we?” but “should we?” The answer might sometimes mean saying no to technically feasible solutions, as it did with the AI video generation, but it will lead to more sustainable and human-centered innovation in the long run.

As AI transforms learning and development, organizations that build their strategies on ethical foundations will be better positioned to create truly transformative learning experiences. After all, as Reynolds reminded me, the goal isn’t to implement AI for its own sake, but to ignite a fire in learners and help develop more prepared leaders.


Roberta Gogos


Roberta Gogos has 15 years in the HR and learning tech space. She has worked on the consultancy side and the agency side, and has held CMO roles on the vendor side. She specializes in brand, positioning, and marketing strategies that build market share and profitability. Roberta joined Brandon Hall Group as a Principal Analyst and VP of Agency! – Brandon Hall’s latest innovation to help Solution Providers transition from theory to execution and accelerate their marketing and growth.