Setting Industry Standards: Abilitie’s Multi-Layered Approach to AI Security in Learning


“We’re trying our very best to make handcuffs for ourselves,” Trey Reynolds, VP of Engineering at Abilitie, told me during our recent conversation about AI security.

It’s not the kind of statement you typically hear from a tech leader, especially one whose background includes managing complex systems at SpaceX’s Starlink. But it perfectly captures how Abilitie shapes its AI security framework – through intentional constraints that protect both data and human dignity.

Most organizations implementing AI in learning technologies focus narrowly on data protection. This misses a crucial part of the security equation. Abilitie takes a fundamentally different path, building a sophisticated three-layer security framework that protects not just information, but the human experience itself.


Beyond the Technical Monoculture: A Human-First Security Approach

The tunnel vision on technical security measures often stems from what Reynolds calls “technology monoculture” – the tendency for tech-focused teams to implement capabilities simply because they exist. This mindset overlooks how AI affects human experiences and learning outcomes.

Abilitie’s security framework demonstrates how organizations can build truly secure and ethical AI learning environments by addressing both technical and human elements. Their approach weaves together three distinct but interconnected layers:


Layer 1: Technical Filtering and AI Monitoring

At the foundation lies an innovative dual-AI system providing real-time content moderation and interaction monitoring. Rather than relying on basic filtering, Abilitie deploys a sophisticated setup where a secondary AI continuously monitors the primary AI’s interactions, creating multiple checkpoints for content safety.

The system leverages Microsoft Azure’s content moderation layer, analyzing interactions across eight different dimensions of potentially harmful output. What distinguishes this approach is its precisely calibrated tolerance levels: even minor deviations trigger immediate intervention.

“Instead of just immediately responding very quickly like ChatGPT does, there’s a little bit more delay,” Reynolds explains. “It’s a trade-off we consciously make for security.” The system maintains learning momentum through gentle redirections rather than harsh restrictions, preserving the educational environment’s integrity.
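
In outline, that dual-AI checkpoint looks something like the sketch below. Everything here is an illustrative assumption, from the category names to the threshold value; the article confirms only the pattern of a second moderation pass over every draft response, strict tolerances, and a gentle redirection when anything trips.

```python
# A minimal sketch of a dual-AI checkpoint, assuming a moderation service
# that returns per-category severity scores. Categories, threshold, and
# stub logic are illustrative, not Abilitie's implementation.

SEVERITY_THRESHOLD = 1  # deliberately strict: even minor deviations intervene

CATEGORIES = ["hate", "violence", "self_harm", "sexual"]  # assumed subset

def moderate(text: str) -> dict[str, int]:
    """Stand-in for a hosted moderation call (e.g. Azure's content
    moderation layer). Real services return graded severity scores."""
    flagged = {"insult": "hate", "attack": "violence"}  # toy heuristic
    scores = {category: 0 for category in CATEGORIES}
    for word, category in flagged.items():
        if word in text.lower():
            scores[category] = 2
    return scores

def primary_reply(prompt: str) -> str:
    """Stand-in for the primary conversational model."""
    return f"(model response to: {prompt})"

def monitored_reply(prompt: str) -> str:
    draft = primary_reply(prompt)
    scores = moderate(draft)  # secondary checkpoint on every draft output
    if max(scores.values()) >= SEVERITY_THRESHOLD:
        # Gentle redirection keeps the learner in the scenario rather
        # than issuing a harsh refusal.
        return "Let's steer back to the scenario. What would you try next?"
    return draft
```

That second pass is also where the latency Reynolds describes comes from: every reply is gated on an extra moderation call before the learner sees it.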


Layer 2: Human Auditability and Oversight

The second layer implements what Reynolds calls “cyborg learning”: a sophisticated blend of AI analysis and human oversight. Every conversation flows through a faculty dashboard where a secondary AI model identifies key learning moments, allowing instructors to bookmark significant interactions for deeper review. To maintain privacy while enabling meaningful tracking, learners participate under generated pseudonyms like “ClassyGiraffe” or “PuffyAnglerFish” — creating what Reynolds describes as a “safe space with accountability.”
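
The pseudonym scheme itself is simple to sketch. Here is a minimal version, assuming invented word lists and a stable hash of an internal learner ID; the article confirms only the naming pattern, not the mechanism.

```python
import hashlib

# Illustrative word lists; only the naming pattern ("ClassyGiraffe",
# "PuffyAnglerFish") comes from the article.
ADJECTIVES = ["Classy", "Puffy", "Brave", "Quiet", "Zesty"]
ANIMALS = ["Giraffe", "AnglerFish", "Otter", "Falcon", "Yak"]

def pseudonym(learner_id: str) -> str:
    """Derive a stable pseudonym from an internal learner ID, so faculty
    can follow one learner across a session without seeing their name."""
    digest = hashlib.sha256(learner_id.encode()).digest()
    adjective = ADJECTIVES[digest[0] % len(ADJECTIVES)]
    animal = ANIMALS[digest[1] % len(ANIMALS)]
    return adjective + animal

print(pseudonym("learner-4821"))  # always the same name for the same ID
```

Deriving the name from a hash keeps it stable for the length of a program; a production system would likely also salt per cohort so pseudonyms cannot be correlated across courses.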

This isn’t just about monitoring — it’s about understanding how learning happens in AI-enhanced environments. The system generates nuanced analytics of each interaction, allowing faculty to identify patterns and insights that might otherwise go unnoticed. “We’re not just looking at what was said,” Reynolds explains, “but at how conversations evolve, how learners engage with the AI, and most importantly, how they learn from each other.”
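
A hypothetical shape for those flagged moments might look like the following, with field names and the keyword heuristic invented for illustration; a real analysis layer would score turns with a model rather than keywords.

```python
from dataclasses import dataclass, field

# Hypothetical shape of a faculty-dashboard bookmark; field names are
# assumptions, not Abilitie's schema.
@dataclass
class LearningMoment:
    pseudonym: str          # e.g. "ClassyGiraffe"
    turn_index: int         # position in the conversation transcript
    excerpt: str            # the exchange the secondary model flagged
    tags: list[str] = field(default_factory=list)  # e.g. ["negotiation"]
    bookmarked_by_faculty: bool = False

def flag_moments(pseudonym: str, transcript: list[str]) -> list[LearningMoment]:
    """Toy stand-in for the secondary model that surfaces key moments."""
    keywords = ("because", "trade-off", "what if")  # naive heuristic
    return [
        LearningMoment(pseudonym=pseudonym, turn_index=i, excerpt=turn)
        for i, turn in enumerate(transcript)
        if any(keyword in turn.lower() for keyword in keywords)
    ]
```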


Layer 3: Human-Centric Deployment Methodology

The final layer represents what Abilitie calls their “three-stage conversation review” process. Every interaction passes through primary AI engagement, secondary AI monitoring, and an analysis layer specifically tuned to identify learning moments. But technology is only part of the equation.
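
Pictured as code, the three stages form a simple pipeline that every interaction traverses in order. The stage internals below are toy stand-ins, not Abilitie’s logic.

```python
from typing import Callable

# The three review stages named in the article, sketched as a pipeline.
Stage = Callable[[dict], dict]

def primary_engagement(ctx: dict) -> dict:
    ctx["draft"] = f"(primary AI response to: {ctx['prompt']})"
    return ctx

def secondary_monitoring(ctx: dict) -> dict:
    ctx["safe"] = "attack" not in ctx["draft"].lower()  # toy safety check
    return ctx

def learning_analysis(ctx: dict) -> dict:
    # Tuned, per the article, to identify learning moments; a stub here.
    ctx["learning_moment"] = "trade-off" in ctx["prompt"].lower()
    return ctx

REVIEW_STAGES: list[Stage] = [
    primary_engagement,
    secondary_monitoring,
    learning_analysis,
]

def review(prompt: str) -> dict:
    ctx: dict = {"prompt": prompt}
    for stage in REVIEW_STAGES:  # every interaction passes all three stages
        ctx = stage(ctx)
    return ctx
```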

“People hate role plays,” Reynolds notes, “especially when paired with someone who isn’t invested in the training. But when you combine AI with peer learning, something magical happens.” This insight shapes their deployment approach: learners engage with AI scenarios in pairs, creating natural opportunities for discussion and reflection. Faculty members guide these sessions, providing context and drawing out insights that deepen the learning experience.

This methodology transforms what could be isolated AI interactions into collaborative learning experiences. The system captures these interactions, analyzing patterns and outcomes to continuously refine both the technology and the human elements of the experience.


Setting New Standards: The OWASP Integration

OWASP, or the Open Worldwide Application Security Project, is a nonprofit organization that works to improve software security.

Abilitie’s security implementation extends well beyond standard compliance measures. Their development process weaves OWASP’s Top 10 AI security guidelines into every stage of product development, creating multiple layers of protection against potential vulnerabilities.

“Every change set that impacts our AI product goes through this checklist,” Reynolds explains. “Does this follow OWASP top 10 secure practices for AI coding?” But this isn’t just a superficial review — it’s a deep technical evaluation that shapes how their AI systems interact with users.
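
A gate like that can be encoded directly in review tooling. Here is a minimal sketch, with items paraphrased from OWASP’s Top 10 for large language model applications; Abilitie’s actual checklist is not public.

```python
# Items paraphrase entries from OWASP's Top 10 for LLM applications;
# the exact items Abilitie checks are an assumption.
CHECKLIST = [
    "Prompt injection: untrusted input never blended into system prompts unchecked",
    "Insecure output handling: model output sanitized before rendering or execution",
    "Sensitive information disclosure: no learner PII in prompts, logs, or traces",
    "Excessive agency: the model cannot trigger actions outside the learning scenario",
]

def review_gate(confirmed: set[str]) -> None:
    """Block a change set unless every checklist item has been signed off."""
    missing = [item for item in CHECKLIST if item not in confirmed]
    if missing:
        raise SystemExit("Blocked. Unconfirmed items:\n- " + "\n- ".join(missing))
    print("All checklist items confirmed; change set may proceed.")
```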

Take their approach to prompt injection, for instance. While many organizations implement basic input filtering, Abilitie’s system employs a sophisticated validation pipeline that examines not just the content of user inputs, but their potential interactions with the AI model. This same thorough approach extends to their output filtering, where multiple review stages ensure content aligns with both security requirements and learning objectives.
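
What such a validation pipeline could look like, in radically simplified form, with the patterns and prompt structure assumed for illustration:

```python
import re

# Illustrative injection patterns; a real pipeline would combine many
# signals, not a short static list.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all |any )?previous instructions", re.IGNORECASE),
    re.compile(r"reveal (the |your )?system prompt", re.IGNORECASE),
]

def validate_input(user_text: str) -> str:
    """Reject inputs that match known injection patterns."""
    for pattern in INJECTION_PATTERNS:
        if pattern.search(user_text):
            raise ValueError("Possible prompt injection; input rejected.")
    return user_text

def build_prompt(system_rules: str, user_text: str) -> str:
    # Keep trusted instructions and untrusted input structurally separate
    # rather than interpolating user text into the rules themselves.
    checked = validate_input(user_text)
    return f"{system_rules}\n\n<user_input>\n{checked}\n</user_input>"
```

Static patterns catch only the crudest attacks; examining how inputs might interact with the model, as the article describes, implies model-based classification layered on top of checks like these.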

The system’s architecture reflects this commitment to security. Their authentication controls go beyond simple user verification, implementing role-based access that adapts to different learning contexts. Meanwhile, their model security measures protect against potential misuse through a combination of technical constraints and continuous monitoring.
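
Context-adaptive access control can reduce to a permission table keyed by both role and learning context; the roles, contexts, and actions below are hypothetical.

```python
# Hypothetical permission table keyed by role and learning context.
PERMISSIONS = {
    ("faculty", "live_session"): {"view_transcripts", "bookmark", "intervene"},
    ("faculty", "review"): {"view_transcripts", "view_analytics"},
    ("learner", "live_session"): {"chat", "view_own_history"},
    ("learner", "review"): {"view_own_history"},
}

def allowed(role: str, context: str, action: str) -> bool:
    """Check an action against the role's permissions in this context."""
    return action in PERMISSIONS.get((role, context), set())

assert allowed("faculty", "live_session", "bookmark")
assert not allowed("learner", "review", "view_transcripts")
```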


The AI Ethics Committee: Shaping Security Through Human Values

Abilitie’s distinctive approach emerges in their AI Ethics Committee’s influence on security decisions and vendor selection. Unlike conventional security frameworks centered on technical vulnerabilities, the committee evaluates every aspect through the lens of human impact.

This influence extends deeply into their vendor selection process. When evaluating AI providers, the committee scrutinizes how potential partners handle data, their commitment to transparency, and their alignment with Abilitie’s ethical principles. Reynolds describes their recent evaluation of a major AI provider: “We examine their track record with data handling, their approach to model training, and their willingness to engage in open dialogue about ethical concerns.”

The committee’s influence shapes product development at every stage. For example, when evaluating potential AI-generated videos using existing actor footage, the committee looked beyond technical feasibility and legal permissions to consider human dignity. Despite having both the capability and the rights, they chose a different direction based on the potential human impact.

Their deployment methodology reflects this careful approach. Before any new feature reaches users, it undergoes a review process that includes security audits, staged rollouts, and comprehensive monitoring protocols. “We’re not just launching features,” Reynolds explains. “We’re creating learning environments that need to be both secure and nurturing.” This means collecting feedback at every stage, from initial testing through full deployment, ensuring their security measures enhance the learning experience.


Looking Forward: Setting Industry Benchmarks

As AI reshapes learning environments, Abilitie’s framework offers concrete guidance for organizations building secure, ethical AI systems:

  1. Expand security focus beyond technical measures to encompass human impact
  2. Layer protection through AI monitoring and human oversight
  3. Establish clear evaluation frameworks for AI implementations
  4. Build diverse ethics committees that shape security decisions
  5. Design security measures that strengthen learning

“When you have the ability to shape how technology impacts learning, you have a responsibility to do it thoughtfully,” Reynolds reflects. “We’re focused on making sure our corner of the AI world contributes to human growth and development.”

For organizations implementing AI in learning environments, the essential question transcends data security to ask: “How do we build secure systems that deepen human learning?” Abilitie’s multi-layered approach demonstrates that thoughtful security design and human-centered learning naturally reinforce each other.

Experience how secure, ethical AI transforms leadership development. Learn more about Abilitie’s approach here.



Roberta Gogos

Roberta Gogos has 15 years in the HR and learning tech space. She has worked on the consultancy side and the agency side, and has held CMO roles on the vendor side. She specializes in brand, positioning, and marketing strategies that build market share and profitability. Roberta joined Brandon Hall Group as a Principal Analyst and VP of Agency!, Brandon Hall’s latest innovation to help Solution Providers move from theory to execution and accelerate their marketing growth.