
Closing the Gap: From Policies and Procedures to Practice

Overcoming the policy/procedure-practice paradox requires focus and commitment

By Dr. Tommy Cooke

Sept 24, 2024

Key Points:


  • Having AI policies doesn't automatically ensure ethical AI practices


  • Regular audits and cross-functional teams are crucial for aligning AI with ethical standards


  • Explainability and stakeholder engagement are key to responsible AI implementation





Organizations pride themselves on having comprehensive AI policies and procedures. These documents demonstrate care and diligence, and they signal to staff and stakeholders that you take AI use and employee behaviour seriously as part of your business plan.


However, AI policies and procedures don’t guarantee ethical AI. Even when multiple policies reference AI, there's often a gap between policy and procedures on the one hand, and practice on the other. This gap is a problem because it can catalyze unintended consequences and ethical breaches that undermine the very principles those policies are meant to uphold.


The Policy/Procedure-Practice Paradox


This problem is a paradox, and it is common in virtually every industry using AI. By paradox we mean a seemingly contradictory statement that, when investigated and explained, proves to be true. For example, say aloud to yourself, “the beginning is the end”. It sounds absurd, but when you think it through, it makes sense.


This same phenomenon presents itself when we think about “policies and procedures in practice”. Policies and procedures are documents, so how exactly are they practiced? The idea that a document practices anything sounds absurd. But when people read them, those documents guide how AI ought to be used and not used; policies are practiced through the people who follow them.


The policy/procedure-practice paradox is a problem because failing to understand it means failing to address it. And in failing to address it, policies and procedures about AI often lead to broken and misinformed practices. Let’s consider a real-world example:


Despite a company having an anti-bias policy in place, a facial recognition system used in stores across the United States for over eight years exhibited significant bias. The system struggled to accurately identify people with darker skin tones, leading to higher error rates for certain demographics. This occurred because the AI was trained on datasets disproportionately representing lighter skin tones. And so, even well-intentioned policies can fail in practice.
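
To make this failure concrete, here is a minimal sketch of how such a disparity can be measured once a system's predictions are logged. Everything in it is hypothetical (the data, the group labels, and the column names); it simply illustrates the per-group error-rate comparison the example describes:

```python
import pandas as pd

# Hypothetical audit log: one row per recognition attempt, with the
# demographic group recorded for audit purposes only.
log = pd.DataFrame({
    "group":   ["A", "A", "A", "A", "B", "B", "B", "B"],
    "correct": [True, True, True, False, True, False, False, False],
})

# Error rate per group; a large gap between groups is exactly the
# disparity an anti-bias policy is meant to prevent.
error_rates = 1 - log.groupby("group")["correct"].mean()
print(error_rates)  # group A: 0.25, group B: 0.75
```
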


The example above is not isolated. It’s a symptom of a larger issue in AI implementation. While that example was caused by biased training data, there are several other reasons why the policy/procedure-practice paradox exists:


  1. Lack of Explainability: Many AI systems operate as "black boxes," making it difficult to understand their decision-making processes, even with transparency policies in place.


  2. Rigid Rule Adherence: AI systems may strictly follow their programmed rules without understanding the nuanced ethical priorities of an organization.


  3. Complexity of Ethical Standards: Translating abstract ethical concepts into concrete, programmable instructions is a complex task that often leaves room for interpretation and error.


Closing the Gap


To mitigate the paradox, we need to close the gap that often exists between AI policies and procedures on the one hand and AI practice on the other. Here are some strategies to achieve this:


  1. Translate Policies into AI-Specific Guidelines: High-level policy language needs to be converted into actionable steps that can be implemented in AI systems. This translation ensures that AI operates on the same definitions of privacy, fairness, and transparency as the organization. Engage with your AI vendor to discuss how your policies can be integrated into the system's operations. Remember, AI systems often require fine-tuning to align with specific organizational needs.


  2. Conduct Regular Audits: Periodic reviews of AI systems are essential to ensure they're behaving in line with ethical standards. These audits should be thorough and look for potential blind spots, and they're excellent at surfacing issues an organization may have previously missed. Compare your system's training data with the data your organization provides, analyze the differences, and involve your ethics and analytics teams in prioritizing findings for policy amendments (a minimal audit sketch follows this list).


  3. Build a Cross-Functional Ethics Team: Bringing together technology champions, legal experts, and individuals with strong ethical compasses can provide a well-rounded perspective on AI implementation. Ensure this team regularly communicates with your AI vendor, especially during the implementation of new systems. When building this team, diversify it. As academics say, make it multidisciplinary: combine professional specializations when approaching a problem.


  4. Promote Explainability: As the Electronic Frontier Foundation has advocated for years, explainability is crucial when using AI. Why? If an AI system's decisions can't be explained, it becomes difficult for an organization to claim accountability for its actions. Work with your vendor to ensure AI models are interpretable, position the right people to explain system outputs to anyone in your organization, and verify that those outputs align with your founding principles (an interpretability sketch follows this list).


  5. Engage External Stakeholders: As AI ethics expert Kristofer Bouchard recently argued, external perspectives, especially from customers, communities, and marginalized groups, are crucial when using AI. This is especially true when it comes to identifying ethical blind spots. Regularly seek feedback from these groups when evaluating your AI systems. Their insights can be invaluable in uncovering unforeseen ethical implications.
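

For strategy 2, here is a minimal audit sketch. It assumes you can obtain summary demographic statistics about your vendor's training data and that your own data records a comparable field; the category names and the 0.10 threshold are illustrative, not prescriptive:

```python
import pandas as pd

# Hypothetical demographic distributions: share of each skin-tone
# category in the vendor's training data vs. your deployed population.
training = pd.Series({"light": 0.78, "medium": 0.15, "dark": 0.07})
deployed = pd.Series({"light": 0.40, "medium": 0.30, "dark": 0.30})

# Representation gap: negative values mean the group is
# under-represented in training data relative to where you deploy.
report = pd.DataFrame({
    "training": training,
    "deployed": deployed,
    "gap": training - deployed,
})
print(report.sort_values("gap"))

# Flag groups whose gap exceeds a threshold your ethics and analytics
# teams agree on (0.10 here is purely an example value).
flagged = report[report["gap"].abs() > 0.10]
print("Groups to escalate for policy review:", list(flagged.index))
```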
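
For strategy 4, one widely used interpretability technique is permutation importance: shuffle one input feature at a time and measure how much the model's accuracy drops, revealing which inputs the system actually relies on. A minimal sketch with synthetic stand-in data (a real audit would use your own model and named features):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a deployed model and its inputs.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Model-agnostic explanation: how much does accuracy fall when each
# feature is shuffled? Larger drops mean heavier reliance.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")
```

Outputs like these give the "right people" from strategy 4 something concrete to explain to the rest of the organization.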


The Path Forward: Ongoing Oversight and Proactive Management


Closing the gap between AI policy and ethical practice requires keeping it shut. Unfortunately, that’s not as simple as closing a door once and for all: the gap can easily reopen many times during your AI journey. Keeping it closed requires ongoing oversight, regular policy updates, and a commitment to aligning AI behavior with organizational values.


Actively integrate the five strategies above; doing so can significantly reduce the risks associated with AI use. Being proactive not only ensures compliance with ethical standards but also builds trust with stakeholders and positions the organization as a responsible leader in AI adoption.


Remember, in the world of AI, accountability and responsibility are critical. The power of these systems demands continuous vigilance and active management. By committing to this process, organizations can harness the full potential of AI while upholding their ethical principles and societal responsibilities.
