AI and Ethics, Governance & Strategy  - Business in the Community


Foundational Guidance: AI and Ethics, Governance & Strategy 

The ethical implications of AI adoption remain a significant concern for many organisations. To address this challenge, we launched the Responsible AI Lab: a ground-breaking initiative that brought together leaders from business, government, and academia to co-create a comprehensive blueprint for Responsible AI. From this lab, we have established a set of actions that all businesses should focus on, which forms our foundational guidance on AI and ethics, governance & strategy.


Introduction

The value AI can deliver is closely linked to how well it is governed. Without clear ethics and oversight, AI implementation can expose organisations to legal, operational, and reputational risk. Strong governance, grounded in impact assessment, accountability, and cybersecurity, enables organisations to use AI with confidence while meeting regulatory and societal expectations and maintaining public trust. 

Risks and opportunities

In the current context, many organisations still lack a clear view of their AI use, particularly where tools are embedded in third-party systems or adopted informally. This creates shadow AI[1], fragmented ownership, and gaps in control[2]. Bringing AI into existing risk registers and using algorithmic or data protection impact assessments creates a more consistent view of performance, legal exposure, and ESG[3] impacts. This enables leaders to make informed choices about where AI adds value and where risks outweigh benefits. 

As AI increasingly influences decisions that affect people[4], transparency becomes critical. Limited visibility over how systems operate or how decisions are reached can undermine confidence among employees, customers, and regulators. Clear documentation of AI use, accessible explanations, and visible accountability structures make it easier to challenge outputs and address concerns. This clarity supports internal understanding as well as external credibility. 

Alongside this, regulatory expectations already apply, even in the absence of AI-specific legislation[5]. UK data protection, equality, and consumer laws shape how AI systems can be used, particularly where automated decisions affect individuals[6]. Embedding legal and ethical oversight into AI governance reduces exposure to compliance gaps and positions organisations to respond more effectively as regulation evolves. 

Governance also depends on who is involved and how decisions are made. Effective AI governance requires clear senior leadership ownership and active engagement across the organisation and beyond it, bringing together leadership, technical teams, legal, responsible business functions, and external voices to challenge assumptions and shape oversight. Engagement needs to be ongoing, not one-off, with clear feedback loops that allow concerns to surface as AI systems evolve. Clear and inclusive language, alongside investment in AI literacy, enables people to participate meaningfully in governance. 

Cybersecurity risks linked to AI[7] further reinforce the need for a joined-up approach[8][9]. Many incidents stem from organisational practice rather than technical failure, including poor implementation, lack of awareness, and misuse by well-intentioned staff. Treating AI as part of the data and supply chain, and sharing responsibility for security across teams, strengthens resilience and reduces avoidable risk. 

Taken together, these factors shape organisational reputation. AI related reputational risk often arises from small failures that escalate quickly. Organisations that embed clear governance, document decisions, and communicate openly about AI use are better positioned to manage incidents and demonstrate responsibility. This protects brand credibility, strengthens public trust in how AI is used, and reinforces organisational legitimacy as adoption scales. 

As AI systems evolve from decision support to direct execution within business processes, including increasingly autonomous or agentic systems that can sequence tasks and initiate actions, governance must evolve accordingly. “Human-in-the-loop” approaches provide important safeguards. However, reviewing every output is neither practical nor proportionate in complex, real-time environments. Responsible oversight depends on clearly defined limits of authority, embedded controls, and auditable processes that allow systems to act within authorised boundaries and escalate when risks are material. Human accountability remains central, with intervention focused where judgement is required rather than embedded in every step. Strong system design, not additional review layers, is what enables AI to operate safely, transparently, and at scale.  
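The distinction between acting within authorised boundaries and escalating when risks are material can be sketched in a few lines of code. This is an illustrative sketch only: the action names, risk scores, and thresholds are hypothetical, and a real deployment would draw them from the organisation's own impact assessments and audit requirements.

```python
# Illustrative sketch of a "limits of authority" check for an agentic system.
# All names and thresholds are hypothetical, not part of any specific framework.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    name: str
    risk_score: float   # 0.0 (negligible) to 1.0 (severe), from a prior assessment
    reversible: bool    # can the action be undone if it proves wrong?

AUTO_APPROVE_THRESHOLD = 0.3  # hypothetical boundary for autonomous execution

def authorise(action: ProposedAction) -> str:
    """Decide whether the system may act, must escalate, or is blocked."""
    if action.risk_score <= AUTO_APPROVE_THRESHOLD and action.reversible:
        return "execute"   # within authorised boundary; still logged for audit
    if action.risk_score < 0.8:
        return "escalate"  # material risk: route to a human approver
    return "block"         # outside any delegated authority

print(authorise(ProposedAction("send_renewal_email", 0.1, True)))      # execute
print(authorise(ProposedAction("adjust_customer_credit", 0.5, False))) # escalate
```

The sketch reflects the design choice the paragraph describes: human intervention is focused where judgement is required (the escalate branch) rather than embedded as a review layer on every step.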

Why does this matter for your business?

If AI is not governed transparently and ethically, it could expose your organisation to significant legal, operational, and reputational risk. Poor visibility over AI use and unclear accountability could undermine trust among your employees, customers, and the public, particularly where systems affect people’s rights or opportunities. Without strong governance, small failures can escalate quickly, damaging your credibility and stakeholder confidence. Over time, weak oversight could limit your ability to scale AI responsibly and reap the benefits of innovation.  

Actions by maturity level

Adopting

For organisations beginning to use AI, the priority should be reducing unmanaged risk, gaining basic visibility, and avoiding early failures that could undermine trust. 

  • Map and produce an inventory of AI use across the organisation — to gain basic visibility of where AI is used, including shadow AI, third-party tools, and vendor systems[15]. 
  • Introduce AI risk registers and basic accountability structures — to record key risks associated with AI use and clarify ownership. 
  • Publish basic documentation of AI use and principles — to make AI use visible and set expectations for teams. 
  • Implement basic data protection policies — to ensure lawful and secure handling of data. 

Embedding

For organisations seeking to move from ad hoc controls to consistent, organisation-wide governance as AI use expands.

  • Establish cross-functional governance committees with defined senior leadership ownership — to define clear roles (e.g. AI Ethics Officer), responsibilities, and decision-making processes for AI oversight and incidents. 
  • Promote a culture of responsible AI through training and clear reporting channels — to ensure staff understand risks, expectations, and their role in responsible use. 
  • Integrate AI governance into existing risk, compliance, and project management frameworks — to avoid parallel processes, assess shadow AI use, and embed responsible AI into day-to-day decision making.  

Leading

For organisations using AI at scale, where governance is needed to strengthen public trust, performance, and accountability over time. 

  • Develop and publish responsible AI principles — to clarify decision-making standards and demonstrate accountability to employees, customers, and the public. 
  • Use AI dashboards to communicate real-time impact — to monitor performance, risks, and impacts as systems evolve. 
  • Integrate cybersecurity into AI governance frameworks — to manage AI risks alongside wider data and technology security risks. 
  • Embed continuous stakeholder feedback and challenge into AI governance and review processes — to identify emerging issues and adapt governance as AI systems evolve. 
  • Define autonomy boundaries for advanced AI systems (including agentic AI) — to clarify which actions may be taken independently, which require human approval, and how exceptions are escalated and reviewed. 
  • Assess AI-related risks across critical suppliers and third-party systems — to extend governance beyond internal operations and ensure vendor AI use aligns with organisational standards for security, fairness, and accountability. 
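A dashboard of the kind described above is, at its simplest, an aggregation over the AI inventory. The sketch below is illustrative only, with hypothetical systems and risk categories:

```python
# Illustrative sketch: aggregating an AI inventory into the kind of summary
# a governance dashboard might show. Systems and categories are hypothetical.
from collections import Counter

# (system, risk_level, human_approval_required) — hypothetical entries
systems = [
    ("demand forecaster", "low", False),
    ("credit scorer", "high", True),
    ("support chatbot", "medium", True),
    ("agentic scheduler", "high", True),
]

by_risk = Counter(level for _, level, _ in systems)
needing_approval = sum(1 for _, _, approval in systems if approval)

print(dict(by_risk))     # {'low': 1, 'high': 2, 'medium': 1}
print(needing_approval)  # 3
```

In practice these figures would be fed from live monitoring rather than a static list, and broken down further by business unit, supplier, and incident history.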

Transforming

For organisations aiming to influence wider practice, shape standards, or address risks that cannot be managed by individual organisations alone. 

  • Create a “College of Experts” for oversight — to provide independent challenge and informed scrutiny of AI use. 
  • Advocate for industry-wide transparency standards — to improve consistency, accountability and public trust beyond the organisation. 
  • Lead public-private partnerships on AI safety — to address shared and systemic risks that require collaboration. 
  • Advocate for responsible AI practices across supply chains and vendor ecosystems — to extend accountability, autonomy standards, and auditability beyond direct organisational control. 

Case studies

Adopting

We have focused on case studies and examples from the Embedding stage onwards, to inspire businesses and showcase what can be done at more advanced stages of maturity. 

Embedding

The Department for Business and Trade introduced a governance process requiring teams to submit AI tool requests for review. This created visibility across departments and ensured alignment with existing data protection and cybersecurity standards. By embedding AI oversight into established governance processes rather than creating parallel systems, the department clarified ownership, reduced unmanaged risk, and supported more consistent decision making across teams using AI tools.[16]

Leading

Unilever implemented a structured, multi-stage AI governance model to scale AI use responsibly across its global operations. The approach included auditing hundreds of AI systems, aligning them with responsible AI principles, and embedding governance into business processes. This enabled faster deployment while reducing risk and increasing consistency across regions and functions. Governance was positioned as an enabler of innovation rather than a barrier.[17]

Microsoft developed and published ethical AI principles and operationalised them through internal governance committees and assessment tools. AI systems are reviewed prior to deployment, with transparency and accountability built into decision-making. By clearly communicating how AI is governed and where responsibility sits, Microsoft strengthened trust with customers, partners, and regulators while embedding responsible practice at scale.[18]

Transforming

Salesforce established an Office of Ethical and Humane Use of Technology to embed ethical oversight into product development. The office works with employees, customers, and external stakeholders to co-create guidance on AI use across products and services. This approach extends governance beyond internal compliance, shaping expectations across users and partners and influencing wider norms around responsible AI adoption.[19]

Deloitte has developed a structured Trustworthy AI framework integrating ethics, risk management, regulatory alignment, and technical assurance across the AI lifecycle. While public information primarily describes its advisory and client-facing approach, the framework contributes to shaping expectations for responsible AI governance across sectors. Through publishing guidance, advising businesses and public bodies, and aligning with emerging regulatory standards, Deloitte plays a role in influencing how organisations design and implement AI governance.[20]

Endnotes

1. Shadow AI refers to the use of artificial intelligence tools or systems by employees without formal approval, oversight, or governance processes.

2. Microsoft, 2025. Rise in ‘Shadow AI’ tools raising security concerns for UK organisations.

3. ESG stands for Environmental, Social and Governance. It provides a framework for a business to meet higher non-financial standards for society and the environment, whilst increasing transparency and accountability (British Business Bank, 2025).

4. OECD, 2025. Governing with Artificial Intelligence: The State of Play and Way Forward in Core Government Functions.

5. UK Government, 2023. AI Regulation: A Pro-Innovation Approach.

6. ICO, 2025. Rights related to automated decision making including profiling.

7. Carlini, N. et al., 2020. Extracting Training Data from Large Language Models.

8. ICO, 2023. Guidance on AI and data protection.

9. World Economic Forum, 2025. Artificial Intelligence and Cybersecurity: Balancing Risks and Rewards.

15. Shadow AI refers to the use of artificial intelligence tools or systems by employees without formal approval, oversight, or governance processes. 

16. Department for Business and Trade, 2024. How our AI governance framework is enabling responsible use of AI.

17. Holistic AI, 2025. Unilever’s AI success story: accelerating transformation with Holistic AI governance.

18. Microsoft, 2025. Responsible AI: Ethical policies and practices.

19. Salesforce, 2025. Ethical and Humane Use.

20. Deloitte UK. Trustworthy AI.

Explore the Responsible AI deep dives

AI and Employment & Skills

Building AI literacy, confidence and leadership capability to support fair workforce transitions. 

AI and Diversity & Inclusion

Preventing bias, widening access and ensuring AI supports inclusive workplaces. 

AI and Health & Wellbeing

Protecting autonomy, setting healthy digital boundaries and supporting mental wellbeing. 

AI and the Environment

Reducing AI’s environmental footprint while using the technology to support climate and nature goals. 

Thank you to our sponsors and contributors

We would like to thank Verizon and Deloitte for sponsoring the Responsible AI framework. We are also grateful to all the organisations, members and academic partners for their generous contributions, insights and expertise, which have meaningfully shaped the development of this framework, including BITC members, Verizon Business, Deloitte, Grant Thornton, Pinsent Masons and Shoosmiths; Dr Luca Arnaboldi, Dr Mehreen Ashraf, Emre Kazim, Dr Felicia Liu, Zhuang Ma, Roberta Pierfederici and Dr Daniel Wheatley; and Allwyn UK, Cancer Research UK, Good Things Foundation, Macmillan Cancer Support and UKAI.