
Responsible AI Guide: A comprehensive road map to an AI governance framework


Just as the World Wide Web heralded a fundamental change in society and business, we have entered another equally transformative era—artificial intelligence (AI).

AI—whether it’s generative AI (GenAI), predictive analytics, machine vision, or a different application—has emerged as a new frontier with boundless opportunities, empowering businesses with intelligence-driven strategies, operational efficiencies, predictive capabilities, and unprecedented personalization and performance.

AI’s impact on the business landscape is profound and growing.

In the first quarter of 2024, approximately one in seven Canadian businesses was using or planning to use GenAI, according to Statistics Canada’s report, Businesses’ Use of Generative AI.

The report highlighted key reasons for GenAI adoption:

  • Accelerating creative content development (68.5%)
  • Increasing automation without cutting jobs or hours (46.1%)
  • Improving client or customer experience (37.5%)
  • Achieving cost efficiencies (35.1%)

Despite its advantages, AI faces intense scrutiny, with legitimate criticism of its potential for error, bias, privacy violations, and ethical lapses. As AI permeates everyday products, services, and interactions, the practice of responsible AI gains profound importance.

What is responsible AI?

Responsible AI involves ensuring that AI models are trustworthy, safe, ethical, and explainable. Organizations involved in the design, development, deployment, and use of AI systems should adhere to these principles through every stage of a model’s lifetime.

John Weigelt, National Technology Officer and Responsible AI Lead at Microsoft Canada, calls responsible AI a “market differentiator” that can drive trust in a business.

“People don't use tools and services they don't trust,” he says. “When they use tools like artificial intelligence, Canadians want to know that you've gone through a structured process to evaluate how these tools are making decisions and that you’re providing a way that they can have redress.”

This Responsible AI Guide aims to help you understand the risks and repercussions of not investing in responsible AI and gain practical insights to create an AI governance framework that is responsible, human-centred, and effective.

What are the risks, repercussions, and security implications of AI?

Data is the foundation of all AI systems, and that reliance inherently brings a certain degree of risk.

AI systems can inherit biases from their training data. This can lead to unfair or discriminatory outcomes that further perpetuate bias or stereotypical representations of certain subgroups, affecting decisions in areas like hiring, lending, and law enforcement.

“People have biases; therefore, AI has biases. It can amplify and perpetuate those same biases and stereotypes that we as humans have. AI can do this on a scale that, if we're not interrogating those outputs, can have deep impacts,” says Mandi Crespo, Manager, Accessibility Consulting.

Specific to GenAI, hallucination happens when a model generates plausible-sounding but false content, because it produces output based on patterns in its training data rather than real-time understanding or fact-checking.

Because AI systems rely on large amounts of training data to function effectively, there are also dangers around sensitive data spills and privacy violations.

Using copyrighted material without proper authorization to train AI models can infringe on intellectual property or violate privacy regulations.

As AI systems become more autonomous, the potential for unintended consequences increases. Ensuring human control and intervention is essential to prevent harmful outcomes.


A lapse in any one of these areas can lead to serious repercussions for your business. Here’s what’s at stake for businesses failing to prioritize responsible AI:

Reputational risk and erosion of trust
Your business may continue to function following a mishap, but consumer or employee trust may be dramatically diminished.
Financial repercussions
Your business may need to invest in outside professionals for remediation services, such as data scientists, legal advisors or cybersecurity specialists, to address any harm caused by AI-related matters. As regulations on responsible AI continue to be rolled out globally, you may face increasing financial penalties for non-compliance.
Legal risks
Disclosure to privacy commissioners may be necessary, and organizations may face legal consequences.
Operational risks
AI adoption introduces operational risks like data dependency, model errors, cybersecurity vulnerabilities, regulatory non-compliance, and workforce impact, all of which can affect workflows and decision reliability. Poorly executed integration or change management can disrupt existing workflows, resulting in service interruptions, decreased productivity, and potential downstream impacts on client satisfaction and trust.
Quality issues
Inaccurate or siloed input data can degrade AI model performance, increasing the likelihood of incorrect outputs and model drift, which can lead to flawed decisions and operational inefficiencies.

Pros and cons of GenAI from a cybersecurity lens

Explore key cybersecurity considerations and trade-offs when adopting GenAI.


Upcoming AI regulations

Governments are increasingly prioritizing legislation to regulate AI technologies and mitigate potential harms. Adhering to applicable local and global regulations is essential for maintaining trust and avoiding penalties.

The European Union’s (EU) Artificial Intelligence Act is largely seen as the first and, to date, most comprehensive AI regulation. It risk-ranks AI usage to define acceptable and unacceptable uses and establishes a legal framework within the EU. More importantly, the act quantifies penalties for non-compliance, which can result in fines of up to €35 million or 7% of the company’s global annual turnover.

In the United States, the federal government’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence sets standards for AI safety and security, emphasizing aspects like data privacy, transparency and accountability, and bias and discrimination mitigation. Many Canadian organizations also look to the National Institute of Standards and Technology’s AI Risk Management Framework for guidance, a voluntary conformance standard that can help guide organizations building their own AI solutions.

Business leaders can also refer to ISO/IEC 42001 for standardized requirements on managing the risks and opportunities associated with AI.

Currently, there is no regulatory framework in Canada specific to AI, but Bill C-27, also known as the Digital Charter Implementation Act, 2022, aims to change that. Under consideration in the House of Commons, it would reform federal privacy rules and establish common standards and governance mechanisms for high-impact AI systems in a way that balances innovation with safety and ethical standards in the private sector. The act would also identify enforcement measures and penalties to hold businesses accountable.

The North American AI regulatory landscape is poised to evolve significantly in the near future, especially if Bill C-27 passes, which would elevate the requirements for Canadian businesses.

Canada privacy law reforms: What is changing and how does it affect you?

Digital advances have triggered changes to Canadian privacy laws.


Responsible AI: The foundational principles

Responsible AI is not just a technical or regulatory requirement, but a fundamental ethical obligation. Microsoft is widely recognized as a leader in this space, laying down foundational guidelines and principles for what responsible AI should entail.

According to Microsoft, there are six principles that should guide AI development, deployment, and use. Below, we dive into how to implement each one in your AI practices.

“Since these principles were published seven years ago, they've remained durable. When we look at international guidance like OECD guidance or the G7 Hiroshima Process [on Generative Artificial Intelligence] that's looking to put in place a common framework, you'll find the same terms being pulled into their principles. They're very helpful as we look to put in place policy guidance, directives, and standards to shape the space,” Weigelt notes.

Meeting Microsoft’s six principles of responsible AI requires methodologies that are fundamentally human-centric, use input data free of inherent biases, and have robust continuous monitoring in place. Equally important is providing employees with AI literacy and training to support these efforts.

Let’s take a deeper look at how to bring Microsoft’s six principles to life.


Explainability and transparency in AI

For an AI system to be explainable and transparent, its processes must be clearly documented and continuously monitored, ensuring its decisions align with intended outcomes.

“Having transparency and accountability can enhance trust because people can actually see what it's designed to do; the auditability adds credibility, which further builds on the trust,” says Sonia Edmonds, Managing Partner, Innovation & Change at BDO Canada and a member of our Executive Leadership Team.

Defining the decision-making processes is a foundational principle in responsible AI, as it enables teams to tackle a range of other AI issues, including Microsoft’s other pillars of responsible AI that we explore further below. 

“AI systems cannot be just a black box where no one understands what's going on—their processes need to be explainable to both technical and non-technical stakeholders,” says Haya Elaraby, Manager, Management Consulting.

A transparent and explainable AI model also empowers the human-AI partnership. When humans understand an AI model’s inner workings, they can more easily modify and update it, and build stronger human-AI collaboration.


  • Use AI scorecards to systematically evaluate and assess the strengths and weaknesses of your AI system.

  • Identify and disclose content generated by AI, notify users when they interact with AI, and make your usage guidelines easily accessible.

  • Document decisions and explanations and use open-source tools to better understand global model behaviour and individual predictions. Microsoft’s InterpretML, for example, is used to train interpretable models and explain black box systems.
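For illustration, here is a minimal sketch of the InterpretML workflow mentioned above, assuming `pip install interpret scikit-learn` and using synthetic data as a stand-in for your real training set:

```python
# Minimal InterpretML sketch: train a glassbox model whose decisions
# remain directly inspectable. Synthetic data stands in for real data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An Explainable Boosting Machine keeps each feature's contribution
# visible, unlike a black-box model.
ebm = ExplainableBoostingClassifier(random_state=0)
ebm.fit(X_train, y_train)

show(ebm.explain_global())                        # overall feature importances
show(ebm.explain_local(X_test[:5], y_test[:5]))   # per-prediction explanations
```

A glassbox model like this will not suit every problem, but even for opaque models, InterpretML's blackbox explainers can produce the same kind of documentation.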


Accountability in AI

Accountability means taking responsibility for the outcomes of AI systems, both positive and negative, and supporting clear ownership and decision pathways. 

“The topic of accountability and transparency is really important to fully reach the potential of what AI could do. There’s a fear that people have [when it comes to artificial intelligence]—part of removing the fear is having a robust AI framework in place that incorporates ethics and fair outcomes and ensuring that your solutions are auditable,” says Edmonds.

Incorporating human-centred design into the fabric of AI systems fundamentally shifts how these technologies interact with the world around them, placing human needs, values, and ethical considerations at the forefront. This approach inherently fosters greater accountability within AI development and deployment.


  • Develop accountability practices to ensure that people retain control over autonomous AI systems.
  • Define clear accountability structures and decision-making processes.
  • Establish oversight mechanisms that enable continuous monitoring.
  • Encourage continuous improvement through feedback and facilitate cross-stakeholder communication. The responsibility of AI teams extends beyond deployment to include ongoing monitoring and verification of the system’s performance.
  • Implement machine learning operations (MLOps) for effective model management, deployment, and monitoring.
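As a minimal sketch of that last MLOps bullet, here is how model versioning and monitoring metrics might be recorded with the open-source MLflow library (the tool choice and the owner tag are illustrative assumptions, not a prescribed stack):

```python
# MLOps sketch with MLflow (assumed tooling; any model registry and
# monitoring stack could play the same role).
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=500, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

with mlflow.start_run(run_name="loan-scoring-v1"):
    # Record who owns the model and what it was trained on, so
    # accountability questions have a documented answer.
    mlflow.set_tag("owner", "ai-governance-team")     # hypothetical owner tag
    mlflow.log_param("training_rows", len(X))
    mlflow.log_metric("train_accuracy", accuracy_score(y, model.predict(X)))
    mlflow.sklearn.log_model(model, "model")          # versioned, auditable artifact
```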


AI and disability, fairness, and inclusion

AI systems should treat all individuals and similar subgroups fairly, without bias or discrimination. Consider a voice-based AI tool: someone with a stutter might become understandably frustrated if the system repeatedly interrupts them before they can finish their statement.

People with disabilities face some of the highest risks associated with GenAI. These systems can fuel unintended bias, spread misinformation, or infringe on privacy.

27% of Canadians aged 15 and older identify as having a disability, according to the 2022 Canadian Survey on Disability.

That’s eight million Canadians who are possibly excluded or discriminated against when biases are built into AI systems.

To mitigate accessibility bias, Crespo advises incorporating inclusive design principles from the start of an AI project. Plan ahead by anticipating and identifying potential biases or stereotypes in your data sets and establishing a foundation for a broad, cross-functional perspective early on in the development phase.

Then, ensure you have a way to track, measure, and analyze the integrity of the information collected and fed into the AI engine.

When it comes to accessibility in GenAI, it’s important to recognize that AI can also bring tremendous benefits. For instance, AI technology has the unique capability to uncover biases that might elude even human detection, offering a more objective analysis of data that can lead to more equitable and inclusive practices.

Responsible AI in the workplace

Many companies are turning to AI solutions to improve efficiency in the hiring and screening process. However, studies have shown that these AI-driven tools can inadvertently perpetuate bias, automatically flagging and rating candidates lower if their resume or profile contains words associated with disability or diversity, equity, and inclusion-related attributes. This can result in qualified candidates being unfairly excluded from job pools based on fundamentally flawed assumptions around disability.

Our BDO Consulting and Accessibility teams have worked with clients to review their employment systems and screening tools to ensure they are fair and unbiased. 

“Without scrutinizing the information we are asking AI to analyze and gather for us, you can be hiring people and not realizing that you're screening out candidates based on their gender, background, or disability. We can help clients work with intention when using screening tools for employment,” Crespo says.

We can provide model prompts to guide interview questions, ensuring that qualified candidates are not screened out based on their abilities. We can also scrutinize applicant data to help pinpoint why an organization may be getting fewer applicants with disabilities and help clients communicate that candidates of diverse backgrounds and people with disabilities are welcome.

  • Engage with stakeholders affected by the AI systems and build diverse design teams.
  • Educate your AI teams on the importance of addressing bias and integrating accessibility in their work.
  • Use tools to address anomalous data, such as data-focused models and real-time bias detection.
  • Conduct fairness assessments to identify issues and quantify disparity metrics for performance. Consider: Does the model give the same results for subgroups of largely similar attributes? For instance, if a model is designed to assist with hiring, is the hiring rate the same for all subgroups? (See the sketch after this list.)
  • Use reputable open-source packages for model fairness assessments and during the AI build phase. Aequitas, for example, is an open-source bias audit toolkit developed by the Center for Data Science and Public Policy at the University of Chicago.
  • Implement mitigation algorithms to address observed fairness issues and provide developers with guidelines to evaluate if those mitigations are adequate to address model unfairness.
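To make the hiring-rate check above concrete, here is a minimal sketch in plain pandas; the column names and the 0.8 rule-of-thumb threshold (the "four-fifths rule" used in US employment guidance) are illustrative, and toolkits like Aequitas automate richer versions of this analysis:

```python
# Fairness sketch: compare selection (hiring) rates across subgroups
# and flag large disparities. Column names are illustrative.
import pandas as pd

df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "selected": [1,   1,   0,   1,   0,   0,   0,   0],
})

rates = df.groupby("group")["selected"].mean()   # selection rate per subgroup
print(rates)

# Disparate-impact ratio: lowest subgroup rate vs. highest. A common
# (not legally definitive) rule of thumb flags ratios below 0.8.
ratio = rates.min() / rates.max()
if ratio < 0.8:
    print(f"Potential disparity (ratio={ratio:.2f}): investigate before deploying")
```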


AI security and privacy

Data is the basis for all AI systems, allowing them to make informed predictions. The more data a model is trained on, the more accurate it can be—but with great amounts of data comes great responsibility.

Privacy and security require close adherence to relevant laws and standards around the collection, use, and storage of data. Robust data management policies should consider what data is collected, whether it’s tagged properly, and whether access controls are implemented. Just as we carefully control and restrict physical access to secure areas, we must apply the same principles to managing access to data.

To avoid copyright infringement, it’s important to ensure the data ingestion process draws only on legally obtained and properly licensed data sources.

Consider leveraging and building on advanced third-party tools as a strategic approach to enhance your organization's security across various domains.

“We leverage tools like Azure AI services and Copilot for Microsoft 365, and because of that, we have a lot of the security functionality somewhat built in—but we recognize that the AI frameworks of third-party tools may not fully align with our specific needs. This is not due to deficiencies in their policies, but rather because our firm may face more stringent regulations in certain areas or have unique policy requirements that must be considered,” says Edmonds.

“Leveraging their tools accelerates our progress, but we also integrate additional layers of security to ensure our solutions meet our distinct standards and regulatory obligations.”


  • Assess and manage cybersecurity risks and system resilience during deployment.
  • Be transparent about the collection, use, and storage of data.
  • Ensure users have appropriate controls over how their data is being used.
  • Encrypt data in transit and at rest and regularly perform data scans for vulnerabilities (see the sketch after this list).
  • Conduct regular data audits, remove data that's no longer needed, and restrict data collection to what is strictly necessary.
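As a minimal sketch of the encryption bullet above, using Python's widely used `cryptography` package (an assumed tool choice; production systems typically hold keys in a managed service such as a key vault rather than generating them inline):

```python
# Encryption-at-rest sketch with the `cryptography` package (assumed
# tooling). Fernet provides authenticated symmetric encryption.
from cryptography.fernet import Fernet

key = Fernet.generate_key()       # in production, fetch from a key vault
fernet = Fernet(key)

record = b"applicant_id=123; income=85000"    # illustrative sensitive record
token = fernet.encrypt(record)                # ciphertext safe to store at rest
assert fernet.decrypt(token) == record        # recoverable only with the key
```

Encryption in transit is usually handled by enforcing TLS at the transport layer rather than in application code.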


AI safety and reliability

Reliability and safety tie back to consistency and accuracy. A key aspect is ensuring that AI models can resist malicious manipulation, even under unexpected conditions. Consider: Does the AI system operate as it was originally designed?

By implementing robust validation and verification processes, a system of checks and balances, continuous monitoring, and random audits, organizations can detect and address inconsistencies early. It’s important to continuously apply these measures throughout an AI tool’s entire life cycle to certify consistent performance. 

The human element is equally important. Incorporating human oversight and feedback loops ensures AI systems continue to operate within ethical and legal boundaries after launch. 


  • Create a system of checks and balances to minimize the risk of malfunctions or unsafe outcomes before they impact users.
  • Identify, assess, and mitigate any variable accuracy levels among data subgroups.
  • Conduct a detailed error analysis to understand failure distribution at both an aggregate level and among the various cohorts, which can help diagnose errors across the board.
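As a minimal sketch of that error analysis, here is a pandas comparison of the aggregate error rate against per-cohort error rates (the column names and data are illustrative):

```python
# Error-analysis sketch: an acceptable aggregate error rate can hide a
# cohort the model fails far more often. Data here is illustrative.
import pandas as pd

df = pd.DataFrame({
    "cohort": ["urban", "urban", "urban", "rural", "rural", "rural"],
    "y_true": [1, 0, 1, 1, 1, 0],
    "y_pred": [1, 0, 1, 0, 0, 0],
})

df["error"] = (df["y_true"] != df["y_pred"]).astype(int)
print("aggregate error rate:", df["error"].mean())   # looks moderate overall
print(df.groupby("cohort")["error"].mean())          # rural cohort fails far more
```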

Business reimagined

Find the right path to successful GenAI adoption.


Building resilience with an AI governance framework

If the above principles aren’t built into AI from the outset, AI can become a trust-breaker.

An end-to-end, agile, and scalable AI governance framework is the cornerstone of mitigating these risks and ensuring any AI solution is developed, deployed, and used responsibly.

End goals, success metrics, and guiding principles vary immensely across industries and individual organizations, making a one-size-fits-all framework for responsible AI impractical. A financial institution's prioritization of data privacy and bias mitigation will likely differ from a healthcare provider's emphasis on patient-centred transparency and precision. 

The concept of responsible AI governance demands a tailored approach that aligns with your organization's objectives and adapts to the specific ethical, legal, and technical contexts in which the technology operates.

3 essential elements of AI governance

Below we provide a road map to the three pillars of an AI governance framework based on Microsoft’s industry-leading practices and our own journey deploying AI solutions, both internally and through client work. 

1. Policy and oversight

AI governance starts with establishing a clear definition of how AI will help your organization.

“When you tie AI to business outcomes, it really helps drive adoption and this full 360-degree review of the trust environment that's required for those capabilities,” Weigelt advises.

Once this foundation is built, it will help inform your internal policies, standards, and procedures to ensure responsible AI through each phase of design, build, deployment, use, and beyond. This includes setting up oversight mechanisms to monitor AI activities, ensuring compliance with legal and regulatory requirements, and establishing a sensitive use case framework to determine whether an AI use case should be flagged for further review.

Just like you set high standards for your internal teams to ensure excellence and compliance, it’s important to apply the same scrutiny to external stakeholders. A structured framework can help you conduct due diligence on suppliers to understand their use of AI systems and ensure they comply with your policies.

A human-centric approach needs to be the foundation of AI governance. Define clear roles and responsibilities for all teams and stakeholders involved in an AI project and involve end users in the early stages of AI design to secure their trust and buy-in.

It's also about AI literacy and training for your employees and fostering a culture of accountability within your organization. At BDO, our people attend mandatory informational sessions around data security and governance.

Form a committee to advise on questions, challenges, and opportunities in AI development and deployment. At BDO, for instance, our Innovation & Change team is dedicated to driving our business forward by exploring and implementing new ideas in a responsible way. To serve our unique purposes as a firm, this team also advises on client projects.

“Establishing a cross-functional team allows us to design AI governance frameworks and policies that reflect our organizational values while holding ourselves to the same standards when designing for clients,” says Edmonds.


According to Microsoft, there are three triggers for sensitive use cases: 

  1. A denial of a consequential service that has the potential to impact a life event, for example, an AI system dealing with loan applications.
  2. Potential for human harm, for example, in medical diagnoses or an AI system that provides information on rip tides and ocean currents.
  3. An impact on human rights and freedoms, for example, an AI system measuring mask conformance at a hospital. “It’s deceivingly difficult to ensure it’s behaving appropriately for all community members. If you wear a headdress or have darker skin and are wearing a darker mask, there may be a considerable challenge in trying to make sure the AI is fit for purpose,” Weigelt explains.
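To show how such flagging might work in practice, here is a minimal triage sketch built directly on these three triggers (the dataclass, field names, and example are our own illustrative assumptions, not a Microsoft artifact):

```python
# Illustrative sensitive-use triage based on the three triggers above.
# Structure and names are assumptions for demonstration only.
from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    denies_consequential_service: bool   # trigger 1: e.g., loan decisions
    risk_of_human_harm: bool             # trigger 2: e.g., medical guidance
    impacts_rights_or_freedoms: bool     # trigger 3: e.g., biometric monitoring

def requires_review(case: AIUseCase) -> bool:
    """Flag the use case for governance review if any trigger applies."""
    return (case.denies_consequential_service
            or case.risk_of_human_harm
            or case.impacts_rights_or_freedoms)

loan_screener = AIUseCase("loan-application-screening", True, False, False)
assert requires_review(loan_screener)    # routed to the review board
```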

2. Research and development

Establish a cross-functional AI governance committee and working groups to conduct research on responsible AI issues. Composed of internal experts and third-party agencies, they are your trusted source for advice, best practices, and recommendations that form official policies and practices.

At Microsoft, for example, the Aether Committee has a working group dedicated to each of Microsoft’s six responsible AI pillars, as well as additional working groups focused on studying how people interact with AI, says Weigelt.

At BDO, part of our Innovation & Change team’s mandate is to stay on top of emerging innovation trends in the marketplace. It works closely with our Strategic Operations and Market Intelligence teams to ensure we remain current on market dynamics and that we prioritize, evaluate, and commercialize relevant innovations.

3. Implementation

The design and engineering phase serves as the structural and technical backbone of responsible AI. Engineers and designers play a crucial role in translating governance policies into actionable technical requirements.

Incorporate ethical considerations and human-centred design principles from the inception of AI systems. Defining and establishing boundaries around acceptable and unacceptable use is critical, especially in light of the increasing number of regulations that are explicitly addressing these distinctions.

Continuous monitoring and security validation techniques like fuzz testing and red teaming should be built into every stage of design, training, testing, and even post-deployment to maintain a clear understanding of how the AI model is acting and ensure consistent explainability, interpretability, and transparency.
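As a minimal sketch of what fuzz testing can look like for an AI service, here is a loop that feeds randomized inputs to a model entry point and records failures (`score_text` is a hypothetical stand-in for your real inference call):

```python
# Fuzz-testing sketch: bombard a model wrapper with randomized inputs
# and log failures instead of stopping. `score_text` is hypothetical.
import random
import string

def score_text(text: str) -> float:
    if not text.strip():
        raise ValueError("empty input")   # the kind of edge case fuzzing surfaces
    return min(1.0, len(text) / 100)      # placeholder for a real model call

def random_input(max_len: int = 50) -> str:
    return "".join(random.choice(string.printable)
                   for _ in range(random.randint(0, max_len)))

failures = []
for _ in range(1000):
    sample = random_input()
    try:
        score_text(sample)
    except Exception as exc:              # record and continue for coverage
        failures.append((sample, repr(exc)))

print(f"{len(failures)} failing inputs out of 1000")   # feed back into hardening
```

Red teaming complements this by having humans deliberately probe the system, for example with adversarial prompts, rather than relying on random inputs.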

“No model is perfect, so understanding the limitations helps us make mitigation plans for that,” says Elaraby.

Remember to develop according to your organization’s needs and goals. Reputable organizations and technology leaders have established reliable frameworks for guiding responsible AI practices. While these frameworks can serve as a robust starting point for many businesses, they might not entirely fit every unique situation or organizational need. To be truly effective in guiding your organization’s decisions and practices, assess and, if necessary, customize these frameworks to fill any gaps.

“As we're designing, even though we're leveraging many of these tools that help us get there faster, the reality is we're also building in extra layers of protection when we’re designing for ourselves,” says Edmonds.

Who are the key stakeholders to include?

Many stakeholders—both internal and external, technical and non-technical—can play an important role in shaping how you develop, deploy, use, and govern AI systems. The size of an AI governance committee can vary significantly between large and small companies, due to differences in resources, complexity of AI applications, and organizational needs.

Who should be involved? Let’s take a look at some of the key stakeholders. 

Governance and compliance leaders

These stakeholders are focused on the oversight, management, and compliance functions for AI systems.

At the C-suite level, any discussions around AI governance should include the chief executive officer and chief information officer or chief information security officer, risk officers, and human resources. Your governance policy should also be vetted and approved by the board.

Consider other groups or experts relevant to your unique situation. At BDO, for example, we include our Innovation & Change team and leaders from our BDO Digital practice, which serves the market and our clients.

Technical teams

These include AI development and deployment, cybersecurity, and data teams. Within the decision-making pathway, they can advise on what's possible when it comes to the development and deployment of the AI model.

Subject matter experts and end users

Working closely with subject matter experts ensures the AI outputs tie in seamlessly with operational processes and desired end user outcomes. Keeping the end user in mind improves user experience, builds trust, and ensures the system’s effectiveness and reliability. Conduct user feedback cycles and user acceptance testing to ensure the AI model meets their needs.

Technology partners and regulators

At BDO, for example, we work closely with our partners at Microsoft and regulatory bodies for guidance. We also proactively monitor local and global regulations to anticipate changes and ensure that we incorporate these regulations into our AI design process.

Equity-deserving groups

Include consultations with individuals from equity-deserving groups, including persons with disabilities. Their unique perspectives and lived experiences can help identify potential biases and accessibility issues that might otherwise be overlooked.

Change champions

A change champion is someone who fundamentally believes in AI and transformation, is a strong advocate for innovation, and can help drive organizational adoption.

“This person is consumed by the art of the possible of what AI can do and is driven by trying to introduce it in a way that complements what the business is trying to achieve. If you have someone who really believes in AI and who could champion it, they'll be naturally motivated by things like governance and return on investment,” says Emmanuel Florakas, Partner, BDO Digital and National Leader, Data & AI, BDO Canada.


Artificial intelligence and the future of work

In the rush to adopt AI solutions, how do you strike a balance between innovation and responsibility? Many business leaders are grappling with this consequential question in today's digital landscape.

“That's something we're very keenly aware of, leading innovation and change for BDO,” says Edmonds. “With any technology, there's a hype cycle and we're approaching its peak where everyone wants to leverage AI. And we do as well—we don't want to be late to the game. But at the same time, it's important that we're thinking about the governance pieces.”

A robust governance framework is important, but it must also be flexible enough to foster innovation without stifling it.

Above all, embrace the change. Consider how AI technology can bend the arc of possibility for your business. The reality is that people will move technology forward whether you restrict their use or not. 


Elevate your innovation with AI business solutions

Our firm is uniquely positioned to guide your organization in implementing responsible AI, leveraging our cross-functional team’s deep knowledge and strategic partnerships.

With skilled AI practitioners across Digital Strategy, Risk Advisory, Accessibility, and other key practice areas, our team can help your organization apply responsible AI practices across the board.

Most importantly, our services and solutions are guided by our own experience. Our firm was an early adopter of Microsoft 365 Copilot, placing us at the forefront of AI-driven innovation and productivity, and we continue to explore the horizon for practical AI advancements that align with our commitment to responsible integration.

Through our long, award-winning partnership with Microsoft, we’ve gained extensive knowledge of their technology and processes, resulting in many successful deployments for our clients.

Contact us for a consultation to discuss your current maturity state with AI and identify a path where responsible AI isn't just an advantage, but a hallmark of your success.

Rishan Lye
Partner, Consulting Leader
[email protected]

Emmanuel Florakas
Partner, BDO Digital
[email protected]

Rocco Galletto
Partner, National Cybersecurity Leader
[email protected]

Sam Abdulrrazek
National Leader, Digital Advisory
[email protected]

Mandi Crespo
Manager, Accessibility Consulting
[email protected]

Ziad Akkaoui
National Practice Leader, Risk Advisory
[email protected]

Sonia Edmonds
Managing Partner, Innovation & Change
[email protected]

Haya Elaraby
Manager, Management Consulting
[email protected]
