Data and Cybersecurity:

Steps to address AI vulnerabilities

Article

This article originally appeared on March 1, 2022, in a Globe and Mail content feature called Fraud Prevention Month. Reprinted with permission. All rights reserved.

Artificial intelligence (AI) technology is a powerful asset in business, allowing machines to think for themselves—and at a faster pace than ever before. But AI systems can pose cybersecurity challenges, which can cause operational, financial, health and safety, and reputational damage.

BDO Lixar (now BDO Digital), BDO Canada's national technology consulting arm, helps organizations recognize and manage such risks. Partners Rocco Galletto, head of cybersecurity, and Daryl Senick, head of financial services and responsible for data and AI, talk about the potential vulnerabilities of AI and what can be done to make these systems safe and secure.

How much is AI used in business systems, in what sectors, and why?

DS: AI is no longer an emerging technology; it has truly become mainstream, whether it's for cost and inventory optimization or the analysis of consumer sentiment and behaviour. It's used in contexts from manufacturing, for “shop-floor-to-top-floor” automation, to financial services, where it's used to rate risk in credit and insurance products. Organizations that don't leverage AI could lose their competitive edge.

What cybersecurity threats does this pose?

RG: Threats exist throughout the AI lifecycle. Data is collected in both structured and unstructured forms and stored for analysis, and it can include sensitive information about individuals.

DS: The integrity of the data itself is also critical. These systems are built to learn, adapt, adjust and in some cases make decisions from the data they're fed, so any tampering or manipulating of that data can influence outcomes. This can lead to less-than-ideal business decisions or even bring harm to individuals.

How do the vulnerabilities of traditional information technology (IT) systems compare with those in AI?

RG: IT systems are typically targeted through doors that are left open or bugs in code that allow adversaries to infiltrate the network. AI systems can be hit in much the same way, but there are additional attack vectors here. Data is at the core of what makes the system function – or malfunction. In an “input attack”, an adversary can fool the AI system into accepting bad data and making a mistake. And a “poisoning attack” can stop an AI system from operating correctly, impacting its ability to make accurate predictions.
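To make the poisoning idea concrete, here is a toy sketch (not anything from BDO's practice): a deliberately simple nearest-centroid classifier stands in for a real fraud-detection model, and every dataset, label, and number is invented for illustration. The attacker slips mislabeled records into the training data, which shifts the model's learned representation and degrades its predictions on clean data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: two well-separated clusters.
# Class 0 ("legitimate") around (0, 0); class 1 ("fraudulent") around (5, 5).
X0 = rng.normal(loc=0.0, scale=0.5, size=(50, 2))
X1 = rng.normal(loc=5.0, scale=0.5, size=(50, 2))
X_train = np.vstack([X0, X1])
y_train = np.array([0] * 50 + [1] * 50)

def fit_centroids(X, y):
    """Nearest-centroid 'model': one mean vector per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    """Assign each row of X to the class with the nearest centroid."""
    classes = sorted(centroids)
    dists = np.stack(
        [np.linalg.norm(X - centroids[c], axis=1) for c in classes], axis=1
    )
    return np.array(classes)[dists.argmin(axis=1)]

clean_model = fit_centroids(X_train, y_train)

# Poisoning attack: the adversary injects records drawn from the
# fraudulent region but mislabeled as legitimate (class 0), dragging
# class 0's learned centroid toward the fraud cluster.
X_poison = rng.normal(loc=5.0, scale=0.5, size=(300, 2))
X_bad = np.vstack([X_train, X_poison])
y_bad = np.concatenate([y_train, np.zeros(300, dtype=int)])
poisoned_model = fit_centroids(X_bad, y_bad)

# Fresh, clean test data measures how far the predictions degrade.
X_test = np.vstack([
    rng.normal(0.0, 0.5, size=(50, 2)),
    rng.normal(5.0, 0.5, size=(50, 2)),
])
y_test = np.array([0] * 50 + [1] * 50)

acc_clean = (predict(clean_model, X_test) == y_test).mean()
acc_poisoned = (predict(poisoned_model, X_test) == y_test).mean()
print(f"clean accuracy: {acc_clean:.2f}, poisoned accuracy: {acc_poisoned:.2f}")
```

The poisoned model misclassifies a slice of genuine fraud as legitimate, even though nothing about the model's code changed—only its training data did. That is why the integrity controls discussed below focus on the data pipeline, not just the software.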

What can some of the consequences be?

DS: Take automated underwriting, say for loans or insurance products: if bad data introduces a bias, you could give high-risk people low-rate loans or underpriced insurance, and that's going to create financial loss. Or a retailer could be exposed to reputational risk if decisions about advertising or products are based on inappropriate inputs.

How can this be avoided?

RG: Taking a broader view, organizations must follow a strict set of guidelines to protect the data they collect and the systems that process that data. We need to make sure from a cybersecurity standpoint that these systems remain available, that the data integrity is preserved and that its confidentiality is maintained.

What kind of services does BDO provide to help?

DS: Our services include data advisory and data engineering, data visualization and data science. We help our clients plan through their entire data journey, including strategy, roadmap, implementation and operations. We cover the full cycle of data management and data governance, providing insights and analytics and supporting an overall data-driven culture.

RG: Along the journey that Daryl describes, our security team works joined at the hip with developers, data scientists and data engineers to help our clients remain secure. All regulatory issues and cyber risks are considered, assessed and managed.

How can we ensure that AI systems remain safe and secure in the future?

RG: Considerations for managing ethics, bias, trust and security must be made right from the design and planning phases of any AI project, as with all new technology initiatives. Then throughout the product lifecycle, from implementation to ongoing operations, it's critical to monitor for any anomalies the system may encounter.

DS: The footprint of vulnerability—and the sophistication of attack—continually grow over time. It's important to constantly evolve and to understand the risks associated with AI, because AI is here to stay and we want to make sure we can move forward effectively, while managing the risks.
