The release of OpenAI’s ChatGPT marked a pivotal moment in the field of generative AI, redefining how businesses operate, innovate, and drive growth. More and more Canadian companies across all sectors are looking to invest in AI to transform their operations, services, and products. Given the technology sector’s familiarity with AI and related technologies, it has a natural advantage in leveraging their capabilities.
Generative AI at work: Four use cases for tech companies
In the tech sector, generative AI offers many ways to drive value. From streamlining code writing and powering intelligent applications to enabling data monetization strategies and content creation, the possibilities will only expand as generative AI continues to develop. Below, we explore four real-world use cases that show how generative AI can impact your tech company.
Technology companies rely on sprint boards as a central hub for work management, communication, and collaboration within Agile teams. Teams that leverage generative AI can significantly accelerate developer velocity by optimizing and streamlining the development process, including automated code generation, code reviews, bug detection and resolution, and more rapid prototyping.
Generative AI can also enhance the development process through predictive resource allocation, cost anomaly detection, scenario-based planning, and real-time cost insights. The net effect is a lower cost per sprint and increased throughput.
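To make this concrete, here is a minimal sketch of how a team might wire a large language model into its code review step. It assumes the OpenAI Python SDK (v1+) with an API key in the environment; the model name and prompt wording are illustrative choices, not a prescribed implementation.

```python
# Minimal sketch of an AI-assisted code review step.
# Assumes: `pip install openai` (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

def review_diff(diff_text: str) -> str:
    """Ask an LLM to flag bugs, security issues, and style problems in a diff."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any capable chat model works
        messages=[
            {"role": "system",
             "content": "You are a senior code reviewer. Flag bugs, security "
                        "issues, and style problems. Be concise."},
            {"role": "user", "content": f"Review this diff:\n\n{diff_text}"},
        ],
    )
    return response.choices[0].message.content

# Example: feed in a diff captured from `git diff main...feature-branch`.
print(review_diff("--- a/app.py\n+++ b/app.py\n+def div(a, b): return a / b"))
```

A team might run a step like this in its CI pipeline so that draft review comments are waiting before a human reviewer ever opens the pull request.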
Newer conversational AI models like ChatGPT offer more parameters than their predecessors, which leads to better performance in understanding context, sustaining multi-turn exchanges, and generating responses. Through natural language processing, conversational AI lets users ask questions and have exchanges in everyday language, turning relevant information into actionable insights.
Conversational AI also reduces the bottleneck often associated with centralized data teams or analysts and empowers individual users across departments to access the data they require independently, freeing up data professionals to seize whitespace opportunities and bring innovative ideas to life.
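As a simple illustration, the sketch below shows one way a conversational layer might translate an everyday question into SQL against a known schema. It assumes the OpenAI Python SDK (v1+); the schema, model name, and prompt are illustrative assumptions rather than a reference design.

```python
# Minimal sketch of conversational analytics: plain language in, SQL out.
# Assumes: `pip install openai` (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

SCHEMA = "sales(region TEXT, product TEXT, amount REAL, sold_on DATE)"

def question_to_sql(question: str) -> str:
    """Return a SQL query that answers the user's plain-language question."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": f"Translate questions into SQLite SQL for this schema: "
                        f"{SCHEMA}. Return only the SQL."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# A marketing manager can now self-serve instead of filing a ticket:
print(question_to_sql("Which region sold the most widgets last quarter?"))
```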
Software-as-a-service (SaaS) providers offer robust features, functionalities, and solutions, but sometimes struggle with low end-user adoption. Often, this is the result of informational content that doesn’t adequately engage or educate users. SaaS companies can use user-behaviour and predictive analytics to understand where users run into challenges or drop off.
These insights enable early identification of issues and can guide product fixes or enhancements that improve user adoption and retention. From an efficiency perspective, generative AI can help teams write, code, design, and create engaging, dynamic online content that improves user satisfaction and adoption.
For example, generative AI can help SaaS companies create content focused on their application’s best practices, use cases, and tips, providing users with a valuable knowledge base. They can also develop interactive, personalized tutorials based on user roles and their interaction data with the software. Generative AI can further help tech companies offer dynamic support content to users based on their usage patterns and queries. By hyper-personalizing the learning experience, customers receive exactly what they need, which can increase adoption rates, highlight product differentiation, and improve overall customer satisfaction.
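For illustration, here is a minimal sketch of how a SaaS provider might generate personalized tips from a user’s role and feature-usage data, assuming the OpenAI Python SDK (v1+). The usage dictionary, model name, and prompt wording are hypothetical stand-ins for your own product analytics.

```python
# Minimal sketch of hyper-personalized in-app guidance.
# Assumes: `pip install openai` (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

def personalized_tips(role: str, usage: dict[str, int]) -> str:
    """Generate tips emphasizing features the user hasn't adopted yet."""
    unused = [feature for feature, count in usage.items() if count == 0]
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": (f"Write three short onboarding tips for a {role} who "
                        f"has not yet tried these features: {', '.join(unused)}. "
                        f"Keep each tip to one sentence."),
        }],
    )
    return response.choices[0].message.content

# Hypothetical usage data pulled from product analytics:
print(personalized_tips("project manager",
                        {"dashboards": 42, "automations": 0, "reports": 0}))
```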
A hardware company with limited customer service resources needs to address customer complaints and questions quickly, around the clock. By adding a generative AI chatbot to its website, the company can respond to customers in real time. The chatbot can also generate responses in the customer’s native language, reducing the risk of miscommunication. Generative AI models can emulate emotional intelligence and adopt tailored personas, allowing chatbots to respond with empathy and personalized interactions.
By connecting generative AI models to your organization’s operational systems, chatbots can understand customer intent and can retrieve data from systems when needed, further customizing the user experience.
If the chatbot can’t address a customer’s issue, it can direct the customer through the proper channels to receive human attention. Streamlining the issue-handling process can ultimately lead to better customer experiences, lower wait times, and increased overall satisfaction.
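A minimal sketch of this pattern appears below: the bot classifies the customer’s intent, pulls data from an internal system when it can, and hands off to a human when it can’t. It assumes the OpenAI Python SDK (v1+); the `get_order_status` helper and the intent labels are hypothetical placeholders for your own operational systems and routing logic.

```python
# Minimal sketch of an intent-routing support chatbot with human escalation.
# Assumes: `pip install openai` (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

def get_order_status(order_id: str) -> str:
    """Hypothetical stand-in for a real ERP/CRM lookup."""
    return "shipped"

def classify_intent(message: str) -> str:
    """Use the LLM as a lightweight intent classifier."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": "Classify this support message as exactly one of: "
                              "order_status, other.\n\n" + message}],
    )
    return response.choices[0].message.content.strip()

def handle(message: str, order_id: str) -> str:
    if classify_intent(message) == "order_status":
        return f"Your order {order_id} is currently: {get_order_status(order_id)}."
    return "Let me connect you with a support agent."  # human escalation path

print(handle("Where is my package?", "A-1042"))
```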
Address the four most common risks of generative AI
Although many technology companies are looking to integrate generative AI tools into their operations, caution and concerns around the ethical, legal, and practical challenges of this new technology remain. As with any technology program, success isn’t guaranteed; it’s contingent on proper planning and execution. Without a solid foundation for generative AI deployment, tech companies can be left vulnerable to AI bias, hallucinations, sensitive data exposure, and issues with copyright and data theft.
Fortunately, there are steps you can take to mitigate these risks.
AI bias can occur when machine learning algorithms inadvertently learn and perpetuate human or systemic prejudices present in the data used for their training. For example, a Microsoft study discovered that AI-generated images could be biased in their representation of gender, race, and age across different occupations and personality traits. As these biases manifest in the generated content, they can perpetuate and amplify pre-existing stereotypes and undermine efforts to promote equality.
Addressing the risks of AI bias requires careful scrutiny of the training data, the development of bias-detection algorithms, and ongoing monitoring to ensure that the AI system is making unbiased decisions. It also underscores the importance of using sufficiently diverse data to train AI systems while avoiding overrepresentation, which is a common problem in large data sets. Selecting the right model for your AI and setting it up correctly lays the foundation for AI systems to operate ethically and in alignment with your organizational goals. Look for models that offer algorithmic transparency.
Tech companies also need to proactively consider the questions or use cases that could be a bias concern and implement strong governance to avoid these issues. For example, you might consider restricting certain topics if they have a high risk of generating biased results.
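As a simple illustration of such a guardrail, the sketch below blocks prompt categories a governance policy might flag as high risk before they ever reach the model. The topics and keywords are illustrative only; production systems typically rely on a trained classifier or moderation service rather than keyword matching.

```python
# Minimal sketch of a topic guardrail enforcing a bias-governance policy.
# The restricted topics and keywords are illustrative assumptions.
RESTRICTED_TOPICS = {
    "hiring recommendations": ["who should we hire", "best candidate"],
    "credit decisions": ["approve this loan", "creditworthiness"],
}

def check_prompt(prompt: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a user prompt."""
    lowered = prompt.lower()
    for topic, keywords in RESTRICTED_TOPICS.items():
        if any(keyword in lowered for keyword in keywords):
            return False, f"Blocked by policy: {topic} is a restricted topic."
    return True, "OK"

allowed, reason = check_prompt("Who should we hire from this resume pile?")
print(allowed, reason)  # False: hiring recommendations are restricted
```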
A hallucination occurs when a generative AI program outputs false information or a response that is not supported by its training data. Since the user does not necessarily see the sources used to generate the answer, it can be difficult to distinguish facts from hallucinations, especially because the platform presents them as factual. Even when sources are cited, the sources themselves may be fake.
Mitigating these risks involves careful monitoring and validation of the AI system’s outputs. For example, a company can ask experts in the relevant field to verify an output. Platforms can also be designed to cite their sources, allowing users to confirm whether outputs are factual; asking the AI to include sources, and then validating any sources it cites, is a best practice.
Next, tech companies need to train their users on how to properly use the platform and establish clear policies on acceptable and unacceptable use. Training on prompt engineering can significantly reduce the risk of hallucinations by providing users with the knowledge and skills to craft effective prompts that guide AI models towards generating more accurate and contextually appropriate responses.
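As an illustration of the kind of pattern such training might teach, the sketch below builds a grounded prompt that constrains the model to supplied context, requires citations, and gives it explicit permission to say it doesn’t know. The wording is an illustrative pattern, not a canonical template.

```python
# Minimal sketch of a grounded prompt template that discourages hallucination.
GROUNDED_PROMPT = """Answer the question using ONLY the context below.
After each claim, cite the supporting line in [brackets].
If the context does not contain the answer, reply exactly: "I don't know."

Context:
{context}

Question: {question}"""

prompt = GROUNDED_PROMPT.format(
    context="[1] Our SLA guarantees 99.9% uptime.\n"
            "[2] Support hours are 9am-5pm ET, Monday to Friday.",
    question="What uptime do we guarantee?",
)
print(prompt)  # send this to your model of choice
```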
The risk of sensitive data exposure in generative AI models is a profound concern that warrants strategic attention to secure both proprietary information and user data against inadvertent leaks and potential misuse. Source code stands out as one of the most frequently exposed types of sensitive data. Because many generative AI platforms train on data manually input by users, companies should be aware that their employees may have entered proprietary data into a generative AI platform. If the data from that application becomes exposed, all user data could be at risk.
The first step to mitigating these risks is understanding the security implications of the platforms in use. For example, if your organization opts to rely on third-party platforms instead of building its own, you need to have a clear understanding of how the data is being used and how long it’s being stored. It’s important to note that not all platforms leverage user data for training purposes.
Safeguarding source code is therefore imperative, requiring robust data protection measures and constant vigilance to prevent inadvertent data leaks and the consequences they may entail.
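One practical measure, sketched below, is a pre-submission filter that scans outbound prompts for common secret patterns before they leave for a third-party platform. The regexes cover a few well-known key formats and are illustrative; real deployments pair this kind of check with dedicated data loss prevention or secret-scanning tools.

```python
# Minimal sketch of a pre-submission filter for outbound LLM prompts.
# The patterns are illustrative, not an exhaustive secret taxonomy.
import re

SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Private key block": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
    "Generic API key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of any secret patterns found in the prompt."""
    return [name for name, pattern in SECRET_PATTERNS.items()
            if pattern.search(prompt)]

findings = scan_prompt("Debug this: api_key = sk_live_abc123")
if findings:
    print(f"Blocked: prompt appears to contain {', '.join(findings)}.")
```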
As with any new technology, tech companies also need to monitor regulatory changes to stay compliant and understand what compliance requirements they may need to address in the future so they can start preparing today.
Innovation, Science and Economic Development Canada is creating a voluntary code of conduct for generative AI systems, which will be an important resource for tech companies. Companies should also be familiar with Bill C-27 and its changes to how personally identifiable information (PII) is collected, stored, and used. The bill also introduces Canada’s first AI legislation, the Artificial Intelligence and Data Act (AIDA).
To produce sophisticated, human-like outputs, generative AI models undergo training on vast and diverse data sets that often encompass a wide spectrum of sources, including publicly available information, documents, original artwork, and other content types. Some of this data may even come from personal, private, proprietary, or corporate sources, raising critical questions around copyright issues and data theft.
Legal frameworks have yet to fully encompass the realities of AI technology, and they struggle to define if and how intellectual property rights apply to AI-generated content. The blurred lines between the work of these AI models and original human creativity have sparked discussions about who should own the generated content and how to safeguard against data misuse.
Striking a balance between fostering innovation and protecting the rights of content creators and data owners is a pressing matter in navigating the complex landscape of generative AI. Cases around the world are setting precedents as these technologies continue to evolve and become increasingly mainstream.
Moving forward: Other key generative AI considerations for tech companies
As tech companies move forward with generative AI, there are a few other important considerations to keep top of mind:
Due to the strong interest in generative AI, many best practices have already been established. For example, Microsoft's Responsible AI Standard outlines the company’s guidelines and policies for developing AI systems to better ensure they are ethical, safe, and responsible. It was made public to share learnings, invite feedback, and contribute to global discussions about establishing better norms and practices around AI.
Tech companies may want to explore creating one or more AI-enabled apps for customer or employee use. Consider, for example, a consumer-facing app that complements and enhances your company’s product. Internally, a workflow management app can streamline processes and improve team collaboration.
Before creating an AI-enabled app, conduct a thorough benefit analysis to ensure its development will deliver clear, measurable business advantages.
Tech companies need to be prepared not just for what’s happening today, but for what’s ahead. For example, Microsoft’s Semantic Kernel, an emerging open-source SDK due for full release in fall 2023, helps developers build their own copilot experiences on top of AI plugins and flexibly integrate AI services into their existing apps.
As generative AI continues to evolve and new technologies emerge to support and complement it, your tech company should reassess its innovation investments and plan accordingly.
How BDO can help
In recognition of our achievements in the field of AI, our firm won the 2023 Microsoft Canada AI Impact Award. This award celebrates Microsoft partners that deliver impactful Microsoft AI innovation in client scenarios such as monitoring assets to improve efficiencies, driving operational performance to enable innovation, and using advanced data analytics to transform a business with new business models and revenue streams.
BDO is well positioned to help technology companies leverage the benefits of generative AI, offering guidance and support across digital strategy, technology and cloud implementations, risk mitigation, compliance, training, and ongoing performance monitoring for sustained success.
Are you ready to adopt generative AI in your company?
Let’s explore your options. Our team of advisors can also help identify grants and government incentives to help power your digital transformation.
This article was adapted from a piece by BDO USA, Tech Takes on Generative AI, and revised to cater to the unique needs and characteristics of the Canadian market.