To get the most from artificial intelligence without falling prey to the risks, your company must implement a governance, risk, and compliance (GRC) framework specific to AI. Here’s how to develop a corporate policy that works.

Enterprise use of artificial intelligence comes with a wide range of risks in areas such as cybersecurity, data privacy, bias and discrimination, ethics, and regulatory compliance. As such, organizations that create a governance, risk, and compliance (GRC) framework specifically for AI are best positioned to get the most value out of the technology while minimizing its risks and ensuring responsible and ethical use.
Most companies have work to do in this area. A recent survey of 2,920 IT and business decision-makers worldwide found that only 24% of organizations have fully enforced enterprise AI GRC policies.
“If organizations don’t already have a GRC plan in place for AI, they should prioritize it,” says Hundemer, CISO at enterprise software provider Kalderos.
Generative AI “is a ubiquitous resource available to employees across organizations today,” Hundemer says. “Organizations need to provide employees with guidance and training to help protect the organization against risks such as data leakage, exposing confidential or sensitive information to public AI learning models, and hallucinations, [when] a model’s prompt response is inaccurate or incorrect.”
Recent reports have shown that many employee prompts to generative AI tools include sensitive company data, and that organizations continue to contend with shadow AI use despite providing employees with sanctioned AI options.
Organizations need to incorporate AI into their GRC framework, and data is at the heart of it all, says Kristina Podnar, senior policy director at the Data and Trust Alliance, a consortium of business and IT executives at major companies aiming to promote the responsible use of data and AI.
“As AI systems become more pervasive and powerful, it becomes imperative for organizations to identify and respond to those risks,” Podnar says.
Because AI introduces risks that traditional GRC frameworks may not fully address, such as algorithmic bias and a lack of transparency and accountability for AI-driven decisions, an AI GRC framework helps organizations proactively identify, assess, and mitigate these risks, says Haughian, co-founding partner at CM Law, who focuses on AI technology, data privacy, and cybersecurity.
“Other types of risks that an AI GRC framework can help mitigate include things such as security vulnerabilities where AI systems can be manipulated or exposed to data breaches, as well as operational failures when AI errors lead to costly business disruptions or reputational harm,” Haughian says.
For example, if a financial AI system makes flawed decisions, it could cause large-scale financial harm, Haughian says. In addition, AI-related laws are emerging globally, she says, which means organizations need to ensure data privacy, model transparency, and non-discrimination to stay compliant.
“An AI GRC plan allows companies to proactively address compliance instead of reacting to enforcement,” Haughian says.
Know the challenges ahead and at hand
IT and business leaders need to understand that creating and maintaining an AI GRC framework will not be easy.
“As attorneys we can often tell clients what to include in policies like an AI GRC policy [or] framework, but such advice should also be accompanied with advice to make sure organizations understand what challenges they are likely to face not only in drafting such policies, but also in implementing them,” Haughian says.
For example, AI is advancing so rapidly that both drafting AI GRC policies and keeping them up to date is a challenge. “If the AI GRC policies are overly strict, organizations will begin to see that this stifles innovation or that certain groups within the organization will simply find ways to work around such policies — or flat out disregard them,” Haughian says.
CIOs have been battling such shadow AI use since the inception of generative AI. Establishing an effective, company-specific AI GRC strategy is the No. 1 way to prevent a shadow AI disaster.
How should organizations go about creating their AI GRC plan, and what should go into such a plan? Here’s what experts suggest.
Build a governance structure with accountability
Most organizations fail to establish a well-defined governance structure for AI, Data and Trust Alliance’s Podnar says. “Evaluating the existing GRC plan/framework and determining whether it can be extended or amended based on AI ought to be the first consideration for any organization,” she says.
Without clear roles and responsibilities, such as who will own which decisions, organizational risks will be misaligned with AI deployments, and the results will be brand or reputation risks, regulatory violations, and an inability to take advantage of the opportunities AI provides, Podnar says.
“Where organizations choose to place accountability and delegated authority is dependent on the organization and its culture,” Podnar says. “There is no right or wrong answer globally, but there is a right and a wrong answer for your organization.”
Incorporating policy control and accountability into an AI GRC framework “would essentially define the roles and responsibilities for AI governance and establish mechanisms for policy enforcement and accountability, thereby ensuring that there is clear ownership and oversight of AI initiatives and that individuals are held accountable for their actions,” Haughian says.
A comprehensive AI GRC plan can help ensure AI systems are explainable and understandable, “which is critical for trust and adoption [and] something that is beginning to be a large hurdle in many organizations,” Haughian says.
Make AI governance a team effort
AI crosses virtually every facet of the business, so the GRC framework should include input from a broad spectrum of participants.
“We typically begin with stakeholder identification and inclusion by engaging a diverse group of sponsors, leaders, users, and experts,” says Madan, senior vice president and head of global services at IT service provider TEKsystems.
This includes IT, legal, human resources, compliance, and lines of business. “This ensures a holistic and unified approach to prioritizing governance matters, goals, and issues for framework creation,” Madan says. “At this stage, we also build or ratify the organization’s AI values and ethical standards. From there, we set the plan and cadence for continuous feedback, iterative improvement, and progress tracking against these priorities.”
This process takes into account evolving regulatory changes, advancements in AI functionality, emerging data insights, and ongoing AI innovation, Madan says.
Create an AI risk profile
Enterprises need to create a risk profile, with an understanding of the organization’s risk appetite, what information is sensitive to the organization, and the consequences of sensitive information exposure to public learning models, Hundemer says.
Technology leaders can work with senior business leaders to determine the proper risk appetite for the company and its workforce, Hundemer says.
Detailing how the organization identifies, assesses, and mitigates AI-related risks, including regulatory compliance, “becomes so important because it helps the organization stay ahead of potential legal and financial liabilities, and ensures alignment with relevant regulations,” Haughian says.
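To make such a risk profile actionable, some teams capture it as structured data that tooling can check against. The following is a minimal, hypothetical sketch in Python; the field names, data classes, tool names, and the permission check are illustrative assumptions, not a prescribed format.

```python
# Minimal, illustrative sketch of an AI risk profile captured as data.
# All categories, tools, and cadences here are hypothetical examples;
# each organization defines its own.

from dataclasses import dataclass

@dataclass
class AIRiskProfile:
    risk_appetite: str                 # e.g., "low", "moderate", "high"
    sensitive_data_classes: list[str]  # data that must never reach public models
    approved_tools: list[str]          # sanctioned AI services for employees
    prohibited_uses: list[str]         # use cases the organization rules out
    review_cadence_days: int           # how often the profile is reassessed

profile = AIRiskProfile(
    risk_appetite="moderate",
    sensitive_data_classes=["customer PII", "pricing data", "source code"],
    approved_tools=["internal-llm-gateway"],
    prohibited_uses=["sending regulated customer records to public chatbots"],
    review_cadence_days=90,
)

def is_use_permitted(data_class: str, tool: str) -> bool:
    """Check a proposed AI use against the documented risk profile."""
    return (tool in profile.approved_tools
            and data_class not in profile.sensitive_data_classes)

print(is_use_permitted("marketing copy", "internal-llm-gateway"))  # True
print(is_use_permitted("customer PII", "internal-llm-gateway"))    # False
```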
Incorporate ethical principles and guidelines
CIOs have lately been grappling with the ethics of implementing AI while under pressure to deliver value from the technology quickly. The need to incorporate ethical principles into AI GRC can’t be emphasized enough, because AI introduces a range of risks related to unethical use that can get enterprises in trouble.
This section of the GRC plan “should define the organization’s ethical stance on AI, covering areas like fairness, transparency, accountability, privacy, and human oversight,” Haughian says. “This practice will establish a moral compass for AI development and deployment, preventing unintended harm and building trust.”
One example of this is having a policy stating that AI systems must be designed to avoid perpetuating or amplifying existing biases, with regular audits for fairness, Haughian says. Another is to ensure that all AI-driven decisions, especially those that have a large impact on people’s lives, are explainable.
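One way to operationalize such regular fairness audits is a periodic comparison of favorable-outcome rates across groups. The sketch below is a hypothetical Python illustration that uses the commonly cited four-fifths (80%) ratio as an example threshold; real audits would use metrics, groupings, and thresholds chosen with legal and domain guidance.

```python
# Illustrative fairness audit: compare favorable-outcome rates across groups
# and flag gaps below a chosen threshold. The 0.8 ("four-fifths") threshold
# and the sample data are hypothetical.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved: bool) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def audit(decisions, threshold=0.8):
    """Return groups whose rate falls below `threshold` times the best rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: round(r / best, 2) for g, r in rates.items() if r / best < threshold}

sample = (
    [("group_a", True)] * 80 + [("group_a", False)] * 20
    + [("group_b", True)] * 55 + [("group_b", False)] * 45
)
print(audit(sample))  # {'group_b': 0.69} -> gap that warrants investigation
```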
Incorporate AI model governance
Model governance and lifecycle management are also key components of an effective AI GRC strategy, Haughian says. “This would cover the entire AI model lifecycle, from data acquisition and model development to deployment, monitoring, and retirement,” she says.
This practice will help ensure AI models are reliable, accurate, and consistently perform as expected, mitigating risks associated with model drift or errors, Haughian says.
Some examples of this include establishing clear procedures for data validation, model testing, and performance monitoring; creating a version control system for AI models and logging all changes made to those models; and implementing a system for periodic model retraining to ensure models stay relevant.
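As a rough illustration of the version-control and change-logging ideas above, the hypothetical Python sketch below records model versions, logs changes, and flags when retraining is overdue; the class, field names, and 180-day interval are assumptions for the example only.

```python
# Minimal sketch of model lifecycle record-keeping: register model versions,
# log every change, and flag when a model is due for retraining.
# Names, fields, and the retraining interval are illustrative assumptions.

import datetime as dt
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    name: str
    version: str
    trained_on: dt.date
    validation_accuracy: float
    changelog: list[str] = field(default_factory=list)

    def log_change(self, note: str) -> None:
        """Append a dated entry so every change to the model is traceable."""
        self.changelog.append(f"{dt.date.today().isoformat()}: {note}")

    def retraining_due(self, max_age_days: int = 180) -> bool:
        """Flag models older than the policy's maximum age."""
        return (dt.date.today() - self.trained_on).days > max_age_days

model = ModelRecord("credit-scoring", "2.3.1", dt.date(2024, 11, 1), 0.91)
model.log_change("Retrained with Q3 data after validation review")
if model.retraining_due():
    model.log_change("Retraining flagged: model older than policy allows")
print(model.changelog)
```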
Make AI policies clear and enforceable
Good policies balance the risks and opportunities that come with AI and other emerging technologies, including those requiring massive amounts of data, Podnar says.
“Most organizations don’t document their deliberate boundaries via policy,” Podnar says. This leaves a lot of employees putting the organization at risk when they make up their own rules, or it handcuffs them from innovating and creating new products and services because of the perception that they must always ask IT before proceeding, she says.
“Organizations need to define clear policies covering responsibility, explainability, accountability, data management practices related to privacy and security, and other aspects of operational risks and opportunities,” Podnar says.
Because all risks and industries are not equal, organizations need to weigh their own tolerance for risk against the business objectives that AI is intended to satisfy, Podnar says. Policies also need enforcement mechanisms that are understood by users.
Get feedback and make refinements
It’s important to communicate AI guidelines to the entire organization, and to seek ongoing feedback to enhance policies so they better meet the needs of users.
TEKsystems constantly documents and reports on AI usage, performance, and framework testing based on user feedback, Madan says. “This is part of our ongoing commitment to refinement, audit readiness, and assurance,” he says.
“Since AI models change over time and require continuous monitoring, a strong governance process needs to be in place to ensure AI remains effective and compliant throughout its lifecycle,” Podnar says. “This may involve assessing and using model validation and monitoring protocols, [and] creating automated rules with alerts for when things go off the intended path.”
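As a simple illustration of such automated rules, the hypothetical Python sketch below raises an alert when a model’s recent accuracy drifts too far below its baseline; the metric, tolerance, and alerting hook are assumptions, and a production setup would feed real telemetry into the organization’s monitoring stack.

```python
# Illustrative monitoring rule: alert when live model accuracy drifts below
# an agreed baseline. The baseline, tolerance, and alert mechanism are
# assumptions for the example only.

def check_drift(baseline_accuracy: float, recent_accuracy: float,
                tolerance: float = 0.05) -> str | None:
    """Return an alert message if accuracy has dropped beyond tolerance."""
    drop = baseline_accuracy - recent_accuracy
    if drop > tolerance:
        return (f"ALERT: accuracy fell {drop:.2%} below baseline "
                f"({recent_accuracy:.2%} vs {baseline_accuracy:.2%}); "
                "route to the model owner for review per GRC policy.")
    return None

alert = check_drift(baseline_accuracy=0.92, recent_accuracy=0.85)
if alert:
    print(alert)  # in practice: send to the monitoring/alerting system
```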