Six questions to ask before adopting enterprise genAI

AI is transforming how businesses operate, offering unprecedented opportunities for efficiency and innovation. However, while organizations are investing heavily in AI adoption, governing and managing these tools remains complex. Despite the significant time and resources dedicated to implementing AI solutions, many organizations have yet to realize meaningful productivity gains.

If your organization has adopted AI, or is considering it, here are six critical questions to address in order to enable AI effectively:

1. How do we ensure AI tools don’t misuse sensitive information?

Data privacy and governance are at the heart of AI adoption. As more tools integrate AI capabilities, organizations must set clear boundaries on data use. Proactive measures include:

  • Adding AI-specific clauses in vendor contracts that restrict data sharing and model training.

  • Requiring vendors to keep data confidential and separate from other clients.

  • Establishing internal policies to prevent sensitive information from being fed into generative AI tools (see the enforcement sketch after this list).
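
Policy alone is hard to enforce at scale, so some organizations pair it with automated guardrails. The Python sketch below illustrates one such control: screening prompts for sensitive patterns before they reach a generative AI tool. The pattern names and rules here are illustrative assumptions, not a production-grade data loss prevention ruleset.

```python
import re

# Illustrative patterns only; a real deployment would use the
# organization's own data classification rules or a DLP service.
SENSITIVE_PATTERNS = {
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

if __name__ == "__main__":
    findings = screen_prompt("Customer SSN is 123-45-6789, please summarize.")
    if findings:
        print(f"Blocked: prompt contains {', '.join(findings)}")
    else:
        print("Prompt cleared for submission.")
```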

2. How do we get employees to use AI tools effectively?

Adoption isn’t automatic. AI tools like Microsoft Copilot may promise productivity gains, but the real challenge lies in teaching employees to use them well:

  • Conduct webinars and hands-on training sessions to teach proper prompting techniques.

  • Address risks like hallucinations and inaccurate outputs.

  • Invest in change management to ensure sustained adoption and minimize friction.

3. What happens when vendors add AI features post-contract?

Many vendors now incorporate AI features after agreements are signed. Without contractual safeguards, businesses risk exposure to unforeseen data vulnerabilities. Steps to mitigate this include:

  • Adding AI terms during contract negotiations and renewals to address potential scope creep.

  • Performing vendor risk assessments that include AI governance questions.

  • Ensuring that any new AI features adhere to your organization’s data and compliance standards.

4. How long should AI-generated outputs be retained?

Retention policies for AI-generated outputs, such as meeting transcriptions, can significantly impact risk management:

  • Define the minimum useful life for sensitive outputs (e.g., auto-deleting transcriptions after 15 days).

  • Align retention policies with legal requirements, such as DOJ guidelines on ephemeral messaging.

  • Automate deletion processes to ensure compliance and reduce manual errors (a minimal sketch follows this list).
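
As an illustration of the automation point above, here is a minimal Python sketch of a scheduled purge job. The directory path, file extension, and 15-day window are assumptions; substitute whatever your transcription tool and retention policy actually specify.

```python
import time
from pathlib import Path

# Hypothetical retention window and storage location.
RETENTION_DAYS = 15
TRANSCRIPT_DIR = Path("/data/meeting-transcripts")

def purge_expired_transcripts() -> int:
    """Delete transcript files older than the retention window."""
    cutoff = time.time() - RETENTION_DAYS * 24 * 60 * 60
    deleted = 0
    for path in TRANSCRIPT_DIR.glob("*.txt"):
        if path.stat().st_mtime < cutoff:
            path.unlink()
            deleted += 1
    return deleted

if __name__ == "__main__":
    # Typically run on a schedule (e.g., a daily cron job) so deletion
    # does not depend on anyone remembering to do it manually.
    print(f"Deleted {purge_expired_transcripts()} expired transcripts")
```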

5. Who should guide your AI initiatives?

AI governance requires a cross-functional approach. Establishing an AI committee ensures diverse perspectives are incorporated:

  • Include representatives from Legal, Privacy, IT, HR, Risk, and Governance.

  • Empower the committee to review business cases, oversee risks, and approve new AI initiatives.

  • Create tiered risk models (e.g., levels 0-4) to prioritize governance efforts based on AI use cases (sketched below).
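
To make the tiered model concrete, the sketch below maps hypothetical risk tiers (0-4) to the reviews a use case must clear before approval. The tier labels, example use cases, and review requirements are illustrative assumptions, not a standard framework.

```python
from dataclasses import dataclass
from enum import IntEnum

# Hypothetical tier definitions; adapt to your own risk taxonomy.
class RiskTier(IntEnum):
    MINIMAL = 0      # e.g., spell-checking internal drafts
    LOW = 1          # e.g., summarizing public documents
    MODERATE = 2     # e.g., drafting customer-facing content
    HIGH = 3         # e.g., processing personal data
    CRITICAL = 4     # e.g., automated decisions affecting individuals

@dataclass
class UseCase:
    name: str
    tier: RiskTier

def required_reviews(use_case: UseCase) -> list[str]:
    """Map a use case's tier to the approvals it must clear."""
    reviews = []
    if use_case.tier >= RiskTier.LOW:
        reviews.append("IT security review")
    if use_case.tier >= RiskTier.MODERATE:
        reviews.append("Privacy review")
    if use_case.tier >= RiskTier.HIGH:
        reviews.append("Legal review")
    if use_case.tier >= RiskTier.CRITICAL:
        reviews.append("Full AI committee approval")
    return reviews

if __name__ == "__main__":
    case = UseCase("HR resume screening", RiskTier.CRITICAL)
    print(f"{case.name}: {', '.join(required_reviews(case))}")
```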

6. How can we balance innovation with control?

AI enablement isn’t just about reducing risk; it’s about fostering innovation while maintaining safeguards. Consider these approaches:

  • Train employees in high-risk departments, like Sales and Marketing, to use AI tools responsibly.

  • Use targeted AI solutions for specific use cases rather than broad, generalized tools.

  • Implement an AI use policy that all employees acknowledge, ensuring accountability.

Moving Forward

AI enablement requires organizations to navigate a delicate balance between innovation and governance. By addressing these six questions, businesses can ensure they’re not only adopting AI tools but doing so in a way that safeguards data, aligns with organizational goals, and drives sustainable growth.