The rapid increase in generative AI use poses a variety of new challenges for organizational leaders. Among them is the advent of “Bring Your Own AI” — employees turning to unapproved, publicly accessible generative AI tools for work-related tasks.
While generative AI use offers workplace value, BYO AI introduces risks that organizations are not yet equipped to manage, according to Nick van der Meulen and Barbara Wixom of the MIT Center for Information Systems Research (CISR). In a recent webinar hosted by MIT Sloan Management Review, the researchers outlined some of the potentially costly risks of BYO AI, including data loss, intellectual property leaks, copyright violations, and security breaches.
The researchers’ insights are based on interviews and discussions with members of the MIT CISR Data Board. They found that generative AI falls into two main categories: tailored generative AI solutions designed for specific use cases, and broadly applicable AI tools that, if left unregulated, can lead to BYO AI.
The answer isn’t banning these tools outright, van der Meulen said. “If we restrict access to these tools, employees won’t just stop using generative AI. They’ll start looking for workarounds — turning to personal devices and using unsanctioned accounts and hidden tools,” he said. In that case, “rather than mitigating risk, we’d have made it harder to detect and manage.”
Instead, the researchers said, organizations should aim to turn BYO AI from a liability into an asset with three steps — building specific guidance, training employees, and sanctioning approved generative AI tools.
How to manage the risks of BYO AI
1. Build specific guidance.
Leaders should develop clear guardrails and guidelines that enable employees to experiment safely with generative AI tools. Company experts in technology, law, privacy, and governance should be tapped to develop policies on sanctioned and unsanctioned generative AI use and specify which tools are acceptable and under what conditions.
“As leaders in organizations, we’re not just implementing tools; we’re shaping the ethical and cultural landscape for how our employees use this technology responsibly,” van der Meulen said.
One leader on the MIT CISR Data Board told the researchers that their organization clearly communicated approved uses of generative AI to employees, such as using publicly available information in their AI queries, versus off-limits uses, such as uploading data that contains personally identifiable information, strategic information, or proprietary data. The organization also had a clear process in place for anyone who was unsure about whether AI use was appropriate, and it instructed individuals to reach out to the AI governance team for help or clarification, van der Meulen said.
“Employees need to know what’s always OK and never OK, what the best practices are, and where to ask for help,” van der Meulen said.

2. Develop training and establish communities of practice.
Organizations should help employees develop AI direction and evaluation skills so they can use generative AI tools effectively. This training should cover the AI models that power generative AI tools, the ethical and responsible use of AI, and how to critically judge AI-generated content.
With broadly applicable AI tools, the onus of success is on the employee, van der Meulen said. “If you don’t understand the basics of how a large language model functions, what its capabilities are, its limitations, and if you don’t know how to instruct that model just so in order to get the output you need, that limits the effectiveness of the tool,” he said.
For example, the data and analytics unit at animal health company Zoetis holds twice-weekly office hours during which employees can learn how to start using generative AI tools and ask questions, helping them improve over time and build confidence.
“When you establish a healthy use of tools, that actually helps with overall AI upskilling across the organization,” Wixom said. “This is key. As employees work with tools, they’re learning really important data science skills, concepts like probabilism and AI explanation, and so you’re elevating data science in the organization.”
3. Authorize certain generative AI tools from trusted vendors.
Staying current with the ever-evolving market for AI tools requires considerable time and effort. To crowdsource this process, the researchers said, organizations can create a cross-functional team tasked with evaluating tools and giving IT feedback about which ones promise real value.
“Just like banning these tools won’t work, not providing any tools also won’t work because then it becomes impossible to safely use these tools more broadly in one’s role,” van der Meulen said.
To simplify AI access and encourage employee use, Zoetis set up a generative AI app store in which employees can apply for tool licenses and learn about effective and responsible use. For each tool, employees can access guides for getting started, watch training videos, read the organization’s AI policies and guidelines, and more. Employees are also encouraged to submit stories describing how they used a tool. This feedback has helped the organization understand which tools deliver the most value for employees.
Watch the webinar “Bring Your Own AI: Balance Rewards and Risks.”