Generative AI and Adopting it Responsibly
by Alon Yamin, CEO and Co-Founder of Copyleaks

In May 2023, Samsung announced that it had banned the use of ChatGPT and other generative AI tools across the company. The reason? It was discovered that a developer had shared sensitive proprietary code with ChatGPT while generating AI-written source code, meaning the code could be retained for future model training and potentially surface in responses to other users.

ChatGPT was released less than a year ago, and in that short time, generative AI has changed the landscape across multiple industries, including the rules of cybersecurity and compliance.

There’s no denying that utilizing generative AI at work has its benefits. It can help generate marketing content, write source code, and compile complete data reports. But as the saying goes, nothing good ever comes easy, and generative AI is no exception. More than ever, it’s vital to understand the risks of utilizing AI models as resources so organizations can proactively secure their proprietary data and remain compliant while navigating this new age of generative AI.

The Risks of Generative AI

We’re all still learning about the potential risks of generative AI, but several instructive incidents have already come to light over the last few months, and anyone adopting AI should be aware of them.

Circling back to the Samsung incident, the first thing to be aware of when generating source code with AI is what you share with the models. These AI models are trained on the information stored within their repositories, and the prompts you submit can be retained and used, along with the vast amount of data on the internet, to generate responses for other users.

But that’s not the only concern; there’s also the matter of AI-generated source code and the GNU General Public License (GPL). If any portion of AI-generated code includes code covered by the GPL, its copyleft terms can require that all of your code, even an entire project, be released as open source, proprietary elements included.

Other growing risks around source code and AI-generated text are licensing issues and copyright infringement.

Licensing issues commonly arise with AI-generated source code from tools like GitHub Copilot. Again, these AI models pull from data within their repositories and across the internet to generate answers, and they have no awareness of the licensing agreements attached to that data. An AI model can reproduce source code in response to a prompt, yet that code could carry a strict licensing agreement you’d be unaware of until legal action is taken against you or your organization for violating it. A simple safeguard is sketched below.
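
As a rough illustration only, here is a minimal sketch in Python of the kind of pre-merge check a team might run on AI-generated snippets. The marker list and function names are invented for this example, and the check only catches snippets that carry an explicit license notice, so it is no substitute for dedicated license-scanning tools or legal review.

```python
import re

# Hypothetical, non-exhaustive list of markers that signal copyleft or
# otherwise restrictive license terms. A real review would rely on
# dedicated license-scanning tooling and legal guidance.
RESTRICTIVE_LICENSE_MARKERS = [
    r"GNU General Public License",
    r"GPL-[23]\.0",
    r"GNU Affero",
    r"AGPL-3\.0",
]

def flag_license_markers(generated_code: str) -> list[str]:
    """Return any restrictive license markers found in an AI-generated snippet."""
    return [
        marker
        for marker in RESTRICTIVE_LICENSE_MARKERS
        if re.search(marker, generated_code, flags=re.IGNORECASE)
    ]

# Example: a generated snippet that carries an explicit GPL notice.
snippet = """
# This file is part of some upstream project.
# Licensed under the GNU General Public License v3.0.
def frobnicate():
    ...
"""

hits = flag_license_markers(snippet)
if hits:
    print("Review before merging; found license markers:", hits)
```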

AI-generated text also makes it easy to stumble into copyright infringement and plagiarism issues. In our testing, we’ve uncovered multiple examples of AI-generated copy containing copyrighted and plagiarized material. A July 2023 lawsuit from comedian Sarah Silverman highlights the risks of trusting AI-generated content completely: Silverman sued OpenAI, the maker of ChatGPT, alleging that the company used her memoir to train its AI model without her permission. And she’s not the only one. Which begs the question: what’s in your AI-generated content?

Furthermore, AI models have no sense of what is factual or what is biased, meaning that AI-generated copy left unchecked may well contain false information or messaging that doesn’t align with your organization.

Adopting Generative AI Responsibly

The first step in responsible AI adoption is understanding generative AI use within your organization. Understanding the full scope of gen AI use among teams and individuals allows for establishing informed guidelines to help mitigate risks to your organization’s privacy, security, and IP. 

Another necessary step is maintaining human oversight for editing, monitoring, and fact-checking. Someone must review and edit any AI-generated content, especially content intended for publication; failing to do so puts your organization’s credibility at risk.

A proactive way of adopting and managing responsible AI is implementing a solution with monitoring and auditing capabilities across the organization: one that enforces enterprise-wide policies, helps mitigate potential data leaks and security risks, and gives the organization complete visibility into gen AI use among teams and individuals.

Implementing generative AI policies is a start, but policies can only go so far without a method to enforce them and ensure compliance, helping avoid potential data leaks. Another proactive step is identifying restricted keywords, phrases, and other content that is prohibited from appearing in an AI model prompt, and having a solution that enforces those restrictions, as in the sketch below.
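
To make that idea concrete, here is a minimal, hypothetical sketch in Python of a keyword-based prompt gate. The blocklist, function names, and terms are all invented for the example; a production solution would need centrally managed policies, smarter matching than exact substrings, and audit logging.

```python
# Hypothetical, organization-defined blocklist: project code names,
# credential markers, customer identifiers, and so on.
RESTRICTED_TERMS = ["project-atlas", "internal-api-key", "customer ssn"]

def check_prompt(prompt: str) -> list[str]:
    """Return any restricted terms found in the prompt."""
    lowered = prompt.lower()
    return [term for term in RESTRICTED_TERMS if term in lowered]

def submit_prompt(prompt: str) -> None:
    """Gate a prompt before it ever reaches an external AI provider."""
    violations = check_prompt(prompt)
    if violations:
        # Block the request (and, in practice, log it for auditing).
        raise ValueError(f"Prompt blocked; restricted terms: {violations}")
    # ...otherwise forward the prompt to the approved AI provider...

try:
    submit_prompt("Summarize the Project-Atlas roadmap")
except ValueError as err:
    print(err)  # Prompt blocked; restricted terms: ['project-atlas']
```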

Such products, including our Generative AI Governance and Compliance solution, are now arriving on the market. They help enforce organization-wide policies and monitor and audit generative AI use across teams to ensure responsible adoption and mitigate potential risks.

It’s essential to note that any tool supporting responsible gen AI adoption must take a multipronged approach, since the issues surrounding generative AI adoption are themselves multipronged. A tool that enforces policies and monitors AI use but doesn’t help mitigate the risk of copyright infringement, plagiarism, or source code licensing violations, for example, doesn’t address the full spectrum of concerns.

Embrace AI…Responsibly 

Samsung isn’t alone; multiple companies have since joined it in fully banning ChatGPT and other generative AI tools. And while that is understandable from a security standpoint, one could also argue that generative AI should be seen as a tool to embrace and adopt responsibly rather than feared and shut out. ChatGPT’s adoption rate, reaching 100 million users in a matter of months, faster than both TikTok and Instagram, cements the reality that AI is here and isn’t going anywhere anytime soon.

By implementing a solution that ensures company-wide security measures, proactive policy compliance, and regular monitoring and audits, and by fostering a culture of cybersecurity awareness around AI models, we can move into a future where AI confidently enhances rather than hinders.

Find out what's in your copy.