Overcome the risks of public AI models with your own private ChatGPT

Many users in small and medium-sized businesses are starting to benefit from the value that artificial intelligence (AI) can bring to everyday workplace tasks. However, the security concerns around these tools are real and organisations are seeing the consequences of sensitive data being used in public large language models (LLMs). Discover how you can overcome the risks of public LLMs like ChatGPT by introducing a private instance within your organisation.


While AI can bring a host of benefits, using tools like ChatGPT for everyday work can create risks that you might not have even considered. Take the example of a user short on time, asking the LLM to summarise a document. This simple request can carry a number of risks:

  • What if the document contained confidential information about your organisation?
  • What if it contained personally identifiable information or other sensitive information types?
  • What if it contained your intellectual property?
  • What if your user is attuned to all these worries – and still makes a mistake?

These traditional data loss concerns take on a new, more complex shape with ChatGPT. The user experience feels safe and the interaction that triggers the leak is different compared to the normal data loss pattern. Users aren’t sharing emails or documents – they’re just seeking out help from a virtual assistant.

You might have already heard about ChatGPT’s problems regarding hallucinations, authenticity and bias, but data loss is the immediate risk. To combat the danger, a new approach to AI is emerging, one which introduces secure and controlled usage of generative AI in Microsoft-oriented organisations.


Risks of using public ChatGPT in your organisation

Let’s explore the potential risks associated with using public AI services for work by looking at two notable examples.

The first example involves a data leak at Samsung, where developers asked ChatGPT to analyse their source code. The leak of the submitted information ultimately led Samsung to ban the use of generative AI tools. This incident highlights the importance of carefully considering the implications of using such services.

Another case involves the Italian government, which initially imposed a ban on ChatGPT due to concerns over personal data privacy. The ban was later lifted after OpenAI introduced an option for users to request removal of their personal data from the service. OpenAI has since taken further steps by providing an option to disable chat history, ensuring that user data is not retained for training purposes.

While OpenAI’s response may seem like a resolution, organisations cannot rely on their users to navigate these settings. Many users may be unaware of these options or simply not use them.

Chat history is useful, so users are unlikely to switch it off. However, leaving it on means queries are stored in ChatGPT by default and may be used for training future models. This raises concerns about the use of your sensitive, work-related information in future iterations of GPT models. Additionally, it is worth considering that OpenAI’s data centres are currently located in the United States, which may pose challenges for organisations with data sovereignty requirements.


Gaining control of generative AI

An increasingly common method of controlling ChatGPT use is simply blocking access – but this approach is unsatisfactory for two reasons. First, there are other generative AI services available. Second, blocking access without providing an alternative will only lead users to use ChatGPT on their own devices, creating a shadow IT problem.

If you really want to address the use of public LLMs and avoid their inherent risks, you need to provide a compelling private alternative. Azure OpenAI Service is an excellent choice for Microsoft-oriented organisations – it includes ChatGPT and offers various interfaces to provide a private instance of GPT-3.5, GPT-4, or other models like DALL-E 2 for image generation.
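To illustrate what “private instance” means in practice, here is a minimal sketch of how a request to Azure OpenAI Service is shaped. The key difference from public ChatGPT is that the request goes to your own Azure resource, addressing a model deployment you created. The resource name, deployment name and API version below are illustrative placeholders, not values from this article.

```python
# Sketch of an Azure OpenAI chat request. Unlike public ChatGPT, the call
# targets *your* resource endpoint and *your* named model deployment.
import json


def build_chat_request(resource: str, deployment: str, user_prompt: str,
                       api_version: str = "2024-02-01"):
    """Return the endpoint URL and JSON body for an Azure OpenAI chat call."""
    # Each Azure OpenAI resource gets its own endpoint under your subscription.
    url = (f"https://{resource}.openai.azure.com/openai/deployments/"
           f"{deployment}/chat/completions?api-version={api_version}")
    body = {
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_prompt},
        ],
    }
    return url, json.dumps(body)


# Placeholder names - substitute your own resource and deployment.
url, body = build_chat_request("my-resource", "my-gpt4-deployment",
                               "Summarise this document: ...")
# The actual call is an HTTPS POST authenticated with an `api-key` header
# (or a Microsoft Entra ID token) - omitted here to keep the sketch offline.
print(url)
```

The important point is that prompts sent this way stay within the service boundary of your Azure resource, rather than being submitted to the public ChatGPT service.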


Why use Azure OpenAI for your organisation?

If you want a private ChatGPT and you’re a Microsoft-oriented organisation, Azure is an obvious choice because trust, compliance and security measures have already been considered. There are many direct integrations in Azure that will become compelling as this initial foundation is extended to reach enterprise data.

While it’s possible to build on OpenAI’s own services, such as ChatGPT for Business, elsewhere and connect them to your enterprise data later, it’s hard to discern any advantage over using Azure OpenAI. Keeping the generative AI where the enterprise data lives can significantly reduce complexity, as it aligns with where users already do their work.

Azure OpenAI Service also offers specific privacy, security and compliance considerations that distinguish it from alternatives. The most crucial difference is that prompts and completions in Azure OpenAI are never retained for future training of OpenAI models.

Here are some additional points to consider:

  • Both OpenAI and Microsoft store prompts and completions for 30 days for abuse and misuse monitoring (important for AI safety). However, you can only opt out of this monitoring in Azure OpenAI Service.
  • You can tune Content Filters to your needs in Azure OpenAI Service.
  • If you use Customer-Managed Keys (formerly known as BYOK) in Azure or Microsoft 365, your Azure OpenAI Service stored content can also be protected under keys that you control.
  • If you control Microsoft’s access to your content with Customer Lockbox, this will also apply to data stored in Azure OpenAI Service.
  • Where private connectivity in Azure is in place, Azure OpenAI can benefit from these protections, such as Private Endpoints.
  • Azure OpenAI Service is part of the broader Azure Cognitive Services stack. As other AI needs emerge, they can be brought together with Azure OpenAI Service directly. For instance, proximity to Azure Cognitive Search will ease integration.

Considering these factors, and the likelihood that many more supporting reasons will emerge in the years to come, we believe the decision-making process should start from the ‘why not Azure?’ position.

This is a tech-oriented, incomplete view of AI risk, but an important part of the entire picture. If you need guidance for smaller organisations on navigating AI risks, we can help.


What about Copilots?

You might be wondering if Microsoft Copilots will solve this problem once they arrive. The answer is that although no-one knows for certain, it’s unlikely. Copilots are tailored to provide specific capabilities within their domain rather than generalised services like an LLM.

Copilots are more skilled and accurate than a generalised capability like an LLM because they have specific training and a narrower scope. However, they do not directly merge these specific capabilities with the more general and sometimes unreliable knowledge provided by ChatGPT.

Even if Copilots did solve this generalised need, we think it’s important to provide a private ChatGPT capability today. If not, organisations’ most present risks will remain unaddressed. We do think Copilots will be extremely valuable in their specific domains, but using Copilots in tandem with a more generalised capability like ChatGPT provides the optimum strategy for workplace AI.


How can I use a private ChatGPT in my business?

Private instances of ChatGPT can be employed by businesses today, but how can they integrate with your work data to combine the power of LLMs and Copilots?

There are several important considerations to keep in mind. You need to carefully evaluate the requirements and challenges associated with specific use cases in small and medium-sized organisations. There might also be foundational work that needs to be addressed before you can get started.

A private ChatGPT instance can be an effective way to mitigate risks and explore the potential benefits of generative AI in the context of your work. As you gain a deeper understanding and establish a solid foundation, you can then expand and extend its usage to broader enterprise applications. It’s a practical first step towards harnessing the power of generative AI for small and medium-sized organisations without many of the risks posed by public LLMs.


Want to get started with your own private ChatGPT?

Get in touch with us now to find out how we can help you with your very own private ChatGPT.