How to Build a Native OpenAI Integration
Are you interested in building a native OpenAI integration for your SaaS application? OpenAI is one of the most discussed companies today: ChatGPT reached over 100 million users within two months of launch, and more recently, companies like Slack and Notion have launched their own OpenAI integrations in their apps.
OpenAI has made it easy for developers to integrate these generative AI capabilities into their products through its APIs. In this post, we’ll go over the steps you need to take to build a native OpenAI integration into your application.
But first, let’s talk through the two ways you can integrate OpenAI into your product, and why it’s important to make this decision before building the OpenAI integration.
User-supplied keys vs. using your own keys
Using your own OpenAI keys
In the Notion example, OpenAI is integrated without the end user having to provide their own keys. In this scenario, Notion authenticates (and pays for) every request its users make to OpenAI’s API. With no controls over how users interact with the AI, Notion could easily incur significant costs providing the integration.
If you build the integration so that you supply your own OpenAI key, and all API requests made on behalf of your users go through that key, the costs can quickly rack up.
While this may be the easiest to implement, it is not ideal when it comes to managing costs for supporting the integration. That’s why we recommend this second option.
Have your users supply their own OpenAI keys
By having your users supply their own OpenAI keys, they will be the ones who are responsible for bearing the costs of their usage of the integration in your application.
You can still upsell users on the integration by gating it to certain tiers, but at least you won’t have to incur unpredictable costs from users constantly experimenting with different prompts.
Additionally, having your users supply their own keys may open up personalization and fine-tuning options in the future, since fine-tuned models and usage data live under each user’s own OpenAI account rather than yours.
With that said, let’s get into the steps for building the native OpenAI integration.
Step 1: Sign Up For an OpenAI Account and Get API Access
Even if your end users will supply their own API keys, you’ll need your own account and credentials to test the integration as you build it.
The first step is to create an account on their website. Head over to https://platform.openai.com/signup and create an account.
Get an OpenAI API Key
Before you can start making OpenAI API requests, you must authenticate your requests using an API key. On the https://platform.openai.com/account/api-keys page, click Create new secret key. Remember to copy your secret key and store it securely.
If you forget or lose access to your API key, you can always create a new one, but make sure to revoke any forgotten keys.
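One common pattern for keeping your own key out of source code is to load it from an environment variable. A minimal sketch, assuming the conventional `OPENAI_API_KEY` variable name (the helper function is ours):

```python
import os

def load_openai_key() -> str:
    """Read the OpenAI API key from the environment rather than hard-coding it."""
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set; create a key at platform.openai.com"
        )
    return key
```

This way the secret never lands in version control, and rotating a leaked key doesn't require a code change.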
New OpenAI accounts come with $18 of free trial usage credit. If you blow through that credit during testing, head over to the Billing page to set up a paid account and usage limits.
Step 2: Capture Your User’s OpenAI API key
As mentioned earlier, in production you want to avoid bearing API costs yourself and keep the door open to per-user fine-tuning. To authenticate requests on each user’s behalf, you’ll need to build an input prompt that captures their OpenAI API keys, along with a mechanism for securely storing those keys.
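Stored keys should always be encrypted at rest. A minimal sketch of the storage side, assuming symmetric encryption with the `cryptography` package’s Fernet (in production the master key would come from a secrets manager, and the encrypt/decrypt helpers would sit in front of your database):

```python
from cryptography.fernet import Fernet

# Assumption: generated once and kept in a secrets manager,
# not created at startup as shown here.
MASTER_KEY = Fernet.generate_key()
fernet = Fernet(MASTER_KEY)

def encrypt_user_key(api_key: str) -> bytes:
    """Encrypt a user's OpenAI key before writing it to your database."""
    return fernet.encrypt(api_key.encode())

def decrypt_user_key(token: bytes) -> str:
    """Decrypt a stored key just before making a request on the user's behalf."""
    return fernet.decrypt(token).decode()
```

The plaintext key only ever exists in memory, immediately before a request is made.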
If you build the OpenAI integration with Paragon, you can embed an authentication experience with a single paragon.connect("openai"); call. Paragon encrypts and securely stores the API key, and automatically makes all requests to the OpenAI API with that key, for that user.
Step 3: Choose the right OpenAI API for your application
OpenAI offers several APIs, each with a specific focus, so you'll need to choose the one that best fits the use case you want to provide your users. Here are the most commonly used APIs:
- For conversational features in your application, use their Chat API
- For image generation, use their Images API
- For auto-completing text-based content for your users, use their Completions API
On top of that, you will likely want to fine-tune models with domain-specific data sets so that responses reflect your users’ context, in which case you’d use their fine-tuning API.
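The choice mostly comes down to which endpoint you call. As a rough map (paths as of this writing; check OpenAI’s API reference for the current ones):

```python
# Base URL plus the endpoint for each common use case.
OPENAI_BASE = "https://api.openai.com/v1"

ENDPOINTS = {
    "chat": f"{OPENAI_BASE}/chat/completions",      # conversational features
    "images": f"{OPENAI_BASE}/images/generations",  # image generation
    "completions": f"{OPENAI_BASE}/completions",    # text auto-completion
    "fine_tunes": f"{OPENAI_BASE}/fine-tunes",      # fine-tuning jobs
}
```

All of these accept JSON over HTTPS and authenticate the same way, so switching use cases is mostly a matter of swapping the path and payload.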
Step 4: Start making requests to the OpenAI API
While there are near-infinite possibilities for what requests you can make to the OpenAI API, here’s just one example.
Here’s a sample request using the Chat API. Remember to add your API key from Step 1 (or your user’s key from Step 2) in the Authorization header of each request.
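A minimal sketch in Python using only the standard library. The endpoint, headers, and payload shape follow OpenAI’s chat completions API; the helper name is ours:

```python
import json
import urllib.request

def build_chat_request(api_key: str, user_message: str,
                       model: str = "gpt-3.5-turbo") -> urllib.request.Request:
    """Build (but not yet send) a chat completion request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # the key from Step 1 or 2
        },
        method="POST",
    )

# To actually send it:
# with urllib.request.urlopen(build_chat_request(key, "Plan a trip to LA")) as resp:
#     reply = json.load(resp)["choices"][0]["message"]["content"]
```

The response contains a `choices` array; the generated text lives in each choice’s `message.content`.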
We decided to take 10 minutes when writing this tutorial to build a sample integration on Paragon. Using a sample chat application, we wanted to have the user ask for a trip itinerary to LA.
Here's what the workflow behind the scenes looked like. We use the user's chat message as an app event to trigger the native OpenAI integration workflow.
When we execute the test request in the workflow, we see the response generated by OpenAI's chat API.
In the last step, we post the response back to the sample app's API to present it in the chat.
You may have noticed in the screenshots that there was an input for 'model'. In general, you will have to supply a model parameter suited to your use case; OpenAI supports a number of models, so check their models documentation for the current list.
Step 5: Test your OpenAI integration
After you have written your integration code, the next step is to test it. OpenAI provides a Playground environment where you can experiment with prompts and parameters before committing them to code. It is important to thoroughly test your code and handle any errors that may occur.
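If you’re calling the API directly, you’ll want at least basic retry logic, since rate limits (HTTP 429) and transient server errors are common. A minimal sketch with exponential backoff, where `call_api` is a stand-in for your actual request code:

```python
import time

def with_retries(call_api, max_attempts: int = 3, base_delay: float = 1.0):
    """Retry a flaky API call with exponential backoff: 1s, 2s, 4s, ..."""
    for attempt in range(max_attempts):
        try:
            return call_api()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the error to the caller
            time.sleep(base_delay * (2 ** attempt))
```

In a real integration you would likely catch only retryable errors (timeouts, 429s, 5xx responses) and fail fast on authentication errors.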
If you build your integration in Paragon, you can view each step’s inputs and outputs, making it easy to test and debug your OpenAI integration logic. Once your logic is sound, you can push to prod. You also won’t have to build your own error handling, as Paragon’s auto-retry mechanism ensures execution durability when issues on OpenAI’s side cause a workflow to fail.
Step 6: Capture the value of your OpenAI integration
Having your customers supply their own API keys offloads the costs of making the requests to the OpenAI API. However, this does not mean you cannot monetize the value that this new AI capability creates for your users.
Think about how you can leverage this new OpenAI integration to upsell your customers, whether it be usage based, tier based, or a combination of both. For a full guide on how to price your OpenAI integration (and integrations in general), click here.
In conclusion, integrating with OpenAI's API is a powerful way to bring cutting-edge AI technologies to your application. By following the simple steps outlined in this blog post, you'll be able to easily create a native OpenAI integration that can perform tasks such as chat completion, language translation, natural language processing, and more.
If you'd like to learn more about Paragon and how it can streamline integration development for your product, book a demo with us here!