Step-by-Step: Connecting OpenAI (ChatGPT) to n8n for Powerful AI Automation
The world of automation is constantly evolving, and integrating Artificial Intelligence (AI) into your workflows can unlock incredible potential. OpenAI’s powerful language models are at the forefront of this revolution, capable of generating human-like text, summarizing information, translating languages, and much more.
n8n is a versatile workflow automation platform that allows you to connect various apps and services without writing code. While n8n offers a dedicated OpenAI node, understanding how to connect via the HTTP Request node provides a fundamental understanding of API interactions and offers flexibility for accessing less common endpoints or highly specific configurations not yet available in a dedicated node.
This tutorial will guide you step-by-step through connecting OpenAI’s API to n8n using the HTTP Request node, setting you on the path to building powerful AI-driven automations.
If you’re new to automation platforms, you might find our articles on What is Make.com Automation or general automation concepts helpful as a starting point, though this tutorial focuses on n8n. Value Added Tech specializes in helping businesses build complex, reliable automations across various platforms.
Why Connect OpenAI to n8n?
Combining n8n and OpenAI allows you to automate tasks that previously required human intelligence or complex scripting. Imagine automatically:
- Summarizing long articles or emails.
- Generating marketing copy based on product descriptions.
- Categorizing customer feedback.
- Translating text from one language to another.
- Creating personalized email responses.
n8n acts as the orchestrator, fetching data from one source, sending it to OpenAI for processing, and then taking action based on OpenAI’s response – all automatically.
Prerequisites
Before you begin, ensure you have:
- An OpenAI Account and API Key: You need an account on the OpenAI platform and an active API key. You can obtain one from your OpenAI API dashboard. Keep this key secret – it’s like a password!
- An n8n Instance: You need a running instance of n8n, whether it’s cloud-hosted or self-hosted.
- Basic Understanding of n8n: Familiarity with adding nodes, connecting them, and running workflows will be beneficial.
Setting Up OpenAI Credentials Securely in n8n
It’s crucial to handle your OpenAI API key securely. n8n’s built-in Credentials feature is the best way to do this.
- Navigate to Credentials: In your n8n instance, click on "Credentials" in the left sidebar.
- Add New Credential: Click the "+ New Credential" button.
- Search for HTTP Header Auth: In the search bar, type "HTTP Header Auth" and select it. This is a generic credential type perfect for APIs that use headers for authentication, like OpenAI.
- Configure the Credential:
- Name: Give your credential a descriptive name, like "OpenAI API Key".
- Authentication: Select "Header Auth".
- Header Name: Enter
Authorization
. - Header Value: Enter
Bearer YOUR_API_KEY_HERE
. ReplaceYOUR_API_KEY_HERE
with your actual OpenAI API key. Make sure there’s a space afterBearer
. - Domain: You can leave this blank or enter
api.openai.com
.
- Save the Credential: Click "Save".
Your OpenAI API key is now securely stored and can be reused across multiple HTTP Request nodes without exposing the key itself within the workflow design.
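Under the hood, the credential simply contributes one request header. As a minimal sketch (the `buildAuthHeader` helper is hypothetical and purely for illustration; n8n assembles this header for you at execution time):

```javascript
// Hypothetical helper illustrating the header format the "Header Auth"
// credential produces. n8n builds this for you when the workflow runs,
// so the key never appears in the workflow design itself.
function buildAuthHeader(apiKey) {
  // Note the single space between "Bearer" and the key -- omitting it
  // is a common cause of 401 Unauthorized errors.
  return { Authorization: `Bearer ${apiKey}` };
}

console.log(buildAuthHeader("sk-example-key"));
// { Authorization: 'Bearer sk-example-key' }
```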
Core Integration: Using the HTTP Request Node
The HTTP Request node allows n8n to send requests to any web endpoint, including the OpenAI API.
- Add an HTTP Request Node: In your n8n workflow editor, add an "HTTP Request" node.
- Connect the Node: Connect it to a preceding node (e.g., a Start node, or a node that provides data you’ll send to OpenAI).
- Configure Basic Settings:
- Authentication: Select "Predefined Credential". Choose the "OpenAI API Key" credential you created earlier from the dropdown.
- Request Method: OpenAI API requests for completions are typically `POST`. Select "POST".
- URL: The base URL for the OpenAI API is `https://api.openai.com/v1/`. The specific endpoint depends on the type of completion you want. For current models like GPT-3.5-turbo and GPT-4, the endpoint is `/chat/completions`, so the full URL is `https://api.openai.com/v1/chat/completions`.
- Configure Headers:
- By default, the HTTP Request node with a "Header Auth" credential selected will automatically add the `Authorization: Bearer ...` header.
- You still need to tell the API you’re sending JSON data. In the "Headers" section, click "Add Header".
- Name: Enter `Content-Type`.
- Value: Enter `application/json`.
- Configure Body:
- OpenAI API requests for completions require a JSON body containing parameters like the model, the prompt/messages, and other settings.
- In the "Body" section, set Body Content to "JSON".
- You will now define the JSON structure. The required fields for the `/chat/completions` endpoint are `model` and `messages`.
- Click "Add Field" twice. Set the keys to `model` and `messages`.
Understanding OpenAI API Request Parameters (`/chat/completions`)
Let’s break down the key parameters you’ll add to the JSON body:
- `model` (string, Required): Specifies the AI model to use. Common choices include:
- `gpt-3.5-turbo` (fast, cost-effective)
- `gpt-4o` (more capable, higher cost)
- Choose the model that best fits your needs for performance, cost, and capability.
- `messages` (array of objects, Required): This is where you provide the conversation history and the prompt for the AI. Each object in the array represents a message and has two keys:
- `role` (string): The role of the message’s author. Can be `system` (setting the context or behavior), `user` (the prompt or question), or `assistant` (a previous AI response, used for multi-turn conversations).
- `content` (string): The text of the message.
- For a simple, single prompt, you’ll typically include a `system` message (optional but recommended to guide the AI) and a `user` message with your actual prompt.
- Example JSON structure for `messages`:

```json
[
  {"role": "system", "content": "You are a helpful assistant that summarizes text."},
  {"role": "user", "content": "Summarize the following text: [Your text here]"}
]
```

- You can use n8n expressions (e.g., `{{ $json.text_to_summarize }}`) within the `content` fields to dynamically insert data from previous nodes.
- `temperature` (number, Optional): Controls the randomness of the output. Higher values (e.g., 0.8) make the output more random and creative, while lower values (e.g., 0.2) make it more focused and deterministic. The range is 0 to 2; the default is 1.
- `max_tokens` (integer, Optional): The maximum number of tokens (roughly words or pieces of words) the AI should generate in the response. This helps control the length and cost of the output. The total number of tokens (prompt + completion) must be within the model’s limit.
- `n` (integer, Optional): The number of chat completion choices to generate for each input message. Default is 1.
- `stop` (string or array of strings, Optional): Up to 4 sequences where the API will stop generating further tokens.
You can add other optional parameters as needed by clicking "Add Field" in the JSON body configuration.
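Putting the parameters together, here is a sketch of the full body the node ends up sending, shown as a plain JavaScript object rather than n8n’s "Add Field" UI (the specific values are illustrative defaults, not requirements):

```javascript
// Illustrative request body for /chat/completions combining the parameters
// described above. In n8n you build this field by field; the serialized
// result below is what actually goes over the wire.
const requestBody = {
  model: "gpt-3.5-turbo",
  messages: [
    { role: "system", content: "You are a helpful assistant that summarizes text." },
    { role: "user", content: "Summarize the following text: [Your text here]" },
  ],
  temperature: 0.7, // moderately focused output
  max_tokens: 150,  // cap the response length (and cost)
};

// The node serializes the body to JSON before sending it:
const payload = JSON.stringify(requestBody);
console.log(JSON.parse(payload).messages.length); // 2
```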
Handling API Responses
When the HTTP Request node sends a successful request to the OpenAI API, it will receive a JSON response. The node automatically parses this JSON, making the data available to subsequent nodes in your workflow.
For the `/chat/completions` endpoint, a successful response will look something like this:
```json
{
  "id": "chatcmpl-...",
  "object": "chat.completion",
  "created": 1677649420,
  "model": "gpt-3.5-turbo-0613",
  "system_fingerprint": "fp_...",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "This is the AI generated text."
      },
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 50,
    "completion_tokens": 20,
    "total_tokens": 70
  }
}
```
The actual AI-generated text is located in the `content` field within the first object (`index: 0`) of the `choices` array. To access this in a subsequent n8n node, you would use an expression like:

`{{ $json.choices[0].message.content }}`
You can use a Set node or a Code node to extract this specific value and make it easily available for further processing (e.g., sending it in an email, saving it to a database).
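If you go the Code node route, the same extraction can be written defensively. A sketch (the `extractContent` helper is hypothetical; the optional chaining guards against an empty or error-shaped response):

```javascript
// Hypothetical helper mirroring the expression {{ $json.choices[0].message.content }}.
// Optional chaining means a missing "choices" array yields null instead of a crash.
function extractContent(response) {
  return response?.choices?.[0]?.message?.content ?? null;
}

const sampleResponse = {
  choices: [
    { index: 0, message: { role: "assistant", content: "This is the AI generated text." } },
  ],
};

console.log(extractContent(sampleResponse)); // "This is the AI generated text."
console.log(extractContent({}));             // null
```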
Practical Example Workflow: Summarizing a Blog Post
Let’s create a workflow that takes a blog post URL, fetches its content, and uses OpenAI to generate a summary.
Workflow Structure:
- Start Node: Triggers the workflow.
- Set Node: Holds the blog post URL as input data.
- HTTP Request Node (Fetch Page): Fetches the HTML content of the blog post URL.
- Code Node (Extract Text): Extracts the main text content from the HTML. Note: Robust HTML parsing is complex. This example will use a simplified approach. For real-world applications, consider more advanced techniques or libraries.
- HTTP Request Node (OpenAI Summarize): Sends the extracted text to OpenAI for summarization.
- Set Node (Extract Summary): Extracts the summary text from OpenAI’s response.
- Output: The final summary is available in this node. You could connect this to an email node, a database node, etc.
Step-by-Step Implementation:
- Start Node: Add a Start node. You can leave it as default (Manual trigger) for testing.
- Set Node (Input URL):
- Add a Set node. Connect it to the Start node.
- In the "Values" section, click "Add Value".
- Value Name: `blog_url`
- Value: Enter the URL of a blog post you want to summarize (e.g., `https://vatech.io/blog/how-to-automate-workflows-in-gohighlevel/`).
- HTTP Request Node (Fetch Page):
- Add an HTTP Request node. Connect it to the Set node.
- Authentication: None (usually, websites don’t require auth to fetch content).
- Request Method: `GET`
- URL: Use an expression to get the URL from the previous node: `{{ $json.blog_url }}`
- Response Format: Set this to "String", since the page returns HTML rather than JSON. The raw HTML will then be available in the `data` field of the node’s output.
- Code Node (Extract Text):
- Add a Code node. Connect it to the HTTP Request (Fetch Page) node.
- This is where we’d ideally parse HTML properly. For simplicity in this tutorial, we’ll use a very basic regex approach to strip tags. It won’t be perfect, but it demonstrates the data manipulation step; for real-world applications, use a proper HTML parsing library or an external service.
- In the Code node, replace the default code with something like this (a very basic example that strips some HTML tags):
```javascript
const items = [];
for (const item of $input.all()) {
  const htmlBody = item.json.data; // Access the response body
  // Simple regex to strip some HTML tags and reduce whitespace.
  // WARNING: This is NOT a robust HTML parser!
  let textContent = htmlBody.replace(/<[^>]*>/g, ''); // Remove HTML tags
  textContent = textContent.replace(/\s+/g, ' ').trim(); // Reduce multiple spaces to a single space
  items.push({ json: { textContent: textContent } });
}
return items;
```
- This code takes the `data` (raw body) from the previous HTTP Request, attempts to remove HTML tags, and outputs a new item with a `textContent` field.
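To see what the stripping step does outside of n8n, the same logic can be run as a plain function on a sample string (again, a demonstration only, not a robust HTML parser):

```javascript
// The Code node's stripping logic as a standalone function for experimentation.
function stripHtml(htmlBody) {
  let textContent = htmlBody.replace(/<[^>]*>/g, ''); // remove tags
  return textContent.replace(/\s+/g, ' ').trim();     // collapse whitespace
}

const sampleHtml = "<html><body><h1>Title</h1>  <p>Some   <b>bold</b> text.</p></body></html>";
console.log(stripHtml(sampleHtml)); // "Title Some bold text."
```

Note how tags glued directly together (like `</h1><p>` with no whitespace between their text) would run words together; that is one of the many reasons a real parser is preferable.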
- HTTP Request Node (OpenAI Summarize):
- Add an HTTP Request node. Connect it to the Code node.
- Authentication: "Predefined Credential" -> "OpenAI API Key" (the credential you created).
- Request Method: `POST`
- URL: `https://api.openai.com/v1/chat/completions`
- Headers: Add Header -> Name: `Content-Type`, Value: `application/json`.
- Body: Body Content: "JSON". Add Field: `model` -> Value: `gpt-3.5-turbo`. Add Field: `messages` -> Value:

```json
[
  {"role": "system", "content": "You are a helpful assistant that summarizes text clearly and concisely."},
  {"role": "user", "content": "Summarize the following text:\n\n{{ $json.textContent }}"}
]
```

- Use the expression `{{ $json.textContent }}` to inject the text extracted by the Code node into the prompt.
- Add Field: `max_tokens` -> Value: `150` (or your desired summary length).
- Add Field: `temperature` -> Value: `0.7` (or your preferred creativity level).
- Set Node (Extract Summary):
- Add a Set node. Connect it to the OpenAI HTTP Request node.
- In the "Values" section, click "Add Value".
- Value Name: `summary`
- Value: Use an expression to access the summary text from OpenAI’s response: `{{ $json.choices[0].message.content }}`
- Run the Workflow: Click "Execute Workflow" in the top right. Check the output of each node to see the data flow, especially the final Set node, which should contain the generated summary.
This workflow demonstrates fetching data, processing it (even with a simplified approach), sending it to OpenAI via the HTTP Request node with secure credentials, and extracting the AI’s response.
Troubleshooting Common Issues
- 401 Unauthorized Error: Your API key is likely incorrect, or the `Authorization` header format (`Bearer YOUR_KEY`) is wrong. Double-check your OpenAI credential in n8n.
- 400 Bad Request Error: The JSON body sent to OpenAI is malformed or missing required parameters (`model`, `messages`). Carefully review the JSON structure in your HTTP Request node’s Body configuration. Check for typos, missing commas, incorrect brackets/braces, or `$json` expressions that resolve to null or incorrect data types.
- 429 Too Many Requests Error: You’ve hit rate limits or your usage tier limit on OpenAI. Check your OpenAI usage dashboard and consider upgrading your plan or optimizing your workflow to reduce API calls (e.g., process data in batches).
- Empty or Unexpected Response: Check the output of the OpenAI HTTP Request node. Is the response structure what you expected? Did you use the correct endpoint (`/chat/completions` requires the `messages` format, not `prompt`)? Is the AI generating unwanted content? Adjust your `system` message and prompt.
- n8n Expression Errors: Ensure expressions like `{{ $json.some_field }}` correctly reference the output data of the immediately preceding node, or use `$item(0).json.some_field` to reference a specific item from a node further back.
- HTML Parsing Issues in Example: The simple Code node regex is fragile. For real websites, you’ll need a more robust method (e.g., a custom node that uses a proper HTML parsing library).
Going Beyond with Value Added Tech
While the HTTP Request node in n8n offers immense flexibility for integrating with APIs like OpenAI, building, maintaining, and scaling complex automation solutions requires deep expertise. Handling large data volumes (Scaling Make.com Enterprise High Volume Automation), ensuring operational stability (Make.com Health Check), implementing advanced error handling (How to Handle Errors in Make.com), and integrating with numerous disparate systems (How to Sync Data Across Platforms with Make.com) can become challenging.
Value Added Tech specializes in creating bespoke automation architectures and implementing robust, scalable solutions. Whether you need to integrate OpenAI into complex marketing workflows (Marketing Automation X Make.com), revolutionize customer service with AI chatbots (AI Chatbots Revolutionizing Customer Service for a Social Media Platform), automate call centers (Automating Call Center with AI Calling Agents), or streamline financial operations (Transforming Financial Operations Through Intelligent Automation), our team has the experience to design and implement solutions that deliver significant ROI and competitive advantage.
We work with a range of leading platforms like Make.com, Airtable, GoHighLevel, Salesforce, and more, combining them with AI to solve your unique business challenges. If your automation needs extend beyond simple API calls, we’re here to help you architect and implement powerful, reliable, and scalable systems.
Conclusion
Connecting OpenAI to n8n using the HTTP Request node is a fundamental skill that opens up a world of possibilities for automating tasks with artificial intelligence. By understanding how to configure authentication, format your requests, and handle responses, you can leverage the power of large language models within your workflows.
This tutorial provides the foundation for building custom AI automations. As you explore more complex use cases, remember that platforms like n8n are powerful tools, and integrating them effectively often benefits from strategic planning and deep technical knowledge, areas where Value Added Tech excels.