Using ChatGPT via the API in Python

Reading time: 10 minutes

ChatGPT has taken the world by storm, changing the way we generate text and interact with AI. It is already being used to write tutorials, auto-generate product descriptions, and even power Bing’s search results. With the potential to transform numerous industries, there has never been a better time to learn how to use the ChatGPT API from Python. In this guide, we’ll explore how the API works, what it is useful for, and how you can make the most of it.


Table of Contents

  1. Introduction to the ChatGPT API in Python
  2. When to Use the API Instead of the Web Interface
  3. Setting Up an OpenAI Developer Account
  4. Securely Storing Your Account Credentials
  5. Working with Python
  6. The Code Pattern for Calling GPT via the API
  7. Your First Conversation: Generating a Dataset
  8. Using a Helper Function to Call GPT
  9. Reusing Responses from the AI Assistant
  10. Using GPT in a Pipeline


1. Introduction to the ChatGPT API in Python

ChatGPT is a state-of-the-art large language model developed by OpenAI. While the web interface to ChatGPT is ideal for interactive use, the API lets you generate text programmatically and integrate the model with other applications and data pipelines. The openai Python package is the gateway for calling the ChatGPT API from Python.

To make use of the API, you’ll need to create a developer account with OpenAI, store your API key securely, and set up your Python environment.

Please note: OpenAI charges for the use of the GPT API. At the time of writing, pricing is $0.002 per 1,000 tokens, with 1,000 tokens equaling roughly 750 words. Running all examples in this guide should cost less than 2 US cents; however, rerunning tasks will incur additional charges.
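As a rough back-of-the-envelope example: a prompt and response totalling about 1,500 words corresponds to roughly 2,000 tokens, so that call would cost around 2 × $0.002 = $0.004.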

2. When to Use the API Instead of the Web Interface

While the ChatGPT web application offers a user-friendly interface, the API is more suitable for integrating AI into data pipelines or software applications. Some possible use cases for data practitioners include:

  • Summarizing data or generating a report from a database or another API
  • Embedding GPT functionality in a dashboard for automatic text summaries
  • Providing a natural language interface to your data mart
  • Conducting research by pulling in journal papers and having GPT summarize the results

3. Setting Up an OpenAI Developer Account

To use the ChatGPT API from Python, you need to create a developer account with OpenAI, which requires your email address, phone number, and debit or credit card details. Follow these steps:

  1. Go to the API signup page.
  2. Create your account by providing your email address and phone number.
  3. Visit the API keys page.
  4. Create a new secret key and take a copy of this key. (If you lose it, delete the key and create a new one.)
  5. Go to the Payment Methods page.
  6. Click ‘Add payment method’ and fill in your card details.

4. Securely Storing Your Account Credentials

Keeping your secret key safe is crucial to prevent unauthorized use of the API on your account. The following steps describe how to securely store your key using the DataCamp Workspace. If you’re using a different platform, consult the documentation for that platform or ask ChatGPT for advice; a minimal local alternative is sketched after the steps below.

  1. In your workspace, click on Integrations.
  2. Click on the “Create integration” plus button.
  3. Select an “Environment Variables” integration.
  4. In the “Name” field, type “OPENAI”. In the “Value” field, paste your secret key.
  5. Click “Create” and connect the new integration.
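If you are not using DataCamp Workspace, one lightweight alternative (a sketch, assuming a local Jupyter or terminal session) is to prompt for the key at runtime with Python’s built-in getpass, so the key never appears in your code or notebook:

# Minimal local alternative (assumption: not using DataCamp Workspace).
# Prompt for the key once per session instead of hard-coding it.
import os
from getpass import getpass

if "OPENAI" not in os.environ:
    os.environ["OPENAI"] = getpass("Paste your OpenAI secret key: ")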

5. Working with Python

To use the ChatGPT API from Python, you need to import the os and openai packages. If you’re using a Jupyter Notebook, it’s helpful to import some functions from IPython.display. Some examples also use the yfinance package to retrieve stock prices.

import os
import openai
from IPython.display import display, Markdown
import yfinance as yf

Set openai.api_key to the value of the OPENAI environment variable:

openai.api_key = os.environ["OPENAI"]
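If the environment variable has not been set, the line above raises a bare KeyError. A slightly more defensive variant (an optional sketch) fails with a clearer message:

# Optional: fail with a clearer error if the key is missing.
api_key = os.environ.get("OPENAI")
if api_key is None:
    raise RuntimeError("The OPENAI environment variable is not set; see the previous section.")
openai.api_key = api_key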

6. The Code Pattern for Calling GPT via the API

The code pattern to call the OpenAI API and get a chat response is as follows:

response = openai.ChatCompletion.create(
    model="MODEL_NAME",
    messages=[
        {"role": "system", "content": 'SPECIFY HOW THE AI ASSISTANT SHOULD BEHAVE'},
        {"role": "user", "content": 'SPECIFY WHAT YOU WANT THE AI ASSISTANT TO SAY'}
    ])

Model Names for GPT

Model names can be found on the Model Overview page of the developer documentation. In this guide, we’ll use gpt-3.5-turbo, the model behind ChatGPT that currently has public API access. (When gpt-4 becomes broadly available, you may want to switch to it.)

Message Types

There are three types of messages documented in the Introduction to the Chat documentation:

  • system messages describe the behavior of the AI assistant. A useful system message for data science use cases is “You are a helpful assistant who understands data science.”
  • user messages describe what you want the AI assistant to say. We’ll cover examples of user messages throughout this guide.
  • assistant messages describe previous responses in the conversation. We’ll cover how to have an interactive conversation in later sections.

The first message should be a system message. Additional messages should alternate between the user and the assistant.
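For example, a message list that follows this ordering might look like the following (the content strings are just placeholders):

messages = [
    {"role": "system", "content": "You are a helpful assistant who understands data science."},
    {"role": "user", "content": "Create a small dataset of monthly sales."},      # first prompt
    {"role": "assistant", "content": "Here is the dataset you asked for: ..."},   # previous reply
    {"role": "user", "content": "Now calculate the mean of the sales column."}    # follow-up prompt
]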

7. Your First Conversation: Generating a Dataset

Generating sample datasets is useful for testing your code against different data scenarios or for demonstrating code to others. To get a useful response from GPT, you need to be precise and specify the details of your dataset, including:

  • The number of rows and columns
  • The names of the columns
  • A description of what each column contains
  • The output format of the dataset

Here’s an example user message to create a dataset:

Create a small dataset about total sales over the last year. The format of the dataset should be a data frame with 12 rows and 2 columns. The columns should be called "month" and "total_sales_usd". The "month" column should contain the shortened forms of month names from "Jan" to "Dec". The "total_sales_usd" column should contain random numeric values taken from a normal distribution with mean 100000 and standard deviation 5000. Provide Python code to generate the dataset, then provide the output in the format of a markdown table.

Include this message in the previously discussed code pattern:

system_msg = 'You are a helpful assistant who understands data science.'
user_msg = 'Create a small dataset about total sales over the last year. The format of the dataset should be a data frame with 12 rows and 2 columns. The columns should be called "month" and "total_sales_usd". The "month" column should contain the shortened forms of month names from "Jan" to "Dec". The "total_sales_usd" column should contain random numeric values taken from a normal distribution with mean 100000 and standard deviation 5000. Provide Python code to generate the dataset, then provide the output in the format of a markdown table.'

response = openai.ChatCompletion.create(model="gpt-3.5-turbo",
                                        messages=[{"role": "system", "content": system_msg},
                                                  {"role": "user", "content": user_msg}])

Check the GPT Response

API calls can fail for many reasons, such as internet connectivity problems, server issues, or running out of API credit. You should check that the response you get back is complete before using it.

Each response includes a finish_reason field, which takes one of four values documented in the Response format section of the Chat documentation.

  • stop: API returned complete model output
  • length: Incomplete model output due to max_tokens parameter or token limit
  • content_filter: Omitted content due to a flag from content filters
  • null: API response still in progress or incomplete

The GPT API sends data to Python in JSON format, so the response variable contains deeply nested lists and dictionaries. The finish_reason is stored as follows:

response["choices"][0]["finish_reason"]

Extract the AI Assistant’s Message

The text generated by GPT is always stored in the same place:

response["choices"][0]["message"]["content"]

The response content can be printed with print(content) or rendered as Markdown in Jupyter notebooks using display(Markdown(content)).
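Putting the two steps together for the dataset example above:

content = response["choices"][0]["message"]["content"]
display(Markdown(content))  # or print(content) outside a notebook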

8. Using a Helper Function to Call GPT

Writing repetitive boilerplate code can be tedious. Having a wrapper function to abstract away the boring parts is useful. The following function takes two arguments:

  • system: A string containing the system message.
  • user_assistant: An array of strings that alternate user message then assistant message.

The return value is the generated content.

def chat(system, user_assistant):
    assert isinstance(system, str), "`system` should be a string"
    assert isinstance(user_assistant, list), "`user_assistant` should be a list"
    # The system message always comes first.
    system_msg = [{"role": "system", "content": system}]
    # Even-indexed entries are user messages, odd-indexed entries are assistant messages.
    user_assistant_msgs = [
        {"role": "assistant", "content": user_assistant[i]} if i % 2 else {"role": "user", "content": user_assistant[i]}
        for i in range(len(user_assistant))]

    msgs = system_msg + user_assistant_msgs
    response = openai.ChatCompletion.create(model="gpt-3.5-turbo",
                                            messages=msgs)
    # Fail loudly if the response was truncated or filtered.
    status_code = response["choices"][0]["finish_reason"]
    assert status_code == "stop", f"The status code was {status_code}."
    return response["choices"][0]["message"]["content"]

Example usage of this function:

response_fn_test = chat("You are a machine learning expert.", ["Explain what a neural network is."])

display(Markdown(response_fn_test))

9. Reusing Responses from the AI Assistant

In many situations, you will want to hold a longer conversation with the AI: you send a prompt, get a response back, and send another prompt to continue the chat. In that case, you need to include GPT’s previous response in the next call to the API so that GPT has the full context, which improves the accuracy and consistency of its replies.

To reuse GPT’s message, retrieve it from the response and pass it into a new call to chat.

Example: Analyzing the Sample Dataset

Let’s try calculating the mean of the sales column from the dataset previously generated. Note that because we didn’t use the chat() function the first time, we have to use the longer subsetting code to access the previous response text. If you use chat(), the code is simpler.

assistant_msg = response["choices"][0]["message"]["content"]

user_msg2 = 'Using the dataset you just created, write code to calculate the mean of the `total_sales_usd` column. Also include the result of the calculation.'

user_assistant_msgs = [user_msg, assistant_msg, user_msg2]

response_calc = chat(system_msg, user_assistant_msgs)

display(Markdown(response_calc))

10. Using GPT in a Pipeline

One of the significant advantages of using the API over the web interface is the ability to combine GPT with other APIs. Pulling in data from one or more sources and applying AI to it creates a powerful workflow.

Applying GPT AI to Weather Data

Here, we’ll pull in a weather forecast using the weather2 package and use GPT to come up with ideas for activities.

import weather  # provided by the weather2 package (pip install weather2)

location = "Miami"
forecast = weather.forecast(location)

fcast = forecast.tomorrow["12:00"]

user_msg_weather = f"In {location} at midday tomorrow, the temperature is forecast to be {fcast.temp}, the wind speed is forecast to be {fcast.wind.speed} m/s, and the amount of precipitation is forecast to be {fcast.precip}. Make a list of suitable leisure activities."

response_activities = chat("You are a travel guide.", [user_msg_weather])

display(Markdown(response_activities))
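The same pattern works with the yfinance package imported earlier. The sketch below is illustrative only: the ticker, period, and prompt wording are assumptions, not part of the original example. It pulls a month of closing prices for a stock and asks GPT to summarize the trend:

# Illustrative sketch: summarize recent stock prices with GPT.
ticker = "MSFT"  # hypothetical choice of ticker
prices = yf.Ticker(ticker).history(period="1mo")["Close"]

# Flatten the price series into plain text for the prompt.
price_lines = "\n".join(f"{date.date()}: {close:.2f}" for date, close in prices.items())

user_msg_stock = (
    f"Here are the daily closing prices in USD for {ticker} over the last month:\n"
    f"{price_lines}\n"
    "Summarize the overall trend in two or three sentences."
)

response_stock = chat("You are a helpful assistant who understands financial data.", [user_msg_stock])

display(Markdown(response_stock))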

In conclusion, the ChatGPT API gives Python users a powerful tool for generating text and building AI into applications and data pipelines. By understanding the basic code pattern, writing a small helper function, and combining GPT with other APIs in a pipeline, you can unlock the full potential of this technology and transform your workflow. The future of text generation is here, and it’s time to seize the opportunity.

Take care 🙂

Here are a few hand-picked articles that you should read as well:
Yahoo Finance Stock Data API

Like this post? Don’t forget to share it and comment!

