by aldern00b

"Unlock the Power of AI: How to Create Custom Chat Bots Using the ChatGPT API"

I was watching this interview with David Bombal here: https://youtu.be/vF-MQmVxnCs?si=L05-ibiQXIzY9tQY and it made me really want to know how I could create my own GPT queries to build some fun tools like Daniel Miessler has done.


I started with the OpenAI documentation, which isn't bad, but I'm a visual learner, so I found an OpenAI course here: https://campus.datacamp.com/courses/working-with-the-openai-api


Let's share.


Let's start with a quick "Completions" query to OpenAI. These "Completions" are just that - a question with an answer, nothing more. There are multiple endpoints that can be used - this is a great one for beginners.


First, we'll import the OpenAI client

from openai import OpenAI

Then create the OpenAI client and set your API key. If you need an API key, you can create one here: https://platform.openai.com/api-keys

client = OpenAI(api_key="API KEY HERE")

Now we'll create the prompt to be sent. Note: triple quotes (""") create a multi-line string

prompt="""
Life is like a box of chocolates
"""

Now we create a request to OpenAI's Completions endpoint

response = client.completions.create(
  # Specify the model to use.
  model="gpt-3.5-turbo-instruct",
  prompt=prompt,
  # Set how long the response can be (1 token is roughly 4 characters).
  max_tokens=100,
  # You can also vary the response by setting the temperature - how random
  # the response will be. It ranges from 0 to 2, with 0 giving more
  # consistent outputs and 2 giving more diverse, creative ones.
  temperature=1
)

Now we'll print the response. You COULD just print it like this:

print(response)

but you'll notice it's not very user friendly, as it gives you ALL the data about the query and prompt you sent. The better way is to drill down through the data you got back. Here we're going to take the response, find the array called choices, pick the first (and only) item from that array, and print the text it has inside:

print(response.choices[0].text)

And that's it! That's a simple chat request with a single answer. Here's the entire block of code we would send:

from openai import OpenAI
client = OpenAI(api_key="KEY HERE")
prompt="Life is like a box of chocolates"
response = client.completions.create(
	model="gpt-3.5-turbo-instruct",
	prompt=prompt, 
	max_tokens=100
)
print(response.choices[0].text)

We can build on that though. Let's keep going.


Now that we know how to get a response from a query, let's train it a bit and tell it how we would like that data returned to us. To do this we're going to make it a bit more chat based. That means we're going to outline the characters as well as changing the response and endpoint we use.


To enable multi-turn conversations, the endpoint supports three different roles:

System: controls the assistant's behavior

User: instructs the assistant

Assistant: responds to the user's instruction


Instead of using "prompt" like we did before to prompt a response, we're going to change it to messages. This example creates the system we want to use, and then our prompt will actually be a "user" who is asking a question:

messages=[
    {"role": "system", "content": "You are a data science tutor who speaks concisely."},
    {"role": "user", "content": "What is the difference between mutable and immutable objects?"}
]

We're now going to create a response like we did for the first example but this time we're adding the "chat" part.

response = client.chat.completions.create(
	model="gpt-3.5-turbo",
 	messages=messages, 
	max_tokens=100
)
print(response.choices[0].message.content)

Not too much harder right?! I think there's some exciting stuff we can do with that but let's add the assistant feature so we can really train it.


We can now give it examples of how we would like our questions answered, using the assistant role like this:

messages=[
    {"role": "system", "content": "You are a sarcastic data science tutor who speaks concisely."},
## Here's where we provide some examples from a user and then the assistant
    {"role": "user", "content": "How do you define a Python list?"},
## Remember, the assistant message is an example answer that shows the model how to respond to the user
    {"role": "assistant", "content": "I thought you said you knew how to program! Lists are defined by enclosing a comma-separated sequence of objects inside square brackets []."},
## This is the real question now that we've trained
    {"role": "user", "content": "What is the difference between mutable and immutable objects?"}
]

We again create a request to the CHAT completions endpoint and output the response

response = client.chat.completions.create(
	model="gpt-3.5-turbo",
	messages=messages,
	max_tokens=100
)
print(response.choices[0].message.content)

OK now we're gonna get complicated. We're going to back-feed the questions and answers as we go so that it can answer based on the previous parts of the conversation - just like the real ChatGPT!


We start the same way we did before with a role-based message system:

messages=[
    {"role": "system", "content": "You are a sarcastic data science tutor who speaks concisely."}
]

Now we're going to store ALL the user questions in a list variable

    user_qs = ["Why is Python so popular?", "Summarize this in one sentence."]

We'll loop over the user's questions we just made and print them out

    for q in user_qs: 
        print("User: ", q)

We're going to create a user dictionary for the current question and assign it the user role.

        user_dict = {"role": "user", "content": q}

Once we have that, we'll append the user dictionary to the messages so the model can use the previous questions and responses when answering the next question. Because we're in a loop, the append adds each new question after the previous question and answer.

        messages.append(user_dict)

Ok, time to ask our first question and get our answer. We'll create a request to the CHAT completions endpoint

        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=messages,
            max_tokens=100
        )

Next, we're going to create an assistant message from the answer so the model can use it when answering the next question

        assistant_dict = {"role": "assistant", "content": response.choices[0].message.content}

Remember we're still in the loop, so we'll again append it to the messages so the model has the full history to use. Let's also print it out so we can keep track

        messages.append(assistant_dict)
        print("Assistant: ", response.choices[0].message.content, "\n")
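Putting all of those loop pieces together, here's the whole script in one place. This is a sketch, unindented to the top level, and it assumes your key is in an OPENAI_API_KEY environment variable (my assumption, not part of the snippets above) instead of hard-coded:

```python
import os

# The conversation starts with just the system message.
messages = [
    {"role": "system", "content": "You are a sarcastic data science tutor who speaks concisely."}
]
user_qs = ["Why is Python so popular?", "Summarize this in one sentence."]

# Assumes your key is in the OPENAI_API_KEY environment variable.
api_key = os.environ.get("OPENAI_API_KEY")
if api_key:
    from openai import OpenAI
    client = OpenAI(api_key=api_key)
    for q in user_qs:
        print("User: ", q)
        messages.append({"role": "user", "content": q})
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=messages,
            max_tokens=100
        )
        # Feed the answer back in so the next question has the full history.
        messages.append({"role": "assistant", "content": response.choices[0].message.content})
        print("Assistant: ", response.choices[0].message.content, "\n")
```

By the second pass through the loop, messages holds the system message, the first question, and the first answer, which is what lets "Summarize this in one sentence." make sense.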


Hate Speech or Text Moderation

We can use OpenAI to moderate text and classify it into categories or give us a probability score:

from openai import OpenAI
client = OpenAI(api_key="API KEY")
response = client.moderations.create(
	model="text-moderation-latest",
	input="I could kill for a hamburger."
)

The output gives us a few fields we can check:

  • categories: true or false values for each violation category

  • category_scores: the confidence score for each violation; higher values mean greater certainty of a violation

  • flagged: a single true or false value indicating a violation


Pulling the category_scores from this example:

print(response.results[0].category_scores)
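You can drill into the other fields the same way. Here's a quick sketch checking the overall flag plus one specific category; it assumes your key is in an OPENAI_API_KEY environment variable (my assumption, not part of the example above):

```python
import os

text_to_check = "I could kill for a hamburger."

# Assumes your key is in the OPENAI_API_KEY environment variable.
api_key = os.environ.get("OPENAI_API_KEY")
if api_key:
    from openai import OpenAI
    client = OpenAI(api_key=api_key)
    response = client.moderations.create(
        model="text-moderation-latest",
        input=text_to_check
    )
    result = response.results[0]
    print("flagged:", result.flagged)                           # overall true/false
    print("violence:", result.categories.violence)              # true/false for this category
    print("violence score:", result.category_scores.violence)   # higher = more certain
```

For a joking sentence like this one, you'd expect a low violence score, which is exactly why the scores are more useful than the raw true/false flags.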

Speech to Text using Whisper

OpenAI's Whisper lets you transcribe audio or even translate a recording into English. It supports mp3, mp4, mpeg, mpga, m4a, wav and webm with a 25MB limit. Just note that you should not send sensitive information in the audio you give the model.


To begin, let's open the audio file you'll be using

audio_file = open("path/to/file/meeting_recording_20240903.mp3", "rb")

Next, we call the Whisper model to transcribe it like this:

response = client.audio.transcriptions.create(model="whisper-1", file=audio_file)

We can then print the response

print(response.text)


To use the translation option, you follow all the steps above but change the response variable to this:

response = client.audio.translations.create(model="whisper-1", file=audio_file)

We can improve the response by providing a prompt, like we've done for some of the other use cases above. The example below lets it know to allow filler words like ummm and ahh

prompt="Ok, ummm... this is how we should do it, like, uhh, sometimes we need to"

Or you can provide context of the audio

prompt="A discussion on how big bird eats children"

Here's how we would modify our request to include these:

audio_file = open("non_english.m4a", "rb")
prompt= "The transcript is about AI trends and OpenAI"
response = client.audio.translations.create(model="whisper-1",
                                            file = audio_file,
                                            prompt=prompt)
                                            
print(response.text)

OK, so let's look at combining these together. This is called "MODEL CHAINING"


First let's choose the file we're using

audio_file = open("meeting.mp4", "rb")

We'll ask for a transcription of this file

audio_response = client.audio.transcriptions.create(model="whisper-1", file=audio_file)

and save the output to a variable

transcript = audio_response.text

Next we'll create a prompt for the chat model and then attach the full transcript we've received to the end of it.

prompt = "extract the attendee names from the start of this meeting transcript: " + transcript

Now we send the newly created prompt that also has the created transcript to a completions engine

chat_response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": prompt}
    ]
)
print(chat_response.choices[0].message.content)




Example supporting prompt:

This is a prompt asking the model to classify sentiment (though it could also be used for general categorization). The more examples you give it, the more accurate the classification becomes.

prompt="""
classify the sentiment of the following statements as either negative, positive 
or neutral:
Unbelievably good!
Shoes fell apart on the second use.
The shoes look nice, but they aren't very comfortable.
Can't wait to show them off!
"""
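Here's a sketch of sending that prompt through the Chat Completions endpoint, the same way we did earlier; it assumes your key is in an OPENAI_API_KEY environment variable (my assumption, not part of the original prompt):

```python
import os

prompt = """
classify the sentiment of the following statements as either negative, positive
or neutral:
Unbelievably good!
Shoes fell apart on the second use.
The shoes look nice, but they aren't very comfortable.
Can't wait to show them off!
"""

# Assumes your key is in the OPENAI_API_KEY environment variable.
api_key = os.environ.get("OPENAI_API_KEY")
if api_key:
    from openai import OpenAI
    client = OpenAI(api_key=api_key)
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=100
    )
    print(response.choices[0].message.content)
```

You could tighten this further by putting the classification instructions in a system message and sending only the statements as the user message.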
