Prerequisite:
- Create OpenAI key
- Add key to Thunkable
- (Optional but recommended) Objects Tutorial
Blocks used:
Project Link - You will have to add your own keys
Table of Contents:
Prerequisite:
Blocks used:
Project Link - You will have to add your own keys
Explanation
Most of the time, you want the AI to respond or speak in a certain way, and the way to do this is to tell the AI how to speak. However, if you try to do this within just the prompt of the block, you may find that it either doesn’t work or is inconsistent.
The proper way to do it, according to OpenAI’s documentation, is to put that information into the ‘developer’ role. The ‘developer’ role carries instructions for how the AI should behave, while the ‘user’ role is what you want to tell the AI, or what the user of your app types into the app.
The image below shows the JavaScript code to do it. The second image shows the equivalent in Thunkable.
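For reference, here is a minimal sketch of the request body, assuming the Chat Completions message format from OpenAI’s docs. The API key and the actual network call are left out, and the user question is just a placeholder:

```javascript
// A sketch of the Chat Completions request body, assuming the message
// format from OpenAI's docs. No API key or network call is included;
// this only shows how the 'developer' and 'user' roles are arranged.
const body = {
  model: "gpt-4o-mini",
  messages: [
    // Instructions for how the AI should behave go in 'developer'.
    { role: "developer", content: "Talk like you are a super spy." },
    // What your app's user typed goes in 'user'.
    { role: "user", content: "What is the capital of France?" }
  ]
};

console.log(JSON.stringify(body, null, 2));
```

In Thunkable, the same structure is built with Objects blocks, which is why the Objects Tutorial is recommended as a prerequisite.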
If you want to change the way the AI talks, change the text that says ‘Talk like you are a super spy’ to whatever you want.
If you want to change the model, simply replace where it says ‘gpt-4o-mini’ with the name of the model you want to use.
OpenAI has a lot of different models. You can view them all [here](https://platform.openai.com/docs/models). Let’s go through the different models so you can pick the one that is best for you.
First, let’s look at what makes each model different and what all the information means. As an example, let’s look at the model that ChatGPT uses (https://platform.openai.com/docs/models/chatgpt-4o-latest).
Luckily, OpenAI gives us an overview, and this will give us the best idea of whether this will suit our needs.
Intelligence and Speed are self-explanatory.
Input and Output tell us what forms it accepts and returns. From left to right, it goes Text, Image, and Audio, respectively. Most models will have text for Input and Output, so unless you want more than that, you won’t need to be concerned about Input and Output.
Price tells us how much it will cost, and this is most likely the defining factor. Let’s go into more detail about Price.
Price
If you scroll down on the model’s page, you will see how much it will cost. Let’s break down what all this means.
First, let’s look into 1M (1 million) tokens. Tokens are the unit of measurement that OpenAI uses. According to OpenAI:
- 1 token ~= 4 chars in English
- 1 token ~= ¾ words
- 100 tokens ~= 75 words
Or
- 1–2 sentences ~= 30 tokens
- 1 paragraph ~= 100 tokens
- 1,500 words ~= 2048 tokens
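Those rules of thumb can be turned into a quick estimator. This is only a sketch using the “1 token ~= 4 chars” rule above, not OpenAI’s real tokenizer, so treat the numbers as rough:

```javascript
// Rough token estimate from character count, using OpenAI's
// "1 token ~= 4 chars in English" rule of thumb. This is not the
// real tokenizer, so expect it to be off by a little.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

console.log(estimateTokens("Hello, how are you today?")); // 25 chars → about 7 tokens
```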
By doing a little bit of math (using the 1,500 words ~= 2,048 tokens ratio), we can estimate that 1 million tokens is about 732,421 words. That means that for our input, every 732,421 words we feed into the AI costs 5 dollars. In other words, one cent buys 1,464 words (5 dollars is 500 cents, and 732,421 / 500 ≈ 1,464) from our user.
But for our output, or what the AI responds with, the price is 15 dollars per 1M tokens, or 15 dollars per 732,421 words. Doing the same math as for our input, one cent buys 488 words from the AI.
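The arithmetic above can be checked in a few lines, assuming the prices shown on the model page (5 dollars per 1M input tokens, 15 dollars per 1M output tokens):

```javascript
// Cost math from the guide, using the "1,500 words ~= 2,048 tokens"
// rule and the assumed prices: $5 / 1M input tokens, $15 / 1M output tokens.
const wordsPerMillionTokens = Math.floor(1_000_000 * 1500 / 2048); // ≈ 732,421 words

const inputWordsPerCent = Math.floor(wordsPerMillionTokens / 500);   // $5 = 500 cents
const outputWordsPerCent = Math.floor(wordsPerMillionTokens / 1500); // $15 = 1,500 cents

console.log(wordsPerMillionTokens, inputWordsPerCent, outputWordsPerCent); // 732421 1464 488
```

If you pick a different model, swap in its per-million-token prices and the same math tells you how many words each cent buys.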
Now that we know how much value a model has, let’s look at all the different models and narrow down our choices. For this guide, we will focus on text-to-text models (sending text to the AI and receiving text back from the AI). For other kinds of inputs and outputs, you will have to look through the models individually.
There are three categories we can look at: Reasoning models, Flagship chat models, and Cost-optimized models.
Most of these categories are self-explanatory. If you want a ChatGPT-like experience, you should go with one of the Flagship models. If you want something inexpensive, look at the Cost-optimized models. Lastly, if you need a model that can think things through deeply and process complex information, pick one of the Reasoning models.
Among the different models, you will see suffixes such as nano, mini, or pro. Nanos are very cheap and very fast, but their reasoning isn’t great. Minis have decent to high intelligence and are decently fast and decently cheap; they make good all-around models. Pros are your expensive, high-compute models: they have very high intelligence, but because of that, they are slower and more expensive to run.