[Solved] Current OpenAI integration is legacy and is being deprecated by OpenAI

Hi team,

I got an email saying that on January 4th, 2024, OpenAI will deprecate and shut down all their legacy models, including text-davinci-003. This is the model under the hood of the Thunkable integration, which uses the old API documented here: https://platform.openai.com/docs/api-reference/completions/create

Will this be fixed by Thunkable, and will the tool be updated to the new API? I presume everyone who has used the OpenAI integration in Thunkable would have received an email from OpenAI telling them this.

Looking forward to a solution. Thanks.


Welcome to Thunkable @george.prakashbn!

I don’t use OpenAI in my Thunkable projects; however, like you, I would guess that Thunkable will need to integrate a newer version.

The people who can probably help are @matt_conroy and @ioannis.

Thanks for flagging, I think this was originally announced back in July. I will check with the Product team and see what is in store.

From GPT-4 API general availability and deprecation of older models in the Completions API (emphasis mine):

Developers using other older completion models (such as text-davinci-003) will need to manually upgrade their integration by January 4, 2024 by specifying gpt-3.5-turbo-instruct in the “model” parameter of their API requests. gpt-3.5-turbo-instruct is an InstructGPT-style model, trained similarly to text-davinci-003. This new model is a drop-in replacement in the Completions API and will be available in the coming weeks for early testing.
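For anyone doing the manual upgrade described in that announcement, here is a minimal sketch of what "specifying gpt-3.5-turbo-instruct in the model parameter" looks like as a raw Completions API request. The prompt text and the `API_KEY` placeholder are assumptions for illustration; the request is built but deliberately not sent, so it runs without a live key.

```python
import json
import urllib.request

# Placeholder key -- substitute your own before actually sending.
API_KEY = "sk-..."

# The manual upgrade: set "model" to gpt-3.5-turbo-instruct, the
# drop-in replacement for text-davinci-003 in the Completions API.
payload = {
    "model": "gpt-3.5-turbo-instruct",  # replaces text-davinci-003
    "prompt": "Say hello in one short sentence.",
    "max_tokens": 32,
}

# Build the POST request against the Completions endpoint.
request = urllib.request.Request(
    "https://api.openai.com/v1/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
    method="POST",
)
# urllib.request.urlopen(request) would send it; omitted here so the
# sketch runs offline.
```

This is the same shape of request Thunkable's Web API blocks would let you send by hand, with only the `"model"` value changed from the deprecated default.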

Thanks @matt_conroy - please let me know when they plan to sort this out.

@tatiang - are you saying I should manually upgrade and pass in the model parameter, or did you mean the Product team should upgrade their integration?

As an alternative, the Gemini API is performing well. Posting the working solution by @tatiang:


I guess both. Accessing ChatGPT via the Web API blocks in Thunkable gives you a lot more power and flexibility.

@tatiang - Thanks - specifying gpt-3.5-turbo-instruct in the “model” parameter fixed the text completion. What do I do for image generation?

I’m not sure. When I set up API access for ChatGPT they hadn’t released DALL-E integration yet. I used a separate API and then decided to go with the Stable Diffusion API instead.
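Worth noting for the image question: OpenAI image generation goes through a separate Images endpoint (`/v1/images/generations`), not the Completions API, so the Completions model workaround does not carry over to it directly. A minimal sketch, assuming a direct call to that endpoint (the prompt text and `API_KEY` placeholder are illustrative; the request is built but not sent):

```python
import json
import urllib.request

# Placeholder key -- substitute your own before actually sending.
API_KEY = "sk-..."

# The Images endpoint takes a prompt, a count, and a size; it does not
# use the Completions "model" parameter discussed above.
payload = {
    "prompt": "A watercolor painting of a lighthouse at dusk",
    "n": 1,
    "size": "512x512",
}

request = urllib.request.Request(
    "https://api.openai.com/v1/images/generations",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
    method="POST",
)
# urllib.request.urlopen(request) would send it; omitted here so the
# sketch runs offline.
```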

@bschwartzsxc2 @george.prakashbn The default model has been updated to gpt-3.5-turbo-instruct.

Please keep in mind that you may need to do a hard refresh of your browser for the changes to take effect in your project. If you experienced any of these issues on the Thunkable Live app, you will need to update to the latest version through the App Store and/or Google Play Store before any changes will occur. If your downloaded or published app is affected by any of these issues, you will need to re-download or re-publish your app for these changes to take effect.


Excellent - thank you, @matt_conroy.

Just as an FYI for anyone else who, like me, had added the model parameter to the request as a workaround: you have to remove that model parameter now. Keeping it in doesn’t work for some reason. Once I removed the parameter, it started working. And as a bonus, image generation started working too!
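To make the workaround removal concrete, here is a small sketch: if you had added `"model"` to your request body by hand, drop that key and let the updated integration supply the default model server-side. The payload contents are illustrative.

```python
# Request body as it looked with the manual workaround applied.
payload = {
    "model": "gpt-3.5-turbo-instruct",  # added by hand during the deprecation
    "prompt": "Say hello in one short sentence.",
    "max_tokens": 32,
}

# Remove the workaround key; the updated integration now sets the
# default model itself, and keeping the key causes requests to fail.
payload.pop("model", None)
```

After this, the request body contains only the prompt-related fields you actually control.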