Hi @bukit, we don't have a direct integration with GPT-3.5 yet. However, looking at the official API reference for GPT-3.5, it's possible to create a Text Blaze snippet that calls it directly:
{request_body=tojson(["temperature": 0.3, "max_tokens": 256, "model": "gpt-3.5-turbo", "messages": [["role": "user", "content": "Hello!"]]], "{model: string, messages: { role: string, content: string }[], temperature: number, max_tokens: number}")}
{=request_body}
{urlload: https://api.openai.com/v1/chat/completions; method=POST; headers=Content-Type: application/json, Authorization: Bearer <api key>; body={=request_body}; done=(res) -> catch(["result": res, "isloading": no, "haserror": no], ["name": res, "isloading": no, "haserror": yes]); start=() -> ["isloading": yes]}
{if: isloading}Loading...{else}{response=fromjson(result)["choices"][1]["message"]["content"]}{=response}{endif}
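For reference, here is a rough Python sketch of what the snippet is doing under the hood. It only illustrates the same API call, it's not something Text Blaze runs, and the `requests` library and the `OPENAI_API_KEY` environment variable are just assumptions for the example; the endpoint, headers, and body mirror the snippet above.

```python
# Illustrative only: the same chat completions request the snippet makes.
# Assumes the `requests` library and an OPENAI_API_KEY environment variable.
import os
import requests

request_body = {
    "model": "gpt-3.5-turbo",
    "temperature": 0.3,
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Hello!"}],
}

resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
    },
    json=request_body,
)
resp.raise_for_status()

# JSON arrays are 0-indexed, while Text Blaze lists are 1-indexed,
# which is why the snippet above reads ["choices"][1].
print(resp.json()["choices"][0]["message"]["content"])
```

That indexing difference is also why the last line of the snippet uses ["choices"][1] to read the first choice from the response.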
If you find any parameter of this snippet difficult to understand, please let me know. Be sure to replace <api key> with your own API key.
If we were to integrate GPT-3.5 officially, what are the features that you would be looking for? Or is this small snippet sufficient for your use cases?