The WebraftAI API supports the chat completions endpoint, which is the key route for using chat models that follow the OpenAI-style format. We have also added support for non-OpenAI models through the same OpenAI-compatible request structure.
To generate text, you can use the chat completions endpoint in the REST API, as seen in the examples below. You can either use the REST API from the HTTP client of your choice, or use one of OpenAI's official SDKs for your preferred programming language.
You can use the official OpenAI SDK and still integrate it with the WebraftAI API.
To do so, you need to set the client's base URL to WebraftAI's base URL instead of OpenAI's.
Before moving further, make sure you have the openai library installed. You can install it with the command: pip install openai
Sample Request:
from openai import OpenAI

# Point the client at WebraftAI instead of OpenAI
client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.webraft.in/freeapi"  # or use https://api.webraft.in/v2
)
completion = client.chat.completions.create(
model="gpt-4o",
messages=[
{"role": "system", "content":"You are a helpful AI Assistant"},
{"role": "user", "content": "Hello!"}
]
)
print(completion.choices[0].message)
To stream the response as it is generated, pass stream=True:
from openai import OpenAI

# Point the client at WebraftAI instead of OpenAI
client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.webraft.in/v2"
)
stream = client.chat.completions.create(
model="gpt-4o",
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Hello!"}
],
stream=True
)
for chunk in stream:
    print(chunk.choices[0].delta)
For the rest of this guide, however, we will focus on using our API with the requests library, since OpenAI already provides official documentation for its SDK.
Using the Requests Library
The requests library is a well-known Python package for making GET and POST requests. Because it is so widely supported, we'll use it for the rest of this tutorial.
Make sure you have the requests library installed; if you haven't, run the command pip install requests in your console.
import requests
import json
url = "https://api.webraft.in/v2/chat/completions"
headers = {
"Content-Type": "application/json",
"Authorization": "Bearer YOUR_API_KEY"
}
data = {
"model": "gpt-4o",
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Hello!"}
],
"stream": True
}
response = requests.post(url, headers=headers, data=json.dumps(data), stream=True)
for line in response.iter_lines():
    if line:
        decoded = line.decode("utf-8")
        # OpenAI-compatible streaming sends server-sent events prefixed with "data: "
        if decoded.startswith("data: "):
            decoded = decoded[len("data: "):]
        if decoded == "[DONE]":
            break
        chunk = json.loads(decoded)
        print(chunk["choices"][0]["delta"])
Using Curl
Curl is a well-known command-line tool and library for transferring data over various protocols. We are now going to try using the API through curl. On Linux, curl comes pre-installed, but if you are using another operating system, please install it or use a similar package. An example request is shown below.
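This is a minimal sketch that mirrors the requests example above, assuming the v2 endpoint and a non-streaming call (swap in /freeapi if you are on the free plan, and add "stream": true if you want streamed output):
curl https://api.webraft.in/v2/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Hello!"}
    ]
  }'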
Unfortunately, OpenAI has disabled custom base URLs in the official OpenAI Node.js SDK, which leaves alternative solutions such as axios or a similar request library as the only option.
In this tutorial, we'll use the axios library to handle requests in Node.js, as shown in the example below.
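The following is a minimal, non-streaming sketch using axios against the v2 endpoint; the request body mirrors the requests example above, and the error handling is illustrative rather than prescribed by the API:
// Install first: npm install axios
const axios = require("axios");

const url = "https://api.webraft.in/v2/chat/completions";

axios
  .post(
    url,
    {
      model: "gpt-4o",
      messages: [
        { role: "system", content: "You are a helpful assistant." },
        { role: "user", content: "Hello!" }
      ]
    },
    {
      headers: {
        "Content-Type": "application/json",
        Authorization: "Bearer YOUR_API_KEY"
      }
    }
  )
  .then((response) => {
    // The assistant's reply is in choices[0].message
    console.log(response.data.choices[0].message);
  })
  .catch((error) => {
    console.error(error.response ? error.response.data : error.message);
  });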
Do not forget to replace the placeholder "YOUR_API_KEY" with your actual key generated from WebraftAI. Also change the API endpoint according to the type of user plan you have (Free or Paid).
Response
The API will return a JSON response if the response status code is 200; otherwise it will return an error. If an error is encountered several times, you can contact the support staff to have the issue fixed. If you encounter an error stating Wrong Api key or Insufficient Credits, please check your key and user account: typically either the credits are used up or the credit system type was accidentally switched to quota, and you can switch it back to credits through commands or the dashboard.
A sample successful response is listed below:
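As a rough illustration (the field values here are placeholders, not actual WebraftAI output), a successful chat completion follows the familiar OpenAI-style shape:
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1700000000,
  "model": "gpt-4o",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 19,
    "completion_tokens": 9,
    "total_tokens": 28
  }
}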
Further, if you would like to know the details of each field returned in the chat completion object, check the official documentation.
Conclusion
As developers work to integrate AI functionalities, it's essential to stay informed about updates in libraries and SDKs. Alternative solutions like using request libraries provide flexibility and can bridge gaps when official support is limited. Keep an eye on updates and community solutions for the latest integration techniques.
For information about other settings and configuration options, have a look at the official OpenAI documentation.
Now let's move on to the next part, where we'll guide you through using image generation models.