Chat Completion

The WebraftAI API supports a chat completions endpoint, which is used for chat models that follow the OpenAI-style request format. We have also added support for non-OpenAI models so that they work with the same OpenAI-compatible structure.

To generate text, you can use the chat completions endpoint in the REST API, as seen in the examples below. You can either use the REST API from the HTTP client of your choice, or use one of OpenAI's official SDKs for your preferred programming language.

The two chat completion endpoints are:

https://api.webraft.in/freeapi/chat/completions (Free plan)
https://api.webraft.in/v2/chat/completions (Paid plan)

Using the Official SDK

You can use the official openai SDK library and still integrate it with the WebraftAI API. To do that, pass WebraftAI's base URL as the base_url argument (together with your API key) when creating the client.

Before moving further, you need to make sure that you have the openai library installed. You can install it through the command: pip install openai

Sample Request:

from openai import OpenAI

# Point the client at the WebraftAI base URL instead of the default OpenAI endpoint
client = OpenAI(
  api_key="Your API key",
  base_url="https://api.webraft.in/freeapi"  # or use https://api.webraft.in/v2
)

completion = client.chat.completions.create(
  model="gpt-4o",
  messages=[
    {"role": "system", "content": "You are a helpful AI Assistant"},
    {"role": "user", "content": "Hello!"}
  ]
)

print(completion.choices[0].message)

Analysing an Image:

from openai import OpenAI

# Use a vision-capable model and the WebraftAI base URL
client = OpenAI(
    api_key="Your API key",
    base_url="https://api.webraft.in/v2"
)

completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's in this image?"},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg",
                    }
                },
            ],
        }
    ],
)

print(completion.choices[0].message)

Streaming request:

from openai import OpenAI

client = OpenAI(
  api_key="Your API key",
  base_url="https://api.webraft.in/v2"
)

# stream=True returns an iterator of chunks instead of a single completion
completion = client.chat.completions.create(
  model="gpt-4o",
  messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
  ],
  stream=True
)

for chunk in completion:
  print(chunk.choices[0].delta)

From here on, we will focus on using our API with the requests library instead, since OpenAI already provides official documentation for its SDK.

Using the Requests Library

The requests library is a well-known Python package for making GET and POST requests. Because it is widely used and well supported, we'll use it for the rest of this tutorial.

Make sure you have the requests library installed; if you haven't, run the command pip install requests in your console.

Sample request:

import requests

api_key = "YOUR API KEY"

url = 'https://api.webraft.in/freeapi/chat/completions'
headers = {
    'Authorization': f'Bearer {api_key}',
    'Content-Type': 'application/json'
}

data = {
    "model": "gpt-3.5-turbo",
    "max_tokens": 100,
    "messages": [
        {
            "role": "system",
            "content": "You are an helpful assistant."
        },
        {
            "role": "user",
            "content": "hi"
        }
    ]
}

response = requests.post(url, headers=headers, json=data)

try:
    response_data = response.json()
    print(response_data)
except requests.exceptions.JSONDecodeError as e:
    print("JSON Decode Error:", e)
    print("Response Content:", response.content)

Analyse an Image:

import requests

api_key = "YOUR API KEY"

url = 'https://api.webraft.in/v2/chat/completions'
headers = {
    'Authorization': f'Bearer {api_key}',
    'Content-Type': 'application/json'
}

data = {
    "model": "gpt-4o",
    "max_tokens": 4096,
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's in this image?"},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg",
                    }
                },
            ],
        }
    ]
}

response = requests.post(url, headers=headers, json=data)

try:
    response_data = response.json()
    print(response_data)
except requests.exceptions.JSONDecodeError as e:
    print("JSON Decode Error:", e)
    print("Response Content:", response.content)

Streaming Request:

import requests
import json

url = "https://api.webraft.in/v2/chat/completions"
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer YOUR_API_KEY"
}
data = {
    "model": "gpt-4o",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ],
    "stream": True
}

response = requests.post(url, headers=headers, data=json.dumps(data), stream=True)

for line in response.iter_lines():
    if not line:
        continue
    decoded = line.decode('utf-8')
    # Streamed chunks arrive as server-sent events: "data: {...}" lines, ending with "data: [DONE]"
    if decoded.startswith("data: "):
        decoded = decoded[len("data: "):]
    if decoded.strip() == "[DONE]":
        break
    chunk = json.loads(decoded)
    print(chunk['choices'][0]['delta'])

Using Curl

curl is a well-known command-line tool for transferring data over various protocols. We are now going to try using the API through curl. On Linux, curl is usually pre-installed; if you are using another operating system, install it or use a similar tool.

Sample request:

curl "https://api.webraft.in/freeapi/chat/completions" \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer YOUR_API_KEY" \
    -d '{
        "model": "gpt-3.5-turbo",
        "messages": [
            {
                "role": "system",
                "content": "You are a helpful assistant."
            },
            {
                "role": "user",
                "content": "Hi"
            }
        ]
    }'

Analyse an Image:

curl https://api.webraft.in/v2/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "What is in this image?"
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
            }
          }
        ]
      }
    ]
  }'

Streaming Request:

curl https://api.webraft.in/v2/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Hello!"
      }
    ],
    "stream": true
  }'

Using Node.js

The official OpenAI Node.js SDK can also be configured with a custom base URL; here, however, we'll make the requests directly with a standard HTTP client.

In this tutorial, we'll be using the axios library for handling requests in Node.js.

Sample Request:

const axios = require('axios');

const apiKey = 'YOUR_API_KEY';
const url = 'https://api.webraft.in/v2/chat/completions';

const headers = {
  'Authorization': `Bearer ${apiKey}`,
  'Content-Type': 'application/json'
};

const data = {
  model: 'gpt-3.5-turbo',
  max_tokens: 100,
  messages: [
    {
      role: 'system',
      content: 'You are a helpful assistant.'
    },
    {
      role: 'user',
      content: 'hi'
    }
  ]
};

axios.post(url, data, { headers })
  .then(response => {
    console.log(response.data);
  })
  .catch(error => {
    if (error.response) {
      console.log('Error Response:', error.response.data);
    } else if (error.request) {
      console.log('Error Request:', error.request);
    } else {
      console.log('Error Message:', error.message);
    }
  });

Analyse an Image:

const axios = require('axios');

const apiKey = 'YOUR_API_KEY';
const url = 'https://api.webraft.in/v2/chat/completions';

const headers = {
  'Authorization': `Bearer ${apiKey}`,
  'Content-Type': 'application/json'
};

const data = {
  model: 'gpt-4o',       // use a vision-capable model for image inputs
  max_tokens: 4096,
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: "What's in this image?" },
        {
          type: 'image_url',
          image_url: {
            url: 'https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg'
          }
        }
      ]
    }
  ]
};

axios.post(url, data, { headers })
  .then(response => {
    console.log(response.data);
  })
  .catch(error => {
    if (error.response) {
      console.log('Error Response:', error.response.data);
    } else if (error.request) {
      console.log('Error Request:', error.request);
    } else {
      console.log('Error Message:', error.message);
    }
  });

Streaming Request:

const axios = require('axios');
const readline = require('readline');

const url = 'https://api.webraft.in/v2/chat/completions';
const headers = {
  'Content-Type': 'application/json',
  'Authorization': 'Bearer YOUR_API_KEY'
};
const data = {
  model: 'gpt-4o',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Hello!' }
  ],
  stream: true
};

axios.post(url, data, { headers, responseType: 'stream' })
  .then(response => {
    const rl = readline.createInterface({
      input: response.data,
      crlfDelay: Infinity
    });

    rl.on('line', (line) => {
      if (!line) return;
      // Streamed chunks arrive as server-sent events: "data: {...}" lines, ending with "data: [DONE]"
      const payload = line.startsWith('data: ') ? line.slice('data: '.length) : line;
      if (payload.trim() === '[DONE]') return;
      const chunk = JSON.parse(payload);
      console.log(chunk.choices[0].delta);
    });
  })
  .catch(error => {
    console.error('Error:', error);
  });

Do not forget to replace the placeholder "YOUR_API_KEY" with the actual key generated from WebraftAI. Also change the API base URL according to your user plan (Free: /freeapi, Paid: /v2).
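
For example, here is a minimal Python sketch that keeps the base URL in one place, so switching plans only means changing one value (the plan labels are illustrative; only the two base URLs come from this guide):

# Hypothetical helper: pick the WebraftAI base URL for your plan
BASE_URLS = {
    "free": "https://api.webraft.in/freeapi",
    "paid": "https://api.webraft.in/v2",
}

def chat_url(plan: str) -> str:
    """Return the chat completions endpoint for the given plan."""
    return f"{BASE_URLS[plan]}/chat/completions"

print(chat_url("free"))  # https://api.webraft.in/freeapi/chat/completions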

Response

The API returns a JSON response if the status code is 200; otherwise it returns an error. If you encounter the same error repeatedly, contact the support staff so they can fix the issue. If you see an error such as Wrong Api key or Insufficient Credits, check your key and your user account: typically either your credits are used up, or the credit system type was switched to quota by mistake, in which case you can switch it back to credits through commands or the dashboard.
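
As a minimal error-handling sketch (assuming only that a failed request comes back with a non-200 status code and a readable body), you can surface the problem like this with the requests library:

import requests

api_key = "YOUR API KEY"
url = "https://api.webraft.in/freeapi/chat/completions"
headers = {"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"}
data = {"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "hi"}]}

response = requests.post(url, headers=headers, json=data)

if response.ok:
    print(response.json())
else:
    # The exact error body may vary; printing the status code and the raw body
    # is usually enough to tell key, credit, and model problems apart.
    print("Request failed with status", response.status_code)
    print(response.text)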

If the response status code is 200, a successful response has been received. We have listed some sample responses below:

Sample response (default):

{
  "id": "chatcmpl-webraftai",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "gpt-4o",
  "system_fingerprint": "fp_44709d6fcb",
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "\n\nHello there, how may I assist you today?",
    },
    "logprobs": null,
    "finish_reason": "stop"
  }],
  "usage": {
    "prompt_tokens": 9,
    "completion_tokens": 12,
    "total_tokens": 21,
    "completion_tokens_details": {
      "reasoning_tokens": 0,
      "accepted_prediction_tokens": 0,
      "rejected_prediction_tokens": 0
    }
  }
}
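
For instance, here is a short sketch of reading the reply text and token usage out of a response shaped like the JSON above (response_data stands for the parsed JSON, e.g. response.json() from the requests examples):

# response_data stands for the parsed JSON response shown above
response_data = {
    "choices": [{"index": 0, "message": {"role": "assistant", "content": "\n\nHello there, how may I assist you today?"}, "finish_reason": "stop"}],
    "usage": {"prompt_tokens": 9, "completion_tokens": 12, "total_tokens": 21},
}

reply = response_data["choices"][0]["message"]["content"]
total_tokens = response_data["usage"]["total_tokens"]
print(reply)                         # the assistant's message text
print("Tokens used:", total_tokens)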

Sample response (Analysing an Image):

{
  "id": "chatcmpl-webraftai",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "gpt-4o",
  "system_fingerprint": "fp_44709d6fcb",
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "This image shows a wooden boardwalk extending through a lush green marshland.",
    },
    "logprobs": null,
    "finish_reason": "stop"
  }],
  "usage": {
    "prompt_tokens": 9,
    "completion_tokens": 12,
    "total_tokens": 21,
    "completion_tokens_details": {
      "reasoning_tokens": 0,
      "accepted_prediction_tokens": 0,
      "rejected_prediction_tokens": 0
    }
  }
}

Sample response (Streaming request):

{"id":"chatcmpl-webraftai","object":"chat.completion.chunk","created":1694268190,"model":"gpt-4o", "system_fingerprint": "fp_44709d6fcb", "choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}

{"id":"chatcmpl-webraftai","object":"chat.completion.chunk","created":1694268190,"model":"gpt-4o", "system_fingerprint": "fp_44709d6fcb", "choices":[{"index":0,"delta":{"content":"Hello"},"logprobs":null,"finish_reason":null}]}

....

{"id":"chatcmpl-webraftai","object":"chat.completion.chunk","created":1694268190,"model":"gpt-4o", "system_fingerprint": "fp_44709d6fcb", "choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}

Further, if you want details about each field returned in the chat completion object, check the official OpenAI documentation.

Conclusion

As developers work to integrate AI functionalities, it's essential to stay informed about updates in libraries and SDKs. Alternative solutions like using request libraries provide flexibility and can bridge gaps when official support is limited. Keep an eye on updates and community solutions for the latest integration techniques.

For information about other settings and configuration options, take a look at the official OpenAI documentation.

Now let's move on to the next part, where we'll guide you through using image generation models.
