OpenAI is an AI tool, much like Dialogflow, that lets your users interact with artificial intelligence. From replying to users to generating images, you can use OpenAI for a variety of tasks.

UChat offers native integration with OpenAI, which lets users set up complex flows with just a click of a button.

Let us first see how to establish a connection between OpenAI and UChat.

Connecting Your OpenAI Account

1. Visit https://platform.openai.com

2. Log in using your credentials.

3. Click the “Personal” tab in the top-right corner.

4. From here, you will be able to generate an API key.

You will only be able to see your API key once, so copy it immediately.

5. Paste your API key inside UChat and click “Save” to establish the connection.

Your account is now successfully connected to UChat.
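
If you want to confirm that the key is valid before (or after) saving it, you can test it outside UChat. The sketch below is only an illustration: it assumes the pre-1.0 openai Python library and uses a placeholder key, and it is not part of the UChat setup itself.

Sample Key Check (Python)

import openai

openai.api_key = "sk-..."  # placeholder - paste your own key here

# Listing the available models is a lightweight way to confirm the key works.
models = openai.Model.list()
print([m["id"] for m in models["data"]][:5])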

OpenAI Native Actions

UChat provides a number of OpenAI actions that users can use for their needs. We will now discuss them in detail.

Create Text Completion

The text completion action sends a prompt to OpenAI in text form and returns an answer generated from that prompt.

Input:

Prompt: The main input you want the AI to answer or act on. This can be a question, an instruction, etc.

Model: The OpenAI model you want to use for the task. By default, text-davinci-003 is selected.

Max Tokens: Each task inside OpenAI consumes tokens, which are replenished with credit. This field caps the maximum number of tokens a single task may use.

Temperature: Controls randomness: higher values give more random answers, while lower values give more deterministic and focused answers. It defaults to 1.

Presence Penalty: Encourages OpenAI to use new phrases and words when completing a task. The higher the value, the less repetitive the output. It defaults to 0.

Number of Completions: The number of responses you want the AI to generate for your prompt. A higher value returns more responses. It defaults to 1 to avoid unnecessary token consumption.

Best of Completions: Returns the best possible response(s) for your prompt. It defaults to 1. This works together with the Number of Completions field to choose the best answer from a group of generated responses.
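
For reference, the input fields above map onto OpenAI's completions endpoint roughly as shown in the sketch below. This is only an illustrative example using the pre-1.0 openai Python library and a made-up prompt; it is not UChat's internal implementation.

Sample Request Sketch (Python)

import openai

openai.api_key = "sk-..."  # placeholder

# Each keyword argument corresponds to one of the input fields described above.
response = openai.Completion.create(
    model="text-davinci-003",            # Model
    prompt="Give me 5 marketing tips",   # Prompt (hypothetical example)
    max_tokens=100,                      # Max Tokens
    temperature=1,                       # Temperature
    presence_penalty=0,                  # Presence Penalty
    n=1,                                 # Number of Completions
    best_of=1,                           # Best of Completions
)
print(response["choices"][0]["text"])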

Response:

Sample Response Data

{
"id": "cmpl-6zchlUy0OiAjX91LHOPBcZjuXaDgE",
"object": "text_completion",
"created": 1680144809,
"model": "text-davinci-003",
"choices": [
{
"text": " 1. Understand Your Target Audience - Before you begin any marketing campaign, it’s important to have a clear understanding of who you’re targeting with your message. Researching and understanding your target audience will help you create campaigns specifically tailored to their interests. 2. Leverage Social Media - Social media has become one of the most effective ways to communicate with your target audience. Utilizing social media channels such as Facebook, Twitter, and Instagram can help you build",
"index": 0,
"logprobs": null,
"finish_reason": "length"
}
],
"usage": {
"prompt_tokens": 4,
"completion_tokens": 100,
"total_tokens": 104
}
}

Id : The id of the text completion. A unique value.

Object : The action/task you gave to OpenAI. In our case “text_completion”

Created : A Unix timestamp indicating when the response was created.

Finish reason : The reason the generation stopped. In the sample above it is “length”, meaning the token limit was reached.

Prompt tokens : The number of tokens consumed by the prompt. The usage object also reports the completion tokens and the total tokens used for the task.
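
As a small illustration of reading these fields, the snippet below extracts the answer text and converts the Unix timestamp from a response shaped like the sample above. The data dictionary is a cut-down stand-in for the real response.

import datetime

# Minimal stand-in for the parsed JSON response shown above.
data = {
    "created": 1680144809,
    "choices": [{"text": " 1. Understand Your Target Audience - ...", "finish_reason": "length"}],
    "usage": {"prompt_tokens": 4, "completion_tokens": 100, "total_tokens": 104},
}

answer = data["choices"][0]["text"]
created_at = datetime.datetime.fromtimestamp(data["created"])  # Unix timestamp -> local date-time
print(created_at, data["usage"]["total_tokens"], answer)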

Best Practices:

Sometimes the response you get back appears to be cut off. This happens when the Max Tokens value is too low for the full answer (the finish_reason will be “length”). Simply raising the Max Tokens value in the input fields fixes this issue.
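
Continuing the request sketch from the Input section above, one way to detect a cut-off answer and retry automatically is to check the finish_reason field (an illustrative pattern, not UChat's built-in behaviour):

# 'openai' and 'response' are from the earlier request sketch.
if response["choices"][0]["finish_reason"] == "length":
    # The model ran out of tokens - retry with a larger budget.
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt="Give me 5 marketing tips",  # same hypothetical prompt as before
        max_tokens=300,                     # raised Max Tokens value
    )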

It is also advisable to tune values like temperature, number of completions, and best of completions to your use case through split testing. Every use case is unique, and you want the best possible use of the resources available.

Image Generation

Image Generation is used to generate images from user-provided prompts. This action generates the image that best matches your given prompt.

Input:

Prompt : The description of the image you want the AI to generate. This can be a short phrase, a detailed scene description, etc.

Number of Images : The number of images you want the AI to generate. It defaults to 1.

Size : The dimensions you want the image to be. OpenAI supports three sizes:

           256x256

           512x512

           1024x1024
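
For reference, these input fields map onto OpenAI's image generation endpoint roughly as in the sketch below (pre-1.0 openai Python library, hypothetical prompt; not UChat's internal code).

Sample Request Sketch (Python)

import openai

openai.api_key = "sk-..."  # placeholder

# The arguments mirror the input fields described above.
response = openai.Image.create(
    prompt="a watercolor painting of a mango tree",  # Prompt (hypothetical)
    n=1,                                             # Number of Images
    size="512x512",                                  # Size
)
print(response["data"][0]["url"])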

Response:

Sample Response Data

{
"created": 1680145479,
"data": [
{
"url": "https://oaidalleapiprodscus.blob.core.windows.net/private/org-2FEbJIRL7GXfKmGw2BT9wh9b/user-nk6UUN7L9nFqzGEw67uTMonD/img-FhZpxMrCbiDBR4O62e7pPF08.png?st=2023-03-30T02%3A04%3A39Z&se=2023-03-30T04%3A04%3A39Z&sp=r&sv=2021-08-06&sr=b&rscd=inline&rsct=image/png&skoid=6aaadede-4fb3-4698-a8f6-684d7786b067&sktid=a48cca56-e6da-484e-a814-9c849652bcb3&skt=2023-03-29T17%3A40%3A49Z&ske=2023-03-30T17%3A40%3A49Z&sks=b&skv=2021-08-06&sig=4DF0dw/peG7FSVMUml4ShuQP98T0xECW1gE%2BeutdRAw%3D"
}
]
}

Created : A Unix timestamp indicating when the response was created.

Url : The public URL for your image(s).

Best Practices:

Generating images consumes more computational power, so replies can be delayed depending on the prompt you give.

AI is a developing field, so the images produced can be quite inaccurate when prompts are complex. Finding the right level of prompt complexity can therefore take some experimentation.

Speech to Text

The speech-to-text action is used when you want to convert audio input into text. This has a variety of use cases, such as IVRs.

Input:

File Url : The URL of the audio you want to convert to text. Make sure the URL is publicly hosted and points to a supported audio format such as mp3, mpeg, etc.

Language : The language of the audio, in ISO-639-1 format (for example ‘en’ or ‘es’). Supplying it improves accuracy and latency.
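
For reference, the sketch below shows the equivalent raw call to OpenAI's Whisper transcription endpoint. Because UChat takes a public URL, the example first downloads the file; the URL, filename, and library version (pre-1.0 openai) are assumptions for illustration only.

Sample Request Sketch (Python)

import openai
import requests

openai.api_key = "sk-..."  # placeholder

# Download the publicly hosted audio file referenced by File Url.
audio_url = "https://example.com/sample.mp3"  # hypothetical URL
with open("sample.mp3", "wb") as f:
    f.write(requests.get(audio_url).content)

# Transcribe the audio; 'language' matches the Language input field.
with open("sample.mp3", "rb") as f:
    result = openai.Audio.transcribe("whisper-1", f, language="en")
print(result["text"])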

Response:

Sample Response Data

{
"text": "Welcome to Rensen. This is a test to see if everything works well. And if the IVR can guide you to your work."
}

Text : The text transcribed from the audio.

Best Practices:

This feature converts speech to text quite accurately. For the best accuracy and latency, it is good practice to provide audio in the same language as the desired output text.

Translate Audio to English

The translate audio to English action is used when you want to convert audio input into English text. This has a variety of use cases, such as IVRs.

Input:

File Url : The URL of the audio you want to convert to text. Make sure the URL is publicly hosted and points to a supported audio format such as mp3, mpeg, etc.
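
For reference, the equivalent raw call uses OpenAI's audio translation endpoint, which always returns English text. As before, this is only an illustrative sketch (pre-1.0 openai library, hypothetical URL and filename).

Sample Request Sketch (Python)

import openai
import requests

openai.api_key = "sk-..."  # placeholder

# Download the publicly hosted audio file referenced by File Url.
audio_url = "https://example.com/sample_spanish.mp3"  # hypothetical URL
with open("sample.mp3", "wb") as f:
    f.write(requests.get(audio_url).content)

# Translations always produce English text, so no language parameter is needed.
with open("sample.mp3", "rb") as f:
    result = openai.Audio.translate("whisper-1", f)
print(result["text"])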

Response:

Sample Response Data

{
"text": "Welcome to Rensen. This is a test to see if everything works well. And if the IVR can guide you to your work."
}

Text: The English text translated from the audio.

Best Practices:

Experimenting with different audio formats can give more (or less) accurate results, since accuracy depends heavily on audio quality. Split test different formats to find the one that works best for your use case.

Create Chat Completion

Chat completion sends a prompt to OpenAI in text form and returns an answer based on that prompt. It is similar to the text completion action, but it uses the ChatGPT model (gpt-3.5-turbo), which is both faster and roughly 10x cheaper.

Input:

System Message : An optional field used to provide additional context about you or your business when completing chats.

Message : The main input you want the AI to answer or act on. This can be a question, an instruction, etc. For better results, add “user:” as a prefix to your prompt to give the AI more context, for example:

  “user: will it rain today?”

Remember History: If set to “Yes”, the chat history between the user and the assistant is saved in a system field so it can be reused later if needed.

Model : The model you want to use for the task. By default, gpt-3.5-turbo is selected.

Max Tokens : Each task inside ChatGPT consumes tokens, which are replenished with credit. This field caps the maximum number of tokens a single task may use.

Temperature : Controls randomness: higher values give more random answers, while lower values give more deterministic and focused answers. It defaults to 1.

Presence Penalty : Encourages ChatGPT to use new phrases and words when completing a task. The higher the value, the less repetitive the output. It defaults to 0.

Number of Completions : The number of responses you want the AI to generate for your prompt. A higher value returns more responses. It defaults to 1 to avoid unnecessary token consumption.

Best of Completions : Returns the best possible response(s) for your prompt. It defaults to 1. This works together with the Number of Completions field to choose the best answer from a group of generated responses.
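
For reference, the input fields above map onto OpenAI's chat completions endpoint roughly as shown below. The sketch uses the pre-1.0 openai Python library and made-up messages; it is an illustration, not UChat's internal implementation. (Note that the raw chat endpoint does not accept a best_of parameter.)

Sample Request Sketch (Python)

import openai

openai.api_key = "sk-..."  # placeholder

# System Message and Message map onto the roles in the messages array.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",   # Model
    messages=[
        {"role": "system", "content": "You are a helpful gardening assistant."},     # System Message (hypothetical)
        {"role": "user", "content": "can you help me with planting a mango tree?"},  # Message
    ],
    max_tokens=100,        # Max Tokens
    temperature=1,         # Temperature
    presence_penalty=0,    # Presence Penalty
    n=1,                   # Number of Completions
)
print(response["choices"][0]["message"]["content"])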

Response:

Sample Response Data

{
"id": "chatcmpl-6zef5zEUdDzTx8VKu2r4gkIJfVcBE",
"object": "chat.completion",
"created": 1680152331,
"model": "gpt-3.5-turbo-0301",
"usage": {
"prompt_tokens": 18,
"completion_tokens": 100,
"total_tokens": 118
},
"choices": [
{
"message": {...}, // 2 keys
"finish_reason": "length",
"index": 0
}
],
"messages": [
{
"role": "user",
"content": "can you help me with planting a mango tree?"
},
{
"role": "assistant",
"content": "Of course! Here are some steps to plant a mango tree: 1. Choose a spot: Mango trees need plenty of sunlight and well-draining soil. They also need protection from strong winds, so choose a spot that's sheltered. 2. Prepare the soil: Mango trees prefer slightly acidic soil, with a pH between 5.5 and 7. If your soil is too alkaline, add sulfur or peat moss to lower the pH. If it's too acidic, add lime"
}
]
}

Id : The id of the chat completion. A unique value.

Object : The action/task you gave to OpenAI. In our case, “chat.completion”.

Created : A Unix timestamp indicating when the response was created.

Choice -> Content : The content field inside the choice's message object contains the answer to your prompt.

Messages : A JSON array containing the complete conversation between the user and the assistant.

Best Practices:

The chat completion action also lets you provide JSON input, so you can save the complete conversation between the user and the assistant as JSON and use it to generate more focused, contextual replies for that conversation.

Since chat completion takes more input, token consumption can be higher than with text completion.

Clear Remembered Chat History

Clear remembered history is used to delete or clear the system field where the chat history for ChatGPT is stored.

The system field has a limit of 20,000 characters, after which the oldest key-value pair is deleted from the JSON to make room for newer values.
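
As a rough illustration of that trimming behaviour (a sketch of the idea only, not UChat's actual code), the oldest entries are dropped until the stored history fits within the limit:

import json

MAX_CHARS = 20000  # character limit described above

def trim_history(history):
    # Drop the oldest messages until the serialized history fits the limit.
    while history and len(json.dumps(history)) > MAX_CHARS:
        history.pop(0)
    return history

history = [
    {"role": "user", "content": "can you help me with planting a mango tree?"},
    {"role": "assistant", "content": "Of course! Here are some steps..."},
]
history = trim_history(history)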

OpenAI Embeddings & Building your Knowledge Base

OpenAI gives you the ability to provide a knowledge base for your use case or business from which the AI generates responses. This lets the AI give more accurate, contextual, and specific answers instead of relying on general knowledge from the internet.

Create An Embedding:

To create an embedding, go to Integrations and select OpenAI.

Click on “New Embedding”

Type : An optional field used to classify embeddings by context. It acts as a filter when a large number of embeddings are associated with your bot. It is always better to provide this field, as it gives more context and makes it easier for the AI to filter through the knowledge base.

Heading: The topic of the embedding that you have created. The title or summary.

Text : The text or main body of the embedding, with a maximum of 1000 characters. Put the details of the topic here for the AI to generate responses from.

Importing Embeddings:

Instead of manually creating embeddings one by one, you can create them in bulk by importing a CSV file.

Click on the drop-down arrow beside “New Embedding” and click on “Import CSV”

Now import the CSV file containing the embeddings and your embeddings will be created.

Make sure the first row of the file contains the input field names as column headers (type, heading, text, etc.) and that none of them start with a capital letter.
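
If you prefer to generate the import file programmatically, the sketch below writes a CSV with the required lower-case headers. The column names match the input fields above; the row contents are made up for illustration.

import csv

# Hypothetical knowledge base entries.
rows = [
    {"type": "billing", "heading": "Free trial", "text": "UChat offers a 14 day free trial. No credit card required."},
    {"type": "billing", "heading": "Refunds", "text": "Describe your refund policy here."},
]

with open("embeddings.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["type", "heading", "text"])  # lower-case headers
    writer.writeheader()
    writer.writerows(rows)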

Embedding Match & Completion Actions

The embedding match action matches the entered prompt against the best-matching embedding from the knowledge base.

Input:

Input : This is where you enter or map the prompt you want to match against the embeddings.

Response:

Embedding : The heading of the embedding the prompt is best matched to.

Text : The text of the embedding the prompt is best matched to.

Input : The prompt that you input for embedding search.

Score : The match score between the prompt and the available embeddings, expressed as a value between 0 and 1. You can use this score to decide whether the match is good enough to use for completion, or whether it is too weak and would give inaccurate answers.

It is observed that a score of 0.79 and above gives the best possible embedding match. However, this is an empirical value, and you should split test for your use case in order to obtain the best possible answers.
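
Conceptually, embedding matching works by comparing vector similarity between the prompt and each knowledge base entry. The sketch below shows that general idea using OpenAI's embeddings endpoint with cosine similarity; the model name, entries, and threshold handling are assumptions for illustration, and UChat's exact scoring may differ.

import openai
import numpy as np

openai.api_key = "sk-..."  # placeholder

def embed(text):
    # text-embedding-ada-002 is assumed here; UChat may use a different model.
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(resp["data"][0]["embedding"])

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical knowledge base entries (heading -> text).
knowledge_base = {
    "Free trial": "UChat offers a 14 day free trial. No credit card required.",
    "Pricing": "Paid plans start from the Business plan.",
}

doc_vectors = {heading: embed(text) for heading, text in knowledge_base.items()}
query = embed("Free trial for uchat")

best_heading, best_score = max(
    ((heading, cosine(query, vec)) for heading, vec in doc_vectors.items()),
    key=lambda item: item[1],
)
if best_score >= 0.79:  # empirical threshold mentioned above
    print("Matched:", best_heading, round(best_score, 3))
else:
    print("No sufficiently close match")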

The embedding match &amp; completion action matches the entered prompt against the best-matching embedding from the knowledge base and then generates a response using that embedding's text.

Input:

Input : This is where you enter or map the prompt you want to match against the embeddings.

Introduction : Used to provide additional context alongside the prompt, which makes the match more accurate and helps raise the embedding match score.

Response:

Sample Response Data

{
"status": "ok",
"result": {
"heading": "Free trial",
"text": "UChat offer 14 days free trial. No credit card required, you can access to all the pro features. You can sign up here: https://www.uchat.com.au/register",
"score": 0.903164959234692,
"input": "Free trial for uchat",
"completion": " Yes, UChat offers a 14-day free trial. No credit card is required and you can access all the pro features. You can sign up here: https://www.uchat.com.au/register."
}
}

Embedding : The heading of the embedding the prompt is best matched to.

Text : The text of the embedding the prompt is best matched to.

Input : The prompt that you input for embedding search.

Score : The match score between the prompt and the available embeddings, as a value between 0 and 1. Use it to decide whether the match is good enough to use for completion or too weak to give an accurate answer. A score of 0.79 and above is observed to give the best embedding matches, but this is an empirical value and you should split test for your use case to obtain the best possible answers.

Completion : The output, or completion, generated for the prompt the user entered.
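
Conceptually, the completion step then feeds the matched embedding text back to the model as context so the answer is grounded in your knowledge base. The sketch below shows one way to reproduce that pattern yourself (pre-1.0 openai library, hypothetical prompt wording); UChat's built-in action handles this for you.

import openai

openai.api_key = "sk-..."  # placeholder

# Text of the best-matching embedding (from the match step) and the user's input.
matched_text = ("UChat offer 14 days free trial. No credit card required, "
                "you can access to all the pro features.")
user_input = "Free trial for uchat"

# The matched text is supplied as context and the model answers from it.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Answer the question using only the following context:\n" + matched_text},
        {"role": "user", "content": user_input},
    ],
)
print(response["choices"][0]["message"]["content"])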
