The purpose of using OpenAI GPT is to create a dictionary containing the details of the input resume file as key-value pairs. These key-value pairs are then used to format the output docx file.
The first OpenAI function, ‘dict_creator1’, is responsible for catching the exception message raised when the OpenAI token limit is exceeded.
Step 1: ‘dict_creator1’ accepts four parameters:
i) The user-provided prompt.
ii) The content, which is extracted using the ‘content_extraction()’ function.
iii) The ‘ml’ variable representing the maximum length specified in the ‘config_file.’
iv) The OpenAI API key.
Step 2: Set your OpenAI API key and define an exception pattern that matches the token-limit error.
Step 3: Evaluate whether the provided ‘max_length’ meets the criteria. If it does, the function generates the dictionary; otherwise, it returns the exception message.
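The ‘config_file’ referenced in step 1(iii) can be a small JSON file holding the maximum length. A minimal sketch (the ‘max_length’ key is what the code below reads and updates; the value shown is illustrative):

```json
{
    "max_length": 6000
}
```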
OpenAI offers two to three versions of each of its models and monetizes based on the number of tokens used.
However, determining the right number of tokens for a chat completion can be challenging: OpenAI reserves the specified number of tokens regardless of whether you end up using them.
To address this, we can make two calls to OpenAI chat completion. The first call obtains the log message that reports the token requirement; the second call actually processes the data.
Even with this approach there is a potential issue: if the second call is made within a minute of the first, it may raise a “RateLimitError” for using more than 10,000 tokens per minute. To overcome this, we introduce a 60-second delay before making the second call.
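The two-call-with-delay idea can be sketched as a small wrapper. The function name and the two callables here are illustrative, and the delay is a parameter so it can be tuned (60 seconds in our case):

```python
import time

def two_step_completion(first_call, second_call, delay_seconds=60):
    """Run the first completion to learn the token requirement,
    wait out the rate-limit window, then run the real completion."""
    token_info = first_call()       # e.g. returns the token-requirement log message
    time.sleep(delay_seconds)       # stay under the 10,000 tokens/minute limit
    return second_call(token_info)  # the actual data-processing call
```

In practice, ‘first_call’ and ‘second_call’ would wrap the two chat-completion requests described above.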
# Dictionary error extractor
import openai

prompt = "Give the prompt that you have engineered in OpenAI playground "

def dict_creator1(prompt, content, ml, api_key):
    openai.api_key = api_key
    pattern = r"This model's maximum context length is 8192 tokens"
    try:
        response = openai.ChatCompletion.create(
            model="gpt-4-0613",
            messages=[
                {"role": "system", "content": prompt},
                {"role": "user", "content": content}
            ],
            temperature=0,
            max_tokens=ml,
            top_p=1,
            frequency_penalty=0,
            presence_penalty=0
        )
        converted_dict = response['choices'][0]['message']['content']
        return converted_dict
    except openai.error.OpenAIError as error:
        # Return the token-limit error so the caller can recompute max_length from it
        if pattern in str(error):
            return error
        else:
            raise Exception(f"{error}")

# content comes from content_extraction(); maxlength comes from the config file
exception = dict_creator1(prompt, str(content), maxlength, api_key)
print(exception)
To walk through this code: the variable ‘prompt’ is a placeholder for the prompt you engineered in the OpenAI playground; it instructs the model. The function ‘dict_creator1’ takes the prompt, the extracted content, the maximum length ‘ml’, and the API key, and interacts with the OpenAI API. The variable ‘pattern’ holds the substring that identifies the context-length error. The ‘try’ block sends a request to the ‘ChatCompletion’ endpoint via ‘openai.ChatCompletion.create(...)’ with the specified model, a system message carrying the prompt, and a user message carrying the content; it also sets parameters such as temperature, max_tokens, top_p, frequency_penalty, and presence_penalty. The model's reply is read from ‘response['choices'][0]['message']['content']’ and returned. The ‘except openai.error.OpenAIError’ block handles exceptions raised by the API: if the exception text contains the pattern defined earlier, the exception is returned; otherwise it is re-raised. Finally, ‘dict_creator1’ is called with ‘prompt’, ‘str(content)’, ‘maxlength’, and ‘api_key’; the result is stored in the variable ‘exception’ and printed.
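Note that the value returned on success is the model's reply as a string. Assuming the engineered prompt instructs the model to emit valid JSON, that string can be parsed into the Python dictionary used for docx formatting. A minimal sketch with a hypothetical model reply:

```python
import json

# Hypothetical model reply; the real content depends on your prompt and resume
model_reply = '{"name": "Jane Doe", "email": "jane@example.com", "skills": ["Python", "SQL"]}'

resume_dict = json.loads(model_reply)  # str -> dict of resume fields
print(resume_dict["name"])             # individual fields fill the docx template
```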
Step 1: The ‘regular_fn’ function will receive an exception log parameter from the ‘dict_creator1’ function.
Step 2: Within the exception message, it will identify a specific pattern.
Step 3: Subsequently, it will compute the maximum length and proceed to update the configuration file.
# First config updater
import json
import re

er_msg = "This model's maximum context length is 8192 tokens. However, you requested 12139 tokens (6139 in the messages, 6000 in the completion). Please reduce the length of the messages or completion."

def regular_fn(er_msg):
    patt = r'\((\d{1,6})'
    m = re.search(patt, er_msg)
    content_length = m.group(1)
    max_length = 8191 - int(content_length)
    if max_length <= 1000:
        raise Exception(f"{er_msg}")
    else:
        with open('config file path here', 'r') as file:
            config_data = json.load(file)
        new_max_length = max_length
        try:
            if isinstance(new_max_length, int):
                config_data['max_length'] = new_max_length
                with open('config file path here', 'w') as file:
                    json.dump(config_data, file, indent=4)
        except Exception as e:
            print(f"Exception: {e}")
        return new_max_length

print(f"The updated maxlength is : {regular_fn(er_msg)}")
To walk through this code: ‘er_msg’ stores a sample error message produced when the model's context length is exceeded, and ‘regular_fn(er_msg)’ takes it as an argument in order to update the ‘max_length’ parameter in the configuration file. ‘patt = r'\((\d{1,6})'’ defines a regular expression that looks for an opening parenthesis followed by one to six digits, and ‘m = re.search(patt, er_msg)’ searches the error message for that pattern, storing the match in ‘m’. ‘content_length = m.group(1)’ extracts the message-token count from the match, and ‘max_length = 8191 - int(content_length)’ computes the new maximum length from it. If ‘max_length’ is 1000 or less, the function raises an exception with the original error message. Otherwise, it opens the configuration file for reading, loads its JSON data with ‘json.load(file)’, sets ‘config_data['max_length']’ to the new value, reopens the file for writing, and saves the data with ‘json.dump(config_data, file, indent=4)’, indented for readability. The final print statement calls the function with ‘er_msg’ and prints the result.
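As a quick check of the pattern above, running it against the sample error message extracts the message-token count and yields the recomputed maximum length:

```python
import re

er_msg = ("This model's maximum context length is 8192 tokens. However, you requested "
          "12139 tokens (6139 in the messages, 6000 in the completion). Please reduce "
          "the length of the messages or completion.")

m = re.search(r'\((\d{1,6})', er_msg)  # first "(digits" group -> message tokens
content_length = int(m.group(1))       # 6139
max_length = 8191 - content_length     # tokens left for the completion
print(content_length, max_length)
```

Here the message consumed 6139 tokens, leaving 2052 for the completion, which is the value written back to the configuration file.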