In this book we look at the basic setup for using OpenAI via Python for development purposes: installing the package and running prompts programmatically, which allows integration into customized user interfaces. A few sample prompts are shown.
It is assumed that the user has set up an OpenAI account and created an API key. The API reference for OpenAI can be found here, with examples in Python and JavaScript.
The basic Python setup requires the installation of the OpenAI package.
#pip install openai
Once installed, the openai package can be imported.
import openai
print('OpenAI version', openai.__version__)
OpenAI version 0.27.8
In order to use any of the interfaces, an API key must be set. Here we get the key from an environment variable. Users can either assign their key directly to openai.api_key or set the environment variable OPENAI_API_KEY (or a preferred environment variable).
import os
api_key = os.environ.get("OPENAI_API_KEY")
if api_key:
    openai.api_key = api_key
    print('Set OpenAI API key')
else:
    print('Failed to get API key')
Set OpenAI API key
As a test, we query the available models using the Model.list() interface.
# List available models
if api_key:
    models = openai.Model.list()
    print('Available models:')
    for model in models.data:
        print('- ', model['id'])
Available models:
-  whisper-1
-  babbage
-  text-davinci-003
-  davinci
-  text-davinci-edit-001
-  babbage-code-search-code
-  text-similarity-babbage-001
-  code-davinci-edit-001
-  text-davinci-001
-  ada
-  babbage-code-search-text
-  babbage-similarity
-  gpt-3.5-turbo-16k-0613
-  code-search-babbage-text-001
-  text-curie-001
-  gpt-3.5-turbo-0301
-  gpt-3.5-turbo-16k
-  code-search-babbage-code-001
-  text-ada-001
-  text-similarity-ada-001
-  curie-instruct-beta
-  ada-code-search-code
-  ada-similarity
-  code-search-ada-text-001
-  text-search-ada-query-001
-  davinci-search-document
-  ada-code-search-text
-  text-search-ada-doc-001
-  davinci-instruct-beta
-  text-similarity-curie-001
-  code-search-ada-code-001
-  ada-search-query
-  text-search-davinci-query-001
-  curie-search-query
-  gpt-3.5-turbo
-  davinci-search-query
-  babbage-search-document
-  ada-search-document
-  text-search-curie-query-001
-  text-search-babbage-doc-001
-  curie-search-document
-  text-search-curie-doc-001
-  babbage-search-query
-  text-babbage-001
-  text-search-davinci-doc-001
-  text-search-babbage-query-001
-  curie-similarity
-  gpt-3.5-turbo-0613
-  curie
-  text-embedding-ada-002
-  text-similarity-davinci-001
-  text-davinci-002
-  davinci-similarity
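The full list is long, so it can be convenient to filter the returned ids in plain Python. The sketch below uses a small helper of our own (filter_model_ids is not part of the OpenAI API); with a live key the ids would come from openai.Model.list() as shown in the comment.

```python
def filter_model_ids(model_ids, prefix):
    """Return the subset of model ids starting with the given prefix, sorted."""
    return sorted(m for m in model_ids if m.startswith(prefix))

# With a live key this would be:
#   ids = [m['id'] for m in openai.Model.list().data]
ids = ['whisper-1', 'text-davinci-003', 'gpt-3.5-turbo', 'gpt-3.5-turbo-16k', 'ada']
print(filter_model_ids(ids, 'gpt-'))
```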
The OpenAI Completion interface can then be used to create predicted completions based on the input prompt. For these next examples, we will use the text-davinci-003 model to create completions from a given prompt.
Note> The API explicitly places the burden of considering model-dependent cost and rate limits on the caller for every request. The cost can be checked under your OpenAI account. An example snapshot is shown below:
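One defensive pattern for rate limits is retry with exponential backoff. The helper below is a sketch of our own (call_with_backoff is not part of the openai package); in practice the retried exception would typically be openai.error.RateLimitError, as noted in the comment.

```python
import time

def call_with_backoff(fn, retries=5, base_delay=1.0, retry_on=(Exception,)):
    """Call fn(); on a retryable error wait, double the delay, and try again."""
    delay = base_delay
    for attempt in range(retries):
        try:
            return fn()
        except retry_on:
            if attempt == retries - 1:
                raise  # out of retries; let the caller see the error
            time.sleep(delay)
            delay *= 2

# With a live key, for example:
#   call_with_backoff(lambda: openai.Completion.create(model='text-davinci-003',
#                                                      prompt='Hello'),
#                     retry_on=(openai.error.RateLimitError,))
```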
Notable parameters include:
# Control whether to actually run the prompts or not
executePrompts = True
runPrompts = executePrompts and api_key
# Sample programmer prompt
if runPrompts:
    myprompt = ('write glsl version 4.6 function for a gooch shader like the one used in Unity. Make all parameters as arguments.'
                'Name the function "mx_gooch_" followed by the name of the output type. '
                ' Prefix all parameter names with "mx_".')
    token_guess = 300  # int(len(myprompt) / 2)
    print('Run prompt:', myprompt)
input_model = 'text-davinci-003'
input_temperature = 0.2
input_frequency_penalty = 0
input_presence_penalty = 0
if runPrompts:
    response = openai.Completion.create(
        model=input_model,
        prompt=myprompt,
        temperature=input_temperature,
        max_tokens=token_guess,
        top_p=1,
        frequency_penalty=input_frequency_penalty,
        presence_penalty=input_presence_penalty
    )
    for choice in response.choices:
        print('Possible answer: ')
        print(choice.text)
Run prompt: write glsl version 4.6 function for a gooch shader like the one used in Unity. Make all parameters as arguments.Name the function "mx_gooch_" followed by the name of the output type.  Prefix all parameter names with "mx_".
Possible answer:
vec4 mx_gooch_(vec3 mx_position, vec3 mx_normal, vec3 mx_lightDir, vec3 mx_warmColor, vec3 mx_coolColor, float mx_diffuse, float mx_smoothness)
{
    float mx_diffuseLight = max(dot(mx_normal, mx_lightDir), 0.0);
    vec3 mx_halfVector = normalize(mx_lightDir + normalize(mx_position));
    float mx_specularLight = pow(max(dot(mx_normal, mx_halfVector), 0.0), mx_smoothness);
    vec3 mx_gooch = mix(mx_warmColor, mx_coolColor, mx_diffuseLight);
    vec3 mx_finalColor = mx_gooch * mx_diffuse + mx_specularLight;
    return vec4(mx_finalColor, 1.0);
}
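The rough token_guess above can be replaced by a proper estimate. The sketch below is our own helper (estimate_tokens is not part of the openai package): it uses the tiktoken library for an exact count when that optional dependency is available, and otherwise falls back to a rough characters-per-token heuristic.

```python
def estimate_tokens(text):
    """Estimate the token count of a prompt.

    Uses tiktoken (an optional dependency) for an exact count when it is
    available; otherwise falls back to a rough ~4 characters-per-token guess.
    """
    try:
        import tiktoken
        enc = tiktoken.encoding_for_model('text-davinci-003')
        return len(enc.encode(text))
    except Exception:
        # tiktoken missing or unusable; fall back to the heuristic
        return max(1, len(text) // 4)

print(estimate_tokens('write glsl version 4.6 function for a gooch shader'))
```

Remember that max_tokens bounds only the completion, so the prompt estimate plus max_tokens must fit within the model's context window.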
Some example output from a response is shown below (so that the result can be seen without a key being used):
vec3 mx_gooch_f(vec3 mx_a, vec3 mx_b, vec3 mx_c, vec3 mx_n, vec3 mx_light, float mx_alpha, float mx_beta)
{
vec3 kd = mx_a;
vec3 ks = mx_b;
vec3 ka = mx_c;
vec3 n = mx_n;
vec3 l = mx_light;
float alpha = mx_alpha;
float beta = mx_beta;
vec3 nl = normalize(l);
vec3 nn = normalize(n);
vec3 ns = normalize(nl + vec3(0.0, 0.0, 1.0));
vec3 v = normalize(-l);
vec3 kdg = kd * (1.0 - beta) + vec3(beta) * pow((1.0 - dot(nl, ns)), 3.0);
vec3 ksg = ks * pow(max(dot(v, reflect(-nl, nn)), 0.0), alpha);
return kdg + ksg + ka;
}
if runPrompts:
    myprompt = "You’re a UX writer now. Generate 5 versions of 404 error message for a food delivery application."
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=myprompt,
        temperature=1,
        max_tokens=300,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0
    )
    for choice in response.choices:
        print('Possible answer: ')
        print(choice.text)
Possible answer:
1. Oh no! Looks like you’re lost. The food delivery you were searching for isn’t here.
2. Food delivery not found. Oops! Let's get you back to the buffet.
3. We couldn't find the food delivery you were looking for. Try again or take a look at some of the other delicious choices.
4. Sorry, the food delivery you're looking for isn't here. But don't worry - there's a delicious selection waiting for you here.
5. Uh oh! Looks like the food delivery you were searching for isn't here. We've got plenty of other tasty options to choose from.
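Here the prompt itself asks for five versions in one completion; an alternative is the n parameter of Completion.create, which returns several independent completions in a single call. Building the request arguments as a plain dict keeps them easy to inspect; build_completion_request below is a helper name of our own, not part of the API.

```python
def build_completion_request(prompt, n=5, model='text-davinci-003',
                             temperature=1.0, max_tokens=300):
    """Assemble keyword arguments for openai.Completion.create."""
    return dict(model=model, prompt=prompt, n=n,
                temperature=temperature, max_tokens=max_tokens,
                top_p=1, frequency_penalty=0, presence_penalty=0)

kwargs = build_completion_request(
    'Generate a 404 error message for a food delivery application.')
# With a key set:
#   response = openai.Completion.create(**kwargs)
#   for choice in response.choices:
#       print(choice.text)
print(kwargs['n'], kwargs['model'])
```

Note that n multiplies the completion token usage, and therefore the cost, of the call.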
As the information the model was trained on is static, it is useful to be able to provide specific custom data for generation.