Struggling to run using only a local model #27

@electricazimuth

Description

Looks like OpenAI is hardcoded here; I think it should be referencing the selected model I've set up in my settings (a local server on my network).

import openai

def generate(prompt, engine, api_base, api_key):
    # Point the module-level openai client at the given endpoint and key.
    openai.base_url, openai.api_key = api_base + '/', api_key
    # print('calling engine', engine, 'at endpoint', openai.base_url)
    # print('prompt:', prompt)
    response = openai.completions.create(prompt=prompt,
                                         max_tokens=1,
                                         n=1,
                                         temperature=0,
                                         logprobs=100,
                                         model=engine).dict()
    return response
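
For reference, here's a minimal sketch (not this repo's code) of what I'd expect: the openai>=1.0 client constructed with a base_url pointing at an OpenAI-compatible local server. The URL, API key, and model name below are placeholders for whatever is configured in my settings.

from openai import OpenAI

# Hypothetical local endpoint and model name; substitute the values from settings.
client = OpenAI(
    base_url="http://192.168.1.50:8080/v1",   # local OpenAI-compatible server
    api_key="not-needed-locally",             # many local servers ignore the key
)

response = client.completions.create(
    model="local-model-name",   # whichever model the local server exposes
    prompt="The capital of France is",
    max_tokens=1,
    n=1,
    temperature=0,
    logprobs=100,
)
print(response.model_dump())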
