LLM Client

Implements the `get_llm_client` function, which configures `litellm.completion` to communicate with the user-provided LLM.

`get_llm_client(api_key, model, temperature, **kwargs)`

Generates a lambda function around `litellm.completion`, to be called from `PromptRefiner.refine`.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`api_key` | `str` | API key to access the model. | *required* |
`model` | `str` | Model name to use for refining the prompt. | *required* |
`temperature` | `float` | Temperature for the model. | *required* |
`**kwargs` | | Extra arguments to pass to the model. | `{}` |
Returns:

Type | Description |
---|---|
`Callable` | A lambda function that wraps `litellm.completion`. |
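The factory pattern described above can be sketched as follows. This is a hedged illustration, not the library's actual source: the stub `completion` function below stands in for `litellm.completion` (whose real call requires credentials and network access), and the stub's return value is invented for demonstration. The shape of the factory matches the documented signature.

```python
# Hypothetical sketch of the get_llm_client factory pattern.
# `completion` is a stand-in for litellm.completion so the
# example is self-contained and runnable offline.
from typing import Any, Callable


def completion(model: str, messages: list, temperature: float,
               api_key: str, **kwargs: Any) -> str:
    # Stub: echoes the bound configuration instead of calling an LLM.
    return f"called {model} at temperature {temperature}"


def get_llm_client(api_key: str, model: str, temperature: float,
                   **kwargs: Any) -> Callable[[list], str]:
    # Bind the configuration now; defer the completion call until the
    # returned callable is invoked (e.g. from PromptRefiner.refine).
    return lambda messages: completion(
        model=model,
        messages=messages,
        temperature=temperature,
        api_key=api_key,
        **kwargs,
    )


client = get_llm_client(api_key="sk-...", model="gpt-4o", temperature=0.2)
print(client([{"role": "user", "content": "Hello"}]))
```

Binding `api_key`, `model`, and `temperature` in a closure like this lets the caller (`PromptRefiner.refine`) pass only the messages at call time, keeping the refinement loop independent of provider configuration.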
Source code in promptrefiner/client_factory.py