API Reference
ai_model_manager
AIModelManager
A class to manage AI model configurations stored in a JSON file. Allows adding, removing, selecting models, and generating output via LLM APIs.
Source code in cli/ai_model_manager.py
__init__(file_path=None)
Initialize the AIModelManager instance. Creates the default configuration file if it doesn't exist.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`file_path` | `str` | Path to the JSON configuration file. If `None`, defaults to `~/.config/ai_model_manager/models_api.json`. | `None` |
Source code in cli/ai_model_manager.py
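The default-path fallback described above can be sketched as follows; `resolve_config_path` is a hypothetical standalone helper for illustration, not a method of the class:

```python
import os

def resolve_config_path(file_path=None):
    # Hypothetical helper mirroring the documented default: fall back to
    # ~/.config/ai_model_manager/models_api.json when no path is given.
    if file_path is None:
        return os.path.join(
            os.path.expanduser("~"), ".config", "ai_model_manager", "models_api.json"
        )
    return file_path

print(resolve_config_path("custom.json"))  # → custom.json
```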
configure_model(model_name, api_key)
Adds or updates a model entry with a given API key.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`model_name` | `str` | The name of the model to configure. | required |
`api_key` | `str` | The API key associated with the model. | required |
Source code in cli/ai_model_manager.py
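A minimal sketch of the add-or-update behaviour, assuming keys live under a top-level `"models"` object in the JSON file (the actual layout may differ); `configure_model` here is a standalone stand-in for the method:

```python
import json
import os
import tempfile

def configure_model(config_path, model_name, api_key):
    # Load the existing JSON config (if any), then add or overwrite the entry.
    config = {}
    if os.path.exists(config_path):
        with open(config_path) as fh:
            config = json.load(fh)
    config.setdefault("models", {})[model_name] = api_key
    with open(config_path, "w") as fh:
        json.dump(config, fh, indent=2)

path = os.path.join(tempfile.mkdtemp(), "models_api.json")
configure_model(path, "gemini-1.5-flash", "fake-key")
configure_model(path, "gemini-1.5-flash", "rotated-key")  # second call updates
with open(path) as fh:
    print(json.load(fh)["models"]["gemini-1.5-flash"])  # → rotated-key
```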
create_default_file()
Creates a configuration file with default model entries.
Includes a pre-set API key for `gemini-1.5-flash`.
Source code in cli/ai_model_manager.py
generate_output(model_name, prompt_by_user)
Generates LLM output using the specified model and user prompt.
This method displays a spinner while waiting for a response from the model.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`model_name` | `str` | The name of the model to use. | required |
`prompt_by_user` | `str` | The prompt to send to the model. | required |

Returns:

Type | Description |
---|---|
`str` | The model's output. |
Source code in cli/ai_model_manager.py
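The spin-while-waiting pattern this method describes can be sketched with a background thread; `llm_call` is an injected stand-in for the real API request, and the spinner body is reduced to a sleep:

```python
import threading
import time

def generate_output(model_name, prompt_by_user, llm_call):
    # Show a spinner (reduced here to a sleeping loop) while the injected
    # llm_call runs; stop it as soon as a response arrives.
    stop_event = threading.Event()

    def spin():
        while not stop_event.is_set():
            time.sleep(0.05)  # a real spinner would redraw a frame here

    spinner = threading.Thread(target=spin, daemon=True)
    spinner.start()
    try:
        return llm_call(model_name, prompt_by_user)
    finally:
        stop_event.set()
        spinner.join()

def fake_llm(model, prompt):
    return f"[{model}] echo: {prompt}"

print(generate_output("gemini-1.5-flash", "hi", fake_llm))
# → [gemini-1.5-flash] echo: hi
```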
get_api_key(model_name)
Retrieves the API key for a given model.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`model_name` | `str` | The name of the model. | required |

Returns:

Type | Description |
---|---|
`str` | The API key associated with the model. |

Raises:

Type | Description |
---|---|
`ValueError` | If the model is not available or lacks an API key. |
Source code in cli/ai_model_manager.py
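The documented `ValueError` behaviour might be implemented like this sketch; the standalone function, the `"models"` layout, and the error message are assumptions:

```python
def get_api_key(config, model_name):
    # Mirror the documented error behaviour: a missing model or empty key
    # raises ValueError.
    key = config.get("models", {}).get(model_name)
    if not key:
        raise ValueError(f"No API key configured for model '{model_name}'")
    return key

config = {"models": {"gemini-1.5-flash": "fake-key"}}
print(get_api_key(config, "gemini-1.5-flash"))  # → fake-key
try:
    get_api_key(config, "unknown-model")
except ValueError as exc:
    print(exc)  # → No API key configured for model 'unknown-model'
```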
list_models()
Prints the list of models and their corresponding API keys (if available).
Source code in cli/ai_model_manager.py
load()
Loads the current configuration from the JSON file.
Returns:

Type | Description |
---|---|
`dict` | Configuration data including selected model and keys. |
remove_model(model_name)
Removes a model from the configuration.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`model_name` | `str` | The name of the model to remove. | required |

Raises:

Type | Description |
---|---|
`ValueError` | If the model is not found in the configuration. |
Source code in cli/ai_model_manager.py
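A sketch of the removal logic, again assuming a `"models"` object in the configuration; the standalone function is illustrative only:

```python
def remove_model(config, model_name):
    # Deleting an unknown model raises ValueError, per the docs above.
    models = config.get("models", {})
    if model_name not in models:
        raise ValueError(f"Model '{model_name}' not found in configuration")
    del models[model_name]

config = {"models": {"gemini-1.5-flash": "fake-key"}}
remove_model(config, "gemini-1.5-flash")
print(config["models"])  # → {}
```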
select_model(model_name)
Sets the specified model as the default selected model.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`model_name` | `str` | The name of the model to select. | required |
Prints a message based on whether the selection was successful.
Source code in cli/ai_model_manager.py
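One way to model "prints a message based on whether the selection was successful" is a success flag, sketched below; the `"selected_model"` key and the boolean return are assumptions about the real implementation:

```python
def select_model(config, model_name):
    # Return a success flag so the caller can print the right message.
    if model_name not in config.get("models", {}):
        return False
    config["selected_model"] = model_name
    return True

config = {"models": {"gemini-1.5-flash": "fake-key"}}
print(select_model(config, "gemini-1.5-flash"))  # → True
print(config["selected_model"])                  # → gemini-1.5-flash
print(select_model(config, "missing-model"))     # → False
```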
llm
gemini_api_output(model_name, prompt_by_user)
Generate AI response using specified model.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`model_name` | `str` | Name of the AI model to use. | required |
`prompt_by_user` | `str` | User's input prompt. | required |

Returns:

Type | Description |
---|---|
`str` | Generated response text. |
Source code in cli/llm.py
prettify_llm_output
prettify_llm_output(response)
Prettifies the output from a language model response by stripping leading and trailing whitespace and code block markers, then prints it as Markdown to the console.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`response` | `str` | The raw response from the language model. | required |

Returns:

Type | Description |
---|---|
`None` | |
Source code in cli/prettify_llm_output.py
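The stripping step might look like the sketch below; rendering the result as Markdown to the console is omitted, and `strip_markers` is a hypothetical name for the inner cleanup:

```python
def strip_markers(response):
    # Trim whitespace, then drop a surrounding pair of ``` fences
    # (the opening fence may carry a language tag).
    text = response.strip()
    if text.startswith("```"):
        text = text.split("\n", 1)[1] if "\n" in text else ""
    if text.rstrip().endswith("```"):
        text = text.rstrip()[:-3]
    return text.strip()

raw = "```markdown\n# Hello\nSome output.\n```"
print(strip_markers(raw))  # prints the inner Markdown, fences removed
```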
prompt
debug_last_command_line_prompt(prompt_by_user, all_input_flags)
Analyzes and debugs the last few command-line commands using the LLM.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`prompt_by_user` | `str` or `None` | Optional user prompt to append. | required |
`all_input_flags` | `list` | List of input flags provided in the CLI. | required |
Source code in cli/prompt.py
handle_all_quries()
Main logic to handle prompt input, debugging, or flag-based commands from the CLI.
Source code in cli/prompt.py
handle_input_flags(all_input_flags)
Handles input flags such as configuring, removing, selecting, or listing models.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`all_input_flags` | `list` | List of input flags from the CLI. | required |
Source code in cli/prompt.py
last_command_line_prompt(last_number_of_commands)
Retrieves the last N commands from the user's bash history.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`last_number_of_commands` | `int` | Number of recent commands to retrieve. | required |

Returns:

Type | Description |
---|---|
`str` | The last N commands as a single string. |
Source code in cli/prompt.py
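Reading the tail of a history file can be sketched like this; a temporary file stands in for `~/.bash_history`, and the newline-joined return format is an assumption:

```python
import os
import tempfile

def last_commands(history_path, last_number_of_commands):
    # Join the last N non-empty history lines into one newline-separated string.
    with open(history_path) as fh:
        lines = [line.strip() for line in fh if line.strip()]
    return "\n".join(lines[-last_number_of_commands:])

# Temporary stand-in for ~/.bash_history so the sketch is self-contained.
path = os.path.join(tempfile.mkdtemp(), "bash_history")
with open(path, "w") as fh:
    fh.write("ls -la\ngit status\ngit push\n")

print(last_commands(path, 2))
# → git status
#   git push
```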
main()
Entry point for the CLI tool. Initializes config file or processes CLI inputs.
prompt_for_llm(prompt_for_llm)
Sends a prompt to the selected LLM model and prints the response.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`prompt_for_llm` | `str` | The prompt to send to the model. | required |
Source code in cli/prompt.py
user_command_line_prompt()
Parses command-line arguments to separate the user's prompt and any additional flags.
Returns:

Type | Description |
---|---|
`tuple` | A tuple containing the user's prompt (str or None) and a list of input flags. |
Source code in cli/prompt.py
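The prompt/flag split might be implemented along these lines; the real function parses `sys.argv`, whereas this sketch takes the argument list explicitly, and the leading-dash heuristic is an assumption:

```python
def split_prompt_and_flags(argv):
    # Tokens starting with "-" are flags; the rest re-join into the prompt.
    flags = [a for a in argv if a.startswith("-")]
    words = [a for a in argv if not a.startswith("-")]
    return (" ".join(words) if words else None), flags

print(split_prompt_and_flags(["explain", "this", "--list-models"]))
# → ('explain this', ['--list-models'])
print(split_prompt_and_flags(["--debug"]))
# → (None, ['--debug'])
```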
utils
install_requirements()
Installs the required dependencies for the application.
spin_loader(stop_event)
Displays a spinning loader in the console until the stop_event is set.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`stop_event` | `Event` | An event object used to signal the loader to stop. | required |