PromptRefiner
Enhancing prompts with intelligent strategies for LLMs
👋 Welcome to PromptRefiner
Helping you craft the perfect prompt for better LLM responses!
PromptRefiner is a lightweight Python library that helps users write better prompts for Large Language Models (LLMs) with minimal configuration. Many users struggle to craft effective prompts that yield the desired results.
PromptRefiner takes user input, applies a selected strategy, and returns an improved prompt that draws more specific and effective responses from LLMs. It does this by leveraging an LLM to refine the user's prompt according to predefined strategies, making it easier to get high-quality responses.
Whether you're using a prompt for GPT-4, Claude, Mistral, or any other LLM, PromptRefiner ensures your input is well-structured for the best possible output.
✨ Key Features
✅ Supports 100+ LLM Clients – Works with OpenAI, Anthropic, Hugging Face, and more!
✅ Highly Customizable – Use different LLM clients per strategy or a single client for all.
✅ Command-Line First – Quickly refine prompts from the CLI for rapid experimentation.
✅ Extensible – Developers can create their own custom prompt refinement strategies.
✅ Seamless Integration – Works effortlessly in Python applications or scripts.
📦 Installation
Install PromptRefiner using pip:
pip install promptrefiner
⚡ Quick Start
🔧 Using from the Command Line
Before using promptrefiner, make sure to set the following environment variables (Windows users should use set instead of export):
export PREFINER_API_KEY="your-api-key-here"
export PREFINER_MODEL="openai/gpt-4" # Change based on your LLM model
Then refine a prompt right from your terminal:
promptrefiner --strategy persona "Tell me about AI"
🐍 Using in a Python Script
Note: Set the PREFINER_API_KEY and PREFINER_MODEL environment variables before using PromptRefiner in your Python script.
from promptrefiner import PromptRefiner
prompt_refiner = PromptRefiner(strategies=["persona"])
refined_prompt = prompt_refiner.refine("Explain quantum mechanics.")
print(refined_prompt)
🚀 How It Works
- The user provides a prompt (e.g., "Tell me about AI").
- The user selects a strategy (e.g., "verbose" for a more detailed response).
- PromptRefiner applies the system prompt template for that strategy.
- The prompt is sent to an LLM for refinement.
- The improved prompt is returned to the user.
🔍 Under the hood: Each strategy is backed by a system prompt template that guides the LLM to refine the user's input for better results.
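The strategy mechanism can be illustrated with a minimal, self-contained sketch. The names and templates below (STRATEGY_TEMPLATES, build_messages) are hypothetical, not PromptRefiner's actual internals, and the LLM call is left out:

```python
# Hypothetical sketch of a strategy-backed refiner; the template text
# and helper names are illustrative, not PromptRefiner's real source.
STRATEGY_TEMPLATES = {
    "persona": (
        "Rewrite the user's prompt so it assigns the assistant a clear, "
        "relevant persona before asking the question."
    ),
    "verbose": (
        "Rewrite the user's prompt so it explicitly asks for a detailed, "
        "step-by-step response."
    ),
}

def build_messages(strategy: str, user_prompt: str) -> list[dict]:
    """Pair a strategy's system prompt template with the raw user prompt."""
    system_prompt = STRATEGY_TEMPLATES[strategy]
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("persona", "Tell me about AI")
# In the real library, these messages would be sent to the configured LLM,
# and the model's reply would come back as the refined prompt.
print(messages[0]["content"])
```

The key design point is that refinement needs no per-strategy code: each strategy is just a system prompt, so adding one is a matter of adding a template.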
🤔 Why Use PromptRefiner?
🔹 Improve prompt clarity & effectiveness – Get sharper, more relevant responses.
🔹 Save time – No need to manually tweak prompts for better results.
🔹 Optimized for developers & researchers – Quickly test different prompting strategies.
🔹 Fine-tune for different LLMs – Customize strategies for specific AI models.
🔹 Works for various use cases:
- Chatbots & AI assistants
- Content generation & summarization
- Data extraction from LLMs
- Code generation improvements
🚀 Join Us & Contribute!
We welcome contributors, feedback, and feature suggestions! 🎉
🔗 GitHub Repo: darshit7/promptrefiner
📖 Documentation: PromptRefiner
📢 Report Issues & Ideas: Coming Soon
🔥 Want to improve PromptRefiner? Open a GitHub issue or contribute a pull request! 🛠️
🚀 Refine your prompts. Supercharge your AI interactions. Try PromptRefiner today!