6.16. Large Language Model (LLM)#

6.16.1. Simplify LLM Integration with Magentic’s @prompt Decorator#

!pip install magentic

To integrate LLMs into your code effortlessly, try magentic.

With magentic, you can use the @prompt decorator to create functions that return structured LLM outputs, keeping your code clean and readable.

import openai

openai.api_key = "sk-..."
from magentic import prompt


@prompt('Add more "dude"ness to: {phrase}')
def dudeify(phrase: str) -> str:
    ...  # No function body as this is never executed


dudeify("Hello, how are you?")
"Yo dude, how's it going?"

The @prompt decorator respects the function's return type annotation, including types supported by pydantic.
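The mechanism behind this is return-annotation introspection: the decorator reads the declared return type of the stub function and uses it to decide how to parse the LLM's reply. A minimal, stdlib-only sketch of that first step (re-declaring the dudeify stub without the decorator purely for illustration; magentic's real logic is more involved):

```python
from typing import get_type_hints


def returns_type(func):
    """Read the declared return type of a decorated stub function."""
    return get_type_hints(func).get("return")


def dudeify(phrase: str) -> str:
    ...  # stub: a decorator like @prompt would never execute this body


returns_type(dudeify)
# <class 'str'>
```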

from magentic import prompt, FunctionCall
from pydantic import BaseModel
from typing import Literal


class MilkTea(BaseModel):
    tea: str
    sweetness_percentage: float
    topping: str


@prompt("Create a milk tea with the following tea {tea}.")
def create_milk_tea(tea: str) -> MilkTea:
    ...


create_milk_tea("green tea")
MilkTea(tea='green tea', sweetness_percentage=100.0, topping='boba')
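Under the hood, structured output works by having the LLM emit JSON that is then validated against the declared model. A library-free sketch of that parse-and-validate step, using a dataclass as a stand-in for the pydantic model and a hard-coded JSON string in place of a real LLM response:

```python
import json
from dataclasses import dataclass, fields


@dataclass
class MilkTea:
    tea: str
    sweetness_percentage: float
    topping: str


def parse_llm_json(raw: str) -> MilkTea:
    """Validate a (hypothetical) LLM JSON reply against the dataclass fields."""
    data = json.loads(raw)
    expected = {f.name for f in fields(MilkTea)}
    if set(data) != expected:
        raise ValueError(f"expected keys {expected}, got {set(data)}")
    return MilkTea(**data)


# A stand-in for what the model might return:
raw = '{"tea": "green tea", "sweetness_percentage": 100.0, "topping": "boba"}'
parse_llm_json(raw)
# MilkTea(tea='green tea', sweetness_percentage=100.0, topping='boba')
```

pydantic does far more than this (type coercion, nested models, rich error reporting), which is why magentic builds on it rather than on raw dataclasses.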

The @prompt decorator can also return a function call: pass candidate functions via the functions argument, and the LLM decides which one to call and with which arguments.

def froth_milk(temperature: int, texture: Literal["foamy", "hot", "cold"]) -> str:
    """Froth the milk to the desired temperature and texture."""
    return f"Frothing milk to {temperature} F with texture {texture}"


@prompt(
    "Prepare the milk for my {coffee_type}",
    functions=[froth_milk],
)
def configure_coffee(coffee_type: str) -> FunctionCall[str]:
    ...


output = configure_coffee("latte!")
output()
'Frothing milk to 60 F with texture foamy'
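Conceptually, a FunctionCall is just a deferred invocation: it stores the function the LLM chose plus the arguments it selected, and runs them only when you call the object. A simplified stand-in (not magentic's actual implementation) makes the pattern concrete:

```python
from typing import Callable


class DeferredCall:
    """Stores a function and its arguments; executes them only when invoked."""

    def __init__(self, func: Callable[..., str], *args, **kwargs):
        self.func = func
        self.args = args
        self.kwargs = kwargs

    def __call__(self) -> str:
        return self.func(*self.args, **self.kwargs)


def froth_milk(temperature: int, texture: str) -> str:
    return f"Frothing milk to {temperature} F with texture {texture}"


# As if the LLM had picked froth_milk with these arguments:
output = DeferredCall(froth_milk, 60, texture="foamy")
output()
# 'Frothing milk to 60 F with texture foamy'
```

Deferring execution this way lets you inspect or validate what the LLM chose before any side effects happen.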

Link to magentic.

6.16.2. Outlines: Ensuring Consistent Outputs from Language Models#

The Outlines library lets you constrain the outputs of language models. This makes the outputs more predictable, improving the reliability of systems built on large language models.

import outlines

model = outlines.models.transformers("mistralai/Mistral-7B-v0.1")

prompt = """You are a sentiment-labelling assistant.
Is the following review positive or negative?

Review: This restaurant is just awesome!
"""
# Only return a choice between multiple possibilities
answer = outlines.generate.choice(model, ["Positive", "Negative"])(prompt)
# Only return integers or floats

prompt = "1+1="
answer = outlines.generate.format(model, int)(prompt)

prompt = "sqrt(2)="
answer = outlines.generate.format(model, float)(prompt)

Link to Outlines.