How to Build Reliable LLM Applications with Phidata


Introduction

With the introduction of Large Language Models, the use of LLMs in applications has grown enormously. LLMs are part of most of the recently developed applications across many problem statements. Much of the NLP space, including chatbots, sentiment analysis, topic modelling, and many more tasks, is being handled by Large Language Models. Working directly with these LLMs can sometimes be difficult, which has led to tools like LangChain and LlamaIndex that help simplify creating applications with Large Language Models. In this guide, we will look at one such tool called Phidata, which simplifies building real-world applications with LLMs.

Learning Objectives

  • Understand the basics and purpose of Phidata for building LLM applications
  • Learn how to install Phidata and its dependencies
  • Gain proficiency in creating and configuring LLM assistants using Phidata
  • Explore integrating various tools with LLMs to enhance their capabilities
  • Develop skills to create structured outputs using Pydantic models within Phidata

This article was published as a part of the Data Science Blogathon.

What is Phidata?

Phidata is a popular Python library built to create real-world applications with Large Language Models by integrating them with Memory, Knowledge, and Tools. With Phidata, one can add memory to an LLM, which can include chat history, or combine the LLM with a Knowledge Base, i.e., a vector store, with which we can build RAG (Retrieval Augmented Generation) systems. Finally, LLMs can even be paired with Tools, which allow them to perform tasks that are beyond their native capabilities.

Phidata is integrated with different Large Language Models like OpenAI, Groq, and Gemini, and even supports open-source Large Language Models through Ollama. Similarly, it supports the OpenAI and Gemini embedding models and other open-source models through Ollama. Currently, Phidata supports a few popular vector stores like Pinecone, PGVector, Qdrant, and LanceDB. Phidata even integrates with popular LLM libraries like LangChain and LlamaIndex.
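To give a feel for the open-source support, below is a minimal sketch of running an Assistant on a local model instead of a hosted API. It assumes an Ollama server is running locally with a model already pulled, and that your installed Phidata version exposes the phi.llm.ollama module; treat the import path and model name as assumptions to verify against your version:

from phi.assistant import Assistant
from phi.llm.ollama import Ollama  # assumption: present in your Phidata version

# Assumes a local Ollama server and a model pulled beforehand,
# e.g. with `ollama pull llama3`
assistant = Assistant(
    llm=Ollama(model="llama3"),
    description="You are a helpful AI Assistant.",
)
assistant.print_response("In one line, what is a vector store?", markdown=True)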

Getting Started with Phidata Assistants

In this section, we will see how to download the Phidata Python library and get started with it. Along with Phidata, we will need a few other Python libraries, including Groq, DuckDuckGo Search, and OpenAI. For this, we run the following command:

!pip install -q -U phidata groq==0.7 duckduckgo-search openai

Code Explanation

  • phidata: This is the library we will work with to create LLM applications
  • groq: Groq is a company known for building new hardware, called the LPU (Language Processing Unit), to run LLMs faster. This library will let us access the Mixtral model running on Groq LPUs
  • duckduckgo-search: With this library, we can search the internet. The Large Language Model can leverage this library to perform internet searches
  • openai: This is the official library from OpenAI to work with the latest GPT models

Setting Up API Keys

Running this will download all the Python modules that we need. Before starting, we need to store the API keys in the environment so that Phidata can access them to work with the underlying models. For this, we do the following:

import os

# Replace GROQ_API and OPENAI_API_KEY with variables (or strings) holding your keys
os.environ['GROQ_API_KEY'] = GROQ_API
os.environ['OPENAI_API_KEY'] = OPENAI_API_KEY
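If you prefer not to keep the keys in variables inside the notebook, an alternative sketch is to prompt for them at runtime with Python's standard getpass module, so they never appear in the source:

import os
from getpass import getpass

# Prompts for the keys without echoing them to the screen
os.environ['GROQ_API_KEY'] = getpass("Enter your Groq API key: ")
os.environ['OPENAI_API_KEY'] = getpass("Enter your OpenAI API key: ")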

You can get a free API key from Groq's official website; with it, we can access the Mixtral Mixture of Experts model. By default, Phidata works with OpenAI models, so we also provide the OpenAI API key. Running this code saves the Groq and OpenAI API keys to the environment. Now let us get started with the Phidata library.

from phi.assistant import Assistant

assistant = Assistant(
    description="You are a helpful AI Assistant who answers user queries",
)

assistant.print_response("In short, explain black holes to a 7 year old?", markdown=True)

Code Explanation

  • We start by importing the Assistant class from phi.assistant
  • Then, we instantiate an Assistant by providing a description. An Assistant wraps a Large Language Model; we can give it a description, a System Prompt, and other LLM configurations
  • We can call the LLM through the .print_response() method, to which we pass the user query along with another parameter, markdown=True

Running the code defines an Assistant with an OpenAI GPT model as the LLM, passes the user query to this Large Language Model, and finally generates the output. The generated output is formatted neatly and, by default, is streamed. We can see the output below

[Output image]

We see the response generated in a well-formatted style. Here, we can see both the user question and the answer generated by the Large Language Model. Setting markdown=True is the reason the output is printed in a readable format. Now, if we wish to change the type of OpenAI model we work with, we can check the below code

Changing the OpenAI Model

Here's the code:

from phi.assistant import Assistant
from phi.llm.openai import OpenAIChat

assistant = Assistant(
    llm=OpenAIChat(model="gpt-3.5-turbo"),
    description="You help people by providing useful answers to their queries",
    instructions=["List should contain only 5"],
    max_tokens=512
)

assistant.print_response("List some of the top cricketers in the world", markdown=True)
[Output image]

Code Explanation

  • We start by importing the Assistant class from phi.assistant, and we also import OpenAIChat from phi.llm.openai
  • Now, we again create an Assistant object. To it, we give a parameter called llm, and to this parameter, we provide the OpenAIChat() class with the model specified. Here it is GPT-3.5
  • We also give some extra parameters, like instructions, to which we pass the instructions we want the LLM to follow, and max_tokens, to limit the length of the LLM's generation
  • Finally, we call .print_response() with the user query

Running the code generates the output shown above. We see that the OpenAI model has indeed followed the instructions we gave it: it produced only 5 elements in the list, which aligns with our instructions. Apart from OpenAI, Phidata is integrated with other large language models. Let us try working with a Groq model using the following code

Working with the Groq Model

Here's the code:

from phi.assistant import Assistant
from phi.llm.groq import Groq

assistant = Assistant(
    llm=Groq(model="mixtral-8x7b-32768"),
    description="You help people by providing useful answers to their queries",
    max_tokens=512
)

response = assistant.run('How much height can a human fly?', stream=False)
print(response)
[Output image]

Code Explanation

  • We start by importing the Assistant class from phi.assistant, and we also import Groq from phi.llm.groq
  • Now, we again create an Assistant object. To it, we give a parameter called llm, and to this parameter, we provide the Groq() class with the model, i.e., the Mixtral Mixture of Experts
  • We also give a description and max_tokens while creating the Assistant object
  • Here, instead of .print_response(), we call the .run() function, which returns the output as a plain string rather than printing it in markdown format
  • We also set stream to False here so we can see the entire output in one go

Running the code produces the output we can see above. The response generated by Mixtral-8x7B looks good: even though the user query is nonsensical, the model provides a proper answer to it. Sometimes, we need the output generated by the LLM to follow a structure, i.e., a structured format that makes it easy to take the output and process it in downstream code. Phidata easily supports this, and we can do so with the following example

Let's Create a Travel Itinerary Using Phidata

Here's the code:

from typing import List
from pydantic import BaseModel, Field
from phi.assistant import Assistant
from phi.llm.groq import Groq


class TravelItinerary(BaseModel):
    destination: str = Field(..., description="The destination of the trip.")
    duration: int = Field(..., description="The duration of the trip in days.")
    travel_dates: List[str] = Field(..., description="List of travel dates in YYYY-MM-DD format.")
    activities: List[str] = Field(..., description="Planned activities for the trip.")
    accommodation: str = Field(..., description="Accommodation details for the trip.")
    budget: float = Field(..., description="Estimated budget for the trip.")
    travel_tips: str = Field(..., description="Useful travel tips for the destination.")

travel_assistant = Assistant(
    llm=Groq(model="mixtral-8x7b-32768"),
    description="You help people plan travel itineraries.",
    output_model=TravelItinerary,
)

print(travel_assistant.run("New York"))

Code Explanation

  • We start by importing a few libraries, which include pydantic, typing, and our Assistant
  • Then, we define our structured output. For this, we create a class; in our example, a TravelItinerary class, which inherits from the Pydantic BaseModel
  • In this class, we declare different fields holding different travel-related information
  • For each field, we provide the data type it stores and, through the Field object from Pydantic, a description of what it is expected to hold
  • Finally, we create a travel assistant object by calling the Assistant class and giving it all the parameters it needs, which include the LLM, the description, and the output_model parameter, which takes our Pydantic model

Now, we call the .run() function of the assistant and give it a location name to get the travel itinerary. Running this produces the below output

[Output image]

We can see that the output generated by the assistant is a Pydantic object, i.e., an instance of the TravelItinerary class we defined. Each field in this class is filled in according to the descriptions given while creating the class. The Mixtral MoE 8x7B has done a good job of populating the TravelItinerary object with proper values. It has provided us with the destination, travel dates, duration of stay, and a list of activities to perform at that location. Along with that, it even provides travel tips to help save money.
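Because the result is a regular Pydantic object, downstream code can read its fields directly instead of parsing free text. Here is a small sketch of that, assuming .run() returns the populated TravelItinerary instance, as the output above suggests:

itinerary = travel_assistant.run("New York")

# Each field is typed, so it can be used directly in further processing
print(itinerary.destination)           # e.g. "New York"
print(itinerary.duration, "days")      # integer trip duration
for activity in itinerary.activities:  # list of planned activities
    print("-", activity)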

Large Language Models by themselves are quite limited in terms of usability: LLMs can only generate text. But there are cases where one wants the latest news, or information that requires an API call. In these scenarios, we need tools.

Tools are an LLM's weapons. LLMs can work with tools to perform actions that are impossible for vanilla LLMs. These tools can be API tools, which LLMs call to make API requests and fetch information, or math tools, which LLMs use to perform math operations.

Built-in Tools in Phidata

Phidata already ships with some built-in tools that LLMs can work with. Let us try one with the following code:

from phi.assistant import Assistant
from phi.llm.groq import Groq
from phi.tools.duckduckgo import DuckDuckGo

assistant = Assistant(
    llm=Groq(model="mixtral-8x7b-32768"),
    tools=[DuckDuckGo()],
    show_tool_calls=True,
    description="You are a senior BBC researcher writing an article on a topic.",
    max_tokens=1024
)

assistant.print_response("What is the latest LLM from Google?", markdown=True, stream=False)
[Output image]

Code Explanation

  • We start by importing the Assistant class from phi.assistant, and for the tool, we import the DuckDuckGo tool from phi.tools.duckduckgo
  • Next, we instantiate an Assistant object by calling the Assistant class with different parameters. These include the llm parameter, where we provide the Groq model, plus the description and max_tokens parameters
  • Here, we also provide the tools parameter, to which we pass a list of tools; in this case, a list with a single element, DuckDuckGo
  • To check whether the Assistant has called a tool, we can give another parameter, show_tool_calls=True, which displays any tool call that is made
  • Finally, we call the .print_response() function with a user query asking for information about the latest Google LLMs

Running this code creates an assistant with the DuckDuckGo tool. When we call the LLM with the user query "What is the latest LLM from Google?", the LLM decides whether to make a function call. The output image shows that the LLM made a function call to the DuckDuckGo search tool with the query "latest LLM from Google."

The results fetched from this function call are fed back to the model along with the original query so that the model can work with this data and generate the final response for the user query, which it did here. It generated the proper response about the latest LLMs from Google: Gemini 1.5 Flash and PaliGemma, which the Google team recently announced. We can also create our own tools in addition to relying on the Phidata tools. One example of this can be seen below

Creating a Custom Tool

Here's the code:

from phi.assistant import Assistant
from phi.llm.groq import Groq

def power(Base: float, Exponent: float) -> str:
    "Raise Base to the Exponent power"
    return str(Base**Exponent)

assistant = Assistant(
    llm=Groq(model="mixtral-8x7b-32768"),
    tools=[power],
    show_tool_calls=True)

assistant.print_response("What is 13 to the power 9.731?", stream=False)
[Output image]

Code Explanation

  • Here, we start by importing the Assistant class from the Phidata library
  • Then we define a function called power(), which takes two floating point numbers, Base and Exponent, and returns Base raised to the Exponent power
  • We return a string because Phidata tools expect their output to be returned in string format
  • Then, we create an assistant object and pass this new tool, in the form of a list, to the tools parameter along with the other parameters
  • Finally, we call the assistant with a math query related to the function

Running this code produces the following output. We can see in the output that the model did perform a function call, invoking the power function with the correct arguments taken from the user query. It then takes the response generated by the function, and the assistant generates the final response to the user's question. We can even give multiple functions to the LLM and let the assistant call these functions multiple times. For this, let us take a look at the code below.

Verifying Tool Invocation

Here's the code:

from phi.assistant import Assistant
from phi.llm.groq import Groq

def power(Base: float, Exponent: float) -> str:
    "Raise Base to the Exponent power"
    return str(Base**Exponent)

def division(a: float, b: float) -> str:
    "Divide a by b"
    return str(a/b)

assistant = Assistant(
    llm=Groq(model="mixtral-8x7b-32768"),
    tools=[power, division],
    show_tool_calls=True)

assistant.print_response("What is 10 power 2.83 divided by 7.3?", stream=False)

In this code, we have defined two functions. The first function takes two floats, Base and Exponent, and returns a string containing Base raised to the Exponent. The second function is a division function: given two floats, a and b, it returns a/b as a string.

Then, we create an assistant object with Groq as the Large Language Model and pass these two tools as a list to the tools parameter while creating the assistant object. Finally, we provide a query related to these tools to check their invocation: "What is 10 power 2.83 divided by 7.3?"

From the output below, we can see that two tools get called. The first is the power tool, which is called with the appropriate arguments. The answer from the power tool is then passed as one of the arguments, along with another value, when calling the second tool. Finally, the answer generated by the second function call is given to the LLM so that it can generate the final response to the user query.
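Since these tools are plain Python functions, a quick way to verify them is to call them directly, mirroring the two-step chain the model performed. A small sketch:

# Manually reproducing the model's two-step tool chain
intermediate = power(10, 2.83)              # roughly "676.08", as a string
result = division(float(intermediate), 7.3) # roughly "92.6"
print(result)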

[Output image]

Phidata even lets us create a CLI app with the Assistant. We can create such an application with the following code.

Creating a CLI Application

Here's the code:

from phi.assistant import Assistant
from phi.tools.duckduckgo import DuckDuckGo

assistant = Assistant(tools=[DuckDuckGo()],
                      show_tool_calls=True,
                      read_chat_history=True)
assistant.cli_app(markdown=True)

  • We start by importing Assistant and DuckDuckGo from the Phidata library
  • Then, we create an assistant object by calling the Assistant class and giving it the tools we wish to work with (no llm is specified here, so the default OpenAI model is used)
  • We also set read_chat_history to True, which allows the model to read the chat history if needed
  • Finally, we call the .cli_app() function of the assistant object and set markdown to True so as to make the model produce a markdown response

Running the above code creates a terminal app where we can chat with the assistant. The conversation can be seen in the pictures below

[Output images]

In the first answer, the assistant called the duckduckgo function to get information about the latest models created by Google. Then, in the second turn, we can notice that it invoked the get_chat_history function, which was necessary given the user query, and was able to answer the user query correctly

Building a Team of Assistants

Creating a team of assistants is possible through the Phidata library. It lets us create a team of assistants that can interact with each other and delegate work to one another to perform specific tasks, with the right tools assigned to each assistant. Phidata simplifies the process of creating such assistants. We can create a simple assistant team with the following code:

Here's the code:

from phi.assistant import Assistant
from phi.llm.groq import Groq

def power(base: float, exponent: float) -> str:
    "Raise base to the exponent power"
    return str(base**exponent)


math_assistant = Assistant(
    llm=Groq(model="mixtral-8x7b-32768"),
    name="Math Assistant",
    role="Performs mathematical operations like taking the power of two numbers",
    tools=[power],
)

main_assistant = Assistant(
    llm=Groq(model="mixtral-8x7b-32768"),
    name="Research Team",
    show_tool_calls=True,
    team=[math_assistant],
)

main_assistant.print_response(
    "What is 5 power 9.12?",
    markdown=True,
    stream=False,
)

  • The code starts by importing the necessary modules from the Phidata library: Assistant from phi.assistant and Groq from phi.llm.groq
  • We then define a function named power, which takes two float parameters (base and exponent) and returns the result of raising base to exponent as a string
  • An instance of Assistant named math_assistant is created with attributes like llm, where we give the LLM we want to work with; name, where we call it the Math Assistant; tools, where we provide the power tool; and role, which defines the role of the assistant
  • Similarly, we create a main_assistant, but here we provide the team attribute, a list of assistants to which it can delegate work; in our code, this is the math_assistant
  • The print_response function is called on main_assistant with a query ("What is 5 power 9.12?"), formatted to display in Markdown

Running this produces the below output:

[Output image]

When the code runs, the main_assistant receives the user query. The query contains a math problem that involves taking a power, so the main_assistant recognizes this and delegates the work to the math_assistant. We can see this function call in the picture. The math_assistant then takes the user query handed over by the main_assistant and makes a function call to the power tool with the base and the exponent. The answer returned from the function call is fed back from the math_assistant to the main_assistant, which finally uses it to create the final response to the user query
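Extending the same pattern, the team can hold more than one specialist. The sketch below adds a hypothetical web research member alongside the math_assistant from the code above, so the main assistant can route each part of a query to whichever member's role and tools fit; the names and roles here are illustrative, not from the original example:

from phi.assistant import Assistant
from phi.llm.groq import Groq
from phi.tools.duckduckgo import DuckDuckGo

# A hypothetical second team member with its own tool
web_assistant = Assistant(
    llm=Groq(model="mixtral-8x7b-32768"),
    name="Web Assistant",
    role="Searches the internet for up-to-date information",
    tools=[DuckDuckGo()],
)

main_assistant = Assistant(
    llm=Groq(model="mixtral-8x7b-32768"),
    name="Research Team",
    show_tool_calls=True,
    team=[math_assistant, web_assistant],  # math_assistant from the snippet above
)

main_assistant.print_response(
    "What is 2 power 10, and what is the latest LLM from Google?",
    markdown=True,
    stream=False,
)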

Conclusion

In conclusion, Phidata is a simple and powerful Python library designed to streamline the creation of real-world applications using Large Language Models (LLMs). By integrating memory, knowledge bases, and tools, Phidata lets developers augment LLMs, which on their own are limited to plain text generation. This guide has shown the ease of building with Phidata, working with different LLMs like OpenAI and Groq, and extending functionality through different tools and assistants. Phidata's ability to produce structured outputs, work with tools for real-time data retrieval, and create collaborative teams of assistants makes it a go-to tool for developing reliable and sophisticated LLM applications.

Key Takeaways

  • Phidata simplifies building LLM applications by integrating memory, knowledge, and tools
  • Phidata supports structured outputs using Pydantic models, improving data handling
  • Developers can build teams of assistants that delegate tasks to one another for complex workflows
  • Phidata's CLI application feature allows for interactive, terminal-based conversations with LLMs
  • It supports popular LLMs like OpenAI, Groq, and Gemini and even integrates with different vector stores

The media shown in this article are not owned by Analytics Vidhya and are used at the Author's discretion.

Frequently Asked Questions

Q1. What is Phidata?

A. Phidata is a Python library designed to simplify building real-world applications with Large Language Models (LLMs). It allows you to integrate LLMs with memory, knowledge bases, and tools to create powerful applications.

Q2. Can Phidata handle different LLM outputs?

A. Yes, Phidata can work with different LLM outputs. You can define a structured output format using Pydantic models and retrieve the LLM response in that format.

Q3. What are Phidata tools, and how do they work?

A. Phidata tools extend LLM capabilities by allowing them to perform actions like API calls, internet searches (using DuckDuckGo), or mathematical calculations (through user-built functions).

Q4. Can I create my very own tools for Phidata?

A. Absolutely! You can define your own Python functions to perform specific tasks and include them as tools to be integrated with the Assistant object.

Q5. Is it possible to build a team of assistants using Phidata?

A. Yes, Phidata allows the creation of multiple assistants with assigned roles and tools. You can create a main assistant that delegates tasks to other assistants based on their knowledge of different areas.

