How to Use LangChain? Step-by-Step Guide


LangChain is an artificial intelligence framework designed for programmers to develop applications using large language models (LLMs). Let's dive into how to use LangChain, step by step.

Step 1: Setup

Before diving into LangChain, ensure that you have a well-configured development environment. LangChain is available for both Python and JavaScript, so you can pick whichever language you prefer. For Python, install the package with pip or with conda:

pip install langchain
conda install langchain -c conda-forge
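
If you prefer the JavaScript/TypeScript side, LangChain.js can be installed from npm as sketched below; the rest of this guide assumes the Python package.

npm install langchain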

Step 2: LLMs

To use LangChain effectively, you’ll often need to integrate it with various components such as model providers, data stores, and APIs. Here we integrate LangChain with OpenAI’s model APIs; you can also use Hugging Face models instead.

!pip install openai

import os

# Set your OpenAI API key (replace with your own token)
os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_TOKEN"

from langchain.llms import OpenAI

# A higher temperature makes the output more creative and less deterministic
llm = OpenAI(temperature=0.9)

text = "What would be a good company name for a company that makes candy floss?"

# Calling the LLM directly with a string returns its completion
print(llm(text))
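
If you would rather avoid OpenAI, a roughly equivalent setup with the Hugging Face Hub looks like the sketch below; the model repo_id and parameters are illustrative choices, and you need your own Hugging Face access token.

!pip install huggingface_hub

import os

# Use your own Hugging Face access token here
os.environ["HUGGINGFACEHUB_API_TOKEN"] = "YOUR_HF_TOKEN"

from langchain.llms import HuggingFaceHub

# google/flan-t5-large is just one example of a hosted model
llm_hf = HuggingFaceHub(repo_id="google/flan-t5-large", model_kwargs={"temperature": 0.5, "max_length": 64})

print(llm_hf("What would be a good company name for a company that makes candy floss?"))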

Step 3: LangChain Prompt Templates

LangChain’s Prompt Templates make it easy to create good prompts for language models. A template defines the prompt’s structure once and fills in variables at run time, which keeps prompts consistent and reusable across an application.

llm("Can India be economically stronger in future?")

prompt = """Question: Can India be economically stronger in future?

Let's think step by step.

Answer: """

llm(prompt)
from langchain import PromptTemplate

template = """Question: question

Let's think step by step.

Answer: """

prompt =PromptTemplate(template=template,input_variables=["question"])
prompt.format(question="Can India be economically stronger in future?")

llm(prompt)
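
Templates are not limited to a single variable. The sketch below, with illustrative variable names, fills two placeholders in one prompt:

multi_template = """You are an economic analyst.

Country: {country}
Sector: {sector}

Give a short assessment of the sector's outlook in that country."""

multi_prompt = PromptTemplate(template=multi_template, input_variables=["country", "sector"])

llm(multi_prompt.format(country="India", sector="renewable energy"))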

Step 4: Chains

In LangChain, a single call to a language model (LLM) is fine for simple tasks, but more complex applications need to link, or chain, multiple calls and components together. The simplest chain, LLMChain, pairs a prompt template with an LLM.

from langchain import LLMChain

# The chain combines the PromptTemplate defined above with the LLM
llm_chain = LLMChain(prompt=prompt, llm=llm)

question = "Can India be economically stronger in future?"

# run() formats the prompt with the question and sends it to the LLM
print(llm_chain.run(question))
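
To actually link several LLM calls, chains can be composed. A minimal sketch with SimpleSequentialChain (the prompts are illustrative) feeds the output of the first chain into the second:

from langchain.chains import SimpleSequentialChain

# First chain: propose a company name for a given product
name_prompt = PromptTemplate(template="Suggest one company name for a business that makes {product}.", input_variables=["product"])
name_chain = LLMChain(llm=llm, prompt=name_prompt)

# Second chain: write a slogan for the company name produced above
slogan_prompt = PromptTemplate(template="Write a short slogan for a company called {company}.", input_variables=["company"])
slogan_chain = LLMChain(llm=llm, prompt=slogan_prompt)

overall_chain = SimpleSequentialChain(chains=[name_chain, slogan_chain], verbose=True)

print(overall_chain.run("candy floss"))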

Step 5: Agents and Tools

Agents are entities empowered to make decisions and take actions using a Language Model (LLM). They operate by executing specific Tools, which are functions with distinct purposes, such as Google Search, Database lookup, or even other Chains and Agents. Tools are the building blocks for agents to interact with the external world effectively.

!pip install wikipedia

from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.llms import OpenAI

# Temperature 0 keeps the agent's reasoning deterministic
llm = OpenAI(temperature=0)

# Give the agent a Wikipedia search tool and a calculator
tools = load_tools(["wikipedia", "llm-math"], llm=llm)

agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)

agent.run("In what year was the film Chocolate Factory released? What is this year raised to the 0.43 power?")
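
Built-in tools are not the only option; any Python function can be wrapped as a tool. The sketch below uses a made-up word-counting function purely to show the pattern:

from langchain.agents import Tool

def word_count(text: str) -> str:
    """Count the words in a piece of text."""
    return str(len(text.split()))

# The agent decides when to call a tool based on its description
custom_tool = Tool(
    name="word-counter",
    func=word_count,
    description="Useful for counting how many words are in a piece of text.",
)

agent = initialize_agent(tools + [custom_tool], llm, agent="zero-shot-react-description", verbose=True)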

Step 6: Memory 

Memory gives these programs a way to remember things from one step to the next: it lets them store and retrieve information between different calls or actions. LangChain provides a standard interface for memory along with several memory implementations to choose from.

from langchain import OpenAI, ConversationChain

llm = OpenAI(temperature=0)

# ConversationChain keeps the running dialogue in memory by default
conversation = ConversationChain(llm=llm, verbose=True)

conversation.predict(input="Hi there!")

conversation.predict(input="Can we talk about AI?")

conversation.predict(input="I'm interested in Deep Learning.")

Step 7: Document Loader

We use document loaders to load data from a source as documents. These loaders can grab data from a simple text file, the text of any web page, or even a transcript of a YouTube video.

from langchain.document_loaders import TextLoader

# Load the markdown file into a list of Document objects
loader = TextLoader("./index.md")

loader.load()
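
Loading a web page works the same way. Here is a minimal sketch with WebBaseLoader; the URL is illustrative, and the beautifulsoup4 package is required:

!pip install beautifulsoup4

from langchain.document_loaders import WebBaseLoader

# Fetches the page and wraps its text in Document objects
web_loader = WebBaseLoader("https://www.example.com")

web_docs = web_loader.load()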

Step 8: Indexes

Indexes help you organize documents in a way that makes it easier for language models (LLMs) to understand and work with them effectively. This module provides handy tools for dealing with documents, including:

1. Embeddings: numerical (vector) representations of information such as text, images, audio, and documents.

2. Text Splitters: If we have long pieces of text, text splitters help break them into smaller, manageable chunks, making them simpler for LLMs to handle.

3. Vector stores: databases that store and index the vectors produced by embedding models so they can be searched by similarity.

import requests

# Download the sample State of the Union speech from the LangChain repository
url = "https://raw.githubusercontent.com/hwchase17/langchain/master/docs/modules/state_of_the_union.txt"

res = requests.get(url)

with open("state_of_the_union.txt", "w") as f:
    f.write(res.text)
# Document Loader

from langchain.document_loaders import TextLoader

loader = TextLoader('./state_of_the_union.txt')

documents = loader.load()
# Text Splitter

from langchain.text_splitter import CharacterTextSplitter

# Break the text into ~1000-character chunks with no overlap
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)

docs = text_splitter.split_documents(documents)
!pip install sentence_transformers

# Embeddings

from langchain.embeddings import HuggingFaceEmbeddings

# Uses a local sentence-transformers model to turn text into vectors
embeddings = HuggingFaceEmbeddings()

# Example usage:
# text = "This is a test document."
# query_result = embeddings.embed_query(text)
# doc_result = embeddings.embed_documents([text])
!pip install faiss-cpu

# Vectorstore: https://python.langchain.com/en/latest/modules/indexes/vectorstores.html

from langchain.vectorstores import FAISS

# Build a FAISS index from the chunked documents and their embeddings
db = FAISS.from_documents(docs, embeddings)

query = "What did the president say about Ketanji Brown Jackson?"

# Return the chunks most similar to the query
docs = db.similarity_search(query)

print(docs[0].page_content)
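
With the vector store in place, you can wire the retrieved chunks back into the LLM for question answering. Below is a minimal sketch using RetrievalQA; the "stuff" chain type and retriever settings are illustrative defaults:

from langchain.chains import RetrievalQA

# "stuff" simply pastes the retrieved chunks into the prompt
qa_chain = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=db.as_retriever())

print(qa_chain.run("What did the president say about Ketanji Brown Jackson?"))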

