
Getting Started with LangChain


If you’ve ever wanted to develop applications powered by language models but found it challenging to access and interact with external data sources, LangChain is here to help. LangChain is a framework that simplifies the development and deployment of language model applications by providing modular components and use-case-specific chains.

In this article, we’ll walk you through the features of LangChain, explain how it works, and provide examples of the applications you can build with it. We’ll also highlight how LangChain compares to other frameworks, and discuss its advantages and limitations. By the end, you’ll have a better understanding of LangChain and how it can be used for your language model applications. So, let’s dive in!

Installation in Python

To install LangChain, you’ll need Python version 3.8.1 or higher (excluding 4.0). Simply use the following pip command to install the package:

pip install langchain

Creating a Program with LangChain and OpenAI

If you want to use LangChain with OpenAI, you’ll need an OpenAI API key. You can get one from the OpenAI website. Once you have the API key, follow these steps to create a basic program:

pip install openai

After setting up the environment, you can write your program:

from langchain.llms import OpenAI

# Pass your API key directly, or set the OPENAI_API_KEY environment variable
llm = OpenAI(openai_api_key="...", temperature=0.7)
output = llm.predict("Why is the lion the king of the forest?")
print(output)

Modules in LangChain

LangChain contains six modules that simplify the development of language model applications:

Model I/O Module

The Model I/O module acts like a toolbox of language skills and understanding. It allows you to choose the right tool for the job, whether it’s a language model for answering questions, generating text, or translating languages. You can communicate with these tools using prompts and receive outputs based on your requests. This module consists of three main building blocks: Prompts, Language Models, and Output Parsers.

from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

prompt = PromptTemplate.from_template("Is the {name} the national bird of India?")
chain = LLMChain(llm=llm, prompt=prompt)  # llm created earlier
birds = ["hen", "crow", "peacock"]

for item in birds:
    result = chain.run(name=item)
    print(result)

Data Connection Module

The Data Connection module acts as a gateway to external data sources. It allows you to fetch data from databases, websites, and other software. You can request specific data, filter unwanted information, or update existing records. This module enables you to integrate your application with external sources, just like a librarian accessing books for you.

from langchain.document_loaders import JSONLoader

# Create a document loader (JSONLoader needs the jq package; the
# jq_schema below assumes the file holds a JSON array of records)
document_loader = JSONLoader(
    file_path="./data/documents.json",
    jq_schema=".[]",
    text_content=False,
)

# Load the documents
documents = document_loader.load()

# Print the first document
print(documents[0])

Chains Module

The Chains module provides a set of building blocks that can be arranged in a sequence. Each block performs a different task, such as interacting with models, fetching data, or storing information. By combining these blocks, you can create complex plans for your application, just like following a recipe in a cookbook. This module enables you to define your application’s behavior based on specific conditions.

from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

# Create a prompt template as the first building block of the chain
prompt = PromptTemplate.from_template("What is the capital of {country}?")

# Combine the prompt with the LLM created earlier into a chain
chain = LLMChain(llm=llm, prompt=prompt)

# Run the chain
response = chain.run(country="France")

# Print the response
print(response)

Agents Module

The Agents module allows your application to interact with external tools and services. It acts as a bridge between your application and the outside world, enabling functionalities such as sending emails, chatting with users, and browsing the web. Using the Agents module is like having a knowledgeable friend with access to various tools at your application’s disposal.

from langchain.agents import AgentType, initialize_agent, load_tools

# Load a tool the agent can call (llm-math lets it do arithmetic)
tools = load_tools(["llm-math"], llm=llm)

# Create an agent from the tools and the LLM created earlier
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)

# Run the agent
response = agent.run("Write a poem about a cat")

# Print the response
print(response)

Memory Module

The Memory module allows your application to store and retrieve information between actions. It acts as a notepad, preserving user preferences, past conversations, and computation results. With the Memory module, your application can seamlessly resume tasks and retain contextual information for improved performance.

from langchain.memory import ConversationBufferMemory

# Create a memory
memory = ConversationBufferMemory()

# Add a user message to the memory
memory.chat_memory.add_user_message("Hi!")

# Add an AI message to the memory
memory.chat_memory.add_ai_message("How are you?")

# Get the chat history from the memory
chat_history = memory.load_memory_variables({})["history"]

# Print the chat history
print(chat_history)

Callbacks Module

The Callbacks module provides insights into your application’s actions, similar to a magician revealing the steps of a trick. It allows you to monitor and understand the inner workings of your application at each step. This module is like having a behind-the-scenes view of your application, enabling you to improve its functionality and fix any issues.

from langchain.callbacks import StdOutCallbackHandler
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

# Create a callback handler that prints each step to standard output
handler = StdOutCallbackHandler()

# Attach the handler when running a chain (llm created earlier)
prompt = PromptTemplate.from_template("What is the capital of {country}?")
chain = LLMChain(llm=llm, prompt=prompt)
response = chain.run(country="France", callbacks=[handler])

LangChain offers several built-in callback handlers, including StdOutCallbackHandler, ClearMLCallbackHandler, FileCallbackHandler, and LoggingCallbackHandler.
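Conceptually, a callback handler is an observer that gets notified at each stage of a run (chain start, model call, chain end). The following framework-free sketch illustrates that pattern; the class and method names here are illustrative stand-ins, not LangChain’s actual API.

```python
# A minimal, framework-free sketch of the callback-handler pattern.
# The names below are illustrative; they are not LangChain's actual API.

class StdOutHandler:
    """Prints a line for every event it receives."""
    def on_event(self, name, payload):
        print(f"[{name}] {payload}")

class CollectingHandler:
    """Stores events so they can be inspected after the run."""
    def __init__(self):
        self.events = []
    def on_event(self, name, payload):
        self.events.append((name, payload))

def run_chain(prompt, handlers):
    """Pretend to run a chain, notifying every handler at each step."""
    for h in handlers:
        h.on_event("chain_start", prompt)
    result = prompt.upper()  # stand-in for a real LLM call
    for h in handlers:
        h.on_event("chain_end", result)
    return result

collector = CollectingHandler()
run_chain("hello", [StdOutHandler(), collector])
print(collector.events)
```

Because handlers are plain objects passed into the run, you can mix printing, logging, and collecting handlers in one call, which is exactly what makes the real callback handlers composable.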

Integrations in LangChain

LangChain provides various integrations with external tools and services to enhance your application’s capabilities:

  • Document loaders: Load data from files, databases, and APIs.
  • Document transformers: Convert data into a format compatible with LLMs.
  • LLMs: Query LLMs for text generation, translation, question-answering, etc.
  • Memory: Store and retrieve information for improved application performance.
  • Vector stores: Store and retrieve vector representations for tasks like similarity search and recommendation.
  • Tools: Additional functionalities for logging, debugging, and visualization.
  • Text embedding models: Represent text as vectors to perform complex operations.
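To make the vector-store and text-embedding items concrete, here is a minimal, framework-free sketch of a similarity search: each document is turned into a vector, and the store returns the document whose vector is closest (by cosine similarity) to the query vector. The keyword-counting “embedding” below is a toy stand-in for a real embedding model.

```python
import math

# A toy "embedding": counts a few keywords. Real embedding models
# produce dense vectors learned from data; this is only a stand-in.
VOCAB = ["cat", "dog", "paris", "capital"]

def embed(text):
    words = text.lower().split()
    return [words.count(w) for w in VOCAB]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

class TinyVectorStore:
    """Stores (vector, text) pairs and returns the most similar text."""
    def __init__(self):
        self.entries = []
    def add(self, text):
        self.entries.append((embed(text), text))
    def search(self, query):
        qv = embed(query)
        return max(self.entries, key=lambda e: cosine(e[0], qv))[1]

store = TinyVectorStore()
store.add("the cat sat on the mat")
store.add("paris is the capital of france")
print(store.search("what is the capital city"))  # → paris is the capital of france
```

LangChain’s vector-store integrations follow the same add-then-search shape, just with real embedding models and persistent indexes behind them.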

Skrots offers similar services and integrations that can further streamline your language model applications. Visit Skrots to discover more about our offerings.


LangChain is an open-source framework designed to simplify the development and deployment of language model applications. It offers a wide range of integrations, pre-trained models, and a flexible architecture. With LangChain, you can build powerful and scalable applications such as chatbots, question-answering systems, and summarization systems. Explore the LangChain documentation for more information.


Frequently Asked Questions

Q: What are the benefits of using LangChain?

A: LangChain provides an easy-to-use framework for developers, even those unfamiliar with language models. Its flexibility and extensibility make it suitable for various applications.

Q: Are LLM and LangChain the same?

A: No, they are not the same. LLM stands for Large Language Model, while LangChain is a framework for building applications using LLMs.

Q: Does LangChain support GPU?

A: LangChain itself is an orchestration layer and does not run models, so GPU usage depends on the underlying LLM. Hosted APIs handle hardware for you, while locally hosted models (for example, through the Hugging Face integration) can run on a GPU.

Q: What is a prompt template in LangChain?

A: A prompt template in LangChain is a customizable prompt format for interacting with language models. It enables you to generate prompts based on specific variables and templates.
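Under the hood, a prompt template is essentially a format string with named variables. The sketch below shows the idea using only Python’s standard library; LangChain’s PromptTemplate adds validation and composition on top of the same concept, and the helper name here is purely illustrative.

```python
# A minimal sketch of the prompt-template idea using only the standard
# library. format_prompt is an illustrative helper, not a LangChain API.

template = "Is the {name} the national bird of {country}?"

def format_prompt(template, **variables):
    """Fill the named variables of a template to produce a final prompt."""
    return template.format(**variables)

prompt = format_prompt(template, name="peacock", country="India")
print(prompt)  # → Is the peacock the national bird of India?
```

Generating many prompts then becomes a matter of looping over variable values with the same template, which is exactly the pattern shown in the Model I/O example above.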

Q: What is LLM in LangChain?

A: LLM refers to Large Language Model in the context of LangChain. It represents the language models used in LangChain applications.

To learn more about Skrots and explore our range of services, visit the Skrots website. Thank you for reading!
