
LlamaParse

LlamaParse is an API created by LlamaIndex to efficiently parse files; for example, it's great at converting PDF tables into markdown.

To use it, first log in and get an API key from https://cloud.llamaindex.ai. Pass the key via the apiKey parameter or store it in the environment variable LLAMA_CLOUD_API_KEY.
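
For example, either of the following works (a minimal sketch; the "llx-..." value is only a placeholder for your own key):

import { LlamaParseReader } from "llamaindex";

// Option 1: pass the key explicitly via the apiKey parameter.
const readerWithKey = new LlamaParseReader({
  apiKey: "llx-...", // placeholder for your LlamaCloud API key
  resultType: "markdown",
});

// Option 2: set LLAMA_CLOUD_API_KEY in your environment and omit apiKey.
const readerFromEnv = new LlamaParseReader({ resultType: "markdown" });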

Official documentation for LlamaParse can be found here.

Usage

You can then use the LlamaParseReader class to load local files and convert them into parsed documents that can be used by LlamaIndex. See LlamaParseReader.ts for a list of supported file types:

import { LlamaParseReader, VectorStoreIndex } from "llamaindex";

async function main() {
  // Load PDF using LlamaParse
  const reader = new LlamaParseReader({ resultType: "markdown" });
  const documents = await reader.loadData("../data/TOS.pdf");

  // Split text and create embeddings. Store them in a VectorStoreIndex
  const index = await VectorStoreIndex.fromDocuments(documents);

  // Query the index
  const queryEngine = index.asQueryEngine();
  const response = await queryEngine.query({
    query: "What is the license grant in the TOS?",
  });

  // Output response
  console.log(response.toString());
}

main().catch(console.error);

Params

All options can be set with the LlamaParseReader constructor.

They can be divided into two groups; a short configuration sketch follows each list below.

General params:

  • apiKey is required. Can also be set via the environment variable LLAMA_CLOUD_API_KEY.
  • checkInterval is the interval in seconds to check if the parsing is done. Default is 1.
  • maxTimeout is the maximum timeout to wait for parsing to finish. Default is 2000.
  • verbose shows progress of the parsing. Default is true.
  • ignoreErrors set to false to throw errors encountered while parsing. Default is true, which returns an empty array on error.
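
As a rough sketch, a reader configured with these general params could look like this (the values are arbitrary, chosen only to illustrate the parameters):

import { LlamaParseReader } from "llamaindex";

// General params only; apiKey is read from LLAMA_CLOUD_API_KEY when omitted.
const reader = new LlamaParseReader({
  checkInterval: 2, // poll the parsing job every 2 seconds instead of every 1
  maxTimeout: 2000, // give up waiting after the maximum timeout
  verbose: false, // suppress progress output
  ignoreErrors: false, // throw on parsing errors instead of returning []
});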

Advanced params:

  • resultType can be set to markdown, text or json. Defaults to text. More information about json mode on the next pages.
  • language primarily helps with OCR recognition. Defaults to en. Click here for a list of supported languages.
  • parsingInstruction? Optional. Can help with complicated document structures. See this LlamaIndex Blog Post for an example.
  • skipDiagonalText? Optional. Set to true to ignore diagonal text. (Text that is not rotated 0, 90, 180 or 270 degrees)
  • invalidateCache? Optional. Set to true to ignore the LlamaCloud cache. All documents are kept in the cache for 48 hours after the job completes, to avoid processing the same document twice. Can be useful for testing when re-parsing the same document with, e.g., a different parsingInstruction.
  • doNotCache? Optional. Set to true to not cache the document.
  • fastMode? Optional. Set to true to use fast mode. This mode skips OCR of images and table/heading reconstruction. Note: not compatible with gpt4oMode.
  • doNotUnrollColumns? Optional. Set to true to keep the text according to the document layout. This reduces reconstruction accuracy, and LLM/embedding performance, in most cases.
  • pageSeparator? Optional. A templated page separator used to split the text. If it contains {page_number} (e.g. in JSON mode), it will be replaced by the next page number. If not set, the default separator \n---\n will be used.
  • pagePrefix? Optional. A templated prefix to add to the beginning of each page. If the results contain {page_number}, it will be replaced by the page number.
  • pageSuffix? Optional. A templated suffix to add to the end of each page. If the results contain {page_number}, it will be replaced by the page number.
  • gpt4oMode Deprecated. Use vendorMultimodal params. Set to true to use GPT-4o to extract content. Default is false.
  • gpt4oApiKey? Deprecated. Use vendorMultimodal params. Optional. Set the GPT-4o API key. Lowers the cost of parsing by using your own API key. Your OpenAI account will be charged. Can also be set in the environment variable LLAMA_CLOUD_GPT4O_API_KEY.
  • boundingBox? Optional. Specify an area of the document to parse. Expects the bounding box margins as a string in clockwise order, e.g. boundingBox = "0.1,0,0,0" to not parse the top 10% of the document.
  • targetPages? Optional. Specify which pages to parse by specifying them as a comma-separated list. First page is 0.
  • splitByPage Whether to split the results, creating one document per page. Uses the set pageSeparator, or \n---\n as a fallback. Default is true.
  • useVendorMultimodalModel set to true to use a multimodal model. Default is false.
  • vendorMultimodalModel? Optional. Specify which multimodal model to use. Default is GPT4o. See here for a list of available models and cost.
  • vendorMultimodalApiKey? Optional. Set the multimodal model API key. Can also be set in the environment variable LLAMA_CLOUD_VENDOR_MULTIMODAL_API_KEY.
  • numWorkers As in the Python version, this is set in SimpleDirectoryReader. Default is 1.
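
A similar sketch combining a few of the advanced params (the concrete values are again purely illustrative):

import { LlamaParseReader } from "llamaindex";

// A few advanced params; values are illustrative only.
const reader = new LlamaParseReader({
  resultType: "markdown",
  language: "en", // mainly affects OCR
  targetPages: "0,1,2", // only parse the first three pages (first page is 0)
  splitByPage: true, // one document per page
  pageSeparator: "\n=== Page {page_number} ===\n", // custom templated separator
  invalidateCache: true, // bypass the 48-hour LlamaCloud cache while testing
});

// The reader is then used exactly as in the Usage example above, e.g.
// const documents = await reader.loadData("../data/TOS.pdf");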

LlamaParse with SimpleDirectoryReader

Below is a full example of LlamaParse integrated into SimpleDirectoryReader with additional options.

import {
  LlamaParseReader,
  SimpleDirectoryReader,
  VectorStoreIndex,
} from "llamaindex";

async function main() {
  const reader = new SimpleDirectoryReader();

  const docs = await reader.loadData({
    directoryPath: "../data/parallel", // brk-2022.pdf split into 6 parts
    numWorkers: 2,
    // Set LlamaParse as the default reader for all file types. Set apiKey here or in the environment variable LLAMA_CLOUD_API_KEY
    overrideReader: new LlamaParseReader({
      language: "en",
      resultType: "markdown",
      parsingInstruction:
        "The provided files are Berkshire Hathaway's 2022 Annual Report. They contain figures, tables and raw data. Capture the data in a structured format. Mathematical equations should be output as LaTeX markdown (between $$).",
    }),
  });

  const index = await VectorStoreIndex.fromDocuments(docs);

  // Query the index
  const queryEngine = index.asQueryEngine();
  const response = await queryEngine.query({
    query:
      "What is the general strategy for shareholder safety outlined in the report? Use a concrete example with numbers",
  });

  // Output response
  console.log(response.toString());
}

main().catch(console.error);

API Reference