HuggingFace Inference

This Embeddings integration uses the Hugging Face Inference API to generate embeddings for a given text. It uses the BAAI/bge-base-en-v1.5 model by default; pass a different model name to the constructor to use another model.

Setup

You'll first need to install the @langchain/community package along with its required peer dependencies:

npm install @langchain/community @langchain/core @huggingface/inference@4

Usage

import { HuggingFaceInferenceEmbeddings } from "@langchain/community/embeddings/hf";

const embeddings = new HuggingFaceInferenceEmbeddings({
  apiKey: "YOUR-API-KEY", // Defaults to process.env.HUGGINGFACEHUB_API_KEY
  model: "MODEL-NAME", // Defaults to `BAAI/bge-base-en-v1.5` if not provided
  provider: "MODEL-PROVIDER", // Falls back to Hugging Face's automatic provider selection if not provided
});

Note:
If you do not provide a model, a warning is logged and the default model BAAI/bge-base-en-v1.5 is used. If you do not provide a provider, Hugging Face defaults to auto selection, which picks the first provider available for the model based on your settings at http://hf.co/settings/inference-providers.
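
The resulting instance implements the standard LangChain Embeddings interface, so you can call embedQuery and embedDocuments on it. A minimal sketch (the input strings are illustrative):

const queryVector = await embeddings.embedQuery("What is LangChain?");
// queryVector is a number[]; BAAI/bge-base-en-v1.5 produces 768-dimensional vectors

const docVectors = await embeddings.embedDocuments([
  "LangChain is a framework for building LLM applications.",
  "Embeddings map text to dense numeric vectors.",
]);
// docVectors is a number[][], with one vector per input document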

Hint:
hf-inference is the provider name for models that are hosted directly by Hugging Face.
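
For example, to route requests to Hugging Face's own hosting rather than a third-party provider, you can pin the provider explicitly. A sketch, assuming the API key is read from process.env.HUGGINGFACEHUB_API_KEY:

const hfHostedEmbeddings = new HuggingFaceInferenceEmbeddings({
  model: "BAAI/bge-base-en-v1.5",
  provider: "hf-inference", // serve the model from Hugging Face's own infrastructure
});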

