HuggingFace Inference
This Embeddings integration uses the Hugging Face Inference API to generate embeddings for a given text, using the BAAI/bge-base-en-v1.5 model by default. You can pass a different model name to the constructor to override this.
Setup
You'll first need to install the @langchain/community package along with the required peer dependencies:
```bash
npm install @langchain/community @langchain/core @huggingface/inference@4
yarn add @langchain/community @langchain/core @huggingface/inference@4
pnpm add @langchain/community @langchain/core @huggingface/inference@4
```
Usage
```typescript
import { HuggingFaceInferenceEmbeddings } from "@langchain/community/embeddings/hf";

const embeddings = new HuggingFaceInferenceEmbeddings({
  apiKey: "YOUR-API-KEY", // Defaults to process.env.HUGGINGFACEHUB_API_KEY
  model: "MODEL-NAME", // Defaults to `BAAI/bge-base-en-v1.5` if not provided
  provider: "MODEL-PROVIDER", // Falls back to Hugging Face's auto provider selection if not provided
});
```
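Once constructed, the instance exposes the standard LangChain `Embeddings` methods, `embedQuery` and `embedDocuments`. A minimal sketch, assuming `HUGGINGFACEHUB_API_KEY` is set in your environment (the query and document strings are illustrative):

```typescript
import { HuggingFaceInferenceEmbeddings } from "@langchain/community/embeddings/hf";

// Reads the API key from process.env.HUGGINGFACEHUB_API_KEY.
const embeddings = new HuggingFaceInferenceEmbeddings();

// Embed a single query string; resolves to a number[] vector.
const queryVector = await embeddings.embedQuery("What is LangChain?");

// Embed several documents in one call; resolves to a number[][].
const docVectors = await embeddings.embedDocuments([
  "LangChain is a framework for building LLM-powered applications.",
  "Embeddings map text to dense numeric vectors.",
]);

console.log(queryVector.length); // vector dimensionality (768 for BAAI/bge-base-en-v1.5)
console.log(docVectors.length); // 2
```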
Note:
If you do not provide a `model`, a warning will be logged and the default model `BAAI/bge-base-en-v1.5` will be used. If you do not provide a `provider`, Hugging Face will default to `auto` selection, which selects the first provider available for the model based on your settings at http://hf.co/settings/inference-providers.
Hint:
`hf-inference` is the provider name for models that are hosted directly by Hugging Face.
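For example, to pin requests to Hugging Face's own hosting rather than relying on auto selection, you can pass `provider: "hf-inference"` explicitly. A short sketch (the model shown is the default one):

```typescript
import { HuggingFaceInferenceEmbeddings } from "@langchain/community/embeddings/hf";

// Pin the provider instead of relying on `auto` selection.
// "hf-inference" routes requests to models hosted directly by Hugging Face.
const embeddings = new HuggingFaceInferenceEmbeddings({
  model: "BAAI/bge-base-en-v1.5",
  provider: "hf-inference",
});

const vector = await embeddings.embedQuery("Hello, world!");
```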
Related
- Embedding model conceptual guide
- Embedding model how-to guides