`advocacy_docs/edb-postgres-ai/ai-factory/learn/explained/terminology.mdx`
### Natural language processing (NLP)

Natural language processing (NLP) enables computers to understand and generate human language.
### Large language models (LLMs)
LLMs are large deep learning models trained on massive text corpora. In AI Factory, they drive [Assistants](assistants-explained), Retrieval-Augmented Generation (RAG), and various model pipelines, deployed using [Model Serving](/edb-postgres-ai/ai-factory/model/serving/).
[Learn more](http://huggingface.co/blog/llm)
### Embeddings
Embeddings are vector representations of data that capture semantic meaning. AI Factory Pipelines create embeddings used in [Knowledge Bases](knowledge-bases-explained) and served through the [Vector Engine](/edb-postgres-ai/ai-factory/vector-engine/) to enable semantic search and RAG.
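As a toy illustration (not AI Factory code), semantic similarity between embeddings is typically measured with cosine similarity. The four-dimensional vectors below are made-up stand-ins; real embedding models produce hundreds or thousands of dimensions:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings for a document, a related query, and
# an unrelated document.
doc = [0.2, 0.8, 0.1, 0.4]
query = [0.25, 0.75, 0.05, 0.5]
unrelated = [0.9, 0.1, 0.7, 0.0]

# The query scores higher against the related document.
print(cosine_similarity(query, doc) > cosine_similarity(query, unrelated))
```

This nearest-by-angle comparison is what makes "semantic search" possible: texts with similar meaning land close together in the vector space.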

### Vector databases

Vector databases store embeddings and enable fast similarity search. AI Factory provides this through the [Vector Engine](/edb-postgres-ai/ai-factory/vector-engine/), built on the open-source [pgvector](http://github.com/pgvector/pgvector) extension, integrated directly with Postgres.
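Conceptually, a vector store answers "which stored vectors are closest to this query?" A minimal brute-force sketch (with made-up document IDs) looks like this; pgvector's indexes (HNSW, IVFFlat) make the same lookup sublinear at scale:

```python
import math

def l2_distance(a, b):
    # Euclidean (L2) distance between two vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def top_k(store, query, k=2):
    # Brute-force scan: rank every stored vector by distance to the query.
    ranked = sorted(store.items(), key=lambda kv: l2_distance(kv[1], query))
    return [doc_id for doc_id, _ in ranked[:k]]

# Hypothetical 2-D embeddings keyed by document ID.
store = {
    "doc-a": [0.1, 0.9],
    "doc-b": [0.8, 0.2],
    "doc-c": [0.15, 0.85],
}

print(top_k(store, [0.12, 0.88]))  # the two nearest documents
```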
### Retrieval-augmented generation (RAG)
RAG combines vector search with LLM generation to ground model responses in relevant documents. In AI Factory, it is implemented through [Knowledge Bases](knowledge-bases-explained), [Retrievers](retrievers-explained), and [Model Serving](/edb-postgres-ai/ai-factory/model/serving/).
[Intro to RAG](http://huggingface.co/blog/rag)
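The retrieve-then-generate flow can be sketched in a few lines. This is a deliberately simplified stand-in: scoring here is word overlap, where a real pipeline would rank by embedding similarity against a Knowledge Base, and the final prompt would be sent to a served LLM:

```python
def tokens(text):
    # Crude tokenizer: lowercase, strip basic punctuation, split on spaces.
    return set(text.lower().replace("?", " ").replace(".", " ").split())

def retrieve(chunks, question, k=2):
    # Rank chunks by word overlap with the question (stand-in for
    # embedding similarity) and keep the top k.
    q = tokens(question)
    return sorted(chunks, key=lambda c: len(tokens(c) & q), reverse=True)[:k]

def build_prompt(chunks, question):
    # Ground the model: only the retrieved chunks appear as context.
    context = "\n".join(f"- {c}" for c in retrieve(chunks, question))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {question}"
    )

# Hypothetical document chunks.
chunks = [
    "Vector Engine stores embeddings in Postgres.",
    "Assistants answer questions over Knowledge Bases.",
    "Postgres supports ACID transactions.",
]

print(build_prompt(chunks, "How are embeddings stored in Postgres?"))
```

Grounding the prompt in retrieved text is what lets the LLM answer from your documents rather than from its training data alone.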
### Intelligent database management
Intelligent database management applies AI to optimize Postgres performance and operations. AI Factory extends this with intelligent retrieval and search using [Vector Engine](/edb-postgres-ai/ai-factory/vector-engine/) and Pipelines.
### In-database machine learning (In-DB ML)
In-DB ML enables running vector search and ML pipelines inside Postgres, reducing data movement and latency. AI Factory implements this through [Vector Engine](/edb-postgres-ai/ai-factory/vector-engine/) and [Pipelines](/edb-postgres-ai/ai-factory/pipeline/).
### Vector search in Postgres
Vector search allows you to query embeddings directly within Postgres. AI Factory uses [pgvector](http://github.com/pgvector/pgvector) to power this capability through the [Vector Engine](/edb-postgres-ai/ai-factory/vector-engine/), supporting Knowledge Bases and RAG.
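In SQL terms, a similarity query against a hypothetical `documents` table might look like the following. The `vector` type and the `<->` L2-distance operator come from the pgvector extension; the table, columns, and three-dimensional vectors are placeholder examples only:

```sql
CREATE EXTENSION IF NOT EXISTS vector;

-- Hypothetical table holding text chunks and their embeddings.
CREATE TABLE documents (
    id        bigserial PRIMARY KEY,
    content   text,
    embedding vector(3)
);

-- Return the five documents nearest to a query embedding.
SELECT id, content
FROM documents
ORDER BY embedding <-> '[0.1, 0.2, 0.3]'
LIMIT 5;
```

Real embeddings have many more dimensions, and pgvector also offers cosine-distance (`<=>`) and inner-product (`<#>`) operators.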
### AIDB
AIDB (AI-in-Database) brings vector search, embedding pipelines, and future ML capabilities to HCP-managed Postgres clusters. It is the foundation for AI Factory [Pipelines](/edb-postgres-ai/ai-factory/pipeline/) and Knowledge Bases.
### Natural language interfaces to databases
Natural language interfaces enable users to query Postgres using natural language.
### AI-accelerated hardware
AI Factory uses GPU-accelerated Kubernetes clusters to serve deep learning models and high-throughput inference. Model workloads in [Model Serving](/edb-postgres-ai/ai-factory/model/serving/) run on GPU-enabled nodes.

### KServe

KServe is the open-source Kubernetes-native framework AI Factory uses to deploy and manage ML models. It provides InferenceServices, autoscaling, and observability for AI Factory [Model Serving](/edb-postgres-ai/ai-factory/model/serving/).
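For orientation, a minimal KServe `InferenceService` manifest has the shape below. The `apiVersion`, `kind`, and field structure are standard KServe; the name, model format, storage URI, and GPU request are placeholders, not necessarily how AI Factory configures its deployments:

```yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: example-embedding-model   # placeholder name
spec:
  predictor:
    model:
      modelFormat:
        name: huggingface          # example runtime
      storageUri: "hf://example-org/example-model"  # placeholder URI
      resources:
        limits:
          nvidia.com/gpu: "1"      # request one GPU for inference
```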
### Model Serving

Model Serving deploys AI models as production-grade inference services, using KServe under the hood. It supports LLMs, embedding models, vision models, and custom AI workloads.
Model Serving deploys models using Kubernetes-native KServe and integrates with the [Model Library](/edb-postgres-ai/ai-factory/model/library/). It powers Assistants, Knowledge Bases, and custom AI applications.

### Image and Model Library

The Image and Model Library in Hybrid Manager manages container images for both Postgres and AI model deployments. The [Model Library](/edb-postgres-ai/ai-factory/model/library/) provides an AI-focused view, supporting Model Serving and governed image workflows.