RAGComponentFactory

gitinsp.domain.interfaces.infrastructure.RAGComponentFactory

Factory for creating components of the Retrieval Augmented Generation (RAG) pipeline. Provides methods to instantiate and configure all necessary elements for vector search, document ingestion, query processing, and AI-assisted retrieval.
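
The sketch below shows one way the factory methods could be composed into a working pipeline. It is only an illustration: factory stands for some concrete implementation of this trait, and the collection name "repo-code" is a placeholder.

import io.qdrant.client.grpc.Collections.Distance

// Hypothetical wiring of a RAG pipeline from the methods documented on this page.
val client     = factory.createQdrantClient()
val _          = factory.createCollection("repo-code", client, Distance.Cosine)
val store      = factory.createEmbeddingStore(client, "repo-code")
val codeModel  = factory.createCodeEmbeddingModel()
val llmRouter  = factory.createModelRouter()
val retriever  = factory.createCodeRetriever(store, codeModel, "repo-code", llmRouter)
val aggregator = factory.createContentAggregator(factory.createScoringModel())
val augmentor  = factory.createRetrievalAugmentor(
  factory.createQueryRouter(List(retriever), llmRouter),
  aggregator
)
val assistant  = factory.createAssistant(factory.createStreamingChatModel(), Some(augmentor))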

Attributes

Supertypes
class Object
trait Matchable
class Any

Members list

Value members

Abstract methods

def createAssistant(model: StreamingChatLanguageModel, augmentor: Option[RetrievalAugmentor]): Assistant

Creates a StreamingAssistant. This service is the main entry point for the RAG pipeline.

Value parameters

augmentor

The retrieval augmentor

model

The chat model

Attributes

Returns

A StreamingAssistant
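
How the assistant gets built is left to implementations. Below is a minimal sketch using LangChain4j's AiServices, assuming Assistant is an AI-service style interface with a single streaming chat method; the trait shown here is hypothetical, and the streamingChatLanguageModel builder method name matches LangChain4j versions that still use the StreamingChatLanguageModel type from this signature.

import dev.langchain4j.model.chat.StreamingChatLanguageModel
import dev.langchain4j.rag.RetrievalAugmentor
import dev.langchain4j.service.{AiServices, TokenStream}

// Hypothetical shape of the Assistant service interface.
trait Assistant:
  def chat(message: String): TokenStream

def createAssistant(model: StreamingChatLanguageModel, augmentor: Option[RetrievalAugmentor]): Assistant =
  val builder = AiServices.builder(classOf[Assistant]).streamingChatLanguageModel(model)
  augmentor.foreach(builder.retrievalAugmentor)  // only attach RAG when an augmentor is provided
  builder.build()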

def createCodeEmbeddingModel(): OllamaEmbeddingModel

Creates an embedding model for code.

Attributes

Returns

An OllamaEmbeddingModel
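
A possible implementation backed by a locally running Ollama server; the base URL and model name are assumptions, not values fixed by this interface. createTextEmbeddingModel would look the same with a text-oriented model name.

import dev.langchain4j.model.ollama.OllamaEmbeddingModel

def createCodeEmbeddingModel(): OllamaEmbeddingModel =
  OllamaEmbeddingModel.builder()
    .baseUrl("http://localhost:11434")  // default local Ollama endpoint (assumed)
    .modelName("nomic-embed-text")      // placeholder embedding model name
    .build()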

def createCodeRetriever(embeddingStore: QdrantEmbeddingStore, embeddingModel: OllamaEmbeddingModel, indexName: String, modelRouter: OllamaChatModel): EmbeddingStoreContentRetriever

Creates a retriever for code.

Value parameters

embeddingModel

The embedding model

embeddingStore

The embedding store

indexName

The specific collection to be used

modelRouter

The LLM router used for dynamic filtering

Attributes

Returns

A retriever for the specified index
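
A sketch of how such a retriever could be assembled; maxResults and minScore are illustrative defaults. The modelRouter parameter is intended for LLM-driven dynamic filtering, which is omitted here because the exact strategy is implementation specific. createMarkdownRetriever would be analogous, just without the model-driven filtering.

import dev.langchain4j.model.ollama.{OllamaChatModel, OllamaEmbeddingModel}
import dev.langchain4j.rag.content.retriever.EmbeddingStoreContentRetriever
import dev.langchain4j.store.embedding.qdrant.QdrantEmbeddingStore

def createCodeRetriever(
    embeddingStore: QdrantEmbeddingStore,
    embeddingModel: OllamaEmbeddingModel,
    indexName: String,
    modelRouter: OllamaChatModel
): EmbeddingStoreContentRetriever =
  EmbeddingStoreContentRetriever.builder()
    .embeddingStore(embeddingStore)
    .embeddingModel(embeddingModel)
    .displayName(indexName)  // label the retriever with the collection it serves
    .maxResults(5)
    .minScore(0.5)
    .build()                 // modelRouter-based filtering would be layered on top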

def createCollection(name: String, client: QdrantClient, distance: Distance): Try[Unit]

Creates a collection in Qdrant.

Value parameters

client

The Qdrant client to use

distance

The distance metric to use for vector similarity

name

The name of the collection

Attributes

Returns

A Try indicating success or failure of the operation
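
A minimal sketch using the Qdrant Java client; the vector size (768) is an assumption tied to whichever embedding model is in use.

import scala.util.Try
import io.qdrant.client.QdrantClient
import io.qdrant.client.grpc.Collections.{Distance, VectorParams}

def createCollection(name: String, client: QdrantClient, distance: Distance): Try[Unit] =
  Try {
    client.createCollectionAsync(
      name,
      VectorParams.newBuilder()
        .setDistance(distance)
        .setSize(768)  // must match the embedding model's dimensionality
        .build()
    ).get()            // block until Qdrant acknowledges the gRPC call
    ()                 // discard the response, keeping Try[Unit]
  }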

def createContentAggregator(scoringModel: ScoringModel): ReRankingContentAggregator

Creates a content aggregator for ranking and filtering retrieved content. Reranking the retrieved results can yield more relevant content.

Value parameters

scoringModel

The scoring model used to rank retrieved content

Attributes

Returns

A configured ReRankingContentAggregator
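
A possible construction using LangChain4j's reranking aggregator; the minScore threshold is illustrative, not part of this interface's contract.

import dev.langchain4j.model.scoring.ScoringModel
import dev.langchain4j.rag.content.aggregator.ReRankingContentAggregator

def createContentAggregator(scoringModel: ScoringModel): ReRankingContentAggregator =
  ReRankingContentAggregator.builder()
    .scoringModel(scoringModel)  // reranks retrieved content by relevance
    .minScore(0.5)               // drop content the scorer considers irrelevant
    .build()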

def createEmbeddingStore(client: QdrantClient, name: String): QdrantEmbeddingStore

Creates an embedding store.

Value parameters

client

The Qdrant client to use

name

The name of the collection to store embeddings

Attributes

Returns

A QdrantEmbeddingStore
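
A sketch that builds the store against a local Qdrant instance. Host and port are assumptions; this example connects by address rather than reusing the supplied client, since that is the builder surface shown in LangChain4j's Qdrant integration.

import io.qdrant.client.QdrantClient
import dev.langchain4j.store.embedding.qdrant.QdrantEmbeddingStore

def createEmbeddingStore(client: QdrantClient, name: String): QdrantEmbeddingStore =
  QdrantEmbeddingStore.builder()
    .host("localhost")     // assumed Qdrant host; `client` is unused in this sketch
    .port(6334)            // assumed gRPC port
    .collectionName(name)
    .build()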

def createIngestor(language: Language, embeddingModel: OllamaEmbeddingModel, embeddingStore: QdrantEmbeddingStore, strategy: IngestionStrategy): EmbeddingStoreIngestor

Creates an ingestor for adding document embeddings to the vector database.

Value parameters

embeddingModel

The embedding model to use for vectorizing documents

embeddingStore

The vector store where embeddings will be saved

language

The programming language of the documents to ingest

strategy

The strategy defining how documents are processed and split

Attributes

Returns

A configured EmbeddingStoreIngestor for the specified parameters
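
A simplified sketch: the project-specific Language and IngestionStrategy parameters are stood in for by a generic recursive splitter, and the segment sizes are illustrative only.

import dev.langchain4j.data.document.splitter.DocumentSplitters
import dev.langchain4j.model.ollama.OllamaEmbeddingModel
import dev.langchain4j.store.embedding.EmbeddingStoreIngestor
import dev.langchain4j.store.embedding.qdrant.QdrantEmbeddingStore

def createIngestor(
    embeddingModel: OllamaEmbeddingModel,
    embeddingStore: QdrantEmbeddingStore
): EmbeddingStoreIngestor =
  EmbeddingStoreIngestor.builder()
    .documentSplitter(DocumentSplitters.recursive(500, 50))  // chunk size / overlap in characters
    .embeddingModel(embeddingModel)
    .embeddingStore(embeddingStore)
    .build()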

def createMarkdownRetriever(embeddingStore: QdrantEmbeddingStore, embeddingModel: OllamaEmbeddingModel, indexName: String): EmbeddingStoreContentRetriever

Creates a retriever for markdown/text content. The retriever uses the embedding model and the embedding store to fetch documents from the specified collection.

Value parameters

embeddingModel

The embedding model

embeddingStore

The embedding store

indexName

The specific collection to be used

Attributes

Returns

A retriever for the specified index

def createModelRouter(): OllamaChatModel

Creates a model router for routing queries.

Attributes

Returns

An OllamaChatModel configured for query routing
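
A possible construction; the URL, model name, and temperature are assumptions. A low temperature keeps routing decisions deterministic.

import dev.langchain4j.model.ollama.OllamaChatModel

def createModelRouter(): OllamaChatModel =
  OllamaChatModel.builder()
    .baseUrl("http://localhost:11434")  // assumed local Ollama endpoint
    .modelName("llama3.1")              // placeholder routing model
    .temperature(0.0)                   // deterministic routing decisions
    .build()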

def createQdrantClient(): QdrantClient

Creates a Qdrant client.

Attributes

Returns

A QdrantClient
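
A minimal sketch; the host and gRPC port are typical local defaults, not values fixed by this interface.

import io.qdrant.client.{QdrantClient, QdrantGrpcClient}

def createQdrantClient(): QdrantClient =
  new QdrantClient(
    QdrantGrpcClient.newBuilder("localhost", 6334, false).build()  // host, gRPC port, TLS off
  )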

def createQueryRouter(retrievers: List[EmbeddingStoreContentRetriever], modelRouter: OllamaChatModel): QueryRouter

Creates a QueryRouter based on the provided retrievers.

Value parameters

modelRouter

The LLM model used for query routing decisions

retrievers

List of content retrievers to use

Attributes

Returns

A configured QueryRouter
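
One way to build such a router is LangChain4j's LanguageModelQueryRouter, which lets the LLM choose retrievers based on a short description of each one. The descriptions below are placeholders; a real implementation would describe each collection (for example code versus markdown).

import scala.jdk.CollectionConverters.*
import dev.langchain4j.model.ollama.OllamaChatModel
import dev.langchain4j.rag.content.retriever.{ContentRetriever, EmbeddingStoreContentRetriever}
import dev.langchain4j.rag.query.router.{LanguageModelQueryRouter, QueryRouter}

def createQueryRouter(
    retrievers: List[EmbeddingStoreContentRetriever],
    modelRouter: OllamaChatModel
): QueryRouter =
  val described = retrievers.zipWithIndex.map { case (r, i) =>
    (r: ContentRetriever) -> s"Collection #$i of the indexed repository"  // placeholder description
  }.toMap
  new LanguageModelQueryRouter(modelRouter, described.asJava)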

def createRetrievalAugmentor(router: QueryRouter, aggregator: ReRankingContentAggregator): DefaultRetrievalAugmentor

Creates a RetrievalAugmentor that combines the query router and the content aggregator.

Value parameters

aggregator

The ContentAggregator to use

router

The QueryRouter to use

Attributes

Returns

A configured DefaultRetrievalAugmentor
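
A direct mapping onto LangChain4j's default augmentor, wiring together the two documented collaborators.

import dev.langchain4j.rag.DefaultRetrievalAugmentor
import dev.langchain4j.rag.content.aggregator.ReRankingContentAggregator
import dev.langchain4j.rag.query.router.QueryRouter

def createRetrievalAugmentor(
    router: QueryRouter,
    aggregator: ReRankingContentAggregator
): DefaultRetrievalAugmentor =
  DefaultRetrievalAugmentor.builder()
    .queryRouter(router)            // decides which retrievers handle a query
    .contentAggregator(aggregator)  // reranks and merges the retrieved content
    .build()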

def createScoringModel(): ScoringModel

Creates a scoring model. It is used to rerank documents, potentially yielding better results.

Attributes

Returns

A ScoringModel

def createStreamingChatModel(): StreamingChatLanguageModel

Creates a streaming chat model.

Attributes

Returns

A streaming chat model implementation
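
A possible Ollama-backed implementation; the URL and model name are assumptions, and any StreamingChatLanguageModel implementation would satisfy this method.

import dev.langchain4j.model.chat.StreamingChatLanguageModel
import dev.langchain4j.model.ollama.OllamaStreamingChatModel

def createStreamingChatModel(): StreamingChatLanguageModel =
  OllamaStreamingChatModel.builder()
    .baseUrl("http://localhost:11434")  // assumed local Ollama endpoint
    .modelName("llama3.1")              // placeholder chat model
    .build()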

def createTextEmbeddingModel(): OllamaEmbeddingModel

Creates an embedding model for text.

Attributes

Returns

An OllamaEmbeddingModel