NeurondB

PostgreSQL AI Vector Extension

GPU-accelerated vector search, model inference, hybrid retrieval, and RAG orchestration built into PostgreSQL. Use this documentation to deploy NeurondB, operate its background workers, and embed ML pipelines in SQL.

Key Capabilities

Vector Search

HNSW, IVF, and product-quantization indexes, plus custom distance metrics, for billion-scale similarity search.
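NeurondB's SQL operators for these metrics are covered in the API Reference; as a language-neutral illustration (not NeurondB code), the three distance metrics most commonly offered by vector indexes can be sketched in a few lines of Python:

```python
import math

def l2_distance(a, b):
    # Euclidean (L2) distance: straight-line distance between vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def inner_product_distance(a, b):
    # Negative inner product: smaller value means more similar.
    return -sum(x * y for x, y in zip(a, b))

def cosine_distance(a, b):
    # 1 - cosine similarity: 0 for parallel vectors, 2 for opposite ones.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (norm_a * norm_b)
```

Which metric an index should use depends on how the embeddings were trained; cosine distance is the usual default for normalized text embeddings.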

ML Inference

ONNX runtime integration, GPU offload, and batch execution for deep learning workloads in SQL.

Hybrid Retrieval

Blend keyword, metadata, and vector signals to deliver highly relevant multimodal results.
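The exact blending NeurondB performs is described in the Hybrid Search documentation; one common scheme, shown here as an illustrative Python sketch (function names and weights are examples, not NeurondB APIs), is to min-max-normalize each signal and take a weighted sum:

```python
def min_max_normalize(scores):
    # Rescale raw scores to [0, 1] so different signals are comparable.
    lo, hi = min(scores.values()), max(scores.values())
    if hi == lo:
        return {doc: 1.0 for doc in scores}
    return {doc: (s - lo) / (hi - lo) for doc, s in scores.items()}

def blend(keyword_scores, vector_scores, w_keyword=0.4, w_vector=0.6):
    # Weighted linear combination of normalized keyword and vector scores.
    kw = min_max_normalize(keyword_scores)
    vec = min_max_normalize(vector_scores)
    docs = set(kw) | set(vec)
    ranked = [(doc, w_keyword * kw.get(doc, 0.0) + w_vector * vec.get(doc, 0.0))
              for doc in docs]
    return sorted(ranked, key=lambda t: t[1], reverse=True)
```

A metadata signal (recency, source trust, access policy) can be folded in the same way as a third weighted term.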

RAG Pipelines

In-database retrieval augmented generation with prompt templates, metadata policies, and observability.
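NeurondB's own prompt-template syntax is documented in the RAG section; the core assembly step, independent of any particular API, is filling a template with the question and the retrieved passages. A minimal sketch (the template text and function name are illustrative assumptions):

```python
def build_prompt(question, passages, template=None):
    # Assemble a grounded prompt from retrieved passages and a question.
    template = template or (
        "Answer using only the context below.\n\n"
        "Context:\n{context}\n\n"
        "Question: {question}\nAnswer:")
    # Number each passage so the model can cite its sources.
    context = "\n".join(f"[{i}] {p}" for i, p in enumerate(passages, 1))
    return template.format(context=context, question=question)
```

Keeping this step in the database means the metadata policies applied at retrieval time also govern what text can reach the prompt.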

Documentation Library

Getting Started

Install NeurondB on PostgreSQL 16–18, verify GPU support, and apply baseline configuration.

Core Features

Learn how NeurondB models vectors, maintains indexes, and trades recall against latency.

ML & Embeddings

Generate, store, and serve embeddings, and manage the model lifecycle.

  • Embeddings

    Transform text, audio, and images into dense vectors.

  • Inference

    Deploy ONNX models with GPU batching and caching.

  • Model Management

    Version control, approvals, and rollback workflows.
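The Inference pages above cover NeurondB's actual batching and caching controls; the idea behind GPU batching itself is simply to group inputs so per-call overhead is amortized across each batch. A generic sketch (names are illustrative, not NeurondB functions):

```python
def batched(items, batch_size):
    # Yield fixed-size batches; the final batch may be smaller.
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

def run_inference(inputs, model, batch_size=32):
    # Run a model over inputs batch by batch, as a GPU runtime would,
    # collecting per-item outputs in the original order.
    outputs = []
    for batch in batched(inputs, batch_size):
        outputs.extend(model(batch))
    return outputs
```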

Hybrid Search & Reranking

Combine text search, BM25, and neural rerankers for production retrieval pipelines.
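One standard way to merge a BM25 ranking with a vector ranking before a neural reranker runs is Reciprocal Rank Fusion (RRF), which needs only each signal's rank order, not comparable scores. Whether NeurondB uses RRF specifically is covered in its Hybrid Search documentation; this is a generic sketch:

```python
def reciprocal_rank_fusion(rankings, k=60):
    # Each input is a list of doc ids, best first. A document scores
    # 1 / (k + rank) per ranking it appears in; k=60 is the usual default.
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

The fused list is then typically truncated to the top N candidates and handed to the neural reranker, which is too expensive to run over the full corpus.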

Background Workers

Operational guidance for queue execution, auto-tuning, and index maintenance workers.

API Reference

Browse SQL functions, operators, and data types exported by NeurondB.

  • SQL Functions

    Query, indexing, and analytics procedures.

  • Data Types

    Custom vector, tensor, and metadata types.

  • Operators

    Similarity, distance, and hybrid scoring operators.