
Vector Stores

Connect vector stores to power your AI workflows with efficient data storage, retrieval, and embeddings management.

Vector stores are essential for building AI systems that require efficient search, data retrieval, and the management of large-scale embeddings. With Sandbloc, integrating vector stores into your workflows is seamless, allowing you to connect AI models with structured, queryable data for tasks like semantic search, recommendation systems, and contextual reasoning.
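The core operation behind every one of these tasks is similarity search over embeddings: a query vector is compared against stored vectors, and the closest matches are returned. As an illustrative sketch in plain Python (independent of any Sandbloc or vector-store API), the usual similarity measure, cosine similarity, looks like this:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity: 1.0 for vectors pointing the same way, 0.0 for orthogonal ones."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings: parallel vectors score 1.0, unrelated (orthogonal) vectors score 0.0
print(cosine_similarity([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))  # -> 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))            # -> 0.0
```

Vector databases index millions of such vectors so that the nearest neighbors of a query can be found without comparing against every stored item.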


Supported Vector Stores

Sandbloc supports the following vector databases:

  • Pinecone: Scalable vector database for high-performance similarity search.

  • MongoDB: Vector search functionality integrated with document-based storage.

  • LanceDB: An open-source, high-performance vector store for local and cloud-based data.

  • Neo4j: Graph-based vector integration for relational and semantic queries.


How to Integrate Vector Stores

Sandbloc’s modular design allows you to plug in a vector store as a Processing Block or Input Block for fast, contextual data retrieval.


Example 1: Using LanceDB with AI Search

Here’s how you can connect LanceDB to search for embeddings and retrieve relevant data:

```python
from sandbloc import InputBlock, ProcessingBlock, OutputBlock

# Input Block: User query for a search system
input_block = InputBlock("user_input", prompt="Find relevant articles about Sandbloc libraries.")

# Processing Block: Query LanceDB for embeddings matching the query
processing_block = ProcessingBlock("lancedb", database_path="path/to/db", collection="articles")

# Output Block: Display matching results in the console
output_block = OutputBlock("console")

# Workflow: Link the blocks
workflow = input_block >> processing_block >> output_block
workflow.run()
```

Example 2: Combining OpenAI and Pinecone

Here’s an advanced workflow combining OpenAI embeddings and Pinecone for semantic search:

```python
from sandbloc import InputBlock, ProcessingBlock, OutputBlock

# Input Block: User provides a search query
input_block = InputBlock("user_input", prompt="Show me information about AI workflows.")

# Processing Block 1: Generate embeddings with OpenAI
embed_block = ProcessingBlock("openai", model="text-embedding-ada-002", api_key="YOUR_OPENAI_API_KEY")

# Processing Block 2: Use Pinecone to search for the closest embeddings
vector_search = ProcessingBlock("pinecone", api_key="YOUR_PINECONE_API_KEY", index_name="sandbloc_index")

# Output Block: Return matching results to the console
output_block = OutputBlock("console")

# Combine the workflow blocks
workflow = input_block >> embed_block >> vector_search >> output_block
workflow.run()
```

Why Vector Stores Matter

  1. Efficient Data Retrieval: Vector stores allow AI systems to find similar data points quickly, which is critical for search engines, recommendation systems, and question-answering tools.

  2. Embeddings Management: Integrate embeddings generated by models like OpenAI's text-embedding-ada-002 to power semantic search and contextual results.

  3. Scalability: Vector databases like Pinecone and LanceDB scale to large datasets, making them well suited to production-grade AI systems.

  4. Graph-Based Search: Use Neo4j for relational and semantic queries, blending graph data with AI capabilities.
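The retrieval pattern described above can be sketched in plain Python, independent of any particular database: store (id, vector) pairs and return the top-k entries most similar to a query. This is a deliberately naive linear scan; production stores like Pinecone and LanceDB add approximate-nearest-neighbor indexes so the same query scales to millions of vectors.

```python
import heapq
import math

class TinyVectorStore:
    """Minimal in-memory vector store: linear-scan top-k by cosine similarity.
    Illustrative only; real vector databases use approximate indexes instead."""

    def __init__(self) -> None:
        self.items: list[tuple[str, list[float]]] = []

    def add(self, item_id: str, vector: list[float]) -> None:
        self.items.append((item_id, vector))

    @staticmethod
    def _cosine(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

    def query(self, vector: list[float], k: int = 3) -> list[tuple[str, float]]:
        # Score every stored vector against the query, keep the k best
        scored = ((item_id, self._cosine(vector, v)) for item_id, v in self.items)
        return heapq.nlargest(k, scored, key=lambda pair: pair[1])

store = TinyVectorStore()
store.add("doc-ai", [0.9, 0.1, 0.0])
store.add("doc-db", [0.1, 0.9, 0.0])
store.add("doc-graph", [0.0, 0.1, 0.9])

# A query vector close to "doc-ai" should rank it first
results = store.query([1.0, 0.0, 0.0], k=2)
print(results)
```

In a Sandbloc workflow, a Processing Block such as `"pinecone"` or `"lancedb"` plays the role of `query` here, with the embedding model supplying the query vector.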


Use Cases

  • Semantic Search: Retrieve similar documents, images, or videos based on user queries.

  • Recommendation Systems: Build smarter, context-aware product or content recommendations.

  • Knowledge Graphs: Use Neo4j to integrate vector search into relational data for better reasoning.

  • Contextual Agents: Enable AI agents to search through structured vectorized data for real-time answers.


Conclusion

Sandbloc’s vector store integrations allow developers to efficiently manage embeddings, perform similarity searches, and unlock advanced contextual AI workflows. Whether you’re building search tools, knowledge systems, or recommendation engines, Sandbloc simplifies the connection to top vector stores—letting you focus on building smarter, faster AI solutions.
