Amazon Web Services (AWS) has announced the general availability of S3 Vectors, a feature set designed specifically for storing and querying vector data in Amazon S3. With general availability, per-index capacity rises to 2 billion vectors, significantly expanding the scale of workloads a single index can serve.
AWS launched S3 Vectors as a preview in July of this year, and the service quickly gained traction: according to AWS, users have already created over 250,000 vector indexes and ingested more than 40 billion vectors. While the initial preview capped indexes at 50 million vectors, AWS's Sébastien Stormacq described the new limits:
You can now store and search across up to 2 billion vectors in a single index… This means you can consolidate your entire vector dataset into a single index, eliminating the need to shard across multiple smaller indexes or to implement complex query federation logic.
The removal of that complexity brings measurable performance gains: infrequent queries now return results in under one second, while frequently issued queries achieve latencies of 100 milliseconds or less. This is particularly beneficial for interactive applications such as conversational AI, where speed and relevance are critical.
Additionally, S3 Vectors now returns up to 100 search results per query, providing richer context for retrieval-augmented generation (RAG) applications. Write performance has also been improved: the service supports up to 1,000 PUT transactions per second for single-vector updates, so organizations can achieve high throughput even with small batch sizes and make newly ingested data immediately searchable.
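As a rough illustration of the ingestion pattern this enables, client-side writes can be split into batches before being sent. The sketch below is an assumption-laden outline, not the actual S3 Vectors API: `put_batch` is a placeholder for whatever SDK write call a real client would use, and only the generic chunking logic is concrete.

```python
# Sketch: batching embeddings for ingestion into a vector index.
# The write call is a placeholder (assumption); only the chunking is generic.

def chunk(items, batch_size):
    """Split a list into consecutive batches of at most batch_size items."""
    return [items[i:i + batch_size] for i in range(0, len(items), batch_size)]

def ingest(vectors, put_batch, batch_size=500):
    """Send vectors in batches via the supplied put_batch callable.

    put_batch stands in for a real S3 Vectors write call (hypothetical);
    returns the number of batches sent.
    """
    batches = chunk(vectors, batch_size)
    for batch in batches:
        put_batch(batch)  # e.g. an AWS SDK call in a real client
    return len(batches)

# Usage with a stub writer that just records each batch:
sent = []
n = ingest([{"key": f"v{i}", "data": [0.0]} for i in range(1200)],
           put_batch=sent.append, batch_size=500)
# n == 3 batches: 500 + 500 + 200 vectors
```

In practice the batch size would be tuned against the service's per-second PUT limits rather than hard-coded.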
AWS has also made generally available two integrations introduced during the preview. Users can leverage S3 Vectors as the vector storage engine for Amazon Bedrock Knowledge Bases, enhancing their knowledge management capabilities. Additionally, S3 Vectors integrates with Amazon OpenSearch Service, allowing users to keep S3 as their vector storage layer while relying on OpenSearch for richer search and analytics features.
Jalaj Nautiyal, a developer familiar with the updates, articulated the shift in approach regarding vector search in a LinkedIn post:
S3 Vectors moves vector search from a Compute-First problem to a Storage-First solution. The “Serverless” Shift: You no longer manage clusters, pods, or shards. You treat vectors like any other object in S3. Scale: Store billions of vectors.
Cost: Reduce total ownership costs by up to 90%. You pay for S3 storage (cheap) + query fees. No idle compute costs.
Nautiyal also highlighted the practicality of S3 Vectors, especially for internal RAG applications and autonomous agents:
For 80% of internal RAG applications and autonomous agents, you probably don’t need the Ferrari of vector databases. You just need a reliable, infinite trunk. S3 just became that trunk.
S3 Vectors is now available in 14 AWS regions, up from five during the preview. Pricing is based on three dimensions:
- **PUT Pricing**: Calculated on the logical GB of vectors uploaded, which includes the vector data, metadata, and key.
- **Storage Costs**: Determined by the total logical storage across all indexes.
- **Query Charges**: A per-API-call charge, plus a per-TB charge that depends on index size (excluding non-filterable metadata).
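To make it concrete how those three dimensions combine into a bill, the snippet below computes a back-of-the-envelope monthly estimate. Every rate in it is an illustrative placeholder, not an actual AWS price; only the structure (upload + storage + per-query charges) follows the dimensions listed above.

```python
# Illustrative monthly cost model for the three pricing dimensions.
# All rates below are made-up placeholders, NOT actual AWS prices.

def estimate_monthly_cost(put_gb, stored_gb, queries, index_tb,
                          put_rate=0.20,         # $/logical GB uploaded (placeholder)
                          storage_rate=0.06,     # $/logical GB-month stored (placeholder)
                          query_api_rate=0.0025, # $/query API call (placeholder)
                          query_tb_rate=0.004):  # $/TB of index per query (placeholder)
    put_cost = put_gb * put_rate                # PUT pricing: logical GB uploaded
    storage_cost = stored_gb * storage_rate     # storage: total logical GB held
    query_cost = queries * (query_api_rate + index_tb * query_tb_rate)
    return round(put_cost + storage_cost + query_cost, 2)

# Example: 100 GB uploaded, 500 GB stored, 10,000 queries against a 0.5 TB index
cost = estimate_monthly_cost(100, 500, 10_000, 0.5)
# → 95.0 (20 upload + 30 storage + 45 query, under the placeholder rates)
```

For real numbers, substitute the current rates from AWS's pricing page.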
For more specific details about the pricing options and conditions, interested users can refer to AWS’s dedicated pricing page.

