Pricing
Pay as you go
Get instant access to Archil volumes using our Developer Plan, and only ever pay for the data that you’re actively using.
Developer Plan
$0.20
per active gigabyte-month
Near-infinite capacity that grows with your application
Instant access to data sets stored in S3
Shareable across multiple instances, simultaneously
Full POSIX compatibility
Enterprise Plan
Custom price
Everything in the Developer Plan, plus:
Volume SLAs
Priority, enterprise support
On-premises and BYOC deployments
Enterprise authentication
Storage designed for AI’s explorers
Lightning-fast access, built for AI-scale workloads.
Archil delivers the scale-out performance required to process massive AI datasets without pre-provisioning. The high-speed data layer behind each Archil volume provides 30x lower latency than accessing S3 directly, and small-file workloads run up to 100x faster than shared file systems like Amazon EFS.
Storage that scales instantly, exactly when you need it.
Archil volumes integrate directly with S3-compatible storage, giving your applications instant access to massive datasets without waiting for them to fully download. Archil automatically pushes unused data into low-cost, unlimited S3 storage, so you never have to worry about running out of capacity. And because Archil synchronizes data in its native format, you can work with the same data through the S3 API at the same time.
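A minimal sketch of this dual-access model, assuming a hypothetical mount path and bucket name (a temporary directory stands in for the mount so the sketch is self-contained): a file written through the POSIX mount is the same object you would fetch through the S3 API.

```python
import os
import tempfile

# Hypothetical Archil mount point; a temp dir stands in for it here.
MOUNT = tempfile.mkdtemp(prefix="archil-volume-")

# 1) Write through the POSIX interface: ordinary file I/O, no SDK needed.
path = os.path.join(MOUNT, "training-run-01", "metrics.json")
os.makedirs(os.path.dirname(path), exist_ok=True)
with open(path, "w") as f:
    f.write('{"epoch": 1, "loss": 0.42}')

# 2) Because data is kept in its native S3 format, the same content is
#    visible as an ordinary S3 object. Sketch only -- the bucket name is
#    an assumption, and credentials must already be configured.
def read_via_s3(bucket="my-archil-bucket", key="training-run-01/metrics.json"):
    import boto3
    obj = boto3.client("s3").get_object(Bucket=bucket, Key=key)
    return obj["Body"].read().decode()

# POSIX read of what was just written:
with open(path) as f:
    print(f.read())  # {"epoch": 1, "loss": 0.42}
```

The point of the sketch is that neither side needs a translation step: the POSIX writer uses plain `open()`, and the S3 reader uses the standard `get_object` call.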
Works with your existing stack, no code changes needed.
Archil volumes automatically scale with your application, so you don't need to pre-specify an amount of capacity, throughput, or IOPS to get started. Because Archil volumes provide POSIX-compliant storage, you can immediately use them with your existing data-intensive applications (such as Spark, PyTorch, and pandas) without changing a single line of code.
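As an illustration of "no code changes," pointing pandas at a path on a mounted volume works exactly like a local path. The mount point below is hypothetical, and a temporary directory stands in for it so the sketch runs on its own:

```python
import os
import tempfile
import pandas as pd

# Hypothetical Archil mount point; a temp dir stands in for it here.
mount = tempfile.mkdtemp(prefix="archil-volume-")

# Any process -- or another instance sharing the volume -- could have
# produced this file.
csv_path = os.path.join(mount, "events.csv")
pd.DataFrame({"user": ["a", "b", "a"], "clicks": [3, 1, 4]}).to_csv(
    csv_path, index=False
)

# Unchanged pandas code: read_csv neither knows nor cares whether the
# path is backed by S3 through Archil or by a local disk.
df = pd.read_csv(csv_path)
print(df.groupby("user")["clicks"].sum().to_dict())  # {'a': 7, 'b': 1}
```

The same applies to any library that takes a file path: a PyTorch `Dataset` or a Spark job reading from the mount needs no storage-specific client code.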
One dataset, accessible from every instance.
Archil volumes provide fully consistent storage that can be accessed from multiple servers simultaneously, out of the box. That eliminates the need to partition datasets across servers or to route each request to the server that holds the right shard.
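A small sketch of the access pattern this enables, under stated assumptions: threads stand in for separate instances, and a shared directory stands in for a mounted volume. Every reader opens the same path, with no sharding or routing logic anywhere in the application.

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

# A shared directory stands in for a mounted Archil volume; on real
# infrastructure, each "reader" would be a separate server with the
# same volume mounted.
volume = tempfile.mkdtemp(prefix="archil-volume-")
dataset_path = os.path.join(volume, "dataset.txt")
with open(dataset_path, "w") as f:
    f.write("shared dataset contents")

def reader(worker_id: int) -> str:
    # Every worker opens the same path -- no partition map, no routing.
    with open(dataset_path) as f:
        return f"worker {worker_id}: {f.read()}"

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(reader, range(4)))

print(results[0])  # worker 0: shared dataset contents
```

Contrast this with a partitioned design, where each worker would first have to look up which server owns its shard before issuing a read.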