Choosing the Right S3 Alternatives for Artifact Storage
If you work with CI/CD pipelines, artifact repositories, or DevOps workflows, you’ll be familiar with Amazon S3. It’s flexible and widely used, but as your repository grows, costs can add up fast. Storage is just one piece of it; request charges and egress fees can catch teams off guard, especially when traffic spikes. As you scale, having predictable pricing and better control over your artifact storage starts to really matter.
That’s why teams start looking at S3-compatible alternatives, from cloud providers with more predictable pricing to on-prem solutions that keep artifacts closer to home. The goal is usually the same: reduce costs and regain control without breaking existing CI/CD workflows.
In this article, I’ll talk about why teams move away from Amazon S3 and compare some practical S3-compatible storage alternatives. I’ll also walk through how to migrate your artifacts with minimal disruption and how ProGet helps make that transition easier.
Why Teams Switch from Amazon S3
For most teams, the biggest reason to leave Amazon S3 is cost. Storage itself is fairly cheap, around $23–$26 per TB per month for S3 Standard, but outbound traffic can quickly become expensive. For example, a repo serving 10 TB (10,240 GB) of downloads per month gets roughly the first 100 GB free, leaving ~10,140 GB billed at $0.09 per GB. That’s about $913 in egress fees, nearly four times the ~$250 storage cost for the same 10 TB. Throw request charges and potential cross-region transfers into the mix, and the monthly bill becomes hard to predict and harder to budget for.
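If you want to sanity-check those numbers against your own traffic, here’s a quick back-of-envelope sketch. The rate and free tier below are assumptions based on typical S3 Standard egress pricing; substitute your region’s actual figures:

```bash
# Back-of-envelope monthly egress estimate for S3 Standard.
DOWNLOADS_GB=10240   # ~10 TB served per month
FREE_GB=100          # assumed monthly free egress tier
RATE_PER_GB=0.09     # assumed $/GB after the free tier

awk -v gb="$DOWNLOADS_GB" -v free="$FREE_GB" -v rate="$RATE_PER_GB" \
  'BEGIN { printf "Estimated egress: $%.2f/month\n", (gb - free) * rate }'
```

Run against the numbers above, this prints roughly $912.60/month, which is the ~$913 figure quoted earlier.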
Some teams go a step further and choose on-prem, S3-compatible storage to cut costs even more and speed things up internally. Keeping artifacts local means no surprise egress fees for internal traffic, faster CI/CD runs thanks to lower latency, and complete control over where your data lives. Compliance needs can also push teams in this direction, but in many cases the cost savings alone make it worth it.
The good news is that changing storage isn’t that painful. ProGet works with any S3-compatible provider, whether it’s in the cloud or on-prem, so teams can move their artifacts over without breaking feeds, credentials, or existing pipelines.
Popular S3-Compatible Storage Options
When looking at alternatives to Amazon S3, it really comes down to cost, download volume, and access patterns. Below, I’ve covered some common S3-compatible options (both cloud and on-prem) and given a quick breakdown of each to help you decide which one fits your organization’s needs.
⭐ Cloudflare R2: A solid option if you have artifact-heavy repos and a lot of downloads, where egress fees usually hurt the most. Storage runs around $0.015/GB per month, and outbound traffic is free. This means costs stay nice and predictable, even at scale. The big win here is zero egress fees, which can save you hundreds or even thousands each month if you’re pushing tens of terabytes. The downside? It’s cloud-only and pretty tied to the Cloudflare ecosystem, which might not work if you need multi-cloud flexibility or an on-prem option.
⭐ Wasabi: Simple, flat-rate cloud storage. Pricing starts at $6.99 per TB per month, with no egress or API fees for normal usage, so budgeting is straightforward with no surprise charges. It works well for teams with steady, moderate download needs where knowing the monthly cost upfront matters more than chasing the lowest price. The trade-offs are fewer global edge locations and a minimum retention policy, but for many teams, the predictability is worth it.
⭐ Backblaze B2: A low-cost option for long-term retention or infrequent access. With storage at $6 per TB per month and free egress up to roughly three times your stored data each month, it’s good for backups, archives, or repositories that don’t need to be accessed often. While it’s cloud-only and not optimized for high-frequency downloads, the predictable costs make it ideal for teams looking to store large volumes of rarely accessed artifacts.
⭐ Self-Hosted / On-Premises Options: On-prem options, such as MinIO or Ceph with S3 Gateway, give you full control by keeping artifacts on your own infrastructure. You avoid egress fees and get fast, low-latency access, which is great for internal CI/CD or regulated environments. The trade-off is more hands-on management. Your team handles all the scaling, uptime, and backups. For teams with strict compliance needs or large internal repos, that control and cost predictability can be well worth it, though.
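If you want to try the self-hosted route before committing, MinIO can be stood up locally in a single container. A minimal, non-production sketch, assuming Docker is installed; the credentials and host path below are placeholders:

```bash
# Single-node MinIO for evaluation (not hardened for production).
# The S3 API listens on :9000; the web console on :9001.
docker run -d --name minio \
  -p 9000:9000 -p 9001:9001 \
  -e MINIO_ROOT_USER=admin \
  -e MINIO_ROOT_PASSWORD=change-me-now \
  -v /srv/minio/data:/data \
  quay.io/minio/minio server /data --console-address ":9001"
```

Point any S3 client at http://localhost:9000 with those credentials and you can test uploads, bucket policies, and latency against your real CI workloads before deciding.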
So, What Provider Should I Choose?
At this point, the decision usually comes down to how much data you serve and where you want it to live:
- High download volumes: When a repo is moving a lot of data, egress fees usually become the main cost driver. In those cases, storage options that don’t charge for outbound traffic tend to be easier to work with. Cloudflare R2 is often used here because download volume doesn’t directly translate into higher monthly bills.
- Steady, predictable usage: If access patterns are consistent and the priority is keeping monthly costs easy to predict, flat-rate storage is usually a good fit. Wasabi is chosen for this reason, since its pricing is simple and avoids extra fees for normal usage.
- Rarely accessed or archival artifacts: For data that’s mostly stored and only accessed occasionally, lower storage costs are what matter more. Backblaze B2 is often used for backups and long-term retention because it keeps storage costs low while allowing some free egress for infrequent access.
- Full control or compliance-driven environments: If artifacts need to stay on your own infrastructure for latency, security, or regulatory reasons, on-premises storage is often the practical choice. MinIO or Ceph with an S3 gateway are commonly used for this, offering full control and no egress fees, with the trade-off of additional operational work.
A good place to start is by looking at monthly download volume and repository size, since these drive most cost differences. From there, consider access patterns, compliance requirements, and how much operational overhead your team is willing to manage.
How to Migrate from AWS S3 to Your New Provider
Switching storage doesn’t just mean picking a new provider; you also need to move your existing artifacts without breaking your pipelines or workflows. The good news is that because these providers are all S3-compatible, migration is mostly about copying objects and updating endpoints.
⭐ Step 1: Take inventory of your data and access patterns: Before starting the migration, get a clear idea of what you’re moving. Look at your total repository size, which artifacts are accessed frequently, and which ones are rarely touched. This helps you decide whether a full cutover, a staged migration, or incremental syncing makes the most sense.
It’s also important to know how much data will leave AWS during the move, since outbound transfers are billed. For larger repos, teams often reduce costs by migrating older or infrequently used artifacts first, syncing in batches, or using an on-prem system as an intermediate cache. Planning this up front keeps the migration predictable and avoids surprises once data starts moving.
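Before planning batches, it helps to put hard numbers on the move. A quick way to do that with the AWS CLI, assuming it’s already configured and old-bucket stands in for your real bucket name:

```bash
# Object count and total size for the bucket; the summary appears in
# the last two lines of output ("Total Objects" and "Total Size").
aws s3 ls s3://old-bucket --recursive --summarize --human-readable | tail -n 2
```

For per-prefix breakdowns (say, one feed per prefix), run the same command against s3://old-bucket/some-prefix/ to see which feeds dominate the transfer.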
⭐ Step 2: Prepare the new storage: Create your bucket on the chosen provider. Set up credentials with read/write permissions.
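Most providers’ S3 endpoints work with the AWS CLI itself, so you can reuse familiar tooling. A sketch, assuming a hypothetical newprovider profile and a placeholder endpoint URL:

```bash
# Store the new provider's access keys under a dedicated CLI profile.
aws configure --profile newprovider

# Create the destination bucket against the provider's S3 endpoint.
aws s3 mb s3://new-bucket \
  --endpoint-url https://s3.example-provider.com \
  --profile newprovider
```

Some providers also let you create buckets from their own dashboard; either way works, as long as the credentials end up with read/write access to the bucket.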
⭐ Step 3: Copy your artifacts: There are several ways to move data:
- AWS CLI / SDKs: `aws s3 sync s3://old-bucket s3://new-bucket --endpoint-url <new-endpoint>` works with any S3-compatible target. Keep in mind that the endpoint applies to every request in a single command, so when the source and destination live on different providers, sync in two steps through a local staging directory (see the sketch after this list).
- Third-party tools: Tools like rclone or MinIO Client (mc) can copy between S3 providers in one command, with features for resumable transfers and parallelism.
- Staged migration: For very large repositories, I’d recommend moving rarely accessed artifacts first, or doing a feed-by-feed migration to avoid overwhelming bandwidth or storage.
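Here’s what the first two approaches can look like in practice. This is a sketch rather than a turnkey script: the bucket names, endpoint URL, CLI profile, and rclone remote names (aws, newprovider) are all placeholders to adapt:

```bash
# Two-step sync with the AWS CLI: a single `aws s3 sync` can only talk
# to one endpoint at a time, so stage the data locally in between.
aws s3 sync s3://old-bucket ./staging
aws s3 sync ./staging s3://new-bucket \
  --endpoint-url https://s3.example-provider.com \
  --profile newprovider

# Or copy provider-to-provider with rclone in one command (data still
# streams through the machine running rclone). Assumes two remotes,
# "aws" and "newprovider", already defined via `rclone config`.
rclone sync aws:old-bucket newprovider:new-bucket \
  --progress --transfers 16 --checksum
```

Both tools support a dry-run flag (--dryrun for the AWS CLI, --dry-run for rclone), which is worth using to verify the plan before the real transfer starts.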
Once you have everything migrated over, it’s just a case of setting up ProGet with your new storage solution.
Using Your S3 Alternative with ProGet
ProGet can integrate with any S3-compatible provider. Once set up, it handles all package reads and writes via the S3 API, so your workflows remain unchanged whether your storage is cloud-based or on-premises.
To start with, you’ll need the bucket and credentials you set up when migrating your artifacts from Amazon S3 to whichever provider you went with. In ProGet, navigate to your feed and select “change” under “Storage” in “Storage Properties.”

Select “Amazon S3”, then enter the bucket name, region, and credentials. If you’re using a non-AWS provider, enable Custom Endpoint and provide the endpoint URL.

Then select “Save”. ProGet will now store artifacts in your chosen provider!
Take Control of Your Artifact Storage
Amazon S3 is flexible, but for many teams, predictable costs and control over where artifacts live are the real priorities. Cloud and on-premises S3-compatible providers like Cloudflare R2, Wasabi, Backblaze B2, and MinIO let teams reduce storage expenses, eliminate surprise egress fees, and maintain smooth workflows.
With ProGet, connecting to any S3-compatible storage is straightforward. Whether your storage is in the cloud or on-premises, your feeds, pipelines, and developer workflows remain unchanged. That means you can focus on building and shipping software, not wrestling with unpredictable bills.
If S3’s pricing no longer matches how your artifacts are actually used, switching storage is often a practical next step. Because ProGet supports S3-compatible storage natively, you can change where artifacts are stored without changing how they’re published or consumed, making it easier to regain cost predictability with minimal disruption.