
A surprising thing about PyPI's BigQuery data

You can get download numbers for PyPI packages (or projects) from a Google BigQuery dataset. You need a Google account and credentials, and Google gives 1 TiB of free quota per month.

Each month, I run automation to fetch the download numbers for the 8,000 most popular packages over the past 30 days, and publish them as more accessible JSON and CSV files at Top PyPI Packages. This data is widely used for research in academia and industry.
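
For reference, the monthly fetch boils down to a pypinfo call along these lines. This is only a sketch using the flags that appear later in this post; the output filename is illustrative, and the real automation lives in the repo linked at the end:

$ pypinfo --all --json --indent 0 --days 30 --limit 8000 "" project > top-pypi-packages-30-days.json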

However, as more packages and releases are uploaded to PyPI, and there are more and more downloads logged, the amount of billed data increases too.

[Chart: BigQuery TB billed]

This chart shows the amount of data billed per month.

At first, I was only collecting download data for 4,000 packages, fetched via two queries: downloads over 365 days and over 30 days. But as time passed, the 365-day query started using up too much quota.

So I ditched the 365-day data, and increased the 30-day data from 4,000 to 5,000 packages. Later, I checked how much quota was being used and increased from 5,000 packages to 8,000 packages.

But then I exceeded the BigQuery monthly quota of 1 TiB fetching data for July 2024.

To fetch the missing data and investigate what’s going on, I started Google Cloud’s 90-day, $300 (€277.46) free trial 💸

Here’s what I found!

Finding: it costs more to get data for downloads from only pip than from all installers #

I use the pypinfo client to help query BigQuery. By default, it only fetches downloads for pip.

Only pip #

This command gets one day’s download data for the top 10 packages, for pip only:

$ pypinfo --limit 10 --days 1 "" project
Served from cache: False
Data processed: 58.21 GiB
Data billed: 58.21 GiB
Estimated cost: $0.29

Results:

| project | download count |
| --- | --- |
| boto3 | 37,251,744 |
| aiobotocore | 16,252,824 |
| urllib3 | 16,243,278 |
| botocore | 15,687,125 |
| requests | 13,271,314 |
| s3fs | 12,865,055 |
| s3transfer | 12,014,278 |
| fsspec | 11,982,305 |
| charset-normalizer | 11,684,740 |
| certifi | 11,639,584 |
| Total | 158,892,247 |

All installers #

Adding the --all flag gets one day’s download data for the top 10 packages, for all installers:

$ pypinfo --all --limit 10 --days 1 "" project
Served from cache: False
Data processed: 46.63 GiB
Data billed: 46.63 GiB
Estimated cost: $0.23

| project | download count |
| --- | --- |
| boto3 | 39,495,624 |
| botocore | 17,281,187 |
| urllib3 | 17,225,121 |
| aiobotocore | 16,430,826 |
| requests | 14,287,965 |
| s3fs | 12,958,516 |
| charset-normalizer | 12,781,405 |
| certifi | 12,647,098 |
| setuptools | 12,608,120 |
| idna | 12,510,335 |
| Total | 168,226,197 |

So we can see the default pip-only query processes and bills about 25% more data (58.21 GiB vs. 46.63 GiB), and costs about 25% more in dollars ($0.29 vs. $0.23).

Unsurprisingly, the actual download counts are higher for all installers. The ranking has changed a bit, but I expect we’re still getting more-or-less the same packages in the top thousands of results.

Queries #

pypinfo sends a query like this to BigQuery for pip only:

SELECT
  file.project as project,
  COUNT(*) as download_count,
FROM `bigquery-public-data.pypi.file_downloads`
WHERE timestamp BETWEEN TIMESTAMP_ADD(CURRENT_TIMESTAMP(), INTERVAL -2 DAY) AND TIMESTAMP_ADD(CURRENT_TIMESTAMP(), INTERVAL -1 DAY)
  AND details.installer.name = "pip"
GROUP BY
  project
ORDER BY
  download_count DESC
LIMIT 10

And for all installers:

SELECT
  file.project as project,
  COUNT(*) as download_count,
FROM `bigquery-public-data.pypi.file_downloads`
WHERE timestamp BETWEEN TIMESTAMP_ADD(CURRENT_TIMESTAMP(), INTERVAL -2 DAY) AND TIMESTAMP_ADD(CURRENT_TIMESTAMP(), INTERVAL -1 DAY)
GROUP BY
  project
ORDER BY
  download_count DESC
LIMIT 10

These queries are the same, except the default has an extra AND details.installer.name = "pip" condition. It seems reasonable that the extra filtering work would cost more.
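
As an aside, if you want to check how many bytes a query would process before spending any quota, BigQuery supports dry runs; for example, with the bq command-line tool (this is separate from pypinfo, just a general sketch):

$ bq query --use_legacy_sql=false --dry_run '
SELECT file.project as project, COUNT(*) as download_count,
FROM `bigquery-public-data.pypi.file_downloads`
WHERE timestamp BETWEEN TIMESTAMP_ADD(CURRENT_TIMESTAMP(), INTERVAL -2 DAY) AND TIMESTAMP_ADD(CURRENT_TIMESTAMP(), INTERVAL -1 DAY)
  AND details.installer.name = "pip"
GROUP BY project ORDER BY download_count DESC LIMIT 10'

A dry run reports the bytes the query would process without actually running it, so it doesn’t count against the billed bytes.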

Installers #

Let’s look at the installers:

$ pypinfo --all --limit 100 --days 1 "" installer
Served from cache: False
Data processed: 29.49 GiB
Data billed: 29.49 GiB
Estimated cost: $0.15

| installer name | download count |
| --- | --- |
| pip | 1,121,198,711 |
| uv | 117,194,833 |
| requests | 29,828,272 |
| poetry | 23,009,454 |
| None | 8,916,745 |
| bandersnatch | 6,171,555 |
| setuptools | 1,362,797 |
| Bazel | 1,280,271 |
| Browser | 1,096,328 |
| Nexus | 593,230 |
| Homebrew | 510,247 |
| Artifactory | 69,063 |
| pdm | 62,904 |
| OS | 13,108 |
| devpi | 9,530 |
| conda | 2,272 |
| pex | 194 |
| Total | 1,311,319,514 |

pip is still by far the most popular, and unsurprisingly uv is up there too, with about 10% of pip’s downloads.

Each of the others accounts for about 25% or less of uv’s downloads, and a lot of them are mirroring services that we’d previously wanted to exclude.

Given uv’s importance, my expectation that it will continue to take a bigger share of the pie, and especially the extra cost of filtering for pip alone, I think we should switch to fetching data for all installers. The remaining installers don’t account for much of the total anyway.

Finding: the number of packages doesn’t affect the cost #

This was the biggest surprise. Earlier I’d been increasing or decreasing the number to try and remain under quota. But it turns out it makes no difference how many packages you query!

I fetched data for just one day and all installers for different package limits: 1000, 2000, 3000, 4000, 5000, 6000, 7000, 8000. Sample query:

SELECT
  file.project as project,
  COUNT(*) as download_count,
FROM `bigquery-public-data.pypi.file_downloads`
WHERE timestamp BETWEEN TIMESTAMP_ADD(CURRENT_TIMESTAMP(), INTERVAL -2 DAY) AND TIMESTAMP_ADD(CURRENT_TIMESTAMP(), INTERVAL -1 DAY)
GROUP BY
  project
ORDER BY
  download_count DESC
LIMIT 8000
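
The sweep itself was driven through pypinfo; a one-liner like this reproduces it (a sketch, not the exact script I used):

$ for limit in 1000 2000 3000 4000 5000 6000 7000 8000; do pypinfo --all --json --indent 0 --days 1 --limit "$limit" "" project; done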

[Chart: cost and bytes for 1 day with different package limits are the same]

Result: Interestingly, the cost is the same for all limits (1000-8000): $0.31.

Repeating with one day but filtering for pip only:

[Chart: cost and bytes for 1 day are still the same for pip only]

Result: The cost increased to $0.39, but it was again the same for all limits.

Let’s repeat with all installers, but for 30 days, and this time query in decreasing limits, in case we were only paying for incremental changes: 8000, 7000, 6000, 5000, 4000, 3000, 2000, 1000:

[Chart: cost and bytes for 30 days are still the same no matter how many packages]

Result: Again, the cost is the same regardless of package limit: $4.89 per query.

Well then, let’s repeat with the limit increasing by powers of ten, up to 1,000,000! This last one fetches data for all 531,022 packages on PyPI:

| limit | projects count | estimated cost ($) | bytes billed | bytes processed |
| --- | --- | --- | --- | --- |
| 1 | 1 | 0.20 | 43,447,746,560 | 43,447,720,943 |
| 10 | 10 | 0.20 | 43,447,746,560 | 43,447,720,943 |
| 100 | 100 | 0.20 | 43,447,746,560 | 43,447,720,943 |
| 1000 | 1,000 | 0.20 | 43,447,746,560 | 43,447,720,943 |
| 8000 | 8,000 | 0.20 | 43,447,746,560 | 43,447,720,943 |
| 10000 | 10,000 | 0.20 | 43,447,746,560 | 43,447,720,943 |
| 100000 | 100,000 | 0.20 | 43,447,746,560 | 43,447,720,943 |
| 1000000 | 531,022 | 0.20 | 43,447,746,560 | 43,447,720,943 |

[Chart: still the same flat cost and bytes for 1, 10, 1,000 or 1,000,000 packages]

Result: Again, the same cost, whether for 1 package or 531,022 packages! This makes sense: BigQuery’s on-demand pricing is based on the bytes scanned in the columns the query reads, and LIMIT only trims the result set after the full scan and aggregation, so it doesn’t reduce the bytes billed.

Finding: the number of days affects the cost #

No surprise here. I’d earlier noticed that 365 days took too much quota, and that I could continue with 30 days.

Here’s the estimated cost and bytes billed (for one package, all installers) between one and 30 days (f"pypinfo --all --json --indent 0 --days {days} --limit 1 '' project"), showing a roughly linear increase:

[Chart: cost and bytes increase as the number of days increases]
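
In shell terms, that command amounts to roughly the following loop (again, a sketch rather than the exact script):

$ for days in $(seq 1 30); do pypinfo --all --json --indent 0 --days "$days" --limit 1 "" project; done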

Conclusion #

  • It doesn’t matter how many packages I fetch data for, so I might as well fetch them all and make the data available to everyone, depending on the size of the data file. It will still make sense to offer a smaller file with 8,000 or so packages: often you just need a large-ish yet manageable number.

  • It costs more to filter for only downloads from pip, so I’ve switched to fetching data for all installers.

  • The number of days affects the cost, so I will need to decrease this in the future to stay within quota. For example, at some point I may need to switch from 30 to 25 days, and later from 25 to 20 days.

More details from the investigation, the scripts and data files can be found at hugovk/top-pypi-packages#36.

And let me know if you know any tricks to reduce costs!


Header photo: “The Balancing Rock, Stonehenge, Near Glen Innes, NSW” by the Royal Australian Historical Society, with no known copyright restrictions.