Unfortunately, you can’t read the Tranco dataset directly from BigQuery. What you can do instead is load the Tranco CSV data into a Cloud Storage bucket and then read that bucket from BigQuery.
When you load data from Cloud Storage into a BigQuery table, the dataset that contains the table must be in the same regional or multi-regional location as the Cloud Storage bucket.
Note the following limitations:

- CSV files do not support nested or repeated data.
- Remove byte order mark (BOM) characters; they might cause unexpected issues.
- If you use gzip compression, BigQuery cannot read the data in parallel. Loading compressed CSV data into BigQuery is slower than loading uncompressed data.
- You cannot include both compressed and uncompressed files in the same load job.
- The maximum size for a gzip file is 4 GB.
- When you load CSV or JSON data, values in DATE columns must use the dash (-) separator and the date must be in the following format: YYYY-MM-DD (year-month-day).
- When you load JSON or CSV data, values in TIMESTAMP columns must use a dash (-) separator for the date portion of the timestamp, and the date must be in the following format: YYYY-MM-DD (year-month-day). The hh:mm:ss (hour-minute-second) portion of the timestamp must use a colon (:) separator.
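Regarding the BOM limitation above, one easy way to handle it is to rewrite the file locally before uploading. Here's a minimal sketch (the helper name `strip_bom` is mine, not from any library); it relies on Python's `utf-8-sig` codec, which silently consumes a leading BOM on read:

```python
def strip_bom(path_in: str, path_out: str) -> None:
    """Rewrite a CSV as plain UTF-8, dropping a leading BOM if present."""
    # 'utf-8-sig' transparently consumes a byte order mark when reading;
    # writing back with plain 'utf-8' produces a BOM-free file.
    with open(path_in, "r", encoding="utf-8-sig", newline="") as src, \
         open(path_out, "w", encoding="utf-8", newline="") as dst:
        for chunk in iter(lambda: src.read(1 << 20), ""):  # 1 MiB chunks
            dst.write(chunk)
```

If the file has no BOM, this is just a pass-through copy, so it's safe to run unconditionally before the upload.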
Also, if you don't know how to upload and read your CSV data, you can see this documentation.
The next link I'm sending you is a step-by-step guide on how to create or select the bucket you will use.
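Putting the steps together, the whole flow can be sketched with the `gsutil` and `bq` command-line tools. The bucket name, dataset, table, and file names below are placeholders, and the two-column `rank,domain` schema assumes the standard Tranco list format with no header row:

```shell
# Create the bucket in the same location as the BigQuery dataset
# (required, per the location note above). Names are hypothetical.
gsutil mb -l US gs://my-tranco-bucket

# Upload the (uncompressed, BOM-free) Tranco CSV.
gsutil cp top-1m.csv gs://my-tranco-bucket/

# Load it from Cloud Storage into a BigQuery table.
bq load --source_format=CSV \
    mydataset.tranco \
    gs://my-tranco-bucket/top-1m.csv \
    rank:INTEGER,domain:STRING
```

Remember that if you gzip the file first, the load will not be parallelized, so for a one-off load the uncompressed file is usually the faster choice.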