So I kind of found a solution, although it's not great. I ended up messaging the maintainer of dpaste.org, who very kindly enabled CORS, so I can now upload my file to dpaste and download it again.
The GH action I've got looks something like:
```yaml
jobs:
  build-and-publish-calendars:
    runs-on: ubuntu-latest
    steps:
      ...
      - name: POST machine_friendly.csv to dpaste.org
        run: |
          cat calendars/machine_friendly.csv | curl -X POST -F "expires=31536000" -F 'format=url' -F 'content=<-' https://dpaste.org/api/ > pastebin.txt
      - name: Write pastebin link to GH variable
        id: PASTEBIN
        run: |
          echo "pastebin=$(cat pastebin.txt)/raw" >> $GITHUB_OUTPUT
```
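Just to sanity-check that the upload worked, a quick fetch of the raw URL (the one that ends up in pastebin.txt) should print the CSV. This is only a sketch for checking things locally with Node 18+ or in the browser console; `checkPaste` and the paste ID are placeholders, not part of the workflow:

```typescript
// Hypothetical helper, not part of the workflow: fetch the raw dpaste URL
// and print the first line of the CSV to confirm the paste is reachable.
const checkPaste = async (rawUrl: string): Promise<void> => {
    const res = await fetch(rawUrl)
    if (!res.ok) {
        throw new Error(`dpaste returned HTTP ${res.status}`)
    }
    const text = await res.text()
    console.log(`First line of the CSV: ${text.split("\n")[0]}`)
}

// Replace the paste ID with whatever the workflow wrote to pastebin.txt
checkPaste("https://dpaste.org/XXXX/raw")
```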
And then later (funky hacks incoming), I include that pastebin link in the description of my release using a personal fork of IsaacShelton/update-existing-release (I maintain the fork for some performance improvements that are unrelated to this issue). The step looks like:
```yaml
      ...
      - name: Update latest release with new calendars
        uses: beyarkay/update-existing-release@master
        with:
          token: ${{ secrets.GH_ACTIONS_PAT }}
          release: My Updated Release
          updateTag: true
          tag: latest
          replace: true
          files: ${{ steps.LS-CALENDARS.outputs.LS_CALENDARS }}
          body: "If you encounter CORS issues, you'll need to use this [pastebin link](${{ steps.PASTEBIN.outputs.pastebin }})"
```
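Since the website has to pull the link back out of that body, here's a small sketch of the extraction against a hypothetical body string (the paste ID is made up), just to show what the step above produces and what the regex further down expects:

```typescript
// A hypothetical release body, in the same shape as the `body:` field above
const body =
    "If you encounter CORS issues, you'll need to use this [pastebin link](https://dpaste.org/AbCd/raw)"

// Pull the raw dpaste URL back out of the markdown link
const pastebin_re = /\[pastebin link\]\((https:\/\/dpaste\.org\/(\w+)\/raw)\)/m
const match = pastebin_re.exec(body)
console.log(match?.[1]) // https://dpaste.org/AbCd/raw
```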
And on my website, I have a snippet like:
```typescript
// Adjust this import to whichever Octokit package you're using
import { Octokit } from "@octokit/core"

const downloadFromRelease = async (): Promise<Event[]> => {
    // We need octokit in order to download the metadata about the release
    const octokit = new Octokit({
        auth: process.env.GH_PAGES_ENV_PAT || process.env.GH_PAGES_PAT
    })
    const desc = await octokit.request("GET /repos/{owner}/{repo}/releases/{release_id}", {
        owner: "beyarkay",
        repo: "eskom-calendar",
        release_id: 72143886,
    }).then((res) => res.data.body ?? "")
    // Here's some regex that matches the markdown-formatted link
    const pastebin_re = /\[pastebin link\]\((https:\/\/dpaste\.org\/(\w+)\/raw)\)/m
    const match = pastebin_re.exec(desc)
    if (match === null) {
        throw new Error("No pastebin link found in the release description")
    }
    // The first capture group is the raw pastebin URL itself
    const url = match[1]
    console.log(`Fetching data from ${url}`)
    return fetch(url)
        .then(res => res.text())
        .then(newEvents => {
            // And finally parse the CSV data into Event objects,
            // skipping any blank lines (e.g. a trailing newline)
            const events: Event[] = newEvents
                .split("\n")
                .filter(line => line.trim().length > 0)
                .map(line => ({
                    area_name: line.split(",")[0],
                    start: line.split(",")[1],
                    finsh: line.split(",")[2],
                    stage: line.split(",")[3],
                    source: line.split(",")[4],
                }))
            return events
        })
}
```
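Hardcoding the numeric release ID works, but since the workflow always publishes to the `latest` tag, the release could also be looked up by tag. A sketch, assuming the same Octokit client as above (`getReleaseBodyByTag` is just an illustrative name):

```typescript
import { Octokit } from "@octokit/core"

// Illustrative alternative: look the release up by its tag instead of its ID
const getReleaseBodyByTag = async (octokit: Octokit): Promise<string> => {
    const res = await octokit.request("GET /repos/{owner}/{repo}/releases/tags/{tag}", {
        owner: "beyarkay",
        repo: "eskom-calendar",
        tag: "latest",
    })
    return res.data.body ?? ""
}
```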
Tying it all together, I can use this snippet to actually set the state in React:
```typescript
// Define a Result type that represents data which might not be ready yet
type Result<T, E> = { state: "unsent" }
    | { state: "loading" }
    | { state: "ready", content: T }
    | { state: "error", content: E }

// The events start out as being "unsent"
const [events, setEvents] =
    React.useState<Result<Event[], string>>({ state: "unsent" })

// If they're unsent, mark them as loading and send off the request,
// so that re-renders don't kick off duplicate requests
if (events.state === "unsent") {
    setEvents({ state: "loading" })
    downloadFromRelease().then(newEvents => {
        // When the data comes back, use the `setEvents` hook
        setEvents({
            state: "ready",
            content: newEvents,
        })
        // If there's an error, store the error as a string
    }).catch(err => setEvents({ state: "error", content: String(err) }))
}
```
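And for completeness, a sketch of how the Result can be rendered, assuming the `Event` and `Result` types above are in scope; the component name and markup are just placeholders:

```tsx
// Placeholder component showing one way to render each Result state
const EventList = ({ events }: { events: Result<Event[], string> }) => {
    switch (events.state) {
        case "unsent":
        case "loading":
            return <p>Loading events...</p>
        case "error":
            return <p>Something went wrong: {events.content}</p>
        case "ready":
            return (
                <ul>
                    {events.content.map(event => (
                        <li key={`${event.area_name}-${event.start}`}>
                            {event.area_name}: stage {event.stage} ({event.start} to {event.finsh})
                        </li>
                    ))}
                </ul>
            )
    }
}
```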