Running your tests against 3rd party services can result in slow and flaky tests when the service is down or when network latency triggers testing timeouts. You also run the risk of hitting API rate limits, depending on the service. Your tests should ideally be deterministic: they shouldn't fail randomly, and they shouldn't need conditional logic to handle errors within a particular test case. If you expect to need to handle errors, there should be a specific test case covering those errors that runs in every build, rather than waiting for non-deterministic failures to come in from the 3rd party.
One argument people will make is that your tests should notify you if the 3rd party API breaks for one reason or another. Generally speaking, though, most major 3rd party APIs are extremely stable and unlikely to make breaking changes. Even if one does, a failing test suite is an awkward and confusing way to find out, and in all likelihood your tests aren't going to be the first place you hear it from; more likely your customers and your production error tracker will notify you. If you want to track when these services change or go down, it makes more sense to have a regular production check of some sort to verify it, as sketched below.
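That kind of check could be as simple as the following, run on a schedule (cron, a nightly CI job, etc.) rather than inside your test suite. This is a minimal sketch: the status URL and the `notify_team` helper are placeholders for whatever service and alerting you actually use.

```ruby
# scheduled_api_check.rb -- run on a schedule, outside the test suite.
require "net/http"
require "uri"

def notify_team(message)
  # Placeholder: wire this up to Slack, PagerDuty, email, etc.
  warn message
end

# Hypothetical status endpoint for the 3rd party service you depend on.
uri = URI("https://api.example.com/status")

begin
  response = Net::HTTP.get_response(uri)
  unless response.is_a?(Net::HTTPSuccess)
    notify_team("3rd party API check failed: HTTP #{response.code}")
  end
rescue SocketError, Net::OpenTimeout => e
  notify_team("3rd party API unreachable: #{e.message}")
end
```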
As for how to write tests around these situations, that's a little trickier. There are tools such as VCR in Ruby that hook into your language's HTTP layer and let you stub out, record, and customize responses (there's a list of similar implementations in other languages further down in its readme). That doesn't help when your browser connects to those resources in automated end-to-end tests, though. There are tools that proxy your browser's web connection, such as Puffing Billy in Ruby, but it's a pretty involved setup, including managing security certificates, and it seems brittle and hard to debug when something isn't working quite right.
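To give a feel for the record-and-replay approach, here's a minimal VCR setup; the cassette name and the GitHub URL are just placeholder examples:

```ruby
require "vcr"
require "net/http"
require "uri"

VCR.configure do |config|
  config.cassette_library_dir = "spec/cassettes"
  config.hook_into :webmock # intercept HTTP requests via the webmock gem
end

# The first run records the real response to a YAML "cassette" file;
# subsequent runs replay it without touching the network.
VCR.use_cassette("github_issues") do
  response = Net::HTTP.get_response(URI("https://api.github.com/repos/rails/rails/issues"))
  # ...assert against response as usual...
end
```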
Your best bet for writing tests that are deterministic and maintainable may be to fake out the service in test mode. thoughtbot has a pretty decent video on this, and here's a high-level article from CircleCI. Essentially, you swap in an adapter in test mode that stands in for your 3rd party service integration (a sketch of this swap follows below). On your local machine, you could make it possible to optionally use either the real service or the adapter via an environment variable, in order to verify that the tests run the same against both. You could also set up a daily build that runs against the real thing, verifying that the tests still pass without introducing flakiness into your more frequent builds. One issue I've run into, though, is that even if I set up a test account on the 3rd party service, its state changes over time as I add or modify data for the sake of testing new functionality: adding new repos, modifying issues, etc. Maintaining that test account effectively means maintaining a set of fixtures for all of your tests, and it requires ongoing consideration.
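Here's a minimal sketch of that adapter swap, assuming a hypothetical issue-tracker integration; every class, URL, and environment variable name here is made up for illustration:

```ruby
require "net/http"
require "uri"
require "json"

# Real adapter: talks to the actual 3rd party API over HTTP.
class RealIssueTracker
  def initialize(token:)
    @token = token
  end

  def issues(repo)
    uri = URI("https://api.example.com/repos/#{repo}/issues")
    request = Net::HTTP::Get.new(uri, "Authorization" => "Bearer #{@token}")
    response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
      http.request(request)
    end
    JSON.parse(response.body)
  end
end

# Fake adapter: deterministic, no network, same public interface.
class FakeIssueTracker
  def issues(_repo)
    [{ "number" => 1, "title" => "Stubbed issue", "state" => "open" }]
  end
end

# Pick the adapter once, at boot, based on an environment variable.
ISSUE_TRACKER =
  if ENV["USE_REAL_ISSUE_TRACKER"] == "true"
    RealIssueTracker.new(token: ENV.fetch("ISSUE_TRACKER_TOKEN"))
  else
    FakeIssueTracker.new
  end
```

Because both adapters expose the same `issues` method, the rest of the application never needs to know which one it's talking to.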
One additional tool I've come across that may be helpful is the likes of ngrok-tunnel (Ruby again). This is only relevant in cases where you need the 3rd party service to contact your app, since it can't send requests across the web to localhost:3000. If you've configured some sort of webhooks, services like this can make testing a lot more straightforward.
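As a rough sketch of how that might look in a test (based on my reading of the ngrok-tunnel readme; the exact method names may differ between versions), you'd open a tunnel to your locally running server and hand the public URL to the 3rd party as the webhook target:

```ruby
require "ngrok/tunnel"

# Open a public tunnel to the locally running app. The port is an
# assumption; use whatever your test server actually binds to.
Ngrok::Tunnel.start(port: 3000)

# register_webhook is a hypothetical helper that tells the 3rd party
# service where to deliver webhook requests.
register_webhook("#{Ngrok::Tunnel.ngrok_url_https}/webhooks/payments")

# ...trigger the event on the 3rd party service, then assert that
# your app handled the incoming webhook...

Ngrok::Tunnel.stop
```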