There are several alternatives for dealing with this situation, depending on the level of collaboration you can get from the third party vendor:
Contract testing:
Pact is a mature framework that lets you write unit tests that make HTTP requests against a mock of the third party provider service; the requests and the responses you expect get persisted into a document (the Pact file) that can be shared with the vendor. If the vendor also uses Pact, they can run the Pact file against the real provider service. In short, the consumer (your service) documents which provider endpoints it uses and what responses it expects, and the provider verifies those expectations against itself, ensuring the integration holds.
For this approach to work, your third party vendor has to be willing to fetch your Pact file and run your consumer contract tests against their service. Contract testing lets your tests run completely independently of the provider service, as your service and the provider's are never connected while testing. A minimal consumer-side sketch is shown below.
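As a rough illustration, this is what a consumer-side test could look like in Python with pact-python; the service names, endpoint and response body are made up for the example:

    import atexit
    import requests
    from pact import Consumer, Provider

    # Start a mock provider that records the interactions into a Pact file.
    pact = Consumer("MyService").has_pact_with(Provider("VendorAPI"), port=1234)
    pact.start_service()
    atexit.register(pact.stop_service)

    def test_get_order_status():
        expected = {"id": "42", "status": "shipped"}

        (pact
         .given("order 42 exists")                      # provider state
         .upon_receiving("a request for order 42")
         .with_request("GET", "/orders/42")
         .will_respond_with(200, body=expected))

        with pact:
            # The request goes to the Pact mock service, never to the real vendor.
            response = requests.get("http://localhost:1234/orders/42")

        assert response.json() == expected

On exit of the "with pact" block, the interaction is verified and written to the Pact file, which is the artefact you would hand over (or publish to a Pact Broker) for the vendor to verify.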
Record and replay:
The idea behind this approach is to write tests that, in their initial run, make requests against real services and record their responses. The next runs of these tests will not reach real services but instead operate against the recorded responses from the first run. VCR is a good example of a library that enables this kind of testing.
This approach doesn't require any cooperation from the third party vendor. You would still make requests to the real service from time to time (to re-record and keep your sample responses fresh), and those runs are subject to the availability of the provider service. A short example follows below.
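As a sketch, here is a recorded test in Python using vcrpy (a port of VCR); the vendor URL and cassette path are invented for the example:

    import requests
    import vcr

    # On the first run this test hits the real vendor API and records the HTTP
    # exchange into the cassette file; on later runs the response is replayed
    # from the cassette and no network call is made.
    @vcr.use_cassette("fixtures/cassettes/order_status.yaml", record_mode="once")
    def test_order_status():
        response = requests.get("https://api.vendor.example.com/orders/42")

        assert response.status_code == 200
        assert "status" in response.json()

Deleting the cassette file (or switching the record mode) forces a fresh recording, which is how you would periodically refresh the sample responses.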
Test environment:
As you mentioned in your question, asking the provider for a test environment/account is also a possibility. With this resource you could write end-to-end tests that hit a realistic provider service, and you would also have access to the environment itself to make assertions on its state as part of your tests (see the sketch after the next paragraph).
The challenge with this approach is maintaining that test environment: how can you be sure it runs the same version you integrate against in production? Who looks after its availability? Who creates data in it? Is that data realistic and representative of the real world?
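A sketch of such an end-to-end test, assuming the vendor gives you a sandbox base URL and an API key (the endpoints and environment variable names here are hypothetical):

    import os
    import requests

    # Hypothetical sandbox credentials supplied by the vendor.
    BASE_URL = os.environ.get("VENDOR_SANDBOX_URL", "https://sandbox.vendor.example.com")
    HEADERS = {"Authorization": f"Bearer {os.environ['VENDOR_SANDBOX_API_KEY']}"}

    def test_create_order_end_to_end():
        # Exercise the real (sandbox) provider service...
        created = requests.post(
            f"{BASE_URL}/orders",
            json={"sku": "ABC-1", "quantity": 1},
            headers=HEADERS,
            timeout=10,
        )
        assert created.status_code == 201
        order_id = created.json()["id"]

        # ...and assert on the environment's state by reading the order back.
        fetched = requests.get(f"{BASE_URL}/orders/{order_id}", headers=HEADERS, timeout=10)
        assert fetched.status_code == 200
        assert fetched.json()["status"] == "created"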
Semantic monitoring:
A final option would be to write a test that sanity-checks the integration between your service and the provider's in the production environment. This test could run after every deployment on your end, or even on a regular schedule outside of deployment windows.
This approach doesn't require any collaboration from the third party vendor, but it doesn't scale very well if you have a lot of integration use cases: these tests tend to be slow to run, flaky (they depend on the availability of real networks and systems) and they pollute real environments. It's better to keep them at the very top of the testing pyramid, focused on a few critical use cases. Additionally, you normally won't be able to test anything beyond happy paths, as you have no control over the provider service to put it into any specific state beyond the "normal" one. A minimal sanity check could look like the sketch below.
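As an illustration, a post-deployment smoke test might be nothing more than a read-only call through your own production service that exercises the vendor integration; the endpoint and response field used here are invented:

    import os
    import requests

    # Base URL of your own production service (not the vendor's).
    BASE_URL = os.environ["PROD_BASE_URL"]

    def test_vendor_integration_smoke():
        # Read-only, happy-path check that the integration with the vendor is alive.
        # Hypothetical endpoint that proxies a harmless call to the vendor.
        response = requests.get(f"{BASE_URL}/integrations/vendor/status", timeout=10)

        assert response.status_code == 200
        assert response.json().get("vendor_reachable") is True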