At my last company, LMAX, we treated security features like any other and automated acceptance tests for those features.
We wrote acceptance tests that interacted with the system through the same channels as any other user of the system and proved that the security provisions of the system worked as expected.
So one test would assert that if logon was successful, other features of the system were available. Another would assert that if logon was unsuccessful, you couldn't access any secured features - simple really.
The trick is to ensure that your acceptance tests interact with the system through the same communications channels as real users, or as close as you can get: no special tricks or back doors into the main logic of the application - and particularly none that allow you to bypass the security features ;-)
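A minimal sketch of that pair of logon tests, assuming a hypothetical `TradingSystem` stand-in for the application; in a real acceptance test the system would be a deployed instance driven through its public API or UI, not an in-process object:

```python
import unittest
import uuid

# Hypothetical stand-in for the system under test. In a real acceptance
# test this would be the deployed application, exercised through the same
# channel a real user would use.
class TradingSystem:
    def __init__(self):
        self._users = {"alice": "s3cret"}
        self._sessions = set()

    def logon(self, username, password):
        if self._users.get(username) == password:
            token = str(uuid.uuid4())
            self._sessions.add(token)
            return token
        return None

    def place_order(self, session_token, order):
        # Every secured feature checks the session - no back door.
        if session_token not in self._sessions:
            raise PermissionError("not logged on")
        return "accepted"

class LogonAcceptanceTest(unittest.TestCase):
    def test_successful_logon_grants_access_to_secured_features(self):
        system = TradingSystem()
        token = system.logon("alice", "s3cret")
        self.assertIsNotNone(token)
        self.assertEqual(system.place_order(token, {"qty": 1}), "accepted")

    def test_failed_logon_denies_access_to_secured_features(self):
        system = TradingSystem()
        token = system.logon("alice", "wrong-password")
        self.assertIsNone(token)
        with self.assertRaises(PermissionError):
            system.place_order(token, {"qty": 1})

if __name__ == "__main__":
    unittest.main()
```

The point is that both tests go through `logon` exactly as a user would; there is no test-only method that hands out a session token.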
Logon is a trivial example, but this approach is applicable to any user-level security feature - actually any feature.
There are other classes of security problem of course: checking for buffer overflows, SQL injection and so on. A lot of this is about architecting your application to be secure - clear separation of responsibilities by layering your application, for example.
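For SQL injection specifically, the architectural defence is to keep user input out of the SQL text entirely. A minimal sketch using Python's built-in `sqlite3` driver and parameterised queries:

```python
import sqlite3

# Parameterised queries bind user input as data, not code, so the
# classic "' OR '1'='1" trick matches nothing.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def count_matching_users(name, password):
    # The ? placeholders are bound by the driver; never build this
    # string with concatenation or formatting.
    row = conn.execute(
        "SELECT COUNT(*) FROM users WHERE name = ? AND password = ?",
        (name, password),
    ).fetchone()
    return row[0]

assert count_matching_users("alice", "s3cret") == 1
# An injection attempt is treated as a weird password string, nothing more.
assert count_matching_users("alice", "' OR '1'='1") == 0
```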
You can include tests for these classes of security requirement in your acceptance testing too, if appropriate to your application, or add an additional step in your deployment pipeline to run tests for these kinds of exposure. Which you choose depends on the nature of your application: I would probably add them to the acceptance tests for most apps, and take the dedicated pipeline-stage approach for apps where I could auto-generate test cases to attempt injections - e.g. crawling a web app for all of its input fields and trying to inject garbage into each.
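The auto-generation idea can be sketched with the standard-library `html.parser`: find every input field on a page and pair each with a small corpus of nasty payloads. The payload list and page here are illustrative; a real pipeline stage would crawl the deployed app and submit each case through HTTP:

```python
from html.parser import HTMLParser

# A tiny illustrative corpus; a real fuzzing stage would use a much
# larger set of injection and overflow payloads.
PAYLOADS = ["' OR '1'='1", "<script>alert(1)</script>", "A" * 10000]

class InputFieldFinder(HTMLParser):
    """Collects the name of every <input> and <textarea> on a page."""
    def __init__(self):
        super().__init__()
        self.fields = []

    def handle_starttag(self, tag, attrs):
        if tag in ("input", "textarea"):
            name = dict(attrs).get("name")
            if name:
                self.fields.append(name)

def generate_injection_cases(page_html):
    finder = InputFieldFinder()
    finder.feed(page_html)
    # One test case per (field, payload) pair.
    return [(field, payload) for field in finder.fields for payload in PAYLOADS]

page = """
<form action="/search">
  <input name="query">
  <textarea name="comment"></textarea>
</form>
"""
cases = generate_injection_cases(page)
assert len(cases) == 2 * len(PAYLOADS)
```

Each generated case would then be submitted to the app, asserting that the response neither errors in an unsafe way nor reflects the payload unescaped.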