For most scenes, raytracing is inherently more work, and therefore slower, than rasterization-based methods such as standard OpenGL rendering.
For some applications, though, it is the other way around. For example, if you have a large number of spheres stored in a suitable spatial search structure, it will probably be faster to shoot rays through the scene than to rasterize all the spheres in the view frustum on top of each other.
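To make the per-ray work concrete: intersecting a ray with one sphere boils down to solving a quadratic. Here is a minimal sketch; the `Ray`/`Sphere` structs and the normalized-direction assumption are my own illustration, not from any SDK, written CUDA-style (`.cu` file) since that is where you would run it:

```cuda
// Illustrative types, not from any library. dir is assumed normalized.
struct Ray    { float3 origin, dir; };
struct Sphere { float3 center; float radius; };

// Returns the distance t along the ray to the nearest hit, or -1 on a miss.
__host__ __device__ float intersect(const Ray& r, const Sphere& s)
{
    float3 oc = make_float3(r.origin.x - s.center.x,
                            r.origin.y - s.center.y,
                            r.origin.z - s.center.z);
    // With |dir| == 1 the quadratic is t^2 + 2bt + c = 0.
    float b = oc.x * r.dir.x + oc.y * r.dir.y + oc.z * r.dir.z; // dot(oc, dir)
    float c = oc.x * oc.x + oc.y * oc.y + oc.z * oc.z - s.radius * s.radius;
    float disc = b * b - c;                // discriminant
    if (disc < 0.0f) return -1.0f;         // ray misses the sphere
    float t = -b - sqrtf(disc);            // nearer of the two roots
    return (t > 0.0f) ? t : -1.0f;         // hit behind the origin counts as a miss
}
```

A few dot products and one square root per ray-sphere pair; the search structure's job is simply to keep the number of pairs small.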
GPUs do support ray-tracing algorithms now, and have done so for several years. You usually have to implement the algorithms yourself; there is nothing like the pre-packaged rasterization pipeline that you get with OpenGL. I am sure nVidia has examples in their CUDA SDK. Many high-end effects in real-time 3D graphics trace a few rays even when they are not pure ray-tracing algorithms.
You can write a simple raytracer for a simple scene that runs in real time on a standard GPU without knowing much about how to optimize it.
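For a rough idea of what that looks like, here is a hedged sketch of a one-ray-per-pixel CUDA kernel built on the `Ray`/`Sphere`/`intersect()` definitions above. The camera setup, shading, and brute-force loop are all illustrative choices of mine, not from any SDK sample; for a large scene you would replace the loop with a traversal of your search structure:

```cuda
// One thread = one primary ray. Brute force: every ray tests every sphere.
__global__ void render(uchar4* pixels, int width, int height,
                       const Sphere* spheres, int numSpheres)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    // Pinhole camera at the origin looking down -z.
    float u = (2.0f * x / width - 1.0f) * ((float)width / height);
    float v = 1.0f - 2.0f * y / height;
    float invLen = rsqrtf(u * u + v * v + 1.0f);  // normalize the direction
    Ray ray;
    ray.origin = make_float3(0.0f, 0.0f, 0.0f);
    ray.dir    = make_float3(u * invLen, v * invLen, -invLen);

    // Find the closest hit among all spheres.
    float tBest = 1e30f;
    for (int i = 0; i < numSpheres; ++i) {
        float t = intersect(ray, spheres[i]);
        if (t > 0.0f && t < tBest) tBest = t;
    }

    // Shade by distance: nearer hits are brighter, misses are black.
    unsigned char shade = (tBest < 1e30f)
        ? (unsigned char)(255.0f / (1.0f + 0.1f * tBest)) : 0;
    pixels[y * width + x] = make_uchar4(shade, shade, shade, 255);
}
```

Launch it with one thread per pixel (say, 16x16 blocks covering the image) and blit the pixel buffer each frame; for a few hundred spheres even this brute-force version tends to hit interactive frame rates.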
Google for scientific papers on "real-time raytracing" for the more advanced methods.
Have a look at nVidia's CUDA SDK. Maybe AMD has examples as well.