Virtualization is in no way mandatory for cloud computing, though it is certainly very common. There are several cloud providers that offer non-virtualized resources. This is commonly referred to as a bare-metal cloud.
For example, SoftLayer offers a bare-metal cloud.
Bare-metal clouds are therefore 'closer' to traditional data center hosting, though you typically still get an API that allows you to provision resources.
Bare-metal clouds will generally offer better performance than a similarly sized virtual resource, since they do not carry the virtualization overhead.
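Since provisioning on a bare-metal cloud still typically happens through an API, here is a rough sketch of what that might look like. The endpoint, payload fields, and token are all hypothetical placeholders, not any particular provider's API (SoftLayer, for instance, has its own API and SDKs with different names):

```python
import requests

# Hypothetical bare-metal provisioning call -- the endpoint, fields and
# token below are placeholders, not a real provider's API.
API_URL = "https://api.example-baremetal-cloud.com/v1/servers"
API_TOKEN = "YOUR_API_TOKEN"

payload = {
    "hostname": "db-node-01",
    "datacenter": "fra02",        # pick a location close to your users
    "cpu_model": "xeon-e5-2690",  # you get the whole physical machine
    "ram_gb": 128,
    "disks": ["2x960GB-SSD"],
    "os": "ubuntu-22.04",
}

response = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=30,
)
response.raise_for_status()
print("Provisioned server:", response.json())
```

The point is simply that the workflow looks cloud-like (call an API, get a machine), but what you get back is an entire physical server rather than a slice of one.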
As for your second question, that depends on how you define 'traditional'.
If you mean running software in-house vs. in a remote 'cloud' data center, the obvious issues are latency and performance. Choosing your cloud provider carefully (distance to the data center, variety of compute resource types, etc.) can mitigate this to a large degree. From the user's perspective, the main issue is the internet connection: limited bandwidth can have a pretty detrimental effect on the user experience of most applications. Also, some applications have very strict requirements about where data can be stored, so the question of running these applications in the cloud is also a question of WHERE in the cloud they run.
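One cheap way to sanity-check the latency point before committing to a provider is to measure round-trip times from where your users sit to the candidate data centers. A minimal sketch, assuming hypothetical endpoint hostnames (substitute whatever test endpoints your provider actually publishes):

```python
import socket
import time

# Hypothetical hostnames for candidate data centers -- replace with the
# regions/endpoints your provider actually publishes.
CANDIDATES = {
    "us-east": "speedtest.us-east.example.com",
    "eu-central": "speedtest.eu-central.example.com",
    "ap-south": "speedtest.ap-south.example.com",
}

def tcp_rtt(host: str, port: int = 443, samples: int = 5) -> float:
    """Average time (ms) to open a TCP connection to host:port."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass
        times.append((time.perf_counter() - start) * 1000)
    return sum(times) / len(times)

for region, host in CANDIDATES.items():
    try:
        print(f"{region}: ~{tcp_rtt(host):.1f} ms")
    except OSError as err:
        print(f"{region}: unreachable ({err})")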
Other than that, software running in a cloud, whether virtualized or bare-metal, will look pretty much the same as software running in a traditional data center.