Be conscious of having enough threads/processes/pods to maintain availability if your application blocks while serving each HTTP request (e.g. Django). A horizontal pod autoscaler adds pod startup latency before new capacity comes online, and with a high-traffic application I had much better availability running uwsgi and the application together in each pod (same container), with a separate nginx pod doing the reverse proxying and queueing requests when all the uwsgi workers were busy.
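As a rough sketch of that layout (names like myapp, the image, and the /healthz path are made up for illustration; tune worker counts and the listen backlog to your own workload):

```yaml
# One pod = several blocking uwsgi workers, not one process per pod.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-uwsgi
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: uwsgi
        image: registry.example.com/myapp:latest   # assumption: image bundles Django + uwsgi
        command:
        - uwsgi
        - --http=0.0.0.0:8000
        - --module=myapp.wsgi:application
        - --processes=8     # multiple blocking workers per container
        - --threads=2
        - --listen=128      # backlog holds requests while all workers are busy
        ports:
        - containerPort: 8000
        readinessProbe:     # keep the pod out of rotation until uwsgi can answer
          httpGet:
            path: /healthz  # assumption: the app exposes a health endpoint here
            port: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-uwsgi
spec:
  selector:
    app: myapp
  ports:
  - port: 8000
    targetPort: 8000
```

The separate nginx pod then just does proxy_pass to the myapp-uwsgi Service, so it can buffer/queue incoming requests in front of the workers and keep serving while the autoscaler spins up more pods.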
YMMV, but at the end of the day availability matters more than sticking to the single-process-per-pod rule of thumb. Just understand the downsides, such as less isolation between the processes within the same container. Logs are collected per container, so the built-in kubectl logs functionality won't separate the output of anything running in the same container.
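Concretely, with the hypothetical myapp-uwsgi deployment above, all the uwsgi workers write to one stream and you only get per-pod, per-container granularity:

```sh
# Output from every uwsgi worker in the container is interleaved in one stream.
kubectl logs deploy/myapp-uwsgi
kubectl logs myapp-uwsgi-<pod-suffix> -f   # follow a single pod; still no per-worker view
```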