
I am using moviepy to insert text into different parts of a video in my Django project. Here is my code.

from moviepy.editor import VideoFileClip, TextClip, CompositeVideoClip
txt = TextClip('Hello', font="Tox-Typewriter")
video = VideoFileClip("videofile.mp4").subclip(0,31)
final_clip = CompositeVideoClip([video, txt]).set_duration(video.duration)
final_clip.write_videofile("media/{}.mp4".format('hello'),
  fps=24,threads=4,logger=None)
final_clip.close()

The video is written to a file in about 10 s and then shown in the browser. The issue arises with simultaneous requests to the server. Say 5 simultaneous requests come in: each response then takes about 50 s, instead of each responding in 10 s. It seems there is some resource shared by all these requests, and each one waits for another to release it, but I could not find where this happens. I have tried using 5 separate files, one per request, thinking the problem was all requests opening the same file, but that did not work. Please help me find a solution.

Sandeep Balagopal
  • Are you sure this isn't a server problem? If it is, which server are you using to serve this logic? – Alok Nayak Jun 11 '19 at 10:54
  • The problem is there when running on localhost too: I opened 5 tabs and hit the same page simultaneously, and the issue appeared there as well. On the server I am using nginx with uWSGI. I have increased processes and threads in uWSGI but no luck. – Sandeep Balagopal Jun 11 '19 at 11:02
  • Use some kind of profiler - try line_profiler - or at least print()s to see on which line your code waits the most. That'd be your investigation's starting point. – altunyurt Jun 21 '19 at 07:45

1 Answer


Without knowing more about your application setup, any answer to this question will really be a shot in the dark.

As you know, editing or otherwise modifying video is resource intensive. In this case you are much better off offloading the processing to a dedicated task runner (Celery, Django-Q). Not only does this avoid holding server resources open until the task completes, it also means you can push the "work" to machines better suited for the job (optimized for IO-bound or CPU-bound work, depending on the use case).
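As a rough illustration of the task-runner approach, here is a minimal Celery sketch. All names here (the `render_video` task, the Redis broker URL, the file paths) are illustrative assumptions, not taken from the question; treat this as a starting point, not a drop-in solution.

```python
# tasks.py -- hypothetical sketch of moving the moviepy render into a
# Celery task so the web process is not blocked while the video renders.
from celery import Celery

# Broker/backend URLs are assumptions; use whatever broker you deploy.
app = Celery("video_tasks",
             broker="redis://localhost:6379/0",
             backend="redis://localhost:6379/0")

@app.task
def render_video(source_path, text, out_path):
    # Import inside the task so the web process never loads moviepy.
    from moviepy.editor import VideoFileClip, TextClip, CompositeVideoClip
    txt = TextClip(text, font="Tox-Typewriter")
    video = VideoFileClip(source_path).subclip(0, 31)
    final = CompositeVideoClip([video, txt]).set_duration(video.duration)
    final.write_videofile(out_path, fps=24, threads=4, logger=None)
    final.close()
    return out_path
```

The view would then call `render_video.delay(source, text, out)` and return a task id immediately; the browser can poll a status endpoint (via the task's `AsyncResult`) until the file is ready, rather than holding the request open for the whole render.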

In development, if you are running the local development server, you will only be using one process. One process, when sent multiple intensive requests, will get blocked. You could look at using something like gunicorn or waitress and set the number of worker processes to more than 1.
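To see why extra processes help where extra threads may not, here is a small stdlib-only sketch (no moviepy; the "render" is a stand-in CPU-bound loop). Run one at a time, the jobs serialize, which mirrors the 5-requests-take-50s symptom; spread across worker processes, they can run concurrently.

```python
import time
from concurrent.futures import ProcessPoolExecutor

def fake_render(n):
    # Stand-in for a CPU-bound video render (purely illustrative).
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    jobs = [2_000_000] * 4

    # One process handling requests back to back: total time ~= 4x one job.
    start = time.perf_counter()
    serial = [fake_render(n) for n in jobs]
    serial_time = time.perf_counter() - start

    # Several worker processes: jobs run concurrently on separate cores.
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=4) as pool:
        parallel = list(pool.map(fake_render, jobs))
    parallel_time = time.perf_counter() - start

    assert serial == parallel
    print(f"serial: {serial_time:.2f}s, 4 processes: {parallel_time:.2f}s")
```

The same reasoning applies to uWSGI/gunicorn: because this work is CPU-bound, adding threads within one process buys little (Python's GIL keeps them serialized), while adding processes does.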

But still, at some point you are going to have to offload this work to a task runner; doing it inside the request in a production environment could end up over-consuming web server resources.

On a more technical note,

  • Have you looked at this issue on GitHub:

https://github.com/Zulko/moviepy/issues/645

They talk about passing in the parameter `progress_bar=False`. If in your use case you are writing several files and each one is drawing a progress bar, you might be getting IO-swamped.

  • Also, consider running a profiler while replicating the issue; it might give you better insight into where the bottleneck is (IO or CPU).
Matt Seymour
  • Thanks for the response, Matt. I have deployed it on a server with uWSGI and nginx and gave it more processes and threads, but no luck. I also looked into progress_bar, but the latest version no longer has it, so I have disabled moviepy's logging instead. What I am trying now is to create a Lambda function and run this particular process there. Do you think that will work? I read that Lambda can scale compute. – Sandeep Balagopal Jun 14 '19 at 13:13
  • How can I use Celery, since I have to return the result within a single request? Celery is for background tasks, right? If I pass the task to a Celery queue, the result won't come back to my request. That is my understanding of Celery. – Sandeep Balagopal Jun 14 '19 at 13:16