
I have a Django website where I can register event listeners and monitoring tasks for certain websites, see info about these tasks, edit them, delete them, etc. These tasks are long-running, so I launch them as tasks in an asyncio event loop. I want them to be independent of the Django website, so I run these tasks in an event loop alongside a Sanic webserver and control them with API calls from the Django server. I don't know why, but I still feel that this solution is pretty scuffed, so is there a better way to do it? I was thinking about using Kubernetes, but these tasks aren't resource-heavy and are simple, so I don't think it's worth launching a new pod for each. Thanks for the help.

Lokils
    Does this answer your question? [Long running tasks with Django](https://stackoverflow.com/questions/8011967/long-running-tasks-with-django) – allexiusw Aug 07 '21 at 18:44

1 Answer


Ideally, you would launch a new pod for each new event or job.

You can use a CronJob in Kubernetes so that finished jobs are automatically cleaned up when the work is done.
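A minimal sketch of such a manifest (the name, schedule, and image are placeholders, not taken from the question; `ttlSecondsAfterFinished` is what makes finished Jobs get garbage-collected automatically):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: monitor-task                  # hypothetical name
spec:
  schedule: "*/5 * * * *"             # run every five minutes
  successfulJobsHistoryLimit: 1       # keep at most one finished Job around
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      ttlSecondsAfterFinished: 60     # delete the Job 60s after it completes
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: task
              image: registry.example.com/monitor-task:latest  # hypothetical image
```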

It's always better to keep separate, small microservices than to run the whole monolith application inside a container.

On the management side, starting a new pod per task is also easier to manage, and it is cost-efficient if you scale your cluster up and down according to resource requirements.

You can also use a message broker with a listener that subscribes to a channel in the broker and performs the async task or event, if any. The listener runs as a separate pod.
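The broker/listener pattern can be sketched in asyncio like this. This is an illustration, not a real broker integration: `asyncio.Queue` stands in for the broker channel (in production you would publish to Redis, RabbitMQ, etc., and the listener pod would consume from there), and the "handled" bookkeeping is placeholder logic.

```python
import asyncio


async def listener(queue: asyncio.Queue, results: list) -> None:
    """Consume events from the broker channel and run each task."""
    while True:
        event = await queue.get()
        if event is None:              # sentinel: shut the listener down
            queue.task_done()
            break
        # Perform the async work for this event (placeholder logic).
        results.append(f"handled {event}")
        queue.task_done()


async def main() -> list:
    queue: asyncio.Queue = asyncio.Queue()
    results: list = []
    consumer = asyncio.create_task(listener(queue, results))
    # The Django side would publish events like these via an API call.
    for event in ("start-bot:alice", "start-bot:bob"):
        await queue.put(event)
    await queue.put(None)              # tell the listener to stop
    await consumer
    return results


if __name__ == "__main__":
    print(asyncio.run(main()))
```

Because the listener is just a consumer on a channel, you can run several listener pods against the same queue and scale them independently of the publisher.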

Harsh Manvar
  • Thanks for the answer! How big is the difference in resources needed if I run a thousand tasks in one event loop versus a thousand pods with one task each? – Lokils Aug 07 '21 at 18:28
  • Definitely big; running a thousand pods with one task each is not a good idea. – Harsh Manvar Aug 07 '21 at 18:30
  • So should I launch a new pod for every N tasks? – Lokils Aug 07 '21 at 18:35
  • I am not sure what you are doing in the tasks or what they look like; ideally you should run a single process per container, and maybe perform the thousand tasks inside a single pod. However, this might be helpful: https://github.com/coleifer/huey – Harsh Manvar Aug 07 '21 at 18:37
  • An example of a task is a Twitch bot that listens on a websocket, is connected to a few channels, and on certain events writes data to the database. It stops listening when the user deactivates it via the Django website. – Lokils Aug 07 '21 at 18:41
  • So you mean there will be multiple bots, one per user? If there are 1 million users, then 1 million tasks or bots? – Harsh Manvar Aug 07 '21 at 18:44
  • One user can launch one bot, one bot can join multiple channels, and one bot is one task. – Lokils Aug 07 '21 at 18:47
  • What I have used so far for the same kind of scenario: the user connects to one main core service over websocket, that core service publishes events to a queue, and the queue listener scales up and down based on the number of messages or tasks in the queue. – Harsh Manvar Aug 07 '21 at 18:54
  • So it's the same single websocket service for everyone, and through it they launch bots? And how do you run those bots? – Lokils Aug 07 '21 at 19:05
  • Yes, a single websocket service; running the bots is a kind of global implementation based on the tasks or payload in the event. The response is forwarded to the user, and the background processing happens through drill-down microservice calls. – Harsh Manvar Aug 07 '21 at 19:11
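The lifecycle discussed in the comments — one long-lived asyncio task per user-launched bot, started and stopped from the Django site — can be sketched as below. `BotManager` and its method names are illustrative, not a real library; the `_run` body is a placeholder for the actual websocket listening and database writes.

```python
import asyncio


class BotManager:
    """Tracks one long-running asyncio task per user-launched bot."""

    def __init__(self) -> None:
        self._bots: dict[str, asyncio.Task] = {}

    def start(self, user: str) -> None:
        if user in self._bots:
            return                      # one user, one bot
        self._bots[user] = asyncio.create_task(self._run(user))

    async def stop(self, user: str) -> None:
        """Called when the user deactivates the bot via the website."""
        task = self._bots.pop(user, None)
        if task is not None:
            task.cancel()
            try:
                await task
            except asyncio.CancelledError:
                pass

    async def _run(self, user: str) -> None:
        # Placeholder for the real bot: listen on a websocket and write
        # events to the database until the task is cancelled.
        while True:
            await asyncio.sleep(3600)

    @property
    def active(self) -> list[str]:
        return sorted(self._bots)


async def demo() -> list[str]:
    mgr = BotManager()
    mgr.start("alice")
    mgr.start("bob")
    await mgr.stop("alice")             # "alice" deactivates her bot
    return mgr.active


if __name__ == "__main__":
    print(asyncio.run(demo()))
```

The Django API handlers would map onto `start` and `stop`; whether the manager runs inside one process or is sharded across several pods is then a deployment decision, not a code change.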