
I have recently started working with Docker, Kubernetes (K8s) and Argo. I am currently creating two containerized applications and linking them so that they can run on Argo. The two containerized applications are as follows:

  1. ReadDataFromAFile: This container holds the code that receives a URL/file containing some random names. It separates out all those names and returns an array/list of names.

  2. PrintData: This container accepts the list of names and prints them out, with some business logic involved.

I am currently not able to understand how to:

  1. Pass the text/file to the ReadDataFromAFile container.
  2. Pass the processed array of names from the first container to the second container.

I have to write an Argo Workflow that regularly performs these steps!

Manan Kapoor
  • The two easiest approaches to both things are to pass the data in the body of HTTP POST requests, or to set up a message queue like RabbitMQ and pass the data along that way. – David Maze Mar 09 '22 at 11:06
  • Hi Manan, do you need to store processed/output files as artifacts? – mozello Mar 09 '22 at 11:58
  • Hi mozello, I want to provide application arguments to the Main(string[] args) method, perform some calculation, and then pass the result (text/string) to the next template as input. Hope this clears things up! – Manan Kapoor Mar 09 '22 at 12:05

1 Answer


Posting this as a Community wiki answer for better visibility, with a general solution. Feel free to expand it.


Since you don't need to store any artifacts, the best options for passing data between Kubernetes Pods are (as @David Maze mentioned in his comment):

1. Pass the data in the body of HTTP POST requests.

There is a good article with examples of HTTP POST requests here.

POST is an HTTP method designed to send data from an HTTP client to the server. The HTTP POST method requests that the web server accept the data enclosed in the body of the POST message.
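As a minimal sketch (assuming the PrintData container exposes a hypothetical HTTP endpoint behind an in-cluster Service, e.g. http://printdata:8080/names), the ReadDataFromAFile step could send the processed list like this:

```python
import requests

# Hypothetical in-cluster Service name and port for the PrintData container;
# adjust to whatever Service/endpoint you actually expose.
PRINTDATA_URL = "http://printdata:8080/names"

def send_names(names):
    """POST the processed list of names as JSON to the PrintData service."""
    response = requests.post(PRINTDATA_URL, json={"names": names}, timeout=10)
    response.raise_for_status()
    return response

if __name__ == "__main__":
    # Names produced by the ReadDataFromAFile step.
    send_names(["Alice", "Bob", "Charlie"])
```

On the PrintData side, a small HTTP server (Flask, ASP.NET, etc.) would read the "names" field from the request body and apply the business logic before printing.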

2. Use a message broker, for example, RabbitMQ.

RabbitMQ is the most widely deployed open source message broker. It supports multiple messaging protocols. RabbitMQ can be deployed in distributed and federated configurations to meet high-scale, high-availability requirements.

RabbitMQ provides a wide range of developer tools for most popular languages.

You can install RabbitMQ into a Kubernetes cluster using the Bitnami Helm chart.
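Once RabbitMQ is running, a rough sketch with the pika client (assuming the broker is reachable at the hypothetical in-cluster hostname rabbitmq and using a queue named "names") could look like this; in practice the producer code runs in the ReadDataFromAFile container and the consumer code in the PrintData container:

```python
import json
import pika

# Hypothetical in-cluster hostname of the RabbitMQ Service; adjust as needed.
connection = pika.BlockingConnection(pika.ConnectionParameters(host="rabbitmq"))
channel = connection.channel()
channel.queue_declare(queue="names")

# Producer side (ReadDataFromAFile): publish the processed list of names.
channel.basic_publish(exchange="", routing_key="names",
                      body=json.dumps(["Alice", "Bob", "Charlie"]))

# Consumer side (PrintData): read the names and apply the business logic.
def on_message(ch, method, properties, body):
    names = json.loads(body)
    for name in names:
        print(name)  # business logic would go here
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="names", on_message_callback=on_message)
channel.start_consuming()
```

The queue decouples the two containers, so the consumer does not need to be reachable at the exact moment the producer finishes.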

mozello