
I am trying to create a clustered cache service for Cloud Foundry. I understand that I need to implement the Service Broker API. However, I want this service to be clustered and to run in the Cloud Foundry environment. As you know, container-to-container (TCP) connections are not supported yet, and I don't want to host my backend in another environment.

Basically my question is almost the same as this one: http://grokbase.com/t/cloudfoundry.org/vcap-dev/142mvn6y2f/distributed-caches-how-to-make-it-work-multicast

And I am trying to achieve the solution he advised:

B) is to create a CF Service by implementing the Service Broker API, as some of the examples show at the bottom of this doc page [1]. Services have no inherent network restrictions, so you could have a CF Caching Service that uses multicast in the cluster; then you would have local cache clients on your apps that could connect to this cluster using outbound protocols like TCP.

First of all, where does this service live? In the DEA? Will the backend implementation be in the broker itself? How can I implement the backend so the cluster can scale? Do I just start the same service broker over again?

Second, and another really important question: how do other services work if TCP connections are not allowed for apps? For example, how does a MySQL service communicate with the app?

Murat Ayan

2 Answers


There are a few different ways to solve this; the more robust the solution, the more complicated it gets.

The simplest solution is to have a fixed number of backend cache servers, each with their own distinct route, and let your client applications implement (HTTP) multicast to these routes at the application layer. If you want the backend cache servers to run as CF applications, then for now, all solutions will require something to perform the HTTP multicast logic at the application layer.
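To make that concrete, here is a rough sketch of application-layer "multicast" over HTTP against a fixed set of cache routes. The route names and the /cache/<key> endpoint are invented for the example; this isn't tied to any particular cache server.

```python
import requests

# Hypothetical routes of a fixed set of backend cache servers.
CACHE_ROUTES = [
    "https://cache-0.example.com",
    "https://cache-1.example.com",
    "https://cache-2.example.com",
]

def put_cache(key, value):
    """Write the entry to every backend over HTTP (app-layer "multicast")."""
    for route in CACHE_ROUTES:
        try:
            requests.put(f"{route}/cache/{key}", json={"value": value}, timeout=2)
        except requests.RequestException:
            pass  # one node being down shouldn't block writes to the others

def get_cache(key):
    """Read from the first backend that answers."""
    for route in CACHE_ROUTES:
        try:
            resp = requests.get(f"{route}/cache/{key}", timeout=2)
            if resp.status_code == 200:
                return resp.json()["value"]
        except requests.RequestException:
            continue
    return None
```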

The next step would be to introduce an intermediate service broker, so that your client apps can all just bind to the one service to get the list of routes of the backend cache servers. So you would deploy the backends, then deploy your service broker API instances with the knowledge of the backends, and then when client apps bind they will get this information in the user-provided service metadata.
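At that point the client app would just read the backend routes out of VCAP_SERVICES after binding. A minimal sketch, assuming a made-up service name of clustered-cache and a cache_routes field in the binding credentials:

```python
import json
import os

def backend_routes(service_name="clustered-cache"):
    """Return the cache server routes the broker handed out at bind time.

    Assumes the broker puts a 'cache_routes' list in the binding credentials;
    both the service name and the field name are hypothetical.
    """
    vcap = json.loads(os.environ.get("VCAP_SERVICES", "{}"))
    for binding in vcap.get(service_name, []):
        return binding["credentials"]["cache_routes"]
    return []
```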

What happens when you want to scale the backends up or down? You can then get more sophisticated: have the backends register themselves with some sort of central metadata/config/discovery service, and have your client apps bind to this service and periodically query it for live updates of the cache server list.
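Sketched very roughly (the discovery URL and its /register and /backends endpoints are invented for the example), the register-and-poll loop might look like this:

```python
import threading
import time

import requests

DISCOVERY_URL = "https://cache-discovery.example.com"  # hypothetical discovery service

def register_backend(my_route, interval=30):
    """Backend side: periodically heartbeat our own route into the discovery service."""
    def loop():
        while True:
            try:
                requests.put(f"{DISCOVERY_URL}/register", json={"route": my_route}, timeout=2)
            except requests.RequestException:
                pass
            time.sleep(interval)
    threading.Thread(target=loop, daemon=True).start()

def watch_backends(on_update, interval=30):
    """Client side: periodically refresh the list of live cache server routes."""
    def loop():
        while True:
            try:
                on_update(requests.get(f"{DISCOVERY_URL}/backends", timeout=2).json())
            except requests.RequestException:
                pass
            time.sleep(interval)
    threading.Thread(target=loop, daemon=True).start()
```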

You could alternatively move the multicast logic into a single (clustered) service, so:

  • backend caches register with the config/metadata/discovery service
  • multicaster periodically queries the discovery service for list of cache server routes
  • client apps make requests to the multicaster service
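A multicaster along those lines could be a small HTTP app that keeps the route list fresh and fans writes out to every backend. Again just a sketch with invented endpoints, using Flask for brevity:

```python
from flask import Flask, request
import requests

app = Flask(__name__)
cache_routes = []  # refreshed periodically from the discovery service, as in the sketch above

@app.route("/cache/<key>", methods=["PUT"])
def fan_out(key):
    """Client apps call the multicaster once; it writes to every backend cache."""
    payload = request.get_json()
    for route in cache_routes:
        try:
            requests.put(f"{route}/cache/{key}", json=payload, timeout=2)
        except requests.RequestException:
            pass
    return "", 204
```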

One difficulty is in implementing the metadata service if you're doing it yourself. If you want it clustered, you need to implement a highly-available-ish, consistent-ish datastore, which is almost the original problem you're solving, except that this service handles replicating data to all nodes in the cluster, so you don't have to multicast.

Amit Kumar Gupta

You can look at https://github.com/cloudfoundry-samples/github-service-broker-ruby for an example service broker that runs as a CF application.
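For orientation, the broker itself only needs to answer a handful of HTTP endpoints from the v2 Service Broker API (catalog, provision, bind). A bare-bones sketch in Python/Flask, with all IDs and credentials made up, just to show the shape of it:

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/v2/catalog")
def catalog():
    # Advertise the cache service; IDs and names here are placeholders.
    return jsonify({"services": [{
        "id": "cache-service-id",
        "name": "clustered-cache",
        "description": "Clustered cache service",
        "bindable": True,
        "plans": [{"id": "default-plan-id", "name": "default", "description": "Default plan"}],
    }]})

@app.route("/v2/service_instances/<instance_id>", methods=["PUT"])
def provision(instance_id):
    # A real broker would create and track an instance here.
    return jsonify({}), 201

@app.route("/v2/service_instances/<instance_id>/service_bindings/<binding_id>", methods=["PUT"])
def bind(instance_id, binding_id):
    # Hand the app whatever it needs to reach the cache backends.
    return jsonify({"credentials": {"cache_routes": ["https://cache-0.example.com"]}}), 201
```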

  • Thanks for the answer, Tushar; that makes it clear that you can host your service in CF. There is another example with Spring Cloud here: https://github.com/spring-cloud-samples/fortune-teller. However, I am not interested in the broker implementation; I am interested in the backend, and this Ruby app uses the GitHub API as its backend implementation. Could this broker listen on TCP, for example? – Murat Ayan Dec 01 '15 at 12:55
  • It would depend on what kind of service instances you are running. Currently there is no way to do container-to-container networking, so binding the service instance will fail if you try to use any internal networking. All traffic to the service instance needs to be publicly routable. If your service instances run on port 80, you can have service instances running within CF. If you can elaborate on your use case, it will be easier to help out. – Tushar Dadlani Dec 04 '15 at 00:48