8

I am wondering if there are clusters available to rent.

Scenario:

We have a program that we estimate will take a week to run (after optimization) on a given file - quite possibly longer. Unfortunately, we also need to process approximately 300+ different files, resulting in roughly 300 weeks of compute time (call it 6 wall-clock years of a continuously running job). For a research job that should be done - at the latest - by December, that's simply unacceptable. While we are exploring other options, I am investigating the option of simply renting a Beowulf cluster. The job is academic and will lead towards the completion of a PhD.

Ideally, we would send the source and the job files to a company and receive the result files a week or two later. Voila!

Quick googling doesn't turn up anything terribly promising.

Suggested Solutions?

Paul Nathan

10 Answers

11

Cloud computing sounds like what you need. Amazon, Microsoft and Google rent computing resources on a pay-for-what-you-use basis.

Amazon's service is the most mature, and there are already several questions about it, e.g. here and here.

RossFabricant
  • *calculates* 6 years of time ~ $42,000. Whoof. Guess I'm playing with the big boys now. – Paul Nathan Apr 21 '09 at 02:47
  • At that point, you're better off paying for your own rack in a data center and simply renting the hardware at $210 a month.. for one month. – Anthony Apr 21 '09 at 02:49
  • Not sure where you're getting $42k... looks like around $5k, assuming you can run 8 jobs in a week on the 8-core "High CPU XL" configuration (looks like 8 cores at ~2.5GHz each, or thereabouts); those are 80 cents an hour, so $5040 to get enough of them to do 300 1-week jobs. – kquinn Apr 21 '09 at 03:55
  • @Paul Nathan so how did this job turn out for you? What did you end up going with? I know this is several years later but I just came across this question in search of answers of my own :-) – Trekkie Jun 08 '16 at 17:24
  • @KeithL scavenging old machines and sharding the job out. Looking back, we should have just coughed up some money and set up some EC2 instances. We lost a lot of opportunity cost on this one. – Paul Nathan Jun 24 '16 at 18:35
6

Amazon EC2 (Elastic Compute Cloud) sounds like exactly what you're looking for. You can sign up for one or more virtual machines (up to 20 automatically, more if you request permission), starting at $0.10 an hour per VM, plus bandwidth costs (free between EC2 machines and Amazon's other web services). You can choose between several operating systems (various Linux distributions, OpenSolaris, Windows if you pay extra), and you can use pre-existing machine images or create your own. If you're using all open-source software and don't have high bandwidth costs, it sounds like it would cost you around $5000 to run your job (assuming that your 6 years of compute time was for something comparable to their small instances, with a single virtual CPU).
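
That figure is easy to sanity-check. Here is a minimal back-of-the-envelope sketch, using only the 2009-era prices quoted in this thread (treat every number as an assumption, not a current rate):

```python
# Rough cost check for the ~$5000 estimate above and the $5040 figure in the
# comments. All prices are the 2009-era EC2 rates quoted in this thread.
HOURS_PER_WEEK = 7 * 24                          # 168

# Small instances at $0.10/hour: 300 one-week, single-CPU jobs.
small_rate = 0.10                                # USD per instance-hour
small_cost = small_rate * 300 * HOURS_PER_WEEK
print(f"Small instances: ${small_cost:,.0f}")    # -> $5,040

# 8-core "High CPU XL" at $0.80/hour, running 8 jobs per instance per week.
xl_rate = 0.80                                   # USD per instance-hour
instance_weeks = 300 / 8                         # 37.5 instance-weeks
xl_cost = xl_rate * instance_weeks * HOURS_PER_WEEK
print(f"High-CPU XL: ${xl_cost:,.0f}")           # -> $5,040
```

Either way, the order of magnitude is thousands of dollars, not tens of thousands.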

Once you sign up for the service and get their tools set up, it's pretty easy to get new virtual machines launched. I've even spent the $0.10 to launch a machine for a few minutes just to verify an answer I was giving someone here on StackOverflow; I wanted to check something on Solaris, so I just booted up an instance and had a Solaris VM at my disposal within 5 minutes.
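
For a sense of what "launching a machine" looks like in practice, here is a minimal sketch using the modern boto3 SDK (which did not exist in 2009; back then you used Amazon's command-line API tools). The AMI ID, key pair name, and instance type below are placeholders, not recommendations:

```python
# Minimal sketch: launch a single on-demand EC2 instance with boto3.
# ImageId, KeyName, and InstanceType are placeholders -- substitute your own.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-00000000000000000",   # placeholder AMI (e.g. a stock Linux image)
    InstanceType="t3.micro",           # placeholder size; pick one sized for the job
    KeyName="my-keypair",              # placeholder SSH key pair name
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("Launched", instance_id)

# Billing runs until you terminate the instance:
# ec2.terminate_instances(InstanceIds=[instance_id])
```

To get many workers you would raise MaxCount or loop over this call; just remember that every instance keeps billing until it is terminated.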

Brian Campbell
5

I don't know where you are doing your PhD... but most Asian, European, and North American universities have clusters. You can

  • meet directly with the people at the lab that is in charge of the cluster.
  • ask your PhD director to arrange it. Maybe he/she has some contacts who can handle that.

Also, the classical trick is to use the unused time of the computers in your lab/university... Basically, each computer runs a client application that crunches numbers when the computer is not in use. See http://boinc.berkeley.edu/
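
If BOINC is heavier than you need for a one-off batch like this, the same idle-machine idea can be done far more crudely. A rough sketch, assuming passwordless SSH to the lab machines, input data reachable from each host, and a hypothetical `crunch` program installed everywhere (all of these are assumptions for illustration):

```python
# Crude sketch: shard ~300 input files across idle lab machines over SSH.
# Assumes passwordless SSH, input data visible on each host (e.g. via NFS),
# and a hypothetical "crunch" binary on every machine.
import subprocess
from itertools import cycle

hosts = ["lab-pc-01", "lab-pc-02", "lab-pc-03"]          # placeholder hostnames
input_files = [f"data/file_{i:03d}.dat" for i in range(300)]

# Deal the files out round-robin so each host gets a roughly equal share.
assignments = {h: [] for h in hosts}
for host, path in zip(cycle(hosts), input_files):
    assignments[host].append(path)

# Start one SSH session per host; each host works through its share
# serially while the other hosts run in parallel.
procs = []
for host, files in assignments.items():
    remote_cmd = " && ".join(f"crunch {f} > {f}.out" for f in files)
    procs.append(subprocess.Popen(["ssh", host, remote_cmd]))

for p in procs:        # wait for every host to finish
    p.wait()
```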

Monkey
  • This does not answer the question about *RENTING* a cluster. – Hejazzman Apr 21 '09 at 02:37
  • I saw that as "rent for free" :( – Monkey Apr 21 '09 at 02:39
  • Oh, rent for free would *not* be disregarded! I'm just making the "worst-case" assumption that most universities won't give us 6 compute years on their clusters for free. :-) – Paul Nathan Apr 21 '09 at 02:41
  • I think the problem with BOINC is that it can't guarantee him the computing time. In this case his project is very time-sensitive. – Web Apr 21 '09 at 02:41
  • @Paul: You won't know for sure until you ask! Maybe your university is underusing their cluster. Definitely check with them first. – gnovice Apr 21 '09 at 03:23
  • Yeah, do that... also, ask around within your department. Back in my undergraduate days, my research advisor had time allocated on the supercomputers every semester, yet *never used it*... just "saving it up" for if he needed it. Of course, that guy was... not the best boss in the world. But he'd have let you have his time this semester... just so he'd be able to claim he was using the cluster, and could apply again *next* semester to waste the time! – kquinn Apr 21 '09 at 03:48
  • I should add on that point: most universities have clusters of 100+ CPUs that are largely underused. In my case (Europe, France), I never had a problem finding 256 CPUs packed with lots of RAM. – Monkey Apr 21 '09 at 05:34
1

Or you could rent CPU time from a private provider.
I'm from Slovenia and, for example, here we have a great private provider called Arctur. The guys were helpful and responsive when I contacted them.

You can find them here: hpc.arctur.net

Sammy
1

There are several ways to get time on clusters.

  1. Purchase time on Amazon's Elastic Compute Cloud (EC2). Depending on how familiar you are with their service, it may take time to get it configured the way you want it.
  2. Approach a university and see if they have a commercial program to rent out time to companies. I know several do; one specific example is the private sector program at NCSA at UIUC. Depending on the institution, they may also offer porting and optimization services for your code.
powerrox
1

This lead may prove helpful:

http://lcic.org/vendors.html

And this is a fantastic resource site on the matter:

http://www.hpcwire.com

Anthony
1

The thread has been replete with pointers to Amazon's EC2 - and correctly so. They are the most mature in this area. Recently, they've released their Elastic MapReduce platform, which sounds similar (although not identical) to what you are trying to do. Google is not an option for you, as their platform doesn't support the generic compute model you need.

argodev
1
  • For academic/scientific use, there are several public centers offering HPC capability. In Europe, there is DEISA (http://www.deisa.eu/) and its member centers. There must be similar possibilities in the US, probably through the NSF.

  • For commercial use, check IBM Deep Computing On Demand offerings. http://www-03.ibm.com/systems/deepcomputing/cod/

PA.
0

Go to http://www.extremefactory.com/index.php. A true HPC cluster, up to 200 TFlops.

Cyril
0

One option is to rent the virtual equivalent of however many PCs you need and set them up as a cluster using the Amazon Elastic Compute Cloud.

Setting up a Beowulf cluster of those is entirely possible.

Check out this link, which provides resources and software to do exactly that.
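
As a very rough illustration of what "set them up as a cluster" can mean in the simplest case, here is a sketch that writes an Open MPI hostfile for a few running instances and launches a job across them. It assumes MPI and passwordless SSH are already set up on every node and that your program (`my_job` here, a placeholder) is MPI-aware; a real Beowulf setup involves considerably more than this:

```python
# Sketch: treat a handful of running EC2 instances as a minimal MPI cluster.
# Assumes Open MPI + passwordless SSH on each node; the IPs, slot counts, and
# the "my_job" binary are placeholders.
import subprocess

nodes = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]   # placeholder private IPs
slots_per_node = 8                                # assumed cores per instance

# Hostfile: one line per node, with how many MPI ranks it should accept.
with open("hostfile", "w") as f:
    for ip in nodes:
        f.write(f"{ip} slots={slots_per_node}\n")

# Launch the job across all nodes.
subprocess.run(
    ["mpirun", "--hostfile", "hostfile",
     "-np", str(len(nodes) * slots_per_node),
     "./my_job", "input_file.dat"],
    check=True,
)
```

For an embarrassingly parallel workload like the 300 independent files in the question, you could equally skip MPI altogether and just run one file per instance, as in the SSH sharding sketch earlier in the thread.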

Hejazzman