
I have to process around 10-100 million records.

I have to hand the data over to the client when it's finished. The data is delivered as SQL statements to execute against his database. He has a powerful server running MySQL, so I think it will be fast enough on his side.

The issue is that my computer is not as powerful as his server, so I would like to use another SQL server that is compatible with MySQL (I export his database and import it on my machine) but performs better.

What should I use? Or am I doomed to use MySQL?
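
For illustration only, here is a rough sketch of the kind of script that produces the SQL file for the client (the table and column names are made up, not my real schema; batching rows into multi-row INSERT statements just keeps the file and the client's import reasonably fast):

```python
# Sketch only: write batched INSERT statements to a .sql file for the client.
# "records", the table name and the columns are placeholders, not the real schema.
def write_sql_file(records, path, batch_size=1000):
    with open(path, "w") as f:
        batch = []
        for user_id, score in records:      # whatever fields the real records have
            batch.append("(%d, %d)" % (user_id, score))
            if len(batch) == batch_size:
                f.write("INSERT INTO results (user_id, score) VALUES\n"
                        + ",\n".join(batch) + ";\n")
                batch = []
        if batch:                           # flush the last partial batch
            f.write("INSERT INTO results (user_id, score) VALUES\n"
                    + ",\n".join(batch) + ";\n")

# Try it with a million fake rows
write_sql_file(((i, i % 100) for i in range(1000000)), "export.sql")
```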

Dorian
  • How is this impacting you? Are you just annoyed at the time it takes, or is it actually making it impossible to test the system because other parts of it are time-dependent or whatever? I think you'll have to stick with MySQL, but if you're just testing the logic of the code then you can test with a smaller recordset, right? – Rob Moir Aug 28 '12 at 09:19
  • Why do you think MySQL is a problem for such a task? – John Gardeniers Aug 28 '12 at 10:31
  • Quantify "a lot of data". 1 Gigabyte? 100 Gigabytes? Several Terabytes? Also, as far as compatibility between databases is concerned, well, good luck. SQL is like BASIC: everyone claims to "know" it, but reality is that it's a zillion different dialects, each implemented a little differently, and none of which are 100% compatible with each other, beyond the few that are strict SQL standards adherents. Which is to say, none of them. – Avery Payne Sep 04 '12 at 23:02

2 Answers


If your client needs to use MySQL then that's what you're stuck with.

Obviously it will run slower on your computer than on the server, but that's not usually a big deal. If you need more capacity on a temporary basis you can always lease an Amazon EC2 instance for a few hours or whatever.
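
If the EC2 route appeals, launching and tearing down a temporary instance can be scripted; a minimal sketch with boto3, where the AMI ID, instance type and key name are placeholders you would replace with your own:

```python
# Sketch only: start a temporary EC2 instance for the heavy processing,
# then terminate it when done. AMI ID, instance type and key name are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.run_instances(
    ImageId="ami-xxxxxxxx",     # placeholder: a MySQL-ready AMI of your choice
    InstanceType="m5.xlarge",   # placeholder: size it for the dataset
    KeyName="my-key",           # placeholder SSH key pair name
    MinCount=1,
    MaxCount=1,
)
instance_id = resp["Instances"][0]["InstanceId"]
print("Started", instance_id)

# ... do the import and processing on the instance ...

ec2.terminate_instances(InstanceIds=[instance_id])
```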

Michael Hampton

I think it's a bad idea to use MySQL for big databases; better to go with PostgreSQL. Once a MySQL database grows beyond a few GB you may already run into issues, while PostgreSQL should have no trouble with databases of that size.

A simple database dump sometimes shows the difference: a while ago I had to dump a 6 GB MySQL database and had big trouble doing it, as it failed a couple of times, while with PostgreSQL I had no issues dumping a 130 GB database. The same goes for running queries, etc.
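
Whichever engine you end up on, with tens of millions of rows it also matters how you read them: streaming the result set instead of buffering it all in memory avoids a lot of pain. A minimal sketch with pymysql and its server-side cursor (connection details and the table name are placeholders):

```python
# Sketch only: stream a huge table row by row instead of buffering everything.
# Connection details and the table name are placeholders.
import pymysql
import pymysql.cursors

conn = pymysql.connect(
    host="localhost",
    user="user",
    password="secret",
    database="bigdb",
    cursorclass=pymysql.cursors.SSCursor,  # unbuffered, server-side cursor
)
try:
    with conn.cursor() as cur:
        cur.execute("SELECT id, payload FROM big_table")
        rows = 0
        for _row in cur:                   # rows arrive one at a time
            rows += 1                      # replace with the real per-row work
        print(rows, "rows processed")
finally:
    conn.close()
```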

Logic Wreck
  • **I think it's a bad idea to use MySQL for big databases; better to go with PostgreSQL. Once a MySQL database grows beyond a few GB you may already run into issues** As much as my favourite use for MySQL is as the punchline to a joke about "Baby's first database", I've got several multi-gigabyte MySQL datastores on my network and they seem fine. – Rob Moir Aug 30 '12 at 22:18
  • Try getting a 100 GB database into MySQL; I assure you you'll have issues, while with PostgreSQL it will all be fine. Even if you want to stay with the MySQL engine, better to use MariaDB, which is a fork of MySQL but behaves much better under heavy loads. I would really only use MySQL for low or medium loads with databases no bigger than 1 GB; for the rest I would use PostgreSQL, which is what I'm doing, by the way. – Logic Wreck Aug 31 '12 at 09:13
  • I agree with you about its scalability at that end of things, but I also think there's a big difference between over 100 GB and "a few GB". – Rob Moir Aug 31 '12 at 13:59
  • If it works better with 100 GB, how do you think it will work with a few GB? I think it's pretty logical. – Logic Wreck Aug 31 '12 at 15:19