
I have this scenario:

You have a factory process line that runs 24/7, and downtime is extremely expensive. The software controlling the different parts of the plant must use a shared form of database storage, mainly so that every part knows what state the factory is in. For example, some products can be mixed using the same set of equipment and others DEFINITELY cannot.

Requirements:

  • The software must be able to detect that an error in one part of the plant requires a machine shutdown more than 1 km away, so storing the data in the PLCs is not an option.
  • Updates and upgrades to the factory environment are frequent.
  • Load (in computer terms) will be really low.

The system handles a few hundred assignments a day; for each one, calculations and checks are done, followed by instructions sent to the factory machines. The system will be bored most of the time. The most important requirement is that the central computer system must be correct and always working.

I was thinking of using a Dynamo-based database (Riak or Cassandra) where data gets written to multiple machines, with each machine holding the whole database.

When one node goes down, it can go down unnoticed. A traditional SQL database might be more of a pain to upgrade when tables change, and master/slave replication is harder to configure.

What would be your solution?

The network has been made redundant, as have most other single points of failure. The database system is critical because downtime of the database means downtime for the entire plant, not just for one of the machines, which would be acceptable.

  • How do I solve the shared-state problem?
  • Complexity in the database will not be a problem. It will be more like a simple key-value store serving the most current and correct data.
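To make the shared-state requirement concrete, here is a minimal sketch of the kind of check the question describes, using SQLite purely for illustration. The table names, product names, and the compatibility rule are all invented; a real plant would have its own schema and rules.

```python
import sqlite3

# Hypothetical schema: one row per equipment line recording which product
# currently occupies it, plus a table of forbidden product combinations.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE line_state (line_id TEXT PRIMARY KEY, product TEXT NOT NULL)")
conn.execute("CREATE TABLE incompatible (product_a TEXT NOT NULL, product_b TEXT NOT NULL)")
conn.execute("INSERT INTO line_state VALUES ('mixer-1', 'solvent-X')")
conn.execute("INSERT INTO incompatible VALUES ('solvent-X', 'oxidizer-Y')")

def can_schedule(conn, line_id, new_product):
    """Return True only if the new product may share the line with
    whatever product is currently on it."""
    row = conn.execute(
        "SELECT product FROM line_state WHERE line_id = ?", (line_id,)
    ).fetchone()
    if row is None:
        return True  # line is idle, anything may start
    current = row[0]
    clash = conn.execute(
        """SELECT 1 FROM incompatible
           WHERE (product_a = ? AND product_b = ?)
              OR (product_a = ? AND product_b = ?)""",
        (current, new_product, new_product, current),
    ).fetchone()
    return clash is None

print(can_schedule(conn, "mixer-1", "oxidizer-Y"))  # False: forbidden mix
print(can_schedule(conn, "mixer-1", "solvent-X"))   # True
```

The point of the sketch is that this is a relational lookup with a hard correctness requirement, which is relevant to the SQL-vs-NoSQL discussion in the answers below.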
Charles
Stephan

3 Answers


I don't think this is an SQL/NoSQL question. All of Postgres, MySQL, and MS SQL Server have some kind of cluster or hot-standby option.

Configuration is a one-time thing, but any NoSQL option is going to give you headaches from top to bottom of your code if you are trying to do something fundamentally relational on a platform that has given up the relational model for the purposes of running things like Amazon or Facebook. The configuration happens once; the coding is forever.

So I would say stick with a tried and true solution and get that hot replication going.

This also provides a solution for upgrades. The typical sequence is to "fail over" to the standby, upgrade the master, flip back to the master, upgrade the standby, and resume. With details specific to the situation of course.
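As a concrete illustration of how little configuration hot standby needs in a modern PostgreSQL (12 or later), here is a minimal streaming-replication fragment. The hostname and user below are placeholders, and this is a sketch, not a complete production setup:

```ini
# On the primary (postgresql.conf) -- these are close to the defaults:
wal_level = replica
max_wal_senders = 5

# On the standby (postgresql.conf):
primary_conninfo = 'host=primary-db port=5432 user=replicator'
hot_standby = on
# ...plus an empty standby.signal file in the standby's data directory.

# Failover for the upgrade sequence described above: promote the standby
# with `pg_ctl promote` (or `SELECT pg_promote()`), upgrade the old
# primary, then reverse the roles.
```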

Ken Downs
  • How hard are nosql upgrades? Could you explain the headaches part? – Stephan Jan 21 '11 at 09:52
  • The headaches are in the code. NoSQL with its Map/Reduce is specifically about documents, with no fixed structure for their properties. If you are running a factory it is near impossible to imagine you are doing something document-oriented. That stuff is relational. So the headaches come in trying to force-fit a relational need into a document-oriented solution. – Ken Downs Jan 22 '11 at 16:13

Use an established RDBMS that supports such things natively

Do you really want to run a 24/7 mission-critical system on something that is only eventually consistent, i.e. may not be consistent at any given point in time?
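To see why eventual consistency matters here, consider this toy model of a store with two replicas, where a write reaches the second replica only after a replication lag. Everything in it (class, keys, lag value) is invented for illustration:

```python
# Toy model of an eventually consistent store: writes land on replica 0
# immediately and propagate to replica 1 only after a fixed delay.
class EventuallyConsistentStore:
    def __init__(self):
        self.replicas = [{}, {}]
        self.pending = []  # (apply_at, replica_index, key, value)

    def write(self, key, value, now, lag=0.5):
        self.replicas[0][key] = value                    # coordinator applies now
        self.pending.append((now + lag, 1, key, value))  # replica 1 catches up later

    def read(self, replica, now):
        # Apply any replication events that are due by `now`.
        still_pending = []
        for t, idx, k, v in self.pending:
            if t <= now:
                self.replicas[idx][k] = v
            else:
                still_pending.append((t, idx, k, v))
        self.pending = still_pending
        return dict(self.replicas[replica])

store = EventuallyConsistentStore()
store.write("mixer-1", "oxidizer-Y", now=0.0)

# A client reading the lagging replica before replication completes still
# sees the old (empty) state -- and could schedule a forbidden product mix.
print(store.read(1, now=0.1))  # {} -- stale
print(store.read(1, now=1.0))  # {'mixer-1': 'oxidizer-Y'} -- caught up
```

Real Dynamo-style stores offer quorum reads to mitigate this, but the window of stale reads is exactly what this answer is warning about for a safety-critical state database.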

gbn

You need to avoid single points of failure.

All the major players in the DBMS world offer at least one way to avoid making the database itself a single point of failure. I might question whether they can propagate changes fast enough for your manufacturing processes. (Or are data updates not really an issue? I can't tell from your question.) My database work in manufacturing is limited to the car and chemical industries; microseconds didn't matter to them.

But the dbms isn't the only thing that can fail. "Always working" means that the clients have to always be working, too. Client hardware, connections to the network, the network and network servers themselves all probably have single points of failure. Failure-tolerant servers have multiple power supplies, multiple NICs, etc.

"Always working" is really expensive. I have a feeling that the database isn't going to be the biggest problem for your company.

Mike Sherrill 'Cat Recall'
  • Downtime of the current database means downtime for the entire plant, which needs to be avoided. Systems on the 'edge' of the system may go down for a few hours. – Stephan Jun 18 '11 at 09:32
  • I understand about database downtime. I just meant that 24/7 hardware and communications might be harder to implement than a 24/7 database. Most enterprise-grade SQL DBMSs support 24/7 operation right out of the box nowadays: hot standby, replication, clusters, stuff like that. – Mike Sherrill 'Cat Recall' Jun 18 '11 at 11:20