
We have a very read-heavy semantic web (RDF) application running RHEL 5 / Apache 2.2.3 / Tomcat 6 / Java 6 on one server, with MySQL 5.1 on another. Apache and MySQL are backported Red Hat repo packages, so please don't go on about how old the versions are. I want to discuss, purely in terms of performance, the merits of having the DB on the same server and connecting over a Unix socket versus making TCP calls to a remote DB server. I know that, security-wise, if hackers own the box they own the entire stack, but my concern here is performance. The server is hardened, and multiple IDSes and firewalls sit in front of it.
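For concreteness, here is roughly how the two options look from the webapp side with plain JDBC (the host names, schema, and credentials below are placeholders, not our real config). As I understand it, Connector/J is a pure-Java (Type 4) driver, so even "localhost" goes over TCP on the loopback interface rather than the MySQL Unix socket unless a custom socket factory is plugged in:

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class DbTargets {
    // Same box: with Connector/J, "127.0.0.1" still means TCP over loopback,
    // not the MySQL Unix socket, unless a custom socket factory is configured.
    static final String LOCAL_URL  = "jdbc:mysql://127.0.0.1:3306/rdfstore";

    // Separate DB server: TCP over the LAN.
    static final String REMOTE_URL = "jdbc:mysql://db.example.internal:3306/rdfstore";

    public static void main(String[] args) throws Exception {
        Class.forName("com.mysql.jdbc.Driver"); // Connector/J 5.x driver class
        Connection conn = DriverManager.getConnection(REMOTE_URL, "appuser", "secret");
        System.out.println("connected: " + !conn.isClosed());
        conn.close();
    }
}
```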

usedTobeaMember

2 Answers


I'm not sure this won't stray into the "too much opinion to be appropriate" realm really fast, but here goes. The major performance difference I can see is sacrificing the disk performance of a split model versus sacrificing the network performance of a unified model. Since (in my experience) you're much more likely to be disk-bound than network-bound, I'd be tempted to keep the model split.

John

This isn't a very focused question, in that a ton is left to interpretation. However, performance always comes down to resources. There is a very small overhead cost in using the network (including request/response time) in exchange for being able to allocate more system resources to your application, not to mention isolating the disk I/O of each tier, which helps performance in an I/O-intensive application.
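If you want to put a rough number on that network overhead, a crude micro-benchmark is enough: run a trivial query in a loop against each candidate URL and compare the per-call latency. A minimal sketch assuming MySQL Connector/J; the URLs, credentials, and iteration counts are placeholders:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class RoundTripBench {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details -- substitute the local and remote URLs you want to compare.
        String url = args.length > 0 ? args[0] : "jdbc:mysql://127.0.0.1:3306/rdfstore";
        Class.forName("com.mysql.jdbc.Driver");
        Connection conn = DriverManager.getConnection(url, "appuser", "secret");
        Statement st = conn.createStatement();

        int iterations = 5000;
        // Warm up the connection and the JIT before timing.
        for (int i = 0; i < 500; i++) {
            runQuery(st);
        }
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            runQuery(st);
        }
        long elapsed = System.nanoTime() - start;
        System.out.printf("%s: %.1f us per round trip%n", url, elapsed / 1000.0 / iterations);

        st.close();
        conn.close();
    }

    private static void runQuery(Statement st) throws Exception {
        // SELECT 1 never touches the disk, so the timing isolates connection/protocol overhead.
        ResultSet rs = st.executeQuery("SELECT 1");
        rs.next();
        rs.close();
    }
}
```

Run it once against the local/loopback URL and once against the remote box; the per-call difference, multiplied by the number of queries per page view, tells you whether the extra network hop actually matters for your workload.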

kalikid021