
I'm looking for architectural advice. I have a client for whom I've built a website that essentially allows users to view their web cameras remotely.

The current flow of data is as follows:

1. The user opens a page to view the web camera image. A JavaScript script polls a URL on the server (appended with a unique timestamp to defeat caching) every 1000ms.

2. FTP access is enabled for the camera's FTP user. The web camera opens an FTP connection to the server, begins taking photos, and uploads each photo to the FTP server.

3. On each image URL request, the server reads the latest image for that camera from the hard drive (uploaded via FTP) and deletes any older images from the server (sketched below).
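For reference, the read-and-prune endpoint boils down to something like this (a minimal sketch, not our exact code; the directory layout and filename scheme are assumptions):

```php
<?php
// Hypothetical version of the current image endpoint. Assumes each camera
// uploads JPEGs into its own directory and that filenames sort chronologically.
$cameraId = basename($_GET['camera']);      // e.g. "cam01", sanitised via basename()
$dir      = "/var/ftp/cameras/$cameraId";   // assumed upload directory

$files = glob("$dir/*.jpg");
if ($files === false || count($files) === 0) {
    http_response_code(404);
    exit;
}

sort($files);                 // lexicographic order == upload order here
$latest = array_pop($files);  // keep the newest image

foreach ($files as $stale) {  // delete any older images, as described above
    unlink($stale);
}

header('Content-Type: image/jpeg');
readfile($latest);
```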

This is working okay at the moment for a small number of users and cameras (about 10 users and around the same number of cameras), but we're starting to worry about the scalability of this approach.

My original plan was that, instead of reading the files from the local disk, the web server would open an FTP connection to the FTP server and read the latest images directly from there, meaning we should have been able to scale horizontally fairly easily. But FTP connection establishment times were too slow (mainly because PHP out of the box is unable to persist FTP connections), so we abandoned this approach and went straight for reading from the hard drive.
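To illustrate why that stalled: every single poll had to pay for a fresh connection, roughly like this (host, credentials, and paths are placeholders):

```php
<?php
// The abandoned approach: PHP's ftp_* handle can't outlive the request,
// so each poll re-does the TCP + FTP handshake before any data moves.
$conn = ftp_connect('ftp.example.com', 21, 5);  // connection setup: the slow part
ftp_login($conn, 'camera_user', 'secret');
ftp_pasv($conn, true);                          // passive mode for firewalls/NAT

// Download the newest image to a temp file, then serve it.
$tmp = tempnam(sys_get_temp_dir(), 'cam');
ftp_get($conn, $tmp, '/cam01/latest.jpg', FTP_BINARY);
ftp_close($conn);

header('Content-Type: image/jpeg');
readfile($tmp);
unlink($tmp);
```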

The firmware provider for the cameras states they're able to build an HTTP client which, instead of uploading the image via FTP, could POST it to a web server. This seems plausible enough to me, but I'm looking for some architectural advice.

My current thought is a simple Nginx/PHP/Redis stack.

The web camera issues POST requests with its latest image to Nginx/PHP, and the latest image for that camera is stored in Redis.

Clients can then pull the latest image from Redis, which should be extremely quick as the images will always be stored in memory.
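The camera-facing side could then be as small as this (a sketch assuming the phpredis extension; the key naming and TTL are my own choices for illustration):

```php
<?php
// Upload endpoint the cameras would POST to. Each camera overwrites a
// single Redis key, so only the latest frame is ever kept in memory.
$cameraId = basename($_GET['camera']);

$redis = new Redis();                       // phpredis extension
$redis->connect('127.0.0.1', 6379);

$image = file_get_contents('php://input');  // raw JPEG body of the POST

// A short expiry means a camera that goes offline stops serving stale frames.
$redis->set("camera:$cameraId:latest", $image, ['ex' => 10]);

http_response_code(204);                    // nothing to send back to the camera
```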

The data flow would then become:

1. The user opens a page to view the web camera image. A JavaScript script polls a URL on the server (appended with a unique timestamp to defeat caching) every 1000ms.

2. The camera is sent an HTTP request to start posting images to a provided URL. The web camera begins taking photos and sends POST requests to the server as fast as it can.

3. On each image URL request, the server reads the latest image for that camera from Redis and tells Redis to delete the older image (see the sketch below).
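The browser-facing endpoint is the mirror image (again a sketch; note that if each camera's frame lives under a single key that the upload handler overwrites, the explicit delete step becomes unnecessary):

```php
<?php
// Read endpoint the JavaScript poller hits. Pulls the latest frame
// straight from Redis memory; the overwrite-on-upload scheme means there
// is no older image to clean up.
$cameraId = basename($_GET['camera']);

$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$image = $redis->get("camera:$cameraId:latest");
if ($image === false) {
    http_response_code(404);        // no recent frame for this camera
    exit;
}

header('Content-Type: image/jpeg');
header('Cache-Control: no-store');  // the poller always wants a fresh frame
echo $image;
```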

My questions are:

  1. Are there any greater overheads of transferring images via HTTP instead of FTP?
  2. Is there a simple way to calculate how many potential cameras we could have streaming at once?
  3. Is there any way to prevent potentially DoS'ing our own servers with the web cameras' requests?
  4. Is Redis a good solution to this problem?
  5. Should I abandon the PHP/Nginx combination and go for something else?
  6. Is this proposed solution actually any good?
  7. Will adding HTTPS to the mix cause posting the image to become too slow?

Thanks in advance,

Alan

Alan Hollis
  • Architectural and Design Questions are Off-Topic on [sf]. See the [FAQ] for more details. – Brent Pabst Dec 17 '12 at 20:55
  • @Brent apologies, the FAQ doesn't mention that specifically, and although I can see how I may be crossing the bounds, I'm not entirely sure which of the Stack Exchange sites would be better suited to this question? Thanks, Alan – Alan Hollis Dec 17 '12 at 21:02
  • We're debating this a bit in chat right now, I feel this question is too subjective and open-ended which is off-topic [here](http://serverfault.com/faq#dontask) – Brent Pabst Dec 17 '12 at 21:05
  • It's a bit of a broad question... If it does get closed off-topic, I propose breaking it down into 5-7 smaller ones. Q4 is nearly unanswerable. So are 5 and 6. Only you can answer those. If it works, it's good, surely. – Tom O'Connor Dec 17 '12 at 21:09
  • Agree with @TomO'Connor on this. If you can break your question down into more manageable pieces it may be easier to digest. The length turned a lot of the higher-rep users away from answering it as well. Some simple edits may help you get more traction. – Brent Pabst Dec 17 '12 at 21:14
  • Thanks for the feedback, it's appreciated. Apologies for the poor question. I hate being thought of as a "guy who can't read"; I genuinely felt this was the right Stack Exchange site to ask this type of question on. – Alan Hollis Dec 17 '12 at 21:17
  • @AlanHollis Some aspects are valid here. It's just the way things are presented are causing the majority of the issue. Don't fret, we all are here to help and disagree at times. – Brent Pabst Dec 17 '12 at 21:22
  • Meanwhile, the clients subscribe to [Barracuda CudaEye...](http://cudaeye.com/)... – ewwhite Dec 17 '12 at 21:38
  • @ewwhite I'd feel incredibly uncomfortable installing or recommending a "cloud" camera solution that allows a 3rd party unrestricted, unaudited access to all my video surveillance... I suppose I'm paranoid/informed that way though. – Chris S Dec 17 '12 at 21:49
  • Not recommending, but just thinking that this has been done. Would be interesting to see Barracuda's architecture. – ewwhite Dec 17 '12 at 21:56
  • @Chris Interesting point. My client is selling this to quite a niche market, where he already has gained a lot of trust from his customers via the other services his company offers. I don't believe his goal is ever a mass market "cloud" offering, but I could be wrong. – Alan Hollis Dec 17 '12 at 22:00

1 Answer


Are there any greater overheads of transferring images via HTTP instead of FTP?

Not really. HTTP is easier to accelerate and cache than FTP.

Is there a simple way to calculate how many potential cameras we could have streaming at once?

A single MPEG4 stream at 1080p is about 10 Mbit/s. That's a ballpark figure; you should be able to scale it back to your actual resolution and frame rate.
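As a back-of-envelope example with assumed numbers (a 1 fps JPEG poll rather than full-motion video):

```php
<?php
// Rough capacity estimate. All figures here are assumptions to plug your
// own measurements into, not benchmarks.
$imageBytes   = 100 * 1024;  // ~100 KB per JPEG frame (measure yours)
$fps          = 1;           // one frame per second per camera
$perCameraBps = $imageBytes * 8 * $fps * 2;  // x2: camera upload + one viewer download

$nicBps   = 1000000000;      // 1 Gbit/s NIC
$headroom = 0.6;             // leave room for protocol overhead and bursts

printf("~%d cameras per server\n", (int) ($nicBps * $headroom / $perCameraBps));
// With these numbers: ~366 cameras before the NIC, not PHP, is the limit.
```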

Is there any way to prevent potentially DOS'ing our own servers due to web camera requests?

Scale out. There's a good Node.js MJPEG proxy I've used in the past, which scales better than a camera's onboard video server.

Is Redis a good solution to this problem?

Probably as good as any other. YMMV. Do some testing.

Should I abandon the PHP/Nginx combination and go for something else?

Stick to what you're comfortable with.

Is this proposed solution actually any good?

Sounds plausible, pending some proof-of-concept tests and benchmarking.

Will adding HTTPS to the mix cause posting the image to become too slow?

Possibly a little, but probably not to a noticeable degree. Again, you'll need to do some testing. You can probably get away with having a separate reverse proxy to terminate SSL connections; that way you get HTTPS for access to the images while the uploads don't take place over HTTPS (if that's what you want).
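A minimal sketch of that split, assuming stock Nginx (names, paths, and addresses are placeholders):

```nginx
# HTTPS terminates at this proxy; the image backends stay plain HTTP.
upstream image_backend {
    server 10.0.0.10:80;   # Nginx/PHP boxes serving the images;
    server 10.0.0.11:80;   # camera POSTs can still hit these directly over HTTP
}

server {
    listen 443 ssl;
    server_name cameras.example.com;

    ssl_certificate     /etc/nginx/ssl/cameras.crt;
    ssl_certificate_key /etc/nginx/ssl/cameras.key;

    # Browsers fetch images over HTTPS; the proxy forwards over plain HTTP.
    location /image/ {
        proxy_pass http://image_backend;
        proxy_set_header Host $host;
    }
}
```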

There are also some HTTPS accelerator cards you can get for real servers.

Tom O'Connor
  • HTTP isn't necessarily easier to cache/accelerate, but the protocol is a lot simpler and faster (no extra connection for the actual transfer). – Dennis Kaarsemaker Dec 17 '12 at 21:21
  • I've no idea where I'd start trying to optimize FTP. HTTP, though, is fairly well studied and documented, especially with things like SPDY coming up now. – Tom O'Connor Dec 17 '12 at 21:23
  • SPDY isn't "simple" and not quite as widely supported yet. Optimizing HTTP (or FTP) for me has always been around caching. With FTP you actually have access to file mtime/size before deciding to download, making a decision on whether to (re-)download easier. With HTTP you need to make (educated) guesses on either the server or client side. – Dennis Kaarsemaker Dec 17 '12 at 21:26
  • Then again, http pipelining/keepalive is an obvious optimization. FTP does the exact opposite: it insists on a new connection for every file. – Dennis Kaarsemaker Dec 17 '12 at 21:29