4

If I end up having 4 or 5 medium sites on one server, I want to be sure that each one that requires memcached has at least an allotted space. Is there a simple way to do this? The only ways that come to mind would be to have separate processes on different ports for each one. Is there an easier/other way? I just don't want one site hogging up all of the ram for memcached.

I have tons of ram, and say I want to give one of my magento sites exactly 512mb for memcached. I also want to give another custom application exactly 512mb for memcached. Ideas?

Matthew
  • 1,859
  • 4
  • 22
  • 32

6 Answers

4

Memcached has no conception of namespaces, partitions, or similar. Therefore the only way would be to run multiple instances of memcached. That's no problem though as memcached is ridiculously simple to set up (purposefully).

It can simply be bound to, for example, 5 different ports (one for each site) or 5 different IP addresses.

See here for an example: http://blog.nevalon.de/en/wie-kann-ich-mehrere-instanzen-von-memcached-auf-einem-server-laufen-lassenhow-can-i-run-multiple-instances-of-memcached-on-one-server-20090729
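As a quick sketch of the per-port approach (the ports, the 512MB caps from the question, and the `memcache` user are assumptions; adjust to your setup):

```shell
# One memcached instance per site, each with its own memory cap.
# Nothing is shared between the two, so one site can never evict
# the other's cache entries.
memcached -d -m 512 -p 11211 -u memcache   # Magento site
memcached -d -m 512 -p 11212 -u memcache   # custom application
```

Each site's client config then just points at its own port.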

SimonJGreen
  • 3,205
  • 5
  • 33
  • 55
2

I agree with Niall here. Another possibility is to use private IP space. Say your server can be assigned four IPs, 10.x.x.1 through 10.x.x.4. You can launch four memcached instances and bind one to each IP, giving every site the same port but a different memcached IP.

On top of that, you can modify the init script for memcached so that all four instances start and stop together in one go. This works with either the IP or the port binding method, and it will greatly simplify things for you.
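The start-them-all-together idea can be sketched as a simple loop (the IPs, paths, user, and memory cap are assumptions, not values from the question):

```shell
#!/bin/sh
# Start one memcached instance per private IP, all on the default
# port 11211, each capped at 512 MB and tracked by its own pidfile.
for ip in 10.0.0.1 10.0.0.2 10.0.0.3 10.0.0.4; do
    memcached -d -m 512 -p 11211 -l "$ip" \
        -u memcache -P "/var/run/memcached-$ip.pid"
done

# Stopping them together is the reverse:
# for pidfile in /var/run/memcached-*.pid; do kill "$(cat "$pidfile")"; done
```

Dropping something like this into the init script gives you one start/stop action for the whole group.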

Here is an example of starting multiple servers in one go: Multiple Memcached server /etc/init.d startup script that works? (see the script source in the question).

There is a reason memcached requires separate processes, and it has more to do with memory management than with memcached itself. Separate processes sharing one memory pool is not a good idea; memory management is best left to the system.

Abhishek Dujari
  • 567
  • 2
  • 5
  • 17
1

This is not necessary at all. Memcached's storage works as an LRU stack, so dedicating a fixed slice of memory to each site is suboptimal. A busy site that should be cached more gets a smaller portion, and its records are pushed out more often than necessary, while a low-traffic site sits on mostly unused data in its dedicated slice. That memory could have been put to better use by the more active sites, which instead have to reach for the data in some SQL backend rather than hitting memcached.

Hrvoje Špoljar
  • 5,245
  • 26
  • 42
  • While I generally agree with you (it would be more optimal to use all of the set-apart ram), I will have more than one or two clients on the same setup. I need to guarantee that their site is quick. I don't want one to be less quick simply because it is accessed less often. – Matthew Mar 04 '12 at 23:51
  • memcache does not make a service quick by itself. Any record you need will be cached on the first visit to the page, and it will expire and be pushed out when fresher records come in. You get the best performance if you let memcache manage this itself. Whatever you think will run slower is only 'slow' for that one load, until the values are brought back into the cache. Honestly, sir, you are re-inventing the wheel here. – Hrvoje Špoljar Mar 05 '12 at 06:31
  • No, I understand what memcached is. I know it has to generate first and use the cache when possible. I just need to not let my heavier usage clients clobber the caching abilities of my lesser usage clients. – Matthew Mar 05 '12 at 16:07
  • ok, that makes sense; I guess it's some shared hosting setup where you want to guarantee service to different customers – Hrvoje Špoljar Mar 05 '12 at 18:41
0

I agree with both responses here but wanted to add some more input.

I do not think there is a way to split out namespaced objects and their associated RAM usage within a single memcached instance, so as the other responses say, it is best to run multiple instances.

While that is an easy task, if you are operating at a larger scale these might also be good resources to look at:

twemproxy

https://github.com/twitter/twemproxy

Allows you to set up a proxy in front of memcached. All sites/clients connect to nutcracker processes, which load-balance across your memcached pools.
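A minimal nutcracker.yml sketch of this (the pool name, listen port, and backend addresses are assumptions for illustration):

```yaml
# One pool fronting two local memcached instances.
sites:
  listen: 127.0.0.1:22121
  hash: fnv1a_64
  distribution: ketama
  auto_eject_hosts: true
  server_retry_timeout: 30000
  server_failure_limit: 2
  servers:
   - 127.0.0.1:11211:1
   - 127.0.0.1:11212:1
```

Clients then talk to port 22121, and nutcracker shards keys across the backends.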

moxi

http://code.google.com/p/moxi/

Another proxy solution for load-balancing memcached.

So again, it depends on the size of your infrastructure, but these tools might be helpful in a larger or growing environment. Splitting things out this way allows you to run several smaller instances: rather than adding, say, a new 512MB instance for every site, you could stick to even smaller 64MB instances and expand at a much slower rate.

pablo
  • 3,040
  • 1
  • 19
  • 23
0

It's extremely unlikely that memcached will ever consume more than around 32MB of RAM for a Magento store anyway. When you consider that each cached page is around 4KB, you've got a fair bit of scope for cached content.

I would suggest setting up multiple memcached instances using Unix domain sockets (it's faster and safer than TCP/IP). You can start memcached with the following flags:

memcached -d \
    -m 32 \
    -u myuser \
    -s /home/myuser/cache.sock \
    -a 0700

From http://www.sonassihosting.com/blog/support/implement-memcache-for-sonassi-magento-optimised-dedicated-servers/

Your memcached local.xml config would look like this; read the following to see why the slow_backend is necessary - http://www.sonassi.com/knowledge-base/magento-kb/what-is-memcache-actually-caching-in-magento/

<cache>
  <slow_backend>database</slow_backend>
  <fast_backend>Memcached</fast_backend>
  <fast_backend_options>
    <servers>
      <server>
        <host>unix:///home/myuser/cache.sock</host>
        <port>0</port>
        <persistent>0</persistent>
      </server>
    </servers>
  </fast_backend_options>
  <backend>memcached</backend>
  <memcached>
    <servers>
      <server>
        <host>unix:///home/myuser/cache.sock</host>
        <port>0</port>
        <persistent>0</persistent>
      </server>
    </servers>
    <compression><![CDATA[0]]></compression>
    <cache_dir><![CDATA[]]></cache_dir>
    <hashed_directory_level><![CDATA[]]></hashed_directory_level>
    <hashed_directory_umask><![CDATA[]]></hashed_directory_umask>
    <file_name_prefix><![CDATA[]]></file_name_prefix>
  </memcached>
</cache>
Ben Lessani
  • 5,244
  • 17
  • 37
-1

The easiest way will be, as you suspected, to run multiple instances of memcached. Memcached is purposely kept as simple as possible for speed, so it offers no internal form of separation like what you're looking for. It doesn't even offer any form of authentication, for the same reason!

Niall Donegan
  • 3,869
  • 20
  • 17