
I tried to implement low-level (model) caching in my app, and I'm looking for help dealing with a possible memory leak.

Rails 4
Client: Dalli
Cache store: Memcached Cloud

I have a 'users' index page where I load a lot of users and their data (answers to some questions). I tried to cache all of that since the information doesn't change very often.
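
Roughly, the index action ties these helpers together like this (a simplified sketch, not the exact code; the cached helpers themselves are shown further down):

class UsersController < ApplicationController
  def index
    # The id list comes from the cache; the records themselves
    # are then loaded with a single SQL query.
    @users = User.where(id: User.approved_user_ids)
  end
end

The view then calls user.answer_cached(q_id) and the role helpers once per user, so a single page load does many small cache reads.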

I am noticing that after every refresh of the 'users' page the memory footprint keeps growing, until the application times out because it hits the memory limit on Heroku.

Here are the memory stats that Heroku gives in the logs:

'users' page not yet loaded
heroku[web.1]: sample#memory_total=**147.48MB** sample#memory_rss=147.47MB sample#memory_cache=0.01MB sample#memory_swap=0.00MB sample#memory_pgpgin=38139pages sample#memory_pgpgout=385pages

1st load 'users' page
heroku[web.1]: sample#memory_total=**474.22MB** sample#memory_rss=472.95MB sample#memory_cache=1.27MB sample#memory_swap=0.00MB sample#memory_pgpgin=122341pages sample#memory_pgpgout=941pages

2nd load 'users' page
heroku[web.1]: sample#memory_total=**626.01MB** sample#memory_rss=472.86MB sample#memory_cache=0.00MB sample#memory_swap=153.15MB sample#memory_pgpgin=179711pages sample#memory_pgpgout=58658pages


3rd load 'users' page
heroku[web.1]: sample#memory_total=**631.07MB** sample#memory_rss=484.38MB sample#memory_cache=0.00MB sample#memory_swap=146.69MB sample#memory_pgpgin=199602pages sample#memory_pgpgout=75600pages
heroku[web.1]: Process running mem=631M(123.3%)
heroku[web.1]: Error R14 (Memory quota exceeded)
heroku[router]: at=error code=H12 desc="Request timeout"

At the same time I am monitoring the cache, and it seems like there are no new cache misses (get_misses stays at 2721), so the caching seems to be working fine.

irb(main):098:0> Rails.cache.stats.values_at("total_items", "get_misses","bytes_written","get_hits")
=> ["907", "2721", "92597491", "4535"]
irb(main):099:0> Rails.cache.stats.values_at("total_items", "get_misses","bytes_written","get_hits")
=> ["907", "2721", "111072314", "5442"]
irb(main):100:0> Rails.cache.stats.values_at("total_items", "get_misses","bytes_written","get_hits")
=> ["907", "2721", "129539345", "6155"]

Notice that bytes_written keeps on growing, which is another mystery.
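
A quick irb helper to diff the counters across a single page load (this assumes the flat stats hash shown above; some Dalli setups key the stats by server instead):

def cache_stats_delta(keys = %w(get_hits get_misses bytes_written total_items))
  before = Rails.cache.stats
  puts "Reload the page, then press Enter..."
  gets
  after = Rails.cache.stats
  # Values come back as strings, hence the to_i
  keys.each { |k| puts "#{k}: +#{after[k].to_i - before[k].to_i}" }
end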

Here is the corresponding caching code:

User.rb (Model)

  #*********** Caching stuff **********
  def self.cached_find(id)
    # name here refers to the class name, i.e. 'User'
    Rails.cache.fetch([name, id]) { find(id) }
  end

  # pluck runs the query immediately and returns an array of ids,
  # so the actual id list (not a lazy relation) is what gets stored in the cache.
  def self.approved_user_ids
    Rails.cache.fetch([self,'approved_users']) { where(approved: true).pluck(:id) }
  end


  def answer_cached(q_id)
    Answer.cached_answer(q_id,id)
  end

  def cached_has_any_role?(check_roles)
    assigned_roles = Rails.cache.fetch(['roles', id]) { roles.pluck(:name)}

    #Make sure roles are both string or symbols during comparisons. Be cautious about this since it can lead to wrong permissions
    check_roles.map{|check_role| assigned_roles.include? check_role.to_s}.any?
  end

  def cached_has_role?(check_role)
    assigned_roles = Rails.cache.fetch(['role', id]) {roles.pluck(:name)}

    #Make sure roles are both string or symbols during comparisons. Be cautious about this since it can lead to wrong permissions
    assigned_roles.include? check_role.to_s
  end

  # TODO: Couldn't figure out how to flush the cache on adding a new role.
  # For now, add roles using this method only (an alternative, delete-free
  # approach is sketched after this listing).
  def add_role_flush_cache(*args)
    add_role(*args)
    flush_roles_from_cache
  end


  # Called on: after_commit :flush_cache
  def flush_cache
    Rails.cache.delete([self.class.name, id])

    Rails.cache.delete(['colleague_ids', id])

    # TODO This invalidates the cache anytime any user record is updated, which is too aggressive.
    # Change it to invalidate only when approved users update their record or new users are approved/disapproved.
    # Note: this key has to match the one used in approved_user_ids, i.e. [self, 'approved_users'];
    # the previous %w(approved_users) expanded to a different key and never deleted the entry.
    Rails.cache.delete([self.class, 'approved_users'])
  end

  def flush_roles_from_cache
    Rails.cache.delete(['roles', id])
    Rails.cache.delete(['role', id])
  end
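
For the role-flushing TODO above, an alternative is to version the key instead of deleting it: stale entries simply stop being read and eventually age out of memcached via LRU. This is only a sketch, and it assumes role changes touch the user (e.g. a user.touch inside add_role_flush_cache):

  # Sketch: key the role cache on updated_at so no explicit deletes are needed.
  # Assumes the user record is touched whenever a role is added or removed.
  def cached_roles
    Rails.cache.fetch([self.class.name, id, updated_at.to_i, 'roles']) { roles.pluck(:name) }
  end

  def cached_has_role?(check_role)
    cached_roles.include?(check_role.to_s)
  end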

Answer.rb (Model)

class Answer < ActiveRecord::Base
  belongs_to :user, touch:true
  belongs_to :question

  #DO NOT DELETE. This ensures stuff gets flushed out of cache on updates
  after_commit :flush_cache

  #  Caching stuff
  def self.employee_ids_cached(company)
    # question_id 16 is the company-name question
    company_lower = company.downcase
    Rails.cache.fetch([company_lower, 16]) { where(question_id: 16).where('lower(content) = ?', company_lower).pluck(:user_id) }
  end

  def self.cached_answer(ques_id,u_id)
    Rails.cache.fetch([name,ques_id,u_id]) { where(question_id:ques_id, user_id: u_id).pluck(:content) }
  end

  def flush_cache
    # Delete the cached employee list for this company. The key has to match
    # employee_ids_cached, which caches under [content.downcase, 16], so
    # downcase here too and only delete for the company question
    # (the previous [content, id] key never matched anything).
    # Note: if content itself changed, the entry under the old company name
    # is not cleaned up here and sits in memcached until evicted.
    Rails.cache.delete([content.downcase, question_id]) if question_id == 16

    # Delete the cached answer for this user on updates
    Rails.cache.delete([self.class, question_id, user_id])
  end

end
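
Since the index does one cache read per user per question, batching the reads might also cut down per-request work. A sketch using fetch_multi (needs Rails 4.1+; cached_answers_for is a new name, not something already in my code):

  # Sketch: fetch all users' answers to one question in a single multi-get,
  # filling in any missing entries. Note the return shape differs across
  # Rails versions (an array of values on 4.1, a hash on later versions).
  def self.cached_answers_for(ques_id, user_ids)
    keys = user_ids.map { |u_id| [name, ques_id, u_id] }
    Rails.cache.fetch_multi(*keys) do |(_, q_id, u_id)|
      where(question_id: q_id, user_id: u_id).pluck(:content)
    end
  end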

I am not sure if there is a memory leak at this point, but memory usage keeps going up on Heroku in spite of most things being cached and served from the cache.
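
To figure out whether the growth is in the Ruby heap itself (a real leak) or just process bloat, a temporary diagnostic I'm considering is logging live heap slots around each request (the GC.stat key names differ between Ruby versions, hence the fallback):

class ApplicationController < ActionController::Base
  around_action :log_heap_growth

  private

  def log_heap_growth
    before = GC.stat[:heap_live_slots] || GC.stat[:heap_live_num]
    yield
    GC.start # force a collection so live objects, not pending garbage, are counted
    after = GC.stat[:heap_live_slots] || GC.stat[:heap_live_num]
    Rails.logger.info "heap live slots: #{before} -> #{after} (+#{after - before})"
  end
end

If the live-slot count climbs steadily across identical page loads, something is retaining Ruby objects; if it stays flat while RSS grows, the problem is more likely fragmentation or large transient allocations.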

Comment (Ben Sheldon, Oct 20 '15 at 20:42): While I don't have an answer, I have experienced something similar and filed an issue with Dalli: https://github.com/mperham/dalli/issues/558
