I have a fairly elaborate multi-page query form. Actually, my site has several for querying different data sets. As these query parameters span multiple page requests, I rely on sessions to store the accumulated query parameters. I'm concerned that the data stored in the session, when serialized, might exceed the 65,535-byte capacity of the MySQL BLOB `data` column specified by the CodeIgniter session documentation:

CREATE TABLE IF NOT EXISTS `ci_sessions` (
        `id` varchar(128) NOT NULL,
        `ip_address` varchar(45) NOT NULL,
        `timestamp` int(10) unsigned DEFAULT 0 NOT NULL,
        `data` blob NOT NULL,
        KEY `ci_sessions_timestamp` (`timestamp`)
);

How can I store my user-entered query parameters and be sure that they will be preserved for a given user?
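Just as a diagnostic (my own addition, not part of the form flow), I've been measuring the serialized session payload on each request to see how close I actually get to that limit:

// session_encode() returns the session payload in roughly the same
// format the save handler writes to the `data` column
$approx_size = strlen(session_encode());
log_message('debug', 'Serialized session size: ' . $approx_size . ' bytes');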

I considered using file-based-caching to cache this data with a key generated from the session ID:

// controller method
public function my_page() {
  // blah blah check POST for incoming query params and validate them
  $validated_query_params = $this->input->post(); 

  // session library is auto-loaded
  // but apparently new session id generated every five mins by default?
  $cache_key = "query_params_for_sess_id" . $this->session->session_id;

  $this->load->driver('cache');
  // cache for an hour
  $this->cache->file->save($cache_key, $validated_query_params, 3600);
}
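And on a later page load I'd read the parameters back with the same key (a sketch only, assuming the key scheme above and CodeIgniter 3's file cache driver):

// later request -- rebuild the key and look the params up
$cache_key = "query_params_for_sess_id" . $this->session->session_id;
$this->load->driver('cache');
$validated_query_params = $this->cache->file->get($cache_key);
if ($validated_query_params === FALSE) {
  // cache miss: entry expired or the session id has changed
  $validated_query_params = array();
}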

However, I worry that the cached data could be orphaned when a new session ID is generated for a given user. Apparently CodeIgniter regenerates session IDs every five minutes by default (the `sess_time_to_update` setting) to enhance security.

Can anyone suggest a tried-and-true (and efficient!) means of storing session data that exceeds the 64K BLOB size?

S. Imp

1 Answer

You could use MEDIUMBLOB, which supports up to 16MB, or LONGBLOB which supports up to 4GB.

See https://dev.mysql.com/doc/refman/8.0/en/string-type-overview.html
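For your existing ci_sessions table, that's just an ALTER of the data column (keeping NOT NULL, as in the schema you posted):

mysql> ALTER TABLE ci_sessions MODIFY `data` MEDIUMBLOB NOT NULL;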

Also, if you declare your blob with a length, like BLOB(2000000) (whatever length you need), MySQL will automatically promote the column to a data type that can hold that much data. For example, BLOB(2000000) implicitly becomes MEDIUMBLOB.

mysql> create table t ( b blob(2000000) );

mysql> show create table t\G
*************************** 1. row ***************************
       Table: t
Create Table: CREATE TABLE `t` (
  `b` mediumblob
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4
Bill Karwin
  • Two things give me reservations about this approach: 1) my db server is a separate machine, and quite busy. This means the large-ish data would be transiting the db connection over the network with every page request. 2) Might there be any speed concerns with MEDIUMBLOB? I understand from [other posts](https://stackoverflow.com/questions/8946224/mysql-tinyblob-vs-longblob) that disk storage appears to be efficient, without too much overhead, but is there any speed difference or other issue I should be worried about? – S. Imp Feb 26 '19 at 19:17
  • I also appreciate your tip to use `BLOB(2000000)` -- would this impose an upper limit to the amount of data stored? Or would MySQL just create a MEDIUMBLOB column which truncates at 16MB? – S. Imp Feb 26 '19 at 19:19
  • It just makes a mediumblob, the minimum data type needed to store at least 2M bytes. The mediumblob still allows its regular max length, up to 16MB. – Bill Karwin Feb 26 '19 at 19:24