Hope you are all doing well. We have a huge MySQL table called 'posts'. It has about 70,000 records and has grown to about 10 GB in size.
My boss says something has to be done to make this huge table easier to handle: if it ever gets corrupted, recovering it would take us a long time, and it is also slow at times.
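(To make the corruption worry concrete: the table is MyISAM, so as far as I understand it, recovering after a crash means a full check-and-repair pass over the 10 GB data file, along these lines:)
-- Scan the data file for corruption after an unclean shutdown
CHECK TABLE posts MEDIUM;
-- If the check reports problems, rebuild the data file and all indexes;
-- on a 10 GB MyISAM table this keeps the table locked for a long time
REPAIR TABLE posts;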
What are the possible solutions so that handling this table becomes easier for us in all respects?
The structure of the table is as follows:
CREATE TABLE IF NOT EXISTS `posts` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`thread_id` int(11) unsigned NOT NULL,
`content` longtext CHARACTER SET utf8 COLLATE utf8_unicode_ci NOT NULL,
`first_post` mediumtext CHARACTER SET utf8 COLLATE utf8_unicode_ci,
`publish` tinyint(1) NOT NULL,
`deleted` tinyint(1) NOT NULL,
`movedToWordPress` tinyint(1) NOT NULL,
`image_src` varchar(500) CHARACTER SET utf8 COLLATE utf8_unicode_ci NOT NULL DEFAULT '',
`video_src` varchar(500) CHARACTER SET utf8 COLLATE utf8_unicode_ci DEFAULT NULL,
`video_image_src` varchar(500) CHARACTER SET utf8 COLLATE utf8_unicode_ci DEFAULT NULL,
`thread_title` text CHARACTER SET utf8 COLLATE utf8_unicode_ci,
`section_title` text CHARACTER SET utf8 COLLATE utf8_unicode_ci,
`urlToPost` varchar(280) CHARACTER SET utf8 COLLATE utf8_unicode_ci DEFAULT NULL,
`posts` int(11) DEFAULT NULL,
`views` int(11) DEFAULT NULL,
`forum_name` varchar(50) CHARACTER SET utf8 COLLATE utf8_unicode_ci DEFAULT NULL,
`subject` varchar(150) CHARACTER SET utf8 COLLATE utf8_unicode_ci DEFAULT NULL,
`visited` int(11) DEFAULT '0',
`replicated` tinyint(4) DEFAULT '0',
`createdOn` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (`id`),
UNIQUE KEY `urlToPost` (`urlToPost`,`forum_name`),
KEY `thread_id` (`thread_id`),
KEY `publish` (`publish`),
KEY `createdOn` (`createdOn`),
KEY `movedToWordPress` (`movedToWordPress`),
KEY `deleted` (`deleted`),
KEY `forum_name` (`forum_name`),
KEY `subject` (`subject`),
FULLTEXT KEY `first_post` (`first_post`,`thread_title`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=78773 ;
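(For what it's worth, I suspect nearly all of the 10 GB sits in the `content` and `first_post` text columns rather than in the indexes; something like this shows where the space actually goes:)
-- Data_length vs Index_length shows where the 10 GB actually lives
SHOW TABLE STATUS LIKE 'posts';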
Thank you.
UPDATED
Note: although I am grateful for the replies, almost all of the answers have been about optimizing the current database, not about how to handle large tables in general. I can optimize the database based on the replies I got, but that does not really answer the question about handling huge databases. Right now I am talking about 70,000 records, but over the next few months, if not weeks, we are going to grow by an order of magnitude. Each record can be about 300 KB in size.
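To illustrate the kind of strategy I am asking about, here is a rough sketch of one idea, periodically moving old rows out of the hot table (the `posts_archive` name and the one-year cutoff are placeholders, not something we have actually built):
-- Archive table with the same structure and indexes as posts
CREATE TABLE IF NOT EXISTS posts_archive LIKE posts;
-- Move everything older than a year out of the hot table
-- (in production this would need to run under LOCK TABLES,
-- since MyISAM has no transactions)
INSERT INTO posts_archive
  SELECT * FROM posts WHERE createdOn < NOW() - INTERVAL 1 YEAR;
DELETE FROM posts WHERE createdOn < NOW() - INTERVAL 1 YEAR;
Is something along these lines a sane approach at this scale, or is there a better general pattern?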