I know of no open-source solution that does this "out of the box", so I believe you'd need to write a custom spider/crawler to accomplish your task. It would need to provide the following services:
Scheduling - deciding when the crawl should occur. Typically handled by the 'cron' service on Unix-like systems or by the Task Scheduler on Windows; a sample crontab entry follows this list.
Retrieval - fetching the targeted pages, using either a scripting language like Perl or a dedicated system tool like 'curl' or 'wget'.
Extraction / Normalization - stripping everything from the retrieved page except the content of interest. This compensates for sections of the target that change but are not germane to the task, such as dates or advertising. Typically accomplished with a scripting language that supports regular expressions (for trivial cases) or an HTML parser library (for more specialized extractions); the sketch after this list shows retrieval plus a trivial regex extraction.
Checksumming - reducing the extracted content to a short fingerprint (checksum) determined by its content, so that changes since the last crawl can be detected cheaply. Accomplished with a system tool (such as the Linux 'cksum' command) or a scripting language.
Change detection - comparing the checksum saved from the previous run with the checksum computed for the current retrieval; a mismatch means the content changed. Again, typically done in a scripting language (see the checksum/change-detection sketch below).
Alerting - informing users of detected changes, typically via email or text message (a minimal email sketch follows the list).
State management - storing the target URIs, extraction rules, user preferences and the checksums from the previous run. Either configuration files or a database (such as MySQL) can be used; a possible plain-text watch list is sketched below.
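
For the scheduling piece, a single crontab entry is usually enough. A minimal sketch, assuming an hourly check and made-up script and log paths:

    # Edit with 'crontab -e'. Fields: minute hour day-of-month month day-of-week command.
    # Run the (hypothetical) checker script at 15 minutes past every hour:
    15 * * * * /usr/bin/perl /home/me/bin/check_pages.pl >> /home/me/log/check_pages.log 2>&1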
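
Retrieval and extraction can share one small script. Here's a sketch, assuming the libwww-perl distribution (LWP::Simple) is installed and using a made-up URL and made-up comment markers; for anything beyond a trivial regex you'd swap in an HTML parser such as HTML::TreeBuilder:

    #!/usr/bin/perl
    # Fetch one page and keep only the region between two stable markers.
    use strict;
    use warnings;
    use LWP::Simple qw(get);

    my $url  = 'http://www.example.com/status.html';   # placeholder target
    my $html = get($url) or die "Could not fetch $url\n";

    # Trivial regex extraction; use an HTML parser for anything fancier.
    my ($content) = $html =~ m{<!-- BEGIN CONTENT -->(.*?)<!-- END CONTENT -->}s;
    die "Extraction markers not found in $url\n" unless defined $content;

    # Normalize whitespace so cosmetic reflows don't register as changes.
    $content =~ s/\s+/ /g;

    print $content, "\n";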
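
Checksumming and change detection are only a few more lines. A sketch using the core Digest::MD5 module, reading the extracted content on standard input and keeping the previous checksum in a made-up state file:

    #!/usr/bin/perl
    # Compare the current content's checksum with the one saved last run.
    use strict;
    use warnings;
    use Digest::MD5 qw(md5_hex);

    my $state_file = '/home/me/var/example.com.md5';      # placeholder path
    my $content    = do { local $/; <STDIN> } // '';       # piped from the extractor
    my $new_sum    = md5_hex($content);

    # Read the checksum recorded by the previous run, if any.
    my $old_sum = '';
    if (open my $in, '<', $state_file) {
        chomp($old_sum = <$in> // '');
        close $in;
    }

    if ($new_sum ne $old_sum) {
        print "Content changed (old=$old_sum new=$new_sum)\n";
        open my $out, '>', $state_file or die "Cannot write $state_file: $!\n";
        print {$out} "$new_sum\n";
        close $out;
    }

You'd chain it to the extractor with a pipe, e.g. perl extract.pl | perl detect_change.pl.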
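
Alerting can be as simple as piping a message to the local mail system. A sketch that assumes a working MTA at /usr/sbin/sendmail and uses placeholder addresses:

    #!/usr/bin/perl
    # Send a change notification through the local sendmail binary.
    use strict;
    use warnings;

    sub alert {
        my ($to, $subject, $body) = @_;
        open my $mail, '|-', '/usr/sbin/sendmail', '-t'
            or die "Cannot run sendmail: $!\n";
        print {$mail} "To: $to\n",
                      "Subject: $subject\n",
                      "\n",
                      "$body\n";
        close $mail or warn "sendmail exited with status $?\n";
    }

    alert('me@example.com', 'Watched page changed',
          'http://www.example.com/status.html was updated since the last crawl.');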
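
For state management, a plain-text configuration file is often enough for a handful of targets. The format below is just one possibility (pipe-delimited: URI, extraction pattern, alert recipient), with the checksums kept in per-target state files as in the sketch above:

    # watchlist.conf (hypothetical) - one target per line, pipe-delimited:
    # URI | extraction-regex | alert recipient
    http://www.example.com/status.html|<!-- BEGIN CONTENT -->(.*?)<!-- END CONTENT -->|me@example.com
    http://www.example.com/prices.html|<table id="prices">(.*?)</table>|me@example.com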
Please note that this list of services describes the system in the abstract, so it sounds a lot more complicated than the actual tool will be. I've written several systems like this before, and I'd expect a simple solution written in Perl (using standard Perl modules) and running on Linux to take a hundred lines or so for a couple of target sites, depending on extraction complexity.