The Hadoop ecosystem includes a tool called Sqoop that is designed to do exactly what you describe: pull data from an RDBMS into Hadoop. It supports several methods of doing incremental updates. It requires a JDBC connection to your database, and for some databases it can use high-performance, vendor-specific "direct" connectors. It's one of the better tools in Hadoop.
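For example, a first incremental pull from SQL Server looks roughly like this (the connection string, table, and column names are placeholders for whatever you actually have):

    # Pull only rows whose id is greater than the last value imported;
    # Sqoop prints the new --last-value to use on the next run.
    sqoop import \
      --connect "jdbc:sqlserver://dbhost:1433;databaseName=mydb" \
      --username myuser -P \
      --table customers \
      --target-dir /data/customers \
      --incremental append \
      --check-column id \
      --last-value 0

A saved Sqoop job (sqoop job --create ...) can remember the last value between runs so you don't have to track it yourself.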
When I say "into Hadoop" this could mean several things, but typically either a) as a set of files stored on the Hadoop distributed file system (HDFS), or b) data stored in hBase. And technically, hBase is just another way of storing files on HDFS.
Hive is a layer on top of HDFS that lets you treat the tables you export to HDFS as though they were still in your SQL Server database. Well, kinda. Hive can query a number of file formats using a SQL-like language (HiveQL).
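As a sketch: if Sqoop dropped comma-delimited files under /data/customers, you could lay a Hive table over them like this (the schema and path are assumptions, match them to your export):

    -- External table: Hive just points at the files Sqoop wrote,
    -- it doesn't copy them or take ownership of them.
    CREATE EXTERNAL TABLE customers (
      id     INT,
      name   STRING,
      email  STRING
    )
    ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
    LOCATION '/data/customers';

    -- Then query with ordinary SQL-ish syntax.
    SELECT COUNT(*) FROM customers;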
HDFS has one particular challenge that you need to understand: there's no way to update a row in place, as there is in a regular database. An HDFS file is a "write once, read many" design. Typically, you segment a dataset into multiple files along some natural partition, so that if you do need to update a record, you only have to rewrite the files for that partition -- year + month is a common partitioning scheme.
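In Hive terms that partitioning might look like the sketch below (column and partition names invented); each partition maps to its own HDFS directory, so fixing one month only touches that directory's files:

    -- Each (yr, mth) pair becomes its own directory under the table's
    -- location, e.g. /data/orders/yr=2015/mth=6/
    CREATE EXTERNAL TABLE orders (
      id      INT,
      amount  DOUBLE
    )
    PARTITIONED BY (yr INT, mth INT)
    ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
    LOCATION '/data/orders';

    -- Register a partition once its files land on HDFS.
    ALTER TABLE orders ADD PARTITION (yr=2015, mth=6);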
So if you're Sqoop'ing a database whose records never change, then you can simply append to your HDFS files. This is fine for transactions, logs, and other data like that, as it typically never gets changed. But records that do get updated (a customer name or email, for example) make for a more difficult problem.
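To be fair, Sqoop has a second incremental mode aimed at that harder case. Roughly (the timestamp column and key here are assumptions about your schema), it re-imports rows changed since the last run and folds them into the existing files by primary key:

    # "lastmodified" mode re-imports rows whose updated_at changed since
    # --last-value, then merges them into the target dir by id,
    # replacing the old copy of each row rather than appending a duplicate.
    sqoop import \
      --connect "jdbc:sqlserver://dbhost:1433;databaseName=mydb" \
      --username myuser -P \
      --table customers \
      --target-dir /data/customers \
      --incremental lastmodified \
      --check-column updated_at \
      --last-value "2015-06-01 00:00:00" \
      --merge-key id

Under the covers that's the same as running the separate sqoop merge tool, so files still get rewritten; it just hides the bookkeeping from you.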
HBase makes this HDFS limitation go away by transparently managing updates to existing records. But HBase is a key-value store; the key might be whatever your RDBMS's primary key is, and the value is the rest of the record. This isn't terrible, but it can be cumbersome.
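Sqoop can load straight into HBase too, keyed on whatever column you pick (the table and column-family names here are invented):

    # Import directly into an HBase table, keyed on the RDBMS primary key.
    # Re-importing a changed row simply overwrites the old cell values.
    sqoop import \
      --connect "jdbc:sqlserver://dbhost:1433;databaseName=mydb" \
      --username myuser -P \
      --table customers \
      --hbase-table customers \
      --column-family d \
      --hbase-row-key id

After that you can poke at single records from the hbase shell (e.g. get 'customers', '42'), which is where the key-value shape becomes very visible.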
I believe the latest versions of Hive (0.14 and up), and possibly Impala, which is similar in function to Hive, allow updates while still storing data in the more flexible file formats.
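If you go that route, the rough shape (with invented names, and assuming your Hive install has transactions switched on) is a bucketed ORC table flagged as transactional:

    -- ACID updates require ORC, bucketing, and the transactional flag
    -- (plus the Hive transaction manager enabled on the server side).
    CREATE TABLE customers_acid (
      id     INT,
      name   STRING,
      email  STRING
    )
    CLUSTERED BY (id) INTO 4 BUCKETS
    STORED AS ORC
    TBLPROPERTIES ('transactional' = 'true');

    UPDATE customers_acid SET email = 'new@example.com' WHERE id = 42;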
So Sqoop is the tool you want, but think carefully about what you'll want to do with the data once it's in Hadoop -- it's a very, very different thing than just a database that can get really big.