250TB is a lot of data. I will give you an example of how I would approach this task in an enterprise setting that is fairly budget-conscious (since I assume you want to do this on the cheap), but not so budget-conscious that it insists on finding the best free products for the job.
Just an FYI - I am writing this with eight years of professional experience in both the storage world and the backup/disaster recovery world.
I get the feeling this school project is more about writing up how you would do this than about actually doing it?
First of all, the storage.
Since you did not mention any specific availability or redundancy requirements, I would suggest building a basic JBOD enclosure of "NearLine" 3TB SATA disks. At your estimate of 42TB online you would need at least 14 of them, ignoring RAID overhead. For example, if you chose RAID-6 with a 16-disk RAID group size, you would need at least 16 disks to get 42TB usable, and you would still have no hot spares (a quick back-of-the-envelope calculation is sketched below). Until I had a better idea of your reliability, performance, redundancy, and availability requirements I could not recommend specific disks, RAID types, or controllers.
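Just so the write-up can show its work, here is a rough back-of-the-envelope sketch (Python, using the numbers above as assumptions) of how that 16-disk figure falls out:

```python
# Rough capacity math for the RAID-6 example above. The numbers are the
# assumptions from this answer (3TB disks, 16-disk groups, 42TB target) and
# ignore filesystem overhead and the TB-vs-TiB difference.
DISK_TB = 3            # "NearLine" 3TB SATA disks
GROUP_SIZE = 16        # disks per RAID-6 group
PARITY_PER_GROUP = 2   # RAID-6 costs two disks' worth of parity per group
TARGET_TB = 42         # estimated online capacity

usable_per_group = (GROUP_SIZE - PARITY_PER_GROUP) * DISK_TB
groups_needed = -(-TARGET_TB // usable_per_group)   # ceiling division
disks_needed = groups_needed * GROUP_SIZE

print(f"{usable_per_group} TB usable per {GROUP_SIZE}-disk RAID-6 group")
print(f"{groups_needed} group(s) -> {disks_needed} disks, hot spares not included")
```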
In its very simplest form you could build an array like this out of fairly cheap commodity hardware and Linux, along with some open source tools like LVM, FreeNAS, OpenFiler, etc. (a rough sketch of the Linux approach is below) - beyond that you are starting to get into the pricey enterprise storage space.
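To make the Linux option a bit more concrete, here is a minimal sketch of that kind of build, assuming software RAID with mdadm and LVM on top - the device names, disk count, and choice of XFS are placeholders I've picked for illustration, not part of any recommendation:

```python
#!/usr/bin/env python3
# Very rough sketch of the "commodity Linux box" approach: one big RAID-6 md
# device with LVM on top. Device names, disk count, and filesystem are all
# assumptions - adapt (and test) before pointing this at real disks.
import subprocess

DISKS = [f"/dev/sd{c}" for c in "bcdefghijklmnopq"]  # 16 hypothetical disks

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 16-disk RAID-6 gives ~42TB usable with 3TB disks (see the math above).
run(["mdadm", "--create", "/dev/md0", "--level=6",
     f"--raid-devices={len(DISKS)}"] + DISKS)

# Put LVM on top so you can grow or carve up the space later.
run(["pvcreate", "/dev/md0"])
run(["vgcreate", "archive_vg", "/dev/md0"])
run(["lvcreate", "-l", "100%FREE", "-n", "archive_lv", "archive_vg"])
run(["mkfs.xfs", "/dev/archive_vg/archive_lv"])
```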
Also keep in mind that going the cheap commodity hardware route leaves redundancy concerns beyond the disks themselves unaddressed (power supplies, controllers, the operating system, etc.).
In the enterprise space I will assume you need substantial read/write performance and high availability. As an example, you could use a NetApp enterprise storage array with highly available, clustered, redundant controllers. Attached to these would be shelves of 24 x 600GB 15k RPM SAS disks. To get 42TB out of a setup like this, which would perform extremely well and be highly available/redundant, you would need (assuming 64-bit NetApp aggregates with a size limit above 16TB) an aggregate containing roughly five 16-disk RAID groups at the default RAID-DP (double-parity) RAID level.
That's at least 80 15k RPM 600GB SAS disks across 4 shelves of storage, attached to the redundant controller pair.
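Same kind of napkin math for the NetApp side (again, all assumed numbers; a real sizing exercise would also account for right-sized disk capacity, spares, and the WAFL reserve):

```python
# Aggregate sizing estimate for the NetApp example above (assumed numbers).
DISK_TB = 0.6          # 600GB 15k SAS disks
GROUP_SIZE = 16        # disks per RAID-DP group
PARITY_PER_GROUP = 2   # RAID-DP: one parity disk + one double-parity disk
SHELF_SLOTS = 24       # disks per shelf
TARGET_TB = 42

usable_per_group = (GROUP_SIZE - PARITY_PER_GROUP) * DISK_TB   # 8.4 TB
groups = int(-(-TARGET_TB // usable_per_group))                # ceiling division
disks = groups * GROUP_SIZE
shelves = -(-disks // SHELF_SLOTS)

print(f"{groups} RAID-DP groups -> {disks} disks -> {shelves} shelves")
```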
At this point you need racks and some serious power and cooling, and your budget is well over $200k.
Now for archiving.
You have a plethora of options here; there are countless products and methods you could use to accomplish this part of the task. As such, I am going to write this from the perspective of a specific application I know can do the job well: IBM's Tivoli Storage Manager (TSM). I am also going to assume that you don't have any off-site disaster recovery requirements and simply need to store lots of data once disk has become too expensive.
So to set up TSM you need another server, as well as some number of tape drives and/or an automated tape library (ATL).
The server where the data is mounted would run a TSM client, and you can schedule standard backup or archive jobs depending on your needs. That scheduled job could be scripted or otherwise set up to archive data to tape and subsequently delete it from disk - leaving it available offline on tape. For example, you could have the script archive any data older than 90 days to tape and then delete it (a rough sketch of such a script follows). This is another area where there are countless ways to accomplish the task.
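As a rough illustration only, here is a minimal sketch of the 90-day idea, assuming the standard TSM backup/archive client (dsmc) is installed on that server; the /data path and the cutoff are placeholders, and you would want to verify the dsmc options against your client version (and test somewhere safe) before relying on anything like this:

```python
#!/usr/bin/env python3
# Minimal sketch of "archive anything older than 90 days, then delete it".
# Assumes the TSM backup/archive client (dsmc) is installed; /data and the
# 90-day cutoff are placeholders.
import os
import subprocess
import time

ARCHIVE_ROOT = "/data"          # hypothetical mount point holding the data
CUTOFF_SECS = 90 * 24 * 3600    # archive anything not modified in 90 days
now = time.time()

candidates = []
for dirpath, _dirnames, filenames in os.walk(ARCHIVE_ROOT):
    for name in filenames:
        path = os.path.join(dirpath, name)
        try:
            if now - os.path.getmtime(path) > CUTOFF_SECS:
                candidates.append(path)
        except OSError:
            pass  # file vanished or is unreadable; skip it

for path in candidates:
    # "dsmc archive -deletefiles" archives the file to the TSM server and
    # removes the local copy once the archive succeeds.
    subprocess.run(["dsmc", "archive", path, "-deletefiles"], check=True)
```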
As for the hardware side of things - LTO tape might be the best option, and LTO-5 can hold around 1.5TB of uncompressed data per cartridge. So since you need over 200TB of data on tape, with the other ~50TB on disk, you are looking at needing at least 140 tapes for this project.
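That cartridge count is just arithmetic on native capacity (no copy pools, reclamation headroom, or compression assumed):

```python
# Tape count estimate using native (uncompressed) LTO-5 capacity.
TOTAL_TB = 250          # total data set
ONLINE_TB = 42          # the portion that stays on disk
LTO5_NATIVE_TB = 1.5    # native capacity per LTO-5 cartridge

on_tape_tb = TOTAL_TB - ONLINE_TB
tapes = -(-on_tape_tb // LTO5_NATIVE_TB)   # ceiling division
print(f"~{on_tape_tb} TB on tape -> at least {tapes:.0f} LTO-5 cartridges")
# ~139 cartridges before any scratch or headroom, which is how you land at 140+.
```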
Bringing it all together
So we have a storage array of some sort and a "backup infrastructure" in place. Let's assume all of this life-cycle work is happening on one server. You need a way to tie it all together. Is the disk going to be attached to the server over a SAN? Over the network? What protocol will you use? All of these decisions impact what type of hardware you would need. Just looking at the tape requirements, you would likely need at least a small ATL, which would pretty much guarantee a Fibre Channel SAN, along with SAN switches, adapters, etc. You would need network infrastructure on top of that for any network communication requirements.
The more I wrote, the more I realized there is no way this project could be real, and I got less and less specific. Keep in mind, this was written with a number of wild assumptions and very conservative estimates. The TL;DR version is: you would need a ton of hardware, loads of expertise, and lots of money to get this done, even if you did it in the cheapest, least reliable way possible. If you need any more help or information feel free to ping me.