2

I have a problem coming up with an algorithm. Will you guys help me out here?

I have a file which is huge and thus cannot be loaded at once. It contains duplicate data (generic data, possibly strings). I need to remove the duplicates.

Anton K.

4 Answers

2

One easy but slow solution is to read the 1st gigabyte into a HashSet. Then read the rest of the file sequentially and drop every duplicate string that is already in the set. Then read the 2nd gigabyte into memory (a HashSet) and remove duplicates from the file again, and again... It's quite easy to program, and if you only need to do it once, it could be enough.
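A rough sketch of this chunked approach, assuming the records are plain lines of text; the input file name and chunk size are placeholders, and a LinkedHashSet is used so each chunk keeps its original order:

```java
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.LinkedHashSet;
import java.util.Set;

// Sketch only: file name and chunk size are placeholders, records are assumed to be lines.
public class ChunkedDedupeSketch {
    static final int CHUNK_LINES = 1_000_000;   // stand-in for "one gigabyte"

    public static void main(String[] args) throws IOException {
        Path current = Paths.get("huge.txt");   // hypothetical input file
        long done = 0;                          // lines already deduplicated in earlier passes
        while (true) {
            Set<String> chunk = new LinkedHashSet<>();   // keeps the chunk's original order
            Path next = Files.createTempFile("dedupe", ".txt");
            rewrite(current, next, done, chunk);
            if (chunk.isEmpty()) {              // nothing left beyond the deduplicated prefix
                Files.delete(next);
                break;
            }
            done += chunk.size();
            current = next;                     // the next pass works on the rewritten copy
        }                                       // (a real version would also clean up old temp copies)
        System.out.println("Deduplicated file: " + current);
    }

    // Copies `in` to `out`: the first `done` lines unchanged, the next CHUNK_LINES distinct
    // lines gathered into `chunk` (written once each), and every later line dropped if the
    // chunk already contains it.
    static void rewrite(Path in, Path out, long done, Set<String> chunk) throws IOException {
        try (BufferedReader r = Files.newBufferedReader(in);
             BufferedWriter w = Files.newBufferedWriter(out)) {
            String line;
            long lineNo = 0;
            while ((line = r.readLine()) != null) {
                boolean keep;
                if (lineNo < done) {
                    keep = true;                  // already-deduplicated prefix
                } else if (chunk.size() < CHUNK_LINES) {
                    keep = chunk.add(line);       // building the chunk; add() is false for repeats
                } else {
                    keep = !chunk.contains(line); // rest of the file: drop what the chunk holds
                }
                if (keep) {
                    w.write(line);
                    w.newLine();
                }
                lineNo++;
            }
        }
    }
}
```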

bugs_
  • Nice suggestion. After reading a chunk of records (1GB or any other size), you only need to scan forward from there. If records cannot be removed in place, do this as a series of file copies. Don't forget to scan for duplicates in each chunk before scanning the rest of the file! – Ted Hopp May 22 '11 at 16:48
  • A HashSet isn't ordered, so the initial order is lost, isn't it? LinkedHashSet is a solution, though. – Anton K. May 22 '11 at 16:48
  • HashSet is "ordered" according to hash. Initial order is lost. = you must read n-th gigabite in memory and than read whole file and remove duplicits. – bugs_ May 22 '11 at 16:53
  • Load a gigabyte into a HashSet and iterate through the file, comparing as I go? – Anton K. May 22 '11 at 16:55
  • 1
    Yes. And then the second gigabyte into memory and iterate the file again, then the third gigabyte... I know that it is not optimal, but it is simple. – bugs_ May 22 '11 at 17:00
1

You can calculate a hash for each record and keep that in a Map<Hash, Set<Position>>.

Read in the file building the map; when you find that the hash key already exists in the map, seek to the stored position(s) to double-check (and if the records are not equal, add the new location to the mapped set).
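A rough sketch of this idea, assuming line-based records; the file names are placeholders, String.hashCode() stands in for whatever hash you pick, and on a hash hit the earlier record is re-read from disk to confirm a real duplicate:

```java
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch only: file names are placeholders, records are assumed to be single-byte-encoded lines.
public class HashIndexDedupeSketch {
    public static void main(String[] args) throws IOException {
        // hash -> file offsets of the distinct records that hash to that value
        Map<Integer, Set<Long>> index = new HashMap<>();
        try (RandomAccessFile file = new RandomAccessFile("huge.txt", "r");
             BufferedWriter out = new BufferedWriter(new FileWriter("deduped.txt"))) {
            long pos = 0;                            // offset where the current record starts
            String line;
            while ((line = file.readLine()) != null) {
                long after = file.getFilePointer();  // where the sequential scan resumes
                Set<Long> candidates = index.computeIfAbsent(line.hashCode(), h -> new HashSet<>());
                boolean duplicate = false;
                for (long offset : candidates) {     // same hash: re-read and compare for real
                    file.seek(offset);
                    if (line.equals(file.readLine())) {
                        duplicate = true;
                        break;
                    }
                }
                file.seek(after);                    // back to the sequential scan
                if (!duplicate) {
                    candidates.add(pos);             // first occurrence: remember it and keep it
                    out.write(line);
                    out.newLine();
                }
                pos = after;
            }
        }
    }
}
```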

ratchet freak
  • Yes, this sounds good. If you have enough memory for all the hashes, it will be a simple and good solution. – bugs_ May 22 '11 at 17:43
  • Actually the hash can be limited arbitrarily (balanced against collisions), but the locations might explode (that's a long for each unique record). – ratchet freak May 22 '11 at 17:45
0

Second solution:

  1. Create a new file where you write pairs <String, Position in original file>.
  2. Then use classic sorting for big files, sorting by the String (sorting big files = sort small parts of the file in memory, then merge them together); remove the duplicates during this merge.
  3. Then rebuild the original order: sort it again, but according to "Position in original file" (see the sketch below).
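A rough sketch of these three steps, with placeholder file names and run size, assuming line-based records; for brevity the final re-sort by position is done in memory, while on a truly huge file it would be another external sort:

```java
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

// Sketch only: file names and run size are placeholders, records are assumed to be lines.
public class SortDedupeSketch {
    static final int RUN_LINES = 1_000_000;                 // lines per in-memory run

    record Rec(String line, long pos) {}

    public static void main(String[] args) throws IOException {
        List<Path> runs = writeSortedRuns(Paths.get("huge.txt"));   // step 1
        List<Rec> unique = mergeAndDedupe(runs);                    // step 2
        unique.sort(Comparator.comparingLong(Rec::pos));            // step 3: back to original order
        try (BufferedWriter w = Files.newBufferedWriter(Paths.get("deduped.txt"))) {
            for (Rec rec : unique) { w.write(rec.line()); w.newLine(); }
        }
    }

    // Step 1: sort chunks of the input by line and write each as a temporary run file.
    static List<Path> writeSortedRuns(Path input) throws IOException {
        List<Path> runs = new ArrayList<>();
        try (BufferedReader r = Files.newBufferedReader(input)) {
            List<Rec> chunk = new ArrayList<>();
            long pos = 0;
            String line;
            while ((line = r.readLine()) != null) {
                chunk.add(new Rec(line, pos++));
                if (chunk.size() == RUN_LINES) { runs.add(flush(chunk)); chunk = new ArrayList<>(); }
            }
            if (!chunk.isEmpty()) runs.add(flush(chunk));
        }
        return runs;
    }

    static Path flush(List<Rec> chunk) throws IOException {
        chunk.sort(Comparator.comparing(Rec::line));
        Path run = Files.createTempFile("run", ".txt");
        try (BufferedWriter w = Files.newBufferedWriter(run)) {
            for (Rec rec : chunk) { w.write(rec.pos() + "\t" + rec.line()); w.newLine(); }
        }
        return run;
    }

    // Step 2: k-way merge of the sorted runs, keeping one occurrence of each line.
    static List<Rec> mergeAndDedupe(List<Path> runs) throws IOException {
        record Head(Rec rec, BufferedReader reader) {}
        Comparator<Head> byLine = Comparator.comparing(h -> h.rec().line());
        PriorityQueue<Head> heap = new PriorityQueue<>(byLine);
        for (Path run : runs) {
            BufferedReader r = Files.newBufferedReader(run);
            Rec rec = next(r);
            if (rec != null) heap.add(new Head(rec, r));
        }
        List<Rec> unique = new ArrayList<>();
        String last = null;
        while (!heap.isEmpty()) {
            Head head = heap.poll();
            if (!head.rec().line().equals(last)) {          // new line value: keep it once
                unique.add(head.rec());
                last = head.rec().line();
            }
            Rec rec = next(head.reader());
            if (rec != null) heap.add(new Head(rec, head.reader()));
            else head.reader().close();
        }
        return unique;
    }

    static Rec next(BufferedReader r) throws IOException {
        String s = r.readLine();
        if (s == null) return null;
        int tab = s.indexOf('\t');
        return new Rec(s.substring(tab + 1), Long.parseLong(s.substring(0, tab)));
    }
}
```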
bugs_
0

Depending on how the input is laid out in the file, i.e. if each line can be represented as a row of data:

Another way is to use a database server: insert your data into a database table with a unique-value column, reading from the file and inserting into the database as you go. At the end, the database will contain only the unique lines/rows.
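A rough sketch of this approach using an embedded H2 database over JDBC (the driver is assumed to be on the classpath; the JDBC URL, table name and file names are placeholders):

```java
import java.io.BufferedReader;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;

// Sketch only: assumes the H2 driver is on the classpath; URL, table and file names are placeholders.
public class DbDedupeSketch {
    public static void main(String[] args) throws Exception {
        try (Connection db = DriverManager.getConnection("jdbc:h2:./dedupe");
             BufferedReader in = Files.newBufferedReader(Paths.get("huge.txt"))) {
            try (Statement ddl = db.createStatement()) {
                ddl.execute("CREATE TABLE IF NOT EXISTS lines(content VARCHAR NOT NULL UNIQUE)");
            }
            try (PreparedStatement insert =
                     db.prepareStatement("INSERT INTO lines(content) VALUES (?)")) {
                String line;
                while ((line = in.readLine()) != null) {
                    try {
                        insert.setString(1, line);
                        insert.executeUpdate();
                    } catch (SQLException duplicate) {
                        // the UNIQUE constraint rejected a line we already have: skip it
                    }
                }
            }
            // the table now holds every distinct line exactly once
        }
    }
}
```

Note that a plain unique column loses the original line order; if the order matters, an extra auto-incrementing position column would be needed so the rows can be read back in insertion order.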

fmucar