I need to download a large JSON string. I'm using aQuery for this. Then I need to parse this string into a list of objects (10k+, using the Gson library) and insert that list into a database (created with GreenDAO). But before inserting I need to sort the list by a string field of my objects. I'm using the Collator class for sorting because this field may be in different languages. The question is: how do I do all of this using as little memory as possible?
For now I download a String (+String; I've also tried to use streams), then parse it (+List), then sort it (some more objects). I do this in a separate thread, but even when it's done the memory is not freed. I think this could be solved if I could sort the data once it's already in the database (not when selecting it, that's too slow), but I don't know how.
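To make the database-side sorting idea concrete, this is roughly what I have in mind: precompute a binary collation key per title and store it in an indexed column, so SQLite can ORDER BY it directly. This is only a sketch; setSortKey() and the SORT_KEY column are hypothetical and do not exist in my schema yet:

import java.text.CollationKey;
import java.text.Collator;
import java.util.Locale;

// Sketch: store a binary collation key next to each title so the database
// can sort without calling back into Java. setSortKey() is hypothetical.
Collator collator = Collator.getInstance(Locale.GERMAN);
collator.setStrength(Collator.PRIMARY);
for (ArticleTitle title : titles) {
    CollationKey key = collator.getCollationKey(title.getTitle());
    title.setSortKey(key.toByteArray()); // byte-wise order matches Collator order
}
// A query like "... ORDER BY SORT_KEY" would then return rows already sorted.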
Here is some code. This is the data loading from a file. The same memory issue occurs when loading from a file, even though there I use a Reader over the file instead of holding the JSON string in memory.
public static void getEnciclopediaDataFromFile() {
    mRequestStates.put("enc", true);
    try {
        // Two independent readers over the same file: a Reader can only be
        // consumed once, and the parser reads titles and content separately.
        EncyclopediaParser parser = new EncyclopediaParser(
                ResourceManager.getResourceManager().loadFile("enc_data"),
                ResourceManager.getResourceManager().loadFile("enc_data"),
                1361531132);
        parser.start();
    } catch (Exception e) {
        mRequestStates.put("enc", false);
        EventBus.getDefault().post(EVENT_ENCYCLOPEDIA_DOWNLOAD_COMPLETE);
    }
}
Here is the parser thread. There are two constructors: one for loading from the web (a String param) and one for loading from a file (Reader params).
private static class EncyclopediaParser extends Thread {

    // -----------------------------------------------------------------------
    //
    // Fields
    //
    // -----------------------------------------------------------------------

    private String mJsonData;
    private Reader mTitlesReader;
    private Reader mContentReader;
    private long mUpdateTime;

    // -----------------------------------------------------------------------
    //
    // Constructors
    //
    // -----------------------------------------------------------------------

    // Web variant: the whole JSON document is already in memory.
    public EncyclopediaParser(String jsonData, long updateTime) {
        mJsonData = jsonData;
        mUpdateTime = updateTime;
        this.setPriority(Thread.NORM_PRIORITY - 1);
    }

    // File variant: two readers over the same file, consumed separately.
    public EncyclopediaParser(Reader titlesReader, Reader contentReader, long updateTime) {
        mTitlesReader = titlesReader;
        mContentReader = contentReader;
        mUpdateTime = updateTime;
        this.setPriority(Thread.NORM_PRIORITY - 1);
    }

    // -----------------------------------------------------------------------
    //
    // Methods
    //
    // -----------------------------------------------------------------------

    @Override
    public void run() {
        Type type;
        try {
            List<ArticleContent> content;
            type = new TypeToken<List<ArticleContent>>(){}.getType();
            if (mContentReader == null)
                content = new Gson().fromJson(mJsonData, type);
            else
                content = new Gson().fromJson(mContentReader, type);

            List<ArticleTitle> titles;
            type = new TypeToken<List<ArticleTitle>>(){}.getType();
            if (mTitlesReader == null)
                titles = new Gson().fromJson(mJsonData, type);
            else
                titles = new Gson().fromJson(mTitlesReader, type);

            for (ArticleTitle title : titles)
                title.setTitle(title.getTitle().trim());

            // Sort with the Collator-backed comparator, then persist the order.
            TitlesComparator titlesComparator = new TitlesComparator();
            Collections.sort(titles, titlesComparator);
            for (int i = 0; i < titles.size(); ++i)
                titles.get(i).setOrderValue((long) i);

            // Create section index data from the first letter of each title.
            Collator collator = Collator.getInstance(Locale.GERMAN);
            collator.setStrength(Collator.PRIMARY);
            ArrayList<String> sectionNamesList = new ArrayList<String>();
            ArrayList<Integer> sectionIndexesList = new ArrayList<Integer>();
            String prevLetter = "";
            for (int i = 0; i < titles.size(); ++i) {
                String title = titles.get(i).getTitle();
                if (title.length() > 0) {
                    String firstLetter = title.substring(0, 1);
                    if (!Character.isLetter(firstLetter.charAt(0))) {
                        // All non-letter titles go into a single "#" section.
                        if (!sectionNamesList.contains("#")) {
                            sectionNamesList.add("#");
                            sectionIndexesList.add(i);
                        }
                    } else if (collator.compare(firstLetter, prevLetter) > 0) {
                        sectionNamesList.add(firstLetter.toUpperCase(Locale.GERMAN));
                        sectionIndexesList.add(i);
                    }
                    prevLetter = firstLetter;
                }
            }

            String[] sectionNames = sectionNamesList.toArray(new String[sectionNamesList.size()]);
            Integer[] sectionIndexes = sectionIndexesList.toArray(new Integer[sectionIndexesList.size()]);
            AppData.setSectionIndexes(Utils.convertIntegers(sectionIndexes));
            AppData.setSectionNames(sectionNames);

            GreenDAO.getGreenDAO().insertArticles(titles, content);
            AppData.setEncyclopediaUpdateTime(mUpdateTime);
            mRequestStates.put("enc", false);
        } catch (Exception e) {
            Log.e("Server", e.toString());
        } finally {
            // Close the readers even if parsing fails.
            try {
                if (mTitlesReader != null)
                    mTitlesReader.close();
                if (mContentReader != null)
                    mContentReader.close();
            } catch (IOException ignored) {
            }
            EventBus.getDefault().post(EVENT_ENCYCLOPEDIA_DOWNLOAD_COMPLETE);
        }
    }
}
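TitlesComparator is not shown above; it is just a thin wrapper around a Collator, roughly like this:

private static class TitlesComparator implements Comparator<ArticleTitle> {
    // Locale-aware comparison, since the titles may be in different languages.
    private final Collator mCollator;

    public TitlesComparator() {
        mCollator = Collator.getInstance(Locale.GERMAN);
        mCollator.setStrength(Collator.PRIMARY);
    }

    @Override
    public int compare(ArticleTitle lhs, ArticleTitle rhs) {
        return mCollator.compare(lhs.getTitle(), rhs.getTitle());
    }
}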
All GreenDAO objects are static. Parsing is done only on the first launch (from file) and when the "update" button is clicked (from web). I've noticed that even if I relaunch my app after the initial (from file) parsing is done, it still uses as much memory as right after that first parse finished.
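One direction I'm considering to keep memory down: stream the JSON with Gson's JsonReader and insert in batches, combined with the precomputed sort key above so the sorting happens in SQLite rather than in Java. A sketch (it assumes the JSON is a top-level array; insertBatch() is a hypothetical helper around GreenDAO's insertInTx()):

import com.google.gson.Gson;
import com.google.gson.stream.JsonReader;

// Sketch: parse one object at a time instead of materializing the whole list.
Gson gson = new Gson();
JsonReader reader = new JsonReader(mTitlesReader);
List<ArticleTitle> batch = new ArrayList<ArticleTitle>(500);
reader.beginArray();
while (reader.hasNext()) {
    ArticleTitle item = gson.fromJson(reader, ArticleTitle.class);
    batch.add(item);
    if (batch.size() == 500) {
        insertBatch(batch); // hypothetical: wraps a DAO insertInTx(batch) call
        batch.clear();
    }
}
if (!batch.isEmpty())
    insertBatch(batch); // flush the remainder
reader.endArray();
reader.close();

This way only one batch of objects is alive at a time; the trade-off is that I can no longer sort in Java before inserting, which is why the sort-key column above would be needed.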