
I'm currently working to refactor a script that has a reliance on three hashes (simple hashes), initialized at the beginning of the script. In total, these hash values take up over a hundred lines in the script. In order to improve overall readability and cleanliness of the code, should I store this information outside of the script and read in the information at the start? The data itself should be mostly static (individual entries may have to be changed on occasion).

If yes, how would I go about storing it in a database/suggested storage medium? (I'm a noob when it comes to SQL).

Cooper
  • 679
  • 3
  • 9

3 Answers

3

Sounds like you have configuration data. The Mastering Perl book has a chapter discussing several choices.

daxim
  • 39,270
  • 4
  • 65
  • 132
3

I would probably use something like JSON or one of the formats supported by Config::Any. For simple mappings an INI format will probably suffice. I tend to use JSON for more complex scenarios.
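To illustrate the JSON approach, here is a minimal sketch that reads a hash out of an external file at startup. The file name `config.json` and the `colors` key are made up for the example; the `JSON` module's `decode_json` does the actual parsing.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use JSON;    # install from CPAN if not already available

# Slurp the whole file (hypothetical name: config.json)
open my $fh, '<', 'config.json' or die "Can't open config.json: $!";
my $json_text = do { local $/; <$fh> };
close $fh;

# decode_json returns a reference; dereference into a plain hash
# if the rest of the script expects one
my $config = decode_json($json_text);
my %colors = %{ $config->{colors} };
```

where `config.json` might look like:

```json
{ "colors": { "error": "red", "warning": "yellow", "ok": "green" } }
```

Editing an individual entry is then a one-line change in the JSON file, with no Perl syntax to get wrong.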

phaylon
  • 1,914
  • 14
  • 13
1

I would not store it in a separate file or database just because it will slow down your program for no good reason. Just move your existing initialization code to a separate `constants.pl` file and in your main file do `require "constants.pl"`.

Don't forget to change your hashes' declaration from `my` to `our` to make them visible across files.
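As a sketch of what that looks like (file name and hash contents are just placeholders), the separate file holds the initialization and must end in a true value so `require` succeeds:

```perl
# constants.pl -- holds the hash initializations
our %colors = (
    error   => 'red',
    warning => 'yellow',
    ok      => 'green',
);
1;    # require() needs the file to return a true value
```

and the main script pulls it in:

```perl
#!/usr/bin/perl
use strict;
use warnings;

our %colors;              # declare the package variable before loading it
require "constants.pl";   # runs the initialization code at this point

print "$colors{error}\n";
```

Since `require` caches by file name, the initialization runs only once even if the line appears in several modules.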
Aater Suleman
  • 2,286
  • 18
  • 11
  • -1 for the unproved claim and falling prey to the premature optimisation trap – daxim May 10 '11 at 20:03
  • @daxim, maybe you are right. Under what circumstances will a db or file I/O be faster than interpreting Perl code? I did an experiment right now with a 10000 entry hash; my approach is about 11x faster than using a text file. – Aater Suleman May 10 '11 at 20:06
  • 2
    Aater, you seem to misunderstand what the principle is. The point is that you are *assuming* that Cooper a) has enough hash entries that this will be a problem, and b) is reading this data so frequently that the speed of this operation is a major factor. From reading the OP, it seems likely that neither is the case. All that said, your solution may be the right one, but for a different reason: it's probably too little data for a separate database to be worth the trouble. – Dan May 10 '11 at 20:11
  • @Dan. Awesome reply. Thanks for clarifying. Very valid argument and I stand corrected. – Aater Suleman May 10 '11 at 20:12
  • Fine, if you want to argue *that* point, I concede it to you for lack of profiling data that shows the real bottlenecks. Remains the other one. A good programmer knows that maintainability and separation of concerns beats perceived speed gains every day. – daxim May 10 '11 at 20:12
  • @daxim, I didn't mean to argue. I understand the badness of what I suggested but I often like to solve that differently than to give up speed for it. I am a hardware guy so speed seems top priority. I now also understand why you called it premature. Thanks. – Aater Suleman May 10 '11 at 20:16