
I'm writing a print system that puts a simplified interface on top of CUPS. Users drop jobs into one queue, the system processes them in various ways (statistics, page quotas, etc.), and then offers the user a web interface to dispatch the job to one of multiple printers.

Since there may be several user kiosks, an admin station, etc., I need to store job metadata in something that can handle concurrent access. (Can you call data structures "re-entrant"?) A few options I can imagine are

  • a MySQL database: massively overkill, but certainly stable and supported
  • metadata files, with concurrent access handled manually: perfectly tailored to my needs, but then I have to re-implement a subset of MySQL's atomicity, and probably do it poorly
  • write into the CUPS control files, using the provided thread-safe cupsipp.h API
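For the second option, "concurrent access handled manually" would boil down to something like advisory file locking around each metadata file. A minimal sketch, assuming one JSON file per job (the path and field names here are hypothetical, and `fcntl` locking is Unix-only):

```python
import fcntl
import json

def update_job_metadata(path, **fields):
    """Merge new fields into a per-job JSON metadata file under an
    exclusive advisory lock, so concurrent kiosks don't clobber each other."""
    with open(path, "a+") as f:
        fcntl.flock(f, fcntl.LOCK_EX)   # block until we hold the lock
        try:
            f.seek(0)
            raw = f.read()
            meta = json.loads(raw) if raw else {}
            meta.update(fields)
            f.seek(0)
            f.truncate()
            json.dump(meta, f)
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)
```

Even this small sketch shows the problem you anticipated: it gives you mutual exclusion, but not crash safety or multi-file atomicity, which is exactly the subset of database behaviour you'd end up re-implementing.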

The last option sounds most attractive, but there's a catch: I'm writing this in Python, and neither pycups nor pkipplib seems to have any way to modify a control file.

Edit: I should clarify that pkipplib can generate new IPP requests, but doesn't give any way to modify the existing control file. That is, I would have to make my updates by submitting them as new jobs.

Anyone have a better idea? Advice would be much appreciated.

Luc Touraille
Wang

1 Answer


Have you considered SQLite or Redis? Both are low-overhead and easy to spin up, especially when you're not dealing with complex datasets.
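SQLite in particular sits nicely between "metadata files" and "a full MySQL server": it lives in one file, ships with Python's standard library, and serializes concurrent writers for you. A minimal sketch of what job metadata might look like (the schema and file name are illustrative, not anything CUPS-specific):

```python
import sqlite3

# Hypothetical schema for job metadata; table and column names are illustrative.
conn = sqlite3.connect("jobs.db", timeout=10)  # timeout: wait on locks held by other processes
conn.execute("""
    CREATE TABLE IF NOT EXISTS jobs (
        job_id  INTEGER PRIMARY KEY,
        owner   TEXT NOT NULL,
        pages   INTEGER,
        status  TEXT DEFAULT 'pending'
    )
""")

# Each `with conn:` block is one atomic transaction.
with conn:
    conn.execute("INSERT OR REPLACE INTO jobs (job_id, owner, pages) VALUES (?, ?, ?)",
                 (42, "alice", 7))

# Later, when the web interface dispatches the job to a printer:
with conn:
    conn.execute("UPDATE jobs SET status = 'dispatched' WHERE job_id = ?", (42,))

row = conn.execute("SELECT owner, pages, status FROM jobs WHERE job_id = ?",
                   (42,)).fetchone()
print(row)  # ('alice', 7, 'dispatched')
```

Multiple kiosks and the admin station can each open the same database file; SQLite's file locking handles the contention, so you get the atomicity you'd otherwise have to hand-roll over plain metadata files.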

jathanism