
I have a Raspberry Pi with an MCC 118 DAQ HAT. The goal is threefold:

  1. Acquire and record data to a database
  2. Stream data over a private LAN to a browser as close to real time as possible (limited by network GET/POST speed)
  3. Analyze this data and actuate devices (RPi.GPIO) based on the analysis

My program is both CPU- and I/O-bound: it must wait on data input from the DAQ HAT (I/O-bound), and, once the data is read, it must be recorded and analyzed (CPU-bound). For the CPU-bound portion, I am using multiprocessing to build a database; the browser display is then updated (via Flask) by querying that database. In an effort to do this efficiently, I developed the following example code as a foundation for the recording and displaying aspects of my project:

#!/usr/bin/env python
#  -*- coding: utf-8 -*-

import sys
import random
import psycopg2 as sql
from multiprocessing import Process

def fill_table():
    # Writer process: stands in for the DAQ acquisition loop.
    conn_fill = sql.connect('dbname=mp_example user=pi')
    cf = conn_fill.cursor()
    while True:
        cf.execute('INSERT INTO ran_num (ran) VALUES (%s);', (random.random(),))
        conn_fill.commit()

def read_table():
    # Reader process: polls for the newest row and prints it in place.
    conn_read = sql.connect('dbname=mp_example user=pi')
    cr = conn_read.cursor()
    while True:
        cr.execute('SELECT * FROM ran_num WHERE id=(SELECT MAX(id) FROM ran_num);')
        val = cr.fetchone()
        if val:
            print('\r' + str(val[0]) + ' - ' + str(round(float(val[1]), 3)), end='')
            sys.stdout.flush()

def main():
    # (Re)create the table shared by the two worker processes.
    conn_main = sql.connect('dbname=mp_example user=pi')
    cm = conn_main.cursor()
    cm.execute('DROP TABLE IF EXISTS ran_num;')
    cm.execute('CREATE TABLE ran_num (id SERIAL PRIMARY KEY, ran VARCHAR);')
    conn_main.commit()
    cm.close()
    conn_main.close()
    print('Starting program, press CTRL + C to quit... \n')

if __name__ == '__main__':
    main()
    try:
        p1 = Process(target=fill_table)
        p1.start()
        p2 = Process(target=read_table)
        p2.start()
    except KeyboardInterrupt:
        p1.join()
        p2.join()
        print('Processes have closed without zombies...\n')
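
For completeness, the Flask side mentioned above would look roughly like this (a minimal sketch only: the /latest route name and the JSON payload shape are placeholders I've chosen for illustration, not code I have running yet):

from flask import Flask, jsonify
import psycopg2 as sql

app = Flask(__name__)

@app.route('/latest')
def latest():
    # Each GET returns the newest row, mirroring what read_table() prints.
    # A connection per request keeps the sketch simple, not efficient.
    conn = sql.connect('dbname=mp_example user=pi')
    cur = conn.cursor()
    cur.execute('SELECT id, ran FROM ran_num ORDER BY id DESC LIMIT 1;')
    row = cur.fetchone()
    conn.close()
    return jsonify(id=row[0], ran=float(row[1])) if row else ('', 204)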

I have only just introduced myself to Postgres (I was previously using sqlite3) in an effort to get concurrent writing and querying of the data in what I think is a "data-safe" way. My questions are as follows:

  1. Is there a more memory-efficient and/or data-safe way to do this (e.g. pipes; see the sketch after this list)?
  2. Is there a better way to exit when the KeyboardInterrupt exception is thrown? (My exception's print statement never executes, which suggests the processes are being left as zombies.)
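
For reference, the pipe-based alternative I have in mind would look something like this (only a sketch: multiprocessing.Pipe replaces the database as the transport, and the acquire/display names are placeholders of mine):

from multiprocessing import Pipe, Process
import random

def acquire(conn):
    # Producer: push samples into the pipe instead of INSERTing rows.
    while True:
        conn.send(random.random())

def display(conn):
    # Consumer: pull the next sample from the pipe instead of SELECTing it.
    while True:
        print('\r' + str(round(conn.recv(), 3)), end='', flush=True)

if __name__ == '__main__':
    parent_end, child_end = Pipe()
    Process(target=acquire, args=(child_end,), daemon=True).start()
    display(parent_end)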

NOTE: I was initially reading a single data value from the DAQ HAT every time a GET request came in over the LAN; however, this meant the sample rate was dependent on network speed. For data integrity and consistency, I want to use multiprocessing so the data acquisition runs at a constant sample rate, with data being read from the database in response to network requests.
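
Concretely, the fixed-rate acquisition loop I am aiming for would pace itself something like this (a sketch only: read_sample() and store() are hypothetical stand-ins for the actual MCC 118 read call and the INSERT in fill_table above):

import time

SAMPLE_RATE = 350            # Hz; placeholder target rate
PERIOD = 1.0 / SAMPLE_RATE

def acquire_at_fixed_rate():
    # Pace the loop off a monotonic clock so the sample rate does not
    # drift with how long each read/INSERT takes.
    next_tick = time.monotonic()
    while True:
        store(read_sample())   # both names are hypothetical stand-ins
        next_tick += PERIOD
        time.sleep(max(0.0, next_tick - time.monotonic()))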

  • Can you give some indications of data acquisition rates and volumes? Some idea of how much data you need in memory at once? What volume/rates of data does the browser need? – Mark Setchell Feb 23 '19 at 00:45
  • For the fastest signal I am interested in, the sample rate would be around 350 Hz (with ~10 samples per period). However, the MCC118 is capable of a max sample rate of 1 MHz, and I'd definitely be interested in eventually pushing some limits (if I'm not already) for other projects. I'm not sure whether updating the browser in blocks of data would be more efficient, though on the design side I do like the look of smooth streaming rather than block updates (i.e. a browser-based oscilloscope of sorts). – jacob Feb 24 '19 at 01:10

0 Answers