Rekor maintains a local Beanstalkd queue where all JSON results are placed. Your application can grab and process the latest plate results from this queue.
In addition, you must add a valid on-premise license to /etc/openalpr/license.conf.
Once updated, restart the Scout Agent service to allow the settings to take effect.
Below is a sample Python script that pulls results from the local Beanstalkd queue:
#!/usr/bin/python

import beanstalkc
import json
from pprint import pprint

beanstalk = beanstalkc.Connection(host='localhost', port=11300)

TUBE_NAME = 'alprd'

# For diagnostics, print out a list of all the tubes available in Beanstalk.
print beanstalk.tubes()

# For diagnostics, print the number of items on the current alprd queue.
try:
    pprint(beanstalk.stats_tube(TUBE_NAME))
except beanstalkc.CommandFailed:
    print "Tube doesn't exist"

# Watch the "alprd" tube; this is where the plate data is.
beanstalk.watch(TUBE_NAME)

# Loop forever
while True:

    # Wait for a second to get a job.  If there is a job, process it and
    # delete it from the queue.  If not, return to sleep.
    job = beanstalk.reserve(timeout=1.0)

    if job is None:
        print "No plates available right now, waiting..."
    else:
        print "Found a plate!"
        plates_info = json.loads(job.body)

        # Print all the info about this plate to the console.
        pprint(plates_info)

        # Do something with this data (e.g., match a list, open a gate, etc.).
        if 'data_type' not in plates_info:
            print "This shouldn't be here... all OpenALPR data should have a data_type"
        elif plates_info['data_type'] == 'alpr_results':
            print "This is a plate result"
        elif plates_info['data_type'] == 'alpr_group':
            print "This is a group result"
        elif plates_info['data_type'] == 'heartbeat':
            print "This is a heartbeat"

        # Delete the job from the queue when it is processed.
        job.delete()
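For the "do something with this data" step, the sketch below shows one way a watchlist check might look. It assumes the standard OpenALPR JSON layout, where an alpr_results payload carries a results list whose entries include a plate string and an alpr_group payload includes a best_plate_number; the WATCHLIST set and open_gate() helper are hypothetical placeholders for your own logic.

# A minimal sketch of a watchlist check, assuming the standard OpenALPR JSON
# fields described above.  WATCHLIST and open_gate() are hypothetical.
WATCHLIST = set(['ABC1234', 'XYZ9876'])

def open_gate():
    # Placeholder for your own integration (relay, API call, etc.).
    print "Opening gate..."

def check_watchlist(plates_info):
    plate_numbers = []

    if plates_info.get('data_type') == 'alpr_results':
        # Per-frame results: one entry per candidate plate.
        plate_numbers = [r['plate'] for r in plates_info.get('results', [])]
    elif plates_info.get('data_type') == 'alpr_group':
        # Group results: the agent's best guess for a single vehicle pass.
        plate_numbers = [plates_info.get('best_plate_number', '')]

    for plate in plate_numbers:
        if plate in WATCHLIST:
            print "Watchlist match: %s" % plate
            open_gate()

You could call check_watchlist(plates_info) in place of (or alongside) the pprint() call inside the loop above.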