Version 6.1.19 onwards¶
`max_detection_size` (and associated label prefixes) now needs to be in `ml_sequence`. In the previous release these were stuffed inside each model sequence, which led to problems. Imagine an event where snapshot and alarm had different objects and you were checking both: in pass 1, snapshot would match but alarm would not, so you'd see objects in alarm. In pass 2, alarm would match but snapshot would not, and so on, effectively making `match_past_detections` useless. To avoid this, I now check `match_past_detections` after all matching is done.
- You can now choose to ignore certain labels when you match past detections, using `ignore_past_detection_labels`
- `stream_sequence` now has a few new fields:
  - `delay_between_frames`: if specified, waits that many seconds before processing each frame.
  - `delay_between_snapshots`: if specified, waits that many seconds when processing snapshot frames. This allows you to do something like `frame_set: snapshot,snapshot,snapshot,alarm` with `delay_between_snapshots: 2`, which means it will analyze the snapshot 3 times with 2 seconds in between, letting you grab multiple snapshot frames as the snapshot changes during an event. This is really only useful for this specific case.
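As an illustrative sketch (the attribute names come from the text above; the surrounding structure and values are assumptions — check your own `objectconfig.ini` for the exact syntax):

```ini
stream_sequence = {
    # grab the snapshot three times, then the alarm frame
    'frame_set': 'snapshot,snapshot,snapshot,alarm',
    # wait 2 seconds between successive snapshot grabs
    'delay_between_snapshots': 2,
}
```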
Version 6.1.18 onwards¶
- I now support face detection using TPU (NOT recognition). See `objectconfig.ini` for an example
- You can now add descriptive names for each model sequence to better differentiate in logs
- Each model sequence now has an `enabled` flag (setting it to `no` means the model won't be loaded). This is a good way to temporarily remove models while keeping config files intact
- We now also have a `union` option - when set to `union`, it will combine detections from all models for that type
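For example, a model sequence entry might look like this (a sketch only — the `name` and `enabled` keys come from the text above; everything else is illustrative):

```ini
ml_sequence = {
    'object': {
        'sequence': [
            {
                'name': 'coral object detection',  # shows up in logs
                'enabled': 'no',  # this model won't be loaded
                # ... your usual model attributes go here ...
            }
        ]
    }
}
```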
Version 6.1.17 onwards¶
- You can now localize `max_detection_size` to specific objects by prefixing the object name. For example, `car_max_detection_size`, if present, will override the `max_detection_size` value for objects that are cars. The same holds true for `mlapiconfig.ini`, which also supports the above values - you no longer have to keep putting these to
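A sketch of the label-prefixed form (values illustrative):

```ini
# applies to all detected objects
max_detection_size=90%
# overrides the above, but only for cars
car_max_detection_size=50%
```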
Version 6.1.12 onwards¶
- A lot of config changes if you are using mlapi. Basically, I'm no longer fully supporting settings in `objectconfig.ini` transferring to mlapi. See Exceptions when using mlapi
Version 6.1.0 onwards¶
- You can now string together multiple models in arbitrary fashion to suit your needs. There is a new `ml_sequence` entry that you can use to create your own sequence. Note that if `ml_sequence` is present, it will override any/all parameters in the `[alpr]` sections. Please read this section to understand how this works.
- The `hog` model has been removed. Note this refers to the `hog` person detection model, not the HOG detection of a face - that still exists. With Yolo, TinyYolo and Coral, there was no need to support this very low performance model anymore.
- You can now also specify arbitrary frames for analysis. See here for details (look at the `options` attribute).
- To enable the new `*_sequence` attributes mentioned above, make
- A new attribute, `disable_locks`. When set to `yes`, locks will not be grabbed before inferencing. The entire idea of grabbing locks is to give you control over how many simultaneous processes use your CPU/GPU/TPU resources, so I'd recommend you don't enable it. Set it to `yes` only if you are facing lock issues (such as timeouts)
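As a rough sketch of the `ml_sequence` idea (key names here are assumptions — see the section referenced above for the real structure):

```ini
ml_sequence = {
    'general': {
        # run object detection first, then face detection
        'model_sequence': 'object,face',
    },
    # per-model settings for 'object' and 'face' go here
}
```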
Version 6.0.5 onwards¶
- You can now specify object detection patterns on a per polygon/zone basis. The format is `<polygonname>_zone_detection_pattern`. This works for imported ZM zones too. Please read the comments in `objectconfig.ini`. Note that this attribute will not be automatically added to a migrated `objectconfig.ini` file, as the actual attribute name changes depending on your zone name
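For example, if you have a zone named `driveway` (zone name and pattern illustrative):

```ini
# only report people and cars detected inside the "driveway" zone
driveway_zone_detection_pattern=(person|car)
```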
- `zmeventnotification.ini` has a new attribute in the `[fcm]` section. When enabled (default is `no`), push messages will replace each other in the notification bar. This was the old Android behaviour prior to FCMv1. You can go back to this mode of operation for both iOS and Android if you enable it.
Version 6.0.1 onwards¶
- `zmeventnotification.ini` has a new attribute, `use_fcmv1`, with a default of `yes`. It is recommended you keep this on, as it switches from the legacy FCM protocol to FCM v1, which allows for better features (which I will add over time).
- `objectconfig.ini` has a new attribute in the `[animation]` section. If you are creating animations for push and generating GIFs, this creates a 2x-speed GIF.
Version 6.0.0 onwards¶
The ES has a new attribute, `es_rules`. A sample file gets automatically installed when you run the install script. Its use is optional. It is a JSON file with various rules for the ES that are not configuration related. Over the next few releases, this file will replace the cryptic contents of `tokens.txt`. As of now, it can be used to specify custom times for notification. This list will grow over time.
A new optional Perl dependency has been added to the Installation of the Event Server (ES), needed if you want flexible datetime parsing for ES rules.
On the Object Detection part:
This is going to be a big bad breaking change release, but it continues the path to unification between the various components I've developed.
To help with this 'big bad breaking change', I've provided an upgrade script. When you run `./install.sh`, it will automatically run at the end and put a `migrated-objectconfig.ini` in your current directory (from where you ran `./install.sh`). You can also run it manually by invoking `tools/config_upgrade.py -c /etc/zm/objectconfig.ini`.
All the ML code has now moved to pyzm, and both the local hooks and mlapi use pyzm. This means when I update the ML code, both systems always get it.
This version also supports Google Coral Edge TPU
Some `objectconfig.ini` attributes have been replaced and some removed towards this unification goal:
- `yolo` is no longer used. Instead, `object` could be multiple object detection techniques, yolo or otherwise.
- `[object]` is a new section, which contains two new attributes:
  - `object_framework`, which can be
  - `object_processor`, which can be
- None of the `tiny_` attributes exist anymore. Simply switch the weights, labels and config files to switch between full and tiny
- `yolo_type` doesn't exist anymore (as the `tiny_` attributes are removed, it doesn't make sense)
- `detect_pattern` no longer exists. You now have a per detection type pattern, which allows you to specify patterns based on the detection type:
  - `object_detection_pattern` - for all objects
  - `alpr_detection_pattern` - for license plates
  - `face_detection_pattern` - for all faces detected
- `[general]` has various new attributes that allow you to limit concurrent processing:
  - `cpu_max_processes` specifies how many simultaneous instances of model execution are allowed at one time. Once this number is reached, further processes wait until in-flight processes complete.
  - `cpu_max_lock_wait` specifies how long each process will wait (default 2 mins) before throwing an error.
  - `tpu_max_lock_wait` - same as above, but for the TPU
  - `gpu_max_lock_wait` - same as above, but for the GPU
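Putting the new pattern and concurrency attributes together, a sketch might look like this (values illustrative):

```ini
[general]
# at most 2 simultaneous model runs on the CPU
cpu_max_processes=2
# each waiting process gives up after 120 seconds
cpu_max_lock_wait=120

[object]
object_detection_pattern=(person|car|truck)
```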
Version 5.15.7 onwards¶
- The `<>/models/tinyyolo` directory has moved. `install.sh` will automatically move it, but remember to change your `objectconfig.ini` path if you are using tiny yolo.
- You now have an option to use the new Tiny Yolo V4 models, which will be automatically downloaded unless you disable it (you'll need OpenCV master as of Jul 11, 2020, as support for it was only merged 6 days ago)
- A new attribute, `ject_area`, has been introduced in `objectconfig.ini`. This specifies the largest area a detected object may occupy in the image. You can give it as a % or px value. Remember the image is resized to 416x416, so it's better to use %.
Version 5.15.6 onwards¶
- I got lazy with 5.15.5. There were some errors I fixed post-5.15 which I 'post-pushed' into 5.15.5. It is possible you installed 5.15.5 and don't have these fixes. In other words, if your 5.15.5 is broken, please upgrade.
- In this release, I've also taken a necessary step towards model naming: `Yolo` models are now `Yolov4`, because this is the terminology Alexey has started using in his repo. This means you will have to change your `objectconfig.ini` and align it with the sample `objectconfig.ini` provided in this repo. I've also normalized the names of the config, weights and name files for each model. The short of all of this is: look under the `[yolo]` section of the sample config and replace your current yolo paths. Note that I assume you use `install.sh` to install; if not, you'll have to manually rename the old model names to the new ones. (Note that YoloV4 requires OpenCV 4.4 or above.)
Version 5.15.5 onwards¶
- `zmeventnotification.ini` has a new attribute in `[mqtt]`, which lets you set the topic name for the messages
- `objectconfig.ini` has a new attribute, `only_triggered_zm_zones`. When set to `yes`, this will remove objects that don't fall into zones that ZM detects motion in. Make sure you read the comments above the attribute in `objectconfig.ini` to understand its limitations
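In other words (sketch):

```ini
# drop detected objects that fall outside the zones
# ZM reported motion in for this event
only_triggered_zm_zones=yes
```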
Version 5.14.4 onwards¶
- Added the ability for users to PR contrib modules. See Guidelines for contrib
- `zmeventnotification.ini` adds two new attributes that make it simpler for users to keep object detection plugin hooks intact and also trigger their own scripts for housekeeping. See the ini script for documentation on these attributes
Version 5.13.3 onwards¶
- A new attribute in `zmeventnotification.ini` that controls debug level verbosity. Default is
- New CSPNet support with ResNeXt (requires OpenCV 4.3 or above) - Note that this requires a manual model download as the model is in a google drive link and all automated download scripts are hacks that stop working after a while.
- You can now choose which models to download as part of `./install.sh`. See install-specific-models
Version 5.11 onwards¶
- If you are using the platerecognizer.com local SDK for ALPR: their SDK and cloud versions have slightly different API formats. There is a new attribute in `objectconfig.ini` that should be set to `local` to handle this.
- An attribute in `zmeventnotification.ini` is now called `hook_skip_monitors`, to correctly reflect that it only means hooks will be skipped for these monitors. A new attribute `skip_monitors` has been added that controls which monitors the ES will skip completely (that is, no analysis/notifications at all for these monitors)
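A quick sketch of the difference (monitor IDs illustrative):

```ini
# hooks are skipped for monitors 4 and 5, but the ES still
# processes and notifies for their events
hook_skip_monitors=4,5
# monitor 6 is ignored entirely: no analysis, no notifications
skip_monitors=6
```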
- Added support for live animations as part of push messages. This requires an upgraded zmNinja app (`220.127.116.11` or above) as well as ZoneMinder master (1.35) as of Mar 17, 2020. Without these two updates, live notifications will not work. Specifically:
  - This introduces a new `[animation]` section. Please read the config for more details.
  - You are also going to have to re-run `install.sh` to install new dependencies
Version 5.9.9 onwards¶
- You can now hyper charge your push notifications, including getting desktop notifications. See below
- I now support 3rd party push notification systems. A popular one is Pushover, which a lot of people seem to use for customizing the quality of push notifications, including critical notifications, quiet time, etc. This adds the following parameters:
  - A new section in `zmeventnotification.ini` that adds two new attributes:
    - `api_push_script` - I've provided a sample push script that supports Pushover. It gets automatically installed as `/var/lib/zmeventnotification/bin/pushapi_pushover.py`.
    - This also adds a new channel type called `api` to the pre-existing `fcm,web,mqtt` set.
  - You are, of course, encouraged to write your own 3rd party plugins for push and PR them back to the project.
  - Read more in this article
Version 5.7.7 onwards¶
- For those who are happy to use the legacy, self-compiled openALPR version for license plate detection that does not use DNNs: I support that. This adds new parameters to `objectconfig.ini`. See `objectconfig.ini` for the new parameters under the "If you are using OpenALPR command line" section.
Version 5.7.4 onwards¶
- I now support the new OpenCV 4.1.2 GPU backend for CUDA. This will only work if you are on OpenCV 4.1.2, have compiled it correctly to use CUDA, and are using the right architecture.
- This adds a new attribute to `objectconfig.ini`, which by default is `no`. Please read the comments in `objectconfig.ini` about how to use it.
- The ES supports a control channel through which you can control its behavior remotely
- This adds new attributes to `zmeventnotification.ini`. Read more about it in Category: escontrol messages.
- If you are using face recognition, you now have the option of automatically saving unknown faces to a specific folder. That way it's easy for you to review them later and retrain your known faces.
- This introduces new attributes, including `unknown_images_path`. Their documentation is part of
- The detection script(s) now attach a JSON payload of the detected objects along with the text, separated by `--SPLIT--`. If you are hacking your own scripts, you need to handle this. The ES automatically handles it when sending notifications.
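If you are consuming this output yourself, here is a minimal sketch of handling the split (the sample detection text is made up for illustration):

```python
import json

def split_detection(raw: str):
    """Split hook output of the form '<text>--SPLIT--<json payload>'."""
    text, sep, payload = raw.partition("--SPLIT--")
    # older output may have no payload at all; return None in that case
    objects = json.loads(payload) if sep else None
    return text.strip(), objects

# example: human-readable text plus its JSON payload
text, objects = split_detection('detected: person --SPLIT--{"labels": ["person"]}')
```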
Version 5.2 onwards¶
- `use_hooks` is a new attribute that controls whether hooks will be used or not
- `send_event_end_notification` is a new attribute that controls whether end notifications are sent
Version 5.0 onwards¶
- `install.sh` no longer tries to install OpenCV on its own. You will have to install `opencv-contrib` on your own. See the install instructions in Machine Learning Hooks.
- The `hook_script` attribute is deprecated. You now have `hook_on_event_end`, which lets you invoke different scripts when an event starts or ends. You also have the concept of channels, which allows you to decide whether to send a notification even if hooks don't return anything. Read up about
- Now that we support pre/post event hooks, the script names have changed too: `zm_event_start.sh`, and we have a new script called `zm_event_end.sh` that is really just a dummy script. Change it to do what you need at the end of an event, if you enable event end notifications.
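A sketch of the new attributes (the start-hook attribute name is an assumption, mirroring the end-hook name above; paths illustrative):

```ini
# invoked when an event starts (attribute name assumed)
hook_on_event_start=/var/lib/zmeventnotification/bin/zm_event_start.sh
# invoked when an event ends
hook_on_event_end=/var/lib/zmeventnotification/bin/zm_event_end.sh
```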
- You can now offload the entire machine learning process to a remote server. All you need to do is use `ml_gateway` and related options in `objectconfig.ini`. The "ML gateway" is my mlapi project
- The ES now supports a `restart_interval` config item in `zmeventnotification.ini`. If not 0, this will restart the ES after that many seconds (for example, `7200` is 2 hours). This may be needed if you find the ES locking up after a few hours. I think 5.0 resolves this locking issue (see this issue), but if it doesn't, use this, umm, hack for now.
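For example, to restart the ES every 2 hours:

```ini
# 7200 seconds = 2 hours; 0 disables periodic restarts
restart_interval=7200
```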
Version 4.6 onwards¶
- If you are using hooks, make sure you run `sudo ./install.sh` again - it will create additional files
- The hook file `detect_wrapper.sh` is now called `zm_detect_wrapper.sh`. Furthermore, these scripts no longer reside in `/usr/bin`. They now reside in `/var/lib/zmeventnotification/bin`. I suppose I did not need to namespace and move, but I thought of the latter after I did the namespace change.
- If you are using face recognition, 4.6.1 and above now allow multiple faces per person. Note that it is recommended you train them before you run detection. See the documentation for it in Machine Learning Hooks.
Version 4.4 onwards¶
- If you are using picture messaging, the URL format has changed. Please REMOVE `&username=<user>&password=<passwd>` from the URL and put them into the
Version 4.1 onwards¶
- Hook versions will now always be `<ES version>.x`, so in this case
- Hooks have now migrated to using a proper python ZM logger module so it better integrates with ZM logging
- To view detection logs, you now need to follow the standard ZM logging process. See the Logging documentation for more details.
- You no longer have to manually install Python requirements; the setup process should automatically install them
- If you are using MQTT and your `MQTT::Simple` library was installed a while ago, you may need to update it. A new `login` method was added to that library in Dec 2018, which is required (ref)
Version 3.9 onwards¶
- Hooks now add ALPR, so you need to run `sudo -H pip install -r requirements.txt` again
- See modified objectconfig.ini if you want to add ALPR. Currently works with platerecognizer.com, so you will need an API key. See hooks docs for more info
Version 3.7 onwards¶
- There were some significant changes to ZM (will be part of 1.34), which includes migration to Bcrypt for passwords. Changes were made to support Bcrypt, which means you will have to add additional libraries. See the installation guide.
Version 3.3 onwards¶
- Please use `zmeventnotification.ini` to maintain consistency with
Version 3.2 onwards¶
- Changes in paths for everything:
  - The event server config file now defaults to
  - The hook config now defaults to
  - The push token file now defaults to
  - All object detection data files default to
- If you are migrating from a previous version:
  - Make a copy of your `/var/detect/objectconfig.ini` (if you are using hooks)
  - Run `sudo -H ./install.sh` again inside the repo and let it set up all the files
  - Compare your old config files to the new ones at `/etc/zm` and make necessary changes
  - Make sure everything works well
  - You can now delete the old `/var/detect` folder as well as
  - Run zmNinja again to make sure its token is registered in the new tokens file