|
Cool project. Thank you.
I've deployed it in an LXC under Proxmox, and another LXC with the full development project. FWIW I also have it running on an RPI 5 with NVME drive and a Coral TPU deployed to docker with the dockge interface.
I've successfully run the "Optical Character Recognition" against images of road signs and of a technical manual in .pdf form. Works very well.
For grins I ran the OCR against an image of my terrible cursive handwriting. Of three paragraphs, it got two thirds of the date correct (I had written it as "month d, yyyy"). The remainder of the sample was gibberish.
In my investigations I've come across, among others:
* Transkribus[^]
* Pen2text.com
Pen2text blew me away with how easy it was to run a simple test on that same sample of my horribly illegible handwriting. It missed two words that, frankly, looked like the work of a leaky pen.
At any rate, for either of these projects I feel I'd have to hire legal representation to understand the ownership and permitted use of the OCR'ed sources and results.
I'd sure appreciate any tips on open source, self-hosted, trainable OCR software suitable for a collection of perhaps fifty multi-page cursive letters, all written in the same hand six or more decades ago. Once processed, the text would be fed to a model to allow chatting about that subject matter.
Bonus points for pointers to open source archival platforms for organizing the letters, with an API so that I could correlate the OCR'ed text to the collection of images. Why reinvent the archiving wheel, so to speak?
Thanks for listening by way of your reading.
Jeff
KF7CRU @jhalbrecht
|
|
|
|
|
Since upgrading to 2.6.5 I get "AI not responding" and no detections. I have to enable the service to start with Blue Iris after every reboot. Also, switching between enabling and disabling the GPU doesn't seem to change anything. I am using an Intel CPU with an integrated GPU and used to be able to select "enable GPU". The CodeProject.AI status also does not indicate DirectML, even after several detections. I am using YOLO.NET. Please advise. Everything I mentioned seemed to work fine with 2.6.2.
|
|
|
|
|
Thanks very much for your report. Could you please share your System Info tab from your CodeProject.AI Server dashboard?
One thing I recommend: go to the Blue Iris main AI settings and uncheck auto stop/start.
Thanks,
Sean Ewington
CodeProject
|
|
|
|
|
Here it is
Server version: 2.6.5
System: Windows
Operating System: Windows (Microsoft Windows 10.0.19045)
CPUs: Intel(R) Core(TM) i7-6700T CPU @ 2.80GHz (Intel)
1 CPU x 4 cores. 8 logical processors (x64)
GPU (Primary): Intel(R) HD Graphics 530 (1,024 MiB) (Intel Corporation)
Driver: 31.0.101.2111
System RAM: 16 GiB
Platform: Windows
BuildConfig: Release
Execution Env: Native
Runtime Env: Production
Runtimes installed:
.NET runtime: 7.0.20
.NET SDK: Not found
Default Python: Not found
Go: Not found
NodeJS: Not found
Rust: Not found
Video adapter info:
Intel(R) HD Graphics 530:
Driver Version 31.0.101.2111
Video Processor Intel(R) HD Graphics Family
System GPU info:
GPU 3D Usage 0%
GPU RAM Usage 0
Global Environment variables:
CPAI_APPROOTPATH = <root>
CPAI_PORT = 32168
|
|
|
|
|
I unchecked Start/Stop in Blue Iris and restarted the Blue Iris PC. I have to manually start CodeProject.AI from its dashboard, but at least I now see DirectML after I do this. The only problem is that custom models are not displaying in the main AI settings. If I hit the three dots, I get "Refresh AI to display models." It looks like the custom models are not loading for some reason. How would I refresh AI? Please advise.
|
|
|
|
|
Restart the Blue Iris service and the custom model list should update.
|
|
|
|
|
Looks like all these steps make things work normally again. I'm assuming this is a temporary measure until further bugs are ironed out?
|
|
|
|
|
Hi,
Since the install of the latest version of CP on my Blue Iris server, I get the message "Alert Cancelled AI not responding".
Consequently I do not get any notification on my phone when someone triggers a camera because there is no analysis done.
I reinstalled CP, no success.
Any suggestion on what to do next?
I am on Windows 10
Thanks,
Michel.
|
|
|
|
|
It works now.
I uninstalled CP, then used the software Everything from Voidtools to find every single leftover file with CodeProject in its name and deleted them all, then reinstalled CP and restarted the server. Now it works as it should.
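For anyone wanting to do the same sweep on a Linux host (the poster used Everything on Windows), a rough equivalent is sketched below. The path handling here is an illustration, not a CodeProject-provided tool; review the list before deleting anything.

```shell
#!/bin/sh
# List leftover files and directories whose names contain
# "CodeProject" under a chosen root (defaults to the current dir).
# Review the output carefully before removing anything.
ROOT="${1:-.}"
find "$ROOT" -iname '*codeproject*' 2>/dev/null
```

Only once you've confirmed every hit is genuinely leftover should you pipe the same `find` into a delete.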
Thanks
|
|
|
|
|
The issue exists only with newer CPAI builds and occurs several times a day on different hardware (Intel with a Tesla P4 vs. Ryzen with an RTX 3090):
19:00:49:Object Detection (YOLOv5 6.2): [RuntimeError] : Traceback (most recent call last):
File "C:\Program Files\CodeProject\AI\modules\ObjectDetectionYOLOv5-6.2\detect.py", line 141, in do_detection
det = detector(img, size=640)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\yolov5\models\common.py", line 705, in forward
y = self.model(x, augment=augment) # forward
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\yolov5\models\common.py", line 515, in forward
y = self.model(im, augment=augment, visualize=visualize) if augment or visualize else self.model(im)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\Lib\site-packages\yolov5\models\yolo.py", line 209, in forward
return self._forward_once(x, profile, visualize) # single-scale inference, train
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\Lib\site-packages\yolov5\models\yolo.py", line 121, in _forward_once
x = m(x) # run
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\Lib\site-packages\yolov5\models\yolo.py", line 74, in forward
xy = (xy * 2 + self.grid[i]) * self.stride[i] # xy
RuntimeError: The size of tensor a (48) must match the size of tensor b (60) at non-singleton dimension 2
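A possible clue in those numbers (my reading, not a confirmed diagnosis): YOLOv5 caches a detection grid sized to the letterboxed input, and 48 and 60 are exactly the stride-8 cell counts for 384- and 480-pixel edges. If two concurrent requests with differently letterboxed images hit one shared model instance, a grid cached for one size could collide with activations from the other:

```python
# Feature-map cells along one edge = input pixels / layer stride.
def grid_cells(pixels: int, stride: int) -> int:
    return pixels // stride

# Standard 640x640 input gives 80/40/20 cells at strides 8/16/32.
print(grid_cells(640, 8), grid_cells(640, 16), grid_cells(640, 32))

# The sizes in the error, 48 and 60, are the stride-8 grids for
# 384- and 480-pixel edges -- consistent with two differently
# letterboxed images colliding in one model.
print(grid_cells(384, 8), grid_cells(480, 8))
```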
|
|
|
|
|
Thanks very much for your report. Could you please share your System Info tab from your CodeProject.AI Server dashboard?
Thanks,
Sean Ewington
CodeProject
|
|
|
|
|
Server version: 2.6.5
System: Windows
Operating System: Windows (Microsoft Windows 10.0.17763)
CPUs: Intel(R) Xeon(R) CPU E5-2699A v4 @ 2.40GHz (Intel)
1 CPU x 11 cores. 22 logical processors (x64)
GPU (Primary): Tesla P4 (8 GiB) (NVIDIA)
Driver: 538.67, CUDA: 12.2.140 (up to: 12.2), Compute: 6.1, cuDNN: 8.5
System RAM: 64 GiB
Platform: Windows
BuildConfig: Release
Execution Env: Native
Runtime Env: Production
Runtimes installed:
.NET runtime: 7.0.10
.NET SDK: Not found
Default Python: Not found
Go: Not found
NodeJS: Not found
Rust: Not found
Video adapter info:
Microsoft Hyper-V Video:
Driver Version 10.0.17763.2145
Video Processor
NVIDIA Tesla P4:
Driver Version 31.0.15.3867
Video Processor Tesla P4
System GPU info:
GPU 3D Usage 8%
GPU RAM Usage 6.4 GiB
Global Environment variables:
CPAI_APPROOTPATH = <root>
CPAI_PORT = 32168
===================================================================
Server version: 2.6.5
System: Windows
Operating System: Windows (Microsoft Windows 10.0.19045)
CPUs: AMD Ryzen 9 7950X 16-Core Processor (AMD)
1 CPU x 16 cores. 32 logical processors (x64)
GPU (Primary): NVIDIA GeForce RTX 3090 (24 GiB) (NVIDIA)
Driver: 555.85, CUDA: 12.5.40 (up to: 12.5), Compute: 8.6, cuDNN: 8.5
System RAM: 64 GiB
Platform: Windows
BuildConfig: Release
Execution Env: Native
Runtime Env: Production
Runtimes installed:
.NET runtime: 8.0.1
.NET SDK: 8.0.101
Default Python: 3.10.6
Go: Not found
NodeJS: Not found
Rust: Not found
Video adapter info:
NVIDIA GeForce RTX 3090:
Driver Version 32.0.15.5585
Video Processor NVIDIA GeForce RTX 3090
System GPU info:
GPU 3D Usage 9%
GPU RAM Usage 2.1 GiB
Global Environment variables:
CPAI_APPROOTPATH = <root>
CPAI_PORT = 32168
|
|
|
|
|
Are you able to replicate this issue with any specific image? My guess is there's something about the image itself that's unexpected for the YOLO processor.
Another option is to switch to the .NET YOLO module or the YOLOv8 module and see if that helps.
cheers
Chris Maunder
|
|
|
|
|
Wrong number of channels in the image? (Greyscale?)
|
|
|
|
|
Not sure if this is a similar issue.
I'm running CPAI v2.6.5 in a Docker container (CPU, no CUDA) on Linux Mint 21.2. Blue Iris is running in a Windows 10 VM.
I randomly get these errors in the logs:
11:17:03:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:03:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:03:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:03:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:03:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:03:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:03:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...c92ba6) ['Found person'] took 235ms
11:17:03:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...9d98dc) ['Found person'] took 287ms
11:17:03:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...9d0db5) ['Found person'] took 315ms
11:17:03:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...f491c2) ['Found person'] took 276ms
11:17:03:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...ad6b97)
11:17:03:Object Detection (YOLOv5 6.2): [RuntimeError] : Traceback (most recent call last):
File "/app/preinstalled-modules/ObjectDetectionYOLOv5-6.2/detect.py", line 141, in do_detection
det = detector(img, size=640)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/yolov5/models/common.py", line 705, in forward
y = self.model(x, augment=augment) # forward
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/yolov5/models/common.py", line 515, in forward
y = self.model(im, augment=augment, visualize=visualize) if augment or visualize else self.model(im)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/yolov5/models/yolo.py", line 209, in forward
return self._forward_once(x, profile, visualize) # single-scale inference, train
File "/usr/local/lib/python3.8/dist-packages/yolov5/models/yolo.py", line 121, in _forward_once
x = m(x) # run
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/yolov5/models/yolo.py", line 74, in forward
xy = (xy * 2 + self.grid[i]) * self.stride[i] # xy
RuntimeError: The size of tensor a (60) must match the size of tensor b (48) at non-singleton dimension 2
11:17:03:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:03:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...3e06bb) ['Found person'] took 286ms
11:17:03:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:03:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...763521) ['Found person'] took 171ms
11:17:03:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...24956a) ['Found person'] took 152ms
11:17:04:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:04:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:04:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...e74617) ['Found person'] took 107ms
11:17:04:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...fc3cf2) ['Found person'] took 112ms
11:17:41:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:41:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:41:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:41:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:41:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...2105e0) ['Found person'] took 209ms
11:17:41:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...16dbff) ['Found person'] took 217ms
11:17:41:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:41:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:41:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...5db6db) ['Found person'] took 301ms
11:17:41:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...84eb3d) ['Found person'] took 321ms
11:17:41:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:41:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:42:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...295089) ['Found person'] took 225ms
11:17:42:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...bf8b54) ['Found person'] took 239ms
11:17:42:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:42:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:42:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:42:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:42:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...6c3d47) ['No objects found'] took 321ms
11:17:42:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...2a2db6) ['No objects found'] took 346ms
11:17:42:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...2d20c3) ['Found person'] took 278ms
11:17:42:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:42:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...c15b4f) ['No objects found'] took 327ms
11:17:42:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:42:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...916432) ['Found person'] took 331ms
11:17:42:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:42:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...6d18a9) ['No objects found'] took 295ms
11:17:42:Object Detection (YOLOv5 6.2): [RuntimeError] : Traceback (most recent call last):
File "/app/preinstalled-modules/ObjectDetectionYOLOv5-6.2/detect.py", line 141, in do_detection
det = detector(img, size=640)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/yolov5/models/common.py", line 705, in forward
y = self.model(x, augment=augment) # forward
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/yolov5/models/common.py", line 515, in forward
y = self.model(im, augment=augment, visualize=visualize) if augment or visualize else self.model(im)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/yolov5/models/yolo.py", line 209, in forward
return self._forward_once(x, profile, visualize) # single-scale inference, train
File "/usr/local/lib/python3.8/dist-packages/yolov5/models/yolo.py", line 121, in _forward_once
x = m(x) # run
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/yolov5/models/yolo.py", line 74, in forward
xy = (xy * 2 + self.grid[i]) * self.stride[i] # xy
RuntimeError: The size of tensor a (48) must match the size of tensor b (60) at non-singleton dimension 2
11:17:42:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...fab41f)
11:17:42:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...ac344d) ['Found person'] took 201ms
11:17:42:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...85f20d) ['No objects found'] took 157ms
11:17:42:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:42:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...57a27c) ['No objects found'] took 95ms
This has been happening for a long time on previous versions. I can't remember when it started, but it was a lot of versions ago. I put it down to Blue Iris sending too many requests too close together and hadn't bothered reporting it until I saw this thread. It doesn't really cause a problem, as the error clears very quickly and detection continues as normal.
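If the too-many-concurrent-requests hunch is right, one mitigation (an assumption on my part, not an existing CPAI option) is to serialize calls into a shared detector so overlapping requests with different input sizes can't race on its cached state:

```python
import threading

_detect_lock = threading.Lock()

def safe_detect(detector, img, size=640):
    # Serialize inference: only one request touches the shared
    # model at a time, so size-dependent cached state (e.g. the
    # YOLO grids) can't be swapped out mid-forward-pass.
    with _detect_lock:
        return detector(img, size=size)
```

The trade-off is throughput: requests queue behind the lock instead of running in parallel, which matches the observation that the errors appear only under bursts of near-simultaneous detections.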
|
|
|
|
|
Hello,
Since Raspberry Pi just announced their new "Raspberry Pi AI Kit" ( https://www.raspberrypi.com/products/ai-kit/[^] ), what would be the possibility of getting a Hailo AI module natively added to CP.AI? I'm currently using a dual edge Coral TPU on my Pi 5, but it has its limitations.
|
|
|
|
|
Yeah I saw that - $70 for some serious power is pretty awesome.
The Hailo stack seems straightforward (though their site leaves something to be desired). Without access to the hardware we can't do anything here, but I'm sure it would be a very straightforward exercise for someone to adapt any of the existing object detection modules to use the Hailo models and TensorRT. The segmentation example, for instance, seems super simple.
cheers
Chris Maunder
|
|
|
|
|
I'm actually getting one in the next few days and can help in any way possible to get it up and running. Just let me know.
|
|
|
|
|
This looks very interesting; like the Coral TPU, but more modern. Google's lack of Coral support over the past few years has me concerned about the future of the platform. It won't take too many more years for Coral to no longer be competitive. I see it runs YOLOv5m at 640x640 at 218 FPS on their benchmarks page. (I just benchmarked a YOLOv8m 640x640 model running at 2.8 FPS on Coral. If you reduce the size to 352x608 it runs at 5.3 FPS, which is about as fast as you can get it to go on the Coral.)
I'm definitely interested in how well it works and how well my Coral TPU learnings/code port to it. What model did you order, and where did you find it for $70? This is the only M.2 card I see actually available immediately (and I don't see _any_ PCIe cards available):
https://eshop.aaeon.com/ai-edge-computing-hailo-8-m2-2280-module.html[^]
I see that they sell a $170 'starter kit' that looks effectively the same as the above card, but I'd need to fill in my work details, which I'm less comfortable with.
Order Hailo-8 Starter Kit | Hailo AI Processing Technology[^]
|
|
|
|
|
I ordered the kit from PiShop at Raspberry Pi AI Kit - PiShop.us[^]. It was listed as a preorder but shipped last week and will arrive tomorrow. I'm not sure why it's listed at $85 when other sites like CanaKit have it for the actual $70 (Raspberry Pi AI Kit for Pi 5[^]). I hope it will eventually be sold as a standalone chip instead of having to get the HAT with it as well, but my guess is that Raspberry Pi purchased them en masse at a discount and is reselling them at the lower price.
|
|
|
|
|
I wonder if I can just buy that kit for $70 and then throw out the M.2 HAT. Weirdly, that may be the most cost-effective way for me to get my hands on one.
|
|
|
|
|
Hi, you got me interested in this Hailo AI acceleration module, because it seems more cost effective than a GPU. How does that work, though? Plug and play? I have no idea in this realm. Would it work for YOLOv8, or is it something CPAI has to support first?
|
|
|
|
|
CPAI would need to add support for Hailo. Right now I'm having a hard time getting my hands on both the hardware and the developer software, for example.
|
|
|
|
|
All that needs to be done is to write a module. Looking at the example code that's floating around for Hailo, it looks like a fairly trivial (half a day?) job.
cheers
Chris Maunder
|
|
|
|
|