# Getting Started

## 📋 Prerequisites

- Ubuntu 24.04 or newer (Linux recommended), Desktop edition (or Server edition with GUI installed).
- [Docker](https://docs.docker.com/engine/install/)
- [Make](https://www.gnu.org/software/make/) (`sudo apt install make`)
- **Python 3** (`sudo apt install python3`) - required for video download and validation scripts
- Intel hardware (CPU, iGPU, dGPU, NPU)
- Intel drivers:
- [Intel GPU drivers](https://dgpu-docs.intel.com/driver/client/overview.html)
- [NPU](https://dlstreamer.github.io/dev_guide/advanced_install/advanced_install_guide_prerequisites.html#prerequisite-2-install-intel-npu-drivers)
- Sufficient disk space for models, videos, and results


### **NOTE:**

By default the application runs by pulling the pre-built images. To build the images locally before running, set the flag:
```bash
REGISTRY=false

usage: make <command> REGISTRY=false (applicable for all commands like benchmark, benchmark-stream-density..)
Example: make run-lp REGISTRY=false
```

By default the application runs in headless mode. To run in visual mode, set the flag:

```bash
RENDER_MODE=1

usage: make <command> RENDER_MODE=1 (applicable for all commands like benchmark, benchmark-stream-density..)
Example: make run-lp RENDER_MODE=1
```

(If this is the first time, it will take some time to download the videos, models, and Docker images and to build the images.)

## Step by step instructions:

1. Clone the repo with the below command

```
git clone -b <release-or-tag> --single-branch https://github.com/intel-retail/loss-prevention
```
>Replace `<release-or-tag>` with the version you want to clone (for example, **v4.0.0**).
```
git clone -b v4.0.0 --single-branch https://github.com/intel-retail/loss-prevention
```

2. To run Loss Prevention from pre-built images, follow the steps below:

```bash
# Download the models using download_models/downloadModels.sh
make download-models
# Update the git submodules
make update-submodules
# Download sample videos used by the performance tools
make download-sample-videos
# Run the LP application
make run-render-mode
```

**NOTE: A single make command runs all of the above commands internally and starts the Loss Prevention application.**

- Run the Loss Prevention application with a single command:

```bash
make run-lp
make run-lp RENDER_MODE=1
```

- Run the Loss Prevention application with environment variables:
```bash
CAMERA_STREAM=camera_to_workload_full.json WORKLOAD_DIST=workload_to_pipeline_cpu.json make run-lp
```
`CAMERA_STREAM=camera_to_workload_full.json`: runs all 6 workloads. <br>
`WORKLOAD_DIST=workload_to_pipeline_cpu.json`: runs all workloads on the CPU. <br>
By default, the 6 default Loss Prevention workloads are executed. Refer to the [Pre-Configured Workloads](#pre-configured-workloads) section for more details.

5. To build the images locally step by step:
3. To build the images locally step by step:
- Run the following commands:
```bash
make download-models REGISTRY=false
```
- The above series of commands can be executed with a single command:

```bash
make run-lp REGISTRY=false
make run-lp REGISTRY=false RENDER_MODE=1
```

8. Verify Results
4. Verify Results

After starting Loss Prevention, you will begin to see result files being written into the `results/` directory.
> [!NOTE]
> If unable to see results folder or files, please refer to the [Troubleshooting](#troubleshooting) section for more details.

9. Stop the containers:
5. Stop the containers:

When pre-built images are pulled-

```bash
make down-lp
```

When images are built locally-

```bash
make down-lp REGISTRY=false
```

## Pre-configured Workloads
The preconfigured workload supports multiple hardware configurations out of the box. Use the `CAMERA_STREAM` and `WORKLOAD_DIST` variables to customize which cameras and hardware (CPU, GPU, NPU) are used by your pipeline.

**How To Use:**
- Specify the appropriate files as environment variables when running or benchmarking:
```sh
CAMERA_STREAM=<camera_stream> WORKLOAD_DIST=<workload_dist> make run-lp
```
Or for benchmarking:
```sh
CAMERA_STREAM=<camera_stream> WORKLOAD_DIST=<workload_dist> make benchmark
```
### Loss Prevention

| Description | CAMERA_STREAM | WORKLOAD_DIST |
|-------------------------|-------------------------------|--------------------------------------|
| CPU (Default) | camera_to_workload.json | workload_to_pipeline.json |
| GPU | camera_to_workload.json | workload_to_pipeline_gpu.json |
| NPU + GPU | camera_to_workload.json | workload_to_pipeline_gpu-npu.json |
| Heterogeneous | camera_to_workload.json | workload_to_pipeline_hetero.json |

> [!NOTE]
> The following sub-workloads are automatically included and enabled in the configuration:
>
> `items_in_basket`
> `hidden_items`
> `fake_scan_detection`
> `multi_product_identification`
> `product_switching`
> `sweet_heartening`

### Automated Self-Checkout

| Description | CAMERA_STREAM | WORKLOAD_DIST |
|------------------------------------------------|-------------------------------|--------------------------------------|
| Object Detection (GPU) | camera_to_workload_asc_object_detection.json | workload_to_pipeline_asc_object_detection_gpu.json |
| Object Detection (NPU) | camera_to_workload_asc_object_detection.json | workload_to_pipeline_asc_object_detection_npu.json |
| Object Detection & Classification (GPU) | camera_to_workload_asc_object_detection_classification.json | workload_to_pipeline_asc_object_detection_classification_gpu.json |
| Object Detection & Classification (NPU) | camera_to_workload_asc_object_detection_classification.json | workload_to_pipeline_asc_object_detection_classification_npu.json |
| Age Prediction & Face Detection (GPU) | camera_to_workload_asc_age_verification.json | workload_to_pipeline_asc_age_verification_gpu.json |
| Age Prediction & Face Detection (NPU) | camera_to_workload_asc_age_verification.json | workload_to_pipeline_asc_age_verification_npu.json |
| Heterogeneous | camera_to_workload_asc_hetero.json | workload_to_pipeline_hetero.json |



### User Defined Workloads
The application is highly configurable via JSON files in the `configs/` directory.

**To try a new camera or workload:**

1. Create a new camera-to-workload mapping in `configs/camera_to_workload_custom.json` to add your camera and assign workloads.
- **camera_to_workload_custom.json**: Maps each camera to one or more workloads.
- To add or remove a camera, edit the `lane_config.cameras` array in the file.
- Each camera entry can specify its video source, region of interest, and assigned workloads.
Example:
```json
{
"lane_config": {
"cameras": [
{
"camera_id": "cam1",
"streamUri": "rtsp://rtsp-streamer:8554/video-stream-name",
"workloads": ["items_in_basket", "multi_product_identification"],
"region_of_interest": {"x": 100, "y": 100, "x2": 800, "y2": 600}
}
]
}
}
```
If adding new videos, place your video files in the directory **performance-tools/sample-media/** and update the `streamUri` path.
> [!NOTE]
> #### Connecting External RTSP Cameras
> To use real RTSP cameras instead of the built-in server:

```json
{
"camera_id": "external_cam1",
"streamUri": "rtsp://192.168.1.100:554/stream1",
"workloads": ["items_in_basket"]
}
```
2. Create a new `configs/workload_to_pipeline_custom.json` to define the pipeline for your workload.
- **workload_to_pipeline_custom.json**: Maps each workload name to a pipeline definition (sequence of GStreamer elements and models).
Example:

```json
{
"workload_pipeline_map": {
"custom_workload_1": [
{"type": "gvadetect", "model": "yolo11n", "precision": "INT8", "device": "CPU"},
{"type": "gvaclassify", "model": "efficientnet-v2-b0", "precision": "INT8", "device": "CPU"}
],
"custom_workload_2": [
{"type": "gvadetect", "model": "yolo11n", "precision": "INT16", "device": "NPU"},
{"type": "gvaclassify", "model": "efficientnet-v2-b0", "precision": "INT16", "device": "NPU"}
],
"custom_workload_3": [
{"type": "gvadetect", "model": "yolo11n", "precision": "INT8", "device": "GPU"},
{"type": "gvaclassify", "model": "efficientnet-v2-b0", "precision": "INT8", "device": "GPU"}
]
}
}
```
3. Run the config validation command to verify the configuration files:
```sh
make validate-all-configs
```
4. Re-run the pipeline as described above.
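Before running `make validate-all-configs`, a rough structural pre-check can catch missing keys or an empty region of interest early. Below is a minimal sketch of such a check (a hypothetical helper, not part of the repo; `make validate-all-configs` remains the authoritative validation):

```python
import json

# Fields every camera entry in a camera_to_workload-style file is expected to have
REQUIRED_CAMERA_KEYS = {"camera_id", "streamUri", "workloads"}

def check_camera_config(path):
    """Lightly validate a camera_to_workload-style JSON file; return camera count."""
    with open(path) as f:
        cfg = json.load(f)
    cameras = cfg["lane_config"]["cameras"]
    assert cameras, "at least one camera must be defined"
    for cam in cameras:
        missing = REQUIRED_CAMERA_KEYS - cam.keys()
        assert not missing, f"{cam.get('camera_id', '?')}: missing {missing}"
        roi = cam.get("region_of_interest")
        if roi:
            # x2/y2 must lie beyond x/y for a non-empty region
            assert roi["x"] < roi["x2"] and roi["y"] < roi["y2"], \
                f"{cam['camera_id']}: empty region_of_interest"
    return len(cameras)
```

For example, `check_camera_config("configs/camera_to_workload_custom.json")` returns the number of cameras if the structure looks sane, and raises an `AssertionError` naming the offending camera otherwise.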

> [!NOTE]
> Since the GStreamer pipeline is generated dynamically based on the provided configuration (the `camera_to_workload` and `workload_to_pipeline` JSON files),
> the `pipeline.sh` file gets updated every time the user runs `make run-lp` or `make benchmark`. This ensures that the pipeline reflects the latest changes.
```sh
src/pipelines/pipeline.sh
```
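For intuition, the expansion of a `workload_to_pipeline` entry into DL Streamer elements can be sketched in a few lines of Python. This is illustrative only: the real generator lives in the repo, and the `model=<name>_<precision>.xml` file-naming convention used here is an assumption for the example:

```python
def build_pipeline(elements):
    """Expand one workload's element list into a GStreamer launch fragment."""
    parts = []
    for e in elements:
        # e.g. {"type": "gvadetect", "model": "yolo11n", "precision": "INT8", "device": "CPU"}
        # NOTE: the model-file naming below is a guess for illustration, not the repo's scheme
        parts.append(
            f"{e['type']} model={e['model']}_{e['precision']}.xml device={e['device']}"
        )
    # GStreamer chains elements with " ! "
    return " ! ".join(parts)
```

Feeding it the `custom_workload_1` entry from step 2 would yield a fragment like `gvadetect model=yolo11n_INT8.xml device=CPU ! gvaclassify model=efficientnet-v2-b0_INT8.xml device=CPU`.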


## Troubleshooting

+ If the `results/` folder is empty, check the Docker containers for errors:
  + List the Docker containers and verify they are running with no errors in the container logs:
  ```sh
  docker ps --all
  ```
Result:
```bash
NAMES STATUS IMAGE
src-pipeline-runner-1 Up 17 seconds (healthy) pipeline-runner:lp
model-downloader Exited(0) 17 seconds model-downloader:lp
```

+ Check each container's logs:
```sh
docker logs <container_id>
```
+ If the file content in `<loss-prevention-workspace>/results/pipeline_stream*.log` is empty, check the GStreamer output files for errors:
  + `<loss-prevention-workspace>/results/gst-launch_*.log`

+ RTSP issues:
- **Connection timeout**: Check `RTSP_STREAM_HOST` and `RTSP_STREAM_PORT` environment variables
- **Stream not found**: Verify video file exists in `performance-tools/sample-media/`
- **Black frames**: Ensure video codec is H.264 (most compatible)
- **Check RTSP server logs**: `docker logs rtsp-streamer`
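When diagnosing a connection timeout, a quick TCP probe against the camera's host and port can rule out basic network problems before digging into container logs. A minimal sketch (the host and port values are placeholders; this checks socket reachability only, not the RTSP handshake or the stream itself):

```python
import socket

def rtsp_reachable(host, port=554, timeout=3.0):
    """Return True if a TCP connection to the RTSP endpoint succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # covers refused connections, timeouts, and DNS failures
        return False
```

For example, `rtsp_reachable("192.168.1.100")` returning `False` points at a network or `RTSP_STREAM_HOST`/`RTSP_STREAM_PORT` problem rather than a pipeline one.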
## [Proceed to Advanced Settings](advanced.md)