LCAS/aoc_packhouse_sim
Packing House Simulation (ROS 2 + Gazebo Classic)

This repository provides a ROS 2 Humble + Gazebo Classic simulation of a simple packing-house layout with moving conveyors, actors (walkers), and mounted perception sensors. It includes a ground-truth safety detector based on Gazebo state, and a pole-mounted sensor example that runs YOLO on RGB/Depth data and classifies people into safety zones.

What this repo is for

The simulation is designed to:

  • Provide a reproducible packing-house scene with two conveyor belts, moving props (boxes/balls), and actor walkers.

  • Offer safety-zone detection baselines:

    • A ground-truth detector using Gazebo model state/actor state services.
    • A camera/LiDAR pole example that runs YOLO detections and assigns zone levels based on world-space intersections.
  • Serve as a starting point for experimenting with sensor placement (e.g., moving the pole, changing camera/LiDAR angles) and adjusting detection logic.

Installation

1) ROS 2 & Gazebo Classic

This package targets ROS 2 Humble with Gazebo Classic. Ensure you have ROS 2 installed and sourced.

# Example: install core dependencies (Ubuntu 22.04 + ROS 2 Humble)
sudo apt install \
  ros-humble-rclpy \
  ros-humble-gazebo-ros \
  ros-humble-gazebo-msgs \
  ros-humble-geometry-msgs

2) Python dependencies (YOLO)

The pole-based example uses the ultralytics YOLO package:

pip install ultralytics

You will also need PyTorch; ultralytics typically pulls it in, but you may need to install it manually depending on your environment.

3) Optional: UR3e robot support

If you want to launch the UR3e example, install these additional ROS packages:

sudo apt install \
  ros-humble-ur-description \
  ros-humble-ros2-control \
  ros-humble-ros2-controllers \
  ros-humble-gazebo-ros2-control

4) Build the workspace

colcon build
source install/setup.bash

Safety architecture (common to both examples)

Both examples run the same two nodes:

  1. Zone publisher → publishes /human_zone_level

    • 0: clear
    • 1: Space A
    • 2: Space B
  2. Arm safety controller → subscribes to /human_zone_level and slows/stops the UR3e arm.

The arm safety controller in this repo is:

  • packing_house_sim/ur3e_safety_work_cycle_levels.py

⚠️ Only run one zone publisher at a time (either ground-truth or YOLO). Running both will cause flickering/conflicting zone levels.
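The controller side of this contract reduces to a small policy function. A minimal sketch, assuming the zone level arrives as a plain integer; the speed scales here are illustrative assumptions, not the values used by the real controller (packing_house_sim/ur3e_safety_work_cycle_levels.py):

```python
def speed_scale(zone_level: int) -> float:
    """Map a /human_zone_level value to an arm velocity scale.

    0 (clear)   -> full speed
    1 (Space A) -> reduced speed (0.3 is an illustrative value)
    2 (Space B) -> stop
    Unknown levels are treated conservatively as stop.
    """
    policy = {0: 1.0, 1: 0.3, 2: 0.0}
    return policy.get(zone_level, 0.0)
```

Whichever zone publisher you run, the controller only ever sees this one integer topic, which is why the two publishers are interchangeable.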

Default ground-truth example

The ground-truth example uses Gazebo model state and actor state services to detect when a person enters a 3D axis-aligned box region. It publishes /human_zone_level for the arm controller.

How to run it

Terminal 1 — start the sim (with UR3e):

ros2 launch packing_house_sim packhouse_with_ur3e.launch.py

Terminal 2 — arm safety controller (zone-level based):

ros2 run packing_house_sim ur3e_safety_work_cycle_levels

Terminal 3 — ground-truth zone classifier:

ros2 run packing_house_sim human_zone_classifier

(Optional) Override Space A/B bounds at runtime:

ros2 run packing_house_sim human_zone_classifier --ros-args \
  -p space_A_min:='[-1.2,-1.8,-1.6]' -p space_A_max:='[1.6,1.2,1.8]' \
  -p space_B_min:='[-0.8,-1.2,-0.8]' -p space_B_max:='[1.3,0.8,1.4]'

Verify it’s working:

ros2 topic echo /human_zone_level
ros2 topic echo /safety_alert

What it does

  • The classifier uses the /gazebo/model_states topic and the /gazebo/get_entity_state service to track the walker actors in the scene.
  • It checks if any detected human/actor is inside the configured AABB (axis-aligned bounding box) and publishes the zone level.
  • Box size and actor names are configurable via ROS parameters.
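The containment test itself is simple geometry. A minimal sketch of the AABB check and the resulting zone classification; the bounds below are the ones from the override example above, used purely for illustration (the real node, packing_house_sim/human_zone_classifier.py, reads them from ROS parameters):

```python
def in_aabb(point, box_min, box_max):
    """True if a 3D point lies inside the axis-aligned box [box_min, box_max]."""
    return all(lo <= p <= hi for p, lo, hi in zip(point, box_min, box_max))

def zone_level(person_xyz, space_a, space_b):
    """Classify a position: 2 = Space B (inner box), 1 = Space A, 0 = clear.

    space_a and space_b are (min, max) corner pairs; B is checked first
    because it lies inside A.
    """
    if in_aabb(person_xyz, *space_b):
        return 2
    if in_aabb(person_xyz, *space_a):
        return 1
    return 0

# Illustrative bounds matching the parameter-override example:
space_a = ((-1.2, -1.8, -1.6), (1.6, 1.2, 1.8))
space_b = ((-0.8, -1.2, -0.8), (1.3, 0.8, 1.4))
```

With these bounds, a walker at the origin is in Space B (level 2), while one at the outer edge of Space A reports level 1.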

Relevant files:

  • Zone classifier: packing_house_sim/human_zone_classifier.py
  • Arm safety controller: packing_house_sim/ur3e_safety_work_cycle_levels.py
  • World: worlds/packhouse_control_walkers.sdf

Pole-mounted sensor example (YOLO + world-space zones)

The pole example uses the simulated RGB-D camera mounted on the pole in the SDF. It runs YOLO, projects detections into world coordinates, intersects them with the ground plane, and classifies them into Zone A / B.

How to run it

Terminal 1 — sim (with UR3e):

ros2 launch packing_house_sim packhouse_with_ur3e.launch.py

Terminal 2 — arm safety work-cycle (pole version):

ros2 run packing_house_sim ur3e_safety_work_cycle_levels_pole

Terminal 3 — YOLO (pole, world coords):

ros2 run packing_house_sim human_yolo_topdown_detector_pole

Commonly useful parameter overrides (matching the configuration we tested):

ros2 run packing_house_sim human_yolo_topdown_detector_pole --ros-args \
  -p rgb_topic:=/pole/pole_depth/image_raw \
  -p depth_topic:=/pole/pole_depth/depth/image_raw \
  -p depth_info_topic:=/pole/pole_depth/depth/camera_info \
  -p conf:=0.15 \
  -p use_depth_filter:=False

Verify it’s working:

ros2 topic echo /human_zone_level

What it does

  • Subscribes to pole RGB/Depth topics (from the SDF model).

  • Runs YOLO (person class) and converts detections to world-space points using TF and depth camera intrinsics.

  • Intersects detection rays with the ground plane and publishes a zone level:

    • 0: no person
    • 1: Zone A
    • 2: Zone B
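The projection step is plain pinhole geometry: convert the pixel to a camera-frame ray using the camera intrinsics, rotate it into the world frame using the camera pose from TF, then solve for where the ray crosses the ground plane z = 0. The function below is an illustrative sketch under those assumptions, not the node's actual implementation; the intrinsics and pose in the usage note are made up:

```python
def pixel_to_ground(u, v, fx, fy, cx, cy, cam_pos, cam_rot):
    """Return the world (x, y) where the ray through pixel (u, v) meets z = 0.

    fx, fy, cx, cy: pinhole intrinsics from the camera_info message.
    cam_pos: camera origin in world coordinates.
    cam_rot: 3x3 world-from-camera rotation matrix (nested tuples).
    Returns None if the ray is parallel to the ground or points away from it.
    """
    # Ray direction in the camera frame (optical convention: z forward).
    d_cam = ((u - cx) / fx, (v - cy) / fy, 1.0)
    # Rotate the direction into the world frame.
    d_world = tuple(sum(cam_rot[i][j] * d_cam[j] for j in range(3))
                    for i in range(3))
    if abs(d_world[2]) < 1e-9:
        return None  # ray never reaches the ground
    t = -cam_pos[2] / d_world[2]  # solve cam_pos.z + t * d.z == 0
    if t <= 0:
        return None  # intersection is behind the camera
    return (cam_pos[0] + t * d_world[0], cam_pos[1] + t * d_world[1])
```

For example, a camera 5 m up looking straight down maps its principal point to the ground directly beneath it; off-centre detections land proportionally further out. The resulting (x, y) is then tested against the Zone A / Zone B regions exactly as in the ground-truth case.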

Relevant files:

  • Pole detector node: packing_house_sim/human_yolo_topdown_detector_pole.py
  • Arm safety controller: packing_house_sim/ur3e_safety_work_cycle_levels_pole.py
  • Pole sensors in SDF: worlds/packhouse_control_walkers.sdf

Customizing the setup (move pole, change camera/LiDAR angles, etc.)

If you want to change the sensor layout or pose, you typically need to update two places:

  1. Edit the SDF world to move the pole or adjust sensor offsets:

    • File: worlds/packhouse_control_walkers.sdf

    • Key sections:

      • <model name="sensor_pole"> (move the pole by editing its <pose>)
      • <link name="panel_link"> (panel pose/rotation)
      • Sensor <pose> entries for pole_lidar, pole_rgb, and pole_depth
  2. Update the TF static transforms if you rely on the UR3e + TF launch file:

    • File: launch/packhouse_with_ur3e.launch.py
    • Adjust the static transform publishers (e.g., world_to_panel_link, panel_link_to_pole_lidar) so they match your updated SDF poses.

If you only use the bringup world and do not launch the UR3e TF publishers, you can often skip the TF updates. If you use human_yolo_topdown_detector_pole.py, the TF chain it uses must match the sensor pose in the SDF.
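For reference, a static transform entry in a launch file typically looks like the sketch below. The frame names match the ones mentioned above, but the numeric pose values are placeholders, not the repository's actual values; copy them from your edited SDF:

```python
# Illustrative static transform entry, roughly as it might appear in
# launch/packhouse_with_ur3e.launch.py. Pose values are placeholders and
# must mirror the corresponding <pose> in worlds/packhouse_control_walkers.sdf.
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    world_to_panel_link = Node(
        package='tf2_ros',
        executable='static_transform_publisher',
        arguments=[
            '--x', '2.0', '--y', '-1.5', '--z', '3.0',      # placeholder translation
            '--roll', '0', '--pitch', '0.5', '--yaw', '0',  # placeholder rotation
            '--frame-id', 'world', '--child-frame-id', 'panel_link',
        ],
    )
    return LaunchDescription([world_to_panel_link])
```

If a transform here drifts out of sync with the SDF, the YOLO pole detector will project detections to the wrong world coordinates even though the images look correct.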

Example: UR3e + packing house

To launch the world plus a UR3e robot (and TF publishers for the pole), run:

ros2 launch packing_house_sim packhouse_with_ur3e.launch.py

This launches:

  • The packing-house world (same as bringup).
  • The UR3e robot in Gazebo.
  • Static TF publishers for the robot base and sensor pole frames.

Helpful topics

Pole sensors (namespace /pole):

  • RGB image: /pole/camera/image
  • RGB camera info: /pole/camera/camera_info
  • Depth image: /pole/rgbd/depth_image
  • Depth camera info: /pole/rgbd/camera_info
  • Point cloud: /pole/rgbd/points
  • LiDAR points: /pole/lidar/points

Troubleshooting tips

  • If you see missing TF errors, ensure the static transforms in packhouse_with_ur3e.launch.py match the SDF poses.
  • If YOLO doesn’t start, verify that ultralytics is installed (pip install ultralytics) and that your Python environment is the same one your ROS 2 overlay uses.
  • If Gazebo can’t find models, ensure GAZEBO_MODEL_PATH is set (the bringup launch file sets this automatically).

License

This project is licensed under the Apache License 2.0 — see the LICENSE file for details.
