18 changes: 17 additions & 1 deletion .github/copilot-instructions.md
@@ -1,11 +1,26 @@
## Development Practices

- Start with minimal, lean implementations focused on proof-of-concept
- Avoid creating new files until asked
- Avoid implementing things from scratch
- Avoid defensive error handling for hypothetical failures
- Use print statements and logging sparingly, unless asked
- Avoid light wrappers and custom classes, unless asked
- - Avoid `if __name__ == "__main__"` patterns in package code
+ - Avoid `if __name__ == "__main__"` patterns in package code, unless asked
For example, rather than using:
```python
from math import sqrt

def main():
    sqrt(2)

if __name__ == "__main__":
    main()
```
Leave it as a top-level script:
```python
from math import sqrt

sqrt(2)
```
- Skip unit tests unless explicitly requested
- Follow patterns in CONTRIBUTING.md when present
- Prefer writing Python if no language specified
@@ -33,6 +48,7 @@

## Change Logging

- Create CHANGELOG.md if it doesn't exist
- Each time you generate code, note the changes in CHANGELOG.md
- Follow semantic versioning guidelines
- Include date and description of changes (see the example entry below)
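For example, a minimal entry (illustrative format only; version number and date are placeholders):
```markdown
## [0.2.0] - 2025-09-29
### Added
- Hand-eye calibration script and calibration quality analysis tools
```
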
45 changes: 41 additions & 4 deletions README.md
@@ -119,7 +119,7 @@ camera:

# AprilTag Configuration
apriltag:
family: "tag36h11"
family: "tagStandard41h12"
tag_size: 0.023 # 23mm tags
```

@@ -181,7 +181,7 @@ from ur_toolkit.apriltag_detection import AprilTagDetector

# Initialize detector
detector = AprilTagDetector(
-    tag_family='tag36h11',
+    tag_family='tagStandard41h12',
tag_size=0.023, # 23mm tags
camera_calibration_file='camera_calibration/camera_calibration.yaml'
)
@@ -228,6 +228,43 @@ python src/ur_toolkit/camera_calibration/capture_calibration_photos.py
python src/ur_toolkit/camera_calibration/calculate_camera_intrinsics.py
```

### 5. Hand-Eye Calibration

For precise camera-to-robot coordinate transformation:

```bash
# Run automated hand-eye calibration
python scripts/run_hand_eye_calibration.py --robot-ip 192.168.1.100 --tag-ids 0 --num-poses 15
```

**What it does:**
- Automatically moves robot through 15 diverse poses
- Captures AprilTag detections at each pose
- Solves the AX = XB calibration equation using OpenCV's robust algorithm (see the sketch below)
- Saves calibration to `src/ur_toolkit/hand_eye_calibration/hand_eye_calibration.json`
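
As a hedged illustration of what that solve involves (a synthetic sanity check, not the script's actual internals; variable names and the test geometry here are made up), the snippet below builds a known gripper-to-camera transform, simulates consistent robot/tag pose pairs, and recovers it with `cv2.calibrateHandEye`:

```python
import cv2
import numpy as np
from scipy.spatial.transform import Rotation as R

# Ground-truth hand-eye transform (gripper -> camera) that we try to recover
X = np.eye(4)
X[:3, :3] = R.from_euler('xyz', [5, -3, 90], degrees=True).as_matrix()
X[:3, 3] = [0.05, 0.0, 0.08]  # assumed 5/8 cm camera offset from the TCP

# A fixed AprilTag somewhere in the workspace (base frame)
T_base_tag = np.eye(4)
T_base_tag[:3, 3] = [0.4, 0.1, 0.02]

robot_poses, tag_poses = [], []
for rot in R.random(15):       # 15 diverse simulated robot orientations
    T_bg = np.eye(4)           # base -> gripper
    T_bg[:3, :3] = rot.as_matrix()
    T_bg[:3, 3] = np.random.uniform(-0.2, 0.2, 3) + [0.3, 0.0, 0.3]
    robot_poses.append(T_bg)
    # Camera -> tag observation consistent with the ground truth X
    tag_poses.append(np.linalg.inv(X) @ np.linalg.inv(T_bg) @ T_base_tag)

R_c2g, t_c2g = cv2.calibrateHandEye(
    R_gripper2base=[T[:3, :3] for T in robot_poses],
    t_gripper2base=[T[:3, 3] for T in robot_poses],
    R_target2cam=[T[:3, :3] for T in tag_poses],
    t_target2cam=[T[:3, 3] for T in tag_poses],
)
print(np.round(t_c2g.ravel() - X[:3, 3], 6))  # ~zero when the solve recovers X
```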

**Parameters:**
- `--robot-ip`: Your robot's IP address
- `--tag-ids`: AprilTag IDs to use (default: [0])
- `--num-poses`: Number of calibration poses (default: 15)
- `--manual`: Use manual freedrive mode instead of automatic movement (example below)

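If automated motion is not appropriate for your cell, a manual run uses only the flags documented above (the IP address is a placeholder for your robot):

```bash
# Freedrive mode: position the robot by hand for each calibration pose
python scripts/run_hand_eye_calibration.py --robot-ip 192.168.1.100 --manual
```
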
**Safety Notes:**
- ⚠️ Ensure workspace is clear before starting
- Keep emergency stop accessible
- Verify robot joint limits and collision avoidance
- AprilTag must be visible from all poses

**Quality Assessment:**
- **Excellent**: Translation error < 5mm, Rotation error < 2°
- **Good**: Translation error < 10mm, Rotation error < 5°
- **Poor**: Translation error > 10mm, Rotation error > 5° (recalibrate; see the helper below)
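
These bands are easy to apply in code; a minimal helper (illustrative only, thresholds copied from the list above):

```python
def rate_calibration(trans_err_mm: float, rot_err_deg: float) -> str:
    """Map average calibration errors to the quality bands above."""
    if trans_err_mm < 5 and rot_err_deg < 2:
        return "excellent"
    if trans_err_mm < 10 and rot_err_deg < 5:
        return "good"
    return "poor"  # recalibrate
```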

**Troubleshooting:**
- **"No AprilTag detected"**: Check lighting, tag visibility, verify tag IDs
- **"Pose not reachable"**: Start from more central robot position
- **"Poor calibration quality"**: Increase `--num-poses`, ensure good lighting

## 🏗️ Development

The project uses a `src/` layout for better packaging and testing:
@@ -310,7 +347,7 @@ from apriltag_detection import AprilTagDetector
robot = URRobotInterface('192.168.0.10')
camera = PiCam(PiCamConfig.from_yaml('camera_client_config.yaml'))
detector = AprilTagDetector(
-    tag_family='tag36h11',
+    tag_family='tagStandard41h12',
tag_size=0.023,
camera_calibration_file='camera_calibration/camera_calibration.yaml'
)
@@ -335,7 +372,7 @@ for detection in detections:
- **Pi Camera + Laptop**: Same subnet (configure in `camera_client_config.yaml`)

### AprilTag Settings
- - Default: tag36h11 family, 23mm size
+ - Default: tagStandard41h12 family (recommended), 23mm size
- Customize in detection code for your specific tags
- Ensure tags are printed at exact scale for accurate pose estimation
- **Robot + Pi Camera**: Different subnets OK
207 changes: 207 additions & 0 deletions analyze_calibration_accuracy.py
@@ -0,0 +1,207 @@
import json
import numpy as np
from scipy.spatial.transform import Rotation as R
from pathlib import Path


def analyze_hand_eye_calibration_quality():
    """
    Analyze hand-eye calibration quality using real geometric metrics
    instead of OpenCV's misleading residual artifacts.
    """

    # Load calibration data
    calib_file = Path('src/ur_toolkit/hand_eye_calibration/hand_eye_calibration.json')
    with open(calib_file) as f:
        data = json.load(f)

    print("🔬 HAND-EYE CALIBRATION QUALITY ANALYSIS")
    print("=" * 60)
    print()

    # Basic info
    info = data['calibration_info']
    print(f"📅 Calibration: {info['timestamp']}")
    print(f"📊 Poses collected: {info['poses_collected']}")
    print(f"✅ Successful: {info['calibration_successful']}")
    print()

    # Extract calibration data
    T_ce = np.array(data['hand_eye_transform']['matrix'])
    robot_poses = [np.array(pose) for pose in data['raw_pose_data']['robot_poses']]
    tag_poses = [np.array(pose) for pose in data['raw_pose_data']['apriltag_poses']]

    print("🎯 CALIBRATION RESULT ANALYSIS:")
    print(f" Camera position: [{T_ce[0,3]*1000:.1f}, {T_ce[1,3]*1000:.1f}, {T_ce[2,3]*1000:.1f}] mm")
    print(f" Camera distance from TCP: {np.linalg.norm(T_ce[:3,3])*1000:.1f} mm")

    # Analyze rotation
    R_ce = T_ce[:3,:3]
    euler = R.from_matrix(R_ce).as_euler('xyz', degrees=True)
    print(f" Camera orientation: [{euler[0]:.1f}°, {euler[1]:.1f}°, {euler[2]:.1f}°]")
    print()

    # REAL QUALITY METRICS
    print("📊 REAL QUALITY METRICS (NOT OpenCV artifacts):")
    print()

    # 1. Pose diversity analysis
    print("1️⃣ POSE DIVERSITY ANALYSIS:")

    # Translation diversity
    robot_translations = np.array([pose[:3,3] for pose in robot_poses])
    translation_range = np.ptp(robot_translations, axis=0) * 1000  # Convert to mm
    translation_volume = np.prod(translation_range)

    print(f" Translation range: [{translation_range[0]:.1f}, {translation_range[1]:.1f}, {translation_range[2]:.1f}] mm")
    print(f" Workspace volume: {translation_volume/1000:.0f} cm³")

    # Rotation diversity
    robot_rotations = [R.from_matrix(pose[:3,:3]) for pose in robot_poses]
    angles = []
    for i in range(len(robot_rotations)):
        for j in range(i+1, len(robot_rotations)):
            rel_rot = robot_rotations[i].inv() * robot_rotations[j]
            angle = rel_rot.magnitude()
            angles.append(np.degrees(angle))

    max_rotation_diff = max(angles)
    avg_rotation_diff = np.mean(angles)

    print(f" Max rotation difference: {max_rotation_diff:.1f}°")
    print(f" Average rotation difference: {avg_rotation_diff:.1f}°")

    # Quality assessment for diversity
    trans_quality = "EXCELLENT" if translation_volume > 50000 else "GOOD" if translation_volume > 10000 else "POOR"
    rot_quality = "EXCELLENT" if max_rotation_diff > 45 else "GOOD" if max_rotation_diff > 20 else "POOR"

    print(f" Translation diversity: {trans_quality}")
    print(f" Rotation diversity: {rot_quality}")
    print()

    # 2. Consistency analysis
    print("2️⃣ CALIBRATION CONSISTENCY ANALYSIS:")

    # Check consistency by validating the hand-eye equation:
    # T_base_to_tag = T_base_to_ee * T_ee_to_cam * T_cam_to_tag
    consistency_errors = []

    for i in range(len(robot_poses)):
        T_base_to_ee = robot_poses[i]
        T_cam_to_tag = tag_poses[i]

        # Predicted tag pose in base frame using hand-eye calibration
        T_predicted_base_to_tag = T_base_to_ee @ T_ce @ T_cam_to_tag

        # For comparison, we need a reference. Use the first pose as reference.
        if i == 0:
            T_reference_base_to_tag = T_predicted_base_to_tag
            continue

        # The tag should appear at the same location in base frame for all poses
        T_error = T_reference_base_to_tag @ np.linalg.inv(T_predicted_base_to_tag)

        # Extract translation and rotation errors
        trans_error = np.linalg.norm(T_error[:3,3]) * 1000  # mm
        rot_error = R.from_matrix(T_error[:3,:3]).magnitude() * 180/np.pi  # degrees

        consistency_errors.append({
            'pose': i+1,
            'translation_error_mm': trans_error,
            'rotation_error_deg': rot_error
        })

    if consistency_errors:
        avg_trans_error = np.mean([e['translation_error_mm'] for e in consistency_errors])
        avg_rot_error = np.mean([e['rotation_error_deg'] for e in consistency_errors])
        max_trans_error = max([e['translation_error_mm'] for e in consistency_errors])
        max_rot_error = max([e['rotation_error_deg'] for e in consistency_errors])

        print(f" Average consistency error: {avg_trans_error:.2f}mm, {avg_rot_error:.2f}°")
        print(f" Maximum consistency error: {max_trans_error:.2f}mm, {max_rot_error:.2f}°")

        # Quality assessment for consistency
        consistency_quality = "EXCELLENT" if avg_trans_error < 5 and avg_rot_error < 2 else \
                              "GOOD" if avg_trans_error < 15 and avg_rot_error < 5 else \
                              "POOR"

        print(f" Consistency quality: {consistency_quality}")

        # Show worst poses
        worst_trans = max(consistency_errors, key=lambda x: x['translation_error_mm'])
        worst_rot = max(consistency_errors, key=lambda x: x['rotation_error_deg'])
        print(f" Worst translation error: Pose {worst_trans['pose']} ({worst_trans['translation_error_mm']:.2f}mm)")
        print(f" Worst rotation error: Pose {worst_rot['pose']} ({worst_rot['rotation_error_deg']:.2f}°)")

    print()

    # 3. Physical reasonableness
    print("3️⃣ PHYSICAL REASONABLENESS CHECK:")

    camera_distance = np.linalg.norm(T_ce[:3,3]) * 1000
    camera_pos = T_ce[:3,3] * 1000

    # Check if camera position makes physical sense
    distance_reasonable = 50 < camera_distance < 300  # 5cm to 30cm from TCP

    # Check if camera is pointing roughly forward (positive Z in camera frame
    # should align with some robot axis)
    camera_z_direction = T_ce[:3,2]  # Camera Z axis (forward direction)
    camera_pointing_forward = camera_z_direction[0] > 0.5  # Roughly pointing in robot X direction

    print(f" Camera distance from TCP: {camera_distance:.1f}mm {'✅' if distance_reasonable else '⚠️'}")
    print(f" Camera pointing direction: {'Forward ✅' if camera_pointing_forward else 'Other orientation ⚠️'}")
    print(f" Physical setup: {'Reasonable ✅' if distance_reasonable else 'Check mounting ⚠️'}")
    print()

    # 4. Overall quality assessment
    print("🏆 OVERALL QUALITY ASSESSMENT:")

    quality_scores = []
    if 'trans_quality' in locals():
        quality_scores.append(trans_quality)
    if 'rot_quality' in locals():
        quality_scores.append(rot_quality)
    if 'consistency_quality' in locals():
        quality_scores.append(consistency_quality)

    excellent_count = quality_scores.count('EXCELLENT')
    good_count = quality_scores.count('GOOD')
    poor_count = quality_scores.count('POOR')

    if excellent_count >= 2:
        overall_quality = "EXCELLENT"
        overall_emoji = "🥇"
    elif good_count >= 2 and poor_count == 0:
        overall_quality = "GOOD"
        overall_emoji = "🥈"
    elif poor_count <= 1:
        overall_quality = "ACCEPTABLE"
        overall_emoji = "🥉"
    else:
        overall_quality = "POOR"
        overall_emoji = "❌"

    print(f" {overall_emoji} Overall calibration quality: {overall_quality}")
    print()

    # 5. Recommendations
    print("💡 RECOMMENDATIONS:")

    if overall_quality == "EXCELLENT":
        print(" ✅ Calibration is excellent - ready for precise visual servoing")
        print(" ✅ Expected accuracy: 1-3mm translation, 0.5-1.5° rotation")
    elif overall_quality == "GOOD":
        print(" ✅ Calibration is good - suitable for most visual servoing tasks")
        print(" ✅ Expected accuracy: 2-5mm translation, 1-3° rotation")
    elif overall_quality == "ACCEPTABLE":
        print(" ⚠️ Calibration is acceptable but could be improved")
        print(" 📝 Consider recalibrating with more diverse poses")
        print(" 📝 Expected accuracy: 3-8mm translation, 2-5° rotation")
    else:
        print(" ❌ Calibration quality is poor - recalibration recommended")
        print(" 📝 Increase pose diversity, especially rotations")
        print(" 📝 Check physical setup and AprilTag mounting")

    print()
    print("📋 NOTE: These are REAL quality metrics, not OpenCV's misleading 500+mm 'residuals'")


if __name__ == "__main__":
    analyze_hand_eye_calibration_quality()
73 changes: 73 additions & 0 deletions compare_calibrations.py
@@ -0,0 +1,73 @@
import json
import numpy as np

# Load current calibration
with open('src/ur_toolkit/hand_eye_calibration/hand_eye_calibration.json') as f:
    current = json.load(f)

print("=== HAND-EYE CALIBRATION COMPARISON ===\n")

# Previous calibration data (from conversation history)
prev_poses = 8
prev_translation = [-113.73, 11.49, 28.88] # mm
prev_trans_error = 753.38 # mm
prev_rot_error = 119.06 # degrees

# Current calibration data
curr_poses = current['calibration_info']['poses_collected']
curr_trans = current['hand_eye_transform']['translation_mm']
curr_translation = [curr_trans['x'], curr_trans['y'], curr_trans['z']]
curr_accuracy = current['calibration_accuracy']['residuals']
curr_trans_error = curr_accuracy['average_translation_error_mm']
curr_rot_error = curr_accuracy['average_rotation_error_deg']

print("📅 PREVIOUS CALIBRATION (Sep 26):")
print(f" Poses collected: {prev_poses}")
print(f" Translation: [{prev_translation[0]:.1f}, {prev_translation[1]:.1f}, {prev_translation[2]:.1f}] mm")
print(f" Average errors: {prev_trans_error:.1f}mm translation, {prev_rot_error:.1f}° rotation")
print()

print("📅 CURRENT CALIBRATION (Sep 29):")
print(f" Poses collected: {curr_poses}")
print(f" Translation: [{curr_translation[0]:.1f}, {curr_translation[1]:.1f}, {curr_translation[2]:.1f}] mm")
print(f" Average errors: {curr_trans_error:.1f}mm translation, {curr_rot_error:.1f}° rotation")
print()

# Calculate differences
translation_shift = np.array(curr_translation) - np.array(prev_translation)
total_shift = np.linalg.norm(translation_shift)
error_change_trans = curr_trans_error - prev_trans_error
error_change_rot = curr_rot_error - prev_rot_error

print("🔍 COMPARISON ANALYSIS:")
print(f" Translation shift: [{translation_shift[0]:.1f}, {translation_shift[1]:.1f}, {translation_shift[2]:.1f}] mm")
print(f" Total position shift: {total_shift:.1f} mm")
print(f" Error change: {error_change_trans:+.1f}mm translation, {error_change_rot:+.1f}° rotation")
print()

print("📊 QUALITY ASSESSMENT:")
print(f" Data quantity: {'✅ Better' if curr_poses > prev_poses else '❌ Same/Worse'} ({curr_poses} vs {prev_poses} poses)")
print(f" Position stability: {'✅ Good' if total_shift < 50 else '⚠️ Significant shift'} ({total_shift:.1f}mm shift)")
print(f" Error consistency: {'✅ Consistent' if abs(error_change_trans) < 100 else '⚠️ Variable'} ({error_change_trans:+.1f}mm change)")

# Interpretation
print("\n🎯 INTERPRETATION:")
if total_shift < 30:
    print(" Position difference is small - calibrations are reasonably consistent")
elif total_shift < 60:
    print(" Moderate position difference - some variation but acceptable")
else:
    print(" Large position difference - indicates calibration variability")

if abs(error_change_trans) < 50:
    print(" Error levels are similar - both calibrations have comparable internal consistency")
else:
    print(" Different error levels - may indicate different pose diversity or quality")

print("\n💡 RECOMMENDATION:")
if curr_poses > prev_poses and total_shift < 50:
    print(" Current calibration is better: More poses + consistent position")
elif total_shift > 60:
    print(" Consider recalibrating with more rotational diversity for stability")
else:
    print(" Both calibrations are reasonable - use current for most recent data")