Mindaugas Beliauskas
ROBOTIC R12 ARM OBJECT COORDINATE RECOGNITION PROJECT
Group Project Requirements:
To write code that controls the Robotic R12 Arm with a camera attached to the end effector and autonomously detects the corner coordinates of a known-color object with respect to the robot base frame.
R12 Robotic Arm:

Procedure:
First, a function that transforms the robot's joint space into Cartesian coordinate space was created using forward kinematics calculations. Then the object's global search algorithm was implemented: the robot had to detect the quadrant in which the object is located. This was done by converting the camera image into a binary image using hue-based thresholding and Gaussian distribution algorithms. The investigated image processing algorithms are shown below:

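As an illustration of this segmentation step, the sketch below (Python with OpenCV, not the project's original code) converts a camera frame into a binary mask by Gaussian filtering and hue thresholding, then reports which image quadrant the object's centroid falls in. The hue range, blur kernel size, and quadrant numbering are assumptions made for the example.

```python
# Minimal sketch of the hue + Gaussian segmentation step (illustrative only).
# The hue range for the "known colour" and the blur kernel are placeholder values.
import cv2
import numpy as np

def segment_object(frame_bgr, hue_lo=100, hue_hi=130):
    """Return a binary mask of the coloured object and the image quadrant
    (1-4, counter-clockwise from top-right) containing its centroid."""
    # Smooth the image first to suppress sensor noise (Gaussian filtering).
    blurred = cv2.GaussianBlur(frame_bgr, (5, 5), 0)
    # Threshold on hue in HSV space to isolate the known object colour.
    hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (hue_lo, 80, 80), (hue_hi, 255, 255))

    # Locate the centroid of the segmented blob via image moments.
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return mask, None          # object not visible in this view
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]

    # Map the centroid to one of the four image quadrants.
    h, w = mask.shape
    right, top = cx >= w / 2, cy < h / 2
    quadrant = 1 if (right and top) else 2 if top else 3 if not right else 4
    return mask, quadrant
```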
Next, once the object's quadrant was detected, the robot tried to align the wrist so that the whole object was in the image, i.e. no edge of the object lay on the image boundary (a simple check for this condition is sketched below).
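One minimal way to express this "whole object in view" condition, assuming the binary mask from the previous sketch, is to require that no foreground pixel lies near the image border; the wrist can then be re-oriented in small steps until the predicate holds. The margin value is an assumption.

```python
# Sketch of the "whole object in view" check used during wrist alignment
# (illustrative assumption, not the original project code).
import numpy as np

def object_fully_in_view(mask, margin=2):
    """True if no foreground pixel of the binary mask lies within `margin`
    pixels of the image boundary."""
    border = np.zeros_like(mask, dtype=bool)
    border[:margin, :] = border[-margin:, :] = True
    border[:, :margin] = border[:, -margin:] = True
    return not np.any((mask > 0) & border)
```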
Then the robot aligned its camera perpendicularly over the object's top-right, top-left, bottom-left, and bottom-right corners. The joint angles were recorded after detecting each corner and then converted into Cartesian coordinate space, as in the forward kinematics sketch below.
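The joint-angle-to-Cartesian conversion amounts to forward kinematics. The sketch below chains standard Denavit-Hartenberg transforms; the DH table shown is a placeholder for illustration and is not the real R12 geometry.

```python
# Illustrative forward kinematics: recorded joint angles -> Cartesian position
# in the robot base frame. The DH parameters below are placeholders, not the
# actual R12 values.
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform for one joint."""
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0,        sa,       ca,      d],
        [0,         0,        0,      1],
    ])

def joints_to_cartesian(joint_angles, dh_params):
    """Chain the per-joint transforms and return the (x, y, z) position of the
    end effector (camera) expressed in the base frame."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_params):
        T = T @ dh_transform(theta, d, a, alpha)
    return T[:3, 3]

# Placeholder DH table (d, a, alpha) per joint -- replace with the R12 values.
DH_PARAMS = [(0.25, 0.0, np.pi / 2), (0.0, 0.25, 0.0),
             (0.0, 0.05, np.pi / 2), (0.25, 0.0, -np.pi / 2), (0.1, 0.0, 0.0)]
print(joints_to_cartesian([0.0, -0.5, 0.8, 0.0, 0.3], DH_PARAMS))
```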
The sequence of pictures seen by the robot arm is shown below:
| ![]() Global search #1 | ![]() Global search #2 |
|---|---|
| ![]() Global search #3 | ![]() Global search #4 |
| ![]() Global search #5 | ![]() Box found |
| ![]() Aligning the camera perpendicularly | ![]() Aligning the camera to corners |
| ![]() Aligning the camera to corners | |
