Inverse Kinematics for robot configurations

Hi everyone,

First of all, a huge thanks for the development of DART! :+1:
Currently, I am implementing my own robot simulation for my PhD, but I am relatively new to rigid body physics simulation.

I am a little bit stuck on the following issue:

I am trying to set up a robot simulation where I can simulate the stiffness and damping behaviour in the robot’s workspace. For that, I created a robot with stiffness and damping properties and used the KUKA KR 5 sixx URDF model for my first trials. Everything works as expected when I start the GLUT simulation via the SPACE bar: due to gravity or external forces, the robot ‘wiggles’ around.

Now I am trying to set the robot’s configuration to different poses in the workspace. Using the IK solver of the last body node, I am able to calculate the robot’s configuration for that pose and set the robot to that configuration (both via ->solveAndApply()). I can also see (visually, in the GLUT simulation) that the pose is calculated and set correctly.

But the problem now is twofold:

  • When I execute the IK before starting the GLUT simulation, the robot always swings back to the initial pose and does not start to wiggle around in the configuration that was calculated and set by the IK solver.
  • When I execute the IK after starting the GLUT simulation, it looks like the robot is ‘pushed’ to the calculated IK solution, but then swings back to the initial configuration like a spring.

Now my question is:

What is the correct way to set the robot’s configuration during simulation? Alternatively, I would also be happy with a way to stop the GLUT simulation, calculate and set the next desired configuration, and start the simulation again.

A simple, relative “movement” is done as follows:

void Robot::relativeTranslationTCP(double deltaTransX, double deltaTransY, double deltaTransZ)
{
	// Take the current TCP transform and shift it by the requested offset
	Eigen::Isometry3d tf = this->getSkeletonPtr()->getBodyNode(6)->getTransform();
	tf.translation() += Eigen::Vector3d(deltaTransX, deltaTransY, deltaTransZ);

	// Move the IK target to the new pose and apply the resulting configuration
	this->_inverseKinematicsPtr->getTarget()->setTransform(tf);
	this->getSkeletonPtr()->getBodyNode(6)->getIK()->solveAndApply();
}

If necessary, I can also upload a video of the behaviour.

What am I missing here? :frowning:

Thanks so much for any help and best regards from Germany!

Max


Edit: For your information, it may be that I didn’t understand the time stepping override correctly; my function looks like this:

void Environment::timeStepping()
{
	if (testJointForce > 1000)
	{
		testJointForce = 100.0;
		this->_robotPtr->reset();
	}
	SimWindow::timeStepping();
}

and my keyboard function looks as follows:

void Environment::keyboard(unsigned char key, int x, int y)
{
	Eigen::Vector6d tmp; // currently unused

	switch (key)
	{
	case '+': // increase the test torque and apply it to joint 1
		testJointForce += 100.0;
		this->applyTorqueToRobotJoint(1, testJointForce);
		break;

	case '-': // shift the TCP by 5 cm along z via IK
		this->_robotPtr->relativeTranslationTCP(0.0, 0.0, 0.05);
		break;

	case '0': // move the TCP to a fixed reference pose via IK
		this->_robotPtr->setTCP(0.3, 0, 0, M_PI, 0, 0);
		break;

	default:
		SimWindow::keyboard(key, x, y);
	}
}

When you set stiffness and damping properties, those properties are for the passive dynamics of the system. The stiffness will always try to push the arm back to the passive “rest position” (which is 0 by default). You can think of it as though there are mechanical springs in the robot’s joints, and those springs are fixed in place.
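In code, those passive properties live on each degree of freedom. Here is a minimal sketch of how they could be set (the function name, stiffness, and damping values are only placeholders, not values tuned for your KUKA model, and restConfig would be e.g. your IK solution):

#include <dart/dart.hpp>

// Hypothetical helper: make the joint "springs" pull toward a chosen rest
// configuration instead of the default zero pose.
void setPassiveRestConfiguration(
	const dart::dynamics::SkeletonPtr& skel,
	const Eigen::VectorXd& restConfig)
{
	for (std::size_t i = 0; i < skel->getNumDofs(); ++i)
	{
		dart::dynamics::DegreeOfFreedom* dof = skel->getDof(i);
		dof->setSpringStiffness(50.0);       // placeholder stiffness
		dof->setDampingCoefficient(5.0);     // placeholder damping
		dof->setRestPosition(restConfig[i]); // spring now pulls toward this value
	}
}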

When you “apply” the IK solution, you are effectively teleporting the robot’s joint positions to that configuration. If you aren’t using any active controller, then the passive dynamics will take over after the joint positions have been teleported.
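In other words, as far as the dynamics are concerned, the effect is the same as writing the generalized positions directly, roughly like this (ikSolution is a hypothetical configuration vector):

#include <dart/dart.hpp>

// Equivalent "teleport": overwrite the skeleton's joint positions in one go.
void teleportToConfiguration(
	const dart::dynamics::SkeletonPtr& skel,
	const Eigen::VectorXd& ikSolution)
{
	skel->setPositions(ikSolution);
	// Optionally also zero the velocities so the arm doesn't keep its old momentum
	skel->setVelocities(Eigen::VectorXd::Zero(skel->getNumDofs()));
}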

What you probably want is to make a PID controller to drive your arm to the desired configuration. You can use Skeleton::setForces(Eigen::VectorXd) to set the control forces of all the joints.
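As a rough sketch (just a PD term here; the gains are placeholders and not tuned), something like the following could be called once per step from your timeStepping() override, before SimWindow::timeStepping():

#include <dart/dart.hpp>

// Hypothetical PD controller: writes generalized forces that drive the
// skeleton toward desiredPositions (e.g. the configuration found by the IK).
void applyPdControl(
	const dart::dynamics::SkeletonPtr& skel,
	const Eigen::VectorXd& desiredPositions)
{
	const Eigen::VectorXd q  = skel->getPositions();
	const Eigen::VectorXd dq = skel->getVelocities();

	const double Kp = 200.0; // placeholder proportional gain
	const double Kd = 20.0;  // placeholder derivative gain

	// tau = Kp * (q_des - q) - Kd * dq
	skel->setForces(Kp * (desiredPositions - q) - Kd * dq);
}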

1 Like

Ah perfect, thanks for the detailed explanation! For my purposes, setting the rest positions is fully sufficient! :slightly_smiling_face:

Just another quick question: is it possible to draw coordinate systems at my joints or body nodes, or at arbitrary locations?

1 Like

If you use the dart::gui::osg visualization library (which is recommended over the glut rendering library), you can easily put a visual frame anywhere you’d like using the InteractiveFrame class.

Just instantiate a dart::gui::osg::InteractiveFrame, set its parent frame to the frame you want to visualize, and add it to your dart::simulation::World using World::addSimpleFrame(~).
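Roughly like this (the frame name here is arbitrary):

#include <dart/dart.hpp>
#include <dart/gui/osg/osg.hpp>

// Hypothetical sketch: attach a visual frame to a body node so the osg viewer
// draws a coordinate system that follows it.
void addFrameVisual(
	const dart::simulation::WorldPtr& world,
	dart::dynamics::BodyNode* bodyNode)
{
	auto frame = std::make_shared<dart::gui::osg::InteractiveFrame>(
		bodyNode, "tcp_frame"); // parented to the body node, so it tracks its transform
	world->addSimpleFrame(frame);
}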

It might be possible to use the dart::gui::osg::InteractiveFrame with the glut renderer that you’re currently using, but I don’t know if that’s been tested.

Okay, thanks! :slight_smile:
I tried that, but unfortunately, the osg rendering seems veeeery slow in comparison to glut, which runs flawlessly.

I’ll try to find a way :slight_smile:

I can think of two reasons it might be running slowly:

  1. You aren’t setting the number of simulation iterations per rendering cycle of the WorldNode. I would recommend using a dart::gui::osg::RealTimeWorldNode instead of the plain dart::gui::osg::WorldNode (see the sketch after this list).

  2. Maybe you’re building in Debug mode instead of Release mode. If you’re building DART from source, you can pass -DCMAKE_BUILD_TYPE=Release to CMake for a higher-performance build.
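For point 1, a rough sketch of what the switch looks like (assuming you already have a dart::simulation::WorldPtr; names are illustrative):

#include <dart/dart.hpp>
#include <dart/gui/osg/osg.hpp>

// Hypothetical sketch: a RealTimeWorldNode steps the simulation as many times
// per rendered frame as needed to keep up with wall-clock time.
void runOsgViewer(const dart::simulation::WorldPtr& world)
{
	::osg::ref_ptr<dart::gui::osg::RealTimeWorldNode> node =
		new dart::gui::osg::RealTimeWorldNode(world);

	dart::gui::osg::Viewer viewer;
	viewer.addWorldNode(node);
	viewer.run();
}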

1 Like

I think the following two changes would give users a better first impression of the OSG renderer:

  • Change the default simulation step count per rendering frame here to a higher number, like 20
  • Change the current examples to use RealTimeWorldNode

Thanks for the support, both of you :slight_smile: