Simulation — Isaac Lab & MuJoCo
OpenArm simulation environments for reinforcement learning, imitation learning, and foundation model research. Includes Isaac Lab integration with NVIDIA Isaac Sim and MuJoCo physics simulation setup.
1. Isaac Lab Simulation
Overview
Welcome to the OpenArm Isaac Lab Simulation Documentation. OpenArm is officially integrated into NVIDIA's Isaac Sim / Isaac Lab ecosystem, enabling development and evaluation of reinforcement learning, imitation learning, and foundation model-based approaches.
The OpenArm model is released under an open-source license (Apache 2.0) and the team aims to contribute practical simulation assets, training workflows, and benchmarks that can be directly applied to real-world robotic systems.
Version Information
| Component | Version |
|---|---|
| OpenArm | 1.0.0 |
| Isaac Sim | 5.1.0 |
| Isaac Lab | 2.3.0 |
| Python | 3.11 |
| Platform | Linux x86-64 |
| License | Apache 2.0 |
Available RL Demo Environments
Four reinforcement learning environments are publicly available in the openarm_isaac_lab repository:
| Environment | Description |
|---|---|
| Reaching Task | Arm learns to reach target positions in 3D space |
| Lifting a Cube | Arm picks up a cube from a table surface |
| Opening a Drawer | Arm grasps and pulls open a drawer handle |
| Opening a Cabinet | Arm interacts with a hinged cabinet door |
Detailed implementation code, configuration files, and usage instructions can be found in the openarm_isaac_lab GitHub repository.
Isaac Lab Setup
Follow the official Isaac Lab installation guide at docs.isaacsim.omniverse.nvidia.com, then clone the OpenArm extension:
```bash
# After installing Isaac Lab, clone the OpenArm extension
cd /path/to/isaaclab
git clone https://github.com/enactic/openarm_isaac_lab.git \
  source/extensions/openarm_isaac_lab

# Install the extension
./isaaclab.sh -i source/extensions/openarm_isaac_lab

# Run a demo environment (example: reaching task)
./isaaclab.sh -p source/extensions/openarm_isaac_lab/scripts/train.py \
  --task OpenArm-Reach-v0 \
  --num_envs 4096
```
Coming Soon
- Teleoperation code — VR and joystick teleoperation in simulation
- Imitation Learning Code — Dataset recording and behavior cloning setup
- Sim2Real Code — Transfer pipeline from simulation to real hardware
2. MuJoCo Simulation
Overview
MuJoCo (Multi-Joint dynamics with Contact) is a physics engine maintained by Google DeepMind. MJCFs (MuJoCo Description Files) are XML files describing all moving scene components, facilitating algorithm experimentation in reproducible environments.
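For reference, a minimal MJCF file has the following shape (a generic one-link pendulum, not an OpenArm asset): the `worldbody` holds the scene tree, and each `body` carries its own joints and geoms:

```xml
<mujoco model="minimal_pendulum">
  <worldbody>
    <light pos="0 0 2"/>
    <geom name="floor" type="plane" size="1 1 0.1"/>
    <body name="link" pos="0 0 1">
      <!-- Hinge joint about the y-axis; the capsule geom is the link itself -->
      <joint name="hinge" type="hinge" axis="0 1 0"/>
      <geom type="capsule" fromto="0 0 0 0 0 -0.4" size="0.02"/>
    </body>
  </worldbody>
</mujoco>
```

The OpenArm MJCF files follow this same structure, only with the full arm kinematics, meshes, and actuators filled in.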
The OpenArm MuJoCo model is available in the openarm_mujoco repository.
Setup & Quick Start
1. Download and install MuJoCo.

   ```bash
   # MuJoCo 3.3.4 on x86 Linux
   MUJOCO_VERSION="3.3.4"
   wget -q --show-progress \
     "https://github.com/google-deepmind/mujoco/releases/download/${MUJOCO_VERSION}/mujoco-${MUJOCO_VERSION}-linux-x86_64.tar.gz"
   tar --extract --gzip --verbose \
     --file="mujoco-${MUJOCO_VERSION}-linux-x86_64.tar.gz"
   rm "mujoco-${MUJOCO_VERSION}-linux-x86_64.tar.gz"
   ```

2. Launch the MuJoCo `simulate` viewer.

   ```bash
   cd "mujoco-${MUJOCO_VERSION}/bin"
   ./simulate
   ```

3. Clone the OpenArm MuJoCo repository.

   ```bash
   git clone https://github.com/enactic/openarm_mujoco.git
   ```

4. Load the OpenArm model. In the MuJoCo simulate window, drag `v1/openarm_bimanual.xml` from a file explorer into the window. The bimanual OpenArm model will load in the physics simulation.
Technical Details
Actuator Model
The MuJoCo model uses torque control for its actuators, matching the motor control mode of the real hardware. This keeps the simulated dynamics realistic, but it also means the client must close its own control loops (for example position or impedance control) and send torque commands every simulation step.
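As an illustration of such client-side control, a joint-space PD law is the simplest way to turn position targets into torque commands. A minimal sketch in plain NumPy (the gains and torque limit here are illustrative placeholders, not tuned for OpenArm):

```python
import numpy as np

def pd_torque(q, qd, q_target, kp=40.0, kd=2.0, tau_max=10.0):
    """Joint-space PD law: tau = kp * (q_target - q) - kd * qd, clipped to limits."""
    tau = kp * (np.asarray(q_target) - np.asarray(q)) - kd * np.asarray(qd)
    return np.clip(tau, -tau_max, tau_max)

# Example: a 7-DoF arm at rest, slightly below its target configuration
q = np.zeros(7)          # current joint positions
qd = np.zeros(7)         # current joint velocities
q_target = np.full(7, 0.1)
tau = pd_torque(q, qd, q_target)
```

In the MuJoCo loop, the resulting torque vector would be written into `data.ctrl` before each `mj_step` call.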
Geometry Groups
| Group | Purpose |
|---|---|
| Group 2 | Visual geometry (for rendering) |
| Group 3 | Collision geometry (for physics) |
Physics simulation uses the convex hull of collision geometries, providing an efficient approximation for contact computation.
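To see what this approximation means in practice, consider a concave cross-section: its convex hull fills in the notch, so contacts may register slightly outside the true surface. A small, self-contained 2D illustration using Andrew's monotone-chain algorithm (pure Python, independent of MuJoCo):

```python
def convex_hull(points):
    """Andrew's monotone chain: hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); positive means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# L-shaped outline: the concave corner (1, 1) is dropped from the hull,
# so the hull "fills in" the notch of the L
l_shape = [(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)]
hull = convex_hull(l_shape)
```

This is why deeply concave parts are often split into several convex collision geoms in MJCF: each piece's hull then stays close to the true shape.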
MJCF Composition
The MJCF files are composable — you can load individual arm components or the full bimanual setup:
- `v1/openarm_bimanual.xml` — Full bimanual setup (both arms)
- Individual arm components can be loaded independently for single-arm tasks
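MJCF composition is typically done with the `<include>` element, which splices another file's contents into the model at parse time. A hypothetical sketch (the child filename is illustrative; see the openarm_mujoco repository for the actual component file names):

```xml
<mujoco model="single_arm_scene">
  <!-- Hypothetical filename: splice one arm's MJCF into this scene.
       Check the openarm_mujoco repository for the real component files. -->
  <include file="v1/openarm_left.xml"/>
  <worldbody>
    <geom name="table" type="box" pos="0.4 0 0.35" size="0.3 0.4 0.35"/>
  </worldbody>
</mujoco>
```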
Python API Usage
```python
# Basic MuJoCo Python usage with OpenArm
import mujoco
import mujoco.viewer
import numpy as np

# Load the model
model = mujoco.MjModel.from_xml_path("v1/openarm_bimanual.xml")
data = mujoco.MjData(model)

# Apply torque control to joints
with mujoco.viewer.launch_passive(model, data) as viewer:
    while viewer.is_running():
        # Set torque commands (torque control mode)
        data.ctrl[:] = np.zeros(model.nu)

        # Step simulation
        mujoco.mj_step(model, data)
        viewer.sync()
```
ROS2 Bridge (Coming Soon)
A ROS2 package enabling MuJoCo integration as a mock hardware interface will be released shortly after the v1 MuJoCo model release. This will allow the full ROS2 control stack to be tested against MuJoCo simulation before deployment to real hardware.
- OpenArm MuJoCo repository: github.com/enactic/openarm_mujoco
- MuJoCo documentation: mujoco.readthedocs.io
- Google DeepMind MuJoCo releases: github.com/google-deepmind/mujoco