# Installation
KEA Planner is a ROS 2 workspace. The repository ships a Dockerfile for a self-contained build and a
docker-compose file for LLM/VLM services. You can either run everything in Docker or build locally on ROS 2.
## Option 1: Docker-based setup (recommended)
This path uses the provided Dockerfile to build a workspace image. It is suitable for development and CI.
- Build the image:

  ```bash
  docker build -f docker/kea_planner.Dockerfile -t kea-planner-core .
  ```

- (Optional) Start the LLM/VLM services:

  ```bash
  docker compose -f docker/docker-compose.yml up -d llm vlm
  ```

- Run a shell inside the workspace image:

  ```bash
  docker run --rm -it --network host kea-planner-core bash
  ```
Notes:

- The compose file maps the Ollama containers to ports 11436 (LLM) and 11437 (VLM) on the host. Adjust `llm_agent.endpoint` and `vlm_agent.endpoint` if you are running these services elsewhere.
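If the services run on a different host or port, the endpoints can be overridden in a ROS 2 parameter file. A minimal sketch, assuming the parameter names from the note above; the hostnames and URL shape are placeholders:

```yaml
/**:
  ros__parameters:
    llm_agent:
      endpoint: "http://my-llm-host:11436"
    vlm_agent:
      endpoint: "http://my-vlm-host:11437"
```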
## Option 2: Native install (ROS 2 Jazzy)
The Dockerfile builds against ROS 2 Jazzy. If you install natively, start from the same distribution.
- Install ROS 2 Jazzy and PlanSys2.
- Install build tools:

  ```bash
  sudo apt install -y python3-colcon-common-extensions python3-rosdep
  ```

- Create and build the workspace:

  ```bash
  mkdir -p ~/ros_ws/kea_planner_ws/src
  cd ~/ros_ws/kea_planner_ws/src
  git clone https://github.com/leggedrobotics/kea_planner.git
  cd ~/ros_ws/kea_planner_ws
  sudo rosdep init || true
  rosdep update
  rosdep install --from-paths src --ignore-src -r -y
  colcon build --symlink-install
  source install/setup.bash
  ```
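To have new terminals pick up the overlay automatically, you can append the `source` line to your shell startup file; a sketch, using the same workspace path as the commands above:

```shell
echo 'source ~/ros_ws/kea_planner_ws/install/setup.bash' >> ~/.bashrc
```

Without this, you need to `source install/setup.bash` manually in every new terminal before running any of the workspace's packages.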
## Optional Python dependencies
Some experimental subsystems have additional Python dependencies that are not installed by default:
- `kea_plan_gen` (prompt generation / LLM training): `datasets`, `trl`, `peft`.
- `kea_testing` (image generation): `diffusers`, `torch`, `pillow`, and optionally `bitsandbytes` for 4-bit models.
Install them in the environment you plan to run those tools in, for example:
```bash
pip install datasets trl peft diffusers torch pillow bitsandbytes
```
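Before running either subsystem, it can help to check which of its optional imports are actually available in the current environment. A minimal sketch; the package lists come from the section above, and the pip-name-to-import-name mapping (`pillow` imports as `PIL`) is the usual one:

```python
import importlib.util

# Optional imports per experimental subsystem (names from the lists above).
OPTIONAL_DEPS = {
    "kea_plan_gen": ["datasets", "trl", "peft"],
    "kea_testing": ["diffusers", "torch", "PIL", "bitsandbytes"],
}

def missing_deps(subsystem):
    """Return the optional imports that cannot be found for a subsystem."""
    return [mod for mod in OPTIONAL_DEPS[subsystem]
            if importlib.util.find_spec(mod) is None]

if __name__ == "__main__":
    for name in OPTIONAL_DEPS:
        gaps = missing_deps(name)
        print(f"{name}: {'ok' if not gaps else 'missing ' + ', '.join(gaps)}")
```

Running the script prints one line per subsystem, so you can install only what the tool you intend to run actually lacks.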
## API keys for hosted LLMs (optional)
If you use OpenAI or Gemini profiles, set the API key in your shell (e.g., in ~/.bashrc) and reload it:
```bash
export OPENAI_API_KEY="your-key"
export GEMINI_API_KEY="your-key"
```
For OpenAI profiles, set `llm_agent.config` in the profile to point at your env var, for example:

```yaml
/**:
  ros__parameters:
    llm_agent:
      config: '{"api_key_env":"OPENAI_API_KEY"}'
```
Gemini profiles read the `GEMINI_API_KEY` environment variable directly.
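The env-var indirection above can be sketched in Python. The JSON shape `{"api_key_env": ...}` comes from the profile example; the helper name and error message are hypothetical, not part of the KEA Planner API:

```python
import json
import os

def resolve_api_key(config_str):
    """Parse an llm_agent.config JSON string and fetch the named env var."""
    cfg = json.loads(config_str)
    env_name = cfg["api_key_env"]          # e.g. "OPENAI_API_KEY"
    key = os.environ.get(env_name)
    if not key:
        raise RuntimeError(f"{env_name} is not set; export it in your shell")
    return key

if __name__ == "__main__":
    print(resolve_api_key('{"api_key_env":"OPENAI_API_KEY"}'))
```

The indirection means the key itself never appears in the profile, only the name of the environment variable that holds it.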