The Design
KindBot is about 20cm tall with a clean, geometric body that’s easy to 3D print and modify.
It uses:
- A flush-mounted screen for a face
- A second flush-mounted screen on its belly
- Japanese-style arcade buttons above the belly screen
It’s designed to be stable, simple, and self-contained.
Movement
KindBot is currently stationary.
Movement is planned to be minimal but intentional.
Hardware
The build is centred around a Raspberry Pi 5 (8GB).
Current components:
- Raspberry Pi 5
- Active cooling
- 128GB microSD
- 27W USB-C power supply
- Micro-HDMI output
- 10” capacitive touchscreen and 5” touchscreen
- Home Assistant server
- Keychron B1 Pro ultra-slim keyboard
- White stylus
- Various smart home components (Zigbee dongle, IKEA plugs, temperature sensors, PIR sensors, Ring doorbell)
It also includes a Ryobi battery inverter, but it’s designed to stay plugged in most of the time.
The Face
The face is a screen, but it carries the personality.
It’s simple and readable, with expressions that match tone and interaction.

Next step is to animate it.
Voice + Interaction
KindBot is designed to be spoken to naturally.
It also has a keyboard and stylus for input. The keyboard supports multiple Bluetooth connections, which makes it easy to switch between KindBot and another device.
- Microphone input
- Bluetooth speaker output
- Continuous interaction loop
You talk, it responds.
This feature is currently paused. Talking to a robot still feels weird.
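The loop itself is simple, though. Here's a toy sketch in Python of the talk-then-respond cycle, with stub functions standing in for the real microphone, speech-to-text, and LLM pieces (all the names here are placeholders, not KindBot's actual code):

```python
def transcribe(audio):
    """Placeholder: the real build would use a mic plus a speech-to-text service."""
    return audio

def respond(text):
    """Placeholder: the real build would call the LLM backend."""
    return f"KindBot heard: {text}"

def interaction_loop(utterances):
    """Toy version of the continuous loop: each utterance is transcribed,
    answered, and (in a real build) sent to a speaker via TTS."""
    replies = []
    for audio in utterances:
        text = transcribe(audio)
        replies.append(respond(text))  # real build: pipe to Bluetooth speaker
    return replies
```

The real version just wraps this in a `while True:` with wake-word detection at the top.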
The Brain
KindBot connects to:
- OpenClaw / ChatGPT / Claude / Replicate (via VPS) for conversation and image generation
- Home Assistant running on a separate PC for real-world control
I’m currently integrating OpenClaw with Home Assistant via webhooks and APIs. I’ve avoided installing OpenClaw directly on my Home Assistant server for now.
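For the Home Assistant side, the integration is mostly authenticated HTTP: Home Assistant exposes a REST API where a service call is a POST with a long-lived access token. Here's a minimal sketch of building such a call (host name, token, and entity ID are placeholders, not my actual setup):

```python
import json

HA_URL = "http://homeassistant.local:8123"  # placeholder host
HA_TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"   # created under the HA user profile

def build_service_call(domain, service, entity_id):
    """Build the URL, headers, and JSON body for a Home Assistant
    REST API service call: POST /api/services/<domain>/<service>."""
    url = f"{HA_URL}/api/services/{domain}/{service}"
    headers = {
        "Authorization": f"Bearer {HA_TOKEN}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"entity_id": entity_id})
    return url, headers, body

# Example: switch on an IKEA smart plug. Actually sending it is just
#   requests.post(url, headers=headers, data=body)
url, headers, body = build_service_call("switch", "turn_on", "switch.ikea_plug")
```

Webhooks go the other direction: Home Assistant automations can POST to `/api/webhook/<id>` without a token, which is handy for letting HA poke the assistant.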
The goal is something closer to KITT from Knight Rider.
Software
The interface launches on boot using systemd.
It starts instantly: no desktop, no setup.
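The systemd part is a single unit file, roughly like this (paths and names are illustrative, not my exact config):

```ini
# /etc/systemd/system/kindbot-ui.service
[Unit]
Description=KindBot interface
After=network-online.target

[Service]
# Launch the UI directly, no desktop session required.
ExecStart=/home/pi/kindbot/start-ui.sh
Restart=on-failure
User=pi

[Install]
WantedBy=multi-user.target
```

Enable it once with `sudo systemctl enable --now kindbot-ui.service` and it launches on every boot.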
- The Home Assistant dashboard runs on the 10” belly screen
- The 5” face is controlled via MQTT (Mosquitto broker)
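Driving the face over MQTT boils down to publishing small JSON messages to a topic the face screen subscribes to. A rough sketch, where the topic name and payload schema are illustrative rather than KindBot's actual protocol:

```python
import json

FACE_TOPIC = "kindbot/face/expression"  # hypothetical topic naming

def face_message(expression, duration_s=2.0):
    """Encode a face-expression command as an (topic, JSON payload) pair.
    The schema here is an example, not a fixed protocol."""
    payload = json.dumps({"expression": expression, "duration": duration_s})
    return FACE_TOPIC, payload

# Publishing with paho-mqtt against the Mosquitto broker would look like:
#   client.connect("localhost", 1883)
#   client.publish(*face_message("happy"))
topic, payload = face_message("happy")
```

The face side just subscribes to the same topic and swaps animations when a message arrives.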
Build Philosophy
- Small footprint
- Straightforward construction
I’m considering open-sourcing it and selling build packs with all required parts.
What I’m Building
KindBot sits somewhere between a tool and a companion.
It:
- Responds when you talk
- Controls things in your home
That’s enough.
What’s Next
- Reconnect the face screen
- Finalise and 3D print the enclosure (SketchUp first)
- Connect the arcade buttons
- Upgrade the face UI
After that, I’ll decide whether to release files or take it further.
Follow Along
I’ll keep posting updates as it develops.
Right now, I’m being blown away by OpenClaw.
Yesterday, I ran out of OpenAI tokens, so it couldn’t generate images. I asked it which API I should use instead.
It recommended Flux 1.1 Pro via Replicate.
So I signed up, bought credits, and added the API.
Then I asked it to generate an image… and it said it couldn’t actually use Replicate.
That was frustrating.
So I told it.
It apologised and said it would be more careful making recommendations involving purchases.
Then I tried something else.
I asked it to implement Flux 1.1 Pro itself.
And it did.
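For anyone curious, the Replicate side of that integration is small. A hedged sketch, assuming the official Python client and the Flux 1.1 Pro model slug as listed on Replicate (check the model page for the exact input parameters before relying on them):

```python
MODEL = "black-forest-labs/flux-1.1-pro"  # Replicate slug for Flux 1.1 Pro

def flux_input(prompt, aspect_ratio="1:1"):
    """Build the input dict for a Flux 1.1 Pro run on Replicate.
    Parameter names are assumptions based on the model's listed inputs."""
    return {"prompt": prompt, "aspect_ratio": aspect_ratio}

# With the official client (pip install replicate, REPLICATE_API_TOKEN set):
#   import replicate
#   output = replicate.run(MODEL, input=flux_input("a small friendly robot"))
```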
That moment felt ridiculous, in a good way.
I asked the system what was wrong. It told me. I asked it to fix itself. It did. I felt like I was on Star Trek.
Since then, I’ve been breaking things less. Still breaking things, but less.
KindBot's OpenClaw made a little profile logo for himself. So cute.

Quick Note on Setup
If you want to try OpenClaw, run it on a VPS.
I used Hostinger; they have a one-click install, which makes life easier.
Running it locally didn’t feel safe or stable.
Get Involved
If you’re building something similar or want to collaborate, I’d love to hear from you.