Interface / media hub
web
README
The web node is the front door of the robot. It serves the
HTTPS UI, bridges browser input into Dora, fans live state back
out over WebSocket, proxies the camera service, manages the photo
gallery, and runs the lightweight face-follow logic.
nodes/web/web/main.py
Inputs: state from `power`, `audio`, `servo`, `eyes`, `config`, and `sequence`
Outputs: UI, gamepad, audio, servo, and sequence events
Key responsibilities
- Serve `Home / Showtime / Gallery` over HTTPS on port `8443`
- Push live robot state to connected browsers
- Proxy `go2rtc` as `/camera/snapshot.jpg` and `/camera/stream.mjpeg`
- Expose `/api/photos` and `/api/face-tracking` endpoints
- Forward gamepad and UI events into Dora outputs
When to touch it
- UI flows, camera behavior, gallery, or Discussions links
- Gamepad handling or web-to-node event routing
- Face-follow logic or mobile layout behavior
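The event-forwarding responsibility above can be sketched as a small routing table. The event types, output IDs, and payload shape here are illustrative assumptions, not the node's actual wire format:

```python
# Sketch: routing a browser-side event into a Dora output.
# EVENT_ROUTES and the event dict shape are assumptions for illustration.
EVENT_ROUTES = {
    "gamepad_axis": "gamepad",
    "play_sound": "play_sound",
    "move_servo": "move_servo",
    "trigger_scene": "trigger",
}

def route_ui_event(event: dict) -> tuple[str, dict]:
    """Translate a browser event into (dora_output_id, payload)."""
    try:
        output_id = EVENT_ROUTES[event["type"]]
    except KeyError:
        raise ValueError(f"unroutable UI event: {event.get('type')!r}")
    # Strip the routing key; everything else is the payload.
    payload = {k: v for k, v in event.items() if k != "type"}
    return output_id, payload
```

Keeping the routing declarative makes it easy to see at a glance which browser events reach which downstream nodes.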
sequence
README
The sequence node turns a single scene trigger into a timed plan
across audio, servos, eyes, and tracks. It is responsible for the
robot's gestures, dances, showtime actions, and interruptible
scene behavior.
nodes/sequence/sequence/main.py
Input: `trigger` from web
Outputs: `play_sound_sequence`, `move_servo_sequence`, `move_tracks_sequence`, `play_gif_sequence`, `stop_sequence`, `sequence_state`
What matters
- Scenes are timed plans, not blocking sleeps
- New triggers can interrupt active scenes
- Track movement can be part of a scene, not just servos and sound
- Scenes should return the robot to a neutral pose when finished
Good places to edit
- Add or tune scenes in `runtime.py`
- Keep scene durations and sound timing aligned
- Use tests in `nodes/sequence/tests/test_runtime.py` as the safety net
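"Timed plans, not blocking sleeps" can be sketched as a cursor over a list of offset-stamped steps that the node polls each tick; a new trigger just swaps the scene and resets the cursor, which is what makes scenes interruptible. The step tuple shape and output names are assumptions for illustration:

```python
# Sketch: a scene as a timed plan. Each step is (offset_s, output_id, payload).
Scene = list[tuple[float, str, dict]]

def due_steps(scene: Scene, started_at: float, now: float, cursor: int):
    """Return (steps_to_emit, new_cursor) for every step due by `now`.

    The caller polls this on each tick instead of sleeping. A new
    trigger replaces `scene` and resets `cursor` to 0, so an active
    scene can be interrupted at any step boundary.
    """
    elapsed = now - started_at
    emitted = []
    while cursor < len(scene) and scene[cursor][0] <= elapsed:
        emitted.append(scene[cursor])
        cursor += 1
    return emitted, cursor
```

Because nothing blocks, the node stays responsive to `stop_sequence` and fresh triggers between steps.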
Actuation / diagnostics
waveshare_servo
README
This node owns the SC-series servo bus. It discovers servos,
moves them, maps settings, exposes live diagnostics, clones
EEPROM settings, factory-resets hardware, and handles
auto-calibration.
nodes/waveshare_servo/entrypoint.py
Main UI inputs: move, wiggle, calibrate, diagnostics, clone, reset
Outputs: `servo_status`, `servos_list`, `servo_diagnostics`
Current capabilities
- Periodic discovery of attached SC-series servos
- Single-servo control plus per-servo settings
- Bulk diagnostics reads for status, model, and config
- Clone and factory-reset operations
- Arrow-structured outputs for the web UI
Why it matters
- This is where robot personality meets hardware reality
- Servo IDs, limits, inversion, and calibration all live here
- Most "why does the pose feel wrong?" questions end up here
tracks
README
The tracks node converts browser/gamepad movement into serial
commands for the RP2040 controller and also accepts short
`move_tracks_sequence` overrides from the sequence node.
nodes/tracks/tracks/main.py
Inputs: left-stick X/Y, `heartbeat`, `move_tracks_sequence`
Talks to the RP2040 over serial at `115200` baud
Current behavior
- Differential drive from manual stick input
- Smoothing / easing before commands are sent
- Regular heartbeat traffic to keep the controller alive
- Timed scene overrides for spins, wiggles, and turns
Boundary of responsibility
- Python side: intent, scaling, timing, serial output
- Firmware side: PWM, direction, safety timeout, motor behavior
- Motor-driver specifics stay hidden behind the RP2040 firmware
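The "differential drive from manual stick input" step can be sketched as a standard arcade-style mix, assuming stick axes normalised to [-1, 1]; the real node adds smoothing/easing before anything reaches the serial port:

```python
def _clamp(v: float) -> float:
    """Keep a track command inside [-1.0, 1.0] for the serial protocol."""
    return max(-1.0, min(1.0, v))

def mix_tracks(x: float, y: float) -> tuple[float, float]:
    """Differential-drive mix: stick (x, y) -> (left, right) commands.

    Forward stick (y) drives both tracks equally; x skews them so the
    robot turns. Clamping means full-forward plus full-turn saturates
    one track instead of overflowing the command range.
    """
    return _clamp(y + x), _clamp(y - x)
```

The firmware side then turns these normalised commands into PWM, which is why motor-driver specifics never leak into the Python code.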
audio
README
The audio node handles sound playback, available-sound discovery,
volume persistence, and sequence interruption via `stop_sequence`.
nodes/audio/audio/main.py
Inputs: `play_sound`, `play_sound_sequence`, `set_volume`, `stop`, `stop_sequence`
Outputs: `available_sounds`, `volume`
Current setup
- Pygame-based playback of local MP3 assets
- Current build prefers the Pi headphones audio device
- Sound list is refreshed periodically for the UI
What to remember
- Sequence interruption should stop scene audio cleanly
- Volume state persists across restarts
- Assumptions about the physical audio device matter at boot
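The volume-persistence point can be sketched as a tiny file-backed store; the file name, JSON format, and default value here are assumptions, not the node's actual storage scheme:

```python
import json
from pathlib import Path

def save_volume(path: Path, volume: float) -> None:
    """Persist a clamped volume (0.0-1.0) so it survives restarts."""
    path.write_text(json.dumps({"volume": max(0.0, min(1.0, volume))}))

def load_volume(path: Path, default: float = 0.5) -> float:
    """Load the persisted volume, falling back to a default on any damage."""
    try:
        return float(json.loads(path.read_text())["volume"])
    except (FileNotFoundError, KeyError, ValueError):
        return default
```

Falling back to a sane default on a missing or corrupt file matters here because this runs unattended at robot boot.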
Display / expression
eyes
README
The eyes node syncs GIF/JPG assets to the eye displays and
exposes a simple play interface so the web UI and sequence node
can switch the robot's eye state.
nodes/eyes/entrypoint.py
Inputs: `play_gif`, `play_gif_sequence`, `list_images`, `TICK`
Output: `available_images`
Current behavior
- Periodic asset sync to networked eye displays
- Image discovery for the web UI
- Real-time image changes from UI and sequences
Hardware assumption
- Network-addressable ESP32S3-based eye displays
- Longer timeouts are tolerated because the devices are resource-constrained
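The periodic asset sync can be sketched as a cheap diff that decides which files to push to a display. Comparing sizes instead of hashes is an illustrative choice that suits resource-constrained devices; the real node's change detection may differ:

```python
def assets_to_sync(local: dict[str, int], remote: dict[str, int]) -> list[str]:
    """Return asset names that must be pushed to an eye display.

    `local` and `remote` map file name -> size in bytes. A file is
    pushed when it is missing remotely or its size differs — a cheap
    change check that avoids hashing on the ESP32S3 side.
    """
    return sorted(
        name for name, size in local.items()
        if remote.get(name) != size
    )
```

Only the selected files then go over the network, which keeps sync cycles short even with generous per-request timeouts.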
Monitoring / safety
power
README
The power node converts INA226 readings into battery telemetry
that is actually useful while driving the robot: voltage, current,
power draw, state of charge, runtime, and low-battery shutdown.
nodes/power/power/main.py
Input: `tick` every 10 seconds
Outputs: `voltage`, `current`, `power`, `soc`, `runtime`, `capacity`, `discharge_rate`, `shutdown`
Battery model
- Reference pack: `HOOVO 3S LiPo 2200mAh 11.1V 50C`
- Uses a LiPo-aware voltage curve instead of a naive linear guess
- Applies smoothing and light load compensation
Why it matters
- This node is safety-relevant, not just cosmetic telemetry
- Bad battery assumptions here can lead to misleading runtime and shutdown behavior
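The "LiPo-aware voltage curve instead of a naive linear guess" can be sketched as piecewise-linear interpolation over rest-voltage breakpoints. The curve values below are illustrative 3S LiPo figures, not the node's calibrated table, and the real node layers smoothing and load compensation on top:

```python
# Illustrative 3S LiPo rest-voltage curve: (pack_voltage, soc_percent).
VOLTAGE_CURVE = [
    (9.9, 0), (10.8, 10), (11.1, 20), (11.4, 40),
    (11.7, 60), (12.0, 75), (12.3, 90), (12.6, 100),
]

def soc_from_voltage(v: float) -> float:
    """Piecewise-linear state of charge from pack voltage.

    A LiPo's voltage is flat through the middle of its discharge, so a
    single straight line badly over- or under-estimates remaining charge.
    """
    if v <= VOLTAGE_CURVE[0][0]:
        return 0.0
    if v >= VOLTAGE_CURVE[-1][0]:
        return 100.0
    for (v0, s0), (v1, s1) in zip(VOLTAGE_CURVE, VOLTAGE_CURVE[1:]):
        if v <= v1:
            return s0 + (s1 - s0) * (v - v0) / (v1 - v0)
    return 100.0  # unreachable; loop covers the full table
```

Getting these breakpoints wrong is exactly the "bad battery assumptions" failure mode: runtime estimates drift and shutdown fires too early or too late.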
config
README
The config node provides shared settings persistence across the
system. Other nodes update values through `update_setting`, and
the config node broadcasts both point changes and periodic full
settings snapshots.
nodes/config/config/main.py
Inputs: `update_setting`, `tick`
Outputs: `setting_updated`, `settings`
What it is good for
- Persisting cross-node settings in one place
- Broadcasting setting changes to interested nodes
- Keeping robot behavior stable across restarts
Typical consumers
- Servo settings and aliases
- Web-exposed configuration
- Any future node that needs durable shared state
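The update-then-broadcast pattern can be sketched as a minimal file-backed store. Class and method names, and the JSON file format, are assumptions for illustration; the real node additionally publishes the returned events on its Dora outputs:

```python
import json
from pathlib import Path

class SettingsStore:
    """Minimal sketch of shared-settings persistence."""

    def __init__(self, path: Path):
        self.path = path
        # Durable state: reload whatever survived the last restart.
        self.settings = json.loads(path.read_text()) if path.exists() else {}

    def update(self, key: str, value) -> dict:
        """Apply a point change, persist it, and return the change event
        (the shape a `setting_updated` broadcast would carry)."""
        self.settings[key] = value
        self.path.write_text(json.dumps(self.settings))
        return {"key": key, "value": value}

    def snapshot(self) -> dict:
        """Full copy of all settings, for the periodic `settings` broadcast."""
        return dict(self.settings)
```

Persisting on every update is the simple choice here; it trades a little write traffic for never losing a setting to an unclean shutdown.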