flowy

Project image
Purpose
Academic
w/Matteo Loglio
Course
Machine Learning for Designers
Type
Group project
Role
Coder, UX/UI Designer
Year
2025
Duration
1 week

Abstract

flowy is an AI-powered yoga instructor that enhances your yoga practice by analyzing your poses in real time and providing personalized suggestions for improvement. For each pose, it assigns a score based on the accuracy of your execution, using three precision levels (10, 50, or 100), which represent the percentage of pose accuracy.

Abstract Visual

The Challenge

'Things That Think': prototype an interaction leveraging large language models (LLMs), agents, or machine learning techniques. This entailed exploring creative methods for utilizing inputs such as cameras and microphones to detect events, body positions, speech, sounds, and more. These inputs could then drive various forms of output, which could include web interfaces, images, sound, text, speech, or even physical interactions.

Challenge Visual

The Solution

An object designed to assist individuals practicing yoga at home while offering real-time feedback. We envisioned a compact device that could function either as a smartphone accessory (app + phone stand) or as a standalone unit with its own screen. In both cases, portability was key, ensuring users could enjoy yoga sessions wherever they are.

The primary input is the front-facing camera, which tracks the user's movements. The outputs include both visual elements (such as a timer, pose illustration, and performance score) and auditory feedback to guide the user in real time.

Solution Visual

The Process

UX

User Journey

  • User detection: when flowy detects a person, it activates automatically
  • Session initiation: each pose is shown with its name and an illustration
  • Pose preparation: the user has 10 seconds to get into position, guided by a gentle circular indicator
  • Pose execution & feedback: flowy provides real-time vocal feedback and assigns a performance score
  • Flow completion: after completing all poses, flowy displays a congratulatory message and gives vocal acknowledgment

Ambient experience: soft background music plays throughout the session, enhancing focus and relaxation
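The journey above amounts to a simple state machine. A minimal sketch, with illustrative state and event names (not the project's actual code):

```javascript
// Illustrative sketch of flowy's session flow as a state machine.
// States mirror the journey steps: idle -> detected -> prepare -> execute -> complete.
function nextState(state, event) {
  switch (state) {
    case "idle":     return event === "personDetected" ? "detected" : "idle";
    case "detected": return event === "poseShown" ? "prepare" : "detected";
    case "prepare":  return event === "timerDone" ? "execute" : "prepare";
    case "execute":  // after feedback, move to the next pose or finish the flow
      if (event === "nextPose") return "detected";
      if (event === "allPosesDone") return "complete";
      return "execute";
    default:         return state;
  }
}
```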
UI

User Interface

The interface is designed to be as simple and minimal as possible, ensuring that key information remains clear and easily visible from a distance of approximately 2.5 meters, where the user will typically be positioned.
It displays three crucial pieces of information:

  • Circular timer (top) indicates the countdown for different phases, such as pose preparation and pose holding
  • Pose illustration (center) shows an image of the current pose to guide the user visually
  • Score bar (bottom) fills with distinct colors representing three performance levels: red for 10, yellow for 50, and green for 100, making it easy to interpret from afar
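The score-to-color mapping can be sketched in a few lines of JavaScript; the function name and hex values are illustrative, not the project's exact palette:

```javascript
// Map the three precision levels to the score-bar colors described above.
// Hex values are placeholders standing in for the actual brand palette.
function scoreColor(score) {
  if (score >= 100) return "#2ecc71"; // green: accurate pose
  if (score >= 50)  return "#f1c40f"; // yellow: partially correct
  return "#e74c3c";                   // red: needs adjustment
}
```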
UI Sketch

UI Example 2



The illustrations of the poses were drawn from scratch, featuring round, playful shapes to better align with the overall branding. The animation of the pose that appears on the final screen was created using Adobe After Effects.

Pose 1

Tree Pose

Pose 3

Crescent Lunge

Pose 2

Pose Animation

ML

Machine Learning and code

We explored different technologies that could support our goal. After experimenting with different tools and ML models (e.g., Teachable Machine, LLaVA), we discovered that GPT-4o was effective enough for our needs.
During this phase, we also considered integrating skeleton detection with MediaPipe to enhance feedback accuracy. However, GPT-4o proved highly reliable in correctly identifying yoga poses.

script.js - main logic

Contains two key functions that use the OpenAI GPT-4o API:

personDetection():
  • Captures an image from the webcam
  • Asks GPT-4o whether there is a person in the image
  • Returns a JSON with isPerson: true/false
poseEvaluation(yogaPose):
  • Captures an image from the webcam
  • Asks GPT-4o to analyze whether the person is performing the specified yoga pose correctly
  • Returns detailed feedback and a score (10%, 50%, or 100%)
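A minimal sketch of how these two calls might look, assuming the browser `fetch` API and the vision message format of the chat-completions endpoint; the prompt wording and the `askVisionModel` / `parseModelJson` helpers are illustrative, not the project's actual code:

```javascript
// Hypothetical sketch: send a webcam frame to a vision-capable model
// and parse its JSON verdict. Prompts and helper names are illustrative.
const OPENAI_URL = "https://api.openai.com/v1/chat/completions";

// Grab the current webcam frame as a base64 data URL.
function captureFrame(videoEl) {
  const canvas = document.createElement("canvas");
  canvas.width = videoEl.videoWidth;
  canvas.height = videoEl.videoHeight;
  canvas.getContext("2d").drawImage(videoEl, 0, 0);
  return canvas.toDataURL("image/jpeg");
}

// Send one text prompt plus one image to the model, return its reply text.
async function askVisionModel(prompt, imageDataUrl, apiKey) {
  const res = await fetch(OPENAI_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "gpt-4o",
      messages: [{
        role: "user",
        content: [
          { type: "text", text: prompt },
          { type: "image_url", image_url: { url: imageDataUrl } },
        ],
      }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

// personDetection(): is anyone in frame?
async function personDetection(videoEl, apiKey) {
  const reply = await askVisionModel(
    'Is there a person in this image? Answer as JSON: {"isPerson": true/false}',
    captureFrame(videoEl), apiKey);
  return parseModelJson(reply).isPerson === true;
}

// poseEvaluation(): is the given pose performed correctly, and how well?
async function poseEvaluation(videoEl, yogaPose, apiKey) {
  const reply = await askVisionModel(
    `Is the person performing ${yogaPose} correctly? ` +
    'Answer as JSON: {"score": 10|50|100, "feedback": "..."}',
    captureFrame(videoEl), apiKey);
  return parseModelJson(reply); // { score, feedback }
}

// Models sometimes wrap the JSON in extra text; extract the first {...} span.
function parseModelJson(text) {
  const start = text.indexOf("{");
  const end = text.lastIndexOf("}");
  return JSON.parse(text.slice(start, end + 1));
}
```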
3D

3D Modeling

The 3D model was created in Fusion 360, featuring a custom-designed structure to securely hold the phone, and was subsequently 3D printed.
The shape of the model was inspired by the starting yoga pose, with the crossed legs serving as a support base and the hands joined above the head acting as a handle, also making the object more manageable and portable.

3D Model

Model Sketches

3D Print

Rendered Model


Branding

Branding

Brand Identity

Zoomed image