FocusMotion Digital Case Study: FocusMotion [video] by R/GA New York

The digital advert titled FocusMotion [video] was created by advertising agency R/GA New York for FocusMotion in the United States. It was released in January 2016.

FocusMotion: FocusMotion [video]

Released: January 2016
Posted: January 2016

Awards:

D&AD Awards, 2016
Product Design, Wearable Technology (Wood Pencil)
One Show, 2016
UX/UI, Innovation - Mobile / Mobile (Bronze Pencil)
ADC Annual Awards, 2016
Digital, Innovation: Digital (Silver)
Cannes Lions Innovation, 2016
Creative Data, Creative Data: Data-Technology Models / Tools / Platforms (Bronze Lion)

Credits & Description:

Client: Focusmotion
Title: FocusMotion
Category: Sports / Fitness
Agency: R/GA / New York + LA Dodgers / Los Angeles + FocusMotion / Los Angeles
Chief Creative Officer: Cavan Canavan, Grant Hughes
Production Company: Focus Motion / Los Angeles
Agency Producer: Liz David, Maggie Hogan, Parker Sapp
Creative Team: Jonathan Bradley, Dylan Boyd, Nicolas Olivieri, Michael Morowitz, Maddie Garber, Jonathan Greene
Executive Creative Director: Stephen Plumlee
Programmer: Shuo Feng
Technical Producer: Steve Merel, Derek Chan
Engineer: Steve Merel
Executive Producer: Parker Sapp
Senior Copywriter: Maddie Garber
Technical Director: Michael Morowitz
Advertising Agency: R/GA
Chief Executive Officer: Cavan Canavan
Chief Operating Officer: Grant Hughes
Global Chief Operating Officer: Stephen Plumlee
Managing Director: Jonathan Greene
Project Director: Dylan Boyd, Jonathan Bradley, Nicolas Olivieri
Project Manager: Liz David, Maggie Hogan
FocusMotion launched the world’s first algorithm platform that enables any open wearable device to track real human movement. Whereas most wearable device systems track only steps or sleep, FocusMotion recognizes all kinds of movement. Its SDK plugs into any open wearable so developers can augment any current system with the capability to analyze what users are doing, how many times they’ve performed a specific movement, and how well they’ve performed it.
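To make the idea concrete, here is a minimal, hypothetical sketch of how a developer might consume an SDK of this kind. All of the names (MovementTracker, RepResult, on_rep) are invented for illustration; the actual FocusMotion API is not described in this case study.

```python
# Hypothetical sketch of an SDK integration: the app registers a callback and
# receives movement type, rep count, and a form score for each recognized rep.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class RepResult:
    movement: str      # e.g. "squat" or "bicep_curl"
    rep_count: int     # how many times the movement has been performed
    form_score: float  # 0.0-1.0, how closely the rep matched ideal form


class MovementTracker:
    """Hypothetical wrapper around a stream of classified wearable movements."""

    def __init__(self) -> None:
        self._listeners: List[Callable[[RepResult], None]] = []

    def on_rep(self, callback: Callable[[RepResult], None]) -> None:
        # Register a callback fired each time a full repetition is recognized.
        self._listeners.append(callback)

    def feed(self, result: RepResult) -> None:
        # A real integration would be driven by raw accelerometer/gyroscope
        # samples from the wearable; here we forward already-classified reps.
        for listener in self._listeners:
            listener(result)


if __name__ == "__main__":
    tracker = MovementTracker()
    tracker.on_rep(lambda r: print(f"{r.movement}: rep {r.rep_count}, form {r.form_score:.2f}"))
    tracker.feed(RepResult("squat", 1, 0.92))
```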
Outcome:
•Our algorithm and products are currently being used by partners and teams to understand and track athletic progression, injury recovery and rehabilitation, and factory labor monitoring. Before our algorithm and system, movement data had to be hand-entered, observed directly by a human, or recorded with expensive computer-vision equipment. FocusMotion's technology lets a physical therapist or orthopedic surgeon know every movement performed by a recovering patient and how well it was performed. We can identify points of fatigue and hyperextension, and provide guidance and accountability metrics that signal to the patient when they're over-exerting or performing a movement improperly (a simplified illustration of this kind of check is sketched below). In the factory, we can identify improper lifting methods and track the number of times an employee has lifted a box or performed a task; we offer interventional opportunities to inform the employee that they need to improve their form, and, long term, our data is used to prevent injury and reduce workers' compensation claims for the business. Over time, the larger data sets from these users will be used to build powerful correlations that help design specific movement protocols for individuals who share unique physiologies or movement tendencies and dialects.
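As a rough illustration of the kind of rule a rehabilitation protocol could layer on top of recognized reps, the sketch below flags hyperextension and a simple fatigue trend from per-rep range-of-motion values. The thresholds and function names are invented for the example; they are not FocusMotion's actual metrics.

```python
from typing import List


def assess_rep(rom_degrees: float, prescribed_min: float, prescribed_max: float) -> str:
    """Classify one rep by its measured range of motion (ROM), in degrees."""
    if rom_degrees > prescribed_max:
        return "hyperextension"    # moved past the safe range
    if rom_degrees < prescribed_min:
        return "insufficient_rom"  # possible fatigue, pain, or guarding
    return "ok"


def fatigue_trend(rom_history: List[float], drop_fraction: float = 0.15) -> bool:
    """Heuristic: flag fatigue if recent ROM falls well below the early-set ROM."""
    if len(rom_history) < 6:
        return False
    baseline = sum(rom_history[:3]) / 3
    recent = sum(rom_history[-3:]) / 3
    return recent < baseline * (1 - drop_fraction)


if __name__ == "__main__":
    print(assess_rep(128.0, prescribed_min=90.0, prescribed_max=120.0))  # hyperextension
    print(fatigue_trend([110, 112, 111, 104, 96, 90, 88]))               # True
```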
Strategy:
Data is gathered from single or multiple wearable body sensors in any number of locations. For ACL repair, a user might wear a device on their ankle during physical therapy; for workouts and fitness, an athlete might wear a device on their wrist; and for factory labor monitoring, a worker might wear five devices on the body (arms, legs, back, and chest). The brilliant and untapped potential of wearable sensors lies in the MPU: the Motion Processing Unit. This small chip uses one to three separate nano-machines to understand changes in acceleration, angle, and direction – technically referred to as the accelerometer, the gyroscope, and the magnetometer. Each of these nano-machines provides three axes of data in the X, Y, and Z planes. We use filtering and combination techniques to find distinctive patterns in the data. Once refined, you can see distinct and beautiful waveforms. We then use our powerful pattern-recognition and machine learning algorithm to associate patterns with activities. Unique movements have unique waveforms – with our data we can discern what a user is doing; movements can be translated, tracked, counted, and analyzed.
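A minimal sketch of the first stage described above, under assumed signal shapes: combine the three accelerometer axes into a single magnitude signal and low-pass filter it so a repeated movement appears as a clean, periodic waveform. FocusMotion's actual filtering and fusion pipeline is not disclosed here, so this is only illustrative; it uses NumPy and synthetic data.

```python
import numpy as np


def magnitude(samples: np.ndarray) -> np.ndarray:
    """Combine X/Y/Z accelerometer axes (shape [n, 3]) into one signal."""
    return np.linalg.norm(samples, axis=1)


def low_pass(signal: np.ndarray, window: int = 15) -> np.ndarray:
    """Simple moving-average filter to suppress high-frequency sensor noise."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")


if __name__ == "__main__":
    # Synthetic stand-in data: a slow periodic movement plus sensor noise.
    t = np.linspace(0, 10, 500)
    raw = np.stack([
        np.sin(2 * np.pi * 0.5 * t),        # X axis
        0.3 * np.cos(2 * np.pi * 0.5 * t),  # Y axis
        np.full_like(t, 9.81),              # Z axis (roughly gravity)
    ], axis=1)
    raw += np.random.normal(scale=0.2, size=raw.shape)

    smooth = low_pass(magnitude(raw))
    print(smooth[:5])  # the cleaned waveform a recognizer would consume
```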
Campaign Description:
•We’ve found that we can use the standard signals from a wearable device to discern what movement a user is performing based on the unique profile that every movement presents. While some algorithms use what’s called a threshold system that determines “steps” or “activity” based on a waveform crossing an amplitude height, we’ve created an algorithm that reads and understands actual movement waveform patterns. Additionally, we’ve determined that not only can we identify what is performed, but we can also compare the performed movement with a “master” movement to understand whether a user was moving with consistent form and speed and with the right range of motion (one common way to make this kind of template comparison is sketched below). We’ve used hundreds of thousands of reps from different movements to build our recognition library. As an example in fitness, with our algorithm you can go to a gym and perform 45 minutes of activity, and we’ll report precisely your exercises, your rest periods, each rep, and feedback on how well you performed each movement. Much like a speech algorithm that can understand multiple languages, we’re not limited to exercise; we’re currently developing the algorithm for physical therapy, gunfire, and workforce monitoring.
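One common way to perform the template comparison referenced above is dynamic time warping (DTW), which aligns two waveforms even when they were performed at different speeds. The sketch below scores a user's rep against a "master" waveform; it is a generic illustration, not FocusMotion's proprietary algorithm.

```python
import numpy as np


def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(len(a) * len(b)) dynamic time warping distance between 1-D signals."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[n, m])


if __name__ == "__main__":
    master = np.sin(np.linspace(0, 2 * np.pi, 100))        # ideal "master" rep waveform
    good_rep = np.sin(np.linspace(0, 2 * np.pi, 80))       # same shape, performed faster
    bad_rep = 0.4 * np.sin(np.linspace(0, 2 * np.pi, 80))  # reduced range of motion

    print("good rep distance:", round(dtw_distance(master, good_rep), 2))
    print("bad rep distance: ", round(dtw_distance(master, bad_rep), 2))
```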
Synopsis:
In the ten years since Nike+, the sole focus of most wearable fitness-tracking devices has been steps and sleep. We believed there was more to human movement than steps. We believed that if you were to design a smarter algorithm, you could enable more markets and broaden the use case of every wearable device. Our goal was to create a hardware-agnostic, OS-agnostic platform that would discern the language of human movement and enable any developer not only to use our library of movements but also to add, curate, and deploy their own movement recognition using our machine learning tools. Just as Siri understands speech, we would create a tool to understand the nuances of human movement and know what movement a user was doing, how many times it was performed, and how well it was performed.