Rouast Labs

We are building technology for remote video-based vital sign measurement.

More details and announcements to come in the following months.

Philipp Rouast

My research focuses on human-centered applications of deep learning and computer vision, especially in the health domain.

Past projects

Other projects I've worked on.
OREBA dataset
2018–2020 Python
A dataset for intake gesture detection with video and inertial data: 202 sessions with 9069 gestures.
Intake gesture detection
2018–2020 Python TensorFlow
Automatic detection of individual intake gestures based on 360-degree video and deep learning.
rPPG
2015–2019 C++ JavaScript OpenCV
Contactless heart rate measurement based on face video, implemented for desktop, web, and mobile.
Equirectangular remap
2017 C ffmpeg
Generate maps for conversions from spherical to equirectangular in ffmpeg (a sketch follows this project list).
Cryptocurrency analysis
2017 R
Analysis and visualisation of the cryptocurrency market, including correlations between returns.
Brownie
2016 Java
A NeuroIS tool for conducting economic experiments.
Planspiel Flächenhandel
2014 JavaScript Groovy Grails
Web-based simulation game for land-use certificate trading.
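
The equirectangular remap entry above references a sketch of the map-generation idea: for every pixel of the equirectangular output, compute which source pixel it samples, and store those coordinates as 16-bit PGM maps that ffmpeg's remap filter consumes. This is a minimal illustration assuming a single 180-degree equidistant fisheye source; the projection model and sizes are assumptions, not the project's actual code.

```python
# Hedged sketch: generate xmap/ymap files for ffmpeg's remap filter,
# assuming a single 180-degree equidistant fisheye source.
import numpy as np

W, H = 1024, 1024   # equirectangular output covering the hemisphere
SRC = 960           # width/height of the square fisheye input

# Longitude/latitude of each output pixel, both spanning +/- 90 degrees.
lon = (np.arange(W) / (W - 1) - 0.5) * np.pi
lat = (0.5 - np.arange(H) / (H - 1)) * np.pi
LON, LAT = np.meshgrid(lon, lat)

# Unit view vector per pixel; the fisheye looks along +z.
x = np.cos(LAT) * np.sin(LON)
y = np.sin(LAT)
z = np.cos(LAT) * np.cos(LON)

# Equidistant fisheye: radius grows linearly with the angle from the axis.
theta = np.arccos(np.clip(z, -1.0, 1.0))
phi = np.arctan2(y, x)
r = theta / (np.pi / 2) * (SRC / 2)

xmap = np.clip(SRC / 2 + r * np.cos(phi), 0, SRC - 1)
ymap = np.clip(SRC / 2 + r * np.sin(phi), 0, SRC - 1)

def write_pgm16(path, arr):
    """Binary 16-bit PGM, the map format the remap filter reads."""
    with open(path, "wb") as f:
        f.write(b"P5\n%d %d\n65535\n" % (arr.shape[1], arr.shape[0]))
        f.write(arr.round().astype(">u2").tobytes())  # big-endian samples

write_pgm16("xmap.pgm", xmap)
write_pgm16("ymap.pgm", ymap)
# Usage: ffmpeg -i fisheye.mp4 -i xmap.pgm -i ymap.pgm -lavfi remap out.mp4
```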

Publications

Current and forthcoming publications.
Single-stage intake gesture detection using CTC loss and extended prefix beam search
Philipp V. Rouast and Marc T. P. Adam
IEEE Journal of Biomedical and Health Informatics (2020)

Accurate detection of individual intake gestures is a key step towards automatic dietary monitoring. Both inertial sensor data of wrist movements and video data depicting the upper body have been used for this purpose. The most advanced approaches to date use a two-stage approach, in which (i) frame-level intake probabilities are learned from the sensor data using a deep neural network, and then (ii) sparse intake events are detected by finding the maxima of the frame-level probabilities. In this study, we propose a single-stage approach which directly decodes the probabilities learned from sensor data into sparse intake detections. This is achieved by weakly supervised training using Connectionist Temporal Classification (CTC) loss, and decoding using a novel extended prefix beam search decoding algorithm. Benefits of this approach include (i) end-to-end training for detections, (ii) simplified timing requirements for intake gesture labels, and (iii) improved detection performance compared to existing approaches. Across two separate datasets, we achieve relative F1 score improvements between 1.9% and 6.2% over the two-stage approach for intake detection and eating/drinking detection tasks, for both video and inertial sensors.
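
The two-stage baseline described in this abstract is easy to sketch: a network yields frame-level intake probabilities, and sparse events are read off as local maxima. The snippet below is a minimal illustration of that decoding step; the threshold, frame rate, and minimum spacing are values chosen for the example, not taken from the paper.

```python
# Minimal sketch of the two-stage baseline: (i) a model produces
# frame-level intake probabilities, (ii) sparse events are detected
# as local maxima. Threshold/spacing values are illustrative.
import numpy as np
from scipy.signal import find_peaks

def detect_intake_events(frame_probs, fps=8.0, min_gap_s=2.0, threshold=0.5):
    """Turn per-frame intake probabilities into sparse event times (s)."""
    # Enforce a minimum spacing so one gesture is not counted twice.
    peaks, _ = find_peaks(frame_probs, height=threshold,
                          distance=int(min_gap_s * fps))
    return peaks / fps

# Example: synthetic probabilities containing two clear events.
probs = np.zeros(100)
probs[20:24] = [0.4, 0.9, 0.8, 0.3]
probs[70:73] = [0.5, 0.95, 0.4]
print(detect_intake_events(probs))  # -> [2.625 8.875] seconds
```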

OREBA: A Dataset for Objectively Recognizing Eating Behaviour and Associated Intake
Philipp V. Rouast, Hamid Heydarian, Marc T. P. Adam, and Megan E. Rollo
IEEE Access 8, 181955–181963 (2020)

Automatic detection of intake gestures is a key element of automatic dietary monitoring. Several types of sensors, including inertial measurement units (IMU) and video cameras, have been used for this purpose. The common machine learning approaches make use of the labelled sensor data to automatically learn how to make detections. One characteristic, especially for deep learning models, is the need for large datasets. To meet this need, we collected the Objectively Recognizing Eating Behavior and Associated Intake (OREBA) dataset. The OREBA dataset aims to provide a comprehensive multi-sensor recording of communal intake occasions for researchers interested in automatic detection of intake gestures. Two scenarios are included, with 100 participants for a discrete dish and 102 participants for a shared dish, totalling 9069 intake gestures. Available sensor data consists of synchronized frontal video and IMU with accelerometer and gyroscope for both hands. We report the details of data collection and annotation, as well as technical details of sensor processing. The results of studies on IMU and video data involving deep learning models are reported to provide a baseline for future research.
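
As an illustration of how a labelled multi-sensor recording of this kind might be consumed, here is a hedged sketch that slices per-sample-labelled IMU data into fixed-length training windows. The sampling rate, array shapes, and windowing scheme are assumptions for the example, not OREBA specifics.

```python
# Hedged sketch: slice per-sample-labelled IMU data into windows for
# training a window-level intake detector. Rates/shapes are assumed.
import numpy as np

def make_windows(imu, labels, rate_hz=64, win_s=2.0, hop_s=0.5):
    """imu: (T, C) sensor samples; labels: (T,) 0/1 intake annotation."""
    win, hop = int(win_s * rate_hz), int(hop_s * rate_hz)
    X, y = [], []
    for start in range(0, len(imu) - win + 1, hop):
        X.append(imu[start:start + win])
        # A window is positive if any sample inside it is an intake.
        y.append(int(labels[start:start + win].any()))
    return np.stack(X), np.array(y)

# Example: 60 s of accelerometer + gyroscope for both hands (12 channels).
imu = np.random.randn(64 * 60, 12)
labels = np.zeros(64 * 60, dtype=int)
labels[640:704] = 1                      # one annotated gesture
X, y = make_windows(imu, labels)
print(X.shape, int(y.sum()))             # (117, 128, 12) 5
```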

Learning deep representations for video-based intake gesture detection
Philipp V. Rouast and Marc T. P. Adam
IEEE Journal of Biomedical and Health Informatics 24 (6), 1727–1737 (2020)

Automatic detection of individual intake gestures during eating occasions has the potential to improve dietary monitoring and support dietary recommendations. Existing studies typically make use of on-body solutions such as inertial and audio sensors, while video is used as ground truth. Intake gesture detection directly based on video has rarely been attempted. In this study, we address this gap and show that deep learning architectures can successfully be applied to the problem of video-based detection of intake gestures. For this purpose, we collect and label video data of eating occasions using 360-degree video of 102 participants. Applying state-of-the-art approaches from video action recognition, our results show that (1) the best model achieves an F1 score of 0.858, (2) appearance features contribute more than motion features, and (3) temporal context in form of multiple video frames is essential for top model performance.
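
The finding that temporal context matters is what motivates spatio-temporal architectures operating on stacks of frames. As a toy illustration of that general idea in TensorFlow, and explicitly not the architecture evaluated in the paper, a small 3D CNN over a short frame stack might look like this.

```python
# Toy 3D CNN over a stack of video frames, illustrating how temporal
# context enters the model. Layer sizes are arbitrary for the example.
import tensorflow as tf

def build_clip_classifier(frames=16, size=64):
    """Binary intake/non-intake classifier over a short frame stack."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(frames, size, size, 3)),
        tf.keras.layers.Conv3D(16, kernel_size=3, activation="relu"),
        tf.keras.layers.MaxPooling3D(pool_size=2),
        tf.keras.layers.Conv3D(32, kernel_size=3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling3D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

model = build_clip_classifier()
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```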

Deep Learning for Human Affect Recognition: Insights and New Developments
Philipp V. Rouast, Marc T. P. Adam, and Raymond Chiong
IEEE Transactions on Affective Computing (2019)

Automatic human affect recognition is a key step towards more natural human-computer interaction. Recent trends include recognition in the wild using a fusion of audiovisual and physiological sensors, a challenging setting for conventional machine learning algorithms. Since 2010, novel deep learning algorithms have been applied increasingly in this field. In this paper, we review the literature on human affect recognition between 2010 and 2017, with a special focus on approaches using deep neural networks. By classifying a total of 950 studies according to their usage of shallow or deep architectures, we are able to show a trend towards deep learning. Reviewing a subset of 233 studies that employ deep neural networks, we comprehensively quantify their applications in this field. We find that deep learning is used for learning of (i) spatial feature representations, (ii) temporal feature representations, and (iii) joint feature representations for multimodal sensor data. Exemplary state-of-the-art architectures illustrate the recent progress. Our findings show the role deep architectures will play in human affect recognition, and can serve as a reference point for researchers working on related applications.

Remote heart rate measurement using low-cost RGB face video: a technical literature review
Philipp V. Rouast, Marc T. P. Adam, Raymond Chiong, David Cornforth, and Eva Lux
Frontiers of Computer Science 12 (5), 858–872 (2018)

Remote photoplethysmography (rPPG) allows remote measurement of the heart rate using low-cost RGB imaging equipment. In this study, we review the development of the field of rPPG since its emergence in 2008. We also classify existing rPPG approaches and derive a framework that provides an overview of modular steps. Based on this framework, practitioners can use our classification to design algorithms for an rPPG approach that suits their specific needs. Researchers can use the reviewed and classified algorithms as a starting point to improve particular features of an rPPG algorithm.
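
A minimal version of the modular pipeline the review describes: spatially average the green channel over a face ROI, band-pass to plausible pulse frequencies, and read the heart rate off the dominant spectral peak. The ROI handling and filter settings below are illustrative assumptions, not a prescription from the review.

```python
# Hedged sketch of a basic rPPG pipeline: ROI average -> band-pass ->
# spectral peak. Filter order and cut-offs are illustrative.
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_heart_rate(frames, roi, fps=30.0):
    """frames: (T, H, W, 3) RGB video; roi: (top, bottom, left, right)."""
    t, b, l, r = roi
    # Step 1: raw signal = mean green value inside the face ROI per frame.
    green = frames[:, t:b, l:r, 1].mean(axis=(1, 2))
    # Step 2: band-pass 0.7-4.0 Hz (roughly 42-240 bpm).
    bb, ba = butter(3, [0.7 / (fps / 2), 4.0 / (fps / 2)], btype="band")
    pulse = filtfilt(bb, ba, green - green.mean())
    # Step 3: heart rate = strongest frequency in the pass band.
    freqs = np.fft.rfftfreq(len(pulse), d=1.0 / fps)
    return 60.0 * freqs[np.argmax(np.abs(np.fft.rfft(pulse)))]

# Example on synthetic video pulsing at 1.2 Hz (72 bpm).
tt = np.arange(300) / 30.0
sig = 128 + 2 * np.sin(2 * np.pi * 1.2 * tt)[:, None, None, None]
frames = np.tile(sig, (1, 40, 40, 3))
print(estimate_heart_rate(frames, (0, 40, 0, 40)))  # -> 72.0
```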

Using deep learning and 360 video to detect eating behavior for user assistance systems
Philipp V. Rouast, Marc T. P. Adam, Tracy Burrows, Raymond Chiong, and Megan Rollo
European Conference on Information Systems (ECIS 2018)

The rising prevalence of non-communicable diseases calls for more sophisticated approaches to support individuals in engaging in healthy lifestyle behaviors, particularly in terms of their dietary intake. Building on recent advances in information technology, user assistance systems hold the potential of combining active and passive data collection methods to monitor dietary intake and, subsequently, to support individuals in making better decisions about their diet. In this paper, we review the state-of-the-art in active and passive dietary monitoring along with the issues being faced. Building on this groundwork, we propose a research framework for user assistance systems that combine active and passive methods with three distinct levels of assistance. Finally, we outline a proof-of-concept study using video obtained from a 360-degree camera to automatically detect eating behavior from video data as a source of passive dietary monitoring for decision support.

Remote photoplethysmography: Evaluation of contactless heart rate measurement in an information systems setting
Philipp V. Rouast, Marc T. P. Adam, Verena Dorner, and Eva Lux
Applied Informatics and Technology Innovation Conference (AITIC 2016)

As a source of valuable information about a person’s affective state, heart rate data has the potential to improve both understanding and experience of human-computer interaction. Conventional methods for measuring heart rate use skin contact methods, where a measuring device must be worn by the user. In an Information Systems setting, a contactless approach without interference in the user’s natural environment could prove to be advantageous. We develop an application that fulfils these conditions. The algorithm is based on remote photoplethysmography, taking advantage of the slight skin color variation that occurs periodically with the user’s pulse. When evaluating this application in an Information Systems setting with various arousal levels and naturally moving subjects, we achieve an average root mean square error of 7.32 bpm for the best performing configuration. We find that a higher frame rate yields better results than a larger size moving measurement window. Regarding algorithm specifics, we find that a more detailed algorithm using the three RGB signals slightly outperforms a simple algorithm using only the green signal.
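
The headline metric above is straightforward to reproduce: root mean square error between estimated and reference heart rates, compared across algorithm configurations. A short sketch, with made-up numbers purely for illustration:

```python
# RMSE between estimated and reference heart rates, per configuration.
# All numbers here are invented for illustration.
import numpy as np

def rmse(estimated, reference):
    estimated, reference = np.asarray(estimated), np.asarray(reference)
    return float(np.sqrt(np.mean((estimated - reference) ** 2)))

reference = [72, 75, 90, 110, 68]            # contact-based ground truth (bpm)
configs = {
    "green only": [70, 79, 95, 104, 66],      # hypothetical estimates
    "rgb": [71, 77, 92, 106, 67],
}
for name, est in configs.items():
    print(f"{name}: RMSE = {rmse(est, reference):.2f} bpm")
```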

Timeline

Timeline of my positions and education.

Positions

The University of Newcastle: Associate Lecturer – Computing and IT
2022–now
  • COMP3030: Machine Intelligence
  • INFT6060: The Digital Economy
  • Course development
The University of Newcastle: Teaching Assistant (SEEC)
2015; 2017–2022
  • Delivering labs
  • Marking assignments
  • Course development
Karlsruhe Institute of Technology: Student Research Assistant (IISM)
2012–2015; 2016
  • Development of a web-based prediction market (Groovy/Grails)
  • Development of experiment platform Brownie (Java)
msgGillardon AG: Internship
2013–2014
  • Evaluation of assumptions in the credit risk model CreditMetrics
  • Implementation of improvements to Loss Given Default estimation for retail credits
Karlsruhe Institute of Technology: Teaching Assistant (AIFB)
2011–2012
  • Programming I: Java

Reviewer
2017–now
  • Elsevier Advanced Engineering Informatics
  • Elsevier Information Fusion
  • Elsevier Pattern Recognition Letters
  • Emerald Journal of Systems and Information Technology
  • IEEE Computational Intelligence Magazine
  • IEEE Journal of Biomedical and Health Informatics
  • PeerJ Computer Science
  • PLOS ONE

Education
The University of Newcastle: PhD
2017–2020
  • International Postgraduate Research Scholarship
  • ACPHIS Student Project Award 2017
  • 2017 UON FEBE Postgraduate Research Prize
  • 2019 UON FEBE Postgraduate Research Prize
  • Thesis: Using deep learning to detect food intake behaviour from video.
  • Future Award 2016: Category Health
  • DAAD FIT Worldwide Scholarship
  • BW Study Abroad Scholarship
  • Thesis: Contactless Heart Rate Measurement Using Facial Video: A Real-Time Approach and Evaluation in Information Systems.

  • Thesis: Partisan Trading Activity: Investigation of a Political Stock Market.