An affective video generation system supporting impromptu musical performance

Posted by: Issei Fujishiro

Anri Kobayashi and Issei Fujishiro

in Proceedings of the 2016 International Conference on Cyberworlds, pp. 17-24, Chongqing, China, September 2016

[doi: 10.1109/CW.2016.11]
Abstract

When a musical instrument player performs music, the accompanying visual information can have a significant effect on the performance. For example, players in a jam session may change their style of playing immediately by closely observing their co-players’ expressions and behaviors and predicting their emotions and intentions on the fly. In this paper, we propose a system that generates videos in response to the impromptu performance of a single musical instrument player. The system evaluates the input signals in an affective way and generates a corresponding video based on the results of the evaluation. The player tends to change his/her performance when inspired by the generated video, which in turn provides further triggers for the system to modify the video. The system aims to establish such an affective loop, in which it acts as a “co-player” that continually influences the performance of the real player. The final goal of this study is to improve the quality of the player’s performing experience through such interactions between the player and the system. A user evaluation with an amateur guitarist as the subject showed that the affective video generation inspired his playing and provided a cyberworld in which he could experience a better performance than when playing alone.
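To make the affective-loop idea in the abstract concrete, the following Python sketch shows one way such a feedback cycle could be wired together: analyze the incoming audio, estimate a simple valence/arousal state, and turn that state into video-generation parameters. All function names and the feature-to-affect mapping here are hypothetical illustrations; they are not taken from the paper, whose actual signal analysis and video generation are far richer.

```python
import random
import time


def extract_features(audio_frame):
    """Stand-in for signal analysis: a coarse loudness value and a faked tempo."""
    loudness = sum(abs(s) for s in audio_frame) / max(len(audio_frame), 1)
    tempo = random.uniform(60, 180)  # a real system would track note onsets
    return {"loudness": min(loudness, 1.0), "tempo": tempo}


def estimate_affect(features):
    """Map the features to a valence/arousal pair (purely illustrative mapping)."""
    arousal = min(0.5 * features["tempo"] / 180.0 + 0.5 * features["loudness"], 1.0)
    valence = max(0.0, 1.0 - abs(features["tempo"] - 120.0) / 120.0)
    return {"valence": valence, "arousal": arousal}


def video_parameters(affect):
    """Translate the estimated affect into parameters a video generator might use."""
    return {"brightness": affect["valence"], "motion_speed": affect["arousal"]}


def affective_loop(audio_stream, steps=10):
    """Listen, estimate affect, adjust the video; the player reacts in turn."""
    for i, frame in enumerate(audio_stream):
        if i >= steps:
            break
        affect = estimate_affect(extract_features(frame))
        print(f"step {i}: affect={affect} -> video={video_parameters(affect)}")
        time.sleep(0.1)  # pace the loop roughly at an interactive rate


if __name__ == "__main__":
    # Simulated input: random frames standing in for chunks of the guitar signal.
    def fake_stream():
        while True:
            yield [random.uniform(-1.0, 1.0) for _ in range(512)]

    affective_loop(fake_stream())
```

The point of the sketch is the closed loop itself: each pass maps the player’s signal to an affective estimate and then to video parameters, so the generated imagery keeps shifting with, and feeding back into, the performance.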

See the 2016 publications page.

Categories: Conferences, 2016, International
