Makoto Uemura, Ryosuke Itoh, Longyin Xu, Masanori Nakayama, Hsiang-Yun Wu, Kazuho Watanabe, Shigeo Takahashi, Issei Fujishiro
TimeTubes: Visualization of Polarization Variations in Blazars
in Galaxies, Vol. 4, No. 3, Article 23, September 2016 [DOI: 10.3390/galaxies4030023]
Optical polarization provides important clues to the magnetic field in blazar jets. It is easy to find noteworthy patterns in the time-series data of the polarization degree (PD) and position angle (PA). On the other hand, we need to see the trajectory of the object in the Stokes QU plane when the object has multiple polarized components. In this case, ironically, the more data we have, the more difficult it is to gain any knowledge from them. Here, we introduce TimeTubes, a new visualization scheme to explore the time-series data of polarization observed in blazars. In TimeTubes, the data are represented by tubes in 3D (Q, U, and time) space. The measurement errors of Q and U, the color, and the total flux of objects are expressed as the size, color, and brightness of the tubes, respectively. As a result, TimeTubes allows us to see the behavior of six variables in one view. We used TimeTubes for our data taken by the Kanata telescope between 2008 and 2014. We found that this tool facilitates the recognition of patterns in blazar variations, for example, the favored PA of flares and the PA rotations associated with a series of flares.
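As a rough illustration of the tube mapping described in this abstract, the sketch below maps one polarimetric sample (Q, U, their errors, flux, and a color index) to a TimeTubes-style elliptical cross-section; all names and the flux normalization are hypothetical, not the authors' implementation.

```python
import math

def tube_cross_section(q, u, e_q, e_u, flux, color_index,
                       flux_max, n_points=32):
    """Map one polarimetric sample to a TimeTubes-style ellipse.

    Center: (Q, U); radii: measurement errors E_Q, E_U;
    brightness: normalized flux; hue: color index.
    (Illustrative reconstruction, not the authors' code.)
    """
    brightness = min(flux / flux_max, 1.0)
    # Sample the ellipse boundary in the QU plane.
    ring = [(q + e_q * math.cos(2 * math.pi * k / n_points),
             u + e_u * math.sin(2 * math.pi * k / n_points))
            for k in range(n_points)]
    return {"ring": ring, "brightness": brightness, "hue": color_index}
```

Sweeping such cross-sections along the time axis yields the volumetric tube.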


Ken Nagao, Kwan-Liu Ma, Issei Fujishiro
The Journal of the Institute of Image Electronics Engineers of Japan, Vol. 46, No. 1, pp. 176-188, January 2017
Content shown on a large display appears geometrically distorted depending on where the viewer stands, which severely hampers comprehension of the content. The distortion is particularly harmful to advertising effectiveness when large displays are used for digital signage. Moreover, since most passersby walk past digital signage without any initial interest in it, the main solution in prior work, asking viewers to actively move to an appropriate position, cannot be expected of them. This study proposes a method that immediately presents images with little distortion to passersby without requiring them to actively move to an appropriate position. Specifically, the method recognizes the position of a passerby in front of a large display and optimizes the shape of the displayed content so that it appears to have rotated in real space to face that position, along with a matching layout. We built a large-display advertising system based on the method and demonstrated its utility by evaluating the accuracy of passerby position recognition at an installation site, the reduction of content distortion in each passerby's view through simulations, and the effect on advertising effectiveness through a subject experiment.
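The perspective-correction idea summarized above can be sketched as follows: rotate a virtual content plane about the vertical axis so that it faces the viewer, then project it back onto the physical display plane from the viewer's eye point. This is a simplified geometric reading of the abstract with hypothetical names; the actual system also optimizes the content layout.

```python
import math

def corrected_quad(viewer, width=2.0, height=1.0):
    """Pre-distort the content quad so it appears to face the viewer.

    The content plane is rotated about the vertical (y) axis toward the
    viewer, then projected back onto the display plane (z = 0) from the
    viewer's eye point.  Simplified 3D sketch of the idea only.
    """
    vx, vy, vz = viewer                      # viewer position; display at z = 0
    yaw = math.atan2(vx, vz)                 # rotate content toward the viewer
    corners = [(-width / 2, -height / 2), (width / 2, -height / 2),
               (width / 2, height / 2), (-width / 2, height / 2)]
    quad = []
    for cx, cy in corners:
        # Corner after rotating the virtual plane by `yaw` about the y-axis.
        x3 = cx * math.cos(yaw)
        z3 = -cx * math.sin(yaw)
        # Project from the eye through the 3D corner onto z = 0.
        t = vz / (vz - z3)
        quad.append((vx + t * (x3 - vx), vy + t * (cy - vy)))
    return quad
```

For a viewer standing directly in front of the display, the quad is unchanged; for an off-axis viewer it becomes a trapezoid that counteracts the foreshortening in that viewer's line of sight.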

Nobuhiko Jin, Naoki Haga, Issei Fujishiro
The Journal of the Institute of Image Electronics Engineers of Japan, Vol. 46, No. 1, pp. 160-164, January 2017
Groove is a musical quality describing the swing, drive, and sense of rhythmic unity of a performance, and is one of the indispensable elements of good playing. Groove visualization opens up a variety of applications, such as learning groove independently of individual differences in musical sense or hearing, and sharing the excitement of music regardless of musical experience. This paper proposes SeeGroove2, an intuitive rhythm visualization system that interactively converts music given as MIDI signals into figures, and shows its applicability to visualizing live performances and to groove education. By plotting a point for each played note and interpolating the points smoothly, rhythm patterns are rendered immediately as a variety of ring-shaped figures. Interactivity is achieved by a multi-threaded design with an input thread running at 1,920 Hz and a computation/rendering thread running at 60 Hz.
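The two-thread design mentioned in the abstract (a fast input thread feeding a slower computation/rendering thread) can be sketched with a simple producer-consumer pattern; the rates and note values below are illustrative only, not the SeeGroove2 code.

```python
import queue
import threading
import time

events = queue.Queue()   # note events from the input thread
rendered = []            # per-frame batches consumed by the render thread
stop = threading.Event()

def input_thread(notes, rate_hz=1920):
    # Push one simulated MIDI note per input tick.
    for note in notes:
        events.put(note)
        time.sleep(1.0 / rate_hz)
    stop.set()

def render_thread(rate_hz=60):
    # Each frame, drain all pending events and "draw" them at once.
    while not (stop.is_set() and events.empty()):
        frame = []
        while not events.empty():
            frame.append(events.get())
        if frame:
            rendered.append(frame)
        time.sleep(1.0 / rate_hz)

t_in = threading.Thread(target=input_thread, args=([60, 64, 67, 72],))
t_out = threading.Thread(target=render_thread)
t_in.start(); t_out.start()
t_in.join(); t_out.join()
```

Because the queue decouples the two rates, the input thread never blocks on drawing, and each render frame sees every note that arrived since the previous frame.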

井阪 建, 藤代 一成
The Journal of the Institute of Image Information and Television Engineers, Vol. 70, No. 6, pp. J143-J146, May 2016 [DOI: 10.3169/itej.70.J142]


Yuriko Takakura, Masanori Nakayama, Issei Fujishiro
A Visual Analysis System for Compositional Processes of Composers in Spectral School
in Proceedings of the 5th IIEEJ International Workshop on Image Electronics and Visual Computing, 6 pages, Da Nang (Vietnam), March 2017
We propose a visual analysis approach to explore the compositional process of composers in Spectral School, with a particular focus on the sub-processes of sound analysis and synthesis. Spectral music has been one of the significant trends in contemporary music since the 1970s. Composers in Spectral School use the acoustic properties of sound spectra as the basis of their compositional materials. One of the representative software systems they use is AudioSculpt, which has been developed at IRCAM. We have developed a companion system to AudioSculpt that represents the history of creating sounds through spectral analysis of sound materials and processing of sound spectra with various filters. The salient feature of the system lies in its stacked spectrogram space, whose axes represent the elapsed time of the sound and the progress of composition. On this pixel-oriented spatial substrate, the system makes it possible for users to analyze compositional processes through dedicated interactive manipulations. As such, users are allowed to examine the compositional processes of musical pieces, whereas the primary targets of most previous studies were only completed pieces. The present system can also be regarded as an initial attempt at managing the provenance of time-series events in the music visualization field. Our approach is intended to open a door for composers to develop and share compositional methodologies.

Fumiya Shimizu, Issei Fujishiro
Selection of Localized Audio Track Based on Eye-Tracking Technologies with Application to Musical Art Gallery
in Proceedings of the 5th IIEEJ International Workshop on Image Electronics and Visual Computing, 4 pages, Da Nang (Vietnam), March 2017
Many methods using eye tracking have been proposed for realizing interfaces for everyday devices, including digital signage and HMDs. Such visual interfaces also have the potential to be used as audio interfaces. We herein intend to allow the viewer to select audio data by detecting his/her gaze with a single webcam. As a result, we can provide the viewer with an immersive environment in which he/she can easily focus on the object of interest. We also present a preliminary design of the interface, motivated by application to a museum or an art gallery.

Ken Nagao, Issei Fujishiro, Ikuo Takahashi
Effects of Non-Contact Interaction on Digital Signage Advertisement
in Proceedings of the 5th IIEEJ International Workshop on Image Electronics and Visual Computing, 6 pages, Da Nang (Vietnam), March 2017
Nowadays a variety of interaction technologies have been developed, and some of them allow a user to interact in non-contact ways, such as through body gestures. Such non-contact interactions have come to be utilized for a wide range of purposes, including communication with digital signage, which is one of the new types of media and is often used for advertisements in public. However, current utilization of non-contact interactions for digital signage advertisements lacks consideration of their effects from the perspective of consumer behavior. In this paper, we study the effects of non-contact interactions on digital signage advertisements through hypothesis testing in the light of consumer behavior. Our results suggest the pros and cons of non-contact interactions for digital signage advertisements, and we discuss how to capitalize on their strengths.

Yuto Hayakawa, Issei Fujishiro
2D Fluid Shape Design by Direct Manipulation
in Proceedings of the 5th IIEEJ International Workshop on Image Electronics and Visual Computing, 6 pages, Da Nang (Vietnam), March 2017
In this paper, we propose a simple interface for novices to design the shape of 2D fluid intuitively through direct manipulation, where parameters of fluid simulation are specified with hand motions to be captured. Recognizing specific hand motions as hand gesture commands allows the users to control the dynamics of 2D fluid at will, without detailed knowledge of fluid simulation.

Yasunari Ikeda, Issei Fujishiro
A Recursive Procedural Model for Improving Appearance of Clothes with Fiber-level Details
in Proceedings of the 5th IIEEJ International Workshop on Image Electronics and Visual Computing, 4 pages, Da Nang (Vietnam), March 2017
Making realistic yarn objects, a common material for textiles, without losing details is a challenging issue. Recent methods for modeling yarn can be classified into two categories: volume-based methods and fiber-based ones. Among the latter, one of the latest methods attempts to reconstruct twisted fibers from CT (computed tomography) images to improve the details of yarn models. While this method dramatically improves the details, the fluff and fuzz of the yarn still cannot be captured, mainly because of the limited resolution of the CT scanner. Our goal is to reproduce the fluff and fuzz of yarn. To that end, we developed an extension method that attaches microfibers to each fiber of the yarn. The method approximates the distribution of microfibers by recursively applying statistical data. In this paper, we discuss the preliminary design of the method.

Anri Kobayashi, Issei Fujishiro
An affective video generation system supporting impromptu musical performance
in Proceedings of 2016 International Conference on Cyberworlds, pp. 17-24, Chongqing (China), September 2016 [DOI: 10.1109/CW.2016.11]
When a musical instrument player performs music, the accompanying visual information can have a significant effect on the performance. For example, several players in a jam session may change their style of playing immediately by closely examining their co-players' expressions and behaviors and predicting their emotions and intentions on the fly. In this paper, we propose a system that generates videos in response to the impromptu performance of a single musical instrument player. The system evaluates the input signals in an affective way and generates a corresponding video based on the results of the evaluation. The player tends to change his/her performance as he/she is inspired by the generated video, in turn giving further triggers for the system to modify the video. The system aims to establish such an affective loop, in which it is expected to act as a "co-player" that continues to influence the performance of the real player. The final goal of this study is to improve the quality of the player's performing experience through such interactions between the player and the system. A user evaluation showed that the affective video generation could inspire an amateur guitarist serving as the subject and provide a cyberworld in which he was allowed to experience a better performance than playing alone.

Nobuhiko Jin, Naoki Haga, Issei Fujishiro
SeeGroove2: An orbit metaphor for interactive groove visualization
in Proceedings of 2016 International Conference on Cyberworlds, pp. 131-134, Chongqing (China), September 2016 [DOI: 10.1109/CW.2016.26] [received Best Short Paper Award]
Groove, the sense of rhythmic feel or musical swing, is one of the most essential factors of a good musical performance. Groove visualization can help players/listeners acquire groove sensation and share impressive expressions of music, regardless of personal hearing ability or musical sense. In this paper, we present SeeGroove2 as an extension of our previous system SeeGroove, by featuring a new groove visualization scheme relying on an orbit metaphor. Rhythm patterns are visually interpreted as various orbital shapes by plotting each played note as an orbiting circle and smoothly interpolating them. Thanks to the adoption of a module-oriented and multi-threaded architecture, the system can change the orbit shapes interactively as the music performance progresses, and thereby the user can feel groove transitions in the music on the fly.
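A hypothetical reading of the orbit metaphor described above: each played note is placed on an orbit, with its onset within the bar setting the angle and its MIDI velocity setting the radial distance. This is an illustrative sketch, not the SeeGroove2 implementation; smooth interpolation of the resulting points would trace the orbital shape.

```python
import math

def orbit_points(notes, bar_length=4.0):
    """Place each played note on an orbit.

    `notes` is a list of (onset_in_beats, midi_velocity) pairs.
    Onset within the bar sets the angle; velocity sets the radius.
    (A hypothetical reading of the orbit metaphor, not the authors' code.)
    """
    pts = []
    for onset, velocity in notes:
        theta = 2 * math.pi * (onset % bar_length) / bar_length
        r = velocity / 127.0          # MIDI velocity -> radius in (0, 1]
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts
```

A straight four-on-the-floor pattern at constant velocity yields a regular shape, while swung or accented patterns deform the orbit, making groove transitions visible.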

Kouhei Yasuda, Shigeo Takahashi, Hsiang-Yun Wu
Enhancing Infographics Based on Symmetry Saliency
in Proceedings of the 9th International Symposium on Visual Information Communication and Interaction (VINCI 2016), pp. 35-42, Dallas (USA), August 2016 [DOI: 10.1145/2968220.2968224]
Image saliency is a biologically inspired concept for characterizing the visual conspicuity of individual features in natural images, and provides a useful insight into the mechanism that directs viewers' instant visual attention. Nevertheless, this perceptual quality often needs further refinement for enhancing saliency in infographic images, since such images usually consist of relatively simple visual patterns that result in sharp image edges rather than the smooth gradations found in natural images. This paper presents a new approach to intentionally directing visual attention in infographic images, in such a way that the corresponding important features naturally pop up in the image. The idea behind our approach is to introduce the concept of symmetry saliency for enhancing the local symmetry inherent in such infographic images. This is accomplished by evaluating how much each image edge contributes to the symmetry saliency, and augmenting the corresponding image gradient in proportion to the amount of its contribution. The intensity field of the given image is then modulated with such enhanced image edges by solving the Poisson equation. Several examples, together with statistics obtained through a user study, demonstrate that our proposed approach successfully improves the readability of infographic images and effectively attracts visual attention to intended regions of interest.
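The gradient-enhancement-plus-Poisson step can be sketched as follows: scale the image gradients by a per-pixel weight (standing in here for the symmetry-saliency contribution) and reintegrate the intensity field by solving the Poisson equation with Jacobi iterations under Dirichlet boundary conditions. A minimal sketch under these assumptions, not the paper's implementation.

```python
import numpy as np

def enhance_edges(image, weight, n_iter=500):
    """Gradient-domain enhancement: scale image gradients by a per-pixel
    weight and reintegrate by solving the Poisson equation with Jacobi
    iterations.  Illustrative sketch only.
    """
    # Forward differences, scaled by the (saliency-like) weight.
    gx = np.diff(image, axis=1, append=image[:, -1:]) * weight
    gy = np.diff(image, axis=0, append=image[-1:, :]) * weight
    # Divergence of the modified gradient field (backward differences).
    div = (gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0))
    out = image.copy()
    for _ in range(n_iter):
        avg = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
               np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
        out = avg - div / 4.0
        # Dirichlet boundary: clamp the frame to the original intensities.
        out[0, :], out[-1, :] = image[0, :], image[-1, :]
        out[:, 0], out[:, -1] = image[:, 0], image[:, -1]
    return out
```

With a unit weight the original image is recovered; weights above one steepen the selected edges while the Poisson solve keeps the overall intensity field smooth.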

Longyin Xu, Masanori Nakayama, Hsiang-Yun Wu, Kazuho Watanabe, Shigeo Takahashi, Makoto Uemura, Issei Fujishiro
TimeTubes: Design of a Visualization Tool for Time-Dependent, Multivariate Blazar Datasets
in Proceedings of the NICOGRAPH International 2016, pp. 15–20, Hangzhou (China), July 2016 [DOI: 10.1109/NicoInt.2016.3]
Blazars are active galactic nuclei whose relativistic jets, ejected from the central black hole, point toward the Earth. Astronomers have attempted to classify blazars, but analyzing the time-dependent multivariate datasets with conventional visualization methods, such as scatter plot matrices, is difficult. This paper presents TimeTubes, a new visualization scheme that allows astronomers to analyze dynamic changes in, and feature causality among, the multiple time-varying variables. We target six representative time-varying variables from the originals, including two polarization-related parameters and their corresponding errors, intensity, and color. The four polarization parameters with a common time stamp are transformed into an ellipse, and a series of such ellipses are aligned in parallel along the timeline to form a volumetric tube in 3D space. The resulting tube is then colorized by the observed intensities and colors of the blazar. We designed a dedicated interface with nine functions to control the view of the tube interactively. The usability of TimeTubes is discussed with feedback from astronomers.

Rie Ishida, Shigeo Takahashi, Hsiang-Yun Wu
Adaptive Blending of Multiple Network Layouts for Overlap-Free Labeling
in Proceedings of the 20th International Conference on Information Visualisation (iV2016), pp. 15–20, Lisbon (Portugal), July 2016 [DOI: 10.1109/IV.2016.25]
Conventional force-directed algorithms are a common approach to aesthetically drawing networks, yet they still suffer from self-overlaps, especially when the network nodes are annotated with text labels. Incorporating space-partitioning techniques such as Voronoi tessellation is often effective in sparing enough space around each node, although it may incur other artifacts such as unexpectedly long edges and edge overlaps. This paper presents an approach to resolving overlaps among node labels by adaptively blending multiple layout forces applied to the respective network nodes. This is accomplished by extending our previous approach for transforming the force-directed layout into one obtained through centroidal Voronoi tessellation. Our technical contribution lies in a novel algorithm for smoothing the blending ratios associated with the network nodes, so that we can adaptively explore a reasonable balance between the two layouts independently for each node. Experimental results demonstrate that our new approach can produce a well-balanced distribution of node labels while maximally avoiding the aforementioned unwanted visual artifacts.
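A minimal sketch of per-node layout blending, assuming simple neighbor averaging as the smoothing step (the paper's actual smoothing algorithm differs): each node's position is a convex combination of its force-directed and centroidal-Voronoi positions, with the blending ratios smoothed over the network first.

```python
import numpy as np

def blend_layouts(pos_force, pos_voronoi, weights, adjacency, n_smooth=10):
    """Per-node blend between a force-directed layout and a centroidal
    Voronoi layout.  The per-node blending ratios are smoothed over the
    network by neighbor averaging, then applied as convex weights.
    Hypothetical sketch, not the published algorithm.
    """
    w = np.asarray(weights, dtype=float)
    A = np.asarray(adjacency, dtype=float)
    deg = A.sum(axis=1)
    for _ in range(n_smooth):
        # Relax each ratio toward the mean of its neighbors' ratios.
        w = 0.5 * w + 0.5 * (A @ w) / np.maximum(deg, 1.0)
    w = w[:, None]
    return (1.0 - w) * np.asarray(pos_force) + w * np.asarray(pos_voronoi)
```

Ratios near 0 keep a node at its aesthetically pleasing force-directed position; ratios near 1 move it toward the overlap-free Voronoi position, and the smoothing prevents abrupt layout discontinuities between adjacent nodes.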

Hsiang-Yun Wu
Focus+Context Metro Map Layout and Annotation
in Proceedings of Spring Conference on Computer Graphics (SCCG2016), Bratislava (Slovakia), April 2016 [DOI: 10.1145/2948628.2948642]
An annotated metro map, a graphic representation that abstracts the transportation network of a city and provides additional details, can hardly be drawn due to the landmark density around the city center. A focus+context illustrated map is therefore commonly used to provide detailed information around a focus region while preserving the context area, so that map readers can still keep their mental image of the city. Nonetheless, conventional techniques do not sufficiently preserve the layout octilinearity over the navigation process, especially when large deformation is required because there are significant landmarks around central stations. This paper introduces focus+context annotated metro maps, a design that emphasizes focus regions by embedding landmark icons around the stations while aesthetically aligning metro lines and label leaders in an octilinear fashion. Our idea is to employ the conventional fisheye technique when considering appropriate edge lengths in a focus region, and to generate sufficient space around the labeled stations by introducing a relative neighborhood graph for deformation purposes. This is accomplished by introducing appropriate design conditions into a linear program so that we can constrain the positions of stations and labels while preserving the octilinearity within both focus and context regions. The optimization problem is then solved in a least-squares sense. We also provide a user interface for customizing maps through intervention, and present several design examples to demonstrate the effectiveness of the approach.

Makoto Uemura, Koji S. Kawabata, Shiro Ikeda, Keiichi Maeda, Hsiang-Yun Wu, Kazuho Watanabe, Shigeo Takahashi, Issei Fujishiro
Data-driven approach to Type Ia supernovae: variable selection on the peak luminosity and clustering in visual analytics
in Journal of Physics: Conference Series, Vol. 699, No. 012009, April 2016 [DOI: 10.1088/1742-6596/699/1/012009]
Type Ia supernovae (SNIa) have an almost uniform peak luminosity, so they are used as "standard candles" to estimate distances to galaxies in cosmology. In this article, we introduce our two recent works on SNIa based on a data-driven approach. The diversity in the peak luminosity of SNIa can be reduced by corrections in several variables. The color and decay rate have been used as the explanatory variables of the peak luminosity in past studies. However, it has been proposed that spectral data could give a better model of the peak luminosity. We use cross-validation in order to control the generalization error, and a LASSO-type estimator in order to choose the set of variables. Using 78 samples and 276 candidate variables, we confirm that the peak luminosity depends on the color and decay rate. Our analysis does not support adding any other variables to obtain a better generalization error. On the other hand, this analysis is based on the assumption that SNIa originate in a single population, which is not trivial; indeed, several sub-types possibly having different natures have been proposed. We used a visual analytics tool for the asymmetric biclustering method to find both a good set of variables and a good set of samples at the same time. Using 14 variables and 132 samples, we found that SNIa can be divided into two categories by the expansion velocity of the ejecta. These examples demonstrate that the data-driven approach is useful for the high-dimensional, large-volume data that are becoming common in modern astronomy.
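As an illustration of the LASSO-type variable selection mentioned above, the sketch below minimizes a standard lasso objective with ISTA (iterative soft thresholding); the paper's actual estimator and cross-validation setup may differ.

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=2000):
    """Minimize ||y - Xb||^2 / (2n) + lam * ||b||_1 by ISTA.

    A minimal stand-in for a LASSO-type estimator: irrelevant
    explanatory variables receive exactly zero coefficients.
    """
    n, p = X.shape
    b = np.zeros(p)
    # Lipschitz constant of the smooth part's gradient (spectral norm).
    L = np.linalg.norm(X, 2) ** 2 / n
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / n
        z = b - grad / L
        # Soft-thresholding induces sparsity in the coefficients.
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return b
```

In practice the regularization strength `lam` would be chosen by cross-validation, retaining the value that minimizes the held-out prediction error, which mirrors the generalization-error control described in the abstract.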

Kazuho Watanabe, Hsiang-Yun Wu, Shigeo Takahashi, Issei Fujishiro
Asymmetric biclustering with constrained von Mises-Fisher models
in Journal of Physics: Conference Series, Vol. 699, No. 012018, April 2016 [DOI: 10.1088/1742-6596/699/1/012018]
As a probability distribution on the high-dimensional sphere, the von Mises-Fisher (vMF) distribution is widely used for directional statistics and data analysis methods based on correlation. We consider a constrained vMF distribution for block modeling, which provides a probabilistic model of an asymmetric biclustering method that uses correlation as the similarity measure of data features. We derive the variational Bayesian inference algorithm for the mixture of the constrained vMF distributions. It is applied to a multivariate data visualization method implemented with enhanced parallel coordinate plots.
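As a minimal illustration of the vMF building block, the sketch below fits the maximum-likelihood mean direction and an approximate concentration (the closed form of Banerjee et al.) from data on the sphere; the constrained, biclustered mixture model in the paper is considerably more involved.

```python
import numpy as np

def vmf_fit_mean(X):
    """Fit the mean direction and concentration of a vMF sample.

    The ML mean direction is the normalized resultant vector; the
    concentration uses Banerjee et al.'s closed-form approximation.
    (Sketch of the vMF building block, not the paper's model.)
    """
    X = np.asarray(X, dtype=float)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)  # project onto the sphere
    resultant = X.sum(axis=0)
    r_bar = np.linalg.norm(resultant) / len(X)        # mean resultant length
    mu = resultant / np.linalg.norm(resultant)
    d = X.shape[1]
    # kappa ~ r(d - r^2) / (1 - r^2): large kappa = tightly concentrated data.
    kappa = r_bar * (d - r_bar ** 2) / (1.0 - r_bar ** 2)
    return mu, kappa
```

Tightly clustered directions yield a large concentration parameter, which is what makes the vMF family a natural probabilistic counterpart of correlation-based similarity on normalized data.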

Malik Olivier Boussejra, Noboru Adachi, Hideki Shojo, Ryohei Takahashi, Issei Fujishiro
LMML: Initial developments of an integrated environment for forensic data visualization
in Proceedings of the 18th EG/IEEE VGTC Conference on Visualization (EuroVis Short Papers), pp. 31-35, Groningen (Netherlands), June 2016
Fighting against crime is paramount to any society, perhaps more today than ever before. Tools to fight and elucidate crime are rooted in forensic science. Through the autopsy of a body, we can answer a whole range of questions as to how death happened, and come up with explanations and counter-measures so that the same dire circumstance does not happen again. However, because autopsy reports are written manually, recording the data collected through a traditional autopsy is still a cumbersome, time-consuming task. Our framework, based on a markup language (which we dubbed "LMML") to store, describe, and arrange forensic data, aims at overcoming those issues. Our contribution is twofold: the design of the syntax and semantics of LMML, and the conception of an interface to create, edit, analyse, or query files written in that language. The framework thus allows quicker, smoother input of forensic data, and better automation and visualization thereof, so that the data can be used by medical examiners, investigators, and judicial courts.


Yusuke Ishikawa, Issei Fujishiro
Visual Analysis of Rugby Matches: Pixel-oriented Visualization and Evaluation Indices
IEEE VIS 2016 Poster Session
In this study, we take various features inherent to rugby into account to propose a novel pixel-oriented visualization method that helps spectators immediately grasp the transition of tactical situations in a match. We also deploy a set of indices to quantitatively analyze the strategic advantage a team gains in the match.


Issei Fujishiro, Shigeo Takahashi, Kazuho Watanabe, Hsiang-Yun Wu
IEICE Journal D, Vol. 99, No. 5, pp. 466-470, May 2016


宮澤 篤, 中山 雅紀, 藤代 一成
On the Analytic Extension of Plane Curves with Complex Parameters and Its Application to Interactive Art: Is the Real World Really a Cross-Section of a Higher-Dimensional World?
in Proceedings of the Japan Society for Graphic Science 2016 Spring Annual Meeting (ISSN 2189-0072), pp. 83-88, Hachinohe Grand Hotel (Hachinohe, Aomori), May 2016

Makoto Uemura, Longyin Xu, Masanori Nakayama, Hsiang-Yun Wu, Kazuho Watanabe, Shigeo Takahashi, Issei Fujishiro
TimeTubes: Visualization of polarization variations in blazars
Blazars through Sharp Multi-Wavelength Eyes, Malaga (Spain), June 2016

藤代 一成
An Invitation to Applied Perceptual Psychology: From "Showing" to "Captivating"
Invited talk at the 3D Joint Symposium, Tokyo International Forum (Chiyoda, Tokyo), June 2016

鹿間 脩斗, 川田 玄一, 藤代 一成
映像表現・芸術科学フォーラム2017 (ITE Technical Report, Vol. 41, No. 12, pp. 37-40), March 2017; received Excellent Presentation Award

中田 聖人, 藤代 一成
映像表現・芸術科学フォーラム2017 (ITE Technical Report, Vol. 41, No. 12, pp. 53-56), March 2017

高橋 玲央, 藤代 一成
映像表現・芸術科学フォーラム2017 (ITE Technical Report, Vol. 41, No. 12, pp. 149-152), March 2017

土方 希, 鹿間 脩斗, 藤代 一成
映像表現・芸術科学フォーラム2017 (ITE Technical Report, Vol. 41, No. 12, pp. 209-212), March 2017; received Excellent Presentation Award and CG-ARTS Human Resource Development Partner Company Award

宮原 裕貴, 中山 雅紀, 藤代 一成
映像表現・芸術科学フォーラム2017 (ITE Technical Report, Vol. 41, No. 12, pp. 309-312), March 2017

湯浅 海貴, 中山 雅紀, 藤代 一成
the 79th National Convention of IPSJ, 1X-01 (Proceedings (4), pp. 35-36), March 2017

土方 希, 鹿間 脩斗, 藤代 一成
CoCoA: Actor Casting Support for Live-Action Adaptations of Comics and Anime
the 79th National Convention of IPSJ, 1X-02 (Proceedings (4), pp. 37-38), March 2017; received Student Encouragement Award

宮原 裕貴, 中山 雅紀, 藤代 一成
the 79th National Convention of IPSJ, 1X-03 (Proceedings (4), pp. 39-40), March 2017

澤田 奈生子, 中山 雅紀, 植村 誠, Hsiang-Yun Wu, 藤代 一成
the 79th National Convention of IPSJ, 3X-08 (Proceedings (4), pp. 85-86), March 2017; received Student Encouragement Award

中田 聖人, 藤代 一成
the 79th National Convention of IPSJ, 4X-02 (Proceedings (4), pp. 91-92), March 2017

都甲 裕太朗, 池田 泰成, 藤代 一成
the 79th National Convention of IPSJ, 7X-02 (Proceedings (4), pp. 141-142), March 2017

篠崎 紗衣子, 中山 雅紀, 藤代 一成
the 79th National Convention of IPSJ, 7X-03 (Proceedings (4), pp. 143-144), March 2017