EAI Endorsed Transactions on Creative Technologies

Basic info

  • Publisher: European Alliance for Innovation
  • Country of publisher: Belgium
  • Date added to EuroPub: 2019/Jun/07

Subject and more

  • LCC Subject Category: Computer and Information Science, Telecommunications
  • Publisher's keywords: Creative Technologies, Transactions Technologies
  • Language of fulltext: English

Publication charges

  • Article Processing Charges (APCs): No
  • Submission charges: No
  • Waiver policy for charges? No

Editorial information

Open access & licensing

  • Type of License: CC BY
  • License terms
  • Open Access Statement: Yes
  • Year open access content began: 2014
  • Does the author retain unrestricted copyright? Yes
  • Does the author retain publishing rights? Yes

Best practice policies

  • Permanent article identifier: DOI
  • Content digitally archived in:
  • Deposit policy registered in: None

This journal has 81 articles.

PaisleyTrees: A Size-Invariant Tree Visualization

Authors: Katayoon Etemad, Dominikus Baur, John Brosz, Sheelagh Carpendale, Faramarz F. Samavati
Abstract

Squeezing large tree structures into suitable visualizations has been a perennial problem. In response to this challenge, we present PaisleyTrees, a size-invariant tree visualization. PaisleyTrees integrate node-of-interest focus with tree-cut presentations to support rapid tree navigation without resorting to zooming and panning. This visualization offers the ability to work with trees of arbitrary depth and breadth, and maintains legibility for displayed elements. These advantages are achieved by using a hybrid layout, inspired by traditional Paisley patterns, that combines node-link, nested, and adjacency-based tree layout techniques, and offers both depth and breadth elision.
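
The focus-plus-tree-cut idea mentioned in the abstract can be illustrated with a small, generic sketch (this is not the PaisleyTrees layout algorithm, only the elision concept): keep the ancestors of a node of interest, keep a bounded number of its children, and summarise the rest. The function name `tree_cut`, the dictionary-based tree and the threshold are illustrative assumptions.

```python
# Generic illustration of a focus-based tree cut with depth/breadth elision.
# NOT the PaisleyTrees layout algorithm, only the elision idea it relies on.

def tree_cut(tree, focus, max_children=4):
    """Return the nodes kept when focusing on `focus`.

    tree: dict mapping node -> list of children
    focus: node of interest
    max_children: breadth elision threshold (extra siblings are summarised)
    """
    # Build parent links so we can walk up to the root.
    parent = {c: p for p, kids in tree.items() for c in kids}

    # Ancestors of the focus node (depth elision keeps only this path).
    path = [focus]
    while path[-1] in parent:
        path.append(parent[path[-1]])

    kept = list(reversed(path))               # root ... focus
    children = tree.get(focus, [])
    kept.extend(children[:max_children])      # breadth elision
    elided = max(0, len(children) - max_children)
    return kept, elided


if __name__ == "__main__":
    tree = {"root": ["a", "b"], "a": ["a1", "a2", "a3", "a4", "a5"], "b": []}
    kept, elided = tree_cut(tree, "a", max_children=3)
    print(kept)    # ['root', 'a', 'a1', 'a2', 'a3']
    print(elided)  # 2 children summarised instead of drawn
```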

Keywords: Information Visualization, Tree Layout, Hybrid Layout, Mobile, Graphs

Varianish: Jamming with Pattern Repetition

Authors: Jort Band, Mathias Funk, Peter Peters, Bart Hengeveld
Abstract

In music, patterns and pattern repetition are often regarded as a machine-like task, and indeed are often delegated to drum machines and sequencers. Nevertheless, human players add subtle differences and variations to repeated patterns that are musically interesting and often unique. Especially in minimal music, pattern repetitions create hypnotic effects and the human mind tunes out the actual pattern to focus on variation and tiny differences over time. Varianish is a musical instrument that aims at turning this phenomenon into a new musical experience for musician and audience: musical pattern repetitions are found in live music and Varianish generates additional (musical) output accordingly that adds substantially to the overall musical expression. Apart from the theory behind the pattern finding and matching and the conceptual design, a demonstrator implementation of Varianish is presented and evaluated.

Keywords: Musical patterns, rhythm, pattern detection, micro-focus, improvisation

Effect of avatars and viewpoints on performance in virtual world: efficiency vs. telepresence

Authors: Y. Rybarczyk, T. Coelho, T. Cardoso, R. de Oliveira
Abstract

An increasing number of our interactions are mediated through e-technologies. In order to enhance the human feeling of presence in these virtual environments, also known as telepresence, the individual is usually embodied in an avatar. The natural adaptation capabilities of the human being, underlain by the plasticity of the body schema, make body ownership of the avatar possible, in which the user feels more like his/her virtual alter ego than himself/herself. However, this phenomenon only occurs under specific conditions. Two experiments are designed to study the human’s feeling and performance according to a scale of natural relationship between the participant and the avatar. In both experiments, the human-avatar interaction is carried out by a Natural User Interface (NUI) and the individual’s performance is assessed through a behavioural index, based on the concept of affordances, and a questionnaire of presence. The first experiment shows that the feeling of telepresence and ownership seem to be greater when the avatar’s kinematics and proportions are close to those of the user. However, the efficiency to complete the task is higher for a more mechanical and stereotypical avatar. The second experiment shows that the manipulation of the viewpoint induces a similar difference across the sessions. Results are discussed in terms of the neurobehavioural processes underlying performance in virtual worlds, which seem to be based on ownership when the virtual artefact ensures a preservation of sensorimotor contingencies, and on simple geometrical mapping when the conditions become more artificial.

Keywords: telepresence, mapping, body ownership, avatar, viewpoint, affordances, virtual environments, NUI

Advancing Performability in Playable Media: A Simulation-based Interface as a Dynamic Score

Authors: I. Choi
Abstract

When designing playable media with a non-game orientation, alternative play scenarios to gameplay scenarios must be accompanied by alternative mechanics to game mechanics. The problems of designing playable media with a non-game orientation are stated as the problems of designing a platform for creative exploration and creative expression. For such design problems, two requirements are articulated: 1) play state transitions must be dynamic in non-trivial ways in order to achieve a significant level of engagement, and 2) pathways for players’ experience from exploration to expression must be provided. The transformative pathway from creative exploration to creative expression is analogous to pathways for game players’ skill acquisition in gameplay. The paper first describes a concept of simulation-based interface, and then binds that concept with the concept of dynamic score. The former partially accounts for the first requirement, the latter for the second. The paper describes the prototype and realization of the two concepts’ binding. “Score” is here defined as a representation of cue organization through a transmodal abstraction. A simulation-based interface is presented with swarm mechanics, and its function as a dynamic score is demonstrated with an interactive musical composition and performance.

Keywords: playability, playable media, performability, simulation-based interface, dynamic score, sound mechanics, prolonged engagement model, creative exploration, creative entertainment, interactive performance

A taxonomy of camera calibration and video projection correction methods

Authors: Radhwan Ben Madhkour, Matei Mancas, Thierry Dutoit
Abstract

This paper provides a classification of calibration methods for cameras and projectors. From basic homography to complex geometric calibration methods, the paper aims to simplify the choice of a calibration method with respect to the complexity of the setup. The classical camera calibration methods are presented, and a comparison gives the pros and cons of each method. For projector calibration, the homography, structured light and geometric calibration approaches are presented. Every general approach to projector calibration is studied and the limitations of each method are given. Each approach is described through its main reference method, and a classification of the projector calibration approaches is given.
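
As a concrete reference point for the simplest class of methods surveyed, a planar homography between two image planes can be estimated directly with OpenCV from four or more point correspondences. The sketch below is a generic illustration under made-up point values, not a method taken from the paper.

```python
# Minimal homography estimation between two planes, the simplest calibration
# class discussed in the taxonomy. Point correspondences are invented purely
# for illustration.
import numpy as np
import cv2

# Four (or more) corresponding points: source plane -> destination plane.
src_pts = np.array([[0, 0], [640, 0], [640, 480], [0, 480]], dtype=np.float32)
dst_pts = np.array([[12, 8], [620, 20], [600, 470], [25, 455]], dtype=np.float32)

# Estimate the 3x3 homography (RANSAC tolerates outliers when >4 points are used).
H, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 3.0)

# Map an arbitrary source point into the destination plane.
p = np.array([[[320.0, 240.0]]], dtype=np.float32)
print(cv2.perspectiveTransform(p, H))
```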

Keywords: projector calibration

Evaluating music performance and context-sensitivity with Immersive Virtual Environments

Authors: Donald Glowinski, Naëm Baron, Kanika Shirole, Sélim Yahia Coll, Lina Chaabi, Tamara Ott, Marc-André Rappaz, Didier Grandjean
Abstract

This study explores a unique experimental protocol that evaluates how a musician’s sensitivity to social context during performance can be analysed through a combination of behavioural analysis, self-report and an Immersive Virtual Environment (IVE). An original application has been developed to create audiences of avatars that display different motivational states that are known to affect a musician’s performance. The musicians’ body expressions have then been recorded through a motion capture system and analysed as they relate to the audience’s motivational state. The musicians’ subjective experience has been captured after each performance through semi-structured interviews. Preliminary results depict the strategies implicitly employed by four expert violinists during their performances under the various contexts (empty room, and engaged and disengaged audiences of avatars). Finally, this study discusses ways to improve the methodology, the analyses and real-world responses to musicians’ needs.

Keywords: Virtual Immersive environment, body expressivity, Music Performance

CuriousMind photographer: distract the robot from its initial task

Authors: Vincent Courboulay, Matei Mancas
Abstract

Mainly present in industry, robots are beginning to enter our everyday lives for very precise tasks. In order to reach a level where more general robots get involved in our lives, the robots' abilities to communicate and to react to unexpected situations must be improved. This paper introduces an attentive computational model for robots, as attention can help both in reacting to unexpected situations and in improving human-robot communication. We propose to enhance and implement an existing real-time computational model. Intensity, color and orientation are usually used, but we have added information related to depth and isolation. We have built a robotic system based on the LEGO Mindstorms platform and the Kinect RGB-D sensor. This robot, called CuriousMind, is able to take a picture of the most interesting part of the scene, and it can also be distracted from its initial goal by novel situations, mimicking in that way the behaviour of humans (and more precisely of small children).
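
As an illustration of the kind of bottom-up feature such attention models build on, a minimal centre-surround intensity saliency map can be computed with OpenCV as below. This is a generic sketch of a single conspicuity channel, not the CuriousMind model; the depth and isolation cues mentioned in the abstract would additionally require the Kinect depth stream, and `frame.png` is just a placeholder test image.

```python
# Minimal centre-surround intensity saliency sketch (one channel only).
# Generic illustration, not the CuriousMind attention model itself.
import cv2
import numpy as np

def intensity_saliency(bgr):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    # Centre-surround: difference between fine and coarse Gaussian scales.
    center = cv2.GaussianBlur(gray, (0, 0), sigmaX=2)
    surround = cv2.GaussianBlur(gray, (0, 0), sigmaX=16)
    sal = np.abs(center - surround)
    return cv2.normalize(sal, None, 0, 1, cv2.NORM_MINMAX)

if __name__ == "__main__":
    frame = cv2.imread("frame.png")            # any test image
    sal = intensity_saliency(frame)
    y, x = np.unravel_index(np.argmax(sal), sal.shape)
    print("most salient pixel:", x, y)         # candidate 'interesting' spot
```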

Keywords: Attentional system, robotic implementation, 3D saliency

Multi-GPU based framework for real-time motion analysis and tracking in multi-user scenarios

Authors: Sidi Ahmed Mahmoudi
Abstract

Video processing algorithms are a necessary tool for various domains related to computer vision, such as motion tracking, event detection and localization in multi-user scenarios (crowd videos, mobile cameras, scenes with noise, etc.). However, the new video standards, especially those in high definition, require more computation since their processing is applied to large video frames. As a result, the current implementations, even running on modern hardware, cannot provide real-time processing (25 frames per second, fps). Several solutions have been proposed to overcome this constraint by exploiting graphics processing units (GPUs). Although they exploit GPU platforms, they are not able to provide real-time processing of high-definition video sequences. In this work, we propose a new framework that enables an efficient exploitation of single and multiple GPUs, in order to achieve real-time processing of Full HD or even 4K video standards. Moreover, the framework includes several GPU-based primitive functions related to motion analysis and tracking, such as silhouette extraction, contour extraction, corner detection, and tracking using optical flow estimation. Based on this framework, we developed several real-time, GPU-based video processing applications such as motion detection with a moving camera, event detection and event localization.
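
One of the primitive functions mentioned, optical-flow-based tracking, can be illustrated on the CPU with OpenCV as a single-threaded baseline. The sketch below computes Farnebäck dense flow between two consecutive frames (the file names are placeholders); the paper's contribution, the single/multi-GPU scheduling around such primitives, is not shown here.

```python
# Single-threaded baseline for one framework primitive: dense optical flow
# between two consecutive frames (Farneback). The multi-GPU scheduling that
# the paper contributes is not part of this sketch.
import cv2
import numpy as np

prev = cv2.cvtColor(cv2.imread("frame_000.png"), cv2.COLOR_BGR2GRAY)
curr = cv2.cvtColor(cv2.imread("frame_001.png"), cv2.COLOR_BGR2GRAY)

flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2,
                                    flags=0)

# Magnitude of motion per pixel; thresholding it gives a crude motion mask.
mag = np.linalg.norm(flow, axis=2)
print("moving pixels:", int((mag > 1.0).sum()))
```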

Keywords: Multi-GPU computing, camera motion estimation, event detection and event localization

Virtual Character Animations from Human Body Motion by Automatic Direct and Inverse Kinematics-based Mapping

Authors: Andrea Sanna, Fabrizio Lamberti, Gianluca Paravati, Gilles Carlevaris, Paolo Montuschi
Abstract

Motion capture systems provide an efficient and interactive solution for extracting information related to a human skeleton, which is often exploited to animate virtual characters. When the character cannot be assimilated to an anthropometric shape, the task of mapping motion capture data onto the armature to be animated can be extremely challenging. This paper presents two methodologies for the automatic mapping of a human skeleton onto virtual character armatures. Kinematic chains of the human skeleton are analyzed in order to map joints, bones and end-effectors onto arbitrarily shaped armatures. Both forward and inverse kinematics are considered. A prototype implementation has been developed using the Microsoft Kinect as the body tracking device. Results show that the proposed solution can already be used to animate truly different characters, ranging from a Pixar-like lamp to different kinds of animals.
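
The forward-kinematics side of such a mapping can be made concrete with a toy two-bone chain: given joint angles taken from the tracked human skeleton, the end-effector position of the target armature follows by composing the bone transforms. The sketch below is a generic planar FK example under assumed bone lengths and angles, not the paper's mapping algorithm.

```python
# Toy forward kinematics for a planar two-bone chain (e.g. upper arm + forearm).
# Generic illustration of driving an armature from joint angles; bone lengths
# and angles are assumptions, not data from the paper.
import math

def fk_2link(theta1, theta2, l1=0.30, l2=0.25):
    """End-effector position for joint angles theta1, theta2 (radians)."""
    x1 = l1 * math.cos(theta1)
    y1 = l1 * math.sin(theta1)
    x2 = x1 + l2 * math.cos(theta1 + theta2)
    y2 = y1 + l2 * math.sin(theta1 + theta2)
    return (x2, y2)

if __name__ == "__main__":
    # Angles that would come from the tracked human skeleton (e.g. Kinect).
    print(fk_2link(math.radians(45), math.radians(30)))
```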

Keywords: virtual character animation, automatic armature mapping, motion capture, graph similarity, forward kinematics, inverse kinematics

Social retrieval of music content in multi-user performance

Authors: Maurizio Mancini, Gualtiero Volpe, Giovanna Varni, Antonio Camurri
Abstract

An emerging trend in interactive music performance consists of the audience directly participating in the performance by means of mobile devices. This is a step forward with respect to concepts like active listening and collaborative music making: non-expert members of an audience are enabled to directly participate in a creative activity such as the performance. This requires the availability of technologies for capturing and analysing in real time the natural behaviour of the users/performers, with particular reference to non-verbal expressive and social behaviour. This paper presents a prototype of a non-verbal expressive and social search engine and active listening system, enabling two teams of non-expert users to act as performers. The performance consists of real-time sonic manipulation and mixing of music pieces selected according to features characterising the performers’ movements captured by mobile devices. The system is described with specific reference to the SIEMPRE Podium Performance, a non-verbal socio-mobile music performance presented at the Art & ICT Exhibition that took place in Vilnius, Lithuania, in November 2013.

Keywords: personalised social media experience in mobile devices, embodied cooperation, expressive and social features, music retrieval

Head pose estimation & TV Context: current technology

Authors: Francois Rocca, Matei Mancas, Fabien Grisard, Julien Leroy, Thierry Ravet, Bernard Gosselin
Abstract

With the arrival of low-cost, high-quality cameras, implicit user behaviour tracking becomes easier and very interesting for viewer modelling and content personalization in a TV context. In this paper, we present a comparison of three common algorithms for automatic head direction extraction for a person watching TV in a realistic context. These algorithms compute the rotation angles of the head (pitch, roll, yaw) in a non-invasive and continuous way, based on 2D and/or 3D features acquired with low-cost cameras. The results are compared with a reference based on the Qualisys commercial motion capture system, a robust marker-based tracking system. The performance of the different algorithms is compared as a function of different configurations. While our results show that fully implicit behaviour tracking in real-life TV setups is still a challenge, with the arrival of next-generation sensors (such as the new Kinect One sensor), accurate TV personalization based on implicit behaviour is close to becoming a very interesting option.
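
The pitch, roll and yaw angles compared in the paper can be recovered from a 3x3 head rotation matrix. The sketch below extracts them under one common Euler-angle convention (Z-Y-X, Tait-Bryan); the compared trackers may well use a different convention, so this is only an illustrative reference.

```python
# Extract yaw/pitch/roll from a 3x3 rotation matrix using the Z-Y-X (Tait-Bryan)
# convention. One common convention; the compared trackers may use another.
import numpy as np

def rotation_to_ypr(R):
    """Return (yaw, pitch, roll) in degrees for rotation matrix R (Z-Y-X order)."""
    pitch = np.arcsin(-R[2, 0])
    if abs(R[2, 0]) < 0.9999:                  # away from gimbal lock
        yaw = np.arctan2(R[1, 0], R[0, 0])
        roll = np.arctan2(R[2, 1], R[2, 2])
    else:                                      # degenerate case: pitch = +/-90 deg
        yaw = np.arctan2(-R[0, 1], R[1, 1])
        roll = 0.0
    return tuple(np.degrees([yaw, pitch, roll]))

if __name__ == "__main__":
    print(rotation_to_ypr(np.eye(3)))          # (0.0, 0.0, 0.0) for a frontal head
```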

Keywords: head pose estimation, viewer interest, face direction, Qualisys, Kinect, face tracking, 3D point cloud

Towards the creation of a Gesture Library

Authors: Bruno Galveia, Tiago Cardoso, Vitor Santor, Yves Rybarczyk
Abstract

The evolution of technology has given rise to new possibilities in the so-called Natural User Interfaces research area. Among distinct initiatives, several researchers are working with the existing sensors towards improving support for gesture languages. This article tackles the recognition of gestures, using the Kinect sensor, in order to create a gesture library and support gesture recognition processes afterwards.
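
One common way to compare a captured joint trajectory against templates stored in such a library is dynamic time warping (DTW). The sketch below is a generic DTW distance on 1-D sequences and is purely an illustrative assumption: the article does not state which matching method it uses.

```python
# Generic dynamic time warping (DTW) distance between two 1-D sequences,
# a common way to match a recorded gesture against library templates.
# Illustrative assumption only; the article does not specify its matching method.
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

if __name__ == "__main__":
    template = [0.0, 0.2, 0.6, 1.0, 0.6, 0.2, 0.0]        # stored gesture (one joint)
    observed = [0.0, 0.1, 0.3, 0.7, 1.0, 0.5, 0.1, 0.0]   # newly captured gesture
    print(dtw_distance(template, observed))
```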

Keywords: Kinect Sensor, Gesture Recognition

Evaluation of a Facial Animation Authoring Pipeline Seamlessly Supporting Performance Capture and Manual Key-pose Editing

Authors: Fabrizio Nunnari, Alexis Heloir
Abstract

In this paper, we present an architecture following a novel animation authoring pipeline that seamlessly supports performance capture and manual editing of key-frame animations. This pipeline allows novice users to record and author sophisticated facial animations in a fraction of the time that would be required using traditional animation tools. The approach paves the way towards novel animation pipelines which seamlessly merge the roles of the animator and the actor. The second contribution is a method for assessing a facial retargeting system: we conducted a user study in which participants assessed the emotions conveyed by the facial expressions displayed in the control and the authored animations. Contrary to existing evaluation methods, it factors out possible misinterpretations of the intended emotion and focuses on assessing the retargeting quality.

Keywords: facial animation, performance capture, retargeting, evaluation, animation authoring

Characterisation of gestural units in light of human-avatar interaction

Authors: I. Renna, S. Delacroix, F. Catteau, C. Vincent, D. Boutet
Abstract

We present a method for characterizing coverbal gestural units intended for human-avatar interaction. We recorded 12 gesture types using a motion-capture system and used the marker positions thus obtained to determine the gestural units after stroke segmentation. We complement our linguistic analysis of gestures with an elaboration of our biomechanical hypotheses, our segmentation method, our characterization hypotheses and the results obtained.

Keywords: Gesture units, stroke segmentation, gesture characterization

About Europub

EuroPub is a comprehensive, multipurpose database covering scholarly literature, with indexed records from active, authoritative journals, and it indexes articles from journals all over the world. The result is an exhaustive database that assists research in every field. Easy access to a vast database in one place considerably reduces searching and data-reviewing time and greatly helps authors in preparing new articles. EuroPub aims at increasing the visibility of open access scholarly journals, thereby promoting their increased usage and impact.