Tutorials and Workshops

Tutorials

Speakers | Title | Room | Schedule
Julien Epps | Automatic Affective and Task Analysis for Wearable Computing | Panorama I. | October 9, 9:00-12:30
Ildar Batyrshin | New Ways to Look at Similarity and Association Measures and the Methods of their Construction | Duna Salon IV. | October 9, 13:30-18:00
Robert Kozma | Cognitive Phase Transitions in the Cerebral Cortex: Brain Imaging Experiments, Graph Theory Models, and Engineering Applications | Panorama IV. | October 9, 9:00-12:30
Hermann Kaindl | Human-Machine Interaction | Duna Salon III. | October 9, 9:00-12:30
Tricia Gibo, Erwin Boer, David Abbink, Makoto Itoh, Tom Carlson, René van Paassen | Honest Evaluations of Shared Human-Machine Control Systems | Duna Salon III. | October 9, 13:30-18:00

BMI Tutorials

Speakers | Title | Room | Schedule
Reinhold Scherer, Olga Sourina | Mobile BCI application: Neuroscience-based design and neurorehabilitation | Duna Salon II. | October 9, 13:30-15:30
Maureen Clerc, Jérémie Mattout | Why bother with advanced modeling in BCI? Lessons from neuroimaging. | Duna Salon II. | October 9, 15:30-17:30

Workshops

Organizer(s) | Title | Room | Schedule
Levente Kovács, Clara Ionescu | Workshop on Women in Engineering | Panorama V. | October 9, 9:00-12:30
Hamido Fujita, Enrique Herrera-Viedma, Ali Selamat, Amedeo Cesta, Francisco Chiclana | Big Data based Technological Innovations on Intelligent Health Service in the Clouds | Duna Salon IV. | October 9, 9:00-12:30
IEEE SMC Technical Committee on Brain-Machine Interface Systems | Brain-Machine Interface Workshop | The detailed program can be found in the SMC 2016 final program.

 

Detailed tutorial programs:

Tutorial 1: Automatic Affective and Task Analysis for Wearable Computing

Speaker: Julien Epps
Affiliation: University of New South Wales, Australia

Abstract:

A day in anyone’s life can be segmented into a series of broadly defined tasks: you begin a task, become loaded to some extent by the objects, movements, communication and/or mental challenges that comprise that task, then at some point you switch to or are distracted to a new task, and so on. A “task” is arguably the most fundamental unit of human activity from a machine perspective, and yet at present we have only extremely limited means by which to detect when a human has changed tasks and to estimate what level of emotion, physical load, mental load and other load types they experience during tasks. The growth in wearable computing presents both an opportunity and an imperative for computers to significantly better understand the user’s primary task and its demands on the user in real time. Recent wearables like Google Glass and EyeSpeak show the future: content, functionality and interruptions persistently in the user’s field of view, but also the opportunity to position near-field sensors directly where they are most useful for task analysis – in front of the eye, near the mouth and fixed to the head. Task analysis at present offers virtually no automatic means to detect the points at which a user transitions from one task to the next (or one emotional or mental state to the next) except when all of the user’s primary tasks can be predefined and are contained within a desktop PC being monitored – this is an increasingly unrealistic view of computer use. An automatic alternative to the very dated manual analysis methods for task transition detection and task load level estimation is needed, and behavioural signals are preferable because they are noninvasive. Head-mounted wearable sensors bring within grasp the prospect of ‘always-on’ automatic analysis of emotion and of physical and mental tasks based on signals such as speech, eye activity and body movement. Human task analytics of this kind represent a huge opportunity for individual users to empower themselves and interact more seamlessly with machines in the age of big data.

This tutorial introduces and examines some of the key research problems in using behavioural signals in particular to automatically analyse tasks and emotions: understanding the psychophysiological basis of signals during speech production, eye activity and body movement; pre-processing and calibrating signals like eye video and accelerometry data; extracting suitable features; reducing feature variability due to illumination, movement, speaker and linguistic content; developing machine learning methods for detecting task transition and for estimating the level of affective intensity and of particular types of task load; comparing and evaluating diverse methods; constructing databases for developing and evaluating systems of this kind; and system design for continuous and robust automatic task analysis. The discussion of task analysis and task load estimation is framed within a recognition that there is a need to move beyond classifying a limited set of pre-defined, application-specific task, emotion or mental state categories, to a more general dimensional framework of assessing the levels of various types of affect and task load. The discussion includes perspectives from the wider context of affective computing and human-computer interaction, and some key insights from the signal processing domain will be covered, particularly in the areas of feature extraction, modelling, and variability compensation. The tutorial will also discuss system design, engineering applications and the use of other biomedical signals for load measurement. Participants will be exposed to likely future challenges, both during the tutorial presentation and during the ensuing discussion.

 

Speaker Bio:

Julien Epps received the BE and PhD degrees in Electrical Engineering from UNSW Australia, in 1997 and 2001 respectively. After working as a Senior Research Engineer at Motorola Labs and then as a Senior Researcher at National ICT Australia, he was appointed as a Senior Lecturer at UNSW Electrical Engineering and Telecommunications in 2007 and then as an Associate Professor in 2013. A/Prof Epps is also a Contributed Principal Researcher at Data61, CSIRO, and has been a Visiting Scientist at the A*STAR Institute for Infocomm Research (Singapore). He has authored or co-authored around 180 publications and three patents, which have been collectively cited nearly 3000 times. He has delivered invited tutorials at INTERSPEECH 2014 and 2015, and invited keynotes at the 4th Int. Workshop on Audio-Visual Emotion Challenge (part of ACM Multimedia 2014) and the 6th Workshop on Eye Gaze in Intelligent Human Machine Interaction (part of ACM ICMI 2013).

A/Prof Epps is serving as an Associate Editor for IEEE Transactions on Affective Computing and for Frontiers in ICT (Human-Media Interaction section), and has served as a Guest Editor for the EURASIP Journal on Advances in Signal Processing Special Issue on Emotion and Mental State Recognition from Speech. He is currently a member of the Advisory Board of the ACM Int. Conf. on Multimodal Interaction. In 2016 he is an Area Chair for the ACM Multimedia (Emotional and Social Signals in Multimedia), INTERSPEECH (Paralinguistics), ACM ETRA and ACM UMAP (Adaptive, Intelligent, & Multimodal User Interfaces) conferences. He is currently authoring an invited chapter on “Task Load and Stress” in the Wiley Handbook of Human-Computer Interaction 2016, and coordinating an invited chapter on “Multimodal assessment of depression and related disorders based on behavioural signals” in the ACM Handbook of Multimodal Multisensor Interfaces, 2017.

 

Tutorial 2: New Ways to Look at Similarity and Association Measures and the Methods of their Construction

Speaker: Ildar Batyrshin
Affiliation: National Polytechnic Institute, Mexico

Abstract:

Similarity and association measures (SAMs) are used in different areas such as Pattern Recognition, Knowledge Acquisition, Machine Learning, Medical Informatics, Information Systems and Computational Intelligence. Selecting a SAM that is adequate for the type of data analyzed and for the specific data analysis problem is a crucial point of any research based on the analysis of possible relationships between data.

The tutorial discusses new ways to look at similarity and association measures and considers methods for their construction recently developed by the author. The tutorial uses a non-statistical approach to the analysis of SAMs, which are considered as functions satisfying some reasonable properties. General methods for constructing association measures on sets with an involution operation, using similarity measures and pseudo-difference operations associated with t-conorms, are discussed.

A survey of SAMs and of their construction on different domains is given. New methods for the comparative analysis of SAMs for binary variables and 2x2 tables, such as Jaccard & Tanimoto, Sokal & Sneath, Ochiai, Yule, Hamann and Baroni-Urbani, are considered, and new SAMs are proposed. Measures of association on [0,1], on bipolar scales, and on sets of fuzzy sets, time series and n-tuples are also considered.
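
For orientation, the classical binary SAMs named above are defined from the cells of a 2x2 contingency table. A few standard textbook forms (shown here only as background, not the new measures proposed in the tutorial) are:

\[
a = |\{i : x_i = 1,\, y_i = 1\}|,\quad b = |\{i : x_i = 1,\, y_i = 0\}|,\quad c = |\{i : x_i = 0,\, y_i = 1\}|,\quad d = |\{i : x_i = 0,\, y_i = 0\}|,
\]
\[
S_{\mathrm{Jaccard}} = \frac{a}{a + b + c}, \qquad
S_{\mathrm{Ochiai}} = \frac{a}{\sqrt{(a+b)(a+c)}}, \qquad
Q_{\mathrm{Yule}} = \frac{ad - bc}{ad + bc}.
\]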

 

Speaker Bio:

Ildar Batyrshin graduated from the Moscow Physical-Technical Institute in 1975. He received his PhD and Dr. Sci. (habilitation) degrees in 1983 and 1996, respectively. During 1975-2003, he served as a professor and Head of the Department of Informatics and Applied Mathematics of Kazan State Technological University, Russia, and as a Leading Researcher of the Institute of Problems of Informatics of the Academy of Sciences of the Republic of Tatarstan, Russia. From 2003 he was with the Research Program of Applied Mathematics and Computations of the Mexican Petroleum Institute as an Invited Distinguished Researcher, Leading Researcher and Project Head. Currently he is a Titular Professor "C" at the Center for Computing Research of the Mexican National Polytechnic Institute. He is the Past President of the Russian Association for Fuzzy Systems and Soft Computing, a member of the Councils of the International Fuzzy Systems Association (IFSA), the Mexican Society for Artificial Intelligence (SMIA) and the Russian Association for Artificial Intelligence, a Senior Member of IEEE (CI and SMC Societies), and a member of the Board of Directors of NAFIPS. He is a member of the editorial boards of several scientific journals. He has served as a Co-Chair of 10 international conferences on Soft Computing, Artificial Intelligence and Computational Intelligence. He is an author and editor of 20 books and special volumes of journals. He was awarded the State Research Fellowship of the Presidium of the Russian Academy of Sciences for Distinguished Researchers; he is an Honorary Researcher of the Republic of Tatarstan, Russia, an Honorary Professor of Óbuda University, Budapest, Hungary, a Fellow of SMIA, and a member of the National System of Researchers of Mexico. He has presented plenary talks, invited talks and tutorials at several international conferences, including FSSCEF 2004, MICAI 2013, CINTI 2015, FCDM 2015 and WCSC 2016.

 

Tutorial 3: Cognitive Phase Transitions in the Cerebral Cortex: Brain Imaging Experiments, Graph Theory Models, and Engineering Applications

Speaker: Robert Kozma
Affiliation: University of Memphis, USA

Abstract:

This tutorial provides a comprehensive overview of novel brain imaging results for cognitive monitoring. It introduces mathematical and computational models to interpret the experimental results, and describes several engineering applications of the findings, with special focus on brain-computer interfaces. The following main areas will be covered:

 

1.    Overview of new experimental developments in brain imaging, including EEG, ECoG, fMRI, and MEG, which indicate discontinuities in brain dynamics at theta rates (4-8 Hz). The observed neural processes are interpreted as neural correlates of cognition.

2.    Mathematical theories of brain dynamics, in which brains are viewed as open thermodynamic systems converting fluctuating sensory data into meaningful knowledge. Random graphs have unique advantages by characterizing cortical processes as phase transitions and transient percolation processes in probabilistic cellular automata. Criticality and self-organization are key components of the model of cortical phase transitions.

3.    Engineering applications include novel principles of building autonomous, intelligent robotic systems. Of special interest are non-invasive Brain Computer Interface (BCI) techniques to monitor cognitive activity of the user and to support healthy brain operation.

 

The presented material is self-contained and will be accessible to an audience with basic knowledge of signal processing and neural modeling. The tutorial is aimed at scientists interested in the newest developments in brain monitoring, and it supports the conference focus area on BCI.

 

This presentation is dedicated to the memory of Walter Freeman, a pioneer of brain network dynamics, coauthor of the following reference material:

“Cognitive Phase Transitions in the Cerebral Cortex - Enhancing the Neuron Doctrine by Modeling Neural Fields,” R. Kozma and W. J. Freeman, Springer (2016).

 

Speaker Bio:

Robert Kozma (Fellow IEEE, Fellow INNS) is Professor of Mathematics and Director of the Center of Large-Scale Integration and Optimization Networks (CLION) at the University of Memphis, TN, USA. Dr. Kozma holds a Ph.D. in Physics (Delft, The Netherlands, 1992) and two M.Sc. degrees (Mathematics, Budapest, Hungary, 1988; Power Engineering, Moscow, Russia, 1982). He serves on the Board of SMC (2016-18), and is President-Elect of INNS (2016). He conducts research on spatio-temporal brain dynamics and advanced optimization techniques inspired by brains. His main focus is the neuropercolation approach to brain networks, based on random graph theory and percolation processes, to describe brains as non-equilibrium systems at the edge of criticality. He has published 6 books and over 250 research papers.

 

 

Tutorial 4: Human-Machine Interaction

Speaker: Hermann Kaindl
Affiliation: TU Wien, Austria

Abstract:

These days, courses are usually given on human-computer interaction, while in recent years there has been a major shift towards (mobile) devices and machines with new human interfaces. Of course, these include embedded computers and software, but their interaction with users poses many new challenges and offers new solutions. Unfortunately, embedded engineers educated earlier are often unaware of them and focus only on the functionality and other technical properties of devices and machines.

This tutorial presents manifold usability problems as observed by the presenter in daily life, beyond those usually known from graphical user interfaces (GUIs) of traditional PCs (including laptop computers). It explains them through human factors usually unknown to embedded engineers and motivates user experience. User-centered and Usage-centered Design are compared, with the result that they typically overlap but each places a different focus on Interaction Design. Usability Tests and Usability Studies are explained and contrasted as well. In addition, this tutorial explains key properties of Multimodal Interfaces and UIs of Mobile Devices. Finally, it culminates in a sketch of specific challenges of Human-Robot Interaction.

 

Speaker Bio:

Hermann Kaindl is a full professor, the director of the Institute of Computer Technology and a member of the senate at TU Wien. Prior to moving to academia in early 2003, he had gained nearly 25 years of industrial experience at Siemens Austria. Kaindl is a Senior Member of the IEEE and a Distinguished Scientist Member of the ACM.

 

Tutorial 5: Honest Evaluations of Shared Human-Machine Control Systems

Speaker: Tricia Gibo
Affiliation: Delft University of Technology, The Netherlands

Speaker: Erwin Boer
Affiliation: Delft University of Technology, The Netherlands; University of California, San Diego, USA

Speaker: David Abbink
Affiliation: Delft University of Technology, The Netherlands

Speaker: Makoto Itoh
Affiliation: University of Tsukuba, Japan

Speaker: Tom Carlson
Affiliation: University College London, UK

Speaker: René van Paassen
Affiliation: Delft University of Technology, The Netherlands

Abstract:

As designers of support systems, we often evaluate our systems in controlled environments, showing favorable performance under conditions for which they were specifically developed. However, there is little agreement on how to honestly evaluate such a system, and especially on how to compare what appear to be very different types of support systems. The focus of this tutorial will be on how to evaluate and compare different human support systems (shared control systems) in a realistic manner, thereby honestly exposing the limitations of the proposed support system. This approach towards evaluation addresses the fact that people will push the usage/application of support systems beyond their intended boundaries, and that support system functioning is based on a large number of assumptions, not all of which will hold in reality. The goal is to develop a framework/ontology that will give attendees: i) a way to place their type of support system in a broader context of other support systems (focus on types/levels of human-machine interaction), ii) a means to characterize how a support system alters the task structure (focus on the hierarchical decomposition of tasks), and iii) a set of methodologies to evaluate their system honestly by exploring its limitations. This framework facilitates comparison between apparently different systems, such as manual versus autonomous control or manual control versus shared control. We demonstrate the need for and utility of the evaluation taxonomy in the context of driving, and then apply it to the shared control applications most prevalent within the audience (e.g. teleoperation, brain-machine interaction, medical).

 

Speaker Bios:

David A. Abbink, PhD (1977) received his M.Sc. degree (2002) and Ph.D. degree (2006) in Mechanical Engineering from Delft University of Technology. He has been on the faculty of Delft University of Technology since 2009 and is currently an Associate Professor, heading the Delft Haptics Laboratory (www.delfthapticslab.nl). David was awarded the best Ph.D. dissertation in the area of movement sciences in the Netherlands (2006), and two prestigious personal grants: VENI (2010) and VIDI (2015). His research has received continuous funding from industry (Nissan, Boeing). Currently he is a co-PI on the H-Haptics project (www.h-haptics.nl), where 16 PhD students and 3 postdocs collaborate on designing human-centered haptic shared control for a wide variety of applications. David is an IEEE senior member, an associate editor for IEEE Transactions on Human-Machine Systems, and co-founder of the IEEE SMC Technical Committee on Shared Control.

 

Erwin R. Boer received his MSc in electrical engineering from Twente University of Technology in The Netherlands in 1990 and his PhD, also in electrical engineering, from the University of Illinois at Chicago in 1995. In 2000, Dr. Boer founded his own automotive human-machine interaction consulting company, Entropy Control, Inc., in La Jolla, CA. Currently he holds a part-time associate professorship in mechanical engineering at Delft University of Technology, is a Visiting Professor at the Institute for Transport Studies at the University of Leeds in the UK, and is a part-time associate professor of Ophthalmology in the medical school at the University of California, San Diego. His research interests include computational driver modeling, shared control, performance assessment, virtual prototyping in driving simulators, and the employment of virtual reality for medical diagnostics and rehabilitation. He is now serving as a co-chair of the IEEE SMC Technical Committee on Shared Control.

 

Tom Carlson (1984) received his PhD in Intelligent Robotics (2010) and his MEng in Electrical and Electronic Engineering (2006), both from Imperial College London, UK. Tom is currently a Lecturer (assistant prof.) at the ASPIRE Centre for Rehabilitation Engineering and Assistive Technology, University College London. He is also a Visiting Professor at the University of Valenciennes, France and co-directs the INRIA associated team ISI4NAVE. His research focus is on developing assistive robotic technology for people with spinal cord injuries. In addition to his academic partners, he collaborates with Invacare Europe, Dynamic Controls and Rex Bionics. Previously, he spent 3.5 years as a research scientist at the École Polytechnique Fédérale de Lausanne (EPFL), Switzerland, where he worked extensively to combine brain computer interfaces (BCI) with robotic wheelchairs and other assistive technologies. His Ph.D. thesis dealt extensively with both the design and user evaluation of shared control systems and he was subsequently invited to present his research to members of the UK Houses of Parliament at SET for Britain 2009. Tom has been an IEEE Member since 2007 and an IEEE-SMC society member since 2010. He co-founded the IEEE SMC Technical Committee on Shared Control in 2012, with which he co-edited a Special Issue of JHRI in 2015. He has published in the IEEE Transactions on Systems, Man, and Cybernetics: Part B and acted as a reviewer for all parts and their successors. Furthermore, he is on the program committee for the annual IEEE SMC Conference and co-organised special sessions at IEEE-SMC 2012-2015 as well as accompanying workshops/tutorials each year.

 

M.M. (René) van Paassen received the M.Sc. degree (cum laude) from the Delft University of Technology, Delft, The Netherlands, in 1988, and a Ph.D. in 1994, both on studies into the neuromuscular system of the pilot's arm. He thereafter was a Brite/EuRam Research Fellow with the University of Kassel, Germany, where he worked on means-ends visualization of process control dynamics, and a post-doc at the Technical University of Denmark. René van Paassen is an associate professor in Aerospace Engineering at the Delft University of Technology, working on human-machine interaction and aircraft simulation. His work on human-machine interaction ranges from studies of perceptual processes, haptics and haptic interfaces, and human manual control to the design of and interaction with complex cognitive systems. René is a senior member of IEEE and of AIAA, and an associate editor for IEEE Transactions on Human-Machine Systems.

 

Tricia L. Gibo received the BS degree from the University of Southern California (California, USA) in 2007, and the MS and PhD degrees from The Johns Hopkins University (Maryland, USA) in 2009 and 2013, respectively, in mechanical engineering. She is currently a postdoctoral research fellow at Delft University of Technology (Netherlands). Her research interests include human motor control and learning, human-robot interaction, and haptics. She received the Graduate Research Fellowship from the US National Science Foundation in 2007 and the Link Foundation Fellowship in Advanced Simulation and Training in 2011.

 

Makoto Itoh received his B.S., M.S., and Doctor's degrees in engineering from the University of Tsukuba in 1993, 1995, and 1999, respectively. From 1998 to 2002 he was a Research Associate at the University of Electro-Communications, Japan. After returning to the University of Tsukuba in 2002, he became a Professor with the Faculty of Engineering, Information and Systems in 2013. His research interests include shared control, adaptive automation, and the building of appropriate trust as well as the prevention of over-trust and distrust. He is now serving as a co-chair of the IEEE SMC Technical Committee on Shared Control. He is also a member of IFAC TC 9.2 "Social Impact of Automation."

 

 

BMI Tutorial 1: Mobile BCI application: Neuroscience-based design and neurorehabilitation

Speaker: Reinhold Scherer
Affiliation: Graz University of Technology, Austria

Speaker: Olga Sourina
Affiliation: Nanyang Technological University, Singapore

Abstract:

Mobile BCI applications have recently attracted more attention from the research community and industry, as wireless portable EEG devices have become easily available on the market. EEG-based technology is now applied in anesthesiology, psychology, rehabilitation, serious games, design, and even in marketing. This tutorial will provide an overview of mobile BCI applications, with emphasis on neurorehabilitation and neuroscience-based design. State-of-the-art electroencephalogram (EEG) signal features and machine learning tools for BMI will be discussed.

BMI technology has been increasingly studied in recent years with the aim of helping individuals affected by neurological injuries and disorders (e.g. stroke, spinal cord injury, cerebral palsy) to improve their functional outcome. First results of this very experimental approach suggest that the combined use of BMIs that detect imagined or attempted movement, robotized rehabilitation devices and virtual reality positively impacts functional outcome. Another approach is to use BMI technology to study and model neuroplasticity, with the aim of characterizing mechanisms of motor learning and motor control. This second approach requires recording behavioral and neural data from people while they are engaged in motor (rehabilitation) tasks, which allows for brain and body imaging. With the rising interest in neurorehabilitation and real-world BMI applications, obtaining high-quality data during ambulatory/mobile use is crucial. Recording clean brain signals, however, is very challenging: movement artifacts as well as interference from dynamic environments can make the analysis difficult. We will discuss current noninvasive EEG-based BMI neurorehabilitation protocols, provide insight into methods and technology, and address current open issues.

Neuroscience-based or neuroscience-informed design is a new area of mobile BCI application. It takes its roots in the study of human well-being in architecture and in human factors studies in engineering and manufacturing. We will share our research and development of an EEG-based system to monitor and analyse human factors measurements of newly designed systems, hardware and/or working places. The EEG is used as a tool to monitor and record the brain states of subjects during human factors experiments. In traditional human factors studies, data on mental workload, stress, and emotion are obtained through questionnaires administered upon completion of a task or of the whole experiment. However, this method only offers an evaluation of the overall feelings of subjects during the task performance and/or after the experiment. Real-time EEG-based human factors evaluation of designed systems allows researchers to analyse the changes in subjects' brain states during the performance of various tasks. We discuss real-time algorithms for emotion recognition and for mental workload and stress recognition from EEG, and their integration in human-machine interfaces including car driving assistant systems, air-traffic controller stress assessment and cadet/captain stress assessment systems. The tutorial includes demos of algorithms integrated in the CogniMeter system, serious games demos, etc., using the Emotiv EPOC device.

 

Speaker Bios:

Reinhold Scherer received his M.Sc. and Ph.D. degrees in Computer Science from Graz University of Technology, Graz, Austria in 2001 and 2008, respectively. From 2008 to 2010 he was a postdoctoral researcher and member of the Neural Systems and Neurobotics Laboratories at the University of Washington, Seattle, USA. Since 2011, he has been an Assistant Professor and Deputy Head of the Institute of Neural Engineering, Laboratory for Brain-Computer Interfaces (BCI-Lab), at Graz University of Technology, Graz, Austria, and a member of the Institute for Neurological Rehabilitation and Research at the clinic Judendorf-Strassengel, Austria. His research interests include BCIs based on EEG and ECoG signals, statistical and adaptive signal processing, mobile brain and body imaging, and robotics-mediated rehabilitation.

 

Olga Sourina received her MSc in Computer Engineering from the Moscow Engineering Physics Institute (MEPhI) in 1983, and her PhD in Computer Science from NTU in 1998. Dr Sourina worked as a software engineer and then as a Research Scientist at MEPhI. For her scientific achievements Dr Sourina was awarded the honorary diploma of the Academy of Sciences of the USSR, the Silver Medal of the National Exhibition Centre of the USSR, and the Medal of the Ministry of Education of the USSR. After receiving her PhD from NTU she worked as a Research Fellow in the Centre for Graphics and Imaging Technology (CGIT), NTU. She then worked as a Senior Scientist at the Institute of Computing for Physics and Technology in Russia. From 2001 Dr Sourina worked as an Assistant Professor at NTU. In 2013, she created the Cognitive Human Computer Interaction research lab in the FraunhoferIDM@NTU Center, and currently she is a Principal Research Scientist at NTU, leading research and industrial projects in Interactive Digital Media (IDM) and Biomedical Engineering in the FraunhoferIDM@NTU Center. Her research interests are in brain-computer interfaces, including real-time emotion, stress, vigilance and mental workload recognition, neuroscience-based design, visual and haptic interfaces, serious games, visual data mining and virtual reality. Dr Sourina has more than 150 publications, including more than 40 research papers in international refereed journals and 3 books. She has given 15 invited and keynote talks at international conferences. She is a member of the program committees of international conferences, a senior member of IEEE, a member of the Biomedical Engineering Society and a member of the International Organization of Psychology.

 

BMI Tutorial 2: Why bother with advanced modeling in BCI? Lessons from neuroimaging.

Speaker: Maureen Clerc
Affiliation: Inria Sophia Antipolis Mediterranean, France

Speaker: Jérémie Mattout
Affiliation: Lyon Neuroscience Research Center, France

Abstract:

The objective of this tutorial is to describe the benefits which neuroimaging models can bring to BCI. Computational and neurophysiological models as developed in neuroimaging provide features that are closer to the neural activity and the mental processes of interest than features directly observable at the sensor level. Priors can then be incorporated (e.g. as to the location or polarity of a neural activity of interest), which may yield more relevant and more robust features.

In the first part of this tutorial session, Maureen Clerc will present the process of estimating brain activity from EEG or MEG data. For this purpose, the relationship between sensor measurements and source activity will be presented. The discussion of source reconstruction methods will focus on those that are applicable in real time for BCI applications. Guidelines will also be provided on when it may be wise to use neuroimaging in BCI.
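
As background for this part, the sensor-source relationship is commonly written as a linear forward model (a standard formulation given here for orientation, not necessarily the exact notation used in the tutorial):

\[
m(t) = G\, s(t) + n(t),
\]

where \(m(t)\) collects the EEG/MEG sensor measurements, \(G\) is the lead-field (gain) matrix obtained from a head model, \(s(t)\) holds the source amplitudes, and \(n(t)\) is noise. Source reconstruction then estimates \(s(t)\) from \(m(t)\), for instance with a regularized minimum-norm type inverse \(\hat{s}(t) = G^{\top}\,(G G^{\top} + \lambda C)^{-1}\, m(t)\), where \(\lambda\) is a regularization parameter and \(C\) a noise covariance estimate.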

In the second part of this tutorial session, Jérémie Mattout will illustrate the usefulness of generative models in human electrophysiology. Bayesian inference as a powerful and generic framework for model selection and model fitting will be highlighted. It will be shown how source reconstruction models, as introduced in Part 1, can be extended to explain electrophysiological activities in terms of modulations of effective connectivity in a cortical network. As an example often encountered in BCI, auditory oddball paradigms will be used to illustrate concrete applications of advanced computational and neurophysiological models. In particular, it will be shown how such models can address questions pertaining to mental processes such as attention or learning. Finally, the usefulness of this general framework for online use, in BCI, will be advocated through concrete examples.

 

Speaker Bios:

Maureen Clerc is a research director at Inria Sophia Antipolis, where she develops new methods for extracting dynamic information from the living human brain, and in particular Brain-Computer Interfaces. In 2014 she was awarded the Pierre Faurre prize from the French Academy of Sciences, recognizing her work in the application of mathematics and computer science to the life sciences. She has recently co-edited a two-volume reference book on Brain-Computer Interfaces.

 

Jérémie Mattout is an INSERM researcher at the Lyon Neuroscience Research Center in France. He is working on methods in human electrophysiology and computational neuroscience applied to cognition and brain-computer interfaces for basic research and clinical applications. He developed advanced modelling and inference approaches for EEG and MEG data analysis that have been implemented in the SPM (Statistical Parametric Mapping) software.