Keynotes


Bin Hu
Professor, Lanzhou University, China

Biography:
Dr. Bin Hu, Professor, Dean, School of Information Science and Engineering, Lanzhou University, China, bh@lzu.edu.cn; IET Fellow; Member at Large of ACM China; Chair of ACM SIGBio China; Vice Chair of the International Society for Social Neuroscience (China Committee) and Member of the IET Healthcare Technology Network; Chair of the IEEE SMC TC on Computational Psychophysiology; Chair Professor of the National Recruitment Programme of Global Experts; Chief Scientist of the National Fundamental Research Program of China (973 Program); Board Member of the Computer Science Committee, Ministry of Education, China; Member of the Division of Computer Science Review Panel, Natural Science Foundation China.
His research interests include computational psychophysiology, pervasive computing, and mental health care. His work has been funded by the National Fundamental Research Program of China (973 Program), the Natural Science Foundation of China (NSFC), the European Framework Programme 7, and HEFCE UK. He has published more than 200 papers in peer-reviewed journals, conferences, and book chapters. He has served as an associate editor for peer-reviewed journals such as IEEE Transactions on Affective Computing, IET Communications, Cluster Computing, Wireless Communications and Mobile Computing, the Journal of Internet Technology, Wiley's Security and Communication Networks, and Brain Informatics.

Title: Computational Psychophysiology Based Emotion Analysis for Mental Health

Abstract:
Computational psychophysiology is a new direction that broadens the field of psychophysiology by allowing for the identification and integration of multimodal signals to test specific models of mental states and psychological processes. Additionally, such approaches allow for the extraction of multiple signals from large-scale multidimensional data, with a greater ability to differentiate signals embedded in background noise. Further, these approaches allow for a better understanding of the complex psychophysiological processes underlying brain disorders such as autism spectrum disorder, depression, and anxiety. Given the widely acknowledged limitations of psychiatric nosology and the limited treatment options available, new computational models may provide the basis for a multidimensional diagnostic system and potentially new treatment approaches.
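
To make the idea concrete, here is a minimal sketch (not from the talk; all data are synthetic and the modality and feature names are assumptions) of the multimodal integration the abstract describes: features from two physiological modalities are fused and used to classify a binary mental-state label.

    # Minimal sketch of early multimodal fusion for mental-state classification.
    # Synthetic stand-ins for EEG and heart-rate features; labels are hypothetical.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n = 200                                     # synthetic recording sessions
    eeg_band_power = rng.normal(size=(n, 8))    # e.g., band power per channel
    hrv_features = rng.normal(size=(n, 4))      # e.g., heart-rate variability stats
    labels = rng.integers(0, 2, size=n)         # e.g., case vs. control

    # Early fusion: concatenate per-modality feature vectors into one.
    fused = np.hstack([eeg_band_power, hrv_features])

    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    print("cross-validated accuracy:", cross_val_score(clf, fused, labels, cv=5).mean())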

Jeffrey Cohn
Professor, University of Pittsburgh

Biography:
Jeffrey Cohn is Professor of Psychology and Psychiatry at the University of Pittsburgh and Adjunct Professor of Computer Science at the Robotics Institute, Carnegie Mellon University. He leads interdisciplinary and inter-institutional efforts to develop advanced methods of automatic analysis and synthesis of facial expression and prosody and applies those tools to research in human emotion, nonverbal communication, psychopathology, and biomedicine. He has served as General Chair of FG 2017, FG 2015, FG 2008, ACII 2009, and ICMI 2014. He is a past co-editor of IEEE Transactions on Affective Computing (TAC) and has co-edited special issues on affective computing for the Journal of Image and Vision Computing, Pattern Recognition Letters, Computer Vision and Image Understanding, and ACM Transactions on Interactive Intelligent Systems.
For further background, please see http://www.jeffcohn.net.

Title: Automatic Multimodal Analysis and Synthesis of Behavior for Research and Clinical Use

Abstract:
Multimodal behavior communicates emotion, intention, and physical state, and regulates social interaction. Manual measurement is labor intensive and difficult to scale. Convergence of computer vision, machine learning, and behavioral science has made possible automatic multimodal analysis and synthesis. I will present 1) human-observer based approaches to measurement that inform this advance; 2) current state of the art in detecting occurrence, intensity, and dynamics of multimodal behavior with applications in depression, autism spectrum disorder, and interpersonal influence; 3) expression transfer and de-identification for behavioral and clinical science and practice; and 4) current challenges.
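
As a concrete illustration of point 2, the toy sketch below (all data are synthetic, and it assumes facial-landmark features have already been extracted) frames the two core measurement tasks: detecting whether a facial action unit (AU) occurs in a frame, and estimating its intensity.

    # Toy sketch: AU occurrence as classification, AU intensity as regression.
    # Inputs are hypothetical precomputed landmark features; data are synthetic.
    import numpy as np
    from sklearn.svm import SVC, SVR

    rng = np.random.default_rng(1)
    frames = rng.normal(size=(500, 136))        # e.g., 68 (x, y) landmarks per frame
    au_present = rng.integers(0, 2, size=500)   # binary occurrence labels
    au_intensity = rng.uniform(0, 5, size=500)  # FACS-style 0-5 intensity

    occurrence_model = SVC().fit(frames, au_present)   # detection
    intensity_model = SVR().fit(frames, au_intensity)  # intensity estimation

    new_frame = rng.normal(size=(1, 136))
    print(occurrence_model.predict(new_frame), intensity_model.predict(new_frame))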


Stephen Brewster
Professor, University of Glasgow

Biography:
Stephen Brewster is a Professor of Human-Computer Interaction in the School of Computing Science at the University of Glasgow, where he leads the Multimodal Interaction Group. His research focuses on multimodal HCI: using multiple sensory modalities and control mechanisms (particularly audio, haptics and gesture) to create a rich, natural interaction between human and computer. His work has a strong experimental focus, applying perceptual research to practical situations. A long-term focus has been on mobile interaction and how we can design better user interfaces for users who are on the move. Other areas of interest include accessibility, health, wearable devices and in-car interaction. He pioneered the study of non-speech audio and haptic interaction for mobile devices with work starting in the 1990s. He is a Fellow of the Royal Society of Edinburgh, a Member of the ACM SIGCHI Academy and an ACM Distinguished Speaker. He has published over 300 papers with more than 20 best paper awards and nominations. His work has over 14,000 citations.

Title: Multimodal Human-Computer Interfaces

Abstract:
The use of multiple human senses and control capabilities allows rich interaction between people and technology. We are no longer limited to just screens and keyboards; we can take advantage of all the capabilities of our users. However, designing user interfaces with multiple modalities can be difficult. In this presentation, I will discuss how to design successful haptic interfaces using, for example, pressure for input, and vibrotactile and thermal feedback for output. I will present work on audio and 3D sound and show how they can be used to create new interactions that allow people to focus on important tasks while on the move.
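
As one hypothetical example of such a design decision, the sketch below maps a continuous pressure reading to a vibrotactile output amplitude; the sensor range and the logarithmic mapping are assumptions for illustration, not values from the talk.

    # Hypothetical mapping from pressure input to vibrotactile amplitude.
    import math

    PRESSURE_MAX = 4.0  # assumed full-scale sensor reading, in newtons

    def pressure_to_vibration(pressure_n: float) -> float:
        """Map a pressure reading (N) to a vibration amplitude in [0, 1].

        A logarithmic mapping is used because perceived stimulus intensity
        often grows roughly with the log of physical magnitude
        (Weber-Fechner); the exact curve here is purely illustrative.
        """
        clamped = max(0.0, min(pressure_n, PRESSURE_MAX))
        return math.log1p(clamped) / math.log1p(PRESSURE_MAX)

    for p in (0.0, 0.5, 2.0, 4.0):
        print(f"{p:.1f} N -> amplitude {pressure_to_vibration(p):.2f}")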


Shrikanth (Shri) Narayanan
University of Southern California, Los Angeles, CA
Signal Analysis and Interpretation Laboratory
http://sail.usc.edu

Biography:
Shrikanth (Shri) Narayanan is the Niki & C. L. Max Nikias Chair in Engineering at the University of Southern California, where he is Professor of Electrical Engineering, with joint appointments in Computer Science, Linguistics, Psychology, Neuroscience and Pediatrics, Director of the Ming Hsieh Institute and Research Director of the Information Sciences Institute. Prior to USC he was with AT&T Bell Labs and AT&T Research. His research focuses on human-centered information processing and communication technologies. He is a Fellow of the Acoustical Society of America, IEEE, ISCA, the American Association for the Advancement of Science (AAAS) and the National Academy of Inventors. He is Editor-in-Chief of the IEEE Journal of Selected Topics in Signal Processing, an Editor for Computer Speech and Language, and an Associate Editor for the APSIPA Transactions on Signal and Information Processing, having previously served as an Associate Editor for the IEEE Transactions on Speech and Audio Processing (2000-2004), the IEEE Signal Processing Magazine (2005-2008), the IEEE Transactions on Signal and Information Processing over Networks (2014-2015), the IEEE Transactions on Multimedia (2008-2012), the IEEE Transactions on Affective Computing, and the Journal of the Acoustical Society of America. His honors include the 2015 Engineers Council Distinguished Educator Award, a Mellon award for mentoring excellence, the 2005 and 2009 Best Journal Paper awards from the IEEE Signal Processing Society, service as that society's Distinguished Lecturer for 2010-11 and as an ISCA Distinguished Lecturer for 2015-16, and the 2017 Willard R. Zemlin Memorial Lecture for ASHA. With his students he has received a number of best paper awards, including a 2014 Ten-Year Technical Impact Award from ACM ICMI, and is a six-time winner of the Interspeech Challenges. He has published over 750 papers and has been granted 17 U.S. patents.

Title: Behavioral Informatics for Health Applications

Abstract:
The convergence of sensing, communication and computing technologies is allowing capture of, and access to, data in diverse forms and modalities, in ways that were unimaginable even a few years ago. These include data that afford the analysis and interpretation of multimodal cues of verbal and non-verbal human behavior to facilitate human behavioral research and its translational applications in healthcare. These data carry crucial information not only about a person's intent, identity and traits but also about underlying attitudes, emotions and other mental-state constructs. Automatically capturing these cues, although highly challenging, promises not just efficient data processing but also tools for discovery that enable hitherto unimagined scientific insights, as well as means for supporting diagnostics and interventions.
Recent computational approaches that have leveraged judicious use of both data and knowledge have yielded significant advances in this regard, for example in deriving rich, context-aware information from multimodal signal sources including human speech, language, and videos of behavior. These can be complemented and integrated with data about human brain and body physiology. This talk will focus on some of the advances and challenges in gathering such data and creating algorithms for machine processing of such cues. It will highlight some of our ongoing efforts in Behavioral Signal Processing (BSP)—technology and algorithms for quantitatively and objectively understanding typical, atypical and distressed human behavior—with a specific focus on communicative, affective and social behavior. The talk will illustrate Behavioral Informatics applications of these techniques that contribute to quantifying higher-level, often subjectively described, human behavior in a domain-sensitive fashion. Examples will be drawn from mental health and well-being domains such as autism spectrum disorder, couple therapy, depression, and addiction counseling.
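
As a low-level illustration of the signal side of such pipelines, the sketch below uses the open-source librosa library to extract two common prosodic cues, pitch and short-time energy, from a speech recording; the file name and frequency range are assumptions, and this is not presented as the speaker's own toolchain.

    # Sketch: prosodic feature extraction with librosa (hypothetical input file).
    import numpy as np
    import librosa

    y, sr = librosa.load("session_audio.wav", sr=None)   # hypothetical recording

    # Fundamental-frequency (pitch) track via the YIN estimator.
    f0 = librosa.yin(y, fmin=65.0, fmax=400.0, sr=sr)

    # Short-time energy, a rough proxy for vocal effort/arousal.
    energy = librosa.feature.rms(y=y)[0]

    # Session-level descriptors of the kind fed to behavioral models.
    print("median pitch (Hz):", np.median(f0))
    print("pitch IQR (Hz):", np.percentile(f0, 75) - np.percentile(f0, 25))
    print("mean energy:", energy.mean())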


Support: acii_asia_2018@163.com | Last Updated Wednesday, May 16, 2018