KEYNOTE SPEAKERS

It is our pleasure to announce the Keynote Speakers for IberSpeech 2014. Professors Pedro Gómez Vilda and Roger Moore are outstanding researchers in the field of speech and language technology.

Professor Roger K. Moore

Department of Computer Science

University of Sheffield

United Kingdom

 

Wednesday 19 November

11:00 to 12:00

Building A, Salón de Actos

 

Towards Spoken Language Interaction with 'Intelligent' Systems: where are we, and what should we do next?

Abstract:

Over the past thirty or so years, the field of spoken language processing has made impressive progress from simple laboratory demonstrations to mainstream consumer products. However, the limited capabilities of commercial applications such as Siri highlight the fact that there is still some way to go before we can create Autonomous Social Agents that are truly capable of conversing effectively with their human counterparts in real-world situations. This talk will address the fundamental issues facing spoken language processing, and will highlight the need to go beyond the current fashion for using machine learning in a more-or-less blind attempt to train static models on ecologically unrealistic amounts of unrepresentative training data. Rather, the talk will focus on critical developments outside the field of speech and language - particularly in the neurosciences and in cognitive robotics - and will show how insights into the behaviour of living systems in general, and human beings in particular, could have a direct impact on the next generation of spoken language systems.

Speaker Bio:

Prof. Roger K. Moore studied Computer and Communications Engineering at the University of Essex and was awarded the B.A. (Hons.) degree in 1973.  He subsequently received the M.Sc. and Ph.D. degrees from the same university in 1975 and 1977 respectively, both theses being on the topic of automatic speech recognition.  After a period of post-doctoral research in the Phonetics Department at University College London, Prof. Moore was head-hunted in 1980 to establish a speech recognition research team at the Royal Signals and Radar Establishment (RSRE) in Malvern.

In 1985 Prof. Moore became head of the newly created 'Speech Research Unit' (SRU) and subsequently rose to the position of Senior Fellow (Deputy Chief Scientific Officer - Individual Merit) in the 'Defence and Evaluation Research Agency' (DERA).  Following the privatisation of the SRU in 1999, Prof. Moore continued to provide the technical lead as Chief Scientific Officer at 20/20 Speech Ltd. (now Aurix Ltd.) - a joint venture company between DERA (now QinetiQ) and NXT plc.  In 2004 Prof. Moore was appointed Professor of Spoken Language Processing in the 'Speech and Hearing' Research Group (SPandH) at Sheffield University, where he is pioneering research that is aimed at developing computational models of spoken language processing by both mind and machine.

Prof. Moore is currently working on a unified theory of spoken language processing in the general area of 'Cognitive Informatics' called 'PRESENCE' (PREdictive SENsorimotor Control and Emulation).  PRESENCE weaves together accounts from a wide variety of different disciplines concerned with the behaviour of living systems - many of them outside the normal realms of spoken language - and compiles them into a new framework that is intended to breathe life into a new generation of research into spoken language processing.

Prof. Moore has authored and co-authored over 150 scientific publications in the general area of speech technology applications, algorithms and assessment.  He is a Fellow of the UK Institute of Acoustics, a Visiting Professor in the Department of Phonetics and Linguistics at University College London and a Visiting Professor at the Bristol Robotics Laboratory.  He is Editor-in-Chief of 'Computer Speech and Language' and a member of the Editorial/Advisory boards for 'Speech Communication', 'Languages' and the 'International Journal of Cognitive Informatics and Natural Intelligence' (IJCiNi). He is past Chairman of the 'European Expert Advisory Group on Language Engineering Standards' (EAGLES) working party on spoken language resources, and Editor of the 'Handbook of Standards and Resources for Spoken Language Systems'.

Prof. Moore served as President of the 'European Speech Communication Association' (ESCA) and the 'International Speech Communication Association' (ISCA) from 1997 to 2001, and as President of the Permanent Council of the 'International Conferences on Spoken Language Processing' (PC-ICSLP) from 1996 to 2001.  During this period he pioneered the internationalisation of ESCA, the integration of the EUROSPEECH and ICSLP conferences into an annual INTERSPEECH conference, and chaired the joint ISCA/PC-ICSLP working party which drew up the detailed recommendations for the merger.

In 1994 Prof. Moore was awarded the prestigious UK Institute of Acoustics Tyndall medal for “distinguished work in the field of speech research and technology” and in 1999 he was presented with the NATO RTO Scientific Achievement Award for “repeated contribution in scientific and technological cooperation”.  In 2008 he was elected as one of the first ISCA Fellows “in recognition of his applications of human speech perception and production models to speech technologies and his service to ISCA as President”.

Prof. Moore was General Chair for INTERSPEECH 2009.

 

Professor Pedro Gómez Vilda

Facultad de Informática

Universidad Politécnica de Madrid

Spain

 

Friday 21 November

11:00 to 12:00

Building A, Salón de Actos

 

Speech as a Vehicular Tool for Neurological Disease Monitoring

Abstract:

Modern Statistical Signal Processing and Machine Learning Techniques are opening up a research area of great relevance to Speech Technologies: the field of medical applications. Organic Larynx Pathology Detection and Grading has been successfully accomplished thanks to the advances of the last decade and is becoming a reality today. Neurological Disease Monitoring and Assessment is one of the emerging fields of greatest interest for the years to come, especially in relation to Neurodegenerative Diseases such as Parkinson's, Alzheimer's, Amyotrophic Lateral Sclerosis, and other non-AD Aging Dementias. The neuromotor and/or cognitive degeneration behind these diseases needs a systemic neuromechanical description in terms of the different physiological organs involved in speech production, mainly the laryngeal, naso-pharyngeal and oral subsystems. Possible strategies for collecting observable acoustic correlates from the speech signal, with reference to specific biomechanical systems in the larynx (vocal folds), pharynx (velopharyngeal switch) and mouth (lingual complex, jaw, lips), are described. Methodologies for associating these acoustic correlates with neuromotor and neurocognitive activity by means of different Statistical Pattern Recognition Techniques are also discussed. Results from several ongoing studies will be presented and discussed.

Speaker Bio:

Dr. Pedro Gómez Vilda was born in Burgo de Osma (Soria), Spain. He received the degree of Communications Engineer (M.Sc. level) from the Universidad Politécnica de Madrid in 1978 and the Ph.D. in Computer Science from the same university in 1983. His professional and academic activities can be summarized as follows: 1976-77: scholarship holder at the Nuclear Studies Center, Nuclear Energy Board, Spain; 1977-78: R&D Engineer, NORTRON Electronics; 1978-82: Assistant Teacher; 1982-88 and from 1988 to the present: Full Professor, Facultad de Informática, Universidad Politécnica de Madrid. His research lines are in Signal Processing, Speech Recognition, Biomechanical System Modeling, Bioengineering, Bioinformatics, Pattern Recognition, Neural Networks, Speech Perception and Production, Neuromorphic Brain Modeling, Forensic Sciences, and Neurological Disease Monitoring.

Prof. Gómez Vilda is author or co-author of 290 publications, including book chapters and journal articles with international referencing (ISBN and/or ISSN), and has delivered 105 conference presentations and lectures at different institutions and venues. He is currently Head of the Research Group on Informatics Applied to Signal and Image Processing, and Director of the Neuromorphic Speech Processing Lab, Center for Biomedical Technology, Universidad Politécnica de Madrid. He is a member of the IEEE Life Sciences Systems and Applications Technical Committee, the International Speech Communication Association (ISCA), and the European Association for Signal Processing (EURASIP). He is also a scientific reviewer for the IEEE Transactions on Circuits and Systems, on Neural Networks, on Speech and Audio Processing, and on Signal Processing, as well as for Speech Communication, the Journal of the Acoustical Society of America, Neurocomputing, Cognitive Computation, Computers in Biology and Medicine, Biomedical Signal Processing and Control, Electronics Letters, and Signal Processing Letters.
He has also been an Invited Professor (Professeur Invité) of the Division de l'Enseignement Supérieur et la Recherche, France; an invited lecturer-researcher (Enseignant-Chercheur invité) at the Université de Cergy-Pontoise, France; a visiting scientist (Gastwissenschaftler) at the Universität Regensburg, Free State of Bavaria, Germany; Honorary Professor of the Technical University of Cluj-Napoca, Romania; and Doctor Honoris Causa of the Technical University of Cluj-Napoca, Romania. He is co-author of three patents in Spain and the USA, and a founding partner and scientific director of the start-up BioMetroSoft SL (www.biometrosoft.com), created in 2011 out of a contest of ideas to promote technology-based companies.

 
