LIDIAP and Idiap – in detail with Prof. Bourlard

The LIDIAP laboratory of the School of Engineering is EPFL's satellite of the Idiap Research Institute in Martigny, in Valais. In fact, the LIDIAP laboratory and Idiap are one and the same. Created in 1991 by the town of Martigny, the Canton of Valais, EPFL, the University of Geneva and Swisscom, Idiap has been affiliated with EPFL through a joint development plan since 2008. Several collaboration agreements already linked the two institutions before that; for example, the academic titles of Idiap's researchers (about 90, of whom 36 are EPFL doctoral students) follow EPFL's rules. This alliance has increased Idiap's visibility, helping it attract high-level researchers to meet its objectives. In all, Idiap runs around forty active projects per year, in Switzerland and throughout the world.
I spoke to its director, Prof. Hervé Bourlard, who told me about the activities of this Valaisan institution and about its recent developments, activities which bring renown not only to Idiap but also to EPFL.

Professor, what is the mission of the Idiap Research Institute?

Our objectives are research, teaching and technology transfer in the domains of perceptual and cognitive systems, social and human behaviour, information and presentation interfaces, biometric authentication and machine learning. We try to turn our research into concrete products. To this end, our first spin-off, IdeArk SA (similar to the PSE science park at EPFL), plays the role of incubator for the companies emerging from our institute. Our financing comes from public funds and industry, from research sponsors (the European Union, the Swiss National Science Foundation and the CTI, for example), and from our international visitors programme.

What are the principal activities of the Idiap Research Institute?

Our institute works in the domain of human and media computing. We also run an NCCR on Interactive Multimodal Information Management. By multimodal I mean audio, video and text.

In the beginning we were interested in the processing of meetings (face-to-face and videoconferencing) and in the automatic extraction of multimodal information with the help of multiple sensors. Little by little we realised that to understand meetings and the interactions between people, sensors alone were insufficient. We had reached the limits of engineering! We wanted to go further, and in particular to understand non-verbal language (in political debates, for example), the dynamics of meetings, and the psychology of people in various social situations; basically, everything that cannot be dealt with by multimedia alone. So we started to draw on the social sciences, and in particular psychology, for the extraction of information.

At the moment we are working on several mandates based on this research with various industrial and academic partners. These contracts, and the funds they bring, encourage us to continue our research in this domain.

Tell me about some current developments among the many projects you are leading.

In 2008 we initiated the ACLD project (Automatic Content Linking Device), or virtual assistant, which is still running today. The ACLD allows one to hold ‘intelligent meetings’. During such meetings the virtual secretary can, thanks to multiple sensors distributed around the room, search a project’s document libraries in real time: multimedia documents stored in databases or available over the web. This information appears on a shared screen or on the participants’ laptops; it can help with decision-making and yields a clear saving in administration time, for example.

Two start-ups coming out of Idiap (Klewel and Koemei) are marketing derivatives of this sound-based technology: the first for ‘intelligent lectures’ (a prototype has been ordered by EPFL), the second for meetings.

A more recent example is a project we have been working on, which has just been renewed with an industrial partner. The aim is a better understanding of the general, social, even dynamic context in which people find themselves. We are trying to understand and define this context in order to model it, with a view to new applications for mobile phones. Picture this: your phone could ‘hear’ (or detect) the general context you are in and adapt itself accordingly, without you having to do anything. There are many possible applications; imagine, for example, that you are in a meeting and receive a call on your ‘intelligent’ phone: it could automatically switch to silent mode, divert the call to a messaging service, and so on. As far as we can see, nobody has yet managed a complete understanding of the notion of context in information technology. This is what we intend to do at the Idiap Research Institute.
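The phone behaviour described here comes down to a simple rule once the context is known: map the inferred context to a call-handling action. The following minimal Python sketch is purely illustrative; the names (Context, decide_call_action) are hypothetical and not part of Idiap's actual system, and the hard research problem is inferring the context in the first place, not this final mapping.

```python
from enum import Enum

class Context(Enum):
    """Hypothetical social contexts a phone might infer from its sensors."""
    MEETING = "meeting"
    DRIVING = "driving"
    FREE = "free"

def decide_call_action(context: Context) -> str:
    """Map an inferred context to how an incoming call is handled."""
    if context == Context.MEETING:
        return "silent"     # suppress the ringer during a meeting
    if context == Context.DRIVING:
        return "voicemail"  # divert the call to a messaging service
    return "ring"           # default: ring normally

print(decide_call_action(Context.MEETING))  # silent
```

The point of the sketch is the separation of concerns: the final adaptation step is trivial, whereas everything interesting lives in the (omitted) inference that produces the Context value.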

What makes Idiap different from other similar research institutes around the world?

I would say that we have a very good international reputation and that we have been able to develop and maintain a vast network over the last twenty years. What is more, our internal way of working is coherent and dynamic, with an emphasis on multidisciplinarity. This allows us to carry out all our projects as efficiently as possible.

For more information on Idiap projects you can see detailed descriptions on the website, particularly in the annual reports available online. 


This interview was conducted by Lara Rossi of STATION-SUD