Interview with Elena Montiel-Ponsoda

Elena Montiel-Ponsoda is Associate Professor at the Universidad Politécnica de Madrid (UPM), where she has belonged to the Applied Linguistics Department since 2012, and a member of the “Ontology Engineering Group” at the same university since 2006. She holds an M.A. in Conference Interpreting and Translation (2000) from the Universidad de Alicante, a B.A. in Technical Interpreting (2003) from Hochschule Magdeburg-Stendal, Germany, and a PhD in Applied Linguistics (2011) from UPM. Her research interests lie at the intersection of translation, terminology, and knowledge representation (ontologies, linked data), including, among others, ontology localization and lexicalization, and the automatic conversion of linguistic resources (especially terminologies) into data resources. Currently, she is coordinating an H2020 Innovation Action called Lynx (780602) on the creation of a Legal Knowledge Graph for Smart Compliance Services in Multilingual Europe.

You have studied in Germany. How important is it for a linguist (in the broad sense) not only to study in a foreign language but in a foreign country?

It is absolutely crucial. Language and culture are so intertwined that one cannot really understand one without the other. Learning a language is learning to see the world through the eyes of its speakers. When I started learning German, I fell in love with the language, but when I moved to Germany to study there, I fell in love with the people and their culture, and I started to make sense of the words and expressions I had learnt.

How did you first get in touch with terminology and how did it become part of your daily working life?

It was precisely during my studies in Germany, at the Hochschule Magdeburg-Stendal, that I decided to work on legal terminology for my final degree project. I built a bilingual (German-Spanish) glossary of legal terms related to Tötung (killing, murder). This helped me understand how important and challenging it is to define terms precisely and to build terminological resources, and it made me think a lot about the type of information a terminology entry should contain to meet translators’ and interpreters’ needs.

Do you have any background in computational linguistics? And do you encourage the integration of such classes into linguistics degrees of any kind (translation, interpreting, terminology…), and for what reasons?

Not really, I do not have any “official” background in Computational Linguistics, but when I started my PhD work at the Ontology Engineering Group, a research group that belongs to the Artificial Intelligence Department of the UPM’s Computer Engineering School, my PhD supervisors advised me to take some courses on Logic, Ontology Engineering, the Semantic Web, and other “technological” subjects. Nowadays, this is even a formal requirement for students of the Artificial Intelligence Master who do not have a Computer Engineering background. Such a specialization would definitely be highly recommendable for Linguistics students, in light of the profiles demanded by the data industry. In fact, up to now, linguistics and engineering have been kept quite far apart. I would encourage the creation of such a degree, Computational Linguistics, in Spain, which, to the best of my knowledge, does not yet exist there, as it does in other European countries.

What is linked data and how could it enhance IATE?

Linked data is based on a standard way of representing any piece of information so that it can be precisely identified and defined in the Web of Data, in a way that can be easily “consumed” by machines (precisely because it follows such formatting standards). Moreover, it also allows us to define explicit relations or links between those pieces of information, and to create services or applications that can make use of this information to help humans solve tasks.

For humans, what linked data does may seem pretty obvious, because it imitates how we organise and structure knowledge in our brains. Linked data simply allows us to represent, in a computerised manner, that the term “maternity leave” is a type of “leave” that affects “women” only, that entitles them to “take some time off work”, and that the “time period” varies from “country” to country and is regulated by the corresponding “regulations”. Now imagine all these pieces of information connected as in a mind map that can be “understood” by a system that decides whether or not a person is entitled to it.
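
To make that mind map concrete, here is a minimal sketch of how those statements could be written down as linked data triples, using Python and the rdflib library. Everything under example.org, all the property names (appliesTo, durationWeeks, regulatedBy), and the 16-week figure are hypothetical, invented purely for illustration; they do not come from any real vocabulary or legal source.

```python
# A sketch of the "maternity leave" mind map as linked data triples.
# All example.org URIs and property names are hypothetical, invented
# for illustration only.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/terms/")

g = Graph()
g.bind("ex", EX)

# "maternity leave" is a type of "leave" ...
g.add((EX.MaternityLeave, RDFS.subClassOf, EX.Leave))
g.add((EX.MaternityLeave, RDFS.label, Literal("maternity leave", lang="en")))

# ... that affects "women" only and entitles them to take time off work ...
g.add((EX.MaternityLeave, EX.appliesTo, EX.Women))
g.add((EX.MaternityLeave, EX.entitlesTo, EX.TimeOffWork))

# ... and whose duration varies by country and is set by a regulation.
g.add((EX.MaternityLeaveSpain, RDF.type, EX.MaternityLeave))
g.add((EX.MaternityLeaveSpain, EX.jurisdiction, EX.Spain))
g.add((EX.MaternityLeaveSpain, EX.durationWeeks, Literal(16)))  # hypothetical figure
g.add((EX.MaternityLeaveSpain, EX.regulatedBy, EX.SpanishRegulation))

# Serialized as Turtle, any other system can "consume" the same facts.
print(g.serialize(format="turtle"))
```

The point is not the syntax, but that every node gets a global identifier, so the same facts can be linked to and queried from anywhere on the Web of Data.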

And the great potential that linked data offers lies in the fact that it can connect these pieces of information with those in a company’s employee database (whose information is also represented according to the linked data principles), or in the database of a national statistics centre.

You are a member of the Ontology Engineering Group. What is its main purpose, and how does it change the way we work on terminology?

As its name suggests, one of the main purposes of my research group is to provide the necessary methodological and technical support for the development of ontologies. Ontologies can be defined as constructs that allow the knowledge of any domain to be represented in a structured manner, based also on the relations that exist between concepts. In fact, the linked data paradigm is built on ontological engineering principles.

Applying ontological engineering principles to terminology would not change the way we work on terminology that much at the creation stage, but rather at the exploitation stage, i.e., when using it, and not so much for humans as for machines. Generally speaking, the emphasis would be on “listing” the defining properties of terms and on “expressing or explicitly accounting for” the relations between terms, as understood by a certain group of users and for certain purposes. But the real value of rendering terminologies in these formats lies in what other services could do with them.
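
As one hedged illustration of what “rendering terminologies in these formats” can look like, here is a sketch of a bilingual terminology entry expressed with the W3C OntoLex-Lemon vocabulary (one established model for lexical data as linked data, not named in the interview itself); the entry and concept URIs under example.org are invented for illustration.

```python
# A sketch of a bilingual terminology entry in the W3C OntoLex-Lemon model.
# The ontolex: namespace is real; the example.org URIs are invented.
from rdflib import Graph, Literal, Namespace, RDF

ONTOLEX = Namespace("http://www.w3.org/ns/lemon/ontolex#")
EX = Namespace("http://example.org/term/")

g = Graph()
g.bind("ontolex", ONTOLEX)

# The English term "maternity leave" as a lexical entry with a written form.
g.add((EX.maternity_leave_en, RDF.type, ONTOLEX.LexicalEntry))
g.add((EX.maternity_leave_en, ONTOLEX.canonicalForm, EX.maternity_leave_form))
g.add((EX.maternity_leave_form, RDF.type, ONTOLEX.Form))
g.add((EX.maternity_leave_form, ONTOLEX.writtenRep, Literal("maternity leave", lang="en")))

# The entry explicitly denotes a language-independent concept, so the
# Spanish entry "permiso de maternidad" can point at the very same concept.
g.add((EX.maternity_leave_en, ONTOLEX.denotes, EX.MaternityLeaveConcept))
g.add((EX.permiso_maternidad_es, RDF.type, ONTOLEX.LexicalEntry))
g.add((EX.permiso_maternidad_es, ONTOLEX.denotes, EX.MaternityLeaveConcept))
```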

You are currently working on a project called Lynx. How would you describe it to our readers who have never heard of it?

The main objective of the Lynx project is to provide tools that assist lawyers and companies in internationalization processes in Europe, that is, when they want to expand their businesses to a different European country. How? In simple words, by helping them find the norms they need to comply with in a foreign legislation.

The main novelty is that the services we are building in the project take advantage of and exploit a wealth of legal and regulatory “open” data represented as linked data.

To give you an example, in one of the business cases of the project, we are developing a cross-lingual question-answering service to find the most relevant legal provisions on employment across various European jurisdictions. As in any question-answering system, the idea is to find the answer within a corpus. There are several ways of approaching this. However, we believe that having the information in documents represented in a structured manner as linked data allows us to better understand how documents relate to each other, and to present the relevant information to the user regardless of the document in which it is contained.
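
To give a flavour of what that enables, here is a sketch of the kind of query that structured legal data makes possible: finding provisions about one concept across jurisdictions, whatever document or language they come from. The graph file and the lkg: vocabulary are hypothetical stand-ins, not Lynx’s actual data or API.

```python
# A sketch of querying a (hypothetical) legal knowledge graph with SPARQL.
# The file name and the lkg: vocabulary are invented for illustration.
from rdflib import Graph

g = Graph()
g.parse("legal-knowledge-graph.ttl")  # hypothetical local copy of the graph

# Find provisions that regulate one concept, in any jurisdiction, with
# labels in any language: the links live on concepts, not surface strings.
query = """
PREFIX lkg:  <http://example.org/lkg/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?provision ?jurisdiction ?label WHERE {
    ?provision lkg:regulates    lkg:MaternityLeave ;
               lkg:jurisdiction ?jurisdiction ;
               rdfs:label       ?label .
}
"""

for row in g.query(query):
    print(row.provision, row.jurisdiction, row.label)
```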

What fascinates you the most about the intersection between linguistics and computer science (linked data, ontologies)?

What fascinates me the most are the great advances that have been made in very few years in processing text and speech, and how important the work of linguists has proven to be. Linguists are the ones who spell out how language works so that it can be “taught” to machines, and I think that linguists have also learnt a lot in the process. It is not the same to talk to another human (with whom you have a shared background) as to a machine (which knows nothing and feels nothing).

What are the terminology tools/software that you would absolutely recommend to EU-terminologists?

I think that they already have great resources at their disposal. IATE 2.0 is already a fabulous tool, and termcoord.eu is a great source of terminological resources. I think it is more a matter of bringing all these sources of terms together, not only by having them in the same portal, which is a very necessary first step, but also by starting to integrate them, taking advantage of technologies such as linked data, to be able to build “smart” services on top of them.

Concerning the rise of machine translation and even machine interpretation, what would you say to people who claim that translators and interpreters will soon be obsolete?

I believe that translators and interpreters will continue to be very relevant for interpreting human communication better and more precisely, but their working methodology will inevitably change with the advances of technology in the language industries.

Do you know termcoord.eu? What is your opinion about our efforts to share terminology resources and to network EU terminology with the academic terminology world?

I do know TermCoord’s website, and I think it is an excellent initiative. Having a centralized place where terminologies are shared is to the benefit of all. I wish there were more initiatives of this sort, also at the national level, where all the languages spoken in Europe would be represented.


Written by Djamila Anita Klein, former Terminology trainee at the Terminology Coordination Unit.