Pamela Faber holds a Bachelor of Arts in Audiovisual Communication from the University of North Carolina, a Bachelor of Arts in English Language and Literature and a PhD in English Language and Linguistics from the University of Granada, and a Master of Advanced Studies from the Paris-Sorbonne University. She began teaching at the University of Granada in 1987, has been a full professor there since 2001, and is Head of the LexiCon Research Group.
Her main research areas are terminology, specialized lexicography and knowledge representation. As a result of her work in specialized language and translation, she developed over the years the Frame-Based Terminology approach, whose practical application is EcoLexicon. Faber is one of the ten most cited terminologists according to the h-index, a metric that combines a researcher's total number of publications with the number of citations those publications have received.
1. You come from the USA, but you also have Spanish citizenship and carry Puerto Rico’s legacy in your second surname (Benítez). Do you believe that your background ignited your passion for Terminology, and if so, in what way?
For the first 19 years of my life, I lived in Miami, a city that links the USA with South America and the Caribbean. In my family, both English and Spanish were routinely spoken. However, it was not until I came to live, study, and work in Spain that I became aware of the importance of language and of the benefits of understanding and speaking different languages. I applied that philosophy to my own family at a time when people were reticent about raising children to be native speakers of more than one language. The results could not have been more positive. Despite the dire predictions of friends, in-laws and even a few medical professionals, I raised my children to be multilingual, and as a result, all four are fluent in English, Spanish, and French.
2. You wrote your PhD thesis on poetry translation. How did you end up creating the Frame-Based Terminology approach?
It is true that many years ago I wrote my PhD thesis on poetry translation. However, I soon found that I did not wish to continue on that path. My first job was at the University of Granada, where I had the opportunity to participate in an innovative research project that focused on electronic lexical resource design and contrastive lexical fields. This was at the beginning of the digital revolution, and that decision irrevocably changed my research path. In fact, 25 years later, my work in that project led to EcoLexicon, the Terminological Knowledge Base that is the practical application of Frame-Based Terminology. Apart from my work in research projects, and as a way of remaining actively connected to translation, I did (and do) professional translations, primarily in the fields of civil engineering, medicine, and linguistics.
3. Previously in your career, you were asked to change the final version of a translation that you did because it was basically a free translation. Since there is a mass debate surrounding literal vs. free translation, could you share your stance on this with TermCoord?
The translation that you mention was one that I did some years ago. It was a chapter in a book on language universals for a prestigious publishing house. I did the translation and submitted it to the client, who, after reading it, requested that I do the translation again. I was perplexed because I knew that the translation was good. It flowed smoothly and was an accurate rendition of his text. I asked where the problem was, and he said that there was no problem. I had accurately conveyed the meaning, but in this case, semantic accuracy was not sufficient. He wanted me to provide a translation that also followed the syntactic structure of the original text.
I told him that when translating from Spanish to English, this type of approach is often a stylistic disaster because, among other things, it usually means writing in long sentences with an endless string of subordinate clauses and dangling participles. In diplomatic terms, I mentioned that he was asking me to treat his chapter as though it were poetry or sacred scripture, whose form is also part of its meaning. Since he kept insisting, I said that I would be willing to give him a more literal version, but not for free: he would have to pay me a second time because it would count as another translation. Rather surprisingly, he agreed, and so I provided him with another version of the text, which I regarded as acceptable (though in my opinion, not as good as the first version). He was extremely satisfied, I got paid twice, and everyone was happy.
The lesson learned from this experience was that each translation job is unique. After that experience, I always explain to clients what translation entails. In some cases, I give them a sample (the first page of the text) to be sure that it meets their expectations. Over the years I have learned that in scientific and technical translation jobs, where the client is often not a good writer, he/she does not mind when the translator substantially modifies the form of the text in the target language. In fact, the client is usually grateful because his/her text reads more smoothly. However, there are still some people who wish the translator to be (painfully) literal. Needless to say, this type of translation is infinitely more difficult because writing styles tend to differ between languages, and what is a hallmark of good writing in one language is precisely the opposite in another. Although I am more in favour of free(r) translations that sound more natural in the target language, it is also true that the client is always right. It is merely a question of being aware of his/her preferences, and creating the type of text that he/she wishes.
4. Thanks to multilingual terminology databases, glossaries, dictionaries, encyclopaedias, online media and corpora, translators and interpreters can search for terms, expressions and contexts. Which strategies should they apply for their terminography documentation in the Internet era?
As I tell my students, translation strategies are quite different from what they used to be. In bygone ages, the documentation process involved long hours at the library as well as pilgrimages to organizations where domain experts could be found and hopefully consulted. Thanks to the digital revolution, in today’s world, almost all information is a keystroke away. Given the abundance of resources, it is mostly a question of knowing how to find the right data. However, one should have an excellent grasp of which resources are most reliable, what types of information each offers, and when and how to use a certain information type. Although easy access to information makes translation work easier, it is also a double-edged sword because a translator must have the criteria to select the best solution from the many offered. He/she must also know how to make intelligent searches on the Internet and correctly process the information obtained.
As specialized translators are aware, a considerable percentage of translation quality depends on obtaining optimal correspondences for the specialized language units or terms used to convey the text message. Terms, semantic clusters of terms, and their configurations activate segments of the conceptual structure of a knowledge domain, which are hopefully present in the source and target language-cultures.
Even though the meanings of certain concepts and relations are evident in the surface structure of the text, this is merely the tip of the iceberg. There is a whole world of meaning lying beneath the surface, which translators must be able to perceive. For this reason, one of the most valuable types of information is contextual. This is also the type of information least frequently provided in online resources.
Because of their recency as well as the rapid evolution of specialized knowledge, many terms, especially multi-word expressions, are not found in terminological databases. Professional translators, who are generally specialized in a certain knowledge domain or domains, are thus obliged to create their own corpus for term searches and/or to use a large general corpus, such as those found on Sketch Engine. This means that the specialized knowledge units in a text as well as their relations must be analysed at various levels: (i) term level; (ii) phrase level; (iii) wider knowledge frame level. The expansion and enhancement of knowledge is thus an important part of the specialized translation process.
5. Nowadays, English is the lingua franca of the international scientific community. This forces authors of texts either to draft directly in English or to translate into English (not always good quality). Considering this, what is your advice to translators when doing terminography documentation?
The answer to this question is directly related to the preceding one. It is true that translators have a vast array of information at their fingertips, but its quality is variable. For this reason, translators must also possess linguistic criteria, which act as cognitive filters to eliminate unreliable data. Given the current state of scientific writing in English, it is not a good idea to blindly trust only one source. In fact, it is imperative to obtain a certain consensus of various sources before making a decision. Furthermore, not being an expert in the field does not preclude possessing knowledge of how high-quality scientific texts are written, especially regarding their stylistic and pragmatic features.
6. Public IATE receives an average of 3700 queries per hour from all over the world. How do you explain the success of such a terminology-oriented database to your students?
The answer to that question is very simple. Public IATE is an excellent multi-purpose terminological database with over eight million terms from a wide range of different knowledge domains. It can be used as a standard, given its quality and the institutions that participate in its design and implementation, not to mention the team of professionals who constantly work to improve and update it. Even when it does not offer a definitive solution for a translation problem, it provides specialized translators with a ‘jumping-off place’ when they are searching for information about a term.
7. Frame-Based Terminology structures specialized knowledge units or terms in frames, which are non-language-specific representations. On which premises is this approach based?
As reflected in its name, Frame-Based Terminology uses frames to structure specialized knowledge in a knowledge domain. In its most general sense, a frame is a type of mental representation that reflects an organization of knowledge about a concept or a set of related concepts, which humans retrieve from long-term memory to make sense of the world. Although frames have been applied in a wide range of disciplines, they are slippery customers and somewhat difficult to pin down.
For example, the concept of frame in Ontologies and Artificial Intelligence, which is non-language-specific and has a strictly hierarchical representation, is somewhat different from the concept of frame in Linguistics, which is less relationally constrained and is often linked to a specific language. Nevertheless, in terminology and specialized language, both types of frame are relevant since, when language is conceived as a mirror of the mind, frames can be elicited from oral texts produced by experts or extracted from written texts by corpus analysis. Of course, the representations themselves inevitably differ, depending on whether one is referring to the strictly hierarchical structure or highly formal organization of a knowledge domain or to related clusters of concepts within the context of an action, process, or event.
The frames in Frame-Based Terminology are a blend of both perspectives since they include both language-specific and non-language-specific information. Frames are extracted from corpus texts in different languages through the use of knowledge patterns that encode semantic relations. The data thus obtained are used to structure categories to create concept frames as well as to characterize general processes and actions. When frames are specified as an action or process with participants (in the environmental domain), this provides a predicative frame linking two semantic categories. Although corpus data are used to extract information, the assumption is that the frames in FBT (unlike those in FrameNet) encode conceptual knowledge that is non-language-specific. Non-language specific information not only comes in the form of semantic relations, but also in the form of conceptual invariants encoded in a wide range of languages that are used for specialized communication.
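The idea of knowledge patterns that encode semantic relations can be illustrated with a minimal sketch. The Python snippet below is not part of EcoLexicon or the FBT toolchain; the pattern inventory and the example sentences are invented for illustration. It simply matches a few lexical patterns that typically signal semantic relations and returns concept-relation-concept triples:

```python
import re

# A tiny, invented inventory of knowledge patterns (regexes) paired with
# the semantic relation each one signals. Real terminology work uses far
# richer pattern sets and large domain corpora.
KNOWLEDGE_PATTERNS = [
    (re.compile(r"(\w[\w ]*?) is a type of (\w[\w ]*)", re.IGNORECASE), "type_of"),
    (re.compile(r"(\w[\w ]*?) is caused by (\w[\w ]*)", re.IGNORECASE), "caused_by"),
    (re.compile(r"(\w[\w ]*?) is part of (\w[\w ]*)", re.IGNORECASE), "part_of"),
]

def extract_relations(sentences):
    """Return (concept, relation, concept) triples found in the sentences."""
    triples = []
    for sentence in sentences:
        for pattern, relation in KNOWLEDGE_PATTERNS:
            for match in pattern.finditer(sentence):
                triples.append((match.group(1).strip().lower(),
                                relation,
                                match.group(2).strip().lower()))
    return triples

# Invented example sentences from an imaginary environmental corpus.
corpus = [
    "A groyne is a type of coastal defence structure.",
    "Beach erosion is caused by wave action.",
]
print(extract_relations(corpus))
```

Triples of this kind are what allow concepts extracted from texts in different languages to be linked into a shared, non-language-specific frame.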
8. The research group, which you lead at the University of Granada, developed EcoLexicon. In this visual thesaurus, each environment term appears in the context of a specialized frame that highlights its relation to other concepts, and makes its designations explicit in English, Spanish, German, French, Russian, and Greek. How was this process developed and what are the specific characteristics of this resource compared to a terminology database?
EcoLexicon (ecolexicon.ugr.es) was created over a period of 15 years, thanks to a series of research projects funded by the Spanish government. The main objective was to create a resource that would include interrelated term entries enriched with conceptual and linguistic information that would be valuable for users who have to understand or create specialized environmental texts in another language.
EcoLexicon is a freely accessible Terminological Knowledge Base (TKB) on the environment with terms in six languages: English, French, German, Modern Greek, Russian, and Spanish. As previously mentioned, it is the practical application of Frame-Based Terminology, which configures specialized domains on the basis of definitional templates and creates situated representations for specialized knowledge concepts. The specification of the conceptual structure of (sub)events and the description of the lexical units are the result of a top-down and bottom-up approach that extracts information from a wide range of resources. This includes the use of corpora, the factorization of definitions from specialized resources and the extraction of conceptual relations with knowledge patterns.
EcoLexicon is different from other terminology databases because it provides entries in the form of semantic networks that specify relations between environmental concepts. All entries are linked to a corresponding (sub)event and conceptual category. In other words, the structure of the conceptual, graphical, and linguistic information relative to entries is based on an underlying conceptual frame. Graphical information includes photos, images, and videos, whereas linguistic information not only specifies the grammatical category of each term, but also phraseological and contextual information. The TKB also provides access to the specialized corpus created for its development and a search engine to query it. One of the challenges for EcoLexicon in the near future is its inclusion in the Linguistic Linked Open Data Cloud.
Interviewer: Víctor Mir – Robert Schuman Communications’ Trainee at TermCoord
Víctor, who hails from Spain, studied at a German school and holds a Bachelor’s degree in Translation and Interpreting, a university-specific degree in Linguistic Mediation and a Master’s degree in Edition, Production and New Journalistic Technologies. He did an internship as a terminological researcher for the Quebec Board of the French Language. Víctor has demonstrated his communication skills by writing, translating and presenting a wide range of topics, from finance and culture to health, events and sport, for print and online media, including Swiss TV and Spanish and international newspapers, magazines, newsletters and blogs. Víctor, who speaks Spanish, German, English, Catalan, Italian and French (in that order), has taught languages at schools in Austria and, in Spain, at multinationals and at the Ministries of Foreign Affairs and Cooperation; Defence; and Energy, Tourism and Digital Agenda.