Three types of VLE were identified prior to the start of the project as highly relevant to the interpreting context: 3D virtual worlds, videoconference tools and video corpora. The decision about which VLEs to use was made in line with the partners’ own assessment of, and access to, the different environments. A crucial criterion was that the environments had to be adaptable to the needs of the EVIVA evaluation. The initial suggestions were to use the 3D virtual environment developed in the IVY project, the video corpora and corpus search site developed in the BACKBONE project, the video clips and exercises developed in the BMT2 project, and a videoconference-based environment. These suggestions were consolidated in the early phase of the project; with regard to the videoconferencing environment, it was decided to use Google+ Hangout. The selected VLEs are summarised in the table below.
| Learning scenario | VLEs used |
| --- | --- |
A brief description of each VLE, access details and further information are provided below.
The BACKBONE corpora were developed in the LLP project BACKBONE, coordinated by the University of Tübingen, as a resource for language and interpreter training. They include video recordings and time-aligned transcripts of narrative interviews with native speakers of English, French, German, Polish, Spanish and Turkish, as well as with non-native speakers of English (English as a Lingua Franca corpus). The BACKBONE video corpus environment consists of a suite of corpora and a suite of corpus tools. The corpora consist of narratives by speakers from different walks of life including education, local politics, tourism, banking, environmental protection, sports and the media. BACKBONE comes with a range of annotation and search/retrieval functions for language and interpreter training purposes.
In order to access the BACKBONE project website and corpora, please click here.
For further information on how to use BACKBONE, please download our Quick Guide to BACKBONE.
The IVY 3D virtual environment was developed in the LLP project IVY – Interpreting in Virtual Reality, which was coordinated by the University of Surrey. It is a resource for the training of interpreting students and clients of interpreters. It is implemented as a bespoke region in Second Life that hosts a range of virtual scenarios in which business and community/public service interpreting typically takes place, in order to support situated learning. The scenarios include, for example, a meeting room, presentation area, courtroom, doctor’s office, and others.
The environment has different modes that users may work in, namely:
- Interpreting practice mode, which gives access to prepared audio content, i.e. monologues and bilingual dialogues. The materials are embedded with a range of learning activities designed to help students prepare for an interpreting assignment and reflect on their performance. They also provide guidance for practising core interpreting skills such as active listening, anticipation, note taking, and target-text production and delivery.
- Exploration mode, which is designed for clients of interpreting services to learn about what an interpreter does, what the challenges are for both interpreters and clients, and how to work successfully with an interpreter.
- Live interaction mode, which enables all users (students and clients) to come together for joint practice. Making use of the “voice chat” in Second Life, students can use this mode to practise role-plays, and clients can take part either as experts or as observers of the communication.
For further information on how to access and use the IVY 3D environment, please click here.
The Building Mutual Trust 2 project (BMT2, coordinated by Middlesex University) aimed to create digital resources specifically for users of interpreting services in legal settings. The BMT2 website is built around a series of short video clips for training legal practitioners in how to work with an interpreter. The video clips are based on key stages of criminal proceedings (police interview, lawyer consultation, pre-trial hearings, trial, sentencing) and capture the specific challenges of interpreting at each stage as well as the general challenges of interpreting in legal settings. The video clips are embedded in a web-based environment and enriched with a series of learning activities that enable legal practitioners to acquire and test their knowledge.
To access the BMT2 website, please click here.
For further information on how to use the BMT2 website, please refer to the Quick Guide to BMT2.
Google+ Hangout is a third-party videoconference tool used during the evaluation phase of the EVIVA project. It is free and easy to use, entirely cloud-based, and does not require the installation of any client software. Moreover, it allows multi-point conferences to be conducted free of charge. Multi-point videoconferencing can be seen as a fairly ‘neutral’ way of using this technology in interpreter training, i.e. of re-creating the conditions of traditional dialogue interpreting.
In order to learn more about Google+ Hangout and to access it, please click here.