
How language testing is embracing technology

A new breed of digitally delivered English exams doesn’t stop at testing verbs: these tests also aim to democratise education, and their makers see themselves leading the way. But as the testing industry embraces technology, the changes may go as far as to challenge our idea of the test itself. Claudia Civinini reports.
March 29 2019

On a hot summer’s day in Australia, about nine years ago, I was standing in a long queue to register at an IELTS test centre. We were all nervous, overdressed, and around AUS$300 lighter. If someone had told me then that I could have taken the test in my pyjamas from the comfort of my bedroom, and still gained admission to a university, I would have thought them delirious and called an ambulance, fearing heatstroke. 

But times have changed. While digital delivery is striving towards universal access at a lower cost, AI marking allows for instantaneous results and feedback, and branching algorithms build adaptive tests, which make the gruelling experience much shorter.
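The branching idea behind adaptive tests can be illustrated with a toy sketch: each answer moves the difficulty up or down, so the test converges on a level with fewer items than a fixed linear paper. The item bank, levels and branching rules below are entirely hypothetical, not any provider’s actual algorithm:

```python
# Toy sketch of a branching (adaptive) test. A correct answer branches
# to a harder item, a wrong one to an easier item; the final level is a
# rough ability estimate. All items and rules here are illustrative only.

ITEM_BANK = {
    1: "Choose the word that means 'house pet': cat / carburettor",
    2: "Choose the word that means 'talk terms': negotiate / navigate",
    3: "Choose the word that means 'everywhere': ubiquitous / unilateral",
}

def run_adaptive_test(answers_correct, start_level=2):
    """Replay a sequence of right/wrong answers, branching on each one."""
    level, served = start_level, []
    for correct in answers_correct:
        served.append(ITEM_BANK[level])      # item presented at this step
        if correct and level < max(ITEM_BANK):
            level += 1                       # branch to a harder item
        elif not correct and level > min(ITEM_BANK):
            level -= 1                       # branch to an easier item
    return level, served

final_level, items_served = run_adaptive_test([True, True, False, True])
print(final_level)        # estimated level after only four items
```

Real adaptive engines use far richer psychometric models (such as item response theory), but the time saving comes from the same principle: skipping items that are clearly too easy or too hard for the candidate.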

Some platforms, such as the Duolingo English Test or the recently launched English3, aim to “knock down the test centre barriers”: students can take them online anywhere in the world to apply to a growing number of US-based colleges and universities.

Others instead offer cheaper and faster testing solutions for individuals, companies, and governments.

Language learning app Babbel developed its two-skill auto-marked exam with Cambridge for the benefit of its users, but workplaces are also “loving the test,” reports chief product officer Geoff Stead.

“The classic assessment business stakeholders tie themselves in knots about security”

The similarly auto-marked and two-skill online EF Standard English Test, developed by the testing arm of language teaching giant EF, was used to conduct a large-scale assessment in Italian state schools.

Traditional exam boards are not shying away from digital delivery either – Cambridge Assessment English, for example, offers Linguaskill, a fully online four-skill English test for companies. Computer-based tests, such as Pearson’s PTE Academic, are no longer the exception.

But while the market is certainly asking for cheaper and faster tests, can technology deliver on its promise to democratise access to education? 

And what is technology’s relationship with the testing industry: is it really a mere enabler, or can it influence and mould the examination environment in ways that may not always be entirely positive?

What the market asks

Educational, certification and licensing organisations are attracted by the logistical and ecological advantages of digital delivery, the tailored experience enabled by adaptive technology and the faster results, EF SET academic director Dana Alhadeedi explains.

The same advantages apply to users, with technology providing choice, flexibility, and equal opportunities, Pearson’s head of assessment Freya Thomas Monk says.

“I see digital assessment as providing a level playing field; it’s essentially the same experience wherever you take the test in the world,” she explains.

While Thomas Monk says learners particularly appreciate the speed of feedback but are not too concerned by the speed of the test itself, for other players in the industry this is a key selling point. 

“I don’t think there is enough evidence to say that technology measures language ability better”

LanguageCert, for example, an on-demand test provider, is going to replace its linear computer-based test with an adaptive model, cutting the time to about 60 minutes – albeit for two skills. “This is going to make the computer-based test a lot more popular than it is at the moment,” says LanguageCert portfolio manager Mary Yannacopoulou.

What has technology ever done for us?

Beyond flexibility, facilitated access and cheaper fees, technology is changing the shape of language exams themselves, making them shorter and more personalised, and allowing for creative solutions for assessing integrated skills. Platforms such as English3 or Duolingo even allow universities to review part of a candidate’s performance directly or to conduct a video interview.

For Stead at Babbel, digital delivery allows them to create test items that are better indicators of language ability. “There are a bunch of those opportunities sitting there, waiting for new digital agitators to shake things up,” he says.  

Sarah Rogerson*, who leads the assessment development team at Cambridge Assessment English, explains that particularly interesting developments include contextualised, immersive and scenario-based assessment.

But she maintains that technology is not inherently better, just different. “I don’t think there is enough evidence to say that technology measures language ability better. It enables us to do a lot of things differently as a powerful tool,” she tells The PIE.

But not everyone is using it well. “I see lots of shiny digital solutions with excellent user experience… but often based on old pedagogy, such as grammar-translation, moving back in time,” she adds.

I have marked things you machines wouldn’t believe

Part of the problem is that authenticity doesn’t sit well with AI marking, which provides one of the most exciting new developments in testing – instantaneous feedback. 

“I see lots of shiny digital solutions with excellent user experience… but often based on old pedagogy”

AI can assess linguistic competence: all the building blocks such as pronunciation, fluency, grammar, syntax, vocabulary, are within its power, Trinity College London lead academic Alex Thorp explains. The problem, however, is that communicative competence is missing. 

“We have a lot of difficulties when it comes to the bigger picture,” he says.

Skills – such as how we deal with spontaneous communication or how we strategise conversations – still escape AI.
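Thorp’s distinction can be made concrete with a toy sketch of surface-level auto-marking. The features below (word count as a crude fluency proxy, type-token ratio as a vocabulary-range proxy) are illustrative inventions, not any real engine’s scoring model – and notably, nothing in them captures whether a response actually communicates anything:

```python
# Toy illustration of surface-feature "AI" marking. It can quantify
# proxies for fluency and vocabulary range, but a rote, incoherent
# answer with varied vocabulary would score just as well -- which is
# the gap between linguistic and communicative competence.

def surface_score(response: str) -> dict:
    """Return crude, surface-level features of a written response."""
    words = response.lower().split()
    n = max(len(words), 1)                    # avoid division by zero
    return {
        "word_count": len(words),             # crude fluency proxy
        "unique_ratio": len(set(words)) / n,  # vocabulary-range proxy
        "avg_word_len": sum(map(len, words)) / n,
    }

print(surface_score("The ubiquitous cat negotiates its ubiquitous nap"))
```

Production engines model pronunciation, grammar and syntax far more deeply, but the underlying limitation Thorp describes is the same: the features are properties of the language, not of the communication.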

These limitations become a problem, however, if language tests are designed to match AI engines’ current marking capabilities, Rogerson explains.

“A lot of test writers focus on those items that can be auto-marked by an AI engine… these may be dangerous because we are limiting what we learn and what we can assess based on what the technology can currently mark,” she says. 

“We need to provide tests that allow communicative competence to be assessed and maybe focus more on the human-machine synergy.”

Leading the change

The new kids on the block such as Duolingo or English3 don’t see themselves as challengers – more as leaders showing the way to a more democratic testing industry. By lowering barriers, they could even help revive the US’s international recruitment drive. 

DET head of strategy Jen Dewar, who sees Duolingo as a model of how technology can be used to democratise education, says that lowering admission costs and facilitating access from countries with no official test centre will be beneficial.

Coming from an admissions background herself, Dewar knows universities are often unaware of these challenges for students – but with US international recruitment decreasing, they may be more eager to listen to these arguments.

“We are helping universities understand what a barrier this is for students… universities are becoming more sensitive to this. I am seeing mobility patterns changing… so the issue of access is becoming a more compelling argument,” she explains to The PIE. 

“If you rely on a single high-stakes test, then you will always leave room for fraud”

But universities need to have confidence in the new testing solutions first, warns English3 COO Moroni Flake. “One of the primary drivers of the international student drop in the US is cost,” he says. “Universities can attract more applicants by accepting a test that is more affordable and convenient, but they must have confidence that the test is an accurate reflection of academic English ability.”

But it’s not only for recruitment’s sake. The push to reach more students also aims to give everyone a chance to demonstrate their linguistic ability; being unable to do so can be a fundamental barrier to progress in many countries.

However, one of the most obvious blocks on the road to universal access is that not everyone has access to technology. Cambridge Assessment English has also been working with other organisations to increase access to education for refugees, Rogerson explains – and while technology plays a role in it, it’s not the only solution. “It’s important to remember that there is still a digital divide that is not going to be overcome in the short-term,” she says.

Another problem, By Degrees CEO Danny Bielik thinks, is that governments, employers and certifying bodies are still ambivalent about e-learning qualifications. By Degrees offers mobile-delivered courses and exam preparation and is particularly active in India.

“There is a lot of talk around trends, trends towards MOOC, towards e-learning, and the fact is that where education need is the greatest, the new trend hasn’t delivered in outcome,” he posits.

High stakes and ongoing assessment

There is no limit to what the testing industry will deploy to guarantee security on high-stakes tests – exams used for migration or university applications. Frightening invigilators and fingerprint checks are now being replaced by webcams and browser-blocking software, while remote proctoring is carried out by raters trained to spot instances of “malicious behaviour”.

While some are suspicious of digitally-delivered high-stakes exams, others make the point that technology can actually improve security and surveillance.

But others think that no amount of technology or human invigilation will guarantee total security on high-stakes tests, and that a shift is coming. 

“The classic assessment business stakeholders tie themselves in knots about security,” Stead explains. He thinks the very nature of testing itself is going to change, and he’s not alone. 

“Fraud is in the classroom too. Whether people do the test online or with pen and paper, if you rely on a single high-stakes test, then you will always leave room for fraud,” adds Bielik. 

Continuous assessment can not only solve security issues but also be formative. That’s the philosophy behind TrackTest, an online English language test which users can access for 12 months and take multiple times while monitoring their performance.

CEO Brano Pokrivcak, a philologist-turned-technology-entrepreneur, explains that the clue is in the name. “When you drive a race car, you don’t improve your performance during the races,” he says. “You improve when you are track testing: you do the track test, go back to the garage, fix your car, then try again. That’s what we offer.”

“If you can evidence across the board, this seems more persuasive to institutions”

Thorp at Trinity College London adds that some universities are already shifting from a high-stake admission test to a portfolio of evidence. “If you can evidence across the board, this seems more persuasive to institutions,” he says.

A digital future?

So what will future tests look like? Pen and paper tests will “hang around for longer than anyone imagines,” Stead acknowledges, nodding to the uneven availability of fast internet globally. But the push towards fully AI-rated exams is quite strong, according to Thorp, who mentions Chinese projects such as iFlytek.

However, the predominant model for the near future is blended AI-human marking. Pokrivcak says that for the next five years at least that won’t change, although AI will become stronger.

Examiners don’t need to worry about losing their jobs anytime soon; they’ll just need to get used to sharing their desk with increasingly smart algorithms – for a couple more years at least.

This is an abridged version of an article that originally appeared in the July 2018 edition of The PIE Review, our quarterly print publication.

*Sarah Rogerson is currently director of assessment at Oxford University Press
