Part I: How well are EPSRC CDTs doing?

First Published: Thursday 25th October 2018
This is the first blog post in our series on the performance of EPSRC-funded Centres for Doctoral Training (CDTs), based on an analysis of the results of EPSRC's 2016/17 mid-term review.

EPSRC (the Engineering and Physical Sciences Research Council) is a major funder of scientific research in the UK, disbursing nearly £1 billion each year in research funding to Higher Education Institutions (HEIs) in the United Kingdom (as well as to some Research Institutes and other Independent Research Organisations). Part of the funding it provides to HEIs is for the training of doctoral researchers (PhD students). A large proportion of this is delivered through Centres for Doctoral Training (CDTs), where over 50 studentships (5 cohorts of 10 or more students) are allocated to a specific research area, usually led and dominated by a single university (although there are some multi-institutional CDTs). The 115 CDTs that EPSRC funds account for around 10% of its overall budget (with the current portfolio costing just shy of £500 million). According to Phil Nelson (until recently EPSRC CEO), centres are part of EPSRC's vision to provide its funding in a "consolidated, critical mass sort of way". In other words, funding is to be focussed primarily on a small number of research groups at a small number of universities, rather than the previous approach of spreading things more thinly across and within universities. The CDT scheme is part of a wider policy initiative to focus funds on universities that perform well in certain league tables, at the expense of others. Accordingly, its success (or otherwise) has important implications for future higher education research funding in the UK.

In August 2017, EPSRC issued a press release about the results of the mid-term review of its CDT scheme, as well as a publicity brochure ("Building skills for a prosperous nation") which identified a number of CDTs as exemplars of the quality of training being provided. There was much fanfare. EPSRC asserted that the review panel were "tremendously impressed". The publicity brochure claimed that the CDT scheme was "setting the gold standard for cohort-based doctoral training in the UK". However, EPSRC were incredibly reluctant to disclose the actual mid-term review scores for each CDT, despite the usual expectation of "open data" in science and government. Following a challenge, the Information Commissioner issued a lengthy decision notice in which EPSRC were ordered to release the data. Our analysis of the released data is set out below.

Overall, our analysis of the results of EPSRC's own mid-term review of its CDT scheme indicates that it is hard to justify their claim that it is "setting the gold standard for cohort-based doctoral training in the UK".

The CDT mid-term review exercise

In 2017, EPSRC conducted a “mid-term” review of its “investments” in the 115 CDTs awarded in 2013/14. Each CDT was required to produce a report structured around a set of questions, and had the opportunity to provide case studies. These were in turn reviewed by EPSRC staff and then by an Oversight Panel. Each CDT was given one of four scores (best to worst): “Good”, “Good/Satisfactory”, “Satisfactory” and “Interview”. A score of “Interview” meant that EPSRC had sufficient concern (based on the written report) to consider withdrawing funding for future cohorts of students at a CDT, whilst "Satisfactory" meant a CDT was only one grade away from having to be interviewed. Achieving the top rating of "Good" was not difficult: a CDT simply needed to demonstrate that it was fulfilling the promises made in its funding proposal. For example, see the mid-term review report (with an attached set of case studies) released under the Freedom of Information Act 2000 by Lancaster University concerning the operation of its "Good"-ranked CDT (STOR-i). Note that Professor Mark Smith, the Vice-Chancellor of Lancaster University, was the chair of the EPSRC mid-term review panel itself.

These scores are certainly meaningful: in other words, the CDTs with low scores do not achieve the high standards that EPSRC claims for the CDT scheme. If that were not so, the Information Commissioner would not have had the following to say about disclosing the specific feedback in her decision notice:

“84. The key aim of EPSRC (and UKRI) is to ensure research and innovation continues to flourish in the UK by investing wisely and supporting researchers. It would not be in the public interest to disclose information which may undermine these key aims. EPSRC has stated that in some case the discussions following the feedback are ongoing and, even in the cases where it is not, it seems that the nature of the feedback should it be disclosed may have an impact on EPSRC being able to continue supporting and encouraging centres and institutions. This is because the undue scrutiny these institutions may find they are under following disclosure would not be conducive to the centres and institutions making improvements and continuing to have open dialogue with EPSRC and the review process.”

So "Interview" and "Satisfactory" descriptors are analogous to a less than desirable Ofsted Inspection report for a school (which are actually published with the written reasons). Since the Information Commissioner's decision EPSRC now say that in some cases the grade of "Interview" was awarded where clarification concerning aspects of a CDT's submission was needed. However, no records of this were provided to the Information Commissioner by EPSRC, and EPSRC has not published any revised scores (post-interview).

What did we find?

We have developed a league table of CDTs based upon EPSRC's response to our request for the mid-term review scores under the Freedom of Information Act 2000. These can be found in the Appendices (below). For present purposes, it suffices to summarise our approach to the data provided. We have used a numerical score that maps the EPSRC Descriptors as follows (4 is best, 1 is worst): "Good" = 4, "Good/Satisfactory" = 3, "Satisfactory" = 2, "Interview" = 1. Where we have reached a finding on an institution, this is based on a "Grade Point Average" (GPA), i.e. the mean score for all CDTs led by that institution. The data, together with the full methodology, can be found in Appendix A and Appendix B.
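
As a concrete illustration of this mapping and of how a Grade Point Average is computed, here is a minimal sketch in Python. The institution names and grades in the example are hypothetical; only the descriptor-to-score mapping comes from our methodology.

    # Mapping of EPSRC mid-term review descriptors to numerical scores (4 best, 1 worst)
    SCORE = {"Good": 4, "Good/Satisfactory": 3, "Satisfactory": 2, "Interview": 1}

    # Hypothetical example records: (lead institution, descriptor) for each CDT it leads.
    # These names and grades are illustrative only; the real data are in Appendix B.
    cdts = [
        ("University A", "Good"),
        ("University A", "Satisfactory"),
        ("University B", "Good/Satisfactory"),
        ("University B", "Interview"),
        ("University B", "Good"),
    ]

    def gpa_by_institution(records):
        """Grade Point Average per institution: mean score over all CDTs it leads."""
        scores = {}
        for institution, descriptor in records:
            scores.setdefault(institution, []).append(SCORE[descriptor])
        return {inst: sum(vals) / len(vals) for inst, vals in scores.items()}

    print(gpa_by_institution(cdts))
    # {'University A': 3.0, 'University B': 2.6666666666666665}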

The overall picture

Below is the overall distribution of scores by number of CDTs (the chart also shows the overall cost of the CDTs at each score).
These results show that it is difficult to justify EPSRC's assertion that this approach is the "gold standard for cohort-based doctoral training in the UK" and that the CDT scheme is performing "tremendously", as over 60% of the CDTs were rated lower than "Good". Moreover, 19 of the centres (out of 115) were rated "Interview", meaning that serious consideration was being given to the possibility of withdrawing funding for future cohorts of students, whilst a further 34 CDTs were rated "Satisfactory", i.e. one grade away from being required to attend an interview.

Second time around?

Below is a plot of the performance of renewed CDTs (which, at the time of the 2016/17 mid-term review, had been running for eight years) against newly awarded CDTs (in 2013/14):
The results of this analysis are concerning. One would expect that a renewed centre would be consistently operating at a high level, i.e. it would score either "Good" or "Good/Satisfactory". Yet, on average, the renewed CDTs have actually performed worse than new CDTs, with over 50% scoring either "Satisfactory" or "Interview". The reasons for this are unclear. It may be that EPSRC have focussed upon priority areas instead of quality, as suggested by their selection of 9 "Satisfactory" CDTs for inclusion in their mid-term review publicity brochure ("Building skills for a prosperous nation"). Exactly why EPSRC chose to highlight so many lower-performing CDTs in their publicity material for the CDT scheme is unclear. It may also be that the processes for identifying CDTs that are having problems (and the subsequent support mechanisms) are ineffective. Either way, this raises questions about both the management of the CDT portfolio and the viability of the CDT scheme overall.

Esteem versus quality

The bubble chart plot below shows the Grade Point Average (GPA) score versus the THE (Times Higher Education) World University Ranking of all institutions that lead at least one CDT (note that Cranfield University is excluded, as it does not have a THE World University Ranking):

Appendix B includes the raw data used to construct the league table. Notably, some highly esteemed institutions have done poorly. For example, the University of Cambridge has a GPA of 2.0, which is towards the bottom of our league table (Cambridge was awarded 8 centres by EPSRC at a total cost of £34m), and the University of Manchester scored a GPA of 1.8 for its 5 centres (total cost £22m). The University of Southampton was second from bottom (3 CDTs, GPA 1.66, at a cost of over £10m), despite being the institution that employs Prof Philip Nelson (who until recently was CEO of EPSRC). From the distribution of CDTs to institutions, it is clear that esteem is a factor in how EPSRC and its panels decide to award centres, but the esteem of institutions has little bearing on the performance of CDTs and may even detract from it (indeed, there is a modest, but not statistically significant, correlation between a lower THE World University Ranking and performance).
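
For readers who want to reproduce this kind of check, a minimal sketch follows. The two arrays are placeholders rather than the actual Appendix A/B figures, and Spearman's rank correlation is used here simply as a reasonable choice for ordinal ranking data; it is not necessarily the exact test behind the statement above.

    # A minimal sketch of the kind of correlation check described above, using
    # Spearman's rank correlation. The arrays are placeholders, not the actual data.
    from scipy.stats import spearmanr

    the_rank = [1, 5, 30, 90, 201, 350]      # THE World University Ranking (lower = more esteemed)
    gpa = [2.0, 1.8, 2.5, 3.0, 2.8, 3.2]     # mid-term review Grade Point Average (4 is best)

    rho, p_value = spearmanr(the_rank, gpa)
    print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
    # A positive rho would indicate that less highly ranked (higher-numbered)
    # institutions tended to achieve higher GPAs; a large p-value indicates the
    # association is not statistically significant.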

How good was the peer review and assessment process?

2013/14 CDTs were awarded in three waves, based on the results of an assessment of CDT proposals that included peer review and interviews. The first wave of awards went to those CDTs ranked first by interview panels. Following this first wave, EPSRC negotiated directly with the universities leading these centres to reduce their budgets, which in part freed up resources for a second wave (CDT proposals ranked second by interview panels) and a final third wave (proposals ranked third and fourth by interview panels). The chart below compares the performance of CDTs awarded in the first wave (top-ranked CDTs at interview) with those awarded in the second and third waves. The results are a mixed bag. On average, those ranked first within the assessment process performed better (a GPA of 2.88 versus 2.53 for those announced in the second and third waves). However, of the centres ranked first (that is, in the top 43% of full proposals), a significant number were still either required to attend an interview (16%) or were rated only "Satisfactory" (23%). This suggests that rankings in EPSRC's selection process provide only a weak indication of subsequent performance, and cannot reliably predict the success of each of these multi-million pound "investments".

Industry and national laboratory partners

A feature of the CDT scheme that has been much hailed by EPSRC is the involvement of industry partners. For the 2013/14 scheme, EPSRC claimed more than £450 million in cash and in-kind match funding from businesses, universities and other stakeholders. The contribution of industry partners to the success of CDTs is difficult to infer from the mid-term review scores. However, a number of these "partners" have been involved in multiple CDTs, some in as many as 25 (e.g. DSTL).

The importance of industry involvement in CDTs, and industrial partners' awareness of the performance of the CDTs that they are involved in, is a sensitive issue for EPSRC. Indeed, prior to EPSRC's publication of the mid-term review results (as required by the Information Commissioner), Neil Viner (UKRI/EPSRC Director of Programme Delivery) wrote to all CDT Directors to indicate that EPSRC is "conscious that there are a number of key partners involved in CDTs and we would like you to use your discretion in informing the appropriate contacts for those CDTs where you are the lead university."

We took a closer look at how industry involvement is reflected in the scores awarded at the mid-term review. In Appendix C we show the average scores for the CDTs that industry (and other types of partner) are involved in, for the 89 organisations that have a stake in 4 or more CDTs. Keep in mind that the average score across all CDTs is 2.76. Once again, the result is a mixed bag, with the CDTs related to some industry partners performing very well, but many others performing poorly.

One might expect that having a flagship national laboratory or national institute as a CDT partner would have a positive impact on performance; however, in many cases we found the opposite to be true. The Ministry of Defence's executive agency DSTL (the Defence Science and Technology Laboratory) is a partner in 25 CDTs and averages 2.75; lower performers include Diamond Light Source, the UK’s national synchrotron science facility (partner in 12 CDTs, average 2.58), the STFC laboratories (partner in 6 CDTs, average 2.23) and the National Physical Laboratory (partner in 22 CDTs, average 2.23).


Potentially misleading statements and advertisements

Our analysis of the mid-term review results raises a number of questions about what has been said about the CDT scheme by those involved in it. Perhaps one of the most striking examples is the EPSRC CDT for Physical Sciences for Health (Sci-Phy-4-Health) at the University of Birmingham, which scored "Satisfactory" but announced its "Mid-term review success", stating that it was "pleased to report that the centre was highly commended". More recently, in the outline descriptions of CDT proposals shortlisted for the current competition for the next phase of CDTs, a number of potentially misleading claims are made that reviewers and assessment panels would be unlikely to be able to verify without access to the mid-term review results.

Finally, there are various statements by EPSRC itself that raise similar questions.

Conclusion

It is highly concerning that, when given the opportunity to mark their own homework, EPSRC decided to embellish the performance of their programme, in the belief that the CDT mid-term review scores, along with all other peer review information, were not disclosable under FOIA. Indeed, a UKRI manager even claimed to the information requester that, because one of the "fundamental principles of the Research Councils review processes is that of confidentiality to the parties concerned in the exercise", the material was exempt; an argument quickly dismissed by the Information Commissioner. The results of this exercise, and EPSRC's attempt to portray the CDT scheme as "tremendous", demonstrate a need for transparency in respect of their own decision making. They also suggest that EPSRC staff, who are responsible and accountable for the performance of their policies and decision making, should not be involved in assessing the outcome and performance of these initiatives, given the potential conflict of interest. The introduction of UKRI presents an opportunity to separate the audit of programmes from decision making going forwards.

Students in the CDTs may have some big questions which we hope will be answered. In particular, they may want to know why EPSRC decided not to facilitate student choice and instead allowed relatively poorly performing centres to describe themselves as successful. This demonstrates a lack of respect for the interests of students. Other unanswered questions include why poorly performing centres have been allowed to operate for almost a decade, and whether reviewers and assessment panels have the information to assess claims made in new CDT proposals as to their "success". Going forwards, the needs of students should be championed and the principle of student choice respected: after all, a previous CDT mid-term review concluded that "Students should be empowered to make their own decisions on appropriate training".

We have found that EPSRC's CDT scheme is not performing "tremendously", as EPSRC claims, and questions persist as to the wisdom of concentrating funding on a small number of "esteemed" institutions. A large number of centres in this mid-term review scored less than "Good", and a significant number produced reports that were deemed so poor that EPSRC held an interview to consider whether they should be allowed to continue. We have also identified significant concerns with EPSRC's decision making, both in respect of the decisions taken on particular centres and in the general management of the scheme. We hope that our analysis will enable reviewers, assessment panel members, HEIs and EPSRC themselves to think carefully about the future of the CDT scheme and how funding for research training is disbursed going forwards.



Appendix A - Institutional League Table

This league table is based upon the data for individual centres provided by EPSRC, which has been combined with information from EPSRC's Grants on the Web website and other information available on EPSRC's website. Note that the average score for a CDT in the mid-term review was 2.76.
We have ranked each institution as follows.
  • The Rank is directly based on the Grade Point Average score, with a Grade Point Average of 4 being best and 1 the worst. Institutions with the same Grade Point Average have been given the same Rank (a short sketch of this ranking step appears after this list).
  • The Grade Point Average is the average (mean) score given in EPSRC's 2017 Mid-Term Review across the centres on which the institution is named by EPSRC as the lead (partner institutions are excluded from this analysis).
  • The THES score is taken from the 2019 THE World University Rankings. Note that Cranfield has not been scored. Where an institution falls within a range (e.g. 201-250), the top value is taken (in this example, 201).
  • The other statistics for each institution are computed only in respect of the centres on which that institution is named by EPSRC as the lead (they are accordingly derived from the Individual League Table in Appendix B).
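
As flagged in the first bullet above, the ranking step can be sketched as follows. This is a minimal illustration assuming pandas; the institutions and GPAs are placeholders, and method="min" is simply one convention that gives tied institutions the same rank.

    # A minimal sketch of the ranking step, assuming pandas. The institutions and
    # GPAs below are placeholders, not the Appendix A data.
    import pandas as pd

    table = pd.DataFrame({
        "Institution": ["University A", "University B", "University C", "University D"],
        "GPA": [3.5, 2.0, 3.5, 2.75],
    })

    # Higher GPA = better rank; method="min" gives tied institutions the same rank
    table["Rank"] = table["GPA"].rank(ascending=False, method="min").astype(int)
    print(table.sort_values("Rank"))
    #     Institution   GPA  Rank
    # 0  University A  3.50     1
    # 2  University C  3.50     1
    # 3  University D  2.75     3
    # 1  University B  2.00     4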


Appendix B - Individual CDT Performance

This league table is primarily based upon the raw data provided by EPSRC, which has been combined with information from EPSRC's Grants on the Web website and other information available on EPSRC's website. Note that the average score for a CDT in the mid-term review was 2.76.
The data for each column of the table has been assembled as follows.
  • The following columns come directly from EPSRC's information disclosure: Grant ID, Institution, Principal Investigator, EPSRC Descriptor, Centre Name
  • The Score is computed by mapping EPSRC Descriptors as follows (4 is best, 1 is worst): "Good" = 4, "Good/Satisfactory" = 3, "Satisfactory" = 2, "Interview" = 1
  • The Cost of each centre is the raw award figure as published on EPSRC's 'Grants on the Web': this was scraped automatically using a Matlab script (a sketch of the approach appears after this list).
  • The Renewal column was determined by examining the list of CDTs funded in the previous competition and checking whether a centre had the same PI or effectively the same title (if so, it was marked "yes"), or whether the university had not previously been awarded a centre in this domain (in which case the answer was "no"). The remaining centres were categorised by examining the CDT websites and the abstracts on Grants on the Web to determine whether they had previous cohorts.
  • The Outline column involved a similar exercise to the Renewal column; however, we were only able to go on the publicly available PDF released by EPSRC. Accordingly, centres were marked "yes" where we were able to confirm a link, "maybe" where we considered a link likely, and "no" where we were not able to identify a link at all.
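
As noted in the list above, the cost figures were scraped automatically; the original analysis used a Matlab script, and the Python sketch below simply illustrates the idea. The Grants on the Web URL pattern, the example grant reference and the regular expression are assumptions made for illustration, not a description of EPSRC's actual page markup.

    # Illustrative sketch of retrieving an award value for a grant reference from
    # EPSRC's Grants on the Web. The URL pattern, grant reference and regular
    # expression are assumptions; the original analysis used a Matlab script.
    import re
    import urllib.request

    def fetch_award_value(grant_ref: str) -> str | None:
        # Assumed URL pattern for a Grants on the Web grant page (illustrative only)
        url = f"https://gow.epsrc.ukri.org/NGBOViewGrant.aspx?GrantRef={grant_ref}"
        with urllib.request.urlopen(url) as response:
            html = response.read().decode("utf-8", errors="replace")
        # Take the first pound-sterling amount found on the page, e.g. "£4,231,186"
        match = re.search(r"£[\d,]+", html)
        return match.group(0) if match else None

    # Hypothetical grant reference, for illustration only
    print(fetch_award_value("EP/L015234/1"))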

Appendix C - CDT Partner Performance

This league table is primarily based upon the raw data provided by EPSRC and EPSRC's Grants on the Web website (used to identify the project partners). The Grade Point Average is the mean of the mid-term review scores for the CDTs that a partner organisation is involved in. Note that the average score for a CDT in the mid-term review was 2.76.