How well do doctors think they perform on the General Medical Council's Tests of Competence pilot examinations? A cross-sectional study
Main Authors: | Mehdizadeh, Leila; Sturrock, Alison; Myers, Gil; Khatib, Yasmin; Dacre, Jane |
Format: | Online Article Text |
Language: | English |
Published: | BMJ Publishing Group, 2014 |
Subjects: | Medical Education and Training |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3918998/ https://www.ncbi.nlm.nih.gov/pubmed/24503300 http://dx.doi.org/10.1136/bmjopen-2013-004131 |
_version_ | 1782303010799484928 |
author | Mehdizadeh, Leila; Sturrock, Alison; Myers, Gil; Khatib, Yasmin; Dacre, Jane |
author_facet | Mehdizadeh, Leila; Sturrock, Alison; Myers, Gil; Khatib, Yasmin; Dacre, Jane |
author_sort | Mehdizadeh, Leila |
collection | PubMed |
description | OBJECTIVE: To investigate how accurately doctors estimated their performance on the General Medical Council's Tests of Competence pilot examinations. DESIGN: A cross-sectional survey design using a questionnaire method. SETTING: University College London Medical School. PARTICIPANTS: 524 medical doctors working in a range of clinical specialties between foundation year two and consultant level. MAIN OUTCOME MEASURES: Estimated and actual total scores on a knowledge test and Observed Structured Clinical Examination (OSCE). RESULTS: The pattern of results for OSCE performance differed from the results for knowledge test performance. The majority of doctors significantly underestimated their OSCE performance, whereas estimated knowledge test performance differed between high and low performers. Those who did particularly well significantly underestimated their knowledge test performance (t (196)=−7.70, p<0.01) and those who did less well significantly overestimated (t (172)=6.09, p<0.01). There were also significant differences between estimated and/or actual performance by gender, ethnicity and region of Primary Medical Qualification. CONCLUSIONS: Doctors were more accurate in predicting their knowledge test performance than their OSCE performance. The association between estimated and actual knowledge test performance supports the established differences between high and low performers described in the behavioural sciences literature. This was not the case for the OSCE. The implications of the results for the revalidation process are discussed. |
format | Online Article Text |
id | pubmed-3918998 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2014 |
publisher | BMJ Publishing Group |
record_format | MEDLINE/PubMed |
spelling | pubmed-3918998 2014-02-11 How well do doctors think they perform on the General Medical Council's Tests of Competence pilot examinations? A cross-sectional study Mehdizadeh, Leila Sturrock, Alison Myers, Gil Khatib, Yasmin Dacre, Jane BMJ Open Medical Education and Training OBJECTIVE: To investigate how accurately doctors estimated their performance on the General Medical Council's Tests of Competence pilot examinations. DESIGN: A cross-sectional survey design using a questionnaire method. SETTING: University College London Medical School. PARTICIPANTS: 524 medical doctors working in a range of clinical specialties between foundation year two and consultant level. MAIN OUTCOME MEASURES: Estimated and actual total scores on a knowledge test and Observed Structured Clinical Examination (OSCE). RESULTS: The pattern of results for OSCE performance differed from the results for knowledge test performance. The majority of doctors significantly underestimated their OSCE performance, whereas estimated knowledge test performance differed between high and low performers. Those who did particularly well significantly underestimated their knowledge test performance (t (196)=−7.70, p<0.01) and those who did less well significantly overestimated (t (172)=6.09, p<0.01). There were also significant differences between estimated and/or actual performance by gender, ethnicity and region of Primary Medical Qualification. CONCLUSIONS: Doctors were more accurate in predicting their knowledge test performance than their OSCE performance. The association between estimated and actual knowledge test performance supports the established differences between high and low performers described in the behavioural sciences literature. This was not the case for the OSCE. The implications of the results for the revalidation process are discussed. |
BMJ Publishing Group 2014-02-06 /pmc/articles/PMC3918998/ /pubmed/24503300 http://dx.doi.org/10.1136/bmjopen-2013-004131 Text en Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 3.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/3.0/ |
spellingShingle | Medical Education and Training Mehdizadeh, Leila Sturrock, Alison Myers, Gil Khatib, Yasmin Dacre, Jane How well do doctors think they perform on the General Medical Council's Tests of Competence pilot examinations? A cross-sectional study |
title | How well do doctors think they perform on the General Medical Council's Tests of Competence pilot examinations? A cross-sectional study |
title_full | How well do doctors think they perform on the General Medical Council's Tests of Competence pilot examinations? A cross-sectional study |
title_fullStr | How well do doctors think they perform on the General Medical Council's Tests of Competence pilot examinations? A cross-sectional study |
title_full_unstemmed | How well do doctors think they perform on the General Medical Council's Tests of Competence pilot examinations? A cross-sectional study |
title_short | How well do doctors think they perform on the General Medical Council's Tests of Competence pilot examinations? A cross-sectional study |
title_sort | how well do doctors think they perform on the general medical council's tests of competence pilot examinations? a cross-sectional study |
topic | Medical Education and Training |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3918998/ https://www.ncbi.nlm.nih.gov/pubmed/24503300 http://dx.doi.org/10.1136/bmjopen-2013-004131 |
work_keys_str_mv | AT mehdizadehleila howwelldodoctorsthinktheyperformonthegeneralmedicalcouncilstestsofcompetencepilotexaminationsacrosssectionalstudy AT sturrockalison howwelldodoctorsthinktheyperformonthegeneralmedicalcouncilstestsofcompetencepilotexaminationsacrosssectionalstudy AT myersgil howwelldodoctorsthinktheyperformonthegeneralmedicalcouncilstestsofcompetencepilotexaminationsacrosssectionalstudy AT khatibyasmin howwelldodoctorsthinktheyperformonthegeneralmedicalcouncilstestsofcompetencepilotexaminationsacrosssectionalstudy AT dacrejane howwelldodoctorsthinktheyperformonthegeneralmedicalcouncilstestsofcompetencepilotexaminationsacrosssectionalstudy |