- Research article
Does patient satisfaction with general practice change over a decade?
BMC Family Practice volume 10, Article number: 13 (2009)
The Patient Participation Program (PPP) was a patient satisfaction survey endorsed by the Royal Australian College of General Practitioners and designed to assist general practitioners in continuous quality improvement (CQI). The survey was undertaken by 3,500 practices and over a million patients between 1994 and 2003. This study aimed to use pooled patient questionnaire data to investigate changes in satisfaction with primary care over time.
The results of 10 years of PPP surveys were analyzed with respect to 10 variables, including the year of completion, patient age, gender, practice size, attendance at other doctors, and whether the practice had previously undertaken the survey. Comparisons were made using Logistic Generalized Estimating Equations (LGEE).
There was a very high level of satisfaction with general practice in Australia (99% of respondents). An independent indicator of satisfaction was created by pooling the results of 12 questions. This new indicator had greater variance than the single overall satisfaction question. Participants had higher levels of satisfaction if they were male, older, did not attend other practitioners, or attended a smaller practice. A minimal improvement in satisfaction was detected in this pooled indicator for the second or third survey undertaken by a practice. There was, however, no statistically significant change in pooled satisfaction with the year of survey.
The very high level of satisfaction made change difficult to demonstrate. It is likely that this, together with the presentation of results, made it difficult for GPs to use the survey to improve their practices. A more useful survey would be more sensitive to negative patient opinions and would provide integrated feedback to GPs. At present, there are concerns about the usefulness of the PPP in continuous quality improvement in general practice.
There is an extensive literature on patient satisfaction with health care, but only a few instruments have been specifically designed and validated for use in continuous quality improvement (CQI). CQI is a management concept that uses repeated cycles of data gathering, analysis, action and reappraisal. It seeks consumer feedback and uses it to generate change and improvement in a service. Examples of such surveys include the General Practice Assessment Questionnaire (GPAQ) used by the National Health Service [1–4], and one designed by a European taskforce (EUROPEP) for comparative evaluation of health care quality between different countries in Europe [5–10].
The Patient Participation Program (PPP) is an Australian survey designed by the Royal Australian College of General Practitioners (RACGP) in 1992–93 and used in general practice until 2003 [11, 12]. Over a 10 year period, more than a million patients were surveyed from 3,500 general practices. GPs and practices chose to participate in order to earn points for practitioners' vocational registration and, later, for practice accreditation.
There are two versions of the survey, which we have named 45Q and 60Q according to the number of questions they contain. Each version covered a range of topics, including interaction with the doctor, accessibility of care and the range of services available within the practice. The survey was completed by the patient in the waiting room before and after a consultation. The initial 45Q survey was validated by factor analysis [11, 12]. In 1999 the instrument was modified to include additional questions designed for practice accreditation.
In the literature, the longest period over which patient satisfaction has been analysed in general practice is only 15 months, and that study showed that patient satisfaction improved over time. A thorough review of the literature found no articles measuring patient satisfaction over a ten year period. Unfortunately, the few longitudinal satisfaction studies that do exist, such as those originating from health funds in the United States, have had significant methodological limitations.
The aim of this study was to investigate whether patient satisfaction varied with practice characteristics and time. It was postulated that changes in patient satisfaction might, in part, reflect consumer/patient acceptance of broader changes in general practice.
The secondary aim was to determine whether undertaking the PPP program would improve subsequent patient satisfaction results from participating practices. This would be a reasonable assumption if the practices were undertaking CQI processes effectively.
The survey results were stored by the RACGP in numerous ASCII databases (ASCII is a standard 7-bit code for the transmission of data). The RACGP gave permission to undertake secondary data analyses, provided anonymity was maintained. The data were converted into two Excel spreadsheets and analysed using Logistic Generalized Estimating Equations (LGEE).
Development of indicators of satisfaction
Each of the two surveys contained a single question that enquired about the respondent's overall level of satisfaction with the practice (we named this variable "overall"). The four point answer scale was dichotomized into satisfied ('very satisfied' and 'satisfied') and unsatisfied ('dissatisfied' and 'very dissatisfied') responses. Despite excellent face validity, this question had problems as an indicator of satisfaction: there was very poor response variability (Figure 1), with more than 99% of respondents satisfied with their practice.
A separate indicator of satisfaction was derived, in the absence of any such indicator in the original survey. We chose 12 questions that represented a range of important determinants of satisfaction and had almost identical wording in the two versions of the questionnaire (45Q and 60Q); see Table 1. We named this indicator "multistat" (pooled results of multiple statistics). It was dichotomized into a group who were satisfied with all 12 items and a group who were dissatisfied with one or more items. A small pilot of 28 patients completing both the 45Q and 60Q surveys concurrently indicated 82% concordance between the "multistat" indicators derived from the two versions.
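The dichotomization described above can be sketched in a few lines of code. This is a hypothetical illustration, not the original RACGP processing code: the item names and the 1–4 response coding (1 = very satisfied … 4 = very dissatisfied) are assumptions.

```python
# Hypothetical sketch of deriving the "multistat" indicator.
# Item names and the 1-4 response coding are assumptions, not the RACGP schema.
ITEM_COLUMNS = [f"item_{i}" for i in range(1, 13)]  # the 12 shared questions

def multistat_dissatisfied(responses: dict) -> int:
    """Return 1 if the patient rated any of the 12 items 'dissatisfied' (3)
    or 'very dissatisfied' (4); return 0 if satisfied with all 12 items."""
    return int(any(responses[col] >= 3 for col in ITEM_COLUMNS))

fully_satisfied = {col: 1 for col in ITEM_COLUMNS}
one_complaint = dict(fully_satisfied, item_5=3)  # e.g. unhappy with one item
print(multistat_dissatisfied(fully_satisfied),
      multistat_dissatisfied(one_complaint))  # prints: 0 1
```

The "any item dissatisfied" rule is what gives the indicator its greater variance: a single negative answer among twelve is far more common than a negative answer to the single overall question.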
Multivariable analysis was undertaken comparing patient satisfaction, as measured by the "overall" and "multistat" indicators, with 10 independent variables: the patient's age, gender, years attending the practice, and whether they saw a doctor from another practice; the practice size; the practice location, using the Accessibility and Remoteness Index of Australia (ARIA code); socioeconomic status, using the Socio-Economic Indexes For Areas (SEIFA code); the year the survey was conducted; the number of times the practice had conducted the survey; and the questionnaire version used. Logistic Generalized Estimating Equations (LGEE) [17, 18] were chosen for the analysis. LGEE treats the data as discrete clusters, regarding all of the surveys from a single practice as correlated with one another but independent of surveys from other practices.
The completed database included surveys collected from 1,119,688 patients, representing 10,709 survey episodes undertaken by 3,554 distinct practices. We have no information on response rates. It was not possible to match 845 survey episodes (7.9%) to a known practice, and these results were excluded from analysis. The earliest survey was scanned on 12 January 1994 and the latest on 8 December 2003. After a peak of 218,033 in 1996, the number of surveys per year dropped to 28,448 in 2003. Figure 2 gives a breakdown of the number of patients surveyed each year.
Figure 1 illustrates the distribution of practices according to the proportion of dissatisfied responses for each of the two indicators (overall satisfaction and multistat) within discrete practices.
The median dissatisfaction rate for the "overall" indicator was only 0.5%, with an interquartile range of 0%–1.2%. The median dissatisfaction rate for the "multistat" indicator was more substantial at around 18%, with an interquartile range of 12%–26%.
It was noteworthy that two practices stood out with over 20% of patients dissatisfied ("overall"), and two practices had over 90% of patients dissatisfied with at least one of the 12 selected items ("multistat").
Within both surveys, the questions attracting the most dissatisfaction included appointment availability, access to home visits, access to after-hours care, waiting time, discussion of the costs of treatments and the cost of investigations.
Figure 3 demonstrates the level of dissatisfaction in each of the two indicators for each year of the survey. The apparent change in the multistat indicator in 1999 is presumably due to the switch from the 45Q to the 60Q survey.
Multivariable analysis showed that satisfaction was related to all the variables examined, except that the multistat indicator showed no relation to year of survey (i.e. time). Although the overall indicator demonstrated a significant change with time (p = 0.01), the size of this change was very small (Table 2), rendering the result unimportant.
The odds ratios (OR) for the independent variables in the multivariable analysis are listed in Tables 3 and 4. An OR greater than 1 indicates higher dissatisfaction. Dissatisfaction as measured by the "multistat" indicator diminished with advancing patient age, male gender, smaller practice size, not visiting other doctors, and attendance at practices in highly accessible areas (ARIA) and in high socioeconomic areas (SEIFA). Dissatisfaction slightly increased with the duration of attending a practice, particularly after the first year. All of these changes were statistically significant. The "overall" indicator gave similar results with the exception of gender, survey instrument, and years attending the practice, where satisfaction was influenced in the opposite direction.
Figure 4 compares satisfaction according to survey sequence (the first, second, third or subsequent survey conducted by a given practice); this should not be confused with the year of the survey. The change in the "overall" indicator did not reach significance. Although dissatisfaction on the multistat appeared to increase with each subsequent survey episode, the multivariable analysis indicates otherwise. The odds ratios (Table 4) show a small drop in dissatisfaction at the second and third surveys, with the fourth and fifth surveys returning to the same level of dissatisfaction as the initial survey. It is noteworthy that the relative magnitude of this decrease in dissatisfaction (multistat) between first and second surveys is only 7% (odds ratio = 1.07); in other words, dissatisfaction dropped only from around 21% to 19.5%.
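The arithmetic linking the odds ratio to these percentages can be checked directly. The sketch below is a worked example, not part of the original analysis: it converts a reference-group dissatisfaction rate and an odds ratio into the comparison-group rate.

```python
# Converting an odds ratio back to a probability in the comparison group.
def or_to_prob(p_ref: float, odds_ratio: float) -> float:
    """Probability in the comparison group, given the reference group's
    probability p_ref and the reference/comparison odds ratio."""
    odds_ref = p_ref / (1 - p_ref)
    odds_cmp = odds_ref / odds_ratio   # OR = odds_ref / odds_cmp
    return odds_cmp / (1 + odds_cmp)

# First-survey dissatisfaction ~21%, OR = 1.07 relative to the second survey:
p_second = or_to_prob(0.21, 1.07)
print(round(p_second * 100, 1))  # prints 19.9, close to the ~19.5% quoted
```

This confirms that an odds ratio of 1.07 at a ~21% baseline corresponds to an absolute change of well under two percentage points, supporting the paper's characterization of the improvement as meager.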
The primary aim of this analysis was to examine change in patient satisfaction over time. On multivariable analysis we found a significant change in the "overall" indicator but not in the more robust "multistat". The actual size of the change in "overall" satisfaction was less than 1 percent and must be regarded as inconsequential. Accordingly, we conclude there was no meaningful change with time.
We analyzed over a million surveys, so it is unlikely that the study lacked power.
It could be argued that the survey lacked sensitivity; however, it performs as well as other surveys. Female patients, younger patients and those who regularly attend other doctors exhibited more dissatisfied responses (multistat), and there was greater dissatisfaction with larger practices. These findings have been independently reported in other surveys [19–26]. Their replication offers a degree of criterion validity to the survey instruments.
It should be noted that the two indicators demonstrated opposite effects on some variables (gender, survey instrument, years attending the practice), suggesting that they have slightly different meanings. We have found the multistat to be more useful as it has greater response variability.
Alternatively, the effect of time could have been confounded by the change in survey instrument midway through the 10 year period. We attempted to ameliorate this by including the instrument as a variable in the multivariable analysis.
Despite the survey being a sincere attempt to provide practices with opportunities to undertake CQI, there may have been limitations in the administration of the instrument. It was not possible to control the selection of patients, or the introduction of bias by reception staff or practitioners. Regardless, it represents a real-life attempt at providing a survey to a large number of practices over a prolonged period of time.
It is surprising that patient satisfaction did not change during a decade that saw major changes to the structure of general practice in Australia, such as accreditation, Divisions of General Practice, changes in GP demographics, vocational registration and continuing medical education. It could be that patients remained satisfied despite these changes. It seems more likely that "patient satisfaction" as measured by the PPP survey did not capture satisfaction with the structure of general practice, on either the micro or macro scale. This implies that patient satisfaction may in fact be relatively stable over time.
The secondary aim was to compare satisfaction in practices undertaking the PPP for the first time with that in the second, third and subsequent surveys. Practices undertaking the program were required to review the results and identify changes that could be made to their practice. It was hypothesized that subsequent surveys should show improvement in patient satisfaction. Multivariable analysis indicated that only the more robust "multistat" indicator showed significant change, and this change was rather meager. It should be noted that the power of the analysis falls with sequence, as fewer practices undertook the larger numbers of surveys.
This small improvement in patients' perceptions was noted only for practices undertaking the program for the second and third time. The odds ratio of 1.07 (between first and second surveys) represents only a small change, from 21% to 19.5% dissatisfaction. The size of this change is so small as to be almost meaningless. If there is an improvement in patient satisfaction, it is eroded by the third cycle and completely lost by the fourth or fifth. In light of this result, the effectiveness of the PPP survey as an instrument for CQI should be regarded as questionable.
The study uncovered several deficiencies in the survey design. These included the lack of an integrated index like the "multistat" in feedback to GPs, and the very high level of satisfaction, leaving no room to register improvement. Although many patient surveys report high satisfaction levels, they often fail to uncover the negative opinions of respondents. In addition, it has been noted that GPs are not disposed to respond to negative information [13, 28–30]. There was evidence of this effect when we reviewed GP responses to their survey results.
In conclusion, the PPP failed to identify changes in patient satisfaction over time, and showed only a small, non-sustained improvement with subsequent cycles of the program. The small magnitude and transience of this initial improvement call its usefulness in CQI into question. Future surveys could be enhanced by addressing the major deficiencies of this one, namely the failure to elicit negative feedback from patients, the lack of an integrated index, and the failure to address GP attitudes to negative feedback.
45Q: 45 question survey
60Q: 60 question survey
ARIA: Accessibility and Remoteness Index of Australia
ASCII: American Standard Code for Information Interchange
CQI: continuous quality improvement
EUROPEP: general practice questionnaire designed by a European taskforce
GPAQ: General Practice Assessment Questionnaire
LGEE: Logistic Generalized Estimating Equations
multistat: patient satisfaction measure derived from multiple questionnaire statistics
PPP: Patient Participation Program
RACGP: Royal Australian College of General Practitioners
SEIFA: Socio-Economic Indexes For Areas
Ramsay J, Campbell JL, Schroter S, Green J, Roland M: The General Practice Assessment Survey (GPAS): tests of data quality and measurement properties. Fam Pract. 2000, 17 (5): 372-9. 10.1093/fampra/17.5.372.
Bower P, Roland MO: Bias in patient assessments of general practice: general practice assessment survey scores in surgery and postal responders. Br J Gen Pract. 2003, 53 (487): 126-8.
Bower P, Mead N, Roland M: What dimensions underlie patient responses to the General Practice Assessment Survey? A factor analytic study. Fam Pract. 2002, 19 (5): 489-95. 10.1093/fampra/19.5.489.
Campbell JL, Ramsay J, Green J: Age, gender, socioeconomic, and ethnic differences in patients' assessments of primary health care. Qual Health Care. 2001, 10 (2): 90-5. 10.1136/qhc.10.2.90.
Grol R, Wensing M: Patients in Europe evaluate general practice care: an international comparison. Br J Gen Pract. 2000, 50 (460): 882-7.
Klingenberg A, Bahrs O, Szecsenyi J: [How do patients evaluate general practice? German results from the European Project on Patient Evaluation of General Practice Care (EUROPEP)]. Z Arztl Fortbild Qualitatssich. 1999, 93 (6): 437-45.
Kroneman MW, Maarse H, Zee van der J: Direct access in primary care and patient satisfaction: a European study. Health Policy. 2006, 76 (1): 72-9. 10.1016/j.healthpol.2005.05.003.
Vedsted P, Mainz J, Lauritzen T, Olesen F: Patient and GP agreement on aspects of general practice care. Fam Pract. 2002, 19 (4): 339-43. 10.1093/fampra/19.4.339.
Wensing M, Vedsted P, Kersnik J, Peersman W, Klingenberg A, Hearnshaw H, Hjortdahl P, Paulus D, Kunzi B, Mendive J, Grol Rl: Patient satisfaction with availability of general practice: an international comparison. Int J Qual Health Care. 2002, 14 (2): 111-8.
Wensing M, Baker R, Szecsenyi J, Grol R: Impact of national health care systems on patient evaluations of general practice in Europe. Health Policy. 2004, 68 (3): 353-7. 10.1016/j.healthpol.2003.10.010.
Steven ID, Thomas SA, Eckerman E, Browning C, Dickens E: The provision of preventive care by general practitioners measured by patient completed questionnaires. J Qual Clin Pract. 1999, 19 (4): 195-201. 10.1046/j.1440-1762.1999.00332.x.
Steven ID, Thomas SA, Eckerman E, Browning C, Dickens E: A patient determined general practice satisfaction questionnaire. Aust Fam Physician. 1999, 28 (4): 342-8.
Vingerhoets E, Wensing M, Grol R: Feedback of patients' evaluations of general practice care: a randomised trial. Qual Health Care. 2001, 10 (4): 224-8. 10.1136/qhc.0100224.
Kahn KL, Liu H, Adams JL, Chen WP, Tisnado DM, Carlisle DM, Hays RD, Mangione CM, Damberg CL: Methodological challenges associated with patient responses to follow-up longitudinal surveys regarding quality of care. Health Serv Res. 2003, 38 (6 Pt 1): 1579-98. 10.1111/j.1475-6773.2003.00194.x.
Measuring Remoteness: Accessibility/Remoteness Index of Australia. Australian Government Department of Health and Ageing, Occasional Paper New Series: Number 14. 2001, [http://www.health.gov.au/internet/main/publishing.nsf/Content/7B1A5FA525DD0D39CA25748200048131/$File/ocpanew14.pdf]
Socio-Economic Indexes for Areas (SEIFA index). Australian Bureau of Statistics, Catalogue No. 2039.0 – Information Paper: An Introduction to Socio-Economic Indexes for Areas (SEIFA), 2006. Released 26/03/2008. [http://www.abs.gov.au/AUSSTATS/ABS@.NSF/Latestproducts/2039.0Main%20Features32006?opendocument&tabname=Summary&prodno=2039.0&issue=2006&num=&view=]
Carriere KC, Roos LL, Dover DC: Across Time and Space: Variations in Hospital Use During Health Reform. Health Services Research. 2000, 35 (2): 467-487.
Liang KY, Zeger SL: Longitudinal data analysis using generalized linear models. Biometrika. 1986, 73 (1): 13-22. 10.1093/biomet/73.1.13.
Grol R, Wensing M: Patients Evaluate General/family Practice; The Europep Instrument. Mediagroep KUN/UMC. 2000
Greco M, Brownlea A, McGovern J: Impact of patient feedback on the interpersonal skills of general practice registrars: results of a longitudinal study. Med Educ. 2001, 35 (8): 748-56. 10.1046/j.1365-2923.2001.00976.x.
Kalda R, Polluste K, Lember M: Patient satisfaction with care is associated with personal choice of physician. Health Policy. 2003, 64 (1): 55-62. 10.1016/S0168-8510(02)00160-4.
Baker R: Characteristics of practices, general practitioners and patients related to levels of patients' satisfaction with consultations. Br J Gen Pract. 1996, 46 (411): 601-5.
Wensing M, Vleuten van de C, Grol R, Felling A: The reliability of patients' judgements of care in general practice: how many questions and patients are needed?. Qual Health Care. 1997, 6 (2): 80-5. 10.1136/qshc.6.2.80.
McKinley RK, Manku-Scott T, Hastings AM, French DP, Baker R: Reliability and validity of a new measure of patient satisfaction with out of hours primary medical care in the United Kingdom: development of a patient questionnaire. BMJ. 1997, 314 (7075): 193-8.
McKinley RK, Stevenson K, Adams S, Manku-Scott TK: Meeting patient expectations of care: the major determinant of satisfaction with out-of-hours primary medical care?. Fam Pract. 2002, 19 (4): 333-8. 10.1093/fampra/19.4.333.
Campbell JL, Ramsay J, Green J: Practice size: impact on consultation length, workload, and patient assessment of care. Br J Gen Pract. 2001, 51 (469): 644-50.
Williams B, Coyle J, Healy D: The meaning of patient satisfaction: an explanation of high reported levels. Soc Sci Med. 1998, 47 (9): 1351-9. 10.1016/S0277-9536(98)00213-5.
Rider EA, Perrin JM: Performance profiles: the influence of patient satisfaction data on physicians' practice. Pediatrics. 2002, 109 (5): 752-7. 10.1542/peds.109.5.752.
Kvamme OJ, Sandvik L, Hjortdahl P: [Quality of general practice as experienced by patients]. Tidsskr Nor Laegeforen. 2000, 120 (21): 2503-6.
Wensing M, Vingerhoets E, Grol R: Feedback based on patient evaluations: a tool for quality improvement?. Patient Educ Couns. 2003, 51 (2): 149-53. 10.1016/S0738-3991(02)00199-4.
The pre-publication history for this paper can be accessed here:http://www.biomedcentral.com/1471-2296/10/13/prepub
South Australian Faculty of the Royal Australian College of General Practitioners, Primary Health Care Research Evaluation and Development Program, University of Adelaide – discipline of General Practice, Department of General Practice Monash University, my wife and family.
The authors declare that they have no competing interests.
JA conceived and supervised the project. This was undertaken as a Masters thesis by distance education from Monash University, with support from the University of Adelaide Discipline of General Practice and funding supplied by the Primary Health Care Research Evaluation and Development (PHC RED) Program. He negotiated with the Royal Australian College of General Practitioners (RACGP) for access to the data, converted the data into an Access database, proposed the research questions and drafted the manuscript. PS is Clinical Associate Professor in the Department of General Practice, Monash University. He supervised the Masters thesis, reviewing the proposal, analysis and manuscript. NS is Professor of the Discipline of General Practice at the University of Adelaide and Director of the Primary Health Care Research Evaluation and Development (PHC RED) Program at the University of Adelaide. He co-supervised the thesis. ER is a statistician within the Discipline of General Practice, University of Adelaide. She undertook the statistical modeling and reviewed the manuscript and analysis. All authors read and approved the final manuscript.