Open Access

A framework to evaluate research capacity building in health care

BMC Family Practice 2005, 6:44

DOI: 10.1186/1471-2296-6-44

Received: 12 June 2005

Accepted: 27 October 2005

Published: 27 October 2005



Building research capacity in health services has been recognised internationally as important in order to produce a sound evidence base for decision-making in policy and practice. Activities to increase research capacity for, within, and by practice include initiatives to support individuals and teams, organisations and networks. Little has been discussed or concluded about how to measure the effectiveness of research capacity building (RCB).


This article attempts to develop the debate on measuring RCB. It highlights that traditional outcomes of publications in peer reviewed journals and successful grant applications may be important outcomes to measure, but they may not address all the relevant issues needed to highlight progress, especially amongst novice researchers. They do not capture factors that contribute to developing an environment that supports capacity development, nor do they measure the usefulness or 'social impact' of research, or professional outcomes.

The paper suggests a framework for planning change and measuring progress, based on six principles of RCB, which have been generated through the analysis of the literature, policy documents, empirical studies, and the experience of one Research and Development Support Unit in the UK. These principles are that RCB should: develop skills and confidence, support linkages and partnerships, ensure the research is 'close to practice', develop appropriate dissemination, invest in infrastructure, and build elements of sustainability and continuity. It is suggested that each principle operates at individual, team, organisation and supra-organisational levels. Some criteria for measuring progress are also given.


This paper highlights the need to identify ways of measuring RCB. It points out the limitations of current measurements that exist in the literature, and proposes a framework for measuring progress, which may form the basis of comparison of RCB activities. In this way it could contribute to establishing the effectiveness of these interventions, and establishing a knowledge base to inform the science of RCB.


The need to develop a sound scientific research base to inform service planning and decision-making in health services is strongly supported in the literature [1], and policy [2]. However, the level of research activity and the ability to carry out research is limited in some areas of practice, resulting in a low evidence base in these areas. Primary care, for example, has been identified as having a poor capacity for undertaking research [3-5], and certain professional groups, for example nursing and allied health professionals, lack research experience and skills [5-7]. Much of the literature and the limited research on research capacity building (RCB) has therefore focused on this area of practice, and these professional groups. Policy initiatives to build research capacity include support in developing research for practice, where research is conducted by academics to inform practice decision-making; research within or through practice, where research is conducted in collaboration between academics and practice; and research by practice, where ideas are initiated and research is conducted by practitioners [3, 8].

The interventions to increase research capacity for, within, and by practice incorporate initiatives to support individuals and teams, organisations and networks. Examples include fellowships, training schemes and bursaries, and the development of support infrastructures, for example, research practice networks [9-13]. In the UK, the National Coordinating Centre for Research Capacity Development has supported links between universities and practice through funding a number of Research and Development Support Units (RDSUs) [14], which are based within universities, but whose purpose is to support new and established researchers based in the National Health Service (NHS). However, both policy advisers and researchers have highlighted a lack of evaluative frameworks to measure progress and build an understanding of what works [15, 16].

This paper argues for the need to establish a framework for planning and measuring progress, and to initiate a debate about what are appropriate outcomes for RCB, rather than simply relying on things that are easy to measure. The suggested framework has been generated through analysis of the literature, using policy documents, position statements, a limited number of empirical studies on evaluating RCB, and the experience of one large RDSU based in the UK.


The Department of Health within the UK has adopted the definition of RCB as 'a process of individual and institutional development which leads to higher levels of skills and greater ability to perform useful research' (p. 1321) [17].

Albert & Mickan cited the National Information Services in Australia [18], who define it as

'an approach to the development of sustainable skills, organizational structures, resources and commitment to health improvement in health and other sectors to multiply health gains many times over.'

RCB can therefore be seen as a means to an end, the end being 'useful' research that informs practice and leads to health gain, or as an end in itself, emphasising developments in skills and structures that enable research to take place.

A framework for measuring capacity building should therefore be inclusive of both process and outcome measures [19], to capture changes in both the 'ends' and 'means'; it should measure the ultimate goals, but also the steps and mechanisms to achieve them. The notion of measuring RCB by both process and outcome measures is supported within the research networks literature [12, 20], and in capacity building in health more generally [19, 21]. Some argue we should acknowledge 'process as outcome', particularly if capacity building is seen as an end in itself [21]. In this context process measures are 'surrogate' [12], or 'proxy', outcome measures [16]. Carter et al [16] urge caution in using 'proxy' measures in the context of RCB, as there is currently little evidence to link process with outcome. They do not argue against collecting process data, but stress that evaluation work should examine the relationship of process to outcome. The framework discussed in this paper suggests areas to consider for both process and outcome measurement.

The most commonly accepted outcomes for RCB cited in the literature include traditional measures of high quality research: publications, conference presentations, successful grant applications, and qualifications obtained. Many evaluations of RCB have used these as outcomes [9, 10, 22, 23]. Some argue that publications in peer reviewed journals are a tall order given the low research skills base in some areas of health care practice [5], and argue for an appropriate time frame to evaluate progress. Process measures in this context could measure progress more sensitively and quickly.

However, using traditional outcomes may not be the whole story in terms of measuring impact. Position statements suggest that the ultimate goal of research capacity building is one of health improvement [17, 18, 24]. In order for capacity building initiatives to address these issues, outcomes should also explore the direct impact on services and clients: what Smith [25] defines as the social impact of research.

There is a strong emphasis within the primary care literature that capacity building should enhance the ability of practitioners to build their research skills: to support the development of research 'by' and 'with' practice [3, 26], with 'added value' arising from such close links to practice. A framework to measure RCB should explore and try to unpack this 'added value', both in terms of professional outcomes [10], which include increasing professional enthusiasm, and in terms of supporting the application of critical thinking and the use of evidence in practice. Whilst doing research alongside practice is not the only way these skills and attitudes can be developed, it does seem to be an important impact of RCB that should be examined.

The notion of developing RCB close to practice does not necessarily mean that it is small scale just because it is close to the coal face. Obviously, in order for individuals and teams to build up a track record of experience, their initial projects may justifiably be small scale, but as individuals progress, they may gain the experience to conduct large scale studies, still based on practice problems, working in partnership with others. Similarly, networks can support large scale studies as their capacity and infrastructure is developed to accommodate them.

The framework

The framework is represented by Figure 1. It has two dimensions:
Figure 1

Research Capacity Building: A Framework for Evaluation.

Four structural levels of development activity. These include individual, team, organisational, and the network or supra-organisational support level (networks and support units). These are represented by the concentric circles within the diagram.

Six principles of capacity building. These are discussed in more detail below, but include: building skills and confidence, developing linkages and partnerships, ensuring the research is 'close to practice', developing appropriate dissemination, investments in infrastructure, and building elements of sustainability and continuity. Each principle is represented by an arrow within the diagram, which indicates activities and processes that contribute towards capacity building. The arrows cut across the structural levels, suggesting that activities and interventions may occur within, and across, structural levels. The arrow heads point in both directions, suggesting that principles applied to one structural level could have an impact on other levels.

The framework acknowledges that capacity building is conducted within a policy context. Whilst this paper focuses on measurement at different structural levels, it should be acknowledged that progress and impact on RCB can be greatly nurtured or restricted by the prevailing policy. Policy decisions will influence opportunities for developing researchers, can facilitate collaborations in research, support research careers, fund research directed by practice priorities, and can influence the sustainability and the very existence of supportive infrastructures such as research networks.

The paper will explain the rationale for the dimensions of the framework, and then suggest some examples of measurement criteria for each principle at different structural levels to evaluate RCB. It is hoped that as the framework is applied, further criteria will be developed and then used, taking into account time constraints, resources, and the purpose of such evaluations.

Structural levels at which capacity building takes place

The literature strongly supports that RCB should take place at an individual and organisational level [8, 15, 27, 28]. For example, the conceptual model for RCB in primary care put forward by Farmer & Weston [15] focuses particularly on individual General Practitioners (GPs) and primary care practitioners, who may progress from non-participation, through participation, to become academic leaders in research. Their model also acknowledges the context and organisational infrastructure needed to support RCB by reducing barriers and accommodating diversity through providing mentorship, collaborations and networking, and by adopting a whole systems approach based on local need and existing levels of capacity. Others have acknowledged that capacity development can be focussed at a team level [11, 29]. Jowett et al [30] found that GPs were more likely to be research active if they were part of a practice where others were involved with research. Guidance from a number of national bodies highlights the need for multiprofessional and inter-professional involvement in conducting useful research for practice [3, 4, 6, 31], which implies an appropriate mix of skills and practice experience within research teams to enable this [32]. Additionally, the organisational literature has identified the importance of teams in the production of knowledge [18, 33, 34].

Developing structures between and outside health organisations, including the development of research networks seems important for capacity building [12, 24, 34]. The Department of Health in the UK [14] categorizes this supra-organisational support infrastructure to include centres of academic activity, Research & Development Support Units, and research networks.

As interventions for RCB are targeted at different levels, the framework for measuring its effectiveness mirrors this. However, these levels should not be measured in isolation. One level can have an impact on capacity development at another level, and could potentially have a synergistic or detrimental effect on the other.

The six principles of research capacity building

Evaluation involves assessing the success of an intervention against a set of indicators or criteria [35, 36], which Meyrick and Sinkler [37] suggest should be based on underlying principles in relation to the initiative. For this reason the framework includes six principles of capacity building. The rationale for each principle is given below, along with a description of some suggested criteria for each principle. The criteria presented are not an exhaustive list. As the framework is developed and used in practice, a body of criteria will be developed and built on further.

Principle 1. Research capacity is built by developing appropriate skills and confidence, through training and creating opportunities to apply skills


The need to develop research skills in practitioners is well established [3, 4, 6], and can be supported through training [14, 26], and through mentorship and supervision [15, 24, 28]. There is some empirical evidence that research skill development increases research activity [23, 38], and enhances positive attitudes towards conducting and collaborating in research [39]. Other studies cite lack of training and research skills as a barrier to doing research [30, 31]. The need to apply and use research skills in practice is highlighted in order to build confidence [40] and to consolidate learning.

Some needs assessment studies highlight that research skills development should adopt 'outreach' and flexible learning packages, and acknowledge the skills, background and epistemologies of the professional groups concerned [7, 15, 39, 41, 42]. These include doctors, nurses, a range of allied health professionals and social workers. Developing an appropriate mix of professionals to support health services research means that training should be inclusive and appropriate to them, and adopt a range of methodologies and examples to support appropriate learning and experience [15, 31, 41]. How learning and teaching are undertaken, and whether the content of support programmes reflects the backgrounds, tasks and skills of participants, should therefore be measured. For example, the type of research methods teaching offered by networks and support units should reflect the range and balance of skills needed for health services research, including both qualitative and quantitative research methods.

Skills development should also be set in the context of career development, and further opportunities to apply skills to practice should be examined. Policy and position statements [14, 26] support the concept of career progression or a 'careers escalator', which also enables the sustainability of skills. Opportunities to apply research skills through applications for funding are also important [9, 10, 22, 43, 44].

At team and network level, Fenton et al [34] suggest that capacity can be increased through building intellectual capital (sharing knowledge), which enhances the ability to do research. Whilst there is no formal measure for this, an audit of the transfer of knowledge would appear to be beneficial. For example, teams may share expertise within a project to build skills in novice researchers [45], which can be tracked, and appropriate divisions of workload, such as reading research literature and sharing it with the rest of the team/network, could be noted.

The notion of stepping outside of a safety zone may also suggest increased confidence and ability to do research. This may be illustrated at an individual level by the practitioner-researcher taking on more of a management role, supervising others, tackling new methodologies or approaches in research, or working with other groups of health and research professionals on research projects. This approach is supported by the model of RCB suggested by Farmer and Weston [15], which supports progress from participation through to academic leadership.

Some examples of criteria for measuring skills and confidence levels are given in Table 1.
Table 1

Building skills and confidence

Structural level

Examples of suggested criteria

Individual


• Skills developed (and how)

• Evidence of progressive skill development

• Evidence of confidence building through sharing new skills with others, applying existing skills in new situations, working with other professional groups in research

• Research undertaken

Team


• Skills developed (and how)

• Skill mix of team

• Skill/knowledge transfer tracked- (intellectual capital)

• Evidence of progressive skill development

• Evidence of confidence building through sharing new skills with others, applying existing skills in new situations, working with other professional groups in research

• Research undertaken

Organisation


• Evidence of training research needs assessment

• Availability and use of training funds

• Evidence of outreach work undertaken in organisations

• Levels of skills within the workforce, and skill mix across groups

• Evidence of matching novice and experienced researchers

• Research undertaken, funding approved.

Supra-organisational (networks and support units)

• Provision of flexible learning packages

• Provision of training shaped around the skills, background and needs of differing professional groups

• Examples of knowledge/information transfer (through a variety of mechanisms, including workshops, web-based discussions forums)

• Evidence of outreach work, its take up and use

• Responses to needs based work

• Evidence of secondment opportunities offered and taken up.

Principle 2. Research capacity building should support research 'close to practice' in order for it to be useful


The underlying philosophy for developing research capacity in health is that it should generate research that is useful for practice. The North American Primary Care Group [24] defined the 'ultimate goal' of research capacity development as the generation and application of new knowledge to improve the health of individuals and families (p. 679). There is strong support that 'useful' research is that which is conducted 'close' to practice, for two reasons. Firstly, it generates research knowledge that is relevant to service user and practice concerns. Many argue that the most relevant and useful research questions are those generated by, or in consultation with, practitioners and services [3, 11, 24], policy makers [46] and service users [47, 48]. The level of 'immediate' usefulness [49] may also mean that messages are more likely to be taken up in practice [50]. Empirical evidence suggests that practitioners and policy makers are more likely to engage in research if they see its relevance to their own decision making [31, 39, 46]. The notion of building research that is 'close to practice' does not necessarily mean that the research is small scale, but that it is highly relevant to practice or policy concerns. A large network of practitioners could facilitate large scale, experimental projects, for example. However, certain methodologies are more favoured by practice because of their potential immediate impact [47], and this framework acknowledges such approaches and their relevance. These include action research projects and participatory inquiry [31, 42]. An example where this more participatory approach has been developed in capacity building is the WeLREN (West London Research Network) cycle [51]. Here research projects are developed in cycles of action, reflection, and dissemination, and use of findings is integral to the process. This network reports high levels of practitioner involvement.

Secondly, building research capacity 'close to practice' is useful because of the skills of critical thinking it engenders, which can also be applied to practice decision making [28], and which support quality improvement approaches in organisations [8]. Practitioners in a local bursary scheme, for example, said they were more able to take an evidence-based approach into their everyday practice [9].

Developing a 'research culture' within organisations suggests a closeness to practice that impacts on the ability of teams and individuals to do research. Lester et al [23] touched on measuring this idea through a questionnaire exploring aspects of a supportive culture within primary care academic departments, including opportunities to discuss career progression, supervision, formal appraisal, mentorship, and junior support groups. This may be a fruitful idea to expand further into a tool for the health care environment.

Some examples of criteria for measuring the close to practice principle are given in Table 2.
Table 2

Close to practice

Structural level

Examples of suggested criteria

Individuals and teams

• Evidence of clinical expertise and 'hunches' within the research questions and projects

• Examples of critical thinking used in practice

• Evidence of patient centred outcome measures in projects, and impact of project on patients' quality of life, including social capital and health gain.

• Use of methodologies that are action orientated

• Use of methodologies that include cost effectiveness approaches

• Evidence on level, and nature, of service user involvement

Organisation


• Evidence of informing research questions by gaps in knowledge at an organisational level

• Measures of a culture where research is 'valued, accepted, encouraged and enjoyed'

• Evidence of managerial support/involvement on research projects

• Evidence of supporting service user links in research

Supra-organisational (networks and support units)

• Evidence of research questions being developed from practice needs and priorities

• Co-ordination of research programmes between health organisations and university

• Development and use of outcomes measures useful for research and practice

• Development and use of cost effectiveness methodologies

• Action research orientated approaches undertaken

• Development of service user panels

Principle 3. Linkages, partnerships and collaborations enhance research capacity building


The notion of building partnerships and collaborations is integral to capacity building [19, 24]. It is the mechanism by which research skills and practice knowledge are exchanged, developed and enhanced [12], and by which research activity is conducted to address complex health problems [4]. The linkages between the practice world and that of academia may also enhance research use and impact [46].

The linkages that enhance RCB can exist between:

  • Universities and practice [4, 14, 43]

  • Novice and experienced researchers [22, 24, 51]

  • Different professional groups [2, 4, 20, 34]

  • Different health and care provider sectors [4, 31, 47, 52]

  • Service users, practitioners and researchers [47, 48]

  • Researchers and policy makers [46]

  • Different countries [28, 52]

  • Health and industry [53, 54]

It is suggested that it is through networking and building partnerships that intellectual capital (knowledge) and social capital (relationships) can be built, which enhances the ability to do research [12, 31, 34]. In particular, there is the notion that the build-up of trust between different groups and individuals can enhance information and knowledge exchange [12]. This may not only have benefits for the development of appropriate research ideas, but also for the whole of the research process, including the impact of research findings.

The notion of building links with industry is becoming increasingly evident within policy in the UK [54], which may impact on economic outcomes for health organisations and society as a whole [55, 56].

Some examples of criteria for measuring linkages and collaborations are given in Table 3.
Table 3

Linkages, collaborations and partnerships.

Structural level

Examples of suggested criteria

Individual


• Who they have worked with: to gain knowledge and to share knowledge

• Evidence of increased number of research partnerships

• Evidence of inter-professional working

Team


• Who the team has worked with: academic and practice

• Network development (work with other teams)

• Evidence of inter-professional and other links

Organisation


• Links with universities/RDSUs

• Evidence of joint posts with university

• Evidence of working with other service organisations on research

• Evidence of contribution/memberships to Networks

• Work with funding bodies

Supra-organisational (networks and support units)

• Joint posts hosted

• Evidence of research collaboration with practitioners, teams, networks and organisations in health care practice

• Development of links across networks

• International links

Principle 4. Research capacity building should ensure appropriate dissemination to maximize impact


A widely accepted measure to illustrate the impact of RCB is the dissemination of research in peer reviewed publications, and through conference presentations to academic and practice communities [5, 12, 26, 57]. However, this principle extends beyond these more traditional methods of dissemination. The litmus test that ultimately determines the success of capacity building is that it should impact on practice, and on the health of patients and communities [24]: that is, the social impact of research [25]. Smith [25] argues that strategies of dissemination should include a range of methods that are 'fit for purpose'. This includes traditional dissemination, but also other methods, for example, instruments and programmes of care implementation, protocols, lay publications, and publicity through factsheets, the media and the Internet.

Dissemination and tracking the use of products and technologies arising from RCB should also be considered, as these relate to economic outcomes of capacity building [55]. In the UK, the notion of building health trusts as innovative organisations, which can benefit economically through building intellectual property, highlights this as an area for potential measurement [56].

Some examples of criteria for measuring appropriate dissemination are given in Table 4.
Table 4

Appropriate dissemination and impact

Structural level

Examples of suggested criteria

Individuals and Teams

• Papers in research and practice journals

• Conference presentations

• Applied dissemination of findings

• Evidence of influence on local strategy and planning

Organisation


• Ease of access to research undertaken locally

• Seminar programmes relating to research undertaken

• Examples of evidence based practice and applying locally developed knowledge in strategy, policy and practice

• Funding to support practitioners and teams to disseminate findings

• Successful applications for intellectual property submitted based on R&D developed in organisation

Supra-organisational (networks and support units)

• Papers focussing on health services research, written with practitioners

• Conference presentations at practice- focussed conferences

• Applied dissemination

• Innovative dissemination

• Successful applications for intellectual property submitted based on R&D developed in partnership with health organisations

Principle 5. Research capacity building should include elements of continuity and sustainability


Definitions of capacity building suggest that it should contain elements of sustainability, which alludes to the maintenance and continuity of newly acquired skills and structures to undertake research [18, 19]. However, the literature does not explore this concept well [19]. This may be partly due to problems around measuring capacity building: it is difficult to know how well an initiative is progressing, and how well progress is consolidated, if there are no benchmarks or outcomes against which to demonstrate this.

Crisp et al [19] suggest that capacity can be sustained by applying skills to practice. This gives us some insight about where we might look for measures of sustainability. It could include enabling opportunities to extend skills and experience, and may link into the concept of a career escalator. It also involves utilizing the capacity that has already been built: for example, engaging those who gained skills in earlier RCB initiatives to help more novice researchers once they have become 'experts', and finding an appropriate position for the person with expertise within the organisation. It could also be measured by the number of opportunities for funding for continued application of skills to research practice.

Some examples of criteria for measuring sustainability and continuity are given in Table 5.
Table 5

Continuity and sustainability

Structural level

Examples of suggested criteria

Individual


• Successful access to funding for continued application of skills (grants and fellowships)

• Continued contacts with collaborators/linkages

• Examples of continued support and supervision arrangements

Team


• Recognition and matching of skills

• Successful access to funding for continued application of skills

Organisation


• Secondment opportunities, available and used

• Local responsive funding access and use

• Recognition and matching of skills

• Examples of continued collaboration

Supra-organisational (networks and support units)

• Examples of continued collaboration

• Linked support within career pathways

• Fellowships supported

Principle 6. Appropriate infrastructures enhance research capacity building


Infrastructure includes the structures and processes set up to enable the smooth and effective running of research projects. For example, project management skills are essential to enable projects to move forward, and as such should be measured in relation to capacity building. Similarly, projects should be suitably supervised, with academic and management support. To make research work 'legitimate', it may be beneficial to make research part of job descriptions for certain positions, not only to reinforce research as a core skill and activity, but also so that it can be reviewed in annual appraisals, which can be a tool for research capacity evaluation. Information flow about calls for funding, fellowships and conferences is also important. Hurst [42] found that information flow varied between trusts, and that managers were more aware of research information than practitioners.

Protected time and backfill arrangements, as well as funding to support them, are an important principle to enable capacity building [9, 15, 24, 58]. Such arrangements may reduce barriers to participation and enable skills and enthusiasm to be developed [15]. Infrastructure to help direct new practitioners to research support has also been highlighted [14]. This is particularly true in the light of the new research governance and research ethics framework in the UK [59]. The reality of implementing systems to deal with the complexities of the research governance regulations has proved problematic, particularly in primary care, where the relative lack of research management expertise and infrastructure has resulted in what are perceived as disproportionately bureaucratic systems. Recent discussion in the literature has focused on the detrimental impact of both ethical review and NHS approval systems, and there is evidence of serious delays in getting research projects started [60]. Administrative and support staff to help researchers through this process are important to enable research to take place [61].

Some examples of criteria for measuring progress are given in Table 6.
Table 6

Structural level | Examples of suggested criteria

Individual

• Evidence of project management in projects (objective setting with time scales)

• A description of mentorship and supervision structures

• Research is part of job description and reviewed in annual appraisal

Team

• Evidence of project management in projects

• A description of mentorship and supervision

• Protected time taken

Organisation

• Evidence of R&D information dissemination strategies

• Use and availability of protected time

• Evidence of backfill availability and use

• Research is part of annual appraisal for some jobs

• Evidence of help with governance and ethics

Supra-organisational (networks and support units)

• The nature of collaborations (co-authorship, order of authorship)

• Organisation of information exchange events, with a description of attendance

This paper suggests a framework which sets out a tentative structure for beginning to measure the impact of capacity building interventions, and invites debate on applying this framework to plan and measure progress. It highlights that interventions can focus on individuals, teams, organisations, and support infrastructures such as RDSUs and research networks. However, capacity building may only take place once change has occurred at more than one level: for example, the culture of the organisation in which teams and individuals work may influence their abilities and opportunities to do research. The interplay between different levels may also affect outcomes at other levels. Measuring progress should therefore yield a greater understanding of the relationships between levels, and the framework proposed in this paper may be a first step towards doing this.

The notion of building capacity at any structural level depends on funding and support opportunities, which are influenced by policy and funding bodies. The ability to build capacity across the principles developed in the framework will also depend on R&D strategy and policy decisions. For example, if policy fluctuates in its emphasis on building capacity 'by', 'for' or 'with' practice, the ability to build capacity close to practice will be affected.

In terms of developing a science of RCB, there is a need to capture further process and outcome data in order to understand what helps develop 'useful' and 'useable' research. The paper suggests principles from which a number of indicators could be developed. The list is not exhaustive, and it is hoped that through debate and application of the framework further indicators will emerge.

An important first step in building the science of RCB should be debate about identifying appropriate outcomes. This paper supports the use of traditional outcome measures, including publications in peer reviewed journals and conference presentations, which assure quality and engage critical review and debate. However, it also suggests that we might move beyond these outcomes in order to capture the social impact of research, and supports the development of outcomes which measure how research has affected the quality of services and the lives of patients and communities. This includes shaping the types of methodology that capacity building interventions support: incorporating patient centred outcomes in research designs, highlighting issues such as the cost effectiveness of interventions, exploring the economic impact of research in terms of both product outputs and health gain, and developing action oriented and user involvement methodologies that describe and demonstrate impact. It may also mean tracking the types of linkage and collaboration built through RCB, as linkages that are close to practice, including those with policy makers and practitioners, may enhance research use and therefore 'usefulness'. If we are to measure progress through impact and change in practice, an appropriate time frame would have to be established alongside these measures.

This paper argues that 'professional outcomes' should also be measured, to recognize how critical thinking developed during research impacts on clinical practice more generally.

Finally, the proposed framework provides the basis by which we can build a body of evidence to link process to the outcomes of capacity building. By gathering process data and linking it to appropriate outcomes, we can more clearly unpack the 'black box' of process, and investigate which processes link to desired outcomes. It is through adopting such a framework, and testing out these measurements, that we can systematically build a body of knowledge that will inform the science and the art of capacity building in health care.


• There is currently little evidence on how to plan and measure progress in research capacity building (RCB), and little agreement on how to determine its ultimate outcomes.

• Traditional outcomes, such as publications in peer reviewed journals and successful grant applications, may be the easiest and most important outcomes to measure, but they do not necessarily address the usefulness of research, professional outcomes, the impact of research activity on practice, or health gain.

• The paper suggests a framework which provides a tentative structure for measuring the impact of RCB, shaped around six principles of research capacity building, each of which can be applied at four structural levels.

• The framework could be the basis on which RCB interventions are planned and progress measured. It could act as a basis of comparison across interventions, and could contribute to establishing a knowledge base on what is effective in RCB in healthcare.



My warm thanks go to my colleagues in the primary care group of the Trent RDSU for reading and commenting on earlier drafts of this paper, and for their continued support in practice.

Authors’ Affiliations

Primary Care and Social Care Lead, Trent Research and Development Unit, formerly, Trent Focus Group, ICOSS Building, The University of Sheffield


  1. Muir Gray JA: Evidence-based Healthcare: How to make health policy and management decisions. 1997, Edinburgh: Churchill Livingstone
  2. Department of Health: Research and Development for a First Class Service. 2000, Leeds: DoH
  3. Mant D: National working party on R&D in primary care. Final report. 1997, London: NHSE South and West
  4. Department of Health: Strategic review of the NHS R&D Levy (The Clarke Report). 1999, Central Research Department, Department of Health, 11
  5. Campbell SM, Roland M, Bentley E, Dowell J, Hassall K, Pooley J, Price H: Research capacity in UK primary care. British Journal of General Practice. 1999, 49: 967-970.
  6. Department of Health: Towards a strategy for nursing research and development. 2000, London: Department of Health
  7. Ross F, Vernon S, Smith E: Mapping research in primary care nursing: Current activity and future priorities. NT Research. 2002, 7: 46-59.
  8. Marks L, Godfrey M: Developing Research Capacity within the NHS: A summary of the evidence. 2000, Leeds: Nuffield Portfolio Programme Report
  9. Lee M, Saunders K: Oak trees from acorns? An evaluation of local bursaries in primary care. Primary Health Care Research and Development. 2004, 5: 93-95. 10.1191/1463423604pc197xx
  10. Bateman H, Walter F, Elliott J: What happens next? Evaluation of a scheme to support primary care practitioners with a fledgling interest in research. Family Practice. 2004, 21: 83-86. 10.1093/fampra/cmh118
  11. Smith LFP: Research general practices: what, who and why? British Journal of General Practice. 1997, 47: 83-86.
  12. Griffiths F, Wild A, Harvey J, Fenton E: The productivity of primary care research networks. British Journal of General Practice. 2000, 50: 913-915.
  13. Fenton F, Harvey J, Griffiths F, Wild A, Sturt J: Reflections from organization science of primary health care networks. Family Practice. 2001, 18: 540-544. 10.1093/fampra/18.5.540
  14. Department of Health: Research Capacity Development Strategy. 2004, London: Department of Health
  15. Farmer E, Weston K: A conceptual model for capacity building in Australian primary health care research. Australian Family Physician. 2002, 31: 1139-1142.
  16. Carter YH, Shaw S, Sibbald B: Primary care research networks: an evolving model meriting national evaluation. British Journal of General Practice. 2000, 50: 859-860.
  17. Trostle J: Research capacity building and international health: Definitions, evaluations and strategies for success. Social Science and Medicine. 1992, 35: 1321-1324. 10.1016/0277-9536(92)90035-O
  18. Albert E, Mickan S: Closing the gap and widening the scope. New directions for research capacity building in primary health care. Australian Family Physician. 2002, 31: 1038-1041.
  19. Crisp BR, Swerissen H, Duckett SJ: Four approaches to capacity building in health: consequences for measurement and accountability. Health Promotion International. 2000, 15: 99-107. 10.1093/heapro/15.2.99
  20. Ryan, Wyke S: The evaluation of primary care research networks in Scotland. British Journal of General Practice. 2001, 154-155.
  21. Gillies P: Effectiveness of alliances and partnerships for health promotion. Health Promotion International. 1998, 13: 99-120. 10.1093/heapro/13.2.99
  22. Pitkethly M, Sullivan F: Four years of TayRen, a primary care research and development network. Primary Care Research and Development. 2003, 4: 279-283. 10.1191/1463423603pc167oa
  23. Lester H, Carter YH, Dassu D, Hobbs F: Survey of research activity, training needs, departmental support, and career intentions of junior academic general practitioners. British Journal of General Practice. 1998, 48: 1322-1326.
  24. North American Primary Care Research Group: What does it mean to build research capacity? Family Medicine. 2002, 34: 678-684.
  25. Smith R: Measuring the social impact of research. BMJ. 2001, 323: 528. 10.1136/bmj.323.7312.528
  26. Sarre G: Capacity and activity in research project (CARP): supporting R&D in primary care trusts. 2002
  27. Del Mar C, Askew D: Building family/general practice research capacity. Annals of Family Medicine. 2004, 2: 535-540.
  28. Carter YH, Shaw S, Macfarlane F: Primary care research team assessment (PCRTA): development and evaluation. Occasional Paper (Royal College of General Practitioners). 2002, 81: 1-72.
  29. Jowett S, Macleod J, Wilson S, Hobbs F: Research in primary care: extent of involvement and perceived determinants among practitioners for one English region. British Journal of General Practice. 2000, 50: 387-389.
  30. Cooke J, Owen J, Wilson A: Research and development at the health and social care interface in primary care: a scoping exercise in one National Health Service region. Health and Social Care in the Community. 2002, 10: 435-444. 10.1046/j.1365-2524.2002.00395.x
  31. Raghunath AS, Innes A: The case of multidisciplinary research in primary care. Primary Care Research and Development. 2004, 5: 265-273.
  32. Reagans R, Zuckerman EW: Networks, diversity and productivity: The social capital of corporate R&D teams. Organization Science. 2001, 12: 502-517. 10.1287/orsc.12.4.502.10637
  33. Ovretveit J: Evaluating Health Interventions. 1998, Buckingham: Open University Press
  34. Meyrick J, Sinkler P: An Evaluation Resource for Healthy Living Centres. 1999, London: Health Education Authority
  35. Hakansson A, Henriksson K, Isacsson A: Research methods courses for GPs: ten years' experience in southern Sweden. British Journal of General Practice. 2000, 50: 811-812.
  36. Bacigalupo B, Cooke J, Hawley M: Research activity, interest and skills in a health and social care setting: a snapshot of a primary care trust in Northern England. Health and Social Care in the Community.
  37. Kernick D: Evaluating primary care research networks - exposing a wider agenda. British Journal of General Practice. 2001, 51: 63.
  38. Owen J, Cooke J: Developing research capacity and collaboration in primary care and social care: is there enough common ground? Qualitative Social Work. 2004, 3: 398-410. 10.1177/1473325004048022
  39. Hurst: Building a research conscious workforce. Journal of Health Organization and Management. 2003, 17: 373-384.
  40. Gillibrand WP, Burton C, Watkins GG: Clinical networks for nursing research. International Nursing Review. 2002, 49: 188-193. 10.1046/j.1466-7657.2002.00124.x
  41. Campbell J, Longo D: Building research capacity in family medicine: Evaluation of the Grant Generating Project. Journal of Family Practice. 2002, 51: 593.
  42. Cooke J, Nancarrow S, Hammersley V, Farndon L, Vernon W: The "Designated Research Team" approach to building research capacity in primary care. Primary Health Care Research and Development.
  43. Innvaer S, Vist G, Trommald M, Oxman A: Health policy-makers' perceptions of their use of evidence: a systematic review. Journal of Health Services Research and Policy. 2002, 7: 239-244. 10.1258/135581902320432778
  44. NHS Service Delivery Organisation: NHS Service Delivery and Organisation National R&D Programme, National Listening Exercise. 2000, London: NHS SDO
  45. Hanley J, Bradburn S, Gorin M, Barnes M, Evans C, Goodare HB: Involving consumers in research and development in the NHS: briefing notes for researchers. 2000, Winchester: Consumers in NHS Research Support Unit
  46. Frenk J: Balancing relevance and excellence: organisational responses to link research with decision making. Social Science and Medicine. 1992, 35: 1397-1404. 10.1016/0277-9536(92)90043-P
  47. National Audit Office: An international review on governments' research procurement strategies. A paper in support of Getting the Evidence: Using research in policy making. 2003, London: The Stationery Office
  48. Thomas P, While A: Increasing research capacity and changing the culture of primary care towards reflective inquiring practice: the experience of West London Research Network (WeLReN). Journal of Interprofessional Care. 2001, 15: 133-139. 10.1080/13561820120039865
  49. Rowlands G, Crilly T, Ashworth M, Mager J, Johns C, Hilton S: Linking research and development in primary care: primary care trusts, primary care research networks and primary care academics. Primary Care Research and Development. 2004, 5: 255-263. 10.1191/1463423604pc201oa
  50. Davies S: R&D for the NHS - Delivering the research agenda: London. 2005, National Coordinating Centre for Research Capacity Development
  51. Department of Health: Best Research for Best Health: A New National Health Research Strategy. The NHS contribution to health research in England: A consultation. 2005, London: Department of Health
  52. Buxton M, Hanney S, Jones T: Estimating the economic value to societies of the impact of health research: a critical review. Bulletin of the World Health Organization. 2004, 82: 733-739.
  53. Department of Health: The NHS as an innovative organisation: A framework and guidance on the management of intellectual property in the NHS. 2002, London: Department of Health
  54. Sarre G: Trent Focus. Supporting research and development in primary care organisations: report of the capacity and activity in research project (CARP). 2003
  55. Department of Health: Research Governance Framework for Health and Social Care. 2001, London: Department of Health
  56. Hill J, Foster N, Hughes R, Hay E: Meeting the challenges of research governance. Rheumatology. 2005, 44: 571-572. 10.1093/rheumatology/keh579
  57. Shaw S: Developing research management and governance capacity in primary care organizations: transferable learning from a qualitative evaluation of UK pilot sites. Family Practice. 2004, 21: 92-98. 10.1093/fampra/cmh120
Pre-publication history

The pre-publication history for this paper can be accessed here:


© Cooke; licensee BioMed Central Ltd. 2005

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.