Which of the following are recommended actions after you finish a research interview?

How well have we achieved our original aim and objectives?

The initially stated overarching aim of this research was to identify the contextual factors and mechanisms that are regularly associated with effective and cost-effective public involvement in research. While recognising the limitations of our analysis, we believe we have largely achieved this in our revised theory of public involvement in research set out in Chapter 8. We have developed and tested this theory of public involvement in research in eight diverse case studies; this has highlighted important contextual factors, in particular PI leadership, which had not previously been prominent in the literature. We have identified how this critical contextual factor shapes key mechanisms of public involvement, including the identification of a senior lead for involvement, resource allocation for involvement and facilitation of research partners. These mechanisms then lead to specific outcomes in improving the quality of research, notably recruitment strategies and materials and data collection tools and methods. We have identified a ‘virtuous circle’ of feedback to research partners on their contribution leading to their improved confidence and motivation, which facilitates their continued contribution. Following feedback from the HS&DR Board on our original application we did not seek to assess the cost-effectiveness of different mechanisms of public involvement but we did cost the different types of public involvement as discussed in Chapter 7. A key finding is that many research projects undercost public involvement.

In our original proposal we emphasised our desire to include case studies involving young people and families with children in the research process. We recruited two studies involving parents of young children aged under 5 years, and two projects involving ‘older’ young people in the 18- to 25-years age group. We recognise that in doing this we missed studies involving children and young people aged under 18 years; in principle we would have liked to have included studies involving such children and young people, but, given the resources at our disposal and the additional resource, ethical and governance issues this would have entailed, we regretfully concluded that this would not be feasible for our study. In terms of the four studies with parental and young persons’ involvement that we did include, we have not done a separate analysis of their data, but the themes emerging from those case studies were consistent with our other case studies and contributed to our overall analysis.

In terms of the initial objectives, we successfully recruited the sample of eight diverse case studies and collected and analysed data from them (objective 1). As intended, we identified the outcomes of involvement from multiple stakeholders’ perspectives, although we did not get as many research partners’ perspectives as we would have liked – see limitations below (objective 2). It was more difficult than expected to track the impact of public involvement from project inception through to completion (objective 3), as all of our projects turned out to have longer time scales than our own. Even to track involvement over a stage of a case study research project proved difficult, as the research usually did not fall into neatly staged time periods and one study had no involvement activity over the study period.

Nevertheless, we were able to track seven of the eight case studies prospectively and in real time over time periods of up to 9 months, giving us an unusual window on involvement processes that have previously mainly been observed retrospectively. We were successful in comparing the contextual factors, mechanisms and outcomes associated with public involvement from different stakeholders’ perspectives and costing the different mechanisms for public involvement (objective 4). We only partly achieved our final objective of undertaking a consensus exercise among stakeholders to assess the merits of the realist evaluation approach and our approach to the measurement and valuation of economic costs of public involvement in research (objective 5). A final consensus event was held, where very useful discussion and amendment of our theory of public involvement took place, and the economic approach was discussed and helpfully critiqued by participants. However, as our earlier discussions developed more fully than expected, we decided to let them continue rather than interrupt them in order to run the final exercise to assess the merits of the realist evaluation approach. We did, however, test our analysis with all our case study participants by sending a draft of this final report for comment. We received a number of helpful comments and corrections but no disagreement with our overall analysis.

What were the limitations of our study?

Realist evaluation is a relatively new approach and we recognise that there were a number of limitations to our study. We sought to follow the approach recommended by Pawson, but we acknowledge that we were not always able to do so. In particular, our theory of public involvement in research evolved over time and initially was not as tightly framed in terms of a testable hypothesis as Pawson recommends. In his latest book Pawson strongly recommends that outcomes should be measured with quantitative data,17 but we did not do so; we were not aware of the existence of quantitative data or tools that would enable us to collect such data to answer our research questions. Even in terms of qualitative data, we did not capture as much information on outcomes as we initially envisaged. There were several reasons for this. The most important was that capturing outcomes in public involvement is easier the more operational the focus of involvement, and more difficult the more strategic the involvement. Thus, it was relatively easy to see the impact of a patient panel on the redesign of a recruitment leaflet but harder to capture the impact of research partners in a multidisciplinary team discussion of research design.

We also found it was sometimes more difficult to engage research partners as participants in our research than researchers or research managers. On reflection this is not surprising. Research partners are generally motivated to take part in research relevant to their lived experience of a health condition or situation, whereas our research was quite detached from their lived experience; in addition, people had many constraints on their time, so getting involved in our research as well as their own was likely to be a burden too far for some. Researchers clearly also face significant time pressures but they had a more direct interest in our research, as they are obliged to engage with public involvement to satisfy research funders such as the NIHR. Moreover, researchers were being paid by their employers for their time during interviews with us, while research partners were not paid by us and usually not paid by their research teams. Whatever the reasons, we had less response from research partners than researchers or research managers, particularly for the third round of data collection; thus we have fewer data on outcomes from research partners’ perspectives and we need to be aware of a possible selection bias towards more engaged research partners. Such a bias could have implications for our findings; for example, payment might have been a more important motivating factor for less engaged advisory group members.

There were a number of practical difficulties we encountered. One challenge was when to recruit the case studies. We recruited four of our eight case studies prior to the full application, but this was more than 1 year before our project started and 15 months or more before data collection began. In this intervening period, we found that the time scales of some of the case studies were no longer ideal for our project and we faced the choice of whether to continue with them, although this timing was not ideal, or seek, at a late stage, to recruit alternative ones. One of our case studies ultimately undertook no involvement activity over the study period, so we obtained fewer data from it, and it contributed relatively little to our analysis. Similarly, one of the four case studies we recruited later experienced some delays itself in beginning and so we had a more limited period for data collection than initially envisaged. Research governance approvals took much longer than expected, particularly as we had to take three of our research partners, who were going to collect data within NHS projects, through the research passport process, which essentially truncated our data collection period from 1 year to 9 months. Even if we had had the full year initially envisaged for data collection, our conclusion with hindsight was that this was insufficiently long. To compare initial plans and intentions for involvement with the reality of what actually happened required a longer time period than a year for most of our case studies.

In the light of the importance we have placed on the commitment of PIs, there is an issue of potential selection bias in the recruitment of our sample. As our sampling strategy explicitly involved a networking approach to PIs of projects where we thought some significant public involvement was taking place, we were likely (as we did) to recruit enthusiasts and, at worst, those non-committed who were at least open to the potential value of public involvement. There were, unsurprisingly, no highly sceptical PIs in our sample. We have no data therefore on how public involvement may work in research where the PI is sceptical but may feel compelled to undertake involvement because of funder requirements or other factors.

What would we do differently next time?

If we were to design this study again, there are a number of changes we would make. Most importantly we would go for a longer time period to be able to capture involvement through the whole research process from initial design through to dissemination. We would seek to recruit far more potential case studies in principle, so that we had greater choice of which to proceed with once our study began in earnest. We would include case studies from the application stage to capture the important early involvement of research partners in the initial design period. It might be preferable to research a smaller number of case studies, allowing a more in-depth ethnographic approach. Although challenging, it would be very informative to seek to sample sceptical PIs. This might require a brief screening exercise of a larger group of PIs on their attitudes to and experience of public involvement.

The economic evaluation was challenging in a number of ways, particularly in seeking to obtain completed resource logs from case study research partners. Having a 2-week data collection period was also problematic in a field such as public involvement, where activity may be very episodic and infrequent. Thus, collecting economic data alongside other case study data in a more integrated way, and particularly with interviews and more ethnographic observation of case study activities, might be advantageous. The new budgeting tool developed by INVOLVE and the MHRN may provide a useful resource for future economic evaluations.23

We have learned much from the involvement of research partners in our research team and, although many aspects of our approach worked well, there are some things we would do differently in future. Even though we included substantial resources for research partner involvement in all aspects of our study, we underestimated how time-consuming such full involvement would be. We were perhaps overambitious in trying to ensure such full involvement with the number of research partners and the number and complexity of the case studies. We were also perhaps naive in expecting all the research partners to play the same role in the team; different research partners came with different experiences and skills and, as in most of our case studies, it might have been better to be less prescriptive and allow the roles to develop more organically within the project.

Implications for research practice and funding

If one of the objectives of R&D policy is to increase the extent and effectiveness of public involvement in research, then a key implication of this research is the importance of influencing PIs to value public involvement in research or to delegate leadership on involvement in their research to other senior colleagues. Training is unlikely to be the key mechanism here; senior researchers are much more likely to be influenced by peers or by their personal experience of the benefits of public involvement. Early career researchers may be shaped by training but again peer learning and culture may be more influential. For those researchers sceptical or agnostic about public involvement, the requirement of funders is a key factor that is likely to make them engage with the involvement agenda. Therefore, funders need to scrutinise the track record of research teams on public involvement to ascertain whether there is any evidence of commitment or leadership on involvement.

One of the findings of the economic analysis was that PIs have consistently underestimated the costs of public involvement in their grant applications. Clearly the field will benefit from the guidance and budgeting tool recently disseminated by MHRN and INVOLVE. It was also notable that there was a degree of variation in the real costs of public involvement and that effective involvement is not necessarily costly. Different models of involvement incur different costs and researchers need to be made aware of the costs and benefits of these different options.
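To make the point about differing cost profiles concrete, the sketch below is a minimal, purely illustrative calculation; it is not drawn from the study data, and the involvement models, rates and quantities are all hypothetical assumptions used only to show how a team might tally the cost of two involvement models at the grant application stage.

```python
# Illustrative costing of two hypothetical public involvement models.
# All rates and quantities are assumptions for demonstration only; real
# budgets should follow current guidance (e.g. the INVOLVE/MHRN budgeting
# tool mentioned above) and local payment policies.

def involvement_cost(meetings, partners, payment_per_meeting,
                     prep_hours_per_meeting, staff_rate_per_hour,
                     travel_per_partner_per_meeting):
    """Return the total cost of an involvement model over a project."""
    partner_payments = meetings * partners * payment_per_meeting
    facilitation = meetings * prep_hours_per_meeting * staff_rate_per_hour
    travel = meetings * partners * travel_per_partner_per_meeting
    return partner_payments + facilitation + travel

# Model A: a small advisory group meeting quarterly over a 2-year project.
advisory_group = involvement_cost(meetings=8, partners=4, payment_per_meeting=75,
                                  prep_hours_per_meeting=6, staff_rate_per_hour=40,
                                  travel_per_partner_per_meeting=20)

# Model B: two research partners embedded in the team, meeting monthly.
embedded_partners = involvement_cost(meetings=24, partners=2, payment_per_meeting=75,
                                     prep_hours_per_meeting=2, staff_rate_per_hour=40,
                                     travel_per_partner_per_meeting=20)

print(f"Advisory group model:    £{advisory_group:,.0f}")
print(f"Embedded partners model: £{embedded_partners:,.0f}")
```

The comparison, rather than the hypothetical figures, is the point: the balance between partner payments, staff facilitation time and travel shifts with the model of involvement chosen, which is why costing involvement explicitly at the application stage matters.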

One methodological lesson we learned was the impact that conducting this research had on some participants’ reflection on the impact of public involvement. Particularly for research staff, the questions we asked sometimes made them reflect upon what they were doing and change aspects of their approach to involvement. Thus, the more the NIHR and other funders can build reporting, audit and other forms of evaluation on the impact of public involvement directly into their processes with PIs, the more likely such questioning is to stimulate similar reflection.

There are a number of gaps in our knowledge around public involvement in research that follow from our findings, and would benefit from further research, including realist evaluation to extend and further test the theory we have developed here:

In-depth exploration of how PIs become committed to public involvement and how to influence agnostic or sceptical PIs would be very helpful. Further research might compare, for example, training with peer-influencing strategies in engendering PI commitment. Research could explore the leadership role of other research team members, including research partners, and how collective leadership might support effective public involvement.

More methodological work is needed on how to robustly capture the impact and outcomes of public involvement in research (building as well on the PiiAF work of Popay et al.51), including further economic analysis and exploration of impact when research partners are integral to research teams.

Research to develop approaches and carry out a full cost–benefit analysis of public involvement in research would be beneficial. Although methodologically challenging, it would be very useful to conduct some longer-term studies which sought to quantify the impact of public involvement on such key indicators as participant recruitment and retention in clinical trials.

It would also be helpful to capture qualitatively the experiences and perspectives of research partners who have had mixed or negative experiences, since they may be less likely than enthusiasts to volunteer to participate in studies of involvement in research such as ours. Similarly, further research might explore the (relatively rare) experiences of marginalised and seldom-heard groups involved in research.

Payment for public involvement in research remains a contested issue with strongly held positions for and against; it would be helpful to further explore the value research partners and researchers place on payment and its effectiveness for enhancing involvement in and impact on research.

A final relatively narrow but important question that we identified after data collection had finished is: what is the impact of the long periods of relative non-involvement following initial periods of more intense involvement for research partners in some types of research, particularly clinical trials?

When preparing questions for a research interview, what should you do?

Wording of questions:

- Wording should be open-ended; respondents should be able to choose their own terms when answering questions.
- Questions should be as neutral as possible.
- Questions should be asked one at a time.
- Questions should be worded clearly.
- Be careful when asking “why” questions.

Which of the following are steps to take before an arranged interview?

Determine whether the interview will be recorded, define the purpose of the interview, and prepare your questions.

Which major styles are most commonly used by communication scholars for citing sources in a bibliography?

Two of the most frequently used forms for writing bibliographies and citations are the MLA (Modern Language Association) and the APA (American Psychological Association) styles.

What are the three criteria for evaluating Internet documents?

There are six (6) criteria that should be applied when evaluating any Web site: authority, accuracy, objectivity, currency, coverage, and appearance.
