How to Use a Conceptual Framework for Better Research


A conceptual framework in research is not just a tool but a vital roadmap that guides the entire research process. It integrates various theories, assumptions, and beliefs to provide a structured approach to research. By defining a conceptual framework, researchers can focus their inquiries and clarify their hypotheses, leading to more effective and meaningful research outcomes.

What is a Conceptual Framework?

A conceptual framework is essentially an analytical tool that combines concepts and sets them within an appropriate theoretical structure. It serves as a lens through which researchers view the complexities of the real world. The importance of a conceptual framework lies in its ability to serve as a guide, helping researchers to not only visualize but also systematically approach their study.

Key Components to Be Analyzed During Research

  • Theories: These are the underlying principles that guide the hypotheses and assumptions of the research.
  • Assumptions: These are the accepted truths that are not tested within the scope of the research but are essential for framing the study.
  • Beliefs: These often reflect the subjective viewpoints that may influence the interpretation of data.


Together, these components help to define the conceptual framework that directs the research towards its ultimate goal. This structured approach not only improves clarity but also enhances the validity and reliability of the research outcomes. By using a conceptual framework, researchers can avoid common pitfalls and focus on essential variables and relationships.

For practical examples and to see how different frameworks can be applied in various research scenarios, you can Explore Conceptual Framework Examples.

Different Types of Conceptual Frameworks Used in Research

Understanding the various types of conceptual frameworks is crucial for researchers aiming to align their studies with the most effective structure. Conceptual frameworks in research vary primarily between theoretical and operational frameworks, each serving distinct purposes and suiting different research methodologies.

Theoretical vs Operational Frameworks

Theoretical frameworks are built upon existing theories and literature, providing a broad and abstract understanding of the research topic. They help in forming the basis of the study by linking the research to already established scholarly works. On the other hand, operational frameworks are more practical, focusing on how the study’s theories will be tested through specific procedures and variables.

  • Theoretical frameworks are ideal for exploratory studies and can help in understanding complex phenomena.
  • Operational frameworks suit studies requiring precise measurement and data analysis.

Choosing the Right Framework

Selecting the appropriate conceptual framework is pivotal for the success of a research project. It involves matching the research questions with the framework that best addresses the methodological needs of the study. For instance, a theoretical framework might be chosen for studies that aim to generate new theories, while an operational framework would be better suited for testing specific hypotheses.

Benefits of choosing the right framework include enhanced clarity, better alignment with research goals, and improved validity of research outcomes. Tools like Table Chart Maker can be instrumental in visually comparing the strengths and weaknesses of different frameworks, aiding in this crucial decision-making process.

Real-World Examples of Conceptual Frameworks in Research

Understanding the practical application of conceptual frameworks in research can significantly enhance the clarity and effectiveness of your studies. Here, we explore several real-world case studies that demonstrate the pivotal role of conceptual frameworks in achieving robust research conclusions.

  • Healthcare Research: In a study examining the impact of lifestyle choices on chronic diseases, researchers used a conceptual framework to link dietary habits, exercise, and genetic predispositions. This framework helped in identifying key variables and their interrelations, leading to more targeted interventions.
  • Educational Development: Educational theorists often employ conceptual frameworks to explore the dynamics between teaching methods and student learning outcomes. One notable study mapped out the influences of digital tools on learning engagement, providing insights that shaped educational policies.
  • Environmental Policy: Conceptual frameworks have been crucial in environmental research, particularly in studies on climate change adaptation. By framing the relationships between human activity, ecological changes, and policy responses, researchers have been able to propose more effective sustainability strategies.

Adapting conceptual frameworks based on evolving research data is also critical. As new information becomes available, it’s essential to revisit and adjust the framework to maintain its relevance and accuracy, ensuring that the research remains aligned with real-world conditions.

For those looking to visualize and better comprehend their research frameworks, Graphic Organizers for Conceptual Frameworks can be an invaluable tool. These organizers help in structuring and presenting research findings clearly, enhancing both the process and the presentation of your research.

Step-by-Step Guide to Creating Your Own Conceptual Framework

Creating a conceptual framework is a critical step in structuring your research to ensure clarity and focus. This guide will walk you through the process of building a robust framework, from identifying key concepts to refining your approach as your research evolves.

Building Blocks of a Conceptual Framework

  • Identify and Define Main Concepts and Variables: Start by clearly identifying the main concepts, variables, and their relationships that will form the basis of your research. This could include defining key terms and establishing the scope of your study.
  • Develop a Hypothesis or Primary Research Question: Formulate a central hypothesis or question that guides the direction of your research. This will serve as the foundation upon which your conceptual framework is built.
  • Link Theories and Concepts Logically: Connect your identified concepts and variables with existing theories to create a coherent structure. This logical linking helps in forming a strong theoretical base for your research.

Visualizing and Refining Your Framework

Using visual tools can significantly enhance the clarity and effectiveness of your conceptual framework. Decision Tree Templates for Conceptual Frameworks can be particularly useful in mapping out the relationships between variables and hypotheses.

Map Your Framework: Utilize tools like Creately’s visual canvas to diagram your framework. This visual representation helps in identifying gaps or overlaps in your framework and provides a clear overview of your research structure.

A mind map is a useful graphic organizer for writing - Graphic Organizers for Writing

Analyze and Refine: As your research progresses, continuously evaluate and refine your framework. Adjustments may be necessary as new data comes to light or as initial assumptions are challenged.

By following these steps, you can ensure that your conceptual framework is not only well-defined but also adaptable to the changing dynamics of your research.

Practical Tips for Utilizing Conceptual Frameworks in Research

Effectively utilizing a conceptual framework in research not only streamlines the process but also enhances the clarity and coherence of your findings. Here are some practical tips to maximize the use of conceptual frameworks in your research endeavors.

  • Setting Clear Research Goals: Begin by defining precise objectives that are aligned with your research questions. This clarity will guide your entire research process, ensuring that every step you take is purposeful and directly contributes to your overall study aims.
  • Maintaining Focus and Coherence: Throughout the research, consistently refer back to your conceptual framework to maintain focus. This will help in keeping your research aligned with the initial goals and prevent deviations that could dilute the effectiveness of your findings.
  • Data Analysis and Interpretation: Use your conceptual framework as a lens through which to view and interpret data. This approach ensures that the data analysis is not only systematic but also meaningful in the context of your research objectives. For more insights, explore Research Data Analysis Methods.
  • Presenting Research Findings: When it comes time to present your findings, structure your presentation around the conceptual framework. This will help your audience understand the logical flow of your research and how each part contributes to the whole.
  • Avoiding Common Pitfalls: Be vigilant about common errors such as overcomplicating the framework or misaligning the research methods with the framework’s structure. Keeping it simple and aligned ensures that the framework effectively supports your research.

By adhering to these tips and utilizing tools like 7 Essential Visual Tools for Social Work Assessment, researchers can ensure that their conceptual frameworks are not only robust but also practically applicable in their studies.

How Creately Enhances the Creation and Use of Conceptual Frameworks

Creating a robust conceptual framework is pivotal for effective research, and Creately’s suite of visual tools offers unparalleled support in this endeavor. By leveraging Creately’s features, researchers can visualize, organize, and analyze their research frameworks more efficiently.

  • Visual Mapping of Research Plans: Creately’s infinite visual canvas allows researchers to map out their entire research plan visually. This helps in understanding the complex relationships between different research variables and theories, enhancing the clarity and effectiveness of the research process.
  • Brainstorming with Mind Maps: Using Mind Mapping Software , researchers can generate and organize ideas dynamically. Creately’s intelligent formatting helps in brainstorming sessions, making it easier to explore multiple topics or delve deeply into specific concepts.
  • Centralized Data Management: Creately enables the importation of data from multiple sources, which can be integrated into the visual research framework. This centralization aids in maintaining a cohesive and comprehensive overview of all research elements, ensuring that no critical information is overlooked.
  • Communication and Collaboration: The platform supports real-time collaboration, allowing teams to work together seamlessly, regardless of their physical location. This feature is crucial for research teams spread across different geographies, facilitating effective communication and iterative feedback throughout the research process.

Moreover, the ability to Explore Conceptual Framework Examples directly within Creately inspires researchers by providing practical templates and examples that can be customized to suit specific research needs. This not only saves time but also enhances the quality of the conceptual framework developed.

In conclusion, Creately’s tools for creating and managing conceptual frameworks are indispensable for researchers aiming to achieve clear, structured, and impactful research outcomes.



ResearchOps 101


August 16, 2020


ResearchOps is a specialized area of DesignOps focused specifically on components concerning user-research practices.

ResearchOps (ReOps):  The orchestration and optimization of people, processes, and craft in order to amplify the value and impact of research at scale.

In This Article:

  • ResearchOps Efforts
  • Why ResearchOps Matters Now
  • ResearchOps Is Not Just Participant Recruitment
  • Common Components of ResearchOps
  • Note: This Model Is Not Comprehensive
  • How to Get Started with ResearchOps
  • The ResearchOps Community

ResearchOps Efforts

ResearchOps is a collective term for efforts aimed at supporting researchers in planning, conducting, and applying quality user research, such as:

  • Standardizing research methods and supporting documentation (e.g., scripts, templates, and consent forms) to save time and enable consistent application across teams
  • Recruiting and managing research participants across studies
  • Ensuring research ethics are understood and upheld by individual researchers across studies
  • Educating research-team partners and leadership about the value of user research
  • Managing user-research insights and making data accessible throughout the team and the organization
  • Socializing success stories and ensuring that the overall impact of user research is known

Why ResearchOps Matters Now

The exponential growth of the UX profession means that more companies are realizing the value of UX and that the demand for UX and user research is increasing. This is great news: the value of our work is known and deemed necessary much more so than it was in the recent past.

The practical task of scaling research practices to meet this increased demand, however, often falls to existing UX research staff, with little guidance or additional bandwidth. Senior user researchers or research managers must deal with the responsibility and challenge of developing processes to scale their practices to match demand —  all while simultaneously continuing to plan and facilitate research sessions.

If a company does 10× the amount of user research it used to, the cost shouldn’t be 11× the old budget, as is all too likely if more projects lead to more bureaucracy, coordination, and other overhead costs. The new cost should be 9× due to economies of scale and reuse of prep work across studies. In fact, the ResearchOps cost goal should really be 8× or lower.

ResearchOps can provide relief, with dedicated roles (or at least focused efforts, if dedicated roles are not feasible) to create and compile intentional strategies and tools for managing the operational aspects of research, so that researchers can focus on conducting studies and applying research insights.

ResearchOps Is Not Just Participant Recruitment

Many people equate ResearchOps with participant management (e.g., screening and scheduling participants for research studies), because this aspect is often an immediately obvious pain point for researchers and takes much time. While participant management is certainly an important component of ResearchOps, it is not the only aspect. The full landscape of operational elements necessary for creating and scaling a research practice is much broader.

As a former contract ResearchOps Specialist at Uber aptly explained to me during a series of interviews that I conducted with DesignOps and ResearchOps professionals: “The value ResearchOps can bring is not just calling and getting a participant but building a program and establishing consistent quality for communications and research methods.”

Common Components of ResearchOps

ResearchOps addresses a tapestry of interwoven operational aspects concerning user research, where every component both affects and is affected by the other elements.

The ResearchOps model described below was created by identifying key focus areas from our DesignOps and ResearchOps practitioner interviews. It outlines 6 common focus areas of ResearchOps:

  • Participants: Recruiting, screening, scheduling, and compensating participants
  • Governance: Processes and guidelines for consent, privacy, and information storage
  • Knowledge: Processes and platforms for collecting, synthesizing, and sharing research insights
  • Tools: Enabling efficiencies in research through consistent toolsets and platforms
  • Competency: Enabling, educating, and onboarding others to perform research activities
  • Advocacy: Defining, sharing, and socializing the value of user research throughout the organization

As the cyclical design of the model conveys, these are not isolated elements, but interrelated factors that drive the need for and influence each other.

[Figure: ResearchOps model with 6 areas: Participants, Governance, Knowledge, Tools, Competency, and Advocacy]

Participant Management

The first component of ResearchOps — but not the only one — is participant management. This area includes creating processes for finding, recruiting, screening, scheduling, and compensating research-study participants. It’s often low-hanging fruit, because it’s typically the most apparent and immediate need of overloaded research teams.

Common ResearchOps activities and efforts within participant management include:

  • Building a database or panel of potential study participants or researching and selecting external recruiting platforms
  • Screening and approving participants
  • Managing communication with participants
  • Building frameworks for fair and consistent incentive levels based on participant expertise and required time investment

Governance

Governance guidelines are a necessity for any study involving participants. For example, consent templates must be compliant with existing data-privacy regulations, such as GDPR, and written in plain, transparent language. Additionally, as participants’ personally identifiable information (PII) is collected, the organization must follow legal regulations and ethical standards concerning where that information is stored, how long it is stored, how it is protected, and how its storage is made transparent to the participant. (PII refers to any data that could be used to identify a person, such as a full name, date of birth, or email address.)

Common ResearchOps activities and efforts within governance include:

  • Researching and understanding the application of data-privacy regulations, such as GDPR, to the UX-research process
  • Establishing ethically sound processes and communications
  • Writing and standardizing compliant and transparent consent forms for various study types and formats of data collected
  • Managing the proper maintenance and disposal of PII and study artifacts, such as interview scripts or audio- and video-session recordings

Knowledge Management

As data begins to accumulate from studies, the need for knowledge management becomes increasingly apparent. This element of ResearchOps is focused on collecting and synthesizing data across research studies and ensuring that it is findable and accessible to others. Not only can effectively compiled and managed research insights help research teams share findings and avoid repetitious studies, but they can also serve to educate those outside the team.

Common ResearchOps activities and efforts within knowledge management include:

  • Developing standardized templates for data collection during studies
  • Building a shared database of research insights (sometimes called a research repository) where findings from studies across the organization can be stored 
  • Developing regular meetings or other avenues for sharing and updating the organization about known user insights
  • Coordinating with other teams conducting research (e.g.,  marketing or business intelligence) in order to create a comprehensive source of insights

Tools

Most of the activities discussed so far require tools or platforms. For example: What platform will be used to recruit and screen participants? What applications will be used to manage participant PII? What programs will be used to house all of the resulting research findings? Furthermore, tools that facilitate the actual research, such as remote usability-testing platforms, analytics or survey platforms, or video-editing and audio-transcription tools, must be considered. While autonomy in choice can be valuable, auditing the research toolset to create some level of consistency across the team expedites sharing and collaboration.

Common ResearchOps activities and efforts within tools include:

  • Researching and comparing appropriate platforms for recruiting and managing participant information
  • Selecting research tools for usability testing, surveys, remote interviews, or any other types of research
  • Managing access privileges and platform seats across individual user researchers and teams
  • Auditing the research toolkit to ensure that all platforms and applications in use are compliant with data-privacy regulations
  • While buildings and facilities are usually not thought of as “tools,” ResearchOps should also manage any usability labs as well as non-lab testing rooms, including contracts for outsourced locations.

Competency

As the demand for and amount of research conducted continues to scale, it becomes critical to also grow the organization’s research capabilities and skills. The competency component is concerned with enabling more people to understand and do research. This effort often involves providing resources and education both to (1) researchers, so that they can continue to develop their skills, and (2) nonresearchers, so that they can integrate basic research activities into their work when researchers are unavailable (and know when to call for help instead of rolling their own study).

Common ResearchOps activities and efforts within competency include:

  • Developing standardized and consistent professional-development opportunities for researchers who want to grow deeply or broadly in their expertise
  • Establishing mentorship programs to onboard new researchers and help them learn and develop new research skills
  • Creating a playbook or database of research methods to onboard new researchers or educate others outside of the team
  • Developing formalized training or curricula to train nonresearchers and expose them to user-centered approaches and activities, so that basic research can be incorporated into work when researchers cannot scale to demand

Advocacy

The final component, advocacy, is concerned with how the value of UX research is defined and communicated to the rest of the organization. Simply put, what is being done to ensure that the rest of the organization is aware of the value and impact of research? For example, does the team socialize success stories and demonstrate the impact of user research? To come full circle on the cyclical nature of the model, proper advocacy helps ensure fuel and resources for all the other focus areas and ensures the ResearchOps practice can continue to scale effectively.

Common ResearchOps activities and efforts within advocacy include:

  • Creating a UX research-team mission or statement of purpose that can be used to talk about the team’s purpose with other colleagues
  • Developing case studies that demonstrate the impact of properly applied research findings on company metrics and KPIs
  • Developing a process for regularly sharing insights and success stories with the rest of the organization (e.g., lunch-and-learns, email newsletters, posters)

Note: This Model Is Not Comprehensive

The 6 components in this model are specialized areas that research practices must consider in order to create consistent, quality research efforts across teams; however, there are other elements that must be considered and intentionally designed that are critical to the health of any research team or practice.

One such area is documented career pathways. The documentation and use of career pathways in general is rare. (In our recent DesignOps research, only 11% of respondents reported having a documented, shared growth path — an abysmal percentage.) But, especially within relatively nascent domains, such as ResearchOps, where there is no decisive, publicly available legacy of successful team structures or models for roles and responsibilities, it’s both critical and challenging to create and document such pathways.

To make sure that you include additional elements that are not represented in this ResearchOps model, reference our DesignOps framework. It provides a comprehensive landscape of potential focus areas for operationalizing design in general; many of these areas equally apply to creating a healthy, focused ResearchOps practice. Team structure and role definitions, consistent hiring and onboarding practices, team communication and collaboration methods, and workflow balance and planning are just a few additional areas to consider.

How to Get Started with ResearchOps

As mentioned, ResearchOps is a whole of many parts that are best considered holistically, because every component both affects and is affected by the other factors. However, when establishing a ResearchOps practice, not all aspects can be addressed at once.

The first step to figuring out where to start is understanding where the biggest pain points are. Are researchers overwhelmed with the logistics of recruiting and scheduling participants? Maybe participant management is the best starting point for the team. Is research data scattered and inaccessible to new team members, causing duplicative research efforts and poor research memory? Perhaps knowledge management is where the team should focus.

Begin by identifying the current problems that necessitate ResearchOps. Perform internal research to understand where the biggest pain points currently exist for research teams and research-team partners. For example, you could send out a survey or have focus groups with researchers to collect information on whether current processes enable them to be effective and what gets in their way the most. Additionally, carry out internal stakeholder interviews to uncover the biggest pain points for partners within the research process. This knowledge will help you create a clear role for ResearchOps.

Just remember, when it comes to scaling research, balance your focus between the component that you chose to address and the overall tapestry of considerations. Evolve and expand your focus as needs shift to maintain a balanced practice.

The ResearchOps Community

The ResearchOps Community is a group of ResearchOps professionals and researchers who have conducted extensive research to understand the way the UX community thinks about and addresses ResearchOps challenges. They have compiled a collection of resources and thought leadership on the topic, available on the group’s website.


Scientific Research and Methodology

2.2 Conceptual and Operational Definitions

Research studies usually include terms that must be carefully and precisely defined, so that others know exactly what has been done and there are no ambiguities. Two types of definitions can be given: conceptual definitions and operational definitions.

Loosely speaking, a conceptual definition explains what to measure or observe (what a word or a term means for your study), and an operational definition defines exactly how to measure or observe it.

For example, consider a study of stress in students during a university semester. A conceptual definition would describe what is meant by ‘stress’; an operational definition would describe how ‘stress’ would be measured.

Sometimes the definitions themselves aren’t important, provided a clear definition is given. Sometimes, commonly-accepted definitions exist, so they should be used unless there is a good reason to use a different definition (for example, in criminal law, an ‘adult’ in Australia is someone aged 18 or over).

Sometimes, a commonly-accepted definition does not exist, so the definition being used should be clearly articulated.

Example 2.2 (Operational and conceptual definitions) Players and fans have become more aware of concussions and head injuries in sport. A conference on concussion in sport developed this conceptual definition (McCrory et al. 2013):

Concussion is a brain injury and is defined as a complex pathophysiological process affecting the brain, induced by biomechanical forces. Several common features that incorporate clinical, pathologic and biomechanical injury constructs that may be utilised in defining the nature of a concussive head injury include: Concussion may be caused either by a direct blow to the head, face, neck or elsewhere on the body with an “impulsive” force transmitted to the head. Concussion typically results in the rapid onset of short-lived impairment of neurological function that resolves spontaneously. However, in some cases, symptoms and signs may evolve over a number of minutes to hours. Concussion may result in neuropathological changes, but the acute clinical symptoms largely reflect a functional disturbance rather than a structural injury and, as such, no abnormality is seen on standard structural neuroimaging studies. Concussion results in a graded set of clinical symptoms that may or may not involve loss of consciousness. Resolution of the clinical and cognitive symptoms typically follows a sequential course. However, it is important to note that in some cases symptoms may be prolonged.

While this is all helpful… it does not explain how to identify a player with concussion during a game.

Rugby decided on this operational definition (Raftery et al. 2016):

… a concussion applies with any of the following: The presence, pitch side, of any Criteria Set 1 signs or symptoms (table 1)… [ Note : This table includes symptoms such as ‘convulsion,’ ‘clearly dazed,’ etc.]; An abnormal post game, same day assessment…; An abnormal 36–48 h assessment…; The presence of clinical suspicion by the treating doctor at any time…

Example 2.3 (Operational and conceptual definitions) Consider a study requiring water temperature to be measured.

An operational definition would explain how the temperature is measured: the thermometer type, how the thermometer was positioned, how long it was left in the water, and so on.


Example 2.4 (Operational definitions) Consider a study measuring stress in first-year university students.

Stress cannot be measured directly, but could be assessed using a survey (like the Perceived Stress Scale (PSS); Cohen et al. 1983).

The operational definition of stress is the score on the ten-question PSS. Other means of measuring stress are also possible (such as heart rate or blood pressure).
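
To see how such an operational definition translates into an actual scoring rule, here is a minimal sketch in Python. It assumes the commonly published PSS-10 scoring (responses coded 0–4, with the positively worded items reverse-scored); the item positions marked as reverse-scored are an assumption to verify against the instrument itself.

# Minimal sketch: computing a total score for the ten-item Perceived
# Stress Scale (PSS-10). Assumes responses are coded 0 (never) to
# 4 (very often), and that items 4, 5, 7, and 8 are positively worded
# and therefore reverse-scored -- verify against the actual instrument.

REVERSE_SCORED = {4, 5, 7, 8}  # 1-indexed item positions (assumed)

def pss10_score(responses: list[int]) -> int:
    """Return the total PSS-10 score (0-40); higher means more perceived stress."""
    if len(responses) != 10:
        raise ValueError("the PSS-10 requires exactly 10 responses")
    total = 0
    for item, value in enumerate(responses, start=1):
        if not 0 <= value <= 4:
            raise ValueError(f"item {item}: responses must be coded 0-4")
        # Reverse-scored items flip the 0-4 coding (0 -> 4, 1 -> 3, ...).
        total += (4 - value) if item in REVERSE_SCORED else value
    return total

# One participant's hypothetical answers to the ten items:
print(pss10_score([2, 3, 1, 1, 0, 3, 2, 1, 3, 2]))  # -> 26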

Meline (2006) discusses five studies about stuttering, each using a different operational definition:

  • Study 1: As diagnosed by a speech-language pathologist.
  • Study 2: Within-word disfluencies greater than 5 per 150 words.
  • Study 3: Unnatural hesitation, interjections, restarted or incomplete phrases, etc.
  • Study 4: More than 3 stuttered words per minute.
  • Study 5: State guidelines for fluency disorders.

A study of snacking in Australia (Fayet-Moore et al. 2017) used this operational definition of ‘snacking’:

…an eating occasion that occurred between meals based on time of day. — Fayet-Moore et al. (2017) (p. 3)

A study examined the possible relationship between the ‘pace of life’ and the incidence of heart disease (Levine 1990) in 36 US cities. The researchers used four different operational definitions for ‘pace of life’ (remember the article was published in 1990!):

  • The walking speed of randomly chosen pedestrians.
  • The speed with which bank clerks gave ‘change for two $20 bills or [gave] two $20 bills for change.’
  • The talking speed of postal clerks.
  • The proportion of men and women wearing a wristwatch.

None of these perfectly measure ‘pace of life,’ of course. Nonetheless, the researchers found that, compared to people on the West Coast,

… people in the Northeast walk faster, make change faster, talk faster and are more likely to wear a watch… — Levine (1990) (p. 455)


10.3 Operational definitions

Learning Objectives

Learners will be able to…

  • Define and give an example of indicators and attributes for a variable
  • Apply the three components of an operational definition to a variable
  • Distinguish between levels of measurement for a variable and how those differences relate to measurement
  • Describe the purpose of composite measures like scales and indices

Conceptual definitions are like dictionary definitions. They tell you what a concept means by defining it using other concepts. Operationalization occurs after conceptualization and is the process by which researchers spell out precisely how a concept will be measured in their study. It involves identifying the specific research procedures we will use to gather data about our concepts. It entails identifying indicators that can identify when your variable is present or not, the magnitude of the variable, and so forth.


Operationalization works by identifying specific indicators that will be taken to represent the ideas we are interested in studying. Let’s look at an example. Each day, Gallup researchers poll 1,000 randomly selected Americans to ask them about their well-being. To measure well-being, Gallup asks these people to respond to questions covering six broad areas: physical health, emotional health, work environment, life evaluation, healthy behaviors, and access to basic necessities. Gallup uses these six factors as indicators of the concept that they are really interested in, which is well-being.

Identifying indicators can be even simpler than this example. Political party affiliation is another relatively easy concept for which to identify indicators. If you asked a person what party they voted for in the last national election (or gained access to their voting records), you would get a good indication of their party affiliation. Of course, some voters split tickets between multiple parties when they vote and others swing from party to party each election, so our indicator is not perfect. Indeed, if our study were about political identity as a key concept, operationalizing it solely in terms of who they voted for in the previous election leaves out a lot of information about identity that is relevant to that concept. Nevertheless, it’s a pretty good indicator of political party affiliation.

Choosing indicators is not an arbitrary process. Your conceptual definitions point you in the direction of relevant indicators and then you can identify appropriate indicators in a scholarly manner using theory and empirical evidence.  Specifically, empirical work will give you some examples of how the important concepts in an area have been measured in the past and what sorts of indicators have been used. Often, it makes sense to use the same indicators as previous researchers; however, you may find that some previous measures have potential weaknesses that your own study may improve upon.

So far in this section, all of the examples of indicators deal with questions you might ask a research participant on a questionnaire for survey research. If you plan to collect data from other sources, such as through direct observation or the analysis of available records, think practically about what the design of your study might look like and how you can collect data on various indicators feasibly. If your study asks about whether participants regularly change the oil in their car, you will likely not observe them directly doing so. Instead, you would rely on a survey question that asks them the frequency with which they change their oil or ask to see their car maintenance records.

TRACK 1 (IF YOU ARE CREATING A RESEARCH PROPOSAL FOR THIS CLASS):

What indicators are commonly used to measure the variables in your research question?

  • How can you feasibly collect data on these indicators?
  • Are you planning to collect your own data using a questionnaire or interview? Or are you planning to analyze available data like client files or raw data shared from another researcher’s project?

Remember, you need raw data. Your research project cannot rely solely on the results reported by other researchers or the arguments you read in the literature. A literature review is only the first part of a research project, and your review of the literature should inform the indicators you end up choosing when you measure the variables in your research question.

TRACK 2 (IF YOU AREN’T CREATING A RESEARCH PROPOSAL FOR THIS CLASS): 

You are interested in studying older adults’ social-emotional well-being. Specifically, you would like to research the impact on levels of older adult loneliness of an intervention that pairs older adults living in assisted living communities with university student volunteers for a weekly conversation.

  • How could you feasibly collect data on these indicators?
  • Would you collect your own data using a questionnaire or interview? Or would you analyze available data like client files or raw data shared from another researcher’s project?

Steps in the Operationalization Process

Unlike conceptual definitions, which contain other concepts, an operational definition consists of the following components: (1) the variable being measured and its attributes, (2) the measure you will use, and (3) how you plan to interpret the data collected from that measure to draw conclusions about the variable you are measuring.
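
To see the three components side by side, here is a hypothetical Python sketch that records them as a simple data structure. The field names and the example values are illustrative only; the PHQ-9 details mentioned stand in for the kind of published scoring rules you would look up, not something this sketch defines.

from dataclasses import dataclass

@dataclass
class OperationalDefinition:
    """The three components of an operational definition (illustrative field names)."""
    variable: str            # (1) the variable being measured...
    attributes: list[str]    # ...and its attributes
    measure: str             # (2) the measure you will use
    interpretation: str      # (3) how the data will be interpreted

# Hypothetical example for a depression variable measured with the PHQ-9:
depression = OperationalDefinition(
    variable="depression severity",
    attributes=["minimal", "mild", "moderate", "moderately severe", "severe"],
    measure="PHQ-9 total score (sum of nine items, each coded 0-3)",
    interpretation="higher totals indicate greater severity; published "
                   "cutoff scores map totals onto the severity categories",
)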

Step 1 of Operationalization: Specify variables and attributes

The first component, the variable, should be the easiest part. At this point in quantitative research, you should have a research question with identifiable variables. When social scientists measure concepts, they often use the language of variables and attributes. A variable refers to a quality or quantity that varies across people or situations. Attributes are the characteristics that make up a variable. For example, the variable hair color could contain attributes such as blonde, brown, black, red, gray, etc.

Levels of measurement

A variable’s attributes determine its level of measurement. There are four possible levels of measurement: nominal, ordinal, interval, and ratio. The first two levels of measurement are categorical, meaning their attributes are categories rather than numbers. The latter two levels of measurement are continuous, meaning their attributes are numbers within a range.

Nominal level of measurement

Hair color is an example of a nominal level of measurement. At the nominal level of measurement, attributes are categorical, and those categories cannot be mathematically ranked. In all nominal levels of measurement, there is no ranking order; the attributes are simply different. Gender and race are two additional variables measured at the nominal level. A variable that has only two possible attributes is called binary or dichotomous. If you are measuring whether an individual has received a specific service, this is a dichotomous variable, as the only two options are received or not received.

What attributes are contained in the variable hair color? Brown, black, blonde, and red are common colors, but if we only list these attributes, many people may not fit into those categories. This means that our attributes were not exhaustive. Exhaustiveness means that every participant can find a choice for their attribute in the response options. It is up to the researcher to include the most comprehensive attribute choices relevant to their research questions. We may have to list a lot of colors before we can meet the criteria of exhaustiveness. Clearly, there is a point at which exhaustiveness has been reasonably met. If a person insists that their hair color is light burnt sienna, it is not your responsibility to list that as an option. Rather, that person would reasonably be described as brown-haired. Perhaps listing a category for other color would suffice to make our list of colors exhaustive.

What about a person who has multiple hair colors at the same time, such as red and black? They would fall into multiple attributes. This violates the rule of mutual exclusivity, in which a person cannot fall into two different attributes. Instead of listing all of the possible combinations of colors, perhaps you might include a multi-color attribute to describe people with more than one hair color.
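
As a small illustration of these two coding rules, the hypothetical sketch below checks whether a list of responses is covered by the attribute categories on offer: responses that match no category reveal an exhaustiveness gap, while answers naming two colors at once are recoded to the single multi-color attribute to preserve mutual exclusivity.

# Hypothetical attribute list for the variable "hair color". A response
# that matches no category means the list is not exhaustive; multi-color
# answers are recoded to one attribute to keep categories mutually exclusive.
HAIR_COLOR_ATTRIBUTES = {"brown", "black", "blonde", "red", "gray",
                         "multi-color", "other color"}

def exhaustiveness_gaps(responses: list[str]) -> list[str]:
    """Return the responses that fit none of the offered attributes."""
    return [r for r in responses if r not in HAIR_COLOR_ATTRIBUTES]

raw = ["brown", "red and black", "light burnt sienna"]
print(exhaustiveness_gaps(raw))
# -> ['red and black', 'light burnt sienna']
# "red and black" should be recoded to "multi-color"; "light burnt sienna"
# can reasonably be coded as "brown" or "other color".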


Making sure researchers provide mutually exclusive and exhaustive attribute options is about making sure all people are represented in the data record. For many years, the attributes for gender were only male or female. Now, our understanding of gender has evolved to encompass more attributes that better reflect the diversity in the world. Children of parents from different races were often classified as one race or another, even if they identified with both. The option for bi-racial or multi-racial on a survey not only more accurately reflects the racial diversity in the real world but also validates and acknowledges people who identify in that manner. If we did not measure race in this way, we would leave empty the data record for people who identify as biracial or multiracial, impairing our search for truth.

Ordinal level of measurement

Unlike nominal-level measures, attributes at the ordinal level of measurement can be rank-ordered. For example, someone’s degree of satisfaction in their romantic relationship can be ordered by magnitude of satisfaction. That is, you could say you are not at all satisfied, a little satisfied, moderately satisfied, or highly satisfied. Even though these have a rank order to them (not at all satisfied is certainly worse than highly satisfied), we cannot calculate a mathematical distance between those attributes. We can simply say that one attribute of an ordinal-level variable is more or less than another attribute. A variable that is commonly measured at the ordinal level of measurement in social work is education (e.g., less than high school education, high school education or equivalent, some college, associate’s degree, college degree, graduate degree or higher). Just as with nominal level of measurement, ordinal-level attributes should also be exhaustive and mutually exclusive.

Rating scales for ordinal-level measurement

The fact that we cannot specify exactly how far apart the responses of different individuals are at the ordinal level of measurement becomes clear when using rating scales. If you have ever taken a customer satisfaction survey or completed a course evaluation for school, you are familiar with rating scales such as, “On a scale of 1-5, with 1 being the lowest and 5 being the highest, how likely are you to recommend our company to other people?” Rating scales use numbers, but only as a shorthand, to indicate what attribute (highly likely, somewhat likely, etc.) the person feels describes them best. You wouldn’t say you are “2” likely to recommend the company, but you would say you are “not very likely” to recommend the company. In rating scales, the difference between 2 = “not very likely” and 3 = “somewhat likely” is not quantifiable as a difference of 1. Likewise, we couldn’t say that it is the same as the difference between 3 = “somewhat likely” and 4 = “very likely.”

Rating scales can be unipolar rating scales where only one dimension is tested, such as frequency (e.g., Never, Rarely, Sometimes, Often, Always) or strength of satisfaction (e.g., Not at all, Somewhat, Very). The attributes on a unipolar rating scale are different magnitudes of the same concept.

There are also bipolar rating scales where there is a dichotomous spectrum, such as liking or disliking (Like very much, Like somewhat, Like slightly, Neither like nor dislike, Dislike slightly, Dislike somewhat, Dislike very much). The attributes on the ends of a bipolar scale are opposites of one another. Figure 10.1 shows several examples of bipolar rating scales.

[Figure 10.1: Examples of bipolar rating scales, including Strongly Agree / Agree / Neither Agree nor Disagree / Disagree / Strongly Disagree, and an anchored 1–7 scale from Extremely Unlikely to Extremely Likely]

Interval level of measurement

Interval measures are continuous, meaning their attributes are numbers rather than categories. Temperatures in Fahrenheit and Celsius are interval level, as are IQ scores and credit scores. Just like variables measured at the ordinal level, the attributes for variables measured at the interval level should be mutually exclusive and exhaustive, and are rank-ordered. In addition, they also have an equal distance between attribute values.

The interval level of measurement allows us to examine “how much more” is one attribute when compared to another, which is not possible with nominal or ordinal measures. In other words, the unit of measurement allows us to compare the distance between attributes. The value of one unit of measurement (e.g., one degree Celsius, one IQ point) is always the same regardless of where in the range of values you look. The difference of 10 degrees between a temperature of 50 and 60 degrees Fahrenheit is the same as the difference between 60 and 70 degrees Fahrenheit.

We cannot, however, say with certainty what the ratio of one attribute is in comparison to another. For example, it would not make sense to say that a person with an IQ score of 140 has twice the IQ of a person with a score of 70. However, the difference between IQ scores of 80 and 100 is the same as the difference between IQ scores of 120 and 140.

You may find research in which ordinal-level variables are treated as if they are interval measures for analysis. This can be a problem because as we’ve noted, there is no way to know whether the difference between a 3 and a 4 on a rating scale is the same as the difference between a 2 and a 3. Those numbers are just placeholders for categories.

Ratio level of measurement

The final level of measurement is the ratio level of measurement. Variables measured at the ratio level of measurement are continuous variables, just like with the interval scale. They, too, have equal intervals between each point. However, the ratio level of measurement has a true zero, which means that a value of zero on a ratio scale means that the variable you’re measuring is absent. For example, if you have no siblings, a value of 0 indicates this (unlike a temperature of 0, which does not mean there is no temperature). What is the advantage of having a “true zero?” It allows you to calculate ratios. For example, if you have three siblings, you can say that this is half the number of siblings as a person with six.

At the ratio level, the attribute values are mutually exclusive and exhaustive, can be rank-ordered, the distance between attributes is equal, and attributes have a true zero point. Thus, with these variables, we can say what the ratio of one attribute is in comparison to another. Examples of ratio-level variables include age and years of education. We know that a person who is 12 years old is twice as old as someone who is 6 years old. Height measured in meters and weight measured in kilograms are good examples. So are counts of discrete objects or events, such as the number of siblings one has or the number of questions a student answers correctly on an exam. Measuring interval and ratio data is relatively easy, as people either select or input a number for their answer. If you ask a person how many eggs they purchased last week, they can simply tell you they purchased a dozen eggs at the store, two at breakfast on Wednesday, or none at all.

The differences between each level of measurement are summarized in Table 10.2.

Table 10.2. Properties of each level of measurement:

  • Nominal: attributes are mutually exclusive and exhaustive categories
  • Ordinal: categories that can also be rank-ordered
  • Interval: rank-ordered values with an equal distance between attributes
  • Ratio: equal distances between attributes plus a true zero point

Levels of measurement = levels of specificity

We have spent time learning how to determine a variable’s level of measurement. Now what? How could we use this information to help us as we measure concepts and develop measurement tools? First, the types of statistical tests that we are able to use depend on level of measurement. With nominal-level measurement, for example, the only available measure of central tendency is the mode. With ordinal-level measurement, the median or mode can be used. Interval- and ratio-level measurement are typically considered the most desirable because they permit any indicators of central tendency to be computed (i.e., mean, median, or mode). Also, ratio-level measurement is the only level that allows meaningful statements about ratios of scores. The higher the level of measurement, the more options we have for the statistical tests we are able to conduct. This knowledge may help us decide what kind of data we need to gather, and how.
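
As a compact restatement of that rule of thumb, here is a small Python lookup (illustrative only, not a statistics library) of which measures of central tendency each level of measurement conventionally permits.

# Which measures of central tendency each level of measurement permits,
# following the rule of thumb described above (illustrative lookup only).
PERMITTED_CENTRAL_TENDENCY = {
    "nominal":  ("mode",),
    "ordinal":  ("mode", "median"),
    "interval": ("mode", "median", "mean"),
    "ratio":    ("mode", "median", "mean"),  # plus meaningful ratios of scores
}

def permitted(level: str) -> tuple[str, ...]:
    """Return the conventionally permitted central-tendency measures."""
    return PERMITTED_CENTRAL_TENDENCY[level.lower()]

print(permitted("ordinal"))  # -> ('mode', 'median')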

That said, we have to balance this knowledge with the understanding that sometimes, collecting data at a higher level of measurement could negatively impact our studies. For instance, sometimes providing answers in ranges may make prospective participants feel more comfortable responding to sensitive items. Imagine that you were interested in collecting information on topics such as income, number of sexual partners, number of times someone used illicit drugs, etc. You would have to think about the sensitivity of these items and determine if it would make more sense to collect some data at a lower level of measurement (e.g., nominal: asking if they are sexually active or not) versus a higher level such as ratio (e.g., their total number of sexual partners).

Finally, sometimes when analyzing data, researchers find a need to change a variable’s level of measurement. For example, a few years ago, a student was interested in studying the association between mental health and life satisfaction. This student used a variety of measures. One item asked about the number of mental health symptoms, reported as the actual number. When analyzing data, the student examined the mental health symptom variable and noticed that she had two groups: those with none or one symptom and those with many symptoms. Instead of using the ratio-level data (actual number of mental health symptoms), she collapsed her cases into two categories, few and many. She decided to use this variable in her analyses. It is important to note that you can move a higher level of data to a lower level of data; however, you are unable to move a lower level to a higher level.

  • Check that the variables in your research question can vary…and that they are not constants or one of many potential attributes of a variable.
  • Think about the attributes your variables have. Are they categorical or continuous? What level of measurement seems most appropriate?

Step 2 of Operationalization: Specify measures for each variable

Let’s pick a social work research question and walk through the process of operationalizing variables to see how specific we need to get. Suppose we hypothesize that residents of a psychiatric unit who are more depressed are less likely to be satisfied with care. Remember, this would be an inverse relationship—as levels of depression increase, satisfaction decreases. In this hypothesis, level of depression is the independent (or predictor) variable and satisfaction with care is the dependent (or outcome) variable.

How would you measure these key variables? What indicators would you look for? Some might say that levels of depression could be measured by observing a participant’s body language. They may also say that a depressed person will often express feelings of sadness or hopelessness. In addition, a satisfied person might be happy around service providers and often express gratitude. While these factors may indicate that the variables are present, they lack coherence. Unfortunately, what this “measure” is actually saying is that “I know depression and satisfaction when I see them.” In a research study, you need more precision for how you plan to measure your variables. Individual judgments are subjective, based on idiosyncratic experiences with depression and satisfaction. They couldn’t be replicated by another researcher. They also can’t be done consistently for a large group of people. Operationalization requires that you come up with a specific and rigorous measure for seeing who is depressed or satisfied.

Finding a good measure for your variable depends on the kind of variable it is. Variables that are directly observable might include things like taking someone’s blood pressure, marking attendance or participation in a group, and so forth. To measure an indirectly observable variable like age, you would probably put a question on a survey that asked, “How old are you?” Measuring a variable like income might first require some more conceptualization, though. Are you interested in this person’s individual income or the income of their family unit? This might matter if your participant does not work or is dependent on other family members for income. Do you count income from social welfare programs? Are you interested in their income per month or per year? Even though indirect observables are relatively easy to measure, the measures you use must be clear in what they are asking, and operationalization is all about figuring out the specifics about how to measure what you want to know. For more complicated variables such as constructs, you will need compound measures that use multiple indicators to measure a single variable.

How you plan to collect your data also influences how you will measure your variables. For social work researchers using secondary data like client records as a data source, you are limited by what information is in the data sources you can access. If a partnering organization uses a given measurement for a mental health outcome, that is the one you will use in your study. Similarly, if you plan to study how long a client was housed after an intervention using client visit records, you are limited by how their caseworker recorded their housing status in the chart. One of the benefits of collecting your own data is being able to select the measures you feel best exemplify your understanding of the topic.

Composite measures

Depending on your research design, your measure may be something you put on a survey or pre/post-test that you give to your participants. For a variable like age or income, one well-worded item may suffice. Unfortunately, most variables in the social world are not so simple. Depression and satisfaction are multidimensional concepts. Relying on an indicator that is a single item on a questionnaire, like a question that asks “Yes or no, are you depressed?”, does not encompass the complexity of constructs.

For more complex variables, researchers use scales and indices (sometimes called indexes) because they use multiple items to develop a composite (or total) score as a measure for a variable. As such, they are called composite measures. Composite measures provide a much greater understanding of concepts than a single item could.
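
To make the idea of a composite (total) score concrete, here is a hypothetical sketch that sums several rating-scale items into one score. The items and the 1–5 coding are invented for illustration; real instruments such as the PSQ-18 come with validated items and published scoring rules.

# Hypothetical composite measure: three satisfaction items, each answered
# on a 1-5 rating scale, summed into a single composite score. Invented
# for illustration; real scales come with validated items and scoring rules.
ITEMS = (
    "I am satisfied with the care I received.",
    "Staff listened carefully to my concerns.",
    "I would recommend this unit to others.",
)

def composite_score(responses: list[int]) -> int:
    """Sum the item responses (each coded 1-5) into one composite score."""
    if len(responses) != len(ITEMS):
        raise ValueError("one response is required per item")
    if any(not 1 <= r <= 5 for r in responses):
        raise ValueError("responses must be coded 1-5")
    return sum(responses)

print(composite_score([4, 5, 3]))  # -> 12 out of a possible 15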

It can be complex to delineate between multidimensional and unidimensional concepts. If satisfaction were a key variable in our study, we would need a theoretical framework and conceptual definition for it. Perhaps we come to view satisfaction as having two dimensions: a mental one and an emotional one. That means we would need to include indicators that measured both mental and emotional satisfaction as separate dimensions of satisfaction. However, if satisfaction is not a key variable in your theoretical framework, it may make sense to operationalize it as a unidimensional concept.

Although we won’t delve too deeply into the process of scale development, we will cover some important topics for you to understand how scales and indices developed by other researchers can be used in your project.


Measuring abstract concepts in concrete terms remains one of the most difficult tasks in empirical social science research.

A scale is an empirical structure for measuring items or indicators of the multiple dimensions of a concept.

The scales we discuss in this section are different from the “rating scales” discussed in the previous section. A rating scale is used to capture the respondents’ reactions to a given item on a questionnaire. For example, an ordinally scaled item captures a value between “strongly disagree” and “strongly agree.” Attaching a rating scale to a statement or instrument is not scaling. Rather, scaling is the formal process of developing scale items, before rating scales can be attached to those items.

If creating your own scale sounds painful, don’t worry! For most constructs, you would likely be duplicating work that has already been done by other researchers. Scale development is the focus of a branch of science called psychometrics. You do not need to create a scale for depression because scales such as the Patient Health Questionnaire (PHQ-9) [1], the Center for Epidemiologic Studies Depression Scale (CES-D) [2], and Beck’s Depression Inventory (BDI) [3] have been developed and refined over decades to measure variables like depression. Similarly, scales such as the Patient Satisfaction Questionnaire (PSQ-18) have been developed to measure satisfaction with medical care. As we will discuss in the next section, these scales have been shown to be reliable and valid. While you could create a new scale to measure depression or satisfaction, a rigorous study would pilot test and refine that new scale over time to make sure it measures the concept accurately and consistently before using it in other research. This high level of rigor is often unachievable in smaller research projects because of the cost and time involved in pilot testing and validation, so using existing scales is recommended.

Unfortunately, there is no good one-stop-shop for psychometric scales. The Mental Measurements Yearbook provides a list of measures for social science variables, though it is incomplete and may not contain the full documentation for instruments in its database. It is available as a searchable database by many university libraries.

Perhaps an even better option is looking at the methods sections of the articles in your literature review. The methods section of each article will detail how the researchers measured their variables, and often the results section is instructive for understanding more about measures. In a quantitative study, researchers may have used a scale to measure key variables and will provide a brief description of that scale, its name, and maybe a few example questions. If you need more information, look at the results section and the tables discussing the scale to get a better idea of how the measure works.

Looking beyond the articles in your literature review, searching Google Scholar or other databases using queries like “depression scale” or “satisfaction scale” should also provide some relevant results. For example, when searching for documentation for the Rosenberg Self-Esteem Scale, I found a report about useful measures for acceptance and commitment therapy, which details measurements for mental health outcomes. If you find the name of a scale somewhere but cannot find its documentation (i.e., all items, response choices, and how to interpret the scale), a general web search with the name of the scale and “.pdf” may bring you to what you need. Or, to get professional help with finding information, ask a librarian!

Unfortunately, these approaches do not guarantee that you will be able to view the scale itself or get information on how it is interpreted. Many scales cost money to use and may require training to properly administer. You may also find scales that are related to your variable but would need to be slightly modified to match your study’s needs. You could adapt a scale to fit your study; however, changing even small parts of a scale can influence its accuracy and consistency. Pilot testing is always recommended for adapted scales, and researchers seeking to draw valid conclusions and publish their results should take this additional step.

Types of scales

Likert Scales

Although Likert scale is a term colloquially used to refer to almost any rating scale (e.g., a 0-to-10 life satisfaction scale), it has a much more precise meaning. In the 1930s, researcher Rensis Likert (pronounced LICK-ert) created a new approach for measuring people’s attitudes (Likert, 1932). [4] It involves presenting people with several statements, including both favorable and unfavorable statements, about some person, group, or idea. Respondents then express their approval or disapproval with each statement on a 5-point rating scale: Strongly Approve, Approve, Undecided, Disapprove, Strongly Disapprove. Numbers are assigned to each response and then summed across all items to produce a score representing the attitude toward the person, group, or idea. For items that are phrased in an opposite direction (e.g., negatively worded statements instead of positively worded statements), reverse coding is used so that the numerical scoring of those statements also runs in the opposite direction. This type of scale came to be called a Likert scale, as indicated in Table 10.3 below. Scales that use similar logic but do not have these exact characteristics are referred to as “Likert-type scales.”
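As a concrete illustration, here is a minimal Python sketch of Likert scoring with reverse coding. The five statements and which of them are negatively worded are invented for the example; only the 5-point response format follows the description above.

```python
# Minimal sketch of Likert scoring with reverse coding. The five items and
# which ones are negatively worded are made up for illustration.

SCALE = {"Strongly Disapprove": 1, "Disapprove": 2, "Undecided": 3,
         "Approve": 4, "Strongly Approve": 5}
MAX_SCORE = 5  # top of the 5-point rating scale

# (item id, is_reverse_coded) -- negatively worded items get reversed
ITEMS = [("q1", False), ("q2", True), ("q3", False), ("q4", True), ("q5", False)]

def score_likert(answers: dict) -> int:
    """Sum item scores, reversing negatively worded items (1<->5, 2<->4)."""
    total = 0
    for item, reverse in ITEMS:
        value = SCALE[answers[item]]
        total += (MAX_SCORE + 1 - value) if reverse else value
    return total

answers = {"q1": "Approve", "q2": "Disapprove", "q3": "Strongly Approve",
           "q4": "Strongly Disapprove", "q5": "Undecided"}
print(score_likert(answers))  # 4 + 4 + 5 + 5 + 3 = 21 out of a possible 25
```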

Semantic Differential Scales

Semantic differential scales are composite scales in which respondents are asked to indicate their opinions or feelings toward a single statement using pairs of adjectives framed as polar opposites. Whereas in a Likert scale a participant is asked how much they approve or disapprove of a statement, in a semantic differential scale the participant is asked to indicate how they feel about a specific item using several pairs of opposites. This makes the semantic differential scale an excellent technique for measuring people’s feelings toward objects, events, or behaviors. Table 10.4 provides an example of a semantic differential scale that was created to assess participants’ feelings about this textbook.

Guttman Scales

A specialized scale for measuring unidimensional concepts was designed by Louis Guttman. A Guttman scale (also called a cumulative scale) uses a series of items arranged in increasing order of intensity (least intense to most intense) of the concept. This type of scale allows us to understand the intensity of beliefs or feelings. Each item in the Guttman scale below has a weight (not indicated on the tool itself) that varies with the intensity of that item, and the weighted combination of the responses is used as an aggregate measure of an observation.

Table XX presents an example of a Guttman scale. Notice how the items move from lower intensity to higher intensity. A researcher reviews the yes answers and creates a score for each participant, as sketched in the code after the items below.

Example Guttman Scale Items

  • I often felt the material was not engaging (Yes/No)
  • I was often thinking about other things in class (Yes/No)
  • I was often working on other tasks during class (Yes/No)
  • I will work to abolish research from the curriculum (Yes/No)
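Here is a minimal Python sketch of how the scoring described above might work. The weights are illustrative assumptions, since, as noted, the tool itself does not display them.

```python
# Minimal sketch of scoring the Guttman items above. The weights are
# illustrative assumptions; the actual tool does not display them.

ITEMS = [  # (item, weight) ordered from least to most intense
    ("material was not engaging", 1),
    ("thinking about other things in class", 2),
    ("working on other tasks during class", 3),
    ("will work to abolish research from the curriculum", 4),
]

def guttman_score(yes_answers: list) -> int:
    """Weighted sum of 'yes' responses across the ordered items."""
    return sum(w for (_, w), yes in zip(ITEMS, yes_answers) if yes)

# A participant who endorses the first two items but not the last two:
print(guttman_score([True, True, False, False]))  # 1 + 2 = 3
```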

Indices

An index is a composite score derived from aggregating measures of multiple indicators. At its most basic, an index sums up indicators. A well-known example of an index is the consumer price index (CPI), which is computed every month by the Bureau of Labor Statistics of the U.S. Department of Labor. The CPI is a measure of how much consumers have to pay for goods and services (in general) and is divided into eight major categories (food and beverages, housing, apparel, transportation, healthcare, recreation, education and communication, and “other goods and services”), which are further subdivided into more than 200 smaller items. Each month, government employees call all over the country to get the current prices of more than 80,000 items. Using a complicated weighting scheme that takes into account the location and probability of purchase for each item, analysts then combine these prices into an overall index score using a series of formulas and rules.
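To show the basic logic of a weighted index, here is a minimal Python sketch loosely in the spirit of the CPI. The categories, prices, and expenditure weights are invented, and the real CPI methodology is far more elaborate than this.

```python
# Minimal sketch of a price index with a weighting scheme, loosely in the
# spirit of the CPI. Categories, prices, and weights are invented; the real
# CPI uses far more items and a much more complex methodology.

BASE_PRICES = {"food": 100.0, "housing": 1200.0, "transportation": 300.0}
CURRENT_PRICES = {"food": 108.0, "housing": 1260.0, "transportation": 309.0}
WEIGHTS = {"food": 0.15, "housing": 0.42, "transportation": 0.17}  # spending shares

def weighted_index(base, current, weights) -> float:
    """Weighted average of category price relatives, scaled to base = 100."""
    total_weight = sum(weights.values())
    relatives = sum(weights[c] * current[c] / base[c] for c in weights)
    return 100.0 * relatives / total_weight

print(round(weighted_index(BASE_PRICES, CURRENT_PRICES, WEIGHTS), 1))  # ~105.1
```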

Another example of an index is the Duncan Socioeconomic Index (SEI). This index is used to quantify a person’s socioeconomic status (SES) and is a combination of three concepts: income, education, and occupation. Income is measured in dollars, education in years or degrees achieved, and occupation is classified into categories or levels by status. These very different measures are combined to create an overall SES index score. However, SES index measurement has generated a lot of controversy and disagreement among researchers.

The process of creating an index is similar to that of a scale. First, conceptualize the index and its constituent components. Though this appears simple, there may be a lot of disagreement on what components (concepts/constructs) should be included in or excluded from an index. For instance, in the SES index, isn’t income correlated with education and occupation? And if so, should we include one component only or all three components? Reviewing the literature, using theories, and/or interviewing experts or key stakeholders may help resolve this issue. Second, operationalize and measure each component. For instance, how will you categorize occupations, particularly since some occupations may have changed with time (e.g., there were no Web developers before the Internet)? Third, researchers must create a rule or formula for calculating the index score. This process may involve a lot of subjectivity, so validating the index score using existing or new data is important.
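One common way to combine components measured in different units is to standardize each one before aggregating. The sketch below is a minimal Python illustration using invented data and equal weighting; it is not the method used to compute the actual Duncan SEI.

```python
# Minimal sketch of combining unlike components into an SES-style index by
# standardizing each to z-scores first. The data and equal weighting are
# assumptions; the actual Duncan SEI is derived differently.

from statistics import mean, stdev

people = {
    "income":     [32_000, 55_000, 91_000, 47_000],  # dollars
    "education":  [12, 16, 20, 14],                  # years of schooling
    "occupation": [35, 60, 85, 50],                  # status rating, 0-100
}

def zscores(values):
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

# Standardize each component, then average across components per person.
standardized = {k: zscores(v) for k, v in people.items()}
ses = [mean(col) for col in zip(*standardized.values())]
print([round(x, 2) for x in ses])  # one composite SES score per person
```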

Differences between scales and indices

Though indices and scales yield a single numerical score or value representing a concept of interest, they are different in many ways. First, indices often comprise components that are very different from each other (e.g., income, education, and occupation in the SES index) and are measured in different ways. Conversely, scales typically involve a set of similar items that use the same rating scale (such as a five-point Likert scale about customer satisfaction).

Second, indices often combine objectively measurable values such as prices or income, while scales are designed to assess subjective or judgmental constructs such as attitude, prejudice, or self-esteem. Some argue that the sophistication of the scaling methodology makes scales different from indexes, while others suggest that indexing methodology can be equally sophisticated. Nevertheless, indexes and scales are both essential tools in social science research.

Scales and indices seem like clean, convenient ways to measure different phenomena in social science, but just like with a lot of research, we have to be mindful of the assumptions and biases underneath. What if the developers of a scale or an index were influenced by unconscious biases? Or what if it was validated using only White women as research participants? Is it going to be useful for other groups? It very well might be, but when using a scale or index on a group for whom it hasn’t been tested, it will be very important to evaluate the validity and reliability of the instrument, which we address in the rest of the chapter.

Finally, it’s important to note that while scales and indices are often made up of items measured at the nominal or ordinal level, the scores on the composite measurement are continuous variables.

Looking back to your work from the previous section, are your variables unidimensional or multidimensional?

  • Describe the specific measures you will use (actual questions and response options you will use with participants) for each variable in your research question.
  • If you are using a measure developed by another researcher but do not have all of the questions, response options, and instructions needed to implement it, put it on your to-do list to get them.
  • Describe at least one specific measure you would use (actual questions and response options you would use with participants) for the dependent variable in your research question.


Step 3 in Operationalization: Determine how to interpret measures

The final stage of operationalization involves setting the rules for how the measure works and how the researcher should interpret the results. Sometimes, interpreting a measure can be incredibly easy. If you ask someone their age, you’ll probably interpret the results by noting the raw number (e.g., 22) someone provides and whether it is lower or higher than other people’s ages. However, you could also recode that person into age categories (e.g., under 25, 25-34, and so on) or cohorts (e.g., Generation Z). Even scales or indices may be simple to interpret. If there is an index of problem behaviors, one might simply add up the number of behaviors checked off, with a total of 1-5 indicating low risk of delinquent behavior, 6-10 indicating moderate risk, and so on. How you choose to interpret your measures should be guided by how they were designed, how you conceptualize your variables, the data sources you used, and your plan for analyzing your data statistically. Whatever measure you use, you need a set of rules for how to take any valid answer a respondent provides to your measure and interpret it in terms of the variable being measured.

For more complicated measures like scales, refer to the information provided by the author for how to interpret the scale. If you can’t find enough information from the scale’s creator, look at how the results of that scale are reported in the results sections of research articles. For example, Beck’s Depression Inventory (BDI-II) uses 21 statements to measure depression, and respondents rate each on a scale of 0-3. The results for each question are added up, and the respondent is put into one of three categories: low levels of depression (1-16), moderate levels of depression (17-30), or severe levels of depression (31 and over) (citation needed).
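A minimal Python sketch of such an interpretation rule, using the three bands quoted above, might look like this. Treat the cut points as illustrative and verify them against the scale’s official documentation before use.

```python
# Minimal sketch of an interpretation rule for a summed scale score, using
# the three bands quoted in the text above. The cut points are illustrative;
# verify them against the scale's official documentation.

def interpret_bdi(item_scores: list) -> tuple:
    """Sum 21 items rated 0-3 and map the total onto a category."""
    assert len(item_scores) == 21 and all(0 <= s <= 3 for s in item_scores)
    total = sum(item_scores)
    if total <= 16:
        return total, "low levels of depression"
    if total <= 30:
        return total, "moderate levels of depression"
    return total, "severe levels of depression"

print(interpret_bdi([1] * 21))  # (21, 'moderate levels of depression')
```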

Operationalization is a tricky component of basic research methods, so don’t get frustrated if it takes a few drafts and a lot of feedback to get to a workable operational definition.

Key Takeaways

  • Operationalization involves spelling out precisely how a concept will be measured.
  • Operational definitions must include the variable, the measure, and how you plan to interpret the measure.
  • There are four different levels of measurement: nominal, ordinal, interval, and ratio (in increasing order of specificity).
  • Scales and indices are common ways to collect information and involve using multiple indicators in measurement.
  • A key difference between a scale and an index is that a scale contains multiple indicators for one concept, whereas an index combines measures of multiple concepts (components).
  • Using scales developed and refined by other researchers can improve the rigor of a quantitative study.

Use the research question that you developed in the previous chapters and find a related scale or index that researchers have used. If you have trouble finding the exact phenomenon you want to study, get as close as you can.

  • What is the level of measurement for each item on each tool? Take a second and think about why the tool’s creator decided to include these levels of measurement. Identify any levels of measurement you would change and why.
  • If these tools don’t exist for what you are interested in studying, why do you think that is?

Using your working research question, find a related scale or index that researchers have used to measure the dependent variable. If you have trouble finding the exact phenomenon you want to study, get as close as you can.

  • What is the level of measurement for each item on the tool? Take a second and think about why the tool’s creator decided to include these levels of measurement. Identify any levels of measurement you would change and why.
1. Kroenke, K., Spitzer, R. L., & Williams, J. B. (2001). The PHQ-9: Validity of a brief depression severity measure. Journal of General Internal Medicine, 16(9), 606-613. https://doi.org/10.1046/j.1525-1497.2001.016009606.x
2. Radloff, L. S. (1977). The CES-D Scale: A self-report depression scale for research in the general population. Applied Psychological Measurement, 1, 385-401.
3. Beck, A. T., Ward, C. H., Mendelson, M., Mock, J., & Erbaugh, J. (1961). An inventory for measuring depression. Archives of General Psychiatry, 4, 561-571. https://doi.org/10.1001/archpsyc.1961.01710120031004
4. Likert, R. (1932). A technique for the measurement of attitudes. Archives of Psychology, 140, 1-55.

Glossary

  • Operationalization: the process by which researchers spell out precisely how a concept will be measured in their study.
  • Indicators: clues that demonstrate the presence, intensity, or other aspects of a concept in the real world.
  • Raw data: unprocessed data that researchers can analyze using quantitative and qualitative methods (e.g., responses to a survey or interview transcripts).
  • Variable: “a logical grouping of attributes that can be observed and measured and is expected to vary from person to person in a population” (Gillespie & Wagner, 2018, p. 9).
  • Attributes: the characteristics that make up a variable.
  • Discrete (categorical) variables: variables whose values are organized into mutually exclusive groups but whose numerical values cannot be used in mathematical operations.
  • Continuous variables: variables whose values are mutually exclusive and can be used in mathematical operations.
  • Nominal level: the lowest level of measurement; categories cannot be mathematically ranked, though they are exhaustive and mutually exclusive.
  • Exhaustive categories: options for closed-ended questions that allow for every possible response (no one should feel like they can’t find the answer for them).
  • Mutually exclusive categories: options for closed-ended questions that do not overlap, so people fit into one category or another, not both.
  • Ordinal level: the level of measurement that follows the nominal level; it has mutually exclusive categories and a hierarchy (rank order), but we cannot calculate a mathematical distance between attributes.
  • Rating scale: an ordered set of responses that participants must choose from.
  • Unipolar rating scale: a rating scale where the magnitude of a single trait is being tested.
  • Bipolar rating scale: a rating scale in which a respondent selects their alignment of choices between two opposite poles such as disagreement and agreement (e.g., strongly disagree, disagree, agree, strongly agree).
  • Interval level: a level of measurement that is continuous, can be rank ordered, is exhaustive and mutually exclusive, and for which the distance between attributes is known to be equal, but for which there is no true zero point.
  • Ratio level: the highest level of measurement; denoted by mutually exclusive categories, a hierarchy (order), values that can be added, subtracted, multiplied, and divided, and the presence of an absolute zero.
  • Composite measures: measurements of variables based on more than one indicator.
  • Scale: an empirical structure for measuring items or indicators of the multiple dimensions of a concept.
  • Likert scale: a way of measuring people’s attitude toward something by assessing their level of agreement with several statements about it.
  • Semantic differential scale: a composite (multi-item) scale in which respondents are asked to indicate their opinions or feelings toward a single statement using different pairs of adjectives framed as polar opposites.
  • Guttman scale: a composite scale using a series of items arranged in increasing order of intensity of the construct of interest, from least intense to most intense.
  • Index: a composite score derived from aggregating measures of multiple concepts (called components) using a set of rules and formulas.


Operational Research in Health-care Settings

Rajesh Kunwar and V. K. Srivastava

Department of Community Medicine, TS Misra Medical College, and Department of Community Medicine, Prasad Institute of Medical Sciences, Lucknow, Uttar Pradesh, India

The origin of the term operational research (OR), also known as operations research, can be traced back to World War II, when a number of studies carried out during military operations helped British forces produce better results with less expenditure of ammunition. The world soon realised the potential of this kind of research, and many disciplines, especially the management sciences, started applying its principles to achieve better returns on their investments.

Following World War II, in 1948, the World Health Organization (WHO) came into existence with research as one of its core functions. It emphasized the need to identify health-related issues needing research and thereby to generate, disseminate, and utilize newly acquired knowledge for health promotion.[ 1 ] In 1978, the Alma Ata Declaration acknowledged that primary health care was well known globally but, at the same time, also noted that the modalities of its implementation were likely to be different in different countries depending on their socioeconomic conditions, availability of resources, development of technology, and motivation of the community. A number of issues were yet to be resolved and researched before primary health care could be operationalized under local conditions.[ 2 ]

The Definition

The kind of research that Alma Ata Declaration recommended for improvement of health-care delivery is essentially OR. Described as “the science of better,” it helps in identifying the alternative service delivery strategy which not only overcomes the problems that limit the program quality, efficiency, and effectiveness but also yields the best outcome.[ 3 ] In its report on “The Third Ten Years of the WHO,” WHO has highlighted the usefulness of OR in improvement of health-care delivery in terms of its efficiency, effectiveness, and wider coverage by testing alternative approaches even in countries with limited national resources.[ 4 ]

OR has been variously defined. The Dictionary of Epidemiology defined it as a systematic study of the working of a system with the aim of improvement.[ 5 ] From a health program perspective, OR is defined as the search for strategies and interventions that enhance the quality and effectiveness of the program.[ 6 ] A global meeting held in Geneva in April 2008 to develop the framework of OR defined the scope of OR in the context of public health as “ Any research producing practically usable knowledge (evidence, finding, information, etc.) which can improve program implementation (e.g., effectiveness, efficiency, quality, access, scale up, sustainability) regardless of the type of research (design, methodology, approach), falls within the boundaries of OR .”[ 7 ]

OR, however, is different from clinical or epidemiological research. It addresses a specific problem within a specific program. It examines a system, for example, a health-care delivery system, and experiments, in the environment specific to the program, with alternative strategies to find the most suitable one, with the objective of improving the system. Clinical or epidemiological research, on the other hand, studies individuals and groups of individuals in search of new knowledge. In addition, ethical issues, which form an integral part of all clinical and epidemiological research, have a poorly defined role in OR, more so when it is based on secondary data.

The keyword in all the definitions is improvement, which is to be brought about by means of research in the operation of an ongoing program. Its characteristics include:

  • It focuses on a specific problem in an ongoing programme
  • It involves research into the problem using principles of epidemiology
  • It tests more than one possible solution and provides a rational basis, in the absence of complete information, for choosing the best alternative to improve program efficiency
  • It requires close interaction between program managers and researchers
  • It succeeds only if the research is conducted in the existing environment and study results are implemented in true letter and spirit.

The Process

In health-care settings, an ongoing health program often fails to achieve its expected objectives, and the program managers are faced with problems whose underlying factors are not apparent. This is the stage where the process of OR is initiated. In a standard OR process, planning begins with the organization of a research team, which should have a mix of people with different backgrounds such as epidemiology, biostatistics, and health management. The program managers may not be able to carry out the research themselves because of their work responsibilities and, in all probability, their biased views. However, they need to have a working relationship with the research team to ensure the smooth conduct of the research and ownership of the results by all parties.

According to Fisher et al., OR is a continuous process of problem identification, selection of a suitable strategy/intervention, experimentation with the selected strategy/intervention, dissemination of the findings, and utilization of the information so derived.[ 8 ] However, it may not always be possible to follow a step-by-step approach in OR, since it is carried out in the existing environment and many of the activities may be taking place simultaneously. The process involves the following steps [ Figure 1 ].

[Figure 1: Process of operational research]

Identifying problems

Like any other research, OR begins with a research question; identifying one is the first and foremost step. Discussions with program managers and staff, review of project reports and local documentation, discussions with experts in the field, and a literature search give insight into why the problem is occurring and what the possible solutions are, and they help in the identification of the research question. OR methods are useful for the systematic identification of problems and the search for potential solutions. Structured approaches to identifying options, such as the strategic choice approach or systematic creativity approaches, have great potential for use in low-resource settings.[ 9 ]

Choosing interventions

Choosing appropriate interventions is clearly a crucial step. Effectiveness, safety, cost, and equity should all be considered, and researchers should be familiar with standard textbook methods for assessing these. Finding the best combinations and delivery methods is a major research exercise in its own right. Modeling different intervention strategies before rollout is now ubiquitous in many industries but is less common in healthcare.[ 10 ] Modeling work has been done on ways to reduce maternal mortality and in cervical cancer screening in low-resource settings.[ 11 ]

An appropriate intervention design, depending on available time and resources, should have a written protocol spelling out the details of the steps to be taken during implementation. Only valid and reliable instruments, whether the study is quantitative or qualitative, should be used; and wherever possible, a pilot study should be carried out to further refine the conduct of the intervention. The contribution that OR and management science can make to design and delivery is not restricted to high technology. Oral rehydration therapy is a “low-tech, low-cost, high-impact” innovation, in which OR was used to explore ways it could be administered using readily available ingredients by laypeople, with an escalation pathway to treatment by health-care professionals when necessary.[ 12 ]

Small-scale projects generally need considerable modifications to work on a larger scale. Classic OR techniques such as simulation modeling can be used in locating services, managing the supply chain, and developing the health-care workforce.

Integrating into health systems

After analysis of the results, the information gathered should be disseminated to stakeholders and decision-makers. The modalities of information utilization should have been predecided and included in the research proposal. Successes in global health programs often result from synergistic interactions between individual, community, and national actors rather than from any single “magic bullet.” A greater focus is needed on how interventions should be used in a complex behavioral environment, to better capture the dynamics of social networks, and to understand how complex systems can adapt positively to change. This is a task where OR and management science tools can be useful, as demonstrated by systems analysis of programs for cervical cancer prevention[ 13 ] or agent simulation modeling of the spread of HIV in villages.[ 14 ]

Evaluation

One of the greatest challenges for global health is the measurement and evaluation of performance of projects and programs. The WHO defines evaluation as “ the systematic and objective assessment of an ongoing or completed initiative, its design, implementation, and results. The aim is to determine the relevance and fulfillment of objectives, efficiency, effectiveness, impact, and sustainability .”[ 15 ] It may or may not lead to improvement.

The Accelerated Child Survival and Development (ACSD) program, an initiative of UNICEF, was implemented in eleven West African countries from 2001 to 2005 with the objective of reducing mortality among under-fives by at least 25% by the end of 2006. A retrospective evaluation of the program was carried out in Benin, Ghana, and Mali by comparing data from ACSD focus districts with those from the remaining districts. It showed that the difference in coverage of preventive interventions in ACSD focus areas before and after program implementation was not significant in Benin and Mali, which probably explains the failure of the ACSD program to accelerate the survival of under-fives in focus areas of Benin and Mali as compared to comparison areas. Had the inputs obtained from the evaluation been translated into policy or national programs, the desired results of ACSD program implementation might have been delivered.[ 16 ] Evaluation, thus, is fundamental to good management and is an essential part of the process of developing effective public policy. It is a complex enterprise, requiring researchers to balance the rigors of their research strategies with the relevance of their work for managers and policymakers.[ 17 ]

Standard controlled trial approaches to evaluation are sometimes feasible and appropriate, but often a more flexible systems-oriented approach is required, together with modeling to help assess the effectiveness of preventive interventions.[ 18 ] Decision tree modeling can give rapid insights into the operational effectiveness and cost-effectiveness of procedures[ 19 ] and programs.[ 20 ]
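To give a flavor of how a decision tree can yield rapid insight, here is a minimal Python sketch comparing two hypothetical interventions by expected cost per successful outcome. All numbers are invented for illustration.

```python
# Minimal sketch of decision tree modeling for cost-effectiveness: compare
# two hypothetical interventions by expected cost per successful outcome.
# All numbers are invented for illustration.

OPTIONS = {
    # option: (cost per person treated, probability of a successful outcome)
    "intervention_A": (50.0, 0.60),
    "intervention_B": (80.0, 0.75),
}

def expected_cost_per_success(cost: float, p_success: float) -> float:
    """Expected cost divided by expected successes for one person treated."""
    return cost / p_success

for name, (cost, p) in OPTIONS.items():
    print(name, round(expected_cost_per_success(cost, p), 2))
# intervention_A: 83.33, intervention_B: 106.67 -> A is more cost-effective
# per success here, even though B succeeds more often.
```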

Operational Research in Health-care Settings: Examples

The relevance of OR in health-care settings cannot be overemphasized. It has been used successfully all over the world in various health programs, such as family planning, HIV, tuberculosis (TB), and malaria control programs, to name a few. Its role in improving various health programs and in the development of policies has been acknowledged globally. Sustained OR efforts over several decades helped in developing the global strategy for the control of TB. India and Malawi provide the most successful examples of OR in this field.[ 21 ] In India, OR demonstrated that successful implementation of the DOTS strategy throughout the country led to a reduction in the prevalence of TB, a reduction in fatality due to TB, and the release of hospital beds occupied by TB patients, and thereby a potential gain to the Indian economy.[ 22 ]

About half of TB patients in India rely on the private sector for treatment. In spite of TB being a notifiable disease, notification from the private sector has been a challenge. In 2014, Delhi state, by adopting direct “one to one” sensitization of private practitioners by a TB notification committee, was able to accelerate the notification of TB cases from the private sector.[ 23 ]

In view of the growing burden of multidrug-resistant TB (MDR-TB), an OR study was conducted in the setting of the Revised National Tuberculosis Programme on patients with presumptive MDR-TB in North and Central Chennai in 2014 to determine prediagnosis attrition and pretreatment attrition, and the factors associated with them. Prediagnosis and pretreatment attrition were found to be 11% and 38%, respectively. The study showed that patients with smear-negative TB were less likely to undergo drug susceptibility testing (DST) and that more attention needed to be paid to this group to improve DST uptake.[ 24 ]

One of the most successful examples of OR in India is the experimental study carried out in Gadchiroli district of Maharashtra from 1993 to 1998. In their path-breaking field trial, Bang et al . trained village level workers in neonatal care who subsequently made home visits at scheduled intervals and managed premature birth/low birthweight, birth asphyxia, hypothermia, neonatal sepsis, and breastfeeding problems. This led to a significant reduction in neonatal mortality rates in intervention villages.[ 25 ] Encouraged by the success of this field trial, Home-Based Newborn Care has been adopted by many districts in India to combat neonatal mortality.

In the leprosy case detection campaign (LCDC), introduced under the National Leprosy Eradication Programme of India in 2016, false-positive diagnosis is a major issue. A study carried out in four districts of Bihar found 30% false-positive cases during the LCDC. Using “appreciative inquiry” as a tool, Wagh et al. were able to achieve a decline in false-positive diagnoses.[ 26 ]

OR has been used successfully in hospital settings too. In Latin America, unsafe abortion used to be one of the most common causes of high maternal mortality. Billings and Benson reviewed ten completed OR projects conducted in public sector hospitals in seven Latin American countries. Their findings indicated that replacing sharp curettage with manual vacuum aspiration reduced the resources required for postabortion care, the cost and length of hospital stay, and maternal mortality.[ 27 ]

Conclusion

Following the Alma Ata Declaration and the Millennium Development Goals, all countries of the world have instituted their own national health programs in a bid to improve the health of their people. Although health programs are in place, governments are committed, guidance from the WHO is available, and support from NGOs has been garnered, many countries have still not been able to achieve their desired goals. Operational research is now being used as a key instrument, especially in resource-poor countries, to tap untapped information. Administrators are using it as a searchlight for discovering what is still in the dark. It is here to stay. It is high time that the scientific community working in health-care settings got acquainted with the nuances of OR and used it more often to improve the outcomes of health programs and to make them more efficient and effective.

Financial support and sponsorship

Conflicts of interest

There are no conflicts of interest.


Operational research as implementation science: definitions, challenges and research priorities

Thomas Monks

Implementation Science, volume 11, Article number: 81 (2016). Open access. Published: 06 June 2016.

Operational research (OR) is the discipline of using models, either quantitative or qualitative, to aid decision-making in complex implementation problems. The methods of OR have been used in healthcare since the 1950s in diverse areas such as emergency medicine and the interface between acute and community care; hospital performance; scheduling and management of patient home visits; scheduling of patient appointments; and many other complex implementation problems of an operational or logistical nature.

To date, there has been limited debate about the role that operational research should take within implementation science. I detail three such roles for OR, all grounded in upfront system thinking: structuring implementation problems, prospective evaluation of improvement interventions, and strategic reconfiguration. Case studies from mental health, emergency medicine, and stroke care are used to illustrate each role. I then describe the challenges for applied OR within implementation science at the organisational, interventional, and disciplinary levels. Two key challenges include the difficulty faced in achieving a position of mutual understanding between implementation scientists and research users and a stark lack of evaluation of OR interventions. To address these challenges, I propose a research agenda to evaluate applied OR through the lens of implementation science, the liberation of OR from the specialist research and consultancy environment, and co-design of models with service users.

Operational research is a mature discipline that has developed a significant volume of methodology to improve health services. OR offers implementation scientists the opportunity to do more upfront system thinking before committing resources or taking risks. OR has three roles within implementation science: structuring an implementation problem, prospective evaluation of implementation problems, and a tool for strategic reconfiguration of health services. Challenges facing OR as implementation science include limited evidence and evaluation of impact, limited service user involvement, a lack of managerial awareness, effective communication between research users and OR modellers, and availability of healthcare data. To progress the science, a focus is needed in three key areas: evaluation of OR interventions, embedding the knowledge of OR in health services, and educating OR modellers about the aims and benefits of service user involvement.


Operational research (OR) is the discipline of using models, either quantitative or qualitative, to aid decision-making in complex problems [ 1 ]. The practice of applied healthcare OR distinguishes itself from other model-based disciplines such as health economics in that it is action research based: operational researchers participate collaboratively with those who work in or use the system to define, develop, and find ways to sustain solutions to live implementation problems [ 2 ]. The methods of OR have been used in healthcare since the 1950s [ 3 ] to analyse implementation problems in diverse areas such as emergency departments [ 4 – 6 ] and management policies for ambulance fleets [ 7 ]; acute stroke care [ 8 – 11 ], outpatient clinic waiting times [ 12 ], and locations [ 13 ]; cardiac surgery capacity planning [ 14 ]; the interface between acute and community care [ 15 ]; hospital performance [ 16 ]; scheduling and routing of nurse visits [ 17 ]; scheduling of patient appointments [ 18 ]; and many other complex implementation problems of an operational or logistical nature.

Implementation science is the study of methods to increase the uptake of research findings in healthcare [ 19 ]. Given the volume of OR research in healthcare implementation problems, it is remarkable that limited discussion of the discipline has occurred within the implementation science literature. A rare example of debate is given by Atkinson and colleagues [ 20 ], who introduce the notion of system science approaches for use in public health policy decisions. Their argument focused on two modelling methods, system dynamics and agent-based simulation, and the potential benefits they bring for disinvestment decisions in public health. To complement and extend this debate, I define the overlap between implementation science and OR. I have focused on the upfront role that OR takes when used as an implementation science tool. Although some detail of method is given, the full breadth of OR is beyond the scope of this article; a detailed overview of all the methods can be found elsewhere [ 21 ]. I describe three roles for OR within implementation science: structuring an implementation problem, prospective evaluation of an intervention, and strategic reconfiguration of services. For each role, I provide a case study to illustrate the concepts described. I then describe the challenges for OR within implementation science at the organisational, interventional, and disciplinary levels. Given these challenges, I derive a research agenda for implementation science and OR.

OR to structure an implementation problem

The first role for OR in implementation science is to provide a mechanism for structuring an implementation problem. Within OR, problem structuring methods provide participatory modelling approaches to support stakeholders in addressing problems of high complexity and uncertainty [ 22 ]. These complex situations are often poorly defined and contain multiple actors with multiple perspectives and conflicting interests [ 23 ]. As such, they are unsuitable for quantitative approaches. Problem structuring methods aim to develop models that enable stakeholders to reach a shared understanding of their problem situation and commit to action(s) that resolve it [ 23 ]. Approaches might serve as a way to clearly define objectives for a quantitative modelling study [ 24 ], systematically identify the areas to intervene within a system [ 25 ], or may be an intervention to improve a system in its own right.

A case example—understanding patient flow in the mental health system

A mental health service provider in the UK provided treatment to patients via several specialist workforces. Here, I focus on two: psychology and psychiatric talking therapies (PPT) and recovering independent life (RIL) teams. Waiting times to begin treatment under these services were high (e.g. for RIL team median = 55 days, inter-quartile range = 40–95 days), and treatment could last many years once it had begun. The trust’s management team were eager to implement new procedures to help staff manage case load and hence reduce waiting times to prevent service users, here defined as patients, their families, and carers, from entering a crisis state due to diminishing health without treatment. Management believed that reasons for delays were more complex than lack of staff, but the exact details were unclear and there was much disagreement between the senior management. The implementation science intervention I detail was conducted as an OR problem structuring exercise.

A system dynamics (SD) model was constructed to help management target their interventions. SD is a subset of system thinking, the process of understanding how things within a system influence one another within the whole. SD models can be either qualitative or quantitative. In this case, a purely qualitative model was created. Figure  1 illustrates stock and flow notation that is commonly used in SD. The example is the concept of a simple waiting list for a (generic) treatment. It can be explained as follows. General practitioners (GPs) refer service users to a waiting list at an average daily rate, while specialist clinicians treat according to how much daily treatment capacity they have. The variable waiting list is represented as a rectangular stock: an accumulation of patients. The waiting list stock is either depleted or fed by rate variables, referring and treating, represented as flows (pipes with valves) entering and leaving the stock. Figure  1 also contains two feedback loops that are illustrated by the curved lines. The first loop is related to the GP reluctance to refer to a service with a long waiting time. As the waiting list for a service increases in number, so does the average waiting time of service users, and so does the pressure for GPs to consider an alternative service (lowering the daily referral rate). The second loop is related to specialist clinicians reacting to long waiting lists by creating a small amount of additional treatment capacity and increasing admission rates.

Example system thinking for a waiting list—stock and flow notation. Notation guide: rectangles represent stocks, which are accumulations of a quantity of interest; pipes with valves represent flows, which feed or deplete stocks; arrows represent how one aspect of a system positively or negatively influences another
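For readers who want to see the quantitative counterpart of this notation, the following minimal Python sketch simulates the waiting-list stock with the two feedback loops described above. All rates and feedback strengths are invented; a real SD model would be calibrated to service data.

```python
# Minimal sketch of a discrete-time stock-and-flow model of a waiting list
# with the two feedback loops described in the text. All rates and feedback
# strengths are invented for illustration.

waiting_list = 200.0    # stock: service users currently waiting
base_referrals = 10.0   # GP referrals per day before feedback
base_capacity = 9.0     # treatments per day before feedback

for day in range(1, 181):
    # Loop 1: GPs refer less as the list (and hence waiting time) grows.
    referring = base_referrals * max(0.0, 1.0 - 0.001 * waiting_list)
    # Loop 2: clinicians add a little capacity under waiting-list pressure.
    treating = base_capacity * (1.0 + 0.0005 * waiting_list)
    waiting_list = max(0.0, waiting_list + referring - treating)
    if day % 60 == 0:
        print(f"day {day}: waiting list ~ {waiting_list:.0f}")
```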

A preliminary version of the SD model was created using a series of interviews with clinicians and managers from the three services. This was followed by a group model building workshop that involved all senior management. Group model building is a structured process that aims to create a shared mental model of a problem [ 26 ]. The workshop began with a nominal group exercise. The group were asked to individually write down what they believed were the key factors that affected patient waiting times. The group were specifically asked to focus on strategic issues as opposed to detailed process-based problems. After all individual results had been shared, the group were asked to (i) hypothesise how these factors influenced each other and (ii) propose any missing variables that may mediate influence. For example, available treatment capacity is reduced by non-clinical workload. Non-clinical workload is increased by several other factors (discussed below in results) and so on.

Figure  2 illustrates one of the qualitative SD models developed in collaboration with the mental health trust. It uses the same stock and flow notation illustrated in Fig. 1. The model shown is focussed on the RIL teams. Several insights were gained in its construction. First, it was clear to all parties that this was not a simple demand and treatment capacity problem. For example, a great deal of non-core work takes place due to monitoring of ‘discharged’ service users within social care. The fraction of service users who undergo monitoring is determined by the degree of trust between clinicians and social care teams. When trust is low, the fraction of service users monitored increases and vice versa. A similar soft issue can be found in the discharge of complex patients, i.e. those that require a combination of medication, management by GPs in the community, and social care input. In this case, there is a delay while GPs build confidence that it is appropriate for a patient to be discharged into their care. While this negotiation takes place, a patient still requires regular monitoring by a mental health clinician. Other systemic issues are also visible. For example, the long delays in beginning treatment lead to clinicians spending time contacting patients by phone before they are admitted. This all takes time and reinforces the delay cycle.

A simplified version of the RIL team patient flow model

The results of the modelling were used to inform where interventions could be targeted: for example, a more detailed qualitative SD study to identify the trust issues between clinicians, social services, and general practitioners.

OR as a tool for prospective evaluation

The second role of OR within implementation science is as a prospective evaluation tool: to provide a formal assessment and appraisal of competing implementation options or choices before any actual implementation effort, commitment of resources, or disinvestment takes place. Informally, this approach is often called what-if analysis [ 21 ]. A mathematical or computational model of a healthcare system is developed that predicts one or more measures of performance, for example, service waiting times, patients successfully treated, avoided mortality, or operating costs. The model can be set up to test and compare complex interventions to the status quo. For example, decision makers may wish to compare the number of delayed transfers of care in a rehabilitation pathway before and after investment in services to prevent hospital admissions and disinvestment in rehabilitation in-patient beds. The approach has been applied widely in the areas outlined in the introduction to this article.

A case example—emergency medicine capacity planning

As a simple case example of prospective evaluation, consider the emergency department (ED) overcrowding problems faced by the United Kingdom’s (UK) National Health Service (NHS). The performance of NHS EDs is (very publicly) monitored by recording the proportion of patients who can be seen and discharged from an ED within 4 hours of their arrival. The UK government has set a target that 95 % of service users must be processed in this time. In recent years, many NHS EDs have not achieved this benchmark. The reasons for this are complex and are not confined to the department [ 27 ] or even the hospital [ 15 ]. However, given the high public interest, many EDs are attempting to manage the demands placed on them by implementing initiatives to reduce waiting times and optimise their own processes.

Our case study took place at a large ‘underperforming’ hospital in the UK. The management team were divided in their view about how to reduce waiting times. One option was to implement a clinical decision-making unit (CDU). A CDU is a ward linked to the ED that provides more time for ED clinicians to make decisions about service users with complex needs. However, at times of high pressure, a CDU can also serve as buffer capacity between the ED and the main hospital. That is, a CDU provides space for service users at risk of breaching the 4-h target; once admitted, service users are no longer at risk of breach. The question at hand was: if a CDU were implemented, how many beds would be required in order for the ED to achieve the 95 % benchmark?

Figure  3 illustrates the logic of a computer simulation model that was developed to evaluate the effect of implementing a CDU on ED waiting times. A computer simulation model is a simplified dynamic representation of the real system that in most cases is accompanied by an animation to help understanding. In this case, the simulation mimicked the flow of patients into an ED, their assessment and treatment by clinicians, and then their flow out to different parts of the hospital or out of the hospital entirely. The scope of the modelling included the hospital’s Acute Medical Unit (AMU) that admits medical patients from the ED. In Fig. 3, the rectangular boxes represent processes, for example, assessment and treatment in the ED. The partitioned rectangles represent queues, for example, patients waiting for admission to the AMU. The model was set up to only admit patients to the CDU who had been in the ED longer than 3.5 h, and only then if there was a free bed. Once a patient’s CDU stay was complete, they would continue on their hospital journey as normal, i.e. discharged home, admitted to the AMU, or admitted to another in-patient ward.

Emergency department and clinical decision-making unit model. Notation guide. Rectangles represent processes; partitioned rectangles represent queues; ellipses represent start and end points; arrows represent the direction of patient flow

In the model, the various departments and wards are conceptualised as stochastic queuing systems subject to constraints. This means that the variability we see in service user arrival and treatment rates (e.g. sudden bursts in arrivals combined with more complex and hence slower treatments) combined with limited cubicle and bed numbers results in queues. There are three reasons why prospective evaluation is appropriate for these systems. First, capacity planning for such complex systems based on average occupancy fails to take queuing into account and will substantially underestimate capacity requirements [ 28 ]. Second, the processing time, i.e. the time taken to transfer a patient to a ward and then to make a clinical decision, within a CDU is uncertain, although it is likely to be slower than the high pressure environment of the ED. Third, as the same ED and AMU clinicians must staff the CDU, the (negative or positive) impact on their respective processing times is uncertain.

The model developed was a discrete-event simulation [ 29 ] that mimics the variation in service user arrival and treatment rates in order to predict waiting times. The uncertainty in CDU processing time was treated as an unknown and varied in a sensitivity analysis. The limits of this analysis were chosen as 2 and 7 h on average, as these were observed in similar wards elsewhere.
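As a rough illustration of the approach (not the study’s actual model), the sketch below simulates an ED as a multi-server queue with random arrivals and treatment times and reports the share of patients through within 4 hours. The arrival rate, treatment time, and cubicle count are invented, and the AMU and the CDU admission rule are omitted for brevity.

```python
# Minimal sketch of a discrete-event simulation of an ED as a multi-server
# queue. All parameters are invented; the study's model also covered the
# AMU and CDU, which are omitted here.

import heapq, random

def simulate_ed(cubicles=18, arrivals_per_hr=6.0, mean_treat_hr=2.5,
                n_patients=20_000, seed=42):
    random.seed(seed)
    free = cubicles
    waiting, server_done, within_4h = [], [], 0
    next_arrival = random.expovariate(arrivals_per_hr)
    arrived = 0
    while arrived < n_patients or waiting or server_done:
        # Advance to the next event: a treatment completion or an arrival.
        if server_done and (arrived >= n_patients or server_done[0] <= next_arrival):
            t = heapq.heappop(server_done)   # a cubicle becomes free
            free += 1
        else:
            t = next_arrival                 # a new patient arrives
            waiting.append(t)                # remember their arrival time
            arrived += 1
            next_arrival = (t + random.expovariate(arrivals_per_hr)
                            if arrived < n_patients else float("inf"))
        # Start treatment for queued patients while cubicles are free.
        while free and waiting:
            arrival_time = waiting.pop(0)
            finish = t + random.expovariate(1.0 / mean_treat_hr)
            heapq.heappush(server_done, finish)
            free -= 1
            if finish - arrival_time <= 4.0:
                within_4h += 1
    return within_4h / n_patients

print(f"share of patients through within 4 h: {simulate_ed():.1%}")
```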

The model predicted that the number of CDU beds would need to be between 30 and 70 in order to achieve the ED target (for reference, the ED had 10 cubicles for minor cases and 18 cubicles for major cases). This result illustrated that even if a decision was made in 2 h on average with no negative effect on ED or AMU processing time, the CDU would need to be at least the same size as the ED overall. It also highlighted that the CDU impact on ED performance was highly sensitive to processing time.

The benefit of evaluating the CDU implementation upfront was that it ruled the CDU out as a feasible intervention before any substantial resource had been mobilised to implement it. The hospital could not safely staff a 30-bedded CDU or indeed provide space for that size of ward. As such, the modelling helped the management team abandon their CDU plan and consider alternative solutions with minimal cost and no disruption to the service.

OR as a tool for strategic reconfiguration

The previous section described an implementation science approach to evaluate a small number of competing options at an operational level. In some instances, particularly in healthcare logistics and estate planning, a more strategic view of a system is needed to shortlist or choose options for reconfiguration. In such implementation problems, there may be a large number of options reaching into the hundreds, if not hundreds of thousands, of competing alternatives. To analyse these problems, mathematical and computational optimization techniques are required. For example, if a provider of sexual health services wanted to consolidate community clinics from 50 to 20 and there are 100 candidate locations, then there are on the order of \( 10^{20} \) configurations to consider. OR’s implementation science role is to provide tools that identify options that help meet a strategic objective. For example, this might be maintaining equitable patient access to services across different demographic groups or modes of transportation while increasing service quality and reducing cost.
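The scale of that search space is easy to verify; the only assumption here is that it is the binomial count of ways to choose 20 sites from 100.

```python
# Number of ways to choose 20 clinic sites from 100 candidates.
from math import comb
print(f"{comb(100, 20):.2e}")  # ~5.36e+20, i.e. on the order of 10^20
```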

A case example—where should TIA outpatient clinics be located?

As a simple exemplar, consider a rural region in the UK that provided a 7-day transient ischemic attack (TIA) service through outpatient clinics in the community. Clinics ran at five locations but with only one location open per day. Magnetic resonance imaging (MRI) was available at three locations. Service users attending clinics without imaging but who require access to an MRI make an additional journey to the closest location with imaging capacity.

Service users are booked into clinic appointments across the week as they are referred to the TIA service by their diagnosing clinician, typically the patient’s local GP or an attending emergency department physician. The diagnosing clinician risk stratifies service users as high or low risk of a major stroke. High-risk service users need to be seen within 24 h of symptom onset and low-risk patients within 7 days [ 30 ].

The healthcare providers had concerns that splitting the clinics across five sites increased the variation in care received by service users and wished to consolidate to one to three clinic locations. Hence, there were two complicating factors when assessing equitable access: how many locations and which ones. There were also concerns that one location—clinic X—on the coast of the region was extremely difficult for high-risk TIAs to reach on the same day as diagnosis. There would also be political implications for any closure at clinic X. In total, there were 25 combinations of clinics for the providers to consider for both the low- and high-risk TIA groups, i.e. 50 options to review.

A discrete-choice facility location model was developed to evaluate the consequences of different TIA clinic configurations and inform the decision-making process for the reconfiguration of the service. Location analysis is a specialised branch of combinatorial optimisation and involves solving for the optimal placement of a set of facilities in a region in order to minimise or maximise a measure of performance, such as transportation costs, travel time, or population coverage [31]. In this case, an analysis was conducted separately for high-risk and low-risk TIAs. The analysis of high-risk TIAs aimed to minimise the maximum travel time of a service user from their home location to the closest clinic (as these service users must be seen the same day). The low-risk analysis minimised the weighted average travel time to the closest clinic. The weighted average measure allows the locations with the highest level of demand to have the greatest impact on results, diminishing the impact of outlying points. In general, if there are \(n\) demand locations, \(w_i\) service users travel from location \(i\) on a given day, and the travel time from location \(i\) to its nearest clinic is \(x_i\), then the weighted average travel time \(\overline{x}\) is given by Eq. (1):

\( \overline{x} = \frac{\sum_{i=1}^{n} w_i x_i}{\sum_{i=1}^{n} w_i} \)  (1)

Table 1 illustrates the use of the equation with two fictional locations. For each location, the number of patients who travel and their travel time to a hospital are given. In the table, the weighted average is compared with the more familiar mean average.
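
As an illustration of the two objectives, the sketch below evaluates every one-, two-, and three-clinic configuration by brute force: the minimax (worst-case) travel time for the high-risk group and the demand-weighted average of Eq. (1) for the low-risk group. The travel-time matrix and demand figures are invented for the example (the real study used road travel times across the region); with only 25 configurations, exhaustive enumeration is trivially feasible.

```python
from itertools import combinations

# Hypothetical travel times (minutes) from four demand locations to five
# candidate clinics A-E, plus the daily demand at each location.
# All figures are invented for illustration.
CLINICS = ["A", "B", "C", "D", "E"]
TRAVEL = [            # rows: demand locations; columns: clinics A..E
    [10, 25, 40, 55, 70],
    [30, 15, 20, 45, 60],
    [50, 35, 15, 20, 40],
    [70, 55, 35, 15, 25],
]
DEMAND = [120, 80, 60, 40]  # service users travelling from each demand location


def nearest_times(open_sites):
    """Travel time from each demand location to its closest open clinic."""
    return [min(row[j] for j in open_sites) for row in TRAVEL]


def minimax(open_sites):
    """High-risk objective: worst-case travel time (a p-centre objective)."""
    return max(nearest_times(open_sites))


def weighted_average(open_sites):
    """Low-risk objective: demand-weighted mean travel time, as in Eq. (1)."""
    times = nearest_times(open_sites)
    return sum(w * x for w, x in zip(DEMAND, times)) / sum(DEMAND)


# All 5 + 10 + 10 = 25 configurations of one to three open clinics.
configs = [c for k in (1, 2, 3) for c in combinations(range(len(CLINICS)), k)]

best_high = min(configs, key=minimax)
best_low = min(configs, key=weighted_average)
print("high-risk (minimax): ", [CLINICS[j] for j in best_high], minimax(best_high), "min")
print("low-risk (weighted): ", [CLINICS[j] for j in best_low],
      round(weighted_average(best_low), 1), "min")
```

The same pattern (swap the objective function, keep the enumeration) mirrors how the two risk groups were analysed separately; for problems with far more candidate sites, the enumeration would be replaced by a dedicated facility location solver.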

The model demonstrated that clinics most central to the region were all good choices to provide equitable patient access. A three-clinic solution provided the most equitable solution for service users. The problematic clinic X on the coast of the region was not included in an optimal configuration; however, it could be included in a three-clinic solution without substantial effect on travel times if scheduled infrequently. This latter result allowed the decision makers to move on from the strategic debate about location and focus on the more detailed implementation issues of scheduling and capacity planning for clinics. This was again addressed upfront using a computer simulation study to evaluate a small number of competing options for scheduling the clinics.

Lessons for implementation science

Each of the three roles emphasises the use of OR to conduct implementation science upfront, before any action to alter a care pathway or service has been taken. Many OR scholars argue that the benefit of constructing a model upfront is that it forces decision makers to move from a world of imprecise language to one of precise language (sometimes referred to as a common language [32]) and ultimately to develop a shared understanding of the problem, although, as I will argue later, there is very limited empirical evidence supporting this proposition. Such a shared understanding increases the likelihood that implementation will actually go ahead and, importantly, that it will be sustained or normalised.

It is important to emphasise that the three case studies illustrate the simpler end of what can be achieved in using OR for upfront implementation science. This is partly a stylistic choice to aid reader understanding (many optimisation problems, for example, are hugely complex), but also because, in my experience, simpler models tend to be accepted and used more in healthcare. Simpler models also need less input data and hence can be built and run quickly.

Beyond the three case studies, OR in general is grounded in the use of models to improve upfront decision-making in complex implementation problems. Although there is significant overlap between OR and implementation research, there are differences. For example, OR would not provide the rich contextual information collected in a process evaluation.

Implementation science challenges for OR

Implementation science poses a number of challenges for OR. I propose that these lie at three levels: disciplinary, organisational, and interventional. Table  2 summarises these key challenges.

Challenges at a disciplinary level

This article describes three roles for OR within implementation science. An irony is that OR interventions themselves are poorly understood, with barely any published evaluation of their practice or impact [33–36]. Limited examples can be found in Monks et al. [37], Pagel et al. [38], and Brailsford et al. [39]. The explanation for this can be found at a disciplinary level. That is, academic OR is predominantly driven and rewarded by the development of theory for modelling methodology, as opposed to understanding interventions and the issues they raise for practice. As such, a discipline that promotes the use of evidence for decision-making in healthcare cannot confidently answer the question 'does OR in health work?' I am regularly challenged on this point by healthcare professionals.

A second disciplinary challenge is to systematically involve service users in the co-design of OR interventions. To date, evidence of service user involvement is limited (see Walsh and Hostick [40] for an example). There is also confusion between service users framed as research participants (typically treated as a data source to parameterise models with behavioural assumptions) and as co-designers of research objectives and methods, although there has been an effort to clarify this important difference [41].

Challenges at the organisational level

The three roles of OR outlined above are widely applicable across healthcare implementation problems. However, before OR can be used within practice, users of the research (in this case, healthcare managers, clinicians, and service users) must be aware of the approaches. This is currently a substantial barrier to wide-scale adoption in health services [42–44] and stands in stark contrast to domains such as manufacturing and defence, where OR is used frequently to generate evidence before action [45]. The implication of low awareness of OR in health is that it is often difficult to engage senior decision makers in the complex operational and logistical problems that matter the most for service users.

Challenges at an interventional level

Fifty years ago, Churchman and Schainblatt [46] wrote about a 'dialectic of implementation' in the journal Management Science. In this paper, the two authors advocated that a position of mutual understanding between a researcher and a manager was necessary in order to implement the results of a study. That is, the researcher must understand the manager's position, values, and implementation problem in order to tackle the correct problem in the right way. The manager must understand the method that the researcher has applied, at least at a conceptual level, in order to scrutinise, challenge, and implement results. The concept of mutual understanding is an elegant one, but in practice achieving it is a challenge for both sides. As a simple example from a researcher's perspective, it is difficult to assess whether the users of a model understand why it is producing certain results [42]. That is, do users understand how the model works, or are they simply accepting the results based on some heuristic, such as 'these are the results I want' or 'I trust the person telling me the results'? Given the disciplinary challenge outlined above, to date there is limited validated guidance about how to manage such complex interventions within OR.

The computer software used in the three case studies has been available for a considerable time, but the data needed to parameterise the quantitative models used to illustrate the second and third roles are potentially not collected routinely. All models require data from the system studied. The TIA clinic study had relatively low requirements: individual service user-level data detailing the date of clinic attendance, the clinic attended, the risk classification of the patient, and the patient's home location—much of which is collected routinely by a health system for financial reporting purposes. Simulation modelling studies, such as that described in the emergency department case study, have high data requirements, including fine-grained timings of processes such as triage and doctor assessment. It is unlikely that such data are collected routinely, as they have no use in financial reporting.

An agenda for OR in implementation science

Given the organisational, interventional, and disciplinary issues outlined in the ‘ Implementation science challenges for OR ’ section, I propose the following agenda for OR within implementation science.

Priority 1: creating the evidence base

At the forefront of the research agenda is the need to evaluate the impact of OR on complex interventions. The focus here should be on the consumers of research, as opposed to the modellers and the process they follow [47, 48]. There is a need to understand how stakeholders make sense of an OR intervention and how the results of studies are used to assist decision-making. Recent research offers some promise in progressing this aim. PartiSim [49] is a participative modelling framework that aims to involve stakeholders in structured workshops throughout a simulation study. Structured frameworks like PartiSim provide an opportunity to study the user side of OR more efficiently, as the modelling steps are known upfront. Another promising area is the recent emergence of Behavioural OR [50]. One of the core aims of Behavioural OR is to analyse and understand the practice of OR and its impact in context (e.g. [51–53]).

Priority 2: raising demand and the liberation of OR

Much of the challenge I outline in using OR as an implementation science technique is rooted in the lack of organisational awareness and experience of the approach. But what if this challenge were resolved? To examine this further, consider a counterfactual world where all health service users, managers, and clinicians are well versed in the three implementation science roles of OR and all have free access to a substantial evidence base detailing the efficacy of the approach. In this world, where OR is an accepted implementation science approach, the constraint has moved from demand to supply of modelling services. Current supply is predominantly provided by the (relatively) small specialist consultancy and research communities. There is a great need to liberate OR from its roots as the tool of the 'specialist' and transfer knowledge to research users. Two initial efforts towards this priority are Teaching Operational Research for Commissioning in Health (TORCH) in the UK [54] and the Research into Global Healthcare Tools (RIGHT) project [55]. TORCH successfully developed a curriculum for teaching OR to commissioners, although it has yet to be implemented on a wide scale or evaluated. The RIGHT project developed a pilot web tool to enable healthcare providers to select an appropriate OR approach to assist with an implementation problem. Both of these projects demonstrate preliminary efforts at liberating OR from the traditional paradigm of specialist delivery.

The liberation of OR has already taken place in some areas in the form of Community OR. The three case studies illustrated interventions where the collaboration placed the emphasis on a modeller to construct the model and provide results for the wider stakeholder group. Alternatively, service users could develop or make use of OR methods to analyse a problem themselves. Community OR changes the role of an operational researcher from modeller to facilitator, in order to help those from outside OR create appropriate systematic methodology to tackle important social and community-based issues. A rare example of Community OR in healthcare [40] describes two cases where service users took the lead. In the first, users of mental health services used systems methods to produce a problem structuring tool to evaluate the impact of service users on NHS decision-making. In the second, service users developed and applied an idealised planning approach for the future structure of mental health services. These approaches are qualitative in nature but are systematic and in line with an OR implementation science approach.

Priority 3: PPI education for OR modellers

The first two priorities listed might be considered long-term goals for the OR implementation science community. An immediate priority that is arguably achievable over the short term is Patient and Public Involvement (PPI) education for OR modellers. The co-design of healthcare models with decision makers is often held up as a critical success factor for modelling interventions [42]. For ethical and practical reasons, co-design of OR modelling interventions should also include service users [41]. Education need not be complicated and could at first be delivered through widely read OR magazines and a grassroots movement within master's degree courses.

Conclusions

Operational research offers improvement scientists, and individuals who work in complex health systems, the opportunity to do more upfront systems thinking about interventions and change. OR's upfront role within implementation science aims to answer questions such as: where best to target interventions; will an intervention work, even under optimistic assumptions; which options out of many should we implement; and should we consider de-implementing part of a service in favour of investing elsewhere? As OR becomes more widely adopted as an implementation science technique, evaluation of the method through the lens of implementation science itself becomes more necessary in order to generate an evidence base about how to conduct OR interventions effectively. It is also necessary to liberate OR from its traditional roots as a specialist tool.

Operational research (OR) is a mature discipline that has developed a significant volume of methodology to improve health services. OR offers implementation scientists the opportunity to do more upfront system thinking before committing resources and taking risks. OR has three roles within implementation science: structuring an implementation problem, upfront evaluation of implementation problems, and a tool for strategic reconfiguration of health services. Challenges facing OR as implementation science include limited evidence or evaluation of impact, limited service user involvement, a lack of managerial awareness, effective communication between research users and OR modellers, and availability of healthcare data. To progress the science, a focus is needed in three key areas: evaluation of OR interventions, transferring the knowledge of OR to health services, and educating OR modellers about the aims and benefits of service user involvement.

Abbreviations

AMU, Acute Medical Unit; CDU, clinical decision-making unit; ED, emergency department; GP, general practitioner; MRI, magnetic resonance imaging; NHS, National Health Service; OR, operational research (UK)/operations research (US); PPI, Patient and Public Involvement; PPT, psychology and psychiatric talking therapies; RIGHT, Research into Global Healthcare Tools; RIL, recovering independent life; SD, system dynamics; TIA, transient ischemic attack; TORCH, Teaching Operational Research for Commissioning in Health

References

1. Pitt M, Monks T, Crowe S, Vasilakis C. Systems modelling and simulation in health service design, delivery and decision making. BMJ Qual Saf. 2015. doi:10.1136/bmjqs-2015-004430.

2. Ackoff RL. The future of operational research is past. J Oper Res Soc. 1979;30(2):93–104. doi:10.2307/3009290.

3. Royston G. One hundred years of operational research in health—UK 1948–2048. J Oper Res Soc. 2009;60(1):169–79.

4. Lane DC, Monefeldt C, Rosenhead JV. Looking in the wrong place for healthcare improvements: a system dynamics study of an accident and emergency department. J Oper Res Soc. 2000;51(5):518–31. doi:10.2307/254183.

5. Günal MM, Pidd M. Understanding target-driven action in emergency department performance using simulation. Emerg Med J. 2009;26(10):724–7. doi:10.1136/emj.2008.066969.

6. Fletcher A, Halsall D, Huxham S, Worthington D. The DH accident and emergency department model: a national generic model used locally. J Oper Res Soc. 2007;58(12):1554–62.

7. Knight VA, Harper PR. Modelling emergency medical services with phase-type distributions. Health Syst. 2012;1(1):58–68.

8. Monks T, Pitt M, Stein K, James MA. Hyperacute stroke care and NHS England's business plan. BMJ. 2014;348. doi:10.1136/bmj.g3049.

9. Monks T, Pitt M, Stein K, James M. Maximizing the population benefit from thrombolysis in acute ischemic stroke: a modeling study of in-hospital delays. Stroke. 2012;43(10):2706–11. doi:10.1161/strokeaha.112.663187.

10. Lahr MMH, van der Zee D-J, Luijckx G-J, Vroomen PCAJ, Buskens E. A simulation-based approach for improving utilization of thrombolysis in acute brain infarction. Med Care. 2013;51(12):1101–5. doi:10.1097/MLR.0b013e3182a3e505.

11. Monks T, Pearn K, Allen M. Simulating stroke care systems. In: Yilmaz L, et al., editors. Proceedings of the 2015 Winter Simulation Conference. Piscataway, NJ: IEEE; 2015. p. 1391–402. doi:10.1109/WSC.2015.7408262.

12. Jun J, Jacobson S, Swisher J. Application of discrete-event simulation in health care clinics: a survey. J Oper Res Soc. 1999;50(2):109–23.

13. Harper PR, Shahani AK, Gallagher JE, Bowie C. Planning health services with explicit geographical considerations: a stochastic location–allocation approach. Omega. 2005;33(2):141–52. doi:10.1016/j.omega.2004.03.011.

14. Gallivan S, Utley M, Treasure T, Valencia O. Booked inpatient admissions and hospital capacity: mathematical modelling study. BMJ. 2002;324(7332):280–2. doi:10.1136/bmj.324.7332.280.

15. Brailsford SC, Lattimer VA, Tarnaras P, Turnbull JC. Emergency and on-demand health care: modelling a large complex system. J Oper Res Soc. 2004;55(1):34–42.

16. Gunal MM. A guide for building hospital simulation models. Health Syst. 2012;1(1):17–25. doi:10.1057/hs.2012.8.

17. Bertels S, Fahle T. A hybrid setup for a hybrid scenario: combining heuristics for the home health care problem. Comput Oper Res. 2006;33(10):2866–90. doi:10.1016/j.cor.2005.01.015.

18. Gupta D, Denton B. Appointment scheduling in health care: challenges and opportunities. IIE Trans. 2008;40(9):800–19. doi:10.1080/07408170802165880.

19. Foy R, et al. Implementation science: a reappraisal of our journal mission and scope. Implement Sci. 2015;10(1):1–7. doi:10.1186/s13012-015-0240-2.

20. Atkinson J-A, Page A, Wells R, Milat A, Wilson A. A modelling tool for policy analysis to support the design of efficient and effective policy responses for complex public health problems. Implement Sci. 2015;10(1):26.

21. Pitt M, Monks T, Allen M. Systems modelling for improving healthcare. In: Richards D, Rahm Hallberg I, editors. Complex interventions in health: an overview of research methods. London: Routledge; 2015.

22. Westcombe M, Alberto Franco L, Shaw D. Where next for PSMs—a grassroots revolution? J Oper Res Soc. 2006;57(7):776–8.

23. Mingers J, Rosenhead J. Problem structuring methods in action. Eur J Oper Res. 2004;152(3):530–54. doi:10.1016/S0377-2217(03)00056-0.

24. Kotiadis K, Mingers J. Combining PSMs with hard OR methods: the philosophical and practical challenges. J Oper Res Soc. 2006;57(7):856–67. doi:10.1057/palgrave.jors.2602147.

25. Penn ML, Kennedy AP, Vassilev II, Chew-Graham CA, Protheroe J, Rogers A, Monks T. Modelling self-management pathways for people with diabetes in primary care. BMC Fam Pract. 2015;16(1):1–10. doi:10.1186/s12875-015-0325-7.

26. Vennix JAM. Group model-building: tackling messy problems. Syst Dyn Rev. 1999;15(4):379–401.

27. Cooke MW, Wilson S, Halsall J, Roalfe A. Total time in English accident and emergency departments is related to bed occupancy. Emerg Med J. 2004;21(5):575–6. doi:10.1136/emj.2004.015081.

28. Utley M, Worthington D. Capacity planning. In: Hall R, editor. Handbook of healthcare system scheduling. New York: Springer; 2012.

29. Robinson S. Simulation: the practice of model development and use. London: Wiley; 2004.

30. National Institute for Health and Clinical Excellence. Stroke: diagnosis and initial management of acute stroke and transient ischemic attack (TIA). NICE Clinical Guideline. 2008.

31. Smith HK, Harper PR, Potts CN, Thyle A. Planning sustainable community health schemes in rural areas of developing countries. Eur J Oper Res. 2009;193(3):768–77. doi:10.1016/j.ejor.2007.07.031.

32. Franco AL, Lord E. Understanding multi-methodology: evaluating the perceived impact of mixing methods for group budgetary decisions. Omega. 2010;39:362–72.

33. Katsaliaki K, Mustafee N. Applications of simulation within the healthcare context. J Oper Res Soc. 2011;62(8):1431–51.

34. Günal M, Pidd M. Discrete event simulation for performance modelling in health care: a review of the literature. J Simul. 2011;4:42–51.

35. Fone D, et al. Systematic review of the use and value of computer simulation modelling in population health and health care delivery. J Public Health. 2003;25(4):325–35. doi:10.1093/pubmed/fdg075.

36. Brailsford SC, Harper PR, Patel B, Pitt M. An analysis of the academic literature on simulation and modelling in health care. J Simul. 2009;3(3):130–40.

37. Monks T, Pearson M, Pitt M, Stein K, James MA. Evaluating the impact of a simulation study in emergency stroke care. Oper Res Health Care. 2015;6:40–9. doi:10.1016/j.orhc.2015.09.002.

38. Pagel C, et al. Real time monitoring of risk-adjusted paediatric cardiac surgery outcomes using variable life-adjusted display: implementation in three UK centres. Heart. 2013;99(19):1445–50. doi:10.1136/heartjnl-2013-303671.

39. Brailsford SC, et al. Overcoming the barriers: a qualitative study of simulation adoption in the NHS. J Oper Res Soc. 2013;64(2):157–68.

40. Walsh M, Hostick T. Improving health care through community OR. J Oper Res Soc. 2004;56(2):193–201.

41. Pearson M, et al. Involving patients and the public in healthcare operational research—the challenges and opportunities. Oper Res Health Care. 2013;2(4):86–9. doi:10.1016/j.orhc.2013.09.001.

42. Jahangirian M, Taylor SJE, Eatock J, Stergioulas LK, Taylor PM. Causal study of low stakeholder engagement in healthcare simulation projects. J Oper Res Soc. 2015;66(3):369–79. doi:10.1057/jors.2014.1.

43. Young T, Eatock J, Jahangirian M, Naseer A, Lilford R. Three critical challenges for modeling and simulation in healthcare. In: Proceedings of the 2009 Winter Simulation Conference (WSC); 2009.

44. Seila AF, Brailsford S. Opportunities and challenges in health care simulation. In: Alexopoulos C, Goldsman D, Wilson JR, editors. Advancing the frontiers of simulation. US: Springer; 2009. p. 195–229.

45. Jahangirian M, Eldabi T, Naseer A, Stergioulas LK, Young T. Simulation in manufacturing and business: a review. Eur J Oper Res. 2010;203(1):1–13. doi:10.1016/j.ejor.2009.06.004.

46. Churchman CW, Schainblatt AH. The researcher and the manager: a dialectic of implementation. Manag Sci. 1965;11(4):69–87. doi:10.2307/2628012.

47. Willemain TR. Model formulation: what experts think about and when. Oper Res. 1995;43(6):916–32. doi:10.1287/opre.43.6.916.

48. Pidd M, Woolley RN. A pilot study of problem structuring. J Oper Res Soc. 1980;31(12):1063–8. doi:10.2307/2581818.

49. Tako AA, Kotiadis K. PartiSim: a multi-methodology framework to support facilitated simulation modelling in healthcare. Eur J Oper Res. 2015;244(2):555–64. doi:10.1016/j.ejor.2015.01.046.

50. Franco LA, Hämäläinen RP. Behavioural operational research: returning to the roots of the OR profession. Eur J Oper Res. 2016;249(3):791–5. doi:10.1016/j.ejor.2015.10.034.

51. Gogi A, Tako AA, Robinson S. An experimental investigation into the role of simulation models in generating insights. Eur J Oper Res. 2016;249(3):931–44. doi:10.1016/j.ejor.2015.09.042.

52. Monks T, Robinson S, Kotiadis K. Learning from discrete-event simulation: exploring the high involvement hypothesis. Eur J Oper Res. 2014;235(1):195–205. doi:10.1016/j.ejor.2013.10.003.

53. Monks T, Robinson S, Kotiadis K. Can involving clients in simulation studies help them solve their future problems? A transfer of learning experiment. Eur J Oper Res. 2016;249(3):919–30. doi:10.1016/j.ejor.2015.08.037.

54. Pitt M, Davies R, Brailsford SC, Chausselet T, Harper PR, Worthington D, Pidd M, Bucci G. Developing competence in modelling and simulation for commissioning and strategic planning: a guide for commissioners. 2009 [cited 2016 Jan 7]. Available from: http://mashnet.info/wp-content/files/CurriculumInModellingAndSimulation4Commissioning.pdf.

55. Naseer A, Eldabi T, Young TP. RIGHT: a toolkit for selecting healthcare modelling methods. J Simul. 2010;4(1):2–13.


Acknowledgements

This article presents independent research funded by the National Institute for Health Research (NIHR) Collaboration for Leadership in Applied Health Research and Care (CLAHRC) Wessex. The views expressed in this publication are those of the author(s) and not necessarily those of the National Health Service, the NIHR, or the Department of Health.

Case studies 1 and 3 were funded by NIHR CLAHRC South West Peninsula. Case study 3 used Selective Analytics MapPlace software.

Author’s contribution

TM developed the models described in the case studies, conceived the idea for debate, and wrote the paper.

Author’s information

TM leads the NIHR CLAHRC Wessex methodological hub, where he conducts applied health services research in collaboration with the NHS. He is an operational researcher with experience in industry, the public sector, and academic research.

Competing interests

The author declares that he has no competing interests.

Author information

Authors and Affiliations

NIHR CLAHRC Wessex, Faculty of Health Sciences, University of Southampton, Southampton, UK

Thomas Monks


Corresponding author

Correspondence to Thomas Monks .

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.


About this article

Cite this article.

Monks, T. Operational research as implementation science: definitions, challenges and research priorities. Implementation Sci 11, 81 (2016). https://doi.org/10.1186/s13012-016-0444-0


Received : 19 March 2016

Accepted : 25 May 2016

Published : 06 June 2016

DOI : https://doi.org/10.1186/s13012-016-0444-0


Keywords

  • Operational Research
  • National Health Service
  • Travel Time
  • Service User
  • System Dynamic Model



Artificial intelligence in strategy

Can machines automate strategy development? The short answer is no. However, there are numerous aspects of strategists' work where AI and advanced analytics tools can already bring enormous value. Yuval Atsmon is a senior partner who leads the new McKinsey Center for Strategy Innovation, which studies ways new technologies can augment the timeless principles of strategy. In this episode of the Inside the Strategy Room podcast, he explains how artificial intelligence is already transforming strategy and what's on the horizon. This is an edited transcript of the discussion.

Joanna Pachner: What does artificial intelligence mean in the context of strategy?

Yuval Atsmon: When people talk about artificial intelligence, they include everything to do with analytics, automation, and data analysis. Marvin Minsky, the pioneer of artificial intelligence research in the 1960s, talked about AI as a “suitcase word”—a term into which you can stuff whatever you want—and that still seems to be the case. We are comfortable with that because we think companies should use all the capabilities of more traditional analysis while increasing automation in strategy that can free up management or analyst time and, gradually, introducing tools that can augment human thinking.

Joanna Pachner: AI has been embraced by many business functions, but strategy seems to be largely immune to its charms. Why do you think that is?


Yuval Atsmon: You’re right about the limited adoption. Only 7 percent of respondents to our survey about the use of AI say they use it in strategy or even financial planning, whereas in areas like marketing, supply chain, and service operations, it’s 25 or 30 percent. One reason adoption is lagging is that strategy is one of the most integrative conceptual practices. When executives think about strategy automation, many are looking too far ahead—at AI capabilities that would decide, in place of the business leader, what the right strategy is. They are missing opportunities to use AI in the building blocks of strategy that could significantly improve outcomes.

I like to use the analogy to virtual assistants. Many of us use Alexa or Siri but very few people use these tools to do more than dictate a text message or shut off the lights. We don’t feel comfortable with the technology’s ability to understand the context in more sophisticated applications. AI in strategy is similar: it’s hard for AI to know everything an executive knows, but it can help executives with certain tasks.

Joanna Pachner: What kind of tasks can AI help strategists execute today?

Yuval Atsmon: We talk about six stages of AI development. The earliest is simple analytics, which we refer to as descriptive intelligence. Companies use dashboards for competitive analysis or to study performance in different parts of the business that are automatically updated. Some have interactive capabilities for refinement and testing.

The second level is diagnostic intelligence, which is the ability to look backward at the business and understand root causes and drivers of performance. The level after that is predictive intelligence: being able to anticipate certain scenarios or options and the value of things in the future based on momentum from the past as well as signals picked up in the market. Both diagnostics and prediction are areas that AI can greatly improve today. The tools can augment executives' analysis and become areas where you develop capabilities. For example, on diagnostic intelligence, you can organize your portfolio into segments to understand granularly where performance is coming from and do it in a much more continuous way than analysts could. You can try 20 different ways in an hour versus deploying one hundred analysts to tackle the problem.

Predictive AI is both more difficult and more risky. Executives shouldn’t fully rely on predictive AI, but it provides another systematic viewpoint in the room. Because strategic decisions have significant consequences, a key consideration is to use AI transparently in the sense of understanding why it is making a certain prediction and what extrapolations it is making from which information. You can then assess if you trust the prediction or not. You can even use AI to track the evolution of the assumptions for that prediction.

Those are the levels available today. The next three levels will take time to develop. There are some early examples of AI advising actions for executives’ consideration that would be value-creating based on the analysis. From there, you go to delegating certain decision authority to AI, with constraints and supervision. Eventually, there is the point where fully autonomous AI analyzes and decides with no human interaction.

Joanna Pachner: What kind of businesses or industries could gain the greatest benefits from embracing AI at its current level of sophistication?

Yuval Atsmon: Every business probably has some opportunity to use AI more than it does today. The first thing to look at is the availability of data. Do you have performance data that can be organized in a systematic way? Companies that have deep data on their portfolios down to business line, SKU, inventory, and raw ingredients have the biggest opportunities to use machines to gain granular insights that humans could not.

Companies whose strategies rely on a few big decisions with limited data would get less from AI. Likewise, those facing a lot of volatility and vulnerability to external events would benefit less than companies with controlled and systematic portfolios, although they could deploy AI to better predict those external events and identify what they can and cannot control.

Third, the velocity of decisions matters. Most companies develop strategies every three to five years, which then become annual budgets. If you think about strategy in that way, the role of AI is relatively limited other than potentially accelerating analyses that are inputs into the strategy. However, some companies regularly revisit big decisions they made based on assumptions about the world that may have since changed, affecting the projected ROI of initiatives. Such shifts would affect how you deploy talent and executive time, how you spend money and focus sales efforts, and AI can be valuable in guiding that. The value of AI is even bigger when you can make decisions close to the time of deploying resources, because AI can signal that your previous assumptions have changed from when you made your plan.

Joanna Pachner: Can you provide any examples of companies employing AI to address specific strategic challenges?

Yuval Atsmon: Some of the most innovative users of AI, not coincidentally, are AI- and digital-native companies. Some of these companies have seen massive benefits from AI and have increased its usage in other areas of the business. One mobility player adjusts its financial planning based on pricing patterns it observes in the market. Its business has relatively high flexibility to demand but less so to supply, so the company uses AI to continuously signal back when pricing dynamics are trending in a way that would affect profitability or where demand is rising. This allows the company to quickly react to create more capacity because its profitability is highly sensitive to keeping demand and supply in equilibrium.

Joanna Pachner: Given how quickly things change today, doesn’t AI seem to be more a tactical than a strategic tool, providing time-sensitive input on isolated elements of strategy?

Yuval Atsmon: It’s interesting that you make the distinction between strategic and tactical. Of course, every decision can be broken down into smaller ones, and where AI can be affordably used in strategy today is for building blocks of the strategy. It might feel tactical, but it can make a massive difference. One of the world’s leading investment firms, for example, has started to use AI to scan for certain patterns rather than scanning individual companies directly. AI looks for consumer mobile usage that suggests a company’s technology is catching on quickly, giving the firm an opportunity to invest in that company before others do. That created a significant strategic edge for them, even though the tool itself may be relatively tactical.

Joanna Pachner: McKinsey has written a lot about cognitive biases  and social dynamics that can skew decision making. Can AI help with these challenges?

Yuval Atsmon: When we talk to executives about using AI in strategy development, the first reaction we get is, “Those are really big decisions; what if AI gets them wrong?” The first answer is that humans also get them wrong—a lot. [Amos] Tversky, [Daniel] Kahneman, and others have proven that some of those errors are systemic, observable, and predictable. The first thing AI can do is spot situations likely to give rise to biases. For example, imagine that AI is listening in on a strategy session where the CEO proposes something and everyone says “Aye” without debate and discussion. AI could inform the room, “We might have a sunflower bias here,” which could trigger more conversation and remind the CEO that it’s in their own interest to encourage some devil’s advocacy.

We also often see confirmation bias, where people focus their analysis on proving the wisdom of what they already want to do, as opposed to looking for a fact-based reality. Just having AI perform a default analysis that doesn’t aim to satisfy the boss is useful, and the team can then try to understand why that is different than the management hypothesis, triggering a much richer debate.

In terms of social dynamics, agency problems can create conflicts of interest. Every business unit [BU] leader thinks that their BU should get the most resources and will deliver the most value, or at least they feel they should advocate for their business. AI provides a neutral way based on systematic data to manage those debates. It’s also useful for executives with decision authority, since we all know that short-term pressures and the need to make the quarterly and annual numbers lead people to make different decisions on the 31st of December than they do on January 1st or October 1st. Like the story of Ulysses and the sirens, you can use AI to remind you that you wanted something different three months earlier. The CEO still decides; AI can just provide that extra nudge.

Joanna Pachner: It’s like you have Spock next to you, who is dispassionate and purely analytical.

Yuval Atsmon: That is not a bad analogy—for Star Trek fans anyway.

Joanna Pachner: Do you have a favorite application of AI in strategy?

Yuval Atsmon: I have worked a lot on resource allocation, and one of the challenges, which we call the hockey stick phenomenon, is that executives are always overly optimistic about what will happen. They know that resource allocation will inevitably be defined by what you believe about the future, not necessarily by past performance. AI can provide an objective prediction of performance starting from a default momentum case: based on everything that happened in the past and some indicators about the future, what is the forecast of performance if we do nothing? This is before we say, “But I will hire these people and develop this new product and improve my marketing”— things that every executive thinks will help them overdeliver relative to the past. The neutral momentum case, which AI can calculate in a cold, Spock-like manner, can change the dynamics of the resource allocation discussion. It’s a form of predictive intelligence accessible today and while it’s not meant to be definitive, it provides a basis for better decisions.

Joanna Pachner: Do you see access to technology talent as one of the obstacles to the adoption of AI in strategy, especially at large companies?

Yuval Atsmon: I would make a distinction. If you mean machine-learning and data science talent or software engineers who build the digital tools, they are definitely not easy to get. However, companies can increasingly use platforms that provide access to AI tools and require less from individual companies. Also, this domain of strategy is exciting—it’s cutting-edge, so it’s probably easier to get technology talent for that than it might be for manufacturing work.

The bigger challenge, ironically, is finding strategists or people with business expertise to contribute to the effort. You will not solve strategy problems with AI without the involvement of people who understand the customer experience and what you are trying to achieve. Those who know best, like senior executives, don't have time to be product managers for the AI team. An even bigger constraint is that, in some cases, you are asking people to get involved in an initiative that may make their jobs less important. There could be plenty of opportunities for incorporating AI into existing jobs, but it's something companies need to reflect on. The best approach may be to create a digital factory where a different team tests and builds AI applications, with oversight from senior stakeholders.

Joanna Pachner: Do you think this worry about job security and the potential that AI will automate strategy is realistic?

Yuval Atsmon: The question of whether AI will replace human judgment and put humanity out of its job is a big one that I would leave for other experts.

The pertinent question is shorter-term automation. Because of its complexity, strategy would be one of the later domains to be affected by automation, but we are seeing it in many other domains. However, the trend for more than two hundred years has been that automation creates new jobs, although ones requiring different skills. That doesn’t take away the fear some people have of a machine exposing their mistakes or doing their job better than they do it.

Joanna Pachner: We recently published an article about strategic courage in an age of volatility  that talked about three types of edge business leaders need to develop. One of them is an edge in insights. Do you think AI has a role to play in furnishing a proprietary insight edge?

Yuval Atsmon: One of the challenges most strategists face is the overwhelming complexity of the world we operate in—the number of unknowns, the information overload. At one level, it may seem that AI will provide another layer of complexity. In reality, it can be a sharp knife that cuts through some of the clutter. The question to ask is, Can AI simplify my life by giving me sharper, more timely insights more easily?

Joanna Pachner: You have been working in strategy for a long time. What sparked your interest in exploring this intersection of strategy and new technology?

Yuval Atsmon: I have always been intrigued by things at the boundaries of what seems possible. Science fiction writer Arthur C. Clarke’s second law is that to discover the limits of the possible, you have to venture a little past them into the impossible, and I find that particularly alluring in this arena.

AI in strategy is in very nascent stages but could be very consequential for companies and for the profession. For a top executive, strategic decisions are the biggest way to influence the business, other than maybe building the top team, and it is amazing how little technology is leveraged in that process today. It’s conceivable that competitive advantage will increasingly rest in having executives who know how to apply AI well. In some domains, like investment, that is already happening, and the difference in returns can be staggering. I find helping companies be part of that evolution very exciting.


Policy decision-support for inland waterway transport in sustainable urban areas: an analysis of economic viability

  • Original Research
  • Published: 16 May 2024


  • Anicia Jaegler   ORCID: orcid.org/0000-0002-3014-4561 1 ,
  • Laingo M. Randrianarisoa 1 &
  • Hiba Yahyaoui 1  

Many cities have begun promoting inland waterway transport for parcel delivery in urban areas in pursuit of sustainable transportation. This research aims to identify the key components of an economically viable urban distribution network in which inland waterway transport is deployed as the main transport mode and cargo bikes or electric vehicles are used for last-mile delivery. The analysis uses a decision-support framework based on a two-echelon city distribution scheme. Numerical experiments are conducted using near-practical instances in France and inputs from industry stakeholders. The study found that the economic viability of inland waterways as the main transport mode depends on the number of parcels to be delivered and the existing infrastructure. The configuration of cities, their accessibility, and their operating conditions are also of utmost importance. The lowest CO2-equivalent emissions are obtained by combining inland waterways with electric bikes. This paper can be used as decision-making support and guidance for transport companies, industry stakeholders, and public authorities seeking an efficient and sustainable urban distribution network combining inland waterways and bikes/vans.


Data availability

The data that support the findings of this study are available on request from the corresponding author, AJ. The data are not publicly available due to confidentiality.

Notes

Delay costs are the costs associated with the value of travel time lost relative to a free-flow situation, while deadweight loss costs are the part of the delay costs that is regarded as a proper basis for transport pricing.

Source: https://transport.ec.europa.eu/transport-modes/inland-waterways/promotion-inland-waterway-transport_en .

Roso et al. ( 2020 ) show that a barge can replace between 70 and 80 trucks that travel to the same place.

Details on each programme can be found on the European Commission website.

EC and CCNR (2022) provide more details about the projects that have been implemented and those that are still at an early stage in Europe.

In practice, parcels can be sorted either on board the barge or at the departure port. In the present framework, we do not distinguish between these alternatives and assume that the time spent sorting the parcels is the same regardless of the sorting location. This allows us to focus on the main operations of IWT and last-mile delivery.

The 100 km may represent the average distance of a typical urban city.

www.ecotransit.world .

Abiad, A., Furceri, D., & Topalova, P. (2016). The macroeconomic effects of public investment: Evidence from advanced economies. Journal of Macroeconomics, 50, 224–240.


Al Enezy, O., et al. (2017). Developing a cost calculation model for inland navigation. Research in Transportation Business & Management, 23 , 64–74.

Anderluh, A., Nolz, P. C., Hemmelmayr, V. C., & Crainic, T. G. (2021). Multi-objective optimization of a two-echelon vehicle routing problem with vehicle synchronization and ‘grey zone’ customers arising in urban logistics. European Journal of Operational Research, 289 (3), 940–958.

Bac, U., & Erdem, M. (2021). Optimization of electric vehicle recharge schedule and routing problem with time windows and partial recharge: A comparative study for an urban logistics fleet. Sustainable Cities and Society, 70 , 102883.

Beelen, M. (2011). Structuring and modeling decision-making in the inland navigation sector. Ph.D. Dissertation. Antwerp: Antwerp University.

Calderón-Rivera, N., Bartusevičienė, I., & Ballini, F. (2024a). Sustainable development of inland waterways transport: A review. Journal of Shipping and Trade., 9 , 3.

Calderón-Rivera, N., Bartusevičienė, I., & Ballini, F. (2024b). Barriers and solutions for sustainable development of inland waterway transport: A literature review. Transport Economics and Management, 2024 (2), 31–44.

Caris, A., Limbourg, S., Macharis, C., Van Lier, T., & Cools, M. (2014). Integration of inland waterway transport in the intermodal supply chain: A taxonomy of research challenges. Journal of Transport Geography, 41 , 126–136.

Cempírek, V., Stopka, O., Meško, P., Dočkalíková, I., & Tvrdoň, L. (2021). Design of distribution center location for small e-shop consignments using the Clark-wright method. Transportation Research Procedia, 53 , 224–233.

Central Commission for the Navigation of the Rhine (CCNR) (2024). An assessment of new market opportunities for inland waterway transport. Thematic Report. European Commission. February 2024 (p. 157).

Chen, C., Demir, E., Huang, Y., & Qiu, R. (2021). The adoption of self-driving delivery robots in last-mile logistics. Transportation Research Part e: Logistics and Transportation Review, 146 , 102214.

Dablanc, L. (2011). City distribution, a key element of the urban economy: Guidelines for practitioners . Edward Elgar Publishing.


EPA (2019). Overview of greenhouse gases . United States Environmental Protection Agency. https://www.epa.gov/ghgemissions/overview-greenhouse-gases

European Commission (2020). Smart and Sustainable Mobility Strategy. Staff Working Document, 2020. Available at: https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12438-Sustainableand-Smart-Mobility-Strategy_en (last consulted on 21.03.2024).

Fontaine, P. (2022). The vehicle routing problem with load-dependent travel times for cargo bicycles. European Journal of Operational Research, 300 (3), 1005–1016.

Janjevic, M., & Ndiaye, A. B. (2014). Inland waterways transport for city logistics: A review of experiences and the role of local public authorities. Urban Transport XX, 138 , 279–290.

Jonkeren, O., Francke, J., & Visser, J. (2019). A shift-share based tool for assessing the contribution of a modal shift to the decarbonisation of inland freight transport. European Transport Research Review, 11 (1), 8.

Jourquin, B., Beuthe, M., & Demilie, C. L. (1999). Freight bundling network models: Methodology and application. Transportation Planning and Technology, 23 (2), 157–177.

Lendjel, E., & Fischman, M. (2014). Innovations in barge transport for supplying French urban dense areas: A transaction costs approach. Supply Chain Forum: An International Journal, 15 (4), 16–27.

Macharis, C., Caris, A., Jourquin, B., & Pekin, E. (2011). A decision support framework for intermodal transport policy. European Transport Research Review, 3 , 167–178.

Malladi, S. S., Christensen, J. M., Ramírez, D., Larsen, A., & Pacino, D. (2022). Stochastic fleet mix optimization: Evaluating electromobility in urban logistics. Transportation Research Part E: Logistics and Transportation Review, 158 , 102554.

Marrekchi, E., Besbes, W., Dhouib, D., & Demir, E. (2021). A review of recent advances in the operations research literature on the green routing problem and its variants. Annals of Operations Research, 304 , 529–574.

Mühlbauer, F., & Fontaine, P. (2021). A parallelized large neighborhood search heuristic for the asymmetric two-echelon vehicle routing problem with swap containers for cargo bicycles. European Journal of Operational Research, 289 (2), 742–757.

Oulfarsi, S. (2016). Inland waterway transport of goods in France: What favorable growth prospects for sustainable development? Transport & Logistics: the International Journal, 16 (40), 19–25.

Rai, H. B., Van Lier, T., Meers, D., & Macharis, C. (2017). Improving urban freight transport sustainability: Policy assessment framework and case study. Research in Transportation Economics, 64 , 26–35.

Ramirez-Villamil, A., Jaegler, A., & Montoya-Torres, J. R. (2021). Sustainable local pickup and delivery: The case of Paris. Research in Transportation Business & Management, 45, 100692.

Ramirez-Villamil, A., Montoya-Torres, J. R., & Jaegler, A. (2023). Urban logistics through river: A two-echelon distribution model. Applied Sciences, 13 (12), 7259.

Reyes-Rubiano, L., Ferone, D., Juan, A. A., & Faulin, J. (2019). A simheuristic for routing electric vehicles with limited driving ranges and stochastic travel times. Sort, 1 , 3–24.

Rezgui, D., Aggoune-Mtalaa, W., Bouziri, H., & Siala, J. C. (2019). An enhanced evolutionary method for routing a fleet of electric modular vehicles. In 2019 6th International conference on models and technologies for intelligent transportation systems (MT-ITS) (pp. 1–9). https://doi.org/10.1109/MTITS.2019.8883377

Roso, V., Vural, C., Abrahamsson, A., Engström, M., Rogerson, S., & Santén, V. (2020). Drivers and barriers for inland waterway transportation. Operations and Supply Chain Management: An International Journal, 13 (4), 406–417.

Strale, M. (2019). Sustainable urban logistics: What are we talking about? Transportation Research Part a: Policy and Practice, 130 , 745–751.

Tahami, H., Rabadi, G., & Haouari, M. (2020). Exact approaches for routing capacitated electric vehicles. Transportation Research Part E: Logistics and Transportation Review, 144 , 102126.

United Nations (2015). Sustainable development goals. Report. New York, USA.

Van Duin, J. H. R., Tavasszy, L. A., & Quak, H. J. (2013). Towards E(lectric)-urban freight: First promising steps in the electric vehicle revolution. European Transport - Trasporti Europei, 54.

Van Essen, H., Van Wijngaarden, L., Schroten, A., Sutter, D., Bieler, C., Maffii, S., Brambilla, M., Fiorello, D., Fermi, F., Parolin, R., & El Beyrouty, K. (2019). Handbook on the external costs of transport (No. 18.4 K83. 131).

Wang, H., Li, M., Wang, Z., Li, W., Hou, T., Yang, X., Zhao, Z., Wang, Z., & Sun, T. (2022). Heterogeneous fleets for green vehicle routing problem with traffic restrictions. IEEE Transactions on Intelligent Transportation Systems, 24 , 8667–8676. https://doi.org/10.1109/TITS.2022.3197424

Winkenbach, M., Kleindorfer, P. R., & Spinler, S. (2016). Enabling urban logistics services at La Poste through multi-echelon location routing. Transportation Science, 50(2), 520–540.

Wiśnicki, B. (2016). Determinants of river ports development into logistics trimodal nodes, illustrated by the ports of the Lower Vistula River. Transportation Research Procedia, 16 , 576–586.

Yahyaoui, H., Jaegler, A., & Randrianarisoa, L. M. (2023). A cost calculation model for urban delivery of parcels by river. Research in Transportation Business & Management, 51, 101059.

Funding

This work was supported by a post-doctoral scholarship from GeoPoste.

Author information

Authors and Affiliations

Kedge Business School, 40 Avenue des Terroirs de France, 75012, Paris, France

Anicia Jaegler, Laingo M. Randrianarisoa & Hiba Yahyaoui


Contributions

Conceptualization: AJ, LR, HY; Data curation: HY; Formal analysis: AJ, HY; Funding acquisition: AJ; Investigation: AJ, LR, HY; Methodology: AJ, LR, HY; Project administration: AJ; Resources: AJ, LR, HY; Software: HY; Supervision: AJ, LR; Validation: AJ, LR; Visualization: AJ, LR, HY; Roles/Writing—original draft: AJ, LR, HY; and Writing—review & editing: AJ, LR.

Corresponding author

Correspondence to Anicia Jaegler .

Ethics declarations

Conflict of interest.

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article

Jaegler, A., Randrianarisoa, L.M. & Yahyaoui, H. Policy decision-support for inland waterway transport in sustainable urban areas: an analysis of economic viability. Ann Oper Res (2024). https://doi.org/10.1007/s10479-024-06034-0


Received: 27 August 2023

Accepted: 24 April 2024

Published: 16 May 2024

DOI: https://doi.org/10.1007/s10479-024-06034-0


Keywords:

  • Inland waterway transport
  • Last-mile delivery
  • City logistics
  • Economic cost
  • Decision-support
  • Carbon emissions
Open access | Published: 18 May 2024

Determinants of appropriate antibiotic and NSAID prescribing in unscheduled outpatient settings in the Veterans Health Administration

  • Michael J. Ward 1,2,3,4,
  • Michael E. Matheny 1,4,5,6,
  • Melissa D. Rubenstein 3,
  • Kemberlee Bonnet 7,
  • Chloe Dagostino 7,
  • David G. Schlundt 7,
  • Shilo Anders 4,8,
  • Thomas Reese 4 &
  • Amanda S. Mixon 1,9

BMC Health Services Research, volume 24, Article number: 640 (2024)


Abstract

Background

Despite efforts to enhance the quality of medication prescribing in outpatient settings, potentially inappropriate prescribing remains common, particularly in unscheduled settings where patients present with infectious and pain-related complaints. Two of the most commonly prescribed medication classes in outpatient settings, and two with frequent rates of potentially inappropriate prescribing, are antibiotics and nonsteroidal anti-inflammatory drugs (NSAIDs). Given this persistent inappropriate prescribing, we sought to understand a diverse set of perspectives on the determinants of inappropriate prescribing of antibiotics and NSAIDs in the Veterans Health Administration.

Methods

We conducted a qualitative study guided by the Consolidated Framework for Implementation Research and the Theory of Planned Behavior. Semi-structured interviews were conducted with clinicians, stakeholders, and Veterans from March 1, 2021 through December 31, 2021 in unscheduled outpatient settings at the Tennessee Valley Healthcare System within the Veterans Affairs health system. Stakeholders included clinical operations leadership and methodological experts. Audio-recorded interviews were transcribed and de-identified. Data coding and analysis were conducted by experienced qualitative methodologists adhering to the Consolidated Criteria for Reporting Qualitative Research guidelines. Analysis was conducted using an iterative inductive/deductive process.

Results

We conducted semi-structured interviews with 66 participants: clinicians (N = 25), stakeholders (N = 24), and Veterans (N = 17). We identified six themes contributing to potentially inappropriate prescribing of antibiotics and NSAIDs: 1) perceived versus actual Veteran expectations about prescribing; 2) the influence of a time-pressured clinical environment on prescribing stewardship; 3) limited clinician knowledge, awareness, and willingness to use evidence-based care; 4) prescriber uncertainties about the Veteran's condition at the time of the clinical encounter; 5) limited communication; and 6) technology barriers of the electronic health record and patient portal.

Conclusions

The diverse perspectives on prescribing underscore the need for interventions that recognize the detrimental impact of high workload on prescribing stewardship and that are designed with the end-user in mind. This study revealed actionable themes that could be addressed to improve guideline-concordant prescribing, enhance the quality of prescribing, and reduce patient harm.


Background

Adverse drug events (ADEs) are the most common iatrogenic injury. [1] Efforts to reduce these events have primarily focused on the inpatient setting. However, the emergency department (ED), urgent care, and urgent primary care clinics are desirable targets for interventions to reduce ADEs because approximately 70% of all outpatient encounters occur in one of these settings. [2] Two of the most commonly prescribed drug classes during acute outpatient care visits, and two with frequent rates of potentially inappropriate prescribing, are antibiotics and non-steroidal anti-inflammatory drugs (NSAIDs). [3, 4]

An estimated 30% of all outpatient oral antibiotic prescriptions may be unnecessary. [5, 6] The World Health Organization has identified overuse of antibiotics and the resulting antimicrobial resistance as a global threat. [7] The Centers for Disease Control and Prevention (CDC) conservatively estimates that in the US there are nearly 3 million antibiotic-resistant infections causing 48,000 deaths annually. [8] Antibiotics were the second most common source of adverse events, with nearly one ADE resulting in an ED visit for every 100 prescriptions. [9] Inappropriate antibiotic prescriptions (e.g., an antibiotic prescribed for a viral infection) also contribute to resistance and iatrogenic infections such as C. difficile (antibiotic-associated diarrhea) and methicillin-resistant Staphylococcus aureus (MRSA). [8] NSAID prescriptions, on the other hand, result in an ADE at more than twice the rate of antibiotics (2.2%), [10] are prescribed to patients at an already increased risk of potential ADEs, [4, 11] and frequently interact with other medications. [12] Inappropriate NSAID prescriptions contribute to serious gastrointestinal, [13] renal, [14] and cardiovascular [15, 16] ADEs such as gastrointestinal bleeding, acute kidney injury, and myocardial infarction or heart failure, respectively. Yet the use of NSAIDs is ubiquitous; according to the CDC, between 2011 and 2014, 5% of the US population was prescribed an NSAID and an additional 2% took NSAIDs over the counter. [11]

Interventions to reduce inappropriate antibiotic prescribing commonly take the form of antimicrobial stewardship programs; no comparable national programs exist for NSAIDs, particularly in acute outpatient care settings. There is a substantial body of evidence supporting the effectiveness of such stewardship programs. [17] The CDC recognizes that outpatient programs should consist of four core elements of antimicrobial stewardship: [18] commitment, action for policy and practice, tracking and reporting, and education and expertise. The opportunities to extend antimicrobial stewardship in EDs are vast. Despite this effectiveness, however, there is a recognized need to understand which implementation strategies work and how to implement multifaceted interventions. [19] Given the unique time-pressured environment of acute outpatient care settings, not all antimicrobial stewardship strategies work in these settings, necessitating the development of approaches tailored to these environments. [19, 20]

One particularly vulnerable population is served by the Veterans Health Administration. With more than 9 million enrollees, Veterans who receive care in Veterans Affairs (VA) hospitals and outpatient clinics may be particularly vulnerable to ADEs. Older Veterans have greater medical needs than younger patients, given their concomitant medical and mental health conditions as well as cognitive and social issues. Among Veterans seen in VA EDs and Urgent Care Clinics (UCCs), 50% are age 65 and older, [21] nearly three times the rate of non-VA emergency care settings (18%). [22] Inappropriate prescribing in ED and UCC settings is problematic, with inappropriate antibiotic prescribing estimated to be higher than 40%. [23] In a sample of older Veterans discharged from VA ED and UCC settings, NSAIDs were implicated in 77% of drug interactions. [24]

Learning from antimicrobial stewardship programs and applying those lessons to a broader base of prescribing in acute outpatient care settings requires understanding why potentially inappropriate prescribing remains a problem, not only for antibiotics but also for medications (e.g., NSAIDs) that have previously received little stewardship focus. This understanding is essential to develop and implement interventions to reduce iatrogenic harm for vulnerable patients seen in unscheduled settings. In the setting of the Veterans Health Administration, we used these two drug classes (antibiotics and NSAIDs), both of which have frequent rates of inappropriate prescribing in unscheduled outpatient care settings, to understand a diverse set of perspectives on why potentially inappropriate prescribing continues to occur.

Methods

Selection of participants

Participants were recruited from three groups in outpatient settings representing emergency care, urgent care, and urgent primary care in the VA: 1) clinicians: VA clinicians such as physicians, advanced practice providers, and pharmacists; 2) stakeholders: VA and non-VA clinical operational and clinical content experts, such as local and regional medical directors and national clinical, research, and administrative leadership in emergency care, primary care, and pharmacy, including geriatrics; and 3) Veterans seeking unscheduled care for infectious or pain symptoms.

Clinicians and stakeholders were recruited using email, informational flyers, faculty/staff meetings, national conferences, and snowball sampling, in which existing participants identify additional potential research subjects for recruitment. [25] Snowball sampling is useful for identifying and recruiting participants who may not be readily apparent to investigators or who are hard to reach. Clinician inclusion criteria consisted of: 1) at least 1 year of VA experience; and 2) at least one clinical shift in the last 30 days at any VA ED, urgent care, or primary care setting in which unscheduled visits occur. Veterans were recruited in person at the VA by key study personnel. Inclusion criteria consisted of: 1) clinically stable, as determined by the treating clinician; 2) 18 years or older; and 3) seeking care for infectious or pain symptoms in the local VA Tennessee Valley Healthcare System (TVHS). TVHS includes an ED at the Nashville campus with over 30,000 annual visits, an urgent care clinic in Murfreesboro, TN with approximately 15,000 annual visits, and multiple primary care locations throughout the middle Tennessee region. This study was approved by the VA TVHS Institutional Review Board as minimal risk.
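
As an illustration, the two clinician inclusion criteria can be expressed as a simple screening check. The sketch below is hypothetical: in the study, screening was performed by study personnel rather than software, and the function and field names are invented.

```python
# Hypothetical sketch of the clinician eligibility criteria described above:
# >= 1 year of VA experience and >= 1 unscheduled-care shift in the last 30 days.
from datetime import date, timedelta

def clinician_eligible(va_start: date, last_unscheduled_shift: date,
                       today: date) -> bool:
    # Criterion 1: at least 1 year of VA experience
    one_year = va_start <= today - timedelta(days=365)
    # Criterion 2: at least one qualifying shift in the last 30 days
    recent_shift = last_unscheduled_shift >= today - timedelta(days=30)
    return one_year and recent_shift

# Example: hired March 2019, last ED shift ten days before screening
print(clinician_eligible(date(2019, 3, 1), date(2021, 2, 20),
                         today=date(2021, 3, 2)))  # -> True
```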

Data collection

Semi-structured interview guides (Supplemental Table 1) were developed using the Consolidated Framework for Implementation Research (CFIR) [26] and the Theory of Planned Behavior [27, 28] to understand attitudes and beliefs as they relate to behaviors, as well as potential determinants of a future intervention. Interview guides were modified and finalized by conducting pilot interviews with three members of each participant group. Interview guides were tailored to each group of respondents and consisted of questions relating to: 1) determinants of potentially inappropriate prescribing; and 2) integration into practice (Table 1). Clinicians were also asked about knowledge and awareness of evidence-based prescribing practices for antibiotics and NSAIDs. The interviewer asked follow-up questions to elicit clearer and more detailed responses.

Each interview was conducted by a trained interviewer (MDR). Veteran interviews were conducted in person while Veterans waited for clinical care so as not to disrupt clinical operations. Interviews with clinicians and stakeholders were scheduled virtually. All interviews (including in-person interviews) were recorded and transcribed in a manner compliant with VA information security policies using Microsoft Teams (Redmond, WA). The audio-recorded interviews were transcribed and de-identified by a transcriptionist and stored securely behind the VA firewall using Microsoft Teams. Study personnel maintained a recording log on a password-protected server, and each participant was assigned a unique participant ID number. Once 15 interviews had been conducted per group, we planned to review the interviews with the study team to discuss content and findings and to decide collectively when thematic saturation was achieved, the point at which no new information is obtained. [29] If saturation was not achieved, we planned to conduct at least 2 additional interviews prior to the next group review. We estimated that approximately 20–25 interviews per group would be needed to achieve thematic saturation.
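
The stopping rule described above amounts to a simple loop: review at 15 interviews per group, then add interviews in batches of at least two until the team judges saturation. The Python sketch below is illustrative only; the saturation judgment, a collective team decision in the study, is stubbed out as a hypothetical callback.

```python
# Minimal sketch of the stopping rule: review after 15 interviews per group,
# then conduct at least 2 more at a time until thematic saturation is judged
# to have been reached. `saturation_reached` is a hypothetical stand-in for
# the study team's collective judgment, not a computation from the study.

def interviews_needed(completed: int, saturation_reached, batch: int = 2,
                      initial_review_at: int = 15) -> int:
    """Return the total interviews conducted for one participant group."""
    total = max(completed, initial_review_at)
    while not saturation_reached(total):
        total += batch  # conduct at least 2 more, then review again
    return total

# Example: pretend saturation is judged to occur once 21 interviews are coded.
print(interviews_needed(15, lambda n: n >= 21))  # -> 21
```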

Qualitative data coding and analysis were managed by the Vanderbilt University Qualitative Research Core. A hierarchical coding system (Supplemental Table 2) was developed and refined using an iterative inductive/deductive approach [30, 31, 32] guided by a combination of: 1) the Consolidated Framework for Implementation Research (CFIR) [26]; 2) the Theory of Planned Behavior [27, 28]; 3) the interview guide questions; and 4) a preliminary review of the transcripts. Eighteen major categories (Supplemental Table 3) were identified and further divided into subcategories, with some subcategories having additional levels of hierarchical division. Definitions and rules were written for the use of each coding category. The process was iterative in that the coding system was both theoretically informed and derived from the qualitative data. The coding system was finalized after it was piloted by the coders. Data coding and analysis met the Consolidated Criteria for Reporting Qualitative Research (COREQ) guidelines. [33]
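
A hierarchical codebook of this kind, in which each category carries a definition, a usage rule, and nested subcategories, can be pictured as a simple tree. The sketch below is illustrative only; the category names are invented and do not reproduce the study's actual 18-category coding system.

```python
# Illustrative representation of a hierarchical codebook: each code carries a
# definition and an application rule, and may contain subcategories. The
# specific categories shown here are invented, not the study's codebook.
from dataclasses import dataclass, field

@dataclass
class Code:
    name: str
    definition: str        # what the code means
    rule: str              # when coders should apply it
    children: list["Code"] = field(default_factory=list)

codebook = Code(
    name="Barriers",
    definition="Obstacles to appropriate prescribing",
    rule="Apply when a respondent describes an impediment",
    children=[
        Code("Time pressure", "Workload limits stewardship",
             "Apply to mentions of patient volume or throughput"),
        Code("Knowledge", "Gaps in evidence-based prescribing",
             "Apply to mentions of unfamiliar guidelines"),
    ],
)
```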

Four experienced qualitative coders were trained by independently coding two transcripts from each of the three participant categories. Coding was then compared, and any discrepancies resolved by reconciliation. After establishing reliability in using the coding system, the coders divided and independently coded the remaining transcripts in sequential order. Each statement was treated as a separate quote and could be assigned up to 21 different codes. Coded transcripts were combined and sorted by code.

Following thematic saturation, the frequency of each code was calculated to understand the distribution of quotes. Quotes coded as barriers were then cross-referenced to identify potential determinants of inappropriate prescribing. A thematic analysis of the barriers was conducted and presented to the research team of qualitative methodologists and clinicians in an iterative process to understand the nuances and to refine the themes and subthemes from the coded transcripts. Transcripts, quotations, and codes were managed using Microsoft Excel and SPSS version 28.0.
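
The study performed this tallying in Microsoft Excel and SPSS; the pandas sketch below is a substitute illustration of the same bookkeeping, computing code frequencies and pulling the quotes that carry a barrier code, on an invented miniature dataset.

```python
# Illustrative only: the study used Excel and SPSS, not pandas. This shows the
# same kind of tallying on invented data, with one row per (quote, code) pair.
import pandas as pd

quotes = pd.DataFrame({
    "quote_id": [1, 1, 2, 3],
    "code": ["time_pressure", "barrier", "patient_expectations", "barrier"],
})

# Frequency of each code across all coded statements
print(quotes["code"].value_counts())

# Quote IDs carrying the "barrier" code, for thematic review
barrier_ids = quotes.loc[quotes["code"] == "barrier", "quote_id"].unique()
print(barrier_ids)  # -> [1 3]
```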

Results

We approached 132 individuals, and 66 (50%) agreed to be interviewed. Participants included 25 clinicians, 24 stakeholders, and 17 Veterans, whose demographic characteristics are presented in Table 2. The clinicians, drawn from 14 VA facilities throughout the US, included 20 physicians and five advanced practice providers. Of the clinicians, 21 (84%) worked in either an ED or urgent care while the remainder practiced in primary care. The 24 stakeholders included 13 (54%) clinical service chiefs or deputy chiefs (including medical directors), five (21%) national directors, and six (25%) experts in clinical content and methodology. The 17 Veterans interviewed included 15 (88%) who were seen for pain complaints.

Results are organized by the six thematic categories, with several subthemes in each category. Themes and subthemes are presented in Table 3 and are visually represented in Fig. 1. The six themes were: 1) perceived versus actual Veteran expectations about prescribing, 2) the influence of a time-pressured clinical environment on prescribing stewardship, 3) limited clinician knowledge, awareness, and willingness to use evidence-based care, 4) uncertainties about the Veteran's condition at the time of the clinical encounter, 5) limited communication, and 6) technology barriers.

Fig. 1: Visual representation of themes and subthemes from 66 clinician, stakeholder, and Veteran interviews

Theme 1: Perception that Veterans routinely expect a medication from their visit, despite clinical inappropriateness

According to clinicians, Veterans frequently expect to receive a prescription even when this decision conflicts with good clinical practice.

Certainly lots of people would say you know if you feel like you’re up against some strong expectations from the patients or caregivers or families around the utility of an antibiotic when it’s probably not indicated…In the emergency department the bias is to act and assume the worst and assume like the worst for the clinical trajectory for the patient rather than the reverse. [Clinician 49, Physician, ED]

Stakeholders further stated that patient prescription expectations are quite influential and are likely shaped by Veterans’ prior experiences.

I think the patients, particularly for antibiotics, have strong feelings about whether they should or shouldn’t get something prescribed. [Stakeholder 34]

You know I think the biggest challenge, I think, is adjusting patients’ expectations because you know they got better the last time they were doing an antibiotic. [Stakeholder 64]

Patient satisfaction and clinician workload may also influence the clinician’s prescription decision.

We have a lot of patients that come in with back pain or knee pain or something. We’ll get an x-ray and see there’s nothing actually wrong physically that can be identified on x-ray at least and you have to do something. Otherwise, patient satisfaction will dip, and patients leave angry. [Clinician 28, Physician, urgent care clinic]

For some clinicians it’s just easier to prescribe an antibiotic when they know that’s the patient’s expectation and it shortens their in-room discussion and evaluation. [Clinician 55, Physician, ED]

Despite clinician perception, Veterans communicated that they did not necessarily expect a prescription and were instead focused on the clinical interaction and the clinician’s decision.

I’m not sure if they’ll give me [unintelligible] a prescription or what they’ll do. I don’t care as long as they stop the pain. [Patient 40, urgent care clinic]

I don’t expect to [receive a prescription], but I mean whatever the doctor finds is wrong with me I will follow what he says. [Patient 31, ED]

Theme 2: Hectic clinical environments and unique practice conditions in unscheduled settings provide little time to focus on prescribing practices

Clinicians and stakeholders reported that the time-constrained clinical environment and need to move onto the next patient were major challenges to prescribing stewardship.

The number one reason is to get a patient out of your office or exam bay and move on to the next one. [Stakeholder 28]

It takes a lot of time and you have to be very patient and understanding. So, you end up having to put a fair bit of emotional investment and intelligence into an encounter to not prescribe. [Stakeholder 1]

Stakeholders also noted that unique shift conditions and clinician perceptions that their patients were “different” might influence prescribing practices.

A common pushback was ‘well my patients are different.’ [Stakeholder 4]

Providers who worked different types of shifts, so if you happened to work on a Monday when the clinics were open and had more adults from the clinics you were more likely to prescribe antibiotics than if you worked over night and had fewer patients. Providers who worked primarily holidays or your Friday prescribing pattern may be very different if you could get them into a primary care provider the next day. [Stakeholder 22]

Clinicians also reported that historical practices in the clinical environment may contribute to inappropriate prescribing.

I came from working in the [outpatient] Clinic as a new grad and they’re very strict about prescribing only according to evidence-based practice. And then when I came here things are with other colleagues are a little more loose with that type of thing. It can be difficult because you start to adopt that practice to. [Clinician 61, Nurse Practitioner, ED]

Theme 3: Limited clinician knowledge, awareness, and willingness to use evidence-based care

Stakeholders felt that clinicians lacked knowledge about prescribing NSAIDs and antibiotics.

Sometimes errors are a lack of knowledge or awareness of the need to maybe specifically dose for let’s say impaired kidney function or awareness of current up to date current antibiotic resistance patterns in the location that might inform a more tailored antibiotic choice for a given condition. [Stakeholder 37]

NSAIDs are very commonly used in the emergency department for patients of all ages…the ED clinician is simply not being aware that for specific populations this is not recommended and again just doing routine practice for patients of all ages and not realizing that for older patients you actually probably should not be using NSAIDs. [Stakeholder 40]

Some clinicians may be unwilling to change their prescribing practices due to outright resistance, entrenched habits, or lack of interest in doing so.

It sounds silly but there’s always some opposition to people being mandated to do something. But there are some people who would look and go ‘okay we already have a handle on that so why do we need something else? I know who prescribes inappropriately and who doesn’t. Is this a requirement, am I evaluated on it? That would come from supervisors. Is this one more thing on my annual review?’ [Stakeholder 28]

If people have entrenched habits that are difficult to change and are physicians are very individualistic people who think that they are right more often than the non-physician because of their expensive training and perception of professionalism. [Stakeholder 4]

Theme 4: Uncertainty about whether an adverse event will occur

Clinicians cited the challenge of understanding the entirety of a Veteran's condition, including potential drug-drug interactions and existing comorbidities, when judging whether an NSAID prescription may result in an adverse event.

It’s oftentimes a judgement call if someone has renal function that’s right at the precipice of being too poor to merit getting NSAIDs that may potentially cause issues. [Clinician 43, Physician, inpatient and urgent care]

It depends on what the harm is. So, for instance, you can’t always predict allergic reactions. Harm from the non-steroidals would be more if you didn’t pre-identify risk factors for harm. So, they have ulcer disease, they have kidney problems where a non-steroidal would not be appropriate for that patient. Or potential for a drug-drug interaction between that non-steroid and another medication in particular. [Clinician 16, Physician, ED]

Rather than concern about adverse events resulting from the medication itself, stakeholders identified the uncertainty clinicians experience about whether a Veteran may suffer an adverse event from an infection if nothing is done. This uncertainty contributes to the prescription of an antibiotic.

My experience in working with providers at the VA over the years is that they worry more about the consequences of not treating an infection than about the consequences of the antibiotic itself. [Stakeholder 19]

Sometimes folks like to practice conservatively and they’ll say even though I didn’t really see any hard evidence of a bacterial infection, the patient’s older and sicker and they didn’t want to risk it. [Stakeholder 16]

Theme 5: Limited communication during and after the clinical encounter

The role and type of communication about prescribing depended upon the respondent. Clinicians identified inadequate communication and coordination with the Veteran’s primary care physician during the clinical encounter.

I would like to have a little more communication with the primary doctors. They don’t seem to be super interested in talking to anyone in the emergency room about their patients… A lot of times you don’t get an answer from the primary doctor or you get I’m busy in clinic. You can just pick something or just do what you think is right. [Clinician 25, Physician, ED]

Stakeholders, alternatively, identified the lack of post-encounter feedback on patient outcomes and clinical performance as a potential barrier.

Physicians tend to think that they are doing their best for every individual patient and without getting patient by patient feedback there is a strong cognitive bias to think well there must have been some exception and reason that I did it in this setting. [Stakeholder 34]

It’s really more their own awareness of like their clinical performance and how they’re doing. [Stakeholder 40]

Veterans, however, prioritized communication during the clinical encounter. They expressed the need for clear and informative communication with the clinician, including a rationale for the choice of medication, medication-specific details, and an opportunity to ask questions.

I expect him to tell me why I’m taking it, what it should do, and probably the side effects. [Patient 25, ED]

I’d like to have a better description of how to take it because I won’t remember all the time and sometimes what they put on the bottle is not quite as clear. [Patient 22, ED]

Veterans reported their desire for a simple way to learn about their medications, and they provided feedback on current approaches to educational materials about prescriptions.

Probably most pamphlets that people get they’re not going to pay attention to them. Websites can be overwhelming. [Patient 3, ED]

Posters can be offsetting. If you’re sick, you’re not going to read them…if you’re sick you may glance at that poster and disregard it. So, you’re not really going to see it but if you give them something in the hand people will tend to look at it because it’s in their hand. [Patient 19, ED]

It would be nice if labels or something just told me what I needed to know. You know take this exactly when and reminds me here’s why you’re taking it for and just real clear and not small letters. [Patient 7, ED]

Theme 6: Technology barriers limited the usefulness of clinical decision support for order checking and patient communication tools

Following the decision to prescribe a medication, clinicians complained that electronic health record pop-ups with clinical decision support warnings for potential safety concerns (e.g., drug-drug interactions) were both excessive and not useful in a busy clinical environment.

The more the pop ups, the more they get ignored. So, it’s finding that sweet spot right where you’re not constantly having to click out of something because you’re so busy. Particularly in our clinical setting where we have very limited amount of time to read the little monograph. Most of the time you click ‘no’ and off you go. [Clinician 16, Physician, ED]

Some of these mechanisms like the EMR [electronic medical record] or pop-up decision-making windows really limit your time. If you know the guidelines appropriately and doing the right thing, even if you’re doing the right thing it takes you a long time to get through something. [Clinician 19, Physician, Primary care clinic]

Building on Theme 5 about patient communication, patients reported that the VA patient portal (MyHealtheVet) was challenging to use for post-encounter communication with their primary care physician and for reviewing the medications they were prescribed.

I’ve got to get help to get onto MyHealtheVet but I would probably like to try and use that, but I haven’t been on it in quite some time. [Patient 22, ED]

I tried it [MyHealtheVet] once and it’s just too complicated so I’m not going to deal with it. [Patient 37, Urgent care]

Discussion

This work examined attitudes and perceptions of barriers to appropriate prescribing of antibiotics and NSAIDs in unscheduled outpatient care settings in the Veterans Health Administration. Expanding on prior qualitative work on antimicrobial stewardship programs, we also examined NSAID prescribing, a medication class that has received little prescribing stewardship attention. This work seeks to advance the understanding of the fundamental problems underlying prescribing stewardship in order to facilitate interventions designed not only to improve the decision to prescribe antibiotics and NSAIDs but also to enhance the safety checks once a decision to prescribe is made. Specifically, we identified six themes during these interviews: perceived versus actual Veteran expectations about prescribing; the influence of a time-pressured clinical environment on prescribing stewardship; limited clinician knowledge, awareness, and willingness to use evidence-based care; uncertainties about the Veteran's condition at the time of the clinical encounter; limited communication; and technology barriers.

Sensitive to patient expectations, clinicians believed that Veterans would be dissatisfied if they did not receive an antibiotic prescription, [34] even though most patients presenting to the ED for upper respiratory tract infections do not expect antibiotics. [35] However, recent work by Staub et al. found that among patients with respiratory tract infections, receipt of an antibiotic was not independently associated with improved satisfaction. [36] Instead, they found that receipt of antibiotics had to match the patient's expectations to affect patient satisfaction, and they recommended that clinicians communicate with their patients about prescribing expectations. This finding complements our results, and communication about expectations is similarly important for NSAID prescribing.

A commitment to stewardship and modification of clinician behavior may be compromised by the time-pressured clinical environment, numerous potential drug interactions, the comorbidities of a vulnerable Veteran population, and normative practices. The decision to prescribe medications such as antibiotics is a complex clinical decision and may be influenced by both clinical and non-clinical factors. [34, 37, 38] ED crowding, which occurs when the demand for services exceeds a system's ability to provide care, [39] is a well-recognized manifestation of a chaotic clinical environment and is associated with detrimental effects on the hospital system and patient outcomes. [40, 41] Congestion and wait times are unlikely to improve, as the COVID-19 pandemic has exacerbated the already existing crowding and boarding crisis in EDs. [42, 43]

Another theme was uncertainty in anticipating adverse events, exacerbated by the lack of a feedback loop. Feedback on clinical care processes and patient outcomes is uncommonly provided in emergency care settings, [44] yet may provide an opportunity to change clinician behavior, particularly for antimicrobial stewardship. [45] However, the frequent use of ineffective feedback strategies [46] compromises the ability to implement effective feedback interventions; feedback must be specific [47] and address the intention-to-action gap [48] by including co-interventions that address recipient characteristics (i.e., beliefs and capabilities) and context to maximize impact. Without these, feedback may be ineffective.

An additional barrier identified in this work is the limited communication with primary care following discharge. A 2017 National Quality Forum report on ED care transitions [49] recommended that EDs and their supporting hospital systems expand infrastructure and enhance health information technology to support care transitions, as Veterans may not understand discharge instructions, may not receive post-ED or urgent care follow-up, [50, 51, 52] or may not receive a newly prescribed medication. [24] While mechanisms exist to communicate between the ED and primary care teams, such as notifications when a Veteran presents to the ED and when an emergency clinician copies a primary care physician on a note, these mechanisms are insufficient to address care transition gaps and vary in best-practice use. To address this variability, the VA ED PACT Tool was developed using best practices (standardized processes, "closed-loop" communication, embedding into workflow) to facilitate and standardize communication between VA EDs and follow-up care clinicians. [53] While the ED PACT Tool is implemented at the Greater Los Angeles VA and can create a care coordination order upon ED discharge, it is not yet widely adopted throughout the VA.

In the final theme, technology barriers, once the decision has been made to prescribe a medication, the electronic tools that are key components of existing stewardship interventions designed to curtail potentially inappropriate prescriptions may be compromised by their lack of usability. For example, clinician and stakeholder respondents described how usability concerns (e.g., with electronic health record clinical decision support tools) were exacerbated in a time-pressured clinical environment. Clinical decision support is an effective tool to improve healthcare process measures in a diverse group of clinical environments; [54] however, usability remains a barrier when alerts must be frequently overridden. [55, 56] Alert fatigue, expressed in our interviews about order checking and recognized within the VA's EHR, [57, 58] may contribute to excessive overrides that reduce the benefit of clinical decision support. [56, 59] Notably, there was little discussion about the decision to initiate appropriate prescriptions, which is a key action of the CDC's outpatient antibiotic stewardship campaign. [18] Thus, a potentially more effective, albeit challenging, approach is to "nudge" clinicians toward appropriate prescribing and away from the initial decision to prescribe (e.g., inappropriate antibiotic prescribing for viral upper respiratory tract infections), either with default order sets for symptom management or with reminders about potential contraindications for specific indications (e.g., high-risk comorbidities). Beyond EHR-based solutions that might change clinician behavior, the CDC's outpatient antibiotic stewardship program provides a framework to change the normative practices around inappropriate prescribing and includes commitment to appropriate prescribing, action for policy and change, tracking and reporting, and education and expertise. [18]
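
As a sketch of the kind of targeted check such a "nudge" could involve, the hypothetical rule below flags an NSAID order only for specific high-risk findings and stays silent otherwise. The drug list, thresholds, and field names are invented for illustration and do not represent the VA's actual decision support logic.

```python
# Hypothetical order-check rule: warn only on specific, high-risk NSAID
# contraindications rather than firing generic pop-ups. All thresholds and
# names here are illustrative, not the VA's actual clinical decision support.
from typing import Optional

NSAID_CLASS = {"ibuprofen", "naproxen", "ketorolac"}

def nsaid_order_check(drug: str, egfr: float, ulcer_disease: bool,
                      on_anticoagulant: bool) -> Optional[str]:
    """Return a targeted warning, or None so that no alert fires."""
    if drug.lower() not in NSAID_CLASS:
        return None
    if egfr < 30:
        return "eGFR < 30 mL/min: NSAID may precipitate acute kidney injury"
    if ulcer_disease:
        return "History of ulcer disease: elevated GI bleeding risk"
    if on_anticoagulant:
        return "Anticoagulant on med list: drug-drug interaction risk"
    return None  # suppress generic alerts to limit alert fatigue

print(nsaid_order_check("ibuprofen", egfr=25, ulcer_disease=False,
                        on_anticoagulant=False))
```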

Patients face another technology barrier in patient-facing electronic tools such as the VA's MyHealtheVet portal, which was developed to enhance patient communication following care transitions and to allow Veterans to review their medications and communicate with their primary care clinical team. Patient portals can be an effective tool for medication adherence [60] and offer promise for patient education [61] following a clinical encounter. However, they are similarly limited by usability concerns, representing an adoption barrier to broader Veteran use after unscheduled outpatient care visits, [62] particularly in an older patient population.

These interviews further underscored that the lack of usability of clinical decision support for order checking arises from ineffective design and is a key barrier preventing health information technology from reaching its promise of improving patient safety. [63] A common and recognized reason for these design challenges is the failure to place the user (i.e., the acute care clinician) at the center of the design process, resulting in underutilization, workarounds, [64] and unintended consequences, [65] all of which diminish patient safety practices and fail to change clinician behavior (i.e., prescribing). Complex adaptive systems work best when the relative strengths of humans (e.g., context sensitivity, situation specificity) are properly integrated with the information processing power of computerized systems. [66] One potential approach to address usability concerns is to integrate user-centered design into technology development, an opportunity to build more clinician- and patient-centric systems of care and to advance prescribing stewardship interventions that may previously have lacked broader adoption. As antimicrobial stewardship and additional prescribing stewardship efforts focus on time-pressured environments where usability is essential to adoption, taking a user-centered design approach, not only to the development of electronic tools but also to addressing the identified barriers in prescribing, represents a promising approach to enhance the quality of prescribing.

Limitations

The study findings should be considered in light of their limitations. First, the setting for this work was the Veterans Health Administration, the largest integrated health system in the US, and we focused on the stewardship of two drug classes among the many prescribed in these settings; our findings may not generalize to other settings or other drug classes. Second, while the clinician and stakeholder perspectives included diverse, national representation, the Veterans interviewed were local to the Tennessee Valley Healthcare System. Given the concurrent COVID-19 pandemic at the time of enrollment, most of the Veterans were seen for pain-related complaints, and only two infection-related complaints were included; however, we also asked these Veterans about antibiotic prescribing. Clinician and stakeholder narratives may not completely reflect their practice patterns, as their responses could be influenced by social desirability bias. Third, responses may be subject to recall bias, which may have influenced the data collected. Finally, the themes and subthemes identified may overlap and interact. While we used an iterative process to identify discrete themes and subthemes, prescription decisions represent a complex decision process influenced by numerous patient and contextual factors, and the themes may not be completely independent.

Conclusions

Despite numerous interventions to improve the quality of prescribing, appropriate prescription of antibiotics and NSAIDs in unscheduled outpatient care settings remains a challenge. In the Veterans Health Administration, this study found that challenges to high-quality prescribing include perceived Veteran expectations about receipt of medications, a hectic clinical environment that deprioritizes stewardship, limited clinician knowledge, awareness, and willingness to use evidence-based care, uncertainty about the potential for adverse events, limited communication, and technology barriers. Findings from these interviews suggest that interventions should account for the detrimental impact of high workload on prescribing stewardship, clinician workflow, and the initial decision to prescribe medications, and should incorporate end-users into the intervention design process. Doing so is a promising approach to increase the adoption of high-quality prescribing practices and to improve patient outcomes from NSAID and antibiotic prescribing.

Availability of data and materials

De-identified datasets used and/or analysed during the current study will be made available from the corresponding author on reasonable request.

References

Leape LL, Brennan TA, Laird N, et al. The nature of adverse events in hospitalized patients. Results of the Harvard Medical Practice Study II. N Engl J Med. 1991;324(6):377–84.


Pitts SR, Carrier ER, Rich EC, Kellermann AL. Where Americans get acute care: increasingly, it’s not at their doctor’s office. Health Aff (Millwood). 2010;29(9):1620–9.


Palms DL, Hicks LA, Bartoces M, et al. Comparison of antibiotic prescribing in retail clinics, urgent care centers, emergency departments, and traditional ambulatory care settings in the United States. JAMA Intern Med. 2018;178(9):1267–9.


Davis JS, Lee HY, Kim J, et al. Use of non-steroidal anti-inflammatory drugs in US adults: changes over time and by demographic. Open Heart. 2017;4(1):e000550.

Fleming-Dutra KE, Hersh AL, Shapiro DJ, et al. Prevalence of inappropriate antibiotic prescriptions among US ambulatory care visits, 2010–2011. JAMA. 2016;315(17):1864–73.

Shively NR, Buehrle DJ, Clancy CJ, Decker BK. Prevalence of Inappropriate Antibiotic Prescribing in Primary Care Clinics within a Veterans Affairs Health Care System. Antimicrob Agents Chemother. 2018;62(8):e00337–18. https://doi.org/10.1128/AAC.00337-18 .  https://pubmed.ncbi.nlm.nih.gov/29967028/ .

World Health Organization. Global antimicrobial resistance and use surveillance system (GLASS) report: 2022. 2022.

Centers for Disease Control and Prevention. COVID-19: U.S. Impact on Antimicrobial Resistance, Special Report 2022. Atlanta: U.S. Department of Health and Human Services, CDC; 2022.


Shehab N, Lovegrove MC, Geller AI, Rose KO, Weidle NJ, Budnitz DS. US emergency department visits for outpatient adverse drug events, 2013–2014. JAMA. 2016;316(20):2115–25.

Fassio V, Aspinall SL, Zhao X, et al. Trends in opioid and nonsteroidal anti-inflammatory use and adverse events. Am J Manag Care. 2018;24(3):e61–72.

Centers for Disease Control and Prevention. Chronic Kidney Disease Surveillance System—United States. http://www.cdc.gov/ckd . Accessed 21 March 2023.

Cahir C, Fahey T, Teeling M, Teljeur C, Feely J, Bennett K. Potentially inappropriate prescribing and cost outcomes for older people: a national population study. Br J Clin Pharmacol. 2010;69(5):543–52.

Gabriel SE, Jaakkimainen L, Bombardier C. Risk for Serious Gastrointestinal Complications Related to Use of Nonsteroidal Antiinflammatory Drugs - a Metaanalysis. Ann Intern Med. 1991;115(10):787–96.

Zhang X, Donnan PT, Bell S, Guthrie B. Non-steroidal anti-inflammatory drug induced acute kidney injury in the community dwelling general population and people with chronic kidney disease: systematic review and meta-analysis. BMC Nephrol. 2017;18(1):256.

McGettigan P, Henry D. Cardiovascular risk with non-steroidal anti-inflammatory drugs: systematic review of population-based controlled observational studies. PLoS Med. 2011;8(9): e1001098.


Holt A, Strange JE, Nouhravesh N, et al. Heart Failure Following Anti-Inflammatory Medications in Patients With Type 2 Diabetes Mellitus. J Am Coll Cardiol. 2023;81(15):1459–70.

Davey P, Marwick CA, Scott CL, et al. Interventions to improve antibiotic prescribing practices for hospital inpatients. Cochrane Database Syst Rev. 2017;2(2):CD003543.

Sanchez GV, Fleming-Dutra KE, Roberts RM, Hicks LA. Core Elements of Outpatient Antibiotic Stewardship. MMWR Recomm Rep. 2016;65(6):1–12.

May L, Martin Quiros A, Ten Oever J, Hoogerwerf J, Schoffelen T, Schouten J. Antimicrobial stewardship in the emergency department: characteristics and evidence for effectiveness of interventions. Clin Microbiol Infect. 2021;27(2):204–9.

May L, Cosgrove S, L'Archeveque M, et al. A call to action for antimicrobial stewardship in the emergency department: approaches and strategies. Ann Emerg Med. 2013;62(1):69–77 e62.

Veterans Health Administration Emergency Medicine Management Tool. EDIS GeriatricsAgeReport v3.

Cairns C, Kang K, Santo L. National Hospital Ambulatory Medical Care Survey: 2020 emergency department summary tables. NHAMCS Factsheets - EDs Web site. https://www.cdc.gov/nchs/data/nhamcs/web_tables/2020-nhamcs-ed-web-tables-508.pdf. Accessed 20 Dec 2022.

Lowery JL, Alexander B, Nair R, Heintz BH, Livorsi DJ. Evaluation of antibiotic prescribing in emergency departments and urgent care centers across the Veterans’ Health Administration. Infect Control Hosp Epidemiol. 2021;42(6):694–701.

Hastings SN, Sloane RJ, Goldberg KC, Oddone EZ, Schmader KE. The quality of pharmacotherapy in older veterans discharged from the emergency department or urgent care clinic. J Am Geriatr Soc. 2007;55(9):1339–48.

Goodman LA. Snowball sampling. Ann Math Stat. 1961;32(1):148–70.

Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50.

Ajzen I. The theory of planned behavior. Organ Behav Hum Decis Process. 1991;50(2):179–211.

Ajzen I. The theory of planned behaviour: reactions and reflections. Psychol Health. 2011;26(9):1113–27.  https://doi.org/10.1080/08870446.2011.613995 .  https://www.tandfonline.com/doi/full/10.1080/08870446.2011.613995 .

Morse JM. The significance of saturation. Qual Health Res. 1995;5(2):147–9.

Azungah T. Qualitative research: deductive and inductive approaches to data analysis. Qual Res J. 2018;18(4):383–400.

Tjora A. Qualitative research as stepwise-deductive induction. Routledge; 2018.  https://www.routledge.com/Qualitative-Research-as-Stepwise-Deductive-Induction/Tjora/p/book/9781138304499 .

Fereday J, Muir-Cochrane E. Demonstrating rigor using thematic analysis: A hybrid approach of inductive and deductive coding and theme development. Int J Qual Methods. 2006;5(1):80–92.

Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19(6):349–57.

Patel A, Pfoh ER, Misra Hebert AD, et al. Attitudes of High Versus Low Antibiotic Prescribers in the Management of Upper Respiratory Tract Infections: a Mixed Methods Study. J Gen Intern Med. 2020;35(4):1182–8.

May L, Gudger G, Armstrong P, et al. Multisite exploration of clinical decision making for antibiotic use by emergency medicine providers using quantitative and qualitative methods. Infect Control Hosp Epidemiol. 2014;35(9):1114–25.

Staub MB, Pellegrino R, Gettler E, et al. Association of antibiotics with veteran visit satisfaction and antibiotic expectations for upper respiratory tract infections. Antimicrob Steward Healthc Epidemiol. 2022;2(1): e100.

Schroeck JL, Ruh CA, Sellick JA Jr, Ott MC, Mattappallil A, Mergenhagen KA. Factors associated with antibiotic misuse in outpatient treatment for upper respiratory tract infections. Antimicrob Agents Chemother. 2015;59(7):3848–52.

Hruza HR, Velasquez T, Madaras-Kelly KJ, Fleming-Dutra KE, Samore MH, Butler JM. Evaluation of clinicians’ knowledge, attitudes, and planned behaviors related to an intervention to improve acute respiratory infection management. Infect Control Hosp Epidemiol. 2020;41(6):672–9.

American College of Emergency Physicians Policy Statement. Crowding. https://www.acep.org/globalassets/new-pdfs/policy-statements/crowding.pdf . Published 2019. Accessed 11 Oct 2023.

Bernstein SL, Aronsky D, Duseja R, et al. The effect of emergency department crowding on clinically oriented outcomes. Acad Emerg Med. 2009;16(1):1–10.

Rasouli HR, Esfahani AA, Nobakht M, et al. Outcomes of crowding in emergency departments; a systematic review. Arch Acad Emerg Med. 2019;7(1):e52.

Janke AT, Melnick ER, Venkatesh AK. Monthly Rates of Patients Who Left Before Accessing Care in US Emergency Departments, 2017–2021. JAMA Netw Open. 2022;5(9): e2233708.

Janke AT, Melnick ER, Venkatesh AK. Hospital Occupancy and Emergency Department Boarding During the COVID-19 Pandemic. JAMA Netw Open. 2022;5(9): e2233964.

Lavoie CF, Plint AC, Clifford TJ, Gaboury I. “I never hear what happens, even if they die”: a survey of emergency physicians about outcome feedback. CJEM. 2009;11(6):523–8.

Ivers N, Jamtvedt G, Flottorp S, et al. Audit and feedback: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2012;(6):CD000259. https://doi.org/10.1002/14651858.CD000259.pub3 .

Hysong SJ, SoRelle R, Hughes AM. Prevalence of Effective Audit-and-Feedback Practices in Primary Care Settings: A Qualitative Examination Within Veterans Health Administration. Hum Factors. 2022;64(1):99–108.

Presseau J, McCleary N, Lorencatto F, Patey AM, Grimshaw JM, Francis JJ. Action, actor, context, target, time (AACTT): a framework for specifying behaviour. Implement Sci. 2019;14(1):102.

Desveaux L, Ivers NM, Devotta K, Ramji N, Weyman K, Kiran T. Unpacking the intention to action gap: a qualitative study understanding how physicians engage with audit and feedback. Implement Sci. 2021;16(1):19.

National Quality Forum. Emergency Department Transitions of Care: A Quality Measurement Framework—Final Report: DHHS contract HHSM‐500–2012–000091, Task Order HHSM‐500‐T0025. Washington, DC: National Quality Forum; 2017.

Kyriacou DN, Handel D, Stein AC, Nelson RR. Brief report: factors affecting outpatient follow-up compliance of emergency department patients. J Gen Intern Med. 2005;20(10):938–42.

Vukmir RB, Kremen R, Ellis GL, DeHart DA, Plewa MC, Menegazzi J. Compliance with emergency department referral: the effect of computerized discharge instructions. Ann Emerg Med. 1993;22(5):819–23.

Engel KG, Heisler M, Smith DM, Robinson CH, Forman JH, Ubel PA. Patient comprehension of emergency department care and instructions: are patients aware of when they do not understand? Ann Emerg Med. 2009;53(4):454–461 e415.

Cordasco KM, Saifu HN, Song HS, et al. The ED-PACT Tool Initiative: Communicating Veterans’ Care Needs After Emergency Department Visits. J Healthc Qual. 2020;42(3):157–65.

Bright TJ, Wong A, Dhurjati R, et al. Effect of clinical decision-support systems: a systematic review. Ann Intern Med. 2012;157(1):29–43.

Weingart SN, Toth M, Sands DZ, Aronson MD, Davis RB, Phillips RS. Physicians’ decisions to override computerized drug alerts in primary care. Arch Intern Med. 2003;163(21):2625–31.

van der Sijs H, Aarts J, Vulto A, Berg M. Overriding of drug safety alerts in computerized physician order entry. J Am Med Inform Assoc. 2006;13(2):138–47.

Shah T, Patel-Teague S, Kroupa L, Meyer AND, Singh H. Impact of a national QI programme on reducing electronic health record notifications to clinicians. BMJ Qual Saf. 2019;28(1):10–4.

Lin CP, Payne TH, Nichol WP, Hoey PJ, Anderson CL, Gennari JH. Evaluating clinical decision support systems: monitoring CPOE order check override rates in the Department of Veterans Affairs’ Computerized Patient Record System. J Am Med Inform Assoc. 2008;15(5):620–6.

Middleton B, Bloomrosen M, Dente MA, et al. Enhancing patient safety and quality of care by improving the usability of electronic health record systems: recommendations from AMIA. J Am Med Inform Assoc. 2013;20(e1):e2-8.

Han HR, Gleason KT, Sun CA, et al. Using Patient Portals to Improve Patient Outcomes: Systematic Review. JMIR Hum Factors. 2019;6(4): e15038.

Johnson AM, Brimhall AS, Johnson ET, et al. A systematic review of the effectiveness of patient education through patient portals. JAMIA Open. 2023;6(1):ooac085.

Lazard AJ, Watkins I, Mackert MS, Xie B, Stephens KK, Shalev H. Design simplicity influences patient portal use: the role of aesthetic evaluations for technology acceptance. J Am Med Inform Assoc. 2016;23(e1):e157-161.

Institute of Medicine. Health IT and Patient Safety: Building Safer Systems for Better Care. Washington, DC: National Academies Press; 2012.

Koppel R, Wetterneck T, Telles JL, Karsh BT. Workarounds to barcode medication administration systems: their occurrences, causes, and threats to patient safety. J Am Med Inform Assoc. 2008;15(4):408–23.

Ash JS, Sittig DF, Poon EG, Guappone K, Campbell E, Dykstra RH. The extent and importance of unintended consequences related to computerized provider order entry. J Am Med Inform Assoc. 2007;14(4):415–23.

Hollnagel E, Woods D. Joint Cognitive Systems: Foundations of Cognitive Systems Engineering. Boca Raton: CRC Press; 2006.


Acknowledgements

This material is based upon work supported by the Department of Veterans Affairs, Veterans Health Administration, Office of Research and Development, Health Services Research and Development (I01HX003057). The content is solely the responsibility of the authors and does not necessarily represent the official views of the VA.

Author information

Authors and Affiliations

Geriatric Research, Education, and Clinical Center (GRECC), VA Tennessee Valley Healthcare System, 2525 West End Avenue, Ste. 1430, Nashville, TN, 37203, USA

Michael J. Ward, Michael E. Matheny & Amanda S. Mixon

Medicine Service, Tennessee Valley Healthcare System, Nashville, TN, USA

Michael J. Ward

Department of Emergency Medicine, Vanderbilt University Medical Center, Nashville, TN, USA

Michael J. Ward & Melissa D. Rubenstein

Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN, USA

Michael J. Ward, Michael E. Matheny, Shilo Anders & Thomas Reese

Department of Biostatistics, Vanderbilt University Medical Center, Nashville, TN, USA

Michael E. Matheny

Division of General Internal Medicine & Public Health, Vanderbilt University Medical Center, Nashville, TN, USA

Michael E. Matheny

Department of Psychology, Vanderbilt University, Nashville, TN, USA

Kemberlee Bonnet, Chloe Dagostino & David G. Schlundt

Center for Research and Innovation in Systems Safety, Vanderbilt University Medical Center, Nashville, TN, USA

Shilo Anders

Section of Hospital Medicine, Vanderbilt University Medical Center, Nashville, TN, USA

Amanda S. Mixon


Contributions

Conceptualization: MJW, ASM, MEM, DS, SA. Methodology: MJW, ASM, MEM, DS, KB, SA, TR. Formal analysis: KB, DS, CD, MJW. Investigation: MJW, MDR, DS. Resources: MJW, MEM. Writing—Original Draft Preparation: MJW, ASM, KB, MDR. Writing—Review & Editing: All investigators. Supervision: MJW, ASM, MEM. Funding acquisition: MJW, MEM.

Corresponding author

Correspondence to Michael J. Ward.

Ethics declarations

Ethics approval and consent to participate

This study was approved by the VA Tennessee Valley Healthcare System Institutional Review Board as minimal risk (#1573619). A waiver of informed consent was approved, and each subject provided verbal consent prior to interviews. The IRB determined that all requirements set forth in 38 CFR 16.111 for human subjects research were satisfied. All methods were carried out according to the relevant guidelines and regulations.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary material 1.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Ward, M.J., Matheny, M.E., Rubenstein, M.D. et al. Determinants of appropriate antibiotic and NSAID prescribing in unscheduled outpatient settings in the veterans health administration. BMC Health Serv Res 24 , 640 (2024). https://doi.org/10.1186/s12913-024-11082-0


Received: 11 October 2023

Accepted: 07 May 2024

Published: 18 May 2024

DOI: https://doi.org/10.1186/s12913-024-11082-0


Keywords:

  • Non-Steroidal Anti-Inflammatory Drugs
  • Antibiotics
  • Qualitative Methods
  • Emergency Department
  • Urgent Care
  • Primary Care
  • Prescribing Stewardship


