Analytical Research Methods Explained with Examples
Analytical Research Techniques are fundamental tools that help researchers make sense of complex data. Imagine trying to decode insights from countless customer interactions without a systematic approach; the task would become overwhelming and inefficient. These techniques offer structured methods to analyze information, derive meaningful interpretations, and ultimately inform better decision-making in various fields.
Understanding these techniques is essential for effectively interpreting data and recognizing patterns. By employing analytical research methods, organizations can transform raw data into actionable insights. This not only fosters informed strategies but also enhances overall organizational performance. As we explore examples and applications, you'll gain insight into how these techniques can be effectively utilized in your research endeavors.
Types of Analytical Research Techniques
Analytical research techniques are essential tools for systematically gathering and interpreting data. Understanding these techniques allows researchers to derive meaningful insights and make informed decisions. Various methods exist, each serving specific purposes. For instance, qualitative techniques focus on understanding deeper motivations and attitudes, while quantitative techniques emphasize numerical data and statistical analysis.
The primary types of analytical research techniques include case studies, surveys, content analysis, and experimental research. Case studies provide in-depth investigations into specific instances, revealing complex dynamics. Surveys are effective for collecting broad data from target populations, enabling the identification of trends. Content analysis systematically evaluates existing materials, such as text or media, to uncover patterns. Experimental research, on the other hand, tests hypotheses through structured setups, providing causal insights.
By mastering these analytical research techniques, researchers can extract valuable insights that inform choices and strategies effectively. Understanding when to apply each technique is vital for optimizing research outcomes.
Quantitative Analytical Research Techniques
Quantitative analytical research techniques involve the systematic collection and analysis of numerical data to uncover patterns and draw conclusions. These methods allow researchers to quantify behaviors, opinions, and phenomena, enabling effective data-driven decision-making. Surveys and experiments are common approaches in this realm, as they allow for the collection of vast amounts of data in a structured manner.
Key techniques include descriptive statistics, which summarize data characteristics, and inferential statistics, which help make predictions or generalizations about a population based on sample data. Additionally, regression analysis can identify relationships between variables, while hypothesis testing provides a framework for validating theories. Collectively, these quantitative techniques form a robust foundation for analytical research methods, yielding actionable insights for various fields, from marketing to healthcare.
Qualitative Analytical Research Techniques
Qualitative analytical research techniques focus on understanding human behavior, emotions, and experiences. These methods gather rich, detailed data through various approaches, such as interviews, focus groups, and observations. Researchers often analyze this data to uncover patterns, themes, and insights that quantitative methods may overlook. By delving into participants' thoughts and feelings, qualitative methods offer a deeper comprehension of underlying motivations.
Several key techniques are commonly used in qualitative research. First, in-depth interviews provide personalized insights, allowing participants to share their stories and experiences openly. Second, focus groups facilitate dynamic discussions among participants, generating diverse perspectives on a topic. Finally, observational research enables researchers to witness behavior in natural settings, providing context to the data collected. Each technique plays a crucial role in shaping an understanding of the subject matter, ultimately enhancing the analytical research techniques available for interpretation and application.
Steps in Conducting Analytical Research
Conducting analytical research effectively involves a structured approach to gather and analyze data. First, define your research question. This step focuses on clarifying what you aim to uncover through research. An explicit question guides all subsequent steps by maintaining focus. Next, collect relevant data through various methods. This may include surveys, interviews, or secondary data sources, depending on the analytical research techniques you choose to utilize.
Once data is gathered, the next step is analysis. Employ statistical tools or qualitative methods to derive meaningful insights from the collected data. After analyzing, it's crucial to interpret the results. Consider how your findings relate to the initial research question. Finally, communicate your results plainly. Presenting your findings in a clear and actionable format ensures stakeholders can understand and apply the insights. Following these steps will enhance the effectiveness of your analytical research, leading to better-informed decisions.
Defining the Research Problem and Objectives
Defining a clear research problem is essential for any analytical study. It serves as the foundation upon which all elements of research are built. Initially, identifying the core issue helps researchers focus their inquiries and sets the direction for their analytical research techniques. Once the problem is articulated, specific objectives can be formulated that guide the research process and define the expected outcomes.
The objectives should align with the research problem and be measurable, allowing for a systematic approach to data collection and analysis. For instance, researchers might aim to assess user satisfaction, identify market trends, or understand consumer behavior. Establishing well-defined objectives not only clarifies the purpose of the research but also enhances the reliability of the findings. By understanding the problem and setting clear goals, researchers can utilize analytical methods more effectively, ensuring that their results generate meaningful insights.
Data Collection and Analysis Methods
Data collection and analysis methods are fundamental components of analytical research techniques. The process begins with identifying the research objectives, which guide what data needs to be collected. Researchers often employ qualitative methods like interviews or focus groups and quantitative methods such as surveys to gather valuable insights. Each method serves a different purpose, allowing researchers to explore in-depth nuances or identify broader trends.
Analysis follows data collection and typically includes coding qualitative data or employing statistical methods for quantitative data. Researchers can use various tools and techniques to extract meaningful patterns, trends, and anomalies. For instance, employing a matrix to pull specific insights from interviews can help pinpoint common pain points, as evidenced in the data trends discovered during the conversation analysis. Each step in this process is critical for achieving valid and actionable insights that inform decision-making.
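The "matrix" idea above can be sketched in a few lines of Python. This is a hypothetical example (the interview IDs and theme codes are invented), showing how tallying researcher-assigned codes across interviews surfaces the most common pain points:

```python
from collections import Counter

# Hypothetical coded interview data: each interview mapped to the
# themes a researcher assigned while reading the transcript
coded_interviews = {
    "interview_01": ["pricing", "onboarding", "support"],
    "interview_02": ["onboarding", "support"],
    "interview_03": ["pricing", "reporting"],
    "interview_04": ["support", "onboarding"],
}

# Tally how often each theme appears across all interviews
theme_counts = Counter(t for themes in coded_interviews.values() for t in themes)

# The most frequently coded themes point to shared pain points
for theme, n in theme_counts.most_common():
    print(f"{theme}: mentioned in {n} interview(s)")
```

In practice the coding itself is the hard, judgment-driven step; the tally simply makes the resulting pattern visible at a glance.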
Conclusion on Analytical Research Techniques
In conclusion, Analytical Research Techniques are essential for extracting valuable insights from various data sources. These techniques enable researchers to identify patterns and trends that inform decision-making processes across multiple disciplines. By employing these methods, organizations can create reports that convey pertinent findings to stakeholders effectively.
Furthermore, the application of these techniques promotes a deeper understanding of customer behavior and market dynamics. Analyzing data collaboratively improves content accuracy and enhances strategic planning. Ultimately, mastering analytical research techniques equips teams with the tools needed to navigate complex information and make informed decisions that drive success.
Analytical Research: What is it, Importance + Examples
The word “research” loosely translates as “finding knowledge”: a systematic, scientific way of investigating a particular subject. Analytical research is one such form of scientific investigation.

Any kind of research is a way to learn new things. In analytical research, data and other pertinent information about a project are assembled; once the information is gathered and assessed, the sources are used to support a notion or prove a hypothesis.

Using critical thinking abilities (a way of thinking that involves identifying a claim or assumption and determining whether it is accurate or untrue), a researcher can draw out minor facts and build them into more significant conclusions about the subject matter.
What is analytical research?
This kind of research calls for critical thinking and the assessment of data and information pertinent to the project at hand. It determines the causal connections between two or more variables. For example, an analytical study might aim to identify the causes and mechanisms underlying a trade deficit's movement over a given period.
It is used by various professionals, including psychologists, doctors, and students, to identify the most pertinent material during investigations. One learns crucial information from analytical research that helps them contribute fresh concepts to the work they are producing.
Some researchers perform it to uncover information that supports ongoing research to strengthen the validity of their findings. Other scholars engage in analytical research to generate fresh perspectives on the subject.
Various approaches to performing this research include literary analysis, gap analysis, general public surveys, clinical trials, and meta-analysis.
Importance of analytical research
The goal of analytical research is to combine numerous minute details into new, more credible ideas.

Analytical investigation explains why a claim should be trusted. Finding out why something occurs is complex; it requires careful evaluation of information and critical thinking.

This kind of information helps prove the validity of a theory or support a hypothesis. It assists in recognizing a claim and determining whether it is true.

Analytical research is valuable to many people, including students, psychologists, and marketers. In business, it helps determine which advertising initiatives perform best; in medicine, it helps determine how well a particular treatment works.
Thus, analytical research can help people achieve their goals while saving lives and money.
Methods of Conducting Analytical Research
Analytical research is the process of gathering, analyzing, and interpreting information to make inferences and reach conclusions. Depending on the purpose of the research and the data you have access to, you can conduct analytical research using a variety of methods. Here are a few typical approaches:
Quantitative research
Numerical data are gathered and analyzed using this method. Statistical methods are then used to analyze the information, which is often collected using surveys, experiments, or pre-existing datasets. Results from quantitative research can be measured, compared, and generalized numerically.
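A minimal sketch of the statistical side of this method, on invented survey data: two groups rate a product (say, after seeing two different ad variants), and a Welch's t statistic, computed here with only the standard library, measures how many standard errors separate the group means:

```python
import statistics
from math import sqrt

# Hypothetical survey scores (1-10 scale) from two respondent groups
group_a = [7.1, 6.8, 7.4, 7.9, 6.5, 7.2, 7.7, 6.9]
group_b = [6.2, 6.5, 5.9, 6.8, 6.1, 6.4, 6.0, 6.6]

mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
n_a, n_b = len(group_a), len(group_b)

# Welch's t statistic: difference of means in units of its standard error
t = (mean_a - mean_b) / sqrt(var_a / n_a + var_b / n_b)
print(f"mean A = {mean_a:.2f}, mean B = {mean_b:.2f}, t = {t:.2f}")
```

A large |t| suggests the difference between the groups is unlikely to be sampling noise; a full analysis would convert t into a p-value against the appropriate t distribution.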
Qualitative research
In contrast to quantitative research, qualitative research focuses on collecting non-numerical information. It gathers detailed information using techniques like interviews, focus groups, observations, or content research. Understanding social phenomena, exploring experiences, and revealing underlying meanings and motivations are all goals of qualitative research.
Mixed methods research
This strategy combines quantitative and qualitative methodologies to grasp a research problem thoroughly. Mixed methods research often entails gathering and evaluating both numerical and non-numerical data, integrating the results, and offering a more comprehensive viewpoint on the research issue.
Experimental research
Experimental research is frequently employed in scientific trials and investigations to establish causal links between variables. This approach entails modifying variables in a controlled environment to identify cause-and-effect connections. Researchers randomly divide volunteers into several groups, provide various interventions or treatments, and track the results.
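The random-assignment step described above can be sketched in a few lines. The participant IDs here are hypothetical placeholders; the point is that shuffling before splitting removes selection bias between the groups:

```python
import random

# Hypothetical pool of volunteers to be split into treatment/control
participants = [f"P{i:02d}" for i in range(1, 21)]

random.seed(42)          # fixed seed so the assignment is reproducible
random.shuffle(participants)

# Random assignment: first half receives the intervention
treatment = participants[:10]
control = participants[10:]
print("treatment:", treatment)
print("control:  ", control)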
Observational research
With this approach, behaviors or occurrences are observed and methodically recorded without outside interference or manipulation of variables. Observational research can take place in both controlled surroundings and naturalistic settings. It offers useful insights into real-world behavior and enables researchers to explore events as they naturally occur.
Case study research
This approach entails thorough research of a single case or a small group of related cases. Case studies frequently draw on a variety of information sources, including observations, records, and interviews. They offer rich, in-depth insights and are particularly helpful for researching complex phenomena in practical settings.
Secondary data analysis
With this approach, researchers examine information that was previously gathered for a different purpose. This may include data from earlier cohort studies, accessible databases, or corporate documents. Examining secondary information is time- and cost-efficient, enabling researchers to explore new research questions or confirm prior findings.
Content analysis
This approach systematically examines the content of texts, including media, speeches, and written documents. Researchers identify and categorize themes, patterns, or keywords to make inferences about the content. Content analysis is frequently employed in the social sciences and media studies.
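The keyword-counting core of content analysis can be sketched with the standard library. The documents and category keyword sets below are hypothetical stand-ins for a real corpus and coding scheme:

```python
import re
from collections import Counter

# Hypothetical corpus: short excerpts standing in for speeches or articles
documents = [
    "The economy grew while inflation stayed low.",
    "Voters worry about inflation and the cost of living.",
    "Economic growth slowed; inflation remains the top concern.",
]

# Coding scheme: each category is defined by a set of keywords
categories = {"economy": {"economy", "economic", "growth"},
              "inflation": {"inflation", "cost"}}

# Tokenize the corpus and tally word frequencies
words = Counter(w for doc in documents for w in re.findall(r"[a-z]+", doc.lower()))
for name, keywords in categories.items():
    total = sum(words[k] for k in keywords)
    print(f"{name}: {total} keyword occurrence(s)")
```

Real content analysis adds careful category construction and inter-coder reliability checks; the counting itself is the easy part.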
Depending on your research objectives, the resources at your disposal, and the type of data you wish to analyze, selecting the most appropriate approach or combination of methodologies is crucial to conducting analytical research.
Examples of analytical research
Analytical research does not simply take a measurement; it examines the causes behind changes. Rather than only reporting a trade imbalance, for example, an analytical study would consider why the imbalance changed, using detailed statistics and statistical checks to help guarantee that the results are significant.

Similarly, an analytical study can look into why the value of the Japanese Yen has decreased, because analytical research addresses “how” and “why” questions.

Another example: a researcher might conduct analytical research to identify a gap in an existing study. It presents a fresh perspective on the data and thereby helps support or refute existing notions.
Descriptive vs analytical research
The key difference between the two is the question each answers: descriptive research documents what is happening, while analytical research explains why it happens.

Because it studies cause and effect, analytical research is used extensively across academic disciplines, including marketing, health, and psychology, where it offers more conclusive information for addressing research issues.
QuestionPro offers solutions for every issue and industry, making it more than just survey software. For handling data, we also have systems like our InsightsHub research library.
You may make crucial decisions quickly while using QuestionPro to understand your clients and other study subjects better. Make use of the possibilities of the enterprise-grade research suite right away!
Writing theoretical frameworks, analytical frameworks and conceptual frameworks
Three of the most challenging concepts for me to explain are the interrelated ideas of a theoretical framework, a conceptual framework, and an analytical framework. All three of these tend to be used interchangeably. While I find these concepts somewhat fuzzy and I struggle sometimes to explain the differences between them and clarify their usage for my students (and clearly I am not alone in this challenge), this blog post is an attempt to help discern these analytical categories more clearly.
A lot of people (my own students included) have asked me if the theoretical framework is their literature review. That’s actually not the case. A theoretical framework , the way I define it, is comprised of the different theories and theoretical constructs that help explain a phenomenon. A theoretical framework sets out the various expectations that a theory posits and how they would apply to a specific case under analysis, and how one would use theory to explain a particular phenomenon. I like how theoretical frameworks are defined in this blog post . Dr. Cyrus Samii offers an explanation of what a good theoretical framework does for students .
For example, you can use framing theory to help you explain how different actors perceive the world. Your theoretical framework may be based on theories of framing, but it can also include others. For example, in this paper, Zeitoun and Allan explain their theoretical framework, aptly named hydro-hegemony . In doing so, Zeitoun and Allan explain the role of each theoretical construct (Power, Hydro-Hegemony, Political Economy) and how they apply to transboundary water conflict. Another good example of a theoretical framework is that posited by Dr. Michael J. Bloomfield in his book Dirty Gold, as I mention in this tweet:
In Chapter 2, @mj_bloomfield nicely sets his theoretical framework borrowing from sociology, IR, and business-strategy scholarship pic.twitter.com/jTGF4PPymn — Dr Raul Pacheco-Vega (@raulpacheco) December 24, 2017
An analytical framework is, the way I see it, a model that helps explain how a certain type of analysis will be conducted. For example, in this paper, Franks and Cleaver develop an analytical framework that includes scholarship on poverty measurement to help us understand how water governance and poverty are interrelated . Other authors describe an analytical framework as a “conceptual framework that helps analyse particular phenomena”, as posited here , ungated version can be read here .
I think it’s easy to conflate analytical frameworks with theoretical and conceptual ones because of the way in which concepts, theories and ideas are harnessed to explain a phenomenon. But I believe the most important element of an analytical framework is instrumental: its purpose is to help undertake analyses. You use elements of an analytical framework to deconstruct a specific concept/set of concepts/phenomenon. For example, in this paper, Bodde et al. develop an analytical framework to characterise sources of uncertainties in strategic environmental assessments.
A robust conceptual framework describes the different concepts one would need to know to understand a particular phenomenon, without pretending to create causal links across variables and outcomes. In my view, theoretical frameworks set expectations, because theories are constructs that help explain relationships between variables and specific outcomes and responses. Conceptual frameworks, the way I see them, are like lenses through which you can see a particular phenomenon.
A conceptual framework should serve to help illuminate and clarify fuzzy ideas, and fill lacunae. Viewed this way, a conceptual framework offers insight that would not otherwise be gained without a more profound understanding of the concepts explained in the framework. For example, in this article, Beck offers social movement theory as a conceptual framework that can help understand terrorism. As I explained in my metaphor above, social movement theory is the lens through which you see terrorism, and you get a clearer understanding of how it operates precisely because you used this particular theory.
Dan Kaminsky offered a really interesting explanation connecting these topics to time, read his tweet below.
I think this maps to time. Theoretical frameworks talk about how we got here. Conceptual frameworks discuss what we have. Analytical frameworks discuss where we can go with this. See also legislative/executive/judicial. — Dan Kaminsky (@dakami) September 28, 2018
One of my CIDE students, Andres Ruiz, reminded me of this article on conceptual frameworks in the International Journal of Qualitative Methods. I’ll also be adding resources as I get them via Twitter or email. Hopefully this blog post will help clarify this idea!
By Raul Pacheco-Vega – September 28, 2018
Analytical Modeling: Turning Complex Data into Simple Solutions
Updated: January 28, 2024 by iSixSigma Staff
Not everything in business is quantifiable, but most of it is. Understanding the relationships between dozens of different factors and forces influencing a specific outcome can seem impossible, but it’s not. Analytical modeling is an effective and reliable technique for turning a mess of different variables and conditions into information you can actually use to make decisions.
Overview: What is analytical modeling?
Analytical modeling is a mathematical approach to business analysis that uses complex calculations that often involve numerous variables and factors. This type of analysis can be a powerful tool when seeking solutions to specific problems when used with proper technique and care.
3 benefits of analytical modeling
It’s hard to overstate the value of strong analytics. Mathematical analysis is useful at any scale and for almost every area of business management.
1. Data-driven decisions
The primary benefit of leveraging analytical modeling is the security of making data-driven decisions. Leaders don’t have to take a shot in the dark. They can use analytics to accurately define problems , develop solutions and anticipate outcomes.
2. Logical information structure
Analytical modeling is all about relating and structuring information in a sensible way. This means you can use the results to trace general outcomes to specific sources.
3. Can be shared and improved
The objective nature of analytical modeling makes it a perfect way to establish a common foundation for discussion among a diverse group. Rather than trying to get everyone on the same page through personal and subjective theorizing, using analytical data establishes a singular framework for universal reference within an organization.
Why is analytical modeling important to understand?
Like any other business practice, it’s important to understand this kind of analysis so you know what it can and can’t do. Even though it’s a powerful tool in the right hands, it’s not a magic solution that’s guaranteed to fix your problems.
Information requires interpretation
Information can be invaluable or completely worthless depending on how you use it. You should always carefully examine the factors and implications of the data in question before basing major decisions on it.
Analytics needs good data
Accurate, complete and relevant information are essential for a useful outcome. If poor data is put into a model, poor results will come out. Ensuring quality of data collection techniques is just as important as the modeling itself.
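A minimal data-quality gate illustrates the point. The record schema here is hypothetical; the idea is to reject records that would silently poison a model downstream:

```python
# Hypothetical schema: each record needs an ID and a non-negative spend
def is_valid(record):
    return (
        record.get("customer_id") is not None
        and isinstance(record.get("spend"), (int, float))
        and record["spend"] >= 0
    )

raw = [
    {"customer_id": 1, "spend": 42.0},
    {"customer_id": None, "spend": 10.0},   # missing key field
    {"customer_id": 3, "spend": -5.0},      # impossible value
]

# Filter out invalid records before any modeling step
clean = [r for r in raw if is_valid(r)]
print(f"kept {len(clean)} of {len(raw)} records")
```

Even a crude gate like this catches whole classes of errors before they reach the model, which is far cheaper than debugging a model whose inputs were quietly wrong.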
Various applications and approaches
Analytical modeling tends to focus on specific issues, questions or problems. There are several different types of models that can be used, which means you need to figure out the one that best fits each situation.
An industry example of analytical modeling
A barbecue restaurant serves customers every day of the week from lunch through dinner. To increase overall profit, management wants to reduce losses from waste and cut down on missed sales. Since they need to start preparing meat days in advance and any leftovers are discarded, the establishment needs to find a way to accurately predict how many customers they will have each day.
The restaurant hires outside contractors to create a predictive analytics model to address this need. The modelers examine various relevant factors, including historical customer attendance in previous weeks, weather predictions and upcoming specials or events of nearby restaurants. They create an initial model and start comparing actual results against predicted results until they’ve reached 90 percent accuracy, which is enough to meet the restaurant’s goals.
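The accuracy check described above can be sketched directly. The predicted and actual customer counts below are invented for illustration; the scoring rule (share of days where the forecast lands within 10% of actual turnout) is one plausible way to operationalize the restaurant's target:

```python
# Hypothetical week of predicted vs. actual daily customer counts
predicted = [120, 135, 150, 180, 210, 260, 240]
actual =    [115, 140, 148, 172, 170, 250, 235]

# Score the model: on how many days is the prediction within 10% of actual?
within_10pct = sum(abs(p - a) / a <= 0.10 for p, a in zip(predicted, actual))
accuracy = within_10pct / len(actual)
print(f"within 10% on {within_10pct}/{len(actual)} days ({accuracy:.0%})")
```

Comparing predictions against outcomes on a rolling basis like this is also how the modelers would know when the 90 percent target has been reached.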
3 best practices when thinking about analytical modeling
Think about analytical modeling as a starting point for decisions and a tool that can be continually improved as you use it.
1. Start with a goal
Analytical modeling can’t answer a question that isn’t asked. It’s easy to make the mistake of looking for answers or patterns in general data. This kind of modeling is best used by creating calculations to answer a specific initial question, like: “How can we turn more visitors into customers?” or “How can we make this process less wasteful?”
2. Continue to refine parameters
Think of the first model as a rough draft. Once you have an initial model delivering results, it’s important to compare it to reality and find ways to make the results even better.
3. Be consistent
Don’t just turn to analytics when faced with an urgent problem. If you make data mining and analysis a part of your daily operations, you’ll be in a much better position to actually leverage this strategy when the time comes.
Frequently Asked Questions (FAQ) about analytical modeling
What are the common forms of analytical models?
There are four main types of models: descriptive, diagnostic, predictive and prescriptive. The right one to use depends on the kind of question you need an answer to.
How do you make an analytical model?
Modeling requires access to a full set of relevant data points, relationship conditions and project objectives. For example, when trying to predict the outcome of a certain situation, modelers need to account for every factor that can impact the outcome and understand, in a quantifiable way, how each factor influences the results and the other variables in the calculation.
What is the purpose of analytical models?
The purpose of analytical modeling is to make sense of a process or situation that has too many variables to estimate accurately. It’s particularly important when dealing with larger operations and processes.
Managing with models
Companies survived for hundreds of years without computing technology to help them do complex modeling. However, that doesn’t mean you will be fine without it. The data revolution has already happened and the capabilities it offers companies can’t be ignored. Business leaders in every industry should be moving modeling to the center of their management practices if they are serious about growing in the years ahead.
About the author: iSixSigma Staff
Marketing Research
21 Analytical Models
Marketing models consist of
- Analytical Model: pure mathematical-based research
- Empirical Model: data analysis.
“A model is a representation of the most important elements of a perceived real-world system”.
Marketing models improve decision-making
Econometric models
- Description
Optimization models
- maximize profit using market response model, cost functions, or any constraints.
Quasi- and Field experimental analyses
Conjoint Choice Experiments.
“A decision calculus will be defined as a model-based set of procedures for processing data and judgments to assist a manager in his decision making” ( Little 1976 ) :
- easy to control
- as complete as possible
- easy to communicate with
( K. S. Moorthy 1993 )
Mathematical Theoretical Models
Logical Experimentation
An environment as a model, specified by assumptions
Math assumptions for tractability
Substantive assumptions for empirical testing
Decision support models describe how things work, while theoretical models present how things should work.
Compensation package including salaries and commission is a tradeoff between reduced income risk and motivation to work hard.
Internal and external validity are questions related to the boundary conditions of your experiments.
“Theories are tested by their predictions, not by the realism of their model assumptions.” (Friedman, 1953)
( McAfee and McMillan 1996 )
Competition is performed under uncertainty
Competition reveals hidden information
Independent-private-values case: selling price = second highest valuation
It’s always better for sellers to reveal information since it reduces the chance of cautious bidding that results from the winner’s curse
Competition is better than bargaining
- Competition requires less computation and commitment abilities
Competition creates effort incentives
( Leeflang et al. 2000 )
Types of model:
Predictive model
Sales model: using time series data
Trial rate: using exponential growth.
Product growth model: Bass ( 1969 )
Descriptive model
Purchase incidence and purchase timing : use Poisson process
Brand choice: Markov models or learning models.
Pricing decisions in an oligopolistic market Howard and Morgenroth ( 1968 )
Normative model
- Profit maximization based on price, advertising and quality ( Dorfman and Steiner 1976 ) , extended by ( H. V. Roberts, Ferber, and Verdoorn 1964 ; Lambin 1970 )
Later, Little ( 1970 ) introduced decision calculus and then multinomial logit model ( Peter M. Guadagni and Little 1983 )
Potential marketing decision automation:
Promotion or pricing programs
Media allocation
Distribution
Product assortment
Direct mail solicitation
( K. S. Moorthy 1985 )
Definitions:
Rationality = maximizing subjective expected utility
Intelligence = recognizing other firms are rational.
Rules of the game include
feasible set of actions
utilities for each combination of moves
sequence of moves
the structure of info (who knows what and when?)
Incomplete info stems from
unknown motivations
unknown ability (capabilities)
different knowledge of the world.
Pure strategy = plan of action
A mixed strategy = probability dist of pure strategies.
Strategic form representation = sets of possible strategies for every firm and its payoffs.
Equilibrium = a list of strategies in which “no firm would like unilaterally to change its strategy.”
Equilibrium is not the outcome of a dynamic process.
Equilibrium Application
Oligopolistic Competition
Cournot (1838): quantities supplied: Cournot equilibrium. Changing quantities is more costly than changing prices
Bertrand (1883): Bertrand equilibrium: pricing.
Perfect competition
Product Competition: Hotelling (1929): Principle of Minimum Differentiation is invalid.
first mover advantage
deterrent strategy
optimal for entrants or incumbents
Perfectness of equilibria
Subgame perfectness
Sequential rationality
Trembling-hand perfectness
Application
Product and price competition in Oligopolies
Strategic Entry Deterrence
Dynamic games
Long-term competition in oligopolies
Implicit Collusion in practice : price match from leader firms
Incomplete Information
Durable goods pricing by a monopolist
predatory pricing and limit pricing
reputation, product quality, and prices
Competitive bidding and auctions
21.1 Building An Analytical Model
Notes by professor Sajeesh Sajeesh
Step 1: Get “good” idea (either from literature or industry)
Step 2: Assess the feasibility of the idea
Is it interesting?
Can you tell a story?
Who is the target audience?
Opportunity cost
Step 3: Don’t look at the literature too soon
- Even when you have an identical model as in the literature, it’s ok (it allows you to think)
Step 4: Build the model
Simplest model first: 1 period, 2 products, linear utility function for consumers
Write down the model formulation
Everything should be as simple as possible … but no simpler
Step 5: Generalizing the model
- Adding complexity
Step 6: Searching the literature
- If you find a paper, you can ask yourself why you didn’t do what the author has done.
Step 7: Give a talk /seminar
Step 8: Write the paper
21.2 Hotelling Model
( KIM and SERFES 2006 ) : A location model with preference variety
( Hotelling 1929 )
Stability in competition
Duopoly is inherently unstable
Bertrand disagrees with Cournot, and Edgeworth elaborates on it,
- because of Cournot’s assumption of absolutely identical products between firms.
a seller can undercut and capture the whole market when \(p_2 < p_1 - c(l-a-b)\)
the point of indifference
\[ p_1 + cx = p_2 + cy \]
c = cost per unit of time in each unit of line length
q = quantity
x, y = length from A and B respectively
\[ a + x + y + b = l \]
is the length of the street
Hence, we have
\[ x = 0.5(l - a - b + \frac{p_2- p_1}{c}) \\ y = 0.5(l - a - b + \frac{p_1- p_2}{c}) \]
Profits will be
\[ \pi_1 = p_1 q_1 = p_1 (a+ x) = 0.5 (l + a - b) p_1 - \frac{p_1^2}{2c} + \frac{p_1 p_2}{2c} \\ \pi_2 = p_2 q_2 = p_2 (b+ y) = 0.5 (l - a + b) p_2 - \frac{p_2^2}{2c} + \frac{p_1 p_2}{2c} \]
To set the price to maximize profit, we have
\[ \frac{\partial \pi_1}{\partial p_1} = 0.5 (l + a - b) - \frac{p_1}{c} + \frac{p_2}{2c} = 0 \\ \frac{\partial \pi_2}{\partial p_2} = 0.5 (l - a + b) - \frac{p_2}{c} + \frac{p_1}{2c} = 0 \]
which equals
\[ p_1 = c(l + \frac{a-b}{3}) \\ p_2 = c(l - \frac{a-b}{3}) \]
\[ q_1 = a + x = 0.5 (l + \frac{a -b}{3}) \\ q_2 = b + y = 0.5 (l - \frac{a-b}{3}) \]
with the SOC satisfied
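The closed-form prices can be checked numerically: plug them into the profit functions above and confirm the first-order conditions hold. The parameter values \(l, a, b, c\) below are hypothetical (any values with \(a + b < l\) would do).

```python
# Numeric check of the Hotelling equilibrium prices; l, a, b, c are
# hypothetical values satisfying a + b < l.
l, a, b, c = 10.0, 2.0, 3.0, 1.5

def profit1(p1, p2):
    x = 0.5 * (l - a - b + (p2 - p1) / c)   # firm 1's contested segment
    return p1 * (a + x)

def profit2(p1, p2):
    y = 0.5 * (l - a - b + (p1 - p2) / c)
    return p2 * (b + y)

# Closed-form equilibrium prices from the derivation above.
p1 = c * (l + (a - b) / 3)
p2 = c * (l - (a - b) / 3)

# First-order conditions should vanish at (p1, p2).
eps = 1e-6
d1 = (profit1(p1 + eps, p2) - profit1(p1 - eps, p2)) / (2 * eps)
d2 = (profit2(p1, p2 + eps) - profit2(p1, p2 - eps)) / (2 * eps)
print(round(p1, 4), round(p2, 4), round(d1, 6), round(d2, 6))
```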
In case of deciding locations, socialism works better than capitalism
( d’Aspremont, Gabszewicz, and Thisse 1979 )
- Principle of Minimum Differentiation is invalid
\[ \pi_1 (p_1, p_2) = \begin{cases} ap_1 + 0.5(l-a-b) p_1 + \frac{1}{2c}p_1 p_2 - \frac{1}{2c}p_1^2 & \text{if } |p_1 - p_2| \le c(l-a-b) \\ lp_1 & \text{if } p_1 < p_2 - c(l-a-b) \\ 0 & \text{if } p_1 > p_2 + c(l-a-b) \end{cases} \]
\[ \pi_2 (p_1, p_2) = \begin{cases} bp_2 + 0.5(l-a-b) p_2 + \frac{1}{2c}p_1 p_2 - \frac{1}{2c}p_2^2& \text{if } |p_1 - p_2| \le c(l-a-b) \\ lp_2 & \text{if } p_2 < p_1 - c(l-a-b) \\ 0 & \text{if } p_2 > p_1 + c(l-a-b) \end{cases} \]
21.3 Positioning Models
Tabuchi and Thisse ( 1995 )
Relax Hotelling’s model’s assumption of uniform distribution of consumers to non-uniform distribution.
Assumptions:
Consumers distributed over [0,1]
\(F(x)\) = cumulative distribution of consumers where \(F(1) = 1\) = total population
2 distributions:
Traditional uniform density: \(f(x) =1\)
New: triangular density: \(f(x) = 2 - 2|2x-1|\) which represents consumer concentration
Transportation cost = quadratic function of distance.
Hence, marginal consumer is
\[ \bar{x} = \frac{p_2 - p_1 + x^2_2-x_1^2}{2(x_2-x_1)} \]
then when \(x_1 < x_2\) the profit function is
\[ \Pi_1 = p_1 F(\bar{x}) \]
\[ \Pi_2 = p_2[1-F(\bar{x})] \]
and vice versa for \(x_1 >x_2\) , and Bertrand game when \(x_1 = x_2\)
If firms pick simultaneously their locations, and then simultaneously their prices, and consumer density function is log-concave, then there is a unique Nash price equilibrium
Under uniform distribution, firms choose to locate as far apart as possible (could be true when observing shopping centers are far away from cities), but then consumers have to buy products that are far away from their ideal.
Under triangular density, no symmetric location can be found, but two asymmetric Nash location equilibrium can still be possible (decrease in equilibrium profits of both firms)
If firms pick sequentially their locations, and pick their prices simultaneously,
- Under both uniform and triangular, first entrant will locate at the market center
Sajeesh and Raju ( 2010 )
Model satiation (variety-seeking) as a relative reduction in the willingness to pay of the previously purchased brand. also known as negative state dependence
Previous studies argue that in the presence of variety seeking consumers, firms should enjoy higher prices and profits, but this paper argues that average prices and profits are lower.
- Firms should charge lower prices in the second period to prevent consumers from switching.
Period 0, choose location simultaneously
Period 1, choose prices simultaneously
Period 2, firms choose prices simultaneously
- K. S. Moorthy ( 1988 )
- 2 (identical) firms pick product (quality) first, then price.
Tyagi ( 2000 )
Extending Hotelling ( 1929 ) Tyagi ( 1999b ) Tabuchi and Thisse ( 1995 )
Two firms enter sequentially , and have different cost structures .
Paper shows second mover advantage
KIM and SERFES ( 2006 )
Consumers can make multiple purchases.
Some consumers are loyal to one brand, and others consume more than one product.
Shreay, Chouinard, and McCluskey ( 2015 )
- Quantity surcharges for different sizes of the same product (i.e., imperfect substitutes or differentiated products) can be driven by consumer preferences.
21.4 Market Structure and Framework
Basic model utilizing aggregate demand
Bertrand Equilibrium: Firms compete on price
Cournot Market structure: Firms compete on quantity
Stackelberg Market structure: Leader-Follower model
Because we start with the quantity demand function, it is important to know where it is derived from; see Richard and Martin ( 1980 )
- studied how two firms compete on product quality and price (both simultaneous and sequential)
21.4.1 Cournot - Simultaneous Games
\[ TC_i = c_i q_i \text{ where } i= 1,2 \\ P(Q) = a - bQ \\ Q = q_1 +q_2 \\ \pi_1 = \text{price} \times \text{quantity} - \text{cost} = [a - b(q_1 +q_2)]q_1 - c_1 q_1 \\ \pi_2 = \text{price} \times \text{quantity} - \text{cost} = [a - b(q_1 +q_2)]q_2 - c_2 q_2 \\ \]
From (21.1),
\[ q_1 = \frac{a-c_1}{2b} - \frac{q_2}{2} \]
which is called the reaction function, or best response function. From (21.2),
\[ q_2 = \frac{a-c_2}{2b} - \frac{q_1}{2} \]
Substituting firm 2’s reaction function into firm 1’s gives
\[ q_1 = \frac{a-c_1}{2b} - \frac{a-c_2}{4b} + \frac{q_1}{4} \]
\[ q_1^* = \frac{a-2c_1+ c_2}{3b} \\ q_2^* = \frac{a-2c_2 + c_1}{3b} \]
Total quantity is
\[ Q = q_1 + q_2 = \frac{2a-c_1 -c_2}{3b} \]
\[ P = a-bQ = \frac{a+c_1+c_2}{3} \]
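A quick numerical sanity check of the Cournot quantities and price, with hypothetical demand and cost parameters:

```python
# Cournot equilibrium check; a, b, c1, c2 are hypothetical.
a, b, c1, c2 = 100.0, 2.0, 10.0, 16.0

def profit(qi, qj, ci):
    """Profit of a firm producing qi against rival output qj at cost ci."""
    return (a - b * (qi + qj)) * qi - ci * qi

q1 = (a - 2 * c1 + c2) / (3 * b)   # firm 1's equilibrium quantity
q2 = (a - 2 * c2 + c1) / (3 * b)   # firm 2's equilibrium quantity
price = (a + c1 + c2) / 3

# Each quantity should satisfy its firm's first-order condition.
eps = 1e-6
d1 = (profit(q1 + eps, q2, c1) - profit(q1 - eps, q2, c1)) / (2 * eps)
d2 = (profit(q2 + eps, q1, c2) - profit(q2 - eps, q1, c2)) / (2 * eps)
print(q1, q2, price)
```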
21.4.2 Stackelberg - Sequential games
also known as leader-follower games
Stage 1: Firm 1 chooses quantity
Stage 2: Firm 2 chooses quantity
\[ c_2 = c_1 = c \]
Stage 2: reaction function of firm 2 given quantity firm 1
\[ R_2(q_1) = \frac{a-c}{2b} - \frac{q_1}{2} \]
\[ \pi_1 = [a-b(q_1 + \frac{a-c}{2b} - \frac{q_1}{2})]q_1 - cq_1 = [\frac{a+c}{2} - \frac{b q_1}{2}]q_1 - cq_1 \]
\[ \frac{d \pi_1}{d q_1} = 0 \]
\[ \frac{a+c}{2} - b q_1 -c =0 \]
The Stackelberg equilibrium is
\[ q_1^* = \frac{a-c}{2b} \\ q_2^* = \frac{a-c}{4b} \]
Under the same marginal cost (c), the Cournot quantities are
\[ q_1 = q_2 = \frac{a-c}{3b} \]
Leader produces more whereas the follower produces less compared to Cournot
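The leader/follower comparison can be verified directly from the closed forms (parameter values hypothetical):

```python
# Stackelberg vs. Cournot outputs under equal marginal cost c.
a, b, c = 100.0, 2.0, 10.0   # hypothetical demand and cost parameters

q_leader = (a - c) / (2 * b)     # Stackelberg leader
q_follower = (a - c) / (4 * b)   # Stackelberg follower
q_cournot = (a - c) / (3 * b)    # each firm under Cournot

# The follower's quantity is its best response R2 to the leader's.
best_response = (a - c) / (2 * b) - q_leader / 2
print(q_leader, q_follower, q_cournot)
```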
\[ \frac{d \pi_W^*}{d \beta} <0 \]
for the entire quantity range \(d < \bar{d}\)
Since \(\pi_W^*\) decreases as \(\beta\) increases, Firm W wants to reduce \(\beta\).
A low \(\beta\) means more independent products
Firm W wants a more differentiated product
On the other hand,
\[ \frac{d \pi_S^*}{d \beta} <0 \]
for a range of \(d < \bar{d}\)
Firm S profit increases as \(\beta\) decreases when d is small
Firm S profit increases as \(\beta\) increases when d is large
Firm S profit increases as products become closer substitutes (less differentiated) when d is large
21.5 More Market Structure
Dixit ( 1980 )
Based on the Bain-Sylos postulate: incumbents can build capacity such that entry is unprofitable
Investment in capacity is not a credible threat if incumbents can change their capacity.
Incumbent cannot deter entry
Tyagi ( 1999a )
More retailers means greater competition, which leads to lower prices for customers.
Effect of the \((n+1)\)st retailer’s entry
Competition effect (lower prices)
Effect on price (i.e., wholesale price), also known as input cost effect
Manufacturers want to increase the wholesale price because they now have higher bargaining power, which leads retailers to reduce quantity (because their choice of quantity depends on the wholesale price) and prices to rise.
Jerath, Sajeesh, and Zhang ( 2016 )
Organized Retailer enters a market
Inefficient unorganized retailers exit
Remaining unorganized retailers increase their prices. Thus, customers will be worse off.
Amaldoss and Jain ( 2005 )
considers the desire for uniqueness and conformity in pricing conspicuous goods
Two routes:
higher desire for uniqueness leads to higher prices and profits
higher desire for conformity leads to lower prices and profits
Under the analytical model and a lab test, consumers’ desire for uniqueness increases with price increases, not the other way around.
\[ U_A = V - p_A - \theta t_s - \lambda_s(n_A) \\ U_B = V - p_B - (1-\theta) t_s - \lambda_s(n_B) \]
\(\lambda_s\) = sensitivity towards externality.
\(\theta\) is the position in the Hotelling’s framework.
\(t_s\) is transportation cost.
\[ U_A = V - p_A - \theta t_s + \lambda_c(n_A) \\ U_B = V - p_B - (1-\theta) t_s + \lambda_c(n_B) \]
Rational Expectations Equilibrium
If your expectations are rational, then your expectation will be realized in equilibrium
Say the marginal snob is \(\theta_s\) and \(\beta\) = the proportion of snobs in the market. For snobs, the marginal consumer satisfies
\[ U_A^s = U_B^s \text{ at } \theta = \theta_s \]
For conformists,
\[ U_A^c = U_B^c \text{ at } \theta = \theta_c \]
Then, according to rational expectations equilibrium, we have
\[ \beta \theta_s +( 1- \beta) \theta_c = n_A \\ \beta (1-\theta_s) +( 1- \beta) (1-\theta_c) = n_B \]
\(\beta \theta_s\) = Number of snobs who buy from firm A
\((1-\beta)\theta_c\) = Number of conformists who buy from firm A
\(\beta(1-\theta_s)\) = Number of snobs who buy from firm B
\((1-\beta)(1-\theta_c)\) = Number of conformists who buy from firm B
which is the rational expectations equilibrium (whatever we expect happens in reality).
In other words, expectations are realized in equilibrium.
The number of people expected to buy the product is endogenous in the model, which will be the actual number of people who will buy it in the market.
We should not think of the expected value here in the same sense as expected value in empirical research ( \(E(.)\) ) because the expected value here is without any errors (specifically, measurement error).
- The utility function for snobs is such that when the price of one product increases, snobs want to buy that product more; when price increases, conformists reduce their purchases.
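The rational-expectations accounting above can be illustrated numerically; the values of \(\beta\), \(\theta_s\), and \(\theta_c\) below are hypothetical:

```python
# Numeric illustration of the rational-expectations conditions.
beta, theta_s, theta_c = 0.4, 0.3, 0.7   # share of snobs, marginal types

n_A = beta * theta_s + (1 - beta) * theta_c              # buyers of A
n_B = beta * (1 - theta_s) + (1 - beta) * (1 - theta_c)  # buyers of B

# Expectations are consistent: every consumer buys exactly one unit.
print(round(n_A, 2), round(n_B, 2))
```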
Balachander and Stock ( 2009 )
Adding a limited edition (LE) product has a positive direct effect on profits (via increased willingness of consumers to pay for such a product), but a negative strategic effect (via increased price competition between brands)
Under quality differentiation, high-quality brand gain from LE products
Under horizontal taste differentiation, negative strategic effects lead to lower equilibrium profits for both brands, but they still have to introduce LE products because of a prisoner’s dilemma
Sajeesh, Hada, and Raju ( 2020 )
two consumer segments:
functionality-oriented
exclusivity-oriented
Firms increase value enhancements when functionality-oriented consumers perceive greater product differentiation
Firms decrease value enhancements when exclusivity-oriented consumers perceive greater product differentiation
21.6 Market Response Model
Marketing Inputs:
- Selling effort
- advertising spending
- promotional spending
Marketing Outputs:
Phenomena a good model should capture:
- P1: Dynamic sales response involves a sales growth rate and a sales decay rate that are different
- P2: Steady-state response can be concave or S-shaped . Positive sales at 0 advertising.
- P3: Competitive effects
- P4: Advertising effectiveness dynamics due to changes in media, copy, and other factors.
- P5: Sales still increase or fall off even as advertising is held constant.
Saunders (1987) phenomena
- P1: Output = 0 when Input = 0
- P2: The relationship between input and output is linear
- P3: Returns decrease as the scale of input increases (i.e., additional unit of input gives less output)
- P4: Output cannot exceed some level (i.e., saturation)
- P5: Returns increase as scale of input increases (i.e., additional unit of input gives more output)
- P6: Returns first increase and then decrease as input increases (i.e., S-shaped return)
- P7: Input must exceed some level before it produces any output (i.e., threshold)
- P8: Beyond some level of input, output declines (i.e., supersaturation point)
Aggregate Response Models
Linear model: \(Y = a + bX\)
Through origin
can only handle constant returns to scale (i.e., can’t handle concave, convex, and S-shape)
The Power Series/Polynomial model: \(Y = a + bX + c X^2 + dX^3 + ...\)
- can’t handle saturation and threshold
Fractional root model/ Power model: \(Y = a+bX^c\) where c is prespecified
c = 1/2, called square root model
c = -1, called reciprocal model
c can be interpreted as elasticity if a = 0.
c = 1, linear
c <1, decreasing return
c>1, increasing returns
Semilog model: \(Y = a + b \ln X\)
- Good when constant percentage increase in marketing effort (X) result in constant absolute increase in sales (Y)
Exponential model: \(Y = ae^{bX}\) where X >0
b > 0, increasing returns and convex
b < 0, decreasing returns and saturation
Modified exponential model: \(Y = a(1-e^{-bX}) +c\)
Decreasing returns and saturation
upper bound = a + c
lower bound = c
typically used in selling effort
Logistic model: \(Y = \frac{a}{1+ e^{-(b+cX)}}+d\)
increasing return followed by decreasing return to scale, S-shape
saturation = a + d
good with saturation and s-shape
Gompertz model
ADBUDG model ( Little 1970 ) : \(Y = b + (a-b)\frac{X^c}{d + X^c}\)
c > 1, S-shaped
0 < c < 1, concave
saturation effect
upper bound at a
lower bound at b
typically used in advertising and selling effort.
can handle, through origin, concave, saturation, S-shape
Additive model for handling multiple Instruments: \(Y = af(X_1) + bg(X_2)\)
Multiplicative model for handling multiple instruments: \(Y = aX_1^b X_2^c\) where b and c are elasticities. More generally, \(Y = af(X_1)\times bg(X_2)\)
Multiplicative and additive model: \(Y = af(X_1) + bg(X_2) + cf(X_1) g(X_2)\)
Dynamic response model: \(Y_t = a_0 + a_1 X_t + \lambda Y_{t-1}\) where \(a_1\) = current effect, \(\lambda\) = carry-over effect
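Two of the response models above, sketched as functions; the parameter values used in the checks are hypothetical:

```python
# Sketches of the ADBUDG and modified exponential response models.
import math

def adbudg(x, a, b, c, d):
    """ADBUDG: Y = b + (a - b) * X^c / (d + X^c)."""
    return b + (a - b) * x**c / (d + x**c)

def modified_exponential(x, a, b, c):
    """Modified exponential: Y = a * (1 - e^(-bX)) + c."""
    return a * (1 - math.exp(-b * x)) + c

# ADBUDG runs from its lower bound b at X = 0 toward its upper bound a.
print(adbudg(0, a=100, b=10, c=2, d=50),
      round(adbudg(1e6, a=100, b=10, c=2, d=50), 2))
```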
Dynamic Effects
Carry-over effect: current marketing expenditure influences future sales
- Advertising adstock/ advertising carry-over is the same thing: lagged effect of advertising on sales
Delayed-response effect: delays between marketing investments and their impact
Customer holdout effects
Hysteresis effect
New trier and wear-out effect
Stocking effect
Simple Decay-effect model:
\[ A_t = T_t + \lambda T_{t-1}, t = 1,..., \]
- \(A_t\) = Adstock at time t
- \(T_t\) = value of advertising spending at time t
- \(\lambda\) = decay/ lag weight parameter
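The decay-effect model above is a one-liner in code; the weekly spend series is hypothetical:

```python
# Simple decay-effect (adstock) model: A_t = T_t + lambda * T_{t-1}.
def adstock(spend, lam):
    """First period has no carry-over; later periods add lam * last spend."""
    return [t + (lam * spend[i - 1] if i > 0 else 0.0)
            for i, t in enumerate(spend)]

weekly_spend = [100.0, 0.0, 50.0, 0.0]   # hypothetical spend series
print(adstock(weekly_spend, 0.4))
```

A burst of spending keeps contributing, decayed by \(\lambda\), in the following period.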
Response Models can be characterized by:
The number of marketing variables
whether they include competition or not
the nature of the relationship between the input variables
- Linear vs. S-shape
whether the situation is static vs. dynamic
whether the models reflect individual or aggregate response
the level of demand analyzed
- sales vs. market share
Market Share Model and Competitive Effects: \(Y = M \times V\) where
Y = Brand sales models
V = product class sales models
M = market-share models
Market share (attraction) models
\[ M_i = \frac{A_i}{A_1 + \dots + A_n} \]
where \(A_i\) attractiveness of brand i
Individual Response Model:
Multinomial logit model representing the probability of individual i choosing brand l is
\[ P_{il} = \frac{e^{A_{il}}}{\sum_j e^{A_{ij}}} \]
- \(A_{ij}\) = attractiveness of product j for individual i \(A_{ij} = \sum_k w_k b_{ijk}\)
- \(b_{ijk}\) = individual i’s evaluation of product j on product attribute k, where the summation is over all the products that individual i is considering to purchase
- \(w_k\) = importance weight associated with attribute k in forming product preferences.
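The attractiveness and choice-probability formulas translate directly into code; the attribute evaluations and importance weights below are hypothetical:

```python
# Multinomial logit choice probabilities: P_il = exp(A_il) / sum_j exp(A_ij).
import math

def attractiveness(evaluations, weights):
    """A_ij = sum_k w_k * b_ijk for one product j."""
    return sum(w * b for w, b in zip(weights, evaluations))

def choice_probabilities(attractions):
    exps = [math.exp(a) for a in attractions]
    total = sum(exps)
    return [e / total for e in exps]

weights = [0.6, 0.4]                  # importance of two attributes
products = [[4.0, 3.0], [2.0, 5.0]]   # b_ijk for two products
A = [attractiveness(b, weights) for b in products]
print([round(p, 3) for p in choice_probabilities(A)])
```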
21.7 Technology and Marketing Structure and Economics of Compatibility and Standards
21.8 Conjoint Analysis and Augmented Conjoint Analysis
More technical on 27.1
Jedidi and Zhang ( 2002 )
- Augmenting Conjoint Analysis to Estimate Consumer Reservation Price
Using conjoint analysis (coefficients) to derive consumers’ reservation prices for a product in a category.
Can be applied in the context of
product introduction
calculating customer switching effect
the cannibalization effect
the market expansion effect
\[ Utility(Rating) = \alpha + \beta_i Attribute_i \]
where \(\alpha\) is the intercept and \(\beta_i\) is the part-worth coefficient of attribute \(i\).
Netzer and Srinivasan ( 2011 )
Break conjoint analysis down to a sequence of constant-sum paired comparison questions.
Can also calculate the standard errors for each attribute importance.
21.9 Distribution Channels
McGuire and Staelin ( 1983 )
- Two manufacturing (wholesaling) firms selling differentiated, competing products: upstream firms (manufacturers) and downstream channel members (retailers)
3 types of structure:
- Both manufacturers with privately owned retailers (4 players: 2 manufacturers, 2 retailers)
- Both vertically integrated (2 manufacturers)
- Mix: one manufacturer with a private retailer, and one manufacturer with vertically integrated company store (3 players)
Each retail outlet has a downward sloping demand curve:
\[ q_i = f_i(p_1,p_2) \]
Under decentralized system (4 players), the Nash equilibrium demand curve is a function of wholesale prices:
\[ q_i^* = g_i (w_1, w_2) \]
More rules:
- Assume 2 retailers respond, but not the competing manufacturer
Assuming wholesale prices are unobserved by the market is not restrictive, and a Nash equilibrium in wholesale prices is still possible.
Under mixed structure , the two retailers compete, and non-integrated firm account for all responses in the market
Under integrated structure , this is a two-person game, where each chooses the retail price
Decision variables are prices (not quantities)
Under what conditions does a manufacturer want to have intermediaries?
Retail demand functions are assumed to be linear in prices
Demand functions are
\[ q_1' = \mu S [ 1 - \frac{\beta}{1 - \theta} p_1' + \frac{\beta \theta}{1- \theta}p_2'] \]
\[ q_2' = (1- \mu) S [ 1+ \frac{\beta \theta}{1- \theta} p_1' - \frac{\beta}{1- \theta} p_2'] \]
\(0 \le \mu , \theta \le 1; \beta, S >0\)
S is a scale factor, which equals industry demand ( \(q' \equiv q_1' + q_2'\) ) when prices are 0.
\(\mu\) = absolute difference in demand
\(\theta\) = substitutability of products (reflected by the cross elasticities), or the ratio of the rate of change of quantity with respect to the competitor’s price to the rate of change of quantity with respect to own price.
\(\theta = 0\) means independent demands (firms are monopolists)
\(\theta \to 1\) means maximally substitutable
3 more conditions:
\[ P = \{ p_1', p_2' | p_i' -m' - s' \ge 0, i = 1,2; (1-\theta) - \beta p_1' + \beta \theta p_2' \ge 0, (1- \theta) + \beta \theta p_1' - \beta p_2' \ge 0 \} \]
where \(m', s'\) are fixed manufacturing and selling costs per unit
To have a set of \(P\) , then
\[ \beta \le \frac{1}{m' + s'} \]
and to have industry demand no increase with increases in either price then
\[ \frac{\theta}{1 + \theta} \le \mu \le \frac{1}{1 + \theta} \]
After rescaling, the industry demand is
\[ q = 2 - (1- \theta) (p_1+ p_2) \]
When each manufacturer is a monopolist ( \(\theta = 0\) ), it’s twice as profitable for each to sell through its own channel
When demand is maximally affected by the actions of the competing retailers ( \(\theta \to 1\) ), it’s 3 times as profitable to have private dealers.
The breakeven point happens at \(\theta = .708\)
In conclusion, the optimal distribution system depends on the degree of substitutability at the retail level.
Jeuland and Shugan ( 2008 )
Quantity discounts are offered because of
Cost-based economies of scale
Demand-based - large purchasers tend to be more price sensitive
Strategic reason- single sourcing
Channel Coordination (this is where this paper contributes to the literature)
K. S. Moorthy ( 1987 )
- Price discrimination - second degree
Geylani, Dukes, and Srinivasan ( 2007 )
Jerath and Zhang ( 2010 )
21.10 Advertising Models
Three types of advertising:
- Informative Advertising: increase overall demand of your brand
- Persuasive Advertising: demand shifting to your brand
- Comparison: demand shifting away from your competitor (include complementary)
n customers distributed uniformly along the Hotelling’s line (more likely for mature market where demand doesn’t change).
\[ U_A = V - p_A - tx \\ U_B = V - p_B - t(1-x) \]
For Persuasive advertising (highlight the value of the product to the consumer):
\[ U_A = A_A V - p_A - tx \]
or increase value (i.e., reservation price).
\[ U_A = \sqrt{Ad_A} V - p_A - tx \]
or more and more customers want the product (i.e., more customers think firm A product closer to what they want)
\[ U_A = V - p_A - \frac{tx}{\sqrt{Ad_A}} \]
Comparison Advertising:
\[ U_A = V - p_A - t\sqrt{Ad_{B}}x \\ U_B = V - p_B - t \sqrt{Ad_A}(1 - x) \]
Find marginal consumers
\[ V - p_A - t\sqrt{Ad_{B}}x = V - p_B - t \sqrt{Ad_A}(1 - x) \]
\[ x = \frac{1}{t \sqrt{Ad_A} + t \sqrt{Ad_B}} (-p_A + p_B + t \sqrt{Ad_A}) \]
then profit functions are (make sure the profit function is concave)
\[ \pi_A = p_A x n - \phi Ad_A \\ \pi_B = p_B (1-x) n - \phi Ad_B \]
\(\phi\) = per unit cost of advertising (e.g., TV advertising vs. online advertising in this case, TV advertising per unit cost is likely to be higher than online advertising per unit cost)
t can also be thought of as return on advertising (traditional Hotelling’s model considers t as transportation cost)
Equilibrium prices conditional on advertising follow from the first-order conditions
\[ \frac{\partial \pi_A}{\partial p_A} = 0, \qquad \frac{\partial \pi_B}{\partial p_B} = 0 \]
Then optimal pricing solutions are
\[ p_A = \frac{2}{3} t \sqrt{Ad_A} + \frac{1}{3} t \sqrt{Ad_B} \\ p_B = \frac{1}{3} t \sqrt{Ad_A} + \frac{2}{3} t \sqrt{Ad_B} \]
Prices increase with the intensities of advertising (if you invest more in advertising, then you charge higher prices). Each firm’s price is directly proportional to its advertising, and you will charge a higher price when your competitor advertises as well.
Then, optimal advertising (given the optimal prices) solves
\[ \frac{\partial \pi_A}{\partial Ad_A} = 0, \qquad \frac{\partial \pi_B}{\partial Ad_B} = 0 \]
Hence, Competitive equilibrium is
\[ Ad_A = \frac{25 t^2 n^2}{576 \phi^2} \\ Ad_B = \frac{25t^2 n^2}{576 \phi^2} \\ p_A = p_B = \frac{5 t^2 n }{24 \phi} \]
As the cost of advertising ( \(\phi\) ) increases, firms spend less on advertising
The higher the return on advertising ( \(t\) ), the more firms benefit from advertising
With advertising in the market, the equilibrium prices are higher than if there were no advertising.
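The stated equilibrium can be checked numerically: substitute the stage-2 prices into firm A's profit and confirm the advertising first-order condition holds at \(Ad^*\). The values of \(t\), \(n\), and \(\phi\) below are hypothetical.

```python
# Numeric check of the competitive advertising equilibrium.
import math

t, n, phi = 2.0, 100.0, 1.5   # hypothetical parameters

def stage2_prices(adA, adB):
    """Equilibrium prices conditional on advertising levels."""
    sA, sB = math.sqrt(adA), math.sqrt(adB)
    return (2 * t * sA + t * sB) / 3, (t * sA + 2 * t * sB) / 3

def profit_A(adA, adB):
    pA, pB = stage2_prices(adA, adB)
    sA, sB = math.sqrt(adA), math.sqrt(adB)
    x = (pB - pA + t * sA) / (t * (sA + sB))   # firm A's marginal consumer
    return pA * x * n - phi * adA

ad_star = 25 * t**2 * n**2 / (576 * phi**2)   # competitive equilibrium
p_star = 5 * t**2 * n / (24 * phi)
ad_collusive = t**2 * n**2 / (16 * phi**2)    # collusion on advertising

# Firm A's advertising first-order condition should vanish at ad_star.
eps = 1e-4
foc = (profit_A(ad_star + eps, ad_star)
       - profit_A(ad_star - eps, ad_star)) / (2 * eps)
print(round(foc, 6), ad_collusive > ad_star)
```

The last comparison also mirrors the collusion result below: the collusive advertising level exceeds the competitive one.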
Since colluding on prices is forbidden, and colluding on advertising is hard to detect, firms could potentially collude on advertising (e.g., pulsing).
Assumption:
- Advertising decisions come before pricing decisions (reasonable because pricing is easier to change, while advertising investment is determined at the beginning of each period).
Collusive equilibrium (instead of using \(Ad_A, Ad_B\) , use \(Ad\) - set both advertising investment equal):
\[ Ad_A = Ad_B = \frac{t^2 n^2}{16 \phi^2} > \frac{25t^2 n^2}{576 \phi^2} \]
Hence, collusion can make the equilibrium advertising investment higher, which lets firms charge higher prices, and customers will be worse off. (more reference Aluf and Shy - check Modeling Seminar Folder - Advertising).
Combine both Comparison and Persuasive Advertising
\[ U_A = V - p_A - tx \frac{\sqrt{Ad_B}}{\sqrt{Ad_A}} \\ U_B = V - p_B - t(1-x) \frac{\sqrt{Ad_A}}{\sqrt{Ad_B}} \]
Informative Advertising
- Increase number of n customers (more likely for new products where the number of potential customers can change)
How do customers decide how much to consume? People consume more when they have more on hand, and less when they have less in stock ( Ailawadi and Neslin 1998 )
Villas-Boas ( 1993 )
- Under monopoly, firms would be better off to pulse (i.e., alternate advertising between a minimum level and efficient amount of advertising) because of the S-shaped of the advertising response function.
Model assumptions:
- The curve of the advertising response function is S-shaped
- Markov strategies: what firms do in this period depends on what might affect profits today or in the future (independent of the history)
Propositions:
- “If the loss from lowering the consideration level is larger than the efficient advertising expenditures, the unique Markov perfect equilibrium is for firms to advertise, whatever the consideration levels of both firms are.”
Nelson ( 1974 )
Qualities of a brand that can be determined before purchase are “search qualities”
Qualities that cannot be determined before purchase are “experience qualities”
Brand risks credibility if it advertises misleading information, and pays the costs of processing nonbuying customers
There is an inverse association between quality produced and utility-adjusted price
Firms that want to sell more advertise more
Firms advertise to their appropriate audience, i.e., “those whose tastes are best served by a given brand are those most likely to see an advertisement for that brand” (p. 734).
Advertising for experience qualities is indirect information while advertising for search qualities is direct information . (p. 734).
Goods are classified based on quality variation (i.e., whether the quality variation is based on search or experience).
3 types of goods
experience durable
experience nondurable
search goods
Experience goods are advertised more than search goods because advertisers increase sales via increasing the reputability of the sellers.
The marginal revenue of advertisement is greater for search goods than for experience goods (p. 745). Moreover, search goods will concentrate in newspapers and magazines while experience goods are seen on other media.
For experience goods, WOM is better source of info than advertising (p. 747).
Frequency of purchase moderates the differential effect of WOM and advertising (e.g., for low frequency purchases, we prefer WOM) (p. 747).
When laws are moderately enforced, deceptive advertising will happen (with too little law, people would not trust advertising; with too much enforcement, advertisers aren’t incentivized to deceive; a moderate amount can cause consumers to believe and advertisers to cheat) (p. 749). And usually experience goods have more deceptive advertising (because laws are concentrated here).
Iyer, Soberman, and Villas-Boas ( 2005 )
Firms advertise to their targeted market (those who have a strong preference for their products) rather than to competitor loyalists, which endogenously increases differentiation in the market and increases equilibrium profits
Targeted advertising is more valuable than targeted pricing. Targeted advertising leads to higher profits regardless of whether firms have targeted pricing, while targeted pricing increases competition for comparison shoppers (no improvement in equilibrium profits) (p. 462-463).
Comparison shoppers size:
\[ s = 1 - 2h \]
where \(h\) is the size of each firm’s loyal segment (those who prefer to buy the product from that firm). Hence, \(h\) also represents the differentiation between the two firms
See table 1 (p. 469).
\(A\) is the cost for advertising the entire market
\(r\) is the reservation price
Yuxin Chen et al. ( 2009 )
Combative vs. constructive advertising
Informative, complementary, and persuasive advertising
Informative: increase awareness, reduce search costs, increase product differentiation
Complementary (under comparison): increase utility by signaling social prestige
Persuasive: decrease price sensitivity (include combative)
Consumer response moderates the effect of combative advertising on price competition:
It can decrease price competition
It increases price competition when (1) consumer preferences are biased (consumers favor the products of firms that advertise) and (2) disfavored firms can’t advertise and can only respond with price, because an advertising war leads to a price war (each firm tries to increase its own profitability while the collective outcome is worse off)
21.11 Product Differentiation
Horizontal differentiation: different consumers prefer different products
Vertical differentiation: where you can say one good is “better” than the other.
Characteristics approach: products are the aggregate of their characteristics.
21.12 Product Quality, Durability, Warranties
Horizontal Differentiation
\[ U = V -p - t (\theta - a)^2 \]
Vertical Differentiation
\[ U_B = \theta s_B - p_B \\ U_A = \theta s_A - p_A \]
Assume that product B has a higher quality
\(\theta\) is the position of any consumer on the vertical differentiation line.
When \(U_A < 0\) then customers would not buy
Point of indifference along the vertical quality line
\[ \theta s_B - p_B = \theta s_A - p_A \\ \theta(s_B - s_A) = p_B - p_A \\ \bar{\theta} = \frac{p_B - p_A}{s_B - s_A} \]
If \(p_B = p_A\) , then for every \(\theta\) , \(s_B\) is preferred to \(s_A\)
\[ \pi_A = (p_A - c s_A^2) (Mktshare_A) \\ \pi_B = (p_B - cs_B^2) (Mktshare_B) \\ U_A = \theta s_A - p_A = 0 \\ \bar{\theta}_2 = \frac{p_A}{s_A} \]
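The market split implied by the two indifference points can be checked numerically. A minimal sketch with illustrative parameter values (the qualities, prices, and cost parameter below are my assumptions, not from the notes), taking \(\theta \sim U[0,1]\):

```python
# Vertical differentiation with theta ~ Uniform[0,1].
# Parameter values are illustrative, not from the notes.
s_A, s_B = 1.0, 2.0      # qualities, product B is higher quality
p_A, p_B = 0.5, 1.2      # prices
c = 0.1                  # cost parameter in c * s^2

theta_bar = (p_B - p_A) / (s_B - s_A)   # indifferent between A and B
theta_2 = p_A / s_A                     # indifferent between A and not buying

share_B = 1 - theta_bar          # high-theta consumers buy B
share_A = theta_bar - theta_2    # middle range buys A
profit_A = (p_A - c * s_A**2) * share_A
profit_B = (p_B - c * s_B**2) * share_B

print(round(theta_bar, 3), round(theta_2, 3))    # 0.7 0.5
print(round(share_A, 3), round(share_B, 3))      # 0.2 0.3
print(round(profit_A, 3), round(profit_B, 3))    # ~0.08 ~0.24
```

Consumers below \(\theta_2\) stay out of the market, which is why full coverage is a separate assumption (see Wauthy below).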
- Wauthy ( 1996 )
With consumer tastes \(\theta \in [a, b]\) , the market is covered when
\[ 2 \le \frac{b}{a} \le \frac{2s_2 + s_1}{s_2 - s_1} \]
In vertical differentiation model, you can’t have both \(\theta \in [0,1]\) and full market coverage.
Alternatively, you can also specify \(\theta \in [1,2]; [1,4]\)
\[ \theta \in \begin{cases} [1,4] & \frac{b}{a} = 4 \\ [1,2] & \frac{b}{a} = 2 \end{cases} \]
Under Asymmetric Information
Adverse Selection: Before contract: Information is uncertain
Moral Hazard: After contract, intentions are unknown to at least one of the parties.
Alternative setup of Akerlof’s (1970) paper
Used cars quality \(\theta \in [0,1]\)
Seller - car of type \(\theta\)
Buyer’s WTP = \(\frac{3}{2} \theta\)
Both parties can be better off if the transaction occurs because the buyer’s WTP for the car is greater than the seller’s valuation
- Assume quality is observable (both sellers and buyers do know the quality of the cars):
At any price schedule \(p(\theta) \in [\theta, 3/2 \theta]\) , both parties are better off
- Assume quality is unobservable to both sides (sellers and buyers only know that \(\theta\) is uniformly distributed):
\[ E(\theta) = \frac{1}{2} \]
then the sellers’ expected valuation is \(E(\theta) = 1/2\)
the buyer’s expected WTP = \(3/2 \times 1/2 = 3/4\)
then trade happens for any \(p \in [1/2,3/4]\)
- Asymmetric info (if only the sellers know the quality)
Seller knows \(\theta\)
Buyer knows \(\theta \sim [0,1]\)
From the seller’s perspective, he only sells at a price \(p \ge \theta\)
From the buyer’s perspective, the quality of cars on sale is then between \([0, p]\) , a smaller range than \([0,1]\)
Since \(E[\theta | \theta \le p] = 0.5 p\) ,
the buyer’s expected value is \(3/4 p\) but the price he has to pay is \(p\) , so the market unravels (no trade at any price)
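The unraveling can be iterated numerically: each time the buyer lowers the price he is willing to pay, the pool of offered cars shrinks further. A small sketch with quality uniform on \([0,1]\), as in the setup above:

```python
# Iterating the buyer's best response in the lemons market, theta ~ U[0,1]:
# at price p only cars with theta <= p are offered, so the buyer's WTP is
# 1.5 * E[theta | theta <= p] = 0.75 * p < p, and trade unravels.
p = 1.0
for _ in range(50):
    avg_quality = p / 2      # E[theta | theta <= p] under uniform quality
    p = 1.5 * avg_quality    # buyer's willingness to pay at this price
print(p < 1e-5)   # True: the price spirals to zero, no trade occurs
```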
21.12.1 Akerlof ( 1970 )
- This paper is on adverse selection
- The relationship between quality and uncertainty (in automobiles market)
- 2 x 2 (used vs. new, good vs. bad)
\(q\) = probability of getting a good car = probability of good cars produced
and \((1-q)\) is the probability of getting a lemon
Used-car sellers have knowledge about the probability of the car being bad, but buyers don’t, and buyers pay the same price for a lemon as for a good car (information asymmetry).
Gresham’s law for good and bad money is not transferable: bad money drives out good money because of an even exchange rate, whereas car buyers cannot tell whether a car is good or bad.
21.12.1.1 Asymmetrical Info
Demand for used automobiles depends on price and average quality:
\[ Q^d = D(p, \mu) \]
Supply for used cars depends on price
\[ S = S(p) \]
and average quality depends on price
\[ \mu = \mu(p) \]
In equilibrium
\[ S(p) = D(p, \mu(p)) \]
At no price will any trade happen
Assume 2 groups of traders:
First group: \(U_1 = M + \sum_{i=1}^n x_i\) where
\(M\) is the consumption of goods other than cars
\(x_i\) is the quality of the i-th car
n is the number of cars
Second group: \(U_2 = M + \sum_{i=1}^n \frac{3}{2} x_i\)
Group 1’s income is \(Y_1\)
Group 2’s income is \(Y_2\)
Demand for first group is
\[ \begin{cases} D_1 = \frac{Y_1}{p} & \frac{\mu}{p}>1 \\ D_1 = 0 & \frac{\mu}{p}<1 \end{cases} \]
Assume we have uniform distribution of automobile quality.
Supply offered by the first group is
\[ S_1 = \frac{pN}{2} ; p \le 2 \]
with average quality \(\mu = p/2\)
Demand for second group is
\[ \begin{cases} D_2 = \frac{Y_2}{p} & \frac{3 \mu}{2} >p \\ D_2 = 0 & \frac{3 \mu}{2} < p \end{cases} \]
and supply by second group is \(S_2 = 0\)
Thus, total demand \(D(p, \mu)\) is
\[ \begin{cases} D(p, \mu) = (Y_2 + Y_1) / p & \text{ if } p < \mu \\ D(p, \mu) = (Y_2)/p & \text{ if } \mu < p < 3\mu /2 \\ D(p, \mu) = 0 & \text{ if } p > 3 \mu/2 \end{cases} \]
With price \(p\) , average quality is \(p/2\) , and thus at no price will any trade happen
21.12.1.2 Symmetric Info
Car quality is uniformly distributed \(0 \le x \le 2\)
\[ \begin{cases} S(p) = N & p > 1 \\ S(p) = 0 & p < 1 \end{cases} \]
\[ \begin{cases} D(p) = (Y_2 + Y_1) / p & p < 1 \\ D(p) = Y_2/p & 1 < p < 3/2 \\ D(p) = 0 & p > 3/2 \end{cases} \]
\[ \begin{cases} p = 1 & \text{ if } Y_2< N \\ p = Y_2/N & \text{ if } 2Y_2/3 < N < Y_2 \\ p = 3/2 & \text{ if } N < 2 Y_2/3 \end{cases} \]
This model also applies to (1) insurance case for elders (over 65), (2) the employment of minorities, (3) the costs of dishonesty, (4) credit markets in underdeveloped countries
To counteract the effects of quality uncertainty, we can have
- Brand-name good
- Licensing practices
21.12.2 Spence ( 1973 )
Built on ( Akerlof 1970 ) model
Consider 2 employees:
Employee 1: produces 1 unit of production
Employee 2: produces 2 units of production
We have \(\alpha\) people of type 1, and \(1-\alpha\) people of type 2
Average productivity
\[ E(P) = \alpha + 2( 1- \alpha) = 2- \alpha \]
You can signal via education.
To model cost of education,
Let \(E\) be the cost of education for type 1
and \(E/2\) be the cost of education for type 2
If type 1 signals being a high-quality worker, they have to go through the education at cost E; mimicking is unprofitable when
\[ 2 - E < 1 \\ E >1 \]
If type 2 signals being a high-quality worker, they also go through the education at cost E/2; signaling is profitable when
\[ 2 - E/2 > 1 \\ E< 2 \]
If we keep \(1 < E < 2\) , then we have a separating equilibrium (the education signal is credible)
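The separating condition can be checked directly. A minimal sketch assuming competitive wages equal productivity (wage 2 with the education signal, wage 1 without), as in the payoffs above:

```python
# Spence signaling: which education costs E support a separating equilibrium?
# Type 1 (low) has productivity 1 and education cost E;
# type 2 (high) has productivity 2 and education cost E/2.
def separating(E):
    low_mimics = 2 - E > 1        # type 1 would gain by acquiring education
    high_signals = 2 - E / 2 > 1  # type 2 prefers educating to the low wage
    return (not low_mimics) and high_signals

print(separating(1.5))   # True: 1 < E < 2 separates the types
print(separating(0.5))   # False: low types would mimic
print(separating(2.5))   # False: even high types skip education
```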
21.12.3 S. Moorthy and Srinivasan ( 1995 )
Money-back guarantee signals quality
Transaction costs are those the seller or buyer has to pay when redeeming a money-back guarantee
A money-back guarantee does not cover the cost of returning the product (buyers have to incur that expense), but it guarantees a full refund of the purchase price.
If signals are costless, there is no difference between money-back guarantees and price.
But if signals are costly,
Under homogeneous buyers, low-quality sellers cannot mimic high-quality sellers’ strategy (i.e., money-back guarantee)
Under heterogeneous buyers,
when transaction costs are too high, the seller chooses either not to use money-back guarantee strategy or signal through price.
When transaction costs are moderate, there is a critical value of seller transaction costs where
below this point, the high-quality sellers’ profits increase with transaction costs
above this point, the high-quality sellers’ profits decrease with transaction costs
Uninformative advertising (“money-burning”), defined as expenditures that do not affect demand directly, is never needed
Moral hazard:
- Consumers might exhaust consumption within the money-back guarantee period
Model setup
21.13 Bargaining
Abhinay Muthoo - Bargaining Theory with Applications (1999) (check books folder)
John Nash - Nash Bargaining (1950)
Allocation of scarce resources
Determining the share before game-theoretic bargaining
Use a judge/arbitrator
Meet-in-the-middle
Forced Final: if an agreement is not reached, one party makes a take-it-or-leave-it offer
Art: Negotiation
Science: Bargaining
Game theory’s contribution: to the rules for the encounter
Area that is still fertile for research
21.13.1 Non-cooperative
Outline for non-cooperative bargaining
Take-it-or-leave-it Offers
Bargain over a cake
If you accept, we trade
If you reject, no one eats
Under perfect info, there is a simple rollback equilibrium
In general, bargaining takes on a “take-it-or-counteroffer” procedure
If time has value, both parties prefer to trade earlier rather than later
- E.g., labor negotiations - later agreements come at a price of strikes, work stoppages
Delays imply less surplus left to be shared among the parties
Two-stage bargaining
I offer a proportion, \(p\) , of the cake to you
If rejected, you may counteroffer (and \(\delta\) of the cake melts)
In the first period: 1-p, p
In second period: \((1-\delta) (1-p),(1-\delta)p\)
Since period 2 is the final period, this is just like a take-it-or-leave-it offer
- You will offer me the smallest piece that I will accept, leaving you with all of \(1-\delta\) and leaving me with almost 0
Rollback: then in the first period: I am better off by giving player B more than what he would have in period 2 (i.e., give you at least as much surplus)
Your surplus if you accept in the first period is \(p\)
Accept if your surplus in the first period is at least your surplus in the second period: \(p \ge 1 - \delta\)
If there is a second stage, you get \(1 - \delta\) and I get almost 0
You will reject any offer in the first stage that does not offer you at least \(1 - \delta\)
In the first period, I offer you \(1 - \delta\)
Note: the more patient you are (the slower the cake melts) the more you receive now
Whether first or second mover has the advantage depends on \(\delta\) .
If \(\delta\) is high (melting fast), then first mover is better.
If \(\delta\) is low (melting slower), then second mover is better.
Either way - if both players think, agreement would be reached in the first period
In any bargaining setting, strike a deal as early as possible.
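The rollback logic above can be sketched in a few lines (cake of size 1, a fraction \(\delta\) melts between periods; the responder proposes in the last stage):

```python
# Two-stage cake bargaining: a fraction delta of the cake melts if the first
# offer is rejected. Rollback: the responder can secure (1 - delta) by
# waiting, so the first offer gives exactly that and is accepted immediately.
def first_period_split(delta):
    responder_share = 1 - delta           # responder's rollback payoff
    proposer_share = 1 - responder_share  # = delta
    return proposer_share, responder_share

for delta in (0.8, 0.1):
    proposer, responder = first_period_split(delta)
    print(round(proposer, 2), round(responder, 2))
# fast melting (delta = 0.8): the first mover keeps 0.8
# slow melting (delta = 0.1): the responder keeps 0.9
```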
Why doesn’t this happen in reality?
reputation building
lack of information
Why bargaining doesn’t happen quickly? Information asymmetry
- Likelihood of success (e.g., uncertainty in civil lawsuits)
Rules of the bargaining game uniquely determine the bargain outcome
which rules are better for you depends on patience, info
What is the smallest acceptable piece? Trust your intuition
delays are always less profitable: Someone must be wrong
Non-monetary Utility
each side has a reservation price
- Like in a civil suit: expectation of winning
The reservation price is unknown
probabilistically determine best offer
but probability implies a chance that no bargain will take place
Company negotiates with a union
Two types of bargaining:
Union makes a take-it-or-leave-it offer
Union makes an offer today. If it’s rejected, the Union strikes, then makes another offer
- A strike costs the company 10% of annual profits.
Probability that the company is “highly profitable” (i.e., can pay $200k) is \(p\)
If offer wage of $150k
Definitely accepted
Expected wage = $150K
If offer wage of $200K
Accepted with probability \(p\)
Expected wage = $200k(p)
\(p = .9\) (90% chance company is highly profitable)
best offer: ask for $200K wage
Expected value of offer: \(.9 *200= 180\)
\(p = .1\) (10% chance company is highly profitable)
Expected value of offer: \(.1 *200= 20\)
If ask for $150k, get $150k
not worth the risk to ask for more
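The expected-wage comparison behind this choice is simple to sketch (numbers from the notes; wages in $k; the break-even probability \(200p = 150\), i.e., \(p = 0.75\), follows directly):

```python
# Union's take-it-or-leave-it offer when the firm's profitability is unknown.
# With probability p the firm is highly profitable (can pay $200k);
# otherwise it can only pay $150k.
def best_offer(p):
    ev_safe = 150          # $150k is accepted by both firm types
    ev_risky = 200 * p     # $200k is accepted only by the high type
    return (200, ev_risky) if ev_risky > ev_safe else (150, ev_safe)

print(best_offer(0.9))   # (200, 180.0): gamble on the high type
print(best_offer(0.1))   # (150, 150): take the sure $150k
```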
If first-period offer is rejected: A strike costs the company 10% of annual profits
Strike costs a high-value company more than a low value company
Use this fact to screen
What if the union asks for $170k in the first period?
Low-profit firm ($150k) rejects, as it can’t afford to pay
High-profit firm must guess what will happen if it rejects
Best case: union strikes and then asks for only $140k (the union is willing to give up some, but not all, of the strike cost)
In the meantime: the strike costs the company $20k
High-profit firm accepts
Separating equilibrium
only high-profit firms accept the first period
If offer is rejected, Union knows that it is facing a low-profit firm
Ask for $140k
What’s happening
Union lowers price after a rejection
Looks like giving in
looks like bargaining
Actually, the union is screening its bargaining partner
Different “types” of firms have different values for the future
Use these different values to screen
Time is used as a screening device
21.13.2 Cooperative
two people dividing cash
If they do not agree, they each get nothing
They can’t divide up more than the whole thing
21.13.3 Nash ( 1950 )
Bargaining, bilateral monopoly (nonzero-sum two-person game).
No action taken by one individual (without the consent of the other) can affect the other’s gain.
Rational individuals (maximize gain)
Full knowledge: tastes and preferences are known
Transitive Ordering: \(A>C\) when \(A>B\) , \(B>C\) . Also related to substitutability if two events are of equal probability
Continuity assumption
Properties:
\(u(A) > u(B)\) means A is more desirable than B where \(u\) is a utility function
Linearity property: If \(0 \le p \le 1\) , then \(u(pA + (1-p)B) = pu(A) + (1-p)u(B)\)
- For two persons: \(p[A,B] + (1-p)[C,D] = [pA + (1-p)C, pB + (1-p)D]\)
Anticipation = \(p A + (1-p) B\) where
\(p\) is the prob of getting A
A and B are two events.
\(u_1, u_2\) are utility function
\(c(s)\) is the solution point in a set S (compact, convex, with 0)
If \(\alpha \in S\) s.t. there is \(\beta \in S\) where \(u_1(\beta) > u_1(\alpha)\) and \(u_2(\beta) > u_2(\alpha)\) , then \(\alpha \neq c(S)\)
- People try to maximize utility
If \(S \subset T\) and \(c(T) \in S\) , then \(c(T) = c(S)\)
If S is symmetric with respect to the line \(u_1 = u_2\) , then \(c(S)\) is on the line \(u_1 = u_2\)
- Equality of bargaining
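These axioms pin down Nash’s solution as the maximizer of the utility product \(u_1 u_2\) (with disagreement point \((0,0)\)). A grid-search sketch for splitting a unit cake, illustrating the symmetry property:

```python
# Nash (1950) bargaining solution as the maximizer of the product u1 * u2
# over the feasible set of splits of a unit cake, disagreement point (0, 0).
best = max(((s, 1 - s) for s in (i / 1000 for i in range(1001))),
           key=lambda u: u[0] * u[1])
print(best)   # (0.5, 0.5): symmetry puts the solution on the line u1 = u2
```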
21.13.4 Iyer and Villas-Boas ( 2003 )
- Presence of a powerful retailer (e.g., Walmart) might be beneficial to all channel members.
21.13.5 Desai and Purohit ( 2004 )
2 customers segment: hagglers, nonhagglers.
When the proportion of nonhagglers is sufficiently high, a haggling policy can be more profitable than a fixed-price policy
21.14 Pricing and Search Theory
21.14.1 Varian ( 1980 )
Building on Stigler’s seminal paper, models of equilibrium price dispersion were born ( Stiglitz and Salop 1982 ; Salop and Stiglitz 1977 )
Spatial price dispersion: assume uninformed and informed consumers
- Since consumers can learn from experience, the result does not hold over time
Temporal price dispersion: sales
This paper is based on
Stiglitz: assume informed (choose lowest price store) and uninformed consumers (choose stores at random)
Shilony ( 1977 ) : randomized pricing strategies
\(I >0\) is the number of informed consumers
\(M >0\) is the number of uninformed consumers
\(n\) is the number of stores
\(U = M/n\) is the number of uninformed consumers per store
Each store has a density function \(f(p)\) indicating the prob it charges price \(p\)
Stores choose a price based on \(f(p)\)
If a store succeeds (has the lowest price among the n prices), it gets \(I + U\) customers
If it fails, it has only \(U\) customers
Stores charging the same lowest price share the informed customers equally
\(c(q)\) is the cost curve
\(p^* = \frac{c(I+U)}{I+U}\) is the average cost at the maximum number of customers a store can get
Prop 1: \(f(p) = 0\) for \(p >r\) or \(p < p^*\)
Prop 2: No symmetric equilibrium when stores charge the same price
Prop 3: No point masses in the equilibrium pricing strategies
Prop 4: If \(f(p) >0\) , then
\[ \pi_s(p) (1-F(p))^{n-1} + \pi_f (p) [1-(1-F(p))^{n-1}] =0 \]
Prop 5: \(\frac{\pi_f (p)}{\pi_f(p) - \pi_s (p)}\) is strictly decreasing in \(p\)
Prop 6: \(F(p^* + \epsilon) >0\) for any \(\epsilon> 0\)
Prop 7: \(F(r- \epsilon) <1\) for any \(\epsilon > 0\)
Prop 8: No gap \((p_1, p_2)\) where \(f(p) \equiv 0\)
Decision to be informed can be endogenous, and depends on the “full price” (search costs + fixed cost)
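A numeric sketch of equilibrium price dispersion in the spirit of this model. Simplifying assumption (mine, not from the notes): zero marginal cost, so a store charging \(p\) earns \(pU\) from its uninformed customers plus \(pI\) when it posts the lowest of the \(n\) prices; the equal-profit condition \(p[U + I(1-F(p))^{n-1}] = rU\) then pins down the price CDF:

```python
# Equilibrium price dispersion sketch (zero-marginal-cost simplification).
# Every price in the support earns the same profit as charging r to the
# uninformed customers only.
I, U, n, r = 100.0, 50.0, 2, 1.0   # illustrative parameter values

p_star = r * U / (U + I)           # lowest price in the equilibrium support

def F(p):
    """Equilibrium CDF of posted prices on [p_star, r]."""
    return 1 - (U * (r - p) / (I * p)) ** (1 / (n - 1))

for p in [p_star, 0.5, 0.8, r]:
    expected_profit = p * (U + I * (1 - F(p)) ** (n - 1))
    print(round(expected_profit, 8))   # always r * U = 50.0
```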
21.14.2 Lazear ( 1984 )
Retail pricing and clearance sales
Goods’ characteristics affect pricing behaviors
Market’s thinness can affect price volatility
Relationship between uniqueness of a goods and its price
Price reduction policies as a function of shelf time
Single period model
\(V\) = the price of the only buyer who is willing to purchase the product
\(f(V)\) is the density of V (firm’s prior)
\(F(V)\) is its distribution function
Firms try to
\[ \underset{R}{\operatorname{max}} R[1 - F(R)] \]
where \(R\) is the price
\(1 - F(R)\) is the prob that \(V > R\)
Assume \(V\) is uniform \([0,1]\) then
\(F(R) = R\) so that the optimum is \(R = 0.5\) with expected profits of \(0.25\)
Two-period model
Failures in period 1 implies \(V<R_1\) .
Hence, based on Bayes’ theorem, the posterior distribution in period 2 is \([0, R_1]\)
\(F_2(V) = V/R_1\) (posterior distribution)
\(R_1\) affect (1) sales in period 1, (2) info in period 2
Then, firms want to choose \(R_1, R_2\) . Firms try to
\[ \underset{R_1, R_2}{\operatorname{max}} R_1[1 - F(R_1)] + R_2 [1-F_2(R_2)]F(R_1) \]
Then, in period 2, the firms try to
\[ \underset{R_2}{\operatorname{max}} R_2[1 - F_2(R_2)] \]
Based on Bayes’ Theorem
\[ F_2(R_2) = \begin{cases} F(R_2)/ F(R_1) & \text{for } R_2 < R_1 \\ 1 & \text{otherwise} \end{cases} \]
Due to the FOC, the second-period price is always lower than the first-period price
Expected profits are higher than that of one-period due to higher expected probability of a sale in the two-period problem.
But this model assumes
no brand recognition
no contagion or network effects
In thin markets with heterogeneous consumers
we have \(N\) customers examine the good, each with prior probability \(P\) of being a shopper and \(1-P\) of being a buyer who is willing to buy at \(V\)
There are 3 types of people
- customers = all those who inspect the good
- buyers = those whose value equals \(V\)
- shoppers = those whose value equals \(0\)
An individual does not know whether he or she is a buyer or a shopper until becoming a customer (i.e., inspecting the goods)
Then, firms try to
\[ \begin{aligned} \underset{R_1, R_2}{\operatorname{max}} & R_1(\text{prob sale in 1}) + R_2 (\text{posterior prob sale in 2})\times (\text{prob no sale in 1}) \\ = & R_1 (1 - F(R_1))(1-P^N) + R_2 \{ (1-F_2(R_2))(1- P^N) \} \times \{ 1 - [(1 - F(R_1))(1-P^N)] \} \end{aligned} \]
Based on Bayes’ Theorem, the density for period 2 is
\[ f_2(V) = \begin{cases} \frac{1}{R_1 (1- P^N) + P^N} \text{ for } V \le R_1 \\ \frac{P^N}{R_1 (1- P^N) + P^N} \text{ for } V > R_1 \end{cases} \]
Conclusion:
As \(P^N \to 1\) (almost all customers are shoppers), there is not much info to be gained. Hence, 2-period is no different than 2 independent one-period problems. Hence, the solution in this case is identical to that of one-period problem.
When \(P^N\) is small, prices start higher and fall more rapidly as time unsold increases
When \(P^N \to 1\) , prices tend to be constant.
\(P^N\) can also be thought of as search cost and info.
Observable Time patterns of price and quantity
Pricing is a function of
The number of customers \(N\)
The proportion of shoppers \(P\)
The firm’s beliefs about the market (parameterized through the prior on \(V\) )
In markets where prices fall rapidly as time passes, the probability that the good will go unsold is low.
Goods with a high initial price are likely to sell because a high initial price reflects a low \(P^N\) (few shoppers)
Heterogeneity among goods
A more dispersed prior leads to a higher expected price for a given mean. And because of longer time on the shelf, expected revenues for such a product can be lower.
Fashion, Obsolescence, and discounting the future
The more obsolete the good becomes, the more anxious the seller is to sell
Goods that are “classic” have a higher initial price, and their price is less sensitive to inventory (compared to fashion goods)
Discounting is irrelevant to the pricing condition due to constant discount rate (not like increasing obsolescence rate)
For non-unique good, the solution is identical to that of the one-period problem.
Simple model
Customer’s Valuation \(\in [0,1]\)
Firm’s decision is to choose a price \(p\) (labeled \(R_1\) )
One-period model
Buy if \(V >R_1\) prob = \(1-R_1\)
Not buy if \(V<R_1\) probability = \(R_1\)
\(\underset{R_1}{\operatorname{max}} [R_1][1-R_1]\) hence, FOC \(R_1 = 1/2\) , then total \(\pi = 1/2-(1/2)^2 = 1/4\)
Two prices \(R_1, R_2\)
\(R_1 \in [0,1]\)
\(R_2 \in [0, R_1]\)
\[ \underset{R_1, R_2}{\operatorname{max}} R_1(1-R_1) + R_2 \left(1 - \frac{R_2}{R_1}\right) R_1 \]
In the second period, the firm solves
\[ \underset{R_2}{\operatorname{max}} [R_2][\frac{R_1 - R_2}{R_1}] \]
FOC \(R_2 = R_1/2\)
Substituting back,
\[ \underset{R_1}{\operatorname{max}} R_1(1-R_1) + \frac{R_1}{2}\left(1 - \frac{1}{2}\right) R_1 \]
FOC: \(R_1 = 2/3\) then \(R_2 = 1/3\)
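A crude grid search confirms the rollback solution \(R_1 = 2/3\), \(R_2 = 1/3\) (uniform valuations on \([0,1]\), as above):

```python
# Numeric check of the two-period clearance-pricing solution, V ~ U[0,1].
def total_profit(R1):
    R2 = R1 / 2                             # second-period FOC
    return R1 * (1 - R1) + R2 * (R1 - R2)   # P(sale in period 2) = R1 - R2

# grid search in place of calculus
best_R1 = max((i / 10000 for i in range(1, 10000)), key=total_profit)
print(round(best_R1, 3))                 # 0.667
print(round(total_profit(best_R1), 4))   # 0.3333, above the one-period 0.25
```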
\(N\) customers
Each customer could be a
shopper with probability \(p\) , with value \(0\)
buyer with probability \(1-p\) , with \(V > \text{price}\)
Modify equation 1 to incorporate types of consumers
\[ R_1(1 - R_1)(1- p^N) + R_2 \frac{R_1 - R_2}{R_1} (1-p^N) [ 1 - (1-R_1)(1-p^N)] \]
Reduce costs by
Economy of scale \(c(\text{number of units})\)
Economy of scope \(c(\text{number of types of products})\) (typically, due to the transfer of knowledge)
Experience effect \(c(\text{time})\) (is a superset of economy of scale)
Lal and Sarvary ( 1999 )
Conventional idea: lower search cost (e.g., Internet) will increase price competition.
Demand side: info about product attributes:
digital attributes (those that can be communicated via the Internet)
nondigital attributes (those that can’t)
Supply side: firms have both traditional and Internet stores.
Monopoly pricing can happen when
high proportion of Internet users
Not overwhelming nondigital attributes
Favor familiar brands
destination shopping
Monopoly pricing can lead to higher prices and discourage consumers from searching
Stores serve as customer acquisition tools, while the Internet maintains loyal customers.
Kuksov ( 2004 )
For products that cannot be changed easily (design), lower search costs lead to higher price competition
For those that can be easily changed, lower search costs lead to higher product differentiation, which in turn decreases price competition, lowers social welfare, and raises industry profits.
( Salop and Stiglitz 1977 )
21.15 Pricing and Promotions
Extensively studied
Issue of Everyday Low price vs Hi/Lo pricing
Short-term price discounts
offering trade-deals
consumer promotions
shelf-price discounts (used by everybody)
cents-off coupons (some consumers whose value of time is relatively low)
Loyalty plays a role similar to being uninformed in analytical models:
Uninformed = loyal
Informed = non-loyal
30 years back, few companies used price promotions
Effects of Short-term price discounts
Decomposition of measured effects ( Gupta 1988 ) :
Brand switching (84%)
Purchase acceleration (14%)
Quantity purchased (2%)
elasticity of ST price changes is an order of magnitude higher
Other effects:
general trial (traditional reason)
encourages consumers to carry inventory and hence increases consumption
higher sales of complementary products
small effect on store switching
Asymmetric effect (based on brand strength): bigger brands always benefit more
- except for store brands
Negative Effects
Expectations of future promotions
Lowering of Reference Price
Increase in price sensitivity
Post-promotional dip
Trade Discounts
Short-term discounts offered to the trade:
Incentivize the trade to push our product
gets attention of sales force
Disadvantages
might not be passed onto the consumer
trade forward buys (hurts production plans)
hard to forecast demand
trade expects discounts in the future (cost of doing business)
Scanbacks can help increase retail pass-through (i.e., encourage retailers to offer consumer discounts)
Determinants of pass through
Higher when
Consumer elasticity is higher
promoting brand is stronger
shape of the demand function
lower frequency of promotions
(Online) Shelf-price discounts ( Raju, Srinivasan, and Lal 1990 )
- If you are a stronger brand, you can discount infrequently because the weaker brands cannot predict when the stronger brand will promote; hence, they have to promote more frequently
A little over 1% of coupons get redeemed each year
The ability of cents-off coupons to price discriminate has been reduced considerably because of their very wide availability
Sales increases required to make free-standing-insert coupons profitable are not attainable
Coupon Design
Expiration dates
- Long vs. short expiration dates: stronger brands should have shorter windows (because many more of their loyal customers will use the coupons).
Method of distribution
In-store (is better)
Through the package
Targeted promotions
Package coupons: acquisition and retention trade-offs
3 types of package coupons:
Peel-off (many more customers use the coupons): lowest profits for the firm
in-packs (fewer customers will buy the product in the first period)
on-packs (customers buy the product and redeem in the next period): best approach
Trade and consumer promotion are necessary
Consumer promotion (avoid shelf-price discounts/newspaper coupons; use package coupons)
strong interaction between advertising and promotion (area for more research)
Degrees of price discrimination
- First-degree: based on willingness to pay
- Second-degree: based on quantity
- Third-degree: based on memberships
- Fourth-degree: based on cost to serve
21.15.1 Narasimhan ( 1988 )
Marketing tools to promote products:
Advertising
Trade promotions
Consumer promotions
Pricing promotions:
Price deals
Cents-off labels
Brand loyalty can help explain the variation in prices (in competitive markets)
Firms try to make optimal trade-off between
attracting brand switchers
loss of profits from loyal customers.
Deviation from the maximum price = promotion
Firms have identical products and cost structures (constant or declining). Non-cooperative game.
Same reservation price
Three consumer segments:
Loyal to firm 1 with size \(\alpha_1 (0<\alpha_1<1)\)
Loyal to firm 2 with size \(\alpha_2(0 < \alpha_2 < \alpha_1)\) (asymmetric firm)
Switchers with size \(\beta (0 < \beta = 1 - \alpha_1 - \alpha_2)\)
Costless price change, no intertemporal effects (in quantity or loyalty)
To model \(\beta\) either
- \(d \in (-b, a)\) is switch cost (individual parameter)
\[ \begin{cases} \text{buy brand 1} & \text{if } P_1 \le P_2 - d \\ \text{buy brand 2} & \text{if } P_1 > P_2 - d \end{cases} \]
- Identical switchers (same d)
- \(d = 0\) (extremely price sensitive)
For case 1, there is a pure strategy, while cases 2 and 3 have no pure strategies, only mixed strategies
Details for case 3:
Profit function
\[ \Pi_i (P_i, P_j) = \alpha_i P_i + \delta_{ij} \beta P_i \]
\[ \delta_{ij} = \begin{cases} 1 & \text{ if } P_i < P_j \\ 1/2 & \text{ if } P_i = P_j \\ 0 & \text{ if } P_i > P_j \end{cases} \]
and \(i = 1,2, i \neq j\)
Prop 1: no pure Nash equilibrium
Mixed Strategy profit function
\[ \Pi_i (P_i) = \alpha_i P_i + Prob(P_j > P_i) \beta P_i + Prob (P_j = P_i) \frac{\beta}{2} P_i \]
where \(P_i \in S_i^*, i \neq j; i , j = 1, 2\)
Then the expected profit functions of the two-player game is
\[ \underset{F_i}{\operatorname{max}} E(\Pi_i) = \int \Pi_i (P_i) d F_i (P_i) \]
\(P_i \in S_i^*\)
\[ \Pi_i \ge \alpha_i r \\ \int dF_i (P_i) = 1 \\ P_i \in S_i^* \]
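The mixed-strategy logic can be illustrated numerically. Sketch under the assumption (standard for this kind of model, but not written out in the notes) that firm 1's equilibrium profit equals \(\alpha_1 r\), since it can always charge \(r\) to its loyals; firm 2's price CDF \(F_2\) must then make firm 1 indifferent across its support:

```python
# Mixed-strategy sketch for case 3 (d = 0). Firm 1's indifference condition:
#   alpha_1 * p + beta * p * (1 - F2(p)) = alpha_1 * r  for all p in support.
alpha_1, alpha_2, r = 0.4, 0.2, 1.0    # illustrative sizes, alpha_2 < alpha_1
beta = 1 - alpha_1 - alpha_2           # switcher segment

p_low = alpha_1 * r / (alpha_1 + beta)  # bottom of the price support

def F2(p):
    """CDF of firm 2's price that keeps firm 1 indifferent."""
    return 1 - alpha_1 * (r - p) / (beta * p)

for p in [p_low, 0.7, 0.9, r]:
    profit_1 = alpha_1 * p + beta * p * (1 - F2(p))
    print(round(profit_1, 8))   # constant at alpha_1 * r = 0.4
```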
21.15.2 Balachander, Ghosh, and Stock ( 2010 )
- Bundle discounts can be more profitable than price promotions (in a competitive market) due to increased loyalty (which will reduce promotional competition intensity).
21.15.3 Goić, Jerath, and Srinivasan ( 2011 )
Cross-market discounts: purchases in a source market get you a price discount redeemable in a target market.
- Increase prices and sales in the source market.
21.16 Market Entry Decisions and Diffusion
Peter N. Golder and Tellis ( 1993 )
Peter N. Golder and Tellis ( 2004 )
Boulding and Christen ( 2003 )
Van den Bulte and Joshi ( 2007 )
21.17 Principal-agent Models and Salesforce Compensation
21.17.1 Gerstner and Hess ( 1987 )
21.17.2 Basu et al. ( 1985 )
21.17.3 Raju and Srinivasan ( 1996 )
Compared to ( Basu et al. 1985 ) , the basic quota plan is superior in terms of implementation
Different from ( Basu et al. 1985 ) , the basic quota plan has
- Shape-induced nonoptimality: not a general curvilinear form
- Heterogeneity-induced nonoptimality: common rate across salesforce
However, only 1% of cases in the simulation show nonoptimality. Hence, minimal loss in optimality
The basic quota plan is also robust against changes in
salesperson switching territory
territorial changes (e.g., business condition)
Heterogeneity stems from
Salesperson: effectiveness, risk level, disutility for effort, and alternative opportunity
Territory: Sales potential and volatility
Adjusting quotas can accommodate the heterogeneity
To assess nonoptimality, following Basu and Kalyanaram ( 1990 )
Moral hazard: cannot assess salesperson’s true effort.
The salesperson reacts to the compensation scheme by deciding on an effort level that maximizes his overall utility, i.e., the expected utility from the (stochastic) compensation minus the effort disutility.
Firm wants to maximize its profit
compensation must be at least the salesperson’s alternative opportunity (participation constraint)
Dollar sales \(x_i \sim Gamma\) (because sales are non-negative and the standard deviation gets proportionately larger as the mean increases), with density \(f_i(x_i|t_i)\)
Expected sales per period
\[ E[x_i |t_i] = h_i + k_i t_i , (h_i > 0, k_i >0) \]
- \(h_i\) = base sales level
- \(k_i\) = effectiveness of effort
and \(1/\sqrt{c}\) = uncertainty in sales (coefficient of variation) = standard deviation / mean where \(c \to \infty\) means perfect certainty
salesperson’s overall utility
\[ U_i[s_i(x_i)] - V_i(t_i) = \frac{1}{\delta_i}[s_i (x_i)]^{\delta_i} - d_i t_i^{\gamma_i} \] where
- \(0 < \delta_i <1\) (greater \(\delta\) means less risk-averse salesperson)
- \(\gamma_i >1\) (greater \(\gamma\) means disutility rises faster with effort)
- \(V_i(t_i) = d_i t_i^{\gamma_i}\) is the increasing disutility function (convex)
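The gamma sales assumption can be simulated directly: a Gamma with shape \(c\) and scale \((h + kt)/c\) has mean \(h + kt\) and coefficient of variation \(1/\sqrt{c}\), matching the moments above. Parameter values below are illustrative:

```python
# Simulation sketch of the sales assumption: x ~ Gamma with mean h + k*t
# and coefficient of variation 1/sqrt(c).
import random

h, k, t, c = 100.0, 50.0, 2.0, 16.0
mean = h + k * t                       # expected sales = 200

random.seed(0)
draws = [random.gammavariate(c, mean / c) for _ in range(200_000)]

sample_mean = sum(draws) / len(draws)
sample_var = sum((x - sample_mean) ** 2 for x in draws) / len(draws)
cv = sample_var ** 0.5 / sample_mean   # coefficient of variation
print(round(sample_mean))   # ~200
print(round(cv, 2))         # ~0.25 = 1/sqrt(16)
```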
21.17.4 Lal and Staelin ( 1986 )
A menu of compensation plans (the salesperson can select one, depending on their own perspective)
Proposes conditions when it’s optimal to offer a menu
Under ( Basu et al. 1985 ) , they assume
Salespeople have identical risk characteristics
identical reservation utility
identical information about the environment
When this paper relaxes these assumptions, a menu of contracts makes sense
If you cannot distinguish between (or have no selection mechanism for) high and low performers, a menu is recommended; but if you can, you only need one contract, as in ( Basu et al. 1985 )
21.17.5 Simester and Zhang ( 2010 )
21.18 Branding
Wernerfelt ( 1988 )
- Umbrella branding
W. Chu and Chu ( 1994 )
retailer reputation
21.19 Marketing Resource Allocation Models
This section is based on ( Mantrala, Sinha, and Zoltners 1992 )
21.19.1 Case study 1
Concave sales response function
- Optimal vs. proportional at different investment levels
- Profit maximization perspective of aggregate function
\[ s_i = k_i (1- e^{-b_i x_i}) \]
- \(s_i\) = current-period sales response (dollars / period)
- \(x_i\) = amount of resource allocated to submarket i
- \(b_i\) = rate at which sales approach saturation
- \(k_i\) = sales potential
Allocation functions
Fixed proportion
\(R_i\) = Investment level (dollars/period)
\(w_i\) = fixed proportion or weights
\[ \hat{x}_i = w_i R; \quad \sum_{i=1}^2 w_i = 1; \quad 0 < w_i < 1 \]
Informed allocator
- optimal allocations via marginal analysis (maximize profits)
\[ \max C = m \sum_{i = 1}^2 k_i (1- e^{-b_i x_i}) \quad \text{s.t. } x_1 + x_2 \le R; \; x_i \ge 0 \text{ for } i = 1,2 \]
Equating marginal responses \(m k_1 b_1 e^{-b_1 x_1} = m k_2 b_2 e^{-b_2 x_2}\) at an interior optimum gives
\[ x_1 = \frac{b_2 R + \ln\left(\frac{k_1 b_1}{k_2 b_2}\right)}{b_1 + b_2}; \quad x_2 = \frac{b_1 R + \ln\left(\frac{k_2 b_2}{k_1 b_1}\right)}{b_1 + b_2} \]
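As a quick sketch, the interior optimum above can be computed and compared against a fixed 50/50 proportional split. The parameter values here are illustrative assumptions, not taken from the case study:

```python
import numpy as np

def sales(k, b, x):
    """Concave sales response s_i = k_i * (1 - e^(-b_i * x_i))."""
    return k * (1 - np.exp(-b * x))

def optimal_split(k1, b1, k2, b2, R):
    """Interior optimum from equating marginal responses
    k1*b1*e^(-b1*x1) = k2*b2*e^(-b2*x2) with x1 + x2 = R."""
    x1 = (b2 * R + np.log((k1 * b1) / (k2 * b2))) / (b1 + b2)
    return x1, R - x1

# Illustrative parameters (assumed)
k1, b1, k2, b2, R = 100.0, 0.3, 60.0, 0.5, 10.0
x1, x2 = optimal_split(k1, b1, k2, b2, R)

optimal = sales(k1, b1, x1) + sales(k2, b2, x2)
proportional = sales(k1, b1, 0.5 * R) + sales(k2, b2, 0.5 * R)  # w_i = 0.5
```

With these numbers the informed allocator equalizes marginal sales response across submarkets and strictly beats the fixed-proportion rule.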
21.19.2 Case study 2
S-shaped sales response function:
21.19.3 Case study 3
Quadratic-form stochastic response function
- Optimal allocation under risk-averse vs. risk-neutral investors
21.20 Mixed Strategies
In games with a finite number of players and finite strategies for each player, there will always be a Nash equilibrium (it might not be pure, but a mixed equilibrium always exists)
Extended game
Suppose we allow each player to choose randomizing strategies
For example, the server might serve left half of the time, and right half of the time
In general, suppose the server serves left a fraction \(p\) of the time
What is the receiver’s best response?
Best Responses
If \(p = 1\) , the receiver should defend to the left
\(p = 0\) , the receiver should defend to the right
The expected payoff to the receiver is
\(p \times 3/4 + (1-p) \times 1/4\) if defending left
\(p \times 1/4 + (1-p) \times 3/4\) if defending right
Hence, she should defend left if
\[ p \times 3/4 + (1-p)\times 1/4 > p \times 1/4 + (1-p) \times 3/4 \]
which simplifies to: defend left whenever \(p > 1/2\)
Server’s Best response
Suppose that the receiver goes left with probability \(q\)
if \(q = 1\) , the server should serve right
If \(q = 0\) , the server should serve left
Hence, serve left if \(1/4 \times q + 3/4 \times (1-q) > 3/4\times q + 1/4 \times (1-q)\)
Simplifying, he should serve left if \(q < 1/2\)
Mixed strategy equilibrium:
A mixed strategy equilibrium is a pair of mixed strategies that are mutual best responses
In the tennis example, this occurred when each player chose a 50-50 mixture of left and right
Your best mixture makes your opponent indifferent among his options
A player chooses his strategy to make his rival indifferent
A player earns the same expected payoff for each pure strategy chosen with positive probability
Important property: when a player’s own payoff from a pure strategy goes up (or down), his equilibrium mixture does not change (it is the opponent’s mixture that adjusts)
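The indifference logic above can be sketched numerically for the 2×2 tennis game, using the receiver's winning probabilities from the text:

```python
import numpy as np

# Receiver's payoffs (probability of winning the point):
# rows = server's choice (Left, Right); columns = receiver's choice (Left, Right)
receiver = np.array([[0.75, 0.25],
                     [0.25, 0.75]])
server = 1 - receiver  # constant-sum game

def indifference_mix(opponent_payoffs):
    """Given a 2x2 matrix of the opponent's payoffs (rows = my choices),
    return the probability on my first row that leaves the opponent
    indifferent between his two columns."""
    (a, b), (c, d) = opponent_payoffs
    # solve p*a + (1-p)*c == p*b + (1-p)*d for p
    return (d - c) / (a - b - c + d)

p = indifference_mix(receiver)   # server's P(serve left)
q = indifference_mix(server.T)   # receiver's P(defend left)
```

Both probabilities come out to 1/2, matching the 50-50 mixture described in the text.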
21.21 Bundling
Say we have equal numbers of type 1 and type 2 customers. Pricing the components separately, you would charge $5,000 for the equipment and $2,000 for installation; your total profit is then $14,000.
But if you bundle, you get $16,000.
If bundling works, why doesn’t every company do it?
Because it depends on the numbers of type 1 and type 2 customers, and on negative correlation in willingness to pay .
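A minimal sketch of the comparison. The text does not give the underlying willingness-to-pay (WTP) table, so the WTPs below are hypothetical values chosen to be consistent with the $14,000 vs. $16,000 figures:

```python
# Hypothetical WTPs: negatively correlated across components (assumed values)
wtp = [
    {"equipment": 5000, "installation": 3000},  # type 1 customer
    {"equipment": 6000, "installation": 2000},  # type 2 customer
]

def profit_separate(p_equip, p_install):
    """Each customer buys a component iff their WTP >= its price."""
    total = 0
    for w in wtp:
        if w["equipment"] >= p_equip:
            total += p_equip
        if w["installation"] >= p_install:
            total += p_install
    return total

def profit_bundle(p_bundle):
    """Each customer buys the bundle iff their total WTP >= the bundle price."""
    return sum(p_bundle for w in wtp
               if w["equipment"] + w["installation"] >= p_bundle)

separate = profit_separate(5000, 2000)   # 14,000 for one customer of each type
bundled = profit_bundle(8000)            # 16,000
```

Bundling wins here precisely because component WTPs are negatively correlated: total WTP is the same across types even though the component WTPs differ.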
For example:
Information Products
marginal cost is close to 0.
Bundling of info products is very easy
hence always bundle
21.22 Market Entry and Diffusion
Product Life Cycle model
Bass ( 1969 )
The diffusion of sales involves 2 types of buyers
\(p\) = coefficient of innovation (fraction of innovators of the untapped market who buy in that period)
\(q\) = coefficient of imitation (fraction of interactions between owners and the untapped market that lead to sales in that period)
\(M\) = market potential
\(N(t)\) = cumulative sales till time \(t\)
\(M - N(t)\) = the untapped market
Sales in any period come from people buying because of the pure benefits of the product, plus people buying the product after interacting with people who already own it.
\[ S(t) = p(M- N(t)) + q \frac{N(t)}{M} [M-N(t)] \\ = pM + (q-p) N(t) - \frac{q}{M} [N(t)]^2 \]
one can estimate \(p,q,M\) from data
\(q > p\) (coefficient of imitation > coefficient of innovation) means that the product has a life cycle (bell-shaped sales curve)
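A discrete-time sketch of the Bass model, using illustrative parameter values in the range typically reported in the literature (not estimated from any dataset here):

```python
import numpy as np

def bass_sales(p, q, M, periods):
    """Discrete-time Bass model:
    S(t) = p*(M - N(t)) + q*(N(t)/M)*(M - N(t))."""
    N, sales = 0.0, []
    for _ in range(periods):
        s = p * (M - N) + q * (N / M) * (M - N)
        sales.append(s)
        N += s  # cumulative sales
    return np.array(sales)

# Illustrative parameters with q > p, so sales should peak in the interior
s = bass_sales(p=0.03, q=0.38, M=1000.0, periods=25)
peak = int(np.argmax(s))
```

Because \(q > p\), the sales path rises, peaks, and declines, i.e., the bell-shaped life cycle.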
Previous research relied on
- limited databases (PIMS and ASSESSOR) ( Urban et al. 1986 )
- exclusion of nonsurvivors
- single-informant self-reports
A new dataset overcomes these limitations and shows that 50% of market pioneers fail and that their mean share is much lower than previously reported.
Early market leaders have greater long-term success and enter on average 13 years after pioneers.
Definitions (p. 159)
- Inventor: firms that develop patent or important technologies in a new product category
- Product pioneer: the first firm to develop a working model or sample in a new product category
- Market pioneer is the first firm to sell in a new product category
- Product category: a group of close substitutes
At the business level, being the pioneer can produce a long-term profit disadvantage, based on samples of consumer and industrial goods.
First-to-market leads to an initial profit advantage, which lasts about 12 to 14 years before becoming a long-term disadvantage.
Consumer learning (education), market position (strong vs. weak), and patent protection can moderate the effect of first-mover status on profit.
Research on product life cycle (PLC)
Consumer durables typically grow 45% per year over 8 years, then slow down when sales decline by 15%, and sales stay below the previous peak for 5 years
Slowdown typically happens when the product penetrates 35-50% of the market
Products with large initial sales increases tend to have larger sales declines at slowdown
Leisure-enhancing products tend to have higher growth rates and shorter growth stages than non-leisure-enhancing products
Time-saving products have lower growth rates and longer growth stages than non-time-saving products
A lower likelihood of slowdown correlates with steeper price reductions, lower penetration, and higher economic growth
A hazard model gives reasonable prediction of the slowdown and takeoff.
Innovation markets have two segments:
Influentials: aware of new developments and affect imitators
Imitators: model themselves after influentials
This market structure is reasonable because it exhibits evidence consistent with prior research and markets (e.g., the dip between the early and later parts of the diffusion curve).
“Erroneously specifying a mixed-influence model to a mixture process where influentials act independently from each other can generate systematic changes in the parameter values reported in earlier research.”
The two-segment model performs better than the standard mixed-influence, the Gamma/Shifted Gompertz, and the Weibull-Gamma models, and similarly to the Karmeshu-Goswami mixed-influence model.
21.23 Principal-Agent Models and Salesforce Compensation
Key question:
- How do we design compensation plans such that agents exert high effort?
Designing contracts (two monitoring cases):
- Effort can be monitored
- Monitoring costs are too high
Sequence:
- Manager designs the contract
- Manager offers the contract and the worker chooses whether to accept
- Worker decides the extent of effort
- Outcome is observed and the wage is paid to the worker
Scenario 1 : Certainty
e = effort put in by worker
2 levels of e
- 2 if he works hard
- 0 if he shirks
Reservation utility = 10 (other alternative: can work somewhere else, or private money allows them not to work)
Agent’s Utility
\[ U = \begin{cases} w - e & \text{if he exerts effort e} \\ 10 & \text{if he works somewhere else} \end{cases} \]
Revenue is a function of effort
\[ R(e) = \begin{cases} H & \text{if } e = 2 \\ L & \text{if } e = 0 \end{cases} \]
\(w^H\) = wage if \(R(e) = H\)
\(w^L\) = wage if \(R(e) = L\)
Constraints:
Worker has to participate in this labor market - participation constraint \(w^H - 2 \ge 10\)
Incentive compatibility constraint (ensures that the worker always puts in the effort rather than shirking): \(w^H - 2 \ge w^L -0\)
\[ w^H = 12 \\ w^L = 10 \]
Thus, contract is simple because of monitoring
Scenario 2 : Under uncertainty
\[ R(2) = \begin{cases} H & \text{w/ prob 0.8} \\ L & \text{w/ prob 0.2} \end{cases} \\ R(0) = \begin{cases} H & \text{w/ prob 0.4} \\ L & \text{w/ prob 0.6} \end{cases} \]
Agent Utility
\[ U = \begin{cases} E(w) - e & \text{if effort e is put} \\ 10 & \text{if they choose outside option} \end{cases} \]
Participation Constraint: \(0.8w^H + 0.2w^L -2 \ge 10\)
Incentive compatibility constraint: \(0.8w^H + 0.2w^L - 2 \ge 0.4 w^H + 0.6w^L - 0\)
\[ w^H = 13 \\ w^L = 8 \]
Expected wage bill that the manager has to pay:
\[ 13\times 0.8 + 8 \times 0.2 = 12 \]
Hence, the expected money the manager has to pay is the same for both cases (certainty vs. uncertainty)
Scenario 3 : Asymmetric Information
Degrees of risk aversion
Manager's perspective
\[ R(2) = \begin{cases} H & \text{w/ prob 0.8} \\ L & \text{w/ prob 0.2} \end{cases} \]
Worker's perspective (the worker's probabilities are always lower, because workers are more risk averse while managers are closer to risk neutral; the manager also knows this).
\[ R(2) = \begin{cases} H & \text{w/ prob 0.7} \\ L & \text{w/ prob 0.3} \end{cases} \]
Participation Constraint
\[ 0.7w^H + 0.3w^L - 2 \ge 10 \]
Incentive Compatibility Constraint
\[ 0.7 w^H + 0.3 w^L - 2 \ge 0.4 w^H + 0.6 w^L - 0 \]
(take R(0) from scenario 2)
\[ 0.7 w^H + 0.3 w^L = 12 \\ 0.3w^H - 0.3w^L = 2 \]
\[ w^H = 14 \\ w^L = 22/3 \]
Expected wage bill for the manager is
\[ 14 \times 0.8 + \frac{22}{3}\times 0.2 \approx 12.67 \]
Hence, expected wage bill is higher than scenario 2
Risk aversion from the worker forces the manager to pay higher wage
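The binding participation and incentive constraints in scenarios 2 and 3 form a 2×2 linear system in \((w^H, w^L)\). A short sketch that reproduces the wage figures above:

```python
import numpy as np

def wages(pH_work, pH_shirk, effort_cost=2.0, reservation=10.0):
    """Solve the binding participation and incentive constraints
        pH_work*wH + (1 - pH_work)*wL - e = reservation
        (pH_work - pH_shirk)*(wH - wL)   = e
    using the probabilities the worker believes in."""
    dp = pH_work - pH_shirk
    A = np.array([[pH_work, 1.0 - pH_work],
                  [dp, -dp]])
    b = np.array([reservation + effort_cost, effort_cost])
    return np.linalg.solve(A, b)

wH2, wL2 = wages(0.8, 0.4)   # scenario 2: shared beliefs
wH3, wL3 = wages(0.7, 0.4)   # scenario 3: worker's more pessimistic beliefs

# The manager pays out under her own 0.8/0.2 beliefs in both scenarios
bill2 = 0.8 * wH2 + 0.2 * wL2
bill3 = 0.8 * wH3 + 0.2 * wL3
```

This recovers \((w^H, w^L) = (13, 8)\) with an expected wage bill of 12 in scenario 2, and \((14, 22/3)\) with an expected wage bill of about 12.67 in scenario 3.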
Grossman and Hart ( 1986 )
- landmark paper for principal agent model
21.23.1 Basu et al. ( 1985 )
Types of compensation plan:
Independent of salesperson’s performance (e.g., salary only)
Partly dependent on output (e.g., salary with commissions)
In comparison to others (e.g., sales contests)
Options for salesperson to choose the compensation plan
In the first 2 categories, the 3 major schemes:
- Straight salary
- Straight commissions
- Combination of base salary and commission
Dimensions that affect the proportion of salary to total pay (p. 270, Table 1)
Previous research assumes deterministic relationship between sales and effort, but this study says otherwise (stochastic relationship between sales and effort).
Firm: Risk neutral: maximize expected profits
Salesperson: Risk averse . Hence, diminishing marginal utility for income \(U(s) \ge 0; U'(s) >0, U''(s) <0\)
Expected utility of the salesperson for this job > alternative
Utility function of the salesperson: additively separable: \(U(s) - V(t)\) where \(s\) = salary, and \(t\) = effort (time)
Marginal disutility for effort increases with effort \(V(t) \ge 0, V'(t)>0, V''(t) >0\)
Constant marginal cost of production and distribution \(c\)
Both principal and agent know the utility function and the sales-effort response function
Dollar sales \(x\) follow a Gamma or binomial distribution
Expected profit for the firm
\[ \pi = \int[(1-c)x - s(x)]f(x|t)dx \]
Objective of the firm is to
\[ \underset{s(x)}{\operatorname{max}} \int[(1-c)x - s(x)]f(x|t)dx \]
subject to (agent’s best alternative e.g., other job offer - \(m\) )
\[ \int [U(s(x))]f(x|t) dx - V(t) \ge m \]
and the agent wants to maximize the utility
\[ \underset{t}{\operatorname{max}} \int [U(s(x))]f(x|t)dx - V(t) \]
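A numerical sketch of the agent's problem under a linear contract \(s(x) = A + Bx\), using the functional forms above. All parameter values (and the linear contract itself) are illustrative assumptions, not results from the paper:

```python
import numpy as np

# Common random draws from Gamma(shape=c=4, scale=1), so the implied sales
# x = Z * mu / 4 have mean mu and coefficient of variation 1/sqrt(4) = 0.5
rng = np.random.default_rng(0)
Z = rng.gamma(shape=4.0, scale=1.0, size=100_000)

def agent_utility(t, A, B, h=1.0, k=2.0, delta=0.5, d=0.5, gam=1.5):
    """E[U(s(x))] - V(t) for a linear contract s(x) = A + B*x,
    with U(s) = s^delta / delta and V(t) = d * t^gam."""
    x = Z * (h + k * t) / 4.0            # sales with mean h + k*t
    eu = np.mean((A + B * x) ** delta / delta)
    return eu - d * t ** gam             # minus effort disutility

def optimal_effort(A, B, grid=np.linspace(0.01, 5.0, 500)):
    """Agent's utility-maximizing effort, by grid search."""
    utils = [agent_utility(t, A, B) for t in grid]
    return grid[int(np.argmax(utils))]

t_low = optimal_effort(A=1.0, B=0.2)   # low commission rate
t_high = optimal_effort(A=1.0, B=0.6)  # higher commission rate
```

As expected from the model, a higher commission rate raises the marginal return to effort and so induces the agent to choose a higher effort level.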
21.23.2 Lal and Staelin ( 1986 )
21.23.3 Raju and Srinivasan ( 1996 )
Comparing quota-based compensation with the curvilinear compensation of ( Basu et al. 1985 ) : the basic quota plan is simpler, and only in special cases (about 1% in simulation) does it differ from ( Basu et al. 1985 ) . It is also easier to adapt to changes such as a salesperson switching territory or a territory changing, unlike ( Basu et al. 1985 ) ’s plan, where the whole commission rate structure needs to be changed.
Heterogeneity stems from:
Salesperson: disutility effort level, risk level, effectiveness, alternative opportunity
Territory: Sales potential and volatility
Adjusting the quota (per territory) can accommodate the heterogeneity
Quota-based < BLSS (in terms of profits) due to
- the shape of the quota-based compensation curve (total compensation vs. sales) (i.e., shape-induced nonoptimality)
- a common salary and commission rate across the salesforce (i.e., heterogeneity-induced nonoptimality)
The shape-induced nonoptimality is assessed following Basu and Kalyanaram ( 1990 )
21.23.4 Joseph and Thevaranjan ( 1998 )
21.23.5 Simester and Zhang ( 2010 )
- Tradeoff: Motivating manager effort and info sharing.
21.24 Meta-analyses of Econometric Marketing Models
21.25 Dynamic Advertising Effects and Spending Models
21.26 Marketing Mix Optimization Models
Check this post for implementation in Python
21.27 New Product Diffusion Models
21.28 Two-Sided Platform Marketing Models
Example of Marketing Mix Model in practice: link
Qualitative Data Analysis Methods
The “Big 6” Qualitative Methods + Examples
By: Kerryn Warren (PhD) | Reviewed By: Eunice Rautenbach (D.Tech) | May 2020 (Updated April 2023)
If you’re new to the world of research, qualitative data analysis can look rather intimidating. So much bulky terminology and so many abstract, fluffy concepts. It certainly can be a minefield!
What (exactly) is qualitative data analysis?
To understand qualitative data analysis, we need to first understand qualitative data – so let’s step back and ask the question, “what exactly is qualitative data?”.
Qualitative data refers to pretty much any data that’s “not numbers” . In other words, it’s not the stuff you measure using a fixed scale or complex equipment, nor do you analyse it using complex statistics or mathematics.
So, if it’s not numbers, what is it?
Words, you guessed? Well… sometimes , yes. Qualitative data can, and often does, take the form of interview transcripts, documents and open-ended survey responses – but it can also involve the interpretation of images and videos. In other words, qualitative isn’t just limited to text-based data.
So, how’s that different from quantitative data, you ask?
Simply put, qualitative research focuses on words, descriptions, concepts or ideas – while quantitative research focuses on numbers and statistics . Qualitative research investigates the “softer side” of things to explore and describe , while quantitative research focuses on the “hard numbers”, to measure differences between variables and the relationships between them. If you’re keen to learn more about the differences between qual and quant, we’ve got a detailed post over here .
So, qualitative analysis is easier than quantitative, right?
Not quite. In many ways, qualitative data can be challenging and time-consuming to analyse and interpret. At the end of your data collection phase (which itself takes a lot of time), you’ll likely have many pages of text-based data or hours upon hours of audio to work through. You might also have subtle nuances of interactions or discussions that have danced around in your mind, or that you scribbled down in messy field notes. All of this needs to work its way into your analysis.
Making sense of all of this is no small task and you shouldn’t underestimate it. Long story short – qualitative analysis can be a lot of work! Of course, quantitative analysis is no piece of cake either, but it’s important to recognise that qualitative analysis still requires a significant investment in terms of time and effort.
The “Big 6” Qualitative Analysis Methods
There are many different types of qualitative data analysis, all of which serve different purposes and have unique strengths and weaknesses . We’ll start by outlining the analysis methods and then we’ll dive into the details for each.
The 6 most popular methods (or at least the ones we see at Grad Coach) are:
- Content analysis
- Narrative analysis
- Discourse analysis
- Thematic analysis
- Grounded theory (GT)
- Interpretive phenomenological analysis (IPA)
QDA Method #1: Qualitative Content Analysis
Content analysis is possibly the most common and straightforward QDA method. At the simplest level, content analysis is used to evaluate patterns within a piece of content (for example, words, phrases or images) or across multiple pieces of content or sources of communication. For example, a collection of newspaper articles or political speeches.
With content analysis, you could, for instance, identify the frequency with which an idea is shared or spoken about – like the number of times a Kardashian is mentioned on Twitter. Or you could identify patterns of deeper underlying interpretations – for instance, by identifying phrases or words in tourist pamphlets that highlight India as an ancient country.
Because content analysis can be used in such a wide variety of ways, it’s important to go into your analysis with a very specific question and goal, or you’ll get lost in the fog. With content analysis, you’ll group large amounts of text into codes , summarise these into categories, and possibly even tabulate the data to calculate the frequency of certain concepts or variables. Because of this, content analysis provides a small splash of quantitative thinking within a qualitative method.
Naturally, while content analysis is widely useful, it’s not without its drawbacks . One of the main issues with content analysis is that it can be very time-consuming , as it requires lots of reading and re-reading of the texts. Also, because of its multidimensional focus on both qualitative and quantitative aspects, it is sometimes accused of losing important nuances in communication.
Content analysis also tends to concentrate on a very specific timeline and doesn’t take into account what happened before or after that timeline. This isn’t necessarily a bad thing though – just something to be aware of. So, keep these factors in mind if you’re considering content analysis. Every analysis method has its limitations , so don’t be put off by these – just be aware of them ! If you’re interested in learning more about content analysis, the video below provides a good starting point.
QDA Method #2: Narrative Analysis
As the name suggests, narrative analysis is all about listening to people telling stories and analysing what that means . Since stories serve a functional purpose of helping us make sense of the world, we can gain insights into the ways that people deal with and make sense of reality by analysing their stories and the ways they’re told.
You could, for example, use narrative analysis to explore whether how something is being said is important. For instance, the narrative of a prisoner trying to justify their crime could provide insight into their view of the world and the justice system. Similarly, analysing the ways entrepreneurs talk about the struggles in their careers or cancer patients telling stories of hope could provide powerful insights into their mindsets and perspectives . Simply put, narrative analysis is about paying attention to the stories that people tell – and more importantly, the way they tell them.
Of course, the narrative approach has its weaknesses , too. Sample sizes are generally quite small due to the time-consuming process of capturing narratives. Because of this, along with the multitude of social and lifestyle factors which can influence a subject, narrative analysis can be quite difficult to reproduce in subsequent research. This means that it’s difficult to test the findings of some of this research.
Similarly, researcher bias can have a strong influence on the results here, so you need to be particularly careful about the potential biases you can bring into your analysis when using this method. Nevertheless, narrative analysis is still a very useful qualitative analysis method – just keep these limitations in mind and be careful not to draw broad conclusions . If you’re keen to learn more about narrative analysis, the video below provides a great introduction to this qualitative analysis method.
QDA Method #3: Discourse Analysis
Discourse is simply a fancy word for written or spoken language or debate . So, discourse analysis is all about analysing language within its social context. In other words, analysing language – such as a conversation, a speech, etc – within the culture and society it takes place. For example, you could analyse how a janitor speaks to a CEO, or how politicians speak about terrorism.
To truly understand these conversations or speeches, the culture and history of those involved in the communication are important factors to consider. For example, a janitor might speak more casually with a CEO in a company that emphasises equality among workers. Similarly, a politician might speak more about terrorism if there was a recent terrorist incident in the country.
So, as you can see, by using discourse analysis, you can identify how culture , history or power dynamics (to name a few) have an effect on the way concepts are spoken about. So, if your research aims and objectives involve understanding culture or power dynamics, discourse analysis can be a powerful method.
Because there are many social influences in terms of how we speak to each other, the potential use of discourse analysis is vast . Of course, this also means it’s important to have a very specific research question (or questions) in mind when analysing your data and looking for patterns and themes, or you might land up going down a winding rabbit hole.
Discourse analysis can also be very time-consuming as you need to sample the data to the point of saturation – in other words, until no new information and insights emerge. But this is, of course, part of what makes discourse analysis such a powerful technique. So, keep these factors in mind when considering this QDA method. Again, if you’re keen to learn more, the video below presents a good starting point.
QDA Method #4: Thematic Analysis
Thematic analysis looks at patterns of meaning in a data set – for example, a set of interviews or focus group transcripts. But what exactly does that… mean? Well, a thematic analysis takes bodies of data (which are often quite large) and groups them according to similarities – in other words, themes . These themes help us make sense of the content and derive meaning from it.
Let’s take a look at an example.
With thematic analysis, you could analyse 100 online reviews of a popular sushi restaurant to find out what patrons think about the place. By reviewing the data, you would then identify the themes that crop up repeatedly within the data – for example, “fresh ingredients” or “friendly wait staff”.
So, as you can see, thematic analysis can be pretty useful for finding out about people’s experiences , views, and opinions . Therefore, if your research aims and objectives involve understanding people’s experience or view of something, thematic analysis can be a great choice.
Since thematic analysis is a bit of an exploratory process, it’s not unusual for your research questions to develop , or even change as you progress through the analysis. While this is somewhat natural in exploratory research, it can also be seen as a disadvantage as it means that data needs to be re-reviewed each time a research question is adjusted. In other words, thematic analysis can be quite time-consuming – but for a good reason. So, keep this in mind if you choose to use thematic analysis for your project and budget extra time for unexpected adjustments.
QDA Method #5: Grounded theory (GT)
Grounded theory is a powerful qualitative analysis method where the intention is to create a new theory (or theories) using the data at hand, through a series of “ tests ” and “ revisions ”. Strictly speaking, GT is more a research design type than an analysis method, but we’ve included it here as it’s often referred to as a method.
What’s most important with grounded theory is that you go into the analysis with an open mind and let the data speak for itself – rather than dragging existing hypotheses or theories into your analysis. In other words, your analysis must develop from the ground up (hence the name).
Let’s look at an example of GT in action.
Assume you’re interested in developing a theory about what factors influence students to watch a YouTube video about qualitative analysis. Using Grounded theory , you’d start with this general overarching question about the given population (i.e., graduate students). First, you’d approach a small sample – for example, five graduate students in a department at a university. Ideally, this sample would be reasonably representative of the broader population. You’d interview these students to identify what factors lead them to watch the video.
After analysing the interview data, a general pattern could emerge. For example, you might notice that graduate students are more likely to read a post about qualitative methods if they are just starting on their dissertation journey, or if they have an upcoming test about research methods.
From here, you’ll look for another small sample – for example, five more graduate students in a different department – and see whether this pattern holds true for them. If not, you’ll look for commonalities and adapt your theory accordingly. As this process continues, the theory would develop . As we mentioned earlier, what’s important with grounded theory is that the theory develops from the data – not from some preconceived idea.
So, what are the drawbacks of grounded theory? Well, some argue that there’s a tricky circularity to grounded theory. For it to work, in principle, you should know as little as possible regarding the research question and population, so that you reduce the bias in your interpretation. However, in many circumstances, it’s also thought to be unwise to approach a research question without knowledge of the current literature . In other words, it’s a bit of a “chicken or the egg” situation.
Regardless, grounded theory remains a popular (and powerful) option. Naturally, it’s a very useful method when you’re researching a topic that is completely new or has very little existing research about it, as it allows you to start from scratch and work your way from the ground up .
QDA Method #6: Interpretive Phenomenological Analysis (IPA)
Interpretive. Phenomenological. Analysis. IPA . Try saying that three times fast…
Let’s just stick with IPA, okay?
IPA is designed to help you understand the personal experiences of a subject (for example, a person or group of people) concerning a major life event, an experience or a situation . This event or experience is the “phenomenon” that makes up the “P” in IPA. Such phenomena may range from relatively common events – such as motherhood, or being involved in a car accident – to those which are extremely rare – for example, someone’s personal experience in a refugee camp. So, IPA is a great choice if your research involves analysing people’s personal experiences of something that happened to them.
It’s important to remember that IPA is subject-centred . In other words, it’s focused on the experiencer . This means that, while you’ll likely use a coding system to identify commonalities, it’s important not to lose the depth of experience or meaning by trying to reduce everything to codes. Also, keep in mind that since your sample size will generally be very small with IPA, you often won’t be able to draw broad conclusions about the generalisability of your findings. But that’s okay as long as it aligns with your research aims and objectives.
Another thing to be aware of with IPA is personal bias . While researcher bias can creep into all forms of research, self-awareness is critically important with IPA, as it can have a major impact on the results. For example, a researcher who was a victim of a crime himself could insert his own feelings of frustration and anger into the way he interprets the experience of someone who was kidnapped. So, if you’re going to undertake IPA, you need to be very self-aware or you could muddy the analysis.
How to choose the right analysis method
In light of all of the qualitative analysis methods we’ve covered so far, you’re probably asking yourself the question, “ How do I choose the right one? ”
Much like all the other methodological decisions you’ll need to make, selecting the right qualitative analysis method largely depends on your research aims, objectives and questions . In other words, the best tool for the job depends on what you’re trying to build. For example:
- Perhaps your research aims to analyse the use of words and what they reveal about the intention of the storyteller and the cultural context of the time.
- Perhaps your research aims to develop an understanding of the unique personal experiences of people that have experienced a certain event, or
- Perhaps your research aims to develop insight regarding the influence of a certain culture on its members.
As you can probably see, each of these research aims is distinctly different , and therefore a different analysis method would be suitable for each one. For example, narrative analysis would likely be a good option for the first aim, while grounded theory wouldn’t be as relevant.
It’s also important to remember that each method has its own set of strengths, weaknesses and general limitations. No single analysis method is perfect . So, depending on the nature of your research, it may make sense to adopt more than one method (this is called triangulation ). Keep in mind though that this will of course be quite time-consuming.
As we’ve seen, all of the qualitative analysis methods we’ve discussed make use of coding and theme-generating techniques, but the intent and approach of each analysis method differ quite substantially. So, it’s very important to come into your research with a clear intention before you decide which analysis method (or methods) to use.
Start by reviewing your research aims , objectives and research questions to assess what exactly you’re trying to find out – then select a qualitative analysis method that fits. Never pick a method just because you like it or have experience using it – your analysis method (or methods) must align with your broader research aims and objectives.
Let’s recap on QDA methods…
In this post, we looked at six popular qualitative data analysis methods:
- First, we looked at content analysis , a straightforward method that blends a little bit of quant into a primarily qualitative analysis.
- Then we looked at narrative analysis , which is about analysing how stories are told.
- Next up was discourse analysis – which is about analysing conversations and interactions.
- Then we moved on to thematic analysis – which is about identifying themes and patterns.
- From there, we went south with grounded theory – which is about starting from scratch with a specific question and using the data alone to build a theory in response to that question.
- And finally, we looked at IPA – which is about understanding people’s unique experiences of a phenomenon.
Of course, these aren’t the only options when it comes to qualitative data analysis, but they’re a great starting point if you’re dipping your toes into qualitative research for the first time.
If you’re still feeling a bit confused, consider our private coaching service , where we hold your hand through the research process to help you develop your best work.
Analytic model for academic research productivity having factors, interactions and implications
Correspondence to: Scott Kern; Email: [email protected]
Financial support is dear in academia and will tighten further. How can the research mission be accomplished within new restraints? A model is presented for evaluating source components of academic research productivity. It comprises six factors: funding; investigator quality; efficiency of the research institution; the research mix of novelty, incremental advancement, and confirmatory studies; analytic accuracy; and passion. Their interactions produce output and patterned influences between factors. Strategies for optimizing output are enabled.
Keywords: public policy, research efficiency, productivity, funding
In developed countries, there are increasing pressures from governments and their funding agencies to demonstrate impact from the spent R&D funds. This creates the need to understand better the productivity of public research, both in academic settings and in governmental institutes.
To optimize these goals, the fundamental factors determining the productive output should be analyzed. In the model presented here ( Fig. 1 ), the sources of production are grouped into six top-tier, or alpha, variables: investments and ongoing funding; investigator experience and training; efficiency of the research environment; the research mix of novelty, incremental advancement, and confirmatory studies; analytic accuracy; and passion. Interactions exist between these variables: they operate as multiplicative (not additive) factors determining the total product of research output, and feedback loops and ratchet effects can link their inputs and outputs. Relating the output to the input resources creates the measure of efficiency, productivity. Certain widely-discussed research characteristics can be reclassified more usefully as subsets of these six dominant variables, while others can be discarded as distractions or as pertaining only to special situations. Institutions and nations should choose deliberately among alternative production goals and, in order for the endeavor to remain stable, must maintain a focus on the intended beneficiaries of the output.
A schematic summary of the factor model presented. Variables influencing the output can be nested within six parental conceptual containers, the “alphas” arrayed at the top of this schematic. The model is not additive, but treats alphas as multiplicative factors. A deficiency in an alpha will impair the output regardless of improvements made in other, more visible and popular input variables. Productivity is a rate and is an expression of efficiency that is independently meaningful only when made relative to a reference unit; it is not equivalent to total production or output.
The numbers of publications, of patents, or of clinical trials in a time period are potential measures of output or production. When expressed as a ratio, such as output per dollar, per nation, per institution, per investigator… they become measures of efficiency or productivity. Choices thus are involved in the kind of output to be optimized. For example, optimizing the impact on a field of study is a laudable goal, similar to profitability or market share of a product in a commercial setting. In contrast, one might wish to measure a less ambiguous, more quantitative goal, perhaps work performed. The latter is independent of impact in a practical sense. Imagine two organizations that each measure the number of publications, patents and clinical trials. One organization may choose to focus on work performed as its premier goal, only eventually to fall behind an organization that sets impact as its goal.
It is difficult to measure impact, lest paradoxes emerge. For example, the list of Nobel laureates in medicine would not widely overlap the list of investigators that have the top ten percentile rank in publications, patent applications, or clinical trials. Research quality need not parallel quantity. As an example, Kary Mullis invented PCR, which was one of the most financially lucrative and scientifically important discoveries. Dr. Mullis, however, was not superlatively productive as a biochemist by other measures.
Whenever metrics are simple, they can become systematically influenced by learned behaviors. For example, even the weakest investigators learn that they must publish a certain number of articles so as to stay employed. Similarly, the “h-index,” a measure of an investigator's impact based on citation rates, 1 is influenced by self-citation or by the behavior of scientific fads, where even poor publications become highly cited because other poor publications cite them. To counteract this weakness, better and more intricate metrics can be contrived. For example, the h-index is criticized for the ease with which it can be gamed. In response, an “h-squared index” could be proposed, in which the “h function” is applied twice serially, both to each citing publication and to the investigator. In the h-squared index, only highly cited publications (having their own high h-index according to Schubert’s h index of a single publication) 2 count toward calculations of an investigator's h-index. This would be extremely difficult to game. The h-index of the current author is 74, while his h-squared index is merely 35. That is, about 35 of his publications were in turn cited by at least 35 other impact-seeking reports of a particular level of acclaim, such that each report had at least 35 citations of its own.
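To make the two metrics concrete, here is a minimal sketch in Python. The citation data are hypothetical, and the h-squared computation simply follows the serial-application idea described above (the h function applied first to each publication's citing papers, then to the resulting per-publication scores), not any official implementation:

```python
def h_index(citation_counts):
    """Largest h such that at least h items have >= h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def h_squared(citations_of_citing_papers):
    """Apply the h function twice: first to each publication's citing
    papers (Schubert's single-publication h-index), then to the
    resulting per-publication scores."""
    per_pub_h = [h_index(citers) for citers in citations_of_citing_papers]
    return h_index(per_pub_h)

# Hypothetical investigator: three publications, each described by the
# citation counts of the papers that cite it.
pubs = [[5, 4, 3], [2, 1], [6, 6, 6]]
print(h_squared(pubs))  # 2
```

Note how a publication with many weakly-cited citers contributes little: only citations that are themselves embedded in well-cited work raise the score, which is what makes the index hard to game through self-citation rings.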
Metrics can be gamed for good purposes, and it would be desirable to work with these tendencies. Well-designed metrics, encouraging desirable patterns of gaming, resistant to undesirable gaming, and easy to adopt, are worthy of attention and development.
Thus, among metrics, hierarchies exist. For example, despite any possible exceptions to the general rules, we are sure that the success of a Phase 3 clinical trial is a more rigorous measure of impact than is success at the Phase 1 level (i.e., success in a Phase 3 trial can be the basis of a change in the standard of care; success in a Phase 1 trial cannot). Numbers of citations can be more valuable than the number of publications. Patents awarded are more tangible in a commercial sense than patents applied for. Drugs administered have more impact than drugs proposed. To contrast with this focus on impact, when work performed is the measured output of choice in a research setting, one might prefer instead to measure Phase I trials, patents applied for, and numbers of lead compounds identified. The advantage of this latter approach is that it is less subject to conditional late events (such as the occurrence of unanticipated clinical adverse events, financial shocks, or patent office delays) not under the direct control of management and investigators.
Measures of output fall also into two other categories—selfish or service—depending on the beneficiary. Research benefits to the researcher, the reward system, 3 constitute the selfish measure. Most of the growing clamor concerning the proper measurement of academic productivity concerns such selfish measures and applies them only to compare individuals, often for the purpose of allotting promotions and for advising career development. 4 Benefits to the institution, funding agency, contractor, or other constituency (taxpayers, patient advocates, charities, public popularity, economic advancement, etc.) constitute the service. These latter aspects more clearly unite public policy and academic research productivity. 5 Once the research resources are in place, the best short-term predictor of success may be an anticipation of the selfish outputs. But if these discoveries, patents, and trials do not eventually appeal to others, the research may be downsized or terminated. An imbalance in the selfish and service benefits is to be expected, but large imbalances are inherently unstable. Unstable systems trend toward stability. It is interesting that at times when one observes that the research endeavor can seem unstable, it is often because it is merely threatening to move back toward stability. Thus, higher levels of academic discovery constitute a required, but insufficient, goal. The subsequent steps of knowledge diffusion, acceptance, adoption, commercialization, etc., determine the impact of the research. These considerations make the service outputs a significant predictor of long-term success.
The Factors
In the model are six dominant, or alpha, characteristics. Each contributes to the overall output of the research. As best as can be expected in a social science, the alphas are conceived as being independent. Each can be varied intentionally or might remain unmodified when left unattended. Alphas are rather few in number; it is generally most useful to nest any additional considerations within one of these parent categories, for long lists of associated variables are seldom very independent. 6 The paucity of the proposed alpha categories would prove convenient when using the model as a guide for analysis of a research endeavor or for planning interventions to improve productivity. The alphas serve as parent containers into which new subcategories should be nested.
The alphas are envisioned as factors: multiplicands that together determine the mathematical product, output. To depict the potential use of a mathematical theory for research productivity, imagine an idealized scenario with a scale applied to each factor, such that each increment in the scale “costs” the same in input resources. Specifically, let us imagine that the Investigator Quality alpha currently has a value of “2,” “Funding” has a value of “8,” and the institution has four units of enhancement to spread around. Additional contributions to the Investigator Quality alpha would elevate its value and the value of the mathematical product, output, by 200%, i.e., a large amount. In contrast, contributing to Funding would increase it and the output by a mere 50%. Interactions between the alphas might alter this expectation somewhat, and it is indeed whimsical to expect that the incremental contribution of a factor would cost the same irrespective of the factor’s identity or that each factor should be given equal weighting. Yet, it is the main message of this exercise, and of the proposed model itself, that a factor-based analysis of a research endeavor might allow one to reach a higher productivity than that produced by assuming the alphas to be merely additive (in which case the choice of a designated target for a new improvement might be essentially interchangeable) or by mere uninformed investments. As Socrates might have said, the unexamined investment is not worth investing.
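The arithmetic of this idealized scenario can be sketched as follows. Only the Investigator Quality and Funding values come from the text; the values for the other four alphas, and the dictionary keys themselves, are assumed for illustration:

```python
from math import prod

# Hypothetical alpha values on a common cost scale; only the first two
# are given in the scenario above, the rest are assumed.
alphas = {"investigator_quality": 2, "funding": 8, "efficiency": 4,
          "research_mix": 4, "analytic_accuracy": 4, "passion": 4}

def output(a):
    # The model treats the alphas as multiplicands, not addends.
    return prod(a.values())

base = output(alphas)

# Spend the four enhancement units on the weakest factor (2 -> 6)...
gain_iq = output(dict(alphas, investigator_quality=6)) / base
# ...or on the strongest factor (8 -> 12).
gain_funding = output(dict(alphas, funding=12)) / base

print(gain_iq, gain_funding)  # 3.0 1.5 -> a 200% gain versus a 50% gain
```

Under an additive model the two choices would be interchangeable; under the multiplicative model, directing resources at the weakest factor dominates, which is the paper's central argument for factor-based analysis.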
The necessity of considering multiple variables simultaneously is not only intuitive, but repeatedly has found research support. Explanatory power is weak when either individual determinants of research productivity 7 or measures at an organizational level 8 are considered separately. For this reason, the proposed six alphas, or factors, below are each explored separately and then, to illustrate their interactions, jointly. Alphas are listed in approximately decreasing order of attention, as judged roughly from official sources such as the programmatic initiatives of national research funding agencies and in the position papers of scientific associations (for example, see ref. 9 ).
(1) Funding
Monetary inputs can be used for prolonged consumption, termed an investment, or for immediate consumption in ongoing activities. Conventional assumptions are that increases and decreases in funding will produce proportionate changes in research output provided that a reasonable balance is maintained between investments and ongoing consumption. When the other five major variables are being utilized at “less than capacity” this predictable relationship may operate, but arguably such a scenario may be infrequent. Trained and experienced investigators may be limited in the short-term, projects well-honed to the required funding criteria may be few, or funding decisions and distribution processes may be inefficient, such that the full amount of the funding may not be readily utilized. This shortcoming is a threshold effect, with the available money “burning a hole” in an agency’s proverbial pocket until it can be “shoveled out the door.” Alternately, an unanticipated increase in funding may encourage a loosening of analytic accuracy and other quality criteria so as to facilitate spending.
(2) Investigator quality
The quality of the investigators is paramount. One suitable measure of quality is provided by observation of their recent successful research experience, or momentum. Experience conveys the concept of practical and informally learned knowledge, and training, the theoretical or formally learned. Investigator quality also encompasses creativity and other less-measurable attributes. This variable is meant to provide a handle on the stable qualities accumulated among the work force, rather than the transient influences on their productivity that might be contributed by the five other factors. For example, training new investigators does not provide the experience component. Both an initial recruitment effort for trainees and subsequent career retention are required to drive quality among the investigator pool. Successful training and later career momentum are highly related. 10
The number of career publications, number of students trained, and promotion or tenure are components often used to measure the value of the investigator, as limited as these metrics may be. In academic labs, the investigator rather than the institution nearly exclusively maintains the network of contacts with outside scientists, a measurable value. Ideally, thus, the metric could be broad, along the lines of “scientific and technical human capital” as proposed by Bozeman, Dietz and Gaughan. 11 Individual situations affect the metrics; Carayol and Matt confirmed prior reports that full-time researchers publish more, as expected. 12 The number of investigators is not an inherent feature of this variable, however, for it can be useful at times to consider their numbers (or the numbers of academic institutions or of scientific projects) as a mere reflection of the magnitude of the Funding variable.
(3) Institutional efficiency
The goal of an institution in optimizing efficiency is, in part, to systematically convert the routine and predictable needs into commodities such as core services and libraries, so as to free up the investigators for tasks requiring their training and creativity. In part, it is to create a predictable administration: a stable foundation permitting the investigators without distraction to focus on more difficult, perhaps longer-range, goals. The institution can facilitate the diffusion of scientific knowledge using proximities between scientists, visitors, vendors, etc. Finally, it provides a departmental effect, where individual productivity of incoming investigators, however diverse, soon conforms to that of their colleagues. 13 Thus, the quality of colleagues matters.
An efficient environment encompasses a set of conventional core functions often provided intentionally by the institution. In biomedical science, NIH-funded academic centers (such as a cancer center) are maintained by a grant devoted almost entirely to improving the efficiency of such core and administrative functions. Large academic centers often accomplish these well, but incompletely; small ones emulate the large centers to jumpstart their own efficiency. The typical components are diverse, including an efficient library/interlibrary loan service, a collection of diverse investigators having useful capabilities, centralized provision of complex multi-user equipment, physical and electronic facilities, a magnet labor pool from which to select new hires, discount-purchasing rapid-delivery systems, a technology-savvy legal office, ethics-review boards, etc. The list also includes the collaborative atmosphere, which the institution cannot directly decree but which is essential to its efficiency.
In academia, the investigator rather than the institution embodies (or fails to embody) the project management skills, a key and yet overlooked component. The pattern may differ in large for-profit companies (such as GE), where ensuring the project management skills is a core function of the company itself (e.g., Six Sigma, performance reviews, milestone-based bonuses, etc.). In academia, no provisions for formal project management are generally found.
The pursuit of research efficiency thus should be extended to novel capabilities. Just as commercial businesses may hire consultants to import skills in project management, an academic institution might make available an expert business consultation service so as to achieve more efficient project management, rather than assuming it to be automatically vested in the investigator’s skills. In academia, providing state-of-the-art professional project-management support to investigators would offer a rather low-cost option for raising productivity. Or an institution might employ expert-scientist rapid-response teams to provide for short-term technical sprints, for which creating short-term employment positions cannot realistically be done de novo. The latter are maintained by some for-profit businesses and very often by sophisticated militaries and police departments, but rarely if ever by academic institutions. The efficient research institutions of the future may become sufficiently motivated to implement such functions.
Interactions immediately suggest themselves. When the investigator finds reinforcement from engaging separate goals (such as performing research alongside teaching duties or clinical care), the interaction might be overall positive. 14 In contrast, when the institution serves disparate goals, focused passion might be discouraged, and the split missions could distract the investigators. And if the institution relies excessively on short-term financial supports, the research foundation could be rendered too unstable to foster pursuits of the deeper scientific questions. Whatever the financial benefits of short-term funding, a price may be paid for it through a decreased institutional efficiency.
(4) Research mix
Useful science does three things. It makes novel discoveries, extends these discoveries incrementally, and confirms and self-corrects itself by enabling other investigators to repeat the reported experiments. It would be a fallacy to simplistically value novelty without incremental science, or to systematically prefer novelty over confirmatory studies and skeptical science. 15 Useful science must incorporate all three components. One of the greatest impediments in modern hypothesis-generating (rather than hypothesis testing) research is the persistent menagerie of unconfirmed novel discoveries and the wasted funding resources consumed to produce it. It takes compassion toward a field, and considerable strength, to muck out its stalls. Imbalance in the research mix is arguably not the natural state of individual scientists, but is often imposed on the scientific work force in self-serving and unavoidable tendencies among funding agencies, publishers, and research institutions. It would not only be refreshing, but it would improve the value of the research mix and the overall research productivity, if greater emphasis were placed on attaining balance among novel, extended, and confirming studies.
(5) Analytic accuracy
Science is accurate when the data and the logic underlying the conclusions are valid. Yet science is not inherently accurate, for it is primarily a social endeavor. Deep inaccuracies can persist for long periods of time. Essentially by definition, such patterns impair research productivity. Data exist to show that incorrect research continues to be cited even after a publication is retracted for fraud. 16 Some patterns of citation are best explained as a type of fad rather than a form of analytically rigorous knowledge pursuit. 15 A theoretical argument persuasively made the case that most published research was false. 17 While misleading science can be readily published, the insights of skeptical readers are seldom shared. Thousands of readers’ comments pertaining to the analytic accuracy of individual published articles are organized and published at www.biomedcriticalcommentary.com , 18 but outside the biomarker field such exchanges among readers are usually informal and sporadic. Primary authors seldom revisit the accuracy of their discoveries through future published follow-up reassessment, despite published suggestions that journals could uniformly inaugurate such a policy. 15 , 19 The test of time is a responsibility of science, but a responsibility waived habitually.
Analytic accuracy thus is the Cinderella of science: too often ignored, deprecated, and unwanted. In a prominent recent example, investigators at Duke University Medical Center developed a biomarker panel technology to predict cancer treatment responses and instituted clinical trials using the markers. 20 A “forensic statistics” team at MD Anderson Cancer Center examined the database and found multiple large errors (such as “off-by-one” spreadsheet transformations of data) that invalidated the key biomarker associations, a direct challenge to the analytic accuracy of the publications and the ethics of the clinical trials then underway. 21 Separately and subsequently, a lead investigator of the Duke team was found to have incorrectly claimed an award in his training record; this was at best an indirect challenge to the ethics of the trials and was not a direct invalidator of the data used to justify the trial designs. 22 Duke University took no permanent actions upon learning of the analytic inaccuracy, first halting and then restarting the trials. Duke also sequestered from the scientific community the extent and relevant details of its investigation. Upon learning of the mistaken award claim, however, Duke University fired the lead investigator, terminated the remaining trials, and began sharing sensitive details publicly. 22 Until the point when a personality failure emerged, the short shrift given to the issue of analytic accuracy was characteristic of this episode.
(6) Passion
Passion is proposed as an important factor in research success. 23 Passion is a compulsion to perform, impatience to see a result, uncompelled enjoyment and participation, gumption. Although it is not formally measured, a measurement is in theory possible. Passion is by nature malleable, as implied by the existence of words such as “invigorate” or “disenchantment.”
Taking care to sustain the passion of the work force is, it is argued here, generally placed low among the list of institutional scientific objectives. Experience shows that among the best investigators discussions of investigational passion are uncomfortable enough that they are conducted largely in private, and among some investigators these discussions incite anger. 24 Embodied in the design of the current model is a prediction: that greater attention by research institutions and funding agencies to officially valuing passion, to avoiding the inducement of disenchantment, and to convincingly supplying a constructive vision for the future may yield unexpected benefits, even despite prolonged funding limitations and workforce restrictions. It is a failure of leadership when such measures are not pursued with serious and open devotion.
The investigator embodies initiative or passion (or even inertia), which certainly helps determine the quality of the investigator. Yet, in academia the major determinant of a team's passion might instead be the spirit of the institution, the career tendencies of the work force, and the Zeitgeist of the field of study. These tendencies owe to the transient nature of most participants in academic research; they are students and postdoctoral trainees for the most part. It remains unclear whether, when recruiting students into the profession, graduate schools and mentors can at an early timepoint select for students likely to retain or acquire a “fire-in-the-belly” passion as their career development progresses. Thus, Passion in a research endeavor represents a truly independent variable—it is not merely a subset of the Investigator Quality variable.
To the extent that a lack of passion might be a serious bottleneck to research productivity, the causes should be investigated. 23 Robert Pirsig in Zen and the Art of Motorcycle Maintenance referred to a similar idea as a “gumption trap” and identified two types: external set-backs and internal hang-ups. 25 Institutions can better address an investigator’s external set-backs 7 (such as providing “bridge funding” in a predictable manner during temporary shortfalls in outside funding) and should seek to address any reproducible patterns of internal hang-ups (such as anxiety over career advancement opportunities) affecting their research labor force. Even concerning a very modern impediment such as cyberslacking (the personal use of the Internet at work), an expanding literature is examining its underlying causes, from employees’ perceptions of its acceptability to their own demotivational feelings. 26
There is a great need to understand what motivates investigators, how we can incentivize them, and what types of incentives produce overall-positive consequences over long time horizons.
Feedback Loops
When interesting phenomena need to be explained, multiple variables may have interplay. Feedback loops can be powerful phenomena emerging from fairly simple relationships of dependent and independent variables. It can be difficult to disentangle what is an independent variable (an input) and a dependent variable (an output), but it is valuable to try. Funding levels, for example, are both an input resource and a measure of output, for success in research (the output) begets improved funding (an input). Similarly, the experience of an investigator is both an input and an output; it determines success, but in turn, success permits career longevity. These relationships create positive-feedback loops which can be critical for long-term research success in academia, just as in business. The unfortunate aspect of a positive-feedback loop is that when a problem emerges, it magnifies, and it risks creating deep failure. Thus, a talented scientist leaving academic research for a few years after birth of offspring may find it nearly impossible to become re-established in the prior career. Can the detrimental patterns of feedback loops be recognized? Can detrimental positive-feedback loops be turned around, so that they become beneficial again? Are there also negative-feedback loops, and are they beneficial or detrimental? Perhaps a nation or institution could support its research better by focusing on these variables in a more systematic manner.
Interactions, especially when they change magnitude, can create a ratchet effect. These are often detrimental and should be managed with care. It might be valuable to explore in detail a theoretical “ratchet risk” scenario. For example, let us consider a ratchet constructed solely from the interplay of two alphas: Funding and Investigator Quality. We postulate a research community operating under certain premises. The investigators are of two classes. “Capable” investigators are potentially productive, experienced and well trained, intelligent, and analytic. Their number is finite and subject to slow change: increasing with incentives and falling with disincentives. Additional “not-so-capable” investigators are unlimited in number. They can be recruited from a large pool of trained degree-holders and, having comparatively fewer attractive career alternatives, they enter the investigator pool more rapidly upon inducements and become trapped in a foundering field at a higher rate than capable investigators when disincentives dominate or funding decreases.
Under a constant funding payline (i.e., assuming that a constant fraction of proposals become funded), one expects that for any given funding level, a dynamic equilibrium would establish itself so that the ratio of capable and not-so-capable investigators became stable over time. The ratio would be based on a multitude of sociologic considerations and essentially be unpredictable; the ratio would emerge empirically. In a second scenario in which an increase or a decrease in the payline was instead gradual, the pool of capable investigators might be able to grow or shrink along with the funding change. In a third and rapidly oscillating funding scenario, each increase in the payline sees the pool of funded investigators become diluted with a disproportional increase in not-so-capable investigators. Accompanying each payline decrease, the capable investigators would preferentially migrate to opportunities in other careers, again effectively producing a dilution as the proportion of not-so-capable investigators rises. As the cycle begins anew, the ratchet has clicked in the deleterious direction.
One would want to introduce positive ratchets and constructive feedback loops into the research setting, for it is not inherent in these interactions that they be solely deleterious. Loops and ratchets can be manipulated with incentives, evaluations, and other policy interventions.
Quantitation and Paradoxes
It would be valuable to take a quantitative look at academic research productivity. A rigorous look at the six alphas is likely to reveal paradoxes. Paradoxes are fascinating, for they are unusually instructive. For example, in biomedical science, naive investigators might produce the greater number of new biomarkers, a seemingly important indication of creativity and a measure of high output. Yet, experienced investigators having high impact may create most of the biomarkers that become clinically used and FDA-approved, possibly creating an inverse (paradoxical) relationship between numbers of biomarkers identified and impact, or between investigator experience and work produced. This paradox is a clue that an inappropriate metric had been chosen. As a second example, in resource-poor environments, a large research team may signify a very competitive group; at a well-endowed campus, however, it may be a sign of stagnation and evidence of protection from a competitive environment. Providing evidence of this paradox, a large systematic study at the Louis Pasteur University found that the size of the lab was negatively associated with research performance. 12
As a third example, a highly interactive research group might find itself conscripted into participating in politically expedient but low-impact collaborative research and forced to attend geographically distant planning meetings. Meanwhile, an isolated group might paradoxically outperform, completing the same project without interference and without attending any inter-institutional meetings. The first group is obliged to operate at greater than “critical mass” and in a world of diminishing returns. The latter group is free to function with the number of participants of their choosing and thus remains near the optimal point on the efficiency curve. This represents a paradox in which higher quantitative measures of collaborations, satellite labs, affiliated scientists, and geographic locations prove detrimental.
As a fourth example, it is widely believed that breakthrough discoveries in a field are disproportionately produced by newly arrived, young, investigators. The phenomenon of early productivity is difficult to explain, however. 7 To address this apparent paradox, one could examine specific questions. Are early breakthroughs due to greater risk-taking or other systematic differences in the manner by which young investigators conduct their science? Might the phenomenon be more universal than suspected, even affecting experienced investigators? This could occur due to “experienced” investigators actually having little experience in the direction from which comes the breakthrough (i.e., when assessing the experience level in research, do we do so superficially by overlooking that research fields change over time, constantly producing waves of “inexperience”?). Is it a statistical quirk, due to the number of new investigators being disproportionately large in any given field? Is it a self-fulfillment fallacy, whereby the one making a breakthrough will become stably associated and productive within a field but a similar new investigator lacking a breakthrough may not, 27 or an impression affected by recall bias, in which breakthroughs by younger investigators are more memorable?
Values Not Modeled
Not covered by the proposed model are additional points of critique concerning values other than mere output, but found in viewpoints already published by others regarding focused public research. Should the size of an effort (the lab, grant, or collaborative group) be small or large? Should a project be led by a sole individual or be a team effort? Are certain types of research more appropriately performed by for-profit entities but not by academic or governmental projects? Should social goals be permitted to compete with raw meritocracy, such as spreading the wealth among famous and less-famous institutions, dictating a balance (a target ratio) between research conducted by younger and older generations, or increasing the number of funded labs as a goal justifying restrictions on the resources of more successfully funded labs, on the view that one group has too little and the other too much? Can the quality of peer review or grant management at the funding source be improved (not should, but can)? Who should own the products of publicly funded research? Should collaborative efforts be favored over an investigator’s freedom of association (and non-association)? How should the discoveries be commercialized? These additional points call for answers that may not be knowable or generalizable. One could not claim that they come from an unbiased application of an analytic model.
The six-alpha model also is qualitative rather than quantitative. It is anticipated that, for certain purposes, it might be valuable to explore quantitative measures and various means of weighting factors. This extension is beyond the scope of the current proposal.
Reclassifications
There are undeniably a multitude of influences that compete for attention in the research management literature. Yet, it is the bold claim of this model to be exclusive. Additional influences on research output are most constructive when re-cast into the six alphas in order that we dissect them into mechanical concepts, thereby elevating the analytic power. Let us take an example. Public policies have increasingly asked that research output be influenced through deliberate “priority-setting,” “research evaluation,” or “performance-based funding” by a research institution or funding agency. 5 Unfortunately, these terms do not readily guide the measurement of progress in individual variables. Let us take the first example: by what mechanisms is a “priority” subsequently made manifest, and are these mechanisms well balanced to optimize output? The application of the proposed model mandates separate accountings: whether the priority-setting created new investments or funding for ongoing research activities; whether the applicant pool of investigators improved upon publicity for the new priority; whether the institution reduced the existing inefficiencies in the priority area; whether the research mix was shuffled; whether increased scrutiny by international colleagues, accompanying the new publicity of the institution’s efforts, encouraged an improvement in analytic accuracy; and whether being the top institutional priority had bolstered raw gumption. To aid learning, the model invites examples of both successful and unsuccessful priority-setting.
A Case Study
It is illustrative to use the six-alpha model to analyze a real-world example. A case report from the University of Missouri Sinclair School of Nursing 28 describes an inspiring, intentional re-engineering of an academic nursing unit from having no NIH funding to a top-20 ranking. The report is detailed and primarily factual, and to place it in a structured philosophical context entails re-interpreting the provided facts according to the model. After a somewhat accomplished past academic history, the faculty was no longer submitting grants to the NIH, and key members were approaching retirement. Investments had been made in clinical and educational space but not for devoted research activities. Funding for ongoing research salaries was available from split careers in which the compensation for clinical activities permitted borrowing of time for some research. The employment of full-time researchers being precluded, the momentum of research was self-limited and thus restricted the research quality of the Investigators. Any renewed attempts to reinvigorate new research initiatives using faculty-wide discussions were hijacked by subsets of persons remaining uncommitted to an increased research intensity—the group’s Passion had been doused by individuals lacking impatience or gumption to see the new research succeed. It became a top institutional priority to elevate the research activities. But specifically, how?
In order to raise the average passion of the key groups, the dean instituted a “passion filter”; specifically, those few faculty proposing a new initiative (here, a Research Interest Group) were appointed to lead group meetings. To these groups, startup resources were made available. To raise the maximal quality and passion among the investigator pool, new hires were taken from applicants intending intensified research, and the pool of applicants was intentionally molded by publicizing research as a top institutional goal. The institution raised the efficiency of the research environment by appointing an Associate Dean for Research, offering seed funding for new projects, giving feedback on proposals intended for outside funding, organizing mock reviews, and hiring a professional editor to sharpen up the applications. Success in output (acquiring new NIH funding) begot new inputs into building investments, ongoing consumption including increased research personnel, quality of investigators, environmental efficiencies, and even more passion among the research applicants and investigators.
Although this institution apparently avoided the danger, change is risky. Thus, using a model will generate simple suggestions, and yet judgment must be employed lest one needlessly submit to the law of unintended consequences. For example, instituting a “passion filter” could select for superficial and highly kinetic individuals, eliminating circumspection; in contrast, experienced participants first might prefer to lay a more intellectually sound foundation prior to action. Also, hiring full-time researchers could weaken the thrust toward clinical relevance that had been previously maintained by the dual-careerists. Teaching might become sidelined while research ascends. Instituting broad change could drive away talented individuals whose contributions do not conform to generalisms, plans, or theory. Rapid growth could morph into chasing fads; indeed, the Missouri school resolved to “hire faculty with interests in fundable topics.”
The model is impartial, and so it also generates criticism and questions of this otherwise outstanding case report. First, the new Research Mix was not described. Was the research exclusively incremental, lacking novelty? Was the emergent research team compassionate enough to do confirmatory research? Did they have the authoritative chops to confront widespread but harmful notions while fostering a more accurate view? Second, the report omits mention of analytic accuracy. Thus, the concept of productivity became divorced from the essential concern of whether the new knowledge could stand the test of time, so as to achieve true value. Third, the output measure was not formally defined, but appeared to be work performed. There was no claim regarding the impact of the research. Did a new nursing practice become instituted nationwide on the heels of their new research and publications? Did local rates of intravenous line infections and other nursing deficiencies fall? Fourth, was the research merely selfish, serving in the main to propagate more research at this institution? If their research had never been performed, would the world be a lesser place? Or did the taxpayers, who supplied the NIH funds, get a compensatory benefit from the transaction? In theory, should the taxpayers want to continue supporting this research, provided that they could become well informed about it? In turn, this raises the question of stability. If their NIH funding were to become less available, would the research be compelling enough to attract other sources of funding? Is the research capable of self-support through licensing, entrepreneurship, or philanthropy; could it be sustained entirely funded by their own institution owing to the service the research provides to its academic and patient care missions?
Finally, while inputs rose (i.e., larger NIH funding amounts and institutional expenses to improve efficiency) and output also rose (i.e., production, as measured by publication numbers), actual productivity was not addressed. No particular ratio was defined as the measure of productivity. Did the output per research hour rise? Did the output per dollar, per person, or per square foot of input resources improve? Did the university, the NIH, and the taxpayer pay too much for the output? The model does not provide answers, but it sharpens the analysis.
Implications and Summary
The six-alpha model carries implications. A danger exists in focusing inappropriately on the most convenient of the component variables, for in a multiplicative model threshold effects (diminishing returns) occur when even single factors remain undeveloped. To institute changes intended to improve productivity requires that one consider the impact of the change on all six alphas. A final and optimistic implication lies in opportunities for improved research productivity, aside from any hoped-for improvements in funding levels. These opportunities can lie overlooked and yet available to be exploited.
Policy makers should want to build processes of continuous improvement and problem management. These would include the standard steps of finding the causes of problems, instituting a solution, measuring the effects of an intervention, and reiteration/feedback to re-assess all stages after the measured intervention. Such study and intervention requires a framework for the analysis. With this as the ultimate intent, the factor model is presented.
Acknowledgments
No financial conflicts to declare. This work has been supported by NIH grant CA62924 and the Everett and Marjorie Kovler Professorship in Pancreas Cancer Research. I appreciated the critical reading of the manuscript and suggestions by Phil Phan of the Johns Hopkins Carey Business School, Baltimore.
Previously published online: www.landesbioscience.com/journals/cbt/article/18368
- 1. Hirsch JE. An index to quantify an individual's scientific research output. Proc Natl Acad Sci USA. 2005;102:16569–72. doi: 10.1073/pnas.0507655102.
- 2. Schubert A. Using the h-index for assessing single publications. Scientometrics. 2009;78:559–65. doi: 10.1007/s11192-008-2208-3.
- 3. Cole S, Cole JR. Scientific output and recognition: a study in the operation of the reward system in science. Am Sociol Rev. 1967;32:377–90. doi: 10.2307/2091085.
- 4. McDade LA, Maddison DR, Guralnick R, Piwowar HA, Jameson ML, Helgen KM, et al. Biology needs a modern assessment system for professional productivity. Bioscience. 2011;61:619–25. doi: 10.1525/bio.2011.61.8.8.
- 5. Leisyte L, Horta H. Academic knowledge production, diffusion, and commercialization: policies, practices and perspectives. Sci Public Policy. 2011;38:422–4. doi: 10.3152/030234211X12960315267697.
- 6. Bland CJ, Ruffin MT. Characteristics of a productive research environment: literature review. Acad Med. 1992;67:385–97. doi: 10.1097/00001888-199206000-00010.
- 7. Stephan PE. The economics of science. J Econ Lit. 1996;34:1199–235.
- 8. von Tunzelmann N, Ranga M, Martin B, Geuna A. The effects of size on research performance: a SPRU review. Report prepared for the Office of Science and Technology, Department of Trade and Industry; 2003.
- 9. NCI. The nation's investment in cancer research: an annual plan and budget proposal for fiscal year 2012. National Cancer Institute, National Institutes of Health, U.S. Department of Health and Human Services; 2011.
- 10. Bland CJ, Schmitz CC. Characteristics of the successful researcher and implications for faculty development. J Med Educ. 1986;61:22–31. doi: 10.1097/00001888-198601000-00003.
- 11. Bozeman B, Dietz JS, Gaughan M. Scientific and technical human capital: an alternative model for research evaluation. Int J Technol Manag. 2001;22:716–40. doi: 10.1504/IJTM.2001.002988.
- 12. Carayol N, Matt M. Individual and collective determinants of academic scientists' productivity. Inform Econ Policy. 2003;18:55–72.
- 13. Long JS, McGinnis R. Organizational context and scientific productivity. Am Sociol Rev. 1981;46:422–42. doi: 10.2307/2095262.
- 14. Fox MF. Research, teaching and publication productivity: mutuality versus competition in academia. Soc Educ. 1992;65:293–305. doi: 10.2307/2112772.
- 15. Brody JR, Kern SE. Stagnation and herd mentality in the biomedical sciences. Cancer Biol Ther. 2004;3:903–10. doi: 10.4161/cbt.3.9.1082.
- 16. Campanario JM. Fraud: retracted articles are still being cited. Nature. 2000;408:288. doi: 10.1038/35042753.
- 17. Ioannidis JP. Why most published research findings are false. PLoS Med. 2005;2:e124. doi: 10.1371/journal.pmed.0020124.
- 18. Diamandis EP. Cancer biomarkers: can we turn recent failures into success? J Natl Cancer Inst. 2010;102:1462–7. doi: 10.1093/jnci/djq306.
- 19. Diamandis EP. Quality of the scientific literature: all that glitters is not gold. Clin Biochem. 2006;39:1109–11. doi: 10.1016/j.clinbiochem.2006.08.015.
- 20. Potti A, Dressman HK, Bild A, Riedel RF, Chan G, Sayer R, et al. Retraction: Genomic signatures to guide the use of chemotherapeutics. Nat Med. 2006;12:1294–300. doi: 10.1038/nm1491.
- 21. Coombes KR, Wang J, Baggerly KA. Microarrays: retracing steps. Nat Med. 2007;13:1276–7; author reply 1277–8. doi: 10.1038/nm1107-1276b.
- 22. Baggerly KA, Coombes K. Retraction based on data given to Duke last November, but apparently disregarded. Cancer Lett. 2010;36:1–4.
- 23. Kern SE. Where's the passion? Cancer Biol Ther. 2010;10:655–7. doi: 10.4161/cbt.10.7.12994.
- 24. Wylie C. Principal investigators weigh in. Cancer Biol Ther. 2010;10:1–10. doi: 10.4161/cbt.10.9.14054.
- 25. Pirsig RM. Zen and the Art of Motorcycle Maintenance. William Morrow; 1974.
- 26. Vitak J, Crouse J, LaRose R. Personal Internet use at work: understanding cyberslacking. Comput Human Behav. 2011;27:1751–9. doi: 10.1016/j.chb.2011.03.002.
- 27. Lightfield ET. Output and recognition of sociologists. Am Sociol. 1971;6:128.
- 28. Conn VS, Porter RT, McDaniel RW, Rantz MJ, Maas ML. Building research productivity in an academic setting. Nurs Outlook. 2005;53:224–31. doi: 10.1016/j.outlook.2005.02.005.
Descriptive Analytics – Methods, Tools and Examples
Descriptive Analytics
Definition:
Descriptive analytics focuses on describing or summarizing raw data and making it interpretable. This type of analytics provides insight into what has happened in the past. It involves the analysis of historical data to identify patterns, trends, and insights. Descriptive analytics often uses visualization tools to represent the data in a way that is easy to interpret.
Descriptive Analytics in Research
Descriptive analytics plays a crucial role in research, helping investigators understand and describe the data collected in their studies. Here’s how descriptive analytics is typically used in a research setting:
- Descriptive Statistics: In research, descriptive analytics often takes the form of descriptive statistics . This includes calculating measures of central tendency (like mean, median, and mode), measures of dispersion (like range, variance, and standard deviation), and measures of frequency (like count, percent, and frequency). These calculations help researchers summarize and understand their data.
- Visualizing Data: Descriptive analytics also involves creating visual representations of data to better understand and communicate research findings . This might involve creating bar graphs, line graphs, pie charts, scatter plots, box plots, and other visualizations.
- Exploratory Data Analysis: Before conducting any formal statistical tests, researchers often conduct an exploratory data analysis, which is a form of descriptive analytics. This might involve looking at distributions of variables, checking for outliers, and exploring relationships between variables.
- Initial Findings: Descriptive analytics are often reported in the results section of a research study to provide readers with an overview of the data. For example, a researcher might report average scores, demographic breakdowns, or the percentage of participants who endorsed each response on a survey.
- Establishing Patterns and Relationships: Descriptive analytics helps in identifying patterns, trends, or relationships in the data, which can guide subsequent analysis or future research. For instance, researchers might look at the correlation between variables as a part of descriptive analytics.
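The descriptive statistics and exploratory checks listed above can be sketched with Python's standard library alone. The survey scores below are hypothetical, illustrative data, and the 1.5 × IQR rule is just one common convention for flagging outliers:

```python
import statistics

# Hypothetical survey scores on a 1-7 Likert scale (illustrative data only)
scores = [4, 5, 5, 6, 3, 5, 4, 7, 5, 6, 2, 5]

# Measures of central tendency
mean = statistics.mean(scores)      # 4.75
median = statistics.median(scores)  # 5.0
mode = statistics.mode(scores)      # 5

# Measures of dispersion
stdev = statistics.stdev(scores)           # sample standard deviation
data_range = max(scores) - min(scores)     # 5

# Simple outlier check using the 1.5 * IQR rule
q1, q2, q3 = statistics.quantiles(scores, n=4)  # quartiles
iqr = q3 - q1
outliers = [x for x in scores if x < q1 - 1.5 * iqr or x > q3 + 1.5 * iqr]

print(mean, median, mode, data_range, outliers)
```

With pandas available, `DataFrame.describe()` produces a comparable summary (count, mean, std, quartiles) in a single call.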
Descriptive Analytics Techniques
Descriptive analytics involves a variety of techniques to summarize, interpret, and visualize historical data. Some commonly used techniques include:
Statistical Analysis
This includes basic statistical methods like mean, median, mode (central tendency), standard deviation, variance (dispersion), correlation, and regression (relationships between variables).
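For the relationship measures named above (correlation and regression), a minimal from-first-principles sketch follows; the paired observations are hypothetical, labeled here as advertising spend versus sales purely for illustration:

```python
import statistics

# Hypothetical paired observations: ad spend (x) vs. sales (y)
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 4.2, 5.9, 8.1, 9.9]

n = len(x)
mean_x, mean_y = statistics.mean(x), statistics.mean(y)

# Sample covariance, computed directly from its definition
cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y)) / (n - 1)

# Pearson correlation coefficient
r = cov / (statistics.stdev(x) * statistics.stdev(y))

# Simple linear regression (ordinary least squares): slope and intercept
slope = cov / statistics.variance(x)
intercept = mean_y - slope * mean_x

print(round(r, 3), round(slope, 2), round(intercept, 2))
```

A correlation near 1 here confirms the strong linear relationship that the regression line (slope and intercept) then quantifies.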
Data Aggregation
It is the process of compiling and summarizing data to obtain a general perspective. It can involve methods like sum, count, average, min, max, etc., often applied to a group of data.
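A group-wise aggregation of this kind can be sketched without any external library; the sales records and region names below are hypothetical:

```python
from collections import defaultdict

# Hypothetical sales records: (region, amount)
sales = [
    ("North", 120), ("South", 80), ("North", 150),
    ("South", 95), ("West", 60), ("North", 130),
]

# Group the amounts by region
groups = defaultdict(list)
for region, amount in sales:
    groups[region].append(amount)

# Apply the usual aggregates (sum, count, average, min, max) per group
summary = {
    region: {
        "sum": sum(v), "count": len(v),
        "avg": sum(v) / len(v), "min": min(v), "max": max(v),
    }
    for region, v in groups.items()
}

print(summary["North"])  # e.g. North: sum 400 over 3 records
```

In pandas the same operation is a one-liner, `df.groupby("region")["amount"].agg(["sum", "count", "mean", "min", "max"])`.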
Data Mining
This involves analyzing large volumes of data to discover patterns, trends, and insights. Techniques used in data mining can include clustering (grouping similar data), classification (assigning data into categories), association rules (finding relationships between variables), and anomaly detection (identifying outliers).
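Of the data-mining techniques listed, anomaly detection is the simplest to sketch; a common baseline flags values lying more than a chosen number of standard deviations from the mean (a z-score rule). The transaction counts below are hypothetical, with one deliberately anomalous day:

```python
import statistics

# Hypothetical daily transaction counts; one day is anomalous
counts = [102, 98, 105, 99, 101, 97, 300, 103, 100, 96]

center = statistics.mean(counts)
spread = statistics.stdev(counts)

# Flag values more than 2 sample standard deviations from the mean
anomalies = [x for x in counts if abs(x - center) / spread > 2]
print(anomalies)  # the 300-count day stands out
```

Clustering and classification, the other techniques mentioned, are typically done with a library such as scikit-learn rather than by hand.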
Data Visualization
This involves presenting data in a graphical or pictorial format to provide clear and easy understanding of the data patterns, trends, and insights. Common data visualization methods include bar charts, line graphs, pie charts, scatter plots, histograms, and more complex forms like heat maps and interactive dashboards.
Reporting
This involves organizing data into informational summaries to monitor how different areas of a business are performing. Reports can be generated manually or automatically and can be presented in tables, graphs, or dashboards.
Cross-tabulation (or Pivot Tables)
It involves displaying the relationship between two or more variables in a tabular form. It can provide a deeper understanding of the data by allowing comparisons and revealing patterns and correlations that may not be readily apparent in raw data.
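A cross-tabulation reduces to counting (row, column) pairs, which the standard library handles directly; the survey responses below are hypothetical:

```python
from collections import Counter

# Hypothetical survey responses: (age_group, preferred_channel)
responses = [
    ("18-34", "mobile"), ("18-34", "mobile"), ("18-34", "desktop"),
    ("35-54", "desktop"), ("35-54", "mobile"), ("55+", "desktop"),
    ("55+", "desktop"),
]

# Count each (row, column) combination; missing cells default to 0
crosstab = Counter(responses)

rows = sorted({r for r, _ in responses})
cols = sorted({c for _, c in responses})
for row in rows:
    cells = {col: crosstab[(row, col)] for col in cols}
    print(row, cells)
```

The real `pandas.crosstab(df["age_group"], df["channel"])` builds the same table, with margins and normalization available as options.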
Descriptive Modeling
Some techniques use complex algorithms to interpret data. Examples include decision tree analysis, which provides a graphical representation of decision-making situations, and neural networks, which are used to identify correlations and patterns in large data sets.
Descriptive Analytics Tools
Some common Descriptive Analytics Tools are as follows:
Excel: Microsoft Excel is a widely used tool that can be used for simple descriptive analytics. It has powerful statistical and data visualization capabilities. Pivot tables are a particularly useful feature for summarizing and analyzing large data sets.
Tableau: Tableau is a data visualization tool that is used to represent data in a graphical or pictorial format. It can handle large data sets and allows for real-time data analysis.
Power BI: Power BI, another product from Microsoft, is a business analytics tool that provides interactive visualizations with self-service business intelligence capabilities.
QlikView: QlikView is a data visualization and discovery tool. It allows users to analyze data and use this data to support decision-making.
SAS: SAS is a software suite that can mine, alter, manage and retrieve data from a variety of sources and perform statistical analysis on it.
SPSS: SPSS (Statistical Package for the Social Sciences) is a software package used for statistical analysis. It’s widely used in social sciences research but also in other industries.
Google Analytics: For web data, Google Analytics is a popular tool. It allows businesses to analyze detailed information about visitors to their websites, providing valuable insights that can help shape a business's success strategy.
R and Python: Both are programming languages that have robust capabilities for statistical analysis and data visualization. With packages like pandas, matplotlib, seaborn in Python and ggplot2, dplyr in R, these languages are powerful tools for descriptive analytics.
Looker: Looker is a modern data platform that can take data from any database and let you start exploring and visualizing.
When to use Descriptive Analytics
Descriptive analytics forms the base of the data analysis workflow and is typically the first step in understanding your business or organization’s data. Here are some situations when you might use descriptive analytics:
Understanding Past Behavior: Descriptive analytics is essential for understanding what has happened in the past. If you need to understand past sales trends, customer behavior, or operational performance, descriptive analytics is the tool you’d use.
Reporting Key Metrics: Descriptive analytics is used to establish and report key performance indicators (KPIs). It can help in tracking and presenting these KPIs in dashboards or regular reports.
Identifying Patterns and Trends: If you need to identify patterns or trends in your data, descriptive analytics can provide these insights. This might include identifying seasonality in sales data, understanding peak operational times, or spotting trends in customer behavior.
Informing Business Decisions: The insights provided by descriptive analytics can inform business strategy and decision-making. By understanding what has happened in the past, you can make more informed decisions about what steps to take in the future.
Benchmarking Performance: Descriptive analytics can be used to compare current performance against historical data. This can be used for benchmarking and setting performance goals.
Auditing and Regulatory Compliance: In sectors where compliance and auditing are essential, descriptive analytics can provide the necessary data and trends over specific periods.
Initial Data Exploration: When you first acquire a dataset, descriptive analytics is useful to understand the structure of the data, the relationships between variables, and any apparent anomalies or outliers.
Examples of Descriptive Analytics
Examples of Descriptive Analytics are as follows:
Retail Industry: A retail company might use descriptive analytics to analyze sales data from the past year. They could break down sales by month to identify any seasonality trends. For example, they might find that sales increase in November and December due to holiday shopping. They could also break down sales by product to identify which items are the most popular. This analysis could inform their purchasing and stocking decisions for the next year. Additionally, data on customer demographics could be analyzed to understand who their primary customers are, guiding their marketing strategies.
Healthcare Industry: In healthcare, descriptive analytics could be used to analyze patient data over time. For instance, a hospital might analyze data on patient admissions to identify trends in admission rates. They might find that admissions for certain conditions are higher at certain times of the year. This could help them allocate resources more effectively. Also, analyzing patient outcomes data can help identify the most effective treatments or highlight areas where improvement is needed.
Finance Industry: A financial firm might use descriptive analytics to analyze historical market data. They could look at trends in stock prices, trading volume, or economic indicators to inform their investment decisions. For example, analyzing the price-earnings ratios of stocks in a certain sector over time could reveal patterns that suggest whether the sector is currently overvalued or undervalued. Similarly, credit card companies can analyze transaction data to detect any unusual patterns, which could be signs of fraud.
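The retail scenario above (breaking sales down by month to spot seasonality, and by product to find best sellers) amounts to a group-by aggregation. Here is a minimal pandas sketch; the transaction records are hypothetical:

```python
import pandas as pd

# Hypothetical transaction-level sales records (invented for illustration)
sales = pd.DataFrame({
    "month":   [11, 11, 12, 12, 1, 1, 6],
    "product": ["toy", "game", "toy", "game", "toy", "game", "toy"],
    "amount":  [500.0, 300.0, 700.0, 400.0, 200.0, 150.0, 180.0],
})

# Break revenue down by month to look for seasonality
by_month = sales.groupby("month")["amount"].sum().sort_values(ascending=False)

# Break revenue down by product to find the most popular items
by_product = sales.groupby("product")["amount"].sum().sort_values(ascending=False)

print(by_month)
print(by_product)
```

In this toy dataset the November and December totals dominate, mirroring the holiday-shopping pattern described in the text; the same two lines of code scale to millions of real transactions.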
Advantages of Descriptive Analytics
Descriptive analytics plays a vital role in the world of data analysis, providing numerous advantages:
- Understanding the Past: Descriptive analytics provides an understanding of what has happened in the past, offering valuable context for future decision-making.
- Data Summarization: Descriptive analytics is used to simplify and summarize complex datasets, which can make the information more understandable and accessible.
- Identifying Patterns and Trends: With descriptive analytics, organizations can identify patterns, trends, and correlations in their data, which can provide valuable insights.
- Informing Decision-Making: The insights generated through descriptive analytics can inform strategic decisions and help organizations to react more quickly to events or changes in behavior.
- Basis for Further Analysis: Descriptive analytics lays the groundwork for further analytical activities. It’s the first necessary step before moving on to more advanced forms of analytics like predictive analytics (forecasting future events) or prescriptive analytics (advising on possible outcomes).
- Performance Evaluation: It allows organizations to evaluate their performance by comparing current results with past results, enabling them to see where improvements have been made and where further improvements can be targeted.
- Enhanced Reporting and Dashboards: Through the use of visualization techniques, descriptive analytics can improve the quality of reports and dashboards, making the data more understandable and easier to interpret for stakeholders at all levels of the organization.
- Immediate Value: Unlike some other types of analytics, descriptive analytics can provide immediate insights, as it doesn’t require complex models or deep analytical capabilities to provide value.
Disadvantages of Descriptive Analytics
While descriptive analytics offers numerous benefits, it also has certain limitations or disadvantages. Here are a few to consider:
- Limited to Past Data: Descriptive analytics primarily deals with historical data and provides insights about past events. It does not predict future events or trends and can’t help you understand possible future outcomes on its own.
- Lack of Deep Insights: While descriptive analytics helps in identifying what happened, it does not answer why it happened. For deeper insights, you would need to use diagnostic analytics, which analyzes data to understand the root cause of a particular outcome.
- Can Be Misleading: If not properly executed, descriptive analytics can sometimes lead to incorrect conclusions. For example, correlation does not imply causation, but descriptive analytics might tempt one to make such an inference.
- Data Quality Issues: The accuracy and usefulness of descriptive analytics are heavily reliant on the quality of the underlying data. If the data is incomplete, incorrect, or biased, the results of the descriptive analytics will be too.
- Over-reliance on Descriptive Analytics: Businesses may rely too much on descriptive analytics and not enough on predictive and prescriptive analytics. While understanding past and present data is important, it’s equally vital to forecast future trends and make data-driven decisions based on those predictions.
- Doesn’t Provide Actionable Insights: Descriptive analytics is used to interpret historical data and identify patterns and trends, but it doesn’t provide recommendations or courses of action. For that, prescriptive analytics is needed.
About the author
Muhammad Hassan
Researcher, Academic Writer, Web developer
Strategies and Models
The choice of qualitative or quantitative approach to research has been traditionally guided by the subject discipline. However, this is changing, with many “applied” researchers taking a more holistic and integrated approach that combines the two traditions. This methodology reflects the multi-disciplinary nature of many contemporary research problems.
In fact, it is possible to define many different types of research strategy. The following list (Business Research Methods / Alan Bryman & Emma Bell, 4th ed., Oxford: Oxford University Press, 2015) is neither exclusive nor exhaustive.
Exploratory research
- Clarifies the nature of the problem to be solved
- Can be used to suggest or generate hypotheses
- Includes the use of pilot studies
- Used widely in market research

Descriptive research
- Provides general frequency data about populations or samples
- Does not manipulate variables (e.g. as in an experiment)
- Describes only the “who, what, when, where and how”
- Cannot establish a causal relationship between variables
- Associated with descriptive statistics

Analytical research
- Breaks down factors or variables involved in a concept, problem or issue
- Often uses (or generates) models as analytical tools
- Often uses micro/macro distinctions in analysis

Critical research
- Focuses on the analysis of bias, inconsistencies, gaps or contradictions in accounts, theories, studies or models
- Often takes a specific theoretical perspective (e.g. feminism; labour process theory)

Predictive research
- Mainly quantitative
- Identifies measurable variables
- Often manipulates variables to produce measurable effects
- Uses specific, predictive or null hypotheses
- Dependent on accurate sampling
- Uses statistical testing to establish causal relationships, variance between samples or predictive trends

Action research
- Associated with organisation development initiatives and interventions
- Practitioner based, works with practitioners to help them solve their problems
- Involves data collection, evaluation and reflection
- Often used to review interventions and plan new ones

Applied research
- Focuses on recognised needs, solving practical problems or answering specific questions
- Often has specific commercial objectives (e.g. product development)
Approaches to research
For many, perhaps most, researchers, the choice of approach is straightforward. Research into reaction mechanisms for an organic chemical reaction will take a quantitative approach, whereas research in social work focused on families and individuals is better suited to a qualitative approach. While some research benefits from one of the two approaches, other research yields more understanding from a combined approach.
In fact, qualitative and quantitative approaches to research share some important features, and each generally follows the steps of the scientific method. Each approach begins with qualitative reasoning or a hypothesis based on a value judgement; these judgements can then be translated into quantitative terms using both inductive and deductive reasoning. Both approaches can be very detailed, although qualitative research has more flexibility in its amount of detail.
Selecting an appropriate design for a study involves following a logical thought process; it is important to explore all possible consequences of using a particular design in a study. As well as carrying out a scoping study, researchers should familiarise themselves with both qualitative and quantitative approaches to research in order to make the best decision. Some researchers may quickly select a qualitative approach out of fear of statistics, but it may be a better idea to challenge oneself. The researcher should also be prepared to defend the chosen paradigm and research method; this is even more important if the proposal or grant application seeks money or other resources.
Ultimately, clear goals and objectives and a fit-for-purpose research design are more helpful and important than old-fashioned arguments about which approach to research is “best”. Indeed, there is probably no such thing as a single “correct” design – hypotheses can be studied by different methods using different research designs. A research design is probably best thought of as a series of signposts to keep the research headed in the right direction and should not be regarded as a highly specific plan to be followed without deviation.
Research models
There is no common agreement on the classification of research models but, for the purpose of illustration, five categories of research models and their variants are outlined below.
Physical models
A physical model is a physical object shaped to look like the represented phenomenon, usually built to scale e.g. atoms, molecules, skeletons, organs, animals, insects, sculptures, small-scale vehicles or buildings, life-size prototype products. They can also include 3-dimensional alternatives for two-dimensional representations e.g. a physical model of a picture or photograph.

Theoretical models
In this case, the term model is used loosely to refer to any theory phrased in formal, speculative or symbolic styles. They generally consist of a set of assumptions about some concept or system; are often formulated, developed and named on the basis of an analogy between the object, or system that it describes and some other object or different system; and they are considered an approximation that is useful for certain purposes. Theoretical models are often used in biology, chemistry, physics and psychology.

Mathematical models
A mathematical model refers to the use of mathematical equations to depict relationships between variables, or the behaviour of persons, groups, communities, cultural groups, nations, etc.
It is an abstract model that uses mathematical language to describe the behaviour of a system. They are used particularly in the natural sciences and engineering disciplines (such as physics, biology, and electrical engineering) but also in the social sciences (such as economics, sociology and political science). Types of mathematical models include trend (time series), stochastic, causal and path models. Examples include models of population and economic growth, weather forecasting and the characterisation of large social networks.

Mechanical models
Mechanical (or computer) models tend to use concepts from the natural sciences, particularly physics, to provide analogues for social behaviour. They are often an extension of mathematical models. Many computer-simulation models have shown how a research problem can be investigated through sequences of experiments e.g. game models; microanalytic simulation models (used to examine the effects of various kinds of policy on e.g. the demographic structure of a population); models for predicting storm frequency, or tracking a hurricane.

Symbolic models
These models are used to untangle meanings that individuals give to symbols that they use or encounter. They are generally simulation models, i.e. they are based on artificial (contrived) situations, or structured concepts that correspond to real situations. They are characterised by symbols, change, interaction and empiricism and are often used to examine human interaction in social settings.
The advantages and disadvantages of modelling
Take a look at the advantages and disadvantages below. It might help you think about what type of model you may use.
Advantages:
- The determination of factors or variables that most influence the behaviour of phenomena
- The ability to predict, or forecast the long term behaviour of phenomena
- The ability to predict the behaviour of the phenomenon when changes are made to the factors influencing it
- They allow researchers a view on difficult to study processes (e.g. old, complex or single-occurrence processes)
- They allow the study of mathematically intractable problems (e.g. complex non-linear systems such as language)
- They can be explicit, detailed, consistent, and clear (but that can also be a weakness)
- They allow the exploration of different parameter settings (i.e. evolutionary, environmental, individual and social factors can be easily varied)
- Models validated for a category of systems can be used in many different scenarios e.g. they can be reused in the design, analysis, simulation, diagnosis and prediction of a technical system
- Models enable researchers to generate unrealistic scenarios as well as realistic ones
Disadvantages:
- Difficulties in validating models
- Difficulties in assessing the accuracy of models
- Models can be very complex and difficult to explain
- Models do not “provide proof”
The next section describes the processes and design of research.
Economic evaluation using decision analytical modelling: design, conduct, analysis, and reporting
- Stavros Petrou, professor of health economics 1
- Alastair Gray, professor of health economics 2
- 1 Clinical Trials Unit, Warwick Medical School, University of Warwick, Coventry CV4 7AL, UK
- 2 Health Economics Research Centre, Department of Public Health, University of Oxford, Oxford, UK
- Correspondence to: S Petrou S.Petrou@warwick.ac.uk
- Accepted 8 February 2011
Evidence relating to healthcare decisions often comes from more than one study. Decision analytical modelling can be used as a basis for economic evaluations in these situations.
Economic evaluations are increasingly conducted alongside randomised controlled trials, providing researchers with individual patient data to estimate cost effectiveness. 1 However, randomised trials do not always provide a sufficient basis for economic evaluations used to inform regulatory and reimbursement decisions. For example, a single trial might not compare all the available options, provide evidence on all relevant inputs, or be conducted over a long enough time to capture differences in economic outcomes (or even measure those outcomes). 2 In addition, reliance on a single trial may mean ignoring evidence from other trials, meta-analyses, and observational studies. Under these circumstances, decision analytical modelling provides an alternative framework for economic evaluation.
Decision analytical modelling compares the expected costs and consequences of decision options by synthesising information from multiple sources and applying mathematical techniques, usually with computer software. The aim is to provide decision makers with the best available evidence to reach a decision—for example, should a new drug be adopted? Following on from our article on trial based economic evaluations, 1 we outline issues relating to the design, conduct, analysis, and reporting of economic evaluations using decision analytical modelling.
Glossary of terms
Cost effectiveness acceptability curve —Graphical depiction of the probability that a health intervention is cost effective across a range of willingness to pay thresholds held by decision makers for the health outcome of interest
Cost effectiveness plane —Graphical depiction of difference in effectiveness between the new treatment and the comparator against the difference in cost
Discounting —The practice of reducing future costs and health outcomes to present values
Health utilities —Preference based outcomes normally represented on a scale where 0 represents death and 1 represents perfect health
Incremental cost effectiveness ratio —A measure of cost effectiveness of a health intervention compared with an alternative, defined as the difference in costs divided by the difference in effects
Multiparameter evidence synthesis —A generalisation of meta-analysis in which multiple variables are estimated jointly
Quality adjusted life year (QALY) —Preference based measure of health outcome that combines length of life and health related quality of life (utility scores) in a single metric
Time horizon —The start and end points (in time) over which the costs and consequences of a health intervention will be measured and valued
Value of information analysis —An approach for estimating the monetary value associated with collecting additional information within economic evaluation
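The incremental cost effectiveness ratio defined in the glossary is a simple arithmetic quantity. The sketch below computes it for hypothetical costs and QALYs (all numbers invented for illustration):

```python
def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost effectiveness ratio: the difference in costs
    divided by the difference in effects (e.g. QALYs gained)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical example: a new treatment costing 12,000 vs 8,000 for the
# comparator, yielding 6.5 vs 6.0 QALYs
ratio = icer(12_000, 8_000, 6.5, 6.0)
print(ratio)  # 8000.0 per QALY gained
```

A decision maker would then compare this ratio against their willingness to pay threshold for one QALY.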
Defining the question
The first stage in the development of any model is to specify the question or decision problem. It is important to define all relevant options available for evaluation, the recipient population, and the geographical location and setting in which the options are being delivered. 3 The requirements of the decision makers should have a crucial role in identifying the appropriate perspective of the analysis, the time horizon, the relevant outcome measures, and, more broadly, the scope or boundaries of the model. 4 If these factors are unclear, or different decision makers have conflicting requirements, the perspective and scope should be broad enough to allow the results to be disaggregated in different ways. 5
Decision trees
The simplest form of decision analytical modelling in economic evaluation is the decision tree. Alternative options are represented by a series of pathways or branches as in figure 1, which examines whether it is cost effective to screen for breast cancer every two years compared with not screening. The first point in the tree, the decision node (drawn as a square) represents this decision question. In this instance only two options are represented, but additional options could easily be added. The pathways that follow each option represent a series of logically ordered alternative events, denoted by branches emanating from chance nodes (circular symbols). The alternatives at each chance node must be mutually exclusive and their probabilities should sum exactly to one. The end points of each pathway are denoted by terminal nodes (triangular symbols) to which values or pay-offs, such as costs, life years, or quality adjusted life years (QALYs), are assigned. Once the probabilities and pay-offs have been entered, the decision tree is “averaged out” and “folded back” (or rolled back), allowing the expected values of each option to be calculated. 4
Fig 1 Decision tree for breast cancer screening options 4
Decision trees are valued for their simplicity and transparency, and they can be an excellent way of clarifying the options of interest. However, they are limited by the lack of any explicit time variable, making it difficult to deal with time dependent elements of an economic evaluation. 6 Recursion or looping within the decision tree is also not allowed, so that trees representing chronic diseases with recurring events can be complex with numerous lengthy pathways.
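The "averaging out and folding back" calculation described above can be sketched as a short recursion: each chance node's expected value is the probability-weighted average of its branches. The probabilities and QALY payoffs below are hypothetical and are not the values behind the screening example in figure 1:

```python
def expected_value(node):
    """Recursively 'average out and fold back' a decision tree.
    A node is either a terminal payoff (a number) or a list of
    (probability, subtree) branches whose probabilities sum to one."""
    if isinstance(node, (int, float)):
        return node
    total_p = sum(p for p, _ in node)
    assert abs(total_p - 1.0) < 1e-9, "branch probabilities must sum to 1"
    return sum(p * expected_value(sub) for p, sub in node)

# Two options with hypothetical probabilities; payoffs are QALYs.
# "screen": cancer present (0.01) leads to a further chance node for
# detected-and-treated vs missed outcomes; "no_screen" has no detection branch.
screen = [(0.01, [(0.8, 14.0), (0.2, 6.0)]),
          (0.99, 15.0)]
no_screen = [(0.01, 8.0), (0.99, 15.0)]

print(expected_value(screen), expected_value(no_screen))
```

Rolling back each option to a single expected value is exactly what lets the two strategies be compared on costs and QALYs.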
Markov models
An alternative form of modelling is the Markov model. Unlike decision trees, which represent sequences of events as a large number of potentially complex pathways, Markov models permit a more straightforward and flexible sequencing of outcomes, including recurring outcomes, through time. Patients are assumed to reside in one of a finite number of health states at any point in time and make transitions between those health states over a series of discrete time intervals or cycles. 3 6 The probability of staying in a state or moving to another one in each cycle is determined by a set of defined transition probabilities. The definition and number of health states and the duration of the cycles will be governed by the decision problem: one study of treatment for gastro-oesophageal reflux disease used one month cycles to capture treatment switches and side effects, 7 whereas an analysis of cervical cancer screening used six monthly cycles to model lifetime outcomes. 8
Figure 2 presents a state transition diagram and matrix of transition probabilities for a Markov model of a hypothetical breast cancer intervention. There are three health states: well, recurrence of breast cancer, and dead. In this example, the probability of moving from the well state at time t to the recurrence state at time t+1 is 0.3, while the probability of moving from well to dead is 0.1. At each cycle the sum of the transition probabilities out of a health state (the row probabilities) must equal 1. In order for the Markov process to end, some termination condition must be set. This could be a specified number of cycles, a proportion passing through or accumulating in a particular state, or the entire population reaching a state that cannot be left (in our example, dead); this is called an absorbing state.
Fig 2 Markov state diagram and transition probability matrix for hypothetical breast cancer intervention. The arrows represent possible transitions between the three health states (well, recurrence, and dead), loops indicate the possibility of remaining in a health state in successive cycles, and the dashed line indicates the possibility of backwards transition from recurrence of breast cancer to the well state after successful treatment. The cycle length is set at one year
An important limitation of Markov models is the assumption that the transition probabilities depend only on the current health state, independent of historical experience (the Markovian assumption). In our example, the probability of a person dying from breast cancer is independent of the number of past recurrences and also independent of how long the person spent in the well state before moving to the recurrent state. This limitation can be overcome by introducing temporary states that patients can only enter for one cycle or by a series of temporary states that must be visited in a fixed sequence. 4
The final stage is to assign values to each health state, typically costs and health utilities. 6 9 Most commonly, such models simulate the transition of a hypothetical cohort of individuals through the Markov model over time, allowing the analyst to estimate expected costs and outcomes. This simply involves, for each cycle, summing costs and outcomes across health states, weighted by the proportion of the cohort expected to be in each state, and then summing across cycles. 3 If the time horizon of the model is over one year, discounting is usually applied to generate the present values of expected costs and outcomes. 1
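The cohort simulation described above can be sketched in a few lines. The "well" row of the transition matrix uses the probabilities quoted in the text (0.6 stay well, 0.3 recurrence, 0.1 dead); the recurrence row, the costs, the utilities, and the discount rate are all hypothetical values chosen for illustration:

```python
# Three-state Markov cohort model: well, recurrence, dead. One-year cycles.
P = [
    [0.6, 0.3, 0.1],   # from well (probabilities quoted in the text)
    [0.2, 0.5, 0.3],   # from recurrence (hypothetical; 0.2 = backward transition to well)
    [0.0, 0.0, 1.0],   # dead is an absorbing state
]
costs = [500.0, 5000.0, 0.0]   # annual cost per state (hypothetical)
utilities = [0.9, 0.6, 0.0]    # health utility per state (hypothetical)
discount = 0.035               # annual discount rate (hypothetical)

state = [1.0, 0.0, 0.0]        # whole cohort starts in "well"
total_cost = total_qalys = 0.0

for cycle in range(50):
    d = 1.0 / (1.0 + discount) ** cycle   # discount factor for this cycle
    # Sum costs and QALYs across states, weighted by cohort proportions
    total_cost += d * sum(s * c for s, c in zip(state, costs))
    total_qalys += d * sum(s * u for s, u in zip(state, utilities))
    # Transition: new proportion in state j = sum over i of state[i] * P[i][j]
    state = [sum(state[i] * P[i][j] for i in range(3)) for j in range(3)]

print(round(total_cost, 2), round(total_qalys, 3))
```

The discounted totals for two competing interventions, each run through its own transition matrix, would then feed directly into an incremental cost effectiveness ratio.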
Alternative modelling approaches
Although Markov models alone or in combination with decision trees are the most common models used in economic evaluations, other approaches are available.
Patient level simulation (or microsimulation) models the progression of individuals rather than hypothetical cohorts. The models track the progression of potentially heterogeneous individuals with the accumulating history of each individual determining transitions, costs, and health outcomes. 3 10 Unlike Markov models, they can simulate the time to next event rather than requiring equal length cycles and can also simulate multiple events occurring in parallel. 10
Discrete event simulations describe the progress of individuals through healthcare processes or systems, affecting their characteristics and outcomes over unrestricted time periods. 10 Discrete event simulations are not restricted to the use of equal time periods or the Markovian assumption and, unlike patient level simulation models, also allow individuals to interact with each other 11 —for example, in a transplant programme where organs are scarce and transplant decisions and outcomes for any individual affect everyone else in the queue.
Dynamic models allow internal feedback loops and time delays that affect the behaviour of the entire health system or population being studied. They are particularly valuable in studies of infectious diseases, where analysts may need to account for the evolving effects of factors such as herd immunity on the likelihood of infection over time, and their results can differ substantially from those obtained from static models. 12
The choice of modelling approach will depend on various factors, including the decision maker’s requirements. 10 11 13
Identifying, synthesising, and transforming data inputs
The process of identifying and synthesising evidence to populate a decision analytical model should be consistent with the general principles of evidence based medicine. 3 14 These principles are broadly established for clinical evidence. 15 Less clear is the strategy that should be adopted to identify and synthesise evidence on other variables, such as costs and health utilities, other than it should be transparent and appropriate given the objectives of the model. 16 Indeed, many health economists recognise that the time and resource constraints imposed by many funders of health technology assessments will tend to preclude systematic reviews of the evidence for all variables. 17
If evidence is not available from randomised trials, it has to be drawn from other sources, such as epidemiological or observational studies, medical records, or, more controversially, expert opinion. And sometimes the evidence from randomised trials may not be appropriate for use in the model—for example, cost data drawn from a trial might reflect protocol driven resource use rather than usual practice 18 or might not be generalisable to the jurisdiction of interest. 5 These methodological considerations have increased interest in multiparameter evidence synthesis (box) 19 in decision analytical modelling. These techniques acknowledge the importance of trying to incorporate correlations between variables in models, which may have an important influence on the resulting estimates of cost effectiveness. 2 However, accurately assessing the correlation between different clinical events, or between events and costs or health utilities, may be difficult without patient level data from a single source. Another complication is that evidence may have to be transformed in complex ways to meet the requirements of the model—for example, interval probabilities reported in the literature may have to be transformed into instantaneous rates and then into transition probabilities corresponding to the cycle length used in a Markov model. 3 4 14
Quantifying and reporting cost effectiveness
Once data on all variables required by the model have been assembled, the model is run for each intervention being evaluated in order to estimate its expected costs and expected outcomes (or effects). The results are typically compared in terms of incremental cost effectiveness ratios and depicted on the cost effectiveness plane (box). 1
Handling variability, uncertainty, and heterogeneity
The results of a decision analytical model are subject to the influences of variability, uncertainty, and heterogeneity, and these must be handled appropriately if decision makers are to be confident about the estimates of cost effectiveness. 3 13
Variability reflects the randomness arising from the modelling process itself—that is, the fact that models typically use random numbers when determining whether an event with a given probability of occurring happens or not in any given cycle or model run, so that an identical patient will experience different outcomes each time they proceed through the model. This variability, sometimes referred to as Monte Carlo uncertainty, is not informative and needs to be eliminated by running the model repeatedly until a stable estimate of the central tendency has been obtained. 20 There is little evidence or agreement on how many model runs are needed to eliminate such variability, but it may be many thousands.
Parameter uncertainty reflects the uncertainty and imprecision surrounding the value of model variables such as transition probabilities, costs, and health utilities. Standard sensitivity analysis, in which each variable is varied separately and independently, does not give a complete picture of the effects of joint uncertainty and correlation between variables. 6 Probabilistic sensitivity analysis, in which all variables are varied simultaneously using probability distributions informed by estimates of the sample mean and sampling error from the best available evidence, is therefore the preferred way of assessing parameter uncertainty. 13 Probabilistic sensitivity analysis is usually executed by running the model several thousand times, each time varying the parameter values across the specified distributions and recording the outputs—for example, costs and effects—until a distribution has been built up and confidence intervals can be estimated. Probabilistic sensitivity analysis also allows the analyst to present cost effectiveness acceptability curves, which show the probability that each intervention is cost effective at an assumed maximum willingness to pay for health gains. 21 If a model has been derived from a single dataset, bootstrapping can be used to model uncertainty—that is, repeatedly re-estimating the model using random subsamples drawn with replacement from the full sample. 22
Structural or model uncertainty reflects the uncertainty surrounding the structure of the model and the assumptions underpinning it—for example, the way a disease pathway is modelled. Such model uncertainty is usually examined with a sensitivity analysis, re-running the model with alternative structural assumptions. 6 Alternatively, several research groups could model the same decision problem in different ways and then compare their results in an agreed way. This approach has been used extensively in fields such as climate change but less commonly in health economics. However, one example is provided by the Mount Hood Challenge, which invited eight diabetes modelling groups to independently predict clinical trial outcomes on the basis of changes in risk factors and then compare their predictions. 23 How the results from different models can be reconciled in the absence of a gold standard is unclear; however, Bojke and colleagues have recommended some form of model averaging, whereby each model’s results could be weighted by a measure of model adequacy. 24
Finally, heterogeneity should be clearly differentiated from variability because it reflects differences in outcomes or in cost effectiveness that can in principle be explained by variations between subgroups of patients, either in baseline characteristics such as age, risk level, or disease severity, or in both baseline characteristics and relative treatment effects. As in the analysis of clinical trials, subgroups should be predefined and carefully justified in terms of their clinical and economic relevance.25 A model can then be re-run for different subgroups of patients.
Alternatively, heterogeneity can be addressed by making model variables functions of other variables—for example, transition probabilities between events or health states might be transformed into functions of age or disease severity. As with subgroup analysis in clinical trials, care must be taken to avoid generating apparently large differences in cost effectiveness that are not based on genuine evidence of heterogeneity. For example, Mihaylova and colleagues, recognising the absence of evidence of heterogeneity in treatment effect across subgroups in the Heart Protection Study, applied the same relative risk reduction to different subgroups defined in terms of absolute risk levels at baseline, resulting in large but reliable differences in cost effectiveness.26 27
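Making a model variable a function of another variable can be illustrated with a small helper. The baseline rate and age gradient below are invented; the helper converts an age-dependent event rate into an annual transition probability using the standard rate-to-probability transformation, so the same model structure can be re-run for subgroups defined by age.

```python
import math

# Hypothetical sketch: an age-dependent annual event rate (all baseline
# values invented) converted to a transition probability, so transition
# probabilities become functions of age rather than fixed constants.
def annual_event_prob(age, base_rate=0.01, rate_ratio_per_year=1.05):
    """Age-dependent event rate converted to an annual transition probability."""
    rate = base_rate * rate_ratio_per_year ** (age - 50)
    return 1.0 - math.exp(-rate)  # standard rate-to-probability conversion

for age in (50, 60, 70, 80):
    print(f"age {age}: annual event probability = {annual_event_prob(age):.4f}")
```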
Model evaluation
Evaluation is an important, and often overlooked, step in the development of a decision analytical model. Well evaluated models are more likely to be believed by decision makers. Three steps in model validation of escalating difficulty are face validation, internal validation, and external validation:
Face or descriptive validation entails checking whether the assumptions and structure of a model are reliable and sensible, and whether they can be explained intuitively.14 It may also require experiments to assess whether setting some variables at null or extreme values generates predictable effects on model outputs.
Internal validation requires thorough internal testing of the model—for example, by having an independent researcher, or different software, used to construct a replicate of the model and then checking that the results are consistent.14 28 Internal validation of a model derived from a single data source, such as a Markov model used to simulate long term outcomes beyond the end of a clinical trial, may involve demonstrating that the model’s predicted results fit the observed data used in the estimation.22 In these circumstances some analysts also favour splitting the initial data in two and using one set to “train” or estimate the model and the other to test or validate it. Some analysts also calibrate the model, adjusting variables to ensure that the results accord with aggregate and observable outcomes, such as overall survival.29 This approach has been criticised as an ad hoc search for values that makes it impossible to characterise the uncertainty in the model correctly.30
External validation assesses whether the model’s predictions match the observed results in a population or over a time period that was not used to construct the model. This might entail assessing whether the model can accurately predict future events. For example, the Mount Hood Challenge compared the predictions of the diabetes models with each other and with the reported trial outcomes.23 External validation might also be appropriate for calibrated models.
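An extreme-value check of the kind used in face validation can be automated. The three-state Markov cohort model below is hypothetical and deliberately simple (no half-cycle correction, invented transition probabilities); the checks assert that a null treatment effect produces identical outputs in both arms and that a beneficial effect increases life years.

```python
import numpy as np

# Hypothetical three-state Markov cohort model (well, sick, dead);
# rr_treat scales the well -> sick transition probability.
def life_years(p_sick, p_die, rr_treat, cycles=50):
    t = np.array([
        [1 - rr_treat * p_sick - p_die, rr_treat * p_sick, p_die],
        [0.0, 1.0 - 2 * p_die, 2 * p_die],   # sick patients die at twice the rate
        [0.0, 0.0, 1.0],                     # dead is absorbing
    ])
    cohort = np.array([1.0, 0.0, 0.0])       # everyone starts well
    alive = 0.0
    for _ in range(cycles):
        cohort = cohort @ t
        alive += cohort[:2].sum()            # fraction alive this cycle
    return alive

control = life_years(p_sick=0.10, p_die=0.02, rr_treat=1.0)
null_tx = life_years(p_sick=0.10, p_die=0.02, rr_treat=1.0)
real_tx = life_years(p_sick=0.10, p_die=0.02, rr_treat=0.5)

assert control == null_tx    # null treatment effect -> identical outputs
assert real_tx > control     # beneficial effect -> more life years
print(f"control: {control:.2f} life years; treated: {real_tx:.2f}")
```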
Value of additional research
Decision analytical models are increasingly used as a framework for indicating the need for and value of additional research. We have established that the analyst will never be certain that the value placed on each variable is correct. As a result, there are distributions surrounding the outputs of decision analytical models that can be estimated using probabilistic sensitivity analysis and synthesised using cost effectiveness acceptability curves.6 These techniques indicate the probability that the decision to adopt an intervention on grounds of cost effectiveness is correct. They also allow a quantification of the cost of making an incorrect decision, which, when combined with the probability of making an incorrect decision, generates the expected cost of uncertainty. This has become synonymous with the expected value of perfect information (EVPI)—that is, the monetary value associated with eliminating the possibility of making an incorrect decision by eliminating parameter uncertainty in the model.31 A population-wide EVPI can be estimated by multiplying the EVPI estimate produced by a decision analytical model by the number of decisions expected to be made on the basis of the additional information.32 This can then be compared with the potential costs of further research to determine whether further studies are economically worthwhile.33 34 The approach has been extended in the form of expected value of partial perfect information (EVPPI), which estimates the value of obtaining perfect information on a subset of parameters in the model, and the expected value of sample information (EVSI), which focuses on optimal study design issues such as the optimal sample size of further studies.3
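Given the simulated output of a probabilistic sensitivity analysis, EVPI follows directly from its definition: the expected net benefit when the best option can be chosen in every simulation (perfect information), minus the expected net benefit of the single option chosen up front. The net benefit draws and the size of the decision population below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical PSA output: net monetary benefit per simulation, two options.
nb = np.column_stack([
    rng.normal(10_000, 3_000, n),   # option A
    rng.normal(11_000, 4_000, n),   # option B
])

# EVPI = E[max over options] - max over options of E[net benefit]:
# with perfect information the best option is picked in each simulation;
# without it, one option must be chosen on expected net benefit alone.
evpi = nb.max(axis=1).mean() - nb.mean(axis=0).max()

# Population EVPI scales per-decision EVPI by the (assumed) number of
# decisions expected to benefit from the additional information.
population_evpi = evpi * 100_000
print(f"EVPI per decision: {evpi:,.0f}")
print(f"Population EVPI: {population_evpi:,.0f}")
```

EVPI is never negative, so a value near zero indicates that further research to reduce parameter uncertainty is unlikely to be worthwhile.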
Conclusions
Further detail on the design, conduct, analysis, and reporting of economic evaluations using decision analytical modelling is available elsewhere.4 6 This article and our accompanying article1 show that there is considerable overlap between modelling based and trial based economic evaluations, not only in their objectives but also, for example, in dealing with heterogeneity and presenting results; in both cases we have argued the benefits of using individual patient data. These two broad approaches should be viewed as complements rather than as competing alternatives.
Summary points
Decision analytical modelling for economic evaluation uses mathematical techniques to determine the expected costs and consequences of alternative options
Methods of modelling include decision trees, Markov models, patient level simulation models, discrete event simulations, and system dynamic models
The process of identifying and synthesising evidence for a model should be transparent and appropriate to decision makers’ objectives
The results of decision analytical models are subject to the influences of variability, uncertainty, and heterogeneity, and these must be handled appropriately
Validation of model based economic evaluations strengthens the credibility of their results
Cite this as: BMJ 2011;342:d1766
Contributors: SP conceived the idea for this article. Both authors contributed to the review of the published material in this area, as well as the writing and revising of the article. SP is the guarantor.
Competing interests: All authors have completed the unified competing interest form at www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author) and declare no support from any organisation for the submitted work; the Warwick Clinical Trials Unit benefited from facilities funded through the Birmingham Science City Translational Medicine Clinical Research and Infrastructure Trials Platform, with support from Advantage West Midlands. The Health Economics Research Centre receives funding from the National Institute for Health Research. SP started working on this article while employed by the National Perinatal Epidemiology Unit, University of Oxford, and the Health Economics Research Centre, University of Oxford, and funded by a UK Medical Research Council senior non-clinical research fellowship. AG is an NIHR senior investigator. They have no other relationships or activities that could appear to have influenced the submitted work.
Provenance and peer review: Commissioned; externally peer reviewed.
1. Petrou S, Gray A. Economic evaluation alongside randomised clinical trials: design, conduct, analysis and reporting. BMJ 2011;342:d1548.
2. Sculpher MJ, Claxton K, Drummond M, McCabe C. Whither trial-based economic evaluation for health care decision making? Health Econ 2006;15:677-87.
3. Briggs A, Claxton K, Sculpher M. Decision modelling for health economic evaluation. Oxford University Press, 2006.
4. Gray A, Clarke P, Wolstenholme J, Wordsworth S. Applied methods of cost-effectiveness analysis in health care. Oxford University Press, 2010.
5. Drummond M, Manca A, Sculpher M. Increasing the generalizability of economic evaluations: recommendations for the design, analysis, and reporting of studies. Int J Technol Assess Health Care 2005;21:165-71.
6. Drummond MF, Sculpher MJ, Torrance GW, O’Brien BJ, Stoddart G. Methods for the economic evaluation of health care programmes. 3rd ed. Oxford University Press, 2005.
7. Bojke L, Hornby E, Sculpher M. A comparison of the cost-effectiveness of pharmacotherapy or surgery (laparoscopic fundoplication) in the treatment of GORD. Pharmacoeconomics 2007;25:829-41.
8. Legood R, Gray A, Wolstenholme J, Moss S; LBC/HPV Cervical Screening Pilot Studies Group. The lifetime effects, costs and cost-effectiveness of using HPV testing to manage low-grade cytological abnormalities: results of the NHS pilot studies. BMJ 2006;332:79-83.
9. Torrance GW, Feeny D. Utilities and quality-adjusted life years. Int J Technol Assess Health Care 1989;5:559-75.
10. Brennan A, Chick SE, Davies R. A taxonomy of model structures for economic evaluation of health technologies. Health Econ 2006;15:1295-310.
11. Cooper K, Brailsford SC, Davies R. Choice of modelling technique for evaluating health care interventions. J Oper Res Soc 2007;58:168-76.
12. Brisson M, Edmunds WJ. Economic evaluation of vaccination programs: the impact of herd-immunity. Med Decis Making 2003;23:76-82.
13. Barton P, Bryan S, Robinson S. Modelling in the economic evaluation of health care: selecting the appropriate approach. J Health Serv Res Policy 2004;9:110-8.
14. Weinstein MC, O’Brien B, Hornberger J, Jackson J, Johannesson M, McCabe C, et al. Principles of good practice for decision analytic modeling in health-care evaluation: report of the ISPOR Task Force on Good Research Practices—modeling studies. Value Health 2003;6:9-17.
15. NHS Centre for Reviews and Dissemination. Undertaking systematic reviews of research on effectiveness: CRD’s guidance for those carrying out or commissioning reviews. NHS CRD, University of York, 2001.
16. Philips Z, Ginnelly L, Sculpher M, Claxton K, Golder S, Riemsma R, et al. Review of guidelines for good practice in decision-analytic modelling in health technology assessment. Health Technol Assess 2004;8:iii-xi,1.
17. Golder S, Glanville J, Ginnelly L. Populating decision-analytic models: the feasibility and efficiency of database searching for individual parameters. Int J Technol Assess Health Care 2005;21:305-11.
18. Coyle D, Lee LM. The problem of protocol driven costs in pharmacoeconomic analysis. Pharmacoeconomics 1998;14:357-63.
19. Ades AE, Sutton A. Multiparameter evidence synthesis in epidemiology and medical decision-making: current approaches. J R Stat Soc 2006;169:5-35.
20. Weinstein MC. Recent developments in decision-analytic modelling for economic evaluation. Pharmacoeconomics 2006;24:1043-53.
21. Fenwick E, O’Brien BJ, Briggs A. Cost-effectiveness acceptability curves: facts, fallacies and frequently asked questions. Health Econ 2004;13:405-15.
22. Clarke PM, Gray AM, Briggs A, Farmer A, Fenn P, Stevens R, et al. A model to estimate the lifetime health outcomes of patients with type 2 diabetes: the United Kingdom Prospective Diabetes Study (UKPDS) outcomes model. Diabetologia 2004;47:1747-59.
23. Mount Hood. Computer modeling of diabetes and its complications: a report on the fourth Mount Hood challenge meeting. Diabetes Care 2007;30:1638-46.
24. Bojke L, Claxton K, Sculpher M, Palmer S. Characterizing structural uncertainty in decision analytic models: a review and application of methods. Value Health 2009;12:739-49.
25. Rothwell PM. Treating individuals 2. Subgroup analysis in randomised controlled trials: importance, indications, and interpretation. Lancet 2005;365:176-86.
26. Mihaylova B, Briggs A, Armitage J, Parish S, Gray A, Collins R, et al. Cost-effectiveness of simvastatin in people at different levels of vascular disease risk: a randomised trial in 20 536 individuals. Lancet 2005;365:1779-85.
27. Mihaylova B, Briggs A, Armitage J, Parish S, Gray A, Collins R. Lifetime cost effectiveness of simvastatin in a range of risk groups and age groups derived from a randomised trial of 20 536 people. BMJ 2006;333:1145-8.
28. Philips Z, Bojke L, Sculpher M, Claxton K, Golder S. Good practice guidelines for decision-analytic modelling in health technology assessment: a review and consolidation of quality assessment. Pharmacoeconomics 2006;24:355-71.
29. Stout NK, Knudsen AB, Kong CK, McMahon PM, Gazelle GS. Calibration methods used in cancer simulation models and suggested reporting guidelines. Pharmacoeconomics 2009;27:533-45.
30. Ades AE, Cliffe S. Markov chain Monte Carlo estimation of a multi-parameter decision model: consistency of evidence and the accurate assessment of uncertainty. Med Decis Making 2002;22:359-71.
31. Claxton K, Ginnelly L, Sculpher M, Philips Z, Palmer S. A pilot study on the use of decision theory and value of information analysis as part of the NHS health technology assessment programme. Health Technol Assess 2004;8:1-103,iii.
32. Philips Z, Claxton K, Palmer S. The half-life of truth: appropriate time horizons for research decisions? Med Decis Making 2008;28:287-99.
33. Speight PM, Palmer S, Moles DR, Downer MC, Smith DH, Henriksson M, et al. The cost-effectiveness of screening for oral cancer in primary care. Health Technol Assess 2006;10:1-144,iii-iv.
34. Castelnuovo E, Thompson-Coon J, Pitt M, Cramp M, Siebert U, Price A, et al. The cost-effectiveness of testing for hepatitis C in former injecting drug users. Health Technol Assess 2006;10:iii-iv,ix-xii,1-93.