AI & Open End Analysis
The Value of Open-End Analysis in Market Research: Transforming Open-Ended Questions and Analysis for Market Researchers
Open-ended questions and analysis provide a rich layer of insights that goes beyond the limitations of structured data, offering a clearer understanding of customer sentiment, emerging trends, and overall experience. Whether gathered from surveys, customer reviews, or social media, this feedback can be a goldmine for business insight—but analyzing it effectively can be a challenge. For those dealing with thousands of open-ended responses, traditional methods such as reading through open ends one by one or tallying responses in Excel quickly become overwhelming, especially for teams that lack the capacity or the right tools to analyze the feedback. Generic AI tools like ChatGPT often prove inadequate as well. Without a qualified solution for analyzing open-ended responses, researchers risk generating incomplete or inaccurate results, which can lead to delays and inefficiencies in completing projects.
At Voxco, one of the reasons we are excited about integrating Ascribe into our online survey platform is because of how Ascribe’s Coder and CX Inspector simplify and streamline open-end analysis, especially for corporate and market researchers. These tools leverage the latest advancements in artificial intelligence, making it easier to unlock actionable insights from vast amounts of open-ended data. This exciting product release showcases our joint commitment to offering innovative solutions that help researchers drive more impactful decisions.
Why Open-Ended Questions and Analysis Matter
- Deeper Customer Insights: Open-end analysis delivers a more comprehensive view of customer behaviors, motivations, and sentiments, helping companies better understand their audiences
- Uncover Hidden Insights: Open-ended responses can reveal unexpected insights that structured data misses, offering an early window into new trends and opportunities.
- Capture Nuances: Unlike structured responses, open-ended feedback allows customers to express their feelings and experiences in their own words, providing valuable context that helps businesses understand emotional drivers and pain points.
- Identify Emerging Trends: By analyzing open-ended feedback, businesses can spot new patterns that may not have been considered in the initial survey design, allowing them to respond quickly.
- Understand the ‘Why’: While quantitative data shows what’s happening, open-end feedback reveals why customers behave the way they do, offering more actionable insights.
How Ascribe’s Coder and CX Inspector Can Help
Ascribe’s industry-leading tools enable the analysis of open-ended feedback at scale with greater accuracy and speed by seamlessly blending artificial intelligence with human expertise. Here’s how Ascribe Coder and CX Inspector, both recently upgraded with Theme Extractor 2.0 and Ask Ascribe, empower researchers:
Theme Extractor 2.0
Automatically analyzes over 95% of open-ended responses and generates accurate, human-like codebooks. The tool eliminates overlap between themes and delivers cleaner, faster results—ideal for streamlining the research process.
Ask Ascribe
Ask Ascribe allows users to query their data in real-time using natural language. This interactive approach enables the quick identification of key themes, emotions, and areas for improvement, allowing businesses to act on insights faster.
Ascribe Coder: Enhancing Coding Productivity
Ascribe Coder enhances productivity by converting unstructured text into structured data. Here’s how:
- AI-Driven Human Coding: Combines AI with human intelligence to accelerate the coding process while maintaining precision.
- Customizable Automation: Allows users to adjust the level of automation based on project needs, ensuring control over cost, timing, and accuracy.
- Data Integration: Integrates open-ended feedback with survey data, providing a 360-degree view of the customer experience.
CX Inspector: Advanced Text Analytics for Deeper Insights
CX Inspector offers a robust platform for extracting and visualizing themes, sentiment, and emerging trends from open-end responses. Key features include:
- Instant Theme Extraction: Automatically identifies clear, descriptive themes from open-ended responses.
- Sentiment Analysis: Detects and visualizes customer sentiment instantly, enabling businesses to prioritize issues based on emotional impact.
- Actionable Insights: Combines theme detection and sentiment analysis to deliver clear, actionable insights that can be shared easily via dashboards and reports.
Conclusion: Unlock the Full Potential of Open-Ended Questions and Analysis
At Voxco, joining forces with Ascribe is part of our mission to empower researchers with powerful tools for analyzing open-ended feedback. As Rick Kieser, Chief Strategy Officer at Voxco, explains:
“Ascribe has dedicated 25 years listening to customer feedback and analyzing open-ended feedback, partnering with the world’s top market research firms and industry-leading corporations. By closely listening to the needs of these pioneers and continuously evolving, Ascribe has delivered cutting-edge solutions that shape the future of text analytics. The launch of Theme Extractor 2.0 and Ask Ascribe represents the pinnacle of this expertise—a culmination of decades of innovation, hard-earned insights, and the processing of over 6 billion customer comments. We’re excited to bring these solutions to Voxco’s customers and continue pushing the boundaries of innovation in research.”
With Ascribe Coder and CX Inspector, researchers can efficiently categorize and act on open-end feedback, driving more informed decisions and enhancing the overall customer experience.
10/28/24
The Latest in Market Research
The Pitfalls of Binary Thinking in Research and Marketing
Quantitative research is qualitative research in disguise
Whether you’re a market, academic, or social researcher, most of us who study human behavior have developed a preference for, and expertise in, either quantitative or qualitative data collection tools. We tend to have preferences between focus groups and questionnaires, individual interviews and eye-tracking, ethnography and biometrics. We have a well-developed hammer, and we know how to make it solve most of our research problems.
However, the human experience is 100% qualitative, and quantitative research is really qualitative research in disguise. Researchers ask people to provide answers using distinct answer boxes, not realizing that they’re asking participants to pre-code highly subjective interpretations of complex experiences into imperfectly operationalized answer options. Those pre-coded answers are no more precise or valid than open-ended verbatim answers that are subsequently coded by the researcher. Whether the participant codes them or the researcher codes them, both are representations of qualitative personal experiences crammed into a box.
We like to differentiate quantitative research as being measurable, structured, and controlled, even though qualitative research is also very much measurable, structured, and controlled. We like to say qualitative research is rich and in-depth, even though quantitative research can also be rich and in-depth. When well conducted, both qualitative and quantitative research give people the opportunity to reveal their underlying emotions and motivations. Quantitative research sometimes offers scale and statistical power, but the rest can be quite similar.
What can research users and practitioners do? Most importantly, we need to recognize that neither method is more valid, useful, or important than the other. It’s irrational to prioritize results from one method over the other. Second, research practitioners and users should have more than a basic level of training in both qualitative and quantitative research. It’s never a flex to be fluent in one method and mostly ignorant of the other. This only limits a researcher’s perspective, problem-solving capabilities, and the robustness of their findings.
Probability Sampling: A More Rigorous Form of Nonprobability Sampling
When it comes to choosing between probability and nonprobability sampling of human beings, the reality is that just about all sampling is nonprobability sampling. There are very few cases in which every member of a population is known and every randomly selected participant consents. Probability sampling exists further along the continuum of nonprobability sampling.
For example, every student registered at a school can be identified, but forcing a random sample of that population to participate in a study is impossible. Similarly, even with birth, death, driving, and voting records, it’s impossible to have a perfect list of every citizen in a city and subsequently force a random sample of those people to participate in a study. People will always be accidentally excluded, and many who are included will not consent to participate. Nearly every attempt to achieve probability sampling with people is in fact an example of more rigorous nonprobability sampling.
Regardless, probability sampling isn’t inherently superior to nonprobability sampling. Errors of sampling, data analysis, and interpretation creep into both methods. All participants behave on a continuum of honesty, attention, and care.
What can research users and practitioners do? In the end, the best sample is that one that is best suited for the job. Nonprobability samples are ideal for exploratory research, pilot studies, case studies, niche populations, trending, product testing, and, of course, working within time and budget constraints. If you require more precise statistical extrapolation such as for political polling, policy evaluation, market entry analysis, or demand forecasting, methods that approach probability sampling are preferred.
Every extrovert is an introvert
We love to classify people as male or female, introverted or extroverted, or online or offline shoppers. Our research results are massive collections of artificial, human-made binaries. But the human experience, even the most discrete physical attribute, exists on a continuum.
Binary groupings have a purpose and can be extremely helpful, but it’s important to remember that we arbitrarily create the cut points that become those binary groupings. We choose those cut points out of convenience, not because they’re ‘true.’
No matter how we classify people into personality, demographic, shopping, social, or other groups, those groupings are artificial and they exist on a continuum. A group of ‘introverts’ could be subdivided into introverts and extroverts. And that subgroup of introverts could again be subdivided into introverts and extroverts, rinse and repeat. Being classified as a premium or budget shopper differs by who you’re shopping with, the product category, the time of day, and whether you’re hungry. Being classified as rural or urban can depend on political, national, local, and other characteristics.
What can research users and practitioners do? Remember that data tabulations are arbitrary and changeable. They can be redesigned once, twice, and thrice after the preliminary data has been reviewed. Design your initial tables with twice as many groups as will be necessary even if the sample sizes will be too small. With tables in hand, then you can evaluate the results and decide how many groups make sense and whether those groups should be equally sized.
Conclusion
Most binaries are arbitrary. They are human defined, human applied, and can be recoded into innumerable meaningful groups. While they are essential for simplifying the complexities of our world, every binary representation gives researchers a fresh opportunity to pause and question other equally valid categorizations that might exist. Questioning binaries is an important technique for researchers and marketers who want to reveal and explain the true complexities of consumer behaviors and preferences, ultimately improving the accuracy and relevance of marketing insights.
10/2/24
The Latest in Market Research
Navigating Data Quality in the Market Research AI Landscape
I’ve just crossed the six-month mark at Voxco and it’s been a whirlwind of a journey! I am loving getting to know all the people on our team, how they help our customers with a huge range of needs and challenges, and the potential we have together.
When joining a company as their new CEO, one of the first things I like to do after connecting with my team is to meet our customers, listen to industry experts, and hear from a broad range of stakeholders. What’s important to them? What challenges do they face? What gets them excited about going to work every day?
In talking with people at industry shows like Quirks and AAPOR, I immediately saw that AI has been embraced as a transformative market research technology that warrants significant investment. Exploring the capabilities of generative AI for enterprise applications can further revolutionize market research, enabling businesses to uncover innovative insights and optimize their decision-making processes. People are genuinely committed to the technology. For instance:
- A vast majority of trade show exhibitors have taken AI-forward approaches. And, at least partially because this is what conference committees are seeking, presenters too are taking an AI-forward approach.
- Whether their key services include data quality, sampling, analytics, reporting, or something else, most research providers are actively running internal AI projects, often leveraging tools like AI Chat to enhance their processes. About half of those projects are purely experimental, but the other half are already customer-facing and revenue-generating.
Showing and discussing applications of AI in market research, however, is just noise. We need to understand the type and magnitude of impact that AI technologies have. To avoid long-term harm, we need to proactively measure, understand, and work toward preventing the misuse of AI. That misuse can happen in several different ways.
- Poor data input: Generative AI has many strengths, but it can also lead to data quality issues. Just as a data analyst knows that poor sampling practices and small sample sizes create large error rates and minimal generalizability, the same is true for GenAI. “Hallucinations” destroy validity, generate incorrect insights, and lead to poor business decisions. AI researchers need to identify and prevent all types of substandard data practices that can mislead AI processes. Companies that outsource digital marketing should be particularly cautious, as the quality of AI-generated content can significantly impact their online presence and brand reputation.
- Misplaced applications: Because AI is amazing in many circumstances, it’s easy to run with it rather than trusting our gut and years of experience. Sometimes, training data doesn’t include the core data needed for making correct inferences. Sometimes we use a generalist AI tool over a research-specific AI tool. Researchers need to address the strengths and weaknesses of any AI tool they use to ensure unconscious biases that lead to incorrect business decisions are avoided.
- Lack of validation: Researchers love data, experimentation, and validation. However, AI is still developing, and there's limited market data to validate new techniques. We don’t yet know if an approach that worked for one ad test will be effective across categories, target audiences, regions, and objectives. This calls for extensive documentation and robust databases.
Of course, there are some immensely valuable and already validated uses for AI tools. Tools like Ascribe (newly acquired as part of the Voxco platform) have already helped the research industry solve the long-running problem of leaving open-ends uncoded simply because of time and cost constraints. Given that many questionnaires have ten or more short open-ends plus several long open-ends, this used to be a disappointing waste of respondent time and a loss of valuable insights for brands. This is one big problem solved.
I look forward to seeing how AI continues to evolve to create better business operations, research processes, and exceptional customer experiences. With a proactive approach to quality and validation, the opportunities are endless. I’d love to learn about your AI experiences so please feel free to connect with me on LinkedIn or talk with one of our survey experts.
9/18/24
Text Analytics & AI
How Can I Create Visualizations From My Open-Ends?
Qualitative data, especially open-ended responses, can provide deep insights into consumer behavior, preferences, emotions, and desires. However, extracting actionable insights from qualitative data takes more than briefly skimming through survey responses or counting frequencies. It requires quantitative analysis of the open-ended responses in your data set, followed by effective visualization of the results.
The Power of Visualizations
Visualizations play a crucial role in transforming raw, unstructured data into meaningful insights to help drive business decisions. When it comes to open-ended responses, visualizing the data with infographics, charts, and other visual options helps with identifying patterns, trends, and sentiments that may not be immediately apparent from reading the text alone.
Visual tools can take vast, complex amounts of qualitative data, like consumer comments, and transform them into easily digestible, visually appealing formats. By presenting data in this way, these tools enable you to quickly identify key themes and ideas that might otherwise be buried in lengthy text responses. This streamlined approach not only saves time but also enhances your ability to make informed, data-driven decisions with confidence, ensuring that critical insights from your respondents are not overlooked.
Understanding Your Open-End Data
Before creating your data visualizations, it’s essential to understand the nature of your open-end data. Open-ended responses often vary in length and complexity, ranging from short, single-word answers to detailed narratives. This variability makes it necessary to prepare your data carefully to ensure accurate and insightful visualizations.
Preparing Data for Visualization
Preparation is a critical first step in the data visualization process. This phase involves not only the collection of data but also a thorough review and organization of that data to ensure it’s ready for effective visualization.
Initially, it’s essential to gather all relevant data from your surveys, interviews, or other qualitative research sources. Once collected, the data needs to be carefully reviewed, which includes cleaning and organizing the responses. This might involve filtering out irrelevant, duplicate, or "spam" responses that could skew your results, ensuring that only accurate and meaningful data is included in your analysis.
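As a rough sketch of this cleaning step, the snippet below drops blank, near-empty, and exact-duplicate responses. The responses and the length threshold are invented for illustration; this is not how any particular tool implements cleaning.

```python
# Minimal cleaning pass over hypothetical open-ended responses:
# drop blanks, very short "spam" entries, and exact duplicates.
raw = ["Great service!", "asdf", "Great service!", "  ", "Too pricey"]

def clean(responses):
    seen, kept = set(), []
    for r in (s.strip() for s in responses):
        if not r or len(r) < 5:   # blank or too short to be meaningful
            continue
        if r.lower() in seen:     # exact duplicate of an earlier response
            continue
        seen.add(r.lower())
        kept.append(r)
    return kept

cleaned = clean(raw)  # ["Great service!", "Too pricey"]
```

Real pipelines layer on richer checks, such as gibberish detection, near-duplicate matching, and flagging answers copied across multiple questions.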
Next, responses should be classified into themes, which is the process of categorizing similar pieces of information under unified topics or concepts. This helps to distill large amounts of qualitative data into more manageable and understandable segments. Additionally, tagging specific segments of responses that are particularly relevant to your analysis can help highlight key insights and trends that align with your research objectives.
It’s also important to consider the context of your data, especially in relation to the research questions your organization has posed. The framing of these questions can significantly influence the way responses are interpreted. By keeping these questions in mind, you can ensure that your visualizations will address the core issues you set out to explore.
Proper preparation lays the groundwork for creating visualizations that accurately reflect the underlying sentiments and themes within your data. This meticulous approach not only enhances the clarity and effectiveness of your visualizations but also provides a robust foundation for deeper data analysis, enabling you to draw more reliable and actionable insights from your quantitative and qualitative research questions.
CX Inspector: A Leading Data Visualization Tool
When it comes to visualizing open-end data, CX Inspector stands out as a top tool for researchers and analysts. It is designed to simplify the complex process of quantitative and qualitative data analysis. With advanced features like generative AI and theme extraction, combined with an intuitive interface and powerful analytical capabilities, CX Inspector lets you import, analyze, and visualize results with minimal effort, making it an indispensable tool for any research project with qualitative data like open-end responses.
Theme and Sentiment Analysis for Qualitative Data Visualization
One of the most effective ways to visualize open-end data is through theme and sentiment analysis. These techniques help you identify the underlying patterns and emotions in your data, which can then be represented visually. Here are some popular methods:
Word Clouds
Word clouds are a simple yet powerful way to visualize the most common words or phrases in your open-end responses. They provide a quick overview of the key themes by displaying words in varying sizes based on their frequency. While word clouds are great for initial exploration, they may oversimplify the data, so it's important to use them in conjunction with more detailed analyses.
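Under the hood, a word cloud is driven by simple word frequencies. Here is a minimal sketch, using only Python's standard library and invented responses, of the counts a word-cloud renderer would size words by:

```python
import re
from collections import Counter

# Hypothetical open-ended responses (illustrative only)
responses = [
    "The checkout process was slow and confusing",
    "Slow shipping, but great product quality",
    "Great quality and fast, friendly support",
]

# Words too common to be informative in a word cloud
STOPWORDS = {"the", "and", "was", "but", "a", "of", "to"}

def word_frequencies(texts):
    """Count word frequencies across responses; these counts
    drive the relative word sizes in a word cloud."""
    words = []
    for text in texts:
        words.extend(w for w in re.findall(r"[a-z']+", text.lower())
                     if w not in STOPWORDS)
    return Counter(words)

freqs = word_frequencies(responses)
print(freqs.most_common(3))  # "slow", "great", and "quality" each appear twice
```

A rendering library would then map these counts to font sizes; the analysis itself is just frequency counting plus stopword removal.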
Thematic Clustering
Thematic clustering involves grouping similar responses into clusters based on shared themes. This method is particularly useful for identifying patterns and trends in large datasets. By visualizing these clusters, you can easily see which themes are most prominent and how they relate to one another.
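A toy version of thematic clustering can be sketched by assigning each response to the seed theme whose keywords it overlaps most. The themes, keywords, and responses below are invented; production tools typically use embeddings or TF-IDF with a clustering algorithm instead of hand-built keyword lists.

```python
# Seed themes with hypothetical keyword sets (illustrative only)
THEMES = {
    "shipping": {"shipping", "delivery", "arrived", "late"},
    "support": {"support", "agent", "helpful", "rude"},
    "pricing": {"price", "expensive", "cheap", "cost"},
}

def assign_theme(response: str) -> str:
    """Assign a response to the theme with the largest keyword overlap."""
    tokens = set(response.lower().split())
    best = max(THEMES, key=lambda t: len(THEMES[t] & tokens))
    # If nothing overlaps at all, leave the response uncategorized
    return best if THEMES[best] & tokens else "uncategorized"

clusters = {}
for r in ["delivery arrived late", "the agent was rude",
          "too expensive for the cost", "love the colors"]:
    clusters.setdefault(assign_theme(r), []).append(r)
```

Visualizing the resulting cluster sizes then shows which themes dominate and which responses fall outside every theme.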
Network Diagrams
Network diagrams are another advanced visualization technique that shows the connections between different themes or keywords in your data. These diagrams are especially useful for understanding the relationships and interdependencies between various concepts, providing a more nuanced view of your data.
Frequency Distribution Graphs
Frequency distribution graphs, such as bar charts or histograms, are ideal for visualizing the prevalence of specific themes or sentiments in your open-end data. These graphs provide a clear, quantitative representation of how often certain responses or themes occur, making it easier to compare and contrast different aspects of your data.
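The counts behind a frequency distribution graph are straightforward to compute. This sketch tallies hypothetical coded themes and prints a text bar chart with percentages; a real report would feed the same counts to a charting library.

```python
from collections import Counter

# Hypothetical coded themes, one per response (illustrative only)
coded = ["price", "support", "price", "shipping", "price", "support"]

counts = Counter(coded)
total = sum(counts.values())
for theme, n in counts.most_common():
    pct = 100 * n / total
    # One '#' per response: a minimal text-mode bar chart
    print(f"{theme:<10} {'#' * n}  {pct:.0f}%")
```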
Best Practices for Creating Effective Visualizations
Creating effective visualizations requires more than just choosing the right tool or method. It also involves adhering to certain best practices to ensure your visualizations are clear, accurate, and actionable.
Choosing the Right Visualization Method
The first step in creating effective visualizations is selecting the appropriate method based on your research goals and the nature of your data. For example, use word clouds for a high-level overview, network diagrams for exploring relationships, and frequency graphs for detailed quantitative analysis.
Ensuring Clarity and Accuracy in Qualitative Research
Clarity and accuracy are paramount in qualitative research. Ensure that your visualizations accurately represent the data by avoiding common pitfalls like overgeneralization or misinterpretation. Always double-check your data preparation and coding processes to maintain the integrity of your insights.
Making Visualizations Actionable
Finally, your visualizations should be actionable. This means they should not only provide insights but also guide decision-making. Consider how your audience will use the research findings presented and tailor your visualizations to highlight the most critical findings.
Begin Visualizing Qualitative Data with CX Inspector
Ready to transform your open-end data into powerful visualizations? CX Inspector makes it easy to analyze, visualize, and extract meaningful insights from your surveys with qualitative data. Whether you're looking to create word clouds, thematic clusters, or frequency graphs, CX Inspector provides the tools and support you need to succeed.
Don’t let valuable insights remain hidden in your data. Start visualizing with CX Inspector today and unlock the full potential of your open-end responses.
9/16/24
The Latest in Market Research
Behind the Scenes of Polling: Navigating Voter Intentions with Dr. Don Levy
Introduction
Polling, a cornerstone of political and social research, involves much more than just asking a few questions and compiling results. It’s an intricate process, shaped by many variables that can impact the accuracy and reliability of data. At the heart of this complexity is the challenge of understanding voter intentions amidst the dynamic environment of human behavior and external influences.
To shed light on these challenges, we recently spoke with Dr. Don Levy, Director of the Siena College Research Institute and a respected figure in polling with extensive experience in the field. Dr. Levy shared invaluable insights into the world of polling, offering a detailed look at the practices and factors that influence voter intentions. His extensive knowledge, gathered from our discussion with him, as well as his podcasts with AAPOR and WXXI News, provides a deeper understanding of the methodologies that underpin effective polling.
In this blog, we will explore Dr. Levy’s insights on the challenges of understanding voter behavior, ensuring accurate responses, and achieving comprehensive representation.
Understanding Voter Behavior
A. The Challenge of Gauging Voter Intentions
Predicting voter behavior is a complex challenge that goes beyond simply counting preferences. Voter intentions are influenced by many factors, including:
- Personal beliefs: Personal factors such as individual values, experiences, and priorities can sway voter decisions in significant ways.
- Social dynamics: Social influences, including peer opinions and community norms, also shape voter decisions.
- Political contexts: The political climate—marked by shifts in policy, candidate profiles, and campaign strategies—further complicates the task of predicting how voters will cast their ballots.
These elements are not only diverse but also interact in unpredictable ways, making the task of forecasting election outcomes both challenging and intricate.
B. The Role of “Likelihood to Vote” in Polling Models
To navigate these complexities, pollsters often rely on the likelihood of voters participating in the election as a critical variable. This approach involves assessing not just who intends to vote but how likely they are to follow through on their intentions. Consistent voters—those who regularly participate in elections—are given more weight in polling models, reflecting their higher reliability in influencing election outcomes.
In contrast, intermittent or less engaged voters are weighted down in the models. This differentiation helps to adjust the data to better represent the population of likely voters, offering a more accurate snapshot of potential election results. By focusing on these variables, pollsters aim to refine their predictions and enhance the precision of their findings.
C. Insights from Dr. Don Levy
According to Dr. Levy, understanding and correctly applying the likelihood to vote is pivotal in managing the inherent uncertainties of polling. Dr. Levy emphasizes that by carefully weighing consistent voters and adjusting for those less likely to vote, pollsters can more effectively capture the true intentions of the electorate.
“We apply a voter’s likelihood to vote as a weighting variable. For instance, if someone has voted in every single election and they tell us they absolutely will vote, you could consider them close to a 100 percent probability. On the other hand, for intermittent voters, if they express less attention to the election during our conversation, we weigh down their response.
Evaluating the reliability of their probability is crucial. After the election, we follow up to verify if those we rated as highly likely to vote actually did, assessing our predictive accuracy over time.”
Dr. Levy’s expertise highlights the importance of these methodologies in refining polling practices and improving the reliability of election forecasts. His perspective underscores the need for continual adaptation and precision in polling techniques to address the evolving nature of voter behavior.
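The weighting Dr. Levy describes can be illustrated with a small sketch. Every preference and turnout probability below is invented; the point is only to show how turnout likelihood acts as a weight on stated preferences.

```python
# Hypothetical respondents: stated choice plus an estimated
# probability of actually voting (all numbers invented).
respondents = [
    {"choice": "A", "turnout": 1.0},  # has voted in every election
    {"choice": "A", "turnout": 0.9},
    {"choice": "B", "turnout": 0.4},  # intermittent voter, weighted down
    {"choice": "B", "turnout": 0.3},
]

def weighted_share(data, candidate):
    """Share of the likely electorate supporting `candidate`,
    weighting each response by its turnout probability."""
    total = sum(r["turnout"] for r in data)
    support = sum(r["turnout"] for r in data if r["choice"] == candidate)
    return support / total

# Unweighted, the race is tied 2-2; weighting by turnout shifts it.
share_a = weighted_share(respondents, "A")  # 1.9 / 2.6, roughly 0.73
```

Real polling models estimate turnout probabilities from vote history and stated intention, and validate them after the election, as Dr. Levy notes, but the arithmetic of applying them is this simple.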
Addressing Honesty and Non-Responses
A. The Importance of Honest Responses
Ensuring that respondents provide truthful answers is a fundamental challenge in polling. Accurate data relies heavily on the honesty of participants, but several factors can compromise this integrity. Respondents might be influenced by social desirability bias, where they provide answers that they believe are more acceptable or favorable rather than their true opinions. Additionally, the brevity of interviews can sometimes lead to less thoughtful or more guarded responses, further complicating the accuracy of the data collected.
Typical scenarios where honesty may be compromised include sensitive topics or issues that may provoke strong emotional responses. In such cases, respondents might be reluctant to share their true feelings, thereby skewing the results.
B. The Issue of Non-Responses
Non-responses, particularly from strong supporters of specific candidates or issues, present another significant challenge. These individuals may abstain from participating due to distrust in the media or pollsters, or because they believe their responses may not be taken seriously. This reluctance can create a gap in the data, leaving certain groups underrepresented.
Distrust in media and polling organizations exacerbates this issue, leading to lower response rates from certain demographic groups. This situation is problematic because it can distort the overall representation of voter intentions and opinions, impacting the reliability of polling results.
C. Insights from Dr. Don Levy
Dr. Levy addresses non-responses and dishonesty with a multi-faceted approach. He emphasizes the importance of understanding respondents' perspectives and incorporating adjustments to account for biases and missing data. His approach involves a combination of rigorous data analysis, transparency, and continuous efforts to engage with diverse respondent groups.
“Within interviews lasting between 7 and 12 minutes, participants generally tend to be truthful. However, our main challenge revolves around non-responses. To address this, we inquire about various attitudes, including their perspectives on media and current social issues. Sometimes, we apply weights based on these attitudes to better represent the non-responsive group.
Unlike some who focus solely on specific regions like just western Pennsylvania, we take a more detailed approach, recognizing the diversity within areas, such as distinguishing between Pittsburgh and the rest of western Pennsylvania. This approach requires additional work, urging our call center staff to search for representative samples, even among the demographics least likely to respond.”
Dr. Levy’s approach highlights the ongoing commitment to improving polling practices and addressing the complexities of voter behavior and response accuracy.
Ensuring Comprehensive Representation
A. The Need for Representative Samples
Achieving a sample that accurately reflects the broader population is critical for polling organizations. Representative samples ensure that the data collected is reflective of the diversity and complexities within the entire electorate. This representation is crucial for generating accurate insights into voter intentions and behaviors.
One of the main challenges in achieving representative samples is dealing with regions that have diverse demographics. In such areas, capturing the full spectrum of views requires careful consideration and nuanced understanding of various sub-groups. Without addressing these demographic intricacies, poll results can become skewed, leading to misleading conclusions.
B. Detailed Approach to Sampling
To overcome these challenges, pollsters employ a detailed approach to sampling. Instead of relying solely on broad geographical areas, pollsters focus on understanding and accounting for regional nuances. This involves segmenting regions into smaller, more specific areas to capture the diversity within them accurately.
A broad geographical sampling approach might provide a general overview but lacks the granularity needed to understand local variations. In contrast, a detailed, nuanced sampling strategy involves breaking down regions into smaller units and applying targeted methodologies to ensure all demographic groups are represented. This meticulous approach helps in obtaining a more accurate and comprehensive picture of voter intentions.
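One common form of the detailed strategy described above is proportional allocation across strata. The sketch below is a minimal illustration, not any pollster's actual procedure, and the population figures are invented:

```python
def proportional_allocation(strata_sizes, total_sample):
    """Allocate interview slots to each stratum in proportion
    to its share of the total population."""
    total_pop = sum(strata_sizes.values())
    return {s: round(total_sample * n / total_pop) for s, n in strata_sizes.items()}

# Invented population figures, echoing the Pittsburgh vs. rest-of-region example.
strata = {"Pittsburgh": 300_000, "rest_of_western_PA": 700_000}
allocation = proportional_allocation(strata, total_sample=1_000)
# → {'Pittsburgh': 300, 'rest_of_western_PA': 700}
```

Note that with many strata, naive rounding can make the allocations sum to slightly more or less than the target sample size; largest-remainder methods are the usual fix.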
C. Insights from Dr. Don Levy
Dr. Levy emphasizes the importance of detailed sampling in enhancing polling accuracy. According to Dr. Levy, understanding and addressing regional nuances significantly impact the reliability of poll results. He advocates for a detailed approach to sampling that goes beyond broad geographic classifications to capture the complexities of diverse populations.
“Rigorous quota sampling, stratified sampling, aggressively seeking to keep the drop-offs—these are all the steps that we're taking to protect against the threat of inaccurate polling results.”
By focusing on detailed sampling methods, pollsters can improve the accuracy of their results and provide more meaningful insights into voter behavior.
Conclusion
Understanding the intricacies of polling is crucial for grasping how voter intentions are measured and interpreted. We’ve explored the challenges of predicting voter behavior, the importance of honesty and addressing non-responses, and the need for comprehensive, representative samples.
Pollsters face a complex landscape, but with methods such as weighting for likelihood to vote and detailed sampling approaches, they strive to provide accurate insights. Dr. Levy's perspective highlights the ongoing efforts to improve polling accuracy and the significant role polling plays in informing democracy.
As we look to the future, Dr. Levy’s optimism about the continued evolution of polling and its impact on our understanding of public sentiment reinforces the value of behind-the-scenes processes in shaping democratic discourse.
Siena College Research Institute: A Leading Force in Public Opinion Polling
Founded in 1980 at Siena College in New York’s Capital District, Siena College Research Institute (SCRI) conducts a wide range of regional, statewide, and national surveys on political, economic, and social issues. Under Dr. Levy’s leadership, SCRI has become the exclusive pollster for The New York Times, playing a pivotal role in shaping major pre-election polls and key issue-based surveys. SCRI’s results are regularly featured in prestigious publications like The Wall Street Journal and The New York Times, and it has been recognized as the most accurate pollster in America by FiveThirtyEight.com. As a valued Voxco customer, SCRI uses Voxco’s platform to power these critical efforts, ensuring precise, data-driven insights that shape public discourse.
9/16/24
Read more
The Latest in Market Research
Exceptional Customer Experiences via Surveys
Ready for a fresh take on participant engagement? We thought so! That's why we invited Annie Pettit, an industry expert in data quality and participant engagement, to share her insights. Whether you're here for practical tips or thought-provoking ideas, this post will get you thinking. Enjoy!
Creating engaging customer experiences is so important that nearly every retail and customer group has prepared extensive guidelines on how to do so. Among thousands of other guidebooks, manuals, and compendiums, the AMA has a Customer Engagement Playbook and Workbook, HubSpot has its “Ultimate Guide to Customer Engagement in 2024,” and Forbes has its “Customer Engagement in 2024: The Ultimate Guide.”
Retailers, marketers, and stakeholders put a lot of effort into creating engaging experiences for their consumers, constituents, and employees for good reason. According to Gallup, increasing customer engagement can lead to a 10% increase in profits, 66% higher sales growth, and 25% higher customer loyalty.
Because they spend so much time researching it, market researchers have deep insights into what exceptional customer experiences really are and how important they are. They also realize that participating in social and marketing research has the potential to be an intensely engaging and personally satisfying experience as well.
Why, then, does the research experience seem to be such a transactional exchange? Researchers write surveys. Participants give answers. Participant experiences decline. Response rates decline. Repeat.
It’s time for research and marketing leaders to apply what they’ve learned about the customer experience to the survey experience. Let’s consider a few ways of creating intensely engaging research experiences for participants that will ultimately benefit stakeholders and elevate the ROI of research.
Desirable incentives and fun questions are table stakes
When we think about creating an engaging research experience, most of us turn to creating a more fun and entertaining experience. In addition to creating simply better quality questions, we do this by:
- Offering incentives such as cash rewards, loyalty points, and exciting prizes. Research participants are human, after all, and something is often better than nothing to convince someone to “Click to start” a survey. That’s one step forward for completion rates and representativeness.
- Incorporating fun question types that help keep people motivated. For example, rather than asking people what they like best about ten different insurance companies, they can be asked what the superpower of each company is, or which animal, comic character, or celebrity best reflects each company.
However, incentives and fun questions are table stakes. Participants look for and expect them in every research study. If your research doesn’t already incorporate them, it’s time to demand better.
Take the next step to ignite curiosity and encourage personal growth
Perhaps more important, though, are intrinsically engaging experiences. Many people like participating in research because they value being heard and staying informed about new products and services. There are, however, much more significant opportunities for personal growth. For example:
- Questionnaires that incorporate personality, descriptive, or preference statements can encourage self-reflection and highlight areas for personal growth and development.
- Health, fitness, food, beverage, financial, and environmental research can cause people to reflect on their personal behaviors and consider whether they are interested in changing any components of their lifestyle.
- Many studies are simply a good way to stimulate thinking, enhance concentration, and test out new ways of thinking, particularly for people who have fewer opportunities to do so in their daily lives.
Let’s return for a moment to the customer experience. When marketers present new products or services to customers, they explain the benefits clearly. People expect to learn what is new or fun or intriguing about a product they are considering purchasing.
The research experience should be no different. Researchers need to help participants understand how they will benefit from participating. Among many others, here are a few ways we can do this.
- At the beginning of a questionnaire, invite people to consider their participation as a small journey in self-discovery. Invite them to use their curiosity to its fullest and try out new ways of thinking.
- Add a question at the end of the study inviting people to share with other participants what they’ve learned about themselves as a result of their participation. Most participants are curious about the outcomes of the research projects they take part in and, with consent, this question offers a way to share something meaningful even when formal results cannot be released.
- At the end of a questionnaire, conclude with an offer to share links to trustworthy third-party websites so interested participants can learn more about the topic. If someone selects the “Yes, please share” box, offer links to free college courses or trusted, neutral websites with information about finances, the environment, healthcare, or child development.
Remember, these benefits must always be offered with consent.
Help people be the change they want to see
It’s fun to joke about online algorithms that serve us weeks of advertisements for vacuum cleaners after we’ve just bought one that should last twenty years. But in the research space, it’s a different story.
After we’ve bought that vacuum cleaner (or soap or beer), we do want to talk about it for weeks. We want to ensure that other people benefit from our experience. We want to share our opinions, offer advice, and shape new innovations. It feels good to help other people make decisions that are right for them.
By participating in research, people don’t simply help others buy a better vacuum cleaner. Sharing experiences with new products and services helps brands build products that enable people to eat healthier, have more fun, become more self-sufficient, access essential social services, and improve life itself. Research improves lives and can even save lives.
As before, we can’t simply assume that people will know the benefits of participating in research. Just as marketers tell people that this vacuum cleaner has the best suction, researchers should tell people how research helps the broader community. How do we action this?
- At the beginning of a study, remind people of the good that will come out of it. You already know the business objectives and the research objectives. You simply have to translate those into consumer-facing language. Tell people that their participation will help many people in the future by creating more beneficial products and services.
- At the end of a study, offer more specific outcomes. Explain that their contributions will help people who have skin problems find personal cleaning products that are less irritating. Or, that everyone deserves a little joy in their lives even if that means determining which flavor of potato chips they’re going to make next. Tell people that their contributions make it easier for people to stay healthy, enjoy meals with their family, or give them more free time.
Naturally, it’s important not to jeopardize the research goals, so ensure any specifics are left to the end of the research.
Summary
It’s so easy to pull out a survey template, change the brand names, add a couple of new questions, and launch it. We’ve got decades of experience doing just that. But it’s time to say no to the templates we’ve relied on for years and build a new and better template: one that prioritizes the survey experience just as marketers, companies, and organizations have prioritized the customer and employee experience.
With a more engaging and personally fulfilling survey at hand, research participants will find it far easier to truly engage in the content, think deeply about their answers, and provide richer, more accurate data. Ultimately, investing in the survey experience translates to unlocking better quality insights, more informed decisions, and happier customers.
If happy customers are important to you, please get in touch with our survey experts. They’d love to help you collect more valid and reliable data. Talk to a survey expert.
8/29/24
Read more
Text Analytics & AI
Want to Know What Your Customers Really Think? Simplify Your Satisfaction Survey!
By Rick Kieser, Ascribe CEO
The customer satisfaction survey has become an epidemic. Whether you are buying a product, eating at a restaurant, or enjoying some other experience, it won't be long before you receive an email asking you to complete a survey about the experience. As sociologist Anne Karpf writes in The Guardian, "So many organizations now want our feedback that if we acceded to them all, it would turn into a full-time job – unpaid, of course. … The result is that I'm suffering from feedback fatigue and have decided to go on a feedback strike." She is certainly not the only consumer who feels this way!
Quality of Feedback: A Tale of Two Surveys
With consumers having such negative perceptions and experiences with customer satisfaction surveys, you have to wonder about the quality of the feedback going back to the business. I recently took my family to Disneyland. As usual for Disney, most of the experience was stellar. After our visit, I had two ideas I wanted to share. 1) The staff was outstanding, knowledgeable, and helpful, and 2) They should not upcharge Genie+ customers for lightning lanes on select rides. As I expected, less than 24 hours after leaving, I received the usual invitation to give my feedback about our experience at Disneyland. Given my profession, I was looking forward to this! I clicked on the link and started the survey.
Ten minutes later, I had completed less than 20% of the questionnaire; it was a compilation of closed- and open-end questions with no end in sight. I was done. I aborted the survey. Even worse, in the ten minutes I invested, I did not find an opportunity to provide the two pieces of feedback I wanted to share!
Now, compare that to the survey sent by a hotel I visited. It was only two questions long. The first question asked me to rate my experience on a 10-point scale. The second was an open-ended question: "Please tell us about your experience." Again, I wanted to share two thoughts: 1) The hotel restaurant was spectacular, with a beach view and great food. 2) We had to wait over 20 minutes before a server came to help us. As you can imagine, I was happy to complete that survey! Three minutes, DONE.
Which survey do you think gave better information about my thoughts and feelings? The hotel survey, of course, because it let guests tell them what they wanted to share about their visit in their own words.
Customer Satisfaction Surveys that Customers Like
Now, there may be internal or political reasons that make it difficult to change from a rating scale-based survey to one that is primarily open-ended. However, if we want more insightful feedback and customers who are happy to give it, we need to respect the customer’s time and move beyond lengthy surveys with many frustrating questions. We need short and sweet surveys that allow respondents to express their thoughts clearly and quickly, in their own way.
One of the traditional complaints about using open-ended responses over scaled responses is that open-ended responses are too wordy, too complicated, and too expensive to code and analyze quickly. That is no longer true, as we now have the technology to interpret these results efficiently and cost-effectively. We need to align our surveys with what modern data analysis solutions make possible, or we risk alienating our survey respondents to the point where they will no longer volunteer to answer questionnaires, and we risk eroding their view of the brand or service along the way.
The best solution is to create questionnaires with a few closed-end questions and one open-ended question: "Tell us about your experience." Yes, just one open-ended question. The technology can separate and analyze the responses. A few closed-end questions are needed to filter for data analysis, such as satisfaction rating, demographics, and so forth. But you can replace all the open-ended questions (e.g., What did you like? What did you dislike? Why did you give that rating?) with just one question.
Open-End Analysis in Just Minutes
The latest and best technologies can take even the most wordy, rambling, and detailed responses and analyze them in minutes. When it comes to collecting customer opinions, social reviews are among the purest vehicles through which customers express how they really feel in their own words. Here's an example of over 1,500 reviews scraped from the internet from recent London Eye visitors, all unstructured, open-ended comments. As you may know, the London Eye, or Millennium Wheel, on the South Bank of the Thames, is the most popular paid tourist attraction in the U.K., with over 3 million visitors annually. Here is an example of one person's review.
In spite of some rather lengthy reviews, within a matter of minutes we were able to identify and quantify the dominant themes from these 1,500 reviews using Ascribe's CX Inspector with Theme Extractor. We also created a cross tab identifying differences in responses based on who else was along for the experience: family, couples, friends, or solo. If coded manually, this data set would have taken a market research firm two days to analyze, at significant cost. With CX Inspector the results were ready within 30 minutes.
Here is another example of what is possible with today's technology. We analyzed 1,500 customer reviews with 145,000 words on a local ice cream shop in just over 20 minutes using Ascribe's CX Inspector. Again, the key themes were immediately identified, and using sentiment analysis, we could quickly understand customer likes and dislikes. It looks like the ice cream is delicious, and some staff are friendly and provide a positive experience, but some people indicate the experience is marred by poor service and expensive prices! This store owner would be able to quickly understand what they need to address to improve customer satisfaction.
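To make the idea of sentiment tallying concrete, here is a deliberately toy sketch. It is not Ascribe's CX Inspector implementation (which uses far richer NLP and AI models); the reviews and keyword lists below are invented purely to show the mechanics of counting positive and negative signals in open-ended text:

```python
def tally_sentiment(reviews, positive, negative):
    """Count positive and negative keyword hits across a set of reviews."""
    result = {"positive": 0, "negative": 0}
    for review in reviews:
        for word in review.lower().split():
            word = word.strip(".,!?")  # drop trailing punctuation
            if word in positive:
                result["positive"] += 1
            elif word in negative:
                result["negative"] += 1
    return result

# Invented example reviews and keyword lists.
reviews = [
    "The ice cream is delicious and the staff are friendly!",
    "Slow service and expensive prices marred the visit.",
]
positive = {"delicious", "friendly", "great"}
negative = {"slow", "expensive", "rude"}

counts = tally_sentiment(reviews, positive, negative)
# → {'positive': 2, 'negative': 2}
```

A keyword tally like this misses negation, sarcasm, and context, which is exactly why production tools pair theme extraction with trained sentiment models rather than word lists.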
As a final example, here are results of 2,500 customer surveys for a sports arena. In addition to a seven-point rating question, the survey included a follow-up open-ended question: "Why did you rate your experience 1 to 7?" The responses, which included a total of 58,000 words, were analyzed in 20 minutes with CX Inspector to reveal that while the arena delivers a great experience with terrific staff, concession lines and parking are key drivers of dissatisfaction. Again, the arena management can quickly understand what they need to work on to improve the visitor experience.
Find Out What Customers Really Think
Customer satisfaction surveys are ubiquitous, but the traditional approach of lengthy questionnaires may not be the best way to understand what customers are truly thinking if they get impatient answering the questions or are not willing to finish the survey. With new technology capable of coding and analyzing open-ends so easily, quickly, and cost-effectively, there is no need to have burdensome customer satisfaction surveys with a battery of closed-end and open-ended questions. By allowing customers to express themselves in their own words quickly, brands can better understand the customer experience and what matters most to them, while building customer loyalty through an improved survey experience. You will get better and richer customer feedback. And the best and only open-end question you need to ask is, "Tell us about your experience."
Embracing open-ended questions in your customer satisfaction surveys lets you alleviate feedback fatigue and invite genuine insights. The advent of generative AI-driven text analytics tools like Ascribe's CX Inspector with Theme Extractor allows brands to delve deeper into open-ended feedback quickly and easily. Customers will reward brands willing to ditch the traditional satisfaction survey in favor of an open-ended approach with more meaningful and actionable customer feedback.
Increase your customers' satisfaction by simplifying your surveys! Contact Ascribe today to discuss your needs, and we will find the best solution for you!
5/6/24
Read more
Text Analytics & AI
Ascribe Podcast #2: The Importance of Human Intelligence in Leveraging AI Innovations
https://youtu.be/0Q2S80m-NQY
The next episode of the Ascribe Podcast is now live! In this episode, special guest Lara Rice, Managing Director of Ascribe, joins Chrissy and Gustav to discuss the evolution of coding within the market research industry, and how, at Ascribe, years of manual coding experience coupled with valuable customer feedback have fueled constant innovation. Lara shares insights into the development of the artificial intelligence (AI) capabilities in Ascribe’s solutions and the journey from machine learning to AI Coder and beyond.
We’ll also discuss how AI technologies have transformed the coding process, making it faster and more efficient while emphasizing the continued relevance of human intervention for oversight and quality assurance. Our panel tackles common fears and misconceptions of AI, such as job displacement and data security concerns, while sharing how Ascribe is prepared to combat these issues.
With Artificial Intelligence creating a deeper foothold in the market research industry, Ascribe’s ongoing efforts to develop AI-assisted tools will continue.
Follow Gustav on LinkedIn | Follow Chrissy on LinkedIn | Follow Lara on LinkedIn
4/10/24
Read more