Survey research can be a powerful management tool when conducted according to accepted and proven principles. But how can business leaders know which research reports they can base their decisions on?
I have spent a great deal of time in my more than 40 years in business conducting surveys for clients. I’ve also spent a considerable amount of time reading surveys from various sources. I learned early that survey data could be misleading and untrustworthy.
Recently, I discussed this issue with Karen Beaman, the founder of Jeitosa International and a prominent survey researcher and analyst. Currently, she is the lead researcher for the Global Payroll Benchmarking Survey, a thought leadership initiative of the American Payroll Association and the Global Payroll Management Institute.
We agreed that there is some very good research out there. There are also many shady research reports being published and promoted. Here is what leaders should look for to tell if research is legitimate.
First, keep in mind that anyone can construct a set of questions and put them into a list with some type of response scale following each item.
Before reading the survey results, it is important to know who is sponsoring the research. We should also be able to identify the purpose behind the study. Often, the sponsor is not-so-subtly pushing an agenda to elicit future business rather than to enlighten readers.
A good place to start a review of a survey project is to ask why these items were chosen. Are they relevant to the objective of the research? Meaning, will the answers to the questions provide information that is valid, reliable and meaningful?
A study’s methodology should also be questioned. In addition to improper questions, there are other issues that can skew data. These include the forms of the questions asked: Is there built-in bias? Did they use odd- or even-numbered scales? How might that have affected the responses and conclusions? Are the questions leading the respondent in a predisposed direction? What about the subjects, or people, surveyed? Are the subjects competent to answer the questions? Are their responses serving their own agendas?
On the analytic side, ask what the sample size is. Are the researchers drawing significant conclusions and making generalized statements from limited samples? Did the researchers perform any tests of statistical significance to support their findings? Do you sometimes see broad conclusions drawn about differences in this or that, which could be derived entirely by chance?
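To see why small samples rarely support sweeping claims, consider a hypothetical survey of only 20 respondents in which 12 answer "yes" and 8 answer "no." The numbers below are invented for illustration, and the check is a simple one-sided binomial calculation, not a full significance test; it sketches how easily such a split arises by pure chance.

```python
from math import comb

# Hypothetical survey: 20 respondents, 12 "yes" answers (a 60/40 split).
n, k = 20, 12

# Probability of seeing 12 or more "yes" answers if the true rate were 50%
# (i.e., if respondents were effectively flipping coins).
p = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
print(round(p, 3))  # ≈ 0.252, far above the conventional 0.05 threshold
```

A one-in-four chance of seeing this split at random means the "majority" finding tells us almost nothing, which is exactly why reports that generalize from tiny samples deserve skepticism.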
When evaluating research, I often recall this statement famously attributed to Mark Twain: “There are three kinds of lies: lies, damned lies, and statistics.”
It’s true that some reports are purposely or accidentally deceptive. Remember, research is mostly the product of humans. Many simply lack the knowledge, skills, resources or integrity to do things thoroughly, while others let bias or other factors dilute their data.
Consciously or not, everyone puts their own spin on the numbers. It’s important to understand where the researcher is coming from to be able to interpret their findings within the right context.
There are a number of books to help with this problem. Two with the same title, “How to Lie With Statistics,” show common errors and deceptions in statistical analysis.
One that we often see is over-reliance on averages. When there are outliers or exceptions in the dataset, which is often the case, it is better to use the median, the value that falls in the middle of the sorted range, rather than the mean, the sum of the set divided by the number of values.
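The mean-versus-median point is easy to demonstrate. The salary figures below are hypothetical, chosen only to show how a single outlier drags the mean away from what is typical:

```python
from statistics import mean, median

# Hypothetical salary sample: five ordinary earners and one executive outlier.
salaries = [48_000, 52_000, 55_000, 58_000, 60_000, 950_000]

print(mean(salaries))    # ≈ 203,833: inflated by the single outlier
print(median(salaries))  # 56,500: the middle value, robust to the outlier
```

A report touting the "average salary" here would overstate typical pay by a factor of more than three, while the median tells the truer story.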
Another practice, which moves the result from accidentally deceptive to purposely deceitful, is when researchers manipulate the graph proportions or the length or width of the bars to make some numbers look more or less important than they really are.
This is usually accompanied by subjective statements like “climbed a whopping 10 percent” vs. “only grew by 10 percent last year.” These statements are clearly biased, with the intent to sway the reader toward the researcher’s agenda.
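The graph-proportion trick described above can be shown with simple arithmetic. The figures here are hypothetical: two values about 11 percent apart, plotted once on an honest axis and once on an axis truncated at 85.

```python
# Hypothetical chart values: growth from 90 to 100, roughly an 11% change.
old, new = 90, 100

# Honest chart: bar heights are proportional to the values themselves.
print(round(new / old, 3))  # 1.111, so the taller bar looks ~11% bigger

# Truncated axis starting at 85: bar heights become (value - 85).
print((new - 85) / (old - 85))  # 3.0, so the same gap now looks 3x as large
```

Nothing in the underlying data changed; only the baseline did. That is why a bar chart whose axis does not start at zero deserves a second look.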
The bottom line here is buyer beware. Anyone can collect data and draw conclusions. Don’t take the risk of presenting research data to your management team if you are not fully confident in the integrity of the results. Your reputation is on the line.
Author: Jac Fitz-enz