Employee engagement surveys and pulse surveys, their limitations, and how to overcome them

It is undeniable that we live in an increasingly data-driven world. As early as 2013, a study by McKinsey found that companies that rely heavily on customer analytics outperform their competitors in profit, sales, growth, and ROI. The belief that data drives success has pushed developments in customer data collection, moving from survey data to organic data like buying behaviour, social media listening, transaction tracking, and website cookies to better understand customers.

To better understand employees? Not much has changed in terms of data collection. While we’ve seen collection and evaluation automated and digitized, leaders and HR departments still depend on surveys and “self-report methods” as the primary quantitative data collection method to learn about the sentiments, needs, fears, and motivations of their workforce.

Self-report methods, measures, instruments, or techniques describe data-gathering wherein participants are asked to report directly on their own sentiments, behaviors, beliefs, attitudes, or intentions.

This article will explore the limitations of using self-report methods to understand employees and human psychology in general, and attempt to uncover a path to evolve and professionalize employee sentiment analysis, making it a more reliable basis for business decisions.

3 Limitations of Employee Engagement Surveys and Self-report Methods

The complexity of human psychology

Understanding people, and particularly employees and the workforce, is an essential element of being a successful leader. When you understand who your employees are, what motivates them, and how they behave in different situations, you can create an environment that encourages them to be productive and engaged. You can also take steps to reduce turnover and increase morale by making sure that employees feel like they are part of something bigger than themselves.

However, people are complex, and their feelings and sentiments are constantly changing, influenced by genetics, childhood, environment, traumas, lifestyle, and daily stressors. Many different factors affect human psychology, emotions, and behavior, and they can shift from one moment to the next.

1. The frequency dilemma

Employee engagement surveys are often conducted annually, even when engagement levels of each employee might change daily (or even more frequently). Can an annual survey truly measure the aggregated engagement level of the workforce?

Perhaps careful design of each question, to help employees evaluate their own relationship with their work, can increase the accuracy of the results. One must also think about when to conduct the survey: perhaps during a period when employees are neither too stressed nor too relaxed? There are also advocates of regular pulse surveys, which provide a recurring gauge of employee sentiment whose results can be folded into annual analyses.

To increase the frequency of measurement for better accuracy, shorter versions of surveys have been created. However, these attempts reduced participants' interest in answering the surveys while increasing the time and money spent.

The dilemma here is that more frequent data is better, yet more frequent surveys mean a greater risk of survey fatigue, and even 'inaction fatigue', at greater cost, without necessarily improving the accuracy or reliability of the results.

Survey fatigue refers to the frustration caused by repeatedly being asked to complete surveys. Some claim that, more than any distaste for surveys or for giving feedback, employees are frustrated by management's inaction after feedback has been given, a phenomenon known as 'inaction fatigue'.

2. Are surveys credible?

In addition to the frequency problem, surveys and interviews are inherently biased methods because the responses rely on the employee's self-awareness as well as their willingness to answer truthfully. One must also take into consideration 'social desirability bias', which affects the credibility of the results.

The social desirability bias refers to the tendency of people to answer questions in a way that makes them look good. It’s a form of self-presentation bias. For example, when you ask people questions about their health, they might give answers that make them look healthier than they actually are (Bernardi & Nash, 2022).

3. Do survey results represent the entire workforce?

Although surveys are a quantitative data collection method and are assumed to provide a direct score of the level of engagement, if the response rate of an employee engagement survey is low, the result may not be representative of the entire workforce (Shami et al., 2013).
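To make the response-rate concern concrete, here is a minimal Python sketch (with assumed function names, made-up numbers, and a textbook finite-population margin of error) of how one might report an engagement estimate alongside its response rate. Note that the margin of error is only meaningful if respondents are a random sample of the workforce, which low response rates rarely guarantee.

```python
import math

def engagement_estimate(scores, workforce_size, confidence_z=1.96):
    """Illustrative summary of a survey result next to its response rate.
    All names and numbers here are assumptions for illustration."""
    n = len(scores)
    response_rate = n / workforce_size
    mean = sum(scores) / n
    # Sample standard deviation of the responses
    sd = math.sqrt(sum((s - mean) ** 2 for s in scores) / (n - 1))
    # Margin of error with a finite-population correction, assuming
    # (optimistically) that respondents are a random sample of the workforce
    fpc = math.sqrt((workforce_size - n) / (workforce_size - 1))
    margin = confidence_z * (sd / math.sqrt(n)) * fpc
    return {"response_rate": round(response_rate, 2),
            "mean_engagement": round(mean, 2),
            "margin_of_error": round(margin, 2)}

# Example: 120 responses from a workforce of 800 (a 15% response rate)
print(engagement_estimate([4.2, 3.8, 2.9] * 40, workforce_size=800))
```

Reporting the response rate next to the score, rather than the score alone, at least makes the representativeness question visible to decision-makers.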

So, are surveys reliable?

The reliability of survey data in the field of psychology has been in question in recent years. How do we determine reliability? According to the scientific method, a study is reliable if it can be reproduced and replicated. Although the two terms are similar, let's clarify the distinction:

Reproducibility

Reproducibility implies that external researchers can methodologically and analytically recreate the results and conclusions from the same body of data: different researchers running the same analysis on the same data should arrive at the same conclusions. For example, in measuring the heights of the Boston Celtics roster, different people can take the same measurement of each player over and over again and come up with the same average height.

Reproducibility has been investigated in numerous studies across all fields of psychology. The Open Science Collaboration (OSC) replicated 100 scientific studies related to clinical, social, and experimental psychology. Their goal was to evaluate the rate of reproducibility, and they found that 'the effects of the reproductions were half the magnitude of the original effects' and that, while 97% of the original studies had statistically significant results, only 36% of the reproductions reached the same conclusion (Johnson et al., 2017).

Replicability

In contrast, replicability pertains to replicating the general conclusions of a study. Let's return to the NBA example. Imagine the conclusion from the Boston Celtics study is that the average height of an NBA team is 6' 8". Replicability would be measuring the average height of any other NBA team and comparing the results with the original study.
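A toy sketch of the distinction, using invented heights (none of these numbers are real measurements):

```python
# Toy heights in inches, purely for illustration
celtics = [78, 79, 80, 81, 82, 80, 79, 81]   # averages to 80 in = 6' 8"
lakers  = [79, 80, 81, 82, 80, 79, 81, 80]   # a second, independent sample

def average_height(heights):
    """The documented 'analysis': a re-runnable computation."""
    return sum(heights) / len(heights)

# Reproducibility: re-running the same analysis on the same data
# must yield exactly the same result.
assert average_height(celtics) == 80.0

# Replicability: the same analysis on new data (another team) should
# support the same general conclusion ("NBA teams average about 6' 8\"").
print(abs(average_height(lakers) - 80.0) <= 1.0)   # True
```

The point is not the arithmetic but the discipline: both the data and the analysis are explicit, so anyone can re-run them or repeat them on a fresh sample.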

Replicability, in other words, is the ability to arrive at the same generalization, conclusion, or theory when a study is recreated. Evaluating studies in the field of psychology, we run into a problem: most researchers do not document every detail of their data and their process (Wicherts et al., 2006), making it difficult to verify the data and results, and contributing to the Replication Crisis in psychology (Stroebe et al., 2012).

To help solve these two problems, and thus the problem of reliability, data science could play a key role. In the field of psychology, and perhaps in HR, it is an emerging area of work that deals with the collection, preparation, visualization, management, and curation of large amounts of information (Stanton, 2013).

Data Science to overcome the reliability conundrum

Data science could be a path toward more reliable people insights and metrics. It introduces new perspectives on data measurement and processing. Data science, for example, encourages complete tracking of a study, often through online platforms. Applied to a psychological study that uses self-report methods, this would enable immediate and easier verification by interested researchers, giving social scientists the ability to check the research process step by step.

Best practices from data science also encourage transparent and open recording of the 'hows' of a study: how the data were administered and which statistical tests were performed, from importing the raw data all the way to presenting the final results. This facilitates replicability and reproducibility in the fields of psychology and HR (Landers et al., 2019).
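As a rough illustration of what such 'complete tracking' can look like in practice, here is a minimal Python sketch of a fully scripted pipeline from raw data import to final results. The file and column names are assumptions, and a real project would also version both the raw data and this script.

```python
import csv
import json
import statistics
from pathlib import Path

# A minimal, fully scripted pipeline: every step from raw data to final
# result lives in code, so another researcher can re-run and verify it.
# File and column names ("raw_survey_responses.csv", "engagement") are
# assumptions for illustration only.

RAW = Path("raw_survey_responses.csv")
OUT = Path("results.json")

def load_scores(path):
    """Import the raw data and apply one documented cleaning rule."""
    with path.open(newline="") as f:
        return [float(row["engagement"]) for row in csv.DictReader(f)
                if row["engagement"].strip()]   # drop empty responses

def analyse(scores):
    """The statistical step, recorded in code rather than a point-and-click tool."""
    return {
        "n": len(scores),
        "mean_engagement": round(statistics.mean(scores), 3),
        "stdev": round(statistics.stdev(scores), 3),
    }

if __name__ == "__main__":
    results = analyse(load_scores(RAW))
    OUT.write_text(json.dumps(results, indent=2))   # the final, shareable result
    print(results)
```

Because every transformation is written down, an external researcher can reproduce the numbers from the same raw file, or replicate the study by pointing the same script at a new dataset.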

In addition to improved replicability, data science also proposes innovative new ways to validate instruments that measure human sentiment, behavior, or attitudes. For example, researchers could use data science methods to integrate previous psychometric results from validation studies and continuously update validity and reliability estimates of instruments on an open-source platform (Landers et al., 2019). This promotes replicability and reproducibility, which in turn enhances the reliability and credibility of results exactly where quantitative assessments based on self-report methods fall short.
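To make that idea concrete, here is a minimal sketch (with made-up numbers) of how reliability estimates for a survey instrument could be pooled and kept up to date as new validation studies are added to an open platform. The sample-size-weighted average of Cronbach's alpha used here is a deliberately simple stand-in for a proper reliability-generalization meta-analysis.

```python
# A running, updatable reliability estimate for one instrument.
# Study results below are invented, purely for illustration.

studies = [
    {"study": "A", "n": 240, "alpha": 0.86},
    {"study": "B", "n": 95,  "alpha": 0.79},
    {"study": "C", "n": 410, "alpha": 0.88},
]

def pooled_alpha(studies):
    """Sample-size-weighted average of Cronbach's alpha across studies."""
    total_n = sum(s["n"] for s in studies)
    return sum(s["alpha"] * s["n"] for s in studies) / total_n

print(f"Pooled alpha across {len(studies)} studies: {pooled_alpha(studies):.3f}")

# A newly published validation study simply updates the running estimate:
studies.append({"study": "D", "n": 180, "alpha": 0.83})
print(f"Updated pooled alpha: {pooled_alpha(studies):.3f}")
```

On an open platform, each new validation study would be a transparent, citable entry, and the pooled estimate would always reflect the current evidence rather than a single, aging study.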

As a psychologist, I find data science fascinating and exciting because it gives us tools and techniques to overcome the limitations of traditional single-method research designs in psychology. It offers solutions to the incomprehensibility of questionnaires, demand characteristics in experiments, and social desirability in case studies (Landers et al., 2019). It even helps with the measurement of constructs, a problem that has long been complicated to crack!

Letting my geek flag fly a little more: Luciano et al. (2017) note that data science can be instrumental in capturing data on dynamic phenomena using data streams that index, store, and track behavior, speech and writing patterns, and physiological responses that have traditionally been difficult to obtain. Landers et al. (2016) indicate that data science methods offer insights into dynamic, multidimensional phenomena that were previously impossible to capture, and that multiple perspectives converging on the same theory improve the reliability of that theory.
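Purely to illustrate the flavour of such data streams, here is a toy Python sketch that scores incoming text events with an invented sentiment lexicon and keeps a rolling average over recent events. A real system would use a validated sentiment model and, crucially, proper consent and anonymization.

```python
from collections import deque
from datetime import datetime

# Invented mini-lexicon; real systems would use a validated sentiment model.
LEXICON = {"great": 1, "happy": 1, "thanks": 1,
           "tired": -1, "frustrated": -1, "deadline": -1}

def score(text):
    """Crude lexicon-based sentiment score for one text event."""
    return sum(LEXICON.get(w.strip(".,!?").lower(), 0) for w in text.split())

window = deque(maxlen=50)   # rolling window over the most recent events

def ingest(timestamp, text):
    """Add one event to the stream and return the current rolling sentiment."""
    window.append(score(text))
    return sum(window) / len(window)

print(ingest(datetime.now(), "Happy with the release, thanks team!"))
print(ingest(datetime.now(), "Frustrated by the deadline, so tired"))
```

Unlike an annual survey, a stream like this reflects how sentiment moves day by day, which is exactly the dynamic, multidimensional signal the authors above describe.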

Conclusion

Landing on the moon, animal clones, synthetic organ transplants… These are things we never thought possible, until they became reality. Is it so hard to believe that we can move beyond surveys and self-report methods to study and understand the human condition? Why not use the frontiers of technology to understand the workforce?

Psychological studies provide valuable insights into human behavior, but they are subject to interpretation and biased answers that corrupt the results and the analysis, and the same is true of employee surveys. Data science is a dynamic discipline that can advance this work, both as a tool for understanding human behavior and for detecting patterns in large datasets. Psychological studies are increasingly being conducted on big datasets, and data scientists have a key role to play in helping researchers understand these datasets and extract meaningful information from them.

Personally, it’s exciting to be working on the first HR tech solution that is rethinking the data source for sentiment analysis and exploring a new path to more reliable, up-to-date people analytics and insights. Call us idealistic, but we believe that clearer and more accurate workforce insights can lead to better, more compassionate leaders and happier work teams.
