Author:
Christopher Dietzel
(The Conversation)
As 2SLGBTQIA+ people are increasingly under threat in Canada, and facing escalating dangers from the Donald Trump administration in the United States, more research is urgently needed to understand how to address issues of gender and sexual diversity moving forward.
Unfortunately, researchers who aim to explore emerging issues impacting 2SLGBTQIA+ communities and develop interventions to support them are facing a new problem: what if our research participants aren't actually real?
Anonymous online surveys are a great way for marginalized groups, including 2SLGBTQIA+ communities, to contribute to research without significant time commitments. Anonymous surveys also protect participants from becoming targets of anti-2SLGBTQIA+ hate. However, researchers need to be careful about the potential of disingenuous participants to spoil survey data.
The anonymous nature of online research makes it easy for someone to infiltrate research studies and submit false responses. This issue is not new, as researchers have dealt with this concern for years. Ineligible participants may participate in surveys to access honorariums or sabotage research on topics they disagree with.
As artificial intelligence (AI) becomes more advanced, this problem is magnified. And while AI detectors exist, they are not always accurate and cannot confront the issue of human respondents who are simply lying in their survey responses.
Our team has conducted online research about digital hate targeting 2SLGBTQIA+ professionals and organizations in Canada through the Ontario Digital Literacy and Access Network. We encountered this problem with two surveys we administered in 2024. Researchers from the SHaG Lab at Dalhousie University and the DIGS Lab at Concordia University confronted similar issues when conducting online surveys about 2SLGBTQIA+ issues.
This shared concern about participant authenticity and the potential infiltration of dishonest respondents - whether AI or not - has led us to identify issues that could have a negative impact on online research.
Anonymous online surveys are a great way for marginalized groups, including 2SLGBTQIA+ communities, to contribute to research; however, ineligible participants and AI bots can undermine their accuracy. (Shutterstock)
The challenges we encountered
Location:
Our most recent survey focused on Two Spirit, trans and non-binary professionals working at 2SLGBTQIA+ organizations in Canada. The narrow participant criteria made it easy to check IP addresses and spot ones that did not qualify. We could also identify and block IP addresses that submitted multiple responses.
When reviewing the data, we found that many of the suspicious responses were linked to one IP address located in China. We also received a high volume of responses claiming to come from Prince Edward Island. This was suspicious, not only because of contradictory IP addresses, but also because the number of responses seemed disproportionately high for the population of Canada's smallest province.
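For researchers who export their survey data to a spreadsheet, a basic location screen can be scripted. The sketch below, in Python with pandas, is illustrative only: the column names, the population shares and the five-times over-representation threshold are assumptions, not a prescribed procedure.

```python
# A minimal sketch of IP and location screening, assuming a CSV export with
# hypothetical columns "ip_address" and "reported_province".
import pandas as pd

responses = pd.read_csv("survey_export.csv")

# Flag IP addresses that submitted more than one response.
ip_counts = responses["ip_address"].value_counts()
responses["duplicate_ip"] = responses["ip_address"].map(ip_counts) > 1

# Flag provinces whose share of responses looks implausibly high relative to
# a rough population share (illustrative figures only).
population_share = {"PE": 0.004, "ON": 0.39, "QC": 0.22}
observed_share = responses["reported_province"].value_counts(normalize=True)
suspicious = [
    prov for prov, share in observed_share.items()
    if share > 5 * population_share.get(prov, share)  # 5x over-representation
]
responses["suspect_location"] = responses["reported_province"].isin(suspicious)

print(responses[["ip_address", "duplicate_ip", "suspect_location"]].head())
```

Flags like these are prompts for manual review, not grounds for automatically excluding a response.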
Time:
Our survey received 1,491 responses within three days, which was suspicious given the narrow eligibility criteria. Many responses were completed too quickly for a survey that included written responses. We also noticed that there were waves of responses, and those respondents completed the survey in roughly the same amount of time.
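Timing checks along these lines can also be scripted. The sketch below assumes hypothetical "start_time" and "end_time" columns in the export and uses an arbitrary one-third-of-median cutoff for "too fast" responses.

```python
# A minimal sketch of timing checks, assuming hypothetical "start_time" and
# "end_time" columns in the survey export.
import pandas as pd

responses = pd.read_csv("survey_export.csv", parse_dates=["start_time", "end_time"])
responses["duration_min"] = (
    responses["end_time"] - responses["start_time"]
).dt.total_seconds() / 60

# Flag responses completed implausibly fast for a survey with written answers.
median_duration = responses["duration_min"].median()
responses["too_fast"] = responses["duration_min"] < median_duration / 3

# Look for "waves": a spike of submissions within the same hour, all with
# near-identical durations, can indicate automated entries.
per_hour = responses["end_time"].dt.floor("60min").value_counts()
print(per_hour.sort_values(ascending=False).head())
```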
Incentives:
It is hard to know exactly why people complete surveys for which they are ineligible. Some people may do it for the compensation on offer. Others may want to spoil the data. We noticed that false responses increased when some form of compensation was offered, whether it was cash or gift cards.
Email addresses:
Another pattern we noticed was the use of generic Outlook or Yahoo email addresses, which followed the formula of first name-last name-numbers. While many people might use this same format, this is also an easy and quick way to create email addresses en masse.
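A simple pattern check can surface such addresses for review. The regular expression and domain list below are assumptions for illustration; plenty of genuine participants use addresses in this format, so a match is a reason to look closer, not proof of anything.

```python
# A minimal sketch of the email-pattern flag described above. The regex and
# domain list are illustrative assumptions, not a reliable detector.
import re

PATTERN = re.compile(
    r"^[a-z]+[._-]?[a-z]+\d{2,}@(outlook|yahoo|hotmail)\.com$", re.IGNORECASE
)

def looks_mass_generated(email: str) -> bool:
    """Return True if the address matches the first name-last name-numbers formula."""
    return bool(PATTERN.match(email.strip()))

print(looks_mass_generated("jane.doe4821@outlook.com"))    # True
print(looks_mass_generated("rainbowfern@protonmail.com"))  # False
```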
Contradictions:
When looking at the data, we found that many responses did not make sense for our target demographic group. There were a lot of “prefer not to answer” responses to prompts about pronouns, gender identity and sexual orientation.
Many respondents also selected “yes” when asked if they were First Nations, Inuit or Métis, but then wrote “white” when asked about their race or ethnicity. Identities can be complex, and what appears to be a contradiction may in fact be an intersection that is poorly represented through demographic questionnaires. Flagging potentially fake responses based on how we assume respondents will identify themselves is a bad idea for research about 2SLGBTQIA+ people who inhabit non-normative gender and sexual identities.
Some of these responses were also flagged because of other issues, including IP address and completion rate. However, there were others that were less suspicious, leaving us unsure about their validity.
These responses may have been created by AI bots or by people using AI to generate responses and manually enter them. It could have been someone actively trying to misrepresent themselves or someone who earnestly wants to contribute but does not feel confident in their English-language skills or writing ability. For this reason, it is important to consider multiple factors when reviewing survey responses to determine whether data is usable.
AI presents new opportunities and challenges for online research. (Shutterstock)
Moving forward
Technology like AI chatbots presents new opportunities and new challenges for online research that require specific interventions. The concerns we've outlined are potential red flags that can help alert researchers to suspicious data.
Some solutions we found for these issues include IP tracking, requiring a password to access the survey, asking the same question twice to verify that the responses match, and having “attention check” or “trap” questions where respondents are asked to select a specific response.
Researchers can also flag “speeder” respondents who take less than one-third of the median response time, as well as respondents who select the same responses across the survey, like always choosing the first option. Some researchers may already be aware of these and other solutions, and we encourage anyone doing online research to be prepared to address dishonest participants and protect the integrity of their data.
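As a rough illustration, the speeder and straight-lining flags can be combined in a short screening pass like the sketch below; the Likert-style column names and the one-third-of-median threshold are assumptions for the example.

```python
# A minimal sketch of "speeder" and straight-lining flags, assuming hypothetical
# Likert-style columns q1..q5 and a precomputed "duration_min" column.
import pandas as pd

responses = pd.read_csv("survey_export.csv")
likert_cols = ["q1", "q2", "q3", "q4", "q5"]  # assumed column names

# Speeders: completed in under one-third of the median response time.
median_time = responses["duration_min"].median()
responses["speeder"] = responses["duration_min"] < median_time / 3

# Straight-liners: the same answer chosen for every Likert item.
responses["straight_liner"] = responses[likert_cols].nunique(axis=1) == 1

# Responses carrying multiple flags are set aside for manual review rather
# than dropped automatically.
flagged = responses[responses["speeder"] & responses["straight_liner"]]
print(f"{len(flagged)} responses flagged for manual review")
```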
While these solutions may require additional time, labour and resources, it is important not to abandon online research. In-person methods are not always viable or accessible, particularly to reach 2SLGBTQIA+ people and other marginalized populations.
Research in this area is vital. We encourage other researchers to share their experiences and solutions to these problems to raise awareness.