Fake Review Makers Suffer From Pangs Of Conscience
The study, led by the University of York, found that individuals were quite capable of writing compelling fake reviews in unpredictable ways, but that doing so posed a moral dilemma for some.
The researchers say the findings of the study could be used by websites to put in place better systems to detect fake reviews, which could appeal to the contributor’s moral obligation to be truthful.
Problematic
As part of the study, 50 people were asked to write fake hotel reviews in a positive, negative or neutral tone, with the negative postings proving the most problematic.
One such participant explained: “As a person, I found it hard to be mean while writing fake negative reviews in hotels I never stayed in.” Another questioned: “Why am I badmouthing other hotels?”
The researchers say this could be due to the pangs of conscience that participants experienced while writing the fake reviews.
Imagination
Fake reviews are estimated to account for between 16% and 40% of all reviews posted online. This suggests a sizeable group of lay internet users who, for reasons ranging from the malicious to the benign, write reviews based on imagination rather than genuine post-purchase experience. Yet this behaviour has so far received little attention from researchers.
The study also revealed that, to maintain a psychological distance between themselves and their readers, fake review writers either avoided personal pronouns such as “I” or used collective pronouns such as “we.” Sharing the accountability for faking with imaginary others served as a coping strategy against a guilty conscience, the authors concluded.
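To illustrate how such a pronoun cue might be operationalised, the short Python sketch below counts singular first-person versus collective pronouns in a review’s text. It is an illustration only, not the authors’ method; the word lists and the example review are assumptions.

```python
# Illustrative sketch only: one simple way to turn pronoun usage into a
# linguistic feature. Word lists and the sample review are assumptions,
# not taken from the study.
import re

FIRST_PERSON = {"i", "me", "my", "mine"}
COLLECTIVE = {"we", "us", "our", "ours"}

def pronoun_profile(review: str) -> dict:
    """Count singular first-person vs. collective pronouns in a review."""
    words = re.findall(r"[a-z']+", review.lower())
    return {
        "first_person": sum(w in FIRST_PERSON for w in words),
        "collective": sum(w in COLLECTIVE for w in words),
    }

print(pronoun_profile("We found the rooms dirty and we would never return."))
# {'first_person': 0, 'collective': 2}
```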
The study, published in the Journal of Business Research, uncovers four stages of writing a fake review: gathering information, assimilating that information, drafting the fake review, and finalising it.
Algorithms
Previous research by the University of York showed the challenges that online ‘fake’ reviews pose for both users and computer algorithms. It suggested that greater awareness of the linguistic characteristics of ‘fake’ reviews can help online users tell the ‘real’ from the ‘fake’ for themselves.
The latest paper shows that, unlike negative and neutral fake reviews, which may be exaggerated with affective words, positive fake reviews do not necessarily express strong emotion. Detection algorithms therefore need to be fine-tuned to the polarity of the incoming reviews.
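As an illustration of what polarity-specific fine-tuning could look like in practice, the sketch below trains a separate text classifier for each review polarity and routes each incoming review to the matching model. The toy training data, labels and library choice (scikit-learn) are assumptions for illustration, not details from the paper.

```python
# Minimal sketch (not the paper's method): routing reviews to a
# polarity-specific classifier for fake-review detection.
# Assumes scikit-learn is installed; the labelled examples are toy placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples per polarity: 1 = fake, 0 = genuine.
training_data = {
    "positive": (["Best stay ever, flawless in every way!",
                  "Friendly staff and a clean, quiet room."], [1, 0]),
    "negative": (["Utterly disgusting, avoid this dump at all costs!",
                  "The shower was lukewarm and breakfast ended early."], [1, 0]),
    "neutral":  (["The hotel exists and has rooms and a lobby.",
                  "Standard three-star hotel near the station."], [1, 0]),
}

# Train one text classifier per polarity, so each model can learn
# polarity-specific cues (e.g. affective exaggeration in negative fakes).
models = {}
for polarity, (texts, labels) in training_data.items():
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(texts, labels)
    models[polarity] = model

def score_review(text: str, polarity: str) -> float:
    """Return the estimated probability that a review of known polarity is fake."""
    return models[polarity].predict_proba([text])[0][1]

print(score_review("Absolutely perfect, nothing could be better!", "positive"))
```

Keeping one model per polarity lets each classifier weight affective vocabulary differently, which fits the paper’s observation that emotional exaggeration is a weaker signal for positive fakes than for negative or neutral ones.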
Dr Snehasish Banerjee, Associate Professor in Marketing at the University of York’s School for Business and Society, said: “There has been a lot of buzz around the use of AI to detect fake reviews, but they have not proven to be overly effective in practice. This could be due to the non-formulaic rhetoric that writers commonly adopt. Intriguingly, one of our participants affirmed, ‘If I were to read my [fake] review, I would have believed [it].’
“If automation fails to detect human-generated fake reviews, it is worth exploring the more humane approach of designing review submission interfaces that appeal to the writer’s moral obligation to be truthful.”