
Introduction

  Nowadays, online shopping has become increasingly popular. Compared to offline shopping, online shopping provides consumers with reviews of a product, which has become one of the advantages of this shopping pattern because “reviews are a crucial source of information … and can greatly influence purchase intentions” (Dou, Walden, Lee, & Lee, 2012, p. 1555).

  Recent research on the relationship between online reviews, consumers’ attitudes towards a product, and purchase intention has mostly focused on features of the reviews themselves, such as expertise (Dou et al., 2012), valence, and social distance (Lin & Xu, 2017). The helpfulness ratings, or positive community ratings, that a review receives from other users, which are distinct from a reader’s own perceived helpfulness of that review, have been somewhat neglected. Therefore, in order to add to the field, this study takes the reviews of a fictitious laptop brand as an example and investigates the following research questions:


1. How do the helpfulness ratings (the positive community ratings) of online reviews affect consumers’ evaluation of a product and of a reviewer, as well as their purchase intention?

2. Is the effect that the helpfulness ratings of online reviews have on consumers’ evaluation of a product and of a reviewer and on their purchase intention mediated by a consumer’s familiarity with reviewing websites?

  Walther, Liang, Ganster, Wohn, and Emington (2012) revealed that attitudes towards a reviewer were more positive when helpfulness ratings were higher. Moreover, Lin and Xu (2017) indicated that perceived reviewer trustworthiness is positively related to product attitude and purchase intention. Therefore, this study’s first hypothesis is:

H1: Reviewers with a high number of positive community ratings have a stronger influence on consumers’ attitudes and purchase intentions than reviewers with a low number of positive ratings.

  Flanagin, Metzger, Pure, Markov, and Hartsell (2014) showed that user-generated product ratings were positively associated with product quality. If an online review from a specific reviewer is itself considered a product, then the ratings from other users should affect consumers’ perception of the review in a similar, positive way. Thus, the present study’s second hypothesis is:

H2: Sources
with a high number of positive community ratings are evaluated more positively
than sources with a low number of positive ratings.

  Lim and Heide (2015) indicated that for users familiar with reviewing websites, a reviewer’s profile, including the number of friends and the number of reviews, affected the perceived credibility of a review, whereas for users unfamiliar with reviewing websites this effect did not exist. Besides the two factors examined in that study, the objective number of positive community ratings a reviewer obtains for a review can also be regarded as part of his or her profile. Therefore, the last hypothesis of this study is:

H3: The effect described in H2 is stronger for people who are familiar with reviewing websites than for people who are not.

 

Method

  Design overview. To evaluate the hypotheses above, a between-subjects experiment was conducted in January 2018. Participants (N = X) were randomly assigned to one of four conditions, which involved reading reviews of 1) low expertise and low popularity, 2) low expertise but high popularity, 3) high expertise but low popularity, or 4) high expertise and high popularity. Participants were first asked to read two online reviews of a fictitious laptop product and then to answer follow-up questions about the product, the reviews, and the first reviewer. Of these two reviews, only the first was manipulated; the second served as a balance review and remained identical across all conditions, which is why questions were asked about the first reviewer rather than about both of them.
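
  In the actual experiment the assignment was presumably handled within the survey software; the minimal Python sketch below is purely illustrative (the assign_condition helper and participant IDs are invented) and only shows the underlying 2 x 2 (reviewer expertise x review popularity) structure of the four conditions.

import random

# The four between-subject conditions of the assumed 2 x 2 design
CONDITIONS = [
    {"expertise": "low",  "popularity": "low"},   # condition 1
    {"expertise": "low",  "popularity": "high"},  # condition 2
    {"expertise": "high", "popularity": "low"},   # condition 3
    {"expertise": "high", "popularity": "high"},  # condition 4
]

def assign_condition(participant_id: str) -> dict:
    """Randomly assign a participant to one of the four conditions."""
    condition = random.choice(CONDITIONS)
    return {"participant": participant_id, **condition}

# Example: assign three hypothetical participants
for pid in ["p01", "p02", "p03"]:
    print(assign_condition(pid))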

  Participants. People aged 20 to 28 were the target participants of this study; based on the data, X people participated. A convenience sampling strategy was used: we invited people to join the study via Facebook, so all participants took part voluntarily and without any incentive. X participants were dropped from the study because…. As a result, the final sample size was X. The mean age of the participants was X (SD = X). X females and X males participated.

  Procedure. This was a between-subjects experimental study. Participants were recruited through Facebook over a one-week timeframe from January 17th to 24th, 2018. More specifically, a link to a questionnaire powered by Qualtrics, a website that hosts electronic surveys, was posted on Facebook, and Facebook users who saw the link could participate voluntarily. The questionnaire took 5-7 minutes to complete.

  Participants who agreed to take part were randomly assigned to one of the four conditions mentioned above. In the instructions, participants were told that they would be participating in a study aiming to examine their own experiences with online recommendations from other consumers.

  After the instructions, participants were directed to a page for a laptop product, containing pictures of the product, a basic description of its functions and outward appearance, and two online reviews from other consumers. After reading the reviews, they were led to the questionnaire, which first asked about their opinions on the product, their purchase likelihood, and their opinions on the reviews and the first reviewer. Several questions relating to participants’ familiarity with online reviewing websites and their interest in electronic devices were asked as mediating factors. As a manipulation check, we asked participants to indicate the helpfulness ratings of the first review and the expertise level of the first reviewer. At the end of the questionnaire, participants were asked some demographic questions, including gender, age, and education level.

  Measures. Independent variables. For this study, the independent variables, i.e., the experimental manipulations, were the helpfulness ratings of an online review and the expertise of the reviewer. The helpfulness ratings of the online review were indicated by the number of positive community ratings from other users, with “21 users found this helpful” representing a high level of helpfulness and “3 users found this helpful” a low level. For the expertise of the reviewer, we used “Genius” with “48 purchases, 28 written reviews” to indicate a high level of expertise and “Amateur” with “5 purchases, 3 written reviews” to indicate a low one. It was expected that exposure to reviews with different helpfulness ratings and expertise levels would result in differences in each dependent and mediating variable.

  As a manipulation check, at the end of the questionnaire participants were asked “How many people found the first reviewer helpful?” to assess the positive community ratings of the first review and “What was the level of the first reviewer?” (with “Amateur”, “Intermediate”, “Advanced”, and “Genius” as options) to assess the expertise level of the first reviewer.

  Mediating variables. Familiarity with online reviewing websites. The following two statements were presented to participants: “I am familiar with online review platforms in general” and “I frequently look at product information and consumer reviews”. By indicating their degree of agreement with these two items, participants could indicate their familiarity with online reviewing websites. Both items used a seven-point Likert scale from “strongly disagree” to “strongly agree”. The two items had low/moderate/high internal consistency (Cronbach’s α = X). For the analysis, the mean of the two items was calculated as the average familiarity with online reviewing websites (M = X, SD = X).
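
  To make this computation concrete, the short Python sketch below (using invented responses, not the study’s data) shows how Cronbach’s α and the composite familiarity score could be computed from the two items; the same approach applies to the other multi-item measures reported below.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array with rows = participants and columns = scale items."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical seven-point Likert responses to the two familiarity items
responses = np.array([
    [6, 7],
    [5, 5],
    [3, 4],
    [7, 6],
    [4, 4],
])

alpha = cronbach_alpha(responses)
familiarity = responses.mean(axis=1)  # composite score per participant
print(f"Cronbach's alpha = {alpha:.2f}")
print(f"M = {familiarity.mean():.2f}, SD = {familiarity.std(ddof=1):.2f}")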

  Interest in electronic devices. Interest in electronic devices was measured by asking participants to rate their agreement with two statements: “I am interested in electronic devices” and “I follow the latest trends of electronic devices”. A seven-point Likert scale ranging from “strongly disagree” to “strongly agree” followed the statements. The two items had low/moderate/high internal consistency (Cronbach’s α = X). For the analysis, the mean of the two items was calculated as the average interest in electronic devices (M = X, SD = X).

  Dependent variables. The dependent variables for this study were attitude towards the product, purchase intention, and attitude towards the reviews and the reviewer.

  Attitude towards the product was measured with seven-point semantic differential scales on a series of seven adjective pairs (bad/good, not innovative/innovative, not attractive/attractive, not interesting/interesting, low performance/high performance, unfavorable/favorable, not expensive/expensive). These seven items correlated lowly/moderately/highly (Cronbach’s α = X). The mean of these items was calculated to represent consumers’ average attitude towards the product (M = X, SD = X).

  Purchase intention
was measured by the question “How likely would you be to purchase this
product?”. Following the question, a seven-point scale from “not at all” to
“very much” was provided for participants.

  Attitude towards the reviews was measured via three items along a seven-point scale anchored from “strongly disagree” to “strongly agree” (Cronbach’s α = X). These three items were: 1) “I feel that I trust the brand after reading the reviews”, 2) “I feel that I trust the product after reading the reviews”, and 3) “I will recommend this product to my friends after reading the reviews”. The mean of these items was calculated to represent consumers’ average attitude towards the reviews (M = X, SD = X).

  In addition, five items with a seven-point scale anchored from “strongly disagree” to “strongly agree” were used to measure participants’ attitude towards the first reviewer (Cronbach’s α = X). These five items were: 1) “The reviewer made a clear explanation of the product”, 2) “The reviewer made an objective review of the product”, 3) “The reviewer has sufficient knowledge of the product”, 4) “I am satisfied with the review of the product”, and 5) “I feel that the review contains an advertisement for the product”. The mean of these items was calculated to represent consumers’ average attitude towards the reviewer (M = X, SD = X). Before the calculation, however, the fifth item “I feel that the review contains an advertisement for the product” had been reverse-coded as follows: 7→1, 6→2, 5→3, 4→4, 3→5, 2→6, 1→7.
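
  As a brief illustration, reverse-coding a seven-point item in this way is equivalent to replacing each response x with 8 - x; the hypothetical Python snippet below (with invented responses) applies this to the fifth reviewer item before averaging.

import numpy as np

# Hypothetical responses to the five reviewer items (rows = participants)
reviewer_items = np.array([
    [6, 5, 6, 7, 2],
    [4, 4, 5, 5, 6],
    [7, 6, 6, 6, 1],
])

reviewer_items[:, 4] = 8 - reviewer_items[:, 4]   # reverse-code item 5 (7 -> 1, ..., 1 -> 7)
attitude_reviewer = reviewer_items.mean(axis=1)   # composite attitude per participant
print(attitude_reviewer)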

 
