Evaluating crowdsourcing contributions in idea contests: Combining humans and machine learning
Project Leader: Tingru Cui
Primary Contact: Tingru Cui (firstname.lastname@example.org)
Keywords: data mining; information systems
Disciplines: Computing and Information Systems
Followed by the project "Winner prediction in crowdsourcing contests".
To address the problem of evaluating large volumes of ideas, research on algorithmic approaches has proved valuable for automatically distinguishing between high- and low-quality ideas. However, such filtering approaches risk discarding promising ideas as "false negatives" (good ideas classified as bad ones), and catching these remains a task that demands human decision makers.
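As a minimal sketch of where false negatives arise and why human review is needed, consider a triage step over model confidence scores: ideas below a reject threshold are filtered out automatically, so any good idea scored there is lost unless uncertain cases are routed to humans instead. All names, thresholds, and data here are hypothetical illustrations, not the project's actual design.

```python
# Hypothetical triage over classifier confidence scores (0..1) that an
# idea is high quality. Auto-rejected good ideas are false negatives;
# the middle band is routed to human decision makers.

def triage(scored_ideas, reject_below=0.3, accept_above=0.8):
    """Split (idea, score) pairs into auto-accept, auto-reject, and human-review buckets."""
    accepted, rejected, review = [], [], []
    for idea, score in scored_ideas:
        if score >= accept_above:
            accepted.append(idea)
        elif score < reject_below:
            rejected.append(idea)   # false negatives can hide here
        else:
            review.append(idea)     # uncertain: send to human evaluators
    return accepted, rejected, review

ideas = [("solar charger", 0.91), ("niche but brilliant", 0.22),
         ("spam entry", 0.05), ("plausible tweak", 0.55)]
accepted, rejected, review = triage(ideas)
```

A fully automatic filter would simply drop everything below `accept_above`; the human-review band is what a combined approach adds.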
In response, this project aims to conduct design science research exploring mechanisms for combining machine learning techniques with crowd and human evaluation, adaptively assigning humans with the required domain knowledge to ideas (i.e., a recommendation system with people-analytics capabilities that can match domain experts with ideas efficiently). This semi-automatic approach can leverage the benefits of both approaches and overcome the limitations of previous research.
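The expert-to-idea matching described above could, in its simplest form, compare an idea's content against each expert's domain profile and assign the best match. The sketch below assumes keyword-set profiles and Jaccard similarity purely for illustration; the project's actual recommendation system would use richer representations of domain knowledge.

```python
# Hypothetical matcher: each expert's domain knowledge and each idea are
# represented as keyword sets; every idea is assigned to the expert whose
# profile has the highest Jaccard overlap with it.

def jaccard(a, b):
    """Jaccard similarity between two keyword collections."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def match_experts(ideas, experts):
    """Map each idea name to the name of the best-matching expert."""
    assignments = {}
    for idea_name, idea_kw in ideas.items():
        best = max(experts, key=lambda e: jaccard(idea_kw, experts[e]))
        assignments[idea_name] = best
    return assignments

experts = {"Expert A": ["battery", "solar", "energy"],
           "Expert B": ["logistics", "routing", "delivery"]}
ideas = {"solar charger": ["solar", "battery", "portable"],
         "drone delivery": ["delivery", "drone", "routing"]}
assignments = match_experts(ideas, experts)
```

The design choice to score every expert per idea (rather than bucketing ideas by category first) keeps the assignment adaptive as expert profiles change.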