Case Study

NSS Gerard K. O’Neill Space Settlement Contest

How NSS Used the Copyleaks API To Improve Plagiarism
Detection of Contest Essay Submissions


Solution

Copyleaks API

Product

Plagiarism Detection

Campaign KPIs

Accuracy, Analysis Depth, & Workflow Efficiency

Overview & Background

The National Space Society (NSS) has hosted the annual Gerard K. O’Neill Space Settlement Contest since 1994. Students are given no requirements other than that the project must focus on a free-floating, permanent space settlement concept. Each year, thousands of students worldwide, up through the 12th grade, enter the contest; most submissions are in essay form, and some in past years have reached up to 200 pages. As part of the judging process, each submission is reviewed for plagiarism, a process that was performed manually and often took weeks to months.

The Challenge

NSS began seeking a solution to expedite the judging process and increase efficiency, particularly for the plagiarism detection portion. A key requirement was a platform with API integration that could work seamlessly with Award Force, the awards management software used for the contest. Other requirements were effective, thorough reporting and documentation, and the capacity to process large volumes of content.

The Process

In 2018, NSS began working with the Copyleaks platform, using the API integration to detect potential plagiarism in contest submissions. Each scan run through Award Force generates a Suspect Score, allowing the judges to make faster, more informed decisions about possible plagiarism. As a result, processing time was reduced from weeks or months to a matter of days.

The ease of the API integration was one of the reasons we chose Copyleaks. It was seamless. What has made us stay with Copyleaks is customer support. If we have a question, it’s resolved within 24 hours.

Matthew J. Levine, Director of the NSS Gerard K. O’Neill Space Settlement Contest

0% is an automatic pass, between 0% and 5% requires deeper analysis, and 5% or higher is an automatic fail

NSS determined that if the Suspect Score was 0%, the submission was an automatic pass; if it was 5% or greater, it was typically an automatic fail. Anything between 0% and 5% required further investigation through the Copyleaks Similarity Report, generated with every scan. Each report includes a Similarity Score (calculated differently from the aforementioned Suspect Score) along with a detailed breakdown of the similar text found within the scanned document. These breakdowns allow judges to compare text side by side and determine whether the content was plagiarized.
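As an illustration, the triage rule above can be expressed as a short function. This is a minimal sketch assuming the Suspect Score is available as a plain percentage value; it is not code from the Copyleaks or Award Force integrations.

```python
# Minimal sketch of the triage rule described above, assuming the Suspect
# Score arrives as a percentage (0-100 float). The thresholds mirror the
# contest's policy: 0% passes automatically, 5% or more fails automatically,
# and anything in between is flagged for review of the Similarity Report.

def triage_submission(suspect_score: float) -> str:
    """Map a Suspect Score (percent) to a judging decision."""
    if suspect_score == 0:
        return "pass"    # automatic pass
    if suspect_score >= 5:
        return "fail"    # automatic fail
    return "review"      # between 0% and 5%: inspect the Similarity Report


if __name__ == "__main__":
    for score in (0.0, 2.3, 7.8):
        print(f"{score:.1f}% -> {triage_submission(score)}")
```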

NSS also uses the Copyleaks Repository, which stores all scanned submissions in a secure, private database that future scans can be compared against, ensuring that no contestant has plagiarized a past winning submission.
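To illustrate how new submissions might be checked against such a private repository, the sketch below uses a generic HTTP workflow. The base URL, endpoint paths, field names, and credentials are hypothetical placeholders, not the actual Copyleaks API.

```python
# Illustrative sketch only: the endpoints and fields below are hypothetical
# placeholders, not the actual Copyleaks API. It shows the general pattern of
# (1) adding a past submission to a private repository and (2) requesting a
# scan that compares a new submission against that repository.

import requests

API_BASE = "https://api.example-plagiarism-service.com"  # placeholder base URL
API_KEY = "YOUR_API_KEY"                                  # placeholder credential
HEADERS = {"Authorization": f"Bearer {API_KEY}"}


def store_past_submission(repo_id: str, doc_id: str, text: str) -> None:
    """Add a past winning submission to the private repository (hypothetical endpoint)."""
    response = requests.post(
        f"{API_BASE}/repositories/{repo_id}/documents/{doc_id}",
        json={"text": text},
        headers=HEADERS,
        timeout=30,
    )
    response.raise_for_status()


def scan_against_repository(repo_id: str, doc_id: str, text: str) -> dict:
    """Submit a new contest entry and compare it against the repository (hypothetical endpoint)."""
    response = requests.post(
        f"{API_BASE}/scans/{doc_id}",
        json={"text": text, "compare_with_repositories": [repo_id]},
        headers=HEADERS,
        timeout=30,
    )
    response.raise_for_status()
    return response.json()  # e.g., scores and matched sources
```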

The Impact

Since adopting Copyleaks, NSS has been able to detect plagiarism in contest submissions far more quickly and accurately. For example, in 2023, 1,894 of 4,567 submissions were disqualified for plagiarism, a significant increase in detections compared with when the task was performed manually.

NSS is also considering how AI-generated content from chatbots such as ChatGPT will affect the Gerard K. O’Neill Space Settlement Contest and its submissions, and is exploring adding the Copyleaks AI Content Detector to the judging process.

4.5k+ Overall contest essay submissions in 2023
41.47% Of essay submissions disqualified for plagiarism