
Diane Francis


Fast & Furious Fact Check Challenge

Let's create a nearly-instant system for fact checking the content of text.

This challenge is closed

Stage: Won
Prize: $50,000


Summary

Overview

Today’s “always on” environment, together with social media, gives us the ability to hear anything said by anyone, anywhere, at any time. Ironically, this flood of material makes it difficult to know what is actually true! Knowing the believability and accuracy of what we read, hear and see is important around the world -- and no less important for us here in the world’s leading democracy.

Fact checking is the process of verifying what someone has said and assigning a rating to the accuracy of the ‘fact.’ It enables us to sort through a tidal wave of information and communication.

Some fact checking services exist, but none are instant.  

Fact checking today is done mostly by qualified humans. It’s a laborious process that is neither quick, cheap, nor comprehensive. There simply aren’t enough journalism researchers with the skills to verify all the claims made by our political candidates and public figures. It often takes a day or more to verify the accuracy of a statement, especially in the context in which it was made. And as time elapses, the truth moves further and further away from us.

The critical time to know whether political claims and statements are accurate is now -- as we read or view them. Therefore, the breakthroughs sought in this prize are those that improve the speed of fact checking results.


Guidelines

Challenge Guidelines updated 3 October 2016.

THE PRIZE

The prize offers a total purse of US$50,000 in a competition to develop faster ways to check facts through automation -- whether instantaneous, or simply faster than humans can check. This breakthrough will enable people to know whether something they see or hear is accurate when it’s most relevant: ASAP.

We invite you or your team to create a solution that increases the timeliness of fact checking. We encourage teams to leverage the power of computers: search technologies, software algorithms, machine learning, natural language processing, crowdsourced checking, data science, voice intonation or facial emotion analysis, chatbots, and even existing technologies such as Facebook’s anti-fake-news tool.

 

THE GOAL

TRUTH RATING - How true is a claim or statement? Your solution must assign a “truth rating” to each claim or statement tested and achieve at least 80% accuracy for the competition overall.

To be eligible for an award, the truth rating must use the following scale:

  • ‘TRUE’: The statement is fully accurate, and there is little room for a reasonable person to infer a different meaning.
  • ‘Somewhat TRUE’: The majority of the statement is true; any untrue, questionable, or opinion-based elements, or the framing of the claim, do not materially jeopardize the overall truthfulness of the claim.
  • ‘Somewhat FALSE’: Parts of the claim could be true, but the questionable portion(s) cast enough doubt to shift the overall claim into untrue territory, or the claim implies something that is not faithful to the intended/professed statement.
  • ‘FALSE’: The statement is incorrect, or is unsupportable enough to be deemed untrue.
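
For teams sketching a pipeline, this four-point scale maps naturally onto an ordered enumeration. Below is a minimal, unofficial illustration in Python -- the names are ours, not part of the challenge rules -- and the one-step distance it computes becomes relevant for the “second degree correct” scoring described later:

    from enum import IntEnum

    class TruthRating(IntEnum):
        # The challenge's four-point scale, ordered from least to most true.
        FALSE = 0
        SOMEWHAT_FALSE = 1
        SOMEWHAT_TRUE = 2
        TRUE = 3

    def rating_distance(a: TruthRating, b: TruthRating) -> int:
        # Steps between two ratings; a distance of exactly 1 is what the
        # scoring rules call "second degree correct" (given adequate
        # justification).
        return abs(int(a) - int(b))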

Creation of Truth Rating.  Teams have two options for creating the Truth Rating, and may choose only one option for answering all claims:

  • Option A: Provide the fact check justification (the basis for the Truth Rating) derived from automation, along with a human-assigned Truth Rating
  • Option B: Submit only the Truth Rating for each claim, derived 100% from automation

To push available technology ahead as quickly as possible, a human-assigned Truth Rating will be accepted if teams choose Option A. Under Option A, teams may still only input the claim into their software and must receive all results by automated means; however, they may use human judgment to analyze those results and assign a Truth Rating.

AUTOMATION - Automated fact checking will revolutionize our society. Your solution must be automated to be eligible for an award. Only the following tasks may be completed by a person:

  • Inputting or otherwise entering the claim into your solution and initiating fact checking
  • If you choose Option A: assessing the results of the check and assigning a Truth Rating
  • Inputting the rating and justification into the HeroX submission form

FAST AND FURIOUS - The fastest solution to assign accurate “truth ratings” via automation will win. It’s that simple.

 

CHALLENGE STRUCTURE

The challenge has three parts:

  • Written submission: teams submit an overview of their approach to automating fact checking.
  • Fact Checking Speed Test: a real-time race to determine the fastest automated and accurate fact checking solution.
  • Validation Phase: the fastest teams who achieve at least 80% accuracy need to prove their results were achieved by automated means to be awarded the prize.

 

WRITTEN SUBMISSION

The competition is your canvas!  Teams will submit a description of their approach to increasing the speed of fact checking.  This can include the technologies and tools they will leverage, any original technology in progress or fully created, the intent/vision for their solution, and a description of how these elements will be developed into their entry. Teams may cite prior work or work that team members have undertaken as part of their description.

Teams are encouraged to provide a clear picture of the method used to develop their submission, as well as the intended features and strengths of the solution.  Entrants may submit written descriptions, technical descriptions, results or outcomes, as well as photos, drawings, video, comparative examples, and/or other media.  A maximum of 1,500 words (approximately 3 pages) is allowed for the submission.

 

FACT CHECKING SPEED TEST

All registered innovators participate in a real-time race to check the truth behind stated claims. Innovators will receive a series of quizzes (each consisting of multiple claims) in which they must use their automated solution to determine the truth of each claim, assign a “truth rating” (human-assigned if they elected Option A), and submit their answers in the fastest time possible while achieving at least 80% accuracy over the course of the competition. Winners will be the teams that score the highest RABBIT marks, which are awarded based on the speed of accurate answers.

Quiz Structure

Quiz release dates and times are outlined below in "Quiz Schedule". Teams will check each claim in the quiz and determine whether it is TRUE, Somewhat TRUE, Somewhat FALSE, or FALSE. Teams that elected Option A also have the option to copy and paste their automated justification into the submission form. Results may only be submitted to HeroX as a batch (answer all the claims in a quiz at the same time). Each quiz will be given an expiration time, after which no further submissions for that quiz will be allowed. Teams may submit only once per quiz.

Quiz Schedule

  1. Official Practice Quiz: released Wednesday November 9th at 10:00am EST
  2. First Test Quiz (this one's for real!): Thursday November 10th at 3pm EST
  3. The complete quiz schedule will be released the week of November 7th.

Scoring

Scoring will be based on both accuracy and speed of results for each quiz.

  1. Accuracy: innovators must assign each claim in the quiz a “truth rating.” As explained above, the truth rating must be TRUE, Somewhat TRUE, Somewhat FALSE, or FALSE.
    • Whether an answer is correct or not will be based solely on the Judging panel’s/Full Fact’s assessment of truth.
    • Truth ratings will be deemed “correct” (matches the Judges’ rating exactly), “second degree correct” (off by one rating but with adequate justification) or “incorrect” (off by more than one rating and/or lacking adequate justification).
    • Teams must achieve at least 80% accuracy to be eligible for an award.
  2. Speed: It all comes down to speed.
    • Speed is measured as the duration of time between when the quiz was released (predetermined time) and when the quiz answers are submitted. Quiz results will be time stamped upon submission, and all claims for that quiz will have the same stamp. Any (individual) claims within the quiz which are deemed correct are eligible for speed-based RABBIT marks.  Thus, even if a team does not have the fastest time stamp on a quiz, it is possible to receive RABBIT marks for individual claims.
    • The time stamp will determine the Speed Rating for each claim. The fastest “correct” answers will receive RABBIT marks. “Second degree correct” answers are only eligible for RABBIT marks after RABBIT marks have been awarded to all “correct” answers:
      1. The fastest submission gets 5 RABBIT marks
      2. The 2nd fastest submission gets 2 RABBIT marks
      3. The 3rd fastest submission gets 1 RABBIT mark

The fastest submissions will be considered in this order:

  1. Highest priority:  Fastest correct Truth Rating with adequate justification (or fastest Truth Rating 100% by automation) will have first priority for receiving RABBIT marks. 
  2. Next highest priority:  If correct responses do not fill the top 3 fastest slots for a claim, RABBIT marks will be considered for “2nd degree correct” Truth Ratings. The fastest Truth Ratings that are off by one rating may be treated as correct, and therefore become eligible for RABBIT marks, IF the justification submitted has a logical, reasonable, convincing, and reliable basis for the Truth Rating, and that basis is strong enough to equal the justification for the correct Truth Rating. The determination of “2nd degree correct” will be made by our subject matter expert partner, Full Fact, with the oversight of the judging panel. These determinations are final and not subject to challenge.

Note that “second degree correct” answers will count toward the minimum accuracy achievement of 80%, regardless of whether RABBIT marks are awarded.
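
To make the interplay of the accuracy and speed rules concrete, here is a rough, unofficial sketch (Python 3.9+) of how one quiz’s RABBIT marks could be tallied; every name in it is ours, and the official scoring is of course performed by the organizers and judges:

    from datetime import datetime

    RABBIT_MARKS = [5, 2, 1]  # fastest, 2nd and 3rd fastest eligible answers

    def award_rabbit_marks(quiz_release: datetime,
                           submitted_at: dict[str, datetime],
                           grades: dict[str, dict[str, str]],
                           claims: list[str]) -> dict[str, int]:
        # submitted_at: team -> batch time stamp (one stamp per quiz)
        # grades: team -> claim -> "correct" | "second_degree" | "incorrect"
        totals = {team: 0 for team in submitted_at}
        for claim in claims:
            # "Correct" answers outrank "second degree correct" ones
            # regardless of speed; within each tier, rank by elapsed time
            # since the quiz was released.
            def rank_key(team: str):
                tier = 0 if grades[team][claim] == "correct" else 1
                return (tier, submitted_at[team] - quiz_release)
            eligible = [t for t in submitted_at if grades[t][claim] != "incorrect"]
            for place, team in enumerate(sorted(eligible, key=rank_key)[:3]):
                totals[team] += RABBIT_MARKS[place]
        return totals

    def accuracy(team_grades: dict[str, str]) -> float:
        # Both "correct" and "second degree correct" answers count toward
        # the 80% minimum, whether or not they earned RABBIT marks.
        hits = sum(g in ("correct", "second_degree") for g in team_grades.values())
        return hits / len(team_grades)

Note how, in this sketch, a team with a slower batch time stamp can still pick up marks on individual claims that faster teams answered incorrectly, as the rules above allow.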

Optional Practice Runs will be held prior to the start of the Fact Checking Speed Test. This is solely for the purpose of enabling teams to test their software and submission posting process.  None of these results will count toward the Speed Rating.

 

VALIDATION ROUND

Once the Fact Checking Speed Tests are completed, the top placing teams must provide a write-up detailing their solution and verify the use of automation to achieve their results. These teams must submit proof of automation to the Judges to confirm that results were achieved via automated (not manual) means.  Upon request, teams will have three calendar days to submit their proof of automation.  Teams must prove automation in order to be eligible for an award, and will be required to demonstrate automation using only the Truth Rating creation method they chose.  Any team that cannot verify automation will be disqualified from the competition, regardless of its results.

An automated solution is one that requires no human involvement, with the exception of the following tasks:

  • Inputting or otherwise entering the claim into your solution and initiating fact checking
  • Assessing the results of the check and assigning a Truth Rating (if the team elects Option A)
  • Inputting the Truth Rating into the HeroX submission form

The burden of proof of automation rests with the teams.  Each fact checking solution will be unique; thus, teams must determine the best means of proving automation to the judges for their particular solution. The following examples may or may not be sufficient to substantiate your unique solution:

  • Screencast or video of the entire fact checking process
  • Access to software, code, or algorithms (must provide the judging panel with instructions and, if necessary, a testing environment)
  • Live testing of additional claims specifically for validation (not to be used for scoring)

Judges may request additional information or additional testing to verify automation if your provided material is inconclusive or insufficient.

 

PRIZES

Prizes will be awarded to the teams with the fastest Speed Rating who achieve at least 80% accuracy (and receive a Pass for verifying automated results):

            1st Place:      $40,000 for fastest Speed Rating

            2nd Place:    $10,000 for second fastest Speed Rating

Tiebreakers for the prize award will be based on the speed of checking the most difficult claims. The Judging panel will pre-identify and designate a subset of the most “difficult” claims as tiebreakers. The tiebreaker status of these claims will not be disclosed unless a tie occurs. The team with the most RABBIT marks for these claims will be the winner.

 

CLAIM TYPES

Claims to be tested will include, but are not limited to, information about or provided by U.S. politicians and public figures.  A variety of claim types will be tested. Please see the list of Practise Claims for more information.

 

INTELLECTUAL PROPERTY

The results of the competition will be leveraged and promoted by a respected media figure. Diane Francis is an award-winning columnist, author, investigative journalist, television commentator, and screenplay writer. She is Editor-at-Large at Canada’s National Post and writes for the US Edition of the Huffington Post. Ms. Francis intends to pursue partnerships and collaborations with key media and journalism partners once the competition has concluded. These partnerships will spur the use and trust in fact checking.

Post-prize activities may include efforts to secure additional grants or funding for the technology and to commercialize the winning solution developed in the competition. Teams competing in this Challenge will retain control of their IP, but the eventual 1st and 2nd place winners agree either to provide access to their technology through a licensing agreement, or to receive a stake in a commercial enterprise in exchange for use of their technology. Exact terms of any partnership will be mutually negotiated.

 

JUDGING

The Judging panel will determine the correct answer to each claim, and will determine the accuracy rating by which teams’ results are compared. Although some degree of interpretation is unavoidable, particularly between TRUE and Somewhat TRUE (or FALSE and Somewhat FALSE), the Judging panel’s determination of the correct answer will be used as the standard by which team submissions are judged.

Automated fact checking is defined as the use of non-human means to check, research, and/or provide context for a claim.  However, humans may manually input the quiz claims into the software and initiate the checking.  Humans may also take the rendered results and post/submit them.

For validation of automation, Judges will use videoconferences, in-person meetings, and/or technical reviews of the team’s entry to confirm achievement via automation (not human fact checking).  If needed, the Judges will ask the team to run test claims.  The results of these test claims will not count toward the prize award, but will be used to verify eligibility for a prize award. For crowdsourcing, teams must show that the “crowd” is truly a large group of people without any special expertise in fact checking.  Judges may call upon outside technical experts to assist with this validation.

 

TIMELINE

November 6, 11:59 pm ET -- Submission Deadline
November 7 - December 9 -- Fact Check Speed Testing
December 12, 11:59 pm ET -- Validation Phase materials due from finalists
December 15 - January 28 -- Validation
January 29, 2017 -- Winner announcement

ADDITIONAL RULES

Participation Eligibility:

The challenge is open to all individuals, private teams, public teams, and collegiate teams. Teams may originate from any country.

No specific qualifications or expertise in working with journalism, media, publishing, fact checking, political research, software, computers or other technology development is required. Challenge organizers encourage outside individuals and non-expert teams to compete and propose new solutions. To give new and innovative ideas due consideration, the judging panel may include individuals who are not subject matter experts in any of these fields.

To be eligible to compete, you must comply with all the terms of the challenge as defined in the Challenge-Specific Agreement.

 

Registration and Submissions:

Submissions must be made online only, via upload to the HeroX.com website. Teams will be notified of the availability of quizzes for fact checking per a predefined schedule. Quizzes will be posted on or after these dates/times.

Submissions must be made in English. All prize-related communication will be in English.

 

Selection of Winners:

Based on the winning criteria described above, two prizes will be awarded. A qualified Judging Panel will determine the winners.

 

Judging Panel:

The determination of the winners will be made by a group of people including thought leaders, influencers, and people with unique insight into computational journalism and fact checking, as well as technology development in these areas. Judges will have expertise in journalism, media, publishing, fact checking, political research, software, or other technology development areas.

As importantly, the Judging Panel may also include judges who have expertise in emerging technology and innovation, the psychology of media, or crowdsource development, but who have no background or experience related to fact checking. The intent of including these individuals is to surface approaches that are available but not yet widely adopted or leveraged, and in doing so to move the field forward.

 

Challenge Guidelines are subject to change. Registered competitors will receive notification when changes are made, however, we highly encourage you to visit the challenge page often to review updates.


Challenge Updates

Diane Francis announces the results of the Fast and Furious Fact Check Challenge

Jan. 31, 2017, 5:06 a.m. PST by Kyla Jeffrey

"The world needs automated fact checking, and the world is going to get it." 


What will happen to fact-checking in 2017? Here are 7 guesses.

Jan. 18, 2017, 11:51 a.m. PST by Maureen Murtha

Fake news, misinformation, and journalistic integrity and credibility under attack -- all familiar themes from 2016. From the looks of it, the trend may continue for a while. Here's a Poynter article that makes some predictions for the next 365 days....

 


Who is leading the race in automated fact checking?

Nov. 15, 2016, 1:25 p.m. PST by Kyla Jeffrey

This is the announcement you've been waiting for.

I am pleased to introduce the three teams whose ground-breaking technology is pushing the field of automated fact checking forward.  These teams will be competing in a real-time Fact Checking race over the next four weeks to make automated, faster fact checking a reality.

I encourage you to comment on this post and say hello to the teams entering the race. You can also show your support for the cause by sharing on social media and with your network.

Team Sheffield Uni

We propose to apply state-of-the-art question answering techniques that are able to generate the semantic parse of a claim. We focus on methods that can learn semantic parsers with limited training data so that our approach can be extended to different domains quickly. We plan on taking a novel step to adapt these to construct questions about the claim to be checked, and - using a knowledge base - assess its truthfulness.

Team ClaimBuster

Fact-checking is now a household term. The amount of information to be fact-checked is beyond the capacity of human fact-checkers. ClaimBuster will substantially improve their efficacy. Given a factual claim, it analyzes the claim, collects relevant evidence from multiple sources, and generates justifications to help fact-checkers produce a true/false rating for the claim. ClaimBuster is making strides toward our quest for the “Holy Grail” -- an automated, instant fact-checking machine.

Innovator Ovidiu Dobre

Our solution leverages current AI technologies for natural language processing and for audio, video, and image-based text recognition: an integrated web-based platform for fact checking, built on the latest AI and cloud computing technologies.


Two days left to enter!

Nov. 4, 2016, 1:06 p.m. PDT by Kyla Jeffrey

Want to compete?

There are TWO DAYS left to submit the written description of your solution in order to be eligible for the Fact Checking Speed Test! Be sure you enter before 11:59pm EST on November 6! 

You CAN continue to iterate on and develop your solution throughout the testing period.

Get a feel for the brilliant minds tackling this problem by checking out the list of innovators and entries as they come in.

 

What do I need to know about the Fact Checking Speed Test?

  1. What: A real-time race to check the truth behind stated claims. Innovators will receive a series of quizzes (each consisting of multiple claims) in which they must use their automated solution to determine the truth of each claim, assign a “truth rating,” and submit their answers in the fastest time possible while achieving at least 80% accuracy over the course of the competition. Be sure to review the challenge guidelines for full details.
  2. When: November 7 to December 9, 2016
    1. One practice quiz will be released at 10am EST on Wednesday November 9th. 
    2. The first official quiz (this one's for real!) will be released at 3pm EST on Thursday November 10th.
    3. The schedule for the remainder of the testing period will be released next week.
  3. Who: You! Anyone who enters the written description of their solution prior to 11:59pm EST on November 6th can compete.

 

Can you tell me anything else?

Your fellow innovators have been asking lots of questions and we want to share the answers with you:

  • Q: For verification of quotes, will the quote always be given in quotation marks? Is there a specific template for this claim type?
  • A: Correct, quotes will always be given in quotation marks. There is no specific template, but have a look at the practice claims for a few examples.

 

  • Q: For verification of quotes, are we checking whether the given claim is the exact quote? What happens if it is a paraphrase of the quote?
  • A: There may or may not be paraphrased quotes. If there are, a paraphrased quote will look the same as a normal verification quote. You must determine whether it is paraphrased, and whether the paraphrase changes its meaning enough to move it from one rating into another.

 

  • Q: In verification of a quote, do we need to fact-check the content of the quote itself?
  • A: No; the claim made by the person quoted can itself be incorrect. You are to verify the attribution of the quote (e.g., did Donald Trump say it), the accuracy of the quote (e.g., is that exactly what he said), and provide a source for the quote.
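
As a toy illustration of the quote questions above -- our own sketch, not a prescribed technique -- a first pass might look for a verbatim match in a source transcript and fall back to fuzzy matching over same-length windows to flag a likely paraphrase:

    import difflib

    def classify_quote(claimed: str, source: str, threshold: float = 0.9) -> str:
        # Toy first pass: "verbatim" if the claimed quote appears exactly in
        # the source text, "paraphrase" if some same-length window of the
        # source is close but not exact, otherwise "unsupported". A real
        # solution would also verify attribution and whether a paraphrase
        # shifts the meaning.
        c, s = claimed.lower().strip(), source.lower()
        if c in s:
            return "verbatim"
        words = s.split()
        n = len(c.split())
        best = 0.0
        for i in range(max(1, len(words) - n + 1)):
            window = " ".join(words[i:i + n])
            best = max(best, difflib.SequenceMatcher(None, c, window).ratio())
        return "paraphrase" if best >= threshold else "unsupported"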

Anything not answered for you? Post your question in the forum!

 

Best of luck!

Kyla


This week's practice claims are here!

Oct. 21, 2016, 3:48 p.m. PDT by Kyla Jeffrey

Hi Everyone,

Check out this week's practice claims! We've also released a second practice quiz on the Challenge Page.

Practice Claims:

  1. Spain has the most unemployed young men (True)
  2. Warren Buffett's 2015 tax deductions totalled almost $105.5m (False)
  3. On average, each person in the US is responsible for 16.2 tonnes of CO2 emissions (True)
  4. In 1984 there were 682,800 adults in prison in the USA (Somewhat True)
  5. Hillary Clinton voted to raise taxes on workers earning as little as $41,500. (Somewhat False)
  6. Heineken sponsors dog fighting events (False) 
  7. Louisiana Sen. Mary Landrieu "received almost $1.8 million from BP over the last decade." (False)

Warmly,

Kyla

