IARPA

Credibility Assessment Standardized Evaluation (CASE) Prize Challenge

Develop a ground-breaking and innovative solution to evaluate credibility tools and techniques

This challenge is closed

Stage: Won
Prize: $125,000

Summary

Overview

Every day we make decisions about whether the people and information sources around us are reliable, honest, and trustworthy – the person, their actions, what they say, a particular news source, or the actual information being conveyed. Often, the only tools we have to make those decisions are our own judgments, based on current or past experience.

For some in-person and virtual interactions there are tools to aid our judgments. These might include listening to the way someone tells a story, asking specific questions, looking at a user badge or rating system, asking other people for corroborating information – or, in more formal settings, verifying biometrics or recording someone’s physiological responses, as is the case with the polygraph. Each of these examples uses a very different type of tool to augment our ability to evaluate credibility. Yet there are no standardized, rigorous tests to evaluate how accurate such tools really are.

Countless studies have tested a variety of credibility assessment techniques, attempting to rigorously determine when a source and/or a message is credible and, more specifically, when a person is lying or telling the truth. Despite this large and lengthy investment in research, a rigorous set of valid methods for determining the credibility of a source or their information across different applications remains elusive.

 

Challenge Focus

This challenge is focused on the methods used to evaluate credibility assessment techniques or technologies, rather than on the techniques or technologies themselves. In this context, a method is a detailed plan or set of actions that can be easily followed and replicated. 

In this challenge, we ask that your solution be a method for conducting a study, which includes background information, the objectives of the research, the study design, the logistics and means for running the study, and details about what data would be collected if your solution were implemented.

The Intelligence Advanced Research Projects Activity (IARPA) invests in high-risk, high-payoff research programs to tackle some of the most difficult challenges facing the US Government’s Intelligence Community (IC). An important challenge facing the IC is knowing who, or what, is credible. By sharing this challenge with the HeroX community, IARPA seeks to elicit good ideas from people with diverse backgrounds. Successful solutions could be used to inform future research efforts, to help the Government evaluate new tools, and to develop a deeper understanding of what it means to be credible and how we can evaluate credibility across diverse domains – in person, in virtual spaces, and in the information and media we consume.

The challenge of developing a useful evaluation of credibility assessment techniques and technologies lies in designing a method that draws out real behavior, credible or not, from an individual or other source, and then provides a valid means of testing it. This can be difficult: many current techniques involve actors or games, in which individuals may not feel that they need to be honest or do not truly act as they would in a real-life scenario.

How to Get Involved

 

  • Click FOLLOW above to receive an update once the CASE Challenge is fully launched.
  • Share this CASE Challenge on social media using the Share button above. Show your colleagues, friends, family, or anyone you know who has a passion for discovery!
  • Start a conversation in our Forum, ask questions or connect with other innovators.

Guidelines

Note: Other Government Agencies, Federally Funded Research and Development Centers (FFRDCs), University Affiliated Research Centers (UARCs), and any other similar organizations that have a special relationship with the Government giving them access to privileged or proprietary information, or to Government equipment or real property, are not eligible to participate in the prize challenge.

Why is this Challenging?

The methodology you present in your solution – that is, the way the solution’s method is implemented – will affect the type of information or personal attributes being evaluated. Depending on the approach, the motivations to be credible, as well as the subsequent penalties, will vary. In some cases the ground truth about an individual’s credibility, or the credibility of their information, will be difficult, if not impossible, to establish.

This highlights one of the roadblocks to testing a new credibility assessment method for use in practical applications - there is no universally accepted method by which to establish the performance of a new technique or compare across methods.

A key difficulty in validating that a new method can be used for practical, everyday purposes lies in moving research findings into the real world. Several limitations exist for current methods, including but not limited to:

  1. Simplistic Procedures: In order to achieve clear results, research studies are often simplistic and highly controlled, which does not reflect the complexity of the conditions under which credibility is often assessed in real-world scenarios.
  2. Artificial Motivations: Decisions to be credible or not credible are often assigned, or otherwise controlled, and limit a participant’s ability to determine for themselves if they want to be credible, when, and why.
  3. Low Psychological Realism: Participants have very little personal investment in the outcomes of the study, leading to low motivation to behave in natural and authentic ways. Relatedly, studies typically lack significant penalties if an individual fails to be judged credible; this lack of meaningful incentives further exacerbates participants’ low intrinsic motivation.

These limitations are particularly impactful when considering applications for national security, where an individual may feel that their livelihood, core values and beliefs, safety, or freedom are in jeopardy if someone doesn’t believe that they are credible. It is difficult to build such motivation and jeopardy into a replicable methodology that is safe, ethical, AND could be used as a common standard across some or all credibility assessment applications.

Another limitation of methods used today is the large amount of work devoted to retrospective evaluations, in which a participant is evaluated based on past experiences. This stands in contrast to prospective evaluations, which base credibility on future intent or behaviors – something very challenging to measure objectively. While prospective screening represents the majority of uses of credibility assessment methods by the US Government (for example, screening during employee hiring), it is relatively underrepresented in credibility assessment research.

 

Prizes

Stage 1 Prizes

  • Five (5) Credibility Champions – Following the first round of evaluations, five prizes of $5,000 each will be awarded to the top five overall solutions. Solution submitters/teams will be invited to attend a live judging event, where they will pitch their solution to a panel of experts.
  • One (1) Prospective Perspective Prize – Following the first round of evaluations, a special $10,000 prize will be awarded to the solution that best addresses current limitations in finding a true prospective methodology – that is, a method which requires credibility to be based on future intent or behaviors.
  • Five (5) Novelty Prizes – Following the first round of evaluations, five prizes of $1,000 each will be awarded to the solutions that best address the novelty-specific evaluation criteria:
    • One (1) Innovative Methodology Award – Given for the most innovative methodology
    • One (1) Outstanding Participant Motivation Award – Given for the most inspirational participant motivation
    • One (1) Realistic Reflection Award – Given for the method that best reflects real-world parameters
    • One (1) Creative Technology Award – Given for the method with the most innovative use of technology
    • One (1) Ground Truth Award – Given for the method that best establishes ground truth

Stage 2 Prizes

  • Five (5) Final Prizes – Following the second and final round of evaluations, one of the prizes below will be awarded to each of the finalists/participating teams.
    • 1st Prize – $40,000
    • 2nd Prize – $25,000
    • 3rd Prize – $10,000
    • 4th Prize – $5,000
    • 5th Prize – $5,000

 

How Do I Win?

All solvers will submit their solutions, in the prescribed format, via the HeroX CASE Challenge site. You may find the CASE Challenge Solution Submission Template here. A Word version of the Solution Submission Template can be found here.

Judging Criteria

CASE Challenge solution scoring will consist of three phases: Compliance Review, Stage 1 Scoring, and Stage 2 Scoring. When a solution submission is received, the ‘Solver Names’ field and associated content will be removed and each submission will be given a unique ID. Additionally, throughout the scoring process each solution will be evaluated separately (i.e., multiple solutions submitted by the same respondent will be separated and evaluated individually rather than as a group). 
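
As a rough sketch only – the actual HeroX/IARPA scoring pipeline is not public, and the field names below are assumptions for illustration – the anonymization step described above might look like this:

```python
# Hypothetical sketch of the anonymization step: strip the 'Solver Names'
# field and assign each submission a unique, non-identifying ID before
# scoring. Field names are assumptions, not the actual pipeline's schema.
import uuid

def anonymize_submission(submission: dict) -> dict:
    """Remove identifying fields and tag the submission with a unique ID."""
    redacted = {k: v for k, v in submission.items() if k != "solver_names"}
    redacted["submission_id"] = str(uuid.uuid4())
    return redacted

# Multiple solutions from the same respondent are separated and scored
# individually, so each entry receives its own ID.
submissions = [
    {"solver_names": "Team Example", "solution": "Method A ..."},
    {"solver_names": "Team Example", "solution": "Method B ..."},
]
anonymized = [anonymize_submission(s) for s in submissions]
```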

  1. Compliance Review Panel
    1. The Compliance Review Panel will conduct an initial screening of all submissions. During this phase, the submission will be compared against the specified CASE Challenge minimum requirements and disqualifiers listed above. Submissions that do not comply with the minimum requirements and/or that meet any of the disqualification criteria will not be further evaluated.
  2. Stage 1 Scoring Panel
    1. The Stage 1 Scoring Panel will review all submissions that pass the Compliance Review screening.
    2. The panel will, at a minimum, consider the following scoring criteria, discussed in more detail below: Scientific Support, Realism, Novelty, and Participant Considerations. Up to five winners will be selected to move on to Stage 2.
  3. Stage 2 Scoring Panel
    1. Winners from Stage 1 will be provided travel awards to attend and present their solutions to a panel of judges at a public event for people interested in credibility assessment. There, the judging panel will evaluate the presentation and solution feasibility, alongside the scoring criteria from Stage 1.

 

Please see the CASE Challenge Rules for additional details.

The CASE Challenge is the first concerted effort to invite interested individuals to develop credibility assessment evaluation methods that can be used to objectively evaluate both existing and future credibility assessment techniques/technologies. In doing so, the CASE Challenge strives to incentivize a broad range of new ideas while still ensuring their utility in real-world applications. To meet this goal, a scoring panel of experts will evaluate each solution based on the background and strength of its method, how well it reflects realistic conditions, how creative and clever it is compared to currently used methods such as the mock crime scenario, and how well it ensures the responsible care and consideration of participants.

 

Please see the CASE Challenge Microsite for more information.


Challenge Updates

CASE Challenge Stage 2 Winners

Aug. 6, 2019, 6:29 a.m. PDT by Natan Getahun

 

We are excited to announce the overall winners of the Pitch Presentation, marking the end of the Intelligence Advanced Research Projects Activity’s (IARPA) Credibility Assessment Standardized Evaluation (CASE) Challenge!

The Pitch Presentation, Stage 2 of the CASE Challenge, was held during the Challenge Workshop on July 18, 2019, in Washington, D.C. The Workshop was a full day of presentations and panel discussions on credibility assessment from subject matter experts across industry, academia, and government. During the event, the Credibility Champions from Stage 1 pitched their Solutions to a live panel of Judges and audience members.

Here are the winners of Stage 2 of the CASE Challenge!

1st Place

Computer Augmented Social Engagement Lab team (Taylan Sen (Captain), Ehsan Hoque, and Kurtis Haut) - SPIDER2

SPIDER2 is an interview-based evaluation in which participants are required to construct a device and are then subjected to both an Interview and a Debrief. The Interview and Debrief sessions then serve as data sets for credibility assessment.

 

2nd Place

Jeremy Martinez, M.D. - Credibility: Assessing the Assessment (C2A)

C2A adapts a popular social game for use in evaluating credibility assessment techniques. The methodology employs an engaging, socially embedded game with natural elements of deception for use in credibility assessment evaluation.

3rd Place

Dr. Joseph R. Stainback IV - Strategic Cognitive and Mobility Room (SCAMR)

SCAMR employs an Escape Room concept and an Espionage scenario to incentivize participant deception and elicit reactions through a series of puzzles, riddles, and strategy.

Congratulations to the winners of the CASE Challenge! Each team presented a thoughtful and engaging solution. We are eager to see the impact of their Solutions on the future of credibility assessment.

 

Thanks for reading! 

-The CASE Challenge Team 

 

If you have any questions, please contact the challenge team at CASEChallenge@iarpa.gov.

 


Register Today! Credibility Assessment Standardized Evaluation (CASE) Challenge Workshop on July 18, 2019

June 25, 2019, 8:51 a.m. PDT by M M

Greetings,

 

There is still time to register to attend the Credibility Assessment Standardized Evaluation (CASE) Challenge Workshop on July 18, 2019, in Washington, D.C., hosted by the Intelligence Advanced Research Projects Activity (IARPA).

IARPA’s CASE Challenge crowd-sourced the development of innovative methods to evaluate credibility assessment techniques and technologies. The culmination of this challenge is a workshop aimed at fostering a rich exchange of ideas that will help to inform potential ways ahead for future investments in credibility assessment research and development, as well as practical applications, education, and training. 

During the workshop we will also highlight the winners of the CASE Challenge and take stock of past, present and potential future developments in credibility assessment. Participants will include CASE Challenge solvers, industry representatives, academic researchers, and Government stakeholders.

Please join us in this engaging opportunity to learn from and network with experts in the credibility field!

For more information on the CASE Challenge please check out the following:

https://www.iarpa.gov/challenges/casechallenge.html and https://www.herox.com/CASEchallenge.   

Please share this invitation with interested colleagues!

 

Workshop Details

When:  Thursday, July 18, 2019

8:00 AM - 5:00 PM

Where: WeWork Universal North

1875 Connecticut Ave NW

10th Floor

Washington, DC 20009

 

Register online at the link below, by Wednesday, July 10, 2019 (no fee to register or attend):

Site: https://eventmanagement.cvent.com/CASEChallengeFinalWorkshop

Password: CASE829CFW

 

The registration site also includes the agenda, directions, food options, and other visitor information.

Please contact the CASE Challenge Team with any additional questions at CASEChallenge@iarpa.gov.

 

Thank you!

 

CASE Challenge Team

 


CASE Challenge Stage 1 Winners

June 11, 2019, 1:30 p.m. PDT by Liz Treadwell

 

We are excited to announce the winners from Stage 1 of the Intelligence Advanced Research Projects Activity’s (IARPA) Credibility Assessment Standardized Evaluation (CASE) Challenge!

The CASE Challenge received high-quality solutions from solvers across the globe. In total, we received 27 solutions from competitors, offering creative approaches to evaluating the validity of current and future credibility assessment techniques and technologies. The submissions ranged from low-tech solutions, such as a clever variation on a popular party game, to high-tech solutions using neural networks for testing protocols.

During the months of April and May, our esteemed panel of judges deliberated over the scores of the submissions. Following a rigorous evaluation process, we have selected the winners of the Stage 1 Prizes.

 

Credibility Champions

We are honored to announce three Credibility Champions: Dr. Joseph R. Stainback IV for his solution, Strategic Cognitive and Mobility Room (SCAMR); Jeremy Martinez, M.D. for his solution, Credibility: Assessing the Assessment (C2A); and the Computer Augmented Social Engagement Lab team (Taylan Sen (Captain), Ehsan Hoque, and Kurtis Haut) for their solution, SPIDER2. Congratulations! Each of these solvers provided an intriguing and well-crafted solution.

Being recognized as Credibility Champions means that these solutions had the best overall performance in Stage 1. As part of their winnings, they move on to Stage 2 of the CASE Challenge, where they will pitch their solutions for an opportunity to win additional prizes at the CASE Challenge Workshop on July 18th.

Novelty Prizes

Creative Technology: Winner is Lex Fridman, for a method with outstanding and innovative use of technology in their solution, Credibility Meta-Assessment with Neural Networks.

Ground Truth: Winner is Charles McElroy, for providing the best method of objectively determining the true state of credibility in their solution, Checkmate Revisited.

Realistic Reflection: Winner is Ilona Palmer, for a method with outstanding performance reflecting the parameters of the real world in their solution, The Purrrfect CAT.

Although Stage 1 of the competition is over, we will continue to provide updates on the CASE Challenge.

Be sure to stay tuned for information about the upcoming workshop, exciting news in the field of credibility, details on the solutions presented in Stage 1, and much more!

 

Thanks for reading! 

 

-The CASE Challenge Team 

 

If you have any questions, please contact the challenge team at CASEChallenge@iarpa.gov.


CASE Challenge Workshop in Washington, DC

June 4, 2019, 1:38 p.m. PDT by Liz Treadwell

Greetings,

 

You are invited to join the Intelligence Advanced Research Projects Activity (IARPA) at the Credibility Assessment Standardized Evaluation (CASE) Challenge Workshop on July 18, 2019 in Washington, DC.

IARPA’s CASE Challenge crowd-sourced the development of innovative methods to evaluate credibility assessment techniques and technologies. The culmination of this challenge is a workshop aimed at fostering a rich exchange of ideas that will help to inform potential ways ahead for future investments in credibility assessment research and development, as well as practical applications, education, and training. 

During the workshop we will also highlight the winners of the CASE Challenge and take stock of past, present and potential future developments in credibility assessment. Participants will include CASE Challenge solvers, industry representatives, academic researchers, and Government stakeholders.

Please join us in this engaging opportunity to learn from and network with experts in the credibility field!

 

For more information on the CASE Challenge please check out the following:

https://www.iarpa.gov/challenges/casechallenge.html and https://www.herox.com/CASEchallenge.   


Just the Weekend Left to Submit your Solutions for the CASE Challenge!

April 12, 2019, 11:30 a.m. PDT by Liz Treadwell

 

Dear CASE Challengers,

There are only a few days left to finalize your solutions! The CASE Challenge submission form will CLOSE at 8:59 p.m. PDT on Sunday, April 14th. The Challenge Team looks forward to reviewing your submissions!

Feel free to reach out to the challenge team at CASEChallenge@iarpa.gov if you have any questions.

 

Good luck!

-The CASE Challenge Team

