Strictly proper scoring rules (SPSR) are incentive compatible for eliciting information about random variables from strategic agents when the principal can reward agents after the realization of the random variables. They also quantify the quality of elicited information, with more accurate predictions receiving higher scores in expectation. In this paper, we extend such scoring rules to settings where a principal elicits private probabilistic beliefs but has access only to the agents' reports, not to the realized outcomes. We name our solution Surrogate Scoring Rules (SSR). SSR are built on a bias correction step and an error rate estimation procedure for a reference answer defined using agents' reports. We show that, with a small amount of information about the prior distribution of the random variables, SSR in a multi-task setting recover SPSR in expectation, as if the principal had access to the ground truth. Therefore, a salient feature of SSR is that they quantify the quality of information despite the lack of ground truth, just as SPSR do when ground truth is available. As a by-product, SSR make truthful reporting a uniform dominant strategy. We verify our method both theoretically and empirically, using data collected from real human forecasters.
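At the heart of the construction is the bias correction step: a strictly proper score computed against a noisy reference answer is reweighted by the reference's estimated error rates so that, in expectation over the reference noise, it equals the score that would have been assigned against the ground truth. Below is a minimal sketch of that idea for a single binary event, assuming the Brier score as the underlying SPSR and treating the reference error rates e0 and e1 as already known (in SSR they are estimated from the multi-task reports); the function names and these simplifications are illustrative, not the paper's exact construction.

```python
import numpy as np

def brier_score(p, y):
    """Quadratic (Brier-style) strictly proper score for a binary event.
    p: reported probability that the event is 1; y: realized outcome in {0, 1}.
    Higher is better (negated squared error)."""
    return -(p - y) ** 2

def surrogate_score(p, z, e0, e1):
    """Bias-corrected score against a noisy reference answer z in place of
    the ground truth y. e0 = P(z=1 | y=0) and e1 = P(z=0 | y=1) are the
    reference's error rates; the correction requires e0 + e1 < 1.
    In expectation over the noise in z, this equals brier_score(p, y)."""
    assert e0 + e1 < 1.0, "error rates must satisfy e0 + e1 < 1"
    if z == 1:
        return ((1 - e0) * brier_score(p, 1) - e1 * brier_score(p, 0)) / (1 - e0 - e1)
    else:
        return ((1 - e1) * brier_score(p, 0) - e0 * brier_score(p, 1)) / (1 - e0 - e1)

# Sanity check of the "recovers SPSR in expectation" property for y = 1:
# averaging the surrogate score over the reference noise reproduces the
# clean Brier score against the ground truth.
p, y, e0, e1 = 0.8, 1, 0.1, 0.2
expected = (1 - e1) * surrogate_score(p, 1, e0, e1) + e1 * surrogate_score(p, 0, e0, e1)
print(np.isclose(expected, brier_score(p, y)))  # True
```

The sanity check illustrates why accurate, truthful reports remain optimal: since the corrected score is an unbiased estimate of the clean score, the surrogate inherits the incentive properties of the underlying SPSR.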