Blake Riley

Archive for January 2012

Scoring Rules for Self-Interested Experts

While many people are curious about the future, few are ready to pay for expert predictions unless that information is relevant to their lives and decisions. Similarly, experts often have a stake in these decisions, not just in how much they are paid. Judgment-elicitation mechanisms should be robust to the possibility of experts with outside interests. Standard scoring rules are incentive-compatible only when experts are neutral to how the information is used.

In a forthcoming paper in the AAMAS proceedings, Craig Boutilier introduces the concept of a compensation rule, which augments typical scoring rule payments to form a net proper scoring rule. One proper compensation rule adds a payment equal to the expert’s loss in utility between the principal’s optimal decision and the expert’s preferred decision at the reported probability. This turns out to be more generous than necessary to guarantee that the expert’s expected utility is non-negative, but it is the only compensation rule that ensures experts prefer participation over the principal’s default policy. If experts are uncertain about the policy mapping reports to decisions, compensation can be reduced, but not eliminated.
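The mechanics of such a compensation rule can be sketched in a few lines. The utility tables and the use of the quadratic scoring rule below are my own illustrative assumptions, not the paper's specification; the point is only the shape of the payment: score plus the gap between the expert's preferred decision and the decision the principal actually takes at the reported probability.

```python
def quadratic_score(p, outcome):
    """Proper quadratic (Brier-style) score for reported probability p of outcome 1."""
    q = p if outcome == 1 else 1 - p
    return 2 * q - (p**2 + (1 - p)**2)

# Hypothetical utilities over two decisions and a binary outcome.
# expert_utility[d][w]: expert's utility if decision d is taken and outcome w occurs.
# This expert is biased: she prefers decision "A" at every probability.
expert_utility = {"A": [5.0, 4.0], "B": [1.0, 3.0]}
# principal_value[d][w]: the principal's payoff, which determines the policy.
principal_value = {"A": [2.0, 0.0], "B": [0.0, 2.0]}

def expected(u, p):
    return (1 - p) * u[0] + p * u[1]

def principal_policy(p):
    """Decision maximizing the principal's expected value at report p."""
    return max(principal_value, key=lambda d: expected(principal_value[d], p))

def compensation(p):
    """Expert's expected-utility loss between her preferred decision and the
    decision the principal takes at report p. Always non-negative."""
    preferred = max(expert_utility, key=lambda d: expected(expert_utility[d], p))
    taken = principal_policy(p)
    return expected(expert_utility[preferred], p) - expected(expert_utility[taken], p)

def net_payment(p, outcome):
    """Scoring-rule payment plus compensation: a net proper scoring rule."""
    return quadratic_score(p, outcome) + compensation(p)
```

At reports where the principal's optimal decision coincides with the expert's preference, the compensation term vanishes; where they diverge, the expert is made whole for the principal acting against her interest, which is what restores incentive compatibility.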

Developing any proper compensation rule depends on the principal having full knowledge of the expert’s utility. Due to the strength of this assumption, the paper helpfully provides bounds on an expert’s incentive to misreport, the degree of misreporting, and the resulting expected utility loss of the decision-maker. With these bounds in hand, compensation rules can be developed to minimize the expected damage of misreporting without explicitly conditioning on the expert’s bias.

Unlike in other recent papers addressing decision markets, Boutilier assumes a single underlying random variable that can be observed regardless of the decision taken. This works well for events like the weather, where rain can be observed whether a wedding is held in the park or in a banquet hall. If instead a company wanted to choose which state to open a new branch in based on expected sales, sales in Maryland are never observed when the branch is opened in Massachusetts. This restriction in setting means the decision-maker can rely on a deterministic policy, mapping forecasts to decisions, without incentive issues. Being free from unobservable counterfactuals also simplifies the implementation of this scheme as a market scoring rule. I suspect these market scoring rules could be implemented as cost-function-based market makers without much difficulty, though Boutilier doesn’t address this.

Written by blakeriley

2012.01.24 at 14:04

Posted in Uncategorized

Market Scoring Rules

Decision-makers in need of information face the dual tasks of finding experts and then motivating them to give accurate forecasts. If there is an obvious expert to rely on, proper scoring rules are a well-understood means of eliciting honest probabilities. Alternatively, if there is a large enough pool of people willing to participate in a market, prices from a continuous double auction of contingent securities do well at aggregating information, without any need to screen for expertise. However, most prediction tasks are stuck between these two methods, with only a few, hard-to-identify individuals who can meaningfully give input. Market scoring rules bridge this gap, working with an arbitrary number of agents without becoming deadlocked or breaking the bank of the decision-maker.

Market scoring rules, and their equivalent formulation as cost-function-based market makers, debuted in “Logarithmic market scoring rules for modular combinatorial information aggregation” by Robin Hanson, first circulated as a working paper in 2002 and published somewhat perfunctorily in 2007. Mechanisms that solved similar problems, like David Pennock’s dynamic pari-mutuel markets, came out around the same time, but Hanson’s innovation has shaped up to be the seminal advance in prediction market design.

At first glance, a market scoring rule is an almost trivial extension of a typical scoring rule: each participant receives the difference between the score of his report and the score of the previous participant. This doesn’t affect incentive-compatibility or willingness to participate, because in the worst case, a participant could match the report of the previous agent and have no net payment. As a result, the sum of all the payments to participants telescopes, leaving the sponsor of the market liable only for the difference between the scores of the last participant and some initial report.
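The telescoping is easy to see in code. A minimal sketch, using the logarithmic scoring rule for a binary event (the reports and initial probability are made up for illustration):

```python
import math

def log_score(p, outcome):
    """Logarithmic proper scoring rule for a binary event."""
    return math.log(p if outcome == 1 else 1 - p)

def market_scoring_payments(reports, initial, outcome):
    """Pay each participant the score of their report minus the
    score of the immediately preceding report."""
    prev = initial
    payments = []
    for p in reports:
        payments.append(log_score(p, outcome) - log_score(prev, outcome))
        prev = p
    return payments

# Three participants sequentially revise the probability from 0.5.
reports = [0.6, 0.8, 0.7]
payments = market_scoring_payments(reports, initial=0.5, outcome=1)
total = sum(payments)  # telescopes to score(last) - score(initial)
```

However many participants trade, the sponsor's total liability is just the last score minus the initial score, which is bounded for the logarithmic rule once probabilities are kept away from 0 and 1.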

Although developed in the context of a sequentially applied scoring rule, this system turns out to be equivalent to an automated market maker that sells shares of contingent securities. This feels more like a prediction market, but with some striking advantages. First, the prices of securities always form a coherent probability distribution by construction, simplifying interpretation. Second, the market has infinite liquidity because all transactions are conducted through the market-maker. Third, prices for all securities are updated whenever a sale or purchase is made. Together, these advantages mean markets for conjunctive or conditional events can be feasibly priced. Even if no one else ever trades on a joint security that Obama wins the 2012 presidential election and it snows in Washington DC on inauguration day, this security can be bought and the information expressed in the purchase percolates out to all other combinations of events.
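The market-maker formulation can be sketched concretely. For the logarithmic market scoring rule, Hanson's equivalent cost-function market maker charges according to C(q) = b·log(Σᵢ exp(qᵢ/b)), where qᵢ is the outstanding shares of security i and b is a liquidity parameter; the function names and the b = 100 setting below are my own illustrative choices.

```python
import math

def cost(q, b=100.0):
    """LMSR cost function: C(q) = b * log(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def price(q, i, b=100.0):
    """Instantaneous price of security i: exp(q_i/b) / sum_j exp(q_j/b).
    Prices across all outcomes always sum to 1."""
    z = sum(math.exp(qj / b) for qj in q)
    return math.exp(q[i] / b) / z

def buy(q, i, shares, b=100.0):
    """Charge for buying `shares` of security i at state q.
    Returns (charge, new outstanding-share state)."""
    new_q = list(q)
    new_q[i] += shares
    return cost(new_q, b) - cost(q, b), new_q

# Two mutually exclusive outcomes, no shares outstanding: prices start at 0.5 each.
q = [0.0, 0.0]
charge, q = buy(q, 0, 20)  # buying outcome 0 pushes its price above 0.5
```

The coherence property from the first advantage above falls out of the construction: prices are a softmax of the share vector, so they are non-negative and sum to one after every trade.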

The modern prediction market literature largely revolves around market-makers inspired by Hanson. A decade later, the logarithmic market-maker now has an air of classic elegance to it, in contrast to the seemingly primeval prior literature and the complex refinements that have followed.

Written by blakeriley

2012.01.23 at 23:29

Posted in economics
