- Setting up the system:
- Scope definition: who and what indicators (actions/outcomes) are eligible?
- Interval definition: time period
- Indicator measurement
- Evaluation: score the measured indicators and attribute impact to entities
- Reward distribution
- Repeat
- Impact Evaluator IE = {r, e, m, S}
- r: reward function
- e: evaluation function
- m: measurement function on S
- S: scope
- Reward R = r(e(m(S)))
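A minimal sketch of this composition in Python; the function names and toy data below are illustrative assumptions, not part of the framework definition:

```python
# Minimal sketch of R = r(e(m(S))): the scope is measured, measurements are
# evaluated into per-entity scores, and scores are mapped to rewards.
# All names and numbers here are illustrative assumptions.

def measure(scope):
    # m: scope -> raw indicator measurements per entity
    return {entity: indicators for entity, indicators in scope.items()}

def evaluate(measurements):
    # e: measurements -> score per entity (here: simply sum the indicators)
    return {entity: sum(ind.values()) for entity, ind in measurements.items()}

def reward(scores, pool=100.0):
    # r: scores -> reward per entity (here: proportional split of a fixed pool)
    total = sum(scores.values()) or 1.0
    return {entity: pool * score / total for entity, score in scores.items()}

scope = {
    "alice": {"commits": 12, "docs": 3},
    "bob": {"commits": 5, "docs": 8},
}
print(reward(evaluate(measure(scope))))  # R = r(e(m(S)))
```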
- Scope
- Actions & outcomes
- What entities? Which indicators? What interval?
- Measurement function
- A pair: an objective and the indicator measured for it
- Evaluation function
- An array of entities and scores
- Evaluators can be humans, organizations or functions
- Final score is a weighted sum of all scores. Weights are based on expert input
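One way to read the weighted-sum rule, as a hedged sketch; evaluator names, weights, and scores are made up:

```python
# Combine per-evaluator scores into a final score per entity as a weighted sum.
# Weights (from expert input, per the notes) and all data are illustrative.

evaluator_weights = {"expert_a": 0.5, "expert_b": 0.3, "community": 0.2}

# scores[evaluator][entity] -> score given by that evaluator
scores = {
    "expert_a": {"project_1": 8, "project_2": 5},
    "expert_b": {"project_1": 6, "project_2": 9},
    "community": {"project_1": 7, "project_2": 7},
}

final = {}
for evaluator, weight in evaluator_weights.items():
    for entity, score in scores[evaluator].items():
        final[entity] = final.get(entity, 0.0) + weight * score

print(final)  # {'project_1': 7.2, 'project_2': 6.6}
```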
- Reward function
- Negative, zero, positive, and superlinear positive sum functions
- Reward can be assets and even recognition
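A sketch of the different sum dynamics, read as "how the reward pool scales with total value created"; the concrete formulas are illustrative assumptions only:

```python
# Different 'sum dynamics' for the reward pool as a function of total value
# created by all entities. The formulas and constants are illustrative.

def zero_sum_pool(total_value, fixed=100.0):
    return fixed                     # pool is fixed regardless of value created

def positive_sum_pool(total_value, share=0.5):
    return share * total_value       # pool grows linearly with value created

def superlinear_pool(total_value, k=0.01):
    return k * total_value ** 2      # pool grows faster than value created

for v in (50.0, 100.0, 200.0):
    print(v, zero_sum_pool(v), positive_sum_pool(v), superlinear_pool(v))
```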
- Humans vs. machines
- Up-front and maintenance costs
- Adaptability
- Overhead costs of IEs are high
- A fully transparent IE is the most powerful, but it takes significant work to build and its algorithm may be exploited
- Alternatively, calibrate the formula over time while still establishing some certainty for participants
- Reward function design
- Evaluate value creation & reward high-potential, high-uncertainty work
- Set a lower & upper bound for reward per entity
- Value accreting directly to the reward pool can result in positive-sum reward dynamics
- < 10% 'wasted' rewards is reasonable
- Case studies
- A: Bitcoin
- Scope S: Bitcoin network, every 10 minutes
- Measurement m: hashrate contributed per miner per block time
- Evaluation e: probability of having contributed the new block, proportional to hashrate share (see the sketch after this case study)
- Reward r: block reward
- Low operational cost, low adaptability, high setup cost
- Fixed reward curve with high clarity
- Incentives are powerful and secondary funding mechanisms (loans, financing for mining) are abundant
- Restricting systems to simplified inputs/outputs
- Reward assets in value-accreting tokens
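Reading the Bitcoin case as an IE, a hedged sketch of the evaluation and reward step; hashrates and the block reward figure are illustrative:

```python
import random

# Per block (~10 minutes), the probability that a miner is credited with the
# block is its share of total hashrate; the reward is the fixed block reward.
# Hashrates (TH/s) and the 6.25 BTC figure are illustrative assumptions.

hashrate = {"miner_a": 400.0, "miner_b": 250.0, "miner_c": 350.0}
block_reward = 6.25
total = sum(hashrate.values())

# Evaluation e: probability of having mined the block
win_probability = {m: h / total for m, h in hashrate.items()}

# Reward r: the whole block reward goes to the sampled winner, so the
# expected reward per block is proportional to hashrate share.
winner = random.choices(list(win_probability), weights=win_probability.values())[0]
expected = {m: p * block_reward for m, p in win_probability.items()}
print(winner, expected)
```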
- B: RetroPGF on IPFS (expert-based IE)
- Limited startup time; limited existing & robust measurements; highly context-dependent and broadly scoped evaluation area
- Quadratic voting; KYC'ed evaluators; contributions sourced from expert & community recommendations; experts vote on all projects blinded; fixed reward pool (zero-sum)
- High operational cost, high adaptability, low setup cost
- Scope S
- Indicators: contributing to the growth of IPFS implementations
- Entities: contributors
- Interval: 12 months
- Measurement m
- Via community
- Trim to 30 projects
- Evaluation e
- Expert quadratic voting
- 15 evaluators
- Draw experts from a group
- Reward r
- Fixed FIL pool
- Allocation based on normalized entity scores from the QV evaluation (see the sketch after this case study)
- Evaluations recur every 6 months and get more quantitative
- Takeaway
- No more than 1 hour of input per expert
- 15-25 experts and 20-40 projects
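A sketch of how quadratic voting scores might be turned into a fixed-pool FIL allocation, per the notes above; the vote data, pool size, and the square-root aggregation are illustrative assumptions about the exact QV variant used:

```python
import math

# votes[evaluator][project] -> credits spent by that evaluator on that project.
# Under quadratic voting, the effective vote is the square root of credits.
# All names, numbers, and the 100_000 FIL pool are illustrative assumptions.

votes = {
    "expert_1": {"ipfs_impl_a": 16, "ipfs_impl_b": 9},
    "expert_2": {"ipfs_impl_a": 4, "ipfs_impl_b": 25},
}
pool_fil = 100_000

# Evaluation e: sum of square-rooted credits per project
scores = {}
for ballot in votes.values():
    for project, credits in ballot.items():
        scores[project] = scores.get(project, 0.0) + math.sqrt(credits)

# Reward r: normalize scores and split the fixed (zero-sum) pool
total = sum(scores.values())
allocation = {p: pool_fil * s / total for p, s in scores.items()}
print(allocation)
```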
- C: Hybrid Quantitative IE (DocsDocs)
- Early experiment with purely subjective IEs
- Impact Evaluators:
- finish documentation
- user feedback
- Proof of completion and survey tool
- Low operational cost, medium adaptability and setup cost
- Scope S
- Indicators: Launch documentation, feedback
- Entities: Projects
- Interval: 3 months
- Measurement m
- Sign-up forms & NPS tool widget
- Evaluation e
- NPS score (see the sketch after this case study)
- Link submissions
- Reward r
- Takeaways
- Tooling (plug-ins) helps reduce noise
- Work inputs should be computationally verifiable
- Incentives must be aligned for all participants
- Critical mass of community voters
- Interoperability with other funding mechanisms
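A sketch of the NPS-based evaluation for this case; the survey responses are made up, and the standard NPS formula (% promoters minus % detractors on a 0-10 scale) is assumed to be the metric used:

```python
# Net Promoter Score from 0-10 survey responses collected via the widget:
# promoters score 9-10, detractors 0-6; NPS = %promoters - %detractors.
# The responses below are illustrative.

def nps(responses):
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100.0 * (promoters - detractors) / len(responses)

docs_feedback = [10, 9, 8, 7, 9, 4, 10, 6, 9, 8]
print(nps(docs_feedback))  # 30.0
```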
- Prospective funding programs
- Can be the most effective since they cover the cash-flow gap across all stages
- Link multiple impact evaluators
- Hypercert (to be looked into)
Thoughts: