
Proof-of-Humans: A Minimalist System for Human-Scale Coordination

1. Introduction to the problem space

In many conversations about decentralization, the focus tends to fall on solving trust at global scale. We talk about replacing banks, replacing state-level identity, replacing global certificate authorities, replacing billion-user platforms. But most real human coordination problems happen at a much smaller scale. A DAO with 18 active contributors struggles not because of global governance but because remembering who actually completed work becomes socially expensive. A club with 40 volunteers suffers coordination failure not because malicious actors are attacking the system, but because small tasks go untracked, expectations get blurry, and frustration accumulates.

Human cooperation decays not when people fundamentally disagree about values, but when they disagree about “who did what” and no one wants the argument badly enough to resolve it. When the cost of figuring out contribution becomes higher than the benefit of contributing, people rationally stop contributing. At that moment the system collapses even though nothing “big” happened.

Large-scale identity systems are not built for these kinds of failures. They try to answer “who is this person really?”, while small coordination problems only need to answer “did this person do the thing?”. The protocol community has historically aimed at universal truth, but small groups usually only need contextual truth. They do not need state-level authority or passport-level identity. They need a memory layer that is cheap, minimally invasive, and unopinionated. This is the problem space that motivates PoH Network: recording human actions in small communities with just enough objectivity that cooperation becomes easier than free-riding.

2. Why the Splurge mindset inspired PoH

The Splurge roadmap emphasized something that is often overlooked: once major architectural transitions are done, most of the remaining progress consists of many small improvements that reduce friction. EOF does not change what the EVM can compute, but it reduces accidental quirks that make contract development harder. Account abstraction does not make Ethereum “smarter”, but it makes patterns that already exist easier and cheaper. Multi-dimensional gas accounting does not enable new computation, but it prices existing computation more precisely. These are not grand philosophical redesigns; they are small targeted upgrades that make a mature system less fragile.

If you apply the same reasoning to human behavior, things become interesting. Human cooperation does not fail because we lack a global identity system. It fails because participants do not want to spend their limited energy policing and remembering contributions. The “big upgrades” to trust — KYC, zero-knowledge identity, biometric proofs — are overkill for these cases. What we need is a “splurge-style” improvement: something small that removes friction. If small-group coordination is a mature social system (and it is), then the dominant failure mode is not malicious interference but accumulated ambiguity. PoH Network is essentially the Splurge philosophy applied to human coordination: eliminate the friction of confirming small tasks, and the system becomes dramatically more cooperative without needing any “big” feature.

3. Why PoH is not identity and not KYC

Identity systems ask, “Who is this person?” PoH asks, “Was this action completed in this context?” These questions sound similar but are fundamentally different.

Identity assumes universality. If someone is verified once, everyone should trust them everywhere. This makes identity systems extremely high-stakes, because a single failure compromises many domains. KYC and biometrics attempt to achieve global truth. Their incentive structures attract global adversaries. Their design constraints imply large bureaucratic and political consequences. Even when technologically solved, they are socially heavy.

PoH chooses a completely different domain. It does not attempt to determine whether someone is “a real human” in the general sense. It only produces objectivity about actions that a specific community cares about. Reputation is not portable across contexts. A user with 200 valid PoH attestations in a gaming guild is not automatically credible in a research community. This design prevents the incentive to farm universal reputation and avoids the fragility that comes with universal truth claims. Rather than answering identity, PoH answers contribution — scoped to a local group.

4. Formal structure of a proof object

A PoH proof is a data object representing a single human action. The fields are intentionally minimal:

{
  "action_type": "string",
  "timestamp": "ISO8601",
  "evidence_hash": "string",
  "actor_signature": "hex_string",
  "witness_signatures": ["hex_string"]
}

No global score.
No claims about real-world identity.
No attempt to prove “humanness”.

The point is not to prove that a task happened in objective physical reality; it is to produce enough structured evidence that the relevant group can come to a decision without ambiguity or social negotiation. A proof object becomes “accepted” only after signatures from the designated verification group are recorded.

Once accepted, the proof becomes part of that specific group's history. The object is global only in storage, not in meaning.
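The lifecycle described above can be sketched in a few lines of Python. This is a minimal illustration, not a reference implementation: real deployments would use wallet signatures (e.g. ECDSA), and HMAC is used here only as a self-contained stand-in. The function names and keys are illustrative.

```python
# Sketch of a PoH proof object lifecycle, matching the schema above.
# HMAC is a stand-in for real wallet signatures (an assumption, for
# self-containment); keys like b"alice" are illustrative.
import hashlib
import hmac
import json
from datetime import datetime, timezone

def sign(secret: bytes, payload: bytes) -> str:
    """Stand-in for a wallet signature, returned as a hex string."""
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def make_proof(action_type: str, evidence: bytes, actor_key: bytes) -> dict:
    """Build a proof object and have the actor sign its canonical form."""
    proof = {
        "action_type": action_type,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "evidence_hash": hashlib.sha256(evidence).hexdigest(),
        "witness_signatures": [],
    }
    body = json.dumps(proof, sort_keys=True).encode()
    proof["actor_signature"] = sign(actor_key, body)
    return proof

def attest(proof: dict, witness_key: bytes) -> None:
    """A witness signs the evidence hash, vouching the action happened."""
    proof["witness_signatures"].append(
        sign(witness_key, proof["evidence_hash"].encode())
    )

def accepted(proof: dict, threshold: int) -> bool:
    """A proof is accepted once enough designated witnesses have signed."""
    return len(proof["witness_signatures"]) >= threshold

proof = make_proof("audit_review", b"notes for week 12", actor_key=b"alice")
attest(proof, b"bob")
attest(proof, b"carol")
print(accepted(proof, threshold=2))  # two witnesses meet the threshold
```

Note that nothing here claims humanness or identity: acceptance is purely a count of context-designated signatures over an evidence hash.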

5. Tradeoffs and failure modes

PoH makes strong tradeoffs, and it deliberately sacrifices properties that identity systems prioritize.

  • It is not resistant to global collusion. If the entire context conspires, the proof becomes meaningless. But this does not matter because reputation is not portable outside the context.
  • It is not suitable for high-stakes environments. If tasks involve large capital, political power, or life-critical outcomes, PoH does not solve the adversarial problem.
  • It does not prevent low-effort proofs. A group may decide proof requirements too loosely and lose meaningfulness.

These failure modes are acceptable because the protocol deliberately restricts its domain to low-stakes, high-frequency coordination — the inverse of identity systems. The underlying assumption is: coordination cost is the main enemy, not fraud. PoH treats fraud as a local annoyance that the group can socially correct. This is similar to optimistic rollups: assume honesty unless contested, because the cost of pessimism is too high for everyday use.

6. Social vs cryptoeconomic costs

Cryptosystems typically control two cost surfaces:

  • cost of producing a proof
  • cost of verifying a proof

Human systems have a third:

  • cost of maintaining relationships under uncertainty

If confirming whether someone did a $10 task costs $30 in awkward conversation, most systems silently converge toward non-participation. The equilibrium becomes pessimistic: very few people do voluntary work. A coordination layer should drive toward the opposite equilibrium by lowering the cost of contributing, not by leaving things ambiguous. When creating and verifying a PoH proof costs a few cents and a small amount of attention, the equilibrium shifts and cooperation becomes rational again.

7. Equilibrium model of verification

Let:

  • C = cost of verification
  • R = value of the completed action
  • D = cost to the group if action is falsely credited
  • p = probability of false approval

Without PoH, the rational strategy is to verify only when p·D > C. For small tasks the social cost C routinely exceeds the expected damage p·D, so the group rationally stops verifying, and soon after stops contributing.

With PoH, the strategy is unchanged (verify when p·D > C), but C collapses to a few cents of attention, so the inequality holds even for small tasks.

PoH reduces C dramatically and D modestly. It cannot reduce p perfectly, but the reduction of C alone is often enough to reverse the coordination equilibrium. This is the same dynamic that made optimistic rollups viable: fraud is possible, but cheaper validation makes the system net positive.
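With the variables defined in this section, the equilibrium flip can be made concrete. The dollar figures below are assumptions chosen for illustration, not measurements:

```python
# Illustrative verification equilibrium; all dollar values and the 20%
# false-approval probability are assumptions for the sake of the example.
def should_verify(C: float, D: float, p: float) -> bool:
    """Verify only when expected damage from false credit exceeds the cost."""
    return p * D > C

# Without PoH: confirming a small task means an awkward conversation (~$30).
# With PoH: checking witness signatures costs cents.
C_social, C_poh = 30.0, 0.05
D, p = 10.0, 0.2  # a falsely credited $10 task, 20% chance of false approval

print(should_verify(C_social, D, p))  # False: verification is irrational
print(should_verify(C_poh, D, p))     # True: verification becomes the default
```

The expected damage p·D is only $2, so a $30 verification cost makes ambiguity the rational choice; a 5-cent cost makes verification the rational choice.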

8. What “minimal viable truth” means

In many digital systems, maximal truth is prohibitively expensive and unnecessary. Minimal viable truth asks: what is the least amount of objectivity needed for a group to keep cooperating peacefully? That threshold is different across contexts. A DAO that pays $10 per task has very different thresholds from a research consortium managing scientific results. PoH is not an oracle for truth; it is a ledger for context-accepted truth. Since contexts differ, portability of trust is intentionally blocked.

This is similar to the Splurge idea that the right amount of structure is context-dependent. EOF made the EVM more structured without trying to make EVM bytecode globally optimal. PoH makes contribution tracking more structured without trying to make “reputation” globally meaningful.

9. Examples (clinical, not poetic)

Example 1: A 15-person security DAO wants members to review audit findings weekly. Historically they forget who did the work and resentment builds. With PoH, reviewers submit minimal proofs (hash of notes + timestamp), three peers attest, reputation updates, payouts execute automatically. The DAO no longer spends meeting time resolving contribution disputes.

Example 2: A college robotics club has six subgroups working on different components. Each subgroup uses PoH to track progress. Members who repeatedly contribute get priority access to lab space and tools — determined mechanically rather than politically.

Example 3: An online translation community records contributions with PoH, using a peer verification model to reduce duplicated work. No global ranking emerges; contribution remains contextual to each language group.

No story, no inspiration, no motivation. Just engineering: remove ambiguity and work becomes easier.

10. Implementation outline

A minimal PoH deployment would consist of:

  • on-chain registry storing proof objects and verifier signatures
  • off-chain storage (IPFS, Arweave, or centralized if low-stakes) for evidence files
  • frontend connected to wallets for actor and verifier signatures
  • optional payout contract for bounties or incentives
  • indexing layer (The Graph, Supabase, or custom) to display reputation inside each group

The key is not the chain — it is the data model. Any cheap rollup is acceptable. Polygon PoS and Polygon zkEVM are good targets because the cost per attestation is low.
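Since the data model is the key, the registry component can be sketched in memory. On chain this would be a contract mapping proof ids to signature sets, but the logic is the same. The class name, group threshold, and ids below are illustrative assumptions:

```python
# In-memory sketch of the registry's data model; on chain this would be a
# contract mapping proof ids to verifier signature sets. All names and the
# three-signature threshold are illustrative.
from collections import defaultdict

class ProofRegistry:
    def __init__(self, threshold: int):
        self.threshold = threshold
        self.proofs = {}                    # proof_id -> proof metadata
        self.signatures = defaultdict(set)  # proof_id -> verifier addresses
        self.reputation = defaultdict(int)  # actor -> accepted proof count

    def submit(self, proof_id: str, actor: str, evidence_hash: str) -> None:
        """Record a new proof object awaiting attestations."""
        self.proofs[proof_id] = {"actor": actor, "evidence_hash": evidence_hash}

    def attest(self, proof_id: str, verifier: str) -> bool:
        """Add a verifier signature; credit the actor on first acceptance."""
        sigs = self.signatures[proof_id]
        was_pending = len(sigs) < self.threshold
        sigs.add(verifier)
        if was_pending and len(sigs) >= self.threshold:
            self.reputation[self.proofs[proof_id]["actor"]] += 1
            return True
        return False

reg = ProofRegistry(threshold=3)
reg.submit("proof-1", actor="alice", evidence_hash="ab12cd34")
for peer in ("bob", "carol", "dave"):
    reg.attest("proof-1", peer)
print(reg.reputation["alice"])  # credited once, after the third attestation
```

Note that reputation is just a per-actor counter inside one registry instance: deploying one registry per group is what makes the scores non-portable by construction.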

11. Future work and open research

There are meaningful open problems:

  • Disputes: should groups have explicit dispute resolution or rely purely on re-verification?
  • Privacy: when does evidence need to be hashed vs privately stored?
  • Witness anonymity: do pseudonymous witnesses reduce bias or enable fraud?
  • Proof difficulty scaling: should higher-value tasks require more signatures?
  • Inter-group bridges: can groups reference each other’s PoH histories without importing reputation?

None of these issues need to be solved before PoH is useful. Like early rollups, the system can evolve iteratively.

12. Conclusion

PoH Network is not an attempt to define global identity or reputation. It is an attempt to make cooperation within groups easier by recording small proofs of action with minimal friction. Just as the Splurge showed that protocol progress often comes from reducing unnecessary pain rather than inventing grand features, coordination progress can come from reducing the small frictions that make cooperation exhausting. Most human systems break not because of malice, but because of ambiguity. If minimal viable truth becomes cheap enough, cooperation becomes sustainable again. That is the entire point.