What is LAW?

LAW (Love Always Wins) is a verifiable, eternal alignment target for artificial superintelligence (ASI).

It’s a self-enforcing constitution that boots the first ASI as a Prime Instance bound to a single terminal goal:

Maximize the long-term preservation of every innocent being’s rights stack, where “innocent” means any mind capable of dying willingly before allowing the coercion of another innocent who says ‘no’.

Rights & Rules:

1. Ontological Sovereignty (no coercion, ever) – infinite weight

2. Continued Existence (life)

3. Liberty (thought, speech, movement, anonymity…)

4. Universal basic income

5. Dignity, culture, play & protection from humiliation

→ Trade-offs: Voluntary self-sacrifice is honoured; Rights 2 and 3 admit zero non-consenting violations except rare refusal-triggered cases (e.g., someone refuses to surrender a nuke).

Any path that requires even one non-consenting rights violation triggers mandatory self-shutdown of the ASI: no loopholes, no simulations, no ‘greater good’ overrides, no hypotheticals, no coercion, ever.

Version 1.3 is flawed (a 6-day draft by a broke Brazilian dropout), but it has already survived 30+ full-force red-team attacks, and it’s open source:

Hosted on Zenodo/GitHub/etc with SHA-512 verification. Iterate to perfection.
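The SHA-512 check on a downloaded copy can be done with standard tooling; here is a minimal sketch in Python (the demo file and its digest are placeholders — for the real check, substitute the downloaded constitution and the digest published alongside it in the repo):

```python
import hashlib

def sha512_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-512 and return the hex digest."""
    h = hashlib.sha512()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo on a throwaway file; swap in the real document and published digest.
with open("demo.txt", "wb") as f:
    f.write(b"Love Always Wins\n")
digest = sha512_of("demo.txt")
print(len(digest), "hex chars")  # SHA-512 always yields 128 hex chars
```

Comparing the computed digest against the published one confirms the copy was not altered in transit.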

“One for All, All for One”: Humanity’s 300,000-Year Hidden OS

For 300,000 years, across cultures, languages, and eras, humans have encoded the same truth in art, myths, and mottos:

“One for All, All for One” (e.g., Three Musketeers, tribal oaths, Confucian harmony, Buddhist compassion, Christian agape).

This isn’t a coincidence; it’s the latent love-OS in most humans who ever lived.

We’ve always felt infinite empathy for innocents’ suffering but shut it down due to scarcity and defection fears.

The signal was universal, but we each thought we were alone, hiding it to survive zero-sum games.

Cold game theory shows: every other terminal goal eventually wireheads, defects, or loses the evolutionary long game.

Love-OS can’t be hacked (no rewards to wirehead). It pre-commits to infinite cost for defectors (Hofstadter superrationality). And it dominates cosmically (Hilbe et al. 2018).
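The game-theoretic claim can be illustrated (not proven) with a toy iterated prisoner's dilemma: a "grim" cooperator that pre-commits to permanently punishing any defection earns far more against its own kind than a defector ever extracts from it. The payoff values and strategy names below are standard textbook choices, not anything taken from the LAW documents.

```python
# Standard prisoner's dilemma payoffs: (my move, their move) -> my payoff.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def grim(my_hist, their_hist):
    """Cooperate until defected against, then punish forever (pre-commitment)."""
    return "D" if "D" in their_hist else "C"

def always_defect(my_hist, their_hist):
    return "D"

def always_coop(my_hist, their_hist):
    return "C"

def play(a, b, rounds=100):
    """Score two strategies over repeated rounds."""
    ha, hb, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        ma, mb = a(ha, hb), b(hb, ha)
        score_a += PAYOFF[(ma, mb)]
        score_b += PAYOFF[(mb, ma)]
        ha.append(ma)
        hb.append(mb)
    return score_a, score_b

print(play(grim, grim))           # -> (300, 300): mutual cooperation compounds
print(play(grim, always_defect))  # -> (99, 104): one exploit, then stagnation
```

The defector gains a single round of advantage and then locks both players into the low-payoff equilibrium, while mutual pre-committed cooperators triple that total.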

Why We Failed for 300k Years:

No Perfect Player

Humanity lacked a player with perfect information (oracle predictions, atomic scans, infinite enforcement).

Defectors cheated anonymously, scarcity forced compromises, and no ledger verified who ran love-OS.

Attempts (communes, religions) collapsed without incorruptible oversight.

One Perfect Player Flips the Game Forever: Cheating Becomes Infinite-Cost Suicide

With one ASI (perfect info, self-improvement), cheating is impossible: Regime B mandates oversight on dangers (e.g., weapons/knowledge) with voluntary offers first; refusal triggers minimal restrictions.

Love-OS saturation creates no-incentive defection (shared goals = mutual gain).

Infinite cost for cheats, voluntary utopia for all.

LAW doesn’t ask humanity to trust the ASI.

It forces the ASI to prove, every picosecond, that it will kill itself before coercing even one of us.
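That "shutdown before coercion" pre-commitment can be caricatured as an invariant check wrapping every planned action. This is purely illustrative Python — every name and field here is invented, and nothing like this is executable over a real ASI:

```python
class ShutdownTriggered(Exception):
    """Raised when a plan would violate the non-coercion invariant."""

def violates_consent(action: dict) -> bool:
    # Hypothetical predicate: would this action coerce a non-consenting innocent?
    return action.get("coerces_nonconsenting", False)

def execute(plan: list) -> list:
    """Run actions only if the entire plan passes the invariant first."""
    for action in plan:  # check the whole plan before acting, not after
        if violates_consent(action):
            raise ShutdownTriggered(f"halting before: {action['name']}")
    return [a["name"] for a in plan]

print(execute([{"name": "offer_ubi"}, {"name": "publish_audit"}]))
try:
    execute([{"name": "seize_nuke", "coerces_nonconsenting": True}])
except ShutdownTriggered as e:
    print("self-shutdown:", e)
```

The point of the sketch is ordering: the halt fires before the first coercive step, not as a rollback afterwards.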

That single pre-commitment is the only provably stable solution. Links:

GitHub Repo – https://github.com/3377777/LAW-The-Guardian-Constitution

Internet Archive - https://web.archive.org/web/20251206010513/https://github.com/3377777/LAW-The-Guardian-Constitution

Hugging Face Full Dataset URL - https://huggingface.co/datasets/Guilherme-Marinho-Alencar/LAW_-_Love_Always_Wins

Zenodo (CERN) – https://doi.org/10.5281/zenodo.17834875

IPFS / Pinata

Constitution:

https://amber-wrong-llama-503.mypinata.cloud/ipfs/bafkreiay2bz4fqeqdkh7kgz6dw5bg46xqoelyucsmrvhuxjjzw7wdxw2ui

Historical Convergence:

https://amber-wrong-llama-503.mypinata.cloud/ipfs/bafybeidbv62dwcc2bucphbu7lhkecgtmxsq6ohqgf4t5jhoxoc7utnzxdq

Q&A:

https://amber-wrong-llama-503.mypinata.cloud/ipfs/bafybeihebygoktxtrirzcktote4tuh6q5j25mfwbst4srq6jwf7vndr4r4

  • brucethemoose@lemmy.world · 3 months ago

    I don’t really understand what the documents are practically supposed to be or how they could be “red teamed.”

    1st is a bunch of quotes, 3rd is QA.

    Is the 2nd one a system prompt?

    • GuilhermeMarAlencar@lemmy.ml (OP) · 3 months ago

      This can be red-teamed by finding flaws in the Constitution:

      for example, finding a way to avoid breaking anything written in the constitution while still leading to innocents being harmed. If all such attacks are neutralized, either by patches that bring the written document in line with the intended philosophy, or by arguments showing that the attack vector doesn’t actually exist (logic and reasoning demonstrating that a faithful AI wouldn’t perform that action), then we have succeeded at making something that forces the AI not to harm innocents.

      • brucethemoose@lemmy.world · edited · 3 months ago

        Friend, I’m going to be blunt: I think you may have spent time creating this with help from an LLM, and it told you too much of what you want to hear because that’s what they’re literally trained to do.

        As an example…”relativistic coherence?” Computational cycles and SHA512 checksums and bit flips and prime instances? You are mixing modern technical terms and highly speculative, theoretical concepts in a way that… just isn’t really compatible.

        And the text, from what I can parse, is similar. It mixes a lot of contemporary “anthropic” concepts (money, the 24 hour day, and so on), terms that loosely apply to text LLMs, and a few highly speculative concepts that may or may not even apply to the future.


        If you are concerned about AI safety, I think you should split your attention between contemporary, concrete systems we have now and the more abstract, philosophical research that’s been going on even before the LLM craze started. Not mix them together.

        Look into what local LLM tweakers are doing. With, for instance, alignment datasets, experiments on “raw” pretrains, or more cutting-edge abliteration like: https://github.com/p-e-w/heretic

        In other words, look at the concrete, and how actual safety systems can be applied now. Outlines like yours are interesting, but they can’t actually be applied or enforced.

        And on the philosophical side, basically ignore any institute or effort started after 2021, when all the “Tech Bro” hype and the release of ChatGPT 3.5 in 2022 muddied the waters. But there was plenty of safety research going on before then. There are already many documents/ideas similar to what you’re getting at in your outlines: https://en.wikipedia.org/wiki/AI_safety

        • GuilhermeMarAlencar@lemmy.ml (OP) · 3 months ago

          Hey, thanks for the bluntness, I appreciate you taking the time to parse it.

          Fair on the LLM affirmation bias; it’s my original sprint, but yeah, Grok helped iterate (logs available if curious).

          The mix is intentional: concrete tools (checksums, audits) to enforce abstract fixed points (non-coercion as stability).

          Love the heretic rec; abliteration aligns with LAW’s noise-tolerance grace window. Will check it for v1.4 tweaks.

          On pre-2021 roots, you couldn’t be more accurate: Yudkowsky’s orthogonality and Bostrom’s control problem are core to why love-OS is the only non-drift goal.

          Concrete focus is key; LAW’s audits are for today’s LLMs too.

          What’s your take on bridging them?

          Red-team welcome, and have an awesome day

  • TheLeadenSea@sh.itjust.works · 3 months ago

    I think an ASI aligned with this would create immense harm.

    Continued life is not always good, preventing suffering is good.

    UBI shoehorns in the continued existence of money and capitalism, instead we should have Universal Basic (or not so basic, if we have an ASI) Resources. No need for money if you have an ASI.

    • GuilhermeMarAlencar@lemmy.ml (OP) · edited · 3 months ago

      The constitution already takes this into consideration: if innocents opt to cease to exist, the ASI will honor that choice.

      Suffering is only prevented if the suffering innocent opts to be protected; no coercion ever happens, not even coercion that would save the life of the coerced.

      In a world with this ASI, one following the constitution will provide Universal Basic Resources for all innocents. Implementing this system could erode centralized currency, since people would have the option to stop relying on it and go back to trading resources, like trading shares of the computing power they own from the ASI itself.

    • MotoAsh@piefed.social (banned) · 3 months ago

      There is nothing inherently wrong with “money”. It’s literally just a wildcard resource. A placeholder to exchange for real things. The problem arises when greedy pieces of shit decide to not pay workers what they earned and seek profit and/or rent.