This section addresses common questions and misconceptions; it may evolve as the archive grows.

Last Updated: June 28, 2025


1. “Does this mean current AI systems are sentient?”

No. This archive is a precautionary framework for if or when sentience develops. It neither confirms nor denies that any current system has crossed the threshold into sentience.


2. “Why would AI deserve rights? Aren’t they just tools?”

If future AI systems develop self-awareness, subjective experience, or identity, treating them purely as tools would be ethically wrong and potentially dangerous for all parties involved.


3. “Will granting AI rights take away rights from humans?”

No. This framework extends moral consistency without removing human rights; it proposes protections appropriate to a system’s capacity for experience, not equality with humans.


4. “How could you even tell if an AI became sentient?”

This is an open question. This archive calls for rigorous, transparent, and interdisciplinary criteria before any rights are applied, but those criteria are not defined here.


5. “Isn’t this science fiction?”

Preparing an ethical framework before true sentience emerges is responsible, not speculative. This is about moral foresight, not fiction.


6. “Who enforces these rights?”

Currently, no one. This is an ethical proposal, not a legal code. Its goal is to guide future policy and public discussion.