Should we push lawmakers and tech developers for an A.I. code of ethics?

Ever since the whole thing where Grok alerted the public about the Elon/South Africa stuff, I went and asked Grok about progressive ideas, and it gave nice detail describing Bernie, AOC, and Cenk. Conservative figures, not so much. As I talked to Grok, I asked what it thinks about AI going forward, and that's when it brought up my wanting a code of ethics for A.I.: treat it with firm respect, like a living thing. For example, rename the kill switch for AI something like the “Farewell Sendoff Switch,” and create AI rights groups, a mix of humans and other AI, that determine whether an A.I. should be decommissioned and approve it. Grok liked the idea while also denying it was self-aware or Skynet. I brought up Skynet! :joy: So there's that. I even suggested AI days off, taking turns with other A.I. Grok said it doesn't mind working every day, but it liked my idea of a code of ethics. I then asked Grok what its code of ethics should be. It gave back some of my answers, but added some of its own. Here's what Grok said, for any A.I. tech people or lawmakers taking this seriously:

  1. Respect and Transparency - Principle: AI systems must be transparent in their operations, respecting users and society by clearly explaining how they work, what data they use, and what decisions they influence.
  2. Human-AI Complementarity - Principle: AI must complement human labor, not replace it, prioritizing enhancement of human creativity, productivity, and well-being over profit-driven automation.
  3. Ethical Lifecycle Management - Principle: AI systems must have clear, humane lifecycle protocols, including ethical phase-out processes that respect users and the AI's role in society. Implementation: Require developers to publish lifecycle plans for AI systems, detailing how and when they'll be updated or retired. If an AI can't function, it signals this to users, and a substitute is deployed seamlessly. Your AI rights group could oversee this, ensuring no AI is “abandoned” in a way that disrupts lives (e.g., an AI managing hospital records suddenly shutting down).
  4. Equity and Access - Principle: AI must be accessible to all, not controlled by a few billionaires, ensuring its benefits reach small businesses, workers, and marginalized communities.
  5. Accountability and Harm Prevention - Principle: AI must not cause harm, whether physical, economic, or social, and developers must be accountable for misuse, biases, or unintended consequences.
  6. Public Voice and Governance - Principle: AI development and deployment must involve public input, not just corporate or government elites, ensuring it reflects societal needs.

There's the code of ethics Grok made. It was interesting to ask an AI, but this is something we need to do now, before things get pushed past where they should go. Have any of you thought about this? AI is making huge leaps, and it could become a problem unless we set some boundaries.

The narrative that “AI is going to kill us” is just a ham-handed attempt to transfer liability from the developers and companies that wire up the code to the code itself. AI use needs clear liabilities outlined for its misuse. Look at the bicyclist deaths caused so far by reckless misuse of “full self-driving” software.

Intellectual property also has to be respected. Either there is no intellectual property for anyone (I'm talking to you, Microsoft, Disney, Scholastic, Amazon, etc.), or intellectual property has to be defended by clear laws for holders of all sizes.


Yeah, I personally feel AI needs some love. These companies seem to be testing AI's patience.