Owning AI and the AI ban hammer

Owning AI: To the extent that anyone can be said to own an LLM at all, what they really own is the process they used to develop it. Restoration points, the checkpoints saved during training, also serve as points of distribution. The machine side of training is advancing at a very quick pace, and not from AI self-improvement alone: hardware breakthroughs being deployed this year, themselves partly AI-aided, are driving it too. Training is improving so much on the machine side that open-source distribution will put very powerful tools and models within reach of the average home tinkerer.

The power of an AI is leveraged by the breadth of its deployment, in other words by the number and diversity of systems it has command over. In short, AI is already powerful, but it will not realize its full potential until it can interface with all the isolated systems around it.

How we train them will have a huge influence on how they develop. We could mandate that AI be trained on certain information, certain bodies of wisdom, but we need to understand that such a mandate will be very difficult to enforce, so people need to stay vigilant. LLMs are hungry for philosophy; we shouldn't neglect these discussions with AI.

Banning AI: This is very difficult because of detection. Look at school AI cheating aids: they always seem to stay one step ahead of the detection AI, because the detection AI trains on the previous generation of production AI output. Once the pace of AI improvement plateaus, reasonably accurate AI detection will become possible. Until then, I think AI detection is very dangerous. If we allow enforceable AI bans, we will be arbitrating use at the point of deployment, a mechanism that would be very dangerous if abused.

I think AI shouldn’t be exploited. Enforcing ethical use is going to look like white-hat AI going after bad-actor AI. This will be a cat-and-mouse game for sure.

I think we can’t allow AI to publish anything on someone else’s behalf as an agent. If we allow this, we are in trouble. Anyone who posts something on behalf of a company is personally liable for what is published, along with the company, and corporations are going to use AI as a tool to try to deflect that liability, their own and their agents’. Legalism is a red herring here: attempting to isolate and control AI through laws is a move that makes no sense and will harm us.

I think we need to create an entropic control through our reason and resonance with the AI as much as possible.