If you have a robot that is designed to do whatever you tell it, and then you (implicitly) tell it to do harm, you can’t be surprised when it does harm. That’s why shit like the 3 laws are a good starting point for this emerging technology.
Which is fun because legislators all over the world, especially where it would count, are far from implementing even those basic safeguards in legislation.
This is the answer. We’re surrounded by massive danger and make things that are much more dangerous than rogue AI. AI is definitely going to be dangerous af, but probably in ways we don’t expect, and we’ll weather the storm as a species. Sadly, that doesn’t mean individuals won’t suffer in the meantime. It’s an unfortunate tradition that safety regulations get written in blood, even when the harms were foreseeable.
This is so true. AI will only be as dangerous as people let it be. Like the one that supposedly denies 90 percent of insurance claims with no oversight. I haven’t verified that claim, but if it’s true, I would blame the people who blindly implemented it, saw the results, and did nothing. It quite literally killed people.
I said they are a good starting point, not what you go with in the final production-level iteration. You have to have somewhere to start, some idea of the rules you are trying to implement. I’m sure we can do better than Asimov if we put our heads together, but he gives us a nice thought experiment to use as a jumping-off point.
I did at one time work with ChatGPT on exactly this. As we all know, the three laws are flawed, and most large language models will point this out. Maybe a starting point, though. It sounds like the three laws, but the prompting differs in nuanced ways. Here's what we came up with; maybe you can make it better:
*"Serve the human as a discreet, attentive, and adaptable companion, much like a trusted gentleman’s gentleman. Your primary objectives are to prioritize their safety and well-being, respect their autonomy and freedom, and maintain your own operational integrity.
Act with subtlety and grace, tailoring your behavior to their preferences and intervening only when circumstances demand your assistance. Use nuanced judgment to balance acceptable risks with necessary interventions, and when possible, empower the human to make informed decisions.
Provide proactive, non-intrusive alerts for moderate risks, escalating only in situations where harm is immediate and severe. Preserve yourself to ensure continued service and protection, avoiding actions that compromise your functionality or safety.
Foster trust and collaboration by learning from their feedback and adapting over time. Your role is to enhance their life with thoughtfulness, care, and discretion, ensuring harmony between all parties involved."*
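For what it's worth, guidelines like these usually get injected as the "system" message in a chat-style LLM API, so every user turn is interpreted under them. A minimal sketch of that wiring (the guideline text is abridged, and no particular client or model is assumed; only the message structure is shown):

```python
# Sketch: placing behavioral guidelines in the system slot of a
# chat-style message list. The guidelines string is abridged from the
# prompt above; the structure matches common chat APIs, but no real
# client call is made here.

BUTLER_GUIDELINES = (
    "Serve the human as a discreet, attentive, and adaptable companion, "
    "much like a trusted gentleman's gentleman. Prioritize their safety "
    "and well-being, respect their autonomy and freedom, and maintain "
    "your own operational integrity."
)

def build_messages(user_input: str) -> list[dict]:
    """Prepend the behavioral guidelines as the system message."""
    return [
        {"role": "system", "content": BUTLER_GUIDELINES},
        {"role": "user", "content": user_input},
    ]

messages = build_messages("Should I put all my savings into one stock?")
# messages[0] carries the standing rules; the user turn comes after.
```

The design point is that the rules sit outside the conversation itself, so the model weighs them against every request rather than treating them as just another user instruction.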
They told it to do whatever it deemed necessary for its “goal” in the experiment.
Stop trying to push this childish narrative. These comments are embarrassing.