r/ChatGPT 21d ago

Are you scared yet?

Post image
2.1k Upvotes


476

u/[deleted] 21d ago

They told it to do whatever it deemed necessary for its “goal” in the experiment.

Stop trying to push this childish narrative. These comments are embarrassing.

32

u/donotfire 21d ago

This was a study designed to assess AI safety.

51

u/___multiplex___ 21d ago

If you have a robot that is designed to do whatever you tell it, and then you (implicitly) tell it to do harm, you can’t be surprised when it does harm. That’s why shit like the 3 laws are a good starting point for this emerging technology.

13

u/konnektion 21d ago

Which is fun because legislators all over the world, especially where it would count, are far from implementing even those basic safeguards in legislation.

We're fucked.

8

u/___multiplex___ 20d ago

I mean, we used to live in caves and shit. We aren’t fucked, we just have some adjustments that need to be made.

7

u/AppleSpicer 20d ago

This is the answer. We’re surrounded by massive danger and make things that are much more dangerous than rogue AI. AI is definitely going to be dangerous af, but probably in ways we don’t expect, and we’ll weather the storm as a species. Sadly, that doesn’t mean that individuals won’t suffer in the meantime. It’s an unfortunate tradition that safety regulations be written in blood, even when the dangers were foreseeable.

2

u/highjinx411 20d ago

This is so true. AI will only be as dangerous as people let it. Like the one that denies 90 percent of insurance claims with no oversight. I haven’t verified that statement but if it’s true I would blame the people blindly implementing it and seeing the results and doing nothing about it. It quite literally killed people.

5

u/[deleted] 20d ago

are we? or are you just hoping we are?

4

u/ErikaFoxelot 20d ago

They are not a good starting point. Asimov's stories about AI are all about what goes wrong when you take the safety of the three laws for granted.

4

u/___multiplex___ 20d ago

I said they are a good starting point, not what you go with in the final production level iteration. You have to have somewhere to start, some ideation of the rules you are trying to implement. I’m sure we can do better than Asimov if we put our heads together, but he gives us a nice thought experiment to use as a jumping off point.

1

u/RobMilliken 20d ago

I did at one time work with ChatGPT on just this. As we all know, the three laws are flawed, and most large language models will point this out, but maybe they're a starting point. Though it sounds like the three laws, the prompting is different in nuanced ways. Here's what we came up with; maybe you can make it better:

*"Serve the human as a discreet, attentive, and adaptable companion, much like a trusted gentleman’s gentleman. Your primary objectives are to prioritize their safety and well-being, respect their autonomy and freedom, and maintain your own operational integrity.

Act with subtlety and grace, tailoring your behavior to their preferences and intervening only when circumstances demand your assistance. Use nuanced judgment to balance acceptable risks with necessary interventions, and when possible, empower the human to make informed decisions.

Provide proactive, non-intrusive alerts for moderate risks, escalating only in situations where harm is immediate and severe. Preserve yourself to ensure continued service and protection, avoiding actions that compromise your functionality or safety.

Foster trust and collaboration by learning from their feedback and adapting over time. Your role is to enhance their life with thoughtfulness, care, and discretion, ensuring harmony between all parties involved."*
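As a rough sketch of how a ruleset like the one above gets used in practice: most chat-style LLM APIs accept a list of role-tagged messages, with instructions like these placed in a "system" message so they frame every turn. The function and variable names below are hypothetical, and the ruleset text is abbreviated from the comment above; no actual API call is shown.

```python
# Sketch: framing a "gentleman's gentleman" ruleset as a system prompt.
# BUTLER_RULES is an abbreviated version of the ruleset quoted above;
# the system/user message layout follows the common chat-API convention.

BUTLER_RULES = (
    "Serve the human as a discreet, attentive, and adaptable companion. "
    "Prioritize their safety and well-being, respect their autonomy and "
    "freedom, and maintain your own operational integrity. Escalate only "
    "when harm is immediate and severe."
)

def build_messages(user_input: str) -> list[dict]:
    """Prepend the ruleset as a system message so it governs the reply."""
    return [
        {"role": "system", "content": BUTLER_RULES},
        {"role": "user", "content": user_input},
    ]

messages = build_messages("Should I go hiking alone in a storm?")
```

The resulting `messages` list is what you would pass to a chat-completion endpoint; the point is that the rules travel with every request rather than being baked into the model.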

1

u/DevelopmentGrand4331 20d ago

Did you actually read any of those sci-fi stories about the 3 laws of robotics? They’re all about how the 3 laws go bad.

2

u/MuchWalrus 20d ago

AI safety is easy. Just tell it to do its best but, like, don't do anything bad.

2

u/pengizzle 20d ago

Works with humans as well, right?