4 Comments
Adrian Barrow

We continue to learn the lessons of technological safety far too late. You can't get a new drug approved without long-term trials that prove efficacy and safety. You can't get a new autopilot system approved for aircraft until it has passed rigorous certification across a wide range of conditions. Not sure why new media tech should be any different.

Alexandra Jugović

Adrian, thank you. That's the mindset I'm arguing for in the piece: design like infrastructure, not a stunt. In practice that means policy, not theatre: the bot ends harmful chats, names the human rule, and offers a soft handover, while upstream we keep the sewage out of the data. Your pharma/aviation analogy fits; our equivalents are a safety case before scale, real red-teaming, and honest interfaces. A rough sketch of what that close-and-handover flow could look like is below.
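(Purely as an illustration, here is a minimal Python sketch of that "name the rule, end the chat, hand over" policy. The labels, classifier stub, and helpline wording are all hypothetical placeholders, not anything from the piece; a real system would sit behind a trained safety classifier and vetted support resources.)

```python
from dataclasses import dataclass

# Hypothetical severity labels; a production system would use a trained
# safety classifier, not this hard-coded set.
HARMFUL_LABELS = {"self_harm", "abuse", "extremism"}

@dataclass
class BotReply:
    text: str
    end_conversation: bool

def close_with_handover(label: str) -> BotReply:
    """End a harmful chat: name the human rule, then hand over softly."""
    rule = (f"I have to stop here: this conversation touches on "
            f"{label.replace('_', ' ')}.")
    handover = ("I'm an automated system without feelings, and this is beyond "
                "what I should handle. A human can help, for example a local "
                "crisis line or the support contacts listed on this site.")
    return BotReply(text=f"{rule} {handover}", end_conversation=True)

def respond(message_label: str) -> BotReply:
    # Policy, not theatre: the rule fires regardless of tone or persistence.
    if message_label in HARMFUL_LABELS:
        return close_with_handover(message_label)
    return BotReply(text="(normal conversational reply)", end_conversation=False)
```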

Vassoula Vasiliou

I totally agree, Alexandra... eloquently articulated... and yes, the bot must end the chat. It needs to be discussed and promoted that the bot is not human and has no human emotional feelings, in order to help support vulnerable people.

Alexandra Jugović

Thank you, Vassoula - exactly. The bot should hang up to protect people, not because it "feels" hurt. Warmth while we talk; policy when we stop. A clear rule, a quick close, and a soft handover to real support. That's how we keep vulnerable users safe and keep trust intact.
