DEV Community

Discussion on: The problem with “you guys”

edA‑qa mort‑ora‑y

I'm opposed to using technology to enforce moral positions. It'll fail. It'll fail badly. The potential for harm far outweighs the potential for good.

We're increasingly relying on "AI" and automation that is utterly crap at mimicking human nuance. If we continue down this path, we won't end up more tolerant, just less human.

Knut Melvær

Yeah, I think you have a point, as I hinted at with the Demolition Man clip.

The bot isn't enforcing anything, though. Unlike the public Slackbot version, it just asks you to reconsider your phrasing, with no sanction beyond the message itself appearing.

If I'd given it more development time, I'd also add a “turn this off” capability.

DJ Quecke

Sorry Knut... "The bot is not enforcing it though." That is how Orwellian things start. It may not enforce anything now, but if the wrong person sits in the wrong seat in the future, it becomes very easy to enforce.

It starts with no enforcement. Then someone adds a logger. Then someone actually reads the log and finds Knut had to be corrected 32 times this week. Maybe we'd better talk to Knut. Knut is "coached" not to use the term. Knut is now upset. Next week he uses the term 52 times. Now he gets coached for insubordination. Anything can happen next. Knut was a great person, employee, and team member, but he no longer wants to be part of this team. Goodbye, Knut.

Meanwhile, Janet and Tim never used the term this past month, and they are named Employees of the Month. No enforcement, but we lost a productive team member. And by the way, Janet spends two hours a day working on her personal website, and Tim couldn't code his way out of a wet paper bag.

Knut Melvær

This was a fun story, but a bot on Slack isn't what I'd be worried about in terms of Orwellianism.

And it's funny how I'm the one accused of thought policing when I'm willingly trying to be nuanced about something I know I'll get berated for in public 😁

And didn't you see the Demolition Man clip, and my comment making exactly that point?

anabella • Edited

Ain't it funny how (some) people get all dystopian when they're asked to make even a tiny effort to decrease their privilege gap?

There's no dystopian dictatorship coming to get you for not being nice to people. It's just "try to be nice to people". In this case: some people don't feel included in "you guys". Even if it were a phrase originally meant to include every member of a potentially heterogeneous group, maybe try using a word that includes them. It's that easy.

Weird how there aren't many such phrases where a mostly-female term is supposed to be accepted as a wildcard for "all people".

edA‑qa mort‑ora‑y • Edited

The fear of abuse should not be ignored. It's exactly these types of automated tools that prevent open discussion of sexuality, including sexual health, on many public forums. Furthermore, automated tools are already being used on sites such as YouTube to block content on questionable copyright grounds.

The voice that gets hurt the most by automated filtering is the minority voice. If you open the door, even a bit, to moral filtering, it's the incumbent dogma that will become normalized. Dissenting voices will simply be drowned out.

My argument against filtering has nothing to do with what is being filtered. I'll make the same argument for any kind of automated filtering and classification.

Knut Melvær

I think @anabella has a good point though. Those who object to this bot (and don't manage to show that they actually read my post) escalate it into being about moral monitoring, censorship, people being "offended", or whatnot. Those are important, challenging, and interesting points in and of themselves, but what worries me is that they also reframe the discussion and offer little acknowledgment of the experiences of those who felt the need to make this bot in the first place.

And @mortoray, the bot isn't actually censoring anyone. It only reveals itself to the user in question, presenting a suggestion along with a way to learn more about why. It's up to you to make the judgment, to protest it, or to ask the moderator to remove it or whitelist you. It only acts in the channels it's invited to, and its source code is out in the open.

Is there really no distinction between that and the opaque processes and technological decisions that go into something like YouTube or Facebook? Can't it be a way for a community to self-monitor according to rules they've agreed on for themselves, in order to foster productive conversation?
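
For the curious: the whole mechanism is small enough to sketch. This is not the bot's actual source, just a minimal illustration assuming Slack's Bolt framework, with the trigger regex, the in-memory opt-out set, the wording, and the "learn more" link all as placeholders:

```typescript
import { App } from "@slack/bolt";

const app = new App({
  token: process.env.SLACK_BOT_TOKEN,
  signingSecret: process.env.SLACK_SIGNING_SECRET,
});

// Hypothetical per-user opt-out ("turn this off"), kept in memory for the sketch.
const optedOut = new Set<string>();

// Message listeners only fire in channels the bot has been invited to.
app.message(/\byou guys\b/i, async ({ message, client }) => {
  // Skip edits, joins, bot messages, etc.; plain user messages have no subtype.
  if (message.subtype !== undefined) return;
  if (optedOut.has(message.user)) return;

  // chat.postEphemeral is visible only to the author of the message;
  // nothing is deleted, blocked, or shown to the rest of the channel.
  await client.chat.postEphemeral({
    channel: message.channel,
    user: message.user,
    text:
      'Hey! Have you considered a more inclusive phrase than "you guys", ' +
      'like "you all" or "folks"? <https://example.com/why|Learn more>',
  });
});

(async () => {
  await app.start(Number(process.env.PORT) || 3000);
})();
```

The design point is chat.postEphemeral: the hint is only ever shown to the person who typed the phrase, so nothing in the channel is filtered or hidden, and this sketch doesn't log anything either.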