I made a bot that suggests alternative phrasing when you write "guys" in Slack. It made for some discussion.
I have seen in different Slack communities that people have implemented a Slackbot response that triggers whenever someone writes "guys", suggesting some alternatives that are considered more inclusive. It also links to an article in Vox about npm's "you guys" jar. A similar commentary was also published in The Atlantic, including comments from a linguist. There's also a handful of threads on forums where people discuss whether the word "guys" is gender neutral or not.
Not surprisingly, not everybody was convinced of the bot's legitimacy. I had a handful of people tell me (on behalf of their English-speaking part of the world) that "guys" actually is gender neutral (I am aware). Some went further and suggested that this was censorship and undermined the way they talk. It's tempting to point out how many of them were not… guys (well, none of them).
My tweet also gained a lot of likes and retweets, suggesting that some folks appreciated the idea, or at least wanted to be affiliated with what it signals. And this brings us closer to the matter at hand. Because I'm anything but sure that a "Guys bot" is the right approach to the problem we're trying to solve: how to make welcoming and inclusive communities in tech. I put it out partly as an experiment, to learn more from the reactions to it.
Nobody likes being told they're wrong
I have a lot of training in being wrong. I have spent years at university being told I'm wrong in all sorts of creative ways. Being open to being wrong is an essential part of learning and developing knowledge, especially in science. Still, I very often get defensive and moody when I'm told so. It's a very human response, and something I have to actively work on.
The reason it's irritating is that there's often a moral dimension to it that doesn't align with whatever you intended. I bet that most people who greet a group with "Hi, guys!" don't intend to be exclusive, but rather the opposite. When you're instantly hit back with a canned response suggesting that this seemingly innocent way of speaking may be hurtful to some, I have no problem seeing that some might find that obtuse.
I suspect, though, that people who have a lived experience of not being included in all sorts of ways, in a field still dominated by mostly men, may have a more empathetic response because they get what the bot is trying to solve.
Bots lack nuance
There are good reasons to object to a bot automatically correcting language in a public community. To borrow the analogy from one of my colleagues, one wouldn't like having something that buzzed every time someone said a flagged word in the office. Like in Demolition Man.
Since neither I nor the originators of the Slackbot response have spent time tuning the matching, it also triggers on a lot of false positives, which is somewhat counterproductive.
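A good chunk of those false positives presumably comes down to naive substring matching. Here is a minimal sketch in TypeScript (illustrative only, not the bot's actual code; the pattern is just an example) of how a word-boundary check would cut some of them:

```typescript
// Illustrative only: matching on word boundaries instead of raw substrings
// avoids some obvious false positives.
const GUYS_PATTERN = /\bguys\b/i;

function shouldNudge(message: string): boolean {
  return GUYS_PATTERN.test(message);
}

console.log(shouldNudge("Hey guys, ready for standup?"));    // true
console.log(shouldNudge("The Guysborough office is closed")); // false (no standalone "guys")
```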
The reason a bot is tempting for community moderators is that it lessens the chance of ending up in long-winded debates whenever you have to remind people of the Code of Conduct and tell them they're wrong. Also, you can't be everywhere at the same time. A bot can.
Why the bot stays (at least for a while)
None of those who have criticized the bot have been able to convince me to turn it off, though. That's mainly because the critique doesn't come from a place of empathy, but is mostly self-affirming and resists considering what we're trying to achieve (to be fair, some have also acknowledged what it tries to do, but it usually ends there).
Some are irritated that the bot corrects what they perceive to be a perfectly legitimate way of addressing a group, considered inclusive in their linguistic context. They are thereby telling me that I'm wrong, perhaps because I'm not a native speaker (without seeing the irony).
As someone who spends most of their day communicating in a second language (my native one being Norwegian), I must admit my sympathy is lacking for someone who finds it tough to adjust a single word. It's also telling that they don't consider English a highly diverse and multi-faceted language, and that an online multicultural community where gender bias is a proven issue might call for other considerations than one's hometown does.
It's not about you guys
I also suspect that some think of this bot as just another example of virtue signaling and political correctness, which some regard as hostile forces to the open and liberal mind. Most of those I've encountered who have made this their hill to die on aren't really open to taking in other perspectives or entertaining the motivations behind questioning how language can be used to include or exclude people. They do not ask me why I really did this, or what problem I'm trying to solve. So although I find pointing out hypocrisy a lazy argument, it certainly doesn't help their case.
So if you are a promoter of open thought and the free word, I'll ask you to reconsider the motivation behind efforts like this bot. For example, by watching Patricia Aas' talk Deconstructing Privilege, or by actually listening to the experiences of different underrepresented groups in tech (and elsewhere), and considering how being reminded that you don't really belong might affect your motivation to stay in web development, technology, or whatever you do.
Because you guys, it's not just about you guys.
Top comments (64)
I'm an empathetic "guy" (lol) and get your points completely. But we can't all be empathetic about everything, and it's probably not a great idea to make everyone feel like they SHOULD take everything (little things like this) so seriously. I'm not trying to take away anything from anyone - some people take this more seriously, and are offended by "non-inclusive terms", but to me this seems to be delving too deep, and really - there is no clear end in sight, we could find 250 words that we want our team to use differently.. each disagreeing with each other, etc.
A team should be an inclusive, safe, friendly place for ALL team members. But when we start applying our personal beliefs about how the English language is flawed, can upset some team members, and needs to be altered to not upset those certain people, we are really getting far away from the entire point of being a team, and wasting time on nitpicky things that (in my opinion) should not be offensive to most.
This isn't true for all things, obviously certain things are inherently offensive, however when it comes to something like this, it's perspective and opinion for the most part, NOT inherently offensive, and has no REAL life negative impact on anyone, except for the way they take it. Some things people are offended by really come down to - they choose to be offended.
I personally believe in reducing how many times I'm offended in a single day - and practice not getting upset over things that are small, personal to me, etc.
Also I will say - communication is an important subject when it comes to teams working well together. Sure, we could all debate and agree to use certain words, but in my mind this could literally go on and on and on, and is completely out of the scope of the team's goals. The point of communication is: do you understand what I'm referring to? Not so much: do you love my word choices?
Thank you for your comment @robcapell !
Who said it was about offending or upsetting people? Can't it also be a hack to nudge people to think about how language can be more or less welcoming? Language doesn't have to be offensive in order to be exclusive.
As you point out, "guys" is but one example. As I also write, it is used and interpreted differently in different contexts. In tech, for some, especially those who in a thousand small ways are reminded that they are in a minority, it has taken on a gender-normative meaning. Reports of that are enough to make me reconsider it.
In terms of empathy, you state at the start that you are of that nature. But you spend five paragraphs addressing how I'm in the wrong and how people who take issue with gender-normative language should "get over themselves" (or "reduce how many times they are 'offended'"). Well, that suggests to me that you haven't really considered or talked to underrepresented people about their experiences (just my presumption).
But to the core of your point: it's hard to negotiate language. You're completely right. And that's why, if you read my post more closely, you'll see that I'm also hesitant to go that route. That being said, I think it should always be OK to question how language, privilege, and power relate to each other, and how we, in fact, can use it to achieve an "inclusive, safe, friendly, etc place for ALL team members".
If we could do it on popular systems like Slack, then making the editor more like the Hemingway editor would be cool, so you'd get warning highlights as you type. Something that would give you a gentle nudge without feeling like you're being scolded.
I think some people¹ might feel like they have to justify themselves, even to a bot, when a response appears in chat, and if we could address the problem before a message got sent that might be a smoother, less annoying/embarrassing experience.
We could certainly do something like that with the editor on dev.to, right?
EDIT: in fact, should we build something like a generic script that highlights things inside WYSIWYG areas?
¹ look at me, I have no data to back anything up.
Oh, I really like that idea -- like Grammarly but for writing inclusive stuff. Hmmm...
We have had an eye towards experimentally building something like this into the DEV editor.
Our thoughts on editor augmentation are that it should act a lot like code editor autocomplete and linting. It doesn't get in the way like a WYSIWYG does, but is intuitive and easy to use when it does pop up.
We haven't broken ground on anything like this because it's hard. We want it done well, and we don't want the tool to be rushed and single-purpose.
As @kmelve mentioned, language is fluid and encoded. Anything we do should err very far on the cautious end of the spectrum. This is a long-term experimental project, not a quick feature.
Totally, I write all my posts in my text editor and am fine staying that way tbh, I always end up exiting accidentally on the browser if I write in here, and my setup is perfect for me. Definitely a cool idea!
The Hemingway editor is a good example of exactly why I wouldn't want to see this type of technology enforced, or even suggested to all people.
I use Hemingway often, and it can be helpful, but it also has a lot of terrible opinions about language. Trying to satisfy that editor results in language that, while having a good score, can be hard to read. It breaks up natural language and creates garbage at times.
This is exactly what would happen with bots in forums. We'd end up with worse language, rather than better. While I can appreciate some terms are not as good as others, at least now I know there's a human involved. I don't want to talk like a computer, or read computer generated text.
I'm not suggesting we enforce it, rather that it would be interesting to create something like a Greasemonkey script or a WYSIWYG plugin that would present the option "show suggestions about...", maybe with a list of types of thing it could know about.
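Something along these lines, maybe (a rough, hypothetical sketch; the selector and word list are placeholders, and a real plugin would render inline highlights rather than log to the console):

```typescript
// Hypothetical userscript sketch: watch an editor field and surface suggestions.
const SUGGESTIONS: Record<string, string[]> = {
  guys: ["folks", "everyone", "y'all"],
};

function findSuggestions(text: string): string[] {
  const hits: string[] = [];
  for (const [word, alternatives] of Object.entries(SUGGESTIONS)) {
    if (new RegExp(`\\b${word}\\b`, "i").test(text)) {
      hits.push(`"${word}" → maybe ${alternatives.join(", ")}?`);
    }
  }
  return hits;
}

// "textarea#article-body" is a made-up selector standing in for whatever
// editor field the script would attach to.
const editor = document.querySelector<HTMLTextAreaElement>("textarea#article-body");
if (editor) {
  editor.addEventListener("input", () => {
    console.log(findSuggestions(editor.value));
  });
}
```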
I like the idea in terms of the technical engineering part of it, but as I sort of state in the blog, I'm not sure if it is the right approach socially, or if it's even fully possible to automate this. One thing is that language is fluid and encoded. What I kind of find effective with the bot is that it gives you a chance to consider or think about it. And only on the use of one word.
A system that monitored everything, and found every conceivably exclusionist word, would be a bit overkill, and, I suspect work against its purpose.
That being said, I think someone should do it, and see how it feels. Would be an interesting experiment to be sure!
To say that this is a well studied question would be a British level of understatement.
And in the spirit of experimentation, may I suggest including a control group, and allowing live viewing of the resulting data?
I'm opposed to using technology to enforce moral questions. It'll fail. It'll fail badly. The risk of harm is far more than the risk of good.
We're increasingly relying on "AI" and automation which is utterly crap at mimicking human nuance. If we continue down this path we won't end up more tolerant, just less human.
Yeah, I think you have a point, as I hinted at with the Demolition Man clip.
The bot is not enforcing anything, though. Unlike the public Slackbot version, it asks you to consider the case, with no sanction beyond the message appearing.
If I'd given it more development time, I'd also add a “turn this off” capability.
Sorry Knut..."The bot is not enforcing it though." That is how Orwellian things start. They may not enforce it now but if the wrong person sits in the wrong seat in the future, it becomes very easy to enforce it. It starts with no enforcement. Then someone adds a logger. Then someone actually reads the log and finds Knut had to be corrected 32 times this week. Maybe we better talk to Knut. Knut is "coached" not to use the term. Knut is now upset. Next week he uses the term 52 times. Now, he gets coached for insubordination. Anything can happen next. Knut was a great person, employee and team member but he no longer wants to be a part of this team. Goodbye Knut. Meanwhile, Janet and Tim never used the term this past month and they are named Employees of the Month. No enforcement, but we lost a productive team member and by the way, Janet spends 2 hours a day working on her personal website and Tim couldn't code his way out of a wet paper bag.
This was a fun story, but it's not a bot on slack I'd be worried about in terms of Orwellianism.
And funny how I am the one accused of thought policing when I'm willingly trying to be nuanced about something I know I will get berated for in public 😁
And didn't you see the Demolition Man clip, and my comment addressing exactly the same thing?
Ain't it funny how (some) people get all dystopian when they're asked to do even a tiny effort to decrease their privilege gap?
There's no dystopian dictatorship coming to get you for not being nice to people. It's just "try to be nice to people". In this case: some people don't feel included in "you guys". Even if it were a phrase originally meant to include every member of a potentially heterogeneous group, maybe try using a word that includes them. It's that easy.
Weird how there's not many such phrases where a mostly-female term is supposed to be accepted as a wildcard for "all peoples".
The fear of abuse should not be ignored. It's exactly these types of automated tools that prevent open discussion of sexuality, including sexual health, on many public forums. Furthermore, automated tools are already being used on sites, such as YouTube, to block content based on questionable copyright reasons.
The voice that gets hurt the most by automated filtering is the minority voice. If you open the door, even a bit, to moral filtering, it's the incumbent dogma that will become normalized. Dissenting voices will simply be drowned out.
My argument against filtering has nothing to do with what is being filtered. I'll make the same argument for any kind of automated filtering and classification.
I think @anabella has a good point though. Those who object to this bot (and don't manage to reflect that they actually read my post) escalate it to being about moral monitoring, censorship, people being "offended", or what not. Those are important, challenging, and interesting points in and of themselves, but what worries me is that they also reframe the discussion and offer little acknowledgment of the experiences of those who felt the need to make this bot in the first place.
And @mortoray , the bot isn't actually censoring anyone. It only reveals itself to the user in question. It does so by presenting a proposal, with a way to learn more about why it does so. It's up to you to make the judgment, or to protest it, or ask the moderator to either remove it or whitelist you. It's only acting in the channels it's invited to. Its source code is out in the open.
Is there really not any distinction between that, and the opaque processes and technological decisions that go into something like YouTube or Facebook? Can't it be a way for a community to self-monitor according to the agreed-upon rules they've set for themselves in order to foster a productive conversation?
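If it helps make the "only reveals itself to the user" part concrete: that behavior can be done with an ephemeral message. Here is a simplified sketch using Slack's Bolt framework (not the bot's actual source; the trigger pattern, wording, and port are placeholders):

```typescript
import { App } from "@slack/bolt";

const app = new App({
  token: process.env.SLACK_BOT_TOKEN,
  signingSecret: process.env.SLACK_SIGNING_SECRET,
});

// Listen for "guys" as a standalone word in channels the bot has been invited to.
app.message(/\bguys\b/i, async ({ message, client }) => {
  // Only plain user messages (no subtype) carry a `user` field.
  if (message.subtype !== undefined) return;

  // postEphemeral shows the nudge to the message's author only;
  // nothing is deleted, edited, or broadcast to the channel.
  await client.chat.postEphemeral({
    channel: message.channel,
    user: message.user,
    text: "Maybe try “folks”, “everyone”, or “y'all”? Here's why: <link to rationale>",
  });
});

(async () => {
  await app.start(Number(process.env.PORT) || 3000);
})();
```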
Thanks for your input @mudasobwa ,
It's completely within the site owners' rights to exercise the Code of Conduct that we agree to when signing up for this free service.
One could easily argue that just greying it out and labeling it is indeed more inclusive and tolerant compared to other sites and forums that would just delete the comment altogether.
And the “shooting”-part of your comment is just unwarranted and unreasonable.
Speaking as someone with "lived experience of not being included in all sorts of ways, in a field still dominated by mostly men", I appreciate the desire to help communities do better on inclusivity but I'm really, really not a fan of the approach. I think the comments here make the case in point: the people who need to be reminded to use inclusive language are exactly the people who will dig their heels in at robotic finger-wagging. And I don't blame them, because automated nagging is annoying however high-minded the intent, and I would absolutely do the same if I were in an analogous situation (in fact, I have done the same in analogous situations!). I've posted on bulletin boards which made extensive use of automated text replacement; that has the advantage of being direct and not coming off as condescending, but of course realtime chat platforms would have to have it built in.
Thank you for your comment @dmfay !
Yeah, judging from the various comments I've gotten both here and on Twitter, I think you're completely right. Not even a wagging finger from the adorable BMO gets past them. Next time I'll use Clippy 😄
On the one hand it is very annoying to have people correct your speech. On the other hand anything that can help others introspect on their position and privilege within a system is useful. I wonder what response you would get if you suggested 'you gals' instead of 'y'all'.
I agree: It's super annoying if you haven't decided to be OK with it (and even then it's hard). As for 'you gals', I suspect some would point out that in most cases it's unnecessary to gender a group, or perhaps it would make the bias even clearer?
You implicitly agree to the Code of Conduct by registering an account and continuing to use this site, as you agree to Terms of Service subject to change for literally anything you sign up for.
Are you implying that women wouldn't be included if you said "hey developers" to a group of people? Because that's just ludicrous. The only people it wouldn't include are people who aren't developers, which is likely situationally appropriate.
I honestly do not see why so many people continue to see "hey, consider wording things inclusively" as censorship and inherently offensive. No one is enforcing your use of guys in your own life, and I'm very sorry you've been inconvenienced by being asked to regard other people's feelings.
Well done!! :)
Oh, you would be more careful also in a shame-driven culture (like some of the largest ones in Asia, or basically whatever was under the aegis of the Catholic Church), not just in communist or fascist regimes.
I still fail to see what troubles people so much if they are collectively called "guys" just by convention: in my team (which, again, I definitely like and respect), basically everyone makes cultural references I do not get, on top of speaking a language which is not my native one (although I have been using it for years): should I ask them to avoid mentioning anything too British for my ears, or to speak in Esperanto, so that I can feel like I belong there?
Come on, let's be realistic: harassment and discrimination are serious issues, while this paranoid campaign to make everybody tread on eggshells (I am not referring to you in particular, it is a general thought) is either useless or actually really offensive towards people who have actually experienced some kind of unfair work environment.
I am pretty confident I am still entitled to say that, imho, that is censorship and can lead to even worse situations, in the same way you are entitled to disagree: I would never dream of censoring these kinds of ideas :)
"Guys". Let me state: I am generation or two older than most in the field. I've generally tried to quit using the term and that is too bad. In my generation we used "Guys" when addressing our group/team. It was an interesting word, both exclusive and inclusive. It excluded the rest of the world and included those in our group/team. My experience with women in the workforce has led me to believe that most I work with want nothing more than to be "just one of the guys". Included in that group/team and treated no differently. I work for a woman, the smartest person I have ever met, and I am currently paired with a female teammate. I assure you, both are "just one of the guys". I started by stating I'm old but I learned long ago to roll and update with the times. I've started using the term "Folks", which for me is almost as good as "Guys". It will be a cold day in Hades when I address my group in California with "y'all". As far as the bot...I would never subject my group to machine correction of their speech. That sounds Orwellian. We strive to maintain our humanity amid a world of constant change and update as tech takes over more of our lives.
Well whatever you decide to call yourselves, please prioritize the quality and effectiveness of your code, because our lives (though less of them, these days) are in the balance.