Feeling psychologically unsafe? What does it even mean, really? Okay, let's break it down. Amy Edmondson's coded description: "A shared belief that the team is safe for interpersonal risk-taking."

Amy Edmondson: A shared belief... that the team is safe... for interpersonal risk-taking.
Desmond Sherlock: We agree on... a way to keep the team safe... sharing conflicting ideas.

Here is my decoded... Continue Reading →
Slack XXX Group's Code of Conduct

- We hold all stories or personal material in confidentiality.
- We are careful about interrupting each other.
- When we disagree, we focus on the idea, not the person.
- When we have a discussion, we make space to pause for reflection.
- We don't need to be articulate to express ourselves.
- We acknowledge that there is... Continue Reading →
We are all conductors in a team, I believe. Like electrical conductors, except in our case we conduct information, much as nodes do in a network. And the network suffers when there is mis-conduction between two team members, caused by a misconduct.
Firstly, I think an essential part of a code of conduct is what happens when the code or company ethics are violated. A "misconduct," if you will. I refer you to the article defining a code of conduct. Maybe the code of conduct should be called a "code of misconduct," ha!
Amy Edmondson defines psychological safety as "a shared belief that the team is safe for interpersonal risk-taking." In a nutshell, for a team to speak up, take risks, and share radical ideas, its members need to feel protected from so-called naysaying behavior. I don't think the problem is going to be fixed by creating “a shared belief... Continue Reading →
How would a machine learn to behave civilly in a team environment? I have no real knowledge in this area, but this is how I would wing it. How about we create an algorithm? It would consist of a team member (anyone in the team can be the trainer) using Step 1. Verbal Caution of the robot,... Continue Reading →
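The excerpt above only names Step 1 (a verbal caution issued by any team member acting as trainer) before it cuts off, so here is a minimal sketch of that mechanism. The class name, the later escalation labels, and the counting scheme are all my assumptions, not the author's design.

```python
from collections import Counter

class CivilityTrainer:
    """Sketch of the excerpt's idea: anyone on the team can act as the
    trainer and issue a caution to the robot. Only Step 1 ("verbal
    caution") appears in the excerpt; the later steps are hypothetical
    placeholders."""

    STEPS = [
        "verbal caution",                  # Step 1, from the excerpt
        "written warning (hypothetical)",  # assumed escalation
        "timeout (hypothetical)",          # assumed escalation
    ]

    def __init__(self):
        self.cautions = Counter()  # robot/member name -> cautions received

    def caution(self, trainer: str, subject: str) -> str:
        """Record a caution from any trainer and report the current step."""
        self.cautions[subject] += 1
        level = min(self.cautions[subject], len(self.STEPS)) - 1
        return f"{trainer} issued: {self.STEPS[level]} to {subject}"

t = CivilityTrainer()
print(t.caution("alice", "bot"))  # first caution lands on Step 1
```

Repeated cautions simply walk the subject up the (assumed) escalation ladder, which matches the step-numbered framing of the excerpt.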
Why are team members in organizations hesitant to take a risk and share their ultra-radical ideas? Because of the feedback response they are likely to receive if they step too far out of the norm.
Machine moderators may be used in the pre-moderation stage to flag content for review by humans. This would increase moderation accuracy and speed up the pre-moderation stage.
Firstly, everyone in the team would need to agree to use the safety moderator. It allows anyone to speak up in real time and object when they feel offended or uncomfortable with how they are being treated during a heated discussion.
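The paragraph above describes a small protocol: unanimous opt-in first, then any participant may object in real time and pause the discussion. A hedged sketch of that protocol, with class and method names that are my assumptions rather than anything from the source:

```python
class SafetyModerator:
    """Sketch of the opt-in safety moderator described above."""

    def __init__(self, members):
        self.members = set(members)
        self.agreed = set()
        self.paused = False

    def agree(self, member: str) -> None:
        """A member opts in to using the moderator."""
        self.agreed.add(member)

    def active(self) -> bool:
        # The moderator only operates once the whole team has agreed.
        return self.agreed == self.members

    def object(self, member: str, reason: str) -> str:
        """Any member may object in real time, pausing the discussion."""
        if not self.active():
            raise RuntimeError("the whole team has not yet agreed to use the moderator")
        self.paused = True
        return f"{member} objected: {reason} -- discussion paused"

m = SafetyModerator(["ana", "ben"])
m.agree("ana")
m.agree("ben")
print(m.object("ana", "uncomfortable with the tone"))
```

Requiring unanimous agreement before any objection can be raised mirrors the excerpt's "everyone would need to agree" precondition.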
Q. When will we know we have enough of it? A. When we don't need to talk about it as much.