The Chief Psychology Officer
Exploring the topics of workplace psychology and conscious leadership. Amanda is an award-winning Chartered Psychologist with vast experience in talent strategy, resilience, facilitation, development and executive coaching. A Fellow of the Association for Business Psychology and an Associate Fellow of the Division of Occupational Psychology within the British Psychological Society (BPS), Amanda is also a Chartered Scientist. Amanda is the founder and CEO of Zircon and an expert in leadership in crisis and resilience, and has led a number of research papers on the subject, most recently Psychological Safety in 2022 and Resilience and Decision-making in 2020. With over 20 years’ experience of aligning businesses’ talent strategy with their organizational strategy and objectives, Amanda has had a significant impact on the talent and HR strategies of many global organizations, and on the lives of many significant and prominent leaders in industry. Dr Amanda Potter can be contacted on LinkedIn: linkedin.com/in/amandapotterzircon www.theCPO.co.uk
Ep84 Unmasking Bias in 360 Assessments
What if the 360 feedback you're receiving is more a reflection of human cognitive shortcuts than your actual performance? In this illuminating conversation, Dr. Amanda Potter—recently named Association of Business Psychologists Practitioner of the Year 2024—takes us behind the scenes of 360 assessments to reveal the hidden biases that distort results and undermine effectiveness.
Our brains evolved to make quick judgements for survival, but these same mechanisms now create significant distortions in how we perceive and rate others. From contrast effect (where we compare individuals against each other rather than objective standards) to recency bias (overemphasizing recent events) and the halo/horns effect (letting one trait color our entire perception), these mental shortcuts compromise the validity of feedback at every stage of the 360 process.
Dr. Amanda Potter, the Chief Psychology Officer, breaks down exactly how bias infiltrates each phase, from design and administration to completion and feedback conversations, while offering practical, science-backed strategies to minimize its impact. Learn why rigid capability models fall short, how standardized communication prevents experience bias, why a seven-point rating scale creates more accurate results, and how to facilitate feedback conversations that explore context rather than make judgements about character.
This episode delivers exceptional value for HR professionals implementing 360 programs, leaders receiving feedback, and anyone involved in developing talent. You'll discover why psychological safety forms the foundation of honest feedback, how to ensure anonymity without sacrificing insight, and the critical balance between quantitative ratings and qualitative perspectives that creates truly developmental experiences.
Whether you're designing a new 360 program or participating in one, this conversation will transform how you understand, administer, and interpret multi-source feedback. Connect with us on LinkedIn by searching for Caitlin Cooper and Dr. Amanda Potter, or visit www.thecpo.co.uk for more resources on bringing psychology into leadership development.
Episodes are available here https://www.thecpo.co.uk/
To follow Zircon on LinkedIn and to be first to hear about podcasts, publications and news, please like and follow us: https://www.linkedin.com/company/zircon-consulting-ltd/
To access the research white papers mentioned in this and other podcasts, please go to: https://zircon-mc.co.uk/zircon-white-papers.php
For more information about the BeTalent suite of tools and platform please contact: TheCPO@zircon-mc.co.uk
Welcome to the Chief Psychology Officer podcast, the show where we dive deep into the psychology behind leadership, business and success. I'm Caitlin, and today we're joined by our very own Chief Psychology Officer, Dr Amanda Potter, and we'll be discussing how to identify and remove bias in 360 assessments. As the Association of Business Psychologists Practitioner of the Year 2024, Amanda brings years of experience in talent assessment and development, helping leaders, teams and organisations realise their full potential. So if you've ever wondered how to make sure you build, administer and use 360 assessments in a way that brings results and doesn't create, confirm or incorporate bias, then this episode is for you.
Dr Amanda Potter:Thank you, Caitlin. So I was wondering: we've done an episode on 360 already, so why should our subscribers listen to this episode?
Caitlin:You are correct, we have done an episode on this before, and as always, our thinking is evolving. But I feel the last episode was more around what 360s are, and this one is going to be more specifically around how to make sure you build, administer and use a 360 in a way that brings results while avoiding bias, so how to use it in a robust way.
Dr Amanda Potter:So bias is the key. How exactly do we avoid bias in 360s?
Caitlin:But super quickly, before we jump in: to our listeners, please make sure you hit subscribe so you never miss an episode. And if you want to keep the conversation going, connect with us on LinkedIn, just search for Caitlin Cooper and Dr Amanda Potter, plus feel free to check out theCPO.co.uk for more resources. So, Amanda, let's get started. Why don't we begin with you giving us a summary of what 360 assessments are? And that's really for any listeners who are newer to the concept.
Dr Amanda Potter:So many people will know what it is; that's why they're dialed in. But a 360 is an objective way of gathering feedback from multiple raters in a number of different roles, so that you can get a full-circle view of an individual or of yourself. There are two types of data that we collect in a 360: qualitative and quantitative. When we're talking about online 360s, the focus is most significantly on the quantitative, but ironically, the data that the recipient appreciates the most and gravitates towards is the qualitative.
Caitlin:Why do organizations use 360s, then? For what purposes?
Dr Amanda Potter:So the main purposes are feedback, development, coaching. It can be used for readiness for succession, with the correct deployment, of course, but it should not be used for recruitment and should not be used for restructure, and that's because of exactly what we're talking about today bias.
Caitlin:Okay, so can you expand on that a little bit more, Amanda? I'm sure people are wanting to dig a bit deeper into the whys, because lots of clients come to us and we have lots of debates around this sort of thing.
Dr Amanda Potter:Of course. With anything, when we are creating either psychometric tests or 360 questionnaires, what we're trying to do is, as much as possible, remove the subjectivity and increase the objectivity. But we're never truly going to be fully objective, and with 360 even more so, because we are dependent on raters: dependent on them being objective, on them having a clear understanding of that individual, and on them not having any personal bias when they're completing the questionnaire. They may have their own judgments that they're making, they may have their own emotions that day; there are a number of things that could be interplaying with how they're feeling when they're completing the 360. So we have to design the 360 in a really objective way around the criteria, the questions, the method and the rating scale, because we are relying on people's perception of behavior, and they're providing insight rather than objective evidence. We have to make sure the questionnaire is as robust as possible, so that we almost take them along the path of being as objective as possible.
Caitlin:And that's exactly what we're going to take our listeners through today really is what are the things that we're thinking about from start to finish, when we are designing 360s and when we're deploying them in various organizations? We mentioned before we have done an episode on this before, so it is episode 44 if anyone is interested in going back and having a listen to that one. But again, we'll really be focusing on understanding, identifying and removing bias. So, amanda, let's be clear first about what do we mean by bias?
Dr Amanda Potter:So a bias is basically a mental shortcut. A simple definition is that a bias is a tendency to favour or disfavour one thing or person over another, or it could be a distortion, and that could be down to a belief or down to stats, so it's a distortion of information. Do you want an example? A good one would be the contrast effect. Imagine you're completing a number of 360s for your team, and in your mind you're comparing each of your team members; that's the contrast effect. Say, for example, we were comparing John, Eddie and Pascal (I'm making these up, by the way). John is okay, but Eddie and Pascal are amazing. Consequently, John gets rated much lower than the other two because of the contrast effect, that comparison effect between them.
Dr Amanda Potter:So that's a bias. The reason biases exist is because they are mental shortcuts. In that example of the contrast effect I've just given you, what's happening is that the brain is looking for the simplest way of sorting and organizing the information, and that information, in this situation, is statistical, because we're actually allocating a number to a statement. The brain wants to do it as simply as possible and use as little energy as possible, and so it almost exaggerates the contrast. And there are actually so many biases that exist, I think I was reading somewhere that there are over 180, so we'll tap into a few more, but I think the contrast one in the context of 360 is a really good one to use.
Caitlin:And on what you were saying there about mental shortcuts, I was reading that they actually come from early humans surviving by forming tribes, because back in the day your tribe meant safety, food and strength in numbers. As societies grew and our brains evolved, those survival instincts never really left us, they just shifted, and along the way we developed the mental shortcuts which are our biases, and that's obviously how we shape how we see the world. So I thought it was quite interesting thinking about it in that context.
Dr Amanda Potter:It all comes down to our perception of safety and our connection with people, because when we feel safe and connected, we're more likely to trust, to be our real selves and to say what we really think. But when we don't, we don't. And I just wanted to go back to that contrast effect point. I know that some of our competitors have built their 360 technology and systems so that if you were to complete a questionnaire about 10 employees, for example, you could answer the same question about all 10 employees at the same time. So you'd see the question, then you'd have each of the names of the individuals, and you'd rate each of those individuals against the question.
Dr Amanda Potter:And we actively don't do that. The way we do it is that you answer each 360 questionnaire about each individual, because if you imagine you've got the single item and the 10 employees, you're going to be trying to show differentiation between them, and therefore you're creating difference that doesn't exist. Whereas if you answer the questionnaire just about that single person, that person is in your mind and you're not having to switch constantly; there's more cognitive focus on just that person.
Caitlin:Did you say that used to happen? Because I've never actually come across that. I was going to ask where it got eradicated.
Dr Amanda Potter:It still does happen. Quite a number of the big 360 players do that, which unfortunately does integrate bias into their technology and their system.
Caitlin:Yeah, I can see why. Okay, so then, to go back to thinking about 360.
Caitlin:Why are these biases problematic? Are there any other ways you can share with our listeners?
Dr Amanda Potter:Ultimately, because we're relying on human perception of behavior. Biases are inherent in all of us; it's part of our nature as humans. Our brains seek to simplify information and processes, and to make decisions with limited information in order to conserve energy, and so along the way we miscalculate and we misjudge. But we also have our own emotions that interplay. So I might rate someone slightly enviously because they're fantastic at something, and that could mean I overrate or underrate that person. There are a number of ways we overcome this.
Dr Amanda Potter:It's all sounding quite negative at the moment, but there are a number of ways, grounded in the science, in which we've built our 360 system to overcome all of these potential risks of miscalculation or bias.
Caitlin:I was going to say the same thing. We're talking about 360s right now and it's sounding, well, quite negative. Actually, is there a case for why we shouldn't use them? Of course there are reasons, as you say, but there are also many reasons why we do believe in using them and how they can be valuable. So I wondered if you could shed some light and dig a bit deeper into why we do believe in them.
Dr Amanda Potter:Well, it's that point I made earlier: as psychologists we are seeking and striving towards objectivity, but fundamentally what we're doing is trying to remove subjectivity along the way, and we're never going to get to a perfect prediction; we're just going to get as close as possible. And 360 is the best way we have identified of gathering data from multiple people and getting insight. Then that comes back to purpose. As long as the purpose of the questionnaire is to gain insight, to gather feedback, to support the individual on a coaching, development and learning journey, then that's okay. That's why it shouldn't be used for recruitment or for decision-making in succession. It's all about how we can gather data and gain insight from multiple people, through multiple lenses.
Caitlin:That really gives an indication of how that person shows up at work. It's really a data-gathering tool to truly help individuals be the best version of themselves, ultimately. So I mentioned that we'll go through and think about where bias comes in at each stage of the process, from when we're starting to design it to when we're administering it. So why don't we start then? If you could give us a rundown of that first stage, where might we see the bias come in here?
Dr Amanda Potter:So the very first stage for the 360 is the design stage, and to understand the biases that might be at play, we need to be really clear about the purpose of the 360. What's the intention? What does the organization want to do with it? What needs to change, how are they going to access and use that data, and how are people going to receive it? So in that design stage we need to be really clear about the model that's being used. Many organizations, and again competitors, have very fixed approaches to 360: they will have built a model of potential or a leadership framework, and that is the only framework they will put into their 360. So there'll be a set of robust criteria, and that's all they'll assess. We are slightly different on that angle. We use agile approaches, and we don't necessarily just use that very fixed approach. So design is really important: what are the criteria going to assess, how many indicators, and so on, and is it going to be a very fixed, off-the-shelf model or an agile one?
Caitlin:So in that stage it's really having an exploratory conversation, as you said, to discuss what the challenges in the business are at the different leadership or individual levels, to then be able to flex and choose the criteria that best fit the solution. Okay, so to reflect that back: in the formation or design stage, challenge one is perhaps having an unclear purpose or intention, so not defining what the organization truly wants to learn or change. Challenge two is that rigidity (is that a word? rigidity!): rigid capability models, where standardized criteria may not match the organization's real leadership challenges. So with that then, how can we reduce the bias?
Dr Amanda Potter:Well, it's doing the opposite. The first thing is creating greater clarity of purpose and intention, so making sure there's a really clear conversation before designing the 360, in which you agree the organizational goals, the behaviors and the cultural shifts that the feedback will inform. We call it a blueprint: we create a blueprint for the organization, which is the set of criteria against which we're going to assess the individuals. With each of those organizations, that blueprint, that 360 template, is bespoke. For example, we've just worked with FAMAR, and FAMAR have been very clear on the purpose and intention of their 360, which is all about development, coaching and support. We also designed a bespoke blueprint, a set of capabilities, for that 360, so it was really aligned with FAMAR, who are quite a unique organization, a pharmaceutical manufacturing organization.
Caitlin:And are there any other things people can be thinking about at that stage?
Dr Amanda Potter:I think it's that difference between the rigidity versus the agility. So you could take a well-established model and adapt it to make sure that it's really aligned. You might not want to assess every capability or you might want to add a few. Alternatively, you build your own to make sure that truly the language is relevant, because there's nothing worse than somebody trying to answer a question about a colleague and the question just doesn't necessarily make sense. It's not relevant because it's just not the language they would use in the organisation.
Caitlin:I think it's such a good example when we talk about rigid capability models, giving flexibility for our clients to be able to tailor the 360. In particular, you mentioned earlier the value of having qualitative questions in there. So obviously you have quantitative 360s and you have qualitative 360s; in our quantitative one, we have the option at the end of qualitative questions, so that the raters can shed some extra light on their observations of that individual. And when we're working with our clients, they can choose what those questions are. They can tailor them to fit the language, whether that's 'what makes this person exceptional' versus 'what are this person's top strengths'. That tailoring is really important, because then they get to ask the questions that they really want the answers to.
Dr Amanda Potter:So that's great. The next area I think we should talk about is the administration. Once we've done the consultation and design phase, the administration is often seen as a bit of a task, and it's viewed quite negatively because it's quite hard work to administer 360s, but actually it's critical.
Caitlin:Yeah, I'd 100% agree that the administration stage can be tricky. For that reason, I think internally our team are amazing, a shout-out to Andrea in particular and also Sarah Green, as we do administer 360s. But then also, from the client side, we've got a flexible approach in terms of being able to give the reins, to some degree, to our clients for administering it. You've got to make sure that you've got the right people internally on board who are really behind the 360, to then make sure you're getting those completion rates, and we often say that it's rare that you do get 100% completion on 360s.
Dr Amanda Potter:I mean, you're very lucky if all your raters do complete it, but if you do have someone who's championing and administering it, then it makes the process a lot smoother. And that's such a good point about having the right team from our side, but actually we need the right team from the client side as well, because making sure that each of the individuals who are having the 360 completed about them nominates the right raters is absolutely key, because that can impact the results. Then think about whoever is administering the 360: if they have preconceived judgments about which teams are more or less likely to engage with a 360, and therefore take a different approach with each of those teams, that's going to set it up very differently.
Dr Amanda Potter:We know already that certain team leaders are more or less responsive, or some are regarded as tricky, and therefore they might get told, or encouraged, in a slightly different way than a team leader who's generally easy to get on with and usually quite responsive. So there's an experience bias integrated into the way the 360 is administered. For example, for that tricky leader who's told he has to have it done, with very directive language and tone in the emails and calls they receive, it would just feel like a burden, whereas for the other person it will feel like something they want to do. So how HR faces off to each of those individuals can make a real difference: how they sell it to them, whether it's something they have to do for their performance management versus something they could do which would benefit them and their development.
Caitlin:Just going back to your point, I do think there are split opinions on 360s, as we mentioned at the start, but there are definitely loads of positive examples. I don't know if we mentioned it in the previous podcast, but I read an article a couple of years ago about Gymshark, where one of the leaders did a 360 and found it invaluable. And I think, if you are marketing and explaining it well when people are receiving invites, whether that's raters being invited to complete a 360, it's really important to state the purpose of the 360 and its benefits, so people know what's expected of them and what value the feedback is going to have for that individual.
Dr Amanda Potter:Yeah, absolutely, and I think standardization is absolutely key. Within BeTalent we always standardize everything. Once we agree the communications, the reminders and the nudges, each of those things is standardized. So it doesn't matter whether you're in the tax team and super responsive, or in the project management team and not responsive: you would get the same experience, the same communication, the same prompting, all the way through the 360. So the bias doesn't come into it.
Caitlin:So then I guess the next stage after that is the actual completion stage, right? So what can happen here from a bias perspective?
Dr Amanda Potter:I think that's where you might suggest we get the most bias, actually, and it's striking that 60% of managers have admitted to being influenced by bias when rating human performance in a 360, according to a 2022 survey.
Dr Amanda Potter:So, if you think about it, it comes from two places.
Dr Amanda Potter:It comes from the rating scale because, of course, if a scale has too many points or too few, or depending on whether it's an odd or even number, it makes a difference to how people use it. We have a seven-point rating scale because it's seen to have the greatest normal distribution. It's not too coarse, like a five-point scale, and it has a midpoint, because some people do want to use a midpoint; they don't want to be forced away from it. It also comes from the item design: making sure the items aren't double-loaded, so they're not asking about two different things, and that they're not prone to social desirability or other types of bias like leniency. We have to really check the items and the questionnaire so that, when individuals are completing it, we can as much as possible avoid central tendency, social desirability, leniency bias and so on. We get there by having between four and six items for each of the criteria, a seven-point rating scale, and making sure that items don't have a double loading.
Caitlin:Yeah, and avoiding that double loading makes the experience easier for the raters too, in terms of thinking about examples they might have of that individual demonstrating the behavior. The other thing I think we've had quite a lot of conversations about over the last few years is the option of having a non-applicable answer as well, because there might be some stakeholders, or even customers, who have been asked to rate. You can have different rater groups in a 360, or at least in ours you can choose whether it's peers, managers, direct reports, stakeholders or customers, and if you've got customers, they might not be able to see you in the same context that your direct reports will. So having that non-applicable or non-observable option has been something that clients have been keen to have in the 360, from my experience.
Dr Amanda Potter:Yeah, and I think that's the right thing to do, because actually it did create bias not to have it. Without it, some people use the midpoint and just say, well, I can't really answer, so I'm just going to put the midpoint, and other people might give the person a one, saying, well, I can't answer it, I haven't seen it, so I'll give them a one. It skews the data.
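[Editor's note] The skew Amanda describes is easy to see with a toy calculation. This is an illustrative sketch with invented numbers, not BeTalent's actual scoring logic:

```python
# Illustrative only: three raters genuinely observed the behaviour and
# rated it 6, 6 and 5 on a 7-point scale. Two raters never observed it.
# With no "not observed" option, one defaults to the midpoint (4) and
# one gives a 1 because they "can't answer it".
observed = [6, 6, 5]
forced_guesses = [4, 1]

def mean(scores):
    return sum(scores) / len(scores)

with_na_option = mean(observed)               # N/A answers are excluded
without_na = mean(observed + forced_guesses)  # guesses pollute the average

print(round(with_na_option, 2))   # 5.67 -> reflects actual observations
print(round(without_na, 2))       # 4.4  -> dragged down by non-observations
```

The scores from people who actually saw the behaviour are unchanged in both cases; only the handling of the non-observers moves the average, which is exactly the skew being described.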
Caitlin:Okay. So coming back to the biases, you've given some quite practical tips around what people can be thinking about when designing 360s. You mentioned the contrast effect as a bias when raters are completing it, so what are the other ones that come to mind, which people can be thinking about and challenging themselves on when they're completing 360s in the future?
Dr Amanda Potter:So one could be leniency, or rater acquiescence; in other words, yea-saying. It could be just wanting to rate the person favorably, not wanting to give feedback that's too tough because you feel uncomfortable doing so. We worked with a university a few years ago where the 360 results came out incredibly positively, and when I first saw them I thought, wow, they really rate each other highly. Then, when we also looked at the psychological safety data, which was quite low, we realized there was a fear of giving feedback: a really strong leniency bias. That could be because of a lack of safety, a lack of comfort, or because it's a very nice culture and they're trying to be nice to each other.
Dr Amanda Potter:There's also an issue around halo and horns. Imagine, Caitlin, that John from my previous example had done something fantastic at work for you the day before. You might then rate John super well because of that one thing John had achieved and been very successful on; that's the halo effect. Or John had done something really bad, that's the horns effect, and you rate everything down. So there's that recency bias of judging someone on their very recent performance instead of over the whole review period, which is what the 360 is supposed to be covering. Our brains really go to the most recent information, and we anchor onto it.
Caitlin:I think that's really interesting, because from my knowledge of different biases I've definitely heard of halo and horns in the context of interview processes, and obviously the recency one makes a lot of sense to me. But it sounds as though they can work in combination as well, don't they? Because if the most recent thing that you've seen is either good or bad, then you've got both.
Dr Amanda Potter:Yeah, because again, with that simplicity and the brain's need not to use up too much energy, we're not going to work too hard, and we also want to get through things like 360 questionnaires. A lot of people rush them. They don't take time to answer the questionnaires and really reflect on the questions; they have that immediate, in-the-moment response, a bit like they would with a personality questionnaire. But a 360 is quite different. You've got to really think about the person in the context of work over the last six months. It's not an immediate response.
Caitlin:It does take longer to answer. And I guess that goes back to the importance of the comms, and laying out in the communications that you should really think about what you're answering, but also be aware of the biases. So whether that's an automatic communication that comes out from the system when they're invited, or whether it's the team who's administering it, both externally and internally, I think it's important to continuously remind people to be mindful when they're completing 360s.
Dr Amanda Potter:Yeah, and we try to overcome that through making sure that we're really clear around the anonymization and the transparency about who's going to see what, because there is a bit of a trend with 360s that when people complete them they try and work out who said what. So we have to make sure that we have certain rules about how many people can complete within certain categories and how many questions we ask so that people cannot be identified. But another option is to have dedicated training around how to avoid bias in communications and how to encourage people who are completing the ratings to avoid bias, to help them understand the impact of bias as well.
Caitlin:I agree. In the conversations I have when people approach us about 360s, one of the first things they ask is: how do I ensure there's anonymity, and how do I have confidence in it? Because ultimately that's what their people are going to want; they want to know that they're rating in confidence. So what would you share on that, Amanda?
Dr Amanda Potter:Other than the line manager rating, because often there's only one line manager, sometimes two in a matrix organization, for all of the other categories of raters, whether we're talking about peers or clients or whatever the categories are (because those categories can change), we would encourage clients to invite as many raters as possible for each category. Not dozens, but maybe up to 10, and no fewer than four, because what we want is a minimum of three people rating in each of those categories, so that you have a combination of scores from three people and a combination of qualitative comments from three people in each rater category. Then you don't have that sense of, well, clearly that was Caitlin or Sarah saying that about me; instead you've got insight from multiple lenses. So you want more rather than fewer raters in a 360, so that you can really start to look at the patterns and the themes, and you don't just try to spot who said what.
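[Editor's note] The minimum-rater rule described above can be sketched as a simple reporting guard. This is a hypothetical illustration; the function name and category labels are invented and are not the BeTalent platform's actual logic:

```python
# Illustrative sketch: only report a rater category once at least three
# people in it have responded, so no individual answer can be singled
# out. Line managers are the exception, since that rating is not
# anonymous anyway (there is often only one line manager).
MIN_ANONYMOUS_RESPONSES = 3

def can_report(category: str, num_responses: int) -> bool:
    if category == "line manager":
        return num_responses >= 1
    return num_responses >= MIN_ANONYMOUS_RESPONSES

responses = {"line manager": 1, "peers": 5, "direct reports": 2}
reportable = {cat: can_report(cat, n) for cat, n in responses.items()}
print(reportable)  # direct reports are withheld: only 2 responded
```

The point of the threshold is the same as in the conversation: scores and comments only ever appear as a blend of at least three voices, so individual raters cannot be identified.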
Caitlin:So I guess, to reflect that back, when we're thinking about reducing rater bias there are a few different solutions: how many people per category are rating, giving the option of dedicated training, and transparency at the communication stage around what anonymization looks like. So, now that we potentially have a better understanding of the biases that come from raters, what about the last stage, the development conversation stage? Essentially, this is where either a coach, the person who's going to be giving the 360 feedback, or a manager, if a manager is trained in that space, goes into a one-to-one with the individual and has a conversation about their 360 report. What can we be thinking about here? What do we need to be wary of?
Dr Amanda Potter:So it's absolutely crucial, when we go into a feedback conversation around a 360, that we do not just take the scores at face value, but understand the context around them. I know we've already spoken briefly about the fact that high scores are often seen as being exclusively positive, but there's a risk of competency overuse, and there are a lot of other factors that could be having an impact. So, if we're using a 360 as a three-monthly review for a new employee, we need to make sure the raters are going to be the right raters, because they may not have known that person long enough to assess them accurately, and so we might see a bit more central tendency with a newer employee. Whereas someone who's really established and well known within the organization might get more extreme scores, both highs and lows, because people know that individual better and are more prepared to use the full range of the rating scale. So you need to know the context of that individual: how long they've been working in the organization, how well they know their raters, who the raters were, because you'll have the names, and also whether there are any potential issues or conflicts between this individual and those raters.
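The central-tendency pattern Amanda describes, where a rater who doesn't know someone well clusters their scores around the midpoint, could be flagged with a quick spread check on one rater's scores. This is a minimal sketch; the 1-5 scale and the 0.5 spread threshold are illustrative assumptions, not a validated cut-off.

```python
# Sketch: flag possible central tendency in one rater's scores by
# measuring how much of the rating scale they actually used.
# The min_spread threshold is an illustrative assumption, not a
# validated psychometric cut-off.

from statistics import pstdev

def uses_full_scale(scores, min_spread=0.5):
    """True if the rater's scores vary enough to suggest real differentiation."""
    return pstdev(scores) >= min_spread

new_colleague = [3, 3, 3, 3, 4]   # clusters around the midpoint of a 1-5 scale
longstanding = [1, 5, 4, 2, 5]    # uses the full range of the scale

print(uses_full_scale(new_colleague))  # False: possible central tendency
print(uses_full_scale(longstanding))   # True
```

A flag like this wouldn't prove bias; it would simply prompt the feedback facilitator to ask about context, such as how long the rater has known the individual.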
Dr Amanda Potter:The way to do it, then, when you're giving feedback, is to ask questions and not make assumptions: to go in facilitating a conversation, not delivering messages. So you're not going, oh, look at this, this is interesting, and then repeating what the questionnaire says. What you're going to be doing is asking questions about the impact of what the data says on the individual, what the insight is, what their interpretation of it is, and why they think they've been rated the way they have. So you create a really good conversation.
Caitlin:What about? I was just thinking about fundamental attribution error and how that maybe comes into play at this part, because we're assuming that poor feedback is about someone's character and not circumstances. Have you ever come across an example of that in your experience?
Dr Amanda Potter:I think we do it all the time. I think the key is not making giant leaps and assumptions, because as soon as we interpret the feedback as being about someone's character rather than the circumstances, we're being very judgmental. So we need to refrain and hold back from being judgmental. We don't want to attribute the behavior to intention by assumption; we've got to understand their intention.
Caitlin:We've got to ask questions. And I think everything that you've just said is maybe also highlighting the importance of having someone who is trained in giving 360 feedback, so having a coach or, you know, having training for managers who are going to be giving feedback on these reports. Because I don't know about anyone else, but I remember the first ever time I went to read through my own 360 report. It can be quite scary, because you're about to hear what people essentially think of you, and when you deeply care, it's important. And so it's important that you have someone who knows to ask the right questions, to remain objective and to be able to navigate that conversation in a productive way with the individual. Absolutely. So, I guess, bringing this all together, it'd be good to leave some final thoughts for people listening, and for organizations, around what they can do to improve their feedback culture when using a 360 tool. But maybe first, if you could summarize what we've just gone through in each of those stages and how we can seek to reduce bias.
Dr Amanda Potter:Sure. So number one, design: be clear on your purpose and intention, make a really good decision and have a really clear idea about the capability model and whether you're going to go off the shelf or build your own. And I would always recommend the language of the items be as tailored as possible to the culture and the strategy of the organization, so build your own if you can. The next is the administration stage, and that's all about consistency of communication, but also making sure that the HR team or the leadership team within the organization that's rolling out the 360 is on board and understands the importance of standardization. That's really the key with the administration stage, so that people aren't treated differently in the way the tool is administered.
Dr Amanda Potter:The next is completion. There's a whole raft of biases that can come into play at completion, around halo and horns, and so we could potentially think about how we could do some bias training, or communicate the importance of not being prone to bias in those comms. But actually, with rater completion, a lot of it is about how we design the questionnaire and how we deploy it ourselves. And then feedback: we should really make sure the people giving the feedback are facilitating conversation and not making judgments, that they're not prone to that fundamental attribution error, that they're engaging in a conversation and trying to understand the context, the situation and the meaning behind the data, and not going in just delivering and summarizing what the questionnaire has said.
Caitlin:Well, it sounds to me, at the end of the day, that a 360 is only as good as the thinking that goes behind it, which is actually making me think about AI. Now, I don't know, my brain's gone to AI; you know, it's only as good as the people behind the prompts. So if we're clear on the purpose, design it around the realities of the organization, and make space for the nuance behind the numbers, then we move beyond just collecting opinions and instead we're uncovering real insights that can really help people grow. So I wondered if maybe we could finish on a success story. I don't know if you've got any favorite stories of using a 360 where you felt like it really made a difference with one of your clients.
Dr Amanda Potter:Yeah, I've got a client in mind, and the client in mind did it really well, because firstly they focused on psychological safety. They created an environment where people felt safe to speak up, to be candid, to be honest, which are all, of course, core aspects of psych safety. And if we create the right environment, where people don't fear the consequences of being honest, then we can build the foundation of trust where people are more likely to give, and want to receive, open feedback. Then what we did with that client is we were really clear about purpose. So we were really clear about what the 360 was for, and the 360 was very much for leadership development and for growth.
Dr Amanda Potter:So when we communicated, we were really clear that there was going to be no wolf in sheep's clothing, that this truly was about feedback, and about each of the individuals owning the data and taking it forward with their line manager. We really communicated the importance of confidentiality at all times. We were very transparent about who would receive the data, how it was stored, where it was stored, how it was going to be emailed and so on. And we built the model, which was completely designed for them; it was based on their capabilities, and it meant that the language was their language. It didn't jar when people were completing it. And then, finally, we had both qualitative and quantitative questions, so people felt they had the chance to both rate and share information. So, truly, it was a really robust approach, and we got great feedback and incredible testimonials from this client as well.
Caitlin:So that brings us to the end of this episode. I believe we are running out of time, so, as always, thank you, Amanda, and thank you to our listeners. If you did like what you heard, then please do feel free to give us a rating so others can tune in and hear all about psychology at work.
Dr Amanda Potter:Amazing, Caitlin. Thank you, and thank you everyone for listening. I hope this conversation about 360s and how to reduce bias has been useful, and I hope you have a wonderful and successful day.