13 Reasons your RFP Scoring Sucks


Today we welcome another guest post from Brian Seipel, a Procurement Consultant at Source One Management Services focused on helping corporations understand their spend profile and develop actionable strategies for cost reduction and supplier relationship management. Brian has a lot of real-world project experience in supply chain distribution, and brings some unique insight to the topic.

The most thorough, best designed RFP questionnaire counts for nothing if Procurement can’t interpret the results. Proper submission scoring is critical, yet many Procurement Pros commit at least a few mistakes that seriously damage their ability to assess RFP responses.

I’ve seen my share of such mistakes over the years, and work with clients to clear them up before it’s too late. I’ve included the worst offenders, “The Unlucky 13”, below.

How many did your last RFP fall victim to?

Evaluating Questions

1. Questions aren’t weighted (or aren’t weighted properly).

Not every question is created equal. Consider how important one response is versus another. Critical questions should receive the lion’s share of total weight. I recommend starting at a high level, assigning weight to each category of questions. Once done, delve into each category to distribute that weight across its individual questions.

Not every question needs to be scored – some are for information gathering only. However, if you notice too many unscored questions, evaluate whether they all need to be included in the first place.
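To make the arithmetic concrete, here’s a minimal sketch of that two-step weighting in Python. The categories, weights, and five-point scale are invented purely for illustration – substitute your own structure.

```python
# Minimal sketch of two-level weighting (illustrative names and numbers only).

# Step 1: assign weight at the category level (weights sum to 1.0).
category_weights = {"Capabilities": 0.5, "Implementation": 0.3, "Support": 0.2}

# Step 2: split each category's weight across its individual questions.
question_weights = {
    "Capabilities":   {"Q1": 0.6, "Q2": 0.4},   # shares within the category
    "Implementation": {"Q3": 1.0},
    "Support":        {"Q4": 0.5, "Q5": 0.5},
}

def weighted_total(scores, scale_max=5):
    """Roll raw 1-to-5 question scores up into one weighted result (0.0 to 1.0)."""
    total = 0.0
    for category, questions in question_weights.items():
        for question, share in questions.items():
            raw = scores.get(question)
            if raw is None:          # information-only question, not scored
                continue
            total += category_weights[category] * share * (raw / scale_max)
    return total

# One participant's raw scores:
print(weighted_total({"Q1": 4, "Q2": 3, "Q3": 5, "Q4": 2, "Q5": 4}))  # 0.78
```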

2. “Kill switch” responses aren’t treated as such.

On the subject of weight, some responses are so heavy that the wrong answer can (and should) disqualify a participant out of the gate. If an unacceptable answer invalidates a proposal, don’t bother weighting it – call out the answer as grounds for dismissal.

For example, one critical question may ask for confirmation that a respondent can handle required volumes. If any responses indicate a supplier can’t, no amount of weight would suffice – they simply are no longer viable.
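A sketch of how that might sit alongside the weighted math above: run a simple viability check first, and only score proposals that pass it. The field name can_meet_volume is a made-up example.

```python
# Illustrative kill-switch check: a fatal answer removes the proposal entirely,
# rather than being weighted. The "can_meet_volume" field is hypothetical.

def is_viable(proposal):
    return bool(proposal.get("can_meet_volume", False))

proposals = [
    {"supplier": "Supplier A", "can_meet_volume": True},
    {"supplier": "Supplier B", "can_meet_volume": False},
]

shortlist = [p for p in proposals if is_viable(p)]
print([p["supplier"] for p in shortlist])  # only Supplier A is scored further
```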

3. Scoring is overly simplistic.

True/false questions are easy to understand and score, but too many of these cause problems in the long run. Odds are your suppliers will end up looking too similar if the bulk of responses fall into simple yes/no buckets.

4. Scoring is overly complex.

On the other side, some scoring systems end up too complex to be reasonably applied. I’ve seen scores range from 1 to 20. On paper, this appears to allow fine-tuned scoring. In reality, I’d challenge anyone to properly differentiate a score of “12” from “13.”

Evaluating Responses

5. Questions from participants go unanswered.

Your questionnaire may seem clear to your team, but chances are good that one or more participants either don’t understand your intent or lack the background information from you to answer properly.

Every RFP should include chances for Q&A with participants. If you don’t provide this opportunity, responses will hinge on assumptions made by participants – enough assumptions, and the end result may not align at all with your requirements.

6. Questions to participants go unasked.

The same is true on the other end. If a response is unclear to the scorer, then clarification should be sought. Otherwise, the scorer is left to make assumptions in order to interpret a response.

7. The wheat wasn’t separated from the chaff.

Anyone who’s ever scored a Marketing RFP will be familiar with this concept. Ever read a 200-word reply to a question, only to realize at the end that the participant never gave a direct answer? Quantity does not equal quality – a detailed non-response is still a non-response.

Evaluating the Scoring Process

8. Clear criteria aren’t provided to scorers.

Simply providing a scoring scale isn’t enough. If you ask for a score of one to five, be sure to provide concrete direction on what constitutes a one versus a five and every point between.
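One lightweight way to do that is to write the definitions down right next to the scale and hand them to every scorer. A sketch, with wording that is purely illustrative:

```python
# Illustrative one-to-five rubric shared with every scorer before evaluation.
RUBRIC = {
    1: "Requirement not addressed, or answer is unacceptable",
    2: "Partially addressed; significant gaps remain",
    3: "Meets the requirement, nothing more",
    4: "Meets the requirement and adds clear value",
    5: "Exceeds the requirement, with documented evidence",
}
```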

9. Too few scorers are included.

The more stakeholders involved in scoring, the less likely results will be thrown off by huge score discrepancies. The team in charge of scoring should encompass any stakeholders who would interact with the supplier or the product/service, in addition to Procurement.

10. Score results are averaged blindly.

As a counterpoint to the above, don’t simply average all scores together at the end of the initiative. Large discrepancies in scores may indicate that two or more scorers viewed either the question or response (or both) differently. Use big discrepancies as a flag to ensure everyone is on the same page and revise accordingly.
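Here’s a small sketch of that check, assuming a one-to-five scale. The two-point threshold is an assumption – tune it to whatever scale you actually use.

```python
from statistics import mean

# Average scorer results per question, but flag wide disagreement for review
# instead of letting the average quietly absorb it. Threshold is illustrative.
DISCREPANCY_THRESHOLD = 2   # on a 1-to-5 scale

def consolidate(question, scores_by_scorer):
    scores = list(scores_by_scorer.values())
    spread = max(scores) - min(scores)
    if spread >= DISCREPANCY_THRESHOLD:
        print(f"Review {question}: {scores_by_scorer} diverge by {spread} points")
    return mean(scores)

print(consolidate("Q7", {"Dave's team": 5, "Procurement": 4, "IT": 1}))
```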

11. External factors influence results.

Score only what is within the questionnaire. Don’t award ghost points to an incumbent based on their years of service. Likewise, don’t give an artificial boost to a hungry alternate because they came in competitively on pricing. There will be time later to consider outside elements – for now, stay focused on specific questions and responses.

12. Internal factors influence results.

“What? Dave’s team gave these guys a ‘nine’?!” “Don’t worry about it – just give them a ‘two’ to even the score out.” I wish I had made this example up. I did not. I’ve worked with stakeholders who doctored their own scores to offset other scores they disagreed with. Needless to say, this artificial tampering helps nobody.

13. Scoring lacks consistency from one response to another.

Here’s a fun way to screw with your team. Give them a pop quiz by asking them to rescore one of their first questions right after they finish scoring all responses. I’d be willing to bet on the outcome – the scores won’t match. Maybe by a little, possibly by a fair margin. When we’re evaluating half a dozen or more participants by scoring potentially hundreds of questions… it’s easy to get fatigued or change your mindset midway through.

Many people like to score one participant fully, then move on to the next. I recommend scoring on a per-question basis instead. Take a question and score the response from each participant down the line. Repeat for the next question. So on, so forth. This way, you’ll stay in the same frame of mind and consider each response on equal footing.
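In code terms, the only change is the loop order – a trivial but illustrative sketch:

```python
# Score per question across all participants, not per participant across all
# questions, so every response to the same question gets the same mindset.
questions = ["Q1", "Q2", "Q3"]
participants = ["Supplier A", "Supplier B", "Supplier C"]

for question in questions:            # outer loop: questions
    for participant in participants:  # inner loop: participants
        print(f"Score {participant}'s response to {question}")
```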

Do your RFP justice – you worked hard to develop it and marshal participants through it to the end. Before working through responses, sit down with your team and review your strategy for evaluating the results. And make sure everyone is on the same page when it comes to avoiding the mistakes above.

Thanks, Brian!