Category Archives: Risk Management

You Need to Get a Handle on Your Global Trade Risks – FAST!

Because, if you don’t, in three months, one in every five shipments you make is going to result in a large fine! By the end of May, the United States Customs and Border Protection (CBP) is expected to issue a proposed rule that would make various changes to increase the accuracy and reliability of the advance information submitted under the Importer Security Filing (ISF or 10+2). No big deal, right? Wrong! It is further expected that the final ISF rule will follow later this year and that the CBP will, upon release of the new rule, begin to enforce (the full extent of) the penalties associated with the ISF.

This is a major risk for most organizations, as the most recent publicly available statistic on 10+2 compliance puts the compliance rate at just 80%. In other words, 20% of shipments are not compliant! It’s hard to say why. It could be because, up until now, the CBP has not issued much (if anything) in the way of penalties for violations and failures, and many importers, (customs) brokers, and forwarders are taking advantage of the situation and doing nothing to improve their processes and procedures when they (regularly) make late and inaccurate filings. And if this is the case, this is a dangerous game — for you!

We have to remember that the CBP has the right to enforce a minimum fine of $5,000 for EACH 10+2 violation. If you do a lot of importing, this adds up fast when every fifth shipment is in violation. Even if you only did 10 inbound shipments a month, you could expect to lose at least $120K a year to fines! And that’s (significantly) more* than what an average mid-size organization can expect to pay for an annual license to a basic SaaS e-Trade Document management system these days. So get one, and begin to get a grip on your global trade risks, fast, before you burn money needlessly.
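The back-of-the-envelope math behind that $120K figure can be sketched in a few lines (the shipment volume is a hypothetical, and the $5,000 minimum fine and 20% non-compliance rate are the figures cited above):

```python
# Rough ISF (10+2) penalty exposure, assuming the $5,000 minimum fine
# per violation and the 20% industry non-compliance rate cited above.
MIN_FINE = 5_000            # minimum fine per 10+2 violation, USD
NON_COMPLIANCE_RATE = 0.20  # one in five shipments

def annual_fine_exposure(shipments_per_month: int) -> float:
    """Expected annual fines if every non-compliant shipment is penalized."""
    violations_per_year = shipments_per_month * 12 * NON_COMPLIANCE_RATE
    return violations_per_year * MIN_FINE

# 10 inbound shipments a month -> 24 violations a year -> $120,000 in fines
print(annual_fine_exposure(10))  # 120000.0
```

Scale that to 100 shipments a month and you are staring at $1.2 Million a year, which is why even a basic e-Document management system pays for itself.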

* A good (end-to-end) global trade management system will still run you six figures, but it goes way beyond e-Document management and provides multiple ROI in terms of process improvement, tactical man-hour reduction, global supply chain visibility, compliance monitoring, etc. (But if you’re small, or just getting started, you can start with just the e-Document management and ease your way into a bigger system.)

Getting a Grip on Multi-Tier Supply Chain Risk – A Resilinc Commentary


Today’s commentary guest post is from Jon Bovit, Chief Marketing Officer of Resilinc, a provider of supply chain resiliency solutions for industries including high-tech, medical devices, and automotive manufacturers. SI recently covered Resilinc in detail in Do You Know What’s At Risk? Resilinc Does! and Will Resilinc Resonate with Your Supply Chain.

Today’s supply chains are complex, global, and highly dependent on sub-tier suppliers. The long-term sustained success of companies is hugely dependent on the resiliency of their suppliers. Despite this, most supply chain leaders are unable to readily access the critical supplier information necessary to manage the business effectively. Supply chain leaders need a solution that maps the global supply chain across multiple tiers, identifies critical supply chain dependencies, exposes critical vulnerabilities and single points of failure, manages risk mitigation across the organization, and optimizes resiliency practices throughout the organization.

Despite popular opinion to the contrary, the harsh reality is that measuring supply chain risk at the supplier, or even the location, level is inadequate for today’s global and complex supply chains. To properly manage supply chain risk, a company must start by mapping its global supply chain down to the individual products, parts, sites, and revenue across each of the multiple tiers. Once the multi-tier supply chain is mapped down to the product and part level, with the proper methodology, the company can calculate risk scores based on (multiple measures of) financial risk, location (economic and geopolitical) risk, and recovery risk (recovery time and BCP). By evaluating supply chain elements based on inherent financial, location, and recovery risks (which align well with the risks identified in the recent World Economic Forum Global Risks report), supply chain practitioners can choose the most effective mitigation strategies.

As an example, by utilizing the above methodology, the Resilinc platform is able to quickly identify high-risk, high-revenue, single-sourced parts for a high-revenue-producing business unit. The high risk may come from long recovery times at a specific supplier manufacturing site in Malaysia or Japan. The customer can then come up with specific risk mitigation strategies for those high-risk, high-revenue, single-sourced parts before a disruption occurs, which could save the company millions in losses and immeasurable damage to its brand. If risk were measured only at the supplier level, these details would have been missed completely.
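That kind of part-level screening can be sketched in a few lines. To be clear, the field names, equal weighting, and cutoffs below are illustrative assumptions, not Resilinc’s actual model; the point is simply that once risk is captured at the part level, the high-risk, high-revenue, single-sourced parts fall out of a simple filter:

```python
from dataclasses import dataclass

@dataclass
class Part:
    name: str
    revenue_at_risk: float  # annual revenue dependent on this part, USD
    single_sourced: bool
    financial_risk: float   # 0-1, supplier financial health
    location_risk: float    # 0-1, economic/geopolitical risk of the site
    recovery_risk: float    # 0-1, driven by recovery time and BCP maturity

def risk_score(p: Part) -> float:
    # Illustrative equal weighting of the three risk dimensions.
    return (p.financial_risk + p.location_risk + p.recovery_risk) / 3

def high_priority(parts, score_cutoff=0.6, revenue_cutoff=1_000_000):
    """High-risk, high-revenue, single-sourced parts: mitigate these first."""
    return [p for p in parts
            if p.single_sourced
            and p.revenue_at_risk >= revenue_cutoff
            and risk_score(p) >= score_cutoff]

parts = [
    Part("ASIC-01", 5_000_000, True, 0.4, 0.8, 0.9),   # long recovery at one site
    Part("CONN-17",   200_000, True, 0.7, 0.7, 0.7),   # high risk, low revenue
    Part("PCB-09",  3_000_000, False, 0.9, 0.9, 0.9),  # risky but dual-sourced
]
print([p.name for p in high_priority(parts)])  # ['ASIC-01']
```

Note that a supplier-level view would score the ASIC-01 supplier as moderately risky at best; only the part-level view exposes the single-sourced dependency on one long-recovery site.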

Customers should not only focus on assessing and mapping risks based on their supplier global footprint and site locations, but should also capture sub-contractor and sub-tier supplier dependencies, site activities, part origin, alternate sites, recovery times, emergency contacts, and business continuity planning (BCP) information. By focusing on identifying critical vulnerabilities and the highest risk exposures using quantitative scores and impact analysis at the product, part, and site level, leaders can direct limited budget and resources into the right areas for optimal protection against future supply chain disruptions.

Thanks, Jon!

What is Necessary to Get a Grip on Risk before You Select a Supplier for Outsourcing?

Outsourcing ain’t going away. The best we can hope for is near-sourcing, but that will depend on the ability to find the needed expertise and scale at competitive rates (at least until oil and transport across large distances become so expensive that labour rates don’t matter). So we need a way to select a supplier that won’t increase risk to ridiculous levels and all but guarantee that, at some point, our supply chain will come to a grinding halt when the supplier goes bankrupt, gets cut off from its supplier, or gets cut off from us.

One way is to get an assessment of risk for the supplier, the city the supplier is located in, and the country the city is located in, build a composite picture, and determine if there is any serious risk of supplier failure, inbound supply chain failure or inaccessibility, or outbound supply chain failure or inaccessibility. But where do we get that risk assessment? And how do we know it’s the right one for us?
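Building that composite picture can be as simple as a weighted blend of the three assessments. The weights and scores below are illustrative assumptions, not a standard model; the real debate, as discussed next, is over where the underlying scores come from:

```python
def composite_risk(supplier_risk: float, city_risk: float,
                   country_risk: float,
                   weights=(0.5, 0.2, 0.3)) -> float:
    """Weighted composite of supplier, city, and country risk scores (0-1 each).

    The weights are illustrative; a real model would be tuned to the
    category and the organization's own risk tolerance.
    """
    ws, wc, wn = weights
    return ws * supplier_risk + wc * city_risk + wn * country_risk

# A financially shaky supplier in a stable city and country can still be
# riskier overall than a solid supplier in a riskier region.
score = composite_risk(supplier_risk=0.8, city_risk=0.2, country_risk=0.3)
print(round(score, 2))  # 0.53
```

The hard part is not the arithmetic, it is sourcing defensible inputs for each factor, which is why the choice of risk report matters so much.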

The “where” is external to the organization. We go to an organization like D&B, Resilinc, or Neo Group, which has been collecting data on the supplier, city, and country, and get their report. But how do we know we’re getting the right report? This is the toughie.

First of all, are they using the right risk model? If you refer back to the World Economic Forum’s annual Global Risks report, you see that, at the very least, you have to consider societal, environmental, geopolitical, economic, and technological factors at the region level, but since you will be conducting business with a supplier at a physical location, business, legal/regulatory, infrastructure, and local quality of life will also play a role. When you start talking about suppliers, you need to look at their financial stability, associations (clients/partners), governance, workforce, and (service) innovation (leadership) capabilities.

But how do you define each of these in a way that can be measured in a standard way? And will such definitions incorporate all that is relevant to your organization? For example, when we’re talking economic we’re talking inflation, currency, fiscal deficit, GDP growth, stock market performance, reserves, etc. However, when we’re talking supplier service capability, we’re talking workforce education level, tools, language proficiency, incentives, etc.

It’s a very tough question. And often what matters is category specific. I’ve reached out to a couple of the big providers of Risk Monitoring solutions. Let’s see if any take me up and provide their viewpoint.

I Hope You’re Not Paying a Wealth Investment Advisor!

Because if you are, the only person getting wealthy out of the deal is the investment advisor on your money! Especially when your LOLCat can do a better job, and will work for temptations and catnip!

As per this recent article in The Observer, a ginger tabby named Orlando beat a team of professionals and a group of students in a year-long stock-picking experiment, summarized in a recent article on how Orlando is the cat’s whiskers of stock picking. The cat, who selected stocks by throwing his favourite toy mouse on a grid of numbers allocated to different companies, beat Justin Urquhart Stewart of Seven Investment Management, Paul Kavanagh of Killick & Co, and Andy Brough of Schroders, who had decades of investment knowledge between them.

So if you really want to beat the market, replace your stock analysts with cats who are just as accurate (and don’t put much faith into predictive analytics no matter how much big data you have).

Is The Air Force’s Billion Dollar Flop the Biggest Supply Chain Failure in History?

Six years ago, Supply Chain Digest published a piece on “The 11 Greatest Supply Chain Disasters” in history. The list was updated in a 2009 blog post on The Top Supply Chain Disasters of All Time by Editor-in-Chief Dan Gilmore, which added five new entries, bringing the total to 16.

The top three were:

  • the failure of Foxmeyer’s “Lights Out” Warehouse,
    which was the top disaster in the original report and wiped out the 5 Billion dollar company almost overnight;
  • the Boeing outsourcing fiasco,
    which led to massive 2-year-plus delays in the production and delivery of the long-awaited 787 Dreamliner and some 2 Billion in charges to fix supplier problems; and
  • GM’s Robot Mania,
    in the 1980s, when CEO Roger Smith spent 40 Billion on robots that, for the most part, didn’t work.

But SI thinks the recent Air Force modernization effort should top the list. As per this great article over on the New York Times site on the Billion-Dollar Flop, the six-year-old effort, which had already eaten up more than 1 Billion, didn’t even achieve a quarter of the planned capabilities, with another Billion required to reach even that minimal target. That puts the effort, supposed to cost $628 Million, at over 8 Billion to complete! This easily dwarfs the 2 Billion in charges (plus losses due to delayed sales) suffered by Boeing and the 5 Billion Foxmeyer failure.

Does it dwarf the GM failure? The failed gamble cost GM a lot, but they are still in business, and posted almost 1.5 Billion in profit last year. And they were able to fix their processes and technology and improve over time.

In comparison, the Air Force is stuck relying on legacy logistics systems, some of which have been in use since the 1970s. And it turns out that this failure is just the tip of the iceberg, with the Institute for Defense Analyses noting that modernization of the department’s software systems, which has been a priority for 15 years, has cost over 5.8 Billion as of 2009 and most large operational software system efforts are still behind schedule. So now we’re up to six billion.

And the losses mount for every year a legacy system (way) past its prime has to remain in production. With today’s rapid pace of software, and hardware, refresh cycles, it’s often difficult to find a replacement part for a piece of hardware that is only 3 years old, and if you do find it, it’s costly. The Air Force has to find replacement parts for systems that are 13 and 30 years old! And let’s not forget energy and support costs! Older systems often consume far more power and require more support hours than newer systems. Plus, over time, the expertise in supporting such systems goes from relatively common to extremely rare as more and more people retire or move on to different systems and technologies and no new people learn the antiquated systems. As a result, the expertise that remains becomes very costly, as the few people left demand a premium and expenses mount when they have to be flown in from halfway across the country.

Plus, the failure has instilled a fear of future technology fiascos, causing the Air Force to impose an across-the-board deadline of 18 to 24 months for future upgrade projects. While this sounds good in theory, and an upgrade project for most systems generally shouldn’t take longer, there are some systems where the requirements analysis alone is going to take 6 to 12 months and the migration plan, which will involve a lot of data mapping, development, and testing, will take just as long. Add a staged implementation plan, quality assurance, and user testing, as well as time for any customizations the COTS (Commercial Off The Shelf) vendor has to make to the core system, and the project could take considerably longer. So this is going to prevent some upgrades from happening until COTS technology in certain areas improves or a vendor is willing to bite the bullet and create the mapping middleware without a contract, in the hopes it will get one. In the meantime, the losses mount.

While SI does not have the data to calculate it, it would bet that if you did a total loss analysis over all of the delayed and failed projects leading up to, revolving around, and including the modernization initiative over the last decade, the number would be five times higher, in much the same way that the license cost of an on-premise software solution, amortized over five years, often turns out to be only 1/10th of the total cost of ownership.

It might not add up to a 40 Billion loss yet, but by the time the Air Force recovers and modernizes all of the systems that need modernizing, it will likely get there.