11/07/2019 10:18 | Share
Last year, Amazon’s Prime Day saw the company sell $4.2bn worth of goods online in a single 36-hour period. Several months later, Cyber Monday saw global revenues of $7.9bn in a mere 24 hours. With Prime Day exceeding 2017’s sales by 33%, and expected to demonstrate similar growth in 2019, the trend…
San Francisco-based startup Orderful announced today that it has raised US$10mn in a Series A funding round led by Andreessen Horowitz. Orderful will reportedly use the money to add additional features to its SaaS platform that connects supply chain players through electronic data interchange. Speci…
The e-commerce giant, Amazon, has received a patent for drone technology which is set to perform surveillance as a service as it begins the development of its security and surveillance tools, according to Supply Chain Dive.
As confirmed by Jeff Wilke, Amazon Consumer Products Head, Amazon reveale…
You read that right. AI at most companies is not Artificial Intelligence. It’s not Autonomous Intelligence, Augmented Intelligence, Assisted Intelligence, or even Amplified Intuition. In reality, it is marketers taking Green Day’s AI a little too literally (and treating everyone like an American Idiot*) and repackaging old tech with a new label.
You see, most of what the Marketing Mad Men are trying to sell as AI are just old-school statistical algorithms in a brand-new wrapper. And the only reason these technologies are finally hitting the market and getting good results is the sheer amount of processing power and data we now have at our disposal, because dumb algorithms (which is what they are) only work well when you have a lot of processing power, a lot more data, and a power plant to run that hardware 24/7 at 99% capacity across dozens, if not hundreds, of trial parameterizations until you find something that, well, just works.
But it’s not intelligence. It’s advanced curve fitting, regression, k-means clustering, support vector machines, and other statistical inference techniques that existed in SAS in the 1990s. Except now the curve fitting is nth-degree polynomial, trigonometric, geometric, n-dimensional, step-wise, and adaptive. The regression is nonlinear, non-parametric, stepwise, and much more robust and accurate, because you can process millions of data points if you have them. The k-means is not clustering around one or two dimensions, but one or two dozen if necessary, in a large multi-dimensional space, and the clusters can be of arbitrary n-dimensional geometric shapes using kernel machines. The support vector machines are not just based on primal, dual, and kernel classification with a bit of gradient descent, but enhanced with multi-class support vectors, advanced regression, and transduction (to work with partially labelled data). And so on.
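To make the point concrete, here is one of those decades-old statistical inference techniques, ordinary least squares regression, sketched in pure Python. The data points are made up for illustration; the math is the same closed-form formula taught long before anyone called it AI:

```python
# "Regression" is closed-form statistics, not magic: ordinary least
# squares for y = slope*x + intercept, solvable with 1990s-era math.
# (Illustrative sketch; the data values below are invented.)

def ols_fit(xs, ys):
    """Fit y = slope*x + intercept by ordinary least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance over variance gives the slope directly -- no "AI" required.
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    slope = cov_xy / var_x
    intercept = mean_y - slope * mean_x
    return slope, intercept

xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]   # roughly y = 2x
slope, intercept = ols_fit(xs, ys)
print(round(slope, 2), round(intercept, 2))  # -> 1.99 0.05
```

The only thing that has changed since the 1990s is that the same arithmetic now runs over millions of points instead of hundreds.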
And don’t think there’s anything new about “deep neural networks” either. They are just the multi-level neural networks that were commonplace in the 1990s, with more levels, more nodes per level, and more advanced statistical classification functions in each node trying to figure out how to extract patterns from unclassified data in order to classify and structure it. They happen to get better results because they can work on millions of data points instead of thousands, and do tens of millions of calculations and re-calculations instead of tens of thousands. And that’s the only reason they get better results “out of the box”. There is absolutely nothing better or more advanced about the core technology. Nothing. It’s still as dumb as a doorknob, no matter how whizz-bang the marketers make it out to be.
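A neural network, deep or otherwise, is just nested weighted sums passed through a nonlinearity. Here is a minimal sketch, a two-layer network with hand-set weights computing XOR, which is the classic 1990s textbook exercise (the weights are chosen by hand for illustration, not learned):

```python
# A multi-layer neural network is nested weighted sums plus a
# nonlinearity -- the same construction taught in the 1990s.
# Hand-set weights computing XOR (a classic textbook exercise).

def step(x):
    """The step nonlinearity: fire (1) if the weighted sum is positive."""
    return 1 if x > 0 else 0

def layer(inputs, weights, biases):
    """One dense layer: weighted sum per node, then the nonlinearity."""
    return [step(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def xor_net(x1, x2):
    hidden = layer([x1, x2],
                   weights=[[1, 1], [1, 1]],   # node 1 ~ OR, node 2 ~ AND
                   biases=[-0.5, -1.5])
    output = layer(hidden, weights=[[1, -2]], biases=[-0.5])
    return output[0]

print([xor_net(a, b) for a in (0, 1) for b in (0, 1)])  # -> [0, 1, 1, 0]
```

Stack more layers and more nodes per layer, train the weights on millions of points instead of setting them by hand, and you have a “deep” network; the core construction is unchanged.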
And at the end of the day, the “active” part of the neural network is a fraction of the overall network (which means as much as 90% of the computation is wasted), and if that active part can be identified and abstracted, you typically end up with a small neural network no bigger than the ones being used twenty years ago, which, even if it has more than three or four layers, can probably be redesigned as a three-or-four-layer network. (See the recent article on the MIT research, for example.) [But if you’ve studied advanced mathematical systems, this is not an unexpected result. Over-dumbification has always led to unnecessary processing and inferior results. Of course, over-smartification also leads to ineffective algorithms, because data, typically produced by humans, is not perfect either; we need to account for this, detect small perturbations, and deal with them. But it’s always better to be thoughtful in our design than to just brute force it.]
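The idea that most of the network is dead weight can be sketched with simple magnitude pruning: drop the weights that are near zero and count what survives. (This is an illustrative toy on made-up weights, not the MIT team's actual method.)

```python
# Sketch of the pruning idea: most weights in a trained network sit
# near zero, and dropping them leaves a much smaller "active" network.
# (Illustrative magnitude pruning on invented weights.)

def prune(weights, threshold=0.1):
    """Zero out weights whose magnitude falls below the threshold."""
    return [w if abs(w) >= threshold else 0.0 for w in weights]

weights = [0.9, 0.02, -0.03, 0.7, 0.01, -0.8, 0.05, 0.04, -0.02, 0.6]
pruned = prune(weights)
active = sum(1 for w in pruned if w != 0.0)
print(active, "of", len(weights), "weights survive")  # -> 4 of 10 weights survive
```

In this toy, 60% of the computation was being spent on weights that contribute essentially nothing, which is exactly the waste the research describes.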
In other words, many modern Marketing Mad Men in enterprise software have become the new snake-oil salesmen, often selling simple statistical packages for a million dollars, or raising tens of millions for yesterday’s tech in a shiny new wrapper. But it’s not intelligent, or even intuitive, by any stretch of the imagination.
That’s not to say that there isn’t technology that can qualify as Assisted Intelligence (and maybe even Augmented Intelligence in special cases), just that the majority of what’s being pushed your way isn’t.
So how do you know if you are among the majority being subjected to Applied Indirection or among the minority being offered a solution with true Assisted Intelligence capabilities? Stay tuned as we discuss this topic in more depth in the weeks to come …
* It’s much preferable to be a Canadian Idiot. We’re nicer and the “AI” marketers don’t bother us as much.
As we said five years ago (and probably even earlier than that), spot buying individual categories at market lows, or even running reverse auctions at opportune times, is NOT category management. And for that matter, neither is a strategic sourcing event that throws everything in the category into a strategic negotiation, especially if the category is metals and you are including the kitchen sink.
And you might be thinking that the doctor needs a psychiatrist because how could it not be category management if you are addressing the whole category? Category Management isn’t just about grouping all seemingly related items and running an event. Category management is about grouping items that have related characteristics that allow the items to be sourced effectively under the same strategy.
For example, while it might make theoretical sense to group printers, ink, and paper together (because you use them together), from a sourcing point of view ink and paper often go better with office supplies, and printers with hardware. You can probably get the printers thrown in for free with a server purchase. But that’s just the start.
For example, if you source a lot of metal parts, you should probably start by grouping them by primary metal, since the price of steel, aluminum, etc. will largely dictate the price of those parts. Furthermore, it might even make sense to not only source all of the parts from the same supplier but even buy the metal on behalf of the supplier with your better negotiating power and/or credit rating.
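The first segmentation step described above is mechanical: bucket the parts by primary metal, since the raw-metal price drives the part price. A minimal sketch (the part numbers and metals are hypothetical):

```python
# Bucket parts by primary metal as the first segmentation step,
# since the raw-metal price largely dictates the part price.
# (Part data is invented for illustration.)

from collections import defaultdict

parts = [
    ("bracket-01", "steel"), ("housing-02", "aluminum"),
    ("frame-03", "steel"), ("panel-04", "aluminum"),
    ("shaft-05", "steel"),
]

by_metal = defaultdict(list)
for part, metal in parts:
    by_metal[metal].append(part)

print(dict(by_metal))
# -> {'steel': ['bracket-01', 'frame-03', 'shaft-05'],
#     'aluminum': ['housing-02', 'panel-04']}
```

Each bucket can then be sourced under a strategy tied to the underlying commodity, including buying the metal on the supplier's behalf.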
But that’s just the start. Then you have to make sure the parts grouped together are (best) produced using similar processes, because giving a supplier a part that is only easily produced by laser cutting, when that supplier only has traditional machining and cutting equipment, is not going to be a good decision. Even though the volume will lower their cost of metal, the extra work will increase the cost per unit.
So sometimes you will need to group the category into sub-categories by metal and production style, get bids separately and together (from any supplier that can offer both), and do a multi-level analysis to find the best approach. (And this is yet another reason that SI has been telling you since DAY ONE that you need an optimization-backed sourcing platform, as this is the only way you can effectively analyze all the options.)
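In miniature, the multi-level analysis looks like this: enumerate every way of awarding the sub-categories separately, compare against the bundled bids, and keep the cheapest plan. All suppliers and prices below are hypothetical, and a real event with realistic constraints needs a true optimization engine rather than brute-force enumeration:

```python
# Toy multi-level bid analysis: compare splitting sub-categories
# across suppliers against a bundled award to one supplier.
# (Suppliers, sub-categories, and prices are invented.)

from itertools import product

bids = {
    "steel_parts":    {"SupplierA": 100_000, "SupplierB": 95_000},
    "aluminum_parts": {"SupplierA": 80_000,  "SupplierC": 78_000},
}
bundle_bids = {"SupplierA": 172_000}  # discount for winning both

def best_award(bids, bundle_bids):
    """Enumerate split awards and bundled awards; return the cheapest."""
    best_cost, best_plan = float("inf"), None
    # Option 1: award each sub-category to its own (possibly different) supplier.
    for combo in product(*(cat.items() for cat in bids.values())):
        cost = sum(price for _, price in combo)
        if cost < best_cost:
            best_cost = cost
            best_plan = ("split", {cat: sup for cat, (sup, _) in zip(bids, combo)})
    # Option 2: award the whole category to one bundling supplier.
    for supplier, price in bundle_bids.items():
        if price < best_cost:
            best_cost, best_plan = price, ("bundle", supplier)
    return best_cost, best_plan

cost, plan = best_award(bids, bundle_bids)
print(cost, plan)  # -> 172000 ('bundle', 'SupplierA')
```

Here the best split award (95,000 + 78,000 = 173,000) loses to the 172,000 bundle, which is exactly the kind of cross-sub-category trade-off that is invisible without analyzing the options together.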
And sometimes you will have to leave items with a large demand or core material component out of the category because they are cheaper when sourced as part of a different category buy, where they can be produced by other suppliers or bundled for a larger volume-based discount.
For example, consider an organization-wide UPS replacement. A UPS is technically just a power transformer with a battery, but you wouldn’t source the units from the manufacturer that builds custom transformers for your on-site renewable solar and wind farm. You’d source them from the hardware supplier who supplies the rest of your office electronics, because that supplier buys such units in bulk from a manufacturer who produces them in bulk, and can give you a better deal.
Comprehensive category management is looking at a category from a holistic perspective and finding the right segmentation to get the best overall value through the right sourcing method at the right time.
It’s not just a one-time slice-and-dice; it’s a continual analysis of the category from a multi-dimensional and current market perspective to make sure that, each time an event is run, the right strategy is used across the right sub-category of products and services, which are offered to the right prospective supply base.
And it requires up-front market analysis before the event, as well as optimization-backed analysis during it. So you need a good analytics platform, preferably with some automation, that can constantly pull in market data, analyze it against current costs, plot and predict the trends, and provide the necessary market intelligence, which can then be compared to a best-practice knowledge base that indicates the event type that has been the most historically successful under current conditions. (And in the spirit of our recent Applied Indirection series, this is not AI, this is RPA with parameterized suggestion look-up.)
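At its core, that “RPA with parameterized suggestion look-up” is a rules table keyed on market conditions, with no learning involved. A minimal sketch, where the conditions, thresholds, and suggested event types are all hypothetical illustrations:

```python
# "RPA with parameterized suggestion look-up" is, at heart, a rules
# table keyed on current market conditions -- no learning involved.
# (Conditions and suggested event types are hypothetical.)

# (market_trend, category_competitiveness) -> suggested event type
EVENT_RULES = {
    ("falling", "high"): "reverse auction",
    ("falling", "low"):  "spot buy",
    ("rising",  "high"): "multi-round RFQ with long-term contract",
    ("rising",  "low"):  "negotiated long-term contract",
    ("stable",  "high"): "RFQ",
    ("stable",  "low"):  "incumbent renegotiation",
}

def suggest_event(market_trend, competitiveness):
    """Look up the historically most successful event type for the
    current, parameterized market conditions."""
    return EVENT_RULES.get((market_trend, competitiveness),
                           "manual category review")

print(suggest_event("falling", "high"))  # -> reverse auction
print(suggest_event("chaotic", "high"))  # -> manual category review
```

The automation's value is in constantly refreshing the parameters (the trend, the competitiveness assessment) from live market data, not in any intelligence in the look-up itself.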