While driving to the airport, I talked with the driver about Uber and the development of services in general. He, by the way, is a former COBOL programmer 🙂
It left me with three thoughts:
First. The business of providing “traditional” services should consolidate because the bar for service quality and customer expectations for these services will continuously rise, as will the cost of maintaining that standard. For small companies, this starts to become insurmountable, and they merge with larger ones to maintain the infrastructure at the necessary level.
In this case, competition essentially occurs between the major players, while the small ones only operate in niches that are hard to classify as traditional. For example, women’s or social taxi services. Thus, the fate of “Uberization” awaits most traditional businesses.
Along with consolidation, the volume of data grows. And big data, which was still just a buzzword five to seven years ago, really works now: machine learning is already in widespread production use.
A serious downside I see here is that the large platforms start using knowledge gleaned from a small player's sales to serve that player's "neighbors", and this can backfire on the small player. For instance, a small player introduces a new product and it sells wildly (say, fidget spinners), and the question becomes how to offer smart procurement services to everyone else while keeping the originator's trade secrets. Questionable.
The second thought, which came to me after endless discussions about machine learning, is that it will indeed become ever more prevalent, but there's a downside: people will gradually stop understanding how it works. Including developers, who will no longer be able to explain why the system made a wrong decision, because the historical data trained it wrongly. Fixing this is possible, but then you need to retrain the system on the accumulated data, and that can be very expensive. For example, if your autonomous vehicle suddenly crashes into a tree, perhaps no one will be able to say why. Some unlucky combination of inputs hit a decision-making system distilled from hundreds of terabytes of original data collected over decades, data that taught it the wrong lesson. The data-processing pipeline can be adjusted so it no longer learns from bad examples, but retraining the decision-making model itself is complex and expensive.
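To make the "historical data trained it wrongly" point concrete, here is a toy sketch in pure Python (all names and numbers are hypothetical, and the "model" is a deliberately trivial 1-nearest-neighbour lookup). The key detail it illustrates: cleaning the data pipeline does not fix the already-deployed model; the wrong behaviour persists until you retrain.

```python
# A toy "decision system": 1-nearest-neighbour over labelled history.
# Hypothetical scenario: obstacle distance (metres) -> driving action.

def train(examples):
    # "Training" here is just memorising the labelled history.
    return list(examples)

def decide(model, distance):
    # Pick the action of the closest historical example.
    nearest = min(model, key=lambda ex: abs(ex[0] - distance))
    return nearest[1]

# Historical data with one badly logged record:
# at 2 m the recorded action was "keep_going".
history = [(50, "keep_going"), (30, "keep_going"),
           (10, "brake"), (2, "keep_going")]   # <- the bad example

model = train(history)
print(decide(model, 3))   # "keep_going" -- the tree gets hit

# Adjusting the pipeline to drop bad examples is the easy part...
clean_history = [ex for ex in history
                 if not (ex[0] < 5 and ex[1] == "keep_going")]

# ...but the deployed model only changes after the (expensive) retrain.
model = train(clean_history)
print(decide(model, 3))   # "brake"
```

In a real system the training run over "hundreds of terabytes" is exactly the step that makes this loop expensive, even when the data fix itself is one line.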
The third thought is that in the near future, retail will be automatic, like playing the stock market. You set a million parameters and go to sleep; the system does everything else. If you're too lazy to configure it all, you choose the "minimum yield, minimum risk" package and go to sleep. If you want more money, you play with the parameters, sometimes make mistakes, but still go to sleep. Actually, you don't sleep; instead, you worry and think and think and think. Something like that. Automatic pricing, automatic warehouse replenishment, outsourced call centers, and so on: all the components already exist.
What do you think?
