Empowering the New Data Developer

March 16, 2018

After years of frustration with the trucking industry’s slow and inconsistent processes for loading and unloading cargo, Malcolm McLean in 1956 watched as his SS Ideal-X left port in New Jersey loaded with 58 of the world’s first intermodal shipping containers – a product he invented and patented.

The defining feature of his container was its simplicity: he designed it to be easy to load and unload, with no unique assembly required. It is estimated that this seemingly simple concept cut shipping costs from $5.86 per ton to $0.16 per ton, a reduction of more than 90%, which in turn drove global standardization. On the basis of that standard, cargo capacity could expand almost without limit, and global commerce began to accelerate.

Like shipping, the technical world has sought standardization throughout its history. Some of the more recent advances include the rise of TCP/IP, the broad adoption of Linux, and now a new era defined by Kubernetes. The benefit of standardization in this realm is flexibility and portability: engineers build on a standard, and in the case of Kubernetes their work is fully portable, with no need to understand the underlying infrastructure. Like McLean's intermodal shipping container, the payoff is reuse, flexibility and efficiency.

With shipping containers, the expansion of cargo drove a revolution in commerce. The cargo was the purpose; the container was the mechanism. In today's technology landscape, the cargo is data: data that is put to work by the new data developer, and that holds the insights determining competitive advantage in every industry.

Most of the advances in IT over the past few years have focused on making life easier for application developers. But no one has unleashed the data developer. Every enterprise is on the road to AI, and AI requires machine learning, which requires analytics, which in turn requires the right data and information architecture. When these essential building blocks are integrated, they provide a clear business benefit: 6% higher productivity, according to a recent MIT Sloan study.

When enterprise intelligence is enhanced, productivity increases and standards can emerge. The drawback is the assembly required: every system needs to talk to every other, and data architectures must be normalized. What if an organization could establish the building blocks of AI with no assembly required?

Announced today, IBM Cloud Private for Data is an engineered solution for data science, data engineering and application building, with no assembly required. Within a single integrated experience, anyone, even an aspiring data scientist, can find relevant data, do ad-hoc analysis, build models, and deploy them into production.
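To make those four steps concrete, here is a minimal sketch of the loop in Python, with pandas and scikit-learn standing in for the product's integrated tooling. This is not IBM Cloud Private for Data's actual API, and the dataset, file name and columns are hypothetical placeholders.

```python
# Illustrative sketch only: this is NOT the IBM Cloud Private for Data API.
# pandas and scikit-learn stand in for the integrated catalog, notebook,
# and deployment experience; "customer_churn.csv" and its columns are
# hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# 1. Find relevant data (here, a local export of a governed dataset).
df = pd.read_csv("customer_churn.csv")

# 2. Ad-hoc analysis: a quick statistical profile of the dataset.
print(df.describe())

# 3. Build a model on the labeled data.
X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns=["churned"]), df["churned"],
    test_size=0.2, random_state=42,
)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# 4. Deploy: in the product this step would publish a scoring endpoint;
#    here we simply score the held-out rows.
print(f"Holdout accuracy: {model.score(X_test, y_test):.2f}")
```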

For the first time ever, data has superpowers. Consider the following, which only IBM Cloud Private for Data provides: seamless access to data across on-premises systems and all clouds; a cloud-native data architecture behind the firewall; and data ingestion rates of up to 250 billion events per day.
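For a sense of scale, a quick back-of-the-envelope conversion (my arithmetic, not a figure from the announcement) turns that daily ceiling into a sustained per-second rate:

```python
# Back-of-the-envelope only: convert the quoted 250 billion events/day
# into a sustained per-second rate.
events_per_day = 250e9
seconds_per_day = 24 * 60 * 60  # 86,400
print(f"{events_per_day / seconds_per_day:,.0f} events per second")
# -> 2,893,519 events per second, i.e. roughly 2.9 million events/sec sustained
```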

What Kubernetes solved for application developers (dependency management, portability, and so on), IBM Cloud Private for Data will solve for data developers, speeding their journey to AI. What McLean's container did for commerce, this can do for unleashing data as a competitive advantage. Now is the time to make your data ready for AI.

This story first appeared on IBM THINK Blog
