Why An AI Head Needs a Big Data Body and Cloud Feet

By Ron Raffensperger

February 14, 2019


One of the hallmarks of newly hyped technologies is that they are often described in magical terms, and Artificial Intelligence (AI) is no different. Reading the technology press and vendor announcements makes it seem as if AI can cure all your business's problems and allow you to leapfrog the competition, succeed in new markets in days and delight your customers while reducing costs by 80%. What is particularly fascinating about AI is that the hype is not new; in fact, it is decades old. The first neural network, a foundation of today's AI research, was actually created in the late 1950s. This led to a large leap in expectations, which were never realized. Around 1970, Larry Tesler, a famous software engineer and the inventor of cut/paste, made an observation that has become known as Tesler's Theorem: "AI is whatever hasn't been done yet." The skepticism around AI has continued, but we are now at a point where many of the fantastic claims may finally be realized.

Planning for the Future

The most overblown claims are unlikely to be realized easily, but businesses need to begin to understand the opportunities and plan for the future use of AI. As with all new technologies, there are right and wrong ways to use them, and in most cases organizations must devote time and energy to training staff and managing the integration of the new technologies with existing capabilities. But before you try to understand "back propagation" and put up a job posting for someone who understands "Deep Convolutional Networks", let's look at what type of foundation is required before you can jump into AI.
In general, AI can be divided into Machine Learning and Deep Learning. The simplest way to think of these is that Machine Learning is primarily statistical and involves refining hypotheses about a problem until you have a solution that appears to fit it. Deep Learning, on the other hand, uses techniques like Neural Networks that allow the algorithms to build models automatically. In both cases, the model is "trained" on information where the answer is known and then run on new information. In some cases, the model will keep learning by adding the new information to what it had previously seen. The "run" stage is called Inference because it uses the model to infer the answer from the new information. So clearly the key is to train the model as accurately as possible, and that takes LOTS and LOTS of information.
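To make the train-then-infer pattern concrete, here is a minimal sketch in Python. It uses a toy 1-nearest-neighbour classifier purely for illustration; the function names, data and labels are invented for this example, and a real system would use a proper library and far more training data.

```python
# Toy illustration of the two stages described above:
# "training" on labelled data, then "inference" on new data.

def train(examples):
    """Training here simply stores (features, label) pairs."""
    return list(examples)

def infer(model, features):
    """Infer a label for new data from the closest known example."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(model, key=lambda ex: distance(ex[0], features))
    return closest[1]

# Training information, where the answer (the label) is known.
labelled = [((1.0, 1.0), "small"), ((9.0, 9.0), "large")]
model = train(labelled)

# Inference: the model infers an answer for unseen information.
print(infer(model, (2.0, 1.5)))  # -> small
```

The more (and better) labelled examples the training stage sees, the more accurate inference becomes, which is why the paragraph above stresses that training takes so much information.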

Data Lakes and Big Data

How a business gathers this information and how it is stored will vary based on the business problem being addressed, but in virtually all cases you will need more than normal databases, and even more than a data warehouse. You will need to bring together all sorts of disparate information, including data from outside the core business, and this will generally require something like the Data Lakes found in Big Data solutions. In fact, Machine Learning problems can usually be solved on a Big Data platform alone, without resorting to expensive dedicated hardware like GP-GPUs. It also turns out that most of the common AI problems businesses first want to tackle, such as recommendations, customer retention, marketing effectiveness and fraud detection, are all cases where Machine Learning is used. It is only when moving to image recognition, machine vision and speech recognition that Deep Learning, with its increased expense, is required.
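As a hedged sketch of why statistical Machine Learning needs no special hardware, here is the simplest possible fraud-detection idea: flag transactions that sit far from the statistical norm. The amounts and the two-standard-deviation threshold are illustrative assumptions, not a real detection rule, and at Data Lake scale this kind of computation would run on a distributed engine rather than a single machine.

```python
# Flag transactions whose amount is an outlier relative to the rest.
# Pure CPU-bound statistics -- no GPU required.
import statistics

def flag_outliers(amounts, threshold=2.0):
    """Return amounts more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    return [a for a in amounts if abs(a - mean) > threshold * stdev]

transactions = [12.0, 15.5, 11.2, 14.8, 13.1, 950.0, 12.9, 14.0]
print(flag_outliers(transactions))  # -> [950.0]
```

Real fraud detection layers many such statistical signals together, but the character of the work is the same: lots of data, modest arithmetic per record, which is exactly what a Big Data platform is built for.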

What You Need To Plan

Before beginning to build a Data Lake, it is necessary to have a plan for which business problems need to be solved most urgently. It is also important to have a plan for how future problems will build on both the initial Data Lake and the initial AI solution. Starting small, with solutions that pay immediate rewards, will provide the experience and visible success to support enhancing both the Data Lake and the types of solutions that bring business benefits. What IT organizations need to guard against is a situation like that of a very well-known global financial institution that identified 200 data sources for its Data Lake but, after two years, had only managed to move 30 of them into it.

Bringing In Cloud Computing

The final area of consideration when looking at AI is what role Cloud Computing should play. The advantage of Cloud Computing, whether a Private Cloud in your own Data Center or a Public Cloud, is that you can allocate resources as you need them, without dedicated infrastructure sitting idle when you don't, and that you can share those resources among different types of uses and different users. In general, most businesses prefer to begin building their Data Lake on a Private Cloud because it solves many security and privacy problems and has predictable costs. For specialized types of analysis, or for providing public access to the capabilities enabled by AI, a Public Cloud combined with the Private Cloud as a Hybrid Cloud will often be the optimal solution.
In summary, when businesses begin to look at exploiting the advantages of AI, they must first build Big Data capabilities, and to optimize those operations and capabilities they will usually want to use Cloud techniques. In other words, an AI head needs a Big Data body and Cloud feet.
Click the link for more information about Huawei’s AI strategy and portfolio.

Disclaimer: Any views and/or opinions expressed in this post by individual authors or contributors are their personal views and/or opinions and do not necessarily reflect the views and/or opinions of Huawei Technologies.


Posted in Technology