

@CloudExpo: Blog Feed Post

Moving to the Cloud: Managing Your Environment

The issues involved when moving applications between internal data centers and public clouds

Cloudonomics Journal

This post is part of a series examining the issues involved when moving applications between internal data centers and public clouds.

One of the advantages of cloud computing is that someone else is managing the infrastructure – including the servers, network devices and storage systems, not to mention the data center power conditioning, cooling and fire suppression equipment.  One of the costs of offloading this infrastructure is that the cloud becomes something different and separate from your data center.  In most deployments today, the cloud is almost completely isolated from your data center, and this often requires changes in how you manage and interact with your applications.

So what does “management” mean in this context?  I look at it in terms of provisioning resources and managing the infrastructure, operating systems and applications.  Over the years a remarkable set of tools and processes has been developed to handle these tasks in the data center, and the challenge now is how you integrate all this investment with the new cloud deployments.

For provisioning, the cloud has a model similar to a data center virtualized environment, in that you provision virtual resources from a pool of physical resources.  However, the definition of these resources is dictated by the cloud provider, which means you have to adjust your processes to account for the capabilities and limitations imposed by the cloud, specifically CPU, memory and storage resources.  To be successful in the cloud, you need to match the resources required for your application to the capabilities of the cloud (i.e., pick enough CPU/memory/etc. to meet your application’s requirements while balancing the costs of the cloud resources).  If you already have a provisioning system, you need to expand and modify it to account for the cloud (e.g., add parameters to account for cloud capabilities, build connectors to the cloud API, tie into the cloud account mechanism, etc.).  If you don’t have a system in place, you need to build a new process to access the cloud resources.  The overriding issue for either approach is that there are no standards yet for cloud provisioning, so the work you do to tie into a cloud provider is not portable to another cloud.  The promise of cloud products that offer multi-cloud capabilities is that they can connect you to different clouds using a common set of interfaces.
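One way to contain the portability problem is to hide each provider's provisioning API behind a common interface, so only the adapter layer changes when you add a cloud.  The sketch below is purely illustrative (the provider name, instance sizes and return format are invented, not any real cloud's API), but it shows the capability-matching step described above: picking the smallest offered size that satisfies the application's requirements.

```python
# Hypothetical sketch: a common provisioning interface with one adapter per
# cloud provider.  All names here are illustrative, not a real provider's API.
from abc import ABC, abstractmethod

class CloudProvisioner(ABC):
    """Common interface; each provider adapter translates to its own API."""
    @abstractmethod
    def provision(self, cpus: int, memory_gb: int, storage_gb: int) -> str:
        ...

class ProviderA(CloudProvisioner):
    # This provider only offers fixed instance sizes, so we pick the
    # smallest size that satisfies the request (capability matching).
    SIZES = {"small": (2, 4, 100), "medium": (4, 16, 500), "large": (8, 64, 1000)}

    def provision(self, cpus, memory_gb, storage_gb):
        for name, (c, m, s) in sorted(self.SIZES.items(), key=lambda kv: kv[1]):
            if c >= cpus and m >= memory_gb and s >= storage_gb:
                return f"provider-a:{name}"
        raise ValueError("no instance size meets the request")

# A request for 3 CPUs / 8 GB / 200 GB lands on the "medium" size.
print(ProviderA().provision(cpus=3, memory_gb=8, storage_gb=200))
# -> provider-a:medium
```

A second provider becomes another `CloudProvisioner` subclass; the rest of your provisioning pipeline stays untouched, which is essentially the multi-cloud promise mentioned above.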

Managing your cloud infrastructure can be a lot of work.  You need to integrate with an architecture defined by the cloud provider, using its specific primitives for working with cloud components.  This requires tying into the cloud APIs for configuring IP addresses, subnets and firewalls, as well as data service functions for your storage.  Because control of these functions is based on the cloud provider’s infrastructure and services, you also have to modify your internal processes and control systems to integrate with the cloud infrastructure management.
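In practice, that integration often means translating an internal firewall policy into whatever security-group primitives the provider's API exposes.  A minimal sketch, with invented field names rather than any real provider's parameters:

```python
# Hypothetical sketch: mapping an internal allow-list policy onto the kind of
# security-group rule objects a cloud provider's API typically expects.
# The field names and CIDR policy below are illustrative assumptions.
def to_security_group_rules(policy):
    """Translate an internal {service: port} allow-list into API-shaped rules."""
    rules = []
    for service, port in policy.items():
        rules.append({
            "protocol": "tcp",
            "from_port": port,
            "to_port": port,
            # Example policy: only HTTPS is open to the world; everything
            # else is restricted to the internal address range.
            "cidr": "0.0.0.0/0" if service == "https" else "10.0.0.0/8",
            "description": f"allow {service}",
        })
    return rules

internal_policy = {"https": 443, "ssh": 22}
for rule in to_security_group_rules(internal_policy):
    print(rule["description"], rule["from_port"], rule["cidr"])
```

The point is that this translation layer, like the provisioning adapter, is code you now own and must keep in sync with both your internal policy system and the provider's API.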

Even managing your operating systems as part of a cloud deployment presents challenges.  Many cloud services provide “base servers” or templates that contain a simple distribution or OS, which are then used to build up your specific server/OS/application.  This approach works well when the provider has the exact base server you want to start from, and you have a process in place to build from a running server.  The challenge is that when you build up a server based on a gold image, it may: a) not match the base cloud OS version, b) be built from a non-running or base OS versus a fully-running OS (as required by most clouds), and c) use internal resources (boot servers, internal repositories, etc.) that are not available in the cloud.  From a maintenance perspective, many organizations use central controls for updates (like WSUS for Windows), and these services depend on access to data center networks and services.  Since public clouds run external to your data center, these services either won’t work or need to be altered to run in the hybrid environment.
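The gold-image mismatch in point (a) can at least be detected mechanically, by diffing the package manifest of your internal gold image against the provider's base template.  A minimal sketch, with made-up package data standing in for real manifests:

```python
# Hypothetical sketch: detecting drift between an internal gold image and a
# cloud provider's base template by diffing their package manifests.
# Package names and versions below are invented example data.
def image_drift(gold, cloud_base):
    """Return packages missing from the cloud base, plus version mismatches."""
    missing = {p: v for p, v in gold.items() if p not in cloud_base}
    mismatched = {p: (v, cloud_base[p])
                  for p, v in gold.items()
                  if p in cloud_base and cloud_base[p] != v}
    return missing, mismatched

gold_image = {"openssl": "1.1.1k", "nginx": "1.20.1", "internal-agent": "3.2"}
cloud_base = {"openssl": "1.1.1f", "nginx": "1.20.1"}

missing, mismatched = image_drift(gold_image, cloud_base)
print(missing)     # packages only in the gold image (e.g. internal tooling)
print(mismatched)  # version skew to reconcile before deploying
```

The `internal-agent` entry is the interesting case: it is exactly the kind of internally-sourced component (point c above) that has no equivalent in the cloud base and has to be installed by some other means.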

Finally, the cloud creates additional complexity for managing applications.  You almost always need to modify applications to accommodate cloud differences (virtual environment, networks and storage), which means that the applications in the cloud diverge from the “original” or base applications in your data center.  You may also use third-party tools to help with integration into the cloud (such as VPN software, integration scripts, encryption software, etc.), which then need to be maintained.  Each of these software elements has its own lifecycle and update management, most of which apply to every image deployed into the cloud.
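Because each of those elements has its own lifecycle, and every deployed image carries its own copy, it helps to track component versions per image and flag the ones that have fallen behind.  A toy sketch of that bookkeeping, with invented component and image names:

```python
# Hypothetical sketch: flagging deployed cloud images whose bundled
# components (VPN client, integration scripts, crypto libraries, etc.)
# lag the currently approved versions.  All names/versions are examples.
CURRENT_VERSIONS = {"vpn-client": "2.4", "sync-script": "1.1", "crypto-lib": "5.0"}

deployed_images = {
    "web-01": {"vpn-client": "2.4", "sync-script": "1.0", "crypto-lib": "5.0"},
    "db-01":  {"vpn-client": "2.3", "sync-script": "1.1", "crypto-lib": "5.0"},
}

def stale_components(images, current):
    """Map each deployed image to the components that need updating."""
    return {
        name: [c for c, v in comps.items() if current.get(c) != v]
        for name, comps in images.items()
    }

for image, stale in stale_components(deployed_images, CURRENT_VERSIONS).items():
    if stale:
        print(f"{image}: update {', '.join(stale)}")
```

Multiply this bookkeeping by every component and every image, and the lifecycle-management burden described above becomes concrete.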

The management problems introduced by including the cloud in your infrastructure all have their source in the same issue – the cloud is something separate and different from your data center.  This separation becomes clear when you consider the integration and management issues that span everything from provisioning to reengineering your applications to changes in lifecycle management.  At CloudSwitch we’re streamlining and automating cloud management to eliminate most of these issues, and bridge the separation between the cloud and your data center.

Next: Key Considerations for Cloud Performance


More Stories By Ellen Rubin

Ellen Rubin is the CEO and co-founder of ClearSky Data, an enterprise storage company that recently raised $27 million in a Series B investment round. She is an experienced entrepreneur with a record in leading strategy, market positioning and go-to-market efforts for fast-growing companies. Most recently, she was co-founder of CloudSwitch, a cloud enablement software company, acquired by Verizon in 2011. Prior to founding CloudSwitch, Ellen was the vice president of marketing at Netezza, where as a member of the early management team, she helped grow the company to more than $130 million in revenues and a successful IPO in 2007. Ellen holds an MBA from Harvard Business School and an undergraduate degree magna cum laude from Harvard University.
