OpenStack Lands IBM, Red Hat

The open source OpenStack cloud project officially revealed its list of corporate supporters today, and for the first time both Red Hat and IBM are among them.
OpenStack is currently in the process of migrating from a loosely governed project to a full open source foundation governance model. AT&T, Canonical, HP, IBM, Nebula, Rackspace, Red Hat and SUSE will be Platinum members of the new foundation. Cisco, ClearPath, Cloudscaling, Dell, DreamHost, ITRI, Mirantis, Morphlabs, NetApp, Piston Cloud Computing and Yahoo are joining the effort as Gold members. The difference between membership tiers is about money and doesn't affect the technical direction of the project.
Jonathan Bryce, chairman of the Project Policy Board for OpenStack, explained to InternetNews.com that as part of the process of building the OpenStack Foundation, it's now time to get companies to take a formal step and state their intentions around sponsoring the foundation.
The Platinum membership involves a three-year commitment of $500,000 a year, which will provide the OpenStack Foundation as a whole with a minimum of $4 million in funding a year.
"Money is just one part of it, we want companies to all pitch in so we can build something great together," Bryce said. "Platinum members also have requirements around full-time employees that they have contributing to the open source project and a corporate strategy that lines up with OpenStack as well."
That corporate strategy includes using OpenStack clouds and building it into products. He added that all of the companies that are members have also contributed code in the recent Essex release of OpenStack.
Gold membership, in contrast, is variable: fees are based on company size and range from $50,000 to $200,000 a year. Membership tier does not limit what companies can do in terms of technical development.
"The purpose of the foundation is really focused on community building," Bryce said. "The actual development, technical meritocracy and project technical leads that are elected by committers – all of those things are not changing."
Bryce stressed that the current development process is healthy and working as is. From a technical perspective, he added that no one has to pay anything to participate or consume the code.
In terms of what the membership fees will be used for, Rackspace VP of Business & Corporate Development Mark Collier explained that Rackspace today pays for the community building activities for OpenStack.
"We have dedicated community managers and folks that manage events so all those activities have costs and headcount associated with them," Collier said. "So the foundation will be taking over the responsibility for those activities."
Collier noted that as Rackspace transitions its responsibilities, it wants to make sure that similar or greater resources are available to build the community.
The process to build an OpenStack Foundation kicked off in October of 2011. The general idea is to have an open structure that will enable the further development and adoption of OpenStack. OpenStack was originally started by Rackspace and NASA in July of 2010 and has since grown to include the support of 166 contributing companies.
While some of the new Platinum members, including AT&T, Canonical and HP, had previously publicly announced OpenStack efforts, Red Hat in particular had not.
"Now that OpenStack is moving to a foundation, Red Hat felt that this new governance structure would provide a good framework for enhancing open source collaboration around OpenStack," Red Hat wrote in a statement. "Yes, Red Hat is planning to introduce an enterprise distribution of OpenStack. However, we are not announcing any specific product plans right now."

Dell Cloud Computing: Ricky Santos Interview

As the vice president of Dell Cloud Solutions, Ricky Santos has a lot on his plate. Dell has spent nearly $2 billion in the last year acquiring firms to bulk up its cloud computing offering. As the executive in charge of that strategy, Santos is clearly one of the leaders in this emerging technology.
In a wide-ranging interview, I spoke with Santos about what companies need to know about deploying to the cloud, the hybrid-public-private cloud combination, and other cloud-related topics:
Q: What are some key things that companies need to be aware of as they deploy assets to the cloud? I think there’s a certain amount of confusion among many different sizes of companies, in terms of “what do we need to be thinking of in this process?” What sort of advice would you give on this?
RS: Before even deploying any assets to the cloud, it’s important to know what those assets are doing for the company, so you can see if it fits a purpose. If it’s data, then our thinking is, definitely, the hybrid cloud is where companies are going to get the most benefit.
In terms of core assets, the thinking is, build on that company’s private cloud – regardless of the size of the private cloud. You can focus on a smaller, highly optimized, efficient private cloud footprint, while taking advantage of the public clouds – the different cloud types out there, community clouds – to extend that footprint for capacity, scalability, that kind of stuff.
There’s a whole governance aspect to this, right? – having a strategy for service management and governance. Because besides having an awareness of what assets to deploy and how to manage that, it’s a question of how it’s to be used and what falls out of policy, enforceable policy.
For example, it is very easy to go to the cloud today, not just for the IT [department] but in particular for the business units that IT supports. It’s very easy for them to circumvent IT – it takes a credit card, that’s all it takes. And if you don’t have any governance around that, it means that services are being used that are not easy to understand. The next thing you know, you’ll have many different instances of public clouds that are outside the realm [of IT control].
Q: You mean different divisions in a company are setting up their own departmental cloud computing infrastructure?
RS: Absolutely. And this is an actual example – without naming any customers – there’s a company that did an audit and realized there were close to 300 American Express accounts at public cloud providers.
At its most basic, that means hundreds of cloud architectures that I have no control over, with no visibility to, with 200+ variances of risk for my company.
Q: If you were an IT manager choosing a cloud vendor, what sorts of things would you ask if you were weighing the pros and cons of various prospective vendors? What do you recommend for this selection process?
RS: I would want to understand how that prospective vendor is going to integrate those services, those functions, with the rest of my IT. Meaning, my legacy traditional environments, right? Because the last thing I want to do is have different operating environments, [including] my traditional legacy environment, and have a completely different [environment] for cloud services that’s completely disconnected. This is an example of Level One monitoring – problem management, incident management, request change.
So I’d like to understand what are the integration points that vendor would provide and identify for me. Give me the criteria so I can plug into that. It all comes down to a strategy around the service management, the governance and the integration of those services.
Otherwise, I’d better just be using that vendor for a project to test things out. Because if I’m going to take advantage of the cloud, I want it to be an extension of IT and not treated separately.
Q: How would you describe Dell’s focus as a cloud computing provider? What aspects does the company target the most among the many aspects of cloud computing?
RS: There are four key attributes of Dell Cloud. First is enterprise security. Embedding what we do with SecureWorks. So we can protect the cloud, information in the cloud, etc.
Next is cloud infrastructure and hybrid integration. So this is the advantage of Dell servers, storage and networking, and actually having a cloud component in each of those products. A good example I like to describe is “cloud ready solutions” – take the PowerEdge C and our plans to integrate the ability to burst to the public cloud from that server, and hybrid integration. So, tying them very closely together, the software and things that we’re doing with VIS, have a cloud integration component to it. Again, the focus is extending that private cloud.
The third is application integration, aggregation. This is a huge challenge today, if you’re looking to develop the most value from the cloud. The most value in the cloud starts with being able to enable that hybrid cloud for our customers. And when they have a hybrid cloud, there’s more flexibility, there’s so much richness to the data. That [helps] my ability to do analytics, because I can move data from one location to the other, whether it’s my private cloud or it’s a multi-tenant cloud, to do the kind of analytics I need to do, to get the trends that I was limited from before, and also understand risk. I can now do this with a Boomi solution.
The fourth one is around services integration. [This] allows us to extend cloud to the rest of IT. If I go back to my example of Level One monitoring, I can see incidents, I can see requests, manage them, handle them, across the different environments that IT supports today: Legacy, additional IT, and now the cloud component.
Q: You talked about hardware, software and services. Let me play the skeptical customer for a moment. I’m going to say ‘I know Dell is a hardware company, and services, okay, I can buy that, but Dell as a software company? I’ve never thought of Dell as a software company. And for cloud services, you have to be pretty savvy in the software world.’
RS: There’s two [points] to that. One, part of Dell’s legacy on the hardware side is [software], which has been tightly integrated into Dell products. And of course you’re very well aware of our software organization led by John Swainson. And that’s a major focus to now build on an enterprise-class portfolio for both horizontal and vertical software as a service.
You mentioned Dell as a hardware company. Yeah, for skeptical customers, they’re surprised when we start talking about services and service integration, and our ability to deliver these types of solutions. We have a 40,000 [employee] strong services organization – people who are focused on delivering Dell services.
Obviously we’re leveraging that, and the capabilities delivered out of that group to create Dell Cloud Services – infrastructure through application modernization, migration, and of course security.
Q: I know that Dell is an active participant in the OpenStack community. Why has Dell chosen to focus on open solutions for the cloud, what’s the advantage there?
RS: We’re focusing on both, we’re definitely committed to open solutions. We’re also still driving our solutions with VMware. We announced last summer our public cloud offering with VMware, vCloud. [Open solutions] enables us to be much more flexible in terms of solutions, in terms of managing the cloud.
It also allows us to tap into the cloud ecosystem even better. We understand, again, it’s a multi-sourced environment, and if we’re looking to deliver on what we see as the two biggest benefits and reasons for the cloud – cost is a big component of that, and efficiency – the next thing is enabling new businesses, right? Allowing new businesses, start-ups, to innovate without being hindered by capacity and costs, etc.
Now there’s also the medium-sized business and enterprise level – obviously there’s significant cost efficiency there. The business opportunity for them is identifying new revenue. If there were services they had planned, or were in the works, but were hampered because of scalability requirements or elasticity for compute, that now is available via the cloud, and it makes the business viable. So it’s new revenue opportunity.
Q: Dell has spent almost $2 billion in the last year or so acquiring cloud companies. What’s the focus behind this strategy?
RS: I’d like to be selfish [laughs] because I’m responsible for Dell and cloud. But there are at least five or six acquisitions that enhance our cloud portfolio, and our ability to create IP and deliver cloud end to end.
And I’m going to go back to delivering those four attributes: service integration is hard. To get true integration of these different environments and deliver interoperability, we need the IP, we need the management solutions to be able to deliver that, securing the cloud, protecting information with SecureWorks. And then the other aspect of integration with Boomi.
Add to that other acquisitions we’ve made, like Clerity, which enhances our ability to do application modernization – which underpins why customers would want to move to the cloud. SonicWALL, for network security and, again, data protection, enhances what we’re able to do with SecureWorks.
And also understand what we’re doing with these companies, and I’ll use SecureWorks as an example. Take our vCloud offering: we are providing options for that offering. Now it is a fully managed service, right? A fully managed public cloud service that we offer customers today. But we add enhancements to that. Customers can elect to increase the security aspect of the managed service. So think of your car – what other upgrades can I get with this service?
Q: Anything you’d like to add?

RS: Going back to the legacy that Dell has built, we have the credibility and the capability to deliver these services, not just within each of their areas, but actually deliver them in an integrated fashion for our customers. So it is an end to end solution for our customers for the cloud.
I think, if anything, we’re very sensitive to the fact that, yup, you can gain efficiencies by going to the public cloud solution, and we have that. But also thinking of, what’s next? Where can we take our customers, how can we transform their environment so they get the most advantage from cloud? That is really looking at all four [of the Dell points] and delivering that kind of value to our customers today.

Educating the Cloud: How ETS is Moving Up

For the last 65 years, the Educational Testing Service (ETS) has been helping to perform large-scale educational assessments like the SAT and GRE. ETS now does over 50 million assessments a year globally, all supported by a backend IT infrastructure.
It's an infrastructure that is now, in part, headed to the cloud in an effort to improve operational efficiency and scale.
"The whole foundation for us to scale what we do is based on complex algorithms and the ability to do that at massive scale in a fairly short period of time," Daniel Wakeman, VP and CIO, Educational Testing Service, told InternetNews.com. "Because of that, we need a lot of computing horsepower and also, the dichotomy is, we do it in bursts as our business is very cyclical."
The SAT college entrance examination, for example, occurs five times a year. When testing isn't occurring, all of ETS's computer infrastructure sits idle, as they currently don't have an easy way to re-purpose it. There is also a need for large amounts of compute power to enable research projects into different types of cognitive testing. That's why ETS began to consider the cloud as a way to lower their fixed costs in order to deal with the seasonality of their computing needs.
"Our situation today is that we have a big server farm of 900 servers running at 6 percent utilization," Wakeman said. "We have been virtualizing a bit and now we're looking at cloud."

Challenges

Moving to an on-demand cloud infrastructure is not a simple one-step process for ETS.
ETS has multiple challenges in moving to the cloud, including classifying all of their various applications and figuring out what can be moved and when. All that, in turn, needs to be balanced against release cycles and testing events.
Another key challenge involves managing both old and new environments, since the company can't move everything to the cloud. What ETS is aiming to do is to be able to encapsulate their workloads into images that can then be moved when and where needed.
"We also need to be able to understand how much the cloud deployment will cost," Wakeman said. "On one hand there is Amazon, where you pull out your credit card and set up a cloud. On the other hand, many of the other cloud service providers have their grand PowerPoints and plans, but they couldn't tell us how they were going to charge us."
ETS has chosen to go with cloud and data center vendor CSC, which offered a variable pricing model. ETS had been using CSC for the past 10 years to outsource its core data center infrastructure. As to why ETS didn't choose to simply use Amazon, Wakeman noted that there was a bit too much of a do-it-yourself approach with Amazon. Instead, ETS opted for a fully managed cloud that could work with its existing data center deployment.
"Unless you're going to write your applications from scratch to work over a WAN, they're not going to perform well," Wakeman said. "That's what we found out initially when we tried out a few applications on Amazon, and they had real issues of latency back to our datacenter, even though we have large pipes and so does Amazon."

Costs

CSC is using VCE vBlocks for its cloud infrastructure. VCE is a joint venture of Cisco, EMC and VMware, and the vBlock is its core unit of hardware, delivering an integrated hardware and software approach for cloud.
What ETS now has is a hybrid cloud that combines traditional data center assets with CSC's on-premises vBlocks for on-demand cloud computing.
The way ETS pays for their cloud deployment is also a hybrid approach. They pay for a certain amount of reserve capacity, then there is also the ability to buy additional capacity on demand at set prices. Attached to those on-demand compute cycles are service level agreements around how fast those compute cycles can be brought online. The on-demand compute units also get cheaper if ETS is able to provide more advance notice to CSC.
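As a rough sketch of how such a reserve-plus-on-demand model behaves, consider the following. All rates and discount tiers here are invented for illustration, since actual CSC pricing was not disclosed:

```python
# Hypothetical cost model for a hybrid reserve/on-demand pricing scheme
# like the one ETS describes. All numbers are invented for illustration.

RESERVE_RATE = 0.05      # $ per compute-unit-hour for reserved capacity
ON_DEMAND_RATE = 0.12    # $ per compute-unit-hour, base on-demand price

# More advance notice to the provider -> cheaper on-demand units
# (assumed discount tiers, checked from largest notice down).
NOTICE_DISCOUNTS = [
    (30, 0.30),  # 30+ days notice: 30% off
    (7, 0.15),   # 7+ days notice: 15% off
    (0, 0.00),   # same-day: full price
]

def on_demand_rate(notice_days):
    """Return the discounted on-demand rate for the given advance notice."""
    for min_days, discount in NOTICE_DISCOUNTS:
        if notice_days >= min_days:
            return ON_DEMAND_RATE * (1 - discount)
    return ON_DEMAND_RATE

def monthly_cost(reserved_units, burst_units, notice_days, hours=720):
    """Reserved capacity is billed for the whole month; burst capacity
    is billed at the notice-dependent on-demand rate."""
    reserved = reserved_units * RESERVE_RATE * hours
    burst = burst_units * on_demand_rate(notice_days) * hours
    return reserved + burst
```

Under these made-up numbers, a testing burst planned a month ahead costs noticeably less than a last-minute one, which is the incentive structure Wakeman describes.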
From a total cost of ownership perspective, figuring out if the cloud makes sense is a multi-tiered equation. Not all of ETS's applications can move easily and many require an investment of skills and resources to re-write for the cloud. Wakeman noted that ETS is still not clear how much rework will need to be done and how much that will cost.
"For the things that we can move easily, there is a quick payback," Wakeman said. "There is some payback since the cost to run a server at six percent utilization versus a virtual server on demand, provides savings that are real and tangible."

Google v. Oracle Courts Cloud Controversy

Nowadays when it comes to IP squabbles, the IT industry can't help but be a little jaded -- even when tech titans take to the courtroom. However, the copyright case between Oracle and Google has cloud computing innovators fearing the fallout.
At issue are Oracle's copyrights surrounding Java application programming interfaces (APIs), 37 of which the company accuses Google of misappropriating for the Android operating system. On Monday, a jury in California concluded that Google violated Oracle's copyrights in a partial verdict that was anything but cut-and-dried.
The split decision left one crucial question unanswered: Did Google's use of the Java APIs constitute fair use?

Free-Wheeling Cloud APIs

Barring uses that compromise the integrity of their platforms or run afoul of law enforcement, IT companies generally like to take a hands-off approach to their APIs in order to grow their platforms, spark market adoption, and build healthy, revenue-generating ecosystems around their technologies. In the cloud provider market, Amazon stands as a prime example.
Amazon's APIs are widely used by startups and established cloud services alike. They have helped foster innovation that, in part, is currently driving a massive amount of IT investments in cloud infrastructures and supporting software. That's why Amazon raised eyebrows in March when it officially sanctioned Eucalyptus's use of its AWS APIs for private and hybrid cloud deployments.
As viewed by industry watchers, the move is indicative of Amazon's strategy for bringing more enterprise customers into the AWS fold by eliminating legal and technical uncertainty. It's also an attempt to fend off competing -- and relatively unrestricted -- cloud technologies like the open source OpenStack platform.
Lingering in the air is the possibility that one day Amazon won't like how another firm is using its APIs and take legal action. It's a scenario that can have devastating effects on the IT industry, experts warn.

Will Clouds Go Dark?

In the Electronic Frontier Foundation's (EFF) reaction to the Oracle v. Google case, staff attorney Julie Samuels states that in the EFF's view, Google made fair use of the Java APIs. She also spells out the fundamental issue that exists at the intersection of copyrights and APIs.
She writes, "Here's the problem: Treating APIs as copyrightable would have a profound negative impact on interoperability, and, therefore, innovation. APIs are ubiquitous and fundamental to all kinds of program development. It is safe to say that all software developers use APIs to make their software work with other software."
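The interoperability point can be made concrete with a toy Python analogue (not code from the case, which concerned Java): two independently written classes share only the method names and parameters – the "API" – so code written against one works unchanged with the other.

```python
# A toy analogue of the API-vs-implementation distinction at the heart
# of the case. Both classes expose the same method signature -- the part
# that makes them interchangeable to callers -- while the bodies (the
# creative implementation) are written independently.

class MathLib:
    def max(self, a, b):
        # original implementation
        return a if a >= b else b

class CleanRoomMathLib:
    # Reuses only the "API": the class/method names and parameters.
    def max(self, a, b):
        # an independently written body with the same observable behavior
        return sorted([a, b])[-1]

def caller(lib):
    # Code written against the API works with either implementation.
    return lib.max(3, 7)
```

If the signatures themselves were copyrightable, the clean-room reimplementation above would need a license even though no implementation code was copied – which is the scenario cloud vendors fear for API-compatible platforms.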
The effects of enforcing copyrights on APIs can be devastating to the burgeoning cloud market, according to George Reese, Chief Technology Officer for enStratus Networks, a cloud management specialist. He told Wired Enterprise that copy-protected APIs "would put any company that has implemented the Amazon APIs at risk unless they have some kind of agreement with Amazon on those APIs."
At the very least, following Oracle's lead on APIs could erect costly barriers to entry for cloud companies.
In his examination of the verdict for Wired Enterprise, Robert McMillan wrote that the "case could give Amazon legal grounds to seek licensing deals from OpenStack users such as Hewlett-Packard and Rackspace." OpenStack mimics Amazon's APIs, as does Citrix's CloudStack and middleware from Jclouds and Fog.
Despite this case, Samuels says that the courts already have clear guidelines when it comes to copyrights and APIs.
"Setting aside the practical consequences, there’s a perfectly good legal reason not to treat APIs as copyrightable material: they are purely functional. The law is already clear that copyright cannot cover programming languages, which are merely mediums for creation (instead, copyright may potentially cover what one creatively writes in that language)."

HP's OpenStack-Powered Cloud Enters Public Beta

HP reached a big milestone in its Converged Cloud initiative by kicking off a public beta for its HP Cloud Services suite.
The IT giant is entering the public cloud services market with an infrastructure based on OpenStack, an open source cloud platform developed by NASA and Rackspace. In just under two years, the project has attracted an impressive army of supporters, many of which are making both financial and technological contributions.

OpenStack Momentum Builds

IBM and Red Hat are among the latest to enter the OpenStack fold as the platform evolves beyond a group of loosely-linked contributors to an organized open source foundation. It has captured industry mind share as an open source alternative to cloud software based on proprietary technologies like Amazon's AWS. In turn, a dynamic developer community has surfaced and the platform is currently fueling a growing OpenStack software and services market.
Indeed, HP emphasizes those factors as major selling points for its Cloud Services offerings. "Designed with OpenStack technology, the open source-based architecture ensures no vendor lock-in, improves developer productivity, features a full stack of easy-to-use tools for faster time to code, provides access to a rich partner ecosystem, and is backed by personalized customer support," says HP in a company statement.
Staying true to the timetable set by last month's Converged Cloud announcement, HP on Thursday flipped the switch on HP Cloud Compute, HP Cloud Object Storage and HP Cloud Content Delivery Network. The aim, according to the company, is to get businesses of all stripes -- from web app startups to enterprises -- up and running quickly on an infrastructure that scales to their needs.
"Whether you are an independent developer, ISV or the CIO of a major organization, the priority is to design your applications for today's cloud economy," said Zorawar Singh, senior vice president and general manager of HP Cloud Services, in a statement.
Costs are tallied using a pay-as-you-go model, as is typical with cloud services. For example, HP Cloud Object Storage costs $0.12 per GB per month up to 50 TB and drops to $0.10 per GB per month for the next 950 TB. HP is slashing prices by 50 percent during the public beta period.
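Using those published tiers, a small sketch shows how a monthly storage bill would be tallied. This assumes 1 TB is billed as 1,000 GB and covers only the two tiers quoted above; pricing beyond 1,000 TB is not stated:

```python
# Tiered billing sketch for HP Cloud Object Storage as described in the
# article: $0.12/GB-month for the first 50 TB, $0.10/GB-month for the
# next 950 TB. Assumes decimal units (1 TB = 1,000 GB) for billing.

FIRST_TIER_TB = 50
FIRST_TIER_RATE = 0.12   # $ per GB per month, first 50 TB
SECOND_TIER_RATE = 0.10  # $ per GB per month, next 950 TB

def monthly_storage_cost(tb_stored, beta_discount=False):
    """Return the monthly cost in dollars for tb_stored terabytes.
    The 50% public-beta discount is applied as a flat cut at the end."""
    first_tb = min(tb_stored, FIRST_TIER_TB)
    rest_tb = max(tb_stored - FIRST_TIER_TB, 0)
    cost = (first_tb * 1000 * FIRST_TIER_RATE
            + rest_tb * 1000 * SECOND_TIER_RATE)
    return cost * 0.5 if beta_discount else cost
```

So 200 TB comes to $6,000 for the first 50 TB plus $15,000 for the remaining 150 TB, or $21,000 a month at list price and $10,500 during the beta.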

Early HP Cloud Partners

The HP Cloud Services public beta launches with the support of nearly 40 companies, a group that the company hopes will one day develop into a bustling HP Cloud Services Marketplace. The app store approach to cloud services and partner solutions calls for access and billing via a single, unified account.
For now, the HP Cloud Services partner ecosystem is made up of a mix of cloud providers, including management, security, storage and Platform-as-a-Service (PaaS) specialists. For instance, CloudSoft, RightScale and Smartscale Systems have joined HP on the cloud management front. Security options include Dome9 and SecludIT. ActiveState, CloudBees, and Gigaspaces are among the PaaS options.

VMware Advances Cloud Automation with vFabric 5.1

The cloud makes it easy to build out pools of compute resources. But how do you scale out applications in the same way? That's the goal of VMware's latest vFabric 5.1 release.
The vFabric application development platform debuted in 2010 and has been evolving ever since. The vFabric 5.1 release adds the vFabric Application Director, integration with in-memory database and traditional SQL database technology, as well as full support for the open source Apache Tomcat application server.
"The vFabric Application Director is a tool that allows you to leverage the construct of a virtual machine to automate the deployment of application architecture," David McJannet, VMware's director of Cloud and Application Services, told InternetNews.com. "Application Director lets you create a blueprint so that every new web application you create for an environment for deployment can be replicated and automated."
The vFabric Application Director approach fits into VMware's overall Software Defined Data Center vision that CTO Steve Herrod articulated at Interop last week. It's a vision where software is the platform that defines how things work in a data center, instead of relying solely on hardware. While vFabric can automate application deployments, on its own it doesn't handle the infrastructure piece of the puzzle.
"Application Director is simply an exercise in creating common blueprints for application deployment," McJannet said. "It presupposes that you already have a pool of infrastructure set up, secured and available. Application Director is about leveraging infrastructure that is already in place."
So from a VMware portfolio perspective, an enterprise would use vCloud Director to first set up the pools of virtual server infrastructure. Application Director then sits on top of that to automate the application deployment piece.
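Application Director has its own blueprint format, but as a rough mental model, a blueprint names the application's tiers, the software each carries, and their dependencies, and tiers are then deployed in dependency order. The structure and names below are illustrative only:

```python
# Illustrative mental model of an application blueprint: named tiers,
# the software each tier carries, and deployment dependencies. The
# names and structure are invented -- Application Director uses its own
# blueprint format -- but the deploy-in-dependency-order idea is the same.

blueprint = {
    "db":  {"software": ["PostgreSQL"], "depends_on": []},
    "app": {"software": ["Tomcat", "mywebapp.war"], "depends_on": ["db"]},
    "lb":  {"software": ["nginx"], "depends_on": ["app"]},
}

def deploy_order(bp):
    """Topologically sort tiers so each deploys after its dependencies."""
    order, seen = [], set()

    def visit(tier):
        if tier in seen:
            return
        seen.add(tier)
        for dep in bp[tier]["depends_on"]:
            visit(dep)
        order.append(tier)

    for tier in bp:
        visit(tier)
    return order
```

Replicating an environment then amounts to replaying the same blueprint against a fresh pool of infrastructure, which is the automation McJannet describes.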

Databases

A critical part of any modern application is the database layer, which is another area that VMware's vFabric 5.1 release is going after. The new vFabric release now includes support for the open source PostgreSQL database. VMware first began supporting PostgreSQL in August of 2011 and is now expanding the offering. The initial PostgreSQL support came with VMware's data connector for automating database lifecycles. Now VMware is including PostgreSQL as a standalone database inside of vFabric Suite 5.1.
VMware is also providing a new in-memory database as part of the vFabric 5.1 Suite. SQLFire is based on VMware's GemFire distributed database technology. What SQLFire provides that GemFire did not is a standard SQL interface, making it easier for developers to leverage with existing applications.
"This is a distributed in-memory database where you create a grid of nodes inside your data center or across data centers," McJannet explained. "So you can scale your applications horizontally by just adding more capacity at the data tier as an application comes under load."
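A minimal sketch of that horizontal-scaling idea: rows are partitioned across a grid of nodes by key hash, so adding nodes adds capacity at the data tier. This is toy code – the real product's partitioning, replication and rebalancing are far more involved:

```python
# Toy data grid: rows are spread across nodes by hashing their key,
# so capacity grows by adding nodes. Real distributed databases add
# replication, rebalancing and consistent hashing on top of this idea.

class Grid:
    def __init__(self, node_count):
        # each "node" is just an in-process dict standing in for a server
        self.nodes = [dict() for _ in range(node_count)]

    def _node_for(self, key):
        # pick a node deterministically from the key's hash
        return self.nodes[hash(key) % len(self.nodes)]

    def put(self, key, row):
        self._node_for(key)[key] = row

    def get(self, key):
        return self._node_for(key).get(key)
```

Because every key maps deterministically to one node, reads and writes for different keys land on different machines, which is how load spreads as the grid grows.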

HP's Christian Verstraete: The Value of Hybrid Cloud Computing

What’s the best solution for most companies, a public cloud or a private cloud? In the view of Christian Verstraete, HP Chief Technologist for Cloud Strategy, the best answer for most firms is a combination of the two, a hybrid cloud.
In a wide-ranging interview conducted over Skype, Verstraete – who clearly enjoys talking about tech – spoke about many cloud-related topics.
Known as an expert on cloud security, he gave advice to firms concerned about the safety of their data in the cloud. He also put himself in the shoes of an IT manager shopping for a cloud computing solution, and provided some guidance for this process. Lastly – and most refreshingly for a vendor – he spoke about the importance of avoiding vendor lock-in. Keep your options open, he advised.