A compendium of Windows Azure, SQL Azure Database, AppFabric, Windows Azure Platform Appliance and other cloud-computing articles.
Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:
- Azure Blob, Drive, Table and Queue Services
- SQL Azure Database and Reporting
- Marketplace DataMarket and OData
- Windows Azure AppFabric: Apps, Access Control, WIF and Service Bus
- Windows Azure VM Role, Virtual Network, Connect, RDP and CDN
- Live Windows Azure Apps, APIs, Tools and Test Harnesses
- Visual Studio LightSwitch and Entity Framework v4+
- Windows Azure Infrastructure and DevOps
- Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds
- Cloud Security and Governance
- Cloud Computing Events
- Other Cloud Computing Platforms and Services
To use the above links, first click the post’s title to display the single article you want to navigate.
IDV Solutions announced New Open Data connector for Visual Fusion unites more data sources for enhanced business intelligence in a 7/26/2011 press release published by Directions magazine:
IDV Solutions today released an Open Data Protocol Connector for their business intelligence software, Visual Fusion. With this release, organizations can easily connect solutions built with Visual Fusion to any source that uses the OData protocol, including the DataMarket on Microsoft’s Windows Azure Marketplace.
Visual Fusion is innovative business intelligence software that unites data from disparate sources in a web-based, visual context. Users can analyze their data in the context of location and time, using an interactive map, timeline, data filters and other analytic tools to achieve greater insight and understanding. With today’s release, users can combine OData sources with other business information—including content from SQL Server, SharePoint, Salesforce, Oracle, ArcGIS, and Web feeds.
“The new OData connector enhances one of the key advantages of Visual Fusion—the ability to unite data from virtually any sources in one interactive visualization,” said Riyaz Prasla, IDV’s Program Manager. “It lets users add context to their applications by bringing in data from government sources, the Azure Marketplace, and other organizations that support the protocol.”
OData provides an HTTP-based, uniform interface for interacting with relational databases, file systems, content management systems, or Web sites.
Brian Rhea posted a simple demo of Consuming an OData feed in MVC in a 7/20/2011 post (missed when posted):
I created this quick demo on consuming a WCF Data Service to get my feet wet with OData feeds. I thought it might be a good quick start guide for anyone getting started. Enjoy!
Referencing the Service
After creating the MVC 3 Application our first step is to add the service reference to leverage the proxy class generated for us.
Now that we have our proxy class let’s go ahead and add a strongly-typed partial view in the Home folder for the products.
Once we have a list view to show the products, we can add our call to get the view into the main page using the helper Html.RenderPartial.
On the controller side, we will access the products from the OData feed and return them to the view. Since the OData feed is a RESTful service we could just add /Products to the service url and retrieve them:
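(The original post shows this step as a screenshot. As a rough sketch, fetching the raw feed over HTTP might look like the following; the Northwind-style service URL and the Products entity set are placeholders, not values taken from the post.)

```csharp
using System;
using System.Net;

class ODataRestDemo
{
    static void Main()
    {
        // Placeholder service root; substitute the URL of the WCF Data Service you referenced.
        const string serviceUrl = "http://services.odata.org/Northwind/Northwind.svc";

        using (var client = new WebClient())
        {
            // Appending the entity set name to the service root returns the Products feed as Atom XML.
            string feed = client.DownloadString(serviceUrl + "/Products");
            Console.WriteLine(feed);
        }
    }
}
```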
But since I have the generated class available I can just use its context and LINQ to get the products:
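(Again, the post's code appears only in screenshots. A minimal sketch of the LINQ approach, assuming the service reference generated a context named NorthwindEntities with Product entities; both names are illustrative.)

```csharp
using System;
using System.Linq;

class ODataLinqDemo
{
    static void Main()
    {
        // NorthwindEntities and Product are placeholder names for the proxy types that
        // "Add Service Reference" generates; substitute the ones from your own reference.
        var context = new NorthwindEntities(
            new Uri("http://services.odata.org/Northwind/Northwind.svc"));

        // The LINQ query is translated by the WCF Data Services client into an OData
        // request URI (e.g. .../Products?$orderby=ProductName) when it is enumerated.
        var products = from p in context.Products
                       orderby p.ProductName
                       select p;

        foreach (Product product in products)
        {
            Console.WriteLine(product.ProductName);
        }
    }
}
```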
The last thing we need to do is show a select list of categories to sort by and wire it up to change the list using some jQuery binding.
Our page now looks like this:
And in our controller we add the method to get our categories:
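(A minimal sketch of such a method; returning JSON for the jQuery binding is one plausible shape, and the NorthwindEntities/Categories names are assumptions rather than the post's actual code.)

```csharp
using System;
using System.Linq;
using System.Web.Mvc;

public class HomeController : Controller
{
    // Supplies the categories for the drop-down; the proxy type names are placeholders.
    public JsonResult Categories()
    {
        var context = new NorthwindEntities(
            new Uri("http://services.odata.org/Northwind/Northwind.svc"));

        var categories = context.Categories
            .ToList() // materialize the OData results before shaping them locally
            .Select(c => new { c.CategoryID, c.CategoryName });

        return Json(categories, JsonRequestBehavior.AllowGet);
    }
}
```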
Run this and we see our results based on the category chosen:
Okay, so we have our products and can change them by category. Let’s take advantage of Output Caching since the products will not change much (except of course the units in stock). Output Caching is easy to accomplish by adding an attribute to the controller action. If you want you can also use Action Filtering to handle all controllers and actions, providing a great level of caching granularity. Back to our products, just add the OutputCache attribute to the action along with some parameters you can play with to see it working.
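(As a rough illustration of that step, the attribute goes directly on the action; the duration and parameter name below are placeholders to play with, not the values from the original post.)

```csharp
using System;
using System.Linq;
using System.Web.Mvc;

public class HomeController : Controller
{
    // Keep one cached copy of the rendered output per categoryId value for 60 seconds.
    [OutputCache(Duration = 60, VaryByParam = "categoryId")]
    public ActionResult Products(int? categoryId)
    {
        // NorthwindEntities and Product are placeholder names for the generated proxy types.
        var context = new NorthwindEntities(
            new Uri("http://services.odata.org/Northwind/Northwind.svc"));

        IQueryable<Product> query = context.Products;
        if (categoryId.HasValue)
        {
            // Translated into an OData $filter option on the request to the feed.
            query = query.Where(p => p.CategoryID == categoryId.Value);
        }

        return PartialView("_ProductList", query.ToList());
    }
}
```

VaryByParam should name the action parameter that drives the query, so each category value gets its own cache entry.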
Let’s look at this in action in Firebug to really see what’s happening. The first three calls show the first time I select the category. I then select each category again and we can see the difference in response time:
And there we have it. Consuming a WCF Data Service public OData feed and caching the output by parameter.
No significant articles today.
The Windows Azure Team (@WindowsAzure) posted Just Announced: Windows Azure Toolkit for iOS Now Supports Access Control Service, Includes Cloud Ready Packages on 7/27/2011:
Wade Wegner just announced in his blog post, "Windows Azure Toolkit for iOS Now Supports the Access Control Service," the release of an update to the Windows Azure Toolkit for iOS.
This new version of the toolkit includes three key components Wade describes as incredibly important when trying to develop iOS applications that use Windows Azure:
- Cloud Ready Packages for Devices
- Configuration Tool
- Support for ACS
Read Wade’s blog post to learn more about these updates and download all the bits here:
Mary Jo Foley (@maryjofoley) reported Microsoft gives Windows Phone developers a refreshed (non-RTM) Mango build in a 7/27/2011 post to her All About Microsoft blog for ZDNet:
On the heels of its announcement that it had released to manufacturing the Windows Phone operating system 7.5 — better known as “Mango” — Microsoft execs said they’re now updating Mango developers with a near-final build. [See below post.]
The developer update released via the Microsoft Connect site on July 27 is Build 7712. The RTM build is believed to be 7720. The Windows Phone Dev Podcast team hinted yesterday that Microsoft might be ready to launch the 7712 build as early as today.
The big question on many developers’ minds: Why not just give Mango devs the actual RTM bits? Windows Phone Senior Product Manager Cliff Simpkins provided an explanation in a new blog post on the Windows Phone Developer blog:
“For the folks wondering why we’re not providing the ‘RTM’ version, there are two main reasons. First, the phone OS and the tools are two equal parts of the developer toolkit that correspond to one another. When we took this snapshot for the refresh, we took the latest RC drops of the tools and the corresponding OS version. Second, what we are providing is a genuine release candidate build, with enough code checked in and APIs locked down that this OS is close enough to RTM that, as a developer, it’s more than capable to see you through the upcoming RC drop of the tools and app submission. It’s important to remember that until the phone and mobile operator portion of Mango is complete, you’re still using a pre-release on your retail phone – no matter the MS build. Until that time, enjoy developing and cruising around on build 7712 – it’s a sweet ride, to be sure.”
Developers got their first "Beta 2" test build of Mango in late June. This refresh includes a number of updates, including locked application platform programming interfaces; a screenshot capability built into the emulator; an update to the profiler to include memory profiling; the ability to install NuGet into the free version of the Windows Phone SDK tools; and an "initial peek" at the Marketplace Test Kit.
Microsoft execs said last week that the company was planning to deliver a Release Candidate build of the Mango phone software development kit in late August. From today’s post, the RC plan still seems to remain a go.
Mango is now in telco and handset makers’ hands for testing. Microsoft officials have said Mango will be pushed to existing Windows Phone users this fall and be available on/with new handsets around the same time. Mango includes a number of new features, ranging from the inclusion of an HTML5-compliant IE 9 browser, to third-party multitasking, to Twitter integration.
On July 27, Fujitsu-Toshiba announced what are expected to be the first Mango phones. Due out in Japan in “September or beyond,” the IS12T is a waterproof handset that will come in yellow, pink and black and include a 13.2 megapixel camera.
Rob Tiffany (@RobTiffany) reported Windows Phone “Mango” has been Released to Manufacturing! in a 7/27/2011 post:
On July 26th, the Windows Phone development team officially signed off on the release to manufacturing (RTM) build of “Mango,” which is the latest version of the Windows Phone operating system. We now hand over the code to OEMs to tailor and optimize the OS for their phones. After that, our Mobile Operator partners will do the same in order to prepare the phones for their wireless networks.
This is an amazing milestone for the Windows Phone team and Microsoft. With hundreds of new features including the world’s fastest mobile HTML5 web browser, Windows Phone “Mango” promises to make a huge impact in the mobile + wireless space this fall. This “splash” is made even bigger around the world as we expand our Windows Phone language support to include Brazilian Portuguese, Chinese (simplified and traditional), Czech, Danish, Dutch, Finnish, Greek, Hungarian, Japanese, Korean, Norwegian (Bokmål), Polish, Portuguese, Russian, and Swedish.
Only a few months after the Windows Azure SDK for PHP 3.0.0, Microsoft and RealDolmen are proud to present you the next version of the most complete SDK for Windows Azure out there (yes, that is a rant against the .NET SDK!): Windows Azure SDK for PHP 4. We’ve been working very hard with an expanding globally distributed team on getting this version out.
The Windows Azure SDK [for PHP] 4 contains some significant feature enhancements. For example, it now incorporates a PHP library for accessing Windows Azure storage, a logging component, a session sharing component and clients for both the Windows Azure and SQL Azure Management APIs. On top of that, all of these APIs are now also available from the command line, both under Windows and Linux. This means you can batch-script a complete datacenter setup including servers, storage, SQL Azure, firewalls, … If that’s not cool, move to the North Pole.
Here’s the official change log:
- New feature: Service Management API support for SQL Azure
- New feature: Service Management APIs exposed as command-line tools
- New feature: Microsoft_WindowsAzure_RoleEnvironment for retrieving environment details
- New feature: Package scaffolders
- Integration of the Windows Azure command-line packaging tool
- Expansion of the autoloader class increasing performance
- Several minor bugfixes and performance tweaks
Some interesting links on some of the new features:
- Setup the Windows Azure SDK for PHP
- Packaging applications
- Using scaffolds
- A hidden gem in the Windows Azure SDK for PHP: command line parsing
- Scaffolding and packaging a Windows Azure project in PHP
Also keep an eye on www.sdn.nl where I’ll be posting an article on scripting a complete application deployment to Windows Azure, including SQL Azure, storage and firewalls.
As an aside, www.sdn.nl is in Dutch, which isn’t surprising because it’s domiciled in the Netherlands, but I didn’t see an English translation option.
Web Host Industry Review posted Q&A: DotNetNuke’s Mitch Bishop Discusses Version 6 on 7/27/2011:
On July 20, open source software developer DotNetNuke (www.dotnetnuke.com) released Version 6 of its ASP.NET content management system.
DotNetNuke says version 6 offers a significant improvement over its predecessor, including a simplified interface that allows developers, designers and content owners to effectively design, deliver and update websites.
The cloud-focused CMS software has built-in integration for cloud-based services such as Amazon Web Services and Windows Azure, offers the ability to publish files managed in Microsoft Office SharePoint to external websites, and its entire core platform has been rewritten in C# to make it more accessible and customizable for the developer community.
Along with the new version, DotNetNuke has also launched the Snowcovered (www.snowcovered.com) store, which offers some 10,000 DotNetNuke modules, skins, and application extensions for developers to download. The company acquired Snowcovered in August of 2009, and has built it up to better support the CMS.
In an email interview, Mitch Bishop, chief marketing officer at DotNetNuke, discusses the new version, its many benefits, and how it will provide developers with a range of cloud tools.
WHIR: The press release states that DNN 6 is the most "cloud-ready" version ever to be released. Can you explain and elaborate on this?
Mitch Bishop: DNN 6 offers two cloud options. The first is the ability to host your DotNetNuke website in the Windows Azure Cloud Hosting platform. This capability is available in both the free and commercial editions of the product. The second is the ability to store files in either the Amazon S3 or Windows Azure File Storage offerings. Any files needed by your website (images, for example) can be stored either locally, on the website server, or in the cloud using one of these offerings. The developer can set it up so that the files are served up seamlessly, i.e. the website user will not know where the files are coming from – it all looks like one seamless experience. The cloud file storage capability is only offered in the commercial versions of the DotNetNuke product.
WHIR: How will the new DNN Store benefit users?
MB: The store (Snowcovered.com) is an online marketplace that offers 10,000 extensions that help developers add on customized capabilities and skins (site designs) to their website. Having a rich set of extensions means that developers will not have to waste time building their own custom modules or designs – there’s a good chance that required functionality already exists in the store. The DotNetNuke 6 release supports a direct connection to the store that dramatically simplifies the shopping, acquisition, installation and maintenance of these add-on extensions.
WHIR: What kinds of applications are available through the Snowcovered store?
MB: Modules, skins and other apps (functional website code) that make it easy for developers to extend the capabilities of their site.
WHIR: Tell me a bit about the new eCommerce module and how it will help users?
MB: The eCommerce module gives companies a quick way to offer robust online shopping experiences for their customers. The benefit is speed; installing the module and populating the product list is fast and easy. Since payment options are included, there is no time wasted building this functionality on your own.
WHIR: What are the main differences among the three versions (community, professional, enterprise) and what kind of user is each version intended for?
MB: The Community (free) version of the DotNetNuke platform has all the features needed to build a robust website. The platform is flexible, extensible and secure, making it perfect for thousands of business and hobbyist sites. The Professional and Enterprise Editions offer additional capabilities for mid-market enterprises that need more content control and flexibility for customizing and monitoring their web sites.
Tony Bishop (a.k.a. tbtechnet) described a New, Low Friction Way to Try Out Windows Azure and Get Free Help in a 7/26/2011 post to TechNet’s Windows Azure, Windows Phone and Windows Client blog:
If you’re a developer interested in quickly getting up to speed on the Windows Azure platform there are excellent ways to do this.
- Get a no-cost, no credit card required 30-day Windows Azure platform pass
- Get free technical support and also get help driving your cloud application sales. How?
- It’s easy. Chat online or call: http://www.microsoftplatformready.com/us/home.aspx
- Need help? Visit http://www.microsoftplatformready.com/us/dashboard.aspx
- Just click on Windows Azure Platform under Technologies
- How much capacity?
- Should be enough to try out Azure
Bruce Kyle posted Get Azure Cloud Apps Up, Running in Minutes with LightSwitch to the US ISV Evangelism Blog on 7/27/2011:
Microsoft Visual Studio LightSwitch is now available to MSDN subscribers with general availability on Thursday. LightSwitch is a simplified self-service development tool that enables you to build business applications quickly and easily for the desktop and cloud.
Visual Studio LightSwitch enables developers of all skill levels to build line-of-business applications for the desktop, web, and cloud quickly and easily. LightSwitch applications can be up and running in minutes with templates and intuitive tools that reduce the complexity of building data-driven applications, including tools for UI design and publishing to the desktop or to the cloud with Windows Azure.
LightSwitch simplifies attaching to data with data source wizards or creating data tables with table designers. It also includes screen templates for common tasks so you can create clean interfaces for your applications without being a designer. Basic applications can be written without a line of code. However, you can add custom code that is specific to your business problem without having to worry about setting up classes and methods.
For more information, see the blog post at Microsoft® Visual Studio® LightSwitch™ 2011 is Available Today and a video, LightSwitch Overview with Jason Zander.
How to Get LightSwitch
See the LightSwitch site for details about Thursday’s general availability.
For the complete VS LightSwitch RTW story, see Windows Azure and Cloud Computing Posts for 7/26/2011+.
SD Times on the Web reported ComponentOne Releases a LightSwitch Extension Offering Instant Business Intelligence in a 7/27/2011 press release:
ComponentOne, a leading component vendor in the Microsoft Visual Studio Industry Partner program, today announced the release of ComponentOne OLAP for LightSwitch, the first OLAP screen for Microsoft LightSwitch. Microsoft Visual Studio LightSwitch is the latest addition to the Visual Studio family and offers its users a way to create high-quality business applications for the desktop and the cloud without code.
ComponentOne OLAP for LightSwitch provides custom controls and screen templates for displaying interactive pivot tables, charts, and reports. OLAP tools provide analytical processing features similar to those found in Microsoft Excel Pivot tables and charts.
"OLAP for LightSwitch preserves the spirit of Visual Studio for LightSwitch by allowing users to create interactive data views without a deep level of programming knowledge," said Dan Beall, product manager at ComponentOne. "It allows any level of user to snap a pivoting data screen into Visual Studio LightSwitch and instantly get in-depth business intelligence functionality," said Beall.
The Microsoft LightSwitch team created a video demonstrating the OLAP for LightSwitch product which is available for viewing on the company product page.
"What sets this extension apart is the ability to simply add this extension to your project and get a tool that creates interactive tables, charts, and reports similar to those found in Microsoft Excel Pivot tables and charts. Drag-and-drop views give you real-time information, insights, and results in seconds. All this without cubes or code!" said Beall.
"There are online tutorials, forums, documentation and diagrams available as part of the ComponentOne LightSwitch experience," Beall. "We are pleased to welcome this product to the ComponentOne family of products and our online community.
In the OpenLight blog, Microsoft Silverlight MVP Michael Washington writes, "Ok folks, I am ‘gonna call it’, we have the "Killer Application" for LightSwitch, ComponentOne’s OLAP for LightSwitch, you can get it for $295." Washington continues, "Well, I think ComponentOne’s OLAP for LightSwitch may be "The One" for LightSwitch. This is the plug-in that for some, becomes the deciding factor to use LightSwitch or not."
Washington goes on to describe a "Killer App" as an application that provides functionality so important that it drives the desire to use the product it depends upon.
Pricing and Availability
As Mr. Washington mentioned, the product is available for immediate download and costs $295.00 per license. ComponentOne offers online purchase at www.componentone.com and by telephone at 412.681.4343 or 1.800.858.2739.
Jim O’Neil posted Inside the Cloud on 7/27/2011:
The Global Foundation Services team, the folks at Microsoft who ‘run the cloud,’ have just released a video tour of a few of the Microsoft data centers (including the Azure data centers in Chicago, Illinois and Dublin, Ireland). The clip is a little over 10 minutes long and well worth the time in my opinion. I found the evolution of the infrastructure that powers these data centers to be a fascinating story in itself – cutting energy utilization by 50% and water usage by two orders of magnitude over traditional data centers! Take a quick break and watch it here:
This is the same video that yesterday’s post offered.
David Linthicum (@DavidLinthicum) asserted “There seems to be more cloud construction projects than there is talent to support them. That could spell real trouble” as a deck for his Why the shortage of cloud architects will lead to bad clouds article of 7/27/2011 for InfoWorld’s Cloud Computing blog:
The complexities around multitenancy, resource sharing and management, security, and even version control lead cloud computing startups — and enterprises that build private and public clouds — down some rough roads before they start to learn from their mistakes. Or perhaps they just have to kill the project altogether as they discover all that investment is unsalvageable.
I’ve worked on cloud-based systems for years now, and the common thread to cloud architecture is that there are no common threads to cloud architecture. Although you would think that common architectural patterns would emerge, the fact is clouds do different things and must use very different architectural approaches and technologies. In the world of cloud computing, that means those who are smart, creative, and resourceful seem to win out over those who are just smart.
The demand has exploded for those who understand how to build clouds. However, you have pretty much the same number of cloud-experienced architects being chased by an increasing number of talent seekers. Something has to give, and that will be quality and innovation as organizations settle for what they can get versus what they need.
You won’t see it happen right away. It will come in the form of outages and security breaches as those who are less than qualified to build clouds are actually allowed to build them. Moreover, new IaaS, SaaS, and PaaS clouds — both public and private — will be functional copies of what is offered by the existing larger providers, such as Google, Amazon Web Services, and Microsoft. After all, when you do something for the first time, you’re more likely to copy rather than innovate.
If you’re on the road to cloud computing, there are a few things you can do to secure the talent you need, including buying, building, and renting. Buy the talent by stealing it from other companies that are already building and deploying cloud-based technology — but count on paying big for that move. Build by hiring consultants and mentors to both do and teach cloud deployment at the same time. Finally, rent by outsourcing your cloud design and build to an outside firm that has the talent and track record.
Of course, none of these options are perfect. But they’re better than spending all that time and money on a bad cloud.
Joe Panettieri reported Red Hat Warns Government About Cloud Lock-In in a 7/26/2011 post to the TalkinCloud blog:
In an open letter of sorts, Red Hat is warning U.S. policy makers and government leaders about so-called cloud lock-in — the use of proprietary APIs (application programming interfaces) and other techniques to keep customers from switching cloud providers. The open letter, in the form of a blog entry from Red Hat VP Mark Bohannon, contains thinly veiled criticism of Microsoft and other companies that are launching their own public clouds.
Bohannon penned the blog to recap a new TechAmerica report, which seeks to promote policies that accelerate cloud computing’s adoption. The blog is mostly upbeat and optimistic about cloud computing. But Bohannon also mentions some “strong headwinds” against cloud computing — including:
“steps by vendors to lock in their customers to particular cloud architecture and non-portable solutions, and heavy reliance on proprietary APIs. Lock-in drives costs higher and undermines the savings that can be achieved through technical efficiency. If not carefully managed, we risk taking steps backwards, even going toward replicating the 1980s, where users were heavily tied technologically and financially into one IT framework and were stuck there.”
By pointing to the 1980s, Bohannon is either referring to (A) old proprietary mainframes, (B) proprietary minicomputers or (C) the rise of DOS and then Microsoft Windows. My bet is C, since Red Hat back in 2009 warned its own customers and partners about potential lock-in to Windows Azure, Microsoft’s cloud for platform as a service (PaaS).
For its part, Microsoft has previously stated that Windows Azure supports a range of software development standards, including Java and Ruby on Rails.
Still, Bohannon reinforces his point by pointing government officials to open cloud efforts like the Open Virtualization Alliance (OVA); Red Hat’s own OpenShift PaaS effort; and Red Hat’s CloudForms for Infrastructure-as-a-Service (IaaS).
Concluded Bohannon: “The greatest challenge is to make sure that with the cloud, choice grows rather than shrinks. This effort will be successful so long as users are kept first in order of priority, and remain in charge.”
I understand Bohannon’s concern about cloud lock-in. But I’m not ready to sound the alarm over Windows Azure. Plenty of proprietary software companies and channel partners are shifting applications into the Azure cloud. We’ll continue to check in with partners to measure the challenges and dividends.
“Bohannon’s concern about cloud lock-in” is his worry that anyone using a platform other than Red Hat’s potentially decreases his bonus. The fact that a cloud computing platform uses open source code doesn’t mean that one can more easily move a cloud application to or from another open or closed source platform.
No significant articles today.
- Technical Overview of the Security Features in the Windows Azure Platform: http://www.microsoft.com/online/legal/?langid=en-us&docid=11
- Windows Azure Security Overview: http://www.globalfoundationservices.com/security/documents/WindowsAzureSecurityOverview1_0Aug2010.pdf
- Windows Azure Privacy: http://www.microsoft.com/online/legal/?langid=en-us&docid=11
- Securing Microsoft Cloud Infrastructure: http://www.globalfoundationservices.com/security/documents/SecuringtheMSCloudMay09.pdf
- Security Best Practices For Developing Windows Azure Applications: http://www.globalfoundationservices.com/security/documents/SecurityBestPracticesWindowsAzureApps.pdf
Peter Galli posted Microsoft @ OSCON 2011: We have become more open, let’s work together! on 7/27/2011:
Gianugo Rabelino, Microsoft’s Senior Director for Open Source Communities [pictured at right], just finished delivering his keynote at OSCON in Portland.
As Gianugo is now wandering around the OSCON sessions and expo floor, I thought it would be useful to give you a quick recap of what he just presented.
During his keynote, Gianugo discussed how both the world and Microsoft are changing, saying that “at Microsoft we continue to evolve our focus to meet the challenging needs of the industry: we are open, more open than you may think.”
Gianugo explained that the frontiers between open source, proprietary and commercial software are becoming more and more of a blur. The point is not whether you run your IT on an Open Source stack or a commercial stack; the important thing is how you can assemble software components and build solutions on top of them using APIs, protocols and standards. And the reality is that most IT systems use heterogeneous components, he said.
Looking at the cloud, the blur is even more opaque. What does Open Source or Commercial mean in the cloud?
Gianugo put it this way: “In the cloud, we see just a continuous, uninterrupted shade of grey, which makes me believe it’s probably time to upgrade our vision gear. If we do that, we may understand that we have a challenge ahead of us, and it’s a big one: we need to define the new cornerstones of openness in the cloud. And we actually gave it a shot on this very same stage one year ago, when we came up with four interoperability elements of a cloud platform: data portability, standards, ease of migration & deployment, and developer choice.”
Finally, Gianugo talked about how Microsoft’s participation in Open Source communities is real, and he used his keynote as an opportunity to announce a few new projects and updates.
One way we interact with open source software is by building technical bridges, Gianugo said, giving an example on the virtualization front: announcing support for the Red Hat Enterprise Linux 6.0 and CentOS 6.0 guest operating systems on Windows Server Hyper-V (which follows this Linux Interoperability announcement at OSBC a few weeks ago).
On the cloud development front, we are continuing to improve support for open source languages and runtimes, Gianugo said, announcing the availability of a new version of the Windows Azure SDK for PHP, an open source project which is led by Maarten Balliauw from RealDolmen, where Microsoft is providing funding and technical assistance.
Maarten has all the details on the new features and link to the open source code of the SDK. This announcement also includes a set of cloud rules for the popular PHP_CodeSniffer tool that Microsoft has developed to facilitate the transition of existing PHP applications to Windows Azure. The new set of rules is available on Github.
An on demand Webcast of Gianugo’s keynote will soon be available, and I’ll post the link to it here.
Markus Klems (@markusklems, pictured below) posted Notes from geekSessions – Network and Infrastructure Scalability on 7/26/2011:
Here are my notes from today’s geekSessions 2.2 in San Francisco:
Allan Leinwand, CTO Zynga
Allan gave a short talk on Zynga’s infrastructure, in particular Z Cloud, an Amazon EC2-compatible private cloud. Seems like further proof that AWS is the de-facto standard, at least for compute cloud and storage cloud solutions. If you want to build a hybrid cloud solution, better make sure that it integrates with EC2…
Next up was a tech guy from BigSwitch who promoted open source network virtualization software named OpenFlow.
Mike Christian, Business Continuity Planning Yahoo!
Mike reminded us that data centers sometimes go down. When you manage 45 of them, the probability is high that one of them disconnects once a week or so, due to a multitude of potential failures: network instability, HVAC failures, UPS failures (apparently a big problem), generator failures – and more mundane issues, such as a leaky roof or a hungry squirrel.
The advice: focus on impact duration, not incident duration, i.e. being able to fail over traffic from one DC to another within minutes, use DNS-based Global Server Load Balancing, degrade service gracefully.
Gleb Budman, CEO Backblaze
Gleb showed how to build an Internet-connected backup server for $5/month. Backblaze targets consumers and small businesses, and does not enforce a storage space limit. Average users store a bit more than 50 GB. Backblaze certainly is cheaper than Amazon S3; on the other hand, it offers only RAID redundancy rather than (geo-)replication, and lacks other nice things like a Web service API, et cetera. Well, you get what you pay for. Not everybody needs a cloud.
Cliff Moon, Co-Founder Boundary
Cliff gave a very entertaining talk, complaining about old-fashioned (client, app, OS, and network) monitoring tools and evangelized the next generation of monitoring tools, like OpenTSDB, and – you guessed it – Boundary.
Barton George (@barton808, pictured below) posted OSCON: How foursquare uses MongoDB to manage its data, an interview with Harry Heymann at OSCON, on 7/26/2011:
I saw a great talk today here at OSCON Data up in Portland, Oregon. The talk was Practical Data Storage: MongoDB @ foursquare and was given by foursquare‘s head of server engineering, Harry Heymann. The talk was particularly impressive since, due to AV issues, Harry had to wing it and go slideless. (He did post his slides to twitter so folks with access could follow along).
Some of the ground Harry covers
- What is foursquare and how it feeds your data back to you
- “Software is eating the world”
- How foursquare got to MongoDB from MySQL
- Handling 3400% growth
- How they use Hadoop for offline data
- Running on Amazon EC2 and at what point does it make sense to move to their own servers
- Harry’s Slides: Practical Data Storage: MongoDB @foursquare
Pau for now…
The Gartner (formerly Burton) Catalyst Conference takes place on 7/26 through 7/29/2011 in San Diego, CA and features Track B: Cloud – Risks & Rewards:
Tutorial: Building a Viable Cloud Adoption Strategy
26 July, 2011 (08:00 AM – 09:00 AM)
As cloud computing enters the adoption era, IT organizations everywhere are being asked by company executives to take a serious look at cloud computing to lower costs and create a more agile IT environment. But cloud computing still has many risks regarding data security, vendor viability, disaster recovery, high availability, and liability. To date, cloud computing has been adopted in an ad-hoc manner, often starting as "skunk works" projects deep within the IT organization. For cloud computing to be successful, IT organizations need a well thought-out and executable cloud computing adoption strategy; one that takes into account the risks and rewards. In this presentation, Research Director Drue Reeves will explore the Gartner ITP approach to building a viable cloud computing adoption strategy. Attendees of this tutorial will learn:
- Methodology to evaluate which applications go into the cloud and which do not
- Methodology to evaluate cloud providers
- Risk mitigation and exit strategies for cloud computing
- How to create a roadmap to cloud computing adoption
Workshop: Cloud Storage (Pre-Registration is now closed please stop by the Information Booth for more information.)
26 July, 2011 (08:00 AM – 10:30 AM)
Workshop: Managing Application Performance on the Network (Pre-Registration is now closed please stop by the Information Booth for more information.)
26 July, 2011 (08:00 AM – 10:30 AM)
This intensive workshop discusses application design for optimal network performance on wired and mobile wireless networks, network tuning best practices, and current optimization devices that provide compression, data reduction, and protocol acceleration. This is a completely revised version of my WAN Optimization workshop, adding material on application design for performance and triage.
Tutorial: Building Applications for Deployment in the Cloud
26 July, 2011 (09:00 AM – 10:30 AM)
Cloud computing requires an architecture shift and new development models. Traditional application and data architecture does not enable optimal elastic scalability and maximum utilization of shared infrastructure. To build Cloud-friendly applications that will maximize benefits, development teams must apply Cloud application design patterns to build systems that exhibit parallelism, multi-tenancy, autonomy, distributed interactions, declarative definitions, separation of concerns, and federation. In this session, Gartner Managing Vice President Chris Haddad discusses how to build cloud-optimized applications and data by:
- Identifying application and data optimization candidates
- Applying cloud-friendly patterns
- Choosing new programming models
- Including new infrastructure components
Hybrid IT: Preparing for the Shift
26 July, 2011 (02:00 PM – 02:40 PM)
In 2011, IT organizations will be compelled to build a hybrid IT architecture. They must devise a hybrid IT strategy that takes advantage of the public cloud’s strengths while building internal IT services to host critical applications and data. Building a hybrid IT architecture enables IT organizations to compete with external cloud service providers (CSPs) and reap the benefits of the public cloud’s rapid provisioning, pay-as-you-go consumption model to augment internal capacity and generate IT agility. Further, implementing a hybrid IT architecture positions IT organizations as the service broker to cloud services (both internal and external). As a broker, the IT organization can ensure that digital assets are hosted in the correct location according to security, performance, availability and the organization’s risk tolerance.
Evaluating Cloud Providers: New World of Vendor Management
26 July, 2011 (02:45 PM – 03:20 PM)
As IT organizations host more applications in the cloud, it is imperative that they understand their CSP’s capabilities for determining an application’s fit in the cloud service, and at the same time mitigate cloud risks. IT organizations should carefully examine CSPs using a rigorous process and detailed criteria that match application, data and business requirements. Security, availability and application fit must be top priorities, but IT organizations must also require CSPs to answer questions regarding company finances, commitment to the market, infrastructure resiliency, disaster recovery, support, supply chain, pricing and charge models.
End-User Case Study: Managing the Buzz: The Power of a Cloud Decision Model
26 July, 2011 (03:20 PM – 03:55 PM)
Buzz about the Cloud seems to be everywhere, from InformationWeek to Business Week. The move toward a Cloud model is an attractive proposition, as it empowers the business to leverage the innovation and scale of Cloud providers, sometimes without engaging local IT folks. However, vendors are increasingly using Cloud as a marketing label for old technologies and offerings, devaluing the term and trend. Is Cloud prime for enterprise consumption? How can we be sure to choose the right Cloud solution, so as to avoid costly mistakes? Learn about one company’s journey toward the public Cloud, what decision frameworks and models it has adopted to determine when/where/why to adopt Cloud and when not to, and the challenges and lessons learned along the way.
Industry Point-of-View: Negotiating and Managing Cloud Service Providers: Cloud Law and Order Panel
26 July, 2011 (04:10 PM – 05:20 PM)
As cloud computing usage increases, IT organizations are attempting to host more critical applications in the public cloud. Although the move to cloud computing comes with issues of security, availability and management, these technical issues are only half the battle. For IT organizations hosting applications in the cloud, wading through the terms and conditions that providers offer, negotiating service levels and contending for outages and legal ramifications can be just as challenging. Moderated by Gartner, this session will explore the rising legal and governance issues surrounding cloud computing. Panel members will respond to the challenging questions of legal issues, data compliance, liability protection, SLA negotiation and bankruptcy.
Making Cloud Commitments: What It Means for Your Users, Your Business, and Your Job
26 July, 2011 (05:25 PM – 06:05 PM)
IT’s view of cloud computing has matured from the hypothetical—Why would I use cloud computing?—to the reality that life as we know it will include a mix of internal and external, private and public clouds. The technical issues are daunting, but solvable. But, as usual, it’s the soft issues that keep your boss, and you, up at night. What does cloud mean for the application portfolio? The data center? How will development and support processes change? What about governance? How do I manage my vendor and stay on top of cloud costs that I may or may not directly control? What does it mean for my role and skill set?
Building Internal Clouds: Tales from the Trenches
27 July, 2011 (08:30 AM – 09:10 AM)
Throughout history, isolated cultures have made identical technical advances. Today we’re not inventing the wheel, but many IT organizations are building internal clouds that include low-level attributes and processes that mirror those of their peers. This session takes a deep look at lessons learned from 17 end user organizations that participated in Gartner’s internal/private cloud contextual research project. While often acting independently, many organizations drew remarkably similar conclusions with regards to architecture, operations, and governance. Attend this session to learn about proven best practices and roadmaps for building internal cloud infrastructure as a service, and also to hear of the many pitfalls organizations have encountered on their internal/private cloud journey.
Moving Apps and Data to the Cloud: Migration Options
27 July, 2011 (08:30 AM – 09:10 AM)
You’ve been ordered to move some applications and data to the cloud. Now what? The easy option is to simply bring the application and its data as-is to an IaaS provider. But will that work? Cloud computing introduces new challenges of cross-functional and cross-supplier integration, and it may require application refactoring. When does the cost of migration outweigh cloud benefits?
Building the Infrastructure
27 July, 2011 (09:15 AM – 09:50 AM)
Underneath all those virtual machines, virtual networks, and virtual storage resources lies real physical hardware with its own set of requirements such as scalability, availability, security, and flexibility. In this session, you will hear from experts who will examine issues around server and storage selection and outline the pitfalls and best practices for implementing an internal cloud.
End-User Case Study: Moving to the Cloud in a Hurry: Lessons from Open Dealer Exchange’s Aggressive Migration Program
27 July, 2011 (09:15 AM – 09:50 AM)
As cloud providers multiply with a variety of services being offered As-A-Service (aaS) such as Infrastructure (IaaS), Platform (PaaS), Software (SaaS), or Applications (AaaS), we have some best practices and strategy to share in designing our migration to the cloud, vendor selection, and migration implementation. Having achieved aggressive simultaneous migrations of some of our core corporate IT services as well as many of our primary business applications, we want others to learn from our best practices and mistakes.
Hurry Up and Wait: Virtualization Orchestration – 2011 Style
27 July, 2011 (09:50 AM – 10:25 AM)
Orchestration as a concept sounds great on paper, but how real is it today? As IT organizations look to further mature their virtual infrastructures, reduce TCO, and improve business continuity, orchestration and VM mobility are important topics in 2011. However, many Gartner clients have learned that the path forward is difficult to navigate. A slowly maturing product landscape combined with complex integration requirements adds to the difficulty associated with orchestrating IT operational tasks. This session takes a close look at today’s most pressing concerns associated with virtual infrastructure orchestration and mobility. The session concludes with a discussion of current best practices, future trends, and a list of questions you should be asking of vendors.
Moving and Optimizing Applications for IaaS
27 July, 2011 (09:50 AM – 10:25 AM)
Migrating VMs and applications to the public cloud is not for the faint of heart. Yet organizations have compelling arguments and desire to move VM workloads from internal data centers to public IaaS cloud providers. A market is emerging around cloud brokers and orchestrators, but it is very immature. How can IT organizations capture the opportunity, value and advantage of the public cloud through a methodical and successful migration process?
Network Security Architecture for Private Clouds
27 July, 2011 (10:40 AM – 11:15 AM)
Private clouds change the world of the data center. No longer is it easy to identify which application is running on which server. This leads to concerns regarding how to zone the network and how to efficiently move traffic between virtual machines. Monitoring and controlling traffic take on different meanings in a virtualized environment. Enterprises will need to rethink their network security architecture in light of these changes. This talk will examine the architectural options and the associated tradeoffs. Issues this talk will address include:
- Network controls and how they can be used within a private cloud
- Network hardware changes and their effect on network security architecture
- Tradeoffs between flexibility and security
End-User Case Study: Presidio Health’s Journey to the Cloud: Migrating a Live, Compliance Sensitive, SaaS Offering
27 July, 2011 (10:40 AM – 11:15 AM)
Faced with rapid growth, Presidio needed to find inexpensive and creative ways to quickly scale its SaaS infrastructure to meet customer demand, while preserving privacy and security compliance. Constrained by time and budget, Presidio’s team had to figure out how to adapt its existing software to a scalable and reliable environment. In this session, you’ll be a passenger on a journey to the cloud that will explore: 1) the analysis undertaken by the Presidio team to determine its cloud strategy; 2) how Presidio migrated (instead of rewriting) its software and went from owned, co-located servers to 100% cloud-based; 3) the challenges, impact, lessons learned and next steps in Presidio’s cloud journey.
Securing Hypervisors and Other Building Blocks of Internal Cloud
27 July, 2011 (11:15 AM – 11:50 AM)
Perhaps “love” overstates the case, but removing a bad taste is important, too. Many security teams continue to worry about server virtualization security, from zoning, to protecting moving workloads, to managing malware and configuration of offline guests. Given the hypervisor’s role in internal clouds, such issues are no small matter. Fortunately, both the virtualization platforms and third-party ecosystems for securing virtual servers have matured mightily. In this session, Vice President Trent Henry will reveal common problems and solutions that security teams grapple with as they help build internal clouds and learn to embrace (or at least tolerate) their virtual environments.
Cloud Native or Naturalized Citizen: What’s Your Cloud Application Platform Strategy?
27 July, 2011 (11:15 AM – 11:50 AM)
Suddenly everything is a cloud application platform. And so many flavors: software or service? Instance, framework or metadata? Cloud native or naturalized citizen? Revolutionary approaches to application development are meeting resistance in enterprise development organizations. Mainstream IT requires incremental evolution of platforms to incorporate cloud characteristics while leveraging existing investments. In this session, Director Richard Watson discusses the state of the market for cloud application platforms.
End-User Case Study: Security Success in the Private Enterprise Cloud: A Customer Strategy Revealed
27 July, 2011 (11:55 AM – 12:35 PM)
In this session, attendees will learn the primary considerations and best practices one organization took before securing its most important information in the private multi-tenant cloud, including:
- Security controls implemented for internal cloud and server virtualization
- Pitfalls/tradeoffs
- What technologies we chose and why
- What worked, what didn’t and what we would do differently
- The difference between protecting data in a virtualized vs. physical environment
- Residual risks and how we kept them controlled
Moving Business To The Cloud: A Tale of Security and Governance
27 July, 2011 (11:55 AM – 12:35 PM)
Moving application workloads to the cloud is uncharted for many organizations. There are questions of security, reliability and manageability that must be addressed along with integration questions for identity, data and business processes. This session highlights one organization’s journey in migrating to private clouds.
Building the Self-Service Cloud: Provisioning Portal and Service Catalog
27 July, 2011 (02:05 PM – 02:45 PM)
Provisioning portals and service catalogs are two mandatory components to turn virtual infrastructures into internal clouds. The former empowers end users, simplifying requirements definition, accelerating workload deployments, and introducing transparency in resource consumption. The latter allows the IT organization to standardize and control the offering. In this session, Research Director Alessandro Perilli will review the alternatives for building a self-service stack, describing pros and cons of in-house development vs. COTS. The session will address the following questions:
- Where do provisioning portals and service catalogs fit in the internal cloud architecture?
- What are the key attributes for both classes of tools?
- What are the major weaknesses in today’s market offerings?
- What are the alternatives to build a self-service stack?
Close the Gap with Cloud Storage Gateways
27 July, 2011 (02:05 PM – 02:45 PM)
Organizations are often frustrated by the limitations of a cloud storage infrastructure wholly implemented outside of their datacenters. Cloud storage gateways can avoid the gaps in cloud storage capabilities by moving part of the cloud infrastructure inside a data center. Can this approach alleviate the gap in cloud performance and meet the expectations of storage architects? In this session, Director Gene Ruth examines cloud storage gateways, trends, significant vendor offerings, pitfalls and potential value in data center infrastructures to help organizations judge a cloud storage gateway’s viability for supporting enterprise storage needs.
End-User Case Study: Beyond the Pervasive Cloud – Lessons Learned and the Future
27 July, 2011 (02:50 PM – 03:25 PM)
NASA’s Jet Propulsion Laboratory is a leader in researching and prototyping with a variety of cloud computing solutions and integrating them into real operational missions and in multiple clouds. JPL has successfully used multiple public and private clouds for outreach and mission sensitive computational tasks. Several NASA missions, including Mars Exploration Rovers and Mars Science Laboratory are already soaring in the clouds and cloud computing is becoming pervasive in most computing solutions. One key to success is how these clouds are integrated into the JPL environment so that end users can self-provision the cloud resources independent of which cloud is used and how the charges take place. JPL has created and integrated a cloud brokering and charge-back mechanism that enables full usage of clouds. In this talk, Tom Soderstrom, IT Chief Technology Officer at JPL, will discuss this mechanism, real lessons learned through hands-on exploration of multiple clouds, and JPL’s continuing journey to beyond the pervasive cloud.
Integrating External, Off-site, and On-site Data: Problems and Their Solutions
27 July, 2011 (02:50 PM – 03:25 PM)
Cloud-based applications make data integration even more complicated. Data volumes are increasing, IT’s control over data and systems is diminishing, and integration points are multiplying. Every department in the business wants its own systems and is able to get them, but bringing that widely scattered data together is somehow always IT’s problem. “Lightweight” data integration is one solution.
Rethinking Capacity Management for Virtual and Cloud Infrastructures
27 July, 2011 (03:25 PM – 04:00 PM)
The traditional approach to capacity management is inadequate to meet the operational challenges posed by virtual infrastructure and internal clouds. Service-oriented infrastructure delivery is not possible without accounting for technical and non-technical constraints that impact VM mobility and placement. IT organizations need to rethink their capacity allocation strategy, focusing on application performance awareness and deep integration with the management stack. In this session Research Director Alessandro Perilli details the steps required to modernize capacity management so that it meets the needs of today’s increasingly virtualized, dynamic, and service-oriented data center.
End-User Case Study: Just Do It! Build and Run High-Volume Apps in the Cloud
27 July, 2011 (03:25 PM – 04:00 PM)
Building and maintaining a 24×7 high-volume mobile application with a small team is fraught with challenges. By leveraging the power and flexibility of the Cloud’s emerging Platform-as-a-Service (PaaS) layer, Lose It! has been able to laser-focus on building products and features without the distraction of managing servers, networks and system operations. At the same time, the company has been able to continuously make its development and deployment practices more robust. In this session, Lose It! shares its experience deploying an application on the cloud that can handle up to 13,000 transactions per minute. They’ll also cover the core cloud architecture issues to consider when deploying on a PaaS, and discuss how companies can use this flexible new cloud layer to their greatest advantage.
Panel: Tackling Chargeback – Build or Buy
27 July, 2011 (04:15 PM – 04:50 PM)
Get the perspective on chargeback from customers who have built chargeback systems and vendors that want to sell you one. This is your chance to delve deeper into the issues and the questions that must be answered before you decide whether to build or buy a chargeback system.
Email in the Cloud: How Lessons from the Wright Brothers Can Help
27 July, 2011 (04:15 PM – 04:50 PM)
Moving e-mail to the cloud is fraught with challenges and issues that can cause any enterprise to pause before relinquishing a mission-critical application to a service provider. Much like Orville and Wilbur Wright, credited with inventing the first airplane, early innovators have learned valuable lessons that can aid an enterprise seeking to take advantage of software as a service (SaaS) e-mail.
Inviting Developers to Your Cloud Party: Platform as a Service for an Internal Cloud
27 July, 2011 (04:50 PM – 05:25 PM)
Now that you’ve built an internal cloud, what will you do for your developers? Providing internal PaaS keeps the development teams’ public PaaS envy at bay and helps demonstrate the value of your internal cloud to another group of stakeholders.
End-User Case Study: Moving Valeo’s Mail, Office and Lotus Notes Applications to the Cloud
27 July, 2011 (04:50 PM – 05:25 PM)
In 2009, Valeo decided to move their Mail and Office applications to the cloud as they were searching for an innovative way to significantly reduce office infrastructure costs while simultaneously improving user collaboration and productivity. Google Apps was selected as the new platform for mail and office applications. As a next step, Valeo decided to replace all of their Lotus Notes applications with a cloud application platform, resulting in further cost reduction, rationalization of duplicate functionality, and better collaboration within the group and with parties in the supply chain. The platform chosen was the Cordys Process Factory, a high-productivity application platform as a service offering. Valeo went live recently with the first set of applications. In order to connect the cloud applications with on-premise systems like SAP, Capgemini created a solution for identity and access management as well as master data management services; both Google Apps and the Cordys Process Factory connect to the on-premise systems via this layer. Cordys is interwoven directly with Google Apps as cloud-to-cloud integration. Google Apps is a public cloud offering run by Google; Cordys Process Factory is a public cloud offering run by Cordys on Rackspace infrastructure.
End-User Case Study: Platform as a Service at Northern Trust
27 July, 2011 (05:30 PM – 06:10 PM)
Platform as a Service (PaaS) can be viewed as a “partly cloudy” in-house approach for providing nearly on-demand infrastructure set-up in support of Application development and delivery. It leverages a set of middleware, scripting, and enterprise management technologies to configure made-to-order environments quickly and economically. Northern Trust has employed this approach for a year now. It has become standard practice at the company. Learn why we did it, how we did it, and what comes next.
To Serve and Mediate: Securing Information and Access in Cloud Applications
27 July, 2011 (05:30 PM – 06:10 PM)
Security for cloud-enabled applications has many moving parts. Handling identity and authentication is important, but far from the only concern. How we have to implement controls such as authorization, encryption, data masking and auditing may drastically change when moving applications from inside the enterprise into the cloud. In this session, Director Ramon Krikken, and Vice President and Distinguished Analyst Bob Blakley cover how to apply service-oriented design principles in developing a cloud-ready application security architecture.
Hybrid Clouds: Extending Your Reach
28 July, 2011 (02:05 PM – 02:45 PM)
Integrating private and public clouds can facilitate flexibility, capacity and capabilities, but integration has specific needs at each infrastructure level. In addition, integrating the associated internal and external infrastructures requires nontechnical governance to ensure efficient management of resources. Furthermore, hybrid clouds raise the question: Is the hybrid cloud the endgame or simply a stepping stone to outsourcing IT into external public clouds?
Hybrid Cloud Identity Federation: The One Ring to Bind Them All
28 July, 2011 (02:50 PM – 03:25 PM)
Hybrid clouds require identity federation in order to provide single sign-on and role-based access controls between internal and external services. Cloud identity federation solutions have begun to appear on the market from vendors such as Ping Identity and Layer 7. But services which provide identity federation are only the starting point.
End-User Case Study: Identity Federation for Lockheed Martin’s Supply Chain
28 July, 2011 (03:25 PM – 04:00 PM)
Lockheed Martin (LM) relies on a global, multi-tiered supply chain to deliver on its programs and contracts. LM realized that a traditional, on-premises identity management solution lacked the scalability, security, and affordability necessary to manage its supply chain members’ identities and control access to business-critical applications and data. As a result, LM and Exostar teamed up to create a new breed of hybrid cloud – the Community Cloud, based on an Identity Hub. The Community Cloud’s connect-once environment includes multiple identity providers, a federation hub, user/organization provisioning, and delegated access administration. Learn more about the requirements, architecture, implementation, and benefits of the Identity Hub, which allows LM and Exostar to manage federated access for supply chain management and business operations across approximately 48,000 companies and 96,000 users.
Building a Hybrid Cloud
28 July, 2011 (04:15 PM – 04:50 PM)
Connecting internal and external clouds may seem a simple task, but issues abound. Vendor products, such as Citrix’s OpenCloud Bridge and VMware’s vCloud Connector, promise to ease the integration of internal and external clouds. However, these technologies are in their infancy and customers should proceed with caution. In this session, Research VP and Distinguished Analyst Drue Reeves will:
- Describe and define the hybrid cloud connection technologies
- Illustrate how cloud interconnect solutions enable hybrid cloud computing
- Show the benefits and value cloud connection technologies promise
- Point out the weaknesses and issues with the hybrid model in addition to the tools
- Offer guidance on how to best proceed with building hybrid clouds
End-User Case Study: Hybrid Cloud Issues: The Realities of Operations and Licensing
28 July, 2011 (04:50 PM – 05:25 PM)
Building a hybrid cloud solution was a learning experience for Pacific Life. The requirements to meet service levels at optimal cost were affected by operational factors, recoverability, and licensing. Effectively licensing Microsoft products and negotiating a volume license agreement is more of an art than a science and can impact the best-laid IT solution plans. The full force of client, application, and server virtualization, Microsoft’s newer cloud offerings, and its ongoing adjustments to its licensing strategy have only further complicated matters. This session examines Pacific Life’s expedition through designing a hybrid cloud solution and the licensing process, and offers practical advice for those pathfinders wishing to explore hybrid clouds and optimize their license agreement.
Keeping the Good In and the Bad Out: Security Concerns in the Hybrid Cloud
28 July, 2011 (05:30 PM – 06:10 PM)
Hybrid clouds aren’t just connected, they’re conjoined. In some sense, your IT infrastructure becomes one with a public multitenant cloud, and the meter’s always ticking. Multiple interactions require drilling holes through security boundaries to interconnect zones of trust, break other security barriers, and allow shared functionality and data. Outsiders become insiders, and security management must be extended. Not only is there a threat of malicious attack on your data, but denial of service and other risks are increased or at least change as the internal cloud becomes integrated with public multitenant systems.
Cloud Vision: Building an Internal Cloud: Customer Realities and Findings
29 July, 2011 (08:00 AM – 08:45 AM)
Discovering the underlying details of IT implementations and trends is not an easy task. Many enterprise IT organizations are involved in deploying internal clouds, from design through operation of an existing cloud. Gartner’s data center strategies team within the IT professionals research group has embarked on a contextual research (CR) project, performing in-depth interviews of 17 IT organizations from various industry verticals. The interviewees provided information on the struggles and successes they experienced in building their internal clouds. In this round table, the interviewers will discuss many of the insights gained during the project, point out interesting trends, including stumbling blocks and how IT organizations have overcome them, and describe the successful solutions discovered. Research VP Chris Wolf will lead the session.
Cloud Vision: Managing Risks in the Cloud (end-user customers only)
29 July, 2011 (09:45 AM – 10:30 AM)
Assessing and mitigating the risks associated with hosting applications in the cloud is paramount to a cloud strategy. These risks can be difficult to manage, but effective organizations employ solid risk management principles (i.e., accept, avoid, mitigate, and transfer) to take advantage of cloud computing’s benefits and create a competitive advantage while reducing their risk profile. In this round table, Vice President and Distinguished Analysts Drue Reeves and Bob Blakley will take questions from the audience and facilitate a discussion of the risks associated with hosting applications in the cloud, along with strategies that can help organizations fully utilize cloud computing.
Todd Hoff described Making Hadoop 1000x Faster for Graph Problems in a 7/27/2011 post to his High Scalability blog:
Dr. Daniel Abadi, [pictured at right,] author of the DBMS Musings blog and cofounder of Hadapt, which offers a product improving Hadoop performance by 50x on relational data, is now taking his talents to graph data in “Hadoop’s tremendous inefficiency on graph data management (and how to avoid it),” a post that shares the secrets of getting Hadoop to perform 1000x better on graph data.
- Analysing graph data is at the heart of important data mining problems.
- Hadoop is the tool of choice for many of these problems.
- Hadoop style MapReduce works best on KeyValue processing, not graph processing, and can be well over a factor of 1000 less efficient than it needs to be.
- Hadoop inefficiency has consequences in the real world. Inefficiency on graph data problems like improving power utilization, minimizing carbon emissions, and improving product designs leads to a lot of value being left on the table in the form of negative environmental consequences, increased server costs, increased data center space, and increased energy costs.
- 10x improvement by using a clustering algorithm to graph partition data across nodes in the Hadoop cluster. By default in Hadoop data is distributed randomly around a cluster, which means data that’s close together in the graph can be very far apart on disk. This is very slow for common operations like sub-graph pattern matching, which prefers neighbors to be stored on the same machine.
- 10x improvement by replicating data on the edges of partitions so that vertexes are stored on the same physical machine as their neighbors. By default Hadoop replicates data three times and treats all data equally, which is inefficient.
- 10x improvement by replacing the physical storage system with graph-optimized storage. HDFS, which is a distributed file system, and HBase, which is an unstructured data storage system, are not optimal data stores for graph data.
Voila! That’s a 10x * 10x * 10x = 1000x performance improvement on graph problems using techniques that make a lot of sense. What may be less obvious is the whole idea of keeping the Hadoop shell and making the component parts more efficient for graph problems. Hadoop stays Hadoop externally, but internally has graph super powers. These are strategies you can use.
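To make the first two of those 10x ideas concrete, here is a minimal sketch of neighborhood-aware placement: vertices are assigned to partitions by a placeholder clustering function instead of Hadoop’s default random placement, and boundary vertices are replicated onto their neighbors’ partitions. The partitioning function, toy edge list, and NUM_PARTITIONS value are illustrative assumptions, not Hadapt’s or Hadoop’s actual implementation.

```python
import zlib
from collections import defaultdict

NUM_PARTITIONS = 4  # illustrative; a real cluster would use many more

# Toy undirected edge list
edges = [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d"), ("d", "e"), ("e", "f")]

adjacency = defaultdict(set)
for src, dst in edges:
    adjacency[src].add(dst)
    adjacency[dst].add(src)

def partition_of(vertex):
    # Placeholder "clustering": a stable hash of the vertex id.
    # A real system would run a min-cut / label-propagation pass so that
    # densely connected vertices land in the same partition.
    return zlib.crc32(vertex.encode()) % NUM_PARTITIONS

# Primary placement: each vertex and its adjacency list live on one partition.
primary = defaultdict(dict)
for vertex, neighbors in adjacency.items():
    primary[partition_of(vertex)][vertex] = neighbors

# Boundary replication: copy a vertex to every partition holding one of its
# neighbors, so a 1-hop sub-graph pattern match never leaves the machine.
replicas = defaultdict(dict)
for vertex, neighbors in adjacency.items():
    home = partition_of(vertex)
    for neighbor in neighbors:
        p = partition_of(neighbor)
        if p != home:
            replicas[p][vertex] = neighbors

for p in range(NUM_PARTITIONS):
    print(p, "primary:", sorted(primary[p]), "replicas:", sorted(replicas[p]))
```

With neighbors co-located or replicated, a one-hop sub-graph pattern match can be answered locally instead of shuffling data across the cluster, which is where the claimed order-of-magnitude wins come from.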
What I found most intriguing is thinking about the larger consequences of Hadoop being inefficient. There’s more in play than I had previously considered. From the most obvious angle, money, we are used to thinking this way about mass-produced items. If a widget can be cost-reduced by 10 cents and millions of them are made, we are talking real money. If Hadoop is going to be used for the majority of data mining problems, then making it more efficient adds up to real effects. Going to the next level, the more efficient Hadoop becomes, the quicker important problems facing the world will be solved. Interesting.
- Running Large Graph Algorithms – Evaluation Of Current State-Of-The-Art And Lessons Learned
- Scalable SPARQL Querying of Large RDF Graphs by Jiewen Huang, Daniel J. Abadi, Kun Ren.
- Efficient Processing of Data Warehousing Queries in a Split Execution Environment by Kamil Bajda-Pawlikowski, Daniel J. Abadi, Avi Silberschatz, Erik Paulson
- Golden Orb – open-source implementation of Pregel.
- GraphLab – A New Parallel Framework for Machine Learning
- Apples and oranges: a comparison of RDF benchmarks and real RDF datasets by Songyun Duan, Anastasios Kementsietsidis, Kavitha Srinivas, Octavian Udrea
- Clause-Iteration with MapReduce to Scalably Query Data Graphs in the SHARD Graph-Store by Kurt Rohloff, Richard E. Schantz
Martin Tantow announced Former NASA CTO Launches Start-up For Web-Scale Private Clouds in a 7/27/2011 post to the CloudTimes blog:
A team of engineers and entrepreneurs led by former NASA CTO Chris C. Kemp, launched today Nebula and announced plans for a turnkey OpenStack hardware appliance that allows businesses to easily, securely and inexpensively deploy large private cloud computing infrastructures from thousands of computers with minimal effort.
Chris Kemp, CEO of Nebula said “Until today, this computing power has only been accessible to organizations like NASA and a small number of elite Silicon Valley companies. We intend to bring it to the rest of the world.”
Big data has been an escalating concern for companies and its growth rate is far exceeding processing and storage capacities. This rapid growth has prompted a host of new innovations in the field of big data analytics.
Although cloud computing is seen as the infrastructure solution for big data analytics, its barriers to adoption have been high. Nebula lowers these barriers by delivering a full-service turnkey appliance that lets companies quickly build their own private clouds based on OpenStack.
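To give a feel for what “based on OpenStack” means in practice, the sketch below drives a generic OpenStack cloud over HTTP: authenticate against a Keystone v2.0 endpoint, then boot a server through the Compute (Nova) API. The endpoints, credentials, and image/flavor IDs are placeholders, and nothing here is specific to Nebula’s appliance; it only assumes the appliance exposes the standard OpenStack APIs.

```python
import json
from urllib.request import Request, urlopen

KEYSTONE = "http://cloud.example.com:5000/v2.0/tokens"   # placeholder auth endpoint
COMPUTE = "http://cloud.example.com:8774/v2/TENANT_ID"   # placeholder compute endpoint

def post_json(url, body, token=None):
    """POST a JSON body and decode the JSON response."""
    headers = {"Content-Type": "application/json"}
    if token:
        headers["X-Auth-Token"] = token
    req = Request(url, json.dumps(body).encode(), headers)
    return json.load(urlopen(req))

# 1. Authenticate against Keystone and pull out a scoped token.
auth = post_json(KEYSTONE, {
    "auth": {
        "tenantName": "demo",
        "passwordCredentials": {"username": "demo", "password": "secret"},
    }
})
token = auth["access"]["token"]["id"]

# 2. Boot a server through the Compute (Nova) API.
server = post_json(COMPUTE + "/servers", {
    "server": {
        "name": "web-1",
        "imageRef": "IMAGE_UUID",  # placeholder image
        "flavorRef": "1",          # placeholder flavor
    }
}, token)
print(server["server"]["id"])
```

The point of the turnkey appliance is that this same, widely documented API surface works against hardware racked inside the enterprise rather than a public provider.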
Nebula is backed by high profile investors, including Google’s first investors Andy Bechtolsheim, co-founder of Sun Microsystems, David Cheriton and Ram Shriram.
Andy Bechtolsheim said “Nebula embracing OpenStack today is similar to Sun embracing Berkeley UNIX in the 1980s. Proprietary systems did not have a chance against open platforms. I see Nebula as the company that will bring OpenStack to the private enterprise cloud.”
Nebula will support Facebook’s Open Compute platform in addition to standard commodity servers and will enable companies to deploy highly efficient and inexpensive servers with a simplicity that will lower the adoption barrier to private cloud computing.
David Strom reported CA Introduces Slew of New Cloud Management Tools in a 7/27/2011 post to the ReadWriteCloud blog:
- CA Business Service Insight v8.0,
- CA Automation Suite for Clouds v1.0,
- CA Automation Suite for Datacenters v12.5,
- CA Virtual Placement Manager v1.0,
- CA AppLogic v3.0, and
- CA NetQoS Unified Communications Monitor v3.2
Calling the series "Cloud Choice," CA is trying to enable cloud usage and help both enterprise and service provider customers quickly deploy business services in the cloud. A link to the complete announcement package can be found here.
Using the new Business Service Insight manager, customers can benchmark and compare internal and external services and their performance by using the Service Measurement Index hosted at CloudCommons.com. In addition, Business Service Insight provides the ability to research different service options using social interaction. Valuable service opinion and comparison data can be collected by creating questionnaires to poll peers in the industry.
"CA Business Service Insight provides us with the information we need as we make our choices of what should or could be delivered as a cloud application and what should stay in-house. Much of this decision is based on whether service levels are meeting business expectations," says Jose Ferraz, managing partner, VisionWay, Business Management Services and an early user of the tool.
The data center automation suite calculates capacity requirements and can move apps and workloads into and out of various public cloud providers, including AWS, Rackspace and Terremark. The suite has new versions of CA Server Automation, Virtual Automation and other automation services, including a new automation suite for Cisco UCS too. If you are looking to switch cloud providers, this might be of interest to you.
Also leveraging this automation suite is the first version of Automation Suite for Clouds, which will have a pretest framework including predesigned automation workloads to make it easier to deliver cloud services.
The new release of AppLogic allows customers to now run applications and services using both VMware ESX and Xen hypervisors on the same grid, providing a new level of freedom and flexibility for delivering cloud services. In addition, customers can import workloads using the Open Virtualization Format (OVF), making smart use of existing virtualization investments.
Another new product, Virtual Placement Manager, will help with capacity management and optimal workload assignment. It incorporates technology from Hyperformix, which CA acquired last year. It helps address problems of VM stall and sprawl by leveraging patented analytics to safely increase overall host-to-VM ratios and to properly size and place VMs. This looks like the hidden gem in what was announced today.
- A company needs to understand the best way to "source" the services that it offers. For instance, is it better to use an internal email service based on Exchange or to source it through the cloud (e.g., Gmail)? Business Service Insight helps companies understand the performance of these services so that they can optimize their service portfolio over time.
- Manage contracts with outsourcers. Once you have outsourced a service to an external company (cloud based or traditional), you need to ensure they are performing up to the level they contracted. Business Service Insight enables companies to track that in an automated fashion.
- Manage internal SLAs in addition to services. Make commitments to departments, subsidiaries, etc. and track them.
- In cases where a company sells services externally (service provider model, for example), performance can be tracked with their customers as well.
Finding the context for these announcements on the sprawling CA Web site is problematic, but you can try to start here, the overall cloud computing landing page. Of course, no actual pricing was announced for any of these tools, so you will be spending a lot of time on the phone trying to track this down. But these are not cheap tools: the average pricing for CA Business Service Insight has been around $200K. For CA AppLogic, typical pricing starts at $100,000 and grows as the scale of the cloud deployment grows.
Jay Fry (@jayfry3) analyzed CA Technologies’ new cloud computing announcements in his Why it pays to be early — especially with this much cloud choice post of 7/27/2011:
Take my flight today, for example. I’ve done this flying-to-New York thing a few times. I’ve learned the hard way that it’s a good idea to reserve your seat early. I know when to head toward the line at the gate to minimize time spent standing around and maximize the chance that there’s still overhead bin space onboard. And, if it looks like this particular flight is headed for delay or cancellation, I already have a pretty good view of what my options are likely to be. I might even already be dialing/browsing customer service.
I think the same applies for cloud computing. To really have a good view of what you need to know, the folks who have been through this a couple times certainly have a head start. Being early to the party lets you assess what’s happening from a position of experience – peppered with a humbling but healthy dose of reality along the way.
I think today’s big cloud announcements from CA Technologies help drive home this point. (As you might have guessed, they are part of what I’ve been working on recently: 10 new/enhanced offerings for enterprises, 4 for service providers, plus a market accelerator program.) The announcements represent quite a bit of early market experience wrapped up for the benefit of very specific customer sets.
What we (and our customers) learned in the past 18 months
Last year, IT was asking some very basic questions about cloud computing, the core of which boiled down to “So, what is cloud computing, anyway?” CA Technologies kicked off 2010 with an aggressive cloud acquisition spree that surprised more than a few folks. We brought aboard a series of key technologies, some very smart folks, and a lot of on-the-ground services experience. Customers and industry-watchers showed interest (and skepticism, as you’d expect) as we brought the pieces together.
If you look back at our CA World announcements last year, you’ll see that we described the way cloud was changing IT and the solutions that we thought were needed. We talked about the IT role morphing into more of a supply chain orchestration job, focused on delivering IT service. We saw a need to understand those services, figure out ways to compare them, manage them, and control them.
But the market hasn’t been standing still. In fact, I think most would agree with me that the changes in IT as a result of cloud have accelerated. Our view that the IT function is shifting seems to be supported by some proof points (especially if you read some of the survey data I’ve seen in the past year). But that doesn’t mean we got everything perfect, right out of the gate. By being in the game early, we’re in a prime seat to watch the evolution. And react.
Evolving and targeting to match how enterprises and service providers adopt cloud
It’s now a little more than a year later, and we’re evolving our cloud portfolio. Today’s announcements are a set of next steps, and they reflect some pragmatic reactions to what we’ve seen. We’re enhancing the offerings we already have. We’ve built some new ones. And all of these are driven by what customers are saying and doing.
Here are some of the highlights, as I see them:
- More than ever before, cloud means choice. Looking at cloud forces lots of internal and external decisions. As I’ve noted previously, these are decisions about technology, about organizational structure, about IT ownership and policy.
- With all of these options, there is no “one size fits all” for cloud. Instead, you have to make your own, very specific choices. And you want to have a portfolio of options that can help you regardless of which choices you need to make for your business. We, as a partner in that business, need to enable you to have your cloud, your way.
- A broad portfolio to work from is a plus. The work to enable customers to use and provide cloud computing means a bunch of topic areas need to be covered. Management and security really end up jumping to the top of the list. (The CA portfolio is well-tuned to cover that emphasis, I might add.)
We see a lifecycle of decisions, and a set of capabilities at steps along the way. We think customers need to plan, design, deliver, secure, and assure their cloud efforts. And then constantly optimize these decisions for what’s best for their business.
- Enterprises and service providers have very different needs and will make different choices. Enterprise and service providers are doing an interesting dance. Each sees benefit – and profit – in cloud computing, and is adopting it pragmatically. Enterprises are trying to evolve what they have invested in already, while maintaining the control they require and processes they’ve built up. That lets them continue with the heterogeneous components they have. That doesn’t lock them into a proprietary (and probably quite costly) “cloud stack.” Unless they want to be. In some cases, that’s a useful trade-off. But it still needs to be managed and secured.
- Service providers are, in many cases, leading the charge to cloud, looking for ways to quickly deliver cloud services but to do so in a way that is going to mean differentiation and revenues, while building margin. Those that don’t won’t be around long. They are feeling pressure from big guys like Amazon and Rackspace. They’re trying to find the right niche. They’re trying to balance the right infrastructure with the financial structure to result in a winning (and sustainable) formula. As a result of these differences, you’ll see sets of solutions from CA Technologies that address these very specific needs, but help make the connection between the two – the world of hybrid clouds – possible and appealing.
- Finally, if you add new perspectives, experience with customers, and resources to some pretty innovative technology, you can move the needle. Several of today’s announcements show the combined effort of the vision of entrepreneurs that joined CA Technologies through the cloud acquisitions and the organic development efforts since then. A lot of these folks have been working on cloud since long before the term “cloud” existed.
Several of those are near and dear to my heart, and I’ll highlight those here:
- CA Business Service Insight 8.0. We’re calling it 8.0, because the previous 7 versions were called Oblicore Guarantee and were focused on service level management. However, the work done on CA Business Service Insight since last year opens up new territory. The latest release gives enterprises information about their existing services and the ability to compare and contrast what they are doing internally with services they could choose externally. All this, while also managing the service levels from what they acquire from outside. In addition, CA Business Service Insight’s connection to the Service Measurement Index and Cloud Commons will become more and more intriguing as it matures.
- CA AppLogic 3.0. The ability to work at an application level rather than dwelling on low-level hypervisor questions takes a huge step up with the addition of VMware support in this release. Now, you can think in terms of virtual business services instead of ESX or Xen. That’s an important extension to the vision that the 3Tera team brought to CA Technologies, especially if you’re an enterprise.
Service providers are probably still interested in the financial equation of using Xen, but now have new options in working with enterprises who’ve made big VMware investments. And, frankly, that’s everyone at this point. The new languages, VLAN tagging, and role-based access features are probably even more interesting to service providers and how they make money from a cloud business using CA AppLogic as their cloud platform. The service provider ecosystem that’s building around CA AppLogic should get a mention here, too, but that’s worthy of its own post. I’m personally pleased to see Cassatt capabilities woven in here, too (check out the Global Fabric Controller to see my previous company’s influence).
Learning pragmatically. There are a lot of moving parts here, mostly driven by the huge number of options that the cloud now presents. In my opinion, CA Technologies made a pretty prescient decision to jump into this market with both feet (and wallet), and to do so early. Much of what you’re seeing come to market here has benefited from early moves by both the innovators CA acquired — and by CA itself.
The resulting time and experience have infused our offerings (and those of us working on them) with what I’d call a healthy amount of pragmatism. This pragmatism is something that I think will serve CA Technologies, its ecosystem partners, and our collective customers well as cloud computing continues to evolve.
And, of course, it’s good to see all those hours I’ve spent waiting for flights are paying off in interesting ways.
Jay is marketing and strategy VP for CA’s cloud business.
K. Scott Morrison posted Introducing Layer 7’s OAuth Toolkit on 7/26/2011:
“If your tools don’t work for you, get rid of them,” is a simple creed I learned from my father in the workshop. Over the years, I have found it is just as relevant when applied to software, where virtual tools abound, but with often-dubious value.
OAuth is an emerging technology that has lately been in need of useful tools, and to fill this gap we are introducing an OAuth toolkit into Layer 7’s SecureSpan and CloudSpan Gateways. OAuth isn’t exactly new to Layer 7; we have actually done a number of OAuth implementations with our customers over the last two years. But what we’ve discovered is that there is a lot of incompatibility between different OAuth implementations, and this is discouraging many organizations from making better use of this technology. Our goal with the toolkit was to provide a collection of intelligently parameterized components that developers can mix and match to reduce the friction between different implementations. And thanks to the generalization that characterizes the emerging OAuth 2.0 specification, this toolkit helps to extend OAuth into interesting new use cases beyond the basic three-legged scenario of version one.
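For readers new to the generalized token-server model OAuth 2.0 introduces, the exchange reduces to a simple HTTPS call. The sketch below shows a client-credentials grant, one of the use cases beyond version one’s three-legged flow, against a hypothetical token endpoint; the URL, client ID, secret, and scope are placeholders, not Layer 7’s actual API.

```python
import json
from urllib.parse import urlencode
from urllib.request import Request, urlopen

# Hypothetical token endpoint; a gateway policy could expose one like it.
TOKEN_URL = "https://gateway.example.com/auth/oauth/v2/token"

def get_access_token(client_id, client_secret, scope):
    """OAuth 2.0 client-credentials grant: trade client credentials for a bearer token."""
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": scope,
    }).encode()
    req = Request(TOKEN_URL, body,
                  {"Content-Type": "application/x-www-form-urlencoded"})
    token = json.load(urlopen(req))
    return token["access_token"]

# The returned token is then presented on each API call:
#   Authorization: Bearer <access_token>
```

The classic three-legged (authorization code) flow adds a browser redirect and a one-time code exchange, but it ends at the same kind of token endpoint, with the resulting bearer token presented on every subsequent API call.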
I have to admit that I was suspicious of OAuth when it first appeared a few years ago. So much effort had gone into the formal specification of SAML, from core definition to interop profiles, that I didn’t see the need for OAuth’s one use case solution and had little faith in the rigor of such a grass roots approach. But in time, OAuth won me over; it fits well with the browser-centric, simple-is-better approach of the modern Internet. The mapping to more generalized, token server-style interactions in the new version of the spec appeals to the architect in me, and the opening up of the security token payload indicates a desire to play well with existing infrastructure, which is a basic enterprise requirement.
However, adding extensibility to OAuth will also bring about this technology’s greatest challenge. The 1.0a specification benefitted enormously from laser focus on a use case so narrow that it was a wonder it gained the mindshare that it did. OAuth in 2011 has no such advantage—generalization being great for architects but hell for standards committees and vendors. It will be interesting to see how well the OAuth community satisfies the oftentimes-conflicting agendas of simple, standard, and interoperable.
Here at Layer 7 we predict a bright future for OAuth. We also think it’s very useful today, which is why we introduced a toolkit instead of a one-size-fits-one approach. We see our customers using OAuth in concert with their existing investments in Identity and Access Management (IAM) products, such as IBM’s Tivoli Access Manager (TAM) or Microsoft’s Active Directory (AD). We see it being used to transport SAML tokens that require sophisticated interpretation to render entitlement decisions. Taking a cue from OAuth itself, the point of our toolkit is to simplify both implementation and integration. And the toolkit’s parameterization helps to insulate the application from specification change.
I’ll be at the Gartner/Burton Catalyst show this week in San Diego where we’ll be demonstrating the toolkit. I hope you can drop by and talk about how it might help you.