A compendium of Windows Azure, Service Bus, EAI & EDI, Access Control, Connect, SQL Azure Database, and other cloud-computing articles.
Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:
- Windows Azure Blob, Drive, Table, Queue, Hadoop and Media Services
- Windows Azure SQL Database, Federations and Reporting, Mobile Services
- Marketplace DataMarket, Cloud Numerics, Big Data and OData
- Windows Azure Service Bus, Access Control, Caching, Active Directory, and Workflow
- Windows Azure Virtual Machines, Virtual Networks, Web Sites, Connect, RDP and CDN
- Live Windows Azure Apps, APIs, Tools and Test Harnesses
- Visual Studio LightSwitch and Entity Framework v4+
- Windows Azure Infrastructure and DevOps
- Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds
- Cloud Security and Governance
- Cloud Computing Events
- Other Cloud Computing Platforms and Services
No significant articles today
WCF Data Services 5.1.0-RC2 is released
Highlights of this RC release include the following:
- The new “lightweight” JSON format is now the default. This means that OData payloads will look more like payloads in WebAPI (or Windows Azure Mobile Services)—minus the previous bunch of metadata. [Emphasis added.]
- In-the-box support for $format and $callback. This is great news for JSON-P folks.
(In Getting JSON Out of WCF Data Services, I discussed how to enable JSON-P because WCF Data Services didn’t support the $format query option out of the box.)
- Both a NuGet package and a standalone installer.
For the full list of new functionality in this preview release, see Mark’s post WCF Data Service 5.1.0-rc2 Released.
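To make the new query options concrete, here is a minimal sketch of how a client might compose an OData request URL using $format and $callback. The service root, entity set, and callback names below are hypothetical, not from the actual release:

```typescript
// Hypothetical sketch: building an OData query URL with the $format and
// $callback system query options now supported out of the box.

function buildODataUrl(
  serviceRoot: string,
  entitySet: string,
  options: { [key: string]: string }
): string {
  // Keep the option names verbatim ($-prefixed) and encode only values.
  const query = Object.keys(options)
    .map((k) => `${k}=${encodeURIComponent(options[k])}`)
    .join("&");
  return `${serviceRoot}/${entitySet}?${query}`;
}

// $format=json asks for the new lightweight JSON payload;
// $callback wraps the response for JSON-P consumers.
const url = buildODataUrl("https://example.org/odata.svc", "Products", {
  "$format": "json",
  "$callback": "handleProducts",
});
console.log(url);
// → https://example.org/odata.svc/Products?$format=json&$callback=handleProducts
```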
New public AdventureWorks sample OData feed
Since nearly the beginning, the OData team has hosted a sample, read-only Northwind feed—exposing the venerable Northwind database to the Web as OData. Now there is a new read-only sample feed: Derrick VanArnam has published a portion of the (massive) AdventureWorks schema as OData. You can read more about how this sample project was planned and executed on Derrick’s Customer Feedback SQL Server Samples blog, starting with the post AdventureWorks2012 OData service sample – Introduction.
No significant articles today
A Microsoft Channel 9 video of Hejlsberg discussing TypeScript is available on Microsoft’s Web site:
Soma Somasegar, Corporate Vice President of Microsoft’s Developer Division, outlined the problem space that Microsoft believes it can solve with TypeScript in an October 1 blog post:
Microsoft’s official site for TypeScript is http://www.typescriptlang.org/.
What are your initial thoughts on what the Softies are doing, any of you developer-readers out there?
Update: In spite of the Somasegar quote above regarding the Windows Store — if you still were unsure whether you can build Windows Store apps for Windows 8 and Windows RT using TypeScript, the answer is yes.
TypeScript therefore lets you use features including type annotations, classes, modules and interfaces to make large projects more robust and maintainable.
There is tooling for Visual Studio including a language service to provide code hinting, syntax highlighting and the like.
More information from Microsoft’s S Somasegar here.
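As a quick illustration of the features mentioned above (the names here are invented for this sketch, not taken from Microsoft’s samples), type annotations, an interface, and a class look like this in TypeScript:

```typescript
// Illustrative sketch of TypeScript type annotations, an interface,
// and a class working together.

interface Point {
  x: number;
  y: number;
}

class Segment {
  constructor(private start: Point, private end: Point) {}

  // The return-type annotation lets the tooling flag mistakes at compile time.
  length(): number {
    const dx = this.end.x - this.start.x;
    const dy = this.end.y - this.start.y;
    return Math.sqrt(dx * dx + dy * dy);
  }
}

const s = new Segment({ x: 0, y: 0 }, { x: 3, y: 4 });
console.log(s.length()); // → 5
```

Because the annotations are erased at compile time, the emitted JavaScript runs anywhere; the types exist only to make large codebases easier to check and refactor.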
At Microsoft Open Technologies, Inc. we are thrilled that the discussion is now open with the community on the language specification: you can play (or even better start developing with TypeScript) with the bits, read the specification and provide your feedback on the discussion forum. We also wanted to make it possible for developers to use their favorite editor to write TypeScript code, in addition to the TypeScript online playground and the Visual Studio plugin.
Below you will find sample syntax files for Sublime Text, Vi and Emacs that will add syntax highlighting to the files with a .ts extension. We want to hear from you on where you think we should post these files for you to be able to optimize them and help us make your TypeScript programming an even greater experience, so please comment on this post or send us a message.
Senior Technical Evangelist
Microsoft Open Technologies, Inc.
Xignite announced clearTREND Delivers Professional Analytics tools to Investors’ Laptops and Windows 8 Tablets in a 10/1/2012 press release:
Xignite, Inc., the leading market data cloud solutions provider, and Appleton Group Wealth Management, LLC, today announced that they have partnered with each other, Skyline Technologies, and Microsoft (MSFT) Windows Azure/Windows 8 to develop clearTREND™, a new mobile investment research app that helps manage investment portfolios. clearTREND is the first investment trend calculator publicly available. It solves a fundamental problem in investing by accurately calculating past price trends for any security, then generating real-time buy and sell recommendations.
“Xignite is a leader in delivering market data to fast-running apps by harnessing the power of the cloud,” said Stephane Dubois, CEO and founder of Xignite. “We are proud to be in the company of industry leaders such as Microsoft, Skyline Technologies, and Appleton Wealth Management.”
clearTREND is web‐based and built for use on any computer (both Windows and Apple‐based), as well as Windows 8 tablets. With 10 patents pending, clearTREND is an innovation in its field because it uses crowdsourcing to analyze changing price trends, as well as generate real‐time buy and sell recommendations at optimal points in time. clearTREND is powered by XigniteGlobalHistorical and XigniteIndices services. It also uses Microsoft’s cloud‐based computing service Windows Azure. With an analytical technique called ‘Simple Moving Average Crossover,’ the app measures historic price trends for over 60,000 investable securities. clearTREND leverages optimization technology to continuously hunt for new price trends that may be more advantageous for the user to follow.
“clearTREND represents a significant leap forward in the field of investment and economic research,” said Mark C. Scheffler, Founder and Sr. Portfolio Manager for Appleton Group. “This app is a ‘must‐have’ for any individual investor or professional advisor, but it’s especially useful for 401(k) participants working to make their retirement plans more profitable, less risky and more predictable.”
For information on features, benefits, pricing and availability, please go to: http://www.cleartrendresearch.com/?page_id=2072.
Xignite is the leading provider of market data cloud solutions. The Xignite Market Data Cloud fulfills more than 5 billion requests per month and offers more than 50 financial web services APIs providing real-time, historical, and reference data across all asset classes. Xignite APIs power mobile financial applications, websites, and front-, middle- and back-office functions for more than 1000 clients worldwide, including Wells Fargo, GE, Computershare, BNY Mellon, Natixis, Forbes.com, SeekingAlpha, ExxonMobil, Starbucks, and Barrick Gold. The company’s award-winning XigniteOnDemand market data cloud platform also powers data distribution solutions for exchanges and data vendors, as well as Enterprise Data Distribution (EDD) solutions for financial institutions. Companies using XigniteOnDemand for market data distribution include the CME Group, NASDAQ OMX, NYSE Euronext and Direct Edge.
About Appleton Wealth Management
Appleton Group Wealth Management LLC is an independent Registered Investment Advisor (RIA), offering objective and unbiased wealth management services to all investment management clients. The firm is compensated solely for the advisory services it provides to its clients, and is in no way compensated by commissions of any kind. As a small privately held firm, Appleton Group is solely focused on providing investment advisory and management services and helping the investment community build and manage more consistent and profitable portfolios.
Jim O’Neil (@jimoneil) posted Sample Browser–The Next Visual Studio Extension You’ll Install on 10/1/2012:
The one software design pattern that I have used in just about every application I’ve written is “cut-and-paste,” so the new “Sample Browser” – read sample as a noun not an adjective – is a great boon to my productivity.
Provided by the Microsoft All-in-One Code Framework in conjunction with the Visual Studio Team and the MSDN Samples Gallery, this Visual Studio plug-in provides the ability to search and install over 3500 code samples all without leaving your favorite IDE (either the 2010 or 2012 version).
Once you’ve installed the extension (if you’re running Visual Studio Express you can use the standalone version), you’ll be able to browse samples by category (HTML 5, Windows 8, Windows Azure, etc.), search by term or description, and even put in a request to have a sample built by Microsoft engineers if there is a gap in the current offerings.
The really cool feature, in my opinion, is that you can trigger a contextual search in the Visual Studio editor for other samples that might reference a specific method, like OnNavigatedTo as seen to the right.
And consider subscribing to the Sample of the Day RSS feed right from the Visual Studio Start screen; it’s a great way to learn something new each day!
Installed on VS 2010 and 2012.
Mary Jo Foley (@maryjofoley) asserted “Microsoft’s Dynamics NAV 2013 ERP release is generally available. But it won’t be hosted on Windows Azure until the first quarter of 2013, instead of this month, as originally planned” in a deck for her Microsoft Dynamics NAV 2013 debuts, minus promised Azure hosting article of 10/1/2012 for ZDNet’s All About Microsoft blog:
On October 1, Microsoft announced general availability of its small/mid-size-business-targeted Dynamics NAV 2013 ERP release.
Dynamics NAV 2013, which is one of four ERP products offered by Microsoft, was slated to be the first of the four to be hosted on Microsoft’s Windows Azure cloud operating system. But it turns out NAV 2013 won’t be hosted on Microsoft’s cloud right out of the gate, after all.
Microsoft’s new plan is to make NAV 2013 available on Azure some time in the first quarter of calendar 2013, a spokesperson confirmed. Once it is hosted on Azure, the product still will be sold through NAV 2013 partners, as per Microsoft’s original plan, the spokesperson said. There’s no date or official commitment as to when/whether Microsoft also might offer NAV 2013 hosted on Azure directly to customers itself, the spokesperson added.
Microsoft officials previously committed to making Dynamics GP 2013, which is slated to be generally available in December 2013, its second Azure-hosted ERP offering. It sounds as though Microsoft officials aren’t 100 percent sure this will happen, but they are still saying, for now, that the plan is to enable partners to sell an Azure-hosted version of GP 2013 once it is available.
Microsoft officials are not sharing details as to what led to the Azure-hosting delay with NAV 2013. When I asked for more background on this, I received the following statement from the spokesperson:
“Microsoft Dynamics NAV 2013 is available to customers both on-premises and in the cloud via partner-hosted offerings. With the new version we have made significant investments in the ‘hostability’ of the product to ensure a great customer and partner experience deploying and using NAV in the cloud. We are currently fine-tuning deployment scenarios and creating prescriptive guidance for deploying NAV on Windows Azure and expect to make deployment of Microsoft Dynamics NAV on Windows Azure broadly available in Q1 of CY2013.“
Improved "hostability" is just one of a number of new features in the NAV 2013 release. The latest release also includes improvements to querying and charting; more granular role-tailored capabilities; increased general-ledger flexibility; integration with Microsoft’s SharePoint and OneNote note-taking products; and expanded Web Client/browser support.
Microsoft’s grand ERP plan is to follow the same model on the ERP side of the house that it’s already pursuing on the CRM side of its Dynamics business. This year, Microsoft is rolling out its NAV 2013 release simultaneously on-premises and in the cloud. After this year, future Dynamics ERP releases will be cloud-first. As is the case with Dynamics CRM, Microsoft will be making two major updates a year to its ERP platforms once they’re available both on-premises and in the cloud, officials have said.
Lori MacVittie (lmacvittie) answered “Why active-active is not best practice in the data center, and shouldn’t be in the cloud either” in her Load Balancing 101: Active-Active In the Cloud article of 10/1/2012 for F5’s DevCentral blog:
Last time we dove into a "Load Balancing 101" discussion we looked at the difference between architected for scale and architected for fail. The question that usually pops up after such a discussion is "why can’t I just provision an extra server and use it. If one fails, the other picks up the load"?
We call such a model N+1 – where N is the number of servers necessary to handle load plus one extra, just in case. The assumption is that all N+1 servers are active, so no resources are just hanging out idle and wasting money. This is also sometimes referred to as "active-active" when such architectures include a redundant pair of X (firewalls, load balancers, servers, etc… ) because both the primary and backup are active at the same time.
So it sounds good, this utilization of all resources, and when everything is running smoothly it can improve performance, because utilization remains lower across all N+1 devices.
The problem comes when one of those devices fails.
HERE COMES the MATH
In the simplest case of two devices – one acting as backup to the other – everything is just peachy keen until utilization is greater than 50%.
Assume we have two servers, each with a maximum capacity of 100 connections. Let’s assume clients are generating 150 connections and a load balancing service distributes this evenly, giving each server 75 connections for a utilization rate of 75%.
Now let’s assume one server fails.
The remaining server must try to handle all 150 connections, which puts its utilization at … 150%. Which it cannot handle. Performance degrades, connections time out, and end-users become very, very angry.
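The arithmetic above can be sketched in a few lines. This is a toy calculation to illustrate the failure scenario, not tied to any particular load balancer:

```typescript
// Toy model of the N+1 failure math: per-pool utilization before and
// after one of two servers fails.

function utilizationPercent(
  totalConnections: number,
  servers: number,
  capacityPerServer: number
): number {
  return (totalConnections / (servers * capacityPerServer)) * 100;
}

const before = utilizationPercent(150, 2, 100);
const after = utilizationPercent(150, 1, 100);
console.log(before); // → 75  (each server comfortably under capacity)
console.log(after);  // → 150 (the survivor is over capacity; connections fail)
```

The same function makes the general rule visible: a pool of active-active devices can only absorb a failure if total load stays at or below the capacity of the surviving devices.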
Which is why, if you consider the resulting impact of performance degradation and downtime on business revenue and productivity, redundancy is considered a best practice for architecting data center networks. N+1 works in the scenario in which only 1 device fails (because the idle one can take over), but the larger the pool of resources, the more likely it is that more than one device will fail at relatively the same time, making it necessary to take more of an N+"a couple or three spares" approach.
Yes, resources stand idle. Wasted. Money down the drain.
Until they’re needed. Desperately.
They’re insurance, they always have been, against failure. The cost of downtime and/or performance degradation was considered far greater than the operational and capital costs associated with a secondary, idle device.
The ability of a load balancing service to designate a backup server/resource that remains idle is paramount to enabling architectures built to fail. The ability of a load balancing service in the cloud to do this should be considered a basic requirement. In fact, much like leveraging cloud as a secondary "backup" data center for disaster recovery/business continuity strategies, having a "spare" resource waiting to assure availability should be a no-brainer from a cost perspective, given the much lower cost of ownership in the cloud.
Referenced blogs & articles:
Related blogs & articles:
No significant articles today
I’ve been reading the social media reactions to my recent note on OpenStack, “Don’t Let OpenStack Hype Distort Your Selection of a Cloud Management Platform in 2012” (that’s a client link; a free public reprint without the executive summary is also available), and wanted to respond to some comments that are more centered on the research process and publication process than on the report itself. So, here are a number of general assertions:
Gartner doesn’t do commissioned research. Ever. Repeat: Gartner, unlike almost every other analyst firm, doesn’t do commissioned research — ever. Most analyst firms will do “joint research” or “commissioned whitepapers” or the like — research where a vendor is paying for the note to be written. About a decade ago, Gartner stopped this practice, because management felt it could be seen as compromising neutrality. No vendor paid for that OpenStack note to be written, directly or indirectly. Considering that most of the world’s largest IT vendors are significant participants in OpenStack, and plenty of small ones are as well, and a bunch of them are undoubtedly unhappy about the publication of that note, if Gartner’s interests were oriented around vendors, we certainly wouldn’t have published research practically guaranteed to upset a lot of vendors.
Gartner earns the overwhelming majority of its revenue from IT buyers. About 80% of Gartner’s revenues come from our IT buyer clients (we call them “end-users”). We don’t shill for vendors, ever, because our bread-and-butter comes from IT buyers, who trust us for neutral advice. Analysts are interested in helping our end-user clients make the technology decisions that are best for their business. We also want to help vendors (including those who are involved with free or commercial open-source) succeed in better serving end-users — which often means that we will be critical of vendor efforts. Our clients are asking about OpenStack, and every example of hype in that note comes directly from client interactions. I wrote that note because the volume of OpenStack queries from clients was ramping up, and we needed written research to address it.
Gartner analysts are not compensated on commissions of any sort. Many other analyst firms have incentives for analysts that are tied to revenue or publicity — be quoted in the press X times, sell reports, sell consulting, sell strategy days with vendors, etc. Gartner doesn’t do any of that, and hasn’t for about a decade. Our job, as Gartner analysts, is to try to offer the best advice we can to our clients. Sometimes, of course, we will be wrong, but we try hard. It’s not an academic exercise; our end-user clients have business outcomes that ride on their technology decisions.
Gartner doesn’t dislike open source. As a collective entity, Gartner tends to be cautious in its stances, as our end-user clients tend to be mid-market and enterprise IT executives who are fairly risk-averse; our analysis of all solutions, including OSS, tends to be from that perspective. But we write extensively about open source; we have analysts devoted to the topic, plus everyone covers the OSS relevant to their own coverage. We consider OSS a business strategy like anything else. In fact, we’ve been particularly vocal about how we feel that cloud is driving OSS adoption across a broad spectrum of solutions, and have advocated that an IT organization’s adoption of cloud is a great time to consider replacing proprietary tech with OSS. (You’ll note that a whole section of the report predicts OpenStack’s eventual success, by the way, so it’s not a prediction of gloom, just an observation of present stumbling-blocks on the road to maturity.)
Gartner research notes are Gartner opinion, not an individual analyst’s opinion. Everything that Gartner publishes as a research note (excluding things like blog posts, which we consider off-the-clock, personal time and not a corporate activity) undergoes a peer review process. While notes do slip through the cracks (i.e., get published without sufficiently broad or deep review), our internal processes require analysts to get review from everyone who touches a coverage area. My OpenStack note was very broadly peer reviewed — by other analysts who cover cloud infrastructure, cloud management platforms, and open source software, as well as a bunch of related areas that OpenStack touches. (As a result of that review, the original note almost quadrupled in size, split into one note on OSS CMPs in general, and one note on OpenStack itself.) I also asked for our Ombudsman’s office, which deals with vendor complaints, to review the note to make sure that it seemed fair, balanced, and free of inflammatory language, and they (and my manager) also asked questions about potentially controversial sections, in order to ensure they were backed by facts. Among other things, these processes are intended to ensure that individual analyst bias is eliminated to as large an extent as possible. That process is part of why Gartner’s opinions often sound highly conservative, but when we take a stance, it is usually neither casual nor one analyst’s opinion.
The publication of this note was not a shock to the vendors involved. Most of the vendors were aware that this note was coming; it was a work in progress over the summer. Rackspace, as the owner of OpenStack at the time that this was placed in the publication pipeline, was entitled to a formal review and discussion prior to its publication (as we do for any research that covers a vendor’s product in a substantive way). I had spoken to many of the other vendors in advance of its publication, letting them know it was coming (although since it was pre-Foundation they did not have advance review). The evolving OpenStack opinions of myself and other Gartner analysts have long been known to the vendors.
It would have been easier not to write anything. I have been closely following OpenStack since its inception, and I have worked with many of the OpenStack vendors since the early days of the project. I have a genuine interest in seeing them, and OpenStack, succeed, and I hope that the people that I and other analysts have dealt with know that. Many individuals have confided in me, and other Gartner analysts, about the difficulties they were having with the OpenStack effort. We value these relationships, and the trust they represent, and we want to see these people and their companies succeed. I was acutely careful to not betray any individual confidences when writing that report, ensuring that any concerns surfaced by the vendors had been said by multiple people and organizations, so that there would be no tracebacks. I am aware, however, that I aired everyone’s collective dirty laundry in public. I hope that making the conversation public will help the community rally around some collective goals that will make OpenStack mainstream adoption possible. (I think Rackspace’s open letter implicitly acknowledges the issues that I raised, and I highly encourage paying attention to its principles.)
You will see other Gartner analysts be more active in OpenStack coverage. I originally picked up OpenStack coverage because I have covered Rackspace for the last decade, and in its early days it was mostly a CMP for service providers. Enterprise adoption has begun, and so its primary home for coverage is going to be our CMP analysts (folks like Alessandro Perilli, Donna Scott, and Ronni Colville), although those of us who cover cloud IaaS (especially myself, Kyle Hilgendorf, and Doug Toombs) will continue to provide coverage from a service provider perspective. Indeed, our coverage of OSS CMPs (CloudStack, Eucalyptus, OpenNebula, etc.) has been ramping up substantially of late. We’re early in the market, and you can expect to see us track the maturation of these solutions.
Jeff Barr (@jeffbarr) suggested Get Started With Oracle Applications Now With Our New Test Drive Program on 10/1/2012:
One of the key advantages that customers and partners are telling us they really appreciate about AWS is its unique ability to cut down the time required to evaluate new software stacks. These "solution appliances" can now be easily deployed on AWS and evaluated by customers in hours or days, rather than in weeks or months, as is the norm with the previous generation of IT infrastructure.
With this in mind, AWS has teamed up with leading Oracle ecosystem partners on a new initiative called the Oracle Test Drive program.
Starting today, customers can experience firsthand a new way to evaluate and learn about advanced new Oracle features and use cases, on AWS at no charge. In as little as one hour, customers can be guided step-by-step through the experience of:
- Backing up an Oracle database to Amazon S3.
- Creating a highly available database solution using Oracle Data Guard.
- Using the business analytics capabilities of Oracle Business Intelligence Enterprise Edition.
- Evaluating PeopleSoft, E-Business Suite, Siebel, JD Edwards, and Hyperion analytics, human capital management, value chain management, and financial management software.
While this may sound somewhat intimidating to the uninitiated, all of these hands-on labs walk you through the process of logging into your own private AWS instance and then using the pre-configured Oracle software. The simple step-by-step instructions enable you to experience firsthand the advanced capabilities of Oracle software on AWS at your own speed. The test drive labs are designed to provide you with instant insight into the capabilities and approaches that each of these Oracle solutions provides, and do so in as little as an hour.
For some people the cloud is still somewhat ethereal in nature, a non-tangible concept that is not quite concrete yet in their minds. The Oracle Test Drive labs may be able to make the cloud much more tangible, especially for customers that are familiar with the Oracle environment. Each lab includes up to 5 hours of complimentary AWS server time.
Let’s say that I want to learn how to back up my Oracle database to AWS using the Oracle Secure Backup product. I visit the test drive page and find the appropriate test drive:
Then I fill out the form on the partner site:
The partner sends me an activation link via email, and I click it to get started:
And I sign in on the partner site:
This partner site allows me to choose one of three distinct test drives:
I selected Oracle Secure Backup. This launches a complete, self-contained test environment on an Amazon EC2 instance:
While the environment was setting up, I was able to tab through a preview of what I would see and learn:
When the environment is ready (this one launched in about five minutes), I received a second email with complete sign-in details:
And from there I simply connected to the EC2 instance (the lab actually creates one Windows instance and one Linux instance, both within a Virtual Private Cloud):
After I gained access to the Windows instance, I followed the directions and opened up a pre-configured PuTTY session to the Linux instance (yes, this means that I am using a virtual desktop to access one virtual server to log in to a second virtual server):
I continued to follow the directions and verified that the expected database processes and tables were in place:
Per the directions, I installed the Oracle Secure Backup module (this took about 3 minutes) and then ran an actual backup (the rman.sh script simply invokes the rman command with some parameters):
From there it was easy to verify my backup:
Next (still following the directions), I simulated a disaster by removing a database file and verifying that I could no longer operate on it. I then learned how to recover the data:
After this I brought the database back on line and verified that I could resize it. This concluded the lab, which ended with a nice summary of the benefits of backing up my Oracle database to Amazon S3:
And that’s that! Note that I didn’t have to do any of the following in order to learn how to use Oracle Secure Backup:
- Get permission from Corporate IT or my manager.
- Acquire, install, and configure a server.
- Install and configure Oracle Database 11G or Oracle Secure Backup.
- Spend any money.
- Leave a mess behind.
I was able to do the entire lab in 90 minutes. This included the time to complete the lab and to write and illustrate this blog post. If I can do it, so can you.
This is just one of nearly two dozen labs. Check them out today to learn more about Oracle and AWS, or to improve your skills.
Barb Darrow (@gigabarb) reported Shocker! Oracle takes on Amazon with all-Oracle-all-the-time cloud in a 9/30/2012 article for GigaOm’s Cloud blog:
Updated: Oracle has found a market for its big, pricey engineered hardware systems — and it’s in new public and private Oracle clouds. Oracle CEO Larry Ellison laid out the company’s new all-red infrastructure-as-a-service cloud plan at Oracle OpenWorld on Sunday night.
Oracle cloud will use “our OS, our VM, our compute services and storage services on the fastest most reliable systems in the world — our engineered systems, Exadata, Exalogic, Exalytics, all linked with Infiniband,” Ellison told thousands of Oracle customers, partners and others at San Francisco’s Moscone Center Sunday night. For banks and other companies with requirements to run infrastructure in house, Oracle will offer a private cloud based on the exact same technology and run and manage it in customer data centers, Ellison said.
The promised Oracle 12c (the “c” stands for cloud) database will be the software foundation, and Ellison said this iteration of the database will put multitenancy — the ability to securely keep separate sets of data in one place — at the database level where it belongs. The rough concept is that 12c is a database container that can run separate “pluggable” databases — one for ERP, another for CRM and so on.
“Back in 1998 and 1999 when NetSuite and Salesforce.com came out, the only way to do multitenancy was at the application layer,” Ellison said, adding that he had problems with that approach. Ellison used to blast competitors’ use of multitenancy, calling it an aging technology. That apparently all changes now. (Ellison had stakes in NetSuite and Salesforce.com, both pioneering SaaS companies, and still owns a big piece of NetSuite.)
By moving multitenancy into the database, software as a service (SaaS) and platform as a service (PaaS) providers can relinquish that workload to the database and use database query and business intelligence tools to work with them instead of having to come up with application-specific tools.
Ellison: Our SaaS customers want this
Ellison said SaaS and PaaS customers asked Oracle to supply this infrastructure, so it will be interesting to see if either Salesforce.com or NetSuite — both SaaS companies that use Oracle databases — makes a move. That’s doubtful in Salesforce.com’s case since that company is competing more and more with Oracle. And NetSuite CEO Zach Nelson will speak at Oracle OpenWorld, so stay tuned.
Update: Reached by email, NetSuite’s Nelson said: “NetSuite wouldn’t choose to run our application on anyone’s public cloud — Oracle’s, Microsoft’s, or Amazon’s. We need to manage every aspect of our infrastructure to ensure service level commitments we have made to our customers.” No word back yet from Salesforce.com’s Marc Benioff.
Update: Nelson wrote back in to clarify his statement: “I should have qualified this a bit to say we wouldn’t run our ‘production’ application on anyone’s cloud. However, the idea of doing pre-release testing and/or disaster recovery on Oracle’s cloud is interesting to us. And of course, we certainly believe Oracle’s technology is fantastic for cloud delivery as we (like salesforce.com) run a complete Oracle database and app server beneath the NetSuite application,” he wrote.
The hardware foundation for Oracle Cloud will be Exadata X3, a new “engineered system” which packs 26TB of memory — 4TB of DRAM and 22TB of Flash memory, Ellison said.
Oracle’s problem in all this is that it has not made much of a case for its hardware to date. Oracle’s hardware business was off 24 percent year over year in its last quarter. It also has a bit of an ecosystem problem. Yes, it has SaaS customers, but as several on Twitter commented, they would be more impressed if Oracle had trotted out a list of customers and/or partners that had signed up for this cloud effort.
Playing catchup in cloud
And, Oracle’s entry into public cloud is late given that competitors including IBM, HP, and the OpenStack players are already there.
In addition, Oracle’s decision to use very high-end specialized hardware to power its cloud flies in the face of conventional wisdom espoused by web giants like Facebook, Google and Amazon that yoke together thousands of commodity servers in webscale data centers. Oracle’s take is definitely scale-up in what appears to be an increasingly scale-out world.
If you can’t sell it, rent it. Sounds like the ultimate in pricey lock-in to me.
Full disclosure: I’m a registered GigaOm Analyst.