A compendium of Windows Azure, SQL Azure Database, AppFabric, Windows Azure Platform Appliance and other cloud-computing articles.
Note: This post contains overflow from the Windows Azure and //build/ posts for 9/12/2011+.
- Azure Blob, Drive, Table and Queue Services
- SQL Azure Database and Reporting
- MarketPlace DataMarket and OData
- Windows Azure AppFabric: Apps, Access Control, WIF and Service Bus
- Windows Azure VM Role, Virtual Network, Connect, RDP and CDN
- Live Windows Azure Apps, APIs, Tools and Test Harnesses
- Visual Studio LightSwitch and Entity Framework v4+
- Windows Azure Infrastructure and DevOps
- Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds
- Cloud Security and Governance
- Cloud Computing Events
- Other Cloud Computing Platforms and Services
Michael Collier (@MichaelCollier) recommended that you Back Up Your Dev Storage before Upgrading to Windows Azure SDK 1.5 in a 9/15/2011 post:
Yesterday Microsoft made a new version of the Windows Azure SDK available. The new Windows Azure SDK 1.5 has many great features, some of which I discussed previously.
One thing that wasn't clear to me before upgrading from 1.4 to 1.5 was that, as part of the upgrade, a new development storage database would need to be created and I would lose access to all the data previously stored in development storage. When starting the storage emulator for the first time after the tools upgrade, DSINIT needs to run. DSINIT will create a new development storage database for you. The new database should be named "DevelopmentStorageDb20110816". You can then connect to it using your favorite storage tool, such as Cerebrata Cloud Storage Studio or Azure Storage Explorer. Once connected, you'll notice that all the data you previously had stored in dev storage is now gone!
Actually, the data doesn't appear to be so much "gone" as "I can't get there from here". If you take a look at your local SQL Express instance, you'll likely notice the new "DevelopmentStorageDb20110816" database and the previous "DevelopmentStorageDb20090919" database. How you can get that data back easily . . . I'm not yet sure.
What should you do now? Your best bet is likely to grab Mr. Fusion and your DeLorean and go back in time to the point before installing the updated Windows Azure tools. Back up your dev storage data, and then proceed to install the new tools. With the data backed up, you can then restore it into the new dev storage. Cerebrata has a nice walkthrough of doing so here (minus the Mr. Fusion and DeLorean part). If you don't have a handy Mr. Fusion, you should also be able to uninstall the 1.5 tools, install the 1.4 tools, back up the data, reinstall the 1.5 tools, and then restore the data.
The lesson here: back up your dev storage data before upgrading to the latest Windows Azure tools.
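Since the dev storage database is just an ordinary SQL Express database, the backup step is plain SQL Server work. As an illustrative sketch (not from Collier's post), the snippet below builds the T-SQL you would run against your local instance; the backup directory is hypothetical, and you should check your own SQL Express instance for the actual database name:

```python
# Sketch: generate the T-SQL needed to back up a development storage
# database before an SDK upgrade. The database name follows the
# date-stamped pattern the post describes; the path is illustrative.

def backup_statement(db_name: str, backup_dir: str) -> str:
    """Build a BACKUP DATABASE statement for a dev storage database."""
    return (
        f"BACKUP DATABASE [{db_name}] "
        f"TO DISK = N'{backup_dir}\\{db_name}.bak' "
        f"WITH INIT, NAME = N'{db_name} pre-upgrade backup';"
    )

# The SDK 1.4 emulator database carried a 2009 date stamp; SDK 1.5
# creates a new one stamped 20110816.
old_db = "DevelopmentStorageDb20090919"
print(backup_statement(old_db, r"C:\Backups"))
```

You would run the emitted statement in SQL Server Management Studio (or via sqlcmd) before installing the 1.5 tools, then restore into the new database afterwards, as in the Cerebrata walkthrough.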
Last week I finally got around to trying out the beta of a new tool from Red Gate Software: SQL Azure Backup. This is an example of one of those tools that just makes you smile: a small download, no installation, it works as advertised and even has a little "character".
It can back up to SQL Server or to Windows Azure Blob Storage using the Microsoft Import/Export Service.
Step 1: Enter your SQL Azure details
Step 2: Enter the target details, in this case a local SQL Express
Step 3: Make a cup of tea
Or watch what is happening. It creates a new database on SQL Azure for transactional consistency
The schema and data are then copied from it
And it is finally deleted
Step 4: The backup is complete
Step 5: And the target database is ready
Related Links
- Attend a FREE Windows Azure Discovery Workshop in the UK
Gilad Parann-Nissany discussed database security in the cloud in a 9/13/2011 post to the Porticor blog:
We often get requests for best practices regarding relational database security in the context of cloud computing. People want to install the database of their choice, be it Oracle, MySQL, MS SQL, or IBM DB2 ...
This is a complicated question, but it can be broken down by asking "what is new in the cloud?" Many techniques that have existed for ages remain relevant, so let us briefly review data security in general.
Database security in context
- Application security. The application that uses the database ("sits on top of" the DB) is itself open to various attacks. Securing the application will close off major data attack vectors, such as SQL injection.
- Physical security. In the cloud context, this means choosing a cloud provider that has implemented and documented security best practices.
- Network security. Your cloud environment and 3rd-party security software should include network security techniques such as firewalls, virtual private networks, and intrusion detection and prevention.
- Host security. In the cloud, your instances (a.k.a. virtual servers) should run an up-to-date and patched operating system, use virus and malware protection, and monitor and log all activity.
Having secured everything outside the database, you are still left with the threats to the database itself. These often include:
- Direct attacks on the data itself (in an attempt to get at it)
- Indirect attacks on the data (such as on log files)
- Attempts to tamper with the configuration
- Attempts to tamper with the auditing mechanisms
- Attempts to tamper with the DB software itself (e.g. tampering with the database software executables)
So far, these threats are familiar to any database security expert with years of data center experience. So what changes in the cloud?
Data at rest in the cloud
At the end of the day, databases save "everything" to disks, often in files that may represent tables, configuration information, executable binaries, or other logical entities.
Protecting and restricting access to these files is of course key. In the "old" data center, this was usually done by placing the disks in a (hopefully) secure location, i.e. in a room with good walls and restricted access. In the cloud, virtual disks are accessible through a browser, and also to some of the cloud provider's employees; clearly some additional thought is needed to secure them.
Beyond keeping your access credentials closely guarded, it is universally recommended that virtual disks holding sensitive data should always be encrypted.
There are two basic ways to protect these files:
- File-level encryption. Essentially, you need to know which specific files you want to protect, and encrypt them with an appropriate technique.
- Full disk encryption. This encrypts everything on the disks.
Full disk encryption is good practice today. It ensures that nothing is forgotten.
Encryption keys in the cloud
Encrypting your data at rest on virtual disks is clearly the right way to go. You should also consider where the encryption keys are kept, because if an attacker gets hold of the keys they will be able to decrypt your data.
It is recommended to avoid solutions that keep your keys right next to your data, because then you really have no security.
It is also recommended to avoid vendors who tell you "don't trust the cloud, but trust us, and let us keep your keys". There are a number of such vendors in the market. The fact is that cloud providers such as Amazon, Microsoft, Rackspace or Google know their stuff. If you don't trust them with your precious data, why trust cloud vendor X?
One approach that does work from a security perspective is to keep all your keys back in your own data center. But this is cumbersome; in effect it defeats the purpose of the cloud, since you wanted to move out of the data center.
A unique solution exists. Porticor offers its unique key-management solution, which allows you to trust no one but yourself, yet enjoy the full power of a pure cloud deployment. For more on this, see this white paper. This solution also fully implements full disk encryption, as mentioned above.
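Porticor's actual scheme is described in its white paper; purely as a toy illustration of the general split-key idea (my own sketch, not Porticor's algorithm), a key can be split into two random shares such that neither share alone reveals anything about the key, so neither the provider nor a key-storage vendor holding a single share can decrypt your data:

```python
import secrets

def split_key(key: bytes) -> tuple[bytes, bytes]:
    """Split a key into two shares; each share alone is uniform random noise."""
    share1 = secrets.token_bytes(len(key))               # one-time random pad
    share2 = bytes(a ^ b for a, b in zip(key, share1))   # key XOR pad
    return share1, share2

def join_key(share1: bytes, share2: bytes) -> bytes:
    """Recombine the two shares to recover the original key."""
    return bytes(a ^ b for a, b in zip(share1, share2))

key = secrets.token_bytes(32)     # e.g. an AES-256 disk-encryption key
s1, s2 = split_key(key)           # keep s1 yourself; store s2 elsewhere
assert join_key(s1, s2) == key
```

This 2-of-2 XOR split is information-theoretically secure for a single use; real key-management products layer additional machinery (rotation, access control, hardware protection) on top of the same basic idea.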
Database security in the cloud is a complex topic, but it is entirely possible today.
No significant articles today.
Brian Raymor of the IE Team described the status of WebSockets as ready for developers in a 9/15/2011 post:
The Web gets richer and developers are more creative when sites and services can communicate and send notifications in real time. WebSockets technology has made significant progress over the last nine months. The standards around WebSockets have converged substantially, to the point that developers and consumers can now benefit from them across different implementations, including IE10 in Windows 8. You can try out a WebSockets Test Drive that shows real-time, multiuser drawing working across multiple browsers.
What is WebSockets and what does it do?
WebSockets enable Web applications to deliver real-time notifications and updates in the browser. Developers have long run into problems working around the limitations of the browser's original HTTP request-response model, which was not designed for real-time scenarios. WebSockets enable browsers to open a bidirectional, full-duplex communication channel with services. Each party can then use this channel to immediately send data to the other. Now sites from social networking and games to financial services can deliver better real-time scenarios, ideally using the same markup across different browsers.
What has changed with WebSockets?
WebSockets have come a long way since we wrote about them in December 2010. At that time there was a lot of ongoing change in the underlying technology, and developers trying to build on it faced many challenges, both around efficiency and around simply getting their sites to work. The standard is now much more stable, as a result of strong collaboration across different companies and standards bodies (like the W3C and the Internet Engineering Task Force).
The W3C WebSocket API specification has stabilized, with no substantial issues blocking Last Call. The specification has new support for binary message types. There are still issues under discussion, such as improving the validation of subprotocols. The protocol is also stable enough that it is now on the agenda of the Internet Engineering Steering Group for review and final approval.
The Web moves forward when developers and consumers can rely on technologies to work well. While WebSockets technology was shifting and "under construction," we used HTML5 Labs as a place for experimentation and community feedback. With a prototype we gained implementation experience that led to strong engagement in the working group, and opportunities to gather feedback from the community, both of which ultimately lead to a better, and more stable, design for developers and consumers. We are excited and encouraged by how much HTML5 Labs helped us work with the community to bring WebSockets to where it is today.
Today in the BUILD keynote I had the opportunity to show some of the new capabilities in Microsoft® Visual Studio® 11 Developer Preview and the Microsoft® Team Foundation Server Developer Preview. MSDN subscribers can download the previews today, along with the new .NET Framework 4.5 Developer Preview; general availability is this Friday, September 16.
- Download Visual Studio 11, Team Foundation Server 11, and several other components today (MSDN Subscribers Only Downloads).
- Friday 10:00 AM PDT: general availability of the Visual Studio 11 and Team Foundation Server 11 downloads.
Some exciting announcements are being made here at BUILD. Visual Studio 11 provides an integrated development experience that spans the entire lifecycle of software creation, from architecting to code creation, testing and beyond. This release adds support for Windows 8 and HTML 5, enabling you to target platforms across devices, services and the cloud. Integration with Team Foundation Server enables the entire team to collaborate throughout the development cycle to create quality applications.
.NET 4.5 focuses on top developer requests across all our core technologies, and includes new features for asynchronous programming in C# and Visual Basic, support for state machines in Windows Workflow, and increased investments in HTML5 and CSS3 in ASP.NET.
We have shared a lot at BUILD already; for more on the future of Windows development I suggest you take a look at Steven Sinofsky's and S. Somasegar's blogs. More details on Team Foundation Server, including the new service announced at BUILD and how we are helping teams be more productive, can be found on Brian Harry's blog.
A Quick Tour of Visual Studio 11 Features
Visual Studio 11 has several new features, all designed to provide an integrated set of tools for delivering great user and application experiences, whether you work individually or as part of a team. Let me highlight a few:
Developing Metro style Apps for Windows 8
Because of the dynamic nature of HTML, it is often difficult to see how a web page is going to look while it is running. Blend's innovative interactive design mode enables you to run your app on the design surface as a live app rather than as a static visual layout.
Enhancements for Game Development
We have added Visual Studio Graphics tools to help game developers become more productive and make it easier to build innovative games. Visual Studio 11 provides access to a number of resource editing, visual design, and debugging tools for writing 2D/3D games and Metro style applications. Specifically, Visual Studio Graphics includes tools for:
Basic viewing and editing of 3D models in Visual Studio 11.
Viewing and editing of images and textures, with support for alpha channels and transparency.
Visually designing shader programs and image effects.
Debugging and diagnostics of DirectX-based output.
Code Clone Analysis
Visual Studio has historically provided tools that enable a developer to improve code quality by refactoring code. However, that process depends on the developer to determine where such reusable code is likely to occur. The code clone analysis tool in Visual Studio 11 examines your solution looking for duplicated logic, making it possible for you to factor that code out into one or more common methods. The tool does this very intelligently: it doesn't just look for identical blocks of code, it searches for semantically similar constructs using heuristics developed by Microsoft Research.
This technique is useful if you are correcting a bug in one piece of code and want to find out whether the same bug, resulting from the same programming idiom, occurs elsewhere in the project.
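Visual Studio's semantic heuristics are Microsoft Research's own; a far cruder version of the underlying idea (my own sketch, not the shipped algorithm) simply normalizes whitespace and flags sliding windows of lines whose normalized text appears more than once:

```python
from collections import defaultdict

def find_clones(source: str, window: int = 3) -> dict:
    """Report line windows whose whitespace-normalized text appears twice or more."""
    lines = [" ".join(l.split()) for l in source.splitlines()]  # normalize whitespace
    seen = defaultdict(list)
    for i in range(len(lines) - window + 1):
        chunk = "\n".join(lines[i:i + window])
        if chunk.strip():                       # skip all-blank windows
            seen[chunk].append(i + 1)           # record 1-based start line
    return {c: locs for c, locs in seen.items() if len(locs) > 1}

code = """
total = 0
for x in items:
    total += x
print(total)
other = 0
for x in items:
    total += x
"""
clones = find_clones(code, window=2)
# The two-line loop body appears twice, starting at lines 3 and 7.
```

A real clone detector tokenizes, renames identifiers consistently, and compares syntax trees rather than raw text, which is how it catches the "same idiom, different variable names" case described above.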
Code Review Workflow with Team Explorer
Visual Studio 11 Developer Preview works hand in hand with Team Foundation Server 11 to provide best-in-class application lifecycle management. Visual Studio 11 facilitates collaboration by enabling developers to request and perform code reviews using Team Explorer. This feature defines a workflow in Team Foundation Server that saves project state and routes review requests as work items to team members. These workflows are independent of any specific process or methodology, so you can incorporate code review feedback at any convenient point in the project's lifecycle.
The Request Review link in the My Work panel enables you to create a new review request from a coding task and assign it to one or more other developers.
Reviewers can accept or decline the review, respond to any messages or questions related to the code review, add annotations and more. Visual Studio 11 shows the code in a "diff" format, displaying the original code and the changes made by the developer who requested the review. This feature enables reviewers to quickly understand the intent of the changes and to work more efficiently.
Exploratory Testing and Enhanced Unit Testing
As development teams become more flexible and agile, they require adaptive tools that still ensure a commitment to high quality. The Exploratory Testing feature is an agile tool for adaptive testing that lets you test without performing formal test planning. You can now directly start testing the product, without the overhead of writing time-consuming test cases or composing test suites. When you start a new testing session, the tool generates a complete record of your interaction with the application under test. You can annotate your session with notes, and you can capture the screen at any point and add the resulting screen shot to your notes. You can also attach a file providing any additional information if required. With the exploratory testing tool you can also:
- Submit actionable bugs fast. The exploratory testing tool enables you to generate a bug report, and the steps you performed to cause the unexpected behavior are automatically included in the bug report.
- Create test cases. You can generate test cases based on the steps that caused the bugs to emerge.
- Manage previous testing sessions. When testing is complete, you can return to Microsoft Test Manager, which saves the details of the testing session and includes information such as the duration, which new bugs were filed, and which test cases were created.
What's New in .NET 4.5
.NET 4.5 focuses on our top developer requests. Across ASP.NET, the BCL, MEF, WCF, WPF, Windows Workflow, and other core technologies, we listened to developers and added functionality in .NET 4.5. Important examples include state machine support in Windows Workflow, and improved support for SQL Server and Windows Azure in ADO.NET. ASP.NET has increased investments in HTML5, CSS3, device detection, page optimization, and the NuGet package system, and introduces new functionality with MVC4. Learn more on Scott Guthrie's blog.
.NET 4.5 also helps developers write faster code. Support for asynchronous programming in C# and Visual Basic enables developers to easily write client UI code that doesn't block, and server code that scales more efficiently. The new server garbage collector reduces pause times, and new features in the Parallel Computing Platform enable dataflow programming and other improvements.
Visual Studio 11 includes several new features that will help developers collaborate more efficiently while creating exciting experiences for their users. Here are some resources to help you get started.
I have been updating my Windows Azure projects to the new SDK and ran into a problem. It is an issue I introduced myself, but it may help some people if I explain what it was. All 7 production apps are now running on v1.5 with no problems; the error is in a piece of development code (R&D quality).
In Visual Studio, the web.config entry:

  <add type="Microsoft.ServiceHosting.Tools.DevelopmentFabric.Runtime.DevelopmentFabricTraceListener, Microsoft.ServiceHosting.Tools.DevelopmentFabric.Runtime, Version=220.127.116.11, Culture=neutral, PublicKeyToken=31bf3856ad364e35" name="DevFabricListener">
    <filter type="" />
  </add>

is highlighted and underlined, with the tooltip "Invalid module qualification: Failed to resolve assembly Microsoft.ServiceHosting.Tools.DevelopmentFabric.Runtime".
In my application, a web role, I was tracing with this trace listener, which would throw an exception. In Global.asax: Application_Error I was also tracing this exception, causing another exception to be thrown by the tracing infrastructure. Very quickly this degenerates into a StackOverflowException and crashes W3WP.
The resolution is simply to comment out this particular TraceListener.
I have yet to discover why the name is failing to resolve (I will update here if I do, or if you find out why, please let me know). A new version number would be my guess.
The Bytes by MSDN blog posted Bytes by MSDN: September 14 - Lynn Langit on 9/14/2011:
Tune in as Dave Nielsen (@davenielsen) and Lynn Langit (@llangit, pictured at right) discuss their work with governments and non-profit organizations. They cover how non-profits and government entities can save money by leveraging the cloud. It's a no-brainer: whether using software as a service (SaaS) or expanding storage, Windows Azure can provide a pay-for-what-you-need solution. Watch this conversation with Dave and Lynn to find out how the state of Florida is using the cloud's scalability during tax season to save money.
The HPC in the Cloud blog published a GigaSpaces Enables Seamless Deployment of Mission-Critical Java Applications on Windows Azure press release on 9/13/2011:
GigaSpaces Technologies, a pioneer of next-generation application platforms for mission-critical applications, is now offering a solution that provides seamless on-boarding of enterprise Java and big-data applications to the Windows Azure platform.
"Customers who have enterprise Java applications can leverage the GigaSpaces solution to help them transition to the Windows Azure platform," said Prashant Ketkar, Director of Product Marketing, Windows Azure at Microsoft. "GigaSpaces is working with Microsoft to give Windows Azure customers deployment, management and monitoring capabilities for applications running in an agile and cost-effective manner."
The GigaSpaces solution for Azure highlights the company's decade of experience in delivering application platforms for large-scale, mission-critical Java applications, and was recognized at the recent Microsoft Worldwide Partner Conference as a building block for Windows Azure.
Windows Azure enables developers to build, host and scale applications in Microsoft datacenters located around the world. Developers can use existing skills with .NET, Java, Visual Studio, PHP and Ruby to quickly build solutions, with no need to buy servers or set up a dedicated infrastructure. The platform also features automated service management to help protect against hardware failure and downtime associated with platform maintenance.
"Our platform enables businesses to move their applications to the cloud with no code changes and a minimal learning curve, meaning they can continue to develop in their traditional environment and leverage existing skills and assets, while gaining the cost and agility benefits of deploying to the Windows Azure cloud," says Adi Paz, EVP of Marketing and Business Development at GigaSpaces. "At the same time, we provide enterprises and ISVs the ability to burst into the Windows Azure cloud at peak loads, run their pre-production on Windows Azure, and save the costs associated with large-scale system testing."
The GigaSpaces solution for Windows Azure provides the following unique value:
- On-boarding of mission-critical enterprise Java/JEE/Spring and big-data applications to Windows Azure: quickly and seamlessly, with no code changes.
- A true enterprise-grade production environment: continuous availability and failover, elastic scaling across the stack, and automation of the application deployment lifecycle.
- Windows Azure services made natively available to enterprise Java applications: through tight integration between the GigaSpaces platform and Azure.
- GigaSpaces' market-leading Java in-memory data grid: delivers extreme performance, low latency and fine-grained multi-tenancy.
- Control and visibility: built-in application- and cluster-aware monitoring.
- Avoidance of vendor lock-in: enables businesses to retain existing development practices, and provides support for any application stack.
"GigaSpaces brings to its Windows Azure implementation more than 10 years of experience in developing and deploying enterprise-grade application platforms for large-scale deployments running mission-critical applications for the world's largest organizations," continues GigaSpaces' Paz. "We look forward to bringing these same benefits to Windows Azure customers, on top of the many benefits of the Windows Azure platform."
To learn more about this integration, visit www.gigaspaces.com/azure, or see the Microsoft-GigaSpaces webcast. Cloudify for Azure will be available in Q4.
GigaSpaces Technologies is a pioneer of a new generation of application virtualization platforms and a leading provider of end-to-end scaling solutions for distributed, mission-critical application environments and cloud-enabling technologies. GigaSpaces is the only platform on the market that offers a truly silo-free architecture, together with operational agility and openness, delivering enhanced efficiency, extreme performance and always-on availability. GigaSpaces' offerings include solutions for enterprise application scaling, enterprise PaaS and ISV enablement, designed from the ground up to run in any cloud environment, whether private, public or hybrid, and offering a painless, evolutionary path to meet tomorrow's IT challenges.
Hundreds of organizations worldwide are leveraging GigaSpaces technology to enhance IT efficiency and performance, among them Global Fortune 500 companies, including top financial services enterprises, e-commerce companies, online gaming providers and telecom carriers.
I was asked via email to confirm my thoughts on running MongoDB on Windows Azure, particularly the implication that it is not good practice. Things have moved along and my thoughts have evolved, so I thought it might be worthwhile to update and publish my thinking.
First, I am a big fan of SQL Azure, and think that the bold decision to remove backward compatibility with SQL Server was a good one that allowed SQL Azure to rid itself of some of the problems with RDBMSs in the cloud. But, as I discussed in Windows Azure Has Little to Offer NoSQL, Microsoft is so big on SQL Azure (for many good reasons) that NoSQL is a second-class citizen on Windows Azure. Even Azure Table Storage is missing features that have been asked for for years, and if it moves forward at all, it will do so reluctantly and slowly. This means that an Azure architecture that needs the goodness offered by NoSQL products needs to roll its own on some sort of Azure role (worker or VM role). (VM roles don't fit well with the Azure PaaS model, but for the purposes of this discussion the differences between a worker role and a VM role are insignificant.)
Azure roles are not instances. They are application containers (which happen to have some sort of underlying VM) that are suited to stateless application processing. Microsoft refers to them as Windows Azure Compute, which is a clue that they are primarily meant to be used for computation, not persistence. As an application container, Azure roles are far more volatile than an AWS EC2 instance. That is both intentional and a good thing (if what you want is compute resources). All the good features of Windows Azure, such as automatic patching, failover, etc., are only possible if the fabric controller can terminate roles whenever it feels like it. (I'm not sure how this termination works, but I imagine that, at least with web roles, there is a process to gracefully terminate the application by stopping the handling of incoming requests and letting those in flight come to an end.) There is no SLA for a single Windows Azure compute instance, as there is with a single EC2 instance. The SLA clearly states that you need two or more roles to get 99.95% uptime:
For compute, we guarantee that when you deploy two or more role instances in different fault and upgrade domains, your Internet facing roles will have external connectivity at least 99.95% of the time.
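The arithmetic behind a multi-instance SLA is worth making concrete. Under the simplifying assumption that instances fail independently (which fault and upgrade domains are designed to approximate), n redundant instances are all down only (1-a)^n of the time when each is up a fraction a of the time:

```python
def combined_availability(per_instance: float, n: int) -> float:
    """Availability of n redundant instances, assuming independent failures."""
    # The system is down only when every instance is down simultaneously.
    return 1 - (1 - per_instance) ** n

# Two instances that are each up only 99% of the time together
# reach roughly 99.99% availability under this model.
print(round(combined_availability(0.99, 2), 4))
```

This is why a platform can decline to promise anything about a single instance yet still offer 99.95% across two of them: redundancy, not per-instance uptime, is what the guarantee is built on.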
On 4 February 2011, Steve Marx from Microsoft asked Roger Jennings to stop publishing his Windows Azure Uptime Report [emphasis added]:
Please stop posting these. They're irrelevant and misleading.
For others reading this, on a scale-out platform like Windows Azure, uptime of any given instance is meaningless. It's like measuring the availability of a bank by watching one teller and noting when he takes his breaks.
Consider, for a moment, what this means when you run MongoDB on Windows Azure: your MongoDB role will be running where "uptime of any given instance is meaningless". That makes using a role for persistence really difficult. The only option then is to run multiple instances and make sure that the data is in both of them.
Before getting into how this would work on Windows Azure, consider for a moment that MongoDB is unashamedly fast, and that the speed is gained by committing data to memory rather than to disk as the default option. So committing to disk (using 'safe mode') or to a number of instances (and their disks) goes against some of what MongoDB stands for. The MongoDB API allows you to specify the 'safe' option (or "majority" in 2.0, but more about that later) for individual commands. This means that you can fine-tune when you are concerned about ensuring that data is saved. So, for important data you can be safe, and in other cases you may be able to put up with occasional data loss.
(Semi) Officially, MongoDB supports Windows Azure with the MongoDB Wrapper, which is currently an alpha release. In summary, per the documentation, it is as follows:
- It allows running a single MongoDB process (mongod) on an Azure worker role with journaling. It also optionally allows for having a second worker role instance as a warm standby for the case where the current instance either crashes or is being recycled for a normal upgrade.
- MongoDB data and log files are stored in an Azure Blob mounted as a cloud drive.
- MongoDB on Azure is delivered as a Visual Studio 2010 solution with associated source files
There are also some additional screen shots and instructions in the Azure Configuration docs.
What is interesting about this solution is the idea of a 'warm standby'. I'm not quite sure what that is and how it works, but since 'warm standby' generally refers to some sort of log shipping and the role has journaling turned on, I assume that the journals are written from the primary to the secondary instances. How this works with safe mode (and 'unsafe' mode) will need to be looked at and I invite anyone who has experience to comment. Also, I am sure that all of this journaling and warm standby has some performance penalty.
It is unfortunate that there is only support for a standalone mode as MongoDB really comes into its own when using replica sets (which is the recommended way of deploying it on AWS). One of the comments on the page suggests that they will have more time to work on supporting replica sets in Windows Azure sometime after the 2.0 release, which was today.
MongoDB 2.0 has some features that would be useful when trying to get it to work on Windows Azure, particularly the Data Centre Awareness “majority” tagging. This means that a write can be tagged to write across the majority of the servers in the replica set. You should be able to, with MongoDB 2.0, run it in multiple worker roles as replicas (not just warm standby) and ensure that if any of those roles were recycled that data would not be lost. There will still be issues of a recycled instance rejoining the replica set that need to be resolved however – and this isn't easy on AWS either.
I don't think that any Windows Azure application can get by with SQL Azure alone – there are a lot of scenarios where SQL Azure is not suitable. That leaves Windows Table Storage or some other database engine. Windows Table Storage, while well integrated into the Windows Azure platform, is short on features and could be more trouble than it is worth. In terms of other database engines, I am a fan of MongoDB, but there are other options (RavenDB, CouchDB) – although they all suffer from the same problem of recycling instances. I imagine that 10gen will continue to develop their Windows Azure Wrapper, and I expect that a 2.0 replica-set-enabled wrapper would be a fairly good option. So at this stage MongoDB should be a safe enough technology bet, but make sure that you use “safe mode” or “majority” sparingly in order to take advantage of the benefits of MongoDB.
My Uptime Report for my Live OakLeaf Systems Azure Table Services Sample Project: June 2011 post was the first uptime report with two Web roles. The Republished My Live Azure Table Storage Paging Demo App with Two Small Instances post of 5/9/2011 described the change from one to two web roles for the sample project.
In 2009 Microsoft released the Windows Azure platform, an operating environment for developing, hosting, and managing cloud-based services. Windows Azure established a foundation that allows customers to easily move their applications from on-premises locations to the cloud. Since then, Microsoft, analysts, customers, partners and many others have been telling stories of how customers benefit from increased agility, a very scalable platform, and reduced costs.
This post is the first in a planned series about Windows® Azure™. I will attempt to show how you can adapt an existing, on-premises ASP.NET application—like the Partner Velocity Platform (PVP), which is the engine that drives all partner-related functions behind the Microsoft Partner Network (MPN)—to one that operates in the cloud. This series of posts is intended for any architect, developer, or information technology (IT) professional who designs, builds, or operates applications and services that are appropriate for the cloud. Although applications do not need to be based on the Microsoft® Windows® operating system to work in Windows Azure, these posts are written for people who work with Windows-based systems. You should be familiar with the Microsoft .NET Framework, Microsoft Visual Studio®, ASP.NET, and Microsoft Visual C#®.
Introduction to the Windows Azure Platform
I could spend tons of time duplicating what others have already written about Windows Azure. But I will not. I will, however, provide you with pointers to where you can get great information that will give you a comprehensive introduction to it. Other than that, I will concentrate on adding to what others have written and providing context as it pertains to the migration of the PVP platform to Windows Azure.
Introduction to the Windows Azure Platform provides an overview of the platform to get you started with Windows Azure. It describes web roles and worker roles, and the different ways you can store data in Windows Azure.
Here are some additional resources that introduce what Windows Azure is all about.
There is a great deal of information about the Windows Azure platform in the form of documentation, training videos, and white papers. Here are some Web sites you can visit to get started:
- The portal to information about Microsoft Windows Azure is at http://www.microsoft.com/windowsazure/. It has links to white papers, tools such as the Windows Azure SDK, and many other resources. You can also sign up for a Windows Azure account here.
- The Windows Azure Platform Training Kit contains hands-on labs to get you quickly started. You can download it at http://www.microsoft.com/downloads/details.aspx?FamilyID=413E88F8-5966-4A83-B309-53B7B77EDF78&displaylang=en .
- Find answers to your questions on the Windows Azure Forum at http://social.msdn.microsoft.com/Forums/en-US/windowsazure/threads .
In my next post I will set the stage and tell you about the PVP platform, MPN, and the challenges Microsoft IT faces with the current infrastructure and code behind the PVP platform. We will discuss some of our goals and concerns, and the strategy behind the move of PVP to Azure.
Subscribed. Abel is a Principal PM at Microsoft IT Engineering.
Luiz Santos reiterated Microsoft SQL Server OLEDB Provider Deprecation Announcement in a 9/13/2011 post to the ADO.NET Team blog:
The commercial release of Microsoft SQL Server, codename “Denali,” will be the last release to support OLE DB. Support details for the release version of SQL Server “Denali” will be made available within 90 days of release to manufacturing. For more information on Microsoft Support Lifecycle Policies for Microsoft Business and Developer products, please see the Microsoft Support Lifecycle Policy FAQ. This deprecation applies to the Microsoft SQL Server OLE DB provider only. Other OLE DB providers, as well as the OLE DB standard, will continue to be supported until explicitly announced.
It's important to note that this announcement does not affect ADO's and ADO.NET's roadmaps. In addition, while the managed OLE DB classes will continue to be supported, if you are using them to connect to SQL Server through the SNAC OLE DB provider, you will be impacted.
If you use SQL Server as your RDBMS, we encourage you to use SqlClient as your .NET Provider. In case you use other database technologies, we would recommend that you adopt their native .NET Providers or Managed ODBC in the development of new and future versions of your applications. You don't need to change your existing applications using OLE DB, as they will continue to be supported, but you may want to consider migrating them as a part of your future roadmap.
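As a rough sketch of what that migration looks like in configuration (the server and database names here are placeholders, and the exact OLE DB provider string varies by SQL Server Native Client version), an application would move from an OLE DB connection string to a SqlClient one:

```xml
<!-- Hypothetical app.config fragment; only one entry would exist in practice. -->
<connectionStrings>
  <!-- Before: SQL Server Native Client OLE DB provider (being deprecated) -->
  <add name="OrdersOleDb"
       providerName="System.Data.OleDb"
       connectionString="Provider=SQLNCLI10;Data Source=myServer;Initial Catalog=OrdersDb;Integrated Security=SSPI" />
  <!-- After: SqlClient, the recommended .NET provider for SQL Server -->
  <add name="OrdersSqlClient"
       providerName="System.Data.SqlClient"
       connectionString="Data Source=myServer;Initial Catalog=OrdersDb;Integrated Security=True" />
</connectionStrings>
```

Code that already goes through a provider-agnostic factory (DbProviderFactories) may only need the providerName and connection string changed; code written directly against the System.Data.OleDb classes would need its types swapped for their SqlClient counterparts.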
Microsoft is fully committed to making this transition as smooth and easy as possible. In order to prepare and help our developer community, we will be providing assistance throughout this process. This will include providing guidance via technical documentation as well as tools to jump start the migration process and being available right away to answer questions on the SQL Server Data Access forum .
Beth Massi ( @bethmassi ) described Filtering Lookup Lists with Large Amounts of Data on Data Entry Screens in a 9/15/2011 post:
First off, let me say WOW, it's great to be back to blog writing! Sorry I've been away for a couple of weeks – I've been working on a lot of cool stuff internally since I got back from my trip. And I know I have a loooooong list of article and video suggestions from all of you that I need to get to, so thanks for bearing with me! Today I'm going to show you another common (and often-requested) technique for creating data entry screens.
In my previous posts on data-driven lookup lists (or sometimes called “Pick Lists”) I showed a few techniques for formatting, editing and adding data to them. If you missed them:
- How to Create a Multi-Column Auto-Complete Drop-down Box in LightSwitch
- How to Allow Adding of Data to an Auto-Complete Drop-down Box in LightSwitch
In this post I'm going to show you a couple different ways you can help users select from large sets of lookup list data on your data entry screens. For instance, say we have a one-to-many relationship from category to product so when entering a new product we need to pick a category from an auto-complete box. LightSwitch generates this automatically for us based on the relation when we create the product screen. We can then format it like I showed in my previous example .
Now say we've set up our product catalog of hundreds or even thousands of these products and we need to select from them when creating Orders for our Customers. Here's the data model I'll be working with – this illustrates that a Product needs to be selected on an OrderDetail line item. OrderDetail also has a parent OrderHeader that has a parent Customer, just like good ol' Northwind.
In my product screen above there are only about 20 categories to choose from so displaying all the lookup list data from the category table in this drop down works well. However, that's not the best option for the product table with lots of data — that's just too much data to bring down and display at once. This may not be a very efficient option for our users either as they would need to either scroll through the data or know the product name they were specifically looking for in order to use the auto-complete box. A better option is to use a modal window picker instead which allows for more search options as well as paging through data. Another way is to filter the list of products by providing a category drop-down users select first. Let's take a look at both options.
Using a Modal Window Picker
Say I have selected the Edit Details Screen template to create a screen for entering an OrderDetail record. By default, LightSwitch will generate an auto-complete box for the Product automatically for us. It also does this for the OrderHeader as well since this is also a parent of OrderDetail. On this screen however, I don't want the user to change the OrderHeader so I'll change that to a summary control. I'll also change the Auto Complete Box control on the Product to a Modal Window Picker:
I also want the products to display in alphabetical order so I'll create a query called SortedProducts and then at the top of the screen select “Add Data Item” and then choose the SortedProducts query:
Once you add the query to the screen, select the Product in the content tree and set the “Choices” property to “SortedProducts” instead of Auto.
You can also fine-tune how many rows of data will come down per page by selecting the SortedProducts query and then setting the number of items to display per page in the properties window. By default 45 rows per page are brought down.
Now hit F5 to run the application and see what you get. Notice that when you run the screen you can now select the ellipses next to Product which brings up the Modal Window Picker. Here users can search and page through the data. Not only is this easier for the user to find what they are looking for, using a Modal Window Picker is also the most efficient on the server.
Using Filtered Auto-Complete Boxes
Another technique is using another auto-complete box as a filter into the next. This limits the amount of choices that need to be brought down and displayed to the user. This technique is also useful if you have cascading filtered lists where the first selection filters data in the second, which filters data in the next, and so forth. Data can come from either the same table or separate tables like in my example – all you need is to set up the queries on your screen correctly so that they are filtered by the proper selections.
So going back to the OrderDetail screen above, set the Product content item back to an Auto Complete Box control. Next we'll need to add a data item to our screen for tracking the selected category. This category we will use to determine the filter on the Product list so that users only see products in the selected category. Click “Add Data Item” again and this time add a Local Property of type Category called SelectedCategory.
Next, drag the SelectedCategory over onto the content tree above the Product. LightSwitch will automatically create an Auto Complete Box control for you.
If you want to also sort your categories list you do it the same way as we did with products, create a query that sorts how you like, add the data item to the screen, and then set the Choices property from Auto to the query.
Now we need to create a query over our products that filters by category. There are two ways to do this: you can create a new global query called ProductsByCategory, or, if this query is only going to be used for this specific screen, you can just click Edit Query next to the SortedProducts query we added earlier. Let's just do it that way. This opens the query designer, which allows you to modify the query locally here on the screen. Add a parameterized filter on Category.Id by clicking the +Filter button; then in the second drop-down choose Category.Id, in the fourth drop-down select Parameter, and in the last drop-down choose “Add New…” to create a parameterized query. You can also make this parameter optional or required. Let's keep this required so users must select the category before any products are displayed.
Lastly we need to hook up the parameter binding. Back on the screen select the Id parameter that you just created on the SortedProducts query and in the properties window set the Parameter Binding to SelectedCategory.Id. Once you do this a grey arrow on the left side will indicate the binding.
Once you set the value of a query parameter, LightSwitch will automatically execute the query for you, so you don't need to write any code for this. Hit F5 and see what you get. Notice now that the Product drop-down list is empty until you select a Category, at which point the selection feeds the SortedProducts query and executes it. Also notice that if you make a product selection and then change the category, the selection is still displayed correctly; it doesn't disappear. Just keep in mind that anytime a user changes the category, the product query is re-executed against the server.
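Stripped of the designer plumbing, the cascading-filter behavior boils down to a query with a required parameter. A minimal sketch of the idea (plain Python with made-up data, not LightSwitch code):

```python
# Toy product catalog: (product_name, category) pairs; names are illustrative.
CATALOG = [
    ("Chai", "Beverages"),
    ("Chang", "Beverages"),
    ("Aniseed Syrup", "Condiments"),
    ("Ikura", "Seafood"),
]

def sorted_products(category_id=None):
    """The screen query: products filtered by the required category
    parameter and sorted alphabetically. While no category is selected
    the required parameter is unset, so the list stays empty."""
    if category_id is None:
        return []
    return sorted(name for name, cat in CATALOG if cat == category_id)
```

This mirrors what the screen does: an empty Product list until a Category is chosen, and a re-run of the filter each time the selection changes.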
One additional thing that you might want to do is to initially display the category to which the product belongs. As it is, the Selected Category comes up blank when the screen is opened. This is because it is bound to a screen property which is not backed by data. However we can easily set the initial value of the SelectedCategory in code. Back in the Screen Designer drop down the “Write Code” button at the top right and select the InitializeDataWorkspace method and write the following:

Private Sub OrderDetailDetail_InitializeDataWorkspace(saveChangesTo As List(Of IDataService))
    ' Write your code here.
    If Me.OrderDetail.Product IsNot Nothing Then
        Me.SelectedCategory = Me.OrderDetail.Product.Category
    End If
End Sub
Now when you run the screen again, the Selected Category will be displayed.
Using Filtered Auto-Complete Boxes on a One-to-Many Screen
The above example uses a simple screen that is editing a single OrderDetail record – I purposely made it simple for the lesson. However in real order entry applications you are probably going to be editing OrderDetail items at the same time on a one-to-many screen with the OrderHeader. For instance the detail items could be displayed in a grid below the header.
Using a Modal Window Picker is a good option for large pick lists that you want to use in an editable grid or on a one-to-many screen where you are editing a lot of the “many”s like this. However using filtered auto-complete boxes inside grid rows isn't directly supported. BUT you can definitely still use them on one-to-many screens, you just need to set up a set of controls for the “Selected Item” and use the filtered boxes there. Let me show you what I mean.
Say we create an Edit Detail Screen for our OrderHeader and choose to include the OrderDetails. This automatically sets up an editable grid of OrderDetails for us.
Change the Product in the Data Grid Row to a Modal Window Picker and you're set – you'll be able to edit the line items and use the Modal Window Picker on each row. However in order to use the filtered drop downs technique we need to create an editable detail section below our grid. On the content tree select the “children” row layout and then click the +Add button and select Order Details – Selected Item.
This will create a set of fields below the grid for editing the selected detail item (it will also add Order Header, but since we don't need that field here you can delete it). I'm also going to make the Data Grid Row read-only by selecting it and, in the properties window, checking “Use Read-only Controls”, as well as remove the “Add…” and “Edit…” buttons from the Data Grid command bar. I'll add an “AddNew” button instead. This means that modal windows won't pop up when entering items; instead we will do it in the controls below the grid. You can make all of these changes while the application is running in order to give you a real-time preview of the layout. Here's what my screen looks like now in customization mode.
Now that we have our one-to-many screen set up the rest of the technique for creating filtered auto-complete boxes is almost exactly the same. The only difference is the code you need to write if you want to display the Selected Category as each line item is edited. To recap:
- Create a parameterized query for products that accepts an Id parameter for Category.Id
- Add this query to the screen (if it's not there already) and set it as the Choices property on the Product Auto Complete Box control
- Add a data item of type Category to the screen for tracking the selected category
- Drag it to the content tree above the Selected Item's Product to create an Auto Complete Box control under the grid
- Set the Id parameter binding on the product query to SelectedCategory.Id
- Optionally write code to set the Selected Category
- Run it!
The only difference when working with collections (the “many”s) is step 6 where we write the code to set the Selected Category. Instead of setting it once, we will have to set it anytime a new detail item is selected in the grid. On the Screen Designer select the OrderDetails collection on the left side then drop down the “Write Code” button and select OrderDetails_SelectionChanged. Write this code:

Private Sub OrderDetails_SelectionChanged()
    If Me.OrderDetails.SelectedItem IsNot Nothing AndAlso Me.OrderDetails.SelectedItem.Product IsNot Nothing Then
        Me.SelectedCategory = Me.OrderDetails.SelectedItem.Product.Category
    End If
End Sub
In this article I showed you a couple techniques available to you in order to display large sets of lookup list data to users when entering data on screens. The Modal Window Picker is definitely the easiest and most efficient solution. However sometimes we need to really guide users into picking the right choices and we can do that with auto-complete boxes and parameterized queries.
Like most of you, I've been wonderfully surprised by the Microsoft BUILD conference this last week. The delivered software and presentations to help us get started with Windows 8 far exceeded any expectations I had.
To try and add anything to what has been clearly communicated would be foolish on my part. Instead let me tell you about my “Lazarus” experience this week.
I've been eyeing the Asus EP121 for several weeks now. I got to play with one at the Bellevue Microsoft Store. This is one sweet unit.
Well, I have a dusty HP tm2 TouchSmart laptop/tablet. It has a Core i3 1.2GHz, 4GB memory, an integrated graphics card, and a slow 5400rpm drive. My thinking was: if I can pull a Lazarus on this computer for 6-12 months, I'll save myself the $1,000 now and wait for the next generation of hardware with a fast CPU, SSD, HD screen, etc.
I did use the HP tm2 for Windows Phone 7 development and OneNote note-taking. It was kind of slow, especially compared to other modern hardware.
The slowness was not attributed to Windows 7, but rather to lame hardware.
PC hardware manufacturers please start making decent hardware that competes with Apple's hardware and PLEASE stop putting crapware on my new PC. All crapware should be a line item, opt-in.
I need to move off this topic before I go into a tirade.
On the good side, one of the keynotes at BUILD showed new hardware coming soon that looks like the MacBook Air, metal, thin, etc. At last. Please offer good components in your units, I'll pay for them.
So I replaced the first-generation 5,400rpm hard drive with a 7,200rpm second-generation SATA drive. I was getting just a little excited, breathing new life into my laptop.
Following simple directions on Scott Hanselman's blog, I loaded the Win8 Preview on a USB.
When I booted the laptop, I changed the default boot device to the USB drive so I could install Windows. When Windows restarts, don't forget to change the default boot back to your hard drive.
Installation took 12 minutes; that covered Windows, Visual Studio, demo applications, etc. For a Core i3 and a decent disk, still respectable.
The laptop boots very quickly, applications are responsive and fun to use. I have not installed Office yet, but will soon. For now, just learning to get around Windows 8 and how to write Metro XAML apps.
After I logged in, I ran Windows Update, and one of the items installed was the “Microsoft IntelliPoint 8.2 Mouse Software for Windows – 64 bit”. This update caused touch on my laptop to quit working, so I used Add/Remove Programs to uninstall it, rebooted, and got touch working again.
Visual Studio XAML Designer Patch
You need to install a patch published by the Expression Team to correct a mouse issue with the designer.
After downloading, don't forget to “Unblock” the .zip file. The instructions left this out.
You MUST follow the installation instructions, most important you must install the patch as an administrator.
The fun part will be trying to figure out how to open an Administrator Command Prompt. I could not figure out how to do this using the Metro interface. So… I opened Windows Explorer on the Desktop, navigated to the \Windows\system32 folder, right-clicked the cmd.exe file, and selected “Run as Administrator.” While you're at it, go ahead and pin that Administrator Command Prompt to the Taskbar; problem solved.
Getting Around Windows 8
Since you probably won't be writing code using your TouchScreen keyboard, you'll want to get up to speed on Windows Shortcuts. The following blog post is being recommended by several on Twitter so I've included it here as well.
Before Your First Project
Before you dive into your first Metro project, take time and watch some of the BUILD videos. If you only watch one video, watch this one: http://channel9.msdn.com/events/BUILD/BUILD2011/BPS-1004 . Jensen Harris clearly explains Metro and the thinking behind it. He is also one of the best presenters at BUILD and connects with the audience and viewers alike.
The video below shows my HP tm2 after the Lazarus operation. It's short; three minutes gives you a good feel for how a Core i3 runs Windows 8.
These are good times for Windows developers.
For me, I'm finishing up my WPF/Prism BBQ Shack program and will move the cash register and online purchasing modules to Metro. Metro is perfect for a touch screen cash register. This will be so much fun to write.
Karl works on the Microsoft p&p Prism, Phone and Web Guidance teams.
Doug Seven clarified the relationship between the Windows 8 Platform, the CLR and Development Tools in his A bad picture is worth a thousand long discussions post of 9/15/2011:
While here at Build I've been in lots of conversations with customers, other attendees, Microsoft MVPs, Microsoft Regional Directors, and Microsoft engineering team members. One of the recurring topics that I've been talking about ad nauseam is the “boxology” diagram of the Windows 8 Platform and Tools (shown here).
Now let me tell you, I have drawn a lot of these “marketecture” diagrams in my time, and it's not easy. These kinds of diagrams are never technically accurate. There is simply no easily digestible way to make a technically accurate diagram for a complex system that renders well on a slide and is easy to present and explain. The end result is that you create a diagram that is missing a lot of boxes – missing a lot of the actual technology that is present in the system. Unfortunately that is exactly what has happened here – the Windows 8 boxology is missing some of the actual technology that is present.
One of the conversations that has come up is around the missing box in the “green side” (aka Metro style apps) for the .NET Framework and the CLR. Do VB and C# in Metro style apps compile and run directly against the WinRT? Is this the end of the .NET Framework?
Others who have done some digging into the bits are wondering if there are two CLRs. What the heck is going on in Windows 8?
I spent some time with key members of the .NET CLR team last night (no names, but trust me when I say, these guys know exactly how the system works), and here's the skinny.
- There is only one CLR. Each application or application pool (depending on the app type) spins up a process and the CLR is used within that process. Meaning, a Metro style app and a Desktop Mode app running at the same time are using the same CLR bits, but separate instances of the CLR.
- The .NET Framework 4.5 is used in both Desktop Mode apps and in Metro style apps. There is a difference though. Metro style apps use what is best described as a different .NET profile (e.g., Desktop apps use the .NET Client Profile and Metro style apps use a “.NET Metro Profile”). There is NOT actually a different profile, but the implementation of .NET in Metro style apps is LIKE a different profile. Don't go looking for this profile – it's basically rules built into the .NET Framework and CLR that define what parts of the framework are available.
- Whether a Desktop Mode app or a Metro style app, if it is a .NET app, it is compiled to the same MSIL. There isn't a special Windows 8 Metro IL – there is, like the CLR, only one MSIL.
A More Accurate Picture
A more correct diagram (but still marketecture that is not wholly technically accurate) would look like this:
In this diagram you can see that the CLR and the .NET Framework 4.5 are used for C# and Visual Basic apps in either Desktop Mode apps (blue side) or Metro style apps (green side). Silverlight is still only available in Desktop Mode as a plug-in to Internet Explorer (yes, out-of-browser is still supported in Desktop Mode). Another addition in this diagram is DirectX, which was strangely absent from the original diagram. DirectX is the de facto technology for high-polygon-count applications, such as immersive games. DirectX leverages the power of C++ and can access the GPU.
The biggest confusion, as I mentioned, has been around the use of the .NET Framework across the blue side and green side. The reason for the, as I call it, .NET Metro Profile is that Metro style apps run in an app container that limits what the application can have access to, in order to protect the end user from potentially malicious apps. As such, the Metro Profile is a subset of the .NET Client Profile and simply takes away some of the capabilities that aren't allowed by the app container for Metro style apps. Developers used to .NET will find accessing the WinRT APIs very intuitive – it works similarly to having an assembly reference and accessing the members of said referenced assembly.
Additionally, some of the changes in the Metro Profile are to ensure Metro style apps are constructed in the preferred way for touch-first design and portable form factors. An example is File.Create(). Historically, if you were using .NET to create a new file, you would use File.Create(string fileLocation) to create the new file on disk, then open a stream writer to create the contents of the file as a string. This is a synchronous operation – you make the call and the process stalls while you wait for the return. The idea of modern, Metro style apps is that asynchronous programming practices should be used to cut down on things like I/O latency, such as that created by file system operations. What this means is that the .NET Metro Profile doesn't provide access to File.Create() as a synchronous operation. Instead, you can still call File.Create() (or File.CreateNew()… I can't recall right now) as an asynchronous operation. Once the callback is made you can still open a stream writer and work with the file contents as a string, just like you would have.
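The sync-versus-async distinction is easy to see by analogy. The sketch below uses Python's asyncio purely to illustrate the pattern; it is not the .NET or WinRT API, and the file name is made up:

```python
import asyncio
import os
import tempfile

def create_file_sync(path, contents):
    # Synchronous: the caller blocks until the write returns,
    # which is what the classic File.Create(...) pattern does.
    with open(path, "w") as f:
        f.write(contents)

async def create_file_async(path, contents):
    # Asynchronous: the blocking I/O runs on a worker thread and is
    # awaited, so the event loop (think: the UI) stays responsive.
    loop = asyncio.get_running_loop()
    await loop.run_in_executor(None, create_file_sync, path, contents)

path = os.path.join(tempfile.gettempdir(), "metro_profile_demo.txt")
asyncio.run(create_file_async(path, "hello"))
```

The async variant does the same work; the difference is that the caller yields while the I/O is in flight instead of stalling, which is exactly the behavior the Metro Profile enforces for file operations.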
Ultimately all of this means that you have some choice, but you don't have to sacrifice much if anything along the way. You can still build .NET and Silverlight apps the way you are used to, and they will run on Windows for years to come. If you want to build a new Metro style app, you have four options to choose from:
- XAML and .NET (C# or VB): You don't have to give up too much of the .NET Framework (remember, you only give up what is forbidden by the application container), and you get access to WinRT APIs for sensor input and other system resources.
- XAML and C++: You can use your skills in XAML and C++ to leverage (or even extend) WinRT. Of course you don't get the benefit of the .NET Framework, but hey… some people like managing their own garbage collection.
- DirectX and C++: If you're building an immersive game you can use DirectX and access the device sensors and system resources through C++ and WinRT.
- HTML and JavaScript: You can build Metro style apps with HTML5, CSS, and JavaScript, calling into WinRT through its JavaScript projection.
Hopefully this adds some clarity to the otherwise only slightly murky picture that is the Windows 8 boxology.
Don't forget to check out Telerik.com/build .
Doug is Executive VP at Telerik.
Since last Thursday, I was ordered under strict nondisclosure to keep my mouth shut. And that was really hard for me to do because I could barely contain my enthusiasm for what is probably the most significant server operating system release that Microsoft has ever planned to roll out.
Nothing from Microsoft, and I mean literally nothing has ever been this ambitious or has tried to achieve so much in a single server product release since Windows 2000, when Active Directory was first introduced.
Last week, a group of about 30 computer journalists were invited to Microsoft's headquarters in Redmond to get an exclusive two-day preview of what is tentatively being referred to as “Windows Server 8”.
So much was covered over the course of those two days that the caffeine-fueled and sleep-deprived audience was sucking feature and functionality improvements through a proverbial fire-hose.
We weren't given any PowerPoints or code to take home — that material will be reserved for after the BUILD conference taking place in Anaheim this week, and I promise to get you galleries and demos of the technology as soon as I can.
[ UPDATE: I now have the PowerPoints, and I'll be updating the content of this article to reflect the comprehensive feature list.]
Still, I did take enough notes to give you a brief albeit nowhere near complete overview of the Server OS that is likely to ship from Microsoft within the next year. And it will definitely make huge waves in the enterprise space, I guarantee.
It's not fully known if “Windows Server 8” is just a working title or if it is the actual product name, but what was shown to us in the form of numerous demos and about 20 hours of PowerPoints will be the Server OS that replaces Windows Server 2008 R2.
Server 8 will unleash a massive tsunami of new features specifically targeted at building and managing infrastructure for large multi-tenant Clouds, drastically increased scalability and reliability features in the areas of Virtualization, Networking, Clustering and Storage, as well as significant security improvements and enhancements.
Frankly, I am amazed by the amount of features — numbering in the hundreds — that have been added to this product, and how many are actually working right now given the Alpha-level code we were shown. In all the demos, very few glitches occurred, and much of the underlying code and functionality appears to be very mature.
Based on the maturity of the code we were shown, my perception is that these features have been under development for several years, possibly going back as far as the Windows Vista release timeframe. That leads me to believe a great deal of material was dropped on the cutting-room floor in the Server 2008 and Server 2008 R2 releases, held back by Microsoft's top brass in the Server group until it was truly ready for prime time.
We did see some new UI improvements — namely the new Server Manager, which has been designed to replace a lot of the MMC drill-down and associated snap-ins and is targeted towards sysadmins that need to manage multiple views of a large amount of systems simultaneously, based on actual services and roles running on the managed systems, using a “Scenario-Driven” user interface.
However, a lot of what we saw in terms of actual look-and-feel was just standard old-school Windows UI, and a lot of PowerShell.
In fact, I would say that Microsoft is pushing PowerShell really hard to sysadmins because you can actually get some very sophisticated tasks done in only a single command, such as migrating one or multiple virtual machines to another host, or altering storage quotas.
Thousands of “Commandlets” for PowerShell have already been written, so as to take advantage of the scripting functionality and heavy automation that will be required for large scale Windows Server 8 and Cloud deployments.
This is not to say Windows 8 Server will be going all command-line Linux-y. There will be significant new UI pieces, but Microsoft appears to have done its software development in reverse this time around: build the API layers and underlying engines first, and then write the UI layers to interface with them afterwards.
They've got a year now to polish the UI elements, a number of which we were told have some commonality with the “Metro” UI shown at BUILD for the Windows 8 client. As I said, we didn't get to see them at the special Reviewers Workshop, but I'll show them to you as soon as I am able.
Microsoft also stressed that many of the APIs for various new features, including their entire management API will be opened for third party vendors to integrate with and so they could write their own UIs.
One of the ways they are going to do this is by releasing a completely portable, brand-new Web-Based Enterprise Management (WBEM) CIM server for Linux called NanoWBEM, written by one of the main developers of OpenPegasus and designed to work with Windows Server 8's new management APIs, so that various vendors can build the functionality into their products via a common provider interface.
While not strictly Open Source per se, NanoWBEM will be readily licensable to other companies, which is a big step for Microsoft in opening up interfaces into Windows management.
Windows Management Instrumentation (WMI) has also been enhanced considerably, as it now is capable of talking to WSMAN or DCOM directly. This makes it much easier for developers to write new WMI providers.
As to be expected of a Cloud-optimized Windows Server release, many enhancements are going to come in the form of improvements to Hyper-V. And boy are they big ones.
For starters, Hyper-V will now support up to 32 processors and 512GB of RAM per VM. To accommodate larger virtual disks, a new virtual disk format, .VHDX, will be introduced, allowing virtual disk files greater than 2 terabytes.
Not impressed? How about 63-node Hyper-V clusters that can run up to 4000 concurrent VMs simultaneously? No, I'm not joking. They actually showed it to us, for real, and it was working flawlessly.
Live Migration in Hyper-V has also been greatly enhanced — to the point where clustered storage isn't even required to do a VM migration anymore.
Microsoft demonstrated the ability to literally “beam” — a la Star Trek — a virtual machine between two Hyper-V hosts with only an IP connection.
A VM on a developer's laptop hard disk running on Hyper-V was sent over Wi-Fi to another Hyper-V server without any downtime — all we saw was a single dropped PING packet. We also observed the ability of Hyper-V to do live migrations across different subnets, with multiple live migrations being queued up and transferring simultaneously.
Microsoft told us that the limit to the number of VM and storage migrations that can run simultaneously across a Hyper-V cluster is governed only by the amount of bandwidth you actually have. There are no limits to the number of concurrent live migrations in the OS itself. None.
It should also be added that with the new SMB 2.2 support, Hyper-V virtual machines can now live on CIFS/SMB network shares.
Another notable improvement to Hyper-V will be “Hyper-V Replica,” which is roughly analogous to the asynchronous/consistent replication functionality sold in Novell PlateSpin's Protect 10 virtualization disaster recovery product. This, of course, will be a built-in feature of the OS and will not require additional licensing whatsoever.
The list of Hyper-V features goes on and on. A new Open Extensible Virtual Switch will allow 3rd-parties to plug into Hyper-V's switch architecture. SR-IOV for privileged access to PCI devices has now been implemented as well as CPU metering and resource pools, which should be a welcome addition to anyone currently using them in existing VMWare environments to portion out virtual infrastructure.
VDI… Did I mention the VDI improvements? Windows Server 8's Remote Desktop Session Host, or RDSH (what used to be called Terminal Server) now fully supports RemoteFX and is enabled by default out of the box.
What's the upside to this? Well now you can put GPU cards in your VDI server so that your remote clients, be it terminals or tablets or Windows desktops that have the new RemoteFX-enabled RDP client software can run multi-media rich applications remotely with virtually no performance degradation.
As in, completely smooth video playback on remote desktops, as well as the ability to experience full-blown hardware-accelerated Windows 7 Aero and Windows 8 Metro UIs with full DirectX10 and OpenGL 1.1 support on virtualized desktops.
This will work with full remote desktop UIs as well as “Published” applications, a la Citrix. And no, you won't need Citrix XenApp in order to support load balanced remote desktop sessions anymore. It's all built-in.
RemoteFX and the new RDSH are killer, but you know what's really significant? You can template virtual desktops from a single gold master image, stored on disk and instantiated in memory as a single VM, and then use system policy to customize individual sessions with roaming profiles, customized desktops and apps, and personal storage. That conserves a heck of a lot of disk space and memory on the VDI server.
And in Server 8, RDP is also now much more WAN optimized than in previous incarnations.
Can you say hasta la vista, Citrix? I knew that you could.
[ Editor's Note: This is my personal opinion. As far as Microsoft is concerned, Citrix is still one of their most valued partners, and Microsoft has in no way indicated to us that this new RDSH functionality is intended to replace XenApp. ]
One of the demos we saw using this technology was a 10-finger multitouch display running RDP and RemoteFX, with the Microsoft “Surface” interface virtualized over the network. It was truly stunning to see.
A number of network improvements have also been implemented that improve Hyper-V as well as all services and roles running on the Server 8 stack, including full network virtualization and network isolation for multi-tenancy environments.
This includes Port ACLs that can block by source and destination VM, implementations of Private VLANs (PVLAN), network resource pools and open network QoS as well as packet-level IP re-write with GRE encapsulation and consistent device naming.
Multi-Path I/O (MPIO) drivers (such as EMC's PowerPath and IBM's SDDPCM), when combined with Microsoft's virtual HBA provider, can now be installed as virtualized Fibre Channel host bus adapters (HBAs) within virtual machines, to take better advantage of the performance of enterprise SAN hardware and to give VMs direct access to SAN LUNs.
Windows Server 8 will also include improved Offloaded Data Transfer, so that when you drag and drop files between two server windows, the server OS knows to transfer data directly from one system to another, rather than passing it through your workstation or through another server.
“Branch Caching” performance has also been improved, reducing the need for expensive WAN optimization appliances. Microsoft has also implemented a BitTorrent-like technology for the enterprise in branch offices that enables client systems to find the files they need locally on other client systems and servers instead of going across the WAN.
The NFS server and client code within Server 8 has also been completely re-written from the ground up and is now much faster, which should be a big help when needing to inter-operate with Linux and UNIX systems.
Server 8 will also include built-in NIC teaming, a “feature” that has long been part of Windows Server deployments but has in the past been provided by 3rd-party vendors. With the new NIC teaming feature, network interface boards from different vendors can be mixed into bonded teams of trunked interfaces, providing performance improvements as well as redundancy.
No more need for 3rd-party utilities and driver kits to do this.
Storage in Server 8 has also been greatly enhanced, most importantly with the introduction of data de-duplication as part of the OS. Based on two years of work at Microsoft on the algorithm alone, de-duplication uses commonality factoring to dramatically reduce the amount of data stored on a volume, with no significant performance implications.
Naturally, this also allows the backup window for a server with a de-duplicated file system to be reduced dramatically.
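To make the idea concrete, commonality factoring can be sketched as block-level hashing: split the data into blocks, hash each one, and store each unique block only once. This is a minimal illustration, not Microsoft's actual (unpublished) algorithm; the fixed 4 KB block size and SHA-256 hash are assumptions made purely for the sketch.

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative fixed block size; real engines often use variable-size chunks

def deduplicate(data: bytes):
    """Split data into blocks and keep one copy of each unique block.
    Returns (unique-block store, list of hashes to rebuild the data)."""
    store = {}
    recipe = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # store the block only the first time we see it
        recipe.append(digest)
    return store, recipe

def rehydrate(store, recipe):
    """Reassemble the original data from the block store and recipe."""
    return b"".join(store[d] for d in recipe)

# Data with lots of repeated blocks (think: VMs built from the same gold image)
data = b"A" * 8192 + b"B" * 4096 + b"A" * 8192
store, recipe = deduplicate(data)
print(len(data), sum(len(b) for b in store.values()))  # 20480 8192
assert rehydrate(store, recipe) == data
```

Here 20,480 bytes of logical data need only 8,192 bytes of unique blocks, which is also why backup windows shrink: only unique blocks need to be read.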
Oh, and chkdsk? Huge storage volumes can be checked and fixed in an on-line state in a mere fraction of the time it took before. Like, in ten percent of the time it used to.
Server 8 will have built-in support for JBODs, as well as new support for SMB storage using RDMA (Remote Direct Memory Access) networks, allowing large storage pools to be built with commodity 10 Gigabit Ethernet networks rather than much more expensive Fibre Channel SAN technology. Microsoft also demonstrated the capability for Server 8 to “thin provision” storage on JBODs as well.
Clustered disks can now be fully encrypted using BitLocker technology and the new Clustered Shared Volume 2.0 implementation fully supports storage integration for built-in replication as well as hardware snapshotting.
And we saw a bunch of new storage virtualization stuff too. I didn't take good notes that day, sorry.
I'm sure I'm leaving out a large number of other important things, including an all-new IP address management UI (appropriately named IPAM) as well as some new schema extensions to Active Directory that greatly improve file security when using native Windows 8 servers. And all of the new stuff that's been added to IIS and Windows' networking stack in order to accommodate large multi-tenant environments and hybrid clouds.
By the end of the second day at the Windows Server 8 Reviewer's Workshop I was literally ready to pass out from the sheer amount of stuff being shown to us, and my brain had turned to mush, but all of this should whet your appetites for Server 8 when I finally have some code running and can actually demonstrate some of this stuff.
While Microsoft has certainly gotten its act together with its last two Server releases in terms of basic stability, has brought its core OS up to date with Windows 7, and has made a good college try at virtualization with early releases of Hyper-V, I haven't been truly excited about a Windows Server release in a long time.
Call me excited.
In my opinion, Server 8 changes everything, particularly from a complete virtualization and storage value proposition. CIOs are going to be very hard pressed to resist the product simply from all the stuff that you get built-in that you would otherwise have to spend an utter fortune on with 3rd-party products.
Are these new features worth the wait? Should Microsoft's cloud and virtualization software competitors be worried? Talk Back and Let Me Know.
“ZDNet Senior Technology Editor Jason Perlow walks through the new Metro UI and legacy compatibility features in the Windows 8 [client] Developer Preview” in his 00:21:01 Windows 8 Developer Preview Video Tour video of 9/17/2011:
Charlie Babcock wrote Cloud Performance Monitoring: What You Can't See for InformationWeek ::Analytics on 9/14/2011:
For IT managers used to scrutinizing their infrastructure, sending workloads to run in the cloud can be as nerve-racking as dropping a firstborn off at school for the first time. Even when they ping the app frequently for reassurance, they know many unseen things could be going wrong.
Emerging online services are trying to increase the visibility IT teams have into the apps running on infrastructure as a service, particularly into those things they can't see. Keynote Systems' Internet Health Report, for example, monitors traffic at key network backbone junctions between carriers and highlights those that may be a problem.
Charles Babcock is an editor-at-large for InformationWeek and author of Management Strategies for the Cloud Revolution, a McGraw-Hill book published in May 2010. He continues to stay at the forefront of reporting on cloud developments and is a frequent commentator and speaker at industry events. He joined InformationWeek in San Francisco in 2003.
He is the former editor-in-chief of Digital News, former software editor of Computerworld, analyst with the Mainspring group in Cambridge, Mass., sold to IBM, and former technology editor of Interactive Week. He graduated from Syracuse University, where he obtained a bachelor's degree in journalism and served as editor in chief of the Daily Orange.
Bryan Semple ( @VK_Bryan ) asserted “Understanding the goal of chargeback and show back is the starting point to select the proper pricing strategy” in a deck for his Setting Prices for Private Clouds article of 9/14/2011 for the Cloudonomics Journal:
As more and more private clouds are deployed, organizations will face the requirement to implement chargeback or at least show back. Key to implementing chargeback is setting the chargeback rate. Much as pricing significantly impacts a public cloud provider, for the private cloud provider, the chargeback rate has significant implications. This article examines three strategies for setting prices for private cloud operators.
Public cloud operators, or for that matter any business selling a product, set pricing based on the pricing triangle. The triangle contains three key balance points for pricing a product:
- Product or Service Cost – unless you are selling something as a loss leader, price is generally set above cost. The greater the delta between cost and price, the higher the margins. Determining the cost of IT services can be challenging but, in general, is a summation of compute, network port, storage port, storage space, power, datacenter space, software, maintenance, and support costs.
- Value – this is the perceived value of the product from the customer's point of view. Segmenting the potential customers helps to determine what portion of the customer base finds the value of your product the greatest. Hopefully, this segment values the product above its cost and is large enough to drive significant top-line revenue. For IT, the customer values tangible items such as uptime and patch management services. The challenge for IT is the value items it is required to provide the corporation, such as compliance with backup policies, that the individual IT customer may not value.
- Competition – no matter the cost or perceived value to customers, competitors dramatically drive market pricing. It's possible to price above competitors provided the perceived value to the customer is there. Conversely, competitive pricing can erode even the strongest value point. For IT, the competition is surfacing as public cloud providers.
A balanced approach with the triangle sets a price that provides sufficient margin, is defensible against the competition, and has the customer seeing sufficient value in the price.
While the balanced approach is preferred for corporations selling goods and services, it may not be preferred for private cloud operators. Other pricing strategies could be:
- Cost plus pricing that ignores the value and competitive legs of the triangle. Cost plus pricing simply takes the cost of a product and adds a set mark up. For IT, this is the traditional chargeback approach.
- Value-based pricing only looks at the value delivered to a customer yet ignores competitive pressures.
- Commodity pricing ignores all the value-added services a vendor provides, and simply sets prices based on the offerings of competitors.
Ideally, an enterprise uses the pricing triangle to balance the three inputs and come up with the right price. But IT is not a true vendor, and there is a hazard in pricing IT services this way. Value, cost-plus, or competitive pricing may actually be preferred, depending on the goals of the enterprise's private cloud initiative.
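The three alternative strategies reduce to a simple selection rule. A minimal sketch; the function name, the 25% markup, and the dollar figures are invented purely for illustration:

```python
def price(cost, perceived_value, competitor_price, strategy, markup=0.25):
    """Illustrative price under the three strategies described above."""
    if strategy == "cost_plus":   # cost plus a fixed markup; ignores value and competition
        return cost * (1 + markup)
    if strategy == "value":       # charge what the customer perceives it is worth
        return perceived_value
    if strategy == "commodity":   # match the competition, ignoring value-added services
        return competitor_price
    raise ValueError(f"unknown strategy: {strategy}")

# A VM costing $80/month to run, worth $150/month to the business unit,
# and available from a public cloud for $110/month:
print(price(80, 150, 110, "cost_plus"))   # 100.0
print(price(80, 150, 110, "value"))       # 150
print(price(80, 150, 110, "commodity"))   # 110
```

The spread between the three outputs is exactly the tension the pricing triangle is meant to balance.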
Which strategy does one use? There are three pricing strategies for charging back or show back:
- Price to reduce sprawl and waste
- Price to push IT to operate as a business
- Price to enable open market competition between IT and outside vendors
We will examine each of these pricing strategies and how the pricing triangle may be different.
Price to Reduce Sprawl and Waste
By exposing the cost of resources consumed by a business unit, business units are motivated to reduce their spending to maximize their internal P&L. Inefficient consumption happens with VM sprawl and overallocation of resources to a virtual machine. Sprawl occurs simply because of the impression that "VMs are free" while overallocation of resources occurs when application owners insist on deploying the same resources to a virtual application as a physical application regardless of utilization.
What is the optimal pricing strategy to reduce sprawl and waste?
Since the goal is simply to reduce waste, coming up with a reasonable cost to deliver services and exposing that cost to end users is the optimal strategy. Since it's the act of exposing the cost that drives efficiency, there is no reason to charge the actual prices. Some logical formula that takes into account actual cost, then modifies the pricing to be below public cloud providers is all that is needed.
Why price below public cloud providers? Pricing significantly above public cloud providers could introduce an incentive for internal customers to seek out external providers. Since external providers often don't meet the compliance requirements of a private cloud, this would not be a preferred result. Since the goal is simply to reduce waste, not to compete on the public market, pricing above public providers is not a good strategy. By adjusting the actual internal costs, pricing below the competition, and simply highlighting the value-added services, IT organizations can achieve the goal of reducing waste without driving their customers to external providers.
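The "reasonable cost, kept below the public cloud rate" formula described here could be sketched as a one-liner; the 10% markup and 10% discount figures are illustrative assumptions, not recommendations:

```python
def showback_price(internal_cost, public_cloud_price, markup=0.10, discount=0.10):
    """Charge cost plus a modest markup, but cap the price just below the
    public cloud rate so internal customers have no price incentive to leave."""
    return min(internal_cost * (1 + markup), public_cloud_price * (1 - discount))

print(showback_price(80, 100))   # cost-plus wins: roughly 88
print(showback_price(120, 100))  # capped just under the public rate: roughly 90
```

Even when internal costs exceed the public cloud price (the second call), the exposed rate stays below the competition, which is the whole point of this strategy.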
There is a hazard, however, with charging back in all these scenarios. Once an IT group charges back, they lose some amount of control over the resource. For an individual business unit, wasting 30% of an IT asset could be a minor line item on their balance sheet not worth the trouble to correct. For the IT group, however, if every business unit wasted 30% of their IT resources, that adds up to significant waste for the company. So although chargeback is in place, retaining the ability to force corporate-wide efficiency must be maintained. One VKernel customer actually charged a higher rate for wasted resources than efficiently used resources.
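The VKernel customer's tactic of billing wasted capacity at a higher rate than efficiently used capacity can be sketched like this (the per-GB rate and the 2x waste premium are invented for illustration):

```python
def chargeback(used_gb, allocated_gb, rate_per_gb=0.50, waste_premium=2.0):
    """Bill used capacity at the base rate and idle (wasted) capacity at a
    premium, nudging business units to right-size their VMs."""
    wasted_gb = max(allocated_gb - used_gb, 0)
    return used_gb * rate_per_gb + wasted_gb * rate_per_gb * waste_premium

print(chargeback(70, 100))   # 65.0 (35 for usage + 30 penalty on the idle 30 GB)
print(chargeback(100, 100))  # 50.0 (no waste, no premium)
```

A business unit wasting 30% of its allocation pays more than one using it all, which restores the corporate-wide efficiency incentive that plain chargeback gives away.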
Price to Push IT to Operate as a Business
For this second strategy, the pricing triangle actually provides good guidance on how to price. Assuming internal customers are not permitted to use competitive offerings from sites like AWS, setting prices to cover costs makes the most sense. Since IT generally provides many additional services that external providers do not provide, internal costs tend to be higher than competitive offerings. It's important to actively market the added value IT provides. Exposing actual costs will push IT to reduce these costs over time, which is good, but also provides IT with a platform to market the added value services they provide. The same hazard exists with internal customers wasting resources as it did with goal #1. And, despite IT directives not to use outside vendors, public cloud operators will start to make progress in rogue business units.
Price to Enable Free Market Competition with External Vendors
This goal is perhaps the most challenging for IT but also a requirement for organizations looking to implement advanced hybrid clouds. External competitors like Amazon are generally cheaper than internal IT. But they also don't offer all the value-added services. However, some of these services may not be valued by individuals inside the corporation. Rogue development organizations may not care about esoteric compliance requirements. If IT operates in the free market without significant marketing to explain the need for higher internal IT costs and the value add it provides, end users will go for less expensive services, but also place corporate assets at risk.
Pricing in this model would be to price at cost or best case at a competitor's pricing. IT would then market the value-added services to end users. In this model, applications move between internal private clouds and public clouds based on the lowest cost. Hence the hybrid cloud label. Despite the theoretical efficiency, there are many pitfalls with hybrid cloud/market based pricing. Is IT still responsible for an application owner who moves his app to the public cloud for lower prices yet fails to recognize the public cloud provider does not meet compliance requirements? The list of hybrid cloud pitfalls is long, but the pricing strategy in this case is the most pure of the three.
Common Benefits and Pitfalls
Whichever goal and pricing strategy is selected, there are common benefits and pitfalls.
- All models require determination of actual IT costs per virtual machine. This is an invaluable exercise.
- All models require an inventory of value-added IT services such as compliance, backup, and patch management.
- All models require marketing those value-added services.
- Loss of control that pits the greater good of the company vs. individual end users. Once end users are "paying" for a service, they can waste it. Allowing internal IT to push for across the board efficiencies is important to contain IT spending.
- Easy comparison to public providers – exposing cost information provides an easy comparison for end users to compare internal prices with public cloud providers. This ease of comparison invariably will drive internal customers to select external providers for cost reasons or at least question why the internal resources supposedly cost so much.
- Charging back provides context for people to depart from the corporate standard in order to "save money".
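The first common requirement, determining the actual IT cost per virtual machine, is essentially the cost summation listed earlier in the pricing-triangle discussion. A sketch in which every figure is a placeholder, not real pricing:

```python
# Illustrative monthly cost components for one VM, following the cost
# summation described above; all dollar figures are placeholders.
vm_cost_components = {
    "compute": 40.00,
    "network_port": 5.00,
    "storage_port": 4.00,
    "storage_space": 12.00,
    "power": 6.00,
    "datacenter_space": 8.00,
    "software_licenses": 15.00,
    "maintenance_and_support": 10.00,
}

cost_per_vm = sum(vm_cost_components.values())
print(f"${cost_per_vm:.2f}/month")  # $100.00/month
```

The hard part in practice is not the arithmetic but agreeing on how shared costs (power, datacenter space, support staff) get apportioned to individual VMs.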
Selecting a pricing strategy for IT is challenging. Understanding the goal of chargeback and show back is the starting point for selecting the proper pricing strategy. Care should be taken to avoid the pitfalls while harvesting the benefits of a more rigorous financial approach to delivering IT services.
Bryan is Chief Marketing Officer at VKernel .
David Linthicum ( @DavidLinthicum ) asserted “IT naïveté can make a migrated cloud application perform very poorly — but you can avoid this fate” in a deck for his Heads up: 3 cloud performance gotchas post of 9/15/2011 for InfoWorld's Cloud Computing blog:
Those moving to the cloud are weighing numerous factors, including the type of cloud and cloud brand to use, as well as the best path for migration. However, most IT organizations aren't considering performance, and they're making huge performance mistakes as systems on clouds move into production.
- Porting code without performing platform localization modifications
- Not considering I/O tuning
- Not considering network latency
Porting code without performing platform localization modifications is an error made by those who think they can take C++ code from on-premises platforms and move it to the cloud without a hitch. They can't.
The fact is you need to localize and optimize any code as you move from platform to platform. Moving to a cloud, either IaaS or PaaS, is no exception. The confusion may stem from cloud computing providers who brag about A-to-A portability, which many do provide. However, that won't get you A-to-A performance characteristics unless you do additional work.
As for not considering I/O tuning: much like the previous point, you need to optimize the I/O subsystems by tweaking the tunables. Keep in mind this is different from elasticity, where cloud platforms can autoprovision servers as you saturate processors. By contrast, the I/O issue is about accessing the native I/O system in the most efficient way. Some cloud providers provide access to tunables; some don't.
It's a similar issue with network latency. The Internet doesn't always provide consistent performance, so you need to consider that latency in the overall performance model for your cloud environment. Do not move to a cloud if it's going to be a significant issue.
Also, don't forget the network latency that may occur as systems communicate within the cloud environment. I find this is often overlooked and actually very difficult to monitor, much less understand, given that you don't have access to the physical systems. Work with your provider on this one.
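A low-tech way to fold network latency into your performance model is to sample round-trip times to the cloud endpoint over time and look at the worst case, not just the average. A minimal sketch; the endpoint name in the usage comment is hypothetical:

```python
import socket
import statistics
import time

def sample_rtt(host, port=443, samples=20):
    """Measure TCP connect round-trip times (in ms) to a cloud endpoint.
    Failed connects are skipped rather than treated as infinite."""
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=2):
                pass  # connection established; we only time the handshake
            rtts.append((time.perf_counter() - start) * 1000)
        except OSError:
            pass  # count as a dropped sample
    return rtts

def latency_report(rtts_ms):
    """Summarize mean and worst-case latency; the tail is what hurts users."""
    return {"mean_ms": statistics.mean(rtts_ms), "worst_ms": max(rtts_ms)}

# Hypothetical usage (the endpoint name is made up):
# report = latency_report(sample_rtt("myservice.cloudapp.net"))
```

Budgeting against the worst-case figure rather than the mean is what keeps inconsistent Internet performance from surprising you after the migration.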
Richard L. Santalesa reported Blumenthal Bill Bumps Up Big Fines for Data Thefts and Security Breaches on 9/13/2011 to the InfoLaw Group blog:
Late last week Senator Richard Blumenthal (D-CT) introduced a one-hundred-page bill, dubbed the Personal Data Protection and Breach Accountability Act of 2011, S.1535 (the “PDPBA Act”), referred to the Senate Judiciary Committee. If ultimately passed, the bill would levy significant penalties for identity theft and other “violations of data privacy and security”; criminalize as felonies the installation of software that collects “sensitive personally identifiable information” without clear and conspicuous notice and consent; and require that companies collecting or storing online data of more than 10,000 individuals adhere to data storage guidelines to be enacted by the FTC via its Title 5 rulemaking authority, including a mandate to audit the information security practices of contractors and third party business entities. Notably, the PDPBA Act provides for enforcement by the United States Attorney General, by State Attorneys General, and by individuals via a private right of action that allows for civil penalties of up to $10,000 per violation per day per individual, up to a maximum of $20,000,000 per violation.
The PDPBA Act's findings section notes in support that “over 9,300,000 individuals were victims of identity theft in America last year” and “over 22,960,000 cases of data breaches involving personally identifiable information were reported through July of 2011, and in 2009 through 2010, over 230,900,000 cases of personal data breaches were reported.”
The complicated technology and legal landscape subject to the Act is plainly evidenced by the numerous carveouts and exceptions, including express carveouts for financial institutions subject to the Gramm-Leach-Bliley Act (“GLBA”), HIPAA regulated entities and public records. With no co-sponsors at present PDPBA joins the crowded landscape of data security, privacy and other such bills that have been introduced in 2011 and which we've covered previously in detail.
While we'll keep an eye on Senator Blumenthal's latest bill as it progresses through the long legislative process, some notable provisions in brief include:
- The requirement that "business entities", as defined, shall "on a regular basis monitor, evaluate, and adjust, as appropriate its data privacy and security program" in response to changes in technology, threats, PII retained, and "changing business arrangements";
- A duty to vet subcontractors not otherwise subject to the Act and to impose by contract appropriate obligations regarding data handling, security and safeguarding;
- Steps by business entities to conduct employee training regarding data security programs;
- Imposition of regular vulnerability testing by business entities subject to the Act;
- Comprehensive requirements concerning risk assessment, management and control in the area of data privacy and security;
- The implementation within one year of enactment of "a comprehensive personal data privacy and security program that includes administrative, technical, and physical safeguards appropriate to the size and complexity of the business entity and the nature and scope of its activities";
- Civil penalties for violation of either, depending on who is seeking enforcement, of from $5,000 to $10,000 per day per violation, as well as potential punitive damages and equitable relief, "up to a maximum of $20,000,000 per violation" for each individual;
- Criminal penalties of up to 5 years imprisonment for those who:
- "intentionally or willfully conceals the fact of [a]  security breach and which breach causes economic damage or substantial emotional distress to 1 or more persons," or
- "engages in a pattern or practice of activity that violates [Section 105, Unauthorized Installation of Personal information Collection Features on a User's Computer]," or
- sends "a notification of a breach of security [that] is false or intentionally misleading in order to obtain sensitive personally identifiable information in an effort to defraud an individual"
- Required notice, as specified in the Act, "without unreasonable delay" to individuals in the event of any data breach involving sensitive PII, as well as notice to the owner or licensee of the breached data, if applicable, after a risk assessment concludes that there is a significant risk of harm to the affected individual(s);
- Two years of free credit reports on a quarterly basis, and credit monitoring, including a security freeze, at no cost to the affected individuals in the event notice is required;
- Notice to the FBI, Secret Service and credit reporting agencies in the event of a breach affecting more than 5,000 individuals;
- The maintenance by the Attorney General of a "Post-Breach Technical Information Clearinghouse"; and
- The requirement that all federal contracts with "data brokers" in excess of $500,000 are to be evaluated by the GSA with regards to the data privacy and security program, program compliance, and other factors.
Needless to say, the PDPBA Act covers a great deal of ground and we will continue to monitor progress of the bill and provide timely alerts on new developments.
The Sacramento Bee published The Open Group's "The Open Group Releases Guide 'Cloud Computing for Business'" press release on 9/12/2011 with the following deck: Leading Global IT Standards Organization Introduces New Book Providing Guidance and Best Practices for Enterprises to Maximize ROI from Cloud Computing:
SAN FRANCISCO, Sept. 12, 2011 — /PRNewswire/ — The Open Group today announced the immediate availability of its new book, Cloud Computing for Business , which takes an in-depth look at Cloud Computing and how enterprises can derive the greatest business benefit from its potential. The publication is part of the ongoing work of The Open Group Cloud Computing Work Group , which exists to create, among buyers and suppliers, a common understanding of how enterprises of all sizes and scales of operation can use Cloud Computing technology in a safe and secure way in their architectures to realize its significant cost, scalability and agility benefits.
Intended for a variety of corporate stakeholders — from the executive suite to business managers, the IT and marketing departments, as well as enterprise and business architects —the book reflects the combined experience of member companies of The Open Group and their collective understanding shared with the wider IT community as practical guidance for considering Cloud Computing. The book explores the importance of Cloud Computing within the overall technology landscape and provides practical advice for companies considering using the Cloud, as well as ways to assess the risk in Cloud initiatives and build return on investment.
"With each new technology trend that emerges, the resulting hype cycle often obscures how companies can actually take advantage of the new phenomenon and share in its growth and benefits," said Dr. Chris Harding, Director, Interoperability, The Open Group. "The Cloud Computing Work Group was established by Open Group members to help enterprises of all sizes make sense of Cloud Computing and provide the understanding necessary to make it work for them. The Open Group and our member community are excited to release this book, which pays special consideration to an organization's technical and business requirements, and aims to help readers gain the greatest value possible from their Cloud projects."
Key themes covered in the book include:
- Why Cloud?
- Establishing Your Cloud Vision
- Buying Cloud Services
- Understanding Cloud Risk
- Building Return on Investment from Cloud Computing
- Cloud Challenges for the Business
Cloud Computing for Business is available for download from The Open Group at https://www2.opengroup.org/ogsys/jsp/publications/PublicationDetails.jsp?publicationid=12431 and as a hard copy from Van Haren Publishing at http://www.vanharen-library.net/cloudcomputingforbusinesstheopengroupguide-p994.html . The hardcopy version retails for $58 in the US, 37 pounds Sterling in the United Kingdom and for euro 39.95 in Europe.
To see a preview of the book, please visit: http://www3.opengroup.org/sites/default/files/contentimages/Press/Excerpts/first_30_pages.pdf
To read about the announcement on The Open Group's blog, please visit: http://blog.opengroup.org/2011/09/12/introducing-our-new-book-cloud-computing-for-business/
About The Open Group Cloud Computing Work Group
The Open Group Cloud Computing Work Group exists to create a common understanding among buyers and suppliers of how enterprises of all sizes and scales of operation can include Cloud Computing technology in a safe and secure way in their architectures to realize its significant cost, scalability and agility benefits. It includes some of the industry's leading Cloud providers and end-user organizations, collaborating on standard models and frameworks aimed at eliminating vendor lock-in for enterprises looking to benefit from Cloud products and services. For more information on how to get involved in The Open Group Cloud Computing Work Group, please visit: http://www3.opengroup.org/getinvolved/workgroups/cloudcomputing .
About The Open Group
The Open Group is an international vendor- and technology-neutral consortium upon which organizations rely to lead the development of IT standards and certifications, and to provide them with access to key industry peers, suppliers and best practices. The Open Group provides guidance and an open environment in order to ensure interoperability and vendor neutrality. Further information on The Open Group can be found at http://opengroup.org .
SOURCE The Open Group
Chris Hoff ( @Beaker ) posted Cloud Security Start-Up: Dome9 – Firewall Management SaaS With a Twist on 9/12/2011:
Dome9 has peeked its head out from under the beta covers and officially launched their product today. I got an advanced pre-brief last week and thought I'd summarize what I learned.
As it turns out I enjoy a storied past with Zohar Alon, Dome9's CEO. Back in the day, I was responsible for architecture and engineering of Infonet's (now BT) global managed security services which included a four-continent deployment of Check Point Firewall-1 on Sun Sparcs.
Deploying thousands of managed firewall “appliances” (if I can even call them that now) and managing each of them individually with a small team posed quite a challenge for us. It seems it posed a challenge for many others also.
Zohar was at Check Point and ultimately led the effort to deliver Provider-1 which formed the basis of their distributed firewall (and virtualized firewall) management solution which piggybacked on VSX.
Fast forward 15 years and here we are again — cloud and virtualization have taken the same set of security and device management issues and amplified them. Zohar and his team looked at the challenge we face in managing the security of large “web-scale” cloud environments and brought Dome9 to life to help solve this problem.
Dome9's premise is simple – use a centralized SaaS-based offering to help manage your distributed cloud access-control (read: firewall) management challenge using either an agent (in the guest) or agent-less (API) approach across multiple cloud IaaS platforms.
Their first iteration of the agent-based solution focuses on Windows and Linux-based OSes and can pretty much function anywhere. The API version currently is limited to Amazon Web Services.
Dome9 seeks to fix the “open hole” access problem created when administrators create rules to allow system access and forget to close/remove them after the tasks are complete. This can lead to security issues as open ports invite unwanted “guests.” In their words:
- Keep ALL administrative ports CLOSED on your servers without losing access and control.
- Dynamically open any port On-Demand, any time, for anyone, and from anywhere.
- Send time and location-based secure access invitations to third parties.
- Close ports automatically, so you don't have to manually reconfigure your firewall.
- Securely access your cloud servers without fear of getting locked out.
The unique spin/value-proposition with Dome9 in its initial release is the role/VM/user-focused and TIME-LIMIT-based access policies you put in place to enable either static (always-open) or dynamic (time-limited) access control for authorized users.
Administrators can setup rules in advance for access or authorized users can request time-based access dynamically to previously-configured ports by clicking a button. It quickly opens access and closes it once the time limit has been reached.
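The grant-then-auto-expire behavior described above can be sketched as a simple lease model. This is purely illustrative; the class and method names below are my own invention, not Dome9's actual API:

```python
# Toy sketch of time-limited port access (illustrative, not Dome9's API):
# every port is closed by default, and a grant opens it only until the
# lease expires -- no manual reconfiguration needed to re-close it.
import time
from dataclasses import dataclass


@dataclass
class AccessLease:
    port: int
    user: str
    expires_at: float  # absolute epoch seconds


class TimeLimitedFirewall:
    """All ports closed unless an unexpired lease opens them."""

    def __init__(self):
        self.leases = []

    def grant(self, port, user, duration_s):
        # Administrator pre-approves, or a user clicks the request button.
        lease = AccessLease(port, user, time.time() + duration_s)
        self.leases.append(lease)
        return lease

    def is_open(self, port, now=None):
        now = time.time() if now is None else now
        # Expired leases silently stop opening the port ("close automatically").
        return any(l.port == port and l.expires_at > now for l in self.leases)
```

The point of the sketch is the default-deny posture: access is an exception with a deadline, rather than a rule an administrator has to remember to delete.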
Basically Dome9 allows you to manage and reconcile “network” based ACLs and — where used — AWS security zones (across regions) with guest-based firewall rules. With the agent installed, it's clear you'll be able to do more in both the short and long-term (think vulnerability management, configuration compliance, etc.) although they are quite focused on the access control problem today.
There are some workflow enhancements I suggested during the demo to enable requests from “users” to “administrators” to request access to ports not previously defined — imagine if port 443 is open to allow a user to install a plug-in that then needs a new TCP port to communicate. If that port is not previously known/defined, there's no automated way to open that port without an out-of-band process which makes the process clumsy.
We also discussed the issue of importing/supporting identity federation in order to define “users” from the Enterprise perspective across multiple clouds. They could use your input if you have any.
There are other startups with similar models today such as CloudPassage (I've written about them before here ) who look to leverage SaaS-based centralized security services to solve IaaS-based distributed security challenges.
In the long term, I see Cloud security services being chained together to form an overlay of sorts. In fact, CloudFlare (another security SaaS offering) announced a partnership with Dome9 for this very thing.
Dome9 offers a 14-day free trial and two pricing models:
- “Personal Server” – a FREE single protected server with a single administrator
- “Business Cloud” – Per-use pricing with 5 protected servers at $20 per month
If you're dealing with trying to get a grip on your distributed firewall management problem, especially if you're a big user of AWS, check out Dome9.
- GoGrid and Dome9 Security Partner for Cloud Security Management (your-story.org)
- Dome9 Security Adds Protection for CloudFlare Customers' Web Servers (your-story.org)
- VMware's vShield – Why It's Such A Pain In the Security Ecosystem's *aaS… (rationalsurvivability.com)
- Unsafe At Any Speed: The Darkside Of Automation (rationalsurvivability.com)
- SecurityAutomata: A Reference For Security Automation… (rationalsurvivability.com)
TechTarget reported Cloud University Now Open – Lesson 1: Cloud Security In Context on 9/15/2011:
TechTarget invites you to enroll in TechTarget's all-new Cloud University – a unique online education program designed to target the core constituents of your organization affected by cloud computing.
Each lesson – which is "taught" by a key industry expert knowledgeable in your specific role – builds on the previous one, and provides you with the knowledge needed to make informed decisions on your path to cloud computing.
This classroom, Cloud Security In Context – taught by Dick Mackey, Vice President at System Experts Corporation, an independent consultancy in the security and compliance market, is designed to bring you up to speed with the key concepts of cloud security. When completed, you'll have the knowledge to advise your company on how to move to the cloud in a secure way that makes sense for your business.
( Each Lesson Approximately 5 Minutes In Length )
Lesson 1: Cloud Security in Context
In this lesson, Dick Mackey discusses the issues that enterprise IT shops must consider when determining whether they will deploy internal services to the cloud or consume services that exist in the cloud.
Lesson 2: Cloud Interoperability and Standards
In this lesson, Mackey introduces cloud interoperability and management concepts and explains how the current state of cloud computing limits choices and movement of services from cloud provider to cloud provider.
Lesson 3: Compliance in the Cloud
In this lesson, Mackey discusses how cloud computing affects compliance with various regulations and contracts. You'll get an overview of particular regulatory requirements – including HIPAA, PCI DSS and others – and their demands on cloud consumers and providers, as well as an introduction to issues such as encryption, data segregation, monitoring, auditing and testing.
Lesson 4: Legal and Contractual Issues in the Cloud
In the final lesson, Mackey discusses the types of terms that consumers of cloud services need to include in contracts with providers. Discover why issues such as data protection and disposal, incident response and coordination, availability requirements, and transparency of operations must be considered.
Best of all, when you've completed the course, see what you've learned by testing your knowledge with our Final Quiz !
Full disclosure : I'm a paid contributor to TechTarget's SearchCloudComputing.com.
My ( @rogerjenn ) Windows Azure and Cloud Session Content from the //BUILD/ Windows Conference 2011 post of 8/17/2011 contains:
[F]ull descriptions and links to slide decks and video archives of sessions in the Windows Azure and Cloud Computing tracks presented at the //BUILD/ Windows Conference 2011 held in Anaheim, CA on 9/13 through 9/16/2011.
The descriptions are fully searchable with Ctrl+F.
David Pallman recapped Live from the BUILD Conference – Windows Azure 1.5 in a 9/16/2011 post:
While a lot of BUILD was focused around Windows 8 and Windows Server 8 , cloud was not ignored. Windows Azure figures into the revised Microsoft platform strategy and there are also some updates and announcements around Windows Azure that came out this week. There were also some good sessions on Windows Azure given at the conference which will be available for online viewing shortly.
First off, strategy. The common vision that interconnects everything shown at BUILD is "connected devices, continuous services" — and cloud services figure prominently in that equation. And while there is some direct integration with Windows Live Services in Windows 8, Windows Azure plays an equally important role. It's the place to host your own application services and data with worldwide scale. In addition to Compute, its valuable services for Storage, Relational Data, Community, Networking, and Security are essential infrastructure.
Here are significant Windows Azure announcements made this week:
• Windows Azure SDK / Tools for Visual Studio 1.5 . The updated Windows Azure SDK 1.5 includes an overhauled emulator for local development, performance improvements, validation of packages before deployment, bug fixes, and expanded command line tools. Enhancements for Visual Studio allow you to profile your Windows Azure applications, create ASP.NET MVC3 web roles, and manage multiple service configurations in a cloud project.
• Service Bus Released / AppFabric SDK 1.5 . Some exciting updates to AppFabric Service Bus have been in preview for most of this year which include brokered message features such as queues, topics, and subscriptions. The updated Service Bus is now released, and to go with it there is a new AppFabric SDK 1.5 .
• Windows Azure Toolkit for Windows 8 . For those getting an early start on Windows 8 development, the Windows Azure Toolkit for Windows 8 provides the same kind of cloud service support Microsoft released earlier this year for various phone platforms.
• Windows Azure Autoscaling Application Block . This is a code block that helps you auto-scale your cloud applications. It is currently in preview and the code is available in binary or source code form using nuget .
• Updated Management Portal . The Windows Azure management portal has been improved. In particular, the SQL Azure data management area of the portal has been overhauled and enhanced.
• Geo-replication of Windows Azure Storage . As you may know, the six Windows Azure data centers can be considered pairs (2 in North America, 2 in Europe and 2 in Asia). Your Windows Azure storage is now replicated to the other member of its pair. This is automatic and isn't something you can visibly see or manage; it's there for failover in the event of a catastrophic data loss in a data center. There's no extra cost associated with this — it's simply another level of protection Microsoft has added to the platform in addition to the triple-redundancy we already enjoy within each data center.
My colleague Michael Collier covers some of these features in a lot more detail on his blog . The Windows Azure platform keeps getting better and better, and it's nice to see the pace of work on improving it hasn't slowed any.
David Pallman reported Live from the BUILD Conference – Windows Server 8 on 9/16/2011:
After Day 1 of BUILD being so momentous, I was honestly expecting a letdown on Day 2. What could possibly compete with the wealth of exciting information we got about Windows 8? I'm happy to report that Day 2 was packed with oodles of great announcements and demos about the back end (server, cloud) and developer libraries and tooling.
Moreover, the “front-end” coverage on Day 1 and the “back end” coverage on Day 2 are linked through a comprehensive strategy of “connected devices and continuous services” . This phrase, much easier for all audiences to parse and understand, is a big improvement over “software + services” or “platform as a service”. It beautifully reflects the device/HTML5 revolution that is happening on the front end and the cloud computing revolution on the back end. Rarely have I seen this much collaboration and shared vision between the teams at Microsoft. It's really refreshing!
There was a whole lot shared on Day 2, and once again it will take some time to really absorb it all. For now, I'm going to focus on Windows Server 8 and cover the rest in additional posts.
Windows Server 8
Just as we have a new client OS on the horizon with Windows 8, a new version of Windows Server is in the works as well: Windows Server 8. If Windows Server 8 is about one thing, that thing is “private cloud”: it has highly advanced virtualization features, such as the ability to relocate running VMs, and management that is implemented in an extremely user-friendly way. Here are the highlights on Windows Server 8:
• Overhauled User Interfaces . Windows Server 8 has some nice management interfaces that are friendly and approachable.
• IIS . IIS has application platform improvements. I haven't learned the specific details yet.
• HA . You can build small-size clusters that have high availability.
• Private Cloud . Windows Server 8 is a virtualization platform, allowing you to create a private cloud on top of your existing on-premise infrastructure.
• Live Migration . Relocate a VM's hard disk storage to another location, even a remote location, while the VM is running.
• Multi-tenancy . Windows Server 8 deeply understands multi-tenancy, allowing you fine control over how you provision compute, storage, and network resources for application workloads.
• Storage Spaces . This feature allows you to manage multiple drives connected by Serial Attached SCSI (SAS). For example, you could form a single storage pool from a dozen hard drives, then partition that into multiple virtual drives. This was demonstrated to be easy to manage (“You don't need a PhD in storage to be able to use it”).
• Parallel Networking . Windows Server 8 can leverage multiple NICs simultaneously to strongly boost throughput and to provide fault tolerance.
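As a rough illustration of the Storage Spaces idea mentioned above (pool many physical drives, then carve virtual drives out of the pool), here is a toy capacity-accounting sketch. The names are mine, not the Windows Server 8 API:

```python
# Illustrative sketch of a storage pool (not the Windows Server 8 API):
# aggregate physical drives into one capacity pool, then partition the
# pool into virtual drives without caring which physical disk backs them.
class StoragePool:
    def __init__(self, drive_sizes_gb):
        # e.g. a dozen SAS-attached drives become one pool.
        self.capacity_gb = sum(drive_sizes_gb)
        self.allocated_gb = 0
        self.virtual_drives = []

    def create_virtual_drive(self, size_gb):
        if self.allocated_gb + size_gb > self.capacity_gb:
            raise ValueError("pool exhausted")
        self.allocated_gb += size_gb
        vdrive = {"size_gb": size_gb}
        self.virtual_drives.append(vdrive)
        return vdrive

    @property
    def free_gb(self):
        return self.capacity_gb - self.allocated_gb
```

For example, pooling twelve 500 GB drives yields a 6 TB pool from which multiple virtual drives of arbitrary sizes can be carved — the administrator reasons about the pool, not individual spindles.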
As you can see, Windows Server 8 is a real tiger, and I've only scratched the surface. Check out this InfoWorld article for a more detailed overview and get a view of the new UI from this PCMag.com article . I also really encourage you to watch the Day 2 Keynote from the BUILD site. MSDN subscribers can download a developer preview of Windows Server 8 right now.
With Windows 8 and Windows Server 8, Microsoft has pushed the envelope on what an operating system should do and how it should do it, in a ground-breaking way. Together with cloud services (Windows Live, Windows Azure, Office 365), they make the vision of “connected devices, continuous services” more than just a neat idea: they make it reality.
David Pallman posted Live from the BUILD Conference – Windows 8 on 9/14/2011:
On Tuesday the BUILD conference got off to a roaring start, and it was all about one thing: Windows 8. After an unprecedented amount of secrecy and mystery, we finally got our first real look at Windows 8. Except it was more than a look–Microsoft spent the entire day going through the goals, user experience, application model, and development platform for Windows 8 in much detail. You can watch the keynote yourself on the BUILD site and the session videos will also be posted there as the week progresses. I can tell you already that this is one of the most significant developer conferences Microsoft has ever put on: the amount of good stuff shown was staggering and overwhelming and I'm still absorbing it. There was so much vision, style, and creativity shown I thought I was at an Apple conference! Best of all, a Windows 8 developer preview has been released that you can get now at http://dev.windows.com .
Windows 8 Highlights
There was so much new information shared yesterday it's going to take a while to sort it out and digest it–but let me give you the highlights now.
• PCs and Devices . Windows 8 will run equally well on ARM devices like tablets as it will on PCs. Given that some people are calling this "the post-PC era", that's important!
• Touch support is really important and is baked into everything. It was stated, "in the future a screen that doesn't support touch is a broken screen". However, mouse and keyboard remain fully supported throughout. The touch support is way more than simply emulating what a mouse does–it's extremely sophisticated and well thought out based on extensive usability research.
• Metro . Windows 8's default interface is called Metro, but you can also get to traditional views like the desktop we are used to today. Watch the keynote and you'll get a good sense of what Metro is like. The key phrase is "fast and fluid".
• Metro App Model . There's more to Metro than the operating system – there's also a new app model. A Metro App follows a comprehensive set of design and interaction rules and uses a new runtime API called WinRT. Metro apps give up the entire screen to content and are "chromeless". Some people were joking that it is ironic that the new Windows doesn't have windows!
• Contracts allow apps to cooperate in such activities as search, sharing, and picking. An application manifest describes a Metro app's capabilities.
• Tools . Visual Studio 11 and Expression Blend 5 strongly support WinRT. Blend even lets you work with native HTML5/CSS.
• App Store . Windows 8 includes an App Store – which gives developers access to a market of 450 million people!
• Cloud support figures prominently in Windows 8. Although Windows Live services were given the spotlight, Windows 8 apps can obviously also make use of other cloud services such as the Windows Azure platform.
Windows 8 and HTML5
How does an HTML5-based Metro app differ from an HTML5 web app? In these ways:
- Packages – a Metro app has to be "packaged" and needs an application manifest.
- Not a Web App – although Windows makes use of the IE rendering engine, a Metro app is not a web app, it's a Windows app.
- Metro styling – the design guidelines for Metro expect your app to conform to specific display, interaction, and form factor guidelines.
Although I was hoping for a "less proprietary" HTML5 story on Windows 8, this is still a big deal: I can create web apps in HTML5 and leverage that code to also create Windows 8 Metro Apps.
Those who made the trip to attend the conference in person were certainly rewarded–this year's give-away was a Samsung Galaxy tablet with a developer preview of Windows 8! It includes a wireless keyboard, a tablet stand, and an AT&T 3G data pass. Microsoft stressed that this tablet is powerful enough to be a development machine and is configured that way. There have been give-aways at past PDC conferences but this one is a real home run.
David Pallman began his //BUILD/ Conference series with Live from the BUILD Conference – Expectations on 9/12/2011:
This year's conference, which replaces the traditional PDC conference, is shrouded in mystery! The web site says next to nothing about the agenda (right now, anyway), and having just registered hasn't provided any enlightenment–my event guide says nothing about what the sessions are. Well, it'll all come out tomorrow in the 9am keynote, which anyone can tune in to.
We do know Microsoft will be talking a lot about two things: Windows 8 and HTML5 . I think HTML5 and what that means for the Microsoft development roadmap is the big story here, and that's certainly not unrelated to Windows 8. Here are some expectations and questions I hope to have answered:
• Windows 8 Unveiled . We'll surely see and learn a lot about Windows 8, an OS that will serve users of PCs and "post-PC devices" like tablets equally well.
• IE9 and HTML5? Microsoft has made a big deal about IE9 and HTML5 in the last year, but it seems IE9 scores somewhat low when you visit HTML5 compatibility testing sites like html5test.com , and even the IE10 Test Drive Preview scores pretty low. Many of the cool HTML5 demos I find online don't work with IE9 today. What's up with that?
• Silverlight . How should Silverlight developers think about HTML5? Is there anything planned that will integrate Silverlight with an HTML5 world in any way?
While Windows and front-end web development may be in the spotlight at this show, I'll bet there will be interesting news on the Windows Azure front as well. I plan to cover all of it and share here, so stay tuned!
Jeff Barr ( @jeffbarr ) reported on 9/13/2011 an AWS Media and Entertainment Summit – This Coming October in Los Angeles on 10/6/2011 at the Sofitel Hotel:
We are hosting an AWS Media and Entertainment Summit at the Sofitel hotel in Los Angeles in early October. This half-day event will give you the information that you need to use AWS on your own cloud computing projects. We've put together an action-packed agenda to make the best possible use of your time:
Michelle Munson, CEO of Aspera , will talk about media ingestion and high speed data transfer, an essential element of any cloud-based media architecture.
Following Michelle's presentation, you will get to hear from three senior members of our product teams:
- S3 Senior Manager Dan Winn will talk about storage.
- EC2 Principal Product Manager Deepak Singh will discuss transcoding and rendering.
- CloudFront General Manager Tal Saraf will discuss content delivery.
Ariel Kelman, Director of AWS Marketing, will emcee the event.
The event is free, but seating is limited, so register now .
Martin Tantow ( @mtantow ) asserted Verizon's CloudSwitch Acquisition Added Value to its Cloud Enterprise in a 9/18/2011 post to the CloudTimes blog:
Verizon reached another milestone when it acquired CloudSwitch last month, following its January purchase of Terremark, a premier cloud storage provider. The CloudSwitch acquisition is expected to boost the value of Verizon's cloud enterprise business. It follows a global trend among big mobile and smartphone providers like AT&T, Telstra, BT and Verizon in the battle for cloud control: as the telecommunications business from both local and international traffic continues to decline, these companies are seeking new avenues where they can offer better and more diversified services to customers. Although data hosting and networking seem to be the new venture, the partnership of Terremark and CloudSwitch will definitely make a roaring statement as Verizon ventures into the cloud.
The world's telephone giant companies are now ready and established to do hosting and co-location services, hosting large corporations that have deep pockets to support their cloud venture. One issue, however, that small-scale telcos must deal with is the fact that their traditional systems cannot easily handle the transition to the cloud.
David Linthicum of Infoworld commented of Verizon's acquisition of Terremark and said “Verizon has the same problem as many other telecommunications giants: It has fat pipes and knows how to move data, but it doesn't know how to turn its big honking networks into big honking cloud computing offerings.” This move from Verizon is not unique to them, Orange is another cloud player that is selling GoGrid, which is manufactured and developed by another cloud solutions provider.
Ed Gubbins, NPRG's Senior Analyst, said “Locating and building data centers, outfitting them with the necessary equipment, efficient energy supplies and software and building a capable staff is no small task for a company like Verizon with lots of other business segments it must attend to.” “It takes time,” Verizon's COO, Lowell McAdam, said . . . “That's not our core competency.”
Terremark and the rest of their competitors are making their statement very clear; that they can and are able to provide more agile data centers. This is just one way for Terremark to gain control over Verizon's data storage center to improve its speed to make it more robust for the cloud environment.
Bloomberg BusinessWeek took interest in Verizon's move to acquire Terremark and said the $1.4 billion investment placed the company in a strategic cloud position. Another investment that may prove even more significant for Verizon is the purchase of CloudSwitch, the brains behind LaunchPad, which was launched at GigaOM's 2010 Structure conference.
Verizon's business strategy becomes more appealing because customers now prefer to buy products and applications in packages instead of mere data center access. They find more appeal in products that provide scalable and secure networking plus hosting facilities that fit easily into their existing servers.
Other vendors may be able to offer products at a much cheaper price and in separate applications, but Verizon's packaging will be very hard to resist for customers as it proves to provide more peace of mind for any computer or business cloud solutions.
In terms of OAuth enterprise tooling , a lot of focus is given to OAuth-enabling APIs exposed by the enterprise itself. Naturally, the demand for this reflects today's reality, where the enterprise is increasingly playing the role of an API provider. However, many enterprise integration use cases involving cloud-based services put the enterprise in the role of API consumer, rather than provider. And as the number of enterprise applications consuming these external APIs grows, and the number of such external APIs themselves grows, point-to-point OAuth handshakes become problematic.
Another challenge relating to consuming these external APIs is that OAuth handshakes are geared towards a client application driven by a user. The protocol involves a redirection of that user to the API provider in order to authenticate and express authorization. Many enterprise integration (EI) applications do not function in this way. Instead their behavior follows a machine-to-machine transaction type; they operate at runtime without being driven by a user. Wouldn't it be great if these EI apps could benefit from the OAuth capabilities of the APIs and still operate in headless mode? The so-called 'two-legged' OAuth pattern provides a workaround for this challenge but requires the client app to hold resource owner credentials, which is problematic, especially when replicated across every client app.
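To see why the two-legged pattern replicates secrets, consider that in OAuth 1.0a's two-legged variant the client signs every request directly with the consumer secret, so each EI app must embed that secret. The sketch below is simplified relative to RFC 5849 (parameter percent-encoding is abbreviated) and is illustrative only:

```python
# Simplified two-legged OAuth 1.0a request signing (illustrative; parameter
# normalization is abbreviated relative to RFC 5849). The key point: the
# client app itself must hold consumer_secret -- the credential-replication
# problem the text describes.
import base64
import hashlib
import hmac
import time
import urllib.parse
import uuid


def sign_request(method, url, consumer_key, consumer_secret):
    params = {
        "oauth_consumer_key": consumer_key,
        "oauth_nonce": uuid.uuid4().hex,
        "oauth_signature_method": "HMAC-SHA1",
        "oauth_timestamp": str(int(time.time())),
        "oauth_version": "1.0",
    }
    # Signature base string: METHOD & encoded-URL & encoded-sorted-params.
    base = "&".join([
        method.upper(),
        urllib.parse.quote(url, safe=""),
        urllib.parse.quote(
            "&".join(f"{k}={v}" for k, v in sorted(params.items())), safe=""),
    ])
    # Two-legged: no token secret, so the key is just "consumer_secret&".
    key = consumer_secret + "&"
    digest = hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
    params["oauth_signature"] = base64.b64encode(digest).decode()
    return params
```

Every process that calls `sign_request` needs the raw secret, which is exactly why centralizing the handshake in a broker, as described next, is attractive.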
To illustrate how an enterprise API management solution can help manage this challenge, I demonstrate OAuth tooling geared towards brokering a client-side OAuth session with the Salesforce API using the Layer 7 Gateway . By proxying the Salesforce API at the perimeter using the Layer 7 Gateway, my EI apps do not have to worry about the API provider OAuth handshake. Instead, these EI apps can be authenticated and authorized locally using the Enterprise identity solution of choice, and the Layer 7 Gateway manages the OAuth session on behalf of these applications. The benefits of this outbound API proxy are numerous. First, the OAuth handshake is completely abstracted out of the EI apps. In addition, the enterprise now has an easy way to control which applications and enterprise identities can consume the external API, to control the rates of consumption, and to monitor usage over time. The API can itself be abstracted, and the proxy can transform API calls at runtime to protect the consuming apps from version changes at the hosted API side.
To set this up on the Layer 7 Gateway, you first need to register remote access for your Salesforce instance. Log into your Salesforce instance and navigate to Setup -> App Setup -> Develop -> Remote Access. From there, you define your remote access application. The callback URL must match the URL defined by the Layer 7 Gateway administrator at setup time. Make sure you note the Consumer Key and Consumer Secret, as they will be used during the OAuth handshake setup; these values will be used by your Layer 7 OAuth broker setup policy.
Using the Layer 7 Policy Manager, you publish your broker setup policies to manage the OAuth handshake between the Gateway and your Salesforce instance. Note that the OAuth callback handling must listen at a URL matching the URL defined in Salesforce. These policies use the consumer key and consumer secret associated with the registered remote access in your Salesforce instance. The secret should be stored in the Gateway's secure password store for added security. Use templates from Layer 7 to simplify the process of setting up these policies.
Once these two policies are in place, you are ready to initiate the OAuth handshake between the Layer 7 Gateway and the Salesforce instance. Using your favorite browser, navigate to the entry point defined in the admin policy above. Click the 'Reset Handshake' button. This will redirect you to your Salesforce instance. If you do not have a session in place on this browser, you will be asked to authenticate to the instance, then you are asked to authorize the client app (in this case, your Layer 7 Gateway). Finally, you are redirected back to the Layer 7 Gateway admin policy which now shows the current OAuth handshake in place. The admin policy stores the OAuth access token so that it can be used by the api proxy at runtime.
Your Layer 7 Gateway is now ready to act as an OAuth broker for your EI apps consuming the Salesforce API. You can publish a simple policy to act as this proxy. This policy should authenticate and authorize the EI app and inject the stored OAuth access token on the way out. Note that this policy can be enhanced to perform additional tasks such as transformation, rate limiting, caching, etc.
Although this use case focuses on the Salesforce API, it is generally applicable to any external API you consume. You can maintain an OAuth session for each API you want to proxy in this Gateway, as well as perform identity mapping for other external access control mechanisms, such as AWS HMAC signatures.
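The broker pattern described above can be sketched in a few lines. This is a hypothetical illustration, not Layer 7 policy language: the proxy authenticates the EI app against a local credential store, then injects the OAuth access token captured during the admin handshake into the outbound headers, so the EI app never touches the provider-side OAuth session. All names (`LOCAL_APPS`, `TOKEN_STORE`, `proxy_headers`) are invented for the sketch:

```python
# Local EI-app credentials, standing in for the enterprise identity solution.
LOCAL_APPS = {"erp-sync": "s3cr3t"}

# OAuth access tokens stored by the admin handshake policy, keyed by API.
TOKEN_STORE = {"salesforce": "00Dxx-EXAMPLE-TOKEN"}


def proxy_headers(app_id, app_secret, api_name, headers=None):
    """Authenticate the local EI app, then build outbound headers with
    the brokered OAuth token injected in place of local credentials."""
    if LOCAL_APPS.get(app_id) != app_secret:
        raise PermissionError("unknown or unauthorized EI app")
    out = dict(headers or {})
    out["Authorization"] = "Bearer " + TOKEN_STORE[api_name]
    return out
```

A real gateway would also handle token refresh, rate limiting, and payload transformation at this choke point, which is what makes the perimeter proxy a natural place for the controls the article lists.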
Bill McColl described “Dataclouds” in his The Consumerization of Big Data Analytics article of 9/14/2011 for the Cloudscale blog:
With Dropbox, Jive, Yammer, Chatter and a number of other new services, the modern enterprise is rapidly becoming "consumerized". And it's not just business; the same is happening in major web companies, on Wall Street, in government agencies, and in science labs. Thirty years of bad "enterprise software" experiences are making this transition happen much more quickly than anyone would have expected. The shift to cloud computing is also accelerating the trend, as is the goal of developing a much more "social" approach to business.
The other major change that's going on today throughout business, web, finance, government and science, is that every organisation now realizes that it needs to be data-driven. Big data and analytics have the potential to unleash creativity and innovation everywhere – generating new ideas and new insights continuously. To achieve and maintain competitive advantage today, it is becoming essential for everyone in an organisation to have instant access to all the information they need, at all times.
Making big data analytics available to everyone in an organisation means that it has to be much simpler than traditional data analytics solutions such as databases, data warehouses and Hadoop clusters. It needs to be consumerized! We need a new generation of data analytics solutions that are not just powerful and scalable, but also very easy-to-use.
At Cloudscale we've been working on the hard problem of delivering this extreme simplicity, extreme power and extreme scale. Our "datacloud" solution combines a number of advanced technologies in a unique way to achieve these challenging goals. The patented in-memory architecture is massively parallel, cloud-based, and fault tolerant. It runs on standard commodity hardware, either in the cloud (e.g. Amazon) or as an in-house (OpenStack) appliance.
Cloudscale lets anyone easily store, share, explore and analyze the exponentially growing volumes of data in their work and in their life. It's like a "Dropbox for Big Data Analytics". The Cloudscale data store and app store allow users to easily create, share and collaborate on all kinds of data and apps.
It's designed for everyone – business users, data scientists, app developers, individuals – anyone, or any organization, that needs a simpler way to handle today's explosively growing data volumes. And it's viral – sharing data and apps creates powerful network effects within organizations, unleashing data-driven creativity and innovation everywhere.
With this new technology, anyone can now become a "big data rocket scientist". Through simple, easy-to-use interfaces, users can:
- Work with all types of data – structured and unstructured – from any source
- Work with live data streams and massive stored data sets
- Quickly discover important patterns, correlations, statistics, trends, predictions,…
- Quickly develop, deploy and scale big data apps – mapreduce, realtime analytics, statistics, pattern matching, machine learning, graph algorithms, time series,…
- Evaluate millions of scenarios and potential opportunities and threats every second
- Go from data to decision to action instantly
It's super-fast and super-scalable! For example, Cloudscale can be used to analyze a live stream in realtime at more than 150MB/sec on just three 8-core AWS cluster instances. That corresponds to processing a SINGLE STREAM in parallel at a rate of TWO MILLION ROWS PER SECOND, or well over ONE TRILLION EVENTS per week. To give some idea of how fast this is, the nationwide call log systems of even the biggest US telcos only generate about 50,000 rows/sec, even at peak. For processing even more data, the solution scales linearly.
The performance of the Cloudscale datacloud is more than 125x faster than Yahoo's S4 (Realtime MapReduce) system, on the same hardware – about the difference in speed between walking from San Francisco to New York (4mph) versus taking a plane (500mph).
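The quoted figures are easy to sanity-check with simple arithmetic (the per-row byte size is my own inference from the stated stream rate, not a number from the article):

```python
# "Two million rows per second" sustained for a week:
rows_per_sec = 2_000_000
rows_per_week = rows_per_sec * 60 * 60 * 24 * 7   # 1,209,600,000,000

# 150 MB/s over 2M rows/s implies an average row size of 75 bytes.
bytes_per_row = 150_000_000 / rows_per_sec

# The walking-vs-plane analogy: 500 mph / 4 mph = 125x.
speedup = 500 / 4
```

So the "well over one trillion events per week" claim is consistent with the stated per-second rate, and the 125x figure is exactly the plane-to-walking speed ratio.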
These are just the first steps in the consumerization of the $30Billion+ analytics industry. As powerful analytics gets democratised in this way, we can expect that it will spread virally into every corner of every organisation.
Marcin Okraszewski asked Is Amazon the cheapest cloud computing provider? in a 9/13/2011 post to the Cloud Computing Economics blog:
When people think of cloud computing, they almost automatically think of Amazon EC2. Amazon has become the cloud computing company and is commonly perceived as the cheapest, if not the only, IaaS provider. But is this really so? Let's play Myth Busters, like on the Discovery Channel. Cloud Computing Myth Busters!
We will compare all of Amazon's instances from the Standard line with prices for cloud servers of at least the same parameters from other cloud computing providers. For this purpose we will use Cloudorado – the cloud computing price comparison engine. For Amazon to be considered the cheapest, it would have to be the cheapest for every instance type they provide, since these are their strongest points. If this is not met, there is no point in checking any further.
We will assume only cloud server costs. No transfer, licenses or load balancers. We will choose a full month of computing with on-demand prices. We could expand it to other instance types and other combinations, but there's no need to drag this article out with too many variations when you could easily try them on your own with the cloud hosting price comparison engine .
We will also provide one extension to the Cloudorado calculations. As Amazon does not offer persistent instance storage the way other providers do, we will also provide an additional calculation for instances with persistent EBS storage equal in size to the instance storage. Unfortunately, the cost of the EBS service depends on both size and number of I/O requests. As an estimate of I/O request cost, we will use 100 I/O per second, resulting in $26 per month, as indicated by Amazon in the Projecting Costs section of the EBS description.
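The $26/month figure can be reproduced from Amazon's then-current EBS I/O rate (assumed here to be $0.10 per million I/O requests, the 2011 published price):

```python
# Estimate the monthly EBS I/O request charge at a steady 100 IOPS.
price_per_million_io = 0.10          # USD per million I/O requests (2011 rate, assumed)
iops = 100
seconds_per_month = 30 * 24 * 3600   # 2,592,000 seconds in a 30-day month

monthly_requests = iops * seconds_per_month          # 259,200,000 requests
monthly_cost = monthly_requests / 1_000_000 * price_per_million_io  # ~$25.92
```

That rounds to the $26/month the article adds to each EBS-backed configuration; note this excludes the per-GB storage charge, which the comparison accounts for separately through the storage size.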
OK, with all assumptions explained, let's start!
Experiment 1 – Standard Small
Conditions: 1.7 GB RAM, 1 ECU CPU power and 160 GB storage
Experiment 2 – Standard Large
Conditions: 7.5 GB RAM, 4 ECU CPU power, 850 GB storage
Experiment 3 – Standard Extra Large
Conditions: 15 GB RAM, 8 ECU CPU power, 1690 GB storage
Experiment 4 – Standard Medium
What? But there is no Standard Medium instance! You are right. We just wanted to show what happens when requirements fall outside of Amazon's space. Amazon does not have any instance between 1.7 GB RAM and 7.5 GB RAM. Standard Medium would be an instance twice as big as Standard Small – 3.4 GB RAM.
Conditions: 3.4 GB RAM, 2 ECU CPU power, 320 GB storage
Myth Busted! Amazon is not universally the cheapest cloud computing provider. Even with requirements perfectly matching Standard instance types, Amazon was the cheapest only once! It was once almost the cheapest and once 24% more expensive than the cheapest provider. With the persistent storage option (EBS), Amazon was never the cheapest, costing on average 55% more than the winner. It gets even worse if you move away from Amazon's instance types, where we showed an example of Amazon being twice as expensive, but it can be much worse. So always be sure to compare cloud computing prices for your specific needs. Don't fall for myths that any given provider always offers the best deal.
Barton George ( @barton808 ) reported Now available: Dell | Cloudera solution for Apache Hadoop in a 9/12/2011 post:
As a refresher:
The solution comprises Cloudera's distribution of Hadoop, running on optimized Dell PowerEdge C2100 servers with the Dell PowerConnect 6248 switch, delivered with joint service and support from both companies. You can buy it either pre-integrated and good-to-go, or you can take the DIY route and set it up yourself with the help of
Learn more at the Dell | Cloudera page .
Technorati Tags: Windows Azure, Windows Azure Platform, Azure Services Platform, Azure Storage Services, Azure Table Services, Azure Blob Services, Azure Drive Services, Azure Queue Services, SQL Azure Database, SADB, Open Data Protocol, OData, Windows Azure AppFabric, Azure AppFabric, Windows Server AppFabric, Server AppFabric, Cloud Computing, Visual Studio LightSwitch, LightSwitch, Amazon Web Services, AWS, Hadoop, Cloudera, MongoDB, Open Group, Verizon, CloudSwitch, OAuth, WebSockets, HTML5, Visual Studio 2011