Windows Azure
I went to a presentation of a new Microsoft concept called Windows Azure. Well, it is not so much a new concept as Microsoft entering the distributed computing competition. Like Google, IBM and - most notably - Amazon before it, Microsoft is using large data centers to provide storage and computing as services. So, instead of having to worry about buying a zillion computers for your web farm, managing the failed components, the backups and so on, you just rent the storage and computing and build your application using the Windows Azure SDK.
As explained to me, it makes more sense to use a large number of computers built specifically for this cloud task - with centralized cooling, automated updates, backup and recovery management and so on - than to buy your own hardware. More than that: since the same computers run the tasks of all customers, CPU time, storage and memory get used far more efficiently.
You may pay some extra money for the service, but the cost will closely follow the curve of your needs, rather than the ragged staircase of a self-managed system. You see, on your own you have to buy hardware for the maximum load you expect your application to reach; with Azure, you just rent what you need and, more importantly, you can un-rent it when usage goes down a bit. What you need to understand is that Azure is not a hosting service, nor a web farm proxy. It is what they call cloud computing: the separation of software from hardware.
Ok, ok, what does one install and how does one code against Azure? There are some SDKs, which include a mock host that runs everything on a single machine and all the tools needed to build an application against the Azure model.
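To give you an idea, this is roughly what a computing worker looks like when built with the SDK. I am writing the class and method names (RoleEntryPoint, RoleManager and friends) from memory of the CTP samples, so treat them as assumptions that may differ in your version of the SDK, a sketch rather than a reference:

using System.Threading;
using Microsoft.ServiceHosting.ServiceRuntime;

// A minimal worker, as in the CTP samples (names written from memory,
// so they are assumptions; check against your SDK version).
public class WorkerRole : RoleEntryPoint
{
    public override void Start()
    {
        while (true)
        {
            // real work would go here; this one just logs and sleeps
            RoleManager.WriteToLog("Information", "Working...");
            Thread.Sleep(10000);
        }
    }

    public override RoleStatus GetHealthStatus()
    {
        return RoleStatus.Healthy;
    }
}

The mock host runs this same class on your own machine, which is the whole point: you develop and debug locally against the same model the cloud fabric uses.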
What is the Azure model? Your resources are split into storage and computing workers. You cannot access the storage directly and you have no visual interface to the computing workers. All the interaction between them, or with you, is done via REST requests - HTTP, in other words. You don't connect to a SQL Server, you use SQL Services, and so on and so on.
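To make the REST-only idea concrete, here is a minimal sketch of listing the containers in the storage service with nothing but an HTTP request, pointed at the local mock storage so you can try it without an account. The development account name, key, port and the exact signing rules are my assumptions from memory of the SDK and its documentation, not something shown at the presentation, so expect to adjust the details:

using System;
using System.IO;
using System.Net;
using System.Security.Cryptography;
using System.Text;

class ListContainers
{
    static void Main()
    {
        // Well-known development storage account and key (assumptions from
        // memory; the real cloud uses your own account name and key).
        string account = "devstoreaccount1";
        string key = "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==";
        string url = "http://127.0.0.1:10000/" + account + "?comp=list";

        string date = DateTime.UtcNow.ToString("R"); // RFC 1123 date

        // String to sign: verb, empty Content-MD5, Content-Type and Date,
        // then the x-ms-date header and the canonicalized resource. The
        // exact canonicalization rules are in the storage docs and may differ.
        string stringToSign = "GET\n\n\n\n"
            + "x-ms-date:" + date + "\n"
            + "/" + account + "/" + account + "?comp=list";

        string signature;
        using (var hmac = new HMACSHA256(Convert.FromBase64String(key)))
        {
            signature = Convert.ToBase64String(
                hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));
        }

        var request = (HttpWebRequest)WebRequest.Create(url);
        request.Method = "GET";
        request.Headers.Add("x-ms-date", date);
        // The scheme name ("SharedKeyLite" vs "SharedKey") has changed
        // between versions; adjust if the service rejects the request.
        request.Headers.Add("Authorization", "SharedKeyLite " + account + ":" + signature);

        using (var response = request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd()); // plain XML list of containers
        }
    }
}

The point is that no special client library is required: the storage service answers plain XML to plain HTTP requests signed with your key, and the real cloud endpoints work the same way as the local mock one.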
Moving an existing application to Azure may prove difficult, but I guess they will work something out. As with any new technology, people will find novel problems and the solutions for them.
I am myself unsure of the minimum size of application at which it becomes more cost-effective to use Azure rather than your own servers, but I have to say that I like the software development model at the very least. One can think of the SDK model (again, using only REST requests to communicate with the Azure host) as applicable to any service that implements the Azure protocols. I can imagine building an application that would take a number of computers in one's home, office, datacenter, etc. and turn them into an Azure host clone. It wouldn't be the same, but assuming that many of your applications run on Azure or something similar - maybe even a bit of the operating system, why not - then one could finally put to real work all those computers gathering dust at 5% CPU usage, or not even turned on. I certainly look forward to that! Or - and that would be really cool - a peer-to-peer cloud computing network, free to use by anyone entering the cloud.