richc Amazon disagree with you
Not when you try and actually buy the service from them.
Marketing does not equal reality.
Our spec was: we need to run computation jobs (using industry-standard software) which will use about 2TB of storage, 3 times a year. Our current jobs take 28 days to complete, so they must take that amount of time or less. After that we need access to the 2TB of data (via network or tape) and will not require the service for another 2 months.
Why does your "job" take 28 days? Does it actually take that long from start to finish?
Do you have any other requirements for "IT" during that period and after the processing has completed? Other DBs, ERP systems for the company etc., and how do you view/interpret the output later? I assume you require/have some way of storing the output for analysis and distribution?
Depending on expertise and cost you may be able to make use of virtualisation internally.
Sound like a service suitable for a cloud? Did to me, didn't to any of the cloud services companies.
Me too, but I think you are approaching this with a far too highly tuned bullshit detector :)
It also sounds like you'd be able to fairly accurately price keeping this in house - storage and archiving are relatively cheap these days (look at midline SATA - close to SAS performance, close to SATA price... or just SATA if IOPS/high MTBF aren't an issue) - I can see the processing may be an issue if it isn't needed all the time, but maybe you could short-term lease the iron and archive your workloads when not required?
Got an issue at the moment on the difference between "profit" and "anticipated profit" when trying to determine what is in or out of a direct loss category for limitation of liability.
:) OK, OK, which bit are you stuck on?
Excellent. Got an issue at the moment on the difference between "profit" and "anticipated profit" when trying to determine what is in or out of a direct loss category for limitation of liability. Anyone?
Have you tried switching it off and back on? :)
I can see the processing may be an issue if it isn't needed all the time but maybe you could short term lease the iron and archive your workloads when not required?
Or accept that during parts of the year you will have spare resource. If this is within a virtualised environment it may mean you are under-utilising a bit of your hardware, as opposed to having pieces of hardware lying dormant.
It's marketing, essentially.
Originally, IT used the term "cloud" to refer to a network that operated like a black box, where the actual interconnects are unimportant.
Eg, if you're a network provider linking up sites in Edinburgh and London, you don't need to micromanage the exact route; you wouldn't sit there going, "well, from Edinburgh we'd connect you to Carlisle, Carlisle goes down to Manchester and then Birmingham..." - it's both unimportant and impossible to define as it's all automated routing. Simpler to say that the connection goes 'into the cloud' at Edinburgh and comes out at London; the middle bit is irrelevant.
What this means is that you're no longer looking at traditional point-to-point communications. The cloud model is any-to-any. In the above example, if you open a new office in Birmingham, you just bolt that into the cloud as well.
Now we're looking at service providers giving more than just connectivity. Storage, computing power, applications are all being offered in 'the cloud'. It sounds flashy and impressive but it's basically the same principle as the networking model above; it's an any-to-any solution where the actual bit in the middle doesn't really matter.
To those who are still with me and thinking "wait a minute, you've just described a web server," you get five Cougar Points; now look at the first sentence of this post again.
Yeah, I should have qualified that, surfer - maybe lease any additional hosts over and above normal anticipated workloads. You'd still have to chew on the virtualisation-layer licensing, unless you can do it on Xen or Hyper-V (I only do VMware...)
But you are largely describing a meshed WAN cloud. We have one of those; however, our data centre is in one physical location. The routes to that location are changeable based on certain criteria internal to the technology (MPLS in this case).
Cloud computing also refers to the physical location of services and data which may be distributed.
However I could access my mail (and data) from anywhere 10 years ago and I am fairly sure that wasn't a cloud service
It could've been, depending on how it was done.
I think Rackspace are just a hosting company using the buzzword 'cloud' to attract customers. Cloud computing is a concept, Rackspace may or may not be implementing it in all aspects :)
@brassneck
There are very competitive deals using vSphere for a limited number of hosts. Over that number, costs escalate significantly.
It really depends on the size of the organisation and whether those resources can be utilised at other times.
It makes sense if you have equal upload and download speeds, but the typical 10:1 ratios you get in this country don't really make it worthwhile.
for example:
[url= http://labs.autodesk.com/technologies/photofly/ ]project photofly[/url]
Uploading 45 2MB JPEGs is prohibitively slow even with a 50meg cable setup (which approximates to a 5meg upload here).
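For a rough sense of scale (back-of-envelope only, assuming 45 files at ~2MB each and ignoring protocol overhead):
[code]
# Rough transfer-time estimate for the Photofly upload above
# (assumed figures: 45 JPEGs at ~2 MB each, 5 Mbit/s up vs 50 Mbit/s down).
files = 45
size_mb = 2                      # megabytes per file
total_megabits = files * size_mb * 8

for label, mbit_per_s in [("5 Mbit/s upload", 5), ("50 Mbit/s download", 50)]:
    seconds = total_megabits / mbit_per_s
    print(f"{label}: {seconds / 60:.1f} minutes")
# The 10:1 asymmetry is the killer: the same data takes ten times as long going up.
[/code]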
The job takes 28 days (it would be nice if it were less) because it's software for the development, analysis and testing of microchips.
At approx 2 million (and 6 weeks) a dry run to make them, you don't want to get it wrong too many times.
surfer has it. Cougar's model doesn't sound like the cloud to me.
To those who are still with me and thinking "wait a minute, you've just described a web server,"
No you didn't, you just described a 1970s Unix mainframe at a university...
Exactly, it's just all part of the centralisation/decentralisation cycle, but with a fancy buzzword and using the WAN rather than the LAN.
I do see the point for organisations without decent IT departments, but if you have a group of people who know what they are doing and a reasonable budget, I haven't seen any cloud offering that cannot be done in house.
When they offer 24x7 support, a 4-hour SLA for resolutions (not just picking up the phone) and restores that aren't just for DR, that might be a different matter however.
I haven't seen any cloud offering that cannot be done in house
?
You can install 100 servers for a couple of days and then magic them away again?
richc - you might be able to use the cloud for your requirements.
Does your application use a distributed processing system? Or does it require one massive server?
Look into virtual server hosting... http://www.rackspace.com/cloud/cloud_hosting_products/servers/
Spawn multiple 'servers' and harness their power for your distributed processing (rough sketch at the end of this post).
Cougar - you're talking about a networking mesh (WANs), not cloud computing.
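Re the 'spawn multiple servers' bit - the fan-out itself is simple enough once the nodes exist. A sketch only: the hostnames, the chunk split and run_chunk.sh are made up, and it assumes the job can be carved into independent pieces, which richc's 28-day run may well not allow.
[code]
# Hedged sketch: dispatch independent chunks of a job across spawned cloud
# servers over SSH. Hostnames, paths and the chunking are all hypothetical;
# assumes passwordless SSH onto each node.
import subprocess
from concurrent.futures import ThreadPoolExecutor

servers = ["cloud-node-01", "cloud-node-02", "cloud-node-03"]    # hypothetical
chunks = [f"chunk-{i:03d}" for i in range(30)]                   # hypothetical split

def run_chunk(args):
    host, chunk = args
    # Each node runs one chunk; output gets pulled back afterwards.
    return subprocess.run(
        ["ssh", host, f"/opt/sim/run_chunk.sh {chunk}"],
        capture_output=True, text=True,
    ).returncode

with ThreadPoolExecutor(max_workers=len(servers)) as pool:
    work = [(servers[i % len(servers)], c) for i, c in enumerate(chunks)]
    results = list(pool.map(run_chunk, work))

print(f"{results.count(0)}/{len(chunks)} chunks finished cleanly")
[/code]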
@brassneck
There are very competitive deals using vSphere for a limited number of hosts. Over that number, costs escalate significantly.
It really depends on the size of the organisation and whether those resources can be utilised at other times.
That's the problem - I think Foundation gives you 3 hosts and a vCenter, limited to 4 vCPUs and no Storage vMotion. It gets good at Enterprise Plus, but for smaller purchasers that's prohibitively expensive to only use some of the time.
Luckily we have a global direct deal, so we just put E+ everywhere B-) - it works out cheaper for us over 3 years including maintenance.
You can install 100 servers for a couple of days and then magic them away again?
Yes, and in fact we do.
You can install 100 servers for a couple of days and then magic them away again?
VMs, yes, it's not rocket science.
Physical boxes, yes, but it's harder (OS on local disk, shared disks on SAN, and vice versa).
I have done both, in environments where a cluster is 500-800 servers and needed rebuilding/imaging every 60 to 90 days with the latest updates.
As long as you can script, and understand how to build live CDs (and can remotely mount them; if not, someone's got a shitty job in a very cold room), it's a pretty easy thing to do (rough sketch below).
If they all need to be different then that's harder, but using RDP and Puppet it's still not difficult.
Installing/deploying boxes is the easy part; monitoring, performance tuning, backing them up and securing them is the difficult bit.
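For what it's worth, the rebuild loop is roughly this shape - purely a sketch, with the host list, IPMI credentials and the PXE/image side all assumed to exist already:
[code]
# Bare-bones mass-rebuild loop: point each box at PXE and power-cycle it via
# IPMI so nobody has to stand in the cold room. Host names and credentials
# are placeholders; the actual image/kickstart setup lives elsewhere.
import subprocess

hosts = [f"node{n:03d}" for n in range(1, 501)]          # hypothetical cluster

def rebuild(host):
    base = ["ipmitool", "-I", "lanplus", "-H", f"{host}-ipmi",
            "-U", "admin", "-P", "changeme"]             # placeholder credentials
    subprocess.run(base + ["chassis", "bootdev", "pxe"], check=True)
    subprocess.run(base + ["chassis", "power", "cycle"], check=True)

for host in hosts:
    rebuild(host)
[/code]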
VMs, yes, it's not rocket science
Where do the VMs run?
Are you seriously suggesting purchasing and installing a load of servers, once a year, for some task, and then selling them again a few days later?
And there you have it, Wopster, as simple as that. Either not the simple answer you were looking for, or a spectacular troll. :)
Are you seriously suggesting purchasing and installing a load of servers, once a year, for some task, and then selling them again a few days later?
They are virtual, not physical. It's difficult to know without knowing more about the apps/data/business etc., but it's not uncommon to have dozens of VMs running on a single physical host.
No, I would like to get someone else to do it. However, nobody wants to play. So we are stuck with a small cluster of 4 physical boxes, each with 4 quad-core CPUs, which can host up to 124-160 guests each with reasonable performance.
Big compute jobs require physical boxes, as ESX/VMware hosts really don't work that well with that kind of load (Xen is much better though), so we have a few really big boxes with lots of quad-core CPUs and hundreds of gigs of RAM.
Where I used to deploy hundreds of boxes all the time, I was working for a company developing cloud services software - hence the constant rebuilds of physical machines, as they were dicking with the hypervisor code.
They are virtual, not physical. It's difficult to know without knowing more about the apps/data/business etc., but it's not uncommon to have dozens of VMs running on a single physical host.
That'd be pointless for number crunching though; that doesn't give you any extra actual CPU power, which is what 'cloud' services can offer.
No, I would like to get someone else to do it. However, nobody wants to play
Right... is it because you are a small business, perhaps? I mean, the Amazon thingy above sounded OK, but I expect they are looking for big fish.
You know, I have a mate who's into this stuff, I'll drop him a line.
Nice thread.
I had to explain to somebody at work recently that "the internet" isn't stored in a big warehouse somewhere in the USA and guarded by the CIA from Al-Qaida attacks.
It was like explaining a rainbow to a blind person.
Ironically though, hels, it was invented by Americans* to be resistant to attacks and not to need guarding... :)
* kind of
I thought it was an Egghead at MIT that invented t'internet ? (or thought up the required address protocols and all that)
No you didn't, you just described a 1970s Unix mainframe at a university...
You know, I thought that as well... (-:
I thought it was an Egghead at MIT that invented t'internet ? (or thought up the required address protocols and all that)
http://en.wikipedia.org/wiki/History_of_the_Internet
The internet is a giant TCP/IP network, which was invented (or at least developed) as ARPANET, a network to connect US military bases together that could still work with lots of bits of it taken out.
I think you guys have gotten too quickly excited over a fairly average computing system and lost sight of the OP... for the OP's sake, and the sake of sanity, let me introduce...
[url= http://www.liveleak.com/view?i=be9_1308038755&p=1 ]THE cloud[/url]
Thing is, when normal people talk about the internet they are actually talking about the WWW, which was invented at CERN, not the D/ARPA net.
As, whilst the internet was a good invention, developing the tools to actually make it useful was a better one.
Yes and there have been numerous tools since it was invented, the WWW being just one, as I am sure you know.
That'd be pointless for number crunching though; that doesn't give you any extra actual CPU power, which is what 'cloud' services can offer.
As I said, without knowing more about the application it's difficult to say. You are assuming it maxes out the CPU during the whole process. Most systems use a mix of resources, and CPU is only one of them. Within a virtual environment, that's why it can be efficient to over-commit resources.
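For a sense of the over-commit involved, using the figures quoted earlier in the thread (4 quad-core CPUs per box, 124-160 guests per box) and assuming 1 vCPU per guest, which is a guess:
[code]
# Quick over-commit arithmetic from the numbers quoted above.
cores_per_box = 4 * 4            # 4 sockets x 4 cores
for guests in (124, 160):
    ratio = guests / cores_per_box
    print(f"{guests} guests on {cores_per_box} cores -> {ratio:.1f}:1 vCPU over-commit")
[/code]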
richc - drop me a line.. might have something of interest to you.
about 28 mins in
Eh? Weird, the last post didn't appear till I posted this :)
Do you remember mainframe computing, before PCs, when you had a thin-client just providing a user interface and all your computing and storage was done by a central mainframe?
The Cloud is a bit like using the Internet as a Mainframe, in that your PC is just doing the user interface bit, and the heavy computing, storage and backups are being "rented" from Internet servers.
The advantage of this kind of outsourcing is that companies heavily using IT can slim-down their IT support department and sack the IT director. It also makes it easier for start-ups because it reduces their upfront IT investment.
I registered a dev account at salesforce.com and started some cloud development tutorials; yet to finish them though, as summer arrived and evenings are busy :) Got to keep one's CV current and all that.