Cut the red tape? I am not sure, Daniel, given the many tools available or in development to control how those resources are made available according to rules, most of the time budget rules.
“Cloud”, on-demand resources also cost much, much more. Running your standard operations this way is very costly. It can make sense for “spot” tasks, but there are not that many real cases.
You would think that “private cloud” is a good option, but it remains difficult today. Installing a system is fine. Dealing with the problems that arise in such highly complex setups requires large teams of highly skilled people. Most big companies fail at it.
So, well, I am quite surprised by the enthusiasm for those on demand cloud solutions.
And I have not even discussed other aspects, like storing sensitive data.
“Cloud”, on demand, resources also cost much much more. Running your standard operations this way is very costly.
It might be, but it is still very popular. Whenever I meet developers these days who are not part of a super large organization (e.g., not from the government) and I ask them whether they use the public cloud, the answer is almost always positive. And the answer is typically “we use AWS”.
Cut the red tape? I am not sure, Daniel
Can you elaborate on your counterpoint? If you work for a company that says “start as many instances as you’d like, you don’t need approval”, then clearly, you can have as much fun as you’d like. Compare this with the burden of having to request additional servers, something that assuredly requires many levels of approval at most places.
Ludovic Pénet says:
Well, like the other commentator, I do not know many companies where developers can use the credit card without control… And so-called on-demand cloud services are not at all free.
So most companies I know of have an authorisation process for this kind of service. And it is legitimate: it is an expense.
And most of them use software to implement their policies.
Those who don’t have one will have bad surprises. Those services are full of cost traps.
Do not be mistaken: I use them for personal projects. I love GCE. I love being able to rent a server billed per minute of use. I love using such a server to prepare an ML model, then attaching four on-demand high-end GPUs to do some work.
But if that were my daily usage, it would cost me much less to buy the hardware.
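The rent-vs-buy trade-off above is simple arithmetic. Here is a minimal sketch; the prices are made-up placeholders for illustration, not actual GCE or hardware rates:

```python
# Illustrative break-even: renting an on-demand GPU instance vs. buying
# the hardware outright. All numbers are hypothetical placeholders.

def breakeven_hours(purchase_price: float, hourly_rate: float) -> float:
    """Hours of use after which buying becomes cheaper than renting."""
    return purchase_price / hourly_rate

# Hypothetical: a $6,000 GPU workstation vs. $2.50/hour on demand.
hours = breakeven_hours(6000.0, 2.50)
print(round(hours))       # 2400 hours of rental
print(round(hours / 24))  # 100 days of around-the-clock use
```

Past that break-even point, every rented hour is pure loss versus owning, which is why occasional “spot” workloads favor renting and steady daily workloads favor buying.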
-.- says:
I do often see that cloud services make it easier to have resources provisioned, compared with more traditional approaches, but this may only last until companies realise that giving developers unfettered access to the credit card has never been a great idea.
Don’t forget traditional server hosting, which has been around for a very long time (much longer than “cloud hosting”) and does handle stuff like managing hardware for clients. What makes “cloud hosting” attractive over “regular hosting”, from what I can tell, is:
near instant provisioning of resources, and hourly billing
resource management via APIs (somewhat related to the above)
hosted or “managed” services
cargo-cult hype
Non-“cloud” hosts adopt some of the above, so the lines are blurring a bit between the two these days.
Many web service developers like to think that they should be “scalable” (insert “webscale” meme here) in the sense that servers should automatically scale up if load increases. The prospect is attractive, particularly to startups which will experience viral growth (because all startups believe that this will happen), as their service won’t go down even if millions of visitors suddenly show up. Of course, such scalability is often not necessary if sufficient overprovisioning is employed, but I can only say that from experience – many don’t have such experience, and the notion of scalability that cloud services provide can answer an unknown, so it makes people feel safer, I guess.
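The overprovisioning alternative mentioned above amounts to a one-line capacity calculation. A minimal sketch, with illustrative numbers that are not from any real service:

```python
import math

# Sketch: fixed overprovisioning instead of autoscaling. Size a static
# fleet for expected peak load plus a safety margin. Numbers are
# illustrative placeholders.

def servers_needed(peak_rps: float, rps_per_server: float,
                   headroom: float = 1.5) -> int:
    """Static fleet size covering expected peak load times a safety factor."""
    return math.ceil(peak_rps * headroom / rps_per_server)

# If one server handles 500 req/s and the expected peak is 3,000 req/s,
# nine statically provisioned servers give a 1.5x margin.
print(servers_needed(3000, 500))  # 9
```

The catch, as the comment says, is choosing `peak_rps` and `headroom` sensibly – which takes exactly the kind of experience many teams don’t have, and which autoscaling lets them avoid estimating.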
The “managed” services can help further speed up provisioning. Note that I use managed in quotes as I think they’re mostly a misnomer (to the benefit of cloud providers) – they aren’t managed (beyond hardware/network management that any server host provides), rather, they’re pre-configured.
Pre-configured services do mean that those not versed in configuration can quickly have something set up, and can help avoid common pitfalls (like forgetting to set up backups). Personally, as someone who likes to have full control over everything, these pre-configured services aren’t attractive to me, but many developers simply don’t care about all the nitty gritty details, so I can see their point.
I see a lot of “tech envy” in developers. There’s plenty of blog posts about people being incredibly successful using the latest tech stacks or applications (or cloud, for that matter), which glamorises these sorts of things. Instead of sticking with old and boring technologies (e.g. relational databases), developers like to always change things around, try new stuff (e.g. “managed NoSQL databases”) and create new things. This also often helps with career prospects (professional experience in the latest tech), so you can see why there’s an incentive to adopt the latest trends.
I suppose many web apps aren’t that exciting, if you break down the requirements. Many are basically “CRUD” (create/read/update/delete) apps which are essentially fancy wrappers to a database. So developers invent complexity to keep things interesting, such as adopting complicated architectures (microservices, message queues, orchestration, multiple data stores etc etc). It’s often easy to justify these designs/changes (“we need to be webscale”, “separate concerns”), and they often sound attractive to management (who like to tout all the changes/improvements they’ve helped drive (whilst downplaying the downsides introduced with the changes)).
In a sense, these sorts of “fashion trends” aren’t just limited to developers – you see this in various other industries too. There’s reasons to adopt cloud, but there’s also a lot of unnecessary hype around it.
Another thing to consider is that “cloud” is considered industry standard these days (i.e. “if you’re not on cloud, then why not?”). Furthermore, names like AWS, Google and Microsoft have credibility behind them. If you use AWS, and it goes down, then Amazon is just having a bad day. However, if you go with a lesser known provider, and it goes down, you’ll be forever justifying why you didn’t go with AWS.
In terms of efficiency, hardware is cheap and developers are expensive. As hardware becomes ever more powerful, this relationship becomes truer by the day. The effect this has on the environment is rarely a concern unless it has much of an effect on the company’s bottom line (PR could be another angle, but it’s often not hard to manage in this regard).
Cloud hosting does have various downsides. Compared to traditional server hosting, cloud hosting is ridiculously expensive, for what you get. For example, it’s not unusual to get 5-10x more bang for your buck at places like OVH compared to AWS, and that’s ignoring what they charge for bandwidth (which is even more crazy). Unless you can really make use of dynamic scaling (i.e. have workloads which vary greatly), cloud will almost certainly cost more than regular server hosting. However, in most organisations, developers don’t really care about what it costs, as it’s rarely their concern.
(some places adopt a hybrid approach – baseline load is handled by dedicated servers, and dynamic load handled by cloud)
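The hybrid approach in that parenthetical has a simple shape: dedicated servers absorb the baseline, and cloud instances absorb only the excess. A sketch with hypothetical capacity figures:

```python
import math

# Sketch of the hybrid model: dedicated servers handle baseline load,
# on-demand cloud instances handle only the overflow. All capacities
# are illustrative placeholders.

def burst_instances(current_load: float, dedicated_capacity: float,
                    cloud_capacity_per_instance: float) -> int:
    """Cloud instances needed once load exceeds the dedicated baseline."""
    excess = max(0.0, current_load - dedicated_capacity)
    return math.ceil(excess / cloud_capacity_per_instance)

print(burst_instances(800, 1000, 100))   # 0: the baseline absorbs it
print(burst_instances(1350, 1000, 100))  # 4: 350 excess / 100 per instance
```

This is where cloud pricing actually wins: you pay per-hour rates only for the variable slice of the load, while the cheaper dedicated hardware runs the steady majority.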
Also, many of the services provided are proprietary/non-standard to an extent, and there’s some element of vendor lock-in. This, however, is rarely a concern when starting out.
(personally, I have a suspicion that the absurd bandwidth costs that cloud providers charge, may be to encourage customers to keep all their services on the same platform, rather than simply pick and match the most cost effective solution)
Because everything is billed, mistakes can be costly (for example, a rogue script which uses too many resources), whereas on traditional hosting, your server would just slow down and it’d be obvious that there was a problem. Cloud providers do often enable you to set up warnings if you’d get charged too much, but you have to notice and react to the warning, and also remember to set it up in the first place.
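The check such a billing warning performs is essentially a linear projection of month-end spend. Here it is as plain, provider-neutral logic, not any cloud vendor’s actual API:

```python
# The kind of check a billing alarm performs, as local logic: project
# month-end spend from spend so far and compare against a budget.
# This is a sketch, not a cloud provider's alerting API.

def over_budget(spend_so_far: float, day_of_month: int,
                days_in_month: int, monthly_budget: float) -> bool:
    """True if linearly projected month-end spend exceeds the budget."""
    projected = spend_so_far / day_of_month * days_in_month
    return projected > monthly_budget

# $400 spent by day 10 projects to $1,200 over a 30-day month.
print(over_budget(400.0, 10, 30, 1000.0))  # True: $1,200 > $1,000
print(over_budget(400.0, 10, 30, 1500.0))  # False
```

Note that a rogue script can burn through a budget in hours, well inside the granularity of a once-a-day check like this – which is the comment’s point about having to notice and react in time.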
Complexity is also an element. AWS, for example, offers many services (often with undescriptive names) and can require some knowledge to understand.
This may seem a little contradictory to some extent, as cloud services are supposed to remove complexity with setting up services – I suppose it does to some extent, but it does add its own set of complexities on top.
As for your examples of photos, I don’t really think this is a property of cloud. If you’re given limited storage space in the cloud, then you’d still have to manage what gets stored there. On the other hand, if you have a big hard drive at home, you’d be less inclined to delete stuff.
As for your examples of photos, I don’t really think this is a property of cloud. If you’re given limited storage space in the cloud, then you’d still have to manage what gets stored there.
I would not argue that it is “a property of the cloud”. I was just making an analogy.
JF Grenier says:
The main advantage of cloud computing is not really cost or performance. It’s about time and humans.
Having anything right now is always better than having the same result tomorrow. Spinning up a couple of instances to do something new is not a problem; if it costs more than the developer hours that optimization would require, optimize at that point, never before. Doing it earlier implies you’re not doing some other task that could bring more value.
Needing no humans on your side to do it is better than having to find, hire and keep a team to manage stuff internally. Cloud providers are essentially “all those devops people who cost a bunch and whom we can’t find anyway” as a service. Hiring is getting hard in tech, really hard.
As deliveries need to go faster and faster and we have fewer and fewer people to do them, it’s inevitable that something that’s mainly just a cost will get outsourced if it means we can go faster with fewer people. No one cares whether the shops you go to own or rent their spaces; it’s pretty much the same in tech.
It is absolutely true that the cloud enables sloppiness. See, e.g., Frank McSherry’s paper “Scalability! But at what COST?”, which compares “big data” distributed systems to the performance of a well-engineered program running on a single thread.
At the same time, if one uses the cloud thoughtfully, it can be freeing. If a service is melting due to load issues, you could allocate a bunch of developers to optimizing it right now (and maybe assessing whether the increased scale merits a wholly new architecture). But this introduces uncertainty and schedule risk into whatever project they were working on. It also hurts morale – no one wants to be interrupted to fight fires. And it risks poor decisions made in the heat of the moment.
Instead, the cloud lets you buy time – simply pay for a larger server (or servers), and schedule the optimization work for the next sprint, in two weeks’ time. Thus, even though you are nominally paying for compute flexibility, you are actually buying flexibility at the developer level – and developers are the most costly asset in many companies.
The last place I worked was an 80-person software firm. Everyone in engineering had permission to launch new AWS resources. If it was for a transient thing, no sign off needed. If it was for an ongoing project, give a heads up to your manager and make sure it aligned with the overall architecture. No big deal.
Everyone also had access to a dashboard that showed the company’s daily revenue vs its (not insubstantial) daily AWS expenses. Perhaps that was key, too.
Ludovic Pénet says:
We always come back to some kind of “debt”. Independent of the on-demand/autoscale vs. on-premises question, there is an old debate: carefully engineered vs. quick and dirty.
Which is best? IMHO, it depends… What matters most is to make a thoughtful decision, with no belief in a magic solution.
A possibly relevant blog post:
https://blog.codinghorror.com/the-cloud-is-just-someone-elses-computer/