
What are the major tech companies doing to win in the cloud, and how might the market shake out?

There’s an old joke that starts: How do you make God laugh?

The answer, of course: Make plans.

Larger services companies bent on world domination have poured a lot of capital into developing cloud resources, and some aren’t doing well. Let’s ignore Software-as-a-Service (SaaS) and pure-play cloud services companies, and instead focus on some new entrants that staked their claims in markets besides cloud.
Dell

What They Did: Clouds are made up of disk and virtual stuff, and Dell just bought EMC – whose disk empire is legendary – and with it, a huge chunk of VMware, whose feisty formula for virtualizing all-things-not-nailed-down is equally storied.

What Might Happen: In one huge private (not public) transaction, Dell gets The Full Meal Deal, and makes up for a half-decade of losing ground.
Amazon

What They Did: Like all good B-school grads, they took a key success ingredient of their rapidly evolving IT infrastructure and resold the excess capacity at a price attractive enough to hook the IT/maker hybrid community, launching yet another way to make Amazon more fluid while firing the imaginations of developers and service providers.

What Might Happen: Leaders are the biggest targets of competitors, who learn from a leader’s mistakes and find cracks to drive hydraulically powered wedges into. Amazon has captured imaginations, and to keep up that attractiveness and fluidity, it must imagine products that don’t go stale over a long revenue cycle. I say: spin-off.
Microsoft

What They Did: Dawdled, then took an increasingly brittle, if varied and successful, computing infrastructure for businesses, along with a huge user base, and not only adapted it for the web but also made licensing suitable for actual virtualization, and then for cloud use. Their cloud offering, Azure, now mimics the appliance, DevOps/AgileDev, and ground-floor services of their strongest competitors, if a little green in places.

What Might Happen: Microsoft will continue trying to leverage its huge user base into forward-thinking capabilities that extend, but don’t destroy, F/OSS initiatives, gleaning the good stuff and vetting as much as possible into its user, hybrid, and public cloud models. Profit!
Oracle

What They Did: After the indigestion of Sun and MySQL, Oracle wrestled with evolving its own vertical cloud, knowing that its highly successful DB products required comparative platform (and customer) control. Attempts at virtualization weren’t very successful, but the oil well in the basement, SQL infrastructure, kept producing. Cloud offerings were designed for Oracle’s target clientele and no one else, holding ground rather than gaining it.

What Might Happen: Oracle’s enterprise clientele has a love/hate relationship with the company, and migration to another platform makes them shudder and perspire. Core line-of-business functionality continues to evolve, but at a visibly slower pace than the competition in the arenas where Oracle plays.
HP

What They Did: HP purchased Eucalyptus, a burgeoning cloud-emulation and DevOps/AgileDev integration software organization known for its AWS-compatible private cloud capabilities. HP evolved the purchase into the HP Helion Cloud, which offered private, public, and hybrid clouds. Development appeared (to me) to languish, at least in the public space, as smaller competitors, notably Rackspace and other pure-play cloud services organizations, evolved. HP announced last week that it’s dropping the public portion of its Helion Cloud, after an earlier management change.

What Might Happen: As a hardware company, HP potentially competes with cloud services organizations on the cloud front, and its support for initiatives like OpenStack may change. Now that competitor Dell will digest EMC and VMware, the game has changed.

“If you do one thing, do it very well.” That mantra seems to ring true, and each of these organizations has struggled to keep up with the pace of change and competitive pricing, all while attempting to gain, rather than hold, ground. Juggling clouds, to coin a metaphor, isn’t easy.

There’s one motivator of migration to the cloud that cloud services organizations must absorb and that no one likes to talk about: shifting depreciation. Each of these organizations (and more like them) faces tricky cost models as the sands of depreciation fall through the ROI glass.


The new service makes it easier to implement the popular open-source tool

While Amazon Web Services made a name for itself by providing raw computing power and data storage at rock-bottom prices, the company has been moving toward providing services that do more of the heavy lifting for developers and administrators in exchange for a higher price.

Amazon Elasticsearch Service is a new product in that vein, designed to make it easier for developers to run the popular Elasticsearch open-source search and analytics engine in the AWS cloud. Users can set up an Elasticsearch Service cluster using the AWS Management Console, command-line tools, or the Amazon Elasticsearch Service API, and can set parameters like instance count and what sort of storage their search instance should use.

The Elasticsearch service can be set up either to use storage on the instance that’s running it or to provision and connect to a separate storage volume like Amazon’s Elastic Block Store.
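For illustration, here is a minimal sketch of that setup using boto3, the AWS SDK for Python; the domain name, instance count, and volume size are hypothetical, and everything not shown falls back to service defaults.

```python
# Minimal sketch: create an Amazon Elasticsearch Service domain with boto3.
# The domain name, instance type, and EBS volume size are hypothetical.
import boto3

es = boto3.client("es", region_name="us-east-1")

response = es.create_elasticsearch_domain(
    DomainName="demo-search",                      # hypothetical name
    ElasticsearchClusterConfig={
        "InstanceType": "t2.micro.elasticsearch",  # free-tier-eligible node
        "InstanceCount": 1,
    },
    EBSOptions={                # use a separate EBS volume rather than
        "EBSEnabled": True,     # storage on the instance itself
        "VolumeType": "gp2",
        "VolumeSize": 10,       # GiB
    },
)
print(response["DomainStatus"]["ARN"])
```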

Once the cluster is set up, users can load information into the storage that’s tied to their Elasticsearch cluster and begin querying it and visualizing the data using a tool like Kibana. The Elasticsearch service also integrates with Amazon’s CloudWatch Logs monitoring service. Users can set up an Elasticsearch Service domain and then navigate to their CloudWatch console and use a wizard to connect the two services.
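To make the load-and-query step concrete, here is a hedged sketch using Python’s requests library against the domain’s HTTPS endpoint; the endpoint URL, index name, and document are placeholders, and it assumes the domain’s access policy permits the caller.

```python
# Sketch: index one document and search for it over the Elasticsearch
# REST API. Endpoint URL, index name, and document are placeholders.
import requests

ENDPOINT = "https://search-demo-search-example.us-east-1.es.amazonaws.com"

# Load a document into the cluster's storage
requests.put(
    f"{ENDPOINT}/articles/article/1",
    json={"title": "Amazon Elasticsearch Service",
          "body": "managed search and analytics on AWS"},
)

# Query it back; the same JSON results can feed a tool like Kibana
resp = requests.get(
    f"{ENDPOINT}/articles/_search",
    json={"query": {"match": {"body": "managed search"}}},
)
print(resp.json()["hits"]["total"])
```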

Amazon’s new service is available in nine regions beginning Thursday. People who qualify for AWS’s free tier can use a t2.micro.elasticsearch node for up to 750 hours a month.

Services like these are key to AWS’s future profitability, since Amazon charges more for them than for just the raw compute instances they run on in the company’s cloud. That price increase may be justified by the ease of use, if developers and administrators don’t have to spend time setting up the same system from scratch.
 


Market leader AWS is attempting to widen its lead.

Amazon Web Services has launched a new general purpose Elastic Block Store that runs fully on solid-state drives (SSDs), which the leading IaaS cloud vendor says will provide dramatically better performance than its previous-generation spinning-disk persistent storage.


In addition to announcing all-SSD General Purpose EBS volumes today, AWS reduced prices for its EBS services by 35%, which represents the 43rd price drop the company has announced since 2006.

The new General Purpose SSD-backed EBS volumes come with a 99.999% availability guarantee and are meant to be used for any range of block storage use cases in AWS. Block storage is persistent storage that can be attached to compute instances, in this case AWS’s Elastic Compute Cloud (EC2) instances, and such volumes are commonly used to host databases.

The General Purpose EBS offering is ideal for small to midsized databases because the volumes burst in input/output operations per second (IOPS) based on the amount of data stored in them. IOPS is basically a measure of how fast EBS can manage the data stored on it. The new General Purpose SSD volumes provide a base level of 3 IOPS for every gigabyte of storage, scaling up to a burst of 3,000 IOPS if needed. For larger workloads that need still more speed, AWS has Provisioned IOPS EBS volumes that deliver up to 48,000 IOPS.
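The baseline math is simple enough to sketch; the volume sizes below are arbitrary examples of the 3-IOPS-per-GB rule and the 3,000 IOPS burst ceiling described above.

```python
# Baseline IOPS for General Purpose (SSD) EBS volumes: 3 IOPS per GB,
# with bursts up to 3,000 IOPS. The volume sizes are arbitrary examples.
BURST_CEILING = 3000

def gp2_baseline_iops(size_gb: int) -> int:
    return 3 * size_gb

for size_gb in (10, 100, 500, 1000):
    base = gp2_baseline_iops(size_gb)
    print(f"{size_gb:>5} GB -> baseline {base:>5} IOPS, "
          f"burst up to {BURST_CEILING} IOPS")
```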

That IOPS capacity makes the new General Purpose and Provisioned IOPS storage good not only for hosting databases, but also for speeding up the boot times of compute instances. By taking advantage of the SSD-backed EBS volumes, AWS estimates that boot times could be 50% faster for customers.

The General Purpose EBS SSD volumes cost $0.10 per GB per month, while the Provisioned IOPS volumes cost $0.125 per GB per month. AWS still offers its previous-generation EBS product, which it calls Magnetic volumes, at $0.05 per GB per month.
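A quick comparison of monthly storage cost at those per-GB rates (this ignores the separate per-provisioned-IOPS and per-I/O-request charges AWS also bills, which aren’t broken out here):

```python
# Monthly storage cost for the three EBS tiers at the quoted per-GB prices.
# Ignores AWS's separate per-provisioned-IOPS and per-I/O-request charges.
PRICES_PER_GB = {
    "General Purpose (SSD)": 0.10,
    "Provisioned IOPS":      0.125,
    "Magnetic":              0.05,
}

size_gb = 200  # example volume size
for tier, per_gb in PRICES_PER_GB.items():
    print(f"{tier:<22} {size_gb} GB -> ${size_gb * per_gb:,.2f}/month")
```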

Today’s move continues a natural progression of AWS rolling out SSDs across its cloud. AWS has previously installed SSDs as the local storage of many of its EC2 instances, and all of the company’s newest-generation virtual machine instance sizes now come in SSD flavors, including those optimized for high compute, memory, and graphics processing.

Rolling out SSD functionality is becoming standard practice among cloud providers, though various market players are at different stages of doing so. Many of AWS’s competitors, like CenturyLink, Rackspace, and Joyent, offer SSD storage options for customers. AWS, though, is making SSD volumes the default storage option for new EBS volumes.



A diverse set of real-world Java benchmarks shows that Google is fastest, Azure is slowest, and Amazon is priciest

If the cartoonists are right, heaven is located in a cloud where everyone wears white robes, every machine is lightning quick, everything you do works perfectly, and every action is accompanied by angels playing lyres. The current sales pitch for the enterprise cloud isn’t much different, except for the robes and the music. The cloud providers have an infinite number of machines, and they’re just waiting to run your code perfectly.

The sales pitch is seductive because the cloud offers many advantages. There are no utility bills to pay, no server room staff who want the night off, and no crazy tax issues for amortizing the cost of the machines over N years. You give them your credit card, and you get root on a machine, often within minutes.


To test out the options available to anyone looking for a server, I rented some machines on Amazon EC2, Google Compute Engine, and Microsoft Windows Azure and took them out for a spin. The good news is that many of the promises have been fulfilled. If you click the right buttons and fill out the right Web forms, you can have root on a machine in a few minutes, sometimes even faster. All of them make it dead simple to get the basic goods: a Linux distro running what you need.

At first glance, the options seem close to identical. You can choose from many of the same distributions, and from a wide range of machine configuration options. But if you start poking around, you’ll find differences — including differences in performance and cost. The machines may seem like commodities, but they’re not. This became more and more evident once the machines started churning through my benchmarks.

Fast cloud, slow cloud
I tested small, medium, and large machine instances on Amazon EC2, Google Compute Engine, and Microsoft Windows Azure using the open source DaCapo benchmarks, a collection of 14 common Java programs bundled into one easy-to-start JAR. It’s a diverse set of real-world applications that will exercise a machine in a variety of different ways. Some of the tests will stress CPU, others will stress RAM, and still others will stress both. Some of the tests will take advantage of multiple threads. No machine configuration will be ideal for all of them.

Some of the benchmarks in the collection will be very familiar to server users. The Tomcat test, for instance, starts up the popular Web server and asks it to assemble some Web pages. The Luindex and Lusearch tests will put Lucene, the common indexing and search tool, through its paces. Another test, Avrora, will simulate some microcontrollers. Although this task may be useful only for chip designers, it still tests the raw CPU capacity of the machine.

I ran the 14 DaCapo tests on three different Linux machine configurations on each cloud, using the default JVM. The instances aren’t perfect “apples to apples” matches, but they are roughly comparable in terms of size and price. The configurations and cost per hour are broken out in the table below.

I gathered two sets of numbers for each machine. The first set shows the amount of time the instance took to run the benchmark from a dead stop. It fired up the JVM, loaded the code, and started to work. This isn’t a bad simulation because many servers start up Java code from command lines in scripts.

To add another dimension, the second set reports the times using the “converge” option. This runs the benchmark repeatedly until consistent results appear. This sometimes happens after just a few runs, but in a few cases, the results failed to converge after 20 iterations. This option often resulted in dramatically faster times, but sometimes it only produced marginally faster times.
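As a sketch of this methodology, the harness can be driven and timed from a small script; the JAR filename and benchmark list here are illustrative, and the `--converge` flag is the harness’s converge option described above.

```python
# Sketch: time DaCapo benchmarks from a dead stop and with the harness's
# converge option. JAR filename and benchmark list are illustrative.
import subprocess
import time

JAR = "dacapo.jar"
BENCHMARKS = ["tomcat", "luindex", "lusearch", "avrora"]

def run(bench: str, converge: bool = False) -> float:
    """Run one benchmark in a fresh JVM; return total wall-clock seconds.
    DaCapo itself reports per-iteration timings on stderr; with
    --converge, its final reported iteration is the converged time."""
    cmd = ["java", "-jar", JAR] + (["--converge"] if converge else []) + [bench]
    start = time.monotonic()
    subprocess.run(cmd, check=True, capture_output=True)
    return time.monotonic() - start

for bench in BENCHMARKS:
    cold = run(bench)                      # JVM startup included
    converged = run(bench, converge=True)  # repeats until times settle
    print(f"{bench:>10}: cold {cold:6.1f}s, converged run {converged:6.1f}s")
```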

The results (see charts and tables below) will look like a mind-numbing sea of numbers to anyone, but a few patterns stood out:

Google was the fastest overall. The three Google instances completed the benchmarks in a total of 575 seconds, compared with 719 seconds for Amazon and 834 seconds for Windows Azure. A Google machine had the fastest time in 13 of the 14 tests. A Windows Azure machine had the fastest time in only one of the benchmarks. Amazon was never the fastest.
Google was also the cheapest overall, though Windows Azure was close behind. Executing the DaCapo suite on the trio of machines cost 3.78 cents on Google, 3.8 cents on Windows Azure, and 5 cents on Amazon. A Google machine was the cheapest option in eight of the 14 tests. A Windows Azure instance was cheapest in five tests. An Amazon machine was the cheapest in only one of the tests.

The best option for misers was Windows Azure’s Small VM (one CPU, 6 cents per hour), which completed the benchmarks at a cost of 0.67 cents. However, this was also one of the slowest options, taking 404 seconds to complete the suite. The next cheapest option, Google’s n1-highcpu-2 instance (two CPUs, 13.1 cents per hour), completed the benchmarks in half the time (193 seconds) at a cost of 0.70 cents.

If you cared more about speed than money, Google’s n1-standard-8 machine (eight CPUs, 82.9 cents per hour) was the best option. It turned in the fastest time in 11 of the 14 benchmarks, completing the entire DaCapo suite in 101 seconds at a cost of 2.32 cents. The closest rival, Amazon’s m3.2xlarge instance (eight CPUs, $0.90 per hour), completed the suite in 118 seconds at a cost of 2.96 cents.
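The cost figures above are just the hourly price times the wall-clock time. Reproducing the two numbers quoted for the fastest machines (small differences from the figures above are rounding):

```python
# Cost of a benchmark run = hourly price (cents) x hours taken.
def run_cost_cents(cents_per_hour: float, seconds: float) -> float:
    return cents_per_hour * seconds / 3600

print(run_cost_cents(82.9, 101))  # Google n1-standard-8: ~2.33 cents
print(run_cost_cents(90.0, 118))  # Amazon m3.2xlarge:    ~2.95 cents
```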

Amazon was rarely a bargain. Amazon’s m1.medium (one CPU, 10.4 cents per hour) was both the slowest and the most expensive of the one CPU instances. Amazon’s m3.2xlarge (eight CPUs, 90 cents per hour) was the second fastest instance overall, but also the most expensive. However, Amazon’s c3.large (two CPUs, 15 cents per hour) was truly competitive — nearly as fast overall as Google’s two-CPU instance, and faster and cheaper than Windows Azure’s two CPU machine.

These general observations, which I drew from the “standing start” tests, are also borne out by the results of the “converged” runs. But a close look at the individual numbers will leave you wondering about consistency.

Some of this may be due to the randomness hidden in the cloud. While the companies make it seem like you’re renting a real machine that sits in a box in some secret, undisclosed bunker, the reality is that you’re probably getting assigned a thin slice of a box. You’re sharing the machine, and that means the other users may or may not affect you. Or maybe it’s the hypervisor that’s behaving differently. It’s hard to know. Your speed can change from minute to minute and from machine to machine, something that usually doesn’t happen with the server boxes rolling off the assembly line.

So while there seem to be clear performance differences among the cloud machines, your results could vary. These patterns also emerged:

Bigger, more expensive machines can be slower. You can pay more and get worse performance. The three Windows Azure machines started with one, two, and eight CPUs and cost 6, 12, and 48 cents per hour, but the more expensive they were, the slower they ran the Avrora test. The same pattern appeared with Google’s one CPU and two CPU machines.
Sometimes bigger pays off. The same Windows Azure machines that ran the Avrora jobs slower sped through the Eclipse benchmark. On the first runs, the eight-CPU machine was more than twice as fast as the one-CPU machine.

Comparisons can be troublesome. The results table has some holes produced when a particular test failed, some of which are easy to explain. The Windows Azure machines didn’t have the right codec for the Batik tests. It didn’t come installed with the default version of Java. I probably could have fixed it with a bit of work, but the machines from Amazon and Google didn’t need it. (Note: Because Azure balked at the Batik test, the comparative times and costs cited above omit the Batik results for Amazon and Google.)
Other failures seemed odd. The Tradesoap routine would generate an exception occasionally. This was probably caused by some network failure deep in the OS layer. Or maybe it was something else. The same test would run successfully in different circumstances.

Adding more CPUs often isn’t worth the cost. While Windows Azure’s eight-CPU machine was often dramatically faster than its one-CPU machine, it was rarely ever eight times faster — disappointing given that it costs eight times as much. This was even true on the tests that are able to recognize the multiple CPUs and set up multiple threads. In most of the tests the eight CPU machine was just two to four times faster. The one test that stood out was the Sunflow raytracing test, which was able to use all of the compute power given to it.
The CPU numbers don’t always tell the story. While the companies usually double the price when you get a machine with two CPUs and multiply by eight when you get eight CPUs, you can often save money if you don’t increase the RAM too. But if you do, don’t expect performance to still double. The Google two-CPU machine in these tests was a so-called “highcpu” machine with less RAM than the standard machine. It was often slower than the one-CPU machine. When it was faster, it was often only about 30 percent faster.

Thread count can also be misleading. While the performance of the Windows Azure machines on the Sunflow benchmark track the number of threads, the same can’t be said for the Amazon and Google machines. Amazon’s two-CPU instance often went more than twice as fast as the one-CPU machine. On one test, it was almost three times faster. Google’s two-CPU machine, on the other hand, went only 20 to 25 percent faster on Sunflow.

The pricing table can be a good indicator of performance. Google’s n1-highcpu-2 machine is about 30 percent more expensive than the n1-standard-1 machine even though it offers twice as much theoretical CPU power. Google probably used performance benchmarks to come up with the prices.

Burst effects can distort behavior. Some of the cloud machines will speed up for short “bursts.” This is sort of a free gift of the extra cycles lying around. If the cloud providers can offer you a temporary speed up, they often do. But beware that the gift will appear and disappear in odd ways. Thus, some of these results may be faster because the machine was bursting.
The bursting behavior varies. On the Amazon and Google machines, the Eclipse benchmark would speed up by a factor of more than three when using the “converge” option of the benchmark. Windows Azure’s eight-CPU machine, on the other hand, wouldn’t even double.

If all of these factors leave you confused, you’re not alone. I tested only a small fraction of the configurations available from each cloud and found that performance was only partially related to the amount of compute power I was renting. The big differences in performance on the different benchmarks means that the different platforms could run your code at radically different speeds. In the past, my tests have shown that cloud performance can vary at different times or days of the week.

This test matrix may be large, but it doesn’t even come close to exploring the different variations that the different platforms can offer. All of the companies are offering multiple combinations of CPUs and RAM and storage. These can have subtle and not-so-subtle effects on performance. At best, these tests can only expose some of the ways that performance varies.

This means that if you’re interested in getting the best performance for the lowest price, your only solution is to create your own benchmarks and test out the platforms. You’ll need to decide which options are delivering the computation you need at the best price.

Calculating cloud costs
Working with the matrix of prices for the cloud machines is surprisingly complex given that one of the selling points of the clouds is the ease of purchase. You’re not buying machines, real estate, air conditioners, and whatnot. You’re just renting a machine by the hour. But even when you look at the price lists, you can’t simply choose the cheapest machine and feel secure in your decision.

The tricky issue for the bean counters is that the performance observed in the benchmarks rarely increased with the price. If you’re intent upon getting the most computation cycles for your dollar, you’ll need to do the math yourself.

The simplest option is Windows Azure, which sells machines in sizes that range from extra small to extra large. The amount of CPU power and RAM generally increase in lockstep, roughly doubling at each step up the size chart. Microsoft also offers a few loaded machines with an extra large amount of RAM included. The smallest machines with 768MB of RAM start at 2 cents per hour, and the biggest machines with 56GB of RAM can top off at $1.60 per hour. The Windows Azure pricing calculator makes it straightforward.

One of the interesting details is that Microsoft charges more for a machine running Microsoft’s operating system. While Windows Azure sometimes sold Linux instances for the same price, at this writing, it’s charging exactly 50 percent more if the machine runs Windows. The marketing department probably went back and forth trying to decide whether to price Windows as if it’s an equal or a premium product before deciding that, duh, of course Windows is a premium. 

Google also follows the same basic mechanism of doubling the size of the machine and then doubling the price. The standard machines start at 10.4 cents per hour for one CPU and 3.75GB of RAM and then double in capacity and price until they reach $1.66 per hour for 16 CPUs and 60GB of RAM. Google also offers options with higher and lower amounts of RAM per CPU, and the prices move along a different scale.

The most interesting options come from Amazon, which has an even larger number of machines and a larger set of complex pricing options. Amazon charges roughly double for twice as much RAM and CPU capacity, but it also varies the price based upon the amount of disk storage. The newest machines include SSD options, but the older instances without flash storage are still available.

Amazon also offers the chance to create “reserved instances” by pre-purchasing some of the CPU capacity for one or three years. If you do this, the machines sport lower per-hour prices. You’re locking in some of the capacity but maintaining the freedom to turn the machines on and off as you need them. All of this means it pays to ask how much you intend to use Amazon’s cloud over the next few years, because committing up front can save real money.
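The break-even logic is worth sketching, with deliberately hypothetical prices, since Amazon’s actual reserved rates vary by instance type and term:

```python
# Break-even between on-demand and reserved pricing.
# All three prices are hypothetical; real rates vary by type and term.
on_demand_per_hr = 0.10   # $/hr, pay-as-you-go
reserved_per_hr = 0.06    # $/hr after reserving
upfront = 500.0           # one-time reservation fee, $

# Hours of use at which the reservation starts paying for itself:
break_even_hours = upfront / (on_demand_per_hr - reserved_per_hr)
print(f"Reservation pays off after {break_even_hours:,.0f} hours")
# -> 12,500 hours, roughly 17 months of round-the-clock use
```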

In an effort to simplify things, Google created the GCEU (Google Compute Engine Unit) to measure CPU power and “chose 2.75 GCEUs to represent the minimum power of one logical core (a hardware hyper-thread) on our Sandy Bridge platform.” Similarly, Amazon measures its machines with Elastic Compute Units, or ECUs. Its big fat eight-CPU machine, known as the m3.2xlarge, is rated at 26 ECUs while the basic one-core version, the m3.medium, is rated at three ECUs. That’s a difference of more than a factor of eight.

This is a laudable effort to bring some light to the subject, but the benchmark performance doesn’t track the GCEUs or ECUs too closely. RAM is often a big part of the equation that’s overlooked, and the algorithms can’t always use all of the CPU cores they’re given. Amazon’s m3.2xlarge machine, for instance, was often only two to four times faster than the m3.medium, although it did get close to being eight times faster on a few of the benchmarks.
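Checking the rated ratio against the observed one makes the mismatch plain:

```python
# Amazon's ECU ratings vs. the speedups observed in the DaCapo runs.
ecu_large, ecu_medium = 26, 3          # m3.2xlarge vs. m3.medium ratings
print(f"Rated ratio: {ecu_large / ecu_medium:.1f}x")  # ~8.7x on paper
print("Observed on most benchmarks: 2x to 4x")        # per the tests above
```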

Caveat cloudster
The good news is that the cloud computing business is competitive and efficient. You put in your credit card number, and a server pops out. If you’re just looking for a machine and don’t have hard and fast performance numbers in mind, you can’t go wrong with any of these providers.

Is one cheaper or faster? The accompanying tables show the fastest and cheapest results in green and the slowest and priciest results in red. There’s plenty of green in Google’s table and plenty of red in Amazon’s. Depending on how much you emphasize cost, the winners shift. Microsoft’s Windows Azure machines start running green when you take the cost into account.

The freaky thing is that these results are far from consistent, even across the same architecture. Some of Microsoft’s machines have green numbers and red numbers for the same machine. Google’s one-CPU machine is full of green but runs red with the Tradesoap test. Is this a problem with the test or Google’s handling of it? Who knows? Google’s two-CPU machine is slowest on the Fop test — and Google’s one-CPU machine is fastest. Go figure.

All of these results mean that doing your own testing is crucial. If you’re intent on squeezing the most performance out of your nickel, you’ll have to do some comparison testing and be ready to churn some numbers. The performance varies, and the price is only roughly correlated with usable power. There are a number of tasks where it would just be a waste of money to buy a fancier machine with extra cores because your algorithm can’t use them. If you don’t test these things, you can be wasting your budget.

It’s also important to recognize that there can be quite a bit of markup hidden in these prices. For comparison, I also ran the benchmarks on a basic eight-core (AMD FX-8350) machine with 16GB of RAM on my desk. It was generally faster than Windows Azure’s eight-core machine, just a bit slower than Google’s eight-core machine, and about the same speed as Amazon’s eight-core box. Yet the price was markedly different. The desktop machine cost about $600, and you should be able to put together a server in the same ballpark. The Google machine costs 82 cents per hour or about $610 for a 31-day month. You could start saving money after the first month if you build the machine yourself.
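That back-of-the-envelope comparison is easy to verify:

```python
# Cloud vs. desktop break-even, using the figures quoted above.
desktop_cost = 600.0        # $ for the eight-core AMD FX-8350 box
cloud_per_hr = 0.82         # $ for Google's eight-core instance
hours_per_month = 31 * 24   # a 31-day month

print(f"Cloud, run flat out: ${cloud_per_hr * hours_per_month:,.0f}/month")
print(f"Break-even: {desktop_cost / cloud_per_hr:,.0f} hours of use")
# -> ~$610/month; the desktop pays for itself after ~732 hours
```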

The price of the machine, though, is just part of the equation. Hosting the computer costs money, or more to the point, hosting lots of computers costs lots of money. The cloud services will be most attractive to companies that need big blocks of compute power for short sessions. If they pay by the hour and run the machines for only a short block of time, they can cut the costs dramatically. If your workload appears in short bursts, the markup isn’t a problem because any machine you own will just sit there most of the day waiting, wasting cycles and driving up the air conditioning bills.

All of these facts make choosing a cloud service dramatically more complicated and difficult than it might appear. The marketing is glossy and the imagery makes it all look comfy, but hidden underneath is plenty of complexity. The only way you can tell if you’re getting what you’re paying for is to test and test some more. Only then can you make a decision about whether the light, airy simplicity of a cloud machine is for you.



 

Challengers to Amazon’s dominance in public cloud services face an uphill battle

It is quite a stretch for most cloud service providers to match the geographical reach of Amazon Web Services. It’s equally tough to roll out a portfolio of public cloud offerings at the same pace as Amazon.

It’s also quite hard to build the industry ecosystem of independent software vendors and certified professionals that Amazon has managed to nurture and grow.

And it is virtually impossible to beat Amazon on price.

But that hasn’t stopped dozens of companies from jockeying for position in the public cloud marketplace as the next best thing to AWS. There is still a significant market opportunity, because public cloud services are being used more frequently, both as an extension of enterprise IT and as base infrastructure for startups. The question many enterprise IT executives are pondering is which providers will still be around in three to five years.


Amazon is the undisputed leader in the global IaaS market, according to London-based market research firm TechNavio. Analysts there define the market as including both compute- and storage-as-a-service offerings. Amazon’s market share this year sits at between 41 and 43 percent, according to a report the firm published last month. Looking at just EC2, Amazon’s public compute option, the company held a market share of around 60 percent, with Rackspace a very distant second at 13 to 15 percent. TechNavio analysts say they expect Amazon to hold onto that lead for the foreseeable future.

While industry analysts point out that competitors like Google, Joyent, Microsoft, Rackspace, Savvis and Terremark are likely to gain some market share, they are not likely to cut too deeply into the revenue stream Amazon currently pulls in with AWS, as cloud spending is expected to grow rapidly.

IDC predicts that U.S. public IT cloud services revenue will experience a compound annual growth rate of 18.5% over the forecast period outlined in its most recent report, from $18.5 billion last year to $43.2 billion in 2016. The IDC report, published in late 2012, includes assessments in five functional market segments within its definition of public cloud services including application as a service, system infrastructure software as a service (which includes Infrastructure as a Service [IaaS]), platform as a service (PaaS), server as a service and basic storage as a service. IDC predicts that by 2016 the global market for these services will surpass $100 billion.
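Those dollar figures are consistent with the stated growth rate, as a quick compound-growth check shows (taking “last year” to mean 2011, five growth years before 2016):

```python
# Compound-growth check on IDC's forecast: $18.5B growing at 18.5%/year.
# Assumes "last year" means 2011, i.e. five growth years to 2016.
revenue, cagr = 18.5, 0.185
for year in range(2012, 2017):
    revenue *= 1 + cagr
print(f"Projected 2016 revenue: ${revenue:.1f}B")  # ~$43.2B, matching IDC
```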

The advantages Amazon has over its competitors fall in both the technical and non-technical realms. Erik Sebesta, chief architect and technology officer at the consultancy Cloud Technology Partners, says: “At its core, Amazon is a business, just like most of its customers. Amazon really ‘gets’ the business application to the cloud and sells that very well.”

But Amazon is also able to make a technical case to corporate IT based on its global data center presence, trusted brand, and leading edge technology included in its broad portfolio of services, adds Sebesta.

Forrester Research vice president and principal analyst James Staten contends that while the traditional hosting service providers like AT&T and Verizon/Terremark certainly have the reach in terms of the number of geographically dispersed data centers they own, “not all of them are offering cloud services, so catching up to Amazon in that regard is not an easy – or inexpensive – issue.”

Staten adds that Amazon has made a push in the last 12 months to pick up as many compliance, security, and operational standard certifications as possible to help ease corporate IT’s hesitation about its security and management practices.

AWS has achieved ISO 27001 certification and has been validated as a Level 1 service provider under the Payment Card Industry (PCI) Data Security Standard (DSS). AWS undergoes annual SOC 1 audits and has been successfully evaluated at the Moderate level for Federal government systems as well as DIACAP Level 2 for DoD systems. (Amazon publishes a full listing of its compliance credentials.)

Microsoft has been working toward the same certification prowess with its Azure cloud platform, but Staten argues that Azure will not earn the same number of certifications as AWS for at least another six months.

Amazon also has a major lead in building industry buzz that attracts both ISVs and management consultants, who help build an ecosystem around Amazon’s cloud that makes it more attractive to customers, who sign on to dabble but stay because it’s the place to be seen. “I estimate that for every one ISV building an application on another cloud platform, there are 10 building an application to run in Amazon’s cloud,” Staten says.

Staten argues that Rackspace, working in conjunction with the entire OpenStack community, is the only vendor that might be able to rival AWS’s ecosystem.

Amazon’s weak link

The piece of the enterprise cloud story where Amazon does not have a hard-and-fast answer is private cloud links and hybrid cloud implementations.

Here is where, analysts say, established managed service providers with relationships inside enterprise IT (IBM, Savvis, HP) that now offer private cloud services, and pure-play cloud companies supporting hybrid clouds (GoGrid, BlueLock), have a chance to beat Amazon into the enterprise.

Amazon has tried to neutralize this type of criticism by establishing partnerships with companies like Equinix to provide a direct, superfast connection between private corporate assets and AWS, calling the connection a “virtual private cloud.”

Amazon has no real intention of offering on-premises private cloud services, mainly because they are not viable within the business model it has established as the public cloud norm, Staten says.


 


While the distributor of several e-books was wrong to assume that the “classic” nature of certain titles allowed them to be sold under the public domain license, there’s been considerable concern over Amazon’s right to “undo” the sale of those titles through its electronic Kindle Store. Last July, Amazon CEO Jeff Bezos issued a mea culpa, saying the unannounced deletion of various titles including George Orwell’s 1984 was “stupid, thoughtless, and painfully out of line with our principles.”

 


 

This morning, as first noted by Gizmodo’s Rosa Golijan, individuals affected by Amazon’s unannounced deletions are now receiving e-mails that appear to be from Amazon, offering customers the opportunity to have the company deliver legitimate copies of their books free of charge, or alternatively to receive $30 gift certificates or refund checks from Amazon.

The e-mail as quoted there is curious in that it mentions only 1984, which was not the only deleted title. Last June, the retailer deleted illegitimate copies of Ayn Rand novels, including Atlas Shrugged, The Fountainhead, and The Virtue of Selfishness, one month prior to the deletions of Orwell’s novels, which also included Animal Farm. Amazon has yet to confirm the legitimacy of the e-mails now being trafficked around the Web, nor is there evidence of similar e-mails regarding deleted titles other than the one that generated the most controversy because of its irony.

Many blogs and a few YouTube videos poked fun at the irony of, as they put it, a distributor “burning” books about book burning from a device called Kindle. Though some were confusing the title in question with Ray Bradbury’s classic Fahrenheit 451, others accurately invoked Orwell’s metaphorical “memory hole,” which in his novel was a depository for all modern literature deemed irrelevant to the maintenance of the state.

From a technical and legal standpoint, however, Amazon may have been within its rights to do what it did, although it certainly turned out to be politically inconvenient for the retailer. Some distributors have been operating under the mistaken belief that since book distribution contracts historically have pertained only to printed material, the rights to distribute works electronically are up in the air, “jump balls” — this was part of Google’s original defense of its Google Books scanning project.

But the electronic version of a book is software. On the one hand, that qualifies it for copyright protection as one of “any and all forms” of publication under book publishers’ contracts; on the other, it gives book publishers the right to determine how, or if, they will distribute a copyrighted work as software. So if someone does that job for them and Amazon facilitates the sale, Amazon could be liable for copyright infringement – a liability that certainly made the retraction of the book urgent.

How Amazon went about that task in this case was perhaps ill-advised, especially since owners of Kindle and other e-book readers think of their electronic libraries as sacrosanct as their printed ones. The notion that they are purchasing software — essentially, the limited right to use media in electronic form, as prescribed by the distributor — may conflict with their feelings of books as possessions, and their equation of e-books with books from a moral standpoint.

Writing last month on behalf of the Free Software Foundation, Harvard University Law Professor John Palfrey argued that even though e-books are software, they hold the same sacred place in readers’ hearts and should be protected as such: “The level of control Amazon has over their e-books conflicts with basic freedoms that we take for granted. In a future where books are sold with digital restrictions, it will be impossible for libraries to guarantee free access to human knowledge.”

But that’s for the reader of classic novels. One of the most lucrative categories for e-book publishing in recent years has been technology books, far more so in some cases than classic literature. And as it turns out, in a recent survey of 2,000 e-book customers, as O’Reilly publisher Joe Wikert reported last week, 81% of respondents use laptop computers to read their O’Reilly downloads, versus 29% on the iPhone, 14% on the Amazon Kindle, and 11% on the Sony Reader.
