

MCTS Training, MCTS Certification exams Training at



All of the following are basic components of a security policy EXCEPT the

A. Definition of the issue and statement of relevant terms.
B. Statement of roles and responsibilities.
C. Statement of applicability and compliance requirements.
D. Statement of performance characteristics and requirements.

Answer: D

Explanation: Policies are considered the first and highest level of documentation, from which the
lower level elements of standards, procedures, and guidelines flow. This order, however, does not
mean that policies are more important than the lower elements. These higher-level policies, which
are the more general policies and statements, should be created first in the process for strategic
reasons, and then the more tactical elements can follow. -Ronald Krutz The CISSP PREP Guide
(gold edition) pg 13

A security policy would include all of the following EXCEPT

A. Background
B. Scope statement
C. Audit requirements
D. Enforcement

Answer: B


Which one of the following is an important characteristic of an information security policy?

A. Identifies major functional areas of information.
B. Quantifies the effect of the loss of the information.
C. Requires the identification of information owners.
D. Lists applications that support the business function.

Answer: A

Explanation: Information security policies are high-level plans that describe the goals of the
procedures. Policies are not guidelines or standards, nor are they procedures or controls.
Policies describe security in general terms, not specifics. They provide the blueprints for an
overall security program, just as a specification defines your next product. – Roberta Bragg CISSP
Certification Training Guide (que) pg 206

Ensuring the integrity of business information is the PRIMARY concern of

A. Encryption Security
B. Procedural Security
C. Logical Security
D. On-line Security

Answer: B

Explanation: Procedures are looked at as the lowest level in the policy chain because they are
closest to the computers and provide detailed steps for configuration and installation issues. They
provide the steps to actually implement the statements in the policies, standards, and
guidelines…Security procedures, standards, measures, practices, and policies cover a number of
different subject areas. – Shon Harris All-in-one CISSP Certification Guide pg 44-45

Which of the following would be the first step in establishing an information security program?

A. Adoption of a corporate information security policy statement
B. Development and implementation of an information security standards manual
C. Development of a security awareness-training program
D. Purchase of security access control software

Answer: A



Diabetes causes the sugar levels in the blood to soar, damaging the blood vessels of the organs in the body. Damage to the blood vessels of the nerves can result in neuropathy. Damage to the blood vessels of the kidney can result in kidney failure, requiring dialysis. Moreover, high glucose levels in the blood can also damage the retina’s blood vessels. The retina is like the film inside a camera: it lines the back of the eye, detects light entering the eye, and transforms it into an image for the brain to interpret. When the blood vessels of the retina become damaged, the retina may stop functioning and vision loss becomes evident.

In people who have been exposed to high blood sugar levels for a very long time, as in diabetes, portions of the blood vessels in the retina can weaken. These weakened portions usually pouch out along the walls of the blood vessels, forming microaneurysms. These microaneurysms can rupture at any time, spilling blood into the retina, where it appears as small dots of hemorrhage; most of these disappear over time, and the remaining debris forms clumps called hard exudates. Together, these changes in the retina are called background diabetic eye disease. The disease is quite common among people who have been diabetic for 10 years or more. Background diabetic eye disease rarely causes significant loss of vision unless the macula, the center of the retina, is affected. When swelling occurs in the macula, it is termed diabetic macular swelling, and vision loss is a likely result.
Another diabetic eye disease is proliferative diabetic eye disease. It is caused by the abnormal growth of new blood vessels in the retina, formed in an attempt to replace those that have been destroyed. These new vessels are usually very fragile and break easily. Proliferative diabetic eye disease can result in complete or partial loss of vision, but it is less common than background eye disease.


Both background and proliferative diabetic retinopathy usually have no symptoms during the early stages, and the only way to detect them is through regular visits to the Optometrist Austin. It is very important to check the eyes regularly for abnormal changes. If you are diabetic, visit your Optometrist Austin now and keep your eyes healthy and in check.

The eyes are the mirror of the soul; take care of your eyes and visit optometrist Austin for clearer vision.

Amazon said Friday morning that it expected that the majority of sites affected by its unexpected cloud services outage would be back up by the end of Friday, with some exceptions. (See PCMag’s analysis on why the Amazon cloud outage matters.)

Amazon published round-the-clock updates overnight, advising customers that the company had brought “all hands on deck” to solve the problem. At issue, Amazon said, was a single “availability zone” in the Eastern U.S. that was left with “stuck” volumes of data.





“We continue to see progress in recovering volumes, and have heard many additional customers confirm that they’re recovering,” Amazon wrote Friday morning. “Our current estimate is that the majority of volumes will be recovered over the next 5 to 6 hours. As we mentioned in our last post, a smaller number of volumes will require a more time consuming process to recover, and we anticipate that those will take longer to recover. We will continue to keep everyone updated as we have additional information.”

It appeared that the sites which had been affected by the Thursday outage to Amazon’s EC2 (Elastic Compute Cloud) and Amazon Web Services were functional, but that data for the affected period was in the process of being restored. That means that visitors to those sites would be able to use them, but not necessarily access comments, access rights, documents or other data affected by the outage.

Quora and Reddit, two sites that were affected by the outage, were operational at press time, but partly so. “We are slowly getting our capacity back, and as such users are being randomly granted access back to the site,” Reddit posted to the top of the site. “Please check back soon, as you may be able to log in shortly. Thanks!”

Charlie Cheever, one of the founders of Quora, also explained why some of the site’s data was still missing. “The Amazon EBS volumes where the data for Quora is stored are still not available,” he wrote on the site. “To get the site back online, we brought up a new database based on the most recent database backup we had available which was from midnight on Tuesday night. So, any writes to the database during the time between the backup and the outage (most activity on the site on Wednesday) are missing right now.

“When the volume is restored, we’ll try to merge the data from Wednesday back into the current version of the site,” Cheever added. “There will likely be some conflicts, but we think there will be graceful ways to resolve most of those.”

Foursquare, which had also been taken down by Amazon’s cloud problems, reported that it had restored full access to the site at 1:40 AM EDT on Friday, taking the site down for ten minutes just to ensure there were no problems.

Hootsuite, which also had been affected by the outage, said that service had been restored at 9:25 PM on Thursday, although not all profiles were available.

At 6:18 AM PT, Amazon reported that it had begun to see “more meaningful progress” in restoring its volumes, and at 8:49 AM PT customers began telling Amazon that they were coming back online.

While some might argue that it can almost replace a full-fledged computer, the iPad was designed to be simple. Even if you have very little tech savvy, you can probably pick up Apple’s latest tablet and master most of the basic features in a matter of minutes. And the longer you spend swiping your way around the touch-based iOS operating system, the more you’ll learn. Like it is with any OS, though, there are just some things that aren’t obvious. You could (gasp!) pore through the 22-chapter iPad 2 User Guide (it’s got three appendices too), to make sure you’re not missing out on anything, but where’s the fun in that?




After almost a month of testing and using the iPad 2, we’ve learned some cool tricks and we want to share them with you. In the slideshow, you’ll find general tips that apply to multiple applications, along with those specific to Safari, Maps, iPod, and Photos. Whether you’re a seasoned Mac or iOS user, or even an Apple newbie, there’s something here to help you get the most out of your iPad 2. (Actually, come to think of it, a lot of these tips also apply to the original iPad.) Have a tip, trick, or shortcut of your own to share? Let us know in the comments below.

Location-based services (LBS) may be all the rage around the world, but they have not made their presence felt in India. Could context awareness hold the key to their success?
In recent times, there has been a lot of talk about location-based services (LBS): applications that integrate geographic location while delivering relevant information.




Interestingly, LBS as a concept is not new. It is in fact almost a decade old and has been in use in the enterprise domain for years. However, it has not made its presence felt in the consumer sector, for some rather interesting reasons. One cannot have a tasty dish without the right ingredients, and the same applies to technologies. The ingredients considered necessary to make LBS relevant to a broader base of consumers were the existence of standards, efficient computing power, friendly yet powerful human-computer interfaces, higher penetration of feature-rich smartphones, GPS (global positioning system) devices, a geographical database of locations, and a rich collection of points of interest.

While most of the ingredients seem to have reached the threshold to enable the use of LBS in developed countries, developing economies like India are still struggling in this regard. Perhaps India’s dense population and middle-class dominance do not make investment-intensive ingredients like GPS devices, smartphones, PNDs (portable navigation devices), etc, a common need. Thus, delivering location-based services has been a greater challenge in this part of the world and requires different strategies from those that succeeded in the west.

The need for LBS
A rapidly changing landscape, lack of navigation planning, and increasing traffic are some of the factors that are fuelling a rising need for highly efficient navigation and allied services for the masses. An efficient navigation system can also act as the backbone for other services to be delivered along an active route the user is travelling through—what are known as ‘vertical location-based-services’. Live traffic information, local search, permissive local advertisements, mobile contact trackers and SOS, are some examples of vertical LBS.

One of the developments in the LBS industry has been the emergence of technologies that have demonstrated a viable solution without needing a GPS device. Operators themselves have been trying for more than half a decade to use dynamic location awareness to provide customised mobile services to consumers, but there has nevertheless been a noticeable delay in bringing efficient location-based services over mobile devices.

Today, such services are often used via Web browsers and are hence considered Web services. Additional challenges include the richness, personalisation, and ubiquity of services for the mobile user, and the linking of services to a relevant context. Another challenge has been the lack of understanding among application developers about what makes a location-based service appealing to the common person. Systems that can deliver intelligent information in relation to the context of the user’s location simply do not exist.

The entire LBS bouquet has revolved around harnessing value from users’ location awareness, not the location’s context awareness. This is confirmed by the Future Computing Environments (FCE) research team at Georgia Tech, which is dedicated to inventing novel applications that use context-aware computing technology to assist everyday activities. The team has acknowledged that the majority of context-aware computing for mobile applications is restricted to location-aware computing rather than location-context-aware computing. This is perhaps the fundamental intelligence that needs to be delivered to make LBS more appealing to the user.

What is context awareness?
Is the fact that I am at Lajpat Nagar in Delhi contextual information about me? Actually, it is location awareness about me, not location context awareness. But add the facts that it is a Sunday afternoon, that Lajpat Nagar is a busy shopping area, and that my office is not located here, and the inference that I am probably out shopping becomes contextual information related to my current location.

Another example of contextual information would be the fact that as it is December, it is likely to be cold in Delhi and there’s a fair chance that I might go to a restaurant to have some hot beverage. Similarly, a famous dry-fruit shop in the area may like to notify me that it is offering cashew nuts at a discount. Since most people are in a frame of mind to shop in that area, chances are that they may go there to check the prices and end up making a purchase. Since it is winter, chances are that most shoppers are looking for woollen clothes on discount in the area and, hence, a merchant offering special discounts on woollen clothes may like to advertise it to every person entering the area.

This is an example of advertising based on the location context-awareness of the consumers. Such information delivered on mobile phones with the address of relevant merchants and last-mile directions can act as icing on the cake and enhance the experience of consumers.

However, we have not yet seen intelligent ways of collating and managing this information to make it relevant, interesting and useful for the customer, while increasing the reach of advertisers. One way is to passively collect mobile location data in real-time from users’ mobile devices or other possible sources, and then classify people into various categories based on their behavioral patterns over time.

This will ease the process of inferring traits about particular places and the type of people that visit them; what people do at these places and what are they likely to do next. This information can easily be used to decipher changes in consumer behaviour over time, predict trends and hence, this data can be correlated with various industry domains like retail, travel, tourism and entertainment. A few companies in the west, like Rocking Frog and Sense Networks, have made some efforts in this direction but unfortunately, the Indian arena is still waiting for something similar to happen.
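A minimal sketch of this classification idea is shown below; the user IDs, place categories, and "dominant category" rule are all hypothetical, and a real system would work from far richer, passively collected location traces over time:

```python
from collections import Counter

# Hypothetical visit logs: (user_id, place_category) pairs passively
# collected from mobile devices. All names and categories are invented.
visits = [
    ("u1", "mall"), ("u1", "mall"), ("u1", "restaurant"),
    ("u2", "office"), ("u2", "office"), ("u2", "restaurant"),
]

def classify_users(visits):
    """Label each user by the place category they visit most often."""
    per_user = {}
    for user, category in visits:
        per_user.setdefault(user, Counter())[category] += 1
    return {user: counts.most_common(1)[0][0] for user, counts in per_user.items()}

print(classify_users(visits))  # {'u1': 'mall', 'u2': 'office'}
```

From such labels, a service could begin to infer the traits of places (who visits them and when) rather than merely the coordinates of users.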

An intelligent agent can change your business. And no, we are not talking about espionage and subterfuge but about tech entities. Read on…
A number of enterprises these days employ abstract intelligent agents (AIAs). These are ‘intelligent’ entities that can scrutinise a given milieu and respond to variations in specific parameters. As AIAs have the knack to learn from their surroundings and are specially ‘trained’ to use their knowledge, a number of companies use them for meticulous precision and accuracy in calculations.



Agents with intelligence
An intelligent agent has the ability to observe its surroundings and react according to the changes. But it is not fully self-reliant. In most cases, the implementer is required to closely monitor the working of these agents.
In actual practice we deploy many types of multi-agent systems (MASs) for better accuracy in tasks like online trading, disaster response and so on. Online trading is too difficult for an individual agent (or monolithic system) to handle. If you are in this domain, you can incorporate such a system in a business framework, provided you have the technical expertise.

The agents in a multi-agent system could be robots or human beings, and their environment may contain other agents, objects, and global and local variables. Based on the behavioural capabilities of the agents, you may classify them as reactive agents (whose actions depend only on the environment) and proactive agents (which consider both the environment and their own state). Some agents are non-adaptive while others are adaptive. One has to be very vigilant while choosing the agent for a particular business.
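The reactive/proactive distinction can be sketched in a few lines of Python; the price threshold and trading actions below are invented purely for illustration:

```python
class ReactiveAgent:
    """Reacts to the current environment reading alone."""
    def act(self, price):
        return "sell" if price > 100 else "buy"

class ProactiveAgent:
    """Considers its own state (inventory) as well as the environment."""
    def __init__(self):
        self.inventory = 0

    def act(self, price):
        if price > 100 and self.inventory > 0:
            self.inventory -= 1     # only sell what we actually hold
            return "sell"
        if price <= 100:
            self.inventory += 1     # buy while the price is low
            return "buy"
        return "hold"               # high price but nothing to sell
```

The reactive agent would happily "sell" even when it holds nothing; the proactive agent's answer depends on the history of its own actions as well as the environment.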

Technically speaking, MAS agents can communicate using a weighted request matrix and a weighted response matrix. The specification style adopted by the developers is elegant and comprehensible. Even if you are a novice in this realm, you can grasp an agent’s ‘functions, rules, knowledge and strategies’ by looking at it. And there are many other schemes for MASs, such as the challenge-response-contract, that are extremely practical and commanding.

Since they are able to solve problems by themselves, these are aptly called self-organised systems. The success of MAS has triggered advanced-level research in agent-oriented software engineering; organisation; beliefs, desires, and intentions (BDI); distributed problem solving and multi-agent learning. By suitably using communication and negotiation capabilities, agents help enterprises in promoting their products and solutions.

Agents in action
When you purchase books online, have you ever wondered how the site is able to display a list of books that you may like? Buyer agents (shopping bots) are behind this. The bot analyses what you are buying now and what you have bought in the past, then uses its ‘knowledge base’ to furnish suggestions. These agents can be customised to get information about goods and services. You must have used ‘user’ agents (personal agents) too: when your system plays games as your opponent, it is performing the function of an agent.
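A toy version of such a buyer agent's reasoning might look like the following; the users, genres, and overlap rule are invented for illustration, and real shopping bots draw on vastly larger knowledge bases:

```python
# Hypothetical purchase histories, keyed by user.
history = {
    "alice": {"sci-fi", "fantasy"},
    "bob": {"sci-fi", "thrillers"},
}

def suggest(user, history):
    """Suggest genres bought by users whose tastes overlap with this user's."""
    mine = history[user]
    suggestions = set()
    for other, theirs in history.items():
        if other != user and mine & theirs:   # shared taste found
            suggestions |= theirs - mine      # recommend what they have and we lack
    return suggestions

print(suggest("alice", history))  # {'thrillers'}
```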
If you look at the website of NASA’s (National Aeronautics and Space Administration’s) Jet Propulsion Laboratory, you can find information about monitoring-and-surveillance (predictive) agents. Though these agents as such can’t be employed in a business framework, they can be tailored to perform specific tasks such as monitoring your competitors’ prices.

Data mining agents are also widely used in the industry. A data warehouse often contains information from many different sources. Agents can come up with ways to boost sales or keep customers loyal by using data mining techniques coupled with other AI solutions.

The list of applications does not end here. There is a lot more that you can do with such agents. You can even simulate the movement of a large number of objects or characters using them. Some R&D teams opt for fuzzy agents and distributed agents to perform definite missions.

Why are agents unique?
Their ability to learn and adapt makes agents exceptional. Not surprisingly, many industries are investing in their development. Many companies have R&D teams working on agents. Though you can build an agent from scratch, there are some frameworks that implement common standards, such as JADE.

JADE is essentially middleware (a type of connector) that can be used for the development of applications. It can administer both mobile and fixed environments, and a peer-to-peer intelligent agent approach is employed. You can use JADE for developing simple reflex agents, model-based reflex agents, goal-based agents, utility-based agents and learning agents, provided your company’s CTO is good at customising complex frameworks. Once you employ an agent, whether it is an agent for decisions, inputs, learning or processing, a world/global agent, or a spatial or temporal agent, you will see the impact right away in your turnover. That’s the clout of artificial intelligence (AI)!

Toshiba America has announced that it is expanding its offering of NAND Flash memory by introducing SmartNAND, a new family of Flash based on a new 24nm production process that integrates a control chip with an error correction code (ECC).

The new 24nm SmartNAND will replace current-generation 32nm devices, utilizing a faster controller and internal interface that result in faster read and write speeds and better overall performance. Toshiba claims read speeds may be up to 1.9 times faster than current models, and write speeds 1.5 times faster. SmartNAND is also designed to support four read modes and two write modes, and a special “power save” mode for low-power requests.

In addition, the SmartNAND series removes “the burden of ECC from the host processor, while minimizing protocol changes,” according to a Toshiba press release. This simplifies the design and makes memory using advanced NAND technology better suited for inclusion in digital televisions, portable media players, tablet PCs, set-top boxes, and other devices that require high-density, non-volatile memory.
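The press release does not say which ECC scheme SmartNAND uses, but the general idea of error correction can be illustrated with the classic Hamming(7,4) code, which protects 4 data bits with 3 parity bits and can correct any single flipped bit:

```python
def hamming_encode(d):
    """Encode 4 data bits as a 7-bit Hamming codeword (positions 1..7)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # parity over positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4          # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4          # parity over positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming_correct(c):
    """Locate and flip a single-bit error, then return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3     # syndrome = 1-based position of the bad bit
    if pos:
        c[pos - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]
```

Flash controllers use much stronger codes over larger blocks, but the principle is the same: a stored syndrome computation points at the failed bit so the device can hand back clean data.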




In a statement, Scott Nelson, vice president of the Memory Business Unit of Toshiba America Electronic Components, said, “Toshiba’s new SmartNAND will provide our customers a smoother design experience into 24nm generation and beyond. By enabling the system designer to directly manage the NAND using a standard or custom host NAND controller, while leaving the function of error correction within the NAND package, SmartNAND results in faster time to market, access to leading geometries and potentially lowers design costs when compared to conventional NAND flash implementations with external ECC.”

SmartNAND Flash memory will appear in capacities ranging from 4GB to 64GB; samples will be available starting in mid-April, with mass production slated for the second quarter of 2011.

Accusations of monopolistic practices have been lobbed at Google for years, but now they are reaching a fever pitch, and investigations may be coming.

Google may soon face allegations of anti-competitive behavior in the United States similar to the way Microsoft was examined in the 1990s. The Federal Trade Commission is reportedly considering a “broad antitrust investigation” of Google, according to Bloomberg. Accusations of monopolistic practices have been lobbed at Google for several years, but now they are reaching a fever pitch following the European Commission’s launch of an antitrust investigation of Google in November. Microsoft recently announced it would join the antitrust complaint against Google in Europe.




Antitrust concerns have also been raised over the search giant’s plans for expanding Google Books and the company’s intent to purchase ITA, a flight data aggregation company. The FTC is reportedly waiting on the Department of Justice to consider possible antitrust implications from Google’s ITA deal before launching a broader investigation. The DOJ in January was said to be preparing to file an antitrust challenge against Google, but that threat has yet to materialize.


Beyond search
Google owns about 66 percent of the U.S. search market, according to the latest numbers from comScore. But Google has moved beyond search to become a dominant player in mobile phones with the Android operating system. The company is also hoping to create the largest digitized library of the world’s books that is completely searchable with Google Books. Google is working on a music retail service to compete with Apple’s iTunes, and a recent report by the Guardian said Google’s YouTube is hoping to become “the home of live sports broadcasting online” for major North American sports leagues, such as the NBA and NHL.

Add to all that the massive amounts of user data Google has on its servers, and it’s no surprise that regulators want to consider reining in the company.

Radical energy savings method 7: Bury heat in the earth
In warmer regions, free cooling may not be practical all year long. Iowa, for example, has moderate winters but blistering summers, with air temperatures in the 90- and 100-degree range, which is unsuitable for air-side economization.




But the ground often has steady, relatively low temperatures, once you dig down a few feet. The subsurface earth is also less affected by outdoor weather conditions such as rain or heat that can overload traditional equipment. By sending pipes into the earth, hot water carrying server-generated heat can be circulated to depths where the surrounding ground will usher the heat away by conduction.

Again, the technology is not rocket science, but geothermal cooling does require a fair amount of pipe. A successful geothermal installation also requires careful advance analysis. Because a data center generates heat continuously, pumping that heat into a single earth sink could lead to local saturation and a loss of cooling. An analysis of ground capabilities near the data center will determine how much a given area can absorb, whether heat-transfer assistance from underground aquifers will improve heat dissipation, and what, if any, environmental impacts might ensue.
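That advance analysis ultimately rests on conduction physics. A back-of-envelope sketch using Fourier's law follows; the soil conductivity, contact area, temperature difference, and conduction distance are entirely illustrative values, not figures from any real installation:

```python
# Fourier's law for steady-state conduction: Q = k * A * dT / d
k = 1.5        # W/(m*K), a typical thermal conductivity for moist soil
area = 2000.0  # m^2, assumed total pipe-to-ground contact surface
dT = 20.0      # K, water temperature minus undisturbed ground temperature
d = 0.5        # m, assumed effective conduction distance into the soil

q_watts = k * area * dT / d
print(f"Steady-state heat rejection: {q_watts / 1000:.0f} kW")  # 120 kW
```

The sobering part is the trend: as the surrounding soil warms, dT shrinks and so does Q, which is exactly the local-saturation risk the text describes.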

Speaking of Iowa, the ACT college testing nonprofit deployed a geothermal heat sink for its Iowa City data center. Another Midwestern company, Prairie Bunkers near Hastings, Neb., is pursuing geothermal cooling for its Data Center Park facility, converting several 5,000-square-foot ammo bunkers into self-contained data centers.

Radical energy savings method 8: Move heat to the sea via pipes
Unlike geothermal heat sinks, the ocean is effectively an infinite heat sink for data center purposes. The trick is being near one, but that is more likely than you might think: Any sufficiently large body of water, such as the Great Lakes between the United States and Canada, can serve as a coolant reservoir.

The ultimate seawater cooling scenario is a data center island, which could use the ocean in the immediate area to cool the data center using sea-to-freshwater heat exchangers. The idea is so good that Google patented it back in 2007. Google’s approach falls far afield of the objectives in this article, however, since the first step is to either acquire or construct an island.

But the idea isn’t so farfetched if you’re already located reasonably close to an ocean shore, large lake, or inland waterway. Nuclear plants have used sea and lake water cooling for decades. As reported in Computer Sweden (Google’s English translation) last fall, Google took this approach for its Hamina, Finland, data center, a converted paper pulp mill. Using chilly Baltic Sea water as the sole means to cool its new mega data center, as well as to supply water for emergency fire protection, demonstrates a high degree of trust in the reliability of the approach. The pulp mill has an existing water inlet from the Baltic, with two-foot-diameter piping, reducing the project’s implementation costs.

Freshwater lakes have been used successfully to cool data centers. Cornell University’s Ithaca, N.Y., campus uses water from nearby 2.5-trillion-gallon Cayuga Lake to cool not just its data centers but the entire campus. The first-of-its-kind cooling facility, called Lake Source Cooling and built in 2000, pumps 35,000 gallons per hour, distributing water at 39 degrees Fahrenheit to campus buildings located 2.5 miles away.
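From the published flow rate you can roughly estimate the heat such a system carries away, since Q = mass flow x specific heat x temperature rise. The 16-degree supply-to-return rise below is an assumed figure, not one published by Cornell:

```python
GAL_TO_KG = 3.785        # 1 US gallon of water is roughly 3.785 kg
C_WATER = 4186.0         # specific heat of water, J/(kg*K)

flow_kg_s = 35_000 * GAL_TO_KG / 3600   # 35,000 gallons per hour, as quoted
delta_t_k = 16 * 5 / 9                  # assumed 16 F rise, converted to kelvins

q_watts = flow_kg_s * C_WATER * delta_t_k
print(f"~{q_watts / 1e6:.1f} MW of heat removal")
```

Even with a modest assumed temperature rise, the quoted flow rate corresponds to megawatt-scale cooling, which is why one intake can serve an entire campus.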

Both salt- and freshwater cooling systems require one somewhat expensive component: a heat exchanger to isolate natural water from the water used to directly chill the data center. This isolation is necessary to protect both the environment and sensitive server gear, should a leak occur in the system. Beyond this one expensive component, however, sea (and lake) water cooling requires nothing more complex than ordinary water pipe.

How much money do you want to save?
The value of these techniques is that none are mutually exclusive: You can mix and match cost saving measures to meet your short-term budget and long-term objectives. You can start with the simple expedient of raising the data center temperatures, then assess the value of other techniques in light of the savings you achieve with that first step.

Radical energy savings method 5: Use SSDs for highly active read-only data sets
SSDs have been popular in netbooks, tablets, and laptops due to their speedy access times, low power consumption, and very low heat emissions. They’re used in servers, too, but until recently their costs and reliability have been a barrier to adoption. Fortunately, SSDs have dropped in price considerably in the last two years, making them candidates for quick energy savings in the data center — provided you use them for the right application. When employed correctly, SSDs can knock a fair chunk off the price of powering and cooling disk arrays, with 50 percent lower electrical consumption and near-zero heat output.




One problem SSDs haven’t licked is the limited number of write operations, currently around 5 million writes for the single-level-cell (SLC) devices appropriate for server storage. Lower-cost consumer-grade multilevel-cell (MLC) components have higher capacities but one-tenth of SLCs’ endurance.
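Taking the article's 5-million-write endurance figure at face value, a rough lifetime estimate shows why endurance mainly worries write-heavy workloads. The drive capacity and the daily write volume below are assumed values for illustration:

```python
capacity_gb = 400                  # illustrative SLC drive capacity
cycles_per_cell = 5_000_000        # endurance figure quoted in the text
daily_writes_gb = 2_000            # assumed sustained daily write load

# Wear leveling spreads writes evenly, so usable endurance scales
# with capacity: total writable data = capacity * cycles per cell.
total_write_capacity_gb = capacity_gb * cycles_per_cell
lifetime_days = total_write_capacity_gb / daily_writes_gb
print(f"Estimated wear-out horizon: {lifetime_days:,.0f} days")
```

For a mostly read-only archive the daily write volume is near zero, so the wear-out horizon stretches far beyond the drive's useful life, which is the point of the recommendation above.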

The good news about SSDs is that you can buy plug-compatible drives that readily replace your existing power-hungry, heat-spewing spinners. For a quick power reduction, replace large primarily read-only data sets, such as streaming video archives, with SSD. You won’t encounter SSD wear-out problems, and you’ll gain an instant performance boost in addition to reduced power and cooling costs.

Go for drives specifically designed for server, rather than desktop, use. Such drives typically have multichannel architectures to increase throughput. The most common interface is SATA 2.0, with 3Gbps transfer speeds. Higher-end SAS devices, such as the Hitachi/Intel Ultrastar SSD line, can achieve 6Gbps speeds, with capacities up to 400GB. Although SSD devices have encountered some design flaws, these have been primarily in desktop and laptop drives, involving BIOS passwords and encryption, factors not involved in servers’ storage devices.

Do plan to spend some brain cycles monitoring usage on your SSDs, at least initially. Intel and other SSD makers provide analysis tools that track read and write cycles, as well as write failure events. SSD controllers automatically remap writes to even out wear across a device, a process called wear leveling, which can also detect and recover from some errors. When significant write failures begin occurring, it’s time to replace the drive.
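Those vendor tools generally report wear through standard SMART attributes, which you can also read yourself with a utility such as smartmontools' `smartctl`. The sketch below parses the attribute table `smartctl -A` prints; the attribute names checked ("Media_Wearout_Indicator", "Wear_Leveling_Count") are vendor-specific examples, and the sample output is illustrative, not from a real drive.

```python
# Sketch: extract SSD wear indicators from `smartctl -A` style output.
# Attribute names vary by vendor; the two below are illustrative examples.
SAMPLE_SMARTCTL_OUTPUT = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      RAW_VALUE
  5 Reallocated_Sector_Ct   0x0032   100   100   000    Old_age   4
233 Media_Wearout_Indicator 0x0032   097   097   000    Old_age   0
"""

WEAR_ATTRIBUTES = {"Media_Wearout_Indicator", "Wear_Leveling_Count"}

def wear_report(smartctl_text):
    """Return {attribute_name: normalized_value} for wear-related attributes.

    The normalized VALUE column starts high (typically 100) on a new drive
    and counts down toward the THRESH column as the flash wears out.
    """
    report = {}
    for line in smartctl_text.splitlines():
        fields = line.split()
        if len(fields) >= 4 and fields[1] in WEAR_ATTRIBUTES:
            report[fields[1]] = int(fields[3])  # normalized VALUE column
    return report

print(wear_report(SAMPLE_SMARTCTL_OUTPUT))
```

A cron job that runs something like this and alerts when the normalized value nears the threshold gives you the early warning the article recommends.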

Radical energy savings method 6: Use direct current in the data center
Yes, direct current is back. This seemingly fickle energy source enjoys periodic resurgences as electrical technologies ebb and flow. The lure is a simple one: Servers use direct current internally, so feeding that power to them directly should reap savings by eliminating the AC-to-DC conversion performed by a server’s internal power supply.

Direct current was popular in the early 2000s because the power supplies in servers of that era had AC-to-DC conversion efficiencies as low as 75 percent. But then power supply efficiencies improved, and data centers shifted to the more efficient 208-volt AC distribution. By 2007, direct current had fallen out of favor. InfoWorld even counted it among the myths in our 2008 article “10 power-saving myths debunked.” Then in 2009 direct current bounced back, owing to the introduction of high-voltage data center products.

In the earliest data centers, utility-supplied 16,000 VAC (volts of alternating current) electricity was first converted to 440 VAC for routing within a building, then to 220 VAC, and finally to the 110 VAC used by the era’s servers. Each conversion wasted power by dint of being less than 100 percent efficient, with the lost power being cast off as heat (which had to be removed by cooling, incurring yet more power expense). The switch to 208 VAC eliminated one conversion, and with in-server power supplies running at 95 percent efficiency, there was no longer much to gain.
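Because each stage's losses multiply, the arithmetic behind this history is easy to sketch. The stage efficiencies below are illustrative assumptions (97 percent per transformer stage, plus the 75 or 95 percent server PSUs the article cites), not measured figures:

```python
# Sketch: cumulative efficiency of power conversions chained in series.
# Losses multiply, so eliminating even one stage pays off.
def chain_efficiency(stage_efficiencies):
    """Overall fraction of power delivered after all conversions."""
    total = 1.0
    for eff in stage_efficiencies:
        total *= eff
    return total

# Early-era path: 16kV -> 440V -> 220V -> 110V, then a 75%-efficient PSU.
legacy = chain_efficiency([0.97, 0.97, 0.97, 0.75])  # roughly 68% delivered
# Modern 208 VAC path with one fewer stage and a 95%-efficient PSU.
modern = chain_efficiency([0.97, 0.97, 0.95])        # roughly 89% delivered
print(f"legacy: {legacy:.2f}, modern: {modern:.2f}")
```

With the modern chain already delivering around 89 percent of utility power, the remaining headroom for a DC scheme is real but modest — which is exactly the article's point.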

But 2009 brought a new line of data center equipment that could convert 13,000 VAC utility power directly to 575 VDC (volts of direct current), which can then be distributed directly to racks, where a final step-down converter takes it to 48 VDC for consumption by servers in the rack. Each conversion is about twice as efficient as older AC transformer technology and emits far less heat. Although vendors claim as much as a 50 percent savings when electrical and cooling reductions are combined, most experts say that 25 percent is a more credible number.

This radical approach does require some expenditure on new technology, but the technologies involved are not complex and have been demonstrated to be reliable. One potential hidden cost is the heavier copper cabling required for 48 VDC distribution. As Joule’s Law dictates, lower voltages require heavier conductors to carry the same power as higher voltages, due to higher amperage. Another cost factor with DC distribution is the higher voltage drop incurred over distance (about 20 percent per 100 feet), compared to AC. This is why the 48 VDC conversion is done in the rack rather than back at the utility power closet.
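The cabling penalty falls straight out of Joule's Law: resistive loss is I²R, and a 48 V feed carries about 12 times the current of a 575 V feed for the same power. A minimal sketch, using standard copper resistivity and an assumed 5 kW load over a 30-metre (roughly 100-foot) run of 25 mm² cable — both figures chosen for illustration:

```python
# Sketch: voltage drop in a round-trip copper run, via V = I * R and
# R = resistivity * length / cross_section. Shows why 48 VDC is
# converted in the rack rather than distributed across the room.
COPPER_RESISTIVITY = 1.68e-8  # ohm-metres at 20 C

def voltage_drop_percent(volts, watts, length_m, wire_area_mm2):
    """Percent of supply voltage lost in a two-conductor copper run."""
    current = watts / volts
    # Current flows out and back, so use 2x the one-way length.
    resistance = COPPER_RESISTIVITY * (2 * length_m) / (wire_area_mm2 * 1e-6)
    return 100 * current * resistance / volts

# Same 5 kW load, same cable, same 30 m run:
print(voltage_drop_percent(48, 5000, 30, 25))   # several percent lost at 48 V
print(voltage_drop_percent(575, 5000, 30, 25))  # negligible at 575 V
```

The 48 V run loses on the order of a hundred times more (as a fraction of supply voltage) than the 575 V run, since loss scales with the square of the voltage ratio — hence heavy cable, short runs, and in-rack conversion.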

Of course, converting to direct current requires that your servers can accommodate 48 VDC power supplies. For some, converting to DC is a simple power supply swap. Chassis-based servers, such as blade servers, may be cheaper to convert because many servers share a single power supply. Google used the low-tech expedient of replacing server power supplies with 12V batteries, claiming 99 percent efficiency over a traditional AC-powered UPS (uninterruptible power supply) infrastructure.

If you’re planning a server upgrade, you might want to consider larger systems that can be powered directly from 575 VDC, such as IBM’s Power 750, which, as the hardware behind Watson, recently demolished human competitors on the “Jeopardy” game show. Brand-new construction enjoys the advantage of starting with a clean sheet of paper, as Syracuse University did when building out a data center last year, powering IBM Z and Power mainframes with 575 VDC.
