The Heat Is Off
Data centers keep cool
Context: Server computers are housed by the thousands in buildings called data centers. All those servers humming together generate a vast amount of heat, which can make them malfunction. A 30,000-square-foot center can run up a bill of $8 million a year just for cooling, so data centers’ floor plans are designed to make cooling more efficient. The servers are also governed by algorithms that optimally distribute work to reduce the total amount of power used. These methods, however, do not account for temperature variations across a data center. Now researchers from Duke University and Hewlett-Packard Labs have shown that assigning tasks to servers based on such variations can slash cooling costs.
Methods and Results: When temperature in a data center isn’t uniform, energy is wasted cooling the entire room just to keep machines in hot spots from overheating. Justin Moore and colleagues designed two algorithms that avoid creating local hot spots. One algorithm gives a server less work as its surroundings get hotter. The other surveys the entire data center and assigns fewer tasks to servers more prone to recirculate hot air.
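To make the idea concrete, here is a minimal Python sketch of temperature-aware placement; the inverse-temperature weighting, the 35-degree ceiling, and the temperature readings are illustrative assumptions, not the researchers’ actual algorithm.

# Sketch of temperature-aware workload placement (illustrative only).
# Each server reports its inlet temperature; hotter servers get a smaller
# share of incoming tasks, so local hot spots are not reinforced.

def assign_tasks(tasks, servers, max_temp_c=35.0):
    """Split tasks among servers in proportion to their thermal headroom.

    servers: dict mapping server name -> current inlet temperature (deg C).
    Servers at or above max_temp_c receive no new work. The weighting is a
    simplifying assumption, not the published placement algorithm.
    """
    headroom = {s: max(max_temp_c - t, 0.0) for s, t in servers.items()}
    total = sum(headroom.values())
    if total == 0:
        raise RuntimeError("All servers are too hot to accept work")
    # Rounding may leave a task or two unassigned; good enough for a sketch.
    return {s: round(len(tasks) * h / total) for s, h in headroom.items()}

print(assign_tasks(range(100), {"rack1": 22.0, "rack2": 28.0, "rack3": 34.0}))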
Why It Matters: Moore and colleagues’ work could drop the costs of doing business and keep servers from crashing. Computer models show that using the algorithms could reduce cooling costs by 25 percent, an annual savings of $1 million to $2 million for a large center. Data centers power today’s Internet economy; with the new algorithms, they would be more reliable and use fewer resources.
Source: Moore, J., et al. 2005. Making scheduling “cool”: temperature-aware workload placement in data centers. Paper presented at USENIX Annual Technical Conference. April 10-15. Anaheim, CA.
Speedy Security
Making safe data transmission faster
Context: Many software companies use encryption to protect their programs from tampering or copying, but even those protections can be circumvented by a hacker who’s skilled and motivated enough. In a conventional computer, protected software is decoded and stored in memory until the processor calls for it; hackers can tap into decoded instructions as they move from memory to the processor by listening to the channel between the two. Safeguards exist – namely, the XOM (execute-only memory) processor, which keeps information encrypted until it gets to the processor – but systems that use them are painfully slow.
Methods and Results: The bottleneck in most XOM systems is the decryption procedure: encrypted instructions are first fetched from memory, then decoded, then executed. Jun Yang, an assistant professor of computer science at the University of California, Riverside, and colleagues at Riverside and the University of Texas at Dallas use a security scheme called a one-time pad, which lets decryption begin before the data arrives. The new procedure fetches data and generates the decryption pad in parallel, so that the processor can act on instructions almost as soon as they arrive. In a simulation, the extra time needed for decryption dropped from 20.8 percent of the computation time in current XOM processors to a mere 1.3 percent.
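A simplified Python sketch of why this helps appears below; the hash-based pad generator, the key, and the addresses are stand-ins chosen for illustration, not the processor’s real cipher.

# Sketch of one-time-pad-style memory decryption (illustrative only).
# The pad depends only on the key, the block address, and a counter, so it
# can be generated while the encrypted block is still in flight; the actual
# decryption is then a single XOR the moment the data arrives.

import hashlib

def make_pad(key: bytes, address: int, counter: int, length: int) -> bytes:
    """Derive a pseudo-random pad; a real processor would use a block cipher."""
    seed = key + address.to_bytes(8, "big") + counter.to_bytes(8, "big")
    pad = b""
    while len(pad) < length:
        pad += hashlib.sha256(seed + len(pad).to_bytes(4, "big")).digest()
    return pad[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"secret-key"
plaintext = b"mov eax, 42"            # an instruction block in the clear
pad = make_pad(key, address=0x4000, counter=1, length=len(plaintext))
ciphertext = xor(plaintext, pad)      # what actually sits in memory

# On a fetch, the pad for (address, counter) is computed in parallel with the
# memory access, and the XOR happens as soon as the ciphertext lands.
assert xor(ciphertext, pad) == plaintext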
Why It Matters: Until now, the XOM fix caused a performance slowdown of as much as 42 percent. For some applications, like ATMs and other financial systems, it’s worth the cost. But for interactive applications like video games, sluggish response times – reminiscent of surfing the Internet in the early days – simply are not acceptable. Yang and colleagues’ technique faces sizable hurdles to adoption: devices will need updated software and new processors with extra on-chip memory. Nevertheless, the researchers’ method for improving the performance of encrypted software might be the breakthrough required to produce systems that are both secure and fast.
Source: Yang, J., et al. 2005. Improving memory encryption performance in secure processors. IEEE Transactions on Computers 54:630-640.
Timing Text
Mobile messaging on cue
Context: Some text messages, like birthday wishes or driving directions, make sense only in particular contexts. But cell phones send messages immediately, not when they are most timely. Younghee Jung, Per Persson, and Jan Blom of Nokia have now designed cell-phone software that lets senders dictate when and where their text messages should be delivered.
Methods and Results: Cell phones already track what time it is, where they are, and who has called recently. Jung and colleagues wrote software that monitors this information and withholds messages until certain delivery conditions are met. They designed a user interface that lets senders choose conditions such as time of day or a recipient’s location. Finally, they loaded the software onto phones, gave them to seven Finnish teens, and monitored their use over several weeks.
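A rough Python sketch of such condition-based withholding follows; the condition fields and the cell-ID location check are simplifying assumptions for illustration, not Nokia’s implementation.

# Sketch of context-enhanced message delivery (illustrative only).
# A message is held on the recipient's phone until its delivery conditions
# (a time-of-day window and/or a target location) are satisfied.

from dataclasses import dataclass
from datetime import datetime, time
from typing import Optional

@dataclass
class PendingMessage:
    text: str
    deliver_after: Optional[time] = None   # e.g. time(8, 0) for 8 a.m.
    target_cell: Optional[str] = None      # e.g. a cell ID near a rendezvous point

def ready_to_deliver(msg: PendingMessage, now: datetime, current_cell: str) -> bool:
    """Return True once every condition attached to the message is met."""
    if msg.deliver_after is not None and now.time() < msg.deliver_after:
        return False
    if msg.target_cell is not None and current_cell != msg.target_cell:
        return False
    return True

msg = PendingMessage("Happy birthday!", deliver_after=time(8, 0))
print(ready_to_deliver(msg, datetime(2005, 4, 2, 9, 30), current_cell="cell-123"))  # True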
More than 10 percent of all sent messages used this “context-enhanced” delivery. Just over half of these were triggered by the recipient’s location – in, say, the vicinity of a common rendezvous point. Most of the rest specified when a message should be delivered, and many were timed to reach friends when they were in between known engagements.
Why It Matters: Context-specific delivery could change the way people use cell phones. However, the change could be as much a curse as a benefit. Although the teens’ biggest complaint was that they couldn’t be sure friends received their messages, the researchers haven’t yet identified a way to verify delivery that protects the recipient’s privacy. If a phone sends a confirmation when a message is read, it will also reveal where the recipient is. Vendors might also be tempted to send messages that would be delivered as users neared their shops’ locations, creating a boom in text-message spam.
Source: Jung, Y., et al. 2005. DeDe: design and evaluation of a context-enhanced mobile messaging system. Paper presented at Conference on Human Factors in Computing Systems. April 2-7. Portland, OR.
Smoothing Out Speech
Internet phones get clearer
Context: People trying to converse over wireless local-area networks (WLANs) – using them to connect to voice-over-Internet-protocol, or VoIP, systems – are often confounded by the poor quality of the transmission. Those frustrations may soon clear up, thanks to researchers at the University of California, Santa Barbara, who report a method to improve the clarity of VoIP conversations.
Methods and Results: Information is sent over a WLAN in units called packets. But wireless signals can deteriorate over distance, interfere with each other, or otherwise introduce errors into the packets. If that happens, the transmission standard IEEE 802.11 requires that the packets be re-sent. The subsequent delay garbles real-time communication. While zero error tolerance makes sense for e-mail, it might be too strict a standard for voice: Ian Chakeres and colleagues have shown that digitized voice data can suffer some errors without degrading call quality. The researchers used a computer simulation of a network to test reliability. They then combined various network layouts, hardware settings, and traffic scenarios with different levels of permitted packet error and charted the resulting voice drop-out. This showed them which combination tolerated the most packet errors while still preserving conversational quality.
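A toy Python sketch of the trade-off follows; the error threshold, packet size, and channel noise are assumptions chosen for illustration, not the study’s parameters.

# Sketch of tolerating bit errors in voice packets (illustrative only).
# Instead of forcing a retransmission on any error, a packet is accepted if
# its bit-error fraction stays below a threshold, trading a little audio
# fidelity for much lower delay.

import random

def accept_packet(bit_errors: int, packet_bits: int, max_error_rate: float = 0.01) -> bool:
    """Accept the packet if its bit-error rate is tolerable for voice."""
    return bit_errors / packet_bits <= max_error_rate

random.seed(0)
packet_bits = 1600                      # a small voice frame
resends_strict = 0
resends_tolerant = 0
for _ in range(1000):
    errors = sum(random.random() < 0.001 for _ in range(packet_bits))  # noisy channel
    if errors > 0:
        resends_strict += 1             # 802.11 default: any error forces a resend
    if not accept_packet(errors, packet_bits):
        resends_tolerant += 1           # error-tolerant policy resends far less often

print(resends_strict, resends_tolerant)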
Why It Matters: Users tap into WLANs via handheld devices with radio connections. Because of its ease, low cost, and mobility, voice transmission over WLAN is becoming more common, and video transmission is following suit. But current technology can’t yet deliver smooth, clear voice or video communications, a drawback that keeps consumers from adopting it. The researchers’ method could help bring WLAN voice, video, and multimedia into the mainstream.
Source: Chakeres, I., et al. 2005. Allowing bit errors in speech over wireless LANs. Computer Communications (in press).