Red Hat is aiming for lowered latency and improved performance with the second update of the year to its real-time Linux platform.
The Linux vendor’s new MRG 1.2 (short for Messaging, Real-Time, Grid) release includes new tools and improved technology designed to enhance the OS’s value for customers who rely on real-time performance.
A real-time operating system differs from other OSes in that its overall latency is lower, and system actions occur at deterministic intervals. Providing deterministic real-time performance is all about enabling actions to occur within the same amount of time, every time — a feature that is critical for Red Hat’s (NYSE: RHT) U.S. military, financial services and telco customers.
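To make the idea of determinism concrete, here is an illustrative sketch (not Red Hat’s benchmark code) that measures the jitter of a periodic 1 ms sleep. Determinism means the worst-case wakeup latency stays close to the average; on a stock kernel it can spike unpredictably. A real deployment would also request a real-time scheduling priority (e.g., SCHED_FIFO), which requires root and is omitted here.

```python
# Illustrative sketch: measure wakeup jitter around a periodic 1 ms sleep.
# On a real-time kernel, the gap between average and worst-case latency
# is far tighter than on a standard kernel.
import time

def measure_jitter(period_s=0.001, iterations=200):
    """Return (average, worst-case) wakeup latency in nanoseconds."""
    latencies = []
    for _ in range(iterations):
        start = time.monotonic_ns()
        time.sleep(period_s)
        elapsed = time.monotonic_ns() - start
        # Latency = how much later than the requested period we woke up.
        latencies.append(elapsed - int(period_s * 1e9))
    return sum(latencies) / len(latencies), max(latencies)

if __name__ == "__main__":
    avg, worst = measure_jitter()
    print(f"average wakeup latency: {avg:.0f} ns, worst case: {worst} ns")
```

On a deterministic system the two numbers printed would be close together; a large gap between them is exactly the kind of non-determinism a real-time OS is designed to eliminate.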
With MRG 1.2, Red Hat is publishing new performance benchmarks for InfiniBand interconnects.

“On both throughput and the latency results we’re seeing somewhere on the order of a 50 percent increase in throughput and a reduction in latency compared to some of the previous benchmarks we’ve published,” Bryan Che, product manager for Red Hat MRG, told InternetNews.com.
Targeting KVM virtualization performance and hardware latency
Real time also gets a boost on virtualized instances in the new MRG 1.2 release. Che said that on a 10 Gigabit Ethernet connection with a KVM virtualization hypervisor, MRG 1.2 is able to achieve performance of over one million messages per second throughput.
At that speed, Che said that real-time virtualization performance is within 5 percent of the bare-metal performance.
“How close we’re able to get to bare-metal performance, running completely virtualized, is pretty astounding compared to what you typically see with virtualization overhead,” Che said.
Red Hat has been concentrating on the KVM technology as part of its Red Hat Enterprise Linux 5.4 release — and as part of its drive away from relying on the Xen hypervisor.
Che noted that with Xen, he’d expect real-time virtualization overhead in the double-digit percentages, significantly worse than what KVM is providing.
As part of the MRG 1.2 release, Red Hat is also providing a new open source tool called RTeval, which identifies hardware latency. Previous versions of MRG included software tools that identified areas of application latency.
Now, with the new tool, users can pinpoint where latency occurs in hardware, helping them troubleshoot system bottlenecks and correct them to improve the performance of their MRG deployment.
“With our customers that run real-time, a big source of latency is not just from the operating system but also from certified hardware,” Che said.
Moving forward, Che said that Red Hat’s plan is to deliver two updates to MRG every year.
“At a high-level on our roadmap, we’ll always go after performance,” Che said. “But there are other areas we’re looking at.”
Real-time MRG versus real-time mainline kernel
Red Hat’s MRG work is an outgrowth of a broader effort among Linux backers to provide real-time capabilities in the mainline Linux kernel — an initiative to which Red Hat has been contributing since at least 2006.
Yet while much of the initial real-time technology that Red Hat was working on is now in the mainline Linux kernel, Che noted there are still some differences between the two releases.
One key item that is not yet in the mainline kernel has to do with full kernel pre-emption. Che noted that a key difference between the standard kernel and the real-time kernel is that the MRG real-time kernel is fully pre-emptive, which “means that the real-time kernel can always immediately respond to something, whether it’s a request or an interrupt, and make a decision on what to do with it,” he said.
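As a hedged illustration of the distinction, the sketch below (not a Red Hat tool) checks which preemption model the running kernel advertises. Fully pre-emptive PREEMPT_RT kernels expose the file /sys/kernel/realtime and typically tag their version string; stock kernels do not.

```python
# Illustrative sketch: report the running kernel's preemption model.
import os
import platform

def kernel_preemption_model():
    """Return a short description of the kernel's preemption model."""
    # PREEMPT_RT (real-time) kernels expose this sysfs file.
    if os.path.exists("/sys/kernel/realtime"):
        return "fully pre-emptive (real-time kernel)"
    # Fall back to the version string, where RT kernels usually
    # advertise themselves as well.
    version = platform.version()
    if "PREEMPT_RT" in version or "PREEMPT RT" in version:
        return "fully pre-emptive (real-time kernel)"
    if "PREEMPT" in version:
        return "pre-emptible (standard low-latency kernel)"
    return "standard (voluntary or no preemption)"

if __name__ == "__main__":
    print(kernel_preemption_model())
```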
Another feature lacking in the mainline kernel is threaded interrupt requests (IRQs), which Che said “allow us to register hardware device handlers as regular kernel threads.”
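The effect of threaded IRQs is visible in the process table: on a kernel that threads its interrupt handlers, each hardware interrupt is serviced by a kernel thread named “irq/&lt;number&gt;-&lt;device&gt;”. The following sketch (an illustration, not part of MRG) scans /proc for such threads; on a kernel without threaded IRQs, or inside a container with a restricted /proc, the list may simply be empty.

```python
# Illustrative sketch: find kernel threads that serve hardware interrupts
# on a threaded-IRQ kernel (they are named "irq/<number>-<device>").
import os
import re

def threaded_irq_handlers():
    """Return the names of kernel threads serving hardware interrupts."""
    handlers = []
    for pid in os.listdir("/proc"):
        if not pid.isdigit():
            continue
        try:
            with open(f"/proc/{pid}/comm") as f:
                name = f.read().strip()
        except OSError:
            continue  # the thread exited while we were scanning
        if re.match(r"irq/\d+-", name):
            handlers.append(name)
    return sorted(handlers)

if __name__ == "__main__":
    irqs = threaded_irq_handlers()
    print(f"{len(irqs)} threaded IRQ handler(s) found")
```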
Even if those additional real-time features are merged into the mainline kernel, Che said there would still likely be differences in what MRG provides.
Most notable is the fact that the real-time kernel is compiled differently than the standard kernel. Doing so trades some overall throughput for better latency and determinism — key selling points for a real-time OS, Che said.