High-End Buses Wrestle with Speed, Slots and Silicon

Outsourcing of embedded computer designs means bus-based computer boards are more popular than ever. Looking to the future, our expert John Collins untangles the bridging issues facing VME and CompactPCI.

Thanks to a worldwide trend toward outsourcing complex embedded computer designs, bus-based computer boards are more popular than ever. Applications ranging from industrial control to defence systems to telecoms are embedding ever more powerful bus-based computers.

Dominating the choice of high-end bus architectures are VME and CompactPCI. Over the past five years these two buses have fought an eventful popularity contest, and the hurdles faced by each stem from their respective ages. For its part, the 18-year-old VME has been fighting its old age by increasing its throughput. Meanwhile, CompactPCI has wrestled with the growing pains of a young bus architecture. Unlike bus wars of the past, the VME/CompactPCI bus war isn't a fight between sets of board vendors; in fact, all of the top VME board vendors offer both CompactPCI and VME board-level products. As a result, the rivalry between CompactPCI and VME gets its voice from the buses' two respective standards bodies: the VME Trade Association (VITA) and the PCI Industrial Computer Manufacturers Group (PICMG, the body responsible for crafting CompactPCI specifications).

Interestingly, the key drawbacks of CompactPCI and VME are the inverse of one another. To avoid becoming obsolete, VME needs to reach the next level of speed, while its 21-slot count is sufficient for the most demanding system designs. In contrast, CompactPCI offers performance up to PCI's 133MHz maximum rate, but its slot count is limited to eight board slots per system. Worse, those eight slots are possible only at 33MHz, and the slot count drops at the faster PCI rates of 66MHz and 133MHz. Fortunately, board vendors are addressing both VME's and CompactPCI's roadblocks with innovative interface ICs.
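The speed/slot tradeoff can be put in rough numbers. The sketch below uses the CompactPCI slot counts quoted in this article; the bus widths (32-bit at 33MHz, 64-bit at the faster rates) are assumptions on my part, and the figures are theoretical burst rates, not sustained throughput.

```python
# Rough peak-bandwidth vs. slot-count tradeoff for CompactPCI.
# Slot counts are those quoted in the article; bus widths are assumed.

def peak_bandwidth_mb(clock_mhz, bus_bytes):
    """Theoretical peak burst rate in MB/s (clock x bus width)."""
    return clock_mhz * bus_bytes

# (clock in MHz, bus width in bytes, slots per segment)
configs = [
    (33, 4, 8),    # 32-bit PCI at 33 MHz: 8 CompactPCI slots
    (66, 8, 5),    # 64-bit PCI at 66 MHz: 5 slots
    (133, 8, 1),   # 64-bit at 133 MHz: effectively point-to-point
]

for clock, width, slots in configs:
    print(f"{clock:>3} MHz, {width * 8}-bit: "
          f"{peak_bandwidth_mb(clock, width):>4} MB/s peak, {slots} slot(s)")
```

The pattern is clear: each doubling of the clock roughly halves the electrical budget for slots, which is why bridging (discussed below) matters so much to CompactPCI.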


Over its history, the VME community has steadily evolved the specification, and over the past few years interest has turned to the new, high-speed VME protocols under development. First there was 2eVME, a protocol that increases VME bandwidth to 160MB/s. Next came 2eSST, which doubles that bandwidth to 320MB/s (VME320), with potential for 533MB/s and perhaps even 1GB/s.

While the VITA technical committees have already developed the specifications for the more recent speed-ups to VME, those new protocols have yet to see wide deployment. Two factors are behind this. First, there's a lack of off-the-shelf VME interface silicon supporting the newer protocols. Second, chip makers have questioned the market demand for faster VME. Those doubts about demand are eroding, however, as faster CPUs and faster I/O interfaces hit the mainstream.

Most VME boards on the market today sport VME interface silicon from Tundra Semiconductor Corp., but that firm has reported no plans to support the next level of VME standards beyond 80MB/s. To fill the vacuum, board vendors Cetia Inc. and General Micro Systems are developing VME chips that integrate the 2eVME and 2eSST (two-edge source synchronous transfer) protocols and can accommodate data rates of 320MB/s and beyond.

PCI, the electrical scheme on which CompactPCI is based, suffers from a vexing slot-count limitation. Electrical loading constraints limit it to four slots at 33MHz, and fewer at higher speeds. PCI's embedded-board cousin, CompactPCI, extends that limit to eight slots at 33MHz and five slots at 66MHz; at 133MHz, PCI becomes merely a point-to-point interconnect. One way around those limits is to use PCI-to-PCI bridge silicon to link multiple PCI bus segments together, and over the years PCI-to-PCI bridging has become an accepted and well-understood approach.

In certain applications there's an advantage to being able to hide the PCI devices located behind a bridge, and non-transparent bridges do just that. A non-transparent bridge presents everything behind it, including the bridge itself, as a single PCI device. When a host CPU polls for PCI devices and finds a non-transparent bridge, it treats it as one PCI device and uses one device driver. Everything behind the bridge is hidden from the host CPU and is under the control of a local I/O processor.
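The effect on enumeration can be sketched with a toy model. The device tree and names below are hypothetical, and real enumeration walks PCI configuration space rather than Python objects; the point is only that the host's scan recurses through transparent bridges but stops at a non-transparent one.

```python
# Toy model of PCI bus enumeration with transparent and
# non-transparent bridges. Device names are hypothetical.

class Device:
    def __init__(self, name, children=(), transparent=True):
        self.name = name
        self.children = list(children)
        self.transparent = transparent  # False => non-transparent bridge

def enumerate_devices(dev):
    """Return the device names visible to the host CPU's scan."""
    seen = [dev.name]
    if dev.transparent:
        # Host recurses through transparent bridges as usual.
        for child in dev.children:
            seen += enumerate_devices(child)
    # Non-transparent: children stay hidden, managed by a local
    # I/O processor rather than the host.
    return seen

io_segment = Device("ntb", transparent=False,
                    children=[Device("dsp0"), Device("dsp1"), Device("nic")])
root = Device("host-bridge", children=[Device("scsi"), io_segment])

print(enumerate_devices(root))  # ['host-bridge', 'scsi', 'ntb']
```

The host loads one driver for "ntb" and never sees the DSPs or NIC behind it, which is exactly the isolation the article describes.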

Force Computers recently announced a PCI-to-PCI bridge chip that can operate as either a transparent or a non-transparent bridge, although as a board vendor Force Computers has no plans to make the chip available on the general market. Called the Sentinel, the chip supports MSI (message signalled interrupts) for advanced interrupt handling, along with hardware support for Intelligent I/O communication. Using a Sentinel chip eliminates the need to use different CompactPCI boards in system slots than in I/O slots.


Hoping to leverage switch-fabric, channel-I/O technology, the embedded-board/bus community has tried to get involved early in emerging specifications like InfiniBand. In separate efforts, both VITA and PICMG have plans under way to include InfiniBand in their VME and CompactPCI board specifications. Meanwhile, the two groups are teaming up to craft mechanical specifications for InfiniBand. The InfiniBand interconnect uses a 2.5Gbit/s wire-speed connection with one-, four- or twelve-wire link widths, implying throughput ranging from 500MB/s (a one-wire link) to 6GB/s (a twelve-wire link).
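The quoted figures can be reconstructed with a little arithmetic. The sketch below assumes InfiniBand's 8b/10b line coding and counts both directions of the full-duplex link, which is how the 500MB/s and 6GB/s numbers above work out; treat it as back-of-the-envelope, not a spec excerpt.

```python
# Back-of-the-envelope InfiniBand throughput for 1x, 4x and 12x
# link widths. Assumes 8b/10b encoding and counts both directions
# of the full-duplex link.

WIRE_SPEED_GBPS = 2.5   # raw signalling rate per lane, Gbit/s
ENCODING = 8 / 10       # 8b/10b line-coding efficiency
DIRECTIONS = 2          # full duplex, both directions counted

def throughput_mb_per_s(lanes):
    """Aggregate data throughput in MB/s for a given link width."""
    gbits = WIRE_SPEED_GBPS * ENCODING * DIRECTIONS * lanes
    return gbits * 1000 / 8  # Gbit/s -> MB/s (decimal units)

for lanes in (1, 4, 12):
    print(f"{lanes:>2}x link: {throughput_mb_per_s(lanes):.0f} MB/s")
```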

VITA has already formed a task group to develop new architectures for VME-based InfiniBand boards and systems. According to Ray Alderman, executive director at VITA, the first step will be to define the specifications for InfiniBand and create a VME/InfiniBand hybrid. It's likely that PICMG will hang InfiniBand off a set of CompactPCI's available I/O pins. Early on, companies crafting InfiniBand approached PICMG for help with Eurocard packaging.

That relationship was expected to continue but, unfortunately for both VITA and PICMG, the InfiniBand Trade Association decided not to let consortia like VITA and PICMG join its association or receive its technical specifications early. Especially frustrating for the embedded bus-board community are the apparent choices of board format and connector. Both seem to make it intentionally difficult for VME and CompactPCI to leverage InfiniBand without reworking some of those bus architectures' mechanical specifications.


Tom's Hardware guide to bus speed: http://www7.tomshardware.com/mainboard/98q1/980101/

VITA Standards Organisation: http://www.vita.com/vso/stds.html

PICMG – PCI Industrial Computer Manufacturers Group: http://www.picmg.org/

What is InfiniBand architecture? http://developer.intel.com/design/servers/future_server_io/

Copyright: Centaur Communications Ltd and licensors