PCI Express
Year created: 2004
Created by: Intel · Dell · HP · IBM
Supersedes: AGP · PCI · PCI-X
Width in bits: 1–32
No. of devices: One device on each endpoint of each connection. PCI Express switches can create multiple endpoints out of one endpoint, allowing one endpoint to be shared with multiple devices.
Speed: Per lane (each direction):
  • v1.x: 250 MB/s (2.5 GT/s)
  • v2.x: 500 MB/s (5 GT/s)
  • v3.0: 1 GB/s (8 GT/s)
  • v4.0: 2 GB/s (16 GT/s)

16-lane slot (each direction):

  • v1.x: 4 GB/s (40 GT/s)
  • v2.x: 8 GB/s (80 GT/s)
  • v3.0: 16 GB/s (128 GT/s)
Style: Serial
Hotplugging interface: Yes, if ExpressCard, PCI Express ExpressModule, or XQD card
External interface: Yes, with PCI Express External Cabling, such as Thunderbolt
Website: pcisig.com

PCI Express (Peripheral Component Interconnect Express), officially abbreviated as PCIe, is a computer expansion bus standard designed to replace the older PCI, PCI-X, and AGP bus standards. PCIe has numerous improvements over these older standards, including higher maximum system bus throughput, lower I/O pin count and smaller physical footprint, better performance scaling for bus devices, a more detailed error detection and reporting mechanism (Advanced Error Reporting, AER[1]), and native hot-plug functionality. More recent revisions of the PCIe standard support hardware I/O virtualization.

The PCIe electrical interface is also used in a variety of other standards, most notably ExpressCard, a laptop expansion card interface.

Format specifications are maintained and developed by the PCI-SIG (PCI Special Interest Group), a group of more than 900 companies that also maintains the Conventional PCI specifications. PCIe 3.0 is the latest standard for expansion cards that is in production and available on mainstream personal computers.[2][3]

Applications

PCI Express is used in consumer, server, and industrial applications, as a motherboard-level interconnect (to link motherboard-mounted peripherals), as a passive backplane interconnect, and as an expansion card interface for add-in boards.

In virtually all modern PCs, from consumer laptops and desktops to enterprise data servers, the PCIe bus serves as the primary motherboard-level interconnect, connecting the host system processor with both integrated peripherals (surface-mounted ICs) and add-on peripherals (expansion cards). In most of these systems, the PCIe bus co-exists with one or more legacy PCI buses, for backward compatibility with the large body of legacy PCI peripherals.

Architecture

Conceptually, the PCIe bus is a high-speed serial replacement of the older PCI/PCI-X bus,[4] which is an interconnect that uses shared address/data lines.

A key difference between the PCIe bus and the older PCI is the bus topology. PCI uses a shared parallel bus architecture, in which the PCI host and all devices share a common set of address/data/control lines. In contrast, PCIe is based on a point-to-point topology, with separate serial links connecting every device to the root complex (host). Because of its shared bus topology, access to the older PCI bus is arbitrated (in the case of multiple masters) and limited to one master at a time, in a single direction. Furthermore, the older PCI's clocking scheme limits the bus clock to the slowest peripheral on the bus (regardless of the devices involved in the bus transaction). In contrast, a PCIe bus link supports full-duplex communication between any two endpoints, with no inherent limitation on concurrent access across multiple endpoints.

In terms of bus protocol, PCIe communication is encapsulated in packets. The work of packetizing and de-packetizing data and status-message traffic is handled by the transaction layer of the PCIe port (described later). Radical differences in electrical signaling and bus protocol require the use of a different mechanical form factor and expansion connectors (and thus, new motherboards and new adapter boards); PCI slots and PCIe slots are not interchangeable. At the software level, PCIe preserves backward compatibility with PCI; legacy PCI system software can detect and configure newer PCIe devices without explicit support for the PCIe standard, though PCIe's new features are inaccessible.

The PCIe link between two devices can consist of anywhere from 1 to 32 lanes. In a multi-lane link, the packet data is striped across lanes, and peak data throughput scales with the overall link width. The lane count is automatically negotiated during device initialization, and can be restricted by either endpoint. For example, a single-lane PCIe (×1) card can be inserted into a multi-lane slot (×4, ×8, etc.), and the initialization cycle auto-negotiates the highest mutually supported lane count. The link can also dynamically down-configure itself to use fewer lanes, providing some measure of failure tolerance in the presence of bad or unreliable lanes. The PCIe standard defines slots and connectors for multiple widths: ×1, ×4, ×8, ×16, ×32. This allows the PCIe bus to serve both cost-sensitive applications where high throughput is not needed and performance-critical applications such as 3D graphics, networking (10 Gigabit Ethernet, multiport Gigabit Ethernet), and enterprise storage (SAS, Fibre Channel).
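
The negotiation described above effectively amounts to picking the largest lane count that both the card and the slot support, falling back to a narrower width when necessary. A minimal illustrative sketch in Python; the helper name and the sets of advertised widths are hypothetical and are not part of any real PCIe software interface:

    def negotiate_width(card_widths, slot_widths):
        """Return the highest lane count supported by both endpoints."""
        common = set(card_widths) & set(slot_widths)
        if not common:
            raise ValueError("no mutually supported link width")
        return max(common)

    # Example: a x1 card in a x16 slot trains to x1; a x16 card in a slot
    # wired for only eight lanes trains to x8.
    print(negotiate_width({1}, {1, 4, 8, 16}))     # -> 1
    print(negotiate_width({1, 4, 8, 16}, {1, 8}))  # -> 8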

As a point of reference, a PCI-X (133 MHz 64-bit) device and a PCIe device using four lanes (×4) at Gen1 speed have roughly the same peak transfer rate in a single direction: 1064 MB/s. The PCIe bus has the potential to perform better than the PCI-X bus in cases where multiple devices are transferring data simultaneously, or if communication with the PCIe peripheral is bidirectional.
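
The rough parity claimed above follows from simple arithmetic: a 64-bit PCI-X bus at 133 MHz moves 8 bytes per clock, while a Gen1 PCIe lane delivers 250 MB/s after 8b/10b encoding overhead. A small illustrative check (Python):

    # Peak one-directional transfer rates compared (illustrative arithmetic).
    pcix_rate = 133e6 * 8                  # 64-bit bus at 133 MHz -> bytes/s
    pcie_lane_gen1 = 2.5e9 * (8 / 10) / 8  # 2.5 GT/s, 8b/10b coding, 8 bits/byte
    pcie_x4_gen1 = 4 * pcie_lane_gen1

    print(f"PCI-X 133 MHz / 64-bit: {pcix_rate / 1e6:.0f} MB/s")     # ~1064 MB/s
    print(f"PCIe x4 Gen1:           {pcie_x4_gen1 / 1e6:.0f} MB/s")  # ~1000 MB/s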

Interconnect

PCIe devices communicate via a logical connection called an interconnect[5] or link. A link is a point-to-point communication channel between two PCIe ports, allowing both to send and receive ordinary PCI requests (configuration read/write, I/O read/write, memory read/write) and interrupts (INTx, MSI, MSI-X). At the physical level, a link is composed of one or more lanes.[5] Low-speed peripherals (such as an 802.11 Wi-Fi card) use a single-lane (×1) link, while a graphics adapter typically uses a much wider (and thus faster) 16-lane link.

Lane

A lane is composed of two differential signaling pairs: one pair for receiving data, the other for transmitting it. Thus, each lane is composed of four wires or signal traces. Conceptually, each lane is used as a full-duplex byte stream, transporting data packets in eight-bit 'byte' format, between endpoints of a link, in both directions simultaneously.[6] Physical PCIe slots may contain from one to thirty-two lanes, in powers of two (1, 2, 4, 8, 16 and 32).[5] Lane counts are written with an × prefix (e.g., ×16 represents a sixteen-lane card or slot), with ×16 being the largest size in common use.[7]

Serial bus

The bonded serial format was chosen over a traditional parallel bus format due to the latter's inherent limitations, including half-duplex operation, excess signal count, and inherently lower bandwidth due to timing skew. Timing skew results from separate electrical signals within a parallel interface traveling down different-length conductors, on potentially different printed circuit board layers, at possibly different signal velocities. Despite being transmitted simultaneously as a single word, signals on a parallel interface experience different travel times and arrive at their destinations at different moments. When the interface clock rate is increased to the point where its inverse (the clock period) is shorter than the largest possible time between signal arrivals, the signals no longer arrive with sufficient coincidence to make recovery of the transmitted word possible. Since timing skew over a parallel bus can amount to a few nanoseconds, the resulting bandwidth limitation is in the range of hundreds of megahertz.

A serial interface does not exhibit timing skew because there is only one differential signal in each direction within each lane, and there is no external clock signal since clocking information is embedded within the serial signal. As such, typical bandwidth limitations on serial signals are in the multi-gigahertz range. PCIe is just one example of a general trend away from parallel buses to serial interconnects. Other examples include Serial ATA, USB, SAS, FireWire (1394) and RapidIO.

Multichannel serial design increases flexibility by allowing fewer lanes to be allocated to slower devices.

Form factors

PCI Express (standard)

Various PCI slots. From top to bottom:
  • PCI Express ×4
  • PCI Express ×16
  • PCI Express ×1
  • PCI Express ×16
  • Conventional PCI (32-bit)

A PCIe card fits into a slot of its physical size or larger (maximum ×16), but may not fit into a smaller PCIe slot (for example, a ×16 card will not fit a ×8 slot). Some slots use open-ended sockets to permit physically longer cards and negotiate the best available electrical connection. The number of lanes actually connected to a slot may also be less than the number supported by the physical slot size.

An example is a ×8 slot that actually only runs at ×1. These slots accept any ×1, ×2, ×4 or ×8 card, though the card will only run at ×1 speed. This type of socket is called a ×8 (×1 mode) slot, meaning it physically accepts up to ×8 cards but only runs them at ×1 speed. This is also sometimes specified as "×size (@×capacity)" (for example, "×16 (@×8)"). The advantage is that such a slot can accommodate a larger range of PCIe cards without requiring motherboard hardware to support the full transfer rate. This keeps design and implementation costs down.

Pinout

The following table identifies the conductors on each side of the edge connector on a PCI Express card. The solder side of the printed circuit board (PCB) is the A side, and the component side is the B side.[8]

PCI express ×16 connector pinout
Pin Side B Side A Comments
1 +12V PRSNT1# Pulled low to indicate card inserted
2 +12V +12V
3 +12V +12V
4 Ground Ground
5 SMCLK TCK SMBus and JTAG port pins
6 SMDAT TDI
7 Ground TDO
8 +3.3V TMS
9 TRST# +3.3V
10 +3.3V aux +3.3V Standby power
11 WAKE# PWRGD Link reactivation, power good.
Key notch
12 Reserved Ground
13 Ground REFCLK+ Reference clock differential pair
14 HSOp(0) REFCLK- Lane 0 transmit data, + and −
15 HSOn(0) Ground
16 Ground HSIp(0) Lane 0 receive data, + and −
17 PRSNT2# HSIn(0)
18 Ground Ground
19 HSOp(1) Reserved Lane 1 transmit data, + and −
20 HSOn(1) Ground
21 Ground HSIp(1) Lane 1 receive data, + and −
22 Ground HSIn(1)
23 HSOp(2) Ground Lane 2 transmit data, + and −
24 HSOn(2) Ground
25 Ground HSIp(2) Lane 2 receive data, + and −
26 Ground HSIn(2)
27 HSOp(3) Ground Lane 3 transmit data, + and −
28 HSOn(3) Ground
29 Ground HSIp(3) Lane 3 receive data, + and −
30 Reserved HSIn(3)
31 PRSNT2# Ground
32 Ground Reserved
33 HSOp(4) Reserved Lane 4 transmit data, + and −
34 HSOn(4) Ground
35 Ground HSIp(4) Lane 4 receive data, + and −
36 Ground HSIn(4)
37 HSOp(5) Ground Lane 5 transmit data, + and −
38 HSOn(5) Ground
39 Ground HSIp(5) Lane 5 receive data, + and −
40 Ground HSIn(5)
41 HSOp(6) Ground Lane 6 transmit data, + and −
42 HSOn(6) Ground
43 Ground HSIp(6) Lane 6 receive data, + and −
44 Ground HSIn(6)
45 HSOp(7) Ground Lane 7 transmit data, + and −
46 HSOn(7) Ground
47 Ground HSIp(7) Lane 7 receive data, + and −
48 PRSNT2# HSIn(7)
49 Ground Ground
50 HSOp(8) Reserved Lane 8 transmit data, + and −
51 HSOn(8) Ground
52 Ground HSIp(8) Lane 8 receive data, + and −
53 Ground HSIn(8)
54 HSOp(9) Ground Lane 9 transmit data, + and −
55 HSOn(9) Ground
56 Ground HSIp(9) Lane 9 receive data, + and −
57 Ground HSIn(9)
58 HSOp(10) Ground Lane 10 transmit data, + and −
59 HSOn(10) Ground
60 Ground HSIp(10) Lane 10 receive data, + and −
61 Ground HSIn(10)
62 HSOp(11) Ground Lane 11 transmit data, + and −
63 HSOn(11) Ground
64 Ground HSIp(11) Lane 11 receive data, + and −
65 Ground HSIn(11)
66 HSOp(12) Ground Lane 12 transmit data, + and −
67 HSOn(12) Ground
68 Ground HSIp(12) Lane 12 receive data, + and −
69 Ground HSIn(12)
70 HSOp(13) Ground Lane 13 transmit data, + and −
71 HSOn(13) Ground
72 Ground HSIp(13) Lane 13 receive data, + and −
73 Ground HSIn(13)
74 HSOp(14) Ground Lane 14 transmit data, + and −
75 HSOn(14) Ground
76 Ground HSIp(14) Lane 14 receive data, + and −
77 Ground HSIn(14)
78 HSOp(15) Ground Lane 15 transmit data, + and −
79 HSOn(15) Ground
80 Ground HSIp(15) Lane 15 receive data, + and −
81 PRSNT2# HSIn(15)
82 Reserved Ground

A ×1 slot is a shorter version of this, ending after pin 18; a ×4 slot ends after pin 32, and a ×8 slot after pin 49.

Legend
Ground pin Zero volt reference
Power pin Supplies power to the PCIe card
Output pin Signal from the card to the motherboard
Input pin Signal from the motherboard to the card
Open drain May be pulled low and/or sensed by multiple cards
Sense pin Tied together on card
Reserved Not presently used, do not connect

Power

PCI Express cards are allowed a maximum power consumption of 25 W (×1: 10 W for power-up). Low-profile cards are limited to 10 W (×16 to 25 W). PCI Express Graphics 1.0 (PEG) cards may increase power drawn from the slot to 75 W after configuration (3.3 V/3 A + 12 V/5.5 A).[9] PCI Express 2.1 increased the power output from an ×16 slot to 150 W so that some high-performance graphics cards can be run from slot power alone.[10] Optional connectors add 75 W (6-pin) or 150 W (8-pin) of power, for up to 300 W total.
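
These budgets combine by simple addition: the 75 W PEG figure corresponds roughly to 3.3 V × 3 A plus 12 V × 5.5 A, and one way to reach the quoted 300 W total is a 75 W slot plus one 6-pin and one 8-pin connector (the exact breakdown of the 300 W figure is an assumption here). A small illustrative check:

    # Power-budget arithmetic from the figures quoted above (illustrative).
    peg_slot = 3.3 * 3 + 12 * 5.5   # ~75.9 W, specified as 75 W from the slot
    total = 75 + 75 + 150           # slot + 6-pin + 8-pin auxiliary connectors
    print(f"PEG slot after configuration: ~{peg_slot:.0f} W")  # ~76 W
    print(f"Slot plus 6-pin and 8-pin connectors: {total} W")  # 300 W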

PCI Express Mini Card

A WLAN PCI Express Mini Card and its connector.
MiniPCI and MiniPCI Express cards in comparison

PCI Express Mini Card (also known as Mini PCI Express, Mini PCIe, and Mini PCI-E) is a replacement for the Mini PCI form factor, based on PCI Express. It was developed by the PCI-SIG. The host device supports both PCI Express and USB 2.0 connectivity, and each card may use either standard. Most laptop computers built after 2005 are based on PCI Express and can have several Mini Card slots.[citation needed]

Physical dimensions

PCI Express Mini Cards are 30×50.95 mm. There is a 52-pin edge connector, consisting of two staggered rows on a 0.8 mm pitch. Each row has eight contacts, a gap equivalent to four contacts, then a further 18 contacts. A half-length card is also specified, at 30×26.8 mm. Cards have a thickness of 1.0 mm (excluding components).

Electrical interface

The PCI Express Mini Card edge connector provides multiple connections and buses, including PCI Express ×1 and USB 2.0.

Mini PCI Express & mSATA

Despite sharing the Mini PCI Express form factor, a Mini PCI Express slot must additionally provide the electrical connections that an mSATA drive requires. For this reason, only certain notebooks are compatible with mSATA drives. Most compatible systems are based on Intel's Sandy Bridge processor architecture, using the Huron River platform. For a combined mSATA/Mini PCIe connector, however, the only prerequisite is a switch that configures the slot as either mSATA or Mini PCIe, and this can be implemented on any platform.

Notebooks like Lenovo's T-Series, W-Series, and X-Series ThinkPads released in March–April 2011 have support for an mSATA SSD card in their WWAN card slot. The ThinkPad Edge E220s/E420s, and the Lenovo IdeaPad Y460/Y560 also support mSATA.[11]

Some notebooks (notably the Asus Eee PC, the MacBook Air, and the Dell mini9 and mini10) use a variant of the PCI Express Mini Card as an SSD. This variant uses the reserved and several non-reserved pins to implement SATA and IDE interface passthrough, keeping only USB, ground lines, and sometimes the core PCIe ×1 bus intact.[12] This makes the 'miniPCIe' flash and solid state drives sold for netbooks largely incompatible with true PCI Express Mini implementations.

Also, the typical Asus miniPCIe SSD is 71mm long, causing the Dell 51mm model to often be (incorrectly) referred to as half length. A true 51mm Mini PCIe SSD was announced in 2009, with two stacked PCB layers, which allows for higher storage capacity. The announced design preserves the PCIe interface, making it compatible with the standard mini PCIe slot. No working product has yet been developed.

Intel has numerous Desktop Boards with the PCIe ×1 Mini-Card slot that typically do not support mSATA SSDs. A list of Desktop Boards that natively support mSATA in the PCIe ×1 Mini-Card slot (typically multiplexed with a SATA port) is provided on the Intel Support site.[13]

PCI Express External Cabling

PCI Express External Cabling (also known as External PCI Express, Cabled PCI Express, or ePCIe) specifications were released by the PCI-SIG in February 2007.[14][15]

Standard cables and connectors have been defined for ×1, ×4, ×8, and ×16 link widths, with a transfer rate of 250 MB/s per lane. The PCI-SIG also expects the standard to evolve to reach 500 MB/s per lane, as in PCI Express 2.0. The maximum cable length remains undetermined. An example use of Cabled PCI Express is a metal enclosure containing a number of PCI slots and PCI-to-ePCIe adapter circuitry; such a device would not be possible without the ePCIe specification.

Derivative forms

There are several other expansion card types derived from PCIe.

History and revisions

While in early development, PCIe was initially referred to as HSI (for High Speed Interconnect), and underwent a name change to 3GIO (for 3rd Generation I/O) before finally settling on its PCI-SIG name PCI Express. It was first drawn up by a technical working group named the Arapaho Work Group (AWG) that, for initial drafts, consisted only of Intel engineers. Subsequently the AWG expanded to include industry partners.

PCIe is a technology under constant development and improvement. The current PCI Express implementation is version 3.0.

PCI Express 1.0a

In 2003, PCI-SIG introduced PCIe 1.0a, with a per-lane data rate of 250 MB/s and a transfer rate of 2.5 GT/s (gigatransfers per second, i.e. 10⁹ transfers per second).

PCIe 1.x uses an 8b/10b encoding scheme that results in a 20 percent ((10-8)/10) overhead on the raw bit rate, therefore delivering an effective 250 MB/s max data rate[16].
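
The 250 MB/s figure follows directly from the raw transfer rate and the encoding overhead; a short illustrative calculation (Python), which also shows how the figure scales with link width:

    # Effective per-lane data rate for PCIe 1.x (illustrative arithmetic).
    raw_rate = 2.5e9                 # 2.5 GT/s raw symbol rate per lane
    line_bits_per_byte = 10          # 8b/10b: 10 line bits carry 8 payload bits
    per_lane = raw_rate / line_bits_per_byte            # payload bytes per second
    print(f"per lane: {per_lane / 1e6:.0f} MB/s")       # 250 MB/s
    print(f"x16 link: {16 * per_lane / 1e9:.0f} GB/s")  # 4 GB/s in each direction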

Transfer rate is expressed in GT/s instead of Gb/s because the number includes the overhead bits[17].

PCI Express 1.1

In 2005, PCI-SIG introduced PCIe 1.1. This updated specification includes clarifications and several improvements, but is fully compatible with PCI Express 1.0a. No changes were made to the data rate.

PCI Express 2.0

PCI-SIG announced the availability of the PCI Express Base 2.0 specification on 15 January 2007.[18] The PCIe 2.0 standard doubles the transfer rate compared with PCIe 1.0 to 5 GT/s and the per-lane throughput rises from 250 MB/s to 500 MB/s. This means a 32-lane PCIe connector (×32) can support throughput up to 16 GB/s aggregate.

PCIe 2.0 motherboard slots are fully backward compatible with PCIe v1.x cards. PCIe 2.0 cards are also generally backward compatible with PCIe 1.x motherboards, using the available bandwidth of PCI Express 1.1. Overall, graphics cards or motherboards designed for v2.0 will work with the other component being v1.1 or v1.0a.

The PCI-SIG also said that PCIe 2.0 features improvements to the point-to-point data transfer protocol and its software architecture.[19]

Intel's first PCIe 2.0 capable chipset was the X38 and boards began to ship from various vendors (Abit, Asus, Gigabyte) as of October 21, 2007.[20] AMD started supporting PCIe 2.0 with its AMD 700 chipset series and nVidia started with the MCP72.[21] All of Intel's prior chipsets, including the Intel P35 chipset, supported PCIe 1.1 or 1.0a.[22]

Like 1.x, PCIe 2.0 uses an 8b/10b encoding scheme, therefore delivering an effective 4 Gb/s max transfer rate from its 5 GT/s raw data rate.

PCI Express 2.1

PCI Express 2.1 supports a large proportion of the management, support, and troubleshooting systems planned for full implementation in PCI Express 3.0; the speed, however, is the same as PCI Express 2.0. The increase in power available from the slot breaks backward compatibility between PCI Express 2.1 cards and some older motherboards with 1.0/1.0a slots, but most motherboards with PCI Express 1.1 connectors received BIOS updates from their manufacturers to restore compatibility with PCIe 2.1 cards.

PCI Express 3.0

The PCI Express Base specification revision 3.0 was made available in November 2010, after multiple delays. In August 2007, PCI-SIG announced that PCI Express 3.0 would carry a bit rate of 8 gigatransfers per second (GT/s), and that it would be backward compatible with existing PCIe implementations. At that time, it was also announced that the final specification for PCI Express 3.0 would be delayed until 2011.[23] New features of the PCIe 3.0 specification include a number of optimizations for enhanced signaling and data integrity, including transmitter and receiver equalization, PLL improvements, clock data recovery, and channel enhancements for currently supported topologies.[24]

Following a six-month technical analysis of the feasibility of scaling the PCIe interconnect bandwidth, PCI-SIG found that 8 gigatransfers per second can be manufactured in mainstream silicon process technology and deployed with existing low-cost materials and infrastructure, while maintaining full compatibility (with negligible impact) with the PCIe protocol stack.

PCIe 3.0 removes the 8b/10b encoding requirement of 2.0 and instead uses a technique called "scrambling", which applies a known binary polynomial to the data stream in a feedback topology. Because the scrambling polynomial is known, the data can be recovered by running it through a feedback topology using the inverse polynomial.[25] PCIe 3.0 also uses a 128b/130b encoding scheme, reducing the overhead to approximately 1.5% ((130−128)/130), as opposed to the 20% overhead of the 8b/10b encoding used by PCIe 2.0. PCIe 3.0's 8 GT/s bit rate therefore effectively delivers double the PCIe 2.0 bandwidth. PCI-SIG expects the PCIe 3.0 specifications to undergo rigorous technical vetting and validation before being released to the industry. This process, which was followed in the development of prior generations of the PCIe Base and various form factor specifications, includes the corroboration of the final electrical parameters with data derived from test silicon and other simulations conducted by multiple members of the PCI-SIG.
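
The overhead comparison can be made concrete: 8b/10b costs 20% of the line rate while 128b/130b costs about 1.5%, which is why 8 GT/s yields roughly double the PCIe 2.0 per-lane throughput even though the wire rate is only 60% higher. An illustrative calculation:

    # Per-lane throughput, PCIe 2.0 (8b/10b) vs. PCIe 3.0 (128b/130b).
    gen2 = 5e9 * (8 / 10) / 8     # 5 GT/s minus 20% encoding overhead -> bytes/s
    gen3 = 8e9 * (128 / 130) / 8  # 8 GT/s minus ~1.5% encoding overhead -> bytes/s
    print(f"PCIe 2.0: {gen2 / 1e6:.0f} MB/s per lane")  # 500 MB/s
    print(f"PCIe 3.0: {gen3 / 1e6:.0f} MB/s per lane")  # ~985 MB/s, roughly double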

On November 18, 2010, the PCI Special Interest Group officially published the finalized PCI Express 3.0 specification to its members to build devices based on this new version of PCI Express.[26]

AMD's latest flagship graphics card, the Radeon 7970, launched on January 9, 2012, is the world's first PCIe 3.0 graphics card.[27] Initial reviews suggest that the new interface does not improve graphics performance compared to PCIe 2.0, whose bandwidth is, at the time of writing, still under-utilized. However, the new interface may prove advantageous when used for general-purpose computing with technologies such as OpenCL, CUDA and C++ AMP.[28]

PCI Express 4.0

On November 29, 2011, PCI-SIG announced that it would proceed to PCI Express 4.0, featuring 16 GT/s while still using copper technology. Additionally, active and idle power optimizations are to be investigated. Final specifications are expected to be released in 2014 or 2015.[29]

Current status

PCI Express has replaced AGP as the default interface for graphics cards on new systems. Almost all models of graphics cards released in 2010 and 2011 by AMD (ATI) and NVIDIA use PCI Express. NVIDIA uses the high bandwidth data transfer of PCIe for its Scalable Link Interface (SLI) technology, which allows multiple graphics cards of the same chipset and model number to run in tandem, allowing increased performance. AMD has also developed a multi-GPU system based on PCIe called CrossFire. AMD and NVIDIA have released motherboard chipsets that support as many as four PCIe ×16 slots, allowing tri-GPU and quad-GPU card configurations.

PCIe 3.0 vs. HTX, Thunderbolt, USB 3.0

However, as of June 2012, PCI Express 3.0 is available on only a few high-end motherboards. Boards that implement PCIe 3.0 in their Intel incarnation typically do not in their AMD incarnation, which instead uses the native HyperTransport bus; HyperTransport already has all the features promised for PCIe 4.0.

At its announced 16 GT/s, and with power-adaptability features not yet a certainty, PCIe 4.0 may not compete with HyperTransport 3.0 or 3.1, only with 2.0 [1]. Intel's prior attempt to displace HTX, QPI, failed. Thunderbolt is more likely the longer-term copper interface for Intel's own desktop and gaming systems due to its flexibility, though it is not price-competitive with USB 3.0 as a wired interface [2]. Neither Thunderbolt nor USB 3.0 is competitive with Power over Ethernet where lower bandwidth, longer wires, and better power management are required, nor are extensions to SATA likely to change that.


Among copper alternatives to Thunderbolt and PCIe 3.0, 10 gigabit Ethernet and 100 gigabit Ethernet are extremely robust for external connections, while HTX is far more mature on-board but not available as a card interface except on some test boards; this leaves PCIe 2.0 ×32 as the fastest feasible card interface as of June 2012. Regarding multi-socket CPU configurations, HTX HyperShare is much further advanced towards an inter-processor standard than PCIe 4.0. PCIe 3.0 or 4.0 cannot reasonably displace HTX on AMD boards, and is less likely to reliably support MIPS, PPC or ARM processors in future, especially as AMD includes ARM cores and works towards a common HSA Foundation specification for heterogeneous architectures [3] [4] beyond CPU+GPU from one company.

PCIe 3.0, lacking such architectural support, can be viewed strictly as a more stable front-side bus. See QPI for Intel's previous attempt to develop one. While it is not technically tied to the x86 architecture, in practice Intel does not participate in alternatives to that architecture and cannot reasonably be expected to implement standards that would make x86 obsolete.

PCI Express External Cabling, the copper-wire extension of PCIe outside the box [5], competes more with fibre alternatives but also seems to compete directly with Thunderbolt. Accordingly, Intel is extremely unlikely to invest in any such extensions.

Fibre: PCI extensions vs. LightPeak, Ethernet, RapidIO

Some vendors ship PCIe over fiber [6] [7] but as of 2011 this was not widely deployed [8]. Intel does not support any such fibre extension.

As high-end data centre applications move to fibre connectivity, seeking low power and low latency, they are already adopting fibre-specific protocol and bus technologies such as RapidIO [9], which has already been selected for space program use [10] where copper is unsuitable.

Thunderbolt originated as LightPeak and was intended as an all-fibre interface, but could not be made price-competitive. Losing large, price-insensitive design wins such as space programs to RapidIO (fibre) or HyperShare (copper), for lack of any released LightPeak/Thunderbolt alternative, makes it unlikely that Intel can displace 4 or 8 gigabit Fibre Channel at major data centre vendors such as IBM [11] or Apple [12]. These vendors are more committed to open and consortium technologies, and are much more likely to support them than any x86-tied Intel fibre bus, even though Apple and Intel cooperated closely on LightPeak initially.

Proprietary protocols, as with FireWire, may simply serve to make open alternatives cheaper. That is, because FireWire was not scalable, it ultimately only served to justify price reductions in gigabit Ethernet. Thunderbolt may do likewise for 10 gigabit Ethernet, while various fibre extensions of PCIe (or Fibre Channel) may provide some competition to 100 gigabit Ethernet (usually over fibre) or terabit Ethernet, driving their price per port down faster. See the list of device bit rates to identify which types of devices compete at which speed levels.

Hardware protocol summary

The PCIe link is built around dedicated unidirectional pairs of serial (1-bit), point-to-point connections known as lanes. This is in sharp contrast to the earlier PCI connection, which is a bus-based system where all the devices share the same bidirectional, 32-bit or 64-bit parallel bus.

PCI Express is a layered protocol, consisting of a transaction layer, a data link layer, and a physical layer. The Data Link Layer is subdivided to include a media access control (MAC) sublayer. The Physical Layer is subdivided into logical and electrical sublayers. The Physical logical-sublayer contains a physical coding sublayer (PCS). The terms are borrowed from the IEEE 802 networking protocol model.

Physical layer

The PCIe Physical Layer (PHY, PCIEPHY, PCI Express PHY, or PCIe PHY) specification is divided into two sub-layers, corresponding to electrical and logical specifications. The logical sublayer is sometimes further divided into a MAC sublayer and a PCS, although this division is not formally part of the PCIe specification. A specification published by Intel, the PHY Interface for PCI Express (PIPE),[30] defines the MAC/PCS functional partitioning and the interface between these two sub-layers. The PIPE specification also identifies the physical media attachment (PMA) layer, which includes the serializer/deserializer (SerDes) and other analog circuitry; however, since SerDes implementations vary greatly among ASIC vendors, PIPE does not specify an interface between the PCS and PMA.

At the electrical level, each lane consists of two unidirectional LVDS or PCML pairs operating at 2.525 Gbit/s. Transmit and receive are separate differential pairs, for a total of four data wires per lane.

A connection between any two PCIe devices is known as a link, and is built up from a collection of one or more lanes. All devices must minimally support a single-lane (×1) link. Devices may optionally support wider links composed of 2, 4, 8, 12, 16, or 32 lanes. This allows for very good compatibility in two ways: a PCIe card physically fits and works correctly in any slot that is at least as large as it is, and a slot of a large physical size can be wired electrically with fewer lanes than its connector size would suggest.

In both cases, PCIe negotiates the highest mutually supported number of lanes. Many graphics cards, motherboards and BIOS versions are verified to support ×1, ×4, ×8 and ×16 connectivity on the same connection.

Even though the two would be signal-compatible, it is not usually possible to place a physically larger PCIe card (e.g., a ×16 sized card) into a smaller slot, though some motherboards allow this if the PCIe slots are open-ended, whether by design or by modification.[citation needed]

The width of a PCIe connector is 8.8 mm, while the height is 11.25 mm, and the length is variable. The fixed section of the connector is 11.65 mm in length and contains two rows of 11 (22 pins total), while the length of the other section is variable depending on the number of lanes. The pins are spaced at 1 mm intervals, and the thickness of the card going into the connector is 1.8 mm.[31][32]

Lanes   Pins (total)    Pins (variable)   Length (total)   Length (variable)
×1      2×18 = 36[33]   2×7 = 14          25 mm            7.65 mm
×4      2×32 = 64       2×21 = 42         39 mm            21.65 mm
×8      2×49 = 98       2×38 = 76         56 mm            38.65 mm
×16     2×82 = 164      2×71 = 142        89 mm            71.65 mm

Data transmission

PCIe sends all control messages, including interrupts, over the same links used for data. The serial protocol can never be blocked, so latency is still comparable to conventional PCI, which has dedicated interrupt lines.

Data transmitted on multiple-lane links is interleaved, meaning that each successive byte is sent down successive lanes. The PCIe specification refers to this interleaving as data striping. While requiring significant hardware complexity to synchronize (or deskew) the incoming striped data, striping can significantly reduce the latency of the nth byte on a link. Due to padding requirements, striping may not necessarily reduce the latency of small data packets on a link.
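
The byte-level striping described above is a simple round-robin distribution of the byte stream across lanes. A minimal sketch in Python, assuming idealized lanes and ignoring framing, padding, and skew (the function names are hypothetical):

    # Minimal sketch of byte striping across lanes (ignores framing, padding, skew).
    def stripe(data: bytes, num_lanes: int):
        """Distribute successive bytes round-robin across lanes."""
        lanes = [bytearray() for _ in range(num_lanes)]
        for i, byte in enumerate(data):
            lanes[i % num_lanes].append(byte)
        return lanes

    def deskew(lanes):
        """Reassemble the original byte stream from the striped lanes."""
        total = sum(len(lane) for lane in lanes)
        out = bytearray()
        for i in range(total):
            out.append(lanes[i % len(lanes)][i // len(lanes)])
        return bytes(out)

    packet = b"example TLP payload"
    assert deskew(stripe(packet, 4)) == packet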

As with other high data rate serial transmission protocols, the clock is embedded in the signal. At the physical level, PCI Express 2.0 utilizes the 8b/10b encoding scheme[25] to ensure that strings of consecutive ones or consecutive zeros are limited in length. This coding was used to prevent the receiver from losing track of where the bit edges are. In this coding scheme every eight (uncoded) payload bits of data are replaced with 10 (encoded) bits of transmit data, causing a 20% overhead in the electrical bandwidth. To improve the available bandwidth, PCI Express version 3.0 employs 128b/130b encoding instead: similar but with much lower overhead.

Many other protocols (such as SONET) use a different form of encoding known as scrambling to embed clock information into data streams. The PCIe specification also defines a scrambling algorithm, but it is used to reduce electromagnetic interference (EMI) by preventing repeating data patterns in the transmitted data stream.

Data link layer

The Data Link Layer performs three vital services for the PCIe link:

  1. sequence the transaction layer packets (TLPs) that are generated by the transaction layer,
  2. ensure reliable delivery of TLPs between two endpoints via an acknowledgement protocol (ACK and NAK signaling) that explicitly requires replay of unacknowledged/bad TLPs,
  3. initialize and manage flow control credits

On the transmit side, the data link layer generates an incrementing sequence number for each outgoing TLP. It serves as a unique identification tag for each transmitted TLP, and is inserted into the header of the outgoing TLP. A 32-bit cyclic redundancy check code (known in this context as Link CRC or LCRC) is also appended to the end of each outgoing TLP.

On the receive side, the received TLP's LCRC and sequence number are both validated in the link layer. If either the LCRC check fails (indicating a data error), or the sequence number is out of range (non-consecutive from the last valid received TLP), then the bad TLP, as well as any TLPs received after the bad TLP, are considered invalid and discarded. The receiver sends a negative acknowledgement message (NAK) with the sequence number of the invalid TLP, requesting re-transmission of all TLPs forward of that sequence number. If the received TLP passes the LCRC check and has the correct sequence number, it is treated as valid. The link receiver increments the sequence number (which tracks the last received good TLP), and forwards the valid TLP to the receiver's transaction layer. An ACK message is sent to the remote transmitter, indicating the TLP was successfully received (and, by extension, so were all TLPs with earlier sequence numbers).

If the transmitter receives a NAK message, or if no acknowledgement (ACK or NAK) is received before a timeout period expires, the transmitter must retransmit all TLPs that lack a positive acknowledgement (ACK). Barring a persistent malfunction of the device or transmission medium, the link layer presents a reliable connection to the transaction layer, since the transmission protocol ensures delivery of TLPs over an unreliable medium.
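
A very simplified sketch of the transmit-side behaviour just described: each TLP gets a sequence number and a CRC, a copy is kept in a replay buffer until it is acknowledged, and a NAK triggers retransmission of everything not yet ACKed. The class and method names are hypothetical, zlib.crc32 is only a stand-in for the real 32-bit LCRC, and sequence-number wrap-around is ignored:

    import zlib
    from collections import OrderedDict

    class DllTransmitter:
        """Toy data-link-layer transmitter (sketch only, not the PCIe spec)."""

        def __init__(self, send):
            self.send = send                    # callable that puts a frame on the wire
            self.next_seq = 0
            self.replay_buffer = OrderedDict()  # seq -> frame, kept until ACKed

        def transmit_tlp(self, tlp: bytes):
            seq = self.next_seq
            self.next_seq += 1
            frame = seq.to_bytes(2, "big") + tlp
            frame += zlib.crc32(frame).to_bytes(4, "big")  # append stand-in "LCRC"
            self.replay_buffer[seq] = frame                # keep a copy for replay
            self.send(frame)

        def on_ack(self, seq: int):
            # An ACK covers this TLP and, by extension, all earlier ones.
            for s in list(self.replay_buffer):
                if s <= seq:
                    del self.replay_buffer[s]

        def on_nak(self, seq: int):
            # Replay every unacknowledged frame from the NAKed one onward.
            for s, frame in self.replay_buffer.items():
                if s >= seq:
                    self.send(frame)

    tx = DllTransmitter(send=lambda frame: None)  # stand-in for the physical layer
    tx.transmit_tlp(b"memory read request")
    tx.on_ack(0)                                  # frees the replay-buffer copy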

In addition to sending and receiving TLPs generated by the transaction layer, the data link layer also generates and consumes data link layer packets (DLLPs). ACK and NAK signals are communicated via DLLPs, as are some power management messages and flow control credit information (on behalf of the transaction layer).

In practice, the number of in-flight, unacknowledged TLPs on the link is limited by two factors: the size of the transmitter's replay buffer (which must store a copy of all transmitted TLPs until the remote receiver ACKs them), and the flow control credits issued by the receiver to the transmitter. PCI Express requires all receivers to issue a minimum number of credits, to guarantee a link always allows sending PCI configuration TLPs and message TLPs.

Transaction layer

PCI Express implements split transactions (transactions with request and response separated by time), allowing the link to carry other traffic while the target device gathers data for the response.

PCI Express uses credit-based flow control. In this scheme, a device advertises an initial amount of credit for each receive buffer in its transaction layer. The device at the opposite end of the link, when sending transactions to this device, counts the number of credits each TLP consumes from its account. The sending device may only transmit a TLP when doing so does not make its consumed credit count exceed its credit limit. When the receiving device finishes processing the TLP from its buffer, it signals a return of credits to the sending device, which increases the credit limit by the restored amount. The credit counters are modular counters, and the comparison of consumed credits to credit limit requires modular arithmetic. The advantage of this scheme (compared to other methods such as wait states or handshake-based transfer protocols) is that the latency of credit return does not affect performance, provided that the credit limit is not encountered. This assumption is generally met if each device is designed with adequate buffer sizes.
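
A minimal sketch of this credit accounting from the sender's point of view. The names are hypothetical, and real PCIe keeps separate credit pools for posted, non-posted, and completion traffic (header and data credits) and compares counters with modular arithmetic, all of which is omitted here:

    class CreditFlowControl:
        """Toy sender-side view of credit-based flow control (sketch only)."""

        def __init__(self, initial_credit_limit: int):
            self.credit_limit = initial_credit_limit  # advertised by the receiver
            self.credits_consumed = 0

        def can_send(self, tlp_credits: int) -> bool:
            return self.credits_consumed + tlp_credits <= self.credit_limit

        def send(self, tlp_credits: int):
            if not self.can_send(tlp_credits):
                raise RuntimeError("insufficient flow-control credits; must wait")
            self.credits_consumed += tlp_credits

        def on_credit_return(self, returned: int):
            # Receiver freed buffer space and returned credits, raising the limit.
            self.credit_limit += returned

    fc = CreditFlowControl(initial_credit_limit=8)
    fc.send(4)
    fc.send(3)
    print(fc.can_send(2))   # False until credits are returned
    fc.on_credit_return(4)
    print(fc.can_send(2))   # True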

PCIe 1.x is often quoted to support a data rate of 250 MB/s in each direction, per lane. This figure is a calculation from the physical signaling rate (2.5 Gbaud) divided by the encoding overhead (10 bits per byte). This means a sixteen-lane (×16) PCIe card would then be theoretically capable of 16 × 250 MB/s = 4 GB/s in each direction. While this is correct in terms of data bytes, more meaningful calculations are based on the usable data payload rate, which depends on the profile of the traffic, which is a function of the high-level (software) application and intermediate protocol levels.

Like other high data rate serial interconnect systems, PCIe has a protocol and processing overhead due to the additional transfer robustness (CRC and acknowledgements). Long continuous unidirectional transfers (such as those typical in high-performance storage controllers) can approach >95% of PCIe's raw (lane) data rate. These transfers also benefit the most from an increased number of lanes (×2, ×4, etc.). But in more typical applications (such as a USB or Ethernet controller), the traffic profile is characterized by short data packets with frequent enforced acknowledgements.[34] This type of traffic reduces the efficiency of the link, due to overhead from packet parsing and forced interrupts (either in the device's host interface or the PC's CPU). Being a protocol for devices connected to the same printed circuit board, PCIe does not require the same tolerance for transmission errors as a protocol for communication over longer distances, and thus this loss of efficiency is not particular to PCIe.

Uses

External PCIe cards

Theoretically, external PCIe could give a notebook the graphics power of a desktop, by connecting a notebook to any PCIe desktop video card (enclosed in its own external housing, with a strong power supply and cooling). This is possible with an ExpressCard interface, which provides single-lane v1.1 performance.[35][36][37][38][39]

IBM/Lenovo has also included a PCI Express slot in its Advanced Docking Station 250310U. It provides a half-sized slot of ×16 length, but with only ×1 connectivity.[40] However, docking stations with expansion slots are becoming less common as laptops gain more advanced video cards and either DVI-D interfaces or DVI-D pass-through for port replicators and docking stations.

Additionally, Nvidia has developed Quadro Plex external PCIe video cards that can be used for advanced graphic applications. These video cards require a PCI Express ×8 or ×16 slot for the interconnection cable.[41] In 2008, AMD announced the ATI XGP technology, based on a proprietary cabling solution that is compatible with PCIe ×8 signal transmissions.[42] This connector is available on the Fujitsu Amilo and the Acer Ferrari One notebooks. Only Fujitsu has an actual external box available, which also works on the Ferrari One. Recently Acer launched the Dynavivid graphics dock for XGP.

There are now card hubs in development that one can connect to a laptop through an ExpressCard slot, though they are currently rare, obscure, or unavailable on the open market. These hubs can have full-sized cards placed in them.

Magma and ViDock also make use of ExpressCard to implement external graphics solutions. ViDock units are expansion chassis tailored specifically for adapting PCI Express graphics cards for use with ExpressCard-equipped laptop PCs, enabling users to connect PCIe cards externally. Development of these technologies is still ongoing; other examples include the MSI GUS and the Asus XG Station.

Recently, Intel and Apple introduced Thunderbolt, which allows for external PCI(e) devices.

The Juniper Virtual Chassis port, found on Juniper EX4200 Ethernet switches, features two external 16-lane PCI(e) connectors that allow redundant cabling to one or more switches, interconnecting a total of 10 switches into one large, redundant switching system.

External memory

The PCI Express protocol can be used as a data interface to flash memory devices, such as memory cards and solid state drives. One such format is the XQD card, developed by the CompactFlash Association.

Many high-performance, enterprise-class solid state drives are designed as PCI Express RAID controller cards with flash memory chips placed directly on the circuit board; this allows much higher transfer rates (over 1 GB/s) and IOPS (I/O operations per second; over 1 million) compared to Serial ATA or SAS drives.

OCZ and Marvell co-developed Kilimanjaro, a native PCIe solid state drive controller that is used in OCZ's Z-Drive 5. The Z-Drive 5 is designed for a PCIe 3.0 ×16 slot, and when the highest-capacity (12 TB) version is installed in such a slot it can reach sequential transfer rates of up to 7.2 GB/s and up to 2.52 million IOPS in random transfers.[43]

Competing protocols

Several communications standards have emerged based on high-bandwidth serial architectures. These include InfiniBand, RapidIO, HyperTransport, QPI and StarFabric. The differences are based on the tradeoffs between flexibility and extensibility versus latency and overhead. An example of such a tradeoff is adding complex header information to a transmitted packet to allow for complex routing (PCI Express is not capable of this). The additional overhead reduces the effective bandwidth of the interface and complicates bus discovery and initialization software. Making the system hot-pluggable also requires that software track network topology changes. Examples of buses suited for this purpose are InfiniBand and StarFabric.

Another example is making the packets shorter to decrease latency (as is required if a bus must operate as a memory interface). Smaller packets mean packet headers consume a higher percentage of the packet, thus decreasing the effective bandwidth. Examples of bus protocols designed for this purpose are RapidIO and HyperTransport.

PCI Express falls somewhere in the middle, targeted by design as a system interconnect (local bus) rather than a device interconnect or routed network protocol. Additionally, its design goal of software transparency constrains the protocol and raises its latency somewhat.

Development tools

When developing and/or troubleshooting the PCI Express bus, examination of hardware signals can be very important in finding problems. Oscilloscopes, logic analyzers and bus analyzers are tools that collect, analyze, decode, and store signals so that people can view the high-speed waveforms at their leisure.

See also

References

  1. ^ Yanmin Zhang and T. Long Nguyen, Intel Corp. (June 2007). "Enable PCI Express Advanced Error Reporting in the Kernel. Proceedings of the Linux Symposium, 2007" (PDF).
  • ^ "PCI Express 2.0 (Training)". MindShare. Retrieved 2009-12-07.
  • ^ "PCI Express Base specification". PCI_SIG. Retrieved 2010-10-18. {{cite web}}: Cite has empty unknown parameter: |data= (help)
  • ^ "HowStuffWorks "How PCI Express Works"". Computer.howstuffworks.com. Retrieved 2009-12-07.
  • ^ a b c "PCI Express Architecture Frequently Asked Questions". PCI-SIG. Retrieved 23 November 2008.
  • ^ "PCI Express Bus". Retrieved 2010-06-12.
  • ^ "PCI Express – An Overview of the PCI Express Standard - Developer Zone - National Instruments". Zone.ni.com. 2009-08-13. Retrieved 2009-12-07.
  • ^ "What is the A side, B side configuration of PCI cards". Frequently Asked Questions. Adex Electronics. 1998. Retrieved 2011 Oct 24. {{cite web}}: Check date values in: |accessdate= (help)
  • ^ PCI-SIG: Board Design Guidelines for PCI Express Architecture 2004 p. 19
  • ^ Soderstrom, Thomas. "Single-Slot Graphics: Whose Card Is Fastest?". Retrieved 19 March 2012.
  • ^ http://forum.notebookreview.com/lenovo-ibm/574993-msata-faq-basic-primer.html
  • ^ "Eee PC Research". Retrieved 26 October 2009.
  • ^ "Intel Desktop Board Solid-state drive (SSD) compatibility".
  • ^ "PCI Express External Cabling 1.0 Specification". Retrieved 9 February 2007.
  • ^ "February 7, 2007". Pci-Sig. 2007-02-07. Retrieved 2010-09-11.
  • ^ http://www.eiscat.se/groups/EISCAT_3D_info/DeliverableWP12.2/preview_popup
  • ^ http://www.tmworld.com/electronics-news/4380071/What-does-GT-s-mean-anyway-
  • ^ "PCI Express Base 2.0 specification announced" (PDF) (Press release). PCI-SIG. 15 January 2007. Retrieved 9 February 2007. — note that in this press release the term aggregate bandwidth refers to the sum of incoming and outgoing bandwidth; using this terminology the aggregate bandwidth of full duplex 100BASE-TX is 200 Mbit/s
  • ^ Tony Smith (11 October 2006). "PCI Express 2.0 final draft spec published". The Register. Retrieved 9 February 2007.
  • ^ Gary Key & Wesley Fink (21 May 2007). "Intel P35: Intel's Mainstream Chipset Grows Up". AnandTech. Retrieved 21 May 2007.
  • ^ Anh Huynh (8 February 2007). "NVIDIA "MCP72" Details Unveiled". AnandTech. Retrieved 9 February 2007.
  • ^ "Intel P35 Express Chipset Product Brief" (PDF). Intel. Retrieved 5 September 2007.
  • ^ Hachman, Mark (5 August 2009). "PC Magazine". Pcmag.com. Retrieved 2010-09-11.
  • ^ "PCI Express 3.0 Bandwidth: 8.0 Gigatransfers/s". ExtremeTech. 9 August 2007. Retrieved 5 September 2007.
  • ^ a b "PCI Express 3.0 Frequently Asked Questions". PCI-SIG. Retrieved 23 November 2010.
  • ^ "PCI Special Interest Group Publishes PCI Express 3.0 Standard". 18 November 2010. Retrieved 18 November 2010.
  • ^ "AMD Official Product Page". Retrieved 25 December 2011.
  • ^ "Anandtechs' PCI Express 3.0: More Bandwidth For Compute, 7970 Review". 22 December 2011. Retrieved 25 December 2011.
  • ^ PCI-SIG press release Nov 29, 2011
  • ^ "PHY Interface for the PCI Express Architecture, version 2.00" (PDF). Retrieved 21 May 2008.
  • ^ "Mechanical Drawing for PCI Express Connector". Retrieved 7 December 2007.
  • ^ "FCi schematic for PCIe connectors" (PDF). Retrieved 7 December 2007. [dead link]
  • ^ "PCI Express 1x, 4x, 8x, 16x bus pinout and wiring @". Pinouts.ru. Retrieved 2009-12-07.
  • ^ "Computer Peripherals And Interfaces". Technical Publications Pune. Retrieved 23 July 2009.
  • ^ "Magma ExpressBox1: Cabled PCI Express for Desktops and Laptops". Magma.com. Retrieved 2010-09-11.
  • ^ "TheInquirer". TheInquirer. Retrieved 2010-09-11.
  • ^ "Custompcmag.co.uk". Custompcmag.co.uk. Retrieved 2010-09-11.
  • ^ ASUSTeK Computer
  • ^ "Technology Beats. News and Reviews". VR-Zone. 1995-09-09. Retrieved 2010-09-11.
  • ^ "IBM/Lenovo Thinkpad Advanced Dock Overview". 307.ibm.com. 2010-03-07. Retrieved 2010-09-11.
  • ^ "NVIDIA Quadro Plex VCS – Advanced visualization and remote graphics". Nvidia.com. Retrieved 2010-09-11.
  • ^ "Advanced Micro Devices, AMD – Global Provider of Innovative Microprocessor, Graphics and Media Solutions". Ati.amd.com. Retrieved 2010-09-11.
  • ^ http://www.xbitlabs.com/news/storage/display/20120110180208_OCZ_Demos_4TB_16TB_Solid_State_Drives_for_Enterprise.html
Further reading

External links

