Friday, 31 May 2013

Ten Gigabit Ethernet Seminar Report

CHAPTER-1
INTRODUCTION

1.1     GENERAL INTRODUCTION
             From its origin more than 25 years ago, Ethernet has evolved to meet the increasing demands of packet-switched networks. Due to its proven low implementation cost, its known reliability, and the relative simplicity of its installation and maintenance, its popularity has grown to the point that today nearly all traffic on the Internet originates or ends with an Ethernet connection. Further, as the demand for ever-faster network speeds has grown, Ethernet has been adapted to handle these higher speeds and the accompanying surges in traffic volume. The Gigabit Ethernet standard is already being deployed in large numbers in both corporate and public data networks, and has begun to move Ethernet from the realm of the local area network out to encompass the metro area network. Meanwhile, an even faster 10 Gigabit Ethernet standard is nearing completion. This latest standard is being driven not only by the increase in normal data traffic but also by the proliferation of new, bandwidth-intensive applications.
The draft standard for 10 Gigabit Ethernet differs significantly in some respects from earlier Ethernet standards, primarily in that it will only function over optical fiber and only operate in full-duplex mode, which means that collision detection protocols are unnecessary. Ethernet can now step up to 10 gigabits per second; however, it remains Ethernet, including the packet format, and current capabilities transfer easily to the new draft standard. In addition, 10 Gigabit Ethernet does not obsolete current investments in network infrastructure. The task force heading the standards effort has taken steps to ensure that 10 Gigabit Ethernet is interoperable with other networking technologies such as SONET. The standard enables Ethernet packets to travel across SONET links with very little inefficiency. Ethernet's reach, already expanded into metro area networks, can now be expanded yet again onto wide area networks, both in concert with SONET and as end-to-end Ethernet. With the current balance of network traffic heavily favoring packet-switched data over voice, it is expected that the new 10 Gigabit Ethernet standard will help to create a convergence between networks designed primarily for voice and the new data-centric networks.
Ethernet is a family of frame-based computer networking technologies for local area networks (LANs). The name comes from the physical concept of the ether. It defines a number of wiring and signaling standards for the Physical Layer of the OSI networking model, the means of network access at the Media Access Control protocol (a sub-layer of the Data Link Layer), and a common addressing format.
Ethernet is standardized as IEEE 802.3. The combination of the twisted pair versions of Ethernet for connecting end systems to the network, along with the fiber optic versions for site backbones, is the most widespread wired LAN technology. It has been in use from around 1980 to the present, largely replacing competing LAN standards such as token ring, FDDI, and ARCNET.
                                         
Fig 1.1: A standard 8P8C connector
 1.2    History
Ethernet was developed at Xerox PARC between 1973 and 1975. In 1975, Xerox filed a patent application listing Robert Metcalfe, David Boggs, Chuck Thacker and Butler Lampson as inventors, U.S. Patent 4,063,220 "Multipoint data communication system (with collision detection)". In 1976, after the system was deployed at PARC, Metcalfe and Boggs published a seminal paper.
The experimental Ethernet described in the 1976 paper ran at 3,000,000 bits per second (3 Mbit/s) and had eight-bit destination and source address fields, so the original Ethernet addresses were not the MAC addresses they are today. By software convention, the 16 bits after the destination and source address fields specified a "packet type", but, as the paper says, "different protocols use disjoint sets of packet types". Thus the original packet types could vary within each different protocol, rather than the packet type in the current Ethernet standard which specifies the protocol being used.
Metcalfe left Xerox in 1979 to promote the use of personal computers and local area networks (LANs), forming 3Com. He convinced DEC, Intel, and Xerox to work together to promote Ethernet as a standard, the so-called "DIX" standard, for "Digital/Intel/Xerox"; it specified 10 megabit/second Ethernet with 48-bit destination and source addresses and a global 16-bit type field. The first standard draft was published on September 30, 1980 by the Institute of Electrical and Electronics Engineers (IEEE). It competed with two largely proprietary systems, Token Ring and Token Bus. To overcome delays in the finalization of the Ethernet "carrier sense multiple access with collision detection" (CSMA/CD) standard, caused by the difficult decision processes in the "open" IEEE and by the competing Token Ring proposal strongly supported by IBM, support for CSMA/CD in other standardization bodies (i.e., ECMA, IEC and ISO) was instrumental to its success. The proprietary systems soon found themselves buried under a tidal wave of Ethernet products. In the process, 3Com became a major company. 3Com built the first 10 Mbit/s Ethernet adapter (1981). This was followed quickly by DEC's Unibus-to-Ethernet adapter, which DEC sold and used internally to build its own corporate network, reaching over 10,000 nodes by 1986, far and away the largest computer network then extant in the world.
The advantage of CSMA/CD was that, unlike Token Ring and Token Bus, all nodes could "see" each other directly. All "talkers" shared the same medium (a single coaxial cable); however, this was also a limitation. With only one speaker at a time, packets had to be of a minimum size to guarantee that the leading edge of the propagating wave of the message reached all parts of the medium before the transmitter could stop transmitting, thus guaranteeing that collisions (two or more packets initiated within a window of time that forced them to overlap) would be discovered. Minimum packet size and the physical medium's total length were thus closely linked.
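The sizing rule above can be sketched numerically. The figures below (a 512-bit minimum frame at 10 Mbps) are the standard classic-Ethernet numbers, not quoted in this report; the sketch simply shows how frame duration and round-trip delay trade off.

```python
# Back-of-the-envelope sketch of the CSMA/CD sizing rule: a frame must stay
# on the wire for at least one round trip, so that a collision at the far
# end is still detectable by the sender. Standard classic-Ethernet figures.

BIT_RATE = 10_000_000      # classic Ethernet, bits per second
MIN_FRAME_BITS = 512       # 64-byte minimum frame

def frame_duration_us(frame_bits, bit_rate=BIT_RATE):
    """Time a frame occupies the wire, in microseconds."""
    return frame_bits / bit_rate * 1e6

def min_frame_bits_for(round_trip_s, bit_rate=BIT_RATE):
    """Smallest frame that outlasts a given round-trip delay."""
    return bit_rate * round_trip_s

# A minimum-size frame lasts 51.2 microseconds (the classic "slot time");
# any cable whose round trip is shorter guarantees collision detection.
print(frame_duration_us(MIN_FRAME_BITS))
```

The longer the cable, the longer the round trip, so stretching the medium forces a larger minimum frame; this is exactly the coupling the text describes.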
Through the first half of the 1980s, Digital's Ethernet implementation used a coaxial cable about the diameter of a US nickel (5¢ coin), which became known as "thick wire" Ethernet when its successor, "thin wire" Ethernet, was introduced. Thin-wire Ethernet was in essence a high-quality version of the cable used in closed-circuit television of the era. The emphasis was on making the physical routing of cable easier and less costly, and on utilizing existing wiring whenever possible. The observation that there was plenty of excess capacity in unused "twisted pair" (sometimes "twisted copper") telephone wiring already installed in commercial buildings provided another opportunity to expand the installed base, and thus twisted-pair Ethernet was the next logical development.
Twisted-pair Ethernet systems were developed in the mid-1980s, beginning with StarLAN, and became widely known with 10BASE-T. These systems replaced the coaxial cable on which early Ethernets were deployed with a system of hubs linked with unshielded twisted pair (UTP), ultimately replacing the CSMA/CD scheme in favor of a switched full-duplex system offering higher performance.
1.3     General description
Figure 1.2: A 1990s network interface card. This is a combination card that supports both coaxial-based 10BASE2 (BNC connector, left) and twisted-pair-based 10BASE-T, using an RJ45 (8P8C modular connector, right).
Ethernet was originally based on the idea of computers communicating over a shared coaxial cable acting as a broadcast transmission medium. The methods used show some similarities to radio systems, although there are fundamental differences, such as the fact that it is much easier to detect collisions in a cable broadcast system than a radio broadcast. The common cable providing the communication channel was likened to the ether and it was from this reference that the name "Ethernet" was derived.
From this early and comparatively simple concept, Ethernet evolved into the complex networking technology that today underlies most LANs. The coaxial cable was replaced with point-to-point links connected by Ethernet hubs and/or switches to reduce installation costs, increase reliability, and enable point-to-point management and troubleshooting. StarLAN was the first step in the evolution of Ethernet from a coaxial cable bus to a hub-managed, twisted-pair network. The advent of twisted-pair wiring dramatically lowered installation costs relative to competing technologies, including the older Ethernet technologies.
Above the physical layer, Ethernet stations communicate by sending each other data packets, blocks of data that are individually sent and delivered. As with other IEEE 802 LANs, each Ethernet station is given a single 48-bit MAC address, which is used to specify both the destination and the source of each data packet. Network interface cards (NICs) or chips normally do not accept packets addressed to other Ethernet stations. Adapters generally come programmed with a globally unique address, but this can be overridden, either to avoid an address change when an adapter is replaced, or to use locally administered addresses.
Despite the significant changes in Ethernet from a thick coaxial cable bus running at 10 Mbit/s to point-to-point links running at 1 Gbit/s and beyond, all generations of Ethernet (excluding early experimental versions) share the same frame formats (and hence the same interface for higher layers), and can be readily interconnected.
Due to the ubiquity of Ethernet, the ever-decreasing cost of the hardware needed to support it, and the reduced panel space needed by twisted-pair Ethernet, most manufacturers now build the functionality of an Ethernet card directly into PC motherboards, eliminating the need for installation of a separate network card.
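The "locally administered" override mentioned above corresponds to a specific flag bit inside the 48-bit address. The helper below is a hypothetical illustration (not from this report) of the two standard flag bits in the first octet of a MAC address:

```python
# Illustrative sketch: the first octet of a 48-bit MAC address carries two
# flag bits. Bit 0 (the I/G bit) marks group/multicast addresses, and
# bit 1 (the U/L bit) marks locally administered addresses, which is the
# bit an operator effectively sets when overriding the burned-in address.

def parse_mac(mac: str):
    octets = [int(part, 16) for part in mac.split(":")]
    assert len(octets) == 6, "a MAC address has exactly six octets"
    first = octets[0]
    return {
        "multicast": bool(first & 0x01),            # I/G bit
        "locally_administered": bool(first & 0x02), # U/L bit
    }

# First octet 0x02: a unicast, locally administered address.
print(parse_mac("02:00:5e:00:53:01"))
```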

CHAPTER-2
                         THEME: GIGABIT ETHERNET
2.1     Ethernet

Ethernet has evolved through several stages, which can be summarized as follows:

1.      Bridged Ethernet
2.      Switched Ethernet
3.      Fast Ethernet
4.      Gigabit Ethernet



                                      Fig. 2.1: Ethernet evolution through four generations



IEEE Project 802 has created a sublayer called Media Access Control (MAC) that defines the specific access method for each LAN.



2.1.1  Bridged Ethernet
The first step in the Ethernet evolution was the division of a LAN by bridges. Bridges have two effects on an Ethernet LAN: they raise the bandwidth and they separate collision domains. In an unbridged Ethernet network, the total capacity (10 Mbps) is shared among all stations; if several stations want to transmit at the same time, the throughput seen by each one drops. A bridge divides the network into two or more networks that are, bandwidth-wise, independent. For example, a network with 12 stations can be divided into two networks, each with 6 stations.

Now each network has a capacity of 10 Mbps. The 10-Mbps capacity in each segment or subnet is shared between 6 stations (actually 7, because the bridge acts as a station in each segment), not 12 stations.

In a network with a heavy load, each station theoretically is offered 10/6 Mbps instead of 10/12 Mbps, assuming that the traffic is not going through the bridge. If we further divide the network, we can gain more bandwidth for each segment. For example, if we use a four-port bridge, each station is now offered 10/3 Mbps, which is 4 times more than in an unbridged network.
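The bandwidth arithmetic of the worked example above can be sketched in a few lines. The helper is purely illustrative (it follows the text in ignoring the bridge's own port for the theoretical case):

```python
# Sketch of the per-station bandwidth arithmetic: in a shared segment every
# station competes for the full capacity, so splitting the LAN with a
# bridge raises each station's theoretical share. Figures follow the
# report's worked example: 12 stations on a 10 Mbps Ethernet.

def per_station_mbps(total_mbps, stations, segments=1):
    """Theoretical per-station share when stations split evenly across segments."""
    per_segment = stations / segments
    return total_mbps / per_segment

unbridged = per_station_mbps(10, 12)      # 10/12 Mbps, all 12 stations share one segment
two_way   = per_station_mbps(10, 12, 2)   # 10/6 Mbps after a two-port bridge
four_way  = per_station_mbps(10, 12, 4)   # 10/3 Mbps after a four-port bridge

print(round(four_way / unbridged))        # 4: four-way bridging quadruples each share
```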


Fig. 2.2: Without bridging



Fig. 2.3: With bridging

2.1.2  Switched Ethernet
The idea of a bridged LAN can be extended to a switched LAN. Instead of having two to four networks, why not have N networks, where N is the number of stations on the LAN? In other words, if we can have a multiple-port bridge, why not have an N-port switch? One of the limitations of 10Base5 and 10Base2 is that communication is half-duplex: a station can either send or receive, but may not do both at the same time (10Base-T, in contrast, can operate full-duplex).

The next step in the evolution was to move from switched Ethernet to full-duplex switched Ethernet. The full-duplex mode increases the capacity of each domain from 10 to 20 Mbps. Figure 2.5 shows a switched Ethernet in full-duplex mode. Note that instead of using one link between the station and the switch, the configuration uses two links: one to transmit and one to receive.
                                    


Fig. 2.4: Switched Ethernet

Fig. 2.5: Full-duplex switched Ethernet

2.1.3  FAST ETHERNET / IEEE 802.3u
Fast Ethernet was designed to compete with LAN protocols such as FDDI or Fibre Channel. IEEE created Fast Ethernet under the name 802.3u.

The goals of Fast Ethernet can be summarized as follows:

1. Upgrade the data rate to 100 Mbps.
2. Make it compatible with Standard Ethernet.
3. Keep the same 48-bit address.
4. Keep the same frame format.
5. Keep the same minimum and maximum frame lengths.

2.1.4  GIGABIT ETHERNET / IEEE 802.3z
The goals of the  Gigabit Ethernet design can be summarized as follows:
1. Upgrade the data rate to 1 Gbps.
2. Make it compatible with Standard or Fast Ethernet.
3. Use the same 48-bit address.
4. Use the same frame format.
5. Keep the same minimum and maximum frame lengths.
6. Support autonegotiation as defined in Fast Ethernet.


2.1.5  Ten-Gigabit Ethernet
The IEEE committee created Ten-Gigabit Ethernet and called it Standard 802.3ae.
The  goals of the Ten-Gigabit Ethernet design can be summarized as follows:
1. Upgrade the data rate to 10 Gbps.
2. Make it compatible with Standard, Fast, and Gigabit Ethernet.
3. Use the same 48-bit address.
4. Use the same frame format.
5. Keep the same minimum and maximum frame lengths.
6. Allow the interconnection of existing LANs into a metropolitan area network (MAN)
or a wide area network (WAN).
7. Make Ethernet compatible with technologies such as Frame Relay and ATM.
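A recurring goal in the lists above is keeping the same 48-bit addresses, frame format, and frame lengths across every generation. As a minimal illustration (the helper below is hypothetical; `zlib.crc32` happens to use the same CRC-32 polynomial as the Ethernet FCS, but real NICs add the preamble and handle bit ordering in hardware):

```python
# Minimal sketch of the frame layout that all Ethernet generations share:
# 6-byte destination, 6-byte source, 2-byte type, payload padded to at
# least 46 bytes, then a 4-byte CRC-32 frame check sequence (FCS).
import struct
import zlib

def build_frame(dst: bytes, src: bytes, ethertype: int, payload: bytes) -> bytes:
    assert len(dst) == 6 and len(src) == 6, "addresses are 48 bits"
    payload = payload.ljust(46, b"\x00")            # pad to the minimum payload
    header = dst + src + struct.pack("!H", ethertype)
    fcs = struct.pack("<I", zlib.crc32(header + payload))
    return header + payload + fcs

frame = build_frame(b"\xff" * 6,                    # broadcast destination
                    b"\x02\x00\x00\x00\x00\x01",    # locally administered source
                    0x0800,                         # IPv4 ethertype
                    b"hello")
print(len(frame))  # 64: the minimum Ethernet frame size
```

Because this layout is identical from 10 Mbps to 10 Gbps, the same frame can cross links of every speed without re-framing, which is exactly what goals 2 and 4 in each list guarantee.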

 2.2    Gigabit Ethernet Alliance
             The Gigabit Ethernet Alliance was established in order to promote standards-based Gigabit Ethernet technology and to encourage the use and implementation of Gigabit Ethernet as a key networking technology for connecting various computing, data and telecommunications devices. The charter of the Gigabit Ethernet Alliance includes:

·         Supporting the Gigabit Ethernet standards effort conducted in the IEEE 802.3 working group

·         Contributing resources to facilitate convergence and consensus on technical specifications

·         Promoting industry awareness, acceptance, and advancement of the 10 Gigabit Ethernet standard

·         Accelerating the adoption and usage of 10 Gigabit Ethernet products and services

·         Providing resources to establish and demonstrate multi-vendor interoperability, and generally encouraging and promoting interoperability and interoperability events

·         Fostering communications between suppliers and users of 10 Gigabit Ethernet technology and products

2.3     Goals and Criteria of the Proposed Standard
             The purpose of the proposed 10 Gigabit Ethernet standard is to extend the 802.3 protocols to an operating speed of 10 Gbps and to expand the Ethernet application space to include WAN links. This will provide a significant increase in bandwidth while maintaining maximum compatibility with the installed base of 802.3 interfaces, previous investment in research and development, and principles of network operation and management.
              In order to be adopted as a standard, the IEEE's 802.3ae Task Force has established five criteria that the new 10 Gigabit Ethernet proposed standard must meet:
1. It must have broad market potential, supporting a broad set of applications, with multiple vendors supporting it, and multiple classes of customers.
2. It must be compatible with other existing 802.3 protocol standards, as well as with both Open Systems Interconnection (OSI) and Simple Network Management Protocol (SNMP) management specifications.
3. It must be substantially different from other 802.3 standards, making it a unique solution for a problem rather than an alternative solution.
4. It must have demonstrated technical feasibility prior to final ratification.
5. It must be economically feasible for customers to deploy, providing reasonable cost, including all installation and management costs, for the expected performance increase.

2.4       Gigabit Ethernet Standard
            Under the International Organization for Standardization's Open Systems Interconnection (OSI) model, Ethernet is fundamentally a Layer 2 protocol. 10 Gigabit Ethernet uses the IEEE 802.3 Ethernet Media Access Control (MAC) protocol, the IEEE 802.3 Ethernet frame format, and the minimum and maximum IEEE 802.3 frame sizes. Just as 1000BASE-X and 1000BASE-T (Gigabit Ethernet) remained true to the Ethernet model, 10 Gigabit Ethernet continues the natural evolution of Ethernet in speed and distance. Since it is a full-duplex-only and fiber-only technology, it does not need the carrier sense multiple access with collision detection (CSMA/CD) protocol that defines slower, half-duplex Ethernet technologies. In every other respect, 10 Gigabit Ethernet remains true to the original Ethernet model.

                  An Ethernet physical layer device (PHY), which corresponds to Layer 1 of the OSI model, connects the media (optical or copper) to the MAC layer, which corresponds to OSI Layer 2. Ethernet architecture further divides the PHY (Layer 1) into a Physical Media Dependent sublayer (PMD) and a Physical Coding Sublayer (PCS). Optical transceivers, for example, are PMDs. The PCS is made up of coding (e.g., 64B/66B) and serializer or multiplexing functions.

                  The 802.3ae specification defines two PHY types: the LAN PHY and the WAN PHY (discussed below). The WAN PHY has an extended feature set added onto the functions of a LAN PHY. These PHYs are solely distinguished by the PCS. There will also be a number of PMD types.

                           Fig. 2.6: The architectural components of the 802.3ae standard
  2.5     Gigabit Ethernet in the Marketplace
             The accelerating growth of worldwide network traffic is forcing service providers, enterprise network managers and architects to look to ever higher-speed network technologies in order to solve the bandwidth demand crunch. Today, these administrators typically use Ethernet as their backbone technology. Although networks face many different issues, Gigabit Ethernet meets several key criteria for efficient and effective high-speed networks:

·         Easy, straightforward migration to higher performance levels without disruption

·         Lower cost of ownership than current alternative technologies, including both acquisition and support costs

·         Familiar management tools and a common skills base

·         Ability to support new applications and data types

·         Flexibility in network design

·         Multiple vendor sourcing and proven interoperability

Managers of enterprise and service provider networks have to make many choices when they design networks. They have multiple media, technologies, and interfaces to choose from to build campus and metro connections: Ethernet (100, 1000, and 10,000 Mbps), OC-12 (622 Mbps) and OC-48 (2.488 Gbps) SONET or equivalent SDH networks, packet over SONET/SDH (POS), and the newly authorized IEEE 802 Task Force (802.17) titled Resilient Packet Ring.

Network topological design and operation has been transformed by the advent of intelligent Gigabit Ethernet multi-layer switches. In LANs, core network technology is rapidly shifting to Gigabit Ethernet, and there is a growing trend towards Gigabit Ethernet networks that can operate over metropolitan area distances. The next step for enterprise and service provider networks is the combination of multi-gigabit bandwidth with intelligent services, leading to scaled, intelligent, multi-gigabit networks with backbone and server connections ranging up to 10 Gbps.

                  In response to market trends, Gigabit Ethernet is currently being deployed over tens of kilometers in private networks. With 10 Gigabit Ethernet, the industry has developed a way to not only increase the speed of Ethernet to 10 Gbps but also to extend its operating distance and interconnectivity. In the future, network managers will be able to use 10 Gigabit Ethernet as a cornerstone for network architectures that encompass LANs, MANs and WANs using Ethernet as the end-to-end, Layer 2 transport method.

                  Ethernet bandwidth can then be scaled from 10 Mbps to 10 Gbps (a ratio of 1 to 1000) without compromising intelligent network services such as Layer 3 routing and Layer 4 to Layer 7 intelligence, including quality of service (QoS), class of service (CoS), caching, server load balancing, security, and policy-based networking capabilities. Because of the uniform nature of Ethernet across all environments when IEEE 802.3ae is deployed, these services can be delivered at line rates over the network and supported over all network physical infrastructures in the LAN, MAN, and WAN. At that point, convergence of voice and data networks, both running over Ethernet, becomes a very real option. And, as TCP/IP incorporates enhanced services and features, such as packetized voice and video, the underlying Ethernet can also carry these services without modification.
                  As we have seen with previous versions of Ethernet, the cost of 10 Gbps communications has the potential to drop significantly with the development of new technologies. In contrast to 10 Gbps telecommunications lasers, the 10 Gigabit Ethernet short links (less than 40km over single-mode (SM) fiber) will be capable of using lower-cost, uncooled optics and, in some cases, vertical cavity surface emitting lasers (VCSELs), which have the potential to lower PMD costs. In addition, the industry is supported by an aggressive merchant chip market that provides highly integrated silicon solutions. Finally, the Ethernet market tends to spawn highly competitive start-ups with each new generation of technology to compete with established Ethernet vendors.

2.6     Interoperability Demos
             One of the keys to Ethernet's success is the widespread interoperability between vendors. In keeping with its mission to provide resources to establish and demonstrate multi-vendor interoperability of 10 Gigabit Ethernet products, the 10 GEA hosted the world's largest 10 Gigabit Ethernet interoperability network in May 2002. The live, multi-vendor network was on display at the NetWorld+Interop trade show in Las Vegas, Nevada. The network was also on display at SuperComm, June 4-7, 2002, in Atlanta, Georgia. Comprising products from 23 vendors, the network included a comprehensive range of products: systems, test equipment, components and cabling. The end-to-end 10GbE network was over 200 kilometers long and showcased five of the seven PMD port types specified in the IEEE 802.3ae draft: 10GBASE-LR, 10GBASE-ER, 10GBASE-SR, 10GBASE-LW and 10GBASE-LX4. The network boasted 10 network hops and 18 10GbE links, and represented all aspects of the technology: WAN, MAN and LAN. As part of the demonstration, 12 companies showed chip-to-chip communication over the IEEE 802.3ae XAUI interface. The collection of products and technologies illustrated years of industry collaboration and signaled to the market that 10 Gigabit Ethernet is ready to be deployed and implemented into networks around the world.

Fig. 2.7: World's largest 10 Gigabit Ethernet interoperability demonstration
2.7     Applications for 10 Gigabit Ethernet

2.7.1   Ten Gigabit Ethernet in the Metro
             Vendors and users generally agree that Ethernet is inexpensive, well understood, widely deployed and backwards compatible from Gigabit switched down to 10 Megabit shared. Today a packet can leave a server on a short-haul optic Gigabit Ethernet port, move cross-country via a DWDM (dense wave division multiplexing) network, and find its way down to a PC attached to a “thin coax” BNC (Bayonet Neill Concelman) connector, all without any re-framing or protocol conversion. Ethernet is literally everywhere, and 10 Gigabit Ethernet maintains this seamless migration in functionality.

                Gigabit Ethernet is already being deployed as a backbone technology for dark fiber metropolitan networks. With appropriate 10 Gigabit Ethernet interfaces, optical transceivers and single-mode fiber, service providers will be able to build links reaching 40km or more. (See Figure 2.8.)

Fig. 2.8: Ten Gigabit Ethernet use in the MAN

2.7.2  Gigabit Ethernet in Local Area Networks
             Ethernet technology is already the most deployed technology for high-performance LAN environments. With the extension of 10 Gigabit Ethernet into the family of Ethernet technologies, the LAN can now reach farther and support upcoming bandwidth-hungry applications. Like Gigabit Ethernet, the proposed 10 Gigabit standard supports both single-mode and multi-mode fiber media. However, the supported distance over single-mode fiber has expanded from the 5km of Gigabit Ethernet to 40km in 10 Gigabit Ethernet.
                  The advantage of supporting longer distances is that it gives companies who manage their own LAN environments the option of extending their data centers to more cost-effective locations up to 40km away from their campuses. This also allows them to support multiple campus locations within that 40km range. Within data centers, switch-to-switch applications, as well as switch-to-server applications, can also be deployed over a more cost-effective multi-mode fiber medium to create 10 Gigabit Ethernet backbones that support the continuous growth of bandwidth-hungry applications. (See Figure 2.9.)

                                      Fig. 2.9: 10 Gigabit Ethernet use in an expanded LAN environment

With 10 Gigabit backbones installed, companies will have the capability to begin providing Gigabit Ethernet service to workstations and, eventually, to the desktop in order to support applications such as streaming video, medical imaging, centralized applications, and high-end graphics. 10 Gigabit Ethernet will also provide lower network latency, thanks to the speed of the link and to the over-provisioning of bandwidth to compensate for the bursty nature of data in enterprise applications.

2.7.3  10 Gigabit Ethernet in the Storage Area Network
            Additionally, 10 Gigabit Ethernet will provide infrastructure for both network-attached storage (NAS) and storage area networks (SAN). Prior to the introduction of 10 Gigabit Ethernet, some industry observers maintained that Ethernet lacked sufficient horsepower to get the job done. Ethernet, they said, just doesn't have what it takes to move "dump truck loads worth of data." 10 Gigabit Ethernet can now offer equivalent or superior data-carrying capacity at similar latencies to many other storage networking technologies, including 1 or 2 Gigabit Fibre Channel, Ultra160 or 320 SCSI, ATM OC-3, OC-12 and OC-192, and HIPPI (High Performance Parallel Interface). While Gigabit Ethernet storage servers, tape libraries and compute servers are already available, users should look for early availability of 10 Gigabit Ethernet end-point devices in the second half of 2001. There are numerous applications for Gigabit Ethernet in storage networks today, which will seamlessly extend to 10 Gigabit Ethernet as it becomes available. (See Figure 2.10.) These include:

·         Business continuance/disaster recovery

·         Remote backup

·         Storage on demand

·         Streaming media

                 Fig. 2.10: Ten Gigabit Ethernet in the storage area network

2.7.4  Ten Gigabit Ethernet in Wide Area Networks
             Gigabit Ethernet will enable Internet service providers (ISPs) and network service providers (NSPs) to create very high-speed links at very low cost between co-located, carrier-class switches and routers and optical equipment that is directly attached to the SONET/SDH cloud. 10 Gigabit Ethernet with the WAN PHY will also allow the construction of WANs that connect geographically dispersed LANs between campuses or POPs (points of presence) over existing SONET/SDH/TDM networks. 10 Gigabit Ethernet links between a service provider's switch and a DWDM (dense wave division multiplexing) device or LTE (line termination equipment) might in fact be very short, less than 300 meters. (See Figure 2.11.)
                   Fig. 2.11: Ten Gigabit Ethernet in wide area networks

2.8      10 Gigabit Ethernet Technology: 10GbE Chip Interfaces
                  Among the many technical innovations of the 10 Gigabit Ethernet Task Force is an interface called the XAUI (10 Gigabit Attachment Unit Interface). It is a MAC-PHY interface, serving as an alternative to the XGMII (10 Gigabit Media Independent Interface). XAUI is a low pin-count differential interface that enables lower design costs for system vendors. The XAUI is designed as an interface extender for the XGMII. The XGMII is a 74-signal-wide interface (32-bit data paths for each of transmit and receive) that may be used to attach the Ethernet MAC to its PHY. The XAUI may be used in place of, or to extend, the XGMII in chip-to-chip applications typical of most Ethernet MAC-to-PHY interconnects. (See Figure 2.12.)

                  The XAUI is a low pin-count, self-clocked serial bus that is directly evolved from the Gigabit Ethernet 1000BASE-X PHY. The XAUI interface speed is 2.5 times that of 1000BASE-X. By arranging four serial lanes, the four-lane XAUI interface supports the ten-times data throughput required by 10 Gigabit Ethernet.
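The lane arithmetic behind those figures can be checked directly. The numbers below are the standard XAUI values (four lanes at 3.125 Gbaud with 8B/10B coding), which the report implies but does not spell out:

```python
# Rough arithmetic behind the XAUI figures: four self-clocked serial lanes
# at 3.125 Gbaud each, with 8B/10B coding carrying 8 data bits for every
# 10 bits on the wire.

LANES = 4
LANE_RATE_GBAUD = 3.125
GBE_LANE_RATE_GBAUD = 1.25             # 1000BASE-X serial rate, for comparison

aggregate_baud = LANES * LANE_RATE_GBAUD   # 12.5 Gbaud total on the wire
data_rate = aggregate_baud * 8 / 10        # 8B/10B leaves 10.0 Gbps of payload

print(aggregate_baud, data_rate)           # 12.5 10.0
print(LANE_RATE_GBAUD / GBE_LANE_RATE_GBAUD)  # 2.5, matching the text
```

So four lanes at 2.5 times the 1000BASE-X rate yield exactly the ten-times throughput the text describes, with the 8B/10B overhead absorbed by the faster lanes.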
                 Fig. 2.12: XAUI functions as an extender interface between the MAC and the PCS

2.9     Physical Media Dependent Sublayers (PMDs)
             The IEEE 802.3ae Task Force has developed a draft standard that provides a physical layer supporting link distances for fiber optic media. To meet these distance objectives, four PMDs were selected. The task force selected a 1310 nanometer serial PMD to meet its 2km and 10km single-mode fiber (SMF) objectives. It also selected a 1550 nm serial solution to meet (or exceed) its 40km SMF objective. Support of the 40km PMD is an acknowledgement that Gigabit Ethernet is already being successfully deployed in metropolitan and private long-distance applications. An 850 nanometer PMD was specified to achieve a 65-meter objective over multimode fiber using serial 850 nm transceivers.

              Additionally, the task force selected two versions of the wide wave division multiplexing (WWDM) PMD: a 1310 nanometer version over single-mode fiber to travel a distance of 10km, and a 1310 nanometer PMD to meet its 300-meter objective over installed multimode fiber. The LAN PHY and the WAN PHY will operate over common PMDs and, therefore, will support the same distances. These PHYs are distinguished solely by the Physical Coding Sublayer (PCS). The 10 Gigabit LAN PHY is intended to support existing Gigabit Ethernet applications at ten times the bandwidth with the most cost-effective solution. Over time, it is expected that the LAN PHY will be used in pure optical switching environments extending over all WAN distances. However, for compatibility with the existing WAN network, the 10 Gigabit Ethernet WAN PHY supports connections to existing and future installations of SONET/SDH (Synchronous Optical Network/Synchronous Digital Hierarchy) circuit-switched telephony access equipment.

              The WAN PHY differs from the LAN PHY by including a simplified SONET/SDH framer in the WAN Interface Sublayer (WIS). Because the line rate of SONET OC-192/SDH STM-64 is within a few percent of 10 Gbps, it is relatively simple to implement a MAC that can operate with a LAN PHY at 10 Gbps or with a WAN PHY payload rate of approximately 9.29 Gbps. (See Figure 2.13.) Appendix III provides a more in-depth look at the WAN PHY.
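The ~9.29 Gbps figure can be reconstructed from standard SONET numbers, which the report quotes only as a result. As a hedged sketch: an OC-192 line runs at 9.95328 Gbps, the STS-192c envelope leaves 9.58464 Gbps of payload capacity, and 64B/66B coding inside that payload leaves 64/66 of it for MAC data.

```python
# Sketch of where the WAN PHY payload rate comes from. The SONET figures
# below are the published OC-192/STS-192c values, not taken from this
# report; the last step applies the 64B/66B coding ratio.

OC192_LINE_GBPS = 9.95328        # OC-192 / STM-64 line rate
SPE_PAYLOAD_GBPS = 9.58464       # STS-192c synchronous payload envelope capacity

wan_phy_rate = SPE_PAYLOAD_GBPS * 64 / 66   # data rate after 64B/66B coding

print(round(wan_phy_rate, 4))    # ~9.2942 Gbps, matching the "approximately 9.29" in the text
```

This is why the WAN PHY's MAC must be throttled slightly below the LAN PHY's 10 Gbps: the SONET envelope, not the optics, sets the ceiling.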


Fig. 2.13: Conceptual diagram of PHYs and PMDs
2.10   Standardization
            Notwithstanding its technical merits, timely standardization was instrumental to the success of Ethernet. It required well-coordinated and partly competitive activities in several standardization bodies such as the IEEE, ECMA, IEC, and finally ISO.
In February 1980, the IEEE started a project, IEEE 802, for the standardization of local area networks (LANs).
            The "DIX group", with Gary Robinson (DEC), Phil Arst (Intel) and Bob Printis (Xerox), submitted the so-called "Blue Book" CSMA/CD specification as a candidate for the LAN specification. Since IEEE membership is open to all professionals, including students, the group received countless comments on this brand-new technology. In addition to CSMA/CD, Token Ring (supported by IBM) and Token Bus (selected and henceforward supported by General Motors) were also considered as candidates for a LAN standard. Because the goal of IEEE 802 was to forward only one standard, and because of the strong company support for all three designs, the necessary agreement on a LAN standard was significantly delayed.
            In the Ethernet camp, the delay put at risk the market introduction of the Xerox Star workstation and 3Com's Ethernet LAN products. With such business implications in mind, David Liddle (General Manager, Xerox Office Systems) and Metcalfe (3Com) strongly supported a proposal by Fritz Röscheisen (Siemens Private Networks) for an alliance in the emerging office communication market, including Siemens' support for the international standardization of Ethernet (April 10, 1981). Ingrid Fromm, Siemens' representative to IEEE 802, quickly achieved broader support for Ethernet beyond IEEE through the establishment of a competing Task Group "Local Networks" within the European standards body ECMA TC24. As early as March 1982, ECMA TC24 with its corporate members reached agreement on a standard for CSMA/CD based on the IEEE 802 draft. The speedy action taken by ECMA decisively contributed to the conciliation of opinions within IEEE and the approval of IEEE 802.3 CSMA/CD by the end of 1982. Approval of Ethernet on the international level was achieved by a similar cross-partisan action, with Fromm as liaison officer working to integrate IEC TC83 and ISO TC97SC6, and the ISO/IEEE 802/3 standard was approved in 1984.
2.11   CSMA/CD shared medium Ethernet
            Ethernet originally used a shared coaxial cable (the shared medium) winding around a building or campus to every attached machine. A scheme known as carrier sense multiple access with collision detection (CSMA/CD) governed the way the computers shared the channel. This scheme was simpler than the competing token ring or token bus technologies. When a computer wanted to send some information, it used the following algorithm:
1.      Frame ready for transmission.
2.      Is medium idle? If not, wait until it becomes idle, then wait the interframe gap period (9.6 µs in 10 Mbit/s Ethernet).
3.      Start transmitting.
4.      Did a collision occur? If so, go to collision detected procedure.
5.      Reset retransmission counters and end frame transmission.
This can be likened to what happens at a dinner party, where all the guests talk to each other through a common medium (the air). Before speaking, each guest politely waits for the current speaker to finish. If two guests start speaking at the same time, both stop and wait for short, random periods of time (in Ethernet, this time is generally measured in microseconds). The hope is that, with each guest choosing a random period of time, both will not choose the same moment to try to speak again, thus avoiding another collision. Exponentially increasing back-off times (determined using the truncated binary exponential backoff algorithm) are used when there is more than one failed attempt to transmit.
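The numbered steps above, together with the truncated binary exponential backoff just described, can be sketched as follows. This is a simplified conceptual model, not driver code: the medium and collision detection are stood in for by caller-supplied functions, and real timing (slot waits, interframe gap) is only noted in comments.

```python
import random

MAX_BACKOFF_EXP = 10   # backoff window stops growing after 10 collisions
MAX_ATTEMPTS = 16      # classic Ethernet drops the frame after 16 failed attempts

def backoff_slots(collisions: int) -> int:
    """Truncated binary exponential backoff: pick a wait of 0..2^k - 1 slot
    times, where k = min(collisions, 10)."""
    k = min(collisions, MAX_BACKOFF_EXP)
    return random.randint(0, 2 ** k - 1)

def transmit(frame, medium_idle, collision_occurred):
    """Sketch of the CSMA/CD transmit loop from the numbered steps above.
    medium_idle() and collision_occurred() stand in for real carrier-sense
    and collision-detect hardware. Returns the attempt number on success."""
    for attempt in range(MAX_ATTEMPTS):
        while not medium_idle():        # step 2: defer until the medium is free
            pass                        # (then wait the interframe gap, omitted here)
        # step 3: start transmitting the frame
        if not collision_occurred():    # step 4: did a collision occur?
            return attempt              # step 5: reset counters, transmission done
        # collision detected: send jam signal, back off a random number of slots
        wait = backoff_slots(attempt + 1)
        del wait                        # a real NIC would now idle for `wait` slot times
    raise RuntimeError("excessive collisions: frame dropped after 16 attempts")
```

For example, `transmit(b"data", lambda: True, lambda: random.random() < 0.3)` models a 30% collision probability per attempt: early retries draw from a small backoff window, later retries from an exponentially larger one.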
Computers were connected to an Attachment Unit Interface (AUI) transceiver, which was in turn connected to the cable (later with thin Ethernet the transceiver was integrated into the network adapter). While a simple passive wire was highly reliable for small Ethernets, it was not reliable for large extended networks, where damage to the wire in a single place, or a single bad connector, could make the whole Ethernet segment unusable. Multipoint systems are also prone to very strange failure modes when an electrical discontinuity reflects the signal in such a manner that some nodes would work properly while others work slowly because of excessive retries or not at all (see standing wave for an explanation of why); these could be much more painful to diagnose than a complete failure of the segment. Debugging such failures often involved several people crawling around wiggling connectors while others watched the displays of computers running a ping command and shouted out reports as performance changed.
Since all communications happen on the same wire, any information sent by one computer is received by all, even if that information is intended for just one destination. The network interface card interrupts the CPU only when applicable packets are received: the card ignores information not addressed to it unless it is put into "promiscuous mode". This "one speaks, all listen" property is a security weakness of shared-medium Ethernet, since a node on an Ethernet network can eavesdrop on all traffic on the wire if it so chooses. Use of a single cable also means that the bandwidth is shared, so that network traffic can slow to a crawl when, for example, the network and nodes restart after a power failure.
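The "one speaks, all listen" filtering behavior described above can be sketched as a toy model: every frame on the shared cable reaches every interface, and each card passes a frame up only if it is addressed to it (or broadcast), unless the card is in promiscuous mode. The class and field names here are illustrative, not any real driver API.

```python
BROADCAST = "ff:ff:ff:ff:ff:ff"

class NicModel:
    """Toy model of shared-medium frame filtering at the network interface."""

    def __init__(self, mac: str, promiscuous: bool = False):
        self.mac = mac
        self.promiscuous = promiscuous
        self.received = []              # frames the card passed up to the host

    def on_wire(self, dst: str, payload: bytes):
        """Every frame on the cable reaches every NIC; only matching (or
        broadcast) destinations are passed up -- unless promiscuous."""
        if self.promiscuous or dst in (self.mac, BROADCAST):
            self.received.append((dst, payload))
        # otherwise the hardware filter silently ignores the frame

# Two stations on the same shared segment: a normal one and an eavesdropper.
a = NicModel("aa:aa:aa:aa:aa:aa")
snoop = NicModel("cc:cc:cc:cc:cc:cc", promiscuous=True)
for nic in (a, snoop):
    nic.on_wire("aa:aa:aa:aa:aa:aa", b"for A")
    nic.on_wire("bb:bb:bb:bb:bb:bb", b"for B")

print(len(a.received))      # prints 1: A sees only its own frame
print(len(snoop.received))  # prints 2: the promiscuous node sees both
```

The eavesdropping weakness follows directly: nothing on the wire prevents `snoop` from recording traffic addressed to others; filtering is a courtesy of the receiving hardware, not a property of the medium.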
2.12   Repeaters and hubs
            For signal degradation and timing reasons, coaxial Ethernet segments had a restricted size which depended on the medium used. For example, 10BASE5 coax cables had a maximum length of 500 meters (1,640 ft). Also, as was the case with most other high-speed buses, Ethernet segments had to be terminated with a resistor at each end. For coaxial-cable-based Ethernet, each end of the cable had a 50 ohm (Ω) resistor attached. Typically this resistor was built into a male BNC or N connector and attached to the last device on the bus, or, if vampire taps were in use, to the end of the cable just past the last device. If termination was not done, or if there was a break in the cable, the AC signal on the bus was reflected, rather than dissipated, when it reached the end. This reflected signal was indistinguishable from a collision, and so no communication would be able to take place.
            A greater length could be obtained by an Ethernet repeater, which took the signal from one Ethernet cable and repeated it onto another cable. If a collision was detected, the repeater transmitted a jam signal onto all ports to ensure collision detection. Repeaters could be used to connect segments such that there were up to five Ethernet segments between any two hosts, three of which could have attached devices. Repeaters could detect an improperly terminated link from the continuous collisions and stop forwarding data from it. Hence they alleviated the problem of cable breakages: when an Ethernet coax segment broke, all devices on that segment were unable to communicate, but repeaters allowed the other segments to continue working, although depending on which segment was broken and the layout of the network, the resulting partitioning may have left other segments unable to reach important servers and thus effectively useless.
            People recognized the advantages of cabling in a star topology, primarily that only faults at the star point will result in a badly partitioned network, and network vendors began creating repeaters having multiple ports, thus reducing the number of repeaters required at the star point. Multiport Ethernet repeaters became known as "Ethernet hubs". Network vendors such as DEC and SynOptics sold hubs that connected many 10BASE2 thin coaxial segments. There were also "multi-port transceivers" or "fan-outs", a well-known early example being DEC's DELNI. These could be connected to each other and/or to a coax backbone, allowed multiple hosts with AUI connections to share a single transceiver, and also allowed creation of a small standalone Ethernet segment without using a coaxial cable. Ethernet on unshielded twisted-pair cables (UTP), beginning with StarLAN and continuing with 10BASE-T, was designed for point-to-point links only, with all termination built into the device. This changed hubs from a specialist device used at the center of large networks to a device that every twisted-pair-based network with more than two machines had to use. The resulting tree structure made Ethernet networks more reliable by preventing faults with (but not deliberate misbehavior of) one peer or its associated cable from affecting other devices on the network, although a failure of a hub or an inter-hub link could still affect many users. Also, since twisted-pair Ethernet is point-to-point and terminated inside the hardware, the total empty panel space required around a port is much reduced, making it easier to design hubs with many ports and to integrate Ethernet onto computer motherboards.
            Despite the physical star topology, hubbed Ethernet networks still use half-duplex and CSMA/CD, with only minimal activity by the hub, primarily the collision enforcement signal, in dealing with packet collisions. Every packet is sent to every port on the hub, so bandwidth and security problems are not addressed. The total throughput of the hub is limited to that of a single link, and all links must operate at the same speed. Collisions reduce throughput by their very nature. In the worst case, when there are many hosts with long cables that attempt to transmit many short frames, excessive collisions can reduce throughput dramatically. However, a Xerox report in 1980 summarized the results of having 20 fast nodes attempting to transmit packets of various sizes as quickly as possible on the same Ethernet segment. The results showed that, even for the smallest Ethernet frames (64 B), 90% throughput on the LAN was the norm. This is in comparison with token-passing LANs (token ring, token bus), all of which suffer throughput degradation as each new node comes into the LAN, due to token waits. The report was controversial, as modeling had suggested that collision-based networks became unstable under loads as low as 40% of nominal capacity. Many early researchers failed to understand the subtleties of the CSMA/CD protocol and how important it was to get the details right, and were really modeling somewhat different networks (usually not as good as real Ethernet).
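Separate from collision losses, Ethernet also pays a fixed per-frame overhead on the wire: an 8-byte preamble (including the start-of-frame delimiter) plus a 12-byte-time interframe gap. A quick calculation of best-case wire efficiency by frame size (collision-free, so this is an upper bound, distinct from the contention-based utilization figures above):

```python
PREAMBLE = 8   # preamble + start-of-frame delimiter, in bytes
IFG = 12       # interframe gap, expressed in byte times

def wire_efficiency(frame_bytes: int) -> float:
    """Fraction of wire time carrying the frame itself, assuming no collisions."""
    return frame_bytes / (frame_bytes + PREAMBLE + IFG)

for size in (64, 512, 1518):
    print(f"{size:5d}-byte frames: {wire_efficiency(size):.1%}")
# 64-byte frames:  76.2%
# 512-byte frames: 96.2%
# 1518-byte frames: 98.7%
```

Minimum-size frames thus cap out well below line rate even on an uncontended link, which is why short-frame workloads are the stress case for any measurement like the Xerox one cited above.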





CHAPTER-3
CONCLUSION
As the Internet transforms longstanding business models and global economies, Ethernet has withstood the test of time to become the most widely adopted networking technology in the world. Much of the world's data transfer begins and ends with an Ethernet connection. Today, we are in the midst of an Ethernet renaissance, spurred on by surging e-business and the demand for low-cost IP services that have opened the door to questioning traditional networking dogma. Service providers are looking for higher-capacity solutions that simplify and reduce the total cost of network connectivity, thus permitting profitable service differentiation while maintaining very high levels of reliability. Enter 10 Gigabit Ethernet. Ethernet is no longer designed only for the LAN. 10 Gigabit Ethernet is the natural evolution of the well-established IEEE 802.3 standard in speed and distance, and it extends Ethernet's proven value set and economics to metropolitan and wide area networks. An Ethernet-optimized infrastructure build-out is taking place: the metro area is currently the focus of intense network development to deliver optical Ethernet services, and 10 Gigabit Ethernet is on the roadmaps of most switch, router, and metro optical system vendors.




REFERENCES

1. http://www.10gea.org
2. http://standards.ieee.org/resources/glance.html
3. IEEE 802 LAN/MAN Standards Committee
4. IEEE 802.3 CSMA/CD (Ethernet)
5. IEEE P802.3ae 10Gb/s Ethernet Task Force











