The network layer provides the means of transferring variable-length network packets from a source to a destination host via one or more networks. Within the service layering semantics of the OSI network architecture, the network layer responds to service requests from the transport layer and issues service requests to the data link layer.
Functions of the network layer include:
Connectionless communication
For example, IP is connectionless, in that a data packet can travel from a sender to a recipient without the recipient having to send an acknowledgement. Connection-oriented protocols exist at other, higher layers of the OSI model.
Host addressing
Every host in the network must have a unique address that identifies where it is. This address is normally assigned from a hierarchical system. For example, you can be “Fred Murphy” to people in your house, “Fred Murphy, 1 Main Street” to Dubliners, “Fred Murphy, 1 Main Street, Dublin” to people in Ireland, or “Fred Murphy, 1 Main Street, Dublin, Ireland” to people anywhere in the world. On the Internet, addresses are known as IP addresses.
Since many networks are partitioned into subnetworks and connect to other networks for wide-area communications, networks use specialized hosts, called gateways or routers, to forward packets between networks.
A host (also known as a “network host”) is a computer or other device that communicates with other hosts on a network. Hosts on a network include clients and servers, which send or receive data, services, or applications.
A router forwards data packets between networks. It is connected to at least two networks, commonly two LANs or WANs, or a LAN and its ISP’s network. Routers are located at gateways, the places where two or more networks connect. Routers use packet headers and forwarding tables to determine the best path for forwarding the packets, and they use routing protocols to communicate with each other and establish the best route between any two hosts.
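The forwarding-table lookup described above can be sketched as a longest-prefix match: the most specific prefix that contains the destination wins. The prefixes and next-hop addresses below are invented for illustration; real routers populate such tables via routing protocols.

```python
import ipaddress

# Hypothetical forwarding table: destination prefix -> next hop.
forwarding_table = {
    ipaddress.ip_network("10.0.0.0/8"): "192.168.1.1",
    ipaddress.ip_network("10.1.0.0/16"): "192.168.1.2",
    ipaddress.ip_network("0.0.0.0/0"): "192.168.1.254",  # default route
}

def next_hop(dst: str) -> str:
    """Longest-prefix match: among all matching prefixes, pick the longest."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in forwarding_table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return forwarding_table[best]

print(next_hop("10.1.2.3"))   # matches 10.1.0.0/16 -> 192.168.1.2
print(next_hop("10.9.9.9"))   # matches 10.0.0.0/8  -> 192.168.1.1
print(next_hop("8.8.8.8"))    # only default route  -> 192.168.1.254
```

The default route 0.0.0.0/0 matches every address, so a packet that matches nothing more specific still has somewhere to go.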
In networks the switch is the device that filters and forwards packets between LAN segments. Switches operate at the data link layer (layer 2) and sometimes the network layer (layer 3) of the OSI Reference Model and therefore support any packet protocol. LANs that use switches to join segments are called switched LANs or, in the case of Ethernet networks, switched Ethernet LANs.
A protocol is a set of rules and guidelines for communicating data. Rules are defined for each step and process during communication between two or more computers. Networks have to follow these rules to successfully transmit data.
Data communication networks can affect businesses by being the foundations for distributed systems in which information system applications are divided among a network of computers. Data communication networks facilitate more efficient use of computers and improve the day-to-day control of a business by providing faster information flow. They also provide message transfer services to allow computer users to talk to one another via electronic mail.
Scalability is an attribute that describes the ability of a process, network, software or organization to grow and manage increased demand. A system, business or software that is described as scalable has an advantage because it is more adaptable to the changing needs or demands of its users or clients.
Compatibility is the capacity for two systems to work together without having to be altered to do so. Compatible software applications use the same data formats. For example, if word processor applications are compatible, the user should be able to open their document files in either product.
Heterogeneous computing environments are a reality today. Users purchase systems from many vendors to implement the solutions they need. Standardization and clear interfaces are critical to a heterogeneous environment, enabling users to develop strategies for communicating throughout their network.
A hub is a networking device that allows one to connect multiple PCs to a single network. Hubs may be based on Ethernet, FireWire, or USB connections. In general electronics, a switch is a control unit that turns the flow of electricity on or off in a circuit; it may also be used to route information patterns in streaming electronic data sent over networks. In the context of a network, a switch is a computer networking device that connects network segments. Because a switch forwards frames only toward their intended destination, rather than repeating them out of every port as a hub does, a switch is generally better than a hub.
A logical topology is how devices appear connected to the user. A physical topology is how they are actually interconnected with wires and cables. For example, in a shared Ethernet network that uses hubs rather than switches, the logical topology appears as if every node is connected to a common bus that runs from node to node. However, its physical topology is a star, in which every node on the network connects to a central hub.
A broadcast domain is a logical division of a computer network, in which all nodes can reach each other by broadcast at the data link layer. A broadcast domain can be within the same LAN segment or it can be bridged to other LAN segments. Routers and other higher-layer devices form boundaries between broadcast domains.
A collision domain is a network segment connected by a shared medium or through repeaters where data packets may collide with one another while being sent. Collision domains are particularly relevant in wireless networks, and they also affected early versions of Ethernet that used hubs or shared coaxial media.
The basic components of a wired LAN are the NICs, circuits, access points, and network operating systems. The network interface card (NIC) allows the computer to be physically connected to the network cable, which provides the physical-layer connection among the computers in the network. The circuits are the cables that connect devices together. In a LAN, these cables are generally twisted pair from the client to the hub or server; outside the building, fiber optic is generally used. Network hubs and switches serve two purposes. First, they provide an easy way to connect network cables (cables can also be joined directly by splicing two cables together). Second, many hubs and switches act as repeaters or amplifiers: signals can travel only so far in a network cable before they attenuate and can no longer be recognized.
A network interface card (NIC) is a circuit board or card that is installed in a computer so that it can be connected to a network. A network interface card provides the computer with a dedicated, full-time connection to a network.
Carrier Sense Multiple Access with Collision Detection (CSMA/CD) is a type of protocol for networks that helps to arbitrate transmissions and control network traffic. Collision detection is the process by which a node determines that a collision has occurred. Collisions occur on most shared-medium networks, so a protocol is required to recover from such events. Ethernet uses CSMA/CD as its collision detection and recovery system.
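The recovery side of CSMA/CD can be sketched as truncated binary exponential backoff: after each collision, a station waits a random number of slot times before retrying. The cap of 10 on the exponent is the value used by classic Ethernet; the rest of the numbers below are illustrative.

```python
import random

def backoff_slots(attempt: int) -> int:
    """CSMA/CD truncated binary exponential backoff: after the n-th
    collision, wait a random number of slot times chosen uniformly
    from 0 .. 2**min(n, 10) - 1."""
    k = min(attempt, 10)          # classic Ethernet caps the exponent at 10
    return random.randrange(2 ** k)

# After the 3rd collision a station waits between 0 and 7 slot times.
random.seed(1)
waits = [backoff_slots(3) for _ in range(1000)]
print(min(waits), max(waits))
```

Doubling the waiting range after each collision spreads retransmissions out, so repeated collisions between the same stations become increasingly unlikely.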
The correctness of a distributed algorithm is expressed through safety and liveness properties. These properties can be defined as sets of histories (traces). More specifically, a safety property is defined as a prefix-closed and limit-closed set of well-formed histories, whereas a liveness property is defined as a set of histories that permits any finite well-formed history, i.e., for every finite history there exists a continuation of that history in the liveness property.
A three-way handshake is a method used in a TCP/IP network to create a connection between a local host/client and server. It is a three-step method that requires both the client and server to exchange SYN and ACK (acknowledgment) packets before actual data communication begins.
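A minimal loopback sketch of the handshake in action: `connect()` does not return until the operating system has completed the SYN, SYN-ACK, ACK exchange with the listener. The loopback address and the port-0 trick (letting the OS pick a free port) are illustrative choices, not requirements.

```python
import socket
import threading

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def accept_one():
    conn, _ = server.accept()        # completes only after the handshake
    conn.sendall(b"hello")
    conn.close()

t = threading.Thread(target=accept_one)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))  # SYN, SYN-ACK, ACK happen here
data = b""
while len(data) < 5:                 # read until the whole reply arrives
    chunk = client.recv(5 - len(data))
    if not chunk:
        break
    data += chunk
client.close()
t.join()
server.close()
print(data)  # b'hello'
```

Note that the application never sees the SYN/ACK packets themselves; the kernel performs the three-way handshake before any application data flows.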
One of the issues which must be faced in any system is the problem of errors. We make an assumption, which is often justified, that a digital bit pattern remains constant in time and therefore information does not “decay away”. Error checking is a device we use to confirm this assumption or alert us to a failure in the system.

Virtually all forms of error checking involve adding something to the digital pattern. This increases the number of possible patterns (adding one bit to a pattern doubles the number of possibilities). If we then place rules on valid patterns, we can arrange things so that valid patterns do not become other valid patterns through small errors.

Parity is the simplest form of error checking. It adds one bit to the pattern and then requires that the modulo-2 sum of all the bits of the pattern and the parity bit have a defined answer: 0 for even parity and 1 for odd parity. An equivalent way of making the same statement is that odd (even) parity constrains there to be an odd (even) number of 1s in the pattern plus parity bit.

A CRC check can catch all single-bit errors and a large number of other errors, and unlike simple parity it is not defeated by bursts of errors. It is used extensively in disk systems, communication systems, and other places where a check on a pattern has to be maintained.
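Both checks described above can be sketched in a few lines. The 7-bit pattern and the generator polynomial x^3 + x + 1 (bit string 1011) are arbitrary illustrative choices; the CRC routine is plain long division over GF(2).

```python
def parity_bit(bits: str, even: bool = True) -> str:
    """Append one parity bit so the total number of 1s is even (or odd)."""
    ones = bits.count("1")
    p = (ones % 2) if even else (1 - ones % 2)
    return bits + str(p)

def crc_remainder(data: str, poly: str = "1011") -> str:
    """Long division over GF(2): returns the CRC remainder for a generator
    polynomial given as a bit string (here x^3 + x + 1)."""
    padded = list(data + "0" * (len(poly) - 1))   # append len(poly)-1 zeros
    for i in range(len(data)):
        if padded[i] == "1":                      # XOR the divisor in
            for j, p in enumerate(poly):
                padded[i + j] = str(int(padded[i + j]) ^ int(p))
    return "".join(padded[-(len(poly) - 1):])

codeword = parity_bit("1011010")              # even parity -> "10110100"
print(codeword)
print(crc_remainder("11010011101100"))        # -> '100'
```

Appending the remainder to the data makes the whole codeword divisible by the generator, so the receiver repeats the division and checks for a zero remainder.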
Token passing. On a local area network, token passing is a channel access method where a signal called a token is passed between nodes to authorize a node to communicate. Some types of token passing schemes do not need to explicitly send a token between systems because the process of “passing the token” is implicit.
Lost Frame
Lost/Damaged Acknowledgement
Delayed Acknowledgement
After a timeout on the sender side, a long-delayed acknowledgement might be wrongly interpreted as the acknowledgement of some other recent packet.
The above three problems are resolved by Stop-and-Wait ARQ (Automatic Repeat reQuest), which provides both error control and flow control; the delayed-acknowledgement problem in particular is resolved by introducing sequence numbers for acknowledgements as well.
Stop-and-Wait ARQ solves the three main problems, but it may cause big performance issues because the sender always waits for an acknowledgement even when it has the next packet ready to send. Consider a situation where you have a high-bandwidth connection and the propagation delay is also high (you are connected to a server in another country through a high-speed link). To solve this problem, we can send more than one packet at a time, using a larger range of sequence numbers. We will be discussing these protocols in the next articles.
So Stop-and-Wait ARQ may work fine where the propagation delay is very low, for example on LAN connections, but it performs badly on long-distance connections such as satellite links.
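A toy model of Stop-and-Wait ARQ, under an invented loss probability, showing how the alternating 1-bit sequence number lets the receiver discard duplicates caused by lost acknowledgements:

```python
import random

def stop_and_wait(frames, loss=0.3, seed=7, max_tries=50):
    """Toy Stop-and-Wait ARQ: the sender retransmits after each simulated
    timeout; a 1-bit sequence number lets the receiver reject duplicates
    caused by lost frames or lost ACKs."""
    rng = random.Random(seed)
    delivered, seq = [], 0
    for payload in frames:
        for _ in range(max_tries):
            if rng.random() < loss:            # frame lost in transit:
                continue                       # timeout -> retransmit
            if not delivered or delivered[-1][0] != seq:
                delivered.append((seq, payload))   # accept a new frame
            if rng.random() < loss:            # ACK lost: sender times out,
                continue                       # receiver sees a duplicate
            break                              # ACK received, next frame
        seq ^= 1                               # alternate 0/1
    return [p for _, p in delivered]

print(stop_and_wait(["a", "b", "c"]))
```

Despite roughly 30% of frames and ACKs being lost in this simulation, each payload is delivered exactly once and in order, which is the whole point of the sequence number.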
If no acknowledgement is received after sending 6 frames (a window’s worth in this example), the sender takes the help of a timer. After the timeout, it retransmits the unacknowledged frames. The Go-Back-N protocol also takes care of damaged frames and damaged ACKs.
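A toy Go-Back-N sender trace under an assumed single frame loss, showing that a timeout forces retransmission of every outstanding frame, even ones the receiver already saw:

```python
def go_back_n_trace(n_frames, window, lost_once):
    """Toy Go-Back-N sender: frames listed in `lost_once` are lost on their
    first transmission; a timeout slides the sender back to the first
    unacknowledged frame and it resends everything from there."""
    trace, base, lost = [], 0, set(lost_once)
    while base < n_frames:
        # send (or resend) the whole outstanding window
        window_frames = list(range(base, min(base + window, n_frames)))
        trace.append(window_frames)
        # receiver accepts frames strictly in order until one is missing
        for seq in window_frames:
            if seq in lost:
                lost.discard(seq)   # it will get through on the resend
                break
            base = seq + 1          # cumulative ACK advances the window
    return trace

# Frame 2 is lost: the sender must resend 2 and 3 even though 3 arrived.
print(go_back_n_trace(5, 3, [2]))   # -> [[0, 1, 2], [2, 3, 4]]
```

This per-round model is a simplification (a real sender pipelines new frames as ACKs arrive), but it captures the defining behaviour: the receiver accepts only in-order frames, so everything after the loss is retransmitted.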
Synchronous timing occurs when a signal has a fixed relationship with a clock edge. Synchronous timing ensures that data arrives at a digital tester at a known time relative to a clock edge. Interfaces are said to be “source synchronous” if the data is transmitted or received along with the clock.
The maximum transmission unit (MTU) differs across the various network technologies and topologies, and framing is what accommodates this. A frame also wraps the payload with additional information such as addressing information and a checksum. The data link layer also delineates frames, a function commonly termed frame synchronization. Framing, that is, the addition of the L2 header, is done to send the packet from one node to another in the same network, because each node is identified by a unique MAC address within a broadcast domain. The data link layer also provides services such as flow control, error detection and correction, and frame synchronization (marking the end of one frame), for which a MAC trailer is added. The final framing method is physical-layer coding violations, applicable to networks in which the encoding on the physical medium contains some redundancy. In such cases, normally a 1 bit is a high-low pair and a 0 bit is a low-high pair; the combinations low-low and high-high, which are not used for data, may be used for marking frame boundaries.
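Bit stuffing is another common way to delineate frames, alongside the coding-violation method described above. A sketch using the classic HDLC-style flag 01111110: the sender inserts a 0 after any five consecutive 1s so the payload can never mimic the flag.

```python
FLAG = "01111110"   # HDLC-style frame delimiter

def bit_stuff(payload: str) -> str:
    """Insert a 0 after every run of five 1s, then wrap the body in flags."""
    out, run = [], 0
    for b in payload:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")     # stuffed bit
            run = 0
    return FLAG + "".join(out) + FLAG

def bit_unstuff(frame: str) -> str:
    """Strip the flags and remove each 0 that follows five consecutive 1s."""
    body = frame[len(FLAG):-len(FLAG)]
    out, run, i = [], 0, 0
    while i < len(body):
        out.append(body[i])
        run = run + 1 if body[i] == "1" else 0
        if run == 5:
            i += 1              # skip the stuffed 0
            run = 0
        i += 1
    return "".join(out)

data = "0111111011111100"       # contains the flag pattern itself
framed = bit_stuff(data)
print(framed)
print(bit_unstuff(framed) == data)  # round-trips cleanly
```

Because the stuffed body can never contain six 1s in a row, the receiver can scan for the flag to find frame boundaries unambiguously.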
Controlled access: a node consults with the other nodes before sending data packets, so there is no collision. Three types: reservation, polling, token passing. Contention-based access: a node sends its packet without consulting the other nodes, so there is a chance of collision. Four types: ALOHA, CSMA, CSMA/CD, CSMA/CA. Note that controlled access relies on coordination (often centralized, as in polling), while contention-based access is fully distributed.
Various codes are used for error detection and correction in the field of communication. Some handle single-bit errors and some handle multiple-bit errors. The Reed-Muller algorithm provides multiple-bit error detection and correction. With their superior error-correction capability, these codes have attracted wide interest in satellite communication, wireless communication, and storage. Reed-Muller codes cover a wide range of service requirements and diverse interference conditions in wireless applications and can operate at both high and low code rates. In this paper, a comparison of Reed-Muller codes with various other codes for multiple-bit error detection and correction is proposed, and the scheme can further be implemented on a Xilinx field-programmable gate array (FPGA) device. Using the Reed-Muller method, data is transferred from transmitter to receiver without error. The error detection and correction principle and realization methods are described in detail. Multiple-bit error detection and correction with the Reed-Muller algorithm can improve the bit error rate and packet error rate effectively.
Modulation allows us to send a signal over a bandpass frequency range. If every signal gets its own frequency range, then we can transmit multiple signals simultaneously over a single channel, all using different frequency ranges. Another reason to modulate a signal is to allow the use of a smaller antenna.
Attenuation is a general term that refers to any reduction in the strength of a signal. Attenuation occurs with any type of signal, whether digital or analog. Sometimes called loss, attenuation is a natural consequence of signal transmission over long distances. Although attenuation is significantly lower for optical fiber than for other media, it still occurs in both multimode and single-mode transmission. An efficient optical data link must have enough light available to overcome attenuation.
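Attenuation is usually quantified in decibels. A minimal sketch, with invented power figures (the 0.3 dB/km figure in the comment is merely typical of single-mode fiber, not a measurement):

```python
import math

def attenuation_db(p_in_mw: float, p_out_mw: float) -> float:
    """Attenuation (loss) in decibels: 10 * log10(Pin / Pout)."""
    return 10 * math.log10(p_in_mw / p_out_mw)

# Illustrative numbers: 1 mW launched, 0.25 mW received, e.g. roughly
# 20 km of fiber at 0.3 dB/km.
loss = attenuation_db(1.0, 0.25)
print(round(loss, 2))   # -> 6.02 dB total loss
```

A link budget compares this total loss against the transmitter power and receiver sensitivity to check that enough light survives the span.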
This corresponds to a transfer rate of about 960 bytes per second.
An amplifier amplifies everything to increase the signal level, including noise received before and after the last link, and it adds its own noise. Using a large number of amplifiers over a long, lossy link inevitably increases the noise and decreases the signal-to-noise ratio. A repeater is usually a receiver followed by regenerating logic and a transmitter. In this case, the regenerator receives the (usually digital) signal, retimes it to remove jitter, and then retransmits it with minimal baseband noise. This maximizes the signal-to-noise ratio as if it were the original transmitter in a chain and gives the cleanest received signal with the fewest error bits.
Bit rate is a measure of the number of data bits (0s and 1s) transmitted in one second. A figure of 2400 bits per second means 2400 zeros or ones can be transmitted in one second, hence the abbreviation “bps”.
There are different strategies for modulating the carrier wave. First, a user can vary the amplitude of the carrier. If an input signal’s amplitude varies with the loudness of a user’s voice and is then added to the carrier, the carrier’s amplitude will change in correspondence with the input signal fed into it. This is called amplitude modulation, or AM. The frequency of the carrier can also be changed: if the input signal is combined with the pure carrier wave so as to vary the carrier’s frequency, the changes of frequency carry the speech information. This is called frequency modulation, or FM. Modulation schemes can be analog or digital. An analog modulation scheme has an input wave that varies continuously, like a sine wave. A digital modulation scheme is a little more complicated: voice is sampled at some rate, then compressed and turned into a bit stream, a stream of zeros and ones, and this in turn is encoded into a particular kind of wave which is then superimposed on the carrier.
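A minimal numeric sketch of AM, using invented carrier and message frequencies: the message tone scales the carrier’s amplitude, so every sample stays inside the envelope 1 ± m.

```python
import math

def am_sample(t, fc=1000.0, fm=50.0, m=0.5):
    """One sample of an AM signal: a carrier at fc Hz whose amplitude is
    varied by a message tone at fm Hz with modulation index m.
    (Illustrative parameters, not tied to any real transmitter.)"""
    message = math.cos(2 * math.pi * fm * t)
    return (1 + m * message) * math.cos(2 * math.pi * fc * t)

# Sample two message periods; the envelope bounds are 1 - m and 1 + m.
samples = [am_sample(t / 100000.0) for t in range(4000)]
print(round(max(samples), 3), round(min(samples), 3))
```

With modulation index m = 0.5, no sample can exceed 1.5 in magnitude; exceeding m = 1 would cause overmodulation and a distorted envelope.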
In the mid 20th century, a codec was a device that coded analog signals into digital form using pulse-code modulation (PCM). Later, the name was also applied to software for converting between digital signal formats, including compander functions. A modem is a contraction of modulator-demodulator.
In general, information is conveyed by changes in the values of a signal over time. Since the frequency of a signal is a direct measure of the rate of change of its values, the higher the frequency of a signal, the higher the achievable data rate or information transfer rate. This can be illustrated with both an analog and a digital example. If we take analog transmission techniques like binary ASK, binary FSK, or binary PSK, information is transferred by altering a property of a high-frequency carrier wave. If we increase the frequency of this carrier wave, then the bit interval T (= 1/f) becomes shorter, thereby enabling us to transfer more bits per second.
Similarly, if we take digital transmission techniques like NRZ, Manchester encoding, etc., these signals can be modelled as periodic signals and hence are composed of an infinite number of sinusoids: a fundamental frequency (f) and its harmonics. Here too, the bit interval (T) is the reciprocal of the fundamental frequency (T = 1/f). Hence, if the fundamental frequency is increased, the signal has a shorter bit interval, and the data rate increases.
So, whether it is analog or digital transmission, an increase in the bandwidth of the signal implies a corresponding increase in the data rate. For example, if we double the signal bandwidth, then the data rate would also double.
In practice, however, we cannot keep increasing the signal bandwidth indefinitely. The telecommunication link or communication channel acts as a gatekeeper and imposes a maximum bandwidth that it will allow. Apart from this, there are standard transmission constraints, in the form of various channel noise sources, that strictly limit the usable signal bandwidth. So the achievable data rate is influenced more by the channel’s bandwidth and noise characteristics than by the signal bandwidth.
Nyquist and Shannon have given methods for calculating the channel capacity (C) of bandwidth-limited communication channels. Given a noiseless channel with bandwidth B Hz, Nyquist stated that it can carry at most 2B signal changes (symbols) per second. The converse is also true: to achieve a signalling rate of 2B symbols per second over a channel, it is enough if the channel passes signals with frequencies up to B Hz.
Another implication of the above result is the sampling theorem, which states that for a signal whose maximum bandwidth is f Hz, it is enough to sample the signal at 2f samples per second for the purpose of quantization (A/D conversion) and also for reconstruction of the signal at the receiver (D/A conversion). This is because, even if the signal were sampled at a rate higher than 2f (thereby including the higher harmonic components), the channel would anyway filter out those higher frequency components.
Also, symbols could have more than two different values, as is the case in line coding schemes like QAM, QPSK etc. In such cases, each symbol value could represent more than 1 digital bit.
Nyquist’s formula for multi-level signalling over a noiseless channel is
C = 2 * B * log2(M),
where C is the channel capacity in bits per second, B is the maximum bandwidth allowed by the channel, M is the number of different signalling values or symbols, and log2 is the logarithm to base 2.
For example, assume a noiseless 3-kHz channel.
If binary signals are used, then M = 2 and hence the maximum channel capacity or achievable data rate is C = 2 * 3000 * log2(2) = 6000 bps.
Similarly, if QPSK is used instead of binary signalling, then M = 4. In that case, the maximum channel capacity is C = 2 * 3000 * log2(4) = 2 * 3000 * 2 = 12000 bps.
Thus, theoretically, by increasing the number of signalling values or symbols, we could keep on increasing the channel capacity C indefinitely. In practice, however, no channel is noiseless, so we cannot simply keep increasing the number of symbols: the receiver would not be able to distinguish between different symbols in the presence of channel noise.
It is here that Shannon’s theorem comes in handy, as it specifies a maximum theoretical limit for the channel capacity C of a noisy channel.
Shannon’s channel capacity criteria for noisy channels
Given a communication channel with a bandwidth of B Hz and a signal-to-noise ratio of S/N, where S is the signal power and N is the noise power, Shannon’s formula for the maximum channel capacity C of such a channel is
C = B * log2(1 + S/N).
For example, for a channel with bandwidth of 3 KHz and with a S/N value of 1000, like that of a typical telephone line, the maximum channel capacity is
C = 3000 * log2(1 + 1000) = 30000 bps (approx.)
Using the previous examples of the Nyquist criterion, we saw that for a channel with a bandwidth of 3 kHz, we could double the data rate from 6000 bps to 12000 bps by using QPSK instead of binary signalling as the line encoding technique. Using Shannon’s criterion for the same channel, we can conclude that, irrespective of the line encoding technique used, we cannot increase the channel capacity of this channel beyond 30000 bps.
However, due to receiver constraints and external noise sources, Shannon’s theoretical limit is never achieved in practice.
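The Nyquist and Shannon figures worked out above can be reproduced directly:

```python
import math

def nyquist_capacity(bandwidth_hz, levels):
    """Noiseless channel: C = 2 * B * log2(M) bits per second."""
    return 2 * bandwidth_hz * math.log2(levels)

def shannon_capacity(bandwidth_hz, snr):
    """Noisy channel: C = B * log2(1 + S/N) bits per second."""
    return bandwidth_hz * math.log2(1 + snr)

print(nyquist_capacity(3000, 2))            # binary: 6000.0 bps
print(nyquist_capacity(3000, 4))            # QPSK:  12000.0 bps
print(round(shannon_capacity(3000, 1000)))  # 29902 bps, i.e. about 30000
```

Note that the exact Shannon figure for S/N = 1000 is just under 30000 bps; the text’s round number is an approximation.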
Thus to summarize the relationship between bandwidth, data rate and channel capacity,
In general, greater the signal bandwidth, the higher the information-carrying capacity
But transmission system & receiver’s capability limit the bandwidth that can be transmitted
Hence data rate depends on
Available bandwidth for transmission
Channel capacity and Signal-to-Noise Ratio
Receiver Capability
The more frequency that is allotted, the greater the channel bandwidth; and the greater the channel bandwidth and the processing capability of the receiver, the greater the information transfer rate that can be achieved.
Simplex, half duplex and full duplex are three kinds of communication channels in telecommunications and computer networking. These communication channels provide pathways to convey information. A communication channel can be either a physical transmission medium or a logical connection over a multiplexed medium. A physical transmission medium is a material substance that can propagate energy waves, such as the wires used in data communication, while a logical connection usually refers to a circuit-switched connection or a packet-mode virtual circuit connection, such as a radio channel. Communication channels let information be transmitted without obstruction. A brief introduction to the three communication channel types is given in this article.
Three Types of Communication Channel
1) Simplex
A simplex communication channel sends information in one direction only. For example, a radio station usually sends signals to its audience but never receives signals from them, so a radio station uses a simplex channel. It is also common to use simplex links in fiber optic communication: one strand is used for transmitting signals and another strand for receiving them. This might not be obvious, because the pair of fiber strands is often combined into one cable. An advantage of simplex mode is that its entire bandwidth can be used during transmission.
2) Half duplex
In half duplex mode, data can be transmitted in both directions on a signal carrier, but not at the same time. At any given moment it is effectively a simplex channel whose transmission direction can be switched. A walkie-talkie is a typical half duplex device. It has a “push-to-talk” button which turns on the transmitter and turns off the receiver; once you push the button, you cannot hear the person you are talking to, but your partner can hear you. An advantage of half duplex is that a single track is cheaper than two tracks.
3) Full duplex
A full duplex communication channel is able to transmit data in both directions on a signal carrier at the same time. It is constructed as a pair of simplex links that together allow simultaneous bidirectional transmission. Take the telephone as an example: people at both ends of a call can speak and be heard by each other at the same time because there are two communication paths between them. Thus, using full duplex mode can greatly increase the efficiency of communication.
Simplex Fiber Optic Cable vs. Duplex Fiber Optic Cable
A simplex fiber optic cable has only one tight-buffered fiber inside the cable jacket, for one-way data transmission. The aramid yarn and protective jacket enable the cable to be connected and crimped to a mechanical connector. This construction is used for both single-mode and multimode fiber optic cables. For instance, single-mode simplex fiber optic cable is suitable for networks that require data to be transmitted in one direction over long distances.
Unlike simplex fiber optic cable, duplex cable has two fibers constructed in a zipcord style. It is often used for duplex communication between devices that transmit and receive signals simultaneously. Duplex fiber optic cable is required for all sorts of applications, such as workstations, fiber switches and servers, and fiber modems, and it too is available in single-mode and multimode versions.
Multiplexers, often called muxes, are extremely important to telecommunications. Their main reason for being is to reduce network costs by minimizing the number of communications links needed between two points. Multiplexing (or muxing) is a way of sending multiple signals or streams of information over a communications link at the same time in the form of a single, complex signal; the receiver recovers the separate signals, a process called demultiplexing (or demuxing).
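Time-division multiplexing, one common muxing scheme, can be sketched as simple interleaving. Equal-length, pre-segmented streams are assumed here for brevity; real TDM systems handle framing and padding as well.

```python
def tdm_mux(streams):
    """Synchronous time-division multiplexing: take one unit from each
    input stream per frame and interleave them onto the shared link."""
    return [unit for frame in zip(*streams) for unit in frame]

def tdm_demux(channel, n_streams):
    """Receiver side: every n-th unit on the link belongs to one stream."""
    return [channel[i::n_streams] for i in range(n_streams)]

a, b, c = list("AAAA"), list("BBBB"), list("CCCC")
link = tdm_mux([a, b, c])
print("".join(link))     # -> ABCABCABCABC
print(tdm_demux(link, 3) == [a, b, c])
```

Three signals share one link, and the receiver recovers each one simply by knowing its fixed time slot, which is exactly the demultiplexing described above.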
A network port is a process-specific or application-specific software construct serving as a communication endpoint, used by the transport-layer protocols of the Internet protocol suite, such as the User Datagram Protocol (UDP) and the Transmission Control Protocol (TCP).
Transmission Control Protocol (TCP, RFC 793) is considered a reliable protocol. TCP is responsible for breaking up the message (data from the application layer) into TCP segments and reassembling them at the receiving side.
The Transmission Control Protocol (TCP) is intended for use as a highly reliable host-to-host protocol between hosts in packet-switched computer communication networks, and in interconnected systems of such networks. RFC 793 describes the functions to be performed by TCP, the program that implements it, and its interface to programs or users that require its services.
One of the first things transmitted during the three-way handshake is the sequence number. The two ends also exchange their MSS, i.e., how much payload (useful information) can fit into a single segment; the exchange of MSS values during setup is sometimes called MSS negotiation. The TCP three-way handshake (SYN, SYN-ACK, ACK), also called the TCP handshake, three-message handshake, or SYN-SYN-ACK, is the method used by TCP to set up a TCP/IP connection over an Internet Protocol based network.
In situations where you really want to get a simple answer to another server quickly, UDP works best. In general, you want the answer to be in one response packet, and you are prepared to implement your own protocol for reliability or to resend. DNS is the perfect description of this use case. The costs of connection setups are way too high (yet, DNS does support a TCP mode as well).
Another case is when you are delivering data that can be lost because newer data coming in will replace that previous data/state. Weather data, video streaming, a stock quotation service (not used for actual trading), or gaming data comes to mind.
Another case is when you are managing a tremendous amount of state and you want to avoid using TCP because the OS cannot handle that many sessions. This is a rare case today. In fact, there are now user-land TCP stacks that can be used so that the application writer may have finer grained control over the resources needed for that TCP state. Prior to 2003, UDP was really the only game in town.
Computers today are equipped with a whole range of different applications. Almost all of these applications are able to communicate across the network and use the Internet to send and receive information, updates, or purchase confirmations. Consider that all these applications may in some cases be simultaneously receiving and sending e-mail, instant messages, web pages, and VoIP phone calls. In this situation the computer is using one network connection to carry all this communication. But how is it possible that this computer is never confused about choosing the right application to receive a particular packet? We are talking about a computer that processes two or more communications at the same time for two or more running applications.
The TCP and UDP transport-layer protocols keep track of the applications that are communicating in real time. To differentiate the segments and datagrams of each application using the connection at the same time, TCP and UDP have header fields that identify these applications. These unique identifiers are the port numbers.
The header of each segment or datagram contains several fields; two of these are the source and destination ports. The source port number identifies one particular communication and is associated with the originating application on the local computer. The destination port number identifies the same communication but is associated with the destination application on the remote host. It can be, for example, port 80 on a web server, opened by a daemon application and waiting for a GET request for an HTML web page.
Port numbers are assigned in several ways, depending on whether the message is a request from the local host or a response from the remote host. While server processes have static port numbers assigned to them, clients dynamically choose a port number for each conversation, making sure it is not a port already in use.
When a client application sends a request to a server, the destination port in the header of the request segment is the port number assigned to the service daemon running on the remote host. The client application must be configured to know which port number is associated with the server process on the remote host. This destination port number is usually configured by default but can also be changed manually. Take, for example, a user who wishes to open a web page. The web browser makes a request to a web server using the TCP protocol and port number 80, because port 80 is the well-known port used by the Hypertext Transfer Protocol (HTTP). Because TCP port 80 is the default port assigned to web-serving applications, the server will receive the request from the web browser and know that it is a web page request. Many common applications have default port assignments.
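The mapping from well-known service names to default port numbers can be queried on most systems; a minimal sketch using the Python standard library's `socket.getservbyname` (the results depend on the system's services database):

```python
import socket

# Look up the well-known TCP port registered for each common service.
for service in ("http", "https", "ftp", "smtp"):
    port = socket.getservbyname(service, "tcp")
    print(f"{service}: {port}")
```

On a typical system this prints 80 for http, 443 for https, 21 for ftp, and 25 for smtp.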
In the client's request segment or datagram header, the second port number is the source port. This port number is randomly generated, with a value greater than 1023. As long as it does not conflict with other ports in use on the system at that moment, the client can choose any port number from the dynamic range used by the operating system. This port number acts like a return address for the requesting application. The transport layer keeps track of this port and the application that initiated the request, so that when a response is returned it can be forwarded to the correct application. The requesting application's port number is used as the destination port number in the response coming back from the server.
The combination of the transport layer port number and the network layer IP address assigned to the host uniquely identifies a particular process running on a specific host device and is called a socket. The term socket refers to this unique combination of IP address and port number. A socket pair, consisting of the source and destination IP addresses and port numbers, is likewise unique and identifies the conversation between the two hosts.
For example, if we want to open a web page from the server at address 10.0.0.5, an HTTP request is sent to that web server with destination port 80. The request is thus destined for socket 10.0.0.5:80. Let's say our computer has the Layer 3 IPv4 address 192.168.1.100. The moment the web browser requests the page, the computer also generates a dynamic port number, say 49152, assigned to that web browser instance (there can be one for every open tab). The server uses this dynamically generated port to uniquely identify the browser instance via the socket 192.168.1.100:49152 and responds with the web page content to that host.
So when our computer receives the page, the server has addressed it to our host at socket 192.168.1.100:49152. This, in short, is how port numbers are used.
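The socket pair described above can be observed directly with the Python standard library. This sketch opens a loopback connection; the "server" port here is OS-assigned rather than a well-known port like 80, purely for illustration:

```python
import socket

# A listening "server" socket on the loopback interface; binding to port 0
# lets the OS pick a free port, standing in for a well-known port like 80.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
server_addr = server.getsockname()

# The client connects; the OS assigns it a dynamic (ephemeral) source port.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server_addr)
conn, peer = server.accept()

src_ip, src_port = client.getsockname()   # client side of the socket pair
dst_ip, dst_port = client.getpeername()   # server side of the socket pair
print(f"socket pair: {src_ip}:{src_port} <-> {dst_ip}:{dst_port}")

# Dynamic ports are drawn from the OS's ephemeral range, above 1023.
assert src_port > 1023

conn.close()
client.close()
server.close()
```

Running this shows a source port picked by the operating system, which the server will use as the destination port of its reply.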
A hierarchical network design includes the following three layers: the backbone (core) layer, which provides optimal transport between sites; the distribution layer, which provides policy-based connectivity; and the access layer, which provides workgroup and user access to the network.
The Spanning Tree Protocol (STP) is a network protocol used to eliminate bridge loops in Ethernet LANs. STP prevents network loops, and the outages they cause, by blocking redundant links or paths. The redundant paths can still be used to keep the network operational if the primary link fails.
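The end result of STP, one loop-free set of active links with the rest held in a blocking state, can be sketched as a breadth-first search from a root switch. This is a simplified illustration of the outcome, not the actual BPDU-based protocol, and the switch topology is made up:

```python
from collections import deque

# Made-up switch topology: each link is a pair of switches.
links = {("A", "B"), ("B", "C"), ("A", "C"), ("C", "D"), ("B", "D")}

def spanning_tree(links, root):
    """Return (active, blocked) links: a BFS tree rooted at `root`."""
    neighbours = {}
    for a, b in links:
        neighbours.setdefault(a, set()).add(b)
        neighbours.setdefault(b, set()).add(a)
    active, visited, queue = set(), {root}, deque([root])
    while queue:
        sw = queue.popleft()
        for nb in sorted(neighbours[sw]):
            if nb not in visited:
                visited.add(nb)
                active.add(tuple(sorted((sw, nb))))
                queue.append(nb)
    blocked = {tuple(sorted(l)) for l in links} - active
    return active, blocked

active, blocked = spanning_tree(links, "A")
print("active:", active)    # the loop-free forwarding topology
print("blocked:", blocked)  # redundant links kept as standby paths
```

With four switches, exactly three links stay active (a tree) and the remaining two are blocked; if an active link fails, a blocked one can take over.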
Gateways regulate traffic between two dissimilar networks, while routers regulate traffic between similar networks. The easiest way to illustrate this point is through an example. Suppose you have a Windows 2000 network that’s using TCP/IP as its primary protocol. Because TCP/IP is also the primary protocol of the Internet, you could use a router to connect your network to the Internet. The router would ensure that:
Traffic intended for the local network doesn’t bleed onto the Internet.
Traffic residing on the Internet that’s not specifically intended for your network stays on the Internet.
A gateway, on the other hand, joins dissimilar systems. The best example of a gateway would be a device that joins a PC network with a 3270 mainframe environment or a device that allows a Windows NT network to communicate with a NetWare network. Although a gateway can be used to reduce network traffic, it’s more often used to make communication possible in dissimilar environments.
The two networks should be connected together with a router in between the two networks. This will allow the networks to be connected so that traffic can pass from one to the other, but the router can also keep “local traffic local,” thus reducing traffic on the network. This means that a VLAN architecture can improve performance by reducing traffic in the network compared with a switched backbone architecture. Since a switched backbone uses layer-2 switches, all the computers are in the same subnet, and all broadcast traffic goes to all computers.
Generally, Layer 3 switches are faster than routers, but they usually lack some of the advanced functionalities of routers.
Specifically, a router is a device that routes the packets to their destination. What this means is that a router analyzes the Layer 3 destination address of every packet, and devises the best next hop for it. This process takes time, and hence every packet encounters some delay because of this.
In a Layer 3 switch, on the other hand, whenever a routing table searches for any specific destination, a cache entry is made in a fast memory. This cache entry contains the source-destination pair and next hop address. Once this cache entry is in place, the next packet with the same source and destination pair does not have to go through the entire process of searching the routing table. Next hop information is directly picked up from the cache. That’s why it is called route once switch many. This way, a Layer 3 switch can route packets much faster than the router.
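The "route once, switch many" behavior can be sketched as a flow cache in front of a longest-prefix-match routing table, here using Python's `ipaddress` module (the routing entries and addresses are invented for illustration):

```python
import ipaddress

# Invented routing table: prefix -> next hop.
routing_table = {
    ipaddress.ip_network("10.0.0.0/8"): "192.168.0.1",
    ipaddress.ip_network("10.1.0.0/16"): "192.168.0.2",
    ipaddress.ip_network("0.0.0.0/0"): "192.168.0.254",
}

flow_cache = {}  # (src, dst) -> next hop, filled on the first lookup

def forward(src, dst):
    """Return (next_hop, 'cache'|'table') for a packet from src to dst."""
    key = (src, dst)
    if key in flow_cache:                     # switch many: fast path
        return flow_cache[key], "cache"
    addr = ipaddress.ip_address(dst)          # route once: slow path
    best = max((net for net in routing_table if addr in net),
               key=lambda net: net.prefixlen)  # longest-prefix match
    flow_cache[key] = routing_table[best]
    return flow_cache[key], "table"

print(forward("172.16.0.9", "10.1.2.3"))  # first packet: full table search
print(forward("172.16.0.9", "10.1.2.3"))  # same flow: answered from cache
```

The first packet of the flow pays for the routing-table search; every subsequent packet with the same source-destination pair is forwarded from the cache.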
Having explained the mechanism of both a router and a Layer 3 switch, note also that a router has some advanced routing functionality that Layer 3 switches lack. Layer 3 switches are primarily used in the LAN environment where routing is needed, while routers are used in the WAN environment. These days, though, many people have started using Layer 3 switches in WAN environments such as MPLS.
VLANs are a way to split a single L2 medium into multiple broadcast domains, effectively creating "virtual" Layer 2 networks. VLANs are usually used in switches to separate ports into distinct networks. The actual separation depends on the architecture; it can be along administrative lines (departments) or technological lines (PCs, servers, IP phones, printers). The 802.1Q standard serves only trunking purposes (carrying multiple VLANs on a single link), by way of a tag included in the 802.1Q header that is inserted into the Ethernet header; it is a must when connecting multiple switches using VLANs.
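The 802.1Q tag is four bytes inserted into the Ethernet header after the source MAC: a TPID of 0x8100 followed by a TCI carrying the priority bits, the DEI bit, and the 12-bit VLAN ID. A sketch of building such a header with the standard `struct` module (the MAC addresses are made up, and the payload is omitted):

```python
import struct

def dot1q_header(dst_mac, src_mac, vlan_id, ethertype=0x0800, priority=0):
    """Build an Ethernet header with an 802.1Q tag inserted after the MACs."""
    tpid = 0x8100                                # Tag Protocol Identifier
    tci = (priority << 13) | (vlan_id & 0x0FFF)  # PCP(3) | DEI(1)=0 | VID(12)
    return dst_mac + src_mac + struct.pack("!HHH", tpid, tci, ethertype)

hdr = dot1q_header(bytes(6), b"\x02\x00\x00\x00\x00\x01", vlan_id=100)
print(hdr.hex())  # bytes 12-13 are 8100: the 802.1Q tag marker
```

A switch receiving this frame on a trunk port reads VLAN 100 out of the TCI and forwards the frame only within that broadcast domain.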
A Layer 3 switch is a specialized hardware device used in network routing. Layer 3 switches technically have a lot in common with traditional routers, and not just in physical appearance.
A VLAN is a group of switch ports, within a single or multiple switches, that is defined by the switch hardware and/or software as a single broadcast domain. A VLAN’s goal is to group devices connected to a switch into logical broadcast domains to control the effect that broadcasts have on other connected devices.
Packet switching and circuit switching are two networking methods for transferring data between two nodes or hosts. In a packet-switched network, data is transferred by dividing it into individual packets and passing them through the network to the other host; in a circuit-switched network, a dedicated circuit is established between the two hosts for the duration of the transfer.
In telecommunication, a burst transmission or data burst is the broadcast of a relatively high-bandwidth transmission over a short period. It can also occur in a computer network where data transmission is interrupted at intervals.
MIR is the maximum information rate: the maximum amount of bandwidth the antenna will allow to be passed (i.e., not dedicated). CIR is the committed information rate: bandwidth dedicated to that customer or antenna.
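The relationship between CIR and MIR can be sketched as a simple policer: traffic up to the CIR is always granted, bursts above it are granted only up to the MIR, and anything beyond is capped (the rates below are invented for illustration):

```python
def police(requested_kbps, cir_kbps, mir_kbps):
    """Rate actually granted: CIR is guaranteed, MIR is the hard ceiling."""
    if requested_kbps <= cir_kbps:
        return requested_kbps             # within the committed rate
    return min(requested_kbps, mir_kbps)  # bursting: capped at the maximum

# Invented link: 512 kbps committed, 2048 kbps maximum.
for req in (256, 1024, 4096):
    print(req, "->", police(req, cir_kbps=512, mir_kbps=2048))
```

A 256 kbps request is granted in full, a 1024 kbps burst is allowed because it fits under the MIR, and a 4096 kbps request is capped at 2048 kbps.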
MPLS is sometimes referred to as a Layer 2.5 technology: it is a data-carrying mechanism that operates at a layer generally considered to lie between the traditional definitions of Layer 2 (data link layer) and Layer 3 (network layer). MPLS can build unique user groups to establish your private network; data cannot be captured or viewed by anyone outside of your predetermined user group.
Carrier Access
A wired LAN has several advantages over a wireless LAN. A wired LAN usually provides higher-speed connections. Better security: because wireless networks send out airborne signals that can be picked up, secured wired networks tend to be more secure, as they are more difficult to gain access to.
A wireless antenna (technically speaking, a transceiver) cannot send and receive at the same time.
An Internet Service Provider (ISP) is a company that provides Internet access. The most familiar ISP is the provider who delivers Internet to your home or business for a fee. However, there are three tiers of ISPs: Tier 1, Tier 2, and Tier 3 providers. All three play an important role in providing Internet access.
An ISP is how you access the Internet, whether you are a business or a residential customer. Phone companies, cable companies, and satellite companies all work to service given areas for Internet access.
Historically, DSL speeds have been slower, but new technology lessens the speed gap between DSL and cable Internet. DSL offers users a choice of speeds ranging from 128 Kbps to 3 Mbps, while cable modem download speeds are typically up to two times faster than DSL.
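The practical effect of those rates is easy to work out: at r megabits per second, an s-megabyte file takes 8·s / r seconds to transfer. A quick check with the figures above (the file size is chosen purely for illustration):

```python
def transfer_seconds(size_mb, rate_mbps):
    """Seconds to move size_mb megabytes over a rate_mbps link (8 bits/byte)."""
    return size_mb * 8 / rate_mbps

size = 30  # a 30 MB download, for illustration
for name, rate in [("DSL low (128 Kbps)", 0.128),
                   ("DSL high (3 Mbps)", 3.0),
                   ("Cable (2x DSL, 6 Mbps)", 6.0)]:
    print(f"{name}: {transfer_seconds(size, rate):.0f} s")
```

The same 30 MB download takes over half an hour at 128 Kbps but well under two minutes at cable speeds.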
VPNs, or Virtual Private Networks, allow users to securely access a private network and share data remotely through public networks. Much like a firewall protects your data on your computer, VPNs protect it online.
Information is the heart of any business or industry. It provides sustenance to organizational units: empowering and strengthening its users as groups and individuals. It can be used for or against us, naturally concerning us with the safety and integrity of our information. If the nature of our information is to be distributed for accessibility, then so must our efforts to secure it. Over the past two decades, the distributed computing industry has utilized the International Organization for Standardization's (ISO) Open Systems Interconnection (OSI) model [1] for better standardization of hardware and software components. Some layers have more impact than others when securing information. Together, they can be used to build a comprehensive solution. Utilizing the OSI model's seven layers, this paper will demonstrate a logical, comprehensive and achievable approach to securing an organization's information resources.
Layer 2 networks forward all their traffic, including ARP and DHCP broadcasts, so data transmitted by one device on L2 will be forwarded to all devices on the network. This type of broadcast traffic is very fast, but as the network gains in size it creates congestion and leads to inefficiency over the network.
Layer 3 traffic restricts broadcast traffic. Administrators on L3 can segment networks and restrict broadcast traffic to subnetworks, limiting the congestion of broadcast on large networks.
A threat is a possible danger that might exploit a vulnerability to breach security and thereby cause harm. In short, a threat is a potential danger or vulnerability, while an attack is the actual attempt at unauthorized action.
Two-factor authentication, also known as 2FA, two-step verification, or TFA, is an extra layer of security, a form of multi-factor authentication, that requires not only a username and password but also something that only that user has on them.
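The "something the user has" is commonly a one-time-password generator. The HOTP algorithm behind most 2FA apps (RFC 4226; TOTP simply derives the counter from the current time) fits in a few lines of standard-library Python:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 test-vector secret; counter 0 yields the published value "755224".
print(hotp(b"12345678901234567890", 0))
```

For TOTP, the counter is replaced with `int(time.time() // 30)`, so the code changes every 30 seconds and both the server and the user's device can compute it independently.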
A public key infrastructure (PKI) is a system for the creation, storage, and distribution of digital certificates which are used to verify that a particular public key belongs to a certain entity.
Denial of service (DoS) is any type of attack aimed at depriving legitimate users of an online service: the attackers attempt to prevent legitimate users from accessing the service, leaving the server or network stuck and busy and causing a service interruption for other users. In a DoS attack, the attacker usually sends excessive messages asking the network or server to authenticate requests that have invalid return addresses.
A distributed denial-of-service (DDoS) attack is often the result of multiple compromised systems (for example, a botnet) flooding the targeted system with traffic. Malware can carry DDoS attack mechanisms; one of the better-known examples was MyDoom, whose DoS mechanism was triggered on a specific date and time.
A firewall restricts access to your network by screening traffic and deciding which packets should be allowed in. Boston University compares it to a security guard deciding who can get clearance. The firewall monitors the ports that connect your network to the Internet and checks data packets before allowing them to pass through. A firewall can accept a packet, drop it — erasing it from existence — or deny it, returning it to the sender. If firewalls are security guards, intrusion detection systems are security cameras. An IDS monitors traffic and spots patterns of activity, alerting you if it concludes that your network is under attack. Signature detection compares network or system information to attacks already listed in the IDS database. Anomaly detection compares current network traffic to the normal levels of packet size or activity and analyzes the result statistically. If network traffic suddenly shoots up to a high level, for instance, that could indicate a hacking attack.
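Anomaly detection as described can be sketched statistically: build a baseline of normal traffic levels, then flag any sample that deviates from the baseline by more than a few standard deviations (the traffic figures below are invented):

```python
import statistics

# Invented baseline: packets per second observed during normal operation.
baseline = [980, 1020, 1005, 990, 1010, 1000, 995, 1015]

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(pps, threshold=3.0):
    """Flag traffic more than `threshold` standard deviations above normal."""
    return pps > mean + threshold * stdev

print(is_anomalous(1010))   # within normal variation
print(is_anomalous(25000))  # sudden spike: worth an alert
```

Real IDS products combine such statistical checks with signature matching against a database of known attack patterns, as described above.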