Error Detection in Computer Networks

Here, we will learn about error detection in computer networks.

When data travels from one device to another, there is no guarantee that the data received matches the data that was sent. An error occurs when the message received at the receiver end differs from the message the sender transmitted. During transmission, digital signals are exposed to noise, which can flip binary bits as they travel from sender to receiver: a 0 bit may become a 1, or a 1 bit may become a 0.

Techniques for Error Detection in Computer Networks

The most basic strategy for error detection is the use of redundancy bits: additional bits are appended to the data so that the receiver can detect errors.

Some of the most commonly used error detection techniques are as follows:

1. Simple parity check

2. Two-dimensional parity check

3. Checksum

4. Cyclic redundancy check (CRC)

Simple Parity Check for Error Detection in Computer Networks

In this technique, a redundant bit, known as a parity bit, is appended to the end of the data unit so that the total number of 1s becomes even. For an 8-bit data unit, this means a total of nine bits are transmitted. If the number of 1s in the data bits is odd, the parity bit is set to 1; if the number of 1s is even, the parity bit is set to 0. At the receiving end, the parity bit is recomputed from the received data bits and compared with the parity bit that was received. Because the procedure makes the total number of 1s even, it is called even-parity checking.
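The following is a minimal Python sketch of even-parity generation and checking; the function names and the 8-bit example data unit are assumptions made only for illustration.

```python
def add_even_parity(data_bits):
    """Append an even-parity bit so the total number of 1s is even."""
    parity = sum(data_bits) % 2          # 1 if the count of 1s is odd, else 0
    return data_bits + [parity]          # transmitted unit: data bits + parity bit

def check_even_parity(received_bits):
    """Return True if the received unit still has an even number of 1s."""
    return sum(received_bits) % 2 == 0

# Example: an 8-bit data unit gives a 9-bit transmitted unit.
sent = add_even_parity([1, 0, 1, 1, 0, 0, 1, 0])   # -> [1, 0, 1, 1, 0, 0, 1, 0, 0]
print(check_even_parity(sent))                      # True (no error)
sent[3] ^= 1                                        # simulate one flipped bit in transit
print(check_even_parity(sent))                      # False (error detected)
```

Note that a simple parity check detects any single-bit error, but it fails when an even number of bits are flipped, since the count of 1s remains even.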

Two-Dimensional Parity Check

Performance can be improved by using a two-dimensional parity check, which arranges the data in the form of a table. Parity bits are computed for each row, just as in a simple parity check. In addition, the block of bits is divided into rows and a redundant row of parity bits, one for each column, is appended to the whole block. At the receiving end, the parity bits are compared with the parity bits computed from the received data.
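Here is a rough Python sketch of the same idea, assuming a small block of 4-bit rows; the function names and layout are illustrative, not a standard API.

```python
def two_d_parity(rows):
    """Append an even-parity bit to each row, then append a parity row
    computed column-wise over the whole block (including the row parities)."""
    with_row_parity = [row + [sum(row) % 2] for row in rows]
    parity_row = [sum(col) % 2 for col in zip(*with_row_parity)]
    return with_row_parity + [parity_row]

def check_two_d_parity(block):
    """Recompute parities at the receiver; any mismatch signals an error."""
    rows_ok = all(sum(row) % 2 == 0 for row in block[:-1])
    cols_ok = all(sum(col) % 2 == 0 for col in zip(*block))
    return rows_ok and cols_ok

data = [[1, 0, 1, 1],
        [0, 1, 1, 0],
        [1, 1, 0, 0]]
block = two_d_parity(data)
print(check_two_d_parity(block))   # True (no error)
block[1][2] ^= 1                   # flip one bit in transit
print(check_two_d_parity(block))   # False (error detected)
```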

Checksum for Error Detection in Computer Networks

In a checksum error detection scheme, the data is divided into k segments, each of m bits. At the sender's end, the segments are added using 1's complement arithmetic to obtain the sum, and the sum is then complemented to produce the checksum. The checksum segment is sent along with the data segments. At the receiver's end, all received segments, including the checksum, are added using 1's complement arithmetic and the sum is complemented. If the result is 0, the data is accepted; otherwise, it is rejected.
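A minimal Python sketch of the sender and receiver calculations is given below; the 8-bit segment size, the example values, and the function names are assumptions chosen for illustration.

```python
def ones_complement_sum(segments, m):
    """Add m-bit segments using 1's complement arithmetic (wrap carries around)."""
    mask = (1 << m) - 1
    total = 0
    for seg in segments:
        total += seg
        total = (total & mask) + (total >> m)   # fold any carry back into the sum
    return total

def make_checksum(segments, m):
    """Checksum = 1's complement of the 1's complement sum of the segments."""
    return (~ones_complement_sum(segments, m)) & ((1 << m) - 1)

m = 8
data = [0b10110011, 0b01101100, 0b11100010]     # k = 3 segments of m = 8 bits
checksum = make_checksum(data, m)

# Receiver: add all segments plus the checksum, then complement the result.
result = (~ones_complement_sum(data + [checksum], m)) & ((1 << m) - 1)
print(result == 0)    # True -> accept; a non-zero result would mean an error
```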

Cyclic Redundancy Check (CRC)

Unlike the checksum scheme, which is based on addition, CRC is based on binary division. In CRC, a sequence of redundant bits, called the cyclic redundancy check bits, is appended to the end of the data unit so that the resulting data unit becomes exactly divisible by a second, predetermined binary number. At the destination, the incoming data unit is divided by the same number. If the remainder is zero, the data unit is assumed to be intact and is accepted. A non-zero remainder indicates that the data unit was corrupted in transit and must therefore be rejected.
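The sketch below shows CRC in Python using modulo-2 (XOR) division; the generator 1011 (x^3 + x + 1) and the function names are assumptions chosen only to keep the example small.

```python
def crc_remainder(bits, generator):
    """Modulo-2 (XOR) division: return the remainder of bits / generator."""
    bits = bits[:]                       # work on a copy
    for i in range(len(bits) - len(generator) + 1):
        if bits[i] == 1:                 # divide only where the leading bit is 1
            for j in range(len(generator)):
                bits[i + j] ^= generator[j]
    return bits[-(len(generator) - 1):]  # remainder has (degree of generator) bits

def crc_encode(data, generator):
    """Append CRC bits so the transmitted unit is divisible by the generator."""
    padded = data + [0] * (len(generator) - 1)
    remainder = crc_remainder(padded, generator)
    return data + remainder

def crc_check(received, generator):
    """Accept only if dividing the received unit leaves a zero remainder."""
    return all(bit == 0 for bit in crc_remainder(received, generator))

generator = [1, 0, 1, 1]                 # x^3 + x + 1, an assumed example divisor
frame = crc_encode([1, 1, 0, 1, 0, 1], generator)
print(crc_check(frame, generator))       # True  -> accepted
frame[2] ^= 1                            # corrupt one bit in transit
print(crc_check(frame, generator))       # False -> rejected
```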