When two people are able to communicate, we generally assume that they speak the same language, and that shared language is the key to understanding each other. The same applies to the world of computing, where these shared sets of rules are called protocols. In this article we present, in a nutshell, the most common TCP/IP protocol types. We won't go in-depth, so don't expect university course-style material: just the basics.
TCP/IP is the reference model that encapsulates the conventions for how end-to-end connectivity is supposed to happen. Its attributes are standardized, and this is how software developers can write interoperable applications: by knowing the standards, you know what kinds of packets to expect and what shape they will take, and you can interpret their content based on the agreed syntax and semantics.
The Internet Protocol (IP) is the most vital protocol, since almost all higher-layer protocols rely on IP to route their data from host X to host Y. As mentioned earlier, TCP/IP is a protocol suite; within it, IP can be considered the dominant protocol.
IP encapsulates data from higher-layer protocols and delivers it to the end-point. It is a connectionless protocol: it does not operate over pre-established circuits. Architecturally it is a fairly primitive protocol that offers no reliability measures of any kind, such as detecting data corruption, duplicate arrival, or lost data, or retransmitting anything. But we will see shortly why IP was deliberately made this simple.
TCP, the other founding component of the TCP/IP suite, working together with many other higher-level protocols, is what raises the "IQ level" of IP communications. IP by itself works on a best-effort delivery model: it assumes everything is working fine and conditions are ideal, and the idea of something going wrong (packets being lost, and so on) is simply unknown to the protocol.
TCP, however, runs on the end-points (the hosts); it adds specific functions such as retransmission, error detection, and congestion control, among many others. Other higher-level protocols help improve the quality of service as well. As we can see, the IP protocol was made to be simple, and that is exactly why it became so successful: it was fast, and the higher-level features could be implemented on the hosts whenever they were needed.
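To make this division of labour concrete, here is a minimal sketch in Python (the host name example.com and port 80 are placeholders chosen purely for illustration). The application only writes and reads a byte stream; retransmission, ordering, and congestion control are all handled by the operating system's TCP implementation, not by this code.

```python
import socket

# The application sees a reliable byte stream; the kernel's TCP stack takes
# care of retransmission, ordering and congestion control behind the scenes.
with socket.create_connection(("example.com", 80), timeout=5) as conn:
    conn.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    reply = conn.recv(4096)              # data arrives intact and in order
    print(reply.split(b"\r\n", 1)[0])    # e.g. b'HTTP/1.1 200 OK'
```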
Let's see what kind of data IP encapsulates and what its header structure looks like. First of all, it must carry the source address and the destination address, just like any traditional postal service. Then, of course, it needs the data it is about to transfer, and it adds a handful of other elements, such as a type-of-service identifier, total length, flags, time to live, a header checksum (covering the header only, nothing more), a version field, fragment offset, padding, and so on.
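To make that layout tangible, here is a minimal sketch that packs a bare IPv4 header (no options) with Python's struct module, following the field order defined in RFC 791. The field values and addresses (taken from the RFC 5737 documentation ranges) are made up for illustration, not real traffic.

```python
import socket
import struct

version_ihl    = (4 << 4) | 5    # version 4, header length = 5 x 32-bit words
tos            = 0               # type of service
total_length   = 20 + 12         # 20-byte header plus 12 bytes of payload
identification = 0x1C46
flags_fragment = 0x4000          # "don't fragment" flag, fragment offset 0
ttl            = 64              # time to live
protocol       = 6               # 6 = TCP is carried inside this datagram
checksum       = 0               # header checksum, computed separately (below)
src = socket.inet_aton("192.0.2.1")      # source address
dst = socket.inet_aton("198.51.100.7")   # destination address

header = struct.pack("!BBHHHBBH4s4s",
                     version_ihl, tos, total_length, identification,
                     flags_fragment, ttl, protocol, checksum, src, dst)
print(len(header))               # 20 bytes: the fixed part of the IPv4 header
```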
It is important to understand that a datagram is built up from an IP header and IP data. While the header carries a checksum for verification purposes, as mentioned earlier, the IP protocol includes nothing that would let the end-points verify whether the data itself has arrived intact.
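That header checksum is the classic Internet checksum described in RFC 1071: the one's complement of the one's-complement sum of the header's 16-bit words. Here is a small sketch of the calculation (the function name is my own):

```python
def internet_checksum(header: bytes) -> int:
    """One's-complement sum of 16-bit words, as described in RFC 1071."""
    if len(header) % 2:                 # pad to a whole number of 16-bit words
        header += b"\x00"
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

# The sender stores the result in the checksum field; routers and the receiver
# recompute it over the whole header, and a result of 0 means the header is intact.
```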
Currently, there are two versions of the IP protocol in use. IPv4 is still the dominant one; it uses 32-bit addresses (roughly 4.3×10⁹ of them). There is also IPv6, with 128-bit addresses (roughly 3.4×10³⁸). The latter has even dropped the header checksum altogether: higher-level protocols already offer sufficient safeguards to ensure error-free communication, so the purpose of IP remains to be as simple and efficient as possible.
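Those address-space figures follow directly from the address widths, as a quick back-of-the-envelope check shows:

```python
# 32-bit and 128-bit address spaces, expressed in scientific notation.
print(f"IPv4: 2**32  = {2**32:.1e} addresses")    # ~4.3e+09
print(f"IPv6: 2**128 = {2**128:.1e} addresses")   # ~3.4e+38
```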