Thursday, July 2, 2009

Comments on "Interdomain Routing"

“The Internet is not free,” and it has not been since the transition of the NSFNET Backbone Service to the current commercial structure [1]. Although there are many critiques of the privatization of the Internet, it certainly brought the Internet not only into our homes but practically everywhere, including our pockets. Given the sheer size of the Internet, one can hardly imagine how a message from one computer can reach another computer on a different ISP. The paper paints a picture of how one’s message is routed through the different autonomous systems that make up the Internet today, and how money affects all of this.

Admittedly, I needed further reading in order to fully understand the topic. One of the questions that arose after reading the paper is: where do BGP and IGP meet?

The paper hints that iBGP runs over an IGP, and that the NEXT_HOP attribute of BGP is changed to the loopback address when learned routes are disseminated between iBGP speakers. Further reading from other sources explained that an AS employs an intradomain routing protocol (IGP) to determine how to reach each customer prefix, and employs an interdomain routing protocol (BGP) to advertise the reachability of these prefixes to neighboring ASes [2]. In contrast to what the paper says regarding next hops, an article from Cisco says that:



“Router C advertises network 172.16.1.0 with a next hop of 10.1.1.1. When Router A propagates this route within its own AS, the EBGP next-hop information is preserved. If Router B does not have routing information regarding the next hop, the route will be discarded. Therefore, it is important to have an IGP running in the AS to propagate next-hop routing information.”[5]
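The interplay between these two behaviors can be illustrated with a toy sketch. This is not a real BGP implementation; the router names, addresses, and dictionary-based "tables" are made up for illustration. It shows why an iBGP-learned route whose eBGP next hop is preserved gets discarded when the IGP cannot resolve it, and how rewriting the next hop (the "next-hop-self" behavior the paper describes) avoids the problem:

```python
def usable(route, igp_table):
    """A BGP route can be installed only if its next hop is IGP-reachable."""
    return route["next_hop"] in igp_table

def advertise_ibgp(route, own_loopback, next_hop_self=False):
    """Propagate an eBGP-learned route to an iBGP peer."""
    adv = dict(route)
    if next_hop_self:
        # Rewrite NEXT_HOP to our own loopback, as the paper describes.
        adv["next_hop"] = own_loopback
    # Otherwise the eBGP next hop is preserved, as the Cisco article says.
    return adv

# Router A learned 172.16.1.0/24 via eBGP with next hop 10.1.1.1.
learned = {"prefix": "172.16.1.0/24", "next_hop": "10.1.1.1"}
igp_of_b = {"192.168.0.1"}  # Router B's IGP only knows internal addresses

preserved = advertise_ibgp(learned, own_loopback="192.168.0.1")
rewritten = advertise_ibgp(learned, own_loopback="192.168.0.1",
                           next_hop_self=True)

print(usable(preserved, igp_of_b))  # False: Router B discards the route
print(usable(rewritten, igp_of_b))  # True: the next hop resolves via the IGP
```

Either fix works: carry the external next hop in the IGP, or rewrite it at the AS border; the sketch simply shows that one of the two must happen.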

Next: what are the IP addresses of those in a peering relationship? Will they set up a LAN, or will one of them sacrifice one of its precious IP addresses?

“A pair of ASs interconnect via dedicated links and/or public network access points, and routing between ASs is determined by the interdomain routing protocol such as Border Gateway Protocol (BGP)”[2]

Network Access Points were data communication facilities at which ASes exchanged traffic, replacing the publicly financed NSFNET Internet backbone [3]. The four original Network Access Points were located in New York, Washington, D.C., Chicago, and San Francisco [4]. These facilities were later replaced by Internet Exchange Points, which serve the same purpose [3].

Curiously, I tried searching for my ISP’s AS (Smart Bro Wireless), and I got this result from a search at Robtex.com: http://www.robtex.com/as/as10139.html#graph.

References:
[1] http://en.wikipedia.org/wiki/National_Science_Foundation_Network
[2] Gao, Lixin, “On Inferring Autonomous System Relationships in the Internet,” IEEE/ACM Transactions on Networking, Vol. 9, No. 6, December 2001.
[3] http://en.wikipedia.org/wiki/Network_access_point
[4] http://searchsoa.techtarget.com/sDefinition/0,,sid26_gci214106,00.html
[5] http://www.cisco.com/en/US/docs/internetworking/technology/handbook/bgp.html
[6] Balakrishnan, Hari and Feamster, Nick, “Interdomain Internet Routing”, 2001-2005

Wednesday, June 24, 2009

Comments on “The Design Philosophy of The DARPA Internet Protocols”

The paper indeed gave insight into the hows and whys of the Internet architecture. It is interesting to see the problems the architecture had, and it made me wonder about the progress that has been made toward solving them today. A quick “google” and some light reading on a few of the problems gave me these results:

1) Lack of sufficient tools for distributed management (especially in the area of routing):

• To help with the issue of routing between gateways, different protocols have been invented to dynamically configure routing tables. The Routing Information Protocol (RIP) is a dynamic routing protocol used in local and wide area networks. RIP versions 1 and 2 are considered technically obsolete, and more advanced protocols such as OSPF, IS-IS, and EIGRP are highly recommended. (For auto-configuration of hosts, DHCP is used.) [2]

• Regarding the general management of resources, such as services and applications, one popular tool is Nagios (I haven’t used it before, just heard of it).

2) Few tools for Accounting:

• Whenever I hear about accountability on the Internet, one phrase comes to mind: Google Analytics. This solution, however, only gives the ability to view and analyze website traffic.

3) Better building blocks to solve the above 2 problems:

• So far, the major recent innovation for IP is IPv6, but it does not solve the problems mentioned above. Notable features of IPv6 are its larger address space and simplified header processing by routers [3].
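The RIP-style dynamic routing mentioned under problem 1 above boils down to a distance-vector (Bellman-Ford) relaxation step that each router runs on its neighbors' advertisements. The following toy sketch, with made-up router and network names, illustrates that step and RIP's convention that a metric of 16 hops means unreachable:

```python
INFINITY = 16  # RIP's "unreachable" metric (max path length is 15 hops)

def rip_update(table, neighbor, advertised):
    """Merge a neighbor's advertised distances into our routing table.

    table maps destination -> (metric, next_hop); advertised maps
    destination -> the neighbor's own metric to that destination.
    """
    changed = False
    for dest, metric in advertised.items():
        cost = min(metric + 1, INFINITY)           # one extra hop to reach us
        current = table.get(dest, (INFINITY, None))[0]
        if cost < current:
            table[dest] = (cost, neighbor)         # better path via neighbor
            changed = True
    return changed

table = {"net-A": (1, "eth0")}                     # directly attached network
rip_update(table, "router-B", {"net-C": 2, "net-A": 5})
print(table["net-C"])  # (3, 'router-B'): learned via the neighbor
print(table["net-A"])  # (1, 'eth0'): the worse advertised path is ignored
```

The 15-hop limit is also why RIP only suits small networks, one reason link-state protocols such as OSPF displaced it.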

Other problems on which I failed to see any progress:

1) The IP header is fairly big; in fact, the option-less IPv6 header (40 bytes) is twice as large as the minimum IPv4 header (20 bytes).
2) The goal of robustness led to fate-sharing, which led to host-resident algorithms, which in turn contribute to a loss of robustness if a host misbehaves.
3) How to give guidance to the designer of a realization, guidance which would relate the engineering of the realization to the types of service?
4) No formal method to describe performance.
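The header-size observation in problem 1 is simple arithmetic worth spelling out: IPv6 doubled the fixed header even though it quadrupled the address size, because many IPv4 fields (options, header checksum, fragmentation fields in the common case) were dropped or moved to extension headers:

```python
# Base header sizes, without options/extension headers.
IPV4_BASE_HEADER = 20   # bytes (RFC 791)
IPV6_FIXED_HEADER = 40  # bytes (RFC 8200)

print(IPV6_FIXED_HEADER / IPV4_BASE_HEADER)  # 2.0 -> "twice as large"

# The addresses grew faster than the header itself:
IPV4_ADDR, IPV6_ADDR = 4, 16  # bytes per address
print(IPV6_ADDR / IPV4_ADDR)  # 4.0 -> 4x larger addresses
```

Two 16-byte addresses alone account for 32 of IPv6's 40 bytes, so the fixed-format header is actually quite lean outside of addressing.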

In relation to my previous post, it is evident that the end-to-end principle influenced the design of the Internet; for example, the state information that describes an ongoing conversation is placed at the hosts, rather than in the network, in order to achieve reliability. Another example is that the Internet architecture achieves flexibility by making a minimum set of assumptions about the functions the net will provide; this lets applications choose the types of services they need. [1]

Given the goals of the DARPA Internet, if I were to design my own internet architecture, what would its goals be?

1. The Internet architecture must permit distributed management of its resources.
2. Internet communication must continue despite loss of networks or gateways.
3. The Internet must support multiple types of communications service.
4. The Internet architecture must accommodate a variety of networks.
5. The resources used in the internet architecture must be accountable.
6. The Internet architecture must be cost effective.
7. The Internet architecture must permit host attachment with a low level of effort.

I placed distributed management on top because if different resources are managed by different organizations, then the loss of one resource will not disrupt the others, and internet communication will still continue. Adhering to the end-to-end argument, the Internet must support multiple types of service as well as a variety of networks in order to be flexible and to place function at the end points. Something that is manageable can also be made accountable; understanding and monitoring the usage of resources within the internet can help in optimizing it according to usage. As memory and computing power increase, sacrificing cost-effectiveness to achieve the higher goals is acceptable. The idea of host-resident algorithms makes host attachment hard because a lot of function is placed on the host; nevertheless, because there is already a large amount of experience with these protocols, implementations are now available for a wide variety of machines.

References:
[1] D. D. Clark, "The design philosophy of the DARPA Internet protocols," ACM SIGCOMM Computer Communication Review, vol. 18, issue 4, August 1988.
[2] http://en.wikipedia.org/wiki/Routing_Information_Protocol
[3] http://en.wikipedia.org/wiki/IPv6

Tuesday, June 23, 2009

Comments on the End-to-End Arguments Principle

I can say that I mostly agree with the authors. By carefully identifying what the end points are and what they need, we can design a system that suits those needs. The paper reads to me as a guideline; even though it promotes placing functions (reliability, for example) at higher levels [2], it also states that this does not hold in all cases. So, in order to have a protocol suite flexible enough to accommodate most applications, each layer must not be too rigidly defined. This makes it easier to add protocols that meet the needs of an application’s end points. [1] A good example that this principle works is the Internet Protocol Suite [4].

End-to-end arguments are all about making acceptable sacrifices; for example, dropped packets can be safely ignored if they rarely occur. In the old days this was necessary because systems were constrained by limited resources and computing power, but in the future it may not be the same. For example, with enough resources at each layer of a particular system, it may be possible to have reliability at every level by achieving negligible error rates at each one. I have not seen an example of such a system; one enhancement to the end-to-end argument regarding reliability that I could think of is the ability to detect early where faults have happened, like what the IPMI specification aims to do [3].
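A quick back-of-the-envelope calculation shows why that hypothetical system demands truly negligible per-layer error rates. Assuming, for illustration, that each layer independently succeeds with probability (1 - e), the end-to-end success rate is the product across layers, so even modest per-layer errors compound:

```python
def end_to_end_success(per_layer_error, layers):
    """Probability every layer of the stack succeeds, assuming independence."""
    return (1 - per_layer_error) ** layers

# With a 1% error rate per layer across 5 layers, roughly 4.9% of
# transfers fail end to end, so the end points still need their own
# reliability mechanism -- the classic end-to-end conclusion.
print(1 - end_to_end_success(0.01, 5))

# With a one-in-a-million error rate per layer, the compound error stays
# on the order of 5e-6, which is the premise of the hypothetical system.
print(1 - end_to_end_success(1e-6, 5))
```

The layer count and error rates are made up; the point is only that per-layer reliability must be several orders of magnitude better than the acceptable end-to-end error, which is exactly why the authors argue the check belongs at the ends.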

References:
[1] J.H. Saltzer, D.P. Reed, and D.D. Clark, “End-to-End Arguments in System Design,” ACM Transactions on Computer Systems, Vol. 2, No. 4, pp. 277-288, November 1984.
[2] Blumenthal, M., Clark, D.D., “Rethinking the design of the Internet: The end-to-end arguments vs. the brave new world”, ACM Transactions on Internet Technology, Vol. 1, No. 1, August 2001, pp 70-109.
[3] http://en.wikipedia.org/wiki/Intelligent_Platform_Management_Interface
[4] http://en.wikipedia.org/wiki/End-to-end_principle