
Internet information

Discussion in 'Thảo luận, Hỏi đáp' started by vhdulich15, Apr 1, 2015.

    The Internet has revolutionized the computer and communications world like nothing before. The invention of the telegraph, telephone, radio, and computer set the stage for this unprecedented integration of capabilities. The Internet is at once a world-wide broadcasting capability, a mechanism for information dissemination, and a medium for collaboration and interaction between individuals and their computers without regard for geographic location. The Internet represents one of the most successful examples of the benefits of sustained investment and commitment to research and development of information infrastructure. Beginning with the early research in packet switching, the government, industry and academia have been partners in evolving and deploying this exciting new technology. Today, terms like "bleiner@computer.org" trip lightly off the tongue of the random person on the street.



    This is intended to be a brief, necessarily cursory and incomplete history. Much material currently exists about the Internet, covering history, technology, and usage. A trip to almost any bookstore will find shelves of material written about the Internet.

    In this document, several of us involved in the development and evolution of the Internet share our views of its origins and history. This history revolves around four distinct aspects. There is the technological evolution that began with early research on packet switching and the ARPANET (and related technologies), and where current research continues to expand the horizons of the infrastructure along several dimensions, such as scale, performance, and higher-level functionality. There is the operations and management aspect of a global and complex operational infrastructure. There is the social aspect, which resulted in a broad community of Internauts working together to create and evolve the technology. And there is the commercialization aspect, resulting in an extremely effective transition of research results into a broadly deployed and available information infrastructure.

    The Internet today is a widespread information infrastructure, the initial prototype of what is often called the National (or Global or Galactic) Information Infrastructure. Its history is complex and involves many aspects - technological, organizational, and community. And its influence reaches not only to the technical fields of computer communications but throughout society as we move toward increasing use of online tools to accomplish electronic commerce, information acquisition, and community operations.

    Origins of the Internet

    The first recorded description of the social interactions that could be enabled through networking was a series of memos written by J.C.R. Licklider of MIT in August 1962 discussing his "Galactic Network" concept. He envisioned a globally interconnected set of computers through which everyone could quickly access data and programs from any site. In spirit, the concept was very much like the Internet of today. Licklider was the first head of the computer research program at DARPA, starting in October 1962. While at DARPA he convinced his successors at DARPA, Ivan Sutherland, Bob Taylor, and MIT researcher Lawrence G. Roberts, of the importance of this networking concept.

    Leonard Kleinrock at MIT published the first paper on packet switching theory in July 1961 and the first book on the subject in 1964. Kleinrock convinced Roberts of the theoretical feasibility of communications using packets rather than circuits, which was a major step along the path towards computer networking. The other key step was to make the computers talk together. To explore this, in 1965 working with Thomas Merrill, Roberts connected the TX-2 computer in Massachusetts to the Q-32 in California with a low speed dial-up telephone line, creating the first (however small) wide-area computer network ever built. The result of this experiment was the realization that the time-shared computers could work well together, running programs and retrieving data as necessary on the remote machine, but that the circuit switched telephone system was totally inadequate for the job. Kleinrock's conviction of the need for packet switching was confirmed.

    In late 1966 Roberts went to DARPA to develop the computer network concept and quickly put together his plan for the "ARPANET", publishing it in 1967. At the conference where he presented the paper, there was also a paper on a packet network concept from the UK by Donald Davies and Roger Scantlebury of NPL. Scantlebury told Roberts about the NPL work as well as that of Paul Baran and others at RAND. The RAND group had written a paper on packet switching networks for secure voice in the military in 1964. It happened that the work at MIT (1961-1967), at RAND (1962-1965), and at NPL (1964-1967) had all proceeded in parallel without any of the researchers knowing about the other work. The word "packet" was adopted from the work at NPL, and the proposed line speed to be used in the ARPANET design was upgraded from 2.4 kbps to 50 kbps.

    In September 1968, after Roberts and the DARPA funded community had refined the overall structure and specifications for the ARPANET, an RFQ was released by DARPA for the development of one of the key components, the packet switches called Interface Message Processors (IMP's). The RFQ was won in December 1968 by a group headed by Frank Heart at Bolt Beranek and Newman (BBN). As the BBN team worked on the IMP's, with Bob Kahn playing a major role in the overall ARPANET architectural design, the network topology and economics were designed and optimized by Roberts working with Howard Frank and his team at Network Analysis Corporation, and the network measurement system was prepared by Kleinrock's team at UCLA.

    Due to Kleinrock's early development of packet switching theory and his focus on analysis, design and measurement, his Network Measurement Center at UCLA was selected to be the first node on the ARPANET. All this came together in September 1969 when BBN installed the first IMP at UCLA and the first host computer was connected. Doug Engelbart's project on "Augmentation of Human Intellect" (which included NLS, an early hypertext system) at Stanford Research Institute (SRI) provided a second node. SRI supported the Network Information Center, led by Elizabeth (Jake) Feinler and including functions such as maintaining tables of host name to address mapping as well as a directory of the RFC's.

    One month later, when SRI was connected to the ARPANET, the first host-to-host message was sent from Kleinrock's laboratory to SRI. Two more nodes were added at UC Santa Barbara and University of Utah. These last two nodes incorporated application visualization projects, with Glen Culler and Burton Fried at UCSB investigating methods for display of mathematical functions using storage displays to deal with the problem of refresh over the net, and Robert Taylor and Ivan Sutherland at Utah investigating methods of 3-D representations over the net. Thus, by the end of 1969, four host computers were connected together into the initial ARPANET, and the budding Internet was off the ground. Even at this early stage, it should be noted that the networking research incorporated both work on the underlying network and work on how to utilize the network. This tradition continues to this day.

    Computers were added quickly to the ARPANET during the following years, and work proceeded on completing a functionally complete Host-to-Host protocol and other network software. In December 1970 the Network Working Group (NWG) working under S. Crocker finished the initial ARPANET Host-to-Host protocol, called the Network Control Protocol (NCP). As the ARPANET sites completed implementing NCP during the period 1971-1972, the network users finally could begin to develop applications.

    In October 1972, Kahn organized a large, very successful demonstration of the ARPANET at the International Computer Communication Conference (ICCC). This was the first public demonstration of this new network technology. It was also in 1972 that the initial "hot" application, electronic mail, was introduced. In March Ray Tomlinson at BBN wrote the basic email message send and read software, motivated by the need of the ARPANET developers for an easy coordination mechanism. In July, Roberts expanded its utility by writing the first email utility program to list, selectively read, file, forward, and respond to messages. From there email took off as the largest network application for over a decade. This was a harbinger of the kind of activity we see on the World Wide Web today, namely, the enormous growth of all kinds of "people-to-people" traffic.

    The Initial Internetting Concepts

    The original ARPANET grew into the Internet. The Internet was based on the idea that there would be multiple independent networks of rather arbitrary design, beginning with the ARPANET as the pioneering packet switching network, but soon to include packet satellite networks, ground-based packet radio networks and other networks. The Internet as we now know it embodies a key underlying technical idea, namely that of open architecture networking. In this approach, the choice of any individual network technology was not dictated by a particular network architecture but rather could be selected freely by a provider and made to interwork with the other networks through a meta-level "Internetworking Architecture". Up until that time there was only one general method for federating networks. This was the traditional circuit switching method where networks would interconnect at the circuit level, passing individual bits on a synchronous basis along a portion of an end-to-end circuit between a pair of end locations. Recall that Kleinrock had shown in 1961 that packet switching was a more efficient switching method. Along with packet switching, special purpose interconnection arrangements between networks were another possibility. While there were other limited ways to interconnect different networks, they required that one be used as a component of the other, rather than acting as a peer of the other in offering end-to-end service.

    In an open-architecture network, the individual networks may be separately designed and developed, and each may have its own unique interface which it may offer to users and/or other providers, including other Internet providers. Each network can be designed in accordance with the specific environment and user requirements of that network. There are generally no constraints on the types of network that can be included or on their geographic scope, although certain pragmatic considerations will dictate what makes sense to offer.

    The idea of open-architecture networking was first introduced by Kahn shortly after having arrived at DARPA in 1972. This work was originally part of the packet radio program, but subsequently became a separate program in its own right. At the time, the program was called "Internetting". Key to making the packet radio system work was a reliable end-end protocol that could maintain effective communication in the face of jamming and other radio interference, or withstand intermittent blackout such as caused by being in a tunnel or blocked by the local terrain. Kahn first contemplated developing a protocol local only to the packet radio network, since that would avoid having to deal with the multitude of different operating systems, and continuing to use NCP.

    However, NCP did not have the ability to address networks (and machines) further downstream than a destination IMP on the ARPANET, and thus some change to NCP would also be required. (The assumption was that the ARPANET was not changeable in this regard.) NCP relied on ARPANET to provide end-to-end reliability. If any packets were lost, the protocol (and presumably any applications it supported) would come to a grinding halt. In this model NCP had no end-end host error control, since the ARPANET was to be the only network in existence and it would be so reliable that no error control would be required on the part of the hosts. Thus, Kahn decided to develop a new version of the protocol which could meet the needs of an open-architecture network environment. This protocol would eventually be called the Transmission Control Protocol/Internet Protocol (TCP/IP). While NCP tended to act like a device driver, the new protocol would be more like a communications protocol.

    Four ground rules were critical to Kahn's early thinking:

    Each distinct network would have to stand on its own, and no internal changes could be required to any such network to connect it to the Internet.

    Communications would be on a best effort basis. If a packet didn't make it to the final destination, it would shortly be retransmitted from the source.

    Black boxes would be used to connect the networks; these would later be called gateways and routers. There would be no information retained by the gateways about the individual flows of packets passing through them, thereby keeping them simple and avoiding complicated adaptation and recovery from various failure modes.

    There would be no global control at the operations level.
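    The best-effort rule above can be sketched in miniature: the network may silently drop a packet, and the source simply retransmits until the packet gets through. The following is a minimal Python simulation for illustration only - the channel, loss rate, and retry limit are invented here, not part of any actual ARPANET or TCP/IP code.

```python
import random

def unreliable_send(packet, loss_rate=0.3):
    """Simulated best-effort channel: delivers the packet or silently drops it."""
    return None if random.random() < loss_rate else packet

def send_with_retransmit(packet, max_tries=10):
    """Retransmit from the source until delivery succeeds (or give up).

    Returns the delivered packet and the number of attempts used.
    In a real protocol, an acknowledgment from the receiver - not a
    return value - would tell the source the packet arrived.
    """
    for attempt in range(1, max_tries + 1):
        delivered = unreliable_send(packet)
        if delivered is not None:
            return delivered, attempt
    raise RuntimeError(f"packet lost after {max_tries} attempts")

random.seed(42)  # deterministic run for the demo
data, tries = send_with_retransmit(b"hello ARPANET")
print(data, tries)
```

    Note that the gateways in this model stay stateless, per the third ground rule: all retransmission state (the packet and the attempt count) lives at the source.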

    Other key issues that needed to be addressed were:

    Algorithms to prevent lost packets from permanently disabling communications and enabling them to be successfully retransmitted from the source.

    Providing for host-to-host "pipelining" so that multiple packets could be en route from source to destination at the discretion of the participating hosts, if the intermediate networks allowed it.

    Gateway functions to allow them to forward packets appropriately. This included interpreting IP headers for routing, handling interfaces, breaking packets into smaller pieces if necessary, etc.

    The need for end-end checksums, reassembly of packets from fragments, and detection of duplicates, if any.

    The need for global addressing.

    Techniques for host-to-host flow control.

    Interfacing with the various operating systems.

    There were also other concerns, such as implementation efficiency and internetwork performance, but these were secondary considerations at first.
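    The end-end checksum in the list above can be illustrated with the 16-bit ones'-complement sum that was later standardized for IP and TCP (RFC 1071). This is a modern sketch of that technique, not code from the period:

```python
def internet_checksum(data: bytes) -> int:
    """16-bit ones'-complement sum over 16-bit words, RFC 1071 style."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF  # ones'-complement of the folded sum

packet = b"end-to-end reliability"
csum = internet_checksum(packet)

# Receiver-side check: summing the data together with its checksum
# field folds to all ones, so the function returns 0 for intact data.
assert internet_checksum(packet + csum.to_bytes(2, "big")) == 0
```

    Any single corrupted bit changes the folded sum, so the receiver's check fails and the packet can be discarded - after which the retransmission machinery described earlier takes over.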

