
Abstract

When I started this project my aim was to learn about “Internet technology”
and how the Internet works in general. The Internet has changed the way the
world communicates.
Table of contents

Abstract
Universal Resource Locator
  Application Protocol
  Domain
  Top Level Domain
Domain Name System
Transmission Control Protocol
  Transport protocol layer
Internet Protocol
  Routing
  Subnets
Ethernet
  Physical protocol layer
  Address Resolution Protocol
Putting it all together
Past the basics
  ICMP
  UDP
Internet history and organisation
  Dawn of internetworking
  Controlling the Internet
  Internet Architecture Board
  IETF and IRTF
  Request for Comments
Internet's future
  Next generation Internet
  Security
  Forwarding
  6bone
New domain names
Future connections
  Mobile connections
  Wireless Internet

Introduction
The Internet is not one big network. As the name suggests it is an inter-net, a
network connecting networks. This is important to know, as it is the foundation
the Internet rests on. When you log on to your local Internet provider, you
connect to their network, which in turn is connected to many others. This is the
strength of the Internet: if one network malfunctions, the others can function
normally without it.
While going through the information I also found papers about new standards
and projects that seemed relevant to the future development of the Internet. This
shaped my project question: how will the Internet handle the future? Or
perhaps, how will the future handle the Internet? To explain the coming generation
of the Internet one must know how the Internet once started, how it works today,
who controls it and many other things. This matched my interest in finding out
how the Internet works with my plan to examine the net's future. All of these
things make up one big mix of information; it is not too detailed, and many things
are left out. My goal was to get a general feeling for what the Internet is and
what it will be.

Universal Resource Locator


Most people use the Internet merely for World Wide Web browsing and e-mail.
To explain how the Internet works we will start with a sample connection
between a browser and a server. There are many ways to find a URL to visit:
they can be found in magazines and newspapers, they can be bookmarked, or they
can be embedded in other documents as links. However, they all consist of the
same major parts. Let's look at a URL and sort out what's what in it. We will
work with http://www.internet.com, a site for and about the Internet. The
easiest way to understand URLs is to split them up into parts, like this:

protocol://host.domain.tld
Application Protocol
Today everyone knows that a text starting with www is a World Wide Web
address, but this is not completely true. It is actually the http:// part of the URL
that specifies that we want to connect to the part of the server that handles
the Hyper Text Transfer Protocol, although if we try to connect to an address
starting with the www prefix our software assumes it to be http. The http
protocol is the Internet standard for exchanging HTML files between clients
and servers. HTML, or HyperText Markup Language, is the language used to
lay out pages so they may contain text, pictures, multimedia and Java, among
other things. In this case we are acting as a client, since we are requesting a
document from someone else. Besides http there is also a secure, encrypted
version of http called Secure HyperText Transfer Protocol (https://), and the
File Transfer Protocol (ftp://), among many others. So the first part of the URL,
the protocol, tells our software how we want to connect to the server and what
kind of reply we are expecting.
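
To make the split concrete, here is a minimal sketch using Python's standard
urllib.parse module. The URL is just the example from the text, with an invented
path added for illustration:

    from urllib.parse import urlparse

    # Break the example URL into the parts described above.
    parts = urlparse("http://www.internet.com/index.html")

    print(parts.scheme)    # 'http' -> the application protocol
    print(parts.hostname)  # 'www.internet.com' -> host, domain and top-level domain
    print(parts.path)      # '/index.html' -> the document we ask the server for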

Host
Next comes the host. A host is a computer that is connected to the Internet.
When you use your modem to connect to your local ISP (Internet Service
Provider) or LAN you also become a host. However, only certain computers have
hostnames that work in a URL; if you connect through an ISP you will not get
one that can be used in URLs.
Domain
The communication between hosts is based on IP (Internet Protocol)
addresses, and the computers themselves do not know where on the net a
URL is to be found; they can only say “I want to talk to host 130.92.112.96”.
For this reason we have domains. A domain is a way for us humans to
remember a location on the Internet. The computer must translate the domain
into a number in order to make contact with the host.
Top Level Domain
The top-level domains also exist to make it easier for us humans to find our
way on the Internet. The TLDs are provided so that domains can be sorted into
categories and countries. Country-wise they are sorted by a two-letter
country code standardised by ISO. The category domains include top-level
domains like .com for commercial usage and .edu for education.

.arpa   ARPA specific domains
.com    Commercial organisations
.edu    Educational institutions
.gov    United States government agencies
.int    International organisations
.mil    United States military
.net    Network providers
.org    Non-profit organisations
.ad…zw  Country specific domains
Domain names can be almost anything. Different TLD registrars (the
organisations that manage the registries) have different rules for registering
domains, and as long as you follow their rules and the domain name rules found
in RFC 952, 1035 and 1123, you are free to use your imagination. (More on
RFCs later on.)

Domain Name System


Because hosts can only find each other by IP address, in the same way that
US Mail needs zip codes to find the correct receiver, there must be a system
to convert our easy-to-remember URLs to addresses that the computer can
use. This system is called the Domain Name System (DNS). In the early days of
the Internet there weren't many hosts on the net, so every computer connected
had its own file with all the domains and their corresponding addresses. Today,
with millions of hosts connected, this system wouldn't be very efficient. When
we want to find our host, www.internet.com, we contact a computer that we do
know the IP address of. This server is called our root DNS server. We ask this
server where we can find information about .com domains. The DNS server then
gives us a list of .com domain name servers. Our computer now selects one of
these servers, contacts it and asks it about internet.com. Just as with the root
server, we get a list of servers handling that domain. This continues until we
know the IP address of www.internet.com. In general the Internet software is
“smart” and remembers common servers, so it can skip one or two of the domain
name servers, making the communication faster and fewer transfers necessary.
Now our computer software is ready to make the actual connection to the host.
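
As a small illustration of the end result of this lookup, the sketch below asks
the operating system's resolver, which performs the DNS queries described above,
to translate the example host name into an IP address:

    import socket

    # Translate a host name into an IP address; behind the scenes the resolver
    # walks the domain name system (or answers from its cache).
    address = socket.gethostbyname("www.internet.com")
    print(address)  # prints whatever IP address the DNS returns for the name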

Transmission Control Protocol


Communication over the Internet is made up of layers. This way many
different types of computers can talk to each other by varying methods.
Another feature is that every piece of software doesn't have to “re-invent
the wheel”. For example, when we use Netscape Navigator in Windows 95 to
connect to www.internet.com, it will use a piece of software built into
Windows 95 called Winsock. Winsock handles the communication, such as sending
and receiving data and looking up domains through the Domain Name System. Our
browser just tells it what to do and it will do it. Of course it's not only
browsers that use Winsock; almost all Windows 95 Internet software, such as
e-mail clients, ftp clients and newsreaders, uses it. In the same way that
Windows 95 has Winsock, most other operating systems have something similar
with equivalent features.
We have already been over the application protocol layer; it is the protocol
that describes how two applications talk to each other, like how the HyperText
Transfer Protocol transfers the HTML documents used to lay out web pages.
Transport protocol layer
Usually when Internet communication is discussed people talk about TCP/IP
and how all the information on the Internet is transferred over it. This is not
completely true: although most information goes by TCP (Transmission
Control Protocol), there are others. The applications that are communicating
decide which transport protocol to use, based on the standard that is set for
the data being transferred. Applications like browsers and file transfer clients
use TCP, while transfers that need more speed at the cost of reliability, like
audio and video streaming, use UDP (User Datagram Protocol).
When applications talk to each other over the Internet, like when you look at
someone's homepage, they do not take part in the actual sending and
receiving. This is where the transport protocol layer comes in. The application
tells the transport protocol what to send and where to send it, and then, in the
case of TCP, it is up to the transport protocol to make sure that everything gets
sent and that everything arrives in the same state it was sent in. If anything
goes wrong it is also the transport protocol layer that handles it, by
re-sending. The transport layer also splits the data into smaller pieces: if you
want to send someone a two-megabyte file, it can't all be sent at
once, so the transport protocol layer splits it into smaller chunks.
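
A rough sketch of that splitting is shown below. It is only an illustration: real
TCP segments carry headers and sequence numbers, and their size is negotiated
rather than fixed at the 1460 bytes assumed here.

    def split_into_chunks(data: bytes, chunk_size: int = 1460):
        # Yield the data in pieces, the way the transport layer breaks a
        # large file into smaller segments before handing them to IP.
        for offset in range(0, len(data), chunk_size):
            yield data[offset:offset + chunk_size]

    two_megabyte_file = b"x" * (2 * 1024 * 1024)
    print(sum(1 for _ in split_into_chunks(two_megabyte_file)), "chunks to send")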

Internet Protocol
The Internet Protocol (IP) layer gets the datagrams (the parts of a file that have
been split up) from TCP (or whatever transport protocol is being used), adds
some of its own information and does the actual logical transfer over
the Internet. Simplified, this can be described as TCP making sure everything
goes through, while IP actually makes it happen. IP only does the logical
transfer; this might sound weird but it isn't. Since the Internet spans many
different types of networks, such as Ethernet and Token Ring, which all have
their own way of communicating, there is a need for a layer that can work on
top of them all. The Internet consists of many networks connected to each
other; some networks might have connections to many other networks while
others only have one route out to the rest of the Internet. It is the Internet
Protocol's job to find out how to move the data between the different networks.
Routing
Finding out how to move this data between networks is called routing. Where
two networks are connected to each other there is a router or a gateway;
these are used to move data between the two networks. So what IP has to do
is to check whether the target computer is in the same network and, if so, just
send the data off. If not, it must find out which route it should take. For this it
uses a routing table: a list of IP addresses and the gateway they should be sent
to. If there isn't an entry for the target IP, it is sent to the default route. The
default route is the gateway that is most likely to be the correct one. When IP
knows where to send the datagram, it does so, and it is that network's
responsibility to get the information to the correct computer or onwards to
another network.
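
A very simplified sketch of that routing-table lookup is given below; the
networks and gateway names are invented for the example:

    import ipaddress

    # Toy routing table: destination network -> where to send the datagram.
    routing_table = {
        ipaddress.ip_network("130.92.0.0/16"): "gateway A",
        ipaddress.ip_network("192.168.1.0/24"): "deliver locally",
    }
    default_route = "default gateway"

    def choose_route(destination: str) -> str:
        address = ipaddress.ip_address(destination)
        for network, route in routing_table.items():
            if address in network:
                return route
        return default_route  # no entry matched, so use the default route

    print(choose_route("130.92.112.96"))  # gateway A
    print(choose_route("10.1.2.3"))       # default gateway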
Subnets
Networks on the Internet are called subnets, and a subnet can itself have its own
subnets. A large university can for example be a subnet of the Internet and
have subnets for each faculty. The purpose of making small networks is to
stop one malfunctioning hardware device from stopping the entire network.
This is a part of the overall Internet strategy: there should always be a way
out. If one connection goes down there is always another. When IP decides
whether a host is located within the current subnet it looks at the IP address
and analyses it. All the computers connected to the Internet must have their
own IP, and because networks have different sizes, i.e. numbers of hosts, there
are different network classes. The system is constructed in such a way that
Class A networks may have many subnets and hosts, Class B networks fewer,
and so the class system ranges down to smaller and smaller classes, each with
fewer hosts. All the networks connected have been assigned one or two
network ranges by a central authority, within which they can decide what
computer gets what number. A company that requests a certain number of IPs
but does not need an entire Class B network can instead be assigned two or
three Class C networks.
The point of not giving out more IPs than necessary is that the Internet is
starting to run low on them.
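
The “is this host on my own subnet?” decision can be sketched with Python's
ipaddress module; the addresses and the network mask below are made up for the
example:

    import ipaddress

    local_network = ipaddress.ip_network("192.168.10.0/24")

    def is_local(destination: str) -> bool:
        # True  -> deliver directly on the local network,
        # False -> hand the datagram to a router instead.
        return ipaddress.ip_address(destination) in local_network

    print(is_local("192.168.10.42"))   # True
    print(is_local("130.92.112.96"))   # False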

Ethernet
Ethernet is a very popular and widespread type of Local Area Network. The
most common form of Ethernet is called 10BaseT, which denotes a maximum
transmission speed of 10 Mbps over twisted-pair copper cables. Recent
enhancements of Ethernet bump the speed up to a maximum of 100 Mbps; this
system is called 100BaseT.
Physical protocol layer
Ethernet is the physical network. Here we have computers actually connected
to each other by cables and wires. Since IP was made to travel over many
kinds of networks, it has its own addressing system, the IP numbers. At the
physical layer of the network, IP addresses do not mean anything; Ethernet
and all the other networks have their own way of finding the correct hosts.

When Ethernet was designed, one of the goals was to make sure that two
computers could not share the same address. Because of this, every Ethernet
network interface card (NIC) sold has its own unique Ethernet address consisting
of 48 bits (a bit is either 0 or 1), and all the Ethernet manufacturers have to
register with a central authority that monitors this.
Address Resolution Protocol
Ethernet works in the same way as a big party line: what one says, everyone
hears. But just as you do not listen to what everyone says on a party line,
your Ethernet system will only listen to data directed to it. To find the
corresponding Ethernet address for an IP address (as they have nothing in
common), your system will send out a broadcast, to which all the systems on
the local network listen, asking if anyone is assigned that IP. This
system is called the Address Resolution Protocol, commonly called ARP.
When the system that has that IP hears your request for its Ethernet address,
it will reply, and the two computers can now talk to each other. It would be very
bad for network performance if this had to be done every time two
computers try to make a connection, because while it is being done all other
communication is halted. Instead your system will save the information it
knows about other hosts in memory for some time to speed things up.
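
The caching behaviour can be sketched roughly as below. This is only a toy
model: the broadcast is replaced by a placeholder function, and the hardware
address it returns is invented.

    # Toy ARP cache: remember answers so the "who has this IP?" broadcast
    # does not have to be repeated for every packet.
    arp_cache = {}

    def broadcast_who_has(ip):
        # Stand-in for the real broadcast on the local Ethernet segment.
        return "00:11:22:33:44:55"

    def resolve(ip):
        if ip not in arp_cache:
            arp_cache[ip] = broadcast_who_has(ip)  # ask the whole segment once
        return arp_cache[ip]                       # later lookups come from memory

    print(resolve("192.168.10.42"))
    print(resolve("192.168.10.42"))  # answered from the cache, no broadcast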

Putting it all together


You have probably noticed that there are many protocols needed for Internet
communication, and it is not always easy to understand how they work
together. As a summary we will take a small example and show what each
protocol does.

We will again use our browser example. Let's say that you have requested a
small text document from www.internet.com and the server sends it over to
you (mycomputer.network.se). First of all the server will add information about
what is being sent in the HyperText Transfer Protocol, discussed earlier. This
tells your application what data the packet contains. Next TCP takes all that
information, adds its own headers to it and sends it all down to the IP level.
IP will also add its own headers, as each protocol layer only understands its
immediate neighbours: Ethernet will not understand TCP headers and HTTP
will not understand IP headers. The IP layer will now find out how the packet is
to be sent, most likely through Ethernet, so it passes it down to Ethernet. Once
the Ethernet package reaches mycomputer.network.se, Ethernet will remove its
headers and send it back up to IP. IP will then do the same and give the
information to TCP. As you can read from its name, Transmission Control
Protocol, TCP checks the information to make sure it has not been corrupted in
transfer; if it has, it asks the server to send it again. If it is all right, TCP
removes its headers and gives the information to your browsing software, which
removes the HTTP headers, and you can now see the text file in your browser!
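
To make the example concrete, here is a hedged sketch of the same exchange seen
from the application: open a TCP connection to the server's HTTP port and send a
request by hand. Modern servers may redirect or require more headers, so treat it
purely as an illustration of the layering.

    import socket

    # TCP and IP handle splitting, addressing and delivery underneath us;
    # all we supply is the HTTP request and the destination.
    with socket.create_connection(("www.internet.com", 80)) as conn:
        conn.sendall(b"GET / HTTP/1.0\r\nHost: www.internet.com\r\n\r\n")
        reply = conn.recv(4096)                           # first chunk of the answer
        print(reply.split(b"\r\n")[0].decode("latin-1"))  # e.g. "HTTP/1.1 200 OK"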
Past the basics
Internet’s smart layering system might make it seem as if it is easy for the
different layers to perform its actions. For users and most application
developers it’s both easy to use and develop Internet software as most of the
technical parts of it is built-into modules that can be easily adapted in many
programming and application environments. Behind all of this it isn’t such a
simple matter. As we discussed earlier large files have to be split into smaller
pieces so it can transferred easily. This wasn’t completely true. Almost
everything that is transferred must be split, or fragmented. Every physical
network type has it’s own limit on how big packages it accepts and if a larger
one arrives it must then handle the splitting and re-assembly, that on
9

packages that might already be split. Sending out too large packets can then
of course make Internet transfers slower as hardware on other places must
work, besides the increased traffic volume this generates.
ICMP
The Internet has many more protocols than the ones we have discussed so far.
ICMP, or Internet Control Message Protocol, could best be described as the
Internet's error-reporting protocol. If a packet of data takes too long to
deliver, an ICMP message will be sent to the sender telling it what happened.
Likewise, if a system tries to transfer some data to a network outside the local
one through the default router, and that router has been told there is a better
way to the target, the source will receive a reply stating so. The Internet
Control Message Protocol really is what its name says, a message protocol for
reporting errors; it doesn't find errors itself.

UDP
We talked about UDP, the User Datagram Protocol, before, and we said it was
less reliable but faster. It is less reliable because its headers are smaller and it
has fewer features for verifying that the information transferred is correct.
TCP is what is called a connection protocol, in other words both computers
respond to each other's data so they both know whether everything worked,
whereas UDP is connectionless. This means that, for example, an audio stream
from a live radio show is sent to the listener just like a real radio broadcast:
ready or not, we're transmitting now. It is basically up to the listener to make
sure he is ready to receive. This of course means loss; some data will not reach
the listener, and that is what makes UDP less reliable but faster. UDP itself
doesn't do any error checking, but the application using the protocol may; in
that case it is often easier to just use TCP, which has it built in.
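
That “ready or not” behaviour can be sketched with UDP sockets; the loopback
address and port number below are arbitrary choices for the example:

    import socket

    # Receiver: listen on a port for whatever datagrams happen to arrive.
    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("127.0.0.1", 5005))

    # Sender: fire off a datagram and move on; there is no handshake,
    # no acknowledgement, and no retransmission if it is lost along the way.
    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(b"audio frame 1", ("127.0.0.1", 5005))

    data, source = receiver.recvfrom(1024)
    print(data, "from", source)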
Ports
When TCP receives data from IP, it does not directly know how the data should
be passed to the application layer. Many Internet applications might be running,
so there must be a way to find out which application wants what. This is done by
using ports. When we connect with our browser to www.internet.com, our
software knows we want to connect to the HTTP part of the server, since we
are using the World Wide Web (it can also be specified explicitly by typing
http://www.internet.com:80). To make sure the server knows we are requesting
an HTTP document, we add the standardised port 80 to our request. With the
request we also add the port we want the server to use when communicating
back to us; this can be any free port. This way the two computers' TCP software
can get the data sorted out correctly. Different types of transferred data use
different ports that are standardised to make sure there are no clashes.
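
Both ends of that port pair can be seen from a connected socket: the
standardised server port we asked for, and the free local port our own system
picked, which will differ on every run:

    import socket

    with socket.create_connection(("www.internet.com", 80)) as conn:
        local_ip, local_port = conn.getsockname()    # the free port picked for us
        remote_ip, remote_port = conn.getpeername()  # the server's well-known port
        print("talking from port", local_port, "to port", remote_port)  # ... to port 80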

Internet history and organisation


Dawn of internetworking
The groundwork for the Internet was laid as early as 1957. That year the USSR
launched the first satellite, Sputnik. To establish a lead in military science and
technology, the US Department of Defence formed the Advanced Research
Projects Agency, commonly known as ARPA. Later, in the 60's, ARPA started
to study networks and how they could be used to spread information. In 1969 the
first few networks were connected. The first system to send e-mail across a
distributed network was developed in 1971 by Ray Tomlinson, and the telnet
specifications (allowing users to log in on remote computers) arrived one year
later. The first drafts for a network technology called Ethernet were created in
'73, and a year later there was a detailed description of the Transmission Control
Protocol. The Usenet newsgroups were created in 1979, and in 1982 the
Department of Defence declared TCP/IP to be the standard. At this time the
number of hosts connected was very low; in 1984 it passed the 1,000 boundary.
Three years later that number had grown to 10,000, but we are still far from
the Internet explosion.

Most of this happened before computers were widespread; IBM released
its first PC, based on Intel's 8088 processor, in 1981. The Pentium processor
family that is currently being phased out arrived in 1994. The users connected
to the Internet at this time were researchers and students, connected through
university networks.

A worm that infected computers on the Internet with a program that took up
system resources (like memory) created a need for some sort of team that
would try to find solutions to make such issues less dangerous. The team was
called the Computer Emergency Response Team (CERT). They work by writing
advisories and reports on how to avoid problems.

What most people tend to define the Internet as is the web. The World Wide
Web standard was created in 1991 by CERN, and Mosaic, the predecessor to
Netscape Navigator, saw the light of day two years later. Ordinary people
started to get Internet access in 1994-95; it is around those years that the
numbers of hosts, domains and networks started to increase rapidly. Yet only a
small share of the earth's population is connected.

The so-called browser war between Microsoft's Internet Explorer and
Netscape's Navigator started in 1996 when the two companies released their
3.0 browsers. As this is being written there still isn't a winner, but Netscape
has been forced to make its browser free (Microsoft's has always been),
including the source code. Perhaps the US Justice Department will prevent
Microsoft from giving its browser away; perhaps they will split the company
into pieces. At the very least, it shows the future importance of the Internet
when Microsoft embeds its browser into the core of its operating system.

Controlling the Internet


When we look into how the Internet is controlled today, we have to keep in
mind that when ARPA created the network more than 25 years ago, they
did not intend it to be used the way it is used now, nor did they expect this
number of users. The managing organisations have been created along the
way, and there are no exact jurisdictions over who controls what.
Internet Architecture Board
The Internet Architecture Board, or IAB, is at the top of the hierarchy. It
reviews the Internet standards, oversees the other groups and acts to preserve
control over the Internet as an international network. Probably its most
important role is to identify long-term opportunities and decide how they should
be handled.
IETF and IRTF
Almost directly under the IAB we have the Internet Engineering Task Force
and the Internet Research Task Force. The IETF handles all the current
protocol standards and promotes further development; it also handles
operation and management of the Internet. The IRTF is more of the Internet's
future department. It takes care of the future problems of the Internet
and how they are to be handled. Among its work is how the net should
handle billions of hosts, faster connections and wireless Internet. For this
it has to look at new protocols and how they can be incorporated into the
current system without major service interruptions.
Request for Comments
On the Internet anyone can propose a standard. By writing a text that follows
certain guidelines, new features and standards can be proposed to the IETF
User Services Working Group for review. If a proposal is approved it will be
assigned a unique number and added to the Request for Comments (RFC)
database. The first RFC was published in April 1969, then as a way to
document the network. Today there are thousands of RFCs dating from the
beginning of internetworking to the present day; many have been outdated by
newer ones along the way. The RFCs give the Internet great potential to
continue its development, as new technologies can be presented quickly and
then standardised.

Internet’s Future
Next generation Internet
The current version of the Internet Protocol is version four, abbreviated IPv4.
When it was created, the number of computers connected to the Internet was
not expected to be as high as it is. The addressing system's IP addresses
consist of four octets of numbers ranging from zero to 255 (example:
130.244.198.36). In technical terms this is 32 bits; a bit can be either zero or
one, so this gives us 2^32, almost 4.3 billion, unique numbers. The actual
number is a bit lower, as not all combinations are allowed. This allows quite a
large number of computers to be connected, but estimates show that early in
the next century the IP addresses will be exhausted. This is one of the reasons
a new Internet Protocol version is being developed. Formally it is named
Internet Protocol Version 6 (IPv6), but it is also known as IP Next Generation
(IPng).

To provide more addresses, IPv6 addresses have been expanded to 128 bits,
giving 340,282,366,920,938,463,463,374,607,431,768,211,456 theoretically
available IP addresses. This is a limit that the engineers think we will stay
below for quite some time.
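
The two address-space sizes can be checked directly:

    # 32-bit IPv4 address space versus 128-bit IPv6 address space.
    print(f"{2 ** 32:,}")    # 4,294,967,296
    print(f"{2 ** 128:,}")   # 340,282,366,920,938,463,463,374,607,431,768,211,456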
IPv6 has been designed to enable high-performance, scalable internetworks to
remain viable well into the next century. A large part of this design process
involved correcting the inadequacies of IPv4. One major problem that has
been fixed is routing. IPv6 does not use different network classes for
routing; instead it uses a system that provides flexibility to expand networks
while keeping routing quick. With many addresses to work with, the
addressing has been laid out so that addresses are first of all sorted by their
major connection points. One such point is Sunet in Sweden, which has a
large address range that it can provide to companies, minor ISPs and dial-up
customers. This makes routing much easier; Internet backbone routers will no
longer have to keep huge databases of over 40,000 entries.
Security
With IPv4 there isn't any security at the IP level. One of the design goals for
version 6 is to provide authentication and encryption at a lower level.
Previously, encryption had to be done at a higher level, usually the
application layer. The authentication part makes sure that the information is
actually coming from the source it claims to come from. This ensures that
intruders cannot use spoofing (changing the source address to make a packet
appear to come from a different host) to get at valuable data or passwords
stored on a system. Encryption is done by adding extra headers to the IP packet
with encryption keys and other handshaking information. This way every packet
can be encrypted by itself at a lower level, preventing sniffers (programs that
eavesdrop on network traffic) from accessing the information in the packet.
Multicast and Quality of Service
As streaming audio and video become more widely used over the Internet,
along with other time-critical applications like news and financial information,
the limitations of IPv4 become more obvious. Version 6 of the Internet
Protocol has a feature called multicast. It allows broadcasters of audio and
video streams to send out just one packet of the same information to many
recipients. It works like a tree: whenever a network splits into a few smaller
ones, the information is replicated and distributed down the tree. This
decreases network traffic, which matters as audio and video broadcasts are
expected to increase heavily when more people get faster Internet connections.
Quality of Service is also important for the future of streaming: by setting a high
Quality of Service value, the routers in the path to the target computer will
prioritise the packet, leading to faster delivery. The risk with this is of course
that applications like e-mail and news, which would otherwise be considered
non-time-critical, also set a high Quality of Service to get delivered quickly.
Neighbour Discovery
One of the major headaches for administrators of large networks is managing
IP addresses. The InterNIC wants to keep as many addresses free as possible
for future usage, giving the administrators a lot of work tracking which
addresses are used and which are free. When IPv6 is used on a network, such
problems can be avoided. The protocol has a sort of autoconfiguration: when a
host is connected to a network it talks to the local router using a temporary IP
address, and the router tells the host what IP it should use. The router has
previously been given a range of addresses by the system administrator. In the
same way, if a network is moved or there is a change of ISP, resulting in a major
IP change, the administrator reconfigures the router with the new IP range, and
the router will then, through the Neighbour Discovery (ND) protocol, tell the
hosts their new IPs.
Forwarding
To support highly dynamic situations in the future, IPv6 contains features for
IP forwarding. When a user leaves work to go on a business trip, for example,
he will log out from the local area network. The system will then tell the local
router that all data to that user is to be forwarded to his laptop's IP instead of
his work IP. Forwarding allows domain name entries to remain unchanged while
the user is connected to a network on the other side of the earth.

Transition
When, or if, IPv6 makes it to the common market, the transition will not be too
hard. The next generation protocol is created to work with the old version of
IP. The first routers installed using the new protocol will also handle the old
version, so IPv4 can talk to them during the transition period. The only
dependency that exists is the DNS: when a subnet is upgraded to IPv6, the
domain name server must also be updated to handle the new IP addresses. The
network that the subnet is connected to does not have to be upgraded. If an
IPv6 host connects to a different IPv6 host on a different subnet where the
data has to travel over an old IPv4 network, it will simply be encapsulated with
IPv4 headers. This method is called tunnelling. Once the packet reaches the
destination IPv6 network, the IPv4 headers will be removed by the router and
the packet will be delivered to the correct IPv6 host. The old version four
network will never know that it carried something it actually cannot handle.

6bone
Currently there is a virtual world-wide IPv6 network called 6bone, created to
test implementations of IPv6 in a working environment without risking
production routers and important systems. The network operates on top of the
ordinary Internet by the tunnelling discussed earlier. 6bone is not, however, a
new Internet that we will move to once IPv6 is ready for commercial use;
instead it is just a playground for scientists, and it will disappear when IPv6
becomes widely used.

New domain names


Not in any way related to the proposed IPv6 standard, seven new top-level
domain names have been proposed as additions to the current com, org, net
and others.

.firm  for businesses or firms
.store for selling products
.web   for www related sites
.arts  for cultural sites
.rec   for recreational and entertainment sites
.info  for information sites
.nom   for personal homepages

It might seem great with all these new categories, but will they actually matter?
The owners of many domains today registered them to make a profit: by
registering corporate or product names they hope to sell them to the rightful
owner later on. In the same way they can register attractive domains like
video.store or cd.store simply by being quick to register, and then sell to the
highest bidder. To stop domain opportunists, large corporations also have to
register their domains under all the top-level domains, just the way they have
done with the country domains. Most likely the new domains, whenever (or if)
they arrive, will just create a storm of registrations, and all the sought-after
domains will be taken immediately. Along with them there will also be the usual
copyright disputes and so on that already exist with the .com domain.

Future connections
Many new connection forms are emerging as the demand for high speed
Internet grows. Users no longer wish to browse with slow modems. In this
section we will look into some of the technologies that might become popular
in the future.

Fixed connections
The modems most people use to connect to the Internet have a speed of 33.6
kbps (thousand bits per second); this gives a transfer rate of about 3 kB/s
(thousand bytes per second) on the Internet. When downloading files this is
very slow. Phone lines in general support much higher communication
speeds; here are some of them.
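
Before looking at the alternatives, here is a rough worked example of what these
speeds mean in practice. The figures ignore protocol overhead, and the link
speeds are nominal values chosen for illustration, so real download times would
be somewhat longer.

    # How long does a 2 MB file take to download at different nominal link speeds?
    file_size_bits = 2 * 1024 * 1024 * 8  # two megabytes expressed in bits

    for name, kbps in [("33.6 kbps modem", 33.6), ("64 kbps ISDN channel", 64), ("8 Mbps ADSL", 8000)]:
        seconds = file_size_bits / (kbps * 1000)
        print(f"{name}: about {seconds:.0f} seconds")  # roughly 499, 262 and 2 seconds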
Integrated Services Digital Network
ISDN is a speedier version of the standard phone line. The difference lies in the
way the connection is handled: instead of the call being made analogue on its
way to the subscriber from the telephone station, digital technology is used all
the way out over the standard copper cable. Normal phones are analogue, so
this system requires an adapter that converts the signal to the analogue format.
ISDN provides two channels of 64 kbps each for voice and data, and one service
channel at 16 kbps to handle communication between the telephones and the
telephone station, such as notifying when there is an incoming call. Recent
developments of ISDN allow the service channel to be used for more than
signalling. Since this channel is in use all the time, not only when a phone or the
Internet is used, it would allow a computer connected with ISDN to be online
constantly, and when needed it could connect with one of the data channels to
provide higher speed. This addition to the ISDN system is not widely spread,
but it shows good use of existing technology.
Satellite
Connections through satellite are starting to become available; their main
advantage is the high data transfer speed. Common users can expect speeds
ranging from 400 to 800 kbps, while professional equipment could increase that
speed dramatically, to over 10 Mbps. The big drawback with satellites for
consumer usage is that it is a one-way system: you have to keep a modem
connection open for communication back to the Internet (to request and
acknowledge information). Another drawback is latency: transferring data up to
space takes a while, and this creates slight delays that could, for example, make
gameplay over the Internet very tedious.
xDSL
The DSL family of technologies is, just like ISDN, an extension of your current
phone line. DSL technology however provides much higher speeds, but also
requires technical upgrades at the local telephone station. Besides that, you
cannot be too far from the telephone station, as background noise will disturb
the signal, giving you much slower transfers than the 9 Mbps that ADSL can
offer. Digital Subscriber Line, which is its long name, is probably one of the
connection forms that will be popular in the future, as long as you live near the
telephone station.
Cable Modem
The cables already laid out to handle cable TV can carry data very well. Many
cable networks are, however, only good at delivering data in one direction:
users connecting through a cable modem can get speeds of a couple of Mbps
from the Internet, while sending might go down to a few hundred kbps. This
differs widely depending on the system that the cable operator is using.

Mobile connections
GSM
Just as the Internet wasn't created to grow like it did, the European mobile
phone system, GSM (Global System for Mobile communications), wasn't created
to handle data. The system is currently limited to 9.6 kbps, while normal
telephone line modems can reach speeds up to 56 kbps. This makes mobile
Internet access very limited: only e-mail messages can be sent and received at
a reasonable speed, and browsing the www would be very slow. A connection to
a mobile phone is no longer needed; telecommunication companies have phones
with integrated computers as well as PC Cards with built-in phones. Fast
communication over GSM is not very good yet, but by the year 2001 the GSM
systems are expected to be enhanced for data transfer at 384 kbps.
Universal Mobile Telephone System
On January 29, 1998, at the European Telecommunications Standards Institute
meeting in Paris, the standard for the third generation of mobile phones was
set. The first generation was analogue; the second generation had digital
phone systems like GSM and AMPS (an American mobile phone standard).
The new system's technical standard is called UTRA, while the phone system
is called UMTS. It has some major advantages over older systems. First of all,
the voice quality should be comparable to fixed lines; second, and most
important in this context, is its support for higher data rates. For indoor access,
speeds up to 2 Mbps can be reached, while wide area access only allows
speeds up to 384 kbps. What really shows the aim for global mobile network
communication is the support for multiple simultaneous connections and
support for IP packet handling.
Wireless Internet
There are systems designed specifically for wireless Internet; in Seattle,
Washington DC and San Francisco such systems consist of thousands of small
transmitters on light poles. The system provides access all over the central
area at ordinary modem speed. It is very flexible in the sense that you can move
around freely in the city, and it requires only a small antenna on the special
Ricochet modem. The system currently has more than 15,000 users, and
bandwidth upgrades to provide higher data speeds can be expected in the
future.
Multichannel Multipoint Distribution System
A system similar to the above is MMDS; it does however require the receiver
to have a small dish besides the modem, making it non-mobile. The good part of
it, however, is that it can reach speeds up to 30 Mbps. Speeds like that allow
television and video broadcasting. With digital transmission technology it might
also be possible to triple the speed. The problem with the system is that all
the users in the same region share bandwidth, so when everyone wants to
surf the web it will not be as effective as when sending TV. The system is still
half-mobile, since it does not require any cables to be drawn, making it at least
portable.
Shared Wireless Access Protocol
Not only are people expected to communicate with each other over the
Internet; electronic devices at home will also be talking to each other in the
future. Almost all the major computer companies are working together to
develop SWAP, a protocol that defines how devices talk to each other by
radio signals. The system would allow you to control telephones, lights,
alarms, computers and ovens, all across the Internet. Technically, one home
network can control 127 devices and communicate at 2 Mbps. It was for these
kinds of applications that the Internet programming language Java was first of
all created by Sun Microsystems. There are competing standards to both SWAP
and Java, however. Microsoft wants the devices to be controlled by a miniature
version of its Windows operating system, while electricity companies want the
communication to go not by radio but through the electricity lines. They do have
a good point, as most of the devices planned to be connected to the home
network are already connected to an electric cable. The speed of this is
currently the same as SWAP, with improvements likely to come. Internet over
electric lines also works for out-of-the-house connections, like browsing and
e-mail. Its great advantage is that everyone has it, and it would only require
minor changes to the power system and a small adapter at home.

USES OF INTERNET
Schools should be connected to the NEN through their RBC. In London this
is the London Grid for Learning (LGfL), which procures the broadband supply
from Synetrix. Across this schools' fibre network, a range of services is
provided. Internet filtering is a key service; it is updated and monitored by
Synetrix working with their Third Party Suppliers. All London maintained
schools should be part of this network.
Additionally, schools should have up-to-date anti-virus, anti-spyware and
anti-spam software and approved firewall solutions installed on their network.
There are LGfL solutions for all of these, and they should be set up to update
automatically so that networks remain up to date.
To make sure rogue applications are not downloaded and hackers cannot
gain access to the school’s equipment or into users’ files through Internet use,
staff and pupils should not be able to download executable files and software.
Unfortunately, inappropriate materials will inevitably get through any filtering
system, so schools should be vigilant and alert so that such sites can be blocked.
Conversely, sometimes appropriate websites need to be unblocked. In larger
schools, network managers will be able to block sites or liaise directly with
Synetrix over this. In primary or smaller schools, there should be a named
member of the ICT strategy team who manages the filtering policy for the school:
this person may be the technician or the ICT coordinator, and the LA will usually
be able to provide them with advice and back-up. By working together,
London schools help to make the filtering system as effective as possible.
Networks can have 'health' checks to ensure they have the latest patches and
service updates, and to check speed and whether inappropriate applications are
present on the network. Synetrix provide a service that schools can purchase;
this is particularly useful for secondary-phase and larger schools. [contact your
LA for details]
Individual log-ins, coupled with auditing software, mean that activity on the
network can be monitored and logged. Security can be enhanced by 'timing out'
Internet or network sessions. High-level monitoring of website access is also
undertaken by Synetrix, and logs can be obtained where a site is under
investigation.
Within lessons, there is network ‘remote’ management software available on
the market, which enables the ‘tutor’ / teachers to present only the software or
the Internet links they want pupils to access for that lesson or topic –
particularly useful for younger pupils. Your LA may be able to offer advice on
such products.
Filtering, coupled with child-friendly search engines [e.g.
http://yahooligans.yahoo.com/ | http://www.askforkids.com/ ], reduces the
likelihood of children finding inappropriate materials. Schools should set up
search engines so that 'safe search' is turned on: although not a child-friendly
search engine, it is worth noting that Google can be forced into safe search
mode through the LGfL provision.
Caching some sites, so they are essentially stored as off-line resources for
later viewing from the Local Area Network (LAN), is another useful strategy.
Schools should consider having a cache server, such as the LGfL CachePilot
or another LA-recommended solution.
Pupils publishing to the Internet on a class LGfL website avoids the difficulties
of publishing on a publicly available website, because the class site can be a
safe, closed environment which only they have access to via their username
and password.
Schools should not send personal data across the Internet unless it is
encrypted or sent via secure systems such as the DfES s2s site or an
approved Learning Platform etc.

Conclusion
As I have worked with this project I have formed pictures in my head of how we
will be connected in the future and how we will use the Internet. The only thing
that I can say I am really certain will happen is mobile Internet. Cell phone
usage and Internet usage have exploded hand in hand. In the same way that we
want to travel and make calls with our cell phones, we will also want to travel
and connect to the Internet. The problem with this is of course the bandwidth:
wireless communication does not at present provide enough speed to be truly
useful. Perhaps UMTS will be the solution to this. As fixed Internet
connections start to provide enough bandwidth to support real TV and video
broadcasting, users will want the same features in their flexible laptop
computers. The question is: will the mobile Internet systems provide what the
users want?

Connecting all our home devices into one local home network will be one of
the great advantages of the Internet in the future; letting all our home devices
talk to each other opens up great possibilities.

In the long run I think CD discs, DVD discs and other multimedia formats will
be phased out. When users start getting more bandwidth, such media will
become redundant. The Internet is a better platform for multimedia than
storage discs ever can be: the information can be updated whenever
needed, making separate patches and updates unnecessary. An argument against
this theory is that a DVD disc, with its gigabytes of data, will take a long time to
transfer. In this lies the strength of the Internet: instead of sending all the data
to the user in one large file, it will be streamed. This way the user can use
one part of the multimedia application while the next part is being downloaded
to the computer, making it ready for use when the user wants it.

Mixing the local home device network (as I like to call it) with the Internet can
have certain advantages. One scenario would be stereos: instead of playing
standard FM radio, radio signals from all over the world would arrive over the
electricity line. No more CDs: when you want to listen to music, start your TV,
go through the menu system until you find the song you want, and it will be
played over the Internet. Naturally the same goes for videos and games.

Still, we will want to have some kind of personal storage space that we know is
ours, not publicly available to everyone over the Internet. I don't really believe
in Sun's Network Computers; I am more for an intermediate solution of
networked computers: they don't need CD-ROM or floppy drives, all they would
need is a hard drive to store information and a network interface card for
Internet access.
Working with this project has been great fun. Not only have I learned a lot;
hopefully this small report about the Internet can also teach others. When I
had finally finished writing, something struck me: all the information I used
from the Internet I had printed on paper. Even if everyone is connected in
the future and all the devices around us communicate with each other,
people will still want to be able to sit back, relax and enjoy a good book. A
physical one, made of real paper and printed with black ink.

List of references

Paul Simoneau: “Hands-On TCP/IP”, McGraw-Hill 1997, ISBN 0-07-912640-5

Mike Bracken: “The battle for the Web”, Internet Magazine, September 1997, page 43

Ahrvid Engholm: “Allt kretsar kring processorn”, Mikrodatorn 5-98, page 39

Mike Bracken: “New domain names”, Internet Magazine, November 1997, page 47

Göte Andersson: “Radionät kopplar ihop elprylar”, Dagens Nyheter, 8 April 1998, page 1 / DN.IT

Paul Lavin: “Internet Unplugged”, Internet Magazine, March 1998, page 62

David Moss: “Fast net access”, Internet Magazine, February 1997, page 104

Martin Appel: “Trådlös i Seattle”, Internetworld 1-98, page 41

Kari Malmström: “Jakten på ett mobilt Internet”, Kontakten 5-98, page 21

Charles L. Hedrick: “Introduction to the Internet Protocols”, The State University of New Jersey, 3 October 1988

H. Gilbert: “Introduction to TCP/IP”, 2 February 1995

“What is the 6bone?”, http://www.6bone.net/about_6bone.htm , 21 January 1997

“Simple Internet Transition Mechanisms”, http://playground.sun.com/pub/ipng/html/ipng-transition.htm

Robert M. Hinden: “IP Next Generation Overview”, 14 May 1995

Bay Networks: “IPv6 Whitepaper”, 1997

Robert Zakon: “Hobbes’ Internet Timeline v3.1”, 1997

Dave Kristula: “The History of the Internet”, March 1997

“Wideband CDMA Introduction”, http://www.ericsson.se/wcdma/wcdma/sub_intr/introduction.htm , 24 October 1997

“WCDMA in brief”, http://www.ericsson.se/wcdma/wcdma/sub_intr/wcdma_in_brief.htm , 24 October 1997

“The compelling case for Wideband CDMA for next-generation mobile Internet and multimedia”, http://www.imt-2000.com/wcdma/wcdma/sub_tech/brochures/cdma.htm , 18 March 1998

“ETSI SMG#24 bis Paris, France 29 January 1998”, http://www.imt-2000.com/wcdma/importan.htm , 30 January 1998