
Only a few years ago, seeing in 3-D meant peering through a pair of red-and-blue glasses, or trying not to go cross-eyed in front of a page of fuzzy dots. It was great at the time, but 3-D technology has moved on. Scientists know more about how our vision works than ever before, and our computers are more powerful than ever before -- most of us have sophisticated components in our computers dedicated to producing realistic graphics. Put those two things together, and you'll see how 3-D graphics have really begun to take off.

Most computer users are familiar with 3-D games. Back in the 90s, computer enthusiasts were stunned by the game Wolfenstein 3D, which took place in a maze-like castle. It may have been constructed from blocky tiles, but the castle existed in three dimensions -- you could move forward and backward, or hold down the appropriate key and see your viewpoint spin through 360 degrees. Back then, it was revolutionary and quite amazing. Nowadays, gamers enjoy ever more complicated graphics -- smooth, three-dimensional environments complete with realistic lighting and complex simulations of real-life physics grace our screens.
But that's the problem -- the screen. The game itself may be in three dimensions, and the player may be able to look wherever he wants with complete freedom, but at the end of the day the picture is displayed on a computer monitor...and that's a flat surface.
That's where PC 3-D glasses come in. They're designed to convince your brain that your monitor is showing a real, three-dimensional object. In order to understand quite how this works, we need to know what sort of work our brain does with the information our eyes give it. Once we know about that, we'll be able to understand just how 3-D glasses do their job.
These computers include the entire spectrum of PCs, through professional workstations, up to supercomputers. As the performance of computers has increased, so too has the demand for communication between all systems for exchanging data, or between central servers and the associated host computer systems. The replacement of copper with fiber and the advancements in digital communication and encoding are at the heart of several developments that will change the communication infrastructure. The former development has provided us with a huge amount of transmission bandwidth, while the latter has made the transmission of all information, including voice and video, through a packet-switched network possible.
With work being shared continuously over large distances, including international communication, the systems must be interconnected via wide area networks, with increasing demands for higher bit rates.
For the first time, a single communications technology meets LAN and WAN requirements and handles a wide variety of current and emerging applications. ATM is the first technology to provide a common format for bursts of high-speed data and the ebb and flow of the typical voice phone call. Seamless ATM networks provide desktop-to-desktop multimedia networking over a single-technology, high-bandwidth, low-latency network, removing the boundary between LAN and WAN.
ATM is simply a Data Link Layer protocol. It is asynchronous in the sense that the recurrence of the cells containing information from an individual user is not necessarily periodic. It is the technology of choice for the evolving B-ISDN (Broadband Integrated Services Digital Network) and for next-generation LANs and WANs. ATM supports transmission speeds of 155 Mbit/s, with higher rates expected in the future. Photonic approaches have made the advent of ATM switches feasible, and an evolution towards an all-packetized, unified, broadband telecommunications and data communication world based on ATM is taking place.

Acoustic cryptanalysis is a side channel attack which exploits sounds, audible or not, produced during a computation or input-output operation. In 2004, Dmitri Asonov and Rakesh Agrawal of the IBM Almaden Research Center announced that computer keyboards and keypads used on telephones and automated teller machines (ATMs) are vulnerable to attacks based on differentiating the sound produced by different keys. Their attack employed a neural network to recognize the key being pressed. By analyzing recorded sounds, they were able to recover the text of data being entered. These techniques allow an attacker using covert listening devices to obtain passwords, passphrases, personal identification numbers (PINs) and other security information. Also in 2004, Adi Shamir and Eran Tromer demonstrated that it may be possible to conduct timing attacks against a CPU performing cryptographic operations by analyzing variations in its humming noise. In his book Spycatcher, former MI5 operative Peter Wright discusses the use of an acoustic attack against Egyptian Hagelin cipher machines in 1956. The attack was codenamed 'ENGULF'.
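The core of such an attack is a classifier trained on labelled keystroke recordings. The sketch below assumes a hypothetical dataset of (recording, key) pairs and uses scikit-learn's MLPClassifier as a stand-in for the neural network described above; it is an illustration of the general approach, not a reconstruction of the IBM attack.

```python
# Illustrative sketch only: classify keystroke sounds from coarse spectral features.
# 'key_samples' is a hypothetical dataset of (recording, key_label) pairs.
import numpy as np
from sklearn.neural_network import MLPClassifier

def features(recording):
    """Reduce one keystroke recording (a 1-D sample array) to 32 frequency-band energies."""
    spectrum = np.abs(np.fft.rfft(recording))
    return np.array([band.mean() for band in np.array_split(spectrum, 32)])

def train_classifier(key_samples):
    X = np.array([features(rec) for rec, _ in key_samples])
    y = [label for _, label in key_samples]
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000)
    clf.fit(X, y)
    return clf   # clf.predict([features(new_recording)]) guesses which key was pressed
```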
Adaptive Partition Schedulers are a relatively new type of partition scheduler, pioneered with the most recent version of the QNX operating system. Adaptive Partitioning (or AP) allows the real-time system designer to request that a percentage of processing resources be reserved for a particular subsystem (group of threads and/or processes). The operating system's priority-driven pre-emptive scheduler will behave in the same way that a non-AP system would until the system is overloaded (i.e. system-wide there is more computation to perform than the processor is capable of sustaining over the long term). During overload, the AP scheduler enforces hard limits on total run-time for the subsystems within a partition (as dictated by the allocated percentage of processor bandwidth for the particular partition). If the system is not overloaded, a partition that is allocated (for example) 10% of the processor bandwidth can, in fact, use more than 10%, as it will borrow from the spare budget of other partitions (but will be required to pay it back later). This is very useful for non-real-time subsystems that experience variable load, since these subsystems can make use of spare budget from hard real-time partitions in order to make more forward progress than they would under a fixed partition scheduler such as ARINC-653, but without impacting the hard real-time subsystems' deadlines.
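A rough sketch of the budget-accounting idea follows. The partition names, window length and data structures are invented for illustration and are not QNX's implementation; the point is only that priorities rule while there is slack, and per-partition budgets are enforced only under overload.

```python
# Toy model of adaptive-partition scheduling: strict priority when there is slack,
# per-partition CPU budgets enforced only under overload.
from collections import defaultdict

WINDOW_MS = 100                                   # averaging window for budgets
budgets = {"control": 0.70, "multimedia": 0.20, "background": 0.10}
used_ms = defaultdict(float)                      # CPU time consumed in the current window

def pick_partition(runnable, overloaded):
    """runnable maps partition -> priority of its highest-priority ready thread."""
    candidates = runnable
    if overloaded:
        # Only partitions still inside their guaranteed share may run during overload.
        within = {p: prio for p, prio in runnable.items()
                  if used_ms[p] < budgets[p] * WINDOW_MS}
        if within:
            candidates = within
    return max(candidates, key=candidates.get)
```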

A Java Ring is a finger ring that contains a small microprocessor with built-in capabilities for the user, a sort of smart card that is wearable on a finger. Sun Microsystems' Java Ring was introduced at their JavaOne Conference in 1998 and, instead of a gemstone, contained an inexpensive microprocessor in a stainless-steel iButton running a Java virtual machine and preloaded with applets (little application programs). The rings were built by Dallas Semiconductor. Workstations at the conference had ring readers installed on them that downloaded information about the user from the conference registration system. This information was then used to enable a number of personalized services. For example, a robotic machine made coffee according to the user's preferences, which it downloaded when the user snapped the ring into another ring reader.

Although Java Rings aren't widely used yet, such rings or similar devices could have a number of real-world applications, such as starting your car and having all your vehicle's components (such as the seat, mirrors, and radio selections) automatically adjust to your preferences.

The Java Ring is an extremely secure Java-powered electronic token with a continuously running,
unalterable real-time clock and rugged packaging, suitable for many applications. The jewel of the
Java Ring is the Java iButton -- a one-million transistor, single chip trusted microcomputer with a
powerful Java Virtual Machine (JVM) housed in a rugged and secure stainless-steel case.

The Java Ring is a stainless-steel ring, 16 millimeters (0.6 inches) in diameter, that houses a 1-million-transistor processor, called an iButton. The ring has 134 KB of RAM, 32 KB of ROM, a real-time clock and a Java virtual machine, which is a piece of software that recognizes the Java language and translates it for the user's computer system.

The Ring, first introduced at JavaOne Conference, has been tested at Celebration School, an
innovative K-12 school just outside Orlando, FL. The rings given to students are programmed with
Java applets that communicate with host applications on networked systems. Applets are small
applications that are designed to be run within another application. The Java Ring is snapped into
a reader, called a Blue Dot receptor, to allow communication between a host system and the Java
Ring.
Designed to be fully compatible with the Java Card 2.0 standard, the processor features a high-speed 1024-bit modular exponentiator for RSA encryption, large RAM and ROM memory capacity, and an unalterable real-time clock. The packaged module has only a single electrical contact and a ground return, conforming to the specifications of the Dallas Semiconductor 1-Wire bus. Lithium-backed non-volatile SRAM offers high read/write speed and unparalleled tamper resistance through near-instantaneous clearing of all memory when tampering is detected, a feature known as rapid zeroization.

Data integrity and clock function are maintained for more than 10 years. The 16-millimeter diameter stainless steel enclosure accommodates the larger chip sizes needed for up to 128 kilobytes of high-speed nonvolatile static RAM. The small and extremely rugged packaging of the module allows it to attach to the accessory of your choice to match individual lifestyles, such as a key fob, wallet, watch, necklace, bracelet, or finger ring.
The IMAX (Image Maximum) system has its roots in Canada, where multi-screen films were the hit of the fair. A small group of Canadian filmmakers (Graeme Ferguson, Roman Kroitor and Robert Kerr) decided to design a new system using a single, powerful projector rather than the cumbersome multiple projectors used at that time. The result was the IMAX motion picture projection system, which would revolutionize giant-screen cinema. IMAX delivers just that, on a screen four times the size of conventional movie screens.

Ajax, shorthand for Asynchronous JavaScript and XML, is a web development technique for creating interactive web applications. The intent is to make web pages feel more responsive by exchanging small amounts of data with the server behind the scenes, so that the entire web page does not have to be reloaded each time the user makes a change. This is meant to increase the web page's interactivity, speed, and usability.

The Ajax technique uses a combination of:

XHTML (or HTML) and CSS, for marking up and styling information.
The DOM, accessed with a client-side scripting language, especially ECMAScript implementations such as JavaScript and JScript, to dynamically display and interact with the information presented.
The XMLHttpRequest object, to exchange data asynchronously with the web server. In some Ajax frameworks and in certain situations, an IFrame object is used instead of the XMLHttpRequest object to exchange data with the web server.

XML is sometimes used as the format for transferring data between the server and client, although any format will work, including preformatted HTML, plain text, JSON and even EBML.
Like DHTML, LAMP and SPA, Ajax is not a technology in itself, but a term that refers to the use of
a group of technologies together.
ECC is a public key encryption technique based on elliptic curve theory. ECC can be used to create faster, smaller and more efficient cryptographic keys. It generates keys through the properties of the elliptic curve equation rather than the traditional method of generation as the product of very large prime numbers. This technology can be used in conjunction with most public key encryption methods, such as RSA and Diffie-Hellman.

ECC can yield a level of security with a 164-bit key that other systems require a 1,024-bit key to achieve. Since ECC provides equivalent security at lower computing power and battery resource usage, it is widely used for mobile applications. ECC was developed by Certicom, a mobile e-business security provider, and was recently licensed by Hifn, a manufacturer of integrated circuitry and network security products. Many manufacturers, including 3COM, Cylink, Motorola, Pitney Bowes, Siemens, TRW and VeriFone, have incorporated support for ECC in their products.
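The key-size contrast is easy to see with the Python cryptography package (assumed to be installed); the sketch below generates a 256-bit elliptic-curve key and a 2048-bit RSA key, which offer broadly comparable security levels.

```python
# Sketch: an elliptic-curve key is far smaller than an RSA key of comparable strength.
from cryptography.hazmat.primitives.asymmetric import ec, rsa

ecc_key = ec.generate_private_key(ec.SECP256R1())                         # 256-bit curve
rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # 2048-bit modulus

print(ecc_key.curve.key_size)   # 256
print(rsa_key.key_size)         # 2048
```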

A generic visual perception processor is a single chip modeled on the perception capabilities of the human brain, which can detect objects in a motion video signal and then locate and track them in real time. Imitating the human eye's neural networks and the brain, the chip can handle about 20 billion instructions per second. This electronic eye on a chip can handle tasks that range from sensing variable parameters, arriving in the form of video signals, to processing them for control purposes.
This describes AMD's HyperTransport technology, a new I/O architecture for personal computers, workstations, servers, high-performance networking and communications systems, and embedded applications. This scalable architecture can provide significantly increased bandwidth over existing bus architectures and can simplify in-the-box connectivity by replacing legacy buses and bridges. The programming model used in HyperTransport technology is compatible with existing models and requires little or no change to existing operating system and driver software.

It provides a universal connection designed to reduce the number of buses within the system. It is designed to enable the chips inside PCs and networking and communications devices to communicate with each other up to 48 times faster than with existing technologies. HyperTransport technology is truly a universal solution for in-the-box connectivity.
>> It is a new I/O architecture for personal computers, workstations, servers, embedded applications, etc.
>> It is a scalable architecture that can provide significantly increased bandwidth over existing bus architectures.
>> It simplifies in-the-box connectivity by replacing legacy buses and bridges.
>> The programming model used in HyperTransport technology is compatible with existing models and requires little or no change to existing operating system and driver software.

HyperTransport technology provides high speeds while maintaining full software and operating system compatibility with the Peripheral Component Interconnect (PCI) interface that is used in most systems today. In older multi-drop bus architectures like PCI, the addition of hardware devices affects the overall electrical characteristics and bandwidth of the entire bus. Even with PCI-X 1.0, the maximum supported clock speed of 133 MHz must be reduced when more than one PCI-X device is attached. HyperTransport technology uses a point-to-point link connected between two devices, enabling the link to transfer data much faster.

In a non-networked personal computing environment, resources and information can be protected by physically securing the personal computer. But in a network of users requiring services from many computers, the identity of each user has to be accurately verified. Kerberos is used for this authentication. Kerberos is a third-party authentication technology used to identify a user requesting a service.

The Metasploit Project is an open source computer security project which provides information about security vulnerabilities and aids in penetration testing and IDS signature development. Its most well-known sub-project is the Metasploit Framework, a tool for developing and executing exploit code against a remote target machine.

A real-time system is defined as follows: a real-time system is one in which the correctness of the computations depends not only upon the logical correctness of the computation but also upon the time at which the result is produced. If the timing constraints of the system are not met, system failure is said to have occurred.

There are two types. A hard real-time operating system has strict time constraints, limited or absent secondary storage, conflicts with time-sharing systems, and is not supported by general-purpose OSs. A soft real-time operating system has reduced time constraints and limited utility in industrial control or robotics, but is useful in applications (multimedia, virtual reality) requiring advanced operating-system features. In the robot example, it would be hard real time if the robot arriving late causes completely incorrect operation. It would be soft real time if the robot arriving late meant a loss of throughput. Much of what is done in real-time programming is actually soft real-time. Good system design often implies a level of safe/correct behaviour even if the computer system never completes the computation, so if the computer is only a little late, the system effects may be somewhat mitigated.
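The sketch below illustrates the hard/soft distinction in the robot example: the deadline check is identical, only the consequence of missing it differs. The function and deadline values are hypothetical.

```python
# Illustrative sketch: a missed hard deadline is a failure, a missed soft deadline
# only degrades service. Names and numbers are invented for the example.
import time

def run_with_deadline(task, deadline_s, hard=True):
    start = time.monotonic()
    result = task()
    overrun = time.monotonic() - start - deadline_s
    if overrun > 0:
        if hard:
            raise RuntimeError("hard deadline missed: result is useless")   # system failure
        print(f"soft deadline missed by {overrun:.3f}s: throughput reduced")
    return result
```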

What makes an OS an RTOS?


1. An RTOS (Real-Time Operating System) has to be multi-threaded and preemptible.
2. The notion of thread priority has to exist, as there is for the moment no deadline-driven OS.
3. The OS has to support predictable thread synchronisation mechanisms.
4. A system of priority inheritance has to exist.
5. For every system call, the maximum time it takes should be predictable and independent of the number of objects in the system.
6. The maximum time for which the OS and drivers mask interrupts should be known. The following points should also be known by the developer:
1. System interrupt levels.
2. Device driver IRQ levels, the maximum time they take, etc.

The MBMS is a unidirectional point-to-multipoint bearer service in which data is transmitted from a single source entity to multiple recipients. These services will typically be in the form of streaming video and audio and should not be confused with the CBS (Cell Broadcast Service) that is currently supported. This paper describes the architecture of the MBMS along with its functional nodes and its integration into 3G and GERAN (GSM & EDGE Radio Access Network), with the Core Network, UTRAN (UMTS Terrestrial Radio Access Network) and radio aspects being explained.

VoIP, or Voice over Internet Protocol, refers to sending voice and fax phone calls over data
networks, particularly the Internet. This technology offers cost savings by making more efficient
use of the existing network.
Traditionally, voice and data were carried over separate networks optimized to suit the differing
characteristics of voice and data traffic. With advances in technology, it is now possible to carry
voice and data over the same networks whilst still catering for the different characteristics required
by voice and data.
Voice-over-Internet-Protocol (VoIP) is an emerging technology that allows telephone calls or faxes to be transported over an IP data network. The IP network could be:
a local area network in an office,
a wide area network linking the sites of a large international organization,
a corporate intranet,
the Internet,
or any combination of the above.
There can be no doubt that IP is here to stay. The explosive growth of the Internet, making IP the predominant networking protocol globally, presents a huge opportunity to dispense with separate voice and data networks and use IP technology for voice traffic as well as data. As voice and data network technologies merge, massive infrastructure cost savings can be made, as the need to provide separate networks for voice and data can be eliminated.
Most traditional phone networks use the Public Switched Telephone Network (PSTN). This system employs circuit-switched technology that requires a dedicated voice channel to be assigned to each particular conversation. Messages are sent in analog format over this network.
Today, phone networks are on a migration path to VoIP. A VoIP system employs a packet-switched network, where the voice signal is digitized, compressed and packetized. This compressed digital message no longer requires a voice channel. Instead, a message can be sent across the same data lines that are used for the intranet or Internet, and a dedicated channel is no longer needed. The message can now share bandwidth with other messages in the network.
Normal data traffic is carried between PCs, servers, printers, and other networked devices through a company's worldwide TCP/IP network. Each device on the network has an IP address, which is attached to every packet for routing. Voice-over-IP packets are no different.
Users may use appliances such as Symbol's NetVision phone to talk to other IP phones or desktop PC-based phones located at company sites worldwide, provided that a voice-enabled network is installed at the site. Installation simply involves assigning an IP address to each wireless handset.
VoIP lets you make toll-free long distance voice and fax calls over existing IP data networks instead of the public switched telephone network (PSTN). Today, businesses that implement their own VoIP solution can dramatically cut long-distance costs between two or more locations.
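The digitize-compress-packetize step can be sketched as follows: fixed-size voice frames get a small RTP-like header and travel as ordinary UDP datagrams over the data network. The header layout and addresses below are simplified placeholders, not a full RTP implementation.

```python
# Sketch: chop a PCM voice stream into 20 ms frames, prepend a minimal header,
# and send each frame as a UDP packet alongside normal data traffic.
import socket
import struct

FRAME_BYTES = 160                                  # 20 ms of 8 kHz, 8-bit audio

def send_voice(pcm_bytes, dest=("192.0.2.10", 5004)):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    seq, timestamp = 0, 0
    for i in range(0, len(pcm_bytes), FRAME_BYTES):
        frame = pcm_bytes[i:i + FRAME_BYTES]
        header = struct.pack("!HI", seq, timestamp)   # sequence number + timestamp
        sock.sendto(header + frame, dest)
        seq = (seq + 1) & 0xFFFF
        timestamp += FRAME_BYTES
```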
When it's time to find out how to make content available over WAP, we need to get to grips with its markup language, i.e., WML. WML was designed from the start as a markup language to describe the display of content on small-screen devices.
It is a markup language enabling the formatting of text in the WAP environment, using a variety of markup tags to determine the display appearance of content. WML is defined using the rules of XML, the Extensible Markup Language, and is therefore an XML application. WML provides a means of allowing the user to navigate around the WAP application and supports the use of anchored links as found commonly in web pages. It also provides support for images and layout within the constraints of the device.

ATM makes B-ISDN a reality. The Integrated Services Digital Network (ISDN) evolved during the '80s. It carried a basic channel that could operate at 64 kbps (the B-channel), and combinations of this and other channels (D-channels) formed the basis of communication on the network. In the new B-ISDN world, this is supposed to supply data, voice and other communication services over a common network with a wide range of data speeds. To understand a lot of the terminology in ATM-land, it is necessary to understand the B-ISDN Reference Model. Just as the ISO seven-layer model defines the layers for network software, this model defines layers for the ATM network.
The ATM cell header is broken up into the following fields:
Generic Flow Control (GFC)
Virtual Channel Identifier (VCI)
Virtual Path Identifier (VPI)
Payload type (PT)
Cell Loss Priority (CLP)
Header Error Control (HEC)
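At the user-network interface these fields occupy 4 + 8 + 16 + 3 + 1 + 8 = 40 bits, i.e. the 5-byte cell header. A small sketch of packing and unpacking them (field widths per the UNI format; the function names are our own):

```python
# Pack/unpack the 5-byte ATM UNI cell header: GFC(4) VPI(8) VCI(16) PT(3) CLP(1) + HEC(8).
def pack_header(gfc, vpi, vci, pt, clp, hec):
    word = (gfc << 28) | (vpi << 20) | (vci << 4) | (pt << 1) | clp
    return word.to_bytes(4, "big") + bytes([hec])

def unpack_header(header):
    word = int.from_bytes(header[:4], "big")
    return {"GFC": word >> 28, "VPI": (word >> 20) & 0xFF, "VCI": (word >> 4) & 0xFFFF,
            "PT": (word >> 1) & 0x7, "CLP": word & 0x1, "HEC": header[4]}
```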
Network-to-Network Interface
It is necessary for the switches to know how to send calls along. There are several techniques that could be adopted, but the most useful one is called the Private Network-to-Network Interface (PNNI). The PNNI is an interface between switches used to distribute information about the state and structure of the network, to establish circuits so that reasonable bandwidth and QoS contracts can be set up, and to provide for some network management functions.
Convergence Sublayer: The functions provided at this layer differ depending on the service provided. It provides bit error correction and may use explicit time stamps to transfer timing information.
Segmentation and Reassembly Sublayer:
At this layer the convergence sublayer protocol data unit is segmented and a header added. The header contains three fields: Sequence Number, used to detect cell insertion and cell loss; Sequence Number Protection, used to detect and correct errors that occur in the sequence number; and Convergence Sublayer Indication, used to indicate the presence of the convergence sublayer function.


Genetic programming (GP) is an automated methodology inspired by biological evolution to find
computer programs that best perform a user-defined task. It is therefore a particular machine
learning technique that uses an evolutionary algorithm to optimize a population of computer
programs according to a fitness landscape determined by a program's ability to perform a given
computational task. The first experiments with GP were reported by Stephen F. Smith (1980) and
Nichael L. Cramer (1985), as described in the famous book Genetic Programming: On the
Programming of Computers by Means of Natural Selection by John Koza (1992).
Computer programs in GP can be written in a variety of programming languages. In the early
(and traditional) implementations of GP, program instructions and data values were organized in
tree-structures, thus favoring the use of languages that naturally embody such a structure (an
important example pioneered by Koza is Lisp). Other forms of GP have been suggested and
successfully implemented, such as the simpler linear representation which suits the more
traditional imperative languages [see, for example, Banzhaf et al. (1998)]. The commercial GP software Discipulus, for example, uses linear genetic programming combined with machine code to achieve better performance. By contrast, MicroGP uses an internal representation similar to linear genetic programming to generate programs that fully exploit the syntax of a given assembly language.
GP is very computationally intensive and so in the 1990s it was mainly used to solve relatively
    simple problems. However, more recently, thanks to various improvements in GP technology and
to the well known exponential growth in CPU power, GP has started delivering a number of
outstanding results. At the time of writing, nearly 40 human-competitive results have been
gathered, in areas such as quantum computing, electronic design, game playing, sorting,
searching and many more. These results include the replication or infringement of several post-
year-2000 inventions, and the production of two patentable new inventions.
Developing a theory for GP has been very difficult and so in the 1990s genetic programming
was considered a sort of pariah amongst the various techniques of search. However, after a
series of breakthroughs in the early 2000s, the theory of GP has had a formidable and rapid
development. So much so that it has been possible to build exact probabilistic models of GP
(schema theories and Markov chain models) and to show that GP is more general than, and in
fact includes, genetic algorithms.
Genetic Programming techniques have now been applied to evolvable hardware as well as
computer programs.
Meta-Genetic Programming is the technique of evolving a genetic programming system using genetic programming itself. Critics have argued that it is theoretically impossible, but more research is needed.
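A toy example of tree-based GP in the Koza style is sketched below: arithmetic expression trees are evolved toward a target function by selection and mutation. Crossover is omitted and all parameters are invented, so this is a teaching sketch rather than a serious GP system.

```python
# Toy tree-based GP: evolve expression trees toward the target x*x + x.
import operator
import random

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}
TERMINALS = ["x", 1.0, 2.0]

def random_tree(depth=3):
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree):                       # lower is better
    return sum(abs(evaluate(tree, x) - (x * x + x)) for x in range(-5, 6))

def evolve(generations=50, pop_size=100):
    pop = [random_tree() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]                                   # selection
        mutants = [random_tree() if random.random() < 0.3 else s
                   for s in random.choices(survivors, k=len(survivors))]   # crude mutation
        pop = survivors + mutants
    return min(pop, key=fitness)
```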

Inferno is answering the current and growing need in the marketplace for distributed computing
solutions. Based on more than 20 years of Bell Labs research into operating systems and
programming languages, Inferno is poised to propel network computing into the 21st century. Bell
Labs will continue to support the evolution of Inferno under a joint development agreement with
Vita Nuova. Inferno is an operating system for creating and supporting distributed services. It was originally developed by the Computing Science Research Center of Bell Labs, the R&D arm of
Lucent Technologies, and further developed by other groups in Lucent. Inferno was designed
specifically as a commercial product, both for licensing in the marketplace and for use within new
Lucent offerings. It encapsulates many years of Bell Labs research in operating systems,
languages, on-the-fly compilers, graphics, security, networking and portability.
LDAP is actually a simple protocol that is used to access directory services. It is an open, vendor-neutral protocol for accessing directory information, such as e-mail addresses and public keys, used for secure transmission of data. The information contained within an LDAP directory could be ASCII text files, JPEG photographs or sound files. One way to reduce the time taken to search for information is to replicate the directory information over different platforms, so that the process of locating a specific piece of data is streamlined and more resilient to failure of connections and computers. This is what is done with information in an LDAP structure.
LDAP, the Lightweight Directory Access Protocol, is an Internet protocol, running over TCP/IP, that e-mail programs use to look up contact information from a server. A directory structure is a specialized database which is optimized for browsing, searching, locating and reading information. Thus LDAP makes it possible to obtain directory information such as e-mail addresses and public keys. LDAP can handle other information, but at present it is typically used to associate names with phone numbers and e-mail addresses.
LDAP is a directory structure and is completely based on entries for each piece of information. An entry is a collection of attributes that has a globally unique Distinguished Name (DN). The information in LDAP is arranged in a hierarchical tree-like structure. LDAP services are implemented using the client-server architecture. There are options for referencing and accessing information within the LDAP structure. An entry is referenced by its uniquely distinguishable name. Unlike other directory structures, which allow the user access to all the information available, LDAP allows information to be accessed only after authenticating the user. It also supports privacy and integrity security services. There are two daemons for LDAP, which are slapd and slurpd.
THE LDAP DOMAIN
THE COMPONENTS OF AN LDAP DOMAIN
A small domain may have a single LDAP server and a few clients. The server commonly runs slapd, which will serve LDAP requests and update data. The client software comprises system libraries that translate normal library calls into LDAP data requests and provide some form of update functionality. Larger domains may have several LDAP slaves (read-only replicas of a master read/write LDAP server). For large installations, the domain may be divided into sub-domains, with referrals to "glue" the sub-domains together.
THE STRUCTURE OF AN LDAP DOMAIN
A simple LDAP domain is structured on the surface in a manner similar to an NIS domain; there are masters, slaves, and clients. The clients may query masters or slaves for information, but all updates must go to the masters. The "domain name" under LDAP is slightly different than that under NIS: LDAP domains may use an organization name and country.
The clients may or may not authenticate themselves to the server when performing operations, depending on the configuration of the client and the type of information requested. Commonly, access to non-sensitive information (such as port-to-service mappings) is through unauthenticated requests, while password information requests or any updates are authenticated. Larger organizations may subdivide their LDAP domain into sub-domains. LDAP allows for this type of scalability, and uses "referrals" to allow the passing off of clients from one server to the next (the same method is used by slave servers to pass modification requests to the master).
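A typical client-side lookup can be sketched with the third-party ldap3 package; the server name, bind DN, password and search base below are placeholders for illustration.

```python
# Sketch of an authenticated LDAP white-pages query using the 'ldap3' package.
from ldap3 import ALL, Connection, Server

server = Server("ldap.example.com", get_info=ALL)
conn = Connection(server, "cn=reader,dc=example,dc=com", "secret", auto_bind=True)

# Look up the name and e-mail address of one entry in the tree.
conn.search("dc=example,dc=com", "(cn=Jane Doe)", attributes=["cn", "mail"])
for entry in conn.entries:
    print(entry.cn, entry.mail)

conn.unbind()
```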
By the mid-1980s, the trend in computing was away from large centralized time-shared computers towards networks of smaller, personal machines, typically UNIX workstations. People had grown weary of overloaded, bureaucratic timesharing machines and were eager to move to small, self-maintained systems, even if that meant a net loss in computing power. As microcomputers became faster, even that loss was recovered, and this style of computing remains popular today.
Plan 9 began in the late 1980s as an attempt to have it both ways: to build a system that was centrally administered and cost-effective using cheap modern microcomputers as its computing elements. The idea was to build a time-sharing system out of workstations, but in a novel way. Different computers would handle different tasks: small, cheap machines in people's offices would serve as terminals providing access to large, central, shared resources such as computing servers and file servers. For the central machines, the coming wave of shared-memory multiprocessors seemed obvious candidates.
Plan 9 is designed around this basic principle that all resources appear as files in a hierarchical
file system, which is unique to each process. As for the design of any operating system various
things such as the design of the file and directory system implementation and the various
interfaces are important. Plan 9 has all these well-designed features. All these help to provide a
strong base for the operating system that could be well suited in a distributed and networked
environment.
The different features of Plan 9 operating system are:
The dump file system makes a daily snapshot of the file store available to the users.
Unicode character set supported throughout the system.
Advanced kernel synchronization facilities for parallel processing.
Security - there is no super-user or root user, and passwords are never sent over the network.

SALT stands for Speech Application Language Tags. It consists of a small set of XML elements, with associated attributes and DOM object properties, events and methods, which apply a speech interface to web pages. SALT allows applications to be run on a wide variety of devices and also through different methods for inputting data.
The main design principles of SALT include reuse of existing standards for grammar and speech output, and separation of the speech interface from business logic and data. SALT is designed to run inside different Web execution environments, so SALT does not have any predefined execution model but instead uses an event-wiring model.
It contains a set of tags for inputting data as well as storing and manipulating that data. The main elements of a SALT document are prompt, listen and dtmf. Using these elements we can specify grammars for inputting data, inspect the results of recognition and copy those results where needed, and provide the prompts the application needs. The architecture of SALT contains mainly four components.

The SAT (SIM Application Toolkit) provides a flexible interface through which developers can build services and MMI (Man Machine Interface) in order to enhance the functionality of the mobile. This module is not designed for service developers, but for network engineers who require a grounding in the concepts of the SAT and how it may impact on network architecture and performance. It explores the basic SAT interface along with the architecture required in order to deliver effective SAT-based services to the handset.

The Wireless Application Protocol (WAP) is a result of the WAP Forum's effort to promote industry-wide specifications for technology useful in developing applications and services that operate over wireless communication networks. WAP specifies an application framework and network protocols for wireless devices such as mobile telephones, pagers, and personal digital assistants (PDAs). The specifications extend and leverage mobile networking technologies (such as digital data networking standards) and Internet technologies (such as XML, URLs, scripting, and various content formats). The effort is aimed at enabling operators, manufacturers, and content developers to meet the challenges of building advanced, differentiated services and implementations in a fast and flexible manner.
The objectives of the WAP Forum are: to bring Internet content and advanced data services to digital cellular phones and other wireless terminals; to create a global wireless protocol specification that will work across differing wireless network technologies; to enable the creation of content and applications that scale across a very wide range of bearer networks and device types; and to embrace and extend existing standards and technology wherever appropriate.
The WAP Architecture specification is intended to present the system and protocol architectures essential to achieving the objectives of the WAP Forum.
WAP is positioned at the convergence of two rapidly evolving network technologies, wireless data and the Internet. Both the wireless data market and the Internet are growing very quickly and are continuously reaching new customers. The explosive growth of the Internet has fuelled the creation of new and exciting information services.
Most of the technology developed for the Internet has been designed for desktop and larger computers and medium-to-high bandwidth, generally reliable data networks. Mass-market, handheld wireless devices present a more constrained computing environment compared to desktop computers. Because of fundamental limitations of power and form factor, mass-market handheld devices tend to have: less powerful CPUs, less memory (ROM and RAM), restricted power consumption, smaller displays, and different input devices (e.g., a phone keypad). Similarly, wireless data networks present a more constrained communication environment compared to wired networks. Because of fundamental limitations of power, available spectrum, and mobility, wireless data networks tend to have: less bandwidth, more latency, less connection stability, and less predictable availability.
Mobile networks are growing in complexity and the cost of all aspects for provisioning of more
value added services is increasing. In order to meet the requirements of mobile network
operators, solutions must be:
Interoperable - terminals from different manufacturers communicate with services in the mobile
network;
Scalable-mobile network operators are able to scale services to customer needs;
Efficient-provides quality of service suited to the behaviour and characteristics of the mobile
network;
Reliable - provides a consistent and predictable platform for deploying services; and Secure-
enables services to be extended over potentially unprotected mobile networks still preserving the
integrity of user data; protects the devices and services from security problems such as denial of
service.
The WAP specifications address mobile network characteristics and operator needs by adapting
existing network technology to the special requirements of mass market, hand-held wireless data
devices and by introducing new technology where appropriate
The requirements of the WAP Forum architecture are to:
Leverage existing standards where possible;
Define a layered, scalable and extensible architecture;
Support as many wireless networks as possible;
Optimise for narrow-band bearers with potentially high latency;
Optimise for efficient use of device resources (low memory / CPU usage / power consumption);
Provide support for secure application and communications;
Enable the creation of Man-Machine Interfaces (MMIs) with maximum flexibility and vendor
control;
Provide access to local handset functionality, such as logical indication for incoming call;
Facilitate network-operator and third party service provisioning;
Support multi-vendor interoperability by defining the optional and mandatory components of the
specification.

UMA (Unlicensed Mobile Access) is an industry collaboration to extend GSM and GPRS services into customer sites by utilizing unlicensed radio technologies such as Wi-Fi (Wireless Fidelity) and Bluetooth®. This is achieved by tunnelling GSM and GPRS protocols through a broadband IP network towards the Access Point situated in the customer site, and across the unlicensed radio link to the mobile device.
Thus UMA provides an additional access network to the existing GERAN (GSM EDGE Radio Access Network) and UTRAN (UMTS Terrestrial Radio Access Network).

Session Initiation Protocol (SIP) is a protocol developed by the IETF MMUSIC Working Group and a proposed standard for initiating, modifying, and terminating an interactive user session that involves multimedia elements such as video, voice, instant messaging, online games, and virtual reality.
SIP clients traditionally use TCP and UDP port 5060 to connect to SIP servers and other SIP endpoints. SIP is primarily used in setting up and tearing down voice or video calls. However, it can be used in any application where session initiation is a requirement, including event subscription and notification, terminal mobility, and so on. There are a large number of SIP-related RFCs that define behavior for such applications. All voice/video communications are done over RTP.
A motivating goal for SIP was to provide a signaling and call setup protocol for IP-based communications that can support a superset of the call processing functions and features present in the public switched telephone network (PSTN).
SIP enabled telephony networks can also implement many of the more advanced call processing
features present in Signalling System 7 (SS7), though the two protocols themselves are very
different. SS7 is a highly centralized protocol, characterized by highly complex central network
architecture and dumb endpoints (traditional telephone handsets). SIP is a peer-to-peer protocol.
SIP network elements
Hardware endpoints, devices with the look, feel, and shape of a traditional telephone, but that use
SIP and RTP for communication, are commercially available from several vendors. Some of these
can use Electronic Numbering (ENUM) or DUNDi to translate existing phone numbers to SIP
addresses using DNS, so calls to other SIP users can bypass the telephone network, even though
your service provider might normally act as a gateway to the PSTN network for traditional phone
numbers (and charge you for it).
SIP makes use of elements called proxy servers to help route requests to the user's current
location, authenticate and authorize users for services, implement provider call-routing policies,
and provide features to users.
SIP also provides a registration function that allows users to upload their current locations for use
by proxy servers.
Since registrations play an important role in SIP, a User Agent Server that handles a REGISTER
is given the special name registrar.
It is an important concept that the distinction between types of SIP servers is logical, not physical.
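Because SIP is a text protocol, the registration step can be sketched as a plain request pushed over UDP port 5060. The addresses, tags and Call-ID below are placeholders, and a real user agent must also parse responses, retransmit, and handle authentication challenges from the registrar.

```python
# Sketch of the SIP REGISTER request a user agent sends to its registrar.
import socket

register = (
    "REGISTER sip:example.com SIP/2.0\r\n"
    "Via: SIP/2.0/UDP 192.0.2.20:5060;branch=z9hG4bK776asdhds\r\n"
    "Max-Forwards: 70\r\n"
    "From: <sip:alice@example.com>;tag=1928301774\r\n"
    "To: <sip:alice@example.com>\r\n"
    "Call-ID: a84b4c76e66710@192.0.2.20\r\n"
    "CSeq: 1 REGISTER\r\n"
    "Contact: <sip:alice@192.0.2.20:5060>\r\n"
    "Expires: 3600\r\n"
    "Content-Length: 0\r\n\r\n"
)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(register.encode(), ("registrar.example.com", 5060))
```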

Model-view-controller (MVC) is a software architecture that separates an application's data model, user interface, and control logic into three distinct components, so that modifications to one component can be made with minimal impact on the others.
MVC is often thought of as a software design pattern. However, MVC encompasses more of the architecture of an application than is typical for a design pattern. Hence the term architectural pattern may be useful (Buschmann, et al 1996), or perhaps an aggregate design pattern.
In broad terms, constructing an application using an MVC architecture involves defining three classes of modules.
Model: The domain-specific representation of the information on which the application operates. The model is another name for the domain layer. Domain logic adds meaning to raw data (e.g. calculating if today is the user's birthday, or the totals, taxes and shipping charges for shopping cart items).
View: Renders the model into a form suitable for interaction, typically a user interface element. MVC is often seen in web applications, where the view is the HTML page and the code which gathers dynamic data for the page.
Controller: Responds to events, typically user actions, and invokes changes on the model and perhaps the view.
Many applications use a persistent storage mechanism (such as a database) to store data. MVC does not specifically mention this data access layer, because it is understood to be underneath or encapsulated by the Model.
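A minimal sketch of the three roles, with invented class names and a shopping-cart model as the domain example, looks like this:

```python
# Minimal MVC sketch: the controller reacts to a user action, updates the model,
# and asks the view to re-render; each class knows as little as possible about the others.
class Model:
    """Domain data plus domain logic (here, cart items and their total)."""
    def __init__(self):
        self.items = []
    def add(self, name, price):
        self.items.append((name, price))
    def total(self):
        return sum(price for _, price in self.items)

class View:
    """Renders the model for the user; no input handling, no business rules."""
    def render(self, model):
        print(f"{len(model.items)} item(s), total {model.total():.2f}")

class Controller:
    """Translates user events into model updates and view refreshes."""
    def __init__(self, model, view):
        self.model, self.view = model, view
    def on_add_clicked(self, name, price):
        self.model.add(name, price)
        self.view.render(self.model)

Controller(Model(), View()).on_add_clicked("book", 12.50)
```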
There are various applications that require a 3D world to be simulated as realistically as possible on a computer screen. These include 3D animations in games, movies and other real-world simulations. It takes a lot of computing power to represent a 3D world, due to the great amount of information that must be used to generate a realistic 3D world and the complex mathematical operations that must be used to project this 3D world onto a computer screen. In this situation, processing time and bandwidth are at a premium, due to large amounts of both computation and data.
The functional purpose of a GPU, then, is to provide separate, dedicated graphics resources, including a graphics processor and memory, to relieve some of the burden from the main system resources, namely the Central Processing Unit, Main Memory, and the System Bus, which would otherwise get saturated with graphical operations and I/O requests. The abstract goal of a GPU, however, is to enable a representation of a 3D world as realistically as possible. So these GPUs are designed to provide additional computational power that is customized specifically to perform these 3D tasks.
An embedded system is an integration of hardware and the software embedded in it. This software, called embedded software, acts as an operating system within the device and is responsible for the working of the system. We can say that an embedded system is a computer system, but not vice versa. It can also be a real-time system. It is different from general computer science and electronics because it includes both electronic components, such as sensors, timers and microprocessors, and software components, such as compilers and debuggers. It provides a very beneficial and economical way to build or design various applications, which may be either real time, such as flight control systems, or non real time, such as ATMs and automated washing machines. It reduces the size of the system and also the cost of designing an application. A major advantage of embedded systems is reusability, i.e., one application can be used for many specific purposes.
Devices that use light to store and read data have been the backbone of data storage for nearly
two decades. Compact discs revolutionized data storage in the early 1980s, allowing multi-
megabytes of data to be stored on a disc that has a diameter of a mere 12 centimeters and a
thickness of about 1.2 millimeters. In 1997, an improved version of the CD, called a digital
versatile disc (DVD), was released, which enabled the storage of full-length movies on a single
disc.
CDs and DVDs are the primary data storage methods for music, software,
personal computing and video. A CD can hold 783 megabytes of data. A double-sided, double-
layer DVD can hold 15.9 GB of data, which is about eight hours of movies. These conventional
storage mediums meet today's storage needs, but storage technologies have to evolve to keep
pace with increasing consumer demand. CDs, DVDs and magnetic storage all store bits of
information on the surface of a recording medium. In order to increase storage capabilities,
scientists are now working on a new optical storage method called holographic memory that will
go beneath the surface and use the volume of the recording medium for storage, instead of only the surface area. Three-dimensional data storage will be able to store more information in a smaller space and offer faster data transfer times.
Holographic memory is a developing technology that promises to revolutionize storage systems. It can store up to 1 TB of data in a sugar-cube-sized crystal; data from more than 1,000 CDs can fit into a holographic memory system. Most of the computer hard drives available today can hold only 10 to 40 GB of data, a small fraction of what a holographic memory system can hold. Conventional memories use only the surface to store data, but holographic data storage systems use the volume to store data, which gives them more advantages than conventional storage systems. Holographic memory is based on the principle of holography.
Scientist Pieter J. van Heerden first proposed the idea of holographic (three-dimensional) storage
in the early 1960s. A decade later, scientists at RCA Laboratories demonstrated the technology by
recording 500 holograms in an iron-doped lithium-niobate crystal and 550 holograms of high-
resolution images in a light-sensitive polymer material. The lack of cheap parts and the
advancement of magnetic and semiconductor memories placed the development of holographic
data storage on hold.
EDI has no single consensus definition. Two generally accepted definitions are: a standardized format for communication of business information between computer applications, and the computer-to-computer exchange of information between companies using an industry standard format.
In short, Electronic Data Interchange (EDI) is the computer-to-computer exchange of business information using a public standard. EDI is a central part of Electronic Commerce (EC), because it enables businesses to exchange business information electronically much faster, cheaper and more accurately than is possible using paper-based systems. Electronic Data Interchange consists of data that has been put into a standard format and is electronically transferred between trading partners. Often, an acknowledgement is returned to the sender informing them that the data was received. The term EDI is often used synonymously with the term EDT. These two terms are indeed different and should not be used interchangeably.
EDI VS EDT
The terms EDI and EDT are often misused.
EDT, Electronic Data Transfer, is simply sending a file electronically to a trading partner.
Although EDI documents are sent electronically, they are sent in a standard format.
This standard format is what makes EDI different from EDT.
HISTORY OF EDI
The government did not invent EC/EDI; it is merely taking advantage of an established technology that has been widely used in the private sector for the last few decades. EDI was first used in the transportation industry more than 20 years ago. Ocean, motor, air, and rail carriers and the associated shippers, brokers, customs, freight forwarders, and bankers used it. Developed in the 1960s to accelerate the movement of documents, it has been widely employed in the automotive, retail, transportation and international trade sectors since the mid-80s, and its use is steadily growing.
EDI FEATURES
# Independent of trading partners' internal computerized application systems.
# Interfaces with internal application systems rather than being integrated with them.
# Not limited by differences in the computer or communications equipment of trading companies.
# Consists only of business data, not verbiage or free-form messages.
Let's take a high-level look at the EDI process. In a typical example, a car manufacturing company is a trading partner with an insurance company. The human resources department at the car manufacturing company has a new employee who needs to be enrolled in an insurance plan. The HR representative enters the individual into the computer. The new employee's data is mapped into a standard format and sent electronically to the insurance company. The insurance company maps the data out of the standard format and into a format that is usable with their computer. An acknowledgment is automatically generated by the insurance company and sent to the car manufacturer informing them that the data was received.
Hence, to summarise the EDI process, the sequence of events in any EDI transaction is as follows. The sender's own business application system assembles the data to be transmitted. This data is translated into an EDI standard format (i.e., a transaction set). The transaction set is transmitted either through a third-party network (e.g. a VAN) or directly to the receiver's EDI translation system. The transaction set, in EDI standard format, is translated into files that are usable by the receiver's business application system. The files are then processed using the receiver's business application system.
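The sender-side translation step can be sketched as below. The segment names and layout are invented for illustration and are not a real X12 or EDIFACT transaction set; they only show internal application data being flattened into a delimited, standard-style format before transmission.

```python
# Sketch: map an internal HR record into simple delimited segments for transmission
# (via a VAN or directly to the partner's translator). Layout is illustrative only.
def to_edi_segments(employee):
    return "\n".join([
        f"HDR*ENROLL*{employee['plan']}",
        f"NME*{employee['last']}*{employee['first']}",
        f"DTM*HIRE*{employee['hire_date']}",
        "TRL*3",                                   # trailer carrying the segment count
    ])

record = {"plan": "PPO1", "last": "Doe", "first": "Jane", "hire_date": "20240102"}
print(to_edi_segments(record))
```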

In recent years the usage of design patterns has been gaining interest in the software design community. Design patterns provide successful and proven solutions for design problems, and common terminologies that are based on the expertise of domain experts, thus providing guidance for both the design and the comprehension of large-scale software systems. In this presentation we focus on the visualization of design patterns that were recovered from existing source code. In particular we discuss three popular approaches in graph drawing and their applicability to visualizing patterns, since generating layouts for design patterns can be seen as a graph-drawing problem. The presentation will conclude with some discussion of open problems and some potential future directions to address the challenges of visualizing design patterns.

Contiki is a small open source, yet fully featured, operating system developed for use on a number
of smallish systems ranging from 8-bit computers to embedded microcontrollers, including sensor
network motes. The name Contiki comes from Thor Heyerdahl's famous Kon-Tiki raft.
Despite providing multitasking and a built-in TCP/IP stack, Contiki only requires a few kilobytes of
code and a few hundred bytes of RAM. A fully fledged system complete with a graphical user
interface (GUI) will require about 30 kilobytes of code memory.
The basic kernel and most of the core functions are developed by Adam Dunkels.
Features
A full installation of Contiki includes the following features:
Multitasking kernel
Optional pre-emptive multitasking (on a per-application basis)
Protothreads
TCP/IP networking
Windowing system and GUI
Networked remote display using Virtual Network Computing (VNC)
Web browser (claimed to be the world's smallest)
Personal webserver
Simple telnet client
Screensaver
More applications are developed constantly. Known planned developments include:
an email client
an IRC client
Brain fingerprinting is a technique that measures recognition of familiar stimuli by measuring electrical brain wave responses to words, phrases, or pictures that are presented on a computer screen. Brain fingerprinting was invented by Dr. Lawrence Farwell. The theory is that the suspect's reaction to the details of an event or activity will reflect whether the suspect had prior knowledge of the event or activity. This test uses the Memory and Encoding Related Multifaceted Electroencephalographic Response (MERMER) to detect the familiarity reaction. It is hoped it might be more accurate than a polygraph (lie-detector) test, which measures physiological signals such as heart rate, sweating, and blood pressure.
The person to be tested wears a special headband with electronic sensors that measure the EEG from several locations on the scalp. In order to calibrate the brain fingerprinting system, the testee is first presented with a series of irrelevant stimuli, words, and pictures, and then a series of relevant stimuli, words, and pictures. The testee's brain responses to these two different types of stimuli allow the tester to determine whether the measured brain responses to test stimuli, called probes, are more similar to the relevant or the irrelevant responses.
Identity documents (IDs), such as passports and drivers' licenses, are relied upon to deter fraud and stop terrorism. A multitude of document types and increased expertise in forgery make human inspection of such documents inconsistent and error-prone. New-generation reader/authenticator technology can assist in the ID screening process. Such devices can read the information on the ID, authenticate it, and provide an overall security risk analysis. This talk will discuss how image processing and pattern recognition technology were used in the implementation of one such commercial device, the AssureTec i-Dentify reader. The reader is based on a high-resolution color CCD camera which automatically captures a presented ID under a variety of light sources (visible, UV, IR, and others) in a few seconds.
Automated processing of IDs involves a number of interesting technical challenges which will be discussed: sensing the presence of a document in the reader viewing area; cropping the document and extracting its size; identifying the document type by rapid comparison to a known document library; locating, extracting, and image processing of data fields of various types (text, photo, symbols, barcodes); processing text fields with appropriate OCR engines; cross-checking data from different parts of a document for consistency; checking for the presence of security features (e.g., UV patterns); and providing an overall risk assessment of whether the document is falsified.
Object-oriented programming (OOP) has been presented as a technology that can fundamentally
aid software engineering, because the underlying object model provides a better fit with real
domain problems. However most software systems consist of several concerns that crosscut
multiple modules. Object-oriented techniques for implementing such concerns result in systems
that are invasive to implement, tough to understand, and difficult to evolve. This forces the
implementation of those design decisions to be scattered throughout the code, resulting in
"tangled" code that is excessively difficult to develop and maintain. The new aspect-oriented
programming (AOP) methodology facilitates modularization of crosscutting concerns. Using AOP,
you can create implementations that are easier to design, understand, and maintain. Further, AOP
promises higher productivity, improved quality, and better ability to implement newer features.
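The following minimal Python sketch illustrates the idea with a decorator: the logging concern is written once, in one place, instead of being scattered through every business method. Real AOP frameworks such as AspectJ express this with join points, pointcuts, and advice; the decorator is only an analogy, and the function names are invented.

import functools
import logging

logging.basicConfig(level=logging.INFO)

def logged(func):
    """The crosscutting logging concern, kept in a single module (the 'aspect')."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        logging.info("entering %s", func.__name__)
        result = func(*args, **kwargs)
        logging.info("leaving %s", func.__name__)
        return result
    return wrapper

@logged
def transfer_funds(src, dst, amount):
    # Core business logic stays free of logging code (no tangling).
    return f"moved {amount} from {src} to {dst}"

print(transfer_funds("savings", "checking", 100))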
Signcryption is a new paradigm in public key cryptography that simultaneously fulfils both the
functions of digital signature and public key encryption in a logically single step, and with a cost
significantly lower than that required by the traditional signature and encryption approach. The
main disadvantage of the traditional sign-then-encrypt approach is that digitally signing a message
and then encrypting it consumes more machine cycles and bloats the message by introducing extra
bits to it. Likewise, decrypting and verifying the message at the receiver's end uses up a great deal
of computational power.
Thus you can say that the cost of delivering a message using signing-then-encryption is in effect
the sum of the costs of both digital signatures and public key encryption. Is it possible to send a
message of arbitrary length with cost less than that required by signature-then-encryption?
Signcryption achieves both functions in a logically single step, at a cost significantly lower than
that of signature followed by encryption. This topic has similar mathematical content to the lecture
on RSA and requires a good understanding of encryption algorithms such as RSA and DES.
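For contrast, the sketch below shows the traditional two-step sign-then-encrypt baseline that signcryption aims to improve on. It assumes the third-party Python 'cryptography' package; the 256-byte RSA signature plus the separate RSA-OAEP ciphertext illustrate the message expansion and the two distinct public-key operations discussed above, which a signcryption scheme would fold into a single step.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Traditional approach (illustrative only): sign with the sender's private key,
# then encrypt with the receiver's public key.
sender_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
receiver_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"transfer 100 to account 42"

signature = sender_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
ciphertext = receiver_key.public_key().encrypt(
    message,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
# In practice the signature would be encrypted along with the message (usually
# under a hybrid symmetric scheme); it is kept separate here only for brevity.
print(len(signature), len(ciphertext))   # 256 + 256 bytes for a 26-byte message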
Ubiquitous computing names the third wave in computing, just now beginning. First were
mainframes, each shared by lots of people. Now we are in the personal computing era, person
and machine staring uneasily at each other across the desktop. Next comes ubiquitous
computing, or the age of calm technology, when technology recedes into the background of our
lives. Alan Kay of Apple calls this Third Paradigm computing.
Mark Weiser is the father of ubiquitous computing. This paper explains what is new and different
about the computer science in ubiquitous computing and some idea about ubiquitous networks.
Ubiquitous computing is roughly the opposite of virtual reality. Where virtual reality puts people
inside a computer-generated world, ubiquitous computing forces the computer to live out here in
the world with people. Virtual reality is primarily a horsepower problem; ubiquitous computing is
a very difficult integration of human factors, computer science, engineering, and social sciences.
In its most generic sense, a voice portal can be defined as speech-enabled access to Web-based
information. In other words, a voice portal provides telephone users with a natural-language
interface to access and retrieve Web content. An Internet browser can provide Web access from a
computer but not from a telephone; a voice portal is a way to do exactly that.
The voice portal market is exploding with enormous opportunities for service providers to grow
business and revenues. Voice based internet access uses rapidly advancing speech recognition
technology to give users anytime, anywhere communication and access - the human voice - over
an office, wireless, or home phone. Here we would describe the various technology factors that
are making voice portal the next big opportunity on the web, as well as the various approaches
service providers and developers of voice portal solutions can follow to maximize this exciting new
market opportunity.
Why Voice?
Natural speech is the modality used when communicating with other people. This makes it easier for a
user to learn the operation of voice-activated services. As an output modality, speech has several
advantages. First, auditory output does not interfere with visual tasks, such as driving a car.
Second, it allows for easy incorporation of sound-based media, such as radio broadcasts, music,
and voice-mail messages. Third, advances in TTS (Text To Speech) technology mean text
information can be transferred easily to the user. Natural speech also has an advantage as an
input modality, allowing for hands-free and eyes-free use. With proper design, voice commands
can be created that are easy for a user to remember. These commands do not have to compete
for screen space. In addition, unlike keyboard-based macros (e.g., Ctrl-F7), voice commands can
be inherently mnemonic ("call United Airlines"), obviating the necessity for hint cards. Speech can
be used to create an interface that is easy to use and requires a minimum of user attention.
VUI (Voice User Interface)
For a voice portal to function, one of the most important technologies to include is a good VUI
(Voice User Interface). There has been a great deal of development in the field of interaction
between the human voice and computer systems, and such interfaces are now being applied in
many other fields. For example, the insurance industry has turned to interactive voice response
(IVR) systems to provide telephonic customer self-service, reduce the load on call-center staff,
and cut overall service costs. The promise is certainly there, but how well these systems perform -
and, ultimately, whether customers leave the system satisfied or frustrated - depends in large part
on the user interface.
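As a toy illustration of how a VUI might structure such an interaction, the sketch below models a tiny IVR-style voice menu as a state machine in Python. The menu names and caller utterances are hypothetical, and a real system would attach a speech recognizer on the input side and a TTS engine on the prompt side.

# Toy IVR menu modelled as a state machine; recognized utterances arrive as
# plain strings and prompts are simply printed (stand-ins for ASR and TTS).
MENU = {
    "main":    {"prompt": "Say 'claims', 'billing', or 'agent'.",
                "claims": "claims", "billing": "billing", "agent": "agent"},
    "claims":  {"prompt": "Please say your claim number, or say 'back'.",
                "back": "main"},
    "billing": {"prompt": "Say 'balance' or 'back'.", "back": "main"},
    "agent":   {"prompt": "Transferring you to a call-center agent."},
}

def ivr_session(utterances):
    """Walk the menu for a sequence of recognized caller utterances."""
    state = "main"
    for heard in utterances:
        print("[system]", MENU[state]["prompt"])
        print("[caller]", heard)
        state = MENU[state].get(heard, state)   # unrecognized input repeats the menu
    print("[system]", MENU[state]["prompt"])

ivr_session(["billing", "back", "agent"])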
A new wireless technology could beat fiber optics for speed in some applications.
Atop each of the Trump towers in New York City, there's a new type of wireless transmitter and
receiver that can send and receive data at rates of more than one gigabit per second -- fast
enough to stream 90 minutes of video from one tower to the next, more than one mile apart, in
less than six seconds. By comparison, the same video sent over a DSL or cable Internet
connection would take almost an hour to download.
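A quick back-of-envelope check of those figures, assuming a roughly 700 MB compressed 90-minute video file and a typical 1.5 Mbit/s DSL line (both assumed values, not quoted by GigaBeam):

# Assumed: ~700 MB video file, a 1 Gbit/s wireless link, and 1.5 Mbit/s DSL.
video_bits = 700e6 * 8                               # about 5.6 gigabits
print(video_bits / 1e9, "seconds at 1 Gbit/s")       # ~5.6 seconds
print(video_bits / 1.5e6 / 60, "minutes over DSL")   # ~62 minutes, roughly an hour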
This system is dubbed WiFiber by its creator, GigaBeam, a Virginia-based telecommunications
startup. Although the technology is wireless, the company's approach -- high-speed data
transferring across a point-to-point network -- is more of an alternative to fiber optics than to Wi-Fi
or Wi-Max, says John Krzywicki, the company's vice president of marketing. And it's best suited
for highly specific data delivery situations.
This kind of point-to-point wireless technology could be used in situations where digging fiber-
optic trenches would disrupt an environment, where their cost would be prohibitive, or where the installation process would
take too long, as in extending communications networks in cities, on battlefields, or after a
disaster.
Blasting beams of data through free space is not a new idea. LightPointe and Proxim Wireless
also provide such services. What makes GigaBeam's technology different is that it exploits a
different part of the electromagnetic spectrum. Their systems use a region of the spectrum near
visible light, at terahertz frequencies. Because of this, weather conditions in which visibility is
limited, such as fog or light rain, can hamper data transmission.
GigaBeam, however, transmits at 71-76, 81-86, and 92-95 gigahertz frequencies, where these
conditions generally do not cause problems. Additionally, by using this region of the spectrum,
GigaBeam can outpace traditional wireless data delivery used for most wireless networks.
Because so many devices, from Wi-Fi base stations to baby monitors, use the frequencies of
2.4 and 5 gigahertz, those spectrum bands are crowded, and therefore require complex
algorithms to sort and route traffic -- both data-consuming endeavors, says Jonathan Wells,
GigaBeam s director of product development. With less traffic in the region between 70 to 95
gigahertz, GigaBeam can spend less time routing data, and more time delivering it. And because
of the directional nature of the beam, problems of interference, which plague more spread-out
signals at the traditional frequencies, are not likely; because the tight beams of data will rarely, if
ever, cross each other's paths, data transmission can flow without interference, Wells says.
Correction: As a couple of readers pointed out, our title was misleading. Although the emergence
of a wireless technology operating in the gigabits per second range is an advance, it does not
outperform current fiber-optic lines, which can still send data much faster.
Even with its advances, though, GigaBeam faces the same problem as other point-to-point
technologies: creating a network with an unbroken sight line. Still, it could offer some businesses
an alternative to fiber optics. Currently, a GigaBeam link, which consists of a set of transmitting
and receiving radios, costs around $30,000. But Krzywicki says that improving technology is
driving down costs. In addition to outfitting the Trump towers, the company has deployed a link on
the campuses of Dartmouth College and Boston University, and two links for San Francisco's
Public Utility Commission.
In cryptography, SAFER (Secure And Fast Encryption Routine) is the name of a family of block
ciphers designed primarily by James Massey (one of the designers of IDEA) on behalf of Cylink
Corporation. The early SAFER K and SAFER SK designs share the same encryption function, but
differ in the number of rounds and the key schedule. More recent versions - SAFER+ and
SAFER++ - were submitted as candidates to the AES process and the NESSIE project
respectively. All of the algorithms in the SAFER family are unpatented and available for
unrestricted use.
The first SAFER cipher was SAFER K-64, published by Massey in 1993, with a 64-bit block size.
The K-64 denotes a key size of 64 bits. There was some demand for a version with a larger 128-
bit key, and the following year Massey published such a variant incorporating a new key schedule
designed by the Singapore Ministry for Home Affairs: SAFER K-128. However, both Lars Knudsen
and Sean Murphy found minor weaknesses in this version, prompting a redesign of the key
schedule to one suggested by Knudsen; these variants were named SAFER SK-64 and SAFER
SK-128 respectively - the SK standing for "Strengthened Key schedule", though the RSA FAQ
reports that "one joke has it that SK really stands for 'Stop Knudsen', a wise precaution in the
design of any block cipher". Another variant with a reduced key size was published, SAFER SK-
40, to comply with 40-bit export restrictions.
All of these ciphers use the same round function consisting of four stages, as shown in the
diagram: a key-mixing stage, a substitution layer, another key-mixing stage, and finally a diffusion
layer. In the first key-mixing stage, the plaintext block is divided into eight 8-bit segments, and
subkeys are added using either addition modulo 256 (denoted by a + in a square) or XOR
(denoted by a + in a circle). The substitution layer consists of two S-boxes, each the inverse of
the other, derived from discrete exponentiation (45^x) and logarithm (log45 x) functions. After a
second key-mixing stage there is the diffusion layer: a novel cryptographic component termed a
pseudo-Hadamard transform (PHT). (The PHT was also later used in the Twofish cipher.)
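A small Python sketch of two of the components just described, the exponentiation/logarithm S-boxes and the two-byte PHT, may help. It follows the usual SAFER convention that 45^128 mod 257 = 256 is stored as 0 so the value fits in a byte; this is only an illustration of the building blocks, not a complete cipher.

# Exponentiation S-box: 45^x mod 257, with the single value 256 stored as 0.
EXP = [pow(45, x, 257) % 256 for x in range(256)]
# Logarithm S-box is its inverse.
LOG = [0] * 256
for x, e in enumerate(EXP):
    LOG[e] = x

def pht(a, b):
    """Two-byte pseudo-Hadamard transform used in the diffusion layer."""
    return (2 * a + b) % 256, (a + b) % 256

assert EXP[LOG[7]] == 7      # the two S-boxes invert each other
print(pht(1, 2))             # -> (4, 3)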
Cyber Crime and Security
Open Source Technology
Nano computing
VoIP in mobile phones
Mobile Adhoc Network
Network Security
CDMA & Bluetooth Technology
Software Testing & Quality Assurance
WI-FI / WI-MAX
Digital Media Broadcasting
Real Time Operating System
Cyborgs
Object oriented technologies
Advanced Databases
Image processing and applications
Mobile Networking
Natural Language Processor
Advanced algorithms
Neural networks and applications
Software advances in wireless communication (Cognitive Radio, Dynamic Spectrum Access, etc.)
Data Mining and Data Warehousing
Image processing in computer vision
Pervasive computing
Distributed and parallel systems
Embedded Systems
Software quality assurance
Business Intelligence ERP
Grid Computing
Artificial Neural Networks
Acceleration of Intelligence in Machines
Communication System in the new era
E-MINE: A novel web mining approach
Ad-Hoc and Sensor Networks
Algorithms and Computation Theories
Artificial Intelligence
Data Warehouse
Robotics
Concurrent Programming and Parallel distributed O.S.
Server virtualization
Advanced cryptography and implementations
Knowledge discovery and Data Mining
Genetic Algorithm
High Performance Computing
Nano Technology
Distributed computing
Parasitic computing
Computational Intelligence and Linguistics
Future Programming Techniques and Concepts
Managing Data with emerging technologies
Revolutions in the Operating System and Servers
Visualization and Computer Graphics
Network Management and Security
Secure Computing
Network Modeling and Simulation
Advanced Processors
Security
Digital Signal Processing and their applications
Performance Evaluation
Gesture recognition
Biometrics in secure e-transactions
Fingerprint recognition system by neural networks
Search for extraterrestrial intelligence using satellite communication
Wireless communication system
Sensor fusion for video surveillance
Emerging trends in robotics using neural networks
Embedded systems and VLSI: an architectural approach to reduce leakage energy in memory
Concurrent programming and parallel distributed O.S.
Robotics and automation (snake robots)
Dynamic spectrum access
Micro chip production using extreme UV lithography
Detecting infrastructure damage caused by earthquakes
A cognitive radio approach for usage of virtual unlicensed spectrum
Server virtualization
Twd radar satellite communications
Improving TCP performance over mobile ad hoc networks
E-wallet
Knowledge discovery and data mining
Plasmonics
Nano-technology and application
ATM networks
Network security
Generic algorithm
ATM, WAP, Bluetooth
Reconfigurable computing
Nanocomputing
Mobile Computing
Satellite Networks
Distributed and Parallel Computing