
1-What is DMA?

Direct memory access (DMA) is a feature of modern computers that allows certain hardware
subsystems within the computer to access system memory independently of the central
processing unit (CPU).
Without DMA, when the CPU is using programmed input/output, it is typically fully occupied for the entire duration of the read or write operation, and is thus
unavailable to perform other work. With DMA, the CPU initiates the transfer, does other operations while the transfer is in progress, and receives
an interrupt from the DMA controller when the operation is done. This feature is useful any time the CPU cannot keep up with the rate of data transfer, or
where the CPU needs to perform useful work while waiting for a relatively slow I/O data transfer. Many hardware systems use DMA, including disk
drive controllers, graphics cards, network cards and sound cards. DMA is also used for intra-chip data transfer in multi-core processors. Computers that have
DMA channels can transfer data to and from devices with much less CPU overhead than computers without a DMA channel. Similarly, a processing element
inside a multi-core processor can transfer data to and from its local memory without occupying its processor time, allowing computation and data transfer to
proceed in parallel.
DMA can also be used for "memory to memory" copying or moving of data within memory. DMA can offload expensive memory operations, such as large
copies or scatter-gather operations, from the CPU to a dedicated DMA engine. An implementation example is the I/O Acceleration Technology.
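The CPU-side pattern described above — program the channel, continue with other work, and learn of completion later — can be sketched in C. The register layout and function names below are invented for illustration (real DMA controllers are vendor-specific), and a plain function stands in for the hardware engine:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical memory-to-memory DMA channel. On real hardware these
 * fields would be memory-mapped registers at vendor-defined addresses. */
typedef struct {
    const void *src;   /* source address register        */
    void       *dst;   /* destination address register   */
    size_t      count; /* transfer length in bytes       */
    volatile int busy; /* set while a transfer is active */
} dma_channel_t;

/* Start a transfer: program the channel and return immediately,
 * leaving the CPU free to do other work. */
void dma_start(dma_channel_t *ch, void *dst, const void *src, size_t n) {
    ch->src = src;
    ch->dst = dst;
    ch->count = n;
    ch->busy = 1;   /* on real hardware, a trigger bit would be set here */
}

/* Stand-in for the hardware engine: performs the copy and clears busy.
 * In a real system this happens in parallel with CPU execution, and
 * completion is usually signalled by an interrupt. */
void dma_engine_run(dma_channel_t *ch) {
    memcpy(ch->dst, ch->src, ch->count);
    ch->busy = 0;
}
```

In a real driver `dma_engine_run` would not exist: the hardware performs the copy concurrently and raises an interrupt when the transfer completes.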
2-USART
USART stands for Universal Synchronous Asynchronous Receiver Transmitter. It is sometimes called the Serial Communications Interface or SCI.
Synchronous operation uses a clock and data line while there is no separate clock accompanying the data for Asynchronous transmission.
Since there is no clock signal in asynchronous operation, one pin can be used for transmission and another pin can be used for reception. Both transmission
and reception can occur at the same time; this is known as full-duplex operation.
Transmission and reception can be independently enabled. However, when the serial port is enabled, the USART will control both pins and one cannot be
used for general purpose I/O when the other is being used for transmission or reception.
The USART is most commonly used in the asynchronous mode. In this presentation we will deal exclusively with asynchronous operation.
The most common use of the USART in asynchronous mode is to communicate to a PC serial port using the RS-232 protocol. Please note that a driver is
required to interface to RS-232 voltage levels and the PICmicro MCU should not be directly connected to RS-232 signals.
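As a small concrete piece of asynchronous-mode setup, many USARTs derive their baud rate from the system clock using a divisor register and 16x oversampling. The function below follows that common scheme; the exact register width and rounding behaviour are device-specific, so treat this as a sketch and check the datasheet:

```c
#include <stdint.h>

/* Baud-rate divisor for a USART with 16x oversampling, rounded to the
 * nearest integer to minimise baud-rate error. Many 8-bit MCUs (e.g.
 * AVR, PIC in a similar form) use this formula. */
uint16_t usart_baud_divisor(uint32_t f_clk, uint32_t baud) {
    return (uint16_t)((f_clk + 8UL * baud) / (16UL * baud) - 1UL);
}
```

For example, an 8 MHz clock at 9600 baud gives a divisor of 51, and a 16 MHz clock gives 103 — the values commonly seen in AVR datasheet tables.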

I2C-Bus: What's that?
The I2C bus was designed by Philips in the early '80s to allow easy communication between
components which reside on the same circuit board. Philips Semiconductors became
NXP in 2006.
The name I2C translates into "Inter IC". Sometimes the bus is called IIC or IC bus.
The original communication speed was defined with a maximum of 100 kbit per second, and
many applications don't require faster transmissions. For those that do, there is a 400 kbit/s
fast mode and - since 1998 - a high-speed 3.4 Mbit/s option. More recently, Fast-mode Plus,
with a transfer rate of 1 Mbit/s (between fast mode and high-speed mode), has been specified.
I2C is not only used on single boards, but also to connect components which are linked via
cable. Simplicity and flexibility are key characteristics that make this bus attractive to many
applications.
Most significant features include:
Only two bus lines are required
No strict baud rate requirements as with, for instance, RS-232; the master generates
a bus clock
Simple master/slave relationships exist between all components
Each device connected to the bus is software-addressable by a unique address
I2C is a true multi-master bus providing arbitration and collision detection
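Software addressability works because the first byte of every transaction carries the 7-bit slave address followed by a read/write bit. A minimal helper, assuming the standard 7-bit addressing mode:

```c
#include <stdint.h>

/* First byte of every I2C transaction: the 7-bit slave address in the
 * upper bits, followed by the R/W bit (1 = read, 0 = write). */
uint8_t i2c_address_byte(uint8_t addr7, int read) {
    return (uint8_t)((addr7 << 1) | (read ? 1 : 0));
}
```

For example, a device at address 0x50 is addressed as 0xA0 for writes and 0xA1 for reads.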
Interrupt Service Routine (ISR)
In systems programming, an interrupt handler, also known as an Interrupt Service Routine (ISR), is a callback subroutine in microcontroller
firmware, an operating system or a device driver whose execution is triggered by the reception of an interrupt. Interrupt handlers have a multitude of
functions, which vary based on the reason the interrupt was generated and the speed at which the interrupt handler completes its task.
An interrupt handler is a low-level counterpart of event handlers. These handlers are initiated by either hardware interrupts or interrupt
instructions in software, and are used for servicing hardware devices and transitions between protected modes of operation such as system
calls.
Interrupt
In systems programming, an interrupt is a signal to the processor emitted by hardware or software indicating an event that needs immediate
attention. An interrupt alerts the processor to a high-priority condition requiring the interruption of the current code the processor is executing,
the current thread. The processor responds by suspending its current activities, saving its state, and executing a small program called
an interrupt handler (or interrupt service routine, ISR) to deal with the event. This interruption is temporary, and after the interrupt handler
finishes, the processor resumes execution of the previous thread. There are two types of interrupts:
A hardware interrupt is an electronic alerting signal sent to the processor from an external device, either a part of the computer itself such as
a disk controller or an external peripheral. For example, pressing a key on the keyboard or moving the mouse triggers hardware interrupts that
cause the processor to read the keystroke or mouse position. Unlike the software type (below), hardware interrupts are asynchronous and can
occur in the middle of instruction execution, requiring additional care in programming. The act of initiating a hardware interrupt is referred to as
an interrupt request (IRQ).
A software interrupt is caused either by an exceptional condition in the processor itself, or a special instruction in the instruction set which
causes an interrupt when it is executed. The former is often called a trap or exception and is used for errors or events occurring during
program execution that are exceptional enough that they cannot be handled within the program itself. For example, if the processor's arithmetic
logic unit is commanded to divide a number by zero, this impossible demand will cause a divide-by-zero exception, perhaps causing the
computer to abandon the calculation or display an error message. Software interrupt instructions function similarly to subroutine calls and are
used for a variety of purposes, such as to request services from low level system software such as device drivers. For example, computers
often use software interrupt instructions to communicate with the disk controller to request data be read or written to the disk.
Each interrupt has its own interrupt handler. The number of hardware interrupts is limited by the number of interrupt request (IRQ) lines to the
processor, but there may be hundreds of different software interrupts.
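A common firmware idiom that follows from the above: keep the handler minimal — record that the event happened — and do the real work in the main loop. The sketch below uses a plain C function in place of a real vectored handler, since vector names and ISR attributes are compiler- and target-specific:

```c
#include <stdint.h>

/* Shared between the ISR and the main loop: volatile so the compiler
 * re-reads it each time, and a single byte so access is atomic on
 * typical 8-bit targets. */
volatile uint8_t tick_pending = 0;

/* The handler itself: record the event and return immediately.
 * (On a real target this would carry an ISR attribute and a vector
 * name from the device header.) */
void timer_isr(void) {
    tick_pending = 1;
}

/* Main-loop side: consume the flag and perform the deferred work. */
int service_tick(void) {
    if (tick_pending) {
        tick_pending = 0;
        return 1;   /* deferred work was done */
    }
    return 0;
}
```

Keeping the handler this short minimises the time other interrupts are blocked and keeps worst-case latency predictable.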





Mutex:

Is a key to a toilet. One person can have the key - occupy the toilet - at a
time. When finished, the person gives (frees) the key to the next person in the
queue.

Officially: "Mutexes are typically used to serialise access to a section of re-
entrant code that cannot be executed concurrently by more than one thread.
A mutex object only allows one thread into a controlled section, forcing other
threads which attempt to gain access to that section to wait until the first
thread has exited from that section." Ref: Symbian Developer Library

(A mutex is essentially a binary semaphore, i.e. a semaphore with a count of 1.)
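The toilet-key analogy maps directly onto a POSIX mutex, to pick one concrete API. The sketch below (function names invented) records the peak number of threads ever inside the critical section, which the mutex forces to be one:

```c
#include <pthread.h>

/* One key, one toilet: only the thread holding the mutex may enter. */
static pthread_mutex_t key = PTHREAD_MUTEX_INITIALIZER;
static int occupants = 0;   /* threads currently in the section */
static int max_seen  = 0;   /* peak occupancy ever observed     */

void *visitor(void *arg) {
    (void)arg;
    pthread_mutex_lock(&key);      /* take the key (wait if held) */
    occupants++;
    if (occupants > max_seen)
        max_seen = occupants;
    occupants--;
    pthread_mutex_unlock(&key);    /* hand the key to the next in queue */
    return NULL;
}

/* Run n visitor threads (capped at 16) and report peak occupancy. */
int run_visitors(int n) {
    pthread_t t[16];
    if (n > 16) n = 16;
    for (int i = 0; i < n; i++) pthread_create(&t[i], NULL, visitor, NULL);
    for (int i = 0; i < n; i++) pthread_join(&t[i], NULL);
    return max_seen;
}
```

However many visitors run, the peak occupancy stays at 1 — that is the serialisation the Symbian quote describes.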

Semaphore:

Is the number of free identical toilet keys. For example, say we have four toilets
with identical locks and keys. The semaphore count - the count of keys - is set
to 4 at the beginning (all four toilets are free), then the count is
decremented as people come in. If all toilets are full, i.e. there are no free
keys left, the semaphore count is 0. Now, when one person leaves the
toilet, the semaphore is increased to 1 (one free key), and the key is given to the
next person in the queue.

Officially: "A semaphore restricts the number of simultaneous users of a
shared resource up to a maximum number. Threads can request access to
the resource (decrementing the semaphore), and can signal that they have
finished using the resource (incrementing the semaphore)." Ref: Symbian
Developer Library
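Using POSIX semaphores as one concrete API, the four-key example works like this: initialise the count to 4, take keys with sem_trywait() until none remain, and return one with sem_post(). The helper below (an illustrative name, not a standard call) drains whatever keys are currently free:

```c
#include <semaphore.h>

/* Take every free "key" without blocking and report how many there
 * were. sem_trywait() decrements the count and succeeds while the
 * count is positive; it fails once the count reaches 0. */
int take_all_keys(sem_t *keys) {
    int taken = 0;
    while (sem_trywait(keys) == 0)
        taken++;
    return taken;
}
```

Starting from sem_init(&keys, 0, 4), take_all_keys() returns 4; after one sem_post() (one person leaves), it returns 1.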

Deadlock and how to avoid it
Deadlock is a situation where two processes are each waiting for the other to
release a resource it holds, or more than two processes are waiting for resources in
a circular chain, so that none of them can obtain the resources it requires and all
stop running.

Deadlock can happen between processes, threads or tasks (in VxWorks). And it can
happen with any kind of shared resource. I use "process" here for the discussion.

There are three conditions in order for a deadlock to happen:
1. Each of the processes involved accesses multiple shared resources.
2. Each of the processes involved holds some shared resources while requiring other
shared resources.
3. A circular waiting chain is possible.

To handle a deadlock, basically we have to break one or more of the conditions
above. There are four ways to avoid it as far as I know.
1. All the processes apply the same coding pattern. The pattern is that all the
processes acquire (semTake, for example) the shared resources in the same order,
so that no circular chain can form.
This method is suitable for smaller-scale applications, where all the shared
resources can be listed and ordered.

2. Back-off algorithm. Each of the processes either holds all the shared resources it
needs before proceeding, or none of them. In practice, the first semTake() can use
WAIT_FOREVER, and the subsequent semTake() calls use NO_WAIT. If one of the
subsequent semTake() calls fails, the process backs off and releases all the resources it
already holds.
This method increases the coding complexity. And it's not suitable for real-time
systems, as the back-off takes an unpredictable amount of time.
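A minimal back-off sketch using POSIX mutexes, where pthread_mutex_lock plays the WAIT_FOREVER role and pthread_mutex_trylock the NO_WAIT role (the function name is invented for illustration):

```c
#include <pthread.h>

/* All-or-nothing acquisition: block for the first lock, try the second
 * without waiting; on failure release everything and report, so the
 * caller can retry later. Returns 1 when both locks are held. */
int take_both_or_none(pthread_mutex_t *first, pthread_mutex_t *second) {
    pthread_mutex_lock(first);                /* wait forever         */
    if (pthread_mutex_trylock(second) != 0) { /* no wait              */
        pthread_mutex_unlock(first);          /* back off: release all */
        return 0;
    }
    return 1;
}
```

Because the process never holds one lock while blocking on another, condition 2 of the deadlock conditions above is broken — at the cost of possible retries.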

3. Avoid having processes access multiple resources. This can be done by redesigning
the software's structure, algorithms or data structures. For example, a client-server
model can be used, so that only the server manages the shared resources, while
clients access the resources through the server.

4. Use some sort of watchdog to monitor the processes and, if a deadlock is
detected, reset the system or the processes.
Since a deadlock situation rarely happens, if a built-in mechanism is too
expensive, then a third-party monitor may be suitable. For example, the Linux kernel
largely ignores deadlock and pretends it will never happen! (Surprising, isn't it? But
that's real. When a deadlock indeed happens, the system is simply rebooted.)
Posted by Honeybee at 2:31 PM








What is the difference between UDP and TCP internet protocols?
by nixCraft on May 15, 2007, last updated December 16, 2007
in Linux, Networking, Unix

Q. Can you explain the difference between UDP and
TCP internet protocol (IP) traffic and its usage with an
example?
A. Transmission Control Protocol (TCP) and User
Datagram Protocol (UDP) are transport protocols, two of
the core protocols of the Internet protocol suite. Both
TCP and UDP work at the transport layer of the TCP/IP
model, but they have very different usage.
Difference between TCP and UDP

Reliability
TCP: TCP is a connection-oriented protocol. When a file or message is sent, it will
get delivered unless the connection fails. If the connection is lost, the receiving side
will request the lost part. There is no corruption while transferring a message.
UDP: UDP is a connectionless protocol. When you send data or a message, you
don't know if it will get there; it could get lost on the way. There may be corruption
while transferring a message.

Ordering
TCP: If you send two messages along a connection, one after the other, you know
the first message will get there first. You don't have to worry about data arriving in
the wrong order.
UDP: If you send two messages out, you don't know what order they will arrive in,
i.e. there is no ordering.

Weight
TCP (heavyweight): When the low-level parts of the TCP "stream" arrive in the wrong
order, resend requests have to be sent, and all the out-of-sequence parts have to be
put back together, so it takes a bit of work to piece everything together.
UDP (lightweight): No ordering of messages, no tracking of connections, etc. It's just
fire and forget! This means it's a lot quicker, and the network card / OS have to do
very little work to translate the data back from the packets.

Transfer unit
TCP (streaming): Data is read as a "stream," with nothing distinguishing where one
packet ends and another begins. There may be multiple packets per read call.
UDP (datagrams): Packets are sent individually and are guaranteed to be whole if
they arrive. One packet per read call.

Examples
TCP: World Wide Web (Apache, TCP port 80), e-mail (SMTP, TCP port 25, Postfix
MTA), File Transfer Protocol (FTP, port 21) and Secure Shell (OpenSSH, port 22), etc.
UDP: Domain Name System (DNS, UDP port 53), streaming media applications such
as IPTV, Voice over IP (VoIP), Trivial File Transfer Protocol (TFTP) and online
multiplayer games, etc.
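The "fire and forget", one-datagram-per-read behaviour of UDP is easy to observe with a socket bound to the loopback interface that sends a packet to itself. A POSIX-sockets sketch with error handling kept minimal (the function name is invented):

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Send one UDP datagram to ourselves over loopback and read it back.
 * Returns the number of bytes received (the whole datagram), or -1. */
int udp_echo_self(const char *msg, char *out, int outlen) {
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in a;
    memset(&a, 0, sizeof a);
    a.sin_family = AF_INET;
    a.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    a.sin_port = 0;                      /* let the OS pick a free port */
    if (s < 0 || bind(s, (struct sockaddr *)&a, sizeof a) != 0)
        return -1;
    socklen_t alen = sizeof a;
    getsockname(s, (struct sockaddr *)&a, &alen);  /* learn our port */
    /* No connection setup: just send the datagram... */
    sendto(s, msg, strlen(msg), 0, (struct sockaddr *)&a, sizeof a);
    /* ...and one recvfrom() returns exactly one whole datagram. */
    int n = (int)recvfrom(s, out, (size_t)outlen - 1, 0, NULL, NULL);
    if (n < 0) n = 0;
    out[n] = '\0';
    close(s);
    return n;
}
```

Note the contrast with TCP: there is no connect/accept handshake, and message boundaries are preserved by the datagram itself rather than by the application.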






The simplest way to examine the advantages and disadvantages of RISC architecture is
by contrasting it with its predecessor: CISC (Complex Instruction Set Computer)
architecture.
Multiplying Two Numbers in Memory
Consider the storage scheme for a generic
computer. The main memory is divided into
locations numbered from (row) 1: (column) 1
to (row) 6: (column) 4. The execution unit is
responsible for carrying out all computations.
However, the execution unit can only operate
on data that has been loaded into one of the
six registers (A, B, C, D, E, or F). Let's say
we want to find the product of two numbers -
one stored in location 2:3 and another stored
in location 5:2 - and then store the product
back in the location 2:3.
The CISC Approach
The primary goal of CISC architecture is to
complete a task in as few lines of assembly
as possible. This is achieved by building
processor hardware that is capable of
understanding and executing a series of
operations. For this particular task, a CISC
processor would come prepared with a specific instruction (we'll call it "MULT"). When
executed, this instruction loads the two values into separate registers, multiplies the
operands in the execution unit, and then stores the product in the appropriate register.
Thus, the entire task of multiplying two numbers can be completed with one
instruction:
MULT 2:3, 5:2
MULT is what is known as a "complex instruction." It operates directly on the
computer's memory banks and does not require the programmer to explicitly call any
loading or storing functions. It closely resembles a command in a higher level language.
For instance, if we let "a" represent the value of 2:3 and "b" represent the value of 5:2,
then this command is identical to the C statement "a = a * b."
One of the primary advantages of this system is that the compiler has to do very little
work to translate a high-level language statement into assembly. Because the length of
the code is relatively short, very little RAM is required to store instructions. The
emphasis is put on building complex instructions directly into the hardware.
The RISC Approach
RISC processors only use simple instructions that can be executed within one clock
cycle. Thus, the "MULT" command described above could be divided into three separate
commands: "LOAD," which moves data from the memory bank to a register, "PROD,"
which finds the product of two operands located within the registers, and "STORE,"
which moves data from a register to the memory banks. In order to perform the exact
series of steps described in the CISC approach, a programmer would need to code four
lines of assembly:
LOAD A, 2:3
LOAD B, 5:2
PROD A, B
STORE 2:3, A
At first, this may seem like a much less efficient way of completing the operation.
Because there are more lines of code, more RAM is needed to store the assembly level
instructions. The compiler must also perform more work to convert a high-level
language statement into code of this form.
However, the RISC strategy
also brings some very
important advantages.
Because each instruction
requires only one clock cycle
to execute, the entire
program will execute in
approximately the same
amount of time as the multi-
cycle "MULT" command.
These RISC "reduced
instructions" require fewer
transistors of hardware space
than the complex instructions, leaving more room for general-purpose registers.
Because all of the instructions execute in a uniform amount of time (i.e. one clock),
pipelining is possible.
Separating the "LOAD" and "STORE" instructions actually reduces the amount of work
that the computer must perform. After a CISC-style "MULT" command is executed, the
processor automatically erases the registers. If one of the operands needs to be used
for another computation, the processor must re-load the data from the memory bank
into a register. In RISC, the operand will remain in the register until another value is
loaded in its place.
The Performance Equation
The following equation is commonly used for expressing a computer's performance
ability:

    time/program = (instructions/program) x (cycles/instruction) x (time/cycle)
The CISC approach attempts to minimize the number of instructions per program,
sacrificing the number of cycles per instruction. RISC does the opposite, reducing the
cycles per instruction at the cost of the number of instructions per program.
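The trade-off can be checked numerically against the performance equation; the figures below are invented purely for illustration. A program compiled to fewer, slower instructions (CISC-style) can take exactly as long as one compiled to more single-cycle instructions (RISC-style):

```c
/* time/program = (instructions/program) x (cycles/instruction) x (time/cycle);
 * time/cycle is the reciprocal of the clock rate, so we divide by it. */
double exec_time(double instructions, double cpi, double clock_hz) {
    return instructions * cpi / clock_hz;
}
```

With a 100 MHz clock, 1 million instructions at 5 cycles each and 5 million single-cycle instructions both take 50 ms — which term each camp optimises is the whole argument.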
RISC Roadblocks
Despite the advantages of RISC
based processing, RISC chips took
over a decade to gain a foothold in
the commercial world. This was
largely due to a lack of software
support.
Although Apple's Power Macintosh
line featured RISC-based chips and
Windows NT was RISC compatible,
Windows 3.1 and Windows 95 were designed with CISC processors in mind. Many
companies were unwilling to take a chance with the emerging RISC technology. Without
commercial interest, processor developers were unable to manufacture RISC chips in
large enough volumes to make their price competitive.

CISC:
Emphasis on hardware
Includes multi-clock complex instructions
Memory-to-memory: "LOAD" and "STORE" incorporated in instructions
Small code sizes, high cycles per second
Transistors used for storing complex instructions

RISC:
Emphasis on software
Single-clock, reduced instructions only
Register-to-register: "LOAD" and "STORE" are independent instructions
Large code sizes, low cycles per second
Spends more transistors on memory registers
Another major setback was the presence of Intel. Although their CISC chips were
becoming increasingly unwieldy and difficult to develop, Intel had the resources to plow
through development and produce powerful processors. Although RISC chips might
surpass Intel's efforts in specific areas, the differences were not great enough to
persuade buyers to change technologies.
The Overall RISC Advantage
Today, the Intel x86 is arguably the only chip which retains CISC architecture. This is
primarily due to advancements in other areas of computer technology. The price of
RAM has decreased dramatically. In 1977, 1MB of DRAM cost about $5,000. By 1994,
the same amount of memory cost only $6 (when adjusted for inflation). Compiler
technology has also become more sophisticated, so that the RISC use of RAM and
emphasis on software has become ideal.
