Monday, February 16, 2009

IDCe

Definition

The World Wide Web's current implementation is designed predominantly for information retrieval and display in a human-readable form. Its data formats and protocols are neither intended nor suitable for machine-to-machine interaction without humans in the loop. Emergent Internet uses - including peer-to-peer and grid computing - provide both a glimpse of and an impetus for evolving the Internet into a distributed computing platform.

What would be needed to make the Internet into an application-hosting platform? This would be a networked, distributed counterpart of the hosting environment that a traditional operating system provides to applications on a single node. Creating this platform requires adding a functional layer to the Internet that can allocate and manage the resources necessary for application execution.

Given such a hosting environment, software designers could create network applications without having to know at design time the type or number of nodes the application will execute on. With proper support, the system could allocate and bind software components to the resources they require at runtime, based on resource requirements, availability, connectivity and system state at the actual time of execution. In contrast, early bindings tend to result in static allocations that cannot adapt well to variations in resources, load and availability, so the software components tend to be less efficient and have difficulty recovering from failures.

The foundation of the proposed approach is to disaggregate and virtualize system resources as services that can be described, discovered and dynamically configured at runtime to execute an application. Such a system can be built as a combination and extension of Web services, peer-to-peer computing, and grid computing standards and technologies. It thus follows the successful Internet model of adding minimal and relatively simple functional layers to meet requirements while building atop already available technologies.

This approach does not, however, advocate an "Internet OS" that would provide some form of uniform or centralized global resource management. Several theoretical and practical reasons make such an approach undesirable, including its inability to scale and the need to provide and manage supporting software on every participating platform. Instead, we advocate a mechanism that supports spontaneous, dynamic, and voluntary collaboration among entities and their contributed resources.


FRAM

Definition

Until the 1970s, ferromagnetic cores were the only type of random-access, nonvolatile memory available. A core memory is a regular array of tiny magnetic cores that can be magnetized in one of two opposite directions, making it possible to store binary data in the form of a magnetic field. The success of the core memory was due to a simple architecture that resulted in a relatively dense array of cells. This approach was emulated in the semiconductor memories of today (DRAMs, EEPROMs, and FRAMs).

Ferromagnetic cores, however, were too bulky and expensive compared to the smaller, low-power semiconductor memories. Ferroelectric memories are a good substitute for ferromagnetic cores. The term "ferroelectric" indicates the similarity to ferromagnetism, despite the lack of iron in the materials themselves.

Ferroelectric memories exhibit short programming times, low power consumption and nonvolatile storage, making them highly suitable for applications such as contactless smart cards and digital cameras, which demand many memory write operations. In other words, FRAM combines features of both RAM and ROM. A ferroelectric memory technology consists of a complementary metal-oxide-semiconductor (CMOS) technology with added layers on top for the ferroelectric capacitors.

A ferroelectric memory cell has at least one ferroelectric capacitor to store the binary data, and one or two transistors that provide access to the capacitor or amplify its content for a read operation. A ferroelectric capacitor differs from a regular capacitor in that the dielectric is replaced by a ferroelectric material (lead zirconate titanate, PZT, is commonly used). When an electric field is applied, the charges displace from their original positions, spontaneous polarization occurs, and the displacement becomes evident in the crystal structure of the material.

Importantly, the displacement does not disappear when the electric field is removed. Moreover, the direction of polarization can be reversed or reoriented by applying an appropriate electric field. A hysteresis loop for a ferroelectric capacitor displays the total charge on the capacitor as a function of the applied voltage. It behaves much like that of a magnetic core, except that the transitions around its coercive points are not as sharp, which implies that even a moderate voltage can disturb the state of the capacitor.

One remedy for this is to modify the ferroelectric memory cell by including a transistor in series with the ferroelectric capacitor. Called an access transistor, it controls access to the capacitor and eliminates the need for a square-like hysteresis loop, compensating for the softness of the hysteresis characteristics and blocking unwanted disturb signals from neighboring memory cells.
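To make the loop behaviour described above concrete, the toy C sketch below traces the two branches of an idealized hysteresis loop using a smooth tanh shape. It is only an illustration; the saturation charge, coercive voltage and slope parameter are made-up numbers, not device parameters from this text.

/* Toy sketch of a ferroelectric hysteresis loop (illustrative only;
 * qs, vc and v0 are assumed values in arbitrary units). */
#include <math.h>
#include <stdio.h>

/* Charge on one branch of the loop: dir = +1 sweeping up, -1 sweeping down. */
static double branch_charge(double v, int dir, double qs, double vc, double v0)
{
    return qs * tanh((v - dir * vc) / v0);
}

int main(void)
{
    const double qs = 1.0, vc = 1.2, v0 = 0.5;   /* assumed parameters */
    for (double v = -5.0; v <= 5.0; v += 1.0) {
        printf("V=%+.1f  up-branch Q=%+.3f  down-branch Q=%+.3f\n",
               v, branch_charge(v, +1, qs, vc, v0),
               branch_charge(v, -1, qs, vc, v0));
    }
    /* At V = 0 the two branches give opposite remanent charges; that stored
     * difference is the nonvolatile bit.  Because tanh is smooth rather than
     * square, a moderate voltage already moves the charge, which is the
     * "disturb" problem the access transistor is meant to block. */
    return 0;
}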



Optical Switching

Definition

Explosive growth in information demand in the Internet world is creating enormous needs for capacity expansion in next-generation telecommunication networks. Data-oriented network traffic is expected to double every year.

Optical networks are widely regarded as the ultimate solution to the bandwidth needs of future communication systems. Optical fiber links deployed between nodes are capable of carrying terabits of information, but the electronic switching at the nodes limits the bandwidth of the network. Optical switches at the nodes will overcome this limitation. With their improved efficiency and lower costs, optical switches provide the key both to managing the new capacity of Dense Wavelength Division Multiplexing (DWDM) links and to gaining a competitive advantage in provisioning new bandwidth-hungry services. However, in an optically switched network the challenge lies in overcoming signal impairments and network-related parameters. Let us discuss the present status, advantages, challenges and future trends of optical switches.


A fiber consists of a glass core and a surrounding layer called the cladding. The core and cladding have carefully chosen indices of refraction to ensure that the photons propagating in the core are always reflected at the interface with the cladding. The only way the light can enter and escape is through the ends of the fiber. A transmitter, either a light-emitting diode or a laser, sends electronic data that have been converted to photons over the fiber at a wavelength of between 1,200 and 1,600 nanometers.

Today's fibers are pure enough that a light signal can travel for about 80 kilometers without the need for amplification. But at some point the signal still needs to be boosted. The electronics used to amplify the signal were replaced by stretches of fiber infused with ions of the rare-earth element erbium. When these erbium-doped fibers are zapped by a pump laser, the excited ions can revive a fading signal. They restore a signal without any optical-to-electronic conversion and can do so for very high-speed signals carrying tens of gigabits per second. Most importantly, they can boost the power of many wavelengths simultaneously.

Now, to increase the information rate, as many wavelengths as possible are jammed down a fiber, with each wavelength carrying as much data as possible. The technology that does this has a name - dense wavelength division multiplexing (DWDM) - that is a paragon of technospeak. Switches are needed to route the digital flow to its ultimate destination. These enormous bit conduits will flounder if the light streams are routed using conventional electronic switches, which require a multi-terabit signal to be converted into hundreds of lower-speed electronic signals. Finally, the switched signals would have to be reconverted to photons and reaggregated into light channels that are then sent out through a designated output fiber.
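As a rough illustration of the aggregate rates involved, the small C sketch below multiplies an assumed channel count by an assumed per-wavelength rate; both numbers are illustrative, not figures from the text.

/* Back-of-the-envelope DWDM capacity; channel count and per-channel
 * rate are assumed for illustration. */
#include <stdio.h>

int main(void)
{
    const int    channels          = 80;    /* assumed number of wavelengths */
    const double gbits_per_channel = 10.0;  /* assumed per-wavelength rate   */

    double total_gbits = channels * gbits_per_channel;
    printf("Aggregate capacity: %.0f Gbit/s (%.1f Tbit/s)\n",
           total_gbits, total_gbits / 1000.0);
    /* A conventional electronic switch would have to demultiplex this into
     * hundreds of lower-speed streams, which is the bottleneck described
     * above. */
    return 0;
}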

Swarm Intelligence & Traffic Safety

Definition

An automotive controller that complements the driving experience must work to avoid collisions, enforce a smooth trajectory, and deliver the vehicle to the intended destination as quickly as possible. Unfortunately, satisfying
these requirements with traditional methods proves intractable at best and forces us to consider biologically-inspired techniques like Swarm Intelligence.

A controller is currently being designed in a robot simulation program with the goal of implementing the system in real hardware to investigate these biologically-inspired techniques and to validate the results. In this paper I present an idea that can be implemented in traffic safety by the application of Robotics & Computer Vision through Swarm Intelligence.
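As a flavour of what a swarm-style vehicle controller can look like, the C sketch below applies the classic separation, alignment and cohesion rules often used in swarm robotics. The Vehicle structure, the weights and the steer() helper are illustrative assumptions, not the controller being designed in the paper.

/* Minimal sketch of separation/alignment/cohesion steering rules. */
#include <stddef.h>
#include <stdio.h>

typedef struct { double x, y, vx, vy; } Vehicle;

/* Compute a steering acceleration for `self` given its neighbours. */
static void steer(const Vehicle *self, const Vehicle *nbr, size_t n,
                  double *ax, double *ay)
{
    const double w_sep = 1.5, w_ali = 1.0, w_coh = 0.8;  /* assumed weights */
    double sep_x = 0, sep_y = 0, ali_x = 0, ali_y = 0, coh_x = 0, coh_y = 0;

    for (size_t i = 0; i < n; i++) {
        double dx = self->x - nbr[i].x, dy = self->y - nbr[i].y;
        double d2 = dx * dx + dy * dy + 1e-9;
        sep_x += dx / d2;           /* separation: move away, avoid collision */
        sep_y += dy / d2;
        ali_x += nbr[i].vx;         /* alignment: match neighbours' heading   */
        ali_y += nbr[i].vy;
        coh_x += nbr[i].x;          /* cohesion: stay with the group          */
        coh_y += nbr[i].y;
    }
    if (n > 0) {
        ali_x /= n; ali_y /= n;
        coh_x = coh_x / n - self->x;
        coh_y = coh_y / n - self->y;
    }
    *ax = w_sep * sep_x + w_ali * (ali_x - self->vx) + w_coh * coh_x;
    *ay = w_sep * sep_y + w_ali * (ali_y - self->vy) + w_coh * coh_y;
}

int main(void)
{
    Vehicle self = { 0.0, 0.0, 1.0, 0.0 };
    Vehicle nbrs[2] = { { 2.0, 1.0, 1.0, 0.2 }, { -1.0, -2.0, 0.8, 0.0 } };
    double ax, ay;
    steer(&self, nbrs, 2, &ax, &ay);
    printf("steering acceleration: (%.2f, %.2f)\n", ax, ay);
    return 0;
}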

We stand today at the culmination of the industrial revolution. For the last four centuries, rapid advances in science have fueled industrial society. In the twentieth century, industrialization found perhaps its greatest expression
in Henry Ford's assembly line. Mass production affects almost every facet of modern life. Our food is mass-produced in meat plants, commercial bakeries, and canneries.

Our clothing is shipped by the ton from factories in China and Taiwan. Certainly all the amenities of our lives - our stereos, TVs, and microwave ovens - roll off assembly lines by the truckload. Today we are presented with another solution, one that will hopefully fare better than its predecessors. It goes by the name of post-industrialism and is commonly associated with computer technology, robots and Artificial Intelligence.

Robots today are where computers were 25 years ago: huge, hulking machines that sit on factory floors, consume massive resources and can only be afforded by large corporations and governments. Computers changed with the PC revolution of the 1980s, when they came out of the basements and landed on desktops. In the same way, we are on the verge of a "PR" revolution today - a Personal Robotics revolution - which will bring robots off the factory floor and put them in our homes, on our desktops and inside our vehicles.

Terrestrial Trunked Radio (TETRA)

Definition

The TErrestrial Trunked RAdio (TETRA) standard was designed to meet common requirements and objectives of the PMR and PAMR markets alike. One of the last strongholds of analog technology in a digital world has been the area of trunked mobile radio. Although digital cellular technology has made great strides with broad support from a relatively large number of manufacturers, digital trunked mobile radio systems for the Private Mobile Radio (PMR) and Public Access Mobile Radio (PAMR) markets have lagged behind. Few manufacturers currently offer digital systems, all of which are based on proprietary technology. However, the transition to digital is gaining momentum with the emergence of an open standard: TETRA.

TETRA is a digital PMR standard developed by ETSI. It is an open standard that offers interoperability of equipment and networks from different manufacturers, and it is a potential replacement for analog and proprietary digital systems. The standard originated in 1989 as the Mobile Digital Trunked Radio System (MDTRS), was later renamed Trans European Trunked Radio, and has been called TETRA since 1997.

TErrestrial Trunked RAdio (TETRA) is the agreed standard for a new generation of digital land mobile radio communications designed to meet the needs of the most demanding Professional Mobile Radio (PMR) and Public Access Mobile Radio (PAMR) users. TETRA is the only existing digital PMR standard defined by the European Telecommunications Standards Institute (ETSI).

Among the standard's many features are voice and extensive data communication services. Networks based on the TETRA standard will provide cost-effective, spectrum-efficient and secure communications with advanced capabilities for the mobile and fixed elements of companies and organizations.

As a standard, TETRA should be regarded as complementary to GSM and DECT. In comparison with GSM as currently implemented, TETRA provides faster call set-up, higher data rates, group calls and direct mode. TETRA manufacturers have been developing their products for ten years, and these investments have resulted in highly sophisticated products. A number of important orders have already been placed. According to estimates, TETRA-based networks will have 5-10 million users by the year 2010.

HVAC

Definition

Wireless transmission of electromagnetic radiation (communication signals) has become a popular method of transmitting RF signals indoors, including cordless, wireless and cellular telephone signals, pager signals, two-way radio signals, video conferencing signals and LAN signals.

Indoor wireless transmission has the advantage that the building in which transmission takes place does not have to be filled with wires or cables equipped to carry a multitude of signals. Wires and cables are costly to install and may require expensive upgrades when their capacity is exceeded or when new technologies require different types of wires and cables than those already installed.

Traditional indoor wireless communication systems transmit and receive signals through a network of transmitters, receivers and antennas placed throughout the interior of a building. These devices must be located so that signals are not lost and signal strength is not excessively attenuated; moreover, a change in the existing architecture also affects the wireless transmission. Another challenge in installing wireless networks in buildings is the need to predict RF propagation and coverage in the presence of complex combinations of shapes and materials in the buildings.

In general, the attenuation in buildings is larger than that in free space, requiring more cells and higher power to obtain wider coverage. Despite all this, placement of transmitters, receivers and antennas in an indoor environment is largely a process of trial and error. Hence there is a need for a method and a system for efficiently transmitting RF and microwave signals indoors without having to install an extensive system of wires and cables inside the building.

This paper suggests an alternative method of distributing electromagnetic signals in buildings, based on the recognition that every building is already equipped with an RF waveguide distribution system: the HVAC ducts. The use of HVAC ducts is also amenable to a systematic design procedure, and it should be significantly less expensive than other approaches since existing infrastructure is used and RF is distributed more efficiently.
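To see why a metal duct can behave as a waveguide for typical indoor RF signals, the short C sketch below evaluates the standard cutoff frequency of the dominant TE10 mode of a rectangular guide. The 30 cm duct width is an assumed, typical dimension, not a value from the text.

/* Cutoff frequency of the dominant TE10 mode of a rectangular duct. */
#include <stdio.h>

int main(void)
{
    const double c = 2.998e8;   /* speed of light, m/s          */
    const double a = 0.30;      /* assumed duct width, m        */

    double fc = c / (2.0 * a);  /* TE10 cutoff: fc = c / (2a)   */
    printf("TE10 cutoff ~ %.2f GHz\n", fc / 1e9);
    /* Signals above the cutoff (e.g. 2.4 GHz WLAN) can propagate along the
     * duct; signals below it are strongly attenuated. */
    return 0;
}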

Cellular through remote control switch

Definition

Cellular through remote control switch implies control of devices at a remote location via a circuit interfaced to the remote telephone line/device, by dialing specific DTMF (dual-tone multi-frequency) digits from a local telephone. This project has the following features:

1. It can control multiple loads (on/off/status for each load).
2. It provides feedback when the circuit is in the energized state and also sends an acknowledgement indicating the action taken with respect to switching on each load and switching off all loads (together).

It can selectively switch on any one or more loads one after the other, and switch off all loads simultaneously.

OPERATION

1. Dial the phone number - an OK tone is produced
2. Password - 4321
3. Load number - 1, 2, 3, 4
4. Control number - 9/on, 0/off, #/status

When the phone number is dialed, the ring detector senses the ring and the auto-lifter operates after some time. When the auto-lifter operates, an OK tone is produced. Then the password is entered. To check the status of a load, enter # and the load number. To switch a load on, enter 9 and the load number. To switch a load off, enter 0 and the load number. The whole operation must be completed within 3 minutes; after 3 minutes the operation times out.
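A minimal sketch of the digit handling described above is given below in C: a 4-digit password followed by commands of the form <control digit><load digit>. The relay-driving helpers and the test dialing string are placeholders; a real implementation would sit behind a DTMF decoder chip.

/* Sketch of the DTMF command handling: password, then <control><load>. */
#include <stdbool.h>
#include <stdio.h>

#define NUM_LOADS 4
static bool load_on[NUM_LOADS];

static void set_load(int load, bool on) { load_on[load - 1] = on; }   /* placeholder for relay drive */
static bool get_load(int load)          { return load_on[load - 1]; }

/* Feed one received DTMF digit at a time. */
void handle_digit(char d)
{
    static const char password[] = "4321";   /* as listed in the operation steps */
    static int  pw_pos = 0;
    static bool authenticated = false;
    static char pending_control = 0;

    if (!authenticated) {                    /* collect the password   */
        pw_pos = (d == password[pw_pos]) ? pw_pos + 1 : 0;
        if (password[pw_pos] == '\0') { authenticated = true; pw_pos = 0; }
        return;
    }
    if (pending_control == 0) {              /* expect 9, 0 or #       */
        if (d == '9' || d == '0' || d == '#') pending_control = d;
        return;
    }
    if (d >= '1' && d <= '0' + NUM_LOADS) {  /* expect the load number */
        if (pending_control == '9')      set_load(d - '0', true);
        else if (pending_control == '0') set_load(d - '0', false);
        else printf("load %c is %s\n", d, get_load(d - '0') ? "ON" : "OFF");
    }
    pending_control = 0;
}

int main(void)
{
    /* password, then "switch load 1 on", then "query load 1" */
    const char *dialled = "4321" "91" "#1";
    for (const char *p = dialled; *p; p++) handle_digit(*p);
    return 0;
}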

Asynchronous Chips

Definition

Computer chips of today are synchronous: they contain a main clock which controls the timing of the entire chip. There are problems, however, with these clocked designs. One problem is speed. A chip can only work as fast as its slowest component; therefore, if one part of the chip is especially slow, the other parts are forced to sit idle. This wasted computation time is obviously detrimental to the speed of the chip.

New problems with speeding up a clocked chip are just around the corner. Clock frequencies are getting so high that signals can barely cross the chip in one clock cycle. When we get to the point where the clock cannot drive the entire chip, we will be forced to come up with a solution. One possible solution is a second clock, but this incurs overhead and power consumption, so it is a poor solution. It is also important to note that doubling the frequency of the clock does not double the chip speed, so blindly trying to increase chip speed by increasing frequency without considering other options is foolish.

The other major problem with a clocked design is power consumption. The clock consumes more power than any other component of the chip. The most disturbing thing about this is that the clock serves no direct computational use: a clock does not perform operations on information; it simply orchestrates the computational parts of the chip.

New problems with power consumption are also arising. As the number of transistors on a chip increases, so does the power used by the clock. Therefore, as we design more complicated chips, power consumption becomes an even more crucial topic. Mobile electronics are the target for many chips.

These chips need to be even more conservative with power consumption in order to have a reasonable battery lifetime. The natural solution to the above problems, as you may have guessed, is to eliminate the source of these headaches: the clock.

Quantum Information Technology

Definition

The subject of quantum computing brings together ideas from classical information theory, computer science, and quantum physics. This document aims to summarize not just quantum computing, but the whole subject of quantum information theory. It turns out that information theory and quantum mechanics fit together very well. In order to explain their relationship, the paper begins with an introduction to classical information theory; the principles of quantum mechanics are then outlined.

The EPR-Bell correlation, and quantum entanglement in general, form the essential new ingredient which distinguishes quantum from classical information theory and, arguably, quantum from classical physics. Basic quantum information ideas are described, including key distribution, teleportation, the universal quantum computer and quantum algorithms. The common theme of all these ideas is the use of quantum entanglement as a computational resource.

Experimental methods for small quantum processors are briefly sketched, concentrating on ion traps, superconducting cavities, nuclear magnetic resonance (NMR) based techniques, and quantum dots. "Where a calculator on the ENIAC is equipped with 18,000 vacuum tubes and weighs 30 tons, computers in the future may have only 1,000 tubes and weigh only 1 1/2 tons." - Popular Mechanics, March 1949.


Now, if this seems like a joke, wait a second. "Tomorrow's computer might well resemble a jug of water."
This, for sure, is no joke. Quantum computing is here. What was science fiction two decades back is a reality today and is the future of computing. The history of computer technology has involved a sequence of changes from one type of physical realization to another - from gears to relays to valves to transistors to integrated circuits and so on. Quantum computing is the next logical advancement.

Today's advanced lithographic techniques can squeeze logic gates and wires a fraction of a micron wide onto the surface of silicon chips. Soon they will yield even smaller parts and inevitably reach a point where logic gates are so small that they are made out of only a handful of atoms. On the atomic scale, matter obeys the rules of quantum mechanics, which are quite different from the classical rules that determine the properties of conventional logic gates. So if computers are to become smaller in the future, new, quantum technology must replace or supplement what we have now.

Quantum technology can offer much more than cramming more and more bits onto silicon and multiplying the clock speed of microprocessors. It can support an entirely new kind of computation, with qualitatively new algorithms based on quantum principles!

Smart card

Definition

This seminar gives some basic concepts about smart cards. The physical and logical structure of the smart card and the corresponding security access control are discussed. It is believed that smart cards offer more security and confidentiality than other kinds of information or transaction storage. Moreover, applications built with smart card technologies are illustrated, demonstrating that the smart card is one of the best solutions for providing and enhancing systems with security and integrity.

The seminar also covers contactless smart cards briefly. Different schemes to organise and access multiple-application smart cards are discussed. The first and second schemes are practical and workable today, and real applications have been developed using those models. For the third one, multiple independent applications in a single card, there is still a long way to go to make it feasible, for several reasons.

At the end of the paper, an overview of attack techniques on the smart card is discussed as well. The existence of these attacks does not mean that the smart card is insecure; it is important to realise that attacks against any secure system are nothing new or unique. Any system or technology claiming to be 100% secure is being irresponsible. The main consideration in determining whether a system is secure or not is whether its level of security can meet the requirements of the system.

The smart card is one of the latest additions to the world of information technology. Similar in size to today's plastic payment card, the smart card has a microprocessor or memory chip embedded in it that, when coupled with a reader, has the processing power to serve many different applications. As an access-control device, smart cards make personal and business data available only to the appropriate users. Another application provides users with the ability to make a purchase or exchange value. Smart cards provide data portability, security and convenience. Smart cards come in two varieties: memory and microprocessor.

Memory cards simply store data and can be viewed as small floppy disks with optional security. A microprocessor card, on the other hand, can add, delete and manipulate information in its memory. Similar to a miniature computer, a microprocessor card has an input/output port, an operating system and a hard disk with built-in security features. On a fundamental level, microprocessor cards are similar to desktop computers: they have operating systems, they store data and applications, they compute and process information, and they can be protected with sophisticated security tools. The self-containment of the smart card makes it resistant to attack, as it does not need to depend upon potentially vulnerable external resources. Because of this characteristic, smart cards are often used in applications that require strong security protection and authentication.
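To give a feel for how a reader talks to a microprocessor card, the C sketch below lays out an ISO 7816-4 style command APDU. The SELECT example bytes are a commonly used pattern shown only as an assumption, not a requirement of any particular card.

/* Illustrative layout of a command APDU sent from a reader to a card. */
#include <stdint.h>

typedef struct {
    uint8_t cla;            /* instruction class                   */
    uint8_t ins;            /* instruction code                    */
    uint8_t p1, p2;         /* instruction parameters              */
    uint8_t lc;             /* length of the command data field    */
    const uint8_t *data;    /* command data (e.g. a file ID)       */
    uint8_t le;             /* maximum length of expected response */
} apdu_t;

/* Example: SELECT FILE by identifier (values assumed for illustration). */
static const uint8_t file_id[] = { 0x3F, 0x00 };   /* master file */
static const apdu_t select_mf = {
    .cla = 0x00, .ins = 0xA4, .p1 = 0x00, .p2 = 0x00,
    .lc = sizeof file_id, .data = file_id, .le = 0x00
};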

LWIP

Introduction


Over the last few years, interest in connecting computers and computer-supported devices to wireless networks has steadily increased. Computers are becoming more and more seamlessly integrated with everyday equipment and prices are dropping. At the same time wireless networking technologies, such as Bluetooth and IEEE 802.11b WLAN, are emerging. This gives rise to many new fascinating scenarios in areas such as health care, safety and security, transportation, and the processing industry. Small devices such as sensors can be connected to an existing network infrastructure such as the global Internet, and monitored from anywhere.

The Internet technology has proven itself flexible enough to incorporate the changing network environments of the past few decades. While originally developed for low-speed networks such as the ARPANET, the Internet technology today runs over a large spectrum of link technologies with vastly different characteristics in terms of bandwidth and bit error rate. It is highly advantageous to use the existing Internet technology in the wireless networks of tomorrow, since a large number of applications using the Internet technology have already been developed. Also, the large connectivity of the global Internet is a strong incentive.

Since small devices such as sensors are often required to be physically small and inexpensive, an implementation of the Internet protocols will have to deal with having limited computing resources and memory. This report describes the design and implementation of a small TCP/IP stack called lwIP that is small enough to be used in minimal systems.

Overview

As in many other TCP/IP implementations, the layered protocol design has served as a guide for the design of the implementation of lwIP. Each protocol is implemented as its own module, with a few functions acting as entry points into each protocol. Even though the protocols are implemented separately, some layer violations are made, as discussed above, in order to improve performance both in terms of processing speed and memory usage. For example, when verifying the checksum of an incoming TCP segment and when demultiplexing a segment, the source and destination IP addresses of the segment have to be known by the TCP module. Instead of passing these addresses to TCP by means of a function call, the TCP module is aware of the structure of the IP header, and can therefore extract this information by itself.
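The sketch below illustrates that kind of layer violation in C: a TCP-level routine reads the source and destination addresses straight out of an IP-header structure instead of having them passed in. The structure and function names are illustrative; they are not lwIP's actual definitions.

/* Illustrative only: a simplified IPv4 header and a TCP-level check that
 * peeks into it directly while demultiplexing a segment. */
#include <stdbool.h>
#include <stdint.h>

struct ip_hdr {                 /* simplified IPv4 header */
    uint8_t  ver_ihl;
    uint8_t  tos;
    uint16_t total_len;
    uint16_t id;
    uint16_t flags_offset;
    uint8_t  ttl;
    uint8_t  proto;
    uint16_t checksum;
    uint32_t src;               /* source address      */
    uint32_t dst;               /* destination address */
};

/* The TCP module reads the addresses from the IP header itself rather
 * than receiving them through a function call. */
bool tcp_matches_connection(const struct ip_hdr *ip,
                            uint32_t local_addr, uint32_t remote_addr)
{
    return ip->dst == local_addr && ip->src == remote_addr;
}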

lwIP consists of several modules. Apart from the modules implementing the TCP/IP protocols (IP, ICMP, UDP, and TCP) a number of support modules are implemented.
The support modules consist of:

- The operating system emulation layer (described in Chapter 3)

- The buffer and memory management subsystems (described in Chapter 4)

- Network interface functions (described in Chapter 5)

- Functions for computing the Internet checksum (Chapter 6)

- An abstract API (described in Chapter 8)

Iris Scanning

Introduction


In today's information age it is not difficult to collect data about an individual and use that information to exercise control over the individual. Individuals generally do not want others to have personal information about them unless they decide to reveal it. With the rapid development of technology, it is more difficult to maintain the levels of privacy citizens knew in the past. In this context, data security has become an inevitable feature. Conventional methods of identification based on possession of ID cards or exclusive knowledge like a social security number or a password are not altogether reliable. ID cards can be lost, forged or misplaced; passwords can be forgotten.

As a result, an unauthorized user may be able to break into an account with little effort, so it is necessary to ensure denial of access to classified data by unauthorized persons. Biometric technology has now become a viable alternative to traditional identification systems because of its accuracy and speed. A biometric system automatically verifies or recognizes the identity of a living person based on physiological or behavioral characteristics.

Since the persons to be identified must be physically present at the point of identification, biometric techniques give high security for sensitive information stored in mainframes and help avoid fraudulent use of ATMs. This paper explores the concept of iris recognition, which is one of the most popular biometric techniques. This technology finds applications in diverse fields.

Biometrics - Future Of Identity
Biometrics dates back to the ancient Egyptians, who measured people to identify them. Biometric devices have three primary components:
1. An automated mechanism that scans and captures a digital or analog image of a living personal characteristic.
2. Compression, processing, storage and comparison of the image with stored data.
3. Interfaces with application systems.


A biometric system can be divided into two stages: the enrolment module and the identification module. The enrolment module is responsible for training the system to identify a given person. During the enrolment stage, a biometric sensor scans the person's physiognomy to create a digital representation. A feature extractor processes the representation to generate a more compact and expressive representation called a template. For an iris image, these features include the various visible characteristics of the iris such as contraction furrows, pits and rings. The template for each user is stored in a biometric system database.

The identification module is responsible for recognizing the person. During the identification stage, the biometric sensor captures the characteristics of the person to be identified and converts them into the same digital format as the template. The resulting template is fed to the feature matcher, which compares it against the stored template to determine whether the two templates match.

The identification can be in the form of verification, authenticating a claimed identity or recognition, determining the identity of a person from a database of known persons. In a verification system, when the captured characteristic and the stored template of the claimed identity are the same, the system concludes that the claimed identity is correct. In a recognition system, when the captured characteristic and one of the stored templates are the same, the system identifies the person with matching template.
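One common way of implementing the feature matcher for iris templates is a normalised Hamming distance over bit codes. The C sketch below shows that idea; the template size and decision threshold are assumptions for illustration, not values from this paper.

/* Illustrative template matcher based on a normalised Hamming distance. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define TEMPLATE_BYTES 256                  /* assumed 2048-bit template */

static unsigned popcount8(uint8_t b)
{
    unsigned n = 0;
    while (b) { n += b & 1u; b >>= 1; }
    return n;
}

/* Fraction of bits that differ between a live template and a stored one. */
double hamming_distance(const uint8_t live[TEMPLATE_BYTES],
                        const uint8_t stored[TEMPLATE_BYTES])
{
    unsigned diff = 0;
    for (size_t i = 0; i < TEMPLATE_BYTES; i++)
        diff += popcount8(live[i] ^ stored[i]);
    return (double)diff / (TEMPLATE_BYTES * 8);
}

/* Verification: accept the claimed identity if the templates are close. */
bool verify(const uint8_t live[TEMPLATE_BYTES],
            const uint8_t stored[TEMPLATE_BYTES])
{
    const double threshold = 0.32;          /* assumed decision threshold */
    return hamming_distance(live, stored) < threshold;
}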

Mobile IP

Introduction


While Internet technologies largely succeed in overcoming the barriers of time and distance, existing Internet technologies have yet to fully accommodate the increasing mobile computer usage. A promising technology used to eliminate this current barrier is Mobile IP. The emerging 3G mobile networks are set to make a huge difference to the international business community. 3G networks will provide sufficient bandwidth to run most of the business computer applications while still providing a reasonable user experience.

However, 3G networks are not based on only one standard, but on a set of radio technology standards such as cdma2000, EDGE and WCDMA. It is easy to foresee that the mobile user from time to time would also like to connect to fixed broadband networks, wireless LANs and mixtures of new technologies such as Bluetooth associated with, for example, cable TV and DSL access points.

In this light, a common macro mobility management framework is required in order to allow mobile users to roam between different access networks with little or no manual intervention. (Micro mobility issues such as radio specific mobility enhancements are supposed to be handled within the specific radio technology.) IETF has created the Mobile IP standard for this purpose.

Mobile IP is different compared to other efforts at mobility management in the sense that it is not tied to one specific access technology. In earlier mobile cellular standards, such as GSM, the radio resource and mobility management was integrated vertically into one system. The same is also true for mobile packet data standards such as CDPD (Cellular Digital Packet Data) and the internal packet data mobility protocol (GTP/MAP) of GPRS/UMTS networks. This vertical mobility management property is also inherent in the increasingly popular 802.11 Wireless LAN standard.

Mobile IP can be seen as the least common mobility denominator - providing seamless macro mobility solutions among the diversity of accesses. Mobile IP defines a Home Agent as an anchor point with which the mobile client always has a relationship, and a Foreign Agent, which acts as the local tunnel endpoint at the access network where the mobile client is visiting. Depending on which network the mobile client is currently visiting, its point of attachment (Foreign Agent) may change. At each point of attachment, Mobile IP either requires the availability of a standalone Foreign Agent or the usage of a co-located care-of address in the mobile client itself.
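A conceptual C sketch of the Home Agent's role is given below: the home address stays fixed while the care-of address changes with the point of attachment, and traffic for the home address is tunneled to the current care-of address. The field names and lookup function are illustrative, not the standard's actual data structures.

/* Conceptual sketch of a Home Agent binding table. */
#include <stddef.h>
#include <stdint.h>

typedef struct {
    uint32_t home_address;      /* permanent address of the mobile node    */
    uint32_t care_of_address;   /* FA address or co-located care-of addr.  */
    uint32_t lifetime_s;        /* registration lifetime in seconds        */
} binding_t;

/* When a packet arrives for a mobile node's home address, the Home Agent
 * looks up the current care-of address and tunnels the packet there. */
uint32_t tunnel_endpoint(const binding_t *table, size_t n, uint32_t dst)
{
    for (size_t i = 0; i < n; i++)
        if (table[i].home_address == dst && table[i].lifetime_s > 0)
            return table[i].care_of_address;   /* encapsulate towards this  */
    return dst;                                /* not mobile: deliver as-is */
}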

The concept of "mobility", or "packet data mobility", means different things depending on the context in which the word is used. In a wireless or fixed environment, there are many different ways of implementing partial or full mobility and roaming services. The most common ways of implementing mobility (discrete mobility or an IP roaming service) in today's IP networking environments include simple PPP dial-up as well as company-internal mobility solutions implemented by means of renewal of the IP address at each new point of attachment. The most commonly deployed way of supporting remote access users on today's Internet is to utilize the public telephone network (fixed or mobile) and the PPP dial-up functionality.


Self Organizing Maps

Definition


These notes provide an introduction to unsupervised neural networks, in particular Kohonen self-organizing maps; together with some fundamental background material on statistical pattern recognition.

One question which seems to puzzle many of those who encounter unsupervised learning for the first time is how anything useful can be achieved when input information is simply poured into a black box, with no provision of any rules as to how this information should be stored, and no examples of the various groups into which this information can be placed. If the information is sorted on the basis of how similar one input is to another, then we will have accomplished an important step in condensing the available information by developing a more compact representation.

We can represent this information, and any subsequent information, in a much reduced fashion, and we will know which information is more likely. This black box will certainly have learned. It may permit us to perceive some order in what otherwise was a mass of unrelated information - to see the wood for the trees.

In any learning system, we need to make full use of all the available data and to impose any constraints that we feel are justified. If we know what groups the information must fall into, that certain combinations of inputs preclude others, or that certain rules underlie the production of the information, then we must use them. Often, however, we do not possess such additional information. Consider two examples of experiments: one designed to test a particular hypothesis, say, to determine the effects of alcohol on driving; the second to investigate any possible connection between car accidents and the driver's lifestyle.

In the first experiment, we could arrange a laboratory-based study where volunteers take measured amounts of alcohol and then attempt some motor-skill activity (e.g., following a moving light on a computer screen by moving the mouse). We could collect the data (i.e., amount of alcohol vs. error rate on the computer test), conduct the customary statistical tests and, finally, draw our conclusions. Our hypothesis may be that the more alcohol consumed, the greater the error rate, and we can confirm this on the basis of the experiment. Note that we cannot prove the relationship; we can only state that we are 99% certain (or whatever level we set ourselves) that the result is not due purely to chance.

The second experiment is much more open-ended (indeed, it could be argued that it is not really an experiment). Data is collected from a large number of drivers, both those that have been involved in accidents and those that have not. This data could include the driver's age, occupation, health details, drinking habits, etc. From this mass of information, we can attempt to discover any possible connections. A number of conventional statistical tools exist to support this (e.g., factor analysis). We may discover possible relationships, including one between accidents and drinking, but perhaps many others as well. There could be a number of leads that need following up. Both approaches are valid in searching for causes underlying road accidents. This second experiment can be considered an example of unsupervised learning.

The next section provides some introductory background material on statistical pattern recognition. The terms and concepts will be useful in understanding the later material on unsupervised neural networks. As the approach underlying unsupervised networks is the measurement of how similar (or different) various inputs are, we need to consider how the distances between these inputs are measured. This forms the basis of Section Three, together with a brief description of non-neural approaches to unsupervised learning. Section Four discusses the background to and basic algorithm of Kohonen self-organizing maps. The next section details some of the properties of these maps and introduces several useful practical points. The final section provides pointers to further information on unsupervised neural networks.
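To anticipate the Kohonen algorithm discussed in Section Four, the C sketch below performs one training step on a one-dimensional map: find the best-matching unit by Euclidean distance, then pull it and its neighbours towards the input. The map size, learning rate and neighbourhood width are illustrative choices, not values from these notes.

/* Minimal sketch of one Kohonen SOM training step on a 1-D map. */
#include <math.h>
#include <stddef.h>

#define MAP_UNITS 10
#define DIM       3

static double sq_dist(const double *a, const double *b)
{
    double s = 0.0;
    for (size_t i = 0; i < DIM; i++) s += (a[i] - b[i]) * (a[i] - b[i]);
    return s;
}

void som_train_step(double w[MAP_UNITS][DIM], const double x[DIM],
                    double eta, double sigma)
{
    /* 1. Best-matching unit: the unit whose weight vector is closest. */
    size_t bmu = 0;
    for (size_t u = 1; u < MAP_UNITS; u++)
        if (sq_dist(w[u], x) < sq_dist(w[bmu], x)) bmu = u;

    /* 2. Update the BMU and its neighbours on the map grid. */
    for (size_t u = 0; u < MAP_UNITS; u++) {
        double d = (double)u - (double)bmu;               /* grid distance   */
        double h = exp(-(d * d) / (2.0 * sigma * sigma)); /* neighbourhood   */
        for (size_t i = 0; i < DIM; i++)
            w[u][i] += eta * h * (x[i] - w[u][i]);        /* move towards x  */
    }
}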

Survivable Networks Systems

Definition


Survivability In Network Systems

Contemporary large-scale networked systems that are highly distributed improve the efficiency and effectiveness of organizations by permitting whole new levels of organizational integration. However, such integration is accompanied by elevated risks of intrusion and compromise. These risks can be mitigated by incorporating survivability capabilities into an organization's systems. As an emerging discipline, survivability builds on related fields of study (e.g., security, fault tolerance, safety, reliability, reuse, performance, verification, and testing) and introduces new concepts and principles. Survivability focuses on preserving essential services in unbounded environments, even when systems in such environments are penetrated and compromised.

The New Network Paradigm: Organizational Integration

From their modest beginnings some 20 years ago, computer networks have become a critical element of modern society. These networks not only have global reach, they also have impact on virtually every aspect of human endeavor. Network systems are principal enabling agents in business, industry, government, and defense. Major economic sectors, including defense, energy, transportation, telecommunications, manufacturing, financial services, health care, and education, all depend on a vast array of networks operating on local, national, and global scales. This pervasive societal dependency on networks magnifies the consequences of intrusions, accidents, and failures, and amplifies the critical importance of ensuring network survivability.

As organizations seek to improve efficiency and competitiveness, a new network paradigm is emerging. Networks are being used to achieve radical new levels of organizational integration. This integration obliterates traditional organizational boundaries and transforms local operations into components of comprehensive, network-resident business processes. For example, commercial organizations are integrating operations with business units, suppliers, and customers through large-scale networks that enhance communication and services.

These networks combine previously fragmented operations into coherent processes open to many organizational participants. This new paradigm represents a shift from bounded networks with central control to unbounded networks. Unbounded networks are characterized by distributed administrative control without central authority, limited visibility beyond the boundaries of local administration, and lack of complete information about the network. At the same time, organizational dependencies on networks are increasing and risks and consequences of intrusions and compromises are amplified.

The Definition of Survivability

We define survivability as the capability of a system to fulfill its mission, in a timely manner, in the presence of attacks, failures, or accidents. We use the term system in the broadest possible sense, including networks and large-scale systems of systems. The term mission refers to a set of very high-level (i.e., abstract) requirements or goals.

Missions are not limited to military settings since any successful organization or project must have a vision of its objectives whether expressed implicitly or as a formal mission statement. Judgments as to whether or not a mission has been successfully fulfilled are typically made in the context of external conditions that may affect the achievement of that mission. For example, assume that a financial system shuts down for 12 hours during a period of widespread power outages caused by a hurricane.

If the system preserves the integrity and confidentiality of its data and resumes its essential services after the period of environmental stress is over, the system can reasonably be judged to have fulfilled its mission. However, if the same system shuts down unexpectedly for 12 hours under normal conditions (or under relatively minor environmental stress) and deprives its users of essential financial services, the system can reasonably be judged to have failed its mission, even if data integrity and confidentiality are preserved.


MPEG Video Compression

Definition


MPEG is the famous four-letter word which stands for the "Moving Picture Experts Group".
To the real world, MPEG is a generic means of compactly representing digital video and audio signals for consumer distribution. The essence of MPEG is its syntax: the little tokens that make up the bitstream. MPEG's semantics then tell you (if you happen to be a decoder, that is) how to inverse-represent the compact tokens back into something resembling the original stream of samples.

These semantics are merely a collection of rules (which people like to call algorithms, but that would imply there is a mathematical coherency to a scheme cooked up by trial and error...). These rules are highly reactive to combinations of bitstream elements set in headers and so forth.

MPEG is an institution unto itself, as seen from within its own universe. When (unadvisedly) placed in the same room, its inhabitants can spontaneously erupt into a blood-letting debate, triggered by mere anxiety over the most subtle juxtaposition of words buried in the most obscure documents. Such stimulus comes readily from transparencies flashed on an overhead projector. Yet at the same time, this gestalt will appear to remain totally indifferent to critical issues set before it for many months.

It should therefore be no surprise that MPEG's dualistic chemistry reflects the extreme contrasts of its two founding fathers: the fiery Leonardo Chiariglione (CSELT, Italy) and the peaceful Hiroshi Yasuda (JVC, Japan). The excellent byproduct of the successful MPEG process became an International Standards document safely administered to the public in three parts: Systems (Part 1), Video (Part 2), and Audio (Part 3).

Pre MPEG
Before providence gave us MPEG, there was the looming threat of world domination by proprietary standards cloaked in syntactic mystery. With lossy compression being such an inexact science (which always boils down to visual tweaking and implementation tradeoffs), you never know what's really behind any such scheme (other than a lot of marketing hype).
Seeing this threat... that is, the need for world interoperability, the fathers of MPEG sought the help of their colleagues to form a committee to standardize a common means of representing video and audio (a la DVI) on compact discs... and maybe it would be useful for other things too.

MPEG borrowed significantly from JPEG and, more directly, H.261. By the end of the third year (1990), a syntax emerged which, when applied to represent SIF-rate video and compact-disc-rate audio at a combined bitrate of 1.5 Mbit/s, approximated the pleasure-filled viewing experience offered by the standard VHS format.

After demonstrations proved that the syntax was generic enough to be applied to bit rates and sample rates far higher than the original primary target application ("Hey, it actually works!"), a second phase (MPEG-2) was initiated within the committee to define a syntax for efficient representation of broadcast video, or SDTV as it is now known (Standard Definition Television) - not to mention the side benefits: frequent flier miles.

Intel Centrino Mobile Technology

Definition


The world of mobile computing has seldom been so exciting. Not, at least, for the last 3 years, when all that the chip giants could think of was scaling down the frequency and voltage of desktop CPUs and labeling them as mobile processors. Intel Centrino mobile technology is based on the understanding that mobile customers value the four vectors of mobility: performance, battery life, small form factor, and wireless connectivity. The technologies represented by the Intel Centrino brand include an Intel Pentium-M processor, the Intel 855 chipset family, and the Intel PRO/Wireless 2100 network connection.

The Intel Pentium-M processor is a higher performance, lower power mobile processor with several micro-architectural enhancements over existing Intel mobile processors. Some key features of the Intel Pentium-M processor micro-architecture include Dynamic Execution, a 400-MHz processor system bus, an on-die 1-MB second-level cache with Advanced Transfer Cache Architecture, Streaming SIMD Extensions 2, and Enhanced Intel SpeedStep technology.

The Intel Centrino mobile technology also includes the 855GM chipset components: the GMCH and the ICH4-M. The Accelerated Hub Architecture is designed into the chipset to provide an efficient, high-bandwidth communication channel between the GMCH and the ICH4-M. The GMCH component contains a processor system bus controller, a graphics controller, and a memory controller, while providing an LVDS interface and two DVO ports.

The integrated Wi-Fi Certified Intel PRO/Wireless 2100 Network Connection has been designed and validated to work with all of the Intel Centrino mobile technology components and is able to connect to 802.11b Wi-Fi certified access points. It also supports advanced wireless LAN security including Cisco LEAP, 802.1X and WEP. Finally, for comprehensive security support, the Intel PRO/Wireless 2100 Network Connection has been verified with leading VPN suppliers like Cisco, CheckPoint, Microsoft and Intel NetStructure.

Pentium-M Processor

The Intel Pentium-M processor is a high performance, low power mobile processor with several micro-architectural enhancements over existing Intel mobile processors. The following list provides some of the key features on this processor:

- Supports Intel Architecture with Dynamic Execution
- High-performance, low-power core
- On-die, 1-MByte second-level cache with Advanced Transfer Cache Architecture
- Advanced Branch Prediction and Data Prefetch Logic
- Streaming SIMD Extensions 2 (SSE2)
- 400-MHz, source-synchronous processor system bus
- Advanced Power Management features including Enhanced Intel SpeedStep technology
- Micro-FCPGA and Micro-FCBGA packaging technologies

The Intel Pentium-M processor is manufactured on Intel's advanced 0.13 micron process technology with copper interconnect. The processor maintains support for MMX technology and Internet Streaming SIMD instructions and full compatibility with IA-32 software. The high performance core features architectural innovations like Micro-op Fusion and Advanced Stack Management that reduce the number of micro-ops handled by the processor. This results in more efficient scheduling and better performance at lower power.

The on-die 32-kB Level 1 instruction and data caches and the 1-MB Level 2 cache with Advanced Transfer Cache Architecture enable significant performance improvement over existing mobile processors. The processor also features a very advanced branch prediction architecture that significantly reduces the number of mispredicted branches. The processor's Data Prefetch Logic speculatively fetches data to the L2 cache before an L1 cache request occurs, resulting in reduced bus cycle penalties and improved performance.


Hurd



Definition


When we talk about free software, we usually refer to the free software licenses. We also need relief from software patents, so our freedom is not restricted by them. But there is a third type of freedom we need, and that's user freedom.

Expert users don't take a system as it is. They like to change the configuration, and they want to run the software that works best for them. That includes window managers as well as your favourite text editor. But even on a GNU/Linux system consisting only of free software, you cannot easily use the filesystem format, network protocol or binary format you want without special privileges. In traditional Unix systems, user freedom is severely restricted by the system administrator.

The Hurd is built on top of CMU's Mach 3.0 kernel and uses Mach's virtual memory management and message-passing facilities. The GNU C Library will provide the Unix system call interface, and will call the Hurd for needed services it can't provide itself. The design and implementation of the Hurd is being led by Michael Bushnell, with assistance from Richard Stallman, Roland McGrath, Jan Brittenson, and others.

A More Usable Approach To OS Design

The fundamental purpose of an operating system (OS) is to enable a variety of programs to share a single computer efficiently and productively. This demands memory protection, preemptively scheduled timesharing, coordinated access to I/O peripherals, and other services. In addition, an OS can allow several users to share a computer. In this case, efficiency demands services that protect users from harming each other, enable them to share without prior arrangement, and mediate access to physical devices.
On today's computer systems, programmers usually implement these goals through a large program called the kernel. Since this program must be accessible to all user programs, it is the natural place to add functionality to the system. Since the only model for process interaction is that of specific, individual services provided by the kernel, no one creates other places to add functionality. As time goes by, more and more is added to the kernel.

A traditional system allows users to add components to a kernel only if they both understand most of it and have a privileged status within the system. Testing new components requires a much more painful edit-compile-debug cycle than testing other programs. It cannot be done while others are using the system. Bugs usually cause fatal system crashes, further disrupting others' use of the system. The entire kernel is usually non-pageable. (There are systems with pageable kernels, but deciding what can be paged is difficult and error prone. Usually the mechanisms are complex, making them difficult to use even when adding simple extensions.)

Because of these restrictions, functionality which properly belongs behind the wall of a traditional kernel is usually left out of systems unless it is absolutely mandatory. Many good ideas, best done with an open/read/write interface, cannot be implemented because of the problems inherent in the monolithic nature of a traditional system. Further, even among those with the endurance to implement new ideas, only those who are privileged users of their computers can do so. The software copyright system darkens the mire by preventing unlicensed people from even reading the kernel source. The Hurd removes these restrictions from the user: it provides a user-extensible system framework without giving up POSIX compatibility and the Unix security model.

When Richard Stallman founded the GNU project in 1983, he wanted to write an operating system consisting only of free software. Very soon, a lot of the essential tools were implemented, and released under the GPL. However, one critical piece was missing: The kernel. After considering several alternatives, it was decided not to write a new kernel from scratch, but to start with the Mach micro kernel.

Buffer Overflow Attack: A Potential Problem and its Implications

Definition

Have you ever thought of a buffer overflow attack? It occurs through careless programming and the patchy nature of programs. Many C programs have buffer overflow vulnerabilities because the C language lacks array bounds checking, and the culture of C programmers encourages a performance-oriented style that avoids error checking where possible (e.g., gets and strcpy perform no bounds checking). This paper presents a systematic solution to the persistent problem of buffer overflow attacks. The buffer overflow attack gained notoriety in 1988 as part of the Morris Worm incident on the Internet. These problems are probably the result of careless programming, and could be corrected by elementary testing or code reviews along the way.
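The classic shape of the vulnerability is shown in the short C fragment below: strcpy() copies without any length check, so input longer than the buffer runs past it on the stack.

/* Example of the vulnerability described above (do not use in real code). */
#include <string.h>

void vulnerable(const char *input)
{
    char buf[16];
    strcpy(buf, input);   /* no length check: input longer than 15 bytes
                             overflows buf and can overwrite the saved
                             return address on the stack */
    /* A bounded copy, e.g. snprintf(buf, sizeof buf, "%s", input),
       avoids the overflow. */
}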

THE ATTACK :- A (malicious) user finds a vulnerability in a highly privileged program, and someone else then implements a patch against that particular attack on that privileged program. Fixes to buffer overflow attacks attempt to solve the problem at the source (the vulnerable program) instead of at the destination (the stack that is being overflowed).


StackGuard :- A simple compiler extension that limits the amount of damage a buffer overflow attack can inflict on a program. The paper discusses the various intricacies of the problem and the implementation details of the compiler extension 'StackGuard'.


Stack Smashing Attack :- Buffer overflow attacks exploit a lack of bounds checking on the size of input being stored in a buffer array. The most common data structure to corrupt in this fashion is the stack, and such an attack is called a ``stack smashing attack''.


StackGuard For Network Access :- The paper also discusses the impact of the 'Buffer Overflow Attack' on network access.


StackGuard prevents changes to active return addresses by either:
1. detecting the change of the return address before the function returns (a conceptual sketch of this canary-based check follows this list), or
2. completely preventing the write to the return address.
MemGuard is a tool developed to help debug optimistic specializations by locating code statements that change quasi-invariant values.
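The C sketch below illustrates the first approach: a "canary" word is placed next to the buffer and verified before the function returns. Real StackGuard emits this check in the compiler's function prologue and epilogue and uses random or terminator canaries; this hand-written version, with its fixed canary value and compiler-dependent stack layout, only illustrates the idea.

/* Conceptual canary check (illustrative only). */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define CANARY 0xDEADBEEFu      /* fixed value for illustration only */

void guarded_copy(const char *input)
{
    unsigned canary = CANARY;   /* intended to sit between buf and the
                                   saved return address */
    char buf[16];

    strcpy(buf, input);         /* an overflow should trample the canary
                                   before reaching the return address */

    if (canary != CANARY) {     /* epilogue-style check */
        fprintf(stderr, "stack smashing detected\n");
        abort();
    }
}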


STACKGUARD OVERHEAD
" Canary StackGuard Overhead
" MemGuard StackGuard Overhead
" StackGuard Macrobenchmarks

The paper presents these issues and their implications for IT applications and discusses the solutions through the implementation details of 'StackGuard'.




Single Photon Emission Computed Tomography (SPECT)

Definition

Emission Computed Tomography is a technique whereby multiple cross-sectional images of tissue function can be produced, thus removing the effect of overlying and underlying activity. The technique of ECT is generally considered as two separate modalities. Single Photon Emission Computed Tomography involves the use of the single gamma ray emitted per nuclear disintegration. Positron Emission Tomography makes use of radioisotopes such as gallium-68, in which two gamma rays, each of 511 keV, are emitted simultaneously when a positron from a nuclear disintegration annihilates in tissue.

SPECT, the acronym for Single Photon Emission Computed Tomography, is a nuclear medicine technique that uses radiopharmaceuticals, a rotating camera and a computer to produce images which allow us to visualize functional information about a patient's specific organ or body system. SPECT images are functional in nature rather than purely anatomical like those of ultrasound, CT and MRI. SPECT, like PET, acquires information on the concentration of radionuclides introduced into the patient's body.

SPECT dates from the early 1960s, when the idea of emission transverse-section tomography was introduced by D. E. Kuhl and R. Q. Edwards prior to PET, X-ray CT or MRI. The first commercial single-photon ECT (SPECT) imaging device was developed by Edwards and Kuhl, who produced tomographic images from emission data in 1963. Many research systems which became clinical standards were also developed in the 1980s.

SPECT is short for single photon emission computed tomography. As its name suggests (single photon emission), gamma rays are the source of the information, rather than the X-ray emission used in a conventional CT scan.

Like X-ray CT and MRI, SPECT is a cross-sectional imaging technique, but the information it provides is functional, describing a patient's specific organ or body system.

Internal radiation is administered by means of a pharmaceutical which is labeled with a radioactive isotope. This radiopharmaceutical decays, resulting in the emission of gamma rays, and these gamma rays give us a picture of what is happening inside the patient's body.

This is done using the most essential tool in nuclear medicine, the gamma camera. The gamma camera can be used in planar imaging to acquire a 2-D image, or in SPECT imaging to acquire a 3-D image.



Low Power UART Design for Serial Data Communication

Definition

With the proliferation of portable electronic devices, power-efficient data transmission has become increasingly important. For serial data transfer, universal asynchronous receiver/transmitter (UART) circuits are often implemented because of their inherent design simplicity and application-specific versatility. Laptop keyboards, Palm Pilot organizers and modems are a few examples of devices that employ UART circuits. In this work, the design and analysis of a robust UART architecture has been carried out to minimize power consumption during both idle and continuous modes of operation.

UART

A UART (universal asynchronous receiver/transmitter) is responsible for performing the main task in serial communications with computers. The device changes incoming parallel information into serial data which can be sent on a communication line, and a second UART can be used to receive the information. The UART performs all the tasks (timing, parity checking, and so on) needed for the communication. The only extra devices attached are line driver chips capable of transforming the TTL-level signals to line voltages and vice versa.

To use the device in different environments, registers are accessible to set or review the communication parameters. Settable parameters include, for example, the communication speed, the type of parity check, and the way incoming information is signaled to the running software.


UART types

Serial communication on PC compatibles started with the 8250 UART in the XT. In the years that followed, new family members were introduced, such as the 8250A and 8250B revisions and the 16450. The last of these was first implemented in the AT, because the higher bus speed of that computer could not be handled by the 8250 series. The differences between these first UART series were rather minor; the most important property changed with each new release was the maximum allowed speed at the processor bus side.

The 16450 was capable of handling a communication speed of 38.4 kbps without problems. The demand for higher speeds led to the development of newer series which could relieve the main processor of some of its tasks. The main problem with the original series was the need to perform a software action for each single byte transmitted or received. To overcome this, the 16550 was released, which contains two on-board FIFO buffers, each capable of storing 16 bytes: one buffer for incoming and one for outgoing bytes.
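
A rough idea of how software drives such a device is sketched below for a 16550-style UART. The register offsets and bit values follow the classic 16550 layout, while reg_write/reg_read are stand-ins for the port- or memory-mapped access a real platform provides; here they merely simulate the device so the sketch runs.

    #include <stdint.h>
    #include <stdio.h>

    #define UART_THR 0   /* transmit holding register          */
    #define UART_DLL 0   /* divisor latch low  (when DLAB = 1) */
    #define UART_DLM 1   /* divisor latch high (when DLAB = 1) */
    #define UART_FCR 2   /* FIFO control register              */
    #define UART_LCR 3   /* line control register              */
    #define UART_LSR 5   /* line status register               */

    static uint8_t regs[8];                            /* crude simulated register file */

    static void reg_write(int offset, uint8_t value)   /* platform-specific in reality */
    {
        regs[offset] = value;
        if (offset == UART_THR && !(regs[UART_LCR] & 0x80))
            putchar(value);                            /* pretend the byte was transmitted */
    }

    static uint8_t reg_read(int offset)                /* platform-specific in reality */
    {
        return (offset == UART_LSR) ? 0x20 : regs[offset];   /* always report "THR empty" */
    }

    static void uart_init(unsigned baud)
    {
        uint16_t divisor = (uint16_t)(115200 / baud);  /* assumes the usual 1.8432 MHz UART clock */
        reg_write(UART_LCR, 0x80);                     /* set DLAB to reach the divisor latches   */
        reg_write(UART_DLL, divisor & 0xFF);
        reg_write(UART_DLM, divisor >> 8);
        reg_write(UART_LCR, 0x03);                     /* 8 data bits, no parity, 1 stop bit      */
        reg_write(UART_FCR, 0x01);                     /* enable the 16-byte FIFOs                */
    }

    static void uart_putc(char c)
    {
        while (!(reg_read(UART_LSR) & 0x20))           /* wait until holding register is empty    */
            ;
        reg_write(UART_THR, (uint8_t)c);
    }

    int main(void)
    {
        uart_init(9600);
        for (const char *p = "hello\n"; *p; p++)
            uart_putc(*p);
        return 0;
    }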

Money Pad, The Future Wallet

Definition

"Money in the 21st century will surely prove to be as different from the money of the current century as our money is from that of the previous century. Just as fiat money replaced specie-backed paper currencies, electronically initiated debits and credits will become the dominant payment modes, creating the potential for private money to compete
with government-issued currencies." Just as every thing is getting under the shadow of "e" today we have paper currency being replaced by electronic money or e-cash.

Hardly a day goes by without some mention in the financial press of new developments in "electronic money". In the emerging field of electronic commerce, novel buzzwords like smart cards, online banking, digital cash, and electronic checks are being used to discuss money. But how secure are these brand-new forms of payment? And most importantly, which of these emerging secure electronic money technologies will survive into the next century?

These are some tough questions to answer, but here is a solution which provides a form of security to these modes of currency exchange using biometrics technology. The Money Pad introduced here uses biometrics for fingerprint recognition. The Money Pad is, in essence, a form of credit card or smart card, and that is the name we give it.

Every time the user wants to access the Money Pad, he has to make an impression of his fingers, which will be scanned and matched with the one stored on the database server. If the fingerprint matches the user's, he is allowed to access and use the Pad; otherwise the Money Pad remains inaccessible. This provides a form of security for the ever-lasting transaction currency of the future, e-cash.

Money Pad - a form of credit card or smart card, similar to a floppy disk, which is introduced to provide secure e-cash transactions.

Cisco IOS Firewall

Definition


The Cisco IOS Firewall provides robust, integrated firewall and intrusion detection functionality for every perimeter of the network. Available for a wide range of Cisco IOS software-based routers, the Cisco IOS Firewall offers sophisticated security and policy enforcement for connections within an organization (intranet) and between partner networks (extranets), as well as for securing Internet connectivity for remote and branch offices.


A security-specific, value-add option for Cisco IOS Software, the Cisco IOS Firewall enhances existing Cisco IOS security capabilities, such as authentication, encryption, and failover, with state-of-the-art security features, such as stateful, application-based filtering (context-based access control), defense against network attacks, per user authentication and authorization, and real-time alerts.


The Cisco IOS Firewall is configurable via Cisco ConfigMaker software, an easy-to-use Microsoft Windows 95, 98, NT 4.0 based software tool.

A firewall is a network security device that ensures that all communications attempting to cross it meet an organization's security policy. Firewalls track and control communications, deciding whether to allow, reject or encrypt them. Firewalls are used to connect a corporate local network to the Internet and also within networks; in other words, they stand between the trusted network and the untrusted network.

The first and most important decision reflects the policy of how your company or organization wants to operate the system: is the firewall in place to explicitly deny all services except those critical to the mission of connecting to the net, or is it in place to provide a metered and audited method of 'queuing' access in a non-threatening manner? The second issue is what level of monitoring, redundancy and control you want; having established the acceptable risk level, you can form a checklist of what should be monitored, permitted and denied. The third issue is financial.
Implementation methods


Two basic methods to implement a firewall are:
1. As a screening router:
A screening router is a special computer or electronic device that screens (filters out) specific packets based on criteria that the administrator defines. Almost all current screening routers operate in the following manner (a short code sketch follows this list):
a. Packet filter criteria must be stored for the ports of the packet filter device; these criteria are called packet filter rules.
b. When a packet arrives at a port, the packet header is parsed. Most packet filters examine the fields in only the IP, TCP and UDP headers.
c. The packet filter rules are stored in a specific order, and each rule is applied to the packet in that order.
d. If a rule blocks the transmission or reception of a packet, the packet is not allowed.
e. If a rule allows the transmission or reception of a packet, the packet is allowed.
f. If a packet does not satisfy any rule, it is blocked.
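
The sketch below illustrates this first-match, default-deny evaluation. The rule fields are simplified, and the structure and function names are ours rather than those of any particular router.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Minimal sketch of first-match packet filtering with a default-deny
     * fallback. A real screening router also matches source addresses,
     * interfaces, TCP flags, and so on. */
    struct packet {
        uint32_t dst_ip;
        uint16_t dst_port;
        uint8_t  protocol;            /* e.g. 6 = TCP, 17 = UDP */
    };

    struct rule {
        uint32_t dst_ip, dst_mask;
        uint16_t dst_port;            /* 0 means "any port"     */
        uint8_t  protocol;            /* 0 means "any protocol" */
        bool     allow;
    };

    bool filter(const struct packet *p, const struct rule *rules, size_t n)
    {
        for (size_t i = 0; i < n; i++) {              /* rules applied in stored order */
            const struct rule *r = &rules[i];
            if ((p->dst_ip & r->dst_mask) == (r->dst_ip & r->dst_mask) &&
                (r->dst_port == 0 || r->dst_port == p->dst_port) &&
                (r->protocol == 0 || r->protocol == p->protocol))
                return r->allow;                      /* first matching rule decides   */
        }
        return false;                                 /* no rule matched: block (rule f) */
    }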

Dual Core Processor

Definition


Seeing the technical difficulty of cranking higher clock speeds out of present single-core processors, dual-core architecture has started to establish itself as the answer for the development of future processors. With the release of the AMD dual-core Opteron and the Intel Pentium Extreme Edition 840, April 2005 officially marked the beginning of the dual-core endeavors of both companies.


The transition from single-core to dual-core architecture was triggered by a couple of factors. According to Moore's Law, the number of transistors (complexity) on a microprocessor doubles approximately every 18 months. The latest 2 MB Prescott core possesses more than 160 million transistors, and breaking the 200 million mark is just a matter of time. Transistor count is one of the reasons driving the industry toward dual-core architecture. Instead of using the astronomically high transistor counts available to design a new, more complex single-core processor that would offer higher performance than the present offerings, chip makers have decided to put these transistors to use in producing two identical yet independent cores and combining them into a single package.


To them, this is actually a far better use of the available transistors, and in return should give consumers more value for their money. Besides, with the single core's thermal envelope being pushed to its limit and the severe current-leakage issues that have hit the silicon manufacturing industry ever since the transition to 90 nm fabrication, it is extremely difficult for chip makers (particularly Intel) to squeeze more clock speed out of the present single-core design. Pushing for higher clock speeds is not a feasible option at present because of transistor current leakage, and adding more features to the core would increase the complexity of the design and make it harder to manage. These are the factors that have made the dual-core option the more viable alternative for making full use of the transistors available.


What is a dual core processor?
A dual core processor is a CPU with two separate cores on the same die, each with its own cache. It's the equivalent of getting two microprocessors in one. In a single-core or traditional processor the CPU is fed strings of instructions it must order, execute, then selectively store in its cache for quick retrieval. When data outside the cache is required, it is retrieved through the system bus from random access memory (RAM) or from storage devices. Accessing these slows down performance to the maximum speed the bus, RAM or storage device will allow, which is far slower than the speed of the CPU. The situation is compounded when multi-tasking. In this case the processor must switch back and forth between two or more sets of data streams and programs. CPU resources are depleted and performance suffers.


In a dual-core processor each core handles incoming data strings simultaneously to improve efficiency. Just as two heads are better than one, so are two hands. While one core is executing, the other can be accessing the system bus or executing its own code. Adding to this favorable scenario, both AMD's and Intel's dual-core flagships are 64-bit.
To utilize a dual-core processor, the operating system must be able to recognize multi-threading and the software must have simultaneous multi-threading technology (SMT) written into its code. SMT enables parallel multi-threading wherein the cores are served multi-threaded instructions in parallel. Without SMT the software will only recognize one core. Adobe Photoshop is an example of SMT-aware software. SMT is also used with multi-processor systems common to servers.
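
As a small illustration of the kind of explicitly threaded code that can keep both cores busy, the POSIX threads sketch below splits a trivial workload in half; the workload and names are placeholders, not taken from any particular application.

    #include <pthread.h>
    #include <stdio.h>

    /* Split a summation across two threads so that, on a dual-core CPU,
     * the operating system can schedule one thread on each core. */
    #define N 1000000
    static double data[N];

    struct slice { int start, end; double sum; };

    static void *sum_slice(void *arg)
    {
        struct slice *s = arg;
        s->sum = 0.0;
        for (int i = s->start; i < s->end; i++)
            s->sum += data[i];
        return NULL;
    }

    int main(void)
    {
        for (int i = 0; i < N; i++)
            data[i] = 1.0;

        struct slice a = { 0, N / 2, 0.0 }, b = { N / 2, N, 0.0 };
        pthread_t t1, t2;

        pthread_create(&t1, NULL, sum_slice, &a);   /* may run on core 0 */
        pthread_create(&t2, NULL, sum_slice, &b);   /* may run on core 1 */
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);

        printf("total = %f\n", a.sum + b.sum);
        return 0;
    }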


An attractive value of dual core processors is that they do not require a new motherboard, but can be used in existing boards that feature the correct socket. For the average user the difference in performance will be most noticeable in multi-tasking until more software is SMT aware. Servers running multiple dual core processors will see an appreciable increase in performance.

Nanorobotics

Definition


Nanorobotics is an emerging field that deals with the controlled manipulation of objects with nanometer-scale dimensions. Typically, an atom has a diameter of a few Ångströms (1 Å = 0.1 nm = 10^-10 m), a molecule's size is a few nm, and clusters or nanoparticles formed by hundreds or thousands of atoms have sizes of tens of nm. Therefore, Nanorobotics is concerned with interactions with atomic- and molecular-sized objects, and is sometimes called Molecular Robotics.


Molecular Robotics falls within the purview of Nanotechnology, which is the study of phenomena and structures with characteristic dimensions in the nanometer range. The birth of Nanotechnology is usually associated with a talk by Nobel Prize winner Richard Feynman entitled "There's Plenty of Room at the Bottom", whose text may be found in [Crandall & Lewis 1992]. Nanotechnology has the potential for major scientific and practical breakthroughs.

Future applications ranging from very fast computers to self-replicating robots are described in Drexler's seminal book [Drexler 1986]. In a less futuristic vein, the following potential applications were suggested by well-known experimental scientists at the Nano4 conference held in Palo Alto in November 1995:


" Cell probes with dimensions ~ 1/1000 of the cell's size
" Space applications, e.g. hardware to fly on satellites
" Computer memory
" Near field optics, with characteristic dimensions ~ 20 nm
" X-ray fabrication, systems that use X-ray photons
" Genome applications, reading and manipulating DNA
" Nanodevices capable of running on very small batteries
" Optical antennas


Nanotechnology is being pursued along two converging directions. From the top down, semiconductor fabrication techniques are producing smaller and smaller structures; see e.g. [Colton & Marrian 1995] for recent work. For example, the line width of the original Pentium chip is 350 nm. Current optical lithography techniques have obvious resolution limitations because of the wavelength of visible light, which is on the order of 500 nm. X-ray and electron-beam lithography will push sizes further down, but with a great increase in the complexity and cost of fabrication. These top-down techniques do not seem promising for building nanomachines that require precise positioning of atoms or molecules.


Alternatively, one can proceed from the bottom up, by assembling atoms and molecules into functional components and systems. There are two main approaches for building useful devices from nanoscale components. The first is based on self-assembly, and is a natural evolution of traditional chemistry and bulk processing; see e.g. [Gómez-López et al. 1996]. The other is based on controlled positioning of nanoscale objects, direct application of forces, electric fields, and so on. The self-assembly approach is being pursued at many laboratories. Despite all the current activity, self-assembly has severe limitations because the structures produced tend to be highly symmetric, and the most versatile self-assembled systems are organic and therefore generally lack robustness. The second approach involves Nanomanipulation, and is being studied by a small number of researchers, who are focusing on techniques based on Scanning Probe Microscopy.

Ipv6 - The Next Generation Protocol

Definition

The Internet is one of the greatest revolutionary innovations of the twentieth century. It made the 'global village utopia' a reality in a rather short span of time. It is changing the way we interact with each other, the way we do business, the way we educate ourselves and even the way we entertain ourselves. Perhaps even the architects of the Internet did not foresee the tremendous growth rate the network is witnessing today. With the advent of the Web and multimedia services, the technology underlying the Internet has been under stress.

It cannot adequately support many of the services being envisaged, such as real-time video conferencing, interconnection of gigabit networks with lower-bandwidth networks, high-security applications such as electronic commerce, and interactive virtual-reality applications. A more serious problem with today's Internet is that it can interconnect a maximum of only four billion systems, which is a small number compared to the number of systems projected to be on the Internet in the twenty-first century.

Each machine on the net is given a 32-bit address. With 32 bits, a maximum of about four billion addresses is possible. Though this is a large number, soon the Internet will have TV sets and even pizza machines connected to it, and since each of them must have an IP address, this number becomes too small. The revision of IPv4 was taken up mainly to resolve the address problem, but in the course of refinement several other features were also added to make it suitable for the next-generation Internet.

This version was initially named IPng (IP next generation) and is now officially known as IPv6. IPv6 supports 128-bit addresses: the source address and the destination address are each 128 bits long. IPv5, a minor variation of IPv4, is presently running on some routers. At present, most routers run software that supports only IPv4. To switch over to IPv6 overnight is an impossible task, and the transition is likely to take a very long time.
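
The difference in address size can be seen directly with the standard socket library calls inet_pton() and inet_ntop(); the sample addresses below are arbitrary documentation-style values.

    #include <arpa/inet.h>
    #include <stdio.h>

    /* Small sketch contrasting 32-bit IPv4 and 128-bit IPv6 addresses. */
    int main(void)
    {
        struct in_addr  v4;
        struct in6_addr v6;
        char buf[INET6_ADDRSTRLEN];

        inet_pton(AF_INET,  "192.0.2.1", &v4);      /* parses into 4 bytes  */
        inet_pton(AF_INET6, "2001:db8::1", &v6);    /* parses into 16 bytes */

        printf("IPv4 address occupies %zu bytes (%u bits)\n",
               sizeof(v4), (unsigned)(8 * sizeof(v4)));
        printf("IPv6 address occupies %zu bytes (%u bits)\n",
               sizeof(v6), (unsigned)(8 * sizeof(v6)));

        printf("round trip: %s\n", inet_ntop(AF_INET6, &v6, buf, sizeof(buf)));
        return 0;
    }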

However, to speed up the transition, an IPv4-compatible IPv6 addressing scheme has been worked out. Major vendors are now writing software for various computing environments to support IPv6 functionality. Incidentally, software development for different operating systems and router platforms will offer major job opportunities in the coming years.

Robotic Surgery

Definition

The field of surgery is entering a time of great change, spurred on by remarkable recent advances in surgical and computer technology. Computer-controlled diagnostic instruments have been used in the operating room for years to help provide vital information through ultrasound, computer-aided tomography (CAT), and other imaging technologies. Only recently have robotic systems made their way into the operating room as dexterity-enhancing surgical assistants and surgical planners, in answer to surgeons' demands for ways to overcome the surgical limitations of minimally invasive laparoscopic surgery.

The robotic surgical system enables surgeons to remove gallbladders and perform other general surgical procedures while seated at a computer console and 3-D video imaging system across the room from the patient. The surgeons operate controls with their hands and fingers to direct a robotically controlled laparoscope. At the end of the laparoscope are advanced, articulating surgical instruments and miniature cameras that allow surgeons to peer into the body and perform the procedures.

Now imagine: an army ranger is riddled with shrapnel deep behind enemy lines. Diagnostics from wearable sensors signal a physician at a nearby mobile army surgical hospital that his services are needed urgently. The ranger is loaded into an armored vehicle outfitted with a robotic surgery system. Within minutes, he is undergoing surgery performed by the physician, who is seated at a control console 100 kilometers out of harm's way.

The patient is saved. This is the power that the amalgamation of technology and the surgical sciences is offering doctors.
Just as computers revolutionized the latter half of the 20th century, the field of robotics has the potential to equally alter how we live in the 21st century. We've already seen how robots have changed the manufacturing of cars and other consumer goods by streamlining and speeding up the assembly line.

We even have robotic lawn mowers and robotic pets now. And robots have enabled us to see places that humans are not yet able to visit, such as other planets and the depths of the ocean. In the coming decades, we will see robots with artificial intelligence, coming to resemble the humans that create them. They will eventually become self-aware and conscious, and be able to do anything a human can. When we talk about robots doing the tasks of humans, we often talk about the future, but the future of robotic surgery is already here.

Sensors on 3D Digitization

Definition

Machine vision involves the analysis of the properties of the luminous flux reflected or radiated by objects. To recover the geometrical structure of these objects, either to recognize them or to measure their dimensions, two basic vision strategies are available [1].

Passive vision attempts to analyze the structure of the scene under ambient light [1]. Stereoscopic vision is a passive optical technique: the basic idea is that two or more digital images are taken from known locations and then processed to find the correlations between them. As soon as matching points are identified, the geometry can be computed.
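
Under the usual rectified pinhole-camera assumptions, the depth of a matched point follows from its disparity as Z = f * B / d, where f is the focal length in pixels, B the baseline between the cameras and d the disparity. The sketch below illustrates this; the numbers used are purely illustrative.

    #include <stdio.h>

    /* Depth from disparity for a rectified stereo pair: Z = f * B / d. */
    double depth_from_disparity(double focal_px, double baseline_m, double disparity_px)
    {
        if (disparity_px <= 0.0)
            return -1.0;                  /* no valid match */
        return focal_px * baseline_m / disparity_px;
    }

    int main(void)
    {
        /* e.g. 800-pixel focal length, 12 cm baseline, 16-pixel disparity */
        printf("Z = %.2f m\n", depth_from_disparity(800.0, 0.12, 16.0));
        return 0;
    }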

Active vision attempts to reduce the ambiguity of scene analysis by structuring the way in which images are formed. Sensors that capitalize on active vision can resolve most of the ambiguities found with two-dimensional imaging systems. Lidar-based or triangulation-based laser range cameras are examples of the active vision technique. One digital 3D imaging system based on optical triangulation was developed and demonstrated.

AUTOSYNCHRONIZED SCANNER

The auto-synchronized scanner, depicted schematically in Figure 1, can provide registered range and colour data of visible surfaces. A 3D surface map is captured by scanning a laser spot onto a scene, collecting the reflected laser light, and finally focusing the beam onto a linear laser spot sensor. Geometric and photometric corrections of the raw data give two images in perfect registration: one with x, y, z co-ordinates and a second with reflectance data. A laser beam composed of multiple visible wavelengths is used for the purpose of measuring the colour map of the scene.

Light Emitting Polymers (LEP)

Definition

Light-emitting polymers, or polymer-based light-emitting diodes, discovered by Friend et al. in 1990, have been found superior to other displays such as liquid crystal displays (LCDs), vacuum fluorescence displays and electroluminescence displays. Though not commercialised yet, they have proved to be a milestone in the field of flat panel displays. Research on LEP is underway at Cambridge Display Technology Ltd (CDT) in the UK.

In the last decade, several other display contenders such as plasma and field emission displays were hailed as the solution to the pervasive display. Like LCDs they suited certain niche applications, but failed to meet the broad demands of the computer industry.

Today the trend is towards non-CRT flat panel displays. As LEDs are inexpensive devices, they can be extremely handy in constructing flat panel displays. The idea was to combine the characteristics of a CRT with the performance of an LCD, plus the added design benefits of formability and low power. Cambridge Display Technology Ltd is developing a display medium with exactly these characteristics.

The technology uses a light-emitting polymer (LEP) that costs much less to manufacture and run than CRTs because the active material used is plastic.

LEP is a polymer that emits light when a voltage is applied to it. The structure comprises a thin-film semiconducting polymer sandwiched between two electrodes, an anode and a cathode. When electrons and holes are injected from the electrodes, these charge carriers recombine, leading to the emission of light that escapes through the glass substrate.



Satellite Radio

Definition

We all have our favorite radio stations that we preset into our car radios, flipping between them as we drive to and from work, on errands and around town. But when you travel too far away from the source station, the signal breaks up and fades into static. Most radio signals can only travel about 30 or 40 miles from their source. On long trips that find you passing through different cities, you might have to change radio stations every hour or so as the signals fade in and out.

Now, imagine a radio station that can broadcast its signal from more than 22,000 miles (35,000 km) away and then come through on your car radio with complete clarity, without your ever having to change the station.

Satellite Radio, or Digital Audio Radio Service (DARS), is a subscriber-based radio service that is broadcast directly from satellites. Subscribers are able to receive up to 100 radio channels featuring CD-quality music, news, weather, sports, talk radio and other entertainment channels.

Satellite radio is an idea nearly 10 years in the making. In 1992, the U.S. Federal Communications Commission (FCC) allocated a spectrum in the "S" band (2.3 GHz) for nationwide broadcasting of satellite-based Digital Audio Radio Service (DARS). In 1997, the FCC awarded 8-year radio broadcast licenses to two companies, Sirius Satellite Radio (formerly CD Radio) and XM Satellite Radio (formerly American Mobile Radio). Both companies worked aggressively to be ready to offer their radio services to the public by the end of 2000. It is expected that automotive radios will be the largest application of satellite radio.

The satellite era began in September 2001 when XM launched in selected markets, followed by full nationwide service in November. Sirius lagged slightly, with a gradual rollout beginning in February, including a quiet launch in the Bay Area on June 15; its nationwide launch came on July 1.

Y2K38

Definition


The Y2K38 problem has been described as a non-problem, given that we are expected to be running 64-bit operating systems well before 2038. Well, maybe.

The Problem
Just as Y2K problems arise from programs not allocating enough digits to the year, Y2K38 problems arise from programs not allocating enough bits to internal time. Unix internal time is commonly stored in a data structure using a long int containing the number of seconds since 1970. This time is used in all time-related processes such as scheduling, file timestamps, etc. On a 32-bit machine, this value is sufficient to store time only up to 18-jan-2038. After this date, 32-bit clocks will overflow and return erroneous values such as 32-dec-1969 or 13-dec-1901.
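
The effect can be sketched in a few lines of C; the wrapped value printed at the end is the typical two's-complement result (strictly speaking, the conversion back to 32 bits is implementation-defined).

    #include <stdint.h>
    #include <stdio.h>

    /* A signed 32-bit count of seconds since 1-jan-1970 tops out at
     * 2^31 - 1 = 2147483647, a moment in January 2038. One second later
     * the stored value wraps to a large negative number, which time
     * routines interpret as a date back around 1901. */
    int main(void)
    {
        int32_t t32 = INT32_MAX;                 /* last representable second  */
        int64_t next = (int64_t)t32 + 1;         /* computed safely in 64 bits */

        printf("maximum 32-bit value : %d\n", t32);
        printf("one second later     : %lld\n", (long long)next);
        printf("stored back in 32 bit: %d\n", (int32_t)next);   /* wraps negative */
        return 0;
    }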


Machines Affected
Currently (March 1998) there are a huge number of machines affected. Most of these will be scrapped before 2038. However, it is possible that some machines going into service now may still be operating in 2038. These may include process control computers, space probe computers, embedded systems in traffic light controllers, navigation systems, etc. Many of these systems may not be upgradeable. For instance, Ferranti Argus computers survived in service longer than anyone expected, long enough to present serious maintenance problems.


Note: Unix time is safe for the indefinite future for referring to future events, provided that enough bits are allocated. Programs or databases with a fixed field width should probably allocate at least 48 bits to storing time values.
Hardware, such as clock circuits, that has adopted the Unix time convention may also be affected if 32-bit registers are used.


In my opinion, the Y2K38 threat is more likely to result in aircraft falling from the sky, glitches in life-support systems, and nuclear power plant meltdown than the Y2K threat, which is more likely to disrupt inventory control, credit card payments, pension plans etc. The reason for this is that the Y2K38 problem involves the basic system timekeeping from which most other time and date information is derived, while the Y2K problem (mostly) involves application programs.
Emulation and Megafunctions
While 32-bit CPUs may be obsolete in desktop computers and servers by 2038, they may still exist in microcontrollers and embedded circuits. For instance, the Z80 processor is still available in 1999 as an Embedded Function within Altera programmable devices. Such embedded functions present a serious maintenance problem for Y2K38 and similar rollover issues, since the package part number and other markings typically give no indication of the internal function.

Software Issues
Databases using 32-bit Unix time may survive through 2038. Care will have to be taken to avoid rollover issues.

Now that we've far surpassed the problem of "Y2K," can you believe that computer scientists and theorists are now projecting a new worldwide computer glitch for the year 2038? Commonly called the "Y2K38 problem," it affects computers that use "long int" time systems, which were set up to start counting time from January 1, 1970.


XML Encryption

Definition


As XML becomes a predominant means of linking blocks of information together, there is a requirement to secure specific information: that is, to allow authorized entities access to specific information and to prevent access to that information by unauthorized entities. Current methods on the Internet include password protection, smart cards, PKI, tokens and a variety of other schemes. These typically solve the problem of keeping unauthorized users out of a site, but do not provide mechanisms for protecting specific information from all those who have authorized access to the site.

Now that XML is being used to provide searchable and organized information there is a sense of urgency to provide a standard to protect certain parts or elements from unauthorized access. The objective of XML encryption is to provide a standard methodology that prevents unauthorized access to specific information within an XML document.

XML (Extensible Markup Language) was developed by an XML Working Group (originally known as the SGML Editorial Review Board) formed under the auspices of the World Wide Web Consortium (W3C) in 1996. Even though HTML, DHTML and SGML already existed, XML was developed by the W3C to achieve the following design goals.

" XML shall be straightforwardly usable over the Internet.
" XML shall be compatible with SGML.
" It shall be easy to write programs, which process XML documents.
" The design of XML shall be formal and concise.
" XML documents shall be easy to create.

XML was created so that richly structured documents could be used over the web. The alternatives, HTML and SGML, are not practical for this purpose: HTML comes bound with a set of semantics and does not provide arbitrary structure, while SGML, although it provides arbitrary structure, is too difficult to implement just for a web browser. SGML is so comprehensive that only large corporations can justify the cost of its implementation.

The eXtensible Markup Language, abbreviated as XML, describes a class of data objects called XML documents and partially describes the behavior of computer programs which process them. Thus XML is a restricted form of SGML.

A data object is an XML document if it is well-formed, as defined in this specification. A well-formed XML document may in addition be valid if it meets certain further constraints. Each XML document has both a logical and a physical structure. Physically, the document is composed of units called entities. An entity may refer to other entities to cause their inclusion in the document. A document begins in a "root" or document entity.


Unicode And Multilingual Computing

Definition


Unicode provides a unique number for every character,
no matter what the platform,
no matter what the program,
no matter what the language.

Fundamentally, computers just deal with numbers. They store letters and other characters by assigning a number for each one. Before Unicode was invented, there were hundreds of different encoding systems for assigning these numbers. No single encoding could contain enough characters: for example, the European Union alone requires several different encodings to cover all its languages. Even for a single language like English no single encoding was adequate for all the letters, punctuation, and technical symbols in common use.

These encoding systems also conflict with one another. That is, two encodings can use the same number for two different characters, or use different numbers for the same character. Any given computer (especially servers) needs to support many different encodings; yet whenever data is passed between different encodings or platforms, that data always runs the risk of corruption.
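
Unicode itself only assigns the numbers (code points); encoding forms such as UTF-8 define how each number is stored as bytes. The sketch below (the function name is ours, not part of any standard API) shows the standard UTF-8 byte layout.

    #include <stdint.h>
    #include <stdio.h>

    /* Encode one Unicode code point as UTF-8. Returns the number of
     * bytes written (1-4), or 0 for values outside the Unicode range.
     * (Surrogate code points are not rejected in this simple sketch.) */
    int utf8_encode(uint32_t cp, unsigned char out[4])
    {
        if (cp <= 0x7F) {                       /* ASCII stays one byte */
            out[0] = (unsigned char)cp;
            return 1;
        } else if (cp <= 0x7FF) {
            out[0] = 0xC0 | (cp >> 6);
            out[1] = 0x80 | (cp & 0x3F);
            return 2;
        } else if (cp <= 0xFFFF) {
            out[0] = 0xE0 | (cp >> 12);
            out[1] = 0x80 | ((cp >> 6) & 0x3F);
            out[2] = 0x80 | (cp & 0x3F);
            return 3;
        } else if (cp <= 0x10FFFF) {
            out[0] = 0xF0 | (cp >> 18);
            out[1] = 0x80 | ((cp >> 12) & 0x3F);
            out[2] = 0x80 | ((cp >> 6) & 0x3F);
            out[3] = 0x80 | (cp & 0x3F);
            return 4;
        }
        return 0;
    }

    int main(void)
    {
        unsigned char buf[4];
        int n = utf8_encode(0x20AC, buf);       /* U+20AC, the euro sign */
        printf("U+20AC ->");
        for (int i = 0; i < n; i++)
            printf(" %02X", buf[i]);            /* prints E2 82 AC */
        printf("\n");
        return 0;
    }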

This paper is intended for software developers interested in support for the Unicode standard in the Solaris™ 7 operating environment. It discusses the following topics:

" An overview of multilingual computing, and how Unicode and the internationalization framework in the Solaris 7 operating environment work together to achieve this aim
" The Unicode standard and support for it within the Solaris operating environment
" Unicode in the Solaris 7 Operating Environment
" How developers can add Unicode support to their applications
" Codeset conversions

Unicode And Multilingual Computing

It is not a new idea that today's global economy demands global computing solutions. Instant communications and the free flow of information across continents - and across computer platforms - characterize the way the world has been doing business for some time. The widespread use of the Internet and the arrival of electronic commerce (e-commerce) together offer companies and individuals a new set of horizons to explore and master. In the global audience, there are always people and businesses at work - 24 hours of the day, 7 days a week. So global computing can only grow.

What is new is the increasing demand of users for a computing environment that is in harmony with their own cultural and linguistic requirements. Users want applications and file formats that they can share with colleagues and customers an ocean away, application interfaces in their own language, and time and date displays that they understand at a glance. Essentially, users want to write and speak at the keyboard in the same way that they always write and speak. Sun Microsystems addresses these needs at various levels, bringing together the components that make possible a truly multilingual computing environment.


Ubiquitous Networking

Definition


Mobile computing devices have changed the way we look at computing. Laptops and personal digital assistants (PDAs) have unchained us from our desktop computers. A group of researchers at AT&T Laboratories Cambridge are preparing to put a new spin on mobile computing. In addition to taking the hardware with you, they are designing a ubiquitous networking system that allows your program applications to follow you wherever you go.

By using a small radio transmitter and a building full of special sensors, your desktop can be anywhere you are, not just at your workstation. At the press of a button, the computer closest to you in any room becomes your computer for as long as you need it. In addition to computers, the Cambridge researchers have designed the system to work for other devices, including phones and digital cameras. As we move closer to intelligent computers, they may begin to follow our every move.

The essence of mobile computing is that a user's applications are available, in a suitably adapted form, wherever that user goes. Within a richly equipped networked environment such as a modern office the user need not carry any equipment around; the user-interfaces of the applications themselves can follow the user as they move, using the equipment and networking resources available. We call these applications Follow-me applications.


Context-Aware Application

A context-aware application is one which adapts its behaviour to a changing environment. Other examples of context-aware applications are 'construction-kit computers' which automatically build themselves by organizing a set of proximate components to act as a more complex device, and 'walk-through videophones' which automatically select streams from a range of cameras to maintain an image of a nomadic user. Typically, a context-aware application needs to know the location of users and equipment, and the capabilities of the equipment and networking infrastructure. In this paper we describe a sensor-driven, or sentient, computing platform that collects environmental data, and presents that data in a form suitable for context-aware applications.

The platform we describe has five main components:
1. A fine-grained location system, which is used to locate and identify objects.
2. A detailed data model, which describes the essential real world entities that are involved in mobile applications.
3. A persistent distributed object system, which presents the data model in a form accessible to applications.
4. Resource monitors, which run on networked equipment and communicate status information to a centralized repository.
5. A spatial monitoring service, which enables event-based location-aware applications.
Finally, we describe an example application to show how this platform may be used.
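
Purely as a hypothetical illustration, an application's view of such a platform might resemble the sketch below; none of the type or function names come from the actual AT&T system.

    #include <math.h>
    #include <stdio.h>

    /* Hypothetical data model and spatial check a sentient platform
     * might expose to a follow-me application. */
    struct location { double x, y, z; };

    struct entity {                       /* a person, workstation, phone, ... */
        const char     *name;
        struct location where;
    };

    /* Fire a "follow-me" event when a user is within radius metres of a device. */
    static void check_proximity(const struct entity *user,
                                const struct entity *device, double radius)
    {
        double dx = user->where.x - device->where.x;
        double dy = user->where.y - device->where.y;
        double dz = user->where.z - device->where.z;
        if (sqrt(dx * dx + dy * dy + dz * dz) <= radius)
            printf("moving %s's desktop to %s\n", user->name, device->name);
    }

    int main(void)
    {
        struct entity user = { "alice", { 2.0, 1.0, 0.0 } };
        struct entity ws   = { "workstation-3", { 2.5, 1.5, 0.0 } };
        check_proximity(&user, &ws, 1.0);   /* distance ~0.71 m: event fires */
        return 0;
    }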

Tripwire

Definition


Tripwire is a reliable intrusion detection system. It is a software tool that checks to see what has changed in your system. It mainly monitors the key attributes of your files; by key attributes we mean the binary signature, size and other related data. Security and operational stability must go hand in hand: if the user does not have control over the various operations taking place, then naturally the security of the system is also compromised. Tripwire has a powerful feature which pinpoints the changes that have taken place, notifies the administrator of these changes, determines the nature of the changes and provides you with the information you need to decide how to manage the change.

Tripwire integrity management solutions monitor changes to vital system and configuration files. Any changes that occur are compared to a snapshot of the established good baseline. The software detects the changes, notifies the staff and enables rapid recovery and remedy for changes. All Tripwire installations can be centrally managed, and Tripwire software's cross-platform functionality enables you to manage thousands of devices across your infrastructure.

Security not only means protecting your system against various attacks but also means taking quick and decisive action when your system is attacked. First of all we must find out whether our system has been attacked or not; for this, system logs have always been handy. You can see evidence of password guessing and other suspicious activities, and logs are ideal for tracing the steps of the cracker as he tries to penetrate the system. But who has the time and the patience to examine the logs on a daily basis?

Penetration usually involves a change of some kind: a new port has been opened, or a new service installed. The most common change you can see is that a file has changed. If you can identify the key subset of these files and monitor them on a daily basis, then you will be able to detect whether any intrusion has taken place. Tripwire is an open source program created to monitor changes in a key subset of files identified by the user and report on any changes in any of those files. When changes are detected, the system administrator is informed. Tripwire's principle is very simple: the system administrator identifies key files and causes Tripwire to record checksums for those files.

He also puts in place a cron job whose task is to scan those files at regular intervals (daily, or more frequently), comparing them to the original checksums. Any changes, additions or deletions are reported to the administrator, who can then determine whether the changes were permitted or unauthorized. In the former case the database is updated so that in future the same violation isn't reported again; in the latter case, proper recovery action is taken immediately.
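
The baseline-and-compare idea can be sketched as follows. Real Tripwire uses cryptographic hashes, a signed database and a policy file; the simple FNV-1a hash and single hard-coded file here are only for illustration.

    #include <stdint.h>
    #include <stdio.h>

    /* Hash a file's contents with 64-bit FNV-1a (illustrative only). */
    static uint64_t fnv1a_file(const char *path)
    {
        uint64_t h = 14695981039346656037ULL;      /* FNV offset basis */
        FILE *f = fopen(path, "rb");
        int c;

        if (!f)
            return 0;
        while ((c = fgetc(f)) != EOF) {
            h ^= (uint64_t)(unsigned char)c;
            h *= 1099511628211ULL;                 /* FNV prime */
        }
        fclose(f);
        return h;
    }

    int main(void)
    {
        const char *watched = "/etc/passwd";       /* example monitored file    */
        uint64_t baseline = fnv1a_file(watched);   /* recorded at install time  */

        /* ... later, re-run from a cron job ... */
        if (fnv1a_file(watched) != baseline)
            printf("ALERT: %s has changed\n", watched);
        else
            printf("%s unchanged\n", watched);
        return 0;
    }
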
Tripwire For Servers

Tripwire for Servers is software used exclusively on servers. It can be installed on any server that needs to be monitored for changes. Typical servers include mail servers, web servers, firewalls, transaction servers, development servers, etc. Any server where it is imperative to identify if and when a file system change has occurred should be monitored with Tripwire for Servers. For the Tripwire for Servers software to work, two important things must be present: the policy file and the database.

The Tripwire for Servers software conducts subsequent file checks, automatically comparing the state of the system with the baseline database. Any inconsistencies are reported to the Tripwire Manager and to the host system log file. Reports can also be emailed to an administrator. If a violation is an authorized change, a user can update the database so the changes no longer show up as violations.