Friday, February 20, 2009

Crusoe Processor

Transmeta released its Very Long Instruction Word (VLIW) processor, known as the Crusoe processor. It is a hardware-software hybrid that uses a code-morphing technique to emulate the x86 architecture: software known as Code Morphing Software translates ordinary x86 instructions into the chip's native VLIW code. This software is loaded from ROM at boot time and controls the scheduling of instructions. Compatibility with x86 applications is assured because the software insulates programs from the hardware engine's native VLIW instruction set.
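As a rough illustration of the idea only (not Transmeta's actual implementation), a dynamic binary translator keeps a cache of already-translated blocks and translates an x86 block just once, the first time it runs; the function names below are purely hypothetical:

```python
# Hypothetical sketch of a code-morphing loop: translate x86 blocks to native
# VLIW code on first use, then reuse the cached translation on later passes.
# fetch_x86_block, translate_block and execute_native are illustrative names,
# not Transmeta's API.

translation_cache = {}  # x86 block address -> translated native code

def run(x86_pc, fetch_x86_block, translate_block, execute_native):
    while x86_pc is not None:
        if x86_pc not in translation_cache:
            block = fetch_x86_block(x86_pc)                      # read x86 code
            translation_cache[x86_pc] = translate_block(block)   # emit VLIW code
        # Executing the cached native code returns the next x86 address to run.
        x86_pc = execute_native(translation_cache[x86_pc])
```

Frequently executed code is therefore translated only once, which is one reason the approach can stay both compatible and power-efficient.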

The code-morphing technique keeps the core logic design of the Crusoe processor simple and provides a solution to the problems posed by traditional architectures. Very low power consumption is one of the resulting benefits, which makes the Crusoe well suited for Internet appliances and mobile applications.

As modern CPUs became more complex, they acquired more hardware and performed more functions than their early RISC predecessors. All that hardware requires a lot of power, and the more power a CPU draws, the hotter it gets. When Transmeta designed the Crusoe system, they went back to basics. They looked at the entire picture: instead of asking only "How fast can we make this system?", they asked "How efficient can we possibly make this, and still have it run x86 applications acceptably?" So instead of one primary directive they had two, and certain things had to be traded off to make the best system possible. The three main things they wanted the system to have were:

1. Full x86 compatibility

2. The lowest possible power consumption

3. A level of x86 application performance that provides for a reasonably good user experience.

imode

imode is NTT DoCoMo's new Internet access system. It is an advanced, intelligent messaging service for digital mobile phones and other mobile terminals that allows you to see Internet content in a special text format on imode-enabled mobile phones. Enabling information access from handheld devices requires a deep understanding of both technical and market issues that are unique to the wireless environment. The imode specification was developed by the industry's best minds to address these issues. Wireless devices represent the ultimate constrained computing device, with limited CPU, memory and battery life and a simple user interface. Wireless networks are constrained by low bandwidth, high latency and unpredictable availability and stability.

The imode specification addresses these issues by using the best of existing standards and developing new extensions when needed. The imode solution leverages the tremendous investment in web servers, web development tools, web programmers and web applications while solving the unique problems associated with the wireless domain. The specification ensures that this solution is fast, reliable and secure. The imode specification is developed and supported by the wireless telecommunication community so that the entire industry and its subscribers can benefit from a single, open specification.

NTT DoCoMo: The Creators of imode

NTT DoCoMo is a subsidiary of Japan's incumbent telephone operator NTT. The majority of NTT DoCoMo's shares are owned by NTT, and the majority of NTT's shares are owned by the Japanese government. NTT DoCoMo's shares are separately listed on the Tokyo Stock Exchange and on the Osaka Stock Exchange, and NTT DoCoMo's market value (capitalization) makes it one of the world's most valued companies.

Goals of imode

The goals of the imode forum are listed as follows.

>>To bring Internet content and advanced data services to wireless phones and other wireless terminals.

>>To develop a global wireless protocol specification that works across all wireless network technologies.

>>To enable the creation of content and applications that scale across a wide range of wireless bearer networks and device types, i.e. to maintain device and bearer independence.

>>To embrace and extend existing standards and technology whenever possible and appropriate.

Web Spoofing

This paper describes an Internet security attack that could endanger the privacy of World Wide Web users and the integrity of their data. The attack can be carried out on today's systems, endangering users of the most common Web browsers, including Netscape Navigator and Microsoft Internet Explorer.

1.1 HISTORY

The concept of IP spoofing was initially discussed in academic circles in the 1980s. It was primarily theoretical until Robert Morris, whose son wrote the first Internet Worm, discovered a security weakness in the TCP protocol known as sequence prediction. Another infamous attack, Kevin Mitnick's Christmas Day crack of Tsutomu Shimomura's machine, employed IP spoofing and TCP sequence prediction techniques. While the popularity of such cracks has decreased due to the demise of the services they exploited, spoofing can still be used and needs to be addressed by all security administrators.

1.2 WHAT IS SPOOFING?

Spoofing means pretending to be something you are not. In Internet terms it means pretending to be a different Internet address from the one you really have in order to gain something. That might be information like credit card numbers, passwords, personal information or the ability to carry out actions using someone else’s identity.

An IP spoofing attack involves forging one's source address; it is the act of using one machine to impersonate another. Many applications and tools on the web rely on source IP address authentication, and many developers have used host-based access controls to secure their networks. The source IP address is a unique identifier, but not a reliable one: it can easily be spoofed.
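For illustration only, the well-known Scapy packet library shows why a source address cannot be trusted: the sender simply writes whatever it likes into the IP header. Crafting such packets requires raw-socket (root) privileges and should only ever be done on a test network you own; the addresses below are reserved documentation addresses, not real hosts.

```python
# Why source-IP authentication is weak: the sender chooses the src field of
# the IP header. Requires Scapy and root; use only on a lab network you own.
from scapy.all import IP, TCP, send

spoofed = IP(src="203.0.113.7", dst="192.0.2.10") / TCP(dport=80, flags="S")
send(spoofed)  # the receiver sees 203.0.113.7, not the real sender's address
```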

Web spoofing allows an attacker to create a shadow copy of the entire World Wide Web. Accesses to the shadow Web are funneled through the attacker's machine, allowing the attacker to monitor all of the victim's activities, including any passwords or account numbers the victim enters. The attacker can also cause false or misleading data to be sent to Web servers in the victim's name, or to the victim in the name of any Web server. In short, the attacker observes and controls everything the victim does on the Web.
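The core mechanism is URL rewriting: every link on a page is rewritten so that it points back through the attacker's server. A minimal sketch of that rewriting rule (the attacker host name here is a made-up placeholder):

```python
# Sketch of the URL-rewriting step at the heart of web spoofing: every link is
# prefixed with the attacker's host, so subsequent requests flow through it.
# "www.attacker.example" is a placeholder, not a real host.
import re

ATTACKER = "http://www.attacker.example/"

def rewrite_links(html: str) -> str:
    # href="http://real.site/page" -> href="http://www.attacker.example/http://real.site/page"
    return re.sub(r'href="(https?://[^"]+)"',
                  lambda m: f'href="{ATTACKER}{m.group(1)}"',
                  html)

print(rewrite_links('<a href="http://www.example.com/login">Log in</a>'))
```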

The various types of spoofing techniques that we discuss include TCP Flooding, DNS Server Spoofing Attempts, web site names, email ids and link redirection.

Delay Tolerant Networks

Increasingly, network applications must communicate with counterparts across disparate networking environments characterized by significantly different sets of physical and operational constraints; wide variations in transmission latency are particularly troublesome. The proposed Interplanetary Internet (IPN), which must encompass both terrestrial and interplanetary links, is an extreme case. An architecture based on a protocol that can operate successfully and reliably in multiple disparate environments would simplify the development and deployment of such applications.

The Internet protocols are ill suited for this purpose. They are, in general, poorly suited to operation on paths in which some of the links operate intermittently or over extremely long propagation delays. The principal problem is reliable transport, but the operation of the Internet's routing protocols would also raise troubling issues.

It is this analysis that leads us to propose an architecture based on Internet-independent middleware: use exactly those protocols at all layers that are best suited to operation within each environment, but insert a new overlay network protocol between the applications and the locally optimized stacks. This new protocol layer, called the bundle layer, ties together the region-specific lower layers so that application programs can communicate across multiple regions.

The DTN architecture implements store-and-forward message switching.

A DTN is a network of regional networks, where a regional network is a network that is adapted to a particular communication region, wherein communication characteristics are relatively homogeneous. Thus, DTNs support interoperability of regional networks by accommodating long delays between and within regional networks, and by translating between regional communication characteristics.
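A minimal sketch of the store-and-forward behaviour of a bundle node, under a deliberately simplified contact model (real DTN bundle agents also handle custody transfer, fragmentation, expiry and routing):

```python
# Minimal sketch of a DTN bundle node: bundles are stored persistently and
# forwarded only when a contact (a usable link to the next hop) comes up.
from collections import deque

class BundleNode:
    def __init__(self, name):
        self.name = name
        self.storage = deque()          # store for bundles awaiting a contact

    def receive(self, bundle):
        self.storage.append(bundle)     # store first...

    def contact_up(self, next_hop):
        while self.storage:             # ...forward when the link exists
            next_hop.receive(self.storage.popleft())

earth, relay, lander = BundleNode("earth"), BundleNode("relay"), BundleNode("lander")
earth.receive({"dst": "lander", "payload": b"command"})
earth.contact_up(relay)                 # hop 1 while the Earth-relay link is up
relay.contact_up(lander)                # hop 2, possibly hours later
```

The point of the sketch is that no end-to-end path ever needs to exist at one moment; each region-specific hop completes whenever its own link is available.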

Java Ring

A Java Ring is a finger ring that contains a small microprocessor with built-in capabilities for the user, a sort of smart card that is wearable on a finger. Sun Microsystems' Java Ring was introduced at the JavaOne Conference in 1998 and, instead of a gemstone, contained an inexpensive microprocessor in a stainless-steel iButton running a Java virtual machine and preloaded with applets (little application programs). The rings were built by Dallas Semiconductor.

Workstations at the conference had ring readers installed on them that downloaded information about the user from the conference registration system. This information was then used to enable a number of personalized services. For example, a robotic machine made coffee according to the user's preferences, which it downloaded when the wearer snapped the ring into another ring reader.

Although Java Rings aren't widely used yet, such rings or similar devices could have a number of real-world applications, such as starting your car and having all your vehicle's components (such as the seat, mirrors, and radio selections) automatically adjust to your preferences.

The Java Ring is an extremely secure Java-powered electronic token with a continuously running, unalterable real-time clock and rugged packaging, suitable for many applications. The jewel of the Java Ring is the Java iButton -- a one-million transistor, single chip trusted microcomputer with a powerful Java Virtual Machine (JVM) housed in a rugged and secure stainless-steel case.

The Java Ring is a stainless-steel ring, 16 millimeters (0.6 inches) in diameter, that houses a 1-million-transistor processor, called an iButton. The ring has 134 KB of RAM, 32 KB of ROM, a real-time clock and a Java virtual machine, which is a piece of software that recognizes the Java language and translates it for the user's computer system.

The Ring, first introduced at the JavaOne Conference, has been tested at Celebration School, an innovative K-12 school just outside Orlando, FL. The rings given to students are programmed with Java applets that communicate with host applications on networked systems. Applets are small applications that are designed to be run within another application. The Java Ring is snapped into a reader, called a Blue Dot receptor, to allow communication between a host system and the Java Ring.

Designed to be fully compatible with the Java Card 2.0 standard, the processor features a high-speed 1024-bit modular exponentiator for RSA encryption, large RAM and ROM memory capacity, and an unalterable real-time clock. The packaged module has only a single electrical contact and a ground return, conforming to the specifications of the Dallas Semiconductor 1-Wire bus. Lithium-backed non-volatile SRAM offers high read/write speed and unparalleled tamper resistance through near-instantaneous clearing of all memory when tampering is detected, a feature known as rapid zeroization.

Data integrity and clock function are maintained for more than 10 years. The 16-millimeter-diameter stainless-steel enclosure accommodates the larger chip sizes needed for up to 128 kilobytes of high-speed nonvolatile static RAM. The small and extremely rugged packaging of the module allows it to attach to an accessory of your choice to match individual lifestyles, such as a key fob, wallet, watch, necklace, bracelet, or finger ring.


NAS

Information Technology (IT) departments are looking for cost-effective storage solutions that can offer performance, scalability, and reliability. As users on the network increase and the amounts of data generated multiply, the need for an optimized storage solution becomes essential. Network Attached Storage (NAS) is becoming a critical technology in this environment.

The benefit of NAS over the older Direct Attached Storage (DAS) technology is that it separates servers and storage, resulting in reduced costs and easier implementation. As the name implies, NAS attaches directly to the LAN, providing direct access to the file system and disk storage. Unlike DAS, the application layer no longer resides on the NAS platform, but on the client itself. This frees the NAS processor from functions that would ultimately slow down its ability to provide fast responses to data requests.

In addition, this architecture gives NAS the ability to service both Network File System (NFS) and Common Internet File System (CIFS) clients. This allows the IT manager to provide a single shared storage solution that can simultaneously support both Windows- and UNIX-based clients and servers. In fact, a NAS system equipped with the right file system software can support clients based on any operating system.

NAS is typically implemented as a network appliance, requiring a small form factor (in both footprint and height) as well as ease of use. NAS is a solution that meets the ever-demanding needs of today's networked storage market.


DNA Computing in security

In today's world, where no modern encryption algorithm is spared from security breaches, the world of information security is on the lookout for fresh ideas. Thus came the new theory of applying DNA computing to the fields of cryptography and steganography.

Though research has been done to demonstrate DNA computing and its use in the areas of cryptography, steganography and authentication, the limitations of sophisticated lab requirements, along with high labour costs, have kept DNA computing at bay from today's security world. On the other hand, DNA authentication has become a great boon.

LonWorks Protocol

A technology initiated by the Echelon Corporation in 1990, LonWorks provides a platform for building industrial, transportation, home automation and public utility control networks that can communicate with each other. Built on the Local Operating Network, it uses the LonTalk protocol for peer-to-peer communication between devices, without requiring a gateway or other intermediate hardware.

CELL PHONE VIRUSES AND SECURITY

As cell phones become part and parcel of our lives, the threats posed to them are also on the increase. Like the Internet, cell phones today are going online through technologies such as EDGE and GPRS. This online network of cell phones has exposed them to the high risks posed by malware: viruses, worms and Trojans designed for the mobile phone environment. The security threat caused by this malware is so severe that a time may soon come when hackers can infect mobile phones with malicious software that deletes personal data or runs up a victim's phone bill by making toll calls.

All this can lead to overloaded mobile networks, which can eventually crash, and to financial data theft, which poses particular risks for smartphones. As mobile technology is comparatively new and still in its developing stages compared with Internet technology, antivirus companies, along with the vendors of phones and mobile operating systems, have intensified research and development on this growing threat and are treating it more seriously.

Thursday, February 19, 2009

imode

Definition


imode is NTT DoCoMo's new Internet access system. It is an advanced, intelligent messaging service for digital mobile phones and other mobile terminals that allows you to see Internet content in a special text format on imode-enabled mobile phones. Enabling information access from handheld devices requires a deep understanding of both technical and market issues that are unique to the wireless environment. The imode specification was developed by the industry's best minds to address these issues. Wireless devices represent the ultimate constrained computing device, with limited CPU, memory and battery life and a simple user interface.


Wireless networks are constrained by low bandwidth, high latency and unpredictable availability and stability. The imode specification addresses these issues by using the best of existing standards and developing new extensions when needed. The imode solution leverages the tremendous investment in web servers, web development tools, web
programmers and web applications while solving the unique problems associated with the wireless domain. The specification ensures that this solution is fast, reliable and secure. The imode specification is developed and supported by the wireless telecommunication community so that the entire industry and its subscribers can benefit from a single, open specification.

NTT DoCoMo is a subsidiary of Japan's incumbent telephone operator NTT. The majority of NTT DoCoMo's shares are owned by NTT, and the majority of NTT's shares are owned by the Japanese government. NTT DoCoMo's shares are separately listed on the Tokyo Stock Exchange and on the Osaka Stock Exchange, and NTT DoCoMo's market value (capitalization) makes it one of the world's most valued companies.



Goals of imode
The goals of the imode forum are listed as follows.


To bring Internet content and advanced data services to wireless phones and other
wireless terminals.

To develop a global wireless protocol specification that works across all wireless
network technologies.


To enable the creation of content and applications that scale across a wide range
of wireless bearer networks and device types, i.e. to maintain device and bearer
independence


To embrace and extend existing standards and technology whenever possible and
appropriate.



The Technology:
imode consists of three technologies:


1. a smart handset
2. a new transmission protocol
3. a new markup language.


These three technologies together make up the brand name imode.

DNA Based Computing

Definition


Rediscovering Biology
Biology is now the study of information stored in DNA - strings of four letters: A, T, G, and C for the bases adenine, thymine, guanine and cytosine - and of the transformations that information undergoes in the cell. There is mathematics here. DNA polymerase is the king of enzymes - the maker of life. Under appropriate conditions, given a strand of DNA, DNA polymerase produces a second "Watson-Crick" complementary strand, in which every C is replaced by a G, every G by a C, every A by a T and every T by an A. For example, given a molecule with the sequence CATGTC, DNA polymerase will produce a new molecule with the sequence GTACAG. The polymerase enables DNA to reproduce, which in turn allows cells to reproduce and ultimately allows you to reproduce. For a strict reductionist, the replication performed by DNA polymerase is what life is all about.
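The Watson-Crick complementation the polymerase performs is easy to express in a few lines of code; the sketch below reproduces the CATGTC to GTACAG example from the text (for simplicity it does a plain base-by-base complement, ignoring the 5'-to-3' orientation of real strands):

```python
# Base-by-base Watson-Crick complement, as described above. Real strands are
# antiparallel, so biologists usually also reverse the result; the CATGTC ->
# GTACAG example in the text uses the plain complement, so we do too.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def watson_crick(strand: str) -> str:
    return "".join(COMPLEMENT[base] for base in strand.upper())

assert watson_crick("CATGTC") == "GTACAG"
```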


DNA polymerase is an amazing little nanomachine, a single molecule that "hops" onto a strand of DNA and slides along it, "reading" each base it passes and "writing" its complement onto a new, growing DNA strand. This is strikingly similar to the Turing machine, the "toy computer" devised for mathematical investigation of the notion of computability, which preceded the advent of actual computers by about a decade and led to some of the major mathematical results of the 20th century. The most striking of these was that Turing's toy computer turned out to be universal: it could be programmed to compute anything that was computable at all. In other words, one could program a Turing machine to produce Watson-Crick complementary strings, factor numbers, play chess and so on.


To build a DNA computer, the essential tools are (1) Watson-Crick pairing, (2) polymerases, (3) ligases, (4) nucleases, (5) gel electrophoresis, and (6) DNA synthesis.
To build a computer, only two things are really necessary - a method of storing information and a few simple operations for acting on that information.




DNA Computer Building


The Unrestricted model of DNA computing:
To build a DNA computer, the tools were essentially the following -

1. Watson-Crick pairing - every strand of DNA has its Watson-Crick complement.
2. Polymerases - to copy information from one molecule into another.
3. Ligases - to bind molecules together.
4. Nucleases -to cut nucleic acids.
5. Gel electrophoresis - a process to separate DNA by length
6. DNA synthesis - the ability to write down a DNA sequence and have it manufactured to order.
Since Adleman's original experiment, several methods to reduce error and improve efficiency have been developed. The Restricted model of DNA computing solves several physical problems with the Unrestricted model. The Restricted model simplifies the physical obstructions in exchange for some additional logical considerations. The purpose of this restructuring is to simplify biochemical operations and reduce the errors due to physical obstructions.

Autonomic Computing

Definition
The millions of businesses, billions of humans that compose them, and trillions of devices that they will depend upon all require the services of the IT industry to keep them running. And it's not just a matter of numbers. It's the complexity of these systems and the way they work together that is creating a shortage of skilled IT workers to manage all of the systems. It's a problem that is not going away, but will grow exponentially, just as our dependence on technology has.
The solution is to build computer systems that regulate themselves much in the same way our autonomic nervous system regulates and protects our bodies. This new model of computing is called autonomic computing. The good news is that some components of this technology are already up and running. However, complete autonomic systems do not yet exist. Autonomic computing calls for a whole new area of study and a whole new way of conducting business.




The Benefits
Autonomic computing was conceived to lessen the spiraling demands for skilled IT resources, reduce complexity and to drive computing into a new era that may better exploit its potential to support higher order thinking and decision making. Immediate benefits will include reduced dependence on human intervention to maintain complex systems accompanied by a substantial decrease in costs. Long-term benefits will allow individuals, organizations and businesses to collaborate on complex problem solving.




The Problem
Within the past two decades the development of raw computing power coupled with the proliferation of computer devices has grown at exponential rates. This phenomenal growth along with the advent of the Internet have led to a new age of accessibility - to other people, other systems, and most importantly, to information. This boom has also led to unprecedented levels of complexity.


The simultaneous explosion of information and integration of technology into everyday life has brought on new demands for how people manage and maintain computer systems. Demand is already outpacing supply when it comes to managing complex, and even simple computer systems. Even in uncertain economic times, demand for skilled IT workers is expected to increase by over 100 percent in the next six years.


As access to information becomes omnipresent through PCs, handheld and wireless devices, the stability of the current infrastructure, systems, and data is at increasingly greater risk of outages and general disrepair. IBM believes that we are quickly reaching a threshold moment in the evolution of the industry's views toward computing in general and the associated infrastructure, middleware, and services that maintain them. The increasing system complexity is reaching a level beyond human ability to manage and secure.


This increasing complexity with a shortage of skilled IT professionals points towards an inevitable need to automate many of the functions associated with computing today.




The Solution
IBM's proposed solution looks at the problem from the most important perspective: the end user's. How do IT customers want computing systems to function? They want to interact with them intuitively, and they want to have to be far less involved in running them. Ideally, they would like computing systems to pretty much take care of the mundane elements of management by themselves.
The most direct inspiration for this functionality that exists today is the autonomic function of the human central nervous system. Autonomic controls use motor neurons to send indirect messages to organs at a sub-conscious level. These messages regulate temperature, breathing, and heart rate without conscious thought. The implications for computing are immediately evident; a network of organized, "smart" computing components that give us what we need, when we need it, without a conscious mental or even physical effort.
IBM has named its vision for the future of computing "autonomic computing." This new paradigm shifts the fundamental definition of the technology age from one of computing, to one defined by data. Access to data from multiple, distributed sources, in addition to traditional centralized storage devices will allow users to transparently access information when and where they need it. At the same time, this new view of computing will necessitate changing the industry's focus on processing speed and storage to one of developing distributed networks that are largely self-managing, self-diagnostic, and transparent to the user.

Short Message Service (SMS)

Definition
Short message service (SMS) is a globally accepted wireless service that enables the transmission of alphanumeric messages between mobile subscribers and external systems such as electronic mail, paging, and voice mail systems. The benefits of SMS to subscribers center on convenience, flexibility, and seamless integration of messaging services and data access. From this perspective, the benefit is being able to use the handset as an extension of the computer. SMS also eliminates the need for separate devices for messaging, as services can be integrated into a single wireless device: the mobile terminal. SMS provides a time stamp reporting the time of submission of the message and an indication to the handset of whether there are more messages to send (in GSM) or of the number of additional messages to send.

SMS appeared on the wireless scene in 1991 in Europe. The European standard for digital wireless, now known as the Global System for Mobile Communications (GSM), included short messaging services from the outset.
In North America, SMS was made available initially on digital wireless networks built by early pioneers such as BellSouth Mobility, PrimeCo, and Nextel, among others. These digital wireless networks are based on GSM, code division multiple access (CDMA), and time division multiple access (TDMA) standards. Network consolidation from mergers and acquisitions has resulted in large wireless networks having nationwide or international coverage and sometimes supporting more than one wireless technology. This new class of service providers demands network-grade products that can easily provide a uniform solution, enable ease of operation and administration, and accommodate existing subscriber capacity, message throughput, future growth, and services reliably.

Short messaging service center (SMSC) solutions based on an intelligent network (IN) approach are well suited to satisfy these requirements, while adding all the benefits of IN implementations handling multiple input sources, including a voice-mail system (VMS), Web-based messaging, e-mail integration, and other external short message entities (ESMEs). Communication with the wireless network elements such as the home location register (HLR) and mobile switching center (MSC) is achieved through the signal transfer point (STP). SMS provides a mechanism for transmitting short messages to and from wireless devices. The service makes use of an SMSC, which acts as a store-and-forward system for short messages.

The wireless network provides the mechanisms required to find the destination station(s) and transports short messages between the SMSCs and wireless stations. In contrast to other existing text-message transmission services such as alphanumeric paging, the service elements are designed to provide guaranteed delivery of text messages to the destination. Additionally, SMS supports several input mechanisms that allow interconnection with different message sources and destinations. A distinguishing characteristic of the service is that an active mobile handset is able to receive or submit a short message at any time, independent of whether a voice or data call is in progress (in some implementations, this may depend on the MSC or SMSC capabilities). SMS also guarantees delivery of the short message by the network. Temporary failures due to unavailable receiving stations are identified, and the short message is stored in the SMSC until the destination device becomes available.
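A toy sketch of the store-and-forward rule described above, under simplified assumptions (a real SMSC also enforces validity periods, issues delivery reports, and is told by the HLR when a subscriber becomes reachable):

```python
# Toy model of SMSC behaviour: if the destination handset is unreachable, the
# short message is stored and forwarded once the destination becomes available.
pending = {}   # destination number -> list of stored short messages

def deliver(dest, text):
    print(f"delivering to {dest}: {text}")

def submit_sm(dest, text, reachable):
    if reachable(dest):
        deliver(dest, text)
    else:
        pending.setdefault(dest, []).append(text)   # store for later attempts

def destination_available(dest):
    for text in pending.pop(dest, []):              # forward everything queued
        deliver(dest, text)

submit_sm("+1555000111", "hello", reachable=lambda d: False)  # handset off: stored
destination_available("+1555000111")                          # handset on: delivered
```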

SMS is characterized by out-of-band packet delivery and low-bandwidth message transfer, which results in a highly efficient means for transmitting short bursts of data. Initial applications of SMS focused on eliminating alphanumeric pagers by permitting two-way general-purpose messaging and notification services, primarily for voice mail. As technology and networks evolved, a variety of services have been introduced, including e-mail, fax, and paging integration, interactive banking, information services such as stock quotes, and integration with Internet-based applications. Wireless data applications include downloading of subscriber identity module (SIM) cards for activation, debit, profile-editing purposes, wireless points of sale (POSs), and other field-service applications such as automatic meter reading, remote sensing, and location-based services. Additionally, integration with the Internet spurred the development of Web-based messaging and other interactive applications such as instant messaging, gaming, and chatting.

Millipede

Definition
Today data storage is dominated by the use of magnetic disks. Storage densities of more than 5 Gb/cm² have been achieved, and in the past 40 years areal density has increased by six orders of magnitude. But there is a physical limit. It has been predicted that superparamagnetic effects - the bit size at which stored information becomes volatile as a function of time - will limit the densities of current longitudinal recording media to about 15.5 Gb/cm². In the near future, the nanometer scale will presumably pervade the field of data storage. In the magnetic storage used today, there is no clear-cut way to achieve the nanometer scale in all three dimensions, so new techniques like holographic memory and probe-based data storage are emerging. If an emerging technology is to be considered a serious candidate to replace an existing technology, it should offer long-term perspectives: any new technology with better areal density than today's magnetic storage should have long-term potential for further scaling, desirably down to the nanometer or even atomic scale.

The only available tool known today that is simple and yet offers these long-term perspectives is a nanometer-sharp tip, as used in the atomic force microscope (AFM) and the scanning tunneling microscope (STM). The simple tip is a very reliable tool that concentrates on one functionality: the ultimate local confinement of interaction. In local probe-based data storage we have a cantilever with a very small tip at its end. Small indentations are made in a polymer medium laid over a silicon substrate; these indentations serve as data storage locations. A single AFM operates best on the microsecond time scale. Conventional magnetic storage, however, operates at best on the nanosecond time scale, making it clear that AFM data rates have to be improved by at least three orders of magnitude to be competitive with current and future magnetic recording. The "millipede" concept is a new approach for storing data at high speed and with ultrahigh density.


Millipede Concept
Millipede is a highly parallel, scanning-probe-based data storage concept with areal storage densities far beyond superparamagnetic limits and data rates comparable to today's magnetic recording. At first glance, millipede looks like a conventional 14 x 7 mm² silicon chip. Mounted at the center of the chip is a miniature two-dimensional array of 1024 'V'-shaped cantilevered arms that are 70 µm long and 0.5 µm thick. A nano-sharp, fang-like tip, only 20 nm in diameter, hangs from the apex of each cantilever. Multiplexed drivers allow each tip to be addressed individually. Beneath the cantilever array is a thin layer of polymer film deposited on a movable, three-axis silicon table. The 2-D AFM cantilever array storage technique called "millipede" is based on mechanical parallel x/y scanning of either the entire cantilever array chip or the storage medium.

In addition, a feedback-controlled z-approaching and leveling scheme brings the entire cantilever array chip into contact with the storage medium. The tip-medium contact is maintained and controlled while x/y scanning is performed for read/write. The millipede approach is not based on individual z-feedback for each cantilever; rather, it uses a feedback control for the entire chip, which greatly simplifies the system. However, this requires very good control and uniformity of tip height and cantilever bending. Chip approach/leveling makes use of additional approaching cantilever sensors integrated in the corners of the array chip to control the approach of the chip to the storage medium. Signals from these sensors provide feedback to adjust the z-actuators until contact with the medium is established. Feedback loops keep the chip leveled and in contact with the surface while x/y scanning is performed for write/read operations.


Millipede Is Unique
Conventional data storage devices, such as disk drives and CDs/DVDs, are based on systems that sense changes in magnetic fields or light to perform the read/write/store/erase functions. Millipede is unique both in form and in the way it performs data storage tasks; it is based on a chip-mounted mechanical system that senses a physical change in the storage medium. The millipede's technology is actually closer, although on an atomic scale, to the archaic punched card than to more recent magnetic media. Using millipede, IBM scientists have demonstrated a data storage density of a trillion bits per square inch - 20 times higher than the densest magnetic storage available today. Millipede is dense enough to store the equivalent of 25 DVDs on a surface the size of a postage stamp. This technology may boost the storage capacity of handheld devices - personal digital assistants (PDAs) and cell phones - often criticized for their low storage capabilities.
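A quick back-of-the-envelope check of the "25 DVDs on a postage stamp" claim, assuming a stamp of roughly one square inch and 4.7 GB single-layer DVDs:

```python
# Rough check of the density claim above, assuming a ~1 square inch stamp and
# 4.7 GB (single-layer) DVDs.
bits_per_sq_inch = 1e12                 # one trillion bits per square inch
bytes_per_sq_inch = bits_per_sq_inch / 8
gigabytes = bytes_per_sq_inch / 1e9     # ~125 GB per square inch
dvds = gigabytes / 4.7                  # ~26 single-layer DVDs
print(round(gigabytes), "GB per square inch, about", round(dvds), "DVDs")
```

The result, roughly 125 GB or 26 DVDs per square inch, is consistent with the figure quoted above.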

AC Performance Of Nanoelectronics

Definition
Nanoelectronic devices fall into two classes: tunnel devices and ballistic transport devices. In tunnel devices, single-electron effects occur if the tunnel resistance is larger than the resistance quantum h/e² ≈ 25.8 kΩ. In ballistic devices with cross-sectional dimensions in the range of the quantum mechanical wavelength of electrons, the resistance is also of order h/e² ≈ 25.8 kΩ. This high resistance may seem to restrict the operational speed of nanoelectronics in general. However, the capacitance values and drain-source spacing are typically small, which gives rise to very small RC times and transit times on the order of picoseconds or less. Thus the speed may be very large, up to the THz range. The goal of this seminar is to present models and performance predictions about the effects that set the speed limit in carbon nanotube transistors, which form an ideal test bed for understanding the high-frequency properties of nanoelectronics because they may behave as ideal ballistic 1-D transistors.


Ballistic Transport- An Outline
When carriers travel through a semiconductor material, they are likely to be scattered by any number of possible sources, including acoustic and optical phonons, ionized impurities, defects, interfaces, and other carriers. If, however, the distance traveled by the carrier is smaller than the mean free path, it is likely not to encounter any scattering events; it can, as a result, move ballistically through the channel. To the first order, the existence of ballistic transport in a MOSFET depends on the value of the characteristic scattering length (i.e. mean free path) in relation to channel length of the transistor.


This scattering length, l, can be estimated from the measured carrier mobility (the standard relation is written out below), where τ is the average scattering time, m* is the carrier effective mass, and v_th is the thermal velocity. Because scattering mechanisms determine the extent of ballistic transport, it is important to understand how they depend upon operating conditions such as the normal electric field and the ambient temperature.
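For completeness, the estimate being referred to is the standard textbook relation (written out here because the original equation did not survive; q is the elementary charge):

$$\tau = \frac{\mu \, m^{*}}{q}, \qquad \ell \;\approx\; v_{th}\,\tau \;=\; \frac{\mu \, m^{*}\, v_{th}}{q}$$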


Dependence On Normal Electric Field
In state-of-the-art MOSFET inversion layers, carrier scattering is dominated by phonons, impurities (Coulomb interaction), and surface roughness scattering at the Si-SiO2 interface. The relative importance of each scattering mechanism depends on the effective electric field component normal to the conduction channel. At low fields, impurity scattering dominates due to strong Coulombic interactions between the carriers and the impurity centers. As the electric field is increased, acoustic phonons begin to dominate the scattering process. At very high fields, carriers are pulled closer to the Si-SiO2 gate oxide interface, so surface roughness scattering degrades carrier mobility. A universal mobility model has been developed to relate field strength to the effective carrier mobility due to phonon and surface roughness scattering.


Dependence On Temperature
When the temperature is changed, the relative importance of each of the aforementioned scattering mechanisms is altered. Phonon scattering becomes less important at very low temperatures. Impurity scattering, on the other hand, becomes more significant because carriers are moving more slowly (the thermal velocity is decreased) and thus have more time to interact with impurity centers. Surface roughness scattering remains the same because it does not depend on temperature. At liquid nitrogen temperature (77 K) and an effective electric field of 1 MV/cm, the electron and hole mobilities are ~700 cm²/V·s and ~100 cm²/V·s, respectively. Using the above equations, the scattering lengths are approximately 17 nm and 3.6 nm. These scattering lengths can be taken as worst-case scenarios, as large operating voltages (1 V) and aggressively scaled gate oxides (10 Å) are assumed. Thus, actual scattering lengths will likely be larger than the calculated values.

Further device design considerations in maximizing this scattering length will be discussed in the last section of this paper. Still, the values calculated above are certainly in the range of transistor gate lengths currently being studied in advanced MOSFET research (<50nm). Ballistic carrier transport should thus become increasingly important as transistor channel lengths are further reduced in size. In addition, it should be noted that the mean free path of holes is generally smaller than that of electrons. Thus, it should be expected that ballistic transport in PMOS transistors is more difficult to achieve, since current conduction occurs through hole transport. Calculation of the mean scattering length, however, can only be regarded as a first-order estimation of ballistic transport.

To accurately determine the extent of ballistic transport evident in a particular transistor structure, Monte Carlo simulation methods must be employed. Only by modeling the random trajectory of each carrier traveling through the channel can we truly assess the extent of ballistic transport in a MOSFET.

4G Wireless Systems

Definition
A fourth generation wireless system is a packet-switched wireless system with wide-area coverage and high throughput. It is designed to be cost effective and to provide high spectral efficiency. 4G wireless uses Orthogonal Frequency Division Multiplexing (OFDM), Ultra Wide Band (UWB) radio, and millimeter wireless. Data rates of 20 Mbps are employed, and mobile speeds of up to 200 km/h are supported. The high performance is achieved by the use of long-term channel prediction, in both time and frequency, scheduling among users, and smart antennas combined with adaptive modulation and power control. The frequency band is 2-8 GHz, and the system gives the ability to roam worldwide and access a cell anywhere.


Wireless mobile communications systems are uniquely identified by "generation" designations. Introduced in the early 1980s, first generation (1G) systems were marked by analog frequency modulation and used primarily for voice communications. Second generation (2G) wireless communications systems, which made their appearance in the late 1980s, were also used mainly for voice transmission and reception. The wireless system in widespread use today goes by the name of 2.5G, an "in between" service that serves as a stepping stone to 3G. Whereas 2G communications is generally associated with Global System for Mobile (GSM) service, 2.5G is usually identified as being "fueled" by General Packet Radio Service (GPRS) along with GSM. 3G systems, making their appearance in late 2002 and in 2003, are designed for voice and paging services as well as interactive media use such as teleconferencing, Internet access, and other services. The problem with 3G wireless systems is bandwidth: these systems provide only WAN coverage ranging from 144 kbps (for vehicle mobility applications) to 2 Mbps (for indoor static applications). Segue to 4G, the "next dimension" of wireless communication. 4G wireless uses Orthogonal Frequency Division Multiplexing (OFDM), Ultra Wide Band (UWB) radio, millimeter wireless and smart antennas. Data rates of 20 Mbps are employed, mobile speeds of up to 200 km/h are supported, and the frequency band is 2-8 GHz. It gives the ability to roam worldwide and access a cell anywhere.


Features:
o Support for interactive multimedia, voice, streaming video, Internet, and other broadband services
o IP based mobile system
o High speed, high capacity, and low cost per bit
o Global access, service portability, and scalable mobile services
o Seamless switching, and a variety of Quality of Service driven services
o Better scheduling and call admission control techniques
o Ad hoc and multi hop networks (the strict delay requirements of voice make multi hop network service a difficult problem)
o Better spectral efficiency
o Seamless network of multiple protocols and air interfaces (since 4G will be all-IP, look for 4G systems to be compatible with all common network technologies, including 802.11, WCDMA, Bluetooth, and HiperLAN).
o An infrastructure to handle pre-existing 3G systems along with other wireless technologies, some of which are currently under development.

NRAM

Definition
Nano-RAM (NRAM) is a proprietary computer memory technology from the company Nantero; the nanomotor was invented by the University of Bologna and the California NanoSystems Institute. NRAM is a type of nonvolatile random access memory based on the mechanical position of carbon nanotubes deposited on a chip-like substrate. In theory, the small size of the nanotubes allows for very high density memories. Nantero also refers to it as NRAM for short, but this acronym is also commonly used as a synonym for the more common NVRAM, which refers to all nonvolatile RAM memories. The nanomotor is a molecular motor which works continuously without the consumption of fuel; it is powered by sunlight. The research is federally funded by the National Science Foundation and the National Academy of Sciences.

Carbon Nanotubes
Carbon nanotubes (CNTs) are a recently discovered allotrope of carbon. They take the form of cylindrical carbon molecules and have novel properties that make them potentially useful in a wide variety of applications in nanotechnology, electronics, optics, and other fields of materials science. They exhibit extraordinary strength and unique electrical properties, and are efficient conductors of heat. Inorganic nanotubes have also been synthesized.
A nanotube is a member of the fullerene structural family, which also includes buckyballs. Whereas buckyballs are spherical in shape, a nanotube is cylindrical, with at least one end typically capped with a hemisphere of the buckyball structure. Their name is derived from their size, since the diameter of a nanotube is on the order of a few nanometers (approximately 50,000 times smaller than the width of a human hair), while they can be up to several millimeters in length. There are two main types of nanotubes: single-walled nanotubes (SWNTs) and multi-walled nanotubes (MWNTs).


Manufacturing a nanotube is dependent on applied quantum chemistry, specifically, orbital hybridization. Nanotubes are composed entirely of sp2 bonds, similar to those of graphite. This bonding structure, stronger than the sp3 bonds found in diamond, provides the molecules with their unique strength. Nanotubes naturally align themselves into "ropes" held together by Van der Waals forces. Under high pressure, nanotubes can merge together, trading some sp2 bonds for sp3 bonds, giving great possibility for producing strong, unlimited-length wires through high-pressure nanotube linking.


Fabrication Of NRAM
This nanoelectromechanical memory, called NRAM, is a memory with actual moving parts, with dimensions measured in nanometers. Its carbon nanotube based technology takes advantage of van der Waals forces to create the basic on/off junctions of a bit. Van der Waals forces are interactions between atoms that enable noncovalent binding; they rely on electron attractions that arise only at the nanoscale as a force to be reckoned with. The company uses this property in its design to integrate nanoscale material properties with established CMOS fabrication techniques.


Storage In NRAM
NRAM works by balancing carbon nanotubes on ridges of silicon. Under differing electric charges, the tubes can be physically swung into one of two positions, representing ones and zeros. Because the tubes are so small, this movement is very fast and needs very little power, and because the tubes are said to be a thousand times as conductive as copper, it is easy to sense their position and read back the data. Once in position, the tubes stay there until a signal resets them.
The bit itself is not stored in the nanotube, but rather as the position of the nanotube: up is bit 0 and down is bit 1. Bits are switched between the states by the application of an electric field.


The technology works by changing the charge placed on a latticework of crossed nanotubes. By altering the charges, engineers can cause the tubes to bind together or separate, creating the ones and zeros that form the basis of computer memory. If two nanotubes perpendicular to each other carry opposite charges, they will bend together and touch; if they carry like charges, they will repel. These two positions are used to store one and zero. The chip stays in the same state until another change is made in the electric field, so when you turn the computer off, the memory is not erased. All the data can be kept in the NRAM, giving your computer an instant boot.

EDGE

Introduction
EDGE is the next step in the evolution of GSM and IS-136. The objective of the new technology is to increase data transmission rates and spectrum efficiency and to facilitate new applications and increased capacity for mobile use. With the introduction of EDGE in GSM Phase 2+, existing services such as GPRS and high-speed circuit-switched data (HSCSD) are enhanced by offering a new physical layer. The services themselves are not modified. EDGE is introduced within existing specifications and descriptions rather than by creating new ones. This paper focuses on the packet-switched enhancement for GPRS, called EGPRS. GPRS allows data rates of 115 kbps and, theoretically, of up to 160 kbps on the physical layer. EGPRS is capable of offering data rates of 384 kbps and, theoretically, of up to 473.6 kbps.

A new modulation technique and error-tolerant transmission methods, combined with improved link adaptation mechanisms, make these EGPRS rates possible. This is the key to increased spectrum efficiency and enhanced applications, such as wireless Internet access, e-mail and file transfers.


GPRS/EGPRS will be one of the pacesetters in the overall wireless technology evolution in conjunction with WCDMA. Higher transmission rates for specific radio resources enhance capacity by enabling more traffic for both circuit- and packet-switched services. As the Third-generation Partnership Project (3GPP) continues standardization toward the GSM/EDGE radio access network (GERAN), GERAN will be able to offer the same services as WCDMA by connecting to the same core network. This is done in parallel with means to increase the spectral efficiency. The goal is to boost system capacity, both for real- time and best-effort services, and to compete effectively with other third-generation radio access networks such as WCDMA and cdma2000.


Technical differences between GPRS and EGPRS


Introduction
Regarded as a subsystem within the GSM standard, GPRS has introduced packet-switched data into GSM networks. Many new protocols and new nodes have been introduced to make this possible. EDGE is a method to increase the data rates on the radio link for GSM. Basically, EDGE only introduces a new modulation technique and new channel coding that can be used to transmit both packet-switched and circuit-switched voice and data services. EDGE is therefore an add-on to GPRS and cannot work alone. GPRS has a greater impact on the GSM system than EDGE has. By adding the new modulation and coding to GPRS and by making adjustments to the radio link protocols, EGPRS offers significantly higher throughput and capacity.

GPRS and EGPRS have different protocols and different behavior on the base station system side. However, on the core network side, GPRS and EGPRS share the same packet-handling protocols and, therefore, behave in the same way. Reuse of the existing GPRS core infrastructure (serving GPRS support node/gateway GPRS support node) emphasizes the fact that EGPRS is only an "add-on" to the base station system and is therefore much easier to introduce than GPRS. In addition to enhancing the throughput for each data user, EDGE also increases capacity. With EDGE, the same time slot can support more users. This decreases the number of radio resources required to support the same traffic, thus freeing up capacity for more data or voice services. EDGE makes it easier for circuit-switched and packet-switched traffic to coexist, while making more efficient use of the same radio resources. Thus in tightly planned networks with limited spectrum, EDGE may also be seen as a capacity booster for the data traffic.


EDGE technology
EDGE leverages the knowledge gained through use of the existing GPRS standard to deliver significant technical improvements. Figure 2 compares the basic technical data of GPRS and EDGE. Although GPRS and EDGE share the same symbol rate, the modulation bit rate differs. EDGE can transmit three times as many bits as GPRS during the same period of time. This is the main reason for the higher EDGE bit rates. The differences between the radio and user data rates are the result of whether or not the packet headers are taken into consideration. These different ways of calculating throughput often cause misunderstanding within the industry about actual throughput figures for GPRS and EGPRS. The data rate of 384 kbps is often used in relation to EDGE. The International Telecommunications Union (ITU) has defined 384 kbps as the data rate limit required for a service to fulfill the International Mobile Telecommunications-2000 (IMT-2000) standard in a pedestrian environment. This 384 kbps data rate corresponds to 48 kbps per time slot, assuming an eight-time slot terminal.
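A quick arithmetic check of the figures quoted in this section, assuming the standard GSM symbol rate of about 270.833 ksymbols/s, 1 bit per symbol for GPRS's GMSK modulation and 3 bits per symbol for EDGE's 8-PSK (these modulation parameters are standard GSM/EDGE values rather than numbers taken from this text):

```python
# Arithmetic check of the EDGE vs GPRS rates discussed above.
symbol_rate = 270.833e3                 # GSM symbol rate, symbols per second
gprs_mod_rate = symbol_rate * 1         # GMSK: ~271 kbps modulation bit rate
edge_mod_rate = symbol_rate * 3         # 8-PSK: ~812 kbps, three times GPRS

slots = 8                               # an eight-timeslot terminal
print(473.6 / slots)                    # 59.2 kbps per slot (EGPRS theoretical max)
print(384.0 / slots)                    # 48.0 kbps per slot (IMT-2000 pedestrian target)
print(160.0 / slots)                    # 20.0 kbps per slot (GPRS theoretical figure above)
```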

Delay Tolerant Networking

Introduction
Increasingly, network applications must communicate with counterparts across disparate networking environments characterized by significantly different sets of physical and operational constraints; wide variations in transmission latency are particularly troublesome. The proposed Interplanetary Internet, which must encompass both terrestrial and interplanetary links, is an extreme case. An architecture based on a "least common denominator" protocol that can operate successfully and (where required) reliably in multiple disparate environments would simplify the development and deployment of such applications. The highly successful architecture and supporting protocols of today's Internet are ill suited for this purpose. But Delay Tolerant Networking will crossover this bottle-neck. In this seminar the fundamental principles that would underlie a delay-tolerant networking (DTN) architecture and the main structural elements of that architecture, centered on a new end-to-end overlay network protocol called Bundling is examined.

The US Defense Advanced Research Projects Agency (DARPA), as part of its "Next Generation Internet" initiative, has recently been supporting a small group at the Jet Propulsion Laboratory (JPL) in Pasadena, California to study the technical architecture of an "Interplanetary Internet". The idea was to blend ongoing work in standardized space communications capabilities with state of the art techniques being developed within the terrestrial Internet community, with a goal of facilitating a transition as the Earth's Internet moves off-planet. The "Interplanetary Internet" name was deliberately coined to suggest a far-future integration of space and terrestrial
communications infrastructure to support the migration of human intelligence throughout the Solar System. Joining the JPL team in this work was one of the original designers of the Internet and co-inventor of the "Transmission Control Protocol/Internet Protocol" (TCP/IP) protocol suite. Support for the work has recently transitioned from DARPA to NASA.


An architecture based on a "least common denominator" protocol that can operate successfully and reliably in multiple disparate environments would simplify the development and deployment of the Interplanetary Internet. It is this analysis that led to the proposal of the Delay-Tolerant Networking (DTN) architecture, an architecture that can support deep space applications, centered on a new end-to-end overlay network protocol called 'Bundling'. The architecture and protocols developed for the project could also be useful in terrestrial environments where dependence on real-time interactive communication is not possible. The Internet protocols are ill suited for this purpose, while the overlay protocol used in the DTN architecture serves to bridge between different stacks at the boundaries between environments in a standard manner, in effect providing a general-purpose application-level gateway infrastructure that can be used by any number of applications. DTN is an architecture based on Internet-independent middleware: use exactly those protocols at all layers that are best suited to operation within each environment, but insert a new overlay network protocol between the applications and the locally optimized stacks.


Research on extending Earth's Internet into interplanetary space has been underway for several years as part of an international communications standardization body known as the Consultative Committee for Space Data Systems (CCSDS). CCSDS is primarily concerned with communications standards for scientific satellites, with a focus more on the needs of near-term missions. To extend this horizon into the future, and to involve the terrestrial Internet research and engineering communities, a special Interplanetary Internet study was proposed and subsequently sponsored in the United States by NASA's Jet Propulsion Laboratory (JPL) and DARPA's Next Generation Internet initiative.

Code Division Duplexing

Introduction
Reducing interference in a cellular system is the most effective approach to increasing radio capacity and transmission data rate in the wireless environment. Therefore, reducing interference is a difficult and important challenge in wireless communications.


In every two-way communication system it is necessary to use separate channels to transmit information in each direction. This is called duplexing. Currently there exist only two duplexing technologies in wireless communications, Frequency division duplexing (FDD) and time division duplexing (TDD). FDD has been the primary technology used in the first three generations of mobile wireless because of its ability to isolate interference. TDD is seemingly a more spectral efficient technology but has found limited use because of interference and coverage problems.


Code-division duplexing (CDD) is an innovative solution that can eliminate all of these kinds of interference. Among multiple access schemes, CDMA is the best at combating interference. However, the codes used in a CDMA system can be of more than one type, and a set of smart codes can make a high-capacity CDMA system effective without adding other technologies. Smart codes combined with TDD constitute CDD. This paper will elaborate on a set of smart codes that can make an efficient CDD system a reality. The CDMA system based on them is known as LAS-CDMA, where LAS denotes the set of smart codes. LAS-CDMA is a new coding technology intended to increase the capacity and spectral efficiency of mobile networks; it uses the smart codes to restrict interference, the property that most adversely affects the efficiency of CDMA networks.

To utilize spectrum efficiently, two transmission techniques need to be considered: a multiple access scheme and a duplexing system. There are three multiple access schemes: TDMA, FDMA and CDMA. The industry has already settled on the best multiple access scheme, code-division multiple access (CDMA), for 3G systems. The next step is to select the best duplexing system. Duplexing systems are used for two-way communication. Presently, only two duplexing systems are in use: frequency-division duplexing (FDD) and time-division duplexing (TDD). The former uses different frequencies to handle incoming and outgoing signals; the latter uses a single frequency but different time slots.

Among current cellular duplexing systems, FDD, not TDD, has been the usual choice. All cellular systems today use frequency-division duplexing in an attempt to eliminate interference from adjacent cells. A range of supporting technologies has limited the effects of interference, but certain types still remain. Time-division duplexing has not been used for mobile cellular systems because it is even more susceptible to interference, so TDD has been confined to small, contained deployments. Code-division duplexing is an innovative solution that can eliminate all kinds of interference, and eliminating all types of interference makes CDD the most spectrum-efficient duplexing system.


CDMA overview


Interference and Capacity
One of the key criteria in evaluating a communication system is its spectral efficiency, i.e. the system capacity for a given bandwidth, or sometimes the total data rate supported by the system. For a given bandwidth, the capacity of narrowband radio systems is dimension limited, while the capacity of a traditional CDMA system is interference limited: traditional CDMA systems are all self-interference systems. Three types of interference are usually considered: ISI (Inter-Symbol Interference), created by the multipath replicas of the useful signal itself; MAI (Multiple Access Interference), created by the signals of other users and their multipath replicas; and ACI (Adjacent Cell Interference), the interfering signals from adjacent cells.
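The "interference limited" nature of CDMA capacity can be illustrated with the standard textbook approximation for the number of simultaneous users per cell, roughly N ~ (W/R)/(Eb/I0), adjusted for voice activity and other-cell interference. The figures below (1.25 MHz bandwidth, 9.6 kbps voice, 6 dB Eb/I0, and typical loading factors) are illustrative assumptions, not values taken from this article.

# Rough, interference-limited capacity estimate for a single CDMA carrier.
# All numeric values are illustrative assumptions.
W = 1.25e6          # spread bandwidth in Hz
R = 9.6e3           # information rate in bit/s
eb_i0_db = 6.0      # required energy-per-bit to interference density, in dB
alpha = 0.4         # voice activity factor
f = 0.6             # other-cell to own-cell interference ratio

processing_gain = W / R
eb_i0 = 10 ** (eb_i0_db / 10)

# Every additional user adds interference for the others, so the per-cell
# capacity is approximately the processing gain divided by the per-user load.
users = processing_gain / (eb_i0 * alpha * (1 + f))
print(f"processing gain ~ {processing_gain:.0f}, users per cell ~ {users:.0f}")

With these assumed numbers the estimate comes out at roughly fifty users per carrier, and it is easy to see from the formula why any reduction in interference (smaller f, better codes) translates directly into capacity.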

64-Bit Computing

Introduction
The question of why we need 64-bit computing is often asked but rarely answered in a satisfactory manner, and there are good reasons for the confusion surrounding it. So, first of all, consider the users who already need 64-bit addressing and 64-bit calculations today:

1. Users of CAD, design systems and simulators need more than 4 GB of RAM. Although there are ways to work around this limitation (for example, Intel PAE), they cost performance. The Xeon processors, for instance, support a 36-bit addressing mode in which they can address up to 64 GB of RAM: the RAM is divided into segments, and an address consists of a segment number plus an offset within that segment. This approach costs almost 30% of performance in memory operations. Programming is also much simpler and more convenient with a flat memory model in a 64-bit address space, because the large address space gives every location a single, simple address processed in one pass. Many design offices have long used quite expensive RISC workstations precisely because those machines already offer 64-bit addressing and large memory sizes.

2. Users of databases. Any big company has a huge database, and extending the maximum memory size so that data in the database can be addressed directly is otherwise very costly. Although in special modes the 32-bit IA-32 architecture can address up to 64 GB of memory, a transition to a flat memory model in a 64-bit address space is much more advantageous in terms of speed and ease of programming.

3. Scientific calculations. Memory size, a flat memory model and the absence of limits on the data being processed are the key factors here. Besides, some algorithms take a much simpler form in a 64-bit representation.

4. Cryptography and security applications, which benefit greatly from 64-bit integer calculations.
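As a quick illustration of the 4 GB ceiling discussed above, the reachable address space follows directly from the pointer width. The snippet below is simple arithmetic, not a benchmark; the 36-bit case corresponds to the PAE-style segmented mode mentioned in the text.

# Address space reachable with flat pointers of a given width.
def addressable_bytes(pointer_bits):
    return 2 ** pointer_bits

for bits in (32, 36, 64):
    gib = addressable_bytes(bits) / 2**30
    print(f"{bits}-bit addresses reach {gib:,.0f} GiB")

# 32-bit ->             4 GiB  (the classic limit)
# 36-bit ->            64 GiB  (PAE-style segmented addressing)
# 64-bit -> 17,179,869,184 GiB (a flat space far beyond current needs)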

The labels "16-bit," "32-bit" or "64-bit," when applied to a microprocessor, characterize the processor's data stream. Although you may have heard the term "64-bit code," this designates code that operates on 64-bit data. In more specific terms, the labels "64-bit," "32-bit," etc. designate the number of bits that each of the processor's general-purpose registers (GPRs) can hold. So when someone uses the term "64-bit processor," what they mean is "a processor with GPRs that store 64-bit numbers." In the same vein, a "64-bit instruction" is an instruction that operates on 64-bit numbers. In the block diagrams usually drawn for such a machine, black boxes are code, white boxes are data, and gray boxes are results; the instruction and code "sizes" are not to be taken literally, since they are only intended to convey a general feel for what it means to "widen" a processor from 32 bits to 64 bits.


Not all the data in memory, the cache, or the registers is 64-bit data. Rather, the data sizes are mixed, with 64 bits being the widest. Note that in such a 64-bit CPU the width of the code stream has not changed; the same-sized opcode could theoretically represent an instruction that operates on 32-bit numbers or one that operates on 64-bit numbers, depending on the opcode's default data size. The width of the data stream, on the other hand, has doubled, and to accommodate it the sizes of the processor's registers and of the internal data paths that feed those registers must be doubled as well.
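One way to see what "GPRs that store 64-bit numbers" means in practice is to emulate the register widths with fixed-size integers. The ctypes wrapper types used below are standard Python; the example merely shows where a 32-bit register wraps around while a 64-bit one does not.

import ctypes

value = 5_000_000_000            # larger than 2**32 - 1

r32 = ctypes.c_uint32(value)     # what a 32-bit GPR would retain
r64 = ctypes.c_uint64(value)     # what a 64-bit GPR retains

print(r32.value)                 # 705032704  -> truncated modulo 2**32
print(r64.value)                 # 5000000000 -> fits comfortably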


Now let's take a look at two programming models, one for a 32-bit processor and another for a 64-bit processor. The registers in the 64-bit CPU are twice as wide as those in the 32-bit CPU, but the size of the instruction register (IR) that holds the currently executing instruction is the same in both processors. Again, the data stream has doubled in size, but the instruction stream has not. Finally, the program counter (PC) has also doubled in size.
For the simple processor described here, the two types of data that it can process are integer data and address data. Ultimately, addresses are really just integers that designate a memory address, so address data is just a special type of integer data. Hence, both data types are stored in the GPRs and both integer and address calculations are done by the ALU. Many modern processors support two additional data types: floating-point data and vector data. Each of these two data types has its own set of registers and its own execution unit(s).

PON Topologies

There are several topologies suitable for the access network: tree, ring, or bus. A PON can also be deployed in a redundant configuration, as a double ring or a double tree, or redundancy may be added only to a part of the PON, say the trunk of the tree. For the rest of this article we will focus our attention on the tree topology; however, most of the conclusions drawn are equally relevant to other topologies.

All transmissions in a PON are performed between an Optical Line Terminal (OLT) and Optical Network Units (ONUs). Therefore, in the downstream direction (from OLT to ONUs) a PON is a point-to-multipoint (P2MP) network, and in the upstream direction it is a multipoint-to-point (MP2P) network. The OLT resides in the local exchange (central office), connecting the optical access network to an IP, ATM, or SONET backbone. The ONU is located either at the curb (FTTC solution) or at the end-user location (FTTH, FTTB solutions), and provides broadband voice, data, and video services.

The advantages of using PONs in subscriber access networks are numerous.
1. PONs allow for long reach between central offices and customer premises, operating at distances of over 20 km (a rough power-budget sketch follows this list).
2. PONs minimize fiber deployment in both the local exchange office and the local loop.
3. PONs provide higher bandwidth due to deeper fiber penetration, offering gigabit-per-second solutions.
4. Operating in the downstream direction as a broadcast network, PONs allow for video broadcasting as either IP video or analog video using a separate wavelength overlay.
5. PONs eliminate the need to install active multiplexers at the splitting locations, relieving network operators of maintaining and powering active curbside equipment.
6. Being optically transparent end to end, PONs allow upgrades to higher bit rates or additional wavelengths.
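The 20 km reach claim in item 1 is ultimately a question of optical power budget: fiber attenuation plus splitter loss must stay within the margin between launch power and receiver sensitivity. The sketch below uses illustrative values (a 1:32 splitter, 0.35 dB/km attenuation, a 26 dB budget and a 3 dB margin); none of these figures come from this article.

import math

# Illustrative power-budget check for a tree-topology PON.
tx_minus_rx_db = 26.0        # assumed budget: launch power minus receiver sensitivity
split_ratio = 32             # assumed number of ONUs behind one splitter
atten_db_per_km = 0.35       # assumed fiber attenuation
connector_margin_db = 3.0    # assumed margin for splices, connectors and ageing

splitter_loss_db = 10 * math.log10(split_ratio)      # ideal 1:N splitting loss
remaining_db = tx_minus_rx_db - splitter_loss_db - connector_margin_db
max_reach_km = remaining_db / atten_db_per_km

print(f"splitter loss ~ {splitter_loss_db:.1f} dB, reach ~ {max_reach_km:.0f} km")

With these assumptions the reach works out to a little over 20 km, which is consistent with the figure quoted above; a larger split ratio trades reach for the deeper fiber penetration mentioned in item 3.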


Multiple Access
One possible way of separating the channels is to use wavelength-division multiplexing (WDM), in which each ONU operates at a different wavelength. While simple in concept, this solution remains cost prohibitive for an access network. A WDM solution would require either a tunable receiver or a receiver array at the OLT to receive multiple channels. An even more serious problem for network operators would be wavelength-specific ONU inventory: instead of having just one type of ONU, there would be multiple types of ONUs based on their laser wavelength, and it would also be more difficult for an unqualified user to replace a defective ONU. Using tunable lasers in ONUs is too expensive at the current state of technology. For these reasons, a WDM PON is not an attractive solution in today's environment.

Blu Ray Disc

Optical disks account for a major share of secondary storage devices. Blu-ray Disc is a next-generation optical disc format. The technology utilizes a blue-violet laser diode operating at a wavelength of 405 nm to read and write data; because the laser wavelength is shorter, far more data can be stored on a disc than was previously possible.
Data is stored on Blu-ray discs in the form of tiny pits on the surface of an opaque 1.1-millimetre-thick substrate, which lies beneath a transparent 0.1 mm protective layer. With Blu-ray recording devices it is possible to record up to 2.5 hours of very high quality audio and video on a single BD.
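The capacity gain over DVD follows mainly from the smaller laser spot: spot size scales with wavelength divided by numerical aperture, so areal density scales roughly with (NA/wavelength) squared. The wavelengths and apertures below (650 nm / NA 0.60 for DVD, 405 nm / NA 0.85 for Blu-ray) are the commonly quoted figures, used here only to show the order of magnitude.

# Rough areal-density comparison from the optical spot size ~ wavelength / NA.
dvd_wavelength_nm, dvd_na = 650, 0.60
bd_wavelength_nm, bd_na = 405, 0.85

def relative_density(wavelength_nm, na):
    # Density scales roughly with the inverse square of the spot diameter.
    return (na / wavelength_nm) ** 2

gain = relative_density(bd_wavelength_nm, bd_na) / relative_density(dvd_wavelength_nm, dvd_na)
print(f"Blu-ray packs roughly {gain:.1f}x the data per layer of a DVD")

# ~5x per layer, in line with 25 GB (single-layer BD) versus 4.7 GB (single-layer DVD)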


Blu-ray also promises added security, making way for copyright protection: Blu-ray discs can carry a unique ID so that copyright protection can be embedded in the recorded streams. In short, Blu-ray takes DVD technology one step further essentially just by using a laser with a shorter-wavelength (blue-violet) color.

History of Blu-ray Disc


First Generation
When the CD was introduced in the early 80s, it meant an enormous leap from traditional media. Not only did it offer a significant improvement in audio quality, its primary application, but its 650 MB storage capacity also meant a giant leap in data storage and retrieval. For the first time, there was a universal standard for pre-recorded, recordable and rewritable media, offering the best quality and features consumers could wish for, at very low cost.
Second Generation

Although the CD was a very useful medium for the recording and distribution of audio and some modest data applications, demand for a new medium offering higher storage capacities rose in the 90s. These demands led to the evolution of the DVD specification and a five- to ten-fold increase in capacity, which enabled high quality, standard-definition video distribution and recording. Furthermore, the increased capacity accommodated more demanding data applications. At the same time, the DVD specification used the same form factor as the CD, allowing for seamless migration to the next-generation format and offering full backwards compatibility.


HDTV (High Definition Video)
This high-resolution, 16:9, progressive-scan format can now be recorded to standard miniDV cassettes. Consumer high-definition cameras are becoming available, but this is currently an expensive, niche market. It is also possible to capture video using inexpensive webcams, which normally connect to a computer via USB. While they are much cheaper than DV cameras, webcams offer lower quality and less flexibility for editing purposes, as they do not capture video in DV format. Digital video is available on many portable devices, from digital stills cameras to mobile phones, and this is contributing to the emergence of digital video as a standard technology used and shared by people on a daily basis.

MPEG
MPEG, the Moving Picture Experts Group, overseen by the International Organization for Standardization (ISO), develops standards for digital video and digital audio compression. MPEG-1, with a default resolution of 352x240, was designed specifically for Video CD and CD-i media and is often used on CD-ROMs.


MPEG-1 Audio Layer 3 (MP3) compression evolved from early MPEG work. MPEG-1 is an established, medium-quality format (similar to VHS) supported by all players and platforms. Although not the best quality, it works well on older machines.


MPEG-2 compression (as used for DVD movies and digital television set-top boxes) is an excellent format for distributing video, as it offers high quality and smaller file sizes than DV. Because of the way it compresses video, MPEG-2-encoded footage is more difficult to edit than DV footage. Despite this, MPEG-2 is becoming more common as a capture format. MPEG-2 uses variable bit rates, allowing frames to be encoded with more or less data depending on their content. Most editing software now supports MPEG-2 editing, although editing and encoding MPEG-2 requires more processing power than DV and should be done on well-specified machines. It is not suitable for Internet delivery.
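Since MPEG-2 is described above as a variable-bit-rate distribution format, a quick size estimate helps put the numbers in context: file size is simply average bit rate times duration. The 6 Mbit/s figure below is an assumed, DVD-like average rate, not a value taken from this article.

# Approximate file size for variable-bit-rate MPEG-2 video.
avg_bitrate_mbps = 6.0      # assumed average video bit rate (DVD-like)
duration_min = 90           # programme length in minutes

size_bits = avg_bitrate_mbps * 1e6 * duration_min * 60
size_gb = size_bits / 8 / 1e9
print(f"~{size_gb:.1f} GB for {duration_min} minutes at {avg_bitrate_mbps} Mbit/s average")

# ~4.1 GB, which is roughly why a 90-minute movie fills a single-layer DVD.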

Bio-Molecular Computing

Definition
Molecular computing is an emerging field to which chemistry, biophysics, molecular biology, electronic engineering, solid-state physics and computer science all contribute. It involves the encoding, manipulation and retrieval of information at a macromolecular level, in contrast to current techniques, which accomplish these functions via the miniaturization of bulk IC devices. Biological systems have unique abilities such as pattern recognition, learning, self-assembly and self-reproduction, as well as high-speed, parallel information processing. The aim of this article is to exploit these characteristics to build computing systems that have many advantages over their inorganic (Si, Ge) counterparts.

DNA computing began in 1994, when Leonard Adleman proved that DNA computing was possible by using a molecular computer to solve a real problem, an instance of the Hamiltonian Path Problem (a close relative of the well-known Traveling Salesman Problem). In theoretical terms, some scientists say the actual beginnings of DNA computation should be attributed to Charles Bennett's work. Adleman, now considered the father of DNA computing, is a professor at the University of Southern California and spawned the field with his paper "Molecular Computation of Solutions to Combinatorial Problems." Since then, Adleman has demonstrated how the massive parallelism of a trillion DNA strands can simultaneously attack different aspects of a computation to crack even the toughest combinatorial problems.

Adleman's Traveling Salesman Problem:
The objective is to find a path from the start vertex to the end vertex that goes through every vertex exactly once. This problem is difficult for conventional computers to solve because it is a "non-deterministic polynomial time" problem. Such problems, when they involve large numbers of vertices, are intractable on conventional computers, but they can be attacked with massively parallel machines like DNA computers. The Hamiltonian Path problem was chosen by Adleman because it is a well-known hard problem.


The following algorithm solves the Hamiltonian Path problem:
1. Generate random paths through the graph.
2. Keep only those paths that begin with the start city (A) and conclude with the end city (G).
3. If the graph has n cities, keep only those paths with n cities. (n=7)
4. Keep only those paths that enter all cities at least once.
5. Any remaining paths are solutions.


The key was using DNA to perform the five steps of the above algorithm. Adleman's first step was to synthesize DNA strands of known sequences, each strand 20 nucleotides long. He represented each vertex of the graph by a separate strand, and represented each edge between two consecutive vertices, such as vertex 1 to vertex 2, by a DNA strand consisting of the last ten nucleotides of the strand representing vertex 1 plus the first ten nucleotides of the vertex 2 strand. Then, through the sheer number of DNA molecules (3x10^13 copies for each edge in this experiment!) joining together in all possible combinations, many random paths were generated. Adleman used well-established techniques of molecular biology to weed out the Hamiltonian path, the one that entered every vertex, starting at the start vertex and ending at the end vertex. After generating the numerous random paths in the first step, he used the polymerase chain reaction (PCR) to amplify and keep only the paths that began at the start vertex and ended at the end vertex. The next two steps kept only those strands that passed through the required number of vertices, entering each vertex at least once. At this point, any paths that remained would code for a Hamiltonian path, thus solving the problem.
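For readers who want to see the five filtering steps in action without a test tube, the sketch below brute-forces them in Python on a small, made-up directed graph. The graph, the city labels and the sampling count are invented for illustration; the point is only that massive random generation followed by filtering isolates Hamiltonian paths, which is exactly what the DNA chemistry performs in parallel.

import random

# A small made-up directed graph; vertices A..G, start = 'A', end = 'G'.
edges = {
    'A': ['B', 'D'], 'B': ['C', 'D'], 'C': ['E', 'G'],
    'D': ['C', 'E'], 'E': ['F'], 'F': ['G'], 'G': [],
}
cities, start, end = list(edges), 'A', 'G'
n = len(cities)

def random_path(max_len):
    # Step 1: a random walk, the analogue of strands ligating at random.
    path = [random.choice(cities)]
    while len(path) < max_len and edges[path[-1]]:
        path.append(random.choice(edges[path[-1]]))
    return path

candidates = (random_path(n) for _ in range(200_000))

solutions = {
    tuple(p) for p in candidates
    if p[0] == start and p[-1] == end        # step 2: correct endpoints
    and len(p) == n                          # step 3: exactly n cities
    and set(p) == set(cities)                # step 4: every city visited
}                                            # step 5: whatever remains is a solution
print(solutions)

On this toy graph the filters leave the single Hamiltonian path A-B-D-C-E-F-G; the DNA experiment performs the same generate-and-filter strategy, but with trillions of "candidates" reacting at once.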

Third Generation

The third generation of mobile cellular systems is intended to unify the diverse systems we see today into a seamless radio infrastructure capable of offering a wide range of services in different radio environments, with the quality we have come to expect from wireline communication networks. Since the mid-80s, studies on 3G systems have been carried out within the International Telecommunication Union (ITU), where the effort was first called Future Public Land Mobile Telecommunication Systems (FPLMTS) and later renamed International Mobile Telecommunications-2000 (IMT-2000).

In Europe, research and development on 3G technology, commonly referred to as the Universal Mobile Telecommunication System (UMTS) and the Mobile Broadband System (MBS), has been conducted under the European Community's Research into Advanced Communications in Europe (RACE) and Advanced Communication Technologies and Services (ACTS) programs. With support from activities in Europe, the United States, Japan and developing countries, the World Administrative Radio Conference (WARC) of the ITU identified the global bands 1885-2025 MHz and 2110-2200 MHz for IMT-2000, including 1980-2010 MHz and 2170-2200 MHz for the mobile satellite component. Key elements in the definition of 3G systems are the radio access system and the Radio Transmission Technology (RTT). As part of the standardization activities, a formal request by the ITU Radiocommunication standardization sector (ITU-R) for submission of candidate RTTs for IMT-2000 was distributed by the ITU. In response, 10 proposals were submitted, most of which use CDMA or WCDMA as their multiple access technique. In this seminar, therefore, we present the common features of the WCDMA-based 3G standards.

The primary focus of third-generation architectures is to evolve second-generation systems seamlessly so as to provide high-speed data services that support multimedia applications such as web browsing. The key word is "evolve": the challenge for wireless equipment manufacturers is to provide their existing customers, namely service providers, with a migration path that satisfies the requirements set forth by the International Telecommunications Union (ITU) for 3G wireless services while preserving the customers' investment in existing wireless infrastructure.

Graph Separators

Graph separation is a well-known tool for making (hard) graph problems accessible to a divide-and-conquer approach. We show how to use graph separator theorems in combination with (linear) problem kernels in order to develop fixed-parameter algorithms for many well-known NP-hard (planar) graph problems.
We coin the key notion of glueable select&verify graph problems and derive from it a prospective way to check easily whether a planar graph problem will allow for a fixed-parameter algorithm with running time of the form c^sqrt(k) * n^O(1) for a constant c.
Besides, we introduce the novel concept of "problem cores" that might serve as an alternative to problem kernels for devising parameterized algorithms. One of the main contributions of the paper is to compute exactly the base c of the exponential term and its dependence on the various parameters specified by the employed separator theorem and the underlying graph problem.
We discuss several strategies to improve on the involved constant c.
Our findings also give rise to studying further refinements of the complexity class FPT of fixed-parameter tractable problems.

Cellular Communications

Roke Manor Research is a leading provider of mobile telecommunications technology for both terminals and base stations. We add value to our clients' projects by reducing time-to-market and lowering production costs, and provide lasting benefits through building long-term relationships and working in partnership with our customers.
We have played an active role in cellular communications technology since the 1980s, working initially in GSM and more recently in the definition and development of 3G (UMTS). Roke Manor Research has over 200 engineers with experience in designing hardware and software for 3G terminals and base stations and is currently developing technology for 4G and beyond. We are uniquely positioned to provide 2G, 3G and 4G expertise to our customers.
The role of Roke Manor Research engineers in standardisation bodies (e.g. ETSI and 3GPP) provides us with intimate knowledge of all the 2G and 3G standards (GSM, GPRS, EDGE, UMTS FDD (WCDMA) and TD-SCDMA). Our engineers are currently contributing to the evolution of 3G standards and can provide up-to-the-minute implementation advice to customers.

Optical Computer

The mantra of our electronic age has been 'faster, smaller, better' for over two decades now, and today computers lie at the very core of our society. As we try to squeeze more from a sliver of silicon, the cost of chip making has become prohibitively expensive, and the barriers within chips are now only three or four atoms apart. So far the ride has been good, but at some point something has to give.
At that point an incremental approach to silicon technology will not be enough; we will need a new approach. Many new technologies abound, but the most promising among them is the use of light.
An optical computer is a hypothetical device that uses visible light or infrared beams, rather than electric current, to perform digital computations.
An electric current flows at only about 10 percent of the speed of light. By applying some of the advantages of visible and/or IR networks at the device and component scale, a computer could be developed that performs operations many times faster than a conventional electronic computer.

Virtual Private Network

Definition
VPNs have emerged as the key technology for achieving security over the Internet. While a VPN is an inherently simple concept, early VPN solutions were geared towards large organizations and their implementation required extensive technical expertise. As a consequence, small and medium-sized businesses were left out of the e-revolution. Recently, VPN solutions have become available that focus specifically on the needs of small and medium-sized businesses. Historically, the term VPN has also been used in contexts other than the Internet, such as in the public
telephone network and in the Frame Relay network. In the early days of the Internet-based VPNs, they were
sometimes described as Internet-VPNs or IP-VPNs. However, that usage is archaic and VPNs are now synonymous
with Internet-VPNs.



Overview and Benefits
A firewall is an important security feature for Internet users. It prevents unauthorized users from moving data into or out of an enterprise. However, when packets pass through the firewall to the Internet, sensitive data such as user names, passwords, account numbers, financial and personal medical information, and server addresses is visible to hackers and potential e-criminals, and firewalls do not protect against threats within the Internet itself. This is where a VPN comes into play.


A VPN, at its core, is a fairly simple concept: the ability to use the shared, public Internet in a secure manner as if it were a private network. Consider the flow of data between two users over the Internet when not using a VPN: packets between the pair may cross networks run by many ISPs and may take different paths, and the structure of the Internet and the particular paths taken are transparent to the two users. With a VPN, users encrypt their data and their identities to prevent unauthorized people or computers from looking at the data or tampering with it.
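As a toy illustration of that last point, the sketch below encrypts an application payload before it ever leaves the sender, so anything captured on the public network is opaque. It uses the third-party 'cryptography' package's Fernet recipe purely as a convenient symmetric cipher; a real VPN would instead negotiate keys and encapsulate whole IP packets (for example, IPsec or TLS tunnels), which is beyond this sketch.

# pip install cryptography  (symmetric encryption used here only for illustration)
from cryptography.fernet import Fernet

# In a real VPN the two endpoints would agree on keys via a handshake;
# here we simply share one symmetric key between "office" and "remote user".
shared_key = Fernet.generate_key()
tunnel = Fernet(shared_key)

inner_message = b"username=alice&account=12345"   # sensitive application data
wire_bytes = tunnel.encrypt(inner_message)        # what an eavesdropper would see

print(wire_bytes[:40], b"...")                    # ciphertext: unreadable on the public net
print(tunnel.decrypt(wire_bytes))                 # recovered only at the far end of the tunnel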



Applications
A VPN can be used for just about any intranet or e-business (extranet) application. The examples that follow illustrate the use and benefits of VPNs for mobile users and for remote access to enterprise resources, for communications between remote offices and headquarters, and for extranet/e-business.

Remote Access
In this application, mobile and remote users who are not using a VPN often rely on analog (dial-up modem) or ISDN switched services to connect to a headquarters data center. These connections are used to access e-mail, to download files and to execute other transactions. This type of connection would also be used by small offices that do not have a permanent connection to the enterprise intranet.

Cluster Computing

Definition
The recent advances in high-speed networks and improved microprocessor performance are making clusters, or networks of workstations, an appealing vehicle for cost-effective parallel computing. Clusters built using commodity hardware and software components are playing a major role in redefining the concept of supercomputing.



Clusters
A cluster is a type of parallel or distributed processing system, which consists of a collection of interconnected stand-alone computers cooperatively working together as a single, integrated computing resource.

These computers share common network characteristics, such as the same namespace, and the cluster is available to other computers on the network as a single resource. The computers are linked together using high-speed network interfaces, and the actual binding together of all the individual machines in the cluster is performed by the operating system and the clustering software used.


Beowulf Cluster
A Beowulf cluster is a kind of high-performance, massively parallel computer built primarily out of commodity hardware components, running a free-software operating system like Linux or FreeBSD, and interconnected by a private high-speed network.


Motivation For Clustering
High cost of 'traditional' High Performance Computing.
Clustering using Commercial Off The Shelf (COTS) components is far cheaper than buying specialized machines for computing. Cluster computing has emerged as a result of the convergence of several trends, including the availability of inexpensive high-performance microprocessors and high-speed networks, and the development of standard software tools for high-performance distributed computing.

Increased need for High Performance Computing
As processing power becomes more widely available, applications that require enormous amounts of processing, such as weather modeling, are becoming more commonplace, requiring the high-performance computing provided by clusters.
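On a Beowulf-style cluster the standard way to spread such a computation across nodes is message passing. The sketch below uses the third-party mpi4py package (a Python binding to MPI, the library family commonly used on such clusters) to split a simple summation across all ranks; the workload itself is made up for illustration.

# Run with, for example:  mpirun -n 4 python sum_squares.py
# Requires an MPI implementation plus the third-party mpi4py package.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()        # this process's id within the cluster job
size = comm.Get_size()        # total number of cooperating processes

N = 10_000_000
# Each rank handles an interleaved slice of the range, so work is balanced.
partial = sum(i * i for i in range(rank, N, size))

# Combine the partial results on rank 0.
total = comm.reduce(partial, op=MPI.SUM, root=0)
if rank == 0:
    print(f"sum of squares below {N}: {total}")

The same pattern (partition the work by rank, combine with a reduction) carries over to the weather-modeling style workloads mentioned above, only with far heavier per-rank computations.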

Voice Portals

Definition
In its most generic sense, a voice portal can be defined as "speech-enabled access to Web-based information". In other words, a voice portal provides telephone users with a natural-language interface to access and retrieve Web content. An Internet browser can provide Web access from a computer but not from a telephone; a voice portal provides exactly that.



Overview
The voice portal market is exploding, with enormous opportunities for service providers to grow business and revenues. Voice-based Internet access uses rapidly advancing speech recognition technology to give users anytime, anywhere communication and access, via the human voice, over an office, wireless, or home phone. Here we describe the various technology factors that are making voice portals the next big opportunity on the Web, as well as the approaches service providers and developers of voice portal solutions can follow to make the most of this exciting new market opportunity.


Natural speech is the modality we use when communicating with other people, which makes it easier for a user to learn the operation of voice-activated services. As an output modality, speech has several advantages. First, auditory information does not interfere with visual tasks, such as driving a car. Second, it allows for easy incorporation of sound-based media, such as radio broadcasts, music, and voice-mail messages. Third, advances in text-to-speech (TTS) technology mean text information can be rendered easily for the user. Natural speech also has advantages as an input modality, allowing for hands-free and eyes-free use. With proper design, voice commands can be created that are easy for a user to remember; these commands do not have to compete for screen space, and unlike keyboard-based macros (e.g., ctrl-F7), voice commands can be inherently mnemonic ("call United Airlines"), obviating the need for hint cards. Speech can thus be used to create an interface that is easy to use and requires a minimum of user attention.

For a voice portal to function, one of the most important technologies to include is a good Voice User Interface (VUI). There has been a great deal of development in the field of interaction between the human voice and computer systems, and many other fields have started to adopt it. The insurance industry, for example, has turned to interactive voice response (IVR) systems to provide telephonic customer self-service, reduce the load on call-center staff, and cut overall service costs. The promise is certainly there, but how well these systems perform, and ultimately whether customers leave the system satisfied or frustrated, depends in large part on the user interface.
Many IVR applications use Touch-Tone interfaces, known as DTMF (dual-tone multi-frequency), in which customers are limited to making selections from a menu. As transactions become more complex, the effectiveness of DTMF systems decreases.


In fact, IVR and speech recognition consultancy Enterprise Integration Group (EIG) reports that customer utilization rates of available DTMF systems in financial services, where transactions are primarily numeric, are as high as 90 percent; in contrast, customers' use of insurers' DTMF systems is less than 40 percent.
Enter some more acronyms. Automated speech recognition (ASR) is the engine that drives today's voice user interface (VUI) systems. These let customers break the 'menu barrier' and perform more complex transactions over the phone. "In many cases the increase in self-service when moving from DTMF to speech can be dramatic," said EIG president Rex Stringham.
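The 'menu barrier' is easy to see in code: with DTMF every request must be reached by walking a keypad menu tree, whereas a speech interface can map one natural utterance straight to an action. The menu layout, utterances and intent keywords below are all invented for this toy comparison.

# Toy contrast between a DTMF menu walk and a single spoken request.
dtmf_menu = {                       # invented insurer-style menu tree
    "1": {"1": "check claim status", "2": "file a new claim"},
    "2": {"1": "make a payment", "2": "billing questions"},
}

def dtmf_dialogue(key_presses):
    node = dtmf_menu
    for key in key_presses:         # the caller presses one key per menu level
        node = node[key]
    return node

def speech_dialogue(utterance):
    # A VUI lets the recognizer and grammar jump straight to the intent.
    intents = {"claim": "check claim status", "payment": "make a payment"}
    return next(action for word, action in intents.items() if word in utterance.lower())

print(dtmf_dialogue(["1", "1"]))                         # two prompts, two key presses
print(speech_dialogue("I'd like to check on my claim"))  # one natural sentence

As the menu tree grows, the DTMF caller must sit through ever more prompts, which is the decrease in effectiveness described above; the speech path stays a single utterance.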

Tele-immersion

Definition
Tele-Immersion is a new medium that enables a user to share a virtual space with remote participants. The user is immersed in a 3D world that is transmitted from a remote site. This medium for human interaction, enabled by digital technology, approximates the illusion that a person is in the same physical space as others, even though they may be thousands of miles distant. It combines the display and interaction techniques of virtual reality with new computer-vision technologies. With the aid of this new technology, users at geographically distributed sites can collaborate in real time in a shared, simulated, hybrid environment, immersed in one another's presence and feeling as if they share the same physical space.

It is the ultimate synthesis of media technologies:
1. 3D environment scanning,
2. projective and display technologies,
3. tracking technologies,
4. audio technologies,
5. robotics and haptics, and powerful networking.
The considerable requirements of a tele-immersion system make it one of the most challenging network applications.

In a tele-immersive environment computers recognize the presence and movements of individuals and objects, track those individuals and images, and then permit them to be projected in realistic, multiple, geographically distributed immersive environments on stereo-immersive surfaces. This requires sampling and resynthesis of the physical environment as well as the users' faces and bodies, which is a new challenge that will move the range of emerging technologies, such as scene depth extraction and warp rendering, to the next level.


Tele-immersive environments will therefore facilitate interaction not only between users themselves but also between users and computer-generated models and simulations. This will require expanding the boundaries of computer vision, tracking, display, and rendering technologies. As a result, users will be able to achieve a compelling experience, and the groundwork will be laid for a higher degree of their inclusion in the entire system.
Tele-immersive systems have the potential to significantly change educational, scientific and manufacturing paradigms. They will show their full strength in systems where coupling 3D-reconstructed 'real' objects with 3D virtual objects is crucial to the successful completion of a task; indeed, some tasks may not be possible to complete without such a combination of sensory information. Several applications will profit from tele-immersive systems; collaborative mechanical CAD and various medical applications are two that will benefit significantly.

Tele-immersion may sound like conventional video conferencing, but it is much more: where video conferencing delivers flat images to a screen, tele-immersion recreates an entire remote environment. Tele-immersion may also seem like another kind of virtual reality, but it is not: virtual reality allows people to move around in a pre-programmed representation of a 3D environment, whereas tele-immersion measures the real world and conveys the results to the user's sensory system.

A tele-immersion telecubicle is designed both to acquire a 3D model of the local user and environment for rendering and interaction at remote sites, and to provide an immersive experience for the local user via head tracking and stereoscopic display projected on large scale view screens.


VHDL

Definition
VHDL (VHSIC Hardware Description Language) is a language for describing hardware. The requirement for it emerged during the VHSIC development program of the US Department of Defense. The department organized a workshop in 1981 to lay down the specifications of a language that could describe hardware at various levels of abstraction, could generate test signals and record responses, and could act as a medium of information exchange between the chip foundries and the CAD tool operators. However, due to military restrictions, it remained classified till 1985.



Structural Descriptions


1. Building Blocks
To make designs more understandable and maintainable, a design is typically decomposed into several blocks, which are then connected together to form the complete design. Using the schematic capture approach to design, this might be done with a block diagram editor. Every portion of a VHDL design is considered a block. A VHDL design may be completely described in a single block, or it may be decomposed into several blocks. Each block in VHDL is analogous to an off-the-shelf part and is called an entity. The entity describes the interface to that block, and a separate part associated with the entity describes how the block operates. The interface description is like a pin description in a data book, specifying the inputs and outputs of the block. The description of the operation of the part is like a schematic for the block.


2. Connecting Blocks
Once we have defined the basic building blocks of our design using entities and their associated architectures, we can combine them to form other designs. This section describes how to combine these blocks in a structural description.
3. Data Flow Descriptions


The VHDL standard describes not only how designs are specified but also how they should be interpreted. This is the purpose of having standards, so that we can all agree on the meaning of a design. It is important to understand how a VHDL simulator interprets a design, because that dictates what the "correct" interpretation is according to the standard (even if, in practice, not all simulators are 100% correct).


The scheme used to model a VHDL design is called discrete event time simulation. When the value of a signal changes, we say an event has occurred on that signal. If data flows from signal A to signal B, and an event has occurred on signal A (i.e. A's value changes), then we need to determine the possibly new value of B. This is the foundation of discrete event time simulation: the values of signals are only updated when certain events occur, and events occur at discrete instants of time.


Since one event causes another, simulation proceeds in rounds. The simulator maintains a list of events that need to be processed. In each round, all events in the list are processed, and any new events that are produced are placed in a separate list (they are said to be scheduled) for processing in a later round. Each signal assignment is evaluated once, when simulation begins, to determine the initial value of each signal.
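The round-by-round event processing described above can be mimicked in a few lines of ordinary Python. The sketch below is not VHDL and not a real simulator kernel; the signals, the single "process" (an AND gate with an assumed 1 ns delay) and the event queue are simplified stand-ins meant only to show how events scheduled in one round trigger evaluations in a later one.

import heapq

# Minimal discrete-event model: (time, signal, value) events in a priority queue.
signals = {"a": 0, "b": 0, "y": 0}
events = []   # the simulator's list of scheduled events

def schedule(time_ns, signal, value):
    heapq.heappush(events, (time_ns, signal, value))

def and_gate_process(now):
    # Stand-in for a VHDL signal assignment: y <= a and b after 1 ns;
    schedule(now + 1, "y", signals["a"] & signals["b"])

# Stimulus: drive the inputs at 0 ns and 5 ns.
schedule(0, "a", 1)
schedule(0, "b", 1)
schedule(5, "b", 0)

while events:
    now, signal, value = heapq.heappop(events)
    if signals[signal] != value:              # an event is an actual value change
        signals[signal] = value
        print(f"{now} ns: {signal} -> {value}")
        if signal in ("a", "b"):              # input changes wake the gate process
            and_gate_process(now)

Running the sketch prints the input changes at 0 ns and 5 ns and the resulting output events at 1 ns and 6 ns, showing how each round of processed events schedules the next.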

Speech Application Language Tags

Definition
Advances in several fundamental technologies are making possible mobile computing platforms of unprecedented power. In the speech and voice technology business, SALT has been introduced as a new tool that supplies a critical missing component, facilitating intuitive speech-based interfaces that anyone can master. Verizon Wireless has joined the SALT Forum to make speech applications more accessible to wireless customers. The SALT specification defines a set of lightweight tags as extensions to commonly used Web-based programming languages, strengthened by incorporating existing standards from the World Wide Web Consortium (W3C) and the Internet Engineering Task Force. In multimodal applications, the tags can be added to support speech input and output either as standalone events or jointly with other interface options, such as speaking while pointing at the screen with a stylus. In telephony applications, the tags provide a programming interface to manage the speech recognition and text-to-speech resources needed to conduct interactive dialogs with the caller through a speech-only interface.


SALT is a speech interface markup language. SALT (Speech Application Language Tags) is an extension of HTML and other markup languages (HTML, XHTML, WML) that adds a powerful speech interface to Web pages while maintaining and leveraging all the advantages of the Web application model. These tags are designed to be used both by voice-only browsers (for example, a browser accessed over the telephone) and by multimodal browsers. SALT is a small set of XML elements, with associated attributes and DOM object properties, events, and methods, which may be used in conjunction with a source markup document to apply a speech interface to the source page. The SALT formalism and semantics are independent of the nature of the source document, so SALT can be used equally effectively within HTML and all its flavors, with WML, or with any other SGML-derived markup. SALT targets speech applications across a wide range of devices, including telephones, PDAs, tablet computers and desktop PCs; as all these devices have different methods of inputting data, SALT takes this into consideration as well.
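To give a feel for what "a small set of XML elements" looks like in practice, the fragment below is a schematic, SALT-flavored snippet: the element names prompt, listen, grammar and bind follow the commonly cited SALT tags, but the attributes, ids and grammar reference are invented here. It is held in a Python string and inspected with the standard-library ElementTree parser, simply to show how few tags carry the speech interface.

import xml.etree.ElementTree as ET

# Schematic SALT-style fragment, as it might be embedded in a web page.
salt_fragment = """
<form id="flightQuery">
  <prompt id="askCity">Which city are you flying to?</prompt>
  <listen id="getCity">
    <grammar src="cities.grxml"/>
    <bind targetelement="destination" value="//city"/>
  </listen>
</form>
"""

root = ET.fromstring(salt_fragment)
for element in root.iter():
    print(element.tag, element.attrib)   # the handful of elements that add speech I/O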

SALT provides multimodal access in which users will be able to interact with an application in a variety of ways: input with speech, a keyboard, keypad, mouse and/or stylus; and output as synthesized speech, audio, plain text, motion video and/or graphics. Each of these modes can be used independently or concurrently. For example, a user might click on a flight-information icon on a device, say "Show me the flights from San Francisco to Boston after 7 p.m. on Saturday," and have the browser display a Web page with the corresponding flights.


There are mainly three major challenges that SALT will help address.

1. Input on wireless devices:
Wireless devices are becoming pervasive, but the lack of a natural input mechanism hinders their adoption as well as application development on these devices.

2. Speech-enabled application development:
By enabling integration between existing Web browser software, server and network infrastructure, and speech technology, SALT will allow many more Web sites to be reachable through telephones.

3. Telephony applications:
There are 1.6 billion telephones in the world, but only a relatively small fraction of Web applications and services are reachable by phone.