Wednesday, February 18, 2009

Intrusion Detection System

Definition
It is very important that the security mechanisms of a system are designed so as to prevent unauthorized access to system resources and data. However, completely preventing breaches of security appears, at present, to be unrealistic. We can, however, try to detect these intrusion attempts so that action may be taken to repair the damage later. This field of research is called Intrusion Detection.
Anderson, while introducing the concept of intrusion detection in 1980, defined an intrusion attempt or a threat to be the potential possibility of a deliberate unauthorized attempt to:


a. access information,
b. manipulate information, or
c. render a system unreliable or unusable.


Since then, several techniques for detecting intrusions have been studied. This paper discusses why intrusion detection systems are needed, the main techniques, present research in the field, and possible future directions of research.

There are two ways to handle subversion attempts. One way is to prevent subversion itself by building a completely secure system. We could, for example, require all users to identify and authenticate themselves; we could protect data by various cryptographic methods and very tight access control mechanisms. However, this is not really feasible because:


1. In practice, it is not possible to build a completely secure system, because bug-free software is still a dream and few vendors make the effort to develop such software. Apart from the fact that we do not seem to be getting our money's worth when we buy software, there are also security implications when our e-mail software, for example, can be attacked. Designing and implementing a totally secure system is thus an extremely difficult task.


2. The vast installed base of systems worldwide guarantees that any transition to a secure system (if one is ever developed) will be long in coming.


3. Cryptographic methods have their own problems. Passwords can be cracked, users can lose their passwords, and entire crypto-systems can be broken.


4. Even a truly secure system is vulnerable to insiders who abuse their privileges.


5. It has been seen that the relationship between the level of access control and user efficiency is an inverse one, which means that the stricter the mechanisms, the lower the efficiency becomes.


If there are attacks on a system, we would like to detect them as soon as possible (preferably in real-time) and take appropriate action. This is essentially what an Intrusion Detection System (IDS) does. An IDS does not usually take preventive measures when an attack is detected; it is a reactive rather than pro-active agent. It plays the role of an informant rather than a police officer.


The most popular way to detect intrusions has been by using the audit data generated by the operating system.
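To make the idea concrete, here is a minimal sketch (in Python) of signature-based detection over audit records; the log format and rules are invented for illustration and do not correspond to any particular operating system.

import re

# A minimal sketch of signature-based misuse detection over audit records.
# The log format and rule patterns below are hypothetical examples.
RULES = {
    "repeated failed logins": re.compile(r"login failed for user (\w+)"),
    "privilege escalation":   re.compile(r"uid change: \d+ -> 0"),
}

def scan_audit_log(lines, threshold=3):
    """Report rule matches; flag users with too many failed logins."""
    alerts, failures = [], {}
    for line in lines:
        for name, pattern in RULES.items():
            m = pattern.search(line)
            if not m:
                continue
            if name == "repeated failed logins":
                user = m.group(1)
                failures[user] = failures.get(user, 0) + 1
                if failures[user] == threshold:
                    alerts.append(f"{name}: user {user}")
            else:
                alerts.append(name)
    return alerts

if __name__ == "__main__":
    sample = [
        "login failed for user alice",
        "login failed for user alice",
        "login failed for user alice",
        "uid change: 1000 -> 0",
    ]
    for alert in scan_audit_log(sample):
        print("ALERT:", alert)

As the text notes, such a detector only reports; any repair or blocking action is left to the administrator.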

Layer 3 Switching

Definition

It was not so long ago that switches were used in telecommunications primarily for placing telephone calls. Dialing a telephone number activated a series of switches to set up a voice path that could be as simple as the next office or as complex as a multinational conference call. Internetworks and the Internet are beginning to provide similar services for PC workstations, servers, and mainframes. The primary goal of any data network provider is to eliminate geographic and media constraints on connectivity while maintaining control over resources and costs. Embedding Layer 3 Switching into the network is being promoted as the best way to achieve this.

A variety of switch and router technologies are entering the market and creating confusion among network professionals. New terms such as Layer 3 switching, multilayer switching, routing switch, switching router, and Gigabit router are clouding the traditional distinctions between switches and routers. Furthermore, many wiring closet switches that traditionally employed simple Layer 2 switching are now offering Layer 3 switching functions or future options for Layer 3 capabilities. These changes make it difficult for network designers to understand and deploy effective network solutions.

It is clear that a new generation of Internet and intranet work processes is emerging and that users will benefit from both increased competition and new services. It is therefore important to demystify the hype and understand when and where Layer 3 switching is important by getting back to the basics.

In this seminar, Layer 3 switches are compared to traditional multiprotocol routers, and it is shown that Layer 3 switching is essentially a re-invention of the router using new switch-based technologies. The seminar also reviews the basic data forwarding, route processing, and value-added functions that are required of any intelligent network node.
Network Basics



Understanding the Layers
Internetworking devices such as bridges, routers, and switches have traditionally been categorized by the OSI layer they operate at and the role they play in the topology of a network:

1. Bridges and switches operate at Layer 2: they extend network capabilities by forwarding traffic among LANs and LAN segments with high throughput.

2. Routers operate at Layer 3: they perform route calculations based on Layer 3 addresses and provide multi-protocol support and WAN access, but typically at the cost of higher latency and much more complex administration requirements.

Layer 2 refers to the layer in the communications protocol that contains the physical address of a client or server station. It is also called the data link layer or MAC layer.
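As a rough illustration of the difference, the following Python sketch contrasts the two lookups: a Layer 2 switch does an exact match on the destination MAC address, while a Layer 3 router does a longest-prefix match on the destination IP address. The tables and addresses are made-up examples.

import ipaddress

# Layer 2: exact-match lookup on the destination MAC address.
mac_table = {"aa:bb:cc:00:00:01": "port1", "aa:bb:cc:00:00:02": "port2"}

def l2_forward(dst_mac):
    # Unknown destinations are flooded to all ports.
    return mac_table.get(dst_mac, "flood")

# Layer 3: longest-prefix match on the destination IP address.
routing_table = [
    (ipaddress.ip_network("10.1.0.0/16"), "if0"),
    (ipaddress.ip_network("10.1.2.0/24"), "if1"),
    (ipaddress.ip_network("0.0.0.0/0"),   "if2"),   # default route
]

def l3_forward(dst_ip):
    addr = ipaddress.ip_address(dst_ip)
    matches = [(net, iface) for net, iface in routing_table if addr in net]
    # Choose the most specific (longest) matching prefix.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(l2_forward("aa:bb:cc:00:00:02"))  # port2
print(l3_forward("10.1.2.7"))           # if1 (more specific than /16)

The route calculation is what makes Layer 3 devices more flexible but, traditionally, slower and harder to administer than Layer 2 switches.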

Optical packet switch architectures

Definition

The space switch fabric architecture consists of N incoming and N outgoing fiber links, with n wavelengths running on each fiber link. The switch is slotted, and the length of the slot is such that an optical packet can be transmitted and propagated from an input port to an output optical buffer.

The switch fabric consists of three parts: the optical packet encoder, the space switch, and the optical packet buffers. The switch works as follows. For each incoming fiber link, there is an optical demultiplexer, which divides the incoming optical signal into its different wavelengths. Each wavelength is fed to a different tunable wavelength converter (TWC), which converts the wavelength of the optical packet to a wavelength that is free at the destination optical output fiber. Then, through the space switch fabric, the optical packet can be switched to any of the N output optical buffers.

Specifically, the output of a TWC is fed to a splitter, which distributes the same signal to N different output fibers, one per output fiber. The signal on each of these output fibers goes through another splitter, which distributes it into d+1 different output fibers, and each output is connected through an optical gate to one of the ODLs of the destination output buffer. The optical packet is forwarded to an ODL by keeping the appropriate optical gate open and closing the rest. The information regarding which wavelength an incoming packet is converted to, and the decision as to which ODL of the destination output buffer the packet will be switched to, are provided by the control unit, which has knowledge of the state of the entire switch.

Each output buffer is an optical buffer implemented as follows. It consists of d+1 ODLs, numbered from 0 to d. ODL i delays an optical packet by a fixed delay equal to i slots. ODL 0 provides zero delay, and a packet arriving at this ODL is simply transmitted out of the output port.
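The following toy Python sketch mimics the control-unit decision described above: pick a wavelength that is free at the destination output and the smallest available ODL delay. The data structures and sizes are illustrative assumptions, not part of any specific switch design.

# Toy model of the control-unit decision: choose a TWC target wavelength
# that is free at the destination output fiber, then the smallest free ODL.
N_WAVELENGTHS = 4          # wavelengths per output fiber (assumed)
D = 3                      # ODLs numbered 0..D; ODL i delays i slots

def schedule_packet(output_state):
    """output_state[w] = set of ODL delays already booked on wavelength w."""
    for w in range(N_WAVELENGTHS):
        for delay in range(D + 1):            # prefer the smallest delay
            if delay not in output_state[w]:
                output_state[w].add(delay)
                return w, delay               # (wavelength, ODL index)
    return None                               # all buffers full: packet is dropped

state = {w: set() for w in range(N_WAVELENGTHS)}
print(schedule_packet(state))   # (0, 0): wavelength 0, transmitted immediately
print(schedule_packet(state))   # (0, 1): wavelength 0, delayed by one slot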


Optical Satellite Communication

Definition

The European Space Agency (ESA) has programmes underway to place satellites carrying optical terminals in GEO orbit within the next decade. The first is the ARTEMIS technology demonstration satellite, which carries both microwave and SILEX (Semiconductor Laser Inter-satellite Link Experiment) optical inter-orbit communications terminals. SILEX employs direct detection and GaAlAs diode laser technology; the optical antenna is a 25 cm diameter reflecting telescope.

The SILEX GEO terminal is capable of receiving data modulated onto an incoming laser beam at a bit rate of 50 Mbps, and is equipped with a high-power beacon for initial link acquisition together with a low-divergence (and unmodulated) beam which is tracked by the communicating partner. ARTEMIS will be followed by the operational European Data Relay System (EDRS), which is planned to have data relay satellites (DRS). These will also carry SILEX optical data relay terminals.

Once these elements of Europe's space infrastructure are in place, there will be a need for optical communications terminals on LEO satellites capable of transmitting data to the GEO terminals. A wide range of LEO spacecraft is expected to fly within the next decade, including earth observation and science, manned, and military reconnaissance systems.

The LEO terminal is referred to as a user terminal since it enables real-time transfer of LEO instrument data back to the ground, giving a user access to the DRS. LEO instruments generate data over a range of bit rates, depending upon the function of the instrument. A significant proportion have data rates falling in the region around and below 2 Mbps, and this data would normally be transmitted via an S-band microwave IOL.

ESA initiated a development programme in 1992 for a LEO optical IOL terminal targeted at this segment of the user community. This is known as the Small Optical User Terminal (SOUT), with features of low mass, small size and compatibility with SILEX. The programme is in two phases. Phase 1 was to produce a terminal flight configuration and perform detailed subsystem design and modelling. Phase 2, which started in September 1993, is to build an elegant breadboard of the complete terminal.


Dashboard

Definition


Originally a board used to stop mud from being dashed inside a vehicle, the word dashboard has evolved to mean a user interface that organizes and presents information in a way that is easy to read. Your Blogger Dashboard is your control panel, your main editing interface to Blogger.

The goal of the dashboard is to automatically show a user useful files and other objects as he goes about his day. While you read email, browse the web, write a document, or talk to your friends on IM, the dashboard does its best to proactively find objects that are relevant to your current activity, and to display them in a friendly way, saving you from digging around through your stuff like a disorganized filing clerk.

Dashboard basics

Dashboards have emerged as the fashionable term for easy-to-digest, customised views of BI software applications that aggregate and analyse data from disparate corporate sources. Yet a standard definition has yet to gel among the user community and vendors. Michael Smith, senior product marketing manager for Cognos, sees the dashboard as "a compound visual report that elevates the nitty-gritty of business reporting to a graphical level."

Chris Caren, VP of corporate marketing at Business Objects, argues that dashboards represent a new paradigm for getting data out of BI systems that radically differs from traditional BI reporting. "Dashboards are a strong indication that BI is now evolving towards a metrics-driven style of management," he says.

Meanwhile, Eugene Blaine, managing director of Atlantic Global, sees dashboards as a relationship enabler: "If a dashboard is deployed and used properly it's a powerful way to demonstrate that users are listening to the organisation and influencing it."

John Kopcke, chief technology officer at Hyperion Solutions, believes that dashboards will eventually replace traditional query and reporting tools as an entry-level interface for BI information consumption. "IT got it all wrong when they looked at what was needed for the first-level of information delivery," he says. "What users should get is a dashboard as opposed to a mountain of reports."

Dashboards are all about measurements. The centrepiece of any dashboard design is its metrics and KPIs and how they are captured and combined visually in graphs to reflect the health of the business.


Key Performance Indicators, also known as KPI or Key Success Indicators (KSI), help an organization define and measure progress toward organizational goals.
Once an organization has analyzed its mission, identified all its stakeholders, and defined its goals, it needs a way to measure progress toward those goals. Key Performance Indicators are those measurements.
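As a simple illustration, the Python sketch below turns a few raw figures into KPI values with a traffic-light status, the kind of roll-up a dashboard might display. The metrics and thresholds are invented examples, not any vendor's implementation.

# A minimal sketch of turning raw figures into dashboard KPIs with a
# red/amber/green status. Metrics and thresholds are invented examples.

def kpi_status(actual, target, amber=0.9):
    """Return progress toward a goal and a traffic-light status."""
    progress = actual / target
    if progress >= 1.0:
        status = "green"
    elif progress >= amber:
        status = "amber"
    else:
        status = "red"
    return progress, status

metrics = {
    "quarterly revenue ($k)":  (950, 1000),
    "new customers":           (180, 150),
    "support backlog cleared": (60, 100),
}

for name, (actual, target) in metrics.items():
    progress, status = kpi_status(actual, target)
    print(f"{name}: {progress:.0%} of target -> {status}")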

Rapid Prototyping

Definition


It is the name given to a host of related technologies that are used to fabricate physical objects directly from CAD data sources. These methods are unique in that they add and bond materials in layers to form objects. Such systems are also known by the names additive fabrication, three-dimensional printing, solid freeform fabrication and layered manufacturing. Rapid prototyping also describes a software engineering methodology.


Rapid prototyping is the automatic construction of physical objects using solid freeform fabrication and is used to produce models and prototype parts. Rapid prototyping takes virtual designs (from computer-aided design (CAD) or from animation modeling software), transforms them into cross sections, still virtual, and then creates each cross section in physical space, one after the next, until the model is finished. It is a WYSIWYG process where the virtual model and the physical model correspond almost identically.

The machine reads in data from a CAD drawing and lays down successive layers of liquid or powdered material, and in this way builds up the model from a long series of cross sections. These layers, which correspond to the virtual cross sections from the CAD model, are glued together or fused (often using a laser) automatically to create the final shape. The standard interface between CAD software and rapid prototyping machines is the STL file format.


However, there are currently several schemes, such as the RepRap Project, to improve rapid prototyper technology to the stage where a prototyper can manufacture its own component parts. These technologies offer advantages in many applications compared to classical subtractive fabrication methods such as milling or turning.

WYSIWYG: An acronym for What You See Is What You Get, used in computing to describe a system in which content during editing appears very similar to the final product. It is commonly used for word processors, but has other applications, such as Web (HTML) authoring.

Why use Rapid Prototyping
o To increase effective communication.
o To decrease development time.
o To decrease costly mistakes.
o To minimize sustaining engineering changes.
o To extend product lifetime by adding necessary features and eliminating redundant features early in the design.

Methodology of Rapid Prototyping
The basic methodology for all current rapid prototyping techniques can be summarized as follows:
1. A CAD model is constructed, then converted to STL format. The resolution can be set to minimize stair stepping.
2. The RP machine processes the .STL file by creating sliced layers of the model.
3. The first layer of the physical model is created. The model is then lowered by the thickness of the next layer, and the process is repeated until completion of the model.
4. The model and any supports are removed. The surface of the model is then finished and cleaned.
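The slicing step (step 2 above) can be illustrated with a small Python sketch that computes the circular cross sections of a sphere at each layer height; a real RP machine slices an arbitrary STL mesh, but a sphere makes the idea easy to check.

import math

# Toy illustration of slicing: cross sections of a sphere at each layer height.
def slice_sphere(radius, layer_thickness):
    """Return (z, cross-section radius) for each layer through the sphere."""
    layers = []
    z = -radius
    while z <= radius:
        r = math.sqrt(max(radius**2 - z**2, 0.0))
        layers.append((round(z, 3), round(r, 3)))
        z += layer_thickness
    return layers

for z, r in slice_sphere(radius=10.0, layer_thickness=2.5):
    print(f"layer at z={z:+.1f} mm -> circle of radius {r:.1f} mm")

Choosing a smaller layer thickness reduces the stair stepping mentioned in step 1, at the cost of more layers and longer build time.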

Broad Band Over Power Line

Definition


Power line communications (PLC), or Broadband over Power Lines (BPL), allows transmission of data over power lines. Power line communications uses an RF signal sent over medium- and low-voltage AC power lines to allow end users to connect to the Internet. The RF signal is modulated with digital information that is converted by an interface in the home or small business into Ethernet-compatible data.

To gain a good understanding of how PLC works, an excellent understanding of the power grid is required. Unlike telephony and its associated technologies, there is no set standard for providing power. An example of such a difference in standards can be seen in the voltages used in the U.S. and the EU: Ireland uses a 220 V AC supply, whereas the US uses a 110 V supply. These differences lead to differences in basic equipment, such as plugs, e.g. the 3-pin plug used in Ireland compared to the 2-pin plug used in the US, which is made possible by the lower voltage used in the U.S. Despite these differences in basic equipment, the basic network is similar in nearly all countries.

Power is generated at Power stations and distributed around a medium to large geographical area via HV lines or High Voltage lines.

In areas where power needs to be distributed to consumers, transformers are used to convert this high voltage into a lower voltage for transport over MV, or Medium Voltage, lines. These transformers are generally located at electrical substations operated by the utility or power supplier. Such medium voltage lines are used to transport electricity around smaller geographical areas such as towns and small counties.

At the customer's house or premises, a transformer is used to drop the voltage down to safer, more manageable voltages for use in the home or business. This power is usually transported over LV, or Low Voltage, lines. These Low Voltage lines include the lines that traverse a customer's home or business.

PLC encoding

Though there are no set standards in PLC, all implementations act in the same manner. PLC is based on the idea that any copper medium will transport any electrical signal for a certain distance. Basically, a radio signal is modulated with the data we wish to send. This radio signal is then sent down the copper medium (our power lines) in a band of frequencies not used for the purposes of supplying and managing electricity.
The frequencies and encoding schemes used greatly influence both the efficiency and the speed of the PLC service. Most PLC radio traffic generally occurs in the same band, roughly 1.6 MHz to 80 MHz. These frequencies lie in the MF Medium Frequency (300 kHz - 3 MHz), HF High Frequency (3 MHz - 30 MHz) and part of the VHF Very High Frequency (30 MHz - 300 MHz) spectrum. Various encoding schemes have been used for sending the data along the power lines; these include:


GMSK: Used with the single-carrier version of PLC, providing low bandwidths (<1 Mbps)
CDMA: Used with the single-carrier version of PLC, providing low bandwidths (<1 Mbps)
OFDM: Used with the multi-carrier version of PLC, providing a bandwidth of around 45 Mbps
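To illustrate the multi-carrier idea behind OFDM, here is a minimal Python sketch that maps bits onto parallel subcarriers with an inverse FFT and adds a cyclic prefix. The carrier count, QPSK mapping and prefix length are illustrative choices, not parameters of any real PLC standard.

import numpy as np

# Sketch of one OFDM symbol: QPSK on each subcarrier, combined by an IFFT.
N_CARRIERS = 64
CYCLIC_PREFIX = 16

def ofdm_symbol(bits):
    """Build one OFDM symbol from 2*N_CARRIERS bits (QPSK per subcarrier)."""
    bits = np.asarray(bits).reshape(N_CARRIERS, 2)
    # QPSK: each bit pair selects one of four phases.
    symbols = (1 - 2 * bits[:, 0]) + 1j * (1 - 2 * bits[:, 1])
    time_domain = np.fft.ifft(symbols)
    # Prepend a cyclic prefix to absorb echoes on the power line.
    return np.concatenate([time_domain[-CYCLIC_PREFIX:], time_domain])

rng = np.random.default_rng(0)
tx = ofdm_symbol(rng.integers(0, 2, size=2 * N_CARRIERS))
print(len(tx), "complex samples per OFDM symbol")   # 80

Spreading the data over many slower subcarriers is what lets OFDM tolerate the noisy, echo-prone power-line channel better than a single fast carrier.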

RPR

Definition


The nature of the public network has changed. Demand for Internet Protocol (IP) data is growing at a compound annual rate of between 100% and 800%, while voice demand remains stable. What was once a predominantly circuit-switched network handling mainly circuit-switched voice traffic has become a circuit-switched network handling mainly IP data. Because the nature of the traffic is not well matched to the underlying technology, this network is proving very costly to scale. User spending has not increased proportionally to the rate of bandwidth increase, and carrier revenue growth is stuck at the lower end of 10% to 20% per year. The result is that carriers are building themselves out of business.

Over the last 10 years, as data traffic has grown both in importance and volume, technologies such as frame relay, ATM, and Point-to-Point Protocol (PPP) have been developed to force fit data onto the circuit network. While these protocols provided virtual connections, a useful approach for many services, they have proven too inefficient, costly and complex to scale to the levels necessary to satisfy the insatiable demand for data services. More recently, Gigabit Ethernet (GigE) has been adopted by many network service providers as a way to network user data without the burden of SONET/SDH and ATM. However, GigE's shortcomings when applied in carrier networks were soon recognized, and to address these problems a technology called Resilient Packet Ring (RPR) was developed.

RPR retains the best attributes of SONET/SDH, ATM, and Gigabit Ethernet. RPR is optimized for differentiated IP and other packet data services, while providing uncompromised quality for circuit voice and private line services. It works in point-to-point, linear, ring, or mesh networks, providing ring survivability in less than 50 milliseconds. RPR dynamically and statistically multiplexes all services into the entire available bandwidth in both directions on the ring while preserving bandwidth and service quality guarantees on a per-customer, per-service basis. And it does all this at a fraction of the cost of legacy SONET/SDH and ATM solutions.

Data, rather than voice circuits, dominates today's bandwidth requirements. New services such as IP VPN, voice over IP (VoIP), and digital video are no longer confined within the corporate local-area network (LAN). These applications are placing new requirements on metropolitan-area network (MAN) and wide-area network (WAN) transport. RPR is uniquely positioned to fulfill these bandwidth and feature requirements as networks transition from circuit-dominated to packet-optimized infrastructures.

RPR technology uses a dual counter-rotating fiber ring topology. Both rings (inner and outer) are used to transport working traffic between nodes. By using both fibers, instead of keeping a spare fiber for protection, RPR utilizes the total available ring bandwidth. These fibers or ringlets are also used to carry control (topology updates, protection, and bandwidth control) messages. Control messages flow in the opposite direction of the traffic that they represent. For instance, outer-ring traffic-control information is carried on the inner ring to upstream nodes.
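A toy Python model of the dual-ring idea: given a ring of N nodes, a source picks whichever ringlet reaches the destination in fewer hops. The node count and numbering are invented for illustration.

# Toy model of dual counter-rotating rings: pick the shorter direction.
N_NODES = 8

def choose_ringlet(src, dst):
    """Return (ringlet, hops) for the shorter direction around the ring."""
    outer_hops = (dst - src) % N_NODES     # traffic travelling one way
    inner_hops = (src - dst) % N_NODES     # traffic travelling the other way
    if outer_hops <= inner_hops:
        return "outer", outer_hops
    return "inner", inner_hops

print(choose_ringlet(0, 2))   # ('outer', 2)
print(choose_ringlet(0, 6))   # ('inner', 2)

Because both ringlets carry working traffic, a failure simply steers traffic onto the surviving direction, which is how RPR achieves its sub-50-millisecond ring survivability.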

IP Telephony

Definition


If you've never heard of Internet Telephony, get ready to change the way you think about long-distance phone calls. Internet Telephony, or Voice over Internet Protocol, is a method for taking analog audio signals, like the kind you hear when you talk on the phone, and turning them into digital data that can be transmitted over the Internet.
How is this useful? Internet Telephony can turn a standard Internet connection into a way to place free phone calls. The practical upshot of this is that by using some of the free Internet Telephony software that is available to make Internet phone calls, you are bypassing the phone company (and its charges) entirely.


Internet Telephony is a revolutionary technology that has the potential to completely rework the world's phone systems. Internet Telephony providers like Vonage have already been around for a little while and are growing steadily. Major carriers like AT&T are already setting up Internet Telephony calling plans in several markets around the United States, and the FCC is looking seriously at the potential ramifications of Internet Telephony service.
Above all else, Internet Telephony is basically a clever "reinvention of the wheel." In this article, we'll explore the principles behind Internet Telephony, its applications and the potential of this emerging technology, which will more than likely one day replace the traditional phone system entirely.


The interesting thing about Internet Telephony is that there is not just one way to place a call.

There are three different "flavors" of Internet Telephony service in common use today:
ATA - The simplest and most common way is through the use of a device called an ATA (analog telephone adaptor). The ATA allows you to connect a standard phone to your computer or your Internet connection for use with Internet Telephony.

The ATA is an analog-to-digital converter. It takes the analog signal from your traditional phone and converts it into digital data for transmission over the Internet. Providers like Vonage and AT&T CallVantage are bundling ATAs free with their service. You simply crack the ATA out of the box, plug the cable from your phone that would normally go in the wall socket into the ATA, and you're ready to make Internet Telephony calls. Some ATAs may ship with additional software that is loaded onto the host computer to configure it; but in any case, it is a very straightforward setup.


IP Phones - These specialized phones look just like normal phones with a handset, cradle and buttons. But instead of having the standard RJ-11 phone connectors, IP phones have an RJ-45 Ethernet connector. IP phones connect directly to your router and have all the hardware and software necessary right onboard to handle the IP call. Wi-Fi phones allow subscribing callers to make Internet Telephony calls from any Wi-Fi hot spot.


Computer-to-computer - This is certainly the easiest way to use Internet Telephony. You don't even have to pay for long-distance calls. There are several companies offering free or very low-cost software that you can use for this type of Internet Telephony. All you need is the software, a microphone, speakers, a sound card and an Internet connection, preferably a fast one like you would get through a cable or DSL modem. Except for your normal monthly ISP fee, there is usually no charge for computer-to-computer calls, no matter the distance.
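A back-of-the-envelope Python sketch of how computer-to-computer calls packetize speech: sample the microphone, chop the samples into short frames, and send each frame as one packet. The figures assume 8 kHz, 8-bit samples and 20 ms frames, typical of uncompressed telephone-quality audio, not the settings of any particular product.

# Rough arithmetic for packetizing telephone-quality audio (assumed figures).
SAMPLE_RATE_HZ = 8000
BITS_PER_SAMPLE = 8
FRAME_MS = 20

samples_per_packet = SAMPLE_RATE_HZ * FRAME_MS // 1000
payload_bytes = samples_per_packet * BITS_PER_SAMPLE // 8
packets_per_second = 1000 // FRAME_MS
payload_bitrate_kbps = payload_bytes * 8 * packets_per_second / 1000

print(f"{samples_per_packet} samples per packet")         # 160
print(f"{payload_bytes} payload bytes per packet")        # 160
print(f"{packets_per_second} packets per second")         # 50
print(f"{payload_bitrate_kbps:.0f} kbps before headers")  # 64 kbps

This is why even a modest broadband connection can comfortably carry a call, and why compression in the software can push the rate far lower still.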


If you're interested in trying Internet Telephony, then you should check out some of the free Internet Telephony software available on the Internet. You should be able to download and set it up in about three to five minutes. Get a friend to download the software, too, and you can start tinkering with Internet Telephony to get a feel for how it works.

M-Commerce

Definition

Advances in e-commerce have resulted in progress towards strategies, requirements and development of e-commerce applications. Nearly all the e-commerce applications envisioned so far assume fixed or stationary users with wired infrastructure, such as a browser on a PC connected to the Internet using phone lines or a LAN.

Many people do not use a PC outside the office, but keep a mobile phone at their side at all times. Mobile commerce is perfect for this group.

M-commerce allows one to reach the consumer directly, not his fax machine, his desk, his secretary or his mailbox, but the consumer himself, regardless of where he is.

M-commerce is "the delivery of electronic commerce capabilities directly into the hands, anywhere, via wireless technology" and "putting a retail outlet in the customer's hands anywhere." This can be done with just a mobile phone, a PDA connected to a mobile phone or even a portable PC connected to a mobile phone. M-commerce is also termed as wireless e-commerce.

Background And Motivation

Electronic commerce has attracted significant attention in the last few years. Advances in e-commerce have resulted in significant progress towards strategies, requirements and development of e-commerce applications. Nearly all the applications envisioned and developed so far assume fixed or stationary users with wired infrastructure, such as a browser on a PC connected to the Internet using phone lines or a Local Area Network. A new e-commerce application such as "wireless e-commerce" or "mobile e-commerce" allows one to reach the consumer directly, regardless of where he is.

The emergence of M-commerce, a synonym for wireless e-commerce, allows one to perform the same functions that can be done over the Internet. This can be done by connecting a PDA to a mobile phone, or even a portable PC connected to a mobile phone. Mobile commerce is perfect for the group who always keep a mobile phone by their side at all times. A study from the wireless data and computing service, a division of Strategy Analytics, reports that the mobile commerce market may rise to $200 billion by 2004. The report predicts that transactions via wireless devices will generate about $14 billion a year.

We are aware that consensus within business and industry on future applications is still in its infancy. However, we are interested in examining those future applications and technologies that will form the next frontier of electronic commerce. To help shape future applications and to allow designers, developers and researchers to strategize and create mobile commerce applications, a four-level integrated framework is proposed.

Migration From GSM Network To GPRS

Definition


The General Packet Radio System (GPRS) is a new service that provides actual packet radio access for mobile Global System for Mobile Communications (GSM) and Time-Division Multiple Access (TDMA) users. The main benefits of GPRS are that it reserves radio resources only when there is data to send and it reduces reliance on traditional circuit-switched network elements. The increased functionality of GPRS will decrease the incremental cost to provide data services, an occurrence that will, in turn, increase the penetration of data services among consumer and business users. In addition, GPRS will allow improved quality of data services as measured in terms of reliability, response time, and features supported.

The unique applications that will be developed with GPRS will appeal to a broad base of mobile subscribers and allow operators to differentiate their services. These new services will increase capacity requirements on the radio and base-station subsystem resources. One method GPRS uses to alleviate the capacity impacts is sharing the same radio resource among all mobile stations in a cell, providing effective use of the scarce resources. In addition, new core network elements will be deployed to support the high burstiness of data services more efficiently.


The General Packet Radio Service (GPRS) is a new non-voice value-added service that allows information to be sent and received across a mobile telephone network. It supplements today's Circuit Switched Data and Short Message Service. GPRS is NOT related to GPS (the Global Positioning System), a similar acronym that is often used in mobile contexts.


In addition to providing new services for today's mobile user, GPRS is important as a migration step toward third-generation (3G) networks. GPRS will allow network operators to implement an IP-based core architecture for data applications, which will continue to be used and expanded upon for 3G services for integrated voice and data applications. In addition, GPRS will prove a testing and development area for new services and applications, which will also be used in the development of 3G services.
In addition to the GPRS timeline, it is necessary to investigate the 3G deployment timeline. Because many GPRS operators are either planning to deploy or are investigating 3G, GPRS can be seen as a migration step toward 3G. Several proof-of-concept type trials are currently under way, and these trials will lead to more technical- and application-oriented trials in early 2001. As with GPRS, terminal and infrastructure availability are driving factors. In addition, completion of the licensing process is a necessary step for commercial deployment.
Global System For Mobile Communication (GSM)

Global System for Mobile (GSM) is a second-generation cellular system standard that was developed to solve the fragmentation problems of the first cellular systems in Europe. GSM is the world's first cellular system to specify digital modulation and network-level architecture and services. Before GSM, European countries used different cellular standards throughout the continent, and it was not possible for a customer to use a single subscriber unit throughout Europe. GSM's success has exceeded the expectations of virtually everyone, and it is now the world's most popular standard for new cellular radio and personal communication equipment throughout the world.

MiniDisc system

Definition


The MiniDisc system was introduced in the consumer audio market as a new digital audio playback and recording system, just ten years after the introduction of the Compact Disc (CD). As is known, the CD has effectively replaced vinyl LP records in the audio disc market. CD technology is based on 16-bit quantization and 44.1-kHz sampled digital audio recording. CD sound quality was a considerable improvement over any consumer analog recording equipment.


Before starting the CD business, many engineers engaged in the development of the CD solely for its improvement in sound quality, but after the introduction of the CD player into the market, we found out that the consumer became aware of the quick random-access characteristic of the optical disc system. The next target of development was obviously to be the rewritable CD. Two different recordable CD systems were established. One is the write-once CD named CD-R and the other is the re-writable CD named CD-MO.


Sales of cassette tapes had been decreasing since 1989. Even if recordable CD were to be accepted by the consumer, it would still be difficult to break into the portable market. Here, portable compact cassette dominated because of its strong resistance to vibration and its compactness. Clear targets for a new disc system were to overcome these weaknesses. Sony was able to achieve this by introducing a disc system called MiniDisc (MD).
The name MiniDisc (MD) comes from its size. MiniDisc was developed by Sony as an audio medium that combines the merits of both the CD (supreme quality) and tape (recordability). The disc, with a diameter of 64 mm and a thickness of only 1.2 mm, is placed inside a cartridge of 72 X 68 X 5 mm.

The cartridge protects the disc from exposure and withstands forces, eliminating the problems associated with CDs (scratches) or tape (tangles). The MiniDisc is based on Magneto-Optical technology, which is essentially a method of recording information by using a laser to alter magnetic information on the disc. In order to alter the information, the disc has to be heated to a high temperature, meaning that if left on a desk near a magnet it should remain unaffected, unless you heat the disc to the required 180°C.

Types Of MiniDiscs
Premastered MiniDiscs are used most commonly for music and are sold in record stores just the same as compact cassettes and CDs are. Minidiscs, just like CDs, are manufactured in large volumes by high-speed injection molders, and the music signals are recorded during replication in the form of pits. Moreover, the discs are encased in a cartridge, so there is no worry about their being scratched. The design of the premastered Minidisc cartridges is special. Prerecorded music packages require a label, featuring the artist's picture or other information. Therefore the top face of the cartridge is left completely free for the label.

A window for the laser beam to read the disc is only necessary on the bottom face. Both a CD and a MiniDisc can store the same amount of music. The difference is that a MiniDisc uses a digital compression technique called ATRAC (Adaptive Transform Acoustic Coding) to compress audio data in a 1:5 ratio by eliminating inaudible frequencies and faint background noises.
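The 1:5 figure can be sanity-checked with a few lines of Python, starting from the CD parameters given above (16-bit, 44.1 kHz, stereo); the result is approximate, since real ATRAC rates vary slightly.

# Rough check of the 1:5 compression figure against the CD data rate.
SAMPLE_RATE_HZ = 44_100
BITS_PER_SAMPLE = 16
CHANNELS = 2
COMPRESSION_RATIO = 5

cd_kbps = SAMPLE_RATE_HZ * BITS_PER_SAMPLE * CHANNELS / 1000
md_kbps = cd_kbps / COMPRESSION_RATIO

print(f"CD audio: {cd_kbps:.1f} kbps")        # ~1411.2 kbps
print(f"MiniDisc (1:5): {md_kbps:.1f} kbps")  # ~282 kbps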

Modular Computing

Definition


IT's Challenge

In the past three years, the world has changed for information technology groups. In the late 1990s, the predominant problem was deploying equipment and software quickly enough to keep up with demand for computing. While the tech sector boomed on Wall Street, money was no object. IT budgets swelled and the number of computers in data centers grew exponentially.


Now, in the early 2000s, the picture is very different. IT budgets are flat or down, yet business demand for IT services continues to escalate. This combination of more demand and constrained budgets has compelled IT groups to consider new approaches to IT infrastructure, approaches that offer more flexibility and lower cost of ownership.
The common theme is cost cutting. In today's world, profits come less easily than in the 1990s. Competitors are more experienced, and competition is more intense. Corporations that trim costs while providing great service will prevail over those that can't.


IT plays a major role in this competitive situation. As competition becomes more intense, so does the pressure on IT to cut costs and boost contribution. Now more than ever, large corporations are using their computing assets as tools to pull ahead of the competition.

Winning through Modularity

As Janet Matsuda, SGI's director of Graphics Product Marketing, says: "Modularity offers both savings and scalability so that customers don't waste their money on what they don't want and can spend it on what they do want."
Debra Goldfarb, group vice president at analyst firm IDC, agrees: "Modular computing empowers end users to build the kind of environment that they need not only today but over time."

Doing More With Less

To keep up with computing demand while operating within restricted budgets, IT must find ways to optimally use computing resources and reduce people costs. There are many areas of improvement.

Cost of Over-Provisioning

As data centers have moved toward servers and away from mainframes, IT has found that some mainframe capabilities weren't available on servers. A glaring example is that smaller servers were unable to rapidly obtain more processing power to accommodate peaks in computing demand. As applications became more transactional, for example with customers entering information via the Web, these peaks in computing demand became more visible.

During peak demand, customers saw their transactions slow down. In situations where these transactions affect the bottom line, as when customers enter purchases, prompt processing becomes vital to the business. As the number of customers using Web services has increased, the peaks in computing demand became more intense and more frequent. Consequently, customers more frequently saw declines in performance.

Motes

Definition


Over the last year or so you may have heard about a new computing concept known as motes. This concept is also called smart dust and wireless sensing networks. It seems like just about every issue of Popular Science, Discover and Wired today contains a blurb about some new application of the mote idea. For example, the military plans to use them to gather information on battlefields, and engineers plan to mix them into concrete and use them to internally monitor the health of buildings and bridges.


There are thousands of different ways that motes might be used, and as people get familiar with the concept they come up with even more. It is a completely new paradigm for distributed sensing, and it is opening up a fascinating new way to look at computers. In this article, you will have a chance to understand how motes work and see many of the possible applications of the technology. Then we will look at a MICA mote -- an existing technology that you can buy to experiment with this unique way of sensing the world.


Sensor network applications:

Sensor networks have been applied to various research areas at a number of academic institutions. In particular, environmental monitoring has received a lot of attention, with major projects at UCB, UCLA and other places. In addition, commercial pilot projects are starting to emerge as well. There are a number of start-up companies active in this space, and they are providing mote hardware as well as application software and back-end infrastructure solutions. The University of California at Berkeley, in conjunction with the local Intel Lab, is conducting an environmental monitoring project using mote-based sensor networks on Great Duck Island off the coast of Maine. This endeavor includes the deployment of tens of motes and several gateways in a fairly harsh outdoor environment.

The motes are equipped with a variety of environmental sensors (temperature, humidity, light, atmospheric pressure, motion, etc.). They form a self-organizing multi-hop sensor network that is linked via gateways to a base station on the island. There, the data is collected and transmitted via a satellite link to the Internet. This setup enabled researchers to continuously monitor an endangered bird species on the island without constant perturbation of their habitat. The motes gather detailed data on the bird population and their environment around the clock, something that would not be practical with researchers on site.
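The multi-hop forwarding idea can be sketched in a few lines of Python: each mote forwards a reading to the neighbour that is fewer hops from the gateway. The topology below is an invented example, not the actual island deployment.

# Toy multi-hop sensor network: route readings toward the gateway by hop count.
neighbours = {              # node -> neighbours it can hear
    "gateway": ["m1", "m2"],
    "m1": ["gateway", "m3"],
    "m2": ["gateway", "m3"],
    "m3": ["m1", "m2", "m4"],
    "m4": ["m3"],
}

def hop_counts(root="gateway"):
    """Breadth-first search: hops from every mote to the gateway."""
    hops, frontier = {root: 0}, [root]
    while frontier:
        node = frontier.pop(0)
        for n in neighbours[node]:
            if n not in hops:
                hops[n] = hops[node] + 1
                frontier.append(n)
    return hops

def route(src):
    hops, path = hop_counts(), [src]
    while path[-1] != "gateway":
        path.append(min(neighbours[path[-1]], key=lambda n: hops[n]))
    return path

print(route("m4"))   # ['m4', 'm3', 'm1', 'gateway']

Real motes build and repair such routes themselves, which is what "self-organizing" means in practice.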

Intel Mote Hardware

The Intel Mote has been designed after a careful study of the application space for sensor networks. We have interviewed a number of researchers in this space and collected their feedback on desired improvements over currently available mote designs. A list of requests that have been repeatedly mentioned includes the following key items:
o Increased CPU processing power. In particular, for applications such as acoustic sensing and localization, additional computational resources are required.
o Increased main memory size. Similar to the item above, sensor network applications are beginning to stretch the limits of existing hardware designs. This need is amplified by the desire to perform localized computation on the motes.

MPEG-7

Definition


As more and more audiovisual information becomes available from many sources around the world, many people would like to use this information for various purposes. This challenging situation led to the need for a solution that quickly and efficiently searches for and/or filters various types of multimedia material that's interesting to the user.
For example, finding information by rich-spoken queries, hand-drawn images, and humming improves the user-friendliness of computer systems and finally addresses what most people have been expecting from computers. For professionals, a new generation of applications will enable high-quality information search and retrieval.

For example, TV program producers can search with "laser-like precision" for occurrences of famous events or references to certain people, stored in thousands of hours of audiovisual records, in order to collect material for a program. This will reduce program production time and increase the quality of its content.
MPEG-7 is a multimedia content description standard (to be defined by September 2001) that addresses how humans expect to interact with computer systems, since it develops rich descriptions that reflect those expectations.



The Moving Pictures Experts Group, abbreviated MPEG, is part of the International Organization for Standardization (ISO) and defines standards for digital video and digital audio. The primary task of this group was to develop a format to play back video and audio in real time from a CD. Meanwhile the demands have grown: besides the CD, the DVD needs to be supported, as well as transmission equipment like satellites and networks. All these operational uses are covered by a broad selection of standards. Well known are the standards MPEG-1, MPEG-2, MPEG-4 and MPEG-7.

Each standard provides levels and profiles to support special applications in an optimized way.
It's clearly much more fun to develop multimedia content than to index it. The amount of multimedia content available -- in digital archives, on the World Wide Web, in broadcast data streams and in personal and professional databases -- is growing out of control. But this enthusiasm has led to increasing difficulties in accessing, identifying and managing such resources due to their volume and complexity and a lack of adequate indexing standards. The large number of recently funded DLI-2 projects related to the resource discovery of different media types, including music, speech, video and images, indicates an acknowledgement of this problem and the importance of this field of research for digital libraries.


MPEG-7 is being developed by the Moving Pictures Expert Group (MPEG) a working group of ISO/IEC. Unlike the preceding MPEG standards (MPEG-1, MPEG-2, MPEG-4) which have mainly addressed coded representation of audio-visual content, MPEG-7 focuses on representing information about the content, not the content itself.
The goal of the MPEG-7 standard, formally called the "Multimedia Content Description Interface", is to provide a rich set of standardized tools to describe multimedia content.


Smart Note Taker

Definition

The Smart NoteTaker is a helpful product that satisfies the needs of people in today's fast, technological life. This product can be used in many ways. The Smart NoteTaker provides fast and easy note taking for people who are busy with something else. With the help of the Smart NoteTaker, people will be able to write notes in the air while being busy with their work. The written note will be stored on the memory chip of the pen and can be read in a digital medium after the job is done. This will save time and facilitate life.

The Smart NoteTaker is also helpful for the blind, who can use it to think and write freely. Another place where our product can play an important role is when two people talk on the phone. The subscribers are apart from each other while they talk, and they may want to use figures or text to understand each other better. It is also useful for instructors giving presentations. An instructor may not want to present the lecture in front of the board. The drawn figure can be processed and sent directly to the server computer in the room. The server computer can then broadcast the drawn shape over the network to all of the computers present in the room. In this way, lectures are intended to be more efficient and fun. This product will be simple but powerful. The product will be able to sense 3D shapes and motions that the user tries to draw. The sensed information will be processed, transferred to the memory chip and then displayed on the display device. The drawn shape can then be broadcast to the network or sent to a mobile device.

An additional feature of the product will display the notes that were taken before in the application program used on the computer. This application program can be a word document or an image file. The sensed figures that were drawn in the air will then be recognized, and with the help of the software program we will write, the desired character will be printed in the word document. If the application program is a paint-related program, then the most similar shape will be chosen by the program and printed on the screen.

Since a Java applet is suitable for both drawings and strings, all these applications can be put together by developing a single Java program. The Java code that we develop will also be installed on the pen, so that the processor inside the pen can type and draw the desired shape or text on the display panel.

Storage Area Network

Definition
Rapid growth in data-intensive applications continues to fuel the demand for raw data storage capacity. Applications such as data warehousing, data mining, online transaction processing and multimedia Internet browsing have led to a near doubling of the total storage capacity being shipped globally on an annual basis. Analysts also predict that the number of network connections for server-storage subsystems will exceed the number of client connections, further fuelling the demand for network storage.

Limitations loom over surge of data.
With the rise of client networking and data-centric computing applications, virtually all network-stored data has become mission-critical in nature. This increasing reliance on access to enterprise data is challenging the limitations of traditional server-storage solutions. As a result, the ongoing need to add more storage, service more users and back up more data has become a monumental task. Having endured for nearly two decades, the parallel Small Computer System Interface (SCSI) bus that has facilitated server-storage connectivity for Local Area Network (LAN) servers is imposing severe limitations on network storage. Compounding these limitations is the traditional use of LAN connections for server storage backup, which detracts from usable client bandwidth. To contend with these limitations, network managers are often forced to compromise on critical aspects of system availability, reliability and efficiency. To address the debilitating and potentially costly effects of these constraints, an infrastructure for server-storage connectivity which can support current and future demands is badly needed.


The Storage Area Network (SAN) is an emerging data communication platform which interconnects servers and storage at gigabaud speeds. By combining LAN networking with core building blocks of server performance and mass storage capacity, SAN eliminates the bandwidth bottlenecks and scalability limitations imposed by previous SCSI bus-based architectures.

In addition to the fundamental connectivity benefits of SAN, the new capabilities facilitated by its networking approach enhance its value as a long-term infrastructure. These capabilities, which include compute clustering, topological flexibility, fault tolerance, high availability, and remote management, further elevate SAN's ability to address the growing challenges of data-intensive, mission-critical applications. From a client network perspective, the SAN environment complements the ongoing advancements in LAN and WAN technologies by extending the benefits of improved performance and capabilities all the way from the client and backbone through to servers and storage.


Fiber Channel : The Open SAN Solution.

Over the past year, Fiber Channel-Arbitrated Loop (FC-AL) has emerged as the high-speed, serial technology of choice for server-storage connectivity. Most organizations prefer this solution because of its widely endorsed open standards. This broad acceptance is attributed not only to FC-AL's high bandwidth and high scalability but also to its unique ability to support multiple protocols, such as SCSI and IP, over a single physical connection. This enables the SAN infrastructure to serve as both a server interconnect and as a direct interface to storage devices and storage arrays.

Optical Packet Switching Network

Definition

Within today's Internet, data is transported using wavelength division multiplexed (WDM) optical fiber transmission systems that carry 32-80 wavelengths modulated at 2.5 Gb/s and 10 Gb/s per wavelength. Today's largest routers and electronic switching systems need to handle close to 1 Tb/s to redirect incoming data from deployed WDM links. Meanwhile, next-generation commercial systems will be capable of single-fiber transmission supporting hundreds of wavelengths at 10 Gb/s, and world experiments have demonstrated 10 Tb/s transmission.

The ability to direct packets through the network when single-fiber transmission capacities approach this magnitude may require electronics to run at rates that outstrip Moore's law. The bandwidth mismatch between fiber transmission systems and electronic routers becomes more acute when we consider that future routers and switches will potentially terminate hundreds of wavelengths, and the increase in bit rate per wavelength will head beyond 40 Gb/s to 160 Gb/s. Even with significant advances in electronic processor speed, electronic memory access times improve only at a rate of approximately 5% per year, an important data point since memory plays a key role in how packets are buffered and directed through a router.

Additionally, opto-electronic interfaces dominate the power dissipation, footprint and cost of these systems, and do not scale well as the port count and bit rate increase. Hence it is not difficult to see that the process of moving a massive number of packets through the multiple layers of electronics in a router can lead to congestion and exceed both the performance of the electronics and the ability to efficiently handle the dissipated power.


In this article we review the state of the art in optical packet switching and, more specifically, the role optical signal processing plays in performing key functions. It describes how all-optical wavelength converters can be implemented as optical signal processors for packet switching, in terms of their processing functions, wavelength-agile steering capabilities, and signal regeneration capabilities. Examples of how wavelength-converter-based processors can be used to implement asynchronous packet switching functions are reviewed. Two classes of wavelength converters will be touched on: monolithically integrated semiconductor optical amplifier (SOA) based and nonlinear fiber based.

Speed Detection of moving vehicle using speed cameras

Definition

Although there is good road safety performance, the number of people killed and injured on our roads remains unacceptably high. So the road safety strategy was introduced to support the new casualty reduction targets. The road safety strategy includes all forms of intervention based on engineering, education and enforcement, and recognizes that there are many different factors that lead to traffic collisions and casualties. The main factor is the speed of the vehicle. We use traffic lights and other traffic management measures to reduce speed. One of them is the speed camera.

Speed cameras are placed on the side of urban and rural roads, usually to catch transgressors of the stipulated speed limit for that road. The purpose of the speed camera is solely to identify and prosecute those drivers who pass by it while exceeding the stipulated speed limit.

At first glance this seems reasonable: ensuring that road users do not exceed the speed limit must be a good thing, because it increases road safety, reduces accidents and protects other road users and pedestrians.
So speed limits are a good idea. To enforce these speed limits, laws are passed making speeding an offence, and signs are erected to indicate the maximum permissible speeds. The police can't be everywhere to enforce the speed limit, so enforcement cameras are directed to do this work; no one with an ounce of common sense deliberately drives through a speed camera in order to be fined and penalized.

So nearly everyone slows down for the speed camera. We finally have a solution to the speeding problem. Now, if we assume that speed cameras are the only way to make drivers slow down, and that they work efficiently, then we would expect there to be a great number of them everywhere, and that they would be highly visible and identifiable in order to make drivers slow down.
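One common enforcement variant, average-speed measurement between two camera sites, is easy to sketch in Python: two cameras a known distance apart timestamp the same number plate, and the average speed follows from distance over time. The distance, timestamps and limit below are illustrative values only.

# Average speed from two timestamped sightings of the same vehicle.
def average_speed_kmh(distance_m, t_first_s, t_second_s):
    elapsed = t_second_s - t_first_s
    return (distance_m / elapsed) * 3.6      # m/s -> km/h

DISTANCE_M = 2000          # 2 km between camera sites (assumed)
SPEED_LIMIT_KMH = 60       # assumed limit for this stretch

speed = average_speed_kmh(DISTANCE_M, t_first_s=0.0, t_second_s=100.0)
print(f"average speed: {speed:.1f} km/h")    # 72.0 km/h
if speed > SPEED_LIMIT_KMH:
    print("over the limit -> flag for prosecution")

Unlike a single camera, this approach cannot be defeated by braking briefly at the camera site, since it is the speed over the whole stretch that is measured.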

Radio Frequency Light Sources

Definition

RF light sources follow the same principles of converting electrical power into visible radiation as conventional gas discharge lamps. The fundamental difference between RF lamps and conventional lamps is that RF lamps operate without electrodes. The presence of electrodes in conventional fluorescent and High Intensity Discharge lamps has put many restrictions on lamp design and performance and is a major factor limiting lamp life.

Recent progress in semiconductor power switching electronics, which is revolutionizing many sectors of the electrical industry, and a better understanding of RF plasma characteristics have made it possible to drive lamps at high frequencies. The very first proposal for RF lighting, as well as the first patent on RF lamps, appeared about 100 years ago, half a century before the basic principles of lighting technology based on gas discharge had been developed.

Discharge tubes
A discharge tube is a device in which a gas conducting an electric current emits visible light. It is usually a glass tube from which virtually all the air has been removed (producing a near vacuum), with electrodes at each end. When a high-voltage current is passed between the electrodes, the few remaining gas atoms (or some deliberately introduced ones) ionize and emit coloured light as they conduct the current along the tube.

The light originates as electrons change energy levels in the ionized atoms. By coating the inside of the tube with a phosphor, invisible emitted radiation (such as ultraviolet light) can produce visible light; this is the principle of the fluorescent lamp. We will consider different kinds of RF discharges and their advantages and restrictions for lighting applications.




Voice morphing

Definition

Voice morphing means the transition of one speech signal into another. Like image morphing, speech morphing aims to preserve the shared characteristics of the starting and final signals, while generating a smooth transition between them. Speech morphing is analogous to image morphing. In image morphing the in-between images all show one face smoothly changing its shape and texture until it turns into the target face. It is this feature that a speech morph should possess. One speech signal should smoothly change into another, keeping the shared characteristics of the starting and ending signals but smoothly changing the other properties.

The major properties of concern in a speech signal are its pitch and envelope information. These two reside in a convolved form in the signal, so an efficient method for extracting each of them is necessary. We have adopted an uncomplicated approach, namely cepstral analysis, to do this. Pitch and formant information in each signal is extracted using the cepstral approach. The processing needed to obtain the morphed speech signal includes cross-fading of envelope information, Dynamic Time Warping to match the major signal features (pitch), and signal re-estimation to convert the morphed speech signal back into an acoustic waveform.
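As a rough illustration of the cepstral step, the sketch below (assuming NumPy; the frame length and pitch search range are illustrative choices, not values taken from the system described here) shows how the real cepstrum of a single frame exposes the pitch period, while the low-quefrency coefficients carry the spectral envelope:

import numpy as np

def frame_pitch_cepstrum(frame, fs, fmin=50.0, fmax=400.0):
    """Estimate the pitch of one speech frame from its real cepstrum."""
    spectrum = np.fft.rfft(frame * np.hamming(len(frame)))
    log_mag = np.log(np.abs(spectrum) + 1e-10)   # log magnitude spectrum
    cepstrum = np.fft.irfft(log_mag)             # real cepstrum (quefrency domain)
    # The low-quefrency coefficients describe the envelope (formants);
    # the pitch appears as a peak at the quefrency equal to the pitch period.
    qmin, qmax = int(fs / fmax), int(fs / fmin)
    period = qmin + np.argmax(cepstrum[qmin:qmax])
    return fs / period                           # pitch estimate in Hz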

INTROSPECTION OF THE MORPHING PROCESS

Speech morphing can be achieved by transforming the signal's representation from the acoustic waveform obtained by sampling the analog signal, with which many people are familiar, to another representation. To prepare the signal for the transformation, it is split into a number of 'frames' - sections of the waveform. The transformation is then applied to each frame of the signal. This provides another way of viewing the signal information. The new representation (said to be in the frequency domain) describes the average energy present in each frequency band.

Further analysis enables two pieces of information to be obtained: pitch information and the overall envelope of the sound. A key element in the morphing is the manipulation of the pitch information. If two signals with different pitches were simply cross-faded, it is highly likely that two separate sounds would be heard. This occurs because the signal would have two distinct pitches, causing the auditory system to perceive two different objects. A successful morph must exhibit a smoothly changing pitch throughout.

The pitch information of each sound is compared to provide the best match between the two signals' pitches. To do this match, the signals are stretched and compressed so that important sections of each signal match in time. The interpolation of the two sounds can then be performed which creates the intermediate sounds in the morph. The final stage is then to convert the frames back into a normal waveform.
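The time-alignment step can be pictured with a small Dynamic Time Warping routine. The version below is a generic textbook formulation (assuming NumPy), not the system's actual implementation; it aligns two per-frame pitch tracks so that corresponding sections match in time before interpolation:

import numpy as np

def dtw_path(a, b):
    """Align two 1-D feature sequences (e.g. per-frame pitch tracks)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],       # stretch a
                                 cost[i, j - 1],       # stretch b
                                 cost[i - 1, j - 1])   # match
    # Backtrack to recover which frame of a lines up with which frame of b.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]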

Wireless Fidelity

Definition

Wi-Fi, or Wireless Fidelity, is freedom: it allows you to connect to the Internet from your couch at home, in a hotel room, or in a conference room at work, without wires. Wi-Fi is a wireless technology, like a cell phone. Wi-Fi enabled computers send and receive data indoors and out, anywhere within range of a base station. And best of all, it is fast.

However, you only have true freedom to be connected anywhere if your computer is configured with a Wi-Fi CERTIFIED radio (a PC card or similar device). Wi-Fi certification means that you will be able to connect anywhere there are other Wi-Fi CERTIFIED products - whether at home, in the office, or in airports, coffee shops and other public areas equipped with Wi-Fi access. Wi-Fi will be a major force behind hotspots; more than 400 airports and hotels in the US are targeted as Wi-Fi hotspots.

The Wi-Fi CERTIFIED logo is your only assurance that the product has met rigorous interoperability testing requirements to assure products from different vendors will work together. The Wi-Fi CERTIFIED logo means that it is a "safe" buy.

Wi-Fi certification comes from the Wi-Fi Alliance, a non-profit international trade organisation that tests 802.11-based wireless equipment to make sure that it meets the Wi-Fi standard and works with all other manufacturers' Wi-Fi equipment on the market. The Wi-Fi Alliance (formerly WECA) also has a Wi-Fi certification program for Wi-Fi products that meet interoperability standards. It is an international organisation devoted to certifying the interoperability of 802.11 products and to promoting 802.11 as the global wireless LAN standard across all market segments.

IEEE 802.11 ARCHITECTURES

In IEEE's proposed standard for wireless LANs (IEEE 802.11), there are two different ways to configure a network: ad-hoc and infrastructure. In an ad-hoc network, computers are brought together to form a network "on the fly." As shown in Figure 1, there is no structure to the network; there are no fixed points; and usually every node is able to communicate with every other node. A good example of this is the aforementioned meeting where employees bring laptop computers together to communicate and share design or financial information. Although it seems that order would be difficult to maintain in this type of network, algorithms such as the spokesman election algorithm (SEA) [4] have been designed to "elect" one machine as the base station (master) of the network, with the others being slaves. Another approach used in ad-hoc network architectures is a broadcast-and-flooding method, in which nodes announce themselves to all other nodes to establish who's who.
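The flooding idea can be sketched very simply. The toy example below is illustrative only (it is neither the SEA algorithm nor real 802.11 signalling); it floods a discovery message over a small ad-hoc topology so that the originating node learns which other nodes are reachable:

from collections import deque

def flood_discovery(links, origin):
    """Flood a 'hello' from origin over an ad-hoc topology; return nodes reached."""
    # links maps each node to the set of nodes within its radio range.
    seen = {origin}
    queue = deque([origin])
    while queue:
        node = queue.popleft()
        for neighbour in links[node]:
            if neighbour not in seen:     # each node rebroadcasts only once
                seen.add(neighbour)
                queue.append(neighbour)
    return seen

# Three laptops where A and C are out of each other's direct range.
print(flood_discovery({"A": {"B"}, "B": {"A", "C"}, "C": {"B"}}, "A"))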

QoS in Cellular Networks Based on MPT

Definition

In recent years, there has been a rapid increase in wireless network deployment and mobile device market penetration. With vigorous research that promises higher data rates, future wireless networks will likely become an integral part of the global communication infrastructure. Ultimately, wireless users will demand the same reliable service as today's wire-line telecommunications and data networks. However, there are some unique problems in cellular networks that challenge their service reliability.

In addition to problems introduced by fading, user mobility places stringent requirements on network resources. Whenever an active mobile terminal (MT) moves from one cell to another, the call needs to be handed off to the new base station (BS), and network resources must be reallocated. Resource demands could fluctuate abruptly due to the movement of high data rate users. Quality of service (QoS) degradation or even forced termination may occur when there are insufficient resources to accommodate these handoffs.

If the system has prior knowledge of the exact trajectory of every MT, it could take appropriate steps to reserve resources so that QoS may be guaranteed during the MT's connection lifetime. However, such an ideal scenario is very unlikely to occur in real life. Instead, much of the work on resource reservation has adopted a predictive approach.

One approach uses pattern matching techniques and a self-adaptive extended Kalman filter for next-cell prediction based on cell sequence observations, signal strength measurements, and cell geometry assumptions. Another approach proposes the concept of a shadow cluster: a set of BSs to which an MT is likely to attach in the near future. The scheme estimates the probability of each MT being in any cell within the shadow cluster for future time intervals, based on knowledge about individual MTs' dynamics and call holding patterns.
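As a toy illustration of the predictive idea (this is neither the Kalman-filter scheme nor the shadow-cluster scheme cited above, just a first-order transition count over observed cell sequences), next-cell probabilities could be estimated as follows:

from collections import Counter, defaultdict

def next_cell_probabilities(cell_histories):
    """Estimate P(next cell | current cell) from observed handoff sequences."""
    transitions = defaultdict(Counter)
    for history in cell_histories:
        for current, nxt in zip(history, history[1:]):
            transitions[current][nxt] += 1
    return {cell: {nxt: count / sum(counts.values())
                   for nxt, count in counts.items()}
            for cell, counts in transitions.items()}

# Two MTs' observed cell sequences (hypothetical cell ids).
print(next_cell_probabilities([["c1", "c2", "c3"], ["c1", "c2", "c4"]]))

A real scheme would combine such history with signal-strength measurements and per-user dynamics, but the output - a probability of attaching to each neighbouring cell - is what a resource-reservation policy would consume.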

Pivot Vector Space Approach in Audio-Video Mixing

Definition


The PIVOT VECTOR SPACE APPROACH is a novel technique of audio-video mixing which automatically selects the best audio clip from the available database to be mixed with a given video shot. Until the development of this technique, audio-video mixing was a process that could be done only by professional audio-mixing artists. However, employing these artists is very expensive and is not feasible for home video mixing. Besides, the process is time-consuming and tedious.

In today's era, significant advances are happening constantly in the field of Information Technology. The development in IT-related fields such as multimedia is extremely rapid. This is evident from the release of a variety of multimedia products such as mobile handsets, portable MP3 players, digital video camcorders, handycams etc. Hence, activities such as the production of home videos are easy thanks to products such as handycams and digital video camcorders. Such a scenario did not exist a decade ago, since no such products were available in the market. As a result, the production of home videos was not possible then, since it was reserved completely for professional video artists.

So in today's world, a large number of home videos are being made, and the number of amateur and home video enthusiasts is very large. A home video artist can never match the aesthetic capabilities of a professional audio-mixing artist. However, employing a professional mixing artist to develop a home video is not feasible, as it is expensive, tedious and time-consuming.

The PIVOT VECTOR SPACE APPROACH is a technique that amateur and home video enthusiasts can use to create video footage with a professional look and feel. The technique saves cost and is fast. Since it is fully automatic, the user need not worry about his aesthetic capabilities. The PIVOT VECTOR SPACE APPROACH uses a pivot vector space mixing framework to incorporate the artistic heuristics for mixing audio with video. These artistic heuristics use high-level perceptual descriptors of audio and video characteristics, and low-level signal processing techniques compute these descriptors.

Video Aesthetic Features

The table shows, from the cinematic point of view, a set of attributed features (such as color and motion) required to describe videos. The computations for extracting aesthetic attributed features from low-level video features occur at the video shot granularity. Because some attributed features are based on still images (such as high light falloff), we compute them on the key frame of a video shot. We try to optimize the trade-off between accuracy and computational efficiency among the competing extraction methods. Also, even though we assume that the videos considered come in the MPEG format (widely used by several home video camcorders), the features exist independently of any particular representation format.
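To make the idea of low-level computation concrete, the sketch below (assuming NumPy; the descriptor names are illustrative stand-ins, not the actual feature set of the pivot vector space framework) derives a few simple perceptual descriptors from a key frame:

import numpy as np

def keyframe_descriptors(rgb):
    """Toy aesthetic descriptors for one key frame (H x W x 3 array, values 0-255)."""
    rgb = rgb.astype(float) / 255.0
    brightness = rgb.mean()                                   # overall lightness
    saturation = (rgb.max(axis=2) - rgb.min(axis=2)).mean()   # crude colorfulness
    warmth = (rgb[..., 0] - rgb[..., 2]).mean()               # red vs. blue balance
    return {"brightness": brightness, "saturation": saturation, "warmth": warmth}

Descriptors like these, together with motion features computed across the whole shot, would populate the vectors that the mixing framework compares against the audio clips in the database.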

On-line Analytical Processing (OLAP)

Definition


The term On-Line Analytical Processing (OLAP) was coined by E.F. Codd in 1993 to refer to a type of application that allows a user to interactively analyze data. An OLAP system is often contrasted with an OLTP (On-Line Transaction Processing) system, which focuses on processing transactions such as orders, invoices or general ledger transactions. Before the term OLAP was coined, these systems were often referred to as Decision Support Systems.
OLAP is now acknowledged as a key technology for successful management in the 90's. It describes a class of applications that require multidimensional analysis of business data.

OLAP systems enable managers and analysts to rapidly and easily examine key performance data and perform powerful comparison and trend analyses, even on very large data volumes. They can be used in a wide variety of business areas, including sales and marketing analysis, financial reporting, quality tracking, profitability analysis, manpower and pricing applications, and many others.


OLAP technology is being used in an increasingly wide range of applications. The most common are sales and marketing analysis; financial reporting and consolidation; and budgeting and planning. Increasingly, however, OLAP is being used for applications such as product profitability and pricing analysis, activity-based costing, manpower planning and quality analysis - in fact, for any management system that requires a flexible, top-down view of an organization.
Online Analytical Processing (OLAP) is a method of analyzing data in a multidimensional format, often across multiple time periods, with the aim of uncovering the business information concealed within the data. OLAP enables business users to gain an insight into the business through interactive analysis of different views of the business data that have been built up from the operational systems. This approach facilitates a more intuitive and meaningful analysis of business information and assists in identifying important business trends.
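A minimal sketch of what "multidimensional analysis" looks like in practice, assuming pandas and a purely hypothetical fact table of sales records:

import pandas as pd

# Hypothetical sales facts: one row per transaction.
sales = pd.DataFrame({
    "region":  ["North", "North", "South", "South"],
    "quarter": ["Q1", "Q2", "Q1", "Q2"],
    "revenue": [100.0, 120.0, 80.0, 95.0],
})

# Roll the facts up into a region x quarter slice of the cube.
cube = pd.pivot_table(sales, values="revenue",
                      index="region", columns="quarter", aggfunc="sum")
print(cube)

Each additional dimension (product, channel, time period) becomes another axis of the cube, and drill-down or roll-up simply changes which axes and aggregation levels are displayed.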


OLAP is often confused with data warehousing. OLAP is not a data warehousing methodology; however, it is an integral part of a data warehousing solution. OLAP comes in many different shades, depending on the underlying database structure and the location of the majority of the analytical processing. Thus, the term OLAP has different meanings depending on the specific combination of these variables. This white paper examines the different options to support OLAP, examines the strengths and weaknesses of each, and recommends the analytical tasks for which each is most suitable.


OLAP provides the facility to analyze the data held within the data warehouse in a flexible manner. It is an integral component of a successful data warehouse solution; it is not in itself a data warehousing methodology or system. However, the term OLAP has different meanings for different people, as there are many variants of OLAP. This article attempts to put the different OLAP scenarios into context.


OLAP can be defined as the process of converting raw data into business information through multi-dimensional analysis. This enables analysts to identify business strengths and weaknesses, business trends and the underlying causes of these trends. It provides an insight into the business through the interactive analysis of different views of business information that have been built up from raw operating data and that reflect the business users' understanding of the business.

Refactoring

Definition


Producing software is a very complex process that takes considerable time to evolve. To help with development, there are a number of software lifecycle models which help to manage this process. A model breaks the problem down into smaller, more manageable parts that are individually developed and iterated until all the requirements are met.

However, such models do not take into consideration certain real-world factors that have a direct influence on developing software: requirements change to accommodate clients' needs, new functionality arises, deadlines create pressure, and developers come and go, all of which has an overall effect on the quality of the system design. These are some of the problems that relate to maintaining software. Software maintenance is described formally as "the process of modifying a software system or component after delivery to correct faults, improve performance or other attributes, or adapt to a changed environment".

Software maintenance can take up to 50% of the overall development costs of producing software. Boehm carried out a study in 1975 on a project being developed and concluded that it cost $30 per line of code during development which increased to $4,000 per line for maintenance costs.

One of the main contributors to these high costs is poorly designed code, which makes it difficult for developers to understand the system even before considering implementing new code. Understanding a system requires the "software engineer to extract high-level information from low-level code". These high-level abstractions use approximations to produce abstract models of the underlying system. Such models provide only a limited level of understanding of the system, because information is lost when approximations are used to produce them.

This makes the design of code an increasingly important part of the overall development of software. Refactoring "is the process of changing a software system in such a way that it does not alter the external behavior of the code yet improves its internal structure". Refactoring can have a direct influence on reducing the cost of software maintenance: changing the internal structure of the code improves the design, which helps present and future developers evolve and understand the system. The aim of this report is to outline the importance of refactoring through a comprehensive literature review and also to identify possible research questions which could be considered for future work.

Refactoring is a relatively new area of research and so is not yet well defined. There are a vast number of definitions of refactoring; several of them are given below. The list is large because it reflects the many different areas that refactoring covers.

Refactoring (noun): a change made to the internal structure of software to make it easier to understand and cheaper to modify without changing its observable behavior. Refactoring (verb): to restructure software by applying a series of refactorings without changing its observable behavior. Refactoring is also described as the process of taking an object design and rearranging it in various ways to make the design more flexible and/or reusable. There are several reasons you might want to do this, efficiency and maintainability being probably the most important.
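The "behaviour-preserving" point is easiest to see with a tiny before-and-after example. The sketch below is purely illustrative (the class and method names are made up) and shows an extract-method refactoring: the output of report() is identical, only the internal structure changes.

# Before: one method mixes calculation and formatting.
class Invoice:
    def __init__(self, items):
        self.items = items  # list of (name, quantity, unit_price)

    def report(self):
        total = 0.0
        for _, qty, price in self.items:
            total += qty * price
        return f"Invoice total: {total:.2f}"

# After: the calculation is extracted into its own method.
# External behavior (the string returned by report) is unchanged.
class RefactoredInvoice:
    def __init__(self, items):
        self.items = items

    def _total(self):
        return sum(qty * price for _, qty, price in self.items)

    def report(self):
        return f"Invoice total: {self._total():.2f}"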

To refactor programming code is to rewrite the code, to "clean it up". Refactoring is the moving of units of functionality from one place to another in your program.
Refactoring has as a primary objective getting each piece of functionality to exist in exactly one place in the software.

Elastic Quotas

Definition
Despite seemingly endless increases in the amount of storage and ever-decreasing hardware costs, managing storage is still expensive. Additionally, users continue to fill increasingly larger disks, a trend worsened by the proliferation of large multimedia files and high-speed broadband networks. Storage requirements are continuing to grow at a rate of 50% a year. Worse, existing hard disk technology is reaching physical limitations, making it harder and costlier to meet growing user demands. Storage management costs have remained a significant component of total storage costs. Even in the '70s, storage management costs at IBM were several times the hardware costs, and it was projected that they would reach ten times the cost of the hardware. Today, management costs are five to ten times the cost of the underlying hardware and are actually increasing as a proportion of cost, because each administrator can manage only a limited amount of storage. Up to 47% of storage costs are associated with administrators manually manipulating files.

Thankfully, significant savings are possible: studies show that over 20% of all files - representing over half of the storage - are regenerable. Other studies indicate that 82%-85% of storage is allocated to files that have not been accessed in more than a month. These studies show that storage management has been a problem in the past, continues to be a problem today, and is only getting worse - all despite growing disk sizes. Recent trends have begun to address the management of storage through virtualization. Morris put forth the idea of Autonomic Computing, which includes "the system's ability to adjust to its configuration and resource allocation to achieve predetermined goals". The Elastic Quota system is designed to help with the management problem via efficient allocation of storage while allowing users maximal freedom, all with minimal administrator intervention.

Elastic quotas enter users into an agreement with the system: users can exceed their quota while space is available, under the condition that the system will be able to automatically reclaim the storage when the need arises. Users or applications may designate some files as elastic. When space runs short, the elastic quota system (Equota) may reclaim space from those files marked as elastic; non-elastic files maintain existing semantics and are accounted for in users' persistent quotas. This report focuses on policies for elastic space reclamation and is organized as follows. Section 2 describes the overall architecture of the policy system. Section 3 discusses the various elastic quota policies. In Section 4 we discuss interesting implementation aspects of Elastic Quota. Section 5 presents measurements and performance results of various policies. Section 6 discusses work related to storage space management policies. Finally, Section 7 presents some concluding remarks and directions for future work.
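As a purely user-level illustration of one possible reclamation policy (the real Equota system works inside a stackable kernel file system, so this sketch only conveys the idea of "reclaim the least recently used elastic files first"):

import os

def reclaim_lru(elastic_paths, bytes_needed):
    """Toy 'least recently used first' reclamation over a list of elastic files."""
    # Sort candidate files by last access time, oldest first.
    candidates = sorted(elastic_paths, key=lambda p: os.stat(p).st_atime)
    freed = 0
    for path in candidates:
        if freed >= bytes_needed:
            break
        freed += os.stat(path).st_size
        os.remove(path)           # reclaim the space held by this elastic file
    return freed

A different policy would change only how the candidate list is ordered and what is done to each file once it is selected.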

Design
The primary design goals were to allow for versatile and efficient elastic quota policy management. An additional goal was to avoid changes to the existing OS to support elastic quotas. To achieve versatility, the Elastic Quota system is designed with a flexible policy management configuration language for use by administrators and users; a number of user-level and kernel features exist to support this flexibility. To achieve efficiency, the design allows the system to run as a kernel file system with DB3 databases accessible to the user-level tools. Finally, the design uses a stackable file system to ensure that there is no need to modify existing file systems such as Ext3.

Param -10000

Definition


SUPER COMPUTERS - OVERVIEW

Supercomputers introduced in the 1960s were designed primarily by Seymour Cray at Control Data Corporation (CDC), and led the market into the 1970s until Cray left to form his own company, Cray Research. He then took over the supercomputer market with his new designs, holding the top spot in supercomputing for five years (1985-1990). Cray himself never used the word "supercomputer"; a little-remembered fact is that he only recognized the word "computer". In the 1980s a large number of smaller competitors entered the market, in a parallel to the creation of the minicomputer market a decade earlier, but many of these disappeared in the mid-1990s "supercomputer market crash". Today, supercomputers are typically one-of-a-kind custom designs produced by "traditional" companies such as IBM and HP, who purchased many of the 1980s companies to gain their experience, although Cray Inc. still specializes in building supercomputers.
The Cray-2 was the world's fastest computer from 1985 to 1989.

Supercomputer Challenges & Technologies
" A supercomputer generates large amounts of heat and must be cooled. Cooling most supercomputers is a major HVAC problem.
" Information cannot move faster than the speed of light between two parts of a supercomputer. For this reason, a supercomputer that is many meters across must have latencies between its components measured at least in the tens of nanoseconds. Seymour Cray's supercomputer designs attempted to keep cable runs as short as possible for this reason: hence the cylindrical shape of his famous Cray range of computers.
" Supercomputers consume and produce massive amounts of data in a very short period of time. According to Ken Batcher, "A supercomputer is a device for turning compute-bound problems into I/O-bound problems." Much work on external storage bandwidth is needed to ensure that this information can be transferred quickly and stored/retrieved correctly.


Technologies developed for supercomputers include:
" Vector processing
" Liquid cooling
" Non-Uniform Memory Access (NUMA)
" Striped disks (the first instance of what was later called RAID)
" Parallel filesystems

Platform
SD2000 uses PARAM 10000, which employs up to 4 UltraSPARC-II processors. The PARAM systems can be extended to a cluster supercomputer: a clustered system with 1200 processors can deliver a peak performance of up to 1 TFlops. Even though the PARAM 10000 system is not ranked within the top 500 supercomputers, it has the potential to gain a high rank. It uses a variation of MPI developed at C-DAC. No performance data is available, although one would presume that it would not be very different from that of other UltraSPARC-II based systems using MPI. Because SD2000 is a commercial product, it is impossible to gather detailed data about the algorithms and performance of the product.
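As a flavour of how work is spread across such a cluster, here is a minimal message-passing sketch. It uses mpi4py purely for illustration (an assumption; the PARAM systems ship with C-DAC's own MPI variant), and each process sums its own slice of a range before rank 0 gathers the result:

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's id within the cluster job
size = comm.Get_size()   # total number of MPI processes

# Each rank sums a strided slice; rank 0 collects the partial sums.
local = sum(range(rank, 1_000_000, size))
total = comm.reduce(local, op=MPI.SUM, root=0)
if rank == 0:
    print("sum =", total)

Launched with something like "mpiexec -n 4 python sum.py", the same program scales from a single node to the full cluster simply by changing the process count.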


corDECT Wireless in Local Loop System

Definition


corDECT is an advanced, field proven, Wireless Access System developed by Midas Communication Technologies and the Indian Institute of Technology, Madras, in association with Analog Devices Inc., USA. corDECT provides a complete wireless access solution for new and expanding telecommunication networks with seamless integration of both voice and Internet services. It is the only cost-effective Wireless Local Loop (WLL) system in the world today that provides simultaneous toll-quality voice and 35 or 70 kbps Internet access to wireless subscribers.

corDECT is based on the DECT standard specification from the European Telecommunication Standards Institute (ETSI). In addition, it incorporates new concepts and innovative designs brought about by the collaboration of a leading R & D company, a renowned university, and a global semiconductor manufacturer. This alliance has resulted in many breakthrough concepts including that of an Access Network that segregates voice and Internet traffic and delivers each, in the most efficient manner, to the telephone network and the Internet respectively, without the one choking the other.


The corDECT Wireless Access System (WAS) is designed to provide simultaneous circuit-switched voice and medium-rate Internet connectivity at homes and offices.

A. Conceptual Access System

In this conceptual model, there is a Subscriber Unit (SU) located at the subscriber premises. The SU has a standard two-wire interface to connect a telephone, cordless phone, or modem. It also provides direct (without modem) Internet connectivity to a standard PC, using either a serial port (RS-232 or USB) or Ethernet. The access system allows simultaneous telephone and Internet connectivity. The SU's are connected to an Access Centre (AC) using any convenient technology like wireless, plain old copper, coaxial cable, optical fibre, or even power lines.

The AC must be scalable, serving as few as 200 subscribers and as many as 2000 subscribers. In urban areas, the AC could be located at a street corner, serving a radius of 700 m to 1 km. This small radius in urban areas is important for wireless access, in order to enable efficient reuse of spectrum. When cable is used, the small radius ensures low cost and higher bit-rate connectivity. In rural areas, however, the distance between the AC and the SU could easily be 10 km, and could even go up to 25 km in certain situations.

The AC is thus a shared system catering to multiple subscribers. The voice and Internet traffic to and from subscribers can be concentrated here and then carried on any appropriate backhaul transport network to the telephone and Internet networks respectively. At the AC, the telephone and Internet traffic is separated. The telephone traffic is carried to the telephone network on E1 links using access protocols such as V5.2. The Internet traffic from multiple subscribers is statistically multiplexed, taking advantage of the bursty nature of Internet traffic, and carried to the Internet. As the use of Voice-over-IP (VoIP) grows, voice traffic from subscribers could also be sent to the Internet, gradually making the connectivity to the telephone network redundant. However, for connecting to the legacy telephone network, the voice port of the AC may be required for some time to come. An AC could also incorporate switching and maintenance functions when required.

Digital Visual Interface

Definition


In a constantly changing industry, DVI is the next major attempt at an all-in-one, standardized, universal connector for audio/video applications. Featuring a modern design and backed by the biggest names in the electronic industry, DVI is set to finally unify all digital media components with a single cable, remote, and interface.
DVI is built with a 5 Gbps bandwidth limit, over twice that of HDTV (which runs at 2.2 Gbps), and is designed to be forwards-compatible by offering an unallocated pipeline for future technologies. The connectors are sliding contact (like FireWire and USB) instead of screw-on (like DVI), and are not nearly as bulky as most current video interfaces.
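Those bandwidth figures can be sanity-checked with a back-of-the-envelope calculation. The sketch below counts only raw pixel data (ignoring blanking intervals and the link's encoding overhead, which push the real wire rate higher), so it should come out comfortably below the quoted limits:

def raw_video_rate_gbps(width, height, frames_per_second, bits_per_pixel=24):
    """Uncompressed pixel data rate, ignoring blanking and encoding overhead."""
    return width * height * frames_per_second * bits_per_pixel / 1e9

print(raw_video_rate_gbps(1280, 720, 60))   # ~1.33 Gbps for 720p at 60 frames/s
print(raw_video_rate_gbps(1920, 1080, 30))  # ~1.49 Gbps for 1080i (30 full frames/s)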


The screaming bandwidth of HDMI is structured around delivering the highest-quality digital video and audio throughout your entertainment center. Capable of all international frequencies and resolutions, the HDMI cable will replace all analog signals (i.e. S-Video, Component, Composite, and Coaxial), as well as HDTV digital signals (i.e. DVI, P&D, DFP), with absolutely no compromise in quality.
Additionally, HDMI is capable of carrying up to 8 channels of digital audio, replacing the old analog connections (RCA, 3.5mm) as well as optical formats (SPDIF, Toslink).


VIDEO INTERFACES

Video Graphics Array (VGA) is an analog computer display standard first marketed in 1987 by IBM. While it has been obsolete for some time, it was the last graphical standard that the majority of manufacturers decided to follow, making it the lowest common denominator that all PC graphics hardware supports prior to a device-specific driver being loaded. For example, the Microsoft Windows splash screen appears while the machine is still operating in VGA mode, which is the reason that this screen always appears in reduced resolution and color depth.


The term VGA is often used to refer to a resolution of 640×480, regardless of the hardware that produces the picture. It may also refer to the 15-pin D-subminiature VGA connector which is still widely used to carry analog video signals of all resolutions.


VGA was officially superseded by IBM's XGA standard, but in reality it was superseded by numerous extensions to VGA made by clone manufacturers that came to be known as "Super VGA".


A Male DVI-I Plug
The DVI interface uses a digital protocol in which the desired brightness of pixels is transmitted as binary data. When the display is driven at its native resolution, all it has to do is read each number and apply that brightness to the appropriate pixel. In this way, each pixel in the output buffer of the source device corresponds directly to one pixel in the display device, whereas with an analog signal the appearance of each pixel may be affected by its adjacent pixels as well as by electrical noise and other forms of analog distortion.
Previous standards such as the analog VGA were designed for CRT-based devices and thus did not use discrete time. As the analog source transmits each horizontal line of the image, it varies its output voltage to represent the desired brightness. In a CRT device, this is used to vary the intensity of the scanning beam as it moves across the screen.

Compact peripheral component interconnect (CPCI)

Definition


Compact peripheral component interconnect (CPCI) is an adaptation of the peripheral component interconnect (PCI) specification for industrial computer applications requiring a smaller, more robust mechanical form factor than the one defined for the desktop. CompactPCI is an open standard supported by the PCI Industrial Computer Manufacturer's Group (PICMG). CompactPCI is best suited for small, high-speed industrial computing applications where transfers occur between a number of high-speed cards.

It is a high-performance industrial bus that uses the Eurocard form factor and is fully compatible with the Enterprise Computer Telephony Forum (ECTF) computer telephony (CT) Bus™ H.110 standard specification. CompactPCI products make it possible for original equipment manufacturers (OEMs), integrators, and resellers to build powerful and cost-effective solutions for telco networks, while using fewer development resources. CompactPCI products let developers scale their applications to the size, performance, maintenance, and reliability demands of telco environments by supporting the CT Bus, hot swap, administrative tools such as simple network management protocol (SNMP), and extensive system diagnostics. The move toward open, standards-based systems has revolutionized the computer telephony (CT) industry. There are a number of reasons for these changes. Open systems have benefited from improvements in personal computer (PC) hardware and software, as well as from advances in digital signal processing (DSP) technology. As a result, flexible, high performance systems are scalable to thousands of ports while remaining cost effective for use in telco networks. In addition, fault-tolerant chassis, distributed software architecture, and N+1 redundancy have succeeded in meeting the demanding reliability requirements of network operators.

One of the remaining hurdles facing open CT systems is serviceability. CT systems used in public networks must be extremely reliable and easy to repair without system downtime. In addition, network operation requires first-rate administrative and diagnostic capabilities to keep services up and running.


The Compact PCI Standard

The Peripheral Component Interconnect Industrial Computer Manufacturer's Group (PICMG) developed the compact peripheral component interconnect (CompactPCI) specification in 1994. CompactPCI is a high-performance industrial bus based on the peripheral component interconnect (PCI) electrical standard. It uses the Eurocard form factor first popularized by VersaModule-Eurocard (VME). Compared to the standard PCI desktop computer, CompactPCI supports twice as many PCI slots (eight) on a single system bus. In addition, CompactPCI boards are inserted from the front of the chassis and can route input/output (I/O) through the backplane to the back of the chassis. These design considerations make CompactPCI ideal for telco environments.

CompactPCI offers a substantial number of benefits for developers interested in building telco-grade applications. CompactPCI systems offer the durability and maintainability required for network applications. At the same time, they can be built using standard, off-the-shelf components and can run almost any operating system and thousands of existing software applications without modification. Other advantages of CompactPCI are related to its Eurocard form factor, durable and rugged design, hot swap capability, and compatibility with the CT Bus.

Earth Simulator

Definition


Milestones of Development


1. In July 1996, as part of the Global Change Prediction Plan, the promotion of research & development for the Earth Simulator was reported to the Science and Technology Agency, based on the report titled "For Realization of the Global Change Prediction" made by the Aero-Electronics Technology Committee.


2. In April 1997, the budget for the development of the Earth Simulator was authorized to be allocated to the National Space Development Agency of Japan (NASDA) and the Power Reactor and Nuclear Fuel Development Corporation (PNC). The Earth Simulator Research and Development Center was established, with Dr. Miyoshi assigned as its director. The project had begun.


3. Discussions regarding the Earth Simulator Project were held at the meeting on the Earth Simulator under the Computer Science Technology Promotion Council (Chairman: Prof. Taro Matsuno of Hokkaido Univ.), which met six times from March to July 1997. With a report on "Promotion of Earth Simulator Project", specific proposals were put forward to the Science and Technology Agency.


4. The conceptual system design of the Earth Simulator proposed by NEC Corporation was selected by bidding.


5. The Japan Atomic Energy Research Institute (JAERI) joined the project in place of PNC.


6. Under the Computer Science Technology Promotion Council, the Earth Simulator Advisory Committee was instituted with 7 members with profound knowledge in the area (Chairman: Prof. Yoshio Oyanagi of Tokyo Univ.). From June to July 1998, five meetings were held on the basic design. As a result, on 24 August the basic design was confirmed in "The Evaluation Report for the Basic Design of the Earth Simulator".


7. In February 1999, JAMSTEC joined the Earth Simulator development project, and it was decided to build the Earth Simulator facility in Kanazawa ward in Yokohama, on the former site of the industrial experiment station of Kanagawa prefecture. Manufacturing of the Earth Simulator began in March 2000, under NASDA, JAERI, and JAMSTEC. It was decided that, after completion, the entire operation and management of the Earth Simulator would be handled solely by JAMSTEC.


8. At the end of February 2002, all 640 processor nodes (PNs) started operation for checkout. The Earth Simulator Research and Development Center verified the sustained performance using AFES (an Atmospheric general circulation model for ES), recording 7.2 Tflops with 160 PNs, 1.44 times the target performance of 5 Tflops. The Earth Simulator Center (ESC), with Director-General Dr. Tetsuya Sato, began actual operation in March 2002.


9. On May 2, 2002 the Earth Simulator achieved a sustained performance of 26.58 Tflops using AFES. The record performance of 35.86 Tflops was achieved with the Linpack benchmark the next day. ES was ranked first in the TOP500 list in June 2002.

Extreme Programming (XP)

Definition


Extreme Programming (XP) is actually a deliberate and disciplined approach to software development. About six years old, it has already been proven at many companies of all different sizes and industries worldwide. XP is successful because it stresses customer satisfaction. The methodology is designed to deliver the software your customer needs when it is needed. XP empowers software developers to confidently respond to changing customer requirements, even late in the life cycle. This methodology also emphasizes teamwork. Managers, customers, and developers are all part of a team dedicated to delivering quality software. XP implements a simple, yet effective way to enable groupware style development.

XP improves a software project in four essential ways: communication, simplicity, feedback, and courage. XP programmers communicate with their customers and fellow programmers. They keep their design simple and clean. They get feedback by testing their software starting on day one. They deliver the system to the customers as early as possible and implement changes as suggested. With this foundation, XP programmers are able to respond courageously to changing requirements and technology. XP is different. It is a lot like a jigsaw puzzle. There are many small pieces. Individually the pieces make no sense, but when combined together a complete picture can be seen. This is a significant departure from traditional software development methods and ushers in a change in the way we program.

If one or two developers have become bottlenecks because they own the core classes in the system and must make all the changes, then try collective code ownership. You will also need unit tests. Let everyone make changes to the core classes whenever they need to. You could continue this way until no problems are left. Then just add the remaining practices as you can. The first practice you add will seem easy. You are solving a large problem with a little extra effort. The second might seem easy too. But at some point between having a few XP rules and all of the XP rules it will take some persistence to make it work. Your problems will have been solved and your project is under control.
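Collective code ownership only works when the tests catch regressions immediately. A minimal unit test in the style XP expects might look like the following (the function and values are hypothetical, used only to show the shape of a test):

import unittest

# A small piece of shared, collectively owned code.
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 120)

if __name__ == "__main__":
    unittest.main()

Anyone on the team can now change apply_discount freely; running the tests before checking in is what keeps collective ownership safe.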

It might seem good to abandon the new methodology and go back to what is familiar and comfortable, but persevering does pay off in the end. Your development team will become much more efficient than you thought possible. At some point you will find that the XP rules no longer seem like rules at all. There is a synergy between the rules that is hard to understand until you have been fully immersed. This uphill climb is especially true with pair programming, but the payoff of this technique is very large. Also, unit tests will take time to collect, but unit tests are the foundation for many of the other XP practices, so the payoff is very great.

XP projects are not quiet; there always seems to be someone talking about problems and solutions. People move about, asking each other questions and trading partners for programming. People spontaneously meet to solve tough problems, and then disperse again. Encourage this interaction, provide a meeting area and set up workspaces such that two people can easily work together. The entire work area should be open space to encourage team communication. The most obvious way to start extreme programming (XP) is with a new project. Start out collecting user stories and conducting spike solutions for things that seem risky. Spend only a few weeks doing this. Then schedule a release planning meeting. Invite customers, developers, and managers to create a schedule that everyone agrees on. Begin your iterative development with an iteration planning meeting. Now you're started.