Characteristics of embedded systems

Soekris net4801, an embedded system targeted at network applications.

1. Embedded systems are designed to do some specific task, rather than be a general-purpose computer for multiple tasks. Some also have real-time performance constraints that must be met, for reasons such as safety and usability; others may have low or no performance requirements, allowing the system hardware to be simplified to reduce costs.
2. Embedded systems are not always standalone devices. Many embedded systems consist of small, computerized parts within a larger device that serves a more general purpose. For example, the Gibson Robot Guitar features an embedded system for tuning the strings, but the overall purpose of the Robot Guitar is, of course, to play music.[2] Similarly, an embedded system in an automobile provides a specific function as a subsystem of the car itself.
3. The software written for embedded systems is often called firmware, and is usually stored in read-only memory or Flash memory chips rather than on a disk drive. It often runs with limited hardware resources: a small keyboard or none at all, a small screen or none, and little memory.

User interfaces

Embedded systems range from no user interface at all — dedicated only to one task — to complex graphical user interfaces that resemble modern computer desktop operating systems.

Simple systems

Simple embedded devices use buttons, LEDs, and small character- or digit-only displays, often with a simple menu system.

In more complex systems

A full graphical screen, with touch sensing or screen-edge buttons, provides flexibility while minimising the space used: the meaning of the buttons can change with the screen, and selection involves the natural behavior of pointing at what is desired.

Handheld systems often have a screen with a "joystick button" for a pointing device.

The rise of the World Wide Web has given embedded designers another quite different option: providing a web page interface over a network connection. This avoids the cost of a sophisticated display, yet provides complex input and display capabilities when needed, on another computer. This is successful for remote, permanently installed equipment such as Pan-Tilt-Zoom cameras and network routers.

CPU platforms

Embedded processors can be broken into two broad categories: ordinary microprocessors (μP) and microcontrollers (μC), which have many more peripherals on chip, reducing cost and size. In contrast to the personal computer and server markets, a fairly large number of basic CPU architectures are used; there are Von Neumann as well as various degrees of Harvard architectures, RISC as well as non-RISC and VLIW; word lengths vary from 4 bits to 64 bits and beyond (mainly in DSP processors), although the most typical remain 8/16-bit. Most architectures come in a large number of different variants and shapes, many of which are also manufactured by several different companies.

A long but still not exhaustive list of common architectures is: 65816, 65C02, 68HC08, 68HC11, 68k, 8051, ARM, AVR, AVR32, Blackfin, C167, Coldfire, COP8, eZ8, eZ80, FR-V, H8, HT48, M16C, M32C, MIPS, MSP430, PIC, PowerPC, R8C, SHARC, ST6, SuperH, TLCS-47, TLCS-870, TLCS-900, Tricore, V850, x86, XE8000, Z80, etc.

Ready-made computer boards

PC/104 and PC/104+ are examples of ready-made computer boards intended for small, low-volume embedded and ruggedized systems. These often use DOS, Linux, NetBSD, or an embedded real-time operating system such as MicroC/OS-II, QNX or VxWorks.

In certain applications, particularly where small size is not a primary concern, the components used may be compatible with those used in general purpose computers. Boards such as the VIA EPIA range help to bridge the gap by being PC-compatible but highly integrated, physically smaller, or having other attributes that make them attractive to embedded engineers. The advantage of this approach is that low-cost commodity components may be used along with the same software development tools used for general software development. Systems built in this way are still regarded as embedded since they are integrated into larger devices and fulfill a single role. Examples of devices that may adopt this approach are ATMs and arcade machines.

ASIC and FPGA solutions

A common configuration for very-high-volume embedded systems is the system on a chip (SoC), an application-specific integrated circuit (ASIC), for which the CPU core was purchased and added as part of the chip design. A related scheme is to use a field-programmable gate array (FPGA), and program it with all the logic, including the CPU.

Peripherals

Embedded systems communicate with the outside world via peripherals, such as:

* Serial Communication Interfaces (SCI): RS-232, RS-422, RS-485, etc.
* Synchronous Serial Communication Interfaces: I2C, JTAG, SPI, SSC and ESSI
* Universal Serial Bus (USB)
* Networks: Ethernet, Controller Area Network, LonWorks, etc.
* Timers: PLL(s), Capture/Compare and Time Processing Units
* Discrete IO, also known as General Purpose Input/Output (GPIO); a register-level sketch follows this list
* Analog-to-Digital/Digital-to-Analog converters (ADC/DAC)
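
To make the GPIO item above concrete, here is a register-level sketch of how firmware typically drives discrete I/O through memory-mapped registers. The addresses and pin assignments are hypothetical; every microcontroller defines its own memory map in its datasheet.

    /* Hypothetical memory-mapped GPIO registers. */
    #include <stdint.h>

    #define GPIO_DIR (*(volatile uint32_t *)0x40020000u) /* 1 = output */
    #define GPIO_OUT (*(volatile uint32_t *)0x40020004u) /* output latch */
    #define GPIO_IN  (*(volatile uint32_t *)0x40020008u) /* input levels */

    void led_init(void) { GPIO_DIR |= 1u << 5; }    /* make pin 5 an output */

    void led_set(int on)
    {
        if (on)
            GPIO_OUT |= 1u << 5;                    /* drive pin 5 high */
        else
            GPIO_OUT &= ~(1u << 5);                 /* drive pin 5 low */
    }

    int button_pressed(void)
    {
        return (int)((GPIO_IN >> 3) & 1u);          /* read pin 3 */
    }

The volatile qualifier is essential here: it tells the compiler that each register access has a side effect and must not be optimized away or reordered.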

Tools

As with other software, embedded system designers use compilers, assemblers, and debuggers to develop embedded system software. However, they may also use some more specific tools:

* In-circuit debuggers or emulators (see next section).
* Utilities to add a checksum or CRC to a program, so the embedded system can check if the program is valid (a minimal CRC check is sketched after this list).
* For systems using digital signal processing, developers may use a math workbench such as Scilab / Scicos, MATLAB / Simulink, MathCad, or Mathematica to simulate the mathematics. They might also use libraries for both the host and the target, which eliminate the need to develop DSP routines from scratch, as done in DSPnano RTOS and Unison Operating System.
* Custom compilers and linkers may be used to improve optimisation for the particular hardware.
* An embedded system may have its own special language or design tool, or add enhancements to an existing language such as Forth or Basic.
* Another alternative is to add a real-time operating system or embedded operating system, which may have DSP capabilities like DSPnano RTOS.
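
Expanding on the checksum/CRC item, the following is a minimal, self-contained sketch of a CRC-16/CCITT integrity check; the idea that the build tools append the expected CRC to the program image is an assumption for illustration, as real projects place it wherever their linker script dictates.

    #include <stdint.h>
    #include <stddef.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* CRC-16/CCITT: polynomial 0x1021, initial value 0xFFFF. */
    static uint16_t crc16_ccitt(const uint8_t *data, size_t len)
    {
        uint16_t crc = 0xFFFF;
        while (len--) {
            crc ^= (uint16_t)(*data++) << 8;
            for (int bit = 0; bit < 8; bit++)
                crc = (crc & 0x8000u) ? (uint16_t)((crc << 1) ^ 0x1021u)
                                      : (uint16_t)(crc << 1);
        }
        return crc;
    }

    /* Compare the image's computed CRC against the stored value the
       build tools are assumed to have appended. */
    static bool image_is_valid(const uint8_t *image, size_t len,
                               uint16_t stored_crc)
    {
        return crc16_ccitt(image, len) == stored_crc;
    }

    int main(void)
    {
        uint8_t demo[] = { 0xDE, 0xAD, 0xBE, 0xEF };
        uint16_t crc = crc16_ccitt(demo, sizeof demo);
        printf("CRC-16/CCITT = 0x%04X, valid = %d\n",
               crc, image_is_valid(demo, sizeof demo, crc));
        return 0;
    }

At power-on, firmware runs such a check over its own image and refuses to start (or falls back to a boot loader) if the stored and computed values disagree.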

Software tools can come from several sources:

* Software companies that specialize in the embedded market
* Ported from the GNU software development tools
* Sometimes, development tools for a personal computer can be used, if the embedded processor is a close relative of a common PC processor

As the complexity of embedded systems grows, higher level tools and operating systems are migrating into machinery where it makes sense. For example, cellphones, personal digital assistants and other consumer computers often need significant software that is purchased or provided by a person other than the manufacturer of the electronics. In these systems, an open programming environment such as Linux, NetBSD, OSGi or Embedded Java is required so that the third-party software provider can sell to a large market.

Debugging

Embedded debugging may be performed at different levels, depending on the facilities available. From simplest to most sophisticated, they can be roughly grouped into the following areas:

* Interactive resident debugging, using the simple shell provided by the embedded operating system (e.g. Forth and Basic)
* External debugging using logging or serial port output to trace operation, using either a monitor in flash or a debug server like the Remedy Debugger, which also works for heterogeneous multicore systems.
* An in-circuit debugger (ICD), a hardware device that connects to the microprocessor via a JTAG or NEXUS interface. This allows the operation of the microprocessor to be controlled externally, but is typically restricted to specific debugging capabilities in the processor.
* An in-circuit emulator replaces the microprocessor with a simulated equivalent, providing full control over all aspects of the microprocessor.
* A complete emulator provides a simulation of all aspects of the hardware, allowing all of it to be controlled and modified, and allowing debugging on a normal PC.

Unless restricted to external debugging, the programmer can typically load and run software through the tools, view the code running in the processor, and start or stop its operation. The view of the code may be as assembly code or source-code.

Because an embedded system is often composed of a wide variety of elements, the debugging strategy may vary. For instance, debugging a software- (and microprocessor-) centric embedded system is different from debugging an embedded system where most of the processing is performed by peripherals (DSP, FPGA, co-processor). An increasing number of embedded systems today use more than one processor core. A common problem with multi-core development is the proper synchronization of software execution. In such a case, the embedded system designer may wish to check the data traffic on the busses between the processor cores, which requires very low-level debugging, at signal/bus level, with a logic analyzer, for instance.

Reliability

Embedded systems often reside in machines that are expected to run continuously for years without errors, and in some cases recover by themselves if an error occurs. Therefore the software is usually developed and tested more carefully than that for personal computers, and unreliable mechanical moving parts such as disk drives, switches or buttons are avoided.

Specific reliability issues may include:

1. The system cannot safely be shut down for repair, or it is too inaccessible to repair. Examples include space systems, undersea cables, navigational beacons, bore-hole systems, and automobiles.
2. The system must be kept running for safety reasons. "Limp modes" are less tolerable. Often backups are selected by an operator. Examples include aircraft navigation, reactor control systems, safety-critical chemical factory controls, train signals, engines on single-engine aircraft.
3. The system will lose large amounts of money when shut down: Telephone switches, factory controls, bridge and elevator controls, funds transfer and market making, automated sales and service.

A variety of techniques are used, sometimes in combination, to recover from errors, both software bugs such as memory leaks and soft errors in the hardware:

* A watchdog timer that resets the computer unless the software periodically notifies it (sketched after this list)
* Subsystems with redundant spares that can be switched over to
* Software "limp modes" that provide partial function
* Immunity-aware programming
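
A minimal sketch of the watchdog pattern follows. The register names, addresses, and magic value are hypothetical; every chip family exposes its watchdog differently, so the vendor datasheet is the real reference.

    #include <stdint.h>

    /* Hypothetical watchdog registers and kick value. */
    #define WDT_CTRL (*(volatile uint32_t *)0x40008000u)
    #define WDT_KICK (*(volatile uint32_t *)0x40008004u)

    static void watchdog_enable(void) { WDT_CTRL = 1u; }
    static void watchdog_kick(void)   { WDT_KICK = 0x55AAu; }

    static void do_control_step(void) { /* the application's real work */ }

    void main_loop(void)
    {
        watchdog_enable();
        for (;;) {
            do_control_step();   /* if this hangs, the kick never happens */
            watchdog_kick();     /* restart the hardware countdown */
        }
    }

If the control step ever hangs or crashes, the countdown expires and the hardware resets the processor, returning the system to a known state without operator intervention.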

High vs. low volume

For high volume systems such as portable music players or mobile phones, minimizing cost is usually the primary design consideration. Engineers typically select hardware that is just “good enough” to implement the necessary functions.

For low-volume or prototype embedded systems, general purpose computers may be adapted by limiting the programs or by replacing the operating system with a real-time operating system.

Embedded software architectures

There are several different types of software architecture in common use.

Simple control loop

In this design, the software simply has a loop. The loop calls subroutines, each of which manages a part of the hardware or software.
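
A minimal sketch of this architecture; the subroutine names are illustrative, not taken from any particular system.

    /* Stub subroutines standing in for real drivers; each one manages
       one part of the hardware or software. */
    static void read_sensors(void)   { /* e.g. sample an ADC */ }
    static void update_outputs(void) { /* e.g. refresh PWM duty cycles */ }
    static void service_comms(void)  { /* e.g. poll a UART */ }

    int main(void)
    {
        for (;;) {               /* no operating system, no scheduler */
            read_sensors();
            update_outputs();
            service_comms();
        }
    }

Timing is implicit: each subroutine must return quickly enough that every other subroutine still runs often enough to meet its deadlines.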

Interrupt controlled system

Some embedded systems are predominantly interrupt controlled. This means that tasks performed by the system are triggered by different kinds of events. An interrupt could be generated, for example, by a timer at a predefined frequency, or by a serial port controller receiving a byte.

These kinds of systems are used if event handlers need low latency and the event handlers are short and simple.

Usually these kinds of systems also run a simple task in a main loop, but this task is not very sensitive to unexpected delays.

Sometimes the interrupt handler will add longer tasks to a queue structure. Later, after the interrupt handler has finished, these tasks are executed by the main loop. This method brings the system close to a multitasking kernel with discrete processes.
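
A sketch of that interrupt-plus-queue pattern follows. The UART source, the stub functions, and the ISR hookup are hypothetical; real interrupt vector registration is chip-specific.

    #include <stdint.h>

    /* Stand-ins for the real hardware and application hooks. */
    static uint8_t read_uart_data_register(void) { return 0; }
    static void process_byte(uint8_t b) { (void)b; } /* the longer task */
    static void background_task(void) { }            /* delay-tolerant */

    #define QUEUE_LEN 16u
    static volatile uint8_t queue[QUEUE_LEN];
    static volatile uint8_t head, tail;  /* single producer, single consumer */

    /* Interrupt handler: kept short, it only captures the byte. */
    void uart_rx_isr(void)
    {
        uint8_t next = (uint8_t)((head + 1u) % QUEUE_LEN);
        if (next != tail) {              /* drop the byte if the queue is full */
            queue[head] = read_uart_data_register();
            head = next;
        }
    }

    int main(void)
    {
        for (;;) {
            while (tail != head) {       /* drain the work queued by the ISR */
                process_byte(queue[tail]);
                tail = (uint8_t)((tail + 1u) % QUEUE_LEN);
            }
            background_task();           /* the main-loop task mentioned above */
        }
    }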

Cooperative multitasking

A nonpreemptive multitasking system is very similar to the simple control loop scheme, except that the loop is hidden in an API. The programmer defines a series of tasks, and each task gets its own environment to "run" in. When a task is idle, it calls an idle routine (usually called "pause", "wait", "yield", or "nop", short for "no operation").

The advantages and disadvantages are very similar to the control loop, except that adding new software is easier, by simply writing a new task, or adding to the queue-interpreter.
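
A minimal sketch of the idea; the task names are illustrative, and a real implementation would also save per-task context so a task can resume where it yielded.

    /* Illustrative tasks; each runs briefly and "yields" by returning. */
    static void poll_keypad(void)     { /* scan the key matrix */ }
    static void refresh_display(void) { /* redraw the LCD */ }
    static void log_data(void)        { /* append a record to flash */ }

    typedef void (*task_fn)(void);

    static task_fn tasks[] = { poll_keypad, refresh_display, log_data };

    void scheduler_run(void)   /* the control loop, hidden behind an API */
    {
        for (;;)
            for (unsigned i = 0; i < sizeof tasks / sizeof tasks[0]; i++)
                tasks[i]();    /* a task that never returns starves the rest */
    }

Adding new software is then just a matter of appending another entry to the task table.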

Preemptive multitasking or multi-threading

In this type of system, a low-level piece of code switches between tasks or threads based on a timer (connected to an interrupt). This is the level at which the system is generally considered to have an "operating system" kernel. Depending on how much functionality is required, it introduces more or less of the complexities of managing multiple tasks running conceptually in parallel.

As any code can potentially damage the data of another task (except in larger systems using an MMU) programs must be carefully designed and tested, and access to shared data must be controlled by some synchronization strategy, such as message queues, semaphores or a non-blocking synchronization scheme.
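
The hazard and its standard cure can be illustrated with POSIX threads, used here purely because they are widely available; an RTOS offers equivalent mutex and semaphore primitives under different names.

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static long shared_counter;           /* data shared between the tasks */

    static void *task(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);    /* only one task may touch it */
            shared_counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, task, NULL);
        pthread_create(&b, NULL, task, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("%ld\n", shared_counter);  /* always 200000 with the mutex */
        return 0;
    }

Without the mutex, the two tasks' read-modify-write sequences can interleave and the final count becomes unpredictable; with it, the printed result is always 200000.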

Because of these complexities, it is common for organizations to buy a real-time operating system, allowing the application programmers to concentrate on device functionality rather than operating system services, at least for large systems; smaller systems often cannot afford the overhead associated with a generic real time system, due to limitations regarding memory size, performance, and/or battery life.

Microkernels and exokernels

A microkernel is a logical step up from a real-time OS. The usual arrangement is that the operating system kernel allocates memory and switches the CPU to different threads of execution. User mode processes implement major functions such as file systems, network interfaces, etc.

In general, microkernels succeed when the task switching and intertask communication is fast, and fail when they are slow.

Exokernels communicate efficiently by normal subroutine calls. The hardware and all the software in the system are available to, and extensible by, application programmers.

Monolithic kernels

In this case, a relatively large kernel with sophisticated capabilities is adapted to suit an embedded environment. This gives programmers an environment similar to a desktop operating system like Linux or Microsoft Windows, and is therefore very productive for development; on the downside, it requires considerably more hardware resources, is often more expensive, and because of the complexity of these kernels can be less predictable and reliable.

Common examples of embedded monolithic kernels are Embedded Linux and Windows CE.

Despite the increased cost in hardware, this type of embedded system is increasing in popularity, especially on the more powerful embedded devices such as wireless routers and GPS navigation systems. Here are some of the reasons:

* Ports to common embedded chip sets are available.
* They permit re-use of publicly available code for device drivers, web servers, firewalls, and other code.
* Development systems can start out with broad feature-sets, and then the distribution can be configured to exclude unneeded functionality, and save the expense of the memory that it would consume.
* Many engineers believe that running application code in user mode is more reliable and easier to debug, and that therefore the development process is easier and the code more portable.
* Many embedded systems lack the tight real time requirements of a control system. A system such as Embedded Linux has fast enough response for many applications.
* Features requiring faster response than can be guaranteed can often be placed in hardware.
* Many RTOS systems have a per-unit cost. When used on a product that is or will become a commodity, that cost is significant.

Exotic custom operating systems

A small fraction of embedded systems require safe, timely, reliable, or efficient behavior unobtainable with any of the above architectures. In this case an organization builds a system to suit. In some cases, the system may be partitioned into a "mechanism controller" using special techniques, and a "display controller" with a conventional operating system. A communication system passes data between the two.

Additional software components

In addition to the core operating system, many embedded systems have additional upper-layer software components. These components consist of networking protocol stacks like TCP/IP, FTP, HTTP, and HTTPS, as well as storage capabilities like FAT and flash memory management systems. If the embedded device has audio and video capabilities, then the appropriate drivers and codecs will be present in the system. In the case of monolithic kernels, many of these software layers are included. In the RTOS category, the availability of additional software components depends upon the commercial offering.
EMBEDDED SYSTEM

An embedded system is a special-purpose computer system designed to perform one or a few dedicated functions,[1] often with real-time computing constraints. It is usually embedded as part of a complete device including hardware and mechanical parts. In contrast, a general-purpose computer, such as a personal computer, can do many different tasks depending on programming. Embedded systems control many of the common devices in use today.

Since the embedded system is dedicated to specific tasks, design engineers can optimize it, reducing the size and cost of the product, or increasing the reliability and performance. Some embedded systems are mass-produced, benefiting from economies of scale.

Physically, embedded systems range from portable devices such as digital watches and MP3 players, to large stationary installations like traffic lights, factory controllers, or the systems controlling nuclear power plants. Complexity varies from low, with a single microcontroller chip, to very high with multiple units, peripherals and networks mounted inside a large chassis or enclosure.

In general, "embedded system" is not an exactly defined term, as many systems have some element of programmability. For example, handheld computers share some elements with embedded systems, such as the operating systems and microprocessors which power them, but are not truly embedded systems, because they allow different applications to be loaded and peripherals to be connected.
Advances in integrated circuits

The integrated circuit from an Intel 8742, an 8-bit microcontroller that includes a CPU running at 12 MHz, 128 bytes of RAM, 2048 bytes of EPROM, and I/O in the same chip.

Among the most advanced integrated circuits are the microprocessors or "cores", which control everything from computers to cellular phones to digital microwave ovens. Digital memory chips and ASICs are examples of other families of integrated circuits that are important to the modern information society. While the cost of designing and developing a complex integrated circuit is quite high, when spread across typically millions of production units the individual IC cost is minimized. The performance of ICs is high because the small size allows short traces, which in turn allows low-power logic (such as CMOS) to be used at fast switching speeds.

ICs have consistently migrated to smaller feature sizes over the years, allowing more circuitry to be packed on each chip. This increased capacity per unit area can be used to decrease cost and/or increase functionality—see Moore's law which, in its modern interpretation, states that the number of transistors in an integrated circuit doubles every two years. In general, as the feature size shrinks, almost everything improves—the cost per unit and the switching power consumption go down, and the speed goes up. However, ICs with nanometer-scale devices are not without their problems, principal among which is leakage current (see subthreshold leakage for a discussion of this), although these problems are not insurmountable and will likely be solved or at least ameliorated by the introduction of high-k dielectrics. Since these speed and power consumption gains are apparent to the end user, there is fierce competition among the manufacturers to use finer geometries. This process, and the expected progress over the next few years, is well described by the International Technology Roadmap for Semiconductors (ITRS).
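
In that modern interpretation, a baseline transistor count N_0 grows after t years as

    N(t) = N_0 \cdot 2^{t/2}

so, for example, sixteen years of such scaling multiplies the transistor budget by 2^8 = 256.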

Popularity of ICs

Main article: Microchip revolution

Only a half century after their development was initiated, integrated circuits have become ubiquitous. Computers, cellular phones, and other digital appliances are now inextricable parts of the structure of modern societies. That is, modern computing, communications, manufacturing and transport systems, including the Internet, all depend on the existence of integrated circuits. Indeed, many scholars believe that the digital revolution—brought about by the microchip revolution—was one of the most significant occurrences in the history of humankind.

Classification

A CMOS 4000 IC in a DIP

Integrated circuits can be classified into analog, digital and mixed signal (both analog and digital on the same chip).

Digital integrated circuits can contain anything from a few thousand to millions of logic gates, flip-flops, multiplexers, and other circuits in a few square millimeters. The small size of these circuits allows high speed, low power dissipation, and reduced manufacturing cost compared with board-level integration. These digital ICs, typically microprocessors, DSPs, and microcontrollers, work using binary mathematics to process "one" and "zero" signals.

Analog ICs, such as sensors, power management circuits, and operational amplifiers, work by processing continuous signals. They perform functions like amplification, active filtering, demodulation, mixing, etc. Analog ICs ease the burden on circuit designers by having expertly designed analog circuits available instead of designing a difficult analog circuit from scratch.

ICs can also combine analog and digital circuits on a single chip to create functions such as A/D converters and D/A converters. Such circuits offer smaller size and lower cost, but must carefully account for signal interference.

Manufacture

Fabrication

Main article: Semiconductor fabrication

Rendering of a small standard cell with three metal layers (dielectric has been removed). The sand-colored structures are metal interconnect, with the vertical pillars being contacts, typically plugs of tungsten. The reddish structures are polysilicon gates, and the solid at the bottom is the crystalline silicon bulk.

The semiconductors of the periodic table of the chemical elements were identified as the most likely materials for a solid-state replacement for the vacuum tube by researchers like William Shockley at Bell Laboratories starting in the 1930s. Starting with copper oxide, proceeding to germanium, then silicon, the materials were systematically studied in the 1940s and 1950s. Today, silicon monocrystals are the main substrate used for integrated circuits (ICs), although some III-V compounds of the periodic table such as gallium arsenide are used for specialized applications like LEDs, lasers, solar cells and the highest-speed integrated circuits. It took decades to perfect methods of creating crystals without defects in the crystalline structure of the semiconducting material.

Semiconductor ICs are fabricated in a layer process which includes these key process steps:

* Imaging
* Deposition
* Etching

The main process steps are supplemented by doping, cleaning, and planarization steps.

Mono-crystal silicon wafers (or for special applications, silicon on sapphire or gallium arsenide wafers) are used as the substrate. Photolithography is used to mark different areas of the substrate to be doped or to have polysilicon, insulators or metal (typically aluminum) tracks deposited on them.

* Integrated circuits are composed of many overlapping layers, each defined by photolithography, and normally shown in different colors. Some layers mark where various dopants are diffused into the substrate (called diffusion layers), some define where additional ions are implanted (implant layers), some define the conductors (polysilicon or metal layers), and some define the connections between the conducting layers (via or contact layers). All components are constructed from a specific combination of these layers.

* In a self-aligned CMOS process, a transistor is formed wherever the gate layer (polysilicon or metal) crosses a diffusion layer.

* Resistive structures, meandering stripes of varying lengths, form the loads on the circuit. The ratio of the length of the resistive structure to its width, combined with its sheet resistivity, determines the resistance (see the formula after this list).

* Capacitive structures, in form very much like the parallel conducting plates of a traditional electrical capacitor, are formed according to the area of the "plates", with insulating material between the plates. Owing to limitations in size, only very small capacitances can be created on an IC.

* More rarely, inductive structures can be built as tiny on-chip coils, or simulated by gyrators.
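
For the resistive structures described in the list above, the governing relation is the sheet-resistance formula, where R_s is the sheet resistance of the layer (in ohms per square), L is the stripe's length, and W its width:

    R = R_s \cdot \frac{L}{W}

A meandering stripe simply packs many "squares" (a large L/W ratio) into a compact area; for example, a stripe 100 squares long on a 50-ohm-per-square layer realizes 5 kΩ.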

Since a CMOS device only draws current on the transition between logic states, CMOS devices consume much less current than bipolar devices.

A random access memory is the most regular type of integrated circuit; the highest density devices are thus memories; but even a microprocessor will have memory on the chip. (See the regular array structure at the bottom of the first image.) Although the structures are intricate – with widths which have been shrinking for decades – the layers remain much thinner than the device widths. The layers of material are fabricated much like a photographic process, although light waves in the visible spectrum cannot be used to "expose" a layer of material, as they would be too large for the features. Thus photons of higher frequencies (typically ultraviolet) are used to create the patterns for each layer. Because each feature is so small, electron microscopes are essential tools for a process engineer who might be debugging a fabrication process.

Each device is tested before packaging using automated test equipment (ATE), in a process known as wafer testing, or wafer probing. The wafer is then cut into rectangular blocks, each of which is called a die. Each good die (plural dice, dies, or die) is then connected into a package using aluminum (or gold) wires which are welded to pads, usually found around the edge of the die. After packaging, the devices go through final testing on the same or similar ATE used during wafer probing. Test cost can account for over 25% of the cost of fabrication on lower cost products, but can be negligible on low yielding, larger, and/or higher cost devices.

As of 2005, a fabrication facility (commonly known as a semiconductor fab) costs over a billion US dollars to construct[10], because much of the operation is automated. The most advanced processes employ the following techniques:

* The wafers are up to 300 mm in diameter (wider than a common dinner plate).
* Use of a 65 nanometer or smaller chip manufacturing process. Intel, IBM, NEC, and AMD are using 45 nanometer processes for their CPU chips, and AMD[1] and NEC have started using a 65 nanometer process. IBM and AMD are developing a 45 nm process using immersion lithography.
* Copper interconnects where copper wiring replaces aluminum for interconnects.
* Low-K dielectric insulators.
* Silicon on insulator (SOI)
* Strained silicon in a process used by IBM known as strained silicon directly on insulator (SSDOI)

Packaging

Main article: Integrated circuit packaging

The earliest integrated circuits were packaged in ceramic flat packs, which continued to be used by the military for their reliability and small size for many years. Commercial circuit packaging quickly moved to the dual in-line package (DIP), first in ceramic and later in plastic. In the 1980s, pin counts of VLSI circuits exceeded the practical limit for DIP packaging, leading to pin grid array (PGA) and leadless chip carrier (LCC) packages. Surface mount packaging appeared in the early 1980s and became popular in the late 1980s, using finer lead pitch with leads formed as either gull-wing or J-lead, as exemplified by the small-outline integrated circuit (SOIC), a carrier which occupies an area about 30-50% less than an equivalent DIP and has a typical thickness 70% less. This package has "gull wing" leads protruding from the two long sides and a lead spacing of 0.050 inches.

In the late 1990s, PQFP and TSOP packages became the most common for high-pin-count devices, though PGA packages are still often used for high-end microprocessors. Intel and AMD are currently transitioning from PGA packages on high-end microprocessors to land grid array (LGA) packages.

Ball grid array (BGA) packages have existed since the 1970s. Flip-chip Ball Grid Array packages, which allow for much higher pin count than other package types, were developed in the 1990s. In an FCBGA package the die is mounted upside-down (flipped) and connects to the package balls via a package substrate that is similar to a printed-circuit board rather than by wires. FCBGA packages allow an array of input-output signals (called Area-I/O) to be distributed over the entire die rather than being confined to the die periphery.

Traces out of the die, through the package, and into the printed circuit board have very different electrical properties, compared to on-chip signals. They require special design techniques and need much more electric power than signals confined to the chip itself.

When multiple dies are put in one package, it is called a SiP, for System In Package. When multiple dies are combined on a small substrate, often ceramic, it is called an MCM, or Multi-Chip Module. The boundary between a big MCM and a small printed circuit board is sometimes fuzzy.

Other developments

In the 1980s, programmable integrated circuits were developed. These devices contain circuits whose logical function and connectivity can be programmed by the user, rather than being fixed by the integrated circuit manufacturer. This allows a single chip to be programmed to implement different LSI-type functions such as logic gates, adders, and registers. Current devices named FPGAs (Field Programmable Gate Arrays) can now implement tens of thousands of LSI circuits in parallel and operate up to 550 MHz.

The techniques perfected by the integrated circuits industry over the last three decades have been used to create microscopic machines, known as MEMS. These devices are used in a variety of commercial and military applications. Example commercial applications include DLP projectors, inkjet printers, and accelerometers used to deploy automobile airbags.

In the past, radios could not be fabricated in the same low-cost processes as microprocessors. But since 1998, a large number of radio chips have been developed using CMOS processes. Examples include Intel's DECT cordless phone, or Atheros's 802.11 card.

Future developments seem to follow the multi-microprocessor paradigm, already used by the Intel and AMD dual-core processors. Intel recently unveiled a prototype, "not for commercial sale" chip that bears a staggering 80 microprocessors. Each core is capable of handling its own task independently of the others. This is a response to the heat-versus-speed limit that is about to be reached using existing transistor technology. The design provides a new challenge to chip programming. X10 is a new open-source programming language designed to assist with this task. [11]

Silicon graffiti

Ever since ICs were created, some chip designers have used the silicon surface area for surreptitious, non-functional images or words. These are sometimes referred to as Chip Art, Silicon Art, Silicon Graffiti or Silicon Doodling. For an overview of this practice, see the article The Secret Art of Chip Graffiti, from the IEEE magazine Spectrum and the Silicon Zoo.

Key industrial and academic data

Notable ICs

* The 555 timer, a common multivibrator sub-circuit used in electronic timing circuits
* The 741 operational amplifier
* 7400 series TTL logic building blocks
* 4000 series, the CMOS counterpart to the 7400 series
* Intel 4004, the world's first microprocessor
* The MOS Technology 6502 and Zilog Z80 microprocessors, used in many home computers

Manufacturers

A list of notable manufacturers; some operating, some defunct:

* Agere Systems (now part of LSI Logic, formerly part of Lucent, which was formerly part of AT&T)
* Agilent Technologies (formerly part of Hewlett-Packard, spun-off in 1999)
* Alcatel
* Altera
* AMD (Advanced Micro Devices; founded by ex-Fairchild employees)
* Analog Devices
* ATI Technologies (Array Technologies Incorporated; acquired parts of Tseng Labs in 1997; in 2006, became a wholly-owned subsidiary of AMD)
* Atmel (co-founded by ex-Intel employee)
* Broadcom
* Commodore Semiconductor Group (formerly MOS Technology)
* Cypress Semiconductor
* Fairchild Semiconductor (founded by ex-Shockley Semiconductor employees: the "Traitorous Eight")
* Freescale Semiconductor (formerly part of Motorola)
* Fujitsu
* Genesis Microchip
* GMT Microelectronics (formerly Commodore Semiconductor Group)
* Hitachi, Ltd.
* Horizon Semiconductors
* IBM (International Business Machines)
* Infineon Technologies (formerly part of Siemens)
* Integrated Device Technology
* Intel (founded by ex-Fairchild employees)
* Intersil (formerly Harris Semiconductor)
* Lattice Semiconductor
* Linear Technology
* LSI Logic (founded by ex-Fairchild employees)
* Maxim Integrated Products
* Marvell Technology Group
* Microchip Technology, manufacturer of the PIC microcontrollers
* MicroSystems International
* MOS Technology (founded by ex-Motorola employees)
* Mostek (founded by ex-Texas Instruments employees)
* National Semiconductor (aka "NatSemi"; founded by ex-Fairchild employees)
* Nordic Semiconductor (formerly known as Nordic VLSI)
* Nvidia (acquired IP of competitor 3dfx in 2000; 3dfx was co-founded by ex-Intel employee)
* NXP Semiconductors (formerly part of Philips)
* ON Semiconductor (formerly part of Motorola)
* Parallax Inc., manufacturer of the BASIC Stamp and Propeller microcontrollers
* PMC-Sierra (from the former Pacific Microelectronics Centre and Sierra Semiconductor, the latter co-founded by ex-NatSemi employee)
* Renesas Technology (joint venture of Hitachi and Mitsubishi Electric)
* Rohm
* Samsung Electronics (Semiconductor division)
* STMicroelectronics (formerly SGS Thomson)
* Texas Instruments
* Toshiba
* TSMC (Taiwan Semiconductor Manufacturing Company, a semiconductor foundry)
* u-blox (Fabless GPS semiconductor provider)
* VIA Technologies (founded by ex-Intel employee) (part of Formosa Plastics Group)
* Volterra Semiconductor
* Xilinx (founded by ex-ZiLOG employee)
* ZiLOG (founded by ex-Intel employees)
INTEGRATED CIRCUITS

In electronics, an integrated circuit (also known as IC, microcircuit, microchip, silicon chip, or chip) is a miniaturized electronic circuit (consisting mainly of semiconductor devices, as well as passive components) that has been manufactured on the surface of a thin substrate of semiconductor material. Integrated circuits are used in almost all electronic equipment in use today and have revolutionized the world of electronics.

A hybrid integrated circuit is a miniaturized electronic circuit constructed of individual semiconductor devices, as well as passive components, bonded to a substrate or circuit board.

This article is about monolithic integrated circuits.

Introduction

Integrated circuits were made possible by experimental discoveries which showed that semiconductor devices could perform the functions of vacuum tubes, and by mid-20th-century technology advancements in semiconductor device fabrication. The integration of large numbers of tiny transistors into a small chip was an enormous improvement over the manual assembly of circuits using discrete electronic components. The integrated circuit's mass production capability, reliability, and building-block approach to circuit design ensured the rapid adoption of standardized ICs in place of designs using discrete transistors.

There are two main advantages of ICs over discrete circuits: cost and performance. Cost is low because the chips, with all their components, are printed as a unit by photolithography and not constructed one transistor at a time. Performance is high since the components switch quickly and consume little power, because the components are small and close together. As of 2006, chip areas range from a few square mm to around 350 mm², with up to 1 million transistors per mm².

Invention

The birth of the IC

The integrated circuit was conceived by a radar scientist, Geoffrey W.A. Dummer (1909-2002), working for the Royal Radar Establishment of the British Ministry of Defence, and published at the Symposium on Progress in Quality Electronic Components in Washington, D.C. on May 7, 1952.[1] He presented his ideas publicly at many symposia to propagate them.

Dummer unsuccessfully attempted to build such a circuit in 1956.

The integrated circuit was independently co-invented by Jack Kilby of Texas Instruments[2] and Robert Noyce of Fairchild Semiconductor[3] around the same time. Kilby recorded his initial ideas concerning the integrated circuit in July 1958 and successfully demonstrated the first working integrated circuit on September 12, 1958.[2] Kilby won the 2000 Nobel Prize in Physics for his part in the invention of the integrated circuit.[4] Robert Noyce also came up with his own idea of the integrated circuit, half a year later than Kilby. Noyce's chip solved many practical problems that the microchip developed by Kilby had not. Noyce's chip, made at Fairchild, was made of silicon, whereas Kilby's chip was made of germanium.

Early developments of the integrated circuit go back to 1949, when the German engineer Werner Jacobi (Siemens AG) filed a patent for an integrated-circuit-like semiconductor amplifying device [5] showing five transistors on a common substrate arranged in a 3-stage amplifier arrangement. Jacobi discloses small and cheap hearing aids as typical industrial applications of his patent. A commercial use of his patent has not been reported.


A precursor idea to the IC was to create small ceramic squares (wafers), each one containing a single miniaturized component. Components could then be integrated and wired into a bidimensional or tridimensional compact grid. This idea, which looked very promising in 1957, was proposed to the US Army by Jack Kilby, and led to the short-lived Micromodule Program (similar to 1951's Project Tinkertoy).[6] However, as the project was gaining momentum, Kilby came up with a new, revolutionary design: the IC.

The aforementioned Noyce credited Kurt Lehovec of Sprague Electric for the principle of p-n junction isolation caused by the action of a biased p-n junction (the diode) as a key concept behind the IC.[7]

See: Other variations of vacuum tubes for precursor concepts such as the Loewe 3NF.

Generations

SSI, MSI, LSI

The first integrated circuits contained only a few transistors. Called "Small-Scale Integration" (SSI), they used circuits containing transistors numbering in the tens.

SSI circuits were crucial to early aerospace projects, and vice-versa. Both the Minuteman missile and Apollo program needed lightweight digital computers for their inertial guidance systems; the Apollo guidance computer led and motivated the integrated-circuit technology, while the Minuteman missile forced it into mass-production.

These programs purchased almost all of the available integrated circuits from 1960 through 1963, and almost alone provided the demand that funded the production improvements that brought production costs from $1000/circuit (in 1960 dollars) down to merely $25/circuit (in 1963 dollars). They began to appear in consumer products at the turn of the decade, a typical application being FM inter-carrier sound processing in television receivers.

The next step in the development of integrated circuits, taken in the late 1960s, introduced devices which contained hundreds of transistors on each chip, called "Medium-Scale Integration" (MSI).

They were attractive economically because while they cost little more to produce than SSI devices, they allowed more complex systems to be produced using smaller circuit boards, less assembly work (because of fewer separate components), and a number of other advantages.

Further development, driven by the same economic factors, led to "Large-Scale Integration" (LSI) in the mid 1970s, with tens of thousands of transistors per chip.

Integrated circuits such as 1K-bit RAMs, calculator chips, and the first microprocessors, which began to be manufactured in moderate quantities in the early 1970s, had under 4,000 transistors. True LSI circuits, approaching 10,000 transistors, began to be produced around 1974, for computer main memories and second-generation microprocessors.

VLSI

Main article: Very-large-scale integration

Upper interconnect layers on an Intel 80486DX2 microprocessor die.

The final step in the development process, starting in the 1980s and continuing through the present, was "Very Large-Scale Integration" (VLSI). This could be said to start with hundreds of thousands of transistors in the early 1980s, and continues beyond several billion transistors as of 2007.

There was no single breakthrough that allowed this increase in complexity, though many factors helped. Manufacturing moved to smaller rules and cleaner fabs, allowing chips with more transistors to be produced with adequate yield, as summarized by the International Technology Roadmap for Semiconductors (ITRS). Design tools improved enough to make it practical to finish these designs in a reasonable time. The more energy-efficient CMOS replaced NMOS and PMOS, avoiding a prohibitive increase in power consumption. Better texts, such as the landmark textbook by Mead and Conway, helped schools educate more designers.

In 1986 the first one megabit RAM chips were introduced, which contained more than one million transistors. Microprocessor chips passed the million transistor mark in 1989 and the billion transistor mark in 2005[8]. The trend continues largely unabated, with chips introduced in 2007 containing tens of billions of memory transistors [9].

ULSI, WSI, SOC, 3D-IC

To reflect further growth in complexity, the term ULSI, standing for "Ultra-Large Scale Integration", was proposed for chips of more than 1 million transistors.

Wafer-scale integration (WSI) is a system of building very-large integrated circuits that uses an entire silicon wafer to produce a single "super-chip". Through a combination of large size and reduced packaging, WSI could lead to dramatically reduced costs for some systems, notably massively parallel supercomputers. The name is taken from the term Very-Large-Scale Integration, the current state of the art when WSI was being developed.

System-on-a-Chip (SoC or SOC) is an integrated circuit in which all the components needed for a computer or other system are included on a single chip. The design of such a device can be complex and costly, and building disparate components on a single piece of silicon may compromise the efficiency of some elements. However, these drawbacks are offset by lower manufacturing and assembly costs and by a greatly reduced power budget: because signals among the components are kept on-die, much less power is required (see Packaging, above).

Three Dimensional Integrated Circuit (3D-IC) has two or more layers of active electronic components that are integrated both vertically and horizontally into a single circuit. Communication between layers uses on-die signaling, so power consumption is much lower than in equivalent separate circuits. Judicious use of short vertical wires can substantially reduce overall wire length for faster operation.
Printed circuit board

A printed circuit board, or PCB, is used to mechanically support and electrically connect electronic components using conductive pathways, or traces, etched from copper sheets laminated onto a non-conductive substrate. Alternative names are printed wiring board (PWB) and etched wiring board. A PCB populated with electronic components is a printed circuit assembly (PCA), also known as a printed circuit board assembly (PCBA).

PCBs are rugged, inexpensive, and can be highly reliable. They require much more layout effort and higher initial cost than either wire-wrapped or point-to-point constructed circuits, but are much cheaper and faster for high-volume production. Much of the electronics industry's PCB design, assembly, and quality control needs are set by standards that are published by the IPC organization.

Manufacturing

Materials

Conducting layers are typically made of thin copper foil. Insulating layers are typically laminated together with epoxy resin. Well-known prepreg materials used in the PCB industry are FR-2 (phenolic cotton paper), FR-3 (cotton paper and epoxy), FR-4 (woven glass and epoxy), FR-5 (woven glass and epoxy), FR-6 (matte glass and polyester), G-10 (woven glass and epoxy), CEM-1 (cotton paper and epoxy), CEM-2 (cotton paper and epoxy), CEM-3 (woven glass and epoxy), CEM-4 (woven glass and epoxy), and CEM-5 (woven glass and polyester).
A PCB as a design on a computer (left) and realized as a board assembly with populated components (right). The board is double sided, with through-hole plating, green solder resist, and white silkscreen printing. Both surface mount and through-hole components have been used.

Patterning (etching)

The vast majority of printed circuit boards are made by bonding a layer of copper over the entire substrate, sometimes on both sides (creating a "blank PCB"), then removing unwanted copper after applying a temporary mask (e.g. by etching), leaving only the desired copper traces. A few PCBs are made by adding traces to the bare substrate (or a substrate with a very thin layer of copper), usually by a complex process of multiple electroplating steps.

There are three common "subtractive" methods (methods that remove copper) used for the production of printed circuit boards:

1. Silk screen printing uses etch-resistant inks to protect the copper foil. Subsequent etching removes the unwanted copper. Alternatively, the ink may be conductive, printed on a blank (non-conductive) board. The latter technique is also used in the manufacture of hybrid circuits.
2. Photoengraving uses a photomask and chemical etching to remove the copper foil from the substrate. The photomask is usually prepared with a photoplotter from data produced by a technician using CAM, or computer-aided manufacturing software. Laser-printed transparencies are typically employed for phototools; however, direct laser imaging techniques are being employed to replace phototools for high-resolution requirements.
3. PCB milling uses a two or three-axis mechanical milling system to mill away the copper foil from the substrate. A PCB milling machine (referred to as a 'PCB Prototyper') operates in a similar way to a plotter, receiving commands from the host software that control the position of the milling head in the x, y, and (if relevant) z axes. Data to drive the Prototyper is extracted from files generated in PCB design software and stored in HPGL or Gerber file format.

"Additive" processes also exist. The most common is the "semi-additive" process. In this version, the unpatterned board has a thin layer of copper already on it. A reverse mask is then applied. (Unlike a subtractive process mask, this mask exposes those parts of the substrate that will eventually become the traces.) Additional copper is then plated onto the board in the unmasked areas; copper may be plated to any desired weight. Tin-lead or other surface platings are then applied. The mask is stripped away and a brief etching step removes the now-exposed original copper laminate from the board, isolating the individual traces.

The additive process is commonly used for multi-layer boards as it facilitates the plating-through of the holes (to produce conductive vias) in the circuit board.

Lamination

Some PCBs have trace layers inside the PCB and are called multi-layer PCBs. These are formed by bonding together separately etched thin boards.

Drilling

Holes, or vias, through a PCB are typically drilled with tiny drill bits made of solid tungsten carbide. The drilling is performed by automated drilling machines with placement controlled by a drill tape or drill file. These computer-generated files are also called numerically controlled drill (NCD) files or "Excellon files". The drill file describes the location and size of each drilled hole.
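
As a rough, hypothetical illustration of such a drill file, an Excellon-style file pairs tool (drill-diameter) definitions in a header with the X/Y coordinates of every hole; the exact coordinate format (units, implied decimal places) varies by fabricator, so this fragment is illustrative only:

    M48
    METRIC
    T1C0.80
    %
    T1
    X10500Y20500
    X12000Y20500
    M30

Here M48 opens the header, T1C0.80 defines tool 1 as a 0.80 mm drill, % ends the header, T1 selects that tool, each X...Y... line locates one hole, and M30 ends the program.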

When very small vias are required, drilling with mechanical bits is costly because of high rates of wear and breakage. In this case, the vias may be evaporated by lasers. Laser-drilled vias typically have an inferior surface finish inside the hole. These holes are called micro vias.

It is also possible with controlled-depth drilling, laser drilling, or by pre-drilling the individual sheets of the PCB before lamination, to produce holes that connect only some of the copper layers, rather than passing through the entire board. These holes are called blind vias when they connect an internal copper layer to an outer layer, or buried vias when they connect two or more internal copper layers and no outer layers.

The walls of the holes, for boards with two or more layers, are plated with copper to form plated-through holes that electrically connect the conducting layers of the PCB. For multilayer boards, those with four layers or more, drilling typically produces a smear of the bonding agent in the laminate system. Before the holes can be plated through, this smear must be removed by a chemical de-smear process, or by plasma etch.

Exposed conductor plating and coating

The places to which components will be mounted are typically plated, because bare copper oxidizes quickly and therefore is not readily solderable. Traditionally, any exposed copper was plated with solder by hot air solder levelling (HASL). This solder was a tin-lead alloy; however, new solder compounds are now used to achieve compliance with the RoHS directive in the EU, which restricts the use of lead. Other platings used are OSP (organic solderability preservative), immersion silver (IAg), immersion tin, electroless nickel with immersion gold coating (ENIG), and direct gold. Edge connectors, placed along one edge of some boards, are often gold plated.

Electrochemical migration (ECM) is the growth of conductive metal filaments on or in a printed circuit board (PCB) under the influence of a DC voltage bias.[1][2]

Solder resist

Areas that should not be soldered to may be covered with a polymer solder resist (solder mask) coating. The solder resist prevents solder from bridging between conductors and thereby creating short circuits. Solder resist also provides some protection from the environment.

Screen printing

Line art and text may be printed onto the outer surfaces of a PCB by screen printing. When space permits, the screen print text can indicate component designators, switch setting requirements, test points, and other features helpful in assembling, testing, and servicing the circuit board.

Screen print is also known as the silk screen, or, in one sided PCBs, the red print.

Lately, some digital printing solutions have been developed to replace the traditional screen printing process. This technology allows printing variable data onto the PCB, including serialization and barcode information for traceability purposes.

Test

Unpopulated boards may be subjected to a bare-board test, where each circuit connection (as defined in a netlist) is verified as correct on the finished board. For high-volume production, a bed-of-nails tester, a fixture, or a rigid needle adapter is used to make contact with copper lands or holes on one or both sides of the board to facilitate testing. A computer instructs the electrical test unit to send a small amount of current through each contact point on the bed of nails as required, and to verify that such current can be seen on the other appropriate contact points. A "short" on a board would be a solid connection where there should be no connection. An "open" is between two points that should be connected but are not. For small- or medium-volume boards, flying-probe and flying-grid testers use moving test heads to make contact with the copper/silver/gold/solder lands or holes to verify the electrical connectivity of the board under test.
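
The logic of that bare-board test can be sketched as follows. The probe_continuity() interface and the tiny simulated board are hypothetical stand-ins for the real tester electronics.

    #include <stdbool.h>
    #include <stdio.h>

    #define MAX_POINTS 4

    typedef struct {
        const char *name;
        int points[MAX_POINTS];   /* bed-of-nails probe indices */
        int n_points;
    } Net;

    /* Simulated board: connected[i][j] says probes i and j conduct.
       A real tester would drive one probe and sense the other. */
    static bool connected[4][4] = {
        { true,  true,  false, false },  /* probes 0,1 joined (net A) */
        { true,  true,  false, false },
        { false, false, true,  false },  /* probe 2 alone: a fault */
        { false, false, false, true  },
    };

    static bool probe_continuity(int a, int b) { return connected[a][b]; }

    /* Verify every net is internally connected (no "opens") and that
       distinct nets do not touch (no "shorts"). */
    static bool test_board(const Net *nets, int n)
    {
        bool ok = true;
        for (int i = 0; i < n; i++)
            for (int p = 1; p < nets[i].n_points; p++)
                if (!probe_continuity(nets[i].points[0], nets[i].points[p])) {
                    printf("OPEN in net %s\n", nets[i].name);
                    ok = false;
                }
        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++)
                if (probe_continuity(nets[i].points[0], nets[j].points[0])) {
                    printf("SHORT between %s and %s\n",
                           nets[i].name, nets[j].name);
                    ok = false;
                }
        return ok;
    }

    int main(void)
    {
        Net nets[] = {
            { "A", { 0, 1 }, 2 },
            { "B", { 2, 3 }, 2 },  /* probes 2 and 3 should be joined */
        };
        printf("board %s\n", test_board(nets, 2) ? "PASS" : "FAIL");
        return 0;
    }

On this simulated board the tester reports an open in net B, because probes 2 and 3 do not conduct: exactly the "open" case described above.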

Printed circuit assembly

After the printed circuit board (PCB) is completed, electronic components must be attached to form a functional printed circuit assembly[3][4], or PCA (sometimes called a "printed circuit board assembly" PCBA). In through-hole construction, component leads are inserted in holes. In surface-mount construction, the components are placed on pads or lands on the outer surfaces of the PCB. In both kinds of construction, component leads are electrically and mechanically fixed to the board with a molten metal solder.

There are a variety of soldering techniques used to attach components to a PCB. High volume production is usually done with machine placement and bulk wave soldering or reflow ovens, but skilled technicians are able to solder very tiny parts (for instance 0201 packages which are 0.02" by 0.01") by hand under a microscope, using tweezers and a fine tip soldering iron for small volume prototypes. Some parts are impossible to solder by hand, such as ball grid array (BGA) packages.

Often, through-hole and surface-mount construction must be combined in a single PCA because some required components are available only in surface-mount packages, while others are available only in through-hole packages. Another reason to use both methods is that through-hole mounting can provide needed strength for components likely to endure physical stress, while components that are expected to go untouched will take up less space using surface-mount techniques.

After the board has been populated it may be tested in a variety of ways:

* While the power is off, visual inspection, automated optical inspection. JEDEC guidelines for PCB component placement, soldering, and inspection are commonly used to maintain quality control in this stage of PCB manufacturing.

* While the power is off, analog signature analysis, power-off testing.

* While the power is on, in-circuit test, where physical measurements (i.e. voltage, frequency) can be done.

* While the power is on: functional test, checking that the PCB does what it was designed to do.

To facilitate these tests, PCBs may be designed with extra pads to make temporary connections. Sometimes these pads must be isolated with resistors. The in-circuit test may also exercise boundary scan test features of some components. In-circuit test systems may also be used to program nonvolatile memory components on the board.

In boundary scan testing, test circuits integrated into various ICs on the board form temporary connections between the PCB traces to test that the ICs are mounted correctly. Boundary scan testing requires that all the ICs to be tested use a standard test configuration procedure, the most common one being the Joint Test Action Group (JTAG) standard.
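
A toy model in Python may make the idea concrete (the pin names and fault are hypothetical, and this sketches only the concept, not the JTAG protocol itself): each IC drives a known value out of a boundary cell, and the IC at the other end of the trace captures what actually arrived, exposing bad solder joints.

    # Traces connect driver cells on U1 to receiver cells on U2.
    traces = {"D0": ("U1.pin3", "U2.pin7"), "D1": ("U1.pin4", "U2.pin8")}

    # Simulated solder state: which pins are actually joined to their trace.
    good_joints = {"U1.pin3", "U2.pin7", "U1.pin4"}  # U2.pin8 is a dry joint

    def capture(driver_pin, receiver_pin, value):
        """A value propagates only if both ends are properly soldered."""
        if driver_pin in good_joints and receiver_pin in good_joints:
            return value
        return 0  # a floating input reads back 0 in this toy model

    for name, (drv, rcv) in traces.items():
        for test_value in (0, 1):  # both patterns are needed to catch faults
            got = capture(drv, rcv, test_value)
            if got != test_value:
                print(f"trace {name}: drove {test_value}, captured {got} -> fault")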

When boards fail the test, technicians may desolder and replace failed components, a repair process known professionally as rework.

[edit] Protection and packaging

PCBs intended for extreme environments often have a conformal coat, which is applied by dipping or spraying after the components have been soldered. The coat prevents corrosion and leakage currents or shorting due to condensation. The earliest conformal coats were wax. Modern conformal coats are usually dips of dilute solutions of silicone rubber, polyurethane, acrylic, or epoxy. Some are engineering plastics sputtered onto the PCB in a vacuum chamber.

Many assembled PCBs are static sensitive and must therefore be placed in antistatic bags during transport. When handling these boards, the user must be earthed; failure to do this might discharge an accumulated static charge through the board, damaging or destroying it. Even bare boards are sometimes static sensitive: traces have become so fine that it is quite possible to blow a trace off the board (or change its characteristics) with a static charge. This is especially true on non-traditional PCBs such as MCMs and microwave PCBs.

[edit] Safety certification (US)

Safety Standard UL 796 covers component safety requirements for printed wiring boards for use as components in devices or appliances. Testing analyzes characteristics such as flammability, maximum operating temperature, electrical tracking, heat deflection, and direct support of live electrical parts.

The boards may use organic or inorganic base materials in a single or multilayer, rigid or flexible form. Circuitry construction may include etched, die stamped, precut, flush press, additive, and plated conductor techniques. Printed-component parts may be used.

The suitability of the pattern parameters, operating temperature, and maximum solder limits is determined in accordance with the construction and requirements of the applicable end product.

[edit] "Cordwood" construction
A cordwood module.

Cordwood construction can give large space savings and was often used with wire-ended components in applications where space was at a premium (such as missile guidance and telemetry systems). In 'cordwood' construction, axial-leaded components were mounted between two parallel planes. Instead of being soldered, the components were connected to other components by thin nickel tapes welded at right angles onto the component leads. To avoid shorting different interconnection layers together, thin insulating cards were placed between them. Perforations or holes in the cards allowed component leads to project through to the next interconnection layer. One disadvantage of this system was that special components with nickel leads had to be used so that the interconnecting welds could be made. Some versions of cordwood construction used single-sided PCBs as the interconnection method (as pictured), which meant that normal leaded components could be used.

Before the advent of integrated circuits, this method allowed the highest possible component packing density; because of this, it was used by a number of computer vendors including Control Data Corporation. The cordwood method of construction now appears to have fallen into disuse, probably because high packing densities can be more easily achieved using surface mount techniques and integrated circuits.

[edit] Multiwire boards

Multiwire is a patented technique of interconnection which uses machine-routed insulated wires embedded in a non-conducting matrix (often plastic resin). It was used during the 1980s and 1990s. (Augat Inc., U.S. Patent 4,648,180)

Since it was quite easy to stack interconnections (wires) inside the embedding matrix, the approach allowed designers to forget completely about the routing of wires (usually a time-consuming operation in PCB design): anywhere the designer needed a connection, the machine would draw a wire in a straight line from one location or pin to another. This led to very short design times (no complex routing algorithms were needed, even for high-density designs) as well as reduced crosstalk (which is worst when wires run parallel to each other, something that almost never happens in Multiwire). However, the cost is too high to compete with cheaper PCB technologies when large quantities are needed.
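
The contrast with conventional routing can be sketched in a few lines of Python (the pin coordinates are hypothetical): each Multiwire connection is simply a straight segment between two pins, whereas an etched trace typically follows a longer rectilinear (Manhattan) path.

    from math import hypot

    pins = {"U1.1": (0, 0), "U2.5": (30, 40), "R3.2": (10, 5)}  # mm
    connections = [("U1.1", "U2.5"), ("U2.5", "R3.2")]

    for a, b in connections:
        (xa, ya), (xb, yb) = pins[a], pins[b]
        straight = hypot(xb - xa, yb - ya)        # Multiwire: direct segment
        manhattan = abs(xb - xa) + abs(yb - ya)   # typical etched-trace routing
        print(f"{a} -> {b}: straight {straight:.1f} mm, Manhattan {manhattan:.1f} mm")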

[edit] Surface-mount technology

Main article: surface-mount technology

Surface mount components, including resistors, transistors and an integrated circuit.

Surface-mount technology was developed in the 1960s, gained momentum in the early 1980s, and became widely used by the mid-1990s. Components were mechanically redesigned to have small metal tabs or end caps that could be soldered directly to the surface of the PCB. Components became much smaller, and placement on both sides of the board became far more common than with through-hole mounting, allowing much higher circuit densities. Surface mounting lends itself well to a high degree of automation, reducing labour costs and greatly increasing production and quality rates. Surface-mount devices (SMDs) can be one-quarter to one-tenth the size and weight of through-hole parts, and passive components can be one-half to one-quarter the cost; integrated circuits (where the chip itself is the most expensive part), however, are often priced the same regardless of package type. As of 2006, some wire-ended components, such as small-signal switch diodes (e.g. the 1N4148), are actually significantly cheaper than corresponding SMD versions.

THE VLSI DESIGN PROCESS

A typical digital design flow is as follows:

Specification
Architecture
RTL Coding
RTL Verification
Synthesis
Backend
Tape Out to Foundry to get the end product: a wafer with a repeated number of identical ICs.

All modern digital designs start with a designer writing a hardware description of the IC in Verilog or VHDL (a Hardware Description Language, or HDL). A Verilog or VHDL program essentially describes the hardware (logic gates, flip-flops, counters, etc.), the interconnect of the circuit blocks, and the functionality. Various CAD tools are available to synthesize a circuit based on the HDL; the most widely used synthesis tools come from two CAD companies, Synopsys and Cadence.

Without going into details, VHDL can be called the "C" of the VLSI industry. VHDL stands for "VHSIC Hardware Description Language", where VHSIC stands for "Very High Speed Integrated Circuit". The language is used to design circuits at a high level, in two ways: a behavioural description describes what the circuit is supposed to do, while a structural description describes what the circuit is made of. Other languages for describing circuits, such as Verilog, work in a similar fashion.

Both forms of description are then used to generate a very low-level description that spells out exactly how all of this is to be fabricated on the silicon chip, resulting in the manufacture of the intended IC.
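
The behavioural/structural distinction can be sketched in Python as an analogy only (a real description would be written in VHDL or Verilog); here both styles describe a 2-to-1 multiplexer, and the function names and bit encoding are illustrative assumptions.

    def mux2_behavioural(a, b, sel):
        """Behavioural: state WHAT the circuit does (select a when sel is 1)."""
        return a if sel else b

    def mux2_structural(a, b, sel):
        """Structural: state what it is MADE OF (two ANDs, an OR, an inverter)."""
        not_sel = sel ^ 1
        return (a & sel) | (b & not_sel)

    # The two descriptions must agree on every input combination.
    for a in (0, 1):
        for b in (0, 1):
            for sel in (0, 1):
                assert mux2_behavioural(a, b, sel) == mux2_structural(a, b, sel)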


A typical analog design flow is as follows:

In the case of analog design, the flow changes somewhat:
Specifications
Architecture
Circuit Design
SPICE Simulation
Layout
Parametric Extraction / Back Annotation
Final Design
Tape Out to foundry.

While digital design is highly automated now, only a very small portion of analog design can be automated. There is a hardware description language called AHDL, but it is not widely used because it does not accurately capture the behavioural model of the circuit, owing to the complexity of the effects of parasitics on analog behaviour. Many analog chips are what are termed "flat" or non-hierarchical designs; this is true for small-transistor-count chips such as an operational amplifier, a filter, or a power-management chip. For more complex analog chips such as data converters, the design is done at the transistor level, building up to a cell level, then a block level, and then integrated at the chip level. Few CAD tools are available for analog design even today, and thus analog design remains a difficult art. SPICE remains the most useful simulation tool for analog as well as digital design.
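
As a rough illustration of what a transient simulation does, the sketch below hand-rolls the numerical integration SPICE would perform for a single RC low-pass stage; the component values are hypothetical, and real SPICE solves the full nonlinear device equations rather than this one-line model.

    R = 1e3      # ohms
    C = 1e-6     # farads
    V_IN = 5.0   # volt step applied at t = 0
    dt = 1e-5    # time step, seconds

    v_out = 0.0
    for step in range(1, 501):
        # Forward-Euler integration of C * dv/dt = (V_IN - v_out) / R
        v_out += dt * (V_IN - v_out) / (R * C)
        if step % 100 == 0:
            print(f"t = {step * dt * 1e3:.1f} ms, Vout = {v_out:.3f} V")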

MOST OF TODAY’S VLSI DESIGNS ARE CLASSIFIED INTO THREE CATEGORIES:

1. Analog:
Small-transistor-count precision circuits such as amplifiers, data converters, filters, phase-locked loops, sensors, etc.

2. ASICS or Application Specific Integrated Circuits:
Progress in the fabrication of ICs has enabled us to create fast and powerful circuits in smaller and smaller devices. This also means that we can pack a lot more functionality into the same area. The biggest application of this ability is found in the design of ASICs: ICs created for specific purposes, each device created to do a particular job and do it well. The most common application area is DSP - signal filters, image compression, etc. At the extreme, consider that a digital wristwatch normally consists of a single IC doing all the time-keeping jobs as well as extra features like games, calendar, etc.

3. SoC or Systems on a chip:
These are highly complex mixed-signal circuits (digital and analog on the same chip). A network processor chip or a wireless radio chip is an example of an SoC.

Very-large-scale integration (VLSI)

Very-large-scale integration (VLSI) is the process of creating integrated circuits by combining thousands of transistor-based circuits into a single chip. VLSI began in the 1970s when complex semiconductor and communication technologies were being developed. The microprocessor is a VLSI device. The term is no longer as common as it once was, as chips have increased in complexity into the hundreds of millions of transistors.

Overview

The first semiconductor chips held one transistor each. Subsequent advances added more and more transistors, and as a consequence more individual functions or systems were integrated over time. The first integrated circuits held only a few devices, perhaps as many as ten diodes, transistors, resistors and capacitors, making it possible to fabricate one or more logic gates on a single device. Such devices, now known retrospectively as "small-scale integration" (SSI), gave way through improvements in technique to devices with hundreds of logic gates, and then to large-scale integration (LSI), i.e. systems with at least a thousand logic gates. Current technology has moved far past this mark, and today's microprocessors have many millions of gates and hundreds of millions of individual transistors.

As of early 2008, billion-transistor processors are commercially available, an example being Intel's Montecito Itanium chip. This is expected to become more commonplace as semiconductor fabrication moves from the current generation of 65 nm processes to the next 45 nm generation. Another notable example is Nvidia's 280 series GPU, which is unusual in that its 1.4 billion transistors, capable of a teraflop of performance, are almost entirely dedicated to logic (Itanium's transistor count is largely due to its 24 MB L3 cache).

At one time, there was an effort to name and calibrate various levels of large-scale integration above VLSI. Terms like Ultra-large-scale Integration (ULSI) were used. But the huge number of gates and transistors available on common devices has rendered such fine distinctions moot. Terms suggesting greater than VLSI levels of integration are no longer in widespread use. Even VLSI is now somewhat quaint, given the common assumption that all microprocessors are VLSI or better.

[edit] Structured design

Structured VLSI design is a modular methodology originated by Carver Mead and Lynn Conway for saving microchip area by minimizing the area of the interconnect fabric. This is achieved by the repetitive arrangement of rectangular macro blocks, which can be interconnected by wiring by abutment. An example is partitioning the layout of an adder into a row of equal bit-slice cells. In complex designs this structuring may be achieved by hierarchical nesting.
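
A Python analogy of this bit-slice structuring, under the assumption that one reusable full-adder "cell" stands in for the repeated macro block, with the carry acting as the wiring-by-abutment between neighbouring slices:

    def bit_slice(a, b, cin):
        """One reusable cell: a 1-bit full adder."""
        s = a ^ b ^ cin
        carry = (a & b) | ((a ^ b) & cin)
        return s, carry

    def ripple_adder(a_bits, b_bits):
        """Abut identical slices; the carry 'wire' links each pair of neighbours."""
        carry, out = 0, []
        for a, b in zip(a_bits, b_bits):  # least-significant bit first
            s, carry = bit_slice(a, b, carry)
            out.append(s)
        return out + [carry]

    # 5 + 3 = 8 with bits listed least-significant first:
    print(ripple_adder([1, 0, 1], [1, 1, 0]))  # [0, 0, 0, 1]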

Structured VLSI design was popular in the early 1980s but lost popularity later with the advent of placement and routing tools, which waste a great deal of area on routing; the waste is tolerated because of the progress of Moore's Law. When introducing the hardware description language KARL in the mid-1970s, Reiner Hartenstein coined the term "structured VLSI design" (originally "structured LSI design"), echoing Edsger Dijkstra's structured programming approach, which uses procedure nesting to avoid chaotic, spaghetti-structured programs.

[edit] Notable companies

* Advanced Micro Devices (AMD)
* Altera
* Analog Devices
* ARM Ltd
* ATI Technologies
* Austria Microsystems
* Broadcom
* Chartered Semiconductor Manufacturing
* Conexant
* Cypress Semiconductor
* Dalsa
* Freescale Semiconductor
* IBM
* Infineon
* Intel
* Lattice Semiconductor
* Linear Technology
* Marvell Technology Group
* Micron Technology
* MIPS Technologies
* National Semiconductor
* NEC
* NeoMagic
* Nvidia
* NXP Semiconductors
* Portal Player
* Qualcomm
* Rambus
* Renesas Technology
* Samsung Electronics
* Sandisk
* Sarnoff
* Sasken Communication Technologies Limited
* ST Microelectronics
* Tata Elxsi
* Texas Instruments
* Toshiba
* TSMC
* UMC
* Wipro
* Xilinx

Wednesday, August 13, 2008



Ultraviolet (UV) light is electromagnetic radiation with a wavelength shorter than that of visible light but longer than X-rays. It is so named because the spectrum consists of electromagnetic waves with frequencies higher than those that humans identify as the color violet.

UV light is typically found as part of the radiation received by the Earth from the Sun. Most humans are aware of the effects of UV through the painful condition of sunburn. The UV spectrum has many other effects, including both beneficial and damaging changes to human health.

Origin of term
The name means "beyond violet" (from Latin ultra, "beyond"), violet being the color of the shortest wavelengths of visible light. UV light has a shorter wavelength than that of violet light.


[edit] Subtypes
The electromagnetic spectrum of ultraviolet light can be subdivided in a number of ways. The draft ISO standard on determining solar irradiances (ISO-DIS-21348)[2] describes the following ranges:

Name                                       Abbreviation  Wavelength range   Energy per photon
Ultraviolet A, long wave, or black light   UVA           400 nm – 315 nm    3.10 – 3.94 eV
Near                                       NUV           400 nm – 300 nm    3.10 – 4.13 eV
Ultraviolet B or medium wave               UVB           315 nm – 280 nm    3.94 – 4.43 eV
Middle                                     MUV           300 nm – 200 nm    4.13 – 6.20 eV
Ultraviolet C, short wave, or germicidal   UVC           280 nm – 100 nm    4.43 – 12.4 eV
Far                                        FUV           200 nm – 122 nm    6.20 – 10.2 eV
Vacuum                                     VUV           200 nm – 10 nm     6.20 – 124 eV
Extreme                                    EUV           121 nm – 10 nm     10.2 – 124 eV
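
The energy-per-photon column follows directly from E = hc/λ, which in convenient units is E[eV] ≈ 1239.84 / λ[nm]. A quick Python check of the band edges against the table:

    H_C_EV_NM = 1239.84  # Planck's constant times the speed of light, in eV·nm

    for name, lo_nm, hi_nm in [("UVA", 315, 400), ("UVB", 280, 315),
                               ("UVC", 100, 280)]:
        e_min = H_C_EV_NM / hi_nm  # longest wavelength -> lowest energy
        e_max = H_C_EV_NM / lo_nm  # shortest wavelength -> highest energy
        print(f"{name}: {e_min:.2f} - {e_max:.2f} eV")
    # UVA: 3.10 - 3.94 eV, UVB: 3.94 - 4.43 eV, UVC: 4.43 - 12.40 eV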

In photolithography, laser technology, and related fields, the term deep ultraviolet (DUV) refers to wavelengths below 300 nm. "Vacuum UV" is so named because it is absorbed strongly by air and is therefore used in a vacuum. In the long-wavelength limit of this region, roughly 150–200 nm, the principal absorber is the oxygen in air. Work in this region can be performed in an oxygen-free atmosphere, commonly pure nitrogen, which avoids the need for a vacuum chamber.



[edit] Black light
Main article: Black light
A black light, or Wood's light, is a lamp that emits long-wave UV radiation and very little visible light; such lamps are commonly referred to simply as "UV lights". Fluorescent black lights are typically made in the same fashion as normal fluorescent lights except that only one phosphor is used, and the normally clear glass envelope of the bulb may be replaced by a deep-bluish-purple glass called Wood's glass, a nickel-oxide-doped glass that blocks almost all visible light above 400 nanometers. The color of such lamps is often referred to in the trade as "blacklight blue" or "BLB", to distinguish them from "bug zapper" blacklight ("BL") lamps that do not have the blue Wood's glass. The phosphor typically used for a near 368 to 371 nanometer emission peak is either europium-doped strontium fluoroborate (SrB4O7F:Eu2+) or europium-doped strontium borate (SrB4O7:Eu2+), while the phosphor used to produce a peak around 350 to 353 nanometers is lead-doped barium silicate (BaSi2O5:Pb+). "Blacklight blue" lamps peak at 365 nm.

While "black lights" do produce light in the UV range, their spectrum is confined to the longwave UVA region. Unlike UVB and UVC, which are responsible for the direct DNA damage that leads to skin cancer, black light is limited to lower energy, longer waves and does not cause sunburn. However, UVA is capable of causing damage to collagen fibers and destroying vitamin A in skin.

A black light may also be formed by simply using Wood's glass instead of clear glass as the envelope for a common incandescent bulb. This was the method used to create the very first black light sources. Though it remains a cheaper alternative to the fluorescent method, it is exceptionally inefficient at producing UV light (a mere few lumens per watt) owing to the black-body nature of the incandescent light source; because of this inefficiency, incandescent UV bulbs may also become dangerously hot during use. More rarely still, high-power (hundreds of watts) mercury-vapor black lights can be found, which use a UV-emitting phosphor and an envelope of Wood's glass; these lamps are used mainly for theatrical and concert displays, and also become very hot during normal use.

Some UV fluorescent bulbs specifically designed to attract insects for use in bug zappers use the same near-UV emitting phosphor as normal blacklights, but use plain glass instead of the more expensive Wood's glass. Plain glass blocks less of the visible mercury emission spectrum, making them appear light blue to the naked eye. These lamps are referred to as "blacklight" or "BL" in most lighting catalogs.

Ultraviolet light can also be generated by some light-emitting diodes.


[edit] Natural sources of UV
The Sun emits ultraviolet radiation in the UVA, UVB, and UVC bands, but because of absorption in the atmosphere's ozone layer, 98.7% of the ultraviolet radiation that reaches the Earth's surface is UVA. (Some of the UVB and UVC radiation is responsible for the generation of the ozone layer.)

Ordinary glass is partially transparent to UVA but is opaque to shorter wavelengths, whereas silica or quartz glass, depending on quality, can be transparent even to vacuum UV wavelengths. Ordinary window glass passes about 90% of the light above 350 nm but blocks over 90% of the light below 300 nm.[3][4][5]

The onset of vacuum UV, 200 nm, is defined by the fact that ordinary air is opaque below this wavelength, owing to the strong absorption of these wavelengths by the oxygen in air. Pure nitrogen (less than about 10 ppm oxygen) is transparent to wavelengths in the range of about 150–200 nm. This has wide practical significance now that semiconductor manufacturing processes use wavelengths shorter than 200 nm: by working in oxygen-free gas, the equipment does not have to be built to withstand the pressure differences required to work in a vacuum. Some other scientific instruments, such as circular dichroism spectrometers, are also commonly nitrogen-purged and operate in this spectral region.

Extreme UV is characterized by a transition in the physics of interaction with matter: wavelengths longer than about 30 nm interact mainly with the chemical valence electrons of matter, while shorter wavelengths interact mainly with inner-shell electrons and nuclei. The long end of the EUV/XUV spectrum is set by a prominent He+ spectral line at 30.4 nm. XUV is strongly absorbed by most known materials, but it is possible to synthesize multilayer optics that reflect up to about 50% of XUV radiation at normal incidence. This technology has been used to make telescopes for solar imaging, pioneered by the NIXT and MSSTA sounding rockets in the 1990s (current examples are SOHO/EIT and TRACE), and for nanolithography (printing of traces and devices on microchips).


[edit] Human health-related effects of UV radiation

[edit] Beneficial effects
As noted above, the Earth's atmosphere blocks the great majority of solar UV radiation. A positive effect of UVB exposure is that it induces the production of vitamin D in the skin. It has been estimated that tens of thousands of premature deaths occur in the United States annually from a range of cancers due to vitamin D deficiency.[6] Another effect of vitamin D deficiency is osteomalacia (the adult equivalent of rickets), which can result in bone pain, difficulty in weight bearing, and sometimes fractures. Other studies show that most people get adequate vitamin D through food and incidental exposure.[7]

Many countries have fortified certain foods with vitamin D to prevent deficiency. Eating fortified foods or taking a dietary supplement is usually preferred to UVB exposure, owing to the increased risk of skin cancer from UV radiation.[7]

Too little UVB radiation leads to a lack of vitamin D; too much leads to direct DNA damage and sunburn. An appropriate amount of UVB (what is appropriate depends on skin colour) leads to a limited amount of direct DNA damage, which is recognized and repaired by the body. Melanin production then increases, leading to a long-lasting tan. This tan appears with a two-day lag after irradiation, but it is much less harmful and longer lasting than a tan obtained from UVA.

Ultraviolet radiation has other medical applications, in the treatment of skin conditions such as psoriasis and vitiligo. UVA radiation can be used in conjunction with psoralens (PUVA treatment). UVB radiation is rarely used in conjunction with psoralens. In cases of psoriasis and vitiligo, UV light with wavelength of 311 nm is most effective.[citation needed]


[edit] Harmful effects
Overexposure to UVB radiation can cause sunburn and some forms of skin cancer; in humans, prolonged exposure to solar UV radiation may result in acute and chronic health effects on the skin, eyes, and immune system.[8] However, the most deadly form of skin cancer, malignant melanoma, is mostly caused by indirect DNA damage (free radicals and oxidative stress), as can be seen from the absence of a UV-signature mutation in 92% of all melanomas.[9]

UVC rays have the highest energy and are the most dangerous type of ultraviolet light. Little attention has been given to UVC rays in the past, since they are filtered out by the atmosphere. However, their use in equipment such as pond sterilization units may pose an exposure risk if a lamp is switched on outside its enclosed sterilization unit.


Ultraviolet photons harm the DNA molecules of living organisms in different ways. In one common damage event, adjacent thymine bases bond with each other instead of across the "ladder". This makes a bulge, and the distorted DNA molecule does not function properly.
[edit] Skin


“ Ultraviolet (UV) irradiation present in sunlight is an environmental human carcinogen. The toxic effects of UV from natural sunlight and therapeutic artificial lamps are a major concern for human health. The major acute effects of UV irradiation on normal human skin comprise sunburn inflammation (erythema), tanning, and local or systemic immunosuppression. ”
— Matsumura and Ananthaswamy (2004)[10]

UVA, UVB and UVC can all damage collagen fibers and thereby accelerate aging of the skin. Both UVA and UVB destroy vitamin A in skin, which may cause further damage.[11] In the past UVA was considered less harmful, but today it is known to contribute to skin cancer via indirect DNA damage (free radicals and reactive oxygen species). It penetrates deeply, but it does not cause sunburn. UVA does not damage DNA directly like UVB and UVC, but it can generate highly reactive chemical intermediates, such as hydroxyl and oxygen radicals, which in turn can damage DNA. Because it does not cause reddening of the skin (erythema), it cannot be measured in SPF testing. There is no good clinical measurement of the blocking of UVA radiation, but it is important that sunscreens block both UVA and UVB. Some scientists blame the absence of UVA filters in sunscreens for the higher melanoma risk found among sunscreen users.[12]


The reddening of the skin due to the action of sunlight depends both on the amount of sunlight and on the sensitivity of the skin ("erythemal action spectrum") over the UV spectrum. UVB light can cause direct DNA damage: the radiation excites DNA molecules in skin cells, causing aberrant covalent bonds to form between adjacent cytosine bases, producing a dimer. When DNA polymerase comes along to replicate this strand of DNA, it reads the dimer as "AA" rather than the original "CC", which causes the replication mechanism to add "TT" to the growing strand. This mutation can result in cancerous growths and is known as a "classical C-T mutation". Mutations caused by direct DNA damage carry a UV-signature mutation that is commonly seen in skin cancers. The mutagenicity of UV radiation can be easily observed in bacterial cultures. This cancer connection is one reason for concern about ozone depletion and the ozone hole. UVB causes some damage to collagen, but at a very much slower rate than UVA.
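
The substitution described here can be illustrated with a toy Python sketch (the sequence is hypothetical, and real repair and replication are far more involved): a cytosine pair misread as "AA" makes the polymerase add "TT", so the original C:G pairs end up as T:A.

    COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

    def replicate(template, dimer_at=None):
        """Build the complementary strand; a dimer at index i is misread as 'AA'."""
        read = list(template)
        if dimer_at is not None:
            read[dimer_at:dimer_at + 2] = ["A", "A"]  # the misread CC dimer
        return "".join(COMPLEMENT[base] for base in read)

    template = "ATGCCGTA"                   # adjacent cytosines at indices 3-4
    print(replicate(template))              # TACGGCAT (faithful copy)
    print(replicate(template, dimer_at=3))  # TACTTCAT (GG became TT)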

As a defense against UV radiation, the amount of the brown pigment melanin in the skin increases when exposed to moderate (depending on skin type) levels of radiation; this is commonly known as a sun tan. The purpose of melanin is to absorb UV radiation and dissipate the energy as harmless heat, blocking the UV from damaging skin tissue. UVA gives a quick tan that lasts for days by oxidizing melanin that is already present and by triggering the release of melanin from melanocytes. UVB yields a tan that takes roughly two days to develop, because it stimulates the body to produce more melanin. The photochemical properties of melanin make it an excellent photoprotectant. Sunscreen chemicals, however, cannot dissipate the energy of the excited state as efficiently as melanin, so the penetration of sunscreen ingredients into the lower layers of the skin increases the amount of free radicals and ROS.[13]

Sunscreen prevents the direct DNA damage which causes sunburn. Most of these products contain an SPF rating to show how well they block UVB rays. The SPF rating, however, offers no data about UVA protection. In the US, the FDA is considering adding a star rating system to show UVA protection. A similar system is already used in some European countries.

Some sunscreen lotions now include compounds such as titanium dioxide which helps protect against UVA rays. Other UVA blocking compounds found in sunscreen include zinc oxide and avobenzone. Cantaloupe extract, rich in the compound superoxide dismutase (SOD), can be bound with gliadin to form glisodin, an orally-effective protectant against UVB radiation. There are also naturally occurring compounds found in rainforest plants that have been known to protect the skin from UV radiation damage, such as the fern Phlebodium aureum.


[edit] Sunscreen safety debate
Main article: Sunscreen controversy
Medical organizations recommend that patients protect themselves from UV radiation using sunscreen. Five sunscreen ingredients have been shown to protect mice against skin tumors (see sunscreen).

However, some sunscreen chemicals produce potentially harmful substances if they are illuminated while in contact with living cells.[14][15][16] The amount of sunscreen which penetrates through the stratum corneum may or may not be large enough to cause damage. In one study of sunscreens, the authors write:[17]

The question whether UV filters act on or in the skin has so far not been fully answered. Despite the fact that an answer would be a key to improving formulations of sun protection products, many publications carefully avoid addressing this question.

In an experiment published in 2006 by Hanson et al., the amount of harmful reactive oxygen species (ROS) was measured in untreated and in sunscreen-treated skin. In the first 20 minutes the film of sunscreen had a protective effect, and the number of ROS was smaller. After 60 minutes, however, so much sunscreen had been absorbed that the amount of ROS was higher in the sunscreen-treated skin than in the untreated skin.[13]


[edit] Eye
High intensities of UVB light are hazardous to the eyes, and exposure can cause welder's flash (photokeratitis or arc eye) and may lead to cataracts, pterygium,[18][19] and pinguecula formation.

Protective eyewear is beneficial to those who are working with or those who might be exposed to ultraviolet radiation, particularly short wave UV. Given that light may reach the eye from the sides, full coverage eye protection is usually warranted if there is an increased risk of exposure, as in high altitude mountaineering. Mountaineers are exposed to higher than ordinary levels of UV radiation, both because there is less atmospheric filtering and because of reflection from snow and ice.

Ordinary, untreated eyeglasses give some protection. Most plastic lenses give more protection than glass lenses because, as noted above, glass is transparent to UVA while the common acrylic plastic used for lenses is less so. Some plastic lens materials, such as polycarbonate, inherently block most UV. Protective treatments are available for eyeglass lenses that need them, giving better protection; but even a treatment that completely blocks UV will not protect the eye from light that arrives around the lens.


[edit] Degradation of polymers, pigments and dyes
Many polymers used in consumer products are degraded by UV light and need the addition of UV absorbers to inhibit attack, especially if the products are used outdoors and so exposed to sunlight. The problem appears as discoloration or fading, cracking, and sometimes total product disintegration if cracking has proceeded far enough. The rate of attack increases with exposure time and sunlight intensity.

This is known as UV degradation, and is one form of polymer degradation. Sensitive polymers include thermoplastics such as polypropylene and polyethylene, as well as speciality fibres like aramids. UV absorption leads to chain degradation and loss of strength at sensitive points in the chain structure, such as tertiary carbon atoms, which in polypropylene occur in every repeat unit.

In addition, many pigments and dyes absorb UV and change colour, so paintings and textiles may need extra protection both from sunlight and fluorescent lamps, two common sources of UV radiation. Old and antique paintings such as watercolour paintings for example, usually need to be placed away from direct sunlight. Common window glass provides some protection by absorbing some of the harmful UV, but valuable artifacts need shielding.


[edit] Blockers and absorbers
Ultraviolet Light Absorbers (UVAs) are molecules used in organic materials (polymers, paints, etc.) to absorb UV light in order to reduce the UV degradation (photo-oxidation) of a material. A number of different UVAs exist with different absorption properties. UVAs can disappear over time, so monitoring of UVA levels in weathered materials is necessary.

In sunscreen, ingredients which absorb UVA/UVB rays, such as avobenzone and octyl methoxycinnamate, are known as absorbers. They are contrasted with physical "blockers" of UV radiation such as titanium dioxide and zinc oxide. (See sunscreen for a more complete list.)


[edit] Applications of UV

[edit] Security

A bird appears on many Visa credit cards when held under a UV light source.

To help thwart counterfeiters, sensitive documents (e.g. credit cards, driver's licenses, passports) may also include a UV watermark that can only be seen when viewed under a UV-emitting light. Passports issued by most countries usually contain UV-sensitive inks and security threads. Visa stamps and stickers on passports contain large and detailed seals that are invisible to the naked eye under normal light but strongly visible under UV illumination. Banknotes of various countries carry images, as well as multicolored fibers, that are visible only under ultraviolet light.

Some brands of pepper spray leave an invisible chemical (a UV dye) that is not easily washed off a pepper-sprayed attacker, which can help police identify the attacker later.[20]


[edit] Fluorescent lamps
Fluorescent lamps produce UV radiation by ionising low-pressure mercury vapour. A phosphorescent coating on the inside of the tubes absorbs the UV and converts it to visible light.

The main mercury emission wavelength is in the UVC range. Unshielded exposure of the skin or eyes to mercury arc lamps that do not have a conversion phosphor is quite dangerous.

The light from a mercury lamp is predominantly at discrete wavelengths. Other practical UV sources with more continuous emission spectra include xenon arc lamps (commonly used as sunlight simulators), deuterium arc lamps, mercury-xenon arc lamps, metal-halide arc lamps, and tungsten-halogen incandescent lamps.


[edit] Astronomy

Aurora at Jupiter's north pole as seen in ultraviolet light by the Hubble Space Telescope.

In astronomy, very hot objects preferentially emit UV radiation (see Wien's law). Because the ozone layer blocks many UV frequencies from reaching telescopes on the surface of the Earth, most UV observations are made from space. (See UV astronomy, space observatory.)


[edit] Biological surveys and pest control
Some animals, including birds, reptiles, and insects such as bees, can see into the near ultraviolet. Many fruits, flowers, and seeds stand out more strongly from the background in ultraviolet wavelengths than in human color vision. Scorpions glow or take on a yellow-to-green color under UV illumination. Many birds have patterns in their plumage that are invisible at usual wavelengths but observable in ultraviolet, and the urine and other secretions of some animals, including dogs, cats, and human beings, are much easier to spot with ultraviolet.

Many insects use the ultraviolet emissions from celestial objects as references for flight navigation. A local ultraviolet emitter will normally disrupt the navigation process and will eventually attract the flying insect.


Entomologist using a UV light for collecting beetles in the Paraguayan Chaco.

Ultraviolet traps called bug zappers are used to eliminate various small flying insects. The insects are attracted to the UV light and are killed by an electric shock, or trapped once they come into contact with the device. Different designs of ultraviolet light traps are also used by entomologists for collecting nocturnal insects during faunistic surveys.


[edit] Spectrophotometry
UV/VIS spectroscopy is widely used as a technique in chemistry to analyze chemical structure, most notably of conjugated systems. UV radiation is often used in visible spectrophotometry to determine the presence of fluorescence in a given sample.


[edit] Analyzing minerals

A collection of mineral samples brilliantly fluorescing at various wavelengths as seen while being irradiated by UV light.

Ultraviolet lamps are also used in analyzing minerals and gems, and in other detective work including authentication of various collectibles. Materials may look the same under visible light but fluoresce to different degrees under ultraviolet light, or may fluoresce differently under short-wave versus long-wave ultraviolet.


[edit] Chemical markers
UV fluorescent dyes are used in many applications (for example, biochemistry and forensics). The Green Fluorescent Protein (GFP) is often used in genetics as a marker. Many substances, such as proteins, have significant light absorption bands in the ultraviolet that are of use and interest in biochemistry and related fields. UV-capable spectrophotometers are common in such laboratories.


[edit] Photochemotherapy
Exposure to UVA light while the skin is hyper-photosensitized by psoralens is an effective treatment for psoriasis, called PUVA. Because psoralens can potentially cause liver damage, PUVA may be used only a limited number of times over a patient's lifetime.


[edit] Phototherapy

Exposure to UVB light, particularly the 310 nm narrowband UVB range, is an effective long-term treatment for many skin conditions, such as psoriasis, vitiligo, and eczema.[21] UVB phototherapy does not require additional medications or topical preparations for therapeutic benefit; only the light exposure is needed. However, phototherapy can be effective when used in conjunction with certain topical treatments such as anthralin, coal tar, and vitamin A and D derivatives, or systemic treatments such as methotrexate and Soriatane.[22]

Typical treatment regimens involve short exposures to UVB rays three to five times a week at a hospital or clinic; for best results, up to 30 or more sessions may be required.

Side effects may include itching and redness of the skin due to UVB exposure, and possibly sunburn, if patients do not minimize exposure to natural UV rays during treatment days.


[edit] Photolithography
Ultraviolet radiation is used for very fine resolution photolithography, a procedure in which a chemical called a photoresist is exposed to UV radiation that has passed through a mask. The exposure causes chemical reactions to occur in the photoresist, and after development (a step that removes either the exposed or the unexposed photoresist), a geometric pattern determined by the mask remains on the sample. Further steps may then be taken to "etch" away parts of the sample where no photoresist remains.
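
The exposure-and-development logic can be sketched as a toy Python model, treating a one-dimensional strip of resist as a list of cells (real photolithography is, of course, an optical and chemical process, and the mask pattern here is made up):

    mask = [1, 1, 0, 0, 1, 0, 1, 1]       # 1 = opaque, 0 = lets UV through
    exposed = [1 - m for m in mask]       # resist under clear areas is exposed

    # Positive resist: exposed regions are removed during development.
    positive_remaining = [0 if e else 1 for e in exposed]   # mirrors the mask
    # Negative resist: exposed regions harden and remain.
    negative_remaining = exposed[:]                         # inverse of the mask

    print("positive resist:", positive_remaining)
    print("negative resist:", negative_remaining)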

UV radiation is used extensively in the electronics industry because photolithography is used in the manufacture of semiconductors, integrated circuit components[23] and printed circuit boards.


[edit] Checking electrical insulation
A new application of UV is the detection of corona discharge (often simply called "corona") on electrical apparatus. Degradation of the insulation of electrical apparatus, or pollution, causes corona, in which a strong electric field ionizes the air and excites nitrogen molecules, causing the emission of ultraviolet radiation. The corona degrades the insulation level of the apparatus. Corona also produces ozone and, to a lesser extent, nitrogen oxides, which may subsequently react with water in the air to form nitrous acid and nitric acid vapour in the surrounding air.[24]