Wednesday, June 10, 2009

Nanotechnology

Nanotechnology, often shortened to "nanotech", is the study of the control of matter on an atomic and molecular scale. Generally, nanotechnology deals with structures 100 nanometers or smaller, and involves developing materials or devices within that size range. Nanotechnology is very diverse, ranging from novel extensions of conventional device physics, to completely new approaches based upon molecular self-assembly, to developing new materials with dimensions on the nanoscale, even to speculation on whether we can directly control matter on the atomic scale.

There has been much debate on the future implications of nanotechnology. Nanotechnology has the potential to create many new materials and devices with wide-ranging applications, such as in medicine, electronics, and energy production. On the other hand, nanotechnology raises many of the same issues as any new technology, including concerns about the toxicity and environmental impact of nanomaterials [1], their potential effects on global economics, and speculation about various doomsday scenarios. These concerns have led to a debate among advocacy groups and governments on whether special regulation of nanotechnology is warranted.

The Meaning of Nanotechnology

When K. Eric Drexler popularized the word 'nanotechnology' in the 1980s, he was talking about building machines on the scale of molecules, a few nanometers wide—motors, robot arms, and even whole computers, far smaller than a cell. Drexler spent the next ten years describing and analyzing these incredible devices, and responding to accusations of science fiction. Meanwhile, mundane technology was developing the ability to build simple structures on a molecular scale. As nanotechnology became an accepted concept, the meaning of the word shifted to encompass the simpler kinds of nanometer-scale technology. The U.S. National Nanotechnology Initiative was created to fund this kind of nanotech: its definition includes anything smaller than 100 nanometers with novel properties.

Much of the work being done today that carries the name 'nanotechnology' is not nanotechnology in the original meaning of the word. Nanotechnology, in its traditional sense, means building things from the bottom up, with atomic precision. This theoretical capability was envisioned as early as 1959 by the renowned physicist Richard Feynman.

I want to build a billion tiny factories, models of each other, which are manufacturing simultaneously. . . The principles of physics, as far as I can see, do not speak against the possibility of maneuvering things atom by atom. It is not an attempt to violate any laws; it is something, in principle, that can be done; but in practice, it has not been done because we are too big. — Richard Feynman, Nobel Prize winner in physics

Based on Feynman's vision of miniature factories using nanomachines to build complex products, advanced nanotechnology (sometimes referred to as molecular manufacturing) will make use of positionally-controlled mechanochemistry guided by molecular machine systems. Formulating a roadmap for development of this kind of nanotechnology is now an objective of a broadly based technology roadmap project led by Battelle (the manager of several U.S. National Laboratories) and the Foresight Nanotech Institute.

Once this envisioned molecular machinery is created, it will bring about a manufacturing revolution, probably causing severe disruption. It also has serious economic, social, environmental, and military implications.

Four Generations

Mihail (Mike) Roco of the U.S. National Nanotechnology Initiative has described four generations of nanotechnology development. The current era, as Roco depicts it, is that of passive nanostructures, materials designed to perform one task. The second phase, which we are just entering, introduces active nanostructures for multitasking; for example, actuators, drug delivery devices, and sensors. The third generation is expected to begin emerging around 2010 and will feature nanosystems with thousands of interacting components. A few years after that, the first integrated nanosystems, functioning (according to Roco) much like a mammalian cell with hierarchical systems within systems, are expected to be developed.

Some experts may still insist that nanotechnology can refer to measurement or visualization at the scale of 1-100 nanometers, but a consensus seems to be forming around the idea (put forward by the NNI's Mike Roco) that control and restructuring of matter at the nanoscale is a necessary element. CRN's definition is a bit more precise than that, but as work progresses through the four generations of nanotechnology leading up to molecular nanosystems, which will include molecular manufacturing, we think it will become increasingly obvious that "engineering of functional systems at the molecular scale" is what nanotech is really all about.

Conflicting Definitions

Unfortunately, conflicting definitions of nanotechnology and blurry distinctions between significantly different fields have complicated the effort to understand the differences and develop sensible, effective policy.

The risks of today's nanoscale technologies (nanoparticle toxicity, etc.) cannot be treated the same as the risks of longer-term molecular manufacturing (economic disruption, unstable arms race, etc.). It is a mistake to put them together in one basket for policy consideration—each is important to address, but they pose different problems and will require different solutions. As used today, the term nanotechnology usually refers to a broad collection of mostly disconnected fields. Essentially, anything sufficiently small and interesting can be called nanotechnology. Much of it is harmless. For the rest, much of the harm is of a familiar and limited kind. But as we will see, molecular manufacturing will bring unfamiliar risks and new classes of problems.

General-Purpose Technology

Nanotechnology is sometimes referred to as a general-purpose technology. That's because in its advanced form it will have significant impact on almost all industries and all areas of society. It will offer better built, longer lasting, cleaner, safer, and smarter products for the home, for communications, for medicine, for transportation, for agriculture, and for industry in general.

Imagine a medical device that travels through the human body to seek out and destroy small clusters of cancerous cells before they can spread. Or a box no larger than a sugar cube that contains the entire contents of the Library of Congress. Or materials much lighter than steel that possess ten times as much strength. — U.S. National Science Foundation

Dual-Use Technology

Like electricity or computers before it, nanotech will offer greatly improved efficiency in almost every facet of life. But as a general-purpose technology, it will be dual-use, meaning it will have many commercial uses and it also will have many military uses—making far more powerful weapons and tools of surveillance. Thus it represents not only wonderful benefits for humanity, but also grave risks.

A key insight about nanotechnology is that it offers not just better products, but a vastly improved manufacturing process. A computer can make copies of data files—essentially as many copies as you want at little or no cost. It may be only a matter of time until the building of products becomes as cheap as the copying of files. That's the real meaning of nanotechnology, and why it is sometimes seen as "the next industrial revolution."

My own judgment is that the nanotechnology revolution has the potential to change America on a scale equal to, if not greater than, the computer revolution. — U.S. Senator Ron Wyden (D-Ore.)

The power of nanotechnology can be encapsulated in an apparently simple device called a personal nanofactory that may sit on your countertop or desktop. Packed with miniature chemical processors, computing, and robotics, it will produce a wide range of items quickly, cleanly, and inexpensively, building products directly from blueprints.

Artist's conception of a personal nanofactory. Courtesy of John Burch, Lizard Fire Studios (3D Animation, Game Development).

Exponential Proliferation

Nanotechnology not only will allow making many high-quality products at very low cost, but it will allow making new nanofactories at the same low cost and at the same rapid speed. This unique (outside of biology, that is) ability to reproduce its own means of production is why nanotech is said to be an exponential technology. It represents a manufacturing system that will be able to make more manufacturing systems—factories that can build factories—rapidly, cheaply, and cleanly. The means of production will be able to reproduce exponentially, so in just a few weeks a few nanofactories conceivably could become billions. It is a revolutionary, transformative, powerful, and potentially very dangerous—or beneficial—technology.
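To see why "a few weeks" is arithmetically plausible, suppose, purely as an illustration (the replication rate below is assumed, not a figure from any study), that each nanofactory can build one copy of itself per day. The population then doubles daily:

$N(t) = N_0 \cdot 2^{t} \quad\Rightarrow\quad N(30) = 1 \cdot 2^{30} \approx 10^9$

so under that assumption a single nanofactory could become roughly a billion nanofactories in about a month of daily doublings.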

How soon will all this come about? Conservative estimates usually say 20 to 30 years from now, or even much later than that. However, CRN is concerned that it may occur sooner, quite possibly within the next decade. This is because of the rapid progress being made in enabling technologies, such as optics, nanolithography, mechanochemistry and 3D prototyping. If it does arrive that soon, we may not be adequately prepared, and the consequences could be severe.

We believe it's not too early to begin asking some tough questions and facing the issues:
* Who will own the technology?
* Will it be heavily restricted, or widely available?
* What will it do to the gap between rich and poor?
* How can dangerous weapons be controlled, and perilous arms races be prevented?

Many of these questions were first raised over a decade ago, and have not yet been answered. If the questions are not answered with deliberation, answers will evolve independently and will take us by surprise; the surprise is likely to be unpleasant.

It is difficult to say for sure how soon this technology will mature, partly because it's possible (especially in countries that do not have open societies) that clandestine military or industrial development programs have been going on for years without our knowledge.

We cannot say with certainty that full-scale nanotechnology will not be developed within the next ten years, or even five years. It may take longer than that, but prudence—and possibly our survival—demands that we prepare now for the earliest plausible development scenario.

Thursday, August 14, 2008

Characteristics of embedded systems

Soekris net4801, an embedded system targeted at network applications.

1. Embedded systems are designed to do some specific task, rather than be a general-purpose computer for multiple tasks. Some also have real-time performance constraints that must be met, for reasons such as safety and usability; others may have low or no performance requirements, allowing the system hardware to be simplified to reduce costs.
2. Embedded systems are not always standalone devices. Many embedded systems consist of small, computerized parts within a larger device that serves a more general purpose. For example, the Gibson Robot Guitar features an embedded system for tuning the strings, but the overall purpose of the Robot Guitar is, of course, to play music.[2] Similarly, an embedded system in an automobile provides a specific function as a subsystem of the car itself.
3. The software written for embedded systems is often called firmware, and is usually stored in read-only memory or flash memory chips rather than a disk drive. It often runs with limited hardware resources: little memory, and a small keyboard and screen, or none at all.

User interfaces

Embedded systems range from no user interface at all — dedicated only to one task — to complex graphical user interfaces that resemble modern computer desktop operating systems.

Simple systems

Simple embedded devices use buttons, LEDs, and small character- or digit-only displays, often with a simple menu system.

More complex systems

A full graphical screen, with touch sensing or screen-edge buttons, provides flexibility while minimising the space used: the meaning of the buttons can change with the screen, and selection involves the natural behavior of pointing at what is desired.

Handheld systems often have a screen with a "joystick button" for a pointing device.

The rise of the World Wide Web has given embedded designers another quite different option: providing a web page interface over a network connection. This avoids the cost of a sophisticated display, yet provides complex input and display capabilities when needed, on another computer. This is successful for remote, permanently installed equipment such as Pan-Tilt-Zoom cameras and network routers.
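As a rough illustration of this approach, the sketch below serves a single fixed status page over TCP. It assumes a POSIX sockets environment (such as the Linux or NetBSD systems discussed later); the port number and page text are arbitrary, and error handling is omitted for brevity.

#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void) {
    int srv = socket(AF_INET, SOCK_STREAM, 0);       /* plain TCP listener */
    struct sockaddr_in addr = {0};
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(8080);              /* arbitrary port for the sketch */
    bind(srv, (struct sockaddr *)&addr, sizeof addr);
    listen(srv, 4);
    for (;;) {
        int c = accept(srv, NULL, NULL);
        if (c < 0) continue;
        const char *page =
            "HTTP/1.0 200 OK\r\nContent-Type: text/html\r\n\r\n"
            "<html><body><h1>Device status</h1><p>OK</p></body></html>";
        write(c, page, strlen(page));                /* short writes ignored in this sketch */
        close(c);
    }
}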

CPU platforms

Embedded processors can be broken into two broad categories: ordinary microprocessors (μP) and microcontrollers (μC), which have many more peripherals on chip, reducing cost and size. In contrast to the personal computer and server markets, a fairly large number of basic CPU architectures are used; there are Von Neumann as well as various degrees of Harvard architectures, RISC as well as non-RISC and VLIW; word lengths vary from 4 bits to 64 bits and beyond (mainly in DSP processors), although the most typical remain 8/16-bit. Most architectures come in a large number of different variants and shapes, many of which are also manufactured by several different companies.

A long but still not exhaustive list of common architectures is: 65816, 65C02, 68HC08, 68HC11, 68k, 8051, ARM, AVR, AVR32, Blackfin, C167, Coldfire, COP8, eZ8, eZ80, FR-V, H8, HT48, M16C, M32C, MIPS, MSP430, PIC, PowerPC, R8C, SHARC, ST6, SuperH, TLCS-47, TLCS-870, TLCS-900, Tricore, V850, x86, XE8000, Z80, etc.

Ready-made computer boards

PC/104 and PC/104+ are examples of ready-made computer boards intended for small, low-volume embedded and ruggedized systems. These often run DOS, Linux, NetBSD, or an embedded real-time operating system such as MicroC/OS-II, QNX or VxWorks.

In certain applications, particularly where small size is not a primary concern, the components used may be compatible with those used in general-purpose computers. Boards such as the VIA EPIA range help to bridge the gap by being PC-compatible but highly integrated, physically smaller, or having other attributes that make them attractive to embedded engineers. The advantage of this approach is that low-cost commodity components may be used along with the same software development tools used for general software development. Systems built in this way are still regarded as embedded since they are integrated into larger devices and fulfill a single role. Examples of devices that may adopt this approach are ATMs and arcade machines.

ASIC and FPGA solutions

A common configuration for very-high-volume embedded systems is the system on a chip (SoC), an application-specific integrated circuit (ASIC), for which the CPU core was purchased and added as part of the chip design. A related scheme is to use a field-programmable gate array (FPGA), and program it with all the logic, including the CPU.

Peripherals

Embedded systems communicate with the outside world via peripherals, such as:

* Serial communication interfaces (SCI): RS-232, RS-422, RS-485, etc.
* Synchronous serial communication interfaces: I2C, JTAG, SPI, SSC and ESSI
* Universal Serial Bus (USB)
* Networks: Ethernet, Controller Area Network, LonWorks, etc.
* Timers: PLL(s), capture/compare and time processing units
* Discrete I/O, also known as General Purpose Input/Output (GPIO); a register-level sketch follows this list
* Analog-to-digital and digital-to-analog converters (ADC/DAC)
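To make the GPIO entry above concrete, here is a minimal register-level sketch in C. The register addresses, bit position, and names are hypothetical, invented for illustration; on real hardware they come from the microcontroller's datasheet.

#include <stdint.h>

/* Hypothetical memory-mapped GPIO registers (addresses are invented). */
#define GPIO_DIR (*(volatile uint32_t *)0x40020000u)  /* direction register: 1 = output */
#define GPIO_OUT (*(volatile uint32_t *)0x40020004u)  /* output data register */
#define LED_PIN  (1u << 3)                            /* assume an LED on pin 3 */

void led_init(void)   { GPIO_DIR |= LED_PIN; }  /* configure the pin as an output */
void led_toggle(void) { GPIO_OUT ^= LED_PIN; }  /* invert the pin's output level */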

Tools

As with other software, embedded system designers use compilers, assemblers, and debuggers to develop embedded system software. However, they may also use some more specific tools:

* In-circuit debuggers or emulators (see next section).
* Utilities to add a checksum or CRC to a program, so the embedded system can check whether the program is valid (a sketch follows this list).
* For systems using digital signal processing, developers may use a math workbench such as Scilab / Scicos, MATLAB / Simulink, MathCad, or Mathematica to simulate the mathematics. They might also use libraries for both the host and the target that eliminate the need to develop DSP routines from scratch, as in DSPnano RTOS and the Unison Operating System.
* Custom compilers and linkers may be used to improve optimisation for the particular hardware.
* An embedded system may have its own special language or design tool, or add enhancements to an existing language such as Forth or Basic.
* Another alternative is to add a Real-time operating system or Embedded operating system, which may have DSP capabilities like DSPnano RTOS.
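As a concrete example of the checksum utilities mentioned above, the sketch below computes a standard CRC-32 (reflected, polynomial 0xEDB88320) over a buffer standing in for a firmware image; a boot loader could compare the result against a value stored alongside the image. The sample bytes are placeholders.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Bitwise CRC-32, as used by utilities that append a checksum to firmware. */
uint32_t crc32(const uint8_t *data, size_t len) {
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
    }
    return ~crc;
}

int main(void) {
    const uint8_t image[] = {0x12, 0x34, 0x56, 0x78};  /* placeholder firmware bytes */
    printf("image CRC-32: 0x%08X\n", (unsigned)crc32(image, sizeof image));
    return 0;
}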

Software tools can come from several sources:

* Software companies that specialize in the embedded market
* Ported from the GNU software development tools
* Sometimes, development tools for a personal computer can be used if the embedded processor is a close relative to a common PC processor

As the complexity of embedded systems grows, higher level tools and operating systems are migrating into machinery where it makes sense. For example, cellphones, personal digital assistants and other consumer computers often need significant software that is purchased or provided by a person other than the manufacturer of the electronics. In these systems, an open programming environment such as Linux, NetBSD, OSGi or Embedded Java is required so that the third-party software provider can sell to a large market.

Debugging

Embedded debugging may be performed at different levels, depending on the facilities available. From simplest to most sophisticated, the approaches can be roughly grouped into the following areas:

* Interactive resident debugging, using the simple shell provided by the embedded operating system (e.g. Forth and Basic)
* External debugging using logging or serial port output to trace operation, using either a monitor in flash or a debug server like the Remedy Debugger, which even works for heterogeneous multicore systems.
* An in-circuit debugger (ICD), a hardware device that connects to the microprocessor via a JTAG or NEXUS interface. This allows the operation of the microprocessor to be controlled externally, but is typically restricted to specific debugging capabilities in the processor.
* An in-circuit emulator replaces the microprocessor with a simulated equivalent, providing full control over all aspects of the microprocessor.
* A complete emulator provides a simulation of all aspects of the hardware, allowing all of it to be controlled and modified, and allowing debugging on a normal PC.

Unless restricted to external debugging, the programmer can typically load and run software through the tools, view the code running in the processor, and start or stop its operation. The view of the code may be as assembly code or source-code.

Because an embedded system is often composed of a wide variety of elements, the debugging strategy may vary. For instance, debugging a software- (and microprocessor-) centric embedded system is different from debugging an embedded system where most of the processing is performed by peripherals (DSP, FPGA, co-processor). An increasing number of embedded systems today use more than one processor core. A common problem with multi-core development is the proper synchronization of software execution. In such a case, the embedded system designer may wish to check the data traffic on the busses between the processor cores, which requires very low-level debugging, at signal/bus level, with a logic analyzer, for instance.

Reliability

Embedded systems often reside in machines that are expected to run continuously for years without errors, and in some cases to recover by themselves if an error occurs. Therefore, the software is usually developed and tested more carefully than that for personal computers, and unreliable mechanical moving parts such as disk drives, switches or buttons are avoided.

Specific reliability issues may include:

1. The system cannot safely be shut down for repair, or it is too inaccessible to repair. Examples include space systems, undersea cables, navigational beacons, bore-hole systems, and automobiles.
2. The system must be kept running for safety reasons. "Limp modes" are less tolerable. Often backups are selected by an operator. Examples include aircraft navigation, reactor control systems, safety-critical chemical factory controls, train signals, engines on single-engine aircraft.
3. The system will lose large amounts of money when shut down: Telephone switches, factory controls, bridge and elevator controls, funds transfer and market making, automated sales and service.

A variety of techniques are used, sometimes in combination, to recover from errors, both software bugs such as memory leaks and soft errors in the hardware:

* A watchdog timer that resets the computer unless the software periodically notifies the watchdog (see the sketch after this list)
* subsystems with redundant spares that can be switched over to
* software "limp modes" that provide partial function
* Immunity Aware Programming
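A minimal sketch of the watchdog pattern from the list above. The register address and reload value are hypothetical (real values come from the chip's reference manual), and the two task functions are stubs. If the loop ever hangs, the reloads stop and the hardware resets the system.

#include <stdint.h>

#define WDT_RELOAD (*(volatile uint32_t *)0x40003000u)  /* assumed watchdog reload register */
#define WDT_KICK   0xAAAAu                              /* assumed magic reload value */

static void read_sensors(void)   { /* stub: poll inputs here */ }
static void update_outputs(void) { /* stub: drive actuators here */ }

void main_loop(void) {
    for (;;) {
        read_sensors();
        update_outputs();
        WDT_RELOAD = WDT_KICK;  /* "kick" the watchdog: prove the loop is still alive */
    }
}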

High vs. Low Volume

For high volume systems such as portable music players or mobile phones, minimizing cost is usually the primary design consideration. Engineers typically select hardware that is just “good enough” to implement the necessary functions.

For low-volume or prototype embedded systems, general purpose computers may be adapted by limiting the programs or by replacing the operating system with a real-time operating system.

Embedded software architectures

There are several different types of software architecture in common use.

Simple control loop

In this design, the software simply has a loop. The loop calls subroutines, each of which manages a part of the hardware or software.
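A minimal sketch of this architecture, with illustrative stub subroutines:

static void poll_keypad(void)     { /* stub: read and debounce buttons */ }
static void run_control_law(void) { /* stub: compute the next output values */ }
static void update_display(void)  { /* stub: refresh the screen or LEDs */ }

int main(void) {
    for (;;) {                /* this single loop is the entire architecture */
        poll_keypad();        /* each call manages one part of the system */
        run_control_law();
        update_display();
    }
}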

Interrupt-controlled system

Some embedded systems are predominantly interrupt controlled. This means that tasks performed by the system are triggered by different kinds of events. An interrupt could be generated, for example, by a timer at a predefined frequency, or by a serial port controller receiving a byte.

These kinds of systems are used if event handlers need low latency and the event handlers are short and simple.

Usually these kinds of systems also run a simple task in a main loop, but this task is not very sensitive to unexpected delays.

Sometimes the interrupt handler will add longer tasks to a queue structure. Later, after the interrupt handler has finished, these tasks are executed by the main loop. This method brings the system close to a multitasking kernel with discrete processes.
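A sketch of that interrupt-plus-queue pattern: the ISR pushes received bytes into a ring buffer and the main loop drains it later. How the handler is attached to the serial-port interrupt is platform-specific; the sketch also assumes that aligned index updates are atomic, which holds on typical microcontrollers.

#include <stdint.h>

#define QLEN 16u
static volatile uint8_t queue[QLEN];
static volatile unsigned head, tail;    /* head: written by the ISR; tail: by the main loop */

void uart_rx_isr(uint8_t byte) {        /* assumed wired to the serial-port interrupt */
    unsigned next = (head + 1u) % QLEN;
    if (next != tail) {                 /* drop the byte if the queue is full */
        queue[head] = byte;
        head = next;
    }
}

void main_loop(void) {
    for (;;) {
        while (tail != head) {          /* drain the work queued by the ISR */
            uint8_t byte = queue[tail];
            tail = (tail + 1u) % QLEN;
            (void)byte;                 /* ... longer processing of the byte goes here ... */
        }
        /* ... the delay-tolerant main-loop task runs here ... */
    }
}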

Cooperative multitasking

A nonpreemptive multitasking system is very similar to the simple control loop scheme, except that the loop is hidden in an API. The programmer defines a series of tasks, and each task gets its own environment to "run" in. When a task is idle, it calls an idle routine (usually called "pause", "wait", "yield", or "nop", which stands for "no operation").

The advantages and disadvantages are very similar to the control loop, except that adding new software is easier, by simply writing a new task, or adding to the queue-interpreter.
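A sketch of two cooperating tasks, assuming a hypothetical task_yield() call standing in for whatever "pause"/"wait"/"yield" routine the actual API provides; the hardware-facing functions are stubs.

static void task_yield(void)       { /* stub: a real scheduler switches tasks here */ }
static int  sensor_ready(void)     { return 0; /* stub: poll the hardware here */ }
static void log_sample(void)       { /* stub: record the reading */ }
static void blink_status_led(void) { /* stub: toggle an LED */ }

void sensor_task(void) {
    for (;;) {
        while (!sensor_ready())
            task_yield();          /* idle: voluntarily hand the CPU to other tasks */
        log_sample();
    }
}

void status_task(void) {
    for (;;) {
        blink_status_led();
        task_yield();              /* done for now; yield until the next turn */
    }
}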

Preemptive multitasking or multi-threading

In this type of system, a low-level piece of code switches between tasks or threads based on a timer (connected to an interrupt). This is the level at which the system is generally considered to have an "operating system" kernel. Depending on how much functionality is required, it introduces more or less of the complexities of managing multiple tasks running conceptually in parallel.

Since any task can potentially damage the data of another task (except in larger systems using an MMU), programs must be carefully designed and tested, and access to shared data must be controlled by some synchronization strategy, such as message queues, semaphores or a non-blocking synchronization scheme.
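For instance, one of the synchronization strategies named above might look like the following sketch, which uses a POSIX mutex as a stand-in for whatever semaphore primitive the RTOS at hand provides (the function names and the setpoint variable are illustrative):

#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int shared_setpoint;        /* written by one task, read by another */

void set_setpoint(int value) {
    pthread_mutex_lock(&lock);     /* no other task may touch the data while held */
    shared_setpoint = value;
    pthread_mutex_unlock(&lock);
}

int get_setpoint(void) {
    pthread_mutex_lock(&lock);
    int value = shared_setpoint;
    pthread_mutex_unlock(&lock);
    return value;
}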

Because of these complexities, it is common for organizations to buy a real-time operating system, allowing the application programmers to concentrate on device functionality rather than operating system services, at least for large systems; smaller systems often cannot afford the overhead associated with a generic real time system, due to limitations regarding memory size, performance, and/or battery life.

Microkernels and exokernels

A microkernel is a logical step up from a real-time OS. The usual arrangement is that the operating system kernel allocates memory and switches the CPU to different threads of execution. User mode processes implement major functions such as file systems, network interfaces, etc.

In general, microkernels succeed when the task switching and intertask communication is fast, and fail when they are slow.

Exokernels communicate efficiently by normal subroutine calls. The hardware and all the software in the system are available to, and extensible by, application programmers.

Monolithic kernels

In this case, a relatively large kernel with sophisticated capabilities is adapted to suit an embedded environment. This gives programmers an environment similar to a desktop operating system like Linux or Microsoft Windows, and is therefore very productive for development; on the downside, it requires considerably more hardware resources, is often more expensive, and, because of the complexity of these kernels, can be less predictable and reliable.

Common examples of embedded monolithic kernels are Embedded Linux and Windows CE.

Despite the increased cost in hardware, this type of embedded system is increasing in popularity, especially on more powerful embedded devices such as wireless routers and GPS navigation systems. Here are some of the reasons:

* Ports to common embedded chip sets are available.
* They permit re-use of publicly available code for device drivers, web servers, firewalls, and other components.
* Development systems can start out with broad feature-sets, and then the distribution can be configured to exclude unneeded functionality, and save the expense of the memory that it would consume.
* Many engineers believe that running application code in user mode is more reliable, easier to debug and that therefore the development process is easier and the code more portable.
* Many embedded systems lack the tight real time requirements of a control system. A system such as Embedded Linux has fast enough response for many applications.
* Features requiring faster response than can be guaranteed can often be placed in hardware.
* Many RTOS systems have a per-unit cost. When used on a product that is or will become a commodity, that cost is significant.

Exotic custom operating systems

A small fraction of embedded systems require safe, timely, reliable, or efficient behavior unobtainable with any of the above architectures. In this case an organization builds a system to suit. In some cases, the system may be partitioned into a "mechanism controller" using special techniques, and a "display controller" with a conventional operating system. A communication system passes data between the two.

Additional software components

In addition to the core operating system, many embedded systems have additional upper-layer software components. These components consist of networking protocol stacks like TCP/IP, FTP, HTTP, and HTTPS, as well as storage capabilities like FAT and flash memory management systems. If the embedded device has audio and video capabilities, then the appropriate drivers and codecs will be present in the system. In the case of monolithic kernels, many of these software layers are included. In the RTOS category, the availability of additional software components depends upon the commercial offering.
EMBEDDED SYSTEM





An embedded system is a special-purpose computer system designed to perform one or a few dedicated functions,[1] often with real-time computing constraints. It is usually embedded as part of a complete device including hardware and mechanical parts. In contrast, a general-purpose computer, such as a personal computer, can do many different tasks depending on programming. Embedded systems control many of the common devices in use today.

Since the embedded system is dedicated to specific tasks, design engineers can optimize it, reducing the size and cost of the product, or increasing the reliability and performance. Some embedded systems are mass-produced, benefiting from economies of scale.

Physically, embedded systems range from portable devices such as digital watches and MP3 players, to large stationary installations like traffic lights, factory controllers, or the systems controlling nuclear power plants. Complexity varies from low, with a single microcontroller chip, to very high with multiple units, peripherals and networks mounted inside a large chassis or enclosure.

In general, "embedded system" is not an exactly defined term, as many systems have some element of programmability. For example, Handheld computers share some elements with embedded systems — such as the operating systems and microprocessors which power them — but are not truly embedded systems, because they allow different applications to be loaded and peripherals to be connected.
Advances in integrated circuits






The integrated circuit from an Intel 8742, an 8-bit microcontroller that includes a CPU running at 12 MHz, 128 bytes of RAM, 2048 bytes of EPROM, and I/O in the same chip.

Among the most advanced integrated circuits are the microprocessors or "cores", which control everything from computers to cellular phones to digital microwave ovens. Digital memory chips and ASICs are examples of other families of integrated circuits that are important to the modern information society. While the cost of designing and developing a complex integrated circuit is quite high, when spread across typically millions of production units the individual IC cost is minimized. The performance of ICs is high because the small size allows short traces, which in turn allows low-power logic (such as CMOS) to be used at fast switching speeds.

ICs have consistently migrated to smaller feature sizes over the years, allowing more circuitry to be packed on each chip. This increased capacity per unit area can be used to decrease cost and/or increase functionality—see Moore's law, which, in its modern interpretation, states that the number of transistors in an integrated circuit doubles every two years. In general, as the feature size shrinks, almost everything improves—the cost per unit and the switching power consumption go down, and the speed goes up. However, ICs with nanometer-scale devices are not without their problems, principal among which is leakage current (see subthreshold leakage for a discussion of this), although these problems are not insurmountable and will likely be solved or at least ameliorated by the introduction of high-k dielectrics. Since these speed and power consumption gains are apparent to the end user, there is fierce competition among the manufacturers to use finer geometries. This process, and the expected progress over the next few years, is well described by the International Technology Roadmap for Semiconductors (ITRS).

Popularity of ICs

Main article: Microchip revolution

Only a half century after their development was initiated, integrated circuits have become ubiquitous. Computers, cellular phones, and other digital appliances are now inextricable parts of the structure of modern societies. That is, modern computing, communications, manufacturing and transport systems, including the Internet, all depend on the existence of integrated circuits. Indeed, many scholars believe that the digital revolution—brought about by the microchip revolution—was one of the most significant occurrences in the history of humankind.

Classification
A CMOS 4000 IC in a DIP

Integrated circuits can be classified into analog, digital and mixed signal (both analog and digital on the same chip).

Digital integrated circuits can contain anything from a few thousand to millions of logic gates, flip-flops, multiplexers, and other circuits in a few square millimeters. The small size of these circuits allows high speed, low power dissipation, and reduced manufacturing cost compared with board-level integration. These digital ICs, typically microprocessors, DSPs, and microcontrollers, work using binary mathematics to process "one" and "zero" signals.

Analog ICs, such as sensors, power management circuits, and operational amplifiers, work by processing continuous signals. They perform functions like amplification, active filtering, demodulation, and mixing. Analog ICs ease the burden on circuit designers by making expertly designed analog circuits available, instead of requiring each difficult analog circuit to be designed from scratch.

ICs can also combine analog and digital circuits on a single chip to create functions such as A/D converters and D/A converters. Such circuits offer smaller size and lower cost, but must carefully account for signal interference.

Manufacture

Fabrication

Main article: Semiconductor fabrication

Rendering of a small standard cell with three metal layers (dielectric has been removed). The sand-colored structures are metal interconnect, with the vertical pillars being contacts, typically plugs of tungsten. The reddish structures are polysilicon gates, and the solid at the bottom is the crystalline silicon bulk.

The semiconductors of the periodic table of the chemical elements were identified as the most likely materials for a solid-state replacement for the vacuum tube by researchers like William Shockley at Bell Laboratories starting in the 1930s. Starting with copper oxide, proceeding to germanium, then silicon, the materials were systematically studied in the 1940s and 1950s. Today, silicon monocrystals are the main substrate used for integrated circuits (ICs), although some III-V compounds of the periodic table such as gallium arsenide are used for specialized applications like LEDs, lasers, solar cells, and the highest-speed integrated circuits. It took decades to perfect methods of creating crystals without defects in the crystalline structure of the semiconducting material.

Semiconductor ICs are fabricated in a layer process which includes these key process steps:

* Imaging
* Deposition
* Etching

The main process steps are supplemented by doping, cleaning, and planarization steps.

Mono-crystal silicon wafers (or for special applications, silicon on sapphire or gallium arsenide wafers) are used as the substrate. Photolithography is used to mark different areas of the substrate to be doped or to have polysilicon, insulators or metal (typically aluminum) tracks deposited on them.

* Integrated circuits are composed of many overlapping layers, each defined by photolithography, and normally shown in different colors. Some layers mark where various dopants are diffused into the substrate (called diffusion layers), some define where additional ions are implanted (implant layers), some define the conductors (polysilicon or metal layers), and some define the connections between the conducting layers (via or contact layers). All components are constructed from a specific combination of these layers.

* In a self-aligned CMOS process, a transistor is formed wherever the gate layer (polysilicon or metal) crosses a diffusion layer.

* Resistive structures, meandering stripes of varying lengths, form the loads on the circuit. The ratio of the length of the resistive structure to its width, combined with its sheet resistivity, determines the resistance (a worked example follows this list).

* Capacitive structures, in form very much like the parallel conducting plates of a traditional electrical capacitor, are formed according to the area of the "plates", with insulating material between the plates. Owing to limitations in size, only very small capacitances can be created on an IC.

* More rarely, inductive structures can be built as tiny on-chip coils, or simulated by gyrators.
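As the worked example promised above (with illustrative numbers only): a resistive stripe 50 squares long, i.e. a length-to-width ratio $L/W = 50$, in a film with sheet resistivity $\rho_s = 100\,\Omega/\square$, gives

$R = \rho_s \cdot (L/W) = 100\,\Omega/\square \times 50 = 5\,\mathrm{k\Omega}.$

Note that only the ratio matters: doubling both the length and the width leaves the resistance unchanged.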

Since a CMOS device only draws current on the transition between logic states, CMOS devices consume much less current than bipolar devices.

A random access memory is the most regular type of integrated circuit; the highest density devices are thus memories; but even a microprocessor will have memory on the chip. (See the regular array structure at the bottom of the first image.) Although the structures are intricate – with widths which have been shrinking for decades – the layers remain much thinner than the device widths. The layers of material are fabricated much like a photographic process, although light waves in the visible spectrum cannot be used to "expose" a layer of material, as they would be too large for the features. Thus photons of higher frequencies (typically ultraviolet) are used to create the patterns for each layer. Because each feature is so small, electron microscopes are essential tools for a process engineer who might be debugging a fabrication process.

Each device is tested before packaging using automated test equipment (ATE), in a process known as wafer testing, or wafer probing. The wafer is then cut into rectangular blocks, each of which is called a die. Each good die (plural dice, dies, or die) is then connected into a package using aluminum (or gold) wires which are welded to pads, usually found around the edge of the die. After packaging, the devices go through final testing on the same or similar ATE used during wafer probing. Test cost can account for over 25% of the cost of fabrication on lower cost products, but can be negligible on low yielding, larger, and/or higher cost devices.

As of 2005, a fabrication facility (commonly known as a semiconductor fab) costs over a billion US dollars to construct[10], because much of the operation is automated. The most advanced processes employ the following techniques:

* The wafers are up to 300 mm in diameter (wider than a common dinner plate).
* Use of a 65 nanometer or smaller chip manufacturing process. Intel, IBM, NEC, and AMD[1] have introduced 65 nanometer processes for their CPU chips, and IBM and AMD are developing a 45 nm process using immersion lithography.
* Copper interconnects where copper wiring replaces aluminum for interconnects.
* Low-K dielectric insulators.
* Silicon on insulator (SOI)
* Strained silicon in a process used by IBM known as strained silicon directly on insulator (SSDOI)

Packaging

Main article: Integrated circuit packaging

The earliest integrated circuits were packaged in ceramic flat packs, which continued to be used by the military for their reliability and small size for many years. Commercial circuit packaging quickly moved to the dual in-line package (DIP), first in ceramic and later in plastic. In the 1980s, pin counts of VLSI circuits exceeded the practical limit for DIP packaging, leading to pin grid array (PGA) and leadless chip carrier (LCC) packages. Surface mount packaging appeared in the early 1980s and became popular in the late 1980s, using finer lead pitch with leads formed as either gull-wing or J-lead, as exemplified by the small-outline integrated circuit (SOIC), a carrier which occupies an area about 30-50% less than an equivalent DIP and is typically 70% thinner. This package has "gull wing" leads protruding from the two long sides and a lead spacing of 0.050 inches.

Small-outline integrated circuit (SOIC) and PLCC packages.

In the late 1990s, PQFP and TSOP packages became the most common for high pin count devices, though PGA packages are still often used for high-end microprocessors. Intel and AMD are currently transitioning from PGA packages on high-end microprocessors to land grid array (LGA) packages.

Ball grid array (BGA) packages have existed since the 1970s. Flip-chip Ball Grid Array packages, which allow for much higher pin count than other package types, were developed in the 1990s. In an FCBGA package the die is mounted upside-down (flipped) and connects to the package balls via a package substrate that is similar to a printed-circuit board rather than by wires. FCBGA packages allow an array of input-output signals (called Area-I/O) to be distributed over the entire die rather than being confined to the die periphery.

Traces out of the die, through the package, and into the printed circuit board have very different electrical properties, compared to on-chip signals. They require special design techniques and need much more electric power than signals confined to the chip itself.

When multiple dies are put in one package, it is called SiP, for System In Package. When multiple dies are combined on a small substrate, often ceramic, it's called an MCM, or Multi-Chip Module. The boundary between a big MCM and a small printed circuit board is sometimes fuzzy.

Other developments

In the 1980s, programmable integrated circuits were developed. These devices contain circuits whose logical function and connectivity can be programmed by the user, rather than being fixed by the integrated circuit manufacturer. This allows a single chip to be programmed to implement different LSI-type functions such as logic gates, adders, and registers. Current devices named FPGAs (Field Programmable Gate Arrays) can implement tens of thousands of LSI circuits in parallel and operate at up to 550 MHz.

The techniques perfected by the integrated circuits industry over the last three decades have been used to create microscopic machines, known as MEMS. These devices are used in a variety of commercial and military applications. Example commercial applications include DLP projectors, inkjet printers, and accelerometers used to deploy automobile airbags.

In the past, radios could not be fabricated in the same low-cost processes as microprocessors. But since 1998, a large number of radio chips have been developed using CMOS processes. Examples include Intel's DECT cordless phone, or Atheros's 802.11 card.

Future developments seem to follow the multi-microprocessor paradigm, already used by the Intel and AMD dual-core processors. Intel recently unveiled a prototype, "not for commercial sale" chip that bears a staggering 80 microprocessors. Each core is capable of handling its own task independently of the others. This is in response to the heat-versus-speed limit that is about to be reached using existing transistor technology. This design provides a new challenge to chip programming. X10 is a new open-source programming language designed to assist with this task. [11]

Silicon graffiti

Ever since ICs were created, some chip designers have used the silicon surface area for surreptitious, non-functional images or words. These are sometimes referred to as Chip Art, Silicon Art, Silicon Graffiti or Silicon Doodling. For an overview of this practice, see the article The Secret Art of Chip Graffiti, from the IEEE magazine Spectrum and the Silicon Zoo.

Key industrial and academic data

Notable ICs

* The 555 timer, a common multivibrator sub-circuit (used in many electronic timing circuits)
* The 741 operational amplifier
* 7400 series TTL logic building blocks
* 4000 series, the CMOS counterpart to the 7400 series
* Intel 4004, the world's first microprocessor
* The MOS Technology 6502 and Zilog Z80 microprocessors, used in many home computers

Manufacturers

A list of notable manufacturers; some operating, some defunct:

* Agere Systems (now part of LSI Logic; formerly part of Lucent, which was formerly part of AT&T)
* Agilent Technologies (formerly part of Hewlett-Packard, spun-off in 1999)
* Alcatel
* Altera
* AMD (Advanced Micro Devices; founded by ex-Fairchild employees)
* Analog Devices
* ATI Technologies (Array Technologies Incorporated; acquired parts of Tseng Labs in 1997; in 2006, became a wholly-owned subsidiary of AMD)
* Atmel (co-founded by ex-Intel employee)
* Broadcom
* Commodore Semiconductor Group (formerly MOS Technology)
* Cypress Semiconductor
* Fairchild Semiconductor (founded by ex-Shockley Semiconductor employees: the "Traitorous Eight")
* Freescale Semiconductor (formerly part of Motorola)
* Fujitsu
* Genesis Microchip
* GMT Microelectronics (formerly Commodore Semiconductor Group)
* Hitachi, Ltd.
* Horizon Semiconductors
* IBM (International Business Machines)
* Infineon Technologies (formerly part of Siemens)
* Integrated Device Technology
* Intel (founded by ex-Fairchild employees)
* Intersil (formerly Harris Semiconductor)
* Lattice Semiconductor
* Linear Technology
* LSI Logic (founded by ex-Fairchild employees)
* Maxim Integrated Products
* Marvell Technology Group
* Microchip Technology (manufacturer of the PIC microcontrollers)
* MicroSystems International
* MOS Technology (founded by ex-Motorola employees)
* Mostek (founded by ex-Texas Instruments employees)
* National Semiconductor (aka "NatSemi"; founded by ex-Fairchild employees)
* Nordic Semiconductor (formerly known as Nordic VLSI)
* Nvidia (acquired IP of competitor 3dfx in 2000; 3dfx was co-founded by ex-Intel employee)
* NXP Semiconductors (formerly part of Philips)
* ON Semiconductor (formerly part of Motorola)
* Parallax Inc. (manufacturer of the BASIC Stamp and Propeller microcontrollers)
* PMC-Sierra (from the former Pacific Microelectronics Centre and Sierra Semiconductor, the latter co-founded by ex-NatSemi employee)
* Renesas Technology (joint venture of Hitachi and Mitsubishi Electric)
* Rohm
* Samsung Electronics (Semiconductor division)
* STMicroelectronics (formerly SGS Thomson)
* Texas Instruments
* Toshiba
* TSMC (Taiwan Semiconductor Manufacturing Company; semiconductor foundry)
* u-blox (Fabless GPS semiconductor provider)
* VIA Technologies (founded by ex-Intel employee) (part of Formosa Plastics Group)
* Volterra Semiconductor
* Xilinx (founded by ex-ZiLOG employee)
* ZiLOG (founded by ex-Intel employees)
INTEGRATED CIRCUITS

In electronics, an integrated circuit (also known as IC, microcircuit, microchip, silicon chip, or chip) is a miniaturized electronic circuit (consisting mainly of semiconductor devices, as well as passive components) that has been manufactured on the surface of a thin substrate of semiconductor material. Integrated circuits are used in almost all electronic equipment in use today and have revolutionized the world of electronics.

A hybrid integrated circuit is a miniaturized electronic circuit constructed of individual semiconductor devices, as well as passive components, bonded to a substrate or circuit board.

The remainder of this section is about monolithic integrated circuits.

Introduction

Integrated circuits were made possible by experimental discoveries which showed that semiconductor devices could perform the functions of vacuum tubes, and by mid-20th-century technology advancements in semiconductor device fabrication. The integration of large numbers of tiny transistors into a small chip was an enormous improvement over the manual assembly of circuits using discrete electronic components. The integrated circuit's mass production capability, reliability, and building-block approach to circuit design ensured the rapid adoption of standardized ICs in place of designs using discrete transistors.

There are two main advantages of ICs over discrete circuits: cost and performance. Cost is low because the chips, with all their components, are printed as a unit by photolithography and not constructed one transistor at a time. Performance is high since the components switch quickly and consume little power, because the components are small and close together. As of 2006, chip areas range from a few square mm to around 350 mm², with up to 1 million transistors per mm².

Invention

The birth of the IC

The integrated circuit was conceived by a radar scientist, Geoffrey W.A. Dummer (1909-2002), working for the Royal Radar Establishment of the British Ministry of Defence, who published the idea at the Symposium on Progress in Quality Electronic Components in Washington, D.C. on May 7, 1952.[1] He gave many public symposia to propagate his ideas.

Dummer unsuccessfully attempted to build such a circuit in 1956.

The integrated circuit was independently co-invented by Jack Kilby of Texas Instruments[2] and Robert Noyce of Fairchild Semiconductor[3] around the same time. Kilby recorded his initial ideas concerning the integrated circuit in July 1958 and successfully demonstrated the first working integrated circuit on September 12, 1958.[2] Kilby won the 2000 Nobel Prize in Physics for his part in the invention of the integrated circuit.[4] Robert Noyce came up with his own idea of the integrated circuit half a year later than Kilby. Noyce's chip solved many practical problems that the microchip developed by Kilby had not. Noyce's chip, made at Fairchild, was made of silicon, whereas Kilby's chip was made of germanium.

Early developments of the integrated circuit go back to 1949, when the German engineer Werner Jacobi (Siemens AG) filed a patent for an integrated-circuit-like semiconductor amplifying device[5] showing five transistors on a common substrate arranged in a 3-stage amplifier arrangement. Jacobi disclosed small and cheap hearing aids as typical industrial applications of his patent. A commercial use of his patent has not been reported.


A precursor idea to the IC was to create small ceramic squares (wafers), each one containing a single miniaturized component. Components could then be integrated and wired into a two- or three-dimensional compact grid. This idea, which looked very promising in 1957, was proposed to the US Army by Jack Kilby, and led to the short-lived Micromodule Program (similar to 1951's Project Tinkertoy).[6] However, as the project was gaining momentum, Kilby came up with a new, revolutionary design: the IC.

The aforementioned Noyce credited Kurt Lehovec of Sprague Electric for the principle of p-n junction isolation caused by the action of a biased p-n junction (the diode) as a key concept behind the IC.[7]

Precursor concepts also appeared among vacuum-tube variants such as the Loewe 3NF.

Generations

SSI, MSI, LSI

The first integrated circuits contained only a few transistors. Called "Small-Scale Integration" (SSI), these early devices used circuits containing transistors numbering in the tens.

SSI circuits were crucial to early aerospace projects, and vice-versa. Both the Minuteman missile and Apollo program needed lightweight digital computers for their inertial guidance systems; the Apollo guidance computer led and motivated the integrated-circuit technology, while the Minuteman missile forced it into mass-production.

These programs purchased almost all of the available integrated circuits from 1960 through 1963, and almost alone provided the demand that funded the production improvements that brought production costs down from $1000/circuit (in 1960 dollars) to merely $25/circuit (in 1963 dollars). Integrated circuits began to appear in consumer products at the turn of the decade, a typical application being FM inter-carrier sound processing in television receivers.

The next step in the development of integrated circuits, taken in the late 1960s, introduced devices which contained hundreds of transistors on each chip, called "Medium-Scale Integration" (MSI).

They were attractive economically because while they cost little more to produce than SSI devices, they allowed more complex systems to be produced using smaller circuit boards, less assembly work (because of fewer separate components), and a number of other advantages.

Further development, driven by the same economic factors, led to "Large-Scale Integration" (LSI) in the mid 1970s, with tens of thousands of transistors per chip.

Integrated circuits such as 1K-bit RAMs, calculator chips, and the first microprocessors, which began to be manufactured in moderate quantities in the early 1970s, had under 4000 transistors. True LSI circuits, approaching 10000 transistors, began to be produced around 1974, for computer main memories and second-generation microprocessors.

VLSI

Main article: Very-large-scale integration

Upper interconnect layers on an Intel 80486DX2 microprocessor die.

The final step in the development process, starting in the 1980s and continuing through the present, was "Very Large-Scale Integration" (VLSI). This could be said to start with hundreds of thousands of transistors in the early 1980s, and continues beyond several billion transistors as of 2007.

There was no single breakthrough that allowed this increase in complexity, though many factors helped. Manufacturing moved to smaller design rules and cleaner fabs, allowing chips with more transistors to be produced at adequate yield, as summarized by the International Technology Roadmap for Semiconductors (ITRS). Design tools improved enough to make it practical to finish these designs in a reasonable time. The more energy-efficient CMOS replaced NMOS and PMOS, avoiding a prohibitive increase in power consumption. Among other factors, better texts such as the landmark textbook by Mead and Conway helped schools educate more designers.

In 1986 the first one megabit RAM chips were introduced, which contained more than one million transistors. Microprocessor chips passed the million transistor mark in 1989 and the billion transistor mark in 2005[8]. The trend continues largely unabated, with chips introduced in 2007 containing tens of billions of memory transistors [9].

ULSI, WSI, SOC, 3D-IC

To reflect further growth of complexity, the term ULSI, which stands for "Ultra-Large Scale Integration", was proposed for chips with a complexity of more than 1 million transistors.

Wafer-scale integration (WSI) is a system of building very-large integrated circuits that uses an entire silicon wafer to produce a single "super-chip". Through a combination of large size and reduced packaging, WSI could lead to dramatically reduced costs for some systems, notably massively parallel supercomputers. The name is taken from the term Very-Large-Scale Integration, the current state of the art when WSI was being developed.

System-on-a-Chip (SoC or SOC) is an integrated circuit in which all the components needed for a computer or other system are included on a single chip. The design of such a device can be complex and costly, and building disparate components on a single piece of silicon may compromise the efficiency of some elements. However, these drawbacks are offset by lower manufacturing and assembly costs and by a greatly reduced power budget: because signals among the components are kept on-die, much less power is required (see Packaging, above).

Three Dimensional Integrated Circuit (3D-IC) has two or more layers of active electronic components that are integrated both vertically and horizontally into a single circuit. Communication between layers uses on-die signaling, so power consumption is much lower than in equivalent separate circuits. Judicious use of short vertical wires can substantially reduce overall wire length for faster operation.