Friday, December 3, 2010

ECG - Electrocardiogram

Electrocardiogram Introduction

The electrocardiogram (ECG or EKG) is a diagnostic tool that measures and records the electrical activity of the heart in exquisite detail. Interpretation of these details allows diagnosis of a wide range of heart conditions. These conditions can vary from minor to life threatening.

The term electrocardiogram was introduced by Willem Einthoven in 1893 at a meeting of the Dutch Medical Society. In 1924, Einthoven received the Nobel Prize for his life's work in developing the ECG.

The ECG has evolved over the years.

* The standard 12-lead ECG that is used throughout the world was introduced in 1942.

* It is called a 12-lead ECG because it examines the electrical activity of the heart from 12 points of view.

* This is necessary because no single point (or even 2 or 3 points of view) provides a complete picture of what is going on.

* To fully understand how an ECG reveals useful information about the condition of your heart requires a basic understanding of the anatomy (that is, the structure) and physiology (that is, the function) of the heart.



Unipolar vs. bipolar leads

There are two types of leads: unipolar and bipolar. Bipolar leads have one positive and one negative pole.[21] In a 12-lead ECG, the limb leads (I, II, and III) are bipolar. Unipolar leads also have two poles, since a voltage is always measured between two points; however, the negative pole is a composite pole (Wilson's central terminal) made up of signals from several other electrodes.[22] In a 12-lead ECG, all leads other than the limb leads are unipolar (aVR, aVL, aVF, V1, V2, V3, V4, V5, and V6).

Wilson's central terminal V_W is produced by connecting the RA, LA, and LL electrodes together through a simple resistive network to give an average potential across the body, which approximates the potential at infinity (i.e., zero):

V_W = \frac{1}{3}(RA+LA+LL).
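
To make the arithmetic concrete, here is a minimal Python sketch (not part of the original article) that computes Wilson's central terminal and a unipolar chest lead from hypothetical electrode potentials; all voltage values below are invented sample numbers.

# Illustrative sketch: Wilson's central terminal from the limb electrode potentials.
# The voltages are arbitrary sample numbers (in millivolts), not real ECG data.

def wilson_central_terminal(ra, la, ll):
    """V_W = (RA + LA + LL) / 3 -- the average of the three limb electrode potentials."""
    return (ra + la + ll) / 3.0

def unipolar_lead(exploring_electrode, ra, la, ll):
    """A unipolar (precordial) lead measures the exploring electrode against V_W."""
    return exploring_electrode - wilson_central_terminal(ra, la, ll)

# Hypothetical instantaneous potentials:
ra, la, ll = 0.10, 0.25, 0.60
v2_electrode = 1.20  # hypothetical chest (V2) electrode potential

print("V_W =", wilson_central_terminal(ra, la, ll))          # ~0.317
print("Lead V2 =", unipolar_lead(v2_electrode, ra, la, ll))  # ~0.883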

Augmented limb leads

Leads aVR, aVL, and aVF are the augmented limb leads (known collectively as the Goldberger leads, after their inventor, Dr. Emanuel Goldberger). They are derived from the same three electrodes as leads I, II, and III, but they view the heart from different angles (or vectors) because the negative pole of these leads is a modification of Wilson's central terminal. This effectively zeroes out the negative pole and allows the positive electrode to become the "exploring electrode". This works because of Einthoven's law, which states that I + (−II) + III = 0, and which can also be written I + III = II. The law is written with lead II negated (rather than as I + II + III = 0) because Einthoven reversed the polarity of lead II in Einthoven's triangle, possibly because he preferred to view upright QRS complexes. Wilson's central terminal paved the way for the development of the augmented limb leads aVR, aVL, and aVF and the precordial leads V1, V2, V3, V4, V5, and V6.

* Lead augmented vector right (aVR) has the positive electrode (white) on the right arm. The negative electrode is a combination of the left arm (black) electrode and the left leg (red) electrode, which "augments" the signal strength of the positive electrode on the right arm:

aVR = RA - \frac{1}{2} (LA + LL).

* Lead augmented vector left (aVL) has the positive (black) electrode on the left arm. The negative electrode is a combination of the right arm (white) electrode and the left leg (red) electrode, which "augments" the signal strength of the positive electrode on the left arm:

aVL = LA - \frac{1}{2} (RA + LL).

* Lead augmented vector foot (aVF) has the positive (red) electrode on the left leg. The negative electrode is a combination of the right arm (white) electrode and the left arm (black) electrode, which "augments" the signal of the positive electrode on the left leg:

aVF = LL - \frac{1}{2} (RA + LA).

The augmented limb leads aVR, aVL, and aVF are amplified in this way because the signal would be too small to be useful if the negative electrode were simply Wilson's central terminal. Together with leads I, II, and III, the augmented limb leads aVR, aVL, and aVF form the basis of the hexaxial reference system, which is used to calculate the heart's electrical axis in the frontal plane. The aVR, aVL, and aVF leads can also be expressed in terms of limb leads I and II:

aVR = -\frac{1}{2} (I + II).

aVL = I - \frac{1}{2} II.

aVF = II - \frac{1}{2} I.
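
These relationships lend themselves to a short numerical check. The following Python sketch (again using invented electrode potentials, not real data) computes the augmented leads both from the electrode potentials and from leads I and II, verifies Einthoven's law, and makes a rough frontal-plane axis estimate from leads I and aVF, which sit at 0 and +90 degrees in the hexaxial system.

import math

# Hypothetical instantaneous limb electrode potentials (sample values only, in mV).
ra, la, ll = 0.10, 0.25, 0.60

# Bipolar limb leads (Einthoven).
lead_i = la - ra
lead_ii = ll - ra
lead_iii = ll - la

# Augmented limb leads from the Goldberger terminals.
avr = ra - (la + ll) / 2.0
avl = la - (ra + ll) / 2.0
avf = ll - (ra + la) / 2.0

# The same leads expressed through leads I and II agree with the values above.
assert math.isclose(avr, -(lead_i + lead_ii) / 2.0, abs_tol=1e-12)
assert math.isclose(avl, lead_i - lead_ii / 2.0, abs_tol=1e-12)
assert math.isclose(avf, lead_ii - lead_i / 2.0, abs_tol=1e-12)

# Einthoven's law: I + III = II.
assert math.isclose(lead_i + lead_iii, lead_ii, abs_tol=1e-12)

# Rough frontal-plane axis estimate from leads I (0 degrees) and aVF (+90 degrees).
axis_degrees = math.degrees(math.atan2(avf, lead_i))
print(f"aVR={avr:+.3f}  aVL={avl:+.3f}  aVF={avf:+.3f}  axis ~ {axis_degrees:.0f} deg")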

SCADA

SCADA stands for supervisory control and data acquisition. It generally refers to industrial control systems: computer systems that monitor and control industrial, infrastructure, or facility-based processes, as described below:

* Industrial processes include those of manufacturing, production, power generation, fabrication, and refining, and may run in continuous, batch, repetitive, or discrete modes.
* Infrastructure processes may be public or private, and include water treatment and distribution, wastewater collection and treatment, oil and gas pipelines, electrical power transmission and distribution, wind farms, civil defense siren systems, and large communication systems.
* Facility processes occur in both public and private facilities, including buildings, airports, ships, and space stations; here SCADA systems monitor and control HVAC, access, and energy consumption.

SCADA systems have evolved through three generations, as follows:
First generation: "Monolithic"

In the first generation, computing was done by mainframe computers. Networks did not exist at the time SCADA was developed, so SCADA systems were independent systems with no connectivity to other systems. Wide area networks were later designed by RTU vendors to communicate with remote terminal units (RTUs); the communication protocols used were often proprietary at that time. First-generation systems achieved redundancy with a back-up mainframe connected at the bus level, which took over in the event of failure of the primary mainframe.
Second generation: "Distributed"

Processing was distributed across multiple stations connected through a LAN, and the stations shared information in real time. Each station was responsible for a particular task, making each station smaller and cheaper than the one used in the first generation. The network protocols were still mostly proprietary, which led to significant security problems for any SCADA system that received attention from a hacker. Since the protocols were proprietary, very few people beyond the developers and hackers knew enough to determine how secure a SCADA installation was. Since both parties had vested interests in keeping security issues quiet, the security of a SCADA installation was often badly overestimated, if it was considered at all.
Third generation: "Networked"

These are the current-generation SCADA systems, which use an open system architecture rather than a vendor-controlled, proprietary environment. They rely on open standards and protocols, which makes it possible to distribute functionality across a WAN rather than a LAN, and the open architecture makes it easier to connect third-party peripheral devices such as printers, disk drives, and tape drives. WAN protocols such as the Internet Protocol (IP) are used for communication between the master station and communications equipment. Because standard protocols are used and many networked SCADA systems are accessible from the Internet, these systems are potentially vulnerable to remote cyber-attacks. On the other hand, the use of standard protocols and security techniques means that standard security improvements are applicable to SCADA systems, assuming they receive timely maintenance and updates.
Trends in SCADA

There is a trend for PLC and HMI/SCADA software to be more "mix-and-match". In the mid 1990s, the typical DAQ I/O manufacturer supplied equipment that communicated using proprietary protocols over a suitable-distance carrier such as RS-485. End users who invested in a particular vendor's hardware solution often found themselves restricted to a limited choice of equipment when requirements changed (e.g. system expansions or performance improvements). To mitigate such problems, open communication protocols such as IEC 60870-5-101 and IEC 60870-5-104, IEC 61850, DNP3 serial, and DNP3 LAN/WAN became increasingly popular among SCADA equipment manufacturers and solution providers alike. Open-architecture SCADA systems enabled users to mix and match products from different vendors to develop solutions that were better than those achievable when restricted to a single vendor's product offering.

Towards the late 1990s, the shift towards open communications continued with individual I/O manufacturers as well, who adopted open message structures such as Modbus RTU and Modbus ASCII (originally both developed by Modicon) over RS-485. By 2000, most I/O makers offered completely open interfacing such as Modbus TCP over Ethernet and IP.
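
As a rough illustration of what open interfacing such as Modbus TCP over Ethernet looks like on the wire, the following Python sketch builds a single read-holding-registers request (function code 0x03) using only the standard library. The host address, unit ID, and register range are placeholder values, no real device is assumed, and a production system would normally use an established Modbus library with proper error handling instead.

import socket
import struct

def read_holding_registers(host, unit_id=1, start_addr=0, count=2, port=502):
    """Send one Modbus TCP 'read holding registers' request (function 0x03).

    Bare-bones sketch for illustration only; retries, partial reads, and
    Modbus exception responses are not handled here.
    """
    transaction_id = 1
    protocol_id = 0          # always 0 for Modbus
    pdu = struct.pack(">BHH", 0x03, start_addr, count)  # function, address, quantity
    length = len(pdu) + 1    # PDU length plus the unit identifier byte
    mbap = struct.pack(">HHHB", transaction_id, protocol_id, length, unit_id)

    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(mbap + pdu)
        response = sock.recv(256)

    # Response layout: MBAP header (7 bytes), function code, byte count, register data.
    byte_count = response[8]
    return struct.unpack(">" + "H" * (byte_count // 2), response[9:9 + byte_count])

# Example call with a placeholder address (replace with a real RTU/PLC on your network):
# print(read_holding_registers("192.0.2.10", unit_id=1, start_addr=0, count=4))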

The North American Electric Reliability Corporation (NERC) has specified that electrical system data should be time-tagged to the nearest millisecond. Electrical SCADA systems provide this sequence-of-events recording function, using radio clocks to synchronize the RTU or distributed RTU clocks.
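
A trivial sketch of what millisecond time-tagging amounts to in software, assuming the station clock has already been disciplined by a radio clock or GPS source (the point name and value below are made up for illustration):

from datetime import datetime, timezone

def time_tag_event(point_name, value):
    """Attach a UTC timestamp, truncated to the millisecond, to an event record."""
    timestamp_ms = datetime.now(timezone.utc).isoformat(timespec="milliseconds")
    return {"point": point_name, "value": value, "utc_time": timestamp_ms}

# Hypothetical breaker-status change in a sequence-of-events log:
print(time_tag_event("BRKR_52A_STATUS", "OPEN"))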

SCADA systems are coming in line with standard networking technologies. Ethernet and TCP/IP based protocols are replacing the older proprietary standards. Although certain characteristics of frame-based network communication technology (determinism, synchronization, protocol selection, environment suitability) have restricted the adoption of Ethernet in a few specialized applications, the vast majority of markets have accepted Ethernet networks for HMI/SCADA.

With the emergence of software as a service in the broader software industry, a few vendors have begun offering application specific SCADA systems hosted on remote platforms over the Internet. This removes the need to install and commission systems at the end-user's facility and takes advantage of security features already available in Internet technology, VPNs and SSL. Some concerns include security,[2] Internet connection reliability, and latency.

SCADA systems are becoming increasingly ubiquitous. Thin clients, web portals, and web-based products are gaining popularity with most major vendors. The increased convenience of end users viewing their processes remotely introduces security considerations. While these issues are largely regarded as solved in other sectors of Internet services, not all entities responsible for deploying SCADA systems have understood the changes in accessibility and threat scope implicit in connecting a system to the Internet.

Wednesday, June 10, 2009

Nanotechnology

Nanotechnology, shortened to "nanotech", is the study of the control of matter on an atomic and molecular scale. Generally, nanotechnology deals with structures of the size 100 nanometers or smaller, and involves developing materials or devices within that size range. Nanotechnology is very diverse, ranging from novel extensions of conventional device physics, to completely new approaches based upon molecular self-assembly, to developing new materials with dimensions on the nanoscale, even to speculation on whether we can directly control matter on the atomic scale.

There has been much debate on the future implications of nanotechnology. Nanotechnology has the potential to create many new materials and devices with wide-ranging applications, such as in medicine, electronics, and energy production. On the other hand, nanotechnology raises many of the same issues as any new technology, including concerns about the toxicity and environmental impact of nanomaterials [1] and their potential effects on global economics, as well as speculation about various doomsday scenarios. These concerns have led to a debate among advocacy groups and governments on whether special regulation of nanotechnology is warranted.

The Meaning of Nanotechnology

When K. Eric Drexler popularized the word 'nanotechnology' in the 1980s, he was talking about building machines on the scale of molecules, a few nanometers wide—motors, robot arms, and even whole computers, far smaller than a cell. Drexler spent the next ten years describing and analyzing these incredible devices, and responding to accusations of science fiction. Meanwhile, mundane technology was developing the ability to build simple structures on a molecular scale. As nanotechnology became an accepted concept, the meaning of the word shifted to encompass the simpler kinds of nanometer-scale technology. The U.S. National Nanotechnology Initiative was created to fund this kind of nanotech: its definition includes anything smaller than 100 nanometers with novel properties.

Much of the work being done today that carries the name 'nanotechnology' is not nanotechnology in the original meaning of the word. Nanotechnology, in its traditional sense, means building things from the bottom up, with atomic precision. This theoretical capability was envisioned as early as 1959 by the renowned physicist Richard Feynman.

I want to build a billion tiny factories, models of each other, which are manufacturing simultaneously. . . The principles of physics, as far as I can see, do not speak against the possibility of maneuvering things atom by atom. It is not an attempt to violate any laws; it is something, in principle, that can be done; but in practice, it has not been done because we are too big. — Richard Feynman, Nobel Prize winner in physics

Based on Feynman's vision of miniature factories using nanomachines to build complex products, advanced nanotechnology (sometimes referred to as molecular manufacturing) will make use of positionally-controlled mechanochemistry guided by molecular machine systems. Formulating a roadmap for development of this kind of nanotechnology is now an objective of a broadly based technology roadmap project led by Battelle (the manager of several U.S. National Laboratories) and the Foresight Nanotech Institute.

Shortly after this envisioned molecular machinery is created, it will result in a manufacturing revolution, probably causing severe disruption. It also has serious economic, social, environmental, and military implications.

Four Generations

Mihail (Mike) Roco of the U.S. National Nanotechnology Initiative has described four generations of nanotechnology development. The current era, as Roco depicts it, is that of passive nanostructures, materials designed to perform one task. The second phase, which we are just entering, introduces active nanostructures for multitasking; for example, actuators, drug delivery devices, and sensors. The third generation is expected to begin emerging around 2010 and will feature nanosystems with thousands of interacting components. A few years after that, the first integrated nanosystems, functioning (according to Roco) much like a mammalian cell with hierarchical systems within systems, are expected to be developed.

Some experts may still insist that nanotechnology can refer to measurement or visualization at the scale of 1-100 nanometers, but a consensus seems to be forming around the idea (put forward by the NNI's Mike Roco) that control and restructuring of matter at the nanoscale is a necessary element. CRN's definition is a bit more precise than that, but as work progresses through the four generations of nanotechnology leading up to molecular nanosystems, which will include molecular manufacturing, we think it will become increasingly obvious that "engineering of functional systems at the molecular scale" is what nanotech is really all about.

Conflicting Definitions

Unfortunately, conflicting definitions of nanotechnology and blurry distinctions between significantly different fields have complicated the effort to understand the differences and develop sensible, effective policy.

The risks of today's nanoscale technologies (nanoparticle toxicity, etc.) cannot be treated the same as the risks of longer-term molecular manufacturing (economic disruption, unstable arms race, etc.). It is a mistake to put them together in one basket for policy consideration—each is important to address, but they offer different problems and will require different solutions. As used today, the term nanotechnology usually refers to a broad collection of mostly disconnected fields. Essentially, anything sufficiently small and interesting can be called nanotechnology. Much of it is harmless. For the rest, much of the harm is of familiar and limited quality. But as we will see, molecular manufacturing will bring unfamiliar risks and new classes of problems.

General-Purpose Technology

Nanotechnology is sometimes referred to as a general-purpose technology. That's because in its advanced form it will have significant impact on almost all industries and all areas of society. It will offer better built, longer lasting, cleaner, safer, and smarter products for the home, for communications, for medicine, for transportation, for agriculture, and for industry in general.

Imagine a medical device that travels through the human body to seek out and destroy small clusters of cancerous cells before they can spread. Or a box no larger than a sugar cube that contains the entire contents of the Library of Congress. Or materials much lighter than steel that possess ten times as much strength. — U.S. National Science Foundation

Dual-Use Technology

Like electricity or computers before it, nanotech will offer greatly improved efficiency in almost every facet of life. But as a general-purpose technology, it will be dual-use, meaning it will have many commercial uses and it also will have many military uses—making far more powerful weapons and tools of surveillance. Thus it represents not only wonderful benefits for humanity, but also grave risks.

A key understanding of nanotechnology is that it offers not just better products, but a vastly improved manufacturing process. A computer can make copies of data files—essentially as many copies as you want at little or no cost. It may be only a matter of time until the building of products becomes as cheap as the copying of files. That's the real meaning of nanotechnology, and why it is sometimes seen as "the next industrial revolution."

My own judgment is that the nanotechnology revolution has the potential to change America on a scale equal to, if not greater than, the computer revolution. — U.S. Senator Ron Wyden (D-Ore.)

The power of nanotechnology can be encapsulated in an apparently simple device called a personal nanofactory that may sit on your countertop or desktop. Packed with miniature chemical processors, computing, and robotics, it will produce a wide range of items quickly, cleanly, and inexpensively, building products directly from blueprints.

[Image: Artist's conception of a personal nanofactory. Courtesy of John Burch, Lizard Fire Studios.]

Exponential Proliferation

Nanotechnology not only will allow making many high-quality products at very low cost, but it will allow making new nanofactories at the same low cost and at the same rapid speed. This unique (outside of biology, that is) ability to reproduce its own means of production is why nanotech is said to be an exponential technology. It represents a manufacturing system that will be able to make more manufacturing systems—factories that can build factories—rapidly, cheaply, and cleanly. The means of production will be able to reproduce exponentially, so in just a few weeks a few nanofactories conceivably could become billions. It is a revolutionary, transformative, powerful, and potentially very dangerous—or beneficial—technology.
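
As a back-of-the-envelope illustration of that exponential growth, the short Python sketch below assumes, purely for illustration, that each nanofactory can build one copy of itself per day; the one-day replication time is an invented figure, not a claim from the text above.

# Purely illustrative: assume each nanofactory builds one copy of itself per day.
factories = 1
for day in range(1, 31):
    factories *= 2
    if factories >= 1_000_000_000:
        print(f"Day {day}: about {factories:,} nanofactories")
        break
# -> Day 30: about 1,073,741,824 nanofactories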

How soon will all this come about? Conservative estimates usually say 20 to 30 years from now, or even much later than that. However, CRN is concerned that it may occur sooner, quite possibly within the next decade. This is because of the rapid progress being made in enabling technologies, such as optics, nanolithography, mechanochemistry and 3D prototyping. If it does arrive that soon, we may not be adequately prepared, and the consequences could be severe.

We believe it's not too early to begin asking some tough questions and facing the issues:
* Who will own the technology?
* Will it be heavily restricted, or widely available?
* What will it do to the gap between rich and poor?
* How can dangerous weapons be controlled, and perilous arms races be prevented?

Many of these questions were first raised over a decade ago, and have not yet been answered. If the questions are not answered with deliberation, answers will evolve independently and will take us by surprise; the surprise is likely to be unpleasant.

It is difficult to say for sure how soon this technology will mature, partly because it's possible (especially in countries that do not have open societies) that clandestine military or industrial development programs have been going on for years without our knowledge.

We cannot say with certainty that full-scale nanotechnology will not be developed within the next ten years, or even five years. It may take longer than that, but prudence—and possibly our survival—demands that we prepare now for the earliest plausible development scenario.