Everyday Science Papers 2001 Solved

Q2:
Cyclone, in strict meteorological terminology, an area of low atmospheric pressure surrounded by a wind system blowing, in the northern hemisphere, in a counterclockwise direction. A corresponding high-pressure area with clockwise winds is known as an anticyclone. In the southern hemisphere these wind directions are reversed. Cyclones are commonly called lows and anticyclones highs. The term cyclone has often been more loosely applied to a storm and disturbance attending such pressure systems, particularly the violent tropical hurricane and the typhoon, which center on areas of unusually low pressure.
Tornado, violently rotating column of air extending from within a thundercloud (see Cloud) down to ground level. The strongest tornadoes may sweep houses from their foundations, destroy brick buildings, toss cars and school buses through the air, and even lift railroad cars from their tracks. Tornadoes vary in diameter from tens of meters to nearly 2 km (1.2 mi), with an average diameter of about 50 m (160 ft). Most tornadoes in the northern hemisphere create winds that blow counterclockwise around a center of extremely low atmospheric pressure. In the southern hemisphere the winds generally blow clockwise. Peak wind speeds can range from near 120 km/h (75 mph) to almost 500 km/h (300 mph). The forward motion of a tornado can range from a near standstill to almost 110 km/h (70 mph).
Hurricane, name given to violent storms that originate over the tropical or subtropical waters of the Atlantic Ocean, Caribbean Sea, Gulf of Mexico, or North Pacific Ocean east of the International Date Line. Such storms over the North Pacific west of the International Date Line are called typhoons; those elsewhere are known as tropical cyclones, which is the general name for all such storms including hurricanes and typhoons. These storms can cause great damage to property and loss of human life due to high winds, flooding, and large waves crashing against shorelines. The worst natural disaster in United States history was caused by a hurricane that struck the coast of Texas in 1900. See also Tropical Storm; Cyclone.
Q3:
Energy
Energy, capacity of matter to perform work as the result of its motion or its position in relation to forces acting on it. Energy associated with motion is known as kinetic energy, and energy related to position is called potential energy. Thus, a swinging pendulum has maximum potential energy at the terminal points; at all intermediate positions it has both kinetic and potential energy in varying proportions. Energy exists in various forms, including mechanical (see Mechanics), thermal (see Thermodynamics), chemical (see Chemical Reaction), electrical (see Electricity), radiant (see Radiation), and atomic (see Nuclear Energy). All forms of energy are interconvertible by appropriate processes. In the process of transformation either kinetic or potential energy may be lost or gained, but the sum total of the two remains always the same.
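The pendulum's trade-off between kinetic and potential energy can be checked numerically. The sketch below is a minimal Python example, assuming an ideal frictionless pendulum and illustrative values for the mass, cord length, and release angle; it uses conservation of energy to find both energies at any swing angle:

```python
import math

# A minimal sketch (illustrative values, ideal frictionless pendulum):
# a bob of mass m on a cord of length L, released from rest at angle theta0.
# At angle theta, its height above the lowest point is h = L * (1 - cos(theta)),
# and conservation of energy gives PE + KE = m * g * L * (1 - cos(theta0)).

m, L, g = 1.0, 2.0, 9.81          # kg, m, m/s^2 (assumed values)
theta0 = math.radians(60)         # release angle (assumed)

def energies(theta):
    h = L * (1 - math.cos(theta))                   # height above lowest point
    pe = m * g * h                                  # potential energy
    ke = m * g * L * (1 - math.cos(theta0)) - pe    # kinetic energy, by conservation
    return pe, ke

for deg in (60, 45, 30, 0):
    pe, ke = energies(math.radians(deg))
    print(f"theta={deg:3d}  PE={pe:6.3f} J  KE={ke:6.3f} J  total={pe + ke:.3f} J")
```

At the terminal angle the energy is all potential; at the lowest point it is all kinetic; and at every intermediate angle the two sum to the same total.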
A weight suspended from a cord has potential energy due to its position, inasmuch as it can perform work in the process of falling. An electric battery has potential energy in chemical form. A piece of magnesium has potential energy stored in chemical form that is expended in the form of heat and light if the magnesium is ignited. If a gun is fired, the potential energy of the gunpowder is transformed into the kinetic energy of the moving projectile. The kinetic mechanical energy of the moving rotor of a dynamo is changed into kinetic electrical energy by electromagnetic induction. All forms of energy tend to be transformed into heat, which is the most transient form of energy. In mechanical devices energy not expended in useful work is dissipated in frictional heat, and losses in electrical circuits are largely heat losses.
Empirical observation in the 19th century led to the conclusion that although energy can be transformed, it cannot be created or destroyed. This concept, known as the conservation of energy, constitutes one of the basic principles of classical mechanics. The principle, along with the parallel principle of conservation of matter, holds true only for phenomena involving velocities that are small compared with the velocity of light. At higher velocities close to that of light, as in nuclear reactions, energy and matter are interconvertible (see Relativity). In modern physics the two concepts, the conservation of energy and of mass, are thus unified.
ENERGY CONVERSION
Transducer, device that converts an input energy into an output energy. Usually, the output energy is a different kind of energy than the input energy. An example is a temperature gauge in which a spiral metallic spring converts thermal energy into a mechanical deflection of the dial needle. Because of the ease with which electrical energy may be transmitted and amplified, the most useful transducers are those that convert other forms of energy, such as heat, light, or sound, into electrical energy. Some examples are microphones, which convert sound energy into electrical energy; photoelectric materials, which convert light energy into electrical energy; and pyroelectric crystals, which convert heat energy into electrical energy.
Electric Motors and Generators, group of devices used to convert mechanical energy into electrical energy, or electrical energy into mechanical energy, by electromagnetic means (see Energy). A machine that converts mechanical energy into electrical energy is called a generator, alternator, or dynamo, and a machine that converts electrical energy into mechanical energy is called a motor.
Most electric cars use lead-acid batteries, but new types of batteries, including zinc-chlorine, nickel metal hydride, and sodium-sulfur, are becoming more common. The motor of an electric car harnesses the battery's electrical energy by converting it to kinetic energy. The driver simply switches on the power, selects “Forward” or “Reverse” with another switch, and steps on the accelerator pedal.
Photosynthesis, process by which green plants and certain other organisms use the energy of light to convert carbon dioxide and water into the simple sugar glucose.
Turbine, rotary engine that converts the energy of a moving stream of water, steam, or gas into mechanical energy. The basic element in a turbine is a wheel or rotor with paddles, propellers, blades, or buckets arranged on its circumference in such a fashion that the moving fluid exerts a tangential force that turns the wheel and imparts energy to it. This mechanical energy is then transferred through a drive shaft to operate a machine, compressor, electric generator, or propeller. Turbines are classified as hydraulic, or water, turbines, steam turbines, or gas turbines. Today turbine-powered generators produce most of the world's electrical energy. Windmills that generate electricity are known as wind turbines (see Windmill).
Wind Energy, energy contained in the force of the winds blowing across the earth’s surface. When harnessed, wind energy can be converted into mechanical energy for performing work such as pumping water, grinding grain, and milling lumber. By connecting a spinning rotor (an assembly of blades attached to a hub) to an electric generator, modern wind turbines convert wind energy, which turns the rotor, into electrical energy.
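The power a wind turbine can extract follows from the kinetic energy of the moving air. A minimal sketch, assuming illustrative values for rotor size and wind speed and an assumed 40 percent power coefficient (a real turbine cannot exceed the Betz limit of roughly 59 percent):

```python
import math

# Power in the wind crossing a turbine's rotor disc: P = 0.5 * rho * A * v**3,
# where rho is air density, A the swept area, and v the wind speed.
# All numeric values below are illustrative assumptions, not measured data.

rho = 1.225                     # air density at sea level, kg/m^3
radius = 40.0                   # rotor blade length, m (assumed)
v = 10.0                        # wind speed, m/s (assumed)

A = math.pi * radius**2         # area swept by the rotor
p_wind = 0.5 * rho * A * v**3   # power carried by the wind, W
p_out = 0.40 * p_wind           # assumed 40% power coefficient

print(f"wind power = {p_wind / 1e6:.2f} MW, captured ~ {p_out / 1e6:.2f} MW")
```

Note the cube on wind speed: doubling the wind speed makes eight times the power available, which is why turbine siting matters so much.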
Q4:
(I)
Polymer
I INTRODUCTION
Polymer, substance consisting of large molecules that are made of many small, repeating units called monomers, or mers. The number of repeating units in one large molecule is called the degree of polymerization. Materials with a very high degree of polymerization are called high polymers. Polymers consisting of only one kind of repeating unit are called homopolymers. Copolymers are formed from several different repeating units.
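The degree of polymerization gives a quick estimate of a chain's molecular mass: roughly the number of repeating units multiplied by the mass of one unit. A minimal sketch, using illustrative values for polyethylene:

```python
# The repeating unit of polyethylene (-CH2-CH2-) has a mass of about 28 u.
# The degree of polymerization below is an assumed, illustrative figure
# for a "high polymer"; real chains span a distribution of lengths.

monomer_mass = 28.05        # mass of one ethylene repeat unit, u
degree = 10_000             # degree of polymerization (assumed)

chain_mass = degree * monomer_mass
print(f"approximate chain mass: {chain_mass:.0f} u")
```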
Most of the organic substances found in living matter, such as protein, wood, chitin, rubber, and resins, are polymers. Many synthetic materials, such as plastics, fibers (see Rayon), adhesives, glass, and porcelain, are also to a large extent polymeric substances.
II STRUCTURE OF POLYMERS
Polymers can be subdivided into three, or possibly four, structural groups. The molecules in linear polymers consist of long chains of monomers joined by bonds that are rigid to a certain degree—the monomers cannot rotate freely with respect to each other. Typical examples are polyethylene, polyvinyl alcohol, and polyvinyl chloride (PVC).
Branched polymers have side chains that are attached to the chain molecule itself. Branching can be caused by impurities or by the presence of monomers that have several reactive groups. Chain polymers composed of monomers with side groups that are part of the monomers, such as polystyrene or polypropylene, are not considered branched polymers.
In cross-linked polymers, two or more chains are joined together by side chains. With a small degree of cross-linking, a loose network is obtained that is essentially two dimensional. High degrees of cross-linking result in a tight three-dimensional structure. Cross-linking is usually caused by chemical reactions. An example of a two-dimensional cross-linked structure is vulcanized rubber, in which cross-links are formed by sulfur atoms. Thermosetting plastics are examples of highly cross-linked polymers; their structure is so rigid that when heated they decompose or burn rather than melt.
III SYNTHESIS
Two general methods exist for forming large molecules from small monomers: addition polymerization and condensation polymerization. In the chemical process called addition polymerization, monomers join together without the loss of atoms from the molecules. Some examples of addition polymers are polyethylene, polypropylene, polystyrene, polyvinyl acetate, and polytetrafluoroethylene (Teflon).
In condensation polymerization, monomers join together with the simultaneous elimination of atoms or groups of atoms. Typical condensation polymers are polyamides, polyesters, and certain polyurethanes.
In 1983 a new method of addition polymerization called group transfer polymerization was announced. An activating group within the molecule initiating the process transfers to the end of the growing polymer chain as individual monomers insert themselves in the group. The method has been used for acrylic plastics; it should prove applicable to other plastics as well.
Synthetic polymers include the plastics polystyrene, polyester, nylon (a polyamide), and polyvinyl chloride. These polymers differ in their repeating monomer units. Scientists build polymers from different monomer units to create plastics with different properties. For example, polyvinyl chloride is tough and nylon is silklike. Synthetic polymers usually do not dissolve in water or react with other chemicals. Strong synthetic polymers form fibers for clothing and other materials. Synthetic fibers usually last longer than natural fibers do.

(II)
Laser
I INTRODUCTION
Laser, a device that produces and amplifies light. The word laser is an acronym for Light Amplification by Stimulated Emission of Radiation. Laser light is very pure in color, can be extremely intense, and can be directed with great accuracy. Lasers are used in many modern technological devices including bar code readers, compact disc (CD) players, and laser printers. Lasers can generate light beyond the range visible to the human eye, from the infrared through the X-ray range. Masers are similar devices that produce and amplify microwaves.
II PRINCIPLES OF OPERATION
Lasers generate light by storing energy in particles called electrons inside atoms and then inducing the electrons to emit the absorbed energy as light. Atoms are the building blocks of all matter on Earth and are a thousand times smaller than viruses. Electrons are the underlying source of almost all light.
Light is composed of tiny packets of energy called photons. Lasers produce coherent light: light that is monochromatic (one color) and whose photons are “in step” with one another.
A Excited Atoms
At the heart of an atom is a tightly bound cluster of particles called the nucleus. This cluster is made up of two types of particles: protons, which have a positive charge, and neutrons, which have no charge. The nucleus makes up more than 99.9 percent of the atom’s mass but occupies only a tiny part of the atom’s space. Enlarge an atom up to the size of Yankee Stadium and the equally magnified nucleus is only the size of a baseball.
Electrons, tiny particles that have a negative charge, whirl through the rest of the space inside atoms. Electrons travel in complex orbits and exist only in certain specific energy states or levels (see Quantum Theory). Electrons can move from a low to a high energy level by absorbing energy. An atom with at least one electron that occupies a higher energy level than it normally would is said to be excited. An atom can become excited by absorbing a photon whose energy equals the difference between the two energy levels. A photon’s energy, color, frequency, and wavelength are directly related: All photons of a given energy are the same color and have the same frequency and wavelength.
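The link between a photon's energy, frequency, and wavelength can be made concrete with Planck's relation E = hf = hc/λ. A short sketch, taking the familiar red helium-neon laser line as an illustrative input:

```python
# Planck's relation: photon energy E = h * f = h * c / wavelength.
# The wavelength below (632.8 nm, red He-Ne laser light) is illustrative.

h = 6.626e-34        # Planck's constant, J*s
c = 2.998e8          # speed of light, m/s

wavelength = 632.8e-9            # m
f = c / wavelength               # frequency, Hz
E = h * f                        # photon energy, J
E_eV = E / 1.602e-19             # the same energy in electron volts

print(f"frequency = {f:.3e} Hz, energy = {E:.3e} J = {E_eV:.2f} eV")
```

Shorter wavelengths mean higher frequencies and more energetic photons, which is why blue light excites transitions that red light cannot.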
Usually, electrons quickly jump back to the low energy level, giving off the extra energy as light (see Photoelectric Effect). Neon signs and fluorescent lamps glow with this kind of light as many electrons independently emit photons of different colors in all directions.
B Stimulated Emission
Lasers are different from more familiar sources of light. Excited atoms in lasers collectively emit photons of a single color, all traveling in the same direction and all in step with one another. When two photons are in step, the peaks and troughs of their waves line up. The electrons in the atoms of a laser are first pumped, or energized, to an excited state by an energy source. An excited atom can then be “stimulated” by a photon of exactly the same color (or, equivalently, the same wavelength) as the photon this atom is about to emit spontaneously. If the photon approaches closely enough, the photon can stimulate the excited atom to immediately emit light that has the same wavelength and is in step with the photon that interacted with it. This stimulated emission is the key to laser operation. The new light adds to the existing light, and the two photons go on to stimulate other excited atoms to give up their extra energy, again in step. The phenomenon snowballs into an amplified, coherent beam of light: laser light.
In a gas laser, for example, the photons usually zip back and forth in a gas-filled tube with highly reflective mirrors facing inward at each end. As the photons bounce between the two parallel mirrors, they trigger further stimulated emissions and the light gets brighter and brighter with each pass through the excited atoms. One of the mirrors is only partially silvered, allowing a small amount of light to pass through rather than reflecting it all. The intense, directional, and single-colored laser light finally escapes through this slightly transparent mirror. The escaped light forms the laser beam.
Albert Einstein first proposed stimulated emission, the underlying process for laser action, in 1917. Translating the idea of stimulated emission into a working model, however, required more than four decades. The working principles of lasers were outlined by the American physicists Charles Hard Townes and Arthur Leonard Schawlow in a 1958 patent application. (Both men won Nobel Prizes in physics for their work, Townes in 1964 and Schawlow in 1981.) The patent for the laser was granted to Townes and Schawlow, but it was later challenged by the American physicist and engineer Gordon Gould, who had written down some ideas and coined the word laser in 1957. Gould eventually won a partial patent covering several types of laser. In 1960 American physicist Theodore Maiman of Hughes Aircraft Corporation constructed the first working laser from a ruby rod.
III TYPES OF LASERS
Lasers are generally classified according to the material, called the medium, they use to produce the laser light. Solid-state, gas, liquid, semiconductor, and free electron are all common types of lasers.
A Solid-State Lasers
Solid-state lasers produce light by means of a solid medium. The most common solid laser media are rods of ruby crystals and neodymium-doped glasses and crystals. The ends of the rods are fashioned into two parallel surfaces coated with a highly reflecting nonmetallic film. Solid-state lasers offer the highest power output. They are usually pulsed to generate a very brief burst of light. Bursts as short as 12 × 10⁻¹⁵ sec have been achieved. These short bursts are useful for studying physical phenomena of very brief duration.
One method of exciting the atoms in lasers is to illuminate the solid laser material with higher-energy light than the laser produces. This procedure, called pumping, is achieved with brilliant strobe light from xenon flash tubes, arc lamps, or metal-vapor lamps.
B Gas Lasers
The lasing medium of a gas laser can be a pure gas, a mixture of gases, or even metal vapor. The medium is usually contained in a cylindrical glass or quartz tube. Two mirrors are located outside the ends of the tube to form the laser cavity. Gas lasers can be pumped by ultraviolet light, electron beams, electric current, or chemical reactions. The helium-neon laser is known for its color purity and minimal beam spread. Carbon dioxide lasers are very efficient at turning the energy used to excite their atoms into laser light. Consequently, they are the most powerful continuous wave (CW) lasers—that is, lasers that emit light continuously rather than in pulses.
C Liquid Lasers
The most common liquid laser media are organic dyes contained in glass vessels. They are pumped by intense flash lamps in a pulse mode or by a separate gas laser in the continuous wave mode. Some dye lasers are tunable, meaning that the color of the laser light they emit can be adjusted with the help of a prism located inside the laser cavity.
D Semiconductor Lasers
Semiconductor lasers are the most compact lasers. Gallium arsenide is the most common semiconductor used. A typical semiconductor laser consists of a junction between two flat layers of gallium arsenide. One layer is treated with an impurity whose atoms provide an extra electron, and the other with an impurity whose atoms are one electron short. Semiconductor lasers are pumped by the direct application of electric current across the junction. They can be operated in the continuous wave mode with better than 50 percent efficiency. Only a small percentage of the energy used to excite most other lasers is converted into light.
Scientists have developed extremely tiny semiconductor lasers, called quantum-dot vertical-cavity surface-emitting lasers. These lasers are so tiny that more than a million of them can fit on a chip the size of a fingernail.
Common uses for semiconductor lasers include compact disc (CD) players and laser printers. Semiconductor lasers also form the heart of fiber-optics communication systems (see Fiber Optics).
E Free Electron Lasers
Free electron lasers employ an array of magnets to excite free electrons (electrons not bound to atoms). First developed in 1977, they are now becoming important research instruments. Free electron lasers are tunable over a broader range of energies than dye lasers. The devices become more difficult to operate at higher energies but generally work successfully from infrared through ultraviolet wavelengths. Theoretically, free electron lasers can function even in the X-ray range.
The free electron laser facility at the University of California at Santa Barbara uses intense far-infrared light to investigate mutations in DNA molecules and to study the properties of semiconductor materials. Free electron lasers should also eventually become capable of producing very high-power radiation that is currently too expensive to produce. At high power, near-infrared beams from a free electron laser could defend against a missile attack.
IV LASER APPLICATIONS
The use of lasers is restricted only by imagination. Lasers have become valuable tools in industry, scientific research, communications, medicine, the military, and the arts.
A Industry
Powerful laser beams can be focused on a small spot to generate enormous temperatures. Consequently, the focused beams can readily and precisely heat, melt, or vaporize material. Lasers have been used, for example, to drill holes in diamonds, to shape machine tools, to trim microelectronics, to cut fashion patterns, to synthesize new material, and to attempt to induce controlled nuclear fusion (see Nuclear Energy).
Highly directional laser beams are used for alignment in construction. Perfectly straight and uniformly sized tunnels, for example, may be dug using lasers for guidance. Powerful, short laser pulses also make high-speed photography with exposure times of only several trillionths of a second possible.
B Scientific Research
Because laser light is highly directional and monochromatic, extremely small amounts of light scattering and small shifts in color caused by the interaction between laser light and matter can easily be detected. By measuring the scattering and color shifts, scientists can study molecular structures of matter. Chemical reactions can be selectively induced, and the existence of trace substances in samples can be detected. Lasers are also the most effective detectors of certain types of air pollution (see Chemical Analysis; Photochemistry).
Scientists use lasers to make extremely accurate measurements. Lasers are used in this way for monitoring small movements associated with plate tectonics and for geographic surveys. Lasers have been used for precise determination (to within one inch) of the distance between Earth and the Moon, and in precise tests to confirm Einstein’s theory of relativity. Scientists also have used lasers to determine the speed of light to an unprecedented accuracy.
Very fast laser-activated switches are being developed for use in particle accelerators. Scientists also use lasers to trap single atoms and subatomic particles in order to study these tiny bits of matter (see Particle Trap).
C Communications
Laser light can travel a large distance in outer space with little reduction in signal strength. In addition, high-energy laser light can carry 1,000 times as many television channels as microwave signals carry today. Lasers are therefore ideal for space communications. Low-loss optical fibers have been developed to transmit laser light for earthbound communication in telephone and computer systems. Laser techniques have also been used for high-density information recording. For instance, laser light simplifies the recording of a hologram, from which a three-dimensional image can be reconstructed with a laser beam. Lasers are also used to play audio CDs and videodiscs (see Sound Recording and Reproduction).
D Medicine
Lasers have a wide range of medical uses. Intense, narrow beams of laser light can cut and cauterize certain body tissues in a small fraction of a second without damaging surrounding healthy tissues. Lasers have been used to “weld” the retina, bore holes in the skull, vaporize lesions, and cauterize blood vessels. Laser surgery has virtually replaced older surgical procedures for eye disorders. Laser techniques have also been developed for lab tests of small biological samples.
E Military Applications
Laser guidance systems for missiles, aircraft, and satellites have been constructed. Guns can be fitted with laser sights and range finders. The use of laser beams to destroy hostile ballistic missiles has been proposed, as in the Strategic Defense Initiative urged by U.S. president Ronald Reagan and the Ballistic Missile Defense program supported by President George W. Bush. The ability of tunable dye lasers to selectively excite an atom or molecule may open up more efficient ways to separate isotopes for construction of nuclear weapons.
V LASER SAFETY
Because the eye focuses laser light just as it does other light, the chief danger in working with lasers is eye damage. Therefore, laser light should not be viewed either directly or reflected.
Lasers sold and used commercially in the United States must comply with a strict set of laws enforced by the Center for Devices and Radiological Health (CDRH), a department of the Food and Drug Administration. The CDRH has divided lasers into six groups, depending on their power output, their emission duration, and the energy of the photons they emit. The classification is then attached to the laser as a sticker. The higher the laser’s energy, the higher its potential to injure. High-powered lasers of the Class IV type (the highest classification) generate a beam of energy that can start fires, burn flesh, and cause permanent eye damage whether the light is direct, reflected, or diffused. Canada uses the same classification system, and laser use in Canada is overseen by Health Canada’s Radiation Protection Bureau.
Goggles blocking the specific color of photons that a laser produces are mandatory for the safe use of lasers. Even with goggles, direct exposure to laser light should be avoided.
(iii) Pesticides
The chemical agents called pesticides include herbicides (for weed control), insecticides, and fungicides. More than half the pesticides used in the U.S. are herbicides that control weeds: USDA estimates indicate that 86 percent of U.S. agricultural land areas are treated with herbicides, 18 percent with insecticides, and 3 percent with fungicides. The amount of pesticide used on different crops also varies. For example, in the U.S., about 67 percent of the insecticides used in agriculture are applied to two crops, cotton and corn; about 70 percent of the herbicides are applied to corn and soybeans, and most of the fungicides are applied to fruit and vegetable crops.
Most of the insecticides now applied are long-lasting synthetic compounds that affect the nervous system of insects on contact. Among the most effective are the chlorinated hydrocarbons DDT, chlordane, and toxaphene, although agricultural use of DDT has been banned in the U.S. since 1973. Others are the organophosphate insecticides, which include malathion, parathion, and dimethoate. Among the most effective herbicides are compounds of 2,4-D (2,4-dichlorophenoxyacetic acid), only a few kilograms of which are required per hectare to kill broad-leaved weeds while leaving grains unaffected.
Agricultural pesticides prevent a monetary loss of about $9 billion each year in the U.S. For every $1 invested in pesticides, the American farmer gets about $4 in return. These benefits, however, must be weighed against the costs to society of using pesticides, as seen in the banning of ethylene dibromide in the early 1980s. These costs include human poisonings, fish kills, honey bee poisonings, and the contamination of livestock products. The environmental and social costs of pesticide use in the U.S. have been estimated to be at least $1 billion each year. Thus, although pesticides are valuable for agriculture, they also can cause serious harm. Indeed, the question may be asked—what would crop losses be if insecticides were not used in the U.S., and readily available nonchemical controls were substituted? The best estimate is that only another 5 percent of the nation's food would be lost.

(iv) Fission and Fusion

Fission and Fusion
Nuclear energy can be released in two different ways: fission, the splitting of a large nucleus, and fusion, the combining of two small nuclei. In both cases energy—measured in millions of electron volts (MeV)—is released because the products are more stable (have a higher binding energy) than the reactants. Fusion reactions are difficult to maintain because the nuclei repel each other, but fusion creates much less radioactive waste than does fission.
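The energies quoted in MeV come from the mass difference between reactants and products, via E = mc². A minimal sketch working through the deuterium-tritium fusion reaction with standard atomic masses (the particular reaction and mass values are illustrative additions, not taken from the text above):

```python
# Energy released in a nuclear reaction equals the mass lost, via E = m*c^2.
# One atomic mass unit (u) is equivalent to about 931.494 MeV.
# Worked example (illustrative): the D + T -> He-4 + n fusion reaction.

U_TO_MEV = 931.494

m_D, m_T = 2.014102, 3.016049      # deuterium, tritium (u)
m_He, m_n = 4.002602, 1.008665     # helium-4, neutron (u)

mass_before = m_D + m_T
mass_after = m_He + m_n
delta_m = mass_before - mass_after     # mass defect, u
energy_MeV = delta_m * U_TO_MEV        # energy released

print(f"mass defect = {delta_m:.6f} u -> energy released ~ {energy_MeV:.1f} MeV")
```

The products weigh slightly less than the reactants; that missing mass, about 0.019 u here, appears as roughly 17.6 MeV of kinetic energy, consistent with the products being more tightly bound.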

Q: How would a fusion reactor differ from the nuclear reactors we currently have?
A: The nuclear reactors we have now are fission reactors. This means that they obtain their energy from nuclear reactions that split large nuclei such as uranium into smaller ones such as rubidium and cesium. There is a binding energy that holds a nucleus together. If the total binding energy of the smaller pieces is greater than that of the original large nucleus, you get the difference in energy as heat that can be used in a power station to generate electricity.
A fusion reaction works the other way. It takes small nuclei like deuterium (heavy hydrogen) and fuses them together to make larger ones such as helium. If the binding energy of the final helium nucleus is greater than the combined binding energy of the two deuterium nuclei, the difference is released as energy that can be used to generate electricity.
There are two main differences between fission and fusion. The first is that the materials required for fission are rarer and more expensive to produce than those for fusion. For example, uranium has to be mined in special areas and then purified by difficult processes. By contrast, even though deuterium makes up only 0.02 percent of naturally occurring hydrogen, we have a vast supply of hydrogen in the water making up the oceans. The second difference is that the products of fission are radioactive and so need to be treated carefully, as they are dangerous to health. The products of fusion are not radioactive (although a realistic reactor will likely have some relatively small amount of radioactive product).
The problem with building fusion reactors is that a steady, controlled fusion reaction is very hard to achieve. It is still a subject of intense research. The main problem is that to achieve fusion we need to keep the nuclei we wish to fuse at extremely high temperatures and close enough for them to have a chance of fusing with one another. It is extremely difficult to find a way of holding everything together, since the nuclei naturally repel each other and the temperatures involved are high enough to melt any solid substance known. As technology improves, holding everything together will become easier, but it seems that we are a long way off from having commercial fusion reactors.

(v) Paramagnetism and Diamagnetism
Paramagnetism 
Liquid oxygen becomes trapped in an electromagnet’s magnetic field because oxygen (O2) is paramagnetic. Oxygen has two unpaired electrons whose magnetic moments align with external magnetic field lines. When this occurs, the O2 molecules themselves behave like tiny magnets, and become trapped between the poles of the electromagnet.
Magnetism
I INTRODUCTION
Magnetism, an aspect of electromagnetism, one of the fundamental forces of nature. Magnetic forces are produced by the motion of charged particles such as electrons, indicating the close relationship between electricity and magnetism. The unifying frame for these two forces is called electromagnetic theory (see Electromagnetic Radiation). The most familiar evidence of magnetism is the attractive or repulsive force observed to act between magnetic materials such as iron. More subtle effects of magnetism, however, are found in all matter. In recent times these effects have provided important clues to the atomic structure of matter.
II HISTORY OF STUDY
The phenomenon of magnetism has been known of since ancient times. The mineral lodestone (see Magnetite), an oxide of iron that has the property of attracting iron objects, was known to the Greeks, Romans, and Chinese. When a piece of iron is stroked with lodestone, the iron itself acquires the same ability to attract other pieces of iron. The magnets thus produced are polarized—that is, each has two sides or ends called north-seeking and south-seeking poles. Like poles repel one another, and unlike poles attract.
The compass was first used for navigation in the West some time after AD 1200. In the 13th century, important investigations of magnets were made by the French scholar Petrus Peregrinus. His discoveries stood for nearly 300 years, until the English physicist and physician William Gilbert published his book Of Magnets, Magnetic Bodies, and the Great Magnet of the Earth in 1600. Gilbert applied scientific methods to the study of electricity and magnetism. He pointed out that the earth itself behaves like a giant magnet, and through a series of experiments, he investigated and disproved several incorrect notions about magnetism that were accepted as being true at the time. Subsequently, in 1750, the English geologist John Michell invented a balance that he used in the study of magnetic forces. He showed that the attraction and repulsion of magnets decrease as the square of the distance from the respective poles increases. The French physicist Charles Augustin de Coulomb, who had measured the forces between electric charges, later verified Michell's observation with high precision.
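Michell's result, later verified by Coulomb, is an inverse-square law: the force between two magnetic poles falls off as the square of their separation. A minimal sketch, with arbitrary pole strengths and an arbitrary constant for illustration:

```python
# Inverse-square law for magnetic poles: F = k * p1 * p2 / r**2.
# The pole strengths p1, p2 and the constant k below are arbitrary
# illustrative values, used only to show the scaling with distance.

def pole_force(p1, p2, r, k=1.0):
    """Force between two magnetic poles a distance r apart."""
    return k * p1 * p2 / r**2

f_near = pole_force(1.0, 1.0, 1.0)   # baseline separation
f_far = pole_force(1.0, 1.0, 2.0)    # separation doubled

print(f_far / f_near)                # doubling the distance quarters the force
```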
III ELECTROMAGNETIC THEORY
In the late 18th and early 19th centuries, the theories of electricity and magnetism were investigated simultaneously. In 1819 an important discovery was made by the Danish physicist Hans Christian Oersted, who found that a magnetic needle could be deflected by an electric current flowing through a wire. This discovery, which showed a connection between electricity and magnetism, was followed up by the French scientist André Marie Ampère, who studied the forces between wires carrying electric currents, and by the French physicist Dominique François Jean Arago, who magnetized a piece of iron by placing it near a current-carrying wire. In 1831 the English scientist Michael Faraday discovered that moving a magnet near a wire induces an electric current in that wire, the inverse effect to that found by Oersted: Oersted showed that an electric current creates a magnetic field, while Faraday showed that a magnetic field can be used to create an electric current. The full unification of the theories of electricity and magnetism was achieved by the English physicist James Clerk Maxwell, who predicted the existence of electromagnetic waves and identified light as an electromagnetic phenomenon.
Subsequent studies of magnetism were increasingly concerned with an understanding of the atomic and molecular origins of the magnetic properties of matter. In 1905 the French physicist Paul Langevin produced a theory regarding the temperature dependence of the magnetic properties of paramagnets (discussed below), which was based on the atomic structure of matter. This theory is an early example of the description of large-scale properties in terms of the properties of electrons and atoms. Langevin's theory was subsequently expanded by the French physicist Pierre-Ernest Weiss, who postulated the existence of an internal, “molecular” magnetic field in materials such as iron. This concept, when combined with Langevin's theory, served to explain the properties of strongly magnetic materials such as lodestone.
After Weiss's theory, magnetic properties were explored in greater and greater detail. Danish physicist Niels Bohr's theory of atomic structure, for example, provided an understanding of the periodic table and showed why magnetism occurs in transition elements such as iron and the rare earth elements, or in compounds containing these elements. The American physicists Samuel Abraham Goudsmit and George Eugene Uhlenbeck showed in 1925 that the electron itself has spin and behaves like a small bar magnet. (At the atomic level, magnetism is measured in terms of magnetic moments—a magnetic moment is a vector quantity that depends on the strength and orientation of the magnetic field, and the configuration of the object that produces the magnetic field.) The German physicist Werner Heisenberg gave a detailed explanation for Weiss's molecular field in 1927, on the basis of the newly developed quantum mechanics (see Quantum Theory). Other scientists then predicted many more complex atomic arrangements of magnetic moments, with diverse magnetic properties.
IV THE MAGNETIC FIELD
Objects such as a bar magnet or a current-carrying wire can influence other magnetic materials without physically contacting them, because magnetic objects produce a magnetic field. Magnetic fields are usually represented by magnetic flux lines. At any point, the direction of the magnetic field is the same as the direction of the flux lines, and the strength of the magnetic field is inversely proportional to the spacing between the flux lines. For example, in a bar magnet, the flux lines emerge at one end of the magnet, then curve around to the other end; the flux lines can be thought of as being closed loops, with part of the loop inside the magnet, and part of the loop outside. At the ends of the magnet, where the flux lines are closest together, the magnetic field is strongest; toward the side of the magnet, where the flux lines are farther apart, the magnetic field is weaker. Depending on their shapes and magnetic strengths, different kinds of magnets produce different patterns of flux lines. The pattern of flux lines created by magnets or any other object that creates a magnetic field can be mapped by using a compass or small iron filings. Magnets tend to align themselves along magnetic flux lines. Thus a compass, which is a small magnet that is free to rotate, will tend to orient itself in the direction of the magnetic flux lines. By noting the direction of the compass needle when the compass is placed at many locations around the source of the magnetic field, the pattern of flux lines can be inferred. Alternatively, when iron filings are placed around an object that creates a magnetic field, the filings will line up along the flux lines, revealing the flux line pattern.
Magnetic fields influence magnetic materials, and also influence charged particles that move through the magnetic field. Generally, when a charged particle moves through a magnetic field, it experiences a force that is at right angles both to the velocity of the particle and to the magnetic field. Since the force is always perpendicular to the velocity of the charged particle, a charged particle in a magnetic field moves in a curved path. Magnetic fields are used to change the paths of charged particles in devices such as particle accelerators and mass spectrometers.
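The curved path can be seen in a minimal numerical sketch (assumed unit charge, mass, and field strength; a simple Euler integration, not a production simulation):

```python
import math

# A charged particle in a uniform magnetic field B pointing out of the page.
# The force q * v x B is always perpendicular to the velocity, so the speed
# stays constant while the path bends into a circle of radius m*v/(q*B).
q, m, B = 1.0, 1.0, 1.0      # charge, mass, field strength (assumed units)
vx, vy = 1.0, 0.0            # initial velocity
x, y = 0.0, 0.0              # initial position
dt = 1e-4
for _ in range(10000):
    fx, fy = q * vy * B, -q * vx * B   # F = q v x B with B along +z
    vx += fx / m * dt
    vy += fy / m * dt
    x += vx * dt
    y += vy * dt

print(round(math.hypot(vx, vy), 3))    # speed is essentially unchanged: 1.0
```

Because the force does no work on the particle, only the direction of motion changes; this is why magnetic fields can steer particles in accelerators without speeding them up or slowing them down.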
V KINDS OF MAGNETIC MATERIALS
The magnetic properties of materials are classified in a number of different ways.
One classification of magnetic materials—into diamagnetic, paramagnetic, and ferromagnetic—is based on how the material reacts to a magnetic field. Diamagnetic materials, when placed in a magnetic field, have a magnetic moment induced in them that opposes the direction of the magnetic field. This property is now understood to be a result of electric currents that are induced in individual atoms and molecules. These induced currents, in accordance with Lenz's law, produce magnetic moments in opposition to the applied field. Many materials are diamagnetic; the strongest are metallic bismuth and organic molecules, such as benzene, that have a cyclic structure, enabling the easy establishment of electric currents.
Paramagnetic behavior results when the applied magnetic field lines up all the existing magnetic moments of the individual atoms or molecules that make up the material. This results in an overall magnetic moment that adds to the magnetic field. Paramagnetic materials usually contain transition metals or rare earth elements that possess unpaired electrons. Paramagnetism in nonmetallic substances is usually characterized by temperature dependence; that is, the size of an induced magnetic moment varies inversely with the temperature. This is a result of the increasing difficulty of ordering the magnetic moments of the individual atoms along the direction of the magnetic field as the temperature is raised.
A ferromagnetic substance is one that, like iron, retains a magnetic moment even when the external magnetic field is reduced to zero. This effect is a result of a strong interaction between the magnetic moments of the individual atoms or electrons in the magnetic substance that causes them to line up parallel to one another. In ordinary circumstances these ferromagnetic materials are divided into regions called domains; in each domain, the atomic moments are aligned parallel to one another. Separate domains have total moments that do not necessarily point in the same direction. Thus, although an ordinary piece of iron might not have an overall magnetic moment, magnetization can be induced in it by placing the iron in a magnetic field, thereby aligning the moments of all the individual domains. The energy expended in reorienting the domains from the magnetized back to the demagnetized state manifests itself in a lag in response, known as hysteresis.
Ferromagnetic materials, when heated, eventually lose their magnetic properties. This loss becomes complete above the Curie temperature, named after the French physicist Pierre Curie, who discovered it in 1895. (The Curie temperature of metallic iron is about 770° C/1420° F.)
VI OTHER MAGNETIC ORDERINGS
In recent years, a greater understanding of the atomic origins of magnetic properties has resulted in the discovery of other types of magnetic ordering. Substances are known in which the magnetic moments interact in such a way that it is energetically favorable for them to line up antiparallel; such materials are called antiferromagnets. There is a temperature analogous to the Curie temperature called the Néel temperature, above which antiferromagnetic order disappears.
Other, more complex atomic arrangements of magnetic moments have also been found. Ferrimagnetic substances have at least two different kinds of atomic magnetic moments, which are oriented antiparallel to one another. Because the moments are of different size, a net magnetic moment remains, unlike the situation in an antiferromagnet where all the magnetic moments cancel out. Interestingly, lodestone is a ferrimagnet rather than a ferromagnet; two types of iron ions, each with a different magnetic moment, are in the material. Even more complex arrangements have been found in which the magnetic moments are arranged in spirals. Studies of these arrangements have provided much information on the interactions between magnetic moments in solids.
VII APPLICATIONS
Numerous applications of magnetism and of magnetic materials have arisen in the past 100 years. The electromagnet, for example, is the basis of the electric motor and the transformer. In more recent times, the development of new magnetic materials has also been important in the computer revolution. Computer memories can be fabricated using bubble domains. These domains are actually smaller regions of magnetization that are either parallel or antiparallel to the overall magnetization of the material. Depending on this direction, the bubble indicates either a one or a zero, thus serving as the units of the binary number system used in computers. Magnetic materials are also important constituents of tapes and disks on which data are stored.
In addition to the atomic-sized magnetic units used in computers, large, powerful magnets are crucial to a variety of modern technologies. Powerful magnetic fields are used in nuclear magnetic resonance imaging, an important diagnostic tool used by doctors. Superconducting magnets are used in today's most powerful particle accelerators to keep the accelerated particles focused and moving in a curved path. Scientists are developing magnetic levitation trains that use strong magnets to enable trains to float above the tracks, reducing friction.


Q5:
(i) Microcomputer and Minicomputer
Minicomputer, a mid-level computer built to perform complex computations while dealing efficiently with a high level of input and output from users connected via terminals. Minicomputers also frequently connect to other minicomputers on a network and distribute processing among all the attached machines. Minicomputers are used heavily in transaction-processing applications and as interfaces between mainframe computer systems and wide area networks. See also Office Systems; Time-Sharing.

Microcomputer, desktop- or notebook-size computing device that uses a microprocessor as its central processing unit, or CPU (see Computer). Microcomputers are also called personal computers (PCs), home computers, small-business computers, and micros. The smallest, most compact are called laptops. When they first appeared, they were considered single-user devices, and they were capable of handling only 4, 8, or 16 bits of information at one time. More recently the distinction between microcomputers and large, mainframe computers (as well as the smaller mainframe-type systems called minicomputers) has become blurred, as newer microcomputer models have increased the speed and data-handling capabilities of their CPUs into the 32-bit, multiuser range.

(ii)
Supercomputer
I INTRODUCTION
Supercomputer, computer designed to perform calculations as fast as current technology allows and used to solve extremely complex problems. Supercomputers are used to design automobiles, aircraft, and spacecraft; to forecast the weather and global climate; to design new drugs and chemical compounds; and to make calculations that help scientists understand the properties of particles that make up atoms as well as the behavior and evolution of stars and galaxies. Supercomputers are also used extensively by the military for weapons and defense systems research, and for encrypting and decoding sensitive intelligence information. See Computer; Encryption; Cryptography.
Supercomputers differ from other types of computers in that they are designed to work on a single problem at a time, devoting all their resources to the solution of the problem. Other powerful computers such as mainframes and workstations are specifically designed so that they can work on numerous problems, and support numerous users, simultaneously. Because of their high cost—usually in the hundreds of thousands to millions of dollars—supercomputers are shared resources. Supercomputers are so expensive that usually only large companies, universities, and government agencies and laboratories can afford them.
II HOW SUPERCOMPUTERS WORK
The two major components of a supercomputer are the same as any other computer—a central processing unit (CPU) where instructions are carried out, and the memory in which data and instructions are stored. The CPU in a supercomputer is similar in function to a standard personal computer (PC) CPU, but it usually has a different type of transistor technology that minimizes transistor switching time. Switching time is the length of time that it takes for a transistor in the CPU to open or close, which corresponds to a piece of data moving or changing value in the computer. This time is extremely important in determining the absolute speed at which a CPU can operate. By using very high performance circuits, architectures, and, in some cases, even special materials, supercomputer designers are able to make CPUs that are 10 to 20 times faster than state-of-the-art processors for other types of commercial computers.
Supercomputer memory also has the same function as memory in other computers, but it is optimized so that retrieval of data and instructions from memory takes the least amount of time possible. Also important to supercomputer performance is that the connections between the memory and the CPU be as short as possible to minimize the time that information takes to travel between the memory and the CPU.
A supercomputer functions in much the same way as any other type of computer, except that it is designed to do calculations as fast as possible. Supercomputer designers use two main methods to reduce the amount of time that supercomputers spend carrying out instructions—pipelining and parallelism. Pipelining allows multiple operations to take place at the same time in the supercomputer’s CPU by grouping together pieces of data that need to have the same sequence of operations performed on them and then feeding them through the CPU one after the other. The general idea of parallelism is to process data and instructions in parallel rather than in sequence.
In pipelining, the various logic circuits (electronic circuits within the CPU that perform arithmetic calculations) used on a specific calculation are continuously in use, with data streaming from one logic unit to the next without interruption. For instance, a sequence of operations on a large group of numbers might be to add adjacent numbers together in pairs beginning with the first and second numbers, then to multiply these results by some constant, and finally to store these results in memory. The addition operation would be Step 1, the multiplication operation would be Step 2, and the assigning of the result to a memory location would be Step 3 in the sequence. The CPU could perform the sequence of operations on the first pair of numbers, store the result in memory and then pass the second pair of numbers through, and continue on like this. For a small group of numbers this would be fine, but since supercomputers perform calculations on massive groups of numbers this technique would be inefficient, because only one operation at a time is being performed.
Pipelining overcomes the source of inefficiency associated with the CPU performing a sequence of operations on only one piece of data at a time until the sequence is finished. The pipeline method is to perform Step 1 on the first pair of data and move it to Step 2. As the result of the first operation moves to Step 2, the second pair of data moves into Step 1. Steps 1 and 2 are then performed simultaneously on their respective data, and the results of the operations are moved ahead in the pipeline, or the sequence of operations performed on a group of data. Hence the third pair of numbers is in Step 1, the second pair is in Step 2, and the first pair is in Step 3. The remainder of the calculations proceed in this way, with the specific logic units in the sequence always operating simultaneously on data.
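The three-step pipeline described above can be modeled in a few lines of Python (a toy simulation of the stage-by-stage flow, not of real CPU hardware):

```python
# Toy model of the three-step pipeline: Step 1 adds adjacent pairs, Step 2
# multiplies by a constant, Step 3 stores the result in memory. On every
# clock tick all three stages hold (and process) different pieces of data.
def pipeline(pairs, constant):
    stage1 = stage2 = None                 # contents of the pipeline stages
    memory = []
    for incoming in pairs + [None, None]:  # two extra ticks to drain the pipe
        # one clock tick: each stage passes its result forward simultaneously
        if stage2 is not None:
            memory.append(stage2)                                   # Step 3
        stage2 = stage1 * constant if stage1 is not None else None  # Step 2
        stage1 = sum(incoming) if incoming is not None else None    # Step 1
    return memory

print(pipeline([(1, 2), (3, 4), (5, 6)], 10))   # -> [30, 70, 110]
```

After the pipeline fills, one finished result emerges per tick, which is the source of the speedup over processing one pair at a time.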
The example used above to illustrate pipelining can also be used to illustrate the concept of parallelism (see Parallel Processing). A computer that parallel-processed data would perform Step 1 on multiple pieces of data simultaneously, then move these to Step 2, then to Step 3, each step being performed on the multiple pieces of data simultaneously. One way to do this is to have multiple logic circuits in the CPU that perform the same sequence of operations. Another way is to link together multiple CPUs, synchronize them (meaning that they all perform an operation at exactly the same time) and have each CPU perform the necessary operation on one of the pieces of data.
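The parallel version of the same example might be sketched as follows, with each list position standing in for one synchronized logic unit:

```python
# Sketch of the data-parallel idea: every "logic unit" applies the same step
# to its own piece of data, and all units advance together, step by step.
data = [(1, 2), (3, 4), (5, 6), (7, 8)]
constant = 10

step1 = [a + b for a, b in data]          # all units add their pair at once
step2 = [s * constant for s in step1]     # all units multiply at once
memory = list(step2)                      # all results stored at once
print(memory)                             # -> [30, 70, 110, 150]
```

Here the whole list moves through each step together, whereas in the pipelined version the individual pairs occupy different steps at the same moment.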
Pipelining and parallelism are combined and used to a greater or lesser extent in all supercomputers. Until the early 1990s, parallelism achieved through the interconnection of CPUs was limited to between 2 and 16 CPUs connected in parallel. However, the rapid increase in processing speed of off-the-shelf microprocessors used in personal computers and workstations made possible massively parallel processing (MPP) supercomputers. While the individual processors used in MPP supercomputers are not as fast as specially designed supercomputer CPUs, they are much less expensive, and because of this, hundreds or even thousands of them can be linked together to achieve extreme parallelism.
III SUPERCOMPUTER PERFORMANCE
Supercomputers are used to create mathematical models of complex phenomena. These models usually contain long sequences of numbers that are manipulated by the supercomputer with a kind of mathematics called matrix arithmetic. For example, to accurately predict the weather, scientists use mathematical models that contain current temperature, air pressure, humidity, and wind velocity measurements at many neighboring locations and altitudes. Using these numbers as data, the computer makes many calculations to simulate the physical interactions that will likely occur during the forecast period.
When supercomputers perform matrix arithmetic on large sets of numbers, it is often necessary to multiply many pairs of numbers together and to then add up each of their individual products. A simple example of such a calculation is: (4 × 6) + (7 × 2) + (9 × 5) + (8 × 8) + (2 × 9) = 165. In real problems, the strings of numbers used in calculations are usually much longer, often containing hundreds or thousands of pairs of numbers. Furthermore, the numbers used are not simple integers but more complicated types of numbers called floating point numbers that allow a wide range of digits before and after the decimal point, for example 5,063,937.9120834.
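The worked example above is a dot product (multiply the pairs, then sum the products), which a few lines of Python reproduce:

```python
# The core operation of the matrix arithmetic described above: multiply many
# pairs of floating-point numbers and sum their products (a dot product).
a = [4.0, 7.0, 9.0, 8.0, 2.0]
b = [6.0, 2.0, 5.0, 8.0, 9.0]
result = sum(x * y for x, y in zip(a, b))
print(result)   # -> 165.0
```

In real workloads the vectors contain thousands of floating-point numbers, and a supercomputer's performance is judged largely by how many such multiply-add operations it can sustain per second.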
The various operations of adding, subtracting, multiplying, and dividing floating-point numbers are collectively called floating-point operations. An important way of measuring a supercomputer’s performance is in the peak number of floating-point operations per second (FLOPS) that it can do. In the mid-1990s, the peak computational rate for state-of-the-art supercomputers was between 1 and 200 Gigaflops (billion floating-point operations per second), depending on the specific model and configuration of the supercomputer.
In July 1995, computer scientists at the University of Tokyo, in Japan, broke the 1 teraflops (1 trillion floating-point operations per second) mark with a computer they designed to perform astrophysical simulations. Named GRAPE-4 (GRAvity PipE number 4), this MPP supercomputer consisted of 1692 interconnected processors. In November 1996, Cray Research debuted the CRAY T3E-900, the first commercially available supercomputer to offer teraflops performance. In 1997 the Intel Corporation installed the teraflop machine Janus at Sandia National Laboratories in New Mexico. Janus is composed of 9072 interconnected processors. Scientists use Janus for classified work such as weapons research as well as for unclassified scientific research such as modeling the impact of a comet on the earth.
The definition of a supercomputer constantly changes with technological progress. The same technology that increases the speed of supercomputers also increases the speed of other types of computers. For instance, the first computer to be called a supercomputer, the Cray-1 developed by Cray Research and first sold in 1976, had a peak speed of 167 megaflops. This is only a few times faster than today's standard personal computers, and well within the reach of some workstations.

Contributed By:
Steve Nelson

(iii)
(iv) Byte and Word
Byte, in computer science, a unit of information built from bits, the smallest units of information used in computers. A bit has one of two values, either 0 or 1; these values physically correspond to whether transistors and other electronic circuitry in a computer are on or off. A byte is usually composed of 8 bits, although bytes composed of 16 bits are also used. A word, by contrast, is the natural unit of data that a particular processor handles in a single operation; depending on the computer's design, a word is typically 16, 32, or 64 bits long, that is, two or more bytes. See Number Systems.
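The relationship between bits and an 8-bit byte can be shown directly (the byte value 65 here is an arbitrary example):

```python
# One byte is 8 bits, each either 0 or 1, so a byte can represent
# 2**8 = 256 distinct values. Below, the byte 0b01000001 (decimal 65)
# is broken into its individual bits, most significant bit first.
byte = 0b01000001
bits = [(byte >> i) & 1 for i in range(7, -1, -1)]
print(bits)          # -> [0, 1, 0, 0, 0, 0, 0, 1]
print(2 ** 8)        # -> 256 possible values per 8-bit byte
```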

(v)RAM and Cache Memory
Cache (computer), in computer science, an area of memory that holds frequently accessed data or program instructions for the purpose of speeding a computer system's performance. A cache consists of ultrafast static random-access memory (SRAM) chips, which rapidly move data to the central processing unit (the device in a computer that interprets and executes instructions). The cache minimizes the amount of time the processor must sit idle while it waits for data; this idle time is measured in clock cycles, the basic timing unit of the processor. The effectiveness of the cache depends on the speed of the chips and on the quality of the algorithm that predicts which data the processor is most likely to request next. See also Disk Cache.
RAM, in computer science, acronym for random access memory. Semiconductor-based memory that can be read and written by the microprocessor or other hardware devices. The storage locations can be accessed in any order. Note that the various types of ROM memory are capable of random access. The term RAM, however, is generally understood to refer to volatile memory, which can be written as well as read. See also Computer; EPROM; PROM.
Buffer (computer science), in computer science, an intermediate repository of data—a reserved portion of memory in which data is temporarily held pending an opportunity to complete its transfer to or from a storage device or another location in memory. Some devices, such as printers or the adapters supporting them, commonly have their own buffers.
Q6:
(i)
(ii)Television
Television
I INTRODUCTION
Television, system of sending and receiving pictures and sound by means of electronic signals transmitted through wires and optical fibers or by electromagnetic radiation. These signals are usually broadcast from a central source, a television station, to reception devices such as television sets in homes or relay stations such as those used by cable television service providers. Television is the most widespread form of communication in the world. Though most people will never meet the leader of a country, travel to the moon, or participate in a war, they can observe these experiences through the images on their television.
Television has a variety of applications in society, business, and science. The most common use of television is as a source of information and entertainment for viewers in their homes. Security personnel also use televisions to monitor buildings, manufacturing plants, and numerous public facilities. Public utility employees use television to monitor the condition of an underground sewer line, using a camera attached to a robot arm or remote-control vehicle. Doctors can probe the interior of a human body with a microscopic television camera without having to conduct major surgery on the patient. Educators use television to reach students throughout the world.
People in the United States have the most television sets per person of any country, with 835 sets per 1,000 people as of 2000. Canadians possessed 710 sets per 1,000 people during the same year. Japan, Germany, Denmark, and Finland follow North America in the number of sets per person.
II HOW TELEVISION WORKS
A television program is created by focusing a television camera on a scene. The camera changes light from the scene into an electric signal, called the video signal, which varies depending on the strength, or brightness, of light received from each part of the scene. In color television, the camera produces an electric signal that varies depending on the strength of each color of light.
Three or four cameras are typically used to produce a television program (see Television Production). The video signals from the cameras are processed in a control room, then combined with video signals from other cameras and sources, such as videotape recorders, to provide the variety of images and special effects seen during a television program.
Audio signals from microphones placed in or near the scene also flow to the control room, where they are amplified and combined. Except in the case of live broadcasts (such as news and sports programs) the video and audio signals are recorded on tape and edited, assembled with the use of computers into the final program, and broadcast later. In a typical television station, the signals from live and recorded features, including commercials, are put together in a master control room to provide the station's continuous broadcast schedule. Throughout the broadcast day, computers start and stop videotape machines and other program sources, and switch the various audio and visual signals. The signals are then sent to the transmitter.
The transmitter amplifies the video and audio signals, and uses the electronic signals to modulate, or vary, carrier waves (oscillating electric currents that carry information). The carrier waves are combined (diplexed), then sent to the transmitting antenna, usually placed on the tallest available structure in a given broadcast area. In the antenna, the oscillations of the carrier waves generate electromagnetic waves of energy that radiate horizontally throughout the atmosphere. The waves excite weak electric currents in all television-receiving antennas within range. These currents have the characteristics of the original picture and sound currents. The currents flow from the antenna attached to the television into the television receiver, where they are electronically separated into audio and video signals. These signals are amplified and sent to the picture tube and the speakers, where they produce the picture and sound portions of the program.
III THE TELEVISION CAMERA
The television camera is the first tool used to produce a television program. Most cameras have three basic elements: an optical system for capturing an image, a pickup device for translating the image into electronic signals, and an encoder for encoding signals so they may be transmitted.
A Optical System
The optical system of a television camera includes a fixed lens that is used to focus the scene onto the front of the pickup device. Color cameras also have a system of prisms and mirrors that separate incoming light from a scene into the three primary colors: red, green, and blue. Each beam of light is then directed to its own pickup device. Almost any color can be reproduced by combining these colors in the appropriate proportions. Most inexpensive consumer video cameras use a filter that breaks light from an image into the three primary colors.
B Pickup Device
The pickup device takes light from a scene and translates it into electronic signals. The first pickup devices used in cameras were camera tubes. The first camera tube used in television was the iconoscope. Invented in the 1920s, it needed a great deal of light to produce a signal, so it was impractical to use in a low-light setting, such as an outdoor evening scene. The image-orthicon tube and the vidicon tube were invented in the 1940s and were a vast improvement on the iconoscope. They needed only about as much light to record a scene as human eyes need to see. Instead of camera tubes, most modern cameras now use light-sensitive integrated circuits (tiny, electronic devices) called charge-coupled devices (CCDs).
When recording television images, the pickup device replaces the function of film used in making movies. In a camera tube pickup device, the front of the tube contains a layer of photosensitive material called a target. In the image-orthicon tube, the target material is photoemissive—that is, it emits electrons when it is struck by light. In the vidicon camera tube, the target material is photoconductive—that is, it conducts electricity when it is struck by light. In both cases, the lens of a camera focuses light from a scene onto the front of the camera tube, and this light causes changes in the target material. The light image is transformed into an electronic image, which can then be read from the back of the target by a beam of electrons (tiny, negatively charged particles).
The beam of electrons is produced by an electron gun at the back of the camera tube. The beam is controlled by a system of electromagnets that make the beam systematically scan the target material. Whenever the electron beam hits the bright parts of the electronic image on the target material, the tube emits a high voltage, and when the beam hits a dark part of the image, the tube emits a low voltage. This varying voltage is the electronic television signal.
A charge-coupled device (CCD) can be much smaller than a camera tube and is much more durable. As a result, cameras with CCDs are more compact and portable than those using a camera tube. The image they create is less vulnerable to distortion and is therefore clearer. In a CCD, the light from a scene strikes an array of photodiodes arranged on a silicon chip. Photodiodes are devices that conduct electricity when they are struck by light; they send this electricity to tiny capacitors. The capacitors store the electrical charge, with the amount of charge stored depending on the strength of the light that struck the photodiode. The CCD converts the incoming light from the scene into an electrical signal by releasing the charges from the photodiodes in an order that follows the scanning pattern that the receiver will follow in re-creating the image.
C Encoder
In color television, the signals from the three camera tubes or charge-coupled devices are first amplified, then sent to the encoder before leaving the camera. The encoder combines the three signals into a single electronic signal that contains the brightness information of the colors (luminance). It then adds another signal that contains the code used to combine the colors (color burst), and the synchronization information used to direct the television receiver to follow the same scanning pattern as the camera. The color television receiver uses the color burst part of the signal to separate the three colors again.
IV SCANNING
Television cameras and television receivers use a procedure called scanning to record visual images and re-create them on a television screen. The television camera records an image, such as a scene in a television show, by breaking it up into a series of lines and scanning over each line with the beam or beams of electrons contained in the camera tube. The pattern is created in a CCD camera by the array of photodiodes. One scan of an image produces one static picture, like a single frame in a film. The camera must scan a scene many times per second to record a continuous image. In the television receiver, another electron beam—or set of electron beams, in the case of color television—uses the signals recorded by the camera to reproduce the original image on the receiver's screen. Just like the beam or beams in the camera, the electron beam in the receiver must scan the screen many times per second to reproduce a continuous image.
In order for television to work, television images must be scanned and recorded in the same manner as television receivers reproduce them. In the United States, broadcasters and television manufacturers have agreed on a standard of breaking images down into 525 horizontal lines, and scanning images 30 times per second. In Europe, most of Asia, and Australia, images are broken down into 625 lines, and they are scanned 25 times per second. Special equipment can be used to make television images that have been recorded in one standard fit a television system that uses a different standard. Telecine equipment (from the words television and cinema) is used to convert film and slide images to television signals. The images from film projectors or slides are directed by a system of mirrors toward the telecine camera, which records the images as video signals.
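As a quick arithmetic check on the two standards, the number of lines scanned each second follows directly from the figures above:

```python
# Nominal scanning parameters of the two broadcast standards.
us_lines, us_frames_per_sec = 525, 30  # United States standard
eu_lines, eu_frames_per_sec = 625, 25  # Europe/Asia/Australia standard

# Lines scanned per second (the horizontal scan rate).
us_line_rate = us_lines * us_frames_per_sec  # 15,750 lines per second
eu_line_rate = eu_lines * eu_frames_per_sec  # 15,625 lines per second

print(us_line_rate, eu_line_rate)
```

Despite the different line counts and frame rates, the two systems scan almost the same number of lines per second.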
The scanning method that is most commonly used today is called interlaced scanning. It produces a clear picture that does not fade. When an image is scanned line by line from top to bottom, the top of the image on the screen will begin to fade by the time the electron beam reaches the bottom of the screen. With interlaced scanning, odd-numbered lines are scanned first, and the remaining even-numbered lines are scanned next. A full image is still produced 30 times a second, but the electron beam travels from the top of the screen to the bottom of the screen twice for every time a full image is produced.
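The odd-then-even line order of interlaced scanning can be sketched as follows (a toy illustration; in a real receiver the two fields are also interleaved in time):

```python
def interlaced_scan_order(total_lines):
    """Return the order in which lines are visited under interlaced
    scanning: all odd-numbered lines first (one field), then all
    even-numbered lines (the second field). Lines are numbered from 1."""
    odd_field = list(range(1, total_lines + 1, 2))
    even_field = list(range(2, total_lines + 1, 2))
    return odd_field + even_field

# With a toy 9-line image, the beam visits lines in this order:
print(interlaced_scan_order(9))
# [1, 3, 5, 7, 9, 2, 4, 6, 8]
```

Each field refreshes the whole height of the screen, which is why the top of the image has less time to fade before the beam returns.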
V TRANSMISSION OF TELEVISION SIGNALS
The audio and video signals of a television program are broadcast through the air by a transmitter. The transmitter superimposes the information in the camera's electronic signals onto carrier waves. The transmitter amplifies the carrier waves, making them much stronger, and sends them to a transmitting antenna. This transmitting antenna radiates the carrier waves in all directions, and the waves travel through the air to antennas connected to television sets or relay stations.
A The Transmitter
The transmitter superimposes the information from the electronic television signal onto carrier waves by modulating (varying) either the wave's amplitude, which corresponds to the wave's strength, or the wave's frequency, which corresponds to the number of times the wave oscillates each second (see Radio: Modulation). The amplitude of one carrier wave is modulated to carry the video signal (amplitude modulation, or AM) and the frequency of another wave is modulated to carry the audio signal (frequency modulation, or FM). These waves are combined to produce a carrier wave that contains both the video and audio information. The transmitter first generates and modulates the wave at a low power of several watts. After modulation, the transmitter amplifies the carrier signal to the desired power level, sometimes many kilowatts (1,000 watts), depending on how far the signal needs to travel, and then sends the carrier wave to the transmitting antenna.
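Amplitude modulation as described can be sketched numerically; the carrier and signal frequencies below are arbitrary toy values, far lower than actual broadcast frequencies:

```python
import math

def am_modulate(carrier_freq, signal, sample_rate, depth=0.5):
    """Amplitude-modulate a carrier: the carrier's strength (envelope)
    is varied in step with the normalized (-1..1) signal samples."""
    out = []
    for n, s in enumerate(signal):
        t = n / sample_rate
        carrier = math.cos(2 * math.pi * carrier_freq * t)
        out.append((1 + depth * s) * carrier)  # envelope follows the signal
    return out

# A toy "video" signal modulated onto a 1 kHz carrier, sampled at 8 kHz.
video = [math.sin(2 * math.pi * 50 * n / 8000) for n in range(80)]
modulated = am_modulate(1000, video, 8000)
```

Frequency modulation for the audio works analogously, except that the signal varies the carrier's instantaneous frequency rather than its amplitude.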
The frequency of carrier waves is measured in hertz (Hz), which is equal to the number of wave peaks that pass by a point every second. The frequency of the modulated carrier wave varies, covering a range, or band, of about 4 million hertz, or 4 megahertz (4 MHz). This band is much wider than the band needed for radio broadcasting, which is about 10,000 Hz, or 10 kilohertz (10 kHz). Television stations that broadcast in the same area send out carrier waves on different bands of frequencies, each called a channel, so that the signals from different stations do not mix. To accommodate all the channels, which are spaced at least 6 MHz apart, television carrier frequencies are very high. Six megahertz is only a small fraction of the available spectrum, since television stations broadcast at frequencies between roughly 50 and 800 MHz.
In the United States and Canada, two ranges of frequency bands cover 68 different channels. The first range is called very high frequency (VHF), and it includes frequencies from 54 to 72 MHz, from 76 to 88 MHz, and from 174 to 216 MHz. These frequencies correspond to channels 2 through 13 on a television set. The second range, ultrahigh frequency (UHF), includes frequencies from 470 MHz to 806 MHz, and it corresponds to channels 14 through 69 (see Radio and Television Broadcasting).
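Because channels are spaced 6 MHz apart within each band, a channel number maps to its frequency range by simple arithmetic. The sketch below uses the standard U.S./Canada band edges (UHF channel 14 begins at 470 MHz); the function name is illustrative:

```python
def channel_band(channel):
    """Return the (low, high) edge frequencies in MHz of a U.S./Canada
    broadcast television channel, using the 6 MHz channel spacing."""
    if 2 <= channel <= 4:        # VHF low band: 54-72 MHz
        low = 54 + (channel - 2) * 6
    elif 5 <= channel <= 6:      # VHF low band: 76-88 MHz
        low = 76 + (channel - 5) * 6
    elif 7 <= channel <= 13:     # VHF high band: 174-216 MHz
        low = 174 + (channel - 7) * 6
    elif 14 <= channel <= 69:    # UHF band: 470-806 MHz
        low = 470 + (channel - 14) * 6
    else:
        raise ValueError("not a broadcast television channel")
    return low, low + 6

print(channel_band(2))   # (54, 60)
print(channel_band(13))  # (210, 216)
print(channel_band(69))  # (800, 806)
```

The gaps between the VHF sub-bands are frequencies assigned to other services, such as FM radio.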
The high-frequency waves radiated by transmitting antennas can travel only in a straight line, and may be blocked by obstacles between the transmitting and receiving antennas. For this reason, transmitting antennas must be placed on tall buildings or towers. In practice, these transmitters have a range of about 120 km (75 mi). In addition to being blocked, some television signals may reflect off buildings or hills and reach a receiving antenna slightly later than the signals that travel directly to the antenna. The result is a ghost, or second image, that appears on the television screen. Television signals may, however, be sent clearly from almost any point on earth to any other—and from spacecraft to earth—by means of cables, microwave relay stations, and communications satellites.
B Cable Transmission
Cable television was first developed in the late 1940s to serve shadow areas—that is, areas that are blocked from receiving signals from a station's transmitting antenna. In these areas, a community antenna receives the signal, and the signal is then redistributed to the shadow areas by coaxial cable (a large cable with a wire core that can transmit the wide band of frequencies required for television) or, more recently, by fiber-optic cable. Viewers in most areas can now subscribe to a cable television service, which provides a wide variety of television programs and films adapted for television that are transmitted by cable directly to the viewer's television set. Digital data-compression techniques, which convert television signals to digital code in an efficient way, have increased cable's capacity to 500 or more channels.
C Microwave Relay Transmission
Microwave relay stations are tall towers that receive television signals, amplify them, and retransmit them as a microwave signal to the next relay station. Microwaves are electromagnetic waves that are much shorter than normal television carrier waves and can travel farther. The stations are placed about 50 km (30 mi) apart. Television networks once relied on relay stations to broadcast to affiliate stations located in cities far from the original source of the broadcast. The affiliate stations received the microwave transmission and rebroadcast it as a normal television signal to the local area. This system has now been replaced almost entirely by satellite transmission in which networks send or uplink their program signals to a satellite that in turn downlinks the signals to affiliate stations.
D Satellite Transmission
Communications satellites receive television signals from a ground station, amplify them, and relay them back to the earth over an antenna that covers a specified terrestrial area. The satellites circle the earth in a geosynchronous orbit, which means they stay above the same place on the earth at all times. Instead of a normal aerial antenna, receiving dishes are used to receive the signal and deliver it to the television set or station. The dishes can be fairly small for home use, or large and powerful, such as those used by cable and network television stations.
Satellite transmissions are used to efficiently distribute television and radio programs from one geographic location to another by networks; cable companies; individual broadcasters; program providers; and industrial, educational, and other organizations. Programs intended for specific subscribers are scrambled so that only the intended recipients, with appropriate decoders, can receive the program.
Direct-broadcast satellites (DBS) are used worldwide to deliver TV programming directly to TV receivers through small home dishes. The Federal Communications Commission (FCC) licensed several firms in the 1980s to begin DBS service in the United States. The actual launch of DBS satellites, however, was delayed due to the economic factors involved in developing a digital video compression system. The arrival in the early 1990s of digital compression made it possible for a single DBS satellite to carry more than 200 TV channels. DBS systems in North America operate in the Ku band (12-18 GHz). DBS home systems consist of the receiving dish antenna and a low-noise amplifier that boosts the antenna signal level and feeds it to a coaxial cable. A receiving box converts the superhigh frequency (SHF) signals to lower frequencies and puts them on channels that the home TV set can display.
VI TELEVISION RECEIVER
The television receiver translates the pulses of electric current from the antenna or cable back into images and sound. A traditional television set integrates the receiver, audio system, and picture tube into one device. However, some cable TV systems use a separate component such as a set-top box as a receiver. A high-definition television (HDTV) set integrates the receiver directly into the set like a traditional TV. However, some televisions receive high-definition signals and display them on a monitor. In these instances, an external receiver is required.
A Tuner
The tuner blocks all signals other than that of the desired channel. Blocking is done by the radio frequency (RF) amplifier, which is set to amplify only the 6-MHz-wide frequency band transmitted by the selected television station; all other frequencies are rejected. A channel selector connected to the amplifier determines which band is amplified; when a new channel is selected, the amplifier is reset accordingly. The tuner amplifies the weak signal intercepted by the antenna and partially demodulates (decodes) it by converting the carrier frequency to a lower frequency, the intermediate frequency. Intermediate-frequency amplifiers further increase the strength of the signal, after which the video, audio, and scanning signals are separated from the carrier waves by demodulation.
B Audio System
The audio system consists of a discriminator, which translates the audio portion of the carrier wave back into an electronic audio signal; an amplifier; and a speaker. The amplifier strengthens the audio signal from the discriminator and sends it to the speaker, which converts the electrical waves into sound waves that travel through the air to the listener.
C Picture Tube
The television picture tube receives video signals from the tuner and translates the signals back into images. The images are created by an electron gun in the back of the picture tube, which shoots a beam of electrons toward the back of the television screen. A black-and-white picture tube contains just one electron gun, while a color picture tube contains three electron guns, one for each of the primary colors of light (red, green, and blue). Part of the video signal goes to a magnetic coil that directs the beam and makes it scan the screen in the same manner as the camera originally scanned the scene. The rest of the signal directs the strength of the electron beam as it strikes the screen. The screen is coated with phosphor, a substance that glows when it is struck by electrons (see Luminescence). The stronger the electron beam, the stronger the glow and the brighter that section of the scene appears.
In color television, a portion of the video signal is used to separate out the three color signals, which are then sent to their corresponding electron beams. The screen is coated with tiny phosphor strips or dots that are arranged in groups of three: one strip or dot that emits blue, one that emits green, and one that emits red. Before each electron beam reaches the screen, it passes through a shadow mask located just behind the screen. The shadow mask is a layer of opaque material that is covered with slots or holes. It partially blocks the beam corresponding to one color and prevents it from hitting dots of another color. As a result, the electron beam directed by signals for the color blue can strike and light up only blue dots. The result is similar for the beams corresponding to red and green. Images in the three different colors are produced on the television screen. The eye automatically combines these images to produce a single image having the entire spectrum of colors formed by mixing the primary colors in various proportions.
VII TELEVISION'S HISTORY
The scientific principles on which television is based were discovered in the course of basic research. Only much later were these concepts applied to television as it is known today. The first practical television system began operating in the 1940s.
In 1873 the Scottish scientist James Clerk Maxwell predicted the existence of the electromagnetic waves that make it possible to transmit ordinary television broadcasts. Also in 1873 the English scientist Willoughby Smith and his assistant Joseph May noticed that the electrical conductivity of the element selenium changes when light falls on it. This property, known as photoconductivity, is used in the vidicon television camera tube. In 1888 the German physicist Wilhelm Hallwachs noticed that certain substances emit electrons when exposed to light. This effect, called photoemission, was applied to the image-orthicon television camera tube.
Although several methods of changing light into electric current were discovered, it was some time before the methods were applied to the construction of a television system. The main problem was that the currents produced were weak and no effective method of amplifying them was known. Then, in 1906, the American engineer Lee De Forest patented the triode vacuum tube. By 1920 the tube had been improved to the point where it could be used to amplify electric currents for television.
A Nipkow Disk
Some of the earliest work on television began in 1884, when the German engineer Paul Nipkow designed the first true television mechanism. In front of a brightly lit picture, he placed a scanning disk (called a Nipkow disk) with a spiral pattern of holes punched in it. As the disk revolved, the first hole would cross the picture at the top. The second hole passed across the picture a little lower down, the third hole lower still, and so on. In effect, he designed a disk with its own form of scanning. With each complete revolution of the disk, all parts of the picture would be briefly exposed in turn. The disk revolved quickly, accomplishing the scanning within one-fifteenth of a second. Similar disks rotated in the camera and receiver. Light passing through these disks created crude television images.
Nipkow's mechanical scanner was used from 1923 to 1925 in experimental television systems developed in the United States by the inventor Charles F. Jenkins, and in England by the inventor John L. Baird. The pictures were crude but recognizable. The receiver also used a Nipkow disk placed in front of a lamp whose brightness was controlled by the signal from the light-sensitive tube behind the disk in the transmitter. In 1926 Baird demonstrated a system that used a 30-hole Nipkow disk.
B Electronic Television
While the mechanical scanning method was being developed, an electronic method of scanning was conceived in 1908 by the English inventor A. A. Campbell-Swinton. He proposed using a screen to collect a charge whose pattern would correspond to the scene, and an electron gun to neutralize this charge and create a varying electric current. This concept was used by the Russian-born American physicist Vladimir Kosma Zworykin in his iconoscope camera tube of the 1920s. A similar arrangement was later used in the image-orthicon tube.
The American inventor and engineer Philo Taylor Farnsworth also devised an electronic television system in the 1920s. He called his television camera, which converted each element of an image into an electrical signal, an image dissector. Farnsworth continued to improve his system in the 1930s, but his project lost its financial backing at the beginning of World War II (1939-1945). Many aspects of Farnsworth's image dissector were also used in Zworykin's more successful iconoscope camera.
Cathode rays, or beams of electrons in evacuated glass tubes, were first noted by the British chemist and physicist Sir William Crookes in 1878. By 1908 Campbell-Swinton and a Russian, Boris Rosing, had independently suggested that a cathode-ray tube (CRT) be used to reproduce the television picture on a phosphor-coated screen. The CRT was developed for use in television during the 1930s by the American electrical engineer Allen B. DuMont. DuMont's method of picture reproduction is essentially the same as the one used today.
The first home television receiver was demonstrated in Schenectady, New York, on January 13, 1928, by the American inventor Ernst F. W. Alexanderson. The images on the 76-mm (3-in) screen were poor and unsteady, but the set could be used in the home. A number of these receivers were built by the General Electric Company (GE) and distributed in Schenectady. On May 10, 1928, station WGY began regular broadcasting to this area.
C Public Broadcasting
The first public broadcasting of television programs took place in London in 1936. Broadcasts from two competing firms were shown. Marconi-EMI produced a 405-line frame at 25 frames per second, and Baird Television produced a 240-line picture at 25 frames per second. In early 1937 the Marconi system, clearly superior, was chosen as the standard. In 1941 the United States adopted a 525-line, 30-image-per-second standard.
The first regular television broadcasts began in the United States in 1939, but after two years they were suspended until shortly after the end of World War II in 1945. A television broadcasting boom began just after the war in 1946, and the industry grew rapidly. The development of color television had always lagged a few steps behind that of black-and-white (monochrome) television. At first, this was because color television was technically more complex. Later, however, the growth of color television was delayed because it had to be compatible with monochrome—that is, color television would have to use the same channels as monochrome television and be receivable in black and white on monochrome sets.
D Color Television
It was realized as early as 1904 that color television was possible using the three primary colors of light: red, green, and blue. In 1928 Baird demonstrated color television using a Nipkow disk in which three sets of openings scanned the scene. A fairly refined color television system was introduced in New York City in 1940 by the Hungarian-born American inventor Peter Goldmark. In 1951 public broadcasting of color television was begun using Goldmark's system. However, the system was incompatible with monochrome television, and the experiment was dropped at the end of the year. Compatible color television was perfected in 1953, and public broadcasting in color was revived a year later.
Other developments that improved the quality of television were larger screens and better technology for broadcasting and transmitting television signals. Early television screens were either 18 or 25 cm (7 or 10 in) diagonally across. Television screens now come in a range of sizes. Those that use built-in cathode-ray tubes (CRTs) measure as large as 89 or 100 cm (35 or 40 in) diagonally. Projection televisions (PTVs), first introduced in the 1970s, now come with screens as large as 2 m (7 ft) diagonally. The most common are rear-projection sets in which three CRTs beam their combined light indirectly to a screen via an assembly of lenses and mirrors. Another type of PTV is the front-projection set, which is set up like a motion picture projector to project light across a room to a separate screen that can be as large as a wall in a home allows. Newer types of PTVs use liquid-crystal display (LCD) technology or an array of micro mirrors, also known as a digital light processor (DLP), instead of cathode-ray tubes. Manufacturers have also developed very small, portable television sets with screens that are 7.6 cm (3 in) diagonally across.
E Television in Space
Television evolved from an entertainment medium to a scientific medium during the exploration of outer space. Knowing that broadcast signals could be sent from transmitters in space, the National Aeronautics and Space Administration (NASA) began developing satellites with television cameras. Unmanned spacecraft of the Ranger and Surveyor series relayed thousands of close-up pictures of the moon's surface back to earth for scientific analysis and preparation for lunar landings. The successful U.S. manned landing on the moon in July 1969 was documented with live black-and-white broadcasts made from the surface of the moon. NASA's use of television helped in the development of photosensitive camera lenses and more-sophisticated transmitters that could send images from a quarter-million miles away.
Since 1960 television cameras have also been used extensively on orbiting weather satellites. Video cameras trained on Earth record pictures of cloud cover and weather patterns during the day, and infrared cameras (cameras that record light waves radiated at infrared wavelengths) detect surface temperatures. The ten Television Infrared Observation Satellites (TIROS) launched by NASA paved the way for the operational satellites of the Environmental Science Services Administration (ESSA), which in 1970 became a part of the National Oceanic and Atmospheric Administration (NOAA). The pictures returned from these satellites aid not only weather prediction but also understanding of global weather systems. High-resolution cameras mounted in Landsat satellites have been successfully used to provide surveys of crop, mineral, and marine resources.
F Home Recording
In time, the process of watching images on a television screen made people interested in either producing their own images or watching programming at their leisure, rather than during standard broadcasting times. It became apparent that programming on videotape—which had been in use since the 1950s—could be adapted for use by the same people who were buying televisions. Affordable videocassette recorders (VCRs) were introduced in the 1970s and in the 1980s became almost as common as television sets. 
During the late 1990s and early 2000s the digital video disc (DVD) player had the most successful product launch in consumer electronics history. According to the Consumer Electronics Association (CEA), which represents manufacturers and retailers of audio and video products, 30 million DVD players were sold in the United States in a record five-year period from 1997 to 2001. It took compact disc (CD) players 8 years and VCRs 13 years to achieve that 30-million milestone. The same size as a CD, a DVD can store enough data to hold a full-length motion picture with a resolution twice that of a videocassette. The DVD player also offered the digital surround-sound quality experienced in a state-of-the-art movie theater. Beginning in 2001 some DVD players also offered home recording capability.
G Digital Television
Digital television receivers, which convert the analog, or continuous, electronic television signals received by an antenna into an electronic digital code (a series of ones and zeros), are currently available. The analog signal is first sampled and stored as a digital code, then processed, and finally retrieved. This method provides a cleaner signal that is less vulnerable to distortion, but in the event of technical difficulties, the viewer is likely to receive no picture at all rather than the degraded picture that sometimes occurs with analog reception. The difference in quality between digital television and regular television is similar to the difference between a compact disc recording (using digital technology) and an audiotape or long-playing record.
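The sampling-and-quantization step described above can be sketched in a few lines of Python; the bit depth and waveform here are illustrative toy values, not a broadcast specification:

```python
import math

def digitize(analog_samples, bits=8):
    """Quantize analog samples (assumed to lie in the range -1..1)
    into integer codes of the given bit depth -- the 'ones and zeros'
    that a digital receiver stores and processes."""
    levels = 2 ** bits
    codes = []
    for s in analog_samples:
        s = max(-1.0, min(1.0, s))               # clip to the valid range
        code = int((s + 1) / 2 * (levels - 1))   # map -1..1 to 0..levels-1
        codes.append(code)
    return codes

# A toy analog waveform sampled 16 times and quantized to 8 bits.
wave = [math.sin(2 * math.pi * n / 16) for n in range(16)]
print(digitize(wave))
```

Because the receiver only has to distinguish discrete code values, small amounts of noise leave the decoded picture unchanged; it is only when the signal degrades past the decoding threshold that the picture disappears entirely.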
The high-definition television (HDTV) system was developed in the 1980s. It uses 1,080 lines and a wide-screen format, providing a significantly clearer picture than the traditional 525- and 625-line television screens. Each line in HDTV also contains more information than normal formats. HDTV is transmitted using digital technology. Because it takes a huge amount of coded information to represent a visual image—engineers believe HDTV will need about 30 million bits (ones and zeros of the digital code) each second—data-compression techniques have been developed to reduce the number of bits that need to be transmitted. With these techniques, digital systems need to continuously transmit codes only for a scene in which images are changing; the systems can compress the recurring codes for images that remain the same (such as the background) into a single code. Digital technology is being developed that will offer sharper pictures on wider screens, and HDTV with cinema-quality images.
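The idea of sending full codes only for changing image areas can be illustrated with a simple frame-difference sketch (a toy model of the principle, not the actual compression scheme used for HDTV):

```python
def frame_delta(previous, current):
    """Return only the pixels that changed between two frames, as
    (index, new_value) pairs -- unchanged background costs nothing."""
    return [(i, cur) for i, (prev, cur) in enumerate(zip(previous, current))
            if cur != prev]

def apply_delta(previous, delta):
    """Rebuild the current frame from the previous frame plus the delta."""
    frame = list(previous)
    for i, value in delta:
        frame[i] = value
    return frame

prev_frame = [10, 10, 10, 10, 10, 10]
curr_frame = [10, 10, 99, 10, 10, 42]   # only two pixels changed

delta = frame_delta(prev_frame, curr_frame)
print(delta)                                          # [(2, 99), (5, 42)]
print(apply_delta(prev_frame, delta) == curr_frame)   # True
```

Transmitting the two-entry delta instead of the full six-pixel frame is the essence of the saving; real systems add motion compensation and statistical coding on top of this idea.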
A fully digital system was demonstrated in the United States in the 1990s. A common world standard for digital television, MPEG-2, was agreed on in April 1993 at a meeting of engineers representing manufacturers and broadcasters from 18 countries. Because HDTV receivers initially cost much more than regular television sets, and broadcasts of HDTV and regular television are incompatible, the transition from one format to the next could take many years. The method endorsed by the U.S. Congress and the FCC to ease this transition is to give existing television networks a second band of frequencies on which to broadcast, allowing networks to broadcast in both formats at the same time. Engineers are also working on making HDTV compatible with computers and telecommunications equipment so that HDTV technology may be applied to other systems besides home television, such as medical devices, security systems, and computer-aided manufacturing (CAM).
H Flat Panel Display
In addition to getting clearer, televisions are also getting thinner. Flat panel displays, some just a few centimeters thick, offer an alternative to bulky cathode ray tube televisions. Even the largest flat panel display televisions are thin enough to be hung on the wall like a painting. Many flat panel TVs use liquid-crystal display (LCD) screens that make use of a special substance that changes properties when a small electric current is applied to it. LCD technology has already been used extensively in laptop computers. LCD television screens are flat, use very little electricity, and work well for small, portable television sets. LCD has not been as successful, however, for larger television screens. 
Flat panel TVs made from gas-plasma displays can be much larger. In gas-plasma displays, a small electric current stimulates an inert gas sandwiched between glass panels, including one coated with phosphors that emit light in various colors. While just 8 cm (3 in) thick, plasma screens can be more than 150 cm (60 in) diagonally. 
I Computer and Internet Integration
As online computer systems become more popular, televisions and computers are increasingly integrated. Such technologies combine the capabilities of personal computers, television, DVD players, and in some cases telephones, and greatly expand the kinds of services that can be provided. For example, computer-like hard drives in set-top recorders automatically store a TV program as it is being received so that the consumer can pause live TV, replay a scene, or skip ahead. For programs that consumers want to record for future viewing, a hard drive makes it possible to store a number of shows. Some set-top devices offer Internet access through a dial-up modem or broadband connection. Others allow the consumer to browse the World Wide Web on their TV screen. When a device has both a hard drive and a broadband connection, consumers may be able to download a specific program, opening the way for true video on demand. 
Consumers may eventually need only one system or device, known as an information appliance, which they could use for entertainment, communication, shopping, and banking in the convenience of their home.


(iii) Microwave Oven
Microwave Oven, appliance that uses electromagnetic energy to heat and cook foods. A microwave oven uses microwaves, very short radio waves commonly employed in radar and satellite communications. When concentrated within a small space, these waves efficiently heat water and other substances within foods.
In a microwave oven, an electronic vacuum tube known as a magnetron produces an oscillating beam of microwaves. Before passing into the cooking space, the microwaves are sent through a fanlike set of spinning metal blades called a stirrer. The stirrer scatters the microwaves, dispersing them evenly within the oven, where they are absorbed by the food. Within the food the microwaves pull polar molecules, particularly water molecules, into alignment with the electromagnetic field. Because the field oscillates, reversing direction billions of times per second (household ovens operate near 2.45 gigahertz), the water molecules flip back and forth equally rapidly. This vibration produces heat, which in turn cooks the food.
Microwaves cook food rapidly and efficiently because, unlike conventional ovens, they heat only the food and not the air or the oven walls. The heat spreads within food by conduction (see Heat Transfer). Microwave ovens tend to cook moist foods more quickly than dry ones, because there is more water to absorb the microwaves. However, microwaves cannot penetrate deeply into foods, sometimes making it difficult to cook thicker items evenly.
Microwaves pass through many types of glass, paper, ceramics, and plastics, making many containers composed of these materials good for holding food; microwave instructions detail exactly which containers are safe for microwave use. Metal containers are particularly unsuitable because they reflect microwaves and prevent food from cooking. Metal objects may also reflect microwaves back into the magnetron and cause damage. The door of the oven should always be securely closed and properly sealed to prevent escape of microwaves. Leakage of microwaves affects cooking efficiency and can pose a health hazard to anyone near the oven.
The discovery that microwaves could cook food was accidental. In 1945 Percy L. Spencer, a technician at the Raytheon Company, was experimenting with a magnetron designed to produce short radio waves for a radar system. Standing close to the magnetron, he noticed that a candy bar in his pocket melted even though he felt no heat. Raytheon developed this food-heating capacity and introduced the first microwave oven, then called a radar range, in the early 1950s. Although it was slow to catch on at first, the microwave oven has since grown steadily in popularity to its current status as a common household appliance.

(iv) Radar
I INTRODUCTION
Radar (Radio Detection And Ranging), remote detection system used to locate and identify objects. Radar signals bounce off objects in their path, and the radar system detects the echoes of signals that return. Radar can determine a number of properties of a distant object, such as its distance, speed, direction of motion, and shape. Radar can detect objects out of the range of sight and works in all weather conditions, making it a vital and versatile tool for many industries.
Radar has many uses, including aiding navigation in the sea and air, helping detect military forces, improving traffic safety, and providing scientific data. One of radar’s primary uses is air traffic control, both civilian and military. Large networks of ground-based radar systems help air traffic controllers keep track of aircraft and prevent midair collisions. Commercial and military ships also use radar as a navigation aid to prevent collisions between ships and to alert ships of obstacles, especially in bad weather conditions when visibility is poor. Military forces around the world use radar to detect aircraft and missiles, troop movement, and ships at sea, as well as to target various types of weapons. Radar is a valuable tool for the police in catching speeding motorists. In the world of science, meteorologists use radar to observe and forecast the weather (see Meteorology). Other scientists use radar for remote sensing applications, including mapping the surface of the earth from orbit, studying asteroids, and investigating the surfaces of other planets and their moons (see Radar Astronomy).
II HOW RADAR WORKS
Radar relies on sending and receiving electromagnetic radiation, usually in the form of radio waves (see Radio) or microwaves. Electromagnetic radiation is energy that moves in waves at or near the speed of light. The characteristics of electromagnetic waves depend on their wavelength. Gamma rays and X rays have very short wavelengths. Visible light is a tiny slice of the electromagnetic spectrum with wavelengths longer than X rays, but shorter than microwaves. Radar systems use long-wavelength electromagnetic radiation in the microwave and radio ranges. Because of their long wavelengths, radio waves and microwaves tend to reflect better than shorter wavelength radiation, which tends to scatter or be absorbed before it gets to the target. Radio waves at the long-wavelength end of the spectrum will even reflect off of the atmosphere’s ionosphere, a layer of electrically-charged particles in the earth’s atmosphere.
A radar system starts by sending out electromagnetic radiation, called the signal. The signal bounces off objects in its path. When the radiation bounces back, part of the signal returns to the radar system; this echo is called the return. The radar system detects the return and, depending on the sophistication of the system, simply reports the detection or analyzes the signal for more information. Even though radio waves and microwaves reflect better than electromagnetic waves of other lengths, only a tiny portion—about a billionth of a billionth—of the radar signal gets reflected back. Therefore, a radar system must be able to transmit high amounts of energy in the signal and to detect tiny amounts of energy in the return.
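Because the signal travels to the target and back at the speed of light, the delay of the return directly gives the distance. A minimal sketch of this standard ranging relation (the 1 ms delay is an invented example value):

```python
C = 299_792_458.0  # speed of light in m/s

def range_from_echo(delay_s: float) -> float:
    """One-way distance to the target: the signal travels out and
    back, so the range is half the round-trip distance."""
    return C * delay_s / 2.0

# An echo arriving 1 millisecond after the pulse was sent:
print(f"{range_from_echo(1e-3) / 1000:.1f} km")  # ~149.9 km
```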
A radar system is composed of four basic components: a transmitter, an antenna, a receiver, and a display. The transmitter produces the electrical signals in the correct form for the type of radar system. The antenna sends these signals out as electromagnetic radiation. The antenna also collects incoming return signals and passes them to the receiver, which analyzes the return and passes it to a display. The display enables human operators to see the data.
All radar systems perform the same basic tasks, but the way a system carries out its tasks affects the design of its parts. A type of radar called pulse radar sends out bursts of radar waves at regular intervals. Pulse radar requires a method of timing the bursts from its transmitter, so this part is more complicated than the transmitter in other radar systems. Another type of radar called continuous-wave radar sends out a continuous signal. Continuous-wave radar gets much of its information about the target from subtle changes in the return, or the echo of the signal. The receiver in continuous-wave radar is therefore more complicated than in other systems.
A Transmitter System
The system surrounding the transmitter is made up of three main elements: the oscillator, the modulator, and the transmitter itself. The transmitter supplies energy to the antenna in the form of a high-energy electrical signal. The antenna then sends out electromagnetic radar waves as the signal passes through it.
A1 The Oscillator
The production of a radar signal begins with an oscillator, a device that produces a pure electrical signal at the desired frequency. Most radar systems use frequencies that fall in the radio range (from a few million cycles per second—or Hertz—to several hundred million Hertz) or the microwave range (from several hundred million Hertz to several tens of billions of Hertz). The oscillator must produce a precise and pure frequency to provide the radar system with an accurate reference when it calculates the Doppler shift of the signal (for further discussion of the Doppler shift, see the Receiver section of this article below).
A2 The Modulator
The next stage of a radar system is the modulator, which rapidly varies, or modulates, the signal from the oscillator. In a simple pulse radar system the modulator merely turns the signal on and off. The modulator should vary the signal, but not distort it. This requires careful design and engineering.
A3 The Transmitter
The radar system’s transmitter increases the power of the oscillator signal. The transmitter amplifies the power from the level of about 1 watt to as much as 1 megawatt, or 1 million watts. Radar signals have such high power levels because so little of the original signal comes back in the return.
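The amplification the text describes, from about 1 watt to 1 megawatt, corresponds to a power gain of 60 decibels, as a quick calculation shows (the decibel expression is a standard convention, not taken from the text):

```python
import math

def gain_db(p_out_w: float, p_in_w: float) -> float:
    """Power gain expressed in decibels."""
    return 10 * math.log10(p_out_w / p_in_w)

# 1 watt amplified to 1 megawatt, the figures given in the text:
print(gain_db(1e6, 1.0))  # 60 dB
```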
A4 The Antenna
After the transmitter amplifies the radar signal to the required level, it sends the signal to the antenna, usually a dish-shaped piece of metal. Electromagnetic waves at the proper wavelength propagate out from the antenna as the electrical signal passes through it. Most radar antennas direct the radiation by reflecting it from a parabolic, or concave shaped, metal dish. The output from the transmitter feeds into the focus of the dish. The focus is the point at which radio waves reflected from the dish travel out from the surface of the dish in a single direction. Most antennas are steerable, meaning that they can move to point in different directions. This enables a radar system to scan an area of space rather than always pointing in the same direction.
B Reception Elements
A radar receiver detects and often analyzes the faint echoes produced when radar waves bounce off of distant objects and return to the radar system. The antenna gathers the weak returning radar signals and converts them into an electric current. Because a radar antenna may both transmit and receive signals, the duplexer determines whether the antenna is connected to the receiver or the transmitter. The receiver determines whether the signal should be reported and often does further analysis before sending the results to the display. The display conveys the results to the human operator through a visual display or an audible signal.
B1 The Antenna
The receiver uses an antenna to gather the reflected radar signal. Often the receiver uses the same antenna as the transmitter. This is possible even in some continuous-wave radar because the modulator in the transmitter system formats the outgoing signals in such a way that the receiver (described in following paragraphs) can recognize the difference between outgoing and incoming signals.
B2 The Duplexer
The duplexer enables a radar system to transmit powerful signals and still receive very weak radar echoes. The duplexer acts as a gate between the antenna and the receiver and transmitter. It keeps the intense signals from the transmitter from passing to the receiver and overloading it, and also ensures that weak signals coming in from the antenna go to the receiver. A pulse radar duplexer connects the transmitter to the antenna only when a pulse is being emitted. Between pulses, the duplexer disconnects the transmitter and connects the receiver to the antenna. If the receiver were connected to the antenna while the pulse was being transmitted, the high power level of the pulse would damage the receiver’s sensitive circuits. In continuous-wave radar the receivers and transmitters operate at the same time. These systems have no duplexer. In this case, the receiver separates the signals by frequency alone. Because the receiver must listen for weak signals at the same time that the transmitter is operating, high power continuous-wave radar systems use separate transmitting and receiving antennas.
B3 The Receiver
Most modern radar systems use digital equipment because this equipment can perform many complicated functions. In order to use digital equipment, radar systems need analog-to-digital converters to change the received signal from an analog form to a digital form.
The incoming analog signal can have any value, from 0 to tens of millions, including fractional values such as 2/3. Digital information must have discrete values, in certain regular steps, such as 0, 1, or 2, but nothing in between. A digital system might require the fraction 2/3 to be rounded off to the decimal number 0.6666667, or 0.667, or 0.7, or even 1. After the analog information has been translated into discrete intervals, digital numbers are usually expressed in binary form, or as series of 1s and 0s that represent numbers. The analog-to-digital converter measures the incoming analog signal many times each second and expresses each signal as a binary number.
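The rounding and binary-encoding steps described above can be sketched in Python; the 0.001 quantization step is an illustrative assumption:

```python
def quantize(sample: float, step: float = 0.001) -> int:
    """Round an analog sample to the nearest discrete level."""
    return round(sample / step)

sample = 2 / 3             # an analog value, 0.6666...
level = quantize(sample)   # rounded to the nearest step: 667, i.e. 0.667
print(level, format(level, "b"))  # the level and its binary form
```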
Once the signal is in digital form, the receiver can perform many complex functions on it. One of the most important functions for the receiver is Doppler filtering. Signals that bounce off of moving objects come back with a slightly different wavelength because of an effect called the Doppler effect. The wavelength changes as waves leave a moving object because the movement of the object causes each wave to leave from a slightly different position than the waves before it. If an object is moving away from the observer, each successive wave will leave from slightly farther away, so the waves will be farther apart and the signal will have a longer wavelength. If an object is moving toward the observer, each successive wave will leave from a position slightly closer than the one before it, so the waves will be closer to each other and the signal will have a shorter wavelength. Doppler shifts occur in all kinds of waves, including radar waves, sound waves, and light waves. Doppler filtering is the receiver’s way of differentiating between multiple targets. Usually, targets move at different speeds, so each target will have a different Doppler shift. Following Doppler filtering, the receiver performs other functions to maximize the strength of the return signal and to eliminate noise and other interfering signals.
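The shift the receiver measures follows the standard two-way radar Doppler relation, in which the factor of 2 appears because both the outgoing and the reflected waves are compressed (or stretched) by the target's motion. A minimal sketch; the 3 GHz frequency and 100 m/s speed are illustrative assumptions:

```python
C = 299_792_458.0  # speed of light in m/s

def radar_doppler_shift_hz(target_speed_ms: float, frequency_hz: float) -> float:
    """Two-way Doppler shift for a target moving directly toward the
    radar: shift = 2 * speed / wavelength."""
    wavelength = C / frequency_hz
    return 2.0 * target_speed_ms / wavelength

# A target approaching at 100 m/s, illuminated by a 3 GHz radar:
print(f"{radar_doppler_shift_hz(100.0, 3e9):.0f} Hz")  # ~2000 Hz
```

The shift is tiny compared with the 3 GHz carrier, which is why the oscillator must provide such a precise frequency reference.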
B4 The Display
Displaying the results is the final step in converting the received radar signals into useful information. Early radar systems used a simple amplitude scope—a display of received signal amplitude, or strength, as a function of distance from the antenna. In such a system, a spike in the signal strength appears at the place on the screen that corresponds to the target’s distance. A more useful and more modern display is the plan position indicator (PPI). The PPI displays the direction of the target in relation to the radar system (relative to north) as an angle measured from the top of the display, while the distance to the target is represented as a distance from the center of the display. Some radar systems that use PPI display the actual amplitude of the signal, while others process the signal before displaying it and display possible targets as symbols. Some simple radar systems, designed to detect the presence of an object rather than its speed or distance, notify the user with an audible signal, such as a beep.
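The PPI geometry described above amounts to a polar-to-Cartesian conversion, with the bearing measured clockwise from the top (north) of the screen. A sketch, where the pixel radius and range scale are assumed values:

```python
import math

def ppi_to_screen(bearing_deg: float, range_km: float,
                  max_range_km: float, radius_px: float):
    """Convert a target's bearing (clockwise from north, i.e. from the
    top of the display) and range into x, y offsets from the center
    of a plan position indicator."""
    r = (range_km / max_range_km) * radius_px
    theta = math.radians(bearing_deg)
    x = r * math.sin(theta)  # east is +x
    y = r * math.cos(theta)  # north is +y (top of the display)
    return x, y

# A target due east (bearing 90 degrees) at half the maximum range,
# on a display with a 200-pixel radius:
print(ppi_to_screen(90.0, 50.0, 100.0, 200.0))  # ~(100.0, 0.0)
```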
C Radar Frequencies
Early radar systems were capable only of detecting targets and making a crude measurement of the distance to the target. As radar technology evolved, radar systems could measure more and more properties. Modern technology allows radar systems to use higher frequencies, permitting better measurement of the target’s direction and location. Advanced radar can detect individual features of the target and show a detailed picture of the target instead of a single blurred object.
Most radar systems operate in frequencies ranging from the Very High Frequency (VHF) band, at about 150 MHz (150 million Hz), to the Extremely High Frequency (EHF) band, which may go as high as 95 GHz (95 billion Hz). Specific ranges of frequencies work well for certain applications and not as well for others, so most radar systems are specialized to do one type of tracking or detection. The frequency of the radar system is related to the resolution of the system. Resolution determines how close two objects may be and still be distinguished by the radar, and how accurately the system can determine the target’s position. Higher frequencies provide better resolution than lower frequencies because the beam formed by the antenna is sharper. Tracking radar, which precisely locates objects and tracks their movement, needs higher resolution and so uses higher frequencies. On the other hand, if a radar system is used to search large areas for targets, a narrow beam of high-frequency radar will be less efficient. Because the high-power transmitters and large antennas that radar systems require are easier to build for lower frequencies, lower frequency radar systems are more popular for radar that does not need particularly good resolution.
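Why higher frequencies give sharper beams can be illustrated with the common wavelength-over-diameter rule of thumb for antenna beamwidth; the 3 m dish size below is an assumption, and real antennas include an extra pattern-dependent factor:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def beamwidth_deg(frequency_hz: float, antenna_diameter_m: float) -> float:
    """Approximate beamwidth of a dish antenna using the
    wavelength / diameter rule of thumb, converted to degrees."""
    wavelength = C / frequency_hz
    return math.degrees(wavelength / antenna_diameter_m)

# The same hypothetical 3 m dish at a low and a high frequency:
print(beamwidth_deg(150e6, 3.0))  # VHF: ~38 degrees, a broad beam
print(beamwidth_deg(10e9, 3.0))   # 10 GHz: ~0.57 degrees, much sharper
```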
D Clutter
Clutter is what radar users call radar signals that do not come from actual targets. Rain, snow, and the surface of the earth reflect energy, including radar waves. Such echoes can produce signals that the radar system may mistake for actual targets. Clutter makes it difficult to locate targets, especially when the system is searching for objects that are small and distant. Fortunately, most sources of clutter move slowly if at all, so their radar echoes produce little or no Doppler shift. Radar engineers have developed several systems to take advantage of the difference in Doppler shifts between clutter and moving targets. Some radar systems use a moving target indicator (MTI), which subtracts out every other radar return from the total signal. Because the signals from stationary objects will remain the same over time, the MTI subtracts them from the total signal, and only signals from moving targets get past the receiver. Other radar systems actually measure the frequencies of all returning signals. Frequencies with very low Doppler shifts are assumed to come from clutter. Those with substantial shifts are assumed to come from moving targets.
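The MTI idea of subtracting successive returns can be shown with a toy sketch: echoes that repeat identically from pulse to pulse (stationary clutter) cancel, while echoes that change (moving targets) survive. The sample values are invented for illustration:

```python
def mti_filter(returns: list[float]) -> list[float]:
    """Two-pulse moving-target-indicator canceller: subtract each
    radar return from the one that follows it."""
    return [b - a for a, b in zip(returns, returns[1:])]

clutter = [5.0, 5.0, 5.0, 5.0]  # stationary object: constant echo
target = [5.0, 6.0, 4.5, 7.0]   # moving object: echo varies

print(mti_filter(clutter))  # all zeros -- clutter cancelled
print(mti_filter(target))   # nonzero -- moving target passes through
```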
Clutter is actually a relative term, since the clutter for some systems could be the target for other systems. For example, a radar system that tracks airplanes considers precipitation to be clutter, but precipitation is the target of weather radar. The plane-tracking radar would ignore the returns with large sizes and low Doppler shifts that represent weather features, while the weather radar would ignore the small-sized, highly-Doppler-shifted returns that represent airplanes.
III TYPES OF RADAR
All radar systems send out electromagnetic radiation in radio or microwave frequencies and use echoes of that radiation to detect objects, but different systems use different methods of emitting and receiving radiation. Pulse radar sends out short bursts of radiation. Continuous wave radar sends out a constant signal. Synthetic aperture radar and phased-array radar have special ways of positioning and pointing the antennas that improve resolution and accuracy. Secondary radar detects radar signals that targets send out, instead of detecting echoes of radiation.
A Simple Pulse Radar
Simple pulse radar is the simplest type of radar. In this system, the transmitter sends out short pulses of radio frequency energy. Between pulses, the radar receiver detects echoes of radiation that objects reflect. Most pulse radar antennas rotate to scan a wide area. Simple pulse radar requires precise timing circuits in the duplexer to prevent the transmitter from transmitting while the receiver is acquiring a signal from the antenna, and to keep the receiver from trying to read a signal from the antenna while the transmitter is operating. Pulse radar is good at locating an object, but it is not very accurate at measuring an object’s speed.
B Continuous Wave Radar
Continuous-wave (CW) radar systems transmit a constant radar signal. The transmission is continuous, so, except in systems with very low power, the receiver cannot use the same antenna as the transmitter because the radar emissions would interfere with the echoes that the receiver detects. CW systems can distinguish between stationary clutter and moving targets by analyzing the Doppler shift of the signals, without having to use the precise timing circuits that separate the signal from the return in pulse radar. Continuous wave radar systems are excellent at measuring the speed and direction of an object, but they are not as accurate as pulse radar at measuring an object’s position. Some systems combine pulse and CW radar to achieve both good range and velocity resolution. Such systems are called Pulse-Doppler radar systems.
C Synthetic Aperture Radar
Synthetic aperture radar (SAR) tracks targets on the ground from the air. The name comes from the fact that the system uses the movement of the airplane or satellite carrying it to make the antenna seem much larger than it actually is. The ability of radar to distinguish between two closely spaced objects depends on the width of the beam that the antenna sends out. The narrower the beam is, the better its resolution. Getting a narrow beam requires a big antenna, but a SAR system must fit on an aircraft or satellite, so it is limited to a relatively small antenna with a wide beam. The system is called synthetic aperture because it makes its antenna appear bigger than it really is: the moving aircraft or satellite allows the SAR system to take measurements repeatedly from different positions, and the receiver processes these signals to make it seem as though they came from a large stationary antenna instead of a small moving one. Synthetic aperture radar resolution can be high enough to pick out individual objects as small as automobiles.
Typically, an aircraft or satellite equipped with SAR flies past the target object. In inverse synthetic aperture radar, the target moves past the radar antenna. Inverse SAR can give results as good as normal SAR.
D Phased-Array Radar
Most radar systems use a single large antenna that stays in one place but can rotate on a base to change the direction of the radar beam. A phased-array radar antenna actually comprises many small separate antenna elements. The system combines the signals gathered from all the elements, and by adjusting the relative phase of those signals it can change the direction of the combined beam. Because this steering is done electronically, a phased-array radar can change its beam direction many times faster than any mechanically steered radar system can.
E Secondary Radar
A radar system that sends out radar signals and reads the echoes that bounce back is a primary radar system. Secondary radar systems read coded radar signals that the target emits in response to signals received, instead of signals that the target reflects. Air traffic control depends heavily on the use of secondary radar. Aircraft carry small radar transmitters called beacons or transponders. Receivers at the air traffic control tower search for signals from the transponders. The transponder signals not only tell controllers the location of the aircraft, but can also carry encoded information about the target. For example, the signal may contain a code that indicates whether the aircraft is an ally, or it may contain encoded information from the aircraft’s altimeter (altitude indicator).
IV RADAR APPLICATIONS
Many industries depend on radar to carry out their work. Civilian aircraft and maritime industries use radar to avoid collisions and to keep track of aircraft and ship positions. Military craft also use radar for collision avoidance, as well as for tracking military targets. Radar is important to meteorologists, who use it to track weather patterns. Radar also has many other scientific applications.
A Air-Traffic Control
Radar is a vital tool in avoiding midair aircraft collisions. The international air traffic control system uses both primary and secondary radar. A network of long-range radar systems called Air Route Surveillance Radar (ARSR) tracks aircraft as they fly between airports. Airports use medium-range radar systems called Airport Surveillance Radar to track aircraft more accurately while they are near the airport.
B Maritime Navigation
Radar also helps ships navigate through dangerous waters and avoid collisions. Unlike air-traffic radar, with its centralized networks that monitor many craft, maritime radar depends almost entirely on radar systems installed on individual vessels. These radar systems search the surface of the water for landmasses; navigation aids, such as lighthouses and channel markers; and other vessels. For a ship’s navigator, echoes from landmasses and other stationary objects are just as important as those from moving objects. Consequently, marine radar systems do not include clutter removal circuits. Instead, ship-based radar depends on high-resolution distance and direction measurements to differentiate between land, ships, and unwanted signals. Marine radar systems have become available at such low cost that many pleasure craft are equipped with them, especially in regions where fog is common.
C Military Defense and Attack
Historically, the military has played the leading role in the use and development of radar. The detection and interception of opposing military aircraft in air defense has been the predominant military use of radar. The military also uses airborne radar to scan large battlefields for the presence of enemy forces and equipment and to pick out precise targets for bombs and missiles.
C1 Air Defense
A typical surface-based air defense system relies upon several radar systems. First, a lower frequency radar with a high-powered transmitter and a large antenna searches the airspace for all aircraft, both friend and foe. A secondary radar system reads the transponder signals sent by each aircraft to distinguish between allies and enemies. After enemy aircraft are detected, operators track them more precisely by using high-frequency waves from special fire control radar systems. The air defense system may attempt to shoot down threatening aircraft with gunfire or missiles, and radar sometimes guides both gunfire and missiles (see Guided Missiles).
Longer-range air defense systems use missiles with internal guidance. These systems track a target using data from a radar system on the missile. Such missile-borne radar systems are called seekers. The seeker uses radar signals from the missile or radar signals from a transmitter on the ground to determine the position of the target relative to the missile, then passes the information to the missile’s guidance system.
The military uses surface-to-air systems for defense against ballistic missiles as well as aircraft (see Defense Systems). During the Cold War both the United States and the Union of Soviet Socialist Republics (USSR) did a great deal of research into defense against intercontinental ballistic missiles (ICBMs) and submarine-launched ballistic missiles (SLBMs). The United States and the USSR signed the Anti-Ballistic Missile (ABM) treaty in 1972. This treaty limited each of the superpowers to a single, limited capability system. The U.S. system consisted of a low-frequency (UHF) phased-array radar around the perimeter of the country, another phased-array radar to track incoming missiles more accurately, and several very high speed missiles to intercept the incoming ballistic missiles. The second radar guided the interceptor missiles.
Airborne air defense systems incorporate the same functions as ground-based air defense, but special aircraft carry the large area search radar systems. This is necessary because it is difficult for high-performance fighter aircraft to carry both large radar systems and weapons.
Modern warfare uses air-to-ground radar to detect targets on the ground and to monitor the movement of troops. Advanced Doppler techniques and synthetic aperture radar have greatly increased the accuracy and usefulness of air-to-ground radar since their introduction in the 1960s and 1970s. Military forces around the world use air-to-ground radar for weapon aiming and for battlefield surveillance. The United States used the Joint Surveillance and Tracking Radar System (JSTARS) in the Persian Gulf War (1991), demonstrating modern radar’s ability to provide information about enemy troop concentrations and movements during the day or night, regardless of weather conditions.
C2 Countermeasures
The military uses several techniques to attempt to avoid detection by enemy radar. One common technique is jamming—that is, sending deceptive signals to the enemy’s radar system. During World War II (1939-1945), flyers under attack jammed enemy radar by dropping large clouds of chaff—small pieces of aluminum foil or some other material that reflects radar well. “False” returns from the chaff hid the aircraft’s exact location from the enemy’s air defense radar. Modern jamming uses sophisticated electronic systems that analyze enemy radar, then send out false radar echoes that mask the actual target echoes or deceive the radar about a target’s location.
Stealth technology is a collection of methods that reduce the radar echoes from aircraft and other radar targets (see Stealth Aircraft). Special paint can absorb radar signals and sharp angles in the aircraft design can reflect radar signals in deceiving directions. Improvements in jamming and stealth technology force the continual development of high-power transmitters, antennas good at detecting weak signals, and very sensitive receivers, as well as techniques for improved clutter rejection.
D Traffic Safety
Since the 1950s, police have used radar to detect motorists who are exceeding the speed limit. Most older police radar “guns” use Doppler technology to determine the target vehicle’s speed. Such systems were simple, but they sometimes produced false results. The radar beam of such systems was relatively wide, which meant that stray radar signals could be detected by motorists with radar detectors. Newer police radar systems, developed in the 1980s and 1990s, use laser light to form a narrow, highly selective radar beam. The narrow beam helps ensure that the radar returns signals from a single, selected car and reduces the chance of false results. Instead of relying on the Doppler effect to measure speed, these systems use pulse radar to measure the distance to the car many times, then calculate the speed by dividing the change in distance by the change in time. Laser radar is also more reliable than normal radar for the detection of speeding motorists because its narrow beam is more difficult for motorists with radar detectors to detect.
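The distance-difference method described for laser-based guns can be sketched directly; the range samples and timing below are invented for illustration:

```python
def speed_from_ranges(r1_m: float, r2_m: float, dt_s: float) -> float:
    """Estimate speed from two distance measurements taken dt seconds
    apart, as the text describes: change in distance divided by
    change in time. A positive result means the car is approaching."""
    return (r1_m - r2_m) / dt_s

# Two range samples taken 0.3 s apart on an approaching car:
speed = speed_from_ranges(150.0, 140.0, 0.3)
print(f"{speed:.1f} m/s (~{speed * 3.6:.0f} km/h)")
```

Real systems average many such measurements to smooth out noise in the individual range readings.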
E Meteorology
Meteorologists use radar to learn about the weather. Networks of radar systems installed across many countries throughout the world detect and display areas of rain, snow, and other precipitation. Weather radar systems use Doppler radar to determine the speed of the wind within the storm. The radar signals bounce off of water droplets or ice crystals in the atmosphere. Gaseous water vapor does not reflect radar waves as well as liquid water droplets or solid ice crystals, so radar returns from rain or snow are stronger than those from clouds. Dust in the atmosphere also reflects radar, but the returns are only significant when the concentration of dust is much higher than usual. The Terminal Doppler Weather Radar can detect small, localized, but hazardous wind conditions, especially if precipitation or a large amount of dust accompanies the storm. Many airports use this advanced radar to make landing safer.
F Scientific Applications
Scientists use radar in several space-related applications. The Spacetrack system is a cooperative effort of the United States, Canada, and the United Kingdom. It uses data from several large surveillance and tracking radar systems (including the Ballistic Missile Early Warning System) to detect and track all objects in orbit around the earth. This helps scientists and engineers keep an eye on space junk—abandoned satellites, discarded pieces of rockets, and other unused fragments of spacecraft that could pose a threat to operating spacecraft. Other special-purpose radar systems track specific satellites that emit a beacon signal. One of the most important of these systems is the Global Positioning System (GPS), operated by the U.S. Department of Defense. GPS provides highly accurate navigational data for the U.S. military and for anyone who owns a GPS receiver.
During space flights, radar gives precise measurements of the distances between the spacecraft and other objects. In the U.S. Surveyor missions to the moon in the 1960s, radar measured the altitude of the probe above the moon’s surface to help the probe control its descent. In the Apollo missions, which landed astronauts on the moon during the 1960s and 1970s, radar measured the altitude above the lunar surface of the Lunar Module, the part of the Apollo spacecraft that carried two astronauts from lunar orbit down to the moon’s surface. Apollo also used radar to measure the distance between the Lunar Module and the Command and Service Module, the part of the spacecraft that remained in orbit around the moon.
Astronomers have used ground-based radar to observe the moon, some of the larger asteroids in our solar system, and a few of the planets and their moons. Radar observations provide information about the orbit and surface features of the object.
The U.S. Magellan space probe mapped the surface of the planet Venus with radar from 1990 to 1994. Magellan’s radar was able to penetrate the dense cloud layer of the Venusian atmosphere and provide images of much better quality than radar measurements from Earth.
Many nations have used satellite-based radar to map portions of the earth’s surface. Radar can show conditions on the surface of the earth and can help determine the location of various resources such as oil, water for irrigation, and mineral deposits. In 1995 the Canadian Space Agency launched a satellite called RADARsat to provide radar imagery to commercial, government, and scientific users.
V HISTORY
Although British physicist James Clerk Maxwell predicted the existence of radio waves in the 1860s, it was not until the 1880s that British-born American inventor Elihu Thomson and German physicist Heinrich Hertz independently confirmed their existence. Scientists soon realized that radio waves could bounce off of objects, and by 1904 Christian Hülsmeyer, a German inventor, had used radio waves in a collision avoidance device for ships. Hülsmeyer’s system was only effective for a range of about 1.5 km (about 1 mi). The first long-range radar systems were not developed until the 1920s. In 1922 Italian radio pioneer Guglielmo Marconi demonstrated a low-frequency (60 MHz) radar system. In 1924 English physicist Edward Appleton and his graduate student from New Zealand, Miles Barnett, proved the existence of the ionosphere, an electrically charged upper layer of the atmosphere, by reflecting radio waves off of it. Scientists at the U.S. Naval Research Laboratory in Washington, D.C., became the first to use radar to detect aircraft in 1930.
A Radar in World War II
None of the early demonstrations of radar generated much enthusiasm. The commercial and military value of radar did not become readily apparent until the mid-1930s. Before World War II, the United States, France, and the United Kingdom were all carrying out radar research. Beginning in 1935, the British built a network of ground-based aircraft-detection radar stations, called Chain Home, under the direction of Sir Robert Watson-Watt. Chain Home was fully operational from 1938 until the end of World War II in 1945 and was instrumental in Britain’s defense against German bombers.
The British recognized the value of radar with frequencies much higher than the radio waves used for most systems. A breakthrough in radar technology came in 1939 when two British scientists, physicist Henry Boot and biophysicist John Randall, developed the resonant-cavity magnetron. This device generates high-frequency radio pulses with a large amount of power, and it made the development of microwave radar possible. In 1940 the Massachusetts Institute of Technology (MIT) Radiation Laboratory was formed in Cambridge, Massachusetts, bringing together U.S. and British radar research. In March 1942 scientists there demonstrated the detection of ships from the air. This technology became the basis of antiship and antisubmarine radar for the U.S. Navy.
The U.S. Army operated air surveillance radar at the start of World War II. The army also used early forms of radar to direct antiaircraft guns. Initially the radar systems were used to aim searchlights so the soldier aiming the gun could see where to fire, but the systems evolved into fire-control radar that aimed the guns automatically.
B Radar during the Cold War 
With the end of World War II, interest in radar development declined. Some experiments continued, however; for instance, in 1946 the U.S. Army Signal Corps bounced radar signals off the moon, ushering in the field of radar astronomy. The growing hostility between the United States and the Union of Soviet Socialist Republics—the so-called Cold War—renewed military interest in radar improvements. After the Soviets detonated their first atomic bomb in 1949, interest in radar development, especially for air defense, surged. Major programs included the installation of the Distant Early Warning (DEW) Line, a network of long-range radar stations across the northern reaches of North America that warned against bomber attacks. As the potential threat of attack by intercontinental ballistic missiles (ICBMs) increased, the Ballistic Missile Early Warning System (BMEWS) was installed, with radar sites in Alaska, Greenland, and the United Kingdom.
C Modern Radar
Radar found many applications in civilian and military life and became more sophisticated and specialized for each application. The use of radar in air traffic control grew quickly during the Cold War, especially with the jump in air traffic that occurred in the 1960s. Today almost all commercial and private aircraft have transponders. Transponders send out radar signals encoded with information about an aircraft and its flight that other aircraft and air traffic controllers can use. American traffic engineer John Barker discovered in 1947 that moving automobiles would reflect radar waves, which could be analyzed to determine the car’s speed. Police began using traffic radar in the 1950s, and the accuracy of traffic radar has increased markedly since the 1980s.
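The principle behind traffic radar can be illustrated with a short calculation. A car moving toward the radar shifts the reflected frequency in proportion to its speed, so the speed can be recovered from the measured shift. The sketch below assumes a 24.15 GHz operating frequency (a common traffic-radar band) and an invented shift value purely for illustration:

```python
# Doppler principle behind traffic radar (illustrative sketch):
# a target moving toward the radar shifts the reflected frequency by
# delta_f = 2 * v * f0 / c, so v = delta_f * c / (2 * f0).

C = 299_792_458.0  # speed of light, m/s

def speed_from_doppler_shift(delta_f_hz, radar_freq_hz):
    """Return target speed in km/h from the measured Doppler shift."""
    speed_m_per_s = delta_f_hz * C / (2.0 * radar_freq_hz)
    return speed_m_per_s * 3.6

# A shift of about 4,472 Hz at 24.15 GHz corresponds to roughly 100 km/h.
print(round(speed_from_doppler_shift(4472, 24.15e9)))
```

The same two-way Doppler relation underlies the weather radar discussed next, where the moving targets are raindrops rather than cars.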
Doppler radar came into use in the 1960s and was first dedicated to weather forecasting in the 1970s. In the 1990s the United States had a nationwide network of more than 130 Doppler radar stations to help meteorologists track weather patterns.
Earth-observing satellites such as those in the SEASAT program began to use radar to measure the topography of the earth in the late 1970s. The Magellan spacecraft mapped most of the surface of the planet Venus in the 1990s. The Cassini spacecraft, scheduled to reach Saturn in 2004, carries radar instruments for studying the surface of Saturn’s moon Titan.
As radar continues to improve, so does the technology for evading radar. Stealth aircraft feature radar-absorbing coatings and deceptive shapes to reduce the possibility of radar detection. The Lockheed F-117A, first flown in 1981, and the Northrop B-2, first flown in 1989, are two of the latest additions to the U.S. stealth aircraft fleet. In the area of civilian radar avoidance, companies are introducing increasingly sophisticated radar detectors, designed to warn motorists of police using traffic radar.

(v)

1. Tape Recording
In analog tape recording, electrical signals from a microphone are transformed into magnetic signals. These signals are encoded onto a thin plastic ribbon of recording tape, which is coated with tiny magnetic particles, commonly chromium dioxide or ferric oxide. A chemical binder fixes the particles to the tape, and a back coating prevents the magnetic signal from transferring from one layer of wound tape to the next.
Tape is wound onto reels of varying sizes. Professional reel-to-reel tape, which is 6.35 mm (0.25 in) wide, is wound on large metal or plastic reels and must be loaded onto a reel-to-reel tape recorder by hand. Cassette tape is only 3.81 mm (0.15 in) wide and is completely self-enclosed for convenience. Regardless of size, all magnetic tape is drawn from a supply reel on the left side of the recorder to a take-up reel on the right. A drive shaft, called a capstan, rolls against a pinch roller and pulls the tape along. Various guides and rollers mechanically regulate the speed and tension of the tape, since any variation in either will affect sound quality.
As the tape is drawn from the supply reel to the take-up reel, it passes over a series of three magnetic coils called heads. The erase head is activated only while recording. It generates a current that places the tape's magnetic particles in a neutral position in order to remove any previous sounds. The record head transforms the electrical signal coming into the recorder into a magnetic flux and thus applies the original electrical signal onto the tape. The sound wave is now physically present on the analog tape. The playback head reads the magnetic field on the tape and converts this field back to electric energy.
Unwanted noise, such as hiss, is a frequent problem with recording on tape. To combat this problem, sound engineers developed noise reduction systems that help reduce unwanted sounds. Many different systems exist, such as the Dolby System, which is used to reduce hiss on musical recordings and motion-picture soundtracks. Most noise occurs around the weakest sounds on a tape recording. Noise reduction systems work by boosting weak signals during recording. When the tape is played, the boosted signals are reduced to their normal levels. This reduction to normal levels also minimizes any noise that might have been present.
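The boost-then-cut idea can be sketched as a toy compander. This is only an illustration of the principle, not the actual Dolby algorithm, and the square-root law used here is an arbitrary choice:

```python
# Toy companding sketch: weak signals are boosted before recording and
# cut by the inverse law on playback, so hiss added in between is
# reduced along with them. Samples are normalized to the range -1..1.

def compress(sample, exponent=0.5):
    """Record side: boost quiet samples (square-root law)."""
    sign = 1 if sample >= 0 else -1
    return sign * (abs(sample) ** exponent)

def expand(sample, exponent=0.5):
    """Playback side: inverse law restores the original level."""
    sign = 1 if sample >= 0 else -1
    return sign * (abs(sample) ** (1.0 / exponent))

quiet = 0.01                # a weak signal at 1% of full scale
recorded = compress(quiet)  # boosted to 10% of full scale on tape
restored = expand(recorded) # back to 1% on playback
print(abs(restored - quiet) < 1e-9)
```

Because the weak signal sits well above the hiss while on tape, cutting both back down on playback leaves the hiss proportionally quieter.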

Q7:
(i)
Deoxyribonucleic Acid
I INTRODUCTION
Deoxyribonucleic Acid (DNA), genetic material of all cellular organisms and most viruses. DNA carries the information needed to direct protein synthesis and replication. Protein synthesis is the production of the proteins needed by the cell or virus for its activities and development. Replication is the process by which DNA copies itself for each descendant cell or virus, passing on the information needed for protein synthesis. In most cellular organisms, DNA is organized on chromosomes located in the nucleus of the cell.
II STRUCTURE
A molecule of DNA consists of two strands, each composed of a large number of chemical compounds, called nucleotides, linked together in a chain. The two chains are arranged like a ladder that has been twisted into the shape of a winding staircase, called a double helix. Each nucleotide consists of three units: a sugar molecule called deoxyribose, a phosphate group, and one of four different nitrogen-containing compounds called bases. The four bases are adenine (A), guanine (G), thymine (T), and cytosine (C). The deoxyribose molecule occupies the center position in the nucleotide, flanked by a phosphate group on one side and a base on the other. The phosphate group of each nucleotide is also linked to the deoxyribose of the adjacent nucleotide in the chain. These linked deoxyribose-phosphate subunits form the parallel side rails of the ladder. The bases face inward toward each other, forming the rungs of the ladder.
The nucleotides in one DNA strand have a specific association with the corresponding nucleotides in the other DNA strand. Because of the chemical affinity of the bases, nucleotides containing adenine are always paired with nucleotides containing thymine, and nucleotides containing cytosine are always paired with nucleotides containing guanine. The complementary bases are joined to each other by weak chemical bonds called hydrogen bonds.
In 1953 American biochemist James D. Watson and British biophysicist Francis Crick published the first description of the structure of DNA. Their model proved to be so important for the understanding of protein synthesis, DNA replication, and mutation that they were awarded the 1962 Nobel Prize for physiology or medicine for their work.
III PROTEIN SYNTHESIS
DNA carries the instructions for the production of proteins. A protein is composed of smaller molecules called amino acids, and the structure and function of the protein is determined by the sequence of its amino acids. The sequence of amino acids, in turn, is determined by the sequence of nucleotide bases in the DNA. A sequence of three nucleotide bases, called a triplet, is the genetic code word, or codon, that specifies a particular amino acid. For instance, the triplet GAC (guanine, adenine, and cytosine) is the codon for the amino acid leucine, and the triplet CAG (cytosine, adenine, and guanine) is the codon for the amino acid valine. A protein consisting of 100 amino acids is thus encoded by a DNA segment consisting of 300 nucleotides. Of the two polynucleotide chains that form a DNA molecule, only one strand contains the information needed for the production of a given amino acid sequence. The other strand aids in replication.
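The lookup described above can be sketched in a few lines of code. The triplets in the text are DNA template triplets, so the sketch first transcribes them into their complementary mRNA codons before the table lookup; the two-entry codon table is a deliberately tiny excerpt, not the full genetic code:

```python
# Sketch: a DNA template triplet specifies an amino acid via its
# complementary mRNA codon (A->U, T->A, G->C, C->G).

TRANSCRIBE = {"A": "U", "T": "A", "G": "C", "C": "G"}
CODON_TABLE = {"CUG": "leucine", "GUC": "valine"}  # tiny excerpt only

def amino_acid_for_dna_triplet(triplet):
    """Transcribe a DNA template triplet, then look up the amino acid."""
    mrna_codon = "".join(TRANSCRIBE[base] for base in triplet)
    return CODON_TABLE[mrna_codon]

print(amino_acid_for_dna_triplet("GAC"))  # leucine, as in the text
print(amino_acid_for_dna_triplet("CAG"))  # valine
```

The full genetic code has 64 codons for 20 amino acids plus stop signals; a 100-amino-acid protein thus needs 100 triplets, or 300 nucleotides, as the text states.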
Protein synthesis begins with the separation of a DNA molecule into two strands. In a process called transcription, a section of one strand acts as a template, or pattern, to produce a new strand called messenger RNA (mRNA). The mRNA leaves the cell nucleus and attaches to the ribosomes, specialized cellular structures that are the sites of protein synthesis. Amino acids are carried to the ribosomes by another type of RNA, called transfer RNA (tRNA). In a process called translation, the amino acids are linked together in a particular sequence, dictated by the mRNA, to form a protein.
A gene is a sequence of DNA nucleotides that specify the order of amino acids in a protein via an intermediary mRNA molecule. Substituting one DNA nucleotide with another containing a different base causes all descendant cells or viruses to have the altered nucleotide base sequence. As a result of the substitution, the sequence of amino acids in the resulting protein may also be changed. Such a change in a DNA molecule is called a mutation. Most mutations are the result of errors in the replication process. Exposure of a cell or virus to radiation or to certain chemicals increases the likelihood of mutations.
IV REPLICATION
In most cellular organisms, replication of a DNA molecule takes place in the cell nucleus and occurs just before the cell divides. Replication begins with the separation of the two polynucleotide chains, each of which then acts as a template for the assembly of a new complementary chain. As the old chains separate, each nucleotide in the two chains attracts a complementary nucleotide that has been formed earlier by the cell. The nucleotides are joined to one another by hydrogen bonds to form the rungs of a new DNA molecule. As the complementary nucleotides are fitted into place, an enzyme called DNA polymerase links them together by bonding the phosphate group of one nucleotide to the sugar molecule of the adjacent nucleotide, forming the side rail of the new DNA molecule. This process continues until a new polynucleotide chain has been formed alongside the old one, forming a new double-helix molecule.
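The base-pairing rule at the heart of replication can be sketched as follows; the seven-base strand is an arbitrary example:

```python
# Base-pairing sketch: each old strand acts as a template, attracting
# A opposite T and G opposite C to build the new complementary strand.

PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complementary_strand(template):
    """Return the strand that pairs base-for-base with the template."""
    return "".join(PAIR[base] for base in template)

old_strand = "ATTGCAC"
new_strand = complementary_strand(old_strand)
print(new_strand)                                      # TAACGTG
print(complementary_strand(new_strand) == old_strand)  # True
```

Complementing twice returns the original sequence, which is why each daughter double helix carries exactly the same information as the parent.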
V TOOLS AND PROCEDURES
Scientists use several tools and procedures to study and manipulate DNA. Specialized enzymes called restriction enzymes, found in bacteria, act like molecular scissors to cut the phosphate backbones of DNA molecules at specific base sequences. Strands of DNA that have been cut with restriction enzymes are left with single-stranded tails called sticky ends, because they can easily realign with tails from certain other DNA fragments. Scientists take advantage of restriction enzymes and the sticky ends they generate to carry out recombinant DNA technology, or genetic engineering. This technology involves removing a specific gene from one organism and inserting the gene into another organism.
Another tool for working with DNA is a procedure called polymerase chain reaction (PCR). This procedure uses the enzyme DNA polymerase to make copies of DNA strands in a process that mimics the way in which DNA replicates naturally within cells. Scientists use PCR to obtain vast numbers of copies of a given segment of DNA.
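The power of PCR comes from repeated doubling: each cycle ideally copies every strand present, so the number of copies grows exponentially. A minimal sketch, with an illustrative cycle count:

```python
# Idealized PCR amplification: every cycle doubles the copy count,
# so n cycles turn one starting molecule into 2**n copies.
# Real reactions fall short of perfect doubling in later cycles.

def pcr_copies(starting_molecules, cycles):
    """Copies after the given number of perfect doubling cycles."""
    return starting_molecules * 2 ** cycles

# 30 cycles from a single molecule: over a billion copies.
print(pcr_copies(1, 30))  # 1073741824
```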
DNA fingerprinting, also called DNA typing, makes it possible to compare samples of DNA from various sources in a manner that is analogous to the comparison of fingerprints. In this procedure, scientists use restriction enzymes to cleave a sample of DNA into an assortment of fragments. Solutions containing these fragments are placed at the surface of a gel to which an electric current is applied. The electric current causes the DNA fragments to move through the gel. Because smaller fragments move more quickly than larger ones, this process, called electrophoresis, separates the fragments according to their size. The fragments are then marked with probes and exposed on X-ray film, where they form the DNA fingerprint—a pattern of characteristic black bars that is unique for each type of DNA.
A procedure called DNA sequencing makes it possible to determine the precise order, or sequence, of nucleotide bases within a fragment of DNA. Most versions of DNA sequencing use a technique called primer extension, developed by British molecular biologist Frederick Sanger. In primer extension, specific pieces of DNA are replicated and modified, so that each DNA segment ends in a fluorescent form of one of the four nucleotide bases. Modern DNA sequencers, pioneered by American molecular biologist Leroy Hood, incorporate both lasers and computers. Scientists have completely sequenced the genetic material of several microorganisms, including the bacterium Escherichia coli. In 1998, scientists achieved the milestone of sequencing the complete genome of a multicellular organism—a roundworm identified as Caenorhabditis elegans. The Human Genome Project, an international research collaboration, has been established to determine the sequence of all of the three billion nucleotide base pairs that make up the human genetic material. 
An instrument called an atomic force microscope enables scientists to manipulate the three-dimensional structure of DNA molecules. This microscope involves laser beams that act like tweezers—attaching to the ends of a DNA molecule and pulling on them. By manipulating these laser beams, scientists can stretch, or uncoil, fragments of DNA. This work is helping reveal how DNA changes its three-dimensional shape as it interacts with enzymes. 
VI APPLICATIONS
Research into DNA has had a significant impact on medicine. Through recombinant DNA technology, scientists can modify microorganisms so that they become so-called factories that produce large quantities of medically useful drugs. This technology is used to produce insulin, which is a drug used by diabetics, and interferon, which is used by some cancer patients. Studies of human DNA are revealing genes that are associated with specific diseases, such as cystic fibrosis and breast cancer. This information is helping physicians to diagnose various diseases, and it may lead to new treatments. For example, physicians are using a technology called chimeraplasty, which involves a synthetic molecule containing both DNA and RNA strands, in an effort to develop a treatment for a form of hemophilia.
Forensic science uses techniques developed in DNA research to identify individuals who have committed crimes. DNA from semen, skin, or blood taken from the crime scene can be compared with the DNA of a suspect, and the results can be used in court as evidence.
DNA has helped taxonomists determine evolutionary relationships among animals, plants, and other life forms. Closely related species have more similar DNA than do species that are distantly related. One surprising finding to emerge from DNA studies is that vultures of the Americas are more closely related to storks than to the vultures of Europe, Asia, or Africa (see Classification).
Techniques of DNA manipulation are used in farming, in the form of genetic engineering and biotechnology. Strains of crop plants to which genes have been transferred may produce higher yields and may be more resistant to insects. Cattle have been similarly treated to increase milk and beef production, as have hogs, to yield more meat with less fat.
VII SOCIAL ISSUES
Despite the many benefits offered by DNA technology, some critics argue that its development should be monitored closely. One fear raised by such critics is that DNA fingerprinting could provide a means for employers to discriminate against members of various ethnic groups. Critics also fear that studies of people’s DNA could permit insurance companies to deny health insurance to those people at risk for developing certain diseases. The potential use of DNA technology to alter the genes of embryos is a particularly controversial issue.
The use of DNA technology in agriculture has also sparked controversy. Some people question the safety, desirability, and ecological impact of genetically altered crop plants. In addition, animal rights groups have protested against the genetic engineering of farm animals. 
Despite these and other areas of disagreement, many people agree that DNA technology offers a mixture of benefits and potential hazards. Many experts also agree that an informed public can help assure that DNA technology is used wisely.

Ribonucleic Acid
I INTRODUCTION
Ribonucleic Acid (RNA), genetic material of certain viruses (RNA viruses) and, in cellular organisms, the molecule that directs the middle steps of protein production. In RNA viruses, the RNA directs two processes—protein synthesis (production of the virus's protein coat) and replication (the process by which RNA copies itself). In cellular organisms, another type of genetic material, called deoxyribonucleic acid (DNA), carries the information that determines protein structure. But DNA cannot act alone and relies upon RNA to transfer this crucial information during protein synthesis (production of the proteins needed by the cell for its activities and development).
Like DNA, RNA consists of a chain of chemical compounds called nucleotides. Each nucleotide is made up of a sugar molecule called ribose, a phosphate group, and one of four different nitrogen-containing compounds called bases. The four bases are adenine, guanine, uracil, and cytosine. These components are joined together in the same manner as in a deoxyribonucleic acid (DNA) molecule. RNA differs chemically from DNA in two ways: The RNA sugar molecule contains an oxygen atom not found in DNA, and RNA contains the base uracil in the place of the base thymine in DNA.
II CELLULAR RNA
In cellular organisms, RNA is a single-stranded polynucleotide chain, a strand of many nucleotides linked together. There are three types of RNA. Ribosomal RNA (rRNA) is found in the cell's ribosomes, the specialized structures that are the sites of protein synthesis. Transfer RNA (tRNA) carries amino acids to the ribosomes for incorporation into a protein. Messenger RNA (mRNA) carries the genetic blueprint copied from the sequence of bases in a cell's DNA. This blueprint specifies the sequence of amino acids in a protein. All three types of RNA are formed as needed, using specific sections of the cell's DNA as templates.
III VIRAL RNA
Some RNA viruses have double-stranded RNA—that is, their RNA molecules consist of two parallel polynucleotide chains. The base of each RNA nucleotide in one chain pairs with a complementary base in the second chain: adenine pairs with uracil, and guanine pairs with cytosine. For these viruses, the process of RNA replication in a host cell follows the same pattern as DNA replication, a method called semi-conservative replication, in which each newly formed double-stranded RNA molecule contains one polynucleotide chain from the parent RNA molecule and one complementary chain formed through base pairing. The Colorado tick fever virus, which causes a mild febrile illness, is a double-stranded RNA virus.
There are two types of single-stranded RNA viruses. After entering a host cell, one type, exemplified by the poliovirus, becomes double-stranded by making an RNA strand complementary to its own. During replication the two strands separate, but only the newly formed strand attracts nucleotides with complementary bases. The polynucleotide chain produced as a result of replication is therefore exactly the same as the original RNA chain.
The other type of single-stranded RNA viruses, called retroviruses, include the human immunodeficiency virus (HIV), which causes AIDS, and other viruses that cause tumors. After entering a host cell, a retrovirus makes a DNA strand complementary to its own RNA strand using the host's DNA nucleotides. This new DNA strand then replicates and forms a double helix that becomes incorporated into the host cell's chromosomes, where it is replicated along with the host DNA. While in a host cell, the RNA-derived viral DNA produces single-stranded RNA viruses that then leave the host cell and enter other cells, where the replication process is repeated. 
IV RNA AND THE ORIGIN OF LIFE
In 1981, American biochemist Thomas Cech discovered that certain RNA molecules appear to act as enzymes, molecules that speed up, or catalyze, some reactions inside cells. Until this discovery biologists thought that all enzymes were proteins. Like other enzymes, these RNA catalysts, called ribozymes, show great specificity with respect to the reactions they speed up. The discovery of ribozymes added to the evidence that RNA, not DNA, was the earliest genetic material. Many scientists think that the earliest genetic molecule was simple in structure and capable of enzymatic activity. Furthermore, the molecule would necessarily exist in all organisms. The enzyme ribonuclease-P, which exists in all organisms, is made of protein and a form of RNA that has enzymatic activity. Based on this evidence, some scientists suspect that the RNA portion of ribonuclease-P may be the modern equivalent of the earliest genetic molecule, the molecule that first enabled replication to occur in primitive cells.


(ii) Brass (alloy)
Brass (alloy), alloy of copper and zinc. Harder than copper, it is ductile and can be hammered into thin leaves. Formerly any alloy of copper, especially one with tin, was called brass, and it is probable that the “brass” of ancient times was of copper and tin (see Bronze). The modern alloy came into use about the 16th century.
The malleability of brass varies with its composition and temperature and with the presence of foreign metals, even in minute quantities. Some kinds of brass are malleable only when cold, others only when hot, and some are not malleable at any temperature. All brass becomes brittle if heated to a temperature near the melting point. See Metalwork.
To prepare brass, zinc is mixed directly with copper in crucibles or in a reverberatory or cupola furnace. The ingots are rolled when cold. The bars or sheets can be rolled into rods or cut into strips that can be drawn out into wire.

Bronze
I INTRODUCTION
Bronze, alloy of copper and other elements. The term bronze was originally applied to an alloy of copper and tin, but it is now used to describe a variety of copper-rich alloys, including aluminum bronze, manganese bronze, and silicon bronze. 
Bronze was developed about 3500 BC by the ancient Sumerians in the Tigris-Euphrates Valley. Historians are unsure how this alloy was discovered, but believe that bronze may have first been made accidentally when rocks rich in ores of copper and tin were used to build campfire rings (enclosures for preventing fires from spreading). As fire heated these stones, the metals may have mixed, forming bronze. This theory is supported by the fact that bronze was not developed in North America, where natural tin and copper ores are rarely found in the same rocks. 
Around 3000 BC, bronze-making spread to Persia, where bronze objects such as ornaments, weapons, and chariot fittings have been found. Bronzes appeared in both Egypt and China around 2000 BC. The earliest bronze castings (objects made by pouring liquid metal into molds) were made in sand; later, clay and stone molds were used. Greek and Roman metalworkers added zinc, lead, and silver to bronze alloys for use in tools, weapons, coins, and art objects. During the Renaissance, the series of cultural movements that occurred in Europe in the 14th, 15th, and 16th centuries, bronze was used to make guns, and artists such as Michelangelo and Benvenuto Cellini used bronze for sculpting. See also Metalwork; Founding. 
Today, bronze is used for making products ranging from household items such as doorknobs, drawer handles, and clocks to industrial products such as engine parts, bearings, and wire.
II TYPES
Tin bronzes, the original bronzes, are alloys of copper and tin. They may contain from 5 to 22 percent tin. When a tin bronze contains at least 10 percent tin, the alloy is hard and has a low melting point. Leaded tin bronzes, used for casting, contain 5 to 10 percent tin, 1.5 to 25 percent lead, and 0 to 4.5 percent zinc. Manganese bronze contains 39 percent zinc, 1 percent tin, and 0.5 to 4 percent manganese. Aluminum bronze contains 5 to 10 percent aluminum. Silicon bronze contains 1.5 to 3 percent silicon. 
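As a rough illustration, the compositions above can be tabulated, with copper assumed to make up the balance in each alloy. The specific point values chosen within the quoted ranges are illustrative only:

```python
# Representative bronze compositions (percent by weight), with copper
# assumed to fill the remainder. Point values are picked from the
# ranges quoted in the text and are illustrative, not specifications.

ALLOYS = {
    "tin bronze":       {"tin": 10},
    "manganese bronze": {"zinc": 39, "tin": 1, "manganese": 2},
    "aluminum bronze":  {"aluminum": 8},
    "silicon bronze":   {"silicon": 2},
}

def copper_percent(alloy_name):
    """Copper fills whatever the alloying elements leave over."""
    return 100 - sum(ALLOYS[alloy_name].values())

for name in ALLOYS:
    print(f"{name}: {copper_percent(name)}% copper")
```

The tabulation makes the point that even the most heavily alloyed bronze here (manganese bronze) remains well over half copper.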
Bronze is made by heating and mixing the molten metal constituents. When the molten mixture is poured into a mold and begins to harden, the bronze expands and fills the entire mold. Once the bronze has cooled, it shrinks slightly and can easily be removed from the mold. 
III CHARACTERISTICS AND USES 
Bronze is stronger and harder than any other common metal alloy except steel. It does not easily break under stress, is corrosion resistant, and is easy to form into finished shapes by molding, casting, or machining (see also Engineering). 
The strongest bronze alloys contain tin and a small amount of lead. Tin, silicon, or aluminum is often added to bronze to improve its corrosion resistance. As bronze weathers, a brown or green film forms on the surface. This film inhibits corrosion. For example, many bronze statues erected hundreds of years ago show little sign of corrosion. Bronzes have a low melting point, a characteristic that makes them useful for brazing—that is, for joining two pieces of metal. When used as brazing material, bronze is heated above 430°C (800°F), but not above the melting point of the metals being joined. The molten bronze fuses to the other metals, forming a solid joint after cooling.
Lead is often added to make bronze easier to machine. Silicon bronze is machined into piston rings and screening, and because of its resistance to chemical corrosion it is also used to make chemical containers. Manganese bronze is used for valve stems and welding rods. Aluminum bronzes are used in engine parts and in marine hardware. 
Bronze containing 10 percent or more tin is most often rolled or drawn into wires, sheets, and pipes. Tin bronze, in a powdered form, is sintered (heated without being melted), pressed into a solid mass, saturated with oil, and used to make self-lubricating bearings. 

(iii) Lymph
Lymph, common name for the fluid carried in the lymphatic system. Lymph is diluted blood plasma containing large numbers of white blood cells, especially lymphocytes, and occasionally a few red blood cells. Because of the number of living cells it contains, lymph is classified as a fluid tissue.
Lymph diffuses into and is absorbed by the lymphatic capillaries from the spaces between the various cells constituting the tissues. In these spaces lymph is known as tissue fluid, plasma that has permeated the blood capillary walls and surrounded the cells to bring them nutriment and to remove their waste substances. The lymph contained in the lacteals of the small intestine is known as chyle.
The synovial fluid that lubricates joints is almost identical with lymph, as is the serous fluid found in the body and pleural cavities. The fluid contained within the semicircular canals of the ear, although known as endolymph, is not true lymph.

Blood
I INTRODUCTION
Blood, vital fluid found in humans and other animals that provides important nourishment to all body organs and tissues and carries away waste materials. Sometimes referred to as “the river of life,” blood is pumped from the heart through a network of blood vessels collectively known as the circulatory system.
An adult human has about 5 to 6 liters (1.3 to 1.6 gal) of blood, which is roughly 7 to 8 percent of total body weight. Infants and children have proportionately lower volumes of blood, in keeping with their smaller size. The volume of blood in an individual fluctuates. During dehydration, for example while running a marathon, blood volume decreases. Blood volume increases in circumstances such as pregnancy, when the mother’s blood needs to carry extra oxygen and nutrients to the baby.
II ROLE OF BLOOD
Blood carries oxygen from the lungs to all the other tissues in the body and, in turn, carries waste products, predominantly carbon dioxide, back to the lungs where they are released into the air. When oxygen transport fails, a person dies within a few minutes. Food that has been processed by the digestive system into smaller components such as proteins, fats, and carbohydrates is also delivered to the tissues by the blood. These nutrients provide the materials and energy needed by individual cells for metabolism, or the performance of cellular function. Waste products produced during metabolism, such as urea and uric acid, are carried by the blood to the kidneys, where they are transferred from the blood into urine and eliminated from the body. In addition to oxygen and nutrients, blood also transports special chemicals, called hormones, that regulate certain body functions. The movement of these chemicals enables one organ to control the function of another even though the two organs may be located far apart. In this way, the blood acts not just as a means of transportation but also as a communications system.
The blood is more than a pipeline for nutrients and information; it is also responsible for the activities of the immune system, helping fend off infection and fight disease. In addition, blood carries the means for stopping itself from leaking out of the body after an injury. The blood does this by carrying special cells and proteins, known as the coagulation system, that start to form clots within a matter of seconds after injury.
Blood is vital to maintaining a stable body temperature; in humans, body temperature normally fluctuates within a degree of 37.0° C (98.6° F). Heat production and heat loss in various parts of the body are balanced out by heat transfer via the bloodstream. This is accomplished by varying the diameter of blood vessels in the skin. When a person becomes overheated, the vessels dilate and an increased volume of blood flows through the skin. Heat dissipates through the skin, effectively lowering the body temperature. The increased flow of blood in the skin makes the skin appear pink or flushed. When a person is cold, the skin may become pale as the vessels narrow, diverting blood from the skin and reducing heat loss.
III COMPOSITION OF BLOOD
About 55 percent of the blood is composed of a liquid known as plasma. The rest of the blood is made of three major types of cells: red blood cells (also known as erythrocytes), white blood cells (leukocytes), and platelets (thrombocytes).
A Plasma
Plasma consists predominantly of water and salts. The kidneys carefully maintain the salt concentration in plasma because small changes in its concentration will cause cells in the body to function improperly. In extreme conditions this can result in seizures, coma, or even death. The pH of plasma, the common measurement of the plasma’s acidity, is also carefully controlled by the kidneys within the near-neutral range of 6.8 to 7.7. Plasma also contains other small molecules, including vitamins, minerals, nutrients, and waste products. The concentrations of all of these molecules must be carefully regulated.
Plasma is usually yellow in color due to proteins dissolved in it. However, after a person eats a fatty meal, that person’s plasma temporarily develops a milky color as the blood carries the ingested fats from the intestines to other organs of the body.
Plasma carries a large number of important proteins, including albumin, gamma globulin, and clotting factors. Albumin is the main protein in blood. It helps regulate the water content of tissues and blood. Gamma globulin is composed of tens of thousands of unique antibody molecules. Antibodies neutralize or help destroy infectious organisms. Each antibody is designed to target one specific invading organism. For example, chicken pox antibody will target chicken pox virus, but will leave an influenza virus unharmed. Clotting factors, such as fibrinogen, are involved in forming blood clots that seal leaks after an injury. Plasma that has had the clotting factors removed is called serum. Both serum and plasma are easy to store and have many medical uses.
B Red Blood Cells
Red blood cells make up almost 45 percent of the blood volume. Their primary function is to carry oxygen from the lungs to every cell in the body. Red blood cells are composed predominantly of hemoglobin, an iron-containing protein that captures oxygen molecules as the blood moves through the lungs and gives blood its red color. As blood passes through body tissues, hemoglobin releases the oxygen to cells throughout the body. Red blood cells are so packed with hemoglobin that they lack many components, including a nucleus, found in other cells.
The membrane, or outer layer, of the red blood cell is flexible, like a soap bubble, and is able to bend in many directions without breaking. This is important because the red blood cells must be able to pass through the tiniest blood vessels, the capillaries, to deliver oxygen wherever it is needed. The capillaries are so narrow that the red blood cells, normally shaped like a disk with a concave top and bottom, must bend and twist to maneuver single file through them.
C Blood Type
There are several types of red blood cells, and each person has red blood cells of just one type. Blood type is determined by the presence or absence of substances, known as recognition markers or antigens, on the surface of the red blood cell. Type A blood has just marker A on its red blood cells while type B has only marker B. If neither A nor B markers are present, the blood is type O. If both the A and B markers are present, the blood is type AB. Another marker, the Rh antigen (also known as the Rh factor), is present or absent regardless of the presence of A and B markers. If the Rh marker is present, the blood is said to be Rh positive, and if it is absent, the blood is Rh negative. One of the most common blood types is A positive, that is, blood that has an A marker and also an Rh marker. More than 20 additional red blood cell types have been discovered.
Blood typing is important for many medical reasons. If a person loses a lot of blood, that person may need a blood transfusion to replace some of the lost red blood cells. Since everyone makes antibodies against substances that are foreign, or not of their own body, transfused blood must be matched so as not to contain these substances. For example, a person who is blood type A positive will not make antibodies against the A or Rh markers, but will make antibodies against the B marker, which is not on that person’s own red blood cells. If blood containing the B marker (from types B positive, B negative, AB positive, or AB negative) is transfused into this person, then the transfused red blood cells will be rapidly destroyed by the patient’s anti-B antibodies. In this case, the transfusion will do the patient no good and may even result in serious harm. For a successful blood transfusion into an A positive blood type individual, blood that is type O negative, O positive, A negative, or A positive is needed because these blood types will not be attacked by the patient’s anti-B antibodies.
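The matching rule described above can be sketched as a small illustrative program. This is a simplification for illustration only: it reduces blood typing to the three markers discussed here (A, B, and Rh), while real cross-matching considers many more antigens; the function names are invented.

```python
def markers(blood_type):
    """Return the set of red-cell markers (antigens) for a type like 'A+' or 'O-'."""
    abo, rh = blood_type[:-1], blood_type[-1]
    m = set()
    if "A" in abo:
        m.add("A")
    if "B" in abo:
        m.add("B")
    if rh == "+":
        m.add("Rh")
    return m

def is_compatible(donor, recipient):
    """A recipient makes antibodies against any marker absent from their own
    cells, so donor red cells must carry no marker the recipient lacks."""
    return markers(donor) <= markers(recipient)

# An A positive patient can safely receive O-, O+, A-, or A+ blood:
safe = [d for d in ["O-", "O+", "A-", "A+", "B+", "AB-", "AB+"]
        if is_compatible(d, "A+")]
print(safe)  # ['O-', 'O+', 'A-', 'A+']
```

Running the check reproduces the compatibility list given in the paragraph above: only blood lacking the B marker is accepted by an A positive recipient.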
D White Blood Cells
White blood cells only make up about 1 percent of blood, but their small number belies their immense importance. They play a vital role in the body’s immune system—the primary defense mechanism against invading bacteria, viruses, fungi, and parasites. They often accomplish this goal through direct attack, which usually involves identifying the invading organism as foreign, attaching to it, and then destroying it. This process is referred to as phagocytosis.
White blood cells also produce antibodies, which are released into the circulating blood to target and attach to foreign organisms. After attachment, the antibody may neutralize the organism, or it may elicit help from other immune system cells to destroy the foreign substance. There are several varieties of white blood cells, including neutrophils, monocytes, and lymphocytes, all of which interact with one another and with plasma proteins and other cell types to form the complex and highly effective immune system.
E Platelets and Clotting
The smallest cells in the blood are the platelets, which are designed for a single purpose—to begin the process of coagulation, or forming a clot, whenever a blood vessel is broken. As soon as an artery or vein is injured, the platelets in the area of the injury begin to clump together and stick to the edges of the cut. They also release messengers into the blood that perform a variety of functions: constricting the blood vessels to reduce bleeding, attracting more platelets to the area to enlarge the platelet plug, and initiating the work of plasma-based clotting factors, such as fibrinogen. Through a complex mechanism involving many steps and many clotting factors, the plasma protein fibrinogen is transformed into long, sticky threads of fibrin. Together, the platelets and the fibrin create an intertwined meshwork that forms a stable clot. This self-sealing aspect of the blood is crucial to survival.
IV PRODUCTION AND ELIMINATION OF BLOOD CELLS
Blood is produced in the bone marrow, a tissue in the central cavity inside almost all of the bones in the body. In infants, the marrow in most of the bones is actively involved in blood cell formation. By later adult life, active blood cell formation gradually ceases in the bones of the arms and legs and concentrates in the skull, spine, ribs, and pelvis.
Red blood cells, white blood cells, and platelets grow from a single precursor cell, known as a hematopoietic stem cell. Remarkably, experiments have suggested that as few as 10 stem cells can, in four weeks, multiply into 30 trillion red blood cells, 30 billion white blood cells, and 1.2 trillion platelets—enough to replace every blood cell in the body.
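As a back-of-the-envelope check on these figures (a hypothetical calculation, not from the source), growing 10 cells into roughly 30 trillion red cells in four weeks corresponds to about 41 population doublings, or roughly one and a half doublings per day:

```python
import math

# Rough check: how many doublings take 10 stem cells to ~30 trillion
# red blood cells, and what daily rate does a 4-week window imply?
start_cells, target_cells, days = 10, 30e12, 28

doublings = math.log2(target_cells / start_cells)
print(f"{doublings:.1f} doublings, about {doublings / days:.2f} per day")
# 41.4 doublings, about 1.48 per day
```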
Red blood cells have the longest average life span of any of the cellular elements of blood. A red blood cell lives 100 to 120 days after being released from the marrow into the blood. Over that period of time, red blood cells gradually age. Spent cells are removed by the spleen and, to a lesser extent, by the liver. The spleen and the liver also remove any red blood cells that become damaged, regardless of their age. The body efficiently recycles many components of the damaged cells, including parts of the hemoglobin molecule, especially the iron contained within it.
The majority of white blood cells have a relatively short life span. They may survive only 18 to 36 hours after being released from the marrow. However, some of the white blood cells are responsible for maintaining what is called immunologic memory. These memory cells retain knowledge of what infectious organisms the body has previously been exposed to. If one of those organisms returns, the memory cells initiate an extremely rapid response designed to kill the foreign invader. Memory cells may live for years or even decades before dying.
Memory cells make immunizations possible. An immunization, also called a vaccination or an inoculation, is a method of using a vaccine to make the human body immune to certain diseases. A vaccine consists of an infectious agent that has been weakened or killed in the laboratory so that it cannot produce disease when injected into a person, but can spark the immune system to generate memory cells and antibodies specific for the infectious agent. If the infectious agent should ever invade that vaccinated person in the future, these memory cells will direct the cells of the immune system to target the invader before it has the opportunity to cause harm.
Platelets have a life span of seven to ten days in the blood. They either participate in clot formation during that time or, when they have reached the end of their lifetime, are eliminated by the spleen and, to a lesser extent, by the liver.
V BLOOD DISEASES
Many diseases are caused by abnormalities in the blood. These diseases are categorized by which component of the blood is affected.
A Red Blood Cell Diseases
One of the most common blood diseases worldwide is anemia, which is characterized by an abnormally low number of red blood cells or low levels of hemoglobin. One of the major symptoms of anemia is fatigue, due to the failure of the blood to carry enough oxygen to all of the tissues.
The most common type of anemia, iron-deficiency anemia, occurs because the marrow fails to produce sufficient red blood cells. When insufficient iron is available to the bone marrow, it slows down its production of hemoglobin and red blood cells. The most common causes of iron-deficiency anemia are chronic blood loss and the consequent chronic loss of iron, for example from gastrointestinal bleeding caused by certain infections. Adding supplemental iron to the diet is often sufficient to cure iron-deficiency anemia.
Some anemias are the result of increased destruction of red blood cells, as in the case of sickle-cell anemia, a genetic disease most common in persons of African ancestry. The red blood cells of sickle-cell patients assume an unusual crescent shape, causing them to become trapped in some blood vessels, blocking the flow of other blood cells to tissues and depriving them of oxygen.
B White Blood Cell Diseases
Some white blood cell diseases are characterized by an insufficient number of white blood cells. This can be caused by the failure of the bone marrow to produce adequate numbers of normal white blood cells, or by diseases that lead to the destruction of crucial white blood cells. These conditions result in severe immune deficiencies characterized by recurrent infections.
Any disease in which excess white blood cells are produced, particularly immature white blood cells, is called leukemia, or blood cancer. Many cases of leukemia are linked to gene abnormalities, resulting in unchecked growth of immature white blood cells. If this growth is not halted, it often results in the death of the patient. These genetic abnormalities are not inherited in the vast majority of cases, but rather occur after birth. Although some causes of these abnormalities are known, for example exposure to high doses of radiation or the chemical benzene, most remain poorly understood.
Treatment for leukemia typically involves the use of chemotherapy, in which strong drugs are used to target and kill leukemic cells, permitting normal cells to regenerate. In some cases, bone marrow transplants are effective. Much progress has been made over the last 30 years in the treatment of this disease. In one type of childhood leukemia, more than 80 percent of patients can now be cured of their disease.
C Coagulation Diseases
One disease of the coagulation system is hemophilia, a genetic bleeding disorder in which one of the plasma clotting factors, usually factor VIII, is produced in abnormally low quantities, resulting in uncontrolled bleeding from minor injuries. Although individuals with hemophilia are able to form a good initial platelet plug when blood vessels are damaged, they are not easily able to form the meshwork that holds the clot firmly intact. As a result, bleeding may occur some time after the initial traumatic event. Treatment for hemophilia relies on giving transfusions of factor VIII. Factor VIII can be isolated from the blood of normal blood donors but it also can be manufactured in a laboratory through a process known as gene cloning.
VI BLOOD BANKS
The Red Cross and a number of other organizations run programs, known as blood banks, to collect, store, and distribute blood and blood products for transfusions. When blood is donated, its blood type is determined so that only appropriately matched blood is given to patients needing a transfusion. Before using the blood, the blood bank also tests it for the presence of disease-causing organisms, such as hepatitis viruses and human immunodeficiency virus (HIV), the cause of acquired immunodeficiency syndrome (AIDS). This blood screening dramatically reduces, but does not fully eliminate, the risk to the recipient of acquiring a disease through a blood transfusion. Blood donation, which is extremely safe, generally involves giving about 400 to 500 ml (about 1 pt) of blood, which is only about a tenth of a person’s total blood volume.
VII BLOOD IN NONHUMANS
One-celled organisms have no need for blood. They are able to absorb nutrients, expel wastes, and exchange gases with their environment directly. Simple multicelled marine animals, such as sponges, jellyfishes, and anemones, also do not have blood. They use the seawater that bathes their cells to perform the functions of blood. However, all more complex multicellular animals have some form of a circulatory system using blood. In some invertebrates, there are no cells analogous to red blood cells. Instead, hemoglobin, or the related copper compound hemocyanin, circulates dissolved in the plasma.
The blood of complex multicellular animals tends to be similar to human blood, but there are also some significant differences, typically at the cellular level. For example, fish, amphibians, and reptiles possess red blood cells that have a nucleus, unlike the red blood cells of mammals. The immune system of invertebrates is more primitive than that of vertebrates, lacking the functionality associated with the white blood cell and antibody system found in mammals. Some arctic fish species produce proteins in their blood that act as a type of antifreeze, enabling them to survive in environments where the blood of other animals would freeze. Nonetheless, the essential transportation, communication, and protection functions that make blood essential to the continuation of life occur throughout much of the animal kingdom.
(IV)
Heavy water
Almost all the hydrogen in water has an atomic weight of 1. The American chemist Harold Clayton Urey discovered in 1932 the presence in water of a small amount (1 part in 6000) of so-called heavy water, or deuterium oxide (D2O); deuterium is the hydrogen isotope with an atomic weight of 2. In 1951 the American chemist Aristid Grosse discovered that naturally occurring water contains also minute traces of tritium oxide (T2O); tritium is the hydrogen isotope with an atomic weight of 3. See Atom.

Hard Water
Hardness of natural waters is caused largely by calcium and magnesium salts and to a small extent by iron, aluminum, and other metals. Hardness resulting from the bicarbonates and carbonates of calcium and magnesium is called temporary hardness and can be removed by boiling, which also sterilizes the water. The residual hardness is known as noncarbonate, or permanent, hardness. The methods of softening noncarbonate hardness include the addition of sodium carbonate and lime, and filtration through natural or artificial zeolites, which absorb the hardness-producing metallic ions and release sodium ions to the water. See Ion Exchange; Zeolite. Sequestering agents in detergents serve to inactivate the substances that make water hard.
Iron, which causes an unpleasant taste in drinking water, may be removed by aeration and sedimentation or by passing the water through iron-removing zeolite filters, or the iron may be stabilized by addition of such salts as polyphosphates. For use in laboratory applications, water is either distilled or demineralized by passing it through ion-absorbing compounds.
(v)
Smallpox, highly contagious viral disease that is often fatal. The disease is chiefly characterized by a skin rash that develops on the face, chest, back, and limbs. Over the course of a week the rash develops into pustular (pus-filled) pimples resembling boils. In extreme cases the pustular pimples run together—usually an indication of a fatal infection. Death may result from a secondary bacterial infection of the pustules, from cell damage caused by the viral infection, or from heart attack or shock. In the latter stages of nonfatal cases, smallpox pustules become crusted, often leaving the survivor with permanent, pitted scars.
Smallpox is caused by a virus. An infected person spreads virus particles into the air in the form of tiny droplets emitted from the mouth by speaking, coughing, or simply breathing. The virus can then infect anyone who inhales the droplets. By this means, smallpox can spread extremely rapidly from person to person.
Smallpox has afflicted humanity for thousands of years, causing epidemics from ancient times through the 20th century. No one is certain where the smallpox virus came from, but scientists speculate that several thousand years ago the virus made a trans-species jump into humans from an animal—likely a rodent species such as a mouse. The disease probably did not become established among humans until the beginnings of agriculture gave rise to the first large settlements in the Nile valley (northeastern Africa) and Mesopotamia (now eastern Syria, southeastern Turkey, and Iraq) more than 10,000 years ago.
Over the next several centuries smallpox established itself as a widespread disease in Europe, Asia, and across Africa. During the 16th and 17th centuries, a time when Europeans made journeys of exploration and conquest to the Americas and other continents, smallpox went with them. By 1518 the disease reached the Americas aboard a Spanish ship that landed at the island of Hispaniola (now the Dominican Republic and Haiti) in the West Indies. Before long smallpox had killed half of the Taíno people, the native population of the island. The disease followed the Spanish conquistadors into Mexico and Central America in 1520. With fewer than 500 men, the Spanish explorer Hernán Cortés was able to conquer the great Aztec Empire under the emperor Montezuma in what is now Mexico. One of Cortés's men was infected with smallpox, triggering an epidemic that ultimately killed an estimated 3 million Aztecs, one-third of the population. A similar path of devastation was left among the people of the Inca Empire of South America. Smallpox killed the Inca emperor Huayna Capac in 1525, along with an estimated 100,000 Incas in the capital city of Cuzco. The Incas and Aztecs are only two of many examples of smallpox cutting a swath through a native population in the Americas, easing the way for Europeans to conquer and colonize new territory. It can truly be said that smallpox changed history.
Yet the story of smallpox is also the story of great biomedical advancement and of ultimate victory. As the result of a worldwide effort of vaccination and containment, the last naturally occurring infection of smallpox occurred in 1977. Stockpiles of the virus now exist only in secured laboratories. Some experts, however, are concerned about the potential use of smallpox as a weapon of bioterrorism. Thus, despite being deliberately and successfully eradicated, smallpox may still pose a threat to humanity.

Measles, also rubeola, acute, highly contagious, fever-producing disease caused by a filterable virus, different from the virus that causes the less serious disease German measles, or rubella. Measles is characterized by small red dots appearing on the surface of the skin, irritation of the eyes (especially on exposure to light), coughing, and a runny nose. About 12 days after first exposure, the fever, sneezing, and runny nose appear. Coughing and swelling of the neck glands often follow. Four days later, red spots appear on the face or neck and then on the trunk and limbs. In 2 or 3 days the rash subsides and the fever falls; some peeling of the involved skin areas may take place. Infection of the middle ear may also occur.
Measles was formerly one of the most common childhood diseases. Since the development of an effective vaccine in 1963, it has become much less frequent. By 1988 annual measles cases in the United States had been reduced to fewer than 3,500, compared with about 500,000 per year in the early 1960s. However, the number of new cases jumped to more than 18,000 in 1989 and to nearly 28,000 in 1990. Most of these cases occurred among inner-city preschool children and recent immigrants, but adolescents and young adults, who may have lost immunity (see Immunization) from their childhood vaccinations, also experienced an increase. The number of new cases declined rapidly in the 1990s and by 1999 fewer than 100 cases were reported. The reasons for this resurgence and subsequent decline are not clearly understood. In other parts of the world measles is still a common childhood disease and according to the World Health Organization (WHO), about 1 million children die from measles each year. In the U.S., measles is rarely fatal; should the virus spread to the brain, however, it can cause death or brain damage (see Encephalitis).
No specific treatment for measles exists. Patients are kept isolated from other susceptible individuals, usually resting in bed, and are treated with aspirin, cough syrup, and skin lotions to lessen fever, coughing, and itching. The disease usually confers immunity after one attack, and an immune pregnant woman passes the antibody in the globulin fraction of the blood serum, through the placenta, to her fetus.

(vi)
PIG IRON
The basic materials used for the manufacture of pig iron are iron ore, coke, and limestone. The coke is burned as a fuel to heat the furnace; as it burns, the coke gives off carbon monoxide, which combines with the iron oxides in the ore, reducing them to metallic iron. This is the basic chemical reaction in the blast furnace; it has the equation: Fe2O3 + 3CO = 3CO2 + 2Fe. The limestone in the furnace charge is used as an additional source of carbon monoxide and as a “flux” to combine with the infusible silica present in the ore to form fusible calcium silicate. Without the limestone, iron silicate would be formed, with a resulting loss of metallic iron. Calcium silicate plus other impurities form a slag that floats on top of the molten metal at the bottom of the furnace. Ordinary pig iron as produced by blast furnaces contains iron, about 92 percent; carbon, 3 or 4 percent; silicon, 0.5 to 3 percent; manganese, 0.25 to 2.5 percent; phosphorus, 0.04 to 2 percent; and a trace of sulfur.
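The stoichiometry of the reduction equation above also fixes the theoretical iron yield of the ore. The following is a minimal sketch assuming pure hematite (Fe2O3) and complete reduction, an idealization that real ores and furnaces do not reach:

```python
# Mass balance for Fe2O3 + 3CO -> 3CO2 + 2Fe, assuming pure hematite
# and complete reduction (real ore also contains gangue minerals).
FE, O = 55.845, 15.999          # atomic masses in g/mol

m_fe2o3 = 2 * FE + 3 * O        # molar mass of hematite, ~159.7 g/mol
m_fe = 2 * FE                   # iron recovered per formula unit

fraction_fe = m_fe / m_fe2o3    # hematite is about 70% iron by mass
print(f"Iron per tonne of pure Fe2O3: {1000 * fraction_fe:.0f} kg")
# Iron per tonne of pure Fe2O3: 699 kg
```

Real blast-furnace yields are lower, since the ore is not pure and some iron is lost to the slag, which is why the flux chemistry described above matters.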
A typical blast furnace consists of a cylindrical steel shell lined with a refractory, which is any nonmetallic substance such as firebrick. The shell is tapered at the top and at the bottom and is widest at a point about one-quarter of the distance from the bottom. The lower portion of the furnace, called the bosh, is equipped with several tubular openings or tuyeres through which the air blast is forced. Near the bottom of the bosh is a hole through which the molten pig iron flows when the furnace is tapped, and above this hole, but below the tuyeres, is another hole for draining the slag. The top of the furnace, which is about 27 m (about 90 ft) in height, contains vents for the escaping gases, and a pair of round hoppers closed with bell-shaped valves through which the charge is introduced into the furnace. The materials are brought up to the hoppers in small dump cars or skips that are hauled up an inclined external skip hoist.
Blast furnaces operate continuously. The raw material to be fed into the furnace is divided into a number of small charges that are introduced into the furnace at 10- to 15-min intervals. Slag is drawn off from the top of the melt about once every 2 hr, and the iron itself is drawn off or tapped about five times a day.
The air used to supply the blast in a blast furnace is preheated to temperatures between approximately 540° and 870° C (approximately 1,000° and 1,600° F). The heating is performed in stoves, cylinders containing networks of firebrick. The bricks in the stoves are heated for several hours by burning blast-furnace gas, the waste gases from the top of the furnace. Then the flame is turned off and the air for the blast is blown through the stove. The weight of air used in the operation of a blast furnace exceeds the total weight of the other raw materials employed.
An important development in blast furnace technology, the pressurizing of furnaces, was introduced after World War II. By “throttling” the flow of gas from the furnace vents, the pressure within the furnace may be built up to 1.7 atm or more. The pressurizing technique makes possible better combustion of the coke and higher output of pig iron. The output of many blast furnaces can be increased 25 percent by pressurizing. Experimental installations have also shown that the output of blast furnaces can be increased by enriching the air blast with oxygen.
The process of tapping consists of knocking out a clay plug from the iron hole near the bottom of the bosh and allowing the molten metal to flow into a clay-lined runner and then into a large, brick-lined metal container, which may be either a ladle or a rail car capable of holding as much as 100 tons of metal. Any slag that may flow from the furnace with the metal is skimmed off before it reaches the container. The container of molten pig iron is then transported to the steelmaking shop.
Modern-day blast furnaces are operated in conjunction with basic oxygen furnaces and sometimes the older open-hearth furnaces as part of a single steel-producing plant. In such plants the molten pig iron is used to charge the steel furnaces. The molten metal from several blast furnaces may be mixed in a large ladle before it is converted to steel, to minimize any irregularities in the composition of the individual melts.

STAINLESS STEEL
Stainless steels contain chromium, nickel, and other alloying elements that keep them bright and rust resistant in spite of moisture or the action of corrosive acids and gases. Some stainless steels are very hard; some have unusual strength and will retain that strength for long periods at extremely high and low temperatures. Because of their shining surfaces, architects often use them for decorative purposes. Stainless steels are used for the pipes and tanks of petroleum refineries and chemical plants, for jet planes, and for space capsules. Surgical instruments and equipment are made from these steels, and they are also used to patch or replace broken bones because the steels can withstand the action of body fluids. In kitchens and in plants where food is prepared, handling equipment is often made of stainless steel because it does not taint the food and can be easily cleaned.

(VII)
Alloy, substance composed of two or more metals. Alloys, like pure metals, possess metallic luster and conduct heat and electricity well, although not generally as well as do the pure metals of which they are formed. Compounds that contain both a metal or metals and certain nonmetals, particularly those containing carbon, are also called alloys. The most important of these is steel. Simple carbon steels consist of about 0.5 percent manganese and up to 0.8 percent carbon, with the remaining material being iron.
An alloy may consist of an intermetallic compound, a solid solution, an intimate mixture of minute crystals of the constituent metallic elements, or any combination of solutions or mixtures of the foregoing. Intermetallic compounds, such as NaAu2, CuSn, and CuAl2, do not follow the ordinary rules of valency. They are generally hard and brittle; although they have not been important in the past where strength is required, many new developments have made such compounds increasingly important. Alloys consisting of solutions or mixtures of two metals generally have lower melting points than do the pure constituents. A mixture with a melting point lower than that of any other mixture of the same constituents is called a eutectic. The eutectoid, the solid-phase analog of the eutectic, frequently has better physical characteristics than do alloys of different proportions.
The properties of alloys are frequently far different from those of their constituent elements, and such properties as strength and corrosion resistance may be considerably greater for an alloy than for any of the separate metals. For this reason, alloys are more generally used than pure metals. Steel is stronger and harder than wrought iron, which is approximately pure iron, and is used in far greater quantities. The alloy steels, mixtures of steel with such metals as chromium, manganese, molybdenum, nickel, tungsten, and vanadium, are stronger and harder than steel itself, and many of them are also more corrosion-resistant than iron or steel. An alloy can often be made to match a predetermined set of characteristics. An important case in which particular characteristics are necessary is the design of rockets, spacecraft, and supersonic aircraft. The materials used in these vehicles and their engines must be light in weight, very strong, and able to sustain very high temperatures. To withstand these high temperatures and reduce the overall weight, lightweight, high-strength alloys of aluminum, beryllium, and titanium have been developed. To resist the heat generated during reentry into the atmosphere of the earth, alloys containing heat-resistant metals such as tantalum, niobium, tungsten, cobalt, and nickel are being used in space vehicles.
A wide variety of special alloys containing metals such as beryllium, boron, niobium, hafnium, and zirconium, which have particular nuclear absorption characteristics, are used in nuclear reactors. Niobium-tin alloys are used as superconductors at extremely low temperatures. Special copper, nickel, and titanium alloys, designed to resist the corrosive effects of boiling salt water, are used in desalination plants.
Historically, most alloys have been prepared by mixing the molten materials. More recently, powder metallurgy has become important in the preparation of alloys with special characteristics. In this process, the alloys are prepared by mixing dry powders of the materials, squeezing them together under high pressure, and then heating them to temperatures just below their melting points. The result is a solid, homogeneous alloy. Mass-produced products may be prepared by this technique at great savings in cost. Among the alloys made possible by powder metallurgy are the cermets. These alloys of metal and carbon (carbides), boron (borides), oxygen (oxides), silicon (silicides), and nitrogen (nitrides) combine the advantages of the high-temperature strength, stability, and oxidation resistance of the ceramic compound with the ductility and shock resistance of the metal. Another alloying technique is ion implantation, which has been adapted from the processes used to produce computer chips; beams of ions of carbon, nitrogen, and other elements are fired into selected metals in a vacuum chamber to produce a strong, thin layer of alloy on the metal surface. Bombarding titanium with nitrogen, for example, can produce a superior alloy for prosthetic implants.
Sterling silver, 14-karat gold, white gold, and platinum-iridium are precious metal alloys. Babbitt metal, brass, bronze, Dow-metal, German silver, gunmetal, Monel metal, pewter, and solder are alloys of less precious metals. Commercial aluminum is, because of impurities, actually an alloy. Alloys of mercury with other metals are called amalgams.

Amalgam
Mercury combines with all the common metals except iron and platinum to form alloys that are called amalgams. In one method of extracting gold and silver from their ores, the metals are combined with mercury to make them dissolve; the mercury is then removed by distillation. This method is no longer commonly used, however.

(viii) Isotope, one of two or more species of atom having the same atomic number, hence constituting the same element, but differing in mass number. As atomic number is equivalent to the number of protons in the nucleus, and mass number is the sum total of the protons plus the neutrons in the nucleus, isotopes of the same element differ from one another only in the number of neutrons in their nuclei. See Atom.
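The arithmetic in this definition can be shown with a short worked example. The helper function below is purely illustrative, not from any standard library; the nuclides chosen are standard examples.

```python
# Neutrons = mass number (protons + neutrons) minus atomic number (protons).
def neutron_count(mass_number, atomic_number):
    return mass_number - atomic_number

# Carbon has atomic number 6; carbon-12 and carbon-14 are isotopes:
# the same element, differing only in the number of neutrons.
print(neutron_count(12, 6))  # 6 neutrons in carbon-12
print(neutron_count(14, 6))  # 8 neutrons in carbon-14
```

Because the atomic number is fixed for a given element, every difference in mass number between its isotopes is a difference in neutron count alone.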

Isobars
i•so•bar [ssə br]
(plural i•so•bars) 
noun 
1. line showing weather patterns: a line drawn on a weather map that connects places with equal atmospheric pressure. Isobars are often used collectively to indicate the movement or formation of weather systems. 
2. atom with same mass number: one of two or more atoms or elements that have the same mass number but different atomic numbers 
[Mid-19th century. < Greek isobaros "of equal weight"] 

Microsoft® Encarta® 2006. © 1993-2005 Microsoft Corporation. All rights reserved.
(ix)
Vein (anatomy)
Vein (anatomy), blood vessel that conducts deoxygenated blood from the capillaries back to the heart. Three exceptions to this description exist: the pulmonary veins return blood from the lungs, where it has been oxygenated, to the heart; the portal veins receive blood from the pyloric, gastric, cystic, superior mesenteric, and splenic veins and, entering the liver, break up into small branches that pass through all parts of that organ; and the umbilical vein conveys oxygenated blood from the placenta to the fetus. Veins enlarge as they proceed, gathering blood from their tributaries. They finally pour the blood through the superior and inferior venae cavae into the right atrium of the heart. Their coats are similar to those of the arteries, but thinner, and often transparent. See Circulatory System; Heart; Varicose Vein.

Artery, one of the tubular vessels that conveys blood from the heart to the tissues of the body. Two arteries have direct connection with the heart: (1) the aorta, which, with its branches, conveys oxygenated blood from the left ventricle to every part of the body; and (2) the pulmonary artery, which conveys blood from the right ventricle to the lungs, whence it is returned bearing oxygen to the left side of the heart (see Heart: Structure and Function). Arteries in their ultimate minute branchings are connected with the veins by capillaries. They are named usually from the part of the body where they are found, as the brachial (arm) or the metacarpal (wrist) artery; or from the organ which they supply, as the hepatic (liver) or the ovarian artery. The facial artery is the branch of the external carotid artery that passes up over the lower jaw and supplies the superficial portion of the face; the hemorrhoidal arteries are three vessels that supply the lower end of the rectum; the intercostal arteries are the arteries that supply the space between the ribs; the lingual artery is the branch of the external carotid artery that supplies the tongue. The arteries expand and then constrict with each beat of the heart, a rhythmic movement that may be felt as the pulse.
Disorders of the arteries may involve inflammation, infection, or degeneration of the walls of the arterial blood vessels. The most common arterial disease, and the one which is most often a contributory cause of death, particularly in old people, is arteriosclerosis, known popularly as hardening of the arteries. The hardening usually is preceded by atherosclerosis, an accumulation of fatty deposits on the inner lining of the arterial wall. The deposits reduce the normal flow of the blood through the artery. One of the substances associated with atherosclerosis is cholesterol. As arteriosclerosis progresses, calcium is deposited and scar tissue develops, causing the wall to lose its elasticity. Localized dilatation of the arterial wall, called an aneurysm, may also develop. Arteriosclerosis may affect any or all of the arteries of the body. If the blood vessels supplying the heart muscle are affected, the disease may lead to a painful condition known as angina pectoris. See Heart: Heart Diseases.
The presence of arteriosclerosis in the wall of an artery can precipitate formation of a clot, or thrombus (see Thrombosis). Treatment consists of clot-dissolving enzymes called urokinase and streptokinase, which were approved for medical use in 1979. Studies indicate that compounds such as aspirin and sulfinpyrazone, which inhibit platelet reactivity, may act to prevent formation of a thrombus, but whether they can or should be taken in tolerable quantities over a long period of time for this purpose has not yet been determined.
Embolism is the name given to the obstruction of an artery by a clot carried to it from another part of the body. Such floating clots may be caused by arteriosclerosis, but are most commonly a consequence of the detachment of a mass of fibrin from a diseased heart. Any artery may be obstructed by embolism; the consequences are most serious in the brain, the retina, and the limbs. In the larger arteries of the brain, an embolism may cause a stroke.

Aorta, principal artery of the body that carries oxygenated blood to most other arteries in the body. In humans the aorta rises from the left ventricle (lower chamber) of the heart, arches back and downward through the thorax, passes through the diaphragm into the abdomen, and divides into the right and left iliac arteries at about the level of the fourth lumbar vertebra. The aorta gives rise to the coronary arteries, which supply the heart muscle with blood, and to the innominate, subclavian, and carotid arteries, which supply the head and arms. The descending part of the aorta gives rise, in the thorax, to the intercostal arteries that branch in the body wall. In the abdomen it gives off the coeliac artery, which divides into the gastric, hepatic, and splenic arteries, which supply the stomach, liver, and spleen, respectively; the mesenteric arteries to the intestines; the renal arteries to the kidneys; and small branches to the body wall and to reproductive organs. The aorta is subject to a condition known as atherosclerosis, in which fat deposits attach to the aortic walls. If left untreated, this condition may lead to hypertension or to an aneurysm (a swelling of the vessel wall), which can be fatal.

VALVES
In passing through the system, blood pumped by the heart follows a winding course through the right chambers of the heart, into the lungs, where it picks up oxygen, and back into the left chambers of the heart. From these it is pumped into the main artery, the aorta, which branches into increasingly smaller arteries until it passes through the smallest, known as arterioles. Beyond the arterioles, the blood passes through a vast number of tiny, thin-walled vessels called capillaries. Here, the blood gives up its oxygen and its nutrients to the tissues and absorbs from them carbon dioxide and other waste products of metabolism. The blood completes its circuit by passing through small veins that join to form increasingly larger vessels until it reaches the largest veins, the inferior and superior venae cavae, which return it to the right side of the heart. Blood is propelled mainly by contractions of the heart; contractions of skeletal muscle also contribute to circulation. Valves in the heart and in the veins ensure its flow in one direction.

Q10:
Gland
Gland, any structure of animals or plants that produces chemical secretions or excretions. Glands are classified by shape, such as tubular and saccular, or saclike, and by structure, such as simple and compound. Examples of the simple tubular and the simple saccular glands are, respectively, the sweat and the sebaceous glands (see Skin). The kidney is a compound tubular gland, and the tear-producing glands are compound saccular (see Eye). The so-called lymph glands are erroneously named and are in reality nodes (see Lymphatic System). “Swollen glands” are actually infected lymph nodes.
Glands are of two principal types: (1) those of internal secretion, called endocrine, and (2) those of external secretion, called exocrine. Some glands such as the pancreas produce both internal and external secretions. Because endocrine glands produce and release hormones (see Hormone) directly into the bloodstream without passing through a canal, they are called ductless. For the functions and diseases of endocrine glands, see Endocrine System.
In both animals and plants, exocrine glands secrete chemical substances for a variety of purposes. In plants, they produce water, protective sticky fluids, and nectars. The materials for the eggs of birds, the shells of mussels, the cocoons of caterpillars and silkworms, the webs of spiders, and the wax of honeycombs are other examples of exocrine secretions.

Endocrine System
I INTRODUCTION
Endocrine System, group of specialized organs and body tissues that produce, store, and secrete chemical substances known as hormones. As the body's chemical messengers, hormones transfer information and instructions from one set of cells to another. Because of the hormones they produce, endocrine organs have a great deal of influence over the body. Among their many jobs are regulating the body's growth and development, controlling the function of various tissues, supporting pregnancy and other reproductive functions, and regulating metabolism.
Endocrine organs are sometimes called ductless glands because they have no ducts connecting them to specific body parts. The hormones they secrete are released directly into the bloodstream. In contrast, the exocrine glands, such as the sweat glands or the salivary glands, release their secretions directly to target areas—for example, the skin or the inside of the mouth. Some of the body's glands are described as endo-exocrine glands because they secrete hormones as well as other types of substances. Even some nonglandular tissues produce hormone-like substances—nerve cells produce chemical messengers called neurotransmitters, for example.
The earliest reference to the endocrine system comes from ancient Greece, in about 400 BC. However, it was not until the 16th century that accurate anatomical descriptions of many of the endocrine organs were published. Research during the 20th century has vastly improved our understanding of hormones and how they function in the body. Today, endocrinology, the study of the endocrine glands, is an important branch of modern medicine. Endocrinologists are medical doctors who specialize in researching and treating disorders and diseases of the endocrine system.
II COMPONENTS OF THE ENDOCRINE SYSTEM
The primary glands that make up the human endocrine system are the hypothalamus, pituitary, thyroid, parathyroid, adrenal, pineal body, and reproductive glands—the ovary and testis. The pancreas, an organ often associated with the digestive system, is also considered part of the endocrine system. In addition, some nonendocrine organs are known to actively secrete hormones. These include the brain, heart, lungs, kidneys, liver, thymus, skin, and placenta. Almost all body cells can either produce or convert hormones, and some secrete hormones. For example, glucagon, a hormone that raises glucose levels in the blood when the body needs extra energy, is made in the pancreas but also in the wall of the gastrointestinal tract. However, it is the endocrine glands that are specialized for hormone production. They efficiently manufacture chemically complex hormones from simple chemical substances—for example, amino acids and carbohydrates—and they regulate their secretion more efficiently than any other tissues.
The hypothalamus, found deep within the brain, directly controls the pituitary gland. It is sometimes described as the coordinator of the endocrine system. When information reaching the brain indicates that changes are needed somewhere in the body, nerve cells in the hypothalamus secrete body chemicals that either stimulate or suppress hormone secretions from the pituitary gland. Acting as liaison between the brain and the pituitary gland, the hypothalamus is the primary link between the endocrine and nervous systems.
Located in a bony cavity just below the base of the brain is one of the endocrine system's most important members: the pituitary gland. Often described as the body’s master gland, the pituitary secretes several hormones that regulate the function of the other endocrine glands. Structurally, the pituitary gland is divided into two parts, the anterior and posterior lobes, each having separate functions. The anterior lobe regulates the activity of the thyroid and adrenal glands as well as the reproductive glands. It also regulates the body's growth and stimulates milk production in women who are breast-feeding. Hormones secreted by the anterior lobe include adrenocorticotropic hormone (ACTH), thyrotropic hormone (TSH), luteinizing hormone (LH), follicle-stimulating hormone (FSH), growth hormone (GH), and prolactin. The anterior lobe also secretes endorphins, chemicals that act on the nervous system to reduce sensitivity to pain.
The posterior lobe of the pituitary gland contains the nerve endings (axons) from the hypothalamus, which stimulate or suppress hormone production. This lobe secretes antidiuretic hormone (ADH), which controls water balance in the body, and oxytocin, which controls muscle contractions in the uterus.
The thyroid gland, located in the neck, secretes hormones in response to stimulation by TSH from the pituitary gland. The thyroid secretes hormones—for example, thyroxine and triiodothyronine—that regulate growth and metabolism, and play a role in brain development during childhood.
The parathyroid glands are four small glands located at the four corners of the thyroid gland. The hormone they secrete, parathyroid hormone, regulates the level of calcium in the blood.
Located on top of the kidneys, the adrenal glands have two distinct parts. The outer part, called the adrenal cortex, produces a variety of hormones called corticosteroids, which include cortisol. These hormones regulate salt and water balance in the body, prepare the body for stress, regulate metabolism, interact with the immune system, and influence sexual function. The inner part, the adrenal medulla, produces catecholamines, such as epinephrine, also called adrenaline, which increase the blood pressure and heart rate during times of stress.
The reproductive components of the endocrine system, called the gonads, secrete sex hormones in response to stimulation from the pituitary gland. Located in the pelvis, the female gonads, the ovaries, produce eggs. They also secrete a number of female sex hormones, including estrogen and progesterone, which control development of the reproductive organs, stimulate the appearance of female secondary sex characteristics, and regulate menstruation and pregnancy. 
Located in the scrotum, the male gonads, the testes, produce sperm and also secrete a number of male sex hormones, or androgens. The androgens, the most important of which is testosterone, regulate development of the reproductive organs, stimulate male secondary sex characteristics, and stimulate muscle growth.
The pancreas is positioned in the upper abdomen, just under the stomach. The major part of the pancreas, called the exocrine pancreas, functions as an exocrine gland, secreting digestive enzymes into the gastrointestinal tract. Distributed through the pancreas are clusters of endocrine cells that secrete insulin, glucagon, and somatostatin. These hormones all participate in regulating energy and metabolism in the body.
The pineal body, also called the pineal gland, is located in the middle of the brain. It secretes melatonin, a hormone that may help regulate the wake-sleep cycle. Research has shown that disturbances in the secretion of melatonin are responsible, in part, for the jet lag associated with long-distance air travel.
III HOW THE ENDOCRINE SYSTEM WORKS
Hormones from the endocrine organs are secreted directly into the bloodstream, where special proteins usually bind to them, helping to keep the hormones intact as they travel throughout the body. The proteins also act as a reservoir, allowing only a small fraction of the hormone circulating in the blood to affect the target tissue. Specialized proteins in the target tissue, called receptors, bind with the hormones in the bloodstream, inducing chemical changes in response to the body’s needs. Typically, only minute concentrations of a hormone are needed to achieve the desired effect.
Too much or too little hormone can be harmful to the body, so hormone levels are regulated by a feedback mechanism. Feedback works something like a household thermostat. When the heat in a house falls, the thermostat responds by switching the furnace on, and when the temperature is too warm, the thermostat switches the furnace off. Usually, the change that a hormone produces also serves to regulate that hormone's secretion. For example, parathyroid hormone causes the body to increase the level of calcium in the blood. As calcium levels rise, the secretion of parathyroid hormone then decreases. This feedback mechanism allows for tight control over hormone levels, which is essential for ideal body function. Other mechanisms may also influence feedback relationships. For example, if an individual becomes ill, the adrenal glands increase the secretions of certain hormones that help the body deal with the stress of illness. The adrenal glands work in concert with the pituitary gland and the brain to increase the body’s tolerance of these hormones in the blood, preventing the normal feedback mechanism from decreasing secretion levels until the illness is gone.
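The thermostat analogy above can be sketched as a toy simulation of negative feedback, using the parathyroid-hormone/calcium example from the text. This is only an illustration; the setpoint and rate constants are invented, not physiological values.

```python
def simulate_feedback(calcium, setpoint=10.0, steps=50):
    """Toy negative-feedback loop: secretion responds to the deviation
    from the setpoint, and the hormone's effect reduces that deviation."""
    for _ in range(steps):
        # Low calcium -> more parathyroid hormone; high calcium -> less.
        hormone = max(0.0, setpoint - calcium)
        calcium += hormone   # the hormone raises blood calcium
        calcium *= 0.95      # calcium is constantly cleared or used
    return calcium

# Whether the level starts low or high, the loop settles at a steady value.
print(round(simulate_feedback(5.0), 1))   # 9.5
print(round(simulate_feedback(14.0), 1))  # 9.5
```

The key property, as in the prose, is that the change the hormone produces (rising calcium) is exactly what shuts its own secretion off.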
Long-term changes in hormone levels can influence the endocrine glands themselves. For example, if hormone secretion is chronically low, the increased stimulation by the feedback mechanism leads to growth of the gland. This can occur in the thyroid if a person's diet has insufficient iodine, which is essential for thyroid hormone production. Constant stimulation from the pituitary gland to produce the needed hormone causes the thyroid to grow, eventually producing a medical condition known as goiter.
IV DISEASES OF THE ENDOCRINE SYSTEM
Endocrine disorders are classified in two ways: disturbances in the production of hormones, and the inability of tissues to respond to hormones. The first type, called production disorders, is divided into hypofunction (insufficient activity) and hyperfunction (excess activity). Hypofunction disorders can have a variety of causes, including malformations in the gland itself. Sometimes one of the enzymes essential for hormone production is missing, or the hormone produced is abnormal. More commonly, hypofunction is caused by disease or injury. Tuberculosis can appear in the adrenal glands, autoimmune diseases can affect the thyroid, and treatments for cancer—such as radiation therapy and chemotherapy—can damage any of the endocrine organs. Hypofunction can also result when target tissue is unable to respond to hormones. In many cases, the cause of a hypofunction disorder is unknown.
Hyperfunction can be caused by glandular tumors that secrete hormone without responding to feedback controls. In addition, some autoimmune conditions create antibodies that have the side effect of stimulating hormone production. Infection of an endocrine gland can have the same result.
Accurately diagnosing an endocrine disorder can be extremely challenging, even for an astute physician. Many diseases of the endocrine system develop over time, and clear, identifying symptoms may not appear for many months or even years. An endocrinologist evaluating a patient for a possible endocrine disorder relies on the patient's history of signs and symptoms, a physical examination, and the family history—that is, whether any endocrine disorders have been diagnosed in other relatives. A variety of laboratory tests—for example, a radioimmunoassay—are used to measure hormone levels. Tests that directly stimulate or suppress hormone production are also sometimes used, and genetic testing for deoxyribonucleic acid (DNA) mutations affecting endocrine function can be helpful in making a diagnosis. Tests based on diagnostic radiology show anatomical pictures of the gland in question. A functional image of the gland can be obtained with radioactive labeling techniques used in nuclear medicine.
One of the most common diseases of the endocrine system is diabetes mellitus, which occurs in two forms. The first, called diabetes mellitus Type 1, is caused by inadequate secretion of insulin by the pancreas. Diabetes mellitus Type 2 is caused by the body's inability to respond to insulin. Both types have similar symptoms, including excessive thirst, hunger, and urination as well as weight loss. Laboratory tests that detect glucose in the urine and elevated levels of glucose in the blood usually confirm the diagnosis. Treatment of diabetes mellitus Type 1 requires regular injections of insulin; some patients with Type 2 can be treated with diet, exercise, or oral medication. Diabetes can cause a variety of complications, including kidney problems, pain due to nerve damage, blindness, and coronary heart disease. Recent studies have shown that controlling blood sugar levels reduces the risk of developing diabetes complications considerably.
Diabetes insipidus is caused by a deficiency of vasopressin, the antidiuretic hormone (ADH) secreted by the posterior lobe of the pituitary gland. Patients often experience increased thirst and urination. Treatment is with drugs, such as synthetic vasopressin, that help the body maintain water and electrolyte balance.
Hypothyroidism is caused by an underactive thyroid gland, which results in a deficiency of thyroid hormone. Hypothyroidism disorders cause myxedema and cretinism, more properly known as congenital hypothyroidism. Myxedema develops in older adults, usually after age 40, and causes lethargy, fatigue, and mental sluggishness. Congenital hypothyroidism, which is present at birth, can cause more serious complications including mental retardation if left untreated. Screening programs exist in most countries to test newborns for this disorder. By providing the body with replacement thyroid hormones, almost all of the complications are completely avoidable.
Addison's disease is caused by decreased function of the adrenal cortex. Weakness, fatigue, abdominal pains, nausea, dehydration, fever, and hyperpigmentation (tanning without sun exposure) are among the many possible symptoms. Treatment involves providing the body with replacement corticosteroid hormones as well as dietary salt.
Cushing's syndrome is caused by excessive secretion of glucocorticoids, the subgroup of corticosteroid hormones that includes hydrocortisone, by the adrenal glands. Symptoms may develop over many years prior to diagnosis and may include obesity, physical weakness, easily bruised skin, acne, hypertension, and psychological changes. Treatment may include surgery, radiation therapy, chemotherapy, or blockage of hormone production with drugs.
Thyrotoxicosis is due to excess production of thyroid hormones. The most common cause for it is Graves' disease, an autoimmune disorder in which specific antibodies are produced, stimulating the thyroid gland. Thyrotoxicosis is eight to ten times more common in women than in men. Symptoms include nervousness, sensitivity to heat, heart palpitations, and weight loss. Many patients experience protruding eyes and tremors. Drugs that inhibit thyroid activity, surgery to remove the thyroid gland, and radioactive iodine that destroys the gland are common treatments.
Acromegaly and gigantism both are caused by a pituitary tumor that stimulates production of excessive growth hormone, causing abnormal growth in particular parts of the body. Acromegaly is rare and usually develops over many years in adult subjects. Gigantism occurs when the excess of growth hormone begins in childhood.


Human hormones significantly affect the activity of every cell in the body. They influence mental acuity, physical agility, and body build and stature. Growth hormone is a hormone produced by the pituitary gland. It regulates growth by stimulating the formation of bone and the uptake of amino acids, molecules vital to building muscle and other tissue.
Sex hormones regulate the development of sexual organs, sexual behavior, reproduction, and pregnancy. For example, gonadotropins, also secreted by the pituitary gland, are sex hormones that stimulate egg and sperm production. The gonadotropin that stimulates production of sperm in men and formation of ovary follicles in women is called a follicle-stimulating hormone. When a follicle-stimulating hormone binds to an ovary cell, it stimulates the enzymes needed for the synthesis of estradiol, a female sex hormone. Another gonadotropin called luteinizing hormone regulates the production of eggs in women and the production of the male sex hormone testosterone. Produced in the male gonads, or testes, testosterone regulates changes to the male body during puberty, influences sexual behavior, and plays a role in growth. The female sex hormones, called estrogens, regulate female sexual development and behavior as well as some aspects of pregnancy. Progesterone, a female hormone secreted in the ovaries, regulates menstruation and stimulates lactation in humans and other mammals.
Other hormones regulate metabolism. For example, thyroxine, a hormone secreted by the thyroid gland, regulates rates of body metabolism. Glucagon and insulin, secreted in the pancreas, control levels of glucose in the blood and the availability of energy for the muscles. A number of hormones, including insulin, glucagon, cortisol, growth hormone, epinephrine, and norepinephrine, maintain glucose levels in the blood. While insulin lowers the blood glucose, all the other hormones raise it. In addition, several other hormones participate indirectly in the regulation. A protein called somatostatin blocks the release of insulin, glucagon, and growth hormone, while another hormone, gastric inhibitory polypeptide, enhances insulin release in response to glucose absorption. This complex system permits blood glucose concentration to remain within a very narrow range, despite external conditions that may vary to extremes.
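The insulin/glucagon push-pull described above can be sketched as a toy model. The setpoint and response rates here are invented for illustration, not physiological values; the point is only that opposing hormones drive the level back toward a narrow range from either direction.

```python
def regulate_glucose(glucose, setpoint=90.0, steps=100):
    """Toy push-pull regulation: insulin lowers glucose when it is above
    the setpoint; glucagon raises it when it is below."""
    for _ in range(steps):
        if glucose > setpoint:
            insulin = glucose - setpoint
            glucose -= 0.2 * insulin    # insulin lowers blood glucose
        else:
            glucagon = setpoint - glucose
            glucose += 0.2 * glucagon   # glucagon raises blood glucose
    return glucose

# Starting high or low, the level is driven back toward the setpoint.
print(round(regulate_glucose(150.0)))  # 90
print(round(regulate_glucose(50.0)))   # 90
```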
Hormones also regulate blood pressure and other involuntary body functions. Epinephrine, also called adrenaline, is a hormone secreted in the adrenal gland. During periods of stress, epinephrine prepares the body for physical exertion by increasing the heart rate, raising the blood pressure, and releasing sugar stored in the liver for quick energy.


Insulin Secretion
This light micrograph of a section of the human pancreas shows one of the islets of Langerhans, center, a group of modified glandular cells. These cells secrete insulin, a hormone that helps the body metabolize sugars, fats, and starches. The blue and white lines in the islets of Langerhans are blood vessels that carry the insulin to the rest of the body. Insulin deficiency causes diabetes mellitus, a disease that affects at least 10 million people in the United States.
Photo Researchers, Inc./Astrid and Hanns-Frieder Michler

Hormones are sometimes used to treat medical problems, particularly diseases of the endocrine system. In people with diabetes mellitus type 1, for example, the pancreas secretes little or no insulin. Regular injections of insulin help maintain normal blood glucose levels. Sometimes, an illness or injury not directly related to the endocrine system can be helped by a dose of a particular hormone. Steroid hormones are often used as anti-inflammatory agents to treat the symptoms of various diseases, including cancer, asthma, and rheumatoid arthritis. Oral contraceptives, or birth control pills, use small, regular doses of female sex hormones to prevent pregnancy.
Initially, hormones used in medicine were collected from extracts of glands taken from humans or animals. For example, pituitary growth hormone was collected from the pituitary glands of dead human bodies, or cadavers, and insulin was extracted from cattle and hogs. As technology advanced, insulin molecules collected from animals were altered to produce the human form of insulin.
With improvements in biochemical technology, many hormones are now made in laboratories from basic chemical compounds. This eliminates the risk of transferring contaminating agents sometimes found in the human and animal sources. Advances in genetic engineering even enable scientists to introduce a gene of a specific protein hormone into a living cell, such as a bacterium, which causes the cell to secrete excess amounts of a desired hormone. This technique, known as recombinant DNA technology, has vastly improved the availability of hormones.
Recombinant DNA has been especially useful in producing growth hormone, once only available in limited supply from the pituitary glands of human cadavers. Treatments using the hormone were far from ideal because the cadaver hormone was often in short supply. Moreover, some of the pituitary glands used to make growth hormone were contaminated with particles called prions, which could cause diseases such as Creutzfeldt-Jakob disease, a fatal brain disorder. The advent of recombinant technology made growth hormone widely available for safe and effective therapy.
Q11:
Flower
I INTRODUCTION
Flower, reproductive organ of most seed-bearing plants. Flowers carry out the multiple roles of sexual reproduction, seed development, and fruit production. Many plants produce highly visible flowers that have a distinctive size, color, or fragrance. Almost everyone is familiar with beautiful flowers such as the blossoms of roses, orchids, and tulips. But many plants—including oaks, beeches, maples, and grasses—have small, green or gray flowers that typically go unnoticed. 
Whether eye-catching or inconspicuous, all flowers produce the male or female sex cells required for sexual reproduction. Flowers are also the site of fertilization, which is the union of a male and female sex cell to produce a fertilized egg. The fertilized egg then develops into an embryonic (immature) plant, which forms part of the developing seed. Neighboring structures of the flower enclose the seed and mature into a fruit. 
Botanists estimate that there are more than 240,000 species of flowering plants. However, flowering plants are not the only seed-producing plants. Pines, firs, and cycads are among the few hundred plant species that bear their seeds on the surface of cones, rather than within a fruit. Botanists call the cone-bearing plants gymnosperms, which means naked seeds; they refer to flowering plants as angiosperms, which means enclosed seeds. 
Flowering plants are more widespread than any other group of plants. They bloom on every continent, from the bogs and marshes of the Arctic tundra to the barren soils of Antarctica. Deserts, grasslands, rainforests, and other biomes display distinctive flower species. Even streams, rivers, lakes, and swamps are home to many flowering plants. 
In their diverse environments, flowers have evolved to become irreplaceable participants in the complex, interdependent communities of organisms that make up ecosystems. The seeds or fruits that flowers produce are food sources for many animals, large and small. In addition, many insects, bats, hummingbirds, and small mammals feed on nectar, a sweet liquid produced by many flowers, or on flower products known as pollen grains. The animals that eat flowers, seeds, and fruits are prey for other animals—lizards, frogs, salamanders, and fish, for example—which in turn are devoured by yet other animals, such as owls and snakes. Thus, flowers provide a bountiful feast that sustains an intricate web of predators and prey (see Food Web).
Flowers play diverse roles in the lives of humans. Wildflowers of every hue brighten the landscape, and the attractive shapes and colors of cultivated flowers beautify homes, parks, and roadsides. The fleshy fruits that flowers produce, such as apples, grapes, strawberries, and oranges, are eaten worldwide, as are such hard-shelled fruits as pecans and other nuts. Flowers also produce wheat, rice, oats, and corn—the grains that are dietary mainstays throughout the world. People even eat unopened flowers, such as those of broccoli and cauliflower, which are popular vegetables. Natural dyes come from flowers, and fragrant flowers, such as jasmine and damask rose, are harvested for their oils and made into perfumes. Certain flowers, such as red clover blossoms, are collected for their medicinal properties, and edible flowers, such as nasturtiums, add color and flavor to a variety of dishes. Flowers also are used to symbolize emotions, as is evidenced by their use from ancient times in significant rituals, such as weddings and funerals. 
II PARTS OF A FLOWER
Flowers typically are composed of four parts, or whorls, arranged in concentric rings attached to the tip of the stem. From innermost to outermost, these whorls are the (1) pistil, (2) stamens, (3) petals, and (4) sepals. 
A Pistil
The innermost whorl, located in the center of the flower, is the female reproductive structure, or pistil. Often vase-shaped, the pistil consists of three parts: the stigma, the style, and the ovary. The stigma, a slightly flared and sticky structure at the top of the pistil, functions by trapping pollen grains, the structures that give rise to the sperm cells necessary for fertilization. The style is a narrow stalk that supports the stigma. The style rises from the ovary, a slightly swollen structure seated at the base of the flower. Depending on the species, the ovary contains one or more ovules, each of which holds one egg cell. After fertilization, the ovules develop into seeds, while the ovary enlarges into the fruit. If a flower has only one ovule, the fruit will contain one seed, as in a peach. The fruit of a flower with many ovules, such as a tomato, will have many seeds. An ovary that contains one or more ovules also is called a carpel, and a pistil may be composed of one to several carpels.
B Stamens
The next whorl consists of the male reproductive structures, several to many stamens arranged around the pistil. A stamen consists of a slender stalk called the filament, which supports the anther, a tiny compartment where pollen forms. When a flower is still an immature, unopened bud, the filaments are short and serve to transport nutrients to the developing pollen. As the flower opens, the filaments lengthen and hold the anthers higher in the flower, where the pollen grains are more likely to be picked up by visiting animals, wind, or, in the case of some aquatic plants, water. The animals, wind, or water might then carry the pollen to the stigma of an appropriate flower. The placement of pollen on the stigma is called pollination. Pollination initiates the process of fertilization.
C Petals
Petals, the next whorl, surround the stamens and collectively are termed the corolla. Many petals have bright colors, which attract animals that carry out pollination, collectively termed pollinators. Three groups of pigments—alone or in combination—produce a veritable rainbow of petal colors: anthocyanins yield shades of violet, blue, and red; betalains create reds; and carotenoids produce yellows and orange. Petal color can be modified in several ways. Texture, for example, can play a role in the overall effect—a smooth petal is shiny, while a rough one appears velvety. If cells inside the petal are filled with starch, they create a white layer that makes pigments appear brighter. Petals with flat air spaces between cells shimmer iridescently. 
In some flowers, the pigments form distinct patterns, invisible to humans but visible to bees, who can see ultraviolet light. Like the landing strips of an airport, these patterns, called nectar guides, direct bees to the nectar within the flower. Nectar is made in specialized glands located at or near the petal’s base. Some flowers secrete copious amounts of nectar and attract big pollinators with large appetites, such as bats. Other flowers, particularly those that depend on wind or water to transport their pollen, may secrete little or no nectar. The petals of many species also are the source of the fragrances that attract pollinators. In these species, the petals house tiny glands that produce essential, or volatile, oils that vaporize easily, often releasing a distinctive aroma. One flower can make dozens of different essential oils, which mingle to yield the flower’s unique fragrance. 
D Sepals
The sepals, the outermost whorl, together are called the calyx. In the flower bud, the sepals tightly enclose and protect the petals, stamens, and pistil from rain or insects. The sepals unfurl as the flower opens and often resemble small green leaves at the flower’s base. In some flowers, the sepals are colorful and work with the petals to attract pollinators. 
E Variations in Structure
Like virtually all forms in nature, flowers display many variations in their structure. Most flowers have all four whorls—pistil, stamens, petals, and sepals. Botanists call these complete flowers. But some flowers are incomplete, meaning they lack one or more whorls. Incomplete flowers are most common in plants whose pollen is dispersed by the wind or water. Since these flowers do not need to attract pollinators, most have no petals, and some even lack sepals. Certain wind-pollinated flowers do have small sepals and petals that create eddies in the wind, directing pollen to swirl around and settle on the flower. In still other flowers, the petals and sepals are fused into a structure called a floral tube.
Flowers that lack either stamens or a pistil are said to be imperfect. The petal-like rays on the edge of a sunflower, for example, are actually tiny, imperfect flowers that lack stamens. Imperfect flowers can still function in sexual reproduction. A flower that lacks a pistil but has stamens produces pollen, and a flower with a pistil but no stamens provides ovules and can develop into fruits and seeds. Flowers that have only stamens are termed staminate, and flowers that have only a pistil are called pistillate.
Although a single flower can be either staminate or pistillate, a plant species must have both to reproduce sexually. In some species with imperfect flowers, the staminate and pistillate flowers occur on the same plant. Such plants, known as monoecious species, include corn. The tassel at the top of the corn plant consists of hundreds of tiny staminate flowers, and the ears, which are located laterally on the stem, contain clusters of pistillate flowers. The silks of corn are very long styles leading to the ovaries, which, when ripe, form the kernels of corn. In dioecious species—such as date, willow, and hemp—staminate and pistillate flowers are found on different plants. A date tree, for example, will develop male or female flowers but not both. In dioecious species, at least two plants, one bearing staminate flowers and one bearing pistillate flowers, are needed for pollination and fertilization.
Other variations are found in the types of stems that support flowers. In some species, flowers are attached to only one main stem, called the peduncle. In others, flowers are attached to smaller stems, called pedicels, that branch from the peduncle. The peduncle and pedicels orient a flower so that its pollinator can reach it. In the morning glory, for example, pedicels hold the flowers in a horizontal position. This enables their hummingbird pollinators to feed since they do not crawl into the flower as other pollinators do, but hover near the flower and lick the nectar with their long tongues. Scientists assign specific terms to the different flower and stem arrangements to assist in the precise identification of a flower. A plant with just one flower at the tip of the peduncle—a tulip, for example—is termed solitary. In a spike, such as sage, flowers are attached to the sides of the peduncle.
Sometimes flowers are grouped together in a cluster called an inflorescence. In an indeterminate inflorescence, the lower flowers bloom first, and blooming proceeds over a period of days from the bottom to the top of the peduncle or pedicels. As long as light, water, temperature, and nutrients are favorable, the tip of the peduncle or pedicel continues to add new buds. There are several types of indeterminate inflorescences. These include the raceme, formed by a series of pedicels that emerge from the peduncle, as in snapdragons and lupines; and the panicle, in which the series of pedicels branches and rebranches, as in lilac. 
In determinate inflorescences, called cymes, the peduncle is capped by a flower bud, which prevents the stem from elongating and adding more flowers. However, new flower buds appear on side pedicels that form below the central flower, and the flowers bloom from the top to the bottom of the pedicels. Flowers that bloom in cymes include chickweed and phlox.
III SEXUAL REPRODUCTION
Sexual reproduction mixes the hereditary material from two parents, creating a population of genetically diverse offspring. Such a population can better withstand environmental changes. Unlike animals, flowers cannot move from place to place, yet sexual reproduction requires the union of the egg from one parent with the sperm from another parent. Flowers overcome their lack of mobility through the all-important process of pollination. Pollination occurs in several ways. In most flowers pollinated by insects and other animals, the pollen escapes through pores in the anthers. As pollinators forage for food, the pollen sticks to their body and then rubs off on the flower's stigma, or on the stigma of the next flower they visit. In plants that rely on wind for pollination, the anthers burst open, releasing a cloud of yellow, powdery pollen that drifts to other flowers. In a few aquatic plants, pollen is released into the water, where it floats to other flowers.
Pollen consists of thousands of microscopic pollen grains. A tough pollen wall surrounds each grain. In most flowers, the pollen grains released from the anthers contain two cells. If a pollen grain lands on the stigma of the same species, the pollen grain germinates—one cell within the grain emerges through the pollen wall and contacts the surface of the stigma, where it begins to elongate. The lengthening cell grows through the stigma and style, forming a pollen tube that transports the other cell within the pollen down the style to the ovary. As the tube grows, the cell within it divides to produce two sperm cells, the male sex cells. In some species, the sperm are produced before the pollen is released from the anther.
Independently of the pollen germination and pollen tube growth, developmental changes occur within the ovary. The ovule produces several specialized structures—among them, the egg, or female sex cell. The pollen tube grows into the ovary, crosses the ovule wall, and releases the two sperm cells into the ovule. One sperm unites with the egg, triggering hormonal changes that transform the ovule into a seed. The outer wall of the ovule develops into the seed coat, while the fertilized egg grows into an embryonic plant. The growing embryonic plant relies on a starchy, nutrient-rich food in the seed called endosperm. Endosperm develops from the union of the second sperm with the two polar nuclei, also known as the central cell nuclei, structures also produced by the ovule. As the seed grows, hormones are released that stimulate the walls of the ovary to expand, and the ovary develops into the fruit. The mature fruit often is hundreds or even thousands of times larger than the tiny ovary from which it grew, and the seeds also are quite large compared to the minuscule ovules from which they originated. The fruits, which are unique to flowering plants, play an extremely important role in dispersing seeds. Animals eat fruits, such as berries and grains. The seeds pass through the digestive tract of the animal unharmed and are deposited in a wide variety of locations, where they germinate to produce the next generation of flowering plants, thus continuing the species. Other fruits are dispersed far and wide by wind or water; the fruit of maple trees, for example, has a winglike structure that catches the wind.
IV FLOWERING AND THE LIFE CYCLE
The life cycle of a flowering plant begins when the seed germinates. It progresses through the growth of roots, stems, and leaves; formation of flower buds; pollination and fertilization; and seed and fruit development. The life cycle ends with senescence, or old age, and death. Depending on the species, the life cycle of a plant may last one, two, or many years. Plants called annuals carry out their life cycle within one year. Biennial plants live for two years: The first year they produce leaves, and in the second year they produce flowers and fruits and then die. Perennial plants live for more than one year. Some perennials bloom every year, while others, like agave, live for years without flowering and then in a few weeks produce thousands of flowers, fruits, and seeds before dying.
Whatever the life cycle, most plants flower in response to certain cues. A number of factors influence the timing of flowering. The age of the plant is critical—most plants must be at least one or two weeks old before they bloom; presumably they need this time to accumulate the energy reserves required for flowering. The number of hours of darkness is another factor that influences flowering. Many species bloom only when the night is just the right length—a phenomenon called photoperiodism. Poinsettias, for example, flower in winter when the nights are long, while spinach blooms when the nights are short—late spring through late summer. Temperature, light intensity, and moisture also affect the time of flowering. In the desert, for example, heavy rains that follow a long dry period often trigger flowers to bloom.
V EVOLUTION OF FLOWERS
Flowering plants are thought to have evolved around 135 million years ago from cone-bearing gymnosperms. Scientists had long proposed that the first flower most likely resembled today’s magnolias or water lilies, two types of flowers that lack some of the specialized structures found in most modern flowers. But in the late 1990s scientists compared the genetic material deoxyribonucleic acid (DNA) of different plants to determine their evolutionary relationships. From these studies, scientists identified a small, cream-colored flower from the genus Amborella as the closest living relative of the first flowering plant. This rare plant is found only on the South Pacific island of New Caledonia.
The evolution of flowers dramatically changed the face of earth. On a planet where algae, ferns, and cycads tinged the earth with a monochromatic green hue, flowers emerged to paint the earth with vivid shades of red, pink, orange, yellow, blue, violet, and white. Flowering plants spread rapidly, in part because their fruits so effectively disperse seeds. Today, flowering plants occupy virtually all areas of the planet, with about 240,000 species known.
Many flowers and pollinators coevolved—that is, they influenced each other’s traits during the process of evolution. For example, any population of flowers displays a range of color, fragrance, size, and shape—hereditary traits that can be passed from one generation to the next. Certain traits or combinations of traits appeal more to pollinators, so pollinators are more likely to visit these attractive plants. The appealing plants have a greater chance of being pollinated than others and, thus, are likely to produce more seeds. The seeds develop into plants that display the inherited appealing traits. Similarly, in a population of pollinators, there are variations in hereditary traits, such as wing size and shape, length and shape of tongue, ability to detect fragrance, and so on. For example, pollinators whose bodies are small enough to reach inside certain flowers gather pollen and nectar more efficiently than larger-sized members of their species. These efficient, well-fed pollinators have more energy for reproduction. Their offspring inherit the traits that enable them to forage successfully in flowers, and from generation to generation, these traits are preserved. The pollinator preference seen today for certain flower colors, fragrances, and shapes often represents hundreds of thousands of years of coevolution.
Coevolution often results in exquisite adaptations between flower and pollinator. These adaptations can minimize competition for nectar and pollen among pollinators and also can minimize competition among flowers for pollinators. Comet orchids, for example, have narrow flowers almost a foot and a half long. These flowers are pollinated only by a species of hawk moth that has a narrow tongue just the length of the flowers. The flower shape prevents other pollinators from consuming the nectar, guarantees the moths a meal, and ensures the likelihood of pollination and fertilization.
Most flowers and pollinators, however, are not as precisely matched to each other, but adaptation still plays a significant role in their interactions. For example, hummingbirds are particularly attracted to the color red. Hummingbird-pollinated flowers typically are red, and they often are narrow, an adaptation that suits the long tongues of hummingbirds. Bats are large pollinators that require relatively more energy than other pollinators. They visit big flowers like those of saguaro cactus, which supply plenty of nectar or pollen. Bats avoid little flowers that do not offer enough reward. 
Other examples of coevolution are seen in the bromeliads and orchids that grow in dark forests. These plants often have bright red, purple, or white sepals or petals, which make them visible to pollinators. Night-flying pollinators, such as moths and bats, detect white flowers most easily, and flowers that bloom at sunset, such as yucca, datura, and cereus, usually are white. 
The often delightful and varied fragrances of flowers also reveal the hand of coevolution. In some cases, insects detect fragrance before color. They follow faint aromas to flowers that are too far away to be seen, recognizing petal shape and color only when they are very close to the flower. Some night-blooming flowers emit sweet fragrances that attract night-flying moths. At the other extreme, carrion flowers, flowers pollinated by flies, give off the odor of rotting meat to attract their pollinators.
Flowers and their pollinators also coevolved to influence each other’s life cycles. Among species that flower in response to a dark period, some measure the critical night length so accurately that all species of the region flower in the same week or two. This enables related plants to interbreed, and provides pollinators with enough pollen and nectar to live on so that they too can reproduce. The process of coevolution also has resulted in synchronization of floral and insect life cycles. Sometimes flowering occurs the week that insect pollinators hatch or emerge from dormancy, or bird pollinators return from winter migration, so that they feed on and pollinate the flowers. Flowering also is timed so that fruits and seeds are produced when animals are present to feed on the fruits and disperse the seeds. 
VI FLOWERS AND EXTINCTION
Like the amphibians, reptiles, insects, birds, and mammals that are experiencing alarming extinction rates, a number of wildflower species also are endangered. The greatest threat lies in the furious pace at which land is cleared for new houses, industries, and shopping malls to accommodate rapid population growth. Such clearings are making the meadow, forest, and wetland homes of wildflowers ever more scarce. Among the flowers so endangered is the rosy periwinkle of Madagascar, a plant whose compounds have greatly reduced the death rates from childhood leukemia and Hodgkin’s disease. Flowering plants, many with other medicinal properties, also are threatened by global warming from increased combustion of fossil fuels; increased ultraviolet light from ozone layer breakdown; and acid rain from industrial emissions. Flowering plants native to a certain region also may be threatened by introduced species. Yellow toadflax, for example, a garden plant brought to the United States and Canada from Europe, has become a notorious weed, spreading to many habitats and preventing the growth of native species. In some cases, unusual wildflowers such as orchids are placed at risk when they are collected extensively to be sold.
Many of the threats that endanger flowering plants also place their pollinators at risk. When a species of flower or pollinator is threatened, the coevolution of pollinators and flowers may prove to be disadvantageous. If a flower species dies out, its pollinators will lack food and may also die out, and the predators that depend on the pollinators also become threatened. In cases where pollinators are adapted to only one or a few types of flowers, the loss of those plants can disrupt an entire ecosystem. Likewise, if pollinators are damaged by ecological changes, plants that depend on them will not be pollinated, seeds will not be formed, and new generations of plants cannot grow. The fruits that these flowers produce may become scarce, affecting the food supply of humans and other animals that depend on them.
Worldwide, more than 300 species of flowering plants are endangered, or at immediate risk of extinction. Another two dozen or so are considered threatened, or likely to become extinct in the near future. Of these species, fewer than 50 were the focus of preservation plans in the late 1990s. Various regional, national, and international organizations have marshaled their resources in response to the critical need for protecting flowering plants and their habitats. In the United States, native plant societies work to conserve regional plants in every state. The United States Fish and Wildlife Endangered Species Program protects habitats for threatened and endangered species throughout the United States, as do the Canadian Wildlife Service in Canada, the Ministry for Social Development in Mexico, and similar agencies in other countries. At the international level, the International Plant Conservation Programme at Cambridge, England, collects information and provides education worldwide on plant species at risk, and the United Nations Environmental Programme supports a variety of efforts that address the worldwide crisis of endangered species.
Pollination
I INTRODUCTION
Pollination, transfer of pollen grains from the male structure of a plant to the female structure of a plant. The pollen grains contain cells that will develop into male sex cells, or sperm. The female structure of a plant contains the female sex cells, or eggs. Pollination prepares the plant for fertilization, the union of the male and female sex cells. Virtually all grains, fruits, vegetables, wildflowers, and trees must be pollinated and fertilized to produce seed or fruit, and pollination is vital for the production of critically important agricultural crops, including corn, wheat, rice, apples, oranges, tomatoes, and squash.
Pollen grains are microscopic in size, ranging in diameter from less than 0.01 mm (about 0.0004 in) to a little over 0.5 mm (about 0.02 in). Millions of pollen grains waft along in the clouds of pollen seen in the spring, often causing the sneezing and watery eyes associated with pollen allergies. The outer covering of pollen grains, called the pollen wall, may be intricately sculpted with designs that in some instances can be used to distinguish between plant species. A chemical in the wall called sporopollenin makes the wall resistant to decay.
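The parenthetical inch figures follow from simple arithmetic, since an inch is defined as exactly 25.4 mm. A minimal Python sketch (the helper name `mm_to_inches` is hypothetical, introduced only for illustration):

```python
# Convert the pollen-grain diameters given in millimeters to inches.
MM_PER_INCH = 25.4  # 1 inch is defined as exactly 25.4 mm

def mm_to_inches(mm: float) -> float:
    """Convert a length in millimeters to inches."""
    return mm / MM_PER_INCH

# Smallest and largest pollen-grain diameters mentioned above.
print(round(mm_to_inches(0.01), 4))  # smallest: about 0.0004 in
print(round(mm_to_inches(0.5), 2))   # largest: about 0.02 in
```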
Although the single cell inside the wall is viable, or living, for only a few weeks, the distinctive patterns of the pollen wall can remain intact for thousands or millions of years, enabling scientists to identify the plant species that produced the pollen. Scientists track long-term climate changes by studying layers of pollen deposited in lake beds. In a dry climate, for example, desert species such as tanglehead grass and vine mesquite grass thrive, and their pollen drifts over lakes, settling in a layer at the bottom. If a climate change brings increased moisture, desert species are gradually replaced by forest species such as pines and spruce, whose pollen forms a layer on top of the grass pollen. Scientists take samples of mud from the lake bottom and analyze the pollen in the mud to identify plant species. Comparing the identified species with their known climate requirements, scientists can trace climate shifts over the millennia.
II HOW POLLINATION WORKS
Most plants have specialized reproductive structures—cones or flowers—where the gametes, or sex cells, are produced. Cones are the reproductive structures of spruce, pine, fir, cycads, and certain other gymnosperms and are of two types: male and female. On conifers such as fir, spruce, and pine trees, the male cones are produced in the spring. The cones form in clusters of 10 to 50 on the tips of the lower branches. Each cone typically measures 1 to 4 cm (0.4 to 1.5 in) and consists of numerous soft, green, spirally attached scales shaped like a bud. Thousands of pollen grains are produced on the lower surface of each scale, and are released to the wind when they mature in late spring. The male cones dry out and shrivel up after their pollen is shed. The female cones typically develop on the upper branches of the same tree that produces the male cones. They form as individual cones or in groups of two or three. A female cone is two to five times longer than the male cone, and starts out with green, spirally attached scales. The scales open the first spring to take in the drifting pollen. After pollination, the scales close for one to two years to protect the developing seed. During this time the scales gradually become brown and stiff, forming the woody cones typically associated with conifers. When the seeds are mature, the scales of certain species separate and the mature seeds are dispersed by the wind. In other species, small animals such as gray jays, chipmunks, or squirrels break the scales apart before swallowing some of the enclosed seeds. They cache, or hide, other seeds in a variety of locations, which results in effective seed dispersal—and eventually germination—since the animals do not always return for the stored seeds.
Pollination occurs in cone-bearing plants when the wind blows pollen from the male to the female cone. Some pollen grains are trapped by the pollen drop, a sticky substance produced by the ovule, the egg-containing structure that becomes the seed. As the pollen drop dries, it draws a pollen grain through a tiny hole into the ovule, and the events leading to fertilization begin. The pollen grain germinates and produces a short tube, a pollen tube, which grows through the tissues of the ovule and contacts the egg. A sperm cell moves through the tube to the egg and unites with it in fertilization. The fertilized egg develops into an embryonic plant, and at the same time, tissues in the ovule undergo complex changes. The inner tissues become food for the embryo, and the outer wall of the ovule hardens into a seedcoat. The ovule thus becomes a seed—a tough structure containing an embryonic plant and its food supply. The seed remains tucked in the closed cone scale until it matures and the cone scales open. Each scale of a cone bears two seeds on its upper surface.
In plants with flowers, such as roses, maple trees, and corn, pollen is produced within the male parts of the plant, called the stamens, and the female sex cells, or eggs, are produced within the female part of the plant, the pistil. With the help of wind, water, insects, birds, or small mammals, pollen is transferred from the stamens to the stigma, a sticky surface on the pistil. Pollination may be followed by fertilization. The pollen on the stigma germinates to produce a long pollen tube, which grows down through the style, or neck of the pistil, and into the ovary, located at the base of the pistil. Depending on the species, one, several, or many ovules are embedded deep within the ovary. Each ovule contains one egg.
Fertilization occurs when a sperm cell carried by the pollen tube unites with the egg. As the fertilized egg begins to develop into an embryonic plant, it produces a variety of hormones to stimulate the outer wall of the ovule to harden into a seedcoat, and tissues of the ovary enlarge into a fruit. The fruit may be a fleshy fruit, such as an apple, orange, tomato, or squash, or a dry fruit, such as an almond, walnut, wheat grain, or rice grain. Unlike conifer seeds, which lie exposed on the cone scales, the seeds of flowering plants are contained within a ripened ovary, a fleshy or dry fruit.
III POLLINATION METHODS
In order for pollination to be successful, pollen must be transferred between plants of the same species—for example, a rose flower must always receive rose pollen and a pine tree must always receive pine pollen. Plants typically rely on one of two methods of pollination: cross-pollination or self-pollination, but some species are capable of both.
Most plants are designed for cross-pollination, in which pollen is transferred between different plants of the same species. Cross-pollination ensures that beneficial genes are transmitted relatively rapidly to succeeding generations. If a beneficial gene occurs in just one plant, that plant’s pollen or eggs can produce seeds that develop into numerous offspring carrying the beneficial gene. The offspring, through cross-pollination, transmit the gene to even more plants in the next generation. Cross-pollination introduces genetic diversity into the population at a rate that enables the species to cope with a changing environment. New genes ensure that at least some individuals can endure new diseases, climate changes, or new predators, enabling the species as a whole to survive and reproduce.
Plant species that use cross-pollination have special features that enhance this method. For instance, some plants have pollen grains that are lightweight and dry so that they are easily swept up by the wind and carried for long distances to other plants. Other plants have pollen and eggs that mature at different times, preventing the possibility of self-pollination.
In self-pollination, pollen is transferred from the stamens to the pistil within one flower. The resulting seeds and the plants they produce inherit the genetic information of only one parent, and the new plants are genetically identical to the parent. The advantage of self-pollination is the assurance of seed production when no pollinators, such as bees or birds, are present. It also sets the stage for rapid propagation—weeds typically self-pollinate, and they can produce an entire population from a single plant. The primary disadvantage of self-pollination is that it results in genetic uniformity of the population, which makes the population vulnerable to extinction by, for example, a single devastating disease to which all the genetically identical plants are equally susceptible. Another disadvantage is that beneficial genes do not spread as rapidly as in cross-pollination, because one plant with a beneficial gene can transmit it only to its own offspring and not to other plants. Self-pollination evolved later than cross-pollination, and may have developed as a survival mechanism in harsh environments where pollinators were scarce.
IV POLLEN TRANSFER
Unlike animals, plants are literally rooted to the spot, and so cannot move to combine sex cells from different plants; for this reason, species have evolved effective strategies for accomplishing cross-pollination. Some plants simply allow their pollen to be carried on the wind, as is the case with wheat, rice, corn, and other grasses, and pines, firs, cedars, and other conifers. This method works well if the individual plants are growing close together. To ensure success, huge amounts of pollen must be produced, most of which never reaches another plant.
Most plants, however, do not rely on the wind. These plants employ pollinators—bees, butterflies, and other insects, as well as birds, bats, and mice—to transport pollen between sometimes widely scattered plants. While this strategy enables plants to expend less energy making large amounts of pollen, they must still use energy to produce incentives for their pollinators. For instance, birds and insects may be attracted to a plant by its tasty food in the form of nectar, a sugary, energy-rich fluid that bees eat and also use for making honey. Bees and other pollinators may be attracted by a plant’s pollen, a nutritious food that is high in protein and provides almost every known vitamin, about 25 trace minerals, and 22 amino acids. As a pollinator enters a flower or probes it for nectar, typically located deep in the flower, or grazes on the pollen itself, the sticky pollen attaches to parts of its body. When the pollinator visits the next flower in search of more nectar or pollen, it brushes against the stigma and pollen grains rub off onto the stigma. In this way, pollinators inadvertently transfer pollen from flower to flower.
Some flowers supply wax that bees use for construction material in their hives. In the Amazonian rain forest, the males of certain bee species travel long distances to visit orchid flowers, from which they collect oil used to make a powerful chemical, called a pheromone, used to attract female bees for mating. The bees carry pollen between flowers as they collect the oils from the orchids.
Flowers are designed to attract pollinators, and the unique shape, color, and even scent of a flower appeals to specific pollinators. Birds see the color red particularly well and are prone to pollinating red flowers. The long red floral tubes of certain flowers are designed to attract hummingbirds but discourage small insects that might take the nectar without transferring pollen. Flowers that are pollinated by bats are usually large, light in color, heavily scented, and open at night, when bats are most active. Many of the brighter pink, orange, and yellow flowers are marked by patterns on the petals that can be seen only with ultraviolet light. These patterns act as maps to the nectar glands typically located at the base of the flower. Bees are able to see ultraviolet light and use the colored patterns to find nectar efficiently.
These interactions between plants and animals are mutualistic, since both species benefit from the interaction. Undoubtedly plants have evolved flower structures that successfully attract specific pollinators. And in some cases the pollinators may have adapted their behaviors to take advantage of the resources offered by specific kinds of flowers.
V CURRENT TOPICS
Scientists control pollination by transferring pollen by hand from stamens to stigmas. Using these artificial pollination techniques, scientists study how traits are inherited in plants, and they also breed plants with selected traits—roses with larger blooms, for example, or apple trees that bear more fruit. Scientists also use artificial pollination to investigate temperature and moisture requirements for pollination in different species, the biochemistry of pollen germination, and other details of the pollination process.
Some farmers are concerned about the decline in numbers of pollinating insects, especially honey bees. In recent years many fruit growers have found that their trees bear little or no fruit, a shortfall thought to be the result of too few honey bee pollinators. Wild populations of honey bees are nearly extinct in some areas of the northern United States and southern Canada. Domestic honey bees—those kept in hives by beekeepers—have declined by as much as 80 percent since the late 1980s. The decline of wild and domestic honey bees is due largely to mite infestations in their hives—the mites eat the young, developing bees. Bees and other insect pollinators are also seriously harmed by chemical toxins in their environment. These toxins, such as the insecticides Diazinon and Malathion, either kill pollinators directly or harm them by damaging the environment in which they live.
Fertilization
I INTRODUCTION
Fertilization, the process in which gametes—a male's sperm and a female's egg or ovum—fuse together, producing a single cell that develops into an adult organism. Fertilization occurs in both plants and animals that reproduce sexually—that is, when a male and a female are needed to produce an offspring (see Reproduction). This article focuses on animal fertilization. For information on plant fertilization see the articles on Seed, Pollination, and Plant Propagation.
Fertilization is a precise period in the reproductive process. It begins when the sperm contacts the outer surface of the egg and it ends when the sperm's nucleus fuses with the egg's nucleus. Fertilization is not instantaneous—it may take 30 minutes in sea urchins and up to several hours in mammals. After nuclear fusion, the fertilized egg is called a zygote. When the zygote divides to a two-cell stage, it is called an embryo.
Fertilization is necessary to produce a single cell that contains a full complement of genes. When a cell undergoes meiosis, gametes are formed—a sperm cell or an egg cell. Each gamete contains only half the genetic material of the original cell. During sperm and egg fusion in fertilization, the full amount of genetic material is restored: half contributed by the male parent and half contributed by the female. In humans, for example, there are 46 chromosomes (carriers of genetic material) in each human body cell—except in the sperm and egg, which each have 23 chromosomes. As soon as fertilization is complete, the zygote that is formed has a complete set of 46 chromosomes containing genetic information from both parents.
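The chromosome arithmetic described above can be sketched in a few lines of code. This is only an illustrative model, not biological software; the function names `meiosis` and `fertilization` and the use of simple integer counts are assumptions made for the example.

```python
# Illustrative sketch of chromosome counts in meiosis and fertilization.
# The diploid number 46 (human body cells) comes from the text; the
# function names are hypothetical, chosen for this example only.

def meiosis(diploid_count):
    """Each gamete receives half the parent cell's chromosomes."""
    return diploid_count // 2

def fertilization(sperm_count, egg_count):
    """The zygote combines the chromosomes of both gametes."""
    return sperm_count + egg_count

human_body_cell = 46
sperm = meiosis(human_body_cell)    # 23 chromosomes
egg = meiosis(human_body_cell)      # 23 chromosomes
zygote = fertilization(sperm, egg)  # 46: the full complement is restored
print(sperm, egg, zygote)
```

Running the sketch shows how halving at meiosis followed by fusion at fertilization restores the full set: 23 + 23 = 46.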
The fertilization process also activates cell division. Without activation from the sperm, an egg typically remains dormant and soon dies. In general, it is fertilization that sets the egg on an irreversible pathway of cell division and embryo development.
II THE FERTILIZATION PROCESS
Fertilization is complete when the sperm's nucleus fuses with the egg's nucleus. Researchers have identified several specific steps in this process. The first step is the sperm approaching the egg. In some organisms, sperm simply swim at random until they encounter an egg. In others, the eggs secrete a chemical substance that attracts the sperm toward the eggs. For example, in one species of sea urchin (an aquatic animal often used in fertilization research), the sperm swim toward a small protein molecule in the egg's protective outer layer, or surface coat. In humans there is evidence that sperm are attracted to the fluid surrounding the egg.
The second step of fertilization is the attachment of several sperm to the egg's surface coat. All animal eggs have surface coats, which are variously named the vitelline envelope (in abalone and frogs) or the zona pellucida (in mammals). This attachment step may last for just a few seconds or for several minutes. 
The third step is a complex process in which the sperm penetrate the egg’s surface coat. The head, or front end, of the sperm of almost all animals except fish contains an acrosome, a membrane-enclosed compartment. The acrosome releases proteins that dissolve the surface coat of an egg of the same species. 
In mammals, a molecule of the egg’s surface coat triggers the sperm's acrosome to explosively release its contents onto the surface coat, where the proteins dissolve a tiny hole. A single sperm is then able to make a slitlike channel in the surface coat, through which it swims to reach the egg's cell membrane. In fish eggs that do not have acrosomes, specialized channels, called micropyles, enable a single sperm to swim down through the egg's surface coat to reach the cell membrane. When more than one sperm enters the egg, the resulting zygote typically develops abnormally.
The next step in fertilization—the fusion of sperm and egg cell membranes—is poorly understood. When the membranes fuse, a single sperm and the egg become one cell. This process takes only seconds, and it is directly observable by researchers. Specific proteins on the surface of the sperm appear to induce this fusion process, but the exact mechanism is not yet known. 
After fusion of the cell membranes the sperm is motionless. The egg extends cytoplasmic fingers to surround the sperm and pull it into the egg's cytoplasm. Filaments called microtubules begin to grow from the inner surface of the egg cell's membrane inward toward the cell's center, resembling spokes of a bicycle wheel growing from the rim inward toward the wheel's hub. As the microtubules grow, the sperm and egg nuclei are pushed toward the egg's center. Finally, in a process that is also poorly understood, the egg and sperm nuclear envelopes (outer membranes) fuse, permitting the chromosomes from the egg and sperm to mix within a common space. A zygote is formed, and development of an embryo begins.
III TYPES OF FERTILIZATION
Two types of fertilization occur in animals: external and internal. In external fertilization the egg and sperm come together outside of the parents' bodies. Animals such as sea urchins, starfish, clams, mussels, frogs, corals, and many fish reproduce in this way. The gametes are released, or spawned, by the adults into the ocean or a pond. Fertilization takes place in this watery environment, where embryos start to develop.
A disadvantage to external fertilization is that the meeting of egg and sperm is somewhat left to chance. Swift water currents, water temperature changes, predators, and a variety of other interruptions can prevent fertilization from occurring. A number of adaptations help ensure that offspring will successfully be produced. The most important adaptation is the production of literally millions of sperm and eggs—if even a tiny fraction of these gametes survive to become zygotes, many offspring will still result. 
Males and females also use behavioral clues, chemical signals, or other stimuli to coordinate spawning so that sperm and eggs appear in the water at the same time and in the same place. In animals that use external fertilization, there is no parental care for the developing embryos. Instead, the eggs of these animals contain a food supply in the form of a yolk that nourishes the embryos until they hatch and are able to feed on their own.
Internal fertilization takes place inside the female's body. The male typically has a penis or other structure that delivers sperm into the female's reproductive tract. All mammals, reptiles, and birds as well as some invertebrates, including snails, worms, and insects, use internal fertilization. Internal fertilization does not necessarily require that the developing embryo remains inside the female's body. In honey bees, for example, the queen bee deposits the fertilized eggs into special compartments in the honeycomb. These compartments are supplied with food resources for the young bees to use as they develop. 
Various adaptations have evolved in the reproductive process of internal-fertilizing organisms. Because the sperm and egg are always protected inside the male's and female's bodies—and are deliberately placed into close contact during mating—relatively few sperm and eggs are produced. Many animals in this group provide extensive parental care of their young. In most mammals, including humans, two specialized structures in the female's body further help to protect and nourish the developing embryo. One is the uterus, which is the cushioned chamber where the embryo matures before birth; the other is the placenta, which is a blood-rich organ that supplies nutrients to the embryo and also removes its wastes (see Pregnancy and Childbirth).
IV RESEARCH ISSUES
Although reproduction is well studied in many kinds of organisms, fertilization is one of the least understood of all fundamental biological processes. Our knowledge of this fascinating topic has been vastly improved by many recent discoveries. For example, researchers have discovered how to clone the genes that direct the fertilization process. 
Yet many important questions still remain. Scientists are actively trying to determine issues such as how sperm and egg cells recognize that they are from the same species; what molecules sperm use to attach to egg coats; and how signals on the sperm's surface are relayed inside to trigger the acrosome reaction. With continued study, answers to these questions will one day be known.
Q12:
(i)
(ii) Companies researching compressed natural gas (CNG) and methanol (most of which is made from natural gas today but can be made from garbage, trees, or seaweed) have received government subsidies to get these efforts off the ground. But with oil prices still low, consumers have had little incentive to accept the inconveniences of finding supply stations, more time-consuming fueling, reduced power output, and reduced driving range. Currently, all the alternatives to gasoline have drawbacks in cost, ease of transport, and efficiency that prohibit their spread. But that could change rapidly if another oil crisis like that of the 1970s develops and if research continues.
Any fuel combustion contributes to greenhouse gas emissions, however, and automakers anticipate stricter energy-consumption standards in the future. In the United States onerous gasoline or energy taxes are less likely than a sudden tightening of CAFE standards, which have not changed for cars since 1994. Such a restriction could, for example, put an end to the current boom in sales of large sport-utility vehicles that get relatively poor gas mileage. Therefore, long-term research focuses on other means of propulsion, including cars powered by electricity.

(iii) Polyvinyl chloride (PVC) is prepared from the organic compound vinyl chloride (CH2=CHCl). PVC is the most widely used of the amorphous plastics. PVC is lightweight, durable, and waterproof. Chlorine atoms bonded to the carbon backbone of its molecules give PVC its hard and flame-resistant properties.
In its rigid form, PVC is weather-resistant and is extruded into pipe, house siding, and gutters. Rigid PVC is also blow molded into clear bottles and is used to form other consumer products, including compact discs and computer casings.
PVC can be softened with certain chemicals. This softened form of PVC is used to make shrink-wrap, food packaging, rainwear, shoe soles, shampoo containers, floor tile, gloves, upholstery, and other products. Most softened PVC plastic products are manufactured by extrusion, injection molding, or casting.
(iv)
(v) Antibiotics
I INTRODUCTION
Antibiotics (Greek anti, “against”; bios, “life”) are chemical compounds used to kill or inhibit the growth of infectious organisms. Originally the term antibiotic referred only to organic compounds, produced by bacteria or molds, that are toxic to other microorganisms. The term is now used loosely to include synthetic and semisynthetic organic compounds. Antibiotic refers generally to antibacterials; however, because the term is loosely defined, it is preferable to specify compounds as being antimalarials, antivirals, or antiprotozoals. All antibiotics share the property of selective toxicity: They are more toxic to an invading organism than they are to an animal or human host. Penicillin is the best-known antibiotic and has been used to fight many infectious diseases, including syphilis, gonorrhea, tetanus, and scarlet fever. Another antibiotic, streptomycin, has been used to combat tuberculosis.
II HISTORY
Although the mechanisms of antibiotic action were not scientifically understood until the late 20th century, the principle of using organic compounds to fight infection has been known since ancient times. Crude plant extracts were used medicinally for centuries, and there is anecdotal evidence for the use of cheese molds for topical treatment of infection. The first observation of what would now be called an antibiotic effect was made in the 19th century by French chemist Louis Pasteur, who discovered that certain saprophytic bacteria can kill anthrax bacilli. In the first decade of the 20th century, German physician and chemist Paul Ehrlich began experimenting with the synthesis of organic compounds that would selectively attack an infecting organism without harming the host organism. His experiments led to the development, in 1909, of salvarsan, a synthetic compound containing arsenic, which exhibited selective action against spirochetes, the bacteria that cause syphilis. Salvarsan remained the only effective treatment for syphilis until the purification of penicillin in the 1940s. In the 1920s British bacteriologist Sir Alexander Fleming, who later discovered penicillin, found a substance called lysozyme in many bodily secretions, such as tears and sweat, and in certain other plant and animal substances. Lysozyme has some antimicrobial activity, but it is not clinically useful.
Penicillin, the archetype of antibiotics, is a derivative of the mold Penicillium notatum. Penicillin was discovered accidentally in 1928 by Fleming, who showed its effectiveness in laboratory cultures against many disease-producing bacteria. This discovery marked the beginning of the development of antibacterial compounds produced by living organisms. Penicillin in its original form could not be given by mouth because it was destroyed in the digestive tract, and the preparations had too many impurities for injection. No progress was made until the outbreak of World War II stimulated renewed research and the Australian pathologist Sir Howard Florey and German-British biochemist Ernst Chain purified enough of the drug to show that it would protect mice from infection. Florey and Chain then used the purified penicillin on a human patient who had staphylococcal and streptococcal septicemia with multiple abscesses and osteomyelitis. The patient, gravely ill and near death, was given intravenous injections of a partly purified preparation of penicillin every three hours. Because so little was available, the patient's urine was collected each day; the penicillin was extracted from it and used again. After five days the patient's condition improved vastly. However, with each passage through the body, some penicillin was lost. Eventually the supply ran out and the patient died.
The first antibiotic to be used successfully in the treatment of human disease was tyrothricin, isolated from certain soil bacteria by American bacteriologist Rene Dubos in 1939. This substance is too toxic for general use, but it is employed in the external treatment of certain infections. Other antibiotics produced by a group of soil bacteria called actinomycetes have proved more successful. One of these, streptomycin, discovered in 1944 by American biologist Selman Waksman and his associates, was, in its time, the major treatment for tuberculosis.
Since antibiotics came into general use in the 1950s, they have transformed the patterns of disease and death. Many diseases that once headed the mortality tables—such as tuberculosis, pneumonia, and septicemia—now hold lower positions. Surgical procedures, too, have been improved enormously, because lengthy and complex operations can now be carried out without a prohibitively high risk of infection. Chemotherapy has also been used in the treatment or prevention of protozoal and fungal diseases, especially malaria, a major killer in economically developing nations (see Third World). Slow progress is being made in the chemotherapeutic treatment of viral diseases. New drugs have been developed and used to treat shingles (see herpes) and chicken pox. There is also a continuing effort to find a cure for acquired immunodeficiency syndrome (AIDS), caused by the human immunodeficiency virus (HIV).
III CLASSIFICATION
Antibiotics can be classified in several ways. The most common method classifies them according to their action against the infecting organism. Some antibiotics attack the cell wall; some disrupt the cell membrane; and the majority inhibit the synthesis of nucleic acids and proteins, the polymers that make up the bacterial cell. Another method classifies antibiotics according to which bacterial strains they affect: staphylococcus, streptococcus, or Escherichia coli, for example. Antibiotics are also classified on the basis of chemical structure, as penicillins, cephalosporins, aminoglycosides, tetracyclines, macrolides, or sulfonamides, among others.
A Mechanisms of Action
Most antibiotics act by selectively interfering with the synthesis of one of the large-molecule constituents of the cell—the cell wall or proteins or nucleic acids. Some, however, act by disrupting the cell membrane (see Cell Death and Growth Suppression below). Some important and clinically useful drugs interfere with the synthesis of peptidoglycan, the most important component of the cell wall. These drugs include the β-lactam antibiotics, which are classified according to chemical structure into penicillins, cephalosporins, and carbapenems. All these antibiotics contain a β-lactam ring as a critical part of their chemical structure, and they inhibit synthesis of peptidoglycan, an essential part of the cell wall. They do not interfere with the synthesis of other intracellular components. The continuing buildup of materials inside the cell exerts ever greater pressure on the membrane, which is no longer properly supported by peptidoglycan. The membrane gives way, the cell contents leak out, and the bacterium dies. These antibiotics do not affect human cells because human cells do not have cell walls.
Many antibiotics operate by inhibiting the synthesis of various intracellular bacterial molecules, including DNA, RNA, ribosomes, and proteins. The synthetic sulfonamides are among the antibiotics that indirectly interfere with nucleic acid synthesis. Nucleic-acid synthesis can also be stopped by antibiotics that inhibit the enzymes that assemble these polymers—for example, DNA polymerase or RNA polymerase. Examples of such antibiotics are actinomycin, rifamycin, and rifampicin, the last two being particularly valuable in the treatment of tuberculosis. The quinolone antibiotics inhibit synthesis of an enzyme responsible for the coiling and uncoiling of the chromosome, a process necessary for DNA replication and for transcription to messenger RNA. Some antibacterials affect the assembly of messenger RNA, thus causing its genetic message to be garbled. When these faulty messages are translated, the protein products are nonfunctional. There are also other mechanisms: The tetracyclines compete with incoming transfer-RNA molecules; the aminoglycosides cause the genetic message to be misread and a defective protein to be produced; chloramphenicol prevents the linking of amino acids to the growing protein; and puromycin causes the protein chain to terminate prematurely, releasing an incomplete protein.
B Range of Effectiveness
In some species of bacteria the cell wall consists primarily of a thick layer of peptidoglycan. Other species have a much thinner layer of peptidoglycan and an outer as well as an inner membrane. When bacteria are subjected to Gram's stain, these differences in structure affect the differential staining of the bacteria with a dye called gentian violet. The differences in staining coloration (gram-positive bacteria appear purple and gram-negative bacteria appear colorless or reddish, depending on the process used) are the basis of the classification of bacteria into gram-positive (those with thick peptidoglycan) and gram-negative (those with thin peptidoglycan and an outer membrane), because the staining properties correlate with many other bacterial properties. Antibacterials can be further subdivided into narrow-spectrum and broad-spectrum agents. The narrow-spectrum penicillins act against many gram-positive bacteria. Aminoglycosides, also narrow-spectrum, act against many gram-negative as well as some gram-positive bacteria. The tetracyclines and chloramphenicols are both broad-spectrum drugs because they are effective against both gram-positive and gram-negative bacteria.
C Cell Death and Growth Suppression
Antibiotics may also be classed as bactericidal (killing bacteria) or bacteriostatic (stopping bacterial growth and multiplication). Bacteriostatic drugs are nonetheless effective because bacteria that are prevented from growing will die off after a time or be killed by the defense mechanisms of the host. The tetracyclines and the sulfonamides are among the bacteriostatic antibiotics. Antibiotics that damage the cell membrane cause the cell's metabolites to leak out, thus killing the organism. Such compounds, including penicillins and cephalosporins, are therefore classed as bactericidal.
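The classifications described in this section can be summarized as a small lookup table. The sketch below is illustrative only: the entries are drawn from the text above, the dictionary name and `describe` helper are invented for the example, and the table is in no way an exhaustive pharmacological reference.

```python
# A small lookup table summarizing the classification scheme above:
# each antibiotic class is keyed to its cellular target and to whether
# it is bactericidal or bacteriostatic. Entries reflect the text only.

ANTIBIOTIC_CLASSES = {
    "penicillins":    {"target": "cell wall (peptidoglycan)", "effect": "bactericidal"},
    "cephalosporins": {"target": "cell wall (peptidoglycan)", "effect": "bactericidal"},
    "tetracyclines":  {"target": "protein synthesis",         "effect": "bacteriostatic"},
    "sulfonamides":   {"target": "nucleic acid synthesis",    "effect": "bacteriostatic"},
}

def describe(name):
    """Return a one-line summary for a class in the table."""
    info = ANTIBIOTIC_CLASSES[name]
    return f"{name}: act on {info['target']}, {info['effect']}"

print(describe("penicillins"))
```

A structure like this makes the two independent axes of classification (target versus effect) explicit, which the running prose necessarily interleaves.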
IV TYPES OF ANTIBIOTICS
Following is a list of some of the more common antibiotics and examples of some of their clinical uses. This section does not include all antibiotics or all of their clinical applications.
A Penicillins
Penicillins are bactericidal, inhibiting formation of the cell wall. There are four types of penicillins: the narrow-spectrum penicillin-G types, ampicillin and its relatives, the penicillinase-resistant penicillins, and the extended-spectrum penicillins that are active against Pseudomonas. Penicillin-G types are effective against gram-positive strains of streptococci, staphylococci, and some gram-negative bacteria such as meningococcus. Penicillin-G is used to treat such diseases as syphilis, gonorrhea, meningitis, anthrax, and yaws. The related penicillin V has a similar range of action but is less effective. Ampicillin and amoxicillin have a range of effectiveness similar to that of penicillin-G, with a slightly broader spectrum, including some gram-negative bacteria. The penicillinase-resistant penicillins combat bacteria that have developed resistance to penicillin-G. The antipseudomonal penicillins are used against infections caused by gram-negative Pseudomonas bacteria, a particular problem in hospitals. They may be administered as a prophylactic in patients with compromised immune systems, who are at risk from gram-negative infections.
Side effects of the penicillins, while relatively rare, can include immediate and delayed allergic reactions—specifically, skin rashes, fever, and anaphylactic shock, which can be fatal.
B Cephalosporins
Like the penicillins, cephalosporins have a β-lactam ring structure that interferes with synthesis of the bacterial cell wall and so are bactericidal. Cephalosporins are more effective than penicillin against gram-negative bacilli and equally effective against gram-positive cocci. Cephalosporins may be used to treat strains of meningitis and as a prophylactic for orthopedic, abdominal, and pelvic surgery. Rare hypersensitive reactions from the cephalosporins include skin rash and, less frequently, anaphylactic shock.
C Aminoglycosides
Streptomycin is the oldest of the aminoglycosides. The aminoglycosides inhibit bacterial protein synthesis in many gram-negative and some gram-positive organisms. They are sometimes used in combination with penicillin. The members of this group tend to be more toxic than other antibiotics. Rare adverse effects associated with prolonged use of aminoglycosides include damage to the vestibular region of the ear, hearing loss, and kidney damage.
D Tetracyclines
Tetracyclines are bacteriostatic, inhibiting bacterial protein synthesis. They are broad-spectrum antibiotics effective against strains of streptococci, gram-negative bacilli, rickettsiae (the bacteria that cause typhus), and spirochetes (the bacteria that cause syphilis). They are also used to treat urinary-tract infections and bronchitis. Because of their wide range of effectiveness, tetracyclines can sometimes upset the balance of resident bacteria that are normally held in check by the body's immune system, leading to secondary infections in the gastrointestinal tract and vagina, for example. Tetracycline use is now limited because of the increase of resistant bacterial strains.
E Macrolides
The macrolides are bacteriostatic, binding with bacterial ribosomes to inhibit protein synthesis. Erythromycin, one of the macrolides, is effective against gram-positive cocci and is often used as a substitute for penicillin against streptococcal and pneumococcal infections. Other uses for macrolides include diphtheria and bacteremia. Side effects may include nausea, vomiting, and diarrhea; infrequently, there may be temporary auditory impairment.
F Sulfonamides
The sulfonamides are synthetic, bacteriostatic, broad-spectrum antibiotics, effective against most gram-positive and many gram-negative bacteria. However, because many gram-negative bacteria have developed resistance to the sulfonamides, these antibiotics are now used only in very specific situations, including treatment of urinary-tract infection, against meningococcal strains, and as a prophylactic for rheumatic fever. Side effects may include disruption of the gastrointestinal tract and hypersensitivity.
V PRODUCTION
The production of a new antibiotic is lengthy and costly. First, the organism that makes the antibiotic must be identified and the antibiotic tested against a wide variety of bacterial species. Then the organism must be grown on a scale large enough to allow the purification and chemical analysis of the antibiotic and to demonstrate that it is unique. This is a complex procedure because there are several thousand compounds with antibiotic activity that have already been discovered, and these compounds are repeatedly rediscovered. After the antibiotic has been shown to be useful in the treatment of infections in animals, larger-scale preparation can be undertaken.
Commercial development requires a high yield and an economic method of purification. Extensive research may be needed to increase the yield by selecting improved strains of the organism or by changing the growth medium. The organism is then grown in large steel vats, in submerged cultures with forced aeration. The naturally fermented product may be modified chemically to produce a semisynthetic antibiotic. After purification, the effect of the antibiotic on the normal function of host tissues and organs (its pharmacology), as well as its possible toxic actions (toxicology), must be tested on a large number of animals of several species. In addition, the effective forms of administration must be determined. Antibiotics may be topical, applied to the surface of the skin, eye, or ear in the form of ointments or creams. They may be oral, or given by mouth, and either allowed to dissolve in the mouth or swallowed, in which case they are absorbed into the bloodstream through the intestines. Antibiotics may also be parenteral, or injected intramuscularly, intravenously, or subcutaneously; antibiotics are administered parenterally when fast absorption is required.
In the United States, once these steps have been completed, the manufacturer may file an Investigational New Drug Application with the Food and Drug Administration (FDA). If approved, the antibiotic can be tested on volunteers for toxicity, tolerance, absorption, and excretion. If subsequent tests on small numbers of patients are successful, the drug can be used on a larger group, usually in the hundreds. Finally a New Drug Application can be filed with the FDA, and, if this application is approved, the drug can be used generally in clinical medicine. These procedures, from the time the antibiotic is discovered in the laboratory until it undergoes clinical trial, usually extend over several years.
VI RISKS AND LIMITATIONS
The use of antibiotics is limited because bacteria have evolved defenses against certain antibiotics. One of the main mechanisms of defense is inactivation of the antibiotic. This is the usual defense against penicillins and chloramphenicol, among others. Another form of defense involves a mutation that changes the bacterial enzyme affected by the drug in such a way that the antibiotic can no longer inhibit it. This is the main mechanism of resistance to the compounds that inhibit protein synthesis, such as the tetracyclines.
All these forms of resistance are transmitted genetically by the bacterium to its progeny. Genes that carry resistance can also be transmitted from one bacterium to another by means of plasmids, small loops of DNA separate from the bacterial chromosome that contain only a few genes, including the resistance gene. Some bacteria conjugate with others of the same species, forming temporary links during which the plasmids are passed from one to another. If two plasmids carrying resistance genes to different antibiotics are transferred to the same bacterium, their resistance genes can be assembled onto a single plasmid. The combined resistances can then be transmitted to another bacterium, where they may be combined with yet another type of resistance. In this way, plasmids are generated that carry resistance to several different classes of antibiotic. In addition, plasmids have evolved that can be transmitted from one species of bacteria to another, and these can transfer multiple antibiotic resistance between very dissimilar species of bacteria.
The problem of resistance has been exacerbated by the use of antibiotics as prophylactics, intended to prevent infection before it occurs. Indiscriminate and inappropriate use of antibiotics for the treatment of the common cold and other common viral infections, against which they have no effect, removes antibiotic-sensitive bacteria and allows the development of antibiotic-resistant bacteria. Similarly, the use of antibiotics in poultry and livestock feed has promoted the spread of drug resistance and has led to the widespread contamination of meat and poultry by drug-resistant bacteria such as Salmonella.
In the 1970s, tuberculosis seemed to have been nearly eradicated in the developed countries, although it was still prevalent in developing countries. Now its incidence is increasing, partly due to resistance of the tubercle bacillus to antibiotics. Some bacteria, particularly strains of staphylococci, are resistant to so many classes of antibiotics that the infections they cause are almost untreatable. When such a strain invades a surgical ward in a hospital, it is sometimes necessary to close the ward altogether for a time. Similarly, plasmodia, the causative organisms of malaria, have developed resistance to antimalarial drugs, while, at the same time, the mosquitoes that carry plasmodia have become resistant to the insecticides that were once used to control them. Consequently, although malaria had been almost entirely eliminated, it is now again rampant in Africa, the Middle East, Southeast Asia, and parts of Latin America. Furthermore, the discovery of new antibiotics is now much less common than in the past.
(vi) Ceramics
I INTRODUCTION
Ceramics (Greek keramos, "potter's clay"), originally the art of making pottery, now a general term for the science of manufacturing articles prepared from pliable, earthy materials that are made rigid by exposure to heat. Ceramic materials are nonmetallic, inorganic compounds—primarily compounds of oxygen, but also compounds of carbon, nitrogen, boron, and silicon. Ceramics includes the manufacture of earthenware, porcelain, bricks, and some kinds of tile and stoneware. 
Ceramic products are used not only for artistic objects and tableware, but also for industrial and technical items, such as sewer pipe and electrical insulators. Ceramic insulators have a wide range of electrical properties. The electrical properties of a recently discovered family of ceramics based on a copper-oxide mixture allow these ceramics to become superconductive, or to conduct electricity with no resistance, at temperatures much higher than those at which metals do (see Superconductivity). In space technology, ceramic materials are used to make components for space vehicles.
The rest of this article will deal only with ceramic products that have industrial or technical applications. Such products are known as industrial ceramics. The term industrial ceramics also refers to the science and technology of developing and manufacturing such products.
II PROPERTIES
Ceramics possess chemical, mechanical, physical, thermal, electrical, and magnetic properties that distinguish them from other materials, such as metals and plastics. Manufacturers customize the properties of ceramics by controlling the type and amount of the materials used to make them.
A Chemical Properties
Industrial ceramics are primarily oxides (compounds of oxygen), but some are carbides (compounds of carbon and heavy metals), nitrides (compounds of nitrogen), borides (compounds of boron), and silicides (compounds of silicon). For example, aluminum oxide can be the main ingredient of a ceramic—the important alumina ceramics contain 85 to 99 percent aluminum oxide. Primary components, such as the oxides, can also be chemically combined to form complex compounds that are the main ingredient of a ceramic. Examples of such complex compounds are barium titanate (BaTiO3) and zinc ferrite (ZnFe2O4). Another material that may be regarded as a ceramic is the element carbon (in the form of diamond or graphite). 
Ceramics are more resistant to corrosion than plastics and metals are. Ceramics generally do not react with most liquids, gases, alkalies, and acids. Most ceramics have very high melting points, and certain ceramics can be used up to temperatures approaching their melting points. Ceramics also remain stable over long time periods. 
B Mechanical Properties
Ceramics are extremely strong, showing considerable stiffness under compression and bending. Bend strength, the amount of stress a material can withstand when bent before it breaks, is often used to determine the strength of a ceramic. One of the strongest ceramics, zirconium dioxide, has a bend strength similar to that of steel. Zirconias (ZrO2) retain their strength up to temperatures of 900° C (1652° F), while silicon carbides and silicon nitrides retain their strength up to temperatures of 1400° C (2552° F). These silicon materials are used in high-temperature applications, such as to make parts for gas-turbine engines. Although ceramics are strong and temperature-resistant, they are brittle and may break when dropped or when heated and cooled too quickly.
C Physical Properties
Most industrial ceramics are compounds of oxygen, carbon, or nitrogen with lighter metals or semimetals. Thus, ceramics are less dense than most metals. As a result, a light ceramic part may be just as strong as a heavier metal part. Ceramics are also extremely hard, resisting wear and abrasion. The hardest known substance is diamond, followed by boron nitride in cubic-crystal form. Aluminum oxide and silicon carbide are also extremely hard materials and are often used to cut, grind, sand, and polish metals and other hard materials. 
D Thermal Properties
Most ceramics have high melting points, meaning that even at high temperatures, these materials resist deformation and retain strength under pressure. Silicon carbide and silicon nitride, for example, withstand temperature changes better than most metals do. Large and sudden changes in temperature, however, can weaken ceramics. Materials that undergo less expansion or contraction per degree of temperature change can withstand sudden changes in temperature better than materials that undergo greater deformation. Silicon carbide and silicon nitride expand and contract less during temperature changes than most other ceramics do. These materials are therefore often used to make parts, such as turbine rotors used in jet engines, that can withstand extreme variations in temperature. 
E Electrical Properties
Certain ceramics conduct electricity. Chromium dioxide, for example, conducts electricity as well as most metals do. Other ceramics, such as silicon carbide, do not conduct electricity as well, but may still act as semiconductors. (A semiconductor is a material with greater electrical conductivity than an insulator has but with less than that of a good conductor.) Other types of ceramics, such as aluminum oxide, do not conduct electricity at all. These ceramics are used as insulators—devices used to separate elements in an electrical circuit to keep the current on the desired pathway. Certain ceramics, such as porcelain, act as insulators at lower temperatures but conduct electricity at higher temperatures. 
F Magnetic Properties
Ceramics containing iron oxide (Fe2O3) can have magnetic properties similar to those of iron, nickel, and cobalt magnets (see Magnetism). These iron oxide-based ceramics are called ferrites. Other magnetic ceramics include oxides of nickel, manganese, and barium. Ceramic magnets, used in electric motors and electronic circuits, can be manufactured with high resistance to demagnetization. When electrons become highly aligned, as they do in ceramic magnets, they create a powerful magnetic field which is more difficult to disrupt (demagnetize) by breaking the alignment of the electrons. 
III MANUFACTURE
Industrial ceramics are produced from powders that have been tightly squeezed and then heated to high temperatures. Traditional ceramics, such as porcelain, tiles, and pottery, are formed from powders made from minerals such as clay, talc, silica, and feldspar. Most industrial ceramics, however, are formed from highly pure powders of specialty chemicals such as silicon carbide, alumina, and barium titanate. 
The minerals used to make ceramics are dug from the earth and are then crushed and ground into fine powder. Manufacturers often purify this powder by mixing it in solution and allowing a chemical precipitate (a uniform solid that forms within a solution) to form. The precipitate is then separated from the solution, and the powder is heated to drive off impurities, including water. The result is typically a highly pure powder with particle sizes of about 1 micrometer (a micrometer is 0.000001 meter, or 0.00004 in).
A Molding
After purification, small amounts of wax are often added to bind the ceramic powder and make it more workable. Plastics may also be added to the powder to give the desired pliability and softness. The powder can then be shaped into different objects by various molding processes. These molding processes include slip casting, pressure casting, injection molding, and extrusion. After the ceramic is molded, it is heated in a process known as densification to make the material stronger and more dense. 
A1 Slip Casting
Slip casting is a molding process used to form hollow ceramic objects. The ceramic powder is mixed with water to form a fluid slurry known as slip, which is poured into a mold with porous walls. Capillary action (the force created by surface tension and by wetting of the pore walls) draws the water out of the slip and into the mold, leaving a solid layer of ceramic deposited on the inside of the mold.
A2 Pressure Casting
In pressure casting, ceramic powder is poured into a mold, and pressure is then applied to the powder. The pressure condenses the powder into a solid layer of ceramic that is shaped to the inside of the mold. 
A3 Injection Molding
Injection molding is used to make small, intricate objects. This method uses a piston to force the ceramic powder through a heated tube into a mold, where the powder cools, hardening to the shape of the mold. When the object has solidified, the mold is opened and the ceramic piece is removed.
A4 Extrusion
Extrusion is a continuous process in which ceramic powder is heated in a long barrel. A rotating screw then forces the heated material through an opening of the desired shape. As the continuous form emerges from the die opening, the form cools, solidifies, and is cut to the desired length. Extrusion is used to make products such as ceramic pipe, tiles, and brick.
B Densification
The process of densification uses intense heat to condense a ceramic object into a strong, dense product. After being molded, the ceramic object is heated in an electric furnace to temperatures between 1000° and 1700° C (1832° and 3092° F). As the ceramic heats, the powder particles coalesce, much as water droplets join at room temperature. As the ceramic particles merge, the object becomes increasingly dense, shrinking by up to 20 percent of its original size. The goal of this heating process is to maximize the ceramic’s strength by obtaining an internal structure that is compact and extremely dense.
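A short sketch may make clear why this shrinkage matters. Assuming the mass of the part is unchanged during firing and that the quoted 20 percent is linear shrinkage (the text does not specify linear versus volume shrinkage), the density gain can be estimated as follows:

```python
# Density gain from sintering shrinkage at constant mass.
# Assumes the quoted 20 percent figure is linear shrinkage (an assumption;
# the text does not say whether it refers to length or volume).
linear_shrinkage = 0.20
volume_ratio = (1 - linear_shrinkage) ** 3   # 0.8**3 = 0.512 of the original volume

density_gain = 1 / volume_ratio              # same mass packed into about half the volume
print(round(density_gain, 2))                # 1.95: the fired part is nearly twice as dense
```

Under this assumption, a 20 percent linear shrinkage roughly doubles the density, which is the point of the heating step.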
IV APPLICATIONS
Ceramics are valued for their mechanical properties, including strength, durability, and hardness. Their electrical and magnetic properties make them valuable in electronic applications, where they are used as insulators, semiconductors, conductors, and magnets. Ceramics also have important uses in the aerospace, biomedical, construction, and nuclear industries.
A Mechanical Applications 
Industrial ceramics are widely used for applications requiring strong, hard, and abrasion-resistant materials. For example, machinists use metal-cutting tools tipped with alumina, as well as tools made from silicon nitrides, to cut, shape, grind, sand, and polish cast iron, nickel-based alloys, and other metals. Silicon nitrides, silicon carbides, and certain types of zirconias are used to make components such as valves and turbocharger rotors for high-temperature diesel and gas-turbine engines. The textile industry uses ceramics for thread guides that can resist the cutting action of fibers traveling through these guides at high speed.
B Electrical and Magnetic Applications
Ceramic materials have a wide range of electrical properties. Hence, ceramics are used as insulators (poor conductors of electricity), semiconductors (greater conductivity than insulators but less than good conductors), and conductors (good conductors of electricity).
Ceramics such as aluminum oxide (Al2O3) do not conduct electricity at all and are used to make insulators. Stacks of disks made of this material are used to suspend high-voltage power lines from transmission towers. Similarly, thin plates of aluminum oxide, which remain electrically and chemically stable when exposed to high-frequency currents, are used to hold microchips.
Other ceramics make excellent semiconductors. Small semiconductor chips, often made from barium titanate (BaTiO3) and strontium titanate (SrTiO3), may contain hundreds of thousands of transistors, making possible the miniaturization of electronic devices. 
Scientists have discovered a family of copper-oxide-based ceramics that become superconductive at higher temperatures than do metals. Superconductivity refers to the ability of a cooled material to conduct an electric current with no resistance. This phenomenon can occur only at extremely low temperatures, which are difficult to maintain. However, in 1988 researchers discovered a copper oxide ceramic that becomes superconductive at -148° C (-234° F). This temperature is far higher than the temperatures at which metals become superconductors (see Superconductivity). 
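To put the quoted transition temperature in context, it can be converted to kelvins and compared with a conventional metallic superconductor. The niobium figure below is a standard textbook value, not taken from this article:

```python
# Convert the ceramic superconductor's transition temperature to kelvins
# and compare it with niobium, a typical metallic superconductor
# (niobium's ~9.3 K transition is a standard reference value, assumed here).
ceramic_tc_c = -148.0
ceramic_tc_k = ceramic_tc_c + 273.15
print(round(ceramic_tc_k))                  # 125 K

niobium_tc_k = 9.3
print(round(ceramic_tc_k / niobium_tc_k))   # roughly 13 times higher
```

Even so, 125 K is far below room temperature, which is why such materials still require cooling, typically with liquid nitrogen.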
Thin insulating films of ceramic material such as barium titanate and strontium titanate are capable of storing large quantities of electricity in extremely small volumes. Devices capable of storing electrical charge are known as capacitors. Engineers form miniature capacitors from ceramics and use them in televisions, stereos, computers, and other electronic products.
Ferrites (ceramics containing iron oxide) are widely used as low-cost magnets in electric motors. These magnets help convert electric energy into mechanical energy. In an electric motor, an electric current is passed through a magnetic field created by a ceramic magnet. As the current passes through the magnetic field, the motor coil turns, creating mechanical energy. Unlike metal magnets, ferrites are poor electrical conductors, so high-frequency currents (currents that reverse direction very rapidly) induce few wasteful eddy currents in them. As a result, ferrite components lose far less power at high frequencies than metal conductors do. Ferrites are also used in video, radio, and microwave equipment. Manganese zinc ferrites are used in magnetic recording heads, and bits of ferric oxides are the active component in a variety of magnetic recording media, such as recording tape and computer diskettes (see Sound Recording and Reproduction; Floppy Disk).
C Aerospace 
Aerospace engineers use ceramic materials and cermets (durable, highly heat-resistant alloys made by combining powdered metal with an oxide or carbide and then pressing and baking the mixture) to make components for space vehicles. Such components include heat-shield tiles for the space shuttle and nosecones for rocket payloads.
D Bioceramics
Certain advanced ceramics are compatible with bone and tissue and are used in the biomedical field to make implants for use within the body. For example, specially prepared, porous alumina will bond with bone and other natural tissue. Medical and dental specialists use this ceramic to make hip joints, dental caps, and dental bridges. Ceramics such as calcium hydroxyl phosphates are compatible with bone and are used to reconstruct fractured or diseased bone (see Bioengineering; Dentistry).
E Nuclear Power
Engineers use uranium ceramic pellets to generate nuclear power. These pellets are produced in fuel fabrication plants from the gas uranium hexafluoride (UF6). The pellets are then packed into hollow tubes called fuel rods and are transported to nuclear power plants.
F Building and Construction
Manufacturers use ceramics to make bricks, tiles, piping, and other construction materials. Ceramics for these purposes are made primarily from clay and shale. Household fixtures such as sinks and bathtubs are made from feldspar- and clay-based ceramics. 
G Coatings
Because ceramic materials are harder and have better corrosion resistance than most metals, manufacturers often coat metal with ceramic enamel. Manufacturers apply ceramic enamel by injecting a compressed gas containing ceramic powder into the flame of a hydrocarbon-oxygen torch burning at about 2500° C (about 4500° F). The semimolten powder particles adhere to the metal, cooling to form a hard enamel. Household appliances, such as refrigerators, stoves, washing machines, and dryers, are often coated with ceramic enamel. 

(vii) Greenhouse Effect
I INTRODUCTION
Greenhouse Effect, the capacity of certain gases in the atmosphere to trap heat emitted from the Earth’s surface, thereby insulating and warming the Earth. Without the thermal blanketing of the natural greenhouse effect, the Earth’s climate would be about 33 Celsius degrees (about 59 Fahrenheit degrees) cooler—too cold for most living organisms to survive.
The greenhouse effect has warmed the Earth for over 4 billion years. Now scientists are growing increasingly concerned that human activities may be modifying this natural process, with potentially dangerous consequences. Since the advent of the Industrial Revolution in the 1700s, humans have devised many inventions that burn fossil fuels such as coal, oil, and natural gas. Burning these fossil fuels, as well as other activities such as clearing land for agriculture or urban settlements, releases some of the same gases that trap heat in the atmosphere, including carbon dioxide, methane, and nitrous oxide. Concentrations of these gases in the atmosphere are now higher than at any time in the last 420,000 years. As these gases build up in the atmosphere, they trap more heat near the Earth’s surface, causing Earth’s climate to become warmer than it would naturally be.
Scientists call this unnatural heating effect global warming and blame it for an increase in the Earth’s surface temperature of about 0.6 Celsius degrees (about 1 Fahrenheit degree) over roughly the past century. Without remedial measures, many scientists fear that global temperatures will rise 1.4 to 5.8 Celsius degrees (2.5 to 10.4 Fahrenheit degrees) by 2100. These warmer temperatures could melt parts of polar ice caps and most mountain glaciers, causing a rise in sea level of up to 1 m (40 in) within a century or two, which would flood coastal regions. Global warming could also affect weather patterns, causing, among other problems, prolonged drought or increased flooding in some of the world’s leading agricultural regions.
II HOW THE GREENHOUSE EFFECT WORKS
The greenhouse effect results from the interaction between sunlight and the layer of greenhouse gases in the Earth's atmosphere that extends up to 100 km (60 mi) above Earth's surface. Sunlight is composed of a range of radiant energies known as the solar spectrum, which includes visible light, infrared light, gamma rays, X rays, and ultraviolet light. When the Sun’s radiation reaches the Earth’s atmosphere, some 25 percent of the energy is reflected back into space by clouds and other atmospheric particles. About 20 percent is absorbed in the atmosphere. For instance, gas molecules in the uppermost layers of the atmosphere absorb the Sun’s gamma rays and X rays. The Sun’s ultraviolet radiation is absorbed by the ozone layer, located 19 to 48 km (12 to 30 mi) above the Earth’s surface.
About 50 percent of the Sun’s energy, largely in the form of visible light, passes through the atmosphere to reach the Earth’s surface. Soils, plants, and oceans on the Earth’s surface absorb about 85 percent of this heat energy, while the rest is reflected back into the atmosphere—most effectively by reflective surfaces such as snow, ice, and sandy deserts. In addition, some of the Sun’s radiation that is absorbed by the Earth’s surface becomes heat energy in the form of long-wave infrared radiation, and this energy is released back into the atmosphere. 
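The bookkeeping in the two paragraphs above can be checked with simple arithmetic. The figures below are exactly those quoted in the text; the residual few percent covers minor pathways the article does not itemize:

```python
# Track the fate of incoming solar energy using the percentages quoted above.
reflected_to_space = 0.25        # by clouds and atmospheric particles
absorbed_by_atmosphere = 0.20    # gamma rays, X rays, ultraviolet, etc.
reaching_surface = 0.50          # largely visible light
surface_absorption = 0.85        # fraction of surface-reaching energy absorbed

absorbed_at_surface = reaching_surface * surface_absorption
print(round(absorbed_at_surface, 3))   # 0.425: about 42% of all sunlight heats the surface

accounted = reflected_to_space + absorbed_by_atmosphere + reaching_surface
print(round(accounted, 2))             # 0.95: the text's figures leave ~5% unitemized
```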
Certain gases in the atmosphere, including water vapor, carbon dioxide, methane, and nitrous oxide, absorb this infrared radiant heat, temporarily preventing it from dispersing into space. As these atmospheric gases warm, they in turn emit infrared radiation in all directions. Some of this heat returns back to Earth to further warm the surface in what is known as the greenhouse effect, and some of this heat is eventually released to space. This heat transfer creates equilibrium between the total amount of heat that reaches the Earth from the Sun and the amount of heat that the Earth radiates out into space. This equilibrium or energy balance—the exchange of energy between the Earth’s surface, atmosphere, and space—is important to maintain a climate that can support a wide variety of life.
The heat-trapping gases in the atmosphere behave like the glass of a greenhouse. They let much of the Sun’s rays in, but keep most of that heat from directly escaping. Because of this, they are called greenhouse gases. Without these gases, heat energy absorbed and reflected from the Earth’s surface would easily radiate back out to space, leaving the planet with an inhospitable temperature close to –19°C (–2°F), instead of the present average surface temperature of 15°C (59°F).
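The –19°C figure can be recovered from a simple radiative-balance estimate: a planet with no greenhouse effect must radiate away, by the Stefan-Boltzmann law, exactly the sunlight it absorbs. The solar constant and albedo below are standard textbook values, not stated in this article:

```python
# Estimate Earth's surface temperature with no greenhouse effect:
# absorbed sunlight = emitted infrared (Stefan-Boltzmann law).
SOLAR_CONSTANT = 1361.0   # W/m^2 at Earth's orbit (standard value, assumed)
ALBEDO = 0.30             # fraction of sunlight reflected to space (assumed)
SIGMA = 5.67e-8           # Stefan-Boltzmann constant, W/(m^2 K^4)

absorbed = SOLAR_CONSTANT * (1 - ALBEDO) / 4    # averaged over the whole sphere
t_effective = (absorbed / SIGMA) ** 0.25        # kelvins, no greenhouse effect

print(round(t_effective - 273.15))              # -19 (degrees Celsius)
print(round(15 - (t_effective - 273.15)))       # 34: roughly the 33-degree warming quoted
```

The estimate lands within a degree of the article's figures, which is as close as such a zero-dimensional model can be expected to get.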
To appreciate the importance of the greenhouse gases in creating a climate that helps sustain most forms of life, compare Earth to Mars and Venus. Mars has a thin atmosphere that contains low concentrations of heat-trapping gases. As a result, Mars has a weak greenhouse effect resulting in a largely frozen surface that shows no evidence of life. In contrast, Venus has an atmosphere containing high concentrations of carbon dioxide. This heat-trapping gas prevents heat radiated from the planet’s surface from escaping into space, resulting in surface temperatures that average 462°C (864°F)—too hot to support life. 
III TYPES OF GREENHOUSE GASES
Earth’s atmosphere is primarily composed of nitrogen (78 percent) and oxygen (21 percent). These two most common atmospheric gases have chemical structures that restrict absorption of infrared energy. Only the few greenhouse gases, which make up less than 1 percent of the atmosphere, offer the Earth any insulation. Greenhouse gases occur naturally or are manufactured. The most abundant naturally occurring greenhouse gas is water vapor, followed by carbon dioxide, methane, and nitrous oxide. Human-made chemicals that act as greenhouse gases include chlorofluorocarbons (CFCs), hydrochlorofluorocarbons (HCFCs), and hydrofluorocarbons (HFCs).
Since the 1700s, human activities have substantially increased the levels of greenhouse gases in the atmosphere. Scientists are concerned that expected increases in the concentrations of greenhouse gases will powerfully enhance the atmosphere’s capacity to retain infrared radiation, leading to an artificial warming of the Earth’s surface.
A Water Vapor
Water vapor is the most common greenhouse gas in the atmosphere, accounting for about 60 to 70 percent of the natural greenhouse effect. Humans do not have a significant direct impact on water vapor levels in the atmosphere. However, as human activities increase the concentrations of other greenhouse gases in the atmosphere (producing warmer temperatures on Earth), evaporation from oceans, lakes, rivers, and plants increases, raising the amount of water vapor in the atmosphere.
B Carbon Dioxide
Carbon dioxide constantly circulates in the environment through a variety of natural processes known as the carbon cycle. Volcanic eruptions and the decay of plant and animal matter both release carbon dioxide into the atmosphere. In respiration, animals break down food to release the energy required to build and maintain cellular activity. A byproduct of respiration is the formation of carbon dioxide, which is exhaled from animals into the environment. Oceans, lakes, and rivers absorb carbon dioxide from the atmosphere. Through photosynthesis, plants collect carbon dioxide and use it to make their own food, in the process incorporating carbon into new plant tissue and releasing oxygen to the environment as a byproduct. 
In order to provide energy to heat buildings, power automobiles, and fuel electricity-producing power plants, humans burn objects that contain carbon, such as the fossil fuels oil, coal, and natural gas; wood or wood products; and some solid wastes. When these products are burned, they release carbon dioxide into the air. In addition, humans cut down huge tracts of trees for lumber or to clear land for farming or building. This process, known as deforestation, can both release the carbon stored in trees and significantly reduce the number of trees available to absorb carbon dioxide.
As a result of these human activities, carbon dioxide in the atmosphere is accumulating faster than the Earth’s natural processes can absorb the gas. By analyzing air bubbles trapped in glacier ice that is many centuries old, scientists have determined that carbon dioxide levels in the atmosphere have risen by 31 percent since 1750. And since carbon dioxide increases can remain in the atmosphere for centuries, scientists expect these concentrations to double or triple in the next century if current trends continue. 
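As a rough check on the 31 percent figure, it can be applied to the commonly cited pre-industrial baseline of about 280 ppm. The baseline is a standard reference value assumed here, not one given in this article:

```python
# Apply the 31 percent rise quoted in the text to the commonly cited
# pre-industrial CO2 baseline (280 ppm is an assumed reference value).
preindustrial_ppm = 280.0
increase = 0.31

modern_ppm = preindustrial_ppm * (1 + increase)
print(round(modern_ppm))   # 367 ppm, consistent with measurements around the year 2000
```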
C Methane
Many natural processes produce methane, also known as natural gas. The decomposition of carbon-containing substances in oxygen-free environments, such as wastes in landfills, releases methane. Ruminating animals such as cattle and sheep belch methane into the air as a byproduct of digestion. Microorganisms that live in damp soils, such as rice fields, produce methane when they break down organic matter. Methane is also emitted during coal mining and the production and transport of other fossil fuels.
Methane has more than doubled in the atmosphere since 1750, and could double again in the next century. Atmospheric concentrations of methane are far lower than those of carbon dioxide, and methane stays in the atmosphere for only a decade or so. But scientists consider methane an extremely effective heat-trapping gas: one molecule of methane is 20 times more efficient at trapping infrared radiation radiated from the Earth’s surface than a molecule of carbon dioxide.
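The 20-to-1 per-molecule figure can be combined with typical abundances to gauge methane's overall contribution. The concentrations below are assumed round numbers for about the year 2000, not values stated in this article:

```python
# Weigh each gas's abundance by its per-molecule heat-trapping efficiency
# relative to CO2 (the 20x factor is quoted in the text; the ppm values
# are assumed round figures for roughly the year 2000).
co2_ppm = 370.0
ch4_ppm = 1.75
ch4_per_molecule_factor = 20.0

relative_effect = (ch4_ppm * ch4_per_molecule_factor) / co2_ppm
print(round(relative_effect * 100))   # 9: methane's weighted effect is ~9% of CO2's
```

This illustrates why a scarce gas can still matter: per-molecule potency partly offsets low concentration.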
D Nitrous Oxide
Nitrous oxide is released by the burning of fossil fuels, and automobile exhaust is a large source of this gas. In addition, many farmers use nitrogen-containing fertilizers to provide nutrients to their crops. When these fertilizers break down in the soil, they emit nitrous oxide into the air. Plowing fields also releases nitrous oxide. 
Since 1750 nitrous oxide has risen by 17 percent in the atmosphere. Although this increase is smaller than for the other greenhouse gases, nitrous oxide traps heat about 300 times more effectively than carbon dioxide and can stay in the atmosphere for a century. 
E Fluorinated Compounds
Some of the most potent greenhouse gases emitted are produced solely by human activities. Fluorinated compounds, including CFCs, HCFCs, and HFCs, are used in a variety of manufacturing processes. For each of these synthetic compounds, one molecule is several thousand times more effective in trapping heat than a single molecule of carbon dioxide. 
CFCs, first synthesized in 1928, were widely used in the manufacture of aerosol sprays, blowing agents for foams and packing materials, as solvents, and as refrigerants. Nontoxic and safe to use in most applications, CFCs are harmless in the lower atmosphere. However, in the upper atmosphere, ultraviolet radiation breaks down CFCs, releasing chlorine into the atmosphere. In the mid-1970s, scientists began observing that higher concentrations of chlorine were destroying the ozone layer in the upper atmosphere. Ozone protects the Earth from harmful ultraviolet radiation, which can cause cancer and other damage to plants and animals. Beginning in 1987 with the Montréal Protocol on Substances that Deplete the Ozone Layer, representatives from 47 countries established control measures that limited the consumption of CFCs. By 1992 the Montréal Protocol was amended to completely ban the manufacture and use of CFCs worldwide, except in certain developing countries and for use in special medical processes such as asthma inhalers. 
Scientists devised substitutes for CFCs, developing HCFCs and HFCs. Since HCFCs still release ozone-destroying chlorine in the atmosphere, production of this chemical will be phased out by the year 2030, providing scientists some time to develop a new generation of safer, effective chemicals. HFCs, which do not contain chlorine and only remain in the atmosphere for a short time, are now considered the most effective and safest substitute for CFCs.
F Other Synthetic Chemicals
Experts are concerned about other industrial chemicals that may have heat-trapping abilities. In 2000 scientists observed rising concentrations of a previously unreported compound called trifluoromethyl sulphur pentafluoride. Although present in extremely low concentrations in the environment, the gas still poses a significant threat because, molecule for molecule, it traps heat more effectively than any other known greenhouse gas. The gas is undisputedly a product of industrial processes, but its exact sources remain uncertain.
IV OTHER FACTORS AFFECTING THE GREENHOUSE EFFECT
Aerosols, also known as particulates, are airborne particles that absorb, scatter, and reflect radiation back into space. Clouds, windblown dust, and particles that can be traced to erupting volcanoes are examples of natural aerosols. Human activities, including the burning of fossil fuels and slash-and-burn farming techniques used to clear forestland, contribute additional aerosols to the atmosphere. Although aerosols are not considered a heat-trapping greenhouse gas, they do affect the transfer of heat energy radiated from the Earth to space. The effect of aerosols on climate change is still debated, but scientists believe that light-colored aerosols cool the Earth’s surface, while dark aerosols like soot actually warm the atmosphere. The increase in global temperature in the last century is lower than many scientists predicted when only taking into account increasing levels of carbon dioxide, methane, nitrous oxide, and fluorinated compounds. Some scientists believe that aerosol cooling may be the cause of this unexpectedly reduced warming. 
However, scientists do not expect that aerosols will ever play a significant role in offsetting global warming. As pollutants, aerosols typically pose a health threat, and the manufacturing or agricultural processes that produce them are subject to air-pollution control efforts. As a result, scientists do not expect aerosols to increase as fast as other greenhouse gases in the 21st century.
V UNDERSTANDING THE GREENHOUSE EFFECT
Although concern over the effect of increasing greenhouse gases is a relatively recent development, scientists have been investigating the greenhouse effect since the early 1800s. French mathematician and physicist Jean Baptiste Joseph Fourier, while exploring how heat is conducted through different materials, was the first to compare the atmosphere to a glass vessel in 1827. Fourier recognized that the air around the planet lets in sunlight, much like a glass roof. 
In the 1850s British physicist John Tyndall investigated the transmission of radiant heat through gases and vapors. Tyndall found that nitrogen and oxygen, the two most common gases in the atmosphere, had no heat-absorbing properties. He then went on to measure the absorption of infrared radiation by carbon dioxide and water vapor, publishing his findings in 1863 in a paper titled “On Radiation Through the Earth’s Atmosphere.”
Swedish chemist Svante August Arrhenius, best known for his Nobel Prize-winning work in electrochemistry, also advanced understanding of the greenhouse effect. In 1896 he calculated that doubling the natural concentrations of carbon dioxide in the atmosphere would increase global temperatures by 4 to 6 Celsius degrees (7 to 11 Fahrenheit degrees), a calculation that is not too far from today’s estimates using more sophisticated methods. Arrhenius correctly predicted that when Earth’s temperature warms, water vapor evaporation from the oceans increases. The higher concentration of water vapor in the atmosphere would then contribute to the greenhouse effect and global warming.
The predictions about carbon dioxide and its role in global warming set forth by Arrhenius were virtually ignored for over half a century, until scientists began to detect a disturbing change in atmospheric levels of carbon dioxide. In 1957 researchers at the Scripps Institution of Oceanography, based in San Diego, California, began monitoring carbon dioxide levels in the atmosphere from Hawaii’s remote Mauna Loa Observatory located 3,000 m (11,000 ft) above sea level. When the study began, carbon dioxide concentrations in the Earth’s atmosphere were 315 molecules of gas per million molecules of air (abbreviated parts per million or ppm). Each year carbon dioxide concentrations increased—to 323 ppm by 1970 and 335 ppm by 1980. By 1988 atmospheric carbon dioxide had increased to 350 ppm, an 8 percent increase in only 31 years.
As other researchers confirmed these findings, scientific interest in the accumulation of greenhouse gases and their effect on the environment slowly began to grow. In 1988 the World Meteorological Organization and the United Nations Environment Programme established the Intergovernmental Panel on Climate Change (IPCC). The IPCC was the first international collaboration of scientists to assess the scientific, technical, and socioeconomic information related to the risk of human-induced climate change. The IPCC creates periodic assessment reports on advances in scientific understanding of the causes of climate change, its potential impacts, and strategies to control greenhouse gases. The IPCC played a critical role in establishing the United Nations Framework Convention on Climate Change (UNFCCC). The UNFCCC, which provides an international policy framework for addressing climate change issues, was adopted by the United Nations General Assembly in 1992. 
Today scientists around the world monitor atmospheric greenhouse gas concentrations and create forecasts about their effects on global temperatures. Air samples from sites spread across the globe are analyzed in laboratories to determine levels of individual greenhouse gases. Sources of greenhouse gases, such as automobiles, factories, and power plants, are monitored directly to determine their emissions. Scientists gather information about climate systems and use this information to create and test computer models that simulate how climate could change in response to changing conditions on the Earth and in the atmosphere. These models act as high-tech crystal balls to project what may happen in the future as greenhouse gas levels rise. Models can only provide approximations, and some of the predictions based on these models often spark controversy within the science community. Nevertheless, the basic concept of global warming is widely accepted by most climate scientists. 
VI EFFORTS TO CONTROL GREENHOUSE GASES
Due to overwhelming scientific evidence and growing political interest, global warming is currently recognized as an important national and international issue. Since 1992 representatives from over 160 countries have met regularly to discuss how to reduce worldwide greenhouse gas emissions. 
In 1997 representatives met in Kyōto, Japan, and produced an agreement, known as the Kyōto Protocol, which requires industrialized countries to reduce their emissions by 2012 to an average of 5 percent below 1990 levels. To help countries meet this agreement cost-effectively, negotiators developed a system in which nations that have no obligations or that have successfully met their reduced emissions obligations could profit by selling or trading their extra emissions quotas to other countries that are struggling to reduce their emissions. In 2004 Russia’s cabinet approved the treaty, paving the way for it to go into effect in 2005. More than 126 countries have ratified the protocol. Australia and the United States are the only industrialized nations that have failed to support it.
(viii) Greenhouse Effect
I INTRODUCTION
Greenhouse Effect, the capacity of certain gases in the atmosphere to trap heat emitted from the Earth’s surface, thereby insulating and warming the Earth. Without the thermal blanketing of the natural greenhouse effect, the Earth’s climate would be about 33 Celsius degrees (about 59 Fahrenheit degrees) cooler—too cold for most living organisms to survive.
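The paired figures in this article convert by a simple rule worth making explicit: a temperature *difference* scales by 9/5 only, with no +32 offset (the offset applies to absolute temperatures, not intervals). A quick sketch of the conversion used throughout:

```python
def c_deg_to_f_deg(delta_c):
    # Convert a temperature *difference* from Celsius degrees to
    # Fahrenheit degrees. Intervals scale by 9/5; the +32 offset in the
    # familiar formula applies only to absolute temperatures.
    return delta_c * 9 / 5

print(round(c_deg_to_f_deg(33)))   # the 33 C-degree figure above → 59
```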
The greenhouse effect has warmed the Earth for over 4 billion years. Now scientists are growing increasingly concerned that human activities may be modifying this natural process, with potentially dangerous consequences. Since the advent of the Industrial Revolution in the 1700s, humans have devised many inventions that burn fossil fuels such as coal, oil, and natural gas. Burning these fossil fuels, as well as other activities such as clearing land for agriculture or urban settlements, releases some of the same gases that trap heat in the atmosphere, including carbon dioxide, methane, and nitrous oxide. These atmospheric gases have risen to levels higher than at any time in the last 420,000 years. As these gases build up in the atmosphere, they trap more heat near the Earth’s surface, causing Earth’s climate to become warmer than it would naturally.
Scientists call this unnatural heating effect global warming and blame it for an increase in the Earth’s surface temperature of about 0.6 Celsius degrees (about 1 Fahrenheit degree) over the last nearly 100 years. Without remedial measures, many scientists fear that global temperatures will rise 1.4 to 5.8 Celsius degrees (2.5 to 10.4 Fahrenheit degrees) by 2100. These warmer temperatures could melt parts of polar ice caps and most mountain glaciers, causing a rise in sea level of up to 1 m (40 in) within a century or two, which would flood coastal regions. Global warming could also affect weather patterns causing, among other problems, prolonged drought or increased flooding in some of the world’s leading agricultural regions.
II HOW THE GREENHOUSE EFFECT WORKS
The greenhouse effect results from the interaction between sunlight and the layer of greenhouse gases in the Earth's atmosphere that extends up to 100 km (60 mi) above Earth's surface. Sunlight is composed of a range of radiant energies known as the solar spectrum, which includes visible light, infrared light, gamma rays, X rays, and ultraviolet light. When the Sun’s radiation reaches the Earth’s atmosphere, some 25 percent of the energy is reflected back into space by clouds and other atmospheric particles. About 20 percent is absorbed in the atmosphere. For instance, gas molecules in the uppermost layers of the atmosphere absorb the Sun’s gamma rays and X rays. The Sun’s ultraviolet radiation is absorbed by the ozone layer, located 19 to 48 km (12 to 30 mi) above the Earth’s surface.
About 50 percent of the Sun’s energy, largely in the form of visible light, passes through the atmosphere to reach the Earth’s surface. Soils, plants, and oceans on the Earth’s surface absorb about 85 percent of this heat energy, while the rest is reflected back into the atmosphere—most effectively by reflective surfaces such as snow, ice, and sandy deserts. In addition, some of the Sun’s radiation that is absorbed by the Earth’s surface becomes heat energy in the form of long-wave infrared radiation, and this energy is released back into the atmosphere. 
Certain gases in the atmosphere, including water vapor, carbon dioxide, methane, and nitrous oxide, absorb this infrared radiant heat, temporarily preventing it from dispersing into space. As these atmospheric gases warm, they in turn emit infrared radiation in all directions. Some of this heat returns back to Earth to further warm the surface in what is known as the greenhouse effect, and some of this heat is eventually released to space. This heat transfer creates equilibrium between the total amount of heat that reaches the Earth from the Sun and the amount of heat that the Earth radiates out into space. This equilibrium or energy balance—the exchange of energy between the Earth’s surface, atmosphere, and space—is important to maintain a climate that can support a wide variety of life.
The heat-trapping gases in the atmosphere behave like the glass of a greenhouse. They let much of the Sun’s rays in, but keep most of that heat from directly escaping. Because of this, they are called greenhouse gases. Without these gases, heat energy absorbed and reflected from the Earth’s surface would easily radiate back out to space, leaving the planet with an inhospitable temperature close to –19°C (–2°F), instead of the present average surface temperature of 15°C (59°F).
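The roughly –19°C figure can be recovered from a simple radiative-balance estimate: with no greenhouse gases, the planet warms until the infrared it emits (by the Stefan-Boltzmann law) matches the sunlight it absorbs. A back-of-the-envelope sketch, assuming standard textbook values for solar irradiance and albedo (neither figure is given in the text):

```python
# Zero-dimensional energy balance: the surface radiates directly to
# space, and its temperature settles where absorbed sunlight equals
# emitted infrared. Assumed textbook values, not from the article:
SOLAR_CONSTANT = 1361.0   # W/m^2, sunlight arriving at Earth's distance
ALBEDO = 0.30             # fraction of sunlight reflected back to space
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W/(m^2*K^4)

# Absorbed power, averaged over the whole sphere (the factor of 4 is the
# ratio of the sphere's surface area to the disc intercepting sunlight):
absorbed = SOLAR_CONSTANT * (1 - ALBEDO) / 4

# Balance: SIGMA * T^4 = absorbed  =>  T = (absorbed / SIGMA)^(1/4)
t_kelvin = (absorbed / SIGMA) ** 0.25
print(round(t_kelvin - 273.15))   # → -19 (degrees Celsius)
```

The answer, about 255 K, is the "effective temperature" a bare Earth would settle at; the 33-degree gap to the observed 15°C average is the natural greenhouse effect described above.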
To appreciate the importance of the greenhouse gases in creating a climate that helps sustain most forms of life, compare Earth to Mars and Venus. Mars has a thin atmosphere that contains low concentrations of heat-trapping gases. As a result, Mars has a weak greenhouse effect resulting in a largely frozen surface that shows no evidence of life. In contrast, Venus has an atmosphere containing high concentrations of carbon dioxide. This heat-trapping gas prevents heat radiated from the planet’s surface from escaping into space, resulting in surface temperatures that average 462°C (864°F)—too hot to support life. 
III TYPES OF GREENHOUSE GASES
Earth’s atmosphere is primarily composed of nitrogen (78 percent) and oxygen (21 percent). These two most common atmospheric gases have chemical structures that restrict absorption of infrared energy. Only the few greenhouse gases, which make up less than 1 percent of the atmosphere, offer the Earth any insulation. Greenhouse gases occur naturally or are manufactured. The most abundant naturally occurring greenhouse gas is water vapor, followed by carbon dioxide, methane, and nitrous oxide. Human-made chemicals that act as greenhouse gases include chlorofluorocarbons (CFCs), hydrochlorofluorocarbons (HCFCs), and hydrofluorocarbons (HFCs).
Since the 1700s, human activities have substantially increased the levels of greenhouse gases in the atmosphere. Scientists are concerned that expected increases in the concentrations of greenhouse gases will powerfully enhance the atmosphere’s capacity to retain infrared radiation, leading to an artificial warming of the Earth’s surface.
A Water Vapor
Water vapor is the most common greenhouse gas in the atmosphere, accounting for about 60 to 70 percent of the natural greenhouse effect. Humans do not have a significant direct impact on water vapor levels in the atmosphere. However, as human activities increase the concentration of other greenhouse gases in the atmosphere (producing warmer temperatures on Earth), evaporation from oceans, lakes, and rivers, as well as from plants, increases, raising the amount of water vapor in the atmosphere.
B Carbon Dioxide
Carbon dioxide constantly circulates in the environment through a variety of natural processes known as the carbon cycle. Volcanic eruptions and the decay of plant and animal matter both release carbon dioxide into the atmosphere. In respiration, animals break down food to release the energy required to build and maintain cellular activity. A byproduct of respiration is the formation of carbon dioxide, which is exhaled from animals into the environment. Oceans, lakes, and rivers absorb carbon dioxide from the atmosphere. Through photosynthesis, plants collect carbon dioxide and use it to make their own food, in the process incorporating carbon into new plant tissue and releasing oxygen to the environment as a byproduct. 
In order to provide energy to heat buildings, power automobiles, and fuel electricity-producing power plants, humans burn objects that contain carbon, such as the fossil fuels oil, coal, and natural gas; wood or wood products; and some solid wastes. When these products are burned, they release carbon dioxide into the air. In addition, humans cut down huge tracts of trees for lumber or to clear land for farming or building. This process, known as deforestation, can both release the carbon stored in trees and significantly reduce the number of trees available to absorb carbon dioxide.
As a result of these human activities, carbon dioxide in the atmosphere is accumulating faster than the Earth’s natural processes can absorb the gas. By analyzing air bubbles trapped in glacier ice that is many centuries old, scientists have determined that carbon dioxide levels in the atmosphere have risen by 31 percent since 1750. And since carbon dioxide increases can remain in the atmosphere for centuries, scientists expect these concentrations to double or triple in the next century if current trends continue. 
C Methane
Many natural processes produce methane, also known as natural gas. Decomposition of carbon-containing substances found in oxygen-free environments, such as wastes in landfills, releases methane. Ruminating animals such as cattle and sheep belch methane into the air as a byproduct of digestion. Microorganisms that live in damp soils, such as rice fields, produce methane when they break down organic matter. Methane is also emitted during coal mining and the production and transport of other fossil fuels.
Methane has more than doubled in the atmosphere since 1750, and could double again in the next century. Atmospheric concentrations of methane are far lower than those of carbon dioxide, and methane stays in the atmosphere for only a decade or so. But scientists consider methane an extremely effective heat-trapping gas—one molecule of methane is about 20 times more efficient at trapping infrared radiation emitted from the Earth’s surface than a molecule of carbon dioxide.
D Nitrous Oxide
Nitrous oxide is released by the burning of fossil fuels, and automobile exhaust is a large source of this gas. In addition, many farmers use nitrogen-containing fertilizers to provide nutrients to their crops. When these fertilizers break down in the soil, they emit nitrous oxide into the air. Plowing fields also releases nitrous oxide. 
Since 1750 nitrous oxide has risen by 17 percent in the atmosphere. Although this increase is smaller than for the other greenhouse gases, nitrous oxide traps heat about 300 times more effectively than carbon dioxide and can stay in the atmosphere for a century. 
E Fluorinated Compounds
Some of the most potent greenhouse gases emitted are produced solely by human activities. Fluorinated compounds, including CFCs, HCFCs, and HFCs, are used in a variety of manufacturing processes. For each of these synthetic compounds, one molecule is several thousand times more effective in trapping heat than a single molecule of carbon dioxide. 
CFCs, first synthesized in 1928, were widely used as propellants in aerosol sprays, as blowing agents for foams and packing materials, as solvents, and as refrigerants. Nontoxic and safe to use in most applications, CFCs are harmless in the lower atmosphere. However, in the upper atmosphere, ultraviolet radiation breaks down CFCs, releasing chlorine into the atmosphere. In the mid-1970s, scientists began observing that higher concentrations of chlorine were destroying the ozone layer in the upper atmosphere. Ozone protects the Earth from harmful ultraviolet radiation, which can cause cancer and other damage to plants and animals. Beginning in 1987 with the Montréal Protocol on Substances that Deplete the Ozone Layer, representatives from 47 countries established control measures that limited the consumption of CFCs. By 1992 the Montréal Protocol was amended to completely ban the manufacture and use of CFCs worldwide, except in certain developing countries and for use in special medical processes such as asthma inhalers.
Scientists devised substitutes for CFCs, developing HCFCs and HFCs. Since HCFCs still release ozone-destroying chlorine in the atmosphere, production of these chemicals will be phased out by the year 2030, giving scientists time to develop a new generation of safer, effective chemicals. HFCs, which contain no chlorine and remain in the atmosphere for only a short time, are now considered the most effective and safest substitute for CFCs.
F Other Synthetic Chemicals
Experts are concerned about other industrial chemicals that may have heat-trapping abilities. In 2000 scientists observed rising concentrations of a previously unreported compound called trifluoromethyl sulphur pentafluoride. Although present in extremely low concentrations in the environment, the gas still poses a significant threat because, molecule for molecule, it traps heat more effectively than any other known greenhouse gas. The gas is undisputedly a product of industrial processes, but its exact sources remain uncertain.
IV OTHER FACTORS AFFECTING THE GREENHOUSE EFFECT
Aerosols, also known as particulates, are airborne particles that absorb, scatter, and reflect radiation back into space. Clouds, windblown dust, and particles that can be traced to erupting volcanoes are examples of natural aerosols. Human activities, including the burning of fossil fuels and slash-and-burn farming techniques used to clear forestland, contribute additional aerosols to the atmosphere. Although aerosols are not considered heat-trapping greenhouse gases, they do affect the transfer of heat energy radiated from the Earth to space. The effect of aerosols on climate change is still debated, but scientists believe that light-colored aerosols cool the Earth’s surface, while dark aerosols like soot actually warm the atmosphere. The increase in global temperature in the last century is lower than many scientists predicted when only taking into account increasing levels of carbon dioxide, methane, nitrous oxide, and fluorinated compounds. Some scientists believe that aerosol cooling may be the cause of this unexpectedly reduced warming.
However, scientists do not expect that aerosols will ever play a significant role in offsetting global warming. As pollutants, aerosols typically pose a health threat, and the manufacturing or agricultural processes that produce them are subject to air-pollution control efforts. As a result, scientists do not expect aerosols to increase as fast as other greenhouse gases in the 21st century.
V UNDERSTANDING THE GREENHOUSE EFFECT
Although concern over the effect of increasing greenhouse gases is a relatively recent development, scientists have been investigating the greenhouse effect since the early 1800s. French mathematician and physicist Jean Baptiste Joseph Fourier, while exploring how heat is conducted through different materials, was the first to compare the atmosphere to a glass vessel in 1827. Fourier recognized that the air around the planet lets in sunlight, much like a glass roof. 
In the 1850s British physicist John Tyndall investigated the transmission of radiant heat through gases and vapors. Tyndall found that nitrogen and oxygen, the two most common gases in the atmosphere, had no heat-absorbing properties. He then went on to measure the absorption of infrared radiation by carbon dioxide and water vapor, publishing his findings in 1863 in a paper titled “On Radiation Through the Earth’s Atmosphere.”
Swedish chemist Svante August Arrhenius, best known for his Nobel Prize-winning work in electrochemistry, also advanced understanding of the greenhouse effect. In 1896 he calculated that doubling the natural concentrations of carbon dioxide in the atmosphere would increase global temperatures by 4 to 6 Celsius degrees (7 to 11 Fahrenheit degrees), a calculation that is not too far from today’s estimates using more sophisticated methods. Arrhenius correctly predicted that when Earth’s temperature warms, water vapor evaporation from the oceans increases. The higher concentration of water vapor in the atmosphere would then contribute to the greenhouse effect and global warming.
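Arrhenius's result survives in the modern rule of thumb that equilibrium warming grows with the logarithm of the CO2 concentration, so each doubling adds roughly the same temperature increment. A sketch of that relation; the 3 Celsius-degree sensitivity per doubling used here is an illustrative assumption within the range of modern estimates, not a figure from the text:

```python
import math

def warming_from_co2(c_ppm, c0_ppm=280.0, per_doubling=3.0):
    # Equilibrium warming (Celsius degrees) for a rise in CO2 from the
    # preindustrial level c0_ppm to c_ppm. The logarithmic dependence is
    # the relation Arrhenius identified; per_doubling is an assumed
    # sensitivity, chosen here only for illustration.
    return per_doubling * math.log2(c_ppm / c0_ppm)

print(warming_from_co2(560))            # one doubling → 3.0
print(round(warming_from_co2(350), 2))  # level reached by 1988 → 0.97
```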
The predictions about carbon dioxide and its role in global warming set forth by Arrhenius were virtually ignored for over half a century, until scientists began to detect a disturbing change in atmospheric levels of carbon dioxide. In 1957 researchers at the Scripps Institution of Oceanography, based in San Diego, California, began monitoring carbon dioxide levels in the atmosphere from Hawaii’s remote Mauna Loa Observatory located about 3,400 m (11,000 ft) above sea level. When the study began, carbon dioxide concentrations in the Earth’s atmosphere were 315 molecules of gas per million molecules of air (abbreviated parts per million or ppm). Each year carbon dioxide concentrations increased—to 323 ppm by 1970 and 335 ppm by 1980. By 1988 atmospheric carbon dioxide had increased to 350 ppm, an 11 percent increase in only 31 years.
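The Mauna Loa readings quoted above also show the rise accelerating; a quick check using only the figures in the text makes this plain:

```python
# Year, CO2 concentration (ppm) pairs quoted in the text.
readings = [(1957, 315), (1970, 323), (1980, 335), (1988, 350)]

# Average rise per year in each interval; the rate roughly triples
# between the earliest interval and the latest.
for (y0, c0), (y1, c1) in zip(readings, readings[1:]):
    print(f"{y0}-{y1}: {(c1 - c0) / (y1 - y0):.2f} ppm per year")
```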
As other researchers confirmed these findings, scientific interest in the accumulation of greenhouse gases and their effect on the environment slowly began to grow. In 1988 the World Meteorological Organization and the United Nations Environment Programme established the Intergovernmental Panel on Climate Change (IPCC). The IPCC was the first international collaboration of scientists to assess the scientific, technical, and socioeconomic information related to the risk of human-induced climate change. The IPCC creates periodic assessment reports on advances in scientific understanding of the causes of climate change, its potential impacts, and strategies to control greenhouse gases. The IPCC played a critical role in establishing the United Nations Framework Convention on Climate Change (UNFCCC). The UNFCCC, which provides an international policy framework for addressing climate change issues, was adopted by the United Nations General Assembly in 1992. 
Today scientists around the world monitor atmospheric greenhouse gas concentrations and create forecasts about their effects on global temperatures. Air samples from sites spread across the globe are analyzed in laboratories to determine levels of individual greenhouse gases. Sources of greenhouse gases, such as automobiles, factories, and power plants, are monitored directly to determine their emissions. Scientists gather information about climate systems and use this information to create and test computer models that simulate how climate could change in response to changing conditions on the Earth and in the atmosphere. These models act as high-tech crystal balls to project what may happen in the future as greenhouse gas levels rise. Models can provide only approximations, and some predictions based on them spark controversy within the scientific community. Nevertheless, the basic concept of global warming is accepted by most climate scientists.
VI EFFORTS TO CONTROL GREENHOUSE GASES
Due to overwhelming scientific evidence and growing political interest, global warming is currently recognized as an important national and international issue. Since 1992 representatives from over 160 countries have met regularly to discuss how to reduce worldwide greenhouse gas emissions. 
In 1997 representatives met in Kyōto, Japan, and produced an agreement, known as the Kyōto Protocol, which requires industrialized countries to reduce their emissions by 2012 to an average of 5 percent below 1990 levels. To help countries meet this agreement cost-effectively, negotiators developed a system in which nations that have no obligations or that have successfully met their reduced emissions obligations could profit by selling or trading their extra emissions quotas to other countries that are struggling to reduce their emissions. In 2004 Russia’s cabinet approved the treaty, paving the way for it to go into effect in 2005. More than 126 countries have ratified the protocol. Australia and the United States are the only industrialized nations that have failed to support it.
(ix) Pasteurization
Pasteurization, process of heating a liquid, particularly milk, to a temperature between 55° and 70° C (131° and 158° F), to destroy harmful bacteria without materially changing the composition, flavor, or nutritive value of the liquid. The process is named after the French chemist Louis Pasteur, who devised it in 1865 to inhibit fermentation of wine and milk. Milk is pasteurized by heating at a temperature of 63° C (145° F) for 30 minutes, rapidly cooling it, and then storing it at a temperature below 10° C (50° F). Beer and wine are pasteurized by being heated at about 60° C (140° F) for about 20 minutes; a newer method involves heating at 70° C (158° F) for about 30 seconds and filling the container under sterile conditions.
(x) Immunization
I INTRODUCTION
Immunization, also called vaccination or inoculation, a method of stimulating resistance in the human body to specific diseases using microorganisms—bacteria or viruses—that have been modified or killed. These treated microorganisms do not cause the disease, but rather trigger the body's immune system to build a defense mechanism that continuously guards against the disease. If a person immunized against a particular disease later comes into contact with the disease-causing agent, the immune system is immediately able to respond defensively.
Immunization has dramatically reduced the incidence of a number of deadly diseases. For example, a worldwide vaccination program resulted in the global eradication of smallpox in 1980, and in most developed countries immunization has essentially eliminated diphtheria, poliomyelitis, and neonatal tetanus. The number of cases of Haemophilus influenzae type b meningitis in the United States has dropped 95 percent among infants and children since 1988, when the vaccine for that disease was first introduced. In the United States, more than 90 percent of children receive all the recommended vaccinations by their second birthday. About 85 percent of Canadian children are immunized by age two. 
II TYPES OF IMMUNIZATION
Scientists have developed two approaches to immunization: active immunization, which provides long-lasting immunity, and passive immunization, which gives temporary immunity. In active immunization, all or part of a disease-causing microorganism or a modified product of that microorganism is injected into the body to make the immune system respond defensively. Passive immunization is accomplished by injecting antibodies obtained from the blood of an actively immunized human being or animal.
A Active Immunization
Vaccines that provide active immunization are made in a variety of ways, depending on the type of disease and the organism that causes it. The active components of the vaccinations are antigens, substances found in the disease-causing organism that the immune system recognizes as foreign. In response to the antigen, the immune system develops either antibodies or white blood cells called T lymphocytes, which are special attacker cells. Immunization mimics real infection but presents little or no risk to the recipient. Some immunizing agents provide complete protection against a disease for life. Other agents provide partial protection, meaning that the immunized person can contract the disease, but in a less severe form. Vaccines made from live organisms are usually considered risky for people who have a damaged immune system, such as those infected with the virus that causes acquired immunodeficiency syndrome (AIDS) or those receiving chemotherapy for cancer or organ transplantation. Without a healthy defense system to fight infection, these people may develop the disease that the vaccine is trying to prevent. Some immunizing agents require repeated inoculations—or booster shots—at specific intervals. Tetanus shots, for example, are recommended every ten years throughout life.
In order to make a vaccine that confers active immunization, scientists use an organism or part of one that has been modified so that it has a low risk of causing illness but still triggers the body’s immune defenses against disease. One type of vaccine contains live organisms that have been attenuated—that is, their virulence has been weakened. This procedure is used to protect against yellow fever, measles, smallpox, and many other viral diseases. Immunization can also occur when a person receives an injection of killed or inactivated organisms that are relatively harmless but that still contain antigens. This type of vaccination is used to protect against diseases such as poliomyelitis, typhoid fever, and influenza.
Some vaccines use only parts of an infectious organism that contain antigens, such as a protein cell wall or a flagellum. Known as acellular vaccines, they produce the desired immunity with a lower risk of producing potentially harmful immune reactions that may result from exposure to other parts of the organism. Acellular vaccines include the Haemophilus influenzae type B vaccine for meningitis and newer versions of the whooping cough vaccine. Scientists use genetic engineering techniques to refine this approach further by isolating a gene or genes within an infectious organism that code for a particular antigen. The subunit vaccines produced by this method cannot cause disease and are safe to use in people who have an impaired immune system. Subunit vaccines for hepatitis B and pneumococcus infection, which causes pneumonia, became available in the late 1990s.
Active immunization can also be carried out using bacterial toxins that have been treated with chemicals so that they are no longer toxic, even though their antigens remain intact. This procedure uses the toxins produced by genetically engineered bacteria rather than the organism itself and is used in vaccinating against tetanus, botulism, and similar toxic diseases.
B Passive Immunization
Passive immunization is performed without injecting any antigen. In this method, vaccines contain antibodies obtained from the blood of an actively immunized human being or animal. The antibodies last for two to three weeks, and during that time the person is protected against the disease. Although short-lived, passive immunization provides immediate protection, unlike active immunization, which can take weeks to develop. Consequently, passive immunization can be lifesaving when a person has been infected with a deadly organism.
Occasionally there are complications associated with passive immunization. Diseases such as botulism and rabies once posed a particular problem. Immune globulin (antibody-containing plasma) for these diseases was once derived from the blood serum of horses. Although this animal material was specially treated before administration to humans, serious allergic reactions were common. Today, human-derived immune globulin is more widely available and the risk of side effects is reduced.
III IMMUNIZATION RECOMMENDATIONS
More than 50 vaccines for preventable diseases are licensed in the United States. The American Academy of Pediatrics and the U.S. Public Health Service recommend a series of immunizations beginning at birth. The initial series for children is complete by the time they reach the age of two, but booster vaccines are required for certain diseases, such as diphtheria and tetanus, in order to maintain adequate protection. When new vaccines are introduced, it is uncertain how long full protection will last. Recently, for example, it was discovered that a single injection of measles vaccine, first licensed in 1963 and administered to children at the age of 15 months, did not confer protection through adolescence and young adulthood. As a result, in the 1980s a series of measles epidemics occurred on college campuses throughout the United States among students who had been vaccinated as infants. To forestall future epidemics, health authorities now recommend that a booster dose of the measles, mumps, and rubella (also known as German measles) vaccine be administered at the time a child first enters school.
Not only children but also adults can benefit from immunization. Many adults in the United States are not sufficiently protected against tetanus, diphtheria, measles, mumps, and German measles. Health authorities recommend that most adults 65 years of age and older, and those with respiratory illnesses, be immunized against influenza (yearly) and pneumococcus (once).
IV HISTORY OF IMMUNIZATION
The use of immunization to prevent disease predated the knowledge of both infection and immunology. In China in approximately 600 BC, smallpox material was inoculated through the nostrils. Inoculation of healthy people with a tiny amount of material from smallpox sores was first attempted in England in 1718 and later in America. Those who survived the inoculation became immune to smallpox. American statesman Thomas Jefferson traveled from his home in Virginia to Philadelphia, Pennsylvania, to undergo this risky procedure.
A significant breakthrough came in 1796 when British physician Edward Jenner discovered that he could immunize patients against smallpox by inoculating them with material from cowpox sores. Cowpox is a far milder disease that, unlike smallpox, carries little risk of death or disfigurement. Jenner inserted matter from cowpox sores into cuts he made on the arm of a healthy eight-year-old boy. The boy caught cowpox. However, when Jenner exposed the boy to smallpox eight weeks later, the child did not contract the disease. The vaccination with cowpox had made him immune to the smallpox virus. Today we know that the cowpox virus antigens are so similar to those of the smallpox virus that they trigger the body's defenses against both diseases.
In 1885 Louis Pasteur created the first successful vaccine against rabies for a young boy who had been bitten 14 times by a rabid dog. Over the course of ten days, Pasteur injected progressively more virulent rabies organisms into the boy, causing the boy to develop immunity in time to avert death from this disease.
Another major milestone in the use of vaccination to prevent disease occurred with the efforts of two American physician-researchers. In 1954 Jonas Salk introduced an injectable vaccine containing an inactivated virus to counter the epidemic of poliomyelitis. Subsequently, Albert Sabin made great strides in the fight against this paralyzing disease by developing an oral vaccine containing a live weakened virus. Since the introduction of the polio vaccine, the disease has been nearly eliminated in many parts of the world.
As more vaccines are developed, a new generation of combined vaccines is becoming available that will allow physicians to administer a single shot for multiple diseases. Work is also under way to develop additional orally administered vaccines and vaccines for sexually transmitted infections. Possible future vaccines may include, for example, one that would temporarily prevent pregnancy. Such a vaccine would still operate by stimulating the immune system to recognize and attack antigens, but in this case the antigens would be those of the hormones that are necessary for pregnancy.
Posted by Dilrauf, Sunday, December 30, 2007
PAPER 2002

Q.1 Write short notes on any two of the following: 5 marks each
(a) Acid Rain (b) Pesticides (c) Endocrine System

(a) Acid Rain

I INTRODUCTION
Acid Rain, form of air pollution in which airborne acids produced by electric utility plants and other sources fall to Earth in distant regions. The corrosive nature of acid rain causes widespread damage to the environment. The problem begins with the production of sulfur dioxide and nitrogen oxides from the burning of fossil fuels, such as coal, natural gas, and oil, and from certain kinds of manufacturing. Sulfur dioxide and nitrogen oxides react with water and other chemicals in the air to form sulfuric acid, nitric acid, and other pollutants. These acid pollutants reach high into the atmosphere, travel with the wind for hundreds of miles, and eventually return to the ground by way of rain, snow, or fog, and as invisible “dry” forms.

Damage from acid rain has been widespread in eastern North America and throughout Europe, and in Japan, China, and Southeast Asia. Acid rain leaches nutrients from soils, slows the growth of trees, and makes lakes uninhabitable for fish and other wildlife. In cities, acid pollutants corrode almost everything they touch, accelerating natural wear and tear on structures such as buildings and statues. Acids combine with other chemicals to form urban smog, which attacks the lungs, causing illness and premature deaths.

II FORMATION OF ACID RAIN
The process that leads to acid rain begins with the burning of fossil fuels. Burning, or combustion, is a chemical reaction in which oxygen from the air combines with carbon, nitrogen, sulfur, and other elements in the substance being burned. The new compounds formed are gases called oxides. When sulfur and nitrogen are present in the fuel, their reaction with oxygen yields sulfur dioxide and various nitrogen oxide compounds. In the United States, 70 percent of sulfur dioxide pollution comes from power plants, especially those that burn coal. In Canada, industrial activities, including oil refining and metal smelting, account for 61 percent of sulfur dioxide pollution. Nitrogen oxides enter the atmosphere from many sources, with motor vehicles emitting the largest share—43 percent in the United States and 60 percent in Canada.

Once in the atmosphere, sulfur dioxide and nitrogen oxides undergo complex reactions with water vapor and other chemicals to yield sulfuric acid, nitric acid, and other pollutants called nitrates and sulfates. The acid compounds are carried by air currents and the wind, sometimes over long distances. When clouds or fog form in acid-laden air, they too are acidic, and so is the rain or snow that falls from them. 

Acid pollutants also occur as dry particles and as gases, which may reach the ground without the help of water. When these “dry” acids are washed from ground surfaces by rain, they add to the acids in the rain itself to produce a still more corrosive solution. The combination of acid rain and dry acids is known as acid deposition.

III EFFECTS OF ACID RAIN
The acids in acid rain react chemically with any object they contact. Acids are corrosive chemicals that react with other chemicals by giving up hydrogen ions. The acidity of a substance comes from the abundance of free hydrogen ions when the substance is dissolved in water. Acidity is measured on a pH scale that runs from 0 to 14. Acidic substances have pH values below 7; the lower the pH, the stronger, or more corrosive, the substance. Some nonacidic substances, called bases or alkalis, are like acids in reverse: they readily accept the hydrogen ions that acids give up. Bases have pH values above 7, with higher values indicating increased alkalinity. Pure water has a neutral pH of 7; it is neither acidic nor basic. Rain, snow, or fog with a pH below 5.6 is considered acid rain.
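A point worth making explicit is that the pH scale is logarithmic: each whole unit corresponds to a tenfold change in hydrogen-ion concentration. A minimal Python sketch; the relation pH = -log10 of the hydrogen-ion concentration is standard chemistry, not stated in the passage.

```python
import math

def ph(hydrogen_ion_molar):
    """pH from hydrogen-ion concentration in moles per liter."""
    return -math.log10(hydrogen_ion_molar)

# Pure water: [H+] = 1e-7 mol/L, giving the neutral pH of 7.
neutral = ph(1e-7)

# The scale is logarithmic: each step of one pH unit is a tenfold
# change in hydrogen-ion concentration, so rain at pH 4.6 carries
# roughly ten times the acidity of "clean" rain at pH 5.6.
ratio = 10 ** (5.6 - 4.6)
```

This is why a lake at pH 5 is described later in this answer as "at least ten times more acidic" than a healthy lake near pH 6.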

When bases mix with acids, the bases lessen the strength of an acid (see Acids and Bases). This buffering action regularly occurs in nature. Rain, snow, and fog formed in regions free of acid pollutants are slightly acidic, having a pH near 5.6. Alkaline chemicals in the environment, found in rocks, soils, lakes, and streams, regularly neutralize this precipitation. But when precipitation is highly acidic, with a pH below 5.6, naturally occurring acid buffers become depleted over time, and nature’s ability to neutralize the acids is impaired. Acid rain has been linked to widespread environmental damage, including soil and plant degradation, depleted life in lakes and streams, and erosion of human-made structures.

A Soil
In soil, acid rain dissolves and washes away nutrients needed by plants. It can also dissolve toxic substances, such as aluminum and mercury, which are naturally present in some soils, freeing these toxins to pollute water or to poison plants that absorb them. Some soils are quite alkaline and can neutralize acid deposition indefinitely; others, especially thin mountain soils derived from granite or gneiss, buffer acid only briefly.

B Trees
By removing useful nutrients from the soil, acid rain slows the growth of plants, especially trees. It also attacks trees more directly by eating holes in the waxy coating of leaves and needles, causing brown dead spots. If many such spots form, a tree loses some of its ability to make food through photosynthesis. Also, organisms that cause disease can infect the tree through its injured leaves. Once weakened, trees are more vulnerable to other stresses, such as insect infestations, drought, and cold temperatures. 

Spruce and fir forests at higher elevations, where the trees literally touch the acid clouds, seem to be most at risk. Acid rain has been blamed for the decline of spruce forests on the highest ridges of the Appalachian Mountains in the eastern United States. In the Black Forest of southwestern Germany, half of the trees are damaged from acid rain and other forms of pollution.

C Agriculture
Most farm crops are less affected by acid rain than are forests. The deep soils of many farm regions, such as those in the Midwestern United States, can absorb and neutralize large amounts of acid. Mountain farms are more at risk—the thin soils in these higher elevations cannot neutralize so much acid. Farmers can prevent acid rain damage by monitoring the condition of the soil and, when necessary, adding crushed limestone to the soil to neutralize acid. If excessive amounts of nutrients have been leached out of the soil, farmers can replace them by adding nutrient-rich fertilizer.

D Surface Waters
Acid rain falls into and drains into streams, lakes, and marshes. Where there is snow cover in winter, local waters grow suddenly more acidic when the snow melts in the spring. Most natural waters are close to chemically neutral, neither acidic nor alkaline: their pH is between 6 and 8. In the northeastern United States and southeastern Canada, the water in some lakes now has a pH value of less than 5 as a result of acid rain. This means they are at least ten times more acidic than they should be. In the Adirondack Mountains of New York State, a quarter of the lakes and ponds are acidic, and many have lost their brook trout and other fish. In the middle Appalachian Mountains, over 1,300 streams are afflicted. All of Norway’s major rivers have been damaged by acid rain, severely reducing salmon and trout populations. 

E Plants and Animals
The effects of acid rain on wildlife can be far-reaching. If a population of one plant or animal is adversely affected by acid rain, animals that feed on that organism may also suffer. Ultimately, an entire ecosystem may become endangered. Some species that live in water are very sensitive to acidity, some less so. Freshwater clams and mayfly young, for instance, begin dying when the water pH reaches 6.0. Frogs can generally survive more acidic water, but if their supply of mayflies is destroyed by acid rain, frog populations may also decline. Fish eggs of most species stop hatching at a pH of 5.0. Below a pH of 4.5, water is nearly sterile, unable to support any wildlife. 

Land animals dependent on aquatic organisms are also affected. Scientists have found that populations of snails living in or near water polluted by acid rain are declining in some regions. In the Netherlands, songbirds are finding fewer snails to eat. The eggs these birds lay have weakened shells because the birds are receiving less calcium from snail shells.

F Human-Made Structures
Acid rain and the dry deposition of acidic particles damage buildings, statues, automobiles, and other structures made of stone, metal, or any other material exposed to weather for long periods. The corrosive damage can be expensive and, in cities with very historic buildings, tragic. Both the Parthenon in Athens, Greece, and the Taj Mahal in Agra, India, are deteriorating due to acid pollution.

G Human Health
The acidification of surface waters causes little direct harm to people. It is safe to swim in even the most acidified lakes. However, toxic substances leached from soil can pollute local water supplies. In Sweden, as many as 10,000 lakes have been polluted by mercury released from soils damaged by acid rain, and residents have been warned to avoid eating fish caught in these lakes. In the air, acids join with other chemicals to produce urban smog, which can irritate the lungs and make breathing difficult, especially for people who already have asthma, bronchitis, or other respiratory diseases. Solid particles of sulfates, a class of minerals derived from sulfur dioxide, are thought to be especially damaging to the lungs. 

H Acid Rain and Global Warming
Acid pollution has one surprising effect that may be beneficial. Sulfates in the upper atmosphere reflect some sunlight out into space, and thus tend to slow down global warming. Scientists believe that acid pollution may have delayed the onset of warming by several decades in the middle of the 20th century.

IV EFFORTS TO CONTROL ACID RAIN
Acid rain can best be curtailed by reducing the amount of sulfur dioxide and nitrogen oxides released by power plants, motorized vehicles, and factories. The simplest way to cut these emissions is to use less energy from fossil fuels. Individuals can help. Every time a consumer buys an energy-efficient appliance, adds insulation to a house, or takes a bus to work, he or she conserves energy and, as a result, fights acid rain.

Another way to cut emissions of sulfur dioxide and nitrogen oxides is by switching to cleaner-burning fuels. For instance, coal can be high or low in sulfur, and some coal contains sulfur in a form that can be washed out easily before burning. By using more of the low-sulfur or cleanable types of coal, electric utility companies and other industries can pollute less. The gasoline and diesel oil that run most motor vehicles can also be formulated to burn more cleanly, producing less nitrogen oxide pollution. Clean-burning fuels such as natural gas are being used increasingly in vehicles. Natural gas contains almost no sulfur and produces very low levels of nitrogen oxides. Unfortunately, natural gas and the less-polluting coals tend to be more expensive, placing them out of the reach of nations that are struggling economically.

Pollution can also be reduced at the moment the fuel is burned. Several new kinds of burners and boilers alter the burning process to produce less nitrogen oxides and more free nitrogen, which is harmless. Limestone or sandstone added to the combustion chamber can capture some of the sulfur released by burning coal.

Once sulfur dioxide and oxides of nitrogen have been formed, there is one more chance to keep them out of the atmosphere. In smokestacks, devices called scrubbers spray a mixture of water and powdered limestone into the waste gases (flue gases), recapturing the sulfur. Pollutants can also be removed by catalytic converters. In a converter, waste gases pass over small beads coated with metals. These metals promote chemical reactions that change harmful substances to less harmful ones. In the United States and Canada, these devices are required in cars, but they are not often used in smokestacks.

Once acid rain has occurred, a few techniques can limit environmental damage. In a process known as liming, powdered limestone can be added to water or soil to neutralize the acid dropping from the sky. In Norway and Sweden, nations much afflicted with acid rain, lakes are commonly treated this way. Rural water companies may need to lime their reservoirs so that acid does not eat away water pipes. In cities, exposed surfaces vulnerable to acid rain destruction can be coated with acid-resistant paints. Delicate objects like statues can be sheltered indoors in climate-controlled rooms.
Cleaning up sulfur dioxide and nitrogen oxides will reduce not only acid rain but also smog, which will make the air look clearer. Based on a study of the value that visitors to national parks place on clear scenic vistas, the U.S. Environmental Protection Agency thinks that improving the vistas in eastern national parks alone will be worth $1 billion in tourist revenue a year.

A National Legislation
In the United States, legislative efforts to control sulfur dioxide and nitrogen oxides began with passage of the Clean Air Act of 1970. This act established emissions standards for pollutants from automobiles and industry. In 1990 Congress approved a set of amendments to the act that impose stricter limits on pollution emissions, particularly pollutants that cause acid rain. These amendments aim to cut the national output of sulfur dioxide from 23.5 million tons to 16 million tons by the year 2010. Although no national target is set for nitrogen oxides, the amendments require that power plants, which emit about one-third of all nitrogen oxides released to the atmosphere, reduce their emissions from 7.5 million tons to 5 million tons by 2010. These rules were applied first to selected large power plants in Eastern and Midwestern states. In the year 2000, smaller, cleaner power plants across the country came under the law.
These 1990 amendments include a novel provision for sulfur dioxide control. Each year the government gives companies permits to release a specified number of tons of sulfur dioxide. Polluters are allowed to buy and sell their emissions permits. For instance, a company can choose to reduce its sulfur dioxide emissions more than the law requires and sell its unused pollution emission allowance to another company that is further from meeting emission goals; the buyer may then pollute above the limit for a certain time. Unused pollution rights can also be "banked" and kept for later use. It is hoped that this flexible market system will clean up emissions more quickly and cheaply than a set of rigid rules.
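The allowance-trading mechanism described above can be illustrated with a toy model. All company names and figures here are hypothetical, chosen only to show how a surplus at one firm offsets a shortfall at another:

```python
# Toy model of tradable sulfur dioxide allowances.
# Each firm holds permits for a set tonnage of emissions; a firm that
# emits less than its allowance can sell the surplus to one that emits
# more, or "bank" it for later. All names and numbers are hypothetical.

def permit_balance(firms):
    """Surplus (+) or shortfall (-) of permits, per firm."""
    return {name: permits - emitted
            for name, (permits, emitted) in firms.items()}

firms = {
    "CleanCo": (100, 60),   # emitted 60 of 100 tons: 40 permits spare
    "SmokyCo": (100, 130),  # emitted 130 of 100 tons: 30 permits short
}

balance = permit_balance(firms)
# SmokyCo must buy at least 30 permits; CleanCo can sell up to 40.
traded = min(balance["CleanCo"], -balance["SmokyCo"])
```

The design choice behind such a market is that the cheapest reductions happen first, wherever they are, rather than every plant being forced to meet an identical rigid target.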
Legislation enacted in Canada restricts the annual amount of sulfur dioxide emissions to 2.3 million tons in all of Canada’s seven easternmost provinces, where acid rain causes the most damage. A national cap for sulfur dioxide emissions has been set at 3.2 million tons per year. Legislation is currently being developed to enforce stricter pollution emissions by 2010.
Norwegian law sets the goal of reducing sulfur dioxide emission to 76 percent of 1980 levels and nitrogen oxides emissions to 70 percent of the 1986 levels. To encourage cleanup, Norway collects a hefty tax from industries that emit acid pollutants. In some cases these taxes make it more expensive to emit acid pollutants than to reduce emissions.

B International Agreements
Acid rain typically crosses national borders, making pollution control an international issue. Canada receives much of its acid pollution from the United States—by some estimates as much as 50 percent. Norway and Sweden receive acid pollutants from Britain, Germany, Poland, and Russia. The majority of acid pollution in Japan comes from China. Debates about responsibilities and cleanup costs for acid pollutants led to international cooperation. In 1988, as part of the Long-Range Transboundary Air Pollution Agreement sponsored by the United Nations, the United States and 24 other nations ratified a protocol promising to hold yearly nitrogen oxide emissions at or below 1987 levels. In 1991 the United States and Canada signed an Air Quality Agreement setting national limits on annual sulfur dioxide emissions from power plants and factories. In 1994 in Oslo, Norway, 12 European nations agreed to reduce sulfur dioxide emissions by as much as 87 percent by 2010.

Legislative actions to prevent acid rain have results. The targets established in laws and treaties are being met, usually ahead of schedule. Sulfur emissions in Europe decreased by 40 percent from 1980 to 1994. In Norway sulfur dioxide emissions fell by 75 percent during the same period. Since 1980 annual sulfur dioxide emissions in the United States have dropped from 26 million tons to 18.3 million tons. Canada reports sulfur dioxide emissions have been reduced to 2.6 million tons, 18 percent below the proposed limit of 3.2 million tons. 
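The reductions quoted above can be checked with simple percentage arithmetic; the tonnage figures below are taken directly from the paragraph:

```python
def percent_decrease(old, new):
    """Percentage fall from an old value to a new one."""
    return (old - new) / old * 100

# United States: 26 down to 18.3 million tons of sulfur dioxide,
# which works out to just under a 30 percent cut since 1980.
us_cut = percent_decrease(26, 18.3)

# Canada: 2.6 million tons against the 3.2-million-ton cap, i.e.
# emissions sit below the limit by roughly 19 percent, in line
# with the "18 percent below" figure quoted above.
canada_margin = percent_decrease(3.2, 2.6)
```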

Monitoring stations in several nations report that precipitation is actually becoming less acidic. In Europe, lakes and streams are now growing less acidic. However, this does not seem to be the case in the United States and Canada. The reasons are not completely understood, but apparently controls reducing nitrogen oxide emissions began only recently, and their effects have yet to register. In addition, soils in some areas have absorbed so much acid that they contain no more neutralizing alkaline chemicals. The weathering of rock will gradually replace the missing alkaline chemicals, but scientists fear that improvement will be very slow unless pollution controls are made even stricter.

(b) Pesticides
(c) Endocrine System


Q.2 Differentiate between any five of the following pairs:

(a) Rotation and revolution of Earth
As Earth revolves around the Sun, it rotates, or spins, on its axis, an imaginary line that runs between the North and South poles. The period of one complete rotation is defined as a day and takes 23 hr 56 min 4.1 sec. The period of one revolution around the Sun is defined as a year, or 365.2422 solar days, or 365 days 5 hr 48 min 46 sec. Earth also moves along with the Milky Way Galaxy as the Galaxy rotates and moves through space. It takes more than 200 million years for the stars in the Milky Way to complete one revolution around the Galaxy’s center.
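The two periods quoted above differ in an instructive way: counted against the stars, Earth turns one extra time per year, because one apparent rotation is absorbed by the trip around the Sun. A quick sketch using only the figures given in the paragraph:

```python
# One rotation (a sidereal day): 23 hr 56 min 4.1 sec, per the text.
sidereal_day_s = 23 * 3600 + 56 * 60 + 4.1   # 86,164.1 seconds
solar_day_s = 24 * 3600                      # 86,400 seconds
year_in_solar_days = 365.2422                # length of the year

# Number of actual rotations completed in one year:
rotations = year_in_solar_days * solar_day_s / sidereal_day_s
# roughly 366.24, i.e. one more rotation than there are solar days
```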

Earth’s axis of rotation is inclined (tilted) 23.5° relative to its plane of revolution around the Sun. This inclination of the axis creates the seasons and causes the height of the Sun in the sky at noon to increase and decrease as the seasons change. The Northern Hemisphere receives the most energy from the Sun when it is tilted toward the Sun. This orientation corresponds to summer in the Northern Hemisphere and winter in the Southern Hemisphere. The Southern Hemisphere receives maximum energy when it is tilted toward the Sun, corresponding to summer in the Southern Hemisphere and winter in the Northern Hemisphere. Fall and spring occur in between these orientations.

(b) Monocot and dicot plants

Dicots
Dicots, popular name for dicotyledons, one of the two large groups of flowering plants. A number of floral and vegetative features of dicots distinguish them from the more recently evolved monocotyledons (see Monocots), the other class of flowering plants. In dicots the embryo sprouts two cotyledons, which are seed leaves that usually do not become foliage leaves but serve to provide food for the new seedling. 

Flower parts of dicots are in fours or fives, and the leaves usually have veins arranged in a reticulate (netlike) pattern. The vascular tissue in the stems is arranged in a ring, and true secondary growth takes place, causing stems and roots to increase in diameter. Tree forms are common. Certain woody dicot groups (see Magnolia) exhibit characteristics such as large flowers with many unfused parts that are thought to be similar to those of early flowering plants. About 170,000 species of dicots are known, including buttercups, maples, roses, and violets.

Scientific classification: Dicots make up the class Magnoliopsida, in the phylum Magnoliophyta.

Monocots
Monocots, more properly monocotyledons, one of two classes of flowering plants (see Angiosperm). They are mostly herbaceous and include such familiar plants as iris, lily, orchid, grass, and palm. Several floral and vegetative features distinguish them from dicots, the other angiosperm class. These features include flower parts in threes; one cotyledon (seed leaf); leaf veins that are usually parallel; vascular tissue in scattered bundles in the stem; and no true secondary growth.

Monocots are thought to have evolved from some early aquatic group of dicots through reduction of various flower and vegetative parts. Among living monocot groups, one order (see Water Plantain) contains the most primitive monocots. About 50,000 species of monocots are known—about one-third the number of dicot species.

Scientific classification: Monocots make up the class Liliopsida of the phylum Magnoliophyta. The most primitive living monocots belong to the order Alismatales.

(d) Umbra and penumbra

Penumbra
1. Partial shadow: a partial outer shadow that is lighter than the darker inner shadow (the umbra), e.g. the area between complete darkness and complete light in an eclipse.
2. Indeterminate area: an indistinct area, especially a state in which something is unclear or uncertain.
3. Periphery: the outer region or periphery of something.
4. (astronomy) Edge of a sunspot: a grayish area surrounding the dark center of a sunspot.

Umbra 
1. (physics) Complete shadow: an area of complete shadow caused by light from all points of a source being prevented from reaching the area, usually by an opaque object.
2. (astronomy) Darkest part of an eclipse shadow: the darkest portion of the shadow cast by an astronomical object during an eclipse, especially that cast on Earth during a solar eclipse.
3. (astronomy) Dark part of a sunspot: the inner, darker area of a sunspot.
The earth, lit by the sun, casts a long, conical shadow in space. At any point within that cone the light of the sun is wholly obscured. Surrounding the shadow cone, also called the umbra, is an area of partial shadow called the penumbra. The approximate mean length of the umbra is 1,379,200 km (857,000 mi); at a distance of 384,600 km (239,000 mi), the mean distance of the moon from the earth, it has a diameter of about 9170 km (about 5700 mi).
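The umbra figures above follow from similar triangles: the shadow cone tapers linearly from Earth's full diameter down to zero at its tip. A small sketch; Earth's mean diameter of about 12,742 km is an assumption here, since it is not given in the passage:

```python
# Width of Earth's umbra at the Moon's distance, by similar triangles.
earth_diameter_km = 12_742     # assumed mean diameter (not in the text)
umbra_length_km = 1_379_200    # mean length of the shadow cone (from text)
moon_distance_km = 384_600     # mean Earth-Moon distance (from text)

# The cone narrows linearly, reaching zero width at its tip, so the
# width at any distance scales with the length of cone remaining.
remaining = umbra_length_km - moon_distance_km
width_km = earth_diameter_km * remaining / umbra_length_km
# about 9,190 km, consistent with the ~9,170 km quoted above
```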

(e) Nucleus and nucleolus 

Nucleus (atomic structure)
Nucleus (atomic structure), in atomic structure, the positively charged central mass of an atom about which the orbital electrons revolve. The nucleus is composed of nucleons, that is, protons and neutrons, and its mass accounts for nearly the entire mass of the atom. 

Nucleolus
Nucleolus, structure within the nucleus of cells, involved in the manufacture of ribosomes (cell structures where protein synthesis occurs). Each cell nucleus typically contains one or more nucleoli, which appear as irregularly shaped fibers and granules embedded in the nucleus. There is no membrane separating the nucleolus from the rest of the nucleus.

The manufacture of ribosomes requires that the components of ribosomes—ribonucleic acid (RNA) and protein—be synthesized and brought together for assembly. The ribosomes of eukaryotic cells contain four strands of RNA and from 70 to 80 proteins. Using genes that reside on regions of chromosomes located in the nucleolus, three of the four ribosomal RNA strands are synthesized in the center of the nucleolus. The fourth RNA strand is synthesized outside of the nucleolus, using genes at a different location. The fourth strand is then transported into the nucleolus to participate in ribosome assembly. 

The genetic information for ribosomal proteins, found in the nucleus, is copied, or transcribed, into special chemical messengers called messenger RNA (mRNA), a different type of RNA than ribosomal RNA. The mRNA travels out of the nucleus into the cell’s cytoplasm where its information is transferred, or translated, into the ribosomal proteins. The newly created proteins enter the nucleolus and bind with the four ribosomal RNA strands to create two ribosomal structures: the large and small subunits. These two subunits leave the nucleus and enter the cytoplasm. When protein synthesis is initiated, the two subunits merge to form the completed ribosome. 

The nucleolus creates the two subunits for a single ribosome in about one hour. Thousands of subunits are manufactured by each nucleolus simultaneously, however, since several hundred to several thousand copies of the ribosomal RNA genes are present in the nucleolus. Before a cell divides, the nucleolus assembles about ten million ribosomal subunits, necessary for the large-scale protein production that occurs in cell division.

(f) Heavy water and Hard water


Q.3 Draw a labeled diagram of the human eye, indicating all essential parts, and discuss its working.

I INTRODUCTION
Eye (anatomy), light-sensitive organ of vision in animals. The eyes of various species vary from simple structures that are capable only of differentiating between light and dark to complex organs, such as those of humans and other mammals, that can distinguish minute variations of shape, color, brightness, and distance. The actual process of seeing is performed by the brain rather than by the eye. The function of the eye is to translate the electromagnetic vibrations of light into patterns of nerve impulses that are transmitted to the brain.

II THE HUMAN EYE
The entire eye, often called the eyeball, is a spherical structure approximately 2.5 cm (about 1 in) in diameter with a pronounced bulge on its forward surface. The outer part of the eye is composed of three layers of tissue. The outside layer is the sclera, a protective coating. It covers about five-sixths of the surface of the eye. At the front of the eyeball, it is continuous with the bulging, transparent cornea. The middle layer of the coating of the eye is the choroid, a vascular layer lining the posterior three-fifths of the eyeball. The choroid is continuous with the ciliary body and with the iris, which lies at the front of the eye. The innermost layer is the light-sensitive retina.

The cornea is a tough, five-layered membrane through which light is admitted to the interior of the eye. Behind the cornea is a chamber filled with clear, watery fluid, the aqueous humor, which separates the cornea from the crystalline lens. The lens itself is a flattened sphere constructed of a large number of transparent fibers arranged in layers. It is connected by ligaments to a ringlike muscle, called the ciliary muscle, which surrounds it. The ciliary muscle and its surrounding tissues form the ciliary body. This muscle, by flattening the lens or making it more nearly spherical, changes its focal length.

The pigmented iris hangs behind the cornea in front of the lens, and has a circular opening in its center. The size of its opening, the pupil, is controlled by a muscle around its edge. This muscle contracts or relaxes, making the pupil larger or smaller, to control the amount of light admitted to the eye.
Behind the lens the main body of the eye is filled with a transparent, jellylike substance, the vitreous humor, enclosed in a thin sac, the hyaloid membrane. The pressure of the vitreous humor keeps the eyeball distended.
The retina is a complex layer, composed largely of nerve cells. The light-sensitive receptor cells lie on the outer surface of the retina in front of a pigmented tissue layer. These cells take the form of rods or cones packed closely together like matches in a box. Directly behind the pupil is a small yellow-pigmented spot, the macula lutea, in the center of which is the fovea centralis, the area of greatest visual acuity of the eye. At the center of the fovea, the sensory layer is composed entirely of cone-shaped cells. Around the fovea both rod-shaped and cone-shaped cells are present, with the cone-shaped cells becoming fewer toward the periphery of the sensitive area. At the outer edges are only rod-shaped cells.

Where the optic nerve enters the eyeball, below and slightly to the inner side of the fovea, a small round area of the retina exists that has no light-sensitive cells. This optic disk forms the blind spot of the eye.

III FUNCTIONING OF THE EYE
In general the eyes of all animals resemble simple cameras in that the lens of the eye forms an inverted image of objects in front of it on the sensitive retina, which corresponds to the film in a camera.

Focusing the eye, as mentioned above, is accomplished by a flattening or thickening (rounding) of the lens. The process is known as accommodation. In the normal eye accommodation is not necessary for seeing distant objects. The lens, when flattened by the suspensory ligament, brings such objects to focus on the retina. For nearer objects the lens is increasingly rounded by ciliary muscle contraction, which relaxes the suspensory ligament. A young child can see clearly at a distance as close as 6.3 cm (2.5 in), but with increasing age the lens gradually hardens, so that the limits of close seeing are approximately 15 cm (about 6 in) at the age of 30 and 40 cm (16 in) at the age of 50. In the later years of life most people lose the ability to accommodate their eyes to distances within reading or close working range. This condition, known as presbyopia, can be corrected by the use of special convex lenses for the near range.

Structural differences in the size of the eye cause the defects of hyperopia, or farsightedness, and myopia, or nearsightedness. See Eyeglasses; Vision.

As mentioned above, the eye sees with greatest clarity only in the region of the fovea, owing to the neural structure of the retina. The cone-shaped cells of the retina are individually connected to other nerve fibers, so that stimuli to each individual cell are reproduced and, as a result, fine details can be distinguished. The rod-shaped cells, on the other hand, are connected in groups so that they respond to stimuli over a general area. The rods, therefore, respond to small total light stimuli, but do not have the ability to separate small details of the visual image. The result of these differences in structure is that the visual field of the eye is composed of a small central area of great sharpness surrounded by an area of lesser sharpness. In the latter area, however, the sensitivity of the eye to light is great. As a result, dim objects can be seen at night on the peripheral part of the retina when they are invisible to the central part.

The mechanism of seeing at night involves the sensitization of the rod cells by means of a pigment, called visual purple or rhodopsin, that is formed within the cells. Vitamin A is necessary for the production of visual purple; a deficiency of this vitamin leads to night blindness. Visual purple is bleached by the action of light and must be reformed by the rod cells under conditions of darkness. Hence a person who steps from sunlight into a darkened room cannot see until the pigment begins to form. When the pigment has formed and the eyes are sensitive to low levels of illumination, the eyes are said to be dark-adapted.

A brownish pigment present in the outer layer of the retina serves to protect the cone cells of the retina from overexposure to light. If bright light strikes the retina, granules of this brown pigment migrate to the spaces around the cone cells, sheathing and screening them from the light. This action, called light adaptation, has the opposite effect to that of dark adaptation.

Subjectively, a person is not conscious that the visual field consists of a central zone of sharpness surrounded by an area of increasing fuzziness. The reason is that the eyes are constantly moving, bringing first one part of the visual field and then another to the foveal region as the attention is shifted from one object to another. These motions are accomplished by six muscles that move the eyeball upward, downward, to the left, to the right, and obliquely. The motions of the eye muscles are extremely precise; it has been estimated that the eyes can be moved to focus on no fewer than 100,000 distinct points in the visual field. The muscles of the two eyes, working together, also serve the important function of converging the eyes on any point being observed, so that the images of the two eyes coincide. When convergence is nonexistent or faulty, double vision results. The movement of the eyes and fusion of the images also play a part in the visual estimation of size and distance.

IV PROTECTIVE STRUCTURES
Several structures, not parts of the eyeball, contribute to the protection of the eye. The most important of these are the eyelids, two folds of skin and tissue, upper and lower, that can be closed by means of muscles to form a protective covering over the eyeball against excessive light and mechanical injury. The eyelashes, a fringe of short hairs growing on the edge of either eyelid, act as a screen to keep dust particles and insects out of the eyes when the eyelids are partly closed. Inside the eyelids is a thin protective membrane, the conjunctiva, which doubles over to cover the visible sclera. Each eye also has a tear gland, or lacrimal organ, situated at the outside corner of the eye. The salty secretion of these glands lubricates the forward part of the eyeball when the eyelids are closed and flushes away any small dust particles or other foreign matter on the surface of the eye. Normally the eyelids of human eyes close by reflex action about every six seconds, but if dust reaches the surface of the eye and is not washed away, the eyelids blink oftener and more tears are produced. On the edges of the eyelids are a number of small glands, the Meibomian glands, which produce a fatty secretion that lubricates the eyelids themselves and the eyelashes. The eyebrows, located above each eye, also have a protective function in soaking up or deflecting perspiration or rain and preventing the moisture from running into the eyes. The hollow socket in the skull in which the eye is set is called the orbit. The bony edges of the orbit, the frontal bone, and the cheekbone protect the eye from mechanical injury by blows or collisions.

V COMPARATIVE ANATOMY
The simplest animal eyes occur in the cnidarians and ctenophores, phyla comprising the jellyfish and somewhat similar primitive animals. These eyes, known as pigment eyes, consist of groups of pigment cells associated with sensory cells and often covered with a thickened layer of cuticle that forms a kind of lens. Similar eyes, usually having a somewhat more complex structure, occur in worms, insects, and mollusks.

Two kinds of image-forming eyes are found in the animal world, single and compound eyes. The single eyes are essentially similar to the human eye, though varying from group to group in details of structure. The lowest species to develop such eyes are some of the large jellyfish. Compound eyes, confined to the arthropods (see Arthropod), consist of a faceted lens, each facet of which forms a separate image on a retinal cell, creating a mosaic field. In some arthropods the structure is more sophisticated, forming a combined image.

The eyes of other vertebrates are essentially similar to human eyes, although important modifications may exist. The eyes of such nocturnal animals as cats, owls, and bats are provided only with rod cells, and the cells are both more sensitive and more numerous than in humans. The eye of a dolphin has 7000 times as many rod cells as a human eye, enabling it to see in deep water. The eyes of most fish have a flat cornea and a globular lens and are hence particularly adapted for seeing close objects. Birds’ eyes are elongated from front to back, permitting larger images of distant objects to be formed on the retina.

VI EYE DISEASES
Eye disorders may be classified according to the part of the eye in which the disorders occur.

The most common disease of the eyelids is hordeolum, known commonly as a sty, which is an infection of the follicles of the eyelashes, usually caused by staphylococci. Internal sties that occur inside the eyelid and not on its edge are similar infections of the lubricating Meibomian glands. Abscesses of the eyelids are sometimes the result of penetrating wounds. Several congenital defects of the eyelids occasionally occur, including coloboma, or cleft eyelid, and ptosis, a drooping of the upper lid. Among acquired defects are symblepharon, an adhesion of the inner surface of the eyelid to the eyeball, which is most frequently the result of burns. Entropion, the turning of the eyelid inward toward the cornea, and ectropion, the turning of the eyelid outward, can be caused by scars or by spasmodic muscular contractions resulting from chronic irritation. The eyelids also are subject to several diseases of the skin such as eczema and acne, and to both benign and malignant tumors. Another eye disease is infection of the conjunctiva, the mucous membranes covering the inside of the eyelids and the outside of the eyeball. See Conjunctivitis; Trachoma.
Disorders of the cornea, which may result in a loss of transparency and impaired sight, are usually the result of injury but may also occur as a secondary result of disease; for example, edema, or swelling, of the cornea sometimes accompanies glaucoma.

The choroid, or middle coat of the eyeball, contains most of the blood vessels of the eye; it is often the site of secondary infections from toxic conditions and bacterial infections such as tuberculosis and syphilis. Cancer may develop in the choroidal tissues or may be carried to the eye from malignancies elsewhere in the body. The light-sensitive retina, which lies just beneath the choroid, also is subject to the same type of infections. The cause of retrolental fibroplasia, however—a disease of premature infants that causes retinal detachment and partial blindness—is unknown. Retinal detachment may also follow cataract surgery. Laser beams are sometimes used to weld detached retinas back onto the eye. Another retinal condition, called macular degeneration, affects the central retina. Macular degeneration is a frequent cause of loss of vision in older persons. Juvenile forms of this condition also exist.

The optic nerve contains the retinal nerve fibers, which carry visual impulses to the brain. The retinal circulation is carried by the central artery and vein, which lie in the optic nerve. The sheath of the optic nerve communicates with the cerebral lymph spaces. Inflammation of that part of the optic nerve situated within the eye is known as optic neuritis, or papillitis; when inflammation occurs in the part of the optic nerve behind the eye, the disease is called retrobulbar neuritis. When pressure within the skull is elevated (increased intracranial pressure), as in brain tumors, edema and swelling of the optic disk occur where the nerve enters the eyeball, a condition known as papilledema, or choked disk.

VII EYE BANK
Eye banks are organizations that distribute corneal tissue taken from deceased persons for eye grafts. Blindness caused by cloudiness or scarring of the cornea can sometimes be cured by surgical removal of the affected portion of the corneal tissue. With present techniques, such tissue can be kept alive for only 48 hours, but current experiments in preserving human corneas by freezing give hope of extending its useful life for months. Eye banks also preserve and distribute vitreous humor, the liquid within the larger chamber of the eye, for use in treatment of detached retinas. The first eye bank was opened in New York City in 1945. The Eye-Bank Association of America, in Rochester, New York, acts as a clearinghouse for information.

Q.5 What is the solar system? Indicate the position of the planet Pluto in it. State the characteristics that classify it as: (5,1,4)
a. a planet b. an asteroid 

I INTRODUCTION
Solar System, the Sun and everything that orbits the Sun, including the nine planets and their satellites; the asteroids and comets; and interplanetary dust and gas. The term may also refer to a group of celestial bodies orbiting another star (see Extrasolar Planets). In this article, solar system refers to the system that includes Earth and the Sun. 

The dimensions of the solar system are specified in terms of the mean distance from Earth to the Sun, called the astronomical unit (AU). One AU is 150 million km (about 93 million mi). The most distant known planet, Pluto, orbits about 39 AU from the Sun. Estimates for the boundary where the Sun’s magnetic field ends and interstellar space begins—called the heliopause—range from 86 to 100 AU.
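As a quick illustration of the arithmetic, the AU distances quoted above convert to kilometers as follows; this is a minimal sketch using the article's round figure of 150 million km per AU:

```python
# Minimal sketch: converting the AU distances quoted above to kilometers,
# using the article's round figure of 150 million km per AU.
AU_KM = 150e6  # mean Earth-Sun distance in km

def au_to_km(au):
    """Convert a distance in astronomical units to kilometers."""
    return au * AU_KM

print(au_to_km(39))   # Pluto's mean distance: 5.85 billion km
print(au_to_km(100))  # outer estimate of the heliopause: 15 billion km
```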

The most distant known planetoid orbiting the Sun is Sedna, whose discovery was reported in March 2004. A planetoid is an object that is too small to be a planet. At the farthest point in its orbit, Sedna is about 900 AU from the Sun. Comets known as long-period comets, however, achieve the greatest distance from the Sun; they have highly eccentric orbits ranging out to 50,000 AU or more.

The solar system was the only planetary system known to exist around a star similar to the Sun until 1995, when astronomers discovered a planet about 0.6 times the mass of Jupiter orbiting the star 51 Pegasi. Jupiter is the most massive planet in our solar system. Soon after, astronomers found a planet about 8.1 times the mass of Jupiter orbiting the star 70 Virginis, and a planet about 3.5 times the mass of Jupiter orbiting the star 47 Ursae Majoris. Since then, astronomers have found planets and disks of dust in the process of forming planets around many other stars. Most astronomers think it likely that solar systems of some sort are numerous throughout the universe. See Astronomy; Galaxy; Star.

II THE SUN AND THE SOLAR WIND
The Sun is a typical star of intermediate size and luminosity. Sunlight and other radiation are produced by the conversion of hydrogen into helium in the Sun’s hot, dense interior (see Nuclear Energy). Although this nuclear fusion is transforming 600 million metric tons of hydrogen each second, the Sun is so massive (2 × 10³⁰ kg, or 4.4 × 10³⁰ lb) that it can continue to shine at its present brightness for 6 billion years. This stability has allowed life to develop and survive on Earth.
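The lifetime figure can be sanity-checked with a back-of-the-envelope estimate. The 75 percent hydrogen fraction and the assumption that only about a tenth of that hydrogen, in the hot core, is ever available for fusion are rough inputs of my own, not figures from the article:

```python
# Back-of-the-envelope sketch: at 600 million metric tons (6e11 kg) of
# hydrogen fused per second, how long could the Sun shine? The 75%
# hydrogen fraction and the ~10% usable core fraction are assumed values,
# not taken from the article.
SUN_MASS_KG = 2e30
BURN_RATE_KG_PER_S = 6e11
SECONDS_PER_YEAR = 3.15e7

hydrogen_kg = 0.75 * SUN_MASS_KG   # total hydrogen content
usable_kg = 0.10 * hydrogen_kg     # rough fraction hot enough to fuse
lifetime_years = usable_kg / BURN_RATE_KG_PER_S / SECONDS_PER_YEAR
print(lifetime_years / 1e9)        # billions of years, same order as the text
```

The result lands within a factor of a few of the article's 6 billion years, which is as much agreement as such a crude estimate can claim.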

For all the Sun’s steadiness, it is an extremely active star. On its surface, dark sunspots bounded by intense magnetic fields come and go in 11-year cycles, and sudden bursts of charged particles from solar flares can cause auroras and disturb radio signals on Earth. A continuous stream of protons, electrons, and ions also leaves the Sun and moves out through the solar system. This solar wind shapes the ion tails of comets and leaves its traces in the lunar soil, samples of which were brought back from the Moon’s surface by piloted United States Apollo spacecraft.

The Sun’s activity also influences the heliopause, a region of space that astronomers believe marks the boundary between the solar system and interstellar space. The heliopause is a dynamic region that expands and contracts due to the constantly changing speed and pressure of the solar wind. In November 2003 a team of astronomers reported that the Voyager 1 spacecraft appeared to have encountered the outskirts of the heliopause at about 86 AU from the Sun. They based their report on data that indicated the solar wind had slowed from 1.1 million km/h (700,000 mph) to 160,000 km/h (100,000 mph). This finding is consistent with the theory that when the solar wind meets interstellar space at a turbulent zone known as the termination shock boundary, it will slow abruptly. However, another team of astronomers disputed the finding, saying that the spacecraft had neared but had not yet reached the heliopause.

III THE MAJOR PLANETS
Nine major planets are currently known. They are commonly divided into two groups: the inner planets (Mercury, Venus, Earth, and Mars) and the outer planets (Jupiter, Saturn, Uranus, and Neptune). The inner planets are small and are composed primarily of rock and iron. The outer planets are much larger and consist mainly of hydrogen, helium, and ice. Pluto does not belong to either group, and there is an ongoing debate as to whether Pluto should be categorized as a major planet.

Mercury is surprisingly dense, apparently because it has an unusually large iron core. With only a transient atmosphere, Mercury has a surface that still bears the record of bombardment by asteroidal bodies early in its history. Venus has a carbon dioxide atmosphere 90 times thicker than that of Earth, causing an efficient greenhouse effect by which the Venusian atmosphere is heated. The resulting surface temperature is the hottest of any planet—about 477°C (about 890°F). 

Earth is the only planet known to have abundant liquid water and life. However, in 2004 astronomers with the National Aeronautics and Space Administration’s Mars Exploration Rover mission confirmed that Mars once had liquid water on its surface. Scientists had previously concluded that liquid water once existed on Mars due to the numerous surface features on the planet that resemble water erosion found on Earth. Mars’s carbon dioxide atmosphere is now so thin that the planet is dry and cold, with polar caps of frozen water and solid carbon dioxide, or dry ice. However, small jets of subcrustal water may still erupt on the surface in some places.

Jupiter is the largest of the planets. Its hydrogen and helium atmosphere contains pastel-colored clouds, and its immense magnetosphere, rings, and satellites make it a planetary system unto itself. One of Jupiter’s largest moons, Io, has volcanoes that produce the hottest surface temperatures in the solar system. At least four of Jupiter’s moons have atmospheres, and at least three show evidence that they contain liquid or partially frozen water. Jupiter’s moon Europa may have a global ocean of liquid water beneath its icy crust. 

Saturn rivals Jupiter, with a much more intricate ring structure and a similar number of satellites. One of Saturn’s moons, Titan, has an atmosphere thicker than that of any other satellite in the solar system. Uranus and Neptune are deficient in hydrogen compared with Jupiter and Saturn; Uranus, also ringed, has the distinction of rotating at 98° to the plane of its orbit. Pluto seems similar to the larger, icy satellites of Jupiter or Saturn. Pluto is so distant from the Sun and so cold that methane freezes on its surface. See also Planetary Science.

IV OTHER ORBITING BODIES
The asteroids are small rocky bodies that move in orbits primarily between the orbits of Mars and Jupiter. Numbering in the thousands, asteroids range in size from Ceres, which has a diameter of 1,003 km (623 mi), to microscopic grains. Some asteroids are perturbed, or pulled by forces other than their attraction to the Sun, into eccentric orbits that can bring them closer to the Sun. If the orbits of such bodies intersect that of Earth, they are called meteoroids. When they appear in the night sky as streaks of light, they are known as meteors, and recovered fragments are termed meteorites. Laboratory studies of meteorites have revealed much information about primitive conditions in our solar system. The surfaces of Mercury, Mars, and several satellites of the planets (including Earth’s Moon) show the effects of an intense bombardment by asteroidal objects early in the history of the solar system. On Earth that record has eroded away, except for a few recently found impact craters.

Some meteors and interplanetary dust may also come from comets, which are basically aggregates of dust and frozen gases typically 5 to 10 km (about 3 to 6 mi) in diameter. Comets orbit the Sun at distances so great that they can be perturbed by stars into orbits that bring them into the inner solar system. As comets approach the Sun, they release their dust and gases to form a spectacular coma and tail. Under the influence of Jupiter’s strong gravitational field, comets can sometimes adopt much smaller orbits. The most famous of these is Halley’s Comet, which returns to the inner solar system about every 75 years. Its most recent return was in 1986. In July 1994 fragments of Comet Shoemaker-Levy 9 bombarded Jupiter’s dense atmosphere at speeds of about 210,000 km/h (130,000 mph). Upon impact, the tremendous kinetic energy of the fragments was released through massive explosions, some resulting in fireballs larger than Earth.

Comets circle the Sun in two main groups, within the Kuiper Belt or within the Oort cloud. The Kuiper Belt is a ring of debris that orbits the Sun beyond the planet Neptune. Many of the comets with periods of less than 500 years come from the Kuiper Belt. In 2002 astronomers discovered a planetoid in the Kuiper Belt, and they named it Quaoar. 

The Oort cloud is a hypothetical region far beyond the heliopause, perhaps extending halfway to the nearest star. Astronomers believe that the existence of the Oort cloud, named for Dutch astronomer Jan Oort, explains why some comets have very long periods. A chunk of dust and ice may stay in the Oort cloud for thousands of years. Nearby stars sometimes pass close enough to the solar system to push an object in the Oort cloud into an orbit that takes it close to the Sun. 

The first detection of the long-hypothesized Oort cloud came in March 2004 when astronomers reported the discovery of a planetoid about 1,700 km (about 1,000 mi) in diameter. They named it Sedna, after a sea goddess in Inuit mythology. Sedna was found about 13 billion km (about 8 billion mi) from the Sun. At its farthest point from the Sun, Sedna is the most distant object in the solar system and is about 130 billion km (about 84 billion mi) from the Sun.

Many of the objects that do not fall into the asteroid belts, the Kuiper Belt, or the Oort cloud may be comets that will never make it back to the Sun. The surfaces of the icy satellites of the outer planets are scarred by impacts from such bodies. The asteroid-like object Chiron, with an orbit between Saturn and Uranus, may itself be an extremely large inactive comet. Similarly, some of the asteroids that cross the path of Earth’s orbit may be the rocky remains of burned-out comets. Chiron and similar objects called the Centaurs probably escaped from the Kuiper Belt and were drawn into their irregular orbits by the gravitational pull of the giant outer planets, Jupiter, Saturn, Neptune, and Uranus. 

The Sun was also found to be encircled by rings of interplanetary dust. One of them, between Jupiter and Mars, has long been known as the cause of zodiacal light, a faint glow that appears in the east before dawn and in the west after dusk. Another ring, lying only two solar widths away from the Sun, was discovered in 1983.

V MOVEMENTS OF THE PLANETS AND THEIR SATELLITES
If one could look down on the solar system from far above the North Pole of Earth, the planets would appear to move around the Sun in a counterclockwise direction. All of the planets except Venus and Uranus rotate on their axes in this same direction. The entire system is remarkably flat—only Mercury and Pluto have obviously inclined orbits. Pluto’s orbit is so elliptical that it is sometimes closer than Neptune to the Sun.

The satellite systems mimic the behavior of their parent planets and move in a counterclockwise direction, but many exceptions are found. Jupiter, Saturn, and Neptune each have at least one satellite that moves around the planet in a retrograde orbit (clockwise instead of counterclockwise), and several satellite orbits are highly elliptical. Jupiter, moreover, has trapped two clusters of asteroids (the so-called Trojan asteroids) leading and following the planet by 60° in its orbit around the Sun. (Some satellites of Saturn have done the same with smaller bodies.) The comets exhibit a roughly spherical distribution of orbits around the Sun.

Within this maze of motions, some remarkable patterns exist: Mercury rotates on its axis three times for every two revolutions about the Sun; no asteroids exist with periods (intervals of time needed to complete one revolution) equal to 1/2, 1/3, …, 1/n (where n is an integer) of the period of Jupiter; and the three inner Galilean satellites of Jupiter have periods in the ratio 4:2:1. These and other examples demonstrate the subtle balance of forces that is established in a gravitational system composed of many bodies.
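The missing asteroid periods can be located with Kepler's third law, P² = a³ (P in years, a in AU). A minimal sketch follows; Jupiter's roughly 11.86-year period is an assumed input, not a figure from the article:

```python
# Minimal sketch of the Jupiter resonances mentioned above: Kepler's third
# law, P**2 = a**3 (P in years, a in AU), locates the orbits whose periods
# are 1/n of Jupiter's. The 11.86-year Jovian period is an assumption.
P_JUPITER_YEARS = 11.86

for n in (2, 3, 4):
    period = P_JUPITER_YEARS / n   # asteroid period in the 1/n resonance
    a = period ** (2 / 3)          # semi-major axis from Kepler's third law
    print(f"1/{n} of Jupiter's period: {period:.2f} yr at about {a:.2f} AU")
```

The resulting distances fall inside the main asteroid belt, which is why those particular orbits are conspicuously empty (the Kirkwood gaps).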

VI THEORIES OF ORIGIN
Despite their differences, the members of the solar system probably form a common family. They seem to have originated at the same time; few indications exist of bodies joining the solar system, captured later from other stars or interstellar space.

Early attempts to explain the origin of this system include the nebular hypothesis of the German philosopher Immanuel Kant and the French astronomer and mathematician Pierre Simon de Laplace, according to which a cloud of gas broke into rings that condensed to form planets. Doubts about the stability of such rings led some scientists to consider various catastrophic hypotheses, such as a close encounter of the Sun with another star. Such encounters are extremely rare, and the hot, tidally disrupted gases would dissipate rather than condense to form planets.

Current theories connect the formation of the solar system with the formation of the Sun itself, about 4.7 billion years ago. The fragmentation and gravitational collapse of an interstellar cloud of gas and dust, triggered perhaps by nearby supernova explosions, may have led to the formation of a primordial solar nebula. The Sun would then form in the densest, central region. It is so hot close to the Sun that even silicates, which are relatively dense, have difficulty forming there. This phenomenon may account for the presence near the Sun of a planet such as Mercury, having a relatively small silicate crust and a larger than usual, dense iron core. (It is easier for iron dust and vapor to coalesce near the central region of a solar nebula than it is for lighter silicates to do so.) At larger distances from the center of the solar nebula, gases condense into solids such as are found today from Jupiter outward. Evidence of a possible preformation supernova explosion appears as traces of anomalous isotopes in tiny inclusions in some meteorites. This association of planet formation with star formation suggests that billions of other stars in our galaxy may also have planets. The high frequency of binary and multiple stars, as well as the large satellite systems around Jupiter and Saturn, attest to the tendency of collapsing gas clouds to fragment into multibody systems.
See separate articles for most of the celestial bodies mentioned in this article. See also Exobiology.

Pluto (planet)
I INTRODUCTION
Pluto (planet), ninth planet from the Sun, smallest and outermost known planet of the solar system. Pluto revolves about the Sun once in 247.9 Earth years at an average distance of 5,880 million km (3,650 million mi). The planet’s orbit is so eccentric that at certain points along its path Pluto is slightly closer to the Sun than is Neptune. Pluto is about 2,360 km (1,475 mi) in diameter, about two-thirds the size of Earth's moon. Discovered in 1930, Pluto is the most recent planet in the solar system to be detected. The planet was named after the god of the underworld in Roman mythology.
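The quoted mean distance and period can be cross-checked against Kepler's third law, P² = a³ (P in years, a in AU). A minimal sketch, using the round value of 150 million km per AU:

```python
# Minimal sketch: cross-checking Pluto's quoted mean distance and period
# against Kepler's third law, P**2 = a**3 (P in years, a in AU), with the
# round conversion of 150 million km per AU.
AU_KM = 150e6
a_au = 5880e6 / AU_KM            # Pluto's mean distance in AU (~39.2)
predicted_period = a_au ** 1.5   # orbital period in years
print(predicted_period)          # ~245 years, close to the quoted 247.9
```

The one-percent discrepancy comes from the rounded distance figures, not from the law itself.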

II OBSERVATION FROM EARTH
Pluto is far away from Earth, and no spacecraft has yet been sent to the planet. All the information astronomers have on Pluto comes from observation through large telescopes. Pluto was discovered as the result of a telescopic search inaugurated in 1905 by American astronomer Percival Lowell, who postulated the existence of a distant planet beyond Neptune as the cause of slight irregularities in the orbits of Uranus and Neptune. Continued after Lowell’s death by members of the Lowell Observatory staff, the search ended successfully in 1930, when American astronomer Clyde William Tombaugh found Pluto.

For many years very little was known about the planet, but in 1978 astronomers discovered a relatively large moon orbiting Pluto at a distance of only about 19,600 km (about 12,180 mi) and named it Charon. The orbits of Pluto and Charon caused them to pass repeatedly in front of one another as seen from Earth between 1985 and 1990, enabling astronomers to determine their sizes accurately. Charon is about 1,200 km (750 mi) in diameter, making Pluto and Charon the planet-satellite pair closest in size to one another in the solar system. Scientists often call Pluto and Charon a double planet.

Every 248 years Pluto’s elliptical orbit brings it within the orbit of Neptune. Pluto last traded places with Neptune as the most distant planet in 1979 and crossed back outside Neptune’s orbit in 1999. No possibility of collision exists, however, because Pluto's orbit is inclined more than 17.2° to the plane of the ecliptic (the plane in which Earth and most of the other planets orbit the Sun) and is oriented such that it never actually crosses Neptune's path.

Pluto has a pinkish color. In 1988, astronomers discovered that Pluto has a thin atmosphere consisting of nitrogen with traces of carbon monoxide and methane. Atmospheric pressure on the planet's surface is about 100,000 times less than Earth's atmospheric pressure at sea level. Pluto’s atmosphere is believed to freeze out as snow on the planet’s surface for most of each Plutonian orbit. During the decades when Pluto is closest to the Sun, however, the snows sublimate (evaporate) and create the atmosphere that has been observed. In 1994 the Hubble Space Telescope imaged 85 percent of Pluto's surface, revealing polar caps and bright and dark areas of startling contrast. Astronomers believe that the bright areas are likely to be shifting fields of clean ice and that the dark areas are fields of dirty ice colored by interaction with sunlight. These images show that extensive ice caps form on Pluto's poles, especially when the planet is farthest from the Sun.

III ORIGIN OF PLUTO 
With a density about twice that of water, Pluto is apparently made of a much greater proportion of rocky material than are the giant planets of the outer solar system. This may be the result of the kind of chemical reactions that took place during the formation of the planet under cold temperatures and low pressure. Many astronomers think Pluto was growing rapidly to be a larger planet when Neptune’s gravitational influence disturbed the region where Pluto orbits (the Kuiper Belt), stopping the process of planetary growth there. The Kuiper Belt is a ring of material orbiting the Sun beyond the planet Neptune that contains millions of rocky, icy objects like Pluto and Charon. Charon could be an accumulation of the lighter materials resulting from a collision between Pluto and another large Kuiper Belt Object (KBO) in the ancient past.


Asteroid

I INTRODUCTION
Asteroid, one of the many small or minor rocky planetoids that are members of the solar system and that move in elliptical orbits primarily between the orbits of Mars and Jupiter. 

II SIZES AND ORBITS
The largest representatives are 1 Ceres, with a diameter of about 1,003 km (about 623 mi), and 2 Pallas and 4 Vesta, with diameters of about 550 km (about 340 mi). The naming of asteroids is governed by the International Astronomical Union (IAU). After an astronomer observes a possible unknown asteroid, other astronomers confirm the discovery by observing the body over a period of several orbits and comparing the asteroid’s position and orbit to those of known asteroids. If the asteroid is indeed a newly discovered object, the IAU gives it a number according to its order of discovery, and the astronomer who discovered it chooses a name. Asteroids are usually referred to by both number and name. 

About 200 asteroids have diameters of more than 97 km (60 mi), and thousands of smaller ones exist. The total mass of all asteroids in the solar system is much less than the mass of the Moon. The larger bodies are roughly spherical, but elongated and irregular shapes are common for those with diameters of less than 160 km (100 mi). Most asteroids, regardless of size, rotate on their axes every 5 to 20 hours. Certain asteroids may be binary, or have satellites of their own.

Few scientists now believe that asteroids are the remnants of a former planet. It is more likely that asteroids occupy a place in the solar system where a sizable planet could have formed but was prevented from doing so by the disruptive gravitational influences of the nearby giant planet Jupiter. Originally perhaps only a few dozen asteroids existed, which were subsequently fragmented by mutual collisions to produce the population now present. Scientists believe that asteroids move out of the asteroid belt because heat from the Sun warms them unevenly. This causes the asteroids to drift slowly away from their original orbits. 

The so-called Trojan asteroids lie in two clouds, one moving 60° ahead of Jupiter in its orbit and the other 60° behind. In 1977 the asteroid 2060 Chiron was discovered in an orbit between those of Saturn and Uranus. Asteroids that intersect the orbit of Mars are called Amors; asteroids that intersect the orbit of Earth are known as Apollos; and asteroids that have orbits smaller than Earth’s orbit are called Atens. One of the largest inner asteroids is 433 Eros, an elongated body measuring 13 by 33 km (8 by 21 mi). The peculiar Apollo asteroid 3200 Phaethon, about 5 km (about 3 mi) wide, approaches the Sun more closely, at 20.9 million km (13.9 million mi), than any other known asteroid. It is also associated with the yearly return of the Geminid stream of meteors (see Geminids).

Several Earth-approaching asteroids are relatively easy targets for space missions. In 1991 the United States Galileo space probe, on its way to Jupiter, took the first close-up pictures of an asteroid. The images showed that the small, lopsided body, 951 Gaspra, is pockmarked with craters, and revealed evidence of a blanket of loose, fragmental material, or regolith, covering the asteroid’s surface. Galileo also visited an asteroid named 243 Ida and found that Ida has its own moon, a smaller asteroid subsequently named Dactyl. (Dactyl’s official designation is 243 Ida I, because it is a satellite of Ida.)

In 1996 the National Aeronautics and Space Administration (NASA) launched the Near-Earth Asteroid Rendezvous (NEAR) spacecraft. NEAR was later renamed NEAR Shoemaker in honor of American scientist Eugene M. Shoemaker. NEAR Shoemaker’s goal was to go into orbit around the asteroid Eros. On its way to Eros, the spacecraft visited the asteroid 253 Mathilde in June 1997. At 60 km (37 mi) in diameter, Mathilde is larger than either of the asteroids that Galileo visited. In February 2000, NEAR Shoemaker reached Eros, moved into orbit around the asteroid, and began making observations. The spacecraft orbited the asteroid for a year, gathering data to provide astronomers with a better idea of the origin, composition, and structure of large asteroids. After NEAR Shoemaker’s original mission ended, NASA decided to attempt a “controlled crash” on the surface of Eros. NEAR Shoemaker set down safely on Eros in February 2001—the first spacecraft ever to land on an asteroid.

In 1999 Deep Space 1, a probe NASA designed to test new space technologies, flew by the tiny asteroid 9969 Braille. Measurements taken by Deep Space 1 revealed that the composition of Braille is very similar to that of 4 Vesta, the third largest asteroid known. Scientists believe that Braille may be a broken piece of Vesta or that the two asteroids may have formed under similar conditions.

III SURFACE COMPOSITION
With the exception of a few that have been traced to the Moon and Mars, most of the meteorites recovered on Earth are thought to be asteroid fragments. Remote observations of asteroids by telescopic spectroscopy and radar support this hypothesis. They reveal that asteroids, like meteorites, can be classified into a few distinct types.

Three-quarters of the asteroids visible from Earth, including 1 Ceres, belong to the C type, which appears to be related to a class of stony meteorites known as carbonaceous chondrites. These meteorites are considered the oldest materials in the solar system, with a composition reflecting that of the primitive solar nebula. Extremely dark in color, probably because of their hydrocarbon content, they show evidence of having absorbed water of hydration. Unlike the Earth and the Moon, they have apparently never melted or been reheated since they first formed.

Asteroids of the S type, related to the stony iron meteorites, make up about 15 percent of the total population. Much rarer are the M-type objects, corresponding in composition to the meteorites known as “irons.” Consisting of an iron-nickel alloy, they may represent the cores of melted, differentiated planetary bodies whose outer layers were removed by impact cratering.

A very few asteroids, notably 4 Vesta, are probably related to the rarest meteorite class of all: the achondrites. These asteroids appear to have an igneous surface composition like that of many lunar and terrestrial lava flows. Thus, astronomers are reasonably certain that Vesta was, at some time in its history, at least partly melted. Scientists are puzzled that some of the asteroids have been melted but others, such as 1 Ceres, have not. One possible explanation is that the early solar system contained certain concentrated, highly radioactive isotopes that might have generated enough heat to melt the asteroids.

IV ASTEROIDS AND EARTH
Astronomers have found more than 300 asteroids with orbits that approach Earth’s orbit. Some scientists project that several thousand of these near-Earth asteroids may exist and that as many as 1,500 could be large enough to cause a global catastrophe if they collided with Earth. Still, the chances of such a collision average out to only one collision about every 300,000 years. 

Many scientists believe that a collision with an asteroid or a comet may have been responsible for at least one mass extinction of life on Earth over the planet’s history. A giant crater on the Yucatán Peninsula in Mexico marks the spot where a comet or asteroid struck Earth at the end of the Cretaceous Period, about 65 million years ago. This is about the same time as the disappearance of the last of the dinosaurs. A collision with an asteroid large enough to cause the Yucatán crater would have sent so much dust and gas into the atmosphere that sunlight would have been dimmed for months or years. Reactions of gases from the impact with clouds in the atmosphere would have caused massive amounts of acid rain. The acid rain and the lack of sunlight would have killed off plant life and the animals in the food chain that were dependent on plants for survival. 
The most recent major encounter between Earth and what may have been an asteroid was a 1908 explosion in the atmosphere above the Tunguska region of Siberia. The force of the blast flattened more than 200,000 hectares (500,000 acres) of pine forest and killed thousands of reindeer. The number of human casualties, if any, is unknown. The first scientific expedition went to the region two decades later. This expedition and several detailed studies following it found no evidence of an impact crater. This led scientists to believe that the heat generated by friction with the atmosphere as the object plunged toward Earth was great enough to make the object explode before it hit the ground. 

If the Tunguska object had exploded in a less remote area, the loss of human life and property could have been astounding. Military satellites—in orbit around Earth watching for explosions that could signal violations of weapons testing treaties—have detected dozens of smaller asteroid explosions in the atmosphere each year. In 1995 NASA, the Jet Propulsion Laboratory, and the U.S. Air Force began a project called Near-Earth Asteroid Tracking (NEAT). NEAT uses an observatory in Hawaii to search for asteroids with orbits that might pose a threat to Earth. By tracking these asteroids, scientists can calculate the asteroids’ precise orbits and project these orbits into the future to determine whether the asteroids will come close to Earth. 

Astronomers believe that tracking programs such as NEAT would probably give the world decades or centuries of warning time for any possible asteroid collision. Scientists have suggested several strategies for deflecting asteroids from a collision course with Earth. If the asteroid is very far away, a nuclear warhead could be used to blow it up without much danger of pieces of the asteroid causing significant damage to Earth. Another suggested strategy would be to attach a rocket engine to the asteroid and direct the asteroid off course without breaking it up. Both of these methods require that the asteroid be far from Earth. If an asteroid exploded close to Earth, chunks of it would probably cause damage. Any effort to push an asteroid off course would also require years to work. Asteroids are much too large for a rocket to push quickly. If astronomers were to discover an asteroid less than ten years away from collision with Earth, new strategies for deflecting the asteroid would probably be needed.

Q7: What are minerals? For the most part, minerals are composed of eight elements; name any six of them. State the six characteristics that are used to identify minerals.

Mineral (chemistry), in general, any naturally occurring chemical element or compound, but in mineralogy and geology, chemical elements and compounds that have been formed through inorganic processes. Petroleum and coal, which are formed by the decomposition of organic matter, are not minerals in the strict sense. More than 3,000 mineral species are known, most of which are characterized by definite chemical composition, crystalline structure, and physical properties. They are classified primarily by chemical composition, crystal class, hardness, and appearance (color, luster, and opacity). Mineral species are, as a rule, limited to solid substances, the only liquids being metallic mercury and water. All the rocks forming Earth's crust consist of minerals. Metalliferous minerals of economic value, which are mined for their metals, are known as ores. See Crystal.

I INTRODUCTION
Mineralogy, the identification of minerals and the study of their properties, origin, and classification. The properties of minerals are studied under the convenient subdivisions of chemical mineralogy, physical mineralogy, and crystallography. The properties and classification of individual minerals, their localities and modes of occurrence, and their uses are studied under descriptive mineralogy. Identification according to chemical, physical, and crystallographic properties is called determinative mineralogy.

II CHEMICAL MINERALOGY
Chemical composition is the most important property for identifying minerals and distinguishing them from one another. Mineral analysis is carried out according to standard qualitative and quantitative methods of chemical analysis. Minerals are classified on the basis of chemical composition and crystal symmetry. The chemical constituents of minerals may also be determined by electron-beam microprobe analysis.

Although chemical classification is not rigid, the classes of chemical compounds that include the majority of minerals are as follows: (1) elements, such as gold, graphite, diamond, and sulfur, that occur in the native state, that is, in an uncombined form; (2) sulfides, minerals composed of various metals combined with sulfur; many important ore minerals, such as galena and sphalerite, belong to this class; (3) sulfosalts, minerals composed of lead, copper, or silver in combination with sulfur and one or more of antimony, arsenic, and bismuth; pyrargyrite, Ag3SbS3, belongs to this class; (4) oxides, minerals composed of a metal in combination with oxygen, such as hematite, Fe2O3; mineral oxides that contain water, such as diaspore, Al2O3·H2O, or the hydroxyl (OH) group, such as bog iron ore, FeO(OH), also belong to this group; (5) halides, composed of metals in combination with chlorine, fluorine, bromine, or iodine; halite, NaCl, is the most common mineral of this class; (6) carbonates, minerals such as calcite, CaCO3, containing a carbonate group; (7) phosphates, minerals such as apatite, Ca5(F,Cl)(PO4)3, that contain a phosphate group; (8) sulfates, minerals such as barite, BaSO4, containing a sulfate group; and (9) silicates, the largest class of minerals, containing various elements in combination with silicon and oxygen, often with complex chemical structure, along with minerals composed solely of silicon and oxygen (silica). The silicates include the feldspar, mica, pyroxene, quartz, zeolite, and amphibole families.

III PHYSICAL MINERALOGY
The physical properties of minerals are important aids in identifying and characterizing them. Most of the physical properties can be recognized at sight or determined by simple tests. The most important properties include streak (the color of the powdered mineral), color, cleavage, fracture, hardness, luster, specific gravity, and fluorescence or phosphorescence.

IV CRYSTALLOGRAPHY
The majority of minerals occur in crystal form when conditions of formation are favorable. Crystallography is the study of the growth, shape, and geometric character of crystals. The arrangement of atoms within a crystal is determined by X-ray diffraction analysis. Crystal chemistry is the study of the relationship of chemical composition, arrangement of atoms, and the binding forces among atoms. This relationship determines minerals' chemical and physical properties. Crystals are grouped into six main crystal systems: isometric, hexagonal, tetragonal, orthorhombic, monoclinic, and triclinic.

The study of minerals is an important aid in understanding rock formation. Laboratory synthesis of the high-pressure varieties of minerals is helping the understanding of igneous processes deep in the lithosphere (see Earth). Because all of the inorganic materials of commerce are minerals or derivatives of minerals, mineralogy has direct economic application. Important uses of minerals and examples in each category are gem minerals (diamond, garnet, opal, zircon); ornamental objects and structural material (agate, calcite, gypsum); abrasives (corundum, diamond, kaolin); lime, cement, and plaster (calcite, gypsum); refractories (asbestos, graphite, magnesite, mica); ceramics (feldspar, quartz); chemical minerals (halite, sulfur, borax); fertilizers (phosphates); natural pigments (hematite, limonite); optical and scientific apparatus (quartz, mica, tourmaline); and the ores of metals (cassiterite, chalcopyrite, chromite, cinnabar, ilmenite, molybdenite, galena, and sphalerite).


Q.8 Define any five of the following terms using suitable examples:
a. Polymerization b. Ecosystem c. Antibiotics 
d. Renewable energy resources e. Gene f. Software 
(a) Polymerization

I INTRODUCTION
Polymer, substance consisting of large molecules that are made of many small, repeating units called monomers, or mers. The number of repeating units in one large molecule is called the degree of polymerization. Materials with a very high degree of polymerization are called high polymers. Polymers consisting of only one kind of repeating unit are called homopolymers. Copolymers are formed from several different repeating units.

Most of the organic substances found in living matter, such as protein, wood, chitin, rubber, and resins, are polymers. Many synthetic materials, such as plastics, fibers (see Rayon), adhesives, glass, and porcelain, are also to a large extent polymeric substances.

II STRUCTURE OF POLYMERS
Polymers can be subdivided into three, or possibly four, structural groups. The molecules in linear polymers consist of long chains of monomers joined by bonds that are rigid to a certain degree—the monomers cannot rotate freely with respect to each other. Typical examples are polyethylene, polyvinyl alcohol, and polyvinyl chloride (PVC).

Branched polymers have side chains that are attached to the chain molecule itself. Branching can be caused by impurities or by the presence of monomers that have several reactive groups. Chain polymers composed of monomers with side groups that are part of the monomers, such as polystyrene or polypropylene, are not considered branched polymers.

In cross-linked polymers, two or more chains are joined together by side chains. With a small degree of cross-linking, a loose network is obtained that is essentially two dimensional. High degrees of cross-linking result in a tight three-dimensional structure. Cross-linking is usually caused by chemical reactions. An example of a two-dimensional cross-linked structure is vulcanized rubber, in which cross-links are formed by sulfur atoms. Thermosetting plastics are examples of highly cross-linked polymers; their structure is so rigid that when heated they decompose or burn rather than melt.

III SYNTHESIS
Two general methods exist for forming large molecules from small monomers: addition polymerization and condensation polymerization. In the chemical process called addition polymerization, monomers join together without the loss of atoms from the molecules. Some examples of addition polymers are polyethylene, polypropylene, polystyrene, polyvinyl acetate, and polytetrafluoroethylene (Teflon).

In condensation polymerization, monomers join together with the simultaneous elimination of atoms or groups of atoms. Typical condensation polymers are polyamides, polyesters, and certain polyurethanes.
In 1983 a new method of addition polymerization called group transfer polymerization was announced. An activating group within the molecule initiating the process transfers to the end of the growing polymer chain as individual monomers insert themselves in the group. The method has been used for acrylic plastics; it should prove applicable to other plastics as well.

(b) Ecosystem
(c) Antibiotics

(d) Renewable energy resources

(e) Gene
Gene, basic unit of heredity found in the cells of all living organisms, from bacteria to humans. Genes determine the physical characteristics that an organism inherits, such as the shape of a tree’s leaf, the markings on a cat’s fur, and the color of a human hair. 

Genes are composed of segments of deoxyribonucleic acid (DNA), a molecule that forms the long, threadlike structures called chromosomes. The information encoded within the DNA structure of a gene directs the manufacture of proteins, molecular workhorses that carry out all life-supporting activities within a cell (see Genetics).

Chromosomes within a cell occur in matched pairs. Each chromosome contains many genes, and each gene is located at a particular site on the chromosome, known as the locus. Like chromosomes, genes typically occur in pairs. A gene found on one chromosome in a pair usually has the same locus as another gene in the other chromosome of the pair, and these two genes are called alleles. Alleles are alternate forms of the same gene. For example, a pea plant has one gene that determines height, but that gene appears in more than one form—the gene that produces a short plant is an allele of the gene that produces a tall plant. The behavior of alleles and how they influence inherited traits follow predictable patterns. Austrian monk Gregor Mendel first identified these patterns in the 1860s.

In organisms that use sexual reproduction, offspring inherit one-half of their genes from each parent and then mix the two sets of genes together. This produces new combinations of genes, so that each individual is unique but still possesses the same genes as its parents. As a result, sexual reproduction ensures that the basic characteristics of a particular species remain largely the same for generations. However, mutations, or alterations in DNA, occur constantly. They create variations in the genes that are inherited. Some mutations may be neutral, or silent, and do not affect the function of a protein. Occasionally a mutation may benefit or harm an organism, and over the course of evolutionary time these mutations serve the crucial role of providing organisms with previously nonexistent proteins. In this way, mutations are a driving force behind genetic diversity and the rise of new or more competitive species that are better able to adapt to changes, such as climate variations, depletion of food sources, or the emergence of new types of disease.

Geneticists are scientists who study the function and behavior of genes. Since the 1970s geneticists have devised techniques, cumulatively known as genetic engineering, to alter or manipulate the DNA structure within genes. These techniques enable scientists to introduce one or more genes from one organism into a second organism. The second organism incorporates the new DNA into its own genetic material, thereby altering its own genetic characteristics by changing the types of proteins it can produce. In humans these techniques form the basis of gene therapy, a group of experimental procedures in which scientists try to substitute one or more healthy genes for defective ones in order to eliminate symptoms of disease.

Genetic engineering techniques have also enabled scientists to determine the chromosomal location and DNA structure of all the genes found within a variety of organisms. In April 2003 the Human Genome Project, a publicly funded consortium of academic scientists from around the world, identified the chromosomal locations and structure of the estimated 20,000 to 25,000 genes found within human cells. The genetic makeup of other organisms has also been identified, including that of the bacterium Escherichia coli, the yeast Saccharomyces cerevisiae, the roundworm Caenorhabditis elegans, and the fruit fly Drosophila melanogaster. Scientists hope to use this genetic information to develop life-saving drugs for a variety of diseases, to improve agricultural crop yields, and to learn more about plant and animal physiology and evolutionary history.

(f) Software
Software, computer programs; instructions that cause the hardware—the machines—to do work. Software as a whole can be divided into a number of categories based on the types of work done by programs. The two primary software categories are operating systems (system software), which control the workings of the computer, and application software, which addresses the multitude of tasks for which people use computers. System software thus handles such essential, but often invisible, chores as maintaining disk files and managing the screen, whereas application software performs word processing, database management, and the like. Two additional categories that are neither system nor application software, although they contain elements of both, are network software, which enables groups of computers to communicate, and language software, which provides programmers with the tools they need to write programs. 


Q9: What do you understand by the term “balanced diet”? What are its essential constituents? State the function of each constituent.

I INTRODUCTION
Human Nutrition, study of how food affects the health and survival of the human body. Human beings require food to grow, reproduce, and maintain good health. Without food, our bodies could not stay warm, build or repair tissue, or maintain a heartbeat. Eating the right foods can help us avoid certain diseases or recover faster when illness occurs. These and other important functions are fueled by chemical substances in our food called nutrients. Nutrients are classified as carbohydrates, proteins, fats, vitamins, minerals, and water.

When we eat a meal, nutrients are released from food through digestion. Digestion begins in the mouth by the action of chewing and the chemical activity of saliva, a watery fluid that contains enzymes, certain proteins that help break down food. Further digestion occurs as food travels through the stomach and the small intestine, where digestive enzymes and acids liquefy food and muscle contractions push it along the digestive tract. Nutrients are absorbed from the inside of the small intestine into the bloodstream and carried to the sites in the body where they are needed. At these sites, several chemical reactions occur that ensure the growth and function of body tissues. The parts of foods that are not absorbed continue to move down the intestinal tract and are eliminated from the body as feces.

Once digested, carbohydrates, proteins, and fats provide the body with the energy it needs to maintain its many functions. Scientists measure this energy in kilocalories, the amount of energy needed to raise the temperature of 1 kilogram of water by 1 degree Celsius. In nutrition discussions, however, the term calorie is used instead of kilocalorie as the standard unit of measure.
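The kilocalorie defined above is just the heat equation for water. The following is a minimal illustrative sketch, assuming the standard specific heat of water (about 4184 J per kilogram per degree Celsius); the function names are illustrative, not from the text:

```python
SPECIFIC_HEAT_WATER = 4184.0  # J/(kg * deg C), assumed standard value
JOULES_PER_KCAL = 4184.0      # one kilocalorie expressed in joules

def heating_energy_joules(mass_kg: float, delta_t_c: float) -> float:
    """Energy in joules to raise `mass_kg` of water by `delta_t_c` degrees C."""
    return mass_kg * SPECIFIC_HEAT_WATER * delta_t_c

def joules_to_kcal(joules: float) -> float:
    """Convert joules to kilocalories (the 'calories' of nutrition labels)."""
    return joules / JOULES_PER_KCAL

# Raising 1 kg of water by 1 deg C takes 1 kcal, matching the definition:
print(joules_to_kcal(heating_energy_joules(1.0, 1.0)))   # 1.0
# Heating 2 kg of water by 10 deg C takes 20 kcal:
print(joules_to_kcal(heating_energy_joules(2.0, 10.0)))  # 20.0
```

Because the specific heat of water and the joule value of a kilocalorie coincide, the conversion confirms the definition given in the text.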

II ESSENTIAL NUTRIENTS
Nutrients are classified as essential or nonessential. Nonessential nutrients are manufactured in the body and do not need to be obtained from food. Examples include cholesterol, a fatlike substance present in all animal cells. Essential nutrients must be obtained from food sources, because the body either does not produce them or produces them in amounts too small to maintain growth and health. Essential nutrients include water, carbohydrates, proteins, fats, vitamins, and minerals.

An individual needs varying amounts of each essential nutrient, depending upon such factors as gender and age. Specific health conditions, such as pregnancy, breast-feeding, illness, or drug use, make unusual demands on the body and increase its need for nutrients. Dietary guidelines, which take many of these factors into account, provide general guidance in meeting daily nutritional needs.

III WATER
If the importance of a nutrient is judged by how long we can do without it, water ranks as the most important. A person can survive only eight to ten days without water, whereas it takes weeks or even months to die from a lack of food. Water circulates through our blood and lymphatic system, transporting oxygen and nutrients to cells and removing wastes through urine and sweat. Water also maintains the natural balance between dissolved salts and water inside and outside of cells. Our joints and soft tissues depend on the cushioning that water provides for them. While water has no caloric value and therefore is not an energy source, without it in our diets we could not digest or absorb the foods we eat or eliminate the body’s digestive waste.

The human body is 65 percent water, and it takes an average of eight to ten cups to replenish the water our bodies lose each day. How much water a person needs depends largely on the volume of urine and sweat lost daily, and water needs are increased if a person suffers from diarrhea or vomiting or undergoes heavy physical exercise. Water is replenished by drinking liquids, preferably those without caffeine or alcohol, both of which increase the output of urine and thus dehydrate the body. Many foods are also a good source of water—fruits and vegetables, for instance, are 80 to 95 percent water; meats are made up of 50 percent water; and grains, such as oats and rice, can have as much as 35 percent water.

IV CARBOHYDRATES
Carbohydrates are the human body’s key source of energy, providing 4 calories of energy per gram. When carbohydrates are broken down by the body, the sugar glucose is produced; glucose is critical to help maintain tissue protein, metabolize fat, and fuel the central nervous system.
Glucose is absorbed into the bloodstream through the intestinal wall. Some of this glucose goes straight to work in our brain cells and red blood cells, while the rest makes its way to the liver and muscles, where it is stored as glycogen (animal starch), and to fat cells, where it is stored as fat. Glycogen is the body’s auxiliary energy source, tapped and converted back into glucose when we need more energy. Although stored fat can also serve as a backup source of energy, it is never converted into glucose. Fructose and galactose, other sugar products resulting from the breakdown of carbohydrates, go straight to the liver, where they are converted into glucose.
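The energy bookkeeping implied above can be sketched as a short calculation. The 4 kcal per gram for carbohydrate comes from the text; the values for protein (4 kcal/g) and fat (9 kcal/g) are the commonly used Atwater factors, added here as assumptions, and the function and meal below are hypothetical:

```python
# Assumed Atwater-style energy factors; only the carbohydrate value
# (4 kcal per gram) is stated in the text above.
KCAL_PER_GRAM = {"carbohydrate": 4, "protein": 4, "fat": 9}

def meal_calories(grams: dict) -> int:
    """Total kilocalories for a meal given grams of each macronutrient."""
    return sum(KCAL_PER_GRAM[nutrient] * g for nutrient, g in grams.items())

# A hypothetical meal: 50 g carbohydrate, 20 g protein, 10 g fat
print(meal_calories({"carbohydrate": 50, "protein": 20, "fat": 10}))  # 370
```

The example shows why fat-rich foods are calorie-dense: gram for gram, fat contributes more than twice the energy of carbohydrate or protein.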

Starches and sugars are the major carbohydrates. Common starch foods include whole-grain breads and cereals, pasta, corn, beans, peas, and potatoes. Naturally occurring sugars are found in fruits and many vegetables; milk products; and honey, maple sugar, and sugar cane. Foods that contain starches and naturally occurring sugars are referred to as complex carbohydrates, because their molecular complexity requires our bodies to break them down into a simpler form to obtain the much-needed fuel, glucose. Our bodies digest and absorb complex carbohydrates at a rate that helps maintain the healthful levels of glucose already in the blood.

In contrast, simple sugars, refined from naturally occurring sugars and added to processed foods, require little digestion and are quickly absorbed by the body, triggering an unhealthy chain of events. The body’s rapid absorption of simple sugars elevates the levels of glucose in the blood, which triggers the release of the hormone insulin. Insulin reins in the body’s rising glucose levels, but at a price: Glucose levels may fall so low within one to two hours after eating foods high in simple sugars, such as candy, that the body responds by releasing chemicals known as anti-insulin hormones. This surge in chemicals, the aftermath of eating a candy bar, can leave a person feeling irritable and nervous.

Many processed foods not only contain high levels of added simple sugars but also tend to be high in fat and lacking in the vitamins and minerals found naturally in complex carbohydrates. Nutritionists often refer to such processed foods as junk foods and say that they provide only empty calories, meaning they are loaded with calories from sugars and fats but lack the essential nutrients our bodies need.

In addition to starches and sugars, complex carbohydrates contain indigestible dietary fibers. Although such fibers provide no energy or building materials, they play a vital role in our health. Found only in plants, dietary fiber is classified as soluble or insoluble. Soluble fiber, found in such foods as oats, barley, beans, peas, apples, strawberries, and citrus fruits, mixes with food in the stomach and prevents or reduces the absorption by the small intestine of potentially dangerous substances from food. Soluble fiber also binds dietary cholesterol and carries it out of the body, thus preventing it from entering the bloodstream where it can accumulate in the inner walls of arteries and set the stage for high blood pressure, heart disease, and strokes. Insoluble fiber, found in vegetables, whole-grain products, and bran, provides roughage that speeds the elimination of feces, which decreases the time that the body is exposed to harmful substances, possibly reducing the risk of colon cancer. Studies of populations with fiber-rich diets, such as Africans and Asians, show that these populations have less risk of colon cancer compared to those who eat low-fiber diets, such as Americans. In the United States, colon cancer is the third most common cancer for both men and women, but experts believe that, with a proper diet, it is one of the most preventable types of cancer.

Nutritionists caution that most Americans need to eat more complex carbohydrates. In the typical American diet, only 40 to 50 percent of total calories come from carbohydrates, a lower percentage than is found in most of the rest of the world. To make matters worse, half of the carbohydrate calories consumed by the typical American come from processed foods filled with simple sugars. Experts recommend that these foods make up no more than 10 percent of our diet, because they offer little nutritional value. Foods rich in complex carbohydrates, which provide vitamins, minerals, some protein, and dietary fiber and are an abundant energy source, should make up roughly 50 percent of our daily calories.

V PROTEINS
Dietary proteins are powerful compounds that build and repair body tissues, from hair and fingernails to muscles. In addition to maintaining the body’s structure, proteins speed up chemical reactions in the body, serve as chemical messengers, fight infection, and transport oxygen from the lungs to the body’s tissues. Although protein provides 4 calories of energy per gram, the body uses protein for energy only if carbohydrate and fat intake is insufficient. When tapped as an energy source, protein is diverted from the many critical functions it performs for our bodies.

Proteins are made of smaller units called amino acids. Of the more than 20 amino acids our bodies require, eight (nine in some older adults and young children) cannot be made by the body in sufficient quantities to maintain health. These amino acids are considered essential and must be obtained from food. When we eat food high in proteins, the digestive tract breaks this dietary protein into amino acids. Absorbed into the bloodstream and sent to the cells that need them, amino acids then recombine into the functional proteins our bodies need.

Animal proteins, found in such food as eggs, milk, meat, fish, and poultry, are considered complete proteins because they contain all of the essential amino acids our bodies need. Plant proteins, found in vegetables, grains, and beans, lack one or more of the essential amino acids. However, plant proteins can be combined in the diet to provide all of the essential amino acids. A good example is rice and beans. Each of these foods lacks one or more essential amino acids, but the amino acids missing in rice are found in the beans, and vice versa. So when eaten together, these foods provide a complete source of protein. Thus, people who do not eat animal products (see Vegetarianism) can meet their protein needs with diets rich in grains, dried peas and beans, rice, nuts, and tofu, a soybean product.

Experts recommend that protein intake make up only 10 percent of our daily calorie intake. Some people, especially in the United States and other developed countries, consume more protein than the body needs. Because extra amino acids cannot be stored for later use, the body destroys these amino acids and excretes their by-products. Conversely, deficiencies in protein consumption, seen in the diets of people in some developing nations, may result in health problems. Marasmus and kwashiorkor, both life-threatening conditions, are the two most common forms of protein malnutrition.

Some health conditions, such as illness, stress, and pregnancy and breast-feeding in women, place an enormous demand on the body as it builds tissue or fights infection, and these conditions require an increase in protein consumption. For example, a healthy woman normally needs 45 grams of protein each day. Experts recommend that a pregnant woman consume 55 grams of protein per day, and that a breast-feeding mother consume 65 grams to maintain health.

A man of average size should eat 57 grams of protein daily. To support their rapid development, infants and young children require relatively more protein than do adults. A three-month-old infant requires about 13 grams of protein daily, and a four-year-old child requires about 22 grams. Once in adolescence, sex hormone differences cause boys to develop more muscle and bone than girls; as a result, the protein needs of adolescent boys are higher than those of girls.
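The figures in the last few paragraphs can be cross-checked against the "10 percent of daily calories" guideline mentioned earlier. A minimal sketch, using only the gram values quoted in the text, the 4 calories per gram of protein stated above, and an assumed 2000-calorie reference diet:

```python
# Daily protein recommendations quoted in the text (grams per day).
PROTEIN_G_PER_DAY = {
    "adult woman": 45,
    "pregnant woman": 55,
    "breast-feeding woman": 65,
    "adult man": 57,
    "infant, 3 months": 13,
    "child, 4 years": 22,
}

def protein_share_of_calories(group, daily_kcal=2000):
    """Percent of daily calories supplied if the recommendation is met exactly,
    at 4 kcal per gram of protein."""
    return round(100 * PROTEIN_G_PER_DAY[group] * 4 / daily_kcal, 1)

woman_share = protein_share_of_calories("adult woman")  # 9.0 percent
man_share = protein_share_of_calories("adult man")      # 11.4 percent
```

Both adult figures come out close to the 10 percent guideline, which is a useful sanity check on the numbers.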

VI FATS
Fats, which provide 9 calories of energy per gram, are the most concentrated of the energy-producing nutrients, so our bodies need only very small amounts. Fats play an important role in building the membranes that surround our cells and in helping blood to clot. Once digested and absorbed, fats help the body absorb certain vitamins. Fat stored in the body cushions vital organs and protects us from extreme cold and heat.
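With fat's 9 calories per gram now on record alongside the 4 calories per gram each for carbohydrate and protein cited earlier, a day's energy intake can be tallied in a small sketch (the example gram amounts are illustrative, not recommendations from the text):

```python
# Energy contributed by each macronutrient, using the per-gram values
# cited in this article: carbohydrate 4 kcal/g, protein 4 kcal/g, fat 9 kcal/g.
KCAL_PER_GRAM = {"carbohydrate": 4, "protein": 4, "fat": 9}

def calories_from_macros(grams):
    """Return total calories and the percent contributed by each nutrient."""
    per_nutrient = {n: g * KCAL_PER_GRAM[n] for n, g in grams.items()}
    total = sum(per_nutrient.values())
    percent = {n: round(100 * kcal / total, 1) for n, kcal in per_nutrient.items()}
    return total, percent

# Hypothetical day's intake: 275 g carbohydrate, 50 g protein, 65 g fat
total, percent = calories_from_macros(
    {"carbohydrate": 275, "protein": 50, "fat": 65}
)
# total is 1885 kcal; fat supplies 31.0 percent of it
```

Note how fat's high energy density shows up immediately: 65 g of fat supplies nearly three times the calories of 50 g of protein.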

Fat consists of fatty acids attached to a substance called glycerol. Dietary fats are classified as saturated, monounsaturated, and polyunsaturated according to the structure of their fatty acids (see Fats and Oils). Animal fats—from eggs, dairy products, and meats—are high in saturated fats and cholesterol, a chemical substance found in all animal fat. Vegetable fats—found, for example, in avocados, olives, some nuts, and certain vegetable oils—are rich in monounsaturated and polyunsaturated fat. As we will see, high intake of saturated fats can be unhealthy.

To understand the problem with eating too much saturated fat, we must examine its relationship to cholesterol. High levels of cholesterol in the blood have been linked to the development of heart disease, strokes, and other health problems. Despite its bad reputation, our bodies need cholesterol, which is used to build cell membranes, to protect nerve fibers, and to produce vitamin D and some hormones, chemical messengers that help coordinate the body’s functions. We just do not need cholesterol in our diet. The liver, and to a lesser extent the small intestine, manufacture all the cholesterol we require. When we eat cholesterol from foods that contain saturated fatty acids, we increase the level of a cholesterol-carrying substance in our blood that harms our health.

Cholesterol, like fat, is a lipid, an organic compound that is not soluble in water. To travel through the blood, cholesterol must therefore be transported in special carriers called lipoproteins. High-density lipoproteins (HDLs) remove cholesterol from the walls of arteries, return it to the liver, and help the liver excrete it as bile, a digestive fluid essential to fat digestion. For this reason, HDL is called “good” cholesterol.

Low-density lipoproteins (LDLs) and very-low-density lipoproteins (VLDLs) are considered “bad” cholesterol. Both LDLs and VLDLs transport cholesterol from the liver to the cells. As they work, LDLs and VLDLs leave plaque-forming cholesterol in the walls of the arteries, clogging the artery walls and setting the stage for heart disease. Almost 70 percent of the cholesterol in our bodies is carried by LDLs and VLDLs, and the remainder is transported by HDLs. For this reason, we need to consume dietary fats that increase our HDLs and decrease our LDL and VLDL levels.

Saturated fatty acids, found in foods ranging from beef and ice cream to mozzarella cheese and doughnuts, should make up no more than 10 percent of a person’s total calorie intake each day. Saturated fats are considered harmful to the heart and blood vessels because they are thought to increase the levels of LDLs and VLDLs and decrease the levels of HDLs.

Monounsaturated fats—found in olive, canola, and peanut oils—appear to have the best effect on blood cholesterol, decreasing the level of LDLs and VLDLs and increasing the level of HDLs. Polyunsaturated fats—found in margarine and sunflower, soybean, corn, and safflower oils—are considered more healthful than saturated fats. However, if consumed in excess (more than 10 percent of daily calories), they can decrease the blood levels of HDLs.

Most Americans obtain 15 to 50 percent of their daily calories from fats. Health experts consider diets with more than 30 percent of calories from fat to be unsafe, increasing the risk of heart disease. High-fat diets also contribute to obesity, which is linked to high blood pressure (see hypertension) and diabetes mellitus. A diet high in both saturated and unsaturated fats has also been associated with greater risk of developing cancers of the colon, prostate, breast, and uterus. Choosing a diet that is low in fat and cholesterol is critical to maintaining health and reducing the risk of life-threatening disease.

VII VITAMINS AND MINERALS
Both vitamins and minerals are needed by the body in very small amounts to trigger the thousands of chemical reactions necessary to maintain good health. Many of these chemical reactions are linked, with one triggering another. If there is a missing or deficient vitamin or mineral—or link—anywhere in this chain, this process may break down, with potentially devastating health effects. Although similar in supporting critical functions in the human body, vitamins and minerals have key differences.

Among their many functions, vitamins enhance the body’s use of carbohydrates, proteins, and fats. They are critical in the formation of blood cells, hormones, nervous system chemicals known as neurotransmitters, and the genetic material deoxyribonucleic acid (DNA). Vitamins are classified into two groups: fat soluble and water soluble. Fat-soluble vitamins, which include vitamins A, D, E, and K, are usually absorbed with the help of foods that contain fat. Fat containing these vitamins is broken down by bile, a liquid released by the liver, and the body then absorbs the breakdown products and vitamins. Excess amounts of fat-soluble vitamins are stored in the body’s fat, liver, and kidneys. Because these vitamins can be stored in the body, they do not need to be consumed every day to meet the body’s needs.

Water-soluble vitamins, which include vitamins C (also known as ascorbic acid), B1 (thiamine), B2 (riboflavin), B3 (niacin), B6, B12, and folic acid, cannot be stored and rapidly leave the body in urine if taken in greater quantities than the body can use. Foods that contain water-soluble vitamins need to be eaten daily to replenish the body’s needs.

In addition to the roles noted in the vitamin and mineral chart accompanying this article, vitamins A (in the form of beta-carotene), C, and E function as antioxidants, which are vital in countering the potential harm of chemicals known as free radicals. If these chemicals remain unchecked they can make cells more vulnerable to cancer-causing substances. Free radicals can also transform chemicals in the body into cancer-causing agents. Environmental pollutants, such as cigarette smoke, are sources of free radicals.

Minerals are chemical elements, many of them metallic, that the body needs in minute amounts and that are vital for the healthy growth of teeth and bones. They also help in such cellular activity as enzyme action, muscle contraction, nerve reaction, and blood clotting. Mineral nutrients are classified as major elements (calcium, chlorine, magnesium, phosphorus, potassium, sodium, and sulfur) and trace elements (chromium, copper, fluoride, iodine, iron, selenium, and zinc).

Vitamins and minerals not only help the body perform its various functions, but also prevent the onset of many disorders. For example, vitamin C is important in maintaining our bones and teeth; scurvy, a disorder that attacks the gums, skin, and muscles, occurs in its absence. Diets lacking vitamin B1, which supports neuromuscular function, can result in beriberi, a disease characterized by mental confusion, muscle weakness, and inflammation of the heart. Adequate intake of folic acid by pregnant women is critical to avoid nervous system defects in the developing fetus. The mineral calcium plays a critical role in building and maintaining strong bones; without it, children develop weak bones and adults experience the progressive loss of bone mass known as osteoporosis, which increases their risk of bone fractures.

Vitamins and minerals are found in a wide variety of foods, but some foods are better sources of specific vitamins and minerals than others. For example, oranges contain large amounts of vitamin C and folic acid but very little of the other vitamins. Milk contains large amounts of calcium but no vitamin C. Sweet potatoes are rich in vitamin A, but white potatoes contain almost none of this vitamin. Because of these differences in vitamin and mineral content, it is wise to eat a wide variety of foods.

VIII TOO LITTLE AND TOO MUCH FOOD
When the body is not given enough of any one of the essential nutrients over a period of time, it becomes weak and less able to fight infection. The brain may become sluggish and react slowly. The body taps its stored fat for energy, and muscle is broken down to use for energy. Eventually the body withers away, the heart ceases to pump properly, and death occurs—the most extreme result of a dietary condition known as deficiency-related malnutrition.

Deficiency diseases result from inadequate intake of the major nutrients. These deficiencies can result from eating foods that lack critical vitamins and minerals, from a lack of variety of foods, or from simply not having enough food. Malnutrition can reflect conditions of poverty, war, famine, and disease. It can also result from eating disorders, such as anorexia nervosa and bulimia.

Although malnutrition is more commonly associated with dietary deficiencies, it also can develop in cases where people have enough food to eat, but they choose foods low in essential nutrients. This is the more common form of malnutrition in developed countries such as the United States. When poor food choices are made, a person may be getting an adequate, or excessive, amount of calories each day, yet still be undernourished. For example, iron deficiency is a common health problem among women and young children in the United States, and low intake of calcium is directly related to poor quality bones and increased fracture risk, especially in the elderly.

A diet of excesses may also lead to other nutritional problems. Obesity is the condition of having too much body fat. It has been linked to life-threatening diseases including diabetes mellitus, heart problems, and some forms of cancer. Eating too many salty foods may contribute to high blood pressure (see hypertension), an often undiagnosed condition that causes the heart to work too hard and puts strain on the arteries. High blood pressure can lead to strokes, heart attacks, and kidney failure. A diet high in cholesterol and fat, particularly saturated fat, is the primary cause of atherosclerosis, which results when fat and cholesterol deposits build up in the arteries, causing a reduction in blood flow.

IX MAKING GOOD NUTRITIONAL CHOICES
To determine healthful nutrition standards, the Food and Nutrition Board of the National Academy of Sciences (NAS), a nonprofit, scholarly society that advises the United States government, periodically assembles committees of national experts to update and assess nutrition guidelines. The NAS first published its Recommended Dietary Allowances (RDAs) in 1941. An RDA reflects the amount of a nutrient in the diet that should decrease the risk of chronic disease for most healthy individuals. The NAS originally developed the RDAs to ensure that World War II soldiers stationed around the world received enough of the right kinds of foods to maintain their health. The NAS periodically has updated the RDAs to reflect new knowledge of nutrient needs.

In the late 1990s the NAS decided that the RDAs, originally developed to prevent nutrient deficiencies, needed to serve instead as a guide for optimizing health. Consequently, the NAS created Dietary Reference Intakes (DRIs), which incorporate the RDAs and a variety of new dietary guidelines. As part of this change, the NAS replaced some RDAs with another measure, called Adequate Intake (AI). Although the AI recommendations are often the same as those in the original RDA, use of this term reflects that there is not enough scientific evidence to set a standard for the nutrient. Calcium has an AI of 1000 to 1200 mg per day, not an RDA, because scientists do not yet know how much calcium is needed to prevent osteoporosis.
Tolerable Upper Intake Level (UL) designates the highest recommended intake of a nutrient for good health. If intake exceeds this amount, health problems may develop. Calcium, for instance, has a UL of 2500 mg per day. Scientists know that more than this amount of calcium taken every day can interfere with the absorption of iron, zinc, and magnesium and may result in kidney stones or kidney failure.

Estimated Average Requirement (EAR) reflects the amount of a particular nutrient that meets the optimal needs of half the individuals in a specified group. For example, the NAS cites an EAR of 45 to 90 grams of protein for men aged 18 to 25. This figure means that half the men in that population need a daily intake of protein that falls within that range.

To simplify the complex standards established by the NAS, the United States Department of Agriculture (USDA) created the Food Guide Pyramid, a visual display of the relative importance to health of six food groups common to the American diet. The food groups are arranged in a pyramid to emphasize that it is wise to choose an abundance of foods from the category at the broad base (bread, cereal, rice, pasta) and use sparingly foods from the peak (fats, oils, sweets). The other food groups appear between these two extremes, indicating the importance of vegetables and fruits and the need for moderation in eating dairy products and meats. The pyramid recommends a range of the number of servings to choose from each group, based on the nutritional needs of males and females and different age groups. Other food pyramids have been developed based on the USDA pyramid to help people choose foods that fit a specific ethnic or cultural pattern, including Mediterranean, Asian, Latin American, Puerto Rican, and vegetarian diets.

In an effort to provide additional nutritional guidance and reduce the incidence of diet-related cancers, the National Cancer Institute developed the 5-a-Day Campaign for Better Health, a program that promotes the practice of eating five servings of fruits and vegetables daily. Studies of populations that eat many fruits and vegetables reveal a decreased incidence of diet-related cancers. Laboratory studies have shown that many fruits and vegetables contain phytochemicals, substances that appear to limit the growth of cancer cells.

Many people obtain most of their nutrition information from a food label called the Nutrition Facts panel. This label is mandatory for most foods that contain more than one ingredient, and these foods are mostly processed foods. Labeling remains voluntary for raw meats, fresh fruits and vegetables, foods produced by small businesses, and those sold in restaurants, food stands, and local bakeries.

The Nutrition Facts panel highlights a product’s content of fat, saturated fat, cholesterol, sodium, dietary fiber, vitamins A and C, and the minerals calcium and iron. The stated content of these nutrients must be based on a standard serving size, as defined by the Food and Drug Administration (FDA). Food manufacturers may provide information about other nutrients if they choose. However, if a nutritional claim is made on a product’s package, the appropriate nutrient content must be listed. For example, if the package says “high in folic acid,” then the folic acid content in the product must be given in the Nutrition Facts panel.

The Nutrition Facts panel also includes important information in a column headed % Daily Value (DV). DVs tell how the food item meets the recommended daily intakes of fat, saturated fat, cholesterol, carbohydrates, dietary fiber, and protein necessary for nutritional health based on the total intake recommended for a person consuming 2000 calories per day. One portion from a can of soup, for example, may have less than 2 percent of the recommended daily value for cholesterol intake.

Health-conscious consumers can use the Nutrition Facts panel to guide their food choices. For example, based on a daily diet of 2000 calories, nutrition experts recommend that no more than 30 percent of those calories come from fat, which allows a daily intake of around 65 grams of fat. A Nutrition Facts panel may indicate that a serving of one brand of macaroni and cheese contains 14 grams of fat, or a % DV of 25 percent. This tells the consumer that a serving of macaroni and cheese provides about one-fourth of the suggested healthy level of daily fat intake. If another brand of macaroni and cheese displays a % DV of 10 percent for fat, the nutrition-conscious consumer would opt for this brand.
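A % Daily Value is simply the nutrient amount in one serving divided by its daily reference amount. A minimal sketch using the 65-gram fat allowance derived above; the saturated-fat (20 g) and cholesterol (300 mg) reference values are added here as assumptions, being the long-standing FDA figures for a 2000-calorie diet. Computed directly against 65 g, the 14-gram example works out to about 22 percent, in the same range as the 25 percent quoted above:

```python
# Daily reference amounts for a 2000-calorie diet.
# fat_g comes from the text (30% of 2000 kcal at 9 kcal/g, about 65 g);
# the other two are assumed FDA reference values.
DAILY_VALUES = {"fat_g": 65, "saturated_fat_g": 20, "cholesterol_mg": 300}

def percent_dv(nutrient, amount):
    """% Daily Value: the serving's amount as a share of the daily reference."""
    return round(100 * amount / DAILY_VALUES[nutrient])

# The macaroni-and-cheese example: 14 g of fat in one serving
fat_dv = percent_dv("fat_g", 14)  # about 22 percent of the daily allowance
```

Reading labels this way makes brand comparisons mechanical: the lower the % DV for fat, saturated fat, and cholesterol per serving, the more room the rest of the day's diet has.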

Nutritionists and other health experts help consumers make good food choices. People who study nutrition in college may refer to themselves as nutritionists; often, however, the term refers to a scientist who has pursued graduate education in this field. A nutritionist may also be a dietitian. Dietitians are trained in nutrition, food chemistry, and diet planning. In the United States, dietitians typically have graduated from a college program accredited by the American Dietetic Association (ADA), completed an approved program of clinical experience, and passed the ADA’s registration examination to earn the title Registered Dietitian (RD).


Q14: What are fertilizers? What do you understand by the term NPK fertilizer? How do fertilizers contribute to water pollution?

Fertilizer, natural or synthetic chemical substance or mixture used to enrich soil so as to promote plant growth. Plants do not require complex chemical compounds analogous to the vitamins and amino acids required for human nutrition, because plants are able to synthesize whatever compounds they need. They do, however, require more than a dozen different chemical elements, and these elements must be present in forms that make them adequately available for plant use. Within this restriction, nitrogen, for example, can be supplied with equal effectiveness as urea, nitrates, ammonium compounds, or pure ammonia.

Virgin soil usually contains adequate amounts of all the elements required for proper plant nutrition. When a particular crop is grown on the same parcel of land year after year, however, the land may become exhausted of one or more specific nutrients. If such exhaustion occurs, nutrients in the form of fertilizers must be added to the soil. Plants can also be made to grow more lushly with suitable fertilizers.

Of the required nutrients, hydrogen, oxygen, and carbon are supplied in inexhaustible form by air and water. Sulfur, calcium, and iron are necessary nutrients that usually are present in soil in ample quantities. Lime (calcium) is often added to soil, but its function is primarily to reduce acidity and not, in the strict sense, to act as a fertilizer. Nitrogen is present in enormous quantities in the atmosphere, but plants are not able to use nitrogen in this form; bacteria provide nitrogen from the air to plants of the legume family through a process called nitrogen fixation. The three elements that most commonly must be supplied in fertilizers are nitrogen, phosphorus, and potassium. Certain other elements, such as boron, copper, and manganese, sometimes need to be included in small quantities.

Many fertilizers used since ancient times contain one or more of the three elements important to the soil. For example, manure and guano contain nitrogen. Bones contain small quantities of nitrogen and larger quantities of phosphorus. Wood ash contains appreciable quantities of potassium (depending considerably on the type of wood). Clover, alfalfa, and other legumes are grown as rotating crops and then plowed under, enriching the soil with nitrogen. 

The term complete fertilizer refers to any mixture containing all three of these important elements; because the elements are nitrogen (N), phosphorus (P), and potassium (K), such mixtures are commonly called NPK fertilizers. They are described by a set of three numbers. For example, 5-8-7 designates a fertilizer (usually in powder or granular form) containing 5 percent nitrogen, 8 percent phosphorus (calculated as phosphorus pentoxide), and 7 percent potassium (calculated as potassium oxide).
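Because the 5-8-7 grade counts phosphorus as P2O5 and potassium as K2O, the elemental amounts in a bag of fertilizer are smaller than the label percentages suggest. A minimal sketch of the conversion; the factors 0.436 and 0.830 are the standard molar-mass ratios of P in P2O5 and of K in K2O:

```python
# Convert a fertilizer grade such as 5-8-7 (percent N, P2O5, K2O by weight)
# into elemental nutrient masses for a given quantity of fertilizer.
P_FROM_P2O5 = 0.436  # fraction of P2O5 mass that is elemental phosphorus
K_FROM_K2O = 0.830   # fraction of K2O mass that is elemental potassium

def elemental_nutrients(grade, mass_kg):
    """grade: (N%, P2O5%, K2O%); returns kg of elemental N, P, K in mass_kg."""
    n_pct, p2o5_pct, k2o_pct = grade
    return {
        "N": mass_kg * n_pct / 100,
        "P": mass_kg * p2o5_pct / 100 * P_FROM_P2O5,
        "K": mass_kg * k2o_pct / 100 * K_FROM_K2O,
    }

# 100 kg of the 5-8-7 fertilizer described in the text
result = elemental_nutrients((5, 8, 7), 100)
# roughly 5 kg N, 3.5 kg P, and 5.8 kg K
```

Nitrogen is already quoted as the element, so only the P and K figures shrink under conversion.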

While fertilizers are essential to modern agriculture, their overuse can have harmful effects on plants and crops and on soil quality. In addition, the leaching of nutrients into bodies of water can lead to water pollution problems such as eutrophication, by causing excessive growth of vegetation.

The use of industrial waste materials in commercial fertilizers has been encouraged in the United States as a means of recycling waste products. The safety of this practice has recently been called into question. Its opponents argue that industrial wastes often contain elements that poison the soil and can introduce toxic chemicals into the food chain.
Last edited by Last Island; Sunday, December 30, 2007 at 09:18 PM.
Posted by Dilrauf (Islamabad), Sunday, December 30, 2007
PAPER 2003

Q 1. Write short notes on any two of the following :

(a)Microwave Oven

(b) Optic Fibre

(c) Biotechnology

I INTRODUCTION
Biotechnology, the manipulation of biological organisms to make products that benefit human beings. Biotechnology contributes to such diverse areas as food production, waste disposal, mining, and medicine.
Although biotechnology has existed since ancient times, some of its most dramatic advances have come in more recent years. Modern achievements include the transferal of a specific gene from one organism to another (by means of a set of genetic engineering techniques known as transgenics); the maintenance and growth of genetically uniform plant- and animal-cell cultures, called clones; and the fusing of different types of cells to produce beneficial medical products such as monoclonal antibodies, which are designed to attack a specific type of foreign substance.

II HISTORY
The first achievements in biotechnology were in food production, occurring about 5000 BC. Diverse strains of plants or animals were hybridized (crossed) to produce greater genetic variety. The offspring from these crosses were then selectively bred to produce the greatest number of desirable traits (see Genetics). Repeated cycles of selective breeding produced many present-day food staples. This method continues to be used in food-production programs.

Corn (maize) was one of the first food crops known to have been cultivated by human beings. Although corn was used as food as early as 5000 BC in Mexico, no wild forms of the plant have ever been found, indicating that corn was most likely the result of some fortunate agricultural experiment in antiquity.

The modern era of biotechnology had its origin in 1953 when American biochemist James Watson and British biophysicist Francis Crick presented their double-helix model of DNA. This was followed by Swiss microbiologist Werner Arber's discovery in the 1960s of special enzymes, called restriction enzymes, in bacteria. These enzymes cut the DNA strands of any organism at precise points. In 1973 American geneticist Stanley Cohen and American biochemist Herbert Boyer removed a specific gene from one bacterium and inserted it into another using restriction enzymes. This event marked the beginning of recombinant DNA technology, commonly called genetic engineering. In 1977 genes from other organisms were transferred to bacteria. This achievement eventually led to the first transfer of a human gene, which coded for a hormone, to Escherichia coli bacteria. Although the transgenic bacteria (bacteria to which a gene from a different species has been transferred) could not use the human hormone, they produced it along with their own normal chemical compounds.

In the 1960s an important project used hybridization followed by selective breeding to increase food production and quality of wheat and rice crops. American agriculturalist Norman Borlaug, who spearheaded the program, was awarded the Nobel Peace Prize in 1970 in recognition of the important contribution that increasing the world's food supply makes to the cause of peace.

III CURRENT TRENDS
Today biotechnology is applied in various fields. In waste management, for example, biotechnology is used to create new biodegradable materials. One such material is made from the lactic acid produced during the bacterial fermentation of discarded corn stalks. When individual lactic acid molecules are joined chemically, they form a material that has the properties of plastics but is biodegradable. Widespread production of plastic from this material is expected to become more economically viable in the future.
Biotechnology also has applications in the mining industry. In its natural state, copper is found combined with other elements in the mineral chalcopyrite. The bacterium Thiobacillus ferrooxidans can use the molecules of copper found in chalcopyrite to form the compound copper sulfate (CuSO4), which, in turn, can be treated chemically to obtain pure copper. This microbiological mining process is used only with low-grade ores and currently accounts for about 10 percent of copper production in the United States. The percentage will rise, however, as conventionally mined high-grade deposits are exhausted. Procedures have also been developed for the use of bacteria in the mining of zinc, lead, and other metals.
The field of medicine employs some of the most dramatic applications in biotechnology. One advance came in 1986 with the first significant laboratory production of factor VIII, a blood-clotting protein that is not produced, or has greatly reduced activity, in people who have hemophilia. As a result of this condition, hemophiliacs are at risk of bleeding to death after suffering minor cuts or bruises. In this biotechnological procedure, the human gene that codes for the blood-clotting protein is transferred to hamster cells grown in tissue culture, which then produce factor VIII for use by hemophiliacs. Factor VIII was approved for commercial production in 1992.

IV CONTROVERSIES
Some people, including scientists, object to any procedure that changes the genetic composition of an organism. Critics are concerned that some of the genetically altered forms will eliminate existing species, thereby upsetting the natural balance of organisms. There are also fears that recombinant DNA experiments with pathogenic microorganisms may result in the formation of extremely virulent forms which, if accidentally released from the laboratory, will cause worldwide epidemics. Some critics cite ethical dilemmas associated with the production of transgenic organisms.
In 1976, in response to fears of disastrous consequences of unregulated genetic engineering procedures, the National Institutes of Health created a body of rules governing the handling of microorganisms in recombinant DNA experiments. Although many of the rules have been relaxed over time, certain restrictions are still imposed on those working with pathogenic microorganisms.

Q2: Give names of the members of the solar system. Briefly write down main characteristics of : a). Mars b). venus 

Solar System, the Sun and everything that orbits the Sun, including the nine planets and their satellites; the asteroids and comets; and interplanetary dust and gas. The term may also refer to a group of celestial bodies orbiting another star (see Extrasolar Planets). In this article, solar system refers to the system that includes Earth and the Sun. 

Planet, any major celestial body that orbits a star and does not emit visible light of its own but instead shines by reflected light. Smaller bodies that also orbit a star and are not satellites of a planet are called asteroids or planetoids. In the solar system, there are nine planets: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune, and Pluto. Planets that orbit stars other than the Sun are collectively called extrasolar planets. Some extrasolar planets are nearly large enough to become stars themselves. Such borderline planets are called brown dwarfs. 

Mars:

Mars (planet), one of the planets in the solar system; it is the fourth planet from the Sun and orbits the Sun at an average distance of about 228 million km (about 141 million mi). Mars is named for the Roman god of war and is sometimes called the red planet because it appears fiery red in Earth’s night sky.

Mars is a relatively small planet, with about half the diameter of Earth and about one-tenth Earth’s mass. The force of gravity on the surface of Mars is about one-third of that on Earth. Mars has twice the diameter and twice the surface gravity of Earth’s Moon. The surface area of Mars is almost exactly the same as the surface area of the dry land on Earth. Mars is believed to be about the same age as Earth, having formed from the same spinning, condensing cloud of gas and dust that formed the Sun and the other planets about 4.6 billion years ago.
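The claim that Mars's surface area roughly equals the area of Earth's dry land can be checked with a short calculation. The radii and Earth's land fraction below are rounded reference values supplied for illustration; they do not appear in the text above:

```python
import math

# Rounded reference values (assumed, not from the article text).
R_MARS = 3390.0       # mean radius of Mars, km
R_EARTH = 6371.0      # mean radius of Earth, km
LAND_FRACTION = 0.29  # roughly 29% of Earth's surface is dry land

def sphere_area(r_km: float) -> float:
    """Surface area of a sphere of radius r_km, in square kilometers."""
    return 4.0 * math.pi * r_km ** 2

mars_area = sphere_area(R_MARS)                    # about 1.44e8 km^2
earth_land = sphere_area(R_EARTH) * LAND_FRACTION  # about 1.48e8 km^2

print(f"Mars surface area: {mars_area:.3g} km^2")
print(f"Earth dry land:    {earth_land:.3g} km^2")
print(f"Ratio: {mars_area / earth_land:.2f}")  # close to 1, as the text states
```

The two figures agree to within a few percent, which is why the article can say the areas are "almost exactly the same."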

Venus:

Venus (planet), one of the planets in the solar system, the second in distance from the Sun. Except for the Sun and the Moon, Venus is the brightest object in the sky. The planet was named for the Roman goddess of beauty. It is often called the morning star when it appears in the east at sunrise, and the evening star when it is in the west at sunset. In ancient times the evening star was called Hesperus and the morning star Phosphorus or Lucifer. Because the planet orbits closer to the Sun than Earth does, Venus seems to either precede or trail the Sun in the sky. Venus is never visible more than three hours before sunrise or three hours after sunset.

Q 6 : Define any five of the following :

(i) Acoustic
Acoustics (Greek akouein, “to hear”), term sometimes used for the science of sound in general. It is more commonly used for the special branch of that science, architectural acoustics, that deals with the construction of enclosed areas so as to enhance the hearing of speech or music. For the treatment of acoustics as a branch of the pure science of physics, see Sound.

The acoustics of buildings was an undeveloped aspect of the study of sound until comparatively recent times. The Roman architect Marcus Vitruvius Pollio (Vitruvius), who lived during the 1st century BC, made some pertinent observations on the subject and some astute guesses concerning reverberation and interference. The scientific aspects of this subject, however, were first thoroughly treated by the American physicist Joseph Henry in 1856 and more fully developed by the American physicist Wallace Sabine in 1900.

(ii) Quartz
Quartz, second most common of all minerals, composed of silicon dioxide, or silica, SiO2. It is distributed all over the world as a constituent of rocks and in the form of pure deposits. It is an essential constituent of igneous rocks such as granite, rhyolite, and pegmatite, which contain an excess of silica. In metamorphic rocks, it is a major constituent of the various forms of gneiss and schist; the metamorphic rock quartzite is composed almost entirely of quartz. Quartz forms veins and nodules in sedimentary rock, principally limestone. Sandstone, a sedimentary rock, is composed mainly of quartz. Many widespread veins of quartz deposited in rock fissures form the matrix for many valuable minerals. Precious metals, such as gold, are found in sufficient quantity in quartz veins to warrant the mining of quartz to recover the precious mineral. Quartz is also the primary constituent of sand.

(iii) Pollination
Pollination, transfer of pollen grains from the male structure of a plant to the female structure of a plant. The pollen grains contain cells that will develop into male sex cells, or sperm. The female structure of a plant contains the female sex cells, or eggs. Pollination prepares the plant for fertilization, the union of the male and female sex cells. Virtually all grains, fruits, vegetables, wildflowers, and trees must be pollinated and fertilized to produce seed or fruit, and pollination is vital for the production of critically important agricultural crops, including corn, wheat, rice, apples, oranges, tomatoes, and squash.

In order for pollination to be successful, pollen must be transferred between plants of the same species—for example, a rose flower must always receive rose pollen and a pine tree must always receive pine pollen. Plants typically rely on one of two methods of pollination: cross-pollination or self-pollination, but some species are capable of both.

Most plants are designed for cross-pollination, in which pollen is transferred between different plants of the same species. Cross-pollination ensures that beneficial genes are transmitted relatively rapidly to succeeding generations. If a beneficial gene occurs in just one plant, that plant’s pollen or eggs can produce seeds that develop into numerous offspring carrying the beneficial gene. The offspring, through cross-pollination, transmit the gene to even more plants in the next generation. Cross-pollination introduces genetic diversity into the population at a rate that enables the species to cope with a changing environment. New genes ensure that at least some individuals can endure new diseases, climate changes, or new predators, enabling the species as a whole to survive and reproduce.

Plant species that use cross-pollination have special features that enhance this method. For instance, some plants have pollen grains that are lightweight and dry so that they are easily swept up by the wind and carried for long distances to other plants. Other plants have pollen and eggs that mature at different times, preventing the possibility of self-pollination.

In self-pollination, pollen is transferred from the stamens to the pistil within one flower. The resulting seeds and the plants they produce inherit the genetic information of only one parent, and the new plants are genetically identical to the parent. The advantage of self-pollination is the assurance of seed production when no pollinators, such as bees or birds, are present. It also sets the stage for rapid propagation—weeds typically self-pollinate, and they can produce an entire population from a single plant. The primary disadvantage of self-pollination is that it results in genetic uniformity of the population, which makes the population vulnerable to extinction by, for example, a single devastating disease to which all the genetically identical plants are equally susceptible. Another disadvantage is that beneficial genes do not spread as rapidly as in cross-pollination, because one plant with a beneficial gene can transmit it only to its own offspring and not to other plants. Self-pollination evolved later than cross-pollination, and may have developed as a survival mechanism in harsh environments where pollinators were scarce.

(iv) Allele
All genetic traits result from different combinations of gene pairs, one gene inherited from the mother and one from the father. Each trait is thus represented by two genes, often in different forms. Different forms of the same gene are called alleles. Traits depend on very precise rules governing how genetic units are expressed through generations. For example, some people have the ability to roll their tongue into a U-shape, while others can only curve their tongue slightly. A single gene with two alleles controls this heritable trait. If a child inherits the allele for tongue rolling from one parent and the allele for no tongue rolling from the other parent, she will be able to roll her tongue. The allele for tongue rolling dominates the gene pair, and so its trait is expressed. According to the laws governing heredity, when a dominant allele (in this case, tongue rolling) and a recessive allele (no tongue rolling) combine, the trait will always be dictated by the dominant allele. The no tongue rolling trait, or any other recessive trait, will only occur in an individual who inherits the two recessive alleles. 

(v) Optical Illusion
Optical Illusion, perception of a visual stimulus that differs from the way the stimulus really is. Illusions may arise from the physics of light, as when a straight stick appears bent at the point where it enters water because of refraction, or from the way the eye and brain interpret what is seen, as in the Müller-Lyer illusion, in which two lines of equal length appear unequal because of the arrow-like figures at their ends. A mirage, in which distant objects appear displaced or inverted owing to the bending of light rays in layers of air of different temperature, is a familiar natural optical illusion. Optical illusions show that perception is not a simple copy of the outside world but an interpretation constructed by the visual system.

(vi) Ovulation
Unlike germ cells in the testis, female germ cells originate as single cells in the embryonic tissue that later develops into an ovary. At maturity, after the production of ova from the female germ cells, groups of ovary cells surrounding each ovum develop into “follicle cells” that secrete nutriment for the contained egg. As the ovum is prepared for release during the breeding season, the tissue surrounding the ovum hollows out and becomes filled with fluid and at the same time moves to the surface of the ovary; this mass of tissue, fluid, and ovum is known as a Graafian follicle. The ovary of the adult is merely a mass of glandular and connective tissue containing numerous Graafian follicles at various stages of maturity. When the Graafian follicle is completely mature, it bursts through the surface of the ovary, releasing the ovum, which is then ready for fertilization; the release of the ovum from the ovary is known as ovulation. The space formerly occupied by the Graafian follicle is filled by a blood clot known as the corpus hemorrhagicum; in four or five days this clot is replaced by a mass of yellow cells known as the corpus luteum, which secretes hormones playing an important part in preparation of the uterus for the reception of a fertilized ovum. If the ovum goes unfertilized, the corpus luteum is eventually replaced by scar tissue known as the corpus albicans. The ovary is located in the body cavity, attached to the peritoneum that lines this cavity.

(vii) Aqua Regia
Aqua Regia (Latin, “royal water”), mixture of concentrated hydrochloric and nitric acids, containing one part by volume of nitric acid (HNO3) to three parts of hydrochloric acid (HCl). Aqua regia was used by the alchemists (see Alchemy) and its name is derived from its ability to dissolve the so-called noble metals, particularly gold, which are inert to either of the acids used separately. It is still occasionally used in the chemical laboratory for dissolving gold and platinum. Aqua regia is a powerful solvent because of the combined effects of the H+, NO3-, and Cl- ions in solution. The three ions react with gold atoms, for example, to form water, nitric oxide (NO), and the stable ion AuCl4-, which remains in solution.
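The dissolution of gold described above can be summarized by one commonly cited balanced equation (the net reaction can be written in more than one way; this is the simplest form):

```latex
\mathrm{Au} + \mathrm{HNO_3} + 4\,\mathrm{HCl} \longrightarrow \mathrm{HAuCl_4} + \mathrm{NO}\!\uparrow + 2\,\mathrm{H_2O}
```

Nitric acid oxidizes the gold (Au goes from 0 to +3), while the chloride ions stabilize the dissolved metal as the tetrachloroaurate ion.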

Q12: Differentiate between the following pairs :

(A) Lava and Magma 
Lava, molten or partially molten rock that erupts at the earth’s surface. When lava comes to the surface, it is red-hot, reaching temperatures as high as 1200° C (2200° F). Some lava can be as thick and viscous as toothpaste, while other lava can be as thin and fluid as warm syrup and flow rapidly down the sides of a volcano. Molten rock that has not yet erupted is called magma. Once lava hardens it forms igneous rock. Volcanoes build up where lava erupts from a central vent. Flood basalt forms where lava erupts from huge fissures. The eruption of lava is the principal mechanism whereby new crust is produced (see Plate Tectonics). Since lava is generated at depth, its chemical and physical characteristics provide indirect information about the chemical composition and physical properties of the rocks 50 to 150 km (30 to 90 mi) below the surface.
Magma, molten or partially molten rock beneath the earth’s surface. Magma is generated when rock deep underground melts due to the high temperatures and pressures inside the earth. Because magma is lighter than the surrounding rock, it tends to rise. As it moves upward, the magma encounters colder rock and begins to cool. If the temperature of the magma drops low enough, the magma will crystallize underground to form rock; rock that forms in this way is called intrusive, or plutonic igneous rock, as the magma has formed by intruding the surrounding rocks. If the crust through which the magma passes is sufficiently shallow, warm, or fractured, and if the magma is sufficiently hot and fluid, the magma will erupt at the surface of the earth, possibly forming volcanoes. Magma that erupts is called lava.

(B) Ultraviolet and infrared 
Ultraviolet Radiation, electromagnetic radiation that has wavelengths in the range between 4000 angstrom units (Å), the wavelength of violet light, and 150 Å, the wavelength of X rays. Natural ultraviolet radiation is produced principally by the sun. Ultraviolet radiation is produced artificially by electric-arc lamps (see Electric Arc).

Ultraviolet radiation is often divided into three categories based on wavelength: UV-A, UV-B, and UV-C. In general, shorter wavelengths of ultraviolet radiation are more dangerous to living organisms. UV-A has a wavelength from 4000 Å to about 3150 Å. UV-B occurs at wavelengths from about 3150 Å to about 2800 Å and causes sunburn; prolonged exposure to UV-B over many years can cause skin cancer. UV-C has wavelengths of about 2800 Å to 150 Å and is used to sterilize surfaces because it kills bacteria and viruses.
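The three bands can be captured in a small helper function. The boundaries below are exactly those given in the paragraph (in angstrom units); the function name and boundary handling at the band edges are illustrative choices:

```python
def uv_band(wavelength_angstrom: float) -> str:
    """Classify an ultraviolet wavelength (in Å) into UV-A, UV-B, or UV-C,
    using the band boundaries quoted in the text above."""
    if 3150 <= wavelength_angstrom <= 4000:
        return "UV-A"  # longest, least energetic UV
    if 2800 <= wavelength_angstrom < 3150:
        return "UV-B"  # sunburn-causing band
    if 150 <= wavelength_angstrom < 2800:
        return "UV-C"  # germicidal band
    raise ValueError("wavelength outside the ultraviolet range")

print(uv_band(3000))  # a sunburn-causing wavelength: UV-B
```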

The earth's atmosphere protects living organisms from the sun's ultraviolet radiation. If all the ultraviolet radiation produced by the sun were allowed to reach the surface of the earth, most life on earth would probably be destroyed. Fortunately, the ozone layer of the atmosphere absorbs almost all of the short-wavelength ultraviolet radiation, and much of the long-wavelength ultraviolet radiation. However, ultraviolet radiation is not entirely harmful; a large portion of the vitamin D that humans and animals need for good health is produced when the human's or animal's skin is irradiated by ultraviolet rays.

When exposed to ultraviolet light, many substances behave differently than when exposed to visible light. For example, when exposed to ultraviolet radiation, certain minerals, dyes, vitamins, natural oils, and other products become fluorescent—that is, they appear to glow. Molecules in the substances absorb the invisible ultraviolet light, become energetic, then shed their excess energy by emitting visible light. As another example, ordinary window glass, transparent to visible light, is opaque to a large portion of ultraviolet rays, particularly ultraviolet rays with short wavelengths. Special-formula glass is transparent to the longer ultraviolet wavelengths, and quartz is transparent to the entire naturally occurring range.

In astronomy, ultraviolet-radiation detectors have been used since the early 1960s on artificial satellites, providing data on stellar objects that cannot be obtained from the earth's surface. An example of such a satellite is the International Ultraviolet Explorer, launched in 1978.

INFRARED RADIATION
Infrared Radiation, emission of energy as electromagnetic waves in the portion of the spectrum just beyond the limit of the red portion of visible radiation (see Electromagnetic Radiation). The wavelengths of infrared radiation are shorter than those of radio waves and longer than those of light waves. They range between approximately 10⁻⁶ and 10⁻³ m (about 0.00004 and 0.04 in). Infrared radiation may be detected as heat, and instruments such as bolometers are used to detect it. See Radiation; Spectrum.

Infrared radiation is used to obtain pictures of distant objects obscured by atmospheric haze, because visible light is scattered by haze but infrared radiation is not. The detection of infrared radiation is used by astronomers to observe stars and nebulas that are invisible in ordinary light or that emit radiation in the infrared portion of the spectrum.

An opaque filter that admits only infrared radiation is used for very precise infrared photographs, but an ordinary orange or light-red filter, which will absorb blue and violet light, is usually sufficient for most infrared pictures. Developed about 1880, infrared photography has today become an important diagnostic tool in medical science as well as in agriculture and industry. Use of infrared techniques reveals pathogenic conditions that are not visible to the eye or recorded on X-ray plates. Remote sensing by means of aerial and orbital infrared photography has been used to monitor crop conditions and insect and disease damage to large agricultural areas, and to locate mineral deposits. See Aerial Survey; Satellite, Artificial. In industry, infrared spectroscopy forms an increasingly important part of metal and alloy research, and infrared photography is used to monitor the quality of products. See also Photography: Photographic Films.

Infrared devices such as those used during World War II enable sharpshooters to see their targets in total visual darkness. These instruments consist essentially of an infrared lamp that sends out a beam of infrared radiation, often referred to as black light, and a telescope receiver that picks up returned radiation from the object and converts it to a visible image.

(C) Fault and Fold 
Fold (geology), in geology, bend in a rock layer caused by forces within the crust of the earth. The forces that cause folds range from slight differences in pressure in the earth’s crust, to large collisions of the crust’s tectonic plates. As a result, a fold may be only a few centimeters in width, or it may cover several kilometers. Rock layers can also break in response to these forces, in which case a fault occurs. Folds usually occur in a series and look like waves. If the rocks have not been turned upside down, then the crests of the waves are called anticlines and the troughs are called synclines (see Anticline and Syncline).

Fault (geology), crack in the crust of the earth along which there has been movement of the rocks on either side of the crack. A crack without movement is called a joint. Faults occur on a wide scale, ranging in length from millimeters to thousands of kilometers. Large-scale faults result from the movement of tectonic plates, continent-sized slabs of the crust that move as coherent pieces (see Plate Tectonics). 

(D) Caustic Soda and Caustic Potash 
Electrolytic decomposition is the basis for a number of important extractive and manufacturing processes in modern industry. Caustic soda, an important chemical in the manufacture of paper, rayon, and photographic film, is produced by the electrolysis of a solution of common salt in water (see Alkalies). The reaction produces chlorine and sodium. The sodium in turn reacts with the water in the cell to yield caustic soda. The chlorine evolved is used in pulp and paper manufacture.

Caustic soda, or sodium hydroxide, NaOH, is an important commercial product, used in making soap, rayon, and cellophane; in processing paper pulp; in petroleum refining; and in the manufacture of many other chemical products. Caustic soda is manufactured principally by electrolysis of a common salt solution, with chlorine and hydrogen as important by-products.
Potassium hydroxide (KOH), called caustic potash, a white solid that is dissolved by the moisture in the air, is prepared by the electrolysis of potassium chloride or by the reaction of potassium carbonate and calcium hydroxide; it is used in the manufacture of soap and is an important chemical reagent. It dissolves in less than its own weight of water, liberating heat and forming a strongly alkaline solution.
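For both alkalies, the electrolytic route described above can be summarized by the same overall reaction, shown here for the sodium and potassium salts respectively (these are idealized net equations; industrial cells involve additional steps):

```latex
2\,\mathrm{NaCl} + 2\,\mathrm{H_2O} \xrightarrow{\text{electrolysis}} 2\,\mathrm{NaOH} + \mathrm{Cl_2}\!\uparrow + \mathrm{H_2}\!\uparrow
```

```latex
2\,\mathrm{KCl} + 2\,\mathrm{H_2O} \xrightarrow{\text{electrolysis}} 2\,\mathrm{KOH} + \mathrm{Cl_2}\!\uparrow + \mathrm{H_2}\!\uparrow
```

In each case chlorine is evolved at the anode and hydrogen at the cathode, leaving the hydroxide in solution.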

(E) S.E.M. and T.E.M.
The Scanning Electron Microscope (S.E.M.) sweeps a finely focused beam of electrons back and forth across the surface of a specimen. Electrons scattered or emitted from the surface are collected point by point to build up a highly magnified image with great depth of field, giving a three-dimensional appearance to the specimen's surface.
The Transmission Electron Microscope (T.E.M.) passes a beam of electrons through a specimen cut into an extremely thin slice. The electrons that pass through are focused by magnetic lenses to form a flat, two-dimensional image of the specimen's internal structure. The T.E.M. generally achieves higher magnification and resolution than the S.E.M., but it reveals internal detail rather than surface form.


Q15: Laser

I INTRODUCTION
Laser, a device that produces and amplifies light. The word laser is an acronym for Light Amplification by Stimulated Emission of Radiation. Laser light is very pure in color, can be extremely intense, and can be directed with great accuracy. Lasers are used in many modern technological devices including bar code readers, compact disc (CD) players, and laser printers. Lasers can generate light beyond the range visible to the human eye, from the infrared through the X-ray range. Masers are similar devices that produce and amplify microwaves.

II PRINCIPLES OF OPERATION
Lasers generate light by storing energy in particles called electrons inside atoms and then inducing the electrons to emit the absorbed energy as light. Atoms are the building blocks of all matter on Earth and are a thousand times smaller than viruses. Electrons are the underlying source of almost all light.

Light is composed of tiny packets of energy called photons. Lasers produce coherent light: light that is monochromatic (one color) and whose photons are “in step” with one another.

A Excited Atoms
At the heart of an atom is a tightly bound cluster of particles called the nucleus. This cluster is made up of two types of particles: protons, which have a positive charge, and neutrons, which have no charge. The nucleus makes up more than 99.9 percent of the atom’s mass but occupies only a tiny part of the atom’s space. Enlarge an atom up to the size of Yankee Stadium and the equally magnified nucleus is only the size of a baseball.

Electrons, tiny particles that have a negative charge, whirl through the rest of the space inside atoms. Electrons travel in complex orbits and exist only in certain specific energy states or levels (see Quantum Theory). Electrons can move from a low to a high energy level by absorbing energy. An atom with at least one electron that occupies a higher energy level than it normally would is said to be excited. An atom can become excited by absorbing a photon whose energy equals the difference between the two energy levels. A photon’s energy, color, frequency, and wavelength are directly related: All photons of a given energy are the same color and have the same frequency and wavelength.
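The energy-wavelength relationship mentioned above can be stated as a formula. A photon's energy E is proportional to its frequency ν and inversely proportional to its wavelength λ:

```latex
E = h\nu = \frac{hc}{\lambda}
```

where h ≈ 6.63 × 10⁻³⁴ J·s is Planck's constant and c ≈ 3.00 × 10⁸ m/s is the speed of light. For red light of wavelength 633 nm (the familiar helium-neon laser line), E ≈ 3.1 × 10⁻¹⁹ J, or about 2 electron volts.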

Usually, electrons quickly jump back to the low energy level, giving off the extra energy as light (see Photoelectric Effect). Neon signs and fluorescent lamps glow with this kind of light as many electrons independently emit photons of different colors in all directions.

B Stimulated Emission
Lasers are different from more familiar sources of light. Excited atoms in lasers collectively emit photons of a single color, all traveling in the same direction and all in step with one another. When two photons are in step, the peaks and troughs of their waves line up. The electrons in the atoms of a laser are first pumped, or energized, to an excited state by an energy source. An excited atom can then be “stimulated” by a photon of exactly the same color (or, equivalently, the same wavelength) as the photon this atom is about to emit spontaneously. If the photon approaches closely enough, the photon can stimulate the excited atom to immediately emit light that has the same wavelength and is in step with the photon that interacted with it. This stimulated emission is the key to laser operation. The new light adds to the existing light, and the two photons go on to stimulate other excited atoms to give up their extra energy, again in step. The phenomenon snowballs into an amplified, coherent beam of light: laser light.

In a gas laser, for example, the photons usually zip back and forth in a gas-filled tube with highly reflective mirrors facing inward at each end. As the photons bounce between the two parallel mirrors, they trigger further stimulated emissions and the light gets brighter and brighter with each pass through the excited atoms. One of the mirrors is only partially silvered, allowing a small amount of light to pass through rather than reflecting it all. The intense, directional, and single-colored laser light finally escapes through this slightly transparent mirror. The escaped light forms the laser beam.
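The buildup of light between the two mirrors can be sketched with a toy numerical model. All the numbers below (gain per pass, mirror reflectivities, number of round trips) are assumed illustrative values, and the model ignores gain saturation, which in a real laser limits the growth to a steady output level:

```python
# Toy model of light amplification in a two-mirror laser cavity.
GAIN_PER_PASS = 1.10  # 10% amplification per pass through the medium (assumed)
R_BACK = 1.00         # fully reflective rear mirror
R_FRONT = 0.95        # partially silvered output mirror: 5% of light escapes

intensity = 1.0       # arbitrary starting units (seed from spontaneous emission)
output = 0.0
for _ in range(100):  # 100 round trips through the cavity
    intensity *= GAIN_PER_PASS * R_BACK  # pass toward and bounce off rear mirror
    intensity *= GAIN_PER_PASS           # amplified again returning to the front
    output = intensity * (1.0 - R_FRONT) # fraction escaping as the beam this trip
    intensity *= R_FRONT                 # the rest is reflected back for another trip

print(f"Circulating intensity after 100 round trips: {intensity:.3g}")
print(f"Light escaping on the final round trip: {output:.3g}")
```

Because the gain per round trip (1.10 × 1.10 × 0.95 ≈ 1.15) exceeds 1, the circulating intensity grows rapidly; the small fraction leaking through the partially silvered mirror is the laser beam.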

Albert Einstein first proposed stimulated emission, the underlying process for laser action, in 1917. Translating the idea of stimulated emission into a working model, however, required more than four decades. The working principles of lasers were outlined by the American physicists Charles Hard Townes and Arthur Leonard Schawlow in a 1958 patent application. (Both men won Nobel Prizes in physics for their work, Townes in 1964 and Schawlow in 1981). The patent for the laser was granted to Townes and Schawlow, but it was later challenged by the American physicist and engineer Gordon Gould, who had written down some ideas and coined the word laser in 1957. Gould eventually won a partial patent covering several types of laser. In 1960 American physicist Theodore Maiman of Hughes Aircraft Corporation constructed the first working laser from a ruby rod.

III TYPES OF LASERS
Lasers are generally classified according to the material, called the medium, they use to produce the laser light. Solid-state, gas, liquid, semiconductor, and free electron are all common types of lasers.

A Solid-State Lasers
Solid-state lasers produce light by means of a solid medium. The most common solid laser media are rods of ruby crystals and neodymium-doped glasses and crystals. The ends of the rods are fashioned into two parallel surfaces coated with a highly reflecting nonmetallic film. Solid-state lasers offer the highest power output. They are usually pulsed to generate a very brief burst of light. Bursts as short as 12 × 10⁻¹⁵ sec have been achieved. These short bursts are useful for studying physical phenomena of very brief duration.

One method of exciting the atoms in lasers is to illuminate the solid laser material with higher-energy light than the laser produces. This procedure, called pumping, is achieved with brilliant strobe light from xenon flash tubes, arc lamps, or metal-vapor lamps.

B Gas Lasers
The lasing medium of a gas laser can be a pure gas, a mixture of gases, or even metal vapor. The medium is usually contained in a cylindrical glass or quartz tube. Two mirrors are located outside the ends of the tube to form the laser cavity. Gas lasers can be pumped by ultraviolet light, electron beams, electric current, or chemical reactions. The helium-neon laser is known for its color purity and minimal beam spread. Carbon dioxide lasers are very efficient at turning the energy used to excite their atoms into laser light. Consequently, they are the most powerful continuous wave (CW) lasers—that is, lasers that emit light continuously rather than in pulses.

C Liquid Lasers
The most common liquid laser media are organic dyes contained in glass vessels. They are pumped by intense flash lamps in a pulse mode or by a separate gas laser in the continuous wave mode. Some dye lasers are tunable, meaning that the color of the laser light they emit can be adjusted with the help of a prism located inside the laser cavity.

D Semiconductor Lasers
Semiconductor lasers are the most compact lasers. Gallium arsenide is the most common semiconductor used. A typical semiconductor laser consists of a junction between two flat layers of gallium arsenide. One layer is treated with an impurity whose atoms provide an extra electron, and the other with an impurity whose atoms are one electron short. Semiconductor lasers are pumped by the direct application of electric current across the junction. They can be operated in the continuous wave mode with better than 50 percent efficiency. Only a small percentage of the energy used to excite most other lasers is converted into light.

Scientists have developed extremely tiny semiconductor lasers, called quantum-dot vertical-cavity surface-emitting lasers. These lasers are so tiny that more than a million of them can fit on a chip the size of a fingernail.

Common uses for semiconductor lasers include compact disc (CD) players and laser printers. Semiconductor lasers also form the heart of fiber-optics communication systems (see Fiber Optics).

E Free Electron Lasers.
Free electron lasers employ an array of magnets to excite free electrons (electrons not bound to atoms). First developed in 1977, they are now becoming important research instruments. Free electron lasers are tunable over a broader range of energies than dye lasers. The devices become more difficult to operate at higher energies but generally work successfully from infrared through ultraviolet wavelengths. Theoretically, free electron lasers can function even in the X-ray range.

The free electron laser facility at the University of California at Santa Barbara uses intense far-infrared light to investigate mutations in DNA molecules and to study the properties of semiconductor materials. Free electron lasers should also eventually become capable of producing very high-power radiation that is currently too expensive to produce. At high power, near-infrared beams from a free electron laser could defend against a missile attack.

IV LASER APPLICATIONS
The use of lasers is restricted only by imagination. Lasers have become valuable tools in industry, scientific research, communications, medicine, the military, and the arts.

A Industry
Powerful laser beams can be focused on a small spot to generate enormous temperatures. Consequently, the focused beams can readily and precisely heat, melt, or vaporize material. Lasers have been used, for example, to drill holes in diamonds, to shape machine tools, to trim microelectronics, to cut fashion patterns, to synthesize new material, and to attempt to induce controlled nuclear fusion (see Nuclear Energy).
Highly directional laser beams are used for alignment in construction. Perfectly straight and uniformly sized tunnels, for example, may be dug using lasers for guidance. Powerful, short laser pulses also make possible high-speed photography with exposure times of only a few trillionths of a second.

B Scientific Research
Because laser light is highly directional and monochromatic, extremely small amounts of light scattering and small shifts in color caused by the interaction between laser light and matter can easily be detected. By measuring the scattering and color shifts, scientists can study molecular structures of matter. Chemical reactions can be selectively induced, and the existence of trace substances in samples can be detected. Lasers are also the most effective detectors of certain types of air pollution (see Chemical Analysis; Photochemistry).

Scientists use lasers to make extremely accurate measurements. Lasers are used in this way for monitoring small movements associated with plate tectonics and for geographic surveys. Lasers have been used for precise determination (to within one inch) of the distance between Earth and the Moon, and in precise tests to confirm Einstein’s theory of relativity. Scientists also have used lasers to determine the speed of light to an unprecedented accuracy.
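The Earth-Moon measurement mentioned above works by timing a laser pulse's round trip to a reflector on the lunar surface. The sketch below illustrates the idea only; the round-trip time used is an illustrative value, not a quoted measurement.

```python
# Lunar laser ranging in miniature: distance = speed of light * time / 2.
# The 2.56 s round-trip time below is illustrative, not measured data.

C = 299_792_458  # speed of light in m/s (exact by definition)

def distance_from_round_trip(seconds):
    """One-way distance implied by a laser pulse's round-trip travel time."""
    return C * seconds / 2

# A round trip of about 2.56 s corresponds roughly to the mean
# Earth-Moon distance of ~384,000 km.
d = distance_from_round_trip(2.56)
print(f"{d / 1000:,.0f} km")
```

Because the speed of light is known exactly, the precision of the distance measurement is limited only by how precisely the pulse's travel time can be clocked.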

Very fast laser-activated switches are being developed for use in particle accelerators. Scientists also use lasers to trap single atoms and subatomic particles in order to study these tiny bits of matter (see Particle Trap).

C Communications
Laser light can travel a large distance in outer space with little reduction in signal strength. In addition, high-energy laser light can carry 1,000 times as many television channels as the microwave signals in use today. Lasers are therefore ideal for space communications. Low-loss optical fibers have been developed to transmit laser light for earthbound communication in telephone and computer systems. Laser techniques have also been used for high-density information recording. For instance, laser light simplifies the recording of a hologram, from which a three-dimensional image can be reconstructed with a laser beam. Lasers are also used to play audio CDs and videodiscs (see Sound Recording and Reproduction).
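The channel-capacity advantage follows from carrier frequency: usable bandwidth scales roughly with the frequency of the carrier wave, and optical light oscillates tens of thousands of times faster than microwaves. A minimal sketch, using typical wavelengths that are assumptions rather than figures from the text:

```python
# Why optical carriers hold far more channels than microwaves:
# frequency = c / wavelength, and capacity scales roughly with frequency.
# The wavelengths below are typical example values, not from the source.

C = 299_792_458  # speed of light, m/s

def frequency_hz(wavelength_m):
    return C / wavelength_m

microwave = frequency_hz(0.03)     # 3 cm microwave link, about 10 GHz
optical = frequency_hz(1.55e-6)    # 1550 nm infrared laser, about 193 THz

print(f"optical carrier ~{optical / microwave:,.0f}x the microwave frequency")
```

Even allotting only a small fraction of the optical carrier's frequency as usable bandwidth leaves room for vastly more channels than a microwave link can support.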

D Medicine
Lasers have a wide range of medical uses. Intense, narrow beams of laser light can cut and cauterize certain body tissues in a small fraction of a second without damaging surrounding healthy tissues. Lasers have been used to “weld” the retina, bore holes in the skull, vaporize lesions, and cauterize blood vessels. Laser surgery has virtually replaced older surgical procedures for eye disorders. Laser techniques have also been developed for lab tests of small biological samples.

E Military Applications
Laser guidance systems for missiles, aircraft, and satellites have been constructed. Guns can be fitted with laser sights and range finders. The use of laser beams to destroy hostile ballistic missiles has been proposed, as in the Strategic Defense Initiative urged by U.S. president Ronald Reagan and the Ballistic Missile Defense program supported by President George W. Bush. The ability of tunable dye lasers to selectively excite an atom or molecule may open up more efficient ways to separate isotopes for construction of nuclear weapons.

V LASER SAFETY
Because the eye focuses laser light just as it does other light, the chief danger in working with lasers is eye damage. Therefore, laser light should never be viewed directly or by reflection.

Lasers sold and used commercially in the United States must comply with a strict set of laws enforced by the Center for Devices and Radiological Health (CDRH), a department of the Food and Drug Administration. The CDRH has divided lasers into six groups, depending on their power output, their emission duration, and the energy of the photons they emit. The classification is then attached to the laser as a sticker. The higher the laser’s energy, the higher its potential to injure. High-powered lasers of the Class IV type (the highest classification) generate a beam of energy that can start fires, burn flesh, and cause permanent eye damage whether the light is direct, reflected, or diffused. Canada uses the same classification system, and laser use in Canada is overseen by Health Canada’s Radiation Protection Bureau.

Goggles blocking the specific color of photons that a laser produces are mandatory for the safe use of lasers. Even with goggles, direct exposure to laser light should be avoided.
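Goggles are wavelength-specific because each laser emits photons of a particular energy, given by E = hc/λ; the goggle coating must block that specific band while passing other visible light. A small sketch of the relation, using common laser wavelengths as assumed examples:

```python
# Photon energy E = h * c / wavelength, converted to electronvolts.
# The laser wavelengths listed are common textbook examples, assumed
# here for illustration rather than taken from the source text.

H = 6.626e-34        # Planck's constant, J*s
C = 299_792_458      # speed of light, m/s
EV = 1.602e-19       # joules per electronvolt

def photon_energy_ev(wavelength_nm):
    return H * C / (wavelength_nm * 1e-9) / EV

for name, nm in [("CO2 (infrared)", 10600),
                 ("HeNe (red)", 633),
                 ("argon-ion (green)", 514)]:
    print(f"{name}: {photon_energy_ev(nm):.2f} eV")
```

The shorter the wavelength, the more energetic each photon, which is one reason ultraviolet and visible lasers pose a sharper eye hazard than far-infrared ones of the same power.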
