Technical News

Adam's Maxim and Spinoza's Conjecture

Belief, disbelief and uncertainty generate different neural pathways in the brain



During an early episode of the über-pyrotechnic television series MythBusters, Adam Savage was busted by the camera crew for misremembering his predictions of the probability of an axle being ripped out of a car, à la American Graffiti. When confronted with the unmistakable video evidence of his error, Adam sardonically rejoined: “I reject your reality and substitute my own.”

Skepticism is the fine art and technical science of understanding why rejecting everyone else’s reality and substituting your own almost always results in a failed belief system. Where in the brain do such belief processes unfold? To find out, neuroscientists Sam Harris, Sameer A. Sheth and Mark S. Cohen employed functional magnetic resonance imaging to scan the brains of 14 adults at the University of California, Los Angeles, Brain Mapping Center. The researchers presented the subjects with a series of statements designed to be plainly true, false or undecidable. In response, the volunteers were to press a button indicating their belief, disbelief or uncertainty. For example:

Mathematical:
(2 + 6) + 8 = 16.
62 can be evenly divided by 9.
1.257 = 32608.5153.

Factual:
Most people have 10 fingers and 10 toes.
Eagles are common pets.
The Dow Jones Industrial Average rose 1.2% last Tuesday.

Ethical:
It is bad to take pleasure at another’s suffering.
Children should have no rights until they can vote.
It is better to lie to a child than to an adult.
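
The mathematical items, at least, can be checked mechanically. A minimal Python sketch, purely for illustration (the study, of course, measured human judgments, not code):

    # Mechanical check of the mathematical statements quoted above.
    # The third is quoted verbatim from the column; its oddly specific
    # right-hand side is what made it hard to judge at a glance.

    print((2 + 6) + 8 == 16)      # True
    print(62 % 9 == 0)            # False: 62 / 9 leaves remainder 8
    print(1.257 == 32608.5153)    # False as written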

The findings were revealing. First, there were significant reaction time differences in evaluating statements; responses to belief statements were significantly shorter than responses to both disbelief and uncertainty statements (but no difference was detected between disbelief and uncertainty statements). Second, contrasting belief and disbelief in the brain scans yielded a spike in neural activity in the ventromedial prefrontal cortex, associated with decision making and learning in the context of rewards. Third, contrasting disbelief and belief showed increased brain response in the left inferior frontal gyrus, the anterior insula and the dorsal anterior cingulate, all associated with responses to negative stimuli, pain perception and disgust. Finally, contrasting uncertainty with both belief and disbelief revealed elevated neural action in the anterior cingulate cortex, a region associated with conflict resolution.

What do these results tell us? “Several psychological studies appear to support [17th-century Dutch philosopher Benedict] Spinoza’s conjecture that the mere comprehension of a statement entails the tacit acceptance of its being true, whereas disbelief requires a subsequent process of rejection,” report Harris and his collaborators on the study in their paper, published in the December 2007 Annals of Neurology. “Understanding a proposition may be analogous to perceiving an object in physical space: We seem to accept appearances as reality until they prove otherwise.” So subjects assessed true statements as believable faster than they judged them as unbelievable or undecidable. Further, because the brain appears to process false or uncertain statements in regions linked to pain and disgust, especially in judging tastes and odors, this study gives new meaning to a claim passing the “taste test” or the “smell test.”

As for the neural correlates of belief and skepticism, the ventromedial prefrontal cortex is instrumental in linking higher-order cognitive factual evaluations with lower-order emotional response associations, and it does so in evaluating all types of claims. Thus, the assessment of the ethical statements showed a similar pattern of neural activation, as did the evaluation of the mathematical and factual statements. People with damage in this area have a difficult time feeling an emotional difference between good and bad decisions, and they are susceptible to confabulation—mixing true and false memories and conflating reality with fantasy.

This research supports Spinoza’s conjecture that most people have a low tolerance for ambiguity and that belief comes quickly and naturally, whereas skepticism is slow and unnatural. The scientific principle of the null hypothesis—that a claim is untrue unless proved otherwise—runs counter to our natural tendency to accept as true what we can comprehend quickly. Given the chance, most of us would like to invoke Adam’s Maxim because it is faster and feels better. Thus it is that we should reward skepticism and disbelief and champion those willing to change their minds in the teeth of new evidence.


Visionary Research: Teaching Computers to See Like a Human

M.I.T. researchers are harnessing computer models of human vision to improve image recognition software

 
IMAGE PROCESSING: M.I.T. researchers are looking to advances in neuroscience for ways to improve artificial intelligence, and vice versa.

For all their sophistication, computers still can't compete with nature's gift—a brain that sorts objects quickly and accurately enough so that people and primates can interpret what they see as it happens. Despite decades of development, computer vision systems still get bogged down by the massive amounts of data necessary just to identify the most basic images. Throw that same image into a different setting or change the lighting and artificial intelligence is even less of a match for good old gray matter.

These shortcomings become more pressing as demand grows for security systems that can recognize a known terrorist's face in a crowded airport and car safety mechanisms such as a sensor that can hit the brakes when it detects a pedestrian or another vehicle in the car's path. Seeking the way forward, Massachusetts Institute of Technology researchers are looking to advances in neuroscience for ways to improve artificial intelligence, and vice versa. The school's leading minds in both neural and computer sciences are pooling their research, mixing complex computational models of the brain with their work on image processing.

This cross-disciplinary approach began to yield fruit a year ago, when a group of researchers led by Tomaso Poggio, a professor in M.I.T.'s Department of Brain and Cognitive Sciences and an investigator at the school's McGovern Institute for Brain Research, used a brain-inspired computer model to interpret a series of photographs. Although the neurological model had been developed as a theoretical analysis of how certain visual pathways in the brain work, it turned out to be as good as, or even better than, the best existing computer vision systems at rapidly recognizing some complex scenes. Previously, when a computer was shown pictures of a horse along with other animals standing in a forest, and was asked to identify the equine each time, it was swamped by all the variables that might distinguish the horse from the other animals or the trees.

When the neurological model was used, it was the first time a computer model was able to reproduce human behavior on that kind of task, Poggio says, and it brought the researchers closer to understanding how the visual cortex recognizes objects and scenes.

Some car companies have for years been trying to develop computer systems that allow their vehicles to identify pedestrians and other vehicles amidst a crowded background and provide drivers with a warning if they get too close. This type of recognition is very easy for humans, Poggio says, but "we're not conscious of what goes on in our head[s] when we do this." When a person is shown a picture, even for just a fraction of a second, the brain's visual cortex, known as the ventral pathway, recognizes what it sees immediately. The visual cortex is a large part of the brain's processing system and one of the most complex. Poggio says that understanding how it works could be a significant step toward knowing how the whole brain operates. "Vision is just a proxy for intelligence," he says. The human brain is much more aware of how it solves complex problems such as playing chess or solving algebra equations, which is why computer programmers have had so much success building machines that emulate this type of activity.

Thus far, Poggio's research has modeled "feedforward" vision, which occurs when an image is first presented to the eye. He and his colleagues are now looking to develop new models that help them better understand how the brain works once the eye begins to scan the scene portrayed in an image and interpret spatial relationships among objects in the scene. The hope is that this will ultimately lead to computer software that can do the same thing. Keep your eyes peeled.
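
To make "feedforward" concrete, here is a minimal sketch of the alternating template-matching and pooling stages that characterize HMAX-style models of the ventral pathway, the family of models associated with Poggio's group. The filter values, pool size and input here are placeholder assumptions for illustration, not the published model:

    import numpy as np
    from scipy.signal import convolve2d

    # Minimal sketch of an HMAX-style feedforward hierarchy: S layers do
    # template matching (here, two toy oriented edge filters), C layers
    # pool with a local max, trading exact position for invariance.
    # Filter values, pool size and input are illustrative placeholders.

    def s1(image, filters):
        # Template matching: respond wherever a filter pattern appears.
        return [np.abs(convolve2d(image, f, mode="valid")) for f in filters]

    def c1(maps, pool=4):
        # Invariance: take the max over non-overlapping pool x pool blocks.
        pooled = []
        for m in maps:
            h = (m.shape[0] // pool) * pool
            w = (m.shape[1] // pool) * pool
            blocks = m[:h, :w].reshape(h // pool, pool, w // pool, pool)
            pooled.append(blocks.max(axis=(1, 3)))
        return pooled

    filters = [np.array([[1.0, 1.0], [-1.0, -1.0]]),   # horizontal edge
               np.array([[1.0, -1.0], [1.0, -1.0]])]   # vertical edge

    image = np.random.rand(64, 64)  # stand-in for a photograph
    features = np.concatenate([m.ravel() for m in c1(s1(image, filters))])
    print(features.shape)           # fixed-length vector for a classifier

Stacking more such stages yields a fixed feature vector that a simple classifier can label in one pass, with no feedback loops: the essence of the feedforward account of rapid recognition.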

Laser could provide breath test for cancer, asthma

PHOTO: University of Colorado at Boulder physics doctoral student Michael Thorpe holds a detection chamber while standing next to a laser apparatus in a photo released by the university on Tuesday.

A new laser analyzer might be able to help doctors detect cancer, asthma or other diseases by sampling a patient's breath, researchers reported on Tuesday.

The device uses mirrors to bounce the laser's light back and forth until it has touched every molecule a patient exhales in a single breath, the team reported in the journal Optics Express.

This can help detect minute traces of compounds that can point to various diseases, including cancer, asthma, diabetes and kidney malfunction, they said.

"This technique can give a broad picture of many different molecules in the breath all at once," Jun Ye, who led the research at the University of Colorado, said in a statement.

Ye's team at a joint institute of the National Institute of Standards and Technology and the university developed a new technique, called cavity-enhanced direct optical frequency comb spectroscopy.

When animals and people breathe out, they exhale not only gases that are not needed, such as carbon dioxide, but also compounds that result from the metabolism of cells.

"To date, researchers have identified over 1,000 different compounds contained in human breath," Ye's team wrote in the report, published on the Internet here

Some compounds point to abnormal function: methylamine is produced in higher amounts by liver and kidney disease, ammonia appears when the kidneys are failing, and elevated acetone is caused by diabetes.

People with asthma may produce too much nitric oxide, exhaled in the breath, while smokers produce high levels of carbon monoxide.

Last February, a team at the Cleveland Clinic in Ohio reported they could use a mass spectrometer breath test to detect lung cancer in patients. Tumor cells produce volatile organic compounds at higher levels than healthy cells do.

In 2006, researchers found dogs could be trained to smell cancer on the breath of patients with 99 percent accuracy.

Ye's team used their method to analyze the breath of several student volunteers and found they could detect trace signatures of ammonia, carbon monoxide, and methane in breath.

Their volunteers breathed into an optical cavity, which is a space between two mirrors. When pulsed laser light was shone into this space, it bounced back and forth multiple times, striking all the molecules in the sample, Ye's team said.

Spectrometry analysis showed which frequencies of light were absorbed, which in turn served as an indirect measure of the molecules present in the sample.
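
The payoff of the mirror trick can be quantified with the standard Beer-Lambert relation: light bouncing between mirrors of reflectivity R makes on the order of 1/(1 - R) passes, multiplying the effective absorption path. A short sketch with representative numbers (the reflectivity, cavity length and absorption coefficient are assumptions, not figures from Ye's paper):

    import math

    # Beer-Lambert absorption in a high-finesse optical cavity. Light
    # bouncing between mirrors of reflectivity R makes on the order of
    # 1 / (1 - R) passes, multiplying the effective path length.
    # All three numbers below are assumed, representative values.

    R = 0.9999            # mirror reflectivity
    cavity_length = 0.5   # meters between the mirrors
    alpha = 1e-7          # absorption coefficient of a trace gas, 1/m

    effective_path = cavity_length / (1.0 - R)           # about 5,000 m
    single_pass = 1.0 - math.exp(-alpha * cavity_length)
    multi_pass = 1.0 - math.exp(-alpha * effective_path)

    print(f"effective path: {effective_path:,.0f} m")
    print(f"absorbed fraction, single pass: {single_pass:.1e}")
    print(f"absorbed fraction, with cavity: {multi_pass:.1e}")

With these assumed values the cavity turns a half-meter cell into an effective five-kilometer path, boosting the absorption signal by roughly four orders of magnitude, which is what makes trace compounds detectable at all.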

Space Wars - Coming to the Sky Near You?

A recent shift in U.S. military strategy and provocative actions by China threaten to ignite a new arms race in space. But would placing weapons in space be in anyone's national interest?

SPACE WEAPONS concepts include a variety of satellite killers (projectiles, microwave- and laser-beam weapons, and orbital mines) as well as arms launched from space at surface targets, such as the heavy tungsten bunker busters nicknamed "rods from God."

“Take the high ground and hold it!” has been standard combat doctrine for armies since ancient times. Now that people and their machines have entered outer space, it is no surprise that generals the world over regard Earth orbit as the key to modern warfare. But until recently, a norm had developed against the weaponization of space—even though there are no international treaties or laws explicitly prohibiting nonnuclear antisatellite systems or weapons placed in orbit. Nations mostly shunned such weapons, fearing the possibility of destabilizing the global balance of power with a costly arms race in space.

 

In war, do not launch an ascending attack head-on against the enemy who holds the high ground. Do not engage the enemy when he makes a descending attack from high ground. Lure him to level ground to do battle.
—Sun Tzu, Chinese military strategist, The Art of War, circa 500 B.C.

That consensus is now in danger of unraveling. In October 2006 the Bush administration adopted a new, rather vaguely worded National Space Policy that asserts the right of the U.S. to conduct “space control” and rejects “new legal regimes or other restrictions that seek to prohibit or limit U.S. access to or use of space.” Three months later the People’s Republic of China shocked the world by shooting down one of its own aging Fengyun weather satellites, an act that resulted in a hailstorm of dangerous orbital debris and a deluge of international protests, not to mention a good deal of hand-wringing in American military and political circles. The launch was the first test of a dedicated antisatellite weapon in more than two decades—making China only the third country, after the U.S. and the Russian Federation, to have demonstrated such a technology. Many observers wondered whether the test might be the first shot in an emerging era of space warfare.

Critics maintain it is not at all clear that a nation’s security would be enhanced by developing the means to wage space war. After all, satellites and even orbiting weapons, by their very nature, are relatively easy to spot and easy to track, and they are likely to remain highly vulnerable to attack no matter what defense measures are taken. Further, developing antisatellite systems would almost surely lead to a hugely expensive and potentially runaway arms race, as other countries would conclude that they, too, must compete. And even tests of the technology needed to conduct space battles—not to mention a real battle—could generate enormous amounts of wreckage that would continue to orbit Earth. Converging on satellites and crewed space vehicles at speeds approaching several miles a second, such space debris would threaten satellite-based telecommunications, weather forecasting, precision navigation, even military command and control, potentially sending the world’s economy back to the 1950s.

“Star Wars” Redux
Since the dawn of the space age, defense planners have hatched concepts for antisatellite and space-based weaponry—all in the interest of exploiting the military advantages of the ultimate high ground. Perhaps the most notable effort was President Ronald Reagan’s Strategic Defense Initiative (SDI)—derided by its critics as “Star Wars.” Yet by and large, U.S. military strategy has never embraced such weapons.

Traditionally, space weapons have been defined as destructive systems that operate in outer space after having been launched directly from Earth or parked in orbit. The category includes antisatellite weapons; laser systems that couple ground-based lasers with airship- or satellite-mounted mirrors, which could reflect a laser beam beyond the ground horizon; and orbital platforms that could fire projectiles or energy beams from space. (It is important to note that all nations would presumably avoid using a fourth kind of antisatellite weapon, namely, a high-altitude nuclear explosion. The electromagnetic pulse and cloud of highly charged particles created by such a blast would likely disable or destroy nearly all satellites and manned spacecraft in orbit [see “Nuclear Explosions in Orbit,” by Daniel G. Dupont; Scientific American, June 2004].)

But virtually no statement about space weapons goes politically uncontested. Recently some proponents of such weapons have sought to expand the long-held classification I just described to include two existing technologies that depend on passage through space: intercontinental ballistic missiles (ICBMs) and ground-based electronic warfare systems. Their existence, or so the argument goes, renders moot any question about whether to build space weapons systems. By the revised definition, after all, “space weapons” already exist. Whatever the exact meaning of the term, however, the questions such weapons raise are hardly new to think tanks and military-planning circles in Washington: Is it desirable, or even feasible, to incorporate antisatellite weapons and weapons fired from orbit into the nation’s military strategy?

The new National Space Policy, coupled with the Chinese test, has brought renewed urgency to that behind-the-scenes debate. Many American military leaders expressed alarm in the wake of the Chinese test, worrying that in any conflict over Taiwan, China could threaten U.S. satellites in low Earth orbit. In April 2007 Michael Moseley, the U.S. Air Force chief of staff, compared China’s antisatellite test with the launch of Sputnik by the Soviet Union in 1957, an act that singularly intensified the arms race during the cold war. Moseley also revealed that the Pentagon had begun reviewing the nation’s satellite defenses, explaining that outer space was now a “contested domain.”

Congressional reaction fell along predictable political lines. Conservative “China hawks” such as Senator Jon Kyl of Arizona immediately called for the development of antisatellite weapons and space-based interceptors to counter Chinese capabilities. Meanwhile more moderate politicians, including Representative Edward Markey of Massachusetts, urged the Bush administration to begin negotiations aimed at banning all space weapons.

International Power Plays
Perhaps of even greater concern is that several other nations, including one of China’s regional rivals, India, may feel compelled to seek offensive as well as defensive capabilities in space. The U.S. trade journal Defense News, for instance, quoted unidentified Indian defense officials as stating that their country had already begun developing its own kinetic-energy (nonexplosive, hit-to-kill) and laser-based antisatellite weapons.

If India goes down that path, its archrival Pakistan will probably follow suit. Like India, Pakistan has a well-developed ballistic missile program, including medium-range missiles that could launch an antisatellite system. Even Japan, the third major Asian power, might join such a space race. In June 2007 the National Diet of Japan began considering a bill backed by the current Fukuda government that would permit the development of satellites for “military and national security” purposes.

As for Russia, in the wake of the Chinese test President Vladimir Putin reiterated Moscow’s stance against the weaponization of space. At the same time, though, he refused to criticize Beijing’s actions and blamed the U.S. instead. The American efforts to build a missile defense system, Putin charged, and the increasingly aggressive American plans for a military position in space were prompting China’s moves. Yet Russia itself, as a major spacefaring power that has incorporated satellites into its national security structure, would be hard-pressed to forgo entering an arms race in space.

Given the proliferation of spacefaring entities, proponents of a robust space warfare strategy believe that arming the heavens is inevitable and that it would be best for the U.S. to get there first with firepower. Antisatellite and space-based weapons, they argue, will be necessary not only to defend U.S. military and commercial satellites but also to deny any future adversary the use of space capabilities to enhance the performance of its forces on the battlefield.

Yet any arms race in space would almost inevitably destabilize the balance of power and thereby multiply the risks of global conflict. In such headlong competition—whether in space or elsewhere—equilibrium among the adversaries would be virtually impossible to maintain. Even if the major powers did achieve stability, that reality would still provide no guarantee that both sides would perceive it to be so. The moment one side saw itself to be slipping behind the other, the first side would be strongly tempted to launch a preemptive strike, before things got even worse. Ironically, the same would hold for the side that perceived itself to have gained an advantage. Again, there would be strong temptation to strike first, before the adversary could catch up. Finally, a space weapons race would ratchet up the chances that a mere technological mistake could trigger a battle. After all, in the distant void, reliably distinguishing an intentional act from an accidental one would be highly problematic.

Hit-to-Kill Interceptors
According to assessments by U.S. military and intelligence officials as well as by independent experts, the Chinese probably destroyed their weather satellite with a kinetic-energy vehicle boosted by a two-stage medium-range ballistic missile. Technologically, launching such direct-ascent antisatellite weapons is one of the simplest ways to take out a satellite. About a dozen nations and consortia can reach low Earth orbit (between roughly 100 and 2,000 kilometers, or 60 to 1,250 miles, high) with a medium-range missile; eight of those countries can reach geostationary orbit (about 36,000 kilometers, or 22,000 miles, above Earth).
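
Those altitudes translate directly into the closing speeds an interceptor must manage, since circular orbital velocity is v = sqrt(GM/r). A quick check with standard constants (the 500-kilometer altitude is a representative choice):

    import math

    # Circular orbital velocity v = sqrt(GM / r) at the altitudes in the
    # text, using standard values for Earth's gravitational parameter.

    GM = 3.986e14       # m^3 / s^2
    R_EARTH = 6.371e6   # mean Earth radius, m

    for name, altitude_km in [("low Earth orbit (500 km)", 500.0),
                              ("geostationary orbit", 35_786.0)]:
        r = R_EARTH + altitude_km * 1e3
        v = math.sqrt(GM / r)
        print(f"{name}: {v / 1e3:.1f} km/s ({v / 1609.34:.1f} miles/s)")

A satellite in low Earth orbit moves at about 7.6 kilometers a second, nearly five miles a second, which is why even a nonexplosive kinetic-energy vehicle can be so destructive and why the guidance problem discussed next is so hard.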

But the real technical hurdle to making a hit-to-kill vehicle is not launch capacity; it is the precision maneuverability and guidance technology needed to steer the vehicle into its target. Just how well China has mastered those techniques is unclear. Because the weather satellite was still operating when it was destroyed, the Chinese operators would have known its exact location at all times.

Ground-Based Lasers
The test of China’s direct-ascent antisatellite device came on the heels of press reports in September 2006 that the Chinese had also managed to “paint,” or illuminate, U.S. spy satellites with a ground-based laser. Was Beijing actually trying to “blind” or otherwise damage the satellites? No one knows, and no consensus seems to have emerged in official Washington circles about the Chinese intent. Perhaps China was simply testing how well its network of low-power laser-ranging stations could track American orbital observation platforms.

Even so, the test was provocative. Not all satellites have to be electronically “fried” to be put out of commission. A 1997 test of the army’s MIRACL system (for midinfrared advanced chemical laser) showed that satellites designed to collect optical images can be temporarily disrupted—dazzled—by low-power beams. It follows that among the satellites vulnerable to such an attack are the orbital spies.

The U.S. and the former Soviet Union began experimenting with laser-based antisatellite weapons in the 1970s. Engineers in both countries have focused on the many problems of building high-power laser systems that could reliably destroy low-flying satellites from the ground. Such systems could be guided by “adaptive optics”: deformable mirrors that can continuously compensate for atmospheric distortions. But tremendous amounts of energy would be needed to feed high-power lasers, and even then the range and effectiveness of the beams would be severely limited by dispersion, by attenuation as they passed through smoke or clouds, and by the difficulty of keeping the beams on-target long enough to do damage.

During the development of the SDI, the U.S. conducted several laser experiments from Hawaii, including a test in which a beam was bounced off a mirror mounted on a satellite. Laser experiments continue at the Starfire Optical Range at Kirtland Air Force Base in New Mexico. Pentagon budget documents from fiscal years 2004 through 2007 listed antisatellite operations among the goals of the Starfire research, but that language was removed from budget documents in fiscal year 2008 after Congress made inquiries. The Starfire system incorporates adaptive optics that narrow the outgoing laser beam and thus increase the density of its power. That capability is not required for imagery or tracking, further suggesting that Starfire could be used as a weapon.

Yet despite decades of work, battle-ready versions of directed-energy weapons still seem far away. An air force planning document, for instance, predicted in 2003 that a ground-based weapon able to “propagate laser beams through the atmosphere to [stun or kill low Earth orbit] satellites” could be available between 2015 and 2030. Given the current state of research, even those dates seem optimistic.

Co-orbital Satellites
Recent advances in miniaturized sensors, powerful onboard computers and efficient rocket thrusters have made a third kind of antisatellite technology increasingly feasible: the offensive microsatellite. One example that demonstrates the potential is the air force’s experimental satellite series (XSS) project, which is developing microsatellites intended to conduct “autonomous proximity operations” around larger satellites. The first two microsatellites in the program, the XSS-10 and XSS-11, were launched in 2003 and 2005. Though ostensibly intended to inspect larger satellites, such microsatellites could also ram target satellites or carry explosives or directed-energy payloads such as radio-frequency jamming systems or high-powered microwave emitters. Air force budget documents show that the XSS effort is tied to a program called Advanced Weapons Technology, which is dedicated to research on military laser and microwave systems.

During the cold war the Soviet Union developed, tested and even declared operational a co-orbital antisatellite system—a maneuverable interceptor with an explosive payload that was launched by missile into an orbit near a target satellite in low Earth orbit. In effect, the device was a smart “space mine,” but it was last demonstrated in 1982 and is probably no longer working. Today such an interceptor would likely be a microsatellite that could be parked in an orbit that would cross the orbits of several of its potential targets. It could then be activated on command during a close encounter.

In 2005 the air force described a program that would provide “localized” space “situational awareness” and “anomaly characterization” for friendly host satellites in geostationary orbit. The program is dubbed ANGELS (for autonomous nanosatellite guardian for evaluating local space), and the budget line believed to represent it focuses on acquiring “high value space asset defensive capabilities,” including a “warning sensor for detection of a direct ascent or co-orbital vehicle.” It is clear that such guardian nanosatellites could also serve as offensive weapons if they were maneuvered close to enemy satellites.

And the list goes on. A “parasitic satellite” would shadow or even attach itself to a target in geostationary orbit. Farsat, which was mentioned in an appendix to the [Donald] Rumsfeld Space Commission report in 2001, “would be placed in a ‘storage’ orbit (perhaps with many microsatellites housed inside) relatively far from its target but ready to be maneuvered in for a kill.”

Finally, the air force proposed some time ago a space-based radio-frequency weapon system, which “would be a constellation of satellites containing high-power radio-frequency transmitters that possess the capability to disrupt/destroy/disable a wide variety of electronics and national-level command and control systems.”

Air force planning documents from 2003 envisioned that such a technology would emerge after 2015. But outside experts think that orbital radio-frequency and microwave weapons are technically feasible today and could be deployed in the relatively near future.

Space Bombers
Though not by definition a space weapon, the Pentagon’s Common Aero Vehicle/Hypersonic Technology Vehicle (often called CAV) enters into this discussion because, like an ICBM, it would travel through space to strike Earth-bound targets. An unpowered but highly maneuverable hypersonic glide vehicle, the CAV would be deployed from a future hypersonic space plane, swoop down into the atmosphere from orbit and drop conventional bombs on ground targets. Congress recently began funding the project but, to avoid stoking a potential arms race in space, has prohibited any work to place weapons on the CAV. Although engineers are making steady progress on the key technologies for the CAV program, both the vehicle and its space plane mothership are still likely decades off.

Some of the congressional sensitivity to the design of the CAV may have arisen from another, much more controversial space weapons concept with parallel goals: hypervelocity rod bundles that would be dropped to Earth from orbital platforms. For decades air force planners have been thinking about placing weapons in orbit that could strike terrestrial targets, particularly buried, “hardened” bunkers and caches of weapons of mass destruction. Commonly called “rods from God,” the bundles would be made up of large tungsten rods, each as long as six meters (20 feet) and 30 centimeters (12 inches) across. Each rod would be hurled downward from an orbiting spacecraft and guided to its target at tremendous speed.

Both high costs and the laws of physics, however, challenge their feasibility. Ensuring that the projectiles do not burn up or deform from reentry friction while sustaining a precise, nearly vertical flight path would be extremely difficult. Calculations indicate that the nonexplosive rods would probably be no more effective than more conventional munitions. Furthermore, the expense of lofting the heavy projectiles into orbit would be exorbitant. Thus, despite continued interest in them, rods from God seem to fall into the realm of science fiction.
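
The energy argument is easy to make concrete. A back-of-the-envelope sketch using the rod dimensions above (the impact speed is an assumption, since a rod would shed much of its orbital velocity to drag during a steep reentry):

    import math

    # Kinetic energy of a tungsten "rod from God" at impact, using the
    # dimensions given in the text. The impact speed is an assumed round
    # number; a real rod would lose much of its ~7.6 km/s orbital
    # velocity on the way down.

    TUNGSTEN_DENSITY = 19_300.0   # kg / m^3
    length = 6.0                  # m (from the article)
    diameter = 0.30               # m (from the article)
    impact_speed = 3_000.0        # m / s, assumed

    mass = TUNGSTEN_DENSITY * math.pi * (diameter / 2) ** 2 * length
    energy = 0.5 * mass * impact_speed ** 2
    tnt_tons = energy / 4.184e9   # one ton of TNT = 4.184 GJ

    print(f"rod mass: {mass:,.0f} kg")
    print(f"impact energy: {energy:.2e} J (~{tnt_tons:.0f} tons of TNT)")

Under these assumptions the yield is on the order of nine tons of TNT, roughly one large conventional bomb, delivered by an eight-ton projectile that first had to be lofted into orbit.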

Obstacles to Space Weapons
What, then, is holding the U.S. (and other nations) back from a full-bore pursuit of space weapons? The countervailing pressures are threefold: political opposition, technological challenges and high costs.

The American body politic is deeply divided over the wisdom of making space warfare a part of the national military strategy. The risks are manifold. I remarked earlier on the general instabilities of an arms race, but there is a further issue of stability among the nuclear powers. Early-warning and spy satellites have traditionally played a crucial role in reducing fears of a surprise nuclear attack. But if antisatellite weapons disabled those eyes-in-the-sky, the resulting uncertainty and distrust could rapidly lead to catastrophe.

One of the most serious technological challenges posed by space weapons is the proliferation of space debris, to which I alluded earlier. According to investigators at the air force, NASA and Celestrak (an independent space-monitoring Web site), the Chinese antisatellite test left more than 2,000 pieces of junk, baseball-size and larger, orbiting the globe in a cloud that lies between about 200 kilometers (125 miles) and 4,000 kilometers (2,500 miles) above Earth’s surface. Perhaps another 150,000 objects a centimeter (about 0.4 inch) across and larger were released. High orbital velocities make even tiny pieces of space junk dangerous to spacecraft of all kinds. And ground stations cannot reliably monitor or track objects smaller than about five centimeters (two inches) across in low Earth orbit (around a meter in geostationary orbit), so satellites cannot count on being maneuvered out of the path of smaller debris. To avoid being damaged by the Chinese space debris, in fact, two U.S. satellites had to alter course. Any shooting war in space would raise the specter of a polluted space environment no longer navigable by Earth-orbiting satellites.

Basing weapons in orbit also presents difficult technical obstacles. They would be just as vulnerable as satellites are to all kinds of outside agents: space debris, projectiles, electromagnetic signals, even natural micrometeoroids. Shielding space weapons against such threats would also be impractical, mostly because shielding is bulky and adds mass, thereby greatly increasing launch costs. Orbital weapons would be mostly autonomous mechanisms, which would make operational errors and failures likely. The paths of objects in orbit are relatively easy to predict, which would make hiding large weapons problematic. And because satellites in low Earth orbit are overhead for only a few minutes at a time, keeping one of them constantly in range would require many weapons.

Finally, getting into space and operating there is extremely expensive: between $2,000 and $10,000 a pound to reach low Earth orbit and between $15,000 and $20,000 a pound for geostationary orbit. Each space-based weapon would require replacement every seven to 15 years, and in-orbit repairs would not be cheap, either.
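
Those per-pound figures compound quickly over a constellation's life. A minimal cost sketch using the ranges just quoted (the 5,000-pound weapon mass is an illustrative assumption):

    # Lifecycle launch cost of one orbital weapon, using the per-pound
    # ranges quoted above. The 5,000-pound mass is an assumption.

    mass_lb = 5_000
    ranges = {"low Earth orbit": (2_000, 10_000),       # $/lb, from text
              "geostationary orbit": (15_000, 20_000)}  # $/lb, from text

    for orbit, (low, high) in ranges.items():
        print(f"{orbit}: ${mass_lb * low / 1e6:.0f}M to "
              f"${mass_lb * high / 1e6:.0f}M per copy, "
              f"recurring every 7 to 15 years")

Launch alone runs tens of millions of dollars per weapon per replacement cycle, before development, operations or in-orbit repairs are counted.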

Alternatives to Space Warfare
Given the risks of space warfare to national and international security, as well as the technical and financial hurdles that must be overcome, it would seem only prudent for spacefaring nations to find ways to prevent an arms race in space. The U.S. focus has been to reduce the vulnerability of its satellite fleet and explore alternatives to its dependence on satellite services. Most other space-capable countries are instead seeking multilateral diplomatic and legal measures. The options range from treaties that would ban antisatellite and space-based weapons to voluntary measures that would help build transparency and mutual confidence.

The Bush administration has adamantly opposed any form of negotiations regarding space weapons. Opponents of multilateral space weapons agreements contend that others (particularly China) will sign up but build secret arsenals at the same time, because such treaty violations cannot be detected. They argue further that the U.S. cannot sit idly as potential adversaries gain spaceborne resources that could enhance their terrestrial combat capabilities.

Proponents of international treaties counter that failure to negotiate such agreements entails real opportunity costs. An arms race in space may end up compromising the security of all nations, including that of the U.S., while it stretches the economic capacities of the competitors to the breaking point. And whereas many advocates of a space weapons ban concede that it will be difficult to construct a fully verifiable treaty—because space technology can be used for both military and civilian ends—effective treaties already exist that do not require strict verification. A good example is the Biological Weapons Convention. Certainly a prohibition on the testing and use (as opposed to the deployment) of the most dangerous class of near-term space weapons—destructive (as opposed to jamming) antisatellite systems—would be easily verifiable, because earthbound observers can readily detect orbital debris. Furthermore, any party to a treaty would know that all its space launches would be tracked from the ground, and any suspicious object in orbit would promptly be labeled as such. The international outcry that would ensue from such overt treaty violations could deter would-be violators.

Since the mid-1990s, however, progress on establishing a new multilateral space regime has lagged. The U.S. has blocked efforts at the United Nations Conference on Disarmament in Geneva to begin negotiations on a treaty to ban space weapons. China, meanwhile, has refused to accept anything less. Hence, intermediate measures such as voluntary confidence-building, space traffic control or a code of responsible conduct for spacefaring nations have remained stalled.

Space warfare is not inevitable. But the recent policy shift in the U.S. and China’s provocative actions have highlighted the fact that the world is approaching a crossroads. Countries must come to grips with their strong self-interest in preventing the testing and use of orbital weapons. The nations of Earth must soon decide whether it is possible to sustain the predominantly peaceful human space exploration that has already lasted half a century. The likely alternative would be unacceptable to all.

 

As Nanotech's Promise Grows, Will Puny Particles Present Big Health Problems?

Amid the great promise nanotechnology offers, big questions remain on health dangers posed by exposure to tissue-penetrating particles

 
THE BENEFITS (AND RISKS?) OF NANOTECHNOLOGY: Nanotechnology offers great promise in medicine and many other fields, but does it also pose hazards?

It seems like a noble goal: amid growing concern about the health risks of nanoparticles, why not keep tabs on the health of people who work with the little buggers? But it turns out that's easier said than done.

"You could probably count the world's published literature on exposure to nanoparticles on both hands," says Paul Schulte, director of the Education and Information Division of the National Institute for Occupational Safety and Health (NIOSH). "And yet a lot of words have been written about nanotechnology, and it leads one to want to take action. We're struggling with finding a scientific basis on which to do that."

Unfortunately, he says, a NIOSH draft proposal—titled "Interim Guidance on Medical Screening of Workers Potentially Exposed to Engineered Nanoparticles"—is limited in the guidance it can provide. The reason: too little information. Scientists have only the broadest suspicions about harm that nanoparticles may cause. How, then, to recommend which workers should be screened and exactly what they should be tested for?

"In essence, you're going on a fishing expedition," says Andrew Maynard, a former NIOSH researcher who is now chief science adviser at the nonprofit Project on Emerging Nanotechnologies, part of the Woodrow Wilson International Center for Scholars in Washington, D.C. "We need to make this link to disease, but you can't just do that by randomly testing people."

NIOSH scientists are not the only ones scratching their heads over the possible dangers of nanotech. It is one of the world's hottest technologies, and experts agree that it poses unpredictable, potentially serious health risks. Beyond that, not much is known.

Nanotechnology involves the manipulation of teeny particles, measuring between one and 100 nanometers. (A nanometer is one billionth of a meter, or roughly 80,000 times smaller than the width of a human hair.) At that size, substances shed some of their usual rules of behavior, which is both the magic and the menace of nanotech.

Physically, nanoparticles are so minute that they can penetrate deep into the body. Animal studies have found, for example, that some can cross the blood–brain barrier, which normally protects the brain from toxins in the bloodstream. That may be great if you are using carbon nanotubes to deliver chemotherapy drugs to people with brain tumors, as cancer researchers in California (at City of Hope, a cancer research and treatment center in Duarte, in collaboration with NASA's Jet Propulsion Laboratory in Pasadena) hope to do. It may not be so awesome if the particles enter the bloodstream by accident rather than as part of a medical treatment.

Some of the worry about exposure to engineered nanoparticles arises from their unintended counterparts, often found in air pollution. The puniest bits of soot in diesel exhaust, known as ultrafines, measure on the nanoscale. When inhaled, they journey into the smallest air passages in the lungs, which are off-limits to larger particles. There they cause respiratory problems and, more surprisingly, heart disease, according to University of Rochester researcher Günter Oberdörster and others.

Chemically, nanoparticles tend to be more reactive than larger amounts of the same substance, because they have more surface area and therefore more opportunity to interact with other substances. That means a chemical that's normally harmless might be toxic in minuscule doses. Animal studies show that inhaled nanoparticles can cause pulmonary inflammation, move from the lungs to other organs, and interfere with cell signaling. Shrink something to nanosize and it can do surprising things: change color, become soluble, conduct electricity.
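
The reactivity claim follows from simple geometry: for spheres of radius r, total surface area per unit volume scales as 3/r, so subdividing the same gram of material multiplies its exposed surface enormously. A quick illustration (the density and particle sizes are arbitrary assumptions):

    # Total surface area of one gram of material divided into spheres of
    # radius r. For n spheres, area = n * 4*pi*r^2 and volume =
    # n * (4/3)*pi*r^3, so total area = 3 * volume / r: it grows as the
    # particles shrink. Density and radii below are arbitrary assumptions.

    density = 5_000.0        # kg / m^3, a generic solid
    volume = 1e-3 / density  # total volume of 1 gram, m^3

    for radius_m in (1e-3, 1e-6, 1e-8):   # 1 mm, 1 micron, 10 nm
        area = 3.0 * volume / radius_m
        print(f"radius {radius_m * 1e9:>13,.0f} nm -> "
              f"{area:10.4f} m^2 of exposed surface")

Under these assumptions, grinding a one-millimeter grain down to 10-nanometer particles multiplies the exposed surface a hundred-thousandfold, from well under a square millimeter's worth to tens of square meters.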

At the same time, the potential benefits are enormous. Medical researchers hold out hope for "nanomiracles" ranging from drugs that fight radiation poisoning to a shoebox-size portable genetics testing lab. Current and potential green applications abound: window coatings that block heat but not light, more efficient solar panels, energy-saving traffic lights. Researchers at Lehigh University in Bethlehem, Pa., say that because of their size and reactivity, iron nanoparticles can decontaminate solvent-soaked soil up to 1,000 times faster than a conventional iron mixture.

Worldwide, sales of nano-enabled products reached $50 billion in 2007 and are projected to hit $150 billion this year, according to New York City–based Lux Research, an industry consultant on emerging technologies. Nanomaterials make clothing resist stains and sunscreen turn clear on your skin. Manufacturers put microbe-killing nanosilver in washing machines and plastic food storage containers. We can buy faster computer chips, lighter and stronger bicycles, fleece without static cling—all thanks to nanotechnology.

With more than 500 nano-enabled consumer products on the market, some people worry about stray particles finding their way into our food, air and water. But history tells us that those most likely to experience any ill effects are the people who make the products, not those who buy them.

Coal miners used to take a canary into the shaft to warn them of deadly methane: if the canary passed out, it was time to come up for air. Historically, workers have often played the role of canary with other toxics. From leaded gasoline to mercury in felt hats, laborers absorb the highest exposures and are the first to get sick.

No wonder, then, that NIOSH is pushing for better understanding of the health effects of nanoparticles. But as Schulte notes, far more is unknown than known.

And as his colleague Doug Trout points out, it is a mistake to talk about nanomaterials as a single entity. "They're a whole universe," with no one-size-fits-all answers.

It is clear that inhaled nanoparticles can make their way into the bloodstream and throughout the body. Can they also penetrate the skin? What happens when they are ingested? Nobody knows. The size and shape of the particle are critical variables. And what about the amount? Nobody knows. Also, which companies are using nanomaterials, especially in sprays or powders that can easily be inhaled? How many workers might be affected? Nobody knows.

The reason: nanomaterials are completely unregulated; industry is under no obligation to keep records on potential hazards or anything else. NIOSH is a research institute; it can recommend that employers reduce worker exposure—and ways they might do that—but it has no enforcement power. Although the Environmental Protection Agency is a regulatory agency, it, like NIOSH, is in the early stages of gathering data. Last week, it released a long-awaited proposal asking businesses to voluntarily report safety data on engineered nanomaterials. Of the more than $1.3 billion budgeted for federal nanotech research in fiscal year 2006, only $38 million was targeted at investigating environmental, health and safety risks; the rest was earmarked for research and development.

There's currently a push from both inside and outside the government for the feds to do more. A recent Congressional Research Service report urges the multiagency National Nanotechnology Initiative (NNI) to establish an environmental and safety research agenda with real priorities—something NNI has been promising to do since 2006.

U.S. Rep. Albert Wynn, a Maryland Democrat who chairs the House Subcommittee on the Environment and Hazardous Materials, says he plans to hold hearings this year on "the serious gaps in the current statutory and regulatory framework." At the state level, Wisconsin legislator Terese Berceau (D-Madison) has asked her state's departments of Natural Resources, Health and Family Services, and Agriculture, Trade and Consumer Protection to work with her to create a registry of businesses that make nanoparticles as a first step toward tracking their use and potential health effects.

NIOSH is also interested in the possibility of establishing registries of workers exposed to nanoparticles. It is unclear who would run these registries, what information they would collect, or how the information would be used.

"A registry is not an end in itself," Patrick Conner, medical director for the Germany-based chemical company BASF, said during a recent meeting held by NIOSH in Cincinnati to get input on its nanotech draft proposal. "If you're going to gather data, you have to act on it."

And action is crucial, the Project on Emerging Nanotechnologies' Maynard says—not only to protect workers and consumers, but also to protect the promise of the technology itself.

"If you look at the potential of what we can achieve with nanotechnology, it's really quite incredible," he says. "But if we want to realize the long-term benefit, we've got to get these health, safety, ethical and social issues right as early as possible."

In the past, Maynard says, policymakers and businesses have developed technology and then tried to address problems as they emerge. "This is an opportunity," he notes, "to do things the other way around."
