Technical News

Galactic Wi-fi?


Evening settles on the Allen Telescope Array construction site at Hat Creek in Northern California. 


Google Lunar X Prize contestants can utilize the SETI Institute's Allen Telescope Array for downlinking "Mooncasts" from their respective Moon vehicles.

Incredibly, it's been only a bit more than a century since Oliver Heaviside consolidated the work of several 19th century physicists into the four compact mathematical formulations known as Maxwell's Equations. You may gleefully recall them from sophomore physics.

Aside from their display by the rabidly nerdy on pretentious t-shirts, the formulae have a splendid utility: they describe all electromagnetic radiation — in particular, light and radio. In the short time since their discovery, we have been able to milk these elegant equations to build crude spark transmitters, and eventually to develop the diminutive cell phones that allow you to blithely ring up your pals while comfortably seated in restaurants and movie theaters. We have exploited Maxwell's Equations like an old-growth forest, and many technical types aver that we know all there is to know about them.

Not true. And the fact that it's untrue may affect our thinking about SETI.

Today's SETI experiments generally look for what are politely termed "narrow-band signals." In other words, the receivers at the back ends of our radio telescopes search wide swaths of the spectrum looking for a signal that's at one spot on the dial — a signal that's very constrained in frequency. By putting all the transmitted power into this small bandwidth, the aliens can ensure that their signal will stand out like Yao Ming at a Munchkin picnic.

That makes sense — at least if the aliens want only to help us find their signal. But they might have other priorities. In particular, the history of earthly communication suggests that there is an inexorable pressure to increase the bit rate of any transmission channel. A half-decade ago, most readers accessed this web site with a simple dial-up phone line. Today, you're more likely to have some sort of wide-band service, which is to say, you're inhaling Internet bits at least ten times quicker than before.

More generally, in 150 years, we've gone from telegraph wires, capable of a few bits per second, to optical fibers that are billions of times speedier. The idea of "more bandwidth" is so compelling, the phrase has entered the lexicon of everyday speech — even among those who couldn't tell a hertz from a hub nut. Communication technology is always driven to send more bits — more information — per second.

Now consider the plight of aliens wishing to get in touch. Because the separation between one civilization and another is likely to be at least hundreds — and maybe thousands — of light-years, any interstellar pinging is effectively one-way. Back and forth conversations will take too long. So perhaps the aliens will opt to send, not the easiest-to-find signal, but a signal that says it all — a signal bristling with information. If you're going to stuff a message into a bottle, why not use onion-skin paper and write small?

The straightforward way to get more information down a radio channel is, as everyone knows, by using greater bandwidth. Nearly once a week someone sends me an e-mail pointing this out, saying that SETI should be looking for wide-band signals, not narrow-band ones. But there's a problem here. While sending a wide-band, information-rich signal between nearby stars is perfectly practical (assuming you're willing to pay the power bill), once the distance exceeds a thousand light-years or so the billowing hot gas that permeates interstellar space begins to wreak havoc and destruction on the transmission. A process of "dispersion" occurs, which works to slow the broadcast — but it slows different frequencies by different amounts. The result is to distort a wide-bandwidth signal in much the way that a highly reverberant hall would distort the music from an orchestra. A narrow-band signal (the acoustical analog is a simple flute note) would not be adversely affected.
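The scale of this dispersive smearing is easy to estimate with the standard cold-plasma delay formula from pulsar astronomy; the dispersion measure used below is an assumed, illustrative value for a path on the order of a thousand light-years:

```python
# Cold-plasma dispersion: lower frequencies arrive later. Standard
# pulsar-astronomy approximation: delay_ms ≈ 4.149 * DM * f_GHz**-2,
# with DM (dispersion measure) in pc cm^-3.
def dispersion_delay_ms(freq_ghz: float, dm: float) -> float:
    return 4.149 * dm * freq_ghz ** -2

def smearing_ms(f_lo_ghz: float, f_hi_ghz: float, dm: float) -> float:
    """Differential delay across a channel: how much the band is 'blurred'."""
    return dispersion_delay_ms(f_lo_ghz, dm) - dispersion_delay_ms(f_hi_ghz, dm)

dm = 300.0  # assumed value, plausible for 1,000+ light-years of interstellar gas

# A 100 MHz-wide channel near 1.4 GHz is smeared by tens of milliseconds...
wide = smearing_ms(1.40, 1.50, dm)
# ...while a 1 kHz-wide channel at the same frequency is barely touched.
narrow = smearing_ms(1.449999, 1.450, dm)
```

This is the quantitative version of the orchestra-versus-flute analogy: the wide channel arrives scrambled by tens of milliseconds, the narrow one by less than a microsecond.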

So it seems that there may be difficulties in sending certain kinds of complex radio signals over significant distances in the Galaxy. Interstellar correspondence could be restricted to mere postcards, which would be a disappointment to aliens interested in heavy-duty data distribution.

However, some Swedish physicists are pointing out a possible scheme for beating this rap. In careful analyses of some of the subtle properties of Maxwell's Equations, Bo Thide and Jan Bergman at the Swedish Institute of Space Physics in Uppsala have explored a property of radio waves called orbital angular momentum. You can think of this orbital momentum as a twisting of the wave's electric and magnetic fields — a twisting that would show up if you were measuring the wave with an array of antennas. The technical details are intricate, but suffice it to say that the Swedish scientists are noting another way to send information in a radio signal — even a narrow-band radio signal — by encoding it in the orbital angular momentum.

It's as if they've found "subspace channels," à la Star Trek: hidden highways down which additional bits can be moved. And there's reason to think that these momentum channels might be impervious to the interstellar jumbling that afflicts the usual types of wide-band signals when sent over great distances.

So it may be that our search for narrow-band signals is actually a very good SETI strategy, and not just an obvious one. While such monotonic messages may seem to be elementary and devoid of much information, they could be laden with additional, hidden complexity.

The investigation of new transmission modes by Thide and Bergman hints that if we do find a signal from ET, we may wish to reconfigure our radio telescopes to look for encoding of the message via such subtle effects as orbital angular momentum. A simple signal may only be a cipher for a more complex message, and there may be more things in heaven and earth than even Maxwell had dreamt of …


A Robot in Every Home

The leader of the PC revolution predicts that the next hot field will be robotics

Imagine being present at the birth of a new industry. It is an industry based on groundbreaking new technologies, wherein a handful of well-established corporations sell highly specialized devices for business use and a fast-growing number of start-up companies produce innovative toys, gadgets for hobbyists and other interesting niche products. But it is also a highly fragmented industry with few common standards or platforms. Projects are complex, progress is slow, and practical applications are relatively rare. In fact, for all the excitement and promise, no one can say with any certainty when--or even if--this industry will achieve critical mass. If it does, though, it may well change the world.

Of course, the paragraph above could be a description of the computer industry during the mid-1970s, around the time that Paul Allen and I launched Microsoft. Back then, big, expensive mainframe computers ran the back-office operations for major companies, governmental departments and other institutions. Researchers at leading universities and industrial laboratories were creating the basic building blocks that would make the information age possible. Intel had just introduced the 8080 microprocessor, and Atari was selling the popular electronic game Pong. At homegrown computer clubs, enthusiasts struggled to figure out exactly what this new technology was good for.

But what I really have in mind is something much more contemporary: the emergence of the robotics industry, which is developing in much the same way that the computer business did 30 years ago. Think of the manufacturing robots currently used on automobile assembly lines as the equivalent of yesterday's mainframes. The industry's niche products include robotic arms that perform surgery, surveillance robots deployed in Iraq and Afghanistan that dispose of roadside bombs, and domestic robots that vacuum the floor. Electronics companies have made robotic toys that can imitate people or dogs or dinosaurs, and hobbyists are anxious to get their hands on the latest version of the Lego robotics system.

Meanwhile some of the world's best minds are trying to solve the toughest problems of robotics, such as visual recognition, navigation and machine learning. And they are succeeding. At the 2004 Defense Advanced Research Projects Agency (DARPA) Grand Challenge, a competition to produce the first robotic vehicle capable of navigating autonomously over a rugged 142-mile course through the Mojave Desert, the top competitor managed to travel just 7.4 miles before breaking down. In 2005, though, five vehicles covered the complete distance, and the race's winner did it at an average speed of 19.1 miles an hour. (In another intriguing parallel between the robotics and computer industries, DARPA also funded the work that led to the creation of Arpanet, the precursor to the Internet.)

What is more, the challenges facing the robotics industry are similar to those we tackled in computing three decades ago. Robotics companies have no standard operating software that could allow popular application programs to run in a variety of devices. The standardization of robotic processors and other hardware is limited, and very little of the programming code used in one machine can be applied to another. Whenever somebody wants to build a new robot, they usually have to start from square one.

Despite these difficulties, when I talk to people involved in robotics--from university researchers to entrepreneurs, hobbyists and high school students--the level of excitement and expectation reminds me so much of that time when Paul Allen and I looked at the convergence of new technologies and dreamed of the day when a computer would be on every desk and in every home. And as I look at the trends that are now starting to converge, I can envision a future in which robotic devices will become a nearly ubiquitous part of our day-to-day lives. I believe that technologies such as distributed computing, voice and visual recognition, and wireless broadband connectivity will open the door to a new generation of autonomous devices that enable computers to perform tasks in the physical world on our behalf. We may be on the verge of a new era, when the PC will get up off the desktop and allow us to see, hear, touch and manipulate objects in places where we are not physically present.

From Science Fiction to Reality
The word "robot" was popularized in 1921 by Czech playwright Karel Capek, but people have envisioned creating robotlike devices for thousands of years. In Greek and Roman mythology, the gods of metalwork built mechanical servants made from gold. In the first century A.D., Heron of Alexandria--the great engineer credited with inventing the first steam engine--designed intriguing automatons, including one said to have the ability to talk. Leonardo da Vinci's 1495 sketch of a mechanical knight, which could sit up and move its arms and legs, is considered to be the first plan for a humanoid robot.

Over the past century, anthropomorphic machines have become familiar figures in popular culture through books such as Isaac Asimov's I, Robot, movies such as Star Wars and television shows such as Star Trek. The popularity of robots in fiction indicates that people are receptive to the idea that these machines will one day walk among us as helpers and even as companions. Nevertheless, although robots play a vital role in industries such as automobile manufacturing--where there is about one robot for every 10 workers--the fact is that we have a long way to go before real robots catch up with their science-fiction counterparts.

One reason for this gap is that it has been much harder than expected to enable computers and robots to sense their surrounding environment and to react quickly and accurately. It has proved extremely difficult to give robots the capabilities that humans take for granted--for example, the abilities to orient themselves with respect to the objects in a room, to respond to sounds and interpret speech, and to grasp objects of varying sizes, textures and fragility. Even something as simple as telling the difference between an open door and a window can be devilishly tricky for a robot.

But researchers are starting to find the answers. One trend that has helped them is the increasing availability of tremendous amounts of computer power. One megahertz of processing power, which cost more than $7,000 in 1970, can now be purchased for just pennies. The price of a megabit of storage has seen a similar decline. The access to cheap computing power has permitted scientists to work on many of the hard problems that are fundamental to making robots practical. Today, for example, voice-recognition programs can identify words quite well, but a far greater challenge will be building machines that can understand what those words mean in context. As computing capacity continues to expand, robot designers will have the processing power they need to tackle issues of ever greater complexity.

Another barrier to the development of robots has been the high cost of hardware, such as sensors that enable a robot to determine the distance to an object as well as motors and servos that allow the robot to manipulate an object with both strength and delicacy. But prices are dropping fast. Laser range finders that are used in robotics to measure distance with precision cost about $10,000 a few years ago; today they can be purchased for about $2,000. And new, more accurate sensors based on ultrawideband radar are available for even less.

Now robot builders can also add Global Positioning System chips, video cameras, array microphones (which are better than conventional microphones at distinguishing a voice from background noise) and a host of additional sensors for a reasonable expense. The resulting enhancement of capabilities, combined with expanded processing power and storage, allows today's robots to do things such as vacuum a room or help to defuse a roadside bomb--tasks that would have been impossible for commercially produced machines just a few years ago.

A BASIC Approach
In February 2004 I visited a number of leading universities, including Carnegie Mellon University, the Massachusetts Institute of Technology, Harvard University, Cornell University and the University of Illinois, to talk about the powerful role that computers can play in solving some of society's most pressing problems. My goal was to help students understand how exciting and important computer science can be, and I hoped to encourage a few of them to think about careers in technology. At each university, after delivering my speech, I had the opportunity to get a firsthand look at some of the most interesting research projects in the school's computer science department. Almost without exception, I was shown at least one project that involved robotics.

At that time, my colleagues at Microsoft were also hearing from people in academia and at commercial robotics firms who wondered if our company was doing any work in robotics that might help them with their own development efforts. We were not, so we decided to take a closer look. I asked Tandy Trower, a member of my strategic staff and a 25-year Microsoft veteran, to go on an extended fact-finding mission and to speak with people across the robotics community. What he found was universal enthusiasm for the potential of robotics, along with an industry-wide desire for tools that would make development easier. "Many see the robotics industry at a technological turning point where a move to PC architecture makes more and more sense," Tandy wrote in his report to me after his fact-finding mission. "As Red Whittaker, leader of [Carnegie Mellon's] entry in the DARPA Grand Challenge, recently indicated, the hardware capability is mostly there; now the issue is getting the software right."

Back in the early days of the personal computer, we realized that we needed an ingredient that would allow all of the pioneering work to achieve critical mass, to coalesce into a real industry capable of producing truly useful products on a commercial scale. What was needed, it turned out, was Microsoft BASIC. When we created this programming language in the 1970s, we provided the common foundation that enabled programs developed for one set of hardware to run on another. BASIC also made computer programming much easier, which brought more and more people into the industry. Although a great many individuals made essential contributions to the development of the personal computer, Microsoft BASIC was one of the key catalysts for the software and hardware innovations that made the PC revolution possible.

After reading Tandy's report, it seemed clear to me that before the robotics industry could make the same kind of quantum leap that the PC industry made 30 years ago, it, too, needed to find that missing ingredient. So I asked him to assemble a small team that would work with people in the robotics field to create a set of programming tools that would provide the essential plumbing so that anybody interested in robots with even the most basic understanding of computer programming could easily write robotic applications that would work with different kinds of hardware. The goal was to see if it was possible to provide the same kind of common, low-level foundation for integrating hardware and software into robot designs that Microsoft BASIC provided for computer programmers.

Tandy's robotics group has been able to draw on a number of advanced technologies developed by a team working under the direction of Craig Mundie, Microsoft's chief research and strategy officer. One such technology will help solve one of the most difficult problems facing robot designers: how to simultaneously handle all the data coming in from multiple sensors and send the appropriate commands to the robot's motors, a challenge known as concurrency. A conventional approach is to write a traditional, single-threaded program--a long loop that first reads all the data from the sensors, then processes this input and finally delivers output that determines the robot's behavior, before starting the loop all over again. The shortcomings are obvious: if your robot has fresh sensor data indicating that the machine is at the edge of a precipice, but the program is still at the bottom of the loop calculating trajectory and telling the wheels to turn faster based on previous sensor input, there is a good chance the robot will fall down the stairs before it can process the new information.
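The single-threaded loop described above can be sketched in a few lines (the function names here are illustrative, not any real robotics API):

```python
def control_loop(read_sensors, plan, act, steps=3):
    """Naive sense-plan-act loop: new sensor data is invisible mid-cycle."""
    for _ in range(steps):
        data = read_sensors()  # 1. snapshot every sensor
        command = plan(data)   # 2. slow planning on that (possibly stale) snapshot
        act(command)           # 3. only now does the robot react

# If "cliff" appears while plan() is still chewing on "clear",
# the robot keeps driving forward for a full cycle before it can stop.
log = []
readings = iter(["clear", "cliff", "cliff"])
control_loop(
    read_sensors=lambda: next(readings),
    plan=lambda d: "stop" if d == "cliff" else "forward",
    act=log.append,
)
```

The first command issued is "forward" even though the cliff shows up immediately afterward; the lag between sensing and acting is baked into the loop's structure.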

Concurrency is a challenge that extends beyond robotics. Today as more and more applications are written for distributed networks of computers, programmers have struggled to figure out how to efficiently orchestrate code running on many different servers at the same time. And as computers with a single processor are replaced by machines with multiple processors and "multicore" processors--integrated circuits with two or more processors joined together for enhanced performance--software designers will need a new way to program desktop applications and operating systems. To fully exploit the power of processors working in parallel, the new software must deal with the problem of concurrency.

One approach to handling concurrency is to write multi-threaded programs that allow data to travel along many paths. But as any developer who has written multithreaded code can tell you, this is one of the hardest tasks in programming. The answer that Craig's team has devised to the concurrency problem is something called the concurrency and coordination runtime (CCR). The CCR is a library of functions--sequences of software code that perform specific tasks--that makes it easy to write multithreaded applications that can coordinate a number of simultaneous activities. Designed to help programmers take advantage of the power of multicore and multiprocessor systems, the CCR turns out to be ideal for robotics as well. By drawing on this library to write their programs, robot designers can dramatically reduce the chances that one of their creations will run into a wall because its software is too busy sending output to its wheels to read input from its sensors.

In addition to tackling the problem of concurrency, the work that Craig's team has done will also simplify the writing of distributed robotic applications through a technology called decentralized software services (DSS). DSS enables developers to create applications in which the services--the parts of the program that read a sensor, say, or control a motor--operate as separate processes that can be orchestrated in much the same way that text, images and information from several servers are aggregated on a Web page. Because DSS allows software components to run in isolation from one another, if an individual component of a robot fails, it can be shut down and restarted--or even replaced--without having to reboot the machine. Combined with broadband wireless technology, this architecture makes it easy to monitor and adjust a robot from a remote location using a Web browser.

What is more, a DSS application controlling a robotic device does not have to reside entirely on the robot itself but can be distributed across more than one computer. As a result, the robot can be a relatively inexpensive device that delegates complex processing tasks to the high-performance hardware found on today's home PCs. I believe this advance will pave the way for an entirely new class of robots that are essentially mobile, wireless peripheral devices that tap into the power of desktop PCs to handle processing-intensive tasks such as visual recognition and navigation. And because these devices can be networked together, we can expect to see the emergence of groups of robots that can work in concert to achieve goals such as mapping the seafloor or planting crops.

These technologies are a key part of Microsoft Robotics Studio, a new software development kit built by Tandy's team. Microsoft Robotics Studio also includes tools that make it easier to create robotic applications using a wide range of programming languages. One example is a simulation tool that lets robot builders test their applications in a three-dimensional virtual environment before trying them out in the real world. Our goal for this release is to create an affordable, open platform that allows robot developers to readily integrate hardware and software into their designs.

Should We Call Them Robots?
How soon will robots become part of our day-to-day lives? According to the International Federation of Robotics, about two million personal robots were in use around the world in 2004, and another seven million will be installed by 2008. In South Korea the Ministry of Information and Communication hopes to put a robot in every home there by 2013. The Japanese Robot Association predicts that by 2025, the personal robot industry will be worth more than $50 billion a year worldwide, compared with about $5 billion today.

As with the PC industry in the 1970s, it is impossible to predict exactly what applications will drive this new industry. It seems quite likely, however, that robots will play an important role in providing physical assistance and even companionship for the elderly. Robotic devices will probably help people with disabilities get around and extend the strength and endurance of soldiers, construction workers and medical professionals. Robots will maintain dangerous industrial machines, handle hazardous materials and monitor remote oil pipelines. They will enable health care workers to diagnose and treat patients who may be thousands of miles away, and they will be a central feature of security systems and search-and-rescue operations.

Although a few of the robots of tomorrow may resemble the anthropomorphic devices seen in Star Wars, most will look nothing like the humanoid C-3PO. In fact, as mobile peripheral devices become more and more common, it may be increasingly difficult to say exactly what a robot is. Because the new machines will be so specialized and ubiquitous--and look so little like the two-legged automatons of science fiction--we probably will not even call them robots. But as these devices become affordable to consumers, they could have just as profound an impact on the way we work, communicate, learn and entertain ourselves as the PC has had over the past 30 years. 

Scan Uncovers Thousands of Copycat Scientific Articles

Database search turns up research papers suspiciously similar to prior publications, prompting investigations


A new computerized scan of the biomedical research literature has turned up tens of thousands of articles in which entire passages appear to have been lifted from other papers. Based on the study, researchers estimate that there may be as many as 200,000 duplicates among some 17 million papers in leading research database Medline.

The finding has already led one publication to retract a paper for being too similar to a prior article by another author.

Researchers Mounir Errami and Harold "Skip" Garner of the University of Texas Southwestern Medical Center at Dallas used a text-matching algorithm to compare seven million Medline abstracts against matching entries flagged by the database's software as being closely related.

The researchers set their own software tool, called eTBLAST, to identify pairs that were more than 45 percent identical, Errami says. The search turned up more than 70,000 hits, which the researchers and a team of three assistants have been manually checking. So far, Errami says they have gone through close to 3,000 pairs of abstracts or the full articles, if the duplicates have different authors. He notes that some matches were found to be innocent duplications, such as reprints or translations.
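As a toy analogue of that screen (eTBLAST's actual algorithm is more sophisticated; the cutoff below simply mirrors the 45 percent figure above), Python's standard library can score word-level overlap between two abstracts:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Word-level overlap ratio between two texts, from 0.0 to 1.0."""
    return SequenceMatcher(None, a.lower().split(), b.lower().split()).ratio()

THRESHOLD = 0.45  # mirrors the 45 percent cutoff used in the study

# Hypothetical abstracts, purely for illustration.
abstract_a = "we report a randomized controlled trial of drug X in 120 patients"
abstract_b = "we report a randomized controlled trial of drug Y in 120 patients"
abstract_c = "galaxy rotation curves imply the presence of dark matter halos"

near_duplicate = similarity(abstract_a, abstract_b) > THRESHOLD  # flagged pair
unrelated = similarity(abstract_a, abstract_c) > THRESHOLD       # not flagged
```

Pairs that clear the threshold would then go to a human reviewer, exactly as the 70,000-plus hits from the real study did.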

But in 79 cases (and counting), duplicates with different authors had no obviously legitimate explanation. The group has set up a public Web site, Déjà vu, to document the findings.

The next step in these cases of potential plagiarism, the researchers say, is for journals to investigate. In a Nature report, they advise other scientists "to withhold judgment of any candidate duplicates until evaluated by a suitable body such as an editorial board or a university ethics committee."

They note that most of the questionable duplicates inspected thus far appear to be papers submitted by the same authors to multiple journals, a less serious ethical lapse that allows researchers to artificially inflate their publication credits and give added weight to their work.

Errami and Garner estimate that perhaps 50,000 of the eTBLAST hits and 200,000 (about 1.2 percent) of the 17 million–plus Medline entries will turn out to be either plagiarized or multiple listings.

Prior studies have come up with different duplication rates. In a 2002 blind survey of 3,247 biomedical researchers by the University of Minnesota, 4.7 percent admitted that they had republished papers and 1.4 percent confessed to borrowing from others' work. A 2006 analysis of more than 280,000 papers in the physics preprint database arXiv, led by a U.S. computer scientist, found that 30,316 (10.5 percent) were suspected duplicates, and 677 (0.2 percent) were potentially plagiarized.

Action and Retraction

The U.T. Southwestern authors uncovered three cases in which their own colleagues may have been ripped off. Errami and Garner alerted the authors and journals involved, which they say has led to probes by the implicated publications.

One investigation has already led to a retraction: Journal publisher Elsevier is retracting a 2004 review paper (summarizing existing research) by rheumatologist Lee Simon of Harvard Medical School, says Shira Tabachnikoff, director of corporate relations at Elsevier. According to the Déjà vu entry, 55 percent of Simon's text, published in Best Practice & Research Clinical Rheumatology, closely matches that of a paper published a year earlier by U.T. Southwestern rheumatologist Roy Fleischmann in Expert Opinion on Drug Safety.

A review by SciAm.com of both articles confirmed that multiple consecutive pages of text in Simon's 32-page article were nearly identical to passages in Fleischmann's 19-page paper; of the 161 references listed in the later paper, nearly all were listed in the 2003 publication in the same nonalphabetical, nonchronological order.

In a telephone interview before the retraction Fleischmann stopped short of accusing Simon of plagiarism, pending Elsevier's decision, but acknowledged that the similarities were suspicious to say the least. "It's word for word, comma for comma, period for period, sentence for sentence, paragraph for paragraph, for the bulk of the article," he says.

Simon, who admits that he reviewed Fleischmann's paper prior to its publication, defends his own article by noting that there are only so many ways for two authors to summarize the same body of research. "This wasn't intentional duplication," he told SciAm.com in a telephone interview. "This is what happens when you do review articles."

He added that he was being singled out for a paper that was a chore to write and brought him no added prestige. "It's a review paper. Who cares?" he says. "I'm never going to write another one, because of this bullshit."

Will Duplicates Keep Multiplying?

Errami and Garner say they hope that the prospect of being found out will discourage would-be copycats.

But Mike Rossner, executive director of journal publisher The Rockefeller University Press, notes that eTBLAST or similar search schemes may not be successful barriers against republication, because manuscripts submitted simultaneously to two journals would not turn up in databases until after they had been published.

Maxine Clarke, publishing executive editor of the journal Nature, says her publication uses text-matching software to compare a submission with papers in the publishing group's many specialty journals. She notes that they also ask prospective authors to submit copies of preprints and related manuscripts submitted to other journals to help editors and reviewers assess their novelty. Bronwen Dekker, an assistant editor at Nature Protocols, says her journal uses eTBLAST to scan submissions for evidence of self-plagiarism (copying one's past work) in the abstract or introduction.

Some evidence suggests that the possibility of detection may not deter the unscrupulous. Rossner says that five years ago, The Rockefeller University Press began checking papers for manipulation of photos that depicted experimental data, but he says he has seen no decline in the number of doctored images.

Although the long-term effect of the finding remains to be seen, there has already been some fallout. To wit: Fleischmann says that he has known Simon for 25 years and considered him a friend, but adds "I don't know if we still are."

FBI's New Technology Revolutionizes DNA Analysis

Criminals' genetic code is part of a national database compiled by the FBI.

The 1995 double-murder trial of O.J. Simpson fundamentally changed the way Americans saw criminal evidence. Forensic details suddenly became dinner table conversation. Television programs like CSI have only added to the allure of forensics.

Everyone has become an armchair crime-scene specialist, and that has meant the bar has been raised. In a courtroom, jurors are expecting to see something like what they see on television: forensic proof that they think any good investigator — like the ones on TV — should be able to find.

"The data is suggesting that interviews of jurors have them saying they would convict, if only the investigators had done DNA," said Mitchell Holland, a professor of Forensic Science at Penn State. "Some jurors are saying they needed this forensic evidence when there was probably enough in the case already to convict."

Solving the Unsolvable

It isn't just a hot television series that is creating the problem. Expectations are rising because, frankly, the science is getting better. Not so long ago, forensic experts needed a sample about the size of a nickel for processing. Now a sample the size of a pinprick does the trick.

That means that cases unsolvable just a decade ago are now ripe for reopening.

The FBI's nuclear DNA lab at Quantico, Va., hums with activity, though much of that movement is robotic. A new machine there allows a robot to process DNA samples far faster than human technicians ever could. This is where the FBI is processing punch cards with DNA samples from the nation's federal offenders.

The FBI is trying to get a roster of prior criminals into a national database that will not only help law enforcement solve new crimes, but potentially old ones, too.

Look inside the new processing machine and you see a robotic arm skimming along a long tray of test tubes. Each contains a small blood sample which is bathed in a sort of chemical detergent that breaks the human cells open. DNA is then released inside the test tube and settles to the bottom — like sediment in a bottle of wine.

That sediment is fed into another instrument that reads its genetic code for inclusion in a database. Just a handful of years ago, compiling that kind of database would have taken human technicians years. The robot can do 500 samples a day — many more than a human ever could.

Robot in the Lab

Jennifer Luttman runs the Convicted Offender program at the FBI Lab. She says that the FBI still uses people to find the DNA at a crime scene, but steps that come after that are easily automated.

"We still use humans to look for the stains, to test for blood, to test for semen, to cut out the stains," she said. "Only a human can do that because they need to see how much is there and that's all based on experience."

But Luttman says that once the DNA is extracted and purified, robots can take care of the rote processes.

Down the hall from the nuclear DNA lab, other FBI scientists are trying to tug clues from a different kind of DNA, called mitochondrial DNA.

If you think of regular DNA as being part of the yolk of an egg, mitochondrial DNA is in the white. Regular DNA is passed on from both parents. Mitochondrial DNA comes just from the mother. So mitochondrial DNA isn't as useful for identification as nuclear DNA. But it doesn't break down as quickly — and that makes it vitally important to "cold" cases.

Alice Eisenberg, head of the FBI's Mitochondrial DNA Analysis lab, says cold cases are the meat and potatoes of her unit's operation. Typically, her technicians are dealing with bone and hair samples that have been sitting on evidence room shelves for years.

"No one was able to perform DNA analysis on them until we came along with our mitochondrial DNA technology," she said.

Weighing Molecules

The newest wrinkle involves a rather innocuous-looking machine called a mass spectrometer, which is, in essence, a glorified scale that weighs individual molecules. The actual machine is not very big. It is about the length of a kitchen counter and a little over 5 feet tall. Inside, little robotic arms move trays around a series of short towers.

Up to now, law enforcement has mostly used it to identify chemicals, like accelerants around suspicious fires. The FBI lab is the first to use it to measure mitochondrial DNA in crime work.

The idea, in a very simplified way, is to separate out the DNA's component parts by weight.

That becomes important in an event like the World Trade Center attack, in which remains end up commingled, making it hard to tell which DNA is which. The mass spectrometer is able to use weights to identify different fragments of the DNA.

Les McCurdy, a forensic examiner in the DNA analysis lab, likened the instrument's work on DNA to a scale weighing a pocketful of coins.

"You have pennies, nickels, dimes and quarters. Each of those coins has a different weight," he said, adding that if he had a pocketful of coins and put it on a scale, he could tell the difference between each coin. "It is the same type of thing we are doing with mitochondrial DNA with this instrument."
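McCurdy's coin analogy can be sketched in code: each DNA base has a characteristic mass, so two fragments of the same length but different composition weigh measurably different amounts. The residue masses below are rounded, illustrative figures, and the function is a toy model, not the FBI's software:

```python
# Approximate average masses (daltons) of DNA nucleotide residues
# in a single strand; rounded, illustrative values.
RESIDUE_MASS = {"A": 313.21, "C": 289.18, "G": 329.21, "T": 304.20}
WATER = 18.02  # one water added per strand for the terminal ends

def strand_mass(sequence):
    """Estimated mass of a single-stranded DNA fragment in daltons."""
    return sum(RESIDUE_MASS[base] for base in sequence.upper()) + WATER

# Same length, one base swapped -- like a nickel versus a dime,
# the two fragments land at different weights on the "scale."
frag1 = "ACGT"
frag2 = "ACGA"
print(strand_mass(frag1))
print(strand_mass(frag2))
```

Because each base composition maps to a distinct total mass, a spectrum of weights lets analysts pick apart which fragments came from which contributor in a mixture.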

Separating DNA

Essentially the machine helps analysts shake out and identify several individuals from a DNA mixture. The robot picks up a plate, reads a bar code on its side and cleans the DNA. Then, using magnetic beads, the robot separates the DNA so it can put it in the mass spectrometer for weighing. A computer then records the various weights and DNA combinations.

McCurdy said it was one of the most exciting projects he's worked on at the FBI.

"It will have tremendous application," he said. "I think it is really going to open up all new kinds of evidence and all new kinds of cases. It will have a huge impact on how we can assist different investigations."

This new mass spectrometer technology is still in its infancy, but it is part of a larger program to expand the uses of DNA. And even people outside the FBI, like Penn State's Holland, see new uses for DNA.

"The power of DNA is just beginning to emerge," he said. "That's really exciting."

The question is whether these new DNA advances will raise the forensics bar even higher — with jurors coming to expect science to wring doubt out of their deliberations during a trial. The FBI is quick to say that new DNA science doesn't just assign guilt. Almost one-third of their DNA work goes toward exonerating suspects. Because of that, they say, these DNA advances are good for everyone.

U.S. Spy Satellite, Power Gone, May Hit Earth

Officials said that they had no control over the nonfunctioning satellite and that it was unknown where the debris might land.

“Appropriate government agencies are monitoring the situation,” Gordon Johndroe, a spokesman for the National Security Council, said in a statement. “Numerous satellites over the years have come out of orbit and fallen harmlessly. We are looking at potential options to mitigate any possible damage this satellite may cause.”

Specialists who follow spy satellite operations suspect it is an experimental imagery satellite built by Lockheed Martin and launched from Vandenberg Air Force Base in California in December 2006 aboard a Delta II rocket. Shortly after the satellite reached orbit, ground controllers lost the ability to control it and were never able to regain communication.

“It’s not necessarily dead, but deaf,” said Jonathan McDowell, an astronomer at the Harvard-Smithsonian Center for Astrophysics and an analyst of various government space programs.

It is fairly common for satellites to drop out of orbit and enter Earth’s atmosphere, but most break up before they reach the surface, Mr. McDowell said. Such incidents occur every few months, and it is often difficult to control the satellite’s trajectory or its re-entry into the atmosphere. The debris, if any survives the fiery descent, typically lands in remote areas and causes little or no harm.

“For the most part,” Mr. McDowell said, “re-entering space hardware isn’t a threat because so much of the Earth is empty. But one could say we’ve been lucky so far.”

Of particular concern in this case, however, is that the debris from the satellite may include hydrazine fuel, which is typically used for rocket maneuvers in space.

Much of the fuel on the experimental satellite may not have been used and, should the tank survive re-entry into the atmosphere, the remaining fuel would be hazardous to anyone on the ground. It is likely, however, that the tank would rupture on re-entry and that the fuel would burn off in a fiery plume visible to the naked eye.

John E. Pike, the director of Globalsecurity.org in Alexandria, Va., said that if the satellite in question was a spy satellite, it was unlikely to have any kind of nuclear fuel, but that it could contain toxins, including beryllium, which is often used as a rigid frame for optical components.

Since it was launched, the experimental satellite has been in a slowly decaying orbit. As of Jan. 22, it was moving in a circular orbit at about 275 kilometers above the Earth, Mr. McDowell said. In the last month, its orbit has declined by 15 to 20 kilometers.

“If you plot the curve, it’s now just a matter of weeks before it falls out of orbit,” he said.
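McDowell's back-of-the-envelope estimate can be reproduced with a simple extrapolation from the figures above. The re-entry altitude here is an assumed threshold, and the linear rate is an upper bound on the time left, since drag — and therefore decay — accelerates sharply as the satellite descends into denser air:

```python
# Rough, illustrative extrapolation of orbital decay from the
# figures reported in the article.
altitude_km = 275.0          # circular orbit altitude as of Jan. 22
decay_km_per_month = 17.5    # midpoint of the reported 15-20 km/month
reentry_km = 180.0           # assumed altitude where drag pulls it down fast

months_left = (altitude_km - reentry_km) / decay_km_per_month
weeks_left = months_left * 4.35  # average weeks per month

print(f"at most ~{weeks_left:.0f} weeks at the current decay rate")
# The real curve steepens as the orbit lowers, so the actual time
# is considerably shorter than this linear estimate.
```

This is why "plot the curve" matters: a straight-line projection gives months, but the accelerating decay compresses the tail end into weeks.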

The largest uncontrolled re-entry by a NASA spacecraft was that of Skylab, the 78-ton abandoned space station that fell from orbit in 1979.
