Technical News

Future of Top U.S. Particle Physics Lab in Jeopardy

Congress's budget cut decelerates U.S. high-energy physics research
 
TUNNEL VISION: The 2008 high-energy physics budget passed by Congress in December took away funds to pursue research into the proposed International Linear Collider, shown here in a cut-away schematic.

NuMI BEAM LINE at Fermilab provides neutrinos for the ongoing MINOS experiment and was expected to do the same for the sister experiment NOvA, which 2008 budget cuts have threatened to permanently derail.

In recent years the U.S. national laboratories have laid out an ambitious research agenda for particle physics. About 170 scientists and engineers at the Fermi National Accelerator Laboratory (Fermilab) in Batavia, Ill., have been developing designs and technologies for the International Linear Collider (ILC), a proposed machine that would explore the frontiers of high-energy physics by smashing electrons into their antimatter counterparts [see "Building the Next-Generation Collider" by Barry Barish, Nicholas Walker and Hitoshi Yamamoto; SCIENTIFIC AMERICAN, February 2008]. Another 80 researchers at Fermilab have been finalizing the plans for NOvA, a giant detector in northern Minnesota that could answer fundamental questions about the neutrino, a particle that is ubiquitous but maddeningly elusive. But on December 17, 2007—a date that scientists quickly dubbed "Black Monday"—Congress unexpectedly slashed funding for ILC and NOvA, throwing the future of American physics into doubt.

What made the cutbacks so devastating was that President George W. Bush and Congress had promised substantial budget increases for the physical sciences earlier in 2007. In the rush to trim the 2008 spending bill enough to avert a presidential veto, however, legislative leaders excised $88 million from the U.S. Department of Energy's funding for high-energy physics. Fermilab's 2008 budget abruptly shrank from $372 million to $320 million.

Fermilab isn't the only physics facility devastated by the recent budget cuts. Congress eliminated the $160 million U.S. contribution to ITER, the international project to build an experimental fusion reactor, as well as the 2008 funding for the Stanford Linear Accelerator Center (SLAC) in Menlo Park, Calif., which was collaborating with Fermilab in the planning of the ILC. The cuts will force SLAC to lay off 125 employees and to prematurely end its BaBar experiment (also known as the B-factory), which is looking for violations of combined charge-parity (CP) symmetry in the decay of short-lived particles called B mesons.

Fermilab’s director, Pier Oddone, announced that the lab would need to lay off 200 employees, or about 10 percent of its workforce, and that the remaining researchers would have to take two unpaid days off per month. These measures would allow the lab to keep operating the Tevatron, its phenomenally successful proton–antiproton collider, which is now racing to find evidence of new particles and extra dimensions before the more powerful European accelerator, the Large Hadron Collider (LHC), begins operations later this year. But the ILC and NOvA were expected to become the major focuses of research at Fermilab after the shutdown of the Tevatron, due to occur by 2010, and now the investigators on those projects must be reassigned to other efforts or dismissed. "The greatest impact is on the future of the lab," Oddone says. "We have no ability now to develop our future."

A big part of that envisioned future is the proposed ILC, a 31-kilometer- (20-mile-) long facility [see image above] that would be able to detail the properties of any new particles discovered by the Tevatron or the LHC. American physicists had taken a leading role in the international effort to develop the collider, but the sudden cutoff in funding reduces the chances that the machine will be built on U.S. soil. "The ILC will go forward, but the U.S. will fall behind," says Barry Barish, director of the global design effort for the collider. The project is expected to yield technological advances that could benefit medical accelerators and materials science, and Barish says the U.S. may become less competitive in these fields if American support for the ILC is not restored.

The NOvA project is further along than the ILC; in fact, before the funding cuts were announced, the program managers had planned to upgrade the roads to their Minnesota site this spring so they could begin delivering construction materials for their enormous neutrino detector, which will weigh 15,000 tons when completed. Neutrinos come in three flavors—electron, muon and tau—and the particles constantly oscillate from one flavor to another; the NOvA detector is intended to measure how many of the muon neutrinos generated at Fermilab transform to electron neutrinos by the time they reach northern Minnesota. The results could reveal the answer to a longstanding mystery: why our universe is dominated by matter rather than antimatter.
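
For readers who want the back-of-the-envelope version: in the simplified two-flavor picture, the probability that a muon neutrino arrives as an electron neutrino depends on a mixing angle, a mass-squared difference, the distance traveled and the beam energy. The sketch below uses that textbook approximation with assumed, illustrative values (roughly the Fermilab-to-northern-Minnesota baseline and beam energy); the actual NOvA analysis is a full three-flavor fit.

```python
# Two-flavor oscillation probability, P(nu_mu -> nu_e), in the standard
# textbook approximation. All numbers are assumed for illustration only:
# ~810 km baseline, ~2 GeV beam energy, and representative mixing values.
# NOvA's real measurement uses the full three-flavor framework.
import math

def appearance_probability(L_km, E_GeV, sin2_2theta, dm2_eV2):
    return sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

p = appearance_probability(L_km=810, E_GeV=2.0, sin2_2theta=0.09, dm2_eV2=2.4e-3)
print(f"illustrative nu_mu -> nu_e probability: {p:.3f}")
```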

Although NOvA has not been canceled, the suspension of funding may lead some of its scientists to abandon the effort. "The question is whether you can put a project on mothballs for a year and bring it back again," says Mark Messier of Indiana University, one of the spokesmen for NOvA. "The signal this sends is, 'Go do your research somewhere else.'"

How to Build a Snowflake

THE ORIGINAL: Caltech researcher Ken Libbrecht, a leading expert on snow crystal formation, has photographed hundreds of natural snow crystals like this one.

Two mathematicians have for the first time created a computer simulation that generates realistic three-dimensional snowflakes -- although even they aren't sure how it works.

"We know surprisingly little about how ice crystals grow," said Caltech physicist Ken Libbrecht, who is considered a leading expert in snow crystal physics.

Figuring out some of the details could perhaps teach physicists a lot about how nature "self-assembles" complex structures -- a trick that nano-engineers have been trying to learn in recent years, he said.

Mathematicians Janko Gravner of the University of California at Davis and David Griffeath of the University of Wisconsin-Madison avoided the old approach of virtually building the snow crystals molecule-by-molecule.

Instead, they used virtual 3-D cells that are much larger than water molecules but that behave according to the same physics thought to control crystal growth.

"This is kind of an intermediate approach," said Gravner. He and Griffeath created their virtual cells -- called cellular automata -- to be one cubic micron in size.

At that scale the cells, about the size of a speck of dust, mimic the physics of water vapor and crystalline growth.

They then ran the model many times to see what happened when they tweaked temperatures and vapor pressures. The result was a wide variety of snow crystals -- including the complicated and stunning six-sided star crystals. Each crystal took about 24 hours to build using a powerful desktop computer, Gravner said.
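
To make the cellular-automaton idea concrete, here is a deliberately tiny sketch. It is not the Gravner-Griffeath model itself -- their automaton is three-dimensional and encodes far more of the physics -- just an illustration of the general recipe: cells carry a diffusing vapor field and join the crystal when a local rule is satisfied. The grid size, attachment threshold and vapor density are arbitrary demo values, and this toy will not produce realistic snowflakes.

```python
# Toy 2-D cellular automaton for diffusion-limited crystal growth. This is
# NOT the Gravner-Griffeath algorithm, only an illustration of the kind of
# update loop such models use. All parameters are arbitrary demo values.
import numpy as np

def grow(size=201, steps=4000, rho=0.6, beta=0.4):
    vapor = np.full((size, size), rho)     # vapor density in each cell
    frozen = np.zeros((size, size), bool)  # cells that have joined the crystal
    frozen[size // 2, size // 2] = True    # seed crystal at the center
    for _ in range(steps):
        # crude diffusion: relax each cell halfway toward its 4-neighbor mean
        mean4 = (np.roll(vapor, 1, 0) + np.roll(vapor, -1, 0) +
                 np.roll(vapor, 1, 1) + np.roll(vapor, -1, 1)) / 4.0
        vapor = 0.5 * vapor + 0.5 * mean4
        vapor[frozen] = 0.0                # the crystal holds no free vapor
        # attachment rule: freeze a cell that touches the crystal and has
        # accumulated at least `beta` worth of vapor
        touching = (np.roll(frozen, 1, 0) | np.roll(frozen, -1, 0) |
                    np.roll(frozen, 1, 1) | np.roll(frozen, -1, 1))
        frozen |= (~frozen) & touching & (vapor >= beta)
    return frozen

print(grow().sum(), "cells frozen")
```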

"Some forms are easier to get than others," Gravner told Discovery News. In this way the model seems to reflect the predominant crystals seen in nature, he said.

"I think it's a real big advance since nobody was able to do it before," said Libbrecth. "People have tried to get realistic snowflakes and it just didn't work."

Those previous attempts tended to succeed up to a certain point, after which the virtual crystals would go nuts, probably because of errors that built up in the computations and overpowered the simulation, Libbrecht told Discovery News.

"These guys were able to generate some structures that were very well-behaved," said Libbrecht.

Their success is all the more interesting, said Libbrecht, because the details of the physics Gravner and Griffeath programmed into their model are not quite in line with what he and some other physicists think are going on in snow crystal formation.

So either the physicists have been wrong, Libbrecht said, or there's something about the modeling approach that allows it to work despite the physics. Either way, it's a bit of a mystery.

 

Navy Mulls New Way to Enhance, Hide Submarine Communications

Deep Siren technology would let submarines communicate with ships and shore without compromising stealth
 
 
DEEP THOUGHTS: The Deep Siren system comprises a disposable gateway buoy with an antenna that gathers radio-frequency signals and converts them to Deep Siren acoustic signals that are converted on board the submarine to text messages.

 
EXPENDABLE: Deep Siren includes expendable buoys five inches (12.7 centimeters) in diameter and about 3.5 feet (one meter) long and black launch sleeves. The buoys are designed to stay afloat for up to three days and can be ejected via the sub's trash disposal unit without major modifications to the vessel.

The U.S. Navy is considering new technology that will allow land-based officers to communicate with submarines with minimal disruption to the sub's operations and reduced risk of detection. The military hopes that an emerging tactical paging technology dubbed Deep Siren will allow fleet commanders anywhere in the world to communicate instantly with subs, regardless of their depth or speed.

Currently, vessels can be contacted only if they are on or near the surface, which is not only inefficient but dangerous for subs furtively trolling hostile waters. Deep Siren is designed to deliver communications using acoustic, expendable buoys that, when contacted via a communications satellite in the National Security Agency's Global Information Grid, can send and receive messages to and from submerged subs as far as 175 miles (about 280 kilometers) away, depending upon acoustic propagation conditions.

"This is about bringing real-time communications to the sub, without latency," says Bill Matzelevich, a former Navy captain who retired in 2000 and is now a senior manager in government contractor Raytheon Company's Network Centric Systems group. The Navy in July awarded Raytheon a $5.2 million development contract to deliver a Deep Siren tactical paging system. "If you need to get a message urgently to a sub, you might have to wait eight hours for it to come close enough to the surface. A strike group commander may need to change direction and can't get this info to the sub immediately."

Messages to submarines are typically broadcast from onshore naval communication centers for a fixed amount of time--eight hours or so. For a sub to receive these radio-frequency or satellite messages, it must stop what it is doing within that time period, extend an antenna and rise to "periscope depth"— approximately 60 feet (18 meters) below the surface, which is shallow enough to use a periscope. During this time the sub may become more vulnerable to detection and may be more restricted in its ability to perform its mission.

Once at periscope depth, submarines tow a floating long-distance antenna behind them, but the data rates are generally slow and the wire used to tether the antenna to the sub restricts the vessel's agility. "You can only go so fast and so deep with this wire attached," Matzelevich says. "This is World War II–era technology."

To communicate with a submerged submarine safely, a gateway mechanism is required to deliver messages deeper than periscope depth. The Deep Siren Tactical Paging system consists of a disposable gateway buoy with an antenna that gathers radio-frequency signals and converts them to Deep Siren acoustic signals that penetrate the water and are received by the submarine's sonar system. These acoustic signals are then converted on board the submarine to text messages with the Deep Siren receiver. The Deep Siren system also includes a portable transmit station that can be located on shore or carried on board a ship or airplane. "You want to have this be a global capability, where the buoy can be called from anywhere in the world," Matzelevich says.

Working with RRK Technologies, Ltd., in Glasgow, Scotland, and Ultra Electronics Maritime Systems in Dartmouth, Nova Scotia, Raytheon is developing a Deep Siren system that includes expendable buoys that are five inches (12.7 centimeters) in diameter and about 3.5 feet (one meter) long with antennas that receive signals from a constellation of Iridium Satellite, LLC, communication satellites. The buoys—designed to stay afloat for up to three days—can be ejected out of the sub's trash disposal unit without major modifications to the vessel. In this way, subs can set up their own acoustic networks without the need to tow an antenna.

The other components of Deep Siren include computers onboard subs and in communications facilities—which may be located ashore, or onboard ships or aircraft—to access messages, along with special software to interpret them. The software—written by RRK—matches different acoustic tones emitted by the buoys with a set of vocabulary words shared between the sender and receiver, performing the translation from words to tones and back to words again. This methodology allows communications to a submarine in a format similar to text messages that occur on a cell phone or PDA.
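
The word-to-tone translation can be pictured as a shared codebook. The sketch below is purely illustrative: the real Deep Siren vocabulary, tone set and encoding are not public, so every word and number here is invented.

```python
# Illustrative codebook mapping between a shared vocabulary and acoustic
# tones. The actual Deep Siren vocabulary, tone frequencies and encoding are
# proprietary; everything below is made up to show the idea.
SHARED_VOCAB = ["change", "course", "north", "south", "surface", "hold"]
WORD_TO_TONE = {word: 1000 + 50 * i for i, word in enumerate(SHARED_VOCAB)}
TONE_TO_WORD = {tone: word for word, tone in WORD_TO_TONE.items()}

def encode(message):
    """Turn a text message into the tone sequence a buoy would transmit."""
    return [WORD_TO_TONE[w] for w in message.lower().split() if w in WORD_TO_TONE]

def decode(tones):
    """Turn received tones back into text on the submarine's display."""
    return " ".join(TONE_TO_WORD[t] for t in tones if t in TONE_TO_WORD)

tones = encode("change course north")
print(tones, "->", decode(tones))
```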

Deep Siren acoustic technology uses digital message processing to ensure that the receiver can move at speeds greater than 30 knots (about 35 miles per hour) without incurring any measurable interference. The system uses digital signaling at low frequencies—less than two kilohertz—and permits signal encryption to achieve secure sonar communications at a substantial range to a submarine at depth. Secure and encrypted signals permit more liberal communication from ship to submarine; enemy units may be able to pick up the signals, but they cannot decode them.

The Navy plans to conduct an at-sea military assessment of Deep Siren in June as part of its Communications at Speed and Depth initiatives.

Internet Maps Get Streetwise

Start-up earthmine inc challenges Google and Microsoft with new maps that provide 360-degree panoramic views of city streets

 
ROCKET SCIENCE: Berkeley, Calif., startup earthmine, inc. plans to offer a Web-based navigation service that employs technology developed for NASA to give users panoramic views of city streets.

 
DIFFERENT PERSPECTIVE: Earthmine lets users zoom in on select points on a map; shown here is a view of San Francisco's Market Street.

 
ROAD TRIP: Earthmine tested its technology in San Francisco but hopes to expand to other cities much the same way Google has with its Street View feature in Google Maps.

Google took Internet maps to the streets when it launched its Street View feature in Google Maps. Rather than relying on satellite photos, Street View, which debuted in May, enables users to view and navigate 360-degree street-level digital images of 21 U.S. cities, including San Francisco, Denver, Las Vegas, Miami and the Big Apple.

Now a Berkeley, Calif., start-up called earthmine inc plans to offer a similar Web-based navigation service that employs technology that NASA uses on the Mars Exploration Rover missions to help guide Opportunity and Spirit on their treks across the Red Planet's craggy surface. Earthmine last month announced that it had cut an exclusive deal with the California Institute of Technology (Caltech) in Pasadena, Calif., and the Jet Propulsion Laboratory (JPL) that it runs for NASA to license software and algorithms that create 3-D data from stereo panoramic imagery.

JPL actually began developing these algorithms and software for autonomous navigation about a decade ago. "The technology can be used by any robots that need to take visual information about a physical environment and make navigation decisions based on that information," says Andrew Gray, deputy manager of JPL's Commercial Technology Program office.

Earthmine is using this Space Age technology for the more down-to-Earth purpose of creating maps designed to help travelers find their way in unfamiliar urban settings, government agencies create visual property catalogues for tax-assessment and other purposes, and real estate agents provide prospective buyers with true-to-life images of properties and neighborhoods.

"Our mission at earthmine is to index reality," says John Ristevski, who co-founded the company with Anthony Fassero in 2006. Fassero adds: "We get down to a level of detail so it looks like you're standing on the street." The two men met at the University of California, Berkeley; Fassero, an architecture grad student, was wrapping up a thesis on digital panoramic photography and Ristevski was conducting research for a PhD in applied laser scanning technology. They figured out that if they combined panoramic photos with geospatial data, they could capture and deliver photo-realistic 3-D environments that would accurately document the world.

Earthmine chose San Francisco—a major city right in its backyard—as its test bed, prowling the hilly streets in an SUV equipped with an array of cameras and collecting stereo photographs at various locations along the city's 2,100 linear miles of road, including Front Street in the financial district and Union Square downtown. These photographs were combined to create three-dimensional, panoramic images. Using JPL's stereo-imaging technology, the data was then processed into three-dimensional coordinates for each pixel in the panoramic image. Each pixel used to create these 3-D images contains a data set of latitude, longitude and elevation information. The result is a series of seamlessly stitched panoramic images that offer a 360-degree view from any Web browser that supports Flash.
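
The geometry behind those per-pixel coordinates is ordinary stereo vision: two cameras a known distance apart see the same point at slightly different image positions, and that disparity yields depth, which can then be combined with the vehicle's GPS position and heading to tag each pixel with latitude, longitude and elevation. The sketch below is generic textbook stereo, not the licensed JPL pipeline; the focal length, camera baseline and disparity are invented numbers.

```python
# Generic pinhole-stereo depth calculation: depth = focal_length * baseline
# / disparity (image quantities in pixels, baseline in meters). This is not
# the JPL/earthmine algorithm; the numbers are invented for illustration.
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    return focal_px * baseline_m / disparity_px

# e.g. cameras 0.5 m apart, 1,400-pixel focal length, 20-pixel disparity
depth_m = depth_from_disparity(focal_px=1400, baseline_m=0.5, disparity_px=20)
print(f"point is roughly {depth_m:.1f} m from the camera rig")
# Each pixel's depth, combined with the rig's GPS fix and orientation, gives
# the latitude/longitude/elevation triple stored with the panorama.
```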

Fassero and Ristevski covered San Francisco in about three weeks and took another three weeks to create their maps. The goal is to expand this service to other major U.S. cities, build up a fleet of camera-equipped vehicles and eventually deliver their street-level maps to mobile phone users.

Among other things, Fassero and Ristevski view their service as something that local governments could use to create digital catalogues for property tax assessments. "If they have property information that has a latitude and longitude, they can build a map in their systems," Ristevski says. "Earthmine lets them visualize what to this point has simply been data points." Utility companies might also use these maps to accurately guide service representatives to work sites.

Earthmine faces tough competition for its service as it hits the streets some eight months after Google, whose method of data collection is similar to that of earthmine. Google uses a Calgary, Alberta–based company, Immersive Media Corp., to drive its vans throughout its target cities collecting images. Microsoft also offers its own version with Street-Side, part of its Live Search Maps site, which covers portions of San Francisco and Seattle.

Large Hadron Collider: The Discovery Machine

A global collaboration of scientists is preparing to start up the greatest particle physics experiment in history

   

You could think of it as the biggest, most powerful microscope in the history of science. The Large Hadron Collider (LHC), now being completed underneath a circle of countryside and villages a short drive from Geneva, will peer into the physics of the shortest distances (down to a nano-nanometer) and the highest energies ever probed. For a decade or more, particle physicists have been eagerly awaiting a chance to explore that domain, sometimes called the terascale because of the energy range involved: a trillion electron volts, or 1 TeV. Significant new physics is expected to occur at these energies, such as the elusive Higgs particle (believed to be responsible for imbuing other particles with mass) and the particle that constitutes the dark matter that makes up most of the material in the universe.

The mammoth machine, after a nine-year construction period, is scheduled (touch wood) to begin producing its beams of particles later this year. The commissioning process is planned to proceed from one beam to two beams to colliding beams; from lower energies to the terascale; from weaker test intensities to stronger ones suitable for producing data at useful rates but more difficult to control. Each step along the way will produce challenges to be overcome by the more than 5,000 scientists, engineers and students collaborating on the gargantuan effort. When I visited the project last fall to get a firsthand look at the preparations to probe the high-energy frontier, I found that everyone I spoke to expressed quiet confidence about their ultimate success, despite the repeatedly delayed schedule. The particle physics community is eagerly awaiting the first results from the LHC. Frank Wilczek of the Massachusetts Institute of Technology echoes a common sentiment when he speaks of the prospects for the LHC to produce “a golden age of physics.”

A Machine of Superlatives
To break into the new territory that is the terascale, the LHC’s basic parameters outdo those of previous colliders in almost every respect. It starts by producing proton beams of far higher energies than ever before. Its nearly 7,000 magnets, chilled by liquid helium to less than two kelvins to make them superconducting, will steer and focus two beams of protons traveling within a millionth of a percent of the speed of light. Each proton will have about 7 TeV of energy—7,000 times as much energy as a proton at rest has embodied in its mass, courtesy of Einstein’s E = mc². That is about seven times the energy of the reigning record holder, the Tevatron collider at Fermi National Accelerator Laboratory in Batavia, Ill. Equally important, the machine is designed to produce beams with 40 times the intensity, or luminosity, of the Tevatron’s beams. When it is fully loaded and at maximum energy, all the circulating particles will carry energy roughly equal to the kinetic energy of about 900 cars traveling at 100 kilometers per hour, or enough to heat the water for nearly 2,000 liters of coffee.
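
Those comparisons follow from straightforward arithmetic on the article's own round numbers, as the sketch below shows. The car mass and the temperature rise of the coffee water are assumptions chosen only to make the comparison concrete.

```python
# Back-of-envelope check of the figures above, using the article's round
# numbers: ~3,000 bunches per beam, ~1e11 protons per bunch, 7 TeV per
# proton, two counter-rotating beams. Car mass and water heating are assumed.
EV_TO_J = 1.602e-19

print("7 TeV vs. proton rest energy:", round(7e12 / 938e6), "times")  # roughly 7,000

protons = 3000 * 1e11 * 2                      # both beams fully loaded
stored_J = protons * 7e12 * EV_TO_J            # total energy stored in the beams
print(f"stored beam energy: {stored_J/1e6:.0f} MJ")

car_KE = 0.5 * 1900 * (100 / 3.6) ** 2         # a ~1,900 kg car at 100 km/h
print(f"equivalent to about {stored_J/car_KE:.0f} such cars")

litres = stored_J / (4186 * 80)                # heat water by ~80 kelvins
print(f"or enough to boil roughly {litres:.0f} litres of water for coffee")
```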

The protons will travel in nearly 3,000 bunches, spaced all around the 27-kilometer circumference of the collider. Each bunch of up to 100 billion protons will be the size of a needle, just a few centimeters long and squeezed down to 16 microns in diameter (about the same as the thinnest of human hairs) at the collision points. At four locations around the ring, these needles will pass through one another, producing more than 600 million particle collisions every second. The collisions, or events, as physicists call them, actually will occur between particles that make up the protons—quarks and gluons. The most cataclysmic of the smashups will release about a seventh of the energy available in the parent protons, or about 2 TeV. (For the same reason, the Tevatron falls short of exploring terascale physics by about a factor of five, despite the 1-TeV energy of its protons and antiprotons.)

Four giant detectors—the largest would roughly half-fill the Notre Dame cathedral in Paris, and the heaviest contains more iron than the Eiffel Tower—will track and measure the thousands of particles spewed out by each collision occurring at their centers. Despite the detectors’ vast size, some elements of them must be positioned with a precision of 50 microns.

The nearly 100 million channels of data streaming from each of the two largest detectors would fill 100,000 CDs every second, enough to produce a stack to the moon in six months. So instead of attempting to record it all, the experiments will have what are called trigger and data-acquisition systems, which act like vast spam filters, immediately discarding almost all the information and sending the data from only the most promising-looking 100 events each second to the LHC’s central computing system at CERN, the European laboratory for particle physics and the collider’s home, for archiving and later analysis.

A “farm” of a few thousand computers at CERN will turn the filtered raw data into more compact data sets organized for physicists to comb through. Their analyses will take place on a so-called grid network comprising tens of thousands of PCs at institutes around the world, all connected to a hub of a dozen major centers on three continents that are in turn linked to CERN by dedicated optical cables.

Journey of a Thousand Steps
In the coming months, all eyes will be on the accelerator. The final connections between adjacent magnets in the ring were made in early November, and as we go to press in mid-December one of the eight sectors has been cooled almost to the cryogenic temperature required for operation, and the cooling of a second has begun. One sector was cooled, powered up and then returned to room temperature earlier in 2007. After the operation of the sectors has been tested, first individually and then together as an integrated system, a beam of protons will be injected into one of the two beam pipes that carry them around the machine’s 27 kilometers.

The series of smaller accelerators that supply the beam to the main LHC ring has already been checked out, bringing protons with an energy of 0.45 TeV “to the doorstep” of where they will be injected into the LHC. The first injection of the beam will be a critical step, and the LHC scientists will start with a low-intensity beam to reduce the risk of damaging LHC hardware. Only when they have carefully assessed how that “pilot” beam responds inside the LHC and have made fine corrections to the steering magnetic fields will they proceed to higher intensities. For the first running at the design energy of 7 TeV, only a single bunch of protons will circulate in each direction instead of the nearly 3,000 that constitute the ultimate goal.

As the full commissioning of the accelerator proceeds in this measured step-by-step fashion, problems are sure to arise. The big unknown is how long the engineers and scientists will take to overcome each challenge. If a sector has to be brought back to room temperature for repairs, it will add months.

The four experiments—ATLAS, ALICE, CMS and LHCb—also have a lengthy process of completion ahead of them, and they must be closed up before the beam commissioning begins. Some extremely fragile units are still being installed, such as the so-called vertex locator detector that was positioned in LHCb in mid-November. During my visit, as one who specialized in theoretical rather than experimental physics many years ago in graduate school, I was struck by the thick rivers of thousands of cables required to carry all the channels of data from the detectors—every cable individually labeled and needing to be painstakingly matched up to the correct socket and tested by present-day students.

Although colliding beams are still months in the future, some of the students and postdocs already have their hands on real data, courtesy of cosmic rays sleeting down through the Franco-Swiss rock and passing through their detectors sporadically. Seeing how the detectors respond to these interlopers provides an important reality check that everything is working together correctly—from the voltage supplies to the detector elements themselves to the electronics of the readouts to the data-acquisition software that integrates the millions of individual signals into a coherent description of an “event.”

All Together Now
When everything is working together, including the beams colliding at the center of each detector, the task faced by the detectors and the data-processing systems will be Herculean. At the design luminosity, as many as 20 events will occur with each crossing of the needlelike bunches of protons. A mere 25 nanoseconds pass between one crossing and the next (some have larger gaps). Product particles sprayed out from the collisions of one crossing will still be moving through the outer layers of a detector when the next crossing is already taking place. Individual elements in each of the detector layers respond as a particle of the right kind passes through it. The millions of channels of data streaming away from the detector produce about a megabyte of data from each event: a petabyte, or a billion megabytes, of it every two seconds.
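
The data-rate claim checks out with simple arithmetic, shown below; the only inputs are the collision rate and per-event size quoted in the article.

```python
# Rough check of the data-flood figures: more than 600 million events per
# second (the collision-rate figure quoted earlier) at about 1 MB per event.
events_per_second = 600e6
bytes_per_event = 1e6
rate = events_per_second * bytes_per_event            # bytes per second
print(f"{rate/1e12:.0f} TB/s, i.e. a petabyte every {1e15/rate:.1f} seconds")
```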

The trigger system that will reduce this flood of data to manageable proportions has multiple levels. The first level will receive and analyze data from only a subset of all the detector’s components, from which it can pick out promising events based on isolated factors such as whether an energetic muon was spotted flying out at a large angle from the beam axis. This so-called level-one triggering will be conducted by hundreds of dedicated computer boards—the logic embodied in the hardware. They will select 100,000 bunches of data per second for more careful analysis by the next stage, the higher-level trigger.

The higher-level trigger, in contrast, will receive data from all of the detector’s millions of channels. Its software will run on a farm of computers, and with an average of 10 microseconds elapsing between each bunch approved by the level-one trigger, it will have enough time to “reconstruct” each event. In other words, it will project tracks back to common points of origin and thereby form a coherent set of data—energies, momenta, trajectories, and so on—for the particles produced by each event.
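
Schematically, the two trigger stages behave like a pair of nested filters. The sketch below is only a cartoon of that pipeline: the real level-one trigger is custom hardware and the high-level trigger runs on a large software farm, and the selection variables and thresholds here are invented stand-ins.

```python
# Cartoon of the two-stage trigger: a fast level-one decision made from a
# small subset of the detector data, then a fuller software reconstruction
# for the survivors. All criteria and thresholds are invented placeholders.
import random

def level_one(summary):
    # hardware-style snap decision, e.g. "energetic muon at a wide angle?"
    return summary["max_muon_pt_GeV"] > 20.0

def high_level(full_event):
    # software farm: look at the whole event, then decide whether to keep it
    total_energy = sum(track["energy_GeV"] for track in full_event["tracks"])
    return total_energy > 500.0

def trigger(stream):
    for event in stream:
        if level_one(event["summary"]) and high_level(event["full"]):
            yield event    # roughly 100 events/s survive in the real system

# toy stream of randomly generated "events"
toy_stream = ({"summary": {"max_muon_pt_GeV": random.uniform(0, 60)},
               "full": {"tracks": [{"energy_GeV": random.uniform(0, 400)}
                                   for _ in range(3)]}}
              for _ in range(10_000))
print(sum(1 for _ in trigger(toy_stream)), "of 10000 toy events kept")
```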

The higher-level trigger passes about 100 events per second to the hub of the LHC’s global network of computing resources—the LHC Computing Grid. A grid system combines the processing power of a network of computing centers and makes it available to users who may log in to the grid from their home institutes [see “The Grid: Computing without Bounds,” by Ian Foster; Scientific American, April 2003].

The LHC’s grid is organized into tiers. Tier 0 is at CERN itself and consists in large part of thousands of commercially bought computer processors, both PC-style boxes and, more recently, “blade” systems similar in dimensions to a pizza box but in stylish black, stacked in row after row of shelves. Computers are still being purchased and added to the system. Much like a home user, the people in charge look for the ever moving sweet spot of most bang for the buck, avoiding the newest and most powerful models in favor of more economical options.

The data passed to Tier 0 by the four LHC experiments’ data-acquisition systems will be archived on magnetic tape. That may sound old-fashioned and low-tech in this age of DVD-RAM disks and flash drives, but François Grey of the CERN Computing Center says it turns out to be the most cost-effective and secure approach.

Tier 0 will distribute the data to the 12 Tier 1 centers, which are located at CERN itself and at 11 other major institutes around the world, including Fermilab and Brookhaven National Laboratory in the U.S., as well as centers in Europe, Asia and Canada. Thus, the unprocessed data will exist in two copies, one at CERN and one divided up around the world. Each of the Tier 1 centers will also host a complete set of the data in a compact form structured for physicists to carry out many of their analyses.

The full LHC Computing Grid also has Tier 2 centers, which are smaller computing centers at universities and research institutes. Computers at these centers will supply distributed processing power to the entire grid for the data analyses.

Rocky Road
With all the novel technologies being prepared to come online, it is not surprising that the LHC has experienced some hiccups—and some more serious setbacks—along the way. Last March a magnet of the kind used to focus the proton beams just ahead of a collision point (called a quadrupole magnet) suffered a “serious failure” during a test of its ability to stand up against the kind of significant forces that could occur if, for instance, the magnet’s coils lost their superconductivity during operation of the beam (a mishap called quenching). Part of the supports of the magnet had collapsed under the pressure of the test, producing a loud bang like an explosion and releasing helium gas. (Incidentally, when workers or visiting journalists go into the tunnel, they carry small emergency breathing apparatuses as a safety precaution.)

These magnets come in groups of three, to squeeze the beam first from side to side, then in the vertical direction, and finally again side to side, a sequence that brings the beam to a sharp focus. The LHC uses 24 of them, one triplet on each side of the four interaction points. At first the LHC scientists did not know if all 24 would need to be removed from the machine and brought aboveground for modification, a time-consuming procedure that could have added weeks to the schedule. The problem was a design flaw: the magnet designers (researchers at Fermilab) had failed to take account of all the kinds of forces the magnets had to withstand. CERN and Fermilab researchers worked feverishly, identifying the problem and coming up with a strategy to fix the undamaged magnets in the accelerator tunnel. (The triplet damaged in the test was moved aboveground for its repairs.)

In June, CERN director general Robert Aymar announced that because of the magnet failure, along with an accumulation of minor problems, he had to postpone the scheduled start-up of the accelerator from November 2007 to spring of this year. The beam energy is to be ramped up faster to try to stay on schedule for “doing physics” by July.

Although some workers on the detectors hinted to me that they were happy to have more time, the seemingly ever receding start-up date is a concern because the longer the LHC takes to begin producing sizable quantities of data, the more opportunity the Tevatron has—it is still running—to scoop it. The Tevatron could find evidence of the Higgs boson or something equally exciting if nature has played a cruel trick and given it just enough mass for it to show up only now in Fermilab’s growing mountain of data.

Holdups also can cause personal woes through the price individual students and scientists pay as they delay stages of their careers waiting for data.

Another potentially serious problem came to light in September, when engineers discovered that sliding copper fingers inside the beam pipes known as plug-in modules had crumpled after a sector of the accelerator had been cooled to the cryogenic temperatures required for operation and then warmed back to room temperature.

At first the extent of the problem was unknown. The full sector where the cooling test had been conducted has 366 plug-in modules, and opening up every one for inspection and possibly repair would have been terrible. Instead the team addressing the issue devised a scheme to insert a ball slightly smaller than a Ping-Pong ball into the beam pipe—just small enough to fit and be blown along the pipe with compressed air and large enough to be stopped at a deformed module. The sphere contained a radio transmitting at 40 megahertz—the same frequency at which bunches of protons will travel along the pipe when the accelerator is running at full capacity—enabling the tracking of its progress by beam sensors that are installed every 50 meters. To everyone’s relief, this procedure revealed that only six of the sector’s modules had malfunctioned, a manageable number to open up and repair.

When the last of the connections between accelerating magnets was made in November, completing the circle and clearing the way to start cooling down all the sectors, project leader Lyn Evans commented, “For a machine of this complexity, things are going remarkably smoothly, and we’re all looking forward to doing physics with the LHC next summer.”
