Wednesday, November 14, 2012

Academic Nomad

As you might know, the pre-faculty academic life can involve a lot of moving around. In my case, I went from an undergraduate institution in California, to grad school in New Jersey, to a postdoctoral fellowship in England. And just a month and a half ago I moved to a second postdoctoral fellowship in Melbourne, Australia.
I actually went around the other direction. Image source.
All the moving and settling in has kept me pretty busy lately, but I'll try to write some more posts soon. Meanwhile, here's where some of my writing has appeared elsewhere on the Internet in the last few months:

  • American Physical Society Physics, "Focus: Magnetic Fields Explain Lunar Surface Features" (20 August 2012)
    Why are there swirly white blotches all over the Moon? They come from miniature magnetic forcefields. And they might some day lead to Star Trek-style deflector shield technology, at least to protect spacefarers from the Solar Wind.
  • Foundational Questions Institute (FQXi) Blog, "Losing Neil Armstrong"
    (3 September 2012)
    The death of Neil Armstrong hit me kind of hard. Here I share some thoughts about what the human spaceflight program means to me, and I tell the story of how my grandfather played a part in saving the Apollo 11 mission astronauts from certain death.
  • The Economist "Babbage" Science and Technology Blog, "Becoming an astronaut: Frequent travel may be required"
    (6 September 2012)
    I recently applied to the NASA astronaut program. I made the first cut (and I'm still waiting to hear if I'll make the second). If you've ever wondered what the astronaut job application process is like, check out this piece. (Note: In case you're not familiar with the Economist style, the tech blog posts are all in the third person, with the correspondent referred to as "Babbage.")
I should be able to write more actual blog posts in the coming weeks. Stay tuned!

Friday, August 31, 2012

The Long Dark Tea-Time of the Cosmos


(This post is adapted from a longer, more rambling, and somewhat more technical post I wrote for a group blog, here.)

There have been a few truly transitional moments in the history of the Universe in which something fundamental about the cosmic environment changed. Some of these -- the beginning and end of cosmic inflation, reheating, big bang nucleosynthesis -- altered the very nature of spacetime or the kinds of particles that populated it, and all happened within the first few minutes. The first atoms formed a few hundred thousand years later, marking another milestone.  For the 13 billion or so years since then, though, you could argue that it's all been a bit samey.  Except for cosmic reionization.

Cosmic reionization can be explained in just a few words: the gas in the Universe went from being mostly neutral to mostly ionized. That might sound trivial, but it turns out that the implications are profound -- reionization is the reason we are able to see other galaxies billions of light-years away, and if we can understand how it happened, we will understand the formation of the very first stars and galaxies in the Universe.

But I should back up for a moment. In order to see why reionization matters, you need to know something about recombination and the cosmic dark ages.

Timeline of the Universe, showing recombination, the dark ages (not even labelled because that epoch just isn't interesting enough, apparently), reionization and the age of galaxies. Source. Credit: Bryan Christie Design.

Great ball of (primordial) fire

Recombination is probably the most inaccurately named event in the history of the Universe, on account of the fact that there was no "combination" before it.*  In the beginning, there was the all-encompassing energy-matter-plasma-fireball, the product of the first cosmic explosion, which rapidly expanded in all directions.  We sometimes refer to this as the "hot big bang."  This fireball was formed mainly of protons and electrons, all of which were hot and unbound and bouncing off photons and generally being really energetic.  (Charged particles that aren't bound together are called ions and ionized gas is called plasma, so you could call it a plasmaball instead if you like, but I'll stick with "fireball" for the purpose of dramatic imagery.)  In the fireball, the particles and photons were tightly coupled, meaning that they were all mixed up and interacting in a big indistinguishable mess.  But as spacetime expanded and the fireball got cooler, the particles lost some of their frenetic energy.  Eventually, there came a time when the fireball was cool and diffuse enough that protons and electrons could chill out and become bound atoms.  The photons were still there, but now instead of just ricocheting off ions, they could get absorbed by atoms, or sail right by them in the newly abundant spaces between.  Some photons still occasionally broke atoms apart, but the Universe was becoming diffuse enough that atoms spent more time bound than not.

Illustration of the transition between the cosmic fireball and the post-recombination Universe. Red spheres are protons, green spheres are neutrons, blue dots are electrons and yellow smudges are photons. The color bar on the bottom represents the average temperature (or energy) of the Universe at that epoch. Source.
(*Terminology note: "Recombination" is also a sort of technical term in physics, which in general refers to the joining of an electron and a proton, without regard to whether that particular electron and proton had made up an atom before.  In the very early universe, inside the cosmic fireball, hydrogen atoms would sometimes form, but they'd be broken up immediately by energetic photons.  The name "recombination," when talking about the epoch, refers to the time when the hydrogen atoms that formed could stay bound for an appreciable amount of time.)

And so, at the epoch of recombination, around 300,000 years after the big bang, the gas went from being ionized to neutral.  Recombination set off the decoupling era -- the time when the matter and radiation that were previously tightly coupled (i.e., interacting a lot) became more free to do their own thing.  Decoupling is also known as last-scattering, because it was the last moment when photons would immediately be scattered off matter as they flew around.  After decoupling, the photons were free to sail around unimpeded and travel for long distances.  Which is where the cosmic microwave background (CMB) comes from -- the newly decoupled photons free-streaming through the Universe out of the great primordial fireball.

Map of the cosmic microwave background, the radiation left over from the primordial cosmic fireball. Tiny fluctuations in the temperature of microwave radiation coming to us from all directions give us clues about how matter was distributed at the earliest times in the Universe. In this rendering, we would be at a tiny dot in the center of the sphere. Source.

And then we wait

The next phase of the Universe was, in many ways, distinctly unexciting.  It's called the dark ages.  During the dark ages, the Universe was full of cooling neutral gas (mostly hydrogen), and that gas was very very slowly coming together into clumps via gravity.  At decoupling, the fluctuations seeding these clumps were more dense than their surroundings by only about one part in 100,000.  Those tiny blips, which we see in the CMB, were enough to tip the scales of gravity to draw more matter together into bigger and bigger clumps.  But it took a while for anything particularly interesting to happen.  Sometime between 100 and 500 million years after the big bang, one of these little clumps became dense enough to form the first star, and that defined the "first light" of the universe.  (Of course it wasn't strictly the first light -- the fireball made plenty of light, and we still see it as the CMB -- but it was the first starlight.)

So, if we had a big enough telescope, could we look far enough back into the Universe to see that first star?

Unfortunately, no.  It turns out the dark ages were dark for two reasons.  One was that there wasn't any (visible) light being produced at the time.  The other was that neutral hydrogen is actually pretty opaque to starlight.

Atoms and molecules can only absorb photons at particular frequencies -- those corresponding to transitions between the energy levels of the electrons.  During the dark ages, any photon whose energy was in the sweet spot for a hydrogen atom transition would very likely be absorbed.  Radio waves or other low-energy photons could get through because there weren't any transitions of the right energies to take them, but visible light was another story.  It's easy for a hydrogen atom to absorb visible-light photons and use them to knock its electrons into higher energy levels (the same goes for ultra-violet light).  Those atoms release the photons again eventually, but in different directions, so the vast majority of the light produced by the first stars isn't able to make it all the way to our telescopes.
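To put some rough numbers on those "particular frequencies" (a quick back-of-the-envelope sketch, not from the original post), the Rydberg formula gives the photon wavelengths for hydrogen's energy-level transitions:

```python
# Wavelengths of hydrogen transitions via the Rydberg formula.
# R_H is the Rydberg constant for hydrogen, in inverse metres.
R_H = 1.09678e7  # m^-1

def transition_wavelength_nm(n_low, n_high):
    """Wavelength of the photon emitted/absorbed in an n_high <-> n_low transition."""
    inv_lambda = R_H * (1.0 / n_low**2 - 1.0 / n_high**2)
    return 1e9 / inv_lambda  # metres -> nanometres

print(transition_wavelength_nm(1, 2))  # Lyman-alpha, ~121.6 nm (ultraviolet)
print(transition_wavelength_nm(2, 3))  # Balmer-alpha, ~656.5 nm (visible red)
```

The ground-state (n=1) transitions all sit in the ultraviolet, and the n=2 ones in the visible, which is exactly why starlight has such a hard time getting through a fog of neutral hydrogen.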

Opacity and transparency. The primordial fireball was opaque like fire is opaque: energetic particles couple with photons and keep them from free-streaming away. The edge of the wall of flame is like the last-scattering surface, where the light is finally free to escape. The dark ages were opaque like fog is opaque: the light was absorbed and scattered and attenuated. Once reionization cleared the "fog" of the dark ages, light was able to travel unimpeded. Photo sources: here, here and here (but really here).

Here comes the sun(s)...

Once stars were forming in earnest,  though, astrophysics really got going, and fun things started to happen.  The vast majority of the gas in the Universe (called the intergalactic medium, or IGM) was still neutral at this point -- mostly hydrogen, not doing much -- but each star or galaxy that formed would heat the gas around it and make a bubble of ionized gas.  As more and more of these bubbles formed, the intergalactic medium had a sort of swiss-cheese nature, with bubbles of ionized gas growing and coming together, burning away the fog of the dark ages.

Once there were enough stars and galaxies to ionize a significant fraction of the IGM, we finally had reionization: the (aptly named) epoch when the Universe went from being neutral to being fully ionized again.  And this time, the universe was much less dense and the starlight could easily pass through the ionized gas, so the IGM became transparent to starlight. And that's why we can see other galaxies -- because there's very little neutral gas left to absorb the light en route.

Artist's conception of bubbles of ionized gas percolating through the IGM during the dark ages. The CMB is at the far left, and the right is the present-day Universe. Source: illustration from a Scientific American article by Avi Loeb, which can be found here.
When did reionization happen?  And why does it matter?  Second question first: it matters because understanding reionization means understanding how the first sources of light in the Universe formed and how the IGM turned into the galaxies and clusters and all the amazing stuff we see today.  Also, it's a major milestone in the Universe's history, and a phase transition of the entire IGM, so it seems important.

Back to the other question: we think reionization happened around a billion years after the big bang, though probably gradually and clumpily and at different times in different places, and we're still trying to pin down the exact epoch.  There are a few ways to go about figuring it out.  One is to look for the Universe not being transparent.  In astronomy, opacity usually manifests as something absorbing light from something behind it.  On a foggy day, you know the fog is there because it makes it hard to see things that are far away, not because you really see the water droplets in the air.  Reionization is similar -- you know you're getting close to it if some of the light from a distant source (a quasar, generally) is absorbed before it gets to you.

Unfortunately, looking at absorption only tells us roughly when reionization was pretty much over, since it doesn't take much neutral hydrogen (about one part in 100,000) to absorb all the light from a distant quasar.

Another way to pin down reionization is to look at some subtle effects it has on the CMB, but that would take another blog post to even begin to describe, so I'll just say the CMB gives us a pretty good idea of the earliest time reionization might have started, but it's hard to get much more than that.

So where does that leave us?  We can't use visible light, because that's absorbed as soon as the IGM is slightly neutral.  And the CMB tells us a lot about the early universe, and gives us a hint about the beginning of reionization, but doesn't tell us when the bulk of it happened.

Radio astronomy FTW

The big innovation, the thing that institutions all over the world are investing in, is looking for radio signals coming from the neutral hydrogen itself.  Neutral hydrogen has a low-energy transition that, when it occurs, emits or absorbs a photon with a wavelength of 21 cm: it's called the 21 cm line. (The frequency is about 1420 MHz.)  This wavelength puts it in the radio part of the electromagnetic spectrum, so we see it as radio waves.

The origin of 21 cm radiation. In the higher-energy state, the spins of the hydrogen atom's electron and proton are aligned. If one flips its spin, the atom drops to a lower-energy state and a 21 cm photon is produced. Source.
The reason 21 cm radiation can let us peer into the dark ages is twofold.  One, it's so low-energy that it doesn't take a lot to excite it, so you can get 21 cm radiation being produced even if there's not a heck of a lot going on (just atoms colliding and a few stray photons).  The other advantage is that radio waves are really hard for neutral hydrogen to absorb.  An atom creates a 21 cm photon in the dark ages, and then the universe expands a little, making that photon just a little longer in wavelength, and then it's too low-energy to be absorbed by anything.  So all we have to do is set up a radio telescope and wait for it to arrive here!

Is it here yet? (Photo by Mike Dodds)
Okay, so it's not quite that simple.  Because the photons stretch out as the Universe expands, we're really talking about something like 100-200 MHz for "21 cm" photons from the epoch of reionization and the end of the dark ages.  There are some major downsides to working at those frequencies.  One is that you're now smack in the middle of all sorts of terrestrial radio communication: FM radio, cell phones, satellite transmissions... it's a big mess.  Also, at low frequencies, the Earth's ionosphere is highly refractive and can do all sorts of horrible things to your signals as they're coming down from space.  Somewhere in the tens of MHz, the ionosphere is completely opaque.  So if you want to pick up 21 cm radiation from the epoch of reionization, you have to find a place that's relatively radio-quiet (i.e., unpopulated) to do this sort of study, or you have to find a way to deal with the radio noise. (One example of a relatively radio-quiet place is the Australian outback. Another is the Moon.)
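If you're wondering where the 100-200 MHz figure comes from, it's just the rest-frame 21 cm frequency stretched by the expansion, i.e., divided by (1+z). A little illustrative sketch (the redshift values are representative, not precise epoch boundaries):

```python
# Observed frequency of the redshifted 21 cm line.
# The rest-frame frequency is ~1420.4 MHz; expansion stretches it by (1+z).
NU_REST_MHZ = 1420.406

def observed_frequency_mhz(z):
    return NU_REST_MHZ / (1.0 + z)

for z in (6, 10, 50):
    print(z, round(observed_frequency_mhz(z), 1))
# z=6 -> ~203 MHz and z=10 -> ~129 MHz: the 100-200 MHz reionization band.
# z=50 -> ~28 MHz: down in the tens of MHz, where the ionosphere gets nasty.
```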

A major challenge that you definitely can't get away from is our own Galaxy.  The Galaxy produces a lot of radiation that is extremely bright at the low frequencies we're dealing with here.  Galactic radio signals are typically about 10,000 times brighter than the signal from reionization.  And it doesn't help that the radiation is spatially varying in weird and complex ways.  Here's a map of the Galactic radiation at 408 MHz.  It's pretty bright, and it gets worse for lower frequencies.

Galactic synchrotron radiation at 408 MHz -- the emission is stronger at lower frequencies. The color scale here gives the brightness temperature (a measure of the intensity of the signal) in Kelvins. For comparison, the 21 cm reionization signal would be around 10 mK. Source.
In spite of the challenges, there's a lot of effort right now going into building the telescopes to see this signal, because it would allow us to actually probe the IGM in the epoch of reionization.  Ideally, we'd get pictures like this:
Simulation of reionization. Source.
Each square represents a bit of the Universe at a different moment in cosmic history, going forward in time as you move left to right and top to bottom.  In the upper left-hand panel (0.4 billion years after the big bang), the IGM is largely neutral.  In the lower-right hand panel (0.8 billion years), it's ionized.  The features in the other panels are ionized bubbles forming and growing.  Each of these simulation panels represents just a small patch of sky, but in theory you can imagine doing a full-sky map.  Taking into account the expansion of the Universe (and consequent stretching of photons) and tuning the telescope to different frequencies, you ideally get a map of all the neutral hydrogen at each epoch.

I should also point out: the dark ages and epoch of reionization cover a lot of the observable Universe. This sketch shows roughly how much volume is covered by different kinds of observations, where we're in the middle, looking out.  The z values are redshift -- a measure of how much the Universe has expanded since that time.  (So the edge, the farthest away in space and time, is at a redshift of infinity, since the Universe is infinitely bigger now than at the big bang; the redshift today is zero.  Reionization was between redshifts 6 and 10 or so.)  In the diagram, the colorful part in the middle contains most of the galaxies we've seen directly.  The thick dark circle near the edge is the CMB.  Everything inwards of z=50 can be probed with 21cm observations, and almost everything outwards of z=6 can't be seen any other way.
Schematic of how much of the Universe we're seeing with different kinds of observations. Red, yellow and green are optical. The black circle around the edge is the CMB.  Everything in blue can be observed with 21 cm radio signals. Source: Tegmark & Zaldarriaga 2009.

If you build it...

There's something of a global competition (or, officially, collaboration) going on right now to try to get at this signal, because it would open a whole new window on the evolution of the Universe.  You may have heard of the Square Kilometre Array, which is going to be the world's largest array of radio telescopes when it's completed in a decade or so.  It'll be split between South Africa and Western Australia, and one of the key goals of the project is to look deeper into the epoch of reionization than we ever have before, using the 21 cm line.  In the meantime, there's the Low-Frequency Array (LOFAR), the Murchison Widefield Array (MWA), and lots of other projects that are just getting going.  It's a big industry.

But before we get too excited, I should reiterate that dealing with the foregrounds and instrumental calibration and stuff is hard.  There are actually a number of intermediate steps (including getting an all-sky average signal, or doing some kind of statistical detection) that would have to happen before any attempt at mapping (or "tomography").  But mapping remains the ultimate goal.  And if we can map out what the neutral hydrogen in the Universe was doing in the first couple of billions of years, we can basically watch the Universe as we know it come into being.  And that would be pretty darn cool.

Credit: SPDO/TDP/DRAO/Swinburne Astronomy Productions.

Friday, July 27, 2012

You Don't Have to Blow Up the Universe to Be Cool


This was supposed to be a story about dark energy. It still is -- dark energy is one of the most intriguing mysteries in cosmology, after all -- but it's mostly a story about cosmic doom, why I love theoretical physics, and why you shouldn't believe everything you read on io9.

What goes up...

It's difficult to express to a non-physicist just how weird dark energy is, because most people are used to encountering things that they don't understand in physics, and they generally assume that someone else is on top of the situation. That's not the case with dark energy. Here's an analogy. Let's say you're throwing a ball in the air. There are logically two possibilities: (1) You throw it, it goes up for a while, slows down, and falls back to Earth. (2) If you happen to have superhuman strength, you throw it hard enough that it escapes Earth's atmosphere and then sort of coasts forever through the void. But imagine neither of those things happen. Imagine instead that you toss the ball up in the normal way, and it looks like it's starting to slow down, but just as you think it's about to reach its maximum height and come back, it suddenly speeds up and shoots off into space.

That's not supposed to happen.
[Source: Norwalk Citizen Online, Christian Abraham / Connecticut Post. ]
Dark energy is like that. It's actually the exact same physics. The big bang is like the throw, starting off the expansion of the Universe. That expansion means distant galaxies are all moving away from us, but since all those galaxies have mass and gravity is still always attractive, ultimately everything in the Universe should be pulling on everything else. This should slow the expansion down, through the same kind of attraction that pulls the ball toward the Earth, slowing it down and keeping it from floating away. But about a decade and a half ago astronomers discovered that the expansion isn't slowing down at all. There's something out there in the cosmos that's acting against the gravity of all those galaxies. It's not just keeping the Universe from recollapsing, it's actually pushing all the galaxies apart faster and faster, accelerating the expansion. And just as physicists would be at a loss to explain why your baseball suddenly went (non-)ballistic, everything we understand about physics tells us this should not be happening to the Universe. We call it dark energy because we have no idea what it is.

The cosmological constant

We have some theories, of course. In fact, there are probably hundreds of theories, many of them difficult to distinguish from one another with the data we currently have. The most familiar and longest-standing idea is that of the cosmological constant -- a sort of fudge factor that Einstein originally put into his equations of gravity. He wasn't trying to explain acceleration -- at the time, he thought the Universe was static, and he needed an anti-gravity term to balance out the pull of all the mass in the Universe. He discarded the extra term in embarrassment when the expansion of the Universe was discovered, but this new acceleration is making many cosmologists now think we need to put it back in.
The equation governing the acceleration of the expansion of the Universe, with a cosmological constant term. The gravity term includes the density (ρ) and pressure (p) of all matter and energy -- the minus sign means this term slows the expansion. The cosmological constant term (with Λ) has a positive sign, and therefore contributes to acceleration. The parameter a is the scale factor measuring the size of the Universe, and the double dots indicate the second derivative (acceleration) with respect to time.
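For readers who can't see the image, the caption describes what is, in standard cosmology, the acceleration (second Friedmann) equation with a cosmological constant; a conventional way of writing it (my rendering, not taken from the original figure) is:

```latex
\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^{2}}\right) + \frac{\Lambda c^{2}}{3}
```

The first term on the right carries the minus sign (gravity decelerates the expansion); the Λ term is positive and accelerates it, just as the caption says.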
A defining property of the cosmological constant is, unsurprisingly, that it is constant. In fact -- and this is almost weirder than the acceleration -- the density of the "stuff" described by the cosmological constant stays the same even as the Universe expands. If you have a box filled with cosmological constant, and you suddenly make the box twice as large without opening it or putting anything in, you now have twice as much cosmological constant in your box. As I said: it's weird.

The cosmological non-constant?

Unfortunately, the cosmological constant isn't really that appealing a solution, since it still looks a lot like a fudge factor and it seems somewhat arbitrary. The main alternative is dynamical dark energy, which is any kind of dark energy that can change with time. Most theories of dynamical dark energy (often just called "dark energy" as opposed to a cosmological constant, which is sort of a special case) involve scalar fields. Until recently, we had no evidence whatsoever for scalar fields in nature, even though they were constantly popping up in theories. Now that we think we might have discovered the Higgs boson (yay!), we have evidence for the first scalar field: the Higgs field. The Higgs field itself doesn't have anything to do with dark energy, but it's comforting that at least one example of a scalar field might actually exist. The nice thing about a scalar field is that it can have the same value everywhere in space while varying with time, which is just what you need if you want some kind of time-dependent dark energy that fills the Universe.

So how do we distinguish between a cosmological constant and dynamical dark energy? The usual way is to look at the relationship between the dark energy's pressure (denoted p) and density (denoted, somewhat confusingly, by the Greek letter rho: ρ). One of the key features of any form of dark energy is the fact that it has negative pressure.

In general relativity, pressure is a form of energy, and energy has a gravitational effect -- your pressure adds to your gravitational field. (So, gravitationally, pressure pulls.) Negative pressure, therefore, subtracts from a gravitational field, and counteracts gravity -- it pushes. For a cosmological constant, the pressure is exactly -1 times the density: p=-ρ. (I'm using units where the speed of light is 1. You could also write this as p=-ρc^2.) For other forms of dark energy, there could be a different relationship.

We use a parameter called the equation of state, w=p/ρ, to describe the ratio of pressure to density. All substances have one: pressureless matter has w=0; radiation has w=1/3. For a cosmological constant, w=-1.

As far as we can tell from astronomical measurements, w is pretty darn close to -1. Every measurement we've done is consistent with w=-1, and every time we improve on our measurements, we find a value of w even closer to -1. But it would be hard to say for sure that w is exactly -1, because all measurements have uncertainties associated with them. We may at some point measure a value of w that is infinitesimally close to -1, but, without some other reason to believe that we have a cosmological constant, we'll never be able to say that it's not just very slightly higher or lower.

The importance of asking "What if...?"

Until about 10 years ago, no one really talked about the idea that w could be less than -1. Anything with w<-1 was called phantom energy and was considered way too uncouth to be plausible. There are good theoretical reasons for this: constructing a theory with w<-1 is difficult, and if you manage to do it, you've probably had to do something tricky like introduce a negative kinetic energy, which is the sort of thing that would make a ball roll up a hill instead of down. You might even accidentally invent a theory with time travel and wormholes. So it was generally thought that we should leave w<-1 alone, and people made constraint plots like this:
Fraction of the Universe made of matter (Ωm) plotted against the dark energy equation of state parameter (w). Values in the orange region have a good fit to the data. [Source: Caldwell, Kamionkowski & Weinberg 2003 (PRL, 91, 071301)]
This is a plot of the fraction of the Universe made of matter (Ωm) versus w. The colored swaths are where the parameters are allowed by different kinds of observations. The orange is the most favored region. You can see from the plot that everything converges around w=-1: a cosmological constant.

But a group of theorists at Dartmouth and Caltech (Rob Caldwell, Mark Kamionkowski and Nevin Weinberg) looked at that and thought, "Maybe it's not converging at w=-1 -- maybe it just looks that way because it's really converging at some value of w less than -1. What would happen if that were the case?"
Same as above figure, but with the range extended to allow w<-1.
[Source: Caldwell, Kamionkowski & Weinberg 2003 (PRL, 91, 071301)]
And then they wrote my favorite paper ever [Caldwell, Kamionkowski & Weinberg 2003 (PRL, 91, 071301)].

Theory is awesome

It really is an amazing paper. Honestly, you should check it out. I wouldn't ordinarily recommend a theoretical physics paper to a general audience, but this paper is so well written, so accessible, and so beautiful that I can't resist. And it's only 4 pages long.

The authors start from a very simple idea: "What if some day we look at the data and we find out that w<-1?" It doesn't sound like a revolutionary idea, but no one had ever followed that idea to its logical conclusion. So they do it, and after jotting down just a few fairly simple equations, they discover that the universe would rip itself apart.

How often do you get to invent an ultimate cosmic doomsday in the course of your professional life? This is the kind of work I got into theoretical physics to do. It's awesome.

Here's how it works. I said before that w=-1 is a cosmological constant -- the energy density doesn't increase or decrease as the Universe expands. It turns out that if w>-1, that means that the energy density goes down as the Universe expands (like ordinary matter). Expand a box of matter and you have the same amount of matter, but more space, so your matter is now less dense. But if w<-1, the energy density increases as the Universe expands. Think about that for a minute. If you have a box of phantom energy, and you suddenly make the box twice as big, you now have more than twice as much phantom energy in your box.
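The box-doubling behavior falls straight out of the standard scaling of energy density with the scale factor, ρ ∝ a^(-3(1+w)). Here's a quick illustrative sketch (my own, not from the paper):

```python
# How energy density scales as the Universe expands, for different w:
# rho is proportional to a^(-3(1+w)), where a is the scale factor.
def density_ratio(w, expansion_factor):
    """Density after space grows by `expansion_factor`, relative to now."""
    return expansion_factor ** (-3.0 * (1.0 + w))

for label, w in [("matter", 0.0), ("radiation", 1 / 3),
                 ("cosmological constant", -1.0), ("phantom", -1.5)]:
    print(label, density_ratio(w, 2.0))  # double the size of the box
# matter -> 1/8 (diluted), radiation -> 1/16 (diluted AND redshifted),
# cosmological constant -> 1 (unchanged), phantom w=-1.5 -> ~2.83 (it grows!)
```

Anything with w>-1 dilutes as space expands; the cosmological constant holds steady; phantom energy actually gets denser.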

Aside from being unsettling, this kind of behavior can actually have some pretty gruesome consequences for the Universe. If we stick with our familiar cosmological constant, then as the Universe expands, even though all the galaxies are moving away from each other, anything that's gravitationally bound stays bound, because there's just not enough dark energy in any bound system (like a solar system or a galaxy) to mess with it. But with phantom energy, the amount of dark energy in any bit of space is increasing all the time, so a planet orbiting a star will actually eventually be pushed away to drift on its own. Everything will become isolated.

And that's not even the worst of it. Caldwell and his colleagues realized that if the density of dark energy is increasing with time, it will eventually be accelerating the expansion of space so quickly that the cosmic scale factor -- the parameter that measures the characteristic size of a region of space -- will reach infinity in a finite time. If the scale factor is infinite, that means that the space in between any two points is infinite, no matter how close they were to begin with. It means that spacetime itself is literally torn apart. When Caldwell and his colleagues realized they'd discovered a new possible end state of the Universe, they dubbed it, appropriately, the big rip.

Animation of the big rip (link to original). From Caldwell, Kamionkowski & Weinberg's paper: It will be necessary to modify the adopted slogan among cosmic futurologists — ‘‘Some say the world will end in fire, Some say in ice’’ — for a new fate may await our world.  [Source: NASA/STScI/G.Bacon]


DOOOOM!

Having just invented a new cosmic doomsday, the authors decided to go a step further. They worked out exactly when the big rip would occur for any given value of w, and then, for a specific example (w=-1.5, which would have a big rip about 21 billion years from now), they worked out exactly how long we'd have to wait before all of the cosmic structures we know and love would be destroyed. Galaxy clusters would be erased 1 billion years before the end. The Milky Way would be dismantled with 60 million years to go. At doom-3 months, the Earth would drift away from the Sun. With 30 minutes to go, our planet would explode, and atoms would be ripped apart in the last 10^-19 seconds. Discussing this handy timetable of doom, the authors state with admirable detachment that, were humans to survive long enough to observe the big rip, we might even get to watch the other galaxies get torn apart as we await the end of days. I'm sure that would be lovely.
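For the curious, the rough time until the rip can be estimated from the approximate formula in the Caldwell, Kamionkowski & Weinberg paper. A sketch (the H0 and Ωm values below are typical choices of mine, not numbers quoted in the post):

```python
import math

# Approximate time remaining until the big rip, following
# Caldwell, Kamionkowski & Weinberg (2003):
#   t_rip - t_0 ~ (2/3) * |1+w|^-1 * H0^-1 * (1 - Omega_m)^-1/2
# Assumed cosmology: H0 = 70 km/s/Mpc, Omega_m = 0.3.
H0_INV_GYR = 13.97  # 1/H0 in billions of years, for H0 = 70 km/s/Mpc
OMEGA_M = 0.3

def time_to_rip_gyr(w):
    """Rough time from now to the big rip, in Gyr, for phantom w < -1."""
    return (2.0 / 3.0) / abs(1.0 + w) * H0_INV_GYR / math.sqrt(1.0 - OMEGA_M)

print(round(time_to_rip_gyr(-1.5)))  # ~22 Gyr, near the ~21 Gyr quoted above
```

Note how sensitive the doomsday clock is to w: the closer w sits to -1, the longer the reprieve, and at w=-1 exactly the rip never happens.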

io9, you have forsaken me

Given my affection for the original phantom energy paper, you can imagine I was intrigued the other day to see an article on the io9 website proclaiming "The Universe Could Tear Itself Apart Sooner Than Anyone Believed." Could it be some new evidence for phantom energy, I thought? Sadly, no. It turned out to be an utterly overblown scare-piece that had hidden all the beauty of the physics behind false assertions and dramatic flaming-Earth graphics.

The io9 post discusses the work of Li and colleagues, researchers in China who have published an article called "Dark Energy and the Fate of the Universe." The paper isn't bad, or even really wrong (though I don't agree with all of it). But it's really nothing new or interesting. It starts from the assumption that dark energy is dynamic and that it is evolving to have w<-1 in the future. It then uses a new parameterization of the evolution of w to draw conclusions about the fate of the Universe.

I won't go into a lot of details, but the gist is as follows. If you want to determine whether w is changing with time, you have to start with some model for how it's changing -- basically, you have to assume a functional form. You look at data from the past to determine what w was then, choose some function for w that changes with time, and try to measure its parameters. In cosmology, we usually discuss time in terms of redshift (denoted by z), which is a measure of how much the Universe has expanded since whatever bit of the past we're observing. The redshift z decreases with time and is zero today; future times have negative redshifts.

A typical parameterization of dark energy looks like this: w(z) = w0 + wa (z/(1+z)). The form doesn't matter so much except in that w0 is the value of w today, a positive value of wa means w is decreasing with time, and a negative wa means it's increasing. This parameterization has the property that it goes to infinity in the future at z=-1. Li and colleagues don't like this, but it's hard for me to see why it matters. A redshift of -1 corresponds to an infinite scale factor, which is a big rip. If the only problem with the formula occurs when the big rip is actually in progress, it's hard to see why that should be a big deal for determining anything that happens up to that point.
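To see that divergence concretely, here's a quick sketch of the standard parameterization in Python (the values of w0 and wa are arbitrary illustrations, not measurements):

```python
def w_cpl(z, w0=-1.0, wa=0.5):
    """Standard parameterization: w(z) = w0 + wa * z / (1 + z).

    w0 is the value of w today (z = 0); w0 and wa here are
    arbitrary illustrative choices.
    """
    return w0 + wa * z / (1.0 + z)

# Well-behaved today and at all past redshifts (z >= 0):
print(w_cpl(0.0))   # -> -1.0, i.e. just w0

# ...but it blows up as z -> -1, the infinite future:
for z in (-0.9, -0.99, -0.999):
    print(z, w_cpl(z))
```

The divergence only kicks in as z approaches -1, which, as noted above, is the big rip itself already in progress.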

In any case, they have an alternative, slightly more complicated parameterization, for which w doesn't go to infinity at z=-1: w(z) = w0 + wa [ln(2+z)/(1+z) - ln(2)]. In their formulation, a positive wa means w is increasing, and a negative wa means w is decreasing. They run some simulations and find out that the best-fit points for w0 and wa -- the values the data seem to be pointing to -- imply a big rip will occur.
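For comparison, here's their form in the same sketch style (again with illustrative, not fitted, values of w0 and wa). Unlike the standard parameterization, it approaches a finite limit, w0 + wa(1 - ln 2), as z goes to -1:

```python
import math

def w_li(z, w0=-1.0, wa=0.5):
    """Li et al.'s form: w(z) = w0 + wa * (ln(2+z)/(1+z) - ln 2).

    w0 and wa are illustrative values, not the paper's fits.
    """
    return w0 + wa * (math.log(2.0 + z) / (1.0 + z) - math.log(2.0))

# Stays finite as z -> -1 (we can't evaluate exactly at z = -1,
# where the expression is 0/0, but the limit is w0 + wa*(1 - ln 2)):
for z in (-0.9, -0.99, -0.999999):
    print(z, w_li(z))
```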
Constraints on wa and w0 for the model by Li and colleagues. The red point indicates their best fit. The green point is a cosmological constant. The brown region is the best-fit region and the blue region is the 95.4% confidence region. [Source: Li X D et al. 2012 (Sci China-Phys Mech Astron 55, 1330)]
Shouldn't I be scared?

The fact that the best-fit point implies a big rip sounds important, but it isn't really. Many of the latest results have a best-fit value for w that's less than -1; the data just aren't yet good enough for us to draw any conclusions. A cosmological constant easily fits the data, and there's no compelling evidence that dark energy is anything more exotic. Also, as Li and colleagues readily admit, all their conclusions are based on the assumption that dark energy follows their own special functional form -- if it doesn't (and there's no reason to think it would), there's nothing they can say about what would happen.

Nonetheless, Li and colleagues go on to calculate when the big rip would occur for both their best-fit value and their worst-case value (the value still allowed by the data for which the big rip happens soonest), and they find that doomsday could come as soon as 16.7 billion years from now. They even include their own timetable of doom, with earlier times than the original one.

It's a reasonable calculation to make, but I wouldn't call it newsworthy. The comparison they make to say it's "earlier than we thought" is with Caldwell's doomsday value, which used an arbitrarily chosen w=-1.5 for illustrative purposes, not from a fit to any data. The io9 people apparently got hooked by an unreasonably enthusiastic press release and ran with it, trying to stir up the paper's conclusions to make it as significant and alarming as possible.

Disappointingly, the io9 article also contains several blatantly wrong statements, such as "cosmologists are pretty sure dark energy has a value less than -1" (not true!) and "a likely value of -1.5" (completely ruled out!) and "the cosmologists are fairly convinced that w will continue to exhibit a value less than -1 well into the future" (also totally wrong!). Phantom energy is truly an awesome idea, but I don't think many cosmologists would say it's especially likely, and certainly none of us would bet the house. The theoretical problems are substantial and the data just aren't good enough yet for us to say anything either way. The big rip scenario is still fun to think about; it's not necessary to think it's actually imminent to appreciate that. Probably dark energy is a cosmological constant -- and plenty weird enough.

Did I mention theory is awesome?

As a theorist, I encounter a lot of really bizarre ideas. Sometimes I encounter an idea like phantom energy, which is incredibly cool and leads to some truly revolutionary possibilities ... but is probably ultimately wrong. Other times, I get to study something like dark energy, which is mind-bending in a totally different way: not because it breaks physics and makes the Universe blow up, but because it is, contrary to all our understanding, actually out there, just waiting for us to figure it out.