Thursday, June 27, 2013

Passing through cosmic walls

Foam. Image source: DoITPoMS
Axions are hypothetical particles that are presently being searched for as possible dark matter candidates. The axion is a particle associated with the spontaneous breaking of a symmetry in the early universe. Unlike the case for the Higgs field, there can be a large number of ground states for the axion field. These states all have the same energy, but different values of the field. Since the ground states all have the same energy, they can coexist with each other, filling the universe with patches of different values of the axion field, separated by boundaries called 'domain walls'.

The best visualization that came to my mind is a foam-like structure that fills the universe, though you shouldn't take this comparison too seriously.

At the domain walls, the axion field has to change its value to interpolate between the neighboring domains. This position-dependence of the field creates a contribution to the energy density. Since the energy density of the domain walls dilutes more slowly with the expansion of the universe than the energy density of ordinary matter, this can become problematic, in the sense that it can come into conflict with observation.
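To see why, here's a toy calculation of how the standard dilution laws play out. The energy density of a network of domain walls falls with the scale factor as 1/a, while that of ordinary matter falls as 1/a^3, so the walls inevitably come to dominate. The normalizations below are arbitrary; only the relative scaling matters:

```python
# Toy comparison of how energy densities dilute with the scale factor a.
# Standard scalings: radiation ~ a^-4, matter ~ a^-3, domain walls ~ a^-1.

def energy_density(a, component):
    exponents = {"radiation": -4, "matter": -3, "domain_walls": -1}
    return a ** exponents[component]

for a in (1.0, 10.0, 100.0):
    ratio = energy_density(a, "domain_walls") / energy_density(a, "matter")
    print(f"a = {a:6.1f}: wall-to-matter density ratio = {ratio:.0f}")
```

The wall-to-matter ratio grows as a^2, which is why a stable wall network sooner or later clashes with the observed cosmological expansion history.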

There are various ways to adjust these models, or to pick the parameter ranges, so that the domain walls do not appear to begin with, decay quickly, or are unlikely to be present in our observable universe. These are the most commonly used strategies for those interested in the axion as a particle. But in recent years there has also been increasing interest in using the domain walls themselves as gravitational sources, and it has been suggested that they might play the role of dark energy or contribute to dark matter.

In an interesting paper that appeared recently in PRL, Pospelov et al lay out how we could measure whether planet Earth passed through such a domain wall:
    How do you know if you ran through a wall?
    M. Pospelov, S. Pustelny, M. P. Ledbetter, D. F. Jackson Kimball, W. Gawlik, D. Budker
    Phys. Rev. Lett. 110, 021803 (2013)
    arXiv:1205.6260 [hep-ph]
(Apparently the arXiv-title did not survive peer review.)

The idea is to use the coupling between the gradient of the axion field, which is non-zero at the domain walls, and the spin of standard-model particles. Passing through a domain wall would oh-so-slightly change the orientation of spins and align them into one direction.

This could be measured with devices normally used for very sensitive measurements of magnetic fields: optical magnetometers. An optical magnetometer consists basically of a bunch of atoms in gaseous form, typically alkali metals with one electron in the outer shell. These atoms are pumped with light into a polarized state of higher angular momentum, and their polarization is then read out again with light. This measurement is very sensitive to any change in the atomic spins' orientation, which may be caused by magnetic fields - or domain walls.

In the paper, and in a more recent follow-up paper, they estimate that presently existing technology can test interesting parameter ranges of the model once other known constraints (mostly astrophysical) on the coupling of the axion have been taken into account. It should be mentioned though that they consider not a pure QCD axion but a general axion-like field, in which case the relation between the particle's mass and its coupling is not fixed.

The sensitivity to the event of a domain wall passing can be increased by not only reading out one particular magnetometer, but by monitoring many of them at the same time. Then one can look for correlations between them. This way one is not only better able to pick out a signal from the noise, but from the correlation time one could also determine the velocity at which the domain wall passed.
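As a sketch of how such a correlation analysis might work (all numbers below, including the noise level and the station separation, are made up for illustration), one can simulate two magnetometer records that see the same pulse with a time lag, and then recover the lag by cross-correlation:

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.01                      # sampling interval in seconds (illustrative)
t = np.arange(0, 10, dt)

true_lag = 1.5                 # seconds; station B sees the pulse later
pulse_a = np.exp(-((t - 4.0) / 0.1) ** 2)             # wall crossing at station A
pulse_b = np.exp(-((t - 4.0 - true_lag) / 0.1) ** 2)  # same crossing at station B

a = pulse_a + 0.05 * rng.standard_normal(t.size)      # add measurement noise
b = pulse_b + 0.05 * rng.standard_normal(t.size)

# Cross-correlate the two records and find the lag maximizing the correlation.
corr = np.correlate(b - b.mean(), a - a.mean(), mode="full")
lags = np.arange(-t.size + 1, t.size) * dt
estimated_lag = lags[np.argmax(corr)]

baseline_km = 9000.0           # hypothetical separation between the stations
print(f"estimated lag: {estimated_lag:.2f} s")
print(f"implied wall speed: {baseline_km / estimated_lag:.0f} km/s")
```

The recovered lag, together with the known distance between the stations, gives the crossing speed; with more than two stations one could in principle also reconstruct the direction of the crossing.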

I think this is an interesting experiment that nicely complements existing searches for dark matter. I also like it for its generality. Maybe while searching for axion domain walls, we'll find something else that we're moving through and that happens to couple very weakly to spins.

Monday, June 24, 2013

Science should be more like religion

When historians look back at the 21st century, they’ll refer to it as the century in which religion died and the scientific revolution was completed. I am saying this not despite the battles of ignorance fought by the religious right and fundamentalists’ seeds of violence, but because of them. They fight, with words or with weapons, because they are afraid. And while these fights draw much attention, the number of non-believers is quietly and inevitably rising.
Percentage of Christians, non-Christian religious, and non-believers, UK, 1983 - 2009. Source: Wikipedia.

What are they afraid of? They’re afraid of science. And I, I am afraid we have ourselves to blame for that. All we scientists ever do, it seems to me, is to explain how science differs from religion, ignoring the commonalities between science and religion and the benefits that religion brings. Yes, that’s the scientists ignoring the facts.

Scientists aim to develop consistent explanations for the world. They observe and try to understand. Then they try to use what they have learned. They want coherent stories, want to fit the pieces together. They wonder and they seek, they doubt and they improvise. Scientists try and fail, and try again and gradually weave consistent stories to be shared with everybody.

In contrast, many religious stories are not only internally inconsistent, they are also inconsistent with observation and experience. In the best case, they are simply superfluous. Praying does not cure tetanus. The Earth wasn’t created 10,000 years ago. There’s no god sitting on a cloud throwing lightning down on blasphemous bloggers.

Yes, science clashes with religion. We’ve said it often enough.

The human brain excels at finding patterns, solving problems, and developing accurate theories. It abhors inconsistencies so much it will fake facts to remove them. Consciously accepting inconsistency requires constant effort. Religions require the believer to accept these inconsistencies and not to ask questions. That takes time and energy. Believers must constantly belie themselves. Science doesn’t require you to accept inconsistencies. In fact, it encourages you not to accept them, and thereby frees up all that creative power, the pattern-seeking, the story-weaving. This clearly speaks for the scientific way. So why are people afraid of science?

They are afraid that science will replace hope with statistical odds, the aurora with equations, and love with oxytocin. They are afraid that science will take the wonder out of life and not give anything back. They are afraid they will have to give up their belief in an immortal soul, in miracle cures, in final answers, and get nothing in return. And we, we’re failing them because we don’t tell them what it is that they get in return.

Almost all scientists I know are atheists. They’re not atheists because they have been rendered unable to believe in God and are now suffering from a meaningless existence. They’re atheists because they don’t need religion. God, as Laplace put it, is an unnecessary hypothesis. And above all, god is a waste of time.

Far from taking the wonder out of human existence, science adds to it. We’re part of nature and science is the only way of understanding our place and our role.

If you’re in love and you read up on what is known about the chemical pathways and neurological activity, far from degrading you to a bundle of synapses, it embeds your love into the course of evolution and the complexity of human culture. If you look at the night sky and know that beyond the Milky Way there are billions of other galaxies, full of solar systems much like our own, your knowledge adds to the wonder. If you are pregnant and you subscribe to one of the dozens of calendars that tell you when your baby’s heart starts beating, how its nervous system develops, and when it is able to hear, then imagine that just 50 years ago you wouldn’t even have gotten an ultrasound image. Now you can have it in 3D. You’re growing a child, in an amazingly intricate and yet virtuosically orchestrated process. You’re part of the circle of life. And without science you wouldn’t know much about that circle. You’re part of nature. Enjoy. And don’t forget to take the folic acid.

Religions offer people a community to belong to and a place to go. They offer shared traditions and spiritual guidance. Science doesn’t. Not because we don’t have a community or have nothing to offer. We’re just not letting people know what science has to give, making them believe science will only take away from them. We’re excluding others by not sharing our wonder.

Since the advent of the internet, we’ve gone a long way toward making science more human. Scientists speak by themselves and of themselves. But few, if any, touch ground traditionally considered religion’s.

Yalom notes four existential fears: fear of death, of freedom, of isolation, and of meaninglessness. We all have these fears. They drive many people to religion because it’s the most obvious answer. And while science addresses these fears, scientists shy away from these topics. There are great speakers among the scientists, but most of them preach to the choir of an already scientifically-minded audience. Neil deGrasse Tyson is one of the few who isn’t afraid to cross this line. He inspires those on both sides. And look how many listen. Carl Sagan did too. I have some more fingers on my hand, tell me who to count.

Most scientists feel awkward if the word “spirituality” is so much as mentioned, and the last thing they want to do is preach. I myself am of course guilty of never writing about living the atheist’s life. I didn’t study physics to be a preacher, and neither did any of my colleagues. So here’s the problem. A communication problem. Who’ll preach the wonders of science to those who most need to hear about them?

Some days ago I buzzed in Jehovah’s Witnesses thinking it was DHL. Then I tried to wave them away, saying we’re atheists. “Oooh!” said one of the men and raised his arms, pretending to be shocked, “How did this happen?” I asked him what he meant, and he said “Well, one doesn’t get born this way.” He’s wrong of course. We’re all born as atheists. We’re also born being social and in need of each other to talk through our problems. And as long as science doesn’t offer the community that religions provide so effortlessly, as long as scientists stand aloof from spiritual guidance, people will be afraid that their taxes pay scientists to remove the wonder from the world rather than add to it.

Update: There's an interesting post from Lubos on the topic. He makes some good points.

Thursday, June 20, 2013

Testing spontaneous localization models with molecular level splitting

Gloria's collapse model.
Those of us in quantum gravity groups all over the planet search for a unified framework for general relativity and quantum theory. But I also have a peripheral interest in modifications of general relativity and quantum mechanics, since altering either of these two ingredients can change the rules of the game. General relativity and quantum mechanics, however, work just fine as they are, so there is little need to modify them. In fact, modifications typically render them less appealing to the theoretician, not to say ugly.

Spontaneous localization models for quantum mechanics are, if you ask me, a particularly ugly modification. In these models, one replaces the collapse upon observation of the Copenhagen interpretation with a large number of little localization events that together produce eigenstates upon measurement. These localizations, which essentially focus the spread of the wave-function, are built into the dynamics by a stochastic process, and the rate of collapse depends on the mass of the particles (the higher the mass, the higher the localization rate). The purpose of these models is to explain why we measure the effects of superpositions, but never a superposition itself, and never experience macroscopic objects in superpositions.
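The mass dependence can be made concrete with a back-of-the-envelope calculation in the spirit of GRW-type models, where each nucleon undergoes localization events at a tiny rate, and the effective rate for a bound object's center of mass is amplified by the number of constituents. The per-nucleon rate below is the value originally proposed by Ghirardi, Rimini and Weber; the object sizes are just illustrative:

```python
# GRW-type spontaneous localization: each nucleon is hit by a localization
# event at rate LAMBDA_0; for the center of mass of a bound object the
# effective rate is amplified by the number of constituents N.

LAMBDA_0 = 1e-16  # collapse rate per nucleon in 1/s (GRW's proposed value)

def effective_collapse_rate(n_nucleons):
    return LAMBDA_0 * n_nucleons

for name, n in [("single proton", 1),
                ("dust grain (~1e15 nucleons)", 1e15),
                ("printed digit (~4e18 nucleons)", 4e18)]:
    rate = effective_collapse_rate(n)
    print(f"{name:30s}: {rate:.1e} collapses/s, ~{1/rate:.1e} s between events")
```

A single particle thus essentially never collapses, while a macroscopic object localizes almost instantly - which is precisely the design goal of these models.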

Unfortunately, I have no reason to believe that nature gives a damn what I find ugly or not, and quite possibly you don’t care either. And so, as a phenomenologist, the relevant question that remains is whether spontaneous localization models are a description of nature that agrees with observation.

And, to be fair, on that account spontaneous localization models are actually quite appealing. That is because their effects, or the parameters of the model respectively, can be bounded both from above and below. The reason is that the collapse processes have to be efficient enough to produce eigenstates upon observation, but not so efficient as to wash out the effects of quantum superpositions that we observe.

The former bound on the efficient production of observable eigenstates becomes ambiguous however if you allow for a many worlds interpretation because then you don’t have to be bothered by macroscopic superpositions. Alas, the intersection of the groups of many worlds believers and spontaneous localization believers is an empty set. Therefore, the spontaneous localization approach has a range of parameters with macroscopic superpositions that is “philosophically unsatisfactory,” as Feldman and Tumulka put it in their (very readable) paper (arXiv:1109.6579). In other words, if you allow for a many worlds situation whose main feature is the absence of collapse, then there really is no point to add stochastic localization on top of that. So it’s either-or, and thus requiring absence of macroscopic superpositions bounds possible parameters.

Still, the notion of what constitutes “macroscopic reality” is quite fuzzy. Just to give you an idea of the problem, the estimates by Feldman and Tumulka go along such lines:
“To obtain quantitative estimates for the values [of the model parameters] that define the boundary of the [philosophically unsatisfactory region], we ask under which conditions measurement outcomes can be read off unambiguously... For definiteness, we think of the outcome as a number printed on a sheet of paper; we estimate that a single digit, printed (say) in 11-point font size, consists of 3 × 10^17 carbon atoms or N = 4 × 10^18 nucleons. Footnote 1: Here is how this estimate was obtained: We counted that a typical page (from the Physical Review) without figures or formulas contains 6,000 characters and measured that a toner cartridge for a Hewlett Packard laser printer weighs 2.34 kg when full and 1.54 kg when empty. According to the manufacturer, a cartridge suffices for printing 2 × 10^4 pages...”
And so on. They also discuss the question whether chairs exist:
“One could argue that the theory actually becomes empirically refuted, as it predicts the nonexistence of chairs while we are sure that chairs exist in our world. However, this empirical refutation can never be conclusively demonstrated because the theory would still make reasonable predictions for the outcomes of all experiments...”
Meanwhile on planet earth, particle physicists calculate next-to-next-to-next-to leading order corrections to the Higgs cross-section.

Sarcasm aside, my main problem with this, and with most interpretations and modifications of quantum mechanics, is that we already know that quantum mechanics is not fundamentally the correct description of nature. That’s why we teach 2nd quantization to students. To make matters worse, most of such modifications of quantum mechanics deal with the non-relativistic limit only. I thus have a hard time getting excited about collapse models. But I’m digressing - we were discussing their phenomenological viability.

In fact, Feldman and Tumulka’s summary of experimental (ie non-philosophical) constraints isn’t quite as mind-enhancing as the nonexistent chair I’m sitting on. (Hard science, my ass.) Some experimental constraints they discuss: The stochastic process of these models contributes to global warming by injecting energy with each collapse, and since there’s a cave in Germany which doesn’t noticeably warm up in July, this gives a constraint. And since we have not heard any “spontaneous bangs” around us that would accompany the collapses in certain parameter ranges, we get another constraint. Then there’s atom interferometry. And then there’s this very interesting recent paper.

In this paper, the authors calculate how spontaneous localization affects quantum mechanical oscillations between two eigenstates. If you recall, we previously discussed how the observation of such oscillations allows one to put bounds on decoherence induced by coupling to space-time foam. For the space-time foam, neutral Kaons make a good system for experimental tests: decoherence from space-time foam should decrease the ability of the Kaons to oscillate into each other. The bounds on the parameters are meanwhile getting close to the Planck scale.

For spontaneous localization the effect scales differently with the mass though, and is thus not testable in neutral Kaon oscillations. Since the localization effects get larger with larger masses, the authors recommend looking for the effects of collapse models in chiral molecules instead.

Chiral molecules come in two forms with the same atomic composition but different spatial arrangements, mirror images of each other. Some of these molecules can exist in superpositions of the two spatial arrangements, which can transform into each other by tunneling. In the low-temperature limit, this leads to an observable level splitting in the molecular spectrum. The best known example may be ammonia.

Now if collapse models were correct, then these spatial superpositions of chiral molecules should localize, and the level splitting, which is a consequence of the superposition of two eigenstates, would become unobservable. The authors estimate that with current measurement precision the bound from molecular level splitting is about comparable to that from atom interferometry (where interference should become unobservable if spontaneous localization is too efficient, thus leading to a bound). Molecular spectroscopy is presently a very active research area, and with better resolution and larger molecules this bound could be improved.
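A toy two-level model shows the logic: the left- and right-handed configurations oscillate into each other at the splitting frequency, while a localization process damps the coherence, and the splitting is resolvable roughly when the collapse rate is small compared to the oscillation frequency. The frequency below is ammonia's inversion splitting (~24 GHz); the collapse rates are arbitrary placeholders, not values from the paper:

```python
import numpy as np

def coherence(t, omega, lam):
    """Amplitude of the left<->right oscillation, damped by localization
    at rate lam; the level splitting corresponds to frequency omega."""
    return np.cos(omega * t) * np.exp(-lam * t)

omega = 2 * np.pi * 24e9        # ammonia inversion splitting, ~24 GHz
period = 2 * np.pi / omega

for lam in (1e3, 1e12):         # illustrative collapse rates in 1/s
    visibility = coherence(period, omega, lam)  # amplitude after one period
    print(f"collapse rate {lam:.0e}/s: visibility after one period = {visibility:.3f}")
```

When the damping rate approaches the oscillation frequency, the coherent oscillation, and with it the observable splitting, is washed out; that is the effect a precision spectroscopy measurement would be sensitive to.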

In summary, this nice paper gives me hope that in the near future we can put the ugly idea of spontaneous localization to rest.

Monday, June 17, 2013

Phenomenological Quantum Gravity

Participants of the 2012 conference on 
Experimental Search for Quantum Gravity.
The search for quantum gravity and a theory of everything captures the public imagination like no other area in theoretical physics. It aims to answer three questions that every two-year-old could ask if they would just stop being obsessed with cookies for a minute: What is space? What is time? And what is matter? We know that the answers we presently have to these questions are not fundamentally correct; they are merely approximately correct. And we want to know. We really really want to know. (The cookies. Are out.)

Strictly speaking of course physics will not tell you what reality is but what reality is best described by. Space and time are presently described by Einstein’s theory of general relativity; they are classical entities that do not have quantum properties. Matter and radiation are quantum fields described by the standard model. Yet we know that this cannot be the end of the story because the quantum fields carry energy and thus gravitate. The gravitational field thus must be compatible with the quantum aspects of matter sources. Something has to give, and it is generally expected that a quantization of gravity is necessary. I generally refer to ‘quantum gravity’ as any approach to solve this tension. In a slight abuse of language, this also includes approaches in which the gravitational field remains classical and the coupling to matter is modified.

Quantizing gravity is actually not so difficult. The problem is that the straightforward, naive quantization does not give a theory that makes sense as a fundamental theory. The result is said to be non-renormalizable, meaning it is a good theory only in some energy ranges and cannot be taken to describe the very essence of space, time, and matter. There are meanwhile several other, not-so-naive, approaches to quantum gravity – string theory, loop quantum gravity, asymptotically safe gravity, causal dynamical triangulation, and a handful of others. The problem is that so far none of these approaches has experimental evidence.

This really isn’t so surprising. To begin with, it’s a technically hard problem that has kept some of the brightest minds on the planet occupied for decades. But besides this, returns on investment have diminished as scientific knowledge has advanced. The low-hanging fruits have all been picked. Now we have to develop increasingly complex experiments to find new physics. This takes time, not to mention effort and money. With that, progress slows.

And quantum gravity is a particularly difficult area for experiment. It’s not just a weak force, it’s weaker than the weak force! This grammatical oxymoron is symptomatic of the problem: Quantum effects of gravity are really, really tiny. Most of the time when I estimate an effect, it turns out to be twenty or more orders of magnitude below experimental precision. I’ve sometimes joked I should write a paper on “50 ways one cannot test quantum gravity”, just to make use of these estimates. It’s clearly not a low hanging fruit, and we shouldn’t be surprised it takes time to climb the tree.

Some people have claimed on occasion that the lack of a breakthrough in the area is due to sociological problems in the organization of knowledge discovery. There are indeed problems in the organization of knowledge discovery today. We use existing resources inefficiently, and I do think this hinders progress. But this is a problem which affects all of academia and is not special to quantum gravity.

I think the main reason why we don’t yet know which theory describes gravity in the quantum regime is that we haven’t paid enough attention to the phenomenology.

One reason phenomenological quantum gravity hasn’t gotten much attention so far is that it has long been believed that quantum gravitational effects are inaccessible to experiment (a belief promoted prominently by Freeman Dyson). The more relevant reason though is that within theoretical physics it’s a very peculiar research topic. In all other areas of physics, researchers either share a common body of experimental evidence and aim to develop a good theory, or they share a theoretical framework and aim to explore its consequences. Phenomenological quantum gravity has neither a shared theory nor a shared set of data. So what can the scientist do in this situation?


The phenomenology of quantum gravity proceeds by the development of models that are specifically designed to test for properties of the yet-to-be-found theory of quantum gravity. These phenomenological models are normally extensions of known theories and are developed with the explicit aim of testing for general features. These models do not aim to be fundamental theories on their own.

Examples of such general properties that the fundamental theory might have are: violations or deformations of Lorentz-invariance, additional space-like dimensions, the existence of a minimal length scale or a generalized uncertainty principle, holography, space-time fluctuations, fundamental discreteness, and so on. I discuss a few examples below. If we develop a model that can be constrained by data, we will learn what properties the fundamental theory can have, and which it cannot have. This in turn can serve as guidance for the development of the theory.

In practice, these phenomenological models quantify deviations from general relativity and/or quantum field theory. One expects that the only additional dimensionful scale in these models is the Planck scale, which gives a ‘natural’ range for the expected size of effects in which all dimensionless constants are of order one. The aim is then to find an experiment that is sensitive to this natural parameter range. Since most of these models do not actually deal with quanta of the gravitational field, I prefer to speak more generally of “Planck scale effects” being what we are looking for.

Example: Lorentz-invariance violation

The best known example demonstrating that effects are measurable even when they are suppressed by the Planck scale is the violation of Lorentz-invariance. You expect violations of Lorentz-invariance in models for space-time that make use of a preferred frame, which violates observer-independence – for example some regular lattice or condensate that evolves with some special time-slicing.

Such violations of Lorentz-invariance can be described by extensions of the standard model in which the fields couple to a time-like vector field, and these couplings change the predictions of the standard model. Even though the effects are tiny, many of them are measurable.

The best example is maybe vacuum Cherenkov radiation: the spontaneous emission of a photon by an electron. This process is normally entirely forbidden, which makes it a very sensitive probe. With Lorentz-invariance violation, an electron above a certain energy threshold will start to lose energy by radiating photons. We thus should not receive electrons above this threshold from distant astrophysical sources. From the highest-energy electrons of astrophysical origin that we have measured, we can thus derive a bound on the possible violation of Lorentz invariance. This bound already lies (way) beyond the Planck scale, which means that the natural parameter range is excluded.
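For a rough idea of the numbers: with a cubic modification of the electron's dispersion relation suppressed by the Planck mass, E² ≈ p² + m² − ξ p³/M_Pl, the threshold energy scales as (m² M_Pl/ξ)^(1/3). This little estimate is my own sketch of the standard argument, not taken from a specific paper:

```python
# Order-of-magnitude threshold for vacuum Cherenkov radiation with a
# cubic Lorentz-violating term in the electron's dispersion relation.
# The threshold scales as (m^2 * M_Pl / xi)^(1/3).

M_PLANCK_GEV = 1.22e19      # Planck mass in GeV
M_ELECTRON_GEV = 0.511e-3   # electron mass in GeV

def cherenkov_threshold(xi):
    return (M_ELECTRON_GEV**2 * M_PLANCK_GEV / xi) ** (1.0 / 3.0)

e_th = cherenkov_threshold(xi=1.0)   # 'natural', order-one coefficient
print(f"threshold for xi = 1: ~{e_th / 1e3:.0f} TeV")
```

For an order-one ξ the threshold comes out around 10 TeV, while electrons in astrophysical sources such as the Crab Nebula are inferred to reach energies far above that; since we do see such electrons, order-one ξ is excluded, which is the sense in which the bound lies beyond the Planck scale.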

This shows that we can constrain Planck scale effects even though they are tiny.

Now this is a negative result in the sense that we have ruled out certain properties. But from this we have learned a lot. Approaches which induce such violations of Lorentz-invariance are no longer viable.

Example: Lorentz-invariance deformation

Deformations of Lorentz-invariance have been suggested as symmetries of the ground state of space-time. In contrast to violations of Lorentz-invariance, they do not single out a preferred frame. They generically lead to modifications of the speed of light, which can become energy-dependent.

I have explained a great many times that I think these models are flawed because they bring more problems than they solve. But leaving aside my criticism of the model, it can be experimentally tested. The energy dependence of the speed of light is tiny – a Planck scale effect – but the measurable time-difference adds up over the distance that photons of different energies travel. This is why highly energetic photons from distant gamma ray bursts are presently receiving a lot of attention as possible probes of quantum gravitational effects.
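A rough estimate (my own, ignoring the redshift integral a careful treatment would include) shows why cosmological distances make the tiny effect measurable:

```python
# Arrival-time difference for photons of different energies, if the photon
# speed depends linearly on E / M_Planck:
#     delta_t ~ (delta_E / M_Pl) * (D / c)
# Cosmological corrections (the redshift integral) are ignored here.

M_PLANCK_GEV = 1.22e19
LIGHT_TRAVEL_TIME_S = 4e17   # a few Gpc, a typical gamma-ray-burst distance

def time_delay(delta_e_gev):
    return (delta_e_gev / M_PLANCK_GEV) * LIGHT_TRAVEL_TIME_S

for e in (1.0, 10.0, 100.0):   # photon energy differences in GeV
    print(f"delta_E = {e:5.1f} GeV -> delay ~ {time_delay(e):.3f} s")
```

Delays of a sizeable fraction of a second for GeV photons are within reach of gamma-ray telescopes, whose timing resolution is far better than that; this is what makes the natural parameter range testable.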

The current status is that we are just about to reach the natural parameter range expected for a Planck scale effect. It is presently a very active research area.

Example: Decoherence induced by space-time foam

If space-time undergoes quantum fluctuations that couple to all matter fields, this may induce decoherence in quantum mechanical oscillations. We discussed this previously in this post. In oscillations of neutral Kaon systems, we are presently just about to reach Planck scale sensitivity.

Miscellaneous other examples

There is no lack of creativity in the community! Some other examples of varying plausibility that we have discussed on this blog are Craig Hogan’s quest for holographic noise, Bekenstein’s table-top experiment that searches for Planck-length discreteness, massive quantum oscillators testing Planck-scale modified commutation relations, and searches for evidence for a generalized uncertainty in tritium decay. There is also a vast body of work on leftover quantum gravitational effects from the early universe, captured in various models for string cosmology and loop quantum cosmology, and of course there are cosmic (super) strings. There are further proposed tests for the idea that gravity is just classical (still a little outside the natural parameter range), and suggestions to look for dimensional reduction.

This is not an exhaustive list but just to give you a sense of the breadth of the topics.

Demarcation issues

What counts and what doesn’t count as phenomenological quantum gravity is inevitably somewhat subjective. For example, I do not count the beyond-the-standard-model physics of grand unification, though, if you believe in a theory of everything, this might be relevant for quantum gravity. I also don’t count applications of AdS/CFT, because these do not describe gravitational systems in our universe, though arguably they are examples of some quantized version of gravity. I also don’t count general modifications of quantum theory or general relativity, though these might of course be very relevant to the problem. I don’t label these phenomenological quantum gravity mostly for practical reasons, not ideological ones. One has to draw the line somewhere.


I often get asked which approach to quantum gravity I believe in. When it comes to religious affiliation, I’m not only an atheist, I was never Christianized: I have never belonged to any church and have no intention of joining one. The same can be said about my research in quantum gravity - I don’t belong to any church there either. I have on occasion erroneously been called a string theorist, and I have been mistaken for working on loop quantum gravity. Depending on the situation, that can be amusing (at a conference) or annoying (in a job interview). For many people it still seems to be hard to understand that the phenomenology of quantum gravity is a separate research area that does not build on the framework of any particular approach.

The aim of my work is to identify the most promising experiments to find evidence for quantum gravity. For that, we need phenomenological models to quantify the effects, and we need to understand the models that we have (for me that includes criticizing them). I follow with interest the progress in various approaches to quantum gravity (presently I’m quite excited about Causal Sets) and I try to develop testable phenomenological models based on these developments. On the practical side, I organize conferences and workshops to bring together theoreticians with experimentalists who have an interest in the topic to stimulate exchange and the generation of new ideas.

What I do believe in, and what I hope the above examples illustrate, is that it is possible for us to find experimental evidence for quantum gravity if we ask the right questions and look in the right places.

Friday, June 14, 2013

Nordita’s First Workshop for Science Writers, Summary

Patrick Sutton
George and I came up with the idea for this workshop one year ago at a reception of an earlier Nordita workshop. Yes, alcohol was involved. We talked about how science writers often feel like they’re running on a treadmill, having to keep up with the frenetic pace of publishing, only seldom getting a chance to take a few days off to gain some broader perspective. And we talked about how researchers too are running on a treadmill, having to keep up with the pace of their colleagues’ publications, and often feel that science writers miss the broader perspective.

And so we set ourselves the goal to get everybody off the treadmill for a few days.

Our “workshop for science writers”, which took place May 27-29, was devised for both the writers and the physicists: for the writers, to hear what topics in astrophysics and cosmology will soon be on the agenda and what science journalists really need to know about them; and for the physicists, to share both their knowledge and their motivation, and to caution against common misunderstandings.

We modeled the workshop on “boot camps” organized by the Space Telescope Science Institute, Woods Hole Oceanographic Institute, U.C. Santa Cruz, and other institutions. Our workshop was a very intense and tightly packed meeting, with lectures by experts on selected topics in astrophysics and cosmology, followed by question and answer sessions.

George, wired.
On Tuesday afternoon, we visited the phonetics lab at Stockholm University, which was a fun excursion into a totally different area of science. At the lab, participants could analyze their voice spectra and airflow during speech, and learn the physics behind speech production. They could also take an EEG, which the researchers at the lab use to study which brain areas are involved in language processing and how that changes during infancy.

On Tuesday evening, one of the participants of the workshop, Robert Nemiroff, gave a public lecture at CosmoNova. The fully booked lecture took the audience on a tour through the solar system and beyond, projected on the 17m IMAX screen, while Robert explained the science behind the amazing photos and videos. Besides the stunning images, it was also great to see so many people interested in the laws of physics that shape our universe. (The guy sitting next to me held a copy of Lee Smolin’s new book on his lap which caused me some cognitive dissonance though.)

It was admittedly quite an organizational challenge to find the right level of technical detail for an audience that physicists rarely deal with. I think however that the question and answer sessions, as well as the large number of breaks, were useful for participants to talk to lecturers individually. We also had many interesting discussions about the tension between scientific accuracy and popular science writing. As you can guess, I inevitably come down on the side of scientific accuracy.

George turned out to be an excellent organizer, though clearly not used to physicists' compulsive ignoring of deadlines and reminders. I found it quite interesting that when I sent out mass emails to the participants asking for a reply, the first cohort of replies would come almost exclusively from the science writers, frequently within minutes. Among the physicists there were but two who'd answer within 24 hours and meet the deadlines; the rest waited for multiple reminders. The other interesting contrast was that the science writers were considerably more comfortable and engaged with social media.

For me, it was a great pleasure to get to know such an interesting and diverse group of people. I’m neither an astrophysicist nor a cosmologist nor a science writer, and I learned a lot at this workshop - it will probably inspire some more blogposts.

You can find soundbites and links from the meeting on twitter here, and slides of the lectures here.

George Musser, Robert Nemiroff, I, and a bunch of beautiful flowers.

Thursday, June 06, 2013

Quantum gravity phenomenology ≠ detecting gravitons

First direct evidence for gravitons.
I’ve never met Freeman Dyson, but I’ve argued with him many times.

Almost every time I give a seminar about my research field, the phenomenology of quantum gravity, I find myself in the bizarre situation of first having to convince the audience that it is a research field. And that even though hundreds of people work on it. I have been organizing and co-organizing a series of conferences on Experimental Search for Quantum Gravity, and in each installment we had to turn away applicants due to space limitations. The arXiv is full of papers on the topic, more than I can keep up with on this blog, and it’s in the popular press more often than I’d like*. Why are my fellow physicists so slow to notice? I hold Freeman Dyson responsible for this.

Dyson has popularized the idea that quantum gravity is inaccessible to experiment and thereby discouraged studies of phenomenological consequences of quantum gravity. In a 2004 review of Brian Greene’s book “The Fabric of the Cosmos” he wrote:
“According to my hypothesis [...] the two theories [general relativity and quantum theory] are mathematically different and cannot be applied simultaneously. But no inconsistency can arise from using both theories, because any differences between their predictions are physically undetectable.”
And in a 2012 essay for the Edge Annual Question, he still pushed the idea of quantum gravitational effects being unobservable:
“I propose as a hypothesis... that single gravitons may be unobservable by any conceivable apparatus. If this hypothesis were true, it would imply that theories of quantum gravity are untestable and scientifically meaningless. The classical universe and the quantum universe could then live together in peaceful coexistence. No incompatibility between the two pictures could ever be demonstrated. Both pictures of the universe could be true, and the search for a unified theory could turn out to be an illusion.”
The problem with this argument is that he equates the observation of a single graviton with evidence for a quantization of gravity. But the two are not the same. If single gravitons were unobservable, it would not imply that “theories of quantum gravity are untestable and scientifically meaningless.”

It might indeed be that we will never be able to detect gravitons. One can estimate the probability of detecting gravitons, and even with an extremely futuristic detector the size of Jupiter put in orbit around a neutron star, the chances would be slim. (See this paper for estimates.) Clearly not an experiment you want to write a grant proposal for.
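To see why the chances are so slim, the back-of-envelope logic is just an event-rate estimate: expected detections = flux × cross-section × number of targets × observation time. The sketch below uses hypothetical placeholder numbers chosen only to illustrate the shape of the argument; they are not the values from the cited paper.

```python
# Order-of-magnitude graviton event-rate estimate:
#   rate = flux * cross_section * n_targets
# ALL numbers are hypothetical placeholders for illustration,
# NOT the estimates from the paper referenced in the text.

flux = 1e10        # gravitons per m^2 per second at the detector (placeholder)
sigma = 1e-70      # absorption cross-section per target atom, in m^2 (placeholder)
n_targets = 1e50   # number of target atoms in a Jupiter-sized detector (placeholder)

rate = flux * sigma * n_targets        # expected detections per second
seconds_per_year = 3.15e7
events_per_year = rate * seconds_per_year

print(f"expected events per year: {events_per_year:e}")
```

The point is structural: because the cross-section is so absurdly tiny, even a detector with an astronomical number of target atoms yields far less than one event per year, which is why writing a grant proposal for this experiment is not advisable.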

But we don’t need to detect single gravitons to find experimental evidence for quantum gravity.

Look around. The fact that atoms are stable is evidence for the quantization of the electromagnetic interaction. You don’t need to detect single photons for that. You also don’t need to resolve atomic structures to find evidence for the atomic theory. Brownian motion famously provided this evidence, visible by eye. And Planck introduced what is now known as “Planck’s constant” before Einstein’s Nobel-prize winning explanation for the photoelectric effect.

If we pay attention to the history of physics, it is thus plausible that we can find evidence for quantum gravity without directly detecting gravitons. The quantum theory of gravity might have consequences that we can access in regimes where gravity is weak, as long as we ask the right questions.

Some people have a linguistic problem with calling something a “quantum gravitational effect” if it isn’t actually an effect that directly involves quanta of the gravitational field. This is why I instead often use the expression “Planck scale effects” to refer to effects beyond the standard model that might be signatures of quantum gravity.

Interestingly, Christine recently pointed me to a writeup of a 2012 talk by Freeman Dyson, in which he discusses the possibility of detecting gravitons without jumping to the conclusion that an inability to detect gravitons means that quantum gravity is a subject for philosophers. Instead, Dyson is very careful with stating:
“One hypothesis is that gravity is a quantum field and gravitons exist. A second hypothesis is that the gravitational field is a statistical concept like entropy or temperature, only defined for gravitational effects of matter in bulk and not for effects of individual elementary particles… If a graviton detector is in principle impossible, then both hypotheses remain open.”
A hooray for Dyson!

Unfortunately, there are still other people barking up the same tree, for example by pulling the accelerator argument. For example John Horgan writes:
“String theory, loop-space theory and other popular candidates for a unified theory postulate phenomena far too minuscule to be detected by any existing or even conceivable (except in a sci-fi way) experiment. Obtaining the kind of evidence of a string or loop that we have for, say, the top quark would require building an accelerator as big as the Milky Way.”
Horgan is well known for proclaiming The End of Science, and it seems indeed he’d run out of science when he wrote the above. To begin with, string theory doesn’t “postulate... phenomena,” what would be the point of doing this? It postulates, drums please, strings. And I’m not at all sure what “loop-space theory” is supposed to be. But leaving aside this demonstration of Horgan’s somewhat fuzzy understanding of the subject, if we could build a detector the size of the Milky Way, we’d be able to test very high energies, all right. But that doesn’t mean we can conclude this is the only way to find evidence for quantum gravity.

Luckily Horgan has colleagues who think before they write, like George Musser who put it this way:
“[Q]uantum gravity” and “experiment” are… like peanut butter and chocolate. They actually go together quite tastily.
(I had meant to write a summary of which possible experiments for quantum gravity pheno are presently being discussed and how plausible I think they are to deliver results, but I got distracted by Dyson’s above mentioned paper on graviton detection. The summary will follow some other time. Update: The summary is here.)

*Almost everything I read in the popular press about evidence for quantum gravity is wrong or misleading or both. But then you already knew I would complain about this :p

Monday, June 03, 2013

Why do Science?

I sat down to write a piece explaining why scientific research is essential to our societies and why we should invest in applied and basic science. Then I recalled I don’t believe in free will. This isn’t always easy... So I took out the “should” from the title because it’s not like we have a choice. Evidently, we do science! The question is why? And will we continue?

Natural selection, then and now

Developing accurate theories of nature that allow making predictions about the world is an evolutionary advantage. Understanding our environment and ourselves enables us to construct tools and shape nature to our needs. It thus makes sense that natural selection favors using brains to develop theories of nature.

As it is often the case though, natural selection favored traits that then extend beyond the ones immediately relevant for survival. And so the human brain has become very adept at constructing consistent explanations generally. If we encounter any inconsistency, we mentally chew on it and try to find a solution. This is why we cannot help but write thousands of papers on the black hole information paradox. This is why Dyson’s belief that inconsistencies between quantum mechanics and general relativity will forever remain outside experimental detection does not deter physicists from trying to resolve this inconsistency: It’s nature, not nurture.

In fact, our brain is so eager to create consistent theories that it sometimes does so by denying facts which won’t fit. This is why we are prone to confirmation bias, and why, in extreme cases, paralyzed people deny being unable to tie their shoes or lift an arm (examples from Ramachandran’s book “Phantoms in the Brain”).

But leaving aside the inevitable overshooting, evolution has endowed us with a brain that is able and eager to develop consistent explanations. This is why we do science.

The question whether we will continue to do science, and what type of science, is more involved than asking whether scientific thinking has benefitted the reproduction of certain genes. The reason is that we have become so good at using nature to our needs that evolution no longer acts by just selecting the phenotypes best adapted to a given environment. Instead, we can make the environment fit to us.

Today, a major effort of societies goes into eradicating risks and diseases, optimizing crops and agricultural yields, and developing all kinds of technologies to minimize exposure to natural events. Natural selection of course still proceeds. It’s a process that acts on adaptive systems so generally and unavoidably that Lee Smolin famously uses it to explain the evolution of universes. But what does change is the mechanism that creates the mutations among which the “fittest” has an evolutionary advantage. Since we humans now make large changes to the environment in which we have to survive, the technologies that enable us to make these changes have become part of the random mutations among which selection acts. Backreaction can no longer be neglected.

In other words, natural selection can only act on expressions of genes and ideas together. The innovation provided by scientific progress is now part of the mutations that create species better adapted to the environment.

Applied and basic research

The purpose of scientific research is thus to act as an innovation machine. It enables humans to “fit” better to their environment. This is the case at least for applied research. So what then is the rationale to engage in basic research?

First note that what is usually referred to as “basic research” is rarely “non-applied,” but rather it’s “not immediately applied”. Basic research is commonly pursued on the rationale that it is the precursor of applications in the far future, a future so far that it isn’t yet possible to tell what the application might be. This basic research is necessary to sustain innovation in the long run.

Also note that what is commonly referred to as an “application” doesn’t cover the full scope of innovation that scientific research brings. Scientific insight, especially a paradigm shift, has the potential to entirely reshape the way we perceive ourselves and our place in the world. This can have major cultural and social impacts that have nothing to do with the development of technologies.

Marxist thought for example has thrived on the belief that we differ only in the chances and opportunities given to us and not by heritable talents that lead to different performances, a belief now known to be scientifically untenable. Planned economy seems like a good idea if you believe in a clockwork universe in which you can make accurate predictions, an idea that doesn’t seem so good if you know something about chaos theory. Adam Smith’s “invisible hand” is based on the belief that self-organization is natural and leads to desirable outcomes, and we’re only slowly learning the problems of managing risk in complex and highly connected networks. The ongoing change in attitude towards religion is driven by science shining light on inconsistencies in religious storytelling. And many scientists seem to be afraid of what it could do to society if people realized that they have no free will. All these are non-technological examples of innovation created by scientific knowledge.

Having said that, we are left to wonder about the scientific research that is neither applied (immediately or in the far future) nor has any other impact on our societies. There very possibly is such research. But we don’t know in advance whether or not a piece of research will become relevant in the future. I previously referred to this research as “knowledge for the sake of knowledge.” Now I am thinking that a better description would have been You-never-know-ledge.


Since we have to manage finite resources on this planet, there is always the question of how much energy, time, money, and people to invest into any one human activity for the most beneficial outcome. This is a question which has to be addressed on a case-by-case basis and greatly depends on what is meant by “beneficial”, a word that would bring us back to opinions and “should”s. So the above considerations don’t tell us how much investment into science is enough. But they do tell us that we need continuous investment into scientific research, both applied and basic, to allow mankind to sustain and improve the Darwinian “fit” to the environment that we are changing and creating ourselves.