Monday, March 27, 2017

Book review: “Anomaly!” by Tommaso Dorigo

Anomaly! Collider Physics and the Quest for New Phenomena at Fermilab
Tommaso Dorigo
World Scientific Publishing Europe Ltd (November 17, 2016)

Tommaso Dorigo is a familiar name in the blogosphere. Over at “A Quantum Diaries Survivor”, he reliably comments on everything going on in particle physics. Located in Venice, Tommaso is a member of the CMS collaboration at CERN and was part of the CDF collaboration at the Tevatron – a US particle collider that ceased operation in 2011.

Anomaly! is Tommaso’s first book and it chronicles his time in the CDF collaboration from the late 1980s until 2000. This covers the measurement of the mass of the Z-boson, the discovery of the top-quark and the – eventually unsuccessful – search for supersymmetric particles. In his book, Tommaso weaves together the scientific background about particle physics with brief stories of the people involved and their – often conflict-laden – discussions.

The first chapters of the book contain a brief summary of the standard model and quantum field theory and can be skipped by those familiar with these topics. The book is mostly self-contained in that Tommaso provides all the knowledge necessary to understand what’s going on (with a few omissions that I believe don’t matter much). But the pace is swift. I sincerely doubt a reader without background in particle physics will be able to get through the book without re-reading some passages many times.

It is worth emphasizing that Tommaso is an experimentalist. I think I hadn’t previously realized how much the popular science literature in particle physics has so far been dominated by theorists. This makes Anomaly! a unique resource. Here, the reader can learn how particle physics is really done! From the various detectors and their designs, to parton distribution functions, to triggers and Monte Carlo simulations, Tommaso doesn’t shy away from going into all the details. At the same time, his anecdotes showcase how a large collaboration like CDF – with more than 500 members – works.

That having been said, the book is also somewhat odd in that it simply ends without summary, conclusion, or outlook. Given that the events Tommaso writes about date back 30 years, I’d have been interested to hear whether something has changed since. Is the software development now better managed? Is there still so much competition between collaborations? Is the relation to the media still as fraught? I got the impression an editor pulled the manuscript out from under Tommaso’s still-typing fingers because no end was in sight 😉

Besides this, I have little to complain about. Tommaso’s writing style is clear and clean, and also in terms of structure – mostly chronological – nothing seems amiss. My major criticism is that the book doesn’t have any references, meaning the reader is stuck there without any guide for how to proceed in case he or she wants to find out more.

So should you, or should you not, buy the book? If you’re considering becoming a particle physicist, I strongly recommend you read this book to find out if you fit the bill. And if you’re a science writer who regularly reports on particle physics, I also recommend you read this book to get an idea of what’s really going on. All the rest of you I have to warn that while the book is packed with information, it’s for the lovers. It’s about how the author tracked down a factor of 1.25^2 to explain why his data analysis came up with 588 rather than 497 Z → bb̄ decays. And you’re expected to understand why that’s exciting.

On a personal note, the book brought back a lot of memories. All the talk of Herwig and Pythia, of Bjorken-x, rapidity and pseudorapidity, missing transverse energy, the CTEQ tables, hadronization, lost log-files, missed back-ups, and various fudge-factors reminded me of my PhD thesis – and of all the reasons I decided that particle physics isn’t for me.

[Disclaimer: Free review copy.]

Wednesday, March 22, 2017

Academia is fucked-up. So why isn’t anyone doing something about it?

A week or so ago, a list of perverse incentives in academia made the rounds. It offers examples like “rewarding an increased number of citations” that – instead of encouraging work of high quality and impact – results in inflated citation lists, an academic tit-for-tat which has become standard practice. Likewise, rewarding a high number of publications doesn’t produce more good science, but merely finer slices of the same science.

Perverse incentives in academia.
Source: Edwards and Roy (2017).

It’s not as if perverse incentives in academia are news. I wrote about this problem ten years ago, referring to it as the confusion of primary goals (good science) with secondary criteria (like, for example, the number of publications). I later learned that Steven Pinker made the same distinction for evolutionary goals, referring to it as ‘proximate’ vs ‘ultimate’ causes.

The difference can be illustrated in a simple diagram (see below). A primary goal is a local optimum in some fitness landscape – it’s where you want to go. A secondary criterion is the first approximation for the direction towards the local optimum. But once you’re on the way, higher-order corrections must be taken into account, otherwise the secondary criterion will miss the goal – often badly.
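To make this concrete, here is a minimal Python sketch – my own toy example with a made-up fitness function, not the diagram from the post. An agent that keeps walking in the direction that pointed uphill at the start (the secondary criterion, never refined) overshoots the optimum, while an agent that re-evaluates the direction at every step reaches the primary goal.

```python
import numpy as np

# Toy "fitness landscape" with its maximum (the primary goal) at (2, 1).
def fitness(x, y):
    return -(x - 2.0)**2 - (y - 1.0)**2

def gradient(x, y):
    return np.array([-2.0 * (x - 2.0), -2.0 * (y - 1.0)])

start = np.array([0.0, 0.0])
step = 0.1

# Secondary criterion: the uphill direction at the start,
# used as a fixed proxy and never refined.
fixed_dir = gradient(*start)
fixed_dir /= np.linalg.norm(fixed_dir)
p = start.copy()
for _ in range(50):
    p += step * fixed_dir

# Primary goal: re-evaluate the direction at every step
# (the "higher-order corrections" of the text).
q = start.copy()
for _ in range(50):
    q += step * gradient(*q)

print("optimizing the fixed proxy direction:", p, fitness(*p))
print("following the refined direction:     ", q, fitness(*q))
```

The first agent ends up far past the optimum with low fitness; the second converges to it.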


The number of publications, to come back to this example, is a good first-order approximation. Publications demonstrate that a scientist is alive and working, is able to think up and finish research projects, and – provided the papers are published in peer-reviewed journals – that their research meets the quality standard of the field.

To second approximation, however, increasing the number of publications does not necessarily also lead to more good science. Two short papers don’t fit as much research as do two long ones. Thus, to second approximation we could take into account the length of papers. Then again, the length of a paper is only meaningful if it’s published in a journal that has a policy of cutting superfluous content. Hence, you have to further refine the measure. And so on.

This type of refinement isn’t specific to science. You can see in many other areas of our lives that, as time passes, the means to reach desired goals must be more carefully defined to make sure they still lead where we want to go.

Take sports as an example. As new technologies arise, the Olympic committee has added many additional criteria on what shoes or clothes athletes are allowed to wear and which drugs make for an unfair advantage, and it has had to rethink what distinguishes a man from a woman.

Or tax laws. The Bible left it at “When the crop comes in, give a fifth of it to Pharaoh.” Today we have books full of ifs and thens and whatnots so incomprehensible I suspect it’s no coincidence suicide rates peak during tax season.

It’s debatable of course whether current tax laws indeed serve a desirable goal, but I don’t want to stray into politics. Relevant here is only the trend: Collective human behavior is difficult to organize, and it’s normal that secondary criteria to reach primary goals must be refined as time passes.

The need to quantify academic success is a recent development. It’s a consequence of changes in our societies, of globalization, increased mobility and connectivity, and is driven by the increased total number of people in academic research.

Academia has reached a size where accountability is both important and increasingly difficult. Unless you work in a tiny subfield, you almost certainly don’t know everyone in your community and can’t read every single publication. At the same time, people are more mobile than ever, and applying for positions has never been easier.

This means academics need ways to judge colleagues and their work quickly and accurately. It’s not optional – it’s necessary. Our society changes, and academia has to change with it. It’s either adapt or die.

But what has been academics’ reaction to this challenge?

The most prevalent reaction I witness is nostalgia: The wish to return to the good old times. Back then, you know, when everyone on the committee had the time to actually read all the application documents and was familiar with all the applicants’ work anyway. Back then when nobody asked us to explain the impact of our work and when we didn’t have to come up with 5-year plans. Back then, when they recommended that pregnant women smoke.

Well, there’s no going back in time, and I’m glad the past has passed. I therefore have little patience for such romantic talk: It’s not going to happen, period. Good measures for scientific success are necessary – there’s no way around it.

Another common reaction is the claim that quality isn’t measurable – more romantic nonsense. Everything is measurable, at least in principle. In practice, many things are difficult to measure. That’s exactly why measures have to be improved constantly.

Then, inevitably, someone will bring up Goodhart’s Law: “When a measure becomes a target, it ceases to be a good measure.” But that is clearly wrong. Sorry, Goodhart. If you optimize the measure, you get exactly what you asked for. The problem is that often the measure wasn’t what you wanted to begin with.

Using the terminology introduced above, Goodhart’s Law can be reformulated as: “When people optimize a secondary criterion, they will eventually reach a point where further optimization diverts from the main goal.” But our reaction to this should be to improve the measure, not to throw in the towel and complain “It’s not possible.”

This stubborn denial of reality, however, has an unfortunate consequence: Academia has gotten stuck with the simple-but-bad secondary criteria that are currently in use: number of publications, the infamous h-index, the journal impact factor, renowned co-authors, positions held at prestigious places, and so on.

We all know they’re bad measures. But we use them anyway because we simply don’t have anything better. If your director/dean/head/board is asked to demonstrate how great your place is, they’ll fall back on the familiar number of publications, and as a bonus point out who has recently published in Nature. I’ve seen it happen. I just had to fill in a form for the institute’s board in which I was asked for my h-index and my paper count.

Last week, someone asked me if I’d changed my mind in the ten years since I wrote about this problem first. Needless to say, I still think bad measures are bad for science. But I think that I was very, very naïve to believe just drawing attention to the problem would make any difference. Did I really think that scientists would see the risk to their discipline and do something about it? Apparently that’s exactly what I did believe.

Of course nothing like this happened. And it’s not just because I’m a nobody who nobody’s listening to. Concerns similar to mine have been raised with increasing frequency by more widely known people in more popular outlets, like Nature and Wired. But nothing’s changed.

The biggest obstacle to progress is that academics don’t want to admit the problem is of their own making. Instead, they blame others: policy makers, university administrators, funding agencies. But these merely use measures that academics themselves are using.

The result has been lots of talk and little action. But what we really need is a practical solution. And of course I have one on offer: an open-source software package that allows every researcher to customize their own measure for what they think is “good science,” based on the available data. That would include the number of publications and their citations. But there is much more information in the data which currently isn’t used.

You might want to know whether someone’s research connects areas that are only loosely connected. Or how many single-authored papers they have. You might want to know how well their keyword-cloud overlaps with that of your institute. You might want to develop a measure for how “deep” and “broad” someone’s research is – two terms that are often used in recommendation letters but that are extremely vague.
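No such software exists yet, so the following is a purely hypothetical Python sketch of what a user-defined measure could look like. The field names, weights, and the example record are all invented for illustration; nothing here queries a real bibliometric database.

```python
# Hypothetical sketch of a customizable "good science" measure.
# All field names, weights, and data are made up for illustration.

def my_measure(researcher, weights):
    """Combine publication data into a single user-defined score."""
    papers = researcher["papers"]
    score = weights["papers"] * len(papers)
    score += weights["citations"] * sum(p["citations"] for p in papers)
    score += weights["single_author"] * sum(1 for p in papers if len(p["authors"]) == 1)
    # A crude stand-in for "breadth": how many distinct fields the papers touch.
    fields = {f for p in papers for f in p["fields"]}
    score += weights["breadth"] * len(fields)
    return score

example = {
    "papers": [
        {"citations": 40, "authors": ["A"], "fields": ["gr-qc", "astro-ph"]},
        {"citations": 12, "authors": ["A", "B"], "fields": ["hep-th"]},
    ]
}
my_weights = {"papers": 1.0, "citations": 0.1, "single_author": 2.0, "breadth": 0.5}

print(my_measure(example, my_weights))
```

The point is not these particular weights, but that each researcher, institute, or committee could plug in their own.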

Such individualized measures would not only automatically update as people revise criteria, they would also counteract the streamlining of global research and encourage local variety.

Why isn’t this happening? Well, besides me there’s no one to do it. And I have given up trying to get funding for interdisciplinary research. The inevitable response I get is that I’m not qualified. Of course it’s correct – I’m not qualified to code and design a user-interface. But I’m totally qualified to hire some people and kick their asses. Trust me, I have experience kicking ass. Price tag to save academia: An estimated 2 million Euro for 5 years.

What else has changed in the last ten years? I’ve found out that it’s possible to get paid for writing. My freelance work has been going well. The main obstacle I’ve faced is lack of time, not lack of opportunity. And so, when I look at academia now, I do it with one leg outside. What I see is that academia needs me more than I need academia.

The current incentives are extremely inefficient and waste a lot of money. But nothing is going to change until we admit that solving the problem is our own responsibility.

Maybe, when I write about this again, ten years from now, I’ll not refer to academics as “us” but as “they.”

Wednesday, March 15, 2017

No, we probably don’t live in a computer simulation

According to Nick Bostrom of the Future of Humanity Institute, it is likely that we live in a computer simulation. And one of our biggest existential risks is that the superintelligence running our simulation shuts it down.

The simulation hypothesis, as it’s called, enjoys a certain popularity among people who like to think of themselves as intellectual, believing it speaks for their mental flexibility. Unfortunately it primarily speaks for their lack of knowledge of physics.

Among physicists, the simulation hypothesis is not popular and that’s for a good reason – we know that it is difficult to find consistent explanations for our observations. After all, finding consistent explanations is what we get paid to do.

Proclaiming that “the programmer did it” doesn’t only not explain anything – it teleports us back to the age of mythology. The simulation hypothesis annoys me because it intrudes on the terrain of physicists. It’s a bold claim about the laws of nature that, however, doesn’t pay any attention to what we know about the laws of nature.

First, to get it out of the way, there’s a trivial way in which the simulation hypothesis is correct: You could just interpret the presently accepted theories to mean that our universe computes the laws of nature. Then it’s tautologically true that we live in a computer simulation. It’s also a meaningless statement.

A stricter way to speak of the computational universe is to make more precise what is meant by ‘computing.’ You could say, for example, that the universe is made of bits and an algorithm encodes an ordered time-series which is executed on these bits. Good – but already we’re deep in the realm of physics.

If you try to build the universe from classical bits, you won’t get quantum effects, so forget about this – it doesn’t work. This might be somebody’s universe, maybe, but not ours. You either have to overthrow quantum mechanics (good luck), or you have to use qubits. [Note added for clarity: You might be able to get quantum mechanics from a classical, nonlocal approach, but nobody knows how to get quantum field theory from that.]
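To spell out the difference: n classical bits are, at any moment, in exactly one of their 2^n configurations, whereas the general state of n qubits is a superposition

\[ |\psi\rangle \;=\; \sum_{x \in \{0,1\}^n} \alpha_x \, |x\rangle, \qquad \sum_x |\alpha_x|^2 = 1, \]

and entangled states of this kind violate Bell inequalities, which no local assignment of definite classical bit values can reproduce – hence the nonlocality caveat in the note above.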

Even from qubits, however, nobody’s been able to recover the presently accepted fundamental theories – general relativity and the standard model of particle physics. The best attempt to date is that by Xiao-Gang Wen and collaborators, but they are still far away from getting back general relativity. It’s not easy.

Indeed, there are good reasons to believe it’s not possible. The idea that our universe is discretized clashes with observations because it runs into conflict with special relativity. The effects of violating the symmetries of special relativity aren’t necessarily small and have been looked for – and nothing’s been found.

For the purpose of this present post, the details don’t actually matter all that much. What’s more important is that these difficulties of getting the physics right are rarely even mentioned when it comes to the simulation hypothesis. Instead there’s some fog about how the programmer could prevent simulated brains from ever noticing contradictions, for example contradictions between discretization and special relativity.

But how does the programmer notice a simulated mind is about to notice contradictions and how does he or she manage to quickly fix the problem? If the programmer could predict in advance what the brain will investigate next, it would be pointless to run the simulation to begin with. So how does he or she know what are the consistent data to feed the artificial brain with when it decides to probe a specific hypothesis? Where does the data come from? The programmer could presumably get consistent data from their own environment, but then the brain wouldn’t live in a simulation.

It’s not that I believe it’s impossible to simulate a conscious mind with human-built ‘artificial’ networks – I don’t see why this should not be possible. I think, however, it is much harder than many future-optimists would like us to believe. Whatever the artificial brains will be made of, they won’t be any easier to copy and reproduce than human brains. They’ll be one-of-a-kind. They’ll be individuals.

It therefore seems implausible to me that we will soon be outnumbered by artificial intelligences with cognitive skills exceeding ours. More likely, we will see a future in which rich nations can afford to raise one or two artificial consciousnesses and then consult them on questions of importance.

So, yes, I think artificial consciousness is on the horizon. I also think it’s possible to convince a mind with cognitive abilities comparable to those of humans that their environment is not what they believe it is. Easy enough to put the artificial brain in a metaphoric vat: If you don’t give it any input, it would never be any wiser. But that’s not the environment I experience and, if you read this, it’s not the environment you experience either. We have a lot of observations. And it’s not easy to consistently compute all the data we have.

Besides, if the reason you build artificial intelligences is consultation, making them believe reality is not what it seems is about the last thing you’d want.

Hence, the first major problem with the simulation hypothesis is to consistently create all the data which we observe by any means other than the standard model and general relativity – because these are, for all we know, not compatible with the universe-as-a-computer.

Maybe you want to argue it is only you alone who is being simulated, and I am merely another part of the simulation. I’m quite sympathetic to this reincarnation of solipsism, for sometimes my best attempt at explaining the world is that it’s all an artifact of my subconscious nightmares. But the one-brain-only idea doesn’t work if you want to claim that it is likely we live in a computer simulation.

To claim it is likely we are simulated, the number of simulated conscious minds must vastly outnumber those of non-simulated minds. This means the programmer will have to create a lot of brains. Now, they could separately simulate all these brains and try to fake an environment with other brains for each, but that would be nonsensical. The computationally more efficient way to convince one brain that the other brains are “real” is to combine them in one simulation.

Then, however, you get simulated societies that, like ours, will set out to understand the laws that govern their environment to better use it. They will, in other words, do science. And now the programmer has a problem, because it must keep close track of exactly what all these artificial brains are trying to probe.

The programmer could of course just simulate the whole universe (or multiverse?) but that again doesn’t work for the simulation argument. Problem is, in this case it would have to be possible to encode a whole universe in part of another universe, and parts of the simulation would attempt to run their own simulation, and so on. This has the effect of attempting to reproduce the laws on shorter and shorter distance scales. That, too, isn’t compatible with what we know about the laws of nature. Sorry.

Stephen Wolfram (of Wolfram Research) recently told John Horgan that:
    “[Maybe] down at the Planck scale we’d find a whole civilization that’s setting things up so our universe works the way it does.”

I cried a few tears over this.

The idea that the universe is self-similar and repeats on small scales – so that elementary particles are built of universes which again contain atoms and so on – seems to hold a great appeal for many. It’s another one of these nice ideas that work badly. Nobody’s ever been able to write down a consistent theory that achieves this – consistent both internally and with our observations. The best attempts I know of are limit cycles in theory space, but to my knowledge that doesn’t really work either.

Again, however, the details don’t matter all that much – just take my word for it: It’s not easy to find a consistent theory for universes within atoms. What matters is the stunning display of ignorance – not to mention arrogance – demonstrated by the belief that for physics at the Planck scale anything goes. Hey, maybe there are civilizations down there. Let’s make a TED talk about it next. For someone who, like me, actually works on Planck scale physics, this is pretty painful.

To be fair, in the interview, Wolfram also explains that he doesn’t believe in the simulation hypothesis, in the sense that there’s no programmer and no superior intelligence laughing at our attempts to pin down evidence for their existence. I get the impression he just likes the idea that the universe is a computer. (Note added: As a commenter points out, he likes the idea that the universe can be described as a computer.)

In summary, it isn’t easy to develop theories that explain the universe as we see it. Our presently best theories are the standard model and general relativity, and whatever other explanation you have for our observations must first be able to reproduce these theories’ achievements. “The programmer did it” isn’t science. It’s not even pseudoscience. It’s just words.

All this talk about how we might be living in a computer simulation pisses me off not because I’m afraid people will actually believe it. No, I think most people are much smarter than many self-declared intellectuals like to admit. Most readers will instead correctly conclude that today’s intelligentsia is full of shit. And I can’t even blame them for it.

Saturday, March 11, 2017

Is Verlinde’s Emergent Gravity compatible with General Relativity?

Dark matter filaments, Millennium Simulation
Image: Volker Springel
A few months ago, Erik Verlinde published an update of his 2010 idea that gravity might originate in the entropy of so-far undetected microscopic constituents of space-time. Gravity, then, would not be fundamental but emergent.

With the new formalism, he derived an equation for a modified gravitational law that, on galactic scales, results in an effect similar to dark matter.

Verlinde’s emergent gravity builds on the idea that gravity can be reformulated as a thermodynamic theory, that is, as if it were caused by the dynamics of a large number of small entities whose exact identity is unknown and also unnecessary to describe their bulk behavior.

If one wants to get back usual general relativity from the thermodynamic approach, one uses an entropy that scales with the surface area of a volume. Verlinde postulates there is another contribution to the entropy which scales with the volume itself. It’s this additional entropy that causes the deviations from general relativity.
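Schematically – the precise coefficients and their derivation are in Verlinde’s paper, this is only the scaling behavior – the total entropy associated with a region of size R then has two pieces,

\[ S(R) \;=\; \underbrace{\frac{A(R)\,c^3}{4\,G\hbar}}_{\text{area-scaling}} \;+\; \underbrace{s_{\rm vol}\,V(R)}_{\text{volume-scaling}}, \]

where the first term is the familiar Bekenstein-Hawking-type contribution and s_vol is a coefficient that, in Verlinde’s construction, is tied to the cosmological (de Sitter) scale.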

However, in the vicinity of matter the volume-scaling entropy decreases until it’s entirely gone. Then one is left with only the area-scaling part and gets back normal general relativity. That’s why on scales where the average density is high – high compared to that of galaxies or galaxy clusters – the equation which Verlinde derives doesn’t apply. This would be the case, for example, near stars.

The idea quickly attracted attention in the astrophysics community, where a number of papers have since appeared which confront said equation with data. Not all of these papers are correct. Two of them seem to have missed entirely that the equation they are using doesn’t apply on solar-system scales. Of the remaining papers, three are fairly neutral in their conclusions, while one – by Lelli et al – is critical. The authors find that Verlinde’s equation – which assumes spherical symmetry – is a worse fit to the data than particle dark matter.

There has not, however, been much response from theoretical physicists so far. I’m not sure why that is. I spoke with science writer Anil Ananthaswamy some weeks ago and he told me he didn’t have an easy time finding a theorist willing to so much as comment on Verlinde’s paper. In a recent Nautilus article, Anil speculates on why that might be:
“A handful of theorists that I contacted declined to comment, saying they hadn’t read the paper; in physics, this silent treatment can sometimes be a polite way to reject an idea, although, in fairness, Verlinde’s paper is not an easy read even for physicists.”
Verlinde’s paper is indeed not an easy read. I spent some time trying to make sense of it and originally didn’t get very far. The whole framework that he uses – dealing with an elastic medium and a strain-tensor and all that – isn’t only unfamiliar but also doesn’t fit together with general relativity.

The basic tenet of general relativity is coordinate invariance, and it’s absolutely not clear how it’s respected in Verlinde’s framework. So, I tried to see whether there is a way to make Verlinde’s approach generally covariant. The answer is yes, it’s possible. And it actually works better than I expected. I’ve written up my findings in a paper which just appeared on the arxiv:


It took some trial and error, but I finally managed to guess a covariant Lagrangian that produces the equations in Verlinde’s paper when one makes the same approximations. Without these approximations, the equations are fully compatible with general relativity. They are however – as so often in general relativity – hideously difficult to solve.
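Schematically – and only schematically, the specific form of the kinetic term is spelled out in the paper – the setup is an action of the usual covariant type,

\[ S \;=\; \int \mathrm{d}^4x \, \sqrt{-g}\,\left(\frac{R}{16\pi G} \;+\; \mathcal{L}_u[u_\mu, g_{\mu\nu}] \;+\; \mathcal{L}_{\rm matter}\right), \]

where u_\mu is the additional vector field discussed below and \mathcal{L}_u is its unusual kinetic term; varying with respect to the metric then gives generally covariant field equations sourced by both normal matter and the new field.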

Making some simplifying assumptions allows one to at least find an approximate solution. It turns out, however, that even if one makes the same approximations as in Verlinde’s paper, the equation one obtains is not exactly the same as his – it has an additional integration constant.

My first impulse was to set that constant to zero, but upon closer inspection that didn’t make sense: The constant has to be determined by a boundary condition that ensures the gravitational field of a galaxy (or galaxy cluster) asymptotes to Friedmann-Robertson-Walker space filled with normal matter and a cosmological constant. Unfortunately, I haven’t been able to find the solution that one should get in the asymptotic limit, hence wasn’t able to fix the integration constant.

This means, importantly, that the data fits which assume the additional constant is zero do not actually constrain Verlinde’s model.

With the Lagrangian approach that I have tried, the interpretation of Verlinde’s model is very different – I dare say far less outlandish. There’s an additional vector field which permeates space-time and which interacts with normal matter. It’s a strange vector field, both because it’s not a gauge boson – unlike the other vector fields we know of – and because it has a different kinetic energy term. In addition, the kinetic term appears in a way one doesn’t commonly encounter in particle physics, but rather in condensed matter physics.

Interestingly, if you look at what this field would do if there was no other matter, it would behave exactly like a cosmological constant.

This, however, isn’t to say I’m sold on the idea. What I am missing is, most importantly, some clue that would tell me the additional field actually behaves like matter on cosmological scales, or at least sufficiently similarly to reproduce other observables, like, for example, baryon acoustic oscillations. This should be possible to find out with the equations in my paper – if one manages to actually solve them.

Finding solutions to Einstein’s field equations is a specialized discipline and I’m not familiar with all the relevant techniques. I will admit that my primary method of solving the equations – to the big frustration of my reviewers – is to guess solutions. It works until it doesn’t. In the case of Friedmann-Robertson-Walker with two coupled fluids, one of which is the new vector field, it hasn’t worked. At least not so far. But the equations are in the paper and maybe someone else will be able to find a solution.
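For readers wondering what ‘finding a solution’ amounts to in practice, here is a minimal numerical sketch in Python. It does not use the equations from my paper – it just integrates the standard flat-FRW Friedmann equation with two non-interacting fluids, pressureless matter plus a constant-density component. The coupled case with the vector field is exactly the part that is still open.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Standard flat FRW with two (uncoupled) fluids: pressureless matter,
# rho_m ~ a^-3, and a constant-density, cosmological-constant-like fluid.
# Units: time in 1/H0, scale factor a = 1 today.
omega_m = 0.3   # matter fraction today
omega_l = 0.7   # constant-density fraction today

def dadt(t, a):
    # Friedmann equation: (da/dt)^2 = omega_m / a + omega_l * a^2
    return np.sqrt(omega_m / a + omega_l * a**2)

sol = solve_ivp(dadt, t_span=(0.0, 2.0), y0=[1.0], dense_output=True)
print("scale factor after two Hubble times:", sol.sol(2.0)[0])
```

Replacing the constant-density fluid by a field with its own equation of motion, coupled to matter, turns this single first-order equation into the system one would actually have to solve.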

In summary, Verlinde’s emergent gravity has withstood the first-line bullshit test. Yes, it’s compatible with general relativity.

Thursday, March 02, 2017

Yes, a violation of energy conservation can explain the cosmological constant

Chad Orzel recently pointed me towards an article in Physics World according to which “Dark energy emerges when energy conservation is violated.” Quoted in the Physics World article are George Ellis, who enthusiastically notes that the idea is “no more fanciful than many other ideas being explored in theoretical physics at present,” and Lee Smolin, according to whom it’s “speculative, but in the best way.” Chad clearly found this somewhat too polite to be convincing and asked me for some open words:



I had seen the headline flashing by earlier but ignored it because – forgive me – it’s obvious energy non-conservation can mimic a cosmological constant.

Reason is that usually, in General Relativity, the expansion of space-time is described by two equations, known as the Friedmann-equations. They relate the velocity and acceleration of the universe’s normalized distance measures – called the ‘scale factor’ – with the average energy density and pressure of matter and radiation in the universe. If you put in energy-density and pressure, you can calculate how the universe expands. That, basically, is what cosmologists do for a living.

The two Friedmann-equations, however, are not independent of each other because General Relativity presumes that the various forms of energy-densities are locally conserved. That means if you take only the first Friedmann-equation and use energy-conservation, you get the second Friedmann-equation, which contains the cosmological constant. If you turn this statement around it means that if you throw out energy conservation, you can produce an accelerated expansion.
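For concreteness, in the standard homogeneous and isotropic case the two Friedmann equations for the scale factor a, with energy density ρ, pressure p, spatial curvature k and cosmological constant Λ, read (in units with c = 1)

\[ \left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\,\rho - \frac{k}{a^2} + \frac{\Lambda}{3}, \qquad \frac{\ddot a}{a} = -\frac{4\pi G}{3}\,(\rho + 3p) + \frac{\Lambda}{3}, \]

and local energy conservation is the continuity equation

\[ \dot\rho + 3\,\frac{\dot a}{a}\,(\rho + p) = 0 . \]

Any two of these three equations imply the third, which is the dependence referred to above.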

It’s an idea I’ve toyed with years ago, but it’s not a particularly appealing solution to the cosmological constant problem. The issue is you can’t just selectively throw out some equations from a theory because you don’t like them. You have to make everything work in a mathematically consistent way. In particular, it doesn’t make sense to throw out local energy-conservation if you used this assumption to derive the theory to begin with.

Upon closer inspection, the Physics World piece summarizes the paper:
which got published in PRL a few weeks ago, but has been on the arxiv for almost a year. Indeed, when I looked at it, I recalled I had read the paper and found it very interesting. I didn’t write about it here because the point they make is quite technical. But since Chad asked, here we go.

Modifying General Relativity is chronically hard because the derivation of the theory is so straightforward that much violence is needed to avoid Einstein’s Field Equations. It took Einstein a decade to get the equations right, but if you know your differential geometry it’s really a three-liner. This isn’t to belittle Einstein’s achievement – the mathematical apparatus wasn’t fully developed at the time and he was guessing his way around not-yet-derived theorems – but merely to emphasize that General Relativity is easy to get but hard to amend.

One of the few known ways to consistently amend General Relativity is ‘unimodular gravity,’ which works as follows.

In General Relativity the central dynamical quantity is the metric tensor (or just “metric”) which you need to measure the ratio of distances relative to each other. From the metric tensor and its first and second derivative you can calculate the curvature of space-time.

General Relativity can be derived from an optimization principle by asking: “From all the possible metrics, which is the one that minimizes curvature given certain sources of energy?” This leads you to Einstein’s Field Equations. In unimodular gravity in contrast, you don’t look at all possible metrics but only those with a fixed metric determinant, which means you don’t allow a rescaling of volumes. (A very readable introduction to unimodular gravity by George Ellis can be found here.)

Unimodular gravity does not result in Einstein’s Field Equations, but only in a reduced version thereof because the variation of the metric is limited. The result is that in unimodular gravity, energy is not automatically locally conserved. Because of the limited variation of the metric that is allowed in unimodular gravity, the theory has fewer symmetries. And, as Emmy Noether taught us, symmetries give rise to conservation laws. Therefore, unimodular gravity has fewer conservation laws.
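Concretely, what one gets from the restricted variation are the trace-free Einstein equations – a standard result for unimodular gravity – which in units with c = 1 read

\[ R_{\mu\nu} - \tfrac{1}{4}\,R\, g_{\mu\nu} \;=\; 8\pi G \left(T_{\mu\nu} - \tfrac{1}{4}\,T\, g_{\mu\nu}\right), \]

and, unlike the full Einstein equations, these do not by themselves enforce the local conservation law \nabla^\mu T_{\mu\nu} = 0.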

I must emphasize that this is not the ‘usual’ non-conservation of total energy one already has in General Relativity, but a new violation of the conservation of local energy-densities that does not occur in General Relativity.

If, however, you then add energy-conservation to unimodular gravity, you get back Einstein’s field equations, though this re-derivation comes with a twist: The cosmological constant now appears as an integration constant. For some people this solves a problem, but personally I don’t see what difference it makes just where the constant comes from – its value is unexplained either way. Therefore, I’ve never found unimodular gravity particularly interesting, thinking, if you get back General Relativity you could as well have used General Relativity to begin with.
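The way the constant appears can be seen in a few lines. Take the covariant divergence of the trace-free equations above, use the contracted Bianchi identity \nabla^\mu R_{\mu\nu} = \tfrac{1}{2}\nabla_\nu R, and re-impose \nabla^\mu T_{\mu\nu} = 0. This gives

\[ \nabla_\nu \left( R + 8\pi G\, T \right) = 0 \quad\Rightarrow\quad R + 8\pi G\, T = 4\Lambda \]

for some constant Λ, and substituting this back into the trace-free equations returns the full Einstein equations, R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} + \Lambda\, g_{\mu\nu} = 8\pi G\, T_{\mu\nu}, with Λ now an integration constant.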

But in the new paper the authors correctly point out that you don’t necessarily have to add energy conservation to the equations you get in unimodular gravity. And if you don’t, you don’t get back general relativity, but a modification of general relativity in which energy conservation is violated – in a mathematically consistent way.

Now, the authors don’t look at all allowed violations of energy-conservation in their paper, and I think smartly so, because most of them will probably result in a complete mess, by which I mean they would be crudely in conflict with observation. They instead look at a particularly simple type of energy non-conservation and show that this effectively mimics a cosmological constant.

They then argue that on the average such a type of energy-violation might arise from certain quantum gravitational effects, which is not entirely implausible. If space-time isn’t fundamental, but is an emergent description that arises from an underlying discrete structure, it isn’t a priori obvious what happens to conservation laws.

The framework proposed in the new paper, therefore, could be useful to quantify the observable effects that arise from this. To demonstrate this, the authors look at the examples of 1) diffusion from causal sets and 2) spontaneous collapse models in quantum mechanics. In both cases, they show, one can use the general description derived in the paper to find constraints on the parameters of these models. I find this very useful because it is a simple, new way to test approaches to quantum gravity using cosmological data.

Of course this leaves many open questions. Most importantly, while the authors offer some general arguments for why such violations of energy conservation would be too small to be noticeable in any other way than through the accelerated expansion of the universe, they have no actual proof for this. In addition, they have only looked at this modification from the side of General Relativity, but I would also like to know what happens to Quantum Field Theory when waving good-bye to energy conservation. We want to make sure this doesn’t ruin the standard model’s fit of any high-precision data. Also, their predictions crucially depend on their assumption about when the violation of energy conservation begins, which strikes me as quite arbitrary and lacking physical motivation.

In summary, I think it’s a so-far very theoretical but also interesting idea. I don’t even find it all that speculative. It is also clear, however, that it will require much more work to convince anybody this doesn’t lead to conflicts with observation.

Thursday, February 23, 2017

Book Review: “The Particle Zoo” by Gavin Hesketh

The Particle Zoo: The Search for the Fundamental Nature of Reality
By Gavin Hesketh
Quercus (1 Sept. 2016)

The first word in Gavin Hesketh’s book The Particle Zoo is “Beauty.” I read the word, closed the book, and didn’t reopen it for several months. Having just myself finished writing a book about the role of beauty in theoretical physics, it was the absolutely last thing I wanted to hear about.

I finally gave Hesketh’s book a second chance and took it along on a recent flight. It turned out that once I got past the somewhat nauseating sales pitch at the beginning, the content improved considerably.

Hesketh provides a readable and accessible no-nonsense introduction to the standard model and quantum field theory. He explains everything as well as possible without using equations.

The author is an experimentalist and part of the LHC’s ATLAS collaboration. The Particle Zoo also has a few paragraphs about what it is like to work in such large collaborations. Personally, I found this the most interesting part of the book. Hesketh also does a great job of describing how the various types of particle detectors work.

Had the book ended here, it would have been a well-done job. But Hesketh goes on to elaborate on physics beyond the standard model. And there he’s clearly out of his depth.

Problems start when he begins laying out the shortcomings of the standard model, leaving the reader with the impression that it’s non-renormalizable. I suspect (or hope) he wasn’t referring to non-renormalizability but maybe Landau poles or the non-convergence of the perturbative expansion, but the explanation is murky.

Murky is bad, but wrong is worse. And wrong follows. For example, to generate excitement for new physics, Hesketh writes:
“Some theories suggest that antimatter responds to gravity in a different way: matter and antimatter may repel each other… [W]hile this is a strange idea, so far it is one that we cannot rule out.”
I do not know of any consistent theory that suggests antimatter responds differently to gravity than matter, and I say that as one of the three theorists on the planet who have worked on antigravity. I have no idea what Hesketh is referring to in this paragraph.

It does not help that “The Particle Zoo” does not have any references. I understand that a popular science book isn’t a review article, but I would expect that a scientist at least quotes sources for historical facts and quotations, which isn’t the case.

He then confuses a “Theory of Everything” with quantum gravity, and about supersymmetry (SuSy) he writes:
“[I]f SuSy is possible and it makes everything much neater, it really should exist. Otherwise it seems that nature has apparently gone out of its way to avoid it, making the equations uglier at the same time, and we would have to explain why that is.”
Which is a statement that should be embarrassing for any scientist to make.

Hesketh’s attitude to supersymmetry is however somewhat schizophrenic because he later writes that:
“[T]his is really why SuSy has lived for so long: whenever an experiment finds no signs of the super-particles, it is possible merely to adjust some of these free parameters so that these super-particles must be just a little bit heavier, just a little bit further out of reach. By never being specific, it is never wrong.”
Only to then reassure the reader
“SuSy may end up as another beautiful theory destroyed by an ugly fact, and we should find out in the next years.”
I am left to wonder which fact he thinks will destroy a theory that he just told us is never wrong.

Up to this point I might have blamed the inaccuracies on an editor, but then Hesketh goes on to explain the (ADD model of) large extra dimensions and claims that it solves the hierarchy problem. This isn’t so – the model reformulates one hierarchy (the weakness of gravity) as another hierarchy (extra dimensions much larger than the Planck length) and hence doesn’t solve the problem. I am not sure whether he is being intentionally misleading or really didn’t understand this, but either way, it’s wrong.

Hesketh furthermore states that if there were such large extra dimensions the LHC might produce microscopic black holes – but he doesn’t mention with a single word that not the faintest evidence for this has been found.

When it comes to dark matter, he waves away the possibility that the observations are due to a modification of gravity with the magic word “Bullet Cluster” – a distortion of facts about which I have previously complained. I am afraid he actually might not know any better since this myth has been so widely spread, but if he doesn’t care to look at the subject he shouldn’t write a book about it. To round things up, Hesketh misspells “Noether” as “Nöther,” though I am willing to believe that this egg was laid by someone else.

In summary, the first two thirds of the book about the standard model, quantum field theory, and particle detectors are recommendable. But when it comes to new physics the author doesn’t know what he’s talking about.

Sunday, February 19, 2017

Fake news wasn’t hard to predict – But what’s next?

In 2008, I wrote a blogpost which began with a dark vision – a presidential election led astray by fake news.

I’m not much of a prophet, but it wasn’t hard to predict. Journalism, for too long, attempted the impossible: Make people pay for news they don’t want to hear.

It worked, because news providers, by and large, shared an ethical code. Journalists aspired to tell the truth; their passion was unearthing and publicizing facts – especially those that nobody wanted to hear. And as long as the professional community held the power, they controlled access to the press – the device – and kept up the quality.

But the internet made it infinitely easy to produce and distribute news, both correct and incorrect. Fat headlines suddenly became what economists call an “intangible good.” News no longer relies on a physical resource or a process of manufacture. It can be created, copied, and shared by anyone, anywhere, with almost zero investment.

By the early 00s, anybody could set up a webpage and produce headlines. From there on, quality went down. News makes the most profit if it’s cheap and widely shared. Consequently, more and more outlets offer the news people want to read – that’s how the law of supply and demand is supposed to work, after all.

What we have seen so far, however, is only the beginning. Here’s what’s up next:
  • 1. Fake News Get Organized

    An army of shadow journalists specializes in fake news, pitching it to alternative news outlets. These outlets will mix real and fake news. It becomes increasingly hard to tell one from the other.

  • 2. Fake News Becomes Visual

    “Picture or it didn’t happen,” will soon be a thing of the past. Today, it’s still difficult to forge photos and videos. But software becomes better, and cheaper, and easier to obtain, and soon it will take experts to tell real from fake.

  • 3. Fake News Get Cozy

    Anger isn’t sustainable. In the long run, most people want good news – they want to be reassured everything’s fine. The war in Syria is over. The earthquake risk in California is low. The economy is up. The chocolate ration has been raised again.

  • 4. Corporations Throw in the Towel

    Facebook and Google and Yahoo conclude it’s too costly to assess the truth value of information passed on by their platforms, and decide it’s not their task. They’re right.

  • 5. Fake News Has Real-World Consequences

    We’ll see denial of facts leading to the deaths of thousands of people. I mean a lack of earthquake warning systems because the risk was believed to be fear-mongering. I mean riots over terrorist attacks that never happened. I mean collapsed buildings and toxic infant formula because who cares about science. We’ll get there.

The problem that fake news poses for democratic societies attracted academic interest already a decade ago. Triggered by the sudden dominance of Google as search engine, it entered the literature under the name “Googlearchy.”

Democracy relies on informed decision making. If the electorate doesn’t know what’s real, democratic societies can’t identify good ways to carry out the people’s will. You’d think that couldn’t be in anybody’s interest, but it is – if you can make money from misinformation.

Back then, the main worry focused on search engines as primary information providers. Someone with more prophetic skills might have predicted that social networks would come to play the central role for news distribution, but the root of the problem is the same: Algorithms are designed to deliver news which users like. That optimizes profit, but degrades the quality of news.

Economists of the Chicago School would tell you that this can’t be. People’s behavior reveals what they really want, and any regulation of the free market merely makes the fulfillment of their wants less efficient. If people read fake news, that’s what they want – the math proves it!

But no proof is better than its assumptions, and one central assumption for this conclusion is that people can’t have mutually inconsistent desires. We’re supposed to have factored in the long-term consequences of today’s actions, properly future-discounted and risk-assessed. In other words, we’re supposed to know what’s good for us and our children and great-grandchildren and make rational decisions to work towards that goal.

In reality, however, we often want what’s logically impossible. Problem is, a free market, left unattended, caters predominantly to our short-term wants.

At the risk of appearing inconsistent, economists are right when they speak of revealed preferences as the tangible conclusion of our internal dialogues. It’s just that economists, being economists, like to forget that people have a second way of revealing preferences – they vote.

We use democratic decision making to ensure the long-term consequences of our actions are consistent with the short-term ones, like putting a price on carbon. One of the major flaws of current economic theory is that it treats the two systems, economic and political, as separate, when really they’re two sides of the same coin. But free markets don’t work without a way to punish forgery, lies, and empty promises.

This is especially important for intangible goods – those which can be reproduced with near-zero effort. Intangible goods, like information, need enforced copyright, or else quality becomes economically unsustainable. Hence, it will take regulation, subsidies, or both to prevent us from tumbling down into the valley of alternative facts.

In the last months I’ve seen a lot of finger-pointing at scientists for not communicating enough or not communicating correctly, as if we were the ones to blame for fake news. But this isn’t our fault. It’s the media which has a problem – and it’s a problem scientists solved long ago.

The main reason why fake news is hard to identify, and why it remains profitable to reproduce what other outlets have already covered, is that journalists – in contrast to scientists – are utterly opaque about their doings.

As a blogger, I see this happening constantly. I know that many, if not most, science writers closely follow science blogs. And the professional writers frequently report on topics previously covered by bloggers – without so much as naming their sources, not to mention referencing them.

This isn’t merely personal paranoia. I know this because in several instances science writers actually told me that my blogpost about this-or-that had been so very useful. Some even asked me to share links to the articles they wrote based on it. Let that sink in for a moment – they make money from my expertise, don’t give me credit, and think that this is entirely appropriate behavior. And you wonder why fake news is economically profitable?

For a scientist, that’s mindboggling. Our currency is citations. Proper credit is pretty much all we want. Keep the money, but say my name.

I understand that journalists have to protect some sources, so don’t misunderstand me. I don’t mean they have to spill the beans about their exclusive secrets. What I mean is simply that a supposed news outlet that merely echoes what’s been reported elsewhere should be required to refer to the earlier article.

Of course this would imply that the vast majority of existing news sites would be revealed as copy-cats and lose readers. And of course it isn’t going to happen because nobody’s going to enforce it. If I saw even a remote chance of this happening, I wouldn’t have made the above predictions, would I?

What’s even more perplexing for a scientist, however, is that news outlets, to the extent that they do fact-checks, don’t tell customers that they fact-check, or what they fact-check, or how they fact-check.

Do you know, for example, which science magazines fact-check their articles? Some do, some don’t. I know for a few because I’ve been border-crossing between scientists and writers for a while. But largely it’s insider knowledge – I think it should be front-page information. Listen, Editor-in-Chief: If you fact-check, tell us.

It isn’t going to stop fake news, but I think a more open journalistic practice and publicly stated adherence to voluntary guidelines could greatly alleviate it. It probably makes you want to puke, but academics are good at a few things and high community standards are one of them. And that is what journalism needs right now.

I know, this isn’t exactly the cozy, shallow, good news that shares well. But it will be a great pleasure when, in ten years, I can say: I told you so.

Friday, February 17, 2017

Black Hole Information - Still Lost

[Illustration of black hole.
Image: NASA]
According to Google, Stephen Hawking is the most famous physicist alive, and his most famous work is the black hole information paradox. If you know one thing about physics, therefore, that’s what you should know.

Before Hawking, black holes weren’t paradoxical. Yes, if you throw a book into a black hole you can’t read it anymore. That’s because what has crossed a black hole’s event horizon can no longer be reached from the outside. The event horizon is a closed surface inside of which everything, even light, is trapped. So there’s no way information can get out of the black hole; the book’s gone. That’s unfortunate, but nothing physicists sweat over. The information in the book might be out of sight, but nothing paradoxical about that.

Then came Stephen Hawking. In 1974, he showed that black holes emit radiation and this radiation doesn’t carry information. It’s entirely random, except for the distribution of particles as a function of energy, which is a Planck spectrum with temperature inversely proportional to the black hole’s mass. If the black hole emits particles, it loses mass, shrinks, and gets hotter. After enough time and enough emission, the black hole will be entirely gone, with no return of the information you put into it. The black hole has evaporated; the book can no longer be inside. So, where did the information go?
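The temperature in question is the Hawking temperature,

\[ T_{\rm H} \;=\; \frac{\hbar\, c^3}{8\pi\, G\, M\, k_{\rm B}}, \]

which for a black hole of one solar mass comes out to roughly 60 nanokelvin – far colder than the cosmic microwave background, a point that becomes relevant again at the end of this post.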

You might shrug and say, “Well, it’s gone, so what? Don’t we lose information all the time?” No, we don’t. At least, not in principle. We lose information in practice all the time, yes. If you burn the book, you aren’t able any longer to read what’s inside. However, fundamentally, all the information about what constituted the book is still contained in the smoke and ashes.

This is because the laws of nature, to our best current understanding, can be run both forwards and backwards – every unique initial-state corresponds to a unique end-state. There are never two initial-states that end in the same final state. The story of your burning book looks very different backwards. If you were able to very, very carefully assemble smoke and ashes in just the right way, you could unburn the book and reassemble it. It’s an exceedingly unlikely process, and you’ll never see it happening in practice. But, in principle, it could happen.

Not so with black holes. Whatever formed the black hole doesn't make a difference when you look at what you wind up with. In the end you only have this thermal radiation, which – in honor of its discoverer – is now called ‘Hawking radiation.’ That’s the paradox: Black hole evaporation is a process that cannot be run backwards. It is, as we say, not reversible. And that makes physicists sweat because it demonstrates they don’t understand the laws of nature.

Black hole information loss is paradoxical because it signals an internal inconsistency of our theories. When we combine – as Hawking did in his calculation – general relativity with the quantum field theories of the standard model, the result is no longer compatible with quantum theory. At a fundamental level, every interaction involving particle processes has to be reversible. Because of the non-reversibility of black hole evaporation, Hawking showed that the two theories don’t fit together.

The seemingly obvious origin of this contradiction is that the irreversible evaporation was derived without taking into account the quantum properties of space and time. For that, we would need a theory of quantum gravity, and we still don’t have one. Most physicists therefore believe that quantum gravity would remove the paradox – just how that works they still don’t know.

The difficulty with blaming quantum gravity, however, is that there isn’t anything interesting happening at the horizon – it’s in a regime where general relativity should work just fine. That’s because the strength of quantum gravity should depend on the curvature of space-time, but the curvature at a black hole horizon depends inversely on the square of the black hole’s mass. This means the larger the black hole, the smaller the expected quantum gravitational effects at the horizon.
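To put a number on this: for a Schwarzschild black hole of mass M, the Kretschmann scalar – a standard measure of curvature – evaluated at the horizon radius r_s = 2GM/c^2 is

\[ R_{\mu\nu\alpha\beta}R^{\mu\nu\alpha\beta}\Big|_{r = r_s} \;=\; \frac{48\, G^2 M^2}{c^4\, r_s^{6}} \;=\; \frac{3\, c^8}{4\, G^4 M^4}, \]

so the typical curvature components at the horizon scale as 1/M^2 and are tiny for astrophysical black holes.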

Quantum gravitational effects would become noticeable only when the black hole has reached the Planck mass, about 10 micrograms. When the black hole has shrunken to that size, information could be released thanks to quantum gravity. But, depending on what the black hole formed from, an arbitrarily large amount of information might be stuck in the black hole until then. And when a Planck mass is all that’s left, it’s difficult to get so much information out with such little energy left to encode it.

For the last 40 years, some of the brightest minds on the planet have tried to solve this conundrum. It might seem bizarre that such an outlandish problem commands so much attention, but physicists have good reasons for this. The evaporation of black holes is the best-understood case for the interplay of quantum theory and gravity, and therefore might be the key to finding the right theory of quantum gravity. Solving the paradox would be a breakthrough and, without doubt, result in a conceptually new understanding of nature.

So far, most solution attempts for black hole information loss fall into one of four large categories, each of which has its pros and cons.

  • 1. Information is released early.

    The information starts leaking out long before the black hole has reached Planck mass. This is the presently most popular option. It is still unclear, however, how the information should be encoded in the radiation, and just how the conclusion of Hawking’s calculation is circumvented.

    The benefit of this solution is its compatibility with what we know about black hole thermodynamics. The disadvantage is that, for this to work, some kind of non-locality – a spooky action at a distance – seems inevitable. Worse still, it has recently been claimed that if information is released early, then black holes are surrounded by a highly-energetic barrier: a “firewall.” If a firewall exists, it would imply that the principle of equivalence, which underlies general relativity, is violated. Very unappealing.

  • 2. Information is kept, or it is released late.

    In this case, the information stays in the black hole until quantum gravitational effects become strong, when the black hole has reached the Planck mass. Information is then either released with the remaining energy or just kept forever in a remnant.

    The benefit of this option is that it does not require modifying either general relativity or quantum theory in regimes where we expect them to hold. They break down exactly where they are expected to break down: when space-time curvature becomes very large. The disadvantage is that some have argued it leads to another paradox, namely the possibility of infinitely producing black hole pairs in a weak background field: i.e., all around us. The theoretical support for this argument is thin, but it’s still widely used.

  • 3. Information is destroyed.

    Supporters of this approach just accept that information is lost when it falls into a black hole. This option was long believed to imply violations of energy conservation and hence cause another inconsistency. In recent years, however, new arguments have surfaced according to which energy might still be conserved with information loss, and this option has therefore seen a little revival. Still, by my estimate it’s the least popular solution.

    However, much like the first option, just saying that’s what one believes doesn’t make for a solution. And making this work would require a modification of quantum theory. This would have to be a modification that doesn’t lead to conflict with any of our experiments testing quantum mechanics. It’s hard to do.

  • 4. There’s no black hole.

    A black hole is never formed or information never crosses the horizon. This solution attempt pops up every now and then, but has never caught on. The advantage is that it’s obvious how to circumvent the conclusion of Hawking’s calculation. The downside is that this requires large deviations from general relativity in small curvature regimes, and it is therefore difficult to make compatible with precision tests of gravity.

There are a few other proposed solutions that don’t fall into any of these categories, but I will not – cannot! – attempt to review all of them here. In fact, there isn’t any good review on the topic – probably because the mere thought of compiling one is dreadful. The literature is vast. Black hole information loss is without doubt the most-debated paradox ever.

And it’s bound to remain so. The temperature of the black holes we can observe today is far too small to be measurable. Hence, in the foreseeable future nobody is going to measure what happens to the information which crosses the horizon. Let me therefore make a prediction: Ten years from now, the problem will still be unsolved.
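
Just how small: for a black hole of stellar mass, the Hawking temperature comes out at some tens of nano-Kelvin, hopelessly buried under the 2.7 Kelvin of the cosmic microwave background. A quick estimate (my own, in Python):

    # Hawking temperature T = hbar c^3 / (8 pi G M k_B), to show why the
    # evaporation of astrophysical black holes is unobservable in practice.
    import math

    G    = 6.674e-11    # m^3 / (kg s^2)
    c    = 2.998e8      # m/s
    hbar = 1.055e-34    # J s
    k_B  = 1.381e-23    # J / K

    def hawking_temperature(mass_kg):
        return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

    M_sun = 1.989e30    # kg
    print(f"T_Hawking(solar mass) ~ {hawking_temperature(M_sun):.1e} K")   # ~6e-8 K
    print("CMB temperature        ~ 2.7 K")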

Hawking just celebrated his 75th birthday, which is a remarkable achievement by itself. Fifty years ago, his doctors predicted he would soon be dead, but he has stubbornly hung onto life. The black hole information paradox may prove to be even more stubborn. Unless a revolutionary breakthrough comes, it may outlive us all.

(I wish to apologize for not including references. If I’d start with this, I wouldn’t be done by 2020.)

[This post previously appeared on Starts With A Bang.]

Sunday, February 12, 2017

Away Note

I'm traveling next week and will be offline for some days. Blogging may be insubstantial, if existent, and comments may be stuck in the queue longer than usual. But I'm sure you'll survive without me ;)

And since you haven't seen the girls for a while, here is a recent photo. They'll be starting school this year in the fall and are very excited about it.

Thursday, February 09, 2017

New Data from the Early Universe Does Not Rule Out Holography

[img src: entdeckungen.net]
It’s string theorists’ most celebrated insight: The world is a hologram. Like everything else string theorists have come up with, it’s an untested hypothesis. But now, it’s been put to test with a new analysis that compares a holographic early universe with its non-holographic counterpart.

Tl;dr: Results are inconclusive.

When string theorists say we live in a hologram, they don’t mean we are shadows in Plato’s cave. They mean their math says that all information about what’s inside a box can be encoded on the boundary of that box – albeit in entirely different form.

The holographic principle – if correct – means there are two different ways to describe the same reality. Unlike in Plato’s cave, however, where the shadows lack information about what caused them, with holography both descriptions are equally good.

Holography would imply that the three dimensions of space which we experience are merely one way to think of the world. If you can describe what happens in our universe by equations that use only two-dimensional surfaces, you might as well say we live in two dimensions – just that these are dimensions we don’t normally experience.

It’s a nice idea but hard to test. That’s because the two-dimensional interpretation of today’s universe isn’t normally very workable. Holography identifies two different theories with each other by a relation called “duality.” The two theories in question here are one for gravity in three dimensions of space, and a quantum field theory without gravity in one dimension less. However, whenever one of the theories is weakly coupled, the other one is strongly coupled – and computations in strongly coupled theories are hard, if not impossible.

The gravitational force in our universe is presently weakly coupled. For this reason General Relativity is the easier side of the duality. However, the situation might have been different in the early universe. Inflation – the rapid phase of expansion briefly after the big bang – is usually assumed to take place in gravity’s weakly coupled regime. But that might not be correct. If instead gravity at that early stage was strongly coupled, then a description in terms of a weakly coupled quantum field theory might be more appropriate.

This idea has been pursued by Kostas Skenderis and collaborators for several years. These researchers have developed a holographic model in which inflation is described by a lower-dimensional non-gravitational theory. In a recent paper, their predictions have been put to test with new data from the Planck mission, a high-precision measurement of the temperature fluctuations of the cosmic microwave background.


In this new study, the authors compare the way that holographic inflation and standard inflation in the concordance model – also known as ΛCDM – fit the data. The concordance model is described by six parameters. Holographic inflation has a closer connection to the underlying theory and so the power spectrum brings in one additional parameter, which makes a total of seven. After adjusting for the number of parameters, the authors find that the concordance model fits the data better.

However, the biggest discrepancy between the predictions of holographic inflation and the concordance model arises at large scales, or low multipole moments, respectively. In this regime, the predictions from holographic inflation cannot really be trusted. Therefore, the authors repeat the analysis with the low multipole moments omitted from the data. Then, the two models fit the data equally well. In some cases (depending on the choice of prior for one of the parameters) holographic inflation is indeed a better fit, but the difference is not statistically significant.
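
As an aside, here is a toy illustration (mine, not the statistical machinery the authors actually use) of what it means to adjust for the number of parameters: an information criterion like the AIC penalizes each additional parameter, so a model with one parameter more has to improve the fit by a corresponding amount before it counts as better.

    # Toy model comparison: penalize a fit for its number of free parameters
    # with the Akaike information criterion, AIC = chi^2 + 2k. The chi^2
    # values below are invented; only the logic is the point.
    def aic(chi2, n_params):
        return chi2 + 2 * n_params

    fits = {
        "concordance model (6 parameters)":     {"chi2": 1000.0, "k": 6},
        "holographic inflation (7 parameters)": {"chi2":  999.0, "k": 7},
    }

    for name, fit in fits.items():
        print(f"{name}: AIC = {aic(fit['chi2'], fit['k']):.1f}")
    # With one extra parameter, chi^2 must improve by more than 2 to win;
    # a marginally better raw fit is not good enough.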

To put this result into context it must be added that the best-understood cases of holography work in space-times with a negative cosmological constant, the Anti-de Sitter spaces. Our own universe, however, is not of this type. It has instead a positive cosmological constant, described by de-Sitter space. The use of the holographic principle in our universe is hence not strongly supported by string theory, at least not presently.

The model for holographic inflation can therefore best be understood as one that is motivated by, but not derived from, string theory. It is a phenomenological model, developed to quantify predictions and test them against data.

While the differences between the concordance model and holographic inflation which this study finds are insignificant, it is interesting that a prediction based on such an entirely different framework is able to fit the data at all. I should also add that there is a long-standing debate in the community as to whether the low multipole moments are well-described by the concordance model, or whether any of the large-scale anomalies are to be taken seriously.

In summary, I find this an interesting result because it’s an entirely different way to think of the early universe, and yet it describes the data. For the same reason, however, it’s also somewhat depressing. Clearly, we don’t presently have a good way to test all the many ideas that theorists have come up with.

Friday, February 03, 2017

Testing Quantum Foundations With Atomic Clocks

Funky clock at Aachen University.
Nobel laureate Steven Weinberg has recently drawn attention by disliking quantum mechanics. Besides an article for The New York Review of Books and a public lecture to bemoan how unsatisfactory the current situation is, he has, however, also written a technical paper:
    Lindblad Decoherence in Atomic Clocks
    Steven Weinberg
    Phys. Rev. A 94, 042117 (2016)
    arXiv:1610.02537 [quant-ph]
In this paper, Weinberg studies the use of atomic clocks for precision tests of quantum mechanics. Specifically, their use to search for an unexpected, omnipresent decoherence.

Decoherence is the process that destroys quantum-ness. It happens constantly and everywhere. Each time a quantum state interacts with an environment – air, light, neutrinos, what have you – it becomes a little less quantum.

This type of decoherence explains why, in every-day life, we don’t see quantum-typical behavior, like cats being both dead and alive and similar nonsense. Trouble is, decoherence takes place only if you consider the environment a source of noise whose exact behavior is unknown. If you look at the combined system of the quantum state plus environment, that still doesn’t decohere. So how come on large scales our world is distinctly un-quantum?

It seems that besides this usual decoherence, quantum mechanics must do something else to explain the measurement process. Decoherence merely converts a quantum state into a probabilistic (“mixed”) state. But upon measurement, this probabilistic state must suddenly change to reflect that, after observation, the state is in the measured configuration with 100% certainty. This update is also sometimes referred to as the “collapse” of the wave-function.
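
For those who like to see this in formulas, here is what that looks like for a single qubit’s density matrix (a generic textbook example, nothing specific to Weinberg’s paper): decoherence only damps the off-diagonal terms, turning the superposition into a 50/50 mixture, while the measurement update afterwards still has to jump to one definite outcome.

    import numpy as np

    # A qubit in the superposition (|0> + |1>)/sqrt(2), written as a density matrix:
    rho = np.array([[0.5, 0.5],
                    [0.5, 0.5]])       # pure state, off-diagonal terms present

    # Decoherence damps the off-diagonal ("coherence") terms by exp(-t/tau):
    def decohere(rho, t_over_tau):
        damp = np.exp(-t_over_tau)
        out = rho.copy()
        out[0, 1] *= damp
        out[1, 0] *= damp
        return out

    print(decohere(rho, 10.0))   # ~[[0.5, 0], [0, 0.5]]: a 50/50 probabilistic mixture

    # A measurement that finds the qubit in |0> still requires a separate update:
    print(np.array([[1.0, 0.0],
                    [0.0, 0.0]]))   # the state after observation: outcome 0 with certainty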

Whether or not decoherence solves the measurement problem then depends on your favorite interpretation of quantum mechanics. If you don’t think the wave-function, which describes the quantum state, is real but merely encodes information, then decoherence does the trick. If you do, in contrast, think the wave-function is real, then decoherence doesn’t help you understand what happens in a measurement because you still have to update probabilities.

That is so unless you are a fan of the many-worlds interpretation, which simply declares the problem nonexistent by postulating that all possible measurement outcomes are equally real. It just so happens that we find ourselves in only one of these realities. I’m not a fan of many worlds because defining problems away rarely leads to progress. Weinberg finds all the many worlds “distasteful,” which also rarely leads to progress.

What would really solve the problem, however, is some type of fundamental decoherence, an actual collapse prescription basically. It’s not a particularly popular idea, but at least it is an idea, and it’s one that’s worth testing.

What has any of that to do with atomic clocks? Well, atomic clocks work thanks to quantum mechanics, and they work extremely precisely. And so, Weinberg’s idea is to use atomic clocks to look for evidence of fundamental decoherence.

An atomic clock trades off the precise measurement of time for the precise measurement of a wavelength – or frequency, respectively – which counts oscillations per time. And that is where quantum mechanics comes in handy. A hundred years or so ago, physicists found that the energies of electrons which surround the atomic nucleus can take on only discrete values. This also means they can absorb and emit light only at energies that correspond to the differences between the discrete levels.

Now, as Einstein demonstrated with the photoelectric effect, the energy of light is proportional to its frequency. So, if you find light of a frequency that the atom can absorb, you must have hit one of the differences in energy levels. These differences in energy levels are (at moderate temperatures) properties of the atom and almost insensitive to external disturbances. That’s what makes atomic clocks tick so regularly.
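
To put a number on it (my own illustration): the transition used in Cesium clocks, which defines the second, corresponds to 9,192,631,770 oscillations per second, and via E = h·f to a tiny energy difference between two hyperfine levels of the atom.

    # Energy of the Cesium-133 hyperfine clock transition, E = h * f.
    h = 6.626e-34           # Planck constant, J s
    f_cs = 9_192_631_770    # Hz; this frequency defines the SI second

    E_joule = h * f_cs
    E_eV = E_joule / 1.602e-19
    print(f"E = {E_joule:.2e} J = {E_eV:.1e} eV")   # roughly 4e-5 eV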

So, it comes down to measuring atomic transition frequencies. Such a measurement works by tuning a laser until a cloud of atoms (usually Cesium or Rubidium) absorbs most of the light. The absorption indicates you have hit the transition frequency.

In modern atomic clocks, one employs a two-pulse scheme, known as the Ramsey method. A cloud of atoms is exposed to a first pulse, then left to drift for a second or so, and then comes a second pulse. After that, you measure how many atoms were affected by the pulses, and use a feedback loop to tune the frequency of the light to maximize the number of atoms. (Further reading: “Real Clock Tutorial” by Chad Orzel.)

If, however, between the two pulses some unexpected decoherence happens, then the frequency tuning doesn’t work as well as it does in normal quantum mechanics. And this, so Weinberg’s argument goes, would have been noticed already if such decoherence were relevant for atomic clocks on the timescale of seconds. This way, he obtains constraints on fundamental decoherence. And, as a bonus, he proposes a new way of testing the foundations of quantum mechanics by use of the Ramsey method.
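
Schematically, the Ramsey signal has the simple textbook form below (my sketch, not the detailed treatment in Weinberg’s paper): the fraction of atoms flipped by the two pulses oscillates with the detuning of the laser, and any additional decoherence during the free drift damps these oscillations, which is the signature one would look for.

    # Simplified Ramsey signal: two short pi/2 pulses separated by a free-drift
    # time T. The excitation probability oscillates with the laser detuning;
    # decoherence during the drift suppresses the fringe contrast.
    import numpy as np

    def ramsey_signal(delta, T, gamma=0.0):
        """P = 0.5 * (1 + exp(-gamma*T) * cos(delta*T)), pulse durations neglected."""
        return 0.5 * (1.0 + np.exp(-gamma * T) * np.cos(delta * T))

    T = 1.0                                   # about a second of free drift
    detunings = np.linspace(-10.0, 10.0, 5)   # rad/s, for illustration
    print(ramsey_signal(detunings, T, gamma=0.0))   # full fringe contrast
    print(ramsey_signal(detunings, T, gamma=2.0))   # extra decoherence: flatter fringes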

It’s a neat idea. It strikes me as the kind of paper that comes about as a spin-off when thinking about a problem. I find this an interesting work because my biggest frustration with quantum foundations is all the talk about what is or isn’t distasteful about this or that interpretation. For me, the real question is whether quantum mechanics – in whatever interpretation – is fundamental, or whether there is an underlying theory. And if so, how to test that.

As a phenomenologist, you won’t be surprised to hear that I think research on the foundations of quantum mechanics would benefit from more phenomenology. Or, in summary: A little less talk, a little more action please.

Wednesday, January 25, 2017

What is Physics? [Video]

I spent the holidays watching some tutorials for video animations and learned quite a few new things. The below video collects some exercise projects I did to answer a question Gloria asked the other day: “Was ist Phykik?” (What is phycics?). Embarrassingly, I doubt she learned more from my answer than how to correctly pronounce physics. It’s hard to explain stuff to 6-year-olds if you’re used to dealing with brainy adults.

Thanks to all the tutorials, however, I think this video is dramatically better than the previous one. There are still a number of issues I'm unhappy about, notably the timing, which I find hard to get right. Also, the lip-synching is poor. Not to mention that I still can’t draw and hence my cartoon child looks like a giant zombie hamster.

Listening to it again, the voiceover seems too fast to me and would have benefited from a few breaks. In summary, there’s room for improvement.

Complete transcript:

The other day, my daughter asked me “What is physics?”

She’s six. My husband and I, we’re both physicists. You’d think I had an answer. But best I managed was: Physics is what explains the very, very small and very, very large things.

There must be a better explanation, I said to myself.

The more I thought about it though, the more complicated it got. Physics isn’t only about small and large things. And nobody uses a definition to decide what belongs in the department of physics. Instead, it’s mostly history and culture that marks disciplinary boundaries. The best summary that came to my mind is “Physics is what physicists do.”

But then what do physicists do? Now that’s a question I can help you with.

First, let us see what is very small and very large.

An adult human has a size of about a meter. Add some zeros to that size and we have small planets like Earth, with a diameter of some ten thousand kilometers, and larger planets, like Saturn. Add some more zeros, and we get to solar systems, which are more conveniently measured by the time it takes light to travel through them, a few light-hours.

On even larger scales, we have galaxies, with typical sizes of a hundred-thousand light years, and galaxy clusters, and finally the whole visible universe, with an estimated size of 100 billion light years. Beyond that, there might be an even larger collection of universes which are constantly being created, bubbling out of the vacuum. It’s called the ‘multiverse’ but nobody knows if it’s real.

Physics, or more specifically cosmology, is the only discipline that currently studies what happens at such large scales. This remains so for galaxy clusters and galaxies and interstellar space, which fall into the area of astrophysics. There is an emerging field, called astrobiology, where scientists look for life elsewhere in the universe, but so far they don’t have much to study.

Once we get to the size of planets, however, much of what we observe is explained by research outside of physics. There is geology and atmospheric science and climate science. Then there are the social sciences and all the life sciences, biology and medicine and zoology and all that.

When we get to scales smaller than humans, at about a micrometer we have bacteria and cells. At a few nanometers, we have large molecular structures like our DNA, and then proteins and large molecules. Somewhere here, we cross over into the field of chemistry. If we get to even smaller scales, to the size of atoms of about an Angstrom, physics starts taking over again. First there is atomic physics, then there is nuclear physics, and then there is particle physics, which deals with quarks and electrons and photons and all that. Beyond that... nobody knows. But to the extent that it’s science at all, it’s safely in the hands of physicists.

If you go down 16 more orders of magnitude, you get to what is called the Planck length, at 10^-35 meters. That’s where quantum fluctuations of space-time become important and it might turn out elementary particles are made of strings or other strange things. But that too, is presently speculation.

One would need an enormously high energy to probe such short distances, much higher than what our particle accelerators can reach. Such energies, however, were reached at the big bang, when our universe started to expand. And so, if we look out to very, very large distances, we actually look back in time to high energies and very short distances. Particle physics and cosmology are therefore close together and not far apart.

Not everything in physics, however, is classified by distance scales. Rocks fall, water freezes, planes fly, and that’s physics too. There are two reasons for that.

First, gravity and electrodynamics are forces that span all distance scales.

And second, the tools of physics can be used also for stuff composed of many small things that behave similarly, like solids, fluids, and gases. But really, it could be anything from a superconductor, to a gas of strings, to a fluid of galaxies. The behavior of such large numbers of similar objects is studied in fields like condensed matter physics, plasma physics, thermodynamics, and statistical mechanics.

That’s why there’s more physics in every-day life than what the breakdown by distance suggests. And that’s also why the behavior of stuff at large and small distances has many things in common. Indeed, the methods of physics can be, and have been, used to describe the growth of cities, bird flocking, or traffic flow. All of that is physics, too.

I still don’t have a good answer for what physics is. But next time I am asked, I have a video to show.

Thursday, January 19, 2017

Dark matter’s hideout just got smaller, thanks to supercomputers.

Lattice QCD. Artist’s impression.
Physicists know they are missing something. Evidence that something’s wrong has piled up for decades: Galaxies and galaxy clusters don’t behave like Einstein’s theory of general relativity predicts. The observed discrepancies can be explained either by modifying general relativity, or by the gravitational pull of some, so-far unknown type of “dark matter.”

Theoretical physicists have proposed many particles which could make up dark matter. The most popular candidates are a class called “Weakly Interacting Massive Particles” or WIMPs. They are popular because they appear in supersymmetric extensions of the standard model, and also because they have a mass and interaction strength in just the right ballpark for dark matter. There have been many experiments, however, trying to detect the elusive WIMPs, and one after the other reported negative results.

The second popular dark matter candidate is a particle called the “axion,” and the worse the situation looks for WIMPs the more popular axions are becoming. Like WIMPs, axions weren’t originally invented as dark matter candidates.

The strong nuclear force, described by Quantum ChromoDynamics (QCD), could violate a symmetry called “CP symmetry,” but it doesn’t. An interaction term that could give rise to this symmetry-violation therefore has a pre-factor – the “theta-parameter” (θ) – that is either zero or at least very, very small. That nobody knows just why the theta-parameter should be so small is known as the “strong CP problem.” It can be solved by promoting the theta-parameter to a field which relaxes to the minimum of a potential, thereby setting the coupling to the troublesome term to zero, an idea that dates back to Peccei and Quinn in 1977.

Much like the Higgs-field, the theta-field is then accompanied by a particle – the axion – as was pointed out by Steven Weinberg and Frank Wilczek in 1978.

The original axion was ruled out within a few years after being proposed. But theoretical physicists quickly put forward more complicated models for what they called the “hidden axion.” It’s a variant of the original axion that is more weakly interacting and hence more difficult to detect. Indeed it hasn’t been detected. But it also hasn’t been ruled out as a dark matter candidate.

Normally models with axions have two free parameters: one is the mass of the axion, the other one is called the axion decay constant (usually denoted f_a). But these two parameters aren’t actually independent of each other. The axion gets its mass by the breaking of a postulated new symmetry. A potential, generated by non-perturbative QCD effects, then determines the value of the mass.

If that sounds complicated, all you need to know about it to understand the following is that it’s indeed complicated. Non-perturbative QCD is hideously difficult. Consequently, nobody can calculate what the relation is between the axion mass and the decay constant. At least so far.

The potential which determines the particle’s mass depends on the temperature of the surrounding medium. This is generally the case, not only for the axion, it’s just a complication often omitted in the discussion of mass-generation by symmetry breaking. Using the potential, it can be shown that the mass of the axion is inversely proportional to the decay constant. The whole difficulty then lies in calculating the factor of proportionality, which is a complicated, temperature-dependent function, known as the topological susceptibility of the gluon field. So, if you could calculate the topological susceptibility, you’d know the relation between the axion mass and the coupling.
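
Written out, the relation is m_a²(T) f_a² = χ(T), with χ the topological susceptibility. Here is a back-of-the-envelope sketch of mine; the χ^(1/4) ≈ 75 MeV below is just the ballpark zero-temperature value quoted in the lattice literature, and the hard, new part is χ at high temperatures.

    # Zero-temperature estimate of the axion mass from m_a^2 * f_a^2 = chi,
    # with chi the topological susceptibility of the gluon field.
    chi_quarter_GeV = 0.075          # chi^(1/4) in GeV, rough zero-temperature value
    chi = chi_quarter_GeV ** 4       # GeV^4

    for f_a in (1e11, 1e12, 1e13):   # axion decay constant in GeV
        m_a_GeV = chi ** 0.5 / f_a
        # the factor 1e15 converts GeV to micro-eV
        print(f"f_a = {f_a:.0e} GeV  ->  m_a ~ {m_a_GeV * 1e15:.1f} micro-eV")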

This isn’t a calculation anybody presently knows how to do analytically because the strong interaction at low temperatures is, well, strong. The best chance is to do it numerically by putting the quarks on a simulated lattice and then sending the job to a supercomputer.

And even that wasn’t possible until now because the problem was too computationally intensive. But in a new paper, recently published in Nature, a group of researchers reports they have come up with a new method of simplifying the numerical calculation. This way, they succeeded in calculating the relation between the axion mass and the coupling constant.

    Calculation of the axion mass based on high-temperature lattice quantum chromodynamics
    S. Borsanyi et al
    Nature 539, 69–71 (2016)

(If you don’t have journal access, it’s not the exact same paper as this but pretty close).

This result is a great step forward in understanding the physics of the early universe. It’s a new relation which can now be included in cosmological models. As a consequence, I expect that the parameter-space in which the axion can hide will be much reduced in the coming months.

I also have to admit, however, that for a pen-on-paper physicist like me this work has a bittersweet aftertaste. It’s a remarkable achievement which wouldn’t have been possible without a clever formulation of the problem. But in the end, it’s progress fueled by technological power, by bigger and better computers. And maybe that’s where the future of our field lies, in finding better ways to feed problems to supercomputers.

Friday, January 13, 2017

What a burst! A fresh attempt to see space-time foam with gamma ray bursts.

It’s an old story: Quantum fluctuations of space-time might change the travel-time of light. Light of higher frequencies would be a little faster than that of lower frequencies. Or slower, depending on the sign of an unknown constant. Either way, the spectral colors of light would run apart, or ‘disperse’ as they say if they don’t want you to understand what they say.
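
To get a sense of the size of the effect, here is a naive estimate for the linear case (my own numbers; a careful analysis would include the redshift-dependence of the distance): the relative delay is roughly Δt ≈ (E/E_QG) × D/c, which for GeV photons and an energy scale near the Planck energy amounts to a small fraction of a second even over a billion light years.

    # Naive linear-dispersion delay, delta_t ~ (E / E_QG) * D / c, ignoring the
    # cosmological redshift factors a careful analysis would include.
    E_photon_GeV = 10.0        # photon energy
    E_QG_GeV     = 1.22e19     # assume the Planck energy as the new-physics scale
    D_lightyears = 1.0e9       # distance to the source, ~1 billion light years

    seconds_per_year = 3.156e7
    travel_time_s = D_lightyears * seconds_per_year   # light travel time in seconds

    delta_t = (E_photon_GeV / E_QG_GeV) * travel_time_s
    print(f"delay ~ {delta_t:.3f} seconds")   # ~0.03 s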

Such quantum gravitational effects are minuscule, but added up over long distances they can become observable. Gamma ray bursts are therefore ideal to search for evidence of such an energy-dependent speed of light. Indeed, the energy-dependent speed of light has been searched for and not been found, and that could have been the end of the story.

Of course it wasn’t, because rather than giving up on the idea, the researchers who’d been working on it made their models for the spectral dispersion increasingly complicated and became more inventive when fitting them to unwilling data. The last thing I saw on the topic was a linear regression with multiple curves of freely chosen offset – a sure way to fit any kind of data on straight lines of any slope – and various ad-hoc assumptions to discard data that just didn’t want to fit, such as energy cuts or changes in the slope.

These attempts were so desperate I didn’t even mention them previously because my grandma taught me if you have nothing nice to say, say nothing.

But here’s a new twist to the story, so now I have something to say, and something nice in addition.

On June 25, 2016, the Fermi Telescope recorded a truly remarkable burst. The event, GRB160625, had a total duration of 770 seconds and consisted of three separate sub-bursts, with the second and largest sub-burst lasting 35 seconds (!). This has to be contrasted with the typical burst lasting a few seconds in total.

This gamma ray burst for the first time allowed researchers to clearly quantify the relative delay of the different energy channels. The analysis can be found in this paper:
    A New Test of Lorentz Invariance Violation: the Spectral Lag Transition of GRB 160625B
    Jun-Jie Wei, Bin-Bin Zhang, Lang Shao, Xue-Feng Wu, Peter Mészáros
    arXiv:1612.09425 [astro-ph.HE]

Unlike type Ia supernovae, which have very regular profiles, gamma ray bursts are one of a kind and they can therefore be compared only to themselves. This makes it very difficult to tell whether or not highly energetic parts of the emission are systematically delayed, because one doesn’t know when they were emitted. Until now, the analysis relied on some way of guessing the peaks in three different energy channels and (basically) assuming they were emitted simultaneously. This procedure sometimes relied on as little as one or two photons per peak. Not an analysis you should put a lot of trust in.

But the second sub-burst of GRB160625 was so bright, the researchers could break it down into 38 energy channels – and the counts were still high enough to calculate the cross-correlation from which the (most likely) time-lag can be extracted.
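
Extracting the lag by cross-correlation works roughly like this (a minimal sketch with made-up light curves, not the authors’ actual pipeline, which also estimates the uncertainties):

    # Minimal sketch: find the relative time-lag between two energy channels by
    # locating the maximum of their cross-correlation. The light curves here are
    # synthetic Gaussian pulses; the real analysis uses binned photon counts.
    import numpy as np

    dt = 0.01                                  # time resolution in seconds
    t = np.arange(0.0, 35.0, dt)               # a 35-second sub-burst
    low_channel  = np.exp(-0.5 * ((t - 10.0) / 1.5) ** 2)   # reference channel
    high_channel = np.exp(-0.5 * ((t -  9.5) / 1.5) ** 2)   # peaks 0.5 s earlier

    corr = np.correlate(high_channel - high_channel.mean(),
                        low_channel - low_channel.mean(), mode="full")
    lags = (np.arange(corr.size) - (len(t) - 1)) * dt
    print(f"most likely lag: {lags[np.argmax(corr)]:.2f} s")
    # prints ~ -0.50 s: in this toy convention the high-energy channel leads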

Here are the 38 energy channels for the second sub-burst

Fig 1 from arXiv:1612.09425


For the 38 energy channels they calculate 37 delay-times relative to the lowest energy channel, shown in the figure below. I find it a somewhat confusing convention, but in their nomenclature a positive time-lag corresponds to an earlier arrival time. The figure therefore shows that the photons of higher energy arrive earlier. The trend, however, isn’t monotonically increasing. Instead, it turns around at a few GeV.

Fig 2 from arXiv:1612.09425


The authors then discuss a simple model to fit the data. First, they assume that the emission has an intrinsic energy-dependence due to astrophysical effects which cause a positive lag. They model this with a power-law that has two free parameters: an exponent and an overall pre-factor.

Second, they assume that the effect during propagation – presumably from the space-time foam – causes a negative lag. For the propagation-delay they also make a power-law ansatz which is either linear or quadratic. This ansatz has one free parameter which is an energy scale (expected to be somewhere at the Planck energy).

In total they then have three free parameters, for which they calculate the best-fit values. The fitted curves are also shown in the image above, labeled n=1 (linear) and n=2 (quadratic). At some energy, the propagation-delay becomes more relevant than the intrinsic delay, which leads to the turn-around of the curve.
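
In code, the structure of the fit looks roughly like this (my own sketch with invented numbers, including the effective light-travel time, just to show the three-parameter model; it is not the authors’ actual fit to the real data):

    # Schematic three-parameter fit: an intrinsic astrophysical lag (positive
    # power law in energy) plus a propagation term (the linear n=1 case) that
    # pulls the lag down at high energies. All numbers are invented.
    import numpy as np
    from scipy.optimize import curve_fit

    D_over_c = 1.0e16   # effective light-travel time in seconds (assumed)

    def lag_model(E_GeV, alpha, beta, log10_E_QG):
        intrinsic   = alpha * E_GeV ** beta                  # lag built in at emission
        propagation = (E_GeV / 10 ** log10_E_QG) * D_over_c  # energy-dependent travel time
        return intrinsic - propagation

    np.random.seed(1)
    E = np.logspace(-1, 1.5, 38)               # 38 energy channels, 0.1 to ~30 GeV
    lags = lag_model(E, 3.0, 0.5, 16.0) + np.random.normal(0.0, 0.2, E.size)

    popt, _ = curve_fit(lag_model, E, lags, p0=(1.0, 0.5, 15.5))
    print("best-fit (alpha, beta, log10 E_QG):", popt)
    # The fitted curve rises at low energies and turns around once the
    # propagation term takes over, the qualitative behavior seen in the data.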

The best-fit value of the quantum gravity energy is 10^q GeV with q=15.66 for the linear and q=7.17 for the quadratic case. From this they extract a lower limit on the quantum gravity scale at the 1 sigma confidence level, which is 0.5 x 10^16 GeV for the linear and 1.4 x 10^7 GeV for the quadratic case. As you can see in the above figure, the data in the high energy bins has large error-bars owing to the low total count, so the evidence that there even is a drop isn’t all that great.

I still don’t buy there’s some evidence for space-time foam to find here, but I have to admit that this data finally convinces me that at least there is a systematic lag in the spectrum. That’s the nice thing I have to say.

Now to the not-so-nice. If you want to convince me that some part of the spectral distortion is due to a propagation-effect, you’ll have to show me evidence that its strength depends on the distance to the source. That is, in my opinion, the only way to make sure one doesn’t merely look at delays present already at emission. And even if you’d done that, I still wouldn’t be convinced that it has anything to do with space-time foam.

I’m skeptical of this because the theoretical backing is sketchy. Quantum fluctuations of space-time in any candidate-theory for quantum gravity do not lead to this effect. One can work with phenomenological models, in which such effects are parameterized and incorporated as new physics into the known theories. This is all well and fine. Unfortunately, in this case existing data already constrains the parameters so that the effect on the propagation of light is unmeasurably small. It’s already ruled out. Such models introduce a preferred frame and break Lorentz-invariance, and there is loads of data speaking against it.

It has been claimed that the already existing constraints from Lorentz-invariance violation can be circumvented if Lorentz-invariance is not broken but instead deformed. In this case the effective field theory limit supposedly doesn’t apply. This claim is also quoted in the paper above (see the end of section 3). However, if you look at the references in question, you will not find any argument for how one manages to avoid this. Even if one can make such an argument though (I believe it’s possible, not sure why it hasn’t been done), the idea suffers from various other theoretical problems that, to make a very long story very short, make me think the quantum gravity-induced spectral lag is highly implausible.

However, leaving aside my theory-bias, this newly proposed model with two overlaid sources for the energy-dependent time-lag is simple and should be straightforward to test. Most likely we will soon see another paper evaluating how well the model fits other bursts on record. So stay tuned, something’s happening here.