Friday, November 27, 2015

Away note

I’ll be traveling for the next two weeks. First I’ll be going to a conference on “scholarly publishing” in the picturesque city of Tromsø. The “o” with the slash is Norwegian, and the trip is going to beat my personal farthest-north record, currently held by Reykjavik (or some village with an unpronounceable name a little north of it).

I don’t have the faintest clue why they invited me, to give a keynote lecture of all things, in the company of a Nobel Prize winner. But I figured I’d go and tell them what’s going wrong with peer review; at least that will be entertaining. Thanks to a stomach bug my husband brought back from India, by means of which I lost an estimated 800 pounds in 3 days, “tell them what’s going wrong with peer review” is so far pretty much the whole plan for the lecture.

The week after, I’ll be going to a workshop in Munich on the question “Why trust a theory?”. The event is organized by the Munich Center for Mathematical Philosophy, where I already attended an interesting workshop two years ago. This time the workshop is dedicated to the topics raised in Richard Dawid’s book “String Theory and the Scientific Method,” which I reviewed here. The topic has been on my mind a lot since, and I’m looking forward to the workshop.

Monday, November 23, 2015

Dear Dr B: Can you think of a single advancement in theoretical physics, other than speculation, since the early 1980s?

This question was asked by Steve Coyler, who was a frequent commenter on this blog before facebook ate him up. His full question is:
“Can you think of a single advancement in theoretical physics, other than speculation like Strings and Loops and Safe Gravity and Twistors, and confirming things like the Higgs Boson and pentaquarks at the LHC, since Politizer and Wilczek and Gross (and Coleman) did their thing re QCD in the early 1980's?”
Dear Steve:

What counts as “advancement” is somewhat subjective – one could argue that every published paper is an advancement of sorts. But I guess you are asking for breakthroughs that have generated new research areas. I also interpreted your question to have an emphasis on “theoretical,” so I will leave aside mainly experimental advances, like free-electron lasers, attosecond spectroscopy, quantum dots, and so on.

Admittedly your question pains me considerably. Not only does it demonstrate you have swallowed the stories about a crisis in physics that the media warm up and serve every couple of months. It also shows that I haven’t gotten across the message I tried to convey in this earlier post: the topics which dominate the media aren’t the topics that dominate actual research.

The impression you get about physics from reading science news outlets is extremely distorted. The vast majority of physicists have nothing to do with quantum gravity, twistors, or the multiverse. Instead they work in fields that are barely if ever mentioned in the news, like atomic and nuclear physics, quantum optics, material physics, plasma physics, photonics, or chemical physics. In all these areas theory and experiment are very closely tied together, and the path to patents and applications is short.

Unfortunately, advances in theoretical physics get pretty much no media coverage whatsoever. They only make it into the news if they were experimentally confirmed – and then everybody cheers the experimentalists, not the theorists. The exceptions are the higher speculations that you mention, which are deemed news-worthy because they supposedly show that “everything we thought about something is wrong.” These headlines are themselves almost always wrong.

Having said that, your question is difficult for me to answer. I’m not a walking and talking encyclopedia of contemporary physics, and in the early 1980s I was in kindergarten. The origins of many research areas that are hot today aren’t well documented because their history hasn’t yet been written. This is to warn you that I might be off a little with the timing on the items below.

I list for you the first topics that come to my mind, and I invite readers to submit additions in the comments:

  • Topological insulators. This is one of the currently hottest topics in physics, and many people expect a Nobel Prize to go to this area in the near future. A topological insulator is a material that conducts only on its surface. They were first predicted theoretically in the mid-80s.

  • Quantum error correction, quantum logical gates, quantum computing. The idea of quantum computing came up in the 1980s, and most of the understanding of quantum computation and quantum information is only two decades old. [Corrected date: See comment by Matt.]

  • Quantum cryptography. While the first discussion of quantum cryptography predates the 1980s, the field really only took off in the last two decades. It is also one of the hottest topics today because the first applications are now appearing. [Corrected date: See comment by Matt.]

  • Quantum phase transitions, quantum critical points. I haven’t been able to find out exactly when this was first discussed, but it’s an area that has flourished in the last 20 years or so. This is work mainly led by theory, not experiment.

  • Metamaterials. While materials with a negative refractive index were first discussed in the mid-60s, they weren’t paid much attention until the late 1990s, when further theoretical work demonstrated that materials with negative permittivity and permeability should exist. The first experimental confirmation came in 2000, and since then the field has exploded. This is another area which will probably see a Nobel Prize in the near future. You have read about this in the news under the headline “invisibility cloak.”

  • Dirac (Weyl) materials. These are materials in which excitations behave like Dirac (Weyl) fermions. Graphene is an example. Again, I don’t really know when this was first predicted, but I think it was after 1980.

  • Fractional Quantum Hall Effect. The theoretical explanation was provided by Laughlin in 1983, and he was awarded a Nobel Prize in 1998, together with two experimentalists. [Added, see comment by Flavio.]

  • Inflation. Inflation is the rapid expansion in the early universe, a theoretical prediction that served to solve a lot of problems. It was developed in the early 1980s.

  • Effective field theory/Renormalization group running. While the origins of this framework go back to Wilson in 1975, the field only took off in the mid-90s. This topic too is about to become hot, because the breakdown of effective field theory is one of the possible explanations for the unnatural parameters of the Standard Model indicated by recent LHC data.

  • Quantum Integrable Systems. This is a largely theoretical field that is still waiting for its experimental prime time. One might argue that the first papers on the topic were already written by Bethe in the 1930s, but most of the work has been done in the last 20 years or so.

  • Conformal field theory. Like the previous topic, this area is still heavily dominated by theory and is waiting for its time to come. It started taking off in the mid-1990s. It was the topic of one of the first-ever arxiv papers.

  • Geometrical frustration, spin glasses. Geometrically frustrated materials have a large entropy even at zero temperature. You have read about these in the context of monopoles in spin-ice. Much of the theoretical work on this started only in the mid 1980s and it’s still a very active research area.

  • Cosmological Perturbation Theory. This is the mathematical framework necessary to describe the formation of structures in the universe. It was developed starting in the 1980s.

  • Gauge-gravity duality (AdS/CFT). This is a relation between certain gravitational theories and quantum field theories, discovered in the late 1990s. Its applications are still being explored, but it’s one of the most promising research directions in quantum field theory at the moment.
If you want to get a visual impression of what is going on in physics, you can browse arxiv papers using Paperscape.org. There you see all arxiv papers as dots; the larger the dot, the more citations. The images in this blogpost are screenshots from Paperscape.

You can follow this blog on facebook here.

Tuesday, November 17, 2015

The scientific method is not a myth

Heliocentrism, natural selection, plate tectonics – much of what is now accepted fact was once controversial. Paradigm-shifting ideas were, at their time, often considered provocative. Consequently the way to truth must be pissing off as many people as possible by making totally idiotic statements. Like declaring that the scientific method is a myth, which was most recently proclaimed by Daniel Thurs on Discover Blogs.

Even worse, his article turns out to be a book excerpt. This hits me hard after just having discovered that someone by the name of Matt Ridley also published a book full of misconceptions about how science supposedly works. Both fellows seem to suffer from the same misunderstanding: the belief that because science is a self-organized system, it operates without method – in Thurs’ case – and without governmental funding – in Ridley’s case. That science is self-organized is correct. But to conclude from this that progress comes from nothing is wrong.

I blame Adam Smith for all this mistaken faith in self-organization. Smith used the “invisible hand” as a metaphor for the regulation of prices in a free market economy. If the actors in the market have full information and act perfectly rationally, then all goods should eventually be priced at their actual value, maximizing the benefit for everyone involved. And ever since Smith, self-organization has been successfully used out of context.

In a free market, the value of the good is whatever price this ideal market would lead to. This might seem circular but it isn’t: It’s a well-defined notion, at least in principle. The main argument of neo-conservatism is that any kind of additional regulation, like taxes, fees, or socialization of services, will only lead to inefficiencies.

There are many things wrong with this ideal of a self-regulating free market. To begin with, real actors are neither perfectly rational nor do they ever have full information. And then the optimal prices aren’t unique; instead there are infinitely many optimal pricing schemes, so one needs an additional selection mechanism. But oversimplified as it is, this model, now known as equilibrium economics, explains why free markets work well, or at least better than planned economies.

No, the main problem with trust in self-optimization isn’t the many shortcomings of equilibrium economics. The main problem is the failure to see that the system itself must be arranged suitably so that it can optimize something, preferably something you want to be optimized.

A free market needs, besides fiat money, rules that must be obeyed by actors. They must fulfil contracts, aren’t allowed to have secret information, and can’t form monopolies – any such behavior would prevent the market from fulfilling its function. To some extent violations of these rules can be tolerated, and the system itself would punish the dissidents. But if too many actors break the rules, self-optimization would fail and chaos would result.

Then of course you may want to question whether the free market actually optimizes what you desire. In a free market, future discounting and personal risk tend to be higher than many people prefer, which is why all democracies have put in place additional regulations that shift the optimum away from maximal profit towards something we perceive as more important to our well-being. But that’s a different story that shall be told another time.

The scientific system works in many regards similarly to a free market. Unfortunately, the market of ideas isn’t as free as it would have to be to really work efficiently, but by and large it works well. As with market economies though, it only works if the system is set up suitably. And then it optimizes only what it’s designed to optimize, so you had better configure it carefully.

The development of good scientific theories and the pricing of goods are examples of adaptive systems, and so is natural selection. Such adaptive systems generally work in a cycle of four steps:
  1. Modification: A set of elements that can be modified.
  2. Evaluation: A mechanism to evaluate each element according to a measure. It’s this measure that is being optimized.
  3. Feedback: A way to feed the outcome of the evaluation back into the system.
  4. Reaction: A reaction to the feedback that optimizes elements according to the measure by another modification.
With these mechanisms in place, the system will be able to self-optimize according to whatever measure you have given it, by reiterating a cycle going through steps one to four.
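For the programmers among you, here is a minimal sketch of such a loop in Python – my own toy illustration, not anybody’s actual model, with evaluate() and modify() standing in for whatever the real system does:

import random

def adaptive_cycle(elements, evaluate, modify, rounds=1000):
    # Generic self-optimization: modification -> evaluation -> feedback -> reaction.
    for _ in range(rounds):
        # 1. Modification: propose a change to a randomly picked element.
        i = random.randrange(len(elements))
        candidate = modify(elements[i])
        # 2. Evaluation: score the old and the new element by the measure.
        old_score, new_score = evaluate(elements[i]), evaluate(candidate)
        # 3. Feedback: the comparison of the two scores is fed back...
        # 4. Reaction: ...and the better-scoring element is kept.
        if new_score > old_score:
            elements[i] = candidate
    return elements

# Toy usage: the "hypotheses" are numbers, the measure rewards closeness to 42.
hypotheses = [random.uniform(0, 100) for _ in range(10)]
result = adaptive_cycle(hypotheses,
                        evaluate=lambda x: -abs(x - 42),
                        modify=lambda x: x + random.gauss(0, 1))
print(sorted(round(x, 2) for x in result))

Whatever you plug in for evaluate() is what the system ends up optimizing – which is the whole point of the following examples.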

In the economy the set of elements are priced goods. The evaluation is whether the goods sell. The feedback is the vendor being able to tell how many goods sell. The reaction is to either change the prices or improve the goods. What is being optimized is the satisfaction (“utility”) of vendors and consumers.

In natural selection the set of elements are genes. The evaluation is whether the organism thrives. The feedback is the dependence of the amount of offspring on the organisms’ well-being. The reaction is survival or extinction. What is being optimized are survival chances (“fitness”).

In science the set of elements are hypotheses. The evaluation is whether they are useful. The feedback is the test of hypotheses. The reaction is that scientists modify or discard hypotheses that don’t work. What is being optimized in the scientific system depends on how you define “useful.” It used to mean predictive, yet if you look at high energy physics today you might be tempted to think it’s instead mathematical elegance. But that’s a different story that shall be told another time.

That some systems optimize a set of elements according to certain criteria is not self-evident and doesn’t come from nothing. There are many ways systems can fail at this, for example because feedback is missing or a reaction isn’t targeted enough. A good example of lacking feedback is the administration of higher education institutions. They operate incredibly inefficiently, to the extent that the only way one can work with them is by circumventing them. The reason, in my own experience, is that it’s next to impossible to fix obviously nonsensical policies or to boot incompetent administrative personnel.

Natural selection, to take another example, wouldn’t work if genetic mutations scrambled the genetic code too much, because whole generations would be entirely unviable and no feedback would be possible. Or take the free market: if we all agreed that tomorrow we no longer believe in the value of our currency, the whole system would come down.

Back to science.

Self-optimization by feedback in science, now known as the scientific method, was far from obvious to people in the Middle Ages. It seems difficult to fathom today how they could not have known. But to see how this could be, you only have to look at fields that still don’t have a scientific method, like much of the social and political sciences. They’re not testing hypotheses so much as trying to come up with narratives or interpretations, because most of their models don’t make testable predictions. For a long time, this is exactly what the natural sciences were about too: They were trying to find narratives, they were trying to make sense. Quantification, prediction, and application came much later, and only then could the feedback cycle be closed.

We are so used to rapid technological progress now that we forget it didn’t use to be this way. For someone living 2000 years ago, the world must have appeared comparatively static and unchanging. The idea that developing theories about nature allows us to shape our environment to better suit human needs is only a few hundred years old. And now that we are able to collect and handle sufficient amounts of data to study social systems, the feedback on hypotheses in this area will probably also become more immediate. This is another opportunity to shape our environment to better suit our needs, by recognizing just which setup makes a system optimize which measure. That includes our political systems as well as our scientific systems.

The four steps that an adaptive system needs to cycle through don’t come from nothing. In science, the most relevant restriction is that we can’t just randomly generate hypotheses because we wouldn’t be able to test and evaluate them all. This is why science heavily relies on education standards, peer review, and requires new hypotheses to tightly fit into existing knowledge. We also need guidelines for good scientific conduct, reproducibility, and a mechanism to give credits to scientists with successful ideas. Take away any of that and the system wouldn’t work.

The often-depicted cycle of the scientific method, consisting of hypotheses-generation and subsequent testing, is incomplete and lacks details, but it’s correct in its core. The scientific method is not a myth.


Really I think today anybody can write a book about whatever idiotic idea comes to their mind. I suppose the time has come for me to join the club.

Monday, November 16, 2015

I am hiring: Postdoc in AdS/CFT applications to condensed matter

I am hiring a postdoc for a 3-year position based at Nordita in Stockholm. The research is project-bound, funded by a grant from the Swedish Research Council. I am looking for someone with a background in AdS/CFT applications to condensed matter and/or analogue gravity. If you want to know what the project is about, have a look at these recent papers. It’s a good contract with full benefits. Please submit your application documents (CV, research interests, at least two letters) here. Further questions should be addressed to hossi[at]nordita.org

Thursday, November 12, 2015

Mysteriously quiet space baffles researchers

The Parkes Telescope. [Image Source]

Astrophysicists have concluded the most precise search yet for the gravitational wave background created by supermassive black hole mergers. But the expected signal isn’t there.


Last month, Lawrence Krauss spread the rumor that the newly upgraded gravitational wave detector LIGO had seen its first signal. The news spread quickly – and was shot down almost as quickly. The new detector still had to be calibrated, a member of the collaboration explained, and a week later it emerged that the signal was probably a test run.


While this rumor caught everybody’s attention, a surprise find from another gravitational wave experiment almost drowned in the noise. The Parkes Pulsar Timing Array Project just published results from analyzing 11 years’ worth of data in which they expected to find evidence for gravitational waves created by mergers of supermassive black holes. The sensitivity of their experiment is well within the regime where the signal was predicted to be present. But the researchers didn’t find anything. Spacetime, it seems, is eerily quiet.

The Pulsar Timing Array project uses the 64 m Parkes radio telescope in Australia to monitor regularly flashing light sources in our galaxy. Known as pulsars, these objects are rotating neutron stars. The fastest-spinning ones are thought to be created in binary systems, where two stars orbit around a common center: when the neutron star accretes mass from its companion star, it gets spun up to enormous rotation rates. The star emits radiation in a beam that, due to the rapid rotation, sweeps around like that of a lighthouse. Since we can only observe the signal when the beam is aimed at our telescopes, the source seems to turn on and off at regular intervals: a pulsar has been created.

The astrophysicists on the lookout for gravitational waves use the fastest-spinning pulsars as enormously precise galactic clocks. These millisecond pulsars rotate so reliably that their pulses get measurably distorted already by minuscule disturbances of spacetime. Much like buoys move with waves on the water, pulsars move with gravitational waves when space and time are stretched. In this way, the precise arrival times of the pulsars’ signals on Earth get distorted. The millisecond pulsars in our galaxy are thus nothing but a huge gravitational wave detector that nature has given us for free.

Take the pulsar with the catchy name PSR J1909-3744. It flashes us every 2.95 milliseconds, a hundred times in the blink of an eye. And, as the new experiment reveals, it does so to a precision of a few microseconds, year after year after year. This tells the researchers that the noise they expected from supermassive black hole mergers is not there.
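As a rough rule of thumb (my order-of-magnitude estimate, not a number from the paper), a gravitational wave of strain amplitude h and frequency f shifts the pulse arrival times by about

$$ \delta t \sim \frac{h}{2\pi f}, $$

so the nanohertz waves expected from supermassive binaries, tiny as their strain is, should build up into a correlated wobble of the arrival times over years of observation – and that wobble is what’s missing.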

The reason for this missing signal is a great puzzle right now. Most known galaxies, including our own, seem to host huge black holes with masses of more than a million times that of our Sun. And in the vastness of space and on cosmological timescales, galaxies bump into each other every once in a while. If that happens, they most often merge into a larger galaxy and, after some period of turmoil, the new galaxy will have a supermassive binary black hole at its center. These binary systems emit gravitational waves, which should be found throughout the entire universe.

The prevalence of gravitational waves from supermassive binary black holes can be estimated from the probability for a galaxy to host such a black hole and the frequency with which galaxies merge. The emission of gravitational waves in these systems is a consequence of Einstein’s theory of General Relativity. Combine the existing observations with the calculation for the emission, and you get an estimate for the background noise from gravitational waves. The pulsar timing should be sensitive to this noise. But the new measurement is inconsistent with all existing models for the gravitational wave background in this frequency range.
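For orientation: in the pulsar-timing literature the combined signal from the whole population of binaries is usually parametrized as a characteristic strain spectrum (this is the standard convention, not something specific to this paper),

$$ h_c(f) = A \left(\frac{f}{\mathrm{yr}^{-1}}\right)^{-2/3}, $$

where the amplitude A encodes the merger rate and the black hole masses. It is this amplitude that the Parkes limit constrains, and according to the paper the limit now lies below what the standard models predict.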

Gravitational waves are one of the key predictions of General Relativity, Einstein’s masterwork, which celebrates its 100th anniversary this year. They have never been detected directly, but the energy loss that gravitational waves must cause has been observationally confirmed in stellar binary systems. A binary system acts much like a gravitational antenna: it constantly emits radiation, just that instead of electromagnetic waves it is gravitational waves that the system sends into space. As a consequence of the constant loss of energy, the stars move closer together and the rotation frequency of the binary system increases. In 1993 the Physics Nobel Prize went to Hulse and Taylor for pioneering this remarkable confirmation of General Relativity.
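The textbook result behind this, at lowest (quadrupole) order for a circular binary with masses m₁, m₂ and separation a, is

$$ P = \frac{32}{5}\,\frac{G^4}{c^5}\,\frac{m_1^2\, m_2^2\,(m_1+m_2)}{a^5}, $$

so the radiated power rises steeply as the orbit shrinks, which is why the decay speeds up over time, just as Hulse and Taylor observed.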

Ever since, researchers have tried to find other ways to measure the elusive gravitational waves. The amount of gravitational waves they expect depends on the wavelength – roughly speaking, the longer the wavelength, the more of them should be around. The LIGO experiment is sensitive to wavelengths of the order of some thousand km. The network of pulsars, however, is sensitive to wavelengths of several lightyears, corresponding to 10^16 meters or even more. At these wavelengths astrophysicists expected a much larger background signal. But this is now excluded by the recent measurement.
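To make the numbers explicit (my arithmetic, using rough band centers), the conversion is just λ = c/f:

$$ f \sim 100~\mathrm{Hz} \;\Rightarrow\; \lambda \sim 3\times 10^{6}~\mathrm{m} \approx 3000~\mathrm{km}, \qquad f \sim 10^{-8}~\mathrm{Hz} \;\Rightarrow\; \lambda \sim 3\times 10^{16}~\mathrm{m} \approx 3~\mathrm{lightyears}. $$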

Estimated gravitational wave spectrum. [Image Source]

Why the discrepancy with the models? In their paper the researchers offer various possible explanations. To begin with, the estimates for the number of galaxy mergers or supermassive binary black holes could be wrong. Or the supermassive black holes might not be able to form close enough binary systems in the mergers. Or it could be that the black holes sit in an environment full of interstellar gas, which would reduce the time during which they emit gravitational waves at these frequencies. There are many astrophysical scenarios that might explain the observation. An absolute last resort is to reconsider what General Relativity tells us about gravitational-wave emission.

 You have just witnessed the birth of a new mystery in physics.


[This post previously appeared at Starts with a Bang.]

Tuesday, November 10, 2015

Dear Dr. B: What do physicists mean when they say time doesn’t exist?

That was a question Brian Clegg didn’t ask but should have asked. What he asked instead in a recent blogpost was: “When physicists say many processes are independent of time, are they cheating?” He then answered his own question with yes. Let me explain first what’s wrong with Brian’s question, then I’ll tell you something about the existence of time.

What is time-reversal invariance?

The problem with Brian’s question is that no physicist I know would ever say that “many processes are independent of time.” Brian, I believe, didn’t mean time-independent processes but time-reversal invariant laws. The difference is important. The former is a process that doesn’t depend on time. The latter is a symmetry of the equations determining the process. Having a time-reversal invariant law means that the equations remain the same when the direction of time is reversed. This doesn’t mean the processes remain the same.

The mistake is twofold. Firstly, a time-independent process is a very special case. If you watch a video that shows a still image, it doesn’t matter if you watch it forward or backward. So, yes, time-independence implies time-reversal invariance. But secondly, if the underlying laws are time-reversal invariant, the processes themselves are reversible, but not necessarily invariant under this reversal. You can watch any movie backwards with the same technology that you can watch it forwards, yet the plot will look very different. The difference is the starting point, the “initial condition.”

The fundamental laws of nature, for all we know today, are time-reversal invariant*. This means you can rewind any movie and watch it backwards using the same laws. The reason that movies look very different backwards than forwards is a probabilistic one, captured in the second law of thermodynamics: the entropy of a closed system never decreases, and in large systems it in practice almost always increases. The initial state is thus almost always very different from the final state.

Probabilities enter not through the laws themselves, but through these initial conditions. It is easy enough to set up a bowl with flour, sugar, butter, and eggs (initial condition), and then mix it (the law) to a smooth dough. But it is for all practical purposes impossible to set up dough so that a reverse-spinning mixer would separate the eggs from the flour.

In principle the initial state for this unmixing exists. We know it exists because we can create its time-reversed version. But you would have to arrange the molecules in the dough so precisely that it’s impossible to do. Consequently, you never see dough unmixing, never see eggs unbreaking, and facelifts don’t make you younger.
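If you want to see both facts at once – reversible law, irreversible-looking behavior – here is a little numpy toy (my own illustration): particles that just move on straight lines, a law that is exactly reversible, yet a coarse-grained entropy that goes up from a bunched-up start and comes back down only for the absurdly fine-tuned reversed velocities.

import numpy as np

rng = np.random.default_rng(1)

# Toy "gas": N free particles start bunched up in a corner of a box of size L.
# The law (straight-line motion) is exactly time-reversal invariant.
N, L, steps, dt = 1000, 1.0, 200, 0.01
x = rng.uniform(0.0, 0.1, N)   # special, low-entropy initial positions
v = rng.normal(0.0, 1.0, N)    # random velocities

def coarse_entropy(x, bins=20):
    # Shannon entropy of the coarse-grained (binned) position distribution.
    counts, _ = np.histogram(x % L, bins=bins, range=(0, L))
    p = counts[counts > 0] / len(x)
    return -np.sum(p * np.log(p))

def evolve(x, v, steps):
    for _ in range(steps):
        x = x + v * dt   # free streaming; the box enters only via the binning above
    return x

print("entropy initially:       %.2f" % coarse_entropy(x))
x_final = evolve(x, v, steps)
print("entropy after spreading: %.2f" % coarse_entropy(x_final))
# Reverse all velocities -- an exquisitely fine-tuned "initial" condition --
# and the very same law un-mixes the gas back to its special starting state.
x_back = evolve(x_final, -v, steps)
print("entropy after reversal:  %.2f" % coarse_entropy(x_back))

The reversed run is allowed by the law; it just never happens by itself, because you’d never stumble on that initial condition by chance.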

It is worth noting that all of this is true only in very large systems, with a large number of constituents. This is always the case for daily life experience. But if a system is small enough, it is indeed possible for entropy to decrease every once in a while just by chance. So you can ‘unmix’ very small patches of dough.

What does any of this have to do with the existence of time? Not very much. Time arguably does exist. In a previous blogpost I explained that the property of being real isn’t binary (true or false), but graded from “mostly true” to “most likely false.” Things don’t either exist or not exist; they exist at various levels of immediacy, depending on how detached they are from direct sensory exploration.

Space and time are something we experience every day. Einstein taught us that space and time are combined in space-time, and that its curvature is the origin of gravity. We move around in space-time. If space-time wasn’t there, we wouldn’t be there, because there wouldn’t be any “there” to be at. And since space and time belong together, time exists the same way space does.

Claiming that time doesn’t exist is therefore misusing language. In General Relativity, time is a coordinate, one that is relevant to obtain predictions for observables. It isn’t uniquely defined, and it is not itself observable, but that doesn’t make it non-existent. If you’d ask me what it means for time to exist, I’d say it’s the Lorentzian signature of the metric, and that is something which we need for our theories to work. Time is, essentially, the label to order frames in the movie of our universe.
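In formulas, that signature is the single minus sign in the line element of flat space-time,

$$ ds^2 = -c^2\, dt^2 + dx^2 + dy^2 + dz^2, $$

and it is this minus sign that distinguishes the time direction from the three space directions; a Euclidean space would have a plus sign there.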

Why do some physicists say that time isn’t real?

When physicists say that time doesn’t exist they mean one of two things: 1) The passage of time is an illusion, or 2) Time isn’t fundamental.

As to 1). In our current description of the universe, the past isn’t fundamentally different from the future. It will look different in outcome, but it will be made of the same stuff and it will work the same way. There is no dividing line that separates past and future and demarcates the present moment.

Our experience of there being a “present” comes from the one-sidedness of memory formation. We can only form memories of times when entropy was smaller, so we can’t remember the future. The perception of time passing comes from the update of our memory in the direction of entropy increase.

In this view, every moment in time exists in some way, though at any given moment most of them are either remote from our experience (the past) or inaccessible to it (the future). The perception of existence itself is time-dependent and also individual. You might say that the future is so remote to your perception, and prediction so close to impossible, that it is on the level of non-existence. I wouldn’t argue with you about that, but if you learn some more General Relativity your perception might shift.

Now this point of view irks some people, by which I mean Lee Smolin. Lee doesn’t like it that the laws of nature we know today do not give a special relevance to a present moment. He argues that this signals there is something missing in our theories, and that time should be “real.” What he means by that is that the laws of nature themselves must give rise to something like a present moment, which is not presently the case.

As to 2). We know that General Relativity cannot be the fundamental theory of space and time because it breaks down when gravity becomes very strong. The underlying theory might not have a notion of time, instead space and/or time might be emergent – they might be built up of something more fundamental.

I have some sympathy for this idea because I find it plausible that Euclidean and Lorentzian signatures are two different phases of the same underlying structure. This necessarily implies that time isn’t fundamental, but that it comes about in some phase transition.

Some people say that in this case “time doesn’t exist” but I find this extremely misleading. Any such theory would have to reproduce General Relativity and give rise to the notion of time we presently use. Saying that something isn’t real because it’s emergent is a meaningless abuse of terminology. It’s like saying the forest doesn’t exist because it’s made of trees.

In summary, time is real in a well-defined way, has always been real, and will always be real. When physicists say that time isn’t real they normally use it as a short-hand to refer to specific properties of their favorite theories, for example that the laws are time-reversal invariant, or that space-time is emergent. The one exception is Lee Smolin who means something else. I’m not entirely sure what, but he has written a book or two about it.

* Actually they’re not, they’re CPT invariant. But if you know the difference then I don’t have to explain the difference to you.

Monday, November 09, 2015

Another new social networking platform for scientists: Loop

Logo of Loop Network website.

A recent issue of Nature magazine ran an advertisement feature for “Loop,” a new networking platform to “maximize the impact of researchers and their discoveries.” It’s an initiative by Frontiers, an open access publisher. Of course I went and got an account to see what it does and I’m here to report back.

In the Nature advert, the CEO of Frontiers interviews herself and answers the question “What makes Loop unique?” with “Loop is changing research networking on two levels. Firstly, researchers should not have to go to a network; it should come to them. Secondly, researchers should not have to fill in dozens of profiles on different websites.”

So excuse me for expecting a one-click registration that would make use of one of my dozens of other profiles. Instead, I had to fill in a lengthy registration form that, besides name and email, asked not only for my affiliation, country of residence, job description, field of occupation, domain, and speciality, but also for my birthdate, before I had to confirm my email address.

Since that network was so good at “coming to me,” it wasn’t possible after registration to import my profile from any other site – Google Scholar, ORCID, LinkedIn, ResearchGate, Academia.edu or what have you, facebook, G+, twitter if you must. Instead, I had to fill in my profile yet another time. Nor, for all I can tell, can you actually link your other accounts to the Loop account.

If you scroll down the information pages, it turns out what the integration refers to is “Your Loop profile is discoverable via the articles you have authored on nature.com and in the Frontiers journals.” Somewhat underwhelming.

Then you have to assemble a publication list. I am lucky to have a name that, for all I know, isn’t shared by anybody else on the planet, so it isn’t so difficult to scan the web for my publications. The Loop platform came up with 86 suggested finds. These appeared in separate pop-up windows. If you have ever done this process before you can immediately see the problem: Typically in these lists there are many duplicate entries. So going through the entries one by one without seeing what is already approved means you have to memorize all previous items. Now I challenge you to recall whether item number 86 had appeared before on the list.

Finally done with this, what do you have there? A website that shows a statistic for how many people have looked at your profile (on this site, presumably), how many people have downloaded your papers (from this site, presumably), and a number of citations, which shows zero for me and for a lot of other profiles I looked at. A few people have a number there from the Scopus database. I conclude that Loop doesn’t have its own citation metric, nor does it use the one from Google Scholar or Spires.



As to the networking, you get suggestions for people you might know. I don’t know any of the suggested people, which isn’t surprising because, as we already noticed, they’re not importing information, so how are they supposed to know who I know? I’m not sure why I would want to follow any of these people, why that would be any better than following them elsewhere, or not at all. I followed some random person just because. If that person actually did something (which he doesn’t, much like everybody else whose profile I looked at), presumably it would appear in my feed. From that angle, it looks much like any other networking website. There is also a box that asks me to share something with my network of one.

In summary, for all I can tell this website is as useless as it gets. I don’t have the faintest clue what they think it’s good for. Even if it’s good for something it does a miserable job at telling me what that something is. So save your time.

Friday, November 06, 2015

New music video

Yes!

I know you can hardly contain the excitement about my new lipstick and the badly illuminated blue screen, so please enjoy my newest release, exclusively for you, dear reader.

I actually wrote this song last year, but then I mixed myself into a mush. In the hope that I’ve learned some things since, I revisited the project and gave it a second try. Though I’m still not quite happy with it (I never seem to get the vocals right), I strongly believe there’s merit to finishing things up. Also, if I have to hear this thing once again my head will implode (though at least that would set an end to the concussion symptoms and neck pain I caused myself with the hair shaking). Lesson learned: hitting your forehead against a wall isn’t really pleasant. If you feel like engaging in it, you should at the very least videotape it, because that justifies just about anything stupid.

Sunday, November 01, 2015

Dumb Holes Leak

Tl;dr: A new experiment demonstrates that Hawking radiation in a fluid is entangled, but only in the high frequency end. This result might be useful to solve the black hole information loss problem.

In August I went to Stephen Hawking’s public lecture in the fully packed Stockholm Opera. Hawking was wheeled onto the stage, placed in the spotlight, and delivered an entertaining presentation about black holes. The silence of the audience was interrupted only by laughter to Hawking’s well-placed jokes. It was a flawless performance with standing ovations.


In his lecture, Hawking expressed hope that he will win the Nobel Prize for the discovery that black holes emit radiation. Now called “Hawking radiation,” this effect would have been detected at the LHC had black holes been produced there. But the time has come, I think, for Hawking to update his slides. The ship to the promised land of micro black holes has long left the harbor, and it sunk – the LHC hasn’t seen black holes, has not, in fact, seen anything besides the Higgs.
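For reference, the standard expression for the temperature of the radiation from a black hole of mass M is

$$ T_H = \frac{\hbar c^3}{8\pi G M k_B}, $$

so the smaller the black hole, the hotter it radiates. That is why hypothetical microscopic black holes at the LHC would have evaporated in a detectable burst, while astrophysical black holes are far too cold for their radiation to ever be measured directly.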

But you don’t need black holes to see Hawking radiation. The radiation is a consequence of applying quantum field theory in a space- and time-dependent background, and you can use some other background to see the same effect. This can be done, for example, by measuring the propagation of quantum excitations in Bose-Einstein condensates. These condensates are clouds of about a billion or so ultra-cold atoms that form a fluid with basically zero viscosity. It’s as clean a system as it gets to see this effect. Handling and measuring the condensate is a big experimental challenge, but what wouldn’t you do to create a black hole in the lab?

The analogy between the propagation of excitations on background fluids and in a curved space-time background was first pointed out by Bill Unruh in the 1980s. Since then, many concrete examples have been found of condensed-matter systems that can be used as stand-ins for gravitational fields; they are summarily known as “analogue gravity systems” – this is “analogue” as in “analogy,” not as opposed to “digital.”

In these analogue gravity systems, the quantum excitations are sound waves, and the corresponding quantum particles are called “phonons.” A horizon in such a space-time is created at the boundary of a region in which the velocity of the background fluid exceeds the speed of sound, thereby preventing the sound waves from escaping. Since these fluids trap sound rather than light, such gravitational analogues are also called “dumb,” rather than “black” holes.

Hawking radiation was detected in fluids a few years ago. But these measurements only confirmed the thermal spectrum of the radiation and not its most relevant property: the entanglement across the horizon. The entanglement of the Hawking radiation connects pairs of particles, one inside and one outside the horizon. It is a pure quantum effect: The state of either particle separately is unknown and unknowable. One only knows that their states are related, so that measuring one of the particles determines the measurement outcome of the other particle – this is Einstein’s “spooky action at a distance.”

The entanglement of Hawking radiation across the horizon is the origin of the black hole information loss problem. In a real black hole, the inside partner of the entangled pair eventually falls into the singularity, where it gets irretrievably lost, leaving the state of its partner undetermined. In this process, information is destroyed, but this is incompatible with quantum mechanics. Thus, by combining gravity with quantum mechanics, one arrives at a result that cannot happen in quantum mechanics. It’s a classic proof by contradiction, and signals a paradox. This headache is believed to be remedied by the still missing theory of quantum gravity, but exactly what the remedy is nobody knows.

In a new experiment, Jeff Steinhauer from the Israel Institute of Technology measured the entanglement of the Hawking radiation in an analogue black hole; his results are available on the arxiv.

For this new experiment, the Bose-Einstein condensate was trapped and set in motion with laser light, making it an effectively one-dimensional system in flow. In this trap, the condensate had a low density in one half and a higher density in the other half, achieved by a potential step from a second laser. The speed of sound in such a condensate depends on the density: a higher density corresponds to a higher speed of sound. The high density region thus allowed the phonons to escape and corresponds to the outside of the horizon, whereas the low density region corresponds to the inside of the horizon.
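Quantitatively, the standard (Bogoliubov) result for a weakly interacting condensate – not specific to this experiment – is

$$ c_s = \sqrt{\frac{g\, n}{m}}, $$

with n the density, m the atomic mass, and g the interaction strength, so a higher density indeed means a higher speed of sound, letting the phonons outrun the flow on that side.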

The figure below shows the density profile of the condensate:

Figure 1 from 1510.00621. The density profile of the condensate. 

In this system then, Steinhauer measured correlations of fluctuations. These flowing condensates don’t last very long, so to get useful data, the same setting must be reproduced several thousand times. The analysis clearly shows a correlation between the excitations inside and outside the horizon, as can be seen in the figure below. The entanglement appears in the grey lines on the diagonal from top left to bottom right. I have marked the relevant feature with red arrows (ignore the green ones, they indicate matches between the measured angles and the theoretical prediction).


When Steinhauer analyzed the dependence on the frequency, he found a correlation only in the high frequency end, not in the low frequency end. This is as intriguing as it is confusing. In a real black hole all frequencies should be entangled. But if the Hawking radiation were not entirely entangled across the horizon, that might allow information to escape. One has to be careful, however, not to read too much into this finding.

To begin with, let us be clear, this is not a gravitational system. It’s a system that shares some properties with the gravitational case. But when it comes to the quantum behavior of the background, that may or may not be a useful comparison. Even if it was, the condensate studied here is not rotationally symmetric, as a real black hole would be. Since the rotational symmetry is essential for the red-shift in the gravitational potential, I actually don’t know how to interpret the low frequencies. Possibly they correspond to a regime that real black holes just don’t have. And then the correlation might just have gotten lost in experimental uncertainties – limitations by finite system size, number of particles, noise, etc – on which the paper doesn’t have much detail.

The difference between the analogue gravity system, which is the condensate, and the real gravity system is that we do have a theory for the quantum properties of the condensate. If gravity were quantized in a similar way, then studies like the one done by Steinhauer might indicate where Hawking’s calculation fails – for it must fail if the information paradox is to be solved. So I find this a very interesting development.

Will Hawking and Steinhauer get a Nobel Prize for the discovery and detection of the thermality and entanglement of the radiation? I think this is very unlikely, for right now it isn’t clear whether this is even relevant for anything. Should this finding turn out to be key to developing a theory of quantum gravity, however, that would be groundbreaking. And who knows, maybe Hawking will again be invited to Stockholm.