Saturday, July 29, 2006
The story had begun in November 2004, when I got an email from someone I had never heard of before, working at a Dutch research institute I had never heard of either. Peter Hoyng told me that he was preparing a textbook on astrophysics and cosmology, condensing into a book the course he had been teaching at the University of Utrecht for several years. He was looking for an illustration of a heavy ion collision that he would like to use in the part on the early universe where the transition from the primordial quark-gluon plasma to a hadron gas is discussed. By chance, he had found a snapshot from a simulation of a lead-lead collision at the CERN-SPS in an online talk I had prepared for my PhD advisor a year before. Now he was interested in a more detailed explanation of the figure, and asked for permission to use it in his book.
Of course, I was extremely pleased by this request. I provided him with the explanation and a colour version of the figure file, and asked him to tell me when the book would be in print. The next time I heard from him was half a year later, last July, when he contacted me again. He told me about delays in the publishing procedure because of a change of publisher, and asked me for a black-and-white version of the figure, at the request of the new publisher. I was happy to help him, and completely forgot the whole story - until this week, when I found his book, together with a short note, in my mailbox.
Obviously, there was one more change of publisher, since colour is now used again for the illustrations. The book is very neatly produced, as part of the Astronomy and Astrophysics Library at Springer. It starts with a motivation for the need of general relativity in astrophysics, introduces the geometry of Riemann spaces and general relativity, and goes on with the Schwarzschild metric, compact stars, and black holes. Two chapters discuss gravitational waves and Fermi-Walker transport (including a discussion of Gravity Probe B). The remaining chapters are devoted to cosmology: the Robertson-Walker metric, the evolution of the universe, observations, the early universe, and inflation. You can download the detailed table of contents from the publisher's website for the title.
So far I have only had a cursory look at the book, so this is not a review. But from what I have seen, it looks very interesting and worth reading. I especially appreciate the discussion of interferometric gravitational wave detectors, and Gravity Probe B. Moreover, I am happy to see the onion-like diagrams of light paths in the expanding universe, which you may know from Ned Wright's website. I first saw these types of diagrams in a paper by Ellis and Rothman in the American Journal of Physics. I found them extremely useful for developing some kind of visual understanding of the expanding universe, and I wonder how any textbook on cosmology can do without them.
All this, of course, was not the first thing I looked up in the book. I searched the index for quark-gluon plasma, and on page 241, I found my illustration:
It shows a snapshot from a simulation of a collision of two lead nuclei immediately after an off-center impact with an energy of 17.4 GeV per nucleon pair (corresponding to the CERN-SPS), as calculated with the code I used in my thesis. Unaffected, so-called spectator nucleons are white, while deconfined quarks and antiquarks are represented as coloured spheres. There are no gluons in this model - the effect of glue is subsumed in a linear, confining potential which is used to describe the interaction between quarks. What comes out of this simple model is that quarks quickly team up in colour-neutral quark-antiquark or three-quark configurations, which are mapped to mesons and baryons, respectively. For better visibility, the figure is stretched in the beam direction by the gamma factor to undo the Lorentz contraction of the colliding nuclei. The gamma factor at this collision energy is of order 10, and the spatial configuration of the colliding system in the centre-of-momentum frame is already quite pancake-like...
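As a quick back-of-the-envelope check (my own numbers, not from the book or from the simulation code): in the centre-of-momentum frame each nucleon carries half of the 17.4 GeV per nucleon pair, so the gamma factor follows directly from gamma = E/m:

```python
# Back-of-the-envelope check of the Lorentz gamma factor quoted above.
# sqrt(s_NN) = 17.4 GeV is the centre-of-mass energy per nucleon pair at the SPS;
# in the centre-of-momentum frame each nucleon carries sqrt(s_NN)/2,
# so gamma = E / m = sqrt(s_NN) / (2 * m_N).

M_NUCLEON = 0.938  # GeV, approximate nucleon mass
SQRT_S_NN = 17.4   # GeV, CERN-SPS Pb-Pb energy per nucleon pair

gamma = SQRT_S_NN / (2 * M_NUCLEON)
print(f"gamma = {gamma:.1f}")  # about 9.3, i.e. of order 10

# Lorentz contraction: a nucleus of radius ~7 fm is squeezed to ~7/gamma fm
# along the beam axis -- hence the pancake shape mentioned in the text.
contracted = 7.0 / gamma
print(f"contracted thickness ~ {contracted:.1f} fm")
```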
I guess I will have to write in much more detail about my simulations of the quark-gluon plasma, and this exciting topic in general. I will do so some time... But for now, I am just proud to see this figure of mine reproduced in a textbook.
TAGS: SCIENCE, PHYSICS, BOOKS
Thursday, July 27, 2006
"When I was young,
It seemed that life was so wonderful,
A miracle, oh it was beautiful, magical.
And all the birds in the trees,
Well they'd be singing so happily,
Joyfully, playfully watching me.
But then they send me away
To teach me how to be sensible,
Logical, responsible, practical.
And they showed me a world
Where I could be so dependable,
Clinical, intellectual, cynical.
There are times
- When all the world's asleep,
The questions run too deep
For such a simple man.
Won't you please,
Please tell me what we've learned
I know it sounds absurd
But please tell me who I am."
Friday, July 21, 2006
On my top ten list there is the question whether the parameters of the standard model (SM) can be derived from within a yet-to-be-found theory of everything (TOE). And if so, how. Can we make sense out of this collection of numbers? Lately, this question has been dominated by a topic I still can't make any sense out of: the Anthropic Principle (AP).
In this post I want to share some of my thoughts on the AP, and its sense or non-sense. You might also want to read my earlier post The Principle of Finite Imagination (alias: The Liver Post).
I want to state at the beginning that I don't want to discuss whether life on earth is 'intelligent' or 'civilized', so you might want to replace 'life' with 'intelligent life' or 'civilization' if you feel like it.
Below you find my thoughts on the following four statements that I have encountered:
- A: The conditions we observe in our universe are such that they allow the existence of life. (Or, equivalently: If the conditions were such that they didn't allow the existence of life, we wouldn't be here to observe them.)
- B: If we assume that the conditions follow some random distribution function, then we live in a typical sample, and we are typical for the universe we live in.
- C: The conditions we observe in our universe are optimal for the existence of life.
- D: The conditions we observe in our universe are optimal in some other sense.
A is usually called the weak Anthropic Principle (AP). Its scientific content has been debated over and over again; see e.g. Lee Smolin's paper and the ensuing argument with Leonard Susskind. The AP is not a theory, and I honestly have no idea what the scientific status of a 'principle' is supposed to be. I think the main issue for the physicist is whether or not the AP allows one to make predictions, and whether it is scientifically useful. Or at least this should be the main issue for us.
Without any doubt, A is a true statement. That means not only can it not be falsified: it cannot be false. But this does not mean it can't also be useful. It's a device one can use to draw conclusions. And indeed, one can use it to derive constraints on observables. However, this is no more a scientific theory than the use of mathematically true statements.
E.g. suppose you compute a prediction for the lifetime of muons at relativistic speed using Lorentz transformations and the identity cosh² − sinh² = 1. If your prediction is correct, you would not claim that this is due to the use of a hyperbolic identity. Instead, the agreement of your computation with the observed value is a confirmation that Special Relativity is a successful description of reality.
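To make that concrete, here is a minimal sketch of such a computation (the muon lifetime is a standard textbook value; the rapidity parametrization is my own choice for illustration):

```python
import math

TAU_REST = 2.197e-6  # s, mean muon lifetime at rest

def lab_lifetime(gamma):
    """Time-dilated lifetime seen in the lab frame: tau_lab = gamma * tau_rest."""
    return gamma * TAU_REST

# Parametrize the boost by the rapidity y: gamma = cosh(y), beta*gamma = sinh(y).
# The identity cosh^2 - sinh^2 = 1 guarantees that E^2 - p^2 = m^2 stays
# invariant; it is a mathematical tool, not the physics being confirmed.
y = 3.0
gamma = math.cosh(y)
assert abs(math.cosh(y) ** 2 - math.sinh(y) ** 2 - 1.0) < 1e-12

print(f"gamma = {gamma:.2f}")                               # about 10.07
print(f"lab lifetime = {lab_lifetime(gamma) * 1e6:.1f} us")  # about 22.1 us
```

A muon that lives about 2.2 microseconds at rest survives roughly ten times longer in the lab frame at this boost, which is what experiments confirm.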
Likewise with Weinberg's bound on the cosmological constant. In a nutshell the argument is: if the cosmological constant were larger than some value, then galaxies would not have formed and we would not be here. Deriving this bound using A, and finding it fulfilled by the observed value, is a confirmation of having properly used a sensible theory of gravity and an appropriate model for density fluctuations. If the derived bound had turned out not to be fulfilled, we would not have concluded that A is wrong. Instead, we would have concluded that there is something wrong with our understanding of structure formation.
So, A can be used in the context of a theory or a model to make predictions, just that any conclusions drawn from this are not about A, but about the theory or the model.
B is also called the Principle of Mediocrity. For me, one of the two crucial questions here is the distribution of the parameters. This has to be given (preferably motivated, maybe just postulated) through some kind of a model. From this, one can then find out the most probable configurations. Since we are typical, we belong to one of these. If the parameters in this configuration agree with our observations, we could conclude that the distribution of the random parameters was in accordance with the expectation B that we live in a typical universe.
However, the randomness of the distribution always leaves a sneaky way out. If we measure some parameters, then the distribution of parameters as evaluated from the model will agree with some probability. How small do we allow this probability to be and still call it acceptable, i.e. typical? Or, how natural is natural? Let's say we set this acceptable probability to a value X. Then we would have to discard the model leading to the distribution if the observed values had a smaller probability.
One can do this. It is basically a search for random distributions that have maxima, if not at the observed values, then sufficiently close to them to be in accordance with our acceptance limit X.
A second huge problem I see with this approach is obviously what it means for the universe that we are 'typical'. What about our universe has to be typical, and at which stage of its evolution does it have to be typical? I have no idea how the notion of being typical could be put on a solid footing.
So, in my eyes B is the construction of models within which the observed parameters of the SM have a certain probability. The higher this probability, the better the model. (Hopefully, the model has fewer free parameters than the SM itself.) The central statement of being typical is very ill-defined and vague.
Approaches like this to describing nature were essentially the reason why I left Heavy Ion Physics (replace 'acceptable probability' with 'error bars').
C is a more sophisticated version of B, where being typical is replaced with being optimal. It suffers from the same problem of dealing with a very vague quantity, namely the 'optimalness for the existence of life'.
To underpin this with a physical approach: what we really want is some function of these parameters that we aim to predict -- a function which is optimal for the actually observed values. For case C, this function would have to be
Optimalness-for-Life(Parameters of the SM)
Applying a variational principle to this 'function' seems hopeless, but what one can do instead is tune the variables (the parameters of the SM) up and down to see whether the optimalness decreases. I.e., the poor man's way of locating an extremum. This is essentially what has been done in a huge number of examples, and it typically results in statements like: If the size of my cellphone were just 2% smaller, life would not be possible.
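The 'tune up and down' procedure can be sketched in a few lines. The function below is a made-up stand-in for Optimalness-for-Life; nothing about it comes from any physical model:

```python
# A toy sketch of the 'nudge each parameter up and down' check described above.
# 'optimalness' is a purely illustrative stand-in, peaked at params = (1.0, 2.0).

def optimalness(params):
    x, y = params
    return -((x - 1.0) ** 2 + (y - 2.0) ** 2)

def is_local_maximum(f, params, step=0.02):
    """Check whether nudging each parameter up or down ever improves f."""
    base = f(params)
    for i in range(len(params)):
        for sign in (+1, -1):
            nudged = list(params)
            nudged[i] += sign * step
            if f(nudged) > base:
                return False  # some small variation improves things
    return True

print(is_local_maximum(optimalness, [1.0, 2.0]))  # True at the peak
print(is_local_maximum(optimalness, [1.5, 2.0]))  # False away from it
```

Note that this check says nothing about whether the extremum is global, and nothing at all if some parameters are held fixed, which is exactly the limitation of the published examples.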
Despite the fact that this way one can only check for local extrema, and that one cannot really draw conclusions when keeping some parameters fixed and varying only a few, imo the largest problem is the absence of a reasonable definition of Optimalness-for-Life. There is way too much ambiguity attached to it. What can we possibly learn from this? Only that - assuming we live in a universe optimal for life - our idea of being optimal is not in disagreement with observation.
So, in my eyes C is an improvement over B but the central point of 'being optimal for life' is too vague to allow sensible insights into the secrets of nature.
D can be abstractly formulated as: there is some function of the parameters to be determined that is optimal for the observed values. The question then is what this function is. Apparently, the universe is not such that it optimizes the amount of US$ in my bank account. Too bad.
Lee Smolin proposes that the number of black holes could be such a function (Cosmological Natural Selection), whose value is maximized for our universe. Though the function Number-of-black-holes(parameters of the SM + LambdaCDM) itself is unknown, at least it's a well-defined quantity. Here again, one can test whether we are at a local extremum by tuning the parameters and estimating the effect. It seems the number of black holes is not such a bad guess (to me, this is really surprising).
Imo, in this regard it's not even important whether or not all the universes belonging to the non-optimal parameters actually 'really' exist. When I vary the metric in GR to find the optimal, realized configuration, I don't think of the other configurations as alternative universes. However, in Lee's scenario the other universes do 'really' exist, and the claim is then that we are likely to live in a universe where the number of black holes is as large as possible. This then has the additional virtue of providing a reason why the number of black holes is the function to be extremal (for further details, see hep-th/0407213 or The Life of the Cosmos).
One way or the other, D comes down to the question of whether there is a function that is optimized when the parameters of the SM have the values we observe. And which, in addition to reproducing known numbers, allows us to learn something new (i.e. make at least one falsifiable prediction).
But then the obvious question is whether this function can be derived from the fundamental principles of the TOE. It might be that this is not the case, and that e.g. the initial conditions play a central role instead. An example that has been used elsewhere (sorry, forgot where) is the orbits of the planets in the solar system, which were historically thought to arise from some symmetric construction. Today we'd say the orbits of the planets follow once we are given the initial stress-energy distribution, and the quantity to be optimised is the Lagrangian of GR plus that of the matter fields. We would not expect the orbits of the planets to be predictable from the SM of particle physics plus GR. Or from nesting dodecahedrons inside icosahedrons (see Platonic Solids).
But even if the function to be optimized can be derived from the TOE, in practice it might not be a useful way to deal with it in the full context. Just like we don't explain liver growth starting from the SM of particle physics, I find it a reasonable expectation that a macroscopic description of our universe might be more useful to determine the parameters of the SM.
However, I'd say our insights about a possible TOE are not yet deep enough to let us conclude that not even some of the parameters of the SM might be explained within such a fundamental theory.
TAGS: SCIENCE, PHYSICS, ANTHROPIC PRINCIPLE
Wednesday, July 19, 2006
TAGS: HUMOR, PHYSICS
Monday, July 17, 2006
When I started my position at the University of Arizona, Keith suggested an interesting project about neutrinos to me. I didn't know very much about neutrino physics at that time (okay, I didn't know anything at all). However, I could immediately relate to these elusive particles with small masses that interact only weakly, and which have caused quite a few physicists to scratch their heads.
During the following year, I learned a lot about neutrinos. Here, I'd like to give you a short and very basic introduction to what turned out to be a very fascinating and lively field of theoretical as well as experimental physics.
This is a three-step programme... today is for beginners.
|When we were kids, my younger brother drove me crazy. Each time my grandma gave us 50 Pfennig, we would go to get ice cream. But when we arrived at the store, my brother could never make up his mind between vanilla and chocolate.|
Thus, whenever we left home, I asked my brother what flavour he'd go for today. He'd start with a definite 'It's chocolate day', but after a minute he'd mumble something. Then it was 'Maybe vanilla', then 'No, chocolate', then again 'Better vanilla'... When we arrived at the store, he was caught somewhere between vanilla and chocolate. That is, until I'd yell at him to make up his mind before the queue behind us would just shove us away.
This I-scream issue was solved when the store got a new owner who introduced chocolate coating. So, my brother could get vanilla with chocolate and didn't have any more flavor problems.
Neutrinos come in three flavours, each belonging to a charged fermion: there is the electron, the muon and the tau neutrino. Neutrinos are produced in interactions (e.g. in the sun) and start their travel with one of these flavours. However, as time goes by, they can change from one flavour to another, and back, in a periodic process. Which flavour you find thus depends on the time that has passed since the neutrino was produced - or, equivalently, on the distance it travelled during this time.
There are, however, mixtures of neutrino flavours which remain unchanged when you start with them, like the vanilla-with-chocolate choice. These time-independent choices have distinct masses, and are therefore called mass eigenstates (there are also three of them). An important thing to know is that oscillations only take place when these masses are different. This implies that at least two of the masses have to be non-zero.
This is what makes neutrino oscillations so interesting: in the Standard Model of particle physics, the neutrino masses are exactly zero. By examining the properties of the elusive neutrino, however, we find that this cannot be the case. We are therefore testing physics beyond the Standard Model - a challenge for every theoretical physicist, and a promising source of new insights.
The typical distance it takes for the neutrino to return to its original state is the oscillation length. It depends on the energy of the neutrino: the larger the energy, the longer the oscillation length. It is also related to the differences of the squared masses: the smaller the difference, the larger the oscillation length. For zero difference, it would take forever for the oscillation to happen.
The second relevant quantity is the maximal fraction of one flavour that can change into a different flavour. This is parametrized in the 'mixing angle', which measures how 'mixed up' the flavours are. An angle of Pi/4 corresponds to 'maximal mixing', at which one flavour can change completely into another.
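For two flavours, these statements are captured by the standard oscillation formula P = sin²(2θ) · sin²(1.27 Δm² L/E), with Δm² in eV², L in km and E in GeV. A small sketch, using a mass-squared difference of roughly atmospheric size for illustration:

```python
import math

def oscillation_probability(theta, dm2_ev2, L_km, E_GeV):
    """Two-flavour appearance probability P(nu_a -> nu_b).

    Standard formula: P = sin^2(2*theta) * sin^2(1.27 * dm2 * L / E),
    with dm2 in eV^2, L in km, E in GeV (the 1.27 collects hbar and c).
    """
    return math.sin(2 * theta) ** 2 * math.sin(1.27 * dm2_ev2 * L_km / E_GeV) ** 2

theta = math.pi / 4   # maximal mixing: the flavour can convert completely
dm2 = 2.5e-3          # eV^2, roughly the atmospheric mass-squared difference

# Scan in L/E: P starts at 0, reaches sin^2(2*theta) = 1 at the first
# oscillation maximum, and returns to ~0 one full half-period later.
for L_over_E in (0, 495, 990):  # km/GeV, illustrative values
    p = oscillation_probability(theta, dm2, L_over_E, 1.0)
    print(f"L/E = {L_over_E:4d} km/GeV : P = {p:.2f}")
```

With a smaller mixing angle the probability never reaches one, and with dm2 = 0 it stays zero forever, just as described above.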
|The Standard Model does not predict the number of flavours. In principle there could be more than three, but we know from experiments that - if as light as the three known flavours - such additional particles are not produced in any reaction we have ever observed.|
Such hypothetical extra neutrinos are therefore referred to as 'sterile', and would not be detected through the usually studied reactions. (They could, however, be detected indirectly as a missing signal, or from cosmological observations.)
By now, the properties of neutrino oscillations - mixing angles and squared mass differences - have been measured very precisely. But for the theoretical physicist, the situation is a little unsatisfactory. Though one can calculate under the assumption of neutrino oscillations, we don't know why the neutrino masses are so small, how these masses are embedded in the Standard Model, or why the mixing between the flavours is so large. There is lots of stuff left to do...
Over the last few years, it has been confirmed with high precision that neutrino oscillation indeed happens. This is quite an impressive achievement, as the mass differences that have been measured are extremely small, less than a billionth of the proton's mass.
The existence of neutrino oscillations solves the puzzle of the solar neutrino deficit. Based on models of the processes in the sun, one can compute how many electron neutrinos the earth should receive from the sun's nuclear fusion. However, far too few electron neutrinos were measured on earth, and it was speculated that something was wrong with our understanding of the sun.
But eventually, in 2002, the SNO collaboration also measured the two other flavours, the muon and tau neutrinos, via what is called a neutral-current interaction. In this way, they were able to show that the missing electron neutrinos do indeed arrive - but with a different flavour. Since the total number measured is very close to what is expected from the sun's production of neutrinos, this also excludes substantial oscillation into sterile neutrinos (very small admixtures of sterile neutrinos are still not completely ruled out).
Besides in the sun, neutrinos are also produced in the earth's atmosphere by cosmic rays. Here, it's a mixture of electron and muon neutrinos that one expects down on earth, with twice as many muon neutrinos as electron neutrinos. These atmospheric neutrinos, which have a much higher energy than the solar neutrinos, have also been measured, and their behaviour fits very well with the expectations from neutrino oscillations.
Besides this, neutrinos are produced in huge amounts in nuclear reactors, and in lesser amounts in natural radioactive decays in the earth's crust. Both are currently the subject of intensive experimental studies.
Detecting a neutrino is not easy, because it interacts only very weakly. What one basically does is take a large amount of some material you know fairly well, and place detectors around it. And wait. Different experiments have used e.g. solutions of cadmium chloride in water, chlorine-containing fluids, heavy water, etc. Every once in a while, a neutrino will interact with one of the atomic nuclei. This reaction produces charged secondary particles, traces of which can eventually be observed in the surrounding detectors. The larger the amount of material you place your detectors around, the larger the probability that you actually see something.
I am always impressed by these experiments. My favourite detector is Super-Kamiokande. Here is a photo, where you see the large water tank (half filled) surrounded by the detectors. Isn't this beautiful?
Now that we have analysed the characteristics of neutrinos as they propagate, we can use them as a tool for further studies, e.g. of the properties of the sun, or as messengers from far-away places in the universe.
Wow, I just noticed that the Wikipedia entry on neutrinos has been thoroughly cleaned up! (I read it on Friday, and thought that it provides a rather unbalanced view. It's much better now, but still very dominated by experiment.)
TAGS: SCIENCE, PHYSICS, NEUTRINOS
Updated on July 19th 2006
Friday, July 14, 2006
The other important thing always connected with the quatorze juillet is, of course, Le Tour de France. This grand road bicycle race is usually about half-way through by this date. Traditionally, French racers try to win the stage of that day, but today it was the Ukrainian Yaroslav Popovych who arrived first in the beautiful medieval town of Carcassonne. Popovych rides for the Discovery Channel Team, the former team of last year's champion Lance Armstrong. The maillot jaune, the yellow jersey of the best cyclist in the overall standing, is worn by the American Floyd Landis, a former team mate of Lance Armstrong. It seems that Floyd Landis has good prospects of carrying the maillot jaune to Paris, continuing the unique, seven-year-long series of American winners started by Lance Armstrong.
This year's Tour de France is special in many respects. It may not have raised the same public interest as in previous years because of its overlap with the soccer World Cup, but that is not the point. It is the first Tour after Lance Armstrong, and here in Germany there was big hope that Jan Ullrich might have a last chance to repeat his victory of 1997. His big rival would have been Ivan Basso, the champion of this year's Giro d'Italia.
But then, two days before the start of the Tour, a big shock hit all cycling fans: as a result of a doping scandal uncovered in Spain earlier this year, both favourites were denied participation in the race, together with nearly 50 other cyclists. In the meantime, there have been more allegations against Jan Ullrich, which is quite a sad and tragic story.
Cycling has always had a problem with doping, probably even more so than other sports. Now, with this latest scandal, there have been fears that it may mean the end of the Tour. I strongly hope that this will not be the case, and that, on the contrary, it will further raise the awareness of cyclists, cycling teams, and the public regarding doping. Even without big names and dominating champions, cycling can be a very interesting sport to watch - it is not just some men sitting on bicycles and struggling like hell.
And it may also help to bring more into the limelight what for me has always been the star of the Tour: La France Profonde, the wonderful landscapes of rural France. Tomorrow's stage will cross Le Midi, from Béziers to Montélimar. It is a region where you can spend a marvelous summer holiday. But even if you cannot travel there just now: if you have a chance to follow tomorrow's stage on TV, you can both watch the race and marvel at the beautiful landscapes, villages and towns of Southern France traversed by the cyclists. I will do so.
Thursday, July 13, 2006
Mirko caused some harsh criticism, so let me briefly state my opinion about the complicated ménage à trois between the scientist, the journalist and the public.
It is certainly not an easy task to communicate science to the broad public, but one that I consider very important - and neglected. To reach people it is necessary to omit details and technical terms. However, I would be very depressed if the average person on the street were immediately able to understand every detail of what I have been working on for several years. And I admit that this is reflected in my language. Ideally, the science journalist should be able to translate this incomprehensible physics speech into everyday language. To do so, matters must inevitably be simplified and details must be dropped.
I think there is no doubt that this is necessary.
However, the question is: how much is necessary? IMO, compromises have to be made on both sides. The journalist wants a few, possibly catchy, sentences that excite the reader. The scientist wants details mentioned whose importance might not be immediately obvious. This conflict is not solved by retreating to fantastic stories and science fiction, spiced up with deliberately misquoted statements. Science is not just a catchy sentence.
E.g. the article in 'World of Wonders' stated that tiny black holes would be created from colliding protons. This is actually incorrect, as the black holes would be made in parton-parton, not proton-proton collisions. I could now go on and explain to you why the difference is enormously important, but I'd agree that this is a point that might be dropped for the sake of better readability.
However, it is a different matter to write about the mini black holes without even mentioning that they undergo Hawking radiation. To stick with the above example, the mentioned article begins by describing macroscopic black holes as monsters of the universe (All-Monster) and murder-holes (Mörderloch), features sentences like the smallest mistake can erase all life on earth (der kleinste Fehler [kann] alles Leben auf der Erde auslöschen), then quotes Sabine Hossenfelder, and goes on by painting a catastrophe: a black wall would swallow the earth almost with the speed of light (Eine schwarze Wand würde die Erde fast mit Lichtgeschwindigkeit verschlingen). The only remotely reasonable statement is a quotation by Bernard Carr, which however also does not mention the evaporation. The article succeeds in completely ignoring the fact that the high temperature of the tiny black holes is THE important difference from the large 'monster holes'.
This is definitely a 'detail' that should not have been dropped.
Unfortunately, it is my impression that in science journalism the weight currently lies strongly on the over-simplified side, and tends towards crude sensationalism. Obviously, the marketing strategy is to reach as many people as possible with a provocative headline and entertaining or scary news (Killer Bees Attacking! New Ice Age Is Coming! Beast Volcano!). Just get people to buy the magazine or to watch the show. Quality inevitably suffers from this.
However, when working as a journalist - so I imagine - one is forced to play this game to a certain degree. And I understand that Mirko has to fulfill the demands of his employer. Writing articles that won't get printed at all doesn't help either. We all have to live.
So, I can't put the blame on him, but end up - as usual - blaming 'society', 'the system' or 'modern times'. It might be more satisfying to call the individual journalist an arrogant idiot, but that too is an oversimplification which omits many details.
Most of the time, I should say, science journalists do a very good job in a complicated field. It probably requires significant diplomatic skills to deal with sometimes hot-blooded scientists. And it requires a lot of verbal practice to make a story out of some equations scribbled on a notepad. As Wolfgang correctly stated, we don't produce fascinating results on a daily basis. But then, when written the right way, almost everything can become an interesting journey (with the possible exception of knitting patterns) - that is what good journalism does.
An attitude that I have met frequently, and which always upsets me, is that when confronted with physics or maths, many people say something like I never understood that in school, and that's it. To a large extent, such a perception is based simply on being unfamiliar with the subject. And it's this perception of ignorance that is used to argue why the average reader is not capable of understanding how a nuclear reactor works (but we all know they are dangerous), what a black hole is (but we all know they are dangerous) or what a free radical is (but we all know they are dangerous, right?).
Now take an average newspaper with a tax-reform debate. How many people actually understand all the terms and arguments used in this debate? Go ask Mike from 7/11 to explain to you what inflation, purchasing power, neo-liberalism or Keynesianism is. But after having read these words repeatedly, people just think they roughly know what is meant. Maybe they do. But more often, the result is some oversimplified conclusion in the manner of: If I pay more taxes I have less money - based on very insufficient knowledge of the matter and dangerously naive in the long run. One way or the other, editors apparently have no problem printing such messy technical language.
Anyway, what I meant to say was that the average reader can handle many more details and technical terms than he himself might think. It's mostly being used to technical terms that makes us feel comfortable with them. Lack of knowledge does not equal stupidity.
It's a big advantage of online publications that it is possible to let the reader decide how many details he wants to know, by adding some links or references for further reading. Or maybe just by putting the messier explanations on a separate page. This, I admit, is much more complicated in printed media.
Some words about this blog
As you might have noticed in the sidebar, our blog has a new contributor, and I would like to welcome Mirko Herr to this blog. I am looking forward to reading more from him.
I also added an About section, where you can find some details about us.
TAGS: SCIENCE, JOURNALISM, SCIENCE AND SOCIETY
Wednesday, July 12, 2006
Okay, I admit, I am sitting around in my office wasting precious working time, after I eventually finished what I had begun to refer to as The-Eternal-Neutrino-Paper :-)
I can recommend the new album, it has quality. I will have to listen to it a few more times, but it has some nice tracks, doesn't sound too similar to the previous album Absolution, and the booklet is carefully designed, with nice photos. Which one is the guy completely in black? He's kind of cute.
Speaking of guys in black: Andi, you will like Assassin: "War is overdue, The time has come for you, To shoot your leaders down, Join forces [...] Oppose and disagree, Destroy demonocracy " I think I like Map Of The Problematique better. Wish I had a map...
Anyway. Have to get some work done now. What's my next paper about? Oh yes, Angelina Jolie ;-)
Can I believe
When I don't trust
All your theories
Turn to dust
TAGS: MUSIC, CD
Note added Aug. 6th
- Video+Audio: Muse-Starlight
Monday, July 10, 2006
Above my desk there is a postcard with a quotation by Winston Churchill. It says: Never, Never, Never give up. I bought this some years ago after receiving a particularly nonsensical referee report, and it's been moving with me since.
|Until I read Peter Woit's book I did not know this quotation was used by David Gross at the end of his closing talk at the Strings 2003 conference. These are the opening lines of chapter I of 'Not Even Wrong', though in this version the quotation has five Never's.|
My grandma taught me there is a thin line between stubbornness and stupidity. I'd say the intention of Peter Woit's book is to draw the line for the case of String Theory. It remains up to you though, on which side you place yourself. Maybe the two Never's make all the difference.
The book can be divided roughly into two parts. The first part, chapters 1-9, is an introduction to the Standard Model and its problems. The second part, chapters 11-18, is a survey of the achievements of String Theory, or rather the absence thereof. Chapter 10 I don't really know what to do with.
If Lee Smolin hadn't already said it (back flap) I'd have said the book is courageous, because it provides all the necessary criticism that was - and is - omitted in most introductions to the subject. Criticism is always uncomfortable, for both sides. So I am not entirely comfortable with this review either.
Unfortunately, the whole purpose of the book - to point out the 'Failure of String Theory and the continuing challenge to unify the laws of physics' (subtitle) - makes the second part of the book a rather depressing read.
The first half is an introduction into the main concepts:
A short history of experimental particle physics (2), quantum mechanics (3), quantum field theory (4), gauge symmetries (5) and the Standard Model (6 + 7). Then, problems of the Standard Model (8) are discussed, after which follows a chapter about the need to go beyond the Standard Model, and about some attempts to do so (9).
So far, the book could have been an average popular science book, but imo not an especially well written one. Even though I personally like the briefness of the introduction (having read about one thousand popular introductions to quantum mechanics), there are definitely better ways to do it. I would e.g. recommend Lisa Randall's 'Warped Passages', which is indeed very readable, and also entertaining. If you like it brief, try Lee Smolin's 'Three Roads to Quantum Gravity'; if you like it fast and furious, try Joao Magueijo's 'Faster Than the Speed of Light'. Unfortunately (since it defies the intention of Peter's book) I'd also say Brian Greene's 'Elegant Universe' is a much more elegant introduction to the basic concepts.
Furthermore, I myself do appreciate the use of technical terms and the mention of mathematical abstractions, because an interested reader will find it much easier to build on them. In this regard, the references given at the end of each chapter are also very useful. I appreciate this because I had to read an enormous amount of pop science books in high school before I found the relevant words 'tensor calculus' and 'differential geometry'.
On the other hand, this means these introductions will be very hard to read without at least a basic knowledge of first-semester physics. Is it really necessary for an introduction to quantum mechanics to elaborate on the relations between 'a very specific representation of the group U(1), the representation as transformations of the complex plane' and Fourier analysis (p. 48)?
Chapter 10 about 'New Insights in Quantum Field Theory and Mathematics' then provides you with detailed explanations on topics like
'The Wess-Zumino-Witten two-dimensional quantum field theory turns out to be closely related to the representation theory of Kac-Moody groups. [...] The Hilbert space of the Wess-Zumino-Witten model is a representation not only of the Kac-Moody group but of the group of conformal transformations (actually, this is a serious over-simplification [...])'
'Analytic fields could be classified by an integer, the so-called degree [...] The number of such fields of degree one was known since the nineteenth century to be 2875, and the number for degree two had been calculated to be 609,250 [...] The physicist's mirror space method predicted that there were 317,206,375 analytic fields of degree three [...].'
I guess you really have to be R. Penrose to call this 'compulsive reading' (front flap).
However, if you made it through chapter 10, the book gets better in the second part. The next chapters summarize points of criticism of String Theory/Supersymmetry. None of them was really new or surprising to me, but it is good to have them written down as clearly as Peter Woit does.
Having given up expecting a popular science book, I'd have wished here for more technical details.
The later chapters then seem to me like a collection of essays that are rather vaguely connected to each other. E.g. there is an elaboration on the alleged beauty and elegance of String Theory (13), the religious aspects of the string community (14), and a chapter on the Bogdanov affair (15). I found myself explaining to Stefan that I think the point of the latter chapter was to give an example of how difficult it has become to sort out the crap in the field, and how peer review fails, and not that Peter tried to publish how he was misquoted by the brothers.
Then there are some well-meant but unfocused attempts to analyze the problems in the community (16), which annoyingly are not very constructive. However, in large parts I share Peter Woit's view:
'This huge degree of complexity at the heart of current research into superstring theory means that there are many problems for researchers to investigate, but it also means that a huge investment in time and effort is required to master the subject well enough to begin such research. [...] '
'Besides raising a huge barrier of entry to the subject, the difficulty of superstring theory also makes it hard for researchers to leave. By the time they achieve some real expertise they typically have invested a huge part of their career in studying superstrings [...]'
Then there is the unavoidable landscape issue in chapter 17, which I refuse to comment on, and a chapter on 'Other Points of View' (18), which also mentions LQG. This disappointingly short chapter is somewhat counterproductive to the claim that there are alternatives to superstring theory that should be pursued with more effort than is currently invested.
An overall remark is that I find it quite interesting how Peter Woit gives a view on things from the mathematical side. E.g. I was not aware of most of the cross-relations mentioned in chapter 10, and some of them I will surely look into more closely. I also used to say the M in M-theory stands for maths. Reading the book made me realize that this is probably not a good interpretation either.
[...] it's very clear to me how my mathematician colleagues would answer the question of whether superstring theory is mathematics. They would uniformly say 'Certainly not!'
Maybe I have been living in the US for too long, but I could not avoid asking myself which target group this book was written for. I definitely would not recommend the book to my mum (even though she's a maths high school teacher), or my younger brother (who has an MS). Moreover, for the 'interested layman', the discussion of the details of String Theory will be rather boring. I really hate to point it out, but even though Peter's or Lubos' blog might leave you with that impression, the world does not revolve around the question whether the KKLT mechanism can be considered elegant or not. And when you step back and out of the line of your peer group, the landscape discussion might rank somewhere close to the question whether aliens encoded a message in our DNA.
However, there is an audience this book is definitely addressed to. If you are a student who has just begun learning String Theory, or are considering going into the field, you should without doubt read this book. I'd go so far as to recommend reading it alongside any String Theory lecture. It will give you a much better basis to judge for yourself whether you want to enter the field.
(p. 237) Quote from Michael Atiyah:
'In the United States I observe a trend toward early specialization driven by economic considerations. You must show early promise to get good letters of recommendations to get good first jobs. You can't afford to branch out until you have established yourself and have a secure position. The realities of life force a narrowness in perspective [...] I am distressed by the coercive effect of today's job market.'
Since there are people who buy books by the cover: it features 'an artistically enhanced picture of particle tracks in the Big European Bubble Chamber'. That could have been nice, but the overlay with the title on the front flap is very sloppily done, and it just looks cheap. However, it will look nice on a bookshelf.
To summarize, I'd say Peter Woit's book 'Not Even Wrong' is not an entertaining and easy-to-read popular science book. Neither does it provide sufficient details to be a technical introduction to the problems of, and alternatives to, String Theory. I would really be interested to read a more technical version of the second part, and I genuinely hope Peter Woit finds the time (and the publisher) to do so. In his book, he carefully summarizes problems of Superstring Theory and points out weaknesses of the current research programmes. The book will tell you all the things string theorists know but don't talk about in public.
If this were an amazon review, I'd give three stars. However, I am very suspicious of amazon reviews: those who take the time to write reviews are usually either completely upset or totally excited, whereas the broad middle range is often missing.
Sunday, July 09, 2006
Some weeks ago I wrote how I was upset about an article in World of Wonders, written by the science journalist Mirko Herr. For one, I did not particularly like the article. Titled 'The World's Most Dangerous Experiment', its scientific content was vanishingly small, sensations were sold on shaky ground, and the illustrations would have been better used for a sci-fi movie (e.g. a 'black hole' that looked suspiciously like a solar eclipse in a pair of open hands).
But, living in the US, that's something I got used to.
What did upset me about this particular article was that I was being quoted in a context that made my words appear with exactly the opposite intention from what I had. It's like taking a sip from your Starbucks coffee (same as always) and then noticing there is plenty of caramel syrup in it (Yuck).
Okay, maybe I was just upset because I actually found the journalist was a nice guy. The nice guy wrote me an email some days later, and since then we have been in contact. He apologized for quoting me in a misleading way, and we had an interesting discussion about the problems of communicating science to the broad public.
I do not share his opinion on all points, but that's what makes the world interesting. I asked him to write a brief contribution to my blog, which you find below. It seems we also have a different opinion about what 'brief' means.
Paragraphs and bold-faces are mine.
By Mirko Herr
First of all I want to express my gratitude towards Sabine for inviting me to write this piece about science journalism and its pitfalls. Recently Sabine criticized my work, and I have to admit that this was not without reason. Scientists, journalists and the media do form a rather complicated ménage à trois. But then, all such relationships are difficult. On the other hand, they can be genuinely exciting and of the greatest importance. If only all the partners involved knew how to deal with the situation. Time and again, I have asked myself a number of questions, concerning the role of science journalism, the mistakes made by journalists and scientists, and my hopes for a bright future of our ménage. The good thing about doing this online is that you can always click on to some cartoon website once you get bored. Feel free to do so.
Unlocking the Ivory Tower - Why we need Science Journalism
Cloning, stem cell research, the Human Genome Project, global warming - these are some of the science stories that made front-page headlines all over the world in recent years. The public understands that science is a force to be reckoned with, a force that will shape the future of all of us. There are also fields of science that do not, or never will, affect our everyday life, and yet they are of great interest to the public because they provide answers to philosophical questions. I am talking about fields like cosmology or evolution. So there is a desire for all kinds of scientific news - why not let the scientists quench it? They are the experts, after all. And there are a whole lot of scientists who try to do just that. They write books for a wider public, they publish articles in Scientific American, some even host TV shows. Nonetheless, they are only a minority. Most scientists just do what they are best at: research. And they come up with fantastic results, almost on a daily basis.
Just too bad that nobody takes any notice, because the average person does not read Science or Nature. They do not even know that such magazines exist. The average person, that is my mom. She has always been a housewife in Germany; her education in the early 60's never went beyond 8th grade. Though she sometimes wonders how the universe came into being or whether it would be possible to clone our family cat, she would never read a book by Stephen Hawking or a copy of the German Scientific American. People like my mom make up the vast, vast majority of our societies, and to reach them, scientists need the help of mass media. Enter the journalist. I like to compare my trade to that of an interpreter. It is our job to explain science on such a basic level that my mom will understand it. And even more, she has to enjoy reading or hearing about it. In the best case, she must feel entertained and enlightened while reading a piece of science journalism.
And I believe that a journalist, who has to be a generalist, is more likely to achieve this goal than the scientist, who has to be a specialist by nature. Grasping the essential point of a study and explaining it in clear sentences without using any technical terms, that is the daily bread of every science journalist. That is our expertise. And I think it is an absolute necessity to keep the whole public informed about what is going on in science, and not just the few who read popular science books; first of all because science is a wonderful undertaking of our global society and it is terribly exciting, secondly because science touches deep moral questions like human cloning and everybody has to have a chance to come up with an informed opinion, thirdly because virtually all members of society finance science through taxes and should get some learning as a dividend.
When Proteins meet Protons – Where journalists fail
I believe that a good science journalist has to be a generalist. Well, one cannot know or read everything. But while the scientist becomes more and more of a specialist, the journalist can keep an eye on a much wider field, seeing connections that a specialist might miss, asking questions that a specialist wouldn't ask. And once he has gathered all the information he needs, through extensive research, interviews and travels to the most important labs, he may sit down for a couple of weeks and write his wonderfully balanced, almost literary article. After that, he or she may enjoy a cup of tea with the mad hatter and the white rabbit.
The truth is, most journalists work with maddening deadlines and scarce resources, and have hardly any idea what science is all about. The vast majority of my colleagues have studied languages and other humanities. They have the average scientific knowledge of a person with the average higher-education background. And that knowledge is scarily scarce. Without the help of wikipedia, they will have trouble explaining the difference between a proton and a protein or a neutron and a neuron. I would even go so far as to say that about half of my colleagues do not speak enough English to do an interview in that language or read a text with a deeper understanding. Now, that is the average journalist. A journalist who tries to specialize in science ought to know a bit more about that field. But in about 70 per cent of the cases when you have a journalist at the other end of the telephone line, it will not be a real science journalist. Most of the time, it is somebody working on an article remotely related to a scientific issue who is just looking for an expert to harvest two or three decent quotes.
If lack of knowledge is one of our failures, sensationalism is the other. 30 years ago, in an age without cable TV, the Internet and a myriad of magazines, every article in a magazine, every piece of footage on TV was like a candle in the dark. Its mere existence attracted consumers. Today, a simple newspaper article is still like a candle, only one that is burning in the middle of Times Square. If one wants to be heard in today's tempest of information, one has to be unique. Some media outlets achieve that by being absolutely impeccable in their reporting. Others achieve it just by screaming out loud. This leads to reporting that stresses the most sensational aspects - reporting that is very unlike the scientific process of carefully drawn conclusions stated in the most technical language. Very often, such reporting becomes too simplistic and absolutely not to the liking of the scientist. I do not think that such sensationalism serves the reader or other consumers. Unfortunately, media outlets with a sensationalistic tendency are very successful. And in the media business of today, decisions are not made solely by journalists; managers have a great deal of influence, too. And they, by nature, have to focus on money. With the tempest of information still growing in strength, sensationalism will not go away, it will only get worse.
Word vs. term – Where scientists could improve
A few days ago, I read a press release with the following headline: "Long-lived magnetic fluctuations in a crystal". It consisted of sentences like: "MnF2, the material studied by the researchers, is an antiferromagnet. In this ionic material, each Mn2+ ion carries a net spin oriented in the opposite direction from that in which its neighbors point." Who of you wants to know more? (Well, you can, following this link). My mom wouldn't. She wouldn't even understand the headline. And it is the headline that catches 90 per cent of the readers. You cannot possibly overestimate the importance of the headline. And "Long-lived magnetic fluctuations in a crystal" is an absolute bummer. Now, this was not the original publication of the study, which appeared in Science. It was a press release by the Max Planck Institute for Solid State Physics. I admit, a press release is not written for the general public, but it is written for a wider public. Most of the readers of a press release are journalists, but most of them would just ignore a press release like the one cited above. I am among them. Even though I grasp the meaning of this study, it is way too far out for a popular science magazine. And even the best science journalist would have to do a complete translation of such a press release.
Most of the time, it is pretty much the same when you talk to a scientist. That is a problem that most specialists have: they find it hard to distance themselves from their technical terms. There is nothing wrong with technical terms as long as you use them while talking to fellow specialists. They are absolutely necessary to clarify details. The general public just doesn't care for details, and therefore the journalist has to explain things using the simplest words. We would be quite happy if scientists would meet us somewhere in the middle between lab speak and everyday language. We love the scientist who can explain his work using plain words. When Stephen Hawking wrote 'A Brief History of Time', somebody told him that every mathematical formula would cut his readership in half. The same is true of every chemical formula or every word that makes one long for a dictionary. I wish scientists would more often think of my mom when they talk to me, or maybe of their own grandma. If I need more details, more specific information, I will ask for it. And there is one sentence I get to hear in almost every interview I do, a sentence I do not particularly like: "Matters are more complicated, you can not put it that simply." I know scientists have to worry about the criticism they might get from colleagues. I know that you want to see coverage of all the details. And that is OK, but please give me a few decent, understandable quotes that I can use. And believe me, most of the time matters are not really that complicated.
This shall be enough; I don't want to fill all of Sabine's blog. Let me finally just say this: I know I have made some gross generalizations in this text. There are scientists with a wonderful talent for getting their message across to the public, women and men who inspire the thoughts of millions. And there are wonderful science journalists, women and men with a deep understanding of the subjects they are writing about, with a great talent for language and simple but sound explanations. I am striving to become one of them, yet there is still a long way to go. I hope that along the way I will learn a lot more exciting science and get in touch with many more exciting scientists. I would be glad to learn your opinion on these matters. Feel free to send me an e-mail, my address is mirkoherr(at)web(dot)de
Saturday, July 08, 2006
The following week, Horst stomped into my office with the article in Scientific American (The Universe's Unseen Dimensions, August 2000), just to find it already lying on my desk. I am not sure who put it there, it wasn't me, but apparently there was no way around reading the papers by Arkani-Hamed & Co.
I can't say I liked what I read. I liked the original Kaluza-Klein idea, but these extra dimensions had little, if anything, to do with it. Anyway, I was completely stuck with my work (work on something I found out years later had already been done in the 80s) and thought I could give the 'modern' extra dimensions a try.
We kept telling ourselves the topic would vanish soon, and that we shouldn't spend too much time on it. Instead, the idea of phenomenologically accessible extra dimensions has flourished since (in an almost scary way), and the parameters of the models are by now included in the Particle Data Group's search for physics beyond the Standard Model.
Here, I would like to briefly introduce the main concepts, together with some references, to give you an impression of what I have been working on.
For a very readable introduction to the subject for non-experts, I can recommend Lisa Randall's book 'Warped Passages'.
1. Why Extra Dimensions?
2. Models With Extra Dimensions
3. Observables of Extra Dimensions
4. Further Reading
1. Why Extra Dimensions?
My motivation to study models with extra dimensions is simple. As long as I don't know of any good reason why we live in 3+1 dimensions, the question whether our spacetime has additional dimensions is definitely worth the effort of examination. This means one has to figure out how the assumption of additional extra dimensions can be included in our current theory, the Standard Model (SM), in such a way that it is compatible with our present-day observations, and then ask what observable consequences this yields.
However, we first have to explain why we don't see any of the extra dimensions in our daily life, since we rarely witness things vanishing into the 5th dimension. The most common way to do this is to assume that the extra dimensions are compactified on a small radius (ADD and UXD models). Another way is to give the extra dimensions a strong curvature, which basically makes it hard to escape into them (RS models).
In the ADD and RS models, we - or the particles of the SM, respectively - are bound to a 3-dimensional submanifold. This submanifold is often referred to as 'our brane', whereas the whole higher-dimensional spacetime is called 'the bulk'.
The setup of these brane-world models is motivated by string theory, and whenever you post a paper and forget to cite Antoniadis '90, I can picture him jumping up and down in his office, tearing out his last few hairs - before he writes you a polite email demanding to be cited appropriately. Which I have hereby done.
The attractive feature of models with extra dimensions is that they provide us with a useful description to predict observable effects beyond the SM. They by no means claim to be a theory of first principles or a candidate for a grand unification! Instead, their simplified framework allows the derivation of testable results which can in turn help us gain insights about the underlying theory.
On the other hand, this means that theories with extra dimensions are not consistent on their own. E.g. they don't explain, without invoking further mechanisms, why certain particles are bound to the brane, or how the extra dimensions are stabilized.
2. Models with Extra Dimensions
There are different ways to build a model with an extra dimensional space-time. The most common ones are:
2. a) Large Extra Dimensions
The ADD-model proposed by Arkani-Hamed, Dimopoulos and Dvali in '98 adds d extra spacelike dimensions without curvature, in general each of them compactified to the same radius. All SM particles are confined to our brane, while gravitons are allowed to propagate freely in the bulk.
- The Hierarchy Problem and New Dimensions at a Millimeter
- New Dimensions at a Millimeter to a Fermi and Superstrings at a TeV
- Phenomenology, Astrophysics and Cosmology of Theories with Sub-Millimeter Dimensions and TeV Scale Quantum Gravity
The higher-dimensional theory comes with a higher-dimensional Planck scale Mf, which can be as low as a TeV. The large observed value of our Planck scale is then caused by the presence of the extra dimensions: in contrast to all the other interactions, gravity dilutes into the extra dimensions. Thereby, the gravitational potential falls off faster at distances smaller than the radius of the extra dimensions. At larger distances, however, the usual behaviour is recovered, but with an already weakened strength. This is schematically illustrated in the figure below.
These models thus explain why gravity is so much weaker than the other interactions (or at least reformulate it in a geometrical language).
This in turn means that at smaller distances, gravity is much stronger than what we expect from the extrapolation of the 3-dimensional force law. The potential will run with a different power law in the radial distance r, falling off as 1/r^(d+1), where d is the number of extra dimensions.
These extra dimensions are called 'large' because the radius is much larger than the inverse of the new fundamental scale. It turns out that the model with one extra dimension is incompatible with observation (the extra dimension would have to be about the size of the solar system). For d=2, the radius of the extra dimensions can be as large as 1/10 mm. The larger the number of extra dimensions, the smaller the radius.
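These orders of magnitude follow from the relation between the observed Planck scale, the new fundamental scale and the compactification radius. A minimal numerical sketch, assuming the common convention M_Pl^2 ~ Mf^(d+2) R^d with all factors of 2 pi dropped, so the results are order-of-magnitude estimates only:

```python
# Rough estimate of the compactification radius R in the ADD model,
# assuming (this is a convention, factors of 2*pi are dropped):
#     M_Pl^2 ~ Mf^(d+2) * R^d
# so the numbers below are order-of-magnitude only.

HBARC = 1.9733e-16   # GeV * m, converts inverse GeV to meters
M_PL  = 1.22e19      # GeV, (non-reduced) Planck mass
M_F   = 1.0e3        # GeV, assumed new fundamental scale ~ 1 TeV

def add_radius(d, m_f=M_F):
    """Radius of d equal extra dimensions, in meters."""
    r_inv_gev = (M_PL**2 / m_f**(d + 2)) ** (1.0 / d)  # R in units of 1/GeV
    return r_inv_gev * HBARC

for d in (1, 2, 3):
    print(f"d={d}: R ~ {add_radius(d):.1e} m")
```

For d=1 this gives a radius of order 10^13 m, i.e. solar-system size, and for d=2 a radius in the millimeter range, consistent with the statements above (the precise bound depends on the dropped numerical factors).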
b) Universal Extra Dimensions
Within the model of universal extra dimensions, all particles (or in some extensions, only gauge fields) can propagate in the whole higher-dimensional spacetime. These extra dimensions typically have radii of ~ 10^-18 m and are compactified on an orbifold to reproduce the SM gauge degrees of freedom. These models come closest to the original idea of Kaluza and Klein.
- Bounds on Universal Extra Dimensions
- Collider Implications of Universal Extra Dimensions
- Probes Of Universal Extra Dimensions at Colliders
It is worth noting that, unlike in the ADD model, no location along the extra dimension is exceptional, and thus translational invariance holds. This means that the momentum in the direction of the extra dimensions is conserved.
c) Warped Extra Dimensions
The setting of the model from Randall and Sundrum is a 5-dimensional spacetime with a non-factorizable, so-called 'warped' geometry. Roughly speaking, when you go in the direction of the extra dimension, all your scales will be stretched by a factor depending on the distance to our brane. The solution for the metric is found by analyzing Einstein's field equations with a constant energy density on our brane, where the SM particles live. In the type I model the extra dimension is compactified; in the type II model it is infinite. The resulting metric is an AdS space, which makes the model particularly interesting.
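The warp factor can be made explicit. In the usual notation (sign and compactification conventions vary between papers, so take this as a representative form), the Randall-Sundrum line element reads

```latex
ds^2 = e^{-2k|y|}\,\eta_{\mu\nu}\,dx^\mu dx^\nu + dy^2 ,
```

where y is the coordinate of the fifth dimension, eta is the flat metric on our brane at y=0, and k is the curvature scale set by the bulk cosmological constant. The exponential prefactor is the 'stretching factor' mentioned above, and it is also what makes the bulk a slice of AdS space.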
d) Split Fermions
The split fermion model is not exactly a model of its own, but it serves as a quick fix for some problems that arise within models with a lowered fundamental scale. Namely, contributions that e.g. cause the proton to decay are usually suppressed by the large value of the Planck scale. If the Planck scale is lowered, they can become quite troublesome and would allow the proton to decay rather fast. Since - luckily - the proton seems to be very long-lived, it remains to explain why these processes do not occur.
In the split fermion model, the wave-functions that correspond to the particles of the Standard Model are localized around different positions along the extra dimensions.
To compute the effective coupling between these particles, and thus the strength of the above-mentioned decay modes, one has to integrate the product of these wave-functions over the extra dimension. This overlap can be tiny, even for small separations. This is not only useful to suppress higher-dimensional operators (also flavor-changing ones), but can also be used to explain the very different masses of the fermions.
- Hierarchies without Symmetries from Extra Dimensions
- Yukawa Hierarchies from Split Fermions in Extra Dimensions
- Split Fermions in Extra Dimensions and Exponentially Small Cross-Sections at Future Colliders
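To illustrate how effective this suppression is, here is a toy calculation. It assumes, purely for illustration, Gaussian profiles of equal width sigma in the extra dimension; the overlap of two such unit-normalized profiles separated by a distance a then scales as exp(-a^2/(4 sigma^2)):

```python
# Toy illustration of the split-fermion suppression: the effective coupling
# between two fields is the overlap of their profiles in the extra dimension.
# Assuming Gaussian profiles of width sigma (an illustrative choice, not the
# model's actual wave-functions), the relative overlap of two profiles a
# apart is exp(-a^2 / (4 sigma^2)) -- tiny even for modest separations.

import math

def gaussian_overlap(a, sigma):
    """Overlap of two Gaussian profiles separated by a, relative to a=0."""
    return math.exp(-a**2 / (4.0 * sigma**2))

for a in (0.0, 5.0, 10.0):
    print(f"separation {a:4.1f} (in units of sigma): overlap ~ {gaussian_overlap(a, 1.0):.2e}")
```

A separation of only ten profile widths already suppresses the coupling by more than ten orders of magnitude, which is the point made above about proton decay.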
3. Observables of Extra Dimensions
The above mentioned models lead to a vast number of observable predictions, for high energy physics, high precision measurements and astrophysics. The current constraints on the parameters of the models can be found in the Particle Data Book.
a) Newton's Law
The most obvious experimental test for the existence of extra dimensions is a measurement of the Newtonian potential at sub-mm distances, since we have seen above that large extra dimensions predict a different power law. Cavendish-like experiments, which search for deviations from the 1/r potential, have been performed over the last years with increasing precision and currently require the extra dimensions to have radii not larger than ~ 0.045 mm (which disfavors the case of two extra dimensions).
- Sub-millimeter Tests of the Gravitational Inverse-square Law
- New Experimental Limits on Macroscopic Forces Below 100 Microns
- Measuring Gravity on Small Length Scales
- Upper limits to submillimetre-range forces from extra space-time dimensions
- Short-Range Searches for Non-Newtonian Gravity
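The change in the power law can be sketched in a few lines. All numerical prefactors of order one are ignored here, and the radius R = 0.045 mm is just the experimental bound quoted above, taken as an illustrative value:

```python
# Sketch of the Newtonian potential in a model with n large extra
# dimensions, ignoring all numerical prefactors of order one. The
# radius R is set to the current experimental bound of 0.045 mm;
# distances in mm, the potential in arbitrary units.

def potential(r, R=0.045, n=2):
    """Magnitude of the gravitational potential, per unit G*m1*m2."""
    if r >= R:
        return 1.0 / r               # usual 1/r law far outside the radius
    return R**n / r**(n + 1)         # steeper power law inside, matched at r = R

# Below R the potential falls off much faster than 1/r, which is the
# deviation the Cavendish-like experiments are looking for:
for r in (0.0045, 0.045, 0.45):
    print(r, potential(r))
```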
b) Kaluza-Klein Excitations
Periodic boundary conditions, as caused by compactification, lead to quantized momenta in the direction of the extra dimensions. This means the momentum in the extra dimension can only come in discrete steps; the step-size is the inverse of the radius. A particle with non-zero momentum in the extra dimension will appear to have less momentum left for the usual dimensions. It will thus behave on our brane as if it had an additional mass.
A particle that is allowed to enter the extra dimensions will therefore come with a whole 'tower' of momenta that on our brane appear like copies of the same particle with different masses. These so-called KK-excitations of the particles can in principle be produced in scattering experiments, if the energy is high enough to provide enough momentum.
- TeV Strings and Collider Probes of Large Extra Dimensions
- Particle Physics Probes Of Extra Spacetime Dimensions
- Probes Of Universal Extra Dimensions at Colliders
- On Kaluza-Klein States from Large Extra Dimensions
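The resulting mass spectrum is easy to sketch. The level spacing 1/R of 1 TeV below is an assumed value for illustration, not a prediction of any particular model:

```python
import math

# Sketch of a Kaluza-Klein tower for one compact extra dimension of
# radius R, in natural units (hbar = c = 1). The quantized momentum
# n/R in the extra dimension appears on the brane as extra mass.

def kk_mass(n, m0, R):
    """Apparent four-dimensional mass of the n-th KK excitation."""
    return math.sqrt(m0**2 + (n / R) ** 2)

# Example with an assumed level spacing 1/R of 1 TeV and a massless
# ground state: the tower of apparent masses sits at 0, 1, 2, 3, ... TeV.
R = 1.0  # radius in TeV^-1 (an assumed value for illustration)
tower = [kk_mass(n, m0=0.0, R=R) for n in range(4)]
print(tower)
```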
c) Real and Virtual Graviton Production
In the ADD-model the graviton will have a tower of KK-excitations, and since the radii of the dimensions are large, the mass spacing will be very small. It takes a whole lot of these flimsy gravitons to add up to an observable contribution. Typically, these contributions become comparable to SM-processes if the total energy of a collision reaches the new fundamental scale.
Real graviton production would lead to an apparent loss of energy, since the graviton does not lead to a signal in the detector. Also, virtual exchange of gravitons can take place, which modifies predictions for processes made within the SM.
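A back-of-the-envelope sketch of this adding-up, using the ADD relation between the Planck scale, the fundamental scale and the radius, with all factors of order one dropped:

```python
# Back-of-the-envelope sketch: each KK graviton couples with strength
# ~ 1/M_Pl^2, but the number of modes lighter than the collision
# energy E grows like (E*R)^n. With the ADD relation
# M_Pl^2 ~ Mf^(n+2) * R^n, the summed strength is set by Mf alone.
# All O(1) factors dropped; energies in TeV, natural units.

M_Pl = 1.22e16    # Planck mass in TeV
Mf = 1.0          # assumed fundamental scale of 1 TeV
n = 2             # number of large extra dimensions

R_n = M_Pl**2 / Mf**(n + 2)        # R^n from the ADD relation

def summed_strength(E):
    """(number of KK modes below E) x (single-mode strength 1/M_Pl^2)."""
    return (E**n * R_n) / M_Pl**2  # simplifies to E^n / Mf^(n+2)

# The enormous mode number cancels the Planck suppression, so the
# summed contribution becomes of order one once E reaches Mf:
print(summed_strength(0.1), summed_strength(1.0))
```

This is just the statement from above in numbers: the individual gravitons are hopelessly weakly coupled, but their sheer number makes the total contribution comparable to SM processes at E ~ Mf.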
d) Black Hole Production
As we have seen, in the ADD-model gravity at distances significantly smaller than the radius of the extra dimensions is much stronger than in the usual three-dimensional scenario. The horizon of a black hole is the surface at which photons can no longer escape the gravitational pull. In the presence of extra dimensions, this happens at a much larger distance, so black holes can be produced more easily. The density needed to cause gravitational collapse can then be reached at future colliders.
Whenever two colliding particles with sufficiently high energy come closer together than the horizon radius corresponding to their total energy, the system will collapse and form a black hole. One can estimate the number of black holes that would be produced at the LHC: for Mf ~ 1 TeV one finds about one black hole per second.
These black holes would not be stable. Due to quantum effects, they undergo Hawking evaporation at a very high temperature (~ 300 GeV, corresponding to ~ 10^15 K) and decay before they reach the detector. They will, however, give a very distinct signature.
- Black Holes in Theories with Large Extra Dimensions: a Review
- What Black Holes Can Teach Us
- Black hole and brane production in TeV gravity: A review
- Black Holes at Future Colliders and Beyond - a Review
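The growth of the horizon radius with collision energy can be sketched with the scaling r_h ~ (1/Mf)(M/Mf)^(1/(n+1)) for n extra dimensions; all geometric factors of order one are again dropped, and Mf = 1 TeV is an assumed value:

```python
# Order-of-magnitude sketch of the higher-dimensional horizon radius,
# using the scaling r_h ~ (1/Mf) * (M/Mf)^(1/(n+1)) for n extra
# dimensions; all geometric O(1) factors dropped. Energies in TeV,
# radii in 1/TeV, natural units.

def horizon_radius(M, Mf=1.0, n=2):
    """Approximate horizon radius for a collision of total energy M."""
    return (1.0 / Mf) * (M / Mf) ** (1.0 / (n + 1))

# The horizon, and with it the production cross-section ~ pi*r_h^2,
# grows with the collision energy once M exceeds Mf:
for M in (1.0, 5.0, 14.0):
    print(M, horizon_radius(M))
```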
4. Further Reading
a) Reviews and Lectures
- Cargese Lectures on Extra Dimensions
Author: R. Rattazzi
- TASI 2004 Lectures on the Phenomenology of Extra Dimensions
Author: Graham D. Kribs
- An Introduction to Extra Dimensions
Author: Abdel Pérez-Lorenzana
- TASI Lectures on Extra Dimensions and Branes
Author: Csaba Csaki
- Physics of Extra Dimensions
Author: Rula Tabbash
b) Brief Intros
- Introduction to extra dimensions
Author: M. Quiros
- Gravity and Large Extra Dimensions
Authors: V H Satheesh Kumar, P K Suresh
- Review on Extra Dimensions from the Particle Data Booklet
Authors: G.F. Giudice and J.D. Wells
- Greg Landsberg: Searching for Extra Dimensions
- John Terning: Extra Dimensions
- Symmetry Magazine: The Search For Extra Dimensions
- Physical Review Focus: In Search of Hidden Dimensions
- Spacedaily: In Search Of Extra Dimensions
- Physicsweb: The search for extra dimensions
- Chicago Chronicle: Chicago physicists believe extra dimensions exist as they search for more clues
- The Official String Theory Web Site: Looking for Extra Dimensions
- The Elegant Universe: Imagining Other Dimensions
- Quasar: New Dimensions at LHC
This list will be updated from time to time. I invite you to send me your links or references.
TAGS: PHYSICS, SCIENCE