Saturday, July 30, 2011

Interna

Lara and Gloria are now 7 months old. During the last month, they have made remarkable progress. Both can now roll over in either direction, and they also move around by pushing and pulling. They have not yet managed to crawl, but since last week they can get on all fours, and I figure it's a matter of days till they put one knee in front of the other and say good-bye to immobility. They still need a little support to sit, but they do better every day.

While the twins didn't pay any attention to each other during the first months, now they don't pay attention to anything else. Gloria takes no note of me if Lara is in the room, and Lara loses all interest in lunch if Gloria laughs next door. The easiest way to stop Gloria from crying is to place her next to her sister. However, if one leaves them unattended they often scratch and hit each other. I too am covered with bruises (but hey, it rattles if I hit mommy's head!), scratches (how do you cut nails on a hand that's always in motion?), and the occasional love bite that Lara produces by furiously sucking on my upper arm (yes, it is very tasty). Gloria is still magically attracted to cables, and Lara has made several attempts to tear down the curtains.

Lara and Gloria are now at German lesson 26: da-da, ch-ch, dei-dei-dei, aga-a-gaga. It is funny that they make all these sounds but haven't yet attempted to use them for communication. They just look at us with big eyes when we speak and remain completely silent. They do, though, seem to understand a few words like Ja, Nein, Gut, Milch (yes, no, good, milk). They also clearly notice if I speak English rather than German.

For the parental reading, this month I've enjoyed Ingrid Wickelgren's article "The Miracle of Birth is that Most of Us Figure Out How to Mother - More or Less." Quoting research that shows having some brain is useful for parenting too, she writes:
"To take care of a baby's needs, mom needs to be able to juggle tasks, to prioritize on the fly, rapidly, repeatedly and without a lot of downtime... Mothering tests your attention span, ability to plan, prioritize, organize and reason as much as does a day at the office."

Well, it somewhat depends on what you used to do in that office of course. But yeah, I suppose some organizational skills come in handy for raising twins. I won't lie to you though, singing children's rhymes isn't quite as intellectually stimulating as going through a new computation with a colleague. But Gloria always laughs when I read her the titles of new papers on the arXiv.

On the downside, the Globe and Mail reported the other day on "Divorce, depression: The ugly side of twins," summing up "the infant treadmill":
"Cry. Breastfeed. Bottle-feed. Burp. Breast pump. Diaper. Swaddle. Ninety minutes of baby maintenance, then 90 minutes of trying to stay on top of sleep and domestic chores, then repeat. And so on."

Oh, wait, they forgot cleaning the bottles, doing the laundry, picking up baby because she's been spitting all over herself, washing baby, changing her clothes, changing bed sheets, putting baby back into bed, putting the bottles into the sterilizer, putting the laundry into the dryer, taking the other baby out of bed because she's been spitting... Indeed, that's pretty much how we spent the first months. But it gets better, and thanks for asking, we're all doing just fine.

You can also disregard all the above words and just watch the below video. And if you think they're cute, don't forget they'll get cuter for two more months, so check back ;-)


PS: Oh, and please excuse the green thing in the video. New software and I haven't yet really figured out how it works.

Thursday, July 28, 2011

Prediction is very difficult

Niels Bohr was a wise man. He once said: "Prediction is very difficult, especially about the future." That is especially true when it comes to predictions about future innovations, or the impact thereof.

In an article in "Bild der Wissenschaft" (online, but in German, here) about the field of so-called future studies, writer Ralf Butscher looked at some predictions made by the Fraunhofer Institute for Systems and Innovation Research (ISI) in 1998. The result is sobering: In most cases, their expert panel didn't even correctly predict the trends of already developed technologies over a time of merely a decade. They did for example predict the human genome would be sequenced by 2008. In reality, it was sequenced already in 2001. They did also predict that by 2007 a GPS-based toll-system for roads would be widely used (in Germany). For all I know no such system is on the horizon. To be fair, they said a few things that were about right, for example that beginning in 2004, flat screens would replace those with cathode-ray tubes. But by and large it seems little more than guesswork.

Don't get me wrong - it's not that I am dismissing future studies per se. It's just that when it comes to predicting innovations, history shows such predictions are mostly entertaining speculations. And then there are the occasional random hits.

I was reminded of this when I read an article by Peter Rowlett on "The unplanned impact of mathematics" in the recent issue of Nature. He introduces the reader to 7 fields of mathematics that, sometimes with centuries of delay, found their use in daily life. It is too bad the article is access restricted, so let me briefly tell you what the 7 examples are. 1) The quaternions, which are today used in algorithms for 3-d rotations in robotics and computer vision. 2) Riemannian geometry, today widely used in physics and in plenty of applications that deal with curved surfaces. 3) The mathematics of sphere packing, used for data packing and transmission. 4) Parrondo's paradox, used for example to model disease spreading. 5) Bernoulli's law of large numbers (or probability theory more broadly) and its use by insurance companies to reduce risk. 6) Topology, long thought to have no applications in the real world, and its late blooming in DNA knotting and the detection of holes in mobile phone network coverage. (Note to reader: I don't know how this works. Note to self: Interesting, look this up.) 7) The Fourier transform. There would be little electrodynamics and quantum mechanics without it. Applications are everywhere.

Rowlett has a call on his website, asking for more examples.

The same issue of Nature also has a commentary by Daniel Sarewitz on the NSF Criterion 2 and its update, according to which all proposals should provide a description of how they will advance national goals, for example economic competitiveness and national security. Sarewitz makes it brilliantly clear how absurd such a requirement is for many branches of research:
"To convincingly access how a particular research project might contribute to national goals could be more difficult than the proposed project itself."

And, worse, the requirement might actually hinder progress:
"Motivating researchers to reflect on their role in society and their claim to public support is a worthy goal. But to do so in the brutal competition for grant money will yield not serious analysis, but hype, cynicism and hypocrisy."
I fully agree with him. As I have argued in various earlier posts, the smartest thing to do is to reduce the pressure on researchers (time pressure, financial pressure, peer pressure, public pressure) and let them take what they believe is the way forward. And yes, many of them will not get anywhere. But there is nobody who can do a better job of directing their efforts than they themselves. The question is just what the best internal evaluation system is. It is puzzling to me, and also insulting, that many people seem to believe scientists are not interested in the well-being of the society they are part of, or are somehow odd people whose values have to be corrected by specific requirements. Truth is, they want to be useful as much as everybody else does. If research efforts are misdirected, it is not a consequence of researchers' wrongheaded ideals, but of those ideals clashing with strategies of survival in academia.

Sunday, July 24, 2011

Blablameter

"Vigorous writing is concise."
~William Strunk, The Elements of Style (1918)


Fun: Die Zeit writes about Bernd Wurm, who studied communication science and got frustrated by the omnipresence of empty words in advertisements and press releases. So he developed a piece of software, the "Blablameter," that checks a text for unnecessary words and awkward grammar that obscures content. The Blablameter ranks text on a scale from 0 to 1: the higher the "Bullshit Index," the more Blabla. You find the Blablameter online at www.blablameter.de; it also works for English input. In the FAQ, Wurm warns that the tool does not check a text for actual content and is not able to judge the validity of arguments; it is merely a rough indicator of writing style. He also explains that scientific text tends to score highly on the Blabla-Index.
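If you wonder what such a ranking could look like under the hood: here is a toy sketch in Python. This is emphatically not Wurm's algorithm (I don't know how his works), just an illustration of the general idea of scoring a text for filler words and bloated phrasing; the word list and weights are made up.

```python
# Toy "blabla" scorer. NOT the actual Blablameter algorithm, just an
# illustration: count filler words and overly long words, normalize by length.
import re

FILLERS = {"basically", "actually", "very", "really", "innovative",
           "synergy", "leverage", "paradigm", "holistic", "solution"}

def blabla_score(text: str) -> float:
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    filler_ratio = sum(w in FILLERS for w in words) / len(words)
    long_word_ratio = sum(len(w) > 12 for w in words) / len(words)
    # squash into [0, 1]; the weights are arbitrary
    return min(1.0, 3 * filler_ratio + 2 * long_word_ratio)

print(blabla_score("We leverage innovative, holistic solutions to "
                   "synergistically empower paradigm transformations."))
```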

Needless to say, I couldn't resist piping some abstracts of papers into the website. Here are the results, starting with the no-nonsense writing:


And yes, I did pipe in some text from this blog. My performance seems to have large fluctuations, but is mostly acceptable.

Did you come across anything with a Blabla-Index smaller than 0.08 or larger than 0.66?

Friday, July 22, 2011

Do cell phones cause tinnitus?

Forget about cancer caused by cell phones, what about that ringing in your ear? About 10-15% of the adult population suffers from chronic tinnitus. I had a case of tinnitus after a back injury. Luckily it vanished after 3 months, but since then I'm very sympathetic to people who go nuts from that endless ringing in their ear. A recent study by a group of researchers from Vienna looked into the correlation between cell phone use and tinnitus. The results are published in their paper Tinnitus and mobile phone use, Occup Environ Med 2010;67:804-808. It's not open access, but do not despair, because I'll tell you what they did.

The researchers recruited a group of 100 sufferers who showed up at a hospital in Vienna. They only picked people for whom no physiological, psychological or medical reason for the onset of their tinnitus could be found. They excluded for example patients with diseases of the middle ear, hypertension, and those medicated with certain drugs that are known to influence ear ringing. They also did hearing tests to exclude people with hearing loss, whose tinnitus one might suspect was noise induced. Chronic tinnitus was defined as lasting longer than 3 months. About one quarter of the patients had already had it for longer than a year at the time of recruitment. 38 of the 100 found it distressing "most of the time," and 36 "sometimes." The age of the patients ranged from 16 to 80 years.

The researchers then recruited a control group, also of 100 people, who were matched to the sufferers on certain demographic factors, among them age group, years of education, and whether they lived inside or outside the city.

At the time the study was conducted (2004), 92% of the recruits used a cell phone. (I suspect the use was strongly correlated with age, but there are no details on that in the paper.) At the time of onset of their tinnitus, only 84% of the sufferers had used a cell phone, and another 17% had used one for less than a year at that time. The recruits, both sufferers and controls, were asked about their cell phone habits by use of a questionnaire. Statistical analysis showed one correlation at the 95% confidence level: for cell phone use longer than 4 years at the onset of tinnitus. In numbers: In the sufferers' group, the ratio of those who had used a cell phone never or for less than one year to those who had used it for more than 4 years was 34/33. In the control group it was 41/23.
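If you want to play with those counts yourself, here is a quick back-of-the-envelope calculation of the crude odds ratio they imply. Keep in mind this is only a sketch: the study used a matched control group, and the authors' published analysis accounts for that design, so the naive unmatched interval below is not their result.

```python
import math

# Counts as quoted above (rows: sufferers, controls;
# columns: cell phone use > 4 years vs. never or < 1 year).
a, b = 33, 34   # sufferers: > 4 years, never or < 1 year
c, d = 23, 41   # controls:  > 4 years, never or < 1 year

odds_ratio = (a * d) / (b * c)                 # roughly 1.7
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # Woolf approximation
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"crude odds ratio: {odds_ratio:.2f}")
print(f"approximate 95% CI (unmatched): ({ci_low:.2f}, {ci_high:.2f})")
```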

They then discuss various possible explanations, such as the possibility that cell phone radiation affects the synthesis of nitric oxide in the inner ear, but also, more banally, that a "prolonged constrained posture" or "oral facial manoeuvres" affect the blood flow unfavorably. (Does chewing gum cause tinnitus?)

The result is just barely significant, i.e. just at the edge of the confidence interval: if there were no real association, there would be about a 5% chance of getting a result at least this pronounced just by unlucky sampling. So the researchers conclude very carefully that "high intensity and long duration of mobile phone use might be associated with tinnitus." Note that it's "associated with" and not "caused by." Needless to say, if you Google "cell phones tinnitus" you'll find several pages incorrectly proclaiming that "The researchers concluded that long term use of a mobile phone is a likely cause of tinnitus," or that the "study suggests cell phones may cause a chronic ringing in the ears." If such a Google search led you here: the study concludes nothing of that sort. Instead, the authors finish by saying that there "might" be a link and that the issue should be "explored further."

So, do cell phones cause tinnitus? Maybe. Should you stop sleeping with the phone under your pillow? Probably.

In any case, I was left wondering why they didn't ask about phone habits more generally. I mean, if it's the posture or the movements connected with calling, what does it matter whether it's a cell phone or a landline?

Monday, July 18, 2011

Book review: World Wide Mind by Michael Chorost

World Wide Mind
The Coming Integration of Humanity, Machines, and the Internet
By Michael Chorost

Is it surprising that self-aware beings become increasingly aware of their self-awareness and start pushing the boundaries? The Internet, Google, iPhones and wifi on every street corner have significantly changed the way we interact, share information and solve problems. Meanwhile, neuroscientists have made dramatic progress in deciphering brain activity. They have developed devices that allow one to type using thoughts instead of fingers, and monkeys with brain implants have learned how to move a robot arm with their thoughts. These are two examples that Michael Chorost discusses in his book, and that he then extrapolates.

Chorost's extrapolation combines these developments in communication and information technology with those in neuroscience: direct brain-to-brain communication by thought transmitted via implants rather than by typed words, combined with wireless access to various soft- and hardware to supplement our cognitive skills.

I agree with Chorost that this "World Wide Mind" is the direction we are drifting in, and that the benefits can be huge. It is interesting though, if you read the comments on my two earlier posts, that many people seemed to be scared rather than excited by the idea, mumbling Borg-Borg-Borg to themselves. It is refreshing and also courageous, then, that Michael Chorost addresses the topic in his book from a quite romantic viewpoint.

Chorost describes himself as a short, deaf, popular science writer. He wears a cochlear implant that allows him to hear by electric stimulation of the auditory system (the content of his previous book, which however I didn't read). Chorost started writing "World Wide Mind" single and finished it as a married man. He writes about his search for a partner, what he learned along the way about communication, and what today's communication on the internet is lacking. The ills produced by our presently incomplete and unsatisfactory online culture will, he believes, be resolved if we overcome the limitations of this exchange. He does not share the pessimism Jaron Lanier put forward in his book "You are not a gadget". (He does however share Lanier's fondness for octopi, and a link to this amazing video with the reader.)

In "World Wide Mind" Chorost wants to offer an outlook of what he believes is doable if today's technology is pushed forward hard enough. He focuses mostly on optogenetics, a recently florishing field of study that has allowed to modify some targeted neurons' genetic code such that their activity can be switched on and off by light signals (most famously, this optogenetically controlled mouse running circles in blue light). He also discusses what scientists have learned about the way our brains store and process input. Chorost then suggests that it seems doable to record each person's pattern of neuronal activity for certain impressions, sights, smells, views, words, emotions and so on (which he calls "cliques") and transmit them to be triggered by somebody else's implant in that person's brain where they would cause a less intense signal of the corresponding clique. That would then allow us, so the idea, to share literally everything.

Chorost offers some examples of the consequences this would have, which however seem quite bizarre to me. Improving on Google's flu tracker, he suggests that the brain implants could "detect the cluster of physical feelings related to flu -- achiness, tiredness, and so on -- and send them directly to the CDC." I'm imagining that in the future we can track the spread of yeast infections via shared itchiness, thank you very much. Chorost also speculates that "The greater share of the World Wide Mind's bandwidth might be devoted to sharing dreams" (more likely it would be devoted to downloadable brain-sex), and that "linking the memory [of what happened at some place to the place] could be done very easily, via GPS." I'm not sure I'd ever sleep in a hotel room again.

He barely touches, in one sentence, on what to me is maybe the most appealing aspect: increased empathy bridging the gap between the rich and the poor, both locally and globally. And his vision for science gives me the creeps, for it would almost certainly stifle originality and innovation due to a naive sharing protocol.

"World Wide Mind" is a very optimistic book. It is a little too optimistic in that Chorost spends hardly any time discussing potential problems. He has a few pages in which he acknowledges the question of viruses and shizophrenia, but every new technology has problems, he writes, and we'll be able to address them. The Borg, he explains, are scary to us because they lack empathy and erase the individual. A World Wide Mind, in contrast, would enhance individuality because better connectivity fosters specialization that eventually improves performance. Rather than turning us into Borg, "brain-to-brain technologies would be profoundly humanizing."

It is quite disappointing that Chorost does not discuss at all the cognitive biases we know we have, and what protocols might prevent them from becoming amplified. Nor does he, more trivially, address the point that everybody has something to hide. Imagine you're ignoring a speed limit sign (not that I would ever do such a thing). How do you avoid this spreading through your network and ending up in a fine? Can you at all? And let's not mention that reportedly a significant fraction of the adult population cheats on their partner. Should we better wait for the end of monogamy before we move on with the brain implants? (It may be closer than you think.) And, come to think of it, let's better wait for the end of the Catholic Church as well. Trivial as it sounds, these issues will be real obstacles to convincing people to adopt such a technology, so why didn't Chorost spend a measly paragraph on them?

Chorost's book is an easy read. On the downside, it lacks detail and explanation. His explanation of MRI, for example, is one paragraph saying it's a big expensive thing with a strong magnet that "can change the orientation of specific molecules in a person's body, letting viewers see various internal structures clearly." And that's it. He also talks about neurotransmitters without ever explaining what those are, and you're unlikely to learn anything about neurons that you didn't already know. Yes, I can go and look up the details. But that's not what I buy a book for.

"World Wide Mind" sends unfortunately very unclear messages that render Chorost's arguments unconvincing. He starts out stressing that the brain's hardware is its software, and so it's quite sloppy he then later, when discussing whether the Internet is or might become self-aware, confuses the Internet with the World Wide Web. According to different analogies that he draws upon, blogs either "could be seen as a collective amygdala, in that they respond emotionally to events" and Google (he means the search protocol, not the company) "can be seen as forming a nascent forebrain" or some pages later it can be seen as an organ of an organism, or a caste of a superorganism.

Chorost also spends a lot of words on some crazy California workshop that he attended where he learned about the power of human touch (in other words, the workshop consisted of a bunch of people stroking each other), but then never actually integrates his newly found insights about the importance of skin contact with the World Wide Mind. This left me puzzled, because the brain-to-brain messaging he envisions can transfer only one's own neuronal activity, which means that essentially, rather than tapping on your friend's shoulder, you'd have to tap your own shoulder and send it to your friend. And Chorost does not make a very convincing case when he claims that we'd easily be able to distinguish somebody else's memory from our own because it would lack detail. He does that after having discussed at length our brains' tendency to "confabulation," the creation of a narrative for events that didn't happen or didn't make sense, to protect our sense of causality and meaning, something he seems to have forgotten some chapters after explaining it.

In summary: the book is very readable, entertaining, and smoothly written. If you don't know much about recent developments in neuroscience and optogenetics, it will be very interesting. The explanations are however quite shallow and Chorost's vision is not well worked out. On the pro side, this gives you something to think about yourself, and at only 200 pages the book does not require a big time investment.

Undecided? You can read the prologue and 1st Chapter of the book here, and Chapter 4 here. Michael Chorost tweets and is on facebook.

Friday, July 15, 2011

Collective excitement

I woke up this morning to find my twitter account hacked, distributing spam. I'm currently reading Michael Chorost's new book “World Wide Mind” and if his vision comes true the day might be near when your praise of the frozen pizza leaves me wondering if your brain has been hacked. Book review will follow when I'm done reading. If the babies let me that is. Here, I just want to share an interesting extract.

At the risk of oversimplifying 150 pages: a "clique" is something like an element of the basis of your thoughts. It might be a thing, a motion, an emotion, a color, a number, and so on, e.g. black, dog, running, scary... It's presumably encoded in some particular pattern of neurons firing in your brain, patterns that however differ from person to person. The idea is that instead of attempting brain-to-brain communication by directly linking neurons, you identify the pattern for these "cliques." Once you've done that, software can identify them from your neuronal activity and transmit them to somebody else, where they get translated into that person's respective neuronal activity.
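To make the translation step concrete, here is a toy sketch of the lookup Chorost describes: each person has their own mapping between person-specific activity patterns and shared clique labels, and transmission goes pattern to label to pattern. Needless to say, all the names and data structures below are made up for illustration; nothing of the sort exists.

```python
# Toy sketch of the clique-translation idea: person-specific neuronal
# patterns are mapped to shared clique labels and back again.
# Entirely hypothetical; the identifiers are invented for illustration.

ALICE_PATTERNS = {          # Alice's (hypothetical) pattern -> clique label
    "pattern_A17": "dog",
    "pattern_A42": "running",
    "pattern_A99": "scary",
}

BOB_CLIQUES = {             # Bob's clique label -> his own pattern
    "dog": "pattern_B03",
    "running": "pattern_B55",
    "scary": "pattern_B81",
}

def transmit(sender_activity, sender_patterns, receiver_cliques):
    """Translate the sender's active patterns into the receiver's patterns,
    dropping anything the receiver has no matching clique for."""
    labels = [sender_patterns[p] for p in sender_activity if p in sender_patterns]
    return [receiver_cliques[l] for l in labels if l in receiver_cliques]

# Alice's implant detects two active patterns; Bob's implant triggers his versions.
print(transmit(["pattern_A17", "pattern_A99"], ALICE_PATTERNS, BOB_CLIQUES))
# -> ['pattern_B03', 'pattern_B81']
```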

In Chapter 10 on “The Future of Individuality,” Chorost speculates on the enhanced cognitive abilities of an interconnected World Wide Mind:
“[I]magine a far-flung group of physicists thinking about how to unify quantum mechanics and general relativity (the most important unsolved problem in physics). One of them has the germ of an "aha" idea, but it's just a teasing sensation rather than a verbally articulated thought. It evokes a sense of excitement that her [brain implant] can pick up. Many cliques in her brain would be activated, many of them subconsciously. The sensation of excitement alerts other physicists that something is up: they suddenly feel that sense of aha-ness themselves. The same cliques in their brains are activated, say these: unification problem, cosmological constant, black holes, Hawking radiation.

An apparent random assortment, but brains are good at finding patterns in randomness. New ideas often come from a fresh conjunction of old ones. In a group intimately familiar with a problem, the members don't need to do a whole lot of talking to understand each other. A few words are all that are needed to trigger an assortment of meaningful associations. Another physicist pushes those associations a little further in his own head, evoking more cliques in the group. Another goes to his keyboard and types out a few sentences that capture it, which go out to the group; perhaps they are shared on a communally visible scratch pad. The original physicist adds a few more sentences. Fairly rapidly, the new idea is sketched out in a symbology of words and equations. If it holds up, the collective excitement draws in more physicists. If it doesn't, the group falls apart and everyone goes back to what they were doing. This is brainstorming, but it's facilitated by the direct exchange of emotions and associations within the group, and it can happen at any time or place.”

Well, I'm prone to like Chorost's book, as you can guess if you've read my post from last year, "It comes soon enough," in which I wrote: “The obvious step to take seems to me not trying to get a computer to decipher somebody's brain activity, but to take the output and connect it as input to somebody else. If that technique becomes doable and is successful, it will dramatically change our lives.”

Little did I know how far the technology has already come, as I now learned from Chorost's book. In any case, the above example sounds like it's right out of my nightmares. I'm imagining that whenever one of my quantum gravity friends has an aha-moment, we all get a remote-triggered adrenaline peak and jump all over it. We'd never sleep, brains would start fuming, we'd all go crazy in no time. Even if you managed to dampen this down, the over-sharing of premature ideas is not good for progress (as I've argued many times before). Preemies need intensive care, they need it warm and quiet. A crowd's attention is the last thing they need. Sometimes it's not experience and knowledge of all the problems that helps one move forward, but the lack thereof. Arthur C. Clarke put it very well in his First Law:
“When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.”

The distinguished scientist may be wrong, but he will certainly be able to state his opinion very clearly and indeed have a lot of good reasons for it. He still may be wrong in the end, but by then you might have given up thinking through the details. Skepticism and debunking are central elements of research. Unfortunately, one sometimes throws out the baby with the bathwater of bad ideas. "Collective excitement" based on a sharing of emotions doesn't seem like the best approach to science.

Sunday, July 10, 2011

Love to wonder

The July issue of 'Physik Journal' (the membership journal of the German Physical Society) has an interview with Jack Steinberger. Steinberger is an experimental particle physicist who in 1988 won the Nobel Prize, with Leon Lederman and Melvin Schwartz, for his 1962 discovery of the muon neutrino. He was born in Germany, but his family emigrated to the USA in 1934. Steinberger just celebrated his 90th birthday. What does a physicist do at the age of 90? Here's an excerpt from the interview (by Stefan Jorda):

You still come to your office at CERN every day?

I came by bike until last year, but then I fell and now I take the bus. I get up at five and arrive at half past six.

Every morning?

Not on Saturdays and Sundays. But I have nothing else to do. I read my email, then I go to the arXiv and look at the new papers in astrophysics. On average, there are about 50 to 100, and many of them are very bad. I read the abstracts, which takes one and a half hours, then I print the 5 to 10 that may be of interest to me. I try to understand them during the rest of the day. Then at 4pm I take the bus back home.

Since when have you been interested in astrophysics?

In 1992 COBE detected the inhomogeneities in the cosmic microwave background; that was wonderful. It was a big challenge for me, as a particle physicist, to understand it, because one has to know general relativity and hydrodynamics. Back then I was still a little smarter and really tried to learn these things. Today I am interested for example in active galactic nuclei. The processes there are very complicated. I try to keep track, but there are many things I don't understand, and a lot simply is not understood.

(Any awkward English grammar is entirely the fault of my translation.)

Should I be lucky enough to live to the age of 90, that's how I would like to spend my days, following our ongoing exploration and increasing understanding of nature. Okay, maybe I would get up a little later. And on Saturdays I'll bake a cake or two because my great-grandchildren come for a visit. All nine of them.

"Men love to wonder, and that is the seed of science."
~Ralph Waldo Emerson

Tuesday, July 05, 2011

Getting cuter by the day...

If you've been wondering what age babies are the cutest, there's a scientific answer to that. Yes, there is. In the year 1979, Katherine A. Hildebrandt and Hiram E. Fitzgerald from the Department of Psychology at Michigan State University published the results of their study on "Adults' Perceptions of Infant Sex and Cuteness."

A totally representative group of about 200 American college students of child psychology were shown 60 color photographs of infant faces: 5 male and 5 female for each of six age levels (3, 5, 7, 9, 11, and 13 months). The babies were photographed by a professional photographer under controlled conditions when their facial expressions were judged to be relatively neutral, and the infants' shoulders were covered with a gray cape to hide their clothing.

The study participants were instructed to rate the photos on a 5-point scale of cuteness (1: not very cute, 2: less cute than average, 3: average cuteness, 4: more cute than average, 5: very cute). The average rating was 2.75, i.e. somewhat less than average cuteness. The authors write that it's probably the selection of photos with neutral facial expressions and the gray cape that accounted for the students' overall perception of the babies as slightly less cute than average. And here's the plot of the results:
So, female cuteness peaks at 9 months.

For the above rating the participants were not told the gender of the child, but were asked to guess it, which provided a 'perceived gender' assignment for each photo. In a second experiment, the participants were told a gender, which however was randomly picked. It turned out that an infant perceived to be male but labeled female was rated less cute than if it was labeled male. Thus the authors conclude that cuter infants are more likely to be perceived as female, and that cuteness expectations are higher for females.

Partly related, Gloria just woke up:

Friday, July 01, 2011

Why do we live in 3+1 dimensions? Another attempt.

It's been a while since we discussed the question of why we experience no more and no less than 3 spatial dimensions. The last occasion was a paper by Karch and Randall, who tried to shed some light on the issue, if not very convincingly. Now there's a new attempt on the arXiv:
    Spacetime Dimensionality from de Sitter Entropy
    By Arshad Momen and Rakibur Rahman
    arXiv: 1106.4548 [hep-th]

    We argue that the spontaneous creation of de Sitter universes favors three spatial dimensions. The conclusion relies on the causal-patch description of de Sitter space, where fiducial observers experience local thermal equilibrium up to a stretched horizon, on the holographic principle, and on some assumptions about the nature of gravity and the constituents of Hawking/Unruh radiation.

What they've done is to calculate the entropy and energy of the Unruh radiation in the causal patch of any one observer in a de Sitter spacetime with d spatial dimensions. Holding the energy fixed and making certain assumptions about the degrees of freedom of the particles in the radiation, the entropy has a local maximum at d = 2.97 spacelike dimensions, a minimum around 7, and goes to infinity for large d. Since the authors restrict themselves to d less than or equal to 10, this seems to say that for a given amount of energy the entropy is maximal for 3 spacelike dimensions. Assuming that the universe is created by quantum tunneling, the probability for creation is larger the larger the entropy, so it would then be likely that we live in a space with 3 dimensions.
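Just for orientation, and not to be mistaken for the authors' actual computation: the textbook scaling for massless radiation in d spatial dimensions is

\[ E \propto V\, T^{\,d+1}, \qquad S \propto V\, T^{\,d} \quad\Longrightarrow\quad S\big|_{E\,\mathrm{fixed}} \propto V^{1/(d+1)}\, E^{\,d/(d+1)}, \]

so the d-dependence enters both through the exponents and through d-dependent prefactors such as the number of polarization states. In the paper it is the interplay of these factors with the volume of the causal patch and the holographically fixed cutoff (see below) that produces the maximum near three.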

To calculate the entropy one needs a cutoff, the value of which is fixed by matching it to the entropy associated with the de Sitter horizon; that's where the holographic principle becomes important.

Not only is it crucial that they impose an upper bound on the number of dimensions by some other argument, their counting also depends on the number of particle species and the dimensions these can propagate into. They assume only massless particles contribute, and that these are photons and gravitons. Massive particles, even with small masses, are "unacceptable," the authors write, because then the cutoff could be sensitive to the Hubble parameter. By considering only photons and gravitons as the massless particles, they are assuming the standard model. So even in the best case one could say they have a correlation between the number of dimensions and the particle content. Also, in braneworld models the total number of spatial dimensions isn't necessarily the one that determines the degrees of freedom at low energy, a possibility the authors explicitly say they're not considering.

Thus, as much as I'd like to see a good answer to the question, I'm not very convinced by this one either.