I recently came across the following Whitman poem:

When I Heard the Learn’d Astronomer

When I heard the learn’d astronomer,
When the proofs, the figures, were ranged in columns before me,
When I was shown the charts and diagrams, to add, divide, and measure them,
When I sitting heard the astronomer where he lectured with much applause in the lecture-room,
How soon unaccountable I became tired and sick,
Till rising and gliding out I wander’d off by myself,
In the mystical moist night-air, and from time to time,
Look’d up in perfect silence at the stars.

This poem beautifully captures the sense that when you quantitatively analyze something (be it nature or literature), some of the initial beauty and magic of the phenomenon disappears [1].

As a scientist, you run into the position that a scientific viewpoint somehow diminishes ‘beauty and magic’ once in a while, so it’s good to have an answer ready. My own reply is that while it’s true that analysis tends to strip many phenomena of some kind of immediate (and often trivial) appeal, digging deeper almost always reveals new layers of beauty.

I had developed some examples to go along with this argument, based on my own experiences, but a couple of years ago, I watched an interview with Richard Feynman [2], and his answer is so much better than mine that I’ll leave the rebuttal of Whitman to him:


After writing the above, I googled the poem – I guess I should have done that before writing – and found a lot of fun/interesting discussions. One commenter pointed to a modern version of Whitman’s standpoint courtesy of the Insane Clown Posse (from Miracles, 2009):

Water, fire, air and dirt
Fucking magnets, how do they work?
And I don’t wanna talk to a scientist
Y’all motherfuckers lying, and getting me pissed.

Check out the pages below for more. The comment thread for the first post, in particular, is a treasure trove:


[1] My own favorite example is that – when conditions are good – there are 9110 stars visible to the unaided human eye. I’m pretty sure that bringing up this factoid could ruin a romantic evening under the stars. Anyway, I’m rambling.

[2] From the BBC program Horizon. Interview recorded in 1981 – the whole thing is highly recommended.

NetSci 2013: Venue and Dates

It’s time to get out your pencils and mark your 2013 calendars:

NetSci 2013 will take place June 3rd – 7th at the Royal Library’s new building (the Black Diamond) in Copenhagen, Denmark.

Along with fellow organizing committee members Petter Holme, Joachim Mathiesen, and Alan Mislove, I’m excited to announce that we’ve secured an incredible venue for NetSci 2013.

In order to provide non-Copenhageners with a sense of how amazing this space is going to be, I’ve included a few photos:

And the interior is spectacular as well:

And the venue is, of course, just the beginning – we have many more pleasant surprises planned for NetSci 2013. Stay tuned for updates.

Image credits (in order of appearance):

Conference: Applications of Network Theory

Just a quick advertisement for an exciting European conference co-organized by my fellow NetSci 2013 organizer Petter Holme. It takes place in Stockholm, Sweden in early April. The speaker line-up looks pretty good, despite the fact that they invited me [1].

Conference on Applications of Network Theory

Date & Location: 7 – 9 April 2011 at AlbaNova in Stockholm, Sweden

Organizers: Peter Minnhagen (Umeå) and Petter Holme (Umeå)

Invited speakers:
Lada Adamic, University of Michigan
Albert-László Barabási, Northeastern University
Jordi Bascompte, Consejo Superior de Investigaciones Cientificas
Sebastian Bernhardsson, Niels Bohr Institute
Vincent Blondel, University of Louvain
Aaron Clauset, University of Colorado
Sergey Dorogovtsev, University of Aveiro
Birgitte Freiesleben de Blasio, University of Oslo
Thilo Gross, MPI Dresden
Kimmo Kaski, Aalto University
Beom Jun Kim, Sungkyunkwan University
Renaud Lambiotte, FUNDP
Vito Latora, Catania University
Sune Lehmann, Technical University of Denmark
Fredrik Liljeros, Stockholm University
Jukka-Pekka Onnela, Harvard University
Juyong Park, Kyung-Hee University
Veronica Ramenzoni, MPI Nijmegen
Martin Rosvall, Umeå University
Jari Saramäki, Aalto University
Bo Söderberg, Lund University
Brian Uzzi, Northwestern University
Jevin West, University of Washington

Description: The main idea is to convene key world-class researchers on complex networks and let them interact freely with the Nordic groups interested in the area. The program will be divided into four thematic areas: biological networks, general network theory, technological networks, and social networks. Many of the intended participants are interested in several of these areas. Much progress in network theory has been made through analogies between different fields, something complex-network researchers value, so we believe such a schedule will be attractive to participants. In addition to the regular schedule of the Nordita program (one or two talks per day), we will arrange a more intense, three-day workshop April 7–9. One purpose of this workshop is to attract researchers who are unable to stay for the extended time required by the program.

This workshop is being organized as part of a long-program on networks at NORDITA.

Registration deadline: 15 March 2011 or when 70 participants have registered.


[1] I stole this last charming and self-deprecating sentence from Aaron Clauset’s blog.

Tell a Story!

Although I’m trying to cut down on my podcast use — to see if a bit of mind-wandering might be good for my brain [1] — I still allow myself to listen to podcasts to alleviate the pain of some of the dreariest of chores (e.g. cleaning the bathroom). On those occasions I’m currently working my way through RadioLab’s excellent back catalog [2].

While always interesting and informative, the RadioLab podcast I listened to yesterday is worth a special shout-out. It featured a simple recording of the speech co-host Robert Krulwich was invited to give at Caltech’s commencement back in 2008 (you can listen to it here). During the passionate (and funny) speech, Krulwich argues for the value of science communication; not just in general, but also when people ask you about your work:

But because this is your day, and because this person loves you, or because he can’t think of anything to say after “hi,” he asks about your work. And to make it still more interesting, let’s assume that if you explain to this person what you’ve been working on, you might have to use certain words like “protein” or “quark” or “differential” or maybe “hypotenuse.” And if you do, he is going to listen to you very, very politely, but upstairs, those words are going to mean not a whole lot. […] So … here’s my question: When you are asked, “What are you working on?” should you think, “There’s no way I can talk about my science with this guy, because I don’t have the talent, or the words, or the patience to do it—it’s too hard, and anyway, what’s the point?” [3]

Now, Krulwich argues (and I wholeheartedly agree) that it’s important to come up with a good answer to this question.

The ‘science story’ is a weapon against the ‘nut-case story’

So the podcast is great, and you should be listening to it, rather than reading this. But just in case you’re not convinced, I’ll highlight a few of the elements I think are most important. First of all, Krulwich has a good argument as to why science communication is important. It’s not because there’s an intrinsic value in enlightening the spirit of man. It’s because reason is at war with all sorts of irrational/crazy causes:

[E]ven if it’s hard to explain, even if you know they don’t really want to hear it, not really, I urge you to give it a try. Because talking about science, telling science stories to regular folks, is important. In a way, it’s crucial. Scientists need to tell stories to nonscientists, because science stories—and you know this—have to compete with other stories about how the universe works, and how it came to be. And some of those other stories—Bible stories, movie stories, myths—can be very beautiful and very compelling. But to protect science and scientists—and this is not a gentle competition—you’ve got to get in there and tell your version of how things are, and why things came to be.

So Krulwich makes the excellent point that to most people a story is just a story. And a science-story is no different from a religion-story. The only way to defend science is to tell better stories; to tell stories that are more compelling — also on an emotional level.

Are metaphors bad?

The other element in the talk that I wanted to highlight is Krulwich’s discussion of the use of metaphor and the (potential) lack of precision in science communication:

And yet many scientists remain wary of metaphors, of adjectives […] But the job we face is to put more stories out there about nature that are true and complex—not dumbed down—and that still have the power to enthrall, to excite, and to remind people that there’s a deep beauty, a many-leveled beauty in the world. What scientists say is hard-won information, carefully hewn from the world. It’s not the offhand opinions of a tribe of privileged intellectuals who look down on everybody. It’s my sense that if more scientists wanted to, they could learn how to tell their stories with words and pictures and metaphors, and people would hear and remember those stories and not be as willing to accept the other folks’ stories. Or at least there’ll be a tug of war, and I think that the science stories will, surprisingly, very often win.

To me, the key words here are ‘true and complex’ coupled with ‘a deep beauty’. It’s true that you can’t really explain measurement in quantum mechanics to someone who doesn’t know what an eigenvalue is, etc. But you can still convey the absolute weirdness and wonder of the laws that govern all things quantum.

Science itself should not be dominated by metaphor or vagueness; science is about incremental discovery of complex relations. This process of discovery is built on the precision and clarity of scientific colleagues and the giants on whose shoulders we stand [4]. But that doesn’t mean that you shouldn’t spin an entertaining yarn to explain and motivate your research. So tell a story! Just remember to stay clear of condescension and to stay true to the complex reality that underlies your work.


[1] Steven Johnson’s excellent new book Where Good Ideas Come From suggests that a bit of mind-wandering is one way to allow ideas to ‘bubble’ to the surface (if I remember correctly). I’m not sure that it works, but I guess that a break from the usual near-constant stream of input can’t be a bad thing.

[2] In case someone’s interested, my other favorite podcasts are (1) the absolutely unmissable Mark Kermode and Simon Mayo’s Film Reviews which features the best film reviews in the universe (sorry @ebertchicago), (2) NYT’s Book Review, and (3) the classic This American Life.

[3] Text here and in the following was copied from the official transcript. Download it here [pdf].

[4] Some attentive readers may have noticed my subtle reference to Newton’s famous metaphor (which, according to Wikipedia, doesn’t really originate from a Newton quote).

2010 in review

The artificial intelligence engine at WordPress (which hosts this page) sent me an email with some stats on how the site has been doing since I set it up back in June. According to the analysis, the page is “fresher than ever”, so I’m delighted. The email even had a convenient button at the bottom to post the whole thing. And since I haven’t posted anything for a while, I thought, “why not”.

No review of my online 2010 would be complete, however, without mentioning the Twittermood project I did with Alan Mislove, YY Ahn, JP Onnela, and Niels Rosenquist. That project earned us 302,713 views on YouTube (at the time of writing) and global press attention with large amounts of TV, radio, print, and internet coverage (click here for full details). Recently, the visualization was mentioned first among Mashable’s best infographics of 2010, which generated a mini-surge of traffic for the YouTube video.

Anyway, the unedited message is below:

The stats helper monkeys at WordPress.com mulled over how this blog did in 2010, and here’s a high level summary of its overall blog health:

Healthy blog!

The Blog-Health-o-Meter™ reads Fresher than ever.

Crunchy numbers

Featured image

A helper monkey made this abstract painting, inspired by your stats.

A Boeing 747-400 passenger jet can hold 416 passengers. This blog was viewed about 3,600 times in 2010. That’s about 9 full 747s.

In 2010, there were 13 new posts, not bad for the first year! There were 38 pictures uploaded, taking up a total of 53mb. That’s about 3 pictures per month.

The busiest day of the year was July 22nd with 207 views. The most popular post that day was Worlds Colliding. Part II.

Where did they come from?

The top referring sites in 2010 were twitter.com, ccs.neu.edu, barabasilab.com, iq.harvard.edu, and barabasilab.neu.edu.

Some visitors came searching, mostly for sune lehmann, sune lehman, sune, lehmann sune, and sune lehmann nature.

Attractions in 2010

These are the posts and pages that got the most views in 2010.


Worlds Colliding. Part II July 2010


About June 2010


Press June 2010


Visualizing Link Communities November 2010
1 comment


Mood, twitter, and the new shape of America July 2010

Visualizing Link Communities

When YY Ahn, Jim Bagrow, and I published our paper on communities of links in complex networks, we did share the code for the algorithm, but one of the essentials missing from our package was a good way to visualize the highly overlapping link communities.

Link-communities Visualization

Thus, I’m delighted to report that Rob Spencer over at Scaled Innovation has done a great job of visualizing the detected link communities (including a new client-side implementation, I might add). The technical details are interesting and available.

The example displayed above is lifted from Scaled Innovation and shows the network of characters in The Wizard of Oz. In addition to the central visualization reproduced above (see below for details), the page also shows the full link dendrogram and many other treats; everything is beautifully crafted. Note the community assignment matrix on the right, which is a neat way of probing the issue of nested communities. On the page, Rob has a number of interesting observations regarding visualization of the link communities and explains the layout above in further detail. I quote:

The good news is that the ABL method is powerful and flexible. The challenge is that the communities it reveals are of links, not nodes, and therefore not as obvious to portray and interpret. So far the literature method is to use a traditional force-based network diagram and color the lines between the dots, rather than color the dots. Not bad, but this has the limitations that force-directed network diagrams have always had: a big “wow factor” but of limited practical interpretive use because of the spaghetti of crossing lines. So here you’ll find outright experiments, and that means that some will be different!

In the upper circular graph the dots are the nodes and the polygons show community membership of those nodes (the colors match the table and dendrogram); line crossing is minimized by working around in cluster-joining order (same as the ROYGBIV color order). Communities are equally distributed around the circle with anchor points shown as black-centered dots; each node is placed as the weighted sum of its coordinates of each anchor to which it belongs, plus some random jitter to separate nodes with single community membership. The community ordering and coloring has an interesting result: the diagram gets simpler to see as the number of communities is increased, even far above the partition density “optimum”.

The method is fast because it’s completely deterministic and drawn in one pass, i.e. it’s not an iterative force-relaxation method.
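Rob’s description of the anchor layout translates into surprisingly little code. Here’s a rough Python sketch of my reading of it (the function name, the unweighted averaging, and the jitter range are my guesses, not his actual implementation):

```python
import math
import random

def anchor_layout(memberships, n_communities, jitter=0.02, seed=1):
    """Sketch of an anchor-based circular layout: communities are spaced
    evenly around the unit circle, and each node sits at the average of
    the anchors of the communities it belongs to, plus a little jitter
    for nodes with a single community membership.

    memberships: dict mapping node -> set of community indices (0-based).
    Returns a dict mapping node -> (x, y).
    """
    rng = random.Random(seed)
    # Community anchor points, equally distributed around the circle.
    anchors = [
        (math.cos(2 * math.pi * c / n_communities),
         math.sin(2 * math.pi * c / n_communities))
        for c in range(n_communities)
    ]
    pos = {}
    for node, comms in memberships.items():
        # Average the anchors of all communities the node belongs to.
        xs = [anchors[c][0] for c in comms]
        ys = [anchors[c][1] for c in comms]
        x, y = sum(xs) / len(xs), sum(ys) / len(ys)
        if len(comms) == 1:
            # Jitter separates nodes that share a single community.
            x += rng.uniform(-jitter, jitter)
            y += rng.uniform(-jitter, jitter)
        pos[node] = (x, y)
    return pos

# Toy example: a hub character in three communities lands near the center,
# single-community characters land near their community's anchor.
layout = anchor_layout({"Dorothy": {0, 1, 2}, "Toto": {0}, "Glinda": {1}}, 3)
```

Being a single deterministic pass over the nodes, this captures why the layout is so fast compared to iterative force-relaxation methods.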

Pervasive overlap and visualizations

While Rob’s visualization shows tremendous progress on a number of fronts (just compare it to our own – primitive – first stab at visualizing the network of characters in Les Miserables), I still think that node-based visualizations of the link communities work best when we study ego-networks (a single person and her neighbors).

As we point out in the paper, we can visualize the ego-network precisely because the central node’s communities are largely non-overlapping. So in the example above, Dorothy is the Ego, placed in the center of the visualization, while the various non-overlapping story lines appear as communities surrounding her.

One of the consequences of pervasive overlap (when every node is a member of multiple communities), is that we can no longer display the communities as block structures in the network adjacency matrix. Roughly speaking, to form a block structure, we need a single block per node. Some overlap is possible within the framework of block modeling, but when we can have more communities than nodes, this approach breaks down.
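The block-structure argument can be made concrete with a toy membership matrix (my own illustration, not taken from the paper):

```python
# Three nodes, three communities, every node in two communities:
# overlap is "pervasive" in miniature.
members = {"a": {0, 1}, "b": {0, 2}, "c": {1, 2}}
n_comms = 3

# Node-community membership matrix: rows are nodes, columns communities.
matrix = [[1 if c in members[node] else 0 for c in range(n_comms)]
          for node in sorted(members)]

# Block modeling needs (roughly) one community per node, i.e. each row
# summing to 1. Here every row sums to 2, and there are as many
# communities as nodes, so no reordering of the adjacency matrix can
# separate the communities into disjoint diagonal blocks.
row_sums = [sum(row) for row in matrix]
```

With link communities the number of communities can even exceed the number of nodes, which makes the mismatch with the one-block-per-node picture starker still.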

A similar problem arises in visualization. My guess is that any strategy for visualizing pervasive overlap where nodes are the basis of the visualization will ultimately turn out to be problematic for a full network. One possible solution is to follow the example of CFinder and construct a visualization based on the network of communities but with the ability to zoom into each community. At the local level, Rob’s visualization would be perfect.

Comments/ideas are welcome. Note – this post can also be found at the Complexity and Social Networks Blog.

Twittermood 2: Election special

The midterm elections are coming up, so we decided to create our own little twitter mood election center.

“Twitter has grown to become an important aspect of public debate and leading up to Tuesday’s midterms, the Twitterverse is abuzz with conversations on the topics that will decide the individual races.

It is well known that the state you live in plays a role in deciding what issues you care about. By utilizing the fact that conversations on twitter are public, we can geocode individual tweets, and study where Americans are talking about specific issues.

In this way, Twitter allows us to extrapolate from millions of water cooler conversations and show where the conversations are taking place right now.”

Check it out by clicking on one of the images below:

Standard representation

Basically, the idea was to play around with the Twitter stream and do something in real-time for the midterm elections. So we decided to dig into where people are talking about the various issues that are shaping the debate leading up to the election.

See the page for full details.

The end of Supporting Material?

Maybe this is how it happens: You see an interesting (seemingly innocuous) paper and decide to read it. Upon finding it very information-dense, you decide to take a look at the supporting information (SI) and notice that the SI has a word count greater than that of an average PhD thesis. Or maybe it’s when you decide to print the SI and realize something unusual is going on when your printer is still spitting out paper after half an hour.

However you became aware of it, scientific practice has been changing over the last few years. If I remember correctly, supporting information packages started becoming the norm for papers (at least in some journals) only a few years ago, and the average SI length has been growing steadily ever since.

Now something interesting has happened. From November 1st onwards, The Journal of Neuroscience (JNS), a leading journal in the field, will no longer allow authors to include supplemental material when submitting new manuscripts (JNS agrees to link to non-peer-reviewed supporting material on the author’s own site). The decision is explained in detail by Editor-in-Chief John Maunsell, who presents a lucid and interesting argument. He explains that, on the one hand, the decision was made to make the task of peer reviewing a paper more manageable, i.e. to help the referees:

Although [JNS], like most journals, currently peer reviews supplemental material, the depth of that review is questionable. Most well qualified reviewers are overburdened with requests to review manuscripts, and many feel that it is too much to ask them to also evaluate supplemental material that can be as extensive as the article itself. It is obvious to editors that most reviewers put far less effort (often no effort) into examining supplemental material. Nevertheless, we certify the supplemental material as having passed peer review.

This surely is an accurate description of the situation many referees find themselves in. Going over every equation and argument in a 100-page SI takes several days, an amount of time that most academics simply don’t have available. (In fact, the current state of peer review, even without mammoth SIs, has been argued to suffer from serious problems.)

On the other hand the decision is also intended to protect the authors.

Another troubling problem associated with supplemental material is that it encourages excessive demands from reviewers. Increasingly, reviewers insist that authors add further analyses or experiments “in the supplemental material.” These additions are invariably subordinate or tangential, but they represent real work for authors and they delay publication. Such requests can be an unjustified burden on authors. In principle, editors can overrule these requests, but this represents additional work for the editors, who may fail to adequately referee this aspect of the review.

Reviewer demands in turn have encouraged authors to respond in a supplemental material arms race. Many authors feel that reviewers have become so demanding they cannot afford to pass up the opportunity to insert any supplemental material that might help immunize them against reviewers’ concerns.

The “supplemental material arms race” described so eloquently above is another element that I, as an author, can relate to—and I suspect that many others feel the same way.

With no room for peer-reviewed SI, each manuscript must be self-contained and convincing on its own merits:

A change is needed if we are to maintain the integrity and value of peer-reviewed articles. We believe that this is best accomplished by removing the supplemental material from the peer review process and requiring that each submission be evaluated and approved as a complete, self-contained scientific report […] With this change, the review process will focus on whether each manuscript presents important and compelling results.

I think most scientists can agree that large SIs present a challenge to the scientific method as we know it. As JNS argues, large SIs burden referees and authors alike and create the potential for a harmful “SI arms race”.

But let’s consider the suggested solution. In my interpretation, the proposal is to introduce more trust into the process. By eliminating the peer-reviewed SI, the Editor-in-Chief is effectively stating that referees should trust that the authors have done their legwork (data preprocessing, programming, statistical analysis, and other “boring” elements underlying the main results) properly.

Of course, the entire foundation of peer review is trust. As referees, we begin our task trusting that authors have done their work properly and presented their results honestly. Even a good referee can only be expected to catch mistakes and problems in the material presented to them. So why not a little additional trust?

Personally, I am unsure what to think. On one side, I wholeheartedly agree that there are important problems with the current state of affairs. But, on the other side, I think there are important arguments against allowing too much of the ‘legwork’ to be left out of the peer review process. Firstly, examples of scientific misconduct are many, and the elimination of peer-reviewed SI will make sloppy or dishonest science easier. Secondly, and more importantly, as John Timmer at Ars Technica has recently pointed out, the increasing use of computers could potentially put an end to the entire concept of scientific reproducibility (precisely because of extensive preprocessing of data, etc.). Without peer-reviewed SI, this problem will be even more difficult to counter.

Regardless of the pros and cons, this is an interesting move by JNS. Since JNS allows fairly long articles (typically over ten pages), getting rid of the SI might be easier for JNS and other journals aimed at specific scientific disciplines than for highly cited interdisciplinary journals – say, Science or Nature – where word-count restrictions for the main text are taken very seriously.

It will be interesting to see if this policy of “no supporting material” catches on.

Bipartite Network gets a Makeover

I guess my research is slowly changing focus and is more and more about some kind of data science (although I still bill myself as a physicist turned network scientist). While statistics and mathematical models are still driving this type of research, an increasingly important part of data science is visualization – finding neat ways to display subtle and complicated mathematical concepts in a way that is immediately understandable.

Sometimes, however, visualization can be completely gratuitous eye-candy. Last week, I played around with displaying a weighted bipartite network. One of the default layouts looked something like this:

Adding Bézier curves, more pleasing node shapes, and a little color, the final network comes across as slightly more pleasing to the eye (in my opinion, anyway):
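For the curious: curved edges like these are typically quadratic Bézier curves, with the control point pushed off the straight line between the endpoints. A minimal pure-Python sketch of the idea (a generic version, not the actual code behind the figures):

```python
def bezier_edge(p0, p2, bulge=0.3, steps=20):
    """Sample points along a quadratic Bezier curve from p0 to p2.

    The control point is the midpoint of the segment, offset
    perpendicularly by `bulge` times the segment vector, which is what
    makes the edge bow to one side instead of running straight.
    """
    mx, my = (p0[0] + p2[0]) / 2, (p0[1] + p2[1]) / 2
    dx, dy = p2[0] - p0[0], p2[1] - p0[1]
    cx, cy = mx - bulge * dy, my + bulge * dx  # perpendicular offset
    pts = []
    for i in range(steps + 1):
        t = i / steps
        # Standard quadratic Bezier: (1-t)^2 p0 + 2(1-t)t c + t^2 p2
        x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * cx + t ** 2 * p2[0]
        y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * cy + t ** 2 * p2[1]
        pts.append((x, y))
    return pts

# In a bipartite layout: left column of nodes at x=0, right column at
# x=1, with a curved link between a node from each side.
curve = bezier_edge((0.0, 0.0), (1.0, 1.0))
```

Feeding these point lists to any plotting backend (or emitting them as SVG paths) gives the gentle arcs in the second figure.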

Stay tuned for the next episode of ‘Pimp my Network’.

Worlds Colliding. Part II

Back in March, I wrote a post entitled Worlds Colliding explaining the failure of Google Buzz as a failure to understand the fundamental structure of complex networks.

Buzz received a large amount of criticism for automatically adding the most contacted people from your inbox to your Buzz follower list. My post explained that because individuals in a social network are members of many social contexts (family, work, friends, etc.), adding nodes from all of these contexts to a single list would cause the contexts to collide (e.g. adding both your wife and your (no longer) secret mistress to your list of followers).

Over the last couple of days, the following talk (from July 1st) by Paul Adams, a User Experience Researcher at Google, has been very visible on the interwebs.

From the looks of it, the good people at the Googleplex have either been reading my blog and the accompanying scientific paper and are scrambling to keep up (I consider this scenario highly unlikely), or the User Experience Group at Google was never in touch with the group behind Buzz.

Let me repeat that last part for dramatic effect: the User Experience Group at Google was never in touch with the group behind Buzz. The knowledge about pervasive overlap and overlapping communities was present within Google, but never diffused to their initial social networking attempt. So the failure of Buzz was in some sense due to separate worlds within Google not communicating properly. That strikes me as a textbook case of tragic irony.

Update, July 15th

I’ve included YY‘s recent slides from the New Frontiers in Complex Networks conference as a quick intro to our thinking regarding pervasive overlap.

The proper reference is Link communities reveal multiscale complexity in networks. Nature (2010), doi:10.1038/nature09182.