Size does matter: Larger brains can hold more!

A long time ago in a galaxy far, far away, people didn’t have mobile phones to store their phone numbers in. People actually had to use their own memory to store these long numbers. But getting these numbers into long-term memory could be a real pain. People had to write the number down and say it over and over to themselves, and with each verbal iteration something annoying would happen: the number would fade out of memory. To get the number into long-term memory you had to keep repeating it, over and over again, fast enough to beat the fade-away.

This short-term, fast-fading memory is called working memory. It’s like the RAM in a computer: it holds everything in your mind, ready for action, simulation or a decision. Working memory is related to our IQ and even to some mental disorders, but we don’t know why some people can fit a lot more information into their working memory than others. Yes, it’s very unfair. As I mentioned before in the post on extending your mind, some people can hold huge amounts of information in mind and even manipulate it, trying out different ideas, while other minds can hold only small amounts.

Why do you have the particular capacity you have? How can we investigate these differences between people? It turns out the key to answering these questions is to get people to remember information in just one of their five senses, such as vision. By doing this we narrow down the field of things to investigate: we can look at the precise brain anatomy related to just that one sense in different people and figure out which parts of the brain allow greater information capacity.

This is exactly what we did in a recent paper from my lab. We found that people with larger brains could hold more temporary information in their mind. Specifically, people with larger visual parts of their brains were the ones who could hold more visual information in mind. This is interesting for a number of reasons. One reason is that it suggests the physical parameters of our brains set the limits on what we can do with non-physical things like the contents of our mind. In other words, the visual cortex is like a bucket: the larger the bucket, the more water it can hold. The larger your visual cortex, the more visual information you can hold in mind. And with more information in mind, you can do… well, a lot more.

Visual working memory capacity is predicted by anatomical properties of primary visual cortex.
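If you’re curious how ‘capacity’ is actually quantified, here is a minimal sketch in Python, with entirely made-up numbers (not data from our paper): estimate each person’s capacity with Cowan’s K from a change-detection task, then correlate it with an anatomical measure such as V1 surface area.

```python
import numpy as np

def cowans_k(set_size, hit_rate, false_alarm_rate):
    # Cowan's K: a standard capacity estimate from change-detection tasks.
    return set_size * (hit_rate - false_alarm_rate)

# Hypothetical values for five participants (illustration only).
set_size = 6
hits = np.array([0.85, 0.70, 0.92, 0.78, 0.88])
false_alarms = np.array([0.10, 0.15, 0.05, 0.12, 0.08])
v1_surface_mm2 = np.array([2100, 1750, 2400, 1900, 2250])  # made-up V1 sizes

k = cowans_k(set_size, hits, false_alarms)
r = np.corrcoef(v1_surface_mm2, k)[0, 1]
print("Capacity estimates (K):", np.round(k, 2))
print(f"V1 size vs. capacity: r = {r:.2f}")
```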

But all this doesn’t make it any fairer for those who can’t hold much ‘in mind’. The next logical question is: why do I have a large or small brain? Well, when it comes to visual cortex, the data suggest that our genes play a role. The cortex, the outer layer of the brain, is like a gooey grey sheet that is all wrinkled up on itself. In fact, there are two different components to the size or volume of the primary visual cortex: thickness and surface area. These two measures seem unrelated to each other, but both have a heritable component. In other words, it seems that your parents or ancestors might have passed your visual cortex down to you, or at least its size.

So does all this come down to luck? Well, as with most things, yes and no. There is now some promising research looking at how training or practice can literally change the architecture of your brain.

The now-famous book The Brain That Changes Itself is a great general read on the topic. However, there are many more specific research papers investigating how practicing visual tasks can change not only your vision but parts of your brain as well.


Here is the reference and link to our original paper:

Bergmann, J., Genç, E., Kohler, A., Singer, W., & Pearson, J. (2014). Neural anatomy of primary visual cortex limits visual working memory. Cerebral Cortex.

Want to know more about all of this? How to measure visual working memory or brain size? Let us know in the comments.

How scared should we be about IBM’s new ‘TrueNorth’ chip? Is it Self-aware and Conscious?

The results of artificial intelligence have, well, thus far been a complete disappointment.

“Machines will be capable, within twenty years, of doing any work that a man can do.” -Herbert Simon, 1965.

But are things about to change? With this week’s announcement of ‘TrueNorth’, the new brain-inspired computer chip from IBM that is set to take computers in an entirely new direction, might artificial intelligence, robotics, the internet and, yes, the scary sci-fi self-aware ‘Skynet’ type of network have come one step closer?

What makes us conscious? What makes us self-aware? These are tough questions that philosophers have pondered for thousands of years. Nowadays, psychologists and neuroscientists are attacking these questions empirically and mathematically. There are entire conferences concentrating on the scientific study of consciousness (e.g. the Association for the Scientific Study of Consciousness; ASSC).

So what makes our brains conscious? The current leading theories of consciousness all propose that it has something to do with the way information is processed in the brain. For example, one theory, Giulio Tononi’s Integrated Information Theory (IIT), talks about the degree of integrated and differentiated information (check out these links for more info: link1, link2). Other, more ‘in the wild’ medical applications focus on the detail or level of complexity in brain activity (read this article for more, or check out this great episode of Radiolab), and these are beginning to be applied during general anesthesia.

These and other theories propose something along the following lines: once a system, any system, brain or computer chip, can ‘hold’ or process information in just the right way (let’s just say it’s coherent, integrated, etc.), it should simply become conscious.
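To make ‘integrated information’ slightly less abstract, here is a toy sketch. This is emphatically not Tononi’s phi, which is far more involved; it simply computes the mutual information between two halves of a made-up binary system, as one crude way of asking how much the parts share information.

```python
import numpy as np

def mutual_information(x, y):
    # Mutual information (in bits) between two binary sequences.
    joint = np.histogram2d(x, y, bins=(2, 2))[0]
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0  # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
half_a = rng.integers(0, 2, 10_000)                 # one 'half' of the system
noise = rng.random(10_000)
half_b_coupled = np.where(noise < 0.9, half_a, 1 - half_a)  # strongly coupled half
half_b_independent = rng.integers(0, 2, 10_000)             # uncoupled half

print(f"coupled halves:     {mutual_information(half_a, half_b_coupled):.3f} bits")
print(f"independent halves: {mutual_information(half_a, half_b_independent):.3f} bits")
```

The coupled system shares roughly half a bit per sample; the independent one shares essentially nothing. Real integration measures go far beyond this, but the flavour is similar: they quantify how much the whole carries beyond its parts.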

Is IBM’s new TrueNorth chip one step towards a conscious non-biological entity?

“We have taken inspiration from the cerebral cortex to design this chip,” says IBM’s chief scientist for brain-inspired computing, Dharmendra Modha.

“It can see an accident about to happen.” – Modha.

IBM says the new TrueNorth chip communicates via an inter-chip interface, which enables ‘seamless scalability’. In other words, it should be no problem to scale TrueNorth up to any size we want. This might just be my paranoia kicking in, but I want to ask: because the neuromorphic TrueNorth system is completely scalable, could it be scaled up to the size and power of the human brain? What would that be like? Would this TrueNorth network become self-aware like us?

Besides Hollywood, there are some serious discussions developing around the real dangers of large-scale, scalable AI systems. For example, check out Superintelligence by Nick Bostrom. He writes about the real danger of a superintelligence that could surpass our own capabilities, and about whether it is possible to control such an entity. In theory we do have one advantage: we are the ones creating it, so we should be able to predict its occurrence.

Elon Musk, of PayPal, Tesla Motors and SpaceX fame, who has an impressive track record of predicting future technology trends and acting on them, recently tweeted:

“…We need to be super careful with AI. Potentially more dangerous than nukes.”- Elon Musk, 2 Aug 2014

Elon Musk with President Obama at Cape Canaveral in 2010 – photo by Steve Jurvetson.

As we don’t yet know when or how conscious self-awareness is created, we won’t necessarily know if we do create it.

Maybe TrueNorth is already a little conscious. Maybe 10 TrueNorth chips working together will become conscious? Maybe 100 TrueNorth chips?

The human brain has around 86 billion (give or take) neurons in it. If each TrueNorth chip has around 1 million ‘neurons’ on board, then perhaps if we hook up 86 thousand of them we might have something closer to a human brain…
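The back-of-envelope arithmetic, using IBM’s published per-chip specs (1 million neurons, 256 million synapses) and the usual rough estimates for the human brain:

```python
# Back-of-envelope: how many TrueNorth chips to match human neuron counts?
brain_neurons = 86e9     # ~86 billion neurons (common estimate)
chip_neurons = 1e6       # 1 million 'neurons' per TrueNorth chip
chip_synapses = 256e6    # 256 million synapses per chip (IBM's published spec)
brain_synapses = 1e14    # ~100 trillion synapses (rough common estimate)

chips_for_neurons = brain_neurons / chip_neurons
print(f"Chips to match neuron count: {chips_for_neurons:,.0f}")  # 86,000

# The catch: even then, synapse counts lag far behind the brain's.
synapse_ratio = brain_synapses / (chips_for_neurons * chip_synapses)
print(f"Brain still has ~{synapse_ratio:.0f}x more synapses than that network")
```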

What would happen then?

As far as I know, IBM hasn’t yet done anything like this. But if TrueNorth chips become as common as smartphones, as IBM seems to be predicting, then this scenario may be realised someday soon.

What signs do we need to be on the lookout for? How would we know if the 86k TrueNorth network is conscious? Because we don’t have a ‘consciousness meter’, we don’t know how to test such a network for consciousness. We have no way of knowing!

As we don’t yet have a measure or operational definition of consciousness, is it dangerous to build something that our best theories suggest could become conscious? Is this akin to building the AI nuke of which Musk warns us? In this analogy we are building something with potential bomb-like capacity, but with no ability to monitor safety, because we don’t know when it will become bomb-like or if it already is.

On the other hand, maybe all this talk really just belongs in Hollywood, and the chances of us actually creating AI are negligible. But the issue is that we don’t know, and currently have no way of finding out. Would such Superintelligence eventually attempt some form of communication with us?

What if it didn’t… We would never know that it even existed.


Here’s a link to the actual neuromorphic TrueNorth paper in Science, although it is behind a paywall so you might not have access.


Are you scared? Think this would make a great Hollywood film? Want to know more about all of this? Let us know in the comments.

How to instantly boost your brainpower by extending your mind…

Does this sound familiar? You’re in a meeting discussing the future direction of your new company: should you pivot this way or stay the course?

You try to imagine one future vs. the other. You visualise a timeline of your startup: the ups and downs, the transformations, the new and old products. At the same time you try to visualise the company you want – the bright future, the success, changing the world – but as all this imagery flashes through your mind, something annoying happens. You try really hard, using all your mental effort to squeeze these mental simulations into existence simultaneously. To make the right choice you need a clear comparison between your options, but every time you try to hold all the information in mind, it almost hurts; you almost get there at peak effort, but with each new visualisation, another is lost. You just can’t do it. Your mind disobeys your orders; it simply won’t, or can’t, hold all the information at once.

I want to let you in on a little secret: some of us can hold more information in mind than others. Yes, that’s right, it’s completely unfair. You might know a high-capacity individual, someone who seems to stare magically into space, compare all the many, many options and come up with a clear new idea – all done virtually in their mind.

Well, you heard it here: just like some of us have newer, faster computers with more RAM, some of us can do more and see more with our minds – YES, I KNOW IT’S COMPLETELY UNFAIR.

But there is a way to level the playing field.

Here’s another secret: there is a surprisingly simple way to overcome this unfairness. If you are not already doing this, you will be amazed at how much more capacity you can add to your mind (mind extension / brainpower boost).

Here is the solution: use physical aids, like a whiteboard, cards or a huge touch screen – anything that gets the information down, something visible, so you don’t have to hold it all in your mind. Yes, yes, I hear you saying we all take notes or use a whiteboard in meetings; that’s nothing new.

What I’m telling you goes further. It is a clear and simple strategy to keep your mind free and empty, so you can focus your entire mind on coming up with new ideas.



1. Make (physical) visual symbols that clearly show or represent all the required elements, ingredients or pieces of your puzzle.

2. The physical visual symbols need to be dynamic, so you can move them around. This is important: if they are not dynamic, you will just end up having to use your mind again (and we don’t want that).

3. You need to be able to add and subtract the physical visual symbols on the fly.

Put all the pieces of the puzzle down, dynamically so you can move them around.

You need to be able to play with the geometry here, the relationships, in any way you want: which thing comes first, which ones are related, etc. If you cannot shift these chunks of information around using your symbols, you will fall back into the same trap again. You will start trying to hold them all in your mind, and you’re back to square one: the capacity of your mind will be all used up!

The only thing your mind should be doing is looking at the global relationships and playing around to come up with new ideas – the light-bulb moment.

If you start having to imagine other elements, then stop. You are using your mental capacity on the ingredients or content again. You are ‘wasting’ your mind on representation; you want to save your mind for creation.

Stop and make new symbols. Your mind should be only focusing on new ideas.

By using these physical visual objects, you free up your mind to do the actually important stuff – coming up with new creative ideas.

It’s very simple: you CANNOT do both things at once. You cannot visualise all your options, move them around, look at the big picture and come up with a new creative idea all at the same time. When you attempt this, you use up all your brain’s power creating and holding the elements in mind – a waste (in this situation). The priority is the creative new idea, NOT holding the ingredients in mind.


There is plenty of good science to back all this up:

1. What we can hold in mind is severely limited: only around 3-4 visual items (Baddeley, 2012; Luck et al., 2013), though somewhat more digits can be held.

2. Holding information in mind like this is called using working memory, and it is effortful, hard work.

3. Each thing we hold in working memory can interfere with other things in working memory (Franconeri et al., 2013).

4. Multitasking is a myth: what we actually do when ‘multitasking’ is switch back and forth between different tasks, and when we do this we make more errors in each task (Rogers et al., 1995).

John Medina has some great, easy-to-follow material on this topic in his book Brain Rules.

Specifically, for easy-to-follow info on multitasking, check out the section of Brain Rules on attention.

Mind Extension:

More generally, this idea of extending your mind has been around for a while. The well-known philosophers David Chalmers and Andy Clark have written about it; have a look here.

It is a simple but profound idea: everything we use to perform actions is in some way an extension of our mind – just like the visual symbols above.

One of the most common examples of this in recent times is using your cell phone to remember phone numbers for you. Not too long ago, we used to spend time and effort memorizing phone numbers, just like in the example I started with above. This taxes working memory: while using it to memorize the number, you cannot use it for other things. Now many of us use a smartphone that effortlessly stores phone numbers alongside photos and the name of each contact, so we don’t have to waste our working memory on phone-number memorization.

Here is a great Sydney TED talk by Chalmers on the extended mind:

As Chalmers mentions, Google can be thought of as an amazing extension of our mind. Rather than spending hours, days or weeks committing things to memory, we now have information on DEMAND.

Having information on demand at the click of a button has major implications for entrepreneurs, scientists and, perhaps most of all, education. I have an upcoming post on information on DEMAND, so stay tuned.

Have an interesting story of freeing your mind? How have you used technology to free your mind so you can focus on new ideas? Please share…



Baddeley, A. (2012). Working memory: theories, models, and controversies. Annual Review of Psychology, 63(1), 1–29. doi:10.1146/annurev-psych-120710-100422

Franconeri, S. L. et al. (2013). Flexible cognitive resources: competitive content maps for attention and memory. Trends in Cognitive Sciences, 17(3), 134–141. doi:10.1016/j.tics.2013.01.010

Luck, S. J. et al. (2013). Visual working memory capacity: from psychophysics and neurobiology to individual differences. Trends in Cognitive Sciences, 17(8), 391–400. doi:10.1016/j.tics.2013.06.006

Rogers, R. D. et al. (1995). Costs of a predictable switch between simple cognitive tasks. Journal of Experimental Psychology: General, 124(2), 207–231.

How rewarding the conservative is killing innovation and discovery in science

There’s a great little trick that audiences can play on someone giving a lecture (try it next time you’re in a lecture). All the audience has to do is pick one side of the auditorium to be the happy side and the other to be the angry side. If you are on the ‘happy’ side, smile each time the lecturer looks over in your direction. If you are on the ‘angry’ side, frown and look unhappy. You will soon notice the speaker gravitating towards the ‘happy’ side of the auditorium.

There is a near-universal premise behind this little trick that is also the backbone of behavioural economics: we respond to rewards. Simply rewarding certain actions results in dramatic changes to behaviour. Behavioural economists and psychologists have shown countless times that rewarding or incentivizing behaviour can have profound effects on society, with both good and bad long-term outcomes.

There are many different ways we hear about new science: conference talks, discussions with colleagues, reading published papers, reviewing papers for journals or even reviewing research grants. When a colleague tells me about a new piece of exciting research, the natural dynamic is that the attention is on them: they are giving the ‘presentation’, and I am the audience. I really only have a few possible response options:

1. Tell them honestly how cool and exciting their finding is.

2. Ask for clarification, then, once I understand, give some form of positive response.

3. Take on the role of skeptic and ask about alternative causes of their finding. To put it bluntly: try to show how their discovery is wrong.

Of course I’m not trying to be negative (I promise), but this scenario poses an interesting situation. As the ‘audience’ listening to my colleague’s breakthrough discovery, there are two ways to impress them: come up with a more exciting breakthrough, or logically falsify their discovery by giving an alternative explanation of their data. So if I don’t have a competitive discovery of my own, then the only way I can impress my colleague is to be more conservative or skeptical than they are, and tell them I cannot be sure their discovery is what they say it is. If I want to look good or impress, my only real option is to out-conserve them.

Of course being conservative is an important part of science; precision and being sure of our claims are a must. However, the inherent asymmetry of falsification in the scientific method naturally forces an audience into a position of ‘more conservative = better’. So once my colleague has finished telling me about their amazing new research, I have an urge to impress them, as one naturally does in the presence of a person one respects, so I quickly rack my brain until I come up with an alternative explanation of their data. “What about this…” I ask. “Couldn’t this also explain your data? You might need to run further control experiments to exclude this potential confound.” “Oh, you’re right, good point,” my colleague replies with a sigh.

This in isolation is not really a problem, until we realise that what just transpired was analogous to the happy side of the audience smiling at the lecturer. Just like the lecturer and the smiles, by being more conservative than my colleague I was rewarded by their respect. So again, like the lecturer who will return to the happy side of the auditorium, next time I hear about a colleague’s new research I will move to the conservative side of the spectrum to get my reward.

Might incentivizing conservatism, maybe, just maybe, produce an environment in which individuals shy away from risky or ambiguous science? Once the habit of conservatism has formed, it won’t just be applied to the work of others, but also to our own. Novel, leftfield ideas will be put aside as too crazy or too risky, and potential discoveries will be lost. On a mass scale, scientific progress will slow and operate in a much more conservative, parametric ‘safe zone’.

It’s hard to know what kind of impact incentivizing conservatism might have on a large scale, but for a moment think through all the scenarios in which moving to the conservative side of the spectrum can win you the incentive of respect. Asking questions after a conference talk has the potential for a big reward, as you get the opportunity to impress many people at once. What about when reviewing papers? Or grants? Both provide opportunities to be rewarded for moving to the conservative side of the spectrum. When reviewing a paper, if I want a journal editor to respect me, the best course of action is to come up with an alternative to what the authors are claiming in the manuscript: maximizing conservatism wins me respect. Unnecessarily boosting conservatism like this forces people to use more resources checking every possible alternative, while promoting conservative, safe science.

What is the effect of incentivizing conservatism year-in and year-out like this? Is it a bad thing? After all, we want to be sure about science, especially if there are important or dangerous implications, as with climate science or life-saving medical procedures. However, at the same time we also want to maximise breakthrough discoveries. The discoveries that will most profoundly change our lives are the ones we aren’t expecting and can’t predict, and it is precisely these that are lost as the science community becomes more and more conservative.

Black Swans: the now-famous term coined by Nassim Taleb for the huge impact of unexpected rare events.

Do the rewards of discovery (respect etc.) outweigh the rewards for conservative skepticism (also respect)? Yes, probably, but there is a huge difference in difficulty between making a breakthrough discovery and being conservative. There is still no real recipe or how-to guide for breakthrough discoveries, whereas for doubt and skepticism, simple logic will give you all the alternative explanations you need. This means anyone can be a skeptic anytime; it’s easy. Coming up with new breakthroughs, however, is hard and unpredictable. In other words, being conservative in science is an easier path to reward than going for a novel discovery.

Is there a way to prevent or counter this conservatism in science? One idea is that by simply acknowledging the nature of the incentive system, we should become less influenced by it.

Many venture capitalists invest in countless different ventures, knowing full well that the majority will fail; they are relying on the radical success of a small minority. Just one huge success can more than make up for all the failures. Nassim Nicholas Taleb coined the phrase “Black Swan” to describe the huge impact of a rare and unexpected event. Black Swan investing is a strategy in which you bet on an extreme event, positive or negative, occurring at some stage in the market. Over time this strategy can be costly, because every day the rare event doesn’t occur costs you money, but when a Black Swan event eventually occurs (the GFC, a volcano, Google etc.), the win is so great that it dwarfs the accumulated slow loss.
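A toy Monte Carlo sketch, with completely made-up numbers, shows the logic: small daily losses, then one rare payoff that dwarfs them.

```python
import numpy as np

# Toy Black Swan bet: lose a small premium every day, win big on a rare event.
rng = np.random.default_rng(42)
days = 252 * 10            # ten trading years
daily_premium = 1.0        # cost of holding the position each day
event_prob = 1 / 2500      # rare event: roughly once a decade
event_payoff = 5000.0      # outsized win when it hits

events = rng.random(days) < event_prob
pnl = np.where(events, event_payoff - daily_premium, -daily_premium)
print(f"Rare events in {days} days: {events.sum()}")
print(f"Cumulative P&L: {pnl.sum():,.0f}")
```

With these (hypothetical) numbers, a single hit more than covers a decade of accumulated premiums; with no hit, you slowly bleed. That asymmetry is the whole strategy.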

Could we apply this Black Swan strategy to science? What would it look like? What would it involve? One way would be to fund and green-light high-risk, high-reward research projects, knowing full well that most of these projects will fail. But a small percentage, and it only needs to be small, will end up being Black Swan discoveries that have a huge impact on technology, medicine or our understanding of the world around us.

Would such an ‘open-minded’, liberal research strategy change the conservative nature of science? Maybe not, but perhaps by explicitly acknowledging that certain science projects are by design high-risk and high-failure, the reward incentives might shift away from conservative skepticism. In other words, by shifting the focus or value to discovery as opposed to doubt and conservatism, we might just be able to boost the number of life-changing discoveries.

Agree? Disagree? I’d love to hear from you…

Measuring the mind’s private images

Mental imagery, the voluntary retrieval and representation of sensory information from memory, has a fascinating history. Historically, mental-imagery research suffered criticism because of methodological constraints caused by imagery’s inherently private nature. Recently, many objective research methods have been introduced that allow a more direct investigation of the mechanisms and neural substrates of mental imagery. These new methods have spurred numerous new discoveries, culminating in a flurry of impactful publications over the past few years.

Although imagery has played a distinct role in discussions of mental function for thousands of years, empirical work on imagery did not gain strong traction until the last 30 or 40 years. Despite this recent traction, mental-imagery research has still not enjoyed the same degree of investigative attention as other psychological topics. For example, the graph shows that the number of articles published each year that include the phrase “mental imagery” in the title, compared with those that include “visual attention” or “visual working memory,” is relatively low.

In the 1970s, cognitive psychologists started to develop clever methods to measure and study mental imagery objectively. Some of the early discoveries demonstrated a clear relationship between the content of mental images and the time it took to generate or manipulate them (Kosslyn et al., 1978; Shepard et al., 1971). The larger the imagery manipulation, the longer it took to complete, suggesting a correspondence between imagery and physical space.
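As a rough illustration of that kind of result, here is a sketch with synthetic data mimicking a Kosslyn-style image-scanning experiment (the numbers are invented): response times grow linearly with the imagined distance scanned.

```python
import numpy as np

# Synthetic scanning data: response time grows with imagined distance.
rng = np.random.default_rng(1)
distance_cm = np.repeat(np.arange(2, 16, 2), 20)   # imagined distances, 20 trials each
rt_ms = 400 + 55 * distance_cm + rng.normal(0, 60, distance_cm.size)

slope, intercept = np.polyfit(distance_cm, rt_ms, 1)
print(f"~{slope:.0f} ms of extra 'scanning' time per imagined cm")
```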

More recently, there has been a jump in brain-imaging work investigating mental imagery. A recent trend of analyzing the information content of fMRI patterns (instead of the mean amplitude change) has yielded interesting results. This work is often described as ‘decoding’ because one of the more popular methods trains an algorithm to decode, or make a prediction about, the experimental condition or task on the basis of the spatial pattern of the fMRI signal across a brain area (Tong et al., 2012).
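A minimal sketch of the decoding logic, using synthetic ‘voxel’ patterns and scikit-learn (this is an illustration of the general approach, not the analysis pipeline of any particular paper):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic 'fMRI' data: 100 trials x 50 voxels, two conditions (e.g. imagine A vs. B).
rng = np.random.default_rng(0)
n_trials, n_voxels = 100, 50
labels = np.repeat([0, 1], n_trials // 2)
condition_pattern = rng.normal(0, 0.4, (2, n_voxels))  # condition-specific spatial pattern
patterns = rng.normal(0, 1, (n_trials, n_voxels)) + condition_pattern[labels]

# Decode the condition from the spatial pattern, with 5-fold cross-validation.
clf = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(clf, patterns, labels, cv=5)
print(f"Decoding accuracy: {accuracy.mean():.2f} (chance = 0.50)")
```

Above-chance accuracy is the evidence that the spatial pattern carries information about the condition, which is exactly the logic applied to imagery: if a classifier can tell what you imagined from your visual cortex activity, the image content is represented there.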

Recent work from our lab has demonstrated that imagery can facilitate subsequent perception (Pearson et al., 2008). By separating the periods of imagery generation and perception in time, the effects of imagery can be examined without the potential confounds of attention (Carrasco et al., 2004). This work demonstrated that when individuals imagine one of two patterns, that pattern has a much higher probability of being perceptually dominant in a subsequent brief binocular rivalry presentation (Pearson et al., 2008; 2011). In other words, the content of the mental image primes subsequent dominance in binocular rivalry: it changes visual awareness of the rivalry display. Binocular rivalry is a visual phenomenon that occurs when two different visual stimuli are presented, one to each eye, such that they are forced to coexist at the same visual location. One pattern tends to be dominant over the other, forcing it out of awareness. Binocular rivalry has been a hugely popular tool for studying visual awareness in recent times (Tong et al., 2006). Here, however, rivalry was used as a tool to measure the sensory strength, or ‘visual energy’, of mental imagery, enabling individual episodes of imagery to be assessed in an indirect and objective sensory manner. This discovery is also interesting in its own right, as it demonstrates that what we imagine can literally change how we see the world.
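The scoring itself is simple. A sketch, with simulated trials standing in for real observers: count how often the imagined pattern dominates the subsequent rivalry presentation, and compare with the 50% expected by chance.

```python
import numpy as np

# Hypothetical rivalry session: on each trial the observer imagines one of two
# patterns, then reports which pattern dominated the brief rivalry display.
rng = np.random.default_rng(7)
n_trials = 80
imagined = rng.integers(0, 2, n_trials)   # which pattern was imagined (0 or 1)
bias = 0.65                                # made-up priming strength
dominant = np.where(rng.random(n_trials) < bias, imagined, 1 - imagined)

priming = (dominant == imagined).mean()
print(f"Imagined pattern dominated on {priming:.0%} of trials (chance = 50%)")
```

The proportion above 50% is the ‘sensory strength’ estimate for that observer: the stronger the imagery, the more reliably it wins the subsequent rivalry.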

To read more on recent developments in objective methods to measure mental imagery check out the recent paper from which some of the above text was taken:

Pearson, J. (2014). New directions in mental imagery research: the binocular rivalry technique and decoding fMRI patterns. Current Directions in Psychological Science, 23(3), 178–183.


Or catch my upcoming tutorial at the 2014 ASSC meeting: Seeing what’s not there and measuring it: Conscious perception without a stimulus




Carrasco, M. et al. (2004). Attention alters appearance. Nature Neuroscience, 7(3), 308–313. doi:10.1038/nn1194

Kosslyn, S. M. et al. (1978). Visual images preserve metric spatial information: evidence from studies of image scanning. Journal of Experimental Psychology: Human Perception and Performance, 4(1), 47–60.

Pearson, J. et al. (2008). The functional impact of mental imagery on conscious perception. Current Biology, 18(13), 982–986. doi:10.1016/j.cub.2008.05.048

Pearson, J. et al. (2011). Evaluating the mind’s eye: the metacognition of visual imagery. Psychological Science. doi:10.1177/0956797611417134

Shepard, R. N. et al. (1971). Mental rotation of three-dimensional objects. Science, 171(3972), 701–703.

Tong, F. et al. (2006). Neural bases of binocular rivalry. Trends in Cognitive Sciences, 10(11), 502–511. doi:10.1016/j.tics.2006.09.003

Tong, F. et al. (2012). Decoding patterns of human brain activity. Annual Review of Psychology, 63(1), 483–509. doi:10.1146/annurev-psych-120710-100412