Johnjoe McFadden

Johnjoe McFadden is Professor of Molecular Genetics at the University of Surrey, and has published more than 100 articles in scientific journals on subjects as wide-ranging as bacterial genetics, tuberculosis, idiopathic diseases and computer modeling of evolution. He has lectured extensively in the UK, Europe, the USA and Japan, and his work has been featured on radio and television and in national newspapers. He wrote the popular science book Quantum Evolution, which examines the role of quantum mechanics in life, evolution and consciousness.

 

He also writes articles regularly for the Guardian newspaper in the UK on topics as varied as quantum mechanics, evolution and genetically modified crops. Most controversial were two papers published in the Journal of Consciousness Studies, in which McFadden proposed that the brain's em information field is the physical substrate of conscious awareness: Synchronous Firing and its Influence on the Brain's Electromagnetic Field: Evidence for an Electromagnetic Field Theory of Consciousness, and The Conscious Electromagnetic Information (CEMI) Field Theory: The Hard Problem Made Easy?

 

This interview was conducted by Norm Nason and was originally published in the website, Machines Like Us, on November 26, 2007. © Copyright Norm Nason—all rights reserved. No portion of this interview may be reproduced without written permission from Norm Nason.

 

 

NORM: Welcome, Johnjoe. It's a pleasure having you here.

 

JOHNJOE: And it's always a pleasure to chat with you, Norm.

 

NORM: I want to ask you about the particulars of your work, but before I do, perhaps we should first touch on some of the problems cognitive scientists face when trying to construct a viable theory of consciousness. The so-called "binding problem," for instance, refers to how neurons associated with different aspects of perception are able to combine to form a unified perceptual experience. The "mind-body problem" deals with the question of how the mind is able to move our physical bodies. Please tell us more about the difficulties one faces when trying to construct a cohesive model of consciousness.

 

JOHNJOE: The basic problem is that our subjective experience of consciousness does not correspond to the neurophysiology of our brain. When we see an object, such as a tree, the image that is received by our eyes is processed, in parallel, in millions of widely separated brain neurons. Some neurons process the colour information, some process aspects of movement, some process texture elements of the image. But there is nowhere in the brain where all these disparate elements are brought together. That does not correspond to the subjective experience of seeing a whole tree, where all the leaves and swaying branches are perceived as an integrated whole. The problem is understanding how all the physically distinct information in our brain is somehow bound together into a single subjective image: the binding problem.

 

NORM: The famous experiments conducted by neurobiologist Benjamin Libet suggest that we are consciously aware of making a decision to act after our brain initiates that action -- but before the action actually takes place (moving an arm, for instance). This raises the question: do we have free will, or is the feeling that we are in control of our destiny merely an illusion?

 

JOHNJOE: I am puzzled why Libet’s findings are considered at all surprising. In fact, anything other than his findings would be astonishing. Consider if Libet (and subsequent studies) had been unable to detect any changes in brain activity prior to our awareness of an intention to perform an action. Awareness would then be an uncaused cause—a ghost in the machine—an effect that had no physical cause. This would mean that awareness contradicted all the laws of causality—it would be magic. Consciousness would then stand apart from the rest of science and force us to re-evaluate every scientific notion based on causality and determinism.

 

But of course awareness, like all other events, is caused by preceding events in our brain. So it is not causality that is problematic but our notion of free will. Again, this is only problematic if we think of free will as an uncaused cause -- a ghost in the machine. My conception of free will is that it is the influence of the brain’s em field—our conscious mind—on the operations that the brain directs: our actions. So consciousness is not a mere steam whistle of brain action (as Huxley suggested) but plays a vital role in determining our actions. To put it another way, if consciousness were not playing a role, our actions would be very different—we would act like robots. But this conscious em field—our ‘free’ will—is not an uncaused cause: its structure and dynamics are determined by earlier activity in the brain. It isn’t really free in the sense of being non-deterministic. But then how could it be, without invoking magic?

 

NORM: Your explanation of consciousness is one of the most original and intriguing that I have come across. You have done an excellent job of both providing evidence for your theory and defending it against criticism. Please give us an overview of your cemi field theory.

 

JOHNJOE: Put simply, the cemi field is that component of the brain's electromagnetic (em) field that influences our actions. The theory proposes that the seat of consciousness is the brain's em field. This solves the binding problem, because all of the information held in scattered neurons is unified in the brain's em field. A number of researchers (e.g. Susan Pockett) have proposed this much, but the cemi field theory goes one step further and proposes that the field loops back to influence brain activity via electromagnetic induction: the brain's em field influences neuronal membrane potentials, and thereby the probability of neuron firing, and so influences our actions. This influence we experience as 'free will'.
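To make the loop-back idea concrete, here is a purely illustrative toy model (my own sketch, not McFadden's formal proposal): a global "field" value, computed from how many neurons have just fired together, is fed back to nudge every neuron's membrane potential and hence its probability of firing on the next step. All numbers are arbitrary.

```python
# Toy sketch only: a global field value derived from summed firing feeds back
# into every neuron's membrane potential. All parameters are invented.
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 1000
coupling = 0.05      # assumed strength of field-to-neuron feedback
threshold = 1.0

potential = rng.uniform(0.0, 1.0, n_neurons)   # arbitrary starting potentials
for step in range(100):
    # Intrinsic drive plus noise pushes potentials towards threshold.
    potential += 0.1 + 0.05 * rng.standard_normal(n_neurons)
    fired = potential > threshold
    # Crude stand-in for the em field: larger when more neurons fire together.
    field = fired.mean()
    # Feedback: the field depolarises all neurons slightly, raising the chance
    # that near-threshold neurons fire on the next step; fired neurons reset.
    potential = np.where(fired, 0.0, potential + coupling * field)

print(f"fraction of neurons firing on the final step: {field:.3f}")
```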

 

NORM: You have pointed out that all aspects of your cemi field theory are testable. Has any progress been made on this front?

 

JOHNJOE: The cemi field theory predicts that synchronous firing of neurons will have a greater influence on our actions than asynchronous neuron firing. This is because synchronous activity will generate in-phase em field disturbances that have a greater chance of influencing neuron firing patterns. So a major experimental prediction of the model is that willed actions and awareness will correlate with synchronous neuron firing. In my papers I describe many experiments that have demonstrated this in animal models and human studies (e.g. EEG studies). Since then there have been many additional studies that support this role of neuronal synchrony. For instance:

 

Womelsdorf T, Schoffelen JM, Oostenveld R, Singer W, Desimone R, Engel AK, Fries P (2007). Modulation of neuronal interactions through neuronal synchronization. Science 316(5831):1609-12.

 

Abstract: Brain processing depends on the interactions between neuronal groups. Those interactions are governed by the pattern of anatomical connections and by yet unknown mechanisms that modulate the effective strength of a given connection. We found that the mutual influence among neuronal groups depends on the phase relation between rhythmic activities within the groups. Phase relations supporting interactions between the groups preceded those interactions by a few milliseconds, consistent with a mechanistic role. These effects were specific in time, frequency, and space, and we therefore propose that the pattern of synchronization flexibly determines the pattern of neuronal interactions.

 

And there is evidence that this mechanism is involved in awareness:

 

Melloni L, Molina C, Pena M, Torres D, Singer W, Rodriguez E (2007). Synchronization of neural activity across cortical areas correlates with conscious perception. J Neurosci 27(11):2858-65.

 

Abstract: Subliminal stimuli can be deeply processed and activate similar brain areas as consciously perceived stimuli. This raises the question which signatures of neural activity critically differentiate conscious from unconscious processing. Transient synchronization of neural activity has been proposed as a neural correlate of conscious perception. Here we test this proposal by comparing the electrophysiological responses related to the processing of visible and invisible words in a delayed matching to sample task. Both perceived and non-perceived words caused a similar increase of local (gamma) oscillations in the EEG, but only perceived words induced a transient long-distance synchronization of gamma oscillations across widely separated regions of the brain. After this transient period of temporal coordination, the electrographic signatures of conscious and unconscious processes continue to diverge. Only words reported as perceived induced (1) enhanced theta oscillations over frontal regions during the maintenance interval, (2) an increase of the P300 component of the event-related potential, and (3) an increase in power and phase synchrony of gamma oscillations before the anticipated presentation of the test word. We propose that the critical process mediating the access to conscious perception is the early transient global increase of phase synchrony of oscillatory activity in the gamma frequency range.
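To make the physical intuition behind the synchrony prediction concrete, here is a minimal numerical sketch (my own illustration, not taken from either paper above): oscillating sources that fire in phase sum coherently, so their combined field grows roughly with the number of sources, whereas randomly phased sources largely cancel.

```python
# Minimal illustration: in-phase oscillations sum coherently (amplitude ~ N),
# randomly phased oscillations largely cancel (amplitude ~ sqrt(N)).
import numpy as np

rng = np.random.default_rng(1)
n_sources = 100
t = np.linspace(0.0, 1.0, 1000)
freq = 40.0   # a gamma-band frequency in Hz, chosen purely for illustration

in_phase = sum(np.sin(2 * np.pi * freq * t) for _ in range(n_sources))
random_phase = sum(np.sin(2 * np.pi * freq * t + rng.uniform(0, 2 * np.pi))
                   for _ in range(n_sources))

print("peak amplitude, synchronous sources :", np.abs(in_phase).max())      # ~100
print("peak amplitude, asynchronous sources:", np.abs(random_phase).max())  # ~10
```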

 


 

NORM: What are the consequences of your cemi field theory for artificial intelligence?

 

JOHNJOE: One of the proposals of the cemi field theory is that the cemi field performs field computation in the brain (a process very similar to quantum computing), and that this is the major advantage of consciousness that has been selected by natural selection. Computers currently lack this level of interaction and so lack the cemi-field-mediated general intelligence that field computing provides. I therefore predict that computers that compute only through wires will never acquire general intelligence and will never be aware.

 

However, there is nothing magical about cemi field awareness: it could be simulated by a computer with an architecture that allowed computations to take place through field interactions. Such computers would acquire natural intelligence and awareness.

 

NORM: It sounds like a marvellous postgraduate student project. Do you know of any efforts to build a computer in this way?

 

JOHNJOE: Bruce MacLennan at the University of Tennessee has proposed that field-level information processing might be able to perform some computational manipulations, such as Fourier transforms, wavelet transforms, linear superpositions or Laplacians, more efficiently than digital computers. Efforts to design optical computers—for instance, using vertical-cavity surface-emitting laser (VCSEL) arrays to interconnect circuit boards and thereby exploit field-level information transfer and processing—are also ongoing.
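As a rough illustration of what field computation means here (my own sketch, with an invented field), the defining feature is that an operation such as a Fourier transform or a Laplacian is applied to an entire field in one step, rather than to one value at a time.

```python
# Rough sketch of field-level operations applied to a whole (discretised) 2-D
# field at once; the field contents are invented purely for illustration.
import numpy as np

x = np.linspace(-1.0, 1.0, 256)
X, Y = np.meshgrid(x, x)
field = np.exp(-(X**2 + Y**2) / 0.1)   # a smooth bump standing in for an em field

# Fourier transform of the whole field in a single operation.
spectrum = np.fft.fft2(field)

# Discrete Laplacian of the whole field, again as one field-wide operation.
laplacian = (np.roll(field, 1, axis=0) + np.roll(field, -1, axis=0) +
             np.roll(field, 1, axis=1) + np.roll(field, -1, axis=1) - 4 * field)

print(spectrum.shape, laplacian.shape)
```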

 

An intriguing experiment performed by the School of Cognitive & Computing Sciences (COGS) group at Sussex University appears to have (accidentally) evolved a field-sensitive electronic circuit. The group used a silicon chip known as a field-programmable gate array (FPGA), composed of an array of cells. Electronic switches distributed through the array allow the behaviour and connections of the cells to be reconfigured from software. Starting from a population of random configurations, the hardware was evolved to perform a task, in this case distinguishing between two tones. After about 5,000 generations the network could perform its task efficiently. When the group examined the evolved network they discovered that it utilized only 32 of the 100 FPGA cells. The remaining cells could be disconnected from the network without affecting performance. However, when the circuit diagram of the critical network was examined, it was found that some of the essential cells, although apparently necessary for network performance (if disconnected, the network failed), were not connected by wires to the rest of the circuit. According to the researchers, the most likely explanation is that these cells were contributing to the network through electromagnetic coupling—field effects—between components in the circuit. It is very intriguing that evolution of an artificial neural network appeared to capture field effects spontaneously as a way of optimizing computational performance. This suggests that natural evolution of neural networks in the brain would similarly capture field effects, precisely as proposed in the cemi field theory. The finding may have considerable implications for the design of artificial intelligence.
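For readers unfamiliar with this style of experiment, the evolutionary loop looks roughly like the sketch below (my own simplification; the fitness function, population size and mutation rate are all invented). In the real Sussex experiment each bitstring configured physical hardware, and fitness was measured from the circuit's actual response to the two tones.

```python
# Highly simplified sketch of hardware evolution: a population of random
# bitstring configurations is scored, the fittest are kept, and mutated copies
# form the next generation. The fitness function here is only a placeholder.
import numpy as np

rng = np.random.default_rng(2)
genome_length = 1800       # arbitrary; real FPGA configuration bitstreams vary
pop_size = 50

def fitness(genome):
    # Placeholder score: in the real experiment this was a measurement of how
    # well the configured circuit distinguished the two input tones.
    target = np.arange(genome_length) % 2
    return (genome == target).mean()

population = rng.integers(0, 2, size=(pop_size, genome_length))
for generation in range(200):
    scores = np.array([fitness(g) for g in population])
    parents = population[np.argsort(scores)[-10:]]          # keep the 10 fittest
    children = parents[rng.integers(0, 10, size=pop_size)]  # resample parents
    flips = rng.random(children.shape) < 0.001              # rare bit mutations
    population = np.where(flips, 1 - children, children)

print("best fitness after evolution:", max(fitness(g) for g in population))
```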

 

It would be fascinating to follow up these ‘accidental experiments’ with a targeted research programme to develop field-sensitive computers; but sadly, the funding bodies (at least those in Europe) have not been convinced by several of my grant proposals. But if anyone is out there who would like to sponsor some really exciting blue-sky research, get in touch!

 

NORM: You are a professor of molecular genetics, and yet you write extensively about the brain and cognition. How did work in your primary field lead you to think so deeply about the other?

 

JOHNJOE: I came across some work indicating that quantum mechanics might be responsible for a phenomenon in microbial genetics: adaptive mutation. I decided to write a book about quantum mechanics and biology, and proposed to include a chapter on the Penrose/Hameroff theory of quantum consciousness. However, I found their quantum theory utterly unconvincing. But researching the topic convinced me that consciousness required a field (to solve the binding problem). So I started to look around for a field in the brain that could account for consciousness. I didn’t have to look far!

 

NORM: In your book Quantum Evolution you make the case for evolution being driven by quantum effects within the genes. Can you summarize your provocative thesis for us?

 

JOHNJOE: DNA is already a quantum code. The precise position of protons along the double helix is responsible for the genetic code, and if you want to describe the position of protons then you must use quantum mechanics. I proposed in Quantum Evolution that DNA is sometimes able to enter a state of quantum coherence in which the double helix can explore multiple coding states. This quantum-coherent DNA may account for phenomena such as adaptive mutations and even the origin of life.
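One schematic way to write this idea in standard quantum notation (my own illustration, not an equation from the book): if a coding proton can occupy either of two positions along a hydrogen bond, the base pair sits in a superposition of its normal and tautomeric (alternative) coding forms.

```latex
% Schematic only: a coding proton delocalised over two positions places the
% base pair in a superposition of canonical and tautomeric coding states.
\[
  \lvert \text{base pair} \rangle \;=\;
  \alpha \,\lvert \text{canonical form} \rangle \;+\;
  \beta \,\lvert \text{tautomeric form} \rangle ,
  \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1 .
\]
```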

 

NORM: One of the unanticipated consequences of the Human Genome Project has been the discovery that epigenetics plays a major role not only in cellular differentiation, but perhaps in transgenerational inheritance, or so-called "cell memory," as well. For the benefit of readers who may not be familiar with the term, epigenetics essentially refers to the turning on or off of genes as a result of environmental factors—a sort of Lamarckian inheritance, possibly affecting later generations. Although this remains speculative, if it does occur then some instances of evolution would indeed be separated from standard genetic inheritance. Do you think epigenetics may be another aspect of evolution and adaptation?

 

JOHNJOE: There is no question that epigenetic inheritance occurs, often due to the inheritance of proteins from mother to daughter cell. But I doubt that it is of major importance as it seems to be a rather unreliable means of inheritance.

 

NORM: What do you mean by unreliable?

 

JOHNJOE: Protein molecules aren’t strictly segregated in the way chromosomes are, so some cells will inherit the information but others won’t—there will be a stochastic element to inheritance. This strikes me as unreliable and, if the proteins were encoding important information, evolution would substitute more reliable genetic inheritance.
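To put a rough number on that unreliability (a back-of-envelope illustration of my own, not from the interview): if a mother cell carries n copies of a protein and each copy independently ends up in either daughter with probability one half, the number a given daughter inherits is binomially distributed, so low-copy-number proteins are partitioned very noisily.

```latex
% Back-of-envelope illustration of stochastic protein partitioning at division.
\[
  k \sim \mathrm{Binomial}\!\left(n, \tfrac{1}{2}\right), \qquad
  P(k = 0) = 2^{-n}, \qquad
  \frac{\sigma_k}{\langle k \rangle} = \frac{1}{\sqrt{n}} .
\]
% For n = 4 copies, a daughter cell inherits none of the protein about 6% of
% the time, whereas chromosomes are segregated essentially without loss.
```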

 

NORM: Not long ago I sent you a link to a paper by a team from Vanderbilt University, a blueprint for assembling a synthetic cell using only 115 genes. You responded by saying something to the effect that if they pulled that off, you'd eat your shorts! Can you elaborate on your feelings about recent efforts to build minimal synthetic life forms?

 

JOHNJOE: There seems to be a strong evolutionary drive towards simplicity. If a gene is no longer required by an organism then it rapidly (in evolutionary terms) mutates to become an inactive pseudogene. It is therefore likely that living cellular organisms—subject to this kind of selection—are as simple as it gets in terms of cellular life. The simplest autonomously replicating living organisms are a group of bacteria called mycoplasmas, which encode about 500 genes. Even these are not really autonomously replicating, as they are parasites that grow inside living cells. If life could be simpler, mycoplasmas would be simpler.

 

NORM: Given enough time and resources, do you think it is possible for living cells to be synthesized from scratch? In other words, are there any philosophical reasons why life cannot be created artificially?

 

JOHNJOE: With enough time and resources, yes. But probably not in my lifetime!

 

NORM: A basic principle of this website is that science and religion are incompatible. How do you feel about this?

 

JOHNJOE: I agree. They are different spheres of human thought that do not mix. It was William of Ockham who first made this clear in the fourteenth century, by disproving all the standard proofs of God as elaborated by Thomas Aquinas. Ockham (though a monk and a deeply religious man) argued that religion is based on faith, not reason, whereas science has to be based on reason. That divide still stands.

 

NORM: You have written articles suggesting that human dabbling in our own genome to banish genetic diseases and the aging process is not only warranted but necessary. Currently Aubrey de Grey appears to be spearheading the effort to fund anti-aging research. How do you view the current efforts to confront the aging process, and what success do you think these efforts may have in the future?

 

JOHNJOE: I’d certainly be happy to obtain therapy to halt or even reverse the aging process (give me back my hair!), and I am sure most people would similarly like to stay alive, fit and healthy for much longer. It is happening already, and I believe the end point will eventually be reached when people will not need to age or die unless they wish to. I’d certainly embrace such a world.

 

NORM: Finally, Johnjoe, I hear that you are working on a new book about Ockham's Razor. Would you care to give us an introduction?

 

JOHNJOE: I’ve already mentioned my favourite medieval thinker, William of Ockham. He is famous today for his razor, Ockham’s Razor, which is the principle of parsimony stating that we should prefer simpler explanations over complex ones: ‘entities should not be multiplied beyond necessity’. The more I consider this principle, the more convinced I am that it is one of the most fundamental tools of science, perhaps even its most important tool. Consider the creationists. They are notoriously difficult to prove wrong because they can invent elaborate creationist ‘explanations’ of all the evidence or facts you can muster. The decisive argument against their preposterous theories rests with Ockham’s Razor: it is much simpler to assume that one explanation—evolution—is real than to accept the multitude of elaborate mechanisms of floods, God’s intervention, etc., invoked to explain the fossil and molecular record.

 

Ockham’s Razor is fundamental to any scientific theory. Consider for a moment those famous laws of motion described by Sir Isaac Newton. His third law states that “for every action there is an equal and opposite reaction.” Every time an object is pushed, the object pushes back with a force that is precisely equal to the applied force. The law is fundamental to our understanding of even the simplest mechanics. Kick a rock along the beach with your bare foot and your toes will experience the rock’s reaction: it kicks back. A fish propels itself through the water by using its fins to push water backwards (the action); the water reacts by pushing the fish forward (the reaction). A bird beats its wings against the air and the air reacts by pushing the bird skyward. The principle also underpins most mechanical technology, from the simple catapult to the motor car or the space rocket.

 

But there is another law that could account for all these facts just as well as Newton’s. It is: “for every action there is an equal and opposite reaction, plus a pair of demons who push with equal and opposite force on either side of the object experiencing the action.” So now, alongside Newton’s force, we have demonic forces that push with equal and opposite strength so that their net effect is zero: they cancel each other out. Like Newton’s law, this demon-assisted law is consistent with experience and experimentation, and perfectly accounts for the action of animal locomotion and mechanical devices. It is just as good at ‘fitting the facts’ as the original theory. But it is burdened with additional entities.

 

And then there is a third theory that states that, for every action there is an equal and opposite reaction, plus two pairs of demons who push with equal and opposite force on either side of the object experiencing the action. This third theory also fits the facts just as well as Newton’s. And then of course there is a fourth theory with three pairs of demons and a fifth with four pairs of demons and so on. We could also have variations of the demon theories with some demons pushing harder than others so long as their total effect is always equal and opposite. Then maybe some angels could be involved in all the pushing and pulling, working alongside the demons.

 

The point is that all the mechanical data that has ever been generated fits not only Newton’s parsimonious laws but also an infinite number of less parsimonious theories with additional entities. All of them are equally good at accounting for the facts, so we cannot use empirical data to choose between them. They all make exactly the same predictions, so we could never devise an experiment to separate them. So why did Sir Isaac formulate his parsimonious laws and not more complex ones? Sir Isaac writes, “... we are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances.” Ockham’s Razor will explore the history of science as a process of simplification, as more and more phenomena were embraced by simpler laws. The medieval universe with its angels and spheres was replaced by the modern solar system and the law of gravity. All the major advances have been accompanied by an Ockhamist reduction of complexity.

 

Ockham’s Razor will explore the role of the famous razor from Copernicus to Grand Unified Theories (GUTs). The book will be threaded through with the life and times of William of Ockham, who was a very interesting chap. Born in 1288 in the village of Ockham in Surrey (not far from my university), he became a Franciscan scholar who taught at Oxford but was summoned to Avignon to answer charges of heresy. His studies there led him to conclude that the pope himself was a heretic who should be deposed. He fled Avignon, along with a group of Franciscan rebels who were fighting the pope’s attack on Franciscan poverty (as portrayed in Umberto Eco’s The Name of the Rose), and secured the protection of the Holy Roman Emperor. He continued writing polemical articles against the pope and defending the independence of secular authority. He died in Munich in 1347.

 

Ockham was one of the founders of nominalism, which claims that metaphysical entities are merely words without reality. The principle was enormously liberating: it threw a huge quantity of metaphysical lumber out of medieval philosophy and stimulated a new empirical approach to science. William of Ockham is therefore a hugely important figure in the history of science, philosophy, politics and humanism, yet he is very little known outside academic circles. Ockham’s Razor hopes to make the life and work of this remarkable man accessible to the popular science audience.

 

NORM: I very much look forward to reading your book. When will it be published?

 

JOHNJOE: I’m still writing it, so hopefully sometime in 2008/09.