Cell(f) Comprehension

By Will Smythe

“The only solid piece of scientific truth about which I feel totally confident is that we are profoundly ignorant about nature… It is this sudden confrontation with the depth and scope of ignorance that represents the most significant contribution of twentieth-century science to the human intellect.” 

– Lewis Thomas

Just in the last few weeks, results of the ENCODE (“Encyclopedia of DNA Elements”) project were published in several scientific journals around the world.  We had previously thought that up to 90% of our genetic material was “junk DNA” – performing no particular function.  It now appears that up to 80% is indeed functional, and may be working in a fascinating but incredibly complex fashion, using both biochemical processes and three- and four-dimensional spatial relationships to regulate human genes (which make up only about 2% of the genome).  Interesting news?  Yes, but many scientists working in this area had already posited much of this.

So, what’s the big deal?  The ENCODE project findings are just a metaphor here.  The “big deal” is the increasingly unfathomable encyclopedia of biologic information about human life that has been, and continues to be, compiled.

Simply put, at some point during the past decade or two, we passed an important landmark in human history.  There was no fanfare or announcement, and most would not have recognized or have had any reason to be aware of its occurrence.  We came to a time at which the human mind was incapable of fully comprehending the human cell.

To quote the famous biomedical philosopher Yogi Berra – “it ain’t the heat, it’s the humility.”

In his Metaphysics, Aristotle commented, “In the case of all things which have several parts and in which the totality is not, as it were, a mere heap… the whole is something beside the parts.”  If we have been paying attention, it has become apparent that, in the case of the human form, the parts themselves are perhaps more complex than we ever imagined.

Back in high school (and a little later than Aristotle, despite what my children may think), my biology teacher stood in front of the class one day and said with great thick-lensed and horn-rimmed authority that the “body is arranged from more to less complexity in this way – systems, organs, tissues and finally, cells.”  Thirty years ago, the assertion that the cell was the basic “building block” of biologic life would not necessarily have been questioned too vigorously, even by biologists.  In this paradigm, cells could be thought of much like bricks placed in a wall that supports an architecturally complex greater structure.  One could now argue, however, that she was wrong, and that the complexity required to take masses of human cells and coalesce and coordinate them to construct a muscle, for example, is less than that required to create and maintain the individual muscle cells themselves.

We may doggedly pursue scientific “truths” and perhaps seek a better general “understanding” of many things, but to fully comprehend them is different.  The inability of the human mind to fully grasp the complexity of its biologic building blocks has always been there, just waiting for us to learn enough to become aware of it – to be “confronted with the depth and scope of ignorance” upon which we base our assumptions, as Dr. Thomas reminds us in the opening quote for this essay.  Of course, man has been in this situation many times before, struggling to come to grips with the overwhelming complexity of nature and our inability to comprehend the entirety of it.  The universe comes to mind as an example, considering where we are now and where we used to be.  Stephen Hawking once said that “my goal is simple.  It is a complete understanding of the universe, why it is as it is, and why it exists at all.”  Inspiring, but is it really feasible?

Edwin Hubble, the namesake of one of the modern world’s most important tools of discovery, the Hubble Telescope, comments in his 1936 book, The Realm of the Nebulae:

“…in cosmology, theory presents an infinite array of possible universes, and observation is eliminating them, class by class, until now the different types among which our particular universe must be included have become increasingly comprehensible.”

Hubble’s description of how the structure of the universe is gradually unveiled implies two things – one, that the elimination of earlier theories leads us closer to scientific “truth”, and two, that “comprehension” comes from narrowing the various possibilities under consideration.  Karl Popper, the philosopher of science who held that skepticism and the elimination of previous theory matter more than justification, would argue that the former is true, and I agree.  It was important, for example, to rule out the theory that the stars were torches carried by ancient charioteers.  I would argue, however, that the latter is not true – that this process does not lead to comprehension, of either the universe or the human cell.  The more we learn about the universe, the more we come to understand what we do not know, because the “infinite array of possible universes” we were considering did not, in fact, include many possibilities we could not have imagined, or simply had not yet imagined for want of further discovery.  The human cell is no different from the universe in this way.  Hubble’s greatest contribution came from proving that ours is not the universe’s only galaxy, and that the nebulae he visualized in the newest generation of telescopes were too distant to lie within our own – they were, in fact, other galaxies.  He did not make that contribution by narrowing options; he added a new one and started us down a path to discovering the seemingly infinite complexity of the cosmos.  He nudged us over the edge to a vista of new discoveries, and an increasing human inability to comprehend them collectively.

Similar to Hubble’s use of the early more powerful telescopes, I would suggest that J. Craig Venter and the improvements his group at Celera Genomics introduced to genetic sequencing are what pushed us over the cliff in cellular biology.  Venter successfully sequenced a human genome several months before the federally funded Human Genome Project (HGP) accomplished that task, and since that time we have been trying to drink from a veritable high-pressure fire hose of cellular discovery.  During the hundred-year period between 1900 and 2000, about one thousand genes correlated in some direct way with the development of a particular human disease were discovered.  Over the last ten years, since the HGP was completed and new methods of genetic sequencing have been developed, we have easily tripled that number.  Functional genomic research – the effort to understand gene and protein interactions by capitalizing on new genomic information – has exploded recently, and a large number of genes and effector proteins that participate in everyday normal cellular activities have been discovered as a result, along with some entirely new pathways (processes in the cell that use proteins and other molecules in concert to achieve some cellular function).  Although these findings might have been anticipated by some, other recent developments in cellular biology have not been as predictable, and more are likely coming soon.

We have known, for instance, since the work of Watson, Crick and Franklin in the 1950s, that deoxyribonucleic acid, or DNA, is the code of life, and the genetic material of both current and inherited traits and dispositions.  A closely related molecule, ribonucleic acid, or RNA, was thought to carry out a few specific jobs in the cell, existing solely to help DNA do its job.  One set of RNAs worked by carrying the message, or “instructions,” from the DNA out of the cell’s nucleus, while another set ferried amino acids, the building blocks of proteins, to the areas where they could be combined into those more complex proteins.  What we did not anticipate was that there was an entire unknown RNA biology, and that these molecules have many other functions in human cells, including defense against viral infections, turning the expression of genes on and off, protecting against or causing cancer, enzymatic activities, and at times serving as genetic material itself.  As a matter of fact, even though the existence of RNA has been known since early in the twentieth century, Science magazine dubbed the discovery of one class of small RNA molecules – small interfering RNA, or siRNA, capable of “turning off” human and viral genes – its “breakthrough of the year” as recently as 2002.  A Nobel Prize for the work leading up to the discovery of these molecules was awarded in 2006 to Craig Mello of the University of Massachusetts and Andrew Fire, then at Stanford.  Interestingly, one of the members of the Nobel committee that year, Erna Möller, told the Associated Press following the announcement of the award that Fire and Mello’s research helped shed new light on a complicated process that had confused researchers for years.  “It was like opening the blinds in the morning,” she said.  “Suddenly you can see everything clearly.”

It may have allowed us to see this particular aspect of the cell’s biologic machinery more clearly, but perhaps not with the naked eye or mind.  It added to the depth and breadth of information about the human cell – a volume of information that renders it so complex that the organ that contemplates it, the human brain, cannot possibly fully grasp its own cellular constituency.

I posed a question recently to a few well-known scientists working in areas that involve, by definition, a great deal of cell biology expertise, asking them simply: “If I asked you to go to a large white board and diagram a ‘complete’ human cell, with all chromosomes, all genes, all membranes and constituents, all organelles, all pathways, enzymes and other effector proteins, all active RNA constructs, etc., could you do it?”  Dr. Irving L. Weissman is the Ludwig Professor of Clinical Investigation in Cancer Research and the Director of the Stanford Institute for Stem Cell Biology and Regenerative Medicine.  One could argue that Dr. Weissman knows something about cell biology, perhaps as much as anyone – he was the first investigator to isolate any stem cell in any species when he discovered the hematopoietic, or blood-forming, stem cell in mice in the 1980s, and he went on from there to identify the human hematopoietic stem cell, the human neuronal stem cell, and the human leukemia stem cell.  His reply to my question?

“Not a chance on earth, and I hope no one has that much in their mind…”.

A review of some of the numbers, and of the exponentially combinatorial nature of how things work in the cell, is instructive.  As far as we know, there are between 22,000 and 30,000 human genes in each cell, regulated by hundreds of thousands of other DNA sequences (see “ENCODE” above), and the genes are capable of making three to four times more proteins than simple sequences would predict – well more than 100,000 of them.  These proteins interact, along with virtually countless other biochemicals and substances, in various pathways driving cellular processes and events, and are organized as well, with other constituents such as carbohydrates, lipids and minerals, into a myriad of structural components (“scaffolding”) and subcellular organelles – literally, “little organs”.
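To get a feel for how quickly those numbers compound, here is a minimal back-of-envelope sketch.  The gene and protein counts are the rough estimates quoted above, not measured values, and counting only pairwise protein interactions grossly understates the real biology:

```python
# Back-of-envelope illustration of combinatorial scale in a single cell.
# Inputs are the rough estimates quoted in the essay, not measured values.
from math import comb

genes = 25_000              # roughly 22,000-30,000 human genes
isoforms_per_gene = 4       # "three to four times more proteins" per gene
proteins = genes * isoforms_per_gene          # ~100,000 distinct proteins

pairs = comb(proteins, 2)                     # possible protein-protein pairings
print(f"~{proteins:,} proteins -> ~{pairs:,} possible pairwise interactions")
# ~100,000 proteins -> ~4,999,950,000 possible pairwise interactions
```

Even before adding the RNAs, lipids, regulatory DNA and dynamic changes described here, the space of possible interactions is already far beyond any whiteboard, or any single mind.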

One of these organelles, the mitochondrion, provides an example of the complexity of the overall enterprise.  The mitochondrion is often referred to as the cell’s “energy plant”, as it is the producer of adenosine triphosphate (ATP), which powers many of those pathways and processes referred to earlier.  However, it also participates in cell death and survival, cell signaling and growth, and a number of other activities.  It even has its own DNA, which, although a fraction of the size of the DNA residing in the parent cell’s nucleus, uses a different “code”.  Add to this the myriad processes, pathways and groups of proteins (at times several working together at once) that turn genes on and off, and the fact that we are increasingly learning that some regions of the DNA felt to be silent (more than 90% of it – only about 1-2% contains known gene sequences) are actually not, and have functional significance.  Layered on top of this is the dynamic nature of our cells, which change structure and function depending on their environment.  We are learning, for example, that cancer cells do not sit passively by and allow themselves to be attacked by chemotherapy – they react by turning on a number of “defensive” genes that are normally quiescent, and at times even “eat” their own organelles to “ride out” the stress.  Weissman is right – who could possibly get “that much in their mind”?

One might argue that technology, and a generational difference of opinion regarding what needs to be “known” versus what needs to be “understood” or “completely comprehended”, has changed the way many of us view personal data management in general.  Immediately accessible electronic information may be rendering moot any existential angst over issues such as not fully comprehending the stuff we are made of.  Winifred Gallagher, in her book Rapt: Attention and the Focused Life, argues convincingly (and at times ominously) that we are becoming so engaged with electronic media that we are losing the ability to focus, concentrate, and learn some things at the depth that might be optimal.  She references studies, and calls on a number of experts, that demonstrate that the value of “multitasking” is largely a myth, and that humans were not neurologically engineered to read Facebook on a computer or iPad screen, text someone on a smartphone, listen to iTunes, watch a reality show on television (or on any of the other devices enumerated), and do homework simultaneously, at least with any degree of retention of whatever we were supposed to be learning.  She even argues that “whenever you squander attention on something that doesn’t put your brain through its paces and stimulate change, your mind stagnates a little and life feels dull.”  Although I get it, I am not sure that the medical students I teach, or my teenage son, would agree with that sentiment, nor that they feel they necessarily need to focus, or commit as much to long-term memory.  Some might even suggest that the ability to be immediately connected to one another, and to whatever they need to know in the moment, is fun and convenient.  What took the last generation days to accomplish, a student can now accomplish in a fraction of the time, and the time investment that might once have encouraged someone to commit facts to memory, rather than be forced to repeat the drudgery of acquiring them, has been incredibly diminished.

If, for example, I had been asked as a medical student to summarize the last ten articles written on the topic of HIV infection, I would have had to take the following steps: go to the library and consult a very thick citation index book (which was always a few months out of date) to find a listing of what had been recently published in that area; find the bound or shelved journals in which those articles had been published, and copy them; and, since it was likely that my library did not carry all of the journals involved, ask the librarian to order the missing articles, to be copied manually at another library and mailed or faxed back to the one I was using.  This process would take two to three hours up front, and then a wait of a week or two for the missing citations to arrive.  All of this can be accomplished today, using the internet and the PubMed website, in something less than fifteen minutes.
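In fact, the modern version of that search can even be scripted.  The short sketch below is only an illustration, not anyone’s actual workflow: it pulls the ten most recent PubMed citations matching “HIV” through NCBI’s public E-utilities service.  The search term and the number of results are arbitrary choices, and PubMed’s default ordering (most recently added first) is assumed.

```python
# Illustrative only: fetch the ten most recent PubMed citations on HIV
# using NCBI's public E-utilities API and the Python standard library.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

# Step 1: search PubMed for matching article IDs (PMIDs).
query = urlencode({"db": "pubmed", "term": "HIV", "retmax": 10, "retmode": "json"})
with urlopen(f"{BASE}/esearch.fcgi?{query}") as resp:
    pmids = json.load(resp)["esearchresult"]["idlist"]

# Step 2: retrieve a summary (date, journal, title) for each PMID.
query = urlencode({"db": "pubmed", "id": ",".join(pmids), "retmode": "json"})
with urlopen(f"{BASE}/esummary.fcgi?{query}") as resp:
    summaries = json.load(resp)["result"]

for pmid in pmids:
    item = summaries[pmid]
    print(f"{item['pubdate']}  {item['source']}: {item['title']}")
```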

E. O. Wilson is an American biologist and theorist who has argued for a combining and coordination of the sciences, and of the sciences with the arts.  He does, however, seem to understand that achieving “Consilience”, as he terms this objective, will not lead to the ability of humans to comprehend science in its totality, and he states in the book of the same name that “we are drowning in information, while starving for wisdom.  The world henceforth will be run by synthesizers, people able to put together the right information at the right time, think critically about it, and make important choices wisely.”  It isn’t clear to me that the fund of knowledge the medical students I teach accumulate as they move through the curriculum is any different from that of the generations before them – a curriculum that teaches them, at its most fundamental level, about nothing more important than the human cell: how it works, what is built from it, and what can happen when things go awry.

In objective fairness, and in an effort to avoid waxing nostalgic about the “days of the giants” when I, like every generation before me, was learning to be a doctor, the amount one needs to know to stay abreast of modern medicine, like knowledge of the universe, is obviously growing quickly.  I am, however, beginning to perceive a subtle shift in what some students commit to long-term memory, or can recount without accessing some sort of database.  Why commit something to memory if you can pull a phone out of your pocket, or turn to the ubiquitous computer keyboard, and have the answer immediately?  What I am suggesting is that we are perhaps becoming more comfortable with less depth of comprehension in some circumstances, in exchange for breadth and access – increasingly comfortable with, and dependent on, our immediate access to information, and increasingly willing to admit that we cannot handle the information any other way.  The human mind, for the vast majority of us, has a limited ability to store, process and make sense of things.  Perhaps these students will be much better synthesizers than their predecessors, as Wilson suggests, and better doctors – just as long as they don’t lose electrical power, or run out of batteries.

So, if we are not to be existentially challenged by our incomprehension of self at the most basic level, and we are comfortable abrogating any individual imperative to achieve it, how do we manage this and other increasingly complex sets of scientific data?  In this light, the challenge presented by our biology is really just a metaphor for all knowledge, scientific and otherwise, continually expanding in increasingly discrete areas.  If no one person, or even a reasonably sized group of people, can comprehend the whole, shouldn’t we still seek comprehension of the whole, and aren’t there likely important things to be learned from it?  There has been a lot of attention paid to the concept of “big science” around the world, whereby large groups of researchers have been encouraged to work together across organizations and continents to solve difficult problems in science.  This has led to a better understanding of many processes in many areas, perhaps most dramatically in physics, where the discovery and characterization of subatomic particles and their behavior has been facilitated by mass collaboration.  However, even these groups have tended to focus on relatively discrete areas of science, digging deeper and deeper into singular questions rather than attempting to coalesce knowledge from disparate areas.

Obviously, we will increasingly turn to computer science, and to so-called “artificial intelligence” (AI), to supplant, or at least somehow augment, our biologically limited storage and processing ability.  Daniel Hillis is the Chairman of Applied Minds, and has been a leading voice in the AI discussion for more than two decades.  In an essay entitled “A Forebrain for the World Mind”, he imagines a coordinated electronic data management network that is much more than an internet – “the Internet knows no more about the information it handles than the telephone system knows about the conversations that take place over its lines.  Most of those zillions of transistors are either doing something very trivial or nothing at all”.  He compares this current technology to the unconscious mind, or the “hindbrain”.  What he sees on the horizon is a linkage of human minds and technology, more of a “forebrain” – “this is the development that will make a difference: a method for groups of people and machines to work together to make decisions in a way that takes advantage of scale… Given this, we will finally have access to intelligence greater than our own.  The world mind will finally have a forebrain, and this will change everything”.

During the Inquisition, Galileo was tried by the Church for his views on our small part of Hubble’s expanding universe – namely, his heretical assertion that our nearby planetary neighbors and the sun do not revolve around the earth.  He was found guilty, and was punished by papal authority with imprisonment, commuted to house arrest for the rest of his life.  He later commented bitterly that “in matters of science the authority of a thousand is not worth the humble reasoning of a single individual”.  He was absolutely correct; however, when one considers the complexities of the human cell, the humble reasoning of a single, distracted individual may well not be worth the coming together of thousands upon thousands of electronically linked minds, as Hillis suggests.

This scenario seems likely, as does a receding sense of just how much information we need to force into long-term memory to be effective in many disciplines as the knowledge in those disciplines continues to expand.  Sure, much will still come to reside there, in our limited biologic neural databases, through repetitive use and regardless of the manner in which it is learned – previously by writing it on a blackboard a thousand times, and now perhaps by looking it up on Wikipedia a thousand times.  However, the ability to synthesize large amounts of information, as Wilson suggests, will perhaps come to be more valued in the future than an encyclopedic memory for minute facts.

One of the continually empty promises of our modern advances in communication and access to information has been that we will have more discretionary time.  Ironically, we have never spent more time “working” than right now.  We spend less time doing the things that make us, collections of incomprehensible cells, uniquely human in our incomprehensible universe.  If we are able to both learn more by allowing information to be stored and comprehended somewhere else, and access it for our use when needed rather than spending time chasing it down and continually trying to stuff it into our finite minds, could there be benefit over and above scientific consilience?  Might we have more time to be uniquely “humane”?  Might we find time to work at understanding one another, to value depth and breadth of relationships as much as we now value depth and breadth of factual recall?  Would we refocus on turning scientific knowledge, knowledge that will continue to accumulate, into human benefit and alleviation of pain, suffering, war, and fear?  Many would argue that we haven’t done those things commensurate with what we now know.  Would we gradually, like cultures that we now consider to be more “primitive”, come to value more highly time spent creating art and beauty?  I’m not sure.

What I am certain of is that I don’t lie in bed at night and fret over the fact that I can’t comprehend what I am made of.  Maybe I am smarter for just admitting this, as Lewis Thomas suggests in the opening quote for this essay.  Thomas, a physician and a former president of Memorial Sloan-Kettering Cancer Center, was seemingly comfortable with this concept, and suggested that our obvious ignorance, and our open-ended opportunity to discover, are what make medical science enthralling.  He even posited that we think of the world as a giant single cell.  This was his interpretation, and slight edit, of the “Gaia Hypothesis”, whereby the earth is characterized as a living organism rather than a place where life is simply casually smeared on inanimate rock, or carelessly plunked into dispassionate seas.  Thomas commented, “I have been trying to think of the earth as a kind of organism, but it is no go… it is most like a single cell.”  Perhaps what Thomas was trying to relate was the earth’s immediate incomprehensibility, complexity and interrelatedness, as well as our need to work at keeping it alive.

I’m not sure what the future of knowledge portends, or whether coming to terms with a world mind will be good or bad for mankind.  I don’t know whether it will allow us to become more focused on humanity, or render humans irrelevant, leading to our replacement as a species by something non-cellular and far smarter than we are, or ever could be.  I am certain, however, based on the behavior of the next generation – the first to be linked to one another and to all the world’s knowledge in an increasingly seamless fashion – that a “change” has begun in the way we relate to information.

I just hope that as all this happens, and as I get older and even less capable of self-comprehension (or any other type of comprehension, for that matter), someone reminds me to recharge my i(insert term here), or to look a bit more frequently at #thingsyoucantrememberanymore on Twitter.
