Planet Earth/print version
Every moment of your life will be from the perspective of a single planet— Planet Earth. You were born here and you will die here. This textbook is a guide to your home, to your place in the universe. By taking this course, you will learn about your home planet: how it works, and how we know it works this way. This course is a user's manual for Planet Earth, with direct recommendations for future generations such as yourself to maintain its health and natural wonders. As an astute student, you will be introduced to the theoretical principles of science, and how to defend yourself from the spread of ignorance. You will learn about Earth's dimensions and motions, and how to navigate its surface. You will learn how energy originates from the closest star (the Sun), from the Moon, and from other sources of energy in the Earth's active core, and how this energy can be used and stored. You will learn basic scientific principles of matter, the make-up of substances that form the field of chemistry. You will examine the planet's atmosphere, the air that you are breathing as you read this, and how that air is slowly changing. You will explore the vast abundance of Earth's water, covering the planet in enormous oceans, abundant lakes, and rivers, as well as frozen water locked within snow and ice. You will learn how to predict wind and storms and how climates shift. You will lead your own exploration of the solid interior of the Earth, the composition of mountains, rocks, and dirt. You will learn about life, the most remarkable feature of the planet. You will explore theories of how life arose and how it has evolved and changed over time, learning that you are of Earth, and discovering the story of your own origin on this planet. You will undertake an examination of the great biomes of jungles, forests, and deserts, and the life that exists within them. You will survey the important field of biology, as you learn about life and its interactions with the planet.
In the end, you will come to face the ominous future of your own planet, and the changes that are now occurring. Your planet is not the same as it was for your ancestors, your grandparents, or even your parents at your age — Earth today is quickly being altered, and you will need to adapt to this change. This course will teach you how to prepare for this change and how to protect the planet from being altered to the point that it becomes lifeless. This class will be challenging, but with enough dedication and commitment you will succeed in learning the material. You will cherish the knowledge presented in this class for the rest of your life.
Intended Audience of this Textbook
This textbook is written for an audience of introductory college students in a non-science degree program. It is intended to provide a detailed, comprehensive knowledge of Planet Earth, including basic aspects of physics, chemistry, geology, and biology. As a broad scientific overview of the entirety of Planet Earth, the intention is to present only key concepts that will enhance, enrich, and engage the reader's interest in the Earth Sciences. It is intended to make any reader, such as yourself, at least a little more knowledgeable about the amazing place that we all live within.
Purpose of Writing an Open Text and What that Means
All of the text and modules of the Planet Earth course are offered under a Creative Commons Attribution license, which means that you are free to share and redistribute the material in any medium or format, and to adapt, remix, transform, and build upon the material for any purpose, even commercially. Just be sure to attribute the text with the author's name and course name, and indicate where you found the information. The purpose of making this text free to disseminate is that it contains valuable information that you should feel free to share and discuss as widely as possible. Science adapts to new knowledge, and as such this text can be updated and modified as new discoveries are made. An open text also ensures that the knowledge remains affordable to the average student such as yourself. Feel free to pass on the information that you learn in this course, and you are free to make printed copies. The referenced text is available as a Wikibook, on the Wikibooks website.
About the Author
Benjamin J. Burger is a geologist who earned his Master of Science degree in 1999 at Stony Brook University in New York and his Doctorate in 2009 at the University of Colorado in Boulder, and spent five years working at the American Museum of Natural History in New York City. He has also worked as a professional geologist in the states of Utah, Colorado, and Wyoming. He joined the Utah State University faculty in 2011 and continues to teach and conduct research as an Associate Professor in the Department of Geoscience at the Uintah Basin – Vernal Campus of Utah State University, located in the northeastern corner of Utah. Many of his course lectures and educational content can be found on YouTube or on his website at www.benjamin-burger.org.
About this textbook
This book was written with the support of a grant offered by Utah State University Libraries, Academic & Instructional Services and College of Science to support faculty and instructors at Utah State University—State Wide Campuses to create Open Educational Resources to support their online courses in the United States of America. These grants are made to reduce barriers to student success, as well as to encourage faculty and instructors to try new, high-quality, and lower cost ways to deliver learning materials to students through Open Educational Resources.
The majority of the first edition of the textbook was written between 2019 and 2020, with the intention that the textbook be offered free of charge to all participants in GEO 1360 Planet Earth, an online course offered at Utah State University. As an Open Educational Resource, this textbook is offered for any faculty, instructor, or teacher to adopt for the courses they teach, and is distributed under a Creative Commons License. If you notice any errors or mistakes, please contact the author.
Hyperlinks are referenced throughout the text to encourage further reading on any particular topic; most of these point toward a Wikipedia article or an original scientific publication. These hyperlinks follow a similar style and format as seen on the popular Wikipedia website, where sources of specific information can be referenced and verified with a simple link. Every attempt was made to ensure that the external links you will find within the modules point to verified print and online sources, including peer-reviewed scientific papers, publications of scientific societies, government organizations, and mainstream news organizations. There is no guarantee that these external links will remain available online, or that they will be archived for future electronic access. Furthermore, there is no guarantee that your university or college will have a subscription to view a given article online. However, most of these external references should be accessible to you if you wish to explore a topic more in-depth than provided in the text, especially the many Wikipedia entries. Only information covered within the text of this course will be used on quizzes and exams, as the referenced hyperlinks serve to support statements and data within the main body of this course. You are not responsible for information that exists outside of this course on external webpages.
Vocabulary and Glossary of Terms
Important scientific terms will be in bold print, and may have a hyperlink to a clear definition of that term. These terms should be defined in your notes, as they will likely be referenced in quiz and exam questions. Using flashcards with each term and its definition can be a helpful study tool for the exams.
Table of Contents
Section 1: EARTH’S SIZE, SHAPE, AND MOTION IN SPACE
- a. Science: How do we Know What We Know.
- b. Earth System Science: Gaia or Medea?
- c. Measuring the Size and Shape of Earth.
- d. How to Navigate Across Earth using a Compass, Sextant, and Timepiece.
- e. Earth's Motion and Spin.
- f. The Nature of Time: Solar, Lunar and Stellar Calendars.
- g. Coriolis Effect: How Earth’s Spin Affects Motion Across its Surface.
- h. Milankovitch cycles: Oscillations in Earth’s Spin and Rotation.
- i. Time: The Invention of Seconds using Earth’s Motion.
Section 2: EARTH’S ENERGY
- a. What is Energy and the Laws of Thermodynamics?
- b. Solar Energy.
- c. Electromagnetic Radiation and Black Body Radiators.
- d. Daisy World and the Solar Energy Cycle.
- e. Other Sources of Energy: Gravity, Tides, and the Geothermal Gradient.
Section 3: EARTH’S MATTER
- a. Gas, Liquid, Solid (and other states of matter).
- b. Atoms: Electrons, Protons and Neutrons.
- c. The Chart of the Nuclides.
- d. Radiometric dating, using chemistry to tell time.
- e. The Periodic Table and Electron Orbitals.
- f. Chemical Bonds (Ionic, Covalent, and others means to bring atoms together).
- g. Common Inorganic Chemical Molecules of Earth.
- h. Mass spectrometers, X-Ray Diffraction, Chromatography and Other Methods to Determine Which Elements are in Things.
Section 4: EARTH’S ATMOSPHERE
- a. The Air You Breathe.
- b. Oxygen in the Atmosphere.
- c. Carbon Dioxide in the Atmosphere.
- d. Green House Gases.
- e. Blaise Pascal and his Barometer.
- f. Why are Mountain Tops Cold?
- g. What are Clouds?
- h. What Makes Wind?
- i. Global Atmospheric Circulation.
- j. Storm Tracking.
- k. The Science of Weather Forecasting.
- l. Earth’s Climate and How it Has Changed.
Section 5: EARTH’S WATER
- a. H2O: A Miraculous Gas, Liquid and Solid on Earth.
- b. Properties of Earth’s Water (Density, Salinity, Oxygen, and Carbonic Acid).
- c. Earth’s Oceans (Warehouses of Water).
- d. Surface Ocean Circulation.
- e. Deep Ocean Circulation.
- f. La Niña and El Niño, the sloshing of the Pacific Ocean.
- g. Earth's Rivers.
- h. Earth’s Endangered Lakes and the Limits of Freshwater Sources.
- i. Earth’s Ice: Glaciers, Ice Sheets, and Sea Ice.
Section 6: EARTH’S SOLID INTERIOR
- a. Journey to the Center of the Earth: Earth’s Interior and Core.
- b. Plate Tectonics: You are a Crazy Man, Alfred Wegener.
- c. Earth’s Volcanoes, When Earth Goes Boom!
- d. You Can’t Fake an Earthquake: How to Read a Seismograph.
- e. The Rock Cycle and Rock Types (Igneous, Metamorphic and Sedimentary).
- f. Mineral Identification of Hand Samples.
- g. Common Rock Identification.
- h. Bowen’s Reaction Series.
- i. Earth’s Surface Processes: Sedimentary Rocks and Depositional Environments.
- j. Earth’s History Preserved in its Rocks: Stratigraphy and Geologic Time.
Section 7: EARTH’S LIFE
- a. How Rare is Life in the Universe?
- b. What is Life?
- c. How did Life Originate?
- d. The Origin of Sex.
- e. Darwin and the Struggle for Existence.
- f. Gregor Mendel’s Game of Cards: Heredity.
- g. Earth's Biomes and Communities.
- h. Soil: Living Dirt.
- i. Earth’s Ecology: Food Webs and Populations.
Section 8: EARTH’S HUMANS AND FUTURE
- a. Ötzi’s World, or What Sustainability Looks Like.
- b. Rise of Human Consumerism and Population Growth.
- c. Solutions for the Future.
- d. How to Think Critically About Earth's Future.
Section 1: EARTH’S SIZE, SHAPE, AND MOTION IN SPACE
1a. Science: How do we Know What We Know.
The Emergence of Scientific Thought
The term science comes from the Latin word for knowledge, scientia, although the modern definition of science only appeared during the last 200 years. Between 1347 and 1351, a deadly plague swept across the Eurasian continent, resulting in the death of nearly 60% of the population. The years that followed the Black Death, as the plague came to be called, were a unique period of reconstruction which saw the emergence of the field of science for the first time. Science became the pursuit of learning knowledge and gaining wisdom; it was synonymous with the more widely used term philosophy. It was born at a time when people realized the importance of practical reason and scholarship in curing diseases and ending famines, as well as the importance of rational and experimental thought. The plague resulted in a profound acknowledgement of the importance of knowledge and scholarship in holding a civilization together. An early scientist was indistinguishable from a scholar.
Two of the most well-known scholars to live during this time were Francesco “Petrarch” Petrarca and his good friend Giovanni Boccaccio; both were enthusiastic writers of Latin and early Italian and enjoyed a wide readership for their works of poetry, songs, travel writing, letters, and philosophy. Petrarch rediscovered the ancient writings of Greek and Roman figures of history and worked to popularize them in modern Latin, in particular re-discovering the writings of the Roman statesman Cicero, who had lived more than a thousand years earlier. This pursuit of knowledge was something new. Both Petrarch and Boccaccio proposed the kernel of a scientific ideal that has transcended into the modern age: that the pursuit of knowledge and learning does not conflict with religious teachings, as the capacity for intellectual and creative freedom is in itself divine. The secular pursuit of knowledge, based on truth, complements religious doctrines, which are based on belief and faith. This idea manifested during the Age of Enlightenment and eventually the American Revolution as an aspiration for a clear separation of church and state. This sense of freedom to pursue knowledge and art, unhindered by religious doctrine, led to the Italian Renaissance of the early 1400s.
The Italian Renaissance was fueled as much by this new freedom to pursue knowledge as by the global economic shift that brought wealth and prosperity to northern Italy, and later to northern Europe and England. This was a result of the fall of the Eastern Byzantine Empire and the rise of a new merchant class in the city states of northern Italy, which took up the abandoned trade routes throughout the Mediterranean and beyond. The patronage of talented artists and scholars arose during this time, as wealthy individuals financed not only artists, but also the pursuit of science and technology. Universities, places of learning outside of monasteries and convents, came into fashion, as wealthy leaders of the city states of northern Italy sought talented artists and inventors to support within their own courts. Artists like Leonardo da Vinci, Raphael, and Michelangelo received commissions from wealthy patrons, including the church and city states, to create realistic artworks from keen observation of the natural world. Science grew out of art, as direct observation of the natural world led to deeper insights into the creation of realistic paintings and sculptures. This idea of the importance of observation found in Renaissance art transcended into the importance of observation in modern science today. In other words, science should reflect reality through the ardent observation of the natural world.
The Birth of Science Communication
Science and the pursuit of knowledge during the Renaissance were enhanced to an even greater extent by the invention of the printing press with moveable type, allowing the widespread distribution of information in the form of printed books. While block and intaglio prints, using ink on hand-carved wood or metal blocks, predated this period, moveable type allowed the written word to be printed and copied onto pages much more quickly. The Gutenberg Bible was first printed around 1455, with roughly 180 copies produced. This cheap and efficient way to replicate the written word had a dramatic effect on society, as literacy among the population grew. It was much more affordable to own books and written works than at any previous time in history. With a little wealth, the common individual could pursue knowledge through the acquisition of books and literature. Of importance to science was the new-found ease with which one could disseminate information. The printing press led to the first information age, and greatly influenced scientific thought during the middle Renaissance, in the second half of the 1400s. Many of these early works were published in the mother tongue, the language spoken in the home, rather than the father tongue, the language of civic discourse found in the courts and churches of the time, which was mostly Latin. These books spawned the early classic works of literature we have today in Italian, French, Spanish, English, and other languages spoken across Europe and the world.
One of the key figures of this time was Nicolaus Copernicus, who in 1543 published his mathematical theory that the Earth orbits the Sun. The printed book, entitled De revolutionibus orbium coelestium (On the Revolutions of the Celestial Spheres) and written in the scholarly father tongue of Latin, ushered in what historians call the Scientific Revolution. The book was influential because it was widely read by fellow astronomers across Europe. Each individual could verify the conclusions made in the book by carrying out observations of their own. The Scientific Revolution was not so much about what Nicolaus Copernicus discovered and reported (which will be discussed in depth later), but that the discovery and observations he made could be replicated by others interested in the same question. This single book led to one of the most important principles in modern science: that any idea or proposal must be verified through replication. What makes something scientific is that it can be replicated or verified by any individual interested in the topic of study. Science embodied at its core the reproducibility of observations made by individuals, and ushered in the age of experimentation.
During this period of time, such verification of observations and experiments was a lengthy affair. Printing costs of books were high, the distribution of that knowledge was very slow, and works were often subject to censorship. This was also the time of the Reformation, first led by Martin Luther, who protested corruption within the Catholic Church, leading to the establishment of the Protestant Movement in the early 1500s. This schism of thought and belief, brought about primarily by the printing of new works of religious thought and discourse, led to the Inquisition. The Inquisition was a reactionary system of courts established by the Catholic Church to convict individuals who showed signs of dissent from the established beliefs set forth by doctrine. Printed works that hinted at free thought and inquiry were destroyed, and their authors imprisoned or executed. Science, which had flourished in the century before, suffered during the years of the Inquisition, but this period also brought about one of the most important episodes in the history of science, involving one of the most celebrated scientists of its day, Galileo Galilei.
The Difference between Legal and Scientific Systems of Inquiry
Galileo was a mathematician, physicist, and astronomer who taught at one of the oldest universities in Europe— the University of Padua. Galileo got into an argument with a fellow astronomer named Orazio Grassi, who taught at the Collegio Romano in Rome. Grassi had published a book in 1619 on the nature of three comets he had observed from Rome, entitled De Tribus Cometis Anni MDCXVIII (On Three Comets in the Year 1618). The book angered Galileo, who argued that comets were an effect of the atmosphere, and not real celestial bodies. Although Galileo had built an early telescope for observing the moon, planets, and comets, he had not made observations of the three comets observed by Grassi. As a rebuttal, Galileo published his response in a book entitled The Assayer, which in a flourish he dedicated to the Pope in Rome. The dedication was meant as an appeal to authority, in which Galileo hoped that the Pope would take his side in the argument.
Galileo was following a legal protocol for science, where evidence is presented to a judge or jury, or the Pope in this case, who decides on a verdict based on the evidence presented before them. This appeal to authority was widely in use during the days of the Inquisition, and is still practiced in law today. Galileo, in his book The Assayer, presented the notion that mathematics is the language of science. Or in other words, the numbers don’t lie. Despite Galileo being wrong about the comets, the Pope sided with him, which emboldened Galileo to take on a topic he was interested in, but one considered highly controversial by the church— the idea, proposed by Copernicus, that the Earth revolves around the Sun. Galileo wanted to prove it using his own mathematics.
Before his position at the university, Galileo had served as a math tutor for the son of Christine de Lorraine, the Grand Duchess of Tuscany. Christine was wealthy and highly educated, and more open to the idea of a heliocentric view of the solar system. In a letter to her, Galileo proposed the rationale for undertaking a forbidden scientific inquiry, invoking the idea that science and religion are separate and that biblical writing was meant to be allegorical. Truth could be found in mathematics, even when it contradicted the religious teachings of the church.
In 1632 Galileo published the book Dialogue Concerning the Two Chief World Systems in Italian and dedicated it to Christine’s grandson, the Grand Duke of Tuscany. The book was covertly written in an attempt to get past the censors of the time, who could ban the work if they found it heretical to the teachings of the church. The book was written as a dialogue between three men (Simplicio, Salviati, and Sagredo), who over the course of four days debate and discuss the two world systems. Simplicio argues that the Sun revolves around the Earth, while Salviati argues that the Earth revolves around the Sun. The third man, Sagredo, is neutral, and listens and responds to the two theories as an independent observer. While the book was initially allowed to be published, it raised alarm among members of the clergy, and charges of heresy were brought forth against Galileo after its publication. The book was banned, as well as all the previous writings of Galileo. The Pope, who had previously supported Galileo, saw himself in the character Simplicio, the simpleton. Furthermore, the letter to Christine was uncovered and brought forth during the trial. Galileo was found guilty of suspicion of heresy and placed under house arrest for the rest of his life. Galileo’s earlier appeal to authority had turned against him as he faced these new charges. The result of Galileo’s ordeal was that fellow scientists felt that he had been wrongfully convicted, and that authority, whether religious or governmental, was not the determiner of truth in scientific inquiry.
Galileo’s ordeal established the important principle of the independence of science from authority in the determination of scientific truth. Appealing to authority figures should not be a principle of scientific inquiry. Unlike the practice of law, science was governed not by judges or juries, who could be fallible and wrong, nor was it governed through popular public opinion or even voting.
This led to an existential crisis in scientific thought: how can one define truth, especially if one cannot appeal to authority figures in leadership positions to judge what is true?
How to Become a Scientific Expert and Scientific Deduction
The first answer came from a contemporary of Galileo, René Descartes, a French philosopher who spent much of his life in the Dutch Republic. Descartes coined the motto Ego cogito, ergo sum (I think, therefore I am), which was taken from his well-known preface entitled Discourse on the Method, published in French in 1637 and later translated into Latin. The essay is an exploration of how one can determine truth, and is a very personal account of how he himself determined what was true or not. René Descartes argued for the importance of two principles in seeking truth.
First was the idea that seeking truth requires much reading and the taking and passing of classes, but also exploring the world around you— traveling, learning new cultures, and meeting new people. He recommended joining the army, and living not only in books and university classrooms, but living life in the real world and learning from everything that you do. Truth was based on common sense, but only after careful study and work. What Descartes advocated was that expertise in a subject comes not only from learning and studying a subject over many years but also from practice in a real-world environment. A medical doctor who had never practiced nor read any books on the subject of medicine was a poorer doctor than one who had attended many years of classes, kept up to date on the newest discoveries in books and journals, and practiced for many years in a medical office. The expert doctor would be able to discern a medical condition much more readily than a novice. With expertise and learning, one could come closer to knowing the truth.
The second idea was that anyone could obtain this expertise if they worked hard enough. René Descartes states that he was a normal, average student, but that through his experience and enthusiasm for learning, he was able over the years to become enough of an expert to discern truth from fiction; hence he could claim, I think, therefore I am.
What René Descartes advocated was that if you have to appeal to authority, seek experts within the field of study of your inquiry. These two principles of science should be a reminder that in today’s age of mass communication for everyone (Twitter, Facebook, Instagram), much falsehood is unknowingly spread by novices, and that to combat these lies or falsehoods one must be educated and well informed through an exploration of written knowledge, educational institutions, and life experiences in the real world; and if you lack these, then seek experts.
René Descartes’s philosophy had a profound effect on science, although even he would refer to this idea as “le bon sens”, or common sense.
Descartes’s philosophy went further, to answer the question of what happens if the experts are wrong. If two equally experienced experts disagree, how do we know who is right if there is no authority we can call upon to decide? How can one uncover truth through one’s own inquiry? Descartes’s answer was to use deduction. Deduction is where you form an idea and then test that idea with observation and experimentation. An idea is held to be true until it is proven false.
The Idols of the Mind and Scientific Eliminative Induction
This idea was flipped on its head by a man so brilliant that rumors exist that he wrote William Shakespeare’s plays in his free time; although no evidence exists to prove these rumors true, they illustrate how highly he is regarded even today. The man’s name was Francis Bacon, and he advanced the method of scientific inquiry that today we call the Baconian approach.
Francis Bacon studied at Trinity College, Cambridge, England, and rose up the ranks to become Queen Elizabeth’s legal advisor, thus becoming the first Queen’s Counsel. This position led Francis Bacon to hear many court cases and take a very active role in interpreting the law on behalf of the Queen’s rule. Hence, he had to devise a way to determine truth on a nearly daily basis. In 1620 he published his most influential work, Novum Organum Scientiarum, or the New Instrument of Science. It was a powerful book.
Francis Bacon contrasted his new method of science from those advocated by René Descartes by stating that even experts could be wrong, and that most ideas were false, rather than true. According to Bacon, falsehood among experts comes from four major sources, or in his words Idols of the Mind.
First was the personal desire to be right— the common notion that you consider yourself smarter than anyone else, which he called idola tribus. It extends to the impression you might have that you are on the right track, or have had some brilliant insight, even if you are incorrect in your conclusion. People cling to their own ideas and value them over others, even if they are false. This could also come from a false idea that your mother, father, or grandparent told you was true, which you hold onto more than others because it came from someone you respect.
The second source of falsehood among experts comes from idola specus. Bacon used the metaphor of a cave where you store all that you have learned, but we can use a more modern metaphor, watching YouTube videos or following groups on Social Media. If you consume only videos or follow writers with a certain world view you will become an expert on something that could be false. If you read only books claiming that the world is flat, then you will come to a false conclusion that the world is flat. Bacon realized that as you consume information about the world around you, you are susceptible to false belief due to the random nature in what you learn and where you learn those things from.
The third source of falsehood among experts comes from what he called idola fori. Bacon viewed that falsehood resulted from the misunderstanding of language and terms. He held that science, if it seeks truth, should clearly define the words that it uses; otherwise even experts will come to false conclusions through their misunderstandings of a topic. Science must be careful to avoid ill-defined jargon, and define all terms it uses clearly and explicitly. Words can lie, and when used well they can cloak falsehood as truth.
The final source of falsehood among experts results from the spectacle of idola theatri. Even if an idea makes a great story, it may not be true. Falsehood comes within the spectacle of trending ideas or widely held public opinions, which of course come and go based on fashion or popularity. Just because something is widely viewed, or in the modern sense has gone viral on the internet, does not mean that it is true. Science and truth are not popularity contests, nor do they depend on how many people come to see something in theaters, how fancy the computer graphics are in the science documentary you watched last night, or how persuasive the TED Talk. Science and truth should be unswayed by public perception and spectacle. Journalism is often engulfed within the spectacle of idola theatri, reporting stories that invoke fear and anxiety to increase viewership and outrage, and such stories are often untrue.
These four idols of the mind led Bacon to the conclusion that knowing the truth was an impossibility: in science we can get closer to the truth, but we can never truly know what we know. We all fail at achieving “truth.” Bacon warned that “truth” was an artificial construct formed by the limitations of our perceptions, and that it is easily cloaked or hidden in falsehood, principally by the Idols of the Mind.
So if we can’t know absolute truth, how can we get closer to the truth? Bacon proposed something philosophers call eliminative induction: start with observations and experiments, use that knowledge to look for patterns, and eliminate ideas that are not supported by those observations. This style of science, which starts with observations and experiments, resulted in a profound shift in scientific thinking.
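The logic of eliminative induction can be caricatured in a short sketch (my own illustration, not from the text, with made-up observations and hypotheses): instead of defending a favorite idea, gather observations first, then strike out every candidate idea the observations contradict; whatever survives is kept, at least provisionally.

```python
# A toy illustration of Bacon's eliminative induction: start from
# observations, then discard candidate hypotheses that the observations
# contradict, rather than seeking confirmation of one preferred idea.

observations = [2, 4, 6, 8]  # hypothetical data we have gathered

# Candidate hypotheses about the rule generating the data.
hypotheses = {
    "all even": lambda xs: all(x % 2 == 0 for x in xs),
    "all odd": lambda xs: all(x % 2 == 1 for x in xs),
    "all less than 5": lambda xs: all(x < 5 for x in xs),
}

# Eliminate every hypothesis the observations rule out.
surviving = {name for name, test in hypotheses.items() if test(observations)}
print(surviving)  # only "all even" survives elimination
```

Note that the surviving hypothesis is not proven true; a future observation (say, an odd number) could still eliminate it, which is exactly Bacon's point that we only approach truth by elimination.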
Bacon viewed science as focused on the exploration and documentation of all natural phenomena: the detailed cataloguing of all things observable, of all experiments undertaken, and the systematic analysis of multitudes of observations and experiments for threads of knowledge that lead to the truth. While previous scientists proposed theories and then sought out confirmation of those theories, Bacon proposed first making observations, and then drawing the theories which best fit the observations that had been made.
Francis Bacon realized that this method was powerful, and famously proposed that knowledge is power. He had seen how North and South American empires, such as the Aztecs, had been crushed by the Spanish during the mid-1500s, and how knowledge of ships, gunpowder, cannons, metallurgy, and warfare had resulted in the fall and collapse of whole civilizations in the Americas. The Dutch used the technology of muskets against North American tribes, focusing on the assassination of their leaders, as well as the wholesale manufacturing of wampum beads, which destroyed North American currencies and the native economies. Science was power because it provided technology that could be used to destroy nations and conquer people.
He foresaw the importance of exploration and scientific discovery if a nation was to remain relevant in a modern world. After Queen Elizabeth’s death in 1603, Francis Bacon encouraged her successor, King James, to colonize the Americas, envisioning a utopian society in a new world. This utopian society, which he called Bensalem in his unfinished science fiction book New Atlantis, would be devoted to pure scientific inquiry, where researchers could experiment and document their observations in fine detail, and from those observations great patterns and theories could emerge that would lead to new technologies.
Francis Bacon’s utopian ideals took hold within his native England, especially within the Parliament of England, which viewed the authority of the King with less respect than at any time in its history. The English Civil War and the execution of King Charles I in 1649 threw England into chaos, and many people fled to the American colonies in Virginia during the rise of Oliver Cromwell’s dictatorship.
But with the reestablishment of the monarchy in 1660, the ideas laid out by Francis Bacon came to fruition with the founding of the Royal Society of London for Improving Natural Knowledge, or simply the Royal Society. It was the first truly modern scientific society, and it still exists today.
A scientific society is dedicated to research and the sharing of discoveries among its members. Scientific societies are considered an “invisible college,” since they are where experts in the fields of science come to learn from each other, demonstrate new discoveries, and publish the results of experiments that they have conducted. As one of the first scientific societies, the Royal Society in England welcomed experiments of grand importance, but also insignificant small-scale observations, at its meetings. The Royal Society received support from its members, but also from the monarch, Charles II, who viewed the society as a useful source of new technologies whose ideas would have important applications in both state warfare and commerce. Its members included some of England’s most famous scientists, including Isaac Newton, Robert Hooke, and later Charles Babbage, and even the American colonist Benjamin Franklin. Membership was exclusive to upper-class men with English citizenship who could finance their own research and experimentation.
Most scientific societies today are open to membership of all citizens and genders, and they have had a profound influence on the sharing of scientific discoveries and knowledge among their members and the public. In the United States of America, the American Geophysical Union and the Geological Society of America rank as the largest scientific societies dedicated to the study of Earth science, but hundreds of other scientific societies exist in the fields of chemistry, physics, biology, and geology. These societies often hold meetings, where new discoveries are shared through presentations by members, and they publish their own journals, which distribute research to libraries and fellow members of the society. These journals are often published as proceedings, which can be read by those who cannot attend meetings in person.
The rise of scientific societies allowed the direct sharing of information and fostered a powerful sense of community among the elite experts in various fields of study. It also put into place an important aspect of science today: the idea of peer review. Before the advent of scientific societies, all sorts of theories and ideas were published in books, and so many of these ideas were fictitious that even courts of law favored verbal over written testimony, because they felt the written word was much farther from the truth than the spoken word. Today we face a similar multitude of false ideas and opinions expressed on the internet. It is easy for anyone to post a webpage or express a thought on a subject; you just need a computer and an internet connection.
To combat widely spreading fictitious knowledge, the publications of the scientific societies underwent a review system among their members. Before an idea or observation was placed into print in a society’s proceedings, it had to be approved by a committee of fellow members, typically 3 to 5, who agreed that it had merit. This became what we call peer review. A paper or publication that underwent this process was given the stamp of approval of the top experts within that field. Many manuscripts submitted for peer review are never published, as one or more of the expert reviewers may find them lacking evidence and reject them. However, readers found peer-reviewed articles to be of much better quality than other printed works, and realized that these works carried more authority than written works that did not go through the process.
Today peer-reviewed articles are an extremely important part of scholarly publication, and you can search exclusively among peer-reviewed articles using many of the popular bibliographic databases and indexes, such as Google Scholar (scholar.google.com); GeoRef, published by the American Geosciences Institute and available through library subscription; and Web of Science, published by the Canadian-based Thomson Reuters Corporation and also available only through library subscription. If you are not a member of a scientific society, retrieved online articles are available for purchase, and many are now accessible to non-members for free online, depending on the scientific society and the publisher of its proceedings. Most major universities and colleges subscribe to these scholarly journals, and access may require a physical visit to a library to read articles.
While peer-reviewed publications carry more weight among experts than news articles and magazines published by the popular press, the system can be subject to abuse. Ideas that are revolutionary, pushing science and discovery beyond what current peers believe is true, are often rejected from publication because they might prove the reviewers wrong. Conversely, ideas that conform to the peer reviewers’ current understanding are often approved for publication. As a consequence, peer review favors more conservatively held ideas. Peer review can also be stacked in an author’s favor when close friends are the reviewers, while a newcomer to a scientific society may have much more trouble getting new ideas published and accepted. The process can be long, with some reviews taking several years before an article is accepted and published. Controversial subjects or ideas can also cause feuds among the members of a scientific society. Max Planck, a well-known German physicist, lamented that “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die and a new generation grows up that is familiar with it.” In other words, science progresses one funeral at a time.
Another limitation of peer review is that the articles are often not read outside the membership of the society. Most local public libraries do not subscribe to these specialized academic journals, so access to scholarly articles is limited to students at large universities and colleges and to members of the scientific society. In the early centuries of their existence, scientific societies were seen as the exclusive realm of privileged, wealthy, high-ranking men, and the knowledge contained in their articles was locked away from the general public. Public opinion of scientific societies, especially in the late 1600s and early 1700s, viewed them as secretive and often associated them with alchemy, magic, and sorcery, with limited public engagement with the experiments and observations made by their members.
The level of secrecy changed rapidly during the Age of Enlightenment in the late 1700s and early 1800s with the rise of widely read newspapers which reported scientific discoveries to the public. The American, French, and Haitian Revolutions were likely brought about as much by a desire for freedom of thought and of the press as by the opening of scientific knowledge and inquiry into the daily lives of the public. Most of the founders of the United States of America pursued scientific inquiry avocationally or professionally, directly influenced by the scientific philosophy of Francis Bacon, particularly Thomas Jefferson.
Major Paradigm Shifts in Science
In 1788 the Linnean Society of London was formed, becoming the first major society dedicated to the study of biology and life in all its forms. Named after the Swedish biologist Carl Linnaeus, who laid out the ambitious goal of naming all species of life in a series of continually updated books first published in 1735, the Linnean Society was for members interested in discovering new forms of life around the world. The great explorations of the world during the Age of Enlightenment resulted in the rising status of the society, as new reports of strange animals and plants were studied and documented.
The great natural history museums were born during this time of discovery to hold physical examples of these forms of life for comparative study. The Muséum National d’Histoire Naturelle in Paris was founded in 1793, following the French Revolution. It was the first natural history museum established to store the vast variety of life forms from across the planet, and it housed scientists who specialized in the study of life. Similar natural history museums in Britain and America struggled to find financial backing until the mid-1800s, with the establishment of a permanent British Museum of Natural History (now known as the Natural History Museum, London) in the 1830s, and of the American Museum of Natural History and the Smithsonian Institution following the American Civil War in the 1870s.
The vast search for new forms of life resulted in the discovery by Charles Darwin and Alfred Wallace that, through the process of natural selection, life forms can evolve into new species. Charles Darwin published his famous book, On the Origin of Species by Means of Natural Selection, in 1859, and as with Copernicus before him, science was forever changed. Debate over the acceptance of this new paradigm resulted in a schism among scientists of the time, and in a new informal society of Darwin’s supporters dubbed the X Club, led by Thomas Huxley, who became known as Darwin’s Bulldog. Articles which supported Darwin’s theory were systematically rejected by the established scientific journals of the time, so the members of the X Club established the journal Nature, which is today considered one of the most prestigious scientific journals. New major scientific paradigm shifts often result in new scientific societies.
The Industrialization of Science
Public fascination with natural history and the study of Earth grew greatly in the late 1700s and early 1800s, with the first geological mapping of the countryside and the naming of layers of rocks. An ancient age and a long history for the Earth were first suggested with the discovery of dinosaurs and other extinct creatures by the mid-1800s.
The study of Earth led to the discovery of natural resources such as coal, petroleum, and valuable minerals, and to advances in the use of fertilizers and in agriculture, which led to the Industrial Revolution.
All of this was due to the eliminative induction advocated by Francis Bacon, but the method was beginning to reach its limits. Charles Darwin wrote of the importance of his pure love of natural science, based solely on observation and the collection of facts, coupled with a strong desire to understand or explain whatever is observed. He also had a willingness to give up any hypothesis, no matter how beloved it was to him. Darwin distrusted deductive reasoning, in which an idea is examined by looking for its confirmation in the world, and strongly recommended that science remain based on blind observation of the natural world; yet he realized that observation without a hypothesis, without a question, was foolish. For example, it would be foolish to measure the orientation of every blade of grass in a meadow just for the sake of observation. The act of making observations assumed that there was a mystery to be solved, but its solution should remain unverified until all possible observations are made.
Darwin was also opposed to vivisection, the cruel practice of experimenting on and dissecting live animals or people in ways that lead to suffering, pain, or death. There was a dark side to Francis Bacon’s unbridled observation when it came to experimenting on living people and animals without ethical oversight. Mary Shelley’s Frankenstein, published in 1818, was the first instance of a common literary trope: the mad scientist and the unethical pursuit of knowledge through vivisection and the general cruelty of experimentation on people and animals. Yet these experiments advanced knowledge, particularly in medicine, and they remain an ethical issue that science grapples with even today.
From the American Civil War through World War I, governments became more involved in the pursuit of science than at any prior time, founding federal agencies for the study of science, including agencies for maintaining the safety of industrially produced food and medicine. The industrialization of the world left citizens dependent on the government for oversight of the safety of food that was purchased for the home rather than grown at home. New medicines which were addictive or poisonous were tested by government scientists before they could be sold. Governments mapped their borders in greater detail with government-funded surveys, and charted trade waters for the safe passage of ships. Science was integrated into warfare and the development of airplanes, tanks, and guns. Science was assimilated into the government, which funded its pursuits, as science became instrumental to the political ambitions of nations.
However, freedom of inquiry and the pursuit of science through observation were restricted with the rise of authoritarianism and nationalism. Fascism arose in the 1930s through the dissemination of falsehoods which stoked hatred and fear among the populations of Europe and elsewhere. The rise of propaganda using the new media of radio and later television nearly destroyed the world of the 1940s, and the scientific pursuit of pure observation was not enough to counter political propaganda.
The Modern Scientific Method
During the 1930s Karl Popper, who watched the rise of Nazi fascism in his native Austria, set about codifying a new philosophy of science. He was particularly impressed by a famous experiment conducted on Albert Einstein’s theory of general relativity. In 1915 Albert Einstein proposed, using predictions of the orbits of planets in the solar system, that large masses are not merely attracted to each other, but that matter and energy curve the very fabric of space. To test the idea of curved space, scientists planned to measure the positions of stars in the sky during a total solar eclipse. If Einstein’s theory was correct, starlight would bend around the sun, resulting in an apparent shift in the positions of stars near the sun; if he was incorrect, the stars would remain in the same positions. In 1919, Arthur Eddington led an expedition to the island of Príncipe, off the west coast of Africa (a companion expedition observed from Brazil), to observe a total solar eclipse, and using a telescope he confirmed that the stars’ positions did change during the eclipse, as predicted by general relativity. Einstein was right! The experiment was in all the newspapers, and Albert Einstein went from an obscure physicist to someone synonymous with genius.
Influenced by this famous experiment, Karl Popper dedicated the rest of his life to the study of scientific methods as a philosopher. Popper codified what made Einstein’s theory and Eddington’s experiment “scientific”: the experiment carried the risk of proving the idea wrong. Popper wrote that, in general, what makes something scientific is the ability to falsify an idea through experimentation. Science is not just the collection of observations, because if you view the world under the lens of a proposed idea you are likely to see confirmation and verification everywhere. Popper wrote that “the criterion of the scientific status of a theory is its falsifiability, or refutability, or testability” (Popper, 1963, Conjectures and Refutations). And as Darwin wrote, a scientist must give up a theory if it is falsified through observations; if a scientist tries to save it with ad hoc exceptions, that destroys the scientific merit of the theory.
Popper developed the modern scientific method that you find in most school textbooks: a formulaic recipe in which you come up with a testable hypothesis, carry out an experiment which either confirms the hypothesis or refutes it, and then report your results. Scientific writing shifted during this time to a very structured format: introduce your hypothesis, describe your experimental methods, report your results, and discuss your conclusions. Popper also developed a hierarchy of scientific ideas, with the lowest being hypotheses, which are unverified testable ideas; above them sit theories, which are verified through many experiments; and finally principles, which have been verified to such an extent that no exception has ever been observed. This does not mean that principles are truth, but they are supported by all observations and attempts at falsification.
Popper drew a line in the sand to distinguish what he called science from pseudo-science. Science is falsifiable, whereas pseudo-science is unfalsifiable. Once a hypothesis is proven false it should be rejected, though the question it addresses need not be abandoned.
For example, a hypothesis might be “Bigfoot exists in the mountains of Utah.” The test might be “Has anyone ever captured a bigfoot?” If the answer is “No,” we reject the hypothesis and conclude that bigfoot does not exist. This does not mean that we stop looking for bigfoot, only that the hypothesis is unlikely to be supported. If someone continues to defend the idea that bigfoot exists in the mountains of Utah despite the lack of evidence, the idea moves into the realm of pseudo-science, whereas “Bigfoot does not exist” remains in the realm of science: it carries the risk that someone will find a bigfoot and prove it wrong. If you cling to the idea that bigfoot exists without evidence, then it is not science but pseudo-science, because the claim has become unfalsifiable.
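Popper's criterion can be sketched in a few lines of code. The simplification below is ours, not Popper's: we treat a claim as falsifiable only if we can name at least one conceivable observation that would refute it, mirroring the bigfoot example.

```python
# A rough sketch of Popper's falsifiability criterion, using the bigfoot
# example. The simplification is ours: a claim counts as falsifiable only
# if at least one conceivable refuting observation can be named.

from dataclasses import dataclass, field

@dataclass
class Claim:
    statement: str
    refuting_observations: list = field(default_factory=list)

    def is_falsifiable(self) -> bool:
        # Scientific status requires some possible observation
        # that would prove the claim wrong.
        return len(self.refuting_observations) > 0

bigfoot_exists = Claim(
    "Bigfoot exists in the mountains of Utah but always evades detection",
    refuting_observations=[],  # no conceivable observation can refute it
)
no_bigfoot = Claim(
    "Bigfoot does not exist",
    refuting_observations=["a captured bigfoot"],  # one capture refutes it
)

print(bigfoot_exists.is_falsifiable())  # False -> pseudo-science by this test
print(no_bigfoot.is_falsifiable())      # True  -> science by this test
```

The asymmetry is the point: the claim that survives every possible observation is the one with no scientific content.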
How Governments Can Awaken Scientific Discovery
On August 6 and 9, 1945, the United States dropped atomic bombs on the cities of Hiroshima and Nagasaki in Japan, ending World War II. It sent a strong message that scientific progress was powerful. Two weeks before the dramatic end of the war, Vannevar Bush wrote to the President that “Scientific progress is one essential key to our security as a nation, to our better health, to more jobs, to a higher standard of living, and to our cultural progress.”
Bush proposed that funds should be set aside for pure scientific pursuit, which would cultivate scientific research within the United States of America, and he drafted his famous report, Science, The Endless Frontier. From the recommendations in the report, five years later in 1950 the United States government created the National Science Foundation for the promotion of science. Unlike agency or military scientists, who were full-time employees, the National Science Foundation offered grants to scientists for the pursuit of scientific questions. It allowed funding for citizens to pursue scientific experiments, travel to collect observations, and carry out scientific investigations of their own.
The hope was that these grants would cultivate scientists, especially in academia, who could be called upon during times of crisis. Funding was determined by the scientific process of peer review rather than the legal process of appeal to authority. However, the National Science Foundation has struggled since its inception, railed against by politicians of a legal persuasion who argue that only Congress or the President should decide which scientific questions deserve funding. As the budgets of most governments demonstrate, most government science funding supports military applications funded directly by politicians, rather than research chosen by panels of independent scientists.
How to Think Critically in a Media Saturated World
During the post-war years and up to the present, false ideas were perpetrated not only by those in authority, but also by the meteoric rise of advertising: propaganda designed to sell things.
With the mass media of the late 1900s, and even today, the methods of scientific inquiry became more important in combating falsehood, not only among those who practiced science, but among the general public. Following modern scientific methods, skepticism became a vital tool not only in science, but in critical thinking and the general pursuit of knowledge. Skepticism assumes that anyone may be lying to you, and people are especially prone to lie when selling you something. The common mid-century phrase “There’s a sucker born every minute” exalted the pursuit of tricking people for profit, and to protect yourself from scams and falsehood you need to become skeptical.
To codify this in a modern scientific framework, Carl Sagan developed his “baloney detection kit,” outlined in his book The Demon-Haunted World: Science as a Candle in the Dark. Sagan, a popular professor at Cornell University in New York best known for his television show Cosmos, had been diagnosed with cancer when he set out to write his final book. Sagan worried that, like a lit candle in the dark, science could be extinguished if not put into practice.
He was aghast to learn how much of the general public believed in witchcraft, magic stones, ghosts, astrology, crystal healing, holistic medicine, UFOs, Bigfoot and the Yeti, and sacred geometry, and how many opposed vaccination and inoculation against curable diseases. He feared that with a breath of wind, scientific thought would be extinguished by the widespread belief in superstition. To prevent that, before his death in 1996, he left us with this “baloney detection kit,” a method of skeptical thinking to help evaluate ideas.
Step one: Wherever possible there must be independent confirmation of the “facts.”
Step two: Encourage substantive debate on the evidence by knowledgeable proponents of all points of view.
Step three: Arguments from authority carry little weight— “authorities” have made mistakes in the past. They will do so again in the future. Perhaps a better way to say it is that in science there are no authorities; at most, there are experts.
Step four: Spin more than one hypothesis. If there’s something to be explained, think of all the different ways in which it could be explained. Then think of tests by which you might systematically disprove each of the alternatives.
Step five: Try not to get overly attached to a hypothesis just because it’s yours. It’s only a way station in the pursuit of knowledge. Ask yourself why you like the idea. Compare it fairly with the alternatives. See if you can find reasons for rejecting it. If you don’t, others will.
Step six: Quantify. If whatever it is you’re explaining has some measure, some numerical quantity attached to it, you’ll be much better able to discriminate among competing hypotheses. What is vague and qualitative is open to many explanations. Of course there are truths to be sought in the many qualitative issues we are obliged to confront, but finding them is more challenging.
Step seven: If there’s a chain of argument, every link in the chain must work (including the premise) — not just most of them.
Step eight: Occam’s Razor. This convenient rule-of-thumb urges us when faced with two hypotheses that explain the data equally well to choose the simpler.
Step nine: Always ask whether the hypothesis can be, at least in principle, falsified. Propositions that are untestable, unfalsifiable are not worth much. You must be able to check assertions out. Inveterate skeptics must be given the chance to follow your reasoning, to duplicate your experiments and see if they get the same result.
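The nine steps above lend themselves to a simple checklist. The sketch below paraphrases Sagan's steps in our own words, and the pass/fail scoring is our own simplification, not part of the kit.

```python
# A casual sketch of the baloney detection kit as a checklist. The
# questions paraphrase Sagan's nine steps in our own words, and the
# pass/fail scoring is our own simplification.

CHECKLIST = [
    "Have the facts been independently confirmed?",
    "Has the evidence been debated by proponents of all points of view?",
    "Does the argument rest on evidence rather than authority?",
    "Were multiple alternative hypotheses spun and tested?",
    "Is the proposer willing to reject their own hypothesis?",
    "Is the claim quantified rather than vague and qualitative?",
    "Does every link in the chain of argument hold, premise included?",
    "Is it the simplest hypothesis that explains the data (Occam's razor)?",
    "Is the claim falsifiable, and can skeptics duplicate the result?",
]

def count_failed_checks(answers):
    """Given one boolean per checklist question, count the failures.
    The more failed checks, the more likely the claim is baloney."""
    assert len(answers) == len(CHECKLIST)
    return sum(1 for passed in answers if not passed)

# A hypothetical claim scored against the checklist:
failures = count_failed_checks(
    [True, False, True, False, True, False, True, True, False]
)
print(failures)  # 4 of the 9 checks failed
```

A single failed check does not prove a claim false; like the kit itself, the tally is only a prompt for closer scrutiny.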
The baloney detection kit is a casual way to evaluate ideas through a skeptical lens. It borrows heavily from the scientific method, but has enjoyed wider adoption outside of science as a method of critical thinking.
Carl Sagan never witnessed the incredible growth of mass communication through the development of the Internet at the turn of the twenty-first century, and the rapidity with which information can now be shared globally and instantaneously, which has become a powerful tool both for science and for propaganda.
Accessing Scientific Information
The newest scientific revolution, beginning in the early 2000s, concerns access to scientific information and the breaking of barriers to free inquiry. In the years leading up to the Internet, scientific societies relied on traditional publishers to print journal articles. Members of the societies would author new works, and review other submissions for free on a voluntary basis. The society or publisher would own the copyright to each scientific article, which was sold to libraries and institutions for a profit. Members would receive a copy as part of their membership fees. However, low readership and high printing costs for these specialized publications resulted in expensive library subscriptions.
With the advent of the internet in the 1990s, traditional publishers began scanning and archiving their vast libraries of copyrighted content onto the Internet, placing access behind paywalls. University libraries with an institutional subscription would allow students to connect through the library to access articles, while archival articles remained locked behind paywalls for the general public.
Academic scientists were locked into the system because tenure and advancement within universities and colleges depended on their publication record. Traditional publications carried higher prestige, despite having low readership.
Publishers exerted a huge amount of control over who had access to scientific peer-reviewed articles, and students and aspiring scientists at universities were often locked out of access to these sources of information. There was a need to revise the traditional peer-review publishing model.
Open Access and Science
One of the most important originators of a new model for the distribution of scientific knowledge was Aaron Swartz. In 2008, Swartz published a famous essay entitled Guerilla Open Access Manifesto, and he led a life as an activist fighting for free access to scientific information online. Swartz was fascinated with online collaborative publications such as Wikipedia, which is assembled from information contributed by volunteers who write articles on topics, verified and modified by large groups of users who keep the website up to date. Wikipedia grew out of a large user community, much like the scientific societies, but with an easy entry point for contributing new information and editing pages, and it quickly became one of the most visited sites on the internet for the retrieval of factual information. Swartz advocated for Open Access, the principle that all scientific knowledge should be accessible to anyone. He petitioned for Creative Commons licensing, and strongly encouraged scientists to publish their knowledge online without copyrights or restrictions on sharing that information.
Open Access had its adversaries in the form of law enforcement, politicians and governments with nationalist or protectionist tendencies, and private companies with large revenue streams from intellectual property. These adversaries argued that freely shared scientific information could be used to make illicit drugs, build new types of weapons, hack computer networks, encrypt communications, and share state secrets and private intellectual property. But it was private companies with large holdings of intellectual property who worried most about the Open Access movement, lobbying politicians to enact stronger laws prohibiting the sharing of copyrighted information online.
In 2010 Aaron Swartz used a computer located on MIT’s open campus to download scientific articles from the publisher JSTOR using an MIT computer account. After JSTOR noticed a large surge of online requests from MIT, it contacted campus police. The campus police arrested Swartz, and he faced charges carrying up to 35 years in prison and $1,000,000 in fines. Faced with these criminal charges, Swartz committed suicide in 2013.
The repercussions of Aaron Swartz’s ordeal pushed scientists to find alternative ways to distribute scientific information to the public, rather than relying on corporate for-profit publishers. The Open Access movement was ignited and radicalized, though its details are still being worked out among various groups of scientists even today.
The Ten Principal Sources of Scientific Information
There are ten principal sources of scientific information you will encounter, and each one should be viewed with skepticism. These sources can be ranked on a scale of the reliability of the information they present. The original source of a piece of information can be difficult to determine, but by sorting sources into these ten categories you can judge the relative truthfulness of the information presented. All of them report some level of falsehood; however, the higher the ranking, the fewer falsehoods the material is likely to contain, and the closer it approaches truthful statements.
1. Advertisements and Sponsored Content. Any content that is intended to sell you something and made with the intention of making money. Examples include commercials on radio, television, printed pamphlets, paid posts on Facebook and Twitter, sponsored YouTube videos, webpage advertisements, and spam email and phone calls. These sources are the least reliable sources of truthfulness.
2. Personal Blogs or Websites. Any content written by a single person without any editorial control or any verification by another person. These include personal websites, YouTube videos, blog posts, Facebook, Twitter, Reddit posts and other online forums, and opinion pieces written by single individuals. These sources are not very good sources of truthfulness, but can be insightful in specific instances.
3. News Sources. Any content produced by journalists with the intention of maintaining interest and viewership with an audience. Journalism is the production and distribution of reports on recent events, and while subjected to a higher standard of fact checking (by an editor or producer), journalists are limited by the need to maintain interest with an audience who will tune in or read their content. News stories tend to be shocking, scandalous, feature famous individuals, and address trending or controversial ideas. They are written by non-experts who rely on the opinions of experts whom they interview. Many news sources are politically oriented in what they report. Examples of news sources are cable news channels, local and national newspapers, online news websites, aggregated news feeds, and news reported on the radio or broadcast television. These tend to be truthful, but often with strong biases on the subjects covered, factual mistakes and errors, and a fair amount of sensationalism.
4. Trade Magazines or Media. Any content produced on a specialized topic by freelance writers who are familiar with the topic they are writing about. Examples include magazines that cover a specialized topic, podcasts hosted by experts in the field, and edited volumes with chapters contributed by experts. These tend to have a higher level of truthfulness, because the writing staff who create the content are more familiar with the specialized topics covered, and there is some editorial control over the content.
5. Books. Books are lengthy written treatments of a topic, which require the writer to become familiar with a specific subject of interest. Books are incredible sources of information and are insightful to readers wishing to learn more about a topic. They also convey information that can be inspirational. Books are the result of long-term dedication on behalf of an author or team of authors, who are experts on the topic or become experts through the research that goes into writing a book. Books encourage further learning of a subject, and have a greater depth of content than other sources of information. Books receive editorial oversight if their contents are published traditionally. One should be aware that authors can write with a specific agenda or viewpoint, which may express falsehoods.
6. Collaborative Publications or Encyclopedias. These are sources of scientific information produced by teams of experts with the specific intention of presenting a consensus on a topic. Since the material presented must be subjected to debate, these sources tend to carry more authority, as they must satisfy skepticism from multiple contributors on the topic. Examples include Wikipedia, governmental agency reports, reports by the National Academy of Sciences, and reports by the United Nations.
7. Preprints, Press Releases, and Meeting Abstracts. Preprints are manuscripts submitted for peer review, but made available online to solicit additional comments and suggestions from fellow scientists. In the study of the Earth, the most common preprint service is eartharxiv.org. Preprints are often picked up by journalists and reported as news stories. Preprints are a way for scientists to convey information to the public more quickly than going through full peer review, and they help the authors establish precedence for a scientific discovery. They are a fairly recent phenomenon in science, first developed in 1991 with arxiv.org (pronounced "archive"), a moderated web service that hosts papers in the sciences.
Press releases are written by staff writers at universities, colleges, and government agencies when an important research study is about to be published as a peer-reviewed paper. Journalists will often write a story based on the press release, as press releases are written for a general audience and avoid scientific jargon and technical details. Most press releases will link to the scientific peer-reviewed paper that has been published, so you should also read the referenced paper.
Meeting abstracts are short summaries of research that are presented at scientific conferences or meetings. These are often reported on by journalists who attend the meetings. Some meeting abstracts are invited or peer-reviewed before scientists are allowed to present their research at the meeting; others are not. Abstracts represent ongoing research that is being presented for scientific evaluation. Not all preprints and meeting abstracts will make it through the peer-review process, and while many ideas are presented in these formats, not all will be published with a follow-up paper. At scientific meetings, scientists can present their research as a talk or as a poster. Recordings of the talks are sometimes posted on the internet, while copies of the posters are sometimes uploaded as preprints. Meeting abstracts are often the work of graduate or advanced undergraduate students who are pursuing student research on a topic.
8. Sponsored Scholarly Peer-Review Articles with Open Access. These are publications that are selected by an editor and peer-reviewed, but the authors pay the journal to publish the article if it is accepted. Technically these are advertisements or sponsored content, since there is an exchange of money from the author or creator of the material to the journal, which makes the article accessible to the public on the journal's website; however, their intention is not to sell a product. With the Open Access movement, many journals publish scholarly articles in this fashion, since the published articles are available for the public to read free of charge. However, there is abuse. Beall's List was established to identify predatory or fraudulent scholarly journals that actively solicit scientists and scholars, but do not offer quality peer review and hosting in return for the money exchanged for publication. Not all Sponsored Scholarly Peer-Review Articles with Open Access are problematic, and many are well respected, as many traditional publications offer options that allow public access to an article in exchange for money from the authors. Publication fees vary greatly, with publishers asking anywhere from a few hundred dollars up to the price of a new car. Large governments and well-funded laboratories tend to publish in the more expensive Open Access journals, which often offer press releases and help publicize their work on social media.
9. Traditional Scholarly Peer-Review Articles behind a Paywall. Most scholarly scientific peer-reviewed articles are published in traditional journals. These journals earn income only from subscriptions paid by readers, rather than from authors paying to publish their works. Authors and reviewers are not paid, and there is no exchange of money to publish in these journals. Manuscripts are reviewed by 3 to 5 expert reviewers who are contacted by the editor before any consideration of publication. Copyright is held by the journal, and individual articles can be purchased online. Many university and college libraries have institutional online and print subscriptions to specific journals, so you can borrow a physical copy of the journal from the library if you would like to read the publication. Older back issues are often available for free online when the copyright has expired. Most scientific articles are published in this format.
10. Traditional Scholarly Peer-Review Articles with Open Access. These journals are operated by volunteers, allowing authors to submit works for consideration and peer review without being required to pay any money to the journal if the article is accepted for publication. Editors and reviewers work on a voluntary basis, with web hosting services provided by endowments and donations. Articles are available online for free download by the general public, without any subscription to the journal, and are Open Access. Copyright can be retained by the author or journal, or the article can be distributed under a Creative Commons License. These journals are rarer, because they are operated by a volunteer staff of scientists.
Researchers studying a topic will often limit themselves to sources 5 through 10 as acceptable sources of information, while others may be more restrictive and consult only sources 8 through 10, or only fully peer-reviewed sources. Any source of information can present falsehoods and any source can touch upon truth, but the higher a source sits on this scale, the more verification it had to go through to get published.
Imagine that a loved one is diagnosed with cancer, and you want to learn more about the topic. Most people will consult sources 1 through 3, or 1 through 5, but if you want to learn what medical professionals are reading, sources above 5 are good places to start, since they are more likely to have been verified by experts than lower-ranking sources of information. The higher the ranking, the more technical the writing will be and the more specific the information. Remember, it is important to consult many sources to verify that the information you consume is correct.
Why We Pursue Scientific Discovery
Hope Jahren wrote in her 2016 book Lab Girl, “Science has taught me that everything is more complicated than we first assume, and that being able to derive happiness from discovery is a recipe for a beautiful life.” In a modern world where it appears that everything has been discovered and explored, and everything ever known has been written down by someone, it is refreshing to know that there are still scientific mysteries to discover.
For a budding scientist it can be incredibly daunting as you learn about science. The more you learn about a scientific topic, the more you become overwhelmed by its complexity. Furthermore, any new scientific contribution you make is often met by extreme criticism from scientific experts. Scientists are taught to be highly critical and skeptical of new ideas, and rarely embrace new contributions easily, especially from someone new to the field. Too frequently young scientists are told by experts what to study and how to study it, but science is still a field of experimentation, observation, and exploration. Remember, science should be fun. The smallest scientific discovery often leads to the largest discoveries.
Hope Jahren discovered that hackberry trees have seeds made of calcium carbonate (aragonite). While this is an interesting fact on its own, it opened the door to a better understanding of past climate change, since oxygen isotopes in aragonite crystals can be used to determine the annual growing temperature of the trees that produced them. This discovery allowed scientists to determine the climate whenever hackberry trees dropped their seeds, even in rock layers millions of years old, establishing a record of growing temperatures extending millions of years into the Earth's past.
Such discoveries lead to the metaphor that scientists craft tiny keys that unlock giant rooms, and it is not until the door is unlocked that people, even fellow scientists, realize the implications of the years of research used to craft the tiny key. So, it is important to derive happiness from each and every scientific discovery you make, no matter how small or how insignificant it may appear. Science and actively seeking new knowledge and new experiences will be the most rewarding pursuit of your life.
1b. Earth System Science: Gaia or Medea?
Earth as a Puddle
“Imagine a puddle waking up one morning and thinking, 'This is an interesting world I find myself in — an interesting hole I find myself in — fits me rather neatly, doesn't it? In fact it fits me staggeringly well, must have been made to have me in it!' This is such a powerful idea that as the sun rises in the sky and the air heats up and as, gradually, the puddle gets smaller and smaller, frantically hanging on to the notion that everything's going to be alright, because this world was meant to have him in it, was built to have him in it; so the moment he disappears catches him rather by surprise. I think this may be something we need to be on the watch out for.” -Douglas Adams The Salmon of Doubt
In December 1968, Astronaut William A. Anders on Apollo 8 took a picture of the Earth rising above the Moon's horizon. It captured how small Earth is when viewed from space, and this image had a profound effect on humanity. Our planet, our home, is a rather small, insignificant place when viewed from the great distance of outer space. Our lives are collectively lived from the perspective of Earth looking outward, but with this picture, taken from orbit around the Moon looking back at us, we realized that our planet is really just a small place in the universe.
Earth System Science was born during this time in history, in the 1960s, when the exploration of the Moon and other planets allowed us to turn the cameras back on Earth and study it from afar. Earth System Science is the scientific study of Earth's component parts (the solid rocks, liquid oceans, growing life forms, and gaseous atmosphere) and how these components function, interact, and evolve, and how these interactions change over long timescales. The goal of Earth System Science is to develop the ability to predict how and when those changes will occur from naturally occurring events, as well as in response to human activity. Using the metaphor of Douglas Adams' sentient puddle above, we don't want to be surprised if our little puddle starts to dry up!
A system is a set of things working together as parts of a mechanism or an interconnecting network, and Earth System Science is interested in how these mechanisms work in unison with each other. Scientists interested in these global questions simplify their study into global box models. Global box models are analogies that can be used to help visualize how matter and energy move and change across an entire planet from one place or state to another.
For example, the global hydrological cycle can be illustrated by a simple box model with three boxes, representing the Ocean, the Atmosphere, and Lakes and Rivers. Water evaporates from the ocean into the atmosphere, where it forms clouds. Clouds in the atmosphere rain or snow on the surface of the ocean and land, filling rivers and lakes (and other sources of fresh water), which eventually drain into the ocean. Arrows between the boxes indicate the direction that water moves between these categories. Flux is the rate at which matter moves from one box into another, which can change depending on the amount of energy. Because flux is a rate, it is calculated as a unit over time; in the case of water, this could be the volume of water that falls as rain or snow per year.
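The three-box model just described can be sketched in code. The reservoir sizes and flux rates below are hypothetical placeholders, chosen only to illustrate how matter moves between boxes:

```python
# A minimal sketch of the three-box hydrological model described above.
# All reservoir sizes and flux rates are hypothetical, for illustration
# only (units: arbitrary volumes of water per year).

boxes = {"ocean": 1000.0, "atmosphere": 10.0, "lakes_rivers": 50.0}

# Fluxes: (from_box, to_box) -> rate per year
fluxes = {
    ("ocean", "atmosphere"): 4.0,         # evaporation
    ("atmosphere", "ocean"): 3.0,         # rain/snow over the ocean
    ("atmosphere", "lakes_rivers"): 1.0,  # rain/snow over land
    ("lakes_rivers", "ocean"): 1.0,       # rivers draining to the sea
}

def step(boxes, fluxes, dt=1.0):
    """Advance the box model by dt years, moving water along each flux."""
    new = dict(boxes)
    for (src, dst), rate in fluxes.items():
        moved = rate * dt
        new[src] -= moved
        new[dst] += moved
    return new

after = step(boxes, fluxes)
# The total amount of water never changes: this model is closed to matter.
```

Note that with these made-up numbers each box gains exactly as much as it loses per step, so the model sits at equilibrium; changing any one flux rate would make some boxes grow and others shrink.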
There are three types of systems that can be modeled: Isolated Systems, in which neither energy nor matter can enter the model from the outside; Closed Systems, in which energy, but not matter, can enter the model; and Open Systems, in which both energy and matter can enter the model from elsewhere.
Global Earth Systems are regarded as closed systems, since the amount of matter entering Earth from outer space is a tiny fraction of the total matter that makes up the Earth. In contrast, the amount of energy arriving from outer space, in the form of sunlight, is large. Earth is largely open to energy entering the system, and closed to matter. (Of course, there are rare exceptions to this, such as when meteorites from outer space strike the Earth.)
In our global hydrological cycle, if our box model were isolated, allowing no energy and no matter to enter the system, there would be no incoming energy for the process of evaporation, and the flux rate between the ocean and atmosphere would decrease to zero. Isolated systems, with no exchange of energy and matter, will slow down over time and eventually stop functioning, even if they have an internal energy source. We will explore why this happens when we discuss energy. If the box model is open, such as if ice-covered comets frequently hit the Earth from outer space, there would be a net increase in the total amount of water in the model; likewise, if water were able to escape into outer space from the atmosphere, there would be a net decrease in the total amount of water in the model over time. So it is important to determine whether the model is truly closed or open to both matter and energy.
In box models we also want to explore all possible places where water can be stored. For example, water on land might go underground to form groundwater and enter spaces beneath Earth's surface, so we might add an additional box to represent groundwater and its interaction with surface water. We might want to distinguish water locked up in ice and snow by adding another box to represent frozen water resources. You can begin to see how a simple model can become more complex over time, as we consider all the types of interactions and sources that may exist on the planet.
A reservoir is a term used to describe a box which represents a very large abundance of matter or energy relative to other boxes. For example, the world's ocean is a reservoir of water, because most of the planet's water is found in the world's oceans. A reservoir is relative and can change if the amount of energy or matter in the source decreases in relation to other sources. For example, if solar energy from the sun increased and the oceans boiled and dried away, the atmosphere would become the major reservoir of water for the planet, since the portion of water locked in the atmosphere would be more than that found in the ocean. In a box model, a reservoir is called a sink when more matter is entering the reservoir than is leaving it, while a reservoir is called a source when more matter leaves the box than enters it. Reservoirs are increasing in size when they are a sink, and decreasing in size when they are a source.
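The sink-versus-source distinction comes down to the sign of a reservoir's net flux (inflow minus outflow). A minimal sketch, with hypothetical flux values:

```python
# Classify a reservoir as a sink or a source from its net flux.
# The flux values in the examples are hypothetical, for illustration only.

def classify(inflow, outflow):
    """A reservoir is a sink if it gains matter, a source if it loses it."""
    net = inflow - outflow
    if net > 0:
        return "sink"      # reservoir growing
    if net < 0:
        return "source"    # reservoir shrinking
    return "in balance"    # fluxes in and out are equal

print(classify(4.0, 3.0))  # sink
print(classify(2.0, 5.0))  # source
```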
Sequestration is a term used when a source becomes isolated and the flux between boxes is a very slow rate of exchange. Groundwater, which represents a source of water isolated from the ocean and atmosphere, can be considered an example of sequestration. Matter and energy which are sequestered have very long residence times, the residence time being the length of time energy and matter reside in a given box.
Residence times can be very short, such as a few hours when water from the ocean evaporates and then falls back into the ocean as rain, or very long, such as a few thousand years when water is locked up in ice sheets, or even millions of years underground. Matter that is sequestered is locked up for millions of years, such that it is effectively taken out of the system.
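At steady state, a residence time can be estimated by dividing the size of a reservoir by the flux passing through it. A minimal sketch, using rough, commonly cited magnitudes for water in the atmosphere (these numbers are approximations supplied here for illustration, not figures from the text):

```python
# Residence time: at steady state, the average time matter spends in a
# reservoir equals the reservoir's size divided by the flux through it.
# The magnitudes below are rough, commonly cited estimates for
# atmospheric water, used here only to illustrate the calculation.

def residence_time(reservoir_size, flux_through):
    """Average residence time = amount stored / rate of throughput."""
    return reservoir_size / flux_through

# ~13,000 km^3 of water held in the atmosphere, with roughly
# 500,000 km^3 per year cycling through as evaporation and precipitation:
years = residence_time(13_000, 500_000)
print(f"about {years * 365:.0f} days")   # on the order of nine days
```

The same formula applied to the deep ocean or to groundwater, with their enormous reservoir sizes and slow fluxes, yields residence times of thousands to millions of years.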
An example of sequestration is an Earth system box model of salt (NaCl), or sodium chloride. Rocks weather in the rain, resulting in the dissolution of sodium and chloride, which are transported to the ocean dissolved in water. The ocean is a reservoir of salt, since salt will accumulate over time through the continued weathering of the land. Edmund Halley (who predicted a comet's return, and after whom the comet was posthumously named) proposed in 1715 that the amount of salt in the oceans is related to the age of the Earth, suggesting that salt has been increasing in the world's oceans over time, and that they will become saltier and saltier into the future. However, this idea was proven false when scientists determined that the world's oceans have maintained a similar salt content over their history. There had to be a mechanism to remove salt from ocean water. The ocean loses salt through the evaporation of shallow seas and landlocked water. The salt left behind from the evaporation of the water in these regions is buried under sediments, and becomes sequestered underground. The flux of incoming salt into the ocean from weathering is similar to the flux leaving the ocean through the burial of evaporated salt. This buried salt will remain underground for millions of years. The salt cycle is at an equilibrium, as the oceans maintain a fairly constant salinity. The sequestration of evaporated salt is an important mechanism that removes salt from the ocean. Scientists began to wonder if Earth exhibits similar mechanisms that maintain an equilibrium, through a process of feedbacks.
Equilibrium is a state in which opposing feedbacks are balanced and conditions remain stable. To illustrate this, imagine a classroom which is climate controlled with a thermostat. When the temperature in the room is above 75 degrees Fahrenheit the air conditioner turns on; when the temperature in the room is below 65 degrees Fahrenheit the heater turns on. The temperature within the classroom will most of the time be at equilibrium, between 65 and 75 degrees Fahrenheit, as the heater and air conditioner are opposing forces that keep the room in a comfortable temperature range. Imagine now that the room becomes filled with students, which increases the temperature in the room; when the room reaches 75 degrees Fahrenheit the air conditioner turns on, cooling the room. The air conditioner is a negative feedback. A negative feedback is an opposing force that reduces fluctuations in a system. In this example the increase in heat from the students in the room is opposed by the cooling of the air conditioning system turning on.
Imagine that a classmate plays a practical joke, and the thermostat is switched. When the temperature in the room is above 75 degrees Fahrenheit the heater turns on; when the temperature in the room is below 65 degrees Fahrenheit the air conditioner turns on. With this arrangement, when students enter the classroom and the temperature slowly reaches 75 degrees Fahrenheit, the heater turns on! The heater is a force acting in the same direction as the heat produced by the students entering the room. A positive feedback is where two forces join together in the same direction, which leads to instability of a system over time. The classroom will get hotter and hotter; even if the students leave the room, the classroom will remain hot, since there is no opposing force to turn on the air conditioner. It likely will never drop down to 65 degrees Fahrenheit with the heater turned on. Positive feedbacks are sometimes referred to as vicious cycles. The tipping point in our example is 75 degrees Fahrenheit, when the positive feedback (the heater) turned on, resulting in the instability of the system and leading to a very miserably hot classroom experience. Tipping points are to be avoided in systems with positive feedbacks.
Gaia or Medea?
One of the most important discussions within Earth System Science is whether the Earth exhibits mostly negative feedbacks or positive feedbacks, and how well regulated the conditions we find on Earth today are. The two hypotheses are named after two figures in Greek mythology: Gaia, the Goddess of Earth, and Medea, the lover of Jason, who murdered her own children. The Gaia Hypothesis maintains that the Global Earth System maintains an equilibrium, or long-term stability, through various negative feedbacks that oppose destabilization of the planet. The Medea Hypothesis maintains that the Global Earth System does not maintain a stable equilibrium, resulting in frequent episodes of catastrophic events. From a geological point of view, the Gaia Hypothesis predicts Uniformitarianism, in which past geological processes through time have mostly remained continuous and uniform, while the Medea Hypothesis predicts Catastrophism, in which most past geological processes are the result of sudden, short-lived, and violent events.
This dichotomy between the optimistic Gaia Hypothesis and the pessimistic Medea Hypothesis is a simplification; in reality, the true Earth System likely exhibits both negative feedbacks and positive feedbacks that interact in complex ways.
Imagine a classroom now equipped with two thermostats: a normal negative feedback, which turns on the air conditioner when the room gets above 75 degrees, and a malfunctioning positive feedback, which turns on the heater when the room gets above 80 degrees. Assume that the classroom begins at a temperature of 70 degrees, and that each student who enters the classroom raises the temperature by 1 degree. The air conditioning will turn on when 5 students enter the classroom. This air conditioner is weak and lowers the temperature by only 1 degree every 10 minutes.
The classroom will maintain an equilibrium temperature, as long as the rate of students entering the room is below 10 students per 100 minutes. For example, if 7 students enter the classroom at the same time, the temperature would rise by 7 degrees, to 77 degrees— turning on the air conditioner when it crossed 75 degrees, and take 20 minutes to lower the temperature back down to 75 degrees. Another 4 students could enter the classroom, raising the temperature to 79 degrees, turning on the air conditioner and lowering the temperature down to 75 degrees in 40 minutes.
However, if 12 students enter the room at the same time, the temperature will rise to 82 degrees, turning on both the air conditioner at 75 degrees and the heater at 80 degrees. The heater warms the room faster (+2 degrees every 10 minutes) than the air conditioner can cool it (-1 degree every 10 minutes), so this positive feedback will cause the classroom to increase in temperature until it is a hot oven, because the net temperature will increase +1 degree every 10 minutes. The tipping point was when the 12 students entered the room all at once, setting off this vicious cycle of a positive feedback. If the rate of students entering the classroom remains low, the room temperature will remain stable, and appear to be governed by the Gaia Hypothesis. However, if the rate of students entering the classroom is fast, the temperature could become unstable, as governed by the Medea Hypothesis.
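The two-thermostat classroom above can be simulated directly. This sketch encodes the rules as stated in the text: air conditioner above 75 degrees Fahrenheit at -1 degree per 10 minutes, heater above 80 degrees at +2 degrees per 10 minutes:

```python
# Simulate the classroom with a negative feedback (air conditioner) and a
# malfunctioning positive feedback (runaway heater), as described above.

def simulate(start_temp, minutes, step=10):
    """Return the temperature (F) after `minutes`, in 10-minute steps."""
    temp = start_temp
    for _ in range(minutes // step):
        change = 0
        if temp > 75:    # negative feedback: air conditioner, -1 F / 10 min
            change -= 1
        if temp > 80:    # positive feedback: heater, +2 F / 10 min
            change += 2
        temp += change
    return temp

# 7 students (70 + 7 = 77 F): the air conditioner alone restores equilibrium.
print(simulate(77, 20))   # back to 75 F after 20 minutes

# 12 students (70 + 12 = 82 F): past the tipping point, net +1 F / 10 min.
print(simulate(82, 60))   # 88 F after an hour, and still climbing
```

Running the second case for longer only makes the room hotter, which is exactly the runaway behavior the text calls a vicious cycle.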
In determining how Earth Systems play out over time, we also need to be aware of the fallacy of the sentient puddle at the beginning of this chapter: a puddle that believes it is perfectly tailored to the environment it finds itself in. The Gaia Hypothesis views the Earth as a perfectly working system that is able to adjust to changes and maintain an equilibrium state. This is similar to the sentient puddle believing that it fits perfectly within the environment in which it finds itself. In contrast, the Medea Hypothesis holds that there will inevitably be an event that dries up the puddle, and that the puddle is not at equilibrium under a warm sun.
Who came up with these ideas?
The Gaia Hypothesis has the longer pedigree, and was first formulated in the 1970s by James Lovelock and Lynn Margulis. Initially called the Earth feedback hypothesis, the name Gaia was proposed by the writer William Golding, the author of Lord of the Flies and a close neighbor and friend of Lovelock. Lovelock was an expert on air quality and respiratory diseases in England, but took up the study of the Earth's sulfur cycle, noting negative feedbacks that appeared to regulate cloud cover. In the 1970s he had been invited by NASA to work on the Viking Missions to Mars, to evaluate the Martian atmosphere for the possible presence of life. Lovelock suggested that if the Viking lander found significant oxygen in the Martian atmosphere, it would be indicative of life existing there. Instead, the Viking lander found that the Martian atmosphere was 96% carbon dioxide, similar to the atmosphere of Venus. Working together, Lovelock and Margulis formulated a hypothesis that there were natural negative feedback systems on Earth that kept both oxygen and carbon dioxide in the atmosphere within a low range, with photosynthesizing plants and microbes taking in carbon dioxide and producing oxygen, while animals take in oxygen and produce carbon dioxide. Life on Earth appeared to keep the atmosphere stable in relation to these two gases. Without life, carbon dioxide remained high in the atmospheres of Mars and Venus.
The Medea Hypothesis is a newer, and more frightening, idea, first proposed by Peter Ward, an American paleontologist, in a book published in 2009 (The Medea Hypothesis: Is Life on Earth Ultimately Self-Destructive?). Ward started out as a marine biologist, but a traumatic diving accident that left his diving partner dead pushed him to study marine organisms found in rocks rather than deep underwater. Ward became interested in fossil ammonites along the coast of Europe, which flourished in the oceans of the Mesozoic Era, during the age of the dinosaurs, but became extinct with the dinosaurs 66 million years ago. Ward studied ammonites and other fossils, and became fascinated with the mass extinction events that have occurred in Earth's history. He became keenly interested in the Permian-Triassic extinction event in South Africa, which divides the Paleozoic Era (ancient life) from the Mesozoic Era (middle life), the great time divisions of Earth's history. This extinction event was one of the worst, colloquially called the Great Dying, and appears to have been caused by an imbalance of too much carbon dioxide in the atmosphere. Ward saw these episodes of mass extinction in the rock record as evidence of times when the Earth system became out of balance, resulting in catastrophic change. Ward, with Donald Brownlee, postulated that Earth's atmosphere millions of years in the future would lose all its carbon dioxide as tectonic and volcanic activity on Earth ceases, resulting in no new sources of carbon dioxide released into the atmosphere. Carbon dioxide would become sequestered underground as photosynthesizing plants and microbes die and are buried, and there would be no new carbon dioxide emitted from volcanoes. As a result, less and less carbon dioxide would be available in the atmosphere, ultimately dooming the planet with the inevitable extinction of all photosynthesizing life forms.
Neither of the advocates of the two hypotheses views the Earth system as exclusively governed by either hypothesis, but rather by a mix of both negative and positive feedbacks working on a global scale over long time intervals. Another way to frame the Gaia and Medea Hypotheses is to ask whether the global Earth System behaves mostly under negative or positive feedbacks. Of course, the goal of this class is to determine how you can avoid positive feedback loops that would result in catastrophic change to your planet, while keeping your planet balanced with negative feedback loops so it remains a habitable planet for future generations.
1c. Measuring the Size and Shape of Earth.
Introduction to Geodesy
Geodesy is the science of accurately measuring and understanding the Earth's size and shape, as well as Earth's orientation in space, rotation, and gravity. Geodesy is important in mapping the Earth's surface for transportation and navigation, establishing national and state borders, and in real estate, land ownership, and the management of resources on the Earth's surface. Each of us carries an extremely accurate geodetic tool in our pocket (a smartphone or tablet); only recently did the United States military allow civilian use of the Global Positioning System (GPS). GPS utilizes Earth-orbiting satellites to pinpoint your location on planet Earth with a high degree of accuracy. The recent advancement of GPS allows everything from tracking packages and mapping migrating animals to designing self-driving cars. It is astonishing to consider that before civilian use of GPS satellites began in the late 1990s, all mapping, tracking, and navigation was carried out with rudimentary tools. Yet these rudimentary tools had established a fairly accurate measurement of Earth's size and shape over two and a half millennia.
The sun rises in the east and sets in the west due to the rotation of the Earth around its polar axis, so the sun reaches its highest point in the sky at a different time at each longitude. Scholars knew that if one possessed an accurate clock set to the noon-time of a reference location, one could observe when the sun was highest in the sky at any location on Earth and compare that with the standard time on the clock. Using this difference in time, you could determine your distance in Longitude from the standard line, which was called a Meridian.
If you have ever traveled by airplane (or car) across time zones, and had to set your watch to the new local time on your arrival, you have experienced this effect. You could determine the distance in Longitude you traveled by how many hours you had to adjust your watch. While ancient scholars had no clocks accurate enough to determine Longitude precisely, they attempted to determine it as best they could, to generate maps along a grid system of Latitude and Longitude laid over a globe.
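The clock method reduces to a single conversion: Earth rotates 360 degrees in 24 hours, so each hour of difference between local solar noon and noon at the reference meridian corresponds to 15 degrees of longitude. A minimal sketch:

```python
# Longitude from a clock: Earth turns 360 degrees in 24 hours, so one
# hour of difference between local solar noon and noon at the reference
# meridian equals 15 degrees of longitude.

def longitude_offset(hours_difference):
    """Degrees of longitude east or west of the reference meridian."""
    return hours_difference * (360 / 24)   # 15 degrees per hour

print(longitude_offset(1.0))   # 15.0 degrees: one time zone away
print(longitude_offset(6.5))   # 97.5 degrees: six and a half hours away
```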
The History of Measuring Latitude
The earliest written texts that illustrate the Earth as a spherical body are the writings of Parmenides of Elea, who lived in Elea, a Phocian Greek colony in what is today Southern Italy. These writings, composed mostly as Greek poems around 535 BCE, described the cosmos as a spherical moon orbiting a spherical Earth. Sailors of the Mediterranean Sea had likely learned of the curvature of the Earth from observing ships on large bodies of water: as ships on the open ocean traverse farther and farther away from an observer, they appear to sink below the horizon. The Moon and its phases in the sky also alluded to the spherical nature of both the Moon and the Earth, as did the record of solar and lunar eclipses, when the spherical Moon or Earth blocks the sun’s light. There is no record that these and other early maritime navigators calculated the circumference or radius of the Earth, but they likely discovered the spherical nature of Earth while exploring the open waters of the ocean.
Eratosthenes of Cyrene was born on the northern coast of Africa around 276 BCE and, following an education in Athens, Greece, was appointed chief librarian in Alexandria, Egypt. The Library of Alexandria had been founded by Ptolemy I Soter, a companion of Alexander the Great who served as the ruler of Egypt after its conquest. The library was the center of learning and education, and housed the great works of Greek and Egyptian writing of the day. Eratosthenes had the full benefit of working at the heart of this educational center and wrote prolifically, although sadly few of his writings survive today. A textbook written a few centuries later by Cleomedes, a Greek scholar, describes a famous experiment conducted by Eratosthenes.
On a little island called Elephantine in the middle of the Nile River, near present-day Aswan, was a water well; on the longest day of the year the sun would shine directly down the dark well onto the surface of the water, and for a few moments the sun’s reflection was perfectly centered within the well. Eratosthenes was curious whether the same thing could happen in Alexandria, about 524 miles north of Elephantine Island. Rather than dig a well, Eratosthenes held up a rod (or more technically a gnomon, a rod that casts a shadow), perfectly perpendicular to the ground, and observed the rod’s shadow on the ground as the time approached noon on the longest day of the year in Alexandria, when the sun would be at its highest ascent in the sky. The sunlight hitting the vertical gnomon or rod in Alexandria produced a shadow even at noon. Eratosthenes measured the minimum length of the shadow, noting that the difference between the sun being directly overhead at Elephantine Island to the south and slightly off vertical at Alexandria to the north was likely due to the curvature of the Earth.
Eratosthenes also realized that if the sun was very far away, so that its light arrived at the Earth in parallel rays, he could use the length of the shadow to calculate the circumference of the Earth along the north-south axis. He knew the distance between Alexandria and Elephantine Island was 5,000 stades, a unit of measurement lost to time but roughly equivalent to 524 miles (843 kilometers). Eratosthenes calculated that the angle subtended at the center of the Earth was about 1/50th of a circle (7.2 degrees), suggesting a pole-to-pole or meridional circumference of 26,200 miles (42,165 kilometers), which is remarkably close to our modern calculated circumference of 24,860 miles (40,008 kilometers). Eratosthenes also realized that by measuring the lengths of shadows cast by sticks, one could deduce one's position north or south: the farther north one traveled, the longer the shadows would be. Shadow length also depends on the time of year, which could be corrected for using solar calendars. For example, Eratosthenes measured the minimum mid-day shadow in Alexandria from a standard-length gnomon or rod for each day of the year. A traveler could carry a similar standard-length gnomon, measure the length of its shadow, and compare this with the measured shadow for that day in Alexandria. This would tell the traveler how far north or south of Alexandria the traveler was.
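Eratosthenes' reasoning can be reproduced with a few lines of arithmetic (a sketch only; the function name and the shadow measurement below are illustrative, not his recorded values):

```python
import math

def eratosthenes_circumference(shadow_length, gnomon_length, distance):
    """Estimate Earth's meridional circumference from a noon shadow.

    shadow_length, gnomon_length: measured in the same units at noon
    on the solstice in Alexandria.
    distance: north-south distance to the place where the sun casts
    no shadow (Elephantine Island); the result is in that same unit.
    """
    # The sun's angle from vertical equals the angular separation of
    # the two sites along the Earth's curve.
    angle_deg = math.degrees(math.atan2(shadow_length, gnomon_length))
    # The two sites span angle_deg / 360 of a full circle.
    return distance * 360.0 / angle_deg

# A shadow about 0.12633 times the rod's length gives a 7.2 degree
# angle; over 524 miles this yields roughly 26,200 miles.
print(round(eratosthenes_circumference(0.12633, 1.0, 524)))
```

Since 7.2 degrees is 1/50th of a circle, the result is simply fifty times the Alexandria-to-Elephantine distance.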
Eratosthenes discovered not only the size and shape of the Earth, but also this amazing method to determine Latitude. Like the rungs of a ladder, latitude is measured in degrees along the north-south direction between the poles, with the Equator, the middle belt of the Earth equidistant from the poles, at 0 degrees, and the poles at 90 degrees north and south respectively. Eratosthenes is often credited as the originator of Geography, the study of the arrangement of places and physical features on the Earth.
Note that Elephantine Island in Egypt is very close to the Tropic of Cancer, the most northerly circle of latitude on Earth at which the sun can be directly overhead, which occurs at noon on the June (Summer) solstice, the longest day of the year for the Northern Hemisphere. There is also a circle of latitude called the Tropic of Capricorn, the most southerly circle of latitude on Earth at which the sun can be directly overhead, at noon on the December (Winter) solstice. This is because the Earth is tilted at 23.5 degrees relative to its orbital plane.
The technique of cast shadows did not work well on ships and boats because of the rocking motion while on the water. To determine latitude at sea, sailors would use the night sky, measuring the angle above the horizon to the North Star (Polaris) and comparing this with star charts for the time of year.
The innovations of the Indian mathematician Aryabhata, working in India around 500 CE, who calculated the irrational nature of pi (π), unlocked the use of trigonometry in calculating the circumference of the Earth. Aryabhata's mathematics was translated into Arabic and put into use by early Muslim scholars, particularly Muhammad ibn Musa al-Khwarizmi (referred to as Algorithmi by Latin speakers), head librarian of the House of Wisdom in Baghdad. He published a number of ingenious calculations of the positions of various cities and places. To determine latitude, he used a simpler method than casting shadows: he would take measurements using a plumb line (a weight dangled from a string) and measure the angle from the top of a peak or mountain down to the observed horizon in the distance. This angle gives the degrees between the top of the peak and the horizon point; knowing this, you could more accurately calculate the Earth’s circumference. While this allowed a more precise measure of the meridional circumference of Earth, it still did not provide a way to measure the equatorial circumference. Scholars assumed that the Earth was a perfect sphere, so that the equatorial and meridional circumferences would be equal, but the equatorial measurement had not been determined; it was particularly difficult to determine one's location along the east-west axis. Muhammad ibn Musa al-Khwarizmi invented algebra, and a way to position sets of numbers along an x-y grid system. While determining the latitude of any city was a fairly straightforward affair by this time, determining the Longitude, or position along the east-west direction, remained problematic. Ptolemy, a Greek scholar eight centuries before, had attempted to map the Mediterranean Sea, but failed to determine distances along the east-west axis and had over-estimated the length of the sea.
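The geometry behind this plumb-line measurement can be sketched in a few lines. From a peak of height h, the line of sight to the horizon is tangent to the sphere, so cos(dip) = R / (R + h). This is a minimal model assuming a perfectly spherical Earth and no atmospheric refraction; the function name and the numbers are illustrative:

```python
import math

def earth_radius_from_dip(peak_height_m, dip_degrees):
    """Estimate Earth's radius from the dip of the horizon seen from a peak.

    The sight line to the horizon is tangent to the sphere, giving
    cos(dip) = R / (R + h), hence R = h * cos(dip) / (1 - cos(dip)).
    """
    c = math.cos(math.radians(dip_degrees))
    return peak_height_m * c / (1.0 - c)

# A 500 m peak whose horizon dips about 0.718 degrees below level
# yields a radius near the modern value of ~6,371 km.
radius_m = earth_radius_from_dip(500.0, 0.718)
print(radius_m / 1000.0)
```

Multiplying the radius by 2π then gives the circumference, without needing to pace out a long north-south baseline.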
Muhammad ibn Musa al-Khwarizmi set about an attempt to determine both Latitude and Longitude of all the major cities, in his Book of the Description of the Earth published in Arabic in 833 CE.
The History of Measuring Longitude
The inaccuracy of determining Longitude resulted in one of the worst misunderstandings of geography in 1492 CE. Christopher Columbus’s expedition from Spain across the Atlantic Ocean was a leap of faith that he would reach India or Asia on the other side of the ocean. When his expedition found land (the island of Hispaniola), he was convinced that they had arrived in India, as he was unable to determine his position in longitude with any accuracy. In 1499, Alonso de Ojeda, a companion on one of Columbus’s expeditions, led his own voyage back across the Atlantic with Amerigo Vespucci, an Italian scholar who was on board to attempt to map these new lands. The expedition followed the coastline southward along present-day Venezuela and Brazil, to the mouth of the Amazon River. Along the way Vespucci took readings of the Latitude, and was amazed to observe southern constellations in the night sky that he had only read about. His measurements of Latitude took him within 6 degrees of the equator, far more south than expected if the land was India. In desperation he attempted to measure his position in Longitude using the Moon and Mars. Vespucci had with him charts of Mars’s position in the night sky relative to the Moon as seen from Europe, noting the times of the year when Mars would be obscured by the Moon. On evenings when Mars would be obscured by the Moon in Europe but was still visible in the night sky on board the ship, he measured the angular distance between the Moon and Mars. By comparing the angle between the Moon and Mars with the dates listed in his charts, he could estimate the Longitude of their position, and came to the realization that they were not close to India, but had discovered a large continent that extended far to the south.
In 1507, the German cartographer Martin Waldseemüller named this new continent America, in honor of Amerigo Vespucci’s discovery on the first accurate map of the world, Universalis Cosmographia.
A better estimate of Longitude was needed, especially as sailors traversed the world more frequently in the centuries between 1500 and 1700, and during the early colonization of America by Europeans. Monarchies offered huge sums of money to any scientist who could accurately determine Longitude, with Robert Hooke, a founding member of the Royal Society, attempting to devise a spring-loaded clock, or to use a pendulum, to measure time and hence Longitude. John Harrison, an expert clockmaker, devised the first truly accurate clocks, or marine chronometers, which by 1761 could be used to determine Longitude with a great deal of accuracy.
The marine chronometer, or clock, would be set to Greenwich Mean Time (GMT), with noon (12:00 pm) set at the moment that the Royal Observatory in Greenwich, England observed the sun at its highest point in the sky. Greenwich, England was set as 0 degrees longitude, and hence the Prime Meridian. Sailors could easily determine their longitude by reading the marine chronometer set to GMT at the moment the sun was highest in the sky; the time depicted indicates how far east or west of the Prime Meridian you are.
Latitude and Longitude are measured in degrees, with each degree divided into 60 minutes and each minute into 60 seconds. For example, a Latitude of 40°27′19″ North and Longitude of 109°31′43″ West indicates a place 40 degrees, 27 minutes, 19 seconds north of the equator and 109 degrees, 31 minutes, 43 seconds west of the Prime Meridian in Greenwich, England.
In modern usage, Latitude and Longitude are often given in decimal format, for example 40.45552° and -109.52875°, with positive Latitude indicating the Northern Hemisphere and negative Latitude the Southern Hemisphere, while negative Longitude indicates west of the Prime Meridian and positive Longitude east of the Prime Meridian. Any place on the surface of the Earth can be described with these two simple numbers. In fact, you can copy and paste any decimal Latitude and Longitude into a Google search box and find its location on a map.
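Converting between the two notations is simple arithmetic; a quick sketch (the function name is mine):

```python
def dms_to_decimal(degrees, minutes, seconds, direction):
    """Convert degrees/minutes/seconds plus a compass direction
    ('N', 'S', 'E', 'W') into signed decimal degrees."""
    value = degrees + minutes / 60.0 + seconds / 3600.0
    # South latitudes and west longitudes are negative by convention.
    return -value if direction in ("S", "W") else value

# The degrees-minutes-seconds example above, 40°27′19″ N, 109°31′43″ W:
print(round(dms_to_decimal(40, 27, 19, "N"), 5))   # 40.45528
print(round(dms_to_decimal(109, 31, 43, "W"), 5))  # -109.52861
```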
Using refined measurements of Latitude and Longitude, the meridional circumference of the Earth is 24,860 miles (40,007.86 kilometers), while the equatorial circumference of the Earth is 24,901 miles (40,075.017 kilometers), indicating a slight bulge around the equator of 67.157 kilometers; the Earth is thus not a perfect sphere, but a slightly oblate spheroid.
While knowing latitude and longitude is significant, determining distances between points on the Earth is a more important concept for everyday travelers. Many techniques were developed by early navigators through the principle of triangulation. Triangulation is the process of determining a location by forming triangles from known points. In ancient Utah, and throughout the American Southwest, the Ancient Pueblo peoples built towers in the desert, which were lit by fires. A traveler could navigate distances by taking the angle between two points, such as lit fires observed during the night, and know with certainty the direction and distance to travel to reach a destination. Given the inaccuracy of latitude and longitude, early maritime navigators used triangulation of lighthouses along a coast to help navigate dangerous coastlines into the safety of bays and safe harbors when their estimates of navigation were off. In China, triangulation was used to determine distances between cities, as well as the heights of mountains.
Triangulation works by taking the angles between two points a known distance apart and an unknown point in the distance, or by measuring, from each end of a baseline, the angle between the baseline and the line of sight to a third point. These relationships are expressed using trigonometric equations that require you to know two angles and a single distance in order to calculate the remaining distances. Triangulation requires lines of sight, and works best in desert environments with few obstructions of the view; it is difficult in dense forests with abundant trees, or on the open ocean with few observable objects on the horizon. Triangulation was used to map much of the interior of the continents, through a network of measurements often starting along coastline cities at sea level, or at important city centers for which accurate latitude and longitude had been determined.
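The two-angles-and-a-baseline calculation can be sketched with the law of sines (names and numbers here are illustrative):

```python
import math

def distance_by_triangulation(baseline, angle_a_deg, angle_b_deg):
    """Distance from point A to an unknown point C.

    A and B are a known baseline apart; angle_a_deg is the angle at A
    between the baseline and C, angle_b_deg the angle at B.
    Uses the law of sines: AC / sin(B) = AB / sin(C).
    """
    # The three interior angles of a triangle sum to 180 degrees.
    angle_c_deg = 180.0 - angle_a_deg - angle_b_deg
    return (baseline * math.sin(math.radians(angle_b_deg))
            / math.sin(math.radians(angle_c_deg)))

# Two observers 1,000 m apart each sight a distant tower at 60 degrees
# from their shared baseline; the triangle is equilateral, so the
# tower is 1,000 m from each observer.
print(round(distance_by_triangulation(1000.0, 60.0, 60.0)))  # 1000
```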
The concept of triangulation became very important for determining the size and shape of Earth during the space age, when Earth-orbiting satellites could be used to measure the latitude and longitude of any point on Earth with great accuracy, and to measure distances and elevations with a great deal of certainty. Rather than measuring angles, multilateration uses distances in three dimensions to find a point that lies at the intersection of three spheres, where the three radii of the spheres are known.
Measuring Earth from Space
On October 4, 1957 the Soviet Union successfully launched Sputnik I, the first human-made satellite in Earth’s orbit, nearly 2 feet (58.5 cm) in diameter. Sputnik I was of a simple spherical design, but emitted two radio frequencies that could be received on Earth. Based on the emitted radio frequencies, the position of the satellite could be determined by the Doppler effect: the frequency of a wave changes depending on the traveling direction of its source. When Sputnik was traveling toward a location, radio receivers on Earth would detect higher frequencies; when Sputnik was traveling away from a location, radio receivers would detect lower frequencies. Anyone with a radio receiver could determine when Sputnik was directly overhead, because the radio frequency would change pitch due to the Doppler effect.
Detecting the radio frequencies emitted by Sputnik allowed any radio receiving station on Earth to know its location relative to an orbiting satellite emitting radio waves. This allowed for the positioning of points on the Earth with a greater degree of certainty. Over the next several decades numerous satellites were launched into space, and set into orbit around the Earth. Most of these early satellites emitted radio signals which could be received on Earth’s surface. Much like triangulation, if you have a minimum of three satellites in orbit above a location, a receiver could triangulate its location from the distance of the radio signals emitted from three or more satellites in space.
One of the amazing breakthroughs of these early satellites was that they allowed for the detailed measurement of any location on Earth relative to the center of the Earth, and hence altimetry: measuring the height of a location above the center of the Earth, rather than above sea level. This innovation allowed a more precise measurement of the topography of the Earth’s surface. Sea level varies up and down with daily and monthly tides, making it a poor baseline, and various mathematical models of Earth’s dimensions had been used instead.
As Sputnik first circled Earth, Gladys West, a young African-American mathematician, was working at a navy base in Dahlgren, Virginia, programming early main-frame computers to calculate rocket trajectories. With the advent of Sputnik, the United States Military quickly realized the importance of satellite data in determining missile trajectories and the use of long-distance rockets. Gladys West was a proficient mathematician, and in the 1980s the Navy gave her the seemingly impossible task of determining the topography of the ocean surface using satellite data from the newly launched GEOSAT satellite. This meant a refinement of triangulations to such a precision that the altimetry of swells and tides of the ocean could be measured from any ship as it navigated the oceans. West devised a system of mathematical corrections so that the surface topography of the ocean and land could be compared to a reference ellipsoid, called a geoid. A geoid is a pure mathematical model of the Earth, without its irregular topography. The most commonly used model is the World Geodetic System (WGS84); however, two older geoid models are frequently used on maps of the Continental United States, the North American Datum of 1927 (NAD27) and the North American Datum of 1983 (NAD83), which can differ from each other by as much as 47 to 95 meters across North America, and were based on models first developed in 1866 for use in mapping. They differ slightly because they model the equatorial bulge of the Earth differently.
The World Geodetic System (WGS84) was a much better geoid for global applications, and is widely used as an international standard. Gladys West developed a mathematical model to eliminate error, allowing precise dynamic sea surface topography, as well as latitude and longitude, to be calculated with onboard ship computers in the 1980s. This innovation led to the GPS navigation found in most cell phones, ships and vehicles today.
Today there are a number of Earth-orbiting satellites that work not by emitting radio waves whose Doppler shift is used to calculate distance, but by transmitting time-stamped radio waves from onboard high-precision atomic clocks. Each satellite emits a radio wave carrying its current time of broadcast; when the radio wave is received, that time is compared to the receiver's clock, and the difference between the two times is the length of time it took the radio waves, traveling at the speed of light, to reach the receiver. With at least three satellites emitting signals a location can be determined, although to be more precise, four or more emitting satellites are used, which also allows the error in the receiver's own clock to be corrected. Using a GPS receiver, the spherical wavefronts of the emitted radio transmissions from four or more satellites can be used to find a precise location anywhere on Earth. The more satellites a receiver can use, the more precise the location; the number available changes because the Earth’s rotation moves your position relative to the satellites orbiting above.
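The per-satellite distance calculation reduces to multiplying the signal's travel time by the speed of light. A simplified sketch (this assumes the receiver's clock is already synchronized, which a real GPS receiver achieves by solving for its clock error with a fourth satellite; the function name is mine):

```python
C = 299_792_458.0  # speed of light, meters per second

def distance_from_travel_time(t_sent_s, t_received_s):
    """Distance to a satellite from a time-stamped radio signal.

    t_sent_s: broadcast time stamped by the satellite's atomic clock.
    t_received_s: arrival time on the receiver's (synchronized) clock.
    """
    return C * (t_received_s - t_sent_s)

# A signal taking 0.07 seconds to arrive traveled about 20,985 km,
# roughly the scale of a GPS satellite orbit.
print(distance_from_travel_time(0.0, 0.07) / 1000.0)  # ≈ 20,985.5
```

Each such distance defines a sphere around one satellite; the receiver's position is where the spheres intersect, which is the multilateration described above.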
The United States GPS (Global Positioning System) navigation satellites are a network or constellation of around 33 satellites in orbit above Earth, each providing real-time signals to Earth used in high precision calculation of any location on Earth. There are five other networks of satellites developed by other countries, including the GLONASS network maintained by Russia, the Galileo network maintained by the European Union, the BeiDou network maintained by China, and the planned IRNSS and QZSS maintained by India and Japan respectively.
The precision of locating any point on Earth is now at the sub-centimeter level (less than an inch) for fixed ground GPS receivers. This technological breakthrough allows for the measurement of the movement of Earth’s surface and crust on a millimeter scale. One of the more ambitious projects using this technology was EarthScope, which operated from 2012 to 2019 and deployed thousands of GPS receiving stations across the continental United States to measure the movement of the ground at each location. These GPS receivers observed the movement of continental plates, showing the relatively quick movement (up to around 40 mm a year) of the ground below Southern California with respect to the interior of the rest of the continental United States, such as Utah. Such GPS receivers also demonstrate a twice-daily vertical shift in the ground, up and down, of 55 centimeters due to the gravitational pull of the moon, and 15 centimeters due to the sun’s gravitational pull. So, while you seem to be living on a solid unmoving Earth, it is in fact dynamically moving each day, up and down as the solid interior is stretched by the passage of the moon and sun, and horizontally as tectonic continental plates shift underfoot.
In an age where you can quickly determine your position on Earth in Latitude and Longitude with high precision, it may seem strange to learn about navigation methods using a compass, sextant, and timepiece, since most explorers of the Earth carry a GPS unit with them on their trips, or at least a smart phone or other electronic device with GPS capabilities. However, you may find yourself easily lost if that unit fails. You should prepare yourself by learning to navigate using older methods, including a compass, sextant, and timepiece.
More than a thousand years ago, in China, it was discovered that rubbing an iron needle on a rock containing magnetite caused the needle to become magnetized. If the needle was pushed through a cork and floated in a bowl of water, the needle would orient itself along a specific direction on the surface of the water. Because all magnetized needles exhibit the same orientation, this became a useful tool for navigation. During the Song dynasty in China, about a thousand years ago, the compass was perfected as an orienting tool that travelers could carry with them, often with the magnetized needle balanced on a sharp point, under glass, to make it more practical than a cork in an open bowl of water. Widely regarded as one of the most important discoveries in China, a compass shows the cardinal directions (North, East, South, West) as they relate to the magnetized needle, which is oriented along the Earth’s magnetic field.
The compass needle will orient itself along the north-south magnetic axis of the Earth, which differs depending on where you are, and changes with time. Typically, there is an N on the compass marking the direction of magnetic North. The needle will often have a red tip to indicate the northern direction, which you can line up with the N. When you hold the compass in your hand you can move it around until the needle lines up with the N and S, with the red tip (if there is one) pointing toward the N. The direction is now lined up with the Magnetic Poles of the Earth.
The North Magnetic Pole is a wandering point, recently located at latitude 86.54°N and longitude 170.88°E, with an expected location in 2020 of 86.391°N 169.818°E. The oldest record of the magnetic pole’s location dates to 1590, when it was found at 73.923°N 248.169°E. If the magnetic pole wanders so much, it may seem that a compass is an impractical tool for navigation. However, it is useful for getting a quick bearing, as the magnetic needle points roughly north unless you are high in the Arctic Circle, and it can be adjusted if you know the declination of your location. Declination, sometimes called magnetic variation, is the angle between magnetic north and true north. Declination is written as degrees east or west from true north, often shortened to positive degrees when east and negative when west. Magnetic declination changes over time and with location, so if using a compass for precise navigation you will need to keep it updated for your location.
The compass points along the magnetic axis, so a declination value is needed to obtain true north from a compass. Most topographic maps published by the United States Geological Survey list the declination in the corner of the map, followed by a year. However, if you need the most current declination for your location, you can look it up with the National Centers for Environmental Information, part of the National Oceanic and Atmospheric Administration of the Federal Government. They maintain an online Magnetic Field Calculator at (https://www.ngdc.noaa.gov/geomag/calculators/magcalc.shtml?#declination).
Once you find the current declination for your location, you will need to adjust your compass. For example, with a declination of 10° 21' E, magnetic north lies 10° 21' east of true north, so a magnetic bearing must be offset by that amount to obtain a true bearing. Some compasses allow you to adjust the outer ring that marks the degrees encircling the magnetic needle, so that you can set the declination for your location before you set out on a trip; with the ring rotated by 10° 21', the resting needle will indicate true north at the N. Note that as a general rule in the United States, locations west of the Mississippi River have an easterly magnetic declination, while locations east of the Mississippi River have a westerly magnetic declination. A map of magnetic declinations is called an Isogonic Chart.
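The correction itself is a single addition, using the sign convention given above (east declination positive, west negative); the function name is mine:

```python
def true_bearing(magnetic_bearing_deg, declination_deg):
    """Convert a magnetic compass bearing to a true bearing.

    declination_deg: positive for east declination, negative for west.
    True bearing = magnetic bearing + declination, wrapped to 0-360.
    """
    return (magnetic_bearing_deg + declination_deg) % 360.0

# With a declination of 10°21' E (about +10.35 degrees), a compass
# reading of 25 degrees corresponds to a true bearing of 35.35 degrees.
print(round(true_bearing(25.0, 10.35), 2))  # 35.35
```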
Compasses work because the Earth has a liquid outer and solid inner core made mostly of iron, which generates a magnetic field through its motion (called the Earth’s Dynamo). The solid inner core of the Earth rotates slowly relative to the rest of the planet, resulting in a dynamic magnetic field that changes. Every few thousand or sometimes several million years, the magnetic pole will reverse direction. These reversals take several decades, and occur when the magnetic field of the Earth weakens. Several reversals have occurred in the last 1 million years, including an event 41,000 years ago that lasted only 250 years, and another 780,000 years ago, which fixed the north and south magnetic poles at their current orientation. Magnetic reversals are random events, likely due to the orientation of the solid and liquid iron core within the Earth. However, there have been some extremely long episodes, as during the age of dinosaurs in the Cretaceous Period, when the magnetic field remained stable for millions of years. If the magnetic field undergoes a reversal during your lifetime, you will need to keep up to date with your local magnetic declination, which may make using a magnetic compass problematic until the poles are fully reoriented.
Compass needles orient themselves relative to any magnetic field, such that if you hold a magnet close to the needle, the needle will orient to that field, turning into alignment with the magnet. Some regions of the Earth have large deposits of magnetite or iron ore that can cause compasses not to line up with the Earth’s magnetic field, such as the iron-rich region north of Duluth, Minnesota, or the mysterious Bangui magnetic anomaly in the Central African Republic. Also be aware that high-voltage electricity produces a magnetic field, which can cause compass needles to orient incorrectly if you are under an electrical pole. Magnets are also found in mobile phones, which can cause the needle to orient incorrectly when the compass is carried in a pocket with a mobile phone.
The Earth’s magnetic field is three-dimensional, such that the magnetic needle will not only point horizontally along the magnetic field but also tilt slightly vertically, in orientation with the three-dimensional field. If you could measure this tilt on a compass needle, you could use it to determine latitude, as the needle will tilt more as you approach the magnetic poles. This technique has been used to determine the ancient latitudes of Earth’s continents over time, by measuring the vertical tilt and horizontal orientation of the magnetic fields of magnetite grains buried in the rock record.
Compasses are used in navigation with a map, allowing a traveler to take a bearing. A compass bearing is the direction which you are headed, as shown by a compass and determined from a map.
Imagine that you are trying to find a cabin, and have come to a creek. You have a map showing the cabin on one side of the creek, but you don’t know if you are on the same side of the creek as the cabin.
If the map has a north arrow, you can orient your compass with the arrow. Lay down the compass, and turn both the compass and map until the map’s north arrow lines up with north on the compass. Now you will be able to determine if you are on the right side of the creek, since you will be facing the same direction as depicted on the map. You can also determine the direction of the cabin from your location and take a heading (measured on the compass either in 360 degrees, or in quadrants NE, SE, SW, NW, each representing 90 degrees). For example, you might find that the line from your location by the river to the cabin is 25° to the northeast. As you walk, you try to keep the compass bearing of your travel at 25° northeast by watching the compass, until you reach the cabin. Taking a compass bearing helps travelers to orient their direction of travel, so they don’t walk in circles. This is especially useful in dense forests or jungles where you are likely to get lost or turned around.
During the Song dynasty in China, compasses were paired with an odometer, which measured distance not with steps but with a wheel that would tick off rotations, providing very accurate distances between towns and villages. However, map making was oriented with respect to what a traveler might see as they journeyed through a land. With the rise of Kublai Khan during the Yuan dynasty and visits from European traders on the Silk Road, such as Marco Polo in the late 1200s, China turned toward explorations westward, along the coasts of India, Arabia and Africa. During the Ming dynasty the explorer Zheng He produced the Mao Kun map, a map using only compass bearings and distances on an annotated scroll. The map is unique since it reads like a series of instructions for navigating an ocean voyage, much like one would play through a video game.
For navigation it is often useful to calculate your position in latitude and longitude, and this requires two other tools, a sextant and a chronometer.
A sextant is an instrument that measures the angle between an astronomical object, such as a star, and the Earth’s horizon. It does this by using a mirror that reflects the sky against an image of the horizon as you move the angle of the mirror. All you have to do is line up the astronomical object with the horizon line by changing the mirror's angle, and read the angle measured. Sextants often have shaded glass to use when measuring the angle of the sun in the sky, so that you don’t damage your eyes. One of the most important astronomical objects to measure with a sextant is the star Polaris (the North Star) and its angle above the horizon. If you were to take a line through the axis of the North and South Poles, it would project northward toward a position near Polaris. Polaris can be found in the night sky by following the outer lip of the Big Dipper, in Ursa Major, the Great Bear constellation. If you have ever seen a time-lapse photo of the night sky, all the stars appear to rotate around this point, because the Earth rotates along this axis. This point is called the Celestial North Pole. Once you have found Polaris in the night sky, you just measure the angle between the star and Earth’s horizon; that angle is equal to your latitude.
In the southern hemisphere there is no bright star near the Celestial South Pole; to find this point in the night sky, you have to draw lines from two constellations. The point where a line through the two bright stars of the Southern Cross crosses a line from the two bright stars of Centaurus marks the Celestial South Pole. Because it is so important for navigation in the southern hemisphere, the Southern Cross is depicted on the flags of Australia, New Zealand, Papua New Guinea and Samoa, as well as Brazil, whose flag shows both the Southern Cross and Centaurus.
Using a sextant to find your latitude is fairly straightforward at sea, where the horizon line is easy to find, but when you are traveling in canyons, mountain valleys, or dense jungles it can be difficult to determine the horizon. In these situations, a leveling bubble can be used in place of the horizon line.
A sextant can also be used to determine latitude during the day by a Noon Sighting: measuring the angle of the sun at its highest ascent, at local noon. Subtracting this angle from 90° gives what is recorded as the zenith distance. To convert zenith distance into latitude, you look up the declination of the sun for the day in a Nautical Almanac, which lists the sun’s declination at the noon hour for each day of the year. Depending on the position of the sun in the sky relative to your hemisphere, you add or subtract the declination and the zenith distance to find your latitude.
Latitude = (90º – Noon Sighting angle) + declination
Latitude = Declination – (90º – Noon Sighting angle)
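The two noon-sighting formulas above can be sketched in a few lines of Python. The function name, the example angles, and the flag for choosing between the two formulas are illustrative, not from a real almanac:

```python
def latitude_from_noon_sighting(noon_angle_deg, declination_deg, sun_south_of_zenith=True):
    """Latitude from a noon sun sighting (a sketch of the two formulas above).

    noon_angle_deg: sextant angle of the sun above the horizon at local noon.
    declination_deg: sun's declination for the date, from a Nautical Almanac.
    sun_south_of_zenith: True selects the first formula, False the second;
    which applies depends on the sun's position relative to your hemisphere.
    """
    zenith_distance = 90.0 - noon_angle_deg  # angle between the sun and straight up
    if sun_south_of_zenith:
        return zenith_distance + declination_deg
    return declination_deg - zenith_distance

# Example: noon angle of 50 deg with a declination of +10 deg gives latitude 50 deg.
print(latitude_from_noon_sighting(50.0, 10.0))  # 50.0
```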
If you measure the exact time of the sun’s highest ascent at local noon, you can compare this time with Greenwich Mean Time (GMT) to determine longitude. If you maintain an accurate clock set to GMT, you can calculate your approximate longitude by counting 15° for each 1-hour difference from noon GMT: a local noon later than noon GMT means you are west of the Greenwich Meridian. For example, if local noon occurs at 5:00 pm GMT, you are 75° West of the Greenwich Meridian. Note that you also need to adjust for a slight difference between solar time and clock time, which varies by a few minutes depending on the time of year.
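The hour-to-degrees conversion is simple arithmetic; this hypothetical helper ignores the solar-versus-clock-time correction mentioned above:

```python
def longitude_from_noon_gmt(local_noon_gmt_hours):
    """Approximate longitude from the GMT time of local solar noon.

    Each hour of difference from 12:00 GMT corresponds to 15 degrees of
    longitude; a local noon later than 12:00 GMT means you are west of
    Greenwich. (The equation-of-time correction is ignored here.)
    """
    hours_from_noon = local_noon_gmt_hours - 12.0
    return hours_from_noon * 15.0  # positive = degrees West, negative = degrees East

print(longitude_from_noon_gmt(17.0))  # 75.0 (75 degrees West, as in the example above)
```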
Rather than using a sextant, if you are on land you can also use a gnomon, a rod that casts a shadow, and record the time (set to GMT) as the shadow shortens during the approach to local noon, when the sun is highest in the sky. The GMT time at which the shadow is shortest can be used to calculate your longitude; just remember to adjust for the slight difference between solar time and clock time.
Using these primitive methods, you can determine latitude and longitude to within about a kilometer (roughly 1,000 yards), and this can be double-checked with a modern-day GPS unit.
Public Land System
Knowing latitude and longitude was of vital importance during times of war, and Thomas Hutchins fulfilled those duties as an engineer mapping for the British military in the mid-1700s, when hostilities broke out between the French and British forts on the western frontier of the American Colonies. Thomas Hutchins was called upon to map out the agreed peace treaty of 1763, which transferred the land between the Appalachian Mountains and the Mississippi River to British colonial control; this region included the future states of western Pennsylvania, Ohio, Indiana, and Illinois. Hutchins also mapped parts of the Mississippi River, documented many of the native populations, and mapped the various native towns along the rivers. He laid out plans for the city of Pittsburgh in western Pennsylvania, at the forks where the Allegheny and Monongahela Rivers join to form the Ohio River, and mapped parts of Florida and Louisiana as well.
When the American Revolution broke out in 1776, he served with British forces, but was arrested for treason for sending coded messages to revolutionary forces. Taken to England, he escaped from prison and made his way to France, where he met up with Benjamin Franklin and returned to America as a patriot. After the war, Thomas Hutchins began mapping parts of Ohio using a new grid system called the Public Land System (PLS). The Public Land System mapped a region by overlaying a grid of 36 numbered one-square-mile sections, with each block of 36 sections assigned a numbered township (north or south) and range (west or east) in reference to a local meridian line. For example, a location could be given as Section 10, Township 10 South, Range 3 West, indicating the square mile located in the 10th section of the 36-square-mile grid, 10 townships to the south and 3 ranges to the west of the local meridian. Hutchins became America’s first national geographer, and the success of his system inspired its adoption by Thomas Jefferson; all land west of the original American colonies, excluding Texas, would be mapped using this system over the next century.
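As a rough sketch of how the grid arithmetic works: since a township is a 6-by-6 block of one-square-mile sections, the near corner of a numbered township and range lies a simple multiple of 6 miles from the reference lines. This idealized helper ignores the survey corrections that real PLS maps required:

```python
def township_near_corner_miles(township, range_):
    """Idealized miles from the initial point to a township's near corner.

    A township is 6 miles on a side (36 one-square-mile sections), so the
    Nth township or range begins (N - 1) * 6 miles from the reference lines.
    Returns (miles along the township direction, miles along the range direction).
    """
    return ((township - 1) * 6, (range_ - 1) * 6)

# 'Township 10 South Range 3 West': its near corner sits about 54 miles
# south and 12 miles west of where the base line meets the local meridian.
print(township_near_corner_miles(10, 3))  # (54, 12)
```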
The PLS system became the legal method for determining property lines in the United States as lands were purchased, as in the Louisiana Purchase of 1803, or taken by force, as in the Mexican-American War of 1846, and it was used to push European settlement westward with the acquisition of lands from native tribes and peoples. Having an accurate grid map of a large land mass is a major advantage during conflict, and surveyors were often employed by the military during this time. Captain John Fremont, for example, was able to acquire most of California without a shot fired during the Mexican-American War, thanks to the knowledge he had gained surveying the west. Accurate maps give small numbers of soldiers a huge advantage in mobilizing and coordinating during times of war. These surveys also allowed the United States government to allocate land ownership in ways that stripped much of the indigenous lands away from native peoples, and they were used to establish reservation boundaries for native people in the Americas. The Public Land System is still used as a legal grid system to designate land ownership for much of the western United States of America.
Universal Transverse Mercator
A grid system does not fit perfectly on a round globe: each section cannot be a perfect square mile, since lines projecting north-south converge as they approach the poles. The grid system worked for the latitudes of the continental United States, but it was filled with many sections that were not completely square, and corrections were often added to the grid in a haphazard fashion based on each local meridian. In other words, the PLS system was localized to the western parts of the United States and could not be adapted to the rest of the world.
During World War II, there was a need for a better grid system of the Earth, projected onto an accurate model of the Earth’s shape. The United States military adopted a method called the Universal Transverse Mercator (UTM) coordinate system. The benefit of a global grid system is that it takes a three-dimensional ellipsoid model of the Earth, such as the World Geodetic System (WGS84), and projects a grid onto a two-dimensional Cartesian coordinate system, which allows you to calculate the distance between two points on a map very quickly.
The UTM system divides the Earth into 60 north-south zones, each representing 6° of longitude. The zones are numbered, with zones 10 to 19 covering the continental United States, and zone 1 in the middle of the Pacific Ocean, near the International Date Line. Each zone has a central meridian, a north-south line that serves as the east-west reference point, while the equator serves as the north-south reference point.
A location is defined by its distance in meters from these reference lines. The distance from the central meridian of each zone is called the easting (in practice a false easting of 500,000 meters is added at the central meridian so that values stay positive), and the distance from the equator is called the northing. For example, a location could be described as UTM Zone 18, 585,000 m easting, 4,515,500 m northing. You can convert UTM coordinates into latitude and longitude using an online converter (such as http://www.rcn.montana.edu/resources/converter.aspx), or convert latitude and longitude into UTM coordinates. On modern topographic maps, both UTM and latitude and longitude are indicated on the edges of the map, so that points can be easily located with either system. Using this knowledge, you can locate your position anywhere on Earth.
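The zone number itself is easy to compute, since each of the 60 zones spans 6° of longitude starting from 180° W. A minimal sketch, ignoring the special exception zones near the poles and around Norway:

```python
def utm_zone(longitude_deg):
    """UTM longitudinal zone (1-60) for a longitude given in degrees.

    Zone 1 covers 180W to 174W, near the International Date Line, and each
    zone spans 6 degrees of longitude eastward from there.
    """
    return int((longitude_deg + 180.0) // 6.0) + 1

print(utm_zone(-74.0))  # 18 (the New York area falls in zone 18)
```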
1e. Earth’s Motion and Spin.
Earth’s Rotation Each Day
Right now, as you are reading this, your body is traveling at an incredibly fast speed through outer space. We can calculate one component of this speed from Earth’s circumference, based on the ellipsoid model of Earth’s dimensions, which gives an equatorial circumference of 24,901.46 miles (40,075.02 km). The Earth completes a rotation around its axis every day, or more precisely every 23 hours, 56 minutes, and 4 seconds. If you are located at the equator, your velocity (speed) can be calculated by dividing 24,901.46 miles by 23 hours, 56 minutes, and 4 seconds, which equals about 1,040 miles per hour. This speed depends on your latitude, and decreases as you approach the poles.
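The same arithmetic, extended with the cosine of latitude, gives the rotational speed anywhere on Earth. A small sketch:

```python
import math

def rotation_speed_mph(latitude_deg):
    """Eastward speed of Earth's surface due to rotation, in miles per hour.

    Uses the equatorial circumference (24,901.46 mi) and the rotation period
    (23 h 56 m 4 s); the speed falls off with the cosine of latitude.
    """
    circumference_mi = 24901.46
    rotation_hr = 23 + 56 / 60 + 4 / 3600
    return circumference_mi / rotation_hr * math.cos(math.radians(latitude_deg))

print(round(rotation_speed_mph(0), 1))   # ~1040.4 mph at the equator
print(round(rotation_speed_mph(90), 1))  # 0.0 at the pole
```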
One way to imagine this rotation is to picture an old record album spinning, or a free-spinning bike wheel. The central axis of the spinning album or wheel is stationary, while the outer edge travels the full circumference of the circle with each rotation; the farther you are from the center of rotation, the greater your speed. In other words, the larger the wheel, the faster its edge moves, and the more distance is covered per unit time.
Early scientists such as Galileo were aware of this motion and were curious why we don’t feel it on the surface of the Earth. Imagine an ant crossing a spinning record album: at the edge, the ant would feel the fast motion as air zoomed by and the pull of a centrifugal force working to fling it off the record, but as the ant crawled toward the center, its sense of motion would decrease.
The same thing can be felt on a merry-go-round: the closer you are to the center, the less you feel the motion of your spin. Yet on Earth we don’t feel as though we are traveling at over 1,000 miles per hour at the equator, or standing still near the north or south pole.
This bizarre paradox inspired Isaac Newton to study motion, and in the process he discovered gravity and the three laws of motion that govern how all objects move in the universe. His discoveries were published in 1687 in his book Philosophiæ Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy).
Before we can discuss why we don’t feel the rotational force of the Earth, we need to define some terms.
Velocity (or speed) is a measure of an object’s distance traveled divided by the length of time taken to travel that distance. For example, a car might have a velocity of 50 miles per hour (80.5 kilometers per hour).
Acceleration is the rate of change of velocity per unit of time. For example, if a car is traveling at 50 Miles per Hour for 50 Miles and does not change speed, then it has 0 acceleration. A car that is stationary and not moving, also has 0 acceleration. This is because in both examples the velocity does not change.
Mathematically, acceleration is more difficult to calculate; one way is to find the change in velocity over each unit of time. For example, for a car going from 0 to 50 miles per hour over a 5-hour-long race course, we can find the speed at 1-hour intervals and average the changes.
At the starting line the car is traveling at 0 miles per hour. At 1 hour the car is traveling at 10 miles per hour. At 2 hours the car is traveling at 20 miles per hour. At 3 hours the car is traveling at 30 miles per hour. At 4 hours the car is traveling at 40 miles per hour. At 5 hours the car is traveling at 50 miles per hour. Each hour the car increases its velocity by 10 miles per hour.
So the average acceleration is equal to the average change in velocity divided by the change in time: the average of 10, 10, 10, 10, and 10 in this example. The average acceleration is 10 miles per hour, per hour (that is, miles per hour squared).
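The averaging described above can be sketched directly; the speeds list simply restates the car example:

```python
def average_acceleration(velocities, dt=1.0):
    """Average acceleration from velocity samples taken every dt time units."""
    changes = [(v2 - v1) / dt for v1, v2 in zip(velocities, velocities[1:])]
    return sum(changes) / len(changes)

# The car example: speeds sampled each hour over the 5-hour course.
speeds_mph = [0, 10, 20, 30, 40, 50]
print(average_acceleration(speeds_mph))  # 10.0 miles per hour, per hour
```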
If you know a little calculus, we can find what is called instantaneous acceleration, using the formula:
a = dv/dt
Basically, what this equation states is that acceleration (a) is the derivative of velocity (v) with respect to time (t).
What Isaac Newton suggested as the reason we don’t feel this spinning motion on Earth is that the velocity of the Earth’s rotation is constant. Objects that are set into motion and keep a constant velocity are said to exhibit inertia; these objects have zero acceleration.
Acceleration is when velocity changes over time. Isaac Newton realized that objects in motion stay in motion unless acted upon by another force. This is referred to as the law of inertia. In the weightless environment of outer space, an astronaut can spin a basketball and it will continue to spin at that velocity unless it hits another object, or another object acts against that motion. The reason we don’t feel the spin of the Earth is that everything around us is moving with the same constant velocity, sharing the same inertia.
Newton asked a simple question, why do objects, such as apples, fall to the Earth rather than get flung into outer space due to the rotation of the Earth?
He set about measuring the acceleration of falling objects, such as an apple dropped from a tower. Just before the apple is dropped its velocity is 0 meters per second, but after 1 second it is traveling at about 10 meters per second. At 2 seconds it is traveling at 20 meters per second. At 3 seconds, 30 meters per second; at 4 seconds, 40 meters per second; and at 5 seconds, 50 meters per second. This sounds familiar: each second the apple increases its velocity by about 10 meters per second, so the acceleration of the falling object is 10 meters per second per second (or second squared), written 10 m/sec2.
A century of experiments would show that falling objects on Earth’s surface have an acceleration of 9.8 m/sec2. All objects, no matter their mass, will fall at this rate.
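The falling-object arithmetic follows from two standard kinematic formulas, v = g·t and d = ½·g·t², sketched here while ignoring air resistance:

```python
def free_fall(t_seconds, g=9.8):
    """Velocity (m/s) and distance fallen (m) after t seconds, with no air resistance."""
    velocity = g * t_seconds           # v = g * t
    distance = 0.5 * g * t_seconds**2  # d = (1/2) * g * t^2
    return velocity, distance

v, d = free_fall(3.0)
print(round(v, 1), round(d, 1))  # 29.4 44.1 (m/s and meters after 3 seconds)
```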
(In reality, falling objects collide with air (gas) as they fall; an object in motion stays in motion until it is hit by another object, in this case air particles. This air adds resistance during free fall, so objects like parachutes, which are broad and wide and capture lots of air as they fall, or feathers, will fall more slowly than the standard acceleration of 9.8 m/sec2 predicts.)
The force of a falling object is related to both mass and acceleration.
Force is measured by the mass (measured in kilograms) multiplied by the acceleration (measured in meters per second squared). Isaac Newton was rewarded by having a unit of measurement named after him!
One Newton unit of force is equal to 1-kilogram x 1 m/sec2.
Hence a bowling ball with a mass of 5 kilograms will exert a force of 5 kilograms x 9.8 m/s2 or 49 Newtons. A beach ball with a mass of 2 kilograms will exert a force of 2 kilograms x 9.8 m/s2 or 19.6 Newtons. An object measured in Newtons is a weight, since weights incorporate both mass and the acceleration. The unit of Pounds (lbs.) is also a unit of weight.
This common acceleration on the surface of the Earth is the acceleration due to gravity, 9.8 m/sec2. Isaac Newton realized that there was a force acting to hold objects against the surface of the Earth, and that it was directly related to the mass of the Earth: the larger an object’s mass, the greater its gravitational force. It is also related to proximity: the closer an object is, the greater the acceleration due to gravity it produces. Using this mathematical relationship, Newton proposed that the 9.8 m/sec2 acceleration of gravity could be used to find out how much mass the Earth has, using this formula:
g = (G × Me) / re²
g = 9.8 m/sec2 and is the acceleration of gravity on the surface of the Earth.
re = the radius of the Earth, or distance from the center of the Earth to the surface which can be found if we know the circumference of the Earth.
Me = the mass of the Earth, measured in kilograms.
G = the gravitational constant “sometimes called Big G”, a constant number, with the units of m3/kg ⋅s2.
Mass = Density x Volume. Density is how compact a substance is, and is often measured relative to another substance, such as water; density determines how well a substance or object floats or sinks. Volume is the cubic dimensions, or space, that an object takes up.
Isaac Newton did not know the value of Big G (the gravitational constant), but knew that it was a tiny number, since the mass and radius of the Earth were very large numbers, and the result of the equation had to equal 9.8 m/sec2.
The Quest to Find Big G
Newton’s work spurred a new generation of scientists to try to determine Big G, the gravitational constant. One way to determine Big G was to determine the density, volume, and radius of the Earth. We can solve for Big G using this formula:
G = (g × r²) / (D × V)
where g is the acceleration of gravity on Earth, r is the radius of the Earth from its center to the surface, D is density of Earth, and V is Earth’s volume.
One of Isaac Newton’s colleagues was Edmond Halley. Halley was one of the most brilliant scientists of the day, famous for his calculations of the periodicity of comets; Halley’s Comet is named after him. However, he is less well known for his hypothesis that the Earth was hollow on the inside. He proposed that Earth’s density, and hence mass, was much smaller than it would be if Earth had a very dense solid inner core. During the late 1600s and early 1700s, scientists debated the density of the Earth: Newton suggested an average density about 5 times that of water, while Halley suggested an average density less than that of water for the interior of the Earth. The problem was that no one knew the value of Big G.
During the next century there was much discussion of the density of the Earth (the value of D), and expeditions into caverns and dark caves around the world searched for an entrance to the supposed hollow center of the Earth. This debate captured the interest of John Michell, the head of a church in Yorkshire, England, who dabbled in science in his spare time and often wrote to fellow scientists of the day, including Benjamin Franklin. In his spare time, Michell devised an experiment to measure Big G, using a pair of large, very dense lead balls placed in close proximity to a pair of smaller, also very dense, lead balls suspended from a balancing rod hung on a string. When the large lead balls are placed next to the smaller ones, the force of gravity attracts each pair of balls to each other, causing the balancing rod to twist slightly. To measure this tiny change in the rod’s angle, a light was reflected off a mirror set on top of the balancing rod. Knowing the masses of the lead balls and the distances between them allowed one to solve for the gravitational constant, Big G, which could then be used to determine Earth’s density.
One of John Michell’s close friends was Henry Cavendish, the well-born son of a wealthy scientist. Henry suffered from what might be called autism today: he was incredibly shy and struggled to carry on conversations with anyone who was not a close friend. When John Michell died at the age of 68, he left his experiment to Henry Cavendish to complete. In a large building near his home, Henry reconstructed the experiment with the lead balls and calculated an accurate measure of Big G, the gravitational constant: 6.674×10−11 m3/kg⋅s2.
Using this number for Big G, it was demonstrated that the Earth is not hollow: with an average density of 5.51 g/cm3, or 5.5 times that of water, the Earth is in fact denser than the rocks found near its surface, which are about 3 g/cm3.
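With Cavendish’s value of Big G in hand, the gravity formula can be turned around to “weigh” the Earth and check this density figure. A sketch, assuming a mean radius of 6,371 km:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 / (kg * s^2)
g = 9.8            # acceleration of gravity at Earth's surface, m/s^2
r_earth = 6.371e6  # Earth's mean radius in meters (an assumed round value)

# Rearranging g = G * Me / r^2 gives the mass of the Earth:
mass_earth = g * r_earth**2 / G

# Dividing mass by the volume of a sphere gives the average density:
volume = (4 / 3) * math.pi * r_earth**3
density = mass_earth / volume  # kg/m^3; divide by 1000 for g/cm^3

print(f"{mass_earth:.2e} kg")         # ~5.96e+24 kg
print(f"{density / 1000:.2f} g/cm3")  # ~5.50 g/cm3, about 5.5 times water
```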
Henry Cavendish’s accurate determination of the gravitational constant allows you to calculate any object’s acceleration of gravity, given its mass and the distance from its center of mass. The relationship between an object’s mass, radius, and acceleration of gravity is a fundamental concept in understanding the motion not only of Earth, but of other planets, moons, and stars, as well as the gravitational forces acting to hold astronomical objects in orbit around each other. Furthermore, it explains why large objects in the universe take on spherical shapes around their center of mass. The acceleration of gravity also explains why we don’t feel the Earth’s spin, and why objects and substances on Earth don’t get flung into outer space: they are held against the Earth by its gravitational force.
Will the Earth ever stop spinning?
Should you worry about the Earth’s rotation or spin slowing down? Could there ever be a day in the future when the Earth would stop rotating?
The length of the day is the time the Earth rotates once, with each longitude facing the sun once and only once during this daily rotation. If the Earth’s spin is slowing down over time, the length of the day will increase, resulting in longer days and longer nights. Today the Earth takes 23 hours, 56 minutes and 4.1 seconds to complete a rotation. (Note that it takes precisely 24 hours for the sun to reach its highest point in the sky each day, which is slightly longer than Earth’s spin, since the Earth moves a little relative to the sun each day).
Of course, the amount of daylight and night varies depending on your location and time of year, because the Earth rotates around a polar axis that is tilted at 23.5° in relationship to the sun. This is why people in Alaska (at a higher latitude) experience longer daylight during the month of July, and longer darkness during the month of December, than someone living near the Equator. The question to ask is, has the length of the Earth’s spin remained constant at 23 hours, 56 minutes and 4.1 seconds?
Like a spinning top, the Earth’s spin could be slowing down. Measuring the length of each rotation of the Earth is like clicking a very accurate stopwatch each day and recording the time it takes for Earth to make one rotation. For the most part it stays very close to 23 hours, 56 minutes and 4.1 seconds. However, the length does fluctuate by about 4 to 5 milliseconds; in other words, 0.004 to 0.005 seconds are added to or subtracted from each day. These fluctuations appear to follow a decadal cycle, so that days in the 1860s were shorter by 0.006 seconds compared to days in the 1920s. These decadal fluctuations are believed to result from the transfer of angular momentum between the Earth’s fluid outer core and the surrounding solid mantle, as well as from tidal friction of the ocean as it sloshes back and forth over the surface of the spinning Earth. Weaker fluctuations occur over a yearly cycle, with days in June, July and August shorter by 0.001 seconds compared to days in December, January and February. These weaker fluctuations are caused by atmospheric and oceanic friction as the Earth spins, contributing to an oscillation called the “Chandler Wobble,” named after the American scientist S. C. Chandler. The Earth is not just a solid mass of rock: it has a liquid ocean and a gaseous atmosphere that affect the length of each day. It is like a washing machine spinning with wet clothes inside; depending on where those clothes sit in each spin cycle, there will be some variation in the speed of the spin itself.
Climate change can also have a rather important impact on the length of the day. If we were to compare the average day length during the last glacial period (25,000 years ago) to today, the day would have been shorter, because the Earth’s polar moment of inertia was smaller then. As the great ice sheets that covered much of the polar regions melted, the distribution of the Earth’s mass shifted from near the spin axis at the polar regions (as ice sheets) toward the equator (as melted ocean water). This change in inertia is the same phenomenon you observe when an ice skater extends his or her arms during a spin: the spin slows down. So as the Earth’s great ice sheets melted over the last 25,000 years, the Earth, like the spinning ice skater, redistributed more of its mass outward toward the equator, slowing its spin.
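The skater analogy is conservation of angular momentum, L = I × ω: if the moment of inertia I grows, the spin rate ω must shrink in proportion, lengthening the day. A toy calculation, where the 0.1% change in I is purely illustrative and not a measured value:

```python
def day_length_after_inertia_change(day_hours, inertia_ratio):
    # Angular momentum L = I * omega is conserved, so omega scales as 1/I
    # and the day length (proportional to 1/omega) scales directly with I.
    return day_hours * inertia_ratio

# A hypothetical 0.1% increase in the moment of inertia (mass moving from
# the poles toward the equator) would lengthen a 24-hour day by ~86 seconds.
longer_day = day_length_after_inertia_change(24.0, 1.001)
print(round((longer_day - 24.0) * 3600))  # 86 (seconds added to the day)
```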
While these fluctuations are interesting, they are small (several milliseconds). To find out whether the Earth will ever stop spinning, we need a much longer record of day lengths, going back millions of years.
Fossil organisms keep records of the length of each year, month and day millions of years into Earth’s past. Fossil corals that lived in the inter-tidal zone of the ocean were subjected to twice-daily tides caused by the rotation of the Earth and the gravitational pull of the moon, and amplified by the relative location of the sun. These changes in water depth left a record in the corals’ growth rings, as well as in cyclic sediments such as tidal rhythmites and banded iron formations. Using this information, we know that the length of Earth’s day has increased by about 15.84 seconds every million years. What is applying this gradual brake to Earth’s spin?
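Running that rate backward gives a rough day length for any point in the geological past. A sketch using the 15.84-seconds-per-million-years figure:

```python
def day_length_hours(millions_of_years_ago, rate_s_per_myr=15.84):
    """Approximate day length in the past, given the fossil-derived slowing rate.

    The day lengthens by ~15.84 seconds every million years, so going back in
    time we subtract that much from today's 24-hour (86,400-second) day.
    """
    seconds = 86400.0 - rate_s_per_myr * millions_of_years_ago
    return seconds / 3600.0

print(round(day_length_hours(400), 1))  # ~22.2 hours, roughly 400 million years ago
```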
The answer is our nearest neighbor— the moon!
The moon is Earth’s only natural satellite, with an equatorial circumference of 10,921 km (6,786 miles), about 27% that of Earth. It orbits Earth each lunar month of 27.32 days in an unusual arrangement called a synchronous rotation, which results in the strange fact that the Moon always keeps nearly the same face or surface pointed toward Earth. The opposite side of the moon, which you never see from Earth in the night sky, is erroneously called the “dark side” of the moon.
Both sides are in fact illuminated once every 29.5 Earth days as the moon orbits Earth, resulting in the different phases of the moon’s illumination by the sun. The moon’s orbit is tilted only slightly, 5.14°, with respect to Earth’s path around the sun, and the moon’s spin was slowed by Earth’s gravity until it became “tidally” locked with the Earth.
With its slower, month-long orbit around Earth, the moon acts like a slow brake applied to Earth’s spin. Eventually the Earth will slow down until its rotation matches the moon’s orbit of 27.32 days, or 655.68 hours. At that point the Earth’s spin will be locked to the orbit of the moon around the Earth.
An Earth whose rotation equaled the current lunar month would have days lasting 27.32 of our current days, resulting in extreme daytime and nighttime temperatures like those experienced on the lifeless surface of the moon today! Is this something for you to worry about?
Not anytime soon. Earth’s spin is slowed by the braking of the Moon by just a few seconds every million years, so it will not be until about 121 billion years in the future that Earth becomes locked in this death orbit with the Moon, and by then the Earth and Moon will likely have been engulfed by an expanding Sun!
The effect of the Moon’s orbit around the Earth can be observed in the shifts of the ocean tide. When the moon is positioned directly above a point on the Earth (the sublunar point), the ocean at that position is pulled closer to the moon by the moon’s gravitational force, producing a high tide along the coastline. An equal high tide occurs on the opposite, or antipodal, side of the Earth as well. A low tide is observed where the moon is above neither the sublunar nor the antipodal side of the Earth. Liquid water responds more readily to the attraction of the Moon’s gravity than the rocks that compose the solid Earth, so you are likely more familiar with ocean tides, but there are also Earth tides, which cause the solid Earth to bulge with the motion of the Moon. The sun also exerts some gravitational pull on Earth and can change the magnitude of the tides depending on the season. You can now explain the length of a day, the length of a lunar month, and the tides; but what causes the length of a year?
Earth’s Orbit around the Sun: The Year
The Earth as a whole is not only spinning, but also traveling through space on an orbital path around the sun. Unlike the moon, Earth has a very dramatic tilt of its polar axis, 23.5° relative to the plane of its orbit, such that during half of this voyage around the sun the northern hemisphere faces the sun, and during the other half the southern hemisphere does. This tilt results in longer days for the northern hemisphere when it faces the sun (June, July, August) and shorter days for the southern hemisphere, while the shorter days of the northern hemisphere (November, December, January) correspond to longer days in the southern hemisphere. Because of the tilt of Earth’s axis, we have the four seasons of summer, fall, winter, and spring, which differ depending on which hemisphere you are in.
You might be surprised to learn that Earth’s orbit around the sun is not a perfect circle, as often depicted in illustrations of the solar system, but an elliptical orbit. This can be demonstrated on Earth by documenting the sun’s position at noon every day of the year, which traces out a figure-8 in the sky called an analemma. The sun’s noon position at the top of the figure-8 occurs on the day of the summer solstice, while its noon position at the bottom occurs on the day of the winter solstice, with the distance between the two points in the sky measuring Earth’s tilt of 23.5°. The width of the figure-8, however, is due to the elliptical path of the Earth around the sun, and the figure-8 is not perfect: one loop is larger than the other.
This is because the Sun is not positioned at the center of Earth’s elliptical orbit. During December and January the Earth is closer to the sun, while in June and July it is farther away. The time of year when the Earth is closest to the sun is called the perihelion, and the time when it is farthest from the sun is called the aphelion.
This is the opposite of what you might expect: in the Northern Hemisphere, you are closer to the sun during the cold winter months and farther from the sun during the hot summer months.
The distance from the sun varies from 0.9833 AU to 1.0167 AU, where an AU is the Astronomical Unit, the average distance between the Sun and the Earth, defined as about 150 million kilometers (93 million miles). Hence over a year the distance from the Earth to the Sun varies by about 5 million kilometers (3.1 million miles).
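The 5-million-kilometer figure is just the difference between the two extremes:

```python
AU_KM = 150e6  # average Earth-sun distance, ~150 million km

perihelion_km = 0.9833 * AU_KM  # closest approach to the sun
aphelion_km = 1.0167 * AU_KM    # farthest distance from the sun
difference_km = aphelion_km - perihelion_km

print(round(difference_km / 1e6, 2))  # ~5.01 million km between the two extremes
```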
While Earth’s orbit around the sun may seem like a bunch of numbers and facts to memorize, the discovery that the Sun, and not the Earth, is the center of the solar system was a major scientific breakthrough. The reason this revolution in thought took so long is that, for centuries, a seemingly equally valid explanation of the yearly cycle was accepted.
Ptolemy’s Incorrect Geocentric Model of the Solar System
In the years after the death of the Pharaoh Cleopatra and the fall of the city of Alexandria, Egypt, to Roman annexation, an astronomer living in the city by the name of Claudius Ptolemy devised a model of the solar system. Ptolemy’s passion was mapping the stars, and he noticed that each night the path of Mars moved differently in reference to the other stars in the night sky. Over the course of several years in the 2nd century CE, he documented the path of Mars, demonstrating that Mars looped in the night sky over the course of several months: Mars would move with the stars each night for several weeks, but then circle back for several weeks, before looping around again and heading off in its original direction.
Because the path of Mars looped back, Ptolemy called this looping movement retrograde motion, and the normal movement with the stars prograde motion. Ptolemy followed the Greek tradition of Aristotle, which held that the Earth was the center of the universe. But if Mars and Venus orbited the Earth rather than the Sun, why did they loop in the night sky instead of traveling in steady paths across it? He devised a complex geocentric model of the solar system, suggesting that the orbit of Mars, as well as of other known planets like Venus, followed an epicycle: an additional circular orbital path superimposed on its orbit around Earth. It would be nearly fourteen centuries before Ptolemy’s model of the solar system was disproven.
Copernicus’s Correct Heliocentric Model of the Solar System
Nicolaus Copernicus published his alternative idea in his book De revolutionibus orbium coelestium (On the Revolutions of the Heavenly Spheres) in 1543. The Heliocentric view of the Solar System placed the Sun at the center of the Solar System rather than the Earth. In doing so, Copernicus demonstrated that the epicycle orbits were actually due to the observation from Earth of the passing of Mars in its own orbit around the Sun.
Copernicus viewed the Solar System as if the planets were racing around a circular track. Earth was on the inside track, while Mars was on an outside track. As Earth moved along its inside track, the view of Mars on the outside track would change. The retrograde motion comes naturally as a consequence of viewing a moving Mars from the perspective of a moving Earth. Copernicus rejected the epicycles needed to produce retrograde motion; instead, the planets simply moved in circular orbits around the Sun. Copernicus’s book is one of the most important works ever published in science, but it still needed some modification, such as the fact that the Earth, like the other planets, actually orbits the Sun in an elliptical path rather than a circular one.
How Fast Are You Traveling Through Space?
At the beginning of this module we calculated how fast you are traveling through space using the rotation, or spin, of the Earth; we can now add the component of Earth’s orbit around the Sun. The distance Earth travels around the Sun is 940 million km (584 million mi), which it covers every 365.256 days. The year is not evenly divided into days, so calendars have to add an extra day every 4 years, the “leap years.” From this distance and time we can determine that the Earth, and everything on its surface, is traveling around the Sun at the remarkably fast speed of 66,619.94 miles per hour, or 107,230.73 kilometers per hour. Imagine, if you will, that as you sit there reading this you are traveling at this incredibly fast speed on a planet sling-shooting around a star, at 30 times the speed of the fastest airplane. Because you are!
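The speed figures above follow directly from the orbital distance and the length of the year; here is a minimal Python sketch using only the numbers quoted in the text:

```python
# Sketch of the orbital-speed arithmetic above, using the text's figures.
orbit_km = 940_000_000          # distance Earth travels around the Sun each year
orbit_mi = 584_000_000          # the same distance in miles
hours_per_year = 365.256 * 24   # hours in one year

speed_kmh = orbit_km / hours_per_year
speed_mph = orbit_mi / hours_per_year
print(round(speed_kmh, 2))      # ≈ 107230.73 km/h
print(round(speed_mph, 2))      # ≈ 66619.94 mph
```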
This astonishing fact, that you are aboard a fast-moving object speeding through outer space, inspired Richard Buckminster Fuller in 1969 to coin the concept that Earth is simply a spaceship traveling through the vastness of the universe. Spaceship Earth, as he called our planet, is just a giant vessel, like a battleship sailing across an empty ocean of space. He warned that you and all life on the planet should be prepared for a long voyage.
Earth’s Galactic Voyage
John Michell, the short, fat clergyman from Yorkshire, England, who devised the experiment that proved the Earth was not hollow, proposed in a letter written in 1784 that there might be objects in the universe with so much mass that their gravitational acceleration would suck in even light rays; he called these mysterious supermassive objects dark stars. Today, we call them black holes. The quest to find these mysterious supermassive objects in the universe was enhanced by his suggestion that their gravitational effects might be seen in nearby visible bodies. However, they remained a mathematical curiosity, obtained simply by taking Newton’s equations and extrapolating them to objects of enormous mass, millions of times that of the Sun.
In the 1950s, no one had yet observed one of these supermassive objects in the universe, and Jocelyn Bell, a young girl at a boarding school in England, was struggling with a female-only curriculum centered on the domestic subjects of cooking and sewing. When a science class was offered, only the boys were allowed to attend. Furious, she and her parents protested, and she was allowed to attend the science class with two other female students. Jocelyn Bell loved physics most of all, and in 1965 went on to study physics at the University of Cambridge. There she joined a team of researchers listening for radio waves from outer space, who had been picking up blips and squawks of radio waves from faint stars. Scientists called these signals quasi-stellar radio sources, which the American astronomer Hong-Yee Chiu shortened to quasars. In the summer of 1967, Jocelyn Bell and her professor Antony Hewish were looking over the printouts of a newly constructed array of radio telescopes built to detect these quasar signals from space. She noticed a regular pattern of blips every 1.3373 seconds. Tempted to attribute the pattern to aliens, they jokingly called the regular pulse of signals “little green men,” but realized, as others soon did, that the signal was produced by a massive object with enormous gravitational forces. When viewed through a telescope, the signal was coming from a faint star, recognized as a neutron star: an extremely dense star spinning at an incredibly fast rate, with a pulse of electromagnetic radiation emitted every 1.3373 seconds. Radio signals like these are thought to be produced as nebulous clouds of gas are pulled into these supermassive stars, forming an accretion disk that emits powerful magnetic fields and radio waves as the gases fall through the disk into the neutron star, like colossal bolts of lightning.
Scientists realized that these super massive objects could be detected by using large arrays of radio telescopes to map these signals coming from space onto the sky.
Researchers focused their attention on the center of one of the brightest objects in the night sky, which is actually a cluster of stars called Messier 87, also known as Virgo A, the brightest point in the Virgo constellation. It had been recognized as a cluster of stars by Charles Messier in 1781, and was classified by Edwin Hubble in 1931 as an elliptical nebula of stars. Today it is known to be a galaxy consisting of billions of stars.
Radio waves from Messier 87 indicated that near its center is a supermassive object, representing a black hole. In 2019, the Event Horizon Telescope, a network of radio telescopes, focused on this point and imaged the signals coming from its center, producing the first image of a black hole: a ghostly dark spot surrounded by light. At the center of the dark spot is an object that is 6.5 billion times the mass of the Sun, located 55 million light years away.
The Event Horizon Telescope is also focused on a point in the night sky first detected by radio waves in 1974, which is thought to be the center of our own galaxy of stars, the Milky Way. On an especially dark night, a streak of stars appears to sweep across the sky. These stars are your closest stellar neighbors, existing within your own galaxy. The Milky Way is a collection of billions of stars, including the Sun, that swirl around a central point. The center of the Milky Way is located at the radio source Sagittarius A* (pronounced Sagittarius A-star), in the Sagittarius constellation. Here nearby stars have been observed to swirl around a single point, the location of another black hole, one that is 4 million times more massive than the Sun and only about 25,000 light years away. It is the nearest known supermassive black hole to you.
Astronomers have measured the rate of the Sun’s rotation around this point at the center of the Milky Way Galaxy, and determined that the entire Solar System takes about 230 to 240 million years to complete this galactic orbit around the black hole. The last time our Solar System occupied this position relative to Sagittarius A* was before dinosaurs had evolved on Earth!
However, don’t assume this passage of the Solar System around this point is slow. Earth, and the entire Solar System is zipping along this path at an incredibly fast rate of travel.
Given that 1 light year is equal to 5.879 × 10¹² miles, and that it takes 240 million years to travel a circumference of 157,080 light years, a path of 9.23471 × 10¹⁷ miles covered in 2.1024 × 10¹² hours, our Solar System is zipping around this black hole at a velocity of 439,246 miles per hour!
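The galactic arithmetic above can be checked the same way; here is a short Python sketch using the text’s rounded figures (note that the 2.1024 × 10¹² hours in the text assumes 8,760 hours per year):

```python
import math

# Sketch of the galactic-orbit arithmetic above.
LY_IN_MILES = 5.879e12               # miles in one light year
radius_ly = 25_000                   # distance from Sagittarius A*, in light years
period_hours = 240e6 * 8760          # 240 million years in hours (2.1024 x 10^12)

circumference_ly = 2 * math.pi * radius_ly    # ≈ 157,080 light years
path_miles = circumference_ly * LY_IN_MILES   # ≈ 9.23 x 10^17 miles
speed_mph = path_miles / period_hours
print(round(speed_mph))                       # ≈ 439,246 mph
```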
You are truly on a very fast spaceship: Earth’s motion about its polar axis is between 0 and 1,040.45 miles per hour (1,674.44 km/hr), depending on your latitude; Earth’s motion in relation to the Sun is 66,620 miles per hour (107,214.5 km/hr); and Earth’s galactic motion in relation to Sagittarius A* is 439,246 miles per hour (706,898 km/hr).
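Because the spin speed depends on latitude, it can be estimated with a simple cosine rule; here is a Python sketch assuming the equatorial value quoted above (the `spin_speed_mph` helper is illustrative, not from the text):

```python
import math

# Assumed equatorial surface speed from the text, in miles per hour.
EQUATOR_MPH = 1040.45

def spin_speed_mph(latitude_deg):
    """Eastward speed of the ground at a given latitude due to Earth's spin."""
    return EQUATOR_MPH * math.cos(math.radians(latitude_deg))

print(round(spin_speed_mph(0)))    # 1040 at the equator
print(round(spin_speed_mph(40)))   # ~797 at 40 degrees latitude
print(round(spin_speed_mph(90)))   # 0 at the poles
```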
1f. The Nature of Time: Solar, Lunar and Stellar Calendars.
What is Time?
Time is measured by the motion of the Earth with respect to other astronomical objects. A day is measured as the time it takes Earth to fully rotate around its polar axis. A lunar month is measured as the time the Moon takes to complete an orbit around the Earth, while a year is measured as the time the Earth takes to complete an orbit around the Sun. The measurement of time is of vital importance: time determines when to plant and harvest crops, signals migrations for hunting, and lets us forecast the weather and seasons, catch an airplane at the airport, attend class, and take the final exam. Time has become ever more finely divided into hours, minutes, and seconds, and hence every moment of your life can be accounted for in this celestial motion of our planet.
Length of the Earth’s Year
For centuries, time has been a direct measure of Earth’s velocity, or speed, and its motion relative to other astronomical objects. The oldest solar and lunar calendars designed to track the passage of the Sun and Moon in the sky date to 7,000 years ago, well before the Bronze Age, when humans were still using stone tools but had adopted agriculture and domesticated animals. It became important to keep track of the passage of time to signal when to start planting seeds and harvesting crops. For nomadic groups it also signaled when to move camp, since foul weather could entrap them in an unexpected winter storm, and marked the times of year to meet in large gatherings with other tribes.
One of the key problems for people to solve was how many days were in a year. Remarkably, cultures and civilizations around the world came to close agreement that there are 365 days in a year, with some recognizing an extra quarter day. To make this calculation, each group of people had to track the motion of the sun, either along the horizon or at the highest point the sun reached at noon in the sky.
Ancient stone monuments and solar calendars that measure the daily motion of the sun are found around the world. The Summer Solstice, when the noon Sun is highest in the sky, marked one turning point of the year, as did the Winter Solstice, when the noon Sun is lowest. People living in equatorial regions, where the seasons are less pronounced, were less concerned with the passage of the Sun and the measurement of the solar year; instead they often kept track of the passage of the Moon and its phases in the night sky. Solar timekeeping is found across the cultures of Europe, Asia, and Pre-Columbian America, while lunar timekeeping is more standard in the Middle East and Africa, in regions closer to Earth’s equator.
Ancient monuments erected to measure the passage of time include Stonehenge in England, which appears to align with the Summer Solstice, and the ancient tombs or stone dwellings of Maeshowe in Scotland and Newgrange in Ireland, built nearly 5,000 years ago, in which sunlight projects into the darkened stone shelter on key days of the year. Some of the most sophisticated early solar calendars are found in the Americas, including El Castillo, or Kukulcán's Pyramid, in Yucatan, Mexico. Constructed about 1,200 years ago, the pyramid contains 365 steps in total, with stairways of 91 steps facing each of the four cardinal directions, plus the platform at the top. The four-sided pyramid is oriented so that during the spring and autumn equinoxes the sun casts a shadow resembling a feathered serpent, which is also depicted in sculptures found around the pyramid. Near the ancient Pueblo city of Chaco Canyon in New Mexico, there is an isolated butte with a 1,000-year-old solar calendar in the form of a swirling petroglyph. A dagger of sunlight falls at its center on the Summer Solstice and moves to the edge of the swirling petroglyph on the Winter Solstice. In fact, the entire city of Chaco Canyon and its surrounding structures appear to be aligned to solar directions, as suggested by Anna Sofaer, and as followed by Pueblo peoples today.
Combining Solar and Lunar Calendars
Solar and lunar calendars became key to synchronizing the times when people would gather for religious events, festivals, or sporting competitions, such as the 4-year cycle of the Olympic games in Greece or annual Powwow ceremonies in North America. However, each calendar differed, mixing days divided into lunar months, days laid over a solar year, or some combination of the two. The oldest known calendar to synchronize the solar and lunar days is a cycle of 19 solar years divided into 235 lunar months, developed by Meton of Athens, an early Greek astronomer. After 235 lunar months, the moon and sun restart their cycle in the sky and the cycle repeats, starting on the Summer Solstice. This early calendar was modified 200 years later by another Greek astronomer, named Callippus, who recognized an additional 76-year-long cycle, determined by variations in the length of the four seasons, which ranged from 89 to 94 days. This new calendar was set at the summer solstice in the year 330 BCE, and to keep track of it, geared mechanisms to count the days and the positions of the moon, sun, and stars were invented. An example is found in the Antikythera mechanism, a maritime calendar dating to about 100 BCE.
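The reason 19 solar years line up so well with 235 lunar months can be checked with modern mean values for the year and the month; here is a Python sketch (the day lengths are modern figures, not from the text):

```python
# Sketch: why 19 solar years ≈ 235 lunar months (the Metonic cycle).
TROPICAL_YEAR_DAYS = 365.2422    # mean solar (tropical) year, modern value
SYNODIC_MONTH_DAYS = 29.5306     # mean lunar (synodic) month, modern value

solar_days = 19 * TROPICAL_YEAR_DAYS      # ≈ 6939.60 days
lunar_days = 235 * SYNODIC_MONTH_DAYS     # ≈ 6939.69 days
print(round(lunar_days - solar_days, 2))  # the two cycles differ by only ~0.09 days
```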
Months of the Year
Our modern calendar, consisting of a solar year divided into 12 pseudo-lunar months, was first implemented in Rome around 700 BCE, which added the months of Ianuarius and Februarius to the ten months already in use, which were numbered: (1) Martius, (2) Aprilis, (3) Maius, (4) Iunius, (5) Quintilis, (6) Sextilis, (7) September, (8) October, (9) November, (10) December. Unlike the Greek calendar, which combined solar and lunar cycles, the Roman calendar used only the solar cycle, which meant that the months had a set length of days. However, this presented a problem, since the solar year cannot be divided into an even 365 days; every four years an extra day needs to be added to the calendar. Early Roman calendars would add an extra month (called Mercedonius), alternating the length of the year between 355 days (a 12-month lunar year) and 377 days, with an average of 366.25 days. This meant that the summer solstice would vary by just a day on this calendar. However, this calendar was complicated because some years had an extra month, and if a city or region forgot or disregarded this extra month, its calendar would drift. After over 700 years of use, Julius Caesar reformed the Roman calendar, eliminating the extra month and instead adding an extra day every four years: leap years. This calendar was called the Julian Calendar after Julius Caesar, who was killed on the Ides of March (March 15th) in 44 BCE. His adopted son Octavian rose to power in Rome, and in 27 BCE took the name Augustus, which became the new name of the month of Sextilis; because his rule followed Julius Caesar’s, the preceding month, Quintilis, was renamed July in honor of Julius Caesar. The months were now named January, February, March, April, May, June, July, August, September, October, November, and December, with a leap day added to the month of February.
This calendar was adopted across the entire Roman Empire, used for over fifteen hundred years, and is still used in some regions of the world today.
However, by 1582 CE the date of the spring equinox was off by 10 days, because each Julian year was 0.0075 days longer than the Earth’s orbit around the sun, which is not much, but adds up to 11.865 days after using the system for 1,582 years. To correct this, the astronomer Aloysius Lilius advocated a jump forward of 10 days, which Christopher Clavius proposed to Pope Gregory XIII, who decreed that October 4th, 1582 would advance to October 15th, 1582. To help prevent future wandering, leap years were modified: they occur every four years, except in years evenly divisible by 100, unless those years are also evenly divisible by 400. The idea was unpopular and not fully adopted everywhere, particularly by Protestant and Eastern Orthodox countries, but over the centuries the Gregorian Calendar, which is the best fit to the Earth’s orbit around the Sun, has become the standard solar calendar.
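The modified leap-year rule described above is easy to state in code; here is a minimal Python sketch:

```python
# Sketch of the Gregorian leap-year rule: every 4 years, except centuries,
# unless the century is divisible by 400.
def is_leap_year(year):
    if year % 400 == 0:
        return True
    if year % 100 == 0:
        return False
    return year % 4 == 0

print(is_leap_year(2000))  # True  (divisible by 400)
print(is_leap_year(1900))  # False (century not divisible by 400)
print(is_leap_year(2024))  # True  (ordinary leap year)
print(is_leap_year(2023))  # False
```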
However, even today calendars based on the moon’s cycle are still used. The lunar Hijri calendar, observed in the Middle East and North Africa (and by religious followers of the Muslim faith), follows 12 lunar months making a year of 354 to 355 days. This lunar calendar determines the months of Ramadan and Dhū al-Ḥijjah, which are important holy lunar months in the Islamic calendar. Because this lunar calendar follows the moon, the months of Ramadan and Dhū al-Ḥijjah do not line up with the solar calendar, so the start of the month of Ramadan shifts in relation to the Gregorian Calendar: in 2020 CE Ramadan started on April 24th, while in the year 2030 CE it begins on December 26th. The Islamic calendar numbers the 12-month lunar years from the first year of Muhammad’s journey to Medina, abbreviated AH (for Anno Hegirae).
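The drift of the lunar Hijri year against the solar calendar can be estimated from mean month lengths; here is a Python sketch using modern mean values (these day lengths are assumptions, not figures from the text):

```python
# Sketch: how fast a 12-month lunar year drifts against the solar calendar.
SOLAR_YEAR_DAYS = 365.2422       # mean tropical year, modern value
LUNAR_YEAR_DAYS = 12 * 29.5306   # twelve synodic months ≈ 354.37 days

drift_per_year = SOLAR_YEAR_DAYS - LUNAR_YEAR_DAYS
print(round(drift_per_year, 1))                  # ~10.9 days earlier each solar year
print(round(SOLAR_YEAR_DAYS / drift_per_year))   # a full lap in ~34 solar years
```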
The Method of Counting Years
Using a solar calendar in Rome, the years were counted from the founding of the city of Rome, or AUC (Ab urbe condita), in 754 BCE. Early Christians numbered the years by Anno Diocletiani, the years since the rule of the Roman Emperor Diocletian, who persecuted early Christians in Nicomedia, near present-day Istanbul, and destroyed earlier records. These years were later modified by a monk named Dionysius Exiguus, who adjusted the year from 247 Anno Diocletiani to 532 Anno Domini by calculating the birth year of Jesus Christ back to the 28th year of the reign of the Roman Emperor Augustus. Hence AD (Anno Domini, which means “the year of our Lord”) became standardized across the Christian nations of Europe by around 700 CE. With the widespread use of the solar Gregorian Calendar, the AD system of numbering solar years remained in place, but in non-religious scientific use it is referenced by denoting the Common Era (CE) and Before Common Era (BCE). This system is used throughout this text. Note that in this system 1 CE is immediately preceded by 1 BCE (there is no year zero), with BCE years counting down over time and CE years counting up, with the calculated birth of Jesus Christ as the first year of the era.
Using the sun or moon are not the only ways to keep track of time. Because the Earth orbits the Sun each year, stars appear on the horizon at different times of the night, and in different locations depending on the time of year (just like the sun and moon). A stellar calendar was first used in Egypt and recorded in a series of texts collectively called the Book of Nut, which is also depicted in the Tomb of Ramses IV, who reigned in Egypt around 1150 BCE. The Book of Nut recognizes 36 stars or combinations of stars that rise in different places in the night sky every 10 days. These ten-day periods are called decans, and represent 360 nights in a year, close to the actual 365.242 nights in a modern calendar year.
Length of the Earth’s Day
Using stars to measure time is referred to as sidereal time. Sidereal time differs from solar time (which uses the position of the sun) because Earth makes one rotation around its axis in a sidereal day, and during that time it moves a short distance (about 1°) along its orbit around the Sun. So after a sidereal day has passed, Earth still needs to rotate slightly more before the Sun returns to local noon according to solar time. A mean solar day is, therefore, nearly 4 minutes longer than a sidereal day: the sidereal rotation of the Earth takes 23 hours 56 minutes and 4.1 seconds, while the solar day is 24 hours. Another way to think of this is that a star is a distant point of reference, while the much closer Sun has moved slightly in relation to the orbiting Earth. Similarly, the Solar Year (also called a Tropical Year), when the sun returns to the exact same position in the daytime sky, is only about 21 minutes shorter than the Sidereal Year, when a star returns to the exact same position in the night sky. Astronomers use sidereal time to track stars in the night sky, while clocks tend to read solar time.
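The 23 h 56 m 4.1 s figure can be recovered from the fact that, relative to the stars, Earth makes one extra rotation per year; here is a Python sketch of that reasoning:

```python
# Sketch: length of a sidereal day. In one year Earth spins one extra time
# relative to the stars, so 366.2422 sidereal days fit in 365.2422 solar days.
SOLAR_DAY_HOURS = 24.0
DAYS_PER_YEAR = 365.2422

sidereal_hours = SOLAR_DAY_HOURS * DAYS_PER_YEAR / (DAYS_PER_YEAR + 1)

hours = int(sidereal_hours)
minutes = int((sidereal_hours - hours) * 60)
seconds = (sidereal_hours - hours - minutes / 60) * 3600
print(hours, minutes, round(seconds, 1))   # ≈ 23 h 56 m 4.1 s
```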
Hours in an Earth Day
In the Book of Nut, the day is divided into 24 hours: 12 hours of the night, measured by the times stars appeared in the night sky, and 12 hours of the day, measured by sundials. However, the hours of the day were not necessarily standard units of time. For example, in China hours were recorded as the Shí-kè (時 - 刻) during daylight hours by tracking the sun, and as the Gēng-diǎn (更 - 點) during the night, marked by the sound of gongs. In medieval Europe hours were announced from bell towers, which divided the day into 8 non-standard hours called Matins, Prime, Terce, Sext, None, Vespers, Sunset, and Compline; while tracking the positions of the Sun and stars could determine these hours, they were not standardized.
With the advent of mechanical clocks during the Renaissance, keeping track of the hours of the day became more uniform, with the introduction of 60 minutes to a standard hour in a 24-hour day. This was first measured by a clockmaker named Jost Bürgi in 1579 CE, for use in astronomical tracking of the stars and planets at night in the court of Emperor Rudolph II, where he worked alongside Johannes Kepler. But the terms minutes and seconds in reference to time originate with the work of Johannes de Sacrobosco, who in 1235 CE was the first to calculate the shift in the occurrence of the Spring Equinox under the Julian Calendar. As smaller standard units of time, minutes and seconds were vital to understanding very long-term oscillations in the orbit of the Earth and helped calculate the occurrence of the Spring Equinox.
1g. Coriolis Effect: How Earth’s Spin Affects Motion Across its Surface.
As a spinning spheroid, Earth is constantly in motion; one of the reasons you don’t notice this spinning motion is that the Earth’s spin lacks any acceleration, holding a set speed or velocity. To offer a more familiar example, imagine that Earth is a long train traveling at a constant speed down a very smooth track. The people on the train don’t feel the motion. In fact, the passengers may be unaware of the motion of the train when the windows are closed and they have no reference from the surrounding passing landscape. If you were to drop a ball on the train, it would fall straight downward from your perspective, giving the illusion that the train is not moving. This is because the train is traveling at a constant velocity, and hence has zero acceleration. If the train were to slow down or speed up, the acceleration would no longer be zero, and the passengers would suddenly feel the motion of the train. If the train speeds up, exhibiting a positive acceleration, the passengers feel a force pushing them backward. If the train slows down, exhibiting a negative acceleration, the passengers feel a force pushing them forward. A ball dropped during these times would move in relation to the slowing or quickening of the train. When the train is moving with constant velocity and zero acceleration we refer to this motion as inertia: an object with inertia has zero acceleration and a constant velocity or speed. Although the Earth’s spin is slowing very, very slightly, its acceleration is close to zero, and it is in a state of inertia. As passengers on its surface, we don’t feel this motion because Earth’s spin is not speeding up or slowing down.
Differences in Velocity due to Earth's Spheroid Shape
The Earth is spheroidal in shape, as represented by a globe. Because of this shape, objects moving freely over long distances across the Earth’s surface experience differences in velocity due to Earth’s curvature. Objects starting near the equator and moving toward a pole begin with a faster eastward velocity, because they start at the widest part of the Earth, and they outpace the more slowly moving surface as they approach the poles. Likewise, objects starting near the poles and moving toward the equator lag behind the surface, which moves faster toward the widest part of the Earth’s spin. These differences result in accelerations relative to the surface that are not zero, but positive or negative.
As a consequence, the paths of objects moving freely across large distances of the spinning surface of the Earth curve. This effect on an object’s path is called the Coriolis Effect. Understanding the Coriolis effect is important for understanding the motion of storms, hurricanes, and ocean surface currents, as well as airplane flight paths, weather balloons, rockets, and even long-distance rifle shooting. Anything that moves across different latitudes of the Earth is subject to the Coriolis effect.
Tossing a Ball on a Merry-go-round
The best way to understand the Coriolis effect is to imagine a merry-go-round, which represents a single hemisphere of a spinning Earth. The merry-go-round has two people: Sally, who is sitting on the edge of the spinning merry-go-round (representing a position on the Equator of the Earth), and George, who is sitting at the center (representing a position on the North Pole). George at the North Pole has a ball. He rolls the ball to Sally at the Equator. Because Sally has a faster velocity, sitting as she is on the edge of the merry-go-round, by the time the ball arrives at her position she will have moved, and a ball rolled straight will miss her location. The ball’s path is straight from a perspective above the merry-go-round, but from George’s perspective the ball appears to veer to the right of Sally. The key to understanding this effect is that the velocity of the surface beneath the ball increases as the ball travels from George toward Sally, so relative to the merry-go-round the ball undergoes an apparent acceleration, while both George and Sally have zero acceleration. If we map the path of the rolled ball from the perspective of the merry-go-round, the path curves clockwise. Since the Earth is like a giant merry-go-round, we treat this deflection of the paths of free-moving objects as a Coriolis force. This force can be seen in the path of any free-moving object that crosses different latitudes. The Coriolis force adheres to three rules.
The Rules of Earth's Coriolis Force
1) The Coriolis force is proportional to the velocity of the object relative to the Earth; if there is no relative velocity, there is no Coriolis force.
2) The Coriolis force increases with increasing latitude; it is at a maximum at the North and South Poles, but with opposite signs, and is zero at the equator in respect to the mapped surface of the Earth, but does exert some upward force at the equator.
3) The Coriolis force always acts at right angles to the direction of motion, in the Northern Hemisphere it acts to the right of the starting observation point, and in the Southern Hemisphere to the left of the starting observation point. This results in Clockwise motion in the Northern Hemisphere and Counter-Clockwise motion in the Southern Hemisphere.
These paths are difficult to predict intuitively, as they appear to curve with respect to the surface of the Earth. One example where the Coriolis force enters your daily life is when you travel by airplane across different latitudes. Because the Earth is moving below the airplane as it flies, the path relative to the Earth will curve, and the airplane must adjust its flight path to account for the motion of the Earth below. The Coriolis force also affects the atmosphere and ocean waters, because these gases and liquids are free to move with respect to the solid spinning Earth.
To calculate the Coriolis effect for an object moving across Earth’s surface, consider an object at point P traveling with velocity V_total; only the horizontal component of this velocity, V_horizontal, matters for deflection across the surface. The Coriolis force is then calculated by taking:

F = 2 · m · ω · V_horizontal · sin(φ)

where ω is the angular velocity of the Earth’s spin, φ is the latitude, and m is the mass of the object.
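As a worked example of the standard Coriolis formula, here is a Python sketch (the mass, speed, and latitude chosen are illustrative assumptions):

```python
import math

# Earth's angular velocity in radians per second (one rotation per sidereal day).
OMEGA = 7.2921e-5

def coriolis_force(mass_kg, v_horizontal_ms, latitude_deg):
    """Magnitude of the Coriolis force, in newtons: F = 2 m ω v sin(latitude)."""
    return 2 * mass_kg * OMEGA * v_horizontal_ms * math.sin(math.radians(latitude_deg))

# A 1 kg object moving 100 m/s at 45 degrees latitude:
print(round(coriolis_force(1, 100, 45), 4))   # ≈ 0.0103 N (a tiny sideways push)
print(coriolis_force(1, 100, 0))              # 0.0 along the equator
```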
A Common Misconception Regarding Flushing Toilets in Each Hemisphere
A common misconception is that, because of the Coriolis effect, water draining from sinks, toilets, and basins swirls in different directions depending on the hemisphere. This misconception arose from a famous experiment conducted over a hundred years ago, in which water was drained from a very large wooden barrel. After the water was allowed to sit and settle for a week (so that there was no influence from any agitation in the water), a tiny plug at the base of the barrel was pulled, and the water slowly drained out. Because of the large size of the barrel, the water moved with different velocities across the breadth of the barrel. Water on the equator side of the barrel moved slightly faster than water on the pole side, resulting in a path toward the right in the Northern Hemisphere, and hence a counter-clockwise direction of the draining water.
Since this famous experiment, popular accounts of the Coriolis effect have focused on this phenomenon, despite the fact that most drains are influenced more by the shape of the basin and the flow of the water. Rarely do they reproduce the original experimental conditions: a large basin or barrel, and water that is perfectly settled. Recently the experiment was replicated with small children's swimming pools (see https://www.youtube.com/watch?v=mXaad0rsV38), several meters across. Even at this small size, the water drained counter-clockwise in the Northern Hemisphere and clockwise in the Southern Hemisphere. Note that the effect is more pronounced the closer the experiment is conducted to the poles.
Why Does the Swirl Run Opposite to the Direction of Movement?
Why does the water swirl through the drain opposite to the direction its path curves? The motion of the water is relative to the drain plug: water approaching from the equatorial side moves faster than the drain plug, so its path curves right in the Northern Hemisphere and overshoots the drain on its right side. From this point near the drain plug, gravity pulls the water through the drain toward the water’s left side, producing a counter-clockwise swirl as it drains. In the Southern Hemisphere the swirl is in the opposite direction.
Satellites that track hurricanes and typhoons show that a storm's path curves to the right in the Northern Hemisphere and to the left in the Southern Hemisphere, while the clouds and winds swirl in the opposite direction into the eye of the storm: counter-clockwise in the Northern Hemisphere and clockwise in the Southern Hemisphere, just as in the experiments with large barrels and child-size swimming pools. The storm's overall track, however, bends clockwise in the Northern Hemisphere and counter-clockwise in the Southern Hemisphere. This is why hurricanes strike the eastern coast of North America and the Gulf of Mexico, including the states of Florida, Louisiana, Texas, the Carolinas, Georgia, and Virginia, and rarely if ever the western coast, such as California and the Baja California Peninsula of Mexico.
The Trajectory of Moving Objects on Earth's Surface
If the trajectory of a freely moving object does not cross different latitudes, its path will be straight, since its velocity relative to the Earth's surface remains the same and its acceleration is zero. The Coriolis effect, however, operates in three dimensions: the higher an object's altitude, the greater its velocity, because a position higher above the surface traces a longer orbit around the Earth each day. Thus changes in velocity can occur at any latitude if a freely moving object changes altitude. At the equator this force is vertical, leading to a peculiar net upward flow of air around the equator of the Earth, which is called the Intertropical Convergence Zone.
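The strength of the horizontal deflection described above is usually summarized by the Coriolis parameter, f = 2Ω sin(latitude), where Ω is Earth's rotation rate. A minimal sketch in Python (assuming the standard value of Ω; the function name is ours, not from any particular library) shows how the deflection vanishes at the equator and peaks at the poles:

```python
import math

OMEGA = 7.2921e-5  # Earth's rotation rate, radians per second

def coriolis_parameter(latitude_deg):
    """Horizontal Coriolis parameter f = 2 * Omega * sin(latitude)."""
    return 2 * OMEGA * math.sin(math.radians(latitude_deg))

# The horizontal deflection is zero at the equator, strongest at the poles.
print(coriolis_parameter(0))    # 0.0 at the equator
print(coriolis_parameter(45))   # ~1.03e-4 per second at mid-latitudes
print(coriolis_parameter(90))   # ~1.46e-4 per second at the poles
```

This is why the swimming-pool experiment works better, and faster, the closer it is run to a pole.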
1h. Milankovitch Cycles: Oscillations in Earth’s Orbit and Rotation.
"The heavy iron door closed behind me....I sat on my bed, looked around the room and started to take in my new social circumstances… In my hand luggage which I brought with me were my already printed or only started works on my cosmic problem; there was even some blank paper. I looked over my works, took my faithful ink pen and started to write and calculate...When after midnight I looked around in the room, I needed some time to realize where I was. The small room seemed to me like an accommodation for one night during my voyage in the Universe." - Milutin Milanković Summer 1914.
Milutin Milanković, the Imprisoned Scientist
Arrested and imprisoned while returning from his honeymoon in the summer of 1914, Milutin Milanković found himself alone in his prison cell. A successful engineer and expert in concrete and bridge building, Milutin was wealthy, in love with his new wife, and hopelessly obsessed with a cosmic scientific problem that gripped his mind even as he was being imprisoned. Milutin was born in Serbia and had taken a position as Chair of Mathematics across the border in the Austro-Hungarian city of Budapest the same year that Gavrilo Princip, a Serbian, assassinated the heir to the Austro-Hungarian throne, Archduke Franz Ferdinand. The assassination would plunge the globe into the First World War, and as a Serbian returning to the Austro-Hungarian Empire, Milutin was arrested and placed in jail. His new wife, Kristina Topuzovich, implored the authorities to release her husband, but as hostilities between the nations escalated over the summer, his chances of release diminished. Milutin Milanković was singularly obsessed with a problem in science that had to do with the motion of the Earth through space. In the years leading up to 1914, scientists had discovered that the Earth had experienced widespread Ice Ages in its recent past: cold periods that lengthened glaciers, carved mountains, deposited large boulders, and covered the landscapes of northern Europe and North America in giant ice sheets. Geologists around the Northern Hemisphere had found strong geological evidence for these previous episodes of ice ages, yet climate scientists had not yet discovered why they occurred. Milutin wrestled with the idea, and wondered whether it had to do with long-term cycles in Earth's orbit.
Earth’s Tilt Causes the Seasons
The winter and summer months are a result of Earth's axial tilt of 23.5 degrees: as Earth orbits the Sun during the months of June, July, and August, the Northern Hemisphere points toward the sun, lengthening its daylight hours, while in December, January, and February the Southern Hemisphere points toward the sun, lengthening its daylight hours instead. Milutin wondered if there was a similar but much longer orbital cycle that would cause a long-term cycle of ice ages.
(Figure: animation of Earth as seen from the Sun through its yearly orbit.)
Before his imprisonment, Milutin had begun studying something called Earth's precession. For nearly two thousand years, astronomers mapping the stars had noted a slight shift in the positions of stars in the night sky. The direction of Earth's axis relative to Polaris (the North Star) appears to move in a circular path in the night sky, estimated to complete a circle in 25,772 years. This odd observation also helps explain another odd fact: the Solar Year (also called a Tropical Year), when the sun returns to the exact same position in the daytime sky (noon between two summer solstices), is about 20 minutes shorter than the Sidereal Year, when a star returns to the exact same position in the night sky after a full orbit around the sun. This shorter Solar Year reflects the slight wobble in Earth's orientation, which was called a precession cycle. After 25,772 years the positions of the stars and the solstices return to their original starting points.
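The roughly 20-minute gap between the two kinds of year can be checked directly from the numbers above. A small Python sketch (using standard modern year lengths) compares the measured difference against what the 25,772-year precession period predicts:

```python
SIDEREAL_YEAR_DAYS = 365.25636   # a star returns to the same night-sky position
TROPICAL_YEAR_DAYS = 365.24219   # the sun returns to the same solstice position
PRECESSION_YEARS   = 25772       # one full precession circle

# Direct difference between the two year lengths, in minutes:
diff_minutes = (SIDEREAL_YEAR_DAYS - TROPICAL_YEAR_DAYS) * 24 * 60
print(round(diff_minutes, 1))       # ~20.4 minutes

# The same number predicted from the precession period: one whole year
# of drift spread evenly over 25,772 years.
predicted_minutes = TROPICAL_YEAR_DAYS * 24 * 60 / PRECESSION_YEARS
print(round(predicted_minutes, 1))  # ~20.4 minutes
```

The two calculations agree, which is exactly why the precession period can be inferred from careful year-length measurements.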
The best way to explain the axial precession cycle is to watch a spinning top or gyroscope, which tends to wobble during its spin, such that the axis of the spin rotates in a circle when viewed from above. Milutin realized that this cycle could be a clue to the reason for the ice ages in Earth's past, because it was an example of a long-term orbital cycle. However, it was not the only one.
This precession of Earth's orientation was first recognized through the vernal and autumnal equinoxes, and so is also called the Precession of the Equinoxes. As the Earth orbits, the Sun's apparent path forms a great circle around the sky called the ecliptic. The celestial equator is another great circle, projected from Earth's equator onto the starry night sky. The angle between these two great circles is 23.5°, the tilt of Earth's axis. The points where the two circles intersect, found by observing the rising sun against the background stars, define the equinoxes: the moments when Earth's tilt is perpendicular to the direction of the sun and days and nights are of equal length across the Earth. This occurs around March 21st and September 23rd. Astronomers noticed, however, that the background stars that forecast the coming of the equinox were slowly shifting, resulting in a precession of the equinoxes in the night sky. This, of course, is a result of the axial precession, which shifts the ecliptic relative to the celestial equator as well.
Milutin realized that over this cycle, the axis of the Earth would point more toward or farther from the sun at a given time of year. For example, there would be a period of several thousand years when the Northern Hemisphere wobbled more toward the sun during its summer (making summers hotter), and a period of several thousand years when the Northern Hemisphere wobbled away from the sun during its winter (making winters colder).
Working with the university, Milutin's wife arranged for him to be transferred to the library of the Hungarian Academy of Sciences. He was forbidden to leave and remained under guard during the war, but was allowed to continue his research.
Earth's precession was not the only long-term variation in Earth's orbit. The tilt of Earth's axis oscillates between 24.57° and 22.1°; the current tilt is 23.43677° and is slowly decreasing. This shift in the tilt of the axis, referred to as Earth's obliquity, completes a cycle about every 41,000 years. The first accurate measure of Earth's tilt was made by Ulugh Beg (الغ بیگ), who built the great Samarkand Observatory in present-day Uzbekistan. His measurement of 1437 CE was 23.5047°, a decrease of about 0.068° over the 583 years since, which implies a complete obliquity cycle of roughly 42,400 years, close to the modern figure. The oscillations in Earth's tilt would result in more severe winters and summers during the long periods, lasting several thousand years, when Earth's tilt was greater. While in the library, Milutin pondered the effect of Earth's changing obliquity on Earth's climate, and then moved on to a third orbital variation.
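Ulugh Beg's measurement allows a back-of-the-envelope estimate of the obliquity cycle. The Python sketch below assumes the tilt falls at a constant rate, which it does not exactly, so the answer is only a rough cross-check against the 41,000-year figure:

```python
TILT_1437 = 23.5047      # Ulugh Beg's 1437 CE measurement, degrees
TILT_NOW  = 23.43677     # present-day tilt, degrees, 583 years later
YEARS     = 583

rate = (TILT_1437 - TILT_NOW) / YEARS   # degrees of tilt lost per year

# A full cycle sweeps from 24.57 deg down to 22.1 deg and back up again.
swing = 24.57 - 22.1
full_cycle = 2 * swing / rate

print(round(full_cycle))   # ~42,400 years, near the modern 41,000-year value
```

The slight overshoot comes from the constant-rate assumption; the actual change in tilt follows a smooth oscillation, not a straight line.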
The third long-term variation in Earth's orbit that Milutin examined was the shape of Earth's elliptical orbit around the sun. During its yearly orbit, the Earth is closest to the sun at perihelion (around January 4th) and farthest from the sun at aphelion (around July 5th). Mathematically, the shape of an orbit is described by a quantity called eccentricity. When eccentricity is zero, the orbit is a perfect circle: the distance across its widest part equals the distance across its narrowest part. When the orbit is an ellipse rather than a perfect circle, the widest part is longer than the narrowest part, and eccentricity measures how large that difference is.
To measure Earth's eccentricity, one can observe the sun's shadow each day for a full year and determine the precise time it reaches its highest point in the sky (the shortest shadow of the day). The difference between local clock noon and the moment the sun is actually highest shifts slightly through the year, because Earth's velocity along its elliptical path varies as it moves around the Sun: the Earth moves faster when it is closer to the sun and slower when it is farther away. These differences between local noon and the observed solar path can be used to determine Earth's eccentricity, which is currently 0.0167. The orbit is nearly a circle, but not exactly.
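Eccentricity can also be computed directly from the perihelion and aphelion distances mentioned above. A quick Python sketch (using standard modern values for those distances) recovers the same figure:

```python
# Present-day orbital distances, in millions of kilometers:
APHELION   = 152.1   # farthest from the Sun, around July 5th
PERIHELION = 147.1   # closest to the Sun, around January 4th

# For an ellipse, eccentricity follows from the two extreme distances:
eccentricity = (APHELION - PERIHELION) / (APHELION + PERIHELION)
print(round(eccentricity, 4))   # ~0.0167, matching the value in the text
```

A difference of only about 5 million kilometers out of roughly 300 million is why Earth's orbit looks circular to the eye.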
Another way to measure Earth's eccentricity is to record the analemma of the Sun: the position in the sky at which the sun reaches its highest point each day of the year. The shape of the analemma depends on the elliptical path of the Earth around the Sun. It traces a figure-8 for eccentricities less than about 0.045, but as eccentricity approaches 0.045 or greater, the shape becomes more tear-drop-like, with no intersection.
Earth's eccentricity oscillates between 0.057 and 0.005, which means that Earth's path around the sun is sometimes more circular and sometimes more elliptical. When eccentricity is greater, Earth strays farther from, and swings closer to, the Sun at the extremes of its orbit. In the Northern Hemisphere, perihelion currently occurs in the winter, resulting in milder winters, while aphelion occurs in the summer, resulting in cooler summers. When eccentricity is smaller, the winters and summers will be colder and hotter in the Northern Hemisphere, with the opposite in the Southern Hemisphere. However, the change in the distance to the sun is small relative to the orbit as a whole, and has a fairly mild effect on climate. The eccentricity cycles between its extremes about every 92,000 years, although it oscillates in an odd pattern, because these changes in eccentricity are due to the interaction of Earth with other planets in the solar system, particularly Mars, Venus, and Jupiter, which pull and tug at Earth, stretching Earth's yearly orbital path around the sun. The largest planets, Jupiter and Saturn, also tug on the sun itself. Hence astronomers define something called the barycenter of the solar system: the center of mass of two or more bodies in orbit around each other. When the two bodies are of similar mass (such as two stars), the barycenter lies between them and both bodies orbit around it. But since the sun is much, much larger than the planets in the solar system, the barycenter is a moving point orbiting near the sun's core. This is what drives the changes in eccentricity: as Jupiter and Saturn (and the other planets) tug and pull on Earth and the Sun, the center of mass of the solar system is not exactly at the center of the sun.
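Jupiter's dominance among the planets can be illustrated with a simple two-body calculation. The Python sketch below (using standard masses and distances) finds the Sun–Jupiter barycenter and shows that it lies just outside the Sun's surface:

```python
SUN_MASS      = 1.989e30   # kg
JUPITER_MASS  = 1.898e27   # kg, about 1/1000 of the Sun
JUPITER_ORBIT = 778.5e6    # km, Jupiter's mean distance from the Sun
SUN_RADIUS    = 696_000    # km

# Distance of the two-body barycenter from the Sun's center:
barycenter_km = JUPITER_ORBIT * JUPITER_MASS / (SUN_MASS + JUPITER_MASS)

print(round(barycenter_km))        # ~742,000 km
print(barycenter_km > SUN_RADIUS)  # True: just outside the Sun's surface
```

So even the Sun wobbles: it orbits a point slightly above its own surface, dragged around mostly by Jupiter, with smaller tugs from the other planets.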
The three major orbital influences on Earth's climate
Milutin Milanković was released after the war and applied his mathematical knowledge to better understand Earth's past climate. He did this by mathematically summing these three major orbital influences on Earth. By combining the three long-term orbital variations, he laid out a prediction with which scientists could determine the oscillations of Earth's climate over its long history. His publications and notes on Earth's motion and its relationship to long-term climate were published in book form in 1941 in German, entitled Kanon der Erdbestrahlung und seine Anwendung auf das Eiszeitenproblem (The Canon of the Earth's Irradiation and its Application to the Problem of Ice Ages). Milanković mathematically predicted oscillations and long-term cycles of Earth's climate that have been verified repeatedly in ice-core data and ancient sedimentary rocks, which record these long-term cycles in Earth's orbit, today called Milanković Cycles.
1i. Time: The Invention of Seconds using Earth’s Motion.
The Periodic Swing of a Pendulum
Earth's motion would play a vital role in unlocking how to measure seconds and in standardizing the accuracy of timekeeping. The first breakthrough was made by Galileo Galilei in 1581 while attending a particularly boring lecture, as the story has been retold and likely fictionalized. In the room was a chandelier swinging in the breeze from an open window. The rate of its swings seemed to be independent of the length of each swing: when the chandelier arced over a longer distance, it appeared to move at a faster rate. Galileo was the first to discover that pendulums behave isochronically, meaning that the period of a pendulum's swing is independent of its amplitude (the angle at which the object is released) and of the width of its arc. The rate of a pendulum's swing is also independent of the mass of the object at the end of the pendulum; it depends only on the length of the string from which the object hangs.
If two weights were hung from strings of equal length and set rocking at the same time, they would match each other swing for swing, allowing for accurate timekeeping. The use of pendulums for timekeeping was later perfected by Christiaan Huygens, who wrote a book on the use of pendulums for clocks, published in 1673. Indeed, in the 1670s pendulums were at the height of scientific curiosity. A pendulum of a set length would be set rocking, and the number of swings counted for a full sidereal day, that is, until a star reached the same position the following night. It was tedious work counting every swing of a pendulum for an entire day and night, but the ability to time experiments and observations in seconds using a fixed-length pendulum revolutionized science.
In 1671 the French Academy sent Jean Richer to the city of Cayenne in French Guiana, South America, near the Equator. Although sent to observe the position of Mars in the night sky in order to calculate the distance from Earth to Mars, Jean Richer also took with him a pendulum of fixed length whose swings had been counted in Paris over a full sidereal day. In French Guiana he repeated the experiment and found that the number of pendulum swings differed between the two cities. Previously it had been thought that the only thing that altered the rate of a pendulum's swing was its length. This curious experiment, and others like it, led to the fundamental scientific concept of the moment of inertia, as proposed by Christiaan Huygens.
The moment of inertia is equal to the mass of an object multiplied by the square of its radius from the center of mass, or I = m r².
Isaac Newton, a contemporary and friend of Christiaan Huygens, realized that the difference in the number of pendulum swings between these two places on Earth arose because each city sits at a slightly different radius from the center of Earth's mass: Paris is closer to the center of the Earth, while Cayenne, with its position nearer the Equator, is farther from it. The rate of the pendulum's swing thus differed because of differences in the acceleration of Earth's gravity at each city.
The period of a pendulum (one full back-and-forth) is

T = 2π √(L/g)

where L is the length of the pendulum and g is the local acceleration of gravity; a single swing takes half this time. This equation works only for pendulums swinging with small amplitudes and a fixed moment of inertia. (If you get the pendulum really rocking in an accelerating car, you need a much more complicated formula.)
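The formula is easy to check numerically. A minimal Python sketch (using the standard value of g; the function name is ours) computes the time for a single swing of the 0.9937-meter pendulum used in clocks of the era:

```python
import math

g = 9.80665  # standard acceleration of gravity, m/s^2

def swing_time(length_m, g=g):
    """One swing (half the full period) of a small-amplitude pendulum."""
    return math.pi * math.sqrt(length_m / g)

# The "seconds pendulum" of 0.9937 m:
print(round(swing_time(0.9937), 4))   # ~1.0 second per swing
```

This is why a pendulum of that particular length, and no other, could serve as a one-second reference in Paris.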
The length of a pendulum in Paris that produced a 1-second swing, 0.9937 meters, was almost adopted as the official definition of the meter. However, because the local acceleration of gravity (and hence the required pendulum length) varies slightly from place to place on Earth, the meter was instead defined in 1791, when the metric system was first designed, from a survey of the meridian through Paris, while the 0.9937-meter seconds pendulum became the standard in most clocks.
Evidence of a Spinning Earth: Foucault’s Pendulum
The work of Isaac Newton and Christiaan Huygens on pendulums confirmed that Earth's gravity acts as an acceleration. One way to picture this is to imagine a pendulum inside a car. Both the pendulum and the car are stationary before the start of a race, but once the race begins and the car accelerates down the track, the pendulum inside the car is pulled backward by the inertia of the accelerating race car (as is the driver). However, if the car travels at a constant speed (zero acceleration), then as long as the car does not change its speed and no object touches the pendulum, the pendulum will hang motionless even as the car travels at high speed. Isaac Newton realized that Earth's gravity behaves like the acceleration of the race car acting on the pendulum. Hence we refer to Earth's gravity as the acceleration of gravity, written in mathematical formulas as little g.
One of the most amazing experiments, replicated around the world, uses a large pendulum to demonstrate the spinning motion of the Earth, as well as a method to calculate latitude. It was first performed by Léon Foucault, the inventor of the gyroscope, which he had hoped to use to observe the rotation of the Earth's spin. However, it is for his experiment with a pendulum that he is best known. Foucault built a very long pendulum in his attic and set it in motion by tying a string to the weight at a set amplitude, then burning the string with a flame, which released the pendulum without any jostling. He watched its movement and noticed that its swing plane slowly began to rotate. The reason for this rotation was that the pendulum was not turning; the ground beneath his feet was, with the rotation of the Earth!
In a famous demonstration, Léon Foucault built a giant pendulum in the Panthéon of Paris and showed the public that the plane of a pendulum's swing rotates through a full circle in 31.8 hours in Paris, marking a path at each point in its rotation. The length of time this rotation takes depends on latitude: a large pendulum at the north or south pole would trace Earth's rotation in 23 hours, 56 minutes, and 4.1 seconds, but in Paris, at a latitude of 48.8566° N, the rotation of the Earth beneath the pendulum takes longer. The closer the pendulum is to the equator, the longer the rotation becomes, until at the equator it no longer rotates at all.
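The rotation period of a Foucault pendulum follows a simple rule: the sidereal day divided by the sine of the latitude. A short Python sketch (the function name is ours) reproduces both figures above:

```python
import math

SIDEREAL_DAY_HOURS = 23 + 56/60 + 4.1/3600   # ~23.934 hours

def foucault_rotation_hours(latitude_deg):
    """Hours for a Foucault pendulum's swing plane to complete one circle."""
    return SIDEREAL_DAY_HOURS / math.sin(math.radians(latitude_deg))

print(round(foucault_rotation_hours(90), 2))       # ~23.93 h at a pole
print(round(foucault_rotation_hours(48.8566), 1))  # ~31.8 h in Paris
```

Note that as the latitude approaches 0°, the sine approaches zero and the rotation period grows without bound, which is the mathematical form of the statement that the pendulum does not rotate at the equator. Run in reverse, timing the rotation gives your latitude.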
The reason for this lack of rotation at the equator is that the swing plane of the pendulum and the spinning Earth share the same plane of reference there, while at the poles the Earth simply spins beneath the pendulum: counter-clockwise beneath a pendulum at the North Pole (so its swing plane appears to turn clockwise), and the reverse at the South Pole. If you observe a Foucault pendulum for any length of time, you may feel a sense of vertigo as you realize that the swing plane of the pendulum is not moving; the Earth beneath it is. A Foucault pendulum still runs inside the Panthéon of Paris today, as well as in numerous other places around the world, as this demonstration validates Earth's spinning motion.
A failed experiment to measure the Earth’s spin using the speed of light
Léon Foucault may be best known for his pendulum in the Panthéon of Paris, but he also invented something that would revolutionize science. It grew out of his obsession with the motion of Earth's spin, and it involved using a spinning mirror to measure the speed of light for the first time in history.
The apparatus consisted of a beam of light shone onto a spinning wheel of mirrors, which reflected the beam onto a stationary mirror, which reflected the light back to the spinning mirrors. In the time the light took to reach the stationary mirror and return, the spinning mirrors had moved into a slightly different orientation. As a result, the returning beam did not reflect directly back to the original light source, but was deflected at a slight angle that depended on the speed of the spinning wheel of mirrors. Using this simple apparatus, Léon Foucault calculated a speed of light between 298 million and 300 million meters per second, close to its modern measured value of 299,792,458 meters per second!
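The geometry of the rotating-mirror method can be sketched numerically. The mirror distance and spin rate below are illustrative assumptions, not Foucault's actual settings, but they show why the deflection angle, though tiny, is measurable:

```python
import math

C = 299_792_458        # speed of light, m/s
DISTANCE = 20.0        # m, separation of the mirrors (illustrative)
REV_PER_SEC = 500      # spin rate of the rotating mirror (illustrative)

omega = 2 * math.pi * REV_PER_SEC   # angular speed, rad/s
round_trip = 2 * DISTANCE / C       # time for the light to go out and back
mirror_turn = omega * round_trip    # how far the mirror rotated meanwhile
beam_deflection = 2 * mirror_turn   # reflected beam shifts twice the mirror angle

print(round_trip)        # ~1.3e-7 seconds
print(beam_deflection)   # ~8.4e-4 radians
```

Measuring that small angular shift, with the distance and spin rate known, lets the calculation be run backwards to solve for c.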
Light had baffled scientists, as it appeared to behave sometimes as a particle and sometimes as a wave. In 1801 Thomas Young performed a simple experiment: he cut two slits in a box, shone light through the slits, and observed that the two beams of light cast on a nearby wall interfered with each other like ripples of passing waves. A similar phenomenon is observed when two stones are dropped into a lake at the same time: the rippling waves interfere with each other as they radiate out from the stones. If light was a wave, as this experiment suggested, then light waves must be passing through some medium, which scientists called the aether. The existence of this aether was difficult to prove, but two American scientists, Albert A. Michelson and Edward Morley, dedicated themselves to proving it using the speed of light and the rotation of the Earth. They failed.
The reasoning, inspired by the motion of pendulums, was that the measured speed of light should vary with the motion of the Earth's spin. At the Equator, if one beam of light were shone in the north-south direction and another in the east-west direction, their speeds should differ, because the Earth spins beneath them at 1,040.45 miles per hour (1,674.44 km/hr) in the east-west direction. Just as with pendulums, light waves should show differences as they traveled against this "wind" of aether created by Earth's spin.
In 1887, Albert A. Michelson and Edward Morley measured the speed of light in two different directions as precisely as possible in a vacuum, and each time they found the same result: the two speeds of light, no matter their orientation, were exactly the same! There appeared to be no invisible aether; the speed of light, unlike Earth's gravity, appeared to be a constant. How could this be?
The Michelson and Morley experiment is one of the most famous failed experiments, and while it did not prove the presence of aether, it led to a major breakthrough in science.
The problem was solved by a brilliant Dutch scientist named Hendrik Lorentz, who suggested that the reason the experiment failed was that the distance measured was itself slightly different, because of the difference in Earth's speed, or velocity, along each path.
To demonstrate this mathematically, Hendrik Lorentz imagined beams of light bouncing between pairs of mirrors traveling at different but constant velocities. If the speed of light between the mirrors is held the same, and you know the constant velocity of each pair, then the light between the faster-moving mirrors must travel a longer path, since the mirrors move while the light is in flight, relative to the slower-moving or stationary pair. The faster the mirrors travel, the longer the path the light must take between them, and for light to complete this longer path in the same amount of time suggested that time itself is relative to velocity. In a series of mathematical equations known as the Lorentz transformations, Lorentz calculated the time dilation (also known as the length dilation) by the expression

γ = 1 / √(1 − v²/c²)
where v is the velocity of an object with zero acceleration (such as the spinning Earth) and c is the speed of light. The larger this factor, the shorter the length of a meter becomes and the longer time becomes. Graphing this equation gives values near 1 when velocity is less than half the speed of light, but very large values as the velocity approaches the speed of light, reaching infinity and breaking down when the velocity of an object equals the speed of light.
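The Lorentz factor is easy to tabulate. A short Python sketch (using the standard value of c; the function name is ours) shows how gently it departs from 1 at everyday speeds and how sharply it blows up near the speed of light:

```python
import math

C = 299_792_458   # speed of light, m/s

def lorentz_factor(v):
    """Time-dilation factor 1 / sqrt(1 - v^2 / c^2)."""
    return 1 / math.sqrt(1 - (v / C) ** 2)

print(lorentz_factor(30_000))    # Earth's orbital speed: barely above 1
print(lorentz_factor(0.5 * C))   # half light speed: ~1.15
print(lorentz_factor(0.99 * C))  # 99% of light speed: ~7.1
```

At Earth's orbital speed the factor differs from 1 by only a few parts per billion, which is why the Michelson–Morley apparatus had to be so extraordinarily precise.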
It is likely the most frightening equation you will ever see, for at its root it sets the universal speed limit for anything with mass in the universe. Faster-than-light travel for anything with mass is an impossibility according to the Lorentz transformations, and the nearest star is over 4 light years away (roughly 25 trillion miles). A rocket traveling at Earth's current rate of galactic motion, 439,246 miles per hour, would take around 6,500 years to reach the nearest star, far longer than your lifespan. Despite science fiction depicted in movies and video games, where distances across the universe are short, easily traveled, and populated by aliens, these imaginings are simply wishful thinking. Earth will always be your home. You are inevitably stuck here.
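That 6,500-year figure is simple arithmetic, as a quick Python check using the distance and speed quoted above confirms:

```python
DISTANCE_MILES = 25e12   # rough distance to the nearest star system
SPEED_MPH = 439_246      # Earth's galactic speed, used as the rocket's speed
HOURS_PER_YEAR = 24 * 365.25

years = DISTANCE_MILES / SPEED_MPH / HOURS_PER_YEAR
print(round(years))      # ~6,500 years
```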
The Michelson and Morley experiment is still being replicated, most recently with the Laser Interferometer Gravitational-Wave Observatory (LIGO), a pair of observatories in Washington and Louisiana that each measure the distance between two mirrors oriented in different directions. Any changes in the distance between the mirrors, spaced 4 kilometers apart and measured extremely precisely by observing changes in light-wave interference (to within the width of a single atom), are due to gravitational waves caused by collisions of supermassive black holes and neutron stars millions of light years from Earth. We may never visit these places, but we can observe them from Earth, as gravitational waves flickering almost imperceptibly through light.
The Lorentz transformation was intensely studied by a young Albert Einstein, who, building on Lorentz's work, formulated his theory of Special Relativity in 1905 in his paper On the Electrodynamics of Moving Bodies. Both Lorentz and Einstein showed how your notion of time is relative to your motion, or more precisely to Earth's velocity: your sense of time is interwoven with the planet's motion through space. A few months after his publication of special relativity in 1905, Einstein asked the big question: what does this constant, the speed of light, have to do with mass and energy? The result was Einstein's famous equation E = mc², where E is energy, m is mass, and c is the speed of light. But before you can learn more about this famous equation of Einstein's, you will need to learn more about Earth's energy and matter.
Section 2: EARTH’S ENERGY
2a. What is Energy and the Laws of Thermodynamics?
On Bloom Street in Manchester, England, is a tiny pub called "The Goose." Based on online reviews it is not a very good pub, with dirty bathrooms and a rude bartender, and over the years its name has changed with each owner. It sits in the heart of Manchester's Gay Village district, but if you could travel back in time two hundred years, you could purchase a Joule Beer at the pub. Joule Beer was crafted by a master brewer from Manchester named Benjamin Joule, whose strong English porter had made him famous and rich in the bustling English city. When his son James Joule was born with a spinal deformity, he lavished him with an education fit for the higher classes. More a scientist than a brewer, James Joule became obsessed with temperature. He carried a thermometer wherever he went and measured differences in temperature, taking diligent notes of all his observations, particularly when helping his father brew beer. Determining the precise temperature for activities such as brewing was an important skill his father taught him, but James took it to the extreme. Thermometers were not necessarily a new technology for the day; Daniel Fahrenheit and Anders Celsius had devised thermometers nearly a century before, and each still lends his name to a unit of temperature in degrees (Fahrenheit and Celsius). No, James Joule was singularly obsessed with temperature because it simply fascinated him. What fascinated him most was how one could raise the temperature of a substance, such as a pail of water, in all sorts of ways: by placing it over a burning fire, by running an electric current through it, or by stirring it at a fast rate. Measuring the change in temperature was a way to compare mathematically the various methods employed to heat the water. James Joule had developed a unique way to measure vis viva.
Vis viva is Latin for living force; in the century before James Joule was born, the term was used to describe the force or effect that two objects have when they collide with each other. Isaac Newton held that vis viva was the product of an object's mass and its velocity: the faster the object traveled and the more mass it had, the more vis viva it carried. Gottfried Leibniz, on the other hand, argued that velocity mattered much more, and that vis viva grew with the square of an object's velocity. While the two men debated, it was a woman who discovered the solution.
Émilie du Châtelet, the Cannonball and the Bullet
Her name was Émilie du Châtelet, and she was perhaps one of the most famous scientists of her generation. Émilie was born into lesser nobility, married a rich husband, and dedicated herself to science. She studied with some of the great mathematicians of the time, invented financial derivatives, took the famous poet Voltaire as a lover, and wrote several textbooks on physics. In her writings she described an experiment in which lead balls of different masses were dropped into a thick layer of clay from different heights. The depth a ball sank into the clay increased far more when the ball was dropped from higher up than when its mass was increased. This demonstrated that velocity matters more than mass, but the effect was difficult to measure.
Imagine a cannonball and a bullet. The cannonball measures 10 kilograms and the bullet 0.1 kilograms (one hundred times less massive). If each is fired at the same velocity, the cannonball will clearly cause more damage, because it has more mass. But how much faster would the bullet have to travel to cause an equal amount of damage: 10 times faster, or 100 times? The clay experiments showed that the bullet needed to travel only 10 times faster to cause an equal amount of damage, although this was difficult to quantify, as measuring vis viva was challenging.
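The cannonball-versus-bullet comparison falls straight out of the modern kinetic-energy formula. A Python sketch (using an arbitrary common velocity; the function name is ours) confirms that 10 times the speed offsets 100 times less mass:

```python
def kinetic_energy(mass_kg, velocity_ms):
    """Kinetic energy Ek = 1/2 * m * v^2, in joules."""
    return 0.5 * mass_kg * velocity_ms ** 2

v = 100.0  # some common velocity, m/s
cannonball = kinetic_energy(10.0, v)       # 10 kg at speed v
bullet     = kinetic_energy(0.1, 10 * v)   # 1/100th the mass, 10x the speed

print(round(cannonball))   # ~50,000 joules
print(round(bullet))       # ~50,000 joules: the energies match
```

Because velocity enters the formula squared, multiplying speed by 10 multiplies the energy by 100, exactly canceling the hundredfold difference in mass.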
Origin of the Word Energy
In 1807, the linguist and physicist Thomas Young, who would later go on to decipher Egyptian hieroglyphs using the Rosetta Stone, coined the scientific term Energy, from the ancient Greek word ἐνέργεια. Hence it was said that Energy = Mass × Velocity². This was the first time the word Energy was used in a modern sense. Today, we would call this Kinetic Energy, energy caused by the motion or movement of something. The equation is actually:

Ek = ½ × m × v²

Where Ek is the Kinetic Energy, m is the mass of the object, and v is the velocity. Note that there is a ½ in the equation. This slight modification was proposed later by Gaspard-Gustave Coriolis, for whom the Coriolis Effect is named, at about the same time that James Joule was having his obsession with thermometers.
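To make the ½mv² relationship concrete, here is a short calculation using the chapter’s cannonball and bullet. The speeds (100 m/s and 1,000 m/s, ten times faster) are hypothetical values chosen for illustration:

```python
# Kinetic energy: E_k = 1/2 * m * v^2
def kinetic_energy(mass_kg, velocity_ms):
    """Return kinetic energy in joules."""
    return 0.5 * mass_kg * velocity_ms ** 2

# A 10 kg cannonball at 100 m/s versus a 0.1 kg bullet at 1,000 m/s:
cannonball = kinetic_energy(10.0, 100.0)
bullet = kinetic_energy(0.1, 1000.0)
print(cannonball, bullet)  # both 50000.0 J
```

Because velocity is squared, ten times the speed exactly offsets one hundredth the mass, just as the clay experiments suggested.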
What really is Energy?
James Joule demonstrated with his experiments that this energy could be measured by the heat (the change in temperature in a pail of water) that an activity produced. Initially his experiments involved electricity. In 1843 he demonstrated at a scientific meeting of the British Association for the Advancement of Science that water with an electrical current passing through it would heat up, resulting in a gain in temperature. He wondered if he could demonstrate that kinetic energy (from the motion of objects) would also heat up the water. If true, he could calculate a precise unit of measurement of energy using the temperature changes observed in water. At the time, there were a great many skeptics of his ideas, as many scientists of the day believed that there was a substance, a self-repellent fluid or gas called caloric, which moved from warm bodies to cold bodies, an idea supported by the knowledge that oxygen was required for fire. James Joule thought this idea silly. He countered with his own idea: what raised the water’s temperature was that the water was “excited” by the electricity, the fire, or the motion. These activities caused the water to vibrate. If he could devise an experiment to show that the motion of an object would change the temperature of the water, he could directly compare energy from a burning fire, electricity, and the classic kinetic energy of moving objects.
The Discovery of a Unit of Energy, the Joule
In 1845 he conducted his most famous experiment: a weight was tied to a string, which pulled a paddle wheel, stirring water in an insulated bucket. A precise thermometer measured the slight change in temperature in the water as the weight dropped. He demonstrated that all energy, whether kinetic energy, electric energy, or chemical energy (such as fire), was equivalent. James Joule summarized his discovery by stating that when energy is expended, an exact equivalent of heat is obtained. Today, energy is measured in Joules (J) in honor of his discovery.
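A back-of-the-envelope sketch of the experiment’s bookkeeping, assuming all of the falling weight’s gravitational potential energy ends up as heat in the water. The 10 kg weight, 2 m drop, and 1 kg of water are illustrative values, not the dimensions of Joule’s actual apparatus:

```python
G = 9.81          # gravitational acceleration, m/s^2
C_WATER = 4186.0  # specific heat of water, J/(kg*K)

def paddle_wheel_temp_rise(weight_kg, drop_m, water_kg):
    """Temperature rise (K) if all the weight's potential energy
    becomes heat in the stirred water."""
    energy_joules = weight_kg * G * drop_m   # potential energy m*g*h
    return energy_joules / (water_kg * C_WATER)

# Hypothetical setup: a 10 kg weight falling 2 m, stirring 1 kg of water
dT = paddle_wheel_temp_rise(10.0, 2.0, 1.0)
print(f"{dT:.3f} K")  # about 0.047 K
```

The tiny size of the answer shows why Joule needed such a precise thermometer.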
One Joule can be expressed in equivalent units as:

1 J = 1 kg·m²/s² = 1 N·m = 1 Pa·m³ = 1 W·s

where J is Joules, kg is kilograms, m is meters, s is seconds, N is Newtons (a unit of force), Pa is Pascals (a unit of pressure), and W is Watts (a unit of power).
The common modern measure of electrical energy is the kilowatt-hour, the unit that you will find on your electric company bill. A kilowatt-hour is equivalent to 3.6 megajoules (1,000 Watts × 3,600 seconds = 3.6 million Joules).
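The conversion stated above is easy to verify with a one-line function:

```python
def kwh_to_joules(kwh):
    """1 kilowatt-hour = 1,000 watts sustained for 3,600 seconds."""
    return kwh * 1000 * 3600

print(kwh_to_joules(1))  # 3600000 J, i.e. 3.6 megajoules
```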
In 1847, James Joule presented his research at the annual meeting of the British Association for the Advancement of Science in the city of Oxford, which was attended by the most brilliant scientists of the day, including Michael Faraday, Gabriel Stokes, and a young scientist by the name of William Thomson. While he won over Michael Faraday and Gabriel Stokes, he struggled to win over the young Thomson, who was fascinated but skeptical of the idea. James Joule returned home from the scientific meetings absorbed with how to win over the skeptical William Thomson. At home, his summer was filled with busy plans for his wedding to his lovely fiancée, Amelia Grimes. They planned a romantic wedding and honeymoon in the French Alps, and while looking over the brochures of places to visit, he stumbled upon a very romantic waterfall dropping down through the mountains called the Cascade de Sallanches. He convinced Amelia that they should visit the romantic waterfall, and wrote to William Thomson to see if he could meet him and his new wife in the French Alps; he had something he wanted to show him.
In 1847 the romantic couple and the skeptical William Thomson arrived at the waterfall to conduct an experiment. You can see why James Joule found the waterfall intriguing. The water does not drop simply from a cliff, but tumbles off rocks and ledges as it cascades down the mountainside, and all the energy released as the water falls adds heat, such that, as James Joule explained to the skeptical William Thomson, the temperature of the water at the bottom of the waterfall should be warmer than the water at the top. Taking his most trusted thermometer, he measured the temperature of the bottom pool of water, then hiked up to the top of the waterfall to measure the top pool. The spray of the water resulted in different values. Wet and trying not to fall in the water, James Joule was comically doing all he could to convince the skeptical William Thomson, but the values varied too much to tell for certain. Nevertheless, the two men became lasting friends, and a few years later, when Amelia died during childbirth and he lost his only infant daughter a few days later, James Joule retreated from society, but kept up his correspondence with William Thomson.
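Joule’s waterfall prediction can be estimated: each kilogram of water falling a height h gains g·h joules of energy, and warming a kilogram of water by one kelvin takes about 4,186 joules. The 100 m drop below is an assumed, illustrative height, not a measurement of the Cascade de Sallanches:

```python
G = 9.81          # gravitational acceleration, m/s^2
C_WATER = 4186.0  # specific heat of water, J/(kg*K)

def waterfall_temp_rise(height_m):
    """Expected warming (K) if all the falling water's potential
    energy becomes heat: per kilogram, g*h joules into 4186 J/K."""
    return G * height_m / C_WATER

# Assumed 100 m drop:
print(f"{waterfall_temp_rise(100.0):.3f} K")  # about 0.234 K
```

A fraction of a degree at best, easily swamped by spray and evaporation, which is why the measurement was so hard to make.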
The experience at Cascade de Sallanches had a major impact on William Thomson. Watching the tumbling waterfall, he envisioned the tiny particles of water becoming excited, vibrating with this energy, as they bounced down the slope. He envisioned heat as vibrational energy inside molecules, and with increased heat, the water would turn to steam, and float away as excited particles of gas, and if cooled would freeze into a solid, as the vibrational energy decreased. He imagined a theoretical limit to temperature, a point so cold, that you could go no colder, where no energy, no heat existed, an absolute zero temperature.
Absolute Zero, and the Kelvin Scale of Temperature
William Thomson returned to the University of Glasgow a young, brash professor, intrigued with this idea of an absolute zero temperature, a temperature so cold that all the vibrational energy of matter would be absent. With the help of James Joule, he calculated that this temperature would be −273.15° Celsius; matter could not be cooled below it. In his many years of research and teaching, William Thomson invented many new contraptions, famously helped to lay the first transatlantic telegraph line, and was raised to the peerage, taking the name Lord Kelvin after the river that ran through his home near the University of Glasgow. Today, scientists set his temperature of −273.15° Celsius equal to 0 kelvin, the unit of measurement that describes temperature above absolute zero. Kelvins are often preferred among scientists over Celsius (which sets 0° Celsius at the freezing point of water), because the kelvin scale starts from the same absolute zero for all matter. Furthermore, Lord Kelvin postulated that the universe was like a cup of tea left undrunk, slowly cooling down toward this absolute coldest temperature.
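Converting between the two scales is a simple offset, since a kelvin and a degree Celsius are the same size:

```python
ABSOLUTE_ZERO_C = -273.15  # absolute zero on the Celsius scale

def celsius_to_kelvin(t_c):
    """Shift the Celsius reading so that absolute zero reads 0 K."""
    return t_c - ABSOLUTE_ZERO_C

def kelvin_to_celsius(t_k):
    return t_k + ABSOLUTE_ZERO_C

print(celsius_to_kelvin(0.0))     # 273.15 (freezing point of water)
print(celsius_to_kelvin(-273.15)) # 0.0 (absolute zero)
```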
Scientists have since cooled substances down to the very brink of this super low temperature (the current record is 1 × 10⁻¹⁰ kelvin, with larger refrigerated spaces achieving temperatures as low as 0.006 kelvin). At these low temperatures, scientists have observed some unusual behavior, including the presence of Bose–Einstein condensates, superconductivity, and superfluidity. However, scientists still detect a tiny amount of vibrational energy in atoms at these cold temperatures, a vibrational energy that holds the atoms together called zero-point energy, which had been predicted previously. The background temperature of the universe is around 2.73 kelvin, slightly above absolute zero, such that even the coldest portions of outer space are still a few degrees above absolute zero.
Using this scale, your own solar system ranges from a high of 735 kelvin on the surface of Venus to a low of 33 kelvin on the surface of Pluto. The Earth ranges from 185 to 331 kelvin, but mostly hovers around an average temperature of 288 kelvin. Earth’s Moon, with its thin atmosphere, varies more widely, between 100 and 400 kelvin, making its surface both colder and hotter than the extreme temperatures measured on Earth.
Winters in Edinburgh, Scotland, are cold and damp. Forests were in limited supply in the lowlands of Scotland, such that many of the city’s occupants in the early 1800s turned to coal to heat their homes. Burned in the fireplace, coal provided a method to heat homes, but it had to be shipped into the city from England or Germany. The demand for coal was growing as the city grew in population. A group of investors suggested bringing in local coal from the south. They constructed a trackway on which horse-drawn carts could carry heavy loads of coal into the city; however, near the city was a steep incline, too steep for horses to pull up the heavy loads of coal. It was along this passage of track that two large steam engines were purchased to pull the carts up the incline. Each steam engine was fed a supply of coal to burn in its furnace, heating water in a boiler, which turned to steam. The steam could be released into a cylinder, which would slide back and forth, transferring heat into mechanical energy. The cylinder would turn a pulley and pull the carts of coal up the incline into the city. As a young boy whose father managed the transport of coal into the city, William Rankine was fascinated with the power of these large steam engines. Soon the horses were replaced by the new technology of steam locomotives, which chugged along with the power of burning coal. Rides were offered to passengers, and soon the rail line meant to transport coal became a popular way for people to travel. William Rankine studied engineering and became the top scientist in the emerging field of steam power and the building and operation of steam locomotives. In 1850, he published the definitive book on the subject, but his greatest work was likely a publication in 1853 in which he described the transfer of energy.
James Joule had shown that motion could be transformed into heat, while the study of steam locomotives demonstrated to William Rankine that heat could be transformed into motion. Rankine fully endorsed Joule’s idea of the conservation of energy, but he realized something unique was happening when energy was being transferred in a steam locomotive. First the water was heated using the fire from burning coal; the boiling water produced steam, but the engineer of the locomotive could capture this energy, holding it until the valve was opened and the steam locomotive began to move. Rankine called this captured energy Potential Energy.
A classic example of potential energy is a ball rolled up an incline. At the top of the incline the ball has gained potential energy. It could be held there forever, but at some point the ball will release that energy and roll back down the incline, producing Kinetic Energy. In like fashion, a spring in a watch can be wound tight, storing potential energy; once the spring is released, the watch exhibits kinetic energy as the hands on its face move, recording time. A battery powering a tablet computer stores potential energy when charged, but once used for watching Netflix videos, that energy is released. The energy it took to store the potential energy is equal to the energy that is released as kinetic energy.
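The equivalence can be illustrated by equating the two forms of energy for the ball on the incline (ignoring friction and rotation): m·g·h = ½·m·v², so the mass cancels and v = √(2gh). A minimal sketch:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def speed_at_bottom(height_m):
    """Speed when all potential energy m*g*h has become kinetic
    energy 1/2*m*v^2; the mass cancels, giving v = sqrt(2*g*h)."""
    return math.sqrt(2 * G * height_m)

# A ball released from a 5 m incline (illustrative height):
print(f"{speed_at_bottom(5.0):.2f} m/s")  # about 9.90 m/s
```

Note the answer does not depend on the ball’s mass, echoing du Châtelet’s clay experiments.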
Rankine called kinetic energy “actual energy,” since it did actual work. In his famous 1853 paper he states, “actual energy is a measurable, transferable, and transformable affection of a substance, the presence of which causes the substance to tend to change its state in one or more respects; by the occurrence of which changes, actual energy disappears, and is replaced by potential energy, which is measured by the amount of a change in the condition of a substance, and that of the tendency or force whereby that change is produced (or, what is the same thing, of the resistance overcome in producing it), taken jointly. If the change whereby potential energy has been developed be exactly reversed, then as the potential energy disappears, the actual energy which had previously disappeared is reproduced.” To summarize William Rankine: the amount of energy that you put into a device is the same amount of energy that comes out of the device, even if there is a delay between the storage of the potential energy and the release of the kinetic energy.
Rankine went on to state that “The law of the conservation of energy is already known, viz: that the sum of the actual and potential energies in the universe is unchangeable.” It was a profound statement, but one also made by James Joule: that energy in the universe is finite, a set amount. Energy cannot be spontaneously created or spontaneously destroyed; it only moves from one state to another, alternating between potential and kinetic energy. The power to move the steam locomotives was due to the release of potential energy stored in buried coal; the coal was produced by ancient plants, which stored potential energy from the energy of the sun. Each step in the transfer of energy was a pathway back to an original source of the energy within the universe; energy did not come from nothing. Such scientific laws were verified by years of failed attempts to make perpetual motion machines. Machines that continued to work without a source of energy were impossible.
However, this scientific law that energy cannot be spontaneously created had to be revised in 1905 by Albert Einstein, who first proposed that E = mc². The total amount of energy (E) is equal to the total amount of mass (m) multiplied by the speed of light (c) squared. If you could change the amount of mass, it would produce a large amount of energy. This equation would go on to demonstrate a new source of energy, nuclear energy, in which mass is reduced or gained, resulting in the spontaneous release of energy. The Law of the Conservation of Energy had to be modified to state that in an isolated system with constant mass, energy cannot be created or destroyed. The study of energy transfer became known as Thermodynamics: thermo- for the study of heat, and dynamics- for the study of motion.
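A quick calculation shows why even a tiny change in mass releases enormous energy; here, converting a single gram of mass:

```python
C = 299_792_458  # speed of light in a vacuum, m/s

def mass_to_energy(mass_kg):
    """E = m * c^2: energy (in joules) equivalent to a given mass."""
    return mass_kg * C ** 2

# One gram of mass:
e = mass_to_energy(0.001)
print(f"{e:.3e} J")  # about 8.988e13 J, roughly 25 million kilowatt-hours
```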
Entropy and Noether’s Theorem
Using the law of the conservation of energy, engineers imagined a theoretical device that alternated energy between potential energy and kinetic energy with zero loss of energy to heat. In physics this is referred to as Symmetry: energy put into a system is the same amount of energy that is retrieved from the system. Yet experiment after experiment failed to show this; there appeared to always be a tiny loss of usable energy when energy changed states. This loss is described by Entropy. Entropy is a thermodynamic quantity representing the unavailability of a system's thermal energy for conversion into mechanical work, often interpreted as the degree of disorder or randomness in the system. In the world of William Rankine, entropy was the loss of usable energy through heat, which prevented any system from being purely symmetrical between states of energy exchange. Entropy is the loss of usable energy over time, which increases disorder in a system. However, the law of the conservation of energy forbade the destruction of energy. Einstein’s discovery that changes in mass can unlock spontaneous energy suggested that there might exist changes that unlock the spontaneous destruction of energy.
In 1915, at the University of Göttingen, two professors were struggling to reconcile the Law of the Conservation of Energy with Einstein’s new Theory of Relativity, and they invited one of the most brilliant mathematicians of the day to help them. Her name was Emmy Noether. Emmy was the daughter of a math professor at the nearby University of Erlangen and had taken his place in teaching, although as a woman she was not paid for her lessons to the students. She was a popular, somewhat eccentric teacher, whom students either adored or were baffled by. When shown the problem, she realized that the Law of the Conservation of Energy described a symmetrical relationship between potential energy and kinetic energy, and could be reconciled with relativity through an advanced algebraic technique: flipping two mathematical equations when they result in an identical but symmetrical relationship. It would be like looking at a mirror to describe what exists in the room reflected in it. It was a brilliant insight, and it resulted in a profound understanding of the conservation of energy that directly led to the birth of quantum physics.
The important implication of Noether’s Theorem for you to understand is that entropy is directly related to a system’s velocity and time. Energy is lost in the system due to the system’s net velocity in the universe, or likewise the time that has passed during the conversion of energy. Here on Earth, the motion of the Earth, measured either in time or velocity, is the reason for the loss of that tiny amount of energy during the conversion between potential and kinetic energy. This insight is fascinating when you consider systems of energy traveling at the speed of light. Approaching the speed of light, time slows down until it stops, at which point the transformation between potential and kinetic energy is purely symmetrical, such that there is no entropy. This suggests that light, itself a form of energy, does not experience any entropy (heat loss) as long as it is traveling at the speed of light. Of course, light can slow as it hits any resistance, such as gas particles in the Earth’s atmosphere, or solid matter such as your face on a sunny day. At this point, heat is released. Light traveling at the speed of light through the near vacuum of outer space can travel incredibly far distances, from galaxies on the other side of the universe to your eye on a dark starry night. It is because of this deep insight into Emmy Noether’s mathematical equations that we can explain entropy in terms of time and velocity, as observed here on planet Earth.
The Four Laws of Thermodynamics
You can summarize what you have learned about the nature of energy and energy exchange into four rules, or laws of thermodynamics.
Law 0: If two thermodynamic systems are each in thermal equilibrium with a third, then they are in thermal equilibrium with each other. This basically states that we can use thermometers to measure energy as heat, when they are brought into equilibrium within a system.
Law 1: Law of the Conservation of Energy, which states that in an isolated system with constant mass, energy cannot be created or destroyed.
Law 2: The Law of Entropy: for energy in an isolated system traveling less than the speed of light, when there is an energy transfer between potential and kinetic energy, there will be a slight loss in the availability of energy for subsequent transfers, due to the system’s velocity or the passage of time. Hence systems become more disordered and chaotic over time.
Law 3: The Law of Absolute Zero: as the temperature of a system approaches absolute zero (−273.15 °C, 0 K), the value of the entropy approaches a minimum.
Energy usage has become more critical in the modern age, as society has invented new devices that convert energy into work, whether in the form of heat (to change the temperature of your home), motion (to transport you to school in a car), or electricity (to display these words on a computer), as well as the storage of energy as potential energy for later usage (such as charging your cellphone to later text your friends this evening). The laws of thermodynamics define how energy is moved between states, and how energy systems become more disordered through time by entropy.
2b. Solar Energy.
A Fallen Scientist
On a cold late November day in 1997, Fred Hoyle found himself injured, his body smashed against tomb-like granitic rocks at the base of a large cliff near Shipley Glen in central England. His shoulder bones were broken, his kidneys were malfunctioning, and blood dripped from his head. He could not move; he was on the edge of death. He had no notion of how long he had been lying at the base of the cliff, as it was dark, and the moss-covered stones and cragged tree branches hovered over him in the dim light. But then it came. The sun. It illuminated the sky, burning bright, and he remembered who he was.
Fred Hoyle was a physicist of the sun. Fifty years before, he had written his greatest works, a series of scientific papers published between 1946 and 1957, and in the process discovered how the sun generates its energy through the creation of new forms of atoms of differing masses. He had founded a new field of science called stellar nucleosynthesis. In the early years of the twentieth century, scientists had discovered that enormous amounts of energy could be released by the spontaneous decay of radioactive atoms. This decay, with its loss of atomic mass, is the basis of nuclear fission, and by the 1940s this power was harnessed in the development of atomic weapons and nuclear power. The sun, however, emits its energy due to nuclear fusion, the generation of atoms of increasing atomic mass. Fred Hoyle was at the forefront of this research, as he suggested that all atoms of the universe are formed initially in stars such as the sun.
Above him, the injured Fred Hoyle observed the sun, its bright light illuminating the morning sky, glowing yellowish white. It is the giant of the solar system: 1.3 million planets the size of Earth could fit inside the volume of the sun, and it has a mass 333,000 times that of the Earth. It is almost unimaginably large, and while stars elsewhere in the universe dwarf the sun, its enormous size is nearly incomprehensible.
Stars are classified by their color (which is related to temperature) and their luminosity (or brightness), which is related to the size of the star. The sun sits in the center of a large pack of stars called the Main Sequence, in a diagram plotting color against luminosity called the Hertzsprung–Russell diagram. The sun’s yellowish-white light spectrum indicates an average surface temperature of 5,778 kelvin and a luminosity of 1 solar unit. Along the main sequence, stars can be grouped by their color.
Annie Jump Cannon developed what is called the Harvard system, which uses letters to denote different colors, which relate to temperature. Using this system, the sun is a class G star. The hottest blue stars are class O, and the coolest red stars are class M. The series was used to map the night sky, denoting stars in a sequence from hottest to coolest (O, B, A, F, G, K, and M). O and B stars tend to be bluish in color, A and F stars tend to be white, G stars are more yellow, and K and M stars are pink to red. About 90% of stars fit on this Main Sequence, but some odd-balls lie outside of it, including the highly luminous Giant Stars (Supergiants, Bright Giants, Giants, and Subgiants) and the less luminous White Dwarfs.
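A rough lookup of the O-B-A-F-G-K-M sequence can be sketched as follows. The temperature cutoffs here are approximate values chosen for illustration; exact class boundaries vary from source to source:

```python
# Approximate lower temperature bounds (kelvin) for the Harvard
# spectral classes, hottest first. Illustrative values only.
SPECTRAL_CLASSES = [
    (30000, "O"),  # hottest, blue
    (10000, "B"),
    (7500,  "A"),
    (6000,  "F"),
    (5200,  "G"),  # the Sun, ~5778 K
    (3700,  "K"),
    (0,     "M"),  # coolest, red
]

def harvard_class(surface_temp_k):
    """Return the first class whose lower bound the temperature meets."""
    for lower_bound, letter in SPECTRAL_CLASSES:
        if surface_temp_k >= lower_bound:
            return letter

print(harvard_class(5778))   # G -- the Sun
print(harvard_class(35000))  # O -- a hot blue star
```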
The Anatomy of the Sun
The outer crown of the sun is its atmosphere, composed of a gaseous halo seen when the Sun is obscured by the Moon during a solar eclipse. It is a highly dynamic layer, and giant flares erupt from this region of the Sun. The Corona is an aura of plasma (composed of highly charged free electrons), much like lightning bolts, which reaches far into the space around the Sun. Solar Prominences are loop-like features that rise up to 800,000 kilometers above the Sun, and Solar Flares arise from the margins of dark Sun Spots. Sun Spots are cooler regions of the Sun’s Photosphere, a few thousand kelvin cooler than the surrounding gas. These Sun Spots have been observed for hundreds of years and follow an 11-year cycle related to the cycling of the Sun’s magnetic field. Sun spots appear just above and below the Sun’s equator in a clear 11-year burst of activity. Solar flares during this enhanced Sun Spot activity result in charged particles hitting the outermost atmosphere of Earth, producing colorful Auroras in the night sky near the Earth’s poles during these events. Sun spot activity is closely monitored, since it can affect Earth-orbiting satellites (see http://www.solarham.net/). Measurements of solar irradiance hitting the upper atmosphere, made by the NASA satellites Nimbus 7 (launched in 1978) and the Solar Maximum Mission (launched in 1980), among others since then, show slight increases in total solar irradiance striking the upper atmosphere during periods of sun spot activity. This is due to faculae, brighter regions that accompany sun spot activity and give an overall increase in solar irradiance; however, when darker sun spots dominate over the brighter faculae regions, there have also been brief downward swings toward lower solar irradiance during sun spot activity.
Measured solar irradiance of the upper atmosphere since these satellites were launched in 1978 has shown that the sun’s energy striking the Earth’s upper atmosphere varies only between about 1364 and 1369 watts/meter² during these events (Willson & Mordvinov, 2003: Geophysical Research Letters).
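Those satellite numbers imply a remarkably small relative swing:

```python
# Total solar irradiance range from the satellite record cited above (W/m^2)
low, high = 1364.0, 1369.0

# Swing expressed as a percentage of the mean irradiance
variation_percent = (high - low) / ((high + low) / 2) * 100
print(f"{variation_percent:.2f}%")  # about 0.37%
```

Less than half a percent, which is why sunspot cycles alone produce only slight changes in the energy reaching Earth.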
Temperatures are highest in the upper atmosphere of the sun, since these blasts of plasma excite the free particles, producing temperatures above 1 million kelvin. The lowest level of the Sun’s atmosphere is the Transition Zone, which can bulge upward through convection. Convection is the movement of energy along with matter, which can effectively transport energy from the inner portion of the Sun upward. Below the Transition Zone is the much thicker Chromosphere, where the temperature is around 5,778 kelvin. The Chromosphere produces the red color seen only during solar eclipses. Below the Chromosphere is the Photosphere, which, unlike the upper atmospheric layers of the Sun, is held by the Sun’s gravity and consists of denser matter. Although the Sun does not have a clearly defined surface, the top of the denser Photosphere can be viewed as the “surface” of the sun. Below the Photosphere are two zones that transport energy outward from the Core of the Sun. The upper zone is the Convection Zone, in which energy is transferred by the motion of matter (convection), while the lower zone is the Radiative Zone, in which energy is transferred by radiation, without the motion of matter. Between these two zones is the Tachocline. The inner Core of the Sun, representing about 25% of the Sun’s inner radius, is under intense gravitational force, enough to result in nuclear fusion.
The Sun’s Nuclear Fusion
The Sun’s energy is generated by intense gravity crushing particles called protons into neutrons. When two protons are crushed together in the core of the sun, overcoming the electromagnetic force that normally repels them, one proton will release sub-atomic particles and convert to a neutron. This change from proton to neutron releases energy, as well as a positron and a neutrino. The Earth is bombarded every second with billions of these solar neutrinos, which pass through matter unimpeded and often undetected, since they are electrically neutral and have too little mass to readily interact with other particles. Released positrons make their way upward from the core of the sun and interact with the electrons that encircle the sun; each positron is annihilated when it comes in contact with an electron. Electrons, abundant in the negatively charged plasma of the extremely hot outer layers of the sun, prevent positrons from reaching the Earth.
In chemistry, the simplest atom is just a single negatively charged electron surrounding a single positively charged proton: what chemists call hydrogen. With the addition of a neutron, the atom turns from hydrogen into something called deuterium, an isotope of hydrogen. Deuterium carries double the atomic mass of hydrogen, since each proton and neutron carries an atomic mass of 1, giving deuterium a mass of 2, while electrons, neutrinos, and positrons have an insignificant amount of mass, close to zero.
The incredible gravitational force of the sun breaks apart atoms, with electrons pushed upward away from the core, forming a plasma of free electrons in the outer layers of the sun, while leaving protons within the center. Electrons, which carry a negative charge, are attracted to protons, which carry a positive charge. Outside the gravity of the sun, the free protons and electrons would attract each other to form the simplest atom, the element Hydrogen. Inside the core of the sun, however, the protons are crushed together, forming neutrons. This process is called the Proton–Proton chain reaction. There is some debate about how protons in the core of the Sun are crushed together; recent experiments suggest that protons are brought so close together they form diprotons, in which two protons come together forming a highly unstable isotope of Helium. Elements are named based on the number of protons they contain; for example, Hydrogen contains 1 proton, while Helium contains 2 protons. During this process protons are also converted to neutrons. The addition of neutrons helps stabilize atoms of Helium, which contain 2 protons. The Proton–Proton chain reaction within the sun takes free protons and converts them through a series of steps into atoms of Helium, which contain 2 protons and 2 neutrons. Atoms of an element with differing numbers of neutrons are called Isotopes; hence, inside the core of the sun are the following types of atoms:
1 proton + 0 neutrons (Hydrogen)
2 protons + 0 neutrons (Helium-2 isotope)
1 proton + 1 neutron (Hydrogen-2 isotope/ Deuterium)
2 protons + 1 neutron (Helium-3 isotope)
2 protons + 2 neutrons (Helium-4 isotope)
Through this process, Hydrogen with a single Proton is converted to Helium-4 with two Protons and two Neutrons, resulting in larger atoms inside the core of the Sun over time.
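The net result of the chain (four hydrogen nuclei becoming one helium-4) can be checked against standard atomic masses: the small amount of mass that disappears is released as energy, per E = mc². This sketch ignores the small share of energy carried off by neutrinos:

```python
# Atomic masses in unified atomic mass units (u), from standard tables
MASS_H1  = 1.007825   # hydrogen-1
MASS_HE4 = 4.002602   # helium-4
U_TO_KG  = 1.66053907e-27  # kilograms per atomic mass unit
C        = 299_792_458     # speed of light, m/s

# Four hydrogen nuclei fuse (through the steps above) into one helium-4:
mass_lost_u = 4 * MASS_H1 - MASS_HE4       # about 0.0287 u disappears
energy_j = mass_lost_u * U_TO_KG * C ** 2  # E = m * c^2
print(f"{energy_j:.2e} J per helium-4 made")  # about 4.28e-12 J (~26.7 MeV)
```

A vanishingly small energy per reaction, but the Sun performs this fusion on an astronomical scale every second.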
How the Sun Made the Larger Elements
The proposed mechanism of this proton-proton chain reaction within the sun’s core was first theorized by a group of physicists in 1938, working together to solve the question of how the sun generates its energy. While attending the annual Washington Conference of Theoretical Physics, the participants worked out the possible path of reactions. One member of this group was a Jewish German immigrant named Hans Bethe, a professor at Cornell University in New York.
Upon returning from the conference, Hans Bethe and Charles Critchfield began studying larger elements and their possible generation within the sun and larger stars. They discovered something remarkable: when fused atoms within a sun’s core gained 6 or more protons, they could act in a catalytic cycle to enhance the production of Helium-4 from Hydrogen. A catalyst is a substance that does not get used up in a reaction and will continue to act over and over again. Bethe and Critchfield discovered that in the presence of atoms with 6, 7, and 8 protons, the fusion of Hydrogen into Helium proceeds at a faster rate, with these larger atoms working as a catalyst. This process is called the CNO-Cycle, since it requires the larger atoms of Carbon, Nitrogen, and Oxygen to be present within a sun’s core. Our own Sun is a rather small star, and as such generates only a small amount of its energy through the CNO-Cycle; it is estimated that only about 1.7% of the Sun’s energy is generated this way. In larger stars, however, the CNO-Cycle is an important process in the generation of energy, especially in stars with higher temperatures.
The discovery of the CNO-Cycle in the Sun and other Stars by Bethe and Critchfield was marred by the rise of Adolf Hitler in Germany in the late 1930s. Hans Bethe, still a citizen of Germany but of Jewish heritage, worked during this time to get his mother and family out of Germany. In fact, the paper describing the CNO-Cycle won a cash prize from the journal, which helped fund his mother’s emigration to the United States. Hans Bethe’s talent for understanding the physics of nuclear fusion and fission was recognized by the United States military, who appointed him to lead the theoretical division of the top-secret Los Alamos Laboratories in the design and construction of the first nuclear weapons during World War II.
Even after the design and implementation of nuclear fission bombs in the 1940s, scientists were still racing to figure out how larger atoms could form naturally within the heart of stars by the fusion of atoms.
All this was spinning in Fred Hoyle’s mind as he lay at the base of the cliff. He knew all this. He had felt left out of nuclear research, having served during World War II only in the capacity of a specialist in radar research. It was only after the war that he became passionately interested in the Sun’s nuclear fusion, and in how larger and larger atoms could form inside larger stars. Inspired by the research conducted during the war in the United States, Fred Hoyle developed during the 1940s the concept of nucleosynthesis in stars to explain the existence of elements larger than Helium-4. Our own Sun generates its energy mostly by the proton-proton chain reaction, in other words, by burning Hydrogen to form Helium-4. With the occurrence of Carbon-Nitrogen-Oxygen this process could be accelerated, but Fred Hoyle pondered how elements larger than Helium-4 could exist and be formed through the same process. He called this secondary process Helium-burning: the fusion of atoms together to produce even larger atoms, the numerous elements named on a periodic table of elements. Fred Hoyle, William Fowler, and the wife-and-husband team of Margaret and Geoffrey Burbidge drafted a famous paper in 1957, which demonstrated that larger atoms could in fact be generated in very large stars, and that the natural abundances of those elements within surrounding planets lead to greater knowledge of the steps taken to produce them within the star’s history. These steps lead upward to the production of atoms with 26 protons, through a process of fusing smaller atoms to make bigger atoms, but atoms with more than 26 protons required a special case: they formed nearly instantaneously in a gigantic explosion called a Supernova.
Thus, it was theorized that the basic distribution of elements within our solar system was formed through a stepwise process of fusion in a gigantic star that eventually went critical and exploded in a supernova event, injecting atoms of various sizes across a Nebula, a cloud of gas and dust blasted into outer space. This gas and dust, the Nebula, slowly coalesced into a protostellar-protoplanetary disk, which eventually led to the formation of our Solar System and every atom within it. Carl Sagan often expressed this strange fact with the adage, “We are all made of stardust!”
Will the Sun Die?
The fact that our Solar System formed from an exploding giant star leads to the question of what will happen to our Sun over time. The fuel of the sun is Hydrogen, in other words single Protons, some of which are converted to Neutrons as they fuse into Helium-4 atoms containing 2 Protons and 2 Neutrons. Eventually, no Hydrogen will remain within the core of the Sun, as this fuel is replaced with Helium-4. At this point, the Sun will contract, compressing inward and becoming more and more dense. At some point the increasing temperature and density from this gravitational contraction will cause the Helium-4 to fuse into larger atoms, and the Sun will begin burning Helium-4 as a fuel source, which will result in an expansion of the sun outward, well beyond its current size, forming a Red Giant. During this stage in its evolution, the Sun will engulf Mercury, Venus, and even Earth, despite burning at a cooler surface temperature. The Earth is ultimately doomed, but so is the Sun.
Eventually the Helium-4 will be exhausted, and the Sun will contract for its final time, reducing its energy output drastically. Its outer layers will drift away as a faint Planetary Nebula, composed of larger atoms, the burnt-out embers of carbon, nitrogen, and oxygen, while the remaining core of this former furnace of energy is crushed to about the size of Earth. Scientists estimate that this process will take about 6 billion years to play out, and that Earth, with an age today of 4.6 billion years, has roughly that many years remaining until our planet is destroyed during the Red Giant stage of our Sun’s coming future.
The Big Bang
When Fred Hoyle tried to raise himself up from his splayed position below the cliff, he winced in pain, bringing back the memory of his most noteworthy quote, a phrase he mentioned during a radio interview in 1949. A phrase that suggested not only that the Solar System had its beginning with a violent explosion, but that a much older explosion birthed the entire universe. Hoyle vehemently rejected an origin of the universe, favoring the idea that the universe has always existed and lacked any beginning. During the 1949 radio interview, Hoyle explained his steady state hypothesis by contrasting his ideas with the notion of a “Big Bang,” an explosive birth of the universe, a new idea that was proving interesting to other scientists but that Hoyle rejected. During the 1960s, Hoyle began rejecting any idea that seemed to contradict his own, and became noteworthy for his contrarian scientific views. In 1962, a young student named Stephen Hawking applied to study with Fred Hoyle at Cambridge, but was picked up by another professor to serve as his advisor. This was a good thing, as Hoyle became so entrenched in his idea that the universe had no beginning that in 1972, after a heated argument with his colleagues over hiring practices, Hoyle quit his teaching position at the university and retired to the countryside. That same year he received a knighthood, and he struck out on his own.
However, the memory of that period likely brought a painful sensation to his heart. Outside of the academic halls, Fred Hoyle became a burr to the established scientists of the day, drafting more and more controversial and strange ideas and finding some success in publishing science fiction novels with his son. In 1983, he was excluded by the Nobel prize committee, which awarded the prize to his co-author William Fowler and to Subrahmanyan Chandrasekhar for their work on stellar nucleosynthesis. Snubbed, Fred Hoyle fell into obscurity. However, the concept of a “Big Bang” would come to define the future exploration of theoretical physics, and make Stephen Hawking a household name.
Although Fred Hoyle was rescued from his fall from the cliff and transported to the hospital, he never recovered from his fall from science, a fall that could have been averted if he had observed the growing number of breakthroughs regarding the nature of light, and the startling discoveries that proved the universe did indeed have a beginning.
2c. Electromagnetic Radiation and Black Body Radiators.
Color and Brightness
During the 1891-92 academic year, a young woman named Henrietta Leavitt enrolled in a college class on astronomy, and it changed her life. The class ignited her fascination with stars; it was available to her only through the Society for the Collegiate Instruction of Women at the Harvard Annex, at a time when women were not allowed to enroll at Harvard University itself. Leavitt earned an A- in the class, and was left in her final year with an ambitious eagerness to study the stars as a full-time profession. After the class, and even after she had graduated college, she volunteered her time at the Harvard University observatory, organizing photographic plates of the stars observed nightly with the new high-power telescope at the university. The photographic plates were being used by researchers to catalogue stars, noting their color and brightness.
Astronomers were very interested in measuring the distance from Earth to the stars observed in the night sky. Scientists had known for many years the distance to the moon and sun, by measuring something called parallax. Parallax is the effect where the position of an object appears to differ when viewed from different observational positions. For example, close one eye, hold up your thumb, and take a sighting with your thumb, so that your thumb lines up with an object far away. If you switch eyes, you will notice that the faraway object jumps to a different position relative to your thumb. Using some basic math, you can calculate how far away the object is, since the closer the object, the more it will change position between the two points of view. However, when distances are very great, the difference in position between observation points on Earth becomes so small relative to the distance of the object that it cannot be measured. Stars were simply too far away to measure their actual distance from Earth, and scientists were eager to learn the size and dimensions of the universe.
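The parallax rule can be put into a few lines of code. By convention, a star whose parallax angle is 1 arcsecond (measured using the Earth-Sun distance as the baseline) lies 1 parsec away, about 3.26 light-years, so the distance in parsecs is simply the reciprocal of the parallax in arcseconds. A minimal sketch; the Proxima Centauri parallax below is an approximate published value:

```python
# Parallax method: the closer an object, the larger its apparent shift
# between two observation points. For stars, the baseline is the
# Earth-Sun distance, and a 1-arcsecond parallax defines 1 parsec.

def distance_parsecs(parallax_arcsec):
    """Distance to a star, in parsecs, from its parallax angle."""
    return 1.0 / parallax_arcsec

def distance_light_years(parallax_arcsec):
    return distance_parsecs(parallax_arcsec) * 3.2616  # 1 parsec ~ 3.26 light-years

# Proxima Centauri, the nearest star, shows a parallax of roughly 0.7685 arcseconds:
print(f"{distance_light_years(0.7685):.2f} light-years")
```

Note how quickly the method runs out of reach: a star ten times farther shows a shift ten times smaller, which is why a different yardstick was needed for distant stars.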
Henrietta Leavitt’s journey to discover a tool to measure these stellar distances was a lengthy one. Although she began working on a report describing her observations, she was interrupted by travel to Europe and a move to Wisconsin, where, rather than teach science, she got a job teaching art at Beloit College. In Wisconsin’s cold climate she became very ill and lost her ability to hear. Left deaf for the rest of her life, she wrote back to Harvard about gaining employment there to help organize and work on the photographic plates of stars, a pursuit that still interested her. She returned to her work, which resulted in a remarkable discovery.
Astronomers measure what is called the Apparent Magnitude of stars: a star’s brightness as seen from Earth. A large star far away can appear just as bright as a smaller star nearby, and since it was impossible to tell the distances to stars, a star’s true brightness, its Absolute Magnitude, could not be determined. The Apparent Magnitude of a star was measured on the photographic plates taken by the observatory telescope, but Henrietta Leavitt observed a strange relationship when she looked at a subset of the 1,777 stars in her catalogue. She examined 25 stars located in the Small Magellanic Cloud, which, being clustered close together, were believed to be roughly the same distance from Earth. Furthermore, these 25 stars were recognized as Cepheid variable stars, stars that pulse in brightness over the course of several days to weeks.
Henrietta Leavitt carefully measured the brightness of these stars over days to weeks, determined the periodicity of their pulses in brightness, and found that the brighter the star, the longer the period of its pulses. Since these stars were roughly the same distance from Earth, this relationship suggested a method for telling how far away a star is, by looking at the periodicity of its pulses in brightness. If two stars had equal apparent brightness, but one had a longer period between pulses, the star with the shorter period would be the closer one. Hence, Henrietta Leavitt discovered a yardstick to measure the universe. She published her findings in 1912, in a short 3-page paper communicated by her supervisor Edward Pickering. Her discovery would come to importance later, but first you should learn what light really is.
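Leavitt’s period-luminosity relationship is still how Cepheid distances are estimated today. The sketch below uses a modern calibration of the relation (the coefficients -2.43 and -4.05 are illustrative modern values, not Leavitt’s original 1912 numbers) together with the standard distance-modulus formula:

```python
import math

# Sketch of how a Cepheid's pulse period yields a distance.
# Step 1: period -> absolute magnitude M, via a period-luminosity
#         ("Leavitt law") relation; coefficients are a modern calibration.
# Step 2: the difference between apparent magnitude m and M (the
#         "distance modulus") fixes the distance.

def absolute_magnitude(period_days):
    """True brightness of a Cepheid from its pulse period (illustrative calibration)."""
    return -2.43 * (math.log10(period_days) - 1.0) - 4.05

def distance_parsecs(apparent_mag, period_days):
    """Distance from the gap between how bright the star looks and how bright it is."""
    m_minus_M = apparent_mag - absolute_magnitude(period_days)
    return 10 ** ((m_minus_M + 5.0) / 5.0)

# A Cepheid pulsing every 10 days, observed at apparent magnitude 10:
print(f"{distance_parsecs(10.0, 10.0):.0f} parsecs")
```

The longer the period, the brighter the star truly is, so for a fixed apparent brightness a longer-period Cepheid must be farther away, exactly the logic of the paragraph above.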
What is Light and Electromagnetic Radiation?
What is light? For artists, light is a game of observation; without it there is no way of seeing, only darkness. Historically, light was seen as a construction of the mind, of how your eyes take in your surroundings, but centuries of experiments show that light is caused externally by the release of energy into the surroundings. A very good analogy for the concept of light is to imagine a ball that rolls up and down over hills as it travels. Using Noether’s theorems, we can suggest that this ball oscillates between a position at the top of a hill, where the energy is stored as potential energy, and the bottom of the hill, where the energy has been released as kinetic energy, carrying the ball up and over the next hill. Since it travels at the speed of light, the ball never loses energy to entropy as it rises up each successive hill.
The name for this traveling mass-less ball is a Photon, and the distance between the hills is called the Wavelength. Hence light can be viewed as both a particle and a wave. The hills, or wavelengths, can be oriented up and down, side to side, or diagonally in any orientation relative to the Photon’s path of travel. Polarized light is light whose orientation is limited to a single direction.
If you have ever seen a modern 3D movie in a theatre, film makers use polarizing lenses in 3D glasses to project two sets of images at the same time: the right eye receives light oriented in one direction, while the left eye receives light oriented in another direction (often perpendicular). In this way the blurry movie image is broken into two separate images, one for each eye at the exact same time, creating an illusion of dimensionality. If you cut out the two lenses of your 3D glasses, you can orient them perpendicular to each other so that one lens allows only vertically oriented light waves while the other allows only horizontally oriented light waves, resulting in darkness.
This is called cross-polar, as no light can pass. However, you can place a crystal or lens between the two polarizing lenses, which can bend or reflect the light into a different orientation; doing so allows some light waves to change orientation between the two lenses, so that light can then travel through the previously black lens. This is called birefringence. Birefringence is an optical property of a material having a refractive index that depends on the polarization and propagation direction of light. It is an important principle in crystallography and has resulted in breakthroughs in liquid crystal display (LCD) flat-panel televisions, found in proliferation on the walls of sports-bars, airports, and living rooms around the world. Different voltages can be applied to each liquid crystal layer representing a single pixel on the screen. This voltage shifts the birefringence of the crystal, allowing light to pass through the top polarizing lens that previously blocked the light. Color can be added with color filters. Hence, if you are reading these words on an LCD display, it is likely due to this bending of the orientations of polarized light.
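The behavior of stacked polarizing filters follows a standard optics result known as Malus’s law (standard physics, though not named above): the transmitted intensity falls off as the square of the cosine of the angle between the polarizer orientations. A minimal sketch:

```python
import math

# Malus's law: fraction of polarized light transmitted through a second
# polarizer set at a given angle to the light's polarization direction.

def transmitted_fraction(angle_degrees):
    """Fraction of polarized light passing a polarizer at this relative angle."""
    return math.cos(math.radians(angle_degrees)) ** 2

print(transmitted_fraction(0))    # parallel lenses: all polarized light passes
print(transmitted_fraction(90))   # crossed lenses ("cross-polar"): essentially none
print(transmitted_fraction(45))   # a 45-degree orientation passes half
```

This is why the crossed 3D-glasses lenses look black, and why a birefringent crystal that rotates the light’s orientation between them lets some light leak through.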
Wavelength of Light and Color
The photon particle travels at the maximum speed of light, or very near it, but can carry differing amounts of energy, depending on its wavelength. A photon bouncing over closely spaced, steep hills has more energy than a photon that bounces over distantly spaced, gently sloping hills. Using this analogy, light behaves both as a particle and as a wave. This was first demonstrated in 1801 by Thomas Young (the polymath who translated Egyptian Hieroglyphs and coined the word Energy), who placed two slits in some paper and shone a light through them, producing a strange pattern on a screen as the light waves interacted with each other, a pattern of interference similar to the ripples seen in a pond when two rocks are dropped into the water. This interference is caused by the two beams of light waves intersecting with each other.
Light can be split into different wavelengths by use of a prism; the resulting rainbow of colors is called a spectrum. A spectrum separates light of differing wavelengths, rather than orientations, resulting in bands of different colors. A rainbow is a natural feature caused by rain drops acting as a prism to separate out the visible colors of light.
Normal sunlight looks white, but is in fact a mix of light traveling over differing wavelengths. Purple light travels over the shortest wavelengths, with an average wavelength of 400 nm (1 nm = 0.000000001 meters, or 1 × 10⁻⁹ meters), while dark red travels over the longest wavelengths, averaging 700 nm. The mnemonic ROY G. BIV gives the order of colors in the visible light spectrum from longest to shortest wavelength: Red, Orange, Yellow, Green, Blue, Indigo, Violet. Red has the longest wavelength, and hence the least energy, while Violet (or Purple) has the shortest wavelength, and hence the most energy. Light can also travel along wavelengths above and below these values; this “invisible” light, together with visible light, is collectively called Electromagnetic Radiation, a term that refers to the full spectrum of visible and non-visible light.
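Because photon energy is inversely proportional to wavelength, the ROY G. BIV ordering doubles as an energy ranking. A tiny sketch (the representative wavelengths below are approximate):

```python
# Photon energy is inversely proportional to wavelength (E ~ 1/wavelength),
# so the ROY G. BIV color order is also an energy order, reversed.

VISIBLE_NM = {"red": 700, "orange": 620, "yellow": 580, "green": 530,
              "blue": 470, "violet": 400}  # rough representative wavelengths

def energy_ratio(color_a, color_b):
    """How many times more energy a photon of color_a carries than one of color_b."""
    return VISIBLE_NM[color_b] / VISIBLE_NM[color_a]

print(f"violet vs red: {energy_ratio('violet', 'red'):.2f}x")
```

A violet photon carries 700/400 = 1.75 times the energy of a red photon, even though both travel at the same speed.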
Sunlight contains both visible and non-visible light, and hence scientists call this energy the Sun’s Electromagnetic Radiation. Infra-Red light has a longer wavelength than visible light, while Ultra-Violet light has a shorter wavelength than visible light. Ultra-Violet (UV) light contains more energy, and with prolonged exposure can cause sun burns and eventually skin cancer. Sunscreen blocks this higher energy light from hitting the skin, and UV sunglasses block this damaging light from hitting the eye, where it can cause cataracts. The lower energy Infra-Red light is important in the development of “night-vision” goggles, which shift low energy Infra-Red light into the visible spectrum. This is useful for thermal imaging, as warmer objects emit shorter wavelengths of Infra-Red light than colder objects. The most highly energized light on the spectrum of electromagnetic radiation is gamma rays. This very short wavelength electromagnetic radiation is the type of light that first emerges from nuclear fusion in the Sun’s core. Gamma rays have so much energy that they can pass through solid matter. While often invoked in comic books as the source of super powers, gamma rays are the most dangerous form of electromagnetic radiation: this “radiation” from nuclear fusion and fission can pass through materials such as the tissues of living animals and plants, and in doing so seriously damage the molecules in these life forms, resulting in illness and death. Slightly lower, but still highly energetic, electromagnetic radiation are the short-wavelength X-rays, which are also known for their ability to pass through material, and are used by doctors to see your bones. X-rays can also damage living tissue, and prolonged exposure can cause cancer and damage to living cells.
Nuclear radiation is the collective short-wave electromagnetic radiation of both gamma and X-rays, which can pass through materials and are only stopped by material composed of the highest mass atoms, such as lead. The next type of electromagnetic radiation is the slightly longer wavelengths of Ultra-Violet light, followed by the visible light that we can see, which is a very narrow band of light waves. Below visible light is Infra-Red, light that has less energy than visible light and is given off by objects that are warm. Surprisingly, some of the longest wavelength electromagnetic radiation is microwaves, which sit below Infra-Red, with wavelengths between 1 and 10 centimeters. Microwaves were first developed for radar communications, but it was discovered that they are an effective way to heat water molecules bombarded with electromagnetic radiation at this wavelength at large amplitudes. If you are using the internet wirelessly on WiFi, your data is being sent to your computer or tablet over wavelengths of about 12.5 centimeters, just beyond the microwave range and within the longest wavelength range of electromagnetic radiation: radio waves. Radio waves can have wavelengths longer than a meter, which means that they carry the lowest amount of energy along the scale of electromagnetic radiation.
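The rundown above can be summarized as a rough classifier. The band boundaries below are illustrative approximations (real band edges are fuzzy, and different fields draw them slightly differently):

```python
# Rough classifier for the electromagnetic spectrum, from shortest
# (most energetic) to longest (least energetic) wavelengths.
# Cutoffs are illustrative approximations, not hard physical boundaries.

BANDS = [            # (upper wavelength bound in meters, band name)
    (1e-11, "gamma ray"),
    (1e-8,  "X-ray"),
    (4e-7,  "ultraviolet"),   # visible light starts ~400 nm
    (7e-7,  "visible"),       # visible light ends ~700 nm
    (1e-3,  "infrared"),
    (1e-1,  "microwave"),     # ~1 to 10 centimeters
]

def classify(wavelength_m):
    for upper_bound, name in BANDS:
        if wavelength_m < upper_bound:
            return name
    return "radio"

print(classify(550e-9))   # green visible light
print(classify(0.125))    # a ~12.5 cm WiFi signal
```

Note how narrow the visible band is: a mere 300 nm sliver in a spectrum spanning more than fifteen orders of magnitude.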
Wavelength of Light, Energy and how you see the World
There is an important consideration to think about regarding the relationship between wavelength and the amount of energy that a photon carries. If the wavelength is short, the photon has to travel a farther absolute distance than a photon traveling with a longer wavelength, which follows a straighter path. Light waves are like two racing cars that complete a race at the exact same time, but one of the cars had to take a more winding path than the other. Light shifts into a longer wavelength and loses energy only when it interacts with mass; the more mass the light wave impacts, the more its energy is reduced, and the longer the resulting wavelength. This is how you observe the universe, how you see! When photons collide with mass, they shift into a longer wavelength and exhibit less energy, some of which is transferred to the atoms as heat. This shift in wavelength causes anything with sufficient mass to reflect light of different colors and shades.
Color is something that every artist understands, but the modern science of color emerged with a painting of one of the most famous blind individuals in American history, Helen Keller. Helen Keller was born with sight and hearing, but quickly lost both as a baby when she fell ill. Locked in darkness and silence for the rest of her life, she learned how to communicate through the use of her hands, using touch. Later she authored many books, and went on to promote equal rights for women. Her remarkable story captured a great amount of international interest among the public, and a portrait of her was commissioned from an artist named Albert H. Munsell in 1892. Munsell painted an oil painting of Helen Keller, which hangs in the American Foundation for the Blind, and the two became good friends. The impression likely had a lasting effect on Albert Munsell, as he began research on color shortly afterward, more as a curious scientist attempting to understand color than as an artist. Focusing on landscape art, Munsell likely understood a unique method artists employ to limit light when trying to capture a bright land or sea scape. This is done by holding a plate of red glass in front of the view that is to be painted. Because red has the longest wavelength of the visual spectrum, the red glass filters out the shorter wavelengths: darker regions appear darkened or absent, while brighter light comes through as red visible light. Hence the value of the light can be rendered much more easily in a painting or drawing.
Munsell began to classify color by its grayness, on a scale from 0 as pure black to 10 as pure white, with various shades of gray between these values. This measure of color is called value, and can be seen across all colors if a red filter is placed over them, or, in a modern way, by taking a black-and-white photograph of the colors: the observable difference between colors is lost, but the value of each color is retained. For example, if a deep rich yellow paint has the same value as a bright red paint, in a black-and-white photograph the two colors would look identical. Hue was the named color (red, yellow, green, blue, purple, and violet) and represents the wavelength of the visible spectrum. The last classification of color was something Munsell called Chroma. Chroma is how intense the color is; for example, a color with high chroma would be neon-like, very bright and attention-grabbing. Munsell attributed these high-chroma colors to light waves with a higher amplitude. Amplitude is a measure of how high or tall the light waves are, another parameter that light has, in addition to wavelength, energy, and orientation.
Albert Munsell was impressed by his new classification of color, and set about educating 4th to 9th graders in Boston on his new color theory as a new elementary school art curriculum. Munsell’s color classification had a profound effect on society and industry, as a new generation of students was taught about color from an early age. His classification of color resulted in a profound change in fashion, design, art, food, cooking, and advertising. But his color science also had a profound effect on Henrietta Leavitt at Harvard. Albert Munsell was invited by Edward Pickering to give a talk to the women astronomers under his supervision.
Although Leavitt did not hear Albert Munsell’s talk, having lost her hearing by this time, she undoubtedly saw his color classification, and may have realized the importance of the difference between hue (the wavelength of light) and chroma (the amplitude or brightness of light). It was shortly afterward that she published her famous 1912 paper, which found a relationship between the brightness (Apparent Magnitude) of stars and their periodicity. This paper sent shockwaves through the small astronomical community, as it offered a yard-stick to measure the universe.
Astronomers were eager to attempt to measure distances to the stars using this new tool. Early attempts yielded different distances, however. One of the first systematic attempts was made by Harlow Shapley, director of the Mount Wilson Observatory in Southern California. Using this yard-stick he estimated that the universe was about 300,000 light-years across, much larger than previous estimates, but still rather small compared to modern estimates. He viewed the stars in the night sky as all lying within the Milky Way Galaxy; not all astronomers agreed with him, some viewing the Milky Way Galaxy as one island in a sea of many other galaxies in the universe. Soon afterward, Harlow Shapley joined Henrietta Leavitt at Harvard, after the death of Edward Pickering. This left the Mount Wilson Observatory back in California in the hands of a handsome young astronomer named Edwin Hubble.
Using Light to Measure the Expansion of the Universe
Edwin Hubble was a star athlete in track and field in high school, and played basketball in college, leading the University of Chicago to its first conference title. After college he was awarded a Rhodes Scholarship to go to Oxford, England, to study law. Upon his return, Edwin Hubble found a job teaching high school Spanish, physics, and math, as well as coaching the high school basketball team, but after his father’s death, Edwin Hubble returned to school to pursue a degree in astronomy at the University of Chicago. In 1917, when the United States entered World War I, Hubble joined the army, serving in Europe.
Returning to the United States, Hubble got a job at the new Mount Wilson Observatory in California, where he later took over after the departure of Harlow Shapley. He continued to focus on Cepheid variable stars, hoping to better measure the universe using the tool that Henrietta Leavitt had invented. In 1923, Hubble focused his attention on a star in the Andromeda spiral nebula, which he named V1. Over weeks he observed the shifts in brightness of the star, measuring a periodicity of 31.4 days between peaks of maximum brightness. Using this measurement, he estimated that the distance to the Andromeda spiral nebula was over 1,000,000 light years, a galaxy beyond our own. He wrote to Shapley, who responded to a colleague, “Here is the letter that destroyed my universe.” It did not destroy a universe; rather, Edwin Hubble demonstrated a much, much larger universe than ever imagined, filled with other galaxies like the Milky Way. The diameter of the universe is today estimated at an astonishing 93,000,000,000, or 93 billion, light-years!
But Edwin Hubble’s greatest discovery was not just the vastness of the universe, but that it is expanding at an incredible rate. This discovery was made by examining the spectrum of light waves from starlight.
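The expansion was read from starlight through the Doppler shift of spectral lines: lines from receding sources appear at longer (redder) wavelengths than in the laboratory. As a preview of how that works, here is a minimal sketch using the standard non-relativistic approximation v ≈ c · Δλ/λ; the observed wavelength below is invented for illustration:

```python
# Reading motion from a spectrum: a spectral line from a receding source
# is shifted toward longer wavelengths (redshift). For speeds well below
# the speed of light, v ~ c * (shift / rest wavelength).

C = 299_792_458.0  # speed of light, meters per second

def recession_velocity(rest_nm, observed_nm):
    """Approximate recession velocity (m/s) from one shifted spectral line."""
    redshift = (observed_nm - rest_nm) / rest_nm
    return C * redshift

# Hydrogen's H-alpha line rests at 656.28 nm; suppose a galaxy shows it at
# 658.5 nm (a made-up observation for illustration):
v = recession_velocity(656.28, 658.5)
print(f"receding at about {v / 1000:.0f} km/s")
```

A positive result means the source is moving away; Hubble’s insight was that almost every galaxy shows such a redshift, and the farther the galaxy, the larger the shift.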
Black Body Radiators
In a dark forest somewhere on Earth is a fire burning in the center of a ring of stones, and a group of humans organized around the flames. Fire has come to define what it means to be human, with its emergence so early in human history, even prior to the origin of our species, about 1 million years ago, at a time when Homo erectus ventured out of Africa and beyond. If you have ever observed the flames of a fire, you will note the shifting colors: the yellows, the reds, and, deep in the hot embers, the blues and possibly violets. These shifting colored flames represent the cascade of electromagnetic radiation emitted by fire that heats the surrounding air and provides light on a dark night. The color of the flames can directly tell us their temperature, as the shorter the wavelength of light emitted, the hotter the flame will be. We can also tell how hot stars are by careful study of the color of the light spectrum they emit.
If a blacksmith places a black iron ball into a fire, they will observe changing colors as the iron ball is heated: the black iron ball will slowly start to glow a deep reddish color, then a brighter yellow; at even hotter temperatures the iron will glow greenish-blue, and at super-heated temperatures it will take on a light purplish color. Examining a spectrum of colors emitted from the “black body” radiating iron ball will demonstrate a trend toward shorter wavelengths of emitted light as the ball is heated in the fire. A black body is an idealized object that emits electromagnetic radiation when heated or cooled (it also absorbs this light as well).
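The link between a black body’s glow and its temperature is captured by Wien’s displacement law (standard physics, though not named in the text): the wavelength of peak emission is inversely proportional to temperature, which is why heated iron shifts from red toward blue. A short sketch:

```python
# Wien's displacement law: a black body's emission peaks at a wavelength
# inversely proportional to its temperature, so hotter objects glow
# at shorter (bluer) wavelengths.

WIEN_B = 2.897771955e-3  # Wien's displacement constant, meter-kelvins

def peak_wavelength_nm(temperature_kelvin):
    """Wavelength of peak black-body emission, in nanometers."""
    return WIEN_B / temperature_kelvin * 1e9

print(f"Sun (5778 K): peaks near {peak_wavelength_nm(5778):.0f} nm")       # in the visible band
print(f"Red-hot iron (1200 K): peaks near {peak_wavelength_nm(1200):.0f} nm")  # infrared
```

Run in reverse, this is the thermometer described in the next paragraph: measure where a star’s spectrum peaks, and its surface temperature follows.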
The spectrum of light given off by the heated iron ball, or “black body radiator,” can be used to calculate its temperature. The same method can be used to calculate the temperature of stars, including the previously mentioned surface temperature of the sun (5,778 Kelvin). There is no need to take a thermometer to the hot surface of the sun; we can measure its temperature using the sun’s own light. We can also measure the temperatures of stars millions of light years away using the same principle. The study of the spectra of electromagnetic radiation is called Spectroscopy. In Germany during the 1850s, a scientist named Gustav Kirchhoff was fascinated with the spectrum of electromagnetic radiation given off by heated objects, and coined the term “black-body” radiator in 1862. Kirchhoff was curious what would happen if he heated, or excited with electricity, gas particles rather than solid matter like an iron ball. Would the gas glow through the same spectrum of light as it was heated? Experiments showed that a gas gives off only a very narrow set of wavelengths. For example, a sealed glass jar with the gas neon would produce bright bands of red and orange light, argon could produce blue among other wavelengths of colored light, and gases of mercury a more bluish white. These gas-filled electric lights were developed commercially into neon-lighting and fluorescent lamps, with a wide variety of color spectra at very discrete wavelengths.
Kirchhoff conducted a series of experiments in which light from a heated black body passed through a chamber of purified gas, and noted that the wavelengths of light that were not allowed to pass through the gas were the same wavelengths emitted when that gas itself was heated. When these wavelengths of light are absorbed by a gas, they leave discrete dark lines in the observed spectrum. Depending on the gas particles the light traveled through, the resulting spectrum of absorbed light waves was unique to each type of gas. Astronomers such as Edwin Hubble observed similar absorption lines within a star’s spectrum of light.
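Matching absorption lines to known elements is essentially a lookup problem: compare the dark lines in a star’s spectrum against each element’s known wavelengths. A minimal sketch, using hydrogen’s visible (Balmer) lines and sodium’s D lines; the “observed” spectrum below is invented for illustration:

```python
# Fingerprinting a gas from its absorption lines: an element is
# identified when all of its known lines appear (within a tolerance)
# among the dark lines observed in a spectrum.

KNOWN_LINES_NM = {
    "hydrogen": [656.3, 486.1, 434.0, 410.2],  # visible (Balmer) lines
    "sodium":   [589.0, 589.6],                # sodium D lines
}

def identify(observed_lines_nm, tolerance_nm=0.5):
    """Return the elements whose known lines all appear among the observed lines."""
    matches = []
    for element, lines in KNOWN_LINES_NM.items():
        if all(any(abs(line - obs) <= tolerance_nm for obs in observed_lines_nm)
               for line in lines):
            matches.append(element)
    return matches

# A made-up stellar spectrum showing six dark lines:
observed = [656.2, 589.1, 589.5, 486.0, 434.1, 410.3]
print(identify(observed))
```

Real stellar spectra contain thousands of lines, but the principle is the same, and it is how the next paragraph’s claim about the sun’s composition is established.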
This proved to be a method to determine the composition of a star. For example, this is how we know that the sun is composed mostly of hydrogen and helium: the absorption lines for those gases appear in the spectrum of the sun’s light. Working in Kirchhoff’s lab was a young scientist named Max Planck, who wondered why objects heated to very high temperatures seemed not to decrease in wavelength indefinitely. After conducting experiment after experiment, Max Planck determined a value to convert electromagnetic radiation wavelength to a measure of energy. This special value became known as Planck’s constant h. Currently h = 6.62607015 × 10⁻³⁴ Joules per Hertz, such that

E = hc/λ
Where E is the energy of the electromagnetic radiation, h is Planck’s constant, c is the speed of light, and λ is the wavelength. Note that, as a consequence of this equation, as the wavelength increases, energy decreases. Planck’s constant is a very important number in physics and chemistry because it relates to the size of atoms and the distances at which electrons orbit the nucleus of atoms; as such, Planck’s constant is also important in quantum physics. The importance of this equation is that it allows a direct comparison between a light’s wavelength and its energy. Realize that this energy is a measurement of the vibrational forces within particles, in other words, a measurement of heat.
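The relationship between wavelength and energy can be checked numerically. The short Python sketch below (the function and variable names are our own illustration, not from the text) computes the energy carried by a single photon:

```python
# Energy of a single photon: E = h * c / wavelength
h = 6.62607015e-34  # Planck's constant, in Joules per Hertz
c = 299_792_458     # speed of light, in meters per second

def photon_energy(wavelength_m):
    """Return the energy (in Joules) of one photon of the given wavelength."""
    return h * c / wavelength_m

green = photon_energy(500e-9)      # green light, 500 nanometers
infrared = photon_energy(1000e-9)  # infrared light, 1000 nanometers

print(green)     # about 3.97e-19 Joules
print(infrared)  # about 1.99e-19 Joules: doubling the wavelength halves the energy
```

As the equation predicts, the longer-wavelength infrared photon carries exactly half the energy of the photon with half its wavelength.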
Fundamentally, it is important to remember that electromagnetic radiation (both visible and non-visible light) is an effective way to transport energy across space. The energy within electromagnetic radiation is released as heat when the radiation impacts particles with mass; when this happens, the radiation’s wavelength lengthens while some of its energy is transferred into the particles, which increase their vibrational motion (a measure of heat). This fundamental concept explains how the Earth receives nearly all its energy: through the bombardment of light from the Sun. The Earth also receives some energy through the release of electromagnetic radiation from the decay of radioactive atoms, which first formed during explosive supernova events and have been decaying ever since. Hence electromagnetic radiation is produced by nuclear fission and fusion, but that is not the only method of producing electromagnetic radiation.
Glowing rocks or fluorescence
In most natural history museums, there is a dark room hidden away with a display of various ordinary-looking rocks. These assembled rocks are subjected to a daily cycle of the room’s lights turning on and off, but what draws the public’s attention is the moment the room is plunged into darkness: the rocks glow. This glow is called fluorescence, and it is caused by the spontaneous production of electromagnetic radiation in the form of photons. When light waves, or any type of electromagnetic radiation, impact an atom, especially an atom that is fixed in place by its bonding in solid matter, the energy transferred from the incoming light, rather than resulting in an increase in vibrational energy (heat), is instead converted into the electron field, resulting in an increase in the electron’s energy state. Over time, and sometimes over extraordinarily long periods of time, the electron will spontaneously drop to a lower energy state, and when it does so it will release a photon. If enough atoms are affected by the incoming radiation, the dropping electron states will release enough photons to be seen in the visual spectrum, and the rock will appear to glow. Note that the incoming wavelengths of light have to carry energy levels above the visual spectrum; it is often UV light that is used, but even shorter wavelengths of electromagnetic radiation will work.
When you see a rock fluoresce, it is the release of these photons as electrons drop from the higher energy states they were moved into when subjected to short-wavelength electromagnetic radiation, such as UV light, X-rays, and even gamma rays. In fact, the reason radioactive material glows is the release of electron energy states in the surrounding material, which is subjected to the high-energy, short-wavelength electromagnetic radiation these radioactive materials produce.
There are a number of other ways to excite electrons into higher energy states that can cause the spontaneous release of photons. When an object continues to release photons spontaneously after being subjected to electromagnetic radiation, scientists call this phosphorescence. When the object releases photons upon being subjected to heat or an increase in temperature, this is called thermoluminescence; for example, the glow of the “black body” radiator or iron ball is an example of thermoluminescence, caused by electrons dropping energy states and releasing photons when subjected to increasing heat. The final type is triboluminescence, which is caused by motion, or kinetic energy. Triboluminescence is seen when two rocks, such as rocks containing quartz, are smacked together; the resulting flash of light is due to electron energy states jumping and dropping quickly, releasing photons. When electrons are freed from the attraction of protons in atoms, the motion of these free electrons is called electricity, and their motion produces photons, seen in the electric sparks that flash as electricity jumps between wires.
What is Electricity?
Electricity is the physical phenomenon associated with the motion of electrons. Typically, electrons are locked to atoms by their attraction to protons that reside in the nucleus, or center, of atoms. Electrons exhibit a negative charge (−) and are attracted to positively charged (+) matter, such as protons. Materials composed of metallic bonds are conductive to electron motion, because electrons can easily move between atoms linked by metallic bonds. Copper, iron, nickel, and gold all make good conductors for the motion of electrons. Electrons can also move through polarized molecules (molecules that have positively and negatively charged poles or sides). This is why electrons can pass through water with dissolved salts, living tissue, and various liquids with dissolved polarizing molecules, why it is dangerous to touch a charged electric current, and why you receive a shock when you do so.
The free flow of electrons stripped from atoms is called plasma. A good example of plasma is lightning in a thunderstorm, which is the free flow of electrons between negatively charged clouds and the positively charged ground. Electrons move across a wire in a current from the negatively toward the positively charged end of the wire. When electrons move across a wire, they generate an electromagnetic field, such that a compass laid within this invisible field will reorient its needle to the magnetic field. This electromagnetic field was first investigated by Michael Faraday, and has led to an amazing assortment of inventions used in our daily lives, such as the electric motors used in electric vehicles. When electrons are in motion they can drop into lower energy states and release electromagnetic radiation, or light. This is what powers light bulbs, computers, and many of the electric devices that we use in our daily lives.
How do you make electricity?
How is the flow of electrons generated? How do you make electricity? There are four fundamental methods of electric generation.
1. Electromagnetic radiation, or light, such as sunlight. When photons strike electrons, they increase the electrons’ energy states. This was famously first demonstrated by Heinrich Hertz: when electrical sparks are exposed to a beam of UV light, the wavelength of the light in the spark shifts from longer to shorter wavelengths. This interaction between electromagnetic radiation and electrons is called the photoelectric effect. This is how solar power works, as in solar panels that generate electricity, but it is also how living plants capture energy through photosynthesis.
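The key feature of the photoelectric effect is its threshold: a photon frees an electron only if its energy exceeds the material’s “work function.” The toy Python sketch below is our own illustration; the work function value (roughly 2.28 eV, typical of sodium) and the function names are assumptions, not from the text:

```python
# Photoelectric effect sketch: an electron is ejected only if the photon
# energy h*c/wavelength exceeds the surface's work function.
h = 6.62607015e-34    # Planck's constant (J*s)
c = 299_792_458       # speed of light (m/s)
eV = 1.602176634e-19  # Joules per electron-volt

WORK_FUNCTION = 2.28 * eV  # assumed value, roughly that of a sodium surface

def ejects_electron(wavelength_m):
    """True if a photon of this wavelength can free an electron from the surface."""
    return h * c / wavelength_m > WORK_FUNCTION

print(ejects_electron(300e-9))  # UV light (300 nm, ~4.1 eV): True
print(ejects_electron(700e-9))  # red light (700 nm, ~1.8 eV): False
```

This is why UV light drives the effect while even a very bright red lamp does not: the threshold depends on each photon’s energy, not on the total amount of light.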
2. Kinetic Energy. The motion of materials can strip electrons from materials, generating an electric charge. Demonstrations of this can be seen in the build-up of static electricity when two materials are in contact with each other, with one material acting as an insulator (meaning it prevents the flow of electrons between atoms) and the other as a conductor (meaning it allows the free flow of electrons between atoms). Electrons build up on the surface of the conducting material and are discharged as a spark, or electrostatic discharge. Industrial power plants most often utilize this type of electric generation, using motion. Large magnets rotate within closed loops of conducting material (such as copper wire), drawing electrons into the copper wire, from which they flow out on electric lines to homes and businesses. The large rotating turbines are often powered by hot steam (coal, natural gas, nuclear, or geothermal power plants), the flow of water (hydroelectric dams), or wind (wind turbines), which keep the conducting material rotating and generating electrons.
3. Thermal Energy. Electricity can be generated by a thermogradient, where a heated surface is placed in close association with a cold surface, and two materials with differing electric conducting properties are placed between them, allowing the build-up of electrons on one side, generating a current with the opposite side. Thermoelectrically generated electricity is used in wearable devices, which utilize the thermal gradient of a person’s body heat. It is also used to generate electricity from “waste heat,” that is, heat generated by the combustion of fuels, such as in a combustion engine or power plant, as a secondary method to boost electrical generation. Such conversion of thermal energy to electrical energy can allow you to charge your cell phone simply by using the heat in a cup of coffee or tea, as demonstrated recently by the work of Ann Makosinski showcased on late-night television.
4. Chemical Energy. An electrical charge can be built up and stored in a battery. The term battery was first coined by Benjamin Franklin, who took a series of Leyden jars and lined them in a row connected by metal wires to increase the electric shock he received when he touched the top of a Leyden jar. A row of these jars lined up resembled a row of cannons, a reference to the military term for a “battery” of cannons. Leyden jars do not generate electricity on their own, but they allow an easy way to store electrons and an electric charge.
As the simplest type of battery, a Leyden jar is a jar wrapped in a conducting metal, filled with a conducting liquid (typically water with dissolved salt), with a nail or metal wire dropped through the lid, making sure that the outer metal does not come in contact with the metal wire or nail in the lid. Electrons can be added to the jar’s lid by passing over it a charged rod (after rubbing the rod with a cloth to build up a static charge), and the electrons will flow into the nail (called the anode, or − end) and into the water (referred to as an electrolyte). Since these electrons cannot pass through the glass jar to the outer metal surface (called the cathode, or + end), they will collect within the jar until a circuit is made between the lid (anode, or − end) and the outside of the jar (cathode, or + end). If this circuit is made by a person, they will feel a shock. If a wire is attached with a light bulb, the light bulb will light up.
Modern batteries generate electricity by having two different types of liquid electrolytes separated by a membrane that allows the passage of electrons, but not the molecules in the liquid. Hence, over time electrons will accumulate in one side (which becomes negatively charged) while being depleted from the other side (which becomes positively charged) of the two chambers of electrolytes. Some batteries, once the electrons have returned to the other side, will be expended, while others allow a reverse charge to be applied to the battery (a flow of electrons in the opposite direction), which resets the difference in the number of electrons between the two chambers of electrolytes and hence recharges the battery. However, over time the molecules will lose their chemical ability to donate and receive electrons, and even rechargeable batteries have a limited life span. New technologies are increasing the length of battery life, particularly with molecules that contain the highly reactive element lithium.
Most often chemical energy generates heat through an exothermic chemical reaction (such as the combustion of gasoline), and heat is then used to generate electricity in one of the ways mentioned previously.
When electrons move along a conducting material in a single direction of flow, this is referred to as direct current (DC), which is common in batteries. Often, however, electrons are passed through an alternator, which produces a flow of electrons alternating back and forth along the wire in waves, called alternating current (AC). Most electric appliances in your home run on alternating current, because it is more efficient for transporting a continuous flow of electrons long distances over metal wires, while most batteries provide electrons through direct current.
Sunlight as an Energy Source for Earth
Sunlight is the ultimate original source of most electrical generation on planet Earth. Electric energy can be stored for long periods of time as chemical energy, such as in batteries, but also in ancient fossilized lifeforms which used photosynthesis to produce hydrocarbons that are broken down over long geological periods into natural gas, gasoline, or coal. These “fossil fuels” can combust in exothermic reactions to generate electricity through heat and motion.
Theoretical Nature of the Universe's Energy
Scientists have debated the theoretical nature of the universe in regard to the long-term trend of available energy. Lord Kelvin and the classical laws of thermodynamics view energy as slowly being depleted from the universe due to entropy, such that the universe will eventually face a “heat death” when all the energy has been depleted. Other scientists who discovered the link between matter and energy, such as Albert Einstein, suggested a balance of flow between matter and energy, extending the life of the universe. More recently, scientists have hypothesized increasing energy far into the future toward a “Big Crunch” or “Big Bounce,” where all matter could come back together in the universe, and maybe cycle back to another Big Bang. Such cosmological hypotheses, while of interest, do not yet have much support from the scientific evidence gathered so far. However, there is evidence for the ongoing rapid expansion of the universe, suggesting that the expanding universe is slowly losing energy over time, as if the universe were one long extended massive explosion ignited with a Big Bang.
When Edwin Hubble studied the visual spectra of starlight at the observatory on Mount Wilson, California, he could calculate the temperature and composition of these faraway stars. With the ability to determine distances to these stars by comparing brightness and periodicity, he noticed a strange relationship: the farther away a star or galaxy was from Earth, the more its visual spectrum was shifted toward the red side, such that absorption lines were moved slightly toward longer-wavelength light. In measuring this shift in the spectrum of starlight, Hubble graphed the length of this shift versus the distance to the star or galaxy observed, and found that the greater the distance, the greater the shift.
This phenomenon became known as the red shift. Hubble used this graph to calculate what has since been named the Hubble Constant, which is a measure of the expansion of the universe. Hubble first published his estimate of this expansion using the notation of kilometers per second per Mpc (megaparsec). A megaparsec is a million parsecs, equivalent to 3.26 million light years, or 31×10¹⁸ km; it is an extremely long distance. Astronomers argued about his first estimates, and over the next hundred years there has been continued debate over the exact value of Hubble’s Constant.
An Earth-orbiting telescope launched in 1990 that bears Edwin Hubble’s name, the Hubble Space Telescope, attempted to address this question. Above the atmosphere of Earth, the Hubble telescope was able to measure the red shift of distant stars as well as their periodicity in brightness, allowing a refined measure of this constant, which was found to be 73.8 +/- 2.4 km/s/Mpc. For every megaparsec (about 3.26 million light years) of distance, the expansion of the universe adds 73.8 km/s of recession speed: a star 100 Mpc away from Earth would be receding at 7,380 km/s from Earth.
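Because Hubble’s relation is linear, the recession speed is simply the constant multiplied by the distance. A minimal Python check of the figure quoted above (variable names are our own):

```python
H0 = 73.8  # Hubble constant from the Hubble Space Telescope, in km/s per Mpc

def recession_speed(distance_mpc):
    """Recession speed (km/s) of an object at the given distance in megaparsecs."""
    return H0 * distance_mpc

print(recession_speed(100))  # about 7,380 km/s, matching the example in the text
```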
Another telescope, bearing Max Planck’s name, the Planck spacecraft launched by ESA in 2009, has looked at the invisible microwave electromagnetic radiation coming from the universe, which also exhibits a red shift, and found a slightly slower expansion of the universe of 67.80 +/- 0.77 km/s/Mpc. This measurement describes the growing distance between stars.
One way to imagine the universe is as rising bread dough, with the stars as chocolate chips spread throughout the dough. As the dough rises, or expands, the distance between each of the chocolate chips within the dough increases. This expansion can be faster than the speed of light because nothing is traveling that distance; the distance itself is expanding between points.
Using the 73.8 km/s/Mpc found with the Hubble telescope, and the discovery of the farthest object observed from Earth (the galaxy GN-z11 in the constellation of Ursa Major), measured at 112,738 Mpc away from Earth (with a redshift of 11.1), the distance between Earth and this distant galaxy GN-z11 is expanding about 28 times faster than the speed of light! In other words, the last time Earth and GN-z11 shared the same space was 12.940 billion years ago, with a distance expanding ever faster away from each other. If we play this universal expansion backward, we find that the universe is estimated to be about 13.5 billion years old, and has been expanding outward faster than the speed of light in every direction from Earth. Note that this rate of expansion, expressed within the distance of 1 meter, is an expansion of only the width of a single atom every 31.7 years. Since the formation of the Earth 4.5 billion years ago, the expansion of the universe has added only about 1 centimeter per meter. Over the vast distances of space, however, this universal expansion is relatively large.
Stephen Hawking wrote in one of his lectures before his death in 2018, “The expansion of the universe was one of the most important intellectual discoveries of the 20th century, or of any century.” Indeed, from the perspective of someone living on Earth, it is as if all the stars in the night sky are racing away from you, like a cosmic children’s game of tag, and you are it. This expanding universe is conclusive evidence of the complete isolation of the solar system in the universe, as well as the extremely precious and precarious nature of planet Earth.
2d. Daisy World and the Solar Energy Cycle.
Incoming Solar Radiation from the Sun
Since 1978, NASA has employed a series of satellites that measure the amount of incoming solar radiation from the sun, measured as irradiance, which is the amount of radiant flux received by a surface. The newest instrument NASA has deployed is the Total and Spectral Solar Irradiance Sensor-1 (TSIS-1), which was installed on the International Space Station in 2017.
Since then, it has measured Earth’s solar irradiance at a nearly precise constant of 1,360.7 watts per square meter, which is known as the solar constant. This is equivalent to about 23 60-watt light bulbs arranged on a 1-meter-square tile on the ceiling, or about 1.36 kW per square meter of ceiling space.
To light a 50-square-meter room with the power of the sun’s irradiation for a single 12-hour day would take 816 kWh, and cost about $110 a day on average, depending on the local cost of electricity. Spread out over the surface of the Earth, it would cost about $1,098 trillion a day. That is a huge amount of energy striking the Earth, but not all of this energy makes it through the atmosphere: much of the energy (up to 90%) gets absorbed or reflected back into space as the light interacts with gas particles in the atmosphere.
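The arithmetic behind these figures can be reproduced in a few lines of Python. The electricity price used below ($0.135 per kWh) is our own assumption, chosen only to match the rough $110-a-day figure in the text:

```python
SOLAR_CONSTANT_KW = 1.3607  # solar irradiance, in kW per square meter

def daily_energy_kwh(area_m2, hours=12):
    """Energy (kWh) delivered at the solar constant over the given area and hours."""
    return SOLAR_CONSTANT_KW * area_m2 * hours

PRICE_PER_KWH = 0.135  # assumed average electricity price, in dollars per kWh

energy = daily_energy_kwh(50)  # a 50-square-meter room over a 12-hour day
cost = energy * PRICE_PER_KWH

print(round(energy))  # about 816 kWh
print(round(cost))    # about $110
```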
When Earth is viewed from Saturn, it appears like a bright star. This light is sunlight reflecting off Earth, like a small shiny mirror left high on a giant mountain. This is why the other planets in the Solar System appear to shine brightly in the night sky: they are reflecting sunlight back to Earth, not generating their own light. This reflection of light is called albedo. A pure mirrored surface reflecting all light will have an albedo close to 1, while a pure black surface (a black body radiator) will have an albedo of 0, indicating that all the light energy is absorbed by its surface. This is why you get hot in a black shirt compared to a white shirt on sunny days, since the black shirt absorbs more of the sun’s light.
All other surfaces fall somewhere along this range. Clouds typically have an albedo between 0.40 and 0.80, indicating that between 40 and 80% of the sun’s light is reflected back into outer space. Open ocean water, however, has an albedo of only 0.06, with only 6% of the light reflected back into outer space. If the water freezes, though, ice has an albedo closer to that of white clouds, between 0.50 and 0.70.
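Since albedo is simply the reflected fraction, these numbers translate directly into reflected and absorbed power. A small Python sketch of our own making:

```python
SOLAR_CONSTANT = 1360.7  # incoming irradiance, in watts per square meter

def reflected_and_absorbed(albedo):
    """Split incoming irradiance into (reflected, absorbed) W/m^2 for a surface."""
    reflected = albedo * SOLAR_CONSTANT
    return reflected, SOLAR_CONSTANT - reflected

print(reflected_and_absorbed(0.06))  # open ocean: reflects ~82 W/m2, absorbs ~1279 W/m2
print(reflected_and_absorbed(0.70))  # sea ice: reflects ~952 W/m2, absorbs ~408 W/m2
```

The contrast between the ocean and ice rows is the seed of the ice-albedo feedback discussed below: freezing the surface cuts the absorbed energy by more than two thirds.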
The Young Faint Sun Paradox
In 1972, Carl Sagan and George Mullen published a paper in Science assessing the surface temperatures of Mars and Earth through time. They discussed a quandary regarding the early history of Earth’s surface temperatures: if the sun’s radiation was less than today’s solar radiation (say only 70%), would this not have caused Earth to be a frozen planet for much of its early history? Geological evidence supports a liquid ocean early in Earth’s history, yet if the solar irradiation was much fainter than today, sea ice, with its higher albedo, would have become more common and spread across more of the surface area of the planet. The fainter solar irradiation would have been reflected back into space, resulting in Earth being locked up in ice, completely frozen.
The faint sun paradox can be solved, however, if Earth had a different atmosphere than today, one that allowed in more incoming solar irradiation at shorter wavelengths and blocked more outgoing radiation at longer wavelengths.
An analogy would be a person working at a low-end job making $100 a week but only spending $25, while another person working at a high-end job makes $500 a week but spends $450. The low-end worker nets $75 in savings a week, while the high-end worker nets only $50 a week. Indeed, geological evidence indicates that the early atmosphere of Earth lacked oxygen, which today blocks incoming solar irradiation via the ozone layer, and contained abundant carbon dioxide, which blocks long-wavelength irradiation within the infrared spectrum from leaving the Earth. Thus, more light was coming in and less was leaving, resulting in a net warmer world than expected from the total solar irradiation alone, which was fainter.
In 1983, after receiving heavy criticism for his concept of a Gaia Hypothesis, James Lovelock teamed up with Andrew Watson, an atmospheric scientist and global modeler, to build a simple computer model to simulate how a simplified planet could regulate surface temperature through a dynamic negative feedback system that adjusts to changes in solar irradiation. This model became known as the Daisy World model. The modeled planet contains only two types of life: black daisies with an albedo of 0 and white daisies with an albedo of 1, on a gray ground surface with an albedo of 0.5. Black daisies absorb all the incoming light, while white daisies reflect all the incoming light back into space. There is no atmosphere in the Daisy World, so we do not have to worry about absorption and reflection of light above the surface of the simple planet.
As solar irradiation increases, black daisies become more abundant, as they are able to absorb more of the sun’s energy, and quickly they become the prevalent life form of the planet. Since the planet is warming due to its surface having a lower albedo, it quickly becomes a hotter planet, which causes the white daisies to grow in abundance; as they do so, the world starts to reflect more of the sunlight back into space, cooling the planet. Over time, the surface temperatures of the planet reach an equilibrium and stabilize, so that they do not vary much despite continuing increases in the amount of solar irradiation: as the sun’s irradiation increases, it is matched by an increased abundance of white daisies over black ones. Eventually, solar irradiation increases to a point where white daisies are unable to survive on the hot portions of the planet, and they begin to die, revealing more of the gray surface of the planet, which absorbs half the light’s energy. As a result, the planet quickly starts to absorb more light and heats up, killing off all the daisies and leaving a barren gray planet. The Daisy World illustrates how a planet can reach a dynamic equilibrium in regard to surface temperatures, and how there are limits, or tipping points, in these negative feedback systems. Such a simple model is extremely powerful in documenting how a self-regulating system works and the limitations of such regulating systems. Since this model was introduced in 1983, scientists have greatly expanded the complexity of Daisy World models by adding atmospheres, oceans, and differing life forms, but ultimately they all reveal a similar pattern of stabilization followed by a sudden collapse.
The Daisy World invokes some mental gymnastics, as it ascribes life forms to a planet, but we can model an equally simple lifeless planet, one more similar to an early Earth: a water world with a weak atmosphere. Just like in the 1995 sci-fi action movie starring Kevin Costner, the Water World is just open ocean and contains no land. The surface of the ocean water has a low albedo of 0.06, so it absorbs most of the incoming solar irradiation. As the sun’s solar irradiation increases and the surface temperatures of the Water World heat up, the water reaches high enough temperatures that it begins to evaporate into a gas, resulting in an atmosphere of water vapor, and with increasing temperatures the atmosphere begins to form white clouds. These white clouds have a high albedo of 0.80, meaning more of the solar irradiation is reflected back into space before it can reach the ocean’s surface, and the planet begins to cool. Hence, just like the Daisy World, the Water World can become a self-regulating system with an extended period of equilibrium. However, there is a very narrow tolerance here, because if the Water World cools down too much, sea ice will form. Ice on the surface of the ocean, with a high albedo of 0.70, is a positive feedback: if ice begins to cover the oceans, it will cause the Water World to cool down further, which causes more ice to form on the surface. In a Water World model, the collapse is toward a planet locked in ice, a Frozen World.
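The ice-albedo feedback can be illustrated with a zero-dimensional energy-balance sketch. This is not the Watson and Lovelock code itself, just a minimal toy model of our own: each world’s equilibrium temperature follows the standard black-body balance T = (S(1 − α) / 4σ)^¼, and the only difference between the two worlds is the surface albedo:

```python
# Toy energy-balance model of a Water World versus a Frozen World.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)
S = 1361.0              # solar irradiance reaching the planet, W/m^2
ALBEDO_OCEAN = 0.06     # open water absorbs most incoming light
ALBEDO_ICE = 0.70       # sea ice reflects most incoming light

def equilibrium_temp(albedo):
    """Black-body equilibrium surface temperature (K) for a planet with this albedo."""
    return (S * (1 - albedo) / (4 * SIGMA)) ** 0.25

print(equilibrium_temp(ALBEDO_OCEAN))  # about 274 K: just above freezing
print(equilibrium_temp(ALBEDO_ICE))    # about 206 K: far below freezing
```

The same sunlight supports two very different stable states: the ocean world stays (barely) liquid, while the frozen world stays frozen, because its high albedo throws away the very energy it would need to melt. That is the positive feedback, and the narrow tolerance, in a single calculation.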
There is evidence that early in Earth’s own history, the entire planet turned into a giant snowball. Even with ever-increasing solar irradiation, a Frozen World will remain frozen until the solar irradiation is high enough to begin to melt the ice and overcome the enhanced albedo of its frozen surface.
The world at this point will quickly and suddenly return to a Water World again, although if solar irradiation continues to increase, the oceans will eventually evaporate, despite increasing cloud cover and higher albedo, leaving behind dry land with an extremely thick, heavy atmosphere of water clouds. Note that a heavy atmosphere of water clouds will trap more of the outgoing long-wave infrared radiation, resulting in a positive feedback. The Water World will eventually become a hot Cloud World.
Examples of both very cold Frozen Worlds and very hot Cloud Worlds exist in the Solar System. Europa, one of the four Galilean moons of Jupiter, is an example of a Frozen World, with a permanent albedo of 0.67. The surface of Europa is locked under thick ice sheets. The moon orbits the giant planet Jupiter, which pulls and tugs on its ice-covered surface, producing gigantic cracks and fissures in the moon’s icy surface, which has an estimated average surface temperature of −171.15° Celsius, or 102 on the Kelvin scale.
Venus, the second planet from the Sun, is an example of a Cloud World, with a thick atmosphere that traps the sun’s irradiation. In fact, the surface of Venus is the hottest place in the Solar System besides the Sun, with a surface temperature of 462° Celsius, or 737 on the Kelvin scale, nearly hot enough to melt rock, and this despite an albedo slightly higher than Europa’s, around 0.69 to 0.76.
The Solar System contains both end states of Water Worlds, and Earth appears to be balanced in an ideal energy cycle, but as these simple computer models predict, Earth is not immune from these changes and could tip into either a cold Frozen World like Europa or an extremely hot Cloud World like Venus. Ultimately, as the sun increases its solar radiation with its eventual expansion, the more likely scenario for Earth is a Cloud World; you just have to look at Venus to imagine the long-term, very hot future of planet Earth.
2e. Other Sources of Energy: Gravity, Tides, and the Geothermal Gradient.
The sun may appear to be Earth’s only source of energy, but there are other, much deeper sources of energy hidden inside Earth. In the pursuit of natural resources such as coal, iron, gold, and silver during the height of the industrial revolution, mining engineers and geologists took notice of a unique phenomenon as they dug deeper and deeper into the interior of the Earth: the deeper you travel down into an underground mine, the warmer the temperature becomes. Caves and shallow mines near the surface take on a yearly average temperature, making hot summer days feel cool in a cave and cold winter days feel warm, but as one descends deeper and deeper underground, ambient temperatures begin to increase. Of course, the amount of increase in temperature varies depending on your proximity to an active volcano or upwelling magma, but in most regions on land, a descent of 1,000 meters underground will increase temperatures by between 25 and 30° Celsius. One of the deepest mines in the world is the TauTona Mine in South Africa, which descends to depths of 3,900 meters, with ambient temperatures rising to between 55 °C (131 °F) and 60 °C (140 °F), rivaling or topping the hottest temperatures ever recorded on Earth’s surface. Scientists pondered where this energy, this heat within the Earth, comes from.
Scientists of the 1850s viewed the Earth like a giant iron ball heated to glowing hot temperatures in the blacksmith-like furnace of the sun, slowly cooling down ever since its formation. Such a view of a hot Earth owed its origins to the rise of the industrial iron furnaces that dotted the cityscapes of the 1850s: it suggested that Earth, like poured molten iron, was once molten and over its long history has cooled, and that the heat experienced deep underground in mines was the cooling remnant of Earth’s original heat, from a time in its ancient past when it was forged from the sun. Scientists term this original interior heat within Earth, left over from its formation, accretionary heat.
Lord Kelvin and the First Scientific Estimate for the Age of Earth
As a teenager, William Thomson pondered the possibility of using this geothermal gradient of heat in Earth's interior as a method to determine the age of the Earth. He imagined that the Earth had cooled into its current solid rock from an original liquid molten state, and that the temperatures on the surface of the Earth had not changed significantly over the course of its history. Under these assumptions, the temperature gradient is directly related to how long the Earth has been cooling. In 1862, before being ennobled as Lord Kelvin, William Thomson acquired an accurate set of measurements of the Earth's geothermal gradient from reports of miners and returned to the question of the age of the Earth.
Lord Kelvin made three initial assumptions: first, that Earth was once a molten hot liquid with a uniform hot temperature; second, that this initial temperature was about 3,900 °C, hot enough to melt all types of rock; and third, that the temperature on Earth's surface had remained near 0 °C throughout its history. Like a hot potato thrown into an icy freezer, the Earth would retain its heat at its core, while its outer edges would cool with time. He devised a simple formula:
t = T² / (π k G²)

Where t is the age of the Earth, T is the initial temperature, 3,900 °C, G is the geothermal gradient, which he estimated at about 36 °C/km from those measurements in mines, and k is the thermal diffusivity, the rate at which a material conducts away its heat, measured in meters squared per second. While Lord Kelvin had established estimates for T and G, and used the constant π, he still had to determine k, the thermal diffusivity. In his lab, he experimented with various materials, heating them up and measuring how quickly heat was conducted through each material, and found a good value to use for the Earth of 0.0000012 (1.2 x 10^-6) meters squared per second. During these experiments of heating various materials and measuring how quickly they cooled down, Lord Kelvin was aided by his assistant, a young student named John Perry. It must have been exciting when Lord Kelvin calculated an age of the Earth of around 93 million years, although he gave a broad range in his 1863 paper of between 22 and 400 million years. Lord Kelvin's estimate gave hope to Charles Darwin's budding theory of evolution, which required a long history for various lifeforms to evolve, but ran counter to the notion that Earth had always existed.
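Kelvin's estimate can be reproduced with a few lines of arithmetic. The sketch below (a minimal illustration, not Kelvin's original computation) plugs the chapter's values for T, G, and k into the formula t = T²/(πkG²); depending on the exact constants chosen, it lands near 100 million years, the same order of magnitude as Kelvin's 93-million-year figure.

```python
import math

# Kelvin's 1862 conduction model: age t = T^2 / (pi * k * G^2)
T = 3900.0          # initial temperature of molten Earth, deg C
G = 36.0 / 1000.0   # geothermal gradient: 36 deg C per km, in deg C per meter
k = 1.2e-6          # thermal diffusivity, m^2 per second

age_seconds = T**2 / (math.pi * k * G**2)
age_years = age_seconds / (365.25 * 24 * 3600)
print(f"Kelvin's estimated age of Earth: {age_years / 1e6:.0f} million years")
```

The result is sensitive to the geothermal gradient G: because G is squared in the denominator, halving the gradient quadruples the estimated age, which is part of why Kelvin's published range was so broad.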
John Perry, who idolized his professor, graduated and moved on to a prestigious teaching position in Tokyo, Japan. It was there in 1894 that he was struck by a foolish assumption they had made in trying to estimate the age of the Earth, and it may have occurred to him after eating some hot soup on the streets of Tokyo. In a boiling pot of soup, heat is not dispersed through conduction, the transfer of heat energy by simple direct contact, but through convection, the transfer of energy with the motion of matter. In the case of the Earth, the interior of the planet may have acted like a pot of boiling soup, the liquid bubbling and churning, bringing up not only heat to the surface but also matter. John Perry realized that if the heat transfer of the interior of the Earth was like boiling soup rather than a solid iron ball, the geothermal gradient near the surface would be sustained far longer, due to the upwelling of fresh liquid magma from below. In a pot of boiling soup, the upper levels retain higher temperatures because the liquid is mixing and moving as it is heated on the stove.
In 1894, John Perry published a paper in Nature indicating the error in Lord Kelvin's previous estimate for the age of the Earth. Today, we know from radiometric dating that the Earth is 4.6 billion years old, 50 times older than Lord Kelvin's estimate. John Perry explained the discrepancy, but it was another idea that captured Lord Kelvin's attention: the existence of an interior source of energy within the Earth, thermonuclear energy, that could also claim to keep the Earth's interior hot.
Earth’s Interior Thermonuclear Energy
Unlike the sun, Earth lacks enough mass and gravity to trigger nuclear fusion at its core. However, throughout its interior, the Earth contains a significant number of large atoms (larger than iron) that formed during the giant supernova explosion that preceded the formation of the solar system. Some of these large atoms, such as thorium-232 and uranium-238, are radioactive. These elements have been slowly decaying ever since their creation, around the time of the initial formation of the sun, solar system, and Earth. The decay of these large atoms into smaller atoms is called nuclear fission. During the decay, the larger atoms are broken into smaller atoms, some of which can decay further into even smaller atoms, like the gas radon, which decays into lead. The decay of larger atoms into smaller atoms produces radioactivity, a term coined by Marie Skłodowska-Curie. In 1898, she was able to detect electromagnetic radiation emitted from both thorium and uranium, and later she and her husband demonstrated that radioactive substances produce heat. This discovery was confirmed by another scientist, Fanny Gates, who demonstrated the effects of heat on radioactive materials, while the equally brilliant Harriet Brooks discovered that the radioactive solid substances produced by the decay of thorium and uranium further decay to a radioactive gas, called radon.
These scientists worked and corresponded closely with a New Zealander named Ernest Rutherford, who in 1905 published a definitive book on “Radio-activity.” This collection of knowledge began to tear down the assumptions made by Lord Kelvin. It also introduced a major quandary in Earth sciences: how much of Earth's interior heat is a product of accretionary heat, and how much is a product of thermonuclear heat from the decay of thorium and uranium?
A century of technology has resulted in breakthroughs in measuring nuclear decay within the interior of the Earth. Nuclear fusion in the sun causes beta plus (β+) decay, in which a proton is converted to a neutron, and generates a positron and neutrino, as well as electromagnetic radiation. In nuclear fission, in which atoms break apart, beta minus (β−) decay occurs. Beta minus (β−) decay causes a neutron to convert to a proton, and generates an electron and antineutrino as well as electromagnetic radiation. If a positron comes in contact with an electron the two sub-atomic particles annihilate each other. If a neutrino comes in contact with an antineutrino the two sub-atomic particles annihilate each other. Most positrons are annihilated in the upper regions of the sun, which are enriched in electrons, while neutrinos are free to blast across space, zipping unseen through the Earth, and are only annihilated if they come in contact with antineutrinos produced by radioactive beta minus (β−) decay from nuclear fission on Earth.
At any time of day, trillions of neutrinos are zipping through your body, along with a few antineutrinos produced by background radiation. Neither of these subatomic particles causes any health concerns, as they can't break atomic bonds. However, if they strike a proton, they can emit a tiny amount of energy in the form of a nearly instantaneous flash of electromagnetic radiation.
The Kamioka Liquid-scintillator Anti-Neutrino Detector in Japan is a complex experiment designed to detect antineutrinos emitted during radioactive beta minus (β−) decay, caused both by nuclear reactors in energy-generating power plants and by natural background radiation from thorium-232 and uranium-238 inside the Earth.
The detector is buried deep in an old mine and consists of a steel sphere containing a balloon filled with liquid scintillator, buffered by a layer of mineral oil. Light within the steel sphere is detected by thousands of highly sensitive phototubes mounted on its inside surface. Inside the pitch-black sphere, any tiny flash of electromagnetic radiation can be detected by the phototubes lining the surface of the sphere. These phototubes record tiny electrical pulses, which result from collisions of antineutrinos with protons. Depending on the source of the antineutrinos, they will produce differing amounts of energy in the electrical pulses. Antineutrinos produced by nearby nuclear reactors can be detected, as well as natural antineutrinos caused by the fission of thorium-232 and uranium-238. A census of background electrical pulses indicates that Earth's interior thermonuclear energy accounts for about 25% of the total interior energy of the Earth (2011 Nature Geoscience 4:647–651, but see 2013 calculations at https://arxiv.org/abs/1303.4667); the other 75% is the accretionary heat left over from the initial formation of the Earth. Thorium-232 is more abundant near the core of the Earth, while uranium-238 is found closer to the surface. Both elements contribute to enhancing the geothermal gradient observed in Earth's interior, extending Earth's interior energy beyond that predicted by a model of a cooling Earth with only heat left over from its formation. A few other radioactive elements contribute to Earth's interior heat, such as potassium-40, but the majority of Earth's interior energy is a result of residual heat from its formation.
Comparing the total amount of Earth's interior energy sources with the amount Earth receives via the sun reveals a difference of several orders of magnitude. The entire interior energy from Earth accounts for only about 0.03% of Earth's total energy; the other 99.97% comes from the sun's energy, as measured above the atmosphere. It is important to note that current human populations are estimated to use about 30 terawatts, or about 0.02% of Earth's total energy. Hence, the interior energy of Earth and the resulting geothermal gradient could support much of the energy demands of large populations of humans, despite the fact that it accounts for a small amount of Earth's total energy budget.
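The percentages quoted above can be checked with simple arithmetic. The sketch below assumes round literature figures of roughly 47 terawatts of interior heat flow and roughly 173,000 terawatts of intercepted solar energy; these specific numbers are assumptions chosen for illustration, not values given in this chapter.

```python
# Rough energy-budget arithmetic (interior and solar figures are assumed
# literature values; the chapter quotes only the resulting fractions).
interior_tw = 47.0       # Earth's interior heat flow, terawatts (assumed)
solar_tw = 173_000.0     # solar energy intercepted by Earth, terawatts (assumed)
human_tw = 30.0          # human energy use quoted in the text, terawatts

total_tw = interior_tw + solar_tw
interior_pct = 100 * interior_tw / total_tw
human_pct = 100 * human_tw / total_tw
print(f"Interior heat: {interior_pct:.3f}% of total")  # roughly 0.03%
print(f"Human use:     {human_pct:.3f}% of total")     # roughly 0.02%
```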
Gravity, Tides and Energy from Earth’s Inertia
While the vast amount of Earth’s energy comes from the Sun, and a small amount comes from the interior of the Earth, a complete census of Earth’s energy should also discuss a tiny component of Earth’s energy that is derived from its motion and the oscillations of its gravitational pull with both the Moon and the Sun.
Ocean and Earth tides are caused by the joint gravitational pull of the Moon and Sun. Tides cycle daily between high and low, and their strength also varies over a longer two-week period. Twice a lunar month, around the new moon and full moon, when a straight line can be drawn through the centers of the Sun, Moon, and Earth, a configuration known as a syzygy, the tidal force of the Moon is reinforced by the gravitational force of the Sun, resulting in higher than usual tides called spring tides. When a line drawn from the Sun to the Earth and a line drawn from the Moon to the Earth form a 90° angle, or are perpendicular, the gravitational force of the Sun partially cancels the gravitational force of the Moon, resulting in a weakened tide, called a neap tide. These occur when the Moon is at first quarter or third quarter in the night sky.
Daily tides are a result of Earth's rotation relative to the position of the Moon. Tides can affect both the solid interior of the Earth (Earth tides) and the liquid ocean waters (ocean tides); the latter are more noticeable, as ocean waters rise and fall along coastlines. Long records of sea level are averaged to indicate the average sea level along the coastline. The highest astronomical tide and lowest astronomical tide are also recorded, with the lowest astronomical tide serving as the datum on navigational charts. Meteorological conditions (such as hurricanes), as well as tsunamis (caused by earthquakes), can dramatically raise or lower sea level along coasts, well beyond the highest and lowest astronomical tides. It is estimated that tides contribute only 3.7 terawatts of energy (Global Climate and Energy Project, Hermann, 2006 Energy), or about 0.002% of Earth's total energy.
In this census of Earth's energy, we did not include wind and fossil fuels such as coal, oil, and natural gas, as these sources of energy are ultimately a result of solar irradiation. Wind is a result of thermal and pressure gradients in the atmosphere, which you will learn more about when you read about the atmosphere, while fossil fuels are stored biological energy, due to the sequestration of organic matter produced by photosynthesis in the form of hydrocarbons, which you will learn more about when you read about life in a later chapter. \newpage
Section 3: EARTH'S MATTER
3a. Gas, Liquid, Solid (and other states of matter).
What is stuff made of?
Ancient classifications of Earth's matter were early attempts to determine what makes up the material world we live in. Aristotle, teacher of Alexander the Great in Ancient Greece, proposed five elements around 343 BCE: earth, water, air, fire, and aether. These five elements were likely adapted from older cultures, such as ancient Egyptian teachings. The Chinese Wu Xing system, developed around 200 BCE during the Han dynasty, lists the elements Wood (木), Fire (火), Earth (土), Metal (金), and Water (水). These ideas suggested that the ingredients that make up all matter were some combination of these elements, but theories of what those elements were appeared arbitrary in early texts. Around 850 CE, the Islamic philosopher Al-Kindi, who had read of Aristotle in his native Baghdad, conducted early experiments in distillation: the process of heating a liquid and collecting the cooled, condensed vapor in a separate container. He discovered that the process of distillation could make more potent perfumes and stronger wines. His experiments suggested that there were in fact just three states of matter: solids, liquids, and gases.
These early classifications of matter differ significantly from the modern atomic theory of matter that forms the basis of the field of chemistry. Modern atomic theory classifies matter into 94 naturally occurring elements, plus an additional 24 elements synthesized by scientists. The atomic theory of matter suggests that all matter is composed of a combination or mixture of these 118 elements. However, all these substances can adopt three basic states of matter as a result of differences in temperature and pressure. Hence, any combination of these elements, despite being made up of different elements, can theoretically exist in solid, liquid, and gas phases depending on its temperature and pressure.
A good example is ice, water and steam. Ice is a solid form of hydrogen atoms bonded to oxygen atoms, symbolized by H2O, as it contains twice as many hydrogens (H) as oxygen (O) atoms. H2O is the chemical formula of ice. Ice can be heated to form liquid water. At Earth’s surface pressures (1 atmosphere) ice will melt into water at 0° Celsius (32° Fahrenheit). Likewise, water will freeze at the same temperature 0° Celsius (32° Fahrenheit). If you continue to heat the water it will boil at 100° Celsius (212° Fahrenheit). Boiling water produces steam, or water vapor, which is a form of gas. If water vapor is cooled below 100° Celsius (212° Fahrenheit), it will turn back into water.
One of the most fascinating simple experiments is to observe the temperature in a pot of water as it is heated to 100° Celsius (212° Fahrenheit). The water will rise in temperature until it reaches 100° Celsius (212° Fahrenheit), and it will remain at that temperature until all the water has evaporated into steam (a gas); only then can the temperature rise any higher. A pot of boiling water is precisely at 100° Celsius (212° Fahrenheit), as long as it is pure water and is at 1 atmosphere of pressure (at sea level).
The amount of pressure can affect the temperatures at which phase transitions take place. For example, on top of a 10,000-foot mountain, water will boil at 89.6° Celsius (193.2° Fahrenheit), because there is less atmospheric pressure. This is why you often see adjustments to cooking instructions based on altitude, since it takes longer to cook something at higher altitudes. If you place a glass of water in a vacuum by pumping gases out of a container, you can get the water to boil at room temperature; this phase transition happens when the pressure drops below about 1 kilopascal. The three basic states of matter are dependent on both the pressure and temperature of a substance. Scientists can diagram the different states of matter of any substance by charting the observed state of matter at each temperature and pressure. These diagrams are called phase diagrams.
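The effect of altitude on boiling can be estimated numerically. The sketch below combines the standard-atmosphere pressure formula with the Antoine equation for water, an empirical fit valid roughly between 1 and 100° Celsius; both formulas and their coefficients are standard literature values assumed here for illustration, not given in this chapter.

```python
import math

def pressure_at_altitude_kpa(h_m):
    """Standard-atmosphere air pressure (kPa) at altitude h_m meters."""
    return 101.325 * (1 - 2.25577e-5 * h_m) ** 5.25588

def boiling_point_c(p_kpa):
    """Boiling point of water (deg C) via the Antoine equation (~1-100 C)."""
    p_mmhg = p_kpa * 7.50062  # convert kPa to mmHg for Antoine coefficients
    return 1730.63 / (8.07131 - math.log10(p_mmhg)) - 233.426

h = 10_000 * 0.3048  # 10,000 feet expressed in meters
t_boil = boiling_point_c(pressure_at_altitude_kpa(h))
print(f"Boiling point at 10,000 ft: {t_boil:.1f} C")  # close to the ~89.6 C quoted
```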
A phase diagram can be read by observing the temperatures and pressures substances will change phases from solid, liquid and gas. If the pressure remains constant, you can read the diagram by following a horizontal line across the diagram, observing the temperatures a substance melts or freezes (solid-liquid) and boils or evaporates (liquid-gas). You can also read the diagram by following a vertical line across the diagram, observing the pressures that a substance melts or freezes (solid-liquid) and boils or evaporates (liquid-gas).
On the phase diagram for water, you will notice that the division between solid ice and liquid water is not a perfectly vertical line around 0° Celsius: at high pressures, around 200 to 632 MPa, ice will melt at temperatures slightly lower than 0° Celsius. This is why ice buried deep under ice sheets, where the overlying ice increases the pressure, can melt. Another strange phenomenon can happen to water heated to 100° Celsius. If you subject water at 100° Celsius to increasing pressures, above 2.1 GPa, the hot water will turn to solid ice and “freeze” at 100° Celsius. Hence, at very high pressures, you can form ice at the bizarrely hot temperature of 100° Celsius! If you were able to touch this ice, you would get burned. Another strange phenomenon happens if you subject ice to decreasing pressures in a vacuum: the ice will sublimate, turning from a solid to a gas at temperatures below 0° Celsius. The process of a solid turning into a gas is called sublimation, and the process of a gas turning into a solid is called deposition. One of the most bizarre phenomena happens at the triple point of the three states of matter, where the solid, liquid, and gas phases can co-exist. For pure water (H2O) this happens at 0.01° Celsius and a pressure of 611.657 Pa. When water, ice, or water vapor is subjected to this temperature and pressure, you get the weird phenomenon of water both boiling and freezing at the same time!
What phase diagrams demonstrate is that the states of matter are a function of the space between molecules within a substance. As temperature increases, vibrational forces push the molecules of a substance farther apart; likewise, as pressure increases, the molecules of a substance are pushed closer together. This balance between temperature and pressure dictates which phase of matter will exist at each discrete temperature and pressure.
More advanced phase diagrams may indicate different arrangements of molecules in solid states as they are subjected to different temperatures and pressures. These more advanced phase diagrams illustrate crystal lattice structural changes in solid matter, which becomes more densely packed and can form different crystal arrangements.
Each substance has a different phase diagram. For example, pure carbon dioxide (CO2), which is composed of a single carbon atom (C) bonded to two oxygen atoms (O), is mostly a gas at normal temperatures and pressures on the surface of Earth. However, carbon dioxide cooled down to -78° Celsius undergoes deposition and turns from a gas to a solid. Dry ice, which is solid carbon dioxide, sublimates at room temperature, turning directly into a gas. It is called dry ice because the phase transition between solid and gas at normal pressures does not go through a liquid phase like water. This is why dry ice kept in a cooler will not get your food wet, but will keep your food cold, and actually much colder than normal frozen ice made of H2O.
Strange things happen when gases are heated and subjected to increasingly high pressures. At some point these hot gases under increasing compression become classified as a supercritical fluid. Supercritical fluids act both like a gas and a liquid, suggesting an additional fourth state of matter. Supercritical H2O occurs when water is raised to temperatures above 374° Celsius and subjected to 22.1 MPa or more of pressure; at this point the supercritical fluid of water appears like a cloudy, steamy fluid. Supercritical CO2 occurs at temperatures above 31.1° Celsius and pressures of 7.39 MPa or more. Because supercritical fluids act like both a liquid and a gas, they can be used as solvents in dry cleaning without getting fabrics wet. Supercritical fluids are also used in the process of decaffeinating coffee beans, as caffeine is absorbed by supercritical carbon dioxide when it is mixed with coffee beans.
Phase diagrams can get more complex when you consider two or more substances mixed together and examine how they interact with each other. These more complex phase diagrams with two different substances are called binary systems, as they compare not only temperatures and pressures but also the ratio of two (and sometimes more) components. Al-Kindi, in developing his distillation process, utilized the difference in boiling temperature between water (H2O), which boils at 100° Celsius, and alcohol (C2H6O), which boils at 78.37° Celsius. The captured gas from a mixture of water and alcohol heated to 78.37° Celsius is rich in alcohol. If this separated gas is then cooled, it condenses into a more concentrated form of alcohol; this is how distillation works.
Utilizing the knowledge of phase diagrams, the distribution of the different compositions of the 94 naturally occurring elements can be elucidated, and scientists can determine how substances become enriched or depleted in these naturally occurring elements as a result of changes in temperature and pressure.
Plasma is used to describe free-flowing electrons, as seen in electrical sparks, lightning, and the glowing envelope encircling the sun. In this book, plasma is not treated as a state of matter, since it does not contain particles of sufficient mass. Although sometimes included as a state of matter, plasma, like electromagnetic radiation such as light (which contains photons), is here considered a form of energy rather than matter, although electrons play a vital role in bonding different types of atoms together. In the next module you will be introduced to additional phases of matter at the extreme limits of phase diagrams.
Different phases of matter have different densities. Density, as you may recall, is a measure of a substance's mass per volume; in other words, it is the amount of matter (mass) within a given space (volume). Specific gravity is a substance's density compared to that of water. A simple test is to see whether an object floats or sinks; such observations are quantified as specific gravity. A specific gravity of precisely 1 means that the object has the same density as water. Substances, whether solid, liquid, or gas, with specific gravities higher than 1 will sink, while substances with specific gravities lower than 1 will float. The specific gravity of liquids is measured using a hydrometer. Otherwise, density is measured by finding the mass and dividing it by the measured volume (usually found by displacement of water if the object is an irregular solid).
Most substances have a higher density as a solid than as a liquid, and most liquids have a greater density than their gas phase. This is because solids pack more atoms together in less space than a liquid, and many more atoms are packed into a solid phase of matter than a gas phase. There are exceptions to this rule; for example, ice, the solid form of water, floats. This is because there is less mass per volume in an ice cube than in liquid water, as the crystal lattice of ice (H2O) forms a less dense network of bonds between atoms, spreading out over more space to accommodate the crystal lattice structure. This is why leaving a soda can in the freezer will cause it to expand and burst open. Nevertheless, most substances are denser in their solid phase than in their liquid phase.
Density is measured in kg/m3 or as specific gravity (in comparison to liquid water). Liquid water has a density of 1,000 kg/m3 at 4° Celsius, and steam (water vapor) has a density of 0.6 kg/m3. Milk has a density of 1,026 kg/m3, slightly more than pure water, and the density of air at sea level is about 1.2 kg/m3. At 100 kilometers above the surface of the Earth (near the edge of outer space), the density of air drops down to 0.00000055 kg/m3 (5.5 x 10^-7 kg/m3).
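With the densities above, specific gravity and the float-or-sink test can be expressed in a few lines. The ice density used below is an assumed literature value, not one given in this chapter.

```python
WATER_DENSITY = 1000.0  # kg/m^3, liquid water at 4 C (from the text)

def specific_gravity(density_kg_m3):
    """Specific gravity: a substance's density relative to liquid water."""
    return density_kg_m3 / WATER_DENSITY

def floats_in_water(density_kg_m3):
    """A substance floats if its specific gravity is below 1."""
    return specific_gravity(density_kg_m3) < 1

milk = 1026.0  # kg/m^3, from the text
ice = 917.0    # kg/m^3, typical density of ice (assumed literature value)

print(specific_gravity(milk))  # 1.026, so milk sinks in pure water
print(floats_in_water(ice))    # True: ice floats, as explained above
```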
Remember that the gravitational force exerted on an object depends on its mass; for a given volume, a denser object contains more mass and therefore experiences a greater gravitational force. This previously came into the discussion of calculating the density of the Earth, in refuting the hypothesis of a hollow center inside the Earth.
It is important to distinguish an object's Mass from an object's Weight. Weight is the combined effect of the acceleration of gravity (g) and an object's Mass (M), such that Weight = M x g. This is why objects in space are weightless, and why objects have different weights on other planets: the value of g differs depending on the mass and size of each planet. However, Mass, which reflects the total amount of matter within an object, remains the same no matter which planet you visit.
Weight is measured by scales that use springs: gravity pulls the object's mass toward the Earth, and the displacement of the spring records this combined force of mass and gravity. Mass is measured by scales that compare an object to standards of known mass, as in a balance-type scale.
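The Weight = M x g relationship is easy to demonstrate numerically. The surface gravity values below are standard published figures, assumed here for illustration; the same 70 kg mass yields a different weight on each world.

```python
def weight_newtons(mass_kg, g):
    """Weight = mass x local gravitational acceleration (W = M * g)."""
    return mass_kg * g

# Surface gravity values in m/s^2 (standard literature figures, assumed here).
g_by_world = {"Earth": 9.81, "Moon": 1.62, "Mars": 3.71}

mass = 70.0  # a 70 kg person: the mass is the same on every world
for world, g in g_by_world.items():
    print(f"{world}: {weight_newtons(mass, g):.0f} N")
```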
3b. Atoms: Electrons, Protons and Neutrons.
Planck’s length, the fabric of the universe, and extreme forms of matter
What would happen to water (H2O) if you subjected it to the absolute zero temperature predicted by Lord Kelvin, 0 kelvin or -273.15° Celsius, under a complete vacuum of 0 pascals of pressure? What would happen to water (H2O) if you subjected it to extremely high temperatures and pressures, like those found in the cores of the densest stars in the universe?
The answers to these questions may seem beyond the limits of practical experimentation, but new research is discovering new states of matter at these limits. These additional states of matter exist at the extreme ends of all phase diagrams, at the limits of observable temperature and pressure. It is here, in the corners of phase diagrams, that matter behaves in strange ways. These new forms of matter were predicted nearly a century before they were discovered, through a unique collaboration between two scientists living on different sides of the Earth.
As the eldest boy in a large family with seven younger sisters, Satyendra Nath Bose grew up in the bustling city of Calcutta, India. His family was well off, as his father was a railway engineer and a member of the upper-class Hindu society that lived in the Bengal Presidency. Bose showed an aptitude for mathematics, and rose up the ranks as a teacher, later becoming a professor at the University of Dhaka, where he taught physics. Bose read Albert Einstein's papers, translated his writings from German into English, and started a correspondence with Albert Einstein. While lecturing his class in India on Planck's constant and black body radiators, he stumbled upon a unique realization: a statistical mathematical mistake that Einstein had made in describing the nature of the interaction between atoms and photons (electromagnetic radiation).
As you might recall, Planck's constant relates to how light or energy striking matter is absorbed or radiated in a perfect black body radiator. In 1900, Max Planck used his constant (h) to calculate the minimum possible distance between wavelengths of photons of electromagnetic radiation. The equation is
ℓP = √(ℏG / c³)

where ℏ is the reduced Planck's constant, which is equal to 1.054571817 x 10^-34 joule-seconds and equals h divided by 2π; G is Henry Cavendish's measurement of the gravitational constant, G = 6.67408 x 10^-11 meters cubed per kilogram per second squared; and c is the speed of light in a vacuum, 299,792,458 meters per second.
This length is called the Planck length. It is the theoretical smallest distance between wavelengths of the highest-energy electromagnetic radiation possible. It also relates to the theoretical smallest distance between electrons within an atom. The currently calculated Planck length is 1.6 x 10^-35 meters, which is incredibly small: written out as a decimal, the first non-zero digit does not appear until the 35th place after the decimal point.
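The Planck length can be recomputed directly from the three constants given above, as a quick check on the formula.

```python
import math

# Constants as quoted in the text above.
hbar = 1.054571817e-34  # reduced Planck's constant, J*s
G = 6.67408e-11         # gravitational constant, m^3 / (kg * s^2)
c = 299_792_458.0       # speed of light in a vacuum, m/s

# Planck length: l_P = sqrt(hbar * G / c^3)
planck_length = math.sqrt(hbar * G / c**3)
print(f"Planck length: {planck_length:.2e} m")  # about 1.6e-35 m
```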
Bohr's Model of the Atom
In physics it is the smallest measurement of distance. Satyendra Nath Bose was also aware of a new model of the atom proposed by Niels Bohr, a Danish scientist, who viewed atoms as arranged much like the solar system, but with tiny electrons orbiting around the atom's nucleus instead of planets orbiting around a star. Under Bohr's model of the atom, the simplest type of atom (hydrogen) is a single electron orbiting around a nucleus composed of a single proton.
Electron Orbital Shells
Experiments in fluorescence demonstrate that when electromagnetic radiation, such as light, is absorbed by atoms, the electrons rise to a higher energy state. They subsequently fall back down to their natural energy state and release energy as photons. This is why materials glow when heated, and why radioactive materials glow when subjected to gamma or x-ray electromagnetic radiation. Scientists can measure the amount of energy released as photons when this occurs, and Niels Bohr suggested that the amount of energy released appeared to be related to orbital shell distances, in tiny units measured in Planck lengths. Niels Bohr developed a model explaining how each orbital shell appeared to hold an increasing number of electrons as the number of protons increased.
One way to think of these electron orbital shells is that they are like notches along a ruler. Electrons must encircle each atom's nucleus at one or more of those discrete notches, which are separated by distances measured in Planck lengths, the smallest measurement of distance theoretically possible. To test this idea, scientists excited atoms with high-energy light and measured the amount of electromagnetic radiation that was emitted by the atoms. When electrons absorb light, they move up the notches by discrete Planck lengths; however, they also move back down the notches, releasing photons and emitting electromagnetic radiation in the process, until they settle on a notch that is supported by an equal number of protons in the nucleus.
This effect is called the photoelectric effect. Albert Einstein earned his Nobel Prize in 1921 by showing that it is the frequency of the electromagnetic radiation, multiplied by Planck's constant, that determines the energy imparted to the electrons.
Such that E = hν, where E is the energy measured in joules, h is Planck's constant, and ν is the frequency of the electromagnetic radiation. We can use ν = c / λ, where c is the speed of light and λ is the wavelength, to determine the frequency ν of different wavelengths of light, finding that the shorter the wavelength, the higher the energy.
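The relationship E = hν (with ν = c/λ) can be checked numerically; the two wavelengths below are ordinary visible-light values chosen for illustration, confirming that the shorter (bluer) wavelength carries more energy per photon.

```python
h = 6.62607015e-34  # Planck's constant, J*s
c = 299_792_458.0   # speed of light in a vacuum, m/s

def photon_energy_joules(wavelength_m):
    """E = h * v, with frequency v = c / wavelength."""
    v = c / wavelength_m
    return h * v

blue = photon_energy_joules(450e-9)  # 450 nm blue light
red = photon_energy_joules(700e-9)   # 700 nm red light
print(blue > red)  # True: shorter wavelength means more energy per photon
```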
As electrons move up the notches away from the nucleus by absorbing more electromagnetic radiation, they can eventually become so excited that they break completely free of the nucleus and become free electrons (electricity). This happens especially with metallic materials, which hold their orbiting electrons loosely, but it can theoretically happen with any type of material, given enough electromagnetic radiation. This is also what happens to matter when it is heated: the electrons move upward in their energy states, the atoms jiggle and pass energy to surrounding particles as electromagnetic radiation, and the substance expands. This is why there is an overall trend, with increasing temperature and decreasing pressure, toward matter that is less dense, expanding in volume from a solid to a liquid to a gas; eventually, with enough energy, the electrons are freed from the nucleus, resulting in a plasma of free-flowing electrons, or electricity.
The notches at which electrons encircle the nucleus correspond to certain orbital shells of stability, such that the number of electrons exactly matches the number of protons within the nucleus, and the electrons fill the orbital shells in a sequential order. These orbital shells of stability form the organization of the Periodic Table of Elements that you see in many classrooms.
One way to think of these orbital shells of stability is as discrete notches on a ruler, each “centimeter” on this ruler representing an orbital distance in the electron shell. There can be smaller units, such as millimeters, with the smallest unit measured in Planck lengths. Scientists were eager to measure these tiny distances within atoms, but found it impossible, because the electrons behave not like planets orbiting a sun, but as oscillating waves forming a probability function around each of those discrete distances of stability. Hence it is impossible to predict the exact location of an electron along these notched distances from the nucleus. This is known as the Heisenberg Uncertainty Principle, which states that the position and the velocity of an electron cannot both be measured exactly at the same time. In a sense this makes sense: electrons encircle the nucleus as oscillating waves rather than as tiny particles with fixed positions, making it impossible to measure a specific position of an electron within its orbit around the nucleus. The study of atomic structure at this scale is called quantum physics.
Satyendra Nath Bose had read Einstein’s work on the subject, and noted some mathematical mistakes in Einstein’s calculations of the photoelectric effect. Bose offered a new solution, and asked Einstein to translate the work into German for publication. Einstein generously agreed, and Bose’s paper was published. Einstein and Bose then applied this new solution to the question of what happens to these electron orbitals when atoms are cooled to Lord Kelvin’s extremely low temperature of absolute zero.
Einstein, following Bose, proposed that the electron orbital distances would collapse, moving down to the lowest possible notch on the Planck scale. This tiny remaining distance prevents the atom from collapsing, and is referred to as zero-point energy. What is so strange is that all atoms, no matter how many protons or electrons they contain, undergo a similar collapse of their electrons down to the lowest notch at these extremely low temperatures.
At this point the atoms become a new state of matter called Bose-Einstein condensate. Bose-Einstein condensate has some weird properties. First, it is a superconductor, because its electrons are only weakly held to the nucleus; second, all elements except helium become solids; and strangest of all, all atoms in this state exhibit the same chemical properties, since their electrons sit so close to the nucleus, in the lowest orbital shell.
Helium, which is a gas at normal room temperatures and pressures, has two protons and two electrons. When it is cooled to absolute zero in a vacuum, it remains in liquid form rather than becoming a denser solid, as all other elements do; only when additional pressure is applied does helium eventually turn into a solid. It is the only element to behave this way; all others become solids at absolute zero temperatures. This is because the zero-point energy in the electron orbitals is enough to keep helium a liquid even at temperatures approaching absolute zero. In 1995 two scientists at the University of Colorado, Eric Cornell and Carl Wieman, supercooled rubidium-87, generating the first evidence of Bose-Einstein condensate in a lab, which earned them a Nobel Prize in 2001. Since then numerous other labs have been experimenting with Bose-Einstein condensate, pushing electrons to within a hair’s breadth of the nucleus.
What happens to atoms when subjected to intense heat and pressure? Electrons will move up these notches until they are far enough from the nucleus that they leave the atom and become a plasma, a flow of free electrons. Hence the first thing that happens at high pressure and high temperature is the generation of electricity from the free flow of these electrons. If pressure and temperature continue to increase, protons will fuse and convert to neutrons, releasing photons as gamma radiation along with neutrinos. This nuclear fusion is what generates the energy inside the cores of stars, such as the sun. If neutrons are subjected to even more pressure and temperature, they collapse into black holes, the most mysterious form of matter in the universe.
One of the frontiers of science is the linkage between the extremely small Planck length and the observed cosmic expansion of the universe as determined by Hubble’s constant. One way to describe this relationship is to imagine a fabric of matter, which is being stretched apart (expanding) at the individual atomic level, resulting in an expanding universe. The study of this aspect of science is called cosmology.
In chemistry, the electrons are often considered the most important aspect of the atom, because they determine how atoms bond together to form molecules. However, electrons can move around between atoms, and even escape as plasma. Perhaps of more importance in chemistry is the number of protons within the nucleus of the atom.
The number of protons within an atom determines the name of the element: all atoms with 1 proton are called hydrogen, atoms with 2 protons are called helium, and atoms with 3 protons are called lithium. The number of protons in an atom is referred to as the Atomic Number (Z). Each element is classified by its atomic number, which appears in the top corner of a periodic table of elements, along with the chemical symbol of each element. The first 26 elements formed through fusion in the early proto-sun, while elements with atomic numbers higher than 26 formed during the supernova event; elements with atomic numbers higher than 94 are not found in nature and must be synthesized in labs. Here is a list of elements, giving the atomic number and name of each element, as of 2020.
Elements formed in the sun through fusion 1-Hydrogen (H) 2-Helium (He) 3-Lithium (Li) 4-Beryllium (Be) 5-Boron (B) 6-Carbon (C) 7-Nitrogen (N) 8-Oxygen (O)
Elements formed in the larger proto-sun through fusion 9-Fluorine (F) 10-Neon (Ne) 11-Sodium (Na) 12-Magnesium (Mg) 13-Aluminium (Al) 14-Silicon (Si) 15-Phosphorus (P) 16-Sulfur (S) 17-Chlorine (Cl) 18-Argon (Ar) 19-Potassium (K) 20-Calcium (Ca) 21-Scandium (Sc) 22-Titanium (Ti) 23-Vanadium (V) 24-Chromium (Cr) 25-Manganese (Mn) 26-Iron (Fe)
Elements formed from the Supernova Event 27-Cobalt (Co) 28-Nickel (Ni) 29-Copper (Cu) 30-Zinc (Zn) 31-Gallium (Ga) 32-Germanium (Ge) 33-Arsenic (As) 34-Selenium (Se) 35-Bromine (Br) 36-Krypton (Kr) 37-Rubidium (Rb) 38-Strontium (Sr) 39-Yttrium (Y) 40-Zirconium (Zr) 41-Niobium (Nb) 42-Molybdenum (Mo) 43-Technetium (Tc) 44-Ruthenium (Ru) 45-Rhodium (Rh) 46-Palladium (Pd) 47-Silver (Ag) 48-Cadmium (Cd) 49-Indium (In) 50-Tin (Sn) 51-Antimony (Sb) 52-Tellurium (Te) 53-Iodine (I) 54-Xenon (Xe) 55-Caesium (Cs) 56-Barium (Ba) 57-Lanthanum (La) 58-Cerium (Ce) 59-Praseodymium (Pr) 60-Neodymium (Nd) 61-Promethium (Pm) 62-Samarium (Sm) 63-Europium (Eu) 64-Gadolinium (Gd) 65-Terbium (Tb) 66-Dysprosium (Dy) 67-Holmium (Ho) 68-Erbium (Er) 69-Thulium (Tm) 70-Ytterbium (Yb) 71-Lutetium (Lu) 72-Hafnium (Hf) 73-Tantalum (Ta) 74-Tungsten (W) 75-Rhenium (Re) 76-Osmium (Os) 77-Iridium (Ir) 78-Platinum (Pt) 79-Gold (Au) 80-Mercury (Hg) 81-Thallium (Tl) 82-Lead (Pb) 83-Bismuth (Bi) 84-Polonium (Po) 85-Astatine (At) 86-Radon (Rn) 87-Francium (Fr) 88-Radium (Ra) 89-Actinium (Ac) 90-Thorium (Th) 91-Protactinium (Pa) 92-Uranium (U) 93-Neptunium (Np) 94-Plutonium (Pu)
Non-naturally occurring elements, synthesized in labs 95-Americium (Am) 96-Curium (Cm) 97-Berkelium (Bk) 98-Californium (Cf) 99-Einsteinium (Es) 100-Fermium (Fm) 101-Mendelevium (Md) 102-Nobelium (No) 103-Lawrencium (Lr) 104-Rutherfordium (Rf) 105-Dubnium (Db) 106-Seaborgium (Sg) 107-Bohrium (Bh) 108-Hassium (Hs) 109-Meitnerium (Mt) 110-Darmstadtium (Ds) 111-Roentgenium (Rg) 112-Copernicium (Cn) 113-Nihonium (Nh) 114-Flerovium (Fl) 115-Moscovium (Mc) 116-Livermorium (Lv) 117-Tennessine (Ts) 118-Oganesson (Og)
Reading through these names reveals a mix of familiar elements, such as oxygen, helium, iron and gold, and the unusual. It may be the first time you have heard of indium, technetium, terbium and holmium. This is because each element has a different abundance in nature, with some orders of magnitude more common on Earth than others. For example, the highest-numbered element, element 118, Oganesson, formally named in 2016, is so rare that only 5 to 6 single atoms have ever been reported by scientists. These elements are extremely rare because, as the number of protons in the nucleus increases, the atom becomes more and more unstable.
Atoms with more than 1 proton need additional neutrons to overcome the repulsion between the two or more protons. Protons are positively charged and will attract negatively charged electrons, but these positive charges also push protons away from each other. The addition of neutrons helps stabilize the nucleus, allowing multiple protons to co-exist. In general, the more protons an atom contains, the more unstable the atom becomes, resulting in radioactive decay. This is why elements with large atomic numbers, like 90 for thorium, 92 for uranium and 94 for plutonium, are radioactive. Scientists speculate that atoms with atomic numbers even higher than 118 might exist and could be stable, but so far they have not been discovered. Another important fact is that, unlike electrons, protons carry significant atomic mass. This fact will be revisited when you learn how scientists determine which elements are actually within solids, liquids and gases.
The last component of atoms is the neutron. Neutrons, like protons, have atomic mass, but lack any charge, and hence are electrically neutral. Neutrons form in stars by the fusion of protons, but can also appear in the beta decay of atoms during nuclear fission. Unlike protons, which can be free and stable independent of electrons and neutrons (as hydrogen ions), free neutrons on Earth quickly decay to protons within a few minutes. These free neutrons are produced through the beta decay of larger elements, but neutrons are stable within the cores of the densest stars, whose gigantic gravitational forces hold them together; these are neutron stars. On Earth, neutrons exist almost exclusively within atoms, alongside protons, adding stability to atoms with more than 1 proton. Protons and neutrons are the only atomic particles within the nucleus, and they carry nearly all of an atom’s mass.
3c. The Chart of the Nuclides.
On April 9th, 1940, news came over the radio that Nazi Germany had invaded neutral Norway. Leif Tronstad was teaching his chemistry class at the Norwegian Institute of Technology in Trondheim, in central Norway, when the news arrived. A military-trained officer schooled in weapons combat, Tronstad informed his students that they were now at war, and told them to report to the nearest military station and take up arms. He and his family left Trondheim, making the six-hour drive south toward Oslo to help defend the country, but halfway there the terrible news arrived that Oslo had been overtaken by the Nazis. He took shelter in the Dovre Mountains, in the rugged mountainous region of Rondane National Park. Here he trained volunteers in the use of rifles to defend the country against the invasion. Leif Tronstad was a well-liked professor of chemistry, and had been working on a newly discovered substance, one that would alter the course of World War II and lead to the creation of atomic weapons. In May of 1940, Tronstad learned that the plant which produced this substance for his lab was now under Nazi control, and that the Nazis had ordered increased production from the captured Norwegian operators. The substance was called an isotope, and it does not appear on the periodic table of elements.
What is an isotope?
Twenty-seven years earlier, the chemist Frederick Soddy was attending a dinner party with his wife’s family in Scotland. During the dinner he got into a discussion with a guest named Margaret Todd, a retired medical doctor. Conversation likely turned to the research Soddy was doing on atomic structure and radioactivity. Soddy had recently discovered that atoms could be identical on the outside, but have differences on their insides. This difference would not appear on standard periodic tables, which arrange elements by the number of electrons and protons, and he was trying to come up with a different way to arrange these new substances. Margaret Todd suggested the term “isotope” for these substances, iso- meaning same, and -tope meaning place. Soddy liked the term, and published a paper later that year using the new term isotope to denote atoms that have the same number of protons but differ in the number of neutrons in the nucleus.
Protons and neutrons exist only within the center of an atom, in the nucleus, and are called nuclides. An arguably better way to organize the different types of atoms is to chart the number of protons (Z) against the number of neutrons (N) inside the nucleus (see https://www.nndc.bnl.gov/ for an interactive chart). Unlike a periodic table of elements, every single type of atom can be plotted on such a chart, including atoms that are not seen in nature or are highly unstable (radioactive). This type of chart is called the chart of the nuclides.
For example, we can have an atom with 1 proton and 0 neutrons, which is called hydrogen. However, we can also have an atom with 1 proton and 1 neutron, which is called hydrogen as well; the name of the element only indicates the number of protons. In fact, you could theoretically have hydrogen with 1 proton and 13 neutrons. However, such atoms don’t appear to exist on Earth, because under Earth’s pressures and temperatures it is nearly impossible to bind 13 neutrons to a single proton. Such atoms might exist in extremely dense stars, though. A hydrogen with 1 proton and 13 neutrons would act much like normal hydrogen, but would have an atomic mass of 14 (1 + 13), making it much heavier than normal hydrogen, which has an atomic mass of only 1. Atomic mass is the total number of protons and neutrons in an atom.
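This bookkeeping can be sketched in a few lines of Python. The small symbol table and the `isotope_label` helper below are illustrative inventions for this example, covering only elements mentioned nearby.

```python
# Mass number A = Z (protons) + N (neutrons). The element's name depends
# only on Z; the isotope label combines A with the chemical symbol.
SYMBOLS = {1: "H", 6: "C", 82: "Pb"}   # tiny illustrative table, not complete

def isotope_label(protons, neutrons):
    """Return a label like '14C' for an isotope with Z protons and N neutrons."""
    mass_number = protons + neutrons     # A = Z + N
    return f"{mass_number}{SYMBOLS[protons]}"

isotope_label(1, 0)   # '1H'  ordinary (light) hydrogen
isotope_label(1, 1)   # '2H'  heavy hydrogen (deuterium)
isotope_label(6, 8)   # '14C' carbon-14
```

The point of the sketch is that two coordinates, Z and N, identify any nuclide, which is exactly how the chart of the nuclides is organized.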
Most charts of the nuclides don’t include atoms that have never been observed; however, hydrogen with 1 proton and 1 neutron has been discovered, and is called an isotope of hydrogen. Isotopes are atoms with the same number of protons but different numbers of neutrons. Isotopes can be stable or unstable (radioactive). For example, hydrogen has two stable isotopes, atoms with 1 proton and 0 neutrons, and atoms with 1 proton and 1 neutron, but atoms with 1 proton and 2 neutrons are radioactive. Note that atomic mass differs depending on the isotope, such that we could call a hydrogen isotope with 1 proton and 0 neutrons (atomic mass 1) light, compared to an isotope of hydrogen with 1 proton and 1 neutron (atomic mass 2), which is heavy. Scientists will often refer to isotopes as either light or heavy, or by a superscript prefix, such as 1H and 2H, where the superscript prefix indicates the atomic mass.
In 1931, Harold Urey and his colleagues Ferdinand G. Brickwedde and George M. Murphy at Columbia University isolated heavy hydrogen (2H) by distilling liquid hydrogen over and over again, so that the remaining liquid contained more of the heavy hydrogen. In discovering heavy hydrogen, Harold Urey named this type of atom deuterium (sometimes abbreviated as D). Only the isotopes of hydrogen have their own names; all other isotopes are known by their element and atomic mass number, such as 14C (i.e. carbon-14). The number indicates the atomic mass, which is the number of protons plus the number of neutrons, so 14C (carbon-14) has 6 protons and 8 neutrons (6 + 8 = 14).
Hydrogen that contains 1 proton and hydrogen that contains 1 proton and 1 neutron will behave similarly in their bonding properties to other atoms and are difficult to tell apart; hydrogen, no matter the number of neutrons, will have 1 electron to equally match its single proton.
They do, however, have slightly different physical properties because of the difference in mass. For example, on the chart of the nuclides 1H has a mass excess Δ of 7.2889 MeV, while 2H (deuterium) has 13.1357 MeV; deuterium releases slightly more energy when subjected to photons, because the nucleus of the atom contains more mass, and the electron orbital shells are pulled slightly closer to the nucleus in deuterium than in typical hydrogen. The excited electrons have farther to fall and release more energy. These slight differences in physical and chemical properties allow isotopes to undergo fractionation. Fractionation is the process of changing the abundance or ratio of various isotopes within a substance, by either enriching or depleting various isotopes.
Water that contains deuterium, or heavy hydrogen, has a higher boiling temperature of 101.4 degrees Celsius (at 1 atmosphere of pressure) compared to normal water, which boils at 100 degrees Celsius (at 1 atmosphere of pressure). Deuterium is very rare, accounting for only 0.0115% of hydrogen atoms, so isolating deuterium requires boiling away a lot of water and keeping the last remaining drops each time, over and over again, to increase the proportion of deuterium in the water. Heavy water is expensive to make because it requires distilling so much normal water again and again. This is a process of fractionation.
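The repeated-distillation idea can be sketched numerically. In the Python sketch below, the natural deuterium abundance (0.0115%) comes from the text, but the per-stage enrichment factor `ALPHA` is a purely hypothetical illustrative value, so the stage count only gives a rough feel for why heavy water is expensive to make.

```python
import math

# Each distillation stage multiplies the deuterium fraction by an
# enrichment factor alpha, so after n stages: fraction = start * alpha**n.
NATURAL_D = 0.000115   # natural fraction of hydrogen atoms that are deuterium
ALPHA = 1.05           # hypothetical 5% enrichment per stage (illustrative only)

def stages_to_reach(target, start=NATURAL_D, alpha=ALPHA):
    """Smallest number of stages n so that start * alpha**n >= target."""
    return math.ceil(math.log(target / start) / math.log(alpha))

stages_to_reach(0.5)   # even at 5% gain per stage, over 170 stages are needed
```

The exponential form of the calculation is the key point: because each stage only multiplies the existing ratio, enriching a very rare isotope to high purity takes an enormous number of repetitions.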
In 1939 deuterium was discovered to be important in the production of plutonium-239 (239Pu), a radioactive isotope used to make atomic weapons. In an article published in the peer-reviewed journal Nature in 1939, the daughter of Marie Curie, Irène Joliot-Curie, and her husband Frédéric Joliot-Curie described how powerful plutonium-239 could be, and how it could be made from uranium using deuterium to moderate free neutrons. The article excited much interest in Nazi Germany, and a campaign was made to produce deuterium. Deuterium bonded to oxygen in water molecules is called heavy water. In 1940 Germany invaded Norway and captured the Vemork power station at the Rjukan waterfall in Telemark, Norway, which had produced deuterium for Leif Tronstad’s lab but was now capable of producing deuterium for the Germans in the production of the isotope plutonium-239 (239Pu).
Leif Tronstad needed to warn the world that Germany would soon have the ability to make plutonium-239 (239Pu) bombs. But the fighting across Norway was going poorly; soon the city of Trondheim surrendered, and Leif Tronstad was now a resistance fighter in a country overrun by Nazi Germany. He sent a coded message to Britain warning them of the increased production of deuterium by the Germans. But he was unable to verify that the message had been received, so he had to escape Norway and warn the world himself. Leif Tronstad left his family’s cabin on skis, made his way over the Norwegian border into Sweden, and found passage to England. Once in England his warning was received with grave concern by Winston Churchill, who would later write “heavy water – a sinister term, eerie, unnatural, which began to creep into our secret papers. What if the enemy should get the atomic bomb before we did! We could not run the mortal risk of being outstripped in this awful sphere.”
The Race for the Atomic Bomb
Leif Tronstad wanted to lead the mission back to Norway, but was ordered by the British to train Norwegian refugees rather than return for the impossible mission himself. In 1941 Harold Urey visited Britain, where Leif Tronstad pushed Urey to convince President Franklin Roosevelt that the Allies needed to develop an atomic weapon before the Germans did. The captured Norsk Hydro heavy water production plant at Vemork, Norway gave Nazi Germany a head start. The American military wanted to bomb the plant from the air, which was fortified under seven stories of concrete walls. Leif Tronstad pleaded not to bomb the plant and risk killing civilians, because the plant also produced anhydrous ammonia, which is extremely explosive. In November of 1942 the first mission was sent to Norway, led by two groups of commandos. When the second group’s planes drifted off course in bad weather and crashed, most of the commandos were killed in the crash, and the survivors were executed by German soldiers. The first group, which had parachuted into the frozen terrain, was now isolated and had to face a harsh winter alone, dodging the German forces who patrolled the neighboring mountains and staving off starvation. The Germans were now also on guard that an attack would soon come. In February of 1943 a Norwegian special operations team parachuted in behind enemy lines and located the stranded team in the mountains. Under the cloak of night, the team scaled the rock cliffs of the mountainous valley and broke into the manufacturing room of the plant. With plastic explosives the team blew up the room and fled over the frozen mountainous landscape. The mission was a success; however, in the summer of 1943 the plant was repaired. Bombing raids by the American Air Force then devastated the town. The remaining heavy water that had been produced was to be transported back to Germany, but the boat carrying it was blown up in an act of sabotage in February of 1944.
By October of 1944, Leif Tronstad had returned to fight in Norway as a resistance fighter. Sadly, he was killed in action in March 1945, a few months before Allied forces dropped the first atomic bombs on the Japanese cities of Hiroshima and Nagasaki in August of 1945. The atomic bombs killed around 200,000 people, bringing a dramatic end to the war.
The Hydrogen Bomb
With knowledge of isotopes and an understanding of how to read the chart of the nuclides, you can appreciate the frightening nature of atomic power. For example, there is another isotope of hydrogen that contains 1 proton and 2 neutrons, called tritium (3H), which has an atomic mass of 3. Unlike deuterium, which is stable, tritium is very radioactive and decays within a few years, with a half-life of 12.32 years. Half-life is the length of time for half of the atoms to decay, so in 12.32 years 50% of the atoms will remain, in 24.64 years only 25% of the atoms will remain, and with each further 12.32 years into the future, the percentage of remaining tritium will decrease by one half. As a very radioactive isotope, tritium is made inside hydrogen atomic bombs (H-bombs) by bombarding the stable isotope lithium-6 (6Li) with free neutrons, which splits the lithium into tritium and helium and boosts the energy released. Tritium scarcely exists in nature because it decays so quickly, but it is a radioactive component of the nuclear fallout of the much more powerful H-bombs, or hydrogen bombs, first tested after the war in 1952.
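The halving rule described above can be written as a one-line formula: the fraction of a sample remaining after a time t is 0.5 raised to the power of t divided by the half-life. A minimal Python sketch, using the tritium half-life of 12.32 years from the text:

```python
# Radioactive decay: remaining fraction = 0.5 ** (t / half_life).
TRITIUM_HALF_LIFE = 12.32   # years

def fraction_remaining(years, half_life=TRITIUM_HALF_LIFE):
    """Fraction of the original atoms left after the given number of years."""
    return 0.5 ** (years / half_life)

fraction_remaining(12.32)   # 0.5   after one half-life
fraction_remaining(24.64)   # 0.25  after two half-lives
```

Note that the formula also works for times that are not whole multiples of the half-life, which is what makes radioactive decay usable as a continuous clock.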
Are there hydrogen atoms with 1 proton and 3 neutrons? No; it appears that atoms with this configuration can’t persist. Hydrogen atoms with 3, 4, 5 and 6 neutrons decay so quickly that it is nearly impossible to detect them. The energy released can be measured when hydrogen atoms are bombarded by neutrons, but these atoms are so unstable that they can’t exist for any length of time. In fact, for most proton and neutron combinations there are no existing atoms in nature. The numbers of protons and neutrons are fairly equal in stable atoms, although the larger the atom, the more neutrons are present. For example, plutonium contains 94 protons (the greatest number of protons in a naturally occurring element), but contains between 145 and 150 neutrons to hold those 94 protons together, and even with these neutrons, all isotopes of plutonium are radioactive, with the 244Pu isotope having the longest half-life, at 80 million years. Oganesson (294Og) is the largest isotope ever synthesized, with 118 protons and 176 neutrons (118 + 176 = 294), but it has a half-life of only 0.69 microseconds!
There are 252 isotopes of elements that do not decay and are thus stable isotopes. The largest stable isotope was long thought to be 209Bi, but it has recently been discovered to decay very, very slowly, with a half-life more than a billion times the age of the universe. The largest stable isotope known is therefore 208Pb (lead), which has 82 protons and 126 neutrons. There are actually three stable isotopes of lead, 206Pb, 207Pb and 208Pb, none of which appear to decay over time.
3d. Radiometric dating, using chemistry to tell time.
Radiometric dating to determine how old something is – the hour glass analogy
The radioactive decay of isotopes and the use of excited electron energy states have come to dominate how we tell time, from the quartz crystals in your wristwatch and computer to atomic clocks onboard satellites in space. Measuring radioactive isotopes and electron energy states is the major way we tell time in the modern age. It also enables scientists to determine the age of an old manuscript a few thousand years old, as well as uncovering the age of the Earth itself, at 4.6 billion years. Radioactive decay of isotopes has revolutionized how we measure time, from milliseconds up to billions of years, but how is this done?
First, imagine an hour glass filled with sand, its two glass spheres connected by a narrow tube. When turned over, sand from the top portion of the hour glass falls down to the bottom. This rate of sand falling is a linear rate, because only sand positioned near the opening between the glass spheres can fall. Over time the ratio of sand in the top and bottom of the hour glass changes, so that after 1 hour all the sand will have fallen to the bottom. Note that an hour glass can’t be used to measure years, nor can it be used to measure milliseconds: in the case of years, all the sand will have fallen, and in the case of milliseconds, not enough sand will have fallen in so short a length of time. The ratio is measured by determining the amount of sand in the top of the hour glass and the amount of sand in the bottom. In the chemistry of radioactive decay, we call the top sand the parent element and the bottom sand the daughter element of decay.
Radiometric dating to determine how old something is – the microwave popcorn analogy
Radioactive decay does not work like an hour glass, since each atom has the same probability of decaying, whereas in an hour glass only the sand near the opening will fall. So a better analogy than an hour glass is popcorn, in particular microwave popcorn. A bag of popcorn will have a ratio of kernels to popped corn, such that the longer the bag is in the microwave oven, the more popped corn will be in the bag. You can determine how long the bag was cooked by measuring the ratio of kernels to popped corn. If most of the bag is still kernels, the bag was not cooked for long, while if most of the bag is popped corn, it was cooked for a longer time.
The point at which half of the kernels have popped is referred to as the half-life. Half-life is the time it takes for half of the parent atoms to decay to daughter atoms. After 1 half-life the fraction of parent atoms remaining will be 0.5, after 2 half-lives 0.25, after 3 half-lives 0.125, and so on. Each half-life, the number of parent atoms is halved. In a bag of popcorn with a half-life of 2 minutes, after 2 minutes you will have half un-popped kernels and half popped popcorn, after 4 minutes the ratio will be 25% kernels and 75% popcorn, and after 6 minutes only 12.5% of the kernels will remain. Each 2 minutes, the number of kernels is reduced by one half.
You can leave the bag in the microwave longer, but the number of kernels will drop by only half for each additional 2 minutes, and you will likely burn the popcorn while a few kernels are still left un-popped. Radiometric dating works the same way.
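The difference between the hour glass and the popcorn can be made concrete: when every remaining kernel (or atom) has the same chance p of decaying in each time step, a constant fraction is lost per step rather than a constant amount, which is exactly exponential decay. A small Python sketch, with an arbitrary illustrative step probability of 0.05:

```python
# Expected survivors when each remaining atom has the same per-step
# decay probability p: a constant fraction (1 - p) survives each step.
def expected_remaining(n_atoms, p, steps):
    """Expected number of un-decayed atoms after the given number of steps."""
    return n_atoms * (1 - p) ** steps

start = 1000
expected_remaining(start, 0.05, 10)   # about 599 remain
expected_remaining(start, 0.05, 20)   # about 358: the same fraction lost again
```

Unlike the hour glass, where a fixed amount of sand falls per second, here the same proportion is lost in every interval, so the curve flattens out and a few "kernels" always linger.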
What can you date?
The first thing to consider in dating Earth materials is what precisely you are actually dating. There are four basic moments that determine the start of the clock in measuring the age of Earth materials:
1) A phase transition from a liquid to a solid, such as the moment liquid lava or magma cools into a solid rock or crystal.
2) The death of a biological organism, the moment an organism (plant or animal) stops taking in new carbon atoms from the atmosphere or food sources.
3) The burial of an artifact or rock, and how long it has remained in the ground.
4) The exhumation of an artifact or rock, that is, how long it has been exposed to sunlight.
Radiocarbon dating or C-14 dating
There are two stable isotopes of carbon (carbon-12 and carbon-13) and one radioactive isotope of carbon (carbon-14), a radioactive carbon with 6 protons and 8 neutrons. Carbon-14 decays, while carbon-12 and carbon-13 are stable and do not decay. The decay of carbon-14 to nitrogen-14 involves the conversion of one of its neutrons into a proton (beta decay). For any sample of carbon-14, half of the atoms will decay to nitrogen-14 in 5,730 years. This is the half-life, the time by which half of the atoms in a sample have decayed. This means carbon-14 dating works well with materials that are between about 500 and 25,000 years old.
Radiocarbon dating was first developed in the 1940s and pioneered by Willard Libby, who had worked on the Manhattan Project developing the atomic bomb during World War II. After the war, Libby worked at the University of Chicago developing carbon radiometric dating, for which he won the Nobel Prize in Chemistry in 1960. The science of radiocarbon dating has been around for a long time!
Radiocarbon dating measures the amount of time since the death of a biological organism, the moment an organism (plant or animal) stopped taking in new carbon atoms from the atmosphere or food sources. It can only be used to date organic materials that contain carbon, such as wood, plants, un-fossilized bones, charcoal from fire pits, and other material derived from organic matter. Since the half-life of carbon-14 is 5,730 years, this method is great for material that is only a few hundred or thousand years old, with an upper limit of about 50,000 years. Radiocarbon dating is mostly used in archeology, particularly in dating materials from the Holocene Epoch, the last 11,650 years. The first step is to collect a small piece of organic material to date, being very careful not to contaminate the sample with other organic material, such as the oils on your own hands. The sample is typically wrapped in aluminum foil to prevent contamination. In the early days of radiometric dating, before the 1980s, labs would count the decays in the sample, measuring its radioactivity: the more radioactivity, the younger the material. However, a new class of mass spectrometers was developed in the 1980s, giving the ability to directly measure the atomic mass of atoms in these samples; the steps are complex, but yield a more precise estimate of age. The steps involve determining the amount of carbon-14, as well as the two stable types, carbon-13 and carbon-12. Since these amounts depend on the amount of material, scientists look at the ratio of carbon-14 to carbon-12, and of carbon-13 to carbon-12. The higher the ratio of carbon-14 to carbon-12, the younger the material is, while the carbon-13 to carbon-12 ratio is used to make sure there is not an excess of carbon-12 in the first measurement, and to provide a correction if there is.
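Once the corrected carbon-14/carbon-12 ratio is known relative to its initial (modern) value, the decay law can be inverted to give an age. A minimal Python sketch of that last step, using the 5,730-year half-life from the text:

```python
import math

# Inverting the decay law: if `ratio` is the measured 14C/12C ratio as a
# fraction of the modern (initial) value, then
#     age = half_life * log2(1 / ratio).
C14_HALF_LIFE = 5730.0   # years

def radiocarbon_age(ratio_vs_modern):
    """Age in years of a sample whose 14C/12C ratio is the given fraction
    of the modern atmospheric value."""
    return C14_HALF_LIFE * math.log2(1.0 / ratio_vs_modern)

radiocarbon_age(0.5)    # 5730.0  one half-life has passed
radiocarbon_age(0.25)   # 11460.0 two half-lives have passed
```

In practice the "modern" value is not assumed constant but is calibrated, as described below; the sketch shows only the raw decay arithmetic.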
One of the technical problems that needed to be overcome was that traditional mass spectrometers measure only the atomic mass of atoms, and carbon-14 has the same atomic mass as nitrogen-14. Nitrogen-14 is a very common component of the atmosphere and of the air that surrounds us, which is a problem for labs. In the 1980s a new method was developed, called the Accelerator Mass Spectrometry (AMS) method, which deals with this problem.
The first step of the process is to take your sample and either combust the carbon in a stream of pure oxygen in a special furnace or react the organic carbon with copper oxide, both of which produce carbon dioxide gas. The carbon dioxide (which is often cryogenically cleaned) is reacted with hydrogen at 550 to 650 degrees Celsius over a cobalt catalyst, which produces water and pure carbon in the form of powdered graphite. The graphite is held in a vacuum in a glass vial and purged with ultra-pure argon gas to prevent contamination from the nitrogen-14 in the air, which would ruin any measurement. This graphite, or pure carbon, is ionized by adding electrons to the carbon, making it negatively charged. Any lingering nitrogen-14 will not be negatively charged in the process, because it has an additional positively charged proton. An accelerator mass spectrometer accelerates the negatively charged atoms, passing them through the machine at high speed as a beam. This beam will contain carbon-14, but also ions of carbon-12 bonded to 2 hydrogens and carbon-13 bonded to 1 hydrogen, all of which have an atomic mass of 14. To get rid of these carbon atoms bonded with hydrogen, the beam of molecules and atoms with atomic mass of 14 is passed through a stripper that removes the hydrogen bonds, and then through a second magnet, spreading carbon-12, carbon-13, and carbon-14 onto a detector for each mass. The ratio of carbon-14/carbon-12 is calculated, as well as the ratio of carbon-13/carbon-12, and compared to lab standards. The carbon-13/carbon-12 ratio is used to correct the measured carbon-14/carbon-12 ratio and to see if there is an excess of carbon-12 in the sample due to fractionation. To find the actual age in years, we also need to find out the initial amount of carbon-14 that existed at the moment the organism died.
Carbon-14 is made naturally in the atmosphere from nitrogen-14 in the air. In the stratosphere these atoms of nitrogen-14 are hit by cosmic rays, which bombard the nitrogen-14 with thermal neutrons, producing carbon-14 and an extra proton, or a hydrogen atom. This process depends on the Earth's magnetic field and on solar activity, which vary slightly in each hemisphere and when solar anomalies happen, such as solar flares. Using carbon-14/carbon-12 ratios in tree rings, where we know the year of each tree ring, we can calibrate carbon-14/carbon-12 ratios to absolute years for the last 10,000 years.
There are two ways to report the age of materials dated this way: you can apply these corrections, giving what is called a radiocarbon calendar age, or you can report the raw date determined solely from the ratio, called a carbon-14 date. Radiocarbon calendar ages will be more precise than simple carbon-14 dates, especially for older dates.
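Applying the tree-ring correction amounts to looking up a raw carbon-14 date against the calibration record. A toy sketch, with an invented calibration table (real calibration curves contain thousands of tree-ring entries):

```python
# Toy calibration table of (raw carbon-14 age, calendar age) pairs,
# invented here purely for illustration
CALIBRATION = [(0, 0), (1000, 950), (2000, 1980), (3000, 3200), (4000, 4500)]

def calibrate(raw_age):
    """Linearly interpolate a raw carbon-14 age onto the calendar scale."""
    for (x0, y0), (x1, y1) in zip(CALIBRATION, CALIBRATION[1:]):
        if x0 <= raw_age <= x1:
            return y0 + (y1 - y0) * (raw_age - x0) / (x1 - x0)
    raise ValueError("raw age outside calibration range")

print(calibrate(2500))  # midway between 1980 and 3200: 2590.0
```

Because atmospheric carbon-14 production varies over time, the real curve is not a straight line, which is why the calendar age and the raw carbon-14 date can differ by centuries.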
There is one fascinating wrinkle in determining the initial carbon-14/carbon-12 ratios for materials from the last hundred years. Because of the detonation of atomic weapons in the 1940s and 1950s, the amount of carbon-14 in the atmosphere increased dramatically after World War II, as seen in tree ring data and measurements of carbon isotopes in the carbon dioxide of the atmosphere.
This fact was used by neurologists studying brain cells, leading to the medical discovery that new brain cells are mostly not formed after birth: people born before the 1940s have lower levels of carbon-14 in their brain cells in old age than people born after the advent of the nuclear age, whose brain cells have much higher levels of carbon-14. However, over the past few decades, neuroscientists have found two brain regions, the olfactory bulbs (where you get the sense of smell) and the hippocampus (where memories are stored), that do grow new neurons throughout life, but the majority of your brain is composed of the same cells throughout your life.
Radiocarbon dating works great, but like a stopwatch, it is not going to tell us about things much older than 100,000 years. For dinosaurs and older fossils, or for rocks themselves, the next method is more widely used.
Potassium-argon (K-Ar) Dating
Potassium-argon dating is a great method for measuring the ages of materials that are millions of years old, but not great if you are looking to measure something only a few thousand years old, since potassium-40 has a very long half-life.
Potassium-argon dating measures the time since a phase transition from a liquid to a solid took place, such as the moment liquid lava or magma cools into a solid rock or crystal. It also requires that the material contain potassium in a crystal lattice structure. The most common minerals sampled for this method are biotite, muscovite, and the potassium feldspar group of minerals, such as orthoclase. These minerals are common in volcanic rocks and ash layers, making this method ideal for measuring the time when volcanic eruptions occurred.
If a volcanic ash containing these minerals is found deposited within or near the occurrence of fossils, a precise date, or a range of dates, can often be found for the fossils, depending on how far stratigraphically that ash layer lies from the fossils. Potassium-40 is radioactive, but with a very long half-life of 1.26 billion years, making it ideal for determining ages in most geologic time ranges measured in millions of years. Potassium-40 decays to both argon-40 and calcium-40. Argon-40 is a gas, while calcium-40 is a solid and very common, hence we want to look at the amount of argon-40 trapped in the crystal and compare that amount to the potassium-40 contained in the crystal (both of which are fairly rare).
This requires two steps: first, to find out how much potassium-40 is contained within the crystal, and second, how much argon-40 gas is trapped in the crystal. One of the beautiful things about potassium-argon dating is that the initial amount of argon-40 in the crystal can be assumed to be zero, since it is a gas: argon-40 was not present when the crystal was a liquid and cooled into a solid. The only argon-40 found within the crystal would be formed by radioactive decay of potassium-40 and trapped inside the solid crystal after this point. One of the problems with potassium-argon dating is that you have to run two different lab methods to measure the amount of potassium-40 and the amount of argon-40 within a single crystal, without destroying the crystal in the process of running those two separate tests. Ideally, we want to sample the exact spot on a crystal for both measurements with a single analysis. And while potassium-argon dating came about in the 1950s, it has become less common compared to another method, which is easier and more precise, and only requires a single test.
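Once both amounts are known, the age equation must account for the branching decay, since only roughly one in ten potassium-40 decays produces argon-40 (the rest produce calcium-40). A minimal sketch using the standard decay constants:

```python
import math

# Standard decay constants for potassium-40 (per year); only about
# one in ten decays follows the electron-capture branch to argon-40.
LAMBDA_TOTAL = 5.543e-10   # total decay constant of potassium-40
LAMBDA_AR    = 0.581e-10   # partial constant, potassium-40 -> argon-40

def k_ar_age(ar40_per_k40):
    """Age in years from the measured ratio of trapped radiogenic
    argon-40 atoms to remaining potassium-40 atoms."""
    return math.log(1.0 + (LAMBDA_TOTAL / LAMBDA_AR) * ar40_per_k40) / LAMBDA_TOTAL

# A crystal with an argon-40/potassium-40 ratio of 5.81e-5:
print(round(k_ar_age(5.81e-5) / 1e6, 2))  # roughly 1 million years
```

Note how tiny the argon-40/potassium-40 ratio is for young rocks, which is why precise gas measurement matters so much for this method.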
40Ar/39Ar dating method
This method uses the potassium-argon dating technique but makes it possible to do a single lab analysis on a single point on a crystal grain, making it much more precise than the older potassium-argon method. The way it works is that a crystal containing potassium is isolated and studied under a microscope, making sure it is not cracked or fractured in any way. The selected crystal is subjected to neutron irradiation, which converts the potassium-39 isotopes to argon-39, a gas that will be trapped within the crystal (this is similar to what cosmic rays do to nitrogen-14 to change it to carbon-14). These argon-39 atoms join any of the radiogenic argon-40 in the crystal as trapped gases, so we just have to measure the ratio of argon-40 to argon-39.
The argon-39 number tells us approximately how much potassium was in the crystal. After being subjected to neutron irradiation, the sample crystal is zapped with a laser, which releases both types of argon gas trapped in the crystal. This gas is drawn up within a vacuum into a mass spectrometer to measure atomic masses of 40 and 39. Note that argon-39 is radioactive and decays with a half-life of 269 years, so any argon-39 measured was generated by the irradiation done in the lab. This method often requires large, unfractured, and well-preserved crystals to yield good results. The edges of the crystal, and areas near cracks within the crystal, may have let some of the argon-40 gas leak out, and will yield too young a date. Both potassium-argon and argon-argon dating tend to give minimum ages, so if a sample yields 30 million years with a 1-million-year error, the actual age is more likely to be 31 million years than 29 million years. Often potassium-argon and argon-argon dates are younger than other evidence suggests, and were likely determined from fractured crystals with some leakage of argon-40 gas. Studies will often show the crystal sampled, where the laser points are, and the dates calculated from each point in the crystal. The maximum age is often found near the center, far from any edge or crack within the crystal. Often this will be carried out on multiple crystals in a single rock, to get a good range, taking the best resulting maximum ages. While potassium-argon and argon-argon dating are widely used, they do require nicely preserved crystals of fragile mineral grains such as biotite, which means that the older the rock, the less likely good crystals can be found. They also do not work well with transported volcanic ash layers in sedimentary rocks, because the crystals are damaged in the process.
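Converting the measured argon-40/argon-39 ratio into an age relies on a mineral standard of known age irradiated alongside the sample. A sketch of that calculation, with made-up standard values for illustration:

```python
import math

LAMBDA_K40 = 5.543e-10  # total decay constant of potassium-40, per year

def j_factor(standard_age, standard_ar40_ar39):
    """Irradiation parameter J, from a co-irradiated mineral
    standard whose age is already well established."""
    return (math.exp(LAMBDA_K40 * standard_age) - 1.0) / standard_ar40_ar39

def ar_ar_age(ar40_ar39, j):
    """Age in years of the unknown, from its measured argon-40/argon-39 ratio."""
    return math.log(1.0 + j * ar40_ar39) / LAMBDA_K40

# Hypothetical standard: a 28.2-million-year-old crystal measuring 40Ar/39Ar = 12.0
j = j_factor(28.2e6, 12.0)
# An unknown crystal measuring half the standard's ratio:
print(round(ar_ar_age(6.0, j) / 1e6, 1))  # roughly 14.2 million years
```

The J factor absorbs all the details of the particular irradiation, so each batch of samples is dated relative to the standard that sat in the reactor with it.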
Geologists were eager to use other, more rugged minerals that could last billions of years yet preserve the chemistry of radioactive decay. The mineral that meets those requirements is zircon.
Zircon Fission track dating
Zircons are tough and rugged minerals, found in many igneous and metamorphic rocks, and are composed of zirconium silicate (ZrSiO4), which forms small diamond-like crystals. Because these crystals are fairly rugged and can survive transport, they are also found in many sandstones. These transported zircons in sedimentary rocks are called detrital zircons. With most zircon dating, you are measuring the time since the phase transition from a liquid to a solid, when magma cooled into a solid zircon crystal.
Zircon fission track dating more specifically measures the time since the crystal cooled to 230 to 250 °C, which is called the annealing temperature. Between 900 °C and 250 °C the zircons are somewhat mushy. Zircon fission track dating dates the cooler temperature at which the crystal became hard, while another method (discussed below) dates the hotter temperature at which the crystal became a solid. Zircons are composed of a crystal lattice of zirconium bonded to silicate (silica and oxygen tetrahedra). The zirconium is often replaced in the crystal by atoms of similar size and bonding properties, including some of the rare earth elements, but what we are interested in is that zircon crystals contain trace amounts of uranium and thorium. Uranium and thorium are two of the largest naturally occurring atoms on the periodic table; uranium has 92 protons, while thorium has 90. Both elements are radioactive and decay with long half-lives. These atoms of uranium and thorium act like mini-bombs inside the crystal: when one of these high atomic mass atoms decays, it sets off a long chain reaction of decaying atoms, the fission of which causes damage to the internal crystal structure.
The older the zircon crystal is, the more damage it will exhibit. Fission track dating was developed as an independent test of potassium-argon and argon-argon dating, as it does not require an expensive mass spectrometer, only a powerful microscope to examine the crystal and measure the damage caused by the radioactive decay of uranium and thorium. Zircon fission track dating is also used to determine the thermal history of rocks as they rose up through the geothermal gradient, recording the length of time it took them to cool to 250 °C.
Uranium–Lead dating of Zircons
Uranium-lead dating is the most common way to date rocks and was used to determine the age of the Earth, meteorites, and even rocks from the Moon and Mars. It has become the standard method for radiometric dating, as new technology has made it much easier. In the 1950s and 1960s, geologists were eager to figure out a way to use the uranium and lead inside zircons to get a specific date, more precise than estimates based on fission track dating, which was somewhat subjective. The problem was that all those tiny radioactive atomic bombs causing damage to the zircon crystals over millions of years were also causing the loss of daughter products, which would escape during those decay events, such as the gas radon. The decay of the two most common isotopes of uranium (uranium-235 and uranium-238) is a complex chain of events, during which the radioactive gas radon is produced as one of the steps. If there are cracks or fractures in the crystal, the radon gas escapes from the crystal, and as a result the ratio would yield too young a date. If the radon gas is still held within the crystal, it decays back to a solid, eventually becoming lead. Lead is not found initially within zircon crystals; any lead within a zircon comes from the decay of uranium isotopes, allowing radiometric dating.
During the 1940s and 1950s a young scientist named Clair Cameron Patterson was trying to determine the age of the Earth. Rather than look at zircons, he was trying to date meteorites, which contain the stable isotope lead-204, using a type of uranium-lead dating simply called lead-lead dating. Patterson used an isochron, which graphically compares the ratios of lead produced through the decay of uranium and thorium (lead-206 and lead-207) with stable lead-204 (an isotope not produced by radioactive decay of uranium and thorium). By plotting these ratios on a graph, the resulting slope indicates the age of the sample; the line is called an isochron, meaning the same age. Using lead isotopes recovered from the Canyon Diablo meteorite of Arizona, Patterson calculated in 1956 that the Earth was between 4.5 and 4.6 billion years old.
To acquire these ratios of lead, Clair Patterson developed the first chemical clean room, as he quickly discovered abundant lead contamination in the environment around him, traced to the widespread use of lead in gasoline, paints, and water pipes in the 1940s and 1950s. Patterson dedicated much of his later life to fighting corporate lobbying groups and politicians to enact laws prohibiting the use of lead in household products such as fuel and paints. The year 1956 was also the year that the uranium-lead problem was solved, when the brilliant scientist George Wetherill published a solution called the Concordia diagram, sometimes called the Wetherill diagram, which allowed direct dating of zircons. There are two types of uranium isotopes in these zircons: uranium-238 (the most common), which decays to lead-206, and uranium-235 (the next most common), which decays to lead-207 (with different half-lives).
If you could measure these two ratios in a series of zircon crystals and compare the ratios graphically, you could calculate the true ratios of the zircons as if they had not lost any daughter products. Using this set of ratios, you can determine where the two ratios would cross for a given age, and hence where they would be concordant with each other. It was a brilliant solution to the issue of daughter products escaping from zircon crystals. Today geologists can analyze particular points on individual zircon crystals, and hence select the best spot on the crystal with the minimum amount of leakage of daughter products; using the Concordia diagram allows a correction to the resulting ratios.
Uranium-lead dating requires you to determine two ratios: uranium-238 to lead-206, and uranium-235 to lead-207. Uranium-238 has a half-life of 4.46 billion years, while uranium-235 has a half-life of 704 million years, making the pair great for both million-year and billion-year scales of time. Zircons found in sedimentary rock will yield the age when the zircon initially formed from magma, not when it was re-deposited in a sedimentary layer or bed. Zircons found in sedimentary rocks are called detrital zircons and will yield maximum ages; for example, a detrital zircon that is 80 million years old could be found in sedimentary rock deposited 50 million years ago, with the 30-million-year difference being the time during which the zircon was exhumed, eroded from igneous rocks, and transported into the sedimentary rock. A 50-million-year-old zircon will NOT be found in sedimentary rocks that are in fact 80 million years old, so detrital zircons tell you only that the rock is younger than the zircon age.
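The two decay systems act as two independent clocks, and a zircon that lost no daughter product must satisfy both at the same age, which is what the Concordia curve expresses. A sketch of the underlying math, using these two half-lives:

```python
import math

# Decay constants (per year) derived from the two half-lives
LAMBDA_238 = math.log(2) / 4.46e9   # uranium-238 -> lead-206
LAMBDA_235 = math.log(2) / 7.04e8   # uranium-235 -> lead-207

def concordia_point(age):
    """The (lead-206/uranium-238, lead-207/uranium-235) ratios a zircon
    of the given age would show if no daughter product ever escaped."""
    return (math.exp(LAMBDA_238 * age) - 1.0,
            math.exp(LAMBDA_235 * age) - 1.0)

def age_from_pb206_u238(ratio):
    """Invert the first clock: age in years from lead-206/uranium-238."""
    return math.log(1.0 + ratio) / LAMBDA_238
```

Plotting concordia_point over a range of ages traces the Concordia curve; zircons that leaked daughter products plot off the curve, and that discordance is what the Wetherill diagram corrects for.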
Igneous rocks and volcanic ash layers that contain fresh zircons can yield very reliable dates, particularly when the time between crystallization and deposition is minimal.
The first step of uranium-lead dating is finding and isolating zircon crystals from the rock, usually by grinding the rock up and using heavy liquid to separate out the zircon crystals. The zircons are then studied under a microscope to determine how fresh they are; if zircons were found in a sedimentary rock, they are likely detrital, and the damage observed in the crystal will tell you how fresh they are. Detrital zircons are dated by sedimentary geologists in studies to determine the source of sedimentary grains; however, they often lack the resolution for precise dates, unless the zircon crystals were deposited in a volcanic ash and have not been eroded and transported. Once zircons are selected, they are analyzed using laser ablation inductively coupled plasma mass spectrometry, abbreviated LA-ICP-MS, which zaps the crystals with a laser; the ablated material is drawn up and ionized in the mass spectrometer under extremely hot temperatures, creating a plasma that passes the atoms along a tube at high speed and measuring the atomic mass of the resulting atoms scattered along the length of the plasma tube. LA-ICP-MS can measure atoms of large atomic mass, such as lead and uranium. It does not require much lab preparation, and zircons can be analyzed quickly, resulting in large sample sizes for distributions of zircons and very precise dates. Zircon uranium-lead dating is the most common type of dating seen today in the geological literature, exceeding even the widely used argon-argon dating technique. It is also one of the more affordable methods of dating, requiring less lab preparation of samples.
Dating using electron energy states
One of the things you will note about these dating methods is that they are used either to date organic matter that is less than 100,000 years old, or volcanic and igneous minerals that are much older, between 1 million and 5 billion years old.
That leaves us with a lot of materials that we can't date using those methods, including fossils over 100,000 years old, and sedimentary rocks, since detrital zircons will only give you the date when they became a solid crystal, rather than the age of the sedimentary rocks they are found in. Also, using these methods we can't determine the age of stone or clay pottery artifacts, the age of glacial features on the landscape, or fossilized bone directly.
One place that is notoriously difficult to date is cave deposits containing early species of humans, which are often older than the limits of radiocarbon dating. This problem is exemplified by the controversial ages surrounding the Homo floresiensis discovery, a remarkably small species of early human found in 2003 in a cave located on the island of Flores in Indonesia. Physical anthropologists have argued that the species shares morphological similarities with Homo erectus, which lived in Indonesia from 1.49 million years ago to about 500,000 years ago. Homo erectus was the first early human to migrate out of Africa, and fossils discovered in Indonesia were some of the oldest, as determined from potassium-argon and zircon fission track dating. However, radiocarbon dates from the cave where the tiny Homo floresiensis was found were much younger than expected, at 18,700 and 17,400 years old, which is old, but not as old as the anthropologists had expected if the species was closely related to Homo erectus. Researchers decided to conduct a second analysis, and they turned to luminescence dating.
Luminescence (optically and thermally stimulated)
There are two types of luminescence dating, optically stimulated and thermally stimulated. They measure the time since the sediment or material was last exposed to sunlight (optical) or heat (thermal). Luminescence dating was developed in the 1950s and 1960s, initially as a method to date when a piece of pottery was made. The idea was that during the firing of clay in a pottery kiln to harden the pottery, the quartz crystals within the pottery would be subjected to intense heat and energy, and the residuals of this energy would dim slowly long after the pottery had cooled down. Early experiments in the 1940s on heating crystals and observing the light emitted showed that materials could fluoresce (spontaneously glow while being excited) and phosphoresce (continue to give off light for a longer period of time, long after the material was exposed to the initial light or heat).
If you have ever played with glow-in-the-dark objects, you can see this: when you expose the object to light and then turn off the light, the object glows for a long while until it dims to the point you can't see it anymore. This effect is called phosphorescence. It was also known that material near radioactive substances would give off lasting fluorescence and phosphorescence, so the material does not have to be heated or in light; radioactive particles can excite a material to glow as well.
What causes this glow is that when electrons in an atom are excited by intense heat, exposure to sunlight (photons), or even radioactivity, they move up in energy levels; these electrons then quickly drop back down in energy levels, emitting photons as observable light as the object cools or is removed from the light. In some materials these electrons become trapped at the higher energy levels, and slowly and spontaneously pop back down to the lower energy levels over a more extended period of time. When electrons drop down from their excited states they emit photons, prolonging the glow of the material over a longer time period, perhaps even thousands of years.
Scientists wanted to measure the remaining trapped electrons in ancient pottery: the dimmer the glow observed, the older the pottery would be. Early experiments were successful, and later this tool was expanded to materials exposed to sunlight rather than heat. The way it works is to determine two things. The first is the radiation dose rate, which tells you how much radiation the crystal absorbs over time; this is usually done by measuring the amount of radioactive elements in the sample and its surroundings. The second is the total amount of absorbed radiation, which is measured by exposing the material to light or heat and counting the number of photons it emits. Using these two measurements you can calculate the age since the material was subjected to the initial heat or light. There are three types of luminescence dating.
The first is TL, or thermoluminescence dating, which uses heat to measure the number of photons given off by the material. The second is infrared stimulated luminescence (IRSL), and the third is optically stimulated luminescence (OSL); both of these methods refer to how the photons are measured in the lab, by stimulating them with either infrared light or visible optical light. The technique works well, but there is a limit to how dim the material can be and still give you useful information, so it works well for materials that are 100 to 350,000 years old, similar to the range of radiocarbon dating, but it can be carried out on different materials, such as pottery, stone artifacts, and the surfaces of buried buildings and stonework.
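The final age calculation from the two measurements described above is a simple division. A sketch, with invented illustration values:

```python
def luminescence_age(equivalent_dose_gray, dose_rate_gray_per_year):
    """Years since the last heating or sunlight exposure:
    total absorbed radiation dose divided by the annual dose rate."""
    return equivalent_dose_gray / dose_rate_gray_per_year

# Hypothetical quartz grain: 54 gray of absorbed dose, buried in
# sediment delivering 3 milligray per year
print(luminescence_age(54.0, 0.003))  # 18000.0 years
```

Most of the laboratory effort goes into measuring those two numbers accurately; the arithmetic at the end is trivial by comparison.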
In addition to determining the radiocarbon age of Homo floresiensis, researchers used luminescence dating and found a TL maximum date of 38,000 years and an IRSL minimum date of 14,000 years, suggesting that the 18,000-year-old date was correct for the skeletons found in the cave. These ages record when the sediments were last exposed to sunlight, not when they were actually deposited in their current place in the cave, so there is likely a lot of mixing going on inside the cave.
Uranium series dating
As a large atom, uranium decays to lead over a very long half-life, and there are two uranium decay chains: one for uranium-235, which decays to the stable isotope lead-207, and one for uranium-238, which decays to the stable isotope lead-206. What scientists look at here is just a segment of that long decay chain, starting with the decay of uranium-238 to uranium-234, the first part of the uranium-238 chain.
They then measure the amount of uranium-234 decaying to thorium-230. Uranium-238 decays to thorium-234 with a half-life of about 4.46 billion years; thorium-234 decays to protactinium-234 with a half-life of about 24 days; protactinium-234 quickly decays to uranium-234; and finally uranium-234, with a half-life of 245,500 years, decays to thorium-230.
The decay of uranium-234 to thorium-230 can be used to measure ages within a few hundred thousand years. There is a problem with this method: scientists don't know the initial amount of uranium within the bone or sediment being measured. There is an unknown amount of uranium-234 starting out in the bone or sediment, because uranium oxide is often carried by groundwater moving into and out of the fossil and the pores between sediment grains.
So unlike other dating methods, where the initial amount of daughter product was assumed to be zero, or could be determined experimentally, as in carbon-14 dating, we can't make that case here. Instead we have to build a diffusion model, and the results are often called modeled ages.
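The closed-system relation that these models correct is simple: if a sample started with no thorium-230 and its uranium content never changed, the thorium-230/uranium-234 activity ratio alone would date it. A sketch of that idealized calculation (real fossils violate both assumptions, which is why modeled ages are needed):

```python
import math

TH230_HALF_LIFE = 75_380.0                    # years
LAMBDA_230 = math.log(2) / TH230_HALF_LIFE    # decay constant, per year

def u_series_age(th230_u234_activity_ratio):
    """Idealized closed-system age: assumes no thorium-230 at time zero
    and unchanging uranium content (assumptions a diffusion model
    must correct in real fossils)."""
    return -math.log(1.0 - th230_u234_activity_ratio) / LAMBDA_230

print(round(u_series_age(0.5)))  # one thorium-230 half-life: 75380 years
```

As the ratio approaches 1, the sample approaches equilibrium and the clock saturates, which is why the method tops out at a few hundred thousand years.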
The way this is done is that the bone is sectioned, cleaned, and laser ablated at various points across its depth, measuring the ratios between uranium-234 and thorium-230. Because the bone absorbed more uranium-234 over time, the outer layers of the bone will be enriched in uranium-234 compared to the internal part of the bone. Using this gradient of uranium-238, uranium-234, and thorium-230, a diffusion model can be built to determine the amount of uranium-234 likely in the bone when the fossil organism died, and the amount of thorium-230 resulting from the decay of this uranium-234 as additional uranium-234 was added during the fossilization process. Because of this addition of uranium-234, and the fact that uranium-234 is very rare (as it is produced only by the decay of uranium-238), this method is reserved for difficult cases, such as dating fossils deposited in hard-to-date cave deposits, especially those near or beyond the upper limits of radiocarbon dating, between 100,000 and 500,000 years old.
Uranium series dating was used to re-examine the age of Homo floresiensis by looking at the actual fossil bone itself. Uranium series dating of the bone of Homo floresiensis resulted in ages between 66,000 and 87,000 years old, older than the radiocarbon dates from the nearby charcoal (17,400-18,700 years old) and the luminescence dates of sediment in the cave (14,000-38,000 years old), but modeled on the actual bones themselves. These are modeled ages, since you have to determine the diffusion of uranium into the pores of the bone as it was fossilized in the cave, which can yield somewhat subjective dates.
Uranium series dating was also done for another problematic early human cave discovery, the age of Homo naledi from the Rising Star cave in South Africa. Fossil teeth were directly dated using uranium series dating, yielding a minimum age of 200,000 years, for a species that had been predicted to be about 1,000,000 years old.
Uranium series dating tends to have large error bars, due to the modeling of the diffusion of uranium into the fossils and rocks, which depends on how quickly and how much uranium-238 and uranium-234 were added to the fossil over time in the cave. Uranium series dating is really only used in special cases where traditional dating, such as radiocarbon dating and uranium-lead dating, can't be done.
Electron Spin Resonance (ESR)
In April of 1986 the Chernobyl Nuclear Power Plant suffered a critical meltdown, resulting in an explosion and fire that released large amounts of radioactive material into the nearby environment. The accident led to the death of 31 people directly from radiation, and 237 people suffered acute radiation sickness. Worry spread across Europe about how to measure exposure to radiation from the accident, and electron spin resonance was developed by Soviet scientists to measure radiation exposure by looking at teeth, particularly the baby teeth of children living in the area.
Electron spin resonance is the measurement of the number of unpaired electrons within atoms. When exposed to radiation, electrons will unpair from their typical covalent bonds and become unpaired within the orbitals, resulting in a slight difference in the magnetism of the atom. This radiation damage results in the breaking of molecular bonds, and is the reason radiation causes cancers and damage to living cells; at the atomic level, radiation can break molecules, resulting in abnormally high errors in DNA and proteins within living cells. Electron spin resonance measures the number of free radical electrons within a material.
Using this measurement, scientists measured the amount of electron spin resonance in teeth from children who lived near the accident to determine how much exposure they had to radiation fall-out from the Chernobyl accident. The study worked, which led to the idea of using the same technology on fossilized teeth exposed to naturally occurring radiation in the ground.
Dating using electron spin resonance requires that we know the amount of uranium and radioactivity in the surrounding material throughout its history, and calculate the length of exposure time to this radiation. The issue, however, is that you have to model the uptake of uranium within the fossil over time, similar to the model you develop with uranium series dating. This is because in both methods of dating you can't assume that the uranium (and the amount of radioactivity) in the material remained the same, as the uptake of fresh uranium over time likely occurred. Often scientists will focus on the dense crystal lattice structure of enamel, a mineral called hydroxyapatite, as it is less susceptible to the uptake of uranium.
Electron spin resonance is often paired with uranium series dating, since it has a similar range of useful ages, from 100 up to 2,000,000 years. Unpaired electrons within atoms are a more permanent state than the electrons at higher energy levels used in luminescence dating, so older fossils can be dated, up to 2 million years old. This dating method can't be used for the vast majority of fossils older than 2 million years, but it can be used to date the length of time a fossil or rock was buried up to that limit. Note that electron spin resonance dating determines the length of time a fossil was buried in sediment with a background radiation that can be measured.
Surface Exposure Dating or Beryllium-10 dating
This dating method has revolutionized the study of past ice ages over the last 2.5 million years, and the study of the glacial and interglacial cycles of Earth's recent climate. Surface exposure dating can determine the length of time a rock has been exposed at the surface. Ascertaining how long a rock has been exposed allows geologists to discover when that rock or boulder was deposited by a melting glacier, and hence the timing and extent of those glaciers on a local level throughout past ice age events.
The way it works is that when rocks are exposed at the surface, they are bombarded by cosmic rays from space, which generate showers of neutrons. These rays cause something called spallation of the atoms in mineral crystals, resulting in the build-up of cosmogenic nuclides.
There are a number of different types of cosmogenic nuclides. For example, we had previously talked about potassium-40 being hit with neutrons in a lab setting to produce argon-39, in argon-argon dating. The same thing happens in nature when rocks are left exposed for a long time, and you could measure the amount of argon-39 produced. Most geologists instead look for nuclides that form solids, as they are easier to extract from the rock; these include beryllium-10, one of the most widely used cosmogenic nuclides.
Beryllium-10 is not found in the quartz minerals common in rocks and boulders when they form, but accumulates when the oxygen atoms within the crystal lattice structure are struck by cosmic rays containing short-lived free neutrons. The beryllium-10 builds up within the crystals as long as the rock remains exposed. Beryllium-10 is an unstable, radioactive isotope with a half-life of 1.39 million years, making it ideal for most dating applications in the Pleistocene Epoch. Most rocks studied so far have exposure ages of less than 500,000 years, indicating that most rocks get re-buried within half a million years.
Surface exposure dating is different because we are looking at the amount of beryllium-10 building up within the surface of the rock over time: the more beryllium-10 within the rock, the longer it has been exposed. If the rock becomes obscured from the sky, through burial or a tree growing next to it, the build-up of beryllium-10 will be slowed or turned off, and over time it will decay to boron-10, emptying the beryllium-10 out of the rock and resetting the clock for the next time it is exposed. Geologists have to be sure that the rock has been well exposed and not shaded by any natural feature, like trees, in the recent past.
Geologists will select a boulder or rock and carefully record its location, as well as the horizon line surrounding the rock, to account for the length of exposure on any given day at that location.
A small explosive charge is drilled into the rock, and rock fragments are collected from the surface edge of the rock. Back in the lab, the sample is ground into a powder and digested with hydrofluoric acid to isolate the quartz crystals, which are dissolved into a liquid solution within a very strong acid. This solution is reacted with various chemicals to isolate the beryllium as a white powder, which is then passed through a mass spectrometer to measure the amount of beryllium-10 in the rock. This amount is compared to a model of the rock's exposure at that location, accounting for the topography of the surrounding features, to determine the length of time that rock has sat on the surface of the Earth. It is a powerful method that has become highly important in understanding the glacial history of the Earth through time.
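Under simplifying assumptions (constant production, no erosion or shading), the beryllium-10 concentration grows as N(t) = (P/λ)(1 − e^(−λt)), where P is the local production rate and λ the decay constant, and this can be inverted for the exposure age. The production rate and concentration below are hypothetical round numbers; real rates depend on latitude, altitude, and the surrounding horizon:

```python
import math

BE10_HALF_LIFE_YEARS = 1.39e6
DECAY_CONSTANT = math.log(2) / BE10_HALF_LIFE_YEARS  # lambda, per year

def exposure_age_years(atoms_per_gram, production_rate_atoms_per_gram_year):
    """Invert N(t) = (P/lam) * (1 - exp(-lam * t)) for the exposure time t."""
    lam = DECAY_CONSTANT
    ratio = atoms_per_gram * lam / production_rate_atoms_per_gram_year
    return -math.log(1.0 - ratio) / lam

# Hypothetical quartz sample: 50,000 atoms/g with production of 5 atoms/g/yr
age = exposure_age_years(5.0e4, 5.0)
print(round(age))  # just over 10,000 years (slightly more than N/P, due to decay)
```

For young exposures the answer is close to the naive N/P; the correction for radioactive decay grows as the age approaches the 1.39-million-year half-life, which is also why ages much beyond a few half-lives cannot be resolved.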
Magnetostratigraphy is the study of the magnetic orientations of iron minerals within sedimentary rocks. These orientations record the direction of the magnetic pole when the sedimentary rocks were deposited. Just like a magnetic compass, iron minerals transported in lava, magma, or sediment will orient to the Earth's magnetic field at the time. The Earth's magnetic field is not stationary, but moves around. In fact, the orientation of the poles switches randomly every few hundred thousand years or so, such that a compass would point toward the south pole rather than the north pole. This change in the orientation of the iron minerals is recorded in the rock layers formed at that time. Measuring these orientations between normal polarity, when the iron minerals point northward, and reversed polarity, when the iron minerals point southward, gives you events that can be correlated between different rock layers.
The thickness of these bands of changing polarity can also be compared to igneous volcanic rocks, which record both an absolute age (using potassium-argon dating, for example) and the polarity at that time, allowing correlation between sedimentary and igneous rocks. Magnetostratigraphy is really important because it allows the dating of fossil-bearing sedimentary rocks even when no volcanic ash layers are present.
There are a couple of problems with magnetostratigraphy. One is that sedimentary rocks can become demagnetized; for example, a lightning strike will scramble the orientations of the iron grains. It can also be difficult to correlate layers of rock if sedimentation rates vary greatly or if there are unconformities you are unaware of. However, it works well for many rock layers, and is a great tool for determining the age of rocks by documenting these reversals in the rock record. It was also one of the key technologies used to demonstrate the motion of Earth's tectonic plates.
Rock samples are collected in the field by recording their exact orientation and carefully extracting the rock so as not to break it. The rock is then taken to a lab, where it is placed in an iron cage to remove the interference of magnetic fields from the surrounding environment. Because sedimentary rocks are not very magnetic, the sample is cryogenically cooled to extremely cold temperatures just above absolute zero, where the residual magnetism of the rock is easier to measure; more magnetic rocks, like igneous rocks, do not have to be cooled. The rock is slowly demagnetized and the orientation vectors are recorded and spatially plotted.
These data points will fall either more toward the north or the south, depending on the polarity of the Earth's field at the time of deposition. The transition between polarity states is brief, so the majority of time the polarity sits in either the normal or the reversed state. Sometimes the polarity changes rapidly, while at other times it does not change for millions of years. One such long interval occurred during the Cretaceous Period, in the age of dinosaurs, when the polarity stayed normal for about 40 million years; this interval is called the Cretaceous Superchron, and geologists do not know why it happened.
Overview of Methods
| Method | Range of Dating | Material that can be dated | Process of Decay |
| --- | --- | --- | --- |
| Radiocarbon | 1 - 70 thousand years | Organic material such as bones, wood, charcoal, and shells | Radioactive decay of 14C in organic matter after removal from biosphere |
| K-Ar and 40Ar-39Ar dating | 10 thousand - 5 billion years | Potassium-bearing minerals | Radioactive decay of 40K in rocks and minerals |
| Fission track | 1 million - 10 billion years | Uranium-bearing minerals (zircons) | Measurement of damage tracks in glass and minerals from the radioactive decay of 238U |
| Uranium-Lead | 10 thousand - 10 billion years | Uranium-bearing minerals (zircons) | Radioactive decay of uranium to lead via two separate decay chains |
| Uranium series | 1 thousand - 500 thousand years | Uranium-bearing minerals, corals, shells, teeth, CaCO3 | Radioactive decay of 234U to 230Th |
| Luminescence (optically or thermally stimulated) | 1 thousand - 1 million years | Quartz, feldspar, stone tools, pottery | Burial or heating age based on the accumulation of radiation-induced damage to electrons sitting in mineral lattices |
| Electron Spin Resonance (ESR) | 1 thousand - 3 million years | Uranium-bearing materials in which uranium has been absorbed from outside sources | Burial age based on abundance of radiation-induced paramagnetic centers in mineral lattices |
| Cosmogenic Nuclides (Beryllium-10) | 1 thousand - 5 million years | Typically quartz or olivine from volcanic or sedimentary rocks | Radioactive decay of cosmic-ray generated nuclides in surficial environments |
| Magnetostratigraphy | 20 thousand - 1 billion years | Sedimentary and volcanic rocks | Measurement of ancient polarity of the earth's magnetic field recorded in a stratigraphic succession |
3e. The Periodic Table and Electron Orbitals.
Electrons: how atoms interact with each other
If it were not for the electrons inside atoms, atoms would never bond or interact with each other to form molecules, crystals, and other complex materials. Electrons are extremely important in chemistry because they determine how atoms interact with each other. It is no wonder that most science classrooms display the Periodic Table of Elements rather than the more cumbersome Chart of the Nuclides, since the Periodic Table of Elements organizes elements by the number of protons and electrons, rather than the number of protons and neutrons.
As discussed previously, electrons are wayward subatomic particles that can increase their energy states and even leave atoms altogether, forming a plasma; the flow of such free electrons is electricity, which can move near the speed of light across conducting materials like metal wires. In this next section we will look in detail at how electrons are arranged within atoms in orbitals. Remember, however, that in highly excited atoms bombarded with high levels of electromagnetic radiation, or subjected to high temperatures and pressures, electrons can leave atoms, while at very cold temperatures near absolute zero electrons sit very close to the nucleus, forming a Bose–Einstein condensate. When we think of temperature (heat), what is really indicated is the energy state of the electrons within the atoms of a substance, whether a gas, liquid, or solid: the hotter a substance becomes, the more vibrational energy its electrons have.
Electrons orbit the nucleus at very fast speeds along no discrete path, but within an electromagnetic region called an orbital shell. The Heisenberg uncertainty principle describes why these electron orbitals are impossible to measure exactly: any time a photon is used by a scientist to measure the position of an electron, the electron moves and changes its energy level. There is always an uncertainty as to the exact location of an electron within its orbit around the atom's nucleus. As such, electron orbital shells are probability fields describing where an electron is likely to exist at any moment in time.
Negatively charged electrons are attracted to positively charged protons, such that equal numbers of electrons and protons are observed in most atoms.
Early chemists working in the mid-1800s knew of only a handful of elements, which were placed into three major groups based on how reactive they were with each other: the halogens, the alkali metals, and the alkaline earths. By 1860, the atomic masses of many of these elements had been reported, allowing the Russian scientist Dmitri Mendeleev to arrange elements based on their reactive properties and atomic mass.
While working on a chemistry textbook, Mendeleev stumbled upon the idea of arranging each set of similarly reactive elements in order of increasing atomic mass, such that a set of halogens would contain elements of differing mass. Without knowing the underlying reason, Mendeleev had organized the elements by their atomic number (the number of protons), which is related to atomic mass, and by their electron orbitals, which determine how readily an element bonds with other elements. While these early periodic tables look nothing like our modern Periodic Table of Elements, they excited chemists to discover more elements. The next major breakthrough came with the discovery and wide acceptance of the noble gases, which include helium and argon, the least reactive elements known.
The Periodic Table of Elements
So how does an atom's reactivity relate to its atomic mass? Electrons are attracted to the atomic nucleus in equal number to the protons, which make up roughly half the atomic mass. The greater the atomic mass, the more protons, and the more electrons will be attracted. However, electrons prefer to fill electron orbital shells in complete sets, such that an incomplete orbital shell will attract additional electrons, even when the number of electrons already equals the number of protons. If an atom has a complete set of electrons matching its number of protons, it will be non-reactive, while elements that need to lose one electron or gain one more to complete an orbital set are the most reactive types of elements.
- Alkali metals²
- Alkaline earth metals²
- Lanthanides¹ ²
- Actinides¹ ²
- Transition metals²
- Poor metals
- Metalloids
- Nonmetals
- Halogens³
- Noble gases³

¹Actinides and lanthanides are collectively known as "Rare Earth Metals." ²Alkali metals, alkaline earth metals, transition metals, actinides, and lanthanides are all collectively known as "Metals." ³Halogens and noble gases are also non-metals.
State at standard temperature and pressure
- those with atomic number in blue are not known at STP
- those with atomic number in red are gases at standard temperature and pressure (STP)
- those with atomic number in green are liquids at STP
- those with atomic number in black are solid at STP
- those with a cyan background have unknown chemical properties.
The First Row of the Periodic Table of Elements (Hydrogen & Helium)
The first row of the Periodic Table of Elements contains two elements, Hydrogen and Helium.
Hydrogen has 1 proton, and hence it attracts 1 electron. However, the orbital shell would prefer to contain 2 electrons, so hydrogen is very reactive with other elements, for example in the presence of oxygen, it will explode! Hydrogen would prefer to have 2 electrons within its electron orbital shell, but can’t because it has only 1 proton, so it will “steal” or “borrow” other electrons from nearby atoms if possible.
Helium has 2 protons, and hence attracts 2 electrons. Since 2 electrons complete the first orbital shell, helium will not react with other elements; in fact it is very difficult (nearly impossible) to bond helium to other elements. Helium is a noble gas, which means that it contains the full set of electrons in its orbital shell.
The columns of the Periodic Table of Elements are arranged by the number of electrons within the outermost orbital shell, while the rows reflect increasing atomic number (number of protons).
The Other Rows of the Periodic Table of Elements
The first row of the Periodic Table of Elements is where the first 2 electrons fill the first orbital shell, called the 1s orbital shell. In the second row, the next 2 electrons fill the 2s orbital shell and 6 more fill the 2p orbital shell. In the third row, 2 electrons fill the 3s orbital shell and 6 fill the 3p orbital shell. In the fourth row, 2 electrons fill the 4s orbital shell, 10 fill the 3d orbital shell, and 6 fill the 4p orbital shell.
A valence electron is an outer-shell electron that is associated with an atom but does not completely fill the outer orbital shell, and as such is involved in bonding between atoms. The valence shell is the outermost shell of an atom. Elements with complete valence shells (the noble gases) are the least chemically reactive, while those with only one electron in their valence shell (the alkali metals) or just one electron short of a complete shell (the halogens) are the most reactive. Hydrogen, which has one electron in its valence shell but is also just one electron short of a complete shell, has unique and very reactive properties.
The number of valence electrons of an element can be determined from its periodic table group, the vertical column in the Periodic Table of Elements. With the exception of groups 3–12 (the transition metals and rare earths), the column identifies how many valence electrons are associated with a neutral atom of the element. Each s sub-shell holds at most 2 electrons, while a p sub-shell holds 6, a d sub-shell holds 10, an f sub-shell holds 14, and a g sub-shell holds 18. Observe the first few rows of the Periodic Table of Elements in the table below to see how this determines the number of valence electrons in each atom of a specific element.
| Element | # Electrons | 1s | 2s | 2p | 2p | 2p | # Valence |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Hydrogen (H) | 1 | 1 | | | | | 1 |
| Helium (He) | 2 | 2 | | | | | 0 |
| Lithium (Li) | 3 | 2 | 1 | | | | 1 |
| Beryllium (Be) | 4 | 2 | 2 | | | | 2 |
| Boron (B) | 5 | 2 | 2 | 1 | | | 3 |
| Carbon (C) | 6 | 2 | 2 | 1 | 1 | | 4 |
| Nitrogen (N) | 7 | 2 | 2 | 1 | 1 | 1 | 3 |
| Oxygen (O) | 8 | 2 | 2 | 2 | 1 | 1 | 2 |
| Fluorine (F) | 9 | 2 | 2 | 2 | 2 | 1 | 1 |
| Neon (Ne) | 10 | 2 | 2 | 2 | 2 | 2 | 0 |
Notice that helium and neon have 0 valence electrons, which means that they are not reactive and will not bond to other atoms. Lithium, however, has 1 valence electron; if this one electron were removed, it would have 0 valence electrons, and this makes lithium highly reactive. Also notice that fluorine needs just 1 more valence electron to complete its set of 2s and 2p orbitals, making fluorine highly reactive as well. Carbon has the highest number of valence electrons in this set of elements, and will attract or give up 4 electrons to complete the set of 2s and 2p orbitals.
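The valence counts in the table above can be reproduced with a short sketch: fill the shells in order, then report the smaller of (electrons sitting in the outer shell) and (spaces left to complete it). This toy model only covers the light elements discussed here:

```python
SHELL_CAPACITIES = [2, 8, 8]  # shells 1-3, enough for the elements above

def valence_electrons(atomic_number):
    """Valence as used in this chapter: the smaller of the electrons an atom
    could give away versus the electrons it needs to complete its outer shell."""
    remaining = atomic_number
    for capacity in SHELL_CAPACITIES:
        if remaining <= capacity:
            return min(remaining, capacity - remaining)
        remaining -= capacity
    raise ValueError("element too heavy for this simple model")

for name, z in [("He", 2), ("Li", 3), ("C", 6), ("F", 9), ("Ne", 10)]:
    print(name, valence_electrons(z))  # He 0, Li 1, C 4, F 1, Ne 0
```

The `min` captures why both lithium (one to give away) and fluorine (one to gain) end up with a valence of 1, and why carbon, sitting exactly halfway, has the largest valence of 4.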
Understanding the number of valence electrons is extremely important in understanding how atoms form bonds with each other to form molecules. For example, the first column of elements containing Lithium on the periodic table all have 1 valence electron, and likely will bond to elements that need 1 valence electron to fill the orbital shell, such as elements in the fluorine column on the Periodic Table of Elements.
Some columns of the periodic table are given specific historical names. The elements of the first column, containing lithium, are collectively called the Alkali Metals (hydrogen, a gas, is unique and often not considered among the Alkali Metals), while the elements of the last column, containing helium, all have 0 valence electrons and are collectively called the Noble Gases. Elements in the fluorine column require 1 valence electron to fill the orbital shell and are called the Halogens, while elements under beryllium are called the Alkaline Earth Metals and have 2 valence electrons. Most other columns are not given specific names (the middle columns are sometimes collectively called the Transition Metals), but they can still be used to determine the number of valence electrons; for example, carbon and the elements listed below it have 4 valence electrons, while all elements listed under oxygen have 2 valence electrons. Notice that after the element barium there is an insert of two rows of elements; these are the Lanthanoids and Actinoids, which contain electrons in the 4s, 4p, 4d, and 4f orbitals, for a possible total of 32 electrons, a little too long to include in a nice table, and hence these elements are often shown at the bottom of the Periodic Table of Elements.
A typical college class in chemistry will go into more detail on electron orbital shells, but it is important to understand how electron orbitals work, because the configuration of electrons determines how atoms of each element form bonds in molecules. In the next section, we will examine how atoms come together to form bonds, and group together in different ways to form the matter that you observe on Earth.
3f. Chemical Bonds (Ionic, Covalent, and others means to bring atoms together).
There are three major types of bonds that form between atoms, linking them together into a molecule, Covalent, Ionic, and Metallic. There are also other ways to weakly link atoms together, because of the attractive properties related to the configuration of the molecules themselves, which includes Hydrogen bonding.
Covalent bonds are the strongest bonds between atoms found in chemistry. Covalent bonding is where two or more atoms share valence electrons to complete their orbital shells. The simplest example of a covalent bond is found when two hydrogen atoms bond. Remember that each hydrogen atom has 1 proton and 1 electron, but filling the 1s orbital requires 2 electrons. Hydrogen atoms therefore group into pairs, each contributing an electron to the shared 1s orbital shell; chemically, hydrogen is paired, which is depicted by the chemical formula H2. Another common covalent bond can be illustrated by introducing oxygen to hydrogen. Remember that oxygen needs two valence electrons to fill its set of electron orbitals, hence it bonds to 2 hydrogen atoms, each having 1 valence electron to share between the atoms. In H2O, the chemical formula for water or ice, 2 hydrogen atoms, each with an electron, bond with an oxygen atom that needs 2 electrons to fill its 2s and 2p orbitals. In covalent bonds, atoms share electrons to complete their orbital shells, and because the electrons are shared between atoms, covalent bonds are the strongest bonds in chemistry.
Oxygen for example, will pair up to share the 2 electrons (called a double bond), forming O2. Nitrogen does the same, pairing up to form N2, by sharing 3 electrons (called a triple bond). However, in the presence of nitrogen and hydrogen, the hydrogen will bond with nitrogen forming NH3 (ammonia) because it would require 3 electrons each from a hydrogen atom to fill all the orbitals. Carbon which has 4 valence electrons most often bonds with hydrogen to form CH4 (methane or natural gas), because it requires 4 electrons each from a hydrogen atom. Bonds that form by two atoms sharing 4 or more electrons are very rare.
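The molecules in this paragraph follow directly from counting valences: a central atom picks up exactly as many 1-valence hydrogen atoms as it has electrons to share. A minimal sketch, using the valence counts from the previous section (the molecules shown are the ones named in the text):

```python
# Bonding capacities (valences) of the atoms discussed in the text
VALENCE = {"H": 1, "O": 2, "N": 3, "C": 4}

def hydrogens_needed(central_atom):
    """How many 1-valence hydrogen atoms complete the central atom's shell?"""
    return VALENCE[central_atom]

print("O bonds", hydrogens_needed("O"), "H -> H2O (water)")
print("N bonds", hydrogens_needed("N"), "H -> NH3 (ammonia)")
print("C bonds", hydrogens_needed("C"), "H -> CH4 (methane)")
```

The same counts explain the diatomic gases: two oxygens share 2 electrons (a double bond) and two nitrogens share 3 (a triple bond), each pair using up its full valence.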
The electrons shared equally between the atoms make these bonds very strong. Covalent bonds can form crystal lattice structures when valence electrons are used to link many atoms together. For example, diamonds are composed of linked carbon atoms; each carbon atom is linked to 4 other carbon atoms, sharing an electron with each. If the linked carbon forms rings rather than a lattice structure, the carbon is in the form of graphite (used in pencil lead). If the linked carbon forms a lattice structure, the crystal form is much harder: a diamond. Hence the only difference between graphite pencil lead and a valuable diamond is how the covalent bonds between the carbon atoms are linked together.
Ionic bonds are a weaker type of bond between atoms found in chemistry. Ionic bonding is where one atom gives a valence electron to complete another atom's orbital shell. For example, lithium has a single valence electron that it would like to get rid of, so it will give that electron to an atom of fluorine, which needs an extra valence electron. In this case the electron is NOT shared by the two atoms; when lithium gives away its valence electron, it becomes positively charged because it now has fewer electrons than protons, while fluorine, with more electrons than protons, becomes negatively charged. Because of these opposite charges, the atoms are attracted together. Atoms that have different numbers of protons and electrons are called ions. Ions can be positively charged, like lithium, in which case they are called cations, or negatively charged, like fluorine, in which case they are called anions.
An excellent example of ionic bonding you have encountered is salt, which is composed of sodium (Na) and chloride (Cl). Sodium has one extra valence electron that it would like to give away, and chloride is looking to pick up an extra electron to fill its orbital; this results in sodium (Na) and chloride (Cl) ionically bonding to form table salt. However, the bonds in salt are easy to break, since they are held together NOT by sharing electrons, but by their different charges. When salt is dropped in water, the pull of the water molecules can break apart the sodium and chloride, resulting in sodium and chloride ions (the salt is dissolved within the water). Often chemical formulas of ions are expressed as Na+ and Cl- to depict the charge, where the + sign indicates a cation and the - sign indicates an anion. Sometimes an atom will give away or receive two or more electrons; for example, calcium will often give up two electrons, resulting in the cation Ca2+.
The difference between covalent bonds and ionic bonds is that in covalent bonds the electrons are SHARED between atoms, while in ionic bonds the electrons are GIVEN or RECEIVED between atoms. A good analogy to think of is friendship between two kids. If the friends are sharing a ball, by passing it between each other, they are covalently bonded to each other since the ball is shared equally between them. However, if one of the friends has an extra ice cream cone, and gives it to their friend, they are ionically bonded to each other.
Some molecules can have BOTH ionic and covalent bonds. A good example of this is the common molecule calcium carbonate, CaCO3. The carbon atom is covalently bonded to three oxygen atoms, which means that it shares electrons between the carbon and oxygen atoms. Typically carbon covalently bonds to only two oxygen atoms (forming carbon dioxide, CO2), sharing two electrons with each, for a total of 4. In the case of carbonate, however, three oxygen atoms are bonded to the carbon, two sharing 1 electron each and one sharing 2 electrons, which leaves 2 extra electrons. Hence CO3-2 has two extra electrons that it would like to give away, and is negatively charged. Calcium atoms have 2 electrons more than a complete shell, and will lose these electrons, resulting in a cation with a positive charge of +2, Ca+2. The ions CO3-2 and Ca+2 thus have opposite charges, bond together, and form CaCO3, calcium carbonate, a common molecule found in limestones and in shelled organisms that live in the ocean. Unlike salt, CaCO3 does not readily dissolve in pure water, as its ionic bonds are fairly strong; however, if the water is slightly acidic, calcium carbonate will dissolve.
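The charge bookkeeping in examples like these can be checked by summing ion charges: a neutral compound's positive and negative charges must cancel. A quick sketch using ions from this section:

```python
def is_neutral(ions):
    """ions: list of (charge, count) pairs; True if the total charge is zero."""
    return sum(charge * count for charge, count in ions) == 0

print(is_neutral([(+1, 1), (-1, 1)]))  # NaCl: one Na+ and one Cl-        -> True
print(is_neutral([(+2, 1), (-2, 1)]))  # CaCO3: one Ca2+ and one CO3(2-)  -> True
print(is_neutral([(+2, 1), (-1, 2)]))  # Ca(OH)2: one Ca2+ and two OH-    -> True
```

This is why one Ca2+ pairs with a single doubly charged carbonate ion, but needs two singly charged hydroxide ions.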
A solution is called an acid when it has an abundance of hydrogen ions. Hydrogen ions lose their 1 electron, forming the cation H+. When there is an excess of hydrogen ions in a solution, they break ionic bonds by bonding to anions. For example, in CaCO3 the hydrogen ions can bond with the CO3-2 to form HCO3- ions, called bicarbonate, dissolving the CaCO3 molecule. Acids break ionic bonds by introducing hydrogen ions, which can dissolve molecules held together by these ionic bonds. Note that a solution with an abundance of anions, such as OH-, can also break ionic bonds; such solutions are called bases. So a basic solution is one with an excess of anions. In this case the calcium will bond with the OH- anion, forming Ca(OH)2, calcium hydroxide, which in a solution of water is known as limewater.
The balance of H+ and OH- ions is measured as pH: a solution with a pH of 7 has equal numbers of H+ and OH- ions, acidic solutions have a pH less than 7, with more H+ cations, and basic solutions have a pH greater than 7, with more OH- anions.
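Quantitatively, the scale described above is logarithmic: pH is defined as the negative base-10 logarithm of the hydrogen-ion concentration (in mol/L). In pure water both H+ and OH- sit at 10^-7 mol/L, which gives the neutral pH of 7:

```python
import math

def pH(h_ion_concentration_mol_per_liter):
    """pH = -log10 of the hydrogen-ion (H+) concentration."""
    return -math.log10(h_ion_concentration_mol_per_liter)

print(pH(1e-7))  # pure water: 7.0
print(pH(1e-3))  # an acidic solution: 3.0
```

Because the scale is logarithmic, each whole pH step corresponds to a tenfold change in hydrogen-ion concentration, which is why a pH 3 solution is far more corrosive to ionic solids like CaCO3 than a pH 6 one.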
Metallic bonding is a unique feature of metals, and can be described as a special case of ionic bonding involving the sharing of free electrons among a structure of positively charged ions (cations). Materials composed of metallically bonded atoms exhibit high electrical conductivity, as electrons are free to pass between atoms. This is why electrical wires are composed of metals like copper, gold, and iron: they can conduct electricity because electrons are shared freely among many atomic bonds. Materials with metallic bonding also have a metallic luster or shine, and can bend (are more ductile) because of the flexibility of these bonds.
Metallic bonds are susceptible to oxidation. Oxidation is a type of chemical reaction in which metallically bonded atoms lose electrons to an oxidizing agent (most often atoms of oxygen), resulting in the metal atoms becoming bonded with oxygen. For example, iron (Fe), which forms cations of Fe2+ (iron-II) or Fe3+ (iron-III), loses electrons to oxygen, which gains two electrons to fill its orbitals as the anion O-2, resulting in a series of molecules called iron oxides, such as Fe2O3. This is why metals such as iron rust or corrode and silver tarnishes over time: the metal atoms give up electrons to the surrounding oxygen. Oxygen is common in the air, in water, and in acidic (corrosive) solutions, and the only way to prevent oxidation in metals is to limit their exposure to oxygen (and other strongly electron-attracting atoms like fluorine).
When electrons are gained, the reaction is called a reducing reaction, the opposite of oxidation. Collectively these types of chemical reactions are called "redox" reactions, and they form an important aspect of chemistry. Furthermore, the transfer of electrons in oxidation-reduction reactions is a useful way to store electrons (electricity) in batteries.
Covalent, ionic, and metallic bonding all require the exchange of electrons between atoms, and hence are fairly strong bonds, with covalent bonds being the strongest type. However, molecules themselves can become polarized because of the arrangement of their atoms, such that a molecule has a more positive and a more negative side. This frequently happens with molecules containing hydrogen atoms bonded to larger atoms. The resulting bonds between molecules are very weak and easily broken, but they produce very important aspects of the chemistry of water and of the organic molecules essential for life. Hydrogen bonds form within water and are the reason for the expansion in volume between liquid water and solid ice. Water is composed of oxygen covalently bonded to two hydrogen atoms (H2O). The two hydrogen atoms each contribute an electron to the 2p orbitals, which require 6 electrons, and are pushed slightly toward each other by the paired electrons, forming a "mouse-ear"-like molecule. These two hydrogen atoms give the molecule a slight positive charge on the hydrogen side, compared with the side of the oxygen atom that lacks hydrogen atoms. Hence water molecules orient themselves with weak bonds between the positively charged hydrogen atoms of one molecule and the negatively charged side of another. Hydrogen bonds are best considered an electrostatic force of attraction involving hydrogen (H) atoms that are covalently bound to more electronegative atoms such as oxygen (O) and nitrogen (N).
Hydrogen bonds are very weak, but provide important bonds in living organisms, such as the bonds within the helix of the double helix structure of DNA (deoxyribonucleic acid). Hydrogen bonds are also important in the capillary forces behind water transport in plant tissue and blood vessels, as well as in the hydrophobic (water-repelling) and hydrophilic (water-attracting) organic molecules of cellular membranes.
Hydrogen bonding is often considered a special type of the weak van der Waals molecular forces, which cause attraction or repulsion through electrostatic interactions between electrically charged or polarized molecules. These forces are weak, but play a role in making some molecules more "sticky" than others. As you will learn later on, water is a particularly "sticky" molecule because of these hydrogen bonds.
3g. Common Inorganic Chemical Molecules of Earth.
With 118 elements on the periodic table of elements, there can be a nearly infinite number of molecules formed from various combinations of those elements. However, on Earth some elements are very rare, while others are much more common. The distribution of matter, and of the various types of elements, across Earth's surface, oceans, atmosphere, and rocky interior is a fascinating topic. If you were to grind up the entire Earth, what percentage would be made of gold? What percentage made of oxygen? How could one calculate the abundances of the various elements of Earth? Insights into the distribution of elements on Earth came about during World War II, as scientists developed new tools to determine the chemical makeup of materials; one of the great scientists to lead this investigation was Victor Goldschmidt.
On November 26th, 1942, Victor Goldschmidt stood among the fearful crowd of people assembled on the pier in Oslo, Norway, waiting for the German ship Donau to transport them to Auschwitz. Goldschmidt had a charmed childhood in his native Switzerland, and after his family immigrated to Norway, he was quickly recognized for his early scientific interests in geology. In 1914 he began teaching at the local university after successfully defending his thesis on contact metamorphism in the Kristiania region of Norway. In 1929 he was invited to Germany to become the chair of mineralogy in Göttingen, where he had access to scientific instruments that allowed him to detect trace amounts of elements in rocks and meteorites. He also worked with a large team of fellow scientists whose goal was to determine the elemental make-up of a wide variety of rocks and minerals. However, in the summer of 1935 a large sign was erected on the campus by the German government that read, "Jews not desired." Goldschmidt protested, as he was Jewish and felt that the sign was discriminatory and racist. The sign was removed, only to reappear later that summer, and despite his further protests it remained, as ordered by the Nazi party. Victor Goldschmidt resigned his position in Germany and returned to Norway to continue his research, feeling that any place where people were injured and persecuted only for the sake of their race or religion was not a welcome place to conduct science. Goldschmidt brought with him vast amounts of data regarding the chemical make-up of natural materials found on Earth, particularly rocks and minerals. This data allowed Goldschmidt to classify the elements based on their frequency of occurrence on Earth.
The Atmophile Elements
The first group Goldschmidt called the atmophile elements, as these elements are gases and tend to be found in the atmosphere of Earth. These include both hydrogen and helium (the most abundant elements of the solar system), but also nitrogen, as well as the heavier noble gases: neon, argon, krypton and xenon. Goldschmidt believed that hydrogen and helium, as very light gases, were mostly stripped from the Earth's early atmosphere, with naturally occurring helium on Earth coming from the decay of radioactive materials deep inside Earth, trapped underground, often along with natural gas. Nitrogen is the most common element in the atmosphere, occurring as the paired molecule N2. It might be surprising that Goldschmidt did not classify oxygen within this group, and that was because oxygen was found to be more abundant within the rocks and minerals he studied, in a group he called the lithophile elements.
The Lithophile Elements
Lithophile elements, or rock-loving elements, are elements common in the crustal rocks found on the surface of continents. They include oxygen and silicon (the most common elements found in silicate minerals, like quartz), but also a wide group of alkali and alkaline earth elements: lithium, sodium, potassium, beryllium, magnesium, calcium, and strontium, as well as the reactive halogens: fluorine, chlorine, bromine and iodine, along with some odd-ball middle-of-the-chart elements: aluminum, boron, and phosphorus. Lithophile elements also include the rare earth elements found within the lanthanides, which make a rare appearance in many of the minerals and rocks under study.
The Chalcophile Elements
The next group are the Chalcophile elements or copper-loving elements. These elements are found in many metal ores, and include sulfur, selenium, copper, zinc, tin, bismuth, silver, mercury, lead, cadmium and arsenic. These elements are often associated in ore veins and concentrated with sulfur molecules.
The Siderophile Elements
The next group Goldschmidt described were the siderophile elements, or iron-loving elements, which include iron, as well as cobalt, nickel, manganese, molybdenum, ruthenium, rhodium, palladium, tungsten, rhenium, osmium, iridium, platinum, and gold. Goldschmidt found these elements to be more common in meteorites (most especially in iron meteorites) than in rocks found on the surface of the Earth. Furthermore, when these elements are found on Earth's surface, they are common in iron ore and associated with iron-rich rocks. The last group of elements are simply the synthetic elements, elements that are rarely found in nature, which include the radioactive elements found on the bottom rows of the Periodic Table of Elements and produced only in labs.
Meteorites, the Ingredients to Making Earth
A deeper understanding of the Goldschmidt classification of the elements was likely being discussed at the local police station in Oslo, Norway on that chilly late November day in 1942. Goldschmidt's Jewish heritage resulted in his imprisonment seven years after he left Germany, when Nazi Germany invaded Norway; despite his exodus, the specter of fascism had caught up with him. Jews were to be imprisoned, and most would face death in the concentration camps scattered over Nazi-occupied Europe. Scientific colleagues argued with the authorities that Goldschmidt's knowledge of the distribution of valuable elements was much needed. The plea worked: Victor Goldschmidt was released, and of the 532 passengers who boarded the Donau, only 9 would live to see the end of the war. With help, Goldschmidt fled Norway instead of boarding the ship, and would spend the last few years of his life in England writing a textbook, the first of its kind, on the geochemistry of the Earth.
As a pioneer in understanding the chemical make-up of the Earth, Goldschmidt inspired the next generation of scientists to study not only the chemical make-up of the atmosphere, ocean, and rocks found on Earth, but to compare those values to extra-terrestrial meteorites that have fallen to Earth from space.
Meteorites can be thought of as the raw ingredients of Earth. Mash enough meteorites together, and you have a planet. However, not all meteorites are the same: some are composed mostly of metallic iron, called iron meteorites; others have equal amounts of iron and silicate crystals, called stony-iron meteorites; while the third major group, the stony meteorites, are mostly composed of silicate crystals (SiO2).
If the Earth formed from the accretion of thousands of meteorites, then the percentage of chemical elements and molecules found in meteorites would give scientists a starting point for the average abundance of elements found on Earth. Through its history Earth’s composition has likely changed as elements became enriched or depleted in various places, and within various depths inside Earth. Here are the abundances of molecules in meteorites: (From Jarosewich, 1990: Meteoritics)
[Table: average composition of stony meteorites (% weight)]
[Table: average composition of iron meteorites (% weight)]
If Earth were a homogeneous planet (one composed of a uniform mix of these elements), the average make-up of Earthly material would have a composition similar to a mix of stony and iron meteorites. We see some indications of this: for example, SiO2 (silicon dioxide) is the most common molecule in stony meteorites at 38.2% by weight. With silicon bonded to two oxygen atoms, silicon and oxygen are the most common elements found in rocks, forming a group of minerals called silicates, which include quartz, a common mineral found on the surface of Earth. The next three molecules, MgO, FeO, and CaO, are also commonly found in rocks on Earth. However, iron (Fe) is very common in iron meteorites, and also makes up a significant portion of stony meteorites in various forms, including FeO, FeS, and Fe in native metal form. Yet typical rocks found on the surface of Earth contain very little iron. Where did all this iron go?
Goldschmidt suggested that iron (Fe) is a siderophile element, as are nickel (Ni), manganese (Mn) and cobalt (Co), and that these elements sank into the core of the Earth during its molten stage. Hence, over time the surface of the Earth became depleted in these elements. A further line of evidence for an iron-rich core is Earth's magnetic field, observed with a compass, which supports the theory of an iron-rich core at the center of Earth. Siderophile elements can thus be thought of as elements that are more common in the center of the Earth than near its surface. This is why other rare siderophile elements like gold, platinum and palladium are considered precious metals at the surface of Earth.
Goldschmidt also looked at elements common in the atmosphere, in the air that we breathe, which readily form gases at Earth's temperatures and pressures. These atmophile elements include hydrogen and helium; hydrogen is observed in meteorites mostly as H2O, and very little isolated helium gas is found in them. This is despite the fact that the sun is mostly composed of hydrogen and helium. If you have ever lost a helium balloon, you likely know the reason why there is so little hydrogen and helium on Earth. Both hydrogen and helium are very light elements that can escape into the high atmosphere, and even into space. Much of the solar system's hydrogen and helium is found in the sun, which has a greater gravitational force, as well as in the larger gas giant planets of the outer solar system, like Jupiter, which has an atmosphere composed of hydrogen and helium. Like the sun, larger planets can hold onto these light elements with their higher gravitational forces. Earth has lost much of its hydrogen and helium, and almost all of Earth's remaining hydrogen is bonded to other elements, preventing its escape.
Nitrogen is found only in trace amounts in meteorites, as the mineral carlsbergite, which is likely the source of nitrogen in Earth's atmosphere. Another heavier gas is carbon dioxide (CO2), which accounts for about 0.1% of stony meteorites. However, in the current atmosphere it accounts for less than 0.04%, and as a total percentage of the entire Earth much less than that. Comparing Earth to Venus and Mars, carbon dioxide is the most abundant molecule in the atmospheres of Venus and Mars, accounting for 95 to 97% of the atmosphere on those planets, while on Earth it is a rare component of the atmosphere. As a heavier molecule than hydrogen and helium, carbon dioxide can stick to planets in Venus's and Earth's size range. It is likely that early in its history Earth had a similarly high percentage of carbon dioxide as found on Mars and Venus, but that over time it was pulled out of the atmosphere. This process was driven by Earth's unusually high percentage of water (H2O). Notice that water is found in stony meteorites; this water was released as a gas during Earth's warmer molten history, and as the Earth cooled, it fell as rain that formed the vast oceans of water on its surface today. There has been a great debate in science as to why Earth has these vast oceans of water and great ice sheets, while Mars and Venus lack oceans or significantly large amounts of ice. Some scientists suggest that Earth was enriched in water (H2O) by impacts with comets early in its history, while others suggest that enough water (H2O) could come simply from the gases released from the rocks and meteorites that formed the early molten Earth.
So how did this unusually large amount of water result in a decrease of carbon dioxide in Earth's atmosphere? Looking at a simple set of chemical reactions between carbon dioxide and water, you can understand why.
CO2 (g) + H2O (l) <=> H2CO3 (aq)
Note that g stands for gas, l for liquid, and aq for an aqueous solution (dissolved in water); also notice that this reaction goes in both directions, as shown by the double arrows. Each carbon atom takes on an additional oxygen atom, which brings two extra electrons, resulting in the ion CO32-. This ion forms ionic bonds to two hydrogen ions (H+), forming H2CO3. Because these hydrogen ions can break away from the carbon and oxygen, this molecule in solution forms a weak acid called carbonic acid. Carbonic acid is what gives soda drinks their fizz. When water falls from the sky as rain, its carbonic acid causes a further reaction with solid rocks containing calcium. Remember that calcium forms Ca2+ ions, making these ions ideal for reacting with CO32- ions to form calcium carbonate (CaCO3), a solid.
Ca2+(aq) + 2HCO3- (aq) <=> CaCO3 (s) + CO2 (aq) + H2O (l)
Note that there is a 2 before the ion HCO3- so that the amount of each element in the chemical reaction is balanced on each side.
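The balancing described above can be checked mechanically by counting atoms on each side of the reaction. A minimal Python sketch, with each chemical formula written out by hand as element counts:

```python
from collections import Counter

# Check that Ca2+ + 2 HCO3- <=> CaCO3 + CO2 + H2O is balanced by counting
# atoms on each side (charge also balances: +2 on the left cancels 2 x -1).
def atoms(species):
    """Sum element counts over (element_counts, coefficient) pairs."""
    total = Counter()
    for counts, coeff in species:
        for element, n in counts.items():
            total[element] += coeff * n
    return total

left = atoms([({"Ca": 1}, 1),                      # Ca2+
              ({"H": 1, "C": 1, "O": 3}, 2)])      # 2 HCO3-
right = atoms([({"Ca": 1, "C": 1, "O": 3}, 1),     # CaCO3
               ({"C": 1, "O": 2}, 1),              # CO2
               ({"H": 2, "O": 1}, 1)])             # H2O
print(left == right)  # True: 1 Ca, 2 H, 2 C and 6 O on each side
```

Dropping the coefficient 2 from HCO3- would leave the hydrogen, carbon, and oxygen counts unequal, which is exactly what the balancing rule prevents.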
Over long periods of time the amount of carbon dioxide in the atmosphere will decrease; however, if the Earth is volcanically active and still molten with lava, this carbon dioxide is re-released into the atmosphere as the solid calcium carbonate rock is heated and melted (supplying 178 kJ of energy will convert 1 mole of CaCO3 to CaO and CO2).
CaCO3 (s) → CaO (s) + CO2 (g)
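To get a rough sense of scale for this reaction, the 178 kJ-per-mole figure can be combined with the molar mass of CaCO3 (computed here from standard atomic masses, which are not given in the text above):

```python
# Energy needed to decompose limestone: 178 kJ per mole of CaCO3.
# Molar mass of CaCO3 from standard atomic masses (g/mol):
molar_mass = 40.08 + 12.01 + 3 * 16.00   # Ca + C + 3 O = 100.09 g/mol
dH = 178.0                               # kJ per mole, endothermic
moles_per_kg = 1000.0 / molar_mass
energy_kj = moles_per_kg * dH
print(round(energy_kj))  # roughly 1778 kJ to decompose 1 kg of CaCO3
```

So decomposing a single kilogram of calcium carbonate rock takes nearly 1.8 million joules of heat, which is why this reaction only runs at volcanic temperatures.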
This dynamic chemical reaction between carbon dioxide, water and calcium causes parts of the Earth to become enriched or depleted in carbon, but eventually the amount of carbon dioxide in the atmosphere reaches an equilibrium. During the early history of Earth, water scrubbed significant amounts of carbon dioxide out of the atmosphere.
Returning to the bulk composition of meteorites, oxygen is found in numerous molecules, including some of the most abundant (SiO2, MgO, FeO, CaO). One of the reasons Goldschmidt did not include oxygen in the atmophile group of elements was because it is more common in rocks, especially bonded covalently with silicon in silicon dioxide (SiO2). Pure silicon dioxide is the mineral quartz, a very common mineral found on the surface of the Earth. Hence oxygen, along with magnesium, aluminum, and calcium, is a lithophile element. Later we will explore how Earth's atmosphere became enriched in oxygen, an element much more commonly found within solid crystals and rocks on Earth's surface.
Isolated carbon (C) is fairly common (0.5%) in meteorites, but carbon bonded to hydrogen CH4 (methane) or in chains of carbon and hydrogen (for example C2H6) are extremely rare in meteorites. A few isolated meteorites contain slightly more carbon (1.82%) including the famous Murchison and Banten stony meteorites which exhibit carbon molecules bonded to hydrogen. Referred to as hydrocarbons, these molecules are important in life, and will play an important role in the origin of life on Earth. But why are these hydrocarbons so rare in meteorites?
This likely has to do with an important concept in chemistry called enthalpy. Enthalpy is the amount of energy gained or lost in a chemical reaction at a known temperature and pressure. This change in enthalpy is expressed as ΔH, in joules of energy per mole. A mole is a unit of measurement that relates the number of atoms or molecules in a substance to its atomic mass in grams. A positive change in enthalpy indicates an endothermic reaction (one requiring heat), while a negative change in enthalpy indicates an exothermic reaction (one releasing heat). In the case of a hydrocarbon (like CH4) in the presence of oxygen, there is an exothermic reaction that releases 890.32 kilojoules of energy as heat per mole.
CH4 (g) + 2O2 (g) → 2H2O (l) + CO2 (g)
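The roughly 890 kJ figure can be recovered from standard enthalpies of formation, tabulated values (not given in this chapter) of about -74.8 kJ/mol for CH4(g), -393.5 for CO2(g), and -285.8 for liquid H2O, with elements in their standard state set to zero. A short sketch of the bookkeeping:

```python
# Heat of methane combustion from standard enthalpies of formation
# (approximate tabulated values in kJ/mol at 25 C and 1 atm).
dHf = {
    "CH4(g)": -74.8,
    "O2(g)": 0.0,       # elements in their standard state are zero
    "CO2(g)": -393.5,
    "H2O(l)": -285.8,
}
# CH4 + 2 O2 -> CO2 + 2 H2O: dH = sum(products) - sum(reactants)
products = dHf["CO2(g)"] + 2 * dHf["H2O(l)"]
reactants = dHf["CH4(g)"] + 2 * dHf["O2(g)"]
dH = products - reactants
print(round(dH, 1))  # about -890.3 kJ/mol: negative, hence exothermic
```

The negative sign is the signature of an exothermic reaction, matching the heat release described above.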
The release of energy via this chemical reaction makes hydrocarbons such a great source of fuel, since they easily react with oxygen to produce heat. In fact, methane or natural gas (CH4) is used to generate electricity, heat homes, and cook food on gas stoves. This is also why hydrocarbons are rarely found closely associated with oxygen. Hydrocarbons are, however, of great importance, not only because of their ability to combust with oxygen in these exothermic reactions, but because they are also major components of living organisms. Other elements that are important for living organisms are phosphorus (P), nitrogen (N), oxygen (O), sulfur (S), sodium (Na), magnesium (Mg), calcium (Ca) and iron (Fe). All of these elements are found, bonded with carbon and hydrogen, in the complex molecules within life forms near the surface of Earth, which are collectively called organic molecules. The field of chemistry that studies these complex chains of hydrocarbon molecules is called organic chemistry.
Goldschmidt’s classification of the elements is a useful way to simplify the numerous elements found on Earth, and a way to think about where they are likely to be found, whether in the atmosphere, in the oceans, on the rocky surface, or deep inside Earth’s core.
3h. Mass spectrometers, X-Ray Diffraction, Chromatography and Other Methods to Determine Which Elements are in Things.
The chemical make-up and structure of Earth’s materials
In 1942, Victor Goldschmidt, having escaped from Norway, arrived in England among a multitude of refugees from Europe. Shortly after arriving in London, he was asked to teach about the occurrence of rare elements found in coal to the British Coal Utilisation Research Association, a non-profit group funded by coal utilities to promote research. In the audience was a young woman named Rosalind Franklin. Franklin had recently joined the coal research group, having left graduate school at Cambridge University in 1941 and leaving behind a valuable scholarship. Her previous advisor, Ronald Norrish, was a veteran of the Great War who had suffered as a prisoner of war in Germany. The advent of World War II plagued him; he took to drinking and was not supportive of young Franklin’s research interests. In his lab, however, Franklin was exposed to methods of chemical analysis using photochemistry, the use of light to excite materials to produce photons of differing wavelengths of energy.
After leaving school, her research focused on her paid job to understand the chemistry of coal, particularly how organic molecules or hydrocarbons break down through heat and pressure inside the Earth, leading to decreasing porosity (the amount of space or tiny cavities) within the coal over time. Franklin moved to London to work, and stayed in a boarding house with Adrienne Weill, a French chemist and refugee who was a former student of the famous chemist Marie Curie. Adrienne Weill became her mentor during the war years, while Victor Goldschmidt taught her in a classroom during the war. With the allied victory in 1945, Rosalind Franklin returned to the University of Cambridge and in 1946 defended the research she had conducted on coal. After graduation, Franklin asked Adrienne Weill if she could come to France to continue her work in chemistry. In Paris, Franklin secured a job in 1947 at the Laboratoire Central des Services Chimiques de l'État, one of the leading centers for chemical research in post-war France. It was here that she learned of the many new techniques being developed to determine the chemical make-up and structure of Earth’s materials.
What allows scientists to determine what specific chemical elements are found in materials on Earth? What tools do chemists use to determine the specific elements inside the molecules that make-up Earth materials, whether they be gasses, liquids or solids?
Using Chemical Reactions
Any material will have a specific density at standard temperature and pressure, and will exhibit phase transitions at set temperatures and pressures, although determining these for every type of material can be challenging if the phase transitions occur at extremely high or low temperatures or pressures. More often than not, scientists use chemical reactions to determine if a specific element is found in a substance. For example, one test to determine the authenticity of a meteorite is to determine whether it contains the element nickel.
Nickel, as a siderophile element, is rare on the surface of Earth, with much of the planet’s nickel found in the Earth’s core. Hence, meteorites have a higher percentage of nickel than most surface rocks. To test for nickel, a small sample is ground into a powder and added to a solution of HNO3 (nitric acid). If nickel is present, the reaction will leave nickel ions (Ni2+) in the solution. A solution of NH4OH (ammonium hydroxide) is then added to increase the pH of the solution by introducing OH- ions; this causes any iron ions (Fe2+ and Fe3+) to react with the OH- ions, forming a solid (seen as a rust-colored precipitate at the bottom of the solution). The final step is to pour off the clear liquid, which should now contain the nickel ions, and add dimethylglyoxime (C4H8N2O2), a complex organic molecule that reacts with nickel ions: Ni2+ + 2 C4H8N2O2 → Ni(C4H8N2O2)2
Nickel bis(dimethylglyoximate) is bright red, and a bright red precipitate indicates the presence of nickel in the powdered sample. This method of detecting nickel was worked out by Lev Chugaev, a Russian chemistry professor at the University of St. Petersburg, in 1905. This type of diagnostic test may read a little like a recipe in a cookbook, but it provides a “spot” test for the presence or absence of an element in a substance. Such tests have been developed for many different types of materials in which someone wishes to know if a particular element is present in a solid, liquid or even a gas.
Chemical methods can also separate liquids that have been mixed together by utilizing differences in boiling temperature, such that a mixture of liquids can be separated back into its component liquids based on the temperature at which each boils. Such distillation methods are used in petroleum refining at oil and gas refineries, where heating liquid crude oil separates out different types of hydrocarbon oils and fuels, such as kerosene, octane, propane, benzene, and methane.
One of the more important innovations in chemical analysis is chromatography, which was first developed by the Italian-Russian chemist Mikhail Tsvet, whose surname Цвет means color in Russian. Chromatography is the separation of molecules into color bands, and was first developed in the study of plant pigments found in flowers. The method is basically to dissolve plant pigments in ethanol and pass the solution through calcium carbonate, which separates the pigments into different color bands that can be observed through clear glass. Complex organic molecules can be separated out using this method, and purified.
Using this principle, gases can be analyzed in a similar way through gas chromatography, in which gases (often solid or liquid substances that have been combusted or heated in a hot oven until they become a gas) are passed through a column with a carrier gas of pure helium. As the gases pass by a laser sensor, the color differences are measured at discrete pressures, which are adjusted within the column. Gas chromatography is an effective way to analyze the chemical makeup of complex organic compounds.
Color is an effective way to determine the chemical ingredients of Earthly materials, and it has been widely known that certain elements can exhibit differing colors in materials. Many elements are used in glass making and fireworks to dazzling effect. Pyrolysis–gas chromatography is a specialized type of chromatography in which materials are combusted at high temperatures, and the colors of the produced smaller gas molecules are measured to determine their composition. Often gas chromatography is coupled with a mass spectrometer.
A mass spectrometer measures the differing masses of molecules and ionized elements in a carrier gas (most often inert helium). Mass spectrometry examines a molecule’s or ion’s total atomic mass by passing it through a gas-filled analyzer tube between strong magnets, which deflect its path based on the atomic mass. Lighter molecules and ions deflect more than heavier ones, so they strike a detector at the end of the analyzer tube at different places, producing a pulse of electric current in each detector. The more electric current recorded, the higher the number of molecules or ions of that specific atomic mass. Mass spectrometry is the only way to measure the various isotopes of elements, since the instrument directly measures atomic mass.
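The mass-dependent deflection can be sketched with the standard magnetic-sector relation from introductory physics (not given in this chapter): an ion accelerated through voltage V in a field B follows a circular arc of radius r = sqrt(2mV/q)/B, so heavier ions curve on wider arcs. The voltage and field values below are purely illustrative:

```python
import math

# Sketch of magnetic-sector separation: path radius r = sqrt(2*m*V/q) / B.
# Heavier ions follow a wider arc, landing farther along the detector plate.
E = 1.602e-19      # elementary charge, coulombs
AMU = 1.661e-27    # atomic mass unit, kg

def radius_m(mass_amu, charge=1, voltage=3000.0, field_tesla=0.5):
    """Path radius in meters for an ion of the given mass (illustrative V, B)."""
    m = mass_amu * AMU
    q = charge * E
    return math.sqrt(2 * m * voltage / q) / field_tesla

# Singly ionized carbon isotopes: 13C takes a wider arc than 12C,
# so the two isotopes strike the detector at different positions.
print(radius_m(13.003) > radius_m(12.000))  # True
```

Even a one-mass-unit difference shifts the landing position measurably, which is why the instrument can resolve isotopes directly.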
There are two flavors of mass spectrometers. The first is built to examine the isotopic composition of carbon and oxygen, as well as nitrogen, phosphorus and sulfur: elements common in organic compounds, which combust in the presence of oxygen, producing gas molecules of CO2, NO2, PO4 and SO4. These mass spectrometers can be used in special cases to examine the hydrogen and oxygen isotopic composition of H2O. They are also useful for measuring carbon and oxygen isotopes in calcium carbonate (CaCO3), which produces CO2 when reacted with acid. The other flavor of mass spectrometer ionizes elements (removes electrons) at extremely high temperatures, allowing ions of much higher atomic mass to be measured in an ionized plasma beam. These ionizing mass spectrometers can measure isotopes of rare transition metals, such as nickel (Ni), lead (Pb), and uranium (U), for example the ratios used in the radiometric dating of zircons. Modern mass spectrometers can laser-ablate tiny crystals or very small samples using an ion microprobe, capturing a tiny fraction of the material to pass through the mass spectrometer and measuring the isotopic composition of very tiny portions of substances. Such data can be used to compare composition across the surfaces of materials at a microscopic scale.
The first mass spectrometers were developed during the 1940s, after World War II, but today are fairly common in most scientific labs. In fact, the Sample Analysis at Mars (SAM) instrument on the Curiosity Rover on the surface of Mars contains a gas chromatograph coupled with a mass spectrometer, allowing scientists at NASA to determine the composition of rocks and other materials encountered on the surface of Mars. Rosalind Franklin did not have access to modern mass spectrometers in 1947, and their precursors, gigantic scientific machines called cyclotrons, were not readily available in post-war France. Instead, Rosalind Franklin trained on scientific machines that use electromagnetic radiation to study matter, using specific properties of how light interacts with matter.
The major benefit of using light (and more broadly all types of electromagnetic radiation) to study a substance is that light does not require that the bonds between atoms be broken or changed in order to study them. So these techniques do not require that a substance be destroyed by reacting it with other chemicals, or altered by combustion into a gas, to determine its composition.
Earlier crystallographers, who studied gems and jewels, noticed the unique ways in which materials absorb and reflect light in dazzling ways, and to understand the make-up of rocks and crystals, scientists used light properties, or luminosity, to classify these substances. For example, minerals composed of metallic bonds will exhibit a metallic luster or shine and are opaque, while covalently and ionically bonded minerals are often translucent, allowing light to pass through. The study of light passing through crystals, minerals and rocks to determine their chemical make-up is referred to as petrology. More generally, petrology is the branch of science concerned with the origin, small-scale structure, and composition of rocks, minerals, and other Earth materials, often studied under a polarizing light microscope.
Refraction and Diffraction of Light
Light, in the form of any type of electromagnetic radiation, interacts with material substances in three fundamental ways: the light can be absorbed by the material, bounce off the material, or pass through the material. In opaque materials, like a gold coin, light bounces off the material, or diffracts from the surface, while in translucent materials, like a diamond, light passes through the material and exhibits refraction. Refraction is how a beam of light bends within a substance, while diffraction is how a beam of light is reflected off the surface of a substance. They are governed by two important laws in physics.
Snell's Law of Refraction
Snell’s law is named after the Dutch astronomer Willebrord Snellius, although the relationship was first described by Ibn Sahl, a Persian scientist who published his early thesis on mirrors and lenses in 984 CE. Snell’s law refers to the relationship between the angle of incidence and the angle of refraction resulting from the change in velocity of light as it passes into a translucent substance. Light slows down as it passes into a denser material. By slowing down, the light beam bends; the amount the beam bends is mathematically related to the angle at which the beam strikes the substance and to the change in velocity.
The mathematical expression can be written as sin θ1 / sin θ2 = v1 / v2, where θ1 is the angle from perpendicular at which the light strikes the outer surface of the substance, and θ2 is the angle from perpendicular at which the light bends within the substance; v1 is the velocity of light outside the substance, while v2 is the velocity of light within the substance.
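As a worked example of this relationship, the sketch below computes a refraction angle using refractive indices (n = c/v, so the velocity ratio v1/v2 equals n2/n1); the index values for air and water are illustrative textbook figures, not taken from this chapter:

```python
import math

# Snell's law: sin(theta1)/sin(theta2) = v1/v2, equivalently
# n1*sin(theta1) = n2*sin(theta2) with refractive index n = c/v.
def refraction_angle(theta1_deg, n1, n2):
    """Angle (degrees) of the refracted beam, measured from perpendicular."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    return math.degrees(math.asin(s))

# A beam entering water (n ~ 1.33) from air (n ~ 1.00) at 45 degrees
# bends toward the perpendicular:
print(round(refraction_angle(45.0, 1.00, 1.33), 1))  # ~32.1 degrees
```

The beam bends toward the perpendicular because light travels more slowly in water than in air, which is the same slowing effect described for quartz and calcite below.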
Some translucent materials reduce the velocity of light more than others. For example, quartz (SiO2) maintains a high light velocity, such that the angle of refraction is very low. This makes materials made of silica (SiO2), such as eyeglasses and windows, easy to see through.
However, calcite (CaCO3), also known as calcium carbonate, exhibits very low light velocities, such that the angle of refraction is very high. This makes light bend strongly within materials made of calcite. Crystals of calcite are often sold in rock shops or curio shops as “television” crystals, because the light bends so greatly as to make any print placed below a crystal appear to come from within the crystal itself, like the screen of a television. Liquid water, in which light travels more slowly than in air, also bends light, resulting in the illusion of a drinking straw appearing broken in a glass of water.
Measuring the angles of refraction in translucent materials can allow scientists to determine the chemical make-up of the material, often without having to destroy or alter it. However, obtaining these angles of refraction often requires making a thin section of the material to examine under a specialized polarizing microscope designed to measure them.
Bragg's Law of Diffraction
Bragg’s law is named after Lawrence Bragg and his father William Bragg, who together determined the crystalline structure of diamonds and were awarded the Nobel Prize in 1915 for their work on diffraction. Unlike Snell’s law, Bragg’s law results from the interference of light waves as they diffract (reflect) off a material’s surface. The wavelength of the light needs to be known, as well as the angle at which the light is reflected from the surface of the substance.
The mathematical expression can be written as nλ = 2d sin θ, where λ is the wavelength of the light, θ is the angle from the horizontal plane of the surface (the glancing angle), and d is the interplanar distance between atoms in the substance, the spacing of the atomic-scale crystal lattice planes.
The distance d can be determined if λ and θ are known; n is an integer, the “order” of diffraction/reflection of the light wave.
Monochromatic light (that is, light of only one wavelength) is shone onto the surface of a material at a specific angle, and the light reflected off the material is measured at every angle. Using this information, the specific distances between atoms can be measured.
Since the distance between atoms is directly related to the number of electrons within each orbital shell, each element on the periodic table will have different values of d, depending on the orientations of the atomic bonds. Furthermore, different types of bonds between the same elements will result in different d distances. For example, both graphite and diamond are composed of carbon atoms, but are distinguished from each other by how those atoms are bonded together. In graphite, the carbon atoms are arranged into planes separated by d-spacings of 3.35 Å, while in diamond the atoms are linked more closely by covalent bonds, with d-spacings of 1.075 Å, 1.261 Å, and 2.06 Å.
These d-spacings are very small, requiring light with short wavelengths within the X-ray spectrum, such as a wavelength (λ) of 1.54 Å. Since most atomic bonds are very small, X-ray electromagnetic radiation is typically used in studies of diffraction. The technique used to determine d-spacings within materials is called X-Ray Diffraction (XRD). It is often coupled with tools to measure X-ray fluorescence (XRF), which measures the energy states of excited electrons that absorb X-rays and release energy as photons. X-Ray Diffraction measures how light waves reflect off the spacing between atoms, while X-Ray Fluorescence measures how light waves are emitted from atoms that were excited by light striking the atoms themselves. Fluorescence looks at the broad spectrum of light emitted, while diffraction looks only at monochromatic light (light of a single wavelength).
Great advances have been made in both XRD and XRF tools, such that many hand-held analyzers now allow scientists to quickly analyze the chemical make-up of materials outside of the laboratory and without destruction of the materials under study. XRD and XRF have revolutionized how materials can be quickly analyzed for various toxic elements, such as lead (Pb) and arsenic (As).
However, in the late 1940s, X-Ray diffraction was still on the cutting edge of science, and Rosalind Franklin was using it to analyze more and more complex organic molecules – molecules containing long chains of carbon bonded with hydrogen and other elements. In 1950, Rosalind Franklin was awarded a 3-year research fellowship from an asbestos mining company to come to London to conduct chemistry research at King’s College. Equipped with a state-of-the-art X-Ray diffraction machine, Rosalind Franklin set to work to decode the chemical bonds that form complex organic compounds found in living tissues.
The Discovery of the Chemistry of DNA
She was encouraged to study the nature of a molecule called deoxyribonucleic acid, or DNA, found inside living cells, particularly sperm cells. For several months she worked on a project to unravel this unique molecule, when one day an odd little nerdy researcher by the name of Maurice Wilkins arrived at the lab. He was furious with Rosalind Franklin: he had been away on travel, but before leaving had been working on the very topic Franklin was now pursuing, using the same machine. The Chair of the Department had not informed either of them of the other’s research, nor that they were to share the same equipment and lab space. This, as you can imagine, caused much friction between Franklin and Wilkins. Despite this setback, Rosalind Franklin made major breakthroughs in the few months she had sole access to the machine, and was able to uncover the chemical bonds found in deoxyribonucleic acid. These newly deciphered helical bonds allowed the molecule to coil like a spiral staircase. However, Franklin was still uncertain. During 1952, both Franklin and Wilkins worked alongside each other, on different samples and using various techniques, with the same machine. They also shared a graduate student, Raymond Gosling, who worked in the lab between them. Sharing a laboratory with Wilkins continued to be problematic for Rosalind Franklin, and she transferred to Birkbeck College in March of 1953, which had its own X-Ray diffraction lab. Wilkins returned to his research on deoxyribonucleic acid using the X-Ray diffraction machine at King’s College; however, a month later in 1953, two researchers at Cambridge University announced to the world their own solution to the structure of DNA in a journal article published in Nature.
Their names were Francis Crick and James Watson. These two newcomers published their solution before either Franklin or Wilkins had a chance to. Furthermore, both had only recently begun their quest to understand deoxyribonucleic acid, after being inspired by scientific presentations given by both Franklin and Wilkins. Crick and Watson lacked equipment, so they spent their time building models, linking carbon atoms (represented by balls) together with other elements to form helical towers. Their insight came from a famous photograph taken by Franklin and her student Gosling, which was shown to them by Wilkins in early 1953. Their research became widely celebrated in England, as it appeared that the American scientist Linus Pauling was close to solving the mystery of DNA, and the British scientists had uncovered it first.
In 1962, Wilkins, Crick, and Watson shared the Nobel Prize in Physiology or Medicine. Although today Rosalind Franklin is widely recognized for her efforts to decipher the helical nature of the DNA molecule, she died of cancer in 1958, before she could be awarded a Nobel Prize. Today scientists can map out the specific chemistry of DNA, such that each individual molecule within different living cells can be understood well beyond its structure. Understanding complex organic molecules is of vital importance in the study of life on planet Earth.
Rayleigh and Raman scattering
There stands an old museum nestled in the bustling city of Chennai along the Bay of Bengal in eastern India, where a magnificent crystal resides on a wooden shelf, accumulating dust in its display. It was this crystal that would excite one of the greatest scientists to investigate the properties of matter and discover a new way to study chemistry from a distance. This scientist was Chandrasekhara Venkata Raman of India, often known simply as C.V. Raman. Raman grew up in eastern India with an inordinate fascination for shiny crystals and gems and the reflective properties of minerals. He amassed a large collection of rocks, minerals, and crystals from his travels. One day he purchased from a farmer a large quartz crystal which contained trapped inclusions of some type of liquid and gas. The quartz crystal intrigued him, and he wanted to know what type of liquid and gas was inside. If he broke open the crystal to find out, it would ruin its rarity, as such inclusions of liquid and gasses inside a crystal are very rare. Without breaking open the crystal, if Raman used any of the techniques described previously, he could only uncover the chemical nature of the crystal’s outer surface, likely silicon dioxide (quartz).
To determine what the liquid and gas inside were, he set about inventing a new way to uncover the chemical make-up of materials that can only be seen. His discovery would allow scientists not only to know the chemical make-up inside this particular crystal, but to determine the chemical make-up of far-distant stars across the universe, tiny atoms on the surfaces of metals, and gasses in the atmosphere.
Light has the unique ability to reveal the bonds within atomic structures without having to react with or break those bonds. If a material is transparent to light, then it can be studied. Raman was well aware of the research of Baron Rayleigh, a late nineteenth-century British scientist who discovered the noble gas argon by distilling air to purify it. Argon is a component of the air that surrounds you, and as an inert gas unable to bond to other atoms, it exists as single atoms in the atmosphere. If a light bulb is filled with only purified argon gas, an electric discharge passed through the gas produces a bright neon-like purple color. A bulb filled with helium (He) gas produces a bright reddish color, but the brightest light is seen with neon (Ne), a bright orange color. These noble gases produced bright colors in light bulbs and soon appeared in the early twentieth century as bright neon window signs in the store fronts of bars and restaurants.
Why did each type of gas produce a different color in these light bulbs? Rayleigh worked out that the color was caused by the scattering of light waves due to differences in the size of the atoms. In the visible spectrum, light waves are much larger than the diameter of these individual atoms, such that the fraction of light scattered by a group of atoms is related to the number of atoms per unit volume, as well as the cross-sectional area of the individual atoms. By shining light through a material, you can measure the scattering of light and in theory determine the atoms’ cross-sectional area. This Rayleigh scattering could also be used to determine temperature, since with increasing temperature the effective size of the atoms increases.
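A standard result of Rayleigh scattering theory (not derived in the text above) is that scattering intensity scales with the inverse fourth power of wavelength, so short blue wavelengths scatter far more strongly than long red ones, which is why Earth's sky appears blue. A minimal sketch of that ratio:

```python
def rayleigh_ratio(lambda1_nm, lambda2_nm):
    """Relative Rayleigh scattering intensity of lambda1 compared to lambda2.

    Uses the standard 1/wavelength^4 dependence of Rayleigh scattering.
    """
    return (lambda2_nm / lambda1_nm) ** 4

# Blue light (~450 nm) versus red light (~700 nm): blue is scattered
# roughly 5.9 times more strongly.
ratio = rayleigh_ratio(450, 700)
```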
In addition to Rayleigh scattering, the atoms also absorb light of particular wavelengths, such that a broad range of wavelengths passed through a gas will be absorbed at discrete wavelengths, allowing you to fingerprint certain elements within a substance based on these spectral absorption lines.
In his lab, Raman used these techniques to determine the chemical make-up of the inclusions within the crystal by shining light through it. This is referred to as spectroscopy, the study of the interaction between matter and electromagnetic radiation (light). Raman did not have access to modern lasers, so he used a mercury light bulb and photographic plates to record where light was scattered, as thin lines appeared where the photographic plate was exposed to light. It was time-consuming work, but it eventually led to the discovery that some of the scattered light had lost energy, shifting into longer wavelengths.
Most of the observed light waves bounce off the atoms without any absorption (elastic, or Rayleigh, scattering), while other light waves are fully absorbed by the atoms. However, Raman found that some light waves both bounce off the atoms and contribute some vibrational energy to them (inelastic scattering), which became known as Raman scattering. The amount of light that scatters and is absorbed is unique to each molecule. Raman had discovered a unique way to determine the chemistry of substances using light, today called Raman spectroscopy, a powerful tool to determine the specific chemical make-up of complex materials and molecules by looking at the scattering and absorption of light. In the end, C.V. Raman determined that the mysterious fluid and gas within the quartz crystal were water (H2O) and methane (CH4). Today, the crystal remains intact in a museum in Bangalore, residing in the institute named after Raman, the Raman Research Institute of India.
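The energy loss in Raman scattering is conventionally reported as a shift in wavenumbers (cm⁻¹): the difference between the reciprocal wavelengths of the incident and scattered light. A minimal sketch; the 435.8 nm line matches the mercury lamps of Raman's era, while the scattered wavelength of 511.6 nm is a hypothetical value chosen to illustrate a shift of about 3400 cm⁻¹, typical of an O–H vibration in water:

```python
def raman_shift_cm1(lambda_in_nm, lambda_out_nm):
    """Raman shift in wavenumbers (cm^-1) between incident and scattered light.

    The factor 1e7 converts reciprocal nanometers to reciprocal centimeters.
    """
    return 1e7 * (1.0 / lambda_in_nm - 1.0 / lambda_out_nm)

# Mercury line at 435.8 nm, inelastically scattered to 511.6 nm
# (hypothetical value), gives a shift near 3400 cm^-1.
shift = raman_shift_cm1(435.8, 511.6)
```

A positive shift means the scattered light lost energy to the molecule (a Stokes shift), exactly the phenomenon Raman recorded on his photographic plates.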
Scientists today have access to powerful machines for determining the chemical make-up of Earth’s materials, such that for nearly any type of material – solid, liquid, gas, and even plasma – the elements found within the substance under study can be determined. Each technique has the capacity to determine the presence or absence of individual elements, as well as the types of bonds found between atoms. Now that you know a little of how scientists determine the make-up of Earth materials, we will examine in detail Earth’s gases, in particular Earth’s atmosphere.
Section 4: EARTH’S ATMOSPHERE
4a. The Air You Breathe.
Take a Deep Breath
Take a deep breath. The air that you inhale is composed of a unique mix of gasses which form the Earth’s atmosphere. The Earth’s atmosphere is the gas-filled sphere representing the outermost portion of the planet. Understanding the unique mix of gasses within the Earth’s atmosphere is of vital importance to living organisms that require the presence of certain gases for respiration. Air in our atmosphere is a mix of gases with very large distances between individual molecules. Although the atmosphere does vary slightly between various regions of the planet, the atmosphere of Earth is nearly consistent in its composition: mostly Nitrogen (N2), representing about 78.08% of the atmosphere. The second most abundant gas in Earth’s atmosphere is Oxygen (O2), representing 20.95% of the atmosphere. This leaves only 0.97%, of which 0.93% is composed of Argon (Ar). This mix of Nitrogen, Oxygen, and Argon is unique in the solar system, especially compared to neighboring planets: Mars has an atmosphere of 95.32% Carbon dioxide (CO2), 2.6% Nitrogen (N2), and 1.9% Argon (Ar), while Venus has an atmosphere of 96.5% Carbon dioxide (CO2), 3.5% Nitrogen (N2), and trace amounts of Sulfur dioxide (SO2). Earth’s atmosphere is strange in its abundance of Oxygen (O2) and very low amounts of Carbon dioxide (CO2). However, evidence exists that Earth began its early history with an atmosphere similar to that of Venus and Mars, an atmosphere rich in carbon dioxide.
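The compositional contrast between the three planets can be captured in a small data structure; the percentages are the ones quoted above:

```python
# Approximate atmospheric compositions (percent by volume) from the text.
ATMOSPHERES = {
    "Earth": {"N2": 78.08, "O2": 20.95, "Ar": 0.93},
    "Mars":  {"CO2": 95.32, "N2": 2.6, "Ar": 1.9},
    "Venus": {"CO2": 96.5, "N2": 3.5},
}

def major_gas(planet):
    """Return the most abundant gas listed for a planet."""
    gases = ATMOSPHERES[planet]
    return max(gases, key=gases.get)
```

Only Earth has nitrogen, rather than carbon dioxide, as its dominant gas, the anomaly the following paragraphs set out to explain.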
Earth’s Earliest Atmosphere
Evidence for Earth’s early atmosphere comes from the careful study of moon rocks brought back to Earth during the Apollo missions, which show that lunar rocks are depleted in carbon, with only 0.0021% to 0.0225% of the total weight of the rocks composed of various carbon compounds (Cadogan et al., 1972: Survey of lunar carbon compounds). Analysis of Earth’s igneous rocks shows that carbon is more common in the solid Earth, with percentages between 0.032% and 0.220%. If Earth began its history with a rock composition similar to that found on the Moon (during its molten early history), most of the carbon on Earth would have been free as gasses of carbon dioxide and methane in the atmosphere, accounting for an atmosphere that was upwards of 1,000 times denser and composed mostly of carbon dioxide, similar to Venus and Mars. Further evidence from ancient zircon crystals indicates low amounts of carbon in the solid Earth during its first 1 billion years of history, supporting an early atmosphere composed mostly of carbon dioxide.
Today Earth’s rocks and solid matter contain the vast majority of its carbon (more than 99% of the Earth’s carbon), and only a small fraction is found in the atmosphere and ocean. During its early history, by contrast, the atmosphere appears to have been the major reservoir of carbon, containing most of the Earth’s total carbon, with only a small fraction locked up in rocks. Over billions of years, in the presence of water vapor, the amount of carbon dioxide in the atmosphere decreased, as carbon was removed from the atmosphere in the form of carbonic acid and deposited as calcium carbonate (CaCO3) into crustal rocks. Such scrubbing of carbon dioxide from the atmosphere did not appear to occur on Venus and Mars, which both lack large amounts of liquid water and water vapor on their planetary surfaces. This also likely resulted in a less dense atmosphere for Earth, which today has a density of 1.217 kg/m3 near sea level. Levels of carbon dioxide in the Earth’s atmosphere dramatically decreased with the advent of photosynthesizing life forms and calcium carbonate skeletons, which further pulled carbon dioxide out of the atmosphere and accelerated the process around 2.5 billion years ago.
Water in the atmosphere
It should be noted that water (H2O) makes up a significant component of the Earth’s atmosphere, as gas evaporated from Earth’s liquid oceans, lakes, and rivers. The amount of water vapor in the atmosphere is measured as relative humidity: the ratio (often given as a percentage) between the partial pressure of water vapor and the equilibrium vapor pressure of liquid water at a given temperature over a smooth surface. A relative humidity of 100% means that the partial pressure of water vapor is equal to the equilibrium pressure of liquid water, and water will condense to form droplets, either as rain or as dew on a glass window. Note that relative humidity is NOT an absolute measure of atmospheric water vapor content: a measured relative humidity of 100% does NOT mean that the air contains 100% water vapor, nor does 25% relative humidity mean that it contains 25% water vapor. In fact, water vapor (H2O) accounts for only between 0 and 4% of the total composition of the atmosphere, with values near 4% found in equatorial tropical regions of the planet, such as rainforests. In most places, water vapor (H2O) represents only a trace fraction of the atmosphere and is found mostly close to the surface of the Earth. The amount of water vapor air can hold is related to both its temperature and pressure: the higher the temperature and the lower the pressure, the more water molecules can be held in the air. Water molecules are at an equilibrium with Earth’s air; however, if temperatures on Earth’s surface were to rise above 100° Celsius, the boiling point of water, the majority of water on the planet would be converted to gas and would make up a significant portion of Earth’s atmosphere as water vapor. Scientists debate when temperatures on Earth’s surface dropped below this high value and when liquid oceans first appeared on the surface of the planet, but by 3.8 billion years ago, Earth appears to have had oceans present.
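The definition of relative humidity above can be sketched in code. The Tetens formula used here for the equilibrium (saturation) vapor pressure is one common empirical approximation, an assumption of this sketch rather than something specified in the text:

```python
import math

def saturation_vapor_pressure_kpa(temp_c):
    """Equilibrium vapor pressure over liquid water in kPa.

    Uses the Tetens empirical approximation (an assumed formula,
    valid over ordinary surface temperatures).
    """
    return 0.61078 * math.exp(17.27 * temp_c / (temp_c + 237.3))

def relative_humidity(partial_pressure_kpa, temp_c):
    """Relative humidity (%): partial pressure over equilibrium pressure."""
    return 100.0 * partial_pressure_kpa / saturation_vapor_pressure_kpa(temp_c)

# At 25 C the equilibrium pressure is about 3.17 kPa, so air holding
# 1.58 kPa of water vapor is at roughly 50% relative humidity.
rh = relative_humidity(1.58, 25.0)
```

Note how the same partial pressure of vapor yields a higher relative humidity in colder air, which is why dew forms as temperatures fall overnight.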
Before this was the period of a molten Earth called the Hadean (named after Hades, the underworld of Greek mythology). Lasting between 500 and 700 million years, the Hadean Earth resembled the hot surface of Venus, but was constantly bombarded with meteorite impacts and wracked by massive volcanic eruptions and flowing lava. Few, if any, rocks are known from this period of time, since so much of the Earth was molten at this point in its history. Temperatures must have dropped, leading to the appearance of liquid water on Earth’s surface and resulting in a less dense atmosphere. This started the lengthy process of cleansing the atmosphere of carbon dioxide.
For the next 1.3 billion years, Earth’s atmosphere was a mix of water vapor, carbon dioxide, nitrogen, and argon, with traces of foul-smelling sulfur dioxide (SO2) and nitrogen dioxide (NO2). It is debated whether there may have been pockets of hydrogen sulfide (H2S), methane (CH4), and ammonia (NH3), or whether these gas compounds were mostly oxidized in the early atmosphere. However, free oxygen (O2) was rare or absent in Earth’s early atmosphere; free oxygen as a gas would only appear later, and when it did, it would completely alter planet Earth.
4b. Oxygen in the Atmosphere.
How Earth's Atmosphere became enriched in Oxygen
Classified as a lithophile element, the vast majority of oxygen on Earth is found in rocks, particularly in the form of SiO2 and other silicate minerals and carbonate minerals. During the early history of Earth, most oxygen in the atmosphere was bonded to carbon (CO2), sulfur (SO2), or nitrogen (NO2). However, today free oxygen (O2) accounts for 20.95% of the atmosphere. Without oxygen in today’s atmosphere, you would be unable to breathe the air and would die quickly.
The origin of oxygen on Earth is one of the great stories of the interconnection of Earth’s atmosphere with planetary life. Oxygen in the atmosphere arose during a long period called the Archean (4.0 to 2.5 billion years ago), when life first appeared and diversified on the planet.
Early microscopic single-celled lifeforms on Earth utilized the primordial atmospheric gasses for respiration, principally CO2, SO2, and NO2. These primitive lifeforms are called the Archaea, or archaebacteria, from the Greek arkhaios, meaning primitive. Scientists refer to an environment lacking free oxygen as anoxic, which literally means without oxygen. Hypoxic means an environment with low levels of oxygen, while euxinic means an environment that is both low in oxygen and high in hydrogen sulfide (H2S). These types of environments were common during the Archean Eon.
Three major types of archaebacteria lifeforms existed during the Archean, and represent different groups of microbial single-celled organisms, all of which still live today in anoxic environments. None of these early archaebacteria had the capacity to photosynthesize, and instead relied on chemosynthesis, the synthesis of organic compounds by living organisms using energy derived from reactions involving inorganic chemicals only, typically in the absence of sunlight.
Methanogenesis-based life forms
Methanogenesis-based life forms take advantage of carbon dioxide (CO2), using it to produce methane (CH4) through a complex series of chemical reactions in the absence of oxygen. Methanogenesis requires some source of carbohydrates (larger organic molecules containing carbon, oxygen, and hydrogen) as well as hydrogen; these organisms produce methane (CH4) particularly in sediments on the sea floor, in the dark and deep regions of the oceans. Today they are also found in the guts of many animals.
Sulfate-reducing life forms
Sulfate-reducing life forms take advantage of sulfur in the form of sulfur dioxide (SO2), by using it to produce hydrogen sulfide (H2S). Sulfate-reducing life forms require a source of carbon, often in the form of methane (CH4) or other organic molecules, as well as sources of sulfur, typically near volcanic vents.
Nitrogen-reducing life forms
Nitrogen-reducing life forms take advantage of nitrogen in the form of nitrogen dioxide (NO2), using it to produce ammonia (NH3). Nitrogen-reducing life forms also require a source of carbon, often in the form of methane (CH4) or other organic molecules.
All three types of life forms exhibit anaerobic respiration, or respiration that does not involve free oxygen. In fact, these organisms produce gasses that combust or burn in the presence of oxygen, and hence oxidize to release energy. Both methane (CH4) and hydrogen sulfide (H2S) are flammable gasses and are abundant in modern anoxic environments rich in organic carbon, such as sewer systems and underground oil and gas reservoirs.
The Advent of Photosynthesis
During the Archean, a new group of organisms arose that would dramatically change the planet’s atmosphere: the cyanobacteria. As the first single-celled organisms able to photosynthesize, cyanobacteria convert carbon dioxide (CO2) into free oxygen (O2). This allows microbial organisms to acquire carbon directly from atmospheric air or ocean water. Photosynthesis, however, requires sunlight (photons), which prevents these organisms from living permanently in the dark. They would grow into large “algal” blooms seasonally on the surface of the oceans, based on the availability of sunlight. Able to live in both oxygen-rich and anoxic environments, they flourished. The oldest macro-fossils on Earth are fossilized “algal” mats called stromatolites, composed of thin layers of calcium carbonate secreted by cyanobacteria growing in shallow ocean waters. These layers of calcium carbonate are preserved as bands in the rocks, among some of Earth’s oldest fossils. Microscopically, cyanobacteria grow in thin threads encased in calcium carbonate. With burial, cyanobacteria accelerated the removal of carbon dioxide from the atmosphere, as more and more carbon was sequestered into the rock record as limestone and other buried organic matter over time.
The first appearance of free oxygen in ocean waters led to a fifth group of organisms, the iron-oxidizing bacteria, which use iron (Fe). Iron-oxidizing bacteria can use either iron oxide Fe2O3 (in the absence of oxygen) or iron hydroxide Fe(OH)2 (in the presence of oxygen). In the presence of small amounts of oxygen, these iron-oxidizing bacteria would produce solid iron-oxide molecules, which would accumulate on the ocean floor as red bands of hematite (Fe2O3). Once the limited supply of oxygen was used up by the iron-oxidizing bacteria, cyanobacteria would take over, resulting in the deposition of siderite, an iron-carbonate mineral (FeCO3). Seasonal cycles of “algal” blooms of cyanobacteria followed by iron-oxidizing bacteria would result in yearly layers (technically called varves or bands) in the rock record, oscillating between hematite and siderite. These oscillations were enhanced by seasonal temperatures: since warm ocean water holds less oxygen than colder ocean water, the hematite bands would be deposited during the colder winters, when the ocean was more enriched in oxygen.
These bands of iron minerals are common throughout the Archean, and are called Banded Iron Formations (BIFs). Banded Iron Formations form some of the world’s most valuable iron-ore deposits, particularly in the “rust-belt” of North America (Michigan, Wisconsin, Illinois, and around the Great Lakes). These regions are places where Archean aged rocks predominate, preserving thick layers of these iron-bearing minerals.
The Great Oxidation Crisis
Around 2.5 to 2.4 billion years ago, cyanobacteria quickly rose to become the most dominant form of life on the planet. The ability to convert carbon dioxide (CO2) into free oxygen (O2) was a major advantage, since carbon dioxide was still plentiful in the atmosphere and dissolved in shallow waters. This also meant that free oxygen (O2) was quickly rising in the Earth’s atmosphere and oceans, outpacing the amount of oxygen used by iron-oxidizing bacteria. With cyanobacteria unchecked, photosynthesis resulted in massive increases in atmospheric free oxygen (O2). This crisis brought a profound change in the Earth’s atmosphere toward a modern oxygen-rich atmosphere, resulting in the loss of many anoxic forms of life that had previously flourished on the planet. The Great Oxidation Crisis was the first time a single type of life form would alter the planet in a very dramatic way and cause major climatic changes. The Banded Iron Formations disappeared, and a new period is recognized beginning around 2.4 billion years ago, the Proterozoic Eon.
The Ozone Layer
An oxygen-rich atmosphere in the Proterozoic resulted, for the first time, in the formation of the ozone layer in the Earth’s atmosphere. Ozone is a molecule in which three oxygen atoms are bonded together (O3), rather than just two (O2). This results from two of the oxygen atoms sharing a double covalent bond and one of these oxygen atoms sharing a coordinate covalent bond with the third oxygen atom. This makes ozone highly reactive and corrosive, as it easily breaks apart to yield a single reactive atom of oxygen, which quickly bonds to other atoms. Oxygen gas (O2) is much more stable, as it is made up of two oxygen atoms joined by a double covalent bond. Ozone has a pungent smell and is highly toxic because it easily oxidizes both plant and animal tissue. Ozone is one of the most common air pollutants in oil and gas fields as well as large cities, and a major factor in air quality indexes.
Most ozone, however, is found high in the Earth’s atmosphere, where it forms the ozone layer between 17 and 50 kilometers above the surface of the Earth, with the highest concentration of ozone at about 25 kilometers in altitude. Ozone is created at these heights in the atmosphere through a complex interaction with ultraviolet (UV) electromagnetic radiation from the sun. Both oxygen and ozone block ultraviolet (UV) light from the sun, acting as a sun-block for the entire planet. Oxygen absorbs ultraviolet rays with wavelengths between 160 and 240 nanometers; this radiation breaks oxygen bonds, resulting in the formation of ozone. Ozone can further absorb ultraviolet rays with wavelengths between 200 and 315 nanometers, and most radiation shorter than 200 nanometers is absorbed by nitrogen and oxygen, so together oxygen and ozone block most incoming high-energy UV light.
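The absorption bands above can be connected to photon energies with E = hc/λ. The O2 bond dissociation energy of about 5.15 eV used for comparison is a standard value assumed here, not given in the text:

```python
# Photon energy E = h*c / wavelength, showing why UV near 240 nm and below
# carries enough energy to split the O2 double bond.
H = 6.62607e-34    # Planck constant, J*s
C = 2.99792e8      # speed of light, m/s
EV = 1.60218e-19   # joules per electron-volt

def photon_energy_ev(wavelength_nm):
    """Energy of a single photon of the given wavelength, in eV."""
    return H * C / (wavelength_nm * 1e-9) / EV

e_240 = photon_energy_ev(240)   # about 5.2 eV, just above O2's ~5.15 eV bond energy
e_160 = photon_energy_ev(160)   # shorter wavelength, even more energetic
```

Shorter wavelengths carry more energy per photon, which is why the hardest UV is absorbed highest in the atmosphere, before it can reach the surface.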
With oxygen’s ability to prevent incoming UV sunlight from reaching the surface of the planet, oxygen had a major effect on Earth’s climate. Acting like a vast sun-shield, oxygen blocked high-energy UV light, and as a consequence Earth’s climate began to cool drastically. Colder oceans absorbed more oxygen into their waters, resulting in well-oxygenated oceans during this period in Earth’s history.
A new group of single-celled organisms arose to take advantage of increased oxygen levels by developing aerobic respiration, using oxygen (O2) as well as complex organic compounds of carbon, and respiring carbon dioxide (CO2). These organisms had to consume other organisms in order to find sources of carbon (and other vital elements), allowing them to grow and reproduce. Because oxygen levels likely varied greatly, these single-celled organisms could also use a less efficient method of respiration in the absence of oxygen, called anaerobic respiration. When this happens, waste products such as lactic acid or ethanol are produced in addition to carbon dioxide. Alcohol fermentation uses yeasts, which convert sugars through anaerobic respiration to produce alcoholic beverages containing ethanol and carbon dioxide. Yeasts and other more complex single-celled organisms began to appear on Earth during this time.
Single-celled organisms became more complex by incorporating bacteria (Prokaryotes), either as chloroplasts, which could photosynthesize within the cell, or as mitochondria, which could perform aerobic respiration within the cell. These larger, more complex single-celled lifeforms are called the Eukaryotes, and would give rise to today’s multicellular plants and animals.
An equilibrium, or balance, between carbon dioxide-consuming/oxygen-producing organisms and oxygen-consuming/carbon dioxide-producing organisms existed for billions of years, but the climate on Earth was becoming cooler than at any time in its history. More and more of the carbon dioxide was being used by these organisms, while oxygen was quickly becoming a dominant gas within the Earth’s atmosphere, blocking more of the sun’s high-energy UV light. Carbon was continually being buried, either as organic carbon molecules or calcium carbonate, as these single-celled organisms died. This resulted in the sequestration, or removal, of carbon from the atmosphere for long periods of time.
The Cryogenian and the Snowball Earth
About 720 million years ago, the amount of carbon dioxide in the atmosphere had dropped to such low levels that ice sheets began to form. Sea ice expanded out of the polar regions toward the equator. This was the beginning of the end of the Proterozoic, as the expanding sea ice, with its much higher albedo, reflected more and more of the sun’s rays into space. A tipping point was reached in this well-oxygenated world, where ice came to cover more and more of the Earth’s surface. This was a positive feedback: expanding ice cooled the Earth by raising its albedo, resulting in runaway climate change. Eventually, according to the work and research of Paul Hoffman, the entire Earth was covered in ice. An ice-covered world, or snowball Earth, effectively killed off many of the photosynthesizing lifeforms living in the shallow ocean waters, as these areas were covered in ice, preventing sunlight penetration. Like Europa, the ice-covered moon of Jupiter, Earth was now a frozen ice planet. These great glacial events are known as the Sturtian, Marinoan, and Gaskiers glacial events, which lasted between 720 and 580 million years ago. From space, Earth would have appeared uninhabited, covered in snow and ice.
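The ice-albedo feedback can be illustrated with a standard zero-dimensional energy-balance estimate. Both the formula and the albedo values here are assumptions layered onto the narrative (the text gives neither), offered only as a rough sketch of how reflectivity controls planetary temperature:

```python
# Planetary equilibrium temperature T = [S * (1 - albedo) / (4 * sigma)]^(1/4),
# a standard zero-dimensional energy-balance estimate.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
SOLAR = 1361.0     # present-day solar constant, W m^-2 (illustrative)

def equilibrium_temp_k(albedo):
    """Effective radiating temperature (K) for a planet of given albedo."""
    return (SOLAR * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

t_modern   = equilibrium_temp_k(0.3)   # roughly 255 K with a modern-like albedo
t_snowball = equilibrium_temp_k(0.6)   # colder still for an ice-covered planet
```

Raising the albedo lowers the equilibrium temperature, which grows more ice, which raises the albedo further: the runaway loop the paragraph above describes.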
The oxygen-rich atmosphere was effectively cut off from the life forms that would otherwise draw down the oxygen and produce carbon dioxide. Life on Earth might have ended at this point in its history, were it not for the active volcanic eruptions that continued on Earth’s surface, re-releasing buried carbon back into the atmosphere as carbon dioxide. It is startling to note that if carbon dioxide had been completely removed from the atmosphere, photosynthesizing life, including all plants, would be unable to live on Earth, and without the input of gasses from volcanic eruptions, Earth would likely still be a frozen, nearly lifeless planet today.
Levels of carbon dioxide (an important greenhouse gas) slowly increased in the atmosphere, as these volcanic eruptions slowly thawed the Earth from its frozen state and the oceans became ice-free. Life survived, resulting in the first appearance of multicellular life forms and the first colonies of cells, with the advent of jellyfish, sponge-like animals, and the first colonial corals in the Ediacaran, the last moments of the Proterozoic. There followed the early diversification of multicellular plants and animals in a new era, the era of multicellular life, the Phanerozoic.
Today, carbon dioxide is a small component of the atmosphere, making up less than 0.04% of the air, but it has risen dramatically in just the last hundred years, to levels above 0.07% in many regions of the world, nearly doubling the amount of carbon dioxide in the Earth's atmosphere within a single human lifespan. A new climatic crisis faces the world today, one driven by rising carbon dioxide in the atmosphere and the rising global temperatures that follow.
4c. Carbon Dioxide in the Atmosphere.
Her body was found when the vault was opened. Ester Penn lay inside the large locked bank vault at the Depository Trust Building on 55 Water Street in Lower Manhattan, New York. Security cameras revealed that no one had entered or left the bank vault after 9pm. Her body showed no signs of trauma, no forced entry was made into the vault, and nothing was missing. Ester Penn was a healthy 35-year-old single mother of two, who was about to move into a new apartment in Brooklyn that overlooked the Manhattan skyline. Now she was dead.
On August 21st, 1986, the small West African villages near Lake Nyos became a ghastly scene of death, when every creature within the villages, including 1,746 people, died suddenly in the night. The silent morning brought no hum of insects, no cries of roosters, no children playing in the streets. Everyone was dead.
Each mysterious death has been attributed to carbon dioxide toxicity. The human body can tolerate levels up to 5,000 ppm, or 0.5%, carbon dioxide, but levels above 3 to 4% can be fatal. A medical condition called hypercapnia occurs when the lungs fill with elevated carbon dioxide, which causes respiratory acidosis. Normally, the body is able to expel the carbon dioxide produced during metabolism through the lungs, but if there is too much carbon dioxide in the air, the blood becomes enriched in carbonic acid (CO2 + H2O → H2CO3), resulting in partial pressures of carbon dioxide above 45 mmHg.
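The unit bookkeeping here trips many readers up, since concentrations are quoted in both ppm and percent. A minimal sketch (the thresholds are the ones quoted above; the three-band classification is a simplification for illustration):

```python
# Convert between ppm and percent, and classify CO2 levels using the
# figures quoted in the text: 5,000 ppm (0.5%) tolerable, 3-4% fatal.

def ppm_to_percent(ppm):
    # 10,000 ppm equals 1%
    return ppm / 10_000

def percent_to_ppm(pct):
    return pct * 10_000

def co2_risk(ppm):
    # Simplified three-band classification, not medical guidance.
    if ppm <= 5_000:
        return "within tolerable limit"
    elif ppm < 30_000:
        return "elevated"
    else:
        return "potentially fatal"

print(ppm_to_percent(5_000))   # 0.5 (percent)
print(co2_risk(40_000))        # potentially fatal (4% CO2)
```

The same conversion explains figures used throughout this chapter: today's ~410 ppm is only 0.041%, two orders of magnitude below the fatal range.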
For the villagers around Lake Nyos, carbon dioxide was suddenly released from the lake, where volcanic gases had enriched the waters with the gas; in the case of Ms. Penn, she released the carbon dioxide herself when she pulled a fire alarm from within the vault, which triggered a spray of carbon dioxide as a fire suppressant. Divers, submarine operators, and astronauts all worry about the effects of too much carbon dioxide in the air they breathe. No episode involving carbon dioxide is more dramatic than the ill-fated Apollo 13 mission to the moon.
“Houston we have a problem.” – Jack Swigert
On April 14th, 1970, at 3:07 Coordinated Universal Time, 200,000 miles from Earth, three men wedged in the outbound Apollo 13 spacecraft heard an explosion (NASA, 2009). A moment later astronaut Jack Swigert transmitted a message to Earth: "Houston, we've had a problem here." One of the oxygen tanks on board the Service Module had exploded, which also ripped a hole in a second oxygen tank and cut power to the spacecraft. Realizing the seriousness of the situation, the crew quickly scrambled into the Lunar Module. The spacecraft was too far from Earth to turn around; instead the crew would have to navigate the spacecraft around the far side of the moon and swing it back to Earth if they hoped to return alive. The Lunar Module now served as a life raft strapped to a sinking ship, the Service Module. The improvised life raft was not designed to hold a crew of three for the four-day journey home. Oxygen was conserved by powering down the spacecraft. Water was conserved by shutting off the cooling system, and drinking water was rationed to just a few ounces a day. There remained an additional worry: the buildup of carbon dioxide in the space capsule. With each exhaled breath, the crew expelled air with about 5% carbon dioxide. This carbon dioxide would build up in the Lunar Module over the four-day journey, and result in death by hypercarbia, the buildup of carbon dioxide in the blood. The crew had to figure out how long the air would remain breathable in the capsule.
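The crew's problem can be framed as a simple budget: a sealed volume, a steady production of CO2, and a danger threshold. The numbers below are illustrative assumptions for a back-of-the-envelope sketch, not NASA figures:

```python
# Hedged back-of-the-envelope estimate: how long until exhaled CO2
# makes a small sealed cabin dangerous, with no scrubbing at all?
# All constants are assumed illustrative values.

CABIN_VOLUME_L = 6_700       # assumed habitable cabin volume, litres
CREW = 3
CO2_PER_PERSON_L_MIN = 0.3   # assumed resting CO2 output, litres/minute
DANGER_FRACTION = 0.04       # ~4% CO2, the fatal range cited in the text

def minutes_to_danger(volume_l=CABIN_VOLUME_L, crew=CREW,
                      rate=CO2_PER_PERSON_L_MIN, limit=DANGER_FRACTION):
    """Minutes until CO2 reaches `limit`, ignoring scrubbing and leaks."""
    produced_per_min = crew * rate
    return volume_l * limit / produced_per_min

hours = minutes_to_danger() / 60
print(f"~{hours:.0f} hours to 4% CO2 without scrubbers")
```

Under these assumptions the air turns dangerous within a matter of hours, not days, which is why the crew's famous improvised scrubber adapter was a matter of life and death.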
From Earth, television broadcasters reported the grave seriousness of the situation from Mission Control. The crew of Apollo 13 had to figure out the problem of the rising carbon dioxide in the air of the Lunar Module, if they were going to see Earth alive again.
The Keeling Curve
In 1953, Charles "Dave" Keeling arrived at Caltech in Pasadena, California on a postdoctoral research grant to study the extraction of uranium from rocks. He was assigned to the lab of Harrison Brown, who proved to be a dynamic figure. Brown had played a central role in the development of the nuclear bombs used against Japan. During the war, he had invented a new way to produce plutonium, which allowed upwards of 5 kg (11 lbs) of plutonium to be added to the "Fat Man" bomb that was dropped on the city of Nagasaki, killing nearly 100,000 people in August 1945. Afterward, Brown was crushed by the personal responsibility he felt for these deaths. He penned a book, Must Destruction Be Our Destiny?, in 1945, and began traveling around the world giving lectures on the dangers of nuclear weapons. Harrison Brown had previously advised Claire Patterson, who, while at the University of Chicago, was the first to radiometrically date meteorites using lead isotopes and determine the age of the Earth at 4.5 billion years. In 1951, Harrison Brown divorced his wife, remarried, and took a teaching position at Caltech, and it was here that Keeling joined his lab in 1953. Initially, Keeling was set to the task of extracting uranium from rocks, but his interests turned to the atmospheric sciences and the chemical composition of the air, in particular measuring the amount of carbon dioxide.
Keeling set about making an instrument in the lab to measure the amount of carbon dioxide in air using a tool called a manometer. A manometer is a cumbersome series of glass tubing which measures the pressures of isolated air samples. Air samples were captured using a spherical glass flask, cleared of air in a vacuum and locked closed. Wrapped in canvas so that the fragile glass would not break, the empty flask would be opened outside, and the captured gas that flowed into it would be taken back to the lab to be analyzed. The manometer was first developed to measure the amount of carbon dioxide produced in chemistry experiments involving the combustion of hydrocarbons, allowing chemists to know how much carbon was in a material. Keeling used the same technique to determine the amount of carbon dioxide in the atmosphere; his first measured value was 310 ppm, or 0.0310%, found during a series of measurements made at Big Sur near Monterey, California.
Interestingly, Keeling found that concentrations of carbon dioxide increased slightly during the night. One hypothesis was that the gas, being heavier than air, sank during the colder portions of the day. Carbon dioxide, which has a molar mass of 44.01 g/mol, compared to a molar mass of 32 g/mol for oxygen gas (O2) and 28 g/mol for nitrogen gas (N2), is a significantly heavier gas and will sink into lower altitudes, valleys and basins. Unless the sample was taken from places where carbon was being combusted, such as power plants, factories, or highways, repeated experiments showed that carbon dioxide did not vary from place to place and remained near 310 ppm.
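The claim that CO2 sinks follows directly from Avogadro's law: at the same temperature and pressure, equal volumes of gas hold equal numbers of molecules, so density scales with molar mass. A quick sketch using the molar masses quoted above:

```python
# Density of each gas at standard temperature and pressure (STP),
# where one mole of any ideal gas occupies about 22.4 litres.

MOLAR_MASS_G_MOL = {"CO2": 44.01, "O2": 32.00, "N2": 28.02}
MOLAR_VOLUME_L = 22.4   # litres per mole at STP

for gas, m in MOLAR_MASS_G_MOL.items():
    density = m / MOLAR_VOLUME_L          # grams per litre
    print(f"{gas}: {density:.2f} g/L")

# CO2 works out to ~1.96 g/L versus ~1.25 g/L for N2, so in still,
# cold air, undisturbed CO2 tends to pool in valleys and basins.
```

This is the same physics behind the Lake Nyos disaster described earlier: a dense, invisible layer of CO2 flowed downhill into the villages.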
However, this diurnal cycle intrigued him, and he undertook another analysis to measure the isotopic composition of the carbon, to trace where the carbon was coming from. The ratio of carbon-13 (13C) to carbon-12 (12C), expressed as delta C-13 or δ13C, is higher in molecules composed of carbon bonded to oxygen, and lower in molecules composed of carbon bonded to hydrogen, because of the difference in atomic mass. Changes in this ratio indicate the source of the carbon in the air. If δ13C decreases, the source of the carbon is hydrocarbons, including the burning or combustion of organic compounds (wood, petroleum, coal, natural gas); if δ13C increases, the source is carbonates, including limestone and other rocks heated during volcanic emissions. Using a graph called a "Keeling plot," Keeling found that as atmospheric carbon dioxide increased in the air, the value of δ13C decreased. This indicated that the primary source or flux of carbon dioxide in the atmosphere is the interchange with organic compounds, or hydrocarbons. The change in daily values appeared to be caused by the drawdown of carbon dioxide by photosynthesizing plants during the light of day, a process that stops during the darkness of night, allowing carbon dioxide to increase at night.
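The delta notation itself is a simple normalization: a sample's 13C/12C ratio is compared to a reference standard and expressed in parts per thousand (per mil). A sketch of the calculation, using the commonly cited PDB reference ratio:

```python
# delta-13C: per-mil deviation of a sample's 13C/12C ratio from the
# PDB reference standard. Negative values mean 13C-depleted carbon,
# the fingerprint of burned plant matter and fossil fuels.

R_STANDARD = 0.0112372   # 13C/12C ratio of the PDB reference standard

def delta13c(r_sample, r_standard=R_STANDARD):
    """Return delta-13C in per mil (parts per thousand)."""
    return (r_sample / r_standard - 1) * 1000

# A hypothetical hydrocarbon-derived sample with a lower ratio:
print(round(delta13c(0.0109), 1))   # about -30 per mil
```

Because photosynthesis preferentially takes up the lighter 12C, carbon that passed through plants (and the fossil fuels made from them) carries strongly negative δ13C, which is exactly the trend Keeling saw as atmospheric CO2 rose.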
Eagerly, Keeling wanted to take this study to the next level by looking at yearly, or annual, changes in atmospheric carbon dioxide. He wrote grant proposals to further his study and was awarded funds by the Weather Bureau as part of the International Geophysical Year (1957-1958). Using these funds Keeling purchased four new infrared gas analyzers. Since carbon dioxide absorbs light at four peaks (1437, 1955, 2013, and 2060 nanometers) in the infrared spectrum, light at these wavelengths will be absorbed by a gas that contains carbon dioxide, and the number of photons at these wavelengths can be measured to determine how much carbon dioxide is in the air. Using this more advanced tool, Keeling hoped to collect measurements from remote locations around the world. Two of the locations proposed for measuring the yearly cycle were the South Pole station in Antarctica and the top of Mauna Loa in Hawaii. Hawaii was more conducive to staffing personnel for a full year than Antarctica, and only a few measurements were made from ships that passed near the South Pole in 1957. The first measurement using the new machine in Hawaii was 313 ppm (0.0313%), taken in March of 1958. For the next year, Keeling and his staff measured the changes in carbon dioxide. From March 1958 to March 1960, Keeling measured a rise up to 315 ppm and a drop down to 310 ppm, indicating an oscillating cycle of rising and falling carbon dioxide with the seasons.
The photosynthesizing biosphere of the planet is unequally distributed across Earth, with most of the dense boreal forests positioned in the Northern Hemisphere. During the Northern Hemisphere spring and summer, carbon dioxide is pulled from the atmosphere as these dense forests grow and green, while in the Northern Hemisphere fall and winter carbon dioxide returns to the atmosphere as autumn leaves fall from the trees of deciduous forests and plants go dormant for the cold winter. As an annual cycle, the amount of carbon dioxide in the atmosphere is a rhythmic pulse, increasing and decreasing, with the highest levels in May, just before the summer growing season, and the lowest levels in late September and early October, at its end.
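This rhythmic pulse riding on a rising baseline can be captured in a toy model: a linear trend plus a seasonal sine wave. The trend and amplitude below are illustrative round numbers, not fitted values:

```python
# Toy model of the Keeling curve's shape: a rising trend plus an
# annual oscillation. Constants are illustrative, not fitted.
import math

def co2_toy(years_since_1958, start=315.0, trend=1.5, amplitude=3.0):
    """Approximate CO2 (ppm) at a fractional year after 1958."""
    seasonal = amplitude * math.cos(2 * math.pi * years_since_1958)
    return start + trend * years_since_1958 + seasonal

# Within a single year, the value oscillates around the rising trend:
peak, trough = co2_toy(10.0), co2_toy(10.5)
print(round(peak - trough, 2))   # about 5 ppm: two amplitudes minus half a year of trend
```

The key qualitative feature this reproduces is that each year's seasonal minimum still ends up higher than the minimum a few years before, because the trend term never stops climbing.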
After funding from the International Geophysical Year had expired, funds were provided by the Scripps Institution of Oceanography, but in 1964 congressional budget cuts nearly shut the research down. Dave Keeling worked relentlessly to maintain the collection of data, securing grants and funding from various government agencies. His dogged determination likely stemmed from the discovery that carbon dioxide was increasing at a faster and faster rate each year. In 1970, the carbon dioxide in Hawaii was at 328 ppm (0.0328%); in 1980, 341 ppm (0.0341%); in 1990, 357 ppm (0.0357%); in 2000, 371 ppm (0.0371%). In 2005, Dave Keeling passed away, but the alarming trend of increasing carbon dioxide had captured the attention of the public. In 2006, Al Gore produced the documentary "An Inconvenient Truth" about the rise of carbon dioxide in the atmosphere, drawing on Keeling's research and a prior scientific report made in 1996, when Gore was Vice President. The ever-increasing plot of carbon dioxide in the atmosphere became known as the Keeling curve. Like the air in the capsule of Apollo 13, the amount of carbon dioxide was increasing dramatically. Today, in 2020, carbon dioxide has risen above 415 ppm (0.0415%) in Hawaii. By extending the record back in time using air bubbles trapped in ice cores, carbon dioxide is seen to have doubled in the atmosphere from 200 ppm to over 400 ppm, with much of the increase in the last hundred years.
Isotopic measurements document where much of this increase in carbon dioxide is coming from. δ13C values are more negative today than they have ever been, indicating emission of carbon dioxide dominantly from hydrocarbons (organic molecules of carbon), such as wood, coal, petroleum and natural gas. The ever-increasing human population and exponential use of hydrocarbon fuels, coupled with deforestation and increased wildfires, are the source of this increased carbon dioxide. This increase is far greater than the carbon dioxide annually drawn down by the spring regrowth of forests in the Northern Hemisphere. Just as oxygen dramatically changed the atmosphere of the Proterozoic, the exponential release of carbon dioxide is dramatically changing the atmosphere of Earth today.
In the last twenty years, scientists have begun measuring carbon dioxide in the atmosphere from a wider variety of locations. In Utah, more than a dozen stations today monitor carbon dioxide in the atmosphere. In Salt Lake City, carbon dioxide in 2020 typically spikes to values near 700 ppm (0.07%) during January and February, a result of carbon dioxide gas sinking into the valleys along the Wasatch Front and the large urban population using hydrocarbon fuels, while values in Fruitland, in rural eastern Utah, reach near 500 ppm (0.05%). This means the carbon dioxide measured in Hawaii as part of the Keeling Curve represents a minimum value compared to Utah, given the island's isolated location in the Pacific Ocean. Values found in urban cities can be nearly twice the amount of carbon dioxide currently observed at the Hawaii monitoring station. This makes these urban centers a particular health risk for people suffering from respiratory distress syndrome associated with diseases such as the coronavirus, which killed over 100,000 Americans in 2020.
In 2009, NASA’s Orbiting Carbon Observatory satellite failed during launch and was lost. In 2014, the Orbiting Carbon Observatory-2 satellite was more successful, and has provided some of the best measurements of carbon dioxide from space, using infrared light absorption across the entire Earth. Dramatically, carbon dioxide is most abundant in the atmosphere above the Northern Hemisphere compared to the Southern Hemisphere, with the highest concentrations during the winter months over eastern North America, Europe and eastern Asia. Carbon dioxide is mostly concentrated below 10 to 15 kilometers in the atmosphere, and rises dramatically from major urban centers and large forest fires. The Orbiting Carbon Observatory-3 was successfully launched into space in 2019 and is installed on the International Space Station. This instrument measures carbon dioxide on a finer scale than the Orbiting Carbon Observatory-2, while also examining reflected light from vegetation to monitor global desertification.
Predicting carbon dioxide in the atmosphere of the future
Albert Bartlett was a wizened professor in the University of Colorado’s physics department who spent his scientific career on one key aspect: teaching students how to understand the exponential function. What is an exponential function, and how does it relate to predictions of carbon dioxide in the atmosphere of Earth’s future? An exponential function is best described in a famous Persian story, first told by Ibn Khallikan in the 13th century.
A variation of the story goes something like this: a wealthy merchant had a lovely daughter, whom the king desired to marry. The merchant, knowing how much the king was in love with his daughter, offered him a deal. In 64 days, he could marry his daughter if, each day, the king paid him pennies on one square of a chessboard: one penny on the first square, and each day double the number of pennies of the prior square, until all 64 squares on the chessboard were filled. The king had millions of dollars in his vault; filling a chessboard with pennies was easy. He agreed. On the first day, the king laid down one penny on the first square. He laughed at how small the number was on the second day, as he laid down 2 pennies, and on the third day only 4 pennies. He laughed and laughed; he had spent a total of only 7 cents going into day four, when he laid down 8 pennies. But things started to change on the second row of the chessboard: by the 16th day he had to lay out 32,768 pennies, or $327.68. By the third row, the values increased more dramatically; to fill that row he had to come up with $42,949,672.96, or more than 42 million dollars, and to fill the fourth row, $10,995,116,277.76, or more than 10 billion dollars. In fact, if the king were to fill all 64 squares, the total would come to $184,467,440,737,095,516.16, over 184 quadrillion dollars! He ran out of money, and could not afford to marry the merchant’s daughter. Exponential functions can work in the opposite direction too, by repeated halving, as observed with the radioactive decay used in radiometric dating.
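The story's arithmetic can be checked in a few lines. Pennies on square n are 2^(n-1), and the running total after n squares is 2^n - 1:

```python
# The merchant's chessboard: a minimal sketch of exponential doubling.

def pennies_on_square(n):
    """Pennies placed on square n (1-indexed): 1, 2, 4, 8, ..."""
    return 2 ** (n - 1)

def total_pennies(n):
    """Running total after filling n squares: 2^n - 1."""
    return 2 ** n - 1

# Square 16, the end of the second row, matches the story's $327.68:
print(pennies_on_square(16) / 100)     # 327.68 (dollars)

# All 64 squares: about 184 quadrillion dollars.
print(f"${total_pennies(64) / 100:,.2f}")
```

The striking feature of the doubling sequence is that each new square holds more than all previous squares combined, which is why the king's early laughter was so misplaced.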
The critical question for carbon dioxide in the Earth’s atmosphere is whether it is growing exponentially or linearly over time. One way to test this is to write a mathematical function that best explains the growth of carbon dioxide in the atmosphere, which can then be used to project its future growth. With the complete data set from the Keeling Curve, from 1958 to the present, we can make some predictions of carbon dioxide in the future. The annual mean rate of growth of CO2 for each year is the difference in concentration between the end of December and the start of January of that year. Between 1958 and 1959 the rate of growth was 0.94 ppm, but between 2015 and 2016 the rate of growth jumped to 3.00 ppm. In the last twenty years, the rate of growth has never been less than 1.00 ppm, indicating an accelerating trend of growth, more like an exponential function than a linear one.
The best-fit mathematical equation using the mean, or average, carbon dioxide measured in Hawaii each year works out to be a polynomial function of the year, one whose growth rate itself increases with time.
This approximate model explains the data recorded since 1958 and can be projected into the future. As a model, it is only a prediction, and serves only as a hypothesis that can be refuted by continued data collection each year. Using this mathematical model, we can insert any year and see what the predicted value of carbon dioxide would be. For the year 2050, the predicted value is 502.55 ppm, with an annual growth rate of 3.28 ppm per year. Like the king in the story, you may laugh at this value. Compared to 2020 values around 410 ppm, it is well below the dangerous levels that would make the air unbreathable, around 1 to 4%, or 10,000 ppm to 40,000 ppm. In fact, the air would be quite breathable, as 502.55 ppm is only about 0.05%. The growth rate, however, accelerates each year. In 2100, eighty years from the authorship of this text, the predicted value is 696.60 ppm, with a 4.50 ppm annual growth rate.
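A reader can reproduce the flavor of this model from the values quoted in this chapter alone. The sketch below is an illustrative reconstruction, not the book's exact equation: it fits a quadratic (a polynomial whose growth rate rises linearly each year) to the Mauna Loa annual values quoted earlier, using only pure-Python least squares:

```python
# Illustrative quadratic fit y = c0 + c1*x + c2*x^2 to the Mauna Loa
# CO2 values quoted in the text, then projected forward. This is an
# assumed reconstruction of the chapter's model, not its exact equation.

def fit_quadratic(xs, ys):
    """Least-squares degree-2 polynomial fit via the 3x3 normal equations."""
    s = [sum(x**k for x in xs) for k in range(5)]           # sums of x^0..x^4
    t = [sum(y * x**k for x, y in zip(xs, ys)) for k in range(3)]
    A = [[s[0], s[1], s[2]],
         [s[1], s[2], s[3]],
         [s[2], s[3], s[4]]]
    b = t[:]
    # Gaussian elimination with partial pivoting.
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for c in range(i, 3):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):       # back substitution
        coef[i] = (b[i] - sum(A[i][c] * coef[c] for c in range(i + 1, 3))) / A[i][i]
    return coef

# Annual-mean CO2 values (ppm) as quoted in this chapter.
years = [1958, 1970, 1980, 1990, 2000, 2020]
ppm   = [315,  328,  341,  357,  371,  415]
xs = [y - 1958 for y in years]          # offset years for numerical stability
c0, c1, c2 = fit_quadratic(xs, ppm)

def predict(year):
    x = year - 1958
    return c0 + c1 * x + c2 * x * x

print(round(predict(2050), 1))   # roughly 490-500 ppm, near the text's 502.55
```

With only six data points this crude fit already lands close to the chapter's 2050 figure; the full Keeling data set simply sharpens the same accelerating curve.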
This would likely be worse in major cities like Salt Lake City, which would experience cold-winter bad-air days with carbon dioxide around 1,000 ppm. While not fatal, such high values might cause health problems for citizens with poor respiration: newborn infants, people suffering from viral respiratory diseases like coronavirus and influenza, and elderly people with asthma or diabetes. The next jump, to the year 2150, predicts a level of 954.15 ppm, with a growth rate of 5.77 ppm. At this point, most of the Northern Hemisphere would contain unhealthy air; sporting events and outdoor recreation would be inadvisable, and although people could still breathe outdoor air, air filtration systems would likely be developed to keep carbon dioxide levels lower indoors. In 2200, things begin to get worse. Carbon dioxide would reach 1,275.20 ppm, with an annual growth of 7.04 ppm each year. At this point, topographic basins and cold regions near sea level would cause respiratory issues, especially on cold January and February nights.
By the year 2300, carbon dioxide would be at levels around 2,107.80 ppm, or 0.2%, with a growth rate of 9.58 ppm. At this point, people could spend only limited time outside before coming home to filtered air with lower carbon dioxide. Sporting events would move indoors, as exertion could cause respiratory failure. In the year 2400, carbon dioxide would be at 3,194.40 ppm (0.3%) with an annual growth of 12.12 ppm, and by the year 2500, carbon dioxide would be at 4,535.00 ppm, with an annual growth of 14.66 ppm per year. At this point, beyond the recommended healthy 8-hour exposure limit, the outside air would become nearly unbreathable for extended periods of time.
By the year 2797, the level of carbon dioxide would reach 1%, or 10,000 ppm, in Hawaii, making the atmosphere unbreathable across much of the Northern Hemisphere. Millions, even billions, of people would die across the planet, unable to breathe the air on days when carbon dioxide rose above the threshold. If this model holds and the prediction is correct, Earth will be rid of humans and most animal life in the next 777 years. This is a small skip of time: the story of the chessboard was first written down by the Persian scholar Ibn Khallikan about the same length of time in the past. If a scholar living in the 13th century wrote an allegory that is still valid today, what words of knowledge are you likely to pass on to future generations in the 28th century? Is this mathematical model a certainty, the ultimate track to human extinction?
One of the great scholars of exponential growth was Donella Meadows, who wrote in her 1996 essay “Envisioning a Sustainable World”: “We talk easily and endlessly about our frustrations, doubts, and complaints, but we speak only rarely, and sometimes with embarrassment, about our dreams and values.” A sustainable world, in which carbon dioxide does not rise above these thresholds, requires that you and the global community avert this rise while Earth is still on the first few squares of this global chessboard. If you would like to play with various scenarios for averting such a future, check out the Climate Interactive website, https://www.climateinteractive.org, and its computer models based on differing policy reductions in carbon dioxide released into the atmosphere. Donella Meadows further wrote that “The best goal most of us who work toward sustainability offer is the avoidance of catastrophe. We promise survival and not much more. That is a failure of vision.” Written nearly twenty-five years ago, when carbon dioxide was only 362 ppm, her optimism may seem a mismatch to the more urgent fears of today, with carbon dioxide having reached 410 ppm in the atmosphere. Much like the crew of Apollo 13, you, and everyone you love, are trapped in a space capsule breathing a rapidly degrading atmosphere.
Was it ever this high? Carbon dioxide in ancient atmospheres
One of the criticisms of such mathematical models is that there is a finite amount of carbon on Earth, and that resources of hydrocarbons (wood, coal, petroleum, and natural gas) will be depleted well before these high values are reached. Geological and planetary evidence from Mars and Venus, as well as evidence regarding the atmosphere of the Archean Eon, indicates that high percentages of carbon dioxide, upwards of 95%, are possible if the majority of Earth’s carbon is released back into the atmosphere. Such a carbon dioxide dominated atmosphere is unlikely, given that it would require all sequestered calcium carbonate to be transformed into carbon dioxide; however, there have been periods in Earth’s past when carbon dioxide levels were higher than they are today.
Direct measurement of air samples extends back only some 800,000 years, since ice-core data, and the bubbles of air they trap, are only as old as the oldest and deepest buried ice in Greenland and Antarctica. Values from ice cores demonstrate that carbon dioxide over the last 800,000 years varied only between 175 and 300 ppm, and never rose above 400 ppm like today's values. To find events in the past when carbon dioxide was higher, we have to look back millions of years. However, directly measuring air samples is not possible that far back in time, so scientists have developed a number of proxies to determine carbon dioxide levels in the distant past.
The Stomata of Fossil Leaves
Photosynthesis in plants requires gas exchange, in which carbon dioxide is taken in and oxygen is released. This gas exchange happens through tiny openings on the underside of leaves called stomata. The number of stomata in leaves is balanced: the more carbon dioxide in the air, the fewer stomata are needed, while less carbon dioxide results in more stomata. Plants minimize the number of stomata used for gas exchange because an excess of stomata leads to water loss, and the plant will dry out.
Looking at fossil leaves under microscopes and counting the number of stomata per area, together with greenhouse experiments calibrating plants grown under controlled levels of carbon dioxide, has allowed scientists to extend the record of atmospheric carbon dioxide back in time. There are a few limitations to this method. First, only plants living both today and in the ancient past can be used. Most often these studies use Ginkgo and Metasequoia leaves, living fossil plants with long fossil records extending back over 200 million years. Leaves of these plants must be found as well-preserved fossils for the specific periods of time in question. The proxy works best with carbon dioxide concentrations between 200 ppm and about 600 ppm. At values above 600 ppm, the number of stomata has already decreased to a minimum, and there is little further gain for the plant in having fewer openings at such high concentrations of carbon dioxide. Published values above 600 ppm all likely reflect similar or nearly the same density of stomata: a plant grown in 800 ppm carbon dioxide would have a similar density of stomata to a plant grown in 1,500 ppm. This makes it difficult to calculate ancient carbon dioxide when values were high, above 600 ppm.
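The saturation problem can be made concrete with a toy calibration. The function below is a hypothetical inverse relationship (the constants are invented for illustration, not a published calibration), but it reproduces the behavior described above: density falls as CO2 rises, then flattens at a floor:

```python
# Hypothetical stomatal-density calibration: density falls inversely
# with CO2, but bottoms out at a floor, so the proxy saturates above
# ~600 ppm. Constants are illustrative assumptions only.

def stomatal_density(co2_ppm, k=30_000.0, floor=50.0):
    """Assumed density (stomata per mm^2) for leaves grown at co2_ppm."""
    return max(k / co2_ppm, floor)

for ppm in (200, 400, 600, 800, 1500):
    print(f"{ppm:>5} ppm -> {stomatal_density(ppm):.0f} stomata/mm^2")
```

Inverting this curve works well on its steep limb (200-600 ppm), but above the floor the 800 ppm and 1,500 ppm leaves are indistinguishable, which is exactly why the proxy cannot resolve high ancient CO2 levels.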
Studies of fossil leaves, coupled with other proxies for atmospheric carbon dioxide in the rock record, demonstrate that carbon dioxide has remained below 500 ppm over the last 24 million years, although near 16.3 million years ago, during the Middle Miocene Climate Optimum, values are thought to have been between 500 and 560 ppm, and fossil evidence of a warmer climate has been observed for this period. During this period ice sheets were absent from Greenland, and much of western North America and Europe was covered in arid environments and savannah-like woodlands such as those seen today in southern Africa.
Extending further back, during the Eocene Epoch between 55.5 and 34 million years ago, carbon dioxide values are thought to have been much higher than today, above 600 ppm. The Eocene Epoch was a particularly warm period in Earth’s past. Crocodile fossils are abundant in Utah, as are fossil palms. The environment of Utah was similar to that of Louisiana today, wet and humid, with no winter snow, as evidenced by the lack of glacial deposits. The warmer climate allowed crocodiles, early primates and semitropical forests to flourish in eastern Utah and across much of North America, in many places that today are cold, dry deserts. The buildup of carbon dioxide gradually peaked about 50 million years ago, during the Early Eocene Climate Optimum. During this time large conifer forests of Metasequoia grew across the high northern Arctic, which was inhabited by tapirs. A much more abrupt event happened 55.5 million years ago, called the Paleocene-Eocene Thermal Maximum, in which carbon dioxide is thought to have doubled over a short duration of about 70 thousand years. The Paleocene-Eocene Thermal Maximum, or PETM, is thought to have raised carbon dioxide values in the atmosphere above 750 ppm, and likely upward to 1,000 to 5,000 ppm. Values returned to lower am