A person who possessed all knowledge would have to know what a terabyte is, how to wire a plug, how snails reproduce and, say, how to write "Lenin" in Chinese. It is simply too much.
We must start by quantifying how much human knowledge there is. Some say that Aristotle was the last living person to possess all of it; others assert that in two million years of human evolution no one could ever have known everything, not even at the dawn of our species.
Chris Stringer, a paleoanthropologist at the Natural History Museum in London, puts it this way: "Given the diverse environments in which human beings lived even before they left Africa, I doubt that a human being could maintain all the information necessary to survive throughout the area of human distribution".
As our body of knowledge grew, at some point it exceeded a brain’s capacity to house it.
Since ancient times there have been attempts to compile all human knowledge into great universal libraries such as the Library of Alexandria, and the encyclopedist movement of the 17th and 18th centuries was the first real attempt to organize and collect all of it.
The 15th edition of the Encyclopedia Britannica (the Britannica) adopted the objective of systematizing all human knowledge
The English-language Britannica is the oldest encyclopedia still being published (although no longer on paper), and its size has remained broadly constant for the last seventy years: around forty million words across 500,000 topics. In 1974, the 15th edition adopted a third objective: to systematize all human knowledge.
There was a time when reading the Britannica in its entirety was a challenge for the world's cultural elite, and reading the whole thing was said to take between three and twenty-two years. When Fat'h Ali became the Shah of Persia in 1797, he received a complete set of the third edition, which he read in three years.
The writer George Bernard Shaw claimed to have read the ninth edition in its entirety, with the exception of the scientific articles. Explorers also made use of it: Philip Beaver, a Royal Navy officer in the 18th century, read it during a maritime expedition, and the American admiral Richard Evelyn Byrd took the Britannica as reading material during the five months he spent at the South Pole in 1934.
Reading the Britannica carried enormous prestige, since it summed up all human knowledge. Today, no one would think of reading all the information on the internet, no matter how long they stayed at the Pole.
Knowledge grows at an unattainable rate
Buckminster Fuller, an eccentric American designer and futurist, created the "Knowledge Doubling Curve", which is useful for this game.
Fuller said that if we take the knowledge accumulated up to AD 1 as a baseline, it took humanity 1,500 years to double it (to go from 1 to 2); the next doubling (from 2 to 4) took 250 years; by 1900 humanity had produced eight times the knowledge of year 1; by 1945 knowledge was doubling every 25 years; by 1975, every 12 years; and currently it is estimated to be doubling every 2 years.
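Fuller's curve is easy to sketch numerically. Here is a minimal Python sketch (the milestone years come from the figures above; the code simply counts cumulative doublings against the AD 1 baseline):

```python
# Fuller's "Knowledge Doubling Curve": each milestone marks one more
# doubling of humanity's knowledge relative to the baseline at AD 1.
milestones = [
    (1, 0),     # baseline: knowledge = 1 unit
    (1500, 1),  # first doubling took 1,500 years
    (1750, 2),  # second doubling took 250 years
    (1900, 3),  # by 1900, eight times the knowledge of year 1
]

for year, doublings in milestones:
    print(f"Year {year:>4}: {2 ** doublings}x the knowledge of AD 1")
```

Running it shows the acceleration: each step between milestones is shorter than the last, even though every step represents the same relative growth.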
The upshot is that although we know a little more with each passing day, in relative terms we know an ever-smaller share of the total knowledge that exists.
According to IBM, the construction of the "Internet of Things" will lead to the duplication of knowledge every 12 hours
Today things are not so simple. According to experts, different types of knowledge have different growth rates. For example, nanotechnology knowledge doubles every two years and clinical knowledge doubles every 18 months. But, on average, human knowledge doubles every 13 months. According to IBM, building the “Internet of Things” will lead to a doubling of knowledge every 12 hours.
In 2003, Peter Lyman and Hal R. Varian of the University of California, Berkeley published a fascinating study, "How Much Information?", analyzing how recorded information grows. They estimated that it grew by 30% each year from 1999 to 2002, and that between one and two exabytes of new information (an exabyte is a billion gigabytes, or 10^18 bytes) were produced each year, 60% of it digital.
Interestingly, bearing in mind that the study dates from 2003, Lyman and Varian found that the surface web contained about 170 terabytes of information; by volume, that is seventeen times the size of the printed collections of the US Library of Congress. Here you can read the interesting findings of that study.
In 2019, there were 40 times more bytes of data on the internet than stars in the observable universe.
Every year, the cloud software company DOMO publishes a report on the amount of data collected every minute. The 2019 report, "Data Never Sleeps 7.0", states that there were 40 times more bytes of data that year than stars in the observable universe.
In DOMO's 2018 chart, the company claims that "more than 2.5 quintillion bytes of data are created every day" and that every person on Earth generates an estimated 1.7 MB of data per second, with 7.75 billion of us generating data at once. The internet is currently estimated to hold 5 million terabytes (TB) of information, of which Google has indexed approximately 200 TB; in other words, only 0.004% of the information on the internet can be found through Google.
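As a quick sanity check on that last figure, using the 5 million TB and 200 TB estimates quoted above:

```python
# Estimates quoted above: total information on the internet vs. the
# portion Google has indexed (both in terabytes).
internet_tb = 5_000_000
indexed_tb = 200

indexed_pct = indexed_tb / internet_tb * 100
print(f"Google indexes {indexed_pct:.3f}% of the internet")  # 0.004%
```

The ratio indeed works out to 0.004%, consistent with the figure in the text.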
How much can the human brain hold?
Our brain has about 100 billion neurons that form at least a hundred trillion connections among them. These connections are the substrate on which information is stored, from how snails reproduce to the name of Lenin in Chinese. From this data, experts in computational neuroscience estimate that the maximum information storage capacity of our thinking organ is between ten and one hundred terabytes.
The human brain would have the capacity to hold 0.002% of all knowledge
We can run the numbers using those 100 terabytes of memory, or, what amounts to the same thing, 100,000 gigabytes of memories, experiences, knowledge and so on.
Against the figure of 5 million terabytes of information on the internet, the human brain would have the capacity to hold 0.002% of total knowledge, if we accept that everything is on the internet.
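The same arithmetic as before, now comparing the brain's estimated 100 TB of capacity with the internet's estimated 5 million TB:

```python
# Upper-bound estimate of brain storage capacity vs. the estimated
# total size of the information on the internet (both in terabytes).
brain_tb = 100
internet_tb = 5_000_000

share_pct = brain_tb / internet_tb * 100
print(f"The brain could hold {share_pct:.3f}% of total knowledge")  # 0.002%
```

Note that the ratio as a bare fraction is 0.00002, which, expressed as a percentage, is 0.002%; the two forms are easy to confuse.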