About the Book
The key idea of this book is that perfect knowledge implies randomness. This is, at first sight, a highly counterintuitive idea, since much of the effort in science is devoted to naming, organizing and classifying our messy, chaotic world. Even the kind of knowledge that explains how things work requires a prior ordering and classification. Science, apparently, is anything but random. Yet, as we will see, this is not the case.
If it takes a long time to explain how something works, we probably do not fully understand it, and our knowledge must be incomplete. Many scientists and philosophers of science will disagree with this premise; they would argue that some theories take a lot of effort to describe because the underlying concepts are inherently difficult. However, I will provide strong evidence, based on an analysis of the history of science, that this is not necessarily true. Consider, for example, the calculus of derivatives. At its origin, in the time of Newton and Leibniz, a lot of space was required to define the concept of the limit of a function. Moreover, only specialists, if anyone, were able to understand the idea. Our knowledge was highly incomplete. Today, the definition of the derivative barely takes one line in a mathematics book, and it is taught in high schools. Our current knowledge is much better. Long explanations usually contain a lot of redundancy: repeated ideas, unused concepts, improperly identified relations, and so on. With a better understanding, normally after considerable research, we should be able to remove that redundancy. And when there is nothing left to take away, we say that we have achieved perfect knowledge.
If a string is incompressible, it is said to be random. Hence, if a theory is perfect, that is, if it contains no redundant elements, its description must be a random string of symbols. Our common understanding suggests that random strings do not make any sense, since this is what randomness is all about. On the contrary, I will show that descriptions are random when they contain the maximum amount of information in the least possible space.
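The link between redundancy and compressibility can be illustrated with a short experiment. True incompressibility (Kolmogorov complexity) is uncomputable, so the sketch below uses `zlib` as a rough, practical proxy: a description full of repeated ideas compresses well, while a random string barely compresses at all. The strings and function name are illustrative choices, not part of the theory itself.

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size divided by original size (lower means more redundancy)."""
    return len(zlib.compress(data, 9)) / len(data)

# A highly redundant "description": the same idea repeated a hundred times.
redundant = b"the derivative is the limit of the difference quotient; " * 100

# A string with no structure to exploit: random bytes of the same length.
random_like = os.urandom(len(redundant))

print(compression_ratio(redundant))    # far below 1: plenty of redundancy to remove
print(compression_ratio(random_like))  # near (or slightly above) 1: nothing left to take away
```

The redundant text shrinks dramatically, while the random bytes do not shrink at all; in the book's terms, only the latter already contains the maximum amount of information for its length.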
This book describes in detail the Theory of Nescience, a new mathematical theory that I have developed with the aim of taming the scientific unknown. The theory is based on the fact that randomness effectively imposes a limit on how much we can know about a particular research topic. Far from being a handicap, a proper understanding of this absolute epistemological limitation opens new opportunities in science and technology, both to solve open problems and to discover new and interesting research questions. In the book I also describe some of the (surprisingly) large number of practical applications of this new theory, not only to science, but also to software engineering and finance.
About the Author
R. A. García Leiva holds a Bachelor's degree in Computer Science from the University of Córdoba, a Master's degree in Computational Science from the University of Amsterdam, and a Diploma of Advanced Studies in Telematics from the Universidad Autónoma de Madrid. He worked for four years at the University of Córdoba as a scientific programmer in the areas of Geographical Information Systems and Remote Sensing. He then worked for three years at the Universidad Autónoma de Madrid as a research engineer in the area of High Energy Physics, and for another three years as R&D manager at Andago Ingeniería, coordinating the company's R&D activities in the areas of open source software and e-government. In 2008 he founded Entropy Computational Services, where he worked for five years in the areas of Social Networks, Mobile Applications, and Quantitative Trading. In 2014 he joined the IMDEA Networks Institute as a research engineer, working in the areas of Statistical Learning and Big Data.