Tuesday, November 07, 2017

Incompleteness: "Build Deeper - Deep Learning Beginner's Guide" by Thimira Amaratunga



{
    "epsilon": 1e-07,
    "floatx": "float32",
    "image_data_format": "channels_last",
    "backend": "tensorflow"
}

In "Build Deeper - Deep Learning Beginner's Guide" by Thimira Amaratunga
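As a side note, and purely as my own sketch rather than anything from the book, Keras exposes those keras.json settings at runtime through its backend module, which is a quick way to confirm that the configuration above has actually been picked up:

import keras.backend as K

print(K.backend())            # "tensorflow"
print(K.floatx())             # "float32"
print(K.image_data_format())  # "channels_last"
print(K.epsilon())            # 1e-07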

This book confirms results I have seen from other predictive systems: we humans as a species, whether we fancy ourselves psychics or rely on other la-di-da methodologies, can at best achieve around an 80% accuracy rate, even with regular practice and tuning. And the higher the accuracy and precision targets you chase, the further away each small further improvement seems to sit. Still, it does raise questions about the abilities of science, and of machine systems designing new machine systems, which usually proceed by excluding what are regarded as unrepeatable subjective methods in favour of repeatable objectivity. Outliers and other non-obvious patterns are pushing back the boundaries at the edge of our cultural belief systems.

I don't think any computer scientist would dispute the point that modern AI or machine learning is nowhere near the threshold of 'consciousness' or even 'general intelligence'. But it's not uncommon for words to have a different meaning within a technical field than they do in everyday communication. In everyday English 'chaos' means unpredictable, whereas in mathematics it refers to the tendency of sensitive nonlinear systems to settle into emergent basins of attraction that can be extremely predictable. Those are arguably even antonyms. Another example would be the terms 'deterministic' and 'nondeterministic' in computer science, whose technical meanings also differ strongly from everyday English. The point is that if you feel the need to grandstand on these trivialities, you clearly don't understand the fundamentals of the subject under discussion.

The human mind works mainly by analogy: this situation looks rather like that other one, so if I act in a similar way I will probably get a similarly good result. That is a fundamental though largely unconscious meta-rule, and it is itself endlessly confirmed, with a few notable exceptions: I've worked by analogy in the past and it has worked, more or less, and I can of course learn from mistakes by learning to avoid false analogies in future.

The maths on which computer science and the other sciences are based is also, in a strong sense, itself built on analogies, or formal mappings between partly equivalent structures. But in science the analogies are usually defined starting from a well-defined base of, as it were, logical atoms that cannot fully reproduce the non-explicit analogies in which human thought, perception and action are rooted. At least, that's how I see the question of AGI (Artificial General Intelligence) for now. You can map a lot of analogical processing into neural nets, but you are still starting from well-defined analogies rather than the fuzzy logic of the wetware that is our brains. To me that makes the programme very interesting methodologically but ultimately flawed conceptually, resting in the end on a false analogy of our minds as physical machines. It's an analogy that, like so many others, works up to a point, and that point is the question 'what is a machine?', physical or otherwise.

My background is in maths, specifically mathematical logic and the philosophy of mathematics and mathematical physics. I'm constantly amazed by the naive, doctrinaire scientism, rooted in an outdated Victorian conception of mechanism, which simply assumes that because everything we do or think can be mapped into a material world including our bodies and brains, that mapping must provide a complete model of our actual individual lives, thought, consciousness and separate identities.

In formal logic, Gödel's incompleteness theorems show that the very mathematical framework in which physical space-time is modelled cannot be both formally complete and consistent. And recent results in mathematical physics show that quantum mechanics is essentially 'incomplete' as a mathematical theory in various ways: hidden-variable extensions cannot reduce the indeterminacy of the outcome of a quantum measurement, and, most importantly of all, quantum measurement cannot itself be modelled within quantum mechanics.

Last but not least, I was recently at a conference on modelling the cerebral dynamics of feeling and action, and was once more amazed at the essential crudeness of the methodological models on show, which assumed that human central nervous system or brain physiology can be fully modelled in abstraction from its interaction with the autonomic nervous system and the endocrine system (and even the immune system, the third of the primary integrative systems of the human body).

Good luck with the research if you're involved in it; it's important and probably useful, but it won't tell you who or what you are, or what to do.

That is not really how modern machine learning algorithms operate. Without going into too much detail, they are usually applied to learning problems for which there is no simple state-transition map or combinatorial solution, which means the answer cannot be encoded into machine instructions by conventional means. So instead, heuristics are used to simulate the behaviour of a probabilistic automaton, and its internal state logic is then fitted to a training set using back-propagation (there are other algorithms as well).
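To make that concrete, here is a minimal sketch in Keras (my own toy example, not from the book, using made-up random data) of a small network whose internal weights are fitted to a training set by back-propagation:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Toy data, purely illustrative: 100 samples with 8 features and binary labels.
x_train = np.random.rand(100, 8)
y_train = np.random.randint(0, 2, size=(100, 1))

# A tiny feed-forward network; its weights are the "internal state logic".
model = Sequential()
model.add(Dense(16, activation='relu', input_shape=(8,)))
model.add(Dense(1, activation='sigmoid'))

# fit() is where back-propagation happens: gradients of the loss are
# propagated backwards through the layers to adjust the weights.
model.compile(optimizer='sgd', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5, batch_size=10)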

So while this is almost certainly the result of human bias, the problem is most likely localised within the training data sets. I presume they gathered pictures of faces, had people rate them by perceived beauty, and trained the algorithm to emulate that particular mapping, which means the algorithm's results are a reflection of trends within society. That makes sense, since people tend to rate the faces of minorities as less 'beautiful'. The trend is attributed to multiple factors, but the most significant are probably the history of colonialism and poor minority representation in pop culture. I would be willing to bet that a survey of Guardian readers would show a very similar trend, because broad data analysis suggests it is expressed across many populations and communities in the West (and in other parts of the world as well).

Almost certainly due to entrenched bias, but far more likely the biases of the people who contributed the training data rather than those of the programmers themselves. Research has also shown that pretty much everybody displays similar biases, so it is a little disingenuous to pin this on machine learning, or on these particular programmers, when it is clearly a societal problem for which we all share the responsibility to do something.

NB: To follow all the examples in this book you'll need to install the following: Anaconda Python, packages from conda, OpenCV, Dlib, Theano, Keras and TensorFlow. Some of the links in the book didn't work (I had to build the frigging environment without any help whatsoever!):

(...)
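Once the environment is built, a quick import check (again my own, not from the book) is enough to confirm that all the required packages are actually usable:

import cv2, dlib, theano, keras, tensorflow as tf

print("OpenCV", cv2.__version__)
print("Theano", theano.__version__)
print("Keras", keras.__version__)
print("TensorFlow", tf.__version__)
print("dlib imported OK")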

