You bought a watch that tells the correct time 90% of the time you look at it. The other 10% of the time it might give you an obviously incorrect time (say, showing 11PM during the day), or it might be just slightly off, enough to make you miss that important meeting. After using this watch for a while you notice that its correctness depends on several factors, such as the time of day, your location in longitude (but not latitude), the angle at which you view the watch face, what you had for breakfast the previous day, and so on. When you need to find out whether you can catch the next bus, for example, you learn to first move to a spot with the optimal ambient lighting, tilt your head just so, and raise your wrist at just the right speed to ensure a correct reading.

You go back to the store to exchange the watch for a better one that is correct 95% of the time. However, it comes with a different set of quirks for getting the correct time. You need to remember to rub your right cheek before reading it.
I read that folks had observed that some machine learning models could be used to write code that runs. Let that sink in for a moment. At the same time, be aware that biases and extensive memorization have been observed in the same models. This might be considered an implementation of automatic programming, and it is definitely not the first time machine learning models have been used to generate code (or strings that look like code).

The model that writes code (Z) when given some inputs is itself a piece of (very very very large and complex) code (Y). If expressed in a general-purpose programming language, it would have perhaps thousands of variables and many more operations. No human programmer wrote this code - another piece of code (X) did.

Humans -> X -> Y -> Z

This is now the classical science fiction setup where a robot makes other robots (which may in turn make other robots). In the case where Y is a neural network, X would be responsible for both the training loops
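To make the Humans -> X -> Y -> Z chain concrete, here is a deliberately tiny Python sketch. Everything in it is hypothetical: the "model" is just a memorized lookup table rather than a neural network, and the names (training_code_X, model_Y, code_Z) are made up for illustration. The point is only the shape of the chain: human-written code X produces Y, and Y emits a string Z that is itself runnable code.

    def training_code_X(corpus):
        """X: code written by humans that produces the model Y from data."""
        # Stand-in for a real training loop: just memorize prompt -> code pairs.
        model_parameters = dict(corpus)

        def model_Y(prompt):
            """Y: the 'model', itself a (here, very small) piece of code plus data."""
            # Return the memorized code string, or a fallback snippet.
            return model_parameters.get(prompt, "raise NotImplementedError")

        return model_Y


    if __name__ == "__main__":
        corpus = [("add two numbers", "def add(a, b):\n    return a + b")]
        model_Y = training_code_X(corpus)    # X produces Y
        code_Z = model_Y("add two numbers")  # Y produces Z
        print(code_Z)                        # Z is code...
        namespace = {}
        exec(code_Z, namespace)              # ...and it runs
        print(namespace["add"](2, 3))        # -> 5

A real Y would of course be learned rather than memorized, but the division of labor is the same: humans write X, X writes Y, and Y writes Z.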