It also:
* Bakes in the assumption that there are no internal mechanisms to be discovered ("Each environment is a mixture of multivariate Gaussian distributions")
* Ignores the possibility that their model of falsification is inadequate (they just test more near points with high error).
* Does a lot of "hopeful naming" which makes the results easy to misinterpret as saying more about like-named things in the real world than they actually do.
You can ask many people to propose hypotheses and choose one at random, and perhaps with a good sample you get better experiments. You can query a Markov chain until it produces an interpretable hypothesis. But the people, or the Markov chain (because of English itself), have significant bias.
Also, some experiments have wider-reaching implications than others (this is probably more relevant for the Markov chain, because I expect the hypotheses it forms to be like "frogs can learn to skate").
A counterexample is the decades during which the amyloid cascade hypothesis was the only allowed / funded line of Alzheimer's research.
> "We find that agents who choose new experiments at random develop the most informative and predictive theories of the world. "
There's a neat book about this: "Why Greatness Cannot Be Planned: The Myth of the Objective" https://www.goodreads.com/book/show/25670869-why-greatness-c...
Incidentally, the author works at OpenAI these days.
https://www.wired.com/2010/06/ff-sergeys-search/
He went backwards and started by just collecting an absurd amount of data. Later, while talking to a researcher, he could confirm years of research with a "simple" search of his database.
(This problem is not just limited to social scientists. I think you could, for example, construct a plausible objection to dark matter as an "explanation" that just "saves appearances" on the same basis.)
Grounded theory is probabilistically correct. Deduction, if correct, is actual reality.
Don't get me wrong, I want to love induction; I have William James (of Pragmatism) on my wall... but the problems with induction hurt me to my core. I know deduction has problems too, but the Platonic Realist in me loves the idea of magic truths.
One, in a sufficiently advanced field of study, an idea's originator may be the only person able to imagine an experimental test. I doubt that many physicists would have immediately thought that Mercury's unexplained orbital precession would serve to either support or falsify Einstein's General Relativity -- but Einstein certainly could. Same with deflected starlight paths during a solar eclipse (both these effects were instrumental in validating GR).
Two, scientists are supposed to be the harshest critics of their own ideas, on the lookout for a contradicting observation. This was once part of a scientist's training -- I assume this is still the case.
Three, the falsifiability criterion. If an experimental proposal doesn't include the possibility of a conclusive falsification, it's not, strictly speaking, a scientific idea. So an idea's originator either has (and publishes) a falsifying criterion, or he doesn't have a legitimate basis for a scientific experiment.
Here's an example. Imagine if the development of the transistor relied on random experimentation with no preferred outcome. In the event, the inventors at Bell Labs knew exactly what they wanted to achieve -- the project was very focused from the outset.
Another example. Jonas Salk (polio vaccine) knew exactly what he wanted to achieve, his wasn't a random journey in a forest of Pyrex glassware. It's hard to imagine Salk's result arising from an aimless stochastic exploration.
So it seems science relies on people's integrity, not avoidance of any particular focus. If integrity can't be relied on, perhaps we should abandon the people, not the methods.
Science relies on replication. And any real gain society gets from science is itself a form of replication.
Integrity can't be relied on. But then, complete reliability is not necessary, just enough to make replication work.
And also, science is in a crisis due to the lack (or really long delay) of practical use. We don't actually have any other institution that ensures replication happens.
They are analyzing a toy model of science. The details are in figure 1. They have a search space with a few Gaussians, like
f(x,y,z) = A0 * exp(-(x-x0)^2 - (y-y0)^2 - (z-z0)^2) + A1 * exp(-(x-x1)^2 - (y-y1)^2 - (z-z1)^2)
but maybe in more than 3 dimensions and maybe with more than 2 Gaussians.
They want the agents to find all of the Gaussians.
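Something like this minimal sketch of that landscape (Python; the dimension, amplitudes, and centers below are made-up illustrative values, not the paper's actual parameters):

    import numpy as np

    rng = np.random.default_rng(0)
    d = 3                                 # dimensions (the paper may use more)
    A = np.array([1.0, 0.7])              # bump amplitudes (illustrative)
    centers = rng.uniform(-5, 5, (2, d))  # bump centers (illustrative)

    def f(x):
        # Sum of isotropic Gaussian bumps, matching the formula above.
        return sum(a * np.exp(-np.sum((x - c) ** 2)) for a, c in zip(A, centers))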
It's somewhat similar to a maximization problem, which is easier. There are many strategies for that, from gradient ascent to random sampling to a million other variants. I like simulated annealing.
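For example, a bare-bones simulated annealing loop on that kind of landscape could look like this (all constants are illustrative, not tuned):

    import numpy as np

    rng = np.random.default_rng(1)
    centers = np.array([[2.0, -1.0, 0.5], [-3.0, 2.0, 1.0]])  # illustrative bumps

    def f(x):
        return sum(np.exp(-np.sum((x - c) ** 2)) for c in centers)

    x, T = rng.uniform(-5, 5, 3), 1.0
    for _ in range(5000):
        x_new = x + rng.normal(scale=0.5, size=3)  # local random proposal
        # Always accept uphill moves; accept downhill moves with
        # probability exp(delta/T), which shrinks as T cools.
        if f(x_new) > f(x) or rng.random() < np.exp((f(x_new) - f(x)) / T):
            x = x_new
        T *= 0.999                                 # geometric cooling
    print(x, f(x))  # hopefully near one of the centers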
They claim that the best method is random sampling, but that only works when the search space is small. It breaks quite fast for high-dimensional problems, unless the Gaussians are so big that they cover most of the space, and perhaps I'm being too optimistic. Add noise and overlapping Gaussians and the problem gets super hard.
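A quick way to see how fast it breaks: the chance that a uniform random sample lands within distance 1 of a bump center, in a box of side 10 (all numbers illustrative):

    import numpy as np

    rng = np.random.default_rng(2)
    for d in (1, 2, 3, 6, 10):
        pts = rng.uniform(-5, 5, (200_000, d))
        # Fraction of samples within distance 1 of a bump at the origin.
        hits = np.mean(np.linalg.norm(pts, axis=1) < 1.0)
        print(f"d={d:2d}  hit rate ~ {hits:.6f}")

The hit rate is roughly the bump's volume divided by the box's volume, so it collapses exponentially as d grows.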
Let's get to a realistic example: all the molecules with 6 Carbons and 12 Hydrogens. Let's try to find all of them and their stable 3D configurations. This is chemistry from the first year of university, perhaps earlier; no cutting-edge science.
You have 18 atoms, so 18 * 3 = 54 dimensions, and the surface of -energy has a lot of mountain ranges and nasty stuff, most of them very sharp. Let's try to find the local points of maximal -energy; that is much easier than building the full map. These are the stable molecules, which (usually) have names.
* There is a cyclic one with 6 Carbons, where each Carbon has 2 Hydrogens: https://en.wikipedia.org/wiki/Cyclohexane Note that it actually has two different 3D variants.
* There is one with a cycle of 5 Carbons and 1 Carbon attached to the cycle: https://en.wikipedia.org/wiki/Methylcyclopentane
* There are variants with shorter cycles, but I'm not sure how stable they are and Wikipedia has no page for them.
* There are also 3 linear versions, where the 6 Carbons form a wavy line and there is a double bond in one of the steps: https://en.wikipedia.org/wiki/1-Hexene I'm not sure why the other two versions have no page in Wikipedia; I think they should be stable, but sometimes it's not a local maximum, or the local maximum is too shallow and the double bond jumps and the Hydrogens reorganize.
* And there may be other nasty stuff; take a look at the complete list: https://en.wikipedia.org/wiki/C6H12
And don't try to make the complete list of molecules that include a few Nitrogens, because the number of molecules explodes exponentially.
So this random sampling method they propose does not even work for an elementary Chemistry problem.
The first commercial antibiotics (Sulfa drugs) were found by systematically testing thousands of random chemicals on infected mice. This was a major drug discovery method up until the 1970s or so, when they had covered most of the search space of biologically-active small molecules.
An interesting concept they mentioned was this idea of "injected serendipity" when screening for novel materials with a certain target performance. They proceed as normal, but 10% or so of the screened materials are randomly sampled from the chemical space.
They claimed this had led them to several interesting candidates across several problems.
But they choose chemical reactions that are common in the lab, so they can guess they will be able to make them work, and they keep most of the structure unchanged. So it's closer to what the paper classifies as looking near known good points than to a true random search.
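A sketch of what that policy could look like, reusing the toy coordinate-space framing from above rather than an actual chemical space (the 10% figure is from the comment; everything else is illustrative):

    import numpy as np

    rng = np.random.default_rng(3)

    def propose(known_good, p_random=0.1, low=-5.0, high=5.0, d=3):
        # "Injected serendipity": mostly tweak a known good point,
        # but sometimes sample uniformly from the whole space.
        if rng.random() < p_random:
            return rng.uniform(low, high, d)         # serendipity
        base = known_good[rng.integers(len(known_good))]
        return base + rng.normal(scale=0.3, size=d)  # local tweak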
For molecules, 10 Angstroms away is probably as good as infinitely far.
For "how many bananas should you eat per week to become the chess world champion?", you can ask Wolfram Alpha to convert 2400 kcal * 7 to bananas and get an upper bound.
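(Assuming a typical banana is roughly 105 kcal: 2400 kcal/day * 7 days = 16800 kcal/week, so the upper bound is around 160 bananas per week.)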
I think everyone agrees that with infinite time and resources a brute-force search is better, in case there is a weird combination. But with finite time and resources you need to select a better strategy, unless the search space is ridiculously small and smooth.