I was reading this new research from Google today and it reminded me of some of the recent discussions we’ve had in this thread, mainly the argument that these AI models can’t produce anything “new” and are simply rehashing their training data.
We introduce FunSearch, a method for searching for “functions” written in computer code, and find new solutions in mathematics and computer science. FunSearch works by pairing a pre-trained LLM,...
deepmind.google
In this study, Google DeepMind paired an LLM with an “evaluator”: an automated program that runs each candidate solution, scores it against the problem, and discards incorrect outputs, which guards against hallucinations.
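To make the idea concrete, here is a minimal toy sketch of that generate-and-evaluate loop. All the names here are hypothetical and the “LLM” is mocked by random perturbations; the real FunSearch prompts a language model for program mutations and maintains a much richer database of programs. This only illustrates the shape of the loop: propose, score, keep improvements.

```python
import random

def evaluate(candidate):
    # Evaluator: scores a candidate; higher is better.
    # Toy objective (my stand-in, not DeepMind's): maximise -(x - 7)^2.
    return -(candidate - 7) ** 2

def mock_llm_propose(best):
    # Stand-in for the LLM: perturb the current best candidate.
    return best + random.choice([-1, 1])

def funsearch_loop(iterations=200, seed=0):
    random.seed(seed)
    best, best_score = 0, evaluate(0)
    for _ in range(iterations):
        cand = mock_llm_propose(best)
        score = evaluate(cand)
        if score > best_score:  # evaluator keeps only verified improvements
            best, best_score = cand, score
    return best

print(funsearch_loop())  # climbs towards the optimum at 7
```

The key point the toy captures is that the LLM never has to be right every time: the evaluator filters its proposals, so only outputs that verifiably improve the score survive.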
Using this, they have found new results on maths problems that mathematicians have been working on for decades, namely the “cap set” problem. The system constructed larger cap sets than humans have managed in the last 20 years of trying. Terence Tao, arguably the most notable living mathematician, has described this as his favourite open problem in mathematics, and the area is now being theoretically advanced by AI.
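For anyone unfamiliar with the problem: a cap set in F_3^n is a set of vectors containing no three distinct points on a line, which is equivalent to no three distinct vectors summing to zero mod 3 coordinate-wise. A small checker for that property (my own sketch, not DeepMind’s code) looks like this:

```python
from itertools import combinations

def is_cap_set(vectors):
    # A cap set has no three distinct points a, b, c with
    # a + b + c = 0 (mod 3) in every coordinate.
    for a, b, c in combinations(vectors, 3):
        if all((x + y + z) % 3 == 0 for x, y, z in zip(a, b, c)):
            return False  # found three collinear points
    return True

# A size-4 cap set in dimension 2 (the maximum for n = 2):
print(is_cap_set([(0, 0), (0, 1), (1, 0), (1, 1)]))  # True
# A full line in F_3^2 is not a cap set:
print(is_cap_set([(0, 0), (1, 1), (2, 2)]))          # False
```

The open question is how large cap sets can get as the dimension grows, and FunSearch produced explicit constructions beating the previous records.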
I see this as strong evidence of what I’d mentioned previously: creativity and innovation are not uniquely human traits; they are emergent qualities of any system capable of combining its vast training data in new ways. In my view, there is quite literally no field that is safe from a sufficiently specialised AI.