Increasingly convinced that the advances we will see with 'AI' in the next few years will come not from ever-bigger NN models replacing tasks done with conventional programming, but from greater tool use by models.
This matters because very few organizations have the resources to train bigger models, but writing an MCP app, or orchestrating tasks across LLM APIs, opens new abilities and is much more accessible.
It seems the leading AI companies are already doing this -- LLMs can't add, but they've all learned to call a calculator. They don't know current news, but they've learned to google. Increasingly the value is not just in the raw model weights, but in the platforms around them.
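The 'call a calculator' pattern is simple enough to sketch in a few lines. Everything here is hypothetical (the `calculator` tool, the `handle_model_output` harness, the JSON shape of the tool call are all made up for illustration); real systems would use MCP or a provider's function-calling API, but the core loop is the same: the model emits a structured request instead of guessing, and ordinary code does the work.

```python
import json

def calculator(expression: str) -> float:
    """A trivial 'calculator' tool: evaluates a bare arithmetic expression."""
    # Restrict input to arithmetic characters; a real tool would use a parser.
    allowed = set("0123456789+-*/(). ")
    if not set(expression) <= allowed:
        raise ValueError("unsupported expression")
    return eval(expression)

# Registry of tools the harness exposes to the model.
TOOLS = {"calculator": calculator}

def handle_model_output(output: str) -> str:
    """If the model asked for a tool, run it; otherwise pass the text through."""
    try:
        msg = json.loads(output)
    except json.JSONDecodeError:
        return output  # plain-text answer, no tool call
    fn = TOOLS[msg["tool"]]
    return str(fn(**msg["arguments"]))

# A model that 'can't add' emits a tool call instead of guessing:
model_output = '{"tool": "calculator", "arguments": {"expression": "1234 * 5678"}}'
print(handle_model_output(model_output))  # → 7006652
```

The point of the sketch is how little of it is 'AI': the registry, the dispatch, and the tools are conventional programming, which is exactly why this layer is accessible to anyone.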
Sep 17, 2025 17:07
Meanwhile, much rhetoric from science and funders seems squarely centered on the proposition that (academic) researchers will progress by somehow 'training new AI models' on ever more data.
As an ecologist, just putting it here for the record that we are never going to have an AI model 'predict the future of biodiversity'.
These tools can be useful, even transformational or foundational, but foundational more in the 'duct tape + PVC piping' sense than in the 'one ring to rule them all' sense. If even the companies are proceeding with tool use, this is something we too can build for ourselves.