
Fragments Dec 4

via Martin Fowler (martin@martinfowler.com)

Rob Bowley summarizes a study from Carnegie Mellon looking at the impact of AI on a bunch of open-source software projects. Like any such study, we shouldn’t take its results as definitive, but there seems enough there to make it a handy data point. The key point is that the AI code probably reduced the quality of the code base - at least if static code analysis can be trusted to determine quality. And perhaps there are some worrying second-order effects:

This study shows more than 800 popular GitHub projects with code quality degrading after adopting AI tools. It’s hard not to see a form of context collapse playing out in real time. If the public code that future models learn from is becoming more complex and less maintainable, there’s a real risk that newer models will reinforce and amplify those trends, producing even worse code over time.

❄ ❄ ❄ ❄ ❄

Rob’s post is typical of much of the thoughtful writing on AI. We can see its short-te…

Continue reading on Martin Fowler

