AI is killing programming and the Python community (reddit.com)
20 points by Nash0x7e2 1 day ago | 21 comments


I don't know if my point is valid or not, but...

Stop blaming "AI" - whatever you mean by this. Whether it's an LLM, an LLM-based agent or something else - stop blaming AI and "AI" and LLMs and... you get the point.

It's not the AI that makes the decision to, sorry for being blunt, write worthless code that feels like useless bloated trash. It's not the AI that makes the decision to do something without even understanding the topic - however you define "understanding" in this context. It's not the AI that is responsible for this. Because whatever AI truly is right now - an autocomplete tool, an advanced chatbot or maybe an agent - whatever it is, the decisions are made by humans. AI is not responsible for anything that is happening right now.

Humans and humans only are responsible for what's happening. It's their choice. It's their qualities that are clearly visible now. It's their behaviour.

Stop blaming kitchen knives for murders.


AI has made it exceptionally easy to generate "compiles/runs and looks plausible but is still fundamentally flawed" code at a much greater scale than ever before. Maybe the analogy should be a machine gun rather than a knife.
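To make "looks plausible but is fundamentally flawed" concrete, here's a hypothetical Python sketch (invented for illustration, not taken from any real project): it runs, reads fine at a glance, and still carries the classic mutable-default-argument bug.

    # Hypothetical illustration: compiles, runs, looks plausible,
    # but the default list is created once and shared across calls.
    def collect_errors(error, seen=[]):
        seen.append(error)
        return seen

    print(collect_errors("timeout"))    # ['timeout']
    print(collect_errors("disk full"))  # ['timeout', 'disk full'] -- surprise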

I was just discussing this with a friend, and he told me (direct quote):

Well, yeah, stop blaming the knives. Blame the cooks ("vibecoders") who think they can manage a kitchen because the knife cuts everything in half automatically. But also don't forget to blame the knife manufacturers ("AI" companies) who market automated knives to people who don't know you shouldn't cut toward yourself.

I kind of agree. Some people don't understand how to code because they're lazy or have other issues, while others are trying to make a profit from it. I suppose you can tell who's who. But AI is directed by humans anyway. Instead of copy-pasting, a human could choose to try and write the code themselves, and then ask AI to review it and highlight areas for improvement. A human could choose to ask AI how to do things and then try to do it themselves. But if a human chooses to do things the other way, that's their choice. AI is not to blame here. It's still a human choice, and the person making it is the one who is actually responsible.

Some people smoke. Smoking kills, and not only can smokers die from it, but other people can be harmed by passive smoking as well. It's very easy to start smoking. But blaming cigarettes themselves, as objects/entities/etc., isn't the answer, I guess. It was a certain person's choice to try smoking. It was also the choice of another person to advertise smoking in one way or another, however...


Sure, it's not really the AI's fault ultimately. But you can still ask the question of whether a given codebase (or the Python ecosystem, to take the Reddit post example) would be better off if LLMs didn't exist.

This isn't a good analogy though. It's not blaming a kitchen knife, it's blaming a voice-activated auto turret.

Or rather, blaming a car. Yes, a bad driver is way more dangerous than a good driver, but even the best driver can make a mistake. Like cars, it's an inherently flawed piece of technology, and like cars, its benefits are too high for most of us to ignore. Way better analogy than my auto turret one.


> but even the best driver can make a mistake

Well, if you put it that way... even the best programmer in the world, who doesn't use AI at all, can also make a mistake. Of course, their mistakes would probably be less frequent, but I guess they wouldn't blame the IDE for poor syntax highlighting (assuming it's good enough, of course), or the compiler or interpreter for failing to spot a logical error unrelated to syntax rules. They would say "it was my mistake". The problem with AI-generated code, though, is that those who generate it almost never take responsibility for it. They'll say something like, "AI made a mistake here and there." I have never seen someone who generated flawed code with AI take responsibility for it. And that's the main problem.

It doesn't matter whether you're a bad driver or the best driver. If you cause an accident, you must be held responsible. As simple as that.

> Like cars, it's an inherently flawed piece of technology

Sorry, but what exactly do you mean when you say that cars are "an inherently flawed piece of technology"? I'm just curious.


Moving half-ton metal boxes very fast through space shared with ordinary humans will always kill people, and making them not share that space with humans is too impractical and expensive. So cars are a technology that _will_ kill some of us, by design. But their advantage is too great to ignore, so we accept the loss.

And yes, I broadly agree with most of what you said: people show a lack of accountability that also translates into an "I don't need to read the code" attitude. That's why, to me, most people who consistently see better than a 20% increase in their productivity and aren't just writing short scripts are just bad devs who push the issues in their code onto either their seniors or their future selves.


This post is like 6 months late. I share the same concerns that others in the thread do, but the talking point is pretty tired by now.

I think my experience with Python has been a lot worse than OP's. Random Python projects on GitHub always lacked polish and documentation. If anything, my enjoyment of Python has skyrocketed with uv, because I don't need to spend an hour guessing which Python 3.x version is compatible with your library.

> because I don't need to spend an hour guessing which Python 3.x version is compatible with your library.

Of all the advantages uv has, I can't fathom how it would determine version compatibility other than by reading the standard metadata files that pip understands perfectly well.
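For reference, the standard metadata here is the Requires-Python field (requires-python in pyproject.toml), and checking it is a few lines with the packaging library that pip itself builds on. A minimal sketch, with a made-up specifier as example data:

    # Minimal sketch: checking interpreter compatibility against
    # Requires-Python metadata. The specifier is invented example data.
    from packaging.specifiers import SpecifierSet
    from packaging.version import Version

    requires_python = SpecifierSet(">=3.9,<3.13")  # e.g. read from a wheel's METADATA

    for interpreter in ["3.8", "3.11", "3.13"]:
        ok = Version(interpreter) in requires_python
        print(f"Python {interpreter}: {'compatible' if ok else 'incompatible'}")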


100%, uv is awesome

Why can't you just ignore the bad projects? AI is probably super annoying if you're a maintainer getting slop PRs; but if you're a professional I don't see how this can vex you that much.

HA! It’s like ignoring the student projects on GitHub when a professor encouraged all their students to create GitHub accounts and make repos of their social media analysis assignments.


Imagine a world where npm and all the other library repositories are 99% AI slop. And the posts about which library to choose are 99% AI-generated.

It'll be so hard to find anything in the chaff that you might get your old job as a dev back. :)


We'll have AIs doing our chaff-sifting for us, too, right?

padme.jpg

It's game over, guys. With the newer AI models and coding agents you'll see in the next year or two, resistance is futile. Move on to something else that is not purely a programming job. If you only move to a different programming language, the same outcome will inevitably follow you. Wages are expected to diminish. As a case in point, almost no one needs Assembly experts anymore. From a management POV, professional coding now comes down to specifying requirements precisely and having exhaustive tests for them, all AI generated.
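To illustrate what "specifying requirements precisely and having exhaustive tests for them" could look like, here's a minimal, hypothetical pytest-style sketch (the clamp function and its cases are invented for illustration):

    # Hypothetical sketch: the requirement is pinned down as executable
    # checks rather than prose. Run with pytest.
    def clamp(value: float, low: float, high: float) -> float:
        """Return value limited to the inclusive range [low, high]."""
        return max(low, min(value, high))

    def test_clamp_within_range():
        assert clamp(5, 0, 10) == 5

    def test_clamp_below_range():
        assert clamp(-3, 0, 10) == 0

    def test_clamp_above_range():
        assert clamp(42, 0, 10) == 10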

Python was already like that: a cascade of beginners cargo-culting other beginners, because it's easy enough to get started that everyone thinks they're an expert and will blog about it. Switch to a language with a bit of a barrier to entry and you avoid the problem.

People make the same posts in the rust subreddit

Unless we need to go to even harder languages? Haskell?


I'd estimate that Haskell + AI could reach cuneiform levels of inscrutability.