Perhaps nothing has defined higher education over the past two decades more than the rise of computer science and STEM. Since 2016, enrollment in undergraduate computer-science programs has increased nearly 49 percent. Meanwhile, humanities enrollments across the United States have withered at a clip—in some cases, shrinking entire departments to nonexistence.

But that was before the age of generative AI. ChatGPT and other chatbots can do more than compose full essays in an instant; they can also write lines of code in any number of programming languages. You can’t just type “make me a video game” into ChatGPT and get something that’s playable on the other end, but many programmers have now developed rudimentary smartphone apps coded by AI. In the ultimate irony, software engineers helped create AI, and now they are the American workers who think it will have the biggest impact on their livelihoods, according to a new survey from Pew Research Center. So much for learning to code.

Fiddling with the computer-science curriculum still might not be enough to maintain coding’s spot at the top of the higher-education hierarchy. “Prompt engineering,” which entails crafting the phrases fed to large language models to coax better responses out of them, has already surfaced as a lucrative job option, and one perhaps better suited to English majors than computer-science grads.

The potential decline of “learn to code” doesn’t mean that the technologists are doomed to become the authors of their own obsolescence, nor that the English majors were right all along (I wish). Rather, the turmoil presented by AI could signal that exactly what students decide to major in is less important than an ability to think conceptually about the various problems that technology could help us solve.

  • RotaryKeyboard@lemmy.ninja · 1 year ago

    I’ve just spent a few weeks continually enhancing a script in a language I’m not all that familiar with, exclusively using ChatGPT 4. The experience leaves a LOT to be desired.

    The first few prompts are nothing short of amazing. You go from blank page to something that mostly works in a few seconds. Inevitably, though, something needs to change. That’s where things start to go awry.

    You’ll get a few changes in, and things will be going well. Then you’ll ask for another change, and the resulting code will eliminate one of your earlier changes. For example, I asked ChatGPT to write a quick Python script that does fuzzy matching. I wanted to feed it a list of filenames from a file and have it find the closest match on my hard drive. I asked for a progress bar, which it added. By the time I was done having it generate code, the progress bar had been removed a couple of times, and swapped out for a different progress bar at least three times. (On the bright side, I now know of multiple progress-bar solutions in Python!)
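
    As a point of reference, here’s a minimal sketch of what that kind of script might look like. The paths and the choice of difflib and tqdm are assumptions for illustration, not the exact code ChatGPT produced:

    ```python
    import difflib
    from pathlib import Path

    from tqdm import tqdm  # one of the several progress-bar options out there

    # Hypothetical input locations, for illustration only.
    WANTED_LIST = Path("wanted_filenames.txt")  # one filename per line
    SEARCH_ROOT = Path("/mnt/data")             # directory tree to search

    # Walk the search root once, up front, and collect every on-disk filename.
    candidates = [p.name for p in SEARCH_ROOT.rglob("*") if p.is_file()]

    wanted = WANTED_LIST.read_text().splitlines()

    # For each wanted name, report the closest fuzzy match found on disk.
    for name in tqdm(wanted, desc="Matching"):
        matches = difflib.get_close_matches(name, candidates, n=1, cutoff=0.6)
        tqdm.write(f"{name} -> {matches[0] if matches else 'no match'}")
    ```

    (difflib.get_close_matches only returns candidates above a similarity cutoff, so a filename with no plausible counterpart simply yields an empty list; tqdm.write prints without breaking the progress bar.)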

    If you continue on long enough, ChatGPT’s “memory” (its context window) isn’t sufficient to keep track of everything you’ve been doing. You get to a point where you need to paste the whole script back in very frequently to give it the context it needs to answer a question or implement a change.

    And on top of all that, it often doesn’t implement the best solution. In one instance, I wanted it to write a function that would parse a CSV, count up duplicate values in a particular field, and add that count to each row of the CSV. I could tell right away that the first solution was not an efficient way to accomplish the task. I had to ask ChatGPT in another prompt whether it was efficient. (I was soundly impressed that it recognized the problem after I brought it up and gave me something that ended up being quite fast and efficient.)
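
    For what it’s worth, the efficient shape of that task is a two-pass approach: tally every value once, then annotate each row from the tally, rather than rescanning the data for every row (which is quadratic). Here’s a sketch using collections.Counter, with hypothetical file and field names; this is one plausible version of the fix, not ChatGPT’s actual output:

    ```python
    import csv
    from collections import Counter

    # Hypothetical file and field names, for illustration only.
    IN_PATH, OUT_PATH, FIELD = "input.csv", "output.csv", "user_id"

    with open(IN_PATH, newline="") as f:
        rows = list(csv.DictReader(f))

    # First pass: a single O(n) sweep to count how often each value appears.
    counts = Counter(row[FIELD] for row in rows)

    # Second pass: write every row back out with its duplicate count appended.
    with open(OUT_PATH, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0]) + ["duplicate_count"])
        writer.writeheader()
        for row in rows:
            writer.writerow({**row, "duplicate_count": counts[row[FIELD]]})
    ```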

    Moral of the story: you can’t do this effectively without an understanding of computer science.