The end of programming

I am concerned about AI and what its impact will be on programmers.

Over the years, there have been many predictions related to the demise of the programming profession.

However, we have never been so inundated with media relating to the subject. It seems as though every other day brings a new "AI is coming for your job" article, post, or video.

I honestly am not worried, at the moment anyway, about AI being quite as disruptive to programming as the headlines suggest. I am more concerned that the "programming is dead" narrative may lead to a massive shortage of developers in the coming years.


What do you think, ChatGPT?

I asked ChatGPT and this was the response:

Over the decades, there have been numerous predictions about the future of software engineers and programmers, often suggesting that their roles will be diminished or even eliminated due to technological advancements. Here are some examples of these predictions over time:

1960s - 1970s

Early Automation Concerns

  • LISP and AI (1960s): Early AI pioneers like John McCarthy believed that advances in artificial intelligence and languages like LISP would eventually lead to automated programming, reducing the need for human programmers.
  • COBOL and Business Automation (1970s): The creation of COBOL and other high-level languages led to predictions that business professionals would be able to write their own programs, reducing the need for specialized programmers.

1980s

Fourth-Generation Languages (4GLs) and CASE Tools

  • 4GLs (1980s): The advent of fourth-generation programming languages promised to simplify software development to the point where end-users could create their own applications, reducing the demand for professional programmers.
  • CASE Tools (1980s): Computer-Aided Software Engineering tools were predicted to automate many aspects of software development, including design and code generation, leading to a reduced need for manual programming.

1990s

Object-Oriented Programming and Rapid Application Development (RAD)

  • RAD Tools (1990s): Rapid Application Development tools and environments like Visual Basic were thought to allow non-programmers to create complex applications, thus potentially reducing the need for professional developers.
  • Object-Oriented Programming: The rise of object-oriented programming was seen as a way to modularize and reuse code more effectively, which some believed would lessen the need for traditional programming roles.

2000s

Web Development and Low-Code/No-Code Platforms

  • Web Development Frameworks (2000s): The proliferation of web development frameworks such as Ruby on Rails, Django, and others led to the belief that building web applications would become so straightforward that fewer specialized programmers would be needed.
  • Low-Code/No-Code Platforms (Late 2000s): Tools like Salesforce, Microsoft PowerApps, and others promised to enable business users to create their own applications without deep programming knowledge.

2010s

AI and Machine Learning

  • AI-Powered Code Generation (2010s): Advances in AI, particularly in machine learning, led to tools like DeepCoder from Microsoft, which were designed to write code based on given specifications. There were predictions that such tools could drastically reduce the need for human programmers.
  • GitHub Copilot (Late 2010s): The introduction of AI-driven code assistance tools like GitHub Copilot suggested that the role of programmers might shift more towards oversight and less towards writing boilerplate code.

2020s

Advanced AI and Autonomous Systems

  • AI and Autonomous Systems (2020s): The continuing advancements in AI, including systems like OpenAI's Codex, have led to predictions that AI could handle many programming tasks autonomously, potentially making many programming roles obsolete or drastically changing their nature.
  • Low-Code/No-Code Evolution: The maturation of low-code and no-code platforms has continued to promise that business professionals can develop complex applications without traditional programming skills, leading to speculation about the reduction in demand for software engineers.

Common Themes and Observations

  • Shifting Roles: Throughout these decades, the common theme is that while the role of programmers might change due to new tools and technologies, the need for human oversight, creativity, and problem-solving has persisted.
  • Complexity of Real-World Problems: The complexity of real-world software engineering problems has often meant that complete automation has been impractical, and skilled programmers remain necessary to handle nuanced and intricate tasks.
  • Tool Improvement, Not Replacement: New tools and technologies have frequently been found to augment rather than replace programmers, making them more productive rather than redundant.

While many predictions have suggested a diminished role for programmers, the demand for skilled software engineers has remained robust, adapting to the evolving landscape of technology and tools.

What do others think?

There are folks on both sides of the fence here, but there is a video of a CS professor who reckons that we humans will just be checking the code created by LLMs. Oddly enough, if I recall correctly, he mentions that LLMs aren't concerned with abstraction, proper refactoring, and the like, but that we shouldn't care, since the code produced will get the job done. However, part of the point of abstraction and code reuse is to make code easier to understand. If a human has to check the code an LLM produces, that code had better be understandable.

What I think...

My experience with LLMs has been hit-and-miss. They are great for boilerplate, but I've had to fix the generated code more often than not. I recently asked ChatGPT and Mistral to help me with integration code for SharePoint, and both were quite adamant that it is possible to authenticate using an Azure App Registration client ID and secret. From what I can tell, one needs to use a certificate. I usually end up searching the Internet when I just cannot get an LLM to produce working code.

NB! Not a real doctor... But I'll still tell you what's wrong with you - Dr. Noob

Another factor is trust. Are we really going to trust what an LLM produces? If an AI can tell us to eat rocks because it trained on satire, how do we know it did not train on sub-standard or, even worse, malicious code?

There are just so many moving parts when it comes to software engineering that for an LLM to put together a coherent solution may be somewhat of a stretch.

How did we get here?

Looking back, the level of coding assistance has steadily escalated:
  • Having to remember method calls (names, parameters, and types).
  • IntelliSense showing method names, parameters, and types.
  • Coding assistants like ReSharper completing for loops and the like.
  • AI, whether in the IDE (Copilot) or via external LLMs, providing more elaborate solutions.

Coding in BASIC on the Commodore VIC-20 meant having no help whatsoever.

Some folks even reckon that AI is "Autocomplete on steroids".
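The quip is apt in one sense: traditional IDE completion is little more than prefix matching over a list of known identifiers, while an LLM predicts entire solutions. A toy sketch of the former (purely illustrative; the method names are hypothetical stand-ins, and real IDEs do far more):

```python
from bisect import bisect_left, bisect_right

def autocomplete(names: list[str], prefix: str) -> list[str]:
    """Return every known identifier starting with `prefix`.

    `names` must be sorted; binary search locates the range of
    entries that share the prefix.
    """
    lo = bisect_left(names, prefix)
    # "\uffff" sorts after any ordinary character, so this marks
    # the end of the block of strings beginning with `prefix`.
    hi = bisect_right(names, prefix + "\uffff")
    return names[lo:hi]

# Hypothetical method list, as an IDE might index it.
methods = sorted(["ReadAllText", "ReadAllLines", "ReadAllBytes", "WriteAllText"])
print(autocomplete(methods, "ReadAll"))
# → ['ReadAllBytes', 'ReadAllLines', 'ReadAllText']
```

The contrast with an LLM is the point: this lookup can only surface names it has indexed, whereas an LLM will happily synthesize code it has never seen, which is both its power and, as above, its risk.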

Let's just hope that not too many potential software engineers are dissuaded from entering the profession by the "end of programming".