
If LLMs Do the Easy Programming Tasks - How are Junior Developers Trained? What Have We Done?

In this podcast, Michael Stiefel spoke with Anthony Alford and Roland Meertens about the future of software development, and the training of new developers, in a world where Large Language Models contribute heavily to software development.

Key Takeaways

  • Large Language Models are beginning to have a strong impact on software development.
  • The exact long-term impact of Large Language Models on software development is unknown. Possible roles include coding suggestions, prototyping, DevOps prioritization, and code explanation.
  • The most immediate problem is knowing how to recognize when an LLM has generated an incorrect solution.
  • One of the long-term challenges is understanding the context in which an LLM operates, and understanding the code it generates when a human has to modify or debug it.
  • The greatest problem is that, as the history of technological development demonstrates, the true downsides of a technology become apparent only when unforeseen side effects emerge from its interaction with society.

Introduction [00:18]

Michael Stiefel: Welcome to the What Have I Done Podcast, where we ask ourselves: do we really want the technology future that we seem to be heading for? This podcast was inspired by that iconic moment at the end of the movie The Bridge on the River Kwai, where the British commander, Colonel Nicholson, realizes that his obsession with building a technologically superior bridge has aided the enemy and asks himself, "What have I done?" right before he dies, falling on the detonator that blows up the bridge.

For our first episode, I wanted to discuss the impact of large language models on software development. I have two guests, well known in the world of InfoQ, Anthony Alford and Roland Meertens. Both host the “Generally AI” Podcast for InfoQ.

Anthony is a director of development at Genesys, where he's working on several AI and ML projects related to customer experience. He has over 20 years of experience in designing and building scalable software. Anthony holds a PhD in electrical engineering with a specialization in intelligent robotic software, and has worked on various problems in the areas of human-AI interaction and predictive analytics for SaaS business optimization.

Roland is a tech lead at Wayve, a company building embodied AI for self-driving cars. Besides robotics, he has worked on safety for dating apps, transforming the exciting world of human love into computer-readable data. I can't wait until that and LLMs get together. He also bakes a pretty good pizza.

Welcome, both of you, to the podcast.

Anthony Alford: Thanks for having us.

Roland Meertens: Yes, thank you very much.

The Software Development Lifecycle of the Future [02:05]

Michael Stiefel: I would like to start out with the assumption that we live at some future time, probably not too far in the future, when the problems of using LLMs for writing code have largely been ironed out. The first question to ask both of you is: what does the software development life cycle look like in this world?

Roland Meertens: Well, what I assume is that bots will automatically find issues within software, they can automatically raise a PR, and then other bots automatically accept the improvements. Basically, none of the code is readable anymore, which is not that much different than today.

Anthony Alford: That's a very cynical take. But then, you've worked with robots a lot. My general principle here is that what actually happens will probably be surprising to a lot of people. I'm going to go with the safe bets, and with the concept of what robots are for: automating the tasks that people find dangerous, dirty, and dull. I've never really experienced a lot of danger or dirt in my software development career, but there's a lot of dullness. I'm going to go with the idea that the automation, the LLMs, will take care of the dull stuff. Like Roland said, that's definitely pull requests, and code reviews for sure. But also things like writing tests, writing documentation, and things that we find hard, like naming variables.

The idea is to free up the time for the human engineers to focus on the important things, like do we use spaces or tabs?

Michael Stiefel: In other words, coding standards.

Roland Meertens: Those are things we should ideally have automated already, decisions you would normally give to an intern. I think that's at least already something you give away.

Which of you two is using GitHub Copilot at the moment?

Anthony Alford: I've used it for fun side projects. It's not bad.

Roland Meertens: But you're not using it for your day-to-day work?

Legal or Regulatory Issues [04:24]

Anthony Alford: That's an interesting ... One of the premises of this episode is that all of the problems have been ironed out. One of the problems for us, professionally at our company, is we don't want to send data out into the potential training dataset. There's also concerns about the code that you get back. Who owns the copyright to that code, for example? No, we're not using Copilot at work.

Roland Meertens: But it's because of legal trouble and not because of technical capabilities?

Anthony Alford: More or less, yes.

Teaching Future Developers [04:57]

Michael Stiefel: Well, both of you have hit on the idea that we're going to use this technology to do all the dull stuff and the easy stuff. Isn't that traditionally where novice programmers get trained? So the question then becomes, if we've automated all the easy stuff, how are we going to train programmers? What is the programming class of the future going to look like?

Anthony Alford: The Copilot is actually, I think, a pretty useful paradigm, if we want to use that word.

Michael Stiefel: Do you want to explain to some of the listeners what the Copilot is? Because they may or may not know what it is.

Anthony Alford: Yes, it can mean a couple of things. The capital-letter Copilot is a product from GitHub that is a code-generating LLM. You type a comment that says, "This function does" whatever you want the function to do, and it will spit out the entire code for the function, maybe.

Michael Stiefel: In whichever language you tell it to.

Anthony Alford: Right, exactly. Copilot could also mean an LLM that's assisting you, and I think that might be a nice model for training novices. Maybe it's more of a real-time code reviewer; a real-time debugging assistant might be even better.
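As a rough illustration of the comment-driven flow Anthony describes, here is the kind of exchange a code-generating assistant enables. The developer writes only the comment; the body is the sort of thing the tool might propose (the function itself is invented for this example):

```python
# The developer writes only this comment; the assistant proposes the body.
# This function returns the n most common words in a text file.
def most_common_words(path: str, n: int = 10) -> list[tuple[str, int]]:
    from collections import Counter

    with open(path, encoding="utf-8") as f:
        words = f.read().lower().split()
    return Counter(words).most_common(n)
```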

The other way to look at it is maybe the LLMs save your senior programmers so much time that they'll have nothing else to do but mentor the younger ones. I don't know.

Michael Stiefel: Well, I think what you're hitting on is one of what I've always found to be the paradoxes of programming technology in general. Unlike other engineering disciplines ... For example, if you're a civil engineer, you can spend your entire life building the same bridge over and over again.

Anthony Alford: It's the Big Dig.

Michael Stiefel: Well, yes. For those of you who don't live in Boston, that was a very interesting civil engineering experience for many years, paid for with taxpayer money from throughout the United States. But in any case, there are very few projects like that in the engineering world, where you do something that has never been done before.

When I was a graduate student, I took nuclear engineering. On one of the final exams of the reactor design course, we were supposed to design the cooling system for a nuclear reactor. Not from physics first principles, but by taking the ASME standard and applying it to the design. Software's very different. In software, if you want another copy of something, we just copy the bits. We tend, in software, always to do new things that we haven't done before, because otherwise, why write a software program?

The question is how do you take an LLM, or any technology that is trained on the past, and apply it to a future that we don't necessarily know?

Roland Meertens: But aren't you also frequently doing the same thing, over and over again?

Michael Stiefel: Yes.

Roland Meertens: As humankind.

Michael Stiefel: Yes.

Roland Meertens: That's why Stack Overflow is so popular because everyone seems to be solving the same problems every day.

Michael Stiefel: Yes. But the question is ... Let's say, for example, I go back to the days, well, I don't want to say exactly how old I am, but I remember card readers, and even before virtual memory. Yes, there were repetitive things, but people forget things like compilers, linkers, loaders, debuggers. These were all invented at some point in time, and they were new technologies that required insight. Firewalls, load balancers. How does an LLM take all these things into consideration if it's never seen them before?

Roland Meertens: Yes. But also, I think we started this discussion with: how do you learn if you don't know about the past? In this case, I'm the youngest, being only 33 years old, and I unfortunately missed out on the days of punch card programming.

Michael Stiefel: You didn't miss much.

Roland Meertens: That's just what I'm asking: how often do you, in your daily work, think, "Oh yes, I remember this from punch card days. This is exactly what I need"?

Michael Stiefel: But the point is, who would have thought of a compiler? In other words, an LLM is not going to come up with something new. Is it going to look at data and say, "Ha, if we could do this, it would be great, and this is how we're going to do something that I've never seen before"?

What Will Programmers Actually Understand [09:39]

Roland Meertens: I'm mostly wondering what this means for future senior developers. These are the people who are beginning with programming today. I think the question is are they going to learn faster because they can focus on code 100% of the time, instead of having to go through many obscure online fora to find the API code they need? Or are they not going to build a thorough understanding of code and what the machine actually does, because they just ask ChatGPT to generate everything they do?

Anthony Alford: Yes. I was going to say, yes, it's true that sometimes software developers have to solve a problem that has not come up before, but really I think a more common use case is, like you were saying with the ASME standards, you're basically putting together pieces that you've already seen before. Maybe in a novel way, but quite often not really. "I need to make a REST API. I need something that stores ..." This is how frameworks like Rails and Django work. They know there are common patterns that you're going to reach for. I think the LLM is just the next iteration of that.

Uses of Large Language Models

LLMs Embody Software Patterns of the Future [10:56]

Michael Stiefel: So it's the next iteration: design patterns, architecture patterns, enterprise integration patterns.

LLMs as the DevOps First Responder [11:02]

Anthony Alford: Probably. The other thing is, like I said, the code itself is not the entire job. There's a lot of other stuff. Let's take the DevOps model. In my company, the software engineers who write code are on call for that code in production. What if we could make an LLM the first responder to the pager, and have it either automate a remedy, filter out noise, or pass the problem along when it's stuck? LLMs could do other things like helping you design your APIs, helping you generate test cases, maybe even debugging or optimizing your code.
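A minimal sketch of that first-responder idea, assuming a hypothetical llm_classify wrapper around whatever model API is in use; every name here is invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    service: str
    message: str

def llm_classify(alert: Alert) -> str:
    """Hypothetical LLM call that labels an alert as
    'noise', 'known_remedy', or 'escalate'."""
    raise NotImplementedError  # wire up your model provider here

def handle_page(alert: Alert) -> None:
    label = llm_classify(alert)
    if label == "noise":
        print(f"suppressed: {alert.message}")            # filter out noise
    elif label == "known_remedy":
        print(f"running runbook for {alert.service}")    # automate a remedy
    else:
        print(f"paging a human about {alert.service}")   # pass it along when stuck
```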

How Understandable Will LLM-Written Code Be? [11:46]

Again, I talked about automating the parts that are dull. Or maybe not necessarily dull, but maybe harder. I don't think we're going to see LLMs writing all the code. Maybe we will, but I think it's still very important. Like Roland said, we don't want code that's just completely LLM generated, because then nobody will know how it works.

LLMs as Code Explainers [12:06]

Michael Stiefel: Well, I think it's hard enough to sometimes figure out how the software we write manually works.

Anthony Alford: Exactly.

Michael Stiefel: In fact, I've come across code that I wrote maybe two or three years ago and look at, and say, "How does this work?"

Anthony Alford: Sure. That's what the LLM-

Michael Stiefel: Incidentally, sometimes I try to convince myself there's a bug, but then I realize I was right the first time. It's just that I forgot the intricacies of what was going on.

Roland Meertens: It is worse if there's a comment next to it which says, "TODO: fix this."

Anthony Alford: Maybe that's the way that LLMs can help us. The LLM can explain the code to you maybe. That would be a great start.

Michael Stiefel: Yes.

Anthony Alford: What does this code do?

Michael Stiefel: I like your idea about the pager. Having worn a pager for a small period of time in my life, anything that would prevent me from getting beeped in the middle of something else would be a great improvement.

LLMs and Refining Requirements [13:03]

We haven't quite figured out how you'd train the new programmers yet, because I really want to come back to that. But how do you describe requirements ... One of the toughest things to do in software development is figure out what the requirements are. I've worked for myself for many, many years, and I've said, over the years, there are only three things that I do in my life: inserting levels of indirection, trading off space and time, and trying to figure out what my customers really want.

Anthony Alford: I actually made a note of that as well. If we could get an LLM to translate what the product managers write into something that we can actually implement, I think that would be a huge ... I've had to do that myself. The product managers, let's assume they know what the customers want because they're supposed to. They know better than I do. But what they write, sometimes you have to go back and forth with them a couple of times. "What do you really mean? How can I put this in more technical terms?"

Michael Stiefel: Well, I think the point is they very often don't understand the technology, and neither do the customers. Because one of the things I find is that just because you can write a simple English language statement doesn't mean it's easy to implement. That would be interesting. How would you see that working with the LLM? In other words, the product manager says, I don't know, that we need something fast and responsive. How would the LLM get the product manager to explain what they really mean by that?

Roland Meertens: I think here, there are two possible options again. On the one hand, I think that sometimes thinking about a problem when manually programming gives you some insights, and that also goes for junior developers who need to learn how to code. It's often not the result that counts, but the process. It's not about auto-generating a guitar song, it's about slowly learning and understanding your instrument.

LLMs Generating Prototypes [15:10]

On the other hand, if you have a product manager who asks for a specific website, you can have ChatGPT generate five examples, and they can pinpoint one and say, "Yes, that's what I want." If you can auto-generate mock-ups or auto-generate some ideas, then maybe you get it right the first time, instead of first having to spend three sprints building the wrong product.

Michael Stiefel: An LLM ... another idea is to have it be an advanced prototyping tool.

Anthony Alford: Absolutely. I think we've all seen that demo where somebody drew a picture on a napkin of a website.

Michael Stiefel: Yes.

Roland Meertens: Right.

Anthony Alford: And they give it to the image understander.

Michael Stiefel: Interesting. There have been a lot of attempts at prototyping code generation; I'm sure you've all seen them in the past. There's a frustration with them, but maybe large language models can ... Again, how would you train such a prototyper? Would you put samples before it?

Because one of the things I think with machine learning in general is that it doesn't understand vagueness very well. In other words, you give a machine learning algorithm something, it comes up with an answer, but it doesn't come up with probabilities. When you're doing prototyping, you're combining probabilities and there's no certainty. How do you solve that kind of problem? If you understand what I'm trying to get at.

Roland Meertens: But isn't this the same problem as we used to have with search engines?

Michael Stiefel: Yes.

Roland Meertens: Where people would go to a search engine and they would type, "Please give me a great recipe for a cake." Now everybody knows to type, "Chocolate cake recipe 15 minutes," or something.

Michael Stiefel: Right, because we trained the humans that deal with the software.

Roland Meertens: Yes.

Michael Stiefel: But ideally, it really should be the software that can deal with the humans.

Roland Meertens: Yes. I think my father already stopped using search engines and is now only asking ChatGPT for answers, which I don't know how I feel about that.

Michael Stiefel: I asked ChatGPT to come up with a bio of myself and it came up with something that was 80% true, 20% complete fabrication, but I couldn't tell. I know because I knew my own bio, but reading it, you couldn't tell what was true and what was false.

Roland Meertens: But are you paying for GPT-4?

Michael Stiefel: This was on GPT-3, I think, when I was doing this.

Roland Meertens: Yes. I noticed that on GPT-3, it also knew my name. I assumed that it knows all of our names because of InfoQ. Then it generated something which, indeed, was 80% true and 20% made me look better than I actually am.

Michael Stiefel: Yes.

Roland Meertens: I was happy with that. GPT-4 actually seems to do some retrieval.

Anthony Alford: Isn't that a game, two truths and a lie, or something like that?

Michael Stiefel: Yes. Yes, yes, it is. But isn't that the worry about using large language models in general?

Anthony Alford: Yes, but I would submit that we already have that problem. The developers have been writing bugs for-

Michael Stiefel: Yes.

Anthony Alford: Ever since, I guess even Ada Lovelace maybe wrote a bug, I don't know.

Michael Stiefel: Well, supposedly the term bug came about because Grace Hopper found an insect in the hardware. But I think the saving grace, I would like to think, with software developers, unlike the general public, is they can recognize when the LLM has generated something that makes no sense. It gets caught in a test case. In other words, you could maybe have one LLM generate the test cases and another one generate the code, the way people sometimes like to have battling machine learning systems.
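A sketch of that battling-models idea, under the assumption of two hypothetical LLM wrappers, generate_tests and generate_code; code is accepted only once it passes the independently generated tests:

```python
import pathlib
import subprocess
import tempfile

def generate_tests(spec: str) -> str:
    raise NotImplementedError  # ask model A for pytest test cases

def generate_code(spec: str, feedback: str = "") -> str:
    raise NotImplementedError  # ask model B for an implementation

def accept(spec: str, max_rounds: int = 3) -> str | None:
    tests = generate_tests(spec)
    feedback = ""
    for _ in range(max_rounds):
        code = generate_code(spec, feedback)
        with tempfile.TemporaryDirectory() as d:
            pathlib.Path(d, "impl.py").write_text(code)
            pathlib.Path(d, "test_impl.py").write_text(tests)
            run = subprocess.run(["pytest", d], capture_output=True, text=True)
        if run.returncode == 0:
            return code           # survived the other model's test cases
        feedback = run.stdout     # feed the failures into the next attempt
    return None                   # still failing: a human has to look
```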

Anthony Alford: Yes, and maybe the way we'll wind up doing this is to invert it: the human is the code reviewer. Although I can't imagine that would ... I hope it doesn't come to that point. Nobody likes reading code.

Michael Stiefel: Oh, no. Yes. I've done code reviews because, again, being in this business for a long time, that was a big thing. At some point in time, people said code reviews were going to solve the software quality problem. To do a code review is really hard.

Anthony Alford: Yes.

Michael Stiefel: Because you have to understand the assumptions of the code, you have to spend time reading it. I would hope that the LLMs could do the code reviews.

Anthony Alford: Me, too.

Roland Meertens: Well, I mostly want to make sure that we keep things balanced, not that the product manager automatically generates the requirements, and then someone automatically generates the code. Then the reviewer, at the end, has to spot that the requirements were bad to begin with.

Michael Stiefel: Well, you see, you raise an interesting point here, because when we speak of requirements right now, even when you talk about the product manager having requirements, that's already a stage where the requirements are somewhat fleshed out. But if you've ever done real requirements analysis, and I have, you sit down with the client and they don't really know what they want, and you have to pull that out of them. There is an art form to asking open-ended questions. Because most of us, when we do requirements analysis, ask, "Do you want A or do you want B?" But you've already narrowed the field and given the person you're asking perhaps a false choice. You have to be able to deal with open-ended questions. Do you see LLMs being able to deal with open-ended questions, and then refine down as answers come?

Anthony Alford: I really like the idea of the LLM as a rapid prototyper and even a simulator. I've seen projects where people have an LLM pretend to be an API, for example. I have a feeling that would be quite handy.
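A sketch of that LLM-as-API idea, with a hypothetical llm_complete standing in for a real model call; the endpoint, domain, and names are all invented:

```python
import json

def llm_complete(prompt: str) -> str:
    raise NotImplementedError  # call your model provider here

def fake_endpoint(method: str, path: str, body: dict) -> dict:
    prompt = (
        "You are pretending to be a REST API for a bookstore. "
        f"Respond to {method} {path} with plausible JSON only.\n"
        f"Request body: {json.dumps(body)}"
    )
    return json.loads(llm_complete(prompt))

# A prototype UI could call fake_endpoint("GET", "/books/42", {}) and get
# plausible data back long before the real service is written.
```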

Michael Stiefel: In that case, you'd still be training the software developers the old way.

Let's go with that idea, for the moment. The LLMs are the rapid prototypers. They may or may not generate reusable code. Because one thing I found with prototypes is you have to be prepared to throw the whole thing away.

Anthony Alford: Yes.

Michael Stiefel: Because you very often get in trouble when you build a prototype and then you say, "Oh, I want to salvage this code." You have to be prepared to throw the whole thing away. So we use the LLM to come up with a rapid prototype. Then, what's the next step?

Also, I'm thinking, when I think of the next step, how do you do the ilities? The security, the scalability. Because it's one thing to design an algorithm, it's another thing to design an algorithm that can scale.

Anthony Alford: Yes. Well, on the security part, for example: at our company, our security team is already doing a lot of automated stuff. I think that's another great fit for an LLM. Maybe not an LLM, but automation for sure is something that security people use a lot. Maybe LLMs are good at reading things like logs.

Michael Stiefel: Yes.

Anthony Alford: To find an intrusion, for example. Anyway, that's the only part of that that I have an answer for. I don't know about scalability, other than maybe helping you generate load at scale somehow. Roland, you got an idea?

Roland Meertens: No, not really. In this case, I must also say that as a human, I don't always know how to do things either, except to go to InfoQ and see how other experts do them. I can only imagine that an LLM has already read all of the articles you guys wrote, so it can summarize them for me.

Michael Stiefel: Assuming I was right to begin with, in the article.

Anthony Alford: Yes.

Roland Meertens: Yes. Yes, but in that sense, those code generation requirements I think could be a good way to brainstorm. I think that something like ChatGPT can remind you to also think about the ilities.

Michael Stiefel: What I hear being developed is that the LLMs are essentially being used as idea generators and checkers to make sure the human has done their job: "Have you considered this? I've looked at the code." Yes, it may generate a lot of stupid things just looking at the code, but it will generate a checklist. "If you use this API, have you considered this? Should you use a Mutex here?" Or something like that. Is that where we're going with this?

What Could Go Wrong? [24:20]

Roland Meertens: Well, as this podcast is about What Have I Done, I think the dystopian thing I'm not looking forward to is that there will be a day when a junior developer adds me to a pull request, I argue that I am right and their AI-generated code is wrong, and then I learn that their ChatGPT-generated code was better to begin with and their AI-generated proposal is faster than my code.

Michael Stiefel: Okay, that's humiliating for us, but that's not necessarily a bad future. Going with that idea, what could go wrong? Let's say we were doing a pre-mortem. You're familiar with the idea of a pre-mortem. Something is successful and you ask yourself, "Well, what could go wrong?" What could go wrong here?

Anthony Alford: I think we've already touched on it. I think a big risk of having generated code like this is when something goes wrong, nobody has an idea of how to solve it.

Here's something, I have this idea about autonomous vehicles. Some of you who are experts in that area may fact-check me here. My suspicion is that if all vehicles were autonomous, overall traffic accidents would go down. But the ones that happened would be just absurd. It would be stuff that no human would ever do, like on The Office, driving into the lake or something.

Michael Stiefel: Right.

Anthony Alford: I suspect something similar would happen with-

Michael Stiefel: Well, yes.

Anthony Alford: Generated code.

Michael Stiefel: Let's take your analogy, because I think there's something very interesting about this. Before you get to fully ... I think with self-driving cars, the problem is the world getting to self-driving cars. When you have humans and self-driving cars on the road at the same time, you have a problem. I'll give you two examples.

One is there's something called the Pittsburgh Left. For those of us who drive in the United States, generally, if you're at an intersection, the cars going straight have the right-of-way over the cars that are turning. But in Pittsburgh, there's a local custom that those making the left turn have the right-of-way. The question is, if you have a car that was trained on data from some other city and it comes to Pittsburgh, what happens in that situation? Or take the situation in Sweden, where they went from driving on the left side of the road to the right side of the road overnight. Humans did wonderfully. I don't see how a self-driving car could do that.

Roland Meertens: I'm only hearing that we need to learn from situations as fast as possible, and that we need end-to-end learned driving, so you can actually capture, all at once, all the different areas.

Michael Stiefel: Yes. But also, I think the easy case is when everything is automated. As you say, Anthony, there is that case where it comes across something it didn't expect, like a chicken running across the road. But if everything's automated, then everything's predictable, because they all know what they should do. The problem is in the world where you're half automated and half not; that's where you get into trouble. I don't know if there's an analogy here, since you brought up self-driving cars, with using LLMs to generate code when the LLMs aren't always the ones generating the code?

Roland Meertens: Well, I still think the problem here is mostly with humans: thinking about the code and thinking about the problem gives you insights into what you actually want to achieve. Whereas if you automate everything, at the end you maybe have a website very quickly, but why did you make this website again? Were you just happy that ChatGPT generated it? That's at least one thing I noticed when using things like ChatGPT: at the start, I used it quite often to help me write a message to a friend, and I thought, "I don't want to lose the capability of writing messages to a friend." I think we all lost the capability of remembering 10-digit phone numbers because we just store them in our phones.

Michael Stiefel: Well, I still could do that but that's because I got trained with that a long time ago.

Roland Meertens: Yes. The younger folks, they don't know how to remember 10-digit numbers anymore.

Michael Stiefel: Well, it always amazes me at the checkout, when someone is at the checkout and I sometimes like to use cash instead of a credit card. I can compute the change in my head and the person at the other end is, "How'd you do that?"

Roland Meertens: Yes. Yes, maybe at some point, someone will say, "Wait, you can actually open Notepad and edit the HTML yourself."

Michael Stiefel: Well, that's Back to the Future.

Roland Meertens: Yes.

Anthony Alford: We're going to turn this into “The Kids Today”.

Michael Stiefel: Actually, you raise a very interesting point there because ... Let's go back to the point you were both talking about before, figuring out what ChatGPT has done. The question is how elegant will the code be ... Because if ChatGPT can be clever, you could have the equivalent of go-tos all over the place. It could produce spaghetti code that it understands, but then if you have to, as you say, open up Notepad and look at the HTML, you'll look at it and say, "What the hell is going on here?" Is that a danger?

Roland Meertens: Have you guys seen that DeepMind's AlphaDev found a faster sorting algorithm?

Michael Stiefel: No.

Anthony Alford: Yes, I did see that headline.

Roland Meertens: Yes. A while ago, they trained some kind of reinforcement learning agent to generate code, and I think their algorithm went from, I don't know, 33 instructions to 32 or something like that. It was a bit faster than the fastest human-written code.

Michael Stiefel: But the question is in sorting algorithms, because if I go back to the good old days, we had to choose them and write them ourselves. Sorting algorithms are not universally best in all cases. For example, Bubble Sort, if I remember right, is a very good sort except if the data is almost already in an ordered state. Do I recall that right? I don't know.

The question is, in the situation you just had where you came up with a faster sort, does the algorithm now know the cases to use this sort? Or is it just going to blindly use it every place?
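A minimal sketch of the case-dependence in question: a bubble sort with an early-exit flag finishes in a single O(n) pass over input that is already (or almost) in order, but does O(n^2) work on random input, so the choice depends on the shape of the data:

```python
def bubble_sort(items: list) -> list:
    items = list(items)
    for end in range(len(items) - 1, 0, -1):
        swapped = False
        for i in range(end):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
        if not swapped:   # no swaps in a full pass: the data is in order
            break
    return items
```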

Roland Meertens: Or maybe you can apply this algorithm to your specific deployment.

Michael Stiefel: Yes.

Roland Meertens: You just tell it, "Optimize my P99 latency." Then you don't know what's going on, what kind of A/B tests it sets up, but your P99 latency slowly goes down. You don't know if that's maybe because your AI starts rejecting customers from a certain country. I think that's the real danger: what are you optimizing, at the end of the day?

Michael Stiefel: So what you're saying is that in this world of LLMs, we have to log a lot of stuff.

Anthony Alford: Yes. Well actually, now that I've started thinking about it, if the LLM is going to be part of your software development pipeline, we're going to want to check the prompts into Git. You're going to want to commit your prompts to the source code repository because now, that's the source code.

Michael Stiefel: Right.

Anthony Alford: Maybe.

Michael Stiefel: So you have version control on the prompts, and you have ... Well, the question is then ... All right. Let me think about this for a moment. Because many, many years ago, I worked in the computer design world for the military. The military is one of the users of those applications. When they archived the designs, they archived the software that was used to create the design, so if they ever had to revise the design, they could bring back the exact software. Are you suggesting perhaps that not only do we archive the prompts, but we archive the LLM that was used with those prompts as well?

Anthony Alford: I think you should. It's almost like pip freezing your requirements for a Python environment. I don't know. It depends on the model we're using. If the LLM is just a copilot and it's helping you, that's basically the same as copying and pasting from Stack Overflow.
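A sketch of what committing prompts and pinning the model might look like, in the spirit of pip freeze; the file layout, lockfile fields, and model name are invented for illustration:

```python
import json
import pathlib

LOCKFILE = pathlib.Path("prompts/codegen.lock.json")
# Example contents of prompts/codegen.lock.json:
# {"model": "example-llm-2024-05-01", "temperature": 0,
#  "prompt_file": "prompts/codegen.txt"}

def load_pinned_prompt() -> tuple[str, dict]:
    lock = json.loads(LOCKFILE.read_text())
    prompt = pathlib.Path(lock["prompt_file"]).read_text()
    return prompt, lock  # same prompt text, same pinned model settings
```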

Michael Stiefel: Right, right. Because you have the responsibility, in the end, for what you cut and paste, or what you put in, or what was generated. The question then becomes, at some point, do LLMs become like compilers, which we just assume work?

I can remember, one time, actually finding a bug in a compiler: it put an instruction on a page boundary, and we took a page fault that actually caused the problem. But that's really sophisticated, and you have to understand what's going on behind the scenes. Are people going to understand?

Roland Meertens: I think you can build up this understanding faster. Personally, people are probably going to laugh, but I have no clue how to write SQL queries or work with large databases. I don't really know how to work with, I don't know, PySpark. But nowadays, all these tools have AI built in, so the only thing I do is say, "Fetch me this data from this table, and then do this with it, and select these specific things." The first day that you're doing this, you have no clue what you're doing, but you get some auto-generated code which does the thing for you. That's great, but then after a couple of weeks, you start seeing patterns and you actually start learning it. So for me it's more interactive learning, slowly learning an API through AI-generated commands. Whenever something crashes, you can actually ask it to fix it, which is insanely powerful.

Michael Stiefel: But you said something very interesting. One of the things you do learn when you've done SQL, and believe me, I've written my share of SQL in my life, is that for example, if you're doing certain types of queries, you may want to put indices on columns.

Anthony Alford: Hints.

Michael Stiefel: Or hints. Or you may want to normalize or denormalize things, for example, for performance. There are all kinds of things that you may not learn, or the tool may not know to do. Again, I guess what I'm trying to get at is there's always some sort of contextual or meta problem. What I'm afraid of is that in this new world where LLMs do a lot, people lose their knowledge of the meta problems. They lose their ability to make changes, they lose their ability to have long attention spans, or whatever it is, and they lose the context and begin to trust these things. Then we find ourselves in a situation that it's too late to get out of, except by ripping the whole thing up.
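A minimal sqlite3 illustration of the kind of meta-knowledge Michael means: the query returns the same rows with or without the index, but the plan changes from a full table scan to an index search (table and data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                 [(i % 1000, i * 1.5) for i in range(10_000)])

query = "SELECT * FROM orders WHERE customer_id = ?"
print(conn.execute(f"EXPLAIN QUERY PLAN {query}", (42,)).fetchall())  # SCAN orders

conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
print(conn.execute(f"EXPLAIN QUERY PLAN {query}", (42,)).fetchall())  # SEARCH ... USING INDEX
```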

Anthony Alford: I don't know if we'll get to that point. But I do think that ... As someone with teenage children, I can see the other side. There's a reason we're not still writing machine code, most of us.

Michael Stiefel: Yes.

Anthony Alford: Some of us do. Very few of us, I imagine. But I'm sure that, when the compilers came along, everybody was saying, "These kids today don't know how to write machine code."

Michael Stiefel: Yes, they did. There were some, they did say it. They did say, "They don't know how to write assembly."

Anthony Alford: But I did learn it, and that wasn't so long ago, I hope. But anyway, where I was going is: I like what Roland said. If we can use these things as tools to help us learn things and help increase our productivity, I think that is a good future. Sure, you will lose a few skills, but sometimes ... Really, in the grand scheme of things, is writing machine code a skill that people still need?

Michael Stiefel: No, probably-

Anthony Alford: People still can write and writing's been around a long time.

Michael Stiefel: I know that when phones first came out, the ability to write machine language code was very important. That skill had to come back because you didn't have virtual memory. You had to worry about memory mapping and things like that, because again, this goes back to the whole context.

Anthony Alford: My wife was still writing assembler in the 21st Century for embedded software.

Michael Stiefel: Yes.

Anthony Alford: For sure.

Michael Stiefel: Again, this comes back, I guess, to the point of context, and knowing where the tool works and where the tool doesn't work. I'm afraid that that would get lost in such a world, where people don't know. I guess, as Donald Rumsfeld said, "It's the unknown unknowns that get you." Or the limits of the tools that get you. The more you automate, the more you run that risk. Where's the happy medium?

Because again, economic efficiency is going to drive us. The most expensive thing in a programming project probably is the cost of the programmer.

Roland Meertens: Yes. Or alternatively, the highest cost is the very large SQL query this developer wrote, who had no clue how to use indices.

Michael Stiefel: Right.

Anthony Alford: I would say that's an R&D cost. What's the cost of an outage, a multi-hour, multi-day outage of your software? It's true that there are always going to be companies that are foolishly shortsighted in some ways, but the ones that survive, we hope, are the ones that are not.

Michael Stiefel: But the question is, what is the path to getting them to survive? How much suffering is there in the process of that evolution? Dinosaurs disappeared in the process of evolution. They didn't make it, but that took a long time. The question then becomes how do we know what we don't know? Because on economic efficiency, I'm convinced, because I've seen this happen over and over again, that most managers think programmers are replaceable. Interchangeable, especially in larger organizations.

Anthony Alford: Fungible.

Michael Stiefel: Fungible, okay. That is going to provide an economic incentive for people to get rid of programmers and to use technologies. I remember years ago, before even LLMs came out, people were saying automatic generation of code is around the corner.

Anthony Alford: Yes, they've been saying that for a while.

Michael Stiefel: Yes. But there was a strong incentive for managers to believe this, because programmers are pains in the neck, they cost money, they say no. So get rid of them. I'm playing Devil's Advocate here, to some extent, but that is going to be a big push, I think, for having LLMs. Or for startups that can't afford programmers.

Roland Meertens: But then you will lose a lot of the tribal knowledge in companies.

Michael Stiefel: Yes, you do. But are they going to care? The more I think about this, the more I realize this world that we move to is a world that ... A lot of technology changes. For example, take phones. No one thought about the attention span problem. No one thought of TikTok. No one thought of spyware. No one thought of all the privacy problems.

Roland Meertens: I agree. In that sense, the difference between a good senior developer and a bad senior developer, I think, will be how much restraint they are able to exercise: automatically accepting every LLM-generated proposal, versus taking a minute to think through what was actually generated.

Also, reading is going to become a way, way more important skill: reading code and quickly being able to understand what's happening, and either accepting or rejecting it.

Michael Stiefel: Well, I think you're right. One of the things that I learned very early in my programming career is that code is read far more often than it's written. Code should be written from the point of view of the reader, the potential reader, as opposed to the writer. In other words, Donald Knuth, I'm sure you both know that name, wrote a book called Literate Programming. His idea was you program in an explanatory way.

Roland Meertens: Yes, but how often do you take a moment to read some nice code? I always think it's weird that if you are a science fiction writer and people ask you, "What is the last book you read?" and you say, "Oh, I never read books," people will be like, "Oh, how can he be a good writer?" But if you ask programmers, "What's the last code you read?" people are like, "Oh, I'm not going to read code for fun." Even though that's a skill you need to have.

Michael Stiefel: I used to read code all the time because I had to debug other people's code. I read code so that I could understand my peers. I probably learned by reading code from my betters when I got started. Perhaps that's a skill that's going to have to come back in the future world.

Can We, or Should We Stop This Future? [43:18]

I'd like to sum up at this point and ask both of you: based on our conversation and any thoughts you have, what does this world look like, and do we really, really want to live in it? Because now's the time to say, "Stop." Now, can we really say stop? Is it like the atomic bomb, which was going to be developed by somebody, after which everybody had to have it? Or is there a realistic chance that we can say this is not a good idea, or that it should be restricted to a certain area?

Anthony Alford: I was going to say if anybody can stop it, it would be the lawyers, but I don't know if they will.

Michael Stiefel: They're already starting. Well, for example, did you know the story about the Air Canada bot?

Anthony Alford: Yes.

Roland Meertens: I know the story about the Air Canada bot, yes. It promised things it couldn't promise.

Michael Stiefel: Yes. But the thing is that Air Canada tried to say, "No, no, no, this is a platform. We're not responsible." The judge said, "No, you're responsible."

Most of them, I think, are copyright lawsuits right now, but eventually there will be more. That's one thing that's certainly a possibility.

Anthony Alford: If I were going to say here's how we could turn this future into a bright future, I'd say let the machines write the code, but let's write really good tests. That's what I tell my development teams today already. Let's make sure we have really good tests. If we have really good tests, if we run tests all the time, we run tests for scale, for security and all that, fantastic. We'll keep an old timer around who can go in and debug the code, or look at the code and make the tweaks. But other than that, let's let 'er rip.

Michael Stiefel: What happens when the old timer retires? How do you get the next old timer?

Anthony Alford: There's always somebody who's an old timer at heart.

Michael Stiefel: What you're suggesting, if I understand you correctly, is a division of labor.

Anthony Alford: Yes.

Michael Stiefel: That also means that the LLMs have to learn to write testable code.

Anthony Alford: Well, that's the key, isn't it? That's all of us.

Michael Stiefel: Yes, but that's the point. In other words, for your division of labor to work, the LLM has to generate code that can be unit tested, scenario tested, and use-case tested. It has to know how to do that.

Anthony Alford: The way we do testing is we use our API. We write tests that call our API, so we test it end to end. Unit tests for sure, but end-to-end tests are the truth.
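A sketch of that end-to-end style, assuming a hypothetical service; the base URL, endpoint, and payload are invented:

```python
import requests

BASE_URL = "https://api.example.com"

def test_create_and_fetch_order():
    created = requests.post(f"{BASE_URL}/orders", json={"sku": "ABC-1", "qty": 2})
    assert created.status_code == 201
    order_id = created.json()["id"]

    fetched = requests.get(f"{BASE_URL}/orders/{order_id}")
    assert fetched.status_code == 200
    assert fetched.json()["qty"] == 2   # the truth is what the API returns
```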

Michael Stiefel: But you also have to test the user interface as well.

Anthony Alford: Right. That's a great human skill.

Michael Stiefel: Yes.

Roland Meertens: I think for me, as someone who has been using Copilot ever since it was in beta, I learned a lot of new programming tricks from having it automatically inject code into my code. I have therefore explored APIs which I would normally never have explored, and found new tricks which do things faster. If I had had to think about them from scratch, I would not have implemented them this way. I found I became a better programmer with the help of LLMs. But people should show restraint. Don't just accept every suggestion.

Michael Stiefel: Right.

Roland Meertens: Keep thinking about what you actually want to achieve. I think this restraint, for me, is the hardest part. Sometimes when I get tired, I notice myself accepting everything which gets proposed, and that's the moment where I start writing extremely bad code, very bad prototypes; nothing makes sense anymore, it's not maintainable anymore. You have to be at least a certain amount awake to use it responsibly. It's like a lot of things which are addictive: you've got to keep using it responsibly.

Michael Stiefel: I have two more questions before we wrap up. One is, again, how do you train the new developers? You talk about exercising restraint. You're coming from a world where you wrote code and you know what restraint is. How do you teach the next generation of programmers to fear, to respect, whatever adjective or verb you want to use, to know how to treat the technology?

Roland Meertens: You keep pressing the needs work button on GitHub until they finally get it right.

Michael Stiefel: Anthony?

Anthony Alford: Yes, one of those, maybe ... I don't have the answer; I should have the answer. I think it's experience. We could always copy and paste from Stack Overflow; how do we know whether we should or not? Some of it's experience. I think with the younger generation, sometimes you throw them in the deep end.

Michael Stiefel: Yes.

Anthony Alford: Of course, you mentor them, you don't let them drown.

Michael Stiefel: But they come close to drowning.

Anthony Alford: Really, how people learn is by making mistakes, so we've got to give people an environment where they can make mistakes, where a mistake is not catastrophic. And, I was going to say, cross our fingers.

Michael Stiefel: Well, yes. There's always a certain amount of crossing our fingers with software development.

Just to ask you the last question. To think about Colonel Nicholson, what would cause you to say, "What have I done with this technology?" What is it that you would fear would cause you to ask yourself, "What have we done with this world?"

Roland Meertens: I think for me, there are a lot of things which used to be fun because there was a certain challenge to them, from generating simple websites about something silly to building cool tech prototypes. A lot of those things which would previously be fun weekend projects, you can nowadays generate with ChatGPT in five minutes, and that takes all the fun out of it. As long as I can keep doing things manually, I am extremely happy. But just knowing that something can be automated sometimes takes the fun out of it, unfortunately.

Anthony Alford: I think it's losing sight of the fact that we need to stay in control, that we need to exercise restraint. We need to not let the robots take over. When it comes down to it, that's the fear we all have: that the robots take over. I don't think that's going to end life, but it could be very expensive for a company if something bad happened. That's where I would see "What have I done?" I'm not a CTO, but if I were a CTO and we implemented this and it ruined the company: "Oops."

Michael Stiefel: Well, thank you very much, both of you. I found this very interesting, and hopefully our listeners will learn something, and hopefully we'll have a chance to make the world a little better place, and get people to think about stuff.

Anthony Alford: Yes, it was a lot of fun. Thanks for having me.

Roland Meertens: Thanks for having us.
