With insights from David Elliman, Global Chief of Software Engineering (david.elliman@zuhlke.com), Sebastian Schweitzer, Lead Data Consultant (sebastian.schweitzer@zuehlke.com), and Moritz Gruber, Lead Data Consultant (moritz.gruber@zuhlke.com).

Once upon a time, the sci-fi future of robotic artificial intelligence looked poised to make back-breaking manual labour a thing of the past. It seems ironic, then, that while Boston Dynamics’ best efforts content themselves with parkour gymnastics and moving uniform boxes from shelf to shelf, large language models and generative AI now appear to pose a more serious threat to creative, technical, and clerical jobs instead.

Or do they? Will AI automate software engineers out of a job, for instance? Or will there always need to be a human layer between the artificial intelligence and the end product?

Fresh from talks on exactly that subject at Microsoft Ignite Switzerland, we’ve been talking with Zühlke AI and data experts Moritz Gruber, David Elliman, and Sebastian Schweitzer to find out…

Capturing our imagination

‘AI has always had this ability to capture the public imagination – ever since the term was coined in the 1950s’, says David. ‘People’s expectations of AI were already high. Then ChatGPT comes and blows the whole thing up. I mean, massively blows it up. Now expectations are through the roof. Especially when it comes to technology buyers – like your average CIO and CTO. Their expectations of the technology are sky high.’

But, he stresses, people’s expectations of AI often outstrip its actual capability, and its implementation within software engineering environments has been pretty limited to date. He’s of the opinion that software engineering jobs are pretty safe from encroaching technology, for now at least. But he does concede that things have come a long way since he first started working alongside early AI frameworks in the 1980s.
In its current state, AI is being used in software development as a code completion tool, but it struggles with tasks beyond simple use cases. ‘The focus has been on the implementation part of the process’, Moritz adds. ‘So that’s the pure code-writing part.’

A blunt instrument, then. Even if blunt instruments can still crack nuts pretty effectively. But Moritz is convinced that we’ve merely scratched the surface of AI’s potential applications. Longer term, GenAI could transform digital product design and development. Moritz gives the examples of using GenAI to automate parts of user research and app design, to generate and review technical architecture, and to recommend new features based on operational metrics and user feedback.

But before people start revising their CVs, it’s important to understand that AI has inherent limitations – innate bottlenecks that make replacing developers wholesale pretty tricky. So let’s explore what generative AI adds to the engineering mix, and which limitations are likely to remain…

A two-part problem

While AI can generate code, it lacks any true understanding and – crucially – real domain expertise. In other words, it requires human guidance, correction, and knowledge to handle any non-trivial task effectively. As David explains, this is a two-part problem:

‘The first part is that AI is not very good at anything past simple, narrowly scoped code generation jobs. And if it doesn't give you what you want because its training corpus didn't encompass anything much past that point, then you start running into problems quickly.

‘And the second part is, how on earth are people supposed to diagnose and fix things unless they do it from first principles themselves? You need that human layer that can tell what's wrong – and the only way to solve those problems is to know what you're doing in the first place, either in terms of correcting it yourself or in terms of being able to re-prompt it.’
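To make that ‘human layer’ point concrete, here’s a hypothetical sketch (our illustration, not an example from the interview): an AI assistant can suggest code that looks plausible and works for the common case, but mishandles an edge case – and it takes a developer who understands the requirements to spot the gap and decide how it should behave.

```python
def average_suggested(values):
    """Plausible AI-style suggestion: correct for non-empty lists,
    but raises ZeroDivisionError on an empty list."""
    return sum(values) / len(values)


def average_reviewed(values):
    """Human-reviewed version: a developer who knows the domain
    decides how the edge case should behave (here: return 0.0)."""
    if not values:
        return 0.0
    return sum(values) / len(values)


print(average_reviewed([2, 4, 6]))  # 4.0
print(average_reviewed([]))         # 0.0
```

The fix itself is trivial – the point is that only someone who understands what the code is *for* can say whether an empty input should return 0.0, raise an error, or never occur at all. That judgment is exactly what David calls working ‘from first principles’.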
So we’re hitting a couple of big limitations here: training data, and human-driven quality assurance. With the former, AI has to overcome the hurdle that its training data may be insufficient for handling complex software systems and architectures. With the latter, there’s a need for people to be able to fact-check and fix.

An analogy here might be building a house. Getting a robot to build it for you might seem tempting. But only a builder would be able to say whether the resulting house had been built properly, or explain why the roof has fallen off. Or, as David puts it: ‘You need to understand it well enough to say, no, I didn't mean this.’

Because of those limitations, both David and Moritz think of AI as an optimiser rather than a replacer – a tool that experienced developers can use to enhance their productivity. That means developers with domain knowledge and the ability to define requirements, handle complexity, and ensure quality will still be needed to provide that human oversight.

At the same time, we know that things are always progressing. So will that sentiment hold up in five, ten, or 20 years’ time?

‘You can never say with certainty quite how it's going to develop’, David says, ‘or how these statements could be proved wrong at some point very soon.

‘I do think we'll see an increase in the capability of the model types themselves in terms of what they can do. And I think we will train them more effectively. But I don't think we will get to the point where human developers won't be needed, because the direction it's going is still based on human input.’

A costly problem to unpick

If AI isn’t going to take jobs, then, it raises an obvious question: what will working alongside AI entail? And how will that relationship affect roles and responsibilities?
In simple terms, people will become more multi-skilled. Roles like quality assurance and testing will become more important as AI increases the need to validate any code that gets generated. Meanwhile, domain experts who also understand business logic will be highly valued.

This is likely to result in some roles dovetailing into one another. Moritz explains: ‘If you look at key disciplines like customer experience and user experience, we envisage a future where it makes sense for individual sub-disciplines to merge.’

Future-proof tips for software leaders

For agile teams and practitioners:

- You will be outpaced by peers who use and understand generative AI, so experiment with new tools and learn to nail your prompts
- Beginner-level tech stack expertise will become a commodity in the long term; stand out now by expanding your domain knowledge
- Generative AI will blur boundaries between disciplines, so it’ll pay to expand your discipline footprint

For technical leads and decision makers:

- Generative AI increases productivity competition, so accelerate tool adoption across your company now
- AI introduces new risks and opportunities, so consider introducing an AI toolsmith to your team
- In the long term, GenAI will cause disciplines to converge and overlap. In the near term, it will be a great accelerator within respective disciplines when combined with human oversight. You’ll want to hire domain expertise that spans disciplines and departments

At the bottom end, both David and Moritz concede that there might be a threshold below which the least complex work (the simplest stuff that’s currently handled by humans) is automated away. But, as David explains, the all-important human layer will still need to exist to unravel sticky problems:

‘If you're generating code and you get to a point of over-complexity with AI tools, you’ll soon arrive at a piece of code that you don't understand, and things happen that are unexpected.
And if you can’t track that problem down, it becomes a costly problem to unpick.

‘So we don't want to build ourselves into a position where we don't actually understand what’s gone into the code. And while you could argue that we’ll just also build ourselves more intelligent diagnostic tools, in that scenario you suddenly end up with too much system interdependency.’

Ultimately, then, humans will always be needed on hand – specifically, humans who understand what AI is up to, or at least know how to fix its faults. That positions AI as a powerful tool to augment and optimise software development workflows, but not as an outright replacement for human software engineers.

And it leaves today’s software engineers with a clear message: keep up with AI software development tools as they evolve, hone your prompts, and start getting curious about the disciplines to your left and right.