The artificial intelligence chainsaw massacre

The Agile community is a diverse and colorful bunch. Some have the enthusiasm of fresh converts, and others are jaded and cynical from years of mistreatment by the business community. I land in the middle of that spectrum: committed to improving work culture, but aware that many mediocre people in leadership roles do not share my values. It is refreshing to hear how other coaches and scrum masters deal with these struggles. A fellow presenter at AgileIndy 2023, Thomas Meloche, posted a strong opinion about software development and generative AI. His opinion was as shocking as using vinegar for mouthwash, and I could not argue against it. This week, I want to discuss Meloche's ideas and why we should pay attention to them.
Generative Artificial Intelligence has been all over the news since the announcement of ChatGPT and Google's Bard product. Both products looked impressive. The CBS news magazine 60 Minutes aired a story so fawning that I half expected reporter Scott Pelley to give Google CEO Sundar Pichai a pedicure. The technology has progressed from impossible in the 1970s, to a challenging computer science problem in the 2010s, to the fulfillment of science fiction horror stories.
As consumers of this technology, we need to be cautious, set the hyperbole aside, and look at it with a critical eye. Artificial Intelligence does incredible things, but is it doing what we meant it to do? Meloche and I share a skeptical, pragmatic perspective.
The Midjourney tool can create fantastic graphical images, and I spend some time each week practicing prompts and learning new illustration styles. The Grammarly tool is worth its subscription price, helping make my spelling and grammar acceptable to everyday readers. I even use Google's Bard tool to clean up copy when I need help with awkward wording or active voice in my writing.
What I do not use generative Artificial Intelligence for is writing software code. I made this deliberate choice because the generated code may be good, but it only works in the context of the prompt I gave the large language model. If I ask it to write a bubble sort for an array, it will write a simple one, but it would be folly to drop that code into existing software. Why? Because Artificial Intelligence writes code in the context of the prompt rather than the context of the system, it is not solving the same problem the software developer is attempting to solve. An experienced developer must still determine how to fit the generated code into the system, and a developer who does not understand the fundamentals of development or the context in which the code operates is like a blind person attempting to fit a puzzle piece into place. A sketch of the mismatch follows.
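Here is a minimal sketch in Python of what I mean. The bubble_sort function is the kind of self-contained answer a large language model returns to the prompt "write a bubble sort for an array"; the Order class and its rush-order rule are hypothetical stand-ins for a system's context that the prompt never captured.

```python
from dataclasses import dataclass

def bubble_sort(arr):
    """Generic bubble sort, exactly as prompted: ascending, in place."""
    n = len(arr)
    for i in range(n):
        for j in range(0, n - i - 1):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
    return arr

@dataclass
class Order:
    """Hypothetical domain object the real system actually needs sorted."""
    order_id: int
    rush: bool

orders = [Order(3, False), Order(1, True), Order(2, False)]

# The generated code compares elements with ">", but Order defines no
# ordering, so dropping the snippet in as-is fails at runtime.
try:
    bubble_sort(orders)
except TypeError as err:
    print(f"Generated code fails in the system's context: {err}")

# A developer who understands both the snippet and the system must still
# supply the business rule themselves (rush orders first, then by id).
orders.sort(key=lambda o: (not o.rush, o.order_id))
print([o.order_id for o in orders])
```

The point is not that the bubble sort is wrong; it is that the model answered the prompt, not the problem.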
GitHub Copilot and similar Artificial Intelligence tools are remarkable. I am very impressed with some of the new features in Microsoft Visual Studio, such as its code suggestions. Still, telling an Artificial Intelligence to build working software remains a pipe dream because it requires specific instructions, which most business people cannot provide. The creative process of software development involves negotiation, creativity, the ability to understand the complex dynamics of a business, and emotional intelligence. Artificial Intelligence falls short of human beings in each of these areas.
That is why the fantasy of business leaders telling Artificial Intelligence to create software applications is so far from reality. Artificial Intelligence is good at keeping source control cleaner and getting unit tests written, but the difficult work of authoring production-ready software is still a human activity. In their rush to automate everything, ambitious business leaders trample on the messy reality of human collaboration and force-fit AI tools onto their organizations, potentially suffocating the organic development process. The result is a dangerous situation: we deliver work faster, but the work delivered has minimal value to the organization. Used incorrectly, Artificial Intelligence tools help a business lose its way sooner and drift further from its goals.
I liken the situation to loggers with axes and saws. Given enough time, a crew of lumberjacks will cut down many trees. Give those lumberjacks chainsaws, and they can cut down an entire forest. The productivity gains are tremendous, but the chainsaws must be regularly maintained and sharpened. As confined spaces fill with whirring chainsaws and crashing timber, the danger of injury skyrockets; every downed tree and every buzzing blade demands unwavering vigilance. Finally, using a chainsaw efficiently requires safety training and practice. Handing Artificial Intelligence tools to developers is like handing chainsaws to loggers.
I am not worried about Artificial Intelligence making office work obsolete; instead, I am concerned we are adding an unnecessary element of danger that will hurt individual careers and kill businesses. People drunk on entitlement and power are going to use these tools, and it is going to resemble a late 1970s horror movie. It is up to technology professionals, agile coaches, and business leaders to see this and take the necessary precautions. Otherwise, there will be plenty of blood to mop up over the next five years.
Until next time.
Scott Pelley is getting overwhelmed by technology.